Disruption with GitHub's code search
- Started: 2026-04-01 15:02 UTC
- Resolved: 2026-04-01 23:45 UTC
- Duration: 523 minutes
- Date: 2026-04-01
Incident Timeline
We are investigating reports of impacted performance for some GitHub services.
We identified an issue in our ingestion pipeline that degraded the freshness of Code Search results. While we were fixing the ingestion pipeline issue, a deployment caused a loss of dynamic configuration, which is causing most requests for Code Search results to fail. We are working to restore the service and to re-ingest the misaligned data.
We are observing some recovery for Code Search queries, but customers should be aware that the data being served may be stale, especially for changes that took place after 07:00 UTC today (1 April 2026). We are still working on recovering our ingestion pipeline and synchronizing the indexed data.

We will update again within 2 hours.
We are still working to recover to a serviceable state and expect to have a more substantial update within another two hours.
We have stabilized the Code Search infrastructure and are in the final stages of validation before slowly reintroducing production traffic.
Code Search has recovered and is serving production traffic.
On April 1st, 2026, between 14:40 and 17:00 UTC, the GitHub Code Search service had an outage that left users unable to perform searches.

The issue was initially caused by an upgrade to the code search Kafka cluster's ZooKeeper instances, which caused a loss of quorum. This resulted in application-level data inconsistencies that required the index to be reset to a point in time before the loss of quorum occurred. Meanwhile, an accidental deploy caused query services to lose their shard-to-host mappings, which are normally propagated via Kafka.

We remediated the problem by performing rolling restarts in the Kafka cluster, allowing quorum to be reestablished. From there we were able to reset our index to a point in time before the inconsistencies occurred.

The team is working on ways to improve our time to respond to and mitigate Kafka-related issues in the future.
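The report does not describe GitHub's internal tooling, but the quorum condition at the heart of this incident is simple to state: a ZooKeeper ensemble stays available only while a strict majority of its members are up and exactly one of them is the leader. A minimal sketch (the function name and state labels are illustrative, not GitHub's code):

```python
def has_quorum(states):
    """Return True if an ensemble retains quorum.

    states: one entry per ensemble member, e.g. "leader", "follower",
    or "down". Quorum requires a strict majority of members alive and
    exactly one elected leader.
    """
    alive = [s for s in states if s in ("leader", "follower")]
    leaders = sum(1 for s in states if s == "leader")
    return leaders == 1 and len(alive) > len(states) // 2


# A rolling restart takes down one node at a time, so a five-node
# ensemble keeps its majority throughout:
print(has_quorum(["leader", "follower", "follower", "follower", "down"]))  # True

# Losing too many members at once (as during the upgrade here) drops
# below a majority, and writes stall until quorum is reestablished:
print(has_quorum(["down", "follower", "follower", "down", "down"]))  # False
```

This is why the rolling restarts described above restore service: each restart removes only one member at a time, so the majority condition holds continuously while every node is cycled.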