Container builds
us-east-1 - Operational
eu-central-1 - Operational
us-west-2 - Operational
GitHub Actions
Depot-managed Actions Runners - Operational
Github.com - Actions - Operational
API - Operational
Website - Operational
This incident has been resolved; CLI downloads are functional again.
We have deployed a workaround to bypass the GitHub outage / rate limit and are monitoring.
Due to a GitHub outage, the depot CLI cannot be downloaded from GitHub releases. This blocks depot/setup-action from downloading the CLI in Actions jobs, whether they are Depot-hosted or GitHub-hosted.
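As background on the workaround above: a CLI setup step can fall back to a secondary download source when GitHub releases are unreachable or rate limited. The sketch below illustrates the idea; the URLs and asset name are placeholders, not the actual mechanism used by depot/setup-action.

```python
import urllib.error
import urllib.request

# Placeholder URLs for illustration only; depot/setup-action resolves
# its own download sources and asset names.
PRIMARY = "https://github.com/depot/cli/releases/latest/download/depot.tar.gz"
FALLBACK = "https://mirror.example.com/depot/depot.tar.gz"

def download_cli(dest: str) -> None:
    """Try the GitHub release first; fall back to a secondary source
    when GitHub is unavailable or rate limiting."""
    last_error: Exception | None = None
    for url in (PRIMARY, FALLBACK):
        try:
            urllib.request.urlretrieve(url, dest)
            return
        except (urllib.error.HTTPError, urllib.error.URLError) as err:
            last_error = err  # outage or rate-limit failure: try the next source
    raise RuntimeError(f"all download sources failed: {last_error}")

if __name__ == "__main__":
    download_cli("/tmp/depot.tar.gz")
```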
All jobs are processing normally, we are no longer seeing GitHub errors.
We are seeing a reduction of errors from GitHub and an increase in successful jobs processing. We will continue to monitor.
While some jobs are processing, many are still failing. Upstream GitHub appears to still be returning errors from their API. We are continuing to monitor.
This appears to have been caused by a GitHub rate limit reached during the outage. We are beginning to see jobs processing again and are monitoring progress.
GitHub has resolved their incident. Jobs running on Depot never had above-normal queue times, indicating that the outage impacted systems Depot is no longer reliant on.
We are monitoring an incident with GitHub Actions: https://www.githubstatus.com/incidents/cbdzqm5fw0fm. Currently, we are seeing minimal disruption to queue times for jobs running on Depot, but will continue to monitor for any impact.
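For background on the rate-limit recovery described in this incident, the sketch below shows how a client can read GitHub's documented X-RateLimit-Remaining and X-RateLimit-Reset response headers and wait for the reset before retrying. It is an illustration only, not Depot's actual scheduler logic.

```python
import time
import urllib.error
import urllib.request

def github_get_with_backoff(url: str, token: str | None = None) -> bytes:
    """GET a GitHub API URL, sleeping until the advertised rate-limit
    reset when the remaining quota is exhausted (HTTP 403/429)."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    while True:
        req = urllib.request.Request(url, headers=headers)
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            remaining = err.headers.get("X-RateLimit-Remaining")
            reset = err.headers.get("X-RateLimit-Reset")
            if err.code in (403, 429) and remaining == "0" and reset:
                # Wait for the rate-limit window to reset, then retry.
                time.sleep(max(0, int(reset) - int(time.time())) + 1)
                continue
            raise

# Example: github_get_with_backoff("https://api.github.com/rate_limit")
```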
We are seeing queue times resolved and normal processing again.
We are currently investigating a subset of GitHub Actions jobs that are queueing for longer than normal.
We are seeing queue times decreasing to normal levels. We are monitoring.
We are currently investigating alerts that Ubuntu 22.04 jobs have increased queue time.
The stuck jobs have all processed successfully.
We are currently observing a group of GitHub jobs that have not yet started processing on runners. New jobs seem to be unaffected. We are investigating.
We are investigating longer queue times for Windows GitHub Actions jobs.
We are observing queue times returning to normal. We will continue to monitor.
We are continuing to see recovery and are monitoring the remaining outliers.
We have deployed a fix and are monitoring for queue time recovery.
We've discovered a failure in our task scheduler that is blocking operations. We're deploying a fix that should unblock jobs.
We are currently investigating possible delays in Actions jobs starting.
Queue times have returned to normal.
Build service is restored. The underlying cause was a certificate and system-time issue in the macOS VM image itself. We have deployed a working image to all macOS hosts.
We are currently deploying a fix to all macOS hosts; build service is beginning to be restored.
We have identified the potential root cause and are working on rolling a fix out across the fleet of compute for macOS runners.
We are currently investigating an issue with macOS runners for GitHub Actions not launching.
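For background on the certificate and system-time cause noted in the resolution above: TLS verification compares the host clock against a certificate's validity window, so a VM image with a badly skewed clock rejects otherwise-valid certificates. The sketch below illustrates that comparison; on a host with a skewed clock the handshake itself would already fail verification.

```python
import socket
import ssl
import time

def clock_inside_cert_window(host: str, port: int = 443) -> bool:
    """Fetch a server certificate and test whether the local clock falls
    inside its validity window. TLS verification performs the same check,
    so an image with a skewed system clock rejects otherwise-valid certs."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return not_before <= time.time() <= not_after

if __name__ == "__main__":
    print("clock inside certificate validity window:",
          clock_inside_cert_window("github.com"))
```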
GitHub has marked their incident as resolved.
We are currently monitoring the ongoing GitHub outage that is impacting GitHub Actions and other services: https://www.githubstatus.com/incidents/lb0d8kp99f2v
Mar 2025 to May 2025