Third Party Service Provider
We are seeing increased errors across multiple Fastly services that utilize a common third party service provider, unrelated to Fastly's Edge Cloud Network.
Fastly has launched acute incident response practices to investigate this issue further in an effort to reduce the impact to our customers.
As our engineers determine the scope of impact, our Customer Escalation Management team will update the impacted components on this status post.
We're aware of an ongoing Google Cloud incident, detailed on their status page, which is affecting several Fastly services. Customers may be experiencing increased latency due to the impact on our KV Store.
Specifically, those trying to access their control plane via manage.fastly.com might notice that several key Observability features are not loading, including Billing, Historical Stats, Log Explorer & Insights, Origin Inspector, Real-Time Log Streaming, and Real-time Analytics. You may also find it difficult to engage with API endpoints. Additionally, customers might see elevated errors when accessing Fastly webpages and documentation resources.
Our engineers are working to restore these services as a high priority, and we will provide more information shortly.
You can monitor the Google Cloud incident here:
We've confirmed that customers won't receive status post notifications through their Support Slack channels during this incident.
However, we want to assure you that your ability to request a support case through our Support Slack services remains unaffected. You can still open new support cases as needed.
Google Cloud has communicated that they have successfully deployed mitigations to the majority of their services. We are now observing a gradual recovery of Fastly services.
We will continue to monitor this situation with the highest priority until all Fastly and customer services are fully restored.
We are continuing to monitor the Google Cloud Status Page for the latest information from the third-party service provider.
Incident Update: Full Recovery and Root Cause Identified
Current Status: Resolved
We're confirming that all services impacted during yesterday's incident have fully recovered. Our teams continuously monitored the situation and verified the stability of all affected systems.
Root Cause Analysis
Our investigation has confirmed that the increased errors and latency across our KV Store, control plane, and certain Observability features were a direct result of the Google Cloud disruption on the 12th of June 2025.
Google has publicly shared a post-mortem regarding the incident. It includes specific information about the root cause—an invalid automated quota update to their API management system—along with their mitigation strategies and steps they're taking for future prevention.
Next Steps
For a comprehensive understanding of the incident's origin, mitigation, and Google's preventative measures, we encourage customers to review the official Google Cloud Status Page. This provides a full breakdown from their perspective.
Additionally, our Product and Engineering teams will thoroughly review all Google Cloud Platform (GCP) reports to determine mitigation strategies and adopt them as part of our long-term preventive measures, so that our systems are more resilient to third-party outages of this kind in the future.
We appreciate your patience and understanding as we worked through this third-party event. Our focus remains on providing reliable and high-performance services.
We're investigating a possible performance impact affecting the Support Chat System related to a vendor-reported incident.
- Vendor status page: https://slack-status.com/2025-05/7b32241eb41a54aa
Customers may experience failures when attempting to generate a support case from Support Chat systems. We ask that customers email our Support team at support@fastly.com to avoid delays.
This event has been resolved.
To offer feedback on our status page, click "Give Feedback"
Status Post, Created Date/Time: 2025-05-12 22:52:12 UTC
Note: Our Customer Escalation Management team will update the start date and time of the initial "investigating" status post upon the resolution of this incident. This update is meant to provide our customers and their end users with a potential impact window. The date and time mentioned in the message above indicates when the status post was requested by our Acute Incident Response team.
We are investigating elevated errors to our Johannesburg (JNB) Point of Presence (POP).
Traffic engineering has been performed in the region to minimize possible impact while we continue to investigate this incident.
Our engineers have identified a power outage as the cause of the impact observed in our Johannesburg (JNB) POP.
We have contacted a local Third Party Service Provider, who has restored power, and our engineers have begun restoring the POP.
Customers will continue to see their traffic rerouted to surrounding regions until the POP has been fully restored and traffic can be returned to its typical routes. All other products and services remain unaffected by this event.
Engineering has confirmed the impact to our JNB POP has been mitigated.
Our Network Engineering team will gradually return traffic as part of our standard traffic engineering best practices.
This incident has been resolved.
Status Post, Created Date/Time: 2025-05-10 12:52:12 UTC
On the 3rd of May 2025 from 09:00 to 12:00 UTC, a third-party service provider in Vancouver performed maintenance on their services. As a result, Fastly temporarily redirected customer traffic typically served from our Vancouver (YVR) Point of Presence (POP) to neighboring regions.
During this temporary reroute, customers may have experienced intermittent errors and increased latency.
The traffic engineering implemented during the maintenance was reversed once the service provider completed their work, and our Network engineers confirmed that the YVR POP was no longer affected.
Our ability to deliver all other products and services was not affected by this event.
Fastly Engineering has identified a performance impact with issuing TLS certificates through one of our third-party service providers, which is currently experiencing an outage.
Customers may experience failures obtaining a TLS certificate as a result of this outage. Customers who need assistance with obtaining a TLS certificate can engage with our Support team through https://support.fastly.com. We apologize for this inconvenience and remain readily available to resolve any impact experienced as a result of this event.
We're investigating a possible performance impact affecting the Support Chat System related to a vendor-reported incident.
- Vendor status page: https://slack-status.com/2025-02/1b757d1d0f444c34
Customers may experience failures when attempting to generate a support case from Support Chat systems. We ask that customers email our Support team at support@fastly.com to avoid delays.
Slack Status Updates
Slack is actively working to resolve the issue and is providing updates on the incident status at the following link: Slack Status. We encourage you to monitor this page for the latest information.
Impact on Our Services
While we are closely monitoring the Slack incident, we want to assure you that our ability to provide customer support and the accessibility of our Network remain unaffected. We have implemented our established communication protocols to ensure continuity of support during this vendor outage. All other products and services are also operating normally and are not impacted by this incident.
Slack has successfully addressed their incident and reported that all services have been restored.
We are conducting internal tests on Support Chat Systems to verify that all functions are operating properly.
The vendor has confirmed a second event impacting Support Chat Systems.
- We are monitoring their updates here: https://slack-status.com/2025-02/d41e4bfd1ccae26a
Customers should continue to request support through email, via support@fastly.com.
This incident has been resolved.
Status Post, Created Date/Time: 2025-02-26 16:50:53 UTC
We're investigating elevated errors in our Madrid (MAD) Point of Presence (POP).
All other locations and services are unaffected.
Fastly Engineering has observed elevated errors in our Madrid (MAD) Point of Presence (POP).
This has been identified as an issue with a third party service provider and a fix is being implemented.
All other locations and services are unaffected.
Fastly Engineering has applied our standard acute incident response practices and restored our MAD POP.
Customer services have returned to pre-incident performance levels.
We will continue to monitor the third party service provider issue for recovery. All locations and other services remain unaffected.
This event has been resolved.
Status Post, Created Date/Time: 2025-02-05 22:01:42 UTC
One of our selected vendors will be performing scheduled maintenance on our Fastly Application from 18:00 to 18:15 UTC on the 24th of April 2024.
During this maintenance window our customers may experience elevated page load errors across some of the Fastly Application views.
The scheduled maintenance has been completed.
On Tuesday, the 23rd of April 2024 from 11:30 UTC to the 24th of April 01:36 UTC, our engineers observed an unplanned event that impacted the Chennai (MAA), Mumbai (BOM), Kolkata (CCU), Delhi (DEL), and Hyderabad (HYD) Points of Presence (POPs), all of which share a common transit provider.
This event is resolved, and there is no remaining impact.
We are seeing increased errors across multiple customers with origins utilizing a common cloud provider, unrelated to Fastly’s Edge Cloud Network.
The third-party vendor has identified the issue and is implementing a fix.
All other locations and services are unaffected.
A fix has been implemented and we are monitoring the results.
Engineering has confirmed that traffic has returned to pre-incident levels. Customers may have experienced increased errors from 23:48 to 00:55 UTC.
This incident is resolved.
Affected customers may have experienced impact to varying degrees and for a shorter duration than set forth above.