Image Optimization
We are investigating elevated errors with our Image Optimization service.
All other products and services are unaffected by this incident.
Our engineers have identified the contributing factor and are developing a fix for our Image Optimization service. They have confirmed that the impact is localized to US-EAST IO services only.
All other locations and services are unaffected.
Engineering has confirmed that the impact to the Image Optimization service has been mitigated.
Our engineers have identified additional contributing factors and are developing an adjusted mitigation strategy to address the remaining intermittent errors affecting Image Optimization services.
All other locations and services are unaffected.
Engineering has deployed a fix and confirmed a gradual recovery of US-EAST Image Optimization services. We will continue to monitor until we’ve confirmed that customer experience has been fully restored.
Our ability to provide core content delivery and security services remains unaffected by this event.
Engineering has confirmed that US-EAST Image Optimization (IO) services have been fully restored. Customers may have experienced elevated errors for IO services from the 2nd of September 2025 at 21:00 UTC to the 3rd of September 2025 at 01:35 UTC.
This incident is resolved.
Affected customers may have experienced impact to varying degrees and for a shorter duration than set forth above.
To offer feedback on our status page, click "Give Feedback"
Status Post, Created Date/Time: 2025-09-03 00:45:06 UTC
Note: Our Customer Escalation Management team will update the start date and time of the initial "investigating" status post upon the resolution of this incident. This update is meant to provide our customers and their end users with a potential impact window. The date and time mentioned in the message above indicates when the status post was requested by our Acute Incident Response team.
We are seeing increased errors across multiple Fastly services that utilize a common third party service provider, unrelated to Fastly's Edge Cloud Network.
Fastly has launched acute incident response practices to investigate this issue further in an effort to reduce the impact to our customers.
As our engineers determine the scope of impact, our Customer Escalation Management team will update the impacted components on this status post.
We're aware of an ongoing Google Cloud incident, detailed on their status page, which is affecting several Fastly services. Customers may be experiencing increased latency due to impact to our KV Store.
Specifically, those trying to access their control plane via manage.fastly.com might notice that several key Observability features are not loading, including Billing, Historical Stats, Log Explorer & Insights, Origin Inspector, Real-Time Log Streaming, and Real-time Analytics. You may also find it difficult to engage with API endpoints. Additionally, customers might see elevated errors when accessing Fastly webpages and documentation.
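For customers whose automation calls the Fastly API during a window like this, a client-side retry with exponential backoff can smooth over intermittent control plane errors. The sketch below is a minimal illustration, assuming an API token in the FASTLY_API_TOKEN environment variable and the read-only /current_customer endpoint as an example; adjust it to your own tooling.

    import os
    import time

    import requests

    API_TOKEN = os.environ["FASTLY_API_TOKEN"]  # assumption: token supplied via environment
    URL = "https://api.fastly.com/current_customer"  # example read-only endpoint

    def get_with_backoff(url, attempts=5, base_delay=1.0):
        """Retry transient 5xx/429 responses with exponential backoff."""
        for attempt in range(attempts):
            try:
                resp = requests.get(url, headers={"Fastly-Key": API_TOKEN}, timeout=10)
                if resp.status_code < 500 and resp.status_code != 429:
                    return resp
            except requests.RequestException:
                pass  # network-level failure; treat as retryable
            time.sleep(base_delay * (2 ** attempt))
        raise RuntimeError(f"{url} still failing after {attempts} attempts")

    print(get_with_backoff(URL).status_code)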
Our engineers are working to restore these services as a high priority and we will provide more information shortly.
You can monitor the Google Cloud incident here:
We've confirmed that customers won't receive status post notifications through their Support Slack channels during this incident.
However, we want to assure you that your ability to request a support case through our Support Slack services remains unaffected. You can still open new support cases as needed.
Google Cloud has communicated that they have successfully deployed mitigations to the majority of their services. We are now observing a gradual recovery of Fastly services.
We will continue to monitor this situation with the highest priority until all Fastly and customer services are fully restored.
We have continued to monitor the Google Cloud Status Page for the latest information from the third-party service provider.
Incident Update: Full Recovery and Root Cause Identified
Current Status: Resolved
We're confirming that all services impacted during yesterday's incident have fully recovered. Our teams continuously monitored the situation and verified the stability of all affected systems.
Root Cause Analysis
Our investigation has confirmed that the increased errors and latency across our KV Store, control plane, and certain Observability features were a direct result of the Google Cloud disruption on the 12th of June 2025.
Google has publicly shared a post-mortem regarding the incident. It includes specific information about the root cause—an invalid automated quota update to their API management system—along with their mitigation strategies and steps they're taking for future prevention.
Next Steps
For a comprehensive understanding of the incident's origin, mitigation, and Google's preventative measures, we encourage customers to review the official Google Cloud Status Page. This provides a full breakdown from their perspective.
Additionally, our Product and Engineering teams will thoroughly review all Google Cloud Platform (GCP) reports to determine mitigation strategies and adopt them as part of our long-term preventive measures, making our systems more resilient to third-party outages of this kind in the future.
We appreciate your patience and understanding as we worked through this third-party event. Our focus remains on providing reliable and high-performance services.
We are investigating elevated errors with our Web Delivery service within our Madrid (MAD) Point of Presence (POP).
All other products and services are unaffected by this incident.
Our engineers are continuing to investigate elevated errors with our Web Delivery service within our Madrid (MAD) Point of Presence (POP).
Our engineers have identified the contributing factor and are applying a fix to our Web Delivery service within our Madrid (MAD) Point of Presence (POP).
All other locations and services are unaffected.
Our engineers have identified an additional contributing factor and are applying an adjusted mitigation strategy to our Web Delivery service within our Madrid (MAD) Point of Presence (POP).
All other locations and services are unaffected.
Engineering has confirmed that our Web Delivery service within our Madrid (MAD) POP has been fully restored. Customers may have experienced timeouts and/or an inability to view content served by impacted IPs from the 3rd of February 2025 at 15:48 UTC to the 10th of February 2025 at 16:43 UTC.
This incident is resolved.
Affected customers may have experienced impact to varying degrees and for a shorter duration than set forth above.
If customers are still experiencing elevated errors within this region, please reach out to us at https://support.fastly.com.
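As a quick way to confirm whether requests are still being routed through the MAD POP, you can inspect the X-Served-By response header, which Fastly populates with the name of the serving cache node; that name includes the POP code. The hostname and check below are a minimal sketch using illustrative assumptions, not an official diagnostic.

    import requests

    # Assumption for illustration: replace with a hostname that is served through Fastly.
    url = "https://www.example.com/"

    resp = requests.get(url, timeout=10)
    served_by = resp.headers.get("X-Served-By", "")
    print("Status:", resp.status_code)
    print("X-Served-By:", served_by)

    # Fastly cache node names embed the POP code, e.g. values containing "MAD" for Madrid.
    if "mad" in served_by.lower():
        print("This request was served from the MAD POP.")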
To offer feedback on our status page, click "Give Feedback"
Status Post, Created Date/Time: 2025-02-03 18:04:21 UTC
Note: Our Customer Escalation Management team will update the start date and time of the initial "investigating" status post upon the resolution of this incident. This update is meant to provide our customers and their end users with a potential impact window. The date and time mentioned in the message above indicates when the status post was requested by our Acute Incident Response team.
We're currently investigating performance issues with our Image Optimization service.
All other services are unaffected.
A fix has been implemented and we are monitoring the results.
This incident has been resolved.
----------------------
Update added on the 27th of October 2023
On Wednesday, the 25th of October 2023, Fastly received customer reports of elevated 5xx errors for Image Optimization services from 13:00 to 17:39 UTC.
Our network availability, point of presence locations and all other services were unaffected by this incident.
Affected customers may have experienced impact to varying degrees and for a shorter duration than set forth above.
The issue has been identified and a fix is being implemented.
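For customers who want an earlier, log-side signal of elevated 5xx rates on Image Optimization requests, a simple count over access logs can help. The log path, path prefix, field positions, and alert threshold below are assumptions for illustration; adapt them to your own log format.

    from collections import Counter

    # Assumed log format: one request per line, with the URL path as the 7th and the
    # HTTP status as the 9th whitespace-separated field (common/combined log style).
    LOG_FILE = "access.log"          # hypothetical log file path
    IO_PATH_PREFIX = "/images/"      # hypothetical Image Optimization path prefix
    ALERT_THRESHOLD = 0.05           # alert if more than 5% of IO requests return 5xx

    counts = Counter()
    with open(LOG_FILE) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 9 or IO_PATH_PREFIX not in fields[6]:
                continue
            status = fields[8]
            counts["5xx" if status.startswith("5") else "ok"] += 1

    total = sum(counts.values())
    if total:
        rate = counts["5xx"] / total
        print(f"IO requests: {total}, 5xx rate: {rate:.2%}")
        if rate > ALERT_THRESHOLD:
            print("Elevated 5xx rate detected; consider opening a support case.")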