Real-time Log Streaming
Due to a recent configuration change, duplicate log messages are being emitted for Compute Log Tailing and for Log Explorer & Insights. Fastly has identified the issue and is reverting the configuration change.
Remote Log Streaming services are not affected by the duplicate log messages.
We are investigating elevated errors in our Real-time Log Streaming services.
Our engineers have identified the contributing factor and are applying a fix to our Real-time Log Streaming services.
Engineering has confirmed the impact to our Real-time Log Streaming services has been mitigated.
Engineering has confirmed that our Real-time Log Streaming services have been fully restored. Customers may have experienced logs being discarded or missing in addition to potential missing data in their WAF metrics from 18:09 to 18:50 UTC.
This incident is resolved.
Status Post, Created Date/Time: 2024-10-04 18:37:33 UTC
Note: Our Customer Escalation Management team will update the start date and time of the initial "investigating" status post upon the resolution of this incident. This update is meant to provide our customers and their end users with a potential impact window. The date and time mentioned in the message above indicates when the status post was requested by our Acute Incident Response team.
We are investigating elevated errors in our Real-time Log Streaming service.
Our engineers have identified the contributing factor and are applying a fix to our Real-time Log Streaming service.
Engineering has also confirmed that this is only impacting our Compute service and Log Tailing within our Observability service.
All other locations and services are unaffected.
Engineering has deployed a fix and confirmed a gradual recovery of our Real-time Log Streaming service. We will continue to monitor until we’ve confirmed that the customer experience has been fully restored.
Network availability and all other services were unaffected by this incident.
Engineering has confirmed that our Real-time Log Streaming service has been fully restored. Customers on our Compute service and Log Tailing within our Observability service may have experienced partial data loss from 21:00 UTC on the 18th of September 2024 to 13:19 UTC on the 20th of September 2024.
This incident is resolved.
Affected customers may have experienced impact to varying degrees and for a shorter duration than set forth above.
We are investigating elevated errors in our Real-time Log Streaming services.
Our engineers have identified the contributing factor and are applying a fix to our Real-time Log Streaming services.
All other services are unaffected.
Engineering has confirmed the impact to Real-time Log Streaming services has been mitigated.
Engineering has confirmed that our Real-time Log Streaming services have been fully restored. Customers may have experienced logs being discarded or missing from 03:10 to 05:55 UTC.
This incident is resolved.
We are investigating elevated errors in our Real-time Log Streaming service.
Our engineers have identified the contributing factor and are applying a fix to our Real-time Log Streaming service. During this time, customers may experience the log endpoint error status not being reported in the UI. Log delivery and changes to logging configurations are not impacted at this time.
Our engineers have identified the contributing factor and are continuing to apply the mitigation strategy to our Real-time Log Streaming service.
Engineering has deployed a fix and confirmed a gradual recovery of our Real-time Log Streaming service. We will continue to monitor until we’ve confirmed that the customer experience has been fully restored.
Engineering has confirmed that Real-time Logging services have been fully restored.
Customers may have experienced a log endpoint error code in their UI from the 7th of August 2024 at 19:16 UTC to the 8th of August 2024 at 01:46 UTC.
Log delivery and changes to logging configurations were not impacted by this issue.
This incident is resolved.
We're currently investigating performance issues with our Streaming Logs service.
All other services are unaffected.
Our engineers have identified the contributing factor and are applying a fix to our Real-time Log Streaming service.
Engineering has deployed a fix and confirmed a gradual recovery of our Real-time Log Streaming service. We will continue to monitor until we’ve confirmed that the customer experience has been fully restored.
Engineering has confirmed that our Real-time Log Streaming service has been fully restored. Customers who have a logging format of classic, Loggly, or Logplex may have experienced malformed logs from the 11th of April at 18:02 to the 7th of May at 03:23 UTC.
This incident is resolved.
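For context on which configurations were in scope: "classic", "Loggly", and "Logplex" are message-framing options on Fastly logging endpoints (the message_type setting) that prepend a syslog-style prefix to each log line. The sketch below is a minimal, illustrative example and not part of the official remediation; it lists a service version's syslog logging endpoints via the Fastly API and prints each endpoint's message format so a customer could check whether they used one of the affected formats. The service ID, version number, and FASTLY_API_TOKEN environment variable are placeholders, and the field names reflect the public logging API.

```python
import os

import requests  # assumed available for this sketch

FASTLY_API = "https://api.fastly.com"
SERVICE_ID = "example_service_id"   # placeholder: your Fastly service ID
VERSION = 1                         # placeholder: the service version to inspect

resp = requests.get(
    f"{FASTLY_API}/service/{SERVICE_ID}/version/{VERSION}/logging/syslog",
    headers={"Fastly-Key": os.environ["FASTLY_API_TOKEN"]},
    timeout=10,
)
resp.raise_for_status()

for endpoint in resp.json():
    # message_type is typically "classic", "loggly", "logplex", or "blank";
    # the first three prepend a syslog-style prefix to each log line.
    print(endpoint["name"], endpoint.get("message_type"))
```

Other logging endpoint types expose the same message_type field, so the same check applies with a different path segment in place of "syslog".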
Fastly Engineering detected a performance impact event affecting Streaming Logs in various POPs throughout our network. Customers may have experienced a delay or discarded log messages from 16:19 to 16:38 UTC.
This incident is resolved.
We have identified the cause of elevated errors in our Streaming Logs service and are deploying a fix.
Our network availability and all other services are unaffected by this incident.
A fix has been implemented and we are monitoring the results.
Engineering has confirmed that the degraded performance for streaming log services has been fully restored. Customers may have experienced varying degrees of log loss or delays in log delivery from 20:50 UTC to 21:50 UTC as a result of this incident.
This incident has been resolved.
Fastly has identified an issue in which customers may see error messages in the Fastly UI for S3 and Kinesis endpoints indicating that a token is expired. However, this is isolated to the Fastly logging system intermittently hitting a rate limit with the AWS Security Token Service (STS) API. This intermittent error does not appear to be causing log loss, but it does result in an error message in the UI. This only affects S3 and Kinesis endpoints that are using role-based authentication.
Fastly is currently working to resolve this intermittent error. All other locations and services are unaffected.
Engineering has deployed a fix to mitigate rate limiting errors and has observed a gradual recovery for streaming log services. We will continue to monitor the effects of the change and will post an update once services have been fully restored.
Our investigation into previously deployed mitigation measures has verified that our customers should no longer experience log loss as a result of this incident.
We investigated the continued reports of error messages observed within the Fastly App and identified an error in the timing of reacquiring temporary credentials. We have confirmed that the impact to streaming log services has been resolved, and we do not see log loss in connection with this error message.
We are deploying an additional fix to resolve this Fastly App UI error message for our customers. We will post an update once all remaining error messages have been fully corrected.
A fix was deployed and we have observed role-based S3 and Kinesis logging endpoints returning to normal in the Fastly UI. Services that handle little to no traffic may see the error remaining until the logging system has successfully sent a batch of logs.
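To illustrate the failure mode described above: with role-based authentication, a log-delivery worker assumes an IAM role via AWS STS and uses the temporary credentials it receives; if credentials are reacquired too often, or at the wrong time relative to expiry, the AssumeRole call volume can hit the STS rate limit and surface as token errors in the UI even while logs continue to flow. The sketch below is a minimal, hypothetical example of credential caching with an expiry margin, not Fastly's implementation; the role ARN, session name, and refresh margin are illustrative values.

```python
import time

import boto3  # AWS SDK for Python; assumed available for this sketch


class CachedAssumeRole:
    """Cache STS temporary credentials and reacquire them only when they are
    close to expiring, instead of calling AssumeRole for every log batch."""

    def __init__(self, role_arn, session_name="log-delivery", refresh_margin_s=300):
        self.role_arn = role_arn                  # placeholder customer role ARN
        self.session_name = session_name
        self.refresh_margin_s = refresh_margin_s  # refresh this long before expiry
        self.sts = boto3.client("sts")
        self._creds = None
        self._expires_at = 0.0

    def credentials(self):
        # Reuse cached credentials until they are within the refresh margin of
        # expiry; this keeps AssumeRole call volume well below STS rate limits.
        if self._creds is None or time.time() >= self._expires_at - self.refresh_margin_s:
            resp = self.sts.assume_role(
                RoleArn=self.role_arn,
                RoleSessionName=self.session_name,
            )
            self._creds = resp["Credentials"]
            self._expires_at = self._creds["Expiration"].timestamp()
        return self._creds


# Example: build an S3 client from the cached credentials before sending a batch.
provider = CachedAssumeRole("arn:aws:iam::123456789012:role/example-log-writer")
creds = provider.credentials()
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```

Refreshing shortly before expiry, rather than on every batch or only after an expired-token error, keeps STS call volume low and avoids the window in which an expired token produces UI errors.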
We're investigating elevated errors in Streaming Logs.
This issue has been identified and a fix is being implemented.
A fix has been implemented and we are monitoring the results.
Engineering has confirmed that Streaming Logs has been fully restored. Customers sending log messages from the affected POPs may have experienced a proportion of their log messages being discarded from 15:20 to 15:53 UTC.
This incident is resolved.
Affected customers may have experienced impact to varying degrees and for a shorter duration than set forth above.