Out of an abundance of caution we left this incident in a monitoring state for an extended period, and have not seen a recurrence of the original issue. This incident is now resolved. As a result, we have implemented additional monitoring and improved our automated validation tasks.
Posted Dec 03, 2019 - 22:47 UTC
Following a spike in the system error rate earlier today, the background validation process has been paused to ensure it does not impact normal operations. Issues for known impacted customers have been resolved. Once the validation system's behavior has been verified, the process will be resumed.
Posted Dec 02, 2019 - 22:52 UTC
Automated validation of the cluster continues. Issues for known impacted customers have been resolved. If you believe you are impacted please contact support at firstname.lastname@example.org.
Posted Nov 27, 2019 - 17:35 UTC
The build deployed on 22 Nov has resolved the source of the replication issue. Repair work is ongoing for the subset of users impacted.
Posted Nov 25, 2019 - 15:49 UTC
We’ve deployed a new build that appears to address the symptom, and we are actively working to repair affected data for the subset of users this impacts.
Posted Nov 22, 2019 - 18:07 UTC
We are seeing limited reports of index updates not being fully replicated to all replicas in the cluster. All data is present, but the indexes do not always reflect it. Engineering is investigating the cause.