Instaclustr Status

Mon Apr 21 2025 07:15:53 GMT+0000 (Coordinated Universal Time)

Increased Failure Rate of Apache Cassandra Backups

Apr 21, 07:15 UTC
Resolved - This incident has been resolved.

Apr 20, 21:47 UTC
Update - We have seen a consistent reduction in error rates for Apache Cassandra backup events over the past 3 days; however, we will continue to monitor closely. We expect to provide another update within the next 24 hours.

Apr 19, 21:49 UTC
Update - We are continuing to observe a reduction in error rates for Apache Cassandra backup events; however, we will continue to monitor closely. We expect to provide another update within the next 24 hours.

Apr 18, 22:29 UTC
Update - We are continuing to observe a reduction in error rates for Apache Cassandra backup events; however, we will continue to monitor closely. We expect to provide another update within the next 24 hours.

Apr 18, 02:38 UTC
Update - A fix has been deployed to all Apache Cassandra nodes. Initial results indicate the problem has been resolved; however, we will continue to monitor closely. We expect to provide another update within the next 24 hours.

Apr 18, 00:01 UTC
Monitoring - We have started rolling out a fix. It is expected to take a few hours to apply to all Apache Cassandra nodes, and we will be closely monitoring the progress and effectiveness of the rollout.

Apr 17, 22:58 UTC
Identified - The issue has been identified and we are preparing to roll out a fix.

Apr 17, 01:29 UTC
Investigating - We are currently seeing an elevated rate of backup failures for AWS nodes in our Apache Cassandra offering. We expect these backups to retry continuously and eventually succeed; however, the failed attempts will be visible in the Instaclustr console and APIs as failed backup events.

We are actively monitoring and working on a solution, and will provide more updates as the investigation continues.

If you have any questions or concerns please reach out via [email protected]
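
For customers who want to verify that a failed backup event was eventually followed by a successful retry, a minimal sketch along the lines below can poll backup events for a cluster. It assumes credentials in IC_USERNAME/IC_API_KEY environment variables, and the endpoint path and JSON field names are assumptions for illustration only; consult the Instaclustr API documentation for the actual interface.

    import os
    import requests

    # Hypothetical sketch: list recent backup events for a cluster so that
    # transient retry failures can be told apart from a backup that never
    # eventually succeeded. The endpoint path and JSON field names are
    # ASSUMPTIONS, not the documented Instaclustr API.
    API_BASE = "https://api.instaclustr.com"                      # assumed base URL
    AUTH = (os.environ["IC_USERNAME"], os.environ["IC_API_KEY"])  # assumed env vars

    def print_backup_events(cluster_id: str) -> None:
        url = f"{API_BASE}/cluster-management/v2/clusters/{cluster_id}/backups"  # assumed path
        resp = requests.get(url, auth=AUTH, timeout=30)
        resp.raise_for_status()
        for event in resp.json().get("backupEvents", []):  # assumed field name
            print(event.get("start"), event.get("state"))  # e.g. COMPLETED / FAILED

    if __name__ == "__main__":
        print_backup_events(os.environ["IC_CLUSTER_ID"])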


Thu Feb 13 2025 16:45:28 GMT+0000 (Coordinated Universal Time)

Instaclustr Terraform Provider Issue

Feb 13, 16:45 UTC
Resolved - This incident has been resolved.

Feb 13, 16:17 UTC
Monitoring - We have applied the fix and are monitoring the result.

Feb 13, 15:49 UTC
Identified - We have identified the issue and are applying a fix.

Feb 13, 13:25 UTC
Investigating - We are currently experiencing a validation issue with the Instaclustr Terraform Provider. You may see the error message "Internal validation of the provider failed" and may be unable to manage your clusters using the Instaclustr Terraform Provider. Our team is working on resolving this problem and we apologise for any inconvenience caused.

The Cluster Management API and Management Console are still operational and can be used to manage your clusters.

Additional details will be provided as they become available. If you have an urgent matter, please reach out to us via the support portal (support.instaclustr.com) or email ([email protected]).
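
For pipelines that run Terraform automatically, a small pre-flight guard can detect this specific failure and fall back to the workaround described above. The sketch below shells out to the real `terraform validate -no-color` command; treating this error as a fallback trigger is an assumed policy for illustration, not guidance from Instaclustr.

    import subprocess

    # Hypothetical CI guard: detect the "Internal validation of the provider
    # failed" error so a pipeline can skip Terraform and manage clusters via
    # the Cluster Management API or Console instead. `terraform validate` is
    # a real command; the fallback policy itself is an ASSUMPTION.
    PROVIDER_ERROR = "Internal validation of the provider failed"

    def terraform_provider_broken(workdir: str = ".") -> bool:
        result = subprocess.run(
            ["terraform", "validate", "-no-color"],
            cwd=workdir,
            capture_output=True,
            text=True,
        )
        return PROVIDER_ERROR in (result.stdout + result.stderr)

    if __name__ == "__main__":
        if terraform_provider_broken():
            print("Provider validation is failing; use the Cluster Management "
                  "API or Management Console until the incident is resolved.")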


Mon Jan 27 2025 16:36:37 GMT+0000 (Coordinated Universal Time)

New Clusters may get Stuck during the Provisioning Stage and the Monitoring API is experiencing elevated latencies and error rates

Jan 27, 16:36 UTC
Resolved - This incident has been resolved.

Jan 27, 14:02 UTC
Update - We are continuing to monitor Monitoring API latencies, and latency levels are currently in line with our expectations. We will maintain our oversight and provide updates as new information becomes available.

If you have an urgent matter, please reach out to us via the support portal (support.instaclustr.com) or email ([email protected]).

Jan 27, 12:44 UTC
Update - We have successfully resolved the issue that was causing new clusters to get stuck during the Provisioning stage. We are continuing to monitor Monitoring API latencies, and we are pleased to report that latency levels are returning to normal.

Additional details will be provided as they become available. If you have an urgent matter, please reach out to us via the support portal (support.instaclustr.com) or email ([email protected]).

Jan 27, 12:19 UTC
Monitoring - We are monitoring the results of the fix and are seeing reduced latencies in our Monitoring API. We anticipate that latency levels will return to normal within the next few hours and will continue to monitor the outcome of the fix.

Existing customer clusters are unaffected.

Additional details will be provided as they become available. If you have an urgent matter, please reach out to us via the support portal (support.instaclustr.com) or email ([email protected]).

Jan 27, 11:07 UTC
Update - We have implemented a fix that has resulted in improved latencies for the Monitoring API. Our team is continuing to investigate a permanent fix for this issue. We are also still seeing clusters stuck during the Provisioning stage, and are investigating a fix for this as well.

Additional details will be provided as they become available. If you have an urgent matter, please reach out to us via the support portal (support.instaclustr.com) or email ([email protected]).

Jan 27, 07:50 UTC
Investigating - We are still seeing elevated latencies and error rates with our Monitoring API. Our team is actively working to resolve the issue and we apologise for any inconvenience caused.

Jan 27, 07:06 UTC
Identified - The issue has been identified and a fix is being implemented.

Jan 27, 06:42 UTC
Update - We are also investigating elevated latencies and error rates with our Monitoring API. Our team is actively working to resolve the issue and we apologise for any inconvenience caused.

Existing customer clusters are unaffected.

We will provide updates as soon as they become available. If you have an urgent matter, please reach out to us via the support portal (support.instaclustr.com) or email ([email protected]).

Jan 27, 05:46 UTC
Investigating - We are currently experiencing issues with provisioning clusters through the Cluster Management API and Management Console. New clusters created via the Instaclustr Console and Cluster Management API may get stuck during the Provisioning stage. Our team is actively working on resolving this problem and we apologise for any inconvenience caused.

Existing customer clusters are unaffected, and the Monitoring API and Prometheus API are operational.

Additional details will be provided as they become available. If you have an urgent matter, please reach out to us via the support portal (support.instaclustr.com) or email ([email protected]).
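
During windows of elevated latency and error rates like this one, a standard client-side mitigation is to retry Monitoring API reads with exponential backoff rather than failing on the first slow or errored response. The sketch below is generic: the URL passed in is whichever Monitoring API resource you normally poll, and nothing here is specific to Instaclustr's API surface.

    import time
    import requests

    # Generic retry-with-exponential-backoff wrapper for read requests during
    # periods of elevated 5xx rates and timeouts. 4xx responses are treated
    # as caller errors and are not retried.
    def get_with_backoff(url, auth, attempts=5, base_delay=1.0):
        for attempt in range(attempts):
            try:
                resp = requests.get(url, auth=auth, timeout=30)
            except requests.RequestException:
                resp = None  # timeout / connection error: fall through and retry
            if resp is not None and resp.status_code < 500:
                resp.raise_for_status()  # 4xx: caller error, do not retry
                return resp.json()
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, 8s, ...
        raise RuntimeError(f"{url} still failing after {attempts} attempts")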