March 14, 2025 10:27 CET
Investigating - We have noticed issues when scaling up volumes that use V2 (Lightbits) storage.
Workarounds and alternative options are currently available, but we are investigating why the default behaviour and operations are not functioning as expected.
Should you encounter issues with volume scaling, please reach out to our 24/7 support.
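For customers who want to verify whether a scale-up has actually reached the guest before contacting support, the following is a minimal sketch, assuming a Linux guest; the device name is a placeholder and should be adjusted to match your volume:

    # Read the kernel-reported size of a block device from sysfs.
    # Assumes a Linux guest; the device name below is a placeholder.
    from pathlib import Path

    DEVICE = "vdb"  # placeholder device name; adjust to your volume

    # /sys/class/block/<dev>/size is reported in 512-byte sectors.
    sectors = int(Path(f"/sys/class/block/{DEVICE}/size").read_text())
    print(f"/dev/{DEVICE}: {sectors * 512 / 1024**3:.1f} GiB")

If the reported size still matches the pre-resize value, the scale-up has not propagated to the guest and is worth raising with support.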
April 24, 2025 08:09 CEST
Resolved - The issues with volume resizing have been resolved and the fix has been stable for 24 hours; there should now be no further issues related to resizing V2 storage volumes in any environment.
April 3, 2025 12:03 CEST
Investigating - We are currently seeing issues with re-deploying on nodes in sto2 in Virtuozzo and are working with the vendor to find the root cause.
One possible scenario is migrating workloads off the impacted hardware; we will communicate further if we start scheduling this (live migrations where possible first).
April 3, 2025 13:50 CEST
Identified - Update
At 14:00 we will start migrating off the impacted hardnode. These will all be live migrations, meaning no impact, with one exception:
Storage nodes cannot be live-migrated and will be offline for a short period while their migration is performed. We have identified a limited number of these and will take them last, one by one, to minimize impact.
Further updates during the afternoon.
April 4, 2025 07:35 CEST
Monitoring - Morning update:
All impacted environments have been successfully migrated away from the hardnode and are no longer exposed to impact or risk.
The incident will move to monitoring state during the day and will hopefully be closed before the weekend.
April 11, 2025 16:18 CEST
Resolved - The issue has been fully resolved and the impacted hardnode is functionally restored.
The incident is considered resolved and will be closed.
March 25, 2025 10:25 CET
Scheduled - We will be upgrading our database management system. During the maintenance window, you may see intermittent unavailability in the user interface and your datastores may appear as 'Unreachable' there. This is purely a visual issue; connections to your databases will not be affected.
There will be no impact on customer datastores.
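As a reference, a datastore that shows as 'Unreachable' in the UI during this window can still be verified directly with a plain TCP connection check; the hostname and port below are placeholders for illustration:

    # Plain TCP reachability check against a datastore endpoint,
    # independent of the control-panel UI. Hostname and port are placeholders.
    import socket

    HOST = "db.example.com"  # placeholder datastore hostname
    PORT = 5432              # placeholder port (e.g. PostgreSQL)

    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print(f"{HOST}:{PORT} accepts TCP connections")
    except OSError as exc:
        print(f"{HOST}:{PORT} is not reachable: {exc}")

A successful connection during the window confirms that the 'Unreachable' label is only a display issue.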
April 9, 2025 09:00 CEST
Active - Scheduled maintenance is starting.
April 9, 2025 16:00 CEST
Completed - Scheduled maintenance is complete.
April 1, 2025 12:06 CEST
Investigating - We currently have a problem with the DBaaS control plane monitoring, resulting in a non-functional metrics dashboard in the UI.
April 2, 2025 10:43 CEST
Resolved - The issue has now been resolved.
March 27, 2025 07:32 CET
Investigating - We are currently seeing performance issues on one of our nodes at STO2 in Virtuozzo. We are investigating the root cause and impact and will update continuously as work progresses.
March 27, 2025 08:06 CET
Monitoring - We identified a hardnode having issues and, after troubleshooting, performed a restart. All environments and instances now look stable and we see no continuing issues.
We will keep the incident open in monitoring state during the morning while we go through more data to ensure no further issues appear.
March 27, 2025 11:07 CET
Resolved - After the node restart early this morning, combined with manual restarts of certain environments and services, all issues are resolved.
We have continued to monitor the hardware and services, everything has remained stable, and we are closing the incident.