JuiceFS Status

Tue May 16 2017 06:30:00 GMT+0000 (Coordinated Universal Time)

Performance Degraded in AWS Beijing Region

May 16, 2017 14:12 +08
Investigating - 6:14 Received a report that JuiceFS in the AWS Beijing region was slow, with timeouts when connecting to the metadata service.

6:16 Started investigating. We found that a node in the UCloud Beijing region had been elected as active to serve requests from AWS Beijing, while the egress of the AWS Beijing region was slow and dropping packets.
6:30 The egress of AWS Beijing returned to normal.

May 16, 2017 14:30 +08
Resolved - 13:34 All the metadata nodes serving AWS Beijing were moved into the AWS Beijing region so that slow egress can no longer degrade the performance of JuiceFS. We also double-checked all other regions to make sure the metadata service is accessed within the same region for better performance.
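
To keep this from regressing, a client should only ever pick metadata nodes in its own region. Below is a minimal Go sketch of that region check; the MetaNode type, addresses, port, and region labels are illustrative assumptions, not the actual JuiceFS deployment code.

    // regioncheck.go: make sure a client only talks to metadata nodes in its
    // own region, so slow cross-region egress cannot degrade performance.
    // MetaNode, the port, and the region labels are hypothetical.
    package main

    import (
        "fmt"
        "strings"
    )

    // MetaNode describes one metadata service instance.
    type MetaNode struct {
        Addr   string
        Region string // e.g. "aws-beijing", "ucloud-beijing"
    }

    // pickLocalNodes keeps only the nodes in the client's own region.
    func pickLocalNodes(nodes []MetaNode, clientRegion string) []MetaNode {
        var local []MetaNode
        for _, n := range nodes {
            if strings.EqualFold(n.Region, clientRegion) {
                local = append(local, n)
            }
        }
        return local
    }

    func main() {
        nodes := []MetaNode{
            {Addr: "10.0.1.5:9402", Region: "aws-beijing"},
            {Addr: "10.8.2.7:9402", Region: "ucloud-beijing"},
        }
        for _, n := range pickLocalNodes(nodes, "aws-beijing") {
            fmt.Println("use metadata node", n.Addr)
        }
    }

Running the example prints only the aws-beijing node, which is the behavior the relocation above is meant to guarantee.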


Mon Jul 03 2017 06:50:00 GMT+0000 (Coordinated Universal Time)

Deploy new release of metadata service

July 3, 2017 14:30 +08
Scheduled - Deploy a new release with huge performance improvements.

July 3, 2017 14:30 +08
Active - 6:30 Deployed to the Aliyun HK region.
6:40 Started deploying to all regions.


July 3, 2017 14:50 +08
Completed - 6:50 Deployed to all regions successfully.


Thu Nov 16 2017 20:26:00 GMT+0000 (Coordinated Universal Time)

JuiceFS in UCloud Beijing region was temporarily unavailable

November 17, 2017 04:16 +08
Investigating - 12:18 Received an alarm about a disruption of the JuiceFS metadata service in the UCloud Beijing region. Investigation started.
12:20 Found that two follower nodes could not replay the change log from the leader and kept restarting.
12:23 Restarted the previous leader.
12:26 Service was fully recovered and back to normal.
13:30 Found the bug that caused the change log apply failure.
20:30 Deployed the bugfix.

November 17, 2017 04:26 +08
Resolved - This was caused by a bug introduced in 4.2.2, which used slightly different logic to apply the change log; the followers therefore produced a different internal result and failed the checksum check. The bug was fixed and the fix deployed.
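
The failure mode is easiest to see with a toy example: if the leader and a follower apply the same change log with even slightly different logic, their states diverge, the follower fails the checksum check, and it restarts and retries forever. The Go sketch below reproduces that pattern; the types and the applyOld/applyNew functions are made up and only stand in for the two release versions.

    // replaycheck.go: toy reproduction of the failure mode. The leader and a
    // follower apply the same change log, but the follower's apply logic
    // differs slightly (standing in for the 4.2.2 change), so its state and
    // checksum diverge and it rejects the log. All types here are made up.
    package main

    import (
        "fmt"
        "hash/crc32"
    )

    type entry struct{ key, value string }

    // applyOld and applyNew stand in for the old and new apply logic.
    func applyOld(state map[string]string, e entry) { state[e.key] = e.value }
    func applyNew(state map[string]string, e entry) { state[e.key] = e.value + "\n" } // subtle difference

    // checksum hashes the state in a fixed key order so the comparison is
    // deterministic.
    func checksum(state map[string]string, keys []string) uint32 {
        h := crc32.NewIEEE()
        for _, k := range keys {
            h.Write([]byte(k))
            h.Write([]byte(state[k]))
        }
        return h.Sum32()
    }

    func main() {
        log := []entry{{"inode:1", "dir"}, {"inode:2", "file"}}
        keys := []string{"inode:1", "inode:2"}

        leader, follower := map[string]string{}, map[string]string{}
        for _, e := range log {
            applyOld(leader, e)
            applyNew(follower, e)
        }
        if checksum(leader, keys) != checksum(follower, keys) {
            fmt.Println("checksum mismatch: follower rejects the change log and restarts")
        }
    }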


Mon Jan 22 2018 12:54:00 GMT+0000 (Coordinated Universal Time)

Outage of meta service in Aliyun Qingdao

January 22, 2018 20:42 +08
Investigating - The metadata service ran out of memory while rotating the changelog to disk, which slowed election responses and eventually caused the loss of the leader.

January 22, 2018 20:54 +08
Resolved - - Increased memory
- Adjusted the alarm threshold for memory
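
A rough sketch of the kind of memory alarm that was adjusted, assuming a fixed memory budget for the metadata service; the 80% threshold, the 4 GiB budget, and the use of Go runtime heap stats are assumptions for illustration, not the real monitoring setup.

    // memalarm.go: sketch of a memory alarm check. The threshold, the budget,
    // and reading Go runtime heap stats are assumptions made for illustration;
    // the real alarm comes from the monitoring system.
    package main

    import (
        "fmt"
        "runtime"
    )

    const memAlarmThreshold = 0.80 // alert well before the process can hit OOM

    func main() {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)

        budget := uint64(4 << 30) // assumed 4 GiB budget for the metadata service
        usage := float64(m.HeapAlloc) / float64(budget)
        if usage > memAlarmThreshold {
            fmt.Printf("ALARM: heap usage %.0f%% exceeds threshold\n", usage*100)
        } else {
            fmt.Printf("heap usage %.0f%% is within budget\n", usage*100)
        }
    }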


Sun Feb 18 2018 04:55:00 GMT+0000 (Coordinated Universal Time)

Outage of meta service in UCloud North

February 18, 2018 12:08 +08
Investigating - Two of the three meta services failed to accept new connections. No syslog entries had been written for the past 20 hours.

February 18, 2018 12:55 +08
Resolved - - Rebooted the affected nodes to recover
- Rolled back recent changes to the syslog configuration


Tue May 15 2018 13:38:00 GMT+0000 (Coordinated Universal Time)

Outage of meta service in UCloud North

May 15, 2018 21:08 +08
Investigating - Two of the three meta services for UCloud North failed, which caused a service outage.

May 15, 2018 21:38 +08
Resolved - - Applied for a new availability zone (zone E) and distributed the meta services across zones B, D, and E to reduce the chance of an outage (see the sketch below)
- Lowered the alarm threshold to detect failures sooner
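
The reasoning behind the new placement: with one metadata replica per zone, losing any single zone still leaves a majority of nodes, while packing two replicas into one zone does not. A small Go sketch of that quorum arithmetic follows; the zone names match the note above, but the replica counts for the old layout are an assumption.

    // zonespread.go: why one replica per zone helps. With replicas spread over
    // zones B, D and E, losing any single zone still leaves 2 of 3 nodes, which
    // is enough for a majority; a two-zone layout (assumed here for contrast)
    // can lose quorum when one zone goes down.
    package main

    import "fmt"

    // survivesZoneLoss reports whether a majority of replicas remains after
    // losing every replica in the given zone.
    func survivesZoneLoss(placement map[string]int, lost string) bool {
        total, remaining := 0, 0
        for zone, replicas := range placement {
            total += replicas
            if zone != lost {
                remaining += replicas
            }
        }
        return remaining > total/2
    }

    func main() {
        spread := map[string]int{"B": 1, "D": 1, "E": 1} // new layout: one replica per zone
        packed := map[string]int{"B": 2, "D": 1}         // hypothetical old two-zone layout

        fmt.Println("spread layout survives losing zone B:", survivesZoneLoss(spread, "B")) // true
        fmt.Println("packed layout survives losing zone B:", survivesZoneLoss(packed, "B")) // false
    }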


Sun May 12 2019 18:26:00 GMT+0000 (Coordinated Universal Time)

Outage in Asia Regions

May 13, 2019 02:15 +08
Investigating - Received an alarm about disrupted service in the Asia region (Singapore).


May 13, 2019 02:26 +08
Resolved - Restarted those instances and added more memory; the service was fully recovered.