SJC is out of hardware resources.
New SSDs, RAM, and servers are on the way.
- Two nodes will be rebooted next weekend (March 17 or 19) for a RAM upgrade.
- Block storage will be doubled, and snapshots will be available this weekend (March 10 or 12).
- Your VM might be cold-migrated (one reboot) to a new node without notice, due to tight hardware resources.
- New instance purchases will open the day after (March 11 or 13) if item 2 is completed on time.
We have to perform an emergency reboot on some SJC nodes. It will be done very soon.
We've noticed I/O errors in SJC; investigating. We'll keep you posted.
A network component we used applied the layer3+4 transmit hash policy to the bond interface, which is not supported by InfiniBand.
This caused a disconnect-reconnect dead loop across the entire SJC Ceph cluster.
We've removed the configuration the component applied and locked it down.
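For context, a simplified sketch (not the kernel's actual implementation) of what a layer3+4 transmit hash does: it spreads traffic across bond slaves by hashing L3 addresses and L4 ports, which assumes Ethernet-style slaves and is why it cannot be applied to an IPoIB bond. All addresses and ports below are illustrative.

```python
# Illustrative sketch of a layer3+4 bonding transmit hash: each flow is
# pinned to one slave by hashing its IP addresses and TCP/UDP ports.
def layer3_4_hash(src_ip: int, dst_ip: int,
                  src_port: int, dst_port: int,
                  n_slaves: int) -> int:
    """Pick a slave index from L3/L4 header fields (simplified)."""
    h = src_port ^ dst_port   # L4: ports
    h ^= src_ip ^ dst_ip      # L3: addresses
    h ^= h >> 16              # fold high bits down
    h ^= h >> 8
    return h % n_slaves

# Two flows between the same pair of hosts can land on different slaves:
a = layer3_4_hash(0x0A000001, 0x0A000002, 6800, 50000, 2)
b = layer3_4_hash(0x0A000001, 0x0A000002, 6800, 50001, 2)
```

The point of the policy is per-flow load spreading; InfiniBand bonds only support failover-style operation, so forcing a hash policy like this onto them misbehaves.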
We are experiencing extremely high load in SJC; the new hardware is on the way.
The new NVMe block storage hardware will be installed tomorrow.
We are working on restoring the Ceph OSDs. The problem has been found, but recovery will still take more time.
We are still working on it; we suggest not rebooting if your system can still run, since I/O is currently suspended.
Remote hands are on the way to the SJC location to install the hardware required for the repair.
The SLA resolution will be posted after the repair is done.
OSD recovery and backfill are in progress.
Step 1 still needs ~4 hrs; 70% of VMs will return to normal.
Step 2 will take another ~4 hrs; 99% of VMs will return to normal.
Step 3 needs a whole day; it only impacts I/O performance, not uptime.
The SLA is below what the TOS offers. Reimbursement will be issued case by case; please submit a ticket after the event ends.
We are deeply sorry for the recent SLA drop that may have caused inconvenience to your business operations. We understand the importance of our services to your business and we take full responsibility for this interruption.
The fault report will be posted after the event.
Ceph does not allow running after a partial recovery; step 2 is in progress.
Step 2 complete.
Because one OSD could not be recovered and the data diverged over time, 13/512 (2.5390625%) of the data cannot be recovered.
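The loss figure is just the fraction of affected placement groups; a quick check of the arithmetic:

```python
# Reported loss: 13 of 512 placement groups could not be recovered.
lost_pgs, total_pgs = 13, 512
pct = lost_pgs / total_pgs * 100
print(f"{pct}% of data unrecoverable")  # → 2.5390625% of data unrecoverable
```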
Once again, we apologize for any inconvenience or concern that this may have caused. We value your trust and we will continue to work hard to earn and maintain it.
Initial summary:
On or about March 1, [ISP Name] San Jose received a large number of VM orders (almost double the number of VMs at that time).
[ISP Name] noticed the tight resources and immediately stopped accepting new orders.
Memory pressure was relieved by the two new nodes purchased last month.
Available storage was already below 30% at that time.
On March 6, we increased the OSD full ratio (set-full-ratio) from 90% to 95% in order to prevent I/O outages.
But this was still not enough to solve the problem, so we had already ordered a sufficient number of P5510/P5520 7.68 TB SSDs on March 3.
FedEx was expected to deliver on March 7, and we scheduled the SSD installation for March 8.
Due to the California weather, the delivery slipped to March 9, and we planned to install the SSDs immediately on March 10 to relieve the pressure.
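In hindsight, raising the full ratio to 95% left almost no slack once BlueStore's own log reserve (described in the tech notes below) is accounted for. A rough sketch of the margin, assuming the ~4% figure:

```python
# Hypothetical margin check: data may fill up to full_ratio, and BlueStore
# needs ~4% of the device for its log, so very little slack remains.
full_ratio = 0.95   # raised from 0.90 on March 6
log_share = 0.04    # assumed BlueStore log reserve (~4% of the OSD)
slack = 1.0 - full_ratio - log_share
print(f"slack left after the log reserve: {slack:.0%}")  # → 1%
```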
On the night of March 8, we completed network maintenance, which caused the OSDs to restart.
Because the OSDs were also over capacity, BlueStore did not have enough space to allocate its ~4% log during startup, so those OSDs refused to boot. At that point the impact was still only reduced I/O performance.
Due to continued writes, on the morning of March 9 another OSD failed and triggered backfill, which set off a chain reaction: a third OSD was written full and then failed to start. This eventually led to the current condition.
We immediately arranged the on-site installation for March 9, but some PGs were still lost.
=== Tech Notes
- San Jose runs [ISP Name]'s latest tech stack. We did not know that BlueStore reserves ~4% of the total OSD capacity as a log; we assumed it was included in the data space.
Once data consumes all the space, the log cannot be allocated during initialization, and the OSD fails to start.
- San Jose had never seen such a VM growth rate before; the doubled order volume left us limited time to upgrade.
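To put the ~4% figure in concrete terms, a hypothetical sizing check for one of the 7.68 TB drives mentioned above (the reserve behavior is our reading of this incident, not official BlueStore documentation):

```python
# Assumption: BlueStore needs ~4% of the raw device for its log, so data
# on a 7.68 TB OSD must stop short of that reserve or the OSD cannot start.
osd_raw_tb = 7.68
log_reserve_tb = osd_raw_tb * 0.04
max_data_tb = osd_raw_tb - log_reserve_tb
print(f"log reserve: {log_reserve_tb:.4f} TB, max data: {max_data_tb:.4f} TB")
```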
=== Management Notes
- [ISP Name] will begin preparing upgrades at a location once its resource usage exceeds 60%.
- [ISP Name] will reject new orders if we cannot immediately keep resource usage below 80%.