Update 2024-08-10-recovering-ceph-cluster.md

This commit is contained in:
0x3bb 2024-08-10 15:11:39 +00:00
parent 6b8bb568e4
commit 58b16e9500


@@ -122,7 +122,7 @@ pgs without the offending RADOS objects.
Sure enough, the 2 new OSDs started.
-Since the `osd-0` with the actual data still wouldn't start, the cluster was
+Since `osd-0` with the actual data still wouldn't start, the cluster was
still in a broken state.
Now down to the last OSD, at this point I knew that I was going to make many,
@@ -300,7 +300,7 @@ Next, I needed to inspect the OSD somehow, because the existing deployment would
Running this command allowed me to observe the OSD without it actually joining
the cluster. The "real" OSD deployment need only be scheduled, but crashing
-continously was ok.
+continuously was ok.
Once you execute that command, it will scale the OSD daemon down and create a
new deployment that mirrors the configuration but _without_ the daemon running
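
The command itself isn't visible in this hunk. A minimal sketch of one way to get such a debug deployment, assuming the kubectl-rook-ceph plugin is installed and the OSD in question runs as the `rook-ceph-osd-0` deployment (both assumptions, not taken from the post):

```
# Scale the rook-ceph-osd-0 deployment down and start a copy of it with the
# ceph-osd process not running, so the OSD's data can be inspected while the
# "real" deployment stays scheduled.
kubectl rook-ceph debug start rook-ceph-osd-0

# When done inspecting, remove the debug copy and restore the original deployment.
kubectl rook-ceph debug stop rook-ceph-osd-0
```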
@@ -532,7 +532,7 @@ osd.0 : 3099 osdmaps trimmed, 635 osdmaps added.
After copying the now _rebuilt_ `mon-a` store back, and bringing everything up
again, the cluster was finally resurrecting.
-It took some time for the rebalancing and replication to occur, but hours
+It took some time for the rebalancing and replication to finish, but hours
later, `ceph -s` reported a healthy cluster and services resumed being entirely
unaware of the chaos that had ensued over the previous few days:
@@ -672,7 +672,7 @@ bad mappings.
// healthy
```
-With `k+m=5`, though -- or anything great than `3` OSDs...
+With `k+m=5`, though -- or anything greater than `3` OSDs...
```
[root@ad9e4c6e7343 rook]# crushtool -i crush --test --num-rep 5 --show-bad-mappings