Even when the OSD is down I can still access its contents, so it looks like I
need to check out ceph-objectstore-tool.
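For reference, a rough sketch of the per-PG export/import workflow with
ceph-objectstore-tool. The data path comes from the listing below; the pgid
(2.1f), the backup file location, and the destination OSD are made-up
placeholders, and both the source and destination OSD daemons must be stopped
before running these:

```shell
# Paths taken from the osd.10 listing; pgid/backup file are hypothetical.
OSD_PATH=/var/lib/ceph/osd/ceph-10
JOURNAL="$OSD_PATH/journal"

# 1. List the PGs held on the down OSD:
# ceph-objectstore-tool --data-path "$OSD_PATH" --journal-path "$JOURNAL" --op list-pgs

# 2. Export a single PG to a file:
# ceph-objectstore-tool --data-path "$OSD_PATH" --journal-path "$JOURNAL" \
#     --pgid 2.1f --op export --file /backup/pg.2.1f.export

# 3. Import it into a healthy, stopped OSD (osd.11 here as an example):
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-11 \
#     --journal-path /var/lib/ceph/osd/ceph-11/journal \
#     --op import --file /backup/pg.2.1f.export
echo "$JOURNAL"
```

With replication 1 each PG exists on exactly one OSD, so every PG on osd.10
would need to be exported this way before the disk dies completely.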

# id    weight  type name       up/down reweight
-1      98.44   root default
-2      32.82           host ceph-node-1
0       3.64                    osd.0   up      1
1       3.64                    osd.1   up      1
2       3.64                    osd.2   up      1
3       3.64                    osd.3   up      1
4       3.64                    osd.4   up      1
5       3.64                    osd.5   up      1
6       3.7                     osd.6   up      1
7       3.64                    osd.7   up      1
8       3.64                    osd.8   up      1
-3      32.74           host ceph-node-2
10      3.64                    osd.10  down    1
11      3.64                    osd.11  up      1
12      3.64                    osd.12  up      1
13      3.64                    osd.13  up      1
15      3.64                    osd.15  up      1
16      3.64                    osd.16  up      1
17      3.64                    osd.17  up      1
27      3.63                    osd.27  up      1
14      3.63                    osd.14  DNE
-4      32.88           host ceph-node-3
18      3.7                     osd.18  up      1
19      3.7                     osd.19  up      1
20      3.64                    osd.20  up      1
21      3.64                    osd.21  up      1
22      3.64                    osd.22  up      1
23      3.64                    osd.23  up      1
24      3.64                    osd.24  up      1
25      3.64                    osd.25  up      1
26      3.64                    osd.26  up      1

[root@ceph-node-2 ceph-10]# pwd
/var/lib/ceph/osd/ceph-10
[root@ceph-node-2 ceph-10]#
[root@ceph-node-2 ceph-10]# ll
total 52
-rw-r--r--.  1 root root  499 Aug 12  2014 activate.monmap
-rw-r--r--.  1 root root    3 Aug 12  2014 active
-rw-r--r--.  1 root root   37 Aug 12  2014 ceph_fsid
drwxr-xr-x. 91 root root 8192 Oct 19 11:56 current
-rw-r--r--.  1 root root   37 Aug 12  2014 fsid
lrwxrwxrwx.  1 root root    9 Feb 26  2017 journal -> /dev/sdj2
-rw-------.  1 root root   57 Aug 12  2014 keyring
-rw-r--r--.  1 root root   21 Aug 12  2014 magic
-rw-r--r--.  1 root root    6 Aug 12  2014 ready
-rw-r--r--.  1 root root    4 Aug 12  2014 store_version
-rw-r--r--.  1 root root   42 Aug 12  2014 superblock
-rw-r--r--.  1 root root    0 Oct 17 10:23 sysvinit
-rw-r--r--.  1 root root    3 Aug 12  2014 whoami
[root@ceph-node-2 ceph-10]#
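Since the filestore is still mountable, the block-level route Ashley mentions
below is also an option. A minimal sketch, assuming the failing disk is
/dev/sdX and the replacement is /dev/sdY (both placeholders; verify with
lsblk first):

```shell
# Device names are placeholders; double-check with lsblk before running.
SRC=/dev/sdX
DST=/dev/sdY

# conv=noerror,sync keeps dd going past read errors, zero-padding bad blocks.
# For a disk that is actively failing, GNU ddrescue is usually the safer
# tool, since it retries bad sectors and tracks progress in a map file.
# dd if="$SRC" of="$DST" bs=4M conv=noerror,sync
# ddrescue -f "$SRC" "$DST" /root/ddrescue.map
echo "$SRC $DST"
```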

- Vlad


On Thu, Oct 17, 2019 at 11:56 AM Ashley Merrick <[email protected]>
wrote:

> I think you're better off doing the dd method; you can export and import a
> PG at a time (ceph-objectstore-tool).
>
> But if the disk is failing, dd is probably your best method.
>
>
>
> ---- On Thu, 17 Oct 2019 11:44:20 +0800 *vladimir franciz blando
> <[email protected] <[email protected]>>* wrote ----
>
> Sorry for not being clear: when I say healthy disks, I mean disks that are
> already OSDs, so I need to transfer the data from the failed OSD to the
> other OSDs that are healthy.
>
> - Vlad
>
>
> On Thu, Oct 17, 2019 at 11:31 AM Konstantin Shalygin <[email protected]>
> wrote:
>
>
> On 10/17/19 10:29 AM, vladimir franciz blando wrote:
> > I have a less than ideal setup on one of my clusters: 3 Ceph nodes but
> > using replication 1 on all pools (don't ask me why replication 1, it's
> > a long story).
> >
> > So it has come to this: a disk keeps on crashing, possibly a hardware
> > failure, and I need to recover from that.
> >
> > What's my best option to recover the data from the failed disk and
> > transfer it to the other healthy disks?
> >
> > This cluster is using Firefly
>
>
> `dd if=/dev/old_drive of=/dev/new_drive` I guess.
>
>
>
> k
>
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
>
>
>
>