On Mon, Jul 18, 2016 at 12:20 PM, Henrik Korkuc wrote:
> This file was removed by Sage:
>
> commit 9f76b9ff31525eac01f04450d72559ec99927496
> Author: Sage Weil
> Date: Mon Apr 18 09:16:02 2016 -0400
>
> udev: remove 60-ceph-partuuid-workaround-rules
>
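Since the workaround file is gone, OSD activation depends entirely on the remaining rule file. A minimal sanity-check sketch (the rule path is the one George quotes later in the thread; `udevadm trigger` is the standard way to replay block-device events):

```shell
# Check for the ceph udev rule that remains after the commit above.
RULES=/lib/udev/rules.d/95-ceph-osd.rules
if [ -e "$RULES" ]; then
    echo "ceph udev rule present: $RULES"
else
    echo "ceph udev rule missing: $RULES"
fi
# To replay block-device "add" events, run as root:
#   udevadm trigger --action=add --subsystem-match=block
```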
>
On 16-07-15 10:40, Oliver Dzombic wrote:
Hi,
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5
First sector: 256 (at 1024.0 KiB)
Last sector: 976754640 (at 3.6 TiB)
Partition size: 976754385 sectors (3.6 TiB)
Attribute flags:
Partition name:
Hello George,
I did what you suggested, but it didn't help: there is still no autostart, and I have to
start the OSDs manually.
root@cephosd01:~# sgdisk -i 1 /dev/sdb
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 48B7EC4E-A582-4B84-B823-8C3A36D9BB0A
First sector:
As you can see, you have an 'unknown' partition type. It should be 'ceph
journal' or 'ceph data'.
Stop ceph-osd, unmount the partitions and set the partition typecodes
properly:
/sbin/sgdisk --typecode=PART:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/DISK
PART - number of the partition with data
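George's command is cut short in the archive; here is a fuller sketch, assuming a hypothetical layout where partition 1 holds the data and partition 2 the journal, and using the GPT typecodes ceph-disk assigns (the data typecode is the one quoted above; 45b0969e-9b03-4f30-b4c6-b4b80ceff106 is the standard 'ceph journal' typecode). The commands are echoed so the sketch is safe to paste; drop the `echo` and run as root to apply:

```shell
# Hypothetical layout -- adjust DISK and partition numbers to your system.
DISK=/dev/sdb
DATA_PART=1
JOURNAL_PART=2

# GPT typecodes used by ceph-disk for data and journal partitions.
DATA_TYPE=4fbd7e29-9d25-41b8-afd0-062c0ceff05d
JOURNAL_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106

# Echoed for safety; remove 'echo' and run as root to change the typecodes.
echo sgdisk --typecode=${DATA_PART}:${DATA_TYPE} -- "$DISK"
echo sgdisk --typecode=${JOURNAL_PART}:${JOURNAL_TYPE} -- "$DISK"
```

After changing the typecodes, a replayed udev "add" event (or a reboot) should activate the OSDs automatically.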
And this, after starting the OSD manually:
root@cephosd01:~# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/dm-0       15616412 1583180  13216900  11% /
udev               10240       0     10240   0% /dev
tmpfs 496564636 45020 10% /run
tmpfs
root@cephosd01:~# fdisk -l /dev/sdb
Disk /dev/sdb: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier:
Check the partition type of the ceph data partition:
fdisk -l /dev/sdc
Hmm, that helps partially: running
/usr/sbin/ceph-disk trigger /dev/sdc1 or /dev/sdb1 works and brings the OSD up.
systemctl enable does not help.
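If udev activation keeps failing, one workaround is to enable the per-OSD systemd instances directly, since the jewel packages ship a templated ceph-osd@.service. A sketch, assuming OSD ids 0 and 1 (the ids shown in the `ceph osd tree` output in this thread); commands are echoed for safety — drop the `echo` and run as root to apply:

```shell
# Enable one ceph-osd@.service instance per OSD id on this host.
# Echoed for safety; remove 'echo' and run as root to enable the units.
for id in 0 1; do
    echo systemctl enable "ceph-osd@${id}"
done
```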
On 11.07.2016 at 14:49, George Shuklin wrote:
Short story how OSDs are started in systemd environments:
Ceph OSD partitions have a specific typecode (partition type
4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D). It is handled by udev rules shipped
with the ceph package:
/lib/udev/rules.d/95-ceph-osd.rules
They set up the proper owner/group for the disk ('ceph'
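The chain George describes can be traced by hand; a sketch, using the example device from this thread (adjust disk and partition number to your system — these names are illustrative, not universal). Commands are echoed for safety; drop the `echo` and run as root:

```shell
# Example device from the thread -- adjust to your system.
DISK=/dev/sdb
PART=1

# Step 1: udev matches on the GPT typecode; inspect it with sgdisk.
echo "sgdisk -i ${PART} ${DISK}"
# (expect 'Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D')

# Step 2: what the rule ultimately does -- activate the partition.
echo "ceph-disk trigger ${DISK}${PART}"
```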
Hi,
What I do to reproduce the failure:
root@cephadmin:~# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.26340 root default
-2 0.08780 host cephosd01
0 0.04390 osd.0 up 1.0 1.0
1 0.04390 osd.1 up
Hi Dirk,
without any information, it's impossible to tell you anything.
Please give us some detailed information about what is going wrong,
including error messages and so on.
As an admin you should be familiar enough with your system to give us
more than just "it's not working". As
Hello,
I'm new to ceph and trying to take some first steps with it to understand
the concepts.
My setup is, at first, completely in VMs.
I deployed (with ceph-deploy) three monitors and three OSD hosts (3+3 VMs).
My first test was to find out if everything comes back online after a
system