Re: [ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-02-17 Thread Rainer Krienke
Hello,

thanks for your answer, but zapping the disk did not make any
difference. I still get the same error.  Looking at the debug output I
found this error message that is probably the root of all trouble:

# ceph-volume lvm prepare --bluestore --data /dev/sdg

stderr: 2019-02-18 08:29:25.544 7fdaa50ed240 -1
bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid

I found the bugreport below that seems to be exactly that problem I have:
http://tracker.ceph.com/issues/15386

However there seems to be no solution  up to now.

Does anyone have more information how to get around this problem?

Thanks
Rainer

Am 15.02.19 um 18:12 schrieb David Turner:
> I have found that running a zap before all prepare/create commands with
> ceph-volume helps things run smoother.  Zap is specifically there to
> clear everything on a disk away to make the disk ready to be used as an
> OSD.  Your wipefs command is still fine, but then I would lvm zap the
> disk before continuing.  I would run the commands like [1] this.  I also
> prefer the single command lvm create as opposed to lvm prepare and lvm
> activate.  Try that out and see if you still run into the problems
> creating the BlueStore filesystem.
> 
> [1] ceph-volume lvm zap /dev/sdg
> ceph-volume lvm prepare --bluestore --data /dev/sdg
> 
> On Thu, Feb 14, 2019 at 10:25 AM Rainer Krienke  > wrote:
> 
> Hi,
> 
> I am quite new to ceph and just try to set up a ceph cluster. Initially
> I used ceph-deploy for this but when I tried to create a BlueStore osd
> ceph-deploy fails. Next I tried the direct way on one of the OSD-nodes
> using ceph-volume to create the osd, but this also fails. Below you can
> see what  ceph-volume says.
> 
> I ensured that there was no left over lvm VG and LV on the disk sdg
> before I started the osd creation for this disk. The very same error
> happens also on other disks not just for /dev/sdg. All the disk have 4TB
> in size and the linux system is Ubuntu 18.04 and finally ceph is
> installed in version 13.2.4-1bionic from this repo:
> https://download.ceph.com/debian-mimic.
> 
> There is a VG and two LV's  on the system for the ubuntu system itself
> that is installed on two separate disks configured as software raid1 and
> lvm on top of the raid. But I cannot imagine that this might do any harm
> to cephs osd creation.
> 
> Does anyone have an idea what might be wrong?
> 
> Thanks for hints
> Rainer
> 
> root@ceph1:~# wipefs -fa /dev/sdg
> root@ceph1:~# ceph-volume lvm prepare --bluestore --data /dev/sdg
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> -i - osd new 14d041d6-0beb-4056-8df2-3920e2febce0
> Running command: /sbin/vgcreate --force --yes
> ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b /dev/sdg
>  stdout: Physical volume "/dev/sdg" successfully created.
>  stdout: Volume group "ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b"
> successfully created
> Running command: /sbin/lvcreate --yes -l 100%FREE -n
> osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b
>  stdout: Logical volume "osd-block-14d041d6-0beb-4056-8df2-3920e2febce0"
> created.
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> --> Absolute path not found for executable: restorecon
> --> Ensure $PATH environment variable contains common executable
> locations
> Running command: /bin/chown -h ceph:ceph
> 
> /dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> Running command: /bin/chown -R ceph:ceph /dev/dm-8
> Running command: /bin/ln -s
> 
> /dev/ceph-1433ffd0-0a80-481a-91f5-d7a47b78e17b/osd-block-14d041d6-0beb-4056-8df2-3920e2febce0
> /var/lib/ceph/osd/ceph-0/block
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
> mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
>  stderr: got monmap epoch 1
> Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring
> --create-keyring --name osd.0 --add-key
> AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ==
>  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
> added entity osd.0 auth auth(auid = 18446744073709551615
> key=AQAAY2VcU968HxAAvYWMaJZmriUc4H9bCCp8XQ== with 0 caps)
> Running command: /bin/chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-0/keyring
> Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
> Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
> bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap
> 

Re: [ceph-users] Understanding EC properties for CephFS / small files.

2019-02-17 Thread jesper
Hi Paul.

Thanks for you comments.

> For your examples:
>
> 16 MB file -> 4x 4 MB objects -> 4x 4x 1 MB data chunks, 4x 2x 1 MB
> coding chunks
>
> 512 kB file -> 1x 512 kB object -> 4x 128 kB data chunks, 2x 128 kb
> coding chunks
>
>
> You'll run into different problems once the erasure coded chunks end
> up being smaller than 64kb each due to bluestore min allocation sizes
> and general metadata overhead making erasure coding a bad fit for very
> small files.

Thanks for the clairification, which makes this a "very bad fit" for CephFS:

# find . -type f -print0 | xargs -0 stat | grep Size | perl -ane '/Size:
(\d+)/; print $1 . "\n";' | ministat -n
x 
N   Min   MaxMedian   AvgStddev
x 12651568 0 1.0840049e+11  9036 2217611.6 
32397960

Gives me 6,3M files < 9036 bytes in size, that'll be stored as 6 x 64KB at
the bluestore
level if I understand it correctly.

We come from a xfs world where default blocksize is 4K so above situation
worked quite nicely. Guess I probably would be way better off with a
RBD with xfs on top to solve this case using Ceph.

Is it fair to summarize your input as:

In a EC4+2 configuration, minimal used space is 256KB+128KB(coding)
regardless of file-size
In a EC8+3 configuraiton, minimal used space is 512KB+192KB(coding)
regardless of file-size

And for the access side:
All access to files in EC pool requires as a minimum IO-requests to
k-shards for the first
bytes to be returned, with fast_read it becomes k+n, but returns when k
has responded.

Any experience with inlining data on the MDS - that would obviously help
here I guess.

Thanks.

-- 
Jesper

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CephFS - read latency.

2019-02-17 Thread jesper
> Probably not related to CephFS. Try to compare the latency you are
> seeing to the op_r_latency reported by the OSDs.
>
> The fast_read option on the pool can also help a lot for this IO pattern.

Magic, that actually cut the read-latency in half - making it more
aligned with what to expect from the HW+network side:

N   Min   MaxMedian   AvgStddev
x 100  0.015687  0.221538  0.0252530.03259606   0.028827849

25ms as a median, 32ms average is still on the high side,
but way, way better.

Thanks.

-- 
Jesper

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] jewel10.2.11 EC pool out a osd, its PGs remap to the osds in the same host

2019-02-17 Thread lin zhou
Thanks so much.
my ceph osd df tree output is
here:https://gist.github.com/hnuzhoulin/e83140168eb403f4712273e3bb925a1c

just like the output, based on the reply of David:
when I out osd.132, its pg remap to just its host cld-osd12-56-sata.
it seems the out do not change Host's weight.
but if I out osd.10 which is in a replication pool, its pg remap to
its media site1-rack1-ssd, not its host cld-osd1-56-ssd. This seems
the out change Host's weight.
so the out command does diff action among firstn and indep, am I right?
if it is right, we need to reserve more available size for each disk
in indep pool.

when I try to execute ceph osd crush reweight osd.132 0.0, then its pg
remap to its media site1-rack1-ssd just like only do out in firstn,
so its Host's weight changed.
pg diffs https://gist.github.com/hnuzhoulin/aab164975b4e3d31bbecbc5c8b2f1fef
from this diff output,  I got the difference of different strategies
for selecting items OSDs in a CRUSH hierarchy.
But I can not get why out command do a different action for firstn and indep.

David Turner  于2019年2月16日周六 上午1:22写道:
>
> I'm leaving the response on the CRUSH rule for Gregory, but you have another 
> problem you're running into that is causing more of this data to stay on this 
> node than you intend.  While you `out` the OSD it is still contributing to 
> the Host's weight.  So the host is still set to receive that amount of data 
> and distribute it among the disks inside of it.  This is the default behavior 
> (even if you `destroy` the OSD) to minimize the data movement for losing the 
> disk and again for adding it back into the cluster after you replace the 
> device.  If you are really strapped for space, though, then you might 
> consider fully purging the OSD which will reduce the Host weight to what the 
> other OSDs are.  However if you do have a problem in your CRUSH rule, then 
> doing this won't change anything for you.
>
> On Thu, Feb 14, 2019 at 11:15 PM hnuzhoulin2  wrote:
>>
>> Thanks. I read the your reply in 
>> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg48717.html
>> so using indep will do fewer data remap when osd failed.
>> using firstn: 1, 2, 3, 4, 5 -> 1, 2, 4, 5, 6 , 60% data remap
>> using indep :1, 2, 3, 4, 5 -> 1, 2, 6, 4, 5, 25% data remap
>>
>> am I right?
>> if so, what recommend to do when a disk failed and the total available size 
>> of the rest disk in the machine is not enough(can not replace failed disk 
>> immediately). or I should reserve more available size in EC situation.
>>
>> On 02/14/2019 02:49,Gregory Farnum wrote:
>>
>> Your CRUSH rule for EC spools is forcing that behavior with the line
>>
>> step chooseleaf indep 1 type ctnr
>>
>> If you want different behavior, you’ll need a different crush rule.
>>
>> On Tue, Feb 12, 2019 at 5:18 PM hnuzhoulin2  wrote:
>>>
>>> Hi, cephers
>>>
>>>
>>> I am building a ceph EC cluster.when a disk is error,I out it.But its all 
>>> PGs remap to the osds in the same host,which I think they should remap to 
>>> other hosts in the same rack.
>>> test process is:
>>>
>>> ceph osd pool create .rgw.buckets.data 8192 8192 erasure ISA-4-2 
>>> site1_sata_erasure_ruleset 4
>>> ceph osd df tree|awk '{print $1" "$2" "$3" "$9" "$10}'> /tmp/1
>>> /etc/init.d/ceph stop osd.2
>>> ceph osd out 2
>>> ceph osd df tree|awk '{print $1" "$2" "$3" "$9" "$10}'> /tmp/2
>>> diff /tmp/1 /tmp/2 -y --suppress-common-lines
>>>
>>> 0 1.0 1.0 118 osd.0   | 0 1.0 1.0 126 osd.0
>>> 1 1.0 1.0 123 osd.1   | 1 1.0 1.0 139 osd.1
>>> 2 1.0 1.0 122 osd.2   | 2 1.0 0 0 osd.2
>>> 3 1.0 1.0 113 osd.3   | 3 1.0 1.0 131 osd.3
>>> 4 1.0 1.0 122 osd.4   | 4 1.0 1.0 136 osd.4
>>> 5 1.0 1.0 112 osd.5   | 5 1.0 1.0 127 osd.5
>>> 6 1.0 1.0 114 osd.6   | 6 1.0 1.0 128 osd.6
>>> 7 1.0 1.0 124 osd.7   | 7 1.0 1.0 136 osd.7
>>> 8 1.0 1.0 95 osd.8   | 8 1.0 1.0 113 osd.8
>>> 9 1.0 1.0 112 osd.9   | 9 1.0 1.0 119 osd.9
>>> TOTAL 3073T 197G | TOTAL 3065T 197G
>>> MIN/MAX VAR: 0.84/26.56 | MIN/MAX VAR: 0.84/26.52
>>>
>>>
>>> some config info: (detail configs see: 
>>> https://gist.github.com/hnuzhoulin/575883dbbcb04dff448eea3b9384c125)
>>> jewel 10.2.11  filestore+rocksdb
>>>
>>> ceph osd erasure-code-profile get ISA-4-2
>>> k=4
>>> m=2
>>> plugin=isa
>>> ruleset-failure-domain=ctnr
>>> ruleset-root=site1-sata
>>> technique=reed_sol_van
>>>
>>> part of ceph.conf is:
>>>
>>> [global]
>>> fsid = 1CAB340D-E551-474F-B21A-399AC0F10900
>>> auth cluster required = cephx
>>> auth service required = cephx
>>> auth client required = cephx
>>> pid file = /home/ceph/var/run/$name.pid
>>> log file = /home/ceph/log/$cluster-$name.log
>>> mon osd nearfull ratio = 0.85
>>> mon osd full ratio = 0.95
>>> admin socket = /home/ceph/var/run/$cluster-$name.asok
>>> osd pool default size = 3
>>> osd pool default min size 

Re: [ceph-users] CephFS - read latency.

2019-02-17 Thread Paul Emmerich
Probably not related to CephFS. Try to compare the latency you are
seeing to the op_r_latency reported by the OSDs.

The fast_read option on the pool can also help a lot for this IO pattern.


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Fri, Feb 15, 2019 at 10:05 PM  wrote:
>
> Hi.
>
> I've got a bunch of "small" files moved onto CephFS as archive/bulk storage
> and now I have the backup (to tape) to spool over them. A sample of the
> single-threaded backup client delivers this very consistent pattern:
>
> $ sudo strace -T -p 7307 2>&1 | grep -A 7 -B 3 open
> write(111, "\377\377\377\377", 4)   = 4 <0.11>
> openat(AT_FDCWD, "/ceph/cluster/rsyncbackups/fileshare.txt", O_RDONLY) =
> 38 <0.30>
> write(111, "\0\0\0\021197418 2 67201568", 21) = 21 <0.36>
> read(38,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\33\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
> 65536) = 65536 <0.049733>
> write(111,
> "\0\1\0\0CLC\0\0\0\0\2\0\0\0\0\0\0\0\33\0\0\0\0\0\0\0\0\0\0\0\0"...,
> 65540) = 65540 <0.37>
> read(38, " $$ $$\16\33\16 \16\33"..., 65536) = 65536
> <0.000199>
> write(111, "\0\1\0\0 $$ $$\16\33\16 $$"..., 65540) = 65540
> <0.26>
> read(38, "$ \33  \16\33\25 \33\33\33   \33\33\33
> \25\0\26\2\16NVDOLOVB"..., 65536) = 65536 <0.35>
> write(111, "\0\1\0\0$ \33  \16\33\25 \33\33\33   \33\33\33
> \25\0\26\2\16NVDO"..., 65540) = 65540 <0.24>
>
> The pattern is very consistent, thus it is not one PG or one OSD being
> contented.
> $ sudo strace -T -p 7307 2>&1 | grep -A 3 open  |grep read
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 11968 <0.070917>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 23232 <0.039789>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0P\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 65536 <0.053598>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 28240 <0.105046>
> read(41, "NZCA_FS_CLCGENOMICS, 1, 1\nNZCA_F"..., 65536) = 73 <0.061966>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 65536 <0.050943>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\30\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
> 65536) = 65536 <0.031217>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 7392 <0.052612>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 288 <0.075930>
> read(41, "1316919290-DASPHYNBAAPe2218b"..., 65536) = 940 <0.040609>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 22400 <0.038423>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 11984 <0.039051>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 9040 <0.054161>
> read(41, "NZCA_FS_CLCGENOMICS, 1, 1\nNZCA_F"..., 65536) = 73 <0.040654>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 22352 <0.031236>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0N\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 65536 <0.123424>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 49984 <0.052249>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 28176 <0.052742>
> read(41,
> "CLC\0\0\0\0\2\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65536)
> = 288 <0.092039>
>
> Or to sum:
> sudo strace -T -p 23748 2>&1 | grep -A 3 open  | grep read |  perl
> -ane'/<(\d+\.\d+)>/; print $1 . "\n";' | head -n 1000 | ministat
>
> N   Min   MaxMedian   AvgStddev
> x 1000   3.2e-05  2.141551  0.054313   0.065834359   0.091480339
>
>
> As can be seen the "initial" read averages at 65.8ms - which - if the
> filesize is say 1MB and
> the rest of the time is 0 - caps read performance mostly 20MB/s .. at that
> pace, the journey
> through double digit TB is long even with 72 OSD's backing.
>
> Spec: Ceph Luminous 12.2.5 - Bluestore
> 6 OSD nodes, 10TB HDDs, 4+2 EC pool, 10GbitE
>
> Locally the drives deliver latencies of approximately 6-8ms for a random
> read. Any suggestion
> on where to find out where the remaining 50ms is being spend would be
> truely helpful.
>
> Large files "just works" as read-ahead does a nice job in getting
> performance up.
>
> --
> Jesper
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Understanding EC properties for CephFS / small files.

2019-02-17 Thread Paul Emmerich
File layouts in CephFS and erasure coding are unrelated as they happen
on a completely different layer.

CephFS will split files into multiple 4 MB RADOS objects by default,
this is completely independent from how RADOS then store these 4 MB
(or smaller) objects.

For your examples:

16 MB file -> 4x 4 MB objects -> 4x 4x 1 MB data chunks, 4x 2x 1 MB
coding chunks

512 kB file -> 1x 512 kB object -> 4x 128 kB data chunks, 2x 128 kb
coding chunks


You'll run into different problems once the erasure coded chunks end
up being smaller than 64kb each due to bluestore min allocation sizes
and general metadata overhead making erasure coding a bad fit for very
small files.

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sun, Feb 17, 2019 at 8:11 AM  wrote:
>
> Hi List.
>
> I'm trying to understand the nuts and bolts of EC / CephFS
> We're running an EC4+2 pool on top of 72 x 7.2K rpm 10TB drives. Pretty
> slow bulk / archive storage.
>
> # getfattr -n ceph.dir.layout /mnt/home/cluster/mysqlbackup
> getfattr: Removing leading '/' from absolute path names
> # file: mnt/home/cluster/mysqlbackup
> ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304
> pool=cephfs_data_ec42"
>
> This configuration is taken directly out of the online documentation:
> (Which may have been where it went all wrong from our perspective):
>
> http://docs.ceph.com/docs/master/cephfs/file-layouts/
>
> Ok, this means that a 16MB file will be split at 4 chuncks of 4MB each
> with 2 erasure coding chuncks? I dont really understand the stripe_count
> element?
>
> And since erasure-coding works at the object level, striping individual
> objects across - here 4 replicas - it'll end up filling 16MB ? Or
> is there an internal optimization causing this not to be the case?
>
> Additionally, when reading the file, all 4 chunck need to be read to
> assemble the object. Causing (at a minumum) 4 IOPS per file.
>
> Now, my common file size is < 8MB and commonly 512KB files are on
> this pool.
>
> Will that cause a 512KB file to be padded to 4MB with 3 empty chuncks
> to fill the erasure coded profile and then 2 coding chuncks on top?
> In total 24MB for storing 512KB ?
>
> And when reading it I'll hit 4 random IO's to read 512KB or can
> it optimize around not reading "empty" chuncks?
>
> If this is true, then I would be both performance and space/cost-wise
> way better off with 3x replication.
>
> Or is it less worse than what I get to here?
>
> If the math is true, then we can begin to calculate chunksize and
> EC profiles for when EC begins to deliver benefits.
>
> In terms of IO it seems like I'll always suffer a 1:4 ratio on IOPS in
> a reading scenario on a 4+2 EC pool, compared to a 3x replication.
>
> Side-note: I'm trying to get bacula (tape-backup) to read off my archive
> to tape in a "resonable time/speed".
>
> Thanks in advance.
>
> --
> Jesper
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com