[ceph-users] RE: Turning on rbd cache safely

2015-05-05 Thread Межов Игорь Александрович
I test performance from inside the VM using fio and a 64G test file located on the same volume as the VM's rootfs. fio 2.0.8 from the Debian Wheezy repos was run with the cmdline: #fio --filename=/test/file --direct=1 --sync=0 --rw=write --bs=4k --runtime=60 \ --ioengine=aio --iodepth=32 --time_based

[ceph-users] sparse RBD devices

2015-05-05 Thread Steffen W Sørensen
I've live migrated RBD images of our VMs (with ext4 FS) through our Proxmox PVE cluster from one pool to another, and now it seems those devices are no longer as sparse as before, i.e. pool usage has grown to almost the sum of the full image sizes; wondering if there's a way to re-trim RBD images to become
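For what it's worth, one common way to get the sparseness back (a hedged sketch, assuming discard/unmap is actually passed through from the guest to the RBD layer, e.g. a virtio-scsi disk with discard='unmap' in its libvirt definition) is to trim the filesystem from inside the guest:

    # inside the guest, release unused ext4 blocks back to RBD
    sudo fstrim -v /

If discard is not plumbed through, the image stays fully allocated on the new pool.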

[ceph-users] RE: Turning on rbd cache safely

2015-05-05 Thread Межов Игорь Александрович
Hi! Sorry, I've found the reason for these strange results - rbd cache was enabled in the local ceph.conf on the client node I used for testing. I removed it from the config and got more sane results. On all tests direct=1 iodepth=32 ioengine=aio fio=seqwr bs=4k sync=0 cache=wb - iops=31700, bw=126Mb/s, 75%
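For reference, the client-side cache knobs live in the [client] section of ceph.conf on the host that runs the VMs, not on the OSD nodes; a minimal illustrative snippet (values are examples, not recommendations):

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true
        rbd cache size = 33554432          # 32 MB, the default

Leaving these out and controlling caching purely through the QEMU/libvirt cache= setting avoids exactly this kind of surprise.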

[ceph-users] v9.0.0 released

2015-05-05 Thread Sage Weil
This is the first development release for the Infernalis cycle, and the first Ceph release to sport a version number from the new numbering scheme. The 9 indicates this is the 9th release cycle--I (for Infernalis) is the 9th letter. The first 0 indicates this is a development release (1 will

[ceph-users] RGW + erasure coding

2015-05-05 Thread Somnath Roy
Hi, I am planning to set up RGW on top of an erasure-coded pool. RGW stores all of its data in the .rgw.buckets pool, and I am planning to configure this pool as erasure-coded. I think configuring all other rgw pools as replicated should be fine as they don't store a lot of data. Please let me know if
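For anyone trying the same layout, a hedged sketch of the commands (profile name, k/m values and PG counts are placeholders; Hammer-era syntax):

    ceph osd erasure-code-profile set rgw-ec k=4 m=2 ruleset-failure-domain=host
    ceph osd pool create .rgw.buckets 256 256 erasure rgw-ec

The remaining RGW pools (.rgw, .rgw.control, .rgw.buckets.index, etc.) would stay replicated.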

Re: [ceph-users] replace dead SSD journal

2015-05-05 Thread Matthew Monaco
On 05/05/2015 08:55 AM, Andrija Panic wrote: Hi, small update: in 3 months - we lost 5 out of 6 Samsung 128Gb 850 PROs (just a few days in between each SSD death) - can't believe it - NOT due to wearing out... I really hope we got a defective series from the supplier... That's ridiculous. Are

Re: [ceph-users] RGW + erasure coding

2015-05-05 Thread Italo Santos
I use RGW with .rgw.buckets as an EC pool and it works fine as well; I'm able to reach ~300MB/s using a physical RGW server with 4 OSD nodes with SAS 10K drives w/out SSD journal. Also, I've tested creating all other pools as EC pools too, but the RGW daemon doesn't start, so I realised that the only

Re: [ceph-users] RGW + erasure coding

2015-05-05 Thread Somnath Roy
Thanks for the information Italo. I think RGW should support all the pools on top of an EC backend; not sure whether this is because of bucket-index sharding or not. You should probably raise a defect in the community. Regards Somnath From: Italo Santos [mailto:okd...@gmail.com] Sent: Tuesday, May 05,

Re: [ceph-users] The first infernalis dev release will be v9.0.0

2015-05-05 Thread Joao Eduardo Luis
On 05/04/2015 05:09 PM, Sage Weil wrote: The first Ceph release back in Jan of 2008 was 0.1. That made sense at the time. We haven't revised the versioning scheme since then, however, and are now at 0.94.1 (first Hammer point release). To avoid reaching 0.99 (and 0.100 or 1.00?) we have

Re: [ceph-users] Kicking 'Remapped' PGs

2015-05-05 Thread Paul Evans
Gregory Farnum g...@gregs42.com wrote: Oh. That's strange; they are all mapped to two OSDs but are placed on two different ones. I'm...not sure why that would happen. Are these PGs active? What's the full output of ceph -s? Those 4 PGs went inactive at some point, and we
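For anyone following along, the usual commands for digging into PGs like these (the PG id is a placeholder):

    ceph -s
    ceph pg dump_stuck inactive
    ceph pg 3.1f query          # 3.1f stands in for one of the stuck PGs

The query output shows which OSDs the PG is mapped (up) to versus where it is actually acting.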

[ceph-users] RE: RE: Turning on rbd cache safely

2015-05-05 Thread Межов Игорь Александрович
Hi! Which ceph.conf do you talk about? The one on the host server (on which the VM is running)? Yes, that ceph.conf on the client host, which is not part of a ceph cluster (no OSD, no MON) and is used solely to run VMs with an RBD backend. Interesting, can you explain this please? I think that libvirt
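For illustration, a typical libvirt RBD disk stanza with writeback caching (pool, image and monitor names are placeholders):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

QEMU passes cache='writeback' down to librbd as rbd_cache=true, independent of what the host's ceph.conf says.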

Re: [ceph-users] replace dead SSD journal

2015-05-05 Thread Andrija Panic
Hi, small update: in 3 months - we lost 5 out of 6 Samsung 128Gb 850 PROs (just a few days in between each SSD death) - can't believe it - NOT due to wearing out... I really hope we got a defective series from the supplier... Regards On 18 April 2015 at 14:24, Andrija Panic andrija.pa...@gmail.com

[ceph-users] installing ceph giant on ubuntu 15.04

2015-05-05 Thread Alphe Salas
Hello everyone, I recently had to install ceph giant on ubuntu 15.04 and had to solve some problems, so here is the best way to do it. 1) replace systemd with upstart on your fresh ubuntu 15.04 install: apt-get update; apt-get install upstart; apt-get install upstart-sysv (remove systemd and
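Roughly the sequence being described, for anyone repeating it (on a fresh install, at your own risk):

    apt-get update
    apt-get install upstart
    apt-get install upstart-sysv    # replaces systemd-sysv as the init system
    reboot

After the reboot the box boots with upstart, which is what the giant init scripts expect.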

Re: [ceph-users] xfs corruption, data disaster!

2015-05-05 Thread Nick Fisk
Just another quick question: do you know if your RAID controller is disabling the local disk write caches? I'm wondering how this corruption occurred, and if this is a problem that is specific to your hardware/software config or a general Ceph issue that makes it vulnerable to sudden power
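A quick way to check the on-disk write cache when the drives are directly visible to the OS (device name is a placeholder; drives hidden behind a RAID controller need the controller's own CLI instead):

    hdparm -W /dev/sdX      # query: 1 = volatile write cache enabled
    hdparm -W 0 /dev/sdX    # disable it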

Re: [ceph-users] The first infernalis dev release will be v9.0.0

2015-05-05 Thread Sage Weil
On Tue, 5 May 2015, Tony Harris wrote: So with this, will even numbers then be LTS?  Since 9.0.0 is following 0.94.x/Hammer, and every other release is normally LTS, I'm guessing 10.x.x, 12.x.x, etc. will be LTS... It looks that way now, although I can't promise the pattern will hold!

Re: [ceph-users] Shadow Files

2015-05-05 Thread Anthony Alba
Unfortunately it immediately aborted (running against a 0.80.9 Ceph). Does Ceph also have to be at the 0.94 level? The last error was -3 2015-05-06 01:11:11.710947 7f311dd15880 0 run(): building index of all objects in pool -2 2015-05-06 01:11:11.710995 7f311dd15880 1 -- 10.200.3.92:0/1001510 --

Re: [ceph-users] The first infernalis dev release will be v9.0.0

2015-05-05 Thread Sage Weil
On Tue, 5 May 2015, Joao Eduardo Luis wrote: On 05/04/2015 05:09 PM, Sage Weil wrote: The first Ceph release back in Jan of 2008 was 0.1. That made sense at the time. We haven't revised the versioning scheme since then, however, and are now at 0.94.1 (first Hammer point release). To

Re: [ceph-users] The first infernalis dev release will be v9.0.0

2015-05-05 Thread Tony Harris
So with this, will even numbers then be LTS? Since 9.0.0 is following 0.94.x/Hammer, and every other release is normally LTS, I'm guessing 10.x.x, 12.x.x, etc. will be LTS... On Tue, May 5, 2015 at 11:45 AM, Sage Weil sw...@redhat.com wrote: On Tue, 5 May 2015, Joao Eduardo Luis wrote: On

[ceph-users] Failing to respond to cache pressure?

2015-05-05 Thread Lincoln Bryant
Hello all, I'm seeing some warnings regarding trimming and cache pressure. We're running 0.94.1 on our cluster, with erasure coding + cache tiering backing our CephFS. health HEALTH_WARN mds0: Behind on trimming (250/30) mds0: Client 74135 failing to respond to
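For the trimming warning specifically, the relevant knob can be inspected and bumped through the MDS admin socket (the mds name and the value 60 are illustrative; raising it hides the symptom rather than fixing slow trimming):

    ceph daemon mds.<name> config get mds_log_max_segments
    ceph daemon mds.<name> config set mds_log_max_segments 60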

Re: [ceph-users] Civet RadosGW S3 not storing complete objects; civetweb logs stop after rotation

2015-05-05 Thread Sean
Hello Yehuda and the rest of the mailing list. My main question currently is why are the bucket index and the object manifest ever different? Based on how we are uploading data I do not think that the rados gateway should ever know the full file size without having all of the objects

Re: [ceph-users] Rename or Remove Pool

2015-05-05 Thread Robert LeBlanc
Can you try ceph osd pool rename new-name On Tue, May 5, 2015 at 12:43 PM, Georgios Dimitrakakis gior...@acmac.uoc.gr wrote: Hi all! Somehow I have a pool without a name... $ ceph osd lspools 3 data,4 metadata,5 rbd,6 .rgw,7 .rgw.control,8 .rgw.gc,9 .log,10 .intent-log,11 .usage,12
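For reference, the rename subcommand takes both the current and the new name:

    ceph osd pool rename <current-name> <new-name>

With an empty or missing source name the monitor rejects it, which matches the 'missing required parameter srcpool' error reported further down the thread.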

Re: [ceph-users] Motherboard recommendation?

2015-05-05 Thread Mohamed Pakkeer
Hi Mark, thanks for your reply and your CPU test report. It really helps us identify appropriate hardware for an EC-based Ceph cluster. Currently we are using the Intel Xeon 2630 V3 (16 cores * 2.4 GHz = 38.4 GHz) processor. I think you tested with the Intel Xeon 2630L V2 (12 * 2.4 GHz = 28.8 GHz)

Re: [ceph-users] xfs corruption, data disaster!

2015-05-05 Thread Nick Fisk
This is probably similar to what you want to try and do, but also mark those failed OSDs as lost, as I don't think you will have much luck getting them back up and running. http://ceph.com/community/incomplete-pgs-oh-my/#more-6845 The only other option would be if anyone knows a way to rebuild
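Marking an OSD as lost is done per OSD id (destructive: only do this once you are sure the data on it cannot be recovered):

    ceph osd lost <osd-id> --yes-i-really-mean-it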

Re: [ceph-users] Btrfs defragmentation

2015-05-05 Thread Lionel Bouton
On 05/05/15 06:30, Timofey Titovets wrote: Hi list, Excuse me, what I'm saying is off topic @Lionel, if you use btrfs, did you already try to use btrfs compression for the OSD? If yes, can you share your experience? Btrfs compresses by default using zlib. We force lzo compression instead
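For context, forcing lzo on a btrfs-backed OSD is just a mount option; newly written data is lzo-compressed from then on (paths and UUID are placeholders):

    mount -o remount,compress=lzo /var/lib/ceph/osd/ceph-0
    # or persistently in /etc/fstab:
    UUID=<osd-fs-uuid>  /var/lib/ceph/osd/ceph-0  btrfs  noatime,compress=lzo  0 0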

[ceph-users] Can not sign up the ceph wiki system

2015-05-05 Thread 黄文俊
Hi, I want to sign up for an account on the ceph wiki system, but I cannot find the entry; I can only find the sign-in entry on the page. Can someone tell me why - has the system rejected registrations recently? Thanks Wenjun Huang

Re: [ceph-users] capacity planing with SSD Cache Pool Tiering

2015-05-05 Thread Marc
Hi, The cache doesn't give you any additional storage capacity, as the cache can never store data that's not on the tier below it (or store more writes than the underlying storage has room for). As for how much you should go for... that's very much up to your use case. Try to come up with an
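Sizing aside, the cache tier is usually bounded explicitly so the flush/evict logic has something to work against; an illustrative sketch (pool name and numbers are placeholders):

    ceph osd pool set cache-pool target_max_bytes 1099511627776   # 1 TiB
    ceph osd pool set cache-pool cache_target_dirty_ratio 0.4
    ceph osd pool set cache-pool cache_target_full_ratio 0.8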

[ceph-users] capacity planing with SSD Cache Pool Tiering

2015-05-05 Thread Götz Reinicke - IT Koordinator
Hi folks, one more question: after some more internal discussions, I'm faced with the question of how an SSD cache pool tier is counted in the overall usable storage space, and how big I should make an SSD cache pool. From my understanding, the cache pool is not calculated into the overall

Re: [ceph-users] Rename or Remove Pool

2015-05-05 Thread Georgios Dimitrakakis
Robert, I did try that without success. The error was: Invalid command: missing required parameter srcpool(poolname) Upon debian112's recommendation on the IRC channel and looking at this post: http://cephnotes.ksperis.com/blog/2014/10/29/remove-pool-without-name I've used the command:

Re: [ceph-users] The first infernalis dev release will be v9.0.0

2015-05-05 Thread Steffen W Sørensen
On 05/05/2015, at 18.52, Sage Weil sw...@redhat.com wrote: On Tue, 5 May 2015, Tony Harris wrote: So with this, will even numbers then be LTS? Since 9.0.0 is following 0.94.x/Hammer, and every other release is normally LTS, I'm guessing 10.x.x, 12.x.x, etc. will be LTS... It looks that

Re: [ceph-users] Shadow Files

2015-05-05 Thread Yehuda Sadeh-Weinraub
Yes, so it seems. The librados::nobjects_begin() call expects at least a Hammer (0.94) backend. We probably need to add a try/catch there to catch this issue, and maybe see if using a different api would be more compatible with older backends. Yehuda - Original Message - From: Anthony

[ceph-users] Rename or Remove Pool

2015-05-05 Thread Georgios Dimitrakakis
Hi all! Somehow I have a pool without a name... $ ceph osd lspools 3 data,4 metadata,5 rbd,6 .rgw,7 .rgw.control,8 .rgw.gc,9 .log,10 .intent-log,11 .usage,12 .users,13 .users.email,14 .users.swift,15 .users.uid,16 .rgw.root,17 .rgw.buckets.index,18 .rgw.buckets,19 .rgw.buckets.extra,20

Re: [ceph-users] Shadow Files

2015-05-05 Thread Anthony Alba
...sorry, clicked send too quickly /opt/ceph/bin/radosgw-admin orphans find --pool=.rgw.buckets --job-id=abcd ERROR: failed to open log pool ret=-2 job not found On Tue, May 5, 2015 at 6:36 PM, Anthony Alba ascanio.al...@gmail.com wrote: Hi Yehuda, First run: /opt/ceph/bin/radosgw-admin

[ceph-users] I can not visit ceph.com

2015-05-05 Thread zhengbin.08...@h3c.com
When I visit ceph.com, it returns an error like this (screenshot attached). Is this a problem on my end? How can I resolve it? Thanks

Re: [ceph-users] Shadow Files

2015-05-05 Thread Anthony Alba
Hi Yehuda, First run: /opt/ceph/bin/radosgw-admin --pool=.rgw.buckets --job-id=testing ERROR: failed to open log pool ret=-2 job not found Do I have to precreate some pool? On Tue, May 5, 2015 at 8:17 AM, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote: I've been working on a new tool that

[ceph-users] Turning on rbd cache safely

2015-05-05 Thread Межов Игорь Александрович
Hi! After examining our running OSD configuration through an admin socket we suddenly noticed that the rbd_cache parameter is set to false. Until that moment, I supposed that rbd cache is an entirely client-side feature and that it is enabled with the cache=writeback parameter in the libvirt VM xml definition.
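For anyone checking the same thing: the value reported by an OSD's admin socket only reflects that daemon's own (unused) default; the setting that matters is the one seen by the librbd client. Both can be queried the same way (the socket path is an example):

    ceph daemon osd.0 config get rbd_cache
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config get rbd_cache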

Re: [ceph-users] Turning on rbd cache safely

2015-05-05 Thread Alexandre DERUMIER
Hi, rbd_cache is client config only, so no need to restart the OSDs. If you set cache=writeback in libvirt, it'll enable it, so you don't need to set rbd_cache=true in ceph.conf (it should override it). You can verify it is enabled by doing a sequential write benchmark with 4k blocks. You should have a
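A minimal fio invocation for that check, run inside the guest (file path and runtime are illustrative):

    fio --name=seqwrite --filename=/test/file --rw=write --bs=4k --direct=1 \
        --ioengine=libaio --iodepth=32 --runtime=60 --time_based

With writeback caching active, the 4k sequential write IOPS should be noticeably higher than with rbd cache disabled.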