Re: [ceph-users] Ceph, SSD, and NVMe

2015-09-30 Thread James (Fei) Liu-SSI
Hi David, Generally speaking, it is going to be super difficult to maximize the bandwidth of NVMe with the current latest Ceph release. In my humble opinion, I don't think Ceph is aiming at high-performance storage. Here is a link, for your reference, to some good work done by Samsung and SanDisk

Re: [ceph-users] RPM repo connection reset by peer when updating

2015-09-30 Thread Alkaid
I tried ceph.com/rpm-hammer, download.ceph.com/rpm-hammer and eu.ceph.com/rpm-hammer. On Oct 1, 2015 01:09, "Alkaid" wrote: > I tried to update packages today, but I got a "connection reset by peer" > error every time. > It seems that the server will block my IP if I request a
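A minimal sketch of pointing yum at the EU mirror instead of the rate-limited host, assuming the stock /etc/yum.repos.d/ceph.repo layout (the file name and paths are assumptions, adjust for your distro):

  sudo sed -i 's#http://download.ceph.com/rpm-hammer#http://eu.ceph.com/rpm-hammer#g' /etc/yum.repos.d/ceph.repo
  sudo yum clean metadata && sudo yum update ceph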

Re: [ceph-users] Ceph, SSD, and NVMe

2015-09-30 Thread Somnath Roy
David, you should move to Hammer to get all the performance benefits. They were all added in Giant and carried over into the present Hammer LTS release. FYI, the focus so far has been on read performance improvement, and what we have seen in our environment with 6Gb SAS SSDs so far is that we are able to saturate the drives
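For anyone wanting to quantify the change after upgrading, a rough rados bench sketch (the pool name "bench" and PG count are placeholders; use a throwaway pool since this writes test objects):

  ceph osd pool create bench 128 128
  rados bench -p bench 60 write --no-cleanup   # lay down objects to read back
  rados bench -p bench 60 seq                  # sequential read throughput
  rados -p bench cleanup
  ceph osd pool delete bench bench --yes-i-really-really-mean-it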

Re: [ceph-users] radosgw and keystone version 3 domains

2015-09-30 Thread Yehuda Sadeh-Weinraub
At the moment radosgw just doesn't support v3 (so it seems). I created issue #13303. If anyone wants to pick this up (or provide some information as to what it would require to support that) it would be great. Thanks, Yehuda On Wed, Sep 30, 2015 at 3:32 AM, Robert Duncan
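Until v3 support lands, radosgw can only be pointed at the v2.0 Keystone API. A hedged ceph.conf sketch of the Hammer-era options (the section name, URL and token are placeholders):

  [client.radosgw.gateway]
  rgw keystone url = http://keystone.example.com:35357
  rgw keystone admin token = ADMIN_TOKEN
  rgw keystone accepted roles = Member, admin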

Re: [ceph-users] Ceph, SSD, and NVMe

2015-09-30 Thread Mark Nelson
On 09/30/2015 09:34 AM, J David wrote: Because we have a good thing going, our Ceph clusters are still running Firefly on all of our clusters including our largest, all-SSD cluster. If I understand right, newer versions of Ceph make much better use of SSDs and give overall much higher

Re: [ceph-users] Changing monitors whilst running OpenNebula VMs

2015-09-30 Thread Jimmy Goffaux
Hello, For the largest VMs it is the declaration in the XML file for the VM, for example: function='0x0'/> Try adding a " via "virsh", and via the command "netstat -laputen" verify that the
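A hedged sketch of the checks described above, assuming a libvirt/KVM host and a placeholder domain name "one-42":

  # monitors baked into the VM's disk definition (<host name=... port='6789'/> elements)
  virsh dumpxml one-42 | grep -A5 "protocol='rbd'"
  # monitor TCP sessions the qemu process currently holds open
  netstat -laputen | grep 6789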

Re: [ceph-users] Issue with journal on another drive

2015-09-30 Thread Jan Schermer
I have some experience with Kingstons - which model do you plan to use? Shorter version: don't use Kingstons. For anything. Ever. Jan > On 30 Sep 2015, at 11:24, Andrija Panic wrote: > > Make sure to check this blog page >

[ceph-users] RE: Changing monitors whilst running OpenNebula VMs

2015-09-30 Thread Межов Игорь Александрович
Hi! Yes, we did exactly the same and have had practically no problems, except some minor issues with re-creating VMs. First of all, OpenNebula uses the Ceph monitors specified in a template only when creating a VM or migrating it. These template values are passed as qemu parameters when bootstrapping the VM.
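To illustrate (a hedged sketch, addresses and image names are placeholders, exact escaping varies): the monitor list ends up inside the qemu -drive argument at boot time, roughly

  -drive file=rbd:one/one-42-0:id=libvirt:mon_host=10.0.0.1\:6789;10.0.0.2\:6789,format=raw,if=none,cache=none

so a running VM keeps talking to the monitors it was started with until it is migrated or re-created.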

Re: [ceph-users] Issue with journal on another drive

2015-09-30 Thread Jiri Kanicky
Thanks to all for the responses. Great thread with a lot of info. I will go with the 3 partitions on the Kingston SSD for the 3 OSDs on each node. Thanks Jiri On 30/09/2015 00:38, Lionel Bouton wrote: Hi, On 29/09/2015 13:32, Jiri Kanicky wrote: Hi Lionel. Thank you for your reply. In this case I
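For reference, a minimal partitioning sketch for the journal SSD (the device /dev/sdf and the 5 GB journal size are placeholders; this wipes the device):

  parted -s /dev/sdf mklabel gpt
  parted -s /dev/sdf mkpart journal-osd0 1MiB 5GiB
  parted -s /dev/sdf mkpart journal-osd1 5GiB 10GiB
  parted -s /dev/sdf mkpart journal-osd2 10GiB 15GiB
  # then e.g. ceph-deploy osd prepare node1:/dev/sdb:/dev/sdf1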

[ceph-users] Changing monitors whilst running OpenNebula VMs

2015-09-30 Thread george.ryall
Hi, I have also posted on the OpenNebula community forum (https://forum.opennebula.org/t/changing-ceph-monitors-for-running-vms/1266). Does anyone have any experience of changing the monitors in their Ceph cluster whilst running OpenNebula VMs? We have recently bought new hardware to replace
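A hedged outline of the usual rolling swap (hostnames are placeholders; keep an odd number of monitors and check quorum between steps):

  ceph-deploy mon create new-mon1
  ceph quorum_status --format json-pretty   # confirm the new monitor joined
  ceph mon remove old-mon1
  # repeat per monitor, then update mon_host / mon_initial_members in ceph.conf on all clients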

Re: [ceph-users] Issue with journal on another drive

2015-09-30 Thread Andrija Panic
Make sure to check this blog page http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ since I'm not sure if you are just playing around with Ceph, or plan it for production and good performance. My experience with SSDs as journals: SSD Samsung 850 PRO =
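The test from that post boils down to a single O_DSYNC write run, roughly (a sketch only; it is destructive to the target device, and /dev/sdf is a placeholder):

  dd if=/dev/zero of=/dev/sdf bs=4k count=100000 oflag=direct,dsync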

Re: [ceph-users] high density machines

2015-09-30 Thread J David
On Wed, Sep 30, 2015 at 8:19 AM, Mark Nelson wrote: > FWIW, I've mentioned to Supermicro that I would *really* love a version of the > 5018A-AR12L that replaced the Atom with an embedded Xeon-D 1540. :) Is even that enough? (It's a serious question; due to our insatiable

Re: [ceph-users] high density machines

2015-09-30 Thread Wido den Hollander
On 30-09-15 14:19, Mark Nelson wrote: > On 09/29/2015 04:56 PM, J David wrote: >> On Thu, Sep 3, 2015 at 3:49 PM, Gurvinder Singh >> wrote: The density would be higher than the 36 drive units but lower than the 72 drive units (though with shorter rack

[ceph-users] Erasure Coding pool stuck at creation because of pre-existing crush ruleset ?

2015-09-30 Thread SCHAER Frederic
Hi, With 5 hosts, I could successfully create pools with k=4 and m=1, with the failure domain set to "host". With 6 hosts, I could also create k=4, m=1 EC pools. But I then failed with 6 hosts and k=5, m=1, or k=4, m=2: the PGs were never created - I reused the pool name for my tests,
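As a hedged sketch, creating the profile and the pool together lets Ceph generate a matching crush ruleset instead of reusing one left over from an earlier k/m combination (names and PG counts are placeholders):

  ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=host
  ceph osd pool create ecpool42 256 256 erasure ec42
  ceph osd crush rule ls   # look for stale rulesets from deleted pools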

Re: [ceph-users] CephFS Attributes Question Marks

2015-09-30 Thread Yan, Zheng
On Tue, Sep 29, 2015 at 9:51 PM, Scottix wrote: > I'm positive the client I sent you the log is 94. We do have one client > still on 87. > which version of kernel are you using? I found a kernel bug which can cause this issue in 4.1 and later kernels. Regards Yan, Zheng >

Re: [ceph-users] CephFS Attributes Question Marks

2015-09-30 Thread Scottix
OpenSuse 12.1 3.1.10-1.29-desktop On Wed, Sep 30, 2015, 5:34 AM Yan, Zheng wrote: > On Tue, Sep 29, 2015 at 9:51 PM, Scottix wrote: > >> I'm positive the client I sent you the log is 94. We do have one client >> still on 87. >> > which version of kernel

[ceph-users] Zenoss Integration

2015-09-30 Thread Georgios Dimitrakakis
All, I was wondering if anyone has integrated his CEPH installation with Zenoss monitoring software and is willing to share his knowledge. Best regards, George
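In the absence of a ready-made ZenPack, most monitoring integrations simply poll the JSON output of the usual status commands and parse it (a sketch; wiring the values into Zenoss is left to the reader):

  ceph status --format json
  ceph df --format json
  ceph osd pool stats --format json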

[ceph-users] Annoying libust warning on ceph reload

2015-09-30 Thread Goncalo Borges
Dear Ceph Gurus - This is just a report about an annoying warning we keep getting every time our logs are rotated. libust[8241/8241]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305) - I am running ceph
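One hedged workaround, assuming the warning comes from logrotate running the Ceph postrotate script without HOME in its environment: export HOME inside the postrotate block of /etc/logrotate.d/ceph, e.g.

  sudo sed -i '/postrotate/a export HOME=/root' /etc/logrotate.d/ceph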

Re: [ceph-users] cant get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-30 Thread Jogi Hofmüller
Hi, On 2015-09-17 at 19:02, Stefan Eriksson wrote: > I purged all nodes and did purgedata as well and restarted; after this > everything was fine. You are most certainly right; if anyone else has > this error, reinitializing the cluster might be the fastest way forward. Great that it worked for
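For the record, the "start over" path referred to here is roughly (it destroys all Ceph data on the listed nodes; hostnames are placeholders):

  ceph-deploy purge ceph1 ceph2
  ceph-deploy purgedata ceph1 ceph2
  ceph-deploy forgetkeys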

[ceph-users] Ceph, SSD, and NVMe

2015-09-30 Thread J David
Because we have a good thing going, our Ceph clusters are still running Firefly on all of our clusters including our largest, all-SSD cluster. If I understand right, newer versions of Ceph make much better use of SSDs and give overall much higher performance on the same equipment. However, the

Re: [ceph-users] cant get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-30 Thread Jogi Hofmüller
Hi, Some more info: ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 3.59998 root default
-2 1.7         host ceph1
 0 0.8             osd.0        up  1.0          1.0
 1 0.8             osd.1        up  1.0          1.0
-3 1.7         host ceph2
 2

Re: [ceph-users] [sepia] debian jessie repository ?

2015-09-30 Thread Kurt Bauer
Hi Jogi, you can specify any repository you like with 'ceph-deploy install --repo-url ', given you have the repo keys installed. Best regards, Kurt Jogi Hofmüller wrote: > Hi, > > On 2015-09-25 at 22:23, Udo Lembke wrote: > >> you can use this sources-list >> >> cat
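For example (a sketch only; the URLs are assumptions, and any mirror carrying debian-hammer will do provided its release key is trusted):

  wget -qO- https://download.ceph.com/keys/release.asc | sudo apt-key add -
  ceph-deploy install --repo-url http://eu.ceph.com/debian-hammer/ --gpg-url https://download.ceph.com/keys/release.asc node1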

[ceph-users] chain replication scheme

2015-09-30 Thread Wouter De Borger
Hi all, In the original paper (RADOS: a scalable, reliable storage service for petabyte-scale storage clusters), three replication schemes were described (primary copy, chain and splay). Now the documentation only discusses primary copy. Does the chain scheme still exist? It would be much more

Re: [ceph-users] chain replication scheme

2015-09-30 Thread Sage Weil
On Wed, 30 Sep 2015, Wouter De Borger wrote: > Hi all, > In the original paper (RADOS: a scalable, reliable storage service for > petabyte-scale storage clusters), three replication schemes were described > (primary copy, chain and splay). > > Now the documentation only discusses primary copy.

Re: [ceph-users] Issue with journal on another drive

2015-09-30 Thread J David
On Tue, Sep 29, 2015 at 7:32 AM, Jiri Kanicky wrote: > Thank you for your reply. In this case I am considering to create separate > partitions for each disk on the SSD drive. Would be good to know what is the > performance difference, because creating partitions is kind of waste

Re: [ceph-users] cant get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-30 Thread Jogi Hofmüller
Hi Kurt, On 2015-09-30 at 17:09, Kurt Bauer wrote: > You have two nodes but repl.size 3 for your test-data pool. With the > default crushmap this won't work as it tries to replicate on different > nodes. > > So either change to rep.size 2, or add another node ;-) Thanks a lot! I did not set
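A sketch of the suggested fix, assuming the pool really is named test-data:

  ceph osd pool set test-data size 2
  ceph osd pool set test-data min_size 1
  ceph -s   # PGs should go active+clean once peering finishes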

Re: [ceph-users] cant get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-30 Thread Kurt Bauer
Hi, looking at the outputs below the following puzzles me: You have two nodes but repl.size 3 for your test-data pool. With the default crushmap this won't work as it tries to replicate on different nodes. So either change to rep.size 2, or add another node ;-) best regards, Kurt Jogi

[ceph-users] RPM repo connection reset by peer when updating

2015-09-30 Thread Alkaid
I tried to update packages today, but I got a "connection reset by peer" error every time. It seems that the server will block my IP if I make requests a little too frequently (e.g. refreshing the page a few times per second manually). I guess yum downloads packages in parallel and triggers something like fail2ban. Any