Re: [ceph-users] bucket cleanup speed

2014-11-14 Thread Daniel Hoffman
Has no one had this problem? I found a forum/mailing list post from 2013 with the same issue but no responses either. Any pointers appreciated. Daniel On 2014-11-14 20:20, Daniel Hoffman wrote: Hi All. Running a Ceph Cluster (Firefly), ceph version 0.80.5. We use ceph mainly for backups via the r

[ceph-users] How to upgrade ceph from Firefly to Giant on Wheezy smoothly?

2014-11-14 Thread debian Only
Dear all, I have one Ceph Firefly test cluster on Debian Wheezy too, and I want to upgrade Ceph from Firefly to Giant. Could you tell me how to do the upgrade? I saw the release notes like below, but I do not know how to upgrade. Could you give me some guidance? *Upgrade Sequencin
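
For reference, a minimal sketch of an upgrade on Wheezy with sysvinit, assuming the ceph.com Debian repository and the default cluster name; the repository line and daemon IDs are placeholders, and per the release notes monitors should be upgraded and restarted before OSDs, then MDS/RGW:

    # point APT at the Giant repository (assumption: ceph.com debian-giant repo)
    echo "deb http://ceph.com/debian-giant/ wheezy main" | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install --only-upgrade ceph ceph-common

    # restart daemons in order: monitors first, then OSDs, then MDS/RGW
    sudo /etc/init.d/ceph restart mon.<id>
    sudo /etc/init.d/ceph restart osd.<id>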

Re: [ceph-users] Recreating the OSD's with same ID does not seem to work

2014-11-14 Thread JIten Shah
Ok. I will do that. Thanks --Jiten > On Nov 14, 2014, at 4:57 PM, Gregory Farnum wrote: > > It's still creating and storing keys in case you enable it later. > That's exactly what the error is telling you and that's why it's not > working. > >> On Fri, Nov 14, 2014 at 4:45 PM, JIten Shah w

Re: [ceph-users] Anyone deploying Ceph on Docker?

2014-11-14 Thread Christopher Armstrong
Will do. We haven't really had any stability issues once the containers are running - they are the same daemons that would run on your host OS, they just happen to be in a container. We have seen a few cases during startup where there are placement groups stuck inactive/peering, which prevents the

Re: [ceph-users] Recreating the OSD's with same ID does not seem to work

2014-11-14 Thread Gregory Farnum
It's still creating and storing keys in case you enable it later. That's exactly what the error is telling you and that's why it's not working. On Fri, Nov 14, 2014 at 4:45 PM, JIten Shah wrote: > But I am not using “cephx” for authentication. I have already disabled that. > > —Jiten > > On Nov 1

Re: [ceph-users] Recreating the OSD's with same ID does not seem to work

2014-11-14 Thread Gregory Farnum
You didn't remove them from the auth monitor's keyring. If you're removing OSDs you need to follow the steps in the documentation. -Greg On Fri, Nov 14, 2014 at 4:42 PM, JIten Shah wrote: > Hi Guys, > > I had to rekick some of the hosts where OSD’s were running and after > re-kick, when I try to
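
The documented removal sequence Greg refers to includes deleting the OSD's cephx key, which is the part that was missed here; a sketch, with osd.12 as an example ID:

    ceph osd out 12
    sudo /etc/init.d/ceph stop osd.12   # on the OSD host
    ceph osd crush remove osd.12
    ceph auth del osd.12                # this is the step that clears the old key
    ceph osd rm 12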

Re: [ceph-users] Recreating the OSD's with same ID does not seem to work

2014-11-14 Thread JIten Shah
But I am not using “cephx” for authentication. I have already disabled that. —Jiten On Nov 14, 2014, at 4:44 PM, Gregory Farnum wrote: > You didn't remove them from the auth monitor's keyring. If you're > removing OSDs you need to follow the steps in the documentation. > -Greg > > On Fri, Nov

[ceph-users] Recreating the OSD's with same ID does not seem to work

2014-11-14 Thread JIten Shah
Hi Guys, I had to rekick some of the hosts where OSDs were running, and after the re-kick, when I try to run puppet and install the OSDs again, it gives me a key mismatch error (as below). After the hosts were shut down for the rekick, I removed the OSDs from the osd tree and the crush map too. Why is it

Re: [ceph-users] Anyone deploying Ceph on Docker?

2014-11-14 Thread Robert LeBlanc
Ceph in Docker is very intriguing to me, but I understood that there were still a number of stability and implementation issues. What is your experience? Please post a link to your blog when you are done, I'd be interested in reading it. On Fri, Nov 14, 2014 at 4:15 PM, Christopher Armstrong wrot

Re: [ceph-users] Anyone deploying Ceph on Docker?

2014-11-14 Thread Christopher Armstrong
Hi Loic, Thanks for the reply! I started implementing these first using ceph-deploy, but ran into lots of issues because there is no upstart daemon running within the containers. So while ceph-deploy would successfully log into the containers remotely (I had to run sshd in the containers), it woul
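
Since there is no upstart inside a container, the daemons can instead be run in the foreground as the container's main process; a sketch, where the image name, monitor ID and address are placeholders:

    docker run -d --net=host \
      -v /etc/ceph:/etc/ceph \
      -v /var/lib/ceph:/var/lib/ceph \
      my-ceph-image \
      ceph-mon -f -i mon0 --public-addr 192.168.0.10   # -f keeps ceph-mon in the foreground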

Re: [ceph-users] Giant osd problems - loss of IO

2014-11-14 Thread Andrei Mikhailovsky
Any other suggestions as to why several OSDs are going down on Giant and causing IO to stall? This was not happening on Firefly. Thanks - Original Message - On 11/14/2014 01:50 PM, Andrei Mikhailovsky wrote: > Wido, I've not done any changes from the default settings. There are no > fi

Re: [ceph-users] v0.88 released

2014-11-14 Thread Robert LeBlanc
Will there be RPMs built for this release? Thanks, On Tue, Nov 11, 2014 at 5:24 PM, Sage Weil wrote: > This is the first development release after Giant. The two main > features merged this round are the new AsyncMessenger (an alternative > implementation of the network layer) from Haomai Wang

Re: [ceph-users] Federated gateways

2014-11-14 Thread Craig Lewis
I have identical regionmaps in both clusters. I only created each zone's pools in the cluster serving that zone. I didn't delete the default .rgw.* pools, so those exist in both zones. Both users need to be system users on both ends, and have identical access keys and secrets. If they're not, this is likely your problem.
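
A sketch of creating a matching system user in each zone; the user ID, display name and keys below are placeholders, but the same access key and secret must be used on both ends:

    radosgw-admin user create --uid=sync-user --display-name="Zone sync user" \
        --access-key=SYNCACCESSKEY --secret=SYNCSECRETKEY --system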

Re: [ceph-users] Performance data collection for Ceph

2014-11-14 Thread Dan Ryder (daryder)
Hi, Take a look at the built in perf counters - http://ceph.com/docs/master/dev/perf_counters/. Through this you can get individual daemon performance as well as some cluster level statistics. Other (cluster-level) disk space utilization and pool utilization/performance is available through “c
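
For example (osd.0 and the socket path are illustrative; the admin socket path depends on the cluster name, and the daemon commands must be run on the host where that daemon lives):

    # per-daemon counters via the admin socket
    sudo ceph daemon osd.0 perf dump
    # equivalent, using the socket path directly
    sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
    # cluster- and pool-level usage
    ceph df
    rados df
    ceph osd perf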

Re: [ceph-users] Federated gateways

2014-11-14 Thread Aaron Bassett
Well, I upgraded both clusters to Giant this morning just to see if that would help, and it didn't. I have a couple of questions, though. I have the same regionmap on both clusters, with both zones in it, but then I only have the buckets and zone info for one zone in each cluster; is this right? Or d

Re: [ceph-users] Installing CephFs via puppet

2014-11-14 Thread JIten Shah
Hi Guys, I got 3 MONs and 3 OSDs configured via puppet, but when trying to run an MDS on the MON servers I am running into an error which I can't figure out. BTW, I already manually installed an MDS on cephmon001 and am trying to install the second MDS via puppet to see if I can have multiple MDSs
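
For comparison with what puppet should be doing, a sketch of the manual steps from the docs of that era for adding a second MDS; the ID "cephmon002" is a placeholder and the capability string may need adjusting to match your cluster:

    sudo mkdir -p /var/lib/ceph/mds/ceph-cephmon002
    ceph auth get-or-create mds.cephmon002 mds 'allow' osd 'allow *' mon 'allow rwx' \
        -o /var/lib/ceph/mds/ceph-cephmon002/keyring
    sudo service ceph start mds.cephmon002   # assumes a matching [mds.cephmon002] section in ceph.conf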

Re: [ceph-users] jbod + SMART : how to identify failing disks ?

2014-11-14 Thread SCHAER Frederic
Hi, Thanks for your replies :] Indeed, I did not think about /sys/class/leds, but unfortunately I have nothing in there on my systems. This is kernel related, so I presume it would be the module's duty to expose LEDs there (in my case, mpt2sas)... that would indeed be welcome! /sys/block is
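
A sketch of the kind of mapping that works even without LED support; the OSD path and device names are examples, and sas2ircu only applies to LSI controllers such as the mpt2sas-driven ones mentioned above:

    # map a suspect OSD back to its block device
    df /var/lib/ceph/osd/ceph-12
    # read SMART health and the drive serial number
    sudo smartctl -a /dev/sdc
    # list SAS addresses / sg devices to match against the controller's view
    lsscsi -t -g
    # if the LSI sas2ircu utility is installed, show enclosure and slot per drive
    sudo sas2ircu 0 DISPLAY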

[ceph-users] Performance data collection for Ceph

2014-11-14 Thread 10 minus
Hi, I'm trying to collect performance data for Ceph. I'm looking to run some commands at regular intervals to collect data. Apart from "ceph osd perf", are there other commands one can use? Can I also track how much data is being replicated? Does Ceph maintain performance counters for

[ceph-users] ceph-deploy not creating osd data path directories

2014-11-14 Thread Anthony Alba
Hi list, I am running ceph-deploy with a non-standard cluster name. At the activate stage of the OSD, the data directory /var/lib/ceph/osd/-0 is not created, so the mount fails. If I manually create the directory it works. I can't quite figure out when the directory is supposed to be created.
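
A workaround sketch until the root cause is found, assuming a cluster named "backup", OSD id 0 and a data partition on osdhost1 (all placeholders):

    sudo mkdir -p /var/lib/ceph/osd/backup-0
    ceph-deploy --cluster backup osd activate osdhost1:/dev/sdb1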

Re: [ceph-users] Giant osd problems - loss of IO

2014-11-14 Thread Andrei Mikhailovsky
- Original Message - On 11/14/2014 01:50 PM, Andrei Mikhailovsky wrote: > Wido, I've not done any changes from the default settings. There are no > firewalls between the ceph cluster members and I do not see a great deal of > network related errors either. There is a tiny amount of TX e

Re: [ceph-users] Giant osd problems - loss of IO

2014-11-14 Thread Wido den Hollander
On 11/14/2014 01:50 PM, Andrei Mikhailovsky wrote: > Wido, I've not done any changes from the default settings. There are no > firewalls between the ceph cluster members and I do not see a great deal of > network related errors either. There is a tiny amount of TX errors on the > network interfa

Re: [ceph-users] Giant osd problems - loss of IO

2014-11-14 Thread Andrei Mikhailovsky
Wido, I've not done any changes from the default settings. There are no firewalls between the ceph cluster members and I do not see a great deal of network-related errors either. There is a tiny amount of TX errors on the network interface, which accounts for 0.0001% of the total packets. Regar

Re: [ceph-users] RBD read performance in Giant ?

2014-11-14 Thread Mark Nelson
On 11/14/2014 05:54 AM, Florent Bautista wrote: On 11/14/2014 12:52 PM, Alexandre DERUMIER wrote: Unfortunately I didn't; do you think hdparm could give wrong results? I really don't know how hdparm does its bench (block size? number of threads?). BTW, have you also upgraded librbd on

Re: [ceph-users] Giant osd problems - loss of IO

2014-11-14 Thread Wido den Hollander
On 14-11-14 13:26, Andrei Mikhailovsky wrote: > Hello guys, > > Since upgrading my cluster to Giant from the previous stable release I > started having massive problems with client IO. I've done the upgrade 2 > weeks ago and since then the IO on my ceph cluster has been unavailable > 3 times alrea

[ceph-users] Giant osd problems - loss of IO

2014-11-14 Thread Andrei Mikhailovsky
Hello guys, Since upgrading my cluster to Giant from the previous stable release I started having massive problems with client IO. I did the upgrade 2 weeks ago and since then the IO on my ceph cluster has been unavailable 3 times already. Quick info on my storage cluster - 3 mons, 2 osd
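
When this happens, a few commands that help narrow down whether the OSDs are being wrongly marked down (the log path assumes the default location):

    ceph health detail                      # stuck PGs and which OSDs are down
    ceph osd tree | grep down
    # on an affected host, look for heartbeat or "wrongly marked me down" messages
    grep -iE 'heartbeat_check|wrongly marked me down' /var/log/ceph/ceph-osd.*.log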

Re: [ceph-users] Typical 10GbE latency

2014-11-14 Thread Wido den Hollander
On 13-11-14 19:39, Stephan Seitz wrote: > Indeed, there must be something! But I can't figure it out yet. Same controllers, tried the same OS, direct cables, but the latency is 40% higher. > > Wido, > > just an educated guess: > > Did you check the offload settings of your
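
For example (the interface name and peer address are placeholders):

    # show current offload settings
    ethtool -k eth2
    # toggle one offload at a time and re-test, e.g. generic receive offload
    sudo ethtool -K eth2 gro off
    # quick latency comparison with small flood pings
    sudo ping -f -c 10000 -q 192.168.10.2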

Re: [ceph-users] RBD read performance in Giant ?

2014-11-14 Thread Florent Bautista
On 11/14/2014 12:52 PM, Alexandre DERUMIER wrote: >>> Unfortunately I didn't; do you think hdparm could give wrong results? > I really don't know how hdparm does its bench (block size? number of > threads?). > > > BTW, have you also upgraded librbd on your kvm node? (and restarted the > vm

Re: [ceph-users] RBD read performance in Giant ?

2014-11-14 Thread Alexandre DERUMIER
>>Unfortunately I didn't; do you think hdparm could give wrong results? I really don't know how hdparm does its bench (block size? number of threads?). BTW, have you also upgraded librbd on your kvm node? (and restarted the vm) - Original Message - From: "Florent Bautista" To: "Ale

Re: [ceph-users] RBD read performance in Giant ?

2014-11-14 Thread Florent Bautista
Unfortunately I didn't; do you think hdparm could give wrong results? On 11/14/2014 12:37 PM, Alexandre DERUMIER wrote: > Have you tried to bench with something like fio? > > > - Original Message - > > From: "Florent Bautista" > To: "Alexandre DERUMIER" > Cc: ceph-us...@ceph.com > Sent:

Re: [ceph-users] RBD read performance in Giant ?

2014-11-14 Thread Alexandre DERUMIER
Have you tried to bench with something like fio? - Original Message - From: "Florent Bautista" To: "Alexandre DERUMIER" Cc: ceph-us...@ceph.com Sent: Friday, 14 November 2014 12:30:21 Subject: Re: [ceph-users] RBD read performance in Giant ? Hi, I don't think this is the proble
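
A sketch of a read-only fio run inside the VM against the same device the hdparm test used; the parameters are just a starting point:

    fio --name=seqread --filename=/dev/vda --readonly --direct=1 \
        --rw=read --bs=4M --ioengine=libaio --iodepth=16 \
        --runtime=60 --time_based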

Re: [ceph-users] RBD read performance in Giant ?

2014-11-14 Thread Florent Bautista
Hi, I don't think this is the problem, because: - I use a kvm RBD disk for my VM, with the cache=none option - after the upgrade, I didn't restart my VM, yet I still see the performance drop (so the rbd cache setting may not have changed from Firefly). I put this in my config and restarted the VM, but nothing changed. On 11/14/2014 12:20

Re: [ceph-users] RBD read performance in Giant ?

2014-11-14 Thread Alexandre DERUMIER
Can you try to disable the rbd cache? (It is enabled by default in Giant.) [client] rbd_cache = false - Original Message - From: "Florent Bautista" To: ceph-us...@ceph.com Sent: Friday, 14 November 2014 11:54:19 Subject: [ceph-users] RBD read performance in Giant ? Hi all, On a testin

Re: [ceph-users] Deep scrub parameter tuning

2014-11-14 Thread Loic Dachary
Hi, On 14/11/2014 12:11, Mallikarjun Biradar wrote: > Hi, > > The default deep scrub interval is once per week, which we can set using > the osd_deep_scrub_interval parameter. > > Can we reduce it to less than a week, or is the minimum interval one week? You can reduce it to a shorter period. It is
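
A sketch: to shorten it to three days (259200 seconds), either set it in ceph.conf or inject it at runtime:

    # ceph.conf, [osd] section
    osd deep scrub interval = 259200

    # or injected at runtime, without restarting the OSDs
    ceph tell osd.* injectargs '--osd_deep_scrub_interval 259200'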

[ceph-users] Deep scrub parameter tuning

2014-11-14 Thread Mallikarjun Biradar
Hi, The default deep scrub interval is once per week, which we can set using the osd_deep_scrub_interval parameter. Can we reduce it to less than a week, or is the minimum interval one week? -Thanks & regards, Mallikarjun Biradar

[ceph-users] RBD read performance in Giant ?

2014-11-14 Thread Florent Bautista
Hi all, On a testing cluster, I upgraded from Firefly to Giant. Without changing anything, read performance on an RBD volume (in a VM) has been divided by 4! With Firefly, in the VM (/dev/vda is a Virtio RBD device):

    hdparm -Tt /dev/vda
    /dev/vda:
     Timing cached reads: 20568 MB in 1.99 sec

Re: [ceph-users] Ceph Monitoring with check_MK

2014-11-14 Thread Nick Fisk
Hi Robert, I've just been testing your ceph check and I have made a small modification to allow it to adjust itself to suit the autoscaling of the units Ceph outputs. Here is the relevant section I have modified:

    if line[1] == 'TB':
        used = saveint(line[0]) * 1099511627776

Re: [ceph-users] [SOLVED] Very Basic question

2014-11-14 Thread Luca Mazzaferro
Hi, problem solved: it was a very stupid firewall problem. It was not configured correctly on the monitor side. The thing that I don't understand is why the OSD didn't want to start if it was the monitor side that had the issue?! Thank you. Regards. Luca On 11/13/2014 06:40 PM, Luca Ma
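
For anyone hitting the same thing: monitors listen on 6789/tcp and OSDs use the 6800-7300/tcp range by default, so rules along these lines are needed on the cluster nodes (interface and source restrictions omitted for brevity):

    iptables -A INPUT -p tcp --dport 6789 -j ACCEPT        # monitors
    iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT   # OSDs / MDS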

[ceph-users] bucket cleanup speed

2014-11-14 Thread Daniel Hoffman
Hi All. Running a Ceph Cluster (Firefly), ceph version 0.80.5. We use ceph mainly for backups via the radosGW at the moment. An account had to be deleted and its bucket removed; the bucket had a very large number of objects and was about 60TB in size. We have been monitoring it for days now, and the
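
For reference, a sketch of the removal and of watching the garbage collector afterwards; the bucket name is a placeholder:

    radosgw-admin bucket rm --bucket=backups --purge-objects
    # deleted objects are reclaimed asynchronously by the RGW garbage collector
    radosgw-admin gc list
    radosgw-admin gc process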

[ceph-users] (no subject)

2014-11-14 Thread idzzy
Hello, I see following message on calamari GUI. -- New Calamari Installation This appears to be the first time you have started Calamari and there are no clusters currently configured. 3 Ceph servers are connected to Calamari, but no Ceph cluster has been created

[ceph-users] Radosgw /var/lib/ceph/radosgw is empty

2014-11-14 Thread Anthony Alba
What is the role of /var/lib/ceph/radosgw/ceph-radosgw.$ID? Following the Simple guide and creating the directory, it is empty at runtime. Is this correct behaviour? The S3 gateway seems to be working, though. Anthony

Re: [ceph-users] calamari build failure

2014-11-14 Thread idzzy
Hi, The error probably occurs when salt tries to report the final state summary. The packages are created; I used them and could install Calamari, and now I can see the Calamari GUI. Thank you. On November 14, 2014 at 11:52:26 AM, idzzy (idez...@gmail.com) wrote: Hi, Sure, Thanks. As describe