Re: [ceph-users] how to run multiple nodes in a single machine in a previous version of ceph

2016-09-20 Thread Brad Hubbard
Just use git to check out and build that branch (older branches use autotools) and then follow the instructions for that release. http://docs.ceph.com/docs/infernalis/dev/quick_guide/ http://docs.ceph.com/docs/hammer/dev/quick_guide/ On Tue, Sep 20, 2016 at 12:19 AM, agung Laksono

Re: [ceph-users] Stat speed for objects in ceph

2016-09-20 Thread Haomai Wang
On Wed, Sep 21, 2016 at 2:41 AM, Wido den Hollander wrote: > >> On 20 September 2016 at 20:30, Haomai Wang wrote: >> >> >> On Wed, Sep 21, 2016 at 2:26 AM, Wido den Hollander wrote: >> > >> >> On 20 September 2016 at 19:27, Gregory Farnum wrote

Re: [ceph-users] Jewel Docs | error on mount.ceph page

2016-09-20 Thread Ilya Dryomov
On Tue, Sep 20, 2016 at 7:48 PM, David wrote: > Sorry I don't know the correct way to report this. > > Potential error on this page: > > on http://docs.ceph.com/docs/jewel/man/8/mount.ceph/ > > Currently: > > rsize > int (bytes), max readahead, multiple of 1024, Default:

Re: [ceph-users] Best Practices for Managing Multiple Pools

2016-09-20 Thread Wido den Hollander
> On 20 September 2016 at 21:23, Heath Albritton wrote: > > > I'm wondering if anyone has some tips for managing different types of > pools, each of which falls on a different type of OSD. > > Right now, I have a small cluster running with two kinds of OSD nodes, > ones

Re: [ceph-users] Auto recovering after losing all copies of a PG(s)

2016-09-20 Thread Gregory Farnum
On Tue, Sep 20, 2016 at 6:19 AM, Iain Buclaw wrote: > On 1 September 2016 at 23:04, Wido den Hollander wrote: >> >>> On 1 September 2016 at 17:37, Iain Buclaw wrote: >>> >>> >>> On 16 August 2016 at 17:13, Wido den Hollander

Re: [ceph-users] cache tier not flushing 10.2.2

2016-09-20 Thread Jim Kilborn
Please disregard this. I had an error in my target_max_bytes that was causing the issue. I now have it evicting the cache. Sent from Mail for Windows 10 From: Jim Kilborn Sent: Tuesday, September 20, 2016 12:59

[ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-20 Thread Martin Bureau
Hello, I noticed that the same PG gets scrubbed repeatedly on our new Jewel cluster. Here's an excerpt from the log: 2016-09-20 20:36:31.236123 osd.12 10.1.82.82:6820/14316 150514 : cluster [INF] 25.3f scrub ok 2016-09-20 20:36:32.232918 osd.12 10.1.82.82:6820/14316 150515 : cluster [INF]
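
To confirm whether one PG really dominates the scrub activity, cluster log lines like the ones quoted above can be tallied per PG. A minimal sketch in Python, assuming the log format shown in the excerpt; the log file path is a placeholder:

    import re
    from collections import Counter

    # Matches cluster log lines such as:
    # 2016-09-20 20:36:31.236123 osd.12 10.1.82.82:6820/14316 150514 : cluster [INF] 25.3f scrub ok
    SCRUB_RE = re.compile(r'\[INF\]\s+(\S+)\s+(deep-scrub|scrub)\s+ok')

    counts = Counter()
    with open('/var/log/ceph/ceph.log') as log:   # placeholder path; point at your cluster log
        for line in log:
            m = SCRUB_RE.search(line)
            if m:
                counts[(m.group(1), m.group(2))] += 1

    # Print the PGs that were scrubbed most often
    for (pg, kind), n in counts.most_common(10):
        print(pg, kind, n)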

[ceph-users] Best Practices for Managing Multiple Pools

2016-09-20 Thread Heath Albritton
I'm wondering if anyone has some tips for managing different types of pools, each of which falls on a different type of OSD. Right now, I have a small cluster running with two kinds of OSD nodes, one with spinning disks (and SSD journals) and another with all SATA SSDs. I'm currently running
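
On Hammer/Jewel-era clusters this is usually handled with separate CRUSH roots and rulesets, one per device type, and then pointing each pool at the matching ruleset. A rough sketch of the pool-to-ruleset assignment only; the pool names and ruleset ids below are made up, and the rulesets themselves must already exist in the CRUSH map:

    import subprocess

    # Hypothetical mapping of pool name -> CRUSH ruleset id.
    # Ruleset 0 could select the spinning-disk root, ruleset 1 the SSD root.
    POOL_RULESETS = {
        'rbd-hdd': 0,   # placeholder pool names
        'rbd-ssd': 1,
    }

    for pool, ruleset in POOL_RULESETS.items():
        # Pre-Luminous releases use the 'crush_ruleset' pool setting.
        subprocess.check_call(
            ['ceph', 'osd', 'pool', 'set', pool, 'crush_ruleset', str(ruleset)])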

Re: [ceph-users] Stat speed for objects in ceph

2016-09-20 Thread Wido den Hollander
> On 20 September 2016 at 20:30, Haomai Wang wrote: > > > On Wed, Sep 21, 2016 at 2:26 AM, Wido den Hollander wrote: > > > >> On 20 September 2016 at 19:27, Gregory Farnum wrote: > >> > >> > >> In librados getting a stat is basically

Re: [ceph-users] Stat speed for objects in ceph

2016-09-20 Thread Haomai Wang
On Wed, Sep 21, 2016 at 2:26 AM, Wido den Hollander wrote: > >> On 20 September 2016 at 19:27, Gregory Farnum wrote: >> >> >> In librados getting a stat is basically equivalent to reading a small >> object; there's not an index or anything so FileStore needs

Re: [ceph-users] Stat speed for objects in ceph

2016-09-20 Thread Gregory Farnum
On Tue, Sep 20, 2016 at 11:26 AM, Wido den Hollander wrote: > >> On 20 September 2016 at 19:27, Gregory Farnum wrote: >> >> >> In librados getting a stat is basically equivalent to reading a small >> object; there's not an index or anything so FileStore needs

Re: [ceph-users] Stat speed for objects in ceph

2016-09-20 Thread Wido den Hollander
> On 20 September 2016 at 19:27, Gregory Farnum wrote: > > > In librados getting a stat is basically equivalent to reading a small > object; there's not an index or anything so FileStore needs to descend its > folder hierarchy. If looking at metadata for all the objects in

[ceph-users] cache tier not flushing 10.2.2

2016-09-20 Thread Jim Kilborn
Simple issue I can't find with the cache tier. Thanks for taking the time… I set up a new cluster with an SSD cache tier. My cache tier is on 1TB SSDs, with 2 replicas. It just fills up my cache until the ceph filesystem stops allowing access. I even set the target_max_bytes to 1048576 (1GB) and still
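
For reference, target_max_bytes is an absolute byte count, and 1048576 bytes is 1 MiB rather than 1 GB, so small values like that behave very differently from what the comment in parentheses suggests. A rough sketch of computing and applying a target of about 80% of a 1 TB tier; the pool name 'cache', the percentage, and the assumption that target_max_bytes is compared against the data stored in the cache pool (so raw SSD capacity should be divided by the replica count) are all illustrative and worth checking against your release's docs:

    import subprocess

    TIB = 1024 ** 4
    GIB = 1024 ** 3

    raw_capacity = 1 * TIB          # total SSD capacity behind the cache tier
    replicas = 2                    # cache pool size
    fill_target = 0.8               # start flushing well before the OSDs fill up

    # Divide raw capacity by the replica count before applying the fill target.
    target_max_bytes = int(raw_capacity / replicas * fill_target)
    print('target_max_bytes = %d (~%.1f GiB)' % (target_max_bytes, target_max_bytes / GIB))

    # 'cache' is a placeholder pool name.
    subprocess.check_call(
        ['ceph', 'osd', 'pool', 'set', 'cache', 'target_max_bytes', str(target_max_bytes)])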

[ceph-users] Jewel Docs | error on mount.ceph page

2016-09-20 Thread David
Sorry I don't know the correct way to report this. Potential error on this page: on http://docs.ceph.com/docs/jewel/man/8/mount.ceph/ Currently: rsize int (bytes), max readahead, multiple of 1024, Default: 524288 (512*1024) Should it be something like the following? rsize int (bytes), max

Re: [ceph-users] Stat speed for objects in ceph

2016-09-20 Thread Gregory Farnum
In librados getting a stat is basically equivalent to reading a small object; there's not an index or anything so FileStore needs to descend its folder hierarchy. If looking at metadata for all the objects in the system efficiently is important, you'll want to layer an index in somewhere. -Greg On
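
One way to "layer an index in" without an external database is to keep object metadata as omap key/value pairs on a small number of dedicated index objects, which can then be listed in a few round trips instead of stat()ing every object. A rough sketch with the python-rados bindings; the pool name, object names, index layout, and the availability of the WriteOpCtx/ReadOpCtx omap helpers in your installed binding are all assumptions:

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # adjust to your setup
    cluster.connect()
    ioctx = cluster.open_ioctx('mypool')                    # placeholder pool name

    # Record size/mtime for an object as an omap entry on a dedicated index object.
    name, size, mtime = 'some_object', 4096, '2016-09-20'
    with rados.WriteOpCtx() as op:
        ioctx.set_omap(op, (name,), ('%d %s' % (size, mtime),))
        ioctx.operate_write_op(op, 'object_index')

    # Later, list the recorded metadata without touching the data objects.
    with rados.ReadOpCtx() as op:
        entries, _ = ioctx.get_omap_vals(op, '', '', 1000)
        ioctx.operate_read_op(op, 'object_index')
        for key, val in entries:
            print(key, val)

    ioctx.close()
    cluster.shutdown()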

[ceph-users] Stat speed for objects in ceph

2016-09-20 Thread Iain Buclaw
Hi, As a general observation, calling stat() on any object in Ceph is relatively slow. I'm probably getting a rate of about 10K per second using AIO, and even then it is really *really* bursty, to the point where there could be 5 seconds of activity going in one direction, then the
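
For anyone wanting to reproduce the measurement, even a trivial synchronous loop shows the per-call cost; the ~10K/s figure above was with AIO, so a fair comparison would keep many requests in flight. A minimal sketch assuming a pool named 'mypool' and the default ceph.conf location:

    import itertools
    import time
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('mypool')        # placeholder pool name

    # Take the first 10,000 object names in the pool.
    objects = [o.key for o in itertools.islice(ioctx.list_objects(), 10000)]

    start = time.time()
    for key in objects:
        size, mtime = ioctx.stat(key)           # one synchronous round trip per object
    elapsed = time.time() - start
    print('%d stats in %.1fs (%.0f/s)' % (len(objects), elapsed, len(objects) / elapsed))

    ioctx.close()
    cluster.shutdown()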

Re: [ceph-users] Auto recovering after losing all copies of a PG(s)

2016-09-20 Thread Iain Buclaw
On 1 September 2016 at 23:04, Wido den Hollander wrote: > >> On 1 September 2016 at 17:37, Iain Buclaw wrote: >> >> >> On 16 August 2016 at 17:13, Wido den Hollander wrote: >> > >> >> On 16 August 2016 at 15:59, Iain Buclaw wrote

Re: [ceph-users] ceph reweight-by-utilization and increasing

2016-09-20 Thread Christian Balzer
Hello, On Tue, 20 Sep 2016 14:40:25 +0200 Stefan Priebe - Profihost AG wrote: > Hi Christian, > > On 20.09.2016 at 13:54, Christian Balzer wrote: > > This and the non-permanence of reweight is why I use CRUSH reweight (a > > more distinct naming would be VERY helpful, too) and do it manually,

Re: [ceph-users] ceph reweight-by-utilization and increasing

2016-09-20 Thread Stefan Priebe - Profihost AG
Hi Christian, On 20.09.2016 at 13:54, Christian Balzer wrote: > This and the non-permanence of reweight is why I use CRUSH reweight (a > more distinct naming would be VERY helpful, too) and do it manually, which > tends to beat all the automated approaches so far. So you do it really by hand
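
Doing it "by hand" usually just means looking at per-OSD utilization and nudging the CRUSH weight of the fullest OSDs down slightly (or the emptiest up), a small step at a time. A rough sketch of the inspection step, assuming `ceph osd df --format json` is available (it appeared in Hammer) and that the JSON field names below match your release's output:

    import json
    import subprocess

    # Field names ("nodes", "utilization", "crush_weight") are what recent
    # releases emit; verify against your own `ceph osd df --format json` output.
    raw = subprocess.check_output(['ceph', 'osd', 'df', '--format', 'json'])
    nodes = json.loads(raw.decode())['nodes']

    nodes.sort(key=lambda n: n['utilization'])
    for n in (nodes[:3] + nodes[-3:]):          # emptiest and fullest OSDs
        print('%-8s util=%5.1f%% crush_weight=%.4f'
              % (n['name'], n['utilization'], n['crush_weight']))

    # A manual adjustment would then look something like:
    #   ceph osd crush reweight osd.110 <slightly lower weight>
    # applied to the fullest OSDs, one small step at a time.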

Re: [ceph-users] ceph reweight-by-utilization and increasing

2016-09-20 Thread Stefan Priebe - Profihost AG
On 20.09.2016 at 13:49, Dan van der Ster wrote: > Hi Stefan, > > What's the current reweight value for osd.110? It cannot be increased above 1. Ah OK, it's 1 already. But that doesn't make sense because this means all other OSDs (e.g. 109 OSDs) have to be touched to get lower values before 110

Re: [ceph-users] swiftclient call to radosgw always responds with 401 Unauthorized

2016-09-20 Thread Radoslaw Zarzynski
Hi Brian, Responded inline. On Tue, Sep 20, 2016 at 5:45 AM, Brian Chang-Chien wrote: > > > 2016-09-20 10:14:38.761635 7f2049ffb700 20 > HTTP_X_AUTH_TOKEN=b243614d27244d00b12b2f366b58d709 > 2016-09-20 10:14:38.761636 7f2049ffb700 20 QUERY_STRING= > ... > 2016-09-20
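
When chasing 401s like this it can help to take the application out of the loop and authenticate directly with python-swiftclient against the radosgw Swift endpoint. A minimal sketch; the endpoint URL, port, user, and key are placeholders, and it assumes the classic /auth/1.0 style auth rather than Keystone:

    from swiftclient import client
    from swiftclient.exceptions import ClientException

    # Placeholder endpoint and credentials; radosgw's classic Swift auth
    # endpoint is typically http://<rgw>:<port>/auth/1.0 unless Keystone is used.
    conn = client.Connection(
        authurl='http://rgw.example.com:7480/auth/1.0',
        user='testuser:swift',
        key='swift_secret_key')

    try:
        headers, containers = conn.get_account()
        print('auth OK, %d containers' % len(containers))
    except ClientException as e:
        # A 401 here means the subuser credentials themselves are rejected,
        # independent of whatever the application is sending.
        print('failed: HTTP %s' % e.http_status)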

Re: [ceph-users] ceph reweight-by-utilization and increasing

2016-09-20 Thread Christian Balzer
Hello, This and the non-permanence of reweight is why I use CRUSH reweight (a more distinct naming would be VERY helpful, too) and do it manually, which tends to beat all the automated approaches so far. Christian On Tue, 20 Sep 2016 13:49:50 +0200 Dan van der Ster wrote: > Hi Stefan, > >

Re: [ceph-users] ceph reweight-by-utilization and increasing

2016-09-20 Thread Dan van der Ster
Hi Stefan, What's the current reweight value for osd.110? It cannot be increased above 1. Cheers, Dan On Tue, Sep 20, 2016 at 12:13 PM, Stefan Priebe - Profihost AG wrote: > Hi, > > while using ceph hammer i saw in the doc of ceph reweight-by-utilization > that there

Re: [ceph-users] Increase PG number

2016-09-20 Thread Matteo Dacrema
Thanks a lot guys. I’ll try to do as you told me. Best Regards Matteo

Re: [ceph-users] Increase PG number

2016-09-20 Thread Vincent Godin
Hi, In fact, when you increase your PG number, the new PGs will have to peer first, and during this time a lot of PGs will be unreachable. The best way to increase the number of PGs of a cluster (you'll need to adjust the number of PGPs too) is: - Don't forget to apply Goncalo's advice to keep
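
The usual pattern behind such a procedure is to raise pg_num in small steps, wait for the cluster to settle, and only then raise pgp_num to match, so that peering and data movement are spread out. A rough sketch of that loop only (it does not claim to cover the full advice above); the pool name, target, and step size are placeholders:

    import json
    import subprocess
    import time

    POOL = 'data'          # placeholder pool name
    TARGET_PG_NUM = 2048   # placeholder target
    STEP = 256             # raise pg_num in small increments

    def cluster_healthy():
        # 'ceph health' prints HEALTH_OK once peering/backfill has settled.
        return subprocess.check_output(['ceph', 'health']).strip().startswith(b'HEALTH_OK')

    def pool_pg_num():
        out = subprocess.check_output(
            ['ceph', 'osd', 'pool', 'get', POOL, 'pg_num', '--format', 'json'])
        return json.loads(out.decode())['pg_num']

    current = pool_pg_num()
    while current < TARGET_PG_NUM:
        current = min(current + STEP, TARGET_PG_NUM)
        subprocess.check_call(['ceph', 'osd', 'pool', 'set', POOL, 'pg_num', str(current)])
        while not cluster_healthy():
            time.sleep(30)
        # Only bump pgp_num once the new PGs have peered.
        subprocess.check_call(['ceph', 'osd', 'pool', 'set', POOL, 'pgp_num', str(current)])
        while not cluster_healthy():
            time.sleep(30)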

[ceph-users] ceph reweight-by-utilization and increasing

2016-09-20 Thread Stefan Priebe - Profihost AG
Hi, while using ceph hammer I saw in the docs for ceph reweight-by-utilization that there is a --no-increasing flag. I do not use it, but I never saw an increased weight value even though some of my OSDs are really empty. Example: 821G 549G 273G 67% /var/lib/ceph/osd/ceph-110 vs. 821G 767G 54G 94%

Re: [ceph-users] rgw bucket index manual copy

2016-09-20 Thread Wido den Hollander
> On 20 September 2016 at 10:55, Василий Ангапов wrote: > > > Hello, > > Is there any way to copy the rgw bucket index to another Ceph node to > lower the downtime of RGW? For now I have a huge bucket with 200 > million files and its backfilling is blocking RGW completely for

[ceph-users] rgw bucket index manual copy

2016-09-20 Thread Василий Ангапов
Hello, Is there any way to copy the rgw bucket index to another Ceph node to lower the downtime of RGW? For now I have a huge bucket with 200 million files and its backfilling is blocking RGW completely for an hour and a half even with a 10G network. Thanks!