Re: [ceph-users] Multi-site replication speed

2019-04-14 Thread Brian Topping


> On Apr 14, 2019, at 2:08 PM, Brian Topping  wrote:
> 
> Every so often I might see the link running at 20 Mbits/sec, but it’s not 
> consistent. It’s probably going to take a very long time at this rate, if 
> ever. What can I do?

Correction: I was looking at statistics on an aggregate interface while my 
laptop was rebuilding a mailbox. The typical transfer is around 60 Kbit/s, 
but as I said, iperf3 can easily push the link between the two points to 
>750 Mbit/s. Both machines also sit at >90% idle the whole time...
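
A quick way to watch the replication interface on its own rather than an 
aggregate (just a sketch; eth0 is a placeholder for the actual interface name):

  ip -s link show dev eth0   # cumulative RX/TX byte and packet counters
  sar -n DEV 1               # per-interface throughput in 1-second samples (sysstat package)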



[ceph-users] Multi-site replication speed

2019-04-14 Thread Brian Topping
Hi all! I’m finally running with Ceph multi-site per 
http://docs.ceph.com/docs/nautilus/radosgw/multisite/, woo hoo!

I wanted to confirm whether this process is expected to be slow. It’s been a 
couple of hours since the sync started and `radosgw-admin sync status` does not 
report any errors, but the speeds are nowhere near link saturation. iperf3 
reports 773 Mbit/s on the link in TCP mode, and latency is about 5 ms.
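
For reference, these are the sync-inspection commands I’m aware of, in case 
they show where things actually stand (the source zone name is a placeholder):

  radosgw-admin sync status
  radosgw-admin metadata sync status
  radosgw-admin data sync status --source-zone=us-east   # placeholder zone name
  radosgw-admin sync error list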

Every so often I might see the link running at 20 Mbits/sec, but it’s not 
consistent. It’s probably going to take a very long time at this rate, if ever. 
What can I do?

I’m using civetweb without SSL on the gateway endpoints, with only a single 
master/mon/rgw node at each end, all on Nautilus 14.2.0.
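
The frontend setup is nothing exotic, roughly this in ceph.conf (the client 
section name and port are placeholders):

  [client.rgw.gw1]
      rgw_frontends = "civetweb port=7480"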

Apologies if I’ve missed some crucial tuning docs or archive messages somewhere 
on the subject.

Thanks! Brian


Re: [ceph-users] Decreasing pg_num

2019-04-14 Thread Feng Zhang
You might have some clients with an older version?

Or need to do: ceph osd require-osd-release ***?
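
For example, something along these lines, assuming the cluster really is 
all-Nautilus as the `ceph tell` output suggests:

  ceph osd dump | grep require_osd_release   # see what the osdmap currently requires
  ceph osd require-osd-release nautilus      # raise it if it still shows mimic or older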

Best,

Feng



On Sun, Apr 14, 2019 at 1:31 PM Alfredo Daniel Rezinovsky wrote:
>
> autoscale-status reports some of my PG_NUMs are way too big
>
> I have 256 and need 32
>
> POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
> rbd   1214G               3.0   56490G        0.0645                256     32          warn
>
> If I try to decrease the pg_num I get:
>
> # ceph osd pool set rbd pg_num  32
> Error EPERM: nautilus OSDs are required to decrease pg_num
>
> But all my OSDs are Nautilus:
>
> ceph tell osd.* version
> [all 12 OSDs report "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"]
>
> Should I leave pg_num as it is, or is there a way to reduce it?
>


[ceph-users] Decreasing pg_num

2019-04-14 Thread Alfredo Daniel Rezinovsky

autoscale-status reports some of my PG_NUMs are way too big

I have 256 and need 32

POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
rbd   1214G               3.0   56490G        0.0645                256     32          warn


If I try to decrease the pg_num I get:

# ceph osd pool set rbd pg_num  32
Error EPERM: nautilus OSDs are required to decrease pg_num

But all my OSDs are Nautilus:

ceph tell osd.* version
osd.0: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.1: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.2: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.3: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.4: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.5: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.6: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.7: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.8: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.9: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.10: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
osd.11: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}

Should I leave pg_num as it is, or is there a way to reduce it?
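
For what it’s worth, once the osdmap’s require_osd_release flag is at nautilus, 
either of these should let the pool shrink (a sketch, using the pool from above):

  ceph osd pool set rbd pg_num 32                # step pg_num down manually
  ceph osd pool set rbd pg_autoscale_mode on     # or let the autoscaler act on its own suggestion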
