Re: [ceph-users] Multi-site replication speed

2019-04-19 Thread Brian Topping
Hi Casey, I set up a completely fresh cluster on a new VM host. Everything is fresh. I feel like it installed cleanly, and because there is practically zero latency and unlimited bandwidth between peer VMs, this is a better place to experiment. The behavior is the same as the other

Re: [ceph-users] Are there any statistics available on how most production ceph clusters are being used?

2019-04-19 Thread Robin H. Johnson
On Fri, Apr 19, 2019 at 12:10:02PM +0200, Marc Roos wrote: > I am a bit curious about how production ceph clusters are being used. I am > reading here that the block storage is used a lot with openstack and > proxmox, and via iSCSI with VMware. Have you looked at the Ceph User Surveys/Census?

Re: [ceph-users] rgw, nss: dropping the legacy PKI token support in RadosGW (removed in OpenStack Ocata)

2019-04-19 Thread Mike Lowe
I’ve run production Ceph/OpenStack since 2015. The reality is that running OpenStack Newton (the last release with PKI) with a post-Nautilus Ceph release just isn’t going to work. You are going to have bigger problems than trying to make object storage work with Keystone-issued tokens. Worst case is you

Re: [ceph-users] rgw, nss: dropping the legacy PKI token support in RadosGW (removed in OpenStack Ocata)

2019-04-19 Thread Anthony D'Atri
I've been away from OpenStack for a couple of years now, so this may have changed. But back around the Icehouse release, at least, upgrading between OpenStack releases was a major undertaking, so backing an older OpenStack with newer Ceph seems like it might be more common than one might

Re: [ceph-users] Are there any statistics available on how most production ceph clusters are being used?

2019-04-19 Thread Brian Topping
> On Apr 19, 2019, at 10:59 AM, Janne Johansson wrote: > > May the most significant bit of your life be positive. Marc, my favorite thing about open source software is that it has a 100% money-back satisfaction guarantee: if you are not completely satisfied, you can have an instant refund, just

Re: [ceph-users] Are there any statistics available on how most production ceph clusters are being used?

2019-04-19 Thread Janne Johansson
On Fri, 19 Apr 2019 at 12:10, Marc Roos wrote: > > [...]since nobody here is interested in a better rgw client for end > users. I am wondering if the rgw is even being used like this, and what > most production environments look like. > > "Like this"? People use tons of scriptable and built-in

Re: [ceph-users] iSCSI LUN and target Maximums in ceph-iscsi-3.0+

2019-04-19 Thread Jason Dillaman
On Thu, Apr 18, 2019 at 3:47 PM Wesley Dillingham wrote: > > I am trying to determine some sizing limitations for a potential iSCSI > deployment and wondering what's still the current lay of the land: > > Are the following still accurate as of the ceph-iscsi-3.0 implementation > assuming CentOS

Re: [ceph-users] Explicitly picking active iSCSI gateway at RBD/LUN export time.

2019-04-19 Thread Jason Dillaman
On Wed, Apr 17, 2019 at 10:48 AM Wesley Dillingham wrote: > > The man page for gwcli indicates: > > "Disks exported through the gateways use ALUA attributes to provide > ActiveOptimised and ActiveNonOptimised access to the rbd images. Each disk > is assigned a primary owner at creation/import

Re: [ceph-users] Ceph inside Docker containers inside VirtualBox

2019-04-19 Thread Varun Singh
On Fri, Apr 19, 2019 at 10:44 AM Varun Singh wrote: > > On Thu, Apr 18, 2019 at 9:53 PM Siegfried Höllrigl > wrote: > > > > Hi! > > > > I am not 100% sure, but I think --net=host does not propagate /dev/ > > inside the container. > > > > From the error message: > > > > 2019-04-18 07:30:06
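The underlying point is that --net=host only shares the host's network namespace; block devices under /dev are not mapped into the container automatically and have to be passed through explicitly. A minimal sketch of what that can look like, assuming the ceph/daemon container image, its OSD_DEVICE variable, and /dev/sdb as the data disk (all of these names are illustrative, not taken from the thread):

  # --net=host shares networking only; devices must be passed explicitly
  docker run -d --net=host \
    --privileged \
    -v /dev:/dev \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph:/var/lib/ceph \
    -e OSD_DEVICE=/dev/sdb \
    ceph/daemon osd

It is --privileged (or a narrower --device=/dev/sdb) together with the /dev bind mount, not --net=host, that lets the containerized OSD actually open the block device.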

Re: [ceph-users] Intel SSD D3-S4510 and Intel SSD D3-S4610 firmware advisory notice

2019-04-19 Thread Vytautas Jonaitis
Hello all, Thanks! According to Intel, the affected drives are the D3-S4510 and D3-S4610 Series in the 1.92TB and 3.84TB capacities. For those who have these SSDs connected to an LSI/Avago/Broadcom MegaRAID controller, do not forget to run this before updating: isdct set -system EnableLSIAdapter=true Regards, Vytautas J.
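A short sketch of that pre-update step in context (the follow-up listing command is an assumption added here for illustration, not from the thread):

  # Let isdct see Intel SSDs sitting behind a MegaRAID controller
  isdct set -system EnableLSIAdapter=true
  # Confirm the drives are now enumerated before attempting the firmware update
  isdct show -intelssd

Without the EnableLSIAdapter setting, isdct typically cannot enumerate drives attached through the RAID controller, so the firmware update step would have nothing to act on.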

Re: [ceph-users] rgw, nss: dropping the legacy PKI token support in RadosGW (removed in OpenStack Ocata)

2019-04-19 Thread Sage Weil
[Adding ceph-users for better usability] On Fri, 19 Apr 2019, Radoslaw Zarzynski wrote: > Hello, > > RadosGW can use OpenStack Keystone as one of its authentication > backends. Keystone in turn has offered many token variants > over time, with PKI/PKIz being one of them. Unfortunately,
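For reference, validating Keystone's current token formats (Fernet/UUID) does not rely on the NSS certificate settings used for offline PKI token verification; RGW only needs the Keystone endpoint and service credentials. A hedged sketch of the relevant ceph.conf section, with the RGW instance name, host name, and credentials all being made-up placeholders:

  [client.rgw.gateway1]
  rgw_keystone_url = http://keystone.example.com:5000
  rgw_keystone_api_version = 3
  rgw_keystone_admin_user = rgw
  rgw_keystone_admin_password = secret
  rgw_keystone_admin_domain = Default
  rgw_keystone_admin_project = service

With a configuration along these lines, dropping the legacy PKI code path should be invisible to the gateway, which appears to be the point of the proposal.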

Re: [ceph-users] Is it possible to run a standalone Bluestore instance?

2019-04-19 Thread Brad Hubbard
OK. So this works for me with master commit bdaac2d619d603f53a16c07f9d7bd47751137c4c on CentOS 7.5.1804. I cloned the repo and ran './install-deps.sh' and './do_cmake.sh -DWITH_FIO=ON', then 'make all'. # find ./lib -iname '*.so*' | xargs nm -AD 2>&1 | grep _ZTIN13PriorityCache8PriCacheE
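Laid out as an end-to-end sketch of the build described above (the repository URL and the --recursive clone are added here for completeness; the build directory layout is the usual do_cmake.sh one):

  git clone --recursive https://github.com/ceph/ceph.git
  cd ceph
  ./install-deps.sh            # pull in build dependencies for this distro
  ./do_cmake.sh -DWITH_FIO=ON  # configure with the fio objectstore plugin enabled
  cd build
  make all                     # long build; add -j<N> to parallelize
  # Check that the PriorityCache typeinfo symbol is exported by the built libraries:
  find ./lib -iname '*.so*' | xargs nm -AD 2>&1 | grep _ZTIN13PriorityCache8PriCacheE

A matching line from the grep means the symbol the fio/Bluestore plugin needs is present in the built shared libraries.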

[ceph-users] Are there any statistics available on how most production ceph clusters are being used?

2019-04-19 Thread Marc Roos
I am a bit curious about how production ceph clusters are being used. I am reading here that the block storage is used a lot with openstack and proxmox, and via iSCSI with VMware. But since nobody here is interested in a better rgw client for end users, I am wondering if rgw is even being

Re: [ceph-users] rgw windows/mac clients shitty, develop a new one?

2019-04-19 Thread Brian :
I've always used the standalone Mac and Linux package version. Wasn't aware of the 'bundled software' in the installers. Ugh. Thanks for pointing it out. On Thursday, April 18, 2019, Janne Johansson wrote: > https://www.reddit.com/r/netsec/comments/8t4xrl/filezilla_malware/ > > not saying it

Re: [ceph-users] Intel SSD D3-S4510 and Intel SSD D3-S4610 firmware advisory notice

2019-04-19 Thread Irek Fasikhov
Wow!!! Fri, 19 Apr 2019 at 10:16, Stefan Kooman wrote: > Hi List, > > TL;DR: > > For those of you who are running a Ceph cluster with Intel SSD D3-S4510 > and/or Intel SSD D3-S4610 with firmware version XCV10100, please upgrade > to firmware XCV10110 ASAP. At least before ~1700 power-on hours. > >

[ceph-users] Intel SSD D3-S4510 and Intel SSD D3-S4610 firmware advisory notice

2019-04-19 Thread Stefan Kooman
Hi List, TL;DR: For those of you who are running a Ceph cluster with Intel SSD D3-S4510 and/or Intel SSD D3-S4610 drives with firmware version XCV10100, please upgrade to firmware XCV10110 ASAP, at least before ~1700 power-on hours. More information here:
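A quick way to check whether a node is affected, sketched with Intel's isdct tool plus smartctl (the device name and drive index are placeholders, and exact isdct syntax can vary between tool versions):

  isdct show -intelssd                       # model, serial, and current firmware per drive
  smartctl -a /dev/sdX | grep -i power_on    # power-on hours accumulated so far
  isdct load -intelssd 0                     # apply the newer firmware to drive index 0

Comparing the power-on hours against the ~1700 hour threshold first tells you how urgent the update is for each drive.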