[ceph-users] Giant on Centos 7 with custom cluster name

2015-01-17 Thread Erik McCormick
Hello all, I've got an existing Firefly cluster on Centos 7 which I deployed with ceph-deploy. The latest version of ceph-deploy refuses to handle commands issued with a cluster name: [ceph_deploy.install][ERROR ] custom cluster names are not supported on sysvinit hosts This is a

Re: [ceph-users] Ceph and Openstack

2015-04-01 Thread Erik McCormick
Can you both set Cinder and / or Glance logging to debug and provide some logs? There was an issue with the first Juno release of Glance in some vendor packages, so make sure you're fully updated to 2014.2.2 On Apr 1, 2015 7:12 PM, Quentin Hartman qhart...@direwolfdigital.com wrote: I am

Re: [ceph-users] Rados Gateway and keystone

2015-04-13 Thread Erik McCormick
I haven't really used the S3 stuff much, but the credentials should be in Keystone already. If you're in Horizon, you can download them under Access and Security -> API Access. Using the CLI, you can use the openstack client, e.g. openstack credential list | show | create | delete | set, or with the
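
A minimal CLI sketch of pulling credentials out of Keystone, assuming a reasonably recent python-openstackclient; the EC2-style access/secret pair is what an S3 client would typically use against the Rados Gateway:

    openstack credential list
    openstack ec2 credentials create    # issues an access/secret key pair
    openstack ec2 credentials list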

Re: [ceph-users] ceph and glance... permission denied??

2015-04-06 Thread Erik McCormick
Glance needs some additional permissions including write access to the pool you want to add images to. See the docs at: http://ceph.com/docs/master/rbd/rbd-openstack/ Cheers, Erik On Apr 6, 2015 7:21 AM, florian.rom...@datalounges.com wrote: Hi, first off: long time reader, first time poster
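
For reference, the era-appropriate rbd-openstack docs suggest caps along these lines; a sketch assuming the Glance client is named client.glance and the image pool is called images:

    ceph auth get-or-create client.glance mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'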

Re: [ceph-users] Ceph and Openstack

2015-04-02 Thread Erik McCormick
- IT Center for Science, Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland mobile: +358 503 812758 tel. +358 9 4572001 fax +358 9 4572302 http://www.csc.fi/ On 02 Apr 2015, at 03:10, Erik McCormick emccorm...@cirrusseven.com

Re: [ceph-users] Ceph and Openstack

2015-04-02 Thread Erik McCormick
Espoo, Finland mobile: +358 503 812758 tel. +358 9 4572001 fax +358 9 4572302 http://www.csc.fi/ On 02 Apr 2015, at 03:10, Erik McCormick emccorm...@cirrusseven.com wrote: Can you both set Cinder and / or Glance logging to debug

Re: [ceph-users] Ceph and Openstack

2015-04-02 Thread Erik McCormick
On Thu, Apr 2, 2015 at 10:06 AM, Erik McCormick emccorm...@cirrusseven.com wrote: The RDO glance-store package had a bug in it that miscalculated the chunk size. I should hope that it's been patched by Red Hat by now, since the fix was committed upstream before the first Juno release, but perhaps
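
For anyone checking their own setup, the Juno-era glance-api.conf options involved look roughly like this (pool and user names are assumptions; rbd_store_chunk_size is in MB):

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8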

Re: [ceph-users] QEMU Venom Vulnerability

2015-05-19 Thread Erik McCormick
Sorry, I made the assumption you were on 7. If you're on 6 then I defer to someone else ;) If you're on 7, go here. http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/ On May 19, 2015 2:47 PM, Georgios Dimitrakakis gior...@acmac.uoc.gr wrote: Erik, are you talking about

Re: [ceph-users] Rename Ceph cluster

2015-08-18 Thread Erik McCormick
I've got a custom named cluster integrated with Openstack (Juno) and didn't run into any hard-coded name issues that I can recall. Where are you seeing that? As to the name change itself, I think it's really just a label applying to a configuration set. The name doesn't actually appear *in* the

Re: [ceph-users] hadoop on cephfs

2016-04-30 Thread Erik McCormick
I think what you are thinking of is the driver that was built to actually replace HDFS with RBD. As far as I know that thing had a very short lifespan on one version of Hadoop. Very sad. As to what you proposed: 1) Don't use CephFS in production pre-Jewel. 2) Running HDFS on top of Ceph is a

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Erik McCormick
On Oct 27, 2016 3:16 PM, "Oliver Dzombic" wrote: > > Hi, > > i can recommand > > X710-DA2 > Weer also use this nice for everything. > 10G Switch is going over our bladeinfrastructure, so i can't recommand > something for you there. > > I assume that the usual

[ceph-users] Change ownership of objects

2016-12-07 Thread Erik McCormick
Hello everyone, I am running Ceph (Firefly) Radosgw integrated with Openstack Keystone. Recently we built a whole new Openstack cloud and created users in that cloud. The names are the same, but the UUIDs are not. Both clouds are using the same Ceph cluster with their own RGW. I have managed
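
A rough sketch of the usual approach for re-homing a bucket, with hypothetical old/new user IDs; radosgw-admin syntax has shifted a bit between releases, so check your version:

    radosgw-admin bucket unlink --uid=OLD_UID --bucket=mybucket
    radosgw-admin bucket link --uid=NEW_UID --bucket=mybucket

Note that this only changes which user owns the bucket entry; object ACLs may still reference the old UID.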

Re: [ceph-users] Ceph Giant Repo problem

2017-03-30 Thread Erik McCormick
Try setting obsoletes=0 in /etc/yum.conf and see if that doesn't make it happier. The package is clearly there and it even shows it available in your log. -Erik On Thu, Mar 30, 2017 at 8:55 PM, Vlad Blando wrote: > Hi Guys, > > I encountered some issue with installing
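
Either make it permanent in /etc/yum.conf or pass it for a single run; a quick sketch:

    # /etc/yum.conf
    [main]
    obsoletes=0

    # or as a one-off:
    yum --setopt=obsoletes=0 install ceph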

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Erik McCormick
On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil wrote: > On Thu, 8 Jun 2017, Sage Weil wrote: >> Questions: >> >> - Does anybody on the list use a non-default cluster name? >> - If so, do you have a reason not to switch back to 'ceph'? > > It sounds like the answer is "yes," but

Re: [ceph-users] Creating a custom cluster name using ceph-deploy

2017-10-15 Thread Erik McCormick
Do not, under any circumstances, make a custom named cluster. There be pain and suffering (and dragons) there, and official support for it has been deprecated. On Oct 15, 2017 6:29 PM, "Bogdan SOLGA" wrote: > Hello, everyone! > > We are trying to create a custom cluster

Re: [ceph-users] Ceph monitoring

2017-10-02 Thread Erik McCormick
On Mon, Oct 2, 2017 at 11:55 AM, Matthew Vernon wrote: > On 02/10/17 12:34, Osama Hasebou wrote: >> Hi Everyone, >> >> Is there a guide/tutorial about how to setup Ceph monitoring system >> using collectd / grafana / graphite ? Other suggestions are welcome as >> well ! > > We

Re: [ceph-users] Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

2017-11-16 Thread Erik McCormick
I was told at the Openstack Summit that 12.2.2 should drop "In a few days." That was a week ago yesterday. If you have a little leeway, it may be best to wait. I know I am, but I'm paranoid. There was also a performance regression mentioned recently that's supposed to be fixed. -Erik On Nov

Re: [ceph-users] removing cluster name support

2017-11-06 Thread Erik McCormick
On Fri, Jun 9, 2017 at 12:30 PM, Sage Weil <s...@newdream.net> wrote: > On Fri, 9 Jun 2017, Erik McCormick wrote: >> On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil <s...@newdream.net> wrote: >> > On Thu, 8 Jun 2017, Sage Weil wrote: >> >> Questions: >>

Re: [ceph-users] removing cluster name support

2017-11-07 Thread Erik McCormick
On Nov 8, 2017 7:33 AM, "Vasu Kulkarni" wrote: On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil wrote: > On Tue, 7 Nov 2017, Alfredo Deza wrote: >> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai wrote: >> > On Fri, Jun 9, 2017 at 3:37 AM, Sage

Re: [ceph-users] Luminous v12.2.2 released

2017-12-05 Thread Erik McCormick
On Dec 5, 2017 10:26 AM, "Florent B" wrote: On Debian systems, upgrading packages does not restart services ! You really don't want it to restart services. Many small clusters run mons and osds on the same nodes, and auto restart makes it impossible to order restarts.
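
A sketch of the usual manual ordering on a converged node, assuming Luminous systemd targets; the noout flag is optional but avoids rebalancing while OSDs bounce:

    ceph osd set noout
    systemctl restart ceph-mon.target      # mons first; wait for quorum (ceph -s)
    systemctl restart ceph-osd.target
    ceph osd unset noout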

[ceph-users] Multiple Rados Gateways with different auth backends

2018-06-12 Thread Erik McCormick
Hello all, I have recently had need to make use of the S3 API on my Rados Gateway. We've been running just the Swift API backed by Openstack for some time with no issues. Upon trying to use the S3 API I discovered that our combination of Jewel and Keystone renders AWS v4 signatures unusable.
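
For context, the Jewel-era ceph.conf options involved look roughly like this (the section name, URL, and roles are assumptions); the thread is about AWS v4 signatures not working through this path, so S3 clients may need to be forced down to v2 signing:

    [client.rgw.gateway]
    rgw keystone url = http://keystone.example.com:5000
    rgw keystone api version = 3
    rgw keystone accepted roles = admin,_member_
    rgw s3 auth use keystone = true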

Re: [ceph-users] civetweb: ssl_private_key

2018-05-29 Thread Erik McCormick
On Tue, May 29, 2018, 11:00 AM Marc Roos wrote: > > I guess we will not get this ssl_private_key option unless we upgrade > from Luminous? > > > http://docs.ceph.com/docs/master/radosgw/frontends/ > > That option is only for Beast. For civetweb you just feed it ssl_certificate with a combined
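
A sketch of the civetweb form, assuming the combined key+cert lives at /etc/ceph/private/rgw.pem; the trailing 's' on the port number is what turns SSL on:

    [client.rgw.gateway]
    rgw frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/rgw.pem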

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-01 Thread Erik McCormick
Don't set a cluster name. It's no longer supported. It really only matters if you're running two or more independent clusters on the same boxes. That's generally inadvisable anyway. Cheers, Erik On Wed, Aug 1, 2018, 9:17 PM Glen Baars wrote: > Hello Ceph Users, > > Does anyone know how to set
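
As a quick illustration of why the name only matters for co-located clusters (and why it is a hassle): a cluster named "backup" reads /etc/ceph/backup.conf and needs --cluster on every command, e.g. (illustrative only):

    ceph --cluster backup -s
    rbd --cluster backup ls images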

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-08-08 Thread Erik McCormick
> Hi, > > > > We are still blocked by this problem on our end. Glen, did you or someone > else figure something out for this? > > > > Regards > > Jocelyn Thode > > > > From: Glen Baars [mailto:g...@onsitecomputers.com.au] > Sent: Thursday, 2 August 2

Re: [ceph-users] New Ceph community manager: Mike Perez

2018-08-28 Thread Erik McCormick
Wherever I go, there you are ;). Glad to have you back again! Cheers, Erik On Tue, Aug 28, 2018, 10:25 PM Dan Mick wrote: > On 08/28/2018 06:13 PM, Sage Weil wrote: > > Hi everyone, > > > > Please help me welcome Mike Perez, the new Ceph community manager! > > > > Mike has a long history with

Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Erik McCormick
On Feb 28, 2018 10:06 AM, "Max Cuttins" wrote: Il 28/02/2018 15:19, Jason Dillaman ha scritto: > On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini > wrote: > >> I was building ceph in order to use with iSCSI. >> But I just see from the docs that

Re: [ceph-users] list admin issues

2018-10-09 Thread Erik McCormick
Without an example of the bounce response itself it's virtually impossible to troubleshoot. Can someone with mailman access please provide an example of a bounce response? All the attachments on those rejected messages are just HTML copies of the message (which are not on the list of filtered

[ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
Hello, I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and running into difficulties getting the current stable release running. The version in the Luminous repo is stuck at 2.6.1, whereas the current stable version is 2.6.3. I've seen a couple of HA issues in pre-2.6.3 versions
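
For reference, a minimal ganesha.conf export sketch for the Ceph FSAL, assuming a cephx user named "admin" and NFSv4 clients; option names follow the 2.6-era documentation:

    EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Protocols = 4;
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;
            User_Id = "admin";
        }
    }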

Re: [ceph-users] list admin issues

2018-10-06 Thread Erik McCormick
This has happened to me several times as well. This address is hosted on gmail. -Erik On Sat, Oct 6, 2018, 9:06 AM Elias Abacioglu < elias.abacio...@deltaprojects.com> wrote: > Hi, > > I'm bumping this old thread cause it's getting annoying. My membership get > disabled twice a month. > Between

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza wrote: > On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick > wrote: > > > > On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick > > wrote: > > > > > > Hello, > > > > > > I'm trying to set up

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
(xdrproc_t) xdr_entry4)) I'm guessing I am missing some newer version of a library somewhere, but not sure. Any tips for successfully getting it to build? -Erik > Kevin > > > Am Di., 9. Okt. 2018 um 19:39 Uhr schrieb Erik McCormick < > emccorm...@cirrusseven.com>: > >

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick wrote: > > Hello, > > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and > running into difficulties getting the current stable release running. > The versions in the Luminous repo is stuck at 2.6.1, whereas th

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018 at 2:55 PM Erik McCormick wrote: > > > > On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich wrote: >> >> I had a similar problem: >> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html >> >> But even the recent 2

Re: [ceph-users] Ceph on Azure ?

2018-12-23 Thread Erik McCormick
Dedicated links are not that difficult to come by anymore. It's mainly done with SDN. I know Megaport, for example, lets you provision virtual circuits to dozens of providers including Azure, AWS, and GCP. You can run several virtual circuits over a single cross-connect. I look forward to

Re: [ceph-users] network architecture questions

2018-09-18 Thread Erik McCormick
On Tue, Sep 18, 2018, 7:56 PM solarflow99 wrote: > thanks for the replies, I don't know that CephFS clients go through the > MONs; they reach the OSDs directly. When I mentioned NFS, I meant NFS > clients (ie. not CephFS clients). This should have been pretty > straightforward. > Anyone doing

Re: [ceph-users] Bluestore WAL/DB decisions

2019-03-29 Thread Erik McCormick
On Fri, Mar 29, 2019 at 1:48 AM Christian Balzer wrote: > > On Fri, 29 Mar 2019 01:22:06 -0400 Erik McCormick wrote: > > > Hello all, > > > > Having dug through the documentation and reading mailing list threads > > until my eyes rolled back in my head, I am left

[ceph-users] Bluestore WAL/DB decisions

2019-03-28 Thread Erik McCormick
Hello all, Having dug through the documentation and reading mailing list threads until my eyes rolled back in my head, I am left with a conundrum still: do I separate the DB / WAL or not? I had a bunch of nodes running Filestore with 8 x 8TB spinning OSDs and 2 x 240 GB SSDs. I had put the OS on
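
If the answer ends up being "separate them", the deployment side is simple enough; a sketch with assumed device names, putting the DB (which also holds the WAL unless given its own device) on an SSD partition:

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sda1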

Re: [ceph-users] Commercial support

2019-01-23 Thread Erik McCormick
SUSE as well: https://www.suse.com/products/suse-enterprise-storage/ On Wed, Jan 23, 2019, 6:01 PM Alex Gorbachev wrote: > On Wed, Jan 23, 2019 at 5:29 PM Ketil Froyn wrote: > > > > Hi, > > > > How is the commercial support for Ceph? More specifically, I was > recently pointed in the direction of the

Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-11 Thread Erik McCormick
On Thu, Apr 11, 2019, 8:39 AM Erik McCormick wrote: > > > On Thu, Apr 11, 2019, 12:07 AM Brayan Perera > wrote: > >> Dear Jason, >> >> >> Thanks for the reply. >> >> We are using python 2.7.5 >> >> Yes. script is based on opensta

Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-11 Thread Erik McCormick
On Thu, Apr 11, 2019, 12:07 AM Brayan Perera wrote: > Dear Jason, > > > Thanks for the reply. > > We are using python 2.7.5 > > Yes. script is based on openstack code. > > As suggested, we have tried chunk_size 32 and 64, and both giving same > incorrect checksum value. > The value of
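
Not the original script, but a minimal sketch of the chunked-read-and-hash approach under discussion, assuming a pool named "images", an image named "my-image", and python-rados/python-rbd installed:

    import hashlib
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('images')            # pool name is an assumption
    with rbd.Image(ioctx, 'my-image', read_only=True) as image:
        md5 = hashlib.md5()
        size = image.size()
        chunk = 8 * 1024 * 1024                     # read in 8 MiB pieces
        offset = 0
        while offset < size:
            data = image.read(offset, min(chunk, size - offset))
            md5.update(data)
            offset += len(data)
        print(md5.hexdigest())
    ioctx.close()
    cluster.shutdown()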

Re: [ceph-users] Glance client and RBD export checksum mismatch

2019-04-12 Thread Erik McCormick
On Thu, Apr 11, 2019, 8:53 AM Jason Dillaman wrote: > On Thu, Apr 11, 2019 at 8:49 AM Erik McCormick > wrote: > > > > > > > > On Thu, Apr 11, 2019, 8:39 AM Erik McCormick > wrote: > >> > >> > >> > >> On Thu,

Re: [ceph-users] IMPORTANT : NEED HELP : Low IOPS on hdd : MAX AVAIL Draining fast

2019-04-27 Thread Erik McCormick
On Sat, Apr 27, 2019, 3:49 PM Nikhil R wrote: > We have bare-metal nodes with 256GB RAM, 36-core CPUs > We are on Ceph Jewel 10.2.9 with leveldb > The OSDs and journals are on the same HDD. > We have 1 backfill_max_active, 1 recovery_max_active and 1 > recovery_op_priority > The OSD crashes and starts
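
For reference, those throttles map to the following runtime settings (a sketch; on Jewel they can also be persisted under [osd] in ceph.conf):

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'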

Re: [ceph-users] ceph-volume failed after replacing disk

2019-07-05 Thread Erik McCormick
If you create the OSD without specifying an ID it will grab the lowest available one. Unless you have other gaps somewhere, that ID would probably be the one you just removed. -Erik On Fri, Jul 5, 2019, 9:19 AM Paul Emmerich wrote: > > On Fri, Jul 5, 2019 at 2:17 PM Alfredo Deza wrote: > >>
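
If you specifically want to reuse (or avoid accidentally reusing) an ID, ceph-volume can be told explicitly; a sketch with an assumed ID and device:

    ceph osd purge 12 --yes-i-really-mean-it                       # fully release the old ID first
    ceph-volume lvm create --bluestore --data /dev/sdc --osd-id 12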

Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-28 Thread Erik McCormick
On Fri, Jun 28, 2019, 10:05 AM Alfredo Deza wrote: > On Fri, Jun 28, 2019 at 7:53 AM Stolte, Felix > wrote: > > > > Thanks for the update Alfredo. What steps need to be done to rename my > cluster back to "ceph"? > > That is a tough one, the ramifications of a custom cluster name are > wild -