Re: [ceph-users] rgw: Moving index objects to the right index_pool

2018-02-14 Thread Ingo Reimann
Hi Yehuda, Thanks for your help. No, listing does not work if I remove the old index objects. I guessed I could use resharding for my purpose. I just tried:
* copy the index object
* rewrite the bucket metadata
* reshard
=> I get new index objects at the old place. Metadata gets turned back
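For reference, the pieces involved look roughly like this (bucket and pool names are placeholders, not taken from this thread):

    # index objects are named .dir.<bucket_id>; list them in the old pool
    rados -p rgw.buckets ls | grep '^\.dir\.'
    # dump and re-apply the bucket instance metadata that records the index placement
    radosgw-admin metadata get bucket.instance:<bucket>:<bucket_id> > bi.json
    radosgw-admin metadata put bucket.instance:<bucket>:<bucket_id> < bi.json
    # reshard, which is what recreates the index objects
    radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<n>

Note that the index entries live in omap on the .dir.* objects, so a plain rados get/put does not carry them.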

Re: [ceph-users] mgr[influx] Cannot transmit statistics: influxdb python module not found.

2018-02-14 Thread knawnd
Benjeman Meekhof wrote on 12/02/18 23:50: In our case I think we grabbed the SRPM from Fedora and rebuilt it on Scientific Linux (another RHEL derivative). I've just done the same: rebuilt from the fc28 srpm (some spec-file tuning was required to build it on centos 7). Presumably the binary
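For anyone else hitting this, the rebuild boils down to something like (package file names are illustrative):

    rpmbuild --rebuild python-influxdb-*.fc28.src.rpm
    yum install ~/rpmbuild/RPMS/noarch/python-influxdb-*.rpm

Alternatively, installing the module with pip (pip install influxdb) on the mgr nodes also satisfies the mgr influx module.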

Re: [ceph-users] rgw gives MethodNotAllowed for OPTIONS?

2018-02-14 Thread Yehuda Sadeh-Weinraub
The CORS-related operations work on specific buckets, not on the service root. You'll need to set CORS on a bucket and specify the bucket in the path. Yehuda On Mon, Feb 12, 2018 at 5:17 PM, Piers Haken wrote: > I’m trying to do direct-from-browser upload to rgw using
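A minimal sketch of setting a bucket CORS policy against rgw with the AWS CLI (endpoint and bucket name assumed):

    # cors.json:
    {"CORSRules": [{"AllowedOrigins": ["*"],
                    "AllowedMethods": ["GET", "PUT", "POST"],
                    "AllowedHeaders": ["*"],
                    "MaxAgeSeconds": 3000}]}

    aws --endpoint-url http://rgw.example.com s3api put-bucket-cors \
        --bucket mybucket --cors-configuration file://cors.json

The browser's OPTIONS preflight then has to target /mybucket/... rather than the service root.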

Re: [ceph-users] rgw: Moving index objects to the right index_pool

2018-02-14 Thread Yehuda Sadeh-Weinraub
On Tue, Feb 13, 2018 at 11:27 PM, Ingo Reimann wrote: > Hi List, > > we want to brush up our cluster and correct things that have been changed > over time. When we started with bobtail, we put all index objects together > with data into the pool rgw.buckets: > >

Re: [ceph-users] Ceph luminous performance - how to calculate expected results

2018-02-14 Thread Maged Mokhtar
On 2018-02-14 20:14, Steven Vacaroaia wrote: > Hi, > > It is very useful to "set up expectations" from a performance perspective > > I have a cluster using 3 DELL R620 with 64 GB RAM and 10 GB cluster network > > I've seen numerous posts and articles about the topic mentioning the >

Re: [ceph-users] RGW Metadata Search - Elasticserver

2018-02-14 Thread Yehuda Sadeh-Weinraub
On Wed, Feb 14, 2018 at 2:54 AM, Amardeep Singh wrote: > Hi, > > I am trying to setup RGW Metadata Search with Elastic server tier type as > per blog post here. https://ceph.com/rgw/new-luminous-rgw-metadata-search/ > > The environment setup is done using ceph-ansible

Re: [ceph-users] Is there a "set pool readonly" command?

2018-02-14 Thread ceph
On 12 February 2018 20:32:24 CET, David Turner wrote: > The pause flag also pauses recovery traffic. It is literally a flag to stop anything and everything in the cluster so you can get an expert in to prevent something even worse from happening. I am not sure but
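For the record, toggling it cluster-wide is just:

    ceph osd set pause
    ceph osd unset pause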

Re: [ceph-users] Ceph Day Germany :)

2018-02-14 Thread Danny Al-Gaaf
On 12.02.2018 at 10:39, Wido den Hollander wrote: > On 02/12/2018 12:33 AM, c...@elchaka.de wrote: >> On 9 February 2018 11:51:08 CET, Lenz Grimmer wrote: >>> Hi all, >>> On 02/08/2018 11:23 AM, Martin Emrich wrote: >>> I just want to thank all organizers

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-14 Thread Mike Christie
On 02/13/2018 01:09 PM, Steven Vacaroaia wrote: > Hi, > > I noticed a new ceph kernel (4.15.0-ceph-g1c778f43da52) was made available > so I have upgraded my test environment > ... > > It will be appreciated if someone can provide instructions / steps for > upgrading the kernel without

[ceph-users] Ceph luminous performance - how to calculate expected results

2018-02-14 Thread Steven Vacaroaia
Hi, It is very useful to "set up expectations" from a performance perspective. I have a cluster using 3 DELL R620 with 64 GB RAM and a 10 Gb cluster network. I've seen numerous posts and articles about the topic mentioning the following formula (for disks with WAL/DB on them): OSD / replication / 2
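To make that concrete (numbers assumed for illustration, not from this thread): with the WAL/DB co-located on each spinner, the rule of thumb is expected write IOPS ≈ (per-disk IOPS × number of OSDs) / replication factor / 2, so 12 OSDs of ~150 IOPS each at replication 3 would give roughly (150 × 12) / 3 / 2 = 300 client write IOPS.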

Re: [ceph-users] Understanding/correcting sudden onslaught of unfound objects

2018-02-14 Thread Gregory Farnum
On Tue, Feb 13, 2018 at 8:41 AM Graham Allan wrote: > I'm replying to myself here, but it's probably worth mentioning that > after this started, I did bring back the failed host, though with "ceph > osd weight 0" to avoid more data movement. > > For inconsistent pgs containing

Re: [ceph-users] Killall in the osd log

2018-02-14 Thread Gregory Farnum
On Wed, Feb 14, 2018 at 1:21 AM Marc Roos wrote: > > I guess this is normal, because I see this in all osd logs that I have > checked? > > 2018-02-14 03:18:01.615926 7f7d62f99700 -1 received signal: Hangup from > PID: 13737 task name: killall -q -1 ceph-mon ceph-mgr

Re: [ceph-users] Bluestores+LVM via ceph-volume in Luminous?

2018-02-14 Thread Andre Goree
On 2018/02/01 1:42 pm, Andre Goree wrote: On 2018/02/01 1:17 pm, Andre Goree wrote: On 2018/02/01 11:58 am, Alfredo Deza wrote: This is the actual command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
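For anyone following along, the usual end-to-end form with ceph-volume on Luminous is roughly (VG/LV names are placeholders):

    ceph-volume lvm create --bluestore --data vg_hdd/lv_osd0 --block.db vg_ssd/lv_db0
    # or split into two phases:
    ceph-volume lvm prepare --bluestore --data vg_hdd/lv_osd0
    ceph-volume lvm activate --all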

Re: [ceph-users] Monitor won't upgrade

2018-02-14 Thread David Turner
From the mon.0 server run `ceph --version`. If you've restarted the mon daemon and it is still showing 0.94.5, it is most likely because that is the version of the packages on that server. On Wed, Feb 14, 2018 at 10:56 AM Mark Schouten wrote: > Hi, > I have a (Proxmox)
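To compare what is installed with what is actually running on a mon host, something like this works (the admin socket name is assumed to match the short hostname):

    ceph --version                          # version of the locally installed packages
    ceph daemon mon.$(hostname -s) version  # version of the running mon, via its admin socket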

Re: [ceph-users] Shutting down half / full cluster

2018-02-14 Thread David Turner
All Ceph flags are global. Setting them from any server that has the right keyring (osd nodes, mons, etc. usually do) will set the flag for the entire cluster. Setting pause on the cluster will prevent anything from changing. OSDs will not be able to be marked down, no map updates will

Re: [ceph-users] removing cache of ec pool (bluestore) with ec_overwrites enabled

2018-02-14 Thread David Turner
http://tracker.ceph.com/issues/22754 This is a bug in Luminous for cephfs volumes; it is not anything you're doing wrong. The mon check for removing a cache tier only sees that the base pool is EC and used by CephFS, and says no. The above tracker has a PR marked for backporting into Luminous to respond yes if

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-14 Thread Steven Vacaroaia
all --upgrade . > > Processing /root/pyudev > > Collecting six (from pyudev==0.21.0dev-20180214) > > Downloading six-1.11.0-py2.py3-none-any.whl > > Installing collected packages: six, pyudev > > Found existing installation: six 1.9.0 > > Unin

Re: [ceph-users] Shutting down half / full cluster

2018-02-14 Thread DHilsbos
All; This might be a noob type question, but this thread is interesting, and there's one thing I would like clarified. David Turner mentions setting 3 flags on OSDs and Götz has mentioned 5 flags; do the commands need to be run on all OSD nodes, or just one? Thank you, Dominic L. Hilsbos, MBA

Re: [ceph-users] Shutting down half / full cluster

2018-02-14 Thread David Turner
ceph osd set noout
ceph osd set nobackfill
ceph osd set norecover

noout will prevent OSDs from being marked out during the maintenance, and the other two flags keep PGs from shifting data around. After everything is done, unset the 3 flags and you're good to go. On Wed, Feb 14, 2018 at
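For completeness, once the maintenance is over the flags come off the same way:

    ceph osd unset noout
    ceph osd unset nobackfill
    ceph osd unset norecover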

Re: [ceph-users] BlueStore & Journal

2018-02-14 Thread DHilsbos
David; Thank you for responding so quickly. I believe I've been looking at Master. I found the information on BlueStore five or ten minutes after I sent the email, but I appreciate the summary. Thank you, Dominic L. Hilsbos, MBA Director – Information Technology Perform Air International

[ceph-users] Monitor won't upgrade

2018-02-14 Thread Mark Schouten
Hi, I have a (Proxmox) cluster with Hammer and I started thinking about different versions of daemons that are running. So last night I started restarting daemons on a lot of clusters to get all versions per cluster in sync. There is one cluster that is giving me issues: root@proxmox2:~#

Re: [ceph-users] Deployment with Xen

2018-02-14 Thread David Turner
First off to answer your questions about mons, you need to understand that they work in a Paxos quorum. What that means is that there needs to be a majority of mons that agree that they are in charge. This is why an even number of mons is a bad idea, as they can potentially split themselves in

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-14 Thread Jason Dillaman
nstalled latest version of > python-pydev ( 0.21) > > git clone git://github.com/pyudev/pyudev.git > > pyudev]# pip install --upgrade . > Processing /root/pyudev > Collecting six (from pyudev==0.21.0dev-20180214) > Downloading six-1.11.0-py2.py3-none-any.whl >

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-14 Thread Steven Vacaroaia
Thank you for the prompt response. I was unable to install rtslib even AFTER I installed the latest version of python-pyudev (0.21).

git clone git://github.com/pyudev/pyudev.git
pyudev]# pip install --upgrade .
Processing /root/pyudev
Collecting six (from pyudev==0.21.0dev-20180214)
Downloading
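In case it saves someone a search, rtslib usually installs the same way as pyudev above (assuming the open-iscsi/rtslib-fb repository is the one wanted by the ceph-iscsi tools):

    git clone https://github.com/open-iscsi/rtslib-fb.git
    cd rtslib-fb
    python setup.py install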

[ceph-users] removing cache of ec pool (bluestore) with ec_overwrites enabled

2018-02-14 Thread Kenneth Waegeman
Hi all, I'm trying to remove the cache from an erasure-coded pool where all osds are bluestore osds and allow_ec_overwrites is true. I followed the steps on http://docs.ceph.com/docs/master/rados/operations/cache-tiering/, but with the remove-overlay step I'm getting an EBUSY error:
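For reference, the documented sequence is roughly (pool names are placeholders):

    ceph osd tier cache-mode <cachepool> forward --yes-i-really-mean-it
    rados -p <cachepool> cache-flush-evict-all
    ceph osd tier remove-overlay <ecpool>
    ceph osd tier remove <ecpool> <cachepool>

and it is the remove-overlay step that returns EBUSY here.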

[ceph-users] RGW Metadata Search - Elasticserver

2018-02-14 Thread Amardeep Singh
Hi, I am trying to set up RGW Metadata Search with the Elasticsearch tier type as per the blog post here: https://ceph.com/rgw/new-luminous-rgw-metadata-search/ The environment setup is done using ceph-ansible docker containers. Containers running on Node 1 - rgw, mds, mgr, mon, 5 osds
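For reference, the zone-level wiring described in that blog post boils down to something like (zone name and endpoints are illustrative):

    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=es-meta \
        --endpoints=http://rgw-sync-host:8002 --tier-type=elasticsearch
    radosgw-admin zone modify --rgw-zone=es-meta \
        --tier-config=endpoint=http://elastic-host:9200,num_shards=10,num_replicas=1
    radosgw-admin period update --commit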

Re: [ceph-users] Shutting down half / full cluster

2018-02-14 Thread Götz Reinicke
Thanks! Götz > On 14.02.2018 at 11:16, Kai Wagner wrote: > Hi, > maybe it's worth looking at this: > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-April/017378.html

Re: [ceph-users] Shutting down half / full cluster

2018-02-14 Thread Kai Wagner
Hi, maybe it's worth looking at this: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-April/017378.html Kai On 02/14/2018 11:06 AM, Götz Reinicke wrote: > Hi, > > We have some work to do on our power lines for all building and we have to > shut down all systems. So there is also no

[ceph-users] Shutting down half / full cluster

2018-02-14 Thread Götz Reinicke
Hi, We have some work to do on our power lines for all buildings and we have to shut down all systems, so there is also no traffic on any ceph client. Pity, we have to shut down some ceph nodes too in an affected building. To avoid rebalancing - as I see there is no need for it, as there is no

[ceph-users] Killall in the osd log

2018-02-14 Thread Marc Roos
I guess this is normal, because I see this in all osd logs that I have checked? 2018-02-14 03:18:01.615926 7f7d62f99700 -1 received signal: Hangup from PID: 13737 task name: killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw UID: 0 [Tue Feb 13 03:15:36 2018] libceph:
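For context, that Hangup is what the packaged logrotate job sends so the daemons reopen their log files; /etc/logrotate.d/ceph typically contains something along these lines (exact contents vary by release):

    /var/log/ceph/*.log {
        rotate 7
        daily
        compress
        sharedscripts
        postrotate
            killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw || true
        endscript
        missingok
        notifempty
    }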

Re: [ceph-users] Newbie question: stretch ceph cluster

2018-02-14 Thread Maged Mokhtar
Hi, You need to set the min_size to 2 in the crush rule. The exact location and replication flow when a client writes data depend on the object name and the number of PGs. The crush rule determines which OSDs will serve a PG; the first is the primary OSD for that PG. The client computes the PG from the
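A quick way to see that mapping on a live cluster (pool and object names are placeholders):

    ceph osd pool set mypool min_size 2
    ceph osd map mypool someobject   # prints the PG for the object and the acting set of OSDs, primary first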