Re: [ceph-users] repomd.xml: [Errno 14] HTTP Error 404 - Not Found on download.ceph.com for rhel7

2016-07-08 Thread Alexander Lim
Yes, happened to me too. Simply changing the URL fixed the problem. On Fri, Jul 8, 2016 at 6:55 PM, Martin Palma wrote: > It seems that the packages "ceph-release-*.noarch.rpm" contain a > ceph.repo pointing to the baseurl > "http://ceph.com/rpm-hammer/rhel7/$basearch" which

Re: [ceph-users] Active MON aborts on Jewel 10.2.2 with FAILED assert(info.state == MDSMap::STATE_STANDBY

2016-07-08 Thread Bill Sharer
Just for giggles I tried the rolling upgrade to 10.2.2 again today. This time I rolled mon.0 and osd.0 first while keeping the mds servers up and then rolled them before moving on to the other three. No assertion failure this time since I guess I always had an mds active. I wonder if I will

Re: [ceph-users] Data recovery stuck

2016-07-08 Thread Brad Hubbard
On Sat, Jul 9, 2016 at 1:20 AM, Pisal, Ranjit Dnyaneshwar wrote: > Hi All, > > I am in the process of adding new OSDs to the cluster; however, after adding the second > node, cluster recovery seems to have stopped. > > It's been more than 3 days but the degraded objects % has not improved

[ceph-users] Backing up RBD snapshots to a different cloud service

2016-07-08 Thread Brendan Moloney
Hi, We have a smallish Ceph cluster for RBD images. We use snapshotting for local incremental backups. I would like to start sending some of these snapshots to an external cloud service (likely Amazon) for disaster recovery purposes. Does anyone have advice on how to do this? I suppose I
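One approach often suggested for this kind of offsite copy (a sketch, not taken from this thread; pool, image, snapshot, and bucket names are placeholders) is to ship incremental diffs between snapshots and upload the resulting files:
# export the delta between two snapshots to a file
rbd export-diff --from-snap backup-2016-07-01 rbd/vm-disk1@backup-2016-07-08 vm-disk1.diff
# push the diff to the external provider, e.g. with the AWS CLI
aws s3 cp vm-disk1.diff s3://offsite-ceph-backups/vm-disk1/
# on restore, replay the diffs in order onto a base image
rbd import-diff vm-disk1.diff rbd/vm-disk1-restored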

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
I even tried with a bare .raw file but the error is still the same. 2016-07-08 16:29:40.931 86007 INFO nova.compute.claims [req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance: cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Total memory:

Re: [ceph-users] ceph + vmware

2016-07-08 Thread Jan Schermer
There is no Ceph plugin for VMware (and I think you need at least an Enterprise license for storage plugins, much $$$). The "VMware" way to do this without the plugin would be to have a VM running on every host serving RBD devices over iSCSI to the other VMs (the way their storage appliances
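A rough sketch of what such a gateway VM might do with the kernel RBD client and LIO/targetcli (image and IQN names are made up, and this is not an endorsement of the setup):
# map the RBD image on the gateway VM; it appears as e.g. /dev/rbd0
rbd map rbd/vmware-datastore01
# expose the block device as an iSCSI LUN
targetcli /backstores/block create name=vmware-datastore01 dev=/dev/rbd0
targetcli /iscsi create iqn.2016-07.com.example:rbd-gw1
targetcli /iscsi/iqn.2016-07.com.example:rbd-gw1/tpg1/luns create /backstores/block/vmware-datastore01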

[ceph-users] ceph + vmware

2016-07-08 Thread Oliver Dzombic
Hi, does anyone have experience with connecting VMware to Ceph in a smart way? iSCSI multipath did not really work well. NFS could work, but I think that's just too many layers in between to get usable performance. Systems like ScaleIO have developed a VMware addon to talk with it. Is there

Re: [ceph-users] Quick short survey which SSDs

2016-07-08 Thread Carlos M. Perez
I posted a bunch of the more recent numbers in the specs. Had some downtime and a bunch of SSDs lying around and was just curious if any were hidden gems... Interestingly, the Intel drives seem not to require the write cache off, while other drives had to be "forced" off using the hdparm -W0
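For anyone reproducing such numbers, the usual journal-candidate benchmark is a single-job O_DSYNC write test after disabling the volatile write cache (device name is an example; this writes to the device):
hdparm -W0 /dev/sdX   # turn off the drive's volatile write cache
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test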

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
[root@OSKVM1 ~]# grep -v "^#" /etc/nova/nova.conf|grep -v ^$ [DEFAULT] instance_usage_audit = True instance_usage_audit_period = hour notify_on_state_change = vm_and_task_state notification_driver = messagingv2 rbd_user=cinder rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb

Re: [ceph-users] Question about how to start ceph OSDs with systemd

2016-07-08 Thread Tom Barron
On 07/08/2016 11:59 AM, Manuel Lausch wrote: > Hi, > > Over the last few days I have been playing around with Ceph Jewel on Debian Jessie and > CentOS 7. Now I have a question about systemd on these systems. > > I installed Ceph Jewel (ceph version 10.2.2 > (45107e21c568dd033c2f0a3107dec8f0b0e58374)) on Debian

[ceph-users] Question about how to start ceph OSDs with systemd

2016-07-08 Thread Manuel Lausch
Hi, over the last few days I have been playing around with Ceph Jewel on Debian Jessie and CentOS 7. Now I have a question about systemd on these systems. I installed Ceph Jewel (ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)) on Debian Jessie and prepared some OSDs. While playing around I
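As a reference point, the Jewel packages ship per-daemon systemd units; assuming an OSD with id 0, the usual invocations look like this:
systemctl enable ceph-osd@0    # start OSD 0 automatically at boot
systemctl start ceph-osd@0
systemctl status ceph-osd@0
# or act on all OSDs on the host via the target unit
systemctl start ceph-osd.target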

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
Hi Kees, I regenerated the UUID as per your suggestion. Now I have the same UUID on host1 and host2. I could create volumes and attach them to existing VMs. I could create new Glance images. But I am still seeing the same error when launching an instance via the GUI. 2016-07-08 11:23:25.067 86007 INFO

[ceph-users] Data recovery stuck

2016-07-08 Thread Pisal, Ranjit Dnyaneshwar
Hi All, I am in the process of adding new OSDs to the cluster; however, after adding the second node, cluster recovery seems to have stopped. It has been more than 3 days but the degraded objects % has not improved even by 1%. Will adding further OSDs help improve the situation, or is there any other way to improve recovery
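Not part of the original post, but the usual first diagnostics for stalled recovery are along these lines (read-only commands):
ceph -s                      # overall cluster and recovery state
ceph health detail           # lists degraded/stuck PGs
ceph osd df tree             # check for full or unevenly weighted OSDs
ceph pg dump_stuck unclean   # PGs not making progress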

Re: [ceph-users] 5 pgs of 712 stuck in active+remapped

2016-07-08 Thread Nathanial Byrnes
Thanks for the pointer, I didn't know the answer, but now I do, and unfortunately, XenServer relies on the kernel module. It's surprising that their latest release, XenServer 7, which was released on the 6th of July, is only using kernel 3.10 ... I guess since it is based upon CentOS 7 and

Re: [ceph-users] mds standby + standby-reply upgrade

2016-07-08 Thread Patrick Donnelly
Hi Dzianis, On Thu, Jun 30, 2016 at 4:03 PM, Dzianis Kahanovich wrote: > Upgraded infernalis->jewel (git, Gentoo). The upgrade was done as a global > stop/restart of everything in one shot. > > Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1 up:standby-replay, 1 up:standby > > Now after

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
Hi Kees, Thanks for your help! Node 1 controller + compute -rw-r--r-- 1 root root 63 Jul 5 12:59 ceph.client.admin.keyring -rw-r--r-- 1 glance glance 64 Jul 5 14:51 ceph.client.glance.keyring -rw-r--r-- 1 cinder cinder 64 Jul 5 14:53 ceph.client.cinder.keyring -rw-r--r-- 1 cinder

Re: [ceph-users] (no subject)

2016-07-08 Thread Kees Meijs
Hi, I'd recommend generating a UUID and using it for all your compute nodes. This way, you can keep your libvirt configuration constant. Regards, Kees On 08-07-16 16:15, Gaurav Goyal wrote: > > For the section below, should I generate separate UUIDs for the two compute hosts? >
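A sketch of what that looks like in practice, reusing the rbd_secret_uuid already shown in the nova.conf snippet elsewhere in this thread (the client name is an assumption); run the same steps on every compute node:
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>1989f7a6-4ecb-4738-abbf-2962c29b2bbb</uuid>
  <usage type='ceph'><name>client.cinder secret</name></usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret 1989f7a6-4ecb-4738-abbf-2962c29b2bbb \
    --base64 $(ceph auth get-key client.cinder)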

Re: [ceph-users] 5 pgs of 712 stuck in active+remapped

2016-07-08 Thread Micha Krause
Hi, > Ah, thanks Micha, that makes sense. I'll see if I can dig up another server to build an OSD server. Sadly, XenServer is not tolerant of new kernels. > Do you happen to know if there is a dkms package of RBD anywhere? I might be able to build the latest RBD against the 3.10 kernel that

Re: [ceph-users] (no subject)

2016-07-08 Thread Kees Meijs
Hi Gaurav, Have you distributed your Ceph authentication keys to your compute nodes? And do they have the correct permissions in terms of Ceph? K.
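For completeness, the client keys and capabilities are typically created along these lines (pool names "volumes" and "images" are assumptions; adjust to the actual pools):
ceph auth get-or-create client.glance mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
# copy the resulting keyrings to /etc/ceph/ on the compute/controller nodes
# and make sure the glance/cinder service users can read them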

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
Hello, Thanks, I could restore my Cinder service. But while trying to launch an instance, I am getting the same error. Can you please help me understand what I am doing wrong? 2016-07-08 09:28:31.368 31909 INFO nova.compute.manager [req-c56770a7-5bab-426b-b763-7473254c6410

[ceph-users] Bad performance while deleting many small objects via radosgw S3

2016-07-08 Thread Martin Emrich
Hi! Our little dev ceph cluster (nothing fancy; 3x1 OSD with 100GB each, 3x monitor with radosgw) takes over 20 minutes to delete ca. 44000 small objects (<1GB in total). Deletion is done by listing objects in blocks of 1000 and then deleting them in one call for each block; each deletion of
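As a side note (not from the original post): when the goal is to empty or drop an entire bucket, doing it server-side is usually much faster than issuing S3 multi-object deletes:
radosgw-admin bucket rm --bucket=mybucket --purge-objects   # bucket name is an example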

Re: [ceph-users] ceph-fuse segfaults ( jewel 10.2.2)

2016-07-08 Thread John Spray
On Fri, Jul 8, 2016 at 8:01 AM, Goncalo Borges wrote: > Hi Brad, Patrick, All... > > I think I've understood this second problem. In summary, it is memory > related. > > This is how I found the source of the problem: > > 1./ I copied and adapted the user application

Re: [ceph-users] ceph/daemon mon not working and status exit (1)

2016-07-08 Thread Daniel Gryniewicz
On 07/07/2016 08:06 PM, Rahul Talari wrote: I am trying to use Ceph in Docker. I have built the ceph/base and ceph/daemon Dockerfiles. I am trying to deploy a Ceph monitor according to the instructions given in the tutorial, but when I execute the command without a KV store and type: sudo docker ps
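For comparison, the ceph-docker documentation of that era describes roughly this invocation for a monitor without a KV store (IP address and network are placeholders):
sudo docker run -d --net=host \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph/:/var/lib/ceph/ \
  -e MON_IP=192.168.0.20 \
  -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon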

[ceph-users] Resize when booting from volume fails

2016-07-08 Thread mario martinez
Hi, We are running Openstack Liberty with a Ceph Jewel backend for glance, cinder, and nova. Creating a new instance booting from volume works fine, but resizing this fails with the following: error opening image 9257fcc2-94b5-4c3f-950a-eadee03550a6_disk at snapshot None, error code 500. Full

Re: [ceph-users] repomd.xml: [Errno 14] HTTP Error 404 - Not Found on download.ceph.com for rhel7

2016-07-08 Thread Martin Palma
It seems that the packages "ceph-release-*.noarch.rpm" contain a ceph.repo pointing to the baseurl "http://ceph.com/rpm-hammer/rhel7/$basearch" which does not exist. It should probably point to "http://ceph.com/rpm-hammer/el7/$basearch". - Martin On Thu, Jul 7, 2016 at 5:57 PM, Martin Palma
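Until the package is fixed, a one-line workaround on affected hosts is to rewrite the baseurl in place (a sketch; back up the repo file first):
sudo sed -i 's#rpm-hammer/rhel7#rpm-hammer/el7#' /etc/yum.repos.d/ceph.repo
sudo yum clean metadata && sudo yum makecache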

Re: [ceph-users] 5 pgs of 712 stuck in active+remapped

2016-07-08 Thread Micha Krause
Hi, as far as I know, this is exactly the problem the new tunables were introduced for: if you use 3 replicas with only 3 hosts, CRUSH sometimes doesn't find a solution to place all PGs. If you are really stuck with bobtail tunables, I can think of 2 possible workarounds: 1. Add another
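If staying on bobtail tunables, the classic workaround is to raise choose_total_tries in the decompiled CRUSH map (a sketch; 100 is an example value, the default being 50):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: tunable choose_total_tries 100
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new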

Re: [ceph-users] RBD Watch Notify for snapshots

2016-07-08 Thread Nick Fisk
Thanks Jason, I think I'm going to start with a bash script which SSHes into the machine to check whether the process has finished writing and then calls fsfreeze, as I've got time constraints on getting this working. But I will definitely revisit this and see if there is something I can create
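A minimal sketch of such a wrapper (host, mount point, process and image names are all placeholders):
#!/bin/bash
HOST=vm-host.example.com
MNT=/data
IMG=rbd/vm-disk1
SNAP=backup-$(date +%F)
# wait until the writer process has finished on the remote machine
while ssh "$HOST" pgrep -x my_writer >/dev/null; do sleep 30; done
ssh "$HOST" sudo fsfreeze -f "$MNT"   # quiesce the filesystem
rbd snap create "$IMG@$SNAP"          # take a consistent snapshot
ssh "$HOST" sudo fsfreeze -u "$MNT"   # thaw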

Re: [ceph-users] multiple journals on SSD

2016-07-08 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Zoltan Arnold Nagy > Sent: 08 July 2016 08:51 > To: Christian Balzer > Cc: ceph-users ; n...@fisk.me.uk > Subject: Re: [ceph-users] multiple

Re: [ceph-users] Jewel Multisite RGW Memory Issues

2016-07-08 Thread Ben Agricola
So I've narrowed this down a bit further, I *think* this is happening during bucket listing - I started a radosgw process with increased logging, and killed it as soon as I saw the RSS jump. This was accompanied by a ton of logs from 'RGWRados::cls_bucket_list' printing out the names of the files

Re: [ceph-users] multiple journals on SSD

2016-07-08 Thread Zoltan Arnold Nagy
Hi Christian, On 08 Jul 2016, at 02:22, Christian Balzer wrote: Hello, On Thu, 7 Jul 2016 23:19:35 +0200 Zoltan Arnold Nagy wrote: Hi Nick, How large NVMe drives are you running per 12 disks? In my current setup I have 4x P3700 per 36 disks but I feel like I could get by with 2… Just

Re: [ceph-users] Ceph cluster upgrade

2016-07-08 Thread Kees Meijs
Thank you everyone, I just tested and verified the ruleset and applied it to some pools. Worked like a charm! K. On 06-07-16 19:20, Bob R wrote: > See http://dachary.org/?p=3189 for some simple instructions on testing > your crush rule logic.
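For the archives, the test described in that post boils down to running the compiled map through crushtool (rule number and replica count are examples):
ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-utilization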

Re: [ceph-users] (no subject)

2016-07-08 Thread Fran Barrera
Hello, you only need to create a pool and authentication in Ceph for Cinder. Your configuration should look like this (this is an example configuration with Ceph Jewel and OpenStack Mitaka): [DEFAULT] enabled_backends = ceph [ceph] volume_driver = cinder.volume.drivers.rbd.RBDDriver rbd_pool =

Re: [ceph-users] ceph-fuse segfaults ( jewel 10.2.2)

2016-07-08 Thread Goncalo Borges
Hi Brad, Patrick, All... I think I've understood this second problem. In summary, it is memory related. This is how I found the source of the problem: 1./ I copied and adapted the user application to run in another cluster of ours. The idea was for me to understand the application

Re: [ceph-users] (no subject)

2016-07-08 Thread Kees Meijs
Hi Gaurav, The following snippets should suffice (for Cinder, at least): > [DEFAULT] > enabled_backends=rbd > > [rbd] > volume_driver = cinder.volume.drivers.rbd.RBDDriver > rbd_pool = cinder-volumes > rbd_ceph_conf = /etc/ceph/ceph.conf > rbd_flatten_volume_from_snapshot = false >