[ceph-users] suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

2016-06-30 Thread mq
Hi list, I have tested SUSE Enterprise Storage 3 using 2 iSCSI gateways attached to VMware. The performance is bad. I have turned off VAAI following the KB article (https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665)
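
The linked KB covers disabling the VAAI primitives on the ESXi hosts. As a rough sketch (assuming current esxcli syntax; verify the option names against the KB before applying), the three advanced settings can be turned off like this:

    # Disable VAAI hardware-acceleration primitives on an ESXi host (sketch only;
    # confirm the option names against VMware KB 1033665 before use)
    esxcli system settings advanced set --option /DataMover/HardwareAcceleratedMove --int-value 0
    esxcli system settings advanced set --option /DataMover/HardwareAcceleratedInit --int-value 0
    esxcli system settings advanced set --option /VMFS3/HardwareAcceleratedLocking --int-value 0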

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Mike Jacobacci
Thanks Somnath and Christian, Yes, it looks like the latest version of XenServer still runs on an old kernel (3.10). I know the method Christian linked, but it doesn’t work if XenServer is installed from iso. It is really annoying there has been no movement on this for 3 years… I really like

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Somnath Roy
It seems your client kernel is pretty old? Either upgrade your kernel to 3.15 or later, or you need to disable CRUSH_TUNABLES3. ceph osd crush tunables bobtail or ceph osd crush tunables legacy should help. This will start rebalancing, and you will also lose the improvements added in Firefly. So,
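
A minimal sketch of the two options mentioned here (check the client kernel first, then either upgrade it or fall back to older CRUSH tunables; the tunables change triggers a rebalance):

    # On the client: check the kernel version (3.15+ understands CRUSH_TUNABLES3)
    uname -r

    # From a node with admin access: inspect the current tunables
    ceph osd crush show-tunables

    # Fall back to an older tunables profile (starts rebalancing and gives up
    # the Firefly-era improvements)
    ceph osd crush tunables bobtail
    # or, even more conservative:
    ceph osd crush tunables legacy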

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Christian Balzer
Hello, On Thu, 30 Jun 2016 19:27:05 -0700 Mike Jacobacci wrote: > Thanks Jake! I enabled the epel 7 repo and was able to get ceph-common > installed. Here is what happens when I try to map the drive: > > rbd map rbd/enterprise-vm0 --name client.admin -m mon0 > -k

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Mike Jacobacci
Thanks Jake! I enabled the epel 7 repo and was able to get ceph-common installed. Here is what happens when I try to map the drive: rbd map rbd/enterprise-vm0 --name client.admin -m mon0 -k /etc/ceph/ceph.client.admin.keyring rbd: sysfs write failed In some cases useful info is found in
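
When "rbd map" fails with "sysfs write failed", the kernel usually logs the real reason; a quick way to see it (a sketch, reusing the names from the post above):

    # Retry the map and immediately check the kernel log for the actual error
    rbd map rbd/enterprise-vm0 --name client.admin -m mon0 \
        -k /etc/ceph/ceph.client.admin.keyring
    dmesg | tail -n 20
    # A "feature set mismatch" message here points at the old-kernel/tunables
    # issue discussed elsewhere in this thread.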

Re: [ceph-users] Ceph for online file storage

2016-06-30 Thread Christian Balzer
Hello, On Thu, 30 Jun 2016 08:34:12 + (GMT) m.da...@bluewin.ch wrote: > Thank you all for your prompt answers. > > >firstly, wall of text, makes things incredibly hard to read. > >Use paragraphs/returns liberally. > > I actually made sure to use paragraphs. For some reason, the formatting

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Jake Young
See https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17112.html On Thursday, June 30, 2016, Mike Jacobacci wrote: > So after adding the ceph repo and enabling the centos-7 repo… It fails > trying to install ceph-common: > > Loaded plugins: fastestmirror > Loading

Re: [ceph-users] object size changing after a pg repair

2016-06-30 Thread Goncalo Borges
Hi Greg Opened this one http://tracker.ceph.com/issues/16567 Let us see what they say. Cheers G. On 07/01/2016 04:09 AM, Gregory Farnum wrote: On Wed, Jun 29, 2016 at 10:50 PM, Goncalo Borges wrote: Hi Shinobu Sorry probably I don't understand your

Re: [ceph-users] Hammer: PGs stuck creating

2016-06-30 Thread Brad Hubbard
On Thu, Jun 30, 2016 at 11:34 PM, Brian Felton wrote: > Sure. Here's a complete query dump of one of the 30 pgs: > http://pastebin.com/NFSYTbUP Looking at that something immediately stands out. There are a lot of entries in "past intervals" like so. "past_intervals": [

Re: [ceph-users] rbd cache command thru admin socket

2016-06-30 Thread Jason Dillaman
Can you check the permissions on "/var/run/ceph/" and ensure that the user your client runs under has permission to access the directory? If the permissions are OK, do you have SELinux or AppArmor enabled and enforcing? On Thu, Jun 30, 2016 at 5:37 PM, Deneau, Tom wrote: > I
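
A quick sketch of the checks suggested here (directory permissions first, then the SELinux/AppArmor state):

    # Is the socket directory readable/writable by the client user?
    ls -ld /var/run/ceph/
    ls -l /var/run/ceph/

    # Is SELinux enforcing? (RHEL/CentOS)
    getenforce

    # Is AppArmor active? (Debian/Ubuntu)
    sudo aa-status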

[ceph-users] rbd cache command thru admin socket

2016-06-30 Thread Deneau, Tom
I was following the instructions in https://www.sebastien-han.fr/blog/2015/09/02/ceph-validate-that-the-rbd-cache-is-active/ because I wanted to look at some of the rbd cache state and possibly flush and invalidate it My ceph.conf has [client] rbd default features = 1 rbd
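
For reference, the approach in the linked post boils down to enabling a client admin socket and querying it; a minimal sketch (the socket path and client name are illustrative placeholders):

    # ceph.conf on the client
    [client]
        rbd default features = 1
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

    # With an rbd client running, query its socket (substitute the real path)
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.<cctid>.asok config show | grep rbd_cache
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.<cctid>.asok perf dump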

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Mike Jacobacci
So after adding the ceph repo and enabling the centos-7 repo… it fails trying to install ceph-common: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.web-ster.com Resolving Dependencies --> Running transaction check ---> Package ceph-common.x86_64

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Mike Jacobacci
Hi Jake, I will give that a try and see if that helps, thank you! Yes I have that open in a browser tab, it gave me the idea of using ceph-deploy to install on the xenserver. I will update with the results. Cheers, Mike > On Jun 30, 2016, at 12:42 PM, Jake Young wrote: >

Re: [ceph-users] mds standby + standby-reply upgrade

2016-06-30 Thread Gregory Farnum
On Thu, Jun 30, 2016 at 1:03 PM, Dzianis Kahanovich wrote: > Upgraded infernalis->jewel (git, Gentoo). Upgrade passed over global > stop/restart everything oneshot. > > Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1 up:standby-replay, 1 up:standby > > Now after upgrade start and

[ceph-users] mds standby + standby-reply upgrade

2016-06-30 Thread Dzianis Kahanovich
Upgraded infernalis->jewel (git, Gentoo). The upgrade was done by stopping and restarting everything globally in one shot. Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1 up:standby-replay, 1 up:standby Now after the upgrade, on start and the next mon restart, the active monitor falls over with "assert(info.state ==

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Jake Young
Can you install the ceph client tools on your server? They may give you a more obvious error. Try to install the package and config/keys manually instead of with ceph-deploy. Also see this: http://xenserver.org/blog/entry/tech-preview-of-xenserver-libvirt-ceph.html Jake On Thursday, June 30,
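
A rough outline of the manual route suggested here (the repo file contents and release are assumptions; adjust the baseurl to match your cluster's release, and copy the config/keys from an existing node):

    # Sketch: add a ceph repo by hand, then install only the client tools
    cat > /etc/yum.repos.d/ceph.repo <<'EOF'
    [ceph]
    name=Ceph packages
    baseurl=http://download.ceph.com/rpm-jewel/el7/x86_64/
    gpgcheck=0
    EOF
    yum install -y ceph-common

    # Copy the cluster config and an admin keyring from an existing node
    scp mon0:/etc/ceph/ceph.conf /etc/ceph/
    scp mon0:/etc/ceph/ceph.client.admin.keyring /etc/ceph/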

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Mike Jacobacci
Just adding some more info in case it helps… looking at the ceph-osd.admin.log and I see this on every disk: 2016-06-30 09:47:03.326176 7f15353aa800 1 journal _open /dev/sdb1 fd 4: 24006098944 bytes, block size 4096 bytes, directio = 0, aio = 0 2016-06-30 09:47:03.326472 7f15353aa800 1 journal

Re: [ceph-users] Improving metadata throughput

2016-06-30 Thread Gregory Farnum
On Wed, Jun 29, 2016 at 2:02 PM, Daniel Davidson wrote: > I am starting to work with and benchmark our ceph cluster. While throughput > is so far looking good, metadata performance so far looks to be suffering. > Is there anything that can be done to speed up the

Re: [ceph-users] object size changing after a pg repair

2016-06-30 Thread Gregory Farnum
On Wed, Jun 29, 2016 at 10:50 PM, Goncalo Borges wrote: > Hi Shinobu > >> Sorry probably I don't understand your question properly. >> Is what you're worry about that object mapped to specific pg could be >> overwritten on different osds? > > Not really. I was

Re: [ceph-users] Expected behavior of blacklisted host and cephfs

2016-06-30 Thread Gregory Farnum
On Thu, Jun 30, 2016 at 9:09 AM, Mauricio Garavaglia wrote: > Hello, > > What's the expected behavior of a host that has a cephfs mounted and is then > blacklisted? It doesn't seem to fail in a consistent way. Thanks Well, once blacklisted it won't be allowed to
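
For anyone reproducing this, the blacklist itself can be inspected and manipulated from the cluster side; a short sketch (the client address is a placeholder):

    # Show current client blacklist entries
    ceph osd blacklist ls

    # Add or remove an entry by client address (placeholder address)
    ceph osd blacklist add 192.168.0.50:0/123456
    ceph osd blacklist rm 192.168.0.50:0/123456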

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Mike Jacobacci
I am not sure why the mapping is failing, so I tried to install ceph on XenServer with ceph-deploy but got the following error: [ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: XenServer xenenterprise 7.0.0 fddf I feel like I am close but I am not sure where to go from

Re: [ceph-users] Double OSD failure (won't start) any recovery options?

2016-06-30 Thread XPC Design
I was talking on IRC and we're guessing it was a memory issue. I've woken up every morning now to some sort of scrub errors, with most (but not all) spawning from the one system with the now-dead osds. This morning I didn't wake up to find any scrub errors (but I can't tell if it has anything

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Mike Jacobacci
Hi Jake, Interesting… XenServer 7 does has rbd installed but trying to map the rbd image with this command: # echo {ceph_monitor_ip} name={ceph_admin},secret={ceph_key} {ceph_pool} {ceph_image} >/sys/bus/rbd/add It fails with just an i/o error… I am looking into now. My cluster health is
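
For readers following along, the sysfs interface expects a single line of "mon-address options pool image"; a sketch with placeholder values (monitor IP and secret are illustrative), plus the usual follow-up check:

    # Sketch of the sysfs rbd add format (all values are placeholders)
    echo "192.168.1.10:6789 name=admin,secret=AQBxxxxxxxxxxxxxxx rbd enterprise-vm0" \
        > /sys/bus/rbd/add

    # On an i/o error, the kernel log usually says why
    dmesg | tail -n 20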

[ceph-users] Expected behavior of blacklisted host and cephfs

2016-06-30 Thread Mauricio Garavaglia
Hello, What's the expected behavior of a host that has a cephfs mounted and is then blacklisted? It doesn't seem to fail in a consistent way. Thanks Mauricio

Re: [ceph-users] changing k and m in a EC pool

2016-06-30 Thread stephane.davy
Hi Luis, I think you are looking for this: http://ceph.com/planet/ceph-pool-migration/ -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Luis Periquito Sent: Thursday, June 30, 2016 11:31 To: Christian Balzer Cc: Ceph Users Subject: Re:

Re: [ceph-users] Hammer: PGs stuck creating

2016-06-30 Thread Brian Felton
Sure. Here's a complete query dump of one of the 30 pgs: http://pastebin.com/NFSYTbUP Brian On Wed, Jun 29, 2016 at 6:25 PM, Brad Hubbard wrote: > On Thu, Jun 30, 2016 at 3:22 AM, Brian Felton wrote: > > Greetings, > > > > I have a lab cluster running

Re: [ceph-users] osd current.remove.me.somenumber

2016-06-30 Thread Mike Miller
Hi Greg, thanks, highly appreciated. And yes, that was on an osd with btrfs. We switched back to xfs because of btrfs instabilities. Regards, -Mike On 6/27/16 10:13 PM, Gregory Farnum wrote: On Sat, Jun 25, 2016 at 11:22 AM, Mike Miller wrote: Hi, what is the

Re: [ceph-users] Running ceph in docker

2016-06-30 Thread Josef Johansson
Hi, You could actually manage every OSD, mon, and MDS through Docker Swarm; since it's all just software, it makes sense to deploy it through Docker, where you add the disk that is needed. Mons do not need permanent storage either. Not that a restart of the docker instance would remove the but

Re: [ceph-users] Running ceph in docker

2016-06-30 Thread xiaoxi chen
It makes sense to me to run the MDS inside Docker or k8s, as the MDS is stateless. But mons and OSDs do have local data, so what's the motivation to run them in Docker? > To: ceph-users@lists.ceph.com > From: d...@redhat.com > Date: Thu, 30 Jun 2016 08:36:45 -0400 > Subject: Re: [ceph-users] Running ceph in

Re: [ceph-users] Running ceph in docker

2016-06-30 Thread Daniel Gryniewicz
On 06/30/2016 02:05 AM, F21 wrote: Hey all, I am interested in running ceph in docker containers. This is extremely attractive given the recent integration of swarm into the docker engine, making it really easy to set up a docker cluster. When running ceph in docker, should monitors, radosgw
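
For context, the common pattern at the time of writing is one container per daemon using the ceph/daemon image; a hedged sketch for a monitor (the image name, environment variables, and addresses are assumptions drawn from that image's documentation, not from this thread):

    # Sketch: run a monitor with the ceph/daemon image (values are placeholders)
    docker run -d --net=host \
        -v /etc/ceph:/etc/ceph \
        -v /var/lib/ceph:/var/lib/ceph \
        -e MON_IP=192.168.1.10 \
        -e CEPH_PUBLIC_NETWORK=192.168.1.0/24 \
        ceph/daemon mon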

[ceph-users] RADOSGW buckets via NFS?

2016-06-30 Thread Sean Redmond
Hi, I noticed in the jewel release notes: "You can now access radosgw buckets via NFS (experimental)." Are there any docs that explain the configuration of NFS to access RADOSGW buckets? Thanks

[ceph-users] ceph osd set up?

2016-06-30 Thread fridifree
Hi, I had 4 osds and 2 of the servers were halted (so no data access, because I had an EC pool with 3+1 and replica 2). After powering up the servers, those osds were down and out. I don't know how to bring them back. Another error I get, maybe it is not relevant, but I saw osd/ECUtil.h: 43: FAILED

Re: [ceph-users] Can't create bucket (ERROR: endpoints not configured for upstream zone)

2016-06-30 Thread Ops Cloud
Hello, See this thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg23852.html And the author has replied himself: I just resolved this issue. It was probably due to a faulty region map configuration, where more than one region was marked as default. After updating the

[ceph-users] Can't create bucket (ERROR: endpoints not configured for upstream zone)

2016-06-30 Thread Micha Krause
Hi, If I try to create a bucket (using s3cmd) I'm getting this error: WARNING: 500 (UnknownError): The rados-gateway server says: ERROR: endpoints not configured for upstream zone The servers were updated to jewel, but I'm not sure the error wasn't there before. Micha Krause

Re: [ceph-users] changing k and m in a EC pool

2016-06-30 Thread Luis Periquito
>> I have created an Erasure Coded pool and would like to change the K >> and M of it. Is there any way to do it without destroying the pool? >> > No. > > http://docs.ceph.com/docs/master/rados/operations/erasure-code/ > > "Choosing the right profile is important because it cannot be modified >

Re: [ceph-users] Ceph for online file storage

2016-06-30 Thread Oliver Dzombic
hi Moïn, two suggestions, based on my experience: 1. The max HDD size of GOOD QUALITY 7200 RPM spinning SATA/SAS HDDs is 4 TB. Anything larger will ruin your performance (as long as you don't do pure archiving of files, i.e. writing them once and "never" touching them again). If you have 8 TB HDDs, just

Re: [ceph-users] Ceph for online file storage

2016-06-30 Thread m.da...@bluewin.ch
Thank you all for your prompt answers. >firstly, wall of text, makes things incredibly hard to read. >Use paragraphs/returns liberally. I actually made sure to use paragraphs. For some reason, the formatting was removed. >Is that your entire experience with Ceph, ML archives and docs? Of

Re: [ceph-users] changing k and m in a EC pool

2016-06-30 Thread Christian Balzer
On Thu, 30 Jun 2016 09:16:50 +0100 Luis Periquito wrote: > Hi all, > > I have created an Erasure Coded pool and would like to change the K > and M of it. Is there any way to do it without destroying the pool? > No. http://docs.ceph.com/docs/master/rados/operations/erasure-code/ "Choosing the
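
So the practical route is a new profile and a new pool, then moving the data (the pool-migration link posted elsewhere in this thread covers the copy step); a minimal sketch with illustrative names and values:

    # Sketch: define a new EC profile and create a pool that uses it
    ceph osd erasure-code-profile set newprofile k=5 m=3
    ceph osd pool create rgw-data-new 256 256 erasure newprofile

    # Data then has to be copied/migrated from the old pool; see the
    # pool-migration approach linked in this thread for the details.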

[ceph-users] changing k and m in a EC pool

2016-06-30 Thread Luis Periquito
Hi all, I have created an Erasure Coded pool and would like to change the K and M of it. Is there any way to do it without destroying the pool? The cluster doesn't have much IO, but the pool (rgw data) has just over 10T, and I didn't want to lose it. thanks,

Re: [ceph-users] Another cluster completely hang

2016-06-30 Thread Brian ::
Hi Mario, perhaps it's covered under Proxmox support. Do you have support on your Proxmox install from the guys in Proxmox? Otherwise you can always buy from Red Hat: https://www.redhat.com/en/technologies/storage/ceph On Thu, Jun 30, 2016 at 7:37 AM, Mario Giammarco

Re: [ceph-users] Double OSD failure (won't start) any recovery options?

2016-06-30 Thread Tomasz Kuzemko
With pool size=3 Ceph still should be able to recover from 2 failed OSDs. It will however disallow client access to the PGs that have only 1 copy until they are replicated at least min_size times. Such PGs are not marked as "active". As to the reason of your problems it seems hardware related.
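
To see this in practice, the relevant knobs and states can be checked like so (a sketch; the pool name is illustrative):

    # Check the replication size and the minimum replicas required for I/O
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size

    # List PGs that are stuck inactive/unclean while recovery runs
    ceph health detail
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean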

Re: [ceph-users] object size changing after a pg repair

2016-06-30 Thread Shinobu Kinjo
Thank you for your clarification. On Thu, Jun 30, 2016 at 2:50 PM, Goncalo Borges < goncalo.bor...@sydney.edu.au> wrote: > Hi Shinobu > > > Sorry probably I don't understand your question properly. > > Is what you're worry about that object mapped to specific pg could be > overwritten on

Re: [ceph-users] Another cluster completely hang

2016-06-30 Thread Mario Giammarco
Last two questions: 1) I have used other systems in the past. In case of split brain or serious problems they let me choose which copy is "good" and then carry on working. Is there a way to tell ceph that all is ok? This morning again I have 19 incomplete pgs after recovery 2) Where can I find

[ceph-users] Double OSD failure (won't start) any recovery options?

2016-06-30 Thread XPC Design
I've had two osds fail and I'm pretty sure they won't recover from this. I'm looking for help trying to get them back online if possible... terminate called after throwing an instance of 'ceph::buffer::malformed_input' what(): buffer::malformed_input: bad checksum on pg_log_entry_t - I'm

[ceph-users] Running ceph in docker

2016-06-30 Thread F21
Hey all, I am interested in running ceph in docker containers. This is extremely attractive given the recent integration of swarm into the docker engine, making it really easy to set up a docker cluster. When running ceph in docker, should monitors, radosgw and OSDs all be on separate
