Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-07-01 Thread Mike Jacobacci
Yes, I would like to know too… I decided not to update the kernel as it could possibly affect xenserver’s stability and/or performance. Cheers, Mike > On Jun 30, 2016, at 11:54 PM, Josef Johansson wrote: > > Also, is it possible to recompile the rbd kernel module in XenServer? I am > under th
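For context, checking what the rbd module in the XenServer dom0 kernel supports, and mapping an image once it loads, typically looks like the sketch below. The pool/image name is a placeholder, and on older kernels the Jewel-default image features usually have to be disabled before mapping; this is an illustrative outline, not the procedure from this thread.

    # Inspect the rbd module shipped with the dom0 kernel
    modinfo rbd | head -n 5

    # Jewel enables image features that old kernel clients cannot handle;
    # disable them before mapping (placeholder image name)
    rbd feature disable rbd/xen-sr-image exclusive-lock object-map fast-diff deep-flatten

    # Map the image so it can back a storage repository
    rbd map rbd/xen-sr-image --id admin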

Re: [ceph-users] rbd cache command thru admin socket

2016-07-01 Thread Deneau, Tom
Thanks, Jason-- Turns out AppArmor was indeed enabled (I was not aware of that). Disabled it, and now I see the socket, but it seems to only be there temporarily while some client app is running. The original reason I wanted to use this socket was that I am also using rbd images thru kvm and I w
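For reference, librbd clients only create an admin socket if one is configured, and the socket exists only while the client process (e.g. the qemu/kvm process) is running. A minimal sketch, with placeholder socket path and client id:

    # ceph.conf on the client host -- ask librbd to create an admin socket
    [client]
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
        log file = /var/log/ceph/client.$id.log

    # While the librbd/kvm process is running (socket name includes its pid):
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.67890.asok config show | grep rbd_cache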

Re: [ceph-users] suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

2016-07-01 Thread Nick Fisk
To summarise, LIO is just not working very well at the moment because of the ABORT Tasks problem; this will hopefully be fixed at some point. I'm not sure if SUSE works around this, but see below for other pain points with RBD + ESXi + iSCSI. TGT is easy to get going, but performance isn't the b
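For reference, the "easy to get going" TGT route needs an stgt build with the rbd backing store and a target stanza along these lines; the IQN, pool and image names are placeholders and option names may vary between stgt builds, so treat this as a sketch only:

    # /etc/tgt/conf.d/ceph-rbd.conf (illustrative)
    <target iqn.2016-07.com.example:rbd-esxi>
        driver iscsi
        bs-type rbd
        backing-store rbd/esxi-datastore    # <pool>/<image>
    </target>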

Re: [ceph-users] mds0: Behind on trimming (58621/30)

2016-07-01 Thread Kenneth Waegeman
On 01/07/16 12:59, John Spray wrote: On Fri, Jul 1, 2016 at 11:35 AM, Kenneth Waegeman wrote: Hi all, While syncing a lot of files to cephfs, our mds cluster went haywire: the mdss have a lot of segments behind on trimming: (58621/30) Because of this the mds cluster gets degraded. RAM usage

Re: [ceph-users] mds0: Behind on trimming (58621/30)

2016-07-01 Thread Yan, Zheng
On Fri, Jul 1, 2016 at 6:59 PM, John Spray wrote: > On Fri, Jul 1, 2016 at 11:35 AM, Kenneth Waegeman > wrote: >> Hi all, >> >> While syncing a lot of files to cephfs, our mds cluster went haywire: the >> mdss have a lot of segments behind on trimming: (58621/30) >> Because of this the mds cluste

Re: [ceph-users] performance issue with jewel on ubuntu xenial (kernel)

2016-07-01 Thread Yoann Moulin
Hello, >>> I found a performance drop between kernel 3.13.0-88 (default kernel on >>> Ubuntu >>> Trusty 14.04) and kernel 4.4.0.24.14 (default kernel on Ubuntu Xenial >>> 16.04) >>> >>> ceph version is Jewel (10.2.2). >>> All tests have been done under Ubuntu 14.04 >>
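A minimal way to reproduce this kind of kernel-to-kernel comparison is a rados bench run under each kernel; the pool name, runtime and thread count below are illustrative, not the exact methodology used in this thread:

    uname -r                                     # record the kernel under test
    rados bench -p testbench 60 write -t 16 --no-cleanup
    rados bench -p testbench 60 seq -t 16
    rados -p testbench cleanup                   # remove the benchmark objects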

Re: [ceph-users] CEPH Replication

2016-07-01 Thread Adrien Gillard
To start safely you need a replication factor of 3 and at least 4 nodes (think size+1) to allow for smooth maintenance on your nodes. On Fri, Jul 1, 2016 at 2:31 PM, Ashley Merrick wrote: > Hello, > > Okie, makes perfect sense. > > So if I run CEPH with a replication of 3, is it still required to r

Re: [ceph-users] CEPH Replication

2016-07-01 Thread Ashley Merrick
Hello, Okie, makes perfect sense. So if I run CEPH with a replication of 3, is it still required to run an odd number of OSD nodes? Or could I run 4 OSD nodes to start with, with a replication of 3, with each replica on a separate server. ,Ashley Merrick -Original Message- From: cep

Re: [ceph-users] CEPH Replication

2016-07-01 Thread Tomasz Kuzemko
Still, in case of object corruption you will not be able to determine which copy is valid. Ceph does not provide data integrity checking with filestore (it's planned for bluestore). On 01.07.2016 14:20, David wrote: > It will work but be aware 2x replication is not a good idea if your data > is important. T
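For context on "which copy is valid": a deep scrub can flag an inconsistent PG, but with size=2 a repair simply rewrites the replica from the primary, so there is no majority to vote on the correct copy. The commands are the standard ones; the PG id is a placeholder:

    ceph pg deep-scrub 1.2f     # compare object data across replicas (placeholder PG id)
    ceph health detail          # lists any PGs flagged as inconsistent
    ceph pg repair 1.2f         # with two replicas this just copies the primary's version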

Re: [ceph-users] CEPH Replication

2016-07-01 Thread David
It will work, but be aware 2x replication is not a good idea if your data is important. The exception would be if the OSDs are DC-class SSDs that you monitor closely. On Fri, Jul 1, 2016 at 1:09 PM, Ashley Merrick wrote: > Hello, > > Perfect, I want to keep on separate nodes, so wanted to make

Re: [ceph-users] CEPH Replication

2016-07-01 Thread Ashley Merrick
Hello, Perfect, I want to keep on separate nodes, so wanted to make sure the expected behaviour was that it would do that. And no issues with running an odd number of nodes for a replication of 2? I know you have quorum, just wanted to make sure it would not affect anything when running an even replicati

Re: [ceph-users] CEPH Replication

2016-07-01 Thread ceph
It will put each object on 2 OSDs, on 2 separate nodes. All nodes and all OSDs will have the same used space (approx). If you want to allow both copies of an object to be stored on the same node, you should use osd_crush_chooseleaf_type = 0 (see http://docs.ceph.com/docs/master/rados/operations/crus
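For reference, that setting controls the failure domain CRUSH uses when building the default rule; a sketch of the ceph.conf form (illustrative, it applies to rules generated at cluster creation):

    [global]
        # 1 (default) = spread replicas across separate hosts
        # 0           = allow replicas of a PG on OSDs of the same host
        osd_crush_chooseleaf_type = 1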

[ceph-users] CEPH Replication

2016-07-01 Thread Ashley Merrick
Hello, Looking at setting up a new CEPH Cluster, starting with the following. 3 x CEPH OSD Servers Each Server: 20Gbps Network 12 OSDs SSD Journal Looking at running with replication of 2; will there be any issues using 3 nodes with a replication of two? This should "technically" give me ½ t
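As a reference point for the size=2 layout being discussed, replication is set per pool; the pool name and PG count below are placeholders, and min_size=1 is what lets a 2x pool keep serving I/O with one copy down (with the risks raised in the replies):

    ceph osd pool create vmstore 512 512 replicated   # placeholder pool name / PG count
    ceph osd pool set vmstore size 2                  # two copies, on separate hosts by default
    ceph osd pool set vmstore min_size 1              # keep serving I/O with a single copy left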

Re: [ceph-users] Can't create bucket (ERROR: endpoints not configured for upstream zone)

2016-07-01 Thread Micha Krause
Hi, > In Infernalis there was this command: radosgw-admin regions list But this is missing in Jewel. Ok, I just found out that this was renamed to zonegroup list: root@rgw01:~ # radosgw-admin --id radosgw.rgw zonegroup list read_default_id : -2 { "default_info": "", "zonegroups": [
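For reference, on Jewel the usual way to inspect and fix missing zonegroup endpoints is to round-trip the zonegroup JSON and commit a new period; the zonegroup name is the default one and the endpoint URL is a placeholder:

    radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
    # edit zonegroup.json and set "endpoints": ["http://rgw01:80"]   (placeholder URL)
    radosgw-admin zonegroup set --rgw-zonegroup=default < zonegroup.json
    radosgw-admin period update --commit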

Re: [ceph-users] mds0: Behind on trimming (58621/30)

2016-07-01 Thread John Spray
On Fri, Jul 1, 2016 at 11:35 AM, Kenneth Waegeman wrote: > Hi all, > > While syncing a lot of files to cephfs, our mds cluster went haywire: the > mdss have a lot of segments behind on trimming: (58621/30) > Because of this the mds cluster gets degraded. RAM usage is about 50GB. The > mdses were r

[ceph-users] mds0: Behind on trimming (58621/30)

2016-07-01 Thread Kenneth Waegeman
Hi all, While syncing a lot of files to cephfs, our mds cluster went haywire: the mdss have a lot of segments behind on trimming: (58621/30) Because of this the mds cluster gets degraded. RAM usage is about 50GB. The mdses were respawning and replaying continuously, and I had to stop all syncs
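For context, "(58621/30)" is the number of journal segments versus the mds_log_max_segments limit (default 30). A common mitigation while the MDS catches up is to raise the trimming limits at runtime; the MDS id and values below are illustrative, not advice taken from this thread:

    ceph daemon mds.mds01 config set mds_log_max_segments 200   # placeholder MDS id and value
    ceph daemon mds.mds01 config set mds_log_max_expiring 200
    ceph daemon mds.mds01 perf dump mds_log                     # watch trimming progress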

Re: [ceph-users] suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

2016-07-01 Thread mq
Hi, 1. Two software iSCSI gateways (deployed on the OSD/monitor nodes), created using lrbd; the iSCSI target is LIO. Configuration: { "auth": [ { "target": "iqn.2016-07.org.linux-iscsi.iscsi.x86:testvol", "authentication": "none" } ], "targets": [ { "target": "iqn.2016-07.org.l
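For readers unfamiliar with lrbd: its configuration is a single JSON document covering auth, portals, pools and targets. A heavily abbreviated sketch in the same shape as the fragment above (host names, addresses and image names are placeholders, and the exact schema may differ between releases):

    {
      "auth": [
        { "target": "iqn.2016-07.org.linux-iscsi.iscsi.x86:testvol", "authentication": "none" }
      ],
      "targets": [
        { "target": "iqn.2016-07.org.linux-iscsi.iscsi.x86:testvol",
          "hosts": [ { "host": "igw1", "portal": "portal1" } ] }
      ],
      "portals": [
        { "name": "portal1", "addresses": [ "192.168.1.101" ] }
      ],
      "pools": [
        { "pool": "rbd",
          "gateways": [
            { "target": "iqn.2016-07.org.linux-iscsi.iscsi.x86:testvol",
              "tpg": [ { "image": "testvol" } ] }
          ] }
      ]
    }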

[ceph-users] confused by ceph quick install and manual install

2016-07-01 Thread Chengwei Yang
Hi List, Sorry if this question was answered before. I'm new to ceph and following the ceph documentation to set up a ceph cluster. However, I noticed that the manual install guide says below: http://docs.ceph.com/docs/master/install/install-storage-cluster/ > Ensure your YUM ceph.repo entry incl
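For reference, the ceph.repo entry that guide refers to looks like the sketch below (Jewel on el7 shown; adjust the release name and distro path as needed):

    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
    enabled=1
    priority=2
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc

    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
    enabled=1
    priority=2
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc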

Re: [ceph-users] Can't create bucket (ERROR: endpoints not configured for upstream zone)

2016-07-01 Thread Micha Krause
Hi, > See this thread, https://www.mail-archive.com/ceph-users@lists.ceph.com/msg23852.html Yes, I found this as well, but I don't think I have configured more than one region. I never touched any region settings, and I have to admit I wouldn't even know how to check which regions I have. In

Re: [ceph-users] suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

2016-07-01 Thread Oliver Dzombic
Hi, my experience: ceph + iscsi (multipath) + vmware == worst. Better to search for another solution. vmware + nfs might have much better performance. If you are able to get vmware running with iscsi and ceph, I would be >>very<< interested in what/how you did that. -- Mit

Re: [ceph-users] suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

2016-07-01 Thread Christian Balzer
Hello, On Fri, 1 Jul 2016 13:04:45 +0800 mq wrote: > Hi list > I have tested SUSE Enterprise Storage 3 using 2 iSCSI gateways attached > to vmware. The performance is bad. First off, it's somewhat funny that you're testing the repackaged SUSE Ceph, but asking for help here (with Ceph being ow

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-01 Thread Christoph Adomeit
Hi, is there meanwhile a proven solution to this issue? What can be done to fix the scheduler bug? 1 patch, 3 patches, 20 patches? Thanks Christoph On Wed, Jun 29, 2016 at 12:02:11PM +0200, Stefan Priebe - Profihost AG wrote: > Hi, > > to be precise i've far more patches attached to the s