[ceph-users] tgt+librbd error 4

2016-12-16 Thread ZHONG
Hi All, I'm using tgt (1.0.55) + librbd (Hammer 0.94.5) for an iSCSI service. Recently I have run into a problem: tgtd crashes even when there is no load. The exception information is as follows: "kernel: tgtd[52067]: segfault at 0 ip 7f424cb0d76a sp 7f4228fe0b90 error 4 in

Re: [ceph-users] cephfs quota

2016-12-16 Thread Goncalo Borges
Hi all, Even when using ceph-fuse, quotas are only enabled once you mount with the --client-quota option. Cheers Goncalo From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of gjprabu [gjpr...@zohocorp.com] Sent: 16 December 2016 18:18 To:
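For reference, a minimal sketch of such a mount, assuming the quota option can be passed straight on the ceph-fuse command line as the mail above suggests; monitor address, client name and mount point are placeholders (alternatively, set "client quota = true" in the [client] section of ceph.conf):
# ceph-fuse -m mon1:6789 -n client.admin --client-quota=true /mnt/cephfs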

[ceph-users] CentOS Storage SIG

2016-12-16 Thread Patrick McGarry
Hey cephers, Just wanted to put this out there in case there were any package-maintenance wizards itching to contribute. The CentOS Storage SIG has worked hard to make sure that Ceph builds make it through their own build system, and has published two releases based on Jewel and Hammer. The

Re: [ceph-users] ceph and rsync

2016-12-16 Thread Brian ::
Given that you are all-SSD, I would do exactly what Wido said: gracefully remove the OSD and gracefully bring up the OSD on the new SSD. Let Ceph do what it's designed to do. The rsync idea looks great on paper; not sure what issues you will run into in practice. On Fri, Dec 16, 2016 at
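For reference, a rough sketch of that graceful path, assuming osd.12 is the OSD being replaced and an upstart-managed 0.94/trusty install; adapt the id and the service commands to your setup:
# ceph osd out 12                # let the PGs drain; wait for active+clean
# stop ceph-osd id=12            # upstart on trusty; use your init system's equivalent
# ceph osd crush remove osd.12
# ceph auth del osd.12
# ceph osd rm 12
(swap the SSD, then prepare/activate a fresh OSD on the new device, e.g. with ceph-disk)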

Re: [ceph-users] 2 OSD's per drive , unable to start the osd's

2016-12-16 Thread sandeep.cool...@gmail.com
Thanks Burkhard, JiaJia.. I was able to resolve the issue with "--typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106" for the journal & "--typecode=2:4fbd7e29-9d25-41b8-afd0-062c0ceff05d" for the data partition while creating the partitions with sgdisk! Thanks Sandeep On Fri, Dec 16, 2016 at 3:01
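Spelled out, the fix looks roughly like this (sizes, device and partition numbers are illustrative); the GUIDs are the Ceph journal and OSD-data GPT type codes, which let udev/ceph-disk recognise and activate the partitions:
# sgdisk -n 1:0:+5G -c 1:"ceph journal" --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
# sgdisk -n 2:0:0   -c 2:"ceph data"    --typecode=2:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb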

Re: [ceph-users] OSD creation and sequencing.

2016-12-16 Thread Craig Chi
Hi Daniel, If you deploy your cluster with the manual method, you can specify the OSD number as you wish. Here are the steps for manual deployment: http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-osds Sincerely, Craig Chi On 2016-12-16 21:51, Daniel
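A minimal sketch of the step in that procedure which lets you pick the id; the uuid and the requested id (341) are placeholders, and the id is only honoured if it is free:
# OSD_UUID=$(uuidgen)
# ceph osd create $OSD_UUID 341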

[ceph-users] OSD creation and sequencing.

2016-12-16 Thread Daniel Corley
Is there a way to specify an OSD number on creation? We run into situations where, if the OSDs on a node are not created sequentially following the sda, sdb naming convention, the numbers are less than easy to correlate to hardware. In the example shown below we know OSD #341

Re: [ceph-users] cephfs quota

2016-12-16 Thread David Disseldorp
Hi Matthew, On Fri, 16 Dec 2016 12:30:06 +, Matthew Vernon wrote: > Hello, > On 15/12/16 10:25, David Disseldorp wrote: > > > Are you using the Linux kernel CephFS client (mount.ceph), or the > > userspace ceph-fuse back end? Quota enforcement is performed by the > > client, and is

Re: [ceph-users] ceph and rsync

2016-12-16 Thread Alessandro Brega
2016-12-16 10:19 GMT+01:00 Wido den Hollander : > > > Op 16 december 2016 om 9:49 schreef Alessandro Brega < > alessandro.bre...@gmail.com>: > > > > > > 2016-12-16 9:33 GMT+01:00 Wido den Hollander : > > > > > > > > > Op 16 december 2016 om 9:26 schreef Alessandro

Re: [ceph-users] cephfs quota

2016-12-16 Thread David Disseldorp
On Fri, 16 Dec 2016 12:48:39 +0530, gjprabu wrote: > Now we are mounted client using ceph-fuse and still allowing me to put a data > above the limit(100MB). Below is quota details. > > > > getfattr -n ceph.quota.max_bytes test > > # file: test > > ceph.quota.max_bytes="1" > > >
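For comparison, the usual way to set and read the quota attribute on a ceph-fuse mount; the directory path and the ~100MB value are placeholders:
# setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/test
# getfattr -n ceph.quota.max_bytes /mnt/cephfs/test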

Re: [ceph-users] cephfs quota

2016-12-16 Thread Matthew Vernon
Hello, On 15/12/16 10:25, David Disseldorp wrote: > Are you using the Linux kernel CephFS client (mount.ceph), or the > userspace ceph-fuse back end? Quota enforcement is performed by the > client, and is currently only supported by ceph-fuse. Is server enforcement of quotas planned? Regards,

Re: [ceph-users] Suggestion:-- Disable warning in ceph -s output

2016-12-16 Thread Jayaram Radhakrishnan
Hello JiaJia, I tried with the directives below: enable experimental unrecoverable data corrupting features = "bluestore,rocksdb" and enable experimental unrecoverable data corrupting features = * Still, the warning below shows up in the ceph -s output ~~~ WARNING: the following
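For context, such a directive normally lives in the [global] section of ceph.conf on the affected nodes (and daemons generally need a restart to pick up ceph.conf changes); whether it actually silences the bluestore warning is exactly the open question here:
[global]
enable experimental unrecoverable data corrupting features = *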

Re: [ceph-users] Performance issues on Jewel 10.2.2

2016-12-16 Thread Frédéric Nass
Hi, 1 - rados or rbd bug? We're using rados bench. 2 - This is not bandwidth related. If it were, it should happen almost instantly and not 15 minutes after I start to write to the pool. Once it has happened on the pool, I can then reproduce it with fewer --concurrent-ios, like 12 or even 1.
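A hedged sketch of the kind of run being described; pool name and duration are placeholders, and -t is the documented spelling of the concurrency flag mentioned above:
# rados bench -p bench_pool 900 write -t 12 --no-cleanup    # -t / --concurrent-ios = concurrent ops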

Re: [ceph-users] 2 OSD's per drive , unable to start the osd's

2016-12-16 Thread sandeep.cool...@gmail.com
Hi, The manual method is good if you have a small number of OSDs, but with more than 200 OSDs it becomes a very time-consuming way to create them. Also, I used ceph-ansible to set up my cluster with 2 OSDs per SSD and my cluster was up & running, but I encountered the auto mount

Re: [ceph-users] 2 OSD's per drive , unable to start the osd's

2016-12-16 Thread sandeep.cool...@gmail.com
Hi Burkhard, How can I achieve that, so that all the OSDs will auto-start at boot time? Regards, Sandeep On Fri, Dec 16, 2016 at 2:39 PM, Burkhard Linke < burkhard.li...@computational.bio.uni-giessen.de> wrote: > Hi, > > On 12/16/2016 09:22 AM, sandeep.cool...@gmail.com wrote: > > Hi, > > I was

Re: [ceph-users] 2 OSD's per drive , unable to start the osd's

2016-12-16 Thread Burkhard Linke
Hi, On 12/16/2016 09:22 AM, sandeep.cool...@gmail.com wrote: Hi, I was trying the scenario where i have partitioned my drive (/dev/sdb) into 4 (sdb1, sdb2 , sdb3, sdb4) using the sgdisk utility: # sgdisk -z /dev/sdb # sgdisk -n 1:0:+1024 /dev/sdb -c 1:"ceph journal" # sgdisk -n 1:0:+1024

Re: [ceph-users] [EXTERNAL] Ceph performance is too good (impossible..)...

2016-12-16 Thread Mike Miller
Hi, you need to flush all caches before starting read tests. With fio you can probably do this if you keep the files that it creates. As root on all clients and all OSD nodes run: echo 3 > /proc/sys/vm/drop_caches But fio is a little problematic for ceph because of the caches in the
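I.e., on every client and OSD node, as root (the preceding sync is an extra precaution, not part of the mail above):
# sync
# echo 3 > /proc/sys/vm/drop_caches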

Re: [ceph-users] can cache-mode be set to readproxy for tiercachewith ceph 0.94.9 ?

2016-12-16 Thread JiaJia Zhong
Hi skinjo, I forgot to ask whether it's necessary to disconnect all the clients before doing set-overlay? We didn't sweep the clients out while setting the overlay. -- Original -- From: "JiaJia Zhong"; Date: Wed, Dec 14, 2016 11:24 AM To:

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-12-16 Thread Robert Sander
On 15.12.2016 16:49, Bjoern Laessig wrote: > What does your Cluster do? Where is your data. What happens now? You could configure the interfaces between the nodes as pointopoint links and run OSPF on them. The cluster nodes then would have their node IP on a dummy interface. OSPF would sort out
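A minimal sketch of the dummy-interface part with iproute2; the interface name and /32 address are placeholders, and the OSPF daemon configuration (e.g. bird or quagga) is left out:
# ip link add dummy0 type dummy
# ip addr add 192.0.2.11/32 dev dummy0
# ip link set dummy0 up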

Re: [ceph-users] 2 OSD's per drive , unable to start the osd's

2016-12-16 Thread LOIC DEVULDER
Ok, I understand. And the same configuration has worked on your NVMe servers? If yes, it's strange, but I think the Ceph developers can tell you why better than I can for this part :-) Regards, ___ PSA Groupe Loïc Devulder

Re: [ceph-users] ceph and rsync

2016-12-16 Thread Alessandro Brega
2016-12-16 9:33 GMT+01:00 Wido den Hollander : > > > Op 16 december 2016 om 9:26 schreef Alessandro Brega < > alessandro.bre...@gmail.com>: > > > > > > Hi guys, > > > > I'm running a ceph cluster using 0.94.9-1trusty release on XFS for RBD > > only. I'd like to replace some SSDs

Re: [ceph-users] 2 OSD's per drive , unable to start the osd's

2016-12-16 Thread LOIC DEVULDER
Hi, I'm not sure that having multiple OSDs on one drive is supported. And also: why do you want this? It's not good for performance and, more importantly, for data redundancy. Regards, ___ PSA Groupe Loïc Devulder

Re: [ceph-users] ceph and rsync

2016-12-16 Thread Wido den Hollander
> Op 16 december 2016 om 9:26 schreef Alessandro Brega > : > > > Hi guys, > > I'm running a ceph cluster using 0.94.9-1trusty release on XFS for RBD > only. I'd like to replace some SSDs because they are close to their TBW. > > I know I can simply shutdown the

[ceph-users] ceph and rsync

2016-12-16 Thread Alessandro Brega
Hi guys, I'm running a ceph cluster using the 0.94.9-1trusty release on XFS, for RBD only. I'd like to replace some SSDs because they are close to their TBW. I know I can simply shut down the OSD, replace the SSD, restart the OSD, and Ceph will take care of the rest. However I don't want to do it this

[ceph-users] 2 OSD's per drive , unable to start the osd's

2016-12-16 Thread sandeep.cool...@gmail.com
Hi, I was trying the scenario where I have partitioned my drive (/dev/sdb) into 4 (sdb1, sdb2, sdb3, sdb4) using the sgdisk utility: # sgdisk -z /dev/sdb # sgdisk -n 1:0:+1024 /dev/sdb -c 1:"ceph journal" # sgdisk -n 1:0:+1024 /dev/sdb -c 2:"ceph journal" # sgdisk -n 1:0:+4096 /dev/sdb -c