Re: [ceph-users] fibre channel as ceph storage interconnect

2016-04-21 Thread Adrian Saul
I could only see it being done using FCIP as the OSD processes use IP to communicate. I guess it would depend on why you are looking to use something like FC instead of Ethernet or IB.

Re: [ceph-users] Replace Journal

2016-04-21 Thread Shinobu Kinjo
This is a previous thread about journal disk replacement: http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-May/039434.html I hope it is helpful for you. Cheers, S

[ceph-users] Replace Journal

2016-04-21 Thread Martin Wilderoth
I have a Ceph cluster and I will be replacing my journal devices with new SSDs. Some instructions for doing this refer to a journal file (a link to the UUID of the journal). In my OSD folder this journal link doesn't exist. The instructions rename the UUID of the new device to the old UUID so as not to break
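For context, the procedure such instructions describe usually boils down to the following Hammer-era sequence; this is a rough sketch only, with the OSD id, device path, and partition UUID as placeholders:

    # Sketch: replace the journal of osd.0 with a new SSD partition.
    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal        # commit anything still in the old journal
    ln -sf /dev/disk/by-partuuid/<new-uuid> /var/lib/ceph/osd/ceph-0/journal
    ceph-osd -i 0 --mkjournal            # initialize the journal on the new device
    service ceph start osd.0

If the journal symlink is missing from the OSD directory, the journal may simply be a plain file inside it; check with ls -l /var/lib/ceph/osd/ceph-0/journal before assuming a by-partuuid link.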

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-21 Thread Jason Dillaman
Slight clarification: to disable these features on existing images, you should run the following: rbd feature disable deep-flatten,fast-diff,object-map,exclusive-lock (note the commas instead of spaces when disabling multiple features at once). -- Jason
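Put together with an image name (the image "rbd/myimage" below is a placeholder, not taken from the thread), the working form would look like:

    # Disable several features at once on an existing image; features are
    # comma-separated, and the dependency order (fast-diff needs object-map,
    # which needs exclusive-lock) is satisfied by listing dependents first.
    rbd feature disable rbd/myimage deep-flatten,fast-diff,object-map,exclusive-lock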

[ceph-users] fibre channel as ceph storage interconnect

2016-04-21 Thread Schlacta, Christ
Is it possible? Can I use fibre channel to interconnect my Ceph OSDs? Intuition tells me it should be possible, yet experience (mostly with fibre channel) tells me no. I don't know enough about how Ceph works to know for sure. All my googling returns results about using Ceph as a BACKEND for

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-21 Thread 席智勇
hi: I am using the same version as you. This setting only affects newly created volumes. 2016-04-21 17:41 GMT+08:00 Mika c : > Hi xizhiyong, > Thanks for your information. I am using Jewel right now (10.1.2), the > setting "rbd_default_features = 3" is not working for me. > And

Re: [ceph-users] Ceph weird "corruption" but no corruption and performance = abysmal.

2016-04-21 Thread Christian Balzer
Hello, On Thu, 21 Apr 2016 15:35:52 +0300 Florian Rommel wrote: > Ok, weird problem(s), if you want to call it that.. > > So I run a 10 OSD Ceph cluster on 4 hosts with SSDs (Intel DC S3700) as > journals. > Small number of OSDs (at replication 3, at best the sustained performance of 3 HDDs) in

[ceph-users] ceph startup issues : OSDs don't start

2016-04-21 Thread SCHAER Frederic
Hi, I'm sure I'm doing something wrong, I hope someone can enlighten me... I'm encountering many issues when I restart a ceph server (any ceph server). This is on CentOS 7.2, ceph-0.94.6-0.el7.x86_64. First: I have disabled abrt. I don't need abrt. But when I restart, I see these logs in the
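For anyone chasing the same symptoms, some hedged first checks on a Hammer/el7 box (service and tool names per the stock packaging; the OSD id is a placeholder):

    systemctl stop abrtd && systemctl disable abrtd   # make sure abrt really stays out of the way
    service ceph status osd.0                         # hammer still drives daemons via the sysvinit wrapper
    ceph-disk list                                    # confirm data/journal partitions were detected
    journalctl -b | grep -i ceph                      # what udev/ceph-disk logged during boot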

[ceph-users] v10.2.0 Jewel released

2016-04-21 Thread Sage Weil
This major release of Ceph will be the foundation for the next long-term stable release. There have been many major changes since the Infernalis (9.2.x) and Hammer (0.94.x) releases, and the upgrade process is non-trivial. Please read these release notes carefully. For the complete release

Re: [ceph-users] ceph-deploy jewel stopped working

2016-04-21 Thread Stephen Lord
Sorry about the mangled URLs in there, these are all from download.ceph.com rpm-jewel el7 x86_64 Steve > On Apr 21, 2016, at 1:17 PM, Stephen Lord wrote: > > > > Running this command > > ceph-deploy install --stable jewel ceph00 > > And using the 1.5.32 version

[ceph-users] ceph-deploy jewel stopped working

2016-04-21 Thread Stephen Lord
Running the command "ceph-deploy install --stable jewel ceph00" with the 1.5.32 version of ceph-deploy on a Red Hat 7.2 system is failing today (it worked yesterday)

Re: [ceph-users] ceph-10.1.2, debian stretch and systemd's target files

2016-04-21 Thread kefu chai
On Thu, Apr 21, 2016 at 8:04 PM, John Depp wrote: > Hello everyone! > I'm trying to test the bleeding edge Ceph configuration with ceph-10.1.2 on > Debian Stretch. > I've built ceph from git clone with dpkg-buildpackage and managed to start > it, but run into some issues: > -

Re: [ceph-users] is it possible using different ceph-fuse version on clients from server

2016-04-21 Thread Serkan Çoban
I cannot install a kernel that is not supported by Red Hat on the clients. Is there any other way to increase FUSE performance with the default 6.7 kernel? Maybe I can compile Jewel ceph-fuse packages for RHEL 6; would this make a difference? On Thu, Apr 21, 2016 at 5:24 PM, Oliver Dzombic

Re: [ceph-users] is it possible using different ceph-fuse version on clients from server

2016-04-21 Thread Oliver Dzombic
Hi, yes, it should be. If you want to do something good, try to use a recent kernel on the CentOS 6.7 machines. Then you could also compile the kernel client so that you don't need FUSE. The speed might be awfully bad if you use the CentOS 6.7 stock kernel with FUSE. -- With kind regards / Best

[ceph-users] is it possible using different ceph-fuse version on clients from server

2016-04-21 Thread Serkan Çoban
Hi, I would like to install and test the Ceph Jewel release. My servers are RHEL 7.2 but the clients are RHEL 6.7. Is it possible to install the Jewel release on the servers and use the Hammer ceph-fuse rpms on the clients? Thanks, Serkan
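Mixing an older ceph-fuse with a newer cluster generally works as long as the cluster's required feature bits allow it; the quickest check is simply to try the mount from one client (the monitor address and mount point below are placeholders):

    # Install the hammer ceph-fuse rpm from the el6 repo, then mount:
    yum install ceph-fuse
    ceph-fuse -m mon1.example.com:6789 /mnt/cephfs
    # If the mount succeeds, confirm I/O with a small read/write test before
    # rolling it out to the rest of the clients.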

[ceph-users] Ceph cache tier, flushed objects does not appear to be written on disk

2016-04-21 Thread Benoît LORIOT
Hello, we want to disable a readproxy cache tier, but before doing so we would like to make sure we won't lose data. Is there a way to confirm that a flush actually writes objects to disk? We're using ceph version 0.94.6. I tried that, with cephfs_data_ro_cache being the hot storage pool and
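One hedged way to verify this is to flush/evict the hot pool and then look for the objects in the backing pool directly (the backing pool name "cephfs_data" and the object name are placeholders):

    rados -p cephfs_data_ro_cache cache-flush-evict-all    # flush dirty objects, evict clean ones
    rados -p cephfs_data ls | grep <object-name>           # the object should now be listed cold-side
    rados -p cephfs_data stat <object-name>                # and be readable from the backing pool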

Re: [ceph-users] cache tier

2016-04-21 Thread Oliver Dzombic
Hi min, just like Paul already explained: the cache is made out of OSDs (which, just like any other OSDs, have their own journal). So it is up to you what structure you build. You can place all journals of hot and cold storage (hot = cache, cold = regular storage) together on the same

Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-21 Thread Mike Miller
Hi Udo, thanks. Just to make sure, I further increased the readahead:

    $ sudo blockdev --getra /dev/rbd0
    1048576
    $ cat /sys/block/rbd0/queue/read_ahead_kb
    524288

No difference here. The first value is in sectors (512 bytes), the second in KB. The second read (after dropping caches) is somewhat faster (10%-20%)
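Those two readings agree, incidentally: 1048576 sectors × 512 B = 524288 KB, i.e. 512 MB. For anyone reproducing this, readahead can be set through either interface (the values below are just examples):

    sudo blockdev --setra 1048576 /dev/rbd0                       # unit: 512-byte sectors
    echo 524288 | sudo tee /sys/block/rbd0/queue/read_ahead_kb    # unit: KB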

[ceph-users] Ceph weird "corruption" but no corruption and performance = abysmal.

2016-04-21 Thread Florian Rommel
Ok, weird problem(s), if you want to call it that.. So I run a 10 OSD Ceph cluster on 4 hosts with SSDs (Intel DC S3700) as journals. I have a lot of mixed workloads running, and the Linux machines seem to get corrupted in a weird way, and the performance kind of sucks. First off: all hosts

[ceph-users] ceph-10.1.2, debian stretch and systemd's target files

2016-04-21 Thread John Depp
Hello everyone! I'm trying to test the bleeding-edge Ceph configuration with ceph-10.1.2 on Debian Stretch. I've built Ceph from a git clone with dpkg-buildpackage and managed to start it, but ran into some issues: - I've had to install Ceph from the Debian packages, as ceph-deploy could not install it
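When the packaged target files are present but nothing starts on its own, manually enabling the Jewel units is a reasonable sketch to try (the instance ids below are placeholders; daemons are addressed as unit@id):

    systemctl enable ceph.target
    systemctl enable ceph-mon@$(hostname -s) && systemctl start ceph-mon@$(hostname -s)
    systemctl enable ceph-osd@0 && systemctl start ceph-osd@0
    systemctl status ceph-osd@0 -l    # check why a unit failed, if it did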

Re: [ceph-users] cache tier

2016-04-21 Thread min fang
thanks Oliver, does the journal need to be committed twice? Once for write IO into the cache tier, and again for write IO destaged to the SATA backend pool? 2016-04-21 19:38 GMT+08:00 Oliver Dzombic : > Hi, > > afaik the cache does not have anything to do with journals. > > So

[ceph-users] ceph & mainframes with KVM

2016-04-21 Thread Mahesh Govind
Hi, has anyone used Ceph with mainframes? If it is possible, could you please point to example solutions. Regards

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-21 Thread Ilya Dryomov
On Thu, Apr 21, 2016 at 10:00 AM, Mika c wrote: > Hi cephers, > Had the same issue too. But the command "rbd feature disable" is not > working for me. > Any comment will be appreciated. > > $sudo rbd feature disable timg1 deep-flatten fast-diff object-map > exclusive-lock

Re: [ceph-users] cache tier

2016-04-21 Thread Oliver Dzombic
Hi, afaik the cache does not have anything to do with journals. So your OSDs need journals, and for performance you will want SSDs. The cache should be something faster than your OSDs, usually SSD or NVMe. The cache is an extra space in front of your OSDs which is supposed to speed things up
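For concreteness, the tiering itself is wired up at the pool level, independently of any journal; a minimal sketch with placeholder pool names "sata-pool" (cold) and "ssd-cache" (hot):

    ceph osd tier add sata-pool ssd-cache          # attach the hot pool in front of the cold one
    ceph osd tier cache-mode ssd-cache writeback   # or readproxy, readonly, ...
    ceph osd tier set-overlay sata-pool ssd-cache  # route client IO through the cache
    # Every OSD backing either pool still keeps its own journal.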

Re: [ceph-users] inconsistencies from read errors during scrub

2016-04-21 Thread Dan van der Ster
On Thu, Apr 21, 2016 at 1:23 PM, Dan van der Ster wrote: > Hi cephalapods, > > In our couple years of operating a large Ceph cluster, every single > inconsistency I can recall was caused by a failed read during > deep-scrub. In other words, deep scrub reads an object, the

[ceph-users] cache tier

2016-04-21 Thread min fang
Hi, my ceph cluster has two pools, an SSD cache tier pool and a SATA backend pool. For this configuration, do I need to use an SSD as the journal device, or does the cache tier take over the journal role? thanks

[ceph-users] inconsistencies from read errors during scrub

2016-04-21 Thread Dan van der Ster
Hi cephalapods, In our couple of years of operating a large Ceph cluster, every single inconsistency I can recall was caused by a failed read during deep-scrub. In other words, deep-scrub reads an object, the read fails with dmesg reporting "Sense Key : Medium Error [current]", "Add. Sense:
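For readers hitting the same thing, the usual follow-up once a scrub flags an inconsistency looks like this (the PG id 1.23 is a placeholder):

    ceph health detail | grep inconsistent   # list the affected placement groups
    ceph pg repair 1.23                      # rewrite the bad replica from a good copy
    # Once a disk throws repeated medium errors, smartctl -a on the device
    # usually confirms it is time to replace the OSD's drive.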

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-21 Thread Mika c
Hi xizhiyong, Thanks for your information. I am using Jewel right now (10.1.2), and the setting "rbd_default_features = 3" is not working for me: the images still come up with the "exclusive-lock, object-map, fast-diff, deep-flatten" features enabled. Best wishes, Mika 2016-04-21 16:56 GMT+08:00 席智勇

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-21 Thread 席智勇
That's true for me too. You can disable them by setting this in the conf file: #ceph.conf rbd_default_features = 3 # means only layering and striping are enabled 2016-04-21 16:00 GMT+08:00 Mika c : > Hi cephers, > Had the same issue too. But the command "rbd feature disable" is not >
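For reference, the value is a bitmask, so 3 = layering (1) + striping (2); the other bits are exclusive-lock (4), object-map (8), fast-diff (16), deep-flatten (32), and journaling (64). It only governs newly created images; existing images need rbd feature disable. A sketch of the client-side config:

    # /etc/ceph/ceph.conf
    [client]
    rbd_default_features = 3    # layering + striping for all images created from here on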

Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-21 Thread Udo Lembke
Hi Mike, on 21.04.2016 at 09:07, Mike Miller wrote: Hi Nick and Udo, thanks, very helpful, I tweaked some of the config parameters along the lines Udo suggested, but still only some 80 MB/s or so. This means you have reached a factor of 3 (this is roughly the value I see with a single thread on

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-21 Thread Mika c
Hi cephers, Had the same issue too, but the command "rbd feature disable" is not working for me. Any comments would be appreciated.

    $ sudo rbd feature disable timg1 deep-flatten fast-diff object-map exclusive-lock
    rbd: failed to update image features: (22) Invalid argument 2016-04-21 15:53:10.260671

Re: [ceph-users] Multiple OSD crashing a lot

2016-04-21 Thread Blade Doyle
That was a poor example, because it was an older version of ceph and the clock was not set correctly. But I don't think either of those things causes the problem, because I see it on multiple nodes:

    root@node8:/var/log/ceph# grep hit_set_trim ceph-osd.2.log | wc -l
    2524
    root@node8:/var/log/ceph#

Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-21 Thread Mike Miller
Hi Nick and Udo, thanks, very helpful. I tweaked some of the config parameters along the lines Udo suggests, but still only get some 80 MB/s or so. Kernel 4.3.4 is running on the client machine with a comfortable readahead configured:

    $ sudo blockdev --getra /dev/rbd0
    262144

Still not more than about

Re: [ceph-users] Howto reduce the impact from cephx with small IO

2016-04-21 Thread Udo Lembke
Hi Mark, thanks for the links. If I search for wip-auth I find nothing on docs.ceph.com... does this mean that wip-auth didn't find its way into the Ceph code base?! But I'm wondering about the RHEL7 position at the link http://www.spinics.net/lists/ceph-devel/msg22416.html Unfortunately there are