Re: [ceph-users] cephfs: Client hp-s3-r4-compute failing to respond to capability release

2015-11-09 Thread Burkhard Linke
Hi, On 11/09/2015 02:07 PM, Burkhard Linke wrote: Hi, *snipsnap* Cluster is running Hammer 0.94.5 on top of Ubuntu 14.04. Clients use ceph-fuse with patches for improved page cache handling, but the problem also occurs with the official hammer packages from download.ceph.com. I've tested

Re: [ceph-users] cephfs: Client hp-s3-r4-compute failing to respond to capability release

2015-11-09 Thread Gregory Farnum
On Mon, Nov 9, 2015 at 6:57 AM, Burkhard Linke wrote: > Hi, > > On 11/09/2015 02:07 PM, Burkhard Linke wrote: >> >> Hi, > > *snipsnap* > >> >> >> Cluster is running Hammer 0.94.5 on top of Ubuntu 14.04. Clients use >> ceph-fuse with patches for

Re: [ceph-users] Erasure coded pools and 'feature set mismatch' issue

2015-11-09 Thread Burkhard Linke
Hi, On 11/09/2015 11:49 AM, Ilya Dryomov wrote: *snipsnap* You can install an Ubuntu kernel from a newer Ubuntu release, or pretty much any mainline kernel from kernel-ppa. Ubuntu Trusty has backported kernels from newer releases, e.g. linux-generic-lts-vivid. By using these packages you will
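
For reference, a minimal sketch of pulling in the Trusty backport kernel mentioned above (linux-generic-lts-vivid ships a 3.19-series kernel; verify availability with apt-cache first):

    sudo apt-get update
    sudo apt-get install linux-generic-lts-vivid   # backported 3.19 kernel
    sudo reboot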

Re: [ceph-users] python binding - snap rollback - progress reporting

2015-11-09 Thread Jason Dillaman
Since the RBD python bindings use the C++ librbd interface and the librbd rollback interface requires you to provide an instance of class ProgressContext, you could add a new class to the API which derives from ProgressContext and bridges the interface to a simple C-style function callback.
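
For context, a minimal sketch of the current Python binding, whose rollback_to_snap() blocks until completion with no progress reporting (pool, image and snapshot names here are hypothetical):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    image = rbd.Image(ioctx, 'myimage')
    try:
        # Returns only when the rollback completes; the suggestion above is
        # to bridge librbd's ProgressContext so a Python callback could fire.
        image.rollback_to_snap('mysnap')
    finally:
        image.close()
        ioctx.close()
        cluster.shutdown()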

[ceph-users] cephfs: Client hp-s3-r4-compute failing to respond to capability release

2015-11-09 Thread Burkhard Linke
Hi, I'm currently investigating a lockup problem involving CephFS and SQLite databases. Applications lock up if the same database is accessed from multiple hosts. I was able to narrow the problem down to two hosts: host A: sqlite3 .schema; host B: sqlite3 .schema. If both .schema commands
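
A minimal reproduction sketch of the scenario described, assuming both hosts mount the same CephFS (the database path is hypothetical):

    # host A:
    sqlite3 /ceph/shared/test.db .schema
    # host B, against the same file; with the bug, one invocation hangs,
    # presumably on SQLite's POSIX advisory locks:
    sqlite3 /ceph/shared/test.db .schema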

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-09 Thread Alex Gorbachev
Hi Timofey, With Nick's, Jan's, RedHat's and others' help we have a stable and, in my best judgement, well-performing system using SCST as the iSCSI delivery framework. SCST allows the use of Linux page cache when utilizing the vdisk_fileio backend. LIO should be able to do this too using FILEIO
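
For illustration, a hedged fragment of an scst.conf along those lines -- vdisk_fileio opens the RBD block device through the page cache; all device and target names are hypothetical:

    HANDLER vdisk_fileio {
        DEVICE disk01 {
            filename /dev/rbd/rbd/esx-lun0
            nv_cache 1
        }
    }
    TARGET_DRIVER iscsi {
        enabled 1
        TARGET iqn.2015-11.local.storage:esx {
            enabled 1
            LUN 0 disk01
        }
    }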

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-09 Thread Timofey Titovets
Great thanks, Alex, you give me hope. I'll try SCST later in the configuration you suggest. 2015-11-09 16:25 GMT+03:00 Alex Gorbachev : > Hi Timofey, > > With Nick's, Jan's, RedHat's and others' help we have a stable and, in my > best judgement, well-performing system

Re: [ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-09 Thread Jason Dillaman
This is currently not a possibility, but there is active research into providing improved/persistent client-side caching for RBD use-cases. In the meantime as an alternative, you can expose a portion of your SSD to each VM that needs higher IOPS and apply dm-cache / bcache on the RBD and SSD
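
A hedged sketch of the bcache variant of that suggestion (device names hypothetical; run wherever both the RBD and the local SSD are visible):

    # SSD partition as cache device, RBD as backing device:
    make-bcache -C /dev/sdb1 -B /dev/rbd0
    # Once /dev/bcache0 appears, switch it to writeback and put a
    # filesystem on it:
    echo writeback > /sys/block/bcache0/bcache/cache_mode
    mkfs.xfs /dev/bcache0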

[ceph-users] Ceph cluster filling up with "_TEMP" data

2015-11-09 Thread Jan Siersch
Hi, I am currently operating a multi-node Ceph cluster with the "Hammer" release under CentOS 7 with writeback cache tiering on SSDs as described here: http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
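
The setup described follows the pattern from the linked documentation; a minimal sketch with hypothetical pool names:

    ceph osd tier add cold-storage hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-storage hot-cache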

Re: [ceph-users] Upgrade from Hammer to Infernalis with ceph-deploy fails on Centos7

2015-11-09 Thread Ken Dreyer
It is not a known problem. Mind filing a ticket @ http://tracker.ceph.com/ so we can track the fix for this? On Mon, Nov 9, 2015 at 1:35 PM, c...@dolphin-it.de wrote: > > > Dear Ceph-users, > > I am trying to upgrade from Hammer to Infernalis but "ceph-deploy install >

Re: [ceph-users] PGs stuck in active+clean+replay

2015-11-09 Thread Andras Pataki
Hi Greg, I’ve tested the patch below on top of the 0.94.5 hammer sources, and it works beautifully. No more active+clean+replay stuck PGs. Thanks! Andras On 10/27/15, 4:46 PM, "Andras Pataki" wrote: >Yes, this definitely sounds plausible (the

Re: [ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-09 Thread Lazuardi Nasution
Hi Jason, What is the worst case if I make a cache pool from local SSDs owned by all compute nodes? Does using a block cache inside the VM have compatibility issues with live migration? Best regards, On Nov 9, 2015 9:04 PM, "Jason Dillaman" wrote: > This is currently not a

Re: [ceph-users] crush rule with two parts

2015-11-09 Thread Gregory Farnum
On Mon, Nov 9, 2015 at 9:42 AM, Deneau, Tom wrote: > I don't have much experience with crush rules but wanted one that does the > following: > > On a 3-node cluster, I wanted a rule where I could have an erasure-coded pool > of k=3,m=2 > and where the first 3 chunks (the
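
A hedged sketch of what such a two-step erasure rule can look like in a decompiled CRUSH map -- names and numbers are illustrative, not the thread's confirmed answer; CRUSH discards any placements emitted beyond the pool's size:

    rule ecpool_two_step {
        ruleset 1
        type erasure
        min_size 5
        max_size 6
        step take default
        step choose indep 3 type host      # one group of chunks per node
        step chooseleaf indep 2 type osd   # two OSDs inside each node
        step emit
    }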

[ceph-users] ceph-deploy not in debian repo?

2015-11-09 Thread Chad William Seys
Hi all, I cannot find ceph-deploy in the debian catalogs. I have these in my sources: deb http://ceph.com/debian-hammer/ jessie main # ceph-deploy not yet in jessie repo deb http://ceph.com/debian-hammer wheezy main I also see ceph-deploy in the repo.
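
The two entries quoted above, as they would appear in /etc/apt/sources.list, plus a quick check of which repository (if any) provides the package:

    deb http://ceph.com/debian-hammer/ jessie main
    deb http://ceph.com/debian-hammer wheezy main

    apt-get update && apt-cache policy ceph-deploy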

Re: [ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-09 Thread Jason Dillaman
If your goal is to localize reads and writes to the same node as a given VM (at least that's how I read your intent), creating a cache tier across your hypervisor hosts will not achieve that result since you can expect your data to be distributed across the pool as directed by the CRUSH map.

Re: [ceph-users] Seeing which Ceph version OSD/MON data is

2015-11-09 Thread Wido den Hollander
On 09-11-15 16:25, Gregory Farnum wrote: > The daemons print this in their debug logs on every boot. (There might > be a minimum debug level required, but I think it's at 0!) > -Greg True, but in this case all logs were lost. I had no boot/OS disks available. I got a fresh install of an OS with

Re: [ceph-users] Ceph cluster filling up with "_TEMP" data

2015-11-09 Thread Jan Siersch
Upgrading the cluster to Ceph version 0.94.5 seems to have resolved the problem. TEMP data is now only a small fraction of the overall usage. On 09.11.2015 14:18, Jan Siersch wrote: > Hi, > > I am currently operating a multi-node Ceph cluster with the "Hammer" > release under CentOS 7 with

Re: [ceph-users] cephfs: Client hp-s3-r4-compute failing to respond to capability release

2015-11-09 Thread Burkhard Linke
Hi, On 11/09/2015 04:03 PM, Gregory Farnum wrote: On Mon, Nov 9, 2015 at 6:57 AM, Burkhard Linke wrote: Hi, On 11/09/2015 02:07 PM, Burkhard Linke wrote: Hi, *snipsnap* Cluster is running Hammer 0.94.5 on top of Ubuntu 14.04. Clients use

[ceph-users] Ceph MeetUp Berlin on November 23

2015-11-09 Thread Robert Sander
Hi, I would like to invite you to our next MeetUp in Berlin on November 23: http://www.meetup.com/de/Ceph-Berlin/events/222906642/ Marcel Wallschläger will talk about Ceph in a research environment. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin

Re: [ceph-users] Ceph performances

2015-11-09 Thread Björn Lässig
On 11/07/2015 09:44 AM, Oliver Dzombic wrote: > setting inode64 in osd_mount_options_xfs might help a little. Sorry, inode64 is the default mount option with xfs. Björn
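
inode64 became the XFS default in kernel 3.7, so setting it explicitly on any recent kernel is a no-op; the active options can be checked with:

    # look for inode32/inode64 among the mount options:
    grep xfs /proc/mounts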

Re: [ceph-users] Seeing which Ceph version OSD/MON data is

2015-11-09 Thread Gregory Farnum
The daemons print this in their debug logs on every boot. (There might be a minimum debug level required, but I think it's at 0!) -Greg On Mon, Nov 9, 2015 at 7:23 AM, Wido den Hollander wrote: > Hi, > > Recently I got my hands on a Ceph cluster which was pretty damaged due > to a
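
For example, grepping an OSD's log for the startup banner (log path per the default Hammer layout):

    grep 'ceph version' /var/log/ceph/ceph-osd.0.log | tail -1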

[ceph-users] Using straw2 crush also with Hammer

2015-11-09 Thread Vickey Singh
Hello Ceph Geeks, I need your comments on my understanding of straw2. - Is straw2 better than straw? - Is straw2 recommended for production usage? I have a production Ceph Firefly cluster that I am going to upgrade to Ceph Hammer pretty soon. Should I use straw2 for all my ceph

[ceph-users] Seeing which Ceph version OSD/MON data is

2015-11-09 Thread Wido den Hollander
Hi, Recently I got my hands on a Ceph cluster which was pretty damaged due to a human error. I had no ceph.conf nor did I have any original Operating System data. With just the MON/OSD data I had to rebuild the cluster by manually re-writing the ceph.conf and installing Ceph. The problem was,

[ceph-users] Upgrade from Hammer to Infernalis with ceph-deploy fails on Centos7

2015-11-09 Thread c...@dolphin-it.de
Dear Ceph-users, I am trying to upgrade from Hammer to Infernalis but "ceph-deploy install --release infernalis host1 host2 ..." fails with: [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm -Uvh --replacepkgs

[ceph-users] XFS calltrace exporting RBD via NFS

2015-11-09 Thread deeepdish
Hello, This is the second time I experienced this, so I thought I'd post to get some perspective. When this first happened, I suspected the kernel and upgraded from 3.18.22 to 3.18.23. Scenario: - lab scenario - single osd host — osdhost01. Supermicro X8DTE-F - 2x X5570 + 48G RAM + 20x

Re: [ceph-users] Using straw2 crush also with Hammer

2015-11-09 Thread Wido den Hollander
On 11/09/2015 05:27 PM, Vickey Singh wrote: > Hello Ceph Geeks > > Need your comments on my understanding of straw2. > > - Is straw2 better than straw? It is not per se better than straw(1). straw2 distributes data better when not all OSDs are equally sized/weighted. > - Is straw2
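
A hedged sketch of converting buckets by hand once every daemon and client in the cluster speaks Hammer or later (straw2 is a Hammer feature, so older clients and kernels will be locked out):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: change 'alg straw' to 'alg straw2' per bucket
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin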

[ceph-users] Reduce the size of the pool .log

2015-11-09 Thread Chang, Fangzhe (Fangzhe)
It seems that the pool .log increases in size as ceph runs over time. I'm using 20 placement groups (pgs) for the .log pool. Now it complains that "HEALTH_WARN pool .log has too few pgs". I don't have a good understanding of when ceph will remove the old log entries by itself. I saw some
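
The usual response to the 'too few pgs' warning is to raise the pool's placement-group count; the target value below is illustrative, so size it for your own cluster:

    ceph osd pool set .log pg_num 64
    ceph osd pool set .log pgp_num 64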

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-09 Thread Timofey Titovets
Alex, are you using ESXi? If yes, do you use the iSCSI Software Adapter? Do you use active/passive, fixed, or RoundRobin MPIO? Do you tune anything on the initiator side? If possible, can you give more details, please? 2015-11-09 17:41 GMT+03:00 Timofey Titovets : > Great thanks, Alex,

Re: [ceph-users] Erasure coded pools and 'feature set mismatch' issue

2015-11-09 Thread Ilya Dryomov
On Mon, Nov 9, 2015 at 10:44 AM, Bogdan SOLGA wrote: > Hello Adam! > > Thank you very much for your advice, I will try setting the tunables to > 'firefly'. Won't work. The OS Recommendations page clearly states that firefly tunables are supported starting with 3.15. 3.13,
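
Two quick checks for this kind of mismatch -- what the cluster's CRUSH map requires versus which kernel the client runs:

    ceph osd crush show-tunables   # profile and feature flags in use
    uname -r                       # firefly tunables need a 3.15+ kernel client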

Re: [ceph-users] Upgrade from Hammer to Infernalis with ceph-deploy fails on Centos7

2015-11-09 Thread Jason Altorf
On Tue, Nov 10, 2015 at 7:34 AM, Ken Dreyer wrote: > It is not a known problem. Mind filing a ticket @ > http://tracker.ceph.com/ so we can track the fix for this? > > On Mon, Nov 9, 2015 at 1:35 PM, c...@dolphin-it.de wrote: >> >> >> Dear Ceph-users, >>

Re: [ceph-users] Erasure coded pools and 'feature set mismatch' issue

2015-11-09 Thread Bogdan SOLGA
Hello Adam! Thank you very much for your advice, I will try setting the tunables to 'firefly'. As there seem to be a few features which would require the 4.1 kernel... is there any 'advised' Linux distribution on which Ceph is known to work best? According to this

Re: [ceph-users] v9.2.0 Infernalis released

2015-11-09 Thread Francois Lafont
Oops, sorry Dan, I meant to send my message to the list. Sorry. > On Mon, Nov 9, 2015 at 11:55 AM, Francois Lafont >> >> 1. Ok, so, the ranks of my monitors are 0, 1, 2 but their IDs are 1, 2, 3 >> (IDs chosen automatically because the hosts are called ceph01, ceph02 and >> ceph03 and these ID

[ceph-users] Problem with infernalis el7 package

2015-11-09 Thread Bob R
Hello, We've got two problems trying to update our cluster to infernalis- ceph-deploy install --release infernalis neb-kvm00 [neb-kvm00][INFO ] Running command: sudo rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc [neb-kvm00][INFO ] Running command: sudo rpm -Uvh

[ceph-users] Problem with infernalis el7 package

2015-11-09 Thread c...@dolphin-it.de
Hello, I filed a new ticket: http://tracker.ceph.com/issues/13739 Regards, Kevin [ceph-users] Problem with infernalis el7 package (10-Nov-2015 1:57) From: Bob R To:ceph-users@lists.ceph.com Hello, We've got two problems trying to update our cluster to infernalis- ceph-deploy install