Re: [ceph-users] RBD client newer than cluster

2017-02-14 Thread Lukáš Kubín
Yes, also. The main reason, though, is a temporarily missing connection from the Ceph nodes to the package repo - it will take some days or weeks to reconnect. The client nodes can connect and update. Thanks, Lukáš On Tue, Feb 14, 2017 at 6:56 PM, Shinobu Kinjo wrote: > On Wed, Feb

[ceph-users] Jewel to Kraken OSD upgrade issues

2017-02-14 Thread Benjeman Meekhof
Hi all, We encountered an issue updating our OSDs from Jewel (10.2.5) to Kraken (11.2.0). The OS was a RHEL derivative. Prior to this we had updated all the mons to Kraken. After updating the ceph packages I restarted the 60 OSDs on the box with 'systemctl restart ceph-osd.target'. Very soon after, the system
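
For reference, a minimal sketch of the restart step described above plus a quick way to confirm what each OSD is actually running afterwards; it assumes the default systemd units and daemon names.

    systemctl restart ceph-osd.target   # restart all OSD daemons on this host
    ceph tell 'osd.*' version           # ask every OSD which version it now runs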

[ceph-users] ceph-deploy and debian stretch 9

2017-02-14 Thread Zorg
Hello, Debian Stretch is almost stable, so I wanted to deploy Ceph Jewel on it, but with 'ceph-deploy new mynode' I get this error: [ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: debian 9.0 I know I can cheat by changing /etc/debian_version to 8.0, but I'm sure there is a
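
A purely illustrative sketch of the /etc/debian_version workaround mentioned above; 'mynode' is the hostname from the error, and the file is restored once ceph-deploy's platform check has passed.

    ssh mynode 'cp /etc/debian_version /etc/debian_version.orig && echo 8.0 > /etc/debian_version'
    ceph-deploy new mynode
    ssh mynode 'mv /etc/debian_version.orig /etc/debian_version'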

Re: [ceph-users] RBD client newer than cluster

2017-02-14 Thread Shinobu Kinjo
On Wed, Feb 15, 2017 at 2:18 AM, Lukáš Kubín wrote: > Hi, > I'm most probably hitting bug http://tracker.ceph.com/issues/13755 - where > libvirt-mounted RBD disks suspend I/O during snapshot creation until a hard > reboot. > > My Ceph cluster (monitors and OSDs) is running

Re: [ceph-users] admin_socket: exception getting command descriptions

2017-02-14 Thread Vince
Hi Liuchang0812, Thank you for replying to the thread. I have corrected this issue: it was caused by incorrect ownership of /var/lib/ceph. It was owned by root, and I changed it to ceph ownership to resolve this. However, I am seeing a new error while preparing the OSDs. Any idea about
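
A minimal sketch of the ownership fix described above, assuming the default /var/lib/ceph layout and that the daemons run as the 'ceph' user (the default since Jewel).

    chown -R ceph:ceph /var/lib/ceph   # give the ceph user back its state directories
    systemctl restart ceph.target      # restart daemons so they pick up the change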

Re: [ceph-users] CephFS : minimum stripe_unit ?

2017-02-14 Thread John Spray
On Tue, Feb 14, 2017 at 11:38 AM, Florent B wrote: > Hi everyone, > > I use ceph-fuse on a Jewel cluster. > > I would like to set stripe_unit to 8192 on a directory but it seems it is not > possible: > > # setfattr -n ceph.dir.layout.stripe_unit -v "8192" maildata1 > setfattr:
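
For comparison, a layout change that does satisfy the usual constraint (stripe_unit is commonly required to be a multiple of 64 KiB); the directory name is taken from the quoted command, the value is illustrative.

    setfattr -n ceph.dir.layout.stripe_unit -v 65536 maildata1   # 64 KiB, the smallest commonly accepted value
    getfattr -n ceph.dir.layout maildata1                        # confirm the layout that was applied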

[ceph-users] Mail from guimark

2017-02-14 Thread guimark
unsubscribe ceph-users

[ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Dongsheng Yang
Hi Sage and all, We are going to use SSDs for caching in Ceph, but I am not sure which is the best solution: bcache, flashcache, or cache tiering? I found there are some cautions on ceph.com about cache tiering. Is cache tiering already production ready, especially for RBD? Thanks in
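
For context, a sketch of the basic Jewel-era cache-tier setup the question is about; 'base' and 'cache' are placeholder pool names.

    ceph osd tier add base cache                 # attach 'cache' as a tier of 'base'
    ceph osd tier cache-mode cache writeback     # serve writes from the cache pool
    ceph osd tier set-overlay base cache         # redirect client traffic through the tier
    ceph osd pool set cache hit_set_type bloom   # required so the tier can track object hits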

Re: [ceph-users] To backup or not to backup the classic way - How to backup hundreds of TB?

2017-02-14 Thread Nick Fisk
Hardware failures are just one possible cause. If you value your data you will have a backup, preferably going to some sort of removable media that can be taken offsite, like those things that everybody keeps saying are dead... what are they called... oh yeah, tapes. :) An online copy of your data

Re: [ceph-users] How to repair MDS damage?

2017-02-14 Thread John Spray
On Tue, Feb 14, 2017 at 9:33 AM, Oliver Schulz wrote: > Dear Ceph Experts, > > after upgrading our Ceph cluster from Hammer to Jewel, > the MDS (after a few days) found some metadata damage: > ># ceph status >[...] >health HEALTH_ERR > mds0: Metadata

Re: [ceph-users] Shrink cache target_max_bytes

2017-02-14 Thread Kees Meijs
Hi Cephers, Although I might be stating an obvious fact: altering the parameter works as advertised. The only issue I encountered was that lowering the parameter too much at once results in some slow requests because the cache pool is "full". So in short: it works when lowering the parameter bit by
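
A sketch of the "bit by bit" approach described above; the pool name and byte values are placeholders.

    ceph osd pool set cachepool target_max_bytes 900000000000   # first small step down
    # wait for flush/evict activity and any slow requests to clear, then step down again
    ceph osd pool set cachepool target_max_bytes 800000000000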

[ceph-users] How to repair MDS damage?

2017-02-14 Thread Oliver Schulz
Dear Ceph Experts, after upgrading our Ceph cluster from Hammer to Jewel, the MDS (after a few days) found some metadata damage: # ceph status [...] health HEALTH_ERR mds0: Metadata damage detected [...] The output of # ceph tell mds.0 damage ls is: [ {
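
A minimal triage sketch based on the commands quoted above, assuming a single active MDS with rank 0; the damage id is a placeholder taken from the 'damage ls' output.

    ceph tell mds.0 damage ls               # list damage table entries with their ids
    ceph tell mds.0 damage rm <damage_id>   # clear an entry once the underlying metadata is repaired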

Re: [ceph-users] PG stuck peering after host reboot

2017-02-14 Thread george.vasilakakos
Hi Brad, I'll be doing so later in the day. Thanks, George From: Brad Hubbard [bhubb...@redhat.com] Sent: 13 February 2017 22:03 To: Vasilakakos, George (STFC,RAL,SC); Ceph Users Subject: Re: [ceph-users] PG stuck peering after host reboot I'd suggest

Re: [ceph-users] To backup or not to backup the classic way - How to backup hundreds of TB?

2017-02-14 Thread Irek Fasikhov
Hi. We use the Ceph RADOS Gateway with S3, and we are very happy :). Each administrator is responsible for their own service. We use the following S3 clients: Linux - s3cmd, duply; Windows - CloudBerry. P.S. 500 TB of data, 3x replication, 3 datacenters. Best regards, Фасихов Ирек Нургаязович, Mob.: +79229045757

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Dongsheng Yang > Sent: 14 February 2017 09:01 > To: Sage Weil > Cc: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com > Subject: [ceph-users] bcache vs

[ceph-users] Where did monitors keep their keys?

2017-02-14 Thread George Shuklin
Hello. Where do monitors keep their keys? I can't see them in 'ceph auth list'. Are they in that list but I don't have permission to see them (as admin), or are they stored somewhere else? How can I see that list?
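
A hedged sketch of where to look: the mon. key normally lives in each monitor's local keyring file rather than in the auth database that 'ceph auth list' prints; the path assumes default locations and a cluster named 'ceph'.

    cat /var/lib/ceph/mon/ceph-$(hostname -s)/keyring   # contains the [mon.] secret for this monitor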

[ceph-users] extending ceph cluster with osds close to near full ratio (85%)

2017-02-14 Thread Tyanko Aleksiev
Hi Cephers, At the University of Zurich we are using Ceph as a storage back-end for our OpenStack installation. Since we recently reached 70% occupancy (mostly caused by the cinder pool, served by 16384 PGs) we are in the phase of extending the cluster with additional storage nodes of the same type
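
One common way to add capacity to a nearly full cluster, shown only as a sketch: bring the new OSDs in at a reduced CRUSH weight and raise it in steps so backfill stays manageable. The OSD id and weights are placeholders.

    ceph osd crush reweight osd.120 0.5   # bring the new OSD in at reduced weight
    # wait for backfill to finish and utilisation to rebalance, then raise it further
    ceph osd crush reweight osd.120 1.0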

Re: [ceph-users] Slow performances on our Ceph Cluster

2017-02-14 Thread Jason Dillaman
On Tue, Feb 14, 2017 at 3:48 AM, David Ramahefason wrote: > Any idea on how we could increase performances ? as this really impact our > openstack MOS9.0 Mitaka infrastructure, VM spawning can take up to 15 > minutes... Have you configured Glance RBD store properly? The
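
A hedged example of the glance-api.conf settings usually checked in this situation; the option names are the Mitaka-era rbd store ones, and the pool/user values are placeholders.

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

    [DEFAULT]
    show_image_direct_url = True   # lets Nova/Cinder do copy-on-write clones instead of full image copies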

Re: [ceph-users] Slow performances on our Ceph Cluster

2017-02-14 Thread Beard Lionel (BOSTON-STORAGE)
Hi, > On Tue, Feb 14, 2017 at 3:48 AM, David Ramahefason > wrote: > > Any idea on how we could increase performances ? as this really impact > > our openstack MOS9.0 Mitaka infrastructure, VM spawning can take up to > > 15 minutes... > > Have you configured Glance RBD store

[ceph-users] How to change the owner of a bucket

2017-02-14 Thread Yoann Moulin
Dear list, I was looking into how to change the owner of a bucket. There is a lack of documentation on that point (even the man page is not clear); I found out how with the help of Orit. > radosgw-admin metadata get bucket: > radosgw-admin bucket link --uid= --bucket= > --bucket-id= this issue
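
A hypothetical end-to-end example of the procedure above; 'mybucket', 'newowner' and the bucket id are placeholders.

    radosgw-admin metadata get bucket:mybucket   # note the bucket_id in the output
    radosgw-admin bucket link --uid=newowner --bucket=mybucket --bucket-id=<bucket_id>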

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Wido den Hollander
> On 14 February 2017 at 11:14, Nick Fisk wrote: > > > > -Original Message- > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > > Dongsheng Yang > > Sent: 14 February 2017 09:01 > > To: Sage Weil > > Cc:

Re: [ceph-users] Radosgw scaling recommendation?

2017-02-14 Thread Benjeman Meekhof
Thanks everyone for the suggestions, playing with all three of the tuning knobs mentioned has greatly increased the number of client connections an instance can deal with. We're still experimenting to find the max values to saturate our hardware. With values as below we'd see something around 50
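
Illustrative only, since the preview does not show the final numbers: the kind of RGW concurrency settings such tuning usually touches (a ceph.conf section for the gateway instance); the instance name and values are placeholders.

    [client.rgw.gateway1]
    rgw frontends = civetweb port=7480 num_threads=512
    rgw thread pool size = 512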

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Nick Fisk
> -Original Message- > From: Wido den Hollander [mailto:w...@42on.com] > Sent: 14 February 2017 16:25 > To: Dongsheng Yang ; n...@fisk.me.uk > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] bcache vs flashcache vs cache tiering > > > > Op 14

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Tomasz Kuzemko
We have been running flashcache in production for RBD behind OSDs for over two years now. We had a few issues with it: • one rare kernel livelock between XFS and flashcache that took some effort to track down and fix (we could release a patched flashcache if there is interest) • careful tuning of skip

[ceph-users] RBD client newer than cluster

2017-02-14 Thread Lukáš Kubín
Hi, I'm most probably hitting bug http://tracker.ceph.com/issues/13755 - libvirt-mounted RBD disks suspend I/O during snapshot creation until a hard reboot. My Ceph cluster (monitors and OSDs) is running v0.94.3, while the clients (OpenStack/KVM compute nodes) run v0.94.5. Can I still update the client

[ceph-users] async-ms with RDMA or DPDK?

2017-02-14 Thread Bastian Rosner
Hi, according to the Kraken release notes and documentation, AsyncMessenger now also supports RDMA and DPDK. Is anyone already using async-ms with RDMA or DPDK and able to tell us something about real-world performance gains and stability? Best, Bastian
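
For reference, a hedged sketch of the Kraken-era ceph.conf settings involved in switching the async messenger to RDMA; the device name is a placeholder.

    ms_type = async+rdma
    ms_async_rdma_device_name = mlx5_0   # placeholder RDMA device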

Re: [ceph-users] extending ceph cluster with osds close to near full ratio (85%)

2017-02-14 Thread Brian Andrus
On Tue, Feb 14, 2017 at 5:27 AM, Tyanko Aleksiev wrote: > Hi Cephers, > > At University of Zurich we are using Ceph as a storage back-end for our > OpenStack installation. Since we recently reached 70% of occupancy > (mostly caused by the cinder pool served by 16384PGs)

Re: [ceph-users] Jewel to Kraken OSD upgrade issues

2017-02-14 Thread Gregory Farnum
On Tue, Feb 14, 2017 at 11:38 AM, Benjeman Meekhof wrote: > Hi all, > > We encountered an issue updating our OSD from Jewel (10.2.5) to Kraken > (11.2.0). OS was RHEL derivative. Prior to this we updated all the > mons to Kraken. > > After updating ceph packages I restarted

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Gregory Farnum
On Tue, Feb 14, 2017 at 8:25 AM, Wido den Hollander wrote: > >> On 14 February 2017 at 11:14, Nick Fisk wrote: >> >> >> > -Original Message- >> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> > Dongsheng Yang >> > Sent: 14

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Nick Fisk
> -Original Message- > From: Gregory Farnum [mailto:gfar...@redhat.com] > Sent: 14 February 2017 21:05 > To: Wido den Hollander > Cc: Dongsheng Yang ; Nick Fisk > ; Ceph Users > Subject: Re:

Re: [ceph-users] RBD client newer than cluster

2017-02-14 Thread Christian Balzer
On Wed, 15 Feb 2017 02:56:22 +0900 Shinobu Kinjo wrote: > On Wed, Feb 15, 2017 at 2:18 AM, Lukáš Kubín wrote: > > Hi, > > I'm most probably hitting bug http://tracker.ceph.com/issues/13755 - when > > libvirt mounted RBD disks suspend I/O during snapshot creation until hard

Re: [ceph-users] Ceph OSDs advice

2017-02-14 Thread Sam Huracan
Hi Khang, What file system do you use on the OSD nodes? XFS always uses memory for caching data before writing to disk, so don't worry - it will always hold as much memory in your system as possible. 2017-02-15 10:35 GMT+07:00 Khang Nguyễn Nhật : > Hi all, > My ceph

Re: [ceph-users] async-ms with RDMA or DPDK?

2017-02-14 Thread Haomai Wang
On Tue, Feb 14, 2017 at 11:44 PM, Bastian Rosner wrote: > > Hi, > > according to kraken release-notes and documentation, AsyncMessenger now also > supports RDMA and DPDK. > > Is anyone already using async-ms with RDMA or DPDK and might be able to tell > us

Re: [ceph-users] Ceph OSDs advice

2017-02-14 Thread Khang Nguyễn Nhật
Hi Sam, Thanks for your reply. I use the BTRFS file system on the OSDs. Here is the result of "*free -hw*":

              total    used    free  shared  buffers  cache  available
Mem:           125G     58G     31G    1.2M     3.7M    36G        60G

and "*ceph df*":

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Christian Balzer
On Tue, 14 Feb 2017 22:42:21 - Nick Fisk wrote: > > -Original Message- > > From: Gregory Farnum [mailto:gfar...@redhat.com] > > Sent: 14 February 2017 21:05 > > To: Wido den Hollander > > Cc: Dongsheng Yang ; Nick Fisk > >

[ceph-users] Ceph OSDs advice

2017-02-14 Thread Khang Nguyễn Nhật
Hi all, My Ceph OSDs are running on Fedora Server 24 with this configuration: 128GB DDR3 RAM, Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, 72 OSDs (8TB per OSD). The cluster is used through the Ceph Object Gateway with the S3 API. It currently contains 500GB of data but is already using > 50GB of RAM. I'm worried my OSDs will die if I

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Stefan Priebe - Profihost AG
I've been testing flashcache, bcache, dm-cache and even dm-writeboost in production Ceph clusters. The only one that works fine and gives the speed we need is bcache. All the others failed with slow speeds or high latencies. Stefan Excuse my typo, sent from my mobile phone. > On 15.02.2017 at