Re: [ceph-users] Two mons

2017-08-15 Thread Oscar Segarra
Hi David, Thanks a lot for your quick response... *What are you doing that only allows you to add one at a time?* I'm trying to create a script for adding/removing a mon in my environment --> I want to execute it from a simple web page... Thanks a lot! 2017-08-15 19:26 GMT+02:00 David Turner
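A minimal sketch of what such a script can drive, using the standard monitor deployment commands; the mon id "mon2", the IP, and the /tmp paths are placeholders, and the exact sequence should be checked against the manual-deployment docs:

    # 1. Fetch the current monmap and mon keyring from a running cluster.
    ceph mon getmap -o /tmp/monmap
    ceph auth get mon. -o /tmp/mon.keyring

    # 2. On the new host, initialize the mon's data directory.
    ceph-mon -i mon2 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring

    # 3. Start the daemon; it joins the monmap and quorum re-forms.
    ceph-mon -i mon2 --public-addr 192.0.2.2:6789

    # Removal, once the daemon is stopped, is a single command.
    ceph mon remove mon2

    # Verify quorum at any point.
    ceph quorum_status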

[ceph-users] Two mons

2017-08-15 Thread Oscar Segarra
Hi, I'd like to test and script the process of adding monitors, adding them one by one to the ceph infrastructure. Is it possible to have two mons running on two servers (one mon each) --> I can assume that mon quorum won't be reached until both servers are up. Is this right? I have not been

Re: [ceph-users] Two mons

2017-08-15 Thread David Turner
There is nothing that will stop you from having an even number of mons (including 2). You just run the risk of getting into a split-brain scenario. As long as you aren't planning to stay in that scenario, I don't see a problem with it. I have 3 mons in my home cluster and I've had to remove
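For reference, monitor quorum is a strict majority, floor(n/2)+1, which is why two mons add no failure tolerance over one; a quick check of who is in quorum:

    # mons   majority needed   mon failures tolerated
    #  1            1                   0
    #  2            2                   0   <- the risk described above
    #  3            2                   1
    #  5            3                   2
    ceph quorum_status --format json-pretty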

[ceph-users] error: cluster_uuid file exists with value

2017-08-15 Thread Oscar Segarra
Hi, After adding a new monitor to the cluster I'm getting a strange error: vdicnode02/store.db/MANIFEST-86 succeeded,manifest_file_number is 86, next_file_number is 88, last_sequence is 8, log_number is 0,prev_log_number is 0,max_column_family is 0 2017-08-15 22:00:58.832599 7f6791187e40 4
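If the error comes from a stale monitor data directory left behind by an earlier deployment attempt (an assumption; the truncated log doesn't show the full context), wiping and re-initializing that mon's store is the usual fix; a sketch for the vdicnode02 mon named in the log:

    systemctl stop ceph-mon@vdicnode02
    rm -rf /var/lib/ceph/mon/ceph-vdicnode02
    # then repeat the normal mkfs + start sequence for this mon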

Re: [ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-08-15 Thread Mehmet
I am not sure, but perhaps nodown/noout could help it finish? - Mehmet On 15 August 2017 16:01:57 CEST, Andreas Calminder wrote: >Hi, >I got hit with osd suicide timeouts while deep-scrub runs on a >specific pg, there's a RH article
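The flags Mehmet is referring to are cluster-wide and set/cleared like this; remember to unset them once the scrub completes:

    ceph osd set nodown     # stop OSDs being marked down
    ceph osd set noout      # stop down OSDs being marked out (no rebalancing)
    # ... let the deep-scrub finish ...
    ceph osd unset nodown
    ceph osd unset noout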

[ceph-users] v12.1.4 Luminous (RC) released

2017-08-15 Thread Abhishek
This is the fifth release candidate for Luminous, the next long term stable release. We’ve had to do this release as there was a bug in the previous RC, which affected upgrades to Luminous.[1] Please note that this is still a *release candidate* and not the final release, we're expecting the

Re: [ceph-users] v12.1.4 Luminous (RC) released

2017-08-15 Thread Gregory Farnum
On Tue, Aug 15, 2017 at 2:05 PM, Abhishek wrote: > This is the fifth release candidate for Luminous, the next long term > stable release. We’ve had to do this release as there was a bug in > the previous RC, which affected upgrades to Luminous.[1] In particular, this will fix

Re: [ceph-users] Two mons

2017-08-15 Thread Oscar Segarra
Thanks a lot Greg, nice to hear! 2017-08-15 21:17 GMT+02:00 Gregory Farnum : > On Tue, Aug 15, 2017 at 10:28 AM David Turner > wrote: > >> There is nothing that will stop you from having an even number of mons >> (including 2). You just run the chance

Re: [ceph-users] cluster unavailable for 20 mins when downed server was reintroduced

2017-08-15 Thread Gregory Farnum
Sounds like you've got a few different things happening here. On Tue, Aug 15, 2017 at 4:23 AM Sean Purdy wrote: > Luminous 12.1.1 rc1 > > Hi, > > > I have a three node cluster with 6 OSD and 1 mon per node. > > I had to turn off one node for rack reasons. While the

Re: [ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-08-15 Thread Gregory Farnum
On Tue, Aug 15, 2017 at 7:03 AM Andreas Calminder < andreas.calmin...@klarna.com> wrote: > Hi, > I got hit with osd suicide timeouts while deep-scrub runs on a > specific pg, there's a RH article > (https://access.redhat.com/solutions/2127471) suggesting changing >

[ceph-users] Ceph mount error and mds laggy

2017-08-15 Thread gjprabu
Hi Team, We are having an issue with mounting ceph; it's throwing the error "mount error 5 = Input/output error", and the MDS also seems laggy: "mds ceph-zstorage1 is laggy". Kindly help us with this issue. cluster a8c92ae6-6842-4fa2-bfc9-8cdefd28df5c health HEALTH_WARN
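The usual first diagnostics for a laggy MDS, as a sketch (the daemon name ceph-zstorage1 is taken from the status output above; the systemd unit name is an assumption about this deployment):

    ceph -s                # overall cluster health
    ceph health detail     # which checks are firing and why
    ceph mds stat          # MDS map state
    systemctl restart ceph-mds@ceph-zstorage1   # restart the laggy daemon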

Re: [ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-08-15 Thread Andreas Calminder
Thanks, I'll try and do that. Since I'm running a cluster with multiple nodes, do I have to set this in ceph.conf on all nodes, or does it suffice on just the node with that particular osd? On 15 August 2017 at 22:51, Gregory Farnum wrote: > > > On Tue, Aug 15, 2017 at 7:03

Re: [ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-08-15 Thread Gregory Farnum
Yes, you can set it on just the one node. That configuration is for an entirely internal system and can mismatch across OSDs without trouble. On Tue, Aug 15, 2017 at 4:25 PM Andreas Calminder < andreas.calmin...@klarna.com> wrote: > Thanks, I'll try and do that. Since I'm running a cluster with >
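A sketch of both ways to apply it to a single OSD; the id 42 and the 300s value are placeholders:

    # At runtime, without restarting the daemon:
    ceph tell osd.42 injectargs '--osd_scrub_thread_suicide_timeout 300'

    # Or persistently, in ceph.conf on that one node only:
    [osd.42]
    osd scrub thread suicide timeout = 300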

Re: [ceph-users] ceph Cluster attempt to access beyond end of device

2017-08-15 Thread ZHOU Yuan
Hi Hauke, It's possibly the XFS issue as discussed in the previous thread. I also saw this issue in some JBOD setup, running with RHEL 7.3 Sincerely, Yuan On Tue, Aug 15, 2017 at 7:38 PM, Hauke Homburg wrote: > Hello, > > > I found some error in the Cluster with dmes

[ceph-users] Atomic object replacement with libradosstriper

2017-08-15 Thread Jan Kasprzak
Hello, Ceph users, I would like to use RADOS as object storage (I have written about it to this list a while ago), and I would like to use libradosstriper with C, as has been suggested to me here. My question is: when writing an object, is it possible to do it so that either

[ceph-users] which kernel version support object-map feature from rbd kernel client

2017-08-15 Thread moftah moftah
Hi All, I have searched everywhere for some sort of table that shows which rbd image features are supported by which kernel version, and didn't find any. Basically I am looking at the latest kernels from kernel.org, and I am thinking of upgrading to 4.12 since it is stable, but I want to make sure I can get rbd

[ceph-users] ceph Cluster attempt to access beyond end of device

2017-08-15 Thread Hauke Homburg
Hello, I found some errors in the cluster with dmesg -T: attempt to access beyond end of device. I found the following post: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg39101.html Is this a problem with the size of the filesystem itself, or "only" a driver bug? I ask because we

Re: [ceph-users] Jewel -> Luminous on Debian 9.1

2017-08-15 Thread Abhishek Lekshmanan
Dajka Tamás writes: > Dear All, > > > > I'm trying to upgrade our env. from Jewel to the latest RC. Packages are > installed (latest 12.1.3), but I'm unable to install the mgr. I've tried the > following (nodes in cluster are from 03-05, 03 is the admin node): > > > >
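Assuming the upgrade uses ceph-deploy (which the message suggests) and a ceph-deploy version recent enough to know about mgr daemons, creating the mgr is a single command; 'node03' stands in for the admin node named in the message:

    ceph-deploy mgr create node03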

[ceph-users] cluster unavailable for 20 mins when downed server was reintroduced

2017-08-15 Thread Sean Purdy
Luminous 12.1.1 rc1 Hi, I have a three node cluster with 6 OSDs and 1 mon per node. I had to turn off one node for rack reasons. While the node was down, the cluster was still running and accepting files via radosgw. However, when I turned the machine back on, radosgw uploads stopped
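For planned single-node downtime the usual precaution, sketched below, is to set noout first so the cluster doesn't start rebalancing while the node is away (this may or may not be related to the 20-minute outage described here):

    ceph osd set noout      # before shutting the node down
    # ... rack work, node powered off and back on ...
    ceph osd unset noout    # after its OSDs have rejoined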

Re: [ceph-users] which kernel version support object-map feature from rbd kernel client

2017-08-15 Thread Shinobu Kinjo
It would be much better to explain why, as of today, the object-map feature is not supported by the kernel client, or to document it. On Tue, Aug 15, 2017 at 8:08 PM, Ilya Dryomov wrote: > On Tue, Aug 15, 2017 at 11:34 AM, moftah moftah wrote: >> Hi All, >> >> I

Re: [ceph-users] which kernel version support object-map feature from rbd kernel client

2017-08-15 Thread Ilya Dryomov
On Tue, Aug 15, 2017 at 11:34 AM, moftah moftah wrote: > Hi All, > > I have searched everywhere for some sort of table that shows which rbd image > features are supported by which kernel version, and didn't find any. > > Basically I am looking at the latest kernels from kernel.org, and I am

[ceph-users] Luminous OSD startup errors

2017-08-15 Thread Andras Pataki
After upgrading to the latest Luminous RC (12.1.3), all our OSDs are crashing with the following assert: 0> 2017-08-15 08:28:49.479238 7f9b7615cd00 -1

Re: [ceph-users] which kernel version support object-map feature from rbd kernel client

2017-08-15 Thread moftah moftah
I don't think so. I tested with kernel-4.10.17-1-pve, which is the Proxmox 5 kernel, and that one didn't have object-map support; I had to disable the feature on the rbd image in order for the krbd module to deal with it and not complain about features. Thanks On Tue, Aug 15, 2017 at 9:25 AM,

Re: [ceph-users] Luminous OSD startup errors

2017-08-15 Thread Andras Pataki
Thanks for the quick response and the pointer. The dev build fixed the issue. Andras On 08/15/2017 09:19 AM, Jason Dillaman wrote: I believe this is a known issue [1] and that there will potentially be a new 12.1.4 RC released because of it. The tracker ticket has a link to a set of

Re: [ceph-users] Luminous OSD startup errors

2017-08-15 Thread Abhishek
On 2017-08-15 15:38, Andras Pataki wrote: Thanks for the quick response and the pointer. The dev build fixed the issue. Andras On 08/15/2017 09:19 AM, Jason Dillaman wrote: I believe this is a known issue [1] and that there will potentially be a new 12.1.4 RC released because of it. The

[ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-08-15 Thread Andreas Calminder
Hi, I got hit with osd suicide timeouts while deep-scrub runs on a specific pg. There's a RH article (https://access.redhat.com/solutions/2127471) suggesting changing 'osd_scrub_thread_suicide_timeout' from 60s to a higher value; the problem is the article is for Hammer and the

Re: [ceph-users] Luminous OSD startup errors

2017-08-15 Thread Jason Dillaman
I believe this is a known issue [1] and that there will potentially be a new 12.1.4 RC released because of it. The tracker ticket has a link to a set of development packages that should resolve the issue in the meantime. [1] http://tracker.ceph.com/issues/20985 On Tue, Aug 15, 2017 at 9:08 AM,

Re: [ceph-users] ceph Cluster attempt to access beyond end of device

2017-08-15 Thread David Turner
The error found in that thread, IIRC, is that the block size of the disk does not match the block size of the FS, and it is trying to access the rest of a block at the end of the disk. I also remember that the error didn't cause any problems. Why RAID 6? Rebuilding a RAID 6 seems like your cluster
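A quick way to compare the two block sizes David mentions; the device and mount point are placeholders, and xfs_info assumes an XFS filestore:

    blockdev --getss --getpbsz /dev/sdb             # logical and physical sector size
    xfs_info /var/lib/ceph/osd/ceph-0 | grep bsize  # filesystem block size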

Re: [ceph-users] which kernel version support object-map feature from rbd kernel client

2017-08-15 Thread David Turner
I thought that object-map, introduced with Jewel, was included with the 4.9 kernel and every kernel since then. On Tue, Aug 15, 2017, 7:26 AM Shinobu Kinjo wrote: > It would be much better to explain why as of today, object-map feature > is not supported by the kernel client,

Re: [ceph-users] which kernel version support object-map feature from rbd kernel client

2017-08-15 Thread Jason Dillaman
I believe you are thinking of the "exclusive-lock" feature which has been supported since kernel v4.9. The latest kernel only supports layering, exclusive-lock, and data-pool features. There is also support for tolerating the striping feature when it's (erroneously) enabled on an image but doesn't
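In practice that means disabling the unsupported features before mapping with krbd; pool and image names below are placeholders:

    # fast-diff depends on object-map, so disable them together.
    rbd feature disable mypool/myimage fast-diff object-map deep-flatten
    rbd map mypool/myimage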