Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread John Hearns
Errr, is this very wise? "I have both its Ethernets connected to the same LAN, with different IPs in the same subnet (like 192.168.200.230/24 and 192.168.200.231/24)." In my experience, setting up two interfaces on the same subnet means that your system doesn't know which one

[ceph-users] inconsistent pgs :- stat mismatch in whiteouts

2018-06-01 Thread shrey chauhan
Hi, I keep getting inconsistent placement groups, and every time it's the whiteouts. cluster [ERR] 9.f repair stat mismatch, got 1563/1563 objects, 0/0 clones, 1551/1551 dirty, 78/78 omap, 0/0 pinned, 12/12 hit_set_archive, 0/-9 whiteouts, 28802382/28802382 bytes, 16107/16107 hit_set_archive
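
For anyone hitting similar scrub errors, the usual first steps are to inspect the inconsistency and then ask the primary OSD to repair it; a minimal sketch (the PG id 9.f is taken from the log line above):

    ceph health detail                                     # lists the inconsistent PGs
    rados list-inconsistent-obj 9.f --format=json-pretty   # show exactly which stats/objects mismatch
    ceph pg repair 9.f                                     # trigger a repair on the acting primary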

Re: [ceph-users] inconsistent pgs :- stat mismatch in whiteouts

2018-06-01 Thread Brad Hubbard
On Fri, Jun 1, 2018 at 6:41 PM, shrey chauhan wrote: > Hi, > > I keep getting inconsistent placement groups and every time its the > whiteout. > > > cluster [ERR] 9.f repair stat mismatch, got 1563/1563 objects, 0/0 clones, > 1551/1551 dirty, 78/78 omap, 0/0 pinned, 12/12 hit_set_archive, 0/-9 >

[ceph-users] ceph with rdma

2018-06-01 Thread Muneendra Kumar M
Hi, I have created a Ceph cluster on CentOS 7 servers and it is working fine with TCP, and I am able to run all benchmarks. Now I want to check the same with RDMA, and I followed the link below to deploy it: https://community.mellanox.com/docs/DOC-2721 After following the above document
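
For reference, the async+rdma messenger is typically switched on through ceph.conf settings along these lines; a sketch only, and the device name mlx5_0 is an assumption that must match the actual HCA reported by ibv_devices:

    [global]
    ms_type = async+rdma
    ms_async_rdma_device_name = mlx5_0
    # the RDMA port/GID settings may also need adjusting for a RoCE setup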

[ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Wladimir Mutel
Dear all, I am experimenting with a Ceph setup. I set up a single node (Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs, Ubuntu 18.04 Bionic, Ceph packages from http://download.ceph.com/debian-luminous/dists/xenial/ and iSCSI parts built

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread John Hearns
It is worth asking - why do you want to have two interfaces? If you have 1Gbps interfaces and this is a bandwidth requirement, then 10Gbps cards and switches are very cheap these days. On 1 June 2018 at 10:37, Panayiotis Gotsis wrote: > Hello > > Bonding and iscsi are not a best practice

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Marc Roos
Indeed, you have to add routes and rules to the routing table. Just bond them. -Original Message- From: John Hearns [mailto:hear...@googlemail.com] Sent: Friday, 1 June 2018 10:00 To: ceph-users Subject: Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ? Errr
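
If you do keep two interfaces in one subnet instead of bonding, the "routes and rules" Marc mentions amount to source-based policy routing; a minimal sketch, assuming interface names eno1/eno2 and the addresses from the original post (the table IDs are arbitrary):

    ip route add 192.168.200.0/24 dev eno1 src 192.168.200.230 table 100
    ip rule  add from 192.168.200.230/32 table 100
    ip route add 192.168.200.0/24 dev eno2 src 192.168.200.231 table 101
    ip rule  add from 192.168.200.231/32 table 101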

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Panayiotis Gotsis
Hello. Bonding and iSCSI are not a best-practice architecture; multipath is. However, I can attest to problems with multipathd and Debian. In any case, what you should try to do and check is: 1) Use two VLANs, one for each Ethernet port, with different IP address space. Your initiators on
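
On the initiator side, the multipath approach usually means discovering and logging in over both portals and letting multipathd aggregate the paths; a rough sketch, with the portal addresses as placeholders:

    iscsiadm -m discovery -t sendtargets -p 192.168.200.230
    iscsiadm -m discovery -t sendtargets -p 192.168.201.231
    iscsiadm -m node -L all      # log in to all discovered portals
    multipath -ll                # verify both paths show up under one multipath map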

Re: [ceph-users] Migrating (slowly) from spinning rust to ssd

2018-06-01 Thread Paul Emmerich
You can't have a server with both SSDs and HDDs in this setup because you can't write a crush rule that is able to pick n distinct servers when also specifying different device classes. A crush rule for this looks like this: step take default class=ssd step choose firstn 1 type host emit step
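
Spelled out, the kind of device-class rule Paul is quoting (one SSD host for the primary copy, then HDD hosts for the remaining copies) might look roughly like this; note his caveat still applies, since nothing prevents the HDD step from landing on the same physical server as the SSD step if a host carries both classes:

    rule ssd-primary {
            id 5
            type replicated
            min_size 1
            max_size 10
            step take default class ssd
            step chooseleaf firstn 1 type host
            step emit
            step take default class hdd
            step chooseleaf firstn -1 type host
            step emit
    }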

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Mike Christie
On 06/01/2018 02:01 AM, Wladimir Mutel wrote: > Dear all, > > I am experimenting with Ceph setup. I set up a single node > (Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs, > Ubuntu 18.04 Bionic, Ceph packages from >

Re: [ceph-users] Data recovery after losing all monitors

2018-06-01 Thread Yan, Zheng
Bryan Henderson wrote on Sat, 2 Jun 2018 at 10:23: > >Luckily; it's not. I don't remember if the MDS maps contain entirely > >ephemeral data, but on the scale of cephfs recovery scenarios that's just > >about the easiest one. Somebody would have to walk through it; you > probably > >need to look up the

Re: [ceph-users] Data recovery after losing all monitors

2018-06-01 Thread Bryan Henderson
>Luckily; it's not. I don't remember if the MDS maps contain entirely >ephemeral data, but on the scale of cephfs recovery scenarios that's just >about the easiest one. Somebody would have to walk through it; you probably >need to look up the table states and mds counts from the RADOS store and
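
For context, the usual starting point when every monitor is gone is to rebuild the mon store from the OSDs and then recreate the ephemeral pieces such as the MDS map; a rough sketch of the documented procedure, with the OSD path as a placeholder (each OSD is walked in turn, with the OSD stopped):

    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
            --op update-mon-db --mon-store-path /tmp/mon-store
    # ...repeat for every OSD, accumulating into the same mon-store directory...
    ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring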

[ceph-users] Ceph Developer Monthly - June 2018

2018-06-01 Thread Leonardo Vaz
Hey Cephers, This is just a friendly reminder that the next Ceph Developer Monthly meeting is coming up: http://wiki.ceph.com/Planning If you have work that you're doing that is feature work, significant backports, or anything you would like to discuss with the core team, please add it to

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Jason Dillaman
Seeing one (1) TPG per gateway is expected behavior, since that is how ALUA active/passive is configured. On Fri, Jun 1, 2018 at 1:20 PM, Wladimir Mutel wrote: > Ok, I looked into Python sources of ceph-iscsi-{cli,config} and > found that per-host configuration sections use short host

[ceph-users] Problems while sending email to Ceph mailings

2018-06-01 Thread Leonardo Vaz
Hi, Some of our community members reported problems while sending email to mailing lists hosted by the Ceph project. We reported the problem to the hosting company; it's happening because of some changes to DNS records made some time ago, and they're working to fix it. The issue happens to all mailings

Re: [ceph-users] Fwd: v13.2.0 Mimic is out

2018-06-01 Thread Alexandre DERUMIER
CephFS snapshots are now stable and enabled by default on new filesystems :) Alexandre Derumier, System and Storage Engineer, Infrastructure Manager, Phone: +33 3 59 82 20 10, 125 Avenue de la république, 59110 La Madeleine [ https://twitter.com/OdisoHosting ] [
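
As a quick illustration of the feature, CephFS snapshots are taken by creating a directory inside the hidden .snap directory of the subtree to capture; the mount point and snapshot name below are placeholders:

    mkdir /mnt/cephfs/projects/.snap/before-upgrade    # take a snapshot of this subtree
    ls    /mnt/cephfs/projects/.snap/                  # list existing snapshots
    rmdir /mnt/cephfs/projects/.snap/before-upgrade    # remove the snapshot again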

[ceph-users] Fwd: inconsistent pgs :- stat mismatch in whiteouts

2018-06-01 Thread Brad Hubbard
-- Forwarded message -- From: Brad Hubbard Date: Fri, Jun 1, 2018 at 9:24 PM Subject: Re: [ceph-users] inconsistent pgs :- stat mismatch in whiteouts To: shrey chauhan Cc: ceph-users Too late for me today. If you send your reply to the list someone else may provide an answer

Re: [ceph-users] Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access)

2018-06-01 Thread Alfredo Deza
On Thu, May 31, 2018 at 10:33 PM, Marc Roos wrote: > > I actually tried to search the ML before bringing up this topic. Because > I do not get the logic choosing this direction. > > - Bluestore is created to cut out some fs overhead, > - everywhere 10Gb is recommended because of better latency.
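
For comparison with the old ceph-disk workflow, the ceph-volume equivalent for standing up a BlueStore OSD is an LVM-backed call; a minimal sketch, with the device name as a placeholder:

    ceph-volume lvm create --bluestore --data /dev/sdb
    # or split into two steps:
    ceph-volume lvm prepare --bluestore --data /dev/sdb
    ceph-volume lvm activate --all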

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Wladimir Mutel
Well, ok, I moved the second address into a different subnet (192.168.201.231/24) and also reflected that in the 'hosts' file. But that did not help much: /iscsi-target...test/gateways> create p10s2 192.168.201.231 skipchecks=true OS version/package checks have been bypassed Adding gateway,

[ceph-users] Fwd: v13.2.0 Mimic is out

2018-06-01 Thread ceph
FYI From: "Abhishek" To: "ceph-devel" , "ceph-users" , ceph-maintain...@ceph.com, ceph-annou...@ceph.com Sent: Friday, 1 June 2018 14:11:00 Subject: v13.2.0 Mimic is out We're glad to announce the first stable release of Mimic, the next long term release series. There have been major changes

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Jason Dillaman
Are your firewall ports open for rbd-target-api? Is the process running on the other host? If you run "gwcli -d" and try to add the second gateway, what messages do you see? On Fri, Jun 1, 2018 at 8:15 AM, Wladimir Mutel wrote: > Well, ok, I moved second address into different subnet >
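
A quick way to rule the first two questions in or out from the shell; the port below is the ceph-iscsi default api_port of 5000, which is an assumption if iscsi-gateway.cfg was customised:

    systemctl status rbd-target-api      # is the API daemon running on the other gateway?
    ss -tlnp | grep 5000                 # is it listening?
    nc -zv 192.168.201.231 5000          # can it be reached through the firewall from this node?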

Re: [ceph-users] Sudden increase in "objects misplaced"

2018-06-01 Thread Jake Grimmett
Hi Greg, Firstly, many thanks for your advice. I'm perplexed as to why the crush map is upset; the host names look the same, each node has a fixed IP on a single bond0 interface. Perhaps the problems were an artefact of having "nodown" set? As you suggested, I've unset "osd nodown" and am
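
For anyone following along, the flag in question can be checked and cleared like this:

    ceph osd dump | grep flags     # shows nodown/noout etc. if set
    ceph osd unset nodown
    ceph -w                        # watch peering and recovery settle afterwards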

Re: [ceph-users] inconsistent pgs :- stat mismatch in whiteouts

2018-06-01 Thread shrey chauhan
Yes, it is a cache tier. Moreover, what are these whiteouts, and when does this mismatch occur? Thanks On Fri, Jun 1, 2018 at 3:51 PM, Brad Hubbard wrote: > On Fri, Jun 1, 2018 at 6:41 PM, shrey chauhan > wrote: > > Hi, > > > > I keep getting inconsistent placement groups and every time its the > >

[ceph-users] Migrating (slowly) from spinning rust to ssd

2018-06-01 Thread Jonathan Proulx
Hi All, I am looking at starting to move my deployed Ceph cluster to SSD. As a first step, my thought is to get a large enough set of SSD expansion that I can set the crush map to ensure 1 copy of every (important) PG is on SSD and use primary affinity to ensure that copy is primary. I know this won't
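
The primary-affinity half of that plan is a one-liner per OSD: push the HDD OSDs towards 0 so the SSD copy is preferred as primary; the OSD ids below are placeholders:

    ceph osd primary-affinity osd.7 0      # HDD OSD: avoid making it the primary
    ceph osd primary-affinity osd.42 1     # SSD OSD: default weight, preferred as primary
    ceph osd tree                          # the PRI-AFF column shows the current values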

Re: [ceph-users] Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access)

2018-06-01 Thread Marc Roos
Yes, it is indeed difficult to find a good balance between asking multiple things in one email and risking that not all are answered, or putting them as individual questions. -Original Message- From: David Turner [mailto:drakonst...@gmail.com] Sent: Thursday, 31 May 2018 23:50 To: Marc

Re: [ceph-users] SSD recommendation

2018-06-01 Thread Simon Ironside
Thanks for the input, both. I've gone ahead with the SM863As. I've no input on the Microns, I'm afraid. The specs look good to me, I just can't get them easily. Sean, I didn't know you'd lost 10 in all. I do have 4x 480GB S4600s I'm using as Filestore journals in production for a couple of

Re: [ceph-users] Ceph EC profile, how are you using?

2018-06-01 Thread Vasu Kulkarni
Thanks to those who have added their config. I request anyone on the list using an EC profile in production to add their high-level config, which will be helpful for tests. Thanks On Wed, May 30, 2018 at 12:16 PM, Vasu Kulkarni wrote: > Hello Ceph Users, > > I would like to know how folks are using EC profile
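
For anyone who wants to describe their setup in the same terms, an EC profile and pool are defined along these lines; the k/m values, pool name, and PG count below are placeholders, not a recommendation:

    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd erasure-code-profile get ec42
    ceph osd pool create ecpool 128 128 erasure ec42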

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Wladimir Mutel
Ok, I looked into the Python sources of ceph-iscsi-{cli,config} and found that per-host configuration sections use the short host name (returned by the this_host() function) as their primary key. So I can't trick gwcli with an alternative host name like p10s2, which I put
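
In practice that means the gateway name given to gwcli has to match what the target machine calls itself; a quick check on each node, assuming this_host() amounts to the short, unqualified hostname (an assumption based on the description above):

    hostname -s        # the short name ceph-iscsi keys the per-host gateway section on
    hostname -f        # the FQDN, for comparison; extra DNS names for the second IP don't change the key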