Re: [ceph-users] CephFS dropping data with rsync?

2018-06-15 Thread Hector Martin
On 2018-06-16 13:04, Hector Martin wrote: > I'm at a loss as to what happened here. Okay, I just realized CephFS has a default 1TB maximum file size... that explains what triggered the problem. I just bumped it to 10TB. What that doesn't explain is why rsync didn't complain about anything. Maybe when
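For anyone hitting the same limit: the cap is the filesystem's max_file_size setting (default 1 TB). A minimal sketch of raising it, assuming the filesystem is named "cephfs":

    # 10 TB expressed in bytes (10 * 2^40); pick whatever ceiling you need
    ceph fs set cephfs max_file_size 10995116277760
    # confirm the new value
    ceph fs get cephfs | grep max_file_size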

[ceph-users] CephFS dropping data with rsync?

2018-06-15 Thread Hector Martin
I'm at a loss as to what happened here. I'm testing a single-node Ceph "cluster" as a replacement for RAID and traditional filesystems. Nine 4TB HDDs in a single (underpowered) server. Running Luminous 12.2.5 with BlueStore OSDs. I set up CephFS on a k=6,m=2 EC pool, mounted it via FUSE, and ran an
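For context, a rough sketch of the kind of setup described above. Pool names, PG counts, and the OSD-level failure domain are assumptions for a single-node test, not the poster's exact configuration:

    # EC profile matching k=6,m=2; failure domain must be osd on a single host
    ceph osd erasure-code-profile set ec62 k=6 m=2 crush-failure-domain=osd
    ceph osd pool create cephfs_data 128 128 erasure ec62
    ceph osd pool set cephfs_data allow_ec_overwrites true   # requires BlueStore OSDs
    ceph osd pool create cephfs_metadata 32 32 replicated
    # --force may be required when the default data pool is erasure-coded
    ceph fs new cephfs cephfs_metadata cephfs_data --force
    ceph-fuse /mnt/cephfs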

[ceph-users] move rbd image (with snapshots) to different pool

2018-06-15 Thread Marc Roos
If I would like to copy/move an rbd image, is this the only option I have? (I want to move an image from an HDD pool to an SSD pool.) rbd clone mypool/parent@snap otherpool/child
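For reference, cloning into another pool requires a protected snapshot first; the names below are the ones from the question, and the flatten step is optional, only needed to detach the child from its parent:

    rbd snap create mypool/parent@snap
    rbd snap protect mypool/parent@snap
    rbd clone mypool/parent@snap otherpool/child
    # optional: copy all data so the child no longer depends on the parent
    rbd flatten otherpool/child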

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-15 Thread Wladimir Mutel
Jason Dillaman wrote: Jun 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.726 54121 [DEBUG] dbus_name_acquired:461: name org.kernel.TCMUService1 acquired Jun 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13 08:38:30.521 54121 [DEBUG_SCSI_CMD] tcmu_print_cdb_info:1205

[ceph-users] RGW Dynamic bucket index resharding keeps resharding all buckets

2018-06-15 Thread Sander van Schie / True
Hello, we're running into some problems with dynamic bucket index resharding. After an upgrade from Ceph 12.2.2 to 12.2.5, which fixed an issue with resharding when using tenants (which we do), the cluster was busy resharding for 2 days straight, resharding the same buckets over and over again.
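A sketch of the kind of commands useful for investigating this; the config toggle is a workaround, not a fix, and section/option placement is an assumption:

    # what is currently queued for resharding, and which buckets exceed the per-shard object limit
    radosgw-admin reshard list
    radosgw-admin bucket limit check
    # workaround: disable dynamic resharding in ceph.conf on the RGW hosts
    # [client.rgw.<name>]
    # rgw_dynamic_resharding = false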

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-15 Thread Jason Dillaman
On Fri, Jun 15, 2018 at 6:19 AM, Wladimir Mutel wrote: > Jason Dillaman wrote: > >>> Jun 13 08:38:14 p10s tcmu-runner[54121]: 2018-06-13 08:38:14.726 54121 >>> [DEBUG] dbus_name_acquired:461: name org.kernel.TCMUService1 acquired >>> Jun 13 08:38:30 p10s tcmu-runner[54121]: 2018-06-13

Re: [ceph-users] OSDs too slow to start

2018-06-15 Thread Alfredo Daniel Rezinovsky
"Too long" is 120 seconds. The DB is on SSD devices, and the devices are fast. The OSD process reads about 800MB, but I cannot be sure from where. On 13/06/18 11:36, Gregory Farnum wrote: How long is “too long”? 800MB on an SSD should only be a second or three. I’m not sure if that’s a reasonable
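One way to see where those reads actually go during startup; the device paths and the OSD id below are placeholders:

    # per-device and per-process I/O while the OSD starts
    iostat -xm 1 /dev/nvme0n1 /dev/sdX &
    pidstat -d 1 -G ceph-osd &
    systemctl start ceph-osd@12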

[ceph-users] MDS: journaler.pq decode error

2018-06-15 Thread Benjeman Meekhof
Have seen some posts and issue trackers related to this topic in the past, but haven't been able to put it together to resolve the issue I'm having. All on Luminous 12.2.5 (upgraded over time from past releases). We are going to upgrade to Mimic in the near future if that would somehow resolve the

Re: [ceph-users] MDS: journaler.pq decode error

2018-06-15 Thread John Spray
On Fri, Jun 15, 2018 at 2:55 PM, Benjeman Meekhof wrote: > Have seen some posts and issue trackers related to this topic in the > past but haven't been able to put it together to resolve the issue I'm > having. All on Luminous 12.2.5 (upgraded over time from past > releases). We are going to
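The full reply is truncated above. For readers landing here: the purge queue ("pq") has its own journal that can be examined with cephfs-journal-tool. A sketch of the usual first steps; flags may vary by version, and anything destructive should only be run against a stopped MDS with backups in hand:

    cephfs-journal-tool --journal=purge_queue journal inspect
    cephfs-journal-tool --journal=purge_queue header get
    # last resort, only with a good reason and a backup of the journal:
    # cephfs-journal-tool --journal=purge_queue journal export pq-backup.bin
    # cephfs-journal-tool --journal=purge_queue journal reset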

Re: [ceph-users] move rbd image (with snapshots) to different pool

2018-06-15 Thread Jason Dillaman
The "rbd clone" command will just create a copy-on-write cloned child of the source image. It will not copy any snapshots from the original image to the clone. With the Luminous release, you can use "rbd export --export-format 2 - | rbd import --export-format 2 - " to export / import an image

Re: [ceph-users] move rbd image (with snapshots) to different pool

2018-06-15 Thread Steve Taylor
I have done this with Luminous by deep-flattening a clone in a different pool. It seemed to do what I wanted, but the RBD appeared to lose its sparseness in the process. Can anyone verify that and/or comment on whether Mimic's "rbd deep copy" does the same? Steve Taylor | Senior Software
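For reference, the Mimic command being asked about looks roughly like this (pool/image names are placeholders; whether it preserves sparseness is exactly the open question above):

    rbd deep cp hddpool/myimage ssdpool/myimage
    # quick sparseness check: compare provisioned vs. actually used space
    rbd du ssdpool/myimage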

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-15 Thread Wladimir Mutel
Jason Dillaman wrote: [1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/ I don't use either MPIO or MCS on the Windows 2008 R2 or Windows 10 initiator (not Win2016, but I hope there is not much difference). I am trying to make it work with a single session first. Also, right now I only

Re: [ceph-users] move rbd image (with snapshots) to different pool

2018-06-15 Thread Jason Dillaman
On Fri, Jun 15, 2018 at 12:15 PM, Steve Taylor wrote: > I have done this with Luminous by deep-flattening a clone in a different > pool. It seemed to do what I wanted, but the RBD appeared to lose its > sparseness in the process. Hmm, Luminous librbd clients should have kept object-sized

Re: [ceph-users] PM1633a

2018-06-15 Thread Paul Emmerich
Hi, we've evaluated them, but they were worse than the SM863a in the usual quick sync write IOPS benchmark. That's not to say it's a bad disk (10k IOPS with one thread, ~20k with more threads); we haven't run any long-term tests. Paul 2018-06-15 21:02 GMT+02:00 Brian: > Hello List - anyone
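The "usual quick sync write IOPS benchmark" on this list is typically something like the following fio run. This is an assumption about which benchmark is meant; note it writes directly to the raw device and destroys any data on it:

    fio --name=sync-write-test --filename=/dev/sdX \
        --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting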

[ceph-users] PM1633a

2018-06-15 Thread Brian :
Hello List - is anyone using these drives, and do you have any good / bad things to say about them? Thanks!

Re: [ceph-users] osd_op_threads appears to be removed from the settings

2018-06-15 Thread Piotr Dalek
No, it’s no longer valid. -- Piotr Dałek piotr.da...@corp.ovh.com https://ovhcloud.com/ From: ceph-users On Behalf Of Matthew Stroud Sent: Friday, June 15, 2018 8:11 AM To: ceph-users Subject: [ceph-users] osd_op_threads appears to be removed from the settings So I’m trying to update the
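A quick way to confirm this against a running daemon, plus the options that tune the sharded op queue instead (osd.0 is just an example id):

    # list what the OSD actually knows; osd_op_threads will be absent on Luminous
    ceph daemon osd.0 config show | grep '^osd_op'
    # the op worker pool is controlled by these instead
    ceph daemon osd.0 config get osd_op_num_shards
    ceph daemon osd.0 config get osd_op_num_threads_per_shard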

Re: [ceph-users] osd_op_threads appears to be removed from the settings

2018-06-15 Thread Matthew Stroud
Thanks for the info From: Piotr Dalek Date: Friday, June 15, 2018 at 12:33 AM To: Matthew Stroud , ceph-users Subject: RE: osd_op_threads appears to be removed from the settings No, it’s no longer valid. -- Piotr Dałek piotr.da...@corp.ovh.com https://ovhcloud.com/ From: ceph-users On

[ceph-users] osd_op_threads appears to be removed from the settings

2018-06-15 Thread Matthew Stroud
So I’m trying to update the osd_op_threads setting that existed in Jewel but now doesn’t appear to be in Luminous. What’s more confusing is that the docs state that it is a valid option. Is osd_op_threads still valid? I’m currently running Ceph 12.2.2. Thanks, Matthew Stroud