On Mon, Mar 19, 2018 at 11:45 PM, Nicolas Huillard wrote:
> Le lundi 19 mars 2018 à 15:30 +0300, Sergey Malinin a écrit :
>> The default for mds_log_events_per_segment is 1024; in my setup I ended
>> up with 8192.
>> I calculated that value as IOPS / log segments * 5 seconds.
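As an aside, that back-of-envelope rule can be sketched in a few lines of Python. The IOPS figure and segment count below are hypothetical illustrations, not numbers from the original post:

```python
def events_per_segment(iops: float, log_segments: int, window_s: float = 5.0) -> int:
    """Estimate mds_log_events_per_segment as IOPS / log segments * window.

    A sketch of the rule of thumb quoted above; all inputs are assumptions.
    """
    return round(iops / log_segments * window_s)

# Hypothetical: ~49k metadata IOPS spread over 30 journal segments
# lands close to the 8192 figure mentioned in the thread.
print(events_per_segment(49152, 30))  # -> 8192
```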
I’m sorry for my late reply.
Thank you for your reply.
Yes, this error only occurs when the backend is XFS;
ext4 does not trigger it.
> On Mar 12, 2018, at 6:31 PM, Peter Woodman wrote:
>
> from what i've heard, xfs has problems on arm. use btrfs, or (i
> believe?) ext4+bluestore
> On Mar 12, 2018, at 9:49 AM, Christian Wuerdig wrote:
>
> Hm, so you're running OSD nodes with 2GB of RAM and 2x10TB = 20TB of storage?
> Literally everything posted on this list in relation to HW requirements and
> related problems will tell you that this simply isn't
On 03/20/2018 01:33 PM, Robert Stanford wrote:
Hello,
Does object expiration work on indexless (blind) buckets?
Thank you
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
No.
@Pavan, I did not know about 'filestore split rand factor'. That looks
like it was added in Jewel and I must have missed it. To disable it, would
I just set it to 0 and restart all of the OSDs? That isn't an option at
the moment, but restarting the OSDs after this backfilling is done is
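For reference, a sketch of how that might look in ceph.conf — this simply mirrors the question above and is unverified; check your Ceph release's documentation for whether 0 actually disables the randomized splitting:

```ini
[osd]
# hypothetical: disable the randomized filestore split offset,
# as proposed in the question above (unverified)
filestore split rand factor = 0
```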
Hi Paul,
Many thanks for the replies. I actually did (1) and it worked perfectly; I was
also able to reproduce this via a test monitor.
I have updated the bug with all of this info so hopefully no one hits this
again.
Many thanks.
Warren
From: Paul Emmerich
I wanted to report an update.
We added more Ceph storage nodes so we could take the problem OSDs out, and
speeds are faster now.
I found a way to monitor OSD latency in Ceph, using "ceph pg dump osds".
The commit latency (fs_perf_stat/commit_latency_ms) is always "0" for us,
but the apply latency shows us
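A minimal sketch of pulling those two fields out of the JSON form of that command (`ceph pg dump osds -f json`). The sample document below is hand-made to mirror the fs_perf_stat path named above, not real cluster output:

```python
import json

# Hand-made sample mirroring the shape of "ceph pg dump osds -f json";
# real output carries many more fields per OSD.
raw = """
[
  {"osd": 0, "fs_perf_stat": {"commit_latency_ms": 0, "apply_latency_ms": 37}},
  {"osd": 1, "fs_perf_stat": {"commit_latency_ms": 0, "apply_latency_ms": 12}}
]
"""

osd_stats = json.loads(raw)
for entry in osd_stats:
    stat = entry["fs_perf_stat"]
    print(f"osd.{entry['osd']}: commit={stat['commit_latency_ms']}ms, "
          f"apply={stat['apply_latency_ms']}ms")
```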
Hi all,
Here's the output of 'rados df' for one of our clusters (Luminous 12.2.2):
POOL_NAME USED   OBJECTS  CLONES COPIES    MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS    RD     WR_OPS    WR
ec_pool   75563G 19450232 0      116701392 0                  0       0        385351922 27322G 800335856 294T
rbd       42969M 10881    0      32643     0                  0       0        615060980 14767G 970301192 207T
rbdssd    252G   65446    0      196338    0                  0       0        29392480  1581G  211205402 2601G
Good evening everyone.
My Ceph is cross-compiled and runs on an armv7l 32-bit development board. The
Ceph version is 10.2.3 and the compiler version is 6.3.0.
After I placed an object in the RADOS cluster, I scrubbed the object manually.
At that point, the primary OSD crashed.
Here is the osd log:
ceph
On Tue, Mar 20, 2018 at 3:27 AM, James Poole wrote:
> I have a query regarding cephfs and preferred number of clients. We are
> currently using luminous cephfs to support storage for a number of web
> servers. We have one file system split into folders, example:
>
>
I have a query regarding cephfs and preferred number of clients. We are
currently using luminous cephfs to support storage for a number of web
servers. We have one file system split into folders, example:
/vol1
/vol2
/vol3
/vol4
At the moment the
That is great news!
Thanks,
Ovidiu
On 03/19/2018 10:44 AM, Gregory Farnum wrote:
Maybe (likely?) in Mimic. Certainly the next release.
Some code has been written but the reason we haven’t done this before
is the number of edge cases involved, and it’s not clear how long
rounding those off
Hi,
Agreed, but the packages built for stretch do depend on the library.
I had the wrong Debian version in my sources.list :-(
Thanks for looking into it.
Micha Krause