Hi,
we are using CephFS on a Ceph cluster (v0.94.5; 3x MON, 1x MDS, ~50x OSD).
Recently, we observed a spontaneous (and unwanted) change in the access
rights of newly created directories:
$ umask
0077
$ mkdir test
$ ls -ld test
drwx------ 1 me me 0 Jan 6 14:59 test
$ touch test/foo
$ ls -ld
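For reference, POSIX derives the mode of a newly created file or directory as the requested mode with the umask bits cleared, so with umask 0077 a plain mkdir (which requests 0777) should come out as 0700, i.e. drwx------ as shown above. A minimal sketch of that arithmetic (effective_mode is an illustrative helper name, not part of any real API):

```cpp
#include <cassert>

// Sketch of the POSIX rule: the effective mode of a new file or directory
// is the requested mode with the process umask bits cleared.
// effective_mode is a hypothetical helper, for illustration only.
inline unsigned effective_mode(unsigned requested, unsigned umask_bits) {
  return requested & ~umask_bits;
}
```

For example, effective_mode(0777, 0077) yields 0700, matching the drwx------ shown by ls above; any other result on a fresh directory would point at something overriding the umask.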
Hi all,
When I ran a randomrw test on my cluster with filebench (running Ceph
0.94.5), one of the OSDs was marked down, but I could still see the OSD
process with the ps command.
So I checked the log file and found the following message:
> 2016-01-07
Guys, I want your opinion regarding two features, implemented in an attempt
to greatly reduce the number of memory allocations without major surgery in
the code.
The features are:
1. A custom STL allocator which allocates the first N items from within the
STL container itself. This is a semi-transparent replacement of
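As far as the description above goes, the first feature could be sketched roughly like this (a hedged illustration under my own assumptions, not the actual patch; the name inline_alloc is invented here):

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <vector>

// Hedged sketch of feature 1 as described above: an STL-compatible allocator
// whose first allocation of up to N elements is served from storage embedded
// in the allocator object itself, falling back to the heap otherwise.
template <typename T, std::size_t N>
class inline_alloc {
  alignas(T) unsigned char buf_[N * sizeof(T)];
  bool in_use_ = false;

 public:
  using value_type = T;
  template <typename U>
  struct rebind { using other = inline_alloc<U, N>; };

  inline_alloc() noexcept {}
  // Rebound/converted copies start with their own fresh buffer; a production
  // version must also handle container copy/move/swap semantics carefully.
  template <typename U>
  inline_alloc(const inline_alloc<U, N>&) noexcept {}

  T* allocate(std::size_t n) {
    if (!in_use_ && n <= N) {
      in_use_ = true;
      return reinterpret_cast<T*>(buf_);  // served inline, no heap allocation
    }
    return static_cast<T*>(::operator new(n * sizeof(T)));
  }

  void deallocate(T* p, std::size_t) noexcept {
    if (p == reinterpret_cast<T*>(buf_)) {
      in_use_ = false;  // inline buffer becomes reusable
      return;
    }
    ::operator delete(p);
  }
};

// Demo: a vector whose first reservation fits in the inline buffer, then
// grows past N and transparently falls back to operator new.
inline std::size_t inline_alloc_demo() {
  std::vector<int, inline_alloc<int, 8>> v;
  v.reserve(8);                      // served from the inline buffer
  for (int i = 0; i < 8; ++i) v.push_back(i);
  v.reserve(64);                     // exceeds N: heap allocation, buffer freed
  return v.size();
}
```

The tricky part with any such stateful allocator is container copy/move/swap: a copied container must never end up holding a pointer into another object's inline buffer, which is presumably why the real implementation is only "semi-transparent".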
On Thu, 7 Jan 2016, Javen Wu wrote:
> Hi Sage,
>
> Sorry to bother you. I am not sure whether it is appropriate to email you
> directly, but I could not find any useful information to address my
> confusion on the Internet. I hope you can help me.
>
> I happened to hear that you are going to
Thanks, Sage, for your reply.
I am not sure I understand the challenges you mentioned about
backfill/scrub.
I will investigate the code and let you know whether we can overcome the
challenge by simple means.
Our rough ideas for ZFSStore are:
1. encapsulate the dnode object as an onode and add onode
Hi Sage,
thanks for your quick response. Javen and I were once ZFS developers, and we
are currently focusing on how to leverage some of the ZFS ideas to improve
Ceph backend performance in userspace.
Based on your encouraging reply, we have come up with two schemes to continue
our future work:
1. the
In http://download.ceph.com/tarballs/ there are two tarballs:
"ceph_10.0.1.orig.tar.gz" and "ceph_10.0.1.orig.tar.gz.1"
Which one is correct? Can we delete one?
- Ken
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org