Thanks for the replies.
I'll move all our testbed installation to Luminous and redo the tests.
Cheers,
Tyanko
On 17 October 2017 at 10:14, Yan, Zheng <uker...@gmail.com> wrote:
> On Tue, Oct 17, 2017 at 1:07 AM, Tyanko Aleksiev
> <tyanko.alex...@gmail.com> wrote:
Hi,
At UZH we are currently evaluating cephfs as a distributed file system
for the scratch space of an HPC installation. A slowdown of metadata
operations seems to occur under certain circumstances. In particular,
commands issued after some big file deletion could take several
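(For what it's worth, here is a minimal, filesystem-agnostic sketch of how such post-deletion metadata latency could be measured; the file count and directory layout are purely illustrative and not taken from the original report. On CephFS the interesting number would come from pointing this at the mounted scratch space.)

```python
import os
import shutil
import tempfile
import time

def time_listdir_after_bulk_delete(n_files=1000):
    """Create n_files small files, delete them all at once, then time a
    metadata operation (listdir) on the parent directory."""
    base = tempfile.mkdtemp()
    work = os.path.join(base, "work")
    os.makedirs(work)
    for i in range(n_files):
        with open(os.path.join(work, f"f{i}"), "w") as fh:
            fh.write("x")
    shutil.rmtree(work)               # the "big deletion"
    t0 = time.monotonic()
    os.listdir(base)                  # metadata op issued right after
    elapsed = time.monotonic() - t0
    shutil.rmtree(base)               # clean up the scratch directory
    return elapsed

print(f"listdir after bulk delete took {time_listdir_after_bulk_delete():.4f}s")
```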
Hi Brian,
On 14 February 2017 at 19:33, Brian Andrus <brian.and...@dreamhost.com>
wrote:
>
>
> On Tue, Feb 14, 2017 at 5:27 AM, Tyanko Aleksiev <tyanko.alex...@gmail.com
> > wrote:
>
>> Hi Cephers,
>>
>> At University of Zurich we are using Ce
Hi Cephers,
At University of Zurich we are using Ceph as a storage back-end for our
OpenStack installation. Since we recently reached 70% occupancy
(mostly caused by the cinder pool, served by 16384 PGs) we are in the
phase of extending the cluster with additional storage nodes of the same
type
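(As an aside, the 16384 figure is consistent with the usual rule of thumb for sizing pg_num: roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two. A rough sketch of that guideline, with the OSD count below chosen only to illustrate the arithmetic:)

```python
def suggested_pg_count(num_osds, target_pgs_per_osd=100, replicas=3):
    """Rule-of-thumb PG count: (OSDs * target_per_osd) / replicas,
    rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# A cluster of ~400 OSDs with 3x replication lands on 16384:
print(suggested_pg_count(400))
```

Note that pg_num can be increased but not decreased on pre-Nautilus releases, so it pays to account for planned expansion when choosing it.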