On Wed, Jul 19, 2017 at 7:09 PM, Gregory Farnum wrote:
>
>
> On Wed, Jul 19, 2017 at 10:25 AM David wrote:
>
>> On Tue, Jul 18, 2017 at 6:54 AM, Blair Bethwaite
>> <blair.bethwa...@gmail.com> wrote:
>>
>>> We are a data-intensive university, with an increasingly large fleet ...
Hi,
While not necessarily CephFS specific, we somehow seem to manage to
frequently end up with objects that have inconsistent omaps. This seems to
be a replication issue (anecdotally it's a replica that ends up diverging,
and at least a few times it's something that happened after the OSD that
held th
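For context, a rough sketch of how omap inconsistencies like these are
typically located and inspected; the pool name and pg id below are
placeholders, and "ceph pg repair" should only be run after confirming the
primary actually holds the good copy:

    ceph health detail                         # shows which PGs are inconsistent
    rados list-inconsistent-pg <pool>          # PG ids with scrub errors in a pool
    rados list-inconsistent-obj <pgid> --format=json-pretty
                                               # per-object detail, incl. omap digest mismatches
    ceph pg repair <pgid>                      # rewrite the divergent copy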
Thanks Greg. I thought it was impossible when I reported 34MB for 52
million files.
On Jul 19, 2017 1:17 PM, "Gregory Farnum" wrote:
>
>
> On Wed, Jul 19, 2017 at 10:25 AM David wrote:
>
>> On Tue, Jul 18, 2017 at 6:54 AM, Blair Bethwaite
>> <blair.bethwa...@gmail.com> wrote:
>>
>>> We are a data-intensive university ...
On Wed, Jul 19, 2017 at 10:25 AM David wrote:
> On Tue, Jul 18, 2017 at 6:54 AM, Blair Bethwaite
> <blair.bethwa...@gmail.com> wrote:
>
>> We are a data-intensive university, with an increasingly large fleet
>> of scientific instruments ...
On Tue, Jul 18, 2017 at 6:54 AM, Blair Bethwaite wrote:
> We are a data-intensive university, with an increasingly large fleet
> of scientific instruments capturing various types of data ...
On Wed, Jul 19, 2017 at 4:47 AM, 许雪寒 wrote:
> Is there anyone else willing to share some usage information of cephfs?
>
I look after two CephFS deployments, both Jewel, in production since
Jewel went stable, so just over a year I think. We've had a really positive
experience; I've not experien
I got it, thank you☺
From: Дмитрий Глушенок [mailto:gl...@jet.msk.su]
Sent: July 19, 2017 18:20
To: 许雪寒
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How's cephfs going?
You're right. Forgot to mention that the client was using kernel 4.9.9.
On July 19, 2017 at 12:36, 许雪寒 wrote:
Hi, thanks
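Since the kernel client version keeps coming up: a quick way to check what
each connected client is running, assuming a reasonably recent MDS (the mds
name is a placeholder; kernel clients report a kernel_version field in their
session metadata):

    ceph daemon mds.<name> session ls   # client sessions; client_metadata includes kernel_version
    uname -r                            # run on the client itself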
Hi, thanks for sharing:-)
So I guess you have not put cephfs into a real production environment, and it's
still in the test phase, right?
Thanks again:-)
From: Дмитрий Глушенок [mailto:gl...@jet.msk.su]
Sent: July 19, 2017 17:33
To: 许雪寒
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How's cephfs going?
Is there anyone else willing to share some usage information of cephfs?
Could the developers tell us whether cephfs is a major focus of overall Ceph
development?
From: 许雪寒
Sent: July 17, 2017 11:00
To: ceph-users@lists.ceph.com
Subject: How's cephfs going?
Hi, everyone.
We intend to use cephfs of the Jewel version
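For reference, standing up a basic Jewel-era CephFS is only a few commands;
the pool names and PG counts below are illustrative, not a sizing
recommendation:

    ceph osd pool create cephfs_data 128
    ceph osd pool create cephfs_metadata 128
    ceph fs new cephfs cephfs_metadata cephfs_data
    ceph fs ls
    # kernel-client mount; monitor address and key are placeholders
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secret=<key>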
We have a cephfs data pool with 52.8M files stored in 140.7M objects. That
translates to a metadata pool size of 34.6MB across 1.5M objects.
On Jul 18, 2017 12:54 AM, "Blair Bethwaite" wrote:
> We are a data-intensive university, with an increasingly large fleet
> of scientific instruments ...
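Figures like David's are easy to pull from any cluster; a quick sketch, with
the mds name as a placeholder:

    ceph df detail                     # per-pool USED and OBJECTS, incl. the metadata pool
    rados df                           # the same counts from the rados side
    ceph daemon mds.<name> perf dump   # MDS counters (inodes, strays, etc.)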
We are a data-intensive university, with an increasingly large fleet
of scientific instruments capturing various types of data (mostly
imaging of one kind or another). That data typically needs to be
stored, protected, managed, shared, connected/moved to specialised
compute for analysis. Given the
No problem. We are a functional MRI research institute. We have a fairly
mixed workload, but I can tell you that we see 60+ Gbps of throughput when
multiple clients are reading sequentially from large files (1+ GB) with 1-4MB
block sizes. IO involving small files and small block sizes is not very
good
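A hedged sketch of how one might reproduce that kind of test with fio; the
mount point, sizes, and job count are made-up examples, not Brady's actual
settings:

    fio --name=seqread --directory=/mnt/cephfs --rw=read \
        --bs=4M --size=4G --numjobs=8 --direct=1 --group_reporting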
Thanks, sir☺
You have really been a lot of help☺
May I ask what kind of business you are using cephFS for? What's the IO
pattern:-)
If answering this would involve any business secrets, I really understand if
you don't answer:-)
Thanks again:-)
From: Brady Deetz [mailto:bde...@gmail.com]
Sent: July 2017
Hi, thanks for the advice:-)
By the way, may I ask what kind of business you are using cephFS for? What's
the IO pattern of that business? And which version of Ceph are you using? If
this involves any business secrets, it's completely understandable if you
don't answer:-)
Thanks again for the help:-)
---
I work at Monash University. We are using active-standby MDS. We don't
yet have it in full production, as we need some of the newer Luminous
features before we can roll it out more broadly; however, we are moving
towards letting a subset of users on (just slowly ticking off related
work like putting
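For anyone wondering what active-standby involves: any additional ceph-mds
daemon registers as a standby automatically, and standby-replay can be
requested per daemon in ceph.conf (the section name and rank below are
placeholders):

    [mds.b]
    mds standby replay = true      # follow the active MDS journal for faster failover
    mds standby for rank = 0

    ceph mds stat                  # e.g. "1/1/1 up {0=a=up:active}, 1 up:standby"
    ceph fs status                 # Luminous and later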
Hi, thanks for the quick reply:-)
May I ask which company you are at? I'm asking this because we are collecting
cephfs usage information as the basis for our decision about whether to use
cephfs. And also, how are you using it? Are you using single-mds, the
so-called active-standby mode? And