Hi Marc,
let me add Peter; he can probably answer your question.
Danny
On 13.09.19 at 10:13, Marc Roos wrote:
>
>
> How do I actually configure dovecot to use ceph for a mailbox? I have
> built the plugins as mentioned here[0]
>
> - but where do I copy/load what module?
> - can I
Hi,
you can find the slides here:
https://dalgaaf.github.io/Cephalocon-Barcelona-librmb/
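To the configuration question above: a minimal dovecot.conf sketch. The mailbox format name (rbox), the plugin name, and all the plugin settings below are assumptions based on my reading of the plugin repository; verify every key against the plugin's own README before using it.

```
# Hypothetical dovecot.conf excerpt -- all names here are assumptions,
# check the dovecot-ceph-plugin README for the exact values.

# Store mail in the Ceph-backed "rbox" format instead of mdbox/maildir
mail_location = rbox:~/rbox

# Load the compiled plugin (the .so must be copied into Dovecot's
# module directory, e.g. /usr/lib/dovecot/modules)
mail_plugins = $mail_plugins storage_rbox

plugin {
  # RADOS connection settings -- key names and values are placeholders
  rbox_pool_name = mail_storage
  rbox_cluster_name = ceph
}
```

The general pattern is the usual Dovecot one: pick the mailbox format via mail_location, load the module via mail_plugins, and put backend-specific settings in a plugin {} block.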
And Wido is right: it's not production-ready, and we still have some work
ahead to make it perform acceptably, especially at our scale.
If you have any questions don't hesitate to contact me.
Hi,
some time back we had similar discussions when we, as an email provider,
discussed moving away from traditional NAS/NFS storage to Ceph.
The problem with POSIX file systems and Dovecot is that, e.g. with mdbox,
only around ~20% of the IO operations are READ/WRITE; the rest are
metadata IOs.
n Emrich wrote:
>>>
>>>> I just want to thank all organizers and speakers for the awesome Ceph
>>>> Day at Darmstadt, Germany yesterday.
>>>>
>>>> I learned of some cool stuff I'm eager to try out (NFS-Ganesha for
>>> RGW,
>>>
In Sydney at the OpenStack Summit Sage announced a Cephalocon for
2018.03.22-23 in Beijing (China).
Danny
On 12.10.2017 at 13:02, Matthew Vernon wrote:
> Hi,
>
> The recent FOSDEM CFP reminded me to wonder if there's likely to be a
> Cephalocon in 2018? It was mentioned as a possibility when
On 25.09.2017 at 10:00, Marc Roos wrote:
>
>
> But from the looks of this dovecot mailing list post, you didn't start
> your project by talking to the dovecot guys, or have ongoing
> communication with them during the development. I would think that
> their experience could be a
On 25.09.2017 at 09:00, Marc Roos wrote:
>
> From the looks of it, too bad the efforts could not be
> combined/coordinated; that seems to be an issue with many open source
> initiatives.
That's not right: the plan is to contribute the librmb code to the Ceph
project and the Dovecot part back to the Dovecot project.
On 22.09.2017 at 23:56, Gregory Farnum wrote:
> On Fri, Sep 22, 2017 at 2:49 PM, Danny Al-Gaaf <danny.al-g...@bisect.de>
> wrote:
>> On 22.09.2017 at 22:59, Gregory Farnum wrote:
>> [..]
>>> This is super cool! Is there anything written down that explains this
On 22.09.2017 at 22:59, Gregory Farnum wrote:
[..]
> This is super cool! Is there anything written down that explains this
> for Ceph developers who aren't familiar with the workings of Dovecot?
> I've got some questions I see going through it, but they may be very
> dumb.
>
> *) Why are indexes
On 13.05.2017 at 21:28, Joao Eduardo Luis wrote:
> On 05/13/2017 09:06 AM, John Spray wrote:
>> On Fri, May 12, 2017 at 9:45 PM, Wido den Hollander
[...]
>>> Sad to hear, especially the reasoning behind it. But understandable!
>>>
>>> Let's move this event to Europe :-)
>>
>> My
, tiered, and globally distributed
storage platform with Ceph, Sage Weil, https://goo.gl/Q33K2e
- From Hardware to Application - NFV@OpenStack and Ceph, Danny Al-Gaaf,
https://goo.gl/uZZH4K
- Micro Storage Servers at multi-PetaByte scale running Ceph, Joshua
Johnson/Sage Weil, https://goo.gl
On 03.06.2014 20:55, Sushma R wrote:
Haomai,
I'm using the latest ceph master branch.
ceph_smalliobench is a Ceph-internal benchmarking tool similar to rados
bench, and the performance is more or less similar to that reported by fio.
I tried to use fio with rbd ioengine (
On 30.04.2014 14:18, Sage Weil wrote:
Today we are announcing some very big news: Red Hat is acquiring Inktank.
We are very excited about what this means for Ceph, the community, the
team, our partners, and our customers. Ceph has come a long way in the ten
years since the first line of
Hi,
On 07.03.2014 18:27, Michael J. Kidd wrote:
[...]
* I've not seen any documentation on each counter, aside from
occasional mailing list posts about specific counters.
[...]
One additional question: are these latency values in
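On the latency question: as far as I know, latency entries in a perf dump are reported as an avgcount/sum pair, where sum is the accumulated time in seconds and avgcount the number of operations, so the average per-op latency has to be computed from the two. A small sketch (the counter layout is the standard perf-counter one; the sample numbers are made up):

```python
def avg_latency(counter):
    """Average per-op latency from a perf-counter {avgcount, sum} pair.

    'sum' is the total accumulated time in seconds, 'avgcount' the
    number of operations it covers.
    """
    if counter["avgcount"] == 0:
        return 0.0
    return counter["sum"] / counter["avgcount"]

# Made-up sample resembling an OSD op-latency entry:
# 500 ops that together took 2.5 s
sample = {"avgcount": 500, "sum": 2.5}
print(avg_latency(sample))  # 0.005 -> 5 ms per op
```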
Hi,
On 28.02.2014 03:45, Haomai Wang wrote:
[...]
I use the fio with rbd support from
TelekomCloud (https://github.com/TelekomCloud/fio/commits/rbd-engine)
to test rbd.
I would recommend no longer using this branch; it's outdated. The rbd
engine was contributed back to upstream fio and is
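For reference, a minimal job file sketch for the upstream rbd engine. The pool and image names are placeholders, and the image has to exist before the run; check the fio documentation for the full option list.

```
; hypothetical rbd.fio -- pool/image names are placeholders
[rbd-4k-randwrite]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=randwrite
bs=4k
iodepth=32
time_based=1
runtime=60
```

Run with `fio rbd.fio`; the rbd engine talks to the cluster through librbd directly, so no kernel mapping of the image is needed.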
On 06.11.2013 15:05, Gautam Saxena wrote:
We're looking to deploy Ceph on about 8 Dell servers to start, each of
which typically contains 6 to 8 hard disks with PERC RAID controllers that
support write-back cache (~512 MB usually). Most machines have between 32
and 128 GB RAM. Our questions
Hi,
I've opened a pull request with some additional fixes for this issue:
https://github.com/ceph/ceph/pull/478
Danny
On 30.07.2013 09:53, Erik Logtenberg wrote:
Hi,
This patch adds two BuildRequires entries to the ceph.spec file that are
needed to build the RPMs under Fedora. Danny Al-Gaaf
Hi,
I think this is a bug in the packaging of the leveldb package in this case,
since the spec file already sets a dependency on leveldb-devel.
leveldb depends on snappy, therefore the leveldb package should set a
dependency on snappy-devel for leveldb-devel (check the SUSE spec file
for leveldb:
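The fix on the leveldb side would be a single Requires line in its spec file; a hypothetical excerpt (the subpackage layout is assumed, not taken from the actual package):

```
# Hypothetical leveldb.spec excerpt -- the -devel subpackage pulls in
# snappy-devel so that building against leveldb resolves snappy headers
%package devel
Summary:  Development files for leveldb
Requires: %{name} = %{version}-%{release}
Requires: snappy-devel
```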