Not sure what version of glibc Wheezy has, but try to make sure you have
one that supports syncfs (you'll also need a semi-new kernel, 3.0+
should be fine).
Hi, glibc from wheezy doesn't have syncfs support.
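FWIW, even where the wheezy glibc lacks the syncfs() wrapper, a 3.0+ kernel
can still be probed for the syscall directly via syscall(2). A rough,
untested sketch (the fd just needs to live on the filesystem you care about):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        /* any fd on the target filesystem will do */
        int fd = open("/", O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
#ifdef SYS_syncfs
        /* call the raw syscall so a missing glibc wrapper doesn't matter */
        if (syscall(SYS_syncfs, fd) == 0)
                printf("syncfs: supported by the running kernel\n");
        else
                printf("syncfs: failed: %s\n", strerror(errno));
#else
        printf("SYS_syncfs is not defined in these kernel headers\n");
#endif
        close(fd);
        return 0;
}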
----- Original Message -----
From: Mark Nelson mark.nel...@inktank.com
To: Denis Fondras
Hi All,
Yes, I'm asking the impossible question: what is the best hardware
config.
I'm looking at (possibly) using ceph as backing store for images and
volumes on OpenStack as well as exposing at least the object store for
direct use.
The OpenStack cluster exists and is currently in the early
Hi,
On 08/22/2012 03:55 PM, Jonathan Proulx wrote:
Hi All,
Yes, I'm asking the impossible question: what is the best hardware
config.
I'm looking at (possibly) using ceph as backing store for images and
volumes on OpenStack as well as exposing at least the object store for
direct use.
The
On 08/22/2012 03:10 AM, Sage Weil wrote:
I pushed a branch that changes some of the crush terminology. Instead of
having a crush type called pool that requires you to say things like
pool=default in the ceph osd crush set ... command, it uses root
instead. That hopefully reinforces that
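FWIW, as I read it the change only renames the bucket type in commands like
the one above, e.g. (osd id, weight and hierarchy are made up here, and I may
be misremembering the exact argument order):

# before: bucket type called "pool"
ceph osd crush set 12 osd.12 1.0 pool=default rack=unknownrack host=node3
# after: same command, but "root"
ceph osd crush set 12 osd.12 1.0 root=default rack=unknownrack host=node3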
On Wed, Aug 22, 2012 at 9:23 AM, Denis Fondras c...@ledeuns.net wrote:
Are you sure your osd data and journal are on the disks you think? The
/home paths look suspicious -- especially for journal, which often
should be a block device.
I am :)
...
-rw-r--r-- 1 root root 1048576000 Aug 22
Hi Linus,
Please pull the following Ceph fixes for -rc3 from
git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-linus
Jim's fix closes a narrow race introduced with the msgr changes. One fix
resolves problems with debugfs initialization that Yan found when multiple
On Wed, Aug 22, 2012 at 9:33 AM, Sage Weil s...@inktank.com wrote:
On Wed, 22 Aug 2012, Atchley, Scott wrote:
On Aug 22, 2012, at 10:46 AM, Florian Haas wrote:
On 08/22/2012 03:10 AM, Sage Weil wrote:
I pushed a branch that changes some of the crush terminology. Instead of
having a crush
What rbd block size were you using?
-Sam
On Tue, Aug 21, 2012 at 10:29 PM, Andreas Bluemle
andreas.blue...@itxperts.de wrote:
Hi,
Samuel Just wrote:
Was the cluster completely healthy at the time that those traces were taken?
If there were osds going in/out/up/down, it would trigger osdmap
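For context, the rbd block size is fixed when the image is created, via
--order (object size = 2^order bytes, default 22, i.e. 4 MB objects); a
made-up example:

# 10 GB image with the default 4 MB objects; --order 25 would give 32 MB objects
rbd create --size 10240 --order 22 testimage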
On Wed, Aug 22, 2012 at 06:29:12PM +0200, Tommi Virtanen wrote:
(...)
Your journal is a file on a btrfs partition. That is probably a bad
idea for performance. I'd recommend partitioning the drive and using
partitions as journals directly.
Hi Tommi,
can you please teach me how to use the
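FWIW, assuming the partition already exists, pointing an osd at it should
just be a ceph.conf change plus recreating the journal (device name and
paths below are placeholders, not from this thread):

[osd.0]
    osd data = /home/ceph/osd.0
    # journal on a raw partition instead of a file on btrfs
    osd journal = /dev/sdb1

then, with the osd stopped, something like 'ceph-osd -i 0 --mkjournal' to
initialize the new journal.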
On Thu, 23 Aug 2012, Andrey Korolyov wrote:
Hi,
today during a heavy test a pair of osds and one mon died, resulting in a
hard lockup of some kvm processes - they became unresponsive and were
killed, leaving zombie processes ([kvm] defunct). The entire cluster
contains sixteen osds on eight nodes and
On Thu, Aug 23, 2012 at 2:33 AM, Sage Weil s...@inktank.com wrote:
On Thu, 23 Aug 2012, Andrey Korolyov wrote:
Hi,
today during a heavy test a pair of osds and one mon died, resulting in a
hard lockup of some kvm processes - they became unresponsive and were
killed, leaving zombie processes ([kvm]
On Wed, Aug 22, 2012 at 12:12 PM, Dieter Kasper (KD)
d.kas...@kabelmail.de wrote:
Your journal is a file on a btrfs partition. That is probably a bad
idea for performance. I'd recommend partitioning the drive and using
partitions as journals directly.
can you please teach me how to use the
The tcmalloc backtrace on the OSD suggests this may be unrelated, but
what's the fd limit on your monitor process? You may be approaching
that limit if you've got 500 OSDs and a similar number of clients.
On Wed, Aug 22, 2012 at 6:55 PM, Andrey Korolyov and...@xdel.ru wrote:
On Thu, Aug 23, 2012
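FWIW, besides raising ulimit -n for the process, ceph.conf has (if I
remember right) a global option that makes the daemons raise their own fd
limit at startup; the value here is arbitrary:

[global]
    max open files = 131072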
On 22/08/12 22:24, David McBride wrote:
On 22/08/12 09:54, Denis Fondras wrote:
* Test with dd from the client using CephFS :
# dd if=/dev/zero of=testdd bs=4k count=4M
17179869184 bytes (17 GB) written, 338,29 s, 50,8 MB/s
Again, the synchronous nature of 'dd' is probably severely affecting
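A quick sanity check, purely as an illustration, would be to rerun with
larger writes and an explicit flush, so the figure isn't dominated by
per-4k round trips:

# same 17 GB total, but 4 MB writes, fdatasync'd before dd reports the rate
dd if=/dev/zero of=testdd bs=4M count=4096 conv=fdatasync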