James Harper wrote:
> Hi James,
> Do you have VLAN interfaces configured on your bonding interfaces? I saw
> a similar situation in my setup.
No VLANs on my bonding interface, although they are used extensively elsewhere.
What the OP described is *exactly* like a problem I've been struggling
with.
Alex Elsayed wrote:
Since Btrfs has implemented raid5/6 support (meaning raidz is only a feature
gain if you want 3x parity, which is unlikely to be useful for an OSD[1]),
the checksumming may be the only real benefit, since it supports sha256 (in
addition to the non-cryptographic fletcher2/fletcher4)…
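
To make the distinction concrete, here is a rough Python sketch contrasting a
fletcher4-style rolling checksum with sha256. The fletcher4 here is a
simplified illustration of the general shape, not a byte-compatible port of
ZFS's implementation:

import hashlib

def fletcher4ish(data: bytes):
    """Simplified fletcher4-style checksum: four cascading 64-bit sums
    over 32-bit little-endian words (illustrative, not ZFS-compatible)."""
    a = b = c = d = 0
    # Pad to a multiple of 4 bytes so we consume whole words.
    padded = data + b"\x00" * (-len(data) % 4)
    for i in range(0, len(padded), 4):
        word = int.from_bytes(padded[i:i + 4], "little")
        a = (a + word) & 0xFFFFFFFFFFFFFFFF
        b = (b + a) & 0xFFFFFFFFFFFFFFFF
        c = (c + b) & 0xFFFFFFFFFFFFFFFF
        d = (d + c) & 0xFFFFFFFFFFFFFFFF
    return a, b, c, d

block = b"some block contents"
print("fletcher4-ish:", fletcher4ish(block))
print("sha256:       ", hashlib.sha256(block).hexdigest())

The fletcher family is fast but non-cryptographic: collisions can be
constructed deliberately, whereas sha256 also guards against malicious
modification.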
> On 04/17/2013 02:09 PM, Stefan Priebe wrote:
>>
>> Sorry to disturb, but what is the reason / advantage of using zfs for
>> ceph?
A few things off the top of my head:
1) Very mature filesystem with full xattr support (this bug
notwithstanding) and copy-on-write snapshots. While the port to Linux…
Henry C Chang wrote:
> I looked into this problem earlier. The problem is that zfs does not
> return ERANGE when the size of the value buffer passed to getxattr is
> too small; zfs returns a truncated xattr value instead.
Is this a bug in ZFS, or simply different behavior?
I've used ZFSonLinux quite a bit and…
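
For reference, the contract in question: callers probe getxattr(2) for the
value size, then expect ERANGE if the value no longer fits the buffer they
pass. A minimal Linux-only Python sketch via ctypes (the path and attribute
name in the usage comment are hypothetical):

import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.getxattr.restype = ctypes.c_ssize_t

def getxattr_checked(path: bytes, name: bytes) -> bytes:
    # Probe with a zero-sized buffer: returns the value length, copies nothing.
    size = libc.getxattr(path, name, None, ctypes.c_size_t(0))
    if size < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    buf = ctypes.create_string_buffer(size)
    n = libc.getxattr(path, name, buf, ctypes.c_size_t(size))
    if n < 0:
        # A conforming filesystem reports ERANGE here if the value grew
        # past our buffer; the ZFS behavior described above would instead
        # hand back a silently truncated value with no error.
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return buf.raw[:n]

# Hypothetical usage:
# print(getxattr_checked(b"/var/lib/ceph/osd/current/obj", b"user.ceph._"))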
Dimitri Maziuk wrote:
> (Last I looked "?op=create&poolname=foo" was the Old Busted CGI, The New
> Shiny Hotness(tm) was supposed to look like "/create/foo" -- and I never
> understood how the optional parameters are supposed to work. But that's
> beside the point.)
They're different. One is using the…
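
To put the two styles side by side (the /api prefix and the pg_num parameter
are hypothetical, just to show where optional arguments usually end up):

from urllib.parse import urlencode

# Query-parameter style: the operation and its arguments all ride in
# the query string.
query_style = "/api?" + urlencode({"op": "create", "poolname": "foo"})

# Path style: operation and resource name become path segments; optional
# parameters typically fall back to the query string anyway.
path_style = "/create/foo?" + urlencode({"pg_num": 128})

print(query_style)  # /api?op=create&poolname=foo
print(path_style)   # /create/foo?pg_num=128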
On Tue, Jan 22, 2013 at 7:25 PM, Josh Durgin wrote:
> On 01/22/2013 01:58 PM, Jeff Mitchell wrote:
>>
>> I'd be interested in figuring out the right way to migrate an RBD from
>> one pool to another regardless.
>
>
> Each way involves copying data, since by definition…
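
One userspace way to do that copy, sketched with the librbd Python bindings
(pool and image names are hypothetical, error handling elided):

import rados
import rbd

# Connect using the default ceph.conf and keyring.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    src = cluster.open_ioctx("oldpool")      # hypothetical source pool
    dst = cluster.open_ioctx("newpool")      # hypothetical destination pool
    with rbd.Image(src, "myimage") as image:
        # Full block-for-block copy into the destination pool; note that
        # snapshots on the source image are not carried over.
        image.copy(dst, "myimage")
    src.close()
    dst.close()
finally:
    cluster.shutdown()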
Mark Nelson wrote:
It may (or may not) help to use a power-of-2 number of PGs. It's
generally a good idea to do this anyway, so if you haven't set up your
production cluster yet, you may want to play around with this. Basically
just take whatever number you were planning on using and round it up to the
next power of two.
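
The usual rationale: Ceph maps objects to PGs by masking a hash, so with a
non-power-of-two PG count some PGs end up covering roughly twice the hash
space of others. The rounding itself is a one-liner; a quick sketch:

def next_power_of_two(n: int) -> int:
    """Round a proposed PG count up to the nearest power of two."""
    if n <= 0:
        raise ValueError("PG count must be positive")
    return 1 << (n - 1).bit_length()

print(next_power_of_two(100))  # planning on ~100 PGs -> use 128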
Wido den Hollander wrote:
Now, running just one Varnish instance that does loadbalancing over
multiple RGW instances is not a real problem. When it sees a PUT
operation it can "purge" (called banning in Varnish) the object from
its cache.
When looking at the scenario where you have multiple…
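
A minimal sketch of that invalidation step, assuming the cache has been
configured to accept a PURGE method (hostname and path are hypothetical):

import http.client

def purge_from_cache(cache_host: str, object_path: str) -> int:
    """Ask a cache to drop an object, e.g. right after a PUT rewrites it."""
    conn = http.client.HTTPConnection(cache_host, 80, timeout=5)
    try:
        # PURGE is not a standard HTTP method; Varnish only honors it if
        # its VCL explicitly translates it into a purge/ban.
        conn.request("PURGE", object_path)
        return conn.getresponse().status
    finally:
        conn.close()

print(purge_from_cache("cache1.example.com", "/mybucket/myobject"))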
Wido den Hollander wrote:
One thing is still having multiple Varnish caches and object banning. I
proposed something for this some time ago: some "hook" in RGW you could
use to inform an upstream cache to "purge" something from its cache.
Hopefully not Varnish-specific; something like the Last-…
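
In that cache-agnostic spirit, such a hook might be nothing more than a set
of registered callbacks that RGW invokes after every mutating operation.
This is a hypothetical sketch of the shape, not an existing RGW interface:

from typing import Callable, Iterable

# Each deployment registers whatever invalidation transport it uses
# (HTTP PURGE, Varnish ban, a message queue, ...); RGW itself stays
# ignorant of which caches sit upstream.
InvalidateHook = Callable[[str], None]

def notify_caches(hooks: Iterable[InvalidateHook], object_path: str) -> None:
    for hook in hooks:
        hook(object_path)

# Hypothetical usage, fanning out to two caches via the PURGE helper above:
# from functools import partial
# notify_caches([partial(purge_from_cache, h) for h in
#                ("cache1.example.com", "cache2.example.com")],
#               "/mybucket/myobject")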
Wido den Hollander wrote:
> No, not really. 1Gbit should be more than enough for your monitors. 3
> monitors should also be good. No need to go for 5 or 7.
I have 5 monitors across 16 different OSD-hosting machines... is that
going to *harm* anything?
(I have had issues in my cluster when doing…
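
For context on the 3-vs-5 question: monitors form a Paxos majority quorum,
so N monitors need floor(N/2) + 1 of them up and can lose floor((N-1)/2).
A quick illustration:

def failures_tolerated(n_monitors: int) -> int:
    """Monitors that can fail while a majority quorum survives."""
    return (n_monitors - 1) // 2

for n in (1, 3, 5, 7):
    print("%d mons: quorum %d, tolerates %d failure(s)"
          % (n, n // 2 + 1, failures_tolerated(n)))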
Sage Weil wrote:
On Sun, 20 Jan 2013, Peter Smith wrote:
> Thanks for the reply, Sage and everyone.
> Sage, so I can expect Ceph-rbd to work well on CentOS 6.3 if I only use
> it as the Cinder volume backend, because the librbd in QEMU doesn't
> make use of the kernel client, right?
> Then the dependency is on…
FWIW, my ceph data dirs (for e.g. mons) are all on XFS. I've
experienced a lot of corruption on these on power loss to the node --
and in some cases even when power wasn't lost, and the box was simply
rebooted. This is on Ubuntu 12.04 with the ceph-provided 3.6.3 kernel
(as I'm using RBD on these).