Not sure about off_t. What are its min and max sizes?
Stefan
On 21.11.2012 at 18:03, Stefan Weil wrote:
> On 20.11.2012 at 13:44, Stefan Priebe wrote:
>> rbd / rados quite often returns the length of writes
>> or discarded blocks. These values might be bigger than int.
>>
> Signed-off-by: Stefan Priebe
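For reference, off_t is a signed integer type; with _FILE_OFFSET_BITS=64
(the default on 64-bit Linux) it is 64 bits wide, so the range is
-2^63 .. 2^63-1. A minimal C sketch to check, assuming a POSIX system:

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        /* Compute the maximum value of the signed off_t without overflow. */
        off_t max = (((off_t)1 << (sizeof(off_t) * 8 - 2)) - 1) * 2 + 1;
        printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));
        printf("max off_t     = %lld\n", (long long)max);
        return 0;
    }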
Still not certain I'm understanding *just* what you mean, but I'll point
out that you can set up a cluster with rbd images, mount them from a
separate non-virtualized host with kernel rbd, and expand those images
and take advantage of the newly-available space on the separate host,
just as though…
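A rough sketch of that workflow (pool/image names and sizes are made up, and
/dev/rbd0 assumes this is the first mapped image; older kernels may need an
unmap/remap cycle to notice the resize):

    rbd create mypool/test-img --size 10240    # 10 GB image
    rbd map mypool/test-img                    # kernel rbd -> /dev/rbd0
    mkfs.ext4 /dev/rbd0 && mount /dev/rbd0 /mnt
    rbd resize mypool/test-img --size 20480    # grow to 20 GB
    resize2fs /dev/rbd0                        # grow the fs into the new space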
Yes, I mean exactly this. It's a great pity :-( Perhaps you could suggest
some Ceph equivalent that solves my problem?
2012/11/21 Gregory Farnum:
> On Wed, Nov 21, 2012 at 4:33 AM, ruslan usifov wrote:
>> So, is it not possible to use Ceph as a scalable block device without virtualization?
>
> I'm not sure I understand…
On Wed, Nov 21, 2012 at 4:33 AM, ruslan usifov wrote:
> So, is it not possible to use Ceph as a scalable block device without virtualization?
I'm not sure I understand, but if you're trying to take a bunch of
compute nodes and glue their disks together, no, that's not a
supported use case at this time. There…
On Tue, Nov 20, 2012 at 8:28 PM, Drunkard Zhang wrote:
> 2012/11/21 Gregory Farnum:
>> No, absolutely not. There is no relationship between different RADOS
>> pools. If you've been using the cephfs tool to place some filesystem
>> data in different pools, then your configuration is a little more…
With 8 successful installs already done, I'm reasonably confident that
it's patch #50. I'm making another build which applies all patches
from the 3.5 backport branch, excluding that specific one. I'll let
you know if that turns up any unexpected failures.
What will the potential fallout be for…
On 20.11.2012 at 13:44, Stefan Priebe wrote:
rbd / rados quite often returns the length of writes
or discarded blocks. These values might be bigger than int.
Signed-off-by: Stefan Priebe
---
block/rbd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/rbd.c b/block/rbd.c
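For context, a sketch of the kind of change the diffstat suggests (a hedged
illustration; the field names here are guesses, not necessarily the exact
patch):

    typedef struct RADOSCB {
        /* ... */
        int64_t size;  /* was: int -- write/discard lengths can exceed INT_MAX */
        int64_t ret;   /* was: int -- rbd_aio_get_return_value() returns ssize_t */
    } RADOSCB;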
It's really looking like it's the
libceph_resubmit_linger_ops_when_pg_mapping_changes commit. When
patches 1-50 (listed below) are applied to 3.5.7, the hang is present.
So far I have gone through 4 successful installs with no hang with
only 1-49 applied. I'm still leaving my test run to make sure…
On Tue, 20 Nov 2012, Nick Bartos wrote:
> Since I now have a decent script which can reproduce this, I decided
> to re-test with the same 3.5.7 kernel, but just not applying the
> patches from the wip-3.5 branch. With the patches, I can only go 2
> builds before I run into a hang. Without the patches…
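A rough sketch of the apply-all-but-one approach described above (directory
layout and patch file names are hypothetical):

    cd linux-3.5.7
    # apply patches 1-49 from the backport branch, leaving out #50
    for p in wip-3.5/00{01..49}-*.patch; do git am "$p" || break; done
    # rebuild the kernel and re-run the install reproducer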
(Sorry for the dupe message; vger rejected it due to HTML.)
Thanks, I'll try this patch this morning.
Client B should perform a single stat after a notification from Client
A. But won't Sage's patch still be required, since Client A needs the
MDS time to pass to Client B?
On Tue, Nov 20, 2012 at 1…
Responding to my own message. :)
Talked to Sage a bit offline about this. I think there are two opposing
forces:
On one hand, random IO may be spreading reads/writes out across more
OSDs than sequential IO that presumably would be hitting a single OSD
more regularly.
On the other hand, you…
Hi Guys,
I'm late to this thread but thought I'd chime in. Crazy that you are
getting higher performance with random reads/writes vs sequential! It
would be interesting to see what kind of throughput smalliobench reports
(should be packaged in bobtail) and also see if this behavior happens…
Hi,
On 11/21/2012 09:56 PM, Stefan Priebe - Profihost AG wrote:
Hi Wido,
thanks for all your explanations.
This doesn't seem to work:
rbd export --snap BACKUP
rbd -p kvmpool1 export --snap BACKUP vm-101-disk-1 /vm-101-disk-1.img
rbd: error setting snapshot context: (2) No such file or directory
Hi,
On 11/21/2012 10:07 PM, Stefan Priebe - Profihost AG wrote:
Hello list,
I tried to create a snapshot of my disk vm-113-disk-1:
[: ~]# rbd -p kvmpool1 ls
vm-113-disk-1
[: ~]# rbd -p kvmpool1 snap create BACKUP vm-113-disk-1
rbd: extraneous parameter vm-113-disk-1
[: ~]# rbd -p kvmpool1 snap create vm-113-disk-1 BACKUP
rbd: extraneous parameter BACKUP
Hello list,
I tried to create a snapshot of my disk vm-113-disk-1:
[: ~]# rbd -p kvmpool1 ls
vm-113-disk-1
[: ~]# rbd -p kvmpool1 snap create BACKUP vm-113-disk-1
rbd: extraneous parameter vm-113-disk-1
[: ~]# rbd -p kvmpool1 snap create vm-113-disk-1 BACKUP
rbd: extraneous parameter BACKUP
…
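For what it's worth, the rbd tool expects the snapshot name to be attached
to the image, either in pool/image@snap form or via --snap; a sketch using
the names above:

    rbd snap create kvmpool1/vm-113-disk-1@BACKUP
    # or, equivalently:
    rbd -p kvmpool1 snap create --snap BACKUP vm-113-disk-1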
Hi Wido,
thanks for all your explanations.
This doesn't seem to work:
rbd export --snap BACKUP
rbd -p kvmpool1 export --snap BACKUP vm-101-disk-1 /vm-101-disk-1.img
rbd: error setting snapshot context: (2) No such file or directory
Or should I still create and delete a snapshot named BACKUP?
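The ENOENT suggests no BACKUP snapshot exists for that image yet; a sketch
of the full sequence (using the pool/image@snap form):

    rbd snap create kvmpool1/vm-101-disk-1@BACKUP
    rbd export kvmpool1/vm-101-disk-1@BACKUP /vm-101-disk-1.img
    rbd snap rm kvmpool1/vm-101-disk-1@BACKUP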
Hi,
On 11/21/2012 09:37 PM, Stefan Priebe - Profihost AG wrote:
Hello list,
is there a recommended way to back up rbd images / disks?
Or is it just
rbd snap create BACKUP
rbd export BACKUP
You should use:
rbd export --snap BACKUP
rbd snap rm BACKUP
Is the snap needed at all? Or is an export safe?…
Hello list,
is there a recommended way to back up rbd images / disks?
Or is it just
rbd snap create BACKUP
rbd export BACKUP
rbd snap rm BACKUP
Is the snap needed at all? Or is an export safe? Is there a way to make
sure the image is consistent?
Is it possible to use the BACKUP file as a loop device?
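Assuming the exported file contains a filesystem directly (no partition
table), mounting it loopback is straightforward; a sketch:

    mount -o loop,ro /vm-101-disk-1.img /mnt
    # with a partition table: losetup /dev/loop0 /vm-101-disk-1.img,
    # then kpartx -av /dev/loop0 and mount the mapped partition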
Hi,
Somehow I have managed to produce an unkillable snapshot which can neither
be removed itself nor allows its parent image to be removed:
$ rbd snap purge dev-rack0/vm2
Removing all snapshots: 100% complete...done.
$ rbd rm dev-rack0/vm2
2012-11-21 16:31:24.184626 7f7e0d172780 -1 librbd: image has snapshots
- not removing
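One possible cause (an assumption; not diagnosed from the output above) is a
protected snapshot backing a clone, which the purge skips; a sketch of how
one might investigate (the snapshot name is a placeholder):

    rbd snap ls dev-rack0/vm2                # list snapshots that survived the purge
    rbd snap unprotect dev-rack0/vm2@<snap>  # fails if clones still depend on it
    rbd snap rm dev-rack0/vm2@<snap>
    rbd rm dev-rack0/vm2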
Hi,
no, I have it basically ready, but I have to run some tests first.
You'll have it in the next few days!
Danny
On 21.11.2012 at 01:23, Sage Weil wrote:
> If you haven't gotten to this yet, I'll go ahead and jump on it…
> let me know!
>
> Thanks- sage
>
>
> On Thu, 9 Aug 2012, Danny Kukawka wrote:…
Hi,
I don't think this is the best place to ask your question, since it's not
directly related to OpenStack but more about Ceph; I've CC'd
the Ceph ML. Anyway, CephFS is not ready yet for production, but I
heard that some people use it. People from Inktank (the company behind
Ceph) don't recommend…
On Wed, Nov 21, 2012 at 09:33:08AM +0100, Stefan Priebe - Profihost AG wrote:
> On 21.11.2012 at 09:26, Stefan Hajnoczi wrote:
> >On Wed, Nov 21, 2012 at 08:47:16AM +0100, Stefan Priebe - Profihost AG wrote:
> >>On 21.11.2012 at 07:41, Stefan Hajnoczi wrote:
> >QEMU is currently in hard freeze and on…
On 21.11.2012 at 09:26, Stefan Hajnoczi wrote:
On Wed, Nov 21, 2012 at 08:47:16AM +0100, Stefan Priebe - Profihost AG wrote:
On 21.11.2012 at 07:41, Stefan Hajnoczi wrote:
We're going in circles here. I know the types are wrong in the code and
your patch fixes it; that's why I said it looks good…
On Wed, Nov 21, 2012 at 08:47:16AM +0100, Stefan Priebe - Profihost AG wrote:
> On 21.11.2012 at 07:41, Stefan Hajnoczi wrote:
> >On Tue, Nov 20, 2012 at 8:16 PM, Stefan Priebe wrote:
> >>Hi Stefan,
> >>
> >>On 20.11.2012 at 17:29, Stefan Hajnoczi wrote:
> >>
> >>>On Tue, Nov 20, 2012 at 01:44:55PM…