On 26.11.2012 21:00, Dan Mick wrote:
It writes zeros; there's no way for it to know how many zeros are
coming. It could make a half-hearted attempt depending on its buffer
size and the amount of data the source is willing to buffer.
Yes, OK, that's correct. Maybe the buffer should be the size
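One way to make that "half-hearted attempt" concrete is a per-chunk zero check; this is only an illustrative sketch (hypothetical helper, not code from qemu or ceph):

```c
#include <stddef.h>

/* Illustrative sketch only: scan one buffered chunk and report whether
 * it is all zeros, so the writer could skip it (or punch a hole).
 * The chunk size is bounded by whatever the source is willing to buffer. */
static int buf_is_zero(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (buf[i] != 0)
            return 0;   /* found a non-zero byte: must really write */
    return 1;           /* whole chunk is zeros: may be skipped */
}
```

This can only see as far ahead as the buffer, which is exactly the limitation discussed above.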
I did mkdir a; chmod 777 a. So dir a is /home/hemant/a.
Then I used: mount.ceph 10.72.148.245:/ /home/hemant/a
After that the following command was executed:
root@hemantsec-virtual-machine:/home/hemant# cephfs /home/hemant/a
set_layout --pool 3
Error setting layout: Invalid argument
Please help me out.
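For what it's worth, set_layout has historically returned EINVAL when the other layout fields are left unset. As a hedged sketch (flag names from memory, values only examples, and pool id 3 must actually exist and be usable by the filesystem), specifying the full layout would look something like:

```shell
# hypothetical invocation: all layout fields given explicitly
cephfs /home/hemant/a set_layout --pool 3 \
    --stripe_unit 4194304 --stripe_count 1 --object_size 4194304
```

Check the tool's own help output for the exact option spellings on your version.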
If the call to syncfs() fails, don't try to call syncfs again via
syscall(). If HAVE_SYS_SYNCFS is defined, don't use syscall() with
SYS_syncfs.
Signed-off-by: Danny Al-Gaaf danny.al-g...@bisect.de
---
src/common/sync_filesystem.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff
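The corrected fallback order described in the patch can be sketched roughly like this (a sketch under stated assumptions, not the actual src/common/sync_filesystem.h code):

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* Sketch: if the libc syncfs() wrapper is available, use it and return
 * its result even on failure, instead of falling through to the raw
 * syscall.  Only without a wrapper do we try syscall(SYS_syncfs, ...),
 * and only without that do we fall back to a full sync(). */
static int sync_one_fs(int fd)
{
#if defined(__GLIBC__) && defined(SYS_syncfs)
    return syncfs(fd);                    /* wrapper exists: no fallthrough */
#elif defined(SYS_syncfs)
    return (int)syscall(SYS_syncfs, fd);  /* raw syscall fallback */
#else
    sync();                               /* last resort: sync everything */
    return 0;
#endif
}
```

The point of the fix is precisely that a failing syncfs() must not be retried via syscall(): the failure is real, not an indication that the wrapper is missing.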
Hi,
forget this patch, I'll send a new version due to changes in master.
Danny
On 27.11.2012 16:23, Danny Al-Gaaf wrote:
If the call to syncfs() fails, don't try to call syncfs again via
syscall(). If HAVE_SYS_SYNCFS is defined, don't use syscall() with
SYS_syncfs.
Signed-off-by: Danny
If the call to syncfs() fails, don't try to call syncfs again via
syscall(). If HAVE_SYS_SYNCFS is defined, don't fall through to try
syscall() with SYS_syncfs or __NR_syncfs.
Signed-off-by: Danny Al-Gaaf danny.al-g...@bisect.de
---
src/common/sync_filesystem.h | 8 ++--
1 file changed, 2
Applied, thanks!
sage
On Tue, 27 Nov 2012, Danny Al-Gaaf wrote:
If the call to syncfs() fails, don't try to call syncfs again via
syscall(). If HAVE_SYS_SYNCFS is defined, don't fall through to try
syscall() with SYS_syncfs or __NR_syncfs.
Signed-off-by: Danny Al-Gaaf
On Tue, 27 Nov 2012, Sam Lang wrote:
Hi Noah,
I was able to reproduce your issue with a similar test using the fuse client
and the clock_offset option for the mds. This is what I see happening:
clientA's clock is a few seconds behind the mds clock
clientA creates the file
- the
On Tuesday, November 27, 2012 at 8:45 AM, Sam Lang wrote:
Hi Noah,
I was able to reproduce your issue with a similar test using the fuse
client and the clock_offset option for the mds. This is what I see
happening:
clientA's clock is a few seconds behind the mds clock
clientA
On Nov 27, 2012, at 9:03 AM, Sage Weil s...@inktank.com wrote:
On Tue, 27 Nov 2012, Sam Lang wrote:
3. When a client acquires the cap for a file, have the mds provide its
current
time as well. As the client updates the mtime, it uses the timestamp
provided
by the mds and the time
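A minimal sketch of idea (3), with hypothetical names rather than actual client code: the client keeps the MDS timestamp delivered with the cap plus its own local time at that moment, and stamps mtime with the MDS time advanced by the locally measured elapsed interval, so any constant client/MDS clock offset cancels out:

```c
#include <time.h>

/* Hypothetical sketch: the MDS clock sample that came with the cap,
 * and the client clock sampled at the same instant. */
struct cap_times {
    time_t mds_at_grant;    /* MDS clock when the cap was issued */
    time_t local_at_grant;  /* client clock at that same moment  */
};

/* mtime to stamp now: MDS time plus locally-elapsed time.  The
 * client's absolute clock offset drops out of the result. */
static time_t mtime_now(const struct cap_times *c, time_t local_now)
{
    return c->mds_at_grant + (local_now - c->local_at_grant);
}
```

This handles a constant offset; clock drift during the cap's lifetime would still leak through.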
On 11/27/2012 11:07 AM, Gregory Farnum wrote:
On Tuesday, November 27, 2012 at 8:45 AM, Sam Lang wrote:
Hi Noah,
I was able to reproduce your issue with a similar test using the fuse
client and the clock_offset option for the mds. This is what I see
happening:
clientA's clock is a few
On Fri, 23 Nov 2012 16:46:00 + Joao Eduardo Luis joao.l...@inktank.com
wrote:
On 11/16/2012 05:24 PM, Cláudio Martins wrote:
As for the monitor daemon on this cluster (running on a dedicated
machine), it is currently using 3.2GB of memory, and it got to that
point again in a matter
On 11/27/2012 11:03 AM, Sage Weil wrote:
On Tue, 27 Nov 2012, Sam Lang wrote:
Hi Noah,
I was able to reproduce your issue with a similar test using the fuse client
and the clock_offset option for the mds. This is what I see happening:
clientA's clock is a few seconds behind the mds clock
On Tue, 27 Nov 2012, David Zafman wrote:
On Nov 27, 2012, at 9:03 AM, Sage Weil s...@inktank.com wrote:
On Tue, 27 Nov 2012, Sam Lang wrote:
3. When a client acquires the cap for a file, have the mds provide its
current
time as well. As the client updates the mtime, it uses the
Hi Caleb,
On 11/26/2012 07:28 PM, caleb miles wrote:
Hello all,
Here's what I've done to try and validate the new chooseleaf_descend_once
tunable first described in commit f1a53c5e80a48557e63db9c52b83f39391bc69b8 in
the wip-crush branch of ceph.git.
First I set the new tunable to its
On 11/27/2012 12:01 PM, Sage Weil wrote:
On Tue, 27 Nov 2012, David Zafman wrote:
On Nov 27, 2012, at 9:03 AM, Sage Weil s...@inktank.com wrote:
On Tue, 27 Nov 2012, Sam Lang wrote:
3. When a client acquires the cap for a file, have the mds provide its current
time as well. As the client
On Nov 27, 2012, at 11:05 AM, Sam Lang sam.l...@inktank.com wrote:
On 11/27/2012 12:01 PM, Sage Weil wrote:
On Tue, 27 Nov 2012, David Zafman wrote:
On Nov 27, 2012, at 9:03 AM, Sage Weil s...@inktank.com wrote:
On Tue, 27 Nov 2012, Sam Lang wrote:
3. When a client acquires the cap
On 11/27/2012 12:01 PM, Sage Weil wrote:
On Tue, 27 Nov 2012, David Zafman wrote:
On Nov 27, 2012, at 9:03 AM, Sage Weil s...@inktank.com wrote:
On Tue, 27 Nov 2012, Sam Lang wrote:
3. When a client acquires the cap for a file, have the mds provide its
current
On 11/27/2012 01:38 PM, David Zafman wrote:
On Nov 27, 2012, at 11:05 AM, Sam Lang sam.l...@inktank.com wrote:
On 11/27/2012 12:01 PM, Sage Weil wrote:
On Tue, 27 Nov 2012, David Zafman wrote:
On Nov 27, 2012, at 9:03 AM, Sage Weil s...@inktank.com wrote:
On Tue, 27 Nov 2012, Sam Lang
On 11/26/2012 04:52 PM, Josh Durgin wrote:
On 11/26/2012 04:10 PM, brady wrote:
Hello,
I have a general question. Is there a specific character limit for
rbd block devices? In attempting to map a block device with a name that
is 36 characters or more, I am getting the following error:
rbd
On Nov 27, 2012, at 1:14 PM, Sam Lang sam.l...@inktank.com wrote:
On 11/27/2012 01:38 PM, David Zafman wrote:
On Nov 27, 2012, at 11:05 AM, Sam Lang sam.l...@inktank.com wrote:
On 11/27/2012 12:01 PM, Sage Weil wrote:
On Tue, 27 Nov 2012, David Zafman wrote:
On Nov 27, 2012, at 9:03
On 11/22/2012 02:00 AM, Stefan Priebe wrote:
This one fixes a race, which qemu also had in the iscsi block driver,
between cancellation and I/O completion.
qemu_rbd_aio_cancel was not synchronously waiting for the end of
the command.
To achieve this it introduces a new status flag which uses
On 11/27/2012 01:16 AM, Stefan Priebe - Profihost AG wrote:
On 26.11.2012 21:00, Dan Mick wrote:
As for fstrim: on the rbd image? Sure, if it's a filesystem, it ought
to work (modulo some bugs I've heard about with 32-bit vs. 64-bit
offsets in qemu... :) )
It works fine with my patches i
Hi,
I am interested in using rbd block devices inside kvm/qemu VMs. I set up a
tiny ceph cluster using one server machine and used 6 SCSI disks for
storing data. At the client machine, the sequential read throughput
seems to be reasonable (~60 MB/s) when I run fio against rbd block
devices
Hi Stefan,
On Thu, 15 Nov 2012, Sage Weil wrote:
On Thu, 15 Nov 2012, Stefan Priebe - Profihost AG wrote:
On 14.11.2012 15:59, Sage Weil wrote:
Hi Stefan,
It would be nice to confirm that no clients are waiting on replies for
these requests; currently we suspect that the OSD
Ah, crap.
Okay, I think we should take the existing TMAPRM op and re-add the ENOENT
check, and then change the mds to use a new TMAPRMSLOPPY op that doesn't
error out.
sage
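The semantic difference between the strict op and the proposed sloppy variant can be sketched like this (a hypothetical helper, not the actual OSD TMAP code):

```c
#include <errno.h>
#include <string.h>

/* Hypothetical sketch: removing a key from a flattened key list.
 * The strict op fails with -ENOENT when the key is absent; the
 * "sloppy" variant for the mds treats a missing key as success. */
static int tmap_rm_sketch(const char *const *keys, int nkeys,
                          const char *key, int sloppy)
{
    for (int i = 0; i < nkeys; i++)
        if (strcmp(keys[i], key) == 0)
            return 0;             /* found: real code removes it here */
    return sloppy ? 0 : -ENOENT;  /* absent: only the strict op errors */
}
```

Keeping both semantics as distinct op codes lets existing strict callers keep their ENOENT check while the mds gets idempotent removal.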
On Tue, 27 Nov 2012, Dan Mick wrote:
tmap_rm() no longer fails on nonexistent keys
On Tue, 27 Nov 2012, Sage Weil wrote:
Ah, crap.
Okay, I think we should take the existing TMAPRM op and re-add the ENOENT
check, and then change the mds to use a new TMAPRMSLOPPY op that doesn't
error out.
Pushed wip-tmap with a fix. It adds the tmap op code.
Conveniently, the old tmap
On 28.11.2012 02:51, Sage Weil wrote:
Hi Stefan,
Yes it is. So I have to specify the admin socket on the KVM host?
Right. IIRC the disk line is a ; (or \;) separated list of key/value
pairs.
How do I query the admin socket for requests?
ceph --admin-daemon /path/to/socket help
ceph
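As a sketch (the socket path will vary with your setup, and `help` lists what commands the daemon actually supports on your version):

```shell
# list available admin-socket commands for this daemon
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help
# then, for example, show requests currently in flight on that OSD
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight
```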
On Wed, Nov 28, 2012 at 5:51 AM, Sage Weil s...@inktank.com wrote:
Hi Stefan,
On Thu, 15 Nov 2012, Sage Weil wrote:
On Thu, 15 Nov 2012, Stefan Priebe - Profihost AG wrote:
On 14.11.2012 15:59, Sage Weil wrote:
Hi Stefan,
It would be nice to confirm that no clients are waiting on