Hi,
So try enabling RBD writeback caching — see http://marc.info/?l=ceph-devel&m=133758599712768&w=2
will test tomorrow. Thanks.
Can we pass this via the qemu -drive option?
Stefan
On 22.05.2012 23:11, Greg Farnum wrote:
On Tuesday, May 22, 2012 at 2:00 PM, Stefan Priebe wrote:
Am
On 23.05.2012 08:30, Josh Durgin wrote:
On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote:
Hi,
So try enabling RBD writeback caching — see http://marc.info/?l=ceph-devel&m=133758599712768&w=2
will test tomorrow. Thanks.
Can we pass this via the qemu -drive option?
Yup, see
On 05/23/2012 12:01 AM, Stefan Priebe - Profihost AG wrote:
On 23.05.2012 08:30, Josh Durgin wrote:
On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote:
Hi,
So try enabling RBD writeback caching — see http://marc.info/?l=ceph-devel&m=133758599712768&w=2
will test tomorrow. Thanks.
Hi,
For Stefan:
Increasing socket memory gave me a few percent on fio tests inside the VM (I
measured the max IOPS reached before ceph starts warning about delayed writes).
More importantly, the osd process should, if possible, be pinned to a
dedicated core or two, and all other processes
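A pinning sketch with taskset (the core numbers are purely illustrative; match them to your machine's topology):

```shell
# Pin the running ceph-osd to cores 2-3:
taskset -cp 2,3 "$(pidof ceph-osd)"
# And keep other busy daemons elsewhere, e.g.:
taskset -cp 0,1 "$(pidof ceph-mon)"
```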
On 05/23/2012 12:22 AM, Stefan Priebe - Profihost AG wrote:
On 23.05.2012 09:19, Josh Durgin wrote:
On 05/23/2012 12:01 AM, Stefan Priebe - Profihost AG wrote:
You can use any of the rbd-specific options (like rbd_cache_max_dirty)
with qemu >= 0.15.
You can set them in a global ceph.conf file,
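As an illustration, such options could go either in ceph.conf or inline in the -drive spec (pool/image names here are placeholders, and the colon-separated option syntax should be checked against your qemu version):

```shell
# In /etc/ceph/ceph.conf, applying to every client that reads it:
#   [client]
#   rbd cache = true
#   rbd cache max dirty = 16777216
#
# Or inline on the qemu command line, colon-separated after the image spec:
qemu -drive file=rbd:rbd/myimage:rbd_cache=true:rbd_cache_max_dirty=16777216,format=raw,if=virtio
```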
On 23.05.2012 09:22, Andrey Korolyov wrote:
Hi,
For Stefan:
Increasing socket memory gave me a few percent on fio tests inside the VM (I
measured the max IOPS reached before ceph starts warning about delayed writes).
More importantly, the osd process should, if possible, be
On 23.05.2012 09:19, Josh Durgin wrote:
On 05/23/2012 12:01 AM, Stefan Priebe - Profihost AG wrote:
On 23.05.2012 08:30, Josh Durgin wrote:
On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote:
Hi,
So try enabling RBD writeback caching — see http://marc.info
On 05/23/2012 01:20 AM, Stefan Priebe - Profihost AG wrote:
On 23.05.2012 09:19, Josh Durgin wrote:
On 05/23/2012 12:01 AM, Stefan Priebe - Profihost AG wrote:
On 23.05.2012 08:30, Josh Durgin wrote:
On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote:
Hi,
So try enabling RBD
On 22.05.2012 23:11, Greg Farnum wrote:
On Tuesday, May 22, 2012 at 2:00 PM, Stefan Priebe wrote:
On 22.05.2012 22:49, Greg Farnum wrote:
Anyway, it looks like you're just paying a synchronous write penalty
What exactly does that mean? Shouldn't one threaded write to four
260MB/s
Hello
I have a question about ceph.
When I mount ceph, I use the following command:
# mount -t ceph -o name=admin,secret=XX 10.1.0.1:6789/ /mnt/ceph -vv
Now I create a user foo and generate a secret key with ceph-authtool like this:
# ceph-authtool /etc/ceph/keyring.bin -n client.foo --gen-key
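To sketch the rest of that flow (the capabilities and the monitor address below are illustrative, not from the thread):

```shell
# Generate a key for client.foo in a local keyring:
ceph-authtool /etc/ceph/keyring.bin -n client.foo --gen-key
# Attach some capabilities to the entry, then register it with the cluster:
ceph-authtool /etc/ceph/keyring.bin -n client.foo \
  --cap mon 'allow r' --cap osd 'allow rw' --cap mds 'allow'
ceph auth add client.foo -i /etc/ceph/keyring.bin
# Print the bare key and use it to mount as that user:
ceph-authtool /etc/ceph/keyring.bin -n client.foo -p
mount -t ceph -o name=foo,secret=<key-printed-above> 10.1.0.1:6789:/ /mnt/ceph
```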
2012/5/23 Yehuda Sadeh yeh...@inktank.com:
RGW is maturing. Besides looking at performance, which ties closely into
RADOS performance, we'd like to hear whether there are certain pain
points or future directions that you (you as in the ceph community)
would like to see us taking.
There are a
Hello all!
We would like to have two independent clusters on the same cluster nodes,
specifically one for user home directories (called homeuser) and one for backups
(backup). The reason is that in case the homeuser cephfs breaks (like it has
done several times in our tests), we still have
On 23.05.2012 10:30, Stefan Priebe - Profihost AG wrote:
On 22.05.2012 23:11, Greg Farnum wrote:
On Tuesday, May 22, 2012 at 2:00 PM, Stefan Priebe wrote:
On 22.05.2012 22:49, Greg Farnum wrote:
Anyway, it looks like you're just paying a synchronous write penalty
What does that
Hello,
How about this CloudFS management system ?
The CloudFS management system consists of two parts: a very simple
web-based management daemon called cloudfsd, and scripts to perform
various discrete functions.
http://git.fedorahosted.org/git/?p=CloudFS.git;a=summary
Sorry, I forgot to send this link.
http://git.fedorahosted.org/git/?p=CloudFS.git;a=blob;f=doc/mgmt_manual.md;h=bfcbbe9769f8726ecd1aefcf19e1159074971110;hb=HEAD
On Wed, May 23, 2012 at 3:47 PM, Kiran Patil kirantpa...@gmail.com wrote:
Hello,
How about this CloudFS management system ?
The
Tommi Virtanen tv at inktank.com writes:
The default logrotate script installed by ceph.deb rotates log files
daily and preserves 7 days of logs. If your /var is tiny, or you have
heavy debugging turned on, you probably need to rotate more often and
retain fewer log files. Or, if you're not
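For anyone wanting to adjust it, a logrotate stanza along these lines could be dropped in (the postrotate signal and paths are assumptions; check the packaged /etc/logrotate.d/ceph on your system):

```shell
# /etc/logrotate.d/ceph -- illustrative: weekly rotation, keep 4 generations
/var/log/ceph/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        # SIGHUP makes the daemons reopen their log files (assumption):
        killall -q -1 ceph-mon ceph-osd ceph-mds || true
    endscript
}
```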
I would like to see radosgw supporting:
- Object versioning
- Bucket policy
+1 for these.
7. libradosgw
Yes, please :)
Cheers,
Xiaopong
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at
On 5/23/12 2:22 AM, Andrey Korolyov wrote:
Hi,
For Stefan:
Increasing socket memory gave me a few percent on fio tests inside the VM (I
measured the max IOPS reached before ceph starts warning about delayed writes).
More importantly, the osd process should, if possible, be pinned to
Hey,
ok, I installed libc-dbg and ran your commands; now this comes up:
gdb /usr/bin/ceph-mds core
snip
GNU gdb (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to
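Once the symbols load, a few gdb commands are usually enough to capture what the list needs (a transcript sketch, assuming the core matches this binary):

```shell
gdb /usr/bin/ceph-mds core
# then, at the (gdb) prompt:
#   bt                    -- backtrace of the faulting thread
#   thread apply all bt   -- backtraces of all threads, useful for deadlocks
#   quit
```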
On Wed, May 23, 2012 at 02:34:43PM +0200, Christian Brunner wrote:
2012/5/22 Josef Bacik jo...@redhat.com:
Yeah you would also need to change orphan_meta_reserved. I fixed this by just
taking the BTRFS_I(inode)->lock when messing with these since we don't want to
take up all that
On Wed, 23 May 2012, Madhusudhana U wrote:
Tommi Virtanen tv at inktank.com writes:
The default logrotate script installed by ceph.deb rotates log files
daily and preserves 7 days of logs. If your /var is tiny, or you have
heavy debugging turned on, you probably need to rotate more often
Hi,
2012/5/22 Yehuda Sadeh yeh...@inktank.com:
RGW is maturing. Besides looking at performance, which ties closely into
RADOS performance, we'd like to hear whether there are certain pain
points or future directions that you (you as in the ceph community)
would like to see us taking.
There
On Tue, 22 May 2012, Alexandre Oliva wrote:
Add BuildRequires: libxml2-devel.
Move BuildRequires: libcurl-devel to a more proper place.
Uninstall libs3, incorrectly installed in RPM_BUILD_ROOT.
We fixed the Makefile so that libs3 is no longer installed in the first
place, took that part out.
On Wed, May 23, 2012 at 2:00 AM, Amon Ott a@m-privacy.de wrote:
So I started experimenting with the new cluster variable, but it does not
seem to be well supported so far. mkcephfs does not even know about it and
always uses ceph as cluster name. Setting a value for cluster in global
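Until the tooling catches up, one workaround is simply two config files with disjoint mon addresses/ports and data paths, selected per-command with -c (the paths and names below are illustrative):

```shell
# /etc/ceph/homeuser.conf and /etc/ceph/backup.conf each define their own
# fsid, mon hosts (on different ports), and mon/osd data directories.
ceph -c /etc/ceph/homeuser.conf -s   # status of the homeuser cluster
ceph -c /etc/ceph/backup.conf -s     # status of the backup cluster
```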
Hi Josef,
this patch has been running for 3 hours without a bug and without the warning.
I will let it run overnight and report tomorrow.
It looks very good ;-)
-martin
On 23.05.2012 17:02, Josef Bacik wrote:
Ok give this a shot, it should do it. Thanks,
On Wed, May 23, 2012 at 1:51 AM, Frank frankwoo@gmail.com wrote:
Hello
I have a question about ceph.
When I mount ceph, I use the following command:
# mount -t ceph -o name=admin,secret=XX 10.1.0.1:6789/ /mnt/ceph -vv
Now I create a user foo and generate a secret key with ceph-authtool
On Wed, 23 May 2012, Gregory Farnum wrote:
On Wed, May 23, 2012 at 1:51 AM, Frank frankwoo@gmail.com wrote:
Hello
I have a question about ceph.
When I mount ceph, I use the following command:
# mount -t ceph -o name=admin,secret=XX 10.1.0.1:6789/ /mnt/ceph -vv
now I create
On Wed, May 23, 2012 at 1:59 AM, Henry C Chang henry.cy.ch...@gmail.com wrote:
2012/5/23 Yehuda Sadeh yeh...@inktank.com:
RGW is maturing. Besides looking at performance, which ties closely into
RADOS performance, we'd like to hear whether there are certain pain
points or future directions that
On Wed, May 23, 2012 at 12:47 PM, Jerker Nyberg jer...@update.uu.se wrote:
On Tue, 22 May 2012, Gregory Farnum wrote:
Direct users of the RADOS object store (i.e., librados) can do all kinds
of things with the integrity guarantee options. But I don't believe there's
currently a way to make
On Wed, May 23, 2012 at 3:17 AM, Kiran Patil kirantpa...@gmail.com wrote:
Hello,
How about this CloudFS management system ?
The CloudFS management system consists of two parts: a very simple
web-based management daemon called cloudfsd, and scripts to perform
various discrete functions.
This point release fixes a librbd bug and a few small items:
* librbd: possible livelock with caching enabled
* libs3 compilation/install confusion (when building from source)
* ceph.spec updates
You can get it from the usual places:
* Git at git://github.com/ceph/ceph.git
* Tarball at
I would like to test the effect of using the new write-through cache
on RBD volumes mounted to Openstack VMs. What, precisely, are the
changes I need to make to the volume XML in order to do so?
-Mandell
On 05/23/2012 04:44 PM, Mandell Degerness wrote:
I would like to test the effect of using the new write-through cache
on RBD volumes mounted to Openstack VMs. What, precisely, are the
changes I need to make to the volume XML in order to do so?
If your only volumes are rbd, you can append
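For illustration, the cache mode ends up as an attribute on the disk's <driver> element; a hypothetical rbd disk stanza might look like this (the host and volume names are placeholders):

```xml
<disk type='network' device='disk'>
  <!-- cache= selects qemu's cache mode; 'writethrough' here -->
  <driver name='qemu' type='raw' cache='writethrough'/>
  <source protocol='rbd' name='rbd/myvolume'>
    <host name='10.1.0.1' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```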
Hi all,
Has anyone tried re-exporting a Ceph cluster via NFS with success (I mean,
mount the Ceph cluster on one of the machines and then export that via NFS to
clients)? I need to do this because of my client kernel version and some EDA
tools compatibility. Can someone suggest how I can
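One way this is commonly attempted (a sketch; the fsid value and export network below are illustrative):

```shell
# On the gateway: mount CephFS, then export the mountpoint over NFS.
mount -t ceph 10.1.0.1:6789:/ /mnt/ceph
# /etc/exports -- an explicit fsid is needed when exporting a non-block filesystem:
#   /mnt/ceph  192.168.0.0/24(rw,no_subtree_check,fsid=100)
exportfs -ra
# On a client stuck on an older kernel:
mount -t nfs gateway:/mnt/ceph /mnt/ceph-nfs
```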
On 22-05-12 20:07, Yehuda Sadeh wrote:
RGW is maturing. Besides looking at performance, which ties closely into
RADOS performance, we'd like to hear whether there are certain pain
points or future directions that you (you as in the ceph community)
would like to see us taking.
There are a few
2012/5/24 Yehuda Sadeh yeh...@inktank.com:
On Wed, May 23, 2012 at 1:59 AM, Henry C Chang henry.cy.ch...@gmail.com
wrote:
2012/5/23 Yehuda Sadeh yeh...@inktank.com:
RGW is maturing. Besides looking at performance, which ties closely into
RADOS performance, we'd like to hear whether there are