On 06/03/2012 02:56 PM, Mark Nelson wrote:
On 6/3/12 3:10 AM, Stefan Priebe wrote:
Hi List,
has anybody already tried Ceph over 40Gb InfiniBand?
Stefan
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
On 04.06.2012 08:22, Hannes Reinecke wrote:
Hehe.
Good luck with that.
We've tried on 10GigE with _disastrous_ results.
Up to the point where 1GigE was actually _faster_.
So you mean you've tried 10GbE, or 10Gb IPoIB with InfiniBand?
- osd-server is calling 'fsync' on each and every write
From: Yan, Zheng <zheng.z@intel.com>
ceph_snap_context->snaps is a u64 array
Signed-off-by: Zheng Yan <zheng.z@intel.com>
---
 drivers/block/rbd.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 65665c9..8b9c173 100644
Hi Hannes,
our production environment is running on 10GbE infrastructure. We had a
lot of trouble until we got to where we are today.
We use Intel X520-D2 cards on our OSDs and Nexus switch
infrastructure. All other cards we were testing failed horribly.
Some of the problems we encountered have
Hello,
Besides the performance inconsistency (see the other thread titled "poor OSD
performance using kernel 3.4") where I promised some tests (will run this
afternoon), we tried this weekend to stress test ceph, making backups
with bacula on a rbd volume of 15T (8 osd nodes, using 8 physical
On 04/06/2012 11:35, Alexandre DERUMIER wrote:
Hi,
about this:
Turning off the virtualisation extension in the BIOS. Don't know why, but it
gave us crappy performance. We usually put it on, because we use KVM a
lot. In our case, OSDs are on bare metal and disabling the virtualisation
extension gives us
On 04/06/2012 11:47, Amon Ott wrote:
even libc6 in Debian experimental has syncfs() support.
Also see thread OSD deadlock with cephfs client and OSD on same machine
Great, thanks for the explanation.
... lots of tests to do this afternoon :) I need to convert my OSDs to
xfs, benchmark with
Hi,
I'm currently doing some tests with xfs, on debian wheezy with the standard
libc6 (2.11.3-3) and a 3.2 kernel.
I'm running some iostat checks (3 nodes with 5 OSDs), and I see constant
writes to the disks (as the data is flushed each second from journal to disk).
The journal is big enough (20GB tmpfs) to handle
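For reference, the journal size and flush cadence described above are driven by settings like the following; this is a hedged sketch with illustrative values, and the option names are assumed from the Ceph configuration of that era, not taken from the poster's actual ceph.conf:

```ini
[osd]
    ; 20 GB journal (value is in MB); here backed by tmpfs as described
    osd journal size = 20480
    ; how often the filestore flushes journaled writes to the backing disk
    filestore min sync interval = 0.01
    filestore max sync interval = 5
```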
Hi Martin,
On Sat, 2 Jun 2012, Martin Wilderoth wrote:
I have some problems with my ceph filesystem. I have a folder that I
can't remove, i.e.:
root@lintx2:/mnt/backuppc/pc# ls -la toberemoved/
total 0
drwxr-x--- 1 backuppc backuppc 28804802 May 15 13:29 .
drwxr-x--- 1
On Mon, Jun 4, 2012 at 2:28 PM, Mark Nelson mark.nel...@inktank.com wrote:
For syncfs support, upgrade to a distro with glibc 2.13+ (i.e. precise). I've
noticed a significant improvement in our spinning-disk performance going
from oneiric and kernel 3.3 to precise and kernel 3.4. I think part
Thanks Mark,
I'll rebuild my cluster with ubuntu precise tomorrow. (Don't have time to
backport/maintain libc6 ;)
BTW, do you mainly use ubuntu at Inktank for your tests?
I'd like to have a setup as close as possible to the Inktank setup.
On Monday, June 4, 2012 at 6:28 AM, Mark Nelson wrote:
Hi Alexandre,
A lot of our testing is on Ubuntu right now. I'm using the ceph and
kernel debs from ceph.gitbuilder.com for my
tests. Post some results to
the list once you get your cluster setup!
I
Hi Amon,
thanks for your backported patch. At least it doesn't apply cleanly to
debian squeeze stable, as it wants glibc 2.12 in Versions.def but Debian
is only at 2.11. Do you use another patch too?
Stefan
On Mon, Jun 4, 2012 at 1:44 AM, Yann Dupont yann.dup...@univ-nantes.fr wrote:
Results: worked like a charm during two days, apart from btrfs warn messages;
then OSDs began to crash one after another, 'domino style'.
Sorry to hear that. Reading through your message, there seem to be
several problems; whether
Can you send the osd logs? The merge_log crashes are probably fixable
if I can see the logs.
The leveldb crash is almost certainly a result of memory corruption.
Thanks
-Sam
This is probably the same/similar to http://tracker.newdream.net/issues/2462,
no? There's a log there, though I've no idea how helpful it is.
On Thursday, May 24, 2012 at 5:29 AM, Felix Feinhals wrote:
Hi,
I was using the Debian packages, but I have now tried from source.
I used the same version from git
(cb7f1c9c7520848b0899b26440ac34a8acea58d1) and compiled it. Same crash
report.
Then I applied your patch, but again the same crash,
On 06/04/2012 03:11 AM, Yan, Zheng wrote:
From: Yan, Zheng <zheng.z@intel.com>
ceph_snap_context->snaps is a u64 array
You are correct. We'll get this in soon. Thanks.
Reviewed-by: Alex Elder <el...@inktank.com>
Signed-off-by: Zheng Yan <zheng.z@intel.com>
---
drivers/block/rbd.c |
On Thursday, May 31, 2012 at 4:58 PM, Noah Watkins wrote:
On May 31, 2012, at 3:39 PM, Greg Farnum wrote:
Never mind my last comment. Hmm, I've seen this, but very rarely.
Noah, do you have any leads on this? Do you think it's a bug in your Java
code or in the C/C++ libraries?
On Mon, Jun 4, 2012 at 1:17 PM, Greg Farnum g...@inktank.com wrote:
I'm not quite sure what you mean here. Ceph is definitely using pthread
threading and mutexes, but I don't see how the use of a different threading
library can break pthread mutexes (which are just using the kernel futex
On Mon, Jun 4, 2012 at 2:52 PM, udit agarwal fzdu...@gmail.com wrote:
I ran the 5G iozone test on my ceph system and I got the following
output on the terminal:
...
Message from syslogd@hp1 at Jun 4 22:19:03 ...
kernel:[ 7627.132065] Oops: [#1] PREEMPT SMP
Message from syslogd@hp1 at
Hi,
Thanks for your reply.
Please follow this link
https://docs.google.com/document;.
d/1mYVyI75FGMYqPes5T5fkI0aUX8h2q6TFeWdoV9uJdQI/edit?pli=1
to find the whole message. (Please concatenate both strings for the link, as I
wasn't able to post it in whole, i.e. the link is https://?pli=1 .)
Sorry, the link is:
https://docs.google.com/document/d/
1mYVyI75FGMYqPes5T5fkI0aUX8h2q6TFeWdoV9uJdQI/edit?pli=1
Hope you can help me in this matter.
--Udit Agarwal
On Mon, 4 Jun 2012, Noah Watkins wrote:
I wonder if the Mutex::lock(bool) being printed out is a red herring...
FWIW this assert usually means a use-after-free.
sage