[ceph-users] btrfs ready for production?

2015-09-07 Thread Alan Zhang
Hi everyone, the Ceph docs currently recommend: "If you use the btrfs file system with Ceph, we recommend using a recent Linux kernel (v3.14 or later)." So I want to know: 1. If we use btrfs on Linux kernel 3.14 or later, is it ready for production? 2. Has anyone used btrfs in a production environment?

[ceph-users] test

2015-09-07 Thread Zhuangzeqiang

[ceph-users] Test

2015-09-07 Thread Wukongming
Test

[ceph-users] [Problem] I cannot start the OSD daemon

2015-09-07 Thread Aaron
Hi all, I cannot start the OSD daemon and need your help. Any advice is appreciated. When I deployed the Ceph cluster through ceph-deploy, it worked. But after some days, all the OSD daemons went down and I could not start them. Then I redeployed it several times, and it was still like this
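A common first step for this kind of failure is to run the affected OSD in the foreground with verbose logging so the actual startup error becomes visible. This is only a hedged sketch; the OSD id and the default log path below are assumptions, not details from the thread.

    # Look at the tail of the OSD's log for the reason it refuses to start
    tail -n 100 /var/log/ceph/ceph-osd.0.log

    # Run OSD 0 in the foreground with extra debugging to watch the startup fail
    ceph-osd -i 0 -f --debug-osd 10 --debug-filestore 10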

Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-09-07 Thread Eino Tuominen
Hello, Should we (somebody, please?) gather up a comprehensive list of suitable SSD devices to use as ceph journals? This seems to be a FAQ, and it would be nice if all the knowledge and user experiences from several different threads could be referenced easily in the future. I took a look at

[ceph-users] Ceph monitor ip address issue

2015-09-07 Thread Willi Fehler
Hello, I'm trying to set up my first Ceph cluster on Hammer. [root@linsrv002 ~]# ceph -v ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b) [root@linsrv002 ~]# ceph -s cluster 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7 health HEALTH_OK monmap e1: 3 mons at

[ceph-users] File striping configuration?

2015-09-07 Thread Alexander Walker
Hi, I've found this document https://ceph.com/docs/v0.80/dev/file-striping, but I don't understand how and where I can configure this. I'll use this in CephFS. Can someone help me?

Re: [ceph-users] File striping configuration?

2015-09-07 Thread Ilya Dryomov
On Mon, Sep 7, 2015 at 11:19 AM, Alexander Walker wrote: > Hi, > I've found this document https://ceph.com/docs/v0.80/dev/file-striping, but > I don't understand how and where I can configure this. I'll use this in > CephFS. > Can someone help me? You configure file striping
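In CephFS specifically, striping is controlled through the file and directory layout virtual extended attributes. A minimal sketch, assuming a CephFS mount at /mnt/cephfs and illustrative values:

    # Show the current layout of an existing file
    getfattr -n ceph.file.layout /mnt/cephfs/somefile

    # Change the striping policy for files created under a directory from now on
    setfattr -n ceph.dir.layout.stripe_unit  -v 1048576 /mnt/cephfs/mydir   # 1 MB stripe unit
    setfattr -n ceph.dir.layout.stripe_count -v 4       /mnt/cephfs/mydir   # stripe over 4 objects
    setfattr -n ceph.dir.layout.object_size  -v 4194304 /mnt/cephfs/mydir   # 4 MB RADOS objects

Layouts only apply to files created after the change; existing files keep the striping they were written with.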

[ceph-users] Is it indispensable to specify a uid to rm, modify, create or get info?

2015-09-07 Thread Zhuangzeqiang
Hello everyone! I have a question: is it indispensable to specify a uid in order to rm, modify, create or get info?

Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-09-07 Thread Andrija Panic
There is http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ On the other hand, I'm not sure if SSD vendors would be happy to see their device listed as performing total crap (for journaling)... but yes, I vote for having some official page if
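That blog post boils down to a single-job, synchronous 4k sequential write test with fio, since that is the pattern a Ceph journal imposes on the SSD. Roughly the following, where /dev/sdX is a placeholder and the run overwrites data on the device:

    # Sustained O_DSYNC 4k writes - the workload a filestore journal generates.
    # WARNING: writes directly to the raw device.
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=ceph-journal-test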

[ceph-users] rgw potential security issue

2015-09-07 Thread Xusangdi
Hi Cephers, Recently, when I did some tests of RGW functions, I found that the swift key of a subuser is kept after removing the subuser. As a result, this subuser/swift-key pair can still pass the authentication system and get an auth token (without any permission, though). Moreover, if we create a

[ceph-users] I still have a question. I hope you can help me as soon as possible.

2015-09-07 Thread Liupeiyang
The additional modify behaviour in the user create function could be removed, as it causes misunderstanding of the configuration

[ceph-users] test

2015-09-07 Thread Guce

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-07 Thread Jan Schermer
Take a look at this: http://monolight.cc/2011/06/barriers-caches-filesystems/ LSI's answer just makes no sense to me... Jan > On 07 Sep 2015, at 11:07, Jan Schermer wrote: > > Are you absolutely sure there's

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Jan Schermer
Hmm, even network traffic went up. Nothing in logs on the mons which started 9/4 ~6 AM? Jan > On 07 Sep 2015, at 14:11, Mariusz Gronczewski > wrote: > > On Mon, 7 Sep 2015 13:44:55 +0200, Jan Schermer wrote: > >> Maybe some configuration

Re: [ceph-users] [Problem] I cannot start the OSD daemon

2015-09-07 Thread Aaron
Supplement[On OSD-0 Node]: [root@node1 ceph]# *ls /var/lib/ceph/osd/ceph-0/current/* 0.0_head 0.18_head 0.20_head 0.29_head 0.31_head 0.3a_head 0.6_head 0.f_head 2.0_head commit_op_seq 0.10_head 0.19_head 0.21_head 0.2a_head 0.32_head 0.3b_head 0.7_head 1.0_head 2.1_head meta

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Mariusz Gronczewski
On Mon, 7 Sep 2015 13:02:38 +0200, Jan Schermer wrote: > Apart from bug causing this, this could be caused by failure of other OSDs > (even temporary) that starts backfills. > > 1) something fails > 2) some PGs move to this OSD > 3) this OSD has to allocate memory for all the

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Shinobu Kinjo
Are you using LACP on the 10G interfaces? - Original Message - From: "Mariusz Gronczewski" To: "Shinobu Kinjo" Cc: "Jan Schermer" , ceph-users@lists.ceph.com Sent: Monday, September 7, 2015 9:58:33 PM Subject: Re:

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-07 Thread Jan Schermer
> On 07 Sep 2015, at 12:19, Christian Balzer wrote: > > On Mon, 7 Sep 2015 12:11:27 +0200 Jan Schermer wrote: > >> Dense SSD nodes are not really an issue for network (unless you really >> use all the throughput), > That's exactly what I wrote... > And dense in the sense of

Re: [ceph-users] Ceph cache-pool overflow

2015-09-07 Thread Квапил , Андрей
Hello. Can somebody answer a simple question? Why does Ceph, with equal OSD weights and sizes, not write to them equally: a bit more to some, a bit less to others? Thanks.

[ceph-users] RE: XFS and nobarriers on Intel SSD

2015-09-07 Thread Межов Игорь Александрович
Hi! Oh, my shame! I forgot about the C60X. We have some Intel S2600 platforms and use only the onboard SATA3 for SSDs, or an RMS25CB080 as a fully functional RAID module. With the C60X SAS we have bad results with Debian 3.16 kernels: the 'isci' driver works with SATA devices and refuses to work with SAS on the same

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Mariusz Gronczewski
It looked like OSDs started flapping; there was a lot of: 2015-09-04 04:35:07.100185 7f3fb1fc5700 1 mon.0@0(leader).osd e6192 prepare_failure osd.1 10.100.226.51:6813/19433 from osd.6 10.100.226.52:6807/4286 is reporting failure:1 2015-09-04 04:35:07.100207 7f3fb1fc5700 0 log_channel(cluster)

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-07 Thread Christian Balzer
On Mon, 7 Sep 2015 12:11:27 +0200 Jan Schermer wrote: > Dense SSD nodes are not really an issue for network (unless you really > use all the throughput), That's exactly what I wrote... And dense in the sense of saturating his network would be 4 SSDs, so: > the issue is with CPU and memory
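For context, the back-of-envelope arithmetic behind a figure like "4 SSDs saturate the network" looks roughly like the sketch below; the per-SSD and link throughput numbers are assumptions, not values from the thread.

    # Rough estimate: how many SATA SSD OSDs can fill a single 10GbE link?
    link_mbs=1250    # 10 Gbit/s is about 1250 MB/s before protocol overhead
    ssd_mbs=400      # assumed sequential throughput per SATA SSD
    echo "$(( link_mbs / ssd_mbs )) SSDs roughly saturate the link"   # prints 3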

[ceph-users] RE: XFS and nobarriers on Intel SSD

2015-09-07 Thread Межов Игорь Александрович
Hi! >And for the record, _ALL_ the drives I tested are faster on Intel SAS than on >LSI (2308) and >often faster on a regular SATA AHCI then on their "high throughput" HBAs. But most Intel HBAs are LSI-based. They are the same chips with slightly different firmware, I think. We use RS2MB044,

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Mariusz Gronczewski
That was on the 10Gbit interface between OSDs, not from outside; traffic from outside was rather low (eth0/1 public, eth2/3 cluster). On Mon, 7 Sep 2015 08:42:09 -0400 (EDT), Shinobu Kinjo wrote: > How heavy was the network traffic? > > Have you tried to capture that traffic between

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-07 Thread Jan Schermer
Is this based on LSI? I don't think so. 03:00.0 Serial Attached SCSI controller: Intel Corporation C606 chipset Dual 4-Port SATA/SAS Storage Control Unit (rev 06) In any case it doesn't have any of the problems the LSI HBAs in the same machine exhibit (slow flushing, non-functional TRIM,

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Mariusz Gronczewski
On Mon, 7 Sep 2015 13:44:55 +0200, Jan Schermer wrote: > Maybe some configuration change occurred that now takes effect when you start > the OSD? > Not sure what could affect memory usage though - some ulimit values maybe > (stack size), number of OSD threads (compare the

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Mariusz Gronczewski
I first had that problem on Giant, and only upgraded to 0.94.3 in the hope that it would eat less RAM. On Mon, 7 Sep 2015 20:51:57 +0800, 池信泽 wrote: > Yeah, there is a bug which can use huge amounts of memory. It is triggered when an OSD goes > down or is added into the cluster and does recovery/backfilling. > > The

[ceph-users] Network failure

2015-09-07 Thread MEGATEL / Rafał Gawron
Hi! My cluster network configuration: eth0 - access to the system; ib0 - public network; ib1 - cluster network. How will my cluster work if a network is down? If eth0 is down: I don't have access to my system over Ethernet, but the cluster works fine. If ib0 is down: clients don't have access to the cluster, but data

Re: [ceph-users] Network failure

2015-09-07 Thread Shinobu Kinjo
The best answer is: http://ceph.com/docs/master/rados/configuration/network-config-ref/ That should show you how each component communicates with the others. And this would be more help: https://ceph.com/docs/v0.79/rados/operations/auth-intro/ Shinobu
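For reference, that split between public and cluster traffic corresponds to two settings in ceph.conf; a minimal sketch, with hypothetical subnets standing in for the ib0/ib1 networks described above.

    # Relevant lines in the [global] section of /etc/ceph/ceph.conf on every node:
    #
    #   public network  = 192.168.10.0/24    # mons and clients               (ib0 here)
    #   cluster network = 192.168.20.0/24    # OSD replication and heartbeats (ib1 here)
    #
    # After restarting the daemons, check which addresses an OSD actually bound to:
    ceph daemon osd.0 config show | grep -E '"public_addr"|"cluster_addr"'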

Re: [ceph-users] Best layout for SSD & SAS OSDs

2015-09-07 Thread Jan Schermer
Dense SSD nodes are not really an issue for the network (unless you really use all the throughput); the issue is with CPU and memory throughput (and possibly a crappy kernel scheduler, depending on how up-to-date a distro you use). Also, if you want consistent performance even when a failure occurs, you

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Jan Schermer
Maybe some configuration change occurred that now takes effect when you start the OSD? Not sure what could affect memory usage though - some ulimit values maybe (stack size), number of OSD threads (compare the number from this OSD to the rest of the OSDs), fd cache size. Look in /proc and compare

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Mariusz Gronczewski
Nope, master/slave; that's why the graph only shows traffic on eth2. On Mon, 7 Sep 2015 09:01:53 -0400 (EDT), Shinobu Kinjo wrote: > Are you using LACP on the 10G interfaces? > > - Original Message - > From: "Mariusz Gronczewski" > To:

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-07 Thread Andrey Korolyov
On Mon, Sep 7, 2015 at 12:54 PM, Paul Mansfield wrote: > > > On 04/09/15 20:55, Richard Bade wrote: >> We have a Ceph pool that is entirely made up of Intel S3700/S3710 >> enterprise SSD's. >> >> We are seeing some significant I/O delays on the disks causing a

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Shinobu Kinjo
> master/slave Meaning that you are using bonding? - Original Message - From: "Mariusz Gronczewski" To: "Shinobu Kinjo" Cc: "Jan Schermer" , ceph-users@lists.ceph.com Sent: Monday, September 7, 2015 10:05:23 PM

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Mariusz Gronczewski
No, we don't use it... 802.3ad is LACP, i.e. bonding option "mode=4" or "mode=802.3ad". We use mode=1, which is active-backup and has nothing to do with LACP, explicitly because we want to push all traffic through one switch (we have a pair of them) to avoid pushing traffic through the link between those
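For reference, an active-backup bond of that kind would look roughly like the sketch below on RHEL/CentOS; the interface names are illustrative, and the address is borrowed from the log excerpt earlier in the thread.

    # /etc/sysconfig/network-scripts/ifcfg-bond0 would contain something like:
    #
    #   DEVICE=bond0
    #   BONDING_OPTS="mode=1 miimon=100 primary=eth2"   # mode=1 = active-backup, no LACP
    #   IPADDR=10.100.226.51
    #   NETMASK=255.255.255.0
    #   ONBOOT=yes
    #
    # Confirm the mode and which slave is currently carrying the traffic:
    grep -E 'Bonding Mode|Currently Active Slave' /proc/net/bonding/bond0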

Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-09-07 Thread Quentin Hartman
fwiw, I am not confused about the various types of SSDs that Samsung offers. I knew exactly what I was getting when I ordered them. Based on their specs and my WAG on how much writing I would be doing they should have lasted about 6 years. Turns out my estimates were wrong, but even adjusting for

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Mariusz Gronczewski
yes On Mon, 7 Sep 2015 09:15:55 -0400 (EDT), Shinobu Kinjo wrote: > > master/slave > > Meaning that you are using bonding? > > - Original Message - > From: "Mariusz Gronczewski" > To: "Shinobu Kinjo" > Cc: "Jan

Re: [ceph-users] Ceph OSD nodes in XenServer VMs

2015-09-07 Thread Jiri Kanicky
Hi. As we would like to use Ceph storage with CloudStack/XS, we have to use NFS or iSCSI client nodes to provide shared storage. To avoid having several nodes of physical hardware, we thought that we could run the NFS/iSCSI client node on the same box as the Ceph OSD node. Possibly we could even

Re: [ceph-users] ceph-deploy prepare btrfs osd error

2015-09-07 Thread German Anders
Thanks a lot Simon, this helped me resolve the issue; it was the bug that you mentioned. Best regards, *German* 2015-09-07 5:34 GMT-03:00 Simon Hallam : > Hi German, > > > > This is what I'm running to redo an OSD as btrfs (not sure if this is the > exact error you're
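For anyone searching later: redoing an OSD as btrfs with ceph-deploy is roughly the sequence below. This is a hedged sketch, not Simon's exact commands; the hostname and devices are placeholders, and --fs-type support depends on the ceph-deploy version.

    # Wipe the old partitions, then prepare and activate the disk as a btrfs filestore OSD
    ceph-deploy disk zap osdnode1:sdb
    ceph-deploy osd prepare --fs-type btrfs osdnode1:sdb:/dev/sdc1   # sdc1 = journal partition
    ceph-deploy osd activate osdnode1:sdb1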

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Shinobu Kinjo
OK, that's the protocol, 802.3ad. - Original Message - From: "Mariusz Gronczewski" To: "Shinobu Kinjo" Cc: "Jan Schermer" , ceph-users@lists.ceph.com Sent: Monday, September 7, 2015 10:19:23 PM Subject: Re:

[ceph-users] Ceph cluster NO read / write performance :: Ops are blocked

2015-09-07 Thread Vickey Singh
Dear experts, can someone please help me understand why my cluster is not able to write data? See the output below: cur MB/s is 0 and avg MB/s is decreasing. Ceph Hammer 0.94.2, CentOS 6 (3.10.69-1). The Ceph status says ops are blocked; I have tried checking everything I know of: system resources (CPU

Re: [ceph-users] Ceph cluster NO read / write performance :: Ops are blocked

2015-09-07 Thread Lincoln Bryant
Hi Vickey, I had this exact same problem last week, resolved by rebooting all of my OSD nodes. I have yet to figure out why it happened, though. I _suspect_ in my case it's due to a failing controller on a particular box I've had trouble with in the past. I tried setting 'noout', stopping
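In case it helps anyone debugging the same symptom, the usual places to look for blocked ops, plus the noout trick mentioned above, are sketched here; the OSD id is a placeholder.

    # Which OSDs are reporting slow/blocked requests?
    ceph health detail | grep -i block

    # Stop CRUSH from rebalancing while suspect OSD nodes are restarted
    ceph osd set noout

    # On the OSD host, see what a suspect OSD is actually stuck on
    ceph daemon osd.12 dump_ops_in_flight
    ceph daemon osd.12 dump_historic_ops

    # Re-enable normal recovery behaviour afterwards
    ceph osd unset noout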

Re: [ceph-users] btrfs ready for production?

2015-09-07 Thread Quentin Hartman
btrfs has been discussed at length here. Search the archives if you want more detail, but my take on it is that you probably shouldn't use it in production right now. Also, from http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/: "We currently recommend XFS for production

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-07 Thread Christian Balzer
Hello, Note that I see exactly your errors (in a non-Ceph environment) with both Samsung 845DC EVO and Intel DC S3610. Though I need to stress things quite a bit to make it happen. Also setting nobarrier did alleviate it, but didn't fix it 100%, so I guess something still issues flushes at some
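For context, the nobarrier experiment mentioned here amounts to the XFS mount option in the sketch below; only consider it when the drive or controller cache is power-loss protected, and treat the paths as placeholders.

    # Check the current mount options of an OSD's filestore
    grep /var/lib/ceph/osd/ceph-0 /proc/mounts

    # Temporarily disable write barriers on that XFS filesystem (reverts on the next
    # normal mount; risky without power-loss-protected caches)
    mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0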

[ceph-users] Unable to add Ceph KVM node in cloudstack

2015-09-07 Thread Shetty, Pradeep
Hello folks, we are working on a Ceph (Hammer, 0.94.3) implementation on a KVM hypervisor (CentOS 6.5) which will be integrated with CloudStack (4.4.2) --- single KVM node (OS: CentOS 6.5, libvirt-0.10.2-54.el6.x86_64, openvswitch 2.3.2) + Ceph Hammer, cluster health is OK. We are unable to add the

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-07 Thread Richard Bade
Thanks guys for the pointers to this Intel thread: https://communities.intel.com/thread/77801 It looks promising. I intend to update the firmware on disks in one node tonight and will report back after a few days to a week on my findings. I've also posted to that forum and will update there
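The firmware update itself is typically done with Intel's isdct utility; a hedged sketch follows (the drive index is a placeholder, and exact syntax can differ between isdct releases).

    # List Intel SSDs with their current firmware revision
    isdct show -intelssd

    # Load the newest available firmware onto drive index 0, then reboot the node
    isdct load -intelssd 0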

Re: [ceph-users] Extra RAM use as Read Cache

2015-09-07 Thread Shinobu Kinjo
I have a bunch of questions about the performance of Lustre, which should be discussed on the lustre-discuss list. How many OSTs are you using now? How did you configure LNET? How are you using the extra RAM as read cache? Shinobu - Original Message - From: "Vickey Singh"

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-07 Thread Richard Bade
Hi Christian, On 8 September 2015 at 14:02, Christian Balzer wrote: > > Indeed. But first a word about the setup where I'm seeing this. > These are 2 mailbox server clusters (2 nodes each), replicating via DRBD > over Infiniband (IPoIB at this time), LSI 3008 controller. One

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-07 Thread Richard Bade
Hi Christian, Thanks for the info. I'm just wondering, have you updated your S3610s with the new firmware that was released on 21/08, as referred to in the thread? We thought we weren't seeing the issue on the Intel controller either, to start with, but after further investigation it turned out we

[ceph-users] test2

2015-09-07 Thread Guce

Re: [ceph-users] Extra RAM use as Read Cache

2015-09-07 Thread Somnath Roy
Vickey, OSDs sit on top of a filesystem, and that unused memory will automatically become part of the page cache managed by the filesystem. But the read performance improvement depends on the pattern in which the application reads data and on the size of the working set. A sequential pattern will benefit most (you may need to tweak
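In other words, no Ceph-side setting is needed; the kernel already turns the spare RAM into read cache. A quick way to confirm this, plus two commonly adjusted VM knobs, is sketched below; the values are illustrative, not recommendations from this thread.

    # The "cached" and "buffers" figures are the kernel's read cache over the filestores
    free -m

    # Prefer keeping cached data over swapping OSD process memory out
    sysctl -w vm.swappiness=10

    # Hold on to dentry/inode caches a bit longer (helps metadata-heavy reads)
    sysctl -w vm.vfs_cache_pressure=50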

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-07 Thread Christian Balzer
Hello, On Tue, 8 Sep 2015 13:40:36 +1200 Richard Bade wrote: > Hi Christian, > Thanks for the info. I'm just wondering, have you updated your S3610's > with the new firmware that was released on 21/08 as referred to in the > thread? I did so earlier today, see below. >We thought we weren't

Re: [ceph-users] Ceph monitor ip address issue

2015-09-07 Thread Willi Fehler
Hi Chris, could you please send me your ceph.conf? I tried to set "mon addr" but it looks like it was ignored all the time. Regards - Willi On 07.09.15 at 20:47, Chris Taylor wrote: My monitors are only connected to the public network, not the cluster network. Only the OSDs are
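For comparison, explicitly pinning a monitor's address in ceph.conf looks roughly like the sketch below (host name and IP are placeholders). Once the monitors are formed, the running monmap, not ceph.conf, is what the cluster actually uses, which can make "mon addr" appear to be ignored.

    # A per-monitor section in /etc/ceph/ceph.conf would contain something like:
    #
    #   [mon.linsrv002]
    #   host     = linsrv002
    #   mon addr = 192.168.10.12:6789   # must be an address on the public network
    #
    # What the cluster really uses is the monmap; inspect it with:
    ceph mon dump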

[ceph-users] Extra RAM use as Read Cache

2015-09-07 Thread Vickey Singh
Hello experts, I want to increase my Ceph cluster's read performance. I have several OSD nodes with 196 GB of RAM. On my OSD nodes Ceph uses just 15-20 GB of RAM. So, can I instruct Ceph to make use of the remaining 150+ GB of RAM as a read cache, so that it caches data in RAM and serves it to

Re: [ceph-users] Extra RAM use as Read Cache

2015-09-07 Thread ceph
The IO cache may be handled by the kernel, not userspace. Are you sure it is not already in use? Do not look at userspace memory. On 07/09/2015 23:19, Vickey Singh wrote: > Hello experts, > > I want to increase my Ceph cluster's read performance. > > I have several OSD nodes with 196 GB of RAM. On my

[ceph-users] RE: RE: which SSD / experiences with Samsung 843T vs. Intel s3700

2015-09-07 Thread Межов Игорь Александрович
Hi! >Meaning you're limited to 360MB/s writes per node at best. We use Ceph as an OpenNebula RBD datastore for running VMs, so bandwidth constraints are not as important as IOPS limits. >I use 1:2 or 1:3 journals and haven't made any dent into >my 200GB S3700 yet. We started to move nodes

Re: [ceph-users] Ceph cluster NO read / write performance :: Ops are blocked

2015-09-07 Thread Vickey Singh
On Mon, Sep 7, 2015 at 7:39 PM, Lincoln Bryant wrote: > Hi Vickey, > > Thanks a lot for replying to my problem. > I had this exact same problem last week, resolved by rebooting all of my > OSD nodes. I have yet to figure out why it happened, though. I _suspect_ in > my

Re: [ceph-users] XFS and nobarriers on Intel SSD

2015-09-07 Thread Paul Mansfield
On 04/09/15 20:55, Richard Bade wrote: > We have a Ceph pool that is entirely made up of Intel S3700/S3710 > enterprise SSD's. > > We are seeing some significant I/O delays on the disks causing a “SCSI > Task Abort” from the OS. This seems to be triggered by the drive > receiving a “Synchronize

Re: [ceph-users] which SSD / experiences with Samsung 843T vs. Intel s3700

2015-09-07 Thread Jan Schermer
It is not just a question of which SSD. It's the combination of distribution (kernel version), disk controller and firmware, SSD revision and firmware. There are several ways to select hardware 1) the most traditional way where you build your BoM on a single vendor - so you buy servers

[ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Mariusz Gronczewski
Hi, over a weekend (I was on vacation, so I didn't catch exactly what happened) our OSDs started eating in excess of 6GB of RAM (well, RSS), which was a problem considering that we had only 8GB of RAM for 4 OSDs (about 700 PGs per OSD and about 70GB of space used). The resulting spam of core dumps and OOMs blocked the
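In case it is useful for anyone measuring the same thing, per-OSD resident memory and the allocator's view of it can be checked as below; the OSD id is a placeholder, and the heap commands assume the tcmalloc builds that the stock packages use.

    # Resident set size of every OSD process, largest first
    ps -C ceph-osd -o pid,rss,args --sort=-rss

    # Ask one OSD's allocator how much is really in use versus held free
    ceph tell osd.3 heap stats

    # Hand freed-but-retained pages back to the OS (often shrinks RSS noticeably)
    ceph tell osd.3 heap release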

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Jan Schermer
Apart from a bug causing this, it could be caused by the failure of other OSDs (even a temporary one) that starts backfills: 1) something fails, 2) some PGs move to this OSD, 3) this OSD has to allocate memory for all the PGs, 4) whatever failed comes back up, 5) the memory is never released. A similar

[ceph-users] RE: Ceph cache-pool overflow

2015-09-07 Thread Межов Игорь Александрович
Hi! Because the distribution is computed algorithmically with CRUSH rules. So, as with any other hash algorithm, the result will depend on the 'data' itself. In Ceph, the 'data' is the object name. Imagine that you have a simple plain hash table with 17 buckets. The bucket index is computed by a simple

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread Shinobu Kinjo
How heavy was the network traffic? Have you tried to capture the traffic on the cluster and public networks to see where such a burst of traffic came from? Shinobu - Original Message - From: "Jan Schermer" To: "Mariusz Gronczewski" Cc:

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-07 Thread 池信泽
Yeah, there is a bug which can use huge amounts of memory. It is triggered when an OSD goes down or is added into the cluster and does recovery/backfilling. The patches https://github.com/ceph/ceph/pull/5656 and https://github.com/ceph/ceph/pull/5451, merged into master, would fix it, and they will be backported. I think ceph v0.93 or

Re: [ceph-users] Ceph cluster NO read / write performance :: Ops are blocked

2015-09-07 Thread Vickey Singh
Adding ceph-users. On Mon, Sep 7, 2015 at 11:31 PM, Vickey Singh wrote: > > > On Mon, Sep 7, 2015 at 10:04 PM, Udo Lembke wrote: > >> Hi Vickey, >> > Thanks for your time in replying to my problem. > > >> I had the same rados bench output

Re: [ceph-users] Ceph monitor ip address issue

2015-09-07 Thread Chris Taylor
My monitors are only connected to the public network, not the cluster network. Only the OSDs are connected to the cluster network. Take a look at the diagram here: http://ceph.com/docs/master/rados/configuration/network-config-ref/ -Chris On 09/07/2015 03:15 AM, Willi Fehler wrote: Hi, any

Re: [ceph-users] Ceph cluster NO read / write performance :: Ops are blocked

2015-09-07 Thread Udo Lembke
Hi Vickey, I had the same rados bench output after changing the motherboard of the monitor node with the lowest IP... Due to the new mainboard, I assume the hardware clock was wrong during startup. Ceph health showed no errors, but none of the VMs were able to do IO (very high load on the VMs, but no traffic).
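A wrong hardware clock on a monitor usually surfaces as clock skew, so a quick way to test that theory is sketched below; run the checks on the suspect monitor node.

    # Monitors report skew in the health output once clocks drift past the allowed limit
    ceph health detail | grep -i 'clock skew'

    # Check NTP synchronisation and compare the hardware clock against system time
    ntpq -p
    hwclock --show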