Re: [ceph-users] All pgs stuck peering

2015-12-14 Thread Chris Dunlop
On Mon, Dec 14, 2015 at 09:29:20PM +0800, Jaze Lee wrote: > Should we add a big-packet test to the heartbeat? Right now the heartbeat > only tests small packets; if the MTU is mismatched, the heartbeat > cannot detect that. It would certainly have saved me a great deal of stress! I imagine you
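
For reference, a do-not-fragment ping is one way to verify a jumbo-frame MTU end to end; a minimal sketch, assuming Linux ping and a 9000-byte MTU (the host name is a placeholder; 8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header):

    # fails immediately if any hop along the path has a smaller MTU
    ping -M do -s 8972 -c 3 osd-node-2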

Re: [ceph-users] Fix active+remapped situation

2015-12-14 Thread Reno Rainz
Thank you for your help; while reading your answer, I realized that I had totally misunderstood how the CRUSH map algorithm and data placement work in Ceph. I fixed my issue with this new rule: "rules": [ { "rule_id": 0, "rule_name": "replicated_ruleset", "ruleset":
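
The quoted rule is cut off above; a sketch of the kind of rule that separates copies across two datacenters, assuming a standard "default" root and "datacenter" bucket type (names are not taken from the thread):

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step choose firstn 2 type datacenter
            step chooseleaf firstn 2 type host
            step emit
    }

With pool size 3 this places two copies in one DC and one in the other, instead of demanding a third datacenter.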

Re: [ceph-users] ceph-fuse and subtree cephfs mount question

2015-12-14 Thread Goncalo Borges
I think I've understood how to run it... ceph-fuse -m MON_IP:6789 -r /syd /coepp/cephfs/syd does what I want. Cheers, Goncalo On 12/15/2015 12:04 PM, Goncalo Borges wrote: Dear CephFS experts, Previously it was possible to mount a subtree of a filesystem using ceph-fuse and the -r option. In

[ceph-users] ceph-fuse and subtree cephfs mount question

2015-12-14 Thread Goncalo Borges
Dear CephFS experts, Previously it was possible to mount a subtree of a filesystem using ceph-fuse and the -r option. In Infernalis, I do not understand how that works, and I am only able to mount the full tree. 'ceph-fuse --help' does not seem to show that option, although 'man ceph-fuse'

Re: [ceph-users] All pgs stuck peering

2015-12-14 Thread Jaze Lee
Should we add a big-packet test to the heartbeat? Right now the heartbeat only tests small packets; if the MTU is mismatched, the heartbeat cannot detect that. 2015-12-14 12:18 GMT+08:00 Chris Dunlop : > On Sun, Dec 13, 2015 at 09:10:34PM -0700, Robert LeBlanc wrote: >> I've had

Re: [ceph-users] Monitor rename / recreate issue -- probing state

2015-12-14 Thread Joao Eduardo Luis
On 12/14/2015 12:41 AM, deeepdish wrote: > Perhaps I’m not understanding something.. > > The “extra_probe_peers” ARE the other working monitors in quorum out of > the mon_host line in ceph.conf. > > In the example below 10.20.1.8 = b20s08; 10.20.10.251 = smon01s; > 10.20.10.252 = smon02s > >
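
When a monitor is stuck probing, its admin socket shows how it sees its peers; a quick check along those lines, run on the node hosting the monitor (the monitor id below is just one from the thread):

    ceph daemon mon.smon01s mon_status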

[ceph-users] sync writes - expected performance?

2015-12-14 Thread Nikola Ciprich
Hello, I'm doing some measurements on a test (3-node) cluster and see a strange performance drop for sync writes. I'm using an SSD for both journaling and the OSD. It should be suitable for the journal, giving about 16.1K IOPS (67 MB/s) for sync IO. (measured using fio --filename=/dev/xxx --direct=1 --sync=1
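
The fio command above is truncated; the usual SSD journal test it resembles looks roughly like this (the device path is a placeholder, and note the write test is destructive to the device):

    fio --filename=/dev/xxx --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test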

[ceph-users] Openstack Available HDD Space

2015-12-14 Thread magicb...@hotmail.com
Hi, I think this problem has already been reported, but it's not clear to me how to resolve it. I have an OpenStack deployment with some compute nodes. The OpenStack deployment is configured to use a Ceph cluster (cinder + glance + nova ephemeral). My problem is this: the OpenStack hypervisor stats report

Re: [ceph-users] problem after reinstalling system

2015-12-14 Thread Jacek Jarosiewicz
On 12/10/2015 02:56 PM, Jacek Jarosiewicz wrote: On 12/10/2015 02:50 PM, Dan van der Ster wrote: On Wed, Dec 9, 2015 at 1:25 PM, Jacek Jarosiewicz wrote: 2015-12-09 13:11:51.171377 7fac03c7f880 -1 filestore(/var/lib/ceph/osd/ceph-5) Error initializing leveldb :

[ceph-users] Ceph RBD performance

2015-12-14 Thread Michał Chybowski
Hi, I've set up a small (5-node) Ceph cluster. I'm trying to benchmark the real-life performance of Ceph's block storage, but I'm seeing weirdly low values from my benchmark setup. My cluster consists of 5 nodes; every node has: 2 x 3TB HGST SATA drives, 1 x Samsung SM841 120GB SSD for
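
For benchmarking RBD itself rather than raw devices, fio's rbd engine is a common choice; a sketch, assuming a pool named "rbd" and a pre-created image named "test" (neither name comes from the thread):

    fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=test \
        --rw=randwrite --bs=4k --iodepth=32 --name=rbd-bench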

Re: [ceph-users] Ceph RBD performance

2015-12-14 Thread Adrien Gillard
Hi Michal, You can have a look at a thread I started a few days ago: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-December/006494.html I had some questions about performance as well, and I think the explanations apply to your case. Also, your SSD does not seem to be DC grade,

Re: [ceph-users] Openstack Available HDD Space

2015-12-14 Thread Le Quang Long
Hi, You need to configure libvirt to use Ceph as its backend. Put this config in the [libvirt] section of nova.conf: [libvirt] inject_partition=-2 inject_password = false live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST inject_key=False
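
The snippet is truncated; a fuller sketch of such a [libvirt] section, including the RBD settings that make Nova report the Ceph pool capacity (the pool name, user, and secret UUID below are placeholders, not taken from the thread):

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt-secret-uuid>
    inject_partition = -2
    inject_password = false
    inject_key = false
    live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST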

Re: [ceph-users] sync writes - expected performance?

2015-12-14 Thread Warren Wang - ISD
Which SSD are you using? The dsync flag will dramatically slow down most SSDs. You've got to be very careful about the SSD you pick. Warren Wang On 12/14/15, 5:49 AM, "Nikola Ciprich" wrote: >Hello, > >I'm doing some measurements on a test (3-node) cluster and see

Re: [ceph-users] Openstack Available HDD Space

2015-12-14 Thread magicb...@hotmail.com
Thanks, now it works!! On 14/12/15 10:10, Le Quang Long wrote: Hi, You need to configure libvirt to use Ceph as its backend. Put this config in the [libvirt] section of nova.conf: [libvirt] inject_partition=-2 inject_password = false

Re: [ceph-users] python-flask not in repo's for infernalis

2015-12-14 Thread Alfredo Deza
On a brand-new CentOS 7 box I do see python-flask coming from the extras repo: [vagrant@localhost ~]$ yum provides python-flask Loaded plugins: fastestmirror, priorities Loading mirror speeds from cached hostfile * base: mirror.teklinks.com * epel: fedora-epel.mirror.lstn.net * extras:
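
Given that, on a stock CentOS 7 box with the default repos enabled, installing it should come down to:

    yum install python-flask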

Re: [ceph-users] Possible to change RBD-Caching settings while rbd device is in use ?

2015-12-14 Thread Jason Dillaman
Sorry, none of the librbd configuration properties can be live-updated currently. -- Jason Dillaman - Original Message - > From: "Daniel Schwager" > To: "ceph-us...@ceph.com" > Sent: Friday, December 11, 2015 3:35:11 AM > Subject:
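
Since they cannot be changed live, cache settings are picked up the next time an image is opened, typically from the [client] section of ceph.conf on the client; the values below are illustrative:

    [client]
    rbd cache = true
    # 64 MB cache instead of the 32 MB default
    rbd cache size = 67108864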

Re: [ceph-users] sync writes - expected performance?

2015-12-14 Thread Mark Nelson
On 12/14/2015 04:49 AM, Nikola Ciprich wrote: Hello, I'm doing some measurements on a test (3-node) cluster and see a strange performance drop for sync writes. I'm using an SSD for both journaling and the OSD. It should be suitable for the journal, giving about 16.1K IOPS (67 MB/s) for sync IO. (measured

Re: [ceph-users] sync writes - expected performance?

2015-12-14 Thread Warren Wang - ISD
Whoops, I misread Nikola's original email, sorry! If all your SSDs are performing at that level for sync IO, then I agree that it's down to other things, like network latency and PG locking. Sequential 4K writes with 1 thread and 1 qd is probably the worst performance you'll see. Is there a
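
A quick way to see per-op write latency at queue depth 1 is rados bench with a single concurrent op; a sketch (the pool name is a placeholder):

    rados bench -p testpool 30 write -b 4096 -t 1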

[ceph-users] Debug / monitor osd journal usage

2015-12-14 Thread Mike Miller
Hi, is there a way to debug / monitor the osd journal usage? Thanks and regards, Mike
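
One place to look, as a sketch: the OSD admin socket exposes journal-related perf counters (the OSD id is illustrative):

    ceph daemon osd.0 perf dump | python -m json.tool | grep -i journal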

[ceph-users] python-flask not in repo's for infernalis

2015-12-14 Thread Kenneth Waegeman
Hi, Is there a reason python-flask is no longer in the Infernalis repo? On CentOS 7 it is still not in the standard repos or EPEL. Thanks! Kenneth

Re: [ceph-users] Fix active+remapped situation

2015-12-14 Thread Reno Rainz
Thank you for your answer, but I don't really understand what you mean. I use this map to distribute replicas across 2 different DCs, but I don't know where the mistake is. On Dec 14, 2015, 7:56 PM, "Samuel Just" wrote: > 2 datacenters. > -Sam > > On Mon, Dec 14, 2015 at

[ceph-users] Fix active+remapped situation

2015-12-14 Thread Reno Rainz
Hi, I have a functional and operational Ceph cluster (version 0.94.5) with 3 nodes (acting as MON and OSD); everything was fine. I added a 4th OSD node (same configuration as the 3 others) and now the cluster status is HEALTH_WARN (active+remapped). cluster

Re: [ceph-users] Fix active+remapped situation

2015-12-14 Thread Samuel Just
2 datacenters. -Sam On Mon, Dec 14, 2015 at 10:17 AM, Reno Rainz wrote: > Hi, > > I have a functional and operational Ceph cluster (version 0.94.5), with > 3 nodes (acting as MON and OSD); everything was fine. > > I added a 4th OSD node (same configuration as the 3

Re: [ceph-users] about federated gateway

2015-12-14 Thread Yehuda Sadeh-Weinraub
On Sun, Dec 13, 2015 at 7:27 AM, 孙方臣 wrote: > Hi, All, > > I'm setting up a federated gateway. One is the master zone, the other is the slave > zone. radosgw-agent is running in the slave zone. I have encountered some > problems; can anybody help answer these: > > 1. When putting an object

Re: [ceph-users] sync writes - expected performance?

2015-12-14 Thread Jan Schermer
Even with 10G Ethernet, the bottleneck is not the network, nor the drives (assuming they are datacenter-class). The bottleneck is the software. The only way to improve that is to either increase CPU speed (more GHz per core) or to simplify the datapath IO has to take before it is considered

Re: [ceph-users] Fix active+remapped situation

2015-12-14 Thread Samuel Just
You most likely have pool size set to 3, but your CRUSH rule requires replicas to be separated across DCs, of which you have only 2. -Sam On Mon, Dec 14, 2015 at 11:12 AM, Reno Rainz wrote: > Thank you for your answer, but I don't really understand what you mean. > > I
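
Two quick checks along those lines (the pool and rule names below are the defaults, not confirmed by the thread):

    ceph osd pool get rbd size
    ceph osd crush rule dump replicated_ruleset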

Re: [ceph-users] sync writes - expected performance?

2015-12-14 Thread Warren Wang - ISD
I get where you are coming from, Jan, but for a test this small, I still think checking network latency first for a single op is a good idea. Given that the cluster is not being stressed, CPUs may be running slow. It may also benefit the test to turn CPU governors to performance for all cores.
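
Switching governors is a per-core sysfs write; a minimal sketch, assuming the usual cpufreq sysfs layout and root privileges:

    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$g"
    done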