Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Gilles Mocellin
On 07/05/2014 15:23, Vlad Gorbunov wrote: It's easy to install tgtd with ceph support. Ubuntu 12.04 for example: connect the ceph-extras repo: echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-extras.list Install tgtd with rbd
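
A rough sketch of the rest of that setup (the target ID, IQN and pool/image below are invented examples, not taken from the thread):

    sudo apt-get update
    sudo apt-get install tgt    # the ceph-extras build ships the rbd backing store

    # export an existing RBD image over iSCSI
    sudo tgtadm --lld iscsi --mode target --op new --tid 1 \
         --targetname iqn.2014-05.com.example:rbd-demo
    sudo tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
         --bstype rbd --backing-store rbd/demo-image
    sudo tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL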

Re: [ceph-users] Ceph with VMWare / XenServer

2014-05-13 Thread Gilles Mocellin
On 12/05/2014 15:45, Uwe Grohnwaldt wrote: Hi, yes, we use it in production. I can stop/kill the tgt on one server and XenServer goes to the second one. We enabled multipathing in XenServer. In our setup we don't have multiple IP ranges, so we scan/log in to the second target on XenServer startup
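
For reference, scanning and logging in to the second target from a plain open-iscsi initiator looks roughly like this (portal IPs and IQN are placeholders; XenServer wraps the same steps in its own tooling):

    iscsiadm -m discovery -t sendtargets -p 192.0.2.11
    iscsiadm -m discovery -t sendtargets -p 192.0.2.12
    iscsiadm -m node -T iqn.2014-05.com.example:rbd-demo -p 192.0.2.11 --login
    iscsiadm -m node -T iqn.2014-05.com.example:rbd-demo -p 192.0.2.12 --login
    multipath -ll    # both paths should show up in one map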

Re: [ceph-users] Public/Cluster addr how to

2013-10-17 Thread Gilles Mocellin
On 17/10/2013 11:06, NEVEU Stephane wrote: Hi list, I'm trying to figure out how I can set up 3 defined cluster IPs and 3 other public IPs on my 3-node cluster with ceph-deploy (Ubuntu raring, stable). Here are my IPs for the public network: 172.23.5.101, 172.23.5.102, 172.23.5.103
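
The usual way to get that split is to declare both subnets in ceph.conf before creating the monitors; a sketch (the cluster subnet and the per-daemon pinning are invented examples, only the public addresses come from the thread):

    [global]
    public network  = 172.23.5.0/24
    cluster network = 192.168.100.0/24

    # optionally pin exact addresses per daemon
    [osd.0]
    public addr  = 172.23.5.101
    cluster addr = 192.168.100.101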

Re: [ceph-users] Ceph Performance MB/sec

2013-12-01 Thread Gilles Mocellin
On 01/12/2013 15:22, German Anders wrote: [...] ceph@ceph-deploy01:/mnt/ceph-btrfs-test$ for i in 1 2 3 4; do sudo dd if=/dev/zero of=./a bs=1M count=1000; done Hello, You should really write anything but zeros. I suspect that nothing is really written to disk, especially on btrfs, a COW
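
A way to redo that test with data that cannot be optimised away (file names and sizes are arbitrary) is to pre-generate incompressible input and write it with O_DIRECT plus a final fsync:

    dd if=/dev/urandom of=/tmp/random.bin bs=1M count=1000
    dd if=/tmp/random.bin of=./a bs=1M count=1000 oflag=direct conv=fsync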

Re: [ceph-users] Openstack--instance-boot-from-ceph-volume:: error could not open disk image rbd

2013-12-06 Thread Gilles Mocellin
On 05/12/2013 14:01, Karan Singh wrote: Hello everyone, trying to boot from a ceph volume using the blog http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/ and http://docs.openstack.org/user-guide/content/boot_from_volume.html I need help with this error.
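
Two generic checks often suggested for a "could not open disk image rbd" error (not necessarily the fix found in this thread): verify that the qemu build on the compute node supports rbd, and that the libvirt secret referenced by cinder exists:

    qemu-img --help | grep -i rbd    # rbd should appear among the supported formats
    virsh secret-list                # should list the rbd_secret_uuid used by cinder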

Re: [ceph-users] Performance questions (how original, I know)

2013-12-20 Thread Gilles Mocellin
On 20/12/2013 03:51, Christian Balzer wrote: Hello Mark, On Thu, 19 Dec 2013 17:18:01 -0600 Mark Nelson wrote: On 12/16/2013 02:42 AM, Christian Balzer wrote: Hello, Hi Christian! New to Ceph, not new to replicated storage. Simple test cluster with 2 identical nodes running Debian

Re: [ceph-users] How to deploy ceph with a Debian version other than stable (Hello James Page ^o^)

2014-01-08 Thread Gilles Mocellin
On 08/01/2014 02:46, Christian Balzer wrote: It is what it is. As in, sid (unstable) and testing are named jessie/sid in /etc/debian_version, including a notebook of mine that has been sid (as in /etc/apt/sources.list) for 10 years. This naming convention (next_release/sid) has been in place
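
A minimal workaround when tooling trips over the jessie/sid codename is to hard-code a supported codename in the repository line instead of relying on lsb_release (the release path below is only an example):

    cat /etc/debian_version    # shows e.g. jessie/sid on testing/unstable
    lsb_release -sc            # what the repo line would otherwise pick up
    echo deb http://ceph.com/debian-emperor/ wheezy main | \
        sudo tee /etc/apt/sources.list.d/ceph.list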

Re: [ceph-users] Fluctuating I/O speed degrading over time

2014-03-07 Thread Gilles Mocellin
On 07/03/2014 10:50, Indra Pramana wrote: Hi, I have a Ceph cluster, currently with 5 OSD servers and around 22 OSDs with SSD drives, and I noticed that the I/O speed, especially write access to the cluster, is degrading over time. When we first started the cluster, we could get up to 250-300
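
To follow the degradation with something repeatable, a common baseline is a rados bench run repeated at intervals (pool name and parameters are placeholders):

    rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
    rados bench -p testpool 60 seq -t 16
    rados -p testpool cleanup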

Re: [ceph-users] Running Ceph issues: HEALTH_WARN, unknown auth protocol, others

2013-05-02 Thread Gilles Mocellin
On 01/05/2013 18:23, Wyatt Gorman wrote: Here is my ceph.conf. I just figured out that the second host = isn't necessary, though it is like that in the 5-minute quick start guide... (Perhaps I'll submit the couple of fixes that I've had to implement so far.) That fixes the redefined host
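
For reference, a monitor section from that kind of quick-start configuration only needs a single host line; hostname and address below are invented placeholders:

    [mon.a]
    host = ceph-node1
    mon addr = 192.0.2.21:6789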

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread Gilles Mocellin
On 11/07/2013 12:08, Tom Verdaat wrote: Hi guys, we want to use our Ceph cluster to create a shared-disk file system to host VMs. Our preference would be to use CephFS, but since it is not considered stable I'm looking into alternatives. The most appealing alternative seems to be to
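
The shared-disk approach under discussion boils down to mapping one RBD image on every hypervisor and formatting it with a cluster filesystem; a rough sketch (pool, image, size and slot count are placeholders, and the o2cb cluster stack must already be configured):

    rbd create vmstore/shared --size 2048000
    rbd map vmstore/shared                                # on every hypervisor
    mkfs.ocfs2 -N 4 -L vmstore /dev/rbd/vmstore/shared    # once, with enough node slots
    mount /dev/rbd/vmstore/shared /var/lib/vms            # on every hypervisor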

Re: [ceph-users] Large storage nodes - best practices

2013-08-06 Thread Gilles Mocellin
On 06/08/2013 02:57, James Harper wrote: In the previous email, you are forgetting RAID1 has a write penalty of 2 since it is mirroring, and now we are talking about different types of RAID and nothing really to do with Ceph. One of the main advantages of Ceph is to have data replicated, so
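
Spelled out, the write-amplification comparison goes roughly like this (a pool size of 3 is just the common default):

    RAID1 under each OSD, pool size 3:  3 replicas x 2 mirrored writes = 6 disk writes per client write
    plain disks, pool size 3:           3 replicas x 1 write           = 3 disk writes per client write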

Re: [ceph-users] ceph-mon runs on 6800 not 6789.

2013-09-05 Thread Gilles Mocellin
On 03/09/2013 14:56, Joao Eduardo Luis wrote: On 09/03/2013 02:02 AM, 이주헌 wrote: Hi all. I have 1 MDS and 3 OSDs. I installed them via ceph-deploy (dumpling 0.67.2). At first, it worked perfectly. But after I rebooted one of the OSDs, ceph-mon launched on port 6800, not 6789. This has
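
Generic commands to see which address and port the monitor actually registered (not specific to this report):

    ceph mon dump                          # each mon's registered ip:port
    ss -lntp | grep ceph-mon               # what the daemon is really listening on
    grep 'mon.*addr' /etc/ceph/ceph.conf   # any explicit mon addr override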

[ceph-users] ceph-deploy depends on sudo

2013-09-06 Thread Gilles Mocellin
... Thank you devs for your work! -- Gilles Mocellin Nuage Libre

Re: [ceph-users] newbie question: rebooting the whole cluster, powerfailure

2013-09-06 Thread Gilles Mocellin
at the interfaces' traffic (with bwm-ng) I see that the cluster network is now used. (You can also look at established connections with ss.) -- Gilles Mocellin Nuage Libre
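
The two checks mentioned there, roughly (the interface name is a placeholder):

    bwm-ng -I eth1             # per-interface throughput on the cluster network interface
    ss -tnp | grep ceph-osd    # established replication connections between OSDs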

Re: [ceph-users] ceph-deploy depends on sudo

2013-09-06 Thread Gilles Mocellin
On 06/09/2013 17:37, Alfredo Deza wrote: On Fri, Sep 6, 2013 at 11:17 AM, Gilles Mocellin gilles.mocel...@nuagelibre.org wrote: Perhaps it's worth a bug report, or some changes in ceph-deploy: I've just deployed some test clusters with ceph-deploy on Debian Wheezy. I had errors with ceph
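
The usual preparation on a minimal Debian target node, sketched with an invented deploy user, is:

    apt-get install sudo
    echo 'cephdeploy ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cephdeploy
    chmod 0440 /etc/sudoers.d/cephdeploy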

Re: [ceph-users] problem with ceph-deploy hanging

2013-09-17 Thread Gilles Mocellin
+= http_proxy https_proxy ftp_proxy no_proxy Hope it can help. -- Gilles Mocellin Nuage Libre
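
The truncated line above is the tail of a sudoers tweak; reconstructed as an assumption, the full line would look like:

    Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"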

Re: [ceph-users] bluestore lvm scenario confusion

2018-07-21 Thread Gilles Mocellin
On Saturday, 21 July 2018 at 15:56:31 CEST, Satish Patel wrote: > I am trying to deploy ceph-ansible with the lvm osd scenario, reading > http://docs.ceph.com/ceph-ansible/master/osds/scenarios.html > > I have all SSD disks and I don't have a separate journal; my plan was to > keep WAL/DB on the same
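
With everything on a single SSD there is nothing special to declare: bluestore keeps WAL and DB on the data device unless told otherwise. The equivalent manual call, outside ceph-ansible, would be roughly (device name is a placeholder):

    ceph-volume lvm create --bluestore --data /dev/sdb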

[ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-08 Thread Gilles Mocellin
Hello Cephers! Having read that, since Luminous, EC pools are supported for writable RBD pools, I decided to use them in a new OpenStack cloud deployment. The gain in storage is really noticeable, and I want to reduce the storage cost. So I decided to use ceph-ansible to deploy the
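
For reference, the bare commands behind such a setup since Luminous (profile name, pool names, k/m and PG counts are arbitrary examples):

    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create volumes-data 128 128 erasure ec42
    ceph osd pool set volumes-data allow_ec_overwrites true
    ceph osd pool application enable volumes-data rbd
    ceph osd pool create volumes 64 64 replicated    # image metadata stays on a replicated pool
    rbd create volumes/test --size 10G --data-pool volumes-data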

Re: [ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-10 Thread Gilles Mocellin
On 2018-07-10 06:26, Konstantin Shalygin wrote: Has anyone used EC pools with OpenStack in production? By chance, I found this link: https://www.reddit.com/r/ceph/comments/72yc9m/ceph_openstack_with_ec/ Yes, this is a good post. My configuration is: cinder.conf:
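
The pattern referenced there keeps cinder and glance pointed at their replicated pools and pushes the data objects to the EC pools through a client-side rbd option; a sketch (pool and client names are assumptions):

    # /etc/ceph/ceph.conf on the OpenStack nodes
    [client.cinder]
    rbd default data pool = volumes-data

    [client.glance]
    rbd default data pool = images-data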

[ceph-users] [ceph-ansible] create EC pools

2018-09-24 Thread Gilles Mocellin
Hello Cephers, I am using ceph-ansible v3.1.5 to build a new Mimic Ceph cluster for OpenStack. I want to use erasure coding for certain pools (images, cinder backups, cinder for one additional backend, rgw data...). The examples in group_vars/all.yml.sample don't show how to specify an erasure
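
Whatever the ceph-ansible variable syntax turns out to be, the EC pools themselves can always be created by hand before the playbook run; for example, for the default RGW data pool (profile parameters, PG counts and pool name follow common defaults):

    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create default.rgw.buckets.data 128 128 erasure ec42
    ceph osd pool application enable default.rgw.buckets.data rgw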