[ceph-users] ceph-volume sizing osds

2019-12-13 Thread Oscar Segarra
Hi, I have recently started working with the Ceph Nautilus release and I have realized that you have to work with LVM to create OSDs instead of the "old-fashioned" ceph-disk. In terms of performance and best practices, since I must use LVM, I can create volume groups that join or extend
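
A minimal sketch of what this boils down to on a Nautilus-era node, assuming a spare device /dev/sdb (the volume group and logical volume names below are invented for illustration):

    # let ceph-volume build the VG/LV itself from a whole device
    ceph-volume lvm create --data /dev/sdb

    # or create the LVM layout by hand and pass the LV to ceph-volume
    vgcreate ceph-block-0 /dev/sdb
    lvcreate -n osd-block-0 -l 100%FREE ceph-block-0
    ceph-volume lvm create --data ceph-block-0/osd-block-0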

Re: [ceph-users] What if etcd is lost

2019-07-16 Thread Oscar Segarra
the behavior of the system when some pieces could fail. Thanks a lot! Óscar On Tue, Jul 16, 2019, 18:23, Janne Johansson wrote: > On Tue, 16 Jul 2019 at 18:15, Oscar Segarra wrote >: > >> Hi Paul, >> That is the initial question, is it possible to recover my ceph cluster &

Re: [ceph-users] What if etcd is lost

2019-07-16 Thread Oscar Segarra
store 2.- There is an electric blackout and all nodes of my cluster go down and all data in my etcd is lost (but my osd disks have useful data) Thanks a lot Óscar On Tue, Jul 16, 2019, 17:58, Oscar Segarra wrote: > Thanks a lot Janne, > > Well, maybe I'm misunderstanding how ce

Re: [ceph-users] What if etcd is lost

2019-07-16 Thread Oscar Segarra
ged=true \ --pid=host \ -v /dev/:/dev/ \ -e OSD_DEVICE=/dev/vdd \ -e KV_TYPE=etcd \ -e KV_IP=192.168.0.20 \ ceph/daemon osd Thanks a lot for your help, Óscar On Tue, Jul 16, 2019, 17:34, Janne Johansson wrote: > On Mon, 15 Jul 2019 at 23:05, Oscar Segarra wrote >: > &g
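
The command being quoted appears to be a ceph-container style invocation; a hedged reconstruction (only the device, KV settings and image come from the preview; the rest, including the completion of the truncated --privileged flag, is an assumption):

    docker run -d --privileged=true --pid=host \
      -v /dev/:/dev/ \
      -e OSD_DEVICE=/dev/vdd \
      -e KV_TYPE=etcd \
      -e KV_IP=192.168.0.20 \
      ceph/daemon osd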

Re: [ceph-users] What if etcd is lost

2019-07-15 Thread Oscar Segarra
(eg. a mon node) against a running cluster with mons in > quorum. > > Best regards, > > = > Frank Schilder > AIT Risø Campus > Bygning 109, rum S14 > > > From: ceph-users on behalf of Oscar > Segarra >

[ceph-users] What if etcd is lost

2019-07-15 Thread Oscar Segarra
Hi, I'm planning to deploy a Ceph cluster using etcd as the KV store. I'm planning to deploy a stateless etcd Docker container to store the data. I'd like to know if the Ceph cluster will be able to boot when the etcd container restarts (and loses all data written to it). If the etcd container restarts when the ceph
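
If the worry is etcd forgetting its contents on restart, one way to avoid a fully stateless store is to bind-mount a data directory into the container; a minimal sketch assuming a single-member etcd and a hypothetical host path /srv/etcd (image tag and addresses are illustrative):

    docker run -d --name etcd -p 2379:2379 \
      -v /srv/etcd:/etcd-data \
      quay.io/coreos/etcd:v3.3.12 \
      /usr/local/bin/etcd \
        --data-dir /etcd-data \
        --listen-client-urls http://0.0.0.0:2379 \
        --advertise-client-urls http://192.168.0.20:2379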

Re: [ceph-users] Luminous v12.2.2 released

2017-12-05 Thread Oscar Segarra
I have executed "yum upgrade -y ceph" on each node and everything has worked fine... 2017-12-05 16:19 GMT+01:00 Florent B : > Upgrade procedure is OSD or MON first ? > > There was a change on Luminous upgrade about it. > > > On 01/12/2017 18:34, Abhishek Lekshmanan wrote: >

Re: [ceph-users] features required for live migration

2017-11-14 Thread Oscar Segarra
e for putting a clustered file system (or > similar) on top of the block device. For the vast majority of cases, you > shouldn't enable this in libvirt. > > [1] https://libvirt.org/formatdomain.html#elementsDisks > > On Tue, Nov 14, 2017 at 10:49 AM, Oscar Segarra <oscar.sega...@g

Re: [ceph-users] features required for live migration

2017-11-14 Thread Oscar Segarra
see in the KVM. I'd like to know the suggested configuration for rbd images and live migration. [image: Inline image 1] Thanks a lot. 2017-11-14 16:36 GMT+01:00 Jason Dillaman <jdill...@redhat.com>: > On Tue, Nov 14, 2017 at 10:25 AM, Oscar Segarra <oscar.sega...@gmail.com>

Re: [ceph-users] features required for live migration

2017-11-14 Thread Oscar Segarra
locking should not interfere with live-migration. I > have > > a small virtualization cluster backed by ceph/rbd and I can migrate all > the > > VMs whose RBD images have exclusive-lock enabled without any issue. > > > > > > > > On 11/14/2017 9:47 AM, Oscar Se

Re: [ceph-users] features required for live migration

2017-11-14 Thread Oscar Segarra
Hi, I'm including Jason Dillaman, the creator of this post, http://tracker.ceph.com/issues/15000, in this thread. Thanks a lot 2017-11-14 12:47 GMT+01:00 Oscar Segarra <oscar.sega...@gmail.com>: > Hi Konstantin, > > Thanks a lot for your advice... > > I'm especially interested

Re: [ceph-users] features required for live migration

2017-11-14 Thread Oscar Segarra
rbd image at the same time. It looks like by enabling exclusive locking you can enable some other interesting features like "Object map" and/or "Fast diff" for backups. Thanks a lot! 2017-11-14 12:26 GMT+01:00 Konstantin Shalygin <k0...@k0ste.ru>: > On 11/14/2017 06:19 PM, O

Re: [ceph-users] features required for live migration

2017-11-14 Thread Oscar Segarra
your librbd? > > > On 11/14/2017 05:39 PM, Oscar Segarra wrote: > >> At the moment, I'm deploying and therefore I can upgrade every >> component... I have recently executed "yum upgrade -y" in order to update >> all operating system components. >> >> A

Re: [ceph-users] HW Raid vs. Multiple OSD

2017-11-14 Thread Oscar Segarra
Hi Anthony, > I think you might have some misunderstandings about how Ceph works. Ceph is best deployed as a single cluster spanning multiple servers, generally at least 3. Is that your plan? I want to deploy servers for 100 Windows 10 VDIs each (at least 3 servers). I plan to sell servers

Re: [ceph-users] features required for live migration

2017-11-14 Thread Oscar Segarra
Hi, Yes, but it looks like lots of features, such as snapshot and fast-diff, require some other features... If I enable exclusive-locking or journaling, will live migration still be possible? Is it recommended to set the KVM disk "shareable" depending on the activated features? Thanks a lot! 2017-11-14 4:52
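
For context, the image features under discussion are toggled per image with the rbd CLI; a hedged sketch, with pool and image names invented for illustration (exclusive-lock is a prerequisite for object-map and fast-diff):

    rbd feature enable rbd/vm-disk exclusive-lock
    rbd feature enable rbd/vm-disk object-map fast-diff
    rbd info rbd/vm-disk    # verify the resulting feature set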

Re: [ceph-users] HW Raid vs. Multiple OSD

2017-11-13 Thread Oscar Segarra
18:40, "Brady Deetz" <bde...@gmail.com> escribió: On Nov 13, 2017 11:17 AM, "Oscar Segarra" <oscar.sega...@gmail.com> wrote: Hi Brady, Thanks a lot again for your comments and experience. This is a departure from what I've seen people do here. I agre

Re: [ceph-users] HW Raid vs. Multiple OSD

2017-11-13 Thread Oscar Segarra
RAID5 + 1 Ceph daemon as 8 Ceph daemons. I appreciate your comments a lot! Oscar Segarra 2017-11-13 15:37 GMT+01:00 Marc Roos <m.r...@f1-outsourcing.eu>: > > Keep in mind also if you want to have fail over in the future. We were > running a 2nd server and were replicating vi

Re: [ceph-users] HW Raid vs. Multiple OSD

2017-11-13 Thread Oscar Segarra
do that probably. > > But for some workloads, like RBD, ceph doesn't balance out the workload > very evenly for a specific client, only many clients at once... raid might > help solve that, but I don't see it as worth it. > > I would just software RAID1 the OS and mons, and mds, not

[ceph-users] HW Raid vs. Multiple OSD

2017-11-13 Thread Oscar Segarra
Hi, I'm designing my infrastructure. I want to provide 8 TB (8 disks x 1 TB each) of data per host, just for Microsoft Windows 10 VDI. In each host I will have storage (ceph osd) and compute (on KVM). I'd like to hear your opinion about these two configurations: 1.- RAID5 with 8 disks (I will
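
The alternative being weighed elsewhere in this thread (eight separate Ceph OSD daemons instead of one on RAID5) would look roughly like this with the Luminous-era tooling; the hostname and device names are illustrative, and newer ceph-deploy releases use a --data syntax instead:

    for dev in sdb sdc sdd sde sdf sdg sdh sdi; do
        ceph-deploy osd create vdicnode01:$dev    # one OSD daemon per physical disk
    done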

Re: [ceph-users] features required for live migration

2017-11-10 Thread Oscar Segarra
Hi, Does anybody have experience with the live migration features? Thanks a lot in advance. Óscar Segarra On Nov 7, 2017, 14:02, "Oscar Segarra" <oscar.sega...@gmail.com> wrote: > Hi, > > In my environment I'm working with a 3 node ceph cluster based on Centos 7 &g

[ceph-users] features required for live migration

2017-11-07 Thread Oscar Segarra
Hi, In my environment I'm working with a 3-node ceph cluster based on CentOS 7 and KVM. My VM is a clone of a protected snapshot, as suggested in the following document: http://docs.ceph.com/docs/luminous/rbd/rbd-snapshot/#getting-started-with-layering I'd like to use the live migration
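
The layering workflow referenced in the linked document comes down to a protected snapshot plus thin clones; a minimal sketch, with pool, image and snapshot names invented for illustration:

    rbd snap create rbd/win10-base@gold        # snapshot the prepared base image
    rbd snap protect rbd/win10-base@gold       # protect it so it cannot be removed
    rbd clone rbd/win10-base@gold rbd/vdi-001  # thin clone backing one VM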

Re: [ceph-users] Backup VM (Base image + snapshot)

2017-10-19 Thread Oscar Segarra
:00 Richard Hesketh <richard.hesk...@rd.bbc.co.uk>: > On 16/10/17 03:40, Alex Gorbachev wrote: > > On Sat, Oct 14, 2017 at 12:25 PM, Oscar Segarra <oscar.sega...@gmail.com> > wrote: > >> Hi, > >> > >> In my VDI environment I have con

Re: [ceph-users] Backup VM (Base image + snapshot)

2017-10-15 Thread Oscar Segarra
nt to point out that you > might be using the documentation for an older version of Ceph: > > On 10/14/2017 12:25 PM, Oscar Segarra wrote: > > > > http://docs.ceph.com/docs/giant/rbd/rbd-snapshot/ > > > > If you're not using the 'giant' version of Ceph (which has r

[ceph-users] Backup VM (Base image + snapshot)

2017-10-14 Thread Oscar Segarra
Hi, In my VDI environment I have configured the suggested ceph design/architecture: http://docs.ceph.com/docs/giant/rbd/rbd-snapshot/ where I have a Base Image + Protected Snapshot + 100 clones (one for each persistent VDI). Now, I'd like to configure a backup script/mechanism to perform
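
One common building block for such a script is incremental export against snapshots; a hedged sketch, with image and snapshot names invented for illustration:

    rbd snap create rbd/vdi-001@bak-20171014
    rbd export rbd/vdi-001@bak-20171014 /backup/vdi-001-full.img    # first run: full copy
    # later runs only ship the delta between two snapshots
    rbd snap create rbd/vdi-001@bak-20171015
    rbd export-diff --from-snap bak-20171014 rbd/vdi-001@bak-20171015 /backup/vdi-001-inc.diff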

Re: [ceph-users] How to distribute data

2017-09-04 Thread Oscar Segarra
Hi, For the VDI (Windows 10) use case... is there any document about the recommended configuration with rbd? Thanks a lot! 2017-08-18 15:40 GMT+02:00 Oscar Segarra <oscar.sega...@gmail.com>: > Hi, > > Yes, you are right, the idea is cloning a snapshot taken from the base > im

Re: [ceph-users] CephFS: mount fs - single point of failure

2017-08-29 Thread Oscar Segarra
; 10.1.40.11,10.1.40.12,10.1.40.13:/cephfs1 > > > > > > > > > > From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of LOPEZ > Jean-Charles <jelo...@redhat.com> > Date: Monday, August 28, 2017 at 3:40 PM > To: Oscar Segar

[ceph-users] CephFS: mount fs - single point of failure

2017-08-28 Thread Oscar Segarra
Hi, In Ceph, by design, there is no single point of failure in terms of server roles; nevertheless, from the client point of view, it might exist. In my environment: Mon1: 192.168.100.101:6789 Mon2: 192.168.100.102:6789 Mon3: 192.168.100.103:6789 Client: 192.168.100.104 I have created a line in
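
The usual answer on the client side is to list every monitor in the mount source, so losing one address is not fatal; a minimal sketch using the monitors above (the authentication options are assumptions):

    mount -t ceph 192.168.100.101:6789,192.168.100.102:6789,192.168.100.103:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret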

Re: [ceph-users] How to distribute data

2017-08-18 Thread Oscar Segarra
d with a version snapshot that you clone each time you need to let someone log in. Is that what you're planning? On Thu, Aug 17, 2017, 9:51 PM Christian Balzer <ch...@gol.com> wrote: > > Hello, > > On Fri, 18 Aug 2017 03:31:56 +0200 Oscar Segarra wrote: > > > Hi

Re: [ceph-users] How to distribute data

2017-08-17 Thread Oscar Segarra
your responses. > > On Thu, Aug 17, 2017, 7:41 PM Oscar Segarra <oscar.sega...@gmail.com> > wrote: > >> Thanks a lot David!!! >> >> Let's wait for Christian's opinion about the suggested configuration for >> VDI... >> >> Óscar Segarr

[ceph-users] How to distribute data

2017-08-17 Thread Oscar Segarra
Hi, Sorry guys, during these days I'm asking a lot about how to distribute my data. I have two kinds of VMs: 1.- Management VMs (linux) --> Full SSD dedicated disks 2.- Windows VMs --> SSD + HDD (with tiering). I'm working on installing two clusters on the same host but I'm encountering lots of
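
Since Luminous, one way to keep the two kinds of VMs on different media without running two clusters is CRUSH device classes; a hedged sketch (rule names, pool names and PG counts are invented for illustration):

    ceph osd crush rule create-replicated ssd-rule default host ssd
    ceph osd crush rule create-replicated hdd-rule default host hdd
    ceph osd pool create mgmt-vms 128 128 replicated ssd-rule
    ceph osd pool create win-vms 512 512 replicated hdd-rule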

[ceph-users] ceph luminous: error in manual installation when security enabled

2017-08-16 Thread Oscar Segarra
Hi, As the ceph-deploy utility does not work properly with named clusters (other than the default "ceph"), in order to have a named cluster I have created the monitor using the manual procedure: http://docs.ceph.com/docs/master/install/manual-deployment/#monitor-bootstrapping In the end, it starts up
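
For reference, the manual bootstrapping steps linked above accept a --cluster argument; a heavily condensed sketch, assuming a cluster named vdiccluster, a monitor host vdicnode01 at 192.168.100.101, and an fsid already generated (all of these names are illustrative):

    ceph-authtool --create-keyring /tmp/vdiccluster.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    monmaptool --create --add vdicnode01 192.168.100.101 --fsid <fsid> /tmp/monmap
    ceph-mon --cluster vdiccluster --mkfs -i vdicnode01 --monmap /tmp/monmap --keyring /tmp/vdiccluster.mon.keyring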

[ceph-users] error: cluster_uuid file exists with value

2017-08-15 Thread Oscar Segarra
Hi, After adding a new monitor to the cluster I'm getting a strange error: vdicnode02/store.db/MANIFEST-86 succeeded,manifest_file_number is 86, next_file_number is 88, last_sequence is 8, log_number is 0,prev_log_number is 0,max_column_family is 0 2017-08-15 22:00:58.832599 7f6791187e40 4

Re: [ceph-users] Two mons

2017-08-15 Thread Oscar Segarra
ours while I re-provisioned the >> third and nothing funky happened. >> >> Most ways to deploy a cluster allow you to create the cluster with 3+ >> mons at the same time (inital_mons). What are you doing that only allows >> you to add one at a time? >> &

Re: [ceph-users] Two mons

2017-08-15 Thread Oscar Segarra
t are you doing that only allows you to > add one at a time? > > On Tue, Aug 15, 2017 at 12:22 PM Oscar Segarra <oscar.sega...@gmail.com> > wrote: > >> Hi, >> >> I'd like to test and script the adding monitors process adding one by one >> monitors to the ceph i

[ceph-users] Two mons

2017-08-15 Thread Oscar Segarra
Hi, I'd like to test and script the process of adding monitors one by one to the ceph infrastructure. Is it possible to have two mons running on two servers (one mon each)? --> I can assume that mon quorum won't be reached until both servers are up. Is this right? I have not been
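
On the quorum question: monitors need a strict majority, so with exactly two monitors in the map both must be up (one out of two is not a majority). A hedged sketch of adding the second monitor and checking quorum, reusing host names from the other threads:

    ceph-deploy mon add vdicnode02
    ceph quorum_status --format json-pretty    # confirm both mons appear in the quorum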

Re: [ceph-users] Two clusters on same hosts - mirroring

2017-08-14 Thread Oscar Segarra
<jdill...@redhat.com>: > Personally, I didn't quite understand your use-case. You only have a > single host and two drives (one for live data and the other for DR)? > > On Mon, Aug 14, 2017 at 4:09 PM, Oscar Segarra <oscar.sega...@gmail.com> > wrote: > > Hi, >

Re: [ceph-users] Two clusters on same hosts - mirroring

2017-08-14 Thread Oscar Segarra
Hi, Has anybody been able to work with mirroring? Does the scenario I'm proposing make any sense? Thanks a lot. 2017-08-08 20:05 GMT+02:00 Oscar Segarra <oscar.sega...@gmail.com>: > Hi, > > I'd like to use the mirroring feature > > http://docs.ceph.com/docs/master/rbd/rbd

[ceph-users] Two clusters on same hosts - mirroring

2017-08-08 Thread Oscar Segarra
Hi, I'd like to use the mirroring feature http://docs.ceph.com/docs/master/rbd/rbd-mirroring/ In my environment I have just one host (at the moment, for testing purposes before production deployment). I want to use: /dev/sdb for standard operation /dev/sdc for mirror Of course, I'd like
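
For completeness, the linked mirroring setup boils down to roughly the following commands; pool, image and cluster names are invented, and with two clusters on one host every command needs the right --cluster argument:

    # images to be mirrored must have the journaling feature enabled
    rbd --cluster primary mirror pool enable vmpool pool
    rbd --cluster backup mirror pool enable vmpool pool
    rbd --cluster backup mirror pool peer add vmpool client.admin@primary
    rbd-mirror --cluster backup    # the daemon that pulls changes into the backup cluster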

Re: [ceph-users] Networking/naming doubt

2017-07-28 Thread Oscar Segarra
d probably leave your /etc/hosts in this state. I don't know if it would work though. It's really not intended for any communication to happen on this subnet other than inter-OSD traffic. On Thu, Jul 27, 2017 at 6:31 PM Oscar Segarra <oscar.sega...@gmail.com> wrote: > Sorry! I'd like

Re: [ceph-users] Networking/naming doubt

2017-07-27 Thread Oscar Segarra
Sorry! I'd like to add that I want to use the cluster network for both purposes: ceph-deploy --username vdicceph new vdicnode01 --cluster-network 192.168.100.0/24 --public-network 192.168.100.0/24 Thanks a lot 2017-07-28 0:29 GMT+02:00 Oscar Segarra <oscar.sega...@gmail.com>: > Hi, &g

Re: [ceph-users] Error in boot.log - Failed to start Ceph disk activation - Luminous

2017-07-27 Thread Oscar Segarra
ed around it by disabling ceph-disk. > The osds can start without it. > > On Thu, Jul 27, 2017 at 3:36 PM Oscar Segarra <oscar.sega...@gmail.com> > wrote: > >> Hi, >> >> First of all, my version: >> >> [root@vdicnode01 ~]# ceph -v >> ceph version

[ceph-users] Networking/naming doubt

2017-07-27 Thread Oscar Segarra
Hi, In my environment I have 3 hosts, and each host has 2 network interfaces: public: 192.168.2.0/24 cluster: 192.168.100.0/24 The hostnames "vdicnode01", "vdicnode02" and "vdicnode03" are resolved by public DNS through the public interface, which means that "ping vdicnode01" will resolve
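
For reference, the two subnets described here map onto two ceph.conf options; a minimal sketch of the relevant [global] lines:

    [global]
        public network = 192.168.2.0/24      # client, mon and mds traffic
        cluster network = 192.168.100.0/24   # OSD replication and heartbeat traffic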

[ceph-users] Error in boot.log - Failed to start Ceph disk activation - Luminous

2017-07-27 Thread Oscar Segarra
Hi, First of all, my version: [root@vdicnode01 ~]# ceph -v ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800) luminous (rc) When I boot my ceph node (I have an all-in-one) I get the following message in boot.log: [FAILED] Failed to start Ceph disk activation: /dev/sdb2. See

Re: [ceph-users] Luminous: ceph mgr create error - mon disconnected

2017-07-23 Thread Oscar Segarra
* mds allow * > -o /var/lib/ceph/mgr/ceph-nuc1/keyring > [nuc1][ERROR ] 2017-07-23 14:51:13.413218 7f62943cc700 0 librados: > client.bootstrap-mgr authentication error (22) Invalid argument > [nuc1][ERROR ] InvalidArgumentError does not take keyword arguments > [nuc1][ERROR ] exit code

[ceph-users] Luminous: ceph mgr create error - mon disconnected

2017-07-22 Thread Oscar Segarra
Hi, I have upgraded from the Kraken version with a simple "yum upgrade" command. After the upgrade, I'd like to deploy the mgr daemon on one node of my ceph infrastructure. But, for some reason, it gets stuck! Let's see the complete set of commands: [root@vdicnode01 ~]# ceph -s cluster: id:
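
The authentication error quoted in the reply above is typical when the client.bootstrap-mgr key is missing after an upgrade from Kraken; a hedged sketch of the kind of repair that applies (the keyring path and the follow-up ceph-deploy calls are assumptions):

    # regenerate the bootstrap-mgr key on a node that has an admin keyring
    ceph auth get-or-create client.bootstrap-mgr mon 'allow profile bootstrap-mgr' \
        -o /var/lib/ceph/bootstrap-mgr/ceph.keyring
    # then retry the deployment
    ceph-deploy gatherkeys vdicnode01
    ceph-deploy mgr create vdicnode01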

Re: [ceph-users] Ceph kraken: Calamari Centos7

2017-07-20 Thread Oscar Segarra
ou look at the > docs, make sure you are looking at the proper version of the docs for your > version. Replace master, jewel, luminous, etc with kraken in the URL. > > On Thu, Jul 20, 2017, 5:33 AM Oscar Segarra <oscar.sega...@gmail.com> > wrote: > >> Hi, >> >>

Re: [ceph-users] Ceph kraken: Calamari Centos7

2017-07-20 Thread Oscar Segarra
>> so my guess is that pretty much Calamari is dead. >> >> On Thu, Jul 20, 2017 at 4:28 AM, Oscar Segarra <oscar.sega...@gmail.com> >> wrote: >> >>> Hi, >>> >>> Anybody has been able to setup Calamari on Centos7?? >>> >&g

[ceph-users] Ceph kraken: Calamari Centos7

2017-07-19 Thread Oscar Segarra
Hi, Has anybody been able to set up Calamari on CentOS 7? I've done a lot of Googling but I haven't found any good documentation... The command "ceph-deploy calamari connect" does not work! Thanks a lot for your help!

[ceph-users] Ceph-Kraken: Error installing calamari

2017-07-18 Thread Oscar Segarra
Hi, I have created a VM called vdiccalamari where I'm trying to install the Calamari server in order to view the ceph status from a GUI: [vdicceph@vdicnode01 ceph]$ sudo ceph status cluster 656e84b2-9192-40fe-9b81-39bd0c7a3196 health HEALTH_OK monmap e2: 1 mons at

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Oscar Segarra
d allow * mds allow * >> -o /var/lib/ceph/mgr/ceph-nuc2/keyring >> [nuc2][ERROR ] 2017-07-14 17:17:21.800166 7fe344f32700 0 librados: >> client.bootstrap-mgr authentication error (22) Invalid argument >> [nuc2][ERROR ] (22, 'error connecting to the cluster') >> [nuc

[ceph-users] admin_socket error

2017-07-10 Thread Oscar Segarra
Hi, My lab environment has just one node, for testing purposes. As the ceph user (with sudo privileges granted) I have executed the following commands in my environment: ceph-deploy install vdicnode01 ceph-deploy --cluster vdiccephmgmtcluster new vdicnode01 --cluster-network 192.168.100.0/24