Sure! I will not!
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36

On Tue, 6 Aug 2019 at 10:02, Alwin Antreich via pve-user
<[email protected]> wrote:
>
> ---------- Forwarded message ----------
> From: Alwin Antreich <[email protected]>
> To: PVE User List <[email protected]>
> Date: Tue, 06 Aug 2019 15:02:00 +0200
> Subject: Re: [PVE-User] Reinstall Proxmox with Ceph storage
>
> On August 6, 2019 2:46:21 PM GMT+02:00, Gilberto Nunes
> <[email protected]> wrote:
> >WOW! This is it??? Geez! So simple... Thanks a lot
> >---
> >Gilberto Nunes Ferreira
> >(47) 3025-5907
> >(47) 99676-7530 - Whatsapp / Telegram
> >Skype: gilberto.nunes36
> >
> >On Tue, 6 Aug 2019 at 06:48, Alwin Antreich
> ><[email protected]> wrote:
> >>
> >> Hello Gilberto,
> >>
> >> On Mon, Aug 05, 2019 at 04:21:03PM -0300, Gilberto Nunes wrote:
> >> > Hi there...
> >> >
> >> > Today we have 3 servers running in an HA cluster with Ceph.
> >> > All nodes are on Proxmox 5.4.
> >> > We have a mix of 3 SAS and 3 SATA disks, but only 2 of the SAS
> >> > disks are used in the Ceph storage.
> >> > So we would like to reinstall each node on a 120 GB SSD in order
> >> > to add the third SAS disk to the SAS Ceph pool.
> >> > We have 2 pools:
> >> > SAS - which contains the 2 SAS HDDs
> >> > SATA - which contains the 3 SATA HDDs
> >> >
> >> > In general, do we need to move the disk images from the SAS pool
> >> > to the SATA pool? Or is there any other advice on how to proceed
> >> > in this case?
> >> As you have 3 nodes, you can simply do it one node at a time,
> >> assuming you are using size 3 / min_size 2 for your Ceph pools.
> >> No need to move any image.
> >>
> >> Ceph OSDs are portable, meaning that if you configure the newly
> >> installed node to be connected (and configured) to the same Ceph
> >> cluster, the OSDs should just pop in again.
> >>
> >> First deactivate HA on all nodes. Then you could try to clone the
> >> OS disk to the SSD (e.g. Clonezilla, dd). Or remove the node from
> >> the cluster (not from Ceph) and re-install it from scratch. Later
> >> on, the old SAS disk can be reused as an additional OSD.
> >>
> >> --
> >> Cheers,
> >> Alwin
>
> But don't forget to let the Ceph cluster heal first, before you start
> the next one. ;)
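
P.S. For the archive, a rough sketch of the checks Alwin describes. To
confirm that the pools really are size 3 / min_size 2 before taking a
node down (the pool names SAS and SATA are the ones from this thread,
adjust to your setup):

    ceph osd pool get SAS size        # expect: size: 3
    ceph osd pool get SAS min_size    # expect: min_size: 2
    ceph osd pool get SATA size
    ceph osd pool get SATA min_size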
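
One way to do the dd clone of the OS disk that Alwin suggests - only a
sketch, the device names /dev/sdX (old OS disk) and /dev/sdY (new SSD)
are placeholders, check them with lsblk first, and the target SSD must
be at least as large as the source OS disk:

    # copy the whole OS disk block-for-block onto the new SSD
    dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync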
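
And after a node has been re-installed and has re-joined the cluster,
something along these lines should bring the existing OSDs back and
show when it is safe to move on to the next node (assuming LVM-based
OSDs with the Ceph config/keyrings already in place on the node;
older ceph-disk based OSDs are activated differently):

    ceph-volume lvm activate --all   # scan this node and start all OSDs found on it
    ceph osd tree                    # the node's OSDs should show up as "up" again
    ceph -s                          # wait for HEALTH_OK and all PGs active+clean
                                     # before reinstalling the next node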
