Re: [PVE-User] Upgrade 4.4 to 5.1

2017-10-25 Thread Mike O'Connor
On 25/10/2017 8:05 PM, Uwe Sauter wrote:
> Hi,
>
> now that 5.1 is released, will there be documentation on how to upgrade from 4.4?
> Is the wiki page [1] valid for 5.1?
>
> Did someone already try the upgrade? Any experience is appreciated.

I also would like to know what the steps are to upgrade
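For reference, the documented 4.x to 5.x in-place upgrade boils down to roughly the following. This is only a sketch of the wiki procedure, assuming the no-subscription repository; verify against the current wiki page before running anything:

```shell
# Bring the 4.4 installation fully up to date first
apt-get update && apt-get dist-upgrade

# Switch the Debian sources from jessie to stretch
sed -i 's/jessie/stretch/g' /etc/apt/sources.list

# Point the Proxmox repository at the PVE 5.x branch
# (pve-no-subscription assumed; adjust if you use the enterprise repo)
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
    > /etc/apt/sources.list.d/pve.list

# Perform the distribution upgrade, then reboot into the new kernel
apt-get update && apt-get dist-upgrade
```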

[PVE-User] CEPH Luminous source packages

2018-02-12 Thread Mike O'Connor
Hi All

Where can I find the source packages that the Proxmox Ceph Luminous packages were built from?

Mike
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

Re: [PVE-User] CephFS on Proxmox cluster

2018-02-10 Thread Mike O'Connor
On 6/02/2018 8:26 PM, Mark Schouten wrote:
> Hi,
>
> Is anyone actively using CephFS served by a Proxmox cluster? AFAICS there is
> no technical problem with configuring a Proxmox cluster to serve CephFS, but I'm
> looking for pros and cons.
>
> Thanks,

I have a cluster with CephFS configured, I

Re: [PVE-User] Building CEPH from Proxmox source

2018-02-16 Thread Mike O'Connor
On 16/02/2018 9:01 PM, Fabian Grünbichler wrote:
>
> compiling ceph is very resource hungry - you need a lot of RAM and disk
> space for it to succeed (roughly from memory: a few tens of GB of RAM, and
> something like >50G of disk space)

Ok, well it needs more than 10; 12 did not work, so I allocated 16
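One knob that helps keep peak memory in check on smaller build hosts is limiting the number of parallel compile jobs. A sketch, assuming the usual Debian packaging workflow the Proxmox Ceph packages use:

```shell
# Fewer parallel jobs trade longer build time for lower peak RAM usage
export DEB_BUILD_OPTIONS="parallel=4"

# Build binary packages without signing (the build tree must be the
# unpacked Ceph source package)
dpkg-buildpackage -b -uc -us
```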

Re: [PVE-User] CEPH Luminous source packages

2018-02-15 Thread Mike O'Connor
On 13/2/18 10:58 pm, Alexandre DERUMIER wrote:
> https://git.proxmox.com/?p=ceph.git;a=summary

Any idea how I clone the packages from that git system?

Mike
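The gitweb summary page corresponds to a plain git repository, so cloning should be straightforward. A sketch; the clone URL is an assumption based on the standard gitweb layout of git.proxmox.com:

```shell
# Clone the Proxmox Ceph packaging repository
git clone git://git.proxmox.com/git/ceph.git
cd ceph

# Releases are typically tagged; list tags to locate the Luminous builds
git tag
```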

[PVE-User] Source code for Kernel with patches

2019-05-16 Thread Mike O'Connor
Hi Guys

Where can I download the source code for the PVE kernels with their patches (including old releases)? I want to apply a patch to fix an issue.

Thanks
Mike
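The kernel packaging, including the applied patches, lives in its own repository on the Proxmox git server. A sketch; the repository name is assumed from the gitweb listing at git.proxmox.com:

```shell
# Clone the PVE kernel packaging repository (sources plus patches)
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel

# Older releases are reachable through the history and tags
git log --oneline | head
git tag
```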

[PVE-User] ZFS failed drive and spare

2019-05-01 Thread Mike O'Connor
Hi Guys

How do I remove the FAILED/OFFLINE drive from ZFS? Nothing I've tried works. It's been replaced by the spare, but it did not auto-remove itself; an offline command and a replace do not work. Any ideas?

root@pve:~# zpool status
  pool: rbd
 state: DEGRADED
status: One or more devices has

Re: [PVE-User] ZFS failed drive and spare

2019-05-02 Thread Mike O'Connor
On 3/5/19 3:59 am, Gianni Milo wrote:
> The following blog post might help you getting a clearer picture on how hot
> spares work on zfs and ultimately how to achieve your goal ...
>
> https://blogs.oracle.com/eschrock/zfs-hot-spares

Hi, I found the 'zpool detach' command a few hours after
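For the archives, the detach step looks like this. A sketch: `rbd` is the pool name from the earlier `zpool status` output, and the device name is a hypothetical placeholder:

```shell
# Once the spare has resilvered in, detach the failed device;
# the spare then takes its place in the vdev permanently
zpool detach rbd <failed-device>

# Confirm the pool is back to a healthy layout
zpool status rbd
```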

Re: [PVE-User] Move VM's HDD incl. snapshots from one Ceph to another

2019-08-20 Thread Mike O'Connor
On 20/8/19 12:19 am, Mark Adams wrote:
> On Mon, 19 Aug 2019 at 11:59, Uwe Sauter wrote:
>
>> Hi,
>>
>> @Eneko
>>
>> Both clusters are hyper-converged PVE clusters, each running its own Ceph
>> cluster. On the older PVE 5 cluster I created a new RBD
>> storage configuration for the PVE6 Ceph. So I

[PVE-User] LXC not starting after V5 to V6 upgrade using ZFS for storage

2019-09-12 Thread Mike O'Connor
Hi All

I just finished upgrading from V5 to V6 of Proxmox and have an issue with LXCs not starting. The issue seems to be that the LXC is being started without first mounting the ZFS subvolume. This results in a dev directory being created, which then means ZFS will not mount over it anymore

Re: [PVE-User] LXC not starting after V5 to V6 upgrade using ZFS for storage

2019-09-13 Thread Mike O'Connor
On 12/9/19 11:16 pm, Daniel Speichert wrote:
> I've had a similar problem. It was worse because I had to unmount
> everything up to the root.
>
> I think I set the datasets for machines to automount by setting a
> mountpoint attribute that was missing before.
>
> I can't recall if that was it
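A minimal sketch of the workaround described in this thread. The container ID and dataset path are hypothetical; the key point is clearing the stale directory a failed start leaves behind, then ensuring the dataset has a mountpoint and is mounted before the container starts:

```shell
# Stop the container, then remove the stray dev directory created
# by the failed start (the dataset path is an example)
pct stop 101
rm -rf /rpool/data/subvol-101-disk-0/dev

# Make sure the dataset has its mountpoint set and is mounted
zfs set mountpoint=/rpool/data/subvol-101-disk-0 rpool/data/subvol-101-disk-0
zfs mount rpool/data/subvol-101-disk-0

pct start 101
```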

Re: [PVE-User] Trouble Creating CephFS UPDATE

2019-10-15 Thread Mike O'Connor
Hi

> so i guess you are either missing something,
> or something in your network/setup is not working correctly

Same here: I just finished a from-scratch rebuild from v4 to v6 and had no problems using the GUI. We're going to need more details from the Ceph cluster for someone to work this out.

Cheers

Re: [PVE-User] Recovering cluster after shutdown

2020-05-19 Thread Mike O'Connor
On 19/5/20 6:23 pm, José Manuel Giner wrote:
> Hello, in the past (Proxmox v4 and v5) we've used Proxmox's clustering
> features and found problems when the whole cluster would shut down:
> when we turned it back on, it wouldn't synchronize. Has this problem
> been fixed yet?
>
> Thanks!