On 25/10/2017 8:05 PM, Uwe Sauter wrote:
> Hi,
>
> now that 5.1 is released will there be documentation how to upgrade from 4.4?
> Is the wiki page [1] valid for 5.1?
>
>
> Did someone already try the upgrade? Any experience is appreciated.
I would also like to know what the steps are to upgrade.
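From what I understand of the 4.x wiki procedure, the in-place path is
roughly the usual Debian major-version dance. Untested for 5.1, and the
repo file names below are assumptions:

   # bring 4.4 fully up to date first
   apt update && apt dist-upgrade
   # switch the Debian and PVE repos from jessie to stretch
   sed -i 's/jessie/stretch/g' /etc/apt/sources.list \
       /etc/apt/sources.list.d/pve-enterprise.list
   apt update && apt dist-upgrade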
Hi All
Where can I find the source packages that Proxmox's Ceph Luminous was
built from?
Mike
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
On 6/02/2018 8:26 PM, Mark Schouten wrote:
> Hi,
>
> Is anyone actively using CephFS served by a Proxmox cluster? AFAICS there
> is no technical problem with configuring a Proxmox cluster to serve CephFS,
> but I'm looking for pros and cons.
>
> Thanks,
I have a cluster with a CephFS configured, I
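For what it's worth, the manual part of the setup is small; on a working
Ceph cluster it is roughly the following (pool names and PG counts are
examples, adjust for your cluster):

   ceph osd pool create cephfs_data 64
   ceph osd pool create cephfs_metadata 32
   ceph fs new cephfs cephfs_metadata cephfs_data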
On 16/02/2018 9:01 PM, Fabian Grünbichler wrote:
>
> compiling Ceph is very resource-hungry - you need a lot of RAM and disk
> space for it to succeed (roughly, from memory: a few tens of GB of RAM,
> and something like >50G of disk space)
OK, well, it needs more than 10 GB; 12 GB did not work, so I allocated 16 GB.
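If anyone else hits the RAM ceiling, reducing build parallelism should lower
the peak usage at the cost of build time; for a Debian-style package build,
something like:

   # fewer parallel compile jobs -> lower peak RAM
   DEB_BUILD_OPTIONS="parallel=2" dpkg-buildpackage -b -us -uc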
On 13/2/18 10:58 pm, Alexandre DERUMIER wrote:
> https://git.proxmox.com/?p=ceph.git;a=summary
>
Any idea how I clone the packages from that git system?
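I'm guessing it's the usual gitweb layout, so something like this (clone
URLs assumed from the repository summary page):

   git clone git://git.proxmox.com/git/ceph.git
   # or, if the git:// protocol is blocked:
   git clone https://git.proxmox.com/git/ceph.git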
Mike
Hi Guys
Where can I download the source code for the PVE kernels with their
patches (including old releases)? I want to apply a patch to fix an issue.
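I'm assuming the kernel packaging lives in the same git tree as the Ceph
packages discussed earlier, so something like this (repo name assumed):

   git clone git://git.proxmox.com/git/pve-kernel.git
   cd pve-kernel
   git tag    # older releases should be reachable via tags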
Thanks
Mike
Hi Guys
How do I remove the FAILED/OFFLINE drive from ZFS? Nothing I've tried works.
It's been replaced by the spare, but it did not auto-remove itself; an
offline command and a replace do not work.
Any ideas?
root@pve:~# zpool status
pool: rbd
state: DEGRADED
status: One or more devices has
On 3/5/19 3:59 am, Gianni Milo wrote:
> The following blog post might help you get a clearer picture of how hot
> spares work on ZFS and ultimately how to achieve your goal ...
>
>
> https://blogs.oracle.com/eschrock/zfs-hot-spares
>
Hi
I found the 'zpool detach' command a few hours after
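In case it helps anyone else searching the archive, the shape of the fix
was roughly this (the device name is a placeholder; use the FAILED entry
shown by 'zpool status'):

   # detach the dead disk; the spare then takes its place permanently
   zpool detach rbd /dev/disk/by-id/ata-FAILED-DISK-SERIAL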
On 20/8/19 12:19 am, Mark Adams wrote:
> On Mon, 19 Aug 2019 at 11:59, Uwe Sauter wrote:
>
>> Hi,
>>
>> @Eneko
>>
>> Both clusters are hyper-converged PVE clusters, each running its own Ceph
>> cluster. On the older PVE 5 cluster I created a new RBD
>> storage configuration for the PVE6 Ceph. So I
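For anyone reproducing this, an external RBD entry in /etc/pve/storage.cfg
looks roughly like the following (storage ID, pool, and monitor addresses
are placeholders):

   rbd: pve6-ceph
        monhost 192.0.2.1 192.0.2.2 192.0.2.3
        pool rbd
        username admin
        content images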
Hi All
I just finished upgrading from V5 to V6 of Proxmox and have an issue
with LXCs not starting.
The issue seems to be that the LXC is being started without first
mounting the ZFS subvolume.
This results in a dev directory being created, which then means ZFS
will not mount over it anymore.
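A quick way to confirm that theory (the dataset name is hypothetical;
substitute your own subvol):

   zfs get mounted,mountpoint rpool/data/subvol-100-disk-0
   ls /rpool/data/subvol-100-disk-0   # only a stray 'dev' dir = never mounted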
On 12/9/19 11:16 pm, Daniel Speichert wrote:
> I've had a similar problem. It was worse because I had to unmount
> everything up to the root.
>
> I think I set the datasets for machines to automount by setting a
> mountpoint attribute that was missing before.
>
> I can't recall if that was it
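For the archive, that mountpoint fix would look roughly like this (dataset
name is hypothetical; only clear the stray directory while the dataset is
unmounted):

   zfs set mountpoint=/rpool/data/subvol-100-disk-0 rpool/data/subvol-100-disk-0
   rm -r /rpool/data/subvol-100-disk-0/dev   # remove the dir blocking the mount
   zfs mount rpool/data/subvol-100-disk-0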
Hi
>
> so I guess you are either missing something,
> or something in your network/setup is not working correctly
Same here; I just finished a from-scratch rebuild from v4 to v6 and had no
problems using the GUI.
We're going to need more details from the Ceph cluster for someone to work
this out.
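Something like the output of these standard status commands would be a start:

   pveceph status
   ceph -s
   ceph health detail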
Cheers
On 19/5/20 6:23 pm, José Manuel Giner wrote:
> Hello, in the past (Proxmox v4 and v5) we used Proxmox's clustering
> features and found problems when the whole cluster would shut down:
> when we turned it back on, it wouldn't synchronize. Has this problem
> been fixed yet?
>
> Thanks!
>
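For anyone hitting this on current releases, the usual first checks after a
full power-up are quorum and the cluster filesystem (standard PVE commands;
just a sketch):

   pvecm status                            # corosync quorum / membership
   systemctl status corosync pve-cluster   # pve-cluster runs pmxcfs
   journalctl -b -u pve-cluster            # look for sync/quorum messages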