Hello
- I booted from ISO and tried to reinstall / fix GRUB - it didn't work
I guess you mean that you chrooted into the guest OS and executed
'grub-install /dev/sda' or something similar. Assuming that disk has an MBR
layout, of course, did you also check whether the boot partition was marked
active? What error did you get?
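For reference, a rough sketch of that procedure, assuming /dev/sda2 holds the
guest's root filesystem (adjust the device names to your actual layout):

mount /dev/sda2 /mnt                                      # guest root partition
for d in dev proc sys; do mount --bind /$d /mnt/$d; done  # bind the virtual filesystems
chroot /mnt
grub-install /dev/sda                                     # install to the MBR of the disk
update-grub                                               # regenerate grub.cfg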
On Thu, 30 May 2019 at 16:15, Adam Weremczuk
wrote:
> This is really what I want as both nodes will be using their own local storage
> and act as an active-passive pair (shared zfs pool).
>
I would also try drbd9/linstor on top of zfs and leverage its
active/active setup...
> AFAIK storage/migration traffic will not be performed using corosync IPs
> but primary ("hostname") IPs.
Yes, that's correct. Once you decide to separate the corosync traffic from the
rest, that link will be used only for that purpose.
Storage and live migration traffic will use the other, "primary" IPs.
Can you please post the VM config file?
Also, which type of storage are you using for storing the VM disk(s)?
It sounds strange that the event log security database gets corrupted; perhaps
RAM or disk issues?
Have you tried switching to IDE disk mode temporarily to see if the issue
is still reproducible?
On Mon, 24 Jun 2019 at 12:22, Rutger Verhoeven
wrote:
root@server5:/var/lib/vz# pvs
>   PV         VG  Fmt  Attr PSize   PFree
>   /dev/sda3  pve lvm2 a--  499.50g       0
>   /dev/sdb1  pve lvm2 a--    1.37t 371.91g
Looks like you have (mistakenly) added both /dev/sda3 and /dev/sdb1 to the
'pve' volume group.
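If that wasn't intended, here is a hedged sketch for taking /dev/sdb1 back out
of the VG; note pvmove only works if enough free extents remain on the other
PV (PFree on /dev/sda3 is 0 above, so space would have to be freed first):

pvmove /dev/sdb1          # migrate any allocated extents off the PV
vgreduce pve /dev/sdb1    # drop the PV from the volume group
pvremove /dev/sdb1        # wipe the LVM label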
Are you using LACP or Linux bonding on nodes 2 and 3 for the VM + cluster
traffic?
Are you using VLANs to separate VM/cluster traffic?
Have you checked the multicast notes in the PVE wiki? Have you tried UDPU
instead of multicast as a last option?
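One way to verify multicast between the nodes is omping, as described in the
PVE wiki; run it on all nodes at the same time (hostnames are placeholders):

omping -c 10000 -i 0.001 -F -q node1 node2 node3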
No idea about missing rrd graphs...
On Thu, 16 May
The following blog post might help you get a clearer picture of how hot
spares work on ZFS and, ultimately, how to achieve your goal ...
https://blogs.oracle.com/eschrock/zfs-hot-spares
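As a hedged example of the mechanics (pool name and devices are placeholders):

zpool add tank spare /dev/sdf         # attach a hot spare to the pool
zpool replace tank /dev/sdc /dev/sdf  # swap a failed disk for the spare
zpool detach tank /dev/sdc            # drop the failed disk once resilvered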
On Wed, 1 May 2019 at 23:22, Mike O'Connor wrote:
> Hi Guys
>
> How do I remove the FAILED/OFFLINE drive
Have you created a PV and VG to be able to actually use that space on PVE?
Once you do that, you can add that VG to PVE storage providers.
You could then add an LVM-based disk, from that storage provider, to the
FreeNAS VM.
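A minimal sketch of those steps (device, VG and storage names are
placeholders):

pvcreate /dev/sdb1
vgcreate vmdata /dev/sdb1
pvesm add lvm vmdata --vgname vmdata --content images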
P.S.
I don't think that a virtualized FreeNAS is a good idea, especially
Are you able to boot a previous kernel and see if the problem persists?
Gianni
On Wed, 10 Jul 2019 at 16:06, JR Richardson
wrote:
> Hi All,
>
> I recently upgraded a host within a 4 Node Cluster from
> pve-manager/5.2-5/eb24855a (running kernel: 4.15.18-1-pve) to
>
Hello
> I have a Synology with High Availability serving NFS storage to Proxmox.
> A couple of months ago, this started happening:
>
> Apr 24 12:32:51 proxmox-1 pvestatd[3298]: unable to activate storage
> 'NAS' - directory '/mnt/pve/NAS' does not exist or is unreachable
> Apr 24 13:08:00 proxmox-1
>
> I have 2 questions about zfs.
> Each of my four SSDs has a raw RW speed of 550MB/s,
> but the ZFS array only reaches 700MB/s (for comparison, mdadm does 1400MB/s).
> The same happens on NVMe (M.2) disks:
> a single M.2 does 2500MB/s,
> but RAID-Z1 only shows 1500MB/s per disk.
ZFS is much more complex than mdraid, so it's "normal" that raw throughput is
lower.
Well, have you enabled the agent in the properties of the VM as described
in that guide? Without doing that, the device won't appear in Device
Manager ...
qm set VMID --agent 1
Also, you mentioned that you created the VM in VirtualBox. How did you
migrate it to PVE? Did you remove the
> We want to be able to clone a template from a shared storage to a local
> storage on the node. I don’t see why that would not be technically
> possible, it is just copying one file from one disk to another.
When you right-click on the template -> "Clone" and choose "Mode: Full
Clone", it
> I want to "full clone" a template from a shared storage on a node (NFS) to
> the local storage on the same node (SSD), and the local storage is not
> listed in the target storage drop-down box. Take a look at the attached
> image. Maybe it's important to say this was with PVE 5.1. If anything
I haven't moved to ZFS 0.8 yet, but you could get similar results by enabling
compression on ZFS and by periodically executing fstrim (Linux) or sdelete
(Windows) inside the VMs to reclaim unused space.
On Linux VMs, adding "discard" to the fstab mount options (ext4) may have
similar results as executing fstrim manually.
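A hedged sketch of those pieces (the dataset name is a placeholder):

zfs set compression=lz4 rpool/data   # on the host
fstrim -av                           # inside a Linux guest
sdelete -z C:                        # inside a Windows guest, zeroes free space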
> What I'm
> not clear on is whether I need both to be 0.8.
>
> Also, I assume this can be done retrospectively? So if I upgrade, then I
> can run fstrim and it will clear the space on the host...? Maybe that
> question is better posed to the zfsonlinux list though...
>
> Cheers,
> Mark
>
All involved layers, from the guest OS down to the actual backing storage,
must participate for trim/discard to work.
For more information on this, have a look at the documentation notes...
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_hard_disk_discard
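As a hedged example, discard can be enabled per disk from the CLI (the VM ID
and volume name are placeholders; the guest should also use a controller that
supports it, e.g. VirtIO SCSI):

qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on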
Regards,
G.
On Thu,
I don't have experience with OVH, but depending on the situation, it might be
possible to "transfer" the existing OS installation to a ZFS backed setup.
To do so, you will need an additional hdd: partition it,
create a ZFS pool on it with the proper datasets (rpool/ROOT/pve-1 etc.), set
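A very rough sketch of the first steps, assuming /dev/sdb1 is the spare disk
(pool layout and options are placeholders, and boot loader setup still has to
follow):

zpool create -o ashift=12 rpool /dev/sdb1
zfs create rpool/ROOT
zfs create -o mountpoint=/mnt rpool/ROOT/pve-1
rsync -aAXx --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' \
      --exclude='/run/*' --exclude='/mnt/*' / /mnt/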
Check if you can find something useful in here ...
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/
On Fri, 25 Oct 2019 at 09:59, Marco Gaiarin wrote:
>
> Ok, it is a bit late to P2V a XP box, but...
>
>
> P2V done (on PVE 5.4), now the VMs ask at every boot
>
> > Have you tested the performance of your ssd zpool from the command line
> on
> > the host?
>
>
> Do you mean vzperf ?
>
I think he means doing ZFS performance tests on the host itself rather than
inside the VM. This is in order to rule out the possibility that the slow
performance is caused by the virtualization layer.
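For example, a hedged sequential-write test directly on the pool's mountpoint
(assuming fio is installed and the pool is mounted at /tank):

fio --name=seqwrite --filename=/tank/fio.test --rw=write --bs=1M --size=4G --ioengine=psync
rm /tank/fio.test   # clean up the test file afterwards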
Could the following be the reason ...? zvols are supported as far as
I can tell...
Limitations
- not possible to sync recursively
G.
On Tue, 3 Mar 2020 at 12:30, Roland @web.de wrote:
> hello,
>
> apparently pve-zsync does not seem to replicate zfs zvols (but only
> regular zfs
>
> So ... Let's suppose I created 5 VMs of about 200GB each on a 1TB space.
> I'm using less than 50% on each VM and now I need to add a 6th VM? Even
> though I'm using thin storage, I will have no free space.
> What can I do?
When using thin LVM, a VM initially allocates the same amount of data as the
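To see how much space the thin pool actually holds, and to hand unused blocks
back, a hedged sketch ('pve/data' is the default thin pool name, and discard
must be enabled on the VM disks):

lvs pve/data   # the Data% column shows real thin pool usage
fstrim -av     # run inside the guests to release deleted blocks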
Have you tried booting from recovery media, first checking the reported
size within fdisk, and then trying to fsck the disk (this may lead to some
data loss by itself)? Is the drive/partition mountable at all? If not,
then you may have to try repairing its qcow2/raw file. There are some
guides on
Things I would check or modify...
- output of the 'pvecm status' and 'pvecm nodes' commands.
- syslog on each node for any clues.
- NTP.
- separate cluster (corosync) network from storage network (i.e. in your
case, use --link2, LAN).
G.
On Sat, 25 Jan 2020 at 15:44, Frank Thommen
wrote:
> Dear all,
>
> I
The first thing that comes to my mind when having only 2 nodes in the cluster
is that perhaps the cluster is not quorate. I would check that first and
maybe restart the related services...
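A hedged sketch of what that check could look like (the 'expected 1' override
lowers the vote count and should only be a temporary measure):

pvecm status      # look at the 'Quorate' flag
pvecm expected 1  # temporary workaround if one of the two nodes is gone
systemctl restart pve-cluster corosync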
G.
On Tue, 28 Jan 2020 at 09:40, Dmytro O. Redchuk via pve-user <
pve-user@pve.proxmox.com> wrote:
To: PVE User List
> Date: Tue, 28 Jan 2020 12:35:08 +0200
> Subject: Re: [PVE-User] Config/Status commands stopped to respond
> On Tue, 28 Jan 2020 at 10:26, Gianni Milo wrote:
> > First thing that comes to my mind when having only 2 nodes in the cluster
If it's happening randomly, my best guess would be that it's related
to high I/O during the time frame in which the backup takes place.
Have you tried creating multiple backup schedules which take place at
different times? Setting backup bandwidth limits might also help.
Check the PVE
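A hedged example of a bandwidth cap (value in KiB/s; pick one of the two):

echo 'bwlimit: 51200' >> /etc/vzdump.conf    # node-wide default
vzdump 100 --bwlimit 51200 --mode snapshot   # per backup job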
This has been discussed in the past, see the post below for some answers...
https://www.mail-archive.com/pve-user@pve.proxmox.com/msg10160.html
On Fri, 14 Feb 2020 at 22:57, Frank Thommen
wrote:
> Dear all,
>
> the PVE documentation on
>
Hello,
See comments below...
> vmbr0 is on a 2x1Gbit bond0
> Ceph public and private are on 2x10Gbit bond2
> Backup network is IPv6 on 2x1Gbit bond1, to a Synology NAS.
>
Where's the cluster (corosync) traffic flowing? On vmbr0? It would be a good
idea to split that off as well if possible (perhaps by
This should be a case of removing (or commenting out) its corresponding entry
in /etc/pve/storage.cfg and then removing it using the usual LVM
commands.
You can then "convert" it to thick LVM and re-add it to the config file...
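A hedged sketch, assuming the default 'pve' VG and 'data' thin pool (this
destroys the pool and every volume on it, so back up first; the storage name
is a placeholder):

lvremove pve/data                                               # remove the thin pool
pvesm add lvm pve-thick --vgname pve --content images,rootdir   # re-add the VG as thick LVM storage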
Gianni
On Thu, 16 Jan 2020 at 18:26, Frank Thommen
wrote:
> Dear
Does issuing 'udevadm trigger' help?
On Fri, 10 Jan 2020 at 12:44, Frank Thommen
wrote:
> Dear all,
>
> after having (successfully) imported two KVM disk images from oVirt, LVM
> and pvesm complain about some udev initialization problem:
>
> root@pve01:~# pvesm status
> WARNING: Device
>
> Is this a step that normally has to be executed after having imported a
> disk image? If yes, then this could perhaps be added to
> https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Qemu.2FKVM
> .
>
> Cheers
> frank
>
>
> On 1/10/20 3:32 PM,
Maybe you can find some clues in the syslogs from around the time it goes
down? Check for either enp1s0 or vmbr0, for example.
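For example (a hedged one-liner; interface names as above):

journalctl -k | grep -E 'enp1s0|vmbr0'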
G.
On Thu, 16 Apr 2020 at 14:32, Gerald Brandt wrote:
> Hi,
>
> I have a Proxmox 6 server at SoYouStart https://www.soyoustart.com. I've
> been running a server there for
People usually run more than one VM/CT per host, so allocating fewer
CPU cores and less RAM per VM/CT than the host can support is considered a
"normal" thing to do.
On the other hand, if you plan on using just one VM/CT on a single host,
nothing is preventing you from allocating all CPU cores and
Everything related to PCI and USB passthrough can be found in the wiki.
Have a look and see if you can find anything useful in there.
Good luck...
https://pve.proxmox.com/wiki/Pci_passthrough
https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines
On Tue, 31 Mar 2020 at 17:22, Sivakumar
Go to Datacenter -> Storage and add a new (NFS) storage. Set the "content"
to include "ISO image". Go to the VM properties and select the ISO image
from this storage location instead of "local:".
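The same can be done from the CLI; a hedged sketch (server, export path,
storage name, VM ID and ISO name are all placeholders):

pvesm add nfs nas-iso --server 192.168.1.50 --export /volume1/iso --content iso
qm set 100 --ide2 nas-iso:iso/debian-10.iso,media=cdrom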
On Sun, 29 Mar 2020 at 19:24, Alarig Le Lay wrote:
> Hi,
>
> If an ISO is configured on a VM
I don't own this hardware either, but I was able to compile its kernel modules
(.ko) from the source package which is provided on the Mellanox web site.
I used a test PVE (VM) with pve-kernel and pve-headers installed on it.
Then I extracted the source package (tar.gz) to a temp location and
executed
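A rough outline of what that build could look like (package and archive names
are placeholders; the exact install step depends on what Mellanox ships):

apt install build-essential pve-headers-$(uname -r)
tar xzf mlnx-en-*.tgz && cd mlnx-en-*
./install.sh    # or 'make && make install', depending on the package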
> is anybody using qcow2 on zfs in production at a larger scale or someone
> wants to share his thoughts/experience with using qcow2 on zfs ?
I would not use qcow2 images on a ZFS dataset. I would prefer raw images
instead, because the overhead is lower and you can snapshot the VMs at the
ZFS level.
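For example (the dataset name is a placeholder for a VM disk volume):

zfs snapshot rpool/data/vm-100-disk-0@before-upgrade   # point-in-time snapshot
zfs rollback rpool/data/vm-100-disk-0@before-upgrade   # revert to it later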