--- Begin Message ---
The simplest thing to check is also to make sure you are using writeback
cache in your VMs with Ceph. It makes a huge difference in performance.
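For example, something like this on the host (a rough sketch; the VMID, storage
name and disk options are placeholders - keep whatever options your disk already
has and just add cache=writeback):

  # show the current disk line, then re-set it with cache=writeback added
  qm config 100 | grep scsi0
  qm set 100 --scsi0 ceph_storage:vm-100-disk-0,discard=on,size=32G,cache=writeback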
On Wed, 10 Jun 2020, 07:31 Eneko Lacunza, wrote:
> Hi Marco,
>
> On 9/6/20 at 19:46, Marco Bellini wrote:
> > Dear All,
> >
--- Begin Message ---
Sivakumar - This is a "known issue" as far as I am aware, usually when you
are allocating quite a bit of memory (although 16G is not a lot in your
case; maybe the server doesn't have much RAM?) when starting a VM with
a PCI device passed through to it. It also only seems
--- Begin Message ---
Have you enabled IOMMU in the BIOS? Assuming your server hardware supports
it?
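A quick way to check from the host, roughly (the exact flag depends on your
CPU: intel_iommu=on for Intel, amd_iommu=on for AMD):

  # see whether the kernel detected an IOMMU
  dmesg | grep -e DMAR -e IOMMU

  # if nothing shows, add the kernel parameter and reboot, e.g. in
  # /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
  update-grub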
On Fri, 15 May 2020 at 15:03, Sivakumar SARAVANAN <
sivakumar.saravanan.jv@valeo-siemens.com> wrote:
> Hello,
>
> I am unable to add the PCI device to VMs, where I am getting the below error
>
000,mac=3E:DE:DC:87:CE:75,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'
> -rtc 'driftfix=slew,base=localtime' -machine 'type=pc+pve1' -global
> 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1
>
>
> Best regards
> SK
>
>
> On Thu, May 14, 2020 at 6:37 PM Mark Adams via pv
> SK
>
> On Thu, May 14, 2020 at 6:20 PM Mark Adams via pve-user <
> pve-user@pve.proxmox.com> wrote:
>
> >
> >
> >
> > -- Forwarded message --
> > From: Mark Adams
> > To: PVE User List
> > Cc:
> > Bcc:
> >
--- Begin Message ---
Do you really need hugepages? If not, disable them.
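Roughly, on the host (the VMID is a placeholder; this just checks whether the
option is set on the VM and removes it):

  qm config <vmid> | grep hugepages
  qm set <vmid> --delete hugepages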
On Thu, 14 May 2020 at 17:17, Sivakumar SARAVANAN <
sivakumar.saravanan.jv@valeo-siemens.com> wrote:
> Hello Daniel,
>
> Thanks for coming back.
>
> I mean, I am unable to power ON the VM until I shut down the other VMs in
--- Begin Message ---
As Eneko already said, this really sounds like a network problem - if your
hosts lose connectivity to each other they will reboot themselves, and it
sounds like this is what happened to you.
Are you sure there have been no changes to your network around the time this
--- Begin Message ---
Hi All,
I am having the issue that is detailed in this forum post:
https://forum.proxmox.com/threads/vm-start-timeout-with-pci-gpu.45843/
I thought I would take it to the mailing list to see if anyone here has any
ideas?
VMs boot fine the first time the machine starts
--- Begin Message ---
Is the data inside the VMs different? Maybe the data on the bigger one is
not as compressible?
On Wed, 11 Mar 2020, 08:07 Renato Gallo via pve-user, <
pve-user@pve.proxmox.com> wrote:
>
>
>
> -- Forwarded message --
> From: Renato Gallo
> To: g noto
> Cc:
box option is only available on ZFS storage as I
guess you have the option for both, whereas lvmthin should always be thin?
On Thu, 5 Mar 2020 at 16:43, Mark Adams wrote:
> Thin provisioning is set on the storage, it is a checkbox and of course it
> has to be a storage type that can be thin
--- Begin Message ---
Atila - just to follow up on Gianni's discard notes, depending on what OS
and filesystems you use inside of your VMs, you may need to run fstrim,
mount with different options, or run specific commands (e.g. zpool trim for
ZFS) to get it all working correctly.
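For example, inside a Linux guest (a rough sketch; the pool name is a
placeholder, and zpool trim needs ZFS 0.8 or newer):

  # ext4/xfs and similar: trim all mounted filesystems that support it
  fstrim -av

  # ZFS inside the guest: trim the pool explicitly, or enable autotrim
  zpool trim tank
  zpool set autotrim=on tank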
Regards,
Mark
On
andro.
>
>
>
>
> On Thu, 5 Mar 2020 at 13:44, Mark Adams via pve-user (<
> pve-user@pve.proxmox.com>) wrote:
>
> >
> >
> >
> > -- Forwarded message --
> > From: Mark Adams
> > To: PVE User List
> > Cc:
--- Begin Message ---
Thin provisioning is set on the storage; it is a checkbox, and of course it
has to be a storage type that can be thin provisioned (e.g. lvmthin, ZFS,
Ceph, etc.).
Then every virtual disk that is created on that storage type is thin
provisioned.
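As a rough illustration, the equivalent in /etc/pve/storage.cfg for a ZFS pool
storage (names are placeholders; "sparse 1" is what the thin provisioning
checkbox sets):

  zfspool: local-zfs
          pool rpool/data
          content images,rootdir
          sparse 1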
Regards,
Mark
On Thu, 5 Mar 2020,
and clearing this up for me.
Cheers,
Mark
On Fri, 8 Nov 2019 at 15:48, Thomas Lamprecht
wrote:
> On 11/8/19 4:22 PM, Mark Adams wrote:
> > I didn't configure it to do
> > this myself, so is this an automatic feature? Everything I have read says
> > it should be configure
Hi All,
This cluster is on 5.4-11.
This is most probably a hardware issue with either the UPS or server PSUs, but
I wanted to check if there is any default watchdog or auto reboot in a
Proxmox HA cluster.
Explanation of what happened:
All servers have redundant PSUs, being fed from separate UPSes in
There is a WD Blue SSD - but it is a desktop drive; you probably shouldn't
use it in a server.
Are you using the virtio-scsi blockdev and the newest virtio drivers? Also,
have you tried with writeback enabled?
Have you tested the performance of your SSD zpool from the command line on
the host?
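Something along these lines would give you a baseline (just a sketch; the
dataset path, file size and fio flags are placeholders to adapt):

  # sequential and random write tests against a file on the pool
  fio --name=seqwrite --filename=/yourpool/fio-test --size=4G --bs=1M \
      --rw=write --ioengine=libaio --iodepth=16
  fio --name=randwrite --filename=/yourpool/fio-test --size=4G --bs=4k \
      --rw=randwrite --ioengine=libaio --iodepth=32
  rm /yourpool/fio-test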
Thanks for your responses Thomas and Fabian.
On Fri, 27 Sep 2019 at 09:37, Fabian Grünbichler
wrote:
> On September 27, 2019 10:30 am, Mark Adams wrote:
> > Hi All,
> >
> > I'm trying out one of these new processors, and it looks like I need at
> > least 5.2
Hi All,
I'm trying out one of these new processors, and it looks like I need at
least a 5.2 kernel to get some support, preferably 5.3.
At present the machine will boot into Proxmox, but IOMMU does not work,
and I can see ECC memory is not working.
So my question is, what's the recommended way to
Is it potentially an issue with having the same pool name on 2 different
Ceph clusters?
Is there a vm-112-disk-0 on vdisks_cluster2?
On Fri, 6 Sep 2019, 12:45 Uwe Sauter, wrote:
> Hello Alwin,
>
> > On 06.09.19 at 11:32, Alwin Antreich wrote:
> > Hello Uwe,
> >
> > On Fri, Sep 06, 2019 at
port-diff and import-diff to send just the changes between snapshots
though in case it doesn't and you want to build it up from the earliest
snapshot.
Sorry that's not more exact!
Regards,
Mark
>
> Regards,
>
> Uwe
>
>
>
> On 19.08.19 at 12:30, Mark Adams wrote
It is relatively straightforward using the CLI; use the rbd export (and
export-diff) commands over SSH.
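Roughly like this (a sketch only; pool, image and hostnames are placeholders):

  # full copy of an image to the other cluster over SSH
  rbd export vdisks/vm-100-disk-0 - | \
      ssh root@other-cluster-node 'rbd import - vdisks/vm-100-disk-0'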
On Mon, 19 Aug 2019, 12:26 Eneko Lacunza, wrote:
> Hi Uwe,
>
> On 19/8/19 at 10:14, Uwe Sauter wrote:
> > is it possible to move a VM's disks from one Ceph cluster to another,
> including
Do you need your storage to be highly available? This also has a bearing
on what might be a good solution.
If you don't need HA, then a ZFS-backed Linux server sharing NFS would be
very straightforward. But even better would be using the ZFS over iSCSI
features of Proxmox so that each of your
ession enabled.
> Then I would run zfs trim command inside the VM and see if the space is
> reclaimed back on the host.
>
> Note: fstrim command only works on specific filesystems, not in zfs.
>
> Gianni
>
>
>
> On Tue, 9 Jul 2019 at 21:01, Mark Adams wrote:
>
> >
x.com/wiki/Shrink_Qcow2_Disk_Files
>
> Gianni
>
>
> On Tue, 9 Jul 2019 at 09:49, Mark Adams wrote:
>
> > Hi All,
> >
> > Currently having an issue on a few servers where more space is being
> "used"
> > in the host (zfs), than is actually
Hi All,
Currently having an issue on a few servers where more space is being "used"
in the host (ZFS) than is actually being used inside the VM. Discard is
enabled, but ZFS 0.7 does not have support for it.
ZFS 0.8 has brought in discard support, so I was wondering if anyone else
has upgraded
s://forum.proxmox.com/threads/windows-vm-fails-to-shutdown-from-proxmox-web.54233/#post-249883
>
> By the way, somebody should fix it.
> I got the solution from Redhat.
>
>
> On Mon, May 13, 2019 at 6:21 PM Mark Adams wrote:
>
> > So the appropriate thing for you to d
> > > However, the guest agent from the latest stable VirtIO ISO is *not* the
> > > latest one. You may need the version from
> > >
> >
> https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-qemu-ga/
> > >
> > > Bye,
I haven't tried 2019 as yet, but Windows 10 and 2016 work fine for me.
Make sure you're following this correctly!
https://pve.proxmox.com/wiki/Qemu-guest-agent
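On the host side it is roughly this (the VMID is a placeholder; the agent
itself still has to be installed and running inside the guest):

  # enable the agent option on the VM, then power-cycle it
  qm set <vmid> --agent enabled=1

  # once the guest is back up, this should answer if the agent is working
  qm agent <vmid> ping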
Regards
On Mon, 13 May 2019 at 12:52, Saint Michael wrote:
> I have not been able to shut down from Proxmox a Windows VM from any
>
would love to have your input.
Cheers,
Mark
On Wed, 8 May 2019 at 11:53, Mark Adams wrote:
>
>
> On Wed, 8 May 2019 at 11:34, Alwin Antreich
> wrote:
>
>> On Wed, May 08, 2019 at 09:34:44AM +0100, Mark Adams wrote:
>> > Thanks for getting back to me Alwin. See my re
On Wed, 8 May 2019 at 11:34, Alwin Antreich wrote:
> On Wed, May 08, 2019 at 09:34:44AM +0100, Mark Adams wrote:
> > Thanks for getting back to me Alwin. See my response below.
> >
> >
> > I have the same size and count in each node, but I have had a disk
>
Thanks for getting back to me Alwin. See my response below.
On Wed, 8 May 2019 at 08:10, Alwin Antreich wrote:
> Hello Mark,
>
> On Tue, May 07, 2019 at 11:26:17PM +0100, Mark Adams wrote:
> > Hi All,
> >
> > I would appreciate a little pointer or clarification on
interesting:
> > # qm config VMID
> >
> >
> >> Serial driver? Don't have any odd devices showing up in the
> device list
> >>
> >>
> >>
> >> On 4/24/2019 2:02 PM, Mark Adams wrote:
> >>> Haven't tried this myself
Haven't tried this myself, but have you updated the qemu-agent and serial
driver to check it's not that?
On Wed, 24 Apr 2019, 18:59 David Lawley, wrote:
> I know, its an oldie, but.. Windows Server 2003
>
> But since moving it to PVE 5.4 (from 3.4) it does not reboot/restart on
> its own. You
Why don't you just rename the zpool so they match?
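Roughly (a sketch; the pool names are placeholders, and the pool has to be
exportable, i.e. nothing on it in use, while you do this):

  zpool export oldname
  zpool import oldname newname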
On Fri, 15 Mar 2019, 09:10 Fabrizio Cuseo, wrote:
> Hello Gianni.
> I wrote in my email that pve-zsync is not suitable for my need (redundancy
> with VM migration from one host to another).
>
> Fabrizio
>
> - On 15 Mar 2019, at 10:08,
For posterity, I sorted this by speaking with ASRock Rack and getting an
as-yet unreleased BIOS that has the ARI forwarding option. Enabled this and
all is working now.
On Fri, 1 Mar 2019 at 13:20, Dominik Csapak wrote:
> On 01.03.19 14:13, Mark Adams wrote:
> > On Fri, 1 Mar 2019
On Fri, 1 Mar 2019 at 12:52, Dominik Csapak wrote:
> On 01.03.19 13:37, Mark Adams wrote:
> > Hi All,
> >
> > I'm trying this out, based on the wiki post and the forum posts:
> >
> >
> https://forum.proxmox.com/threads/amd-s7150-mxgpu-with-proxmox-ve-5-x.50
Hi All,
I'm trying this out, based on the wiki post and the forum posts:
https://forum.proxmox.com/threads/amd-s7150-mxgpu-with-proxmox-ve-5-x.50464/
https://pve.proxmox.com/wiki/MxGPU_with_AMD_S7150_under_Proxmox_VE_5.x
However, I'm having issues getting the gim driver working. Was just
.nunes36
>
>
>
>
>
> On Wed, 28 Nov 2018 at 22:24, Mark Adams
> wrote:
>
> > As long as you have access to the iSCSI storage from all nodes in the
> > cluster then why not?
> >
> > On Wed, 28 Nov 2018 at 19:20, Gilberto Nunes >
> &g
As long as you have access to the iSCSI storage from all nodes in the
cluster then why not?
On Wed, 28 Nov 2018 at 19:20, Gilberto Nunes
wrote:
> Hi there
>
> Is there any problem to use PVE cluster with iSCSI Direct or not ( I mean
> shared)?
>
> Thanks
> ---
> Gilberto Nunes Ferreira
>
>
Did you reinstall from the Proxmox ISO after changing boot mode to legacy?
Regards.
Mark
On Tue, 30 Oct 2018 at 14:08, lord_Niedzwiedz wrote:
> I set legacy boot in bios.
> Use only one disk with lvm.
> And system not start with this.
>
> Any sugestion ?
> >> I have a problem.
> >> Im
What interface is your cluster communication (corosync) running over? As
this is the link that needs to be unavailable to initiate a VM start on
another node AFAIK.
Basically, the other nodes in the cluster need to be seeing a problem with
the node. If it's still communicating over whichever
assuming the OS in the VM supports it, as much as the host hardware can
support (no limit).
On Tue, 2 Oct 2018 at 19:35, Gilberto Nunes
wrote:
> Hi there!
>
> How many memory per VM I get in PVE?
> Is there some limit? 1 TB? 2 TB?
> Just curious
>
> Thanks
> ---
> Gilberto Nunes Ferreira
>
>
Also, 3 out of 6 servers is not quorum; you need a majority of the total.
On Fri, 28 Sep 2018, 21:22 Mark Adams, wrote:
> the exact same 3 servers have been down and everything has worked? do you
> run ceph mons on every server?
>
> On Fri, 28 Sep 2018, 21:19 Gilberto Nun
If you have to stick with 2 servers, personally I would go for ZFS as your
storage. Storage replication using ZFS in Proxmox has been made super
simple.
This is asynchronous though, unlike DRBD. You would have to manually start
your VMs should the "live" node go down, and the data will be out of
Sounds like you need to speak to your local Microsoft license reseller.
On Fri, 28 Sep 2018 at 13:33, Gilberto Nunes
wrote:
> Hi there
>
> When using spice I will need the infamous Windows User Access Cal, to every
> simple user if will connect to Spice session??
> I need install 10 VM with
If you don't have a license, you need to change the repository.
https://pve.proxmox.com/wiki/Package_Repositories#_proxmox_ve_no_subscription_repository
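Roughly, this is the change (the release codename is an assumption; use the one
matching your PVE version, e.g. stretch for PVE 5.x):

  # add the no-subscription repository
  echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
      > /etc/apt/sources.list.d/pve-no-subscription.list

  # disable the enterprise repository if it is present
  sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

  apt update && apt dist-upgrade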
On 12 September 2018 at 10:01, lord_Niedzwiedz wrote:
> How i make this upgrading ??!!
>
> root@hayne1:~# apt upgrade
> Reading
At least 1.3 Gbps.
> > Don't know why!
> > On 24/08/2018 17:36, "mj" wrote:
> >
> > > Hi Mark,
> > >
> > > On 08/24/2018 06:20 PM, Mark Adams wrote:
> > >
> > >> also, balance-rr through a switch requires each nic to be
Also, balance-rr through a switch requires each NIC to be on a separate
VLAN. You probably need to remove your LACP config as well, but this depends on
the switch model and configuration, so the safest idea is to remove it.
So I think you have 3 nodes,
for example:
node1:
ens0 on port 1, VLAN 10
ens1 on
What sort of OS are you using for VMs that does not default to having
DHCP enabled? I personally can't think of one that isn't DHCP out of the
box.
As for using specific IPs based on MAC address, this would be easily set
in the DHCP server config?
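For example, in ISC dhcpd it is a small host block like this (a sketch; the
hostname, MAC and IP are placeholders):

  host my-vm {
    hardware ethernet 52:54:00:12:34:56;
    fixed-address 192.168.1.50;
  }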
On 21 August 2018 at 08:48, José Manuel Giner
Maybe lost in translation? He said "determine", not "configure".
That means installing the qemu agent in the guest will allow the Proxmox
interface to show you what IP it is using.
On 20 August 2018 at 23:16, Vinicius Barreto
wrote:
> Hello please could you tell which command what qemu agent do you
Just install your own DHCP server on Proxmox if you want. I don't see this
as a feature many people would want, as in any "normal" network you always
have a DHCP server already?
On 20 August 2018 at 15:35, José Manuel Giner wrote:
> I thought cloud-init was connecting against a DHCP server.
>
>
a per image setting. you may need to make the rbd
>> image and migrate data.
>>
>> On 07/26/18 12:25, Mark Adams wrote:
>>
>>> Thanks for your suggestions. Do you know if it is possible to change an
>>> existing rbd pool to striping? or does this have t
Hi Ronny,
Thanks for your suggestions. Do you know if it is possible to change an
existing rbd pool to striping? Or does this have to be done on first setup?
Regards,
Mark
On Wed, 25 Jul 2018, 19:20 Ronny Aasen, wrote:
> On 25 July 2018 02:19, Mark Adams wrote:
> > Hi All,
>
Hi Alwin,
On 25 July 2018 at 07:10, Alwin Antreich wrote:
> Hi,
>
> On Wed, Jul 25, 2018, 02:20 Mark Adams wrote:
>
> > Hi All,
> >
> > I have a proxmox 5.1 + ceph cluster of 3 nodes, each with 12 x WD 10TB
> GOLD
> > drives. Network is 10Gbps on
Hi All,
I have a Proxmox 5.1 + Ceph cluster of 3 nodes, each with 12 x WD 10TB GOLD
drives. Network is 10Gbps on X550-T2, with a separate network for the Ceph cluster.
I have 1 VM currently running on this cluster, which is Debian Stretch with
a zpool on it. I'm zfs sending into it, but only getting
ng how to manually solve
> problems when they arise via the CLI in my opinion is also a
> must. Especially when you deal with a complicated storage like Ceph
>
> Y
> On Thu, Jul 5, 2018 at 11:53 AM Alwin Antreich
> wrote:
>
> > On Thu, Jul 05, 2018 at 11:05:52AM +0100, Mar
On 5 July 2018 at 11:04, Alwin Antreich wrote:
> On Thu, Jul 05, 2018 at 10:26:34AM +0100, Mark Adams wrote:
> > Hi Anwin;
> >
> > Thanks for that - It's all working now! Just to confirm though, shouldn't
> > the destroy button handle some of these actions? or is
Hi Alwin,
Thanks for that - it's all working now! Just to confirm though, shouldn't
the destroy button handle some of these actions? Or is it left out on
purpose?
Regards,
Mark
On 3 July 2018 at 16:16, Alwin Antreich wrote:
> On Tue, Jul 03, 2018 at 12:18:53PM +0100, Mark Adams wrote:
>
Hi Alwin, please see my response below.
On 3 July 2018 at 10:07, Alwin Antreich wrote:
> On Tue, Jul 03, 2018 at 01:05:51AM +0100, Mark Adams wrote:
> > Currently running the newest 5.2-1 version, I had a test cluster which
> was
> > working fine. I since added more dis
On 3 July 2018 at 01:34, Woods, Ken A (DNR) wrote:
> http://docs.ceph.com/docs/mimic/rados/operations/add-or-
> rm-osds/#removing-osds-manual
>
> Are you sure you followed the directions?
>
> ____
> From: pve-user on behalf of Mark Ad
Currently running the newest 5.2-1 version, I had a test cluster which was
working fine. I since added more disks, first stopping, then setting out,
then destroying each OSD so I could recreate it all from scratch.
However, when adding a new OSD (either via GUI or pveceph CLI) it seems to
show a
have implemented ceph replication out of proxmox code, I'll
> try to work on it this summer.
>
>
> - Original Message -
> From: "Mark Adams" <m...@openvs.co.uk>
> To: "proxmoxve" <pve-user@pve.proxmox.com>
> Sent: Tuesday 15 May 2018 00:13:0
> same.
>
> I'll try to look also at rbd mirror, but it's only work with librbd in
> qemu, not with krbd,
> so it can't be implemented for container.
>
>
> - Original Message -
> From: "Mark Adams" <m...@openvs.co.uk>
> To: "proxmoxve" <pve-use
Hi Wolfgang,
So does this mean that all those processes are sitting in a "queue" waiting
to execute? Wouldn't it be more sensible for the script to terminate if a
process is already running for the same job?
Regards,
Mark
On 21 March 2018 at 12:40, Wolfgang Link wrote:
>
Hi All,
I've been using pve-zsync for a few months - it seems to work pretty well.
However, I have just noticed it doesn't seem to be terminating itself
correctly. At present I have around 800 pve-zsync processes (sleeping)
which all seem to be duplicates. (I would expect 1 per VMID?)
Has
doc hasn't been updated for newer versions...)
Regards,
Mark
On 13 March 2018 at 17:19, Alwin Antreich <a.antre...@proxmox.com> wrote:
> On Mon, Mar 12, 2018 at 04:51:32PM +, Mark Adams wrote:
> > Hi Alwin,
> >
> > The last I looked at it, rbd mirror only worked i
12, 2018 at 03:49:42PM +0000, Mark Adams wrote:
> > Hi All,
> >
> > Has anyone looked at or thought of making a version of pve-zsync for
> ceph?
> >
> > This would be great for DR scenarios...
> >
> > How easy do you think this would be to do? I imagine
Hi All,
Has anyone looked at or thought of making a version of pve-zsync for Ceph?
This would be great for DR scenarios...
How easy do you think this would be to do? I imagine it would be quite
similar to pve-zsync, but using rbd export-diff and rbd import-diff instead
of zfs send and zfs
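The incremental cycle would look roughly like this (a sketch only; pool, image,
snapshot and hostnames are placeholders):

  # one-off: take an initial snapshot and copy the full image across
  rbd snap create vdisks/vm-100-disk-0@sync1
  rbd export vdisks/vm-100-disk-0@sync1 - | \
      ssh root@dr-node 'rbd import - vdisks/vm-100-disk-0 && rbd snap create vdisks/vm-100-disk-0@sync1'

  # each run after that: new snapshot, send only the delta
  rbd snap create vdisks/vm-100-disk-0@sync2
  rbd export-diff --from-snap sync1 vdisks/vm-100-disk-0@sync2 - | \
      ssh root@dr-node 'rbd import-diff - vdisks/vm-100-disk-0'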
I'm just trying out the ZFS replication in Proxmox, nice work!
Just a few questions...
- Is it possible to change the network that does the replication? (i.e. it would
be good to use a directly connected link with balance-rr for throughput)
- Is it possible to replicate between machines that are not in the same
On 5 December 2017 at 08:52, Thomas Lamprecht <t.lampre...@proxmox.com>
wrote:
> Hi,
>
> On 12/04/2017 07:51 PM, Mark Adams wrote:
> > On 17 November 2017 at 10:55, Thomas Lamprecht <t.lampre...@proxmox.com>
> wrote:
> >> On 11/16/2017 0
Hi,
On 17 November 2017 at 10:55, Thomas Lamprecht <t.lampre...@proxmox.com>
wrote:
> Hi,
>
> On 11/16/2017 07:20 PM, Mark Adams wrote:
> > Hi all,
> >
> > It looks like in newer versions of proxmox, the only fencing type advised
> > is watchdog. Is th
Hi all,
It looks like in newer versions of Proxmox, the only fencing type advised
is watchdog. Is that the case?
Is it still possible to do PDU fencing as well? This should enable us to
fail over faster, as the fence will not fail if the machine has no
power, right?
Thanks
Hi All,
On Proxmox 5.1, with Ceph as storage, I'm trying to disable the
snapshotting of a specific disk on a VM.
This is not an option in the GUI, but I've added the option to the disk in
the conf file:
scsi1: ssd_ceph_vm:vm-100-disk-2,discard=on,size=32G,snapshot=off
However, this seems to be
Try creating more than 10 vdisks (they don't all have to be in the same VM...)
It never worked after 10 for me.
On 28 October 2017 at 14:11, Gilberto Nunes
wrote:
> Just a note: with Ubuntu I am using IET (iSCSI Enterprise Target) and it
> seems faster than ever!
>
>
IMO, don't even bother trying to do this with tgt. I got it working some
time ago but it was flaky, and started having dataset naming issues after
about 10 disks.
The only working (stable) ZFS over iSCSI with Proxmox AFAIK is using
comstar (which, to be fair to the Proxmox devs, is what they say in
-10"
Is anyone using ZFS over iSCSI with IET? Have you seen this behaviour?
Thanks,
Mark
On 23 November 2016 at 20:40, Michael Rasmussen <m...@miras.org> wrote:
> On Wed, 23 Nov 2016 09:40:55 +
> Mark Adams <m...@openvs.co.uk> wrote:
>
> >
> > Has any
Hi All,
I'm testing out Proxmox and trying to get a working ZFS on iSCSI HA setup
going.
Because ZFS on iSCSI logs on to the iSCSI server via SSH, creates a ZFS
dataset and then adds iSCSI config to /etc/ietd.conf, it works fine when you've
got a single iSCSI host, but I haven't figured out a way