Alexandre Derumier
Systems and Storage Engineer
Infrastructure Manager
Office: +33 3 59 82 20 10
125 Avenue de la république
59110 La Madeleine
https://twitter.com/OdisoHosting | https://twitter.com/mindbaz |
https://www.linkedin.com/company/odiso
Hi,
you should give Ceph Octopus a try. librbd write performance has greatly
improved, and I can now recommend enabling writeback by default.
Here are some IOPS results with 1 VM - 1 disk - 4k blocks, iodepth=64, librbd,
no iothread.
nautilus-cache=none
>>AFAIK, it will be a challenge to get more than 2000 IOPS from one VM
>>using Ceph...
with iodepth=1, single queue, you will indeed be bound by latency, and you
shouldn't expect to reach more than 4000-5000 IOPS.
(This depends mainly on the CPU frequency of the client, the CPU frequency of
the cluster, and network latency.)
Hi,
Are your host clocks correctly synced?
- Original Message -
From: "Sivakumar SARAVANAN"
To: "proxmoxve"
Sent: Wednesday 27 May 2020 12:21:00
Subject: Re: [PVE-User] Invalid PVE Ticket (401)
Hello,
Thanks for the reply.
Yes, we are using 20 Proxmox servers in the datacenter, and each
>>Unfortunately I can't remember the exact time this happened it was
>>something like 3 or 4 weeks ago, I've updated since then but as all
>>servers are in production right now, I can't test and verify if it's
>>been fixed or not.
OK, no problem. I have been able to reproduce it. It was indeed
@Sonam:
>>
>>Finally, I cannot apply pending network changes from web even after
>>installing *ifupdown2* (without reboot). It complains about
>>subscription. Do I need enterprise subscription for that?
Have you changed the Proxmox repository to no-subscription?
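(For reference, a sketch of the repo entry, assuming PVE 6 on Debian buster;
the file name is just a convention:
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
then run "apt update".)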
@Amin
>> I have installed the latest Proxmox VE 6.1-2 and when modifying network
>> information from web, the prefix (netmask) is slashed off from old
>> interface config irrespective of whether I specify CIDR or not. And
>> then, I cannot create cluster from web GUI with link0 address in
>>
>>What rates do you find on your proxmox/ceph cluster for single VMs?
with replica x3 and 4k block random read/write with a big queue depth, I'm
around 7iops read && 4iops write
(per VM disk if iothread is used; the limitation is the CPU usage of 1
thread/core per disk)
with queue depth=1,
fixed package uploaded today
http://download.proxmox.com/debian/pve/dists/buster/pve-no-subscription/binary-amd64/ifupdown2_2.0.1-1%2Bpve8_all.deb
- Original Message -
From: "proxmoxve"
To: "proxmoxve"
Cc: "leesteken"
Sent: Monday 16 March 2020 11:22:28
Subject: Re: [PVE-User] systemd
Hi,
edit /lib/systemd/system/networking.service
and remove the exec line "ExecStart=/sbin/ifup --allow=ovs"
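(If you want to do it non-interactively, something like this should work; the
sed pattern is just one way to drop that line:
sed -i '\#ExecStart=/sbin/ifup --allow=ovs#d' /lib/systemd/system/networking.service
systemctl daemon-reload
)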
This was a fix for openvswitch in ifupdown2, but it has since been fixed
another way. I have sent a mail to pve-devel to remove the patch.
Thanks for reporting this.
- Original Message -
From:
Hi,
almost all storages are thin-provisioned by default in Proxmox
(create a new 400GB disk and it will use 0 until you write data to it).
- Original Message -
From: "Leandro Roggerone"
To: "proxmoxve"
Sent: Monday 9 March 2020 12:53:29
Subject: [PVE-User] Creating VM with thin provisioning
Hi,
currently there is no support for DPDK in Proxmox.
I recently sent OVS 2.12 for Proxmox 6 (DPDK can be built too),
but there are other things to implement before it can be used (vhost-user for
the QEMU NIC, for example).
You also need transparent hugepages enabled in the VM.
- Original Message -
Well, you could use a virtual quorum VM, but:
dc1: 4 nodes + HA quorum VM; dc2: 4 nodes.
You lose dc1 -> you lose quorum on dc2, so you can't start VMs on dc2;
so it's not helping.
You really don't want to use HA here, but you can still play with "pvecm
expected" to get back quorum
if 1 dc is
Subject: Re: [PVE-User] Network interfaces renaming strangeness...
Mandi! Alexandre DERUMIER
In chel di` si favelave...
> is it a fresh Proxmox 6 install, or upgraded from Proxmox 5?
Fresh proxmox 6, from scratch. Upgraded daily via APT.
> any /etc/udev/rules.d/70-persistent
reverse slot/path
"NamePolicy=keep kernel database onboard path slot"
and reboot
I have seen a user on the forum with the same problem.
- Original Message -
From: "Marco Gaiarin"
To: "proxmoxve"
Sent: Thursday 13 February 2020 12:32:49
Subject: Re: [PVE-User] N
Hi,
is it a fresh Proxmox 6 install, or upgraded from Proxmox 5?
Any /etc/udev/rules.d/70-persistent-net.rules file somewhere? (It should be
removed.)
No special grub options? (net.ifnames, ...)
- Original Message -
From: "Marco Gaiarin"
To: "proxmoxve"
Sent: Thursday 13 February 2020
Ceph client and server are generally compatible within 2 or 3 releases.
There is no way to make Nautilus or Luminous clients work with Firefly.
I think the minimum is a Jewel server for a Nautilus client.
So the best way could be to upgrade your old Proxmox cluster first (from 4->6,
this can be done
cked that /boot/grub/grub.cfg has been updated correctly.
But after reboot nothing has changed;
the frequency is still between 1.20 and 3.60 GHz.
Any idea how to fix this?
On 23/01/2020 20:38, Alexandre DERUMIER wrote:
> Hi,
> I'm setup this now for my new intel processors:
>
> /etc/defau
AN and tagged frames
- On 24 Jan 20, at 8:20, Daniel Berteaud dan...@firewall-services.com
wrote:
> - On 23 Jan 20, at 20:53, Alexandre DERUMIER aderum...@odiso.com wrote:
>>
>> I think if you want to do something like a simple vxlan tunnel, with
>> multiple
>
Hi,
>>So, what's the recommended setup for this ? Create one (non vlan aware)
>>bridge for each network zone, with 1 VxLAN tunnel per bridge between nodes ?
yes, you need 1 non-VLAN-aware bridge + 1 VXLAN tunnel.
Technically, there is a VLAN (from an aware bridge) to VXLAN mapping in the
kernel, but
Hi,
I'm setting this up now for my new Intel processors:
/etc/default/grub
GRUB_CMDLINE_LINUX="intel_idle.max_cstate=0 intel_pstate=disable
processor.max_cstate=1"
- Original Message -
From: "José Manuel Giner"
To: "proxmoxve"
Sent: Thursday 23 January 2020 15:34:47
Subject: [PVE-User] CPU freq
When a VM sends a discard/trim command, is it sent to the SSD, or does LVM
block the command?
Hi, yes, it works with LVM (LVM-thin only).
>>Or is it useless, because mdadm handles discard/trim in his own way?
The trim command needs to be sent by the guest OS.
On a Linux guest: /etc/fstab
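(A minimal sketch, assuming an ext4 root on /dev/vda1 - device and filesystem
are placeholders:
/dev/vda1  /  ext4  defaults,discard  0  1
Alternatively, run "fstrim" periodically instead of mounting with discard.)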
ay, just for my education, is there someone here who can explain
shortly what the problem was (unicast management?) or who has a good
link regarding this "behavior"? Thanks!
Cheers,
rv
On 08/11/2019 at 11:18, Alexandre DERUMIER wrote:
> Hi,
>
> do you have upgrade
>>Reading that the final suggestion is: using a random number, then
>>perhaps pve could simply suggest a random PVID number between 1 and 4
>>billion.
Use a UUID in this case.
The problem is that VM tap interfaces, for example, use the VMID in the name
(tapi,
and it's limited to 16 characters by
d a stable situation
> before upgrading it!)
>
> It seems to be a unicast or corosync3 problem, but logs are not really
> verbose at the time of reboot...
>
> Is there anything else to test ?
>
> Regards,
> Hervé
>
> On 20/09/2019 at 17:00, Alexandre DERUMIER wrote
Is it between AMD and Intel hosts?
Because, in the past, that was never stable. (I also had problems between
different AMD generations.)
- Original Message -
From: "proxmoxve"
To: "proxmoxve"
Cc: "Humberto Jose De Sousa"
Sent: Tuesday 29 October 2019 12:41:19
Subject: [PVE-User] kernel panic after
Hi,
a patch is available in pvetest
http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb
can you test it?
(you need to restart corosync after installing the deb)
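(Something like this on the affected node; the wget/dpkg steps are the obvious
route, not an official procedure:
wget http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb
dpkg -i libknet1_1.11-pve2_amd64.deb
systemctl restart corosync
)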
- Original Message -
From: "Laurent CARON"
To: "proxmoxve"
Sent: Monday 16
Can you send the output of
cat /proc/slabinfo
?
- Original Message -
From: "Aaron Lauterer"
To: "proxmoxve"
Sent: Friday 20 September 2019 15:12:04
Subject: Re: [PVE-User] Kernel Memory Leak on PVE6?
On 9/20/19 3:04 PM, Chris Hofstaedtler | Deduktiva wrote:
> * Aaron Lauterer [190920 14:58]:
I have done tests with VMware; you need the iSCSI gateway from Ceph, but you
really need to be on Mimic, or better Nautilus, for iSCSI gateway stability.
http://docs.ceph.com/docs/master/rbd/iscsi-initiator-esx/
- Original Message -
From: "Gilberto Nunes"
To: "proxmoxve"
Sent: Monday 3 June 2019
>>as Mimic was not a stable release
That's not true anymore.
Since Luminous, they have changed their release cycle (a release every 9
months), and all releases are now stable:
https://ceph.com/releases/v13-2-0-mimic-released/
"This is the first stable release of Mimic, the next long term release series"
fert the conntrack.
- Original Message -
From: "Thomas Lamprecht"
To: "proxmoxve" , "Mark Schouten"
Sent: Thursday 9 May 2019 11:10:46
Subject: Re: [PVE-User] Ceph and firewalling
On 5/9/19 10:09 AM, Mark Schouten wrote:
> On Thu, May 09, 2019 at 07:53:50AM +0200,
Hi,
I had this problem mainly with CephFS in the VM: when the firewall is stopped
(rules are flushed, but existing connections are still conntracked) and the
firewall is then started again,
conntrack marks packets as invalid because it didn't track the connection
sequence while the firewall was stopped.
This could
>>Is there any way to copy a disk *exactly* from one pool to another, using
>>PVE live disk moving?
It's currently not possible with ceph/librbd (it's missing some pieces in the
qemu librbd driver).
>>Of course when I run "fstrim -va" on the guest, I can't reclaim space
>>because the kernel thinks
>>Not sure what documentation exists. 40gbps links are internally 4x 10gbps
>>waves, so a single IO will flow over one of the 10gbps links at 10gbps
>>latency. A 25Gbps link is a faster single wave, and 100Gbps is
>>4x25Gbps waves. Since IOPS is usually much more important than
>>bandwidth (for
How many disks do you have in your node?
Maybe using filestore could help if you are low on memory.
- Original Message -
From: "Gilberto Nunes"
To: "proxmoxve"
Sent: Monday 18 February 2019 03:37:34
Subject: [PVE-User] Proxmox Ceph Cluster and osd_target_memory
Hi there
I have a system w/ 6
I'm running a 20-node cluster here, unicast.
I think if you have a switch with a big ASIC and low latency,
you can do better.
I'm currently testing corosync 3 with the new knet protocol;
it should also help for bigger clusters (it should be the default for Proxmox
6). I think currently there are
As you have configured nfqueue=2, have you set up
ips_queues: 2
?
- Original Message -
From: "Mark Kaye"
To: "proxmoxve"
Sent: Tuesday 18 December 2018 10:54:23
Subject: [PVE-User] Proxmox Suricata Setup
Hi,
I've followed the instructions for setting up Suricata on my Proxmox server as
For import, you can do it with the command line only.
For an OVF:
qm importovf <vmid> <manifest> <storage> [OPTIONS]
For only 1 disk:
qm importdisk <vmid> <source> <storage> [OPTIONS]
For export, there is no command line currently.
(But you can use "qemu-img convert ..." to convert a disk to vmdk or another
format.)
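(For example, to get a vmdk out of a raw disk - file names here are
hypothetical:
qemu-img convert -p -O vmdk vm-100-disk-0.raw vm-100-disk-0.vmdk
-p shows progress, -O selects the output format.)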
- Original Message -
>>I presume a "node1:2,node8:1" and "restricted 1" should do the trick here.
restricted only means that if both node1 && node8 are down, the VM doesn't go
to another node.
But the weight, indeed, should do the trick:
for example, n nodes with :2, and spare node(s) with :1
- Original Message -
Like all writeback, you'll lose data in memory that hasn't been fsynced by the
filesystem, but you won't get filesystem corruption. (writeback =
rbd_cache=true)
Note that rbd writeback only helps for sequences of small writes (aggregated
into 1 big transaction sent to Ceph).
Also, read latency is bigger
>>Clearly I cannot use 'LVM Thin', because it is not shared.
>https://pve.proxmox.com/wiki/Storage:_LVM_Thin
>>Someone can give me some clue? Thanks.
Indeed, LVM-thin can't work on a shared SAN, only on local disks.
So you can't have thin
Hi,
It's also possible to manage LUKS encryption at the QEMU level.
I have an open bugzilla entry about this, but haven't had time to work on it
yet:
https://bugzilla.proxmox.com/show_bug.cgi?id=1894
The advantage is that it could work with any storage.
- Original Message -
From: "Daniel Berteaud"
To:
Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> > On Fri, 5 Oct 2018 at 03:55, Alexandre DERUMIER <
> aderum...@od
Hi,
Can you resend your schema? It's impossible to read.
But you need quorum on the monitors for the cluster to work.
- Original Message -
From: "Gilberto Nunes"
To: "proxmoxve"
Sent: Thursday 4 October 2018 22:05:16
Subject: [PVE-User] Proxmox CEPH 6 servers failures!
Hi
The QEMU limit is 4TB.
Proxmox doesn't have a max limit in the VM config.
But I'm not sure that SeaBIOS supports more than 1TB (I never tried it, but in
2013 it was not supported).
Maybe enabling UEFI could help.
- Original Message -
From: "Gilberto Nunes"
To: "proxmoxve"
Sent: Tuesday 2 October
Hi,
maybe the network link from NODE1 to the network storage is saturated?
- Original Message -
From: "Gilberto Nunes"
To: "proxmoxve"
Sent: Thursday 20 September 2018 15:31:17
Subject: [PVE-User] VM tooks a lot time to restart...
Hi there
I have a bunch of VMs running in two HA clusters...
This
>>Hi,
>>I just mount cephfs via fstab - I have not installed anything
>>ceph-related; I am relying on the kernel driver.
>>As far as I know, this cannot really be updated unless I upgrade the
>>kernel?
>>http://docs.ceph.com/docs/master/cephfs/kernel/
>>I am trying to diagnose a performance issue we have in our cluster.
>>What we found is that Proxmox 5 is using Ceph Luminous 12.2.7, but our
>>clients that are mounting CephFS are running Jewel 10.2.5 – Is this an issue?
It's better to use matching version packages for the client too. (last kernel
xmoxve"
Sent: Tuesday 4 September 2018 17:14:26
Subject: Re: [PVE-User] Cloning a running VM - is it safe?
On 04.09.2018 at 16:39, Alexandre DERUMIER wrote:
> live cloning doesn't use a snapshot, but uses qemu drive mirror (like move
> disk), but at the end, doesn't do the switch fro
Live cloning doesn't use a snapshot, but uses qemu drive mirror (like move
disk); but at the end, it doesn't do the switch from the current disk to the
new disk.
It should work in almost all cases, but be careful: I don't think that
pending writes in memory are flushed to the new disk.
(It's like you clone it, and at
Hi,
vma backup only works on a running VM (attached disks),
so no, it's not possible currently.
Currently, I'm doing Ceph backup to another remote Ceph backup cluster
with a custom script:
* Start
* guest-fs-freeze
* rbd snap $image@vzdump_$timestamp
* guest-fs-thaw
* rbd export
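(A rough sketch of such a script; the vmid, pool/image and target host are
examples, and it assumes the qemu guest agent is running in the VM:
#!/bin/bash
VMID=100
IMAGE=rbd/vm-100-disk-0
SNAP=vzdump_$(date +%Y%m%d%H%M)
qm agent $VMID fsfreeze-freeze   # quiesce guest filesystems
rbd snap create $IMAGE@$SNAP     # take a consistent snapshot
qm agent $VMID fsfreeze-thaw     # resume guest I/O right away
rbd export-diff $IMAGE@$SNAP - | ssh backup-cluster rbd import-diff - $IMAGE
)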
Hi,
do you want to improve latency or throughput?
There are lots of things to tune in ceph.conf before tuning the network.
Can you send your ceph.conf too? (ceph cluster && proxmox nodes, if ceph is
outside proxmox)
SSD? HDD?
- Original Message -
From: "Gilberto Nunes"
To: "proxmoxve"
- Original Message -
From: "lyt_yudi"
To: "proxmoxve"
Sent: Thursday 12 July 2018 03:20:47
Subject: Re: [PVE-User] About the FRR integrated into the PVE?
> On 11 July 2018, at 9:17 PM, Alexandre DERUMIER wrote:
>
> What do you want to do with frr? (vxlan - bgp evpn?)
Hi,
I have sent a package for Proxmox last month;
I need to rebase it on frr 5.0.1.
(I need it for VXLAN BGP EVPN.)
- Original Message -
From: "lyt_yudi"
To: "proxmoxve"
Sent: Tuesday 10 July 2018 03:41:38
Subject: [PVE-User] About the FRR integrated into the PVE?
Hi,
will it be
>>For example: with the dual 10G LACP connection to each server, we can
>>only use MTU size 1500. Are we losing much there..? Or would there be a
>>way around this, somehow?
you can set up MTU 9000 on your bridge and bond.
If your VMs have MTU 1500 (inside the VM), the packets will use MTU 1500.
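(A sketch of the relevant /etc/network/interfaces pieces; interface names and
bond mode are examples:
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    mtu 9000

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    mtu 9000
)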
di 17 May 2018 17:36:43
Subject: Re: [PVE-User] pve-csync version of pve-zsync?
Hi Alexander,
Could you please elaborate more on how you have implemented ceph
replication using proxmox?
Thanks,
Mark
On Thu, 17 May 2018, 15:26 Alexandre DERUMIER, <aderum...@odiso.com> wrote:
> Hi,
Sent: Tuesday 15 May 2018 00:13:03
Subject: Re: [PVE-User] pve-csync version of pve-zsync?
Hi Alexandre,
Did you ever get a chance to take a look at this?
Regards,
Mark
On 13 March 2018 at 18:32, Alexandre DERUMIER <aderum...@odiso.com> wrote:
> Hi,
>
> I have plans to i
Note that you only need SPEC-CTRL and the latest microcode if your VMs are
Windows, or Linux with a kernel without retpoline mitigation.
PCID is only there to improve performance (and you need a recent kernel (>4.13
I think) in your VM, because it was not used before).
Setting vcpu other than kvm64,
Hi,
>>Ceph has rather large overheads
Agree. There is overhead, but performance increases with each release.
I think the biggest problem is that you can't reach more than 70-90k IOPS with
1 VM disk currently.
And maybe latency could be improved too.
>>much bigger PITA to admin
I don't agree. I'm
Hi,
I don't think it's possible currently.
BTW, have you tried chrony instead of ntpd? It's really much faster at keeping
the clock in sync than ntpd or openntpd.
- Original Message -
From: "Andreas Herrmann"
To: "proxmoxve"
Sent: Monday 26 March 2018 10:22:06
,
Josh
Josh Knight
On Wed, Mar 21, 2018 at 2:03 AM, Alexandre DERUMIER <aderum...@odiso.com>
wrote:
> I'm running 20-node clusters here, with unicast. (I need to increase
> the corosync timeout a little bit with more than 12 nodes.)
>
> I think the hard limit is around 100
Thanks for your script; I was looking for something like that.
I'll try it next week and try to debug.
I see 2 improvements:
- check that the shared storage is available on the target node
- take the KSM value into account in the memory count (with KSM enabled, we
can have all nodes at 80% memory usage, but with
I'm running 20-node clusters here, with unicast. (I need to increase the
corosync timeout a little bit with more than 12 nodes.)
I think the hard limit is around 100 nodes in the latest corosync, but the
recommendation is around 32. With more than that, I think the corosync timeout
needs to be increased
Hi,
I have plans to implement storage replication for rbd in Proxmox,
like for zfs export|import (with rbd export-diff | rbd import-diff).
I'll try to work on it next month.
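(The underlying pipeline would be something like this, with hypothetical
pool/image/snapshot names:
rbd export-diff --from-snap snap1 rbd/vm-100-disk-0@snap2 - | ssh remote-node rbd import-diff - rbd/vm-100-disk-0
)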
I'm not sure that a plugin infrastructure currently exists in the code,
and that it's able to manage storages with different
ian Grünbichler" <f.gruenbich...@proxmox.com>
À: "proxmoxve" <pve-user@pve.proxmox.com>
Envoyé: Lundi 12 Mars 2018 20:08:57
Objet: Re: [PVE-User] 4.15 based test kernel for PVE 5.x available
On Mon, Mar 12, 2018 at 07:43:09PM +0100, Alexandre DERUMIER wrote:
> Hi,
Hi,
Is retpoline support enabled like the Ubuntu build? (built with a recent gcc?)
- Original Message -
From: "Fabian Grünbichler"
To: "proxmoxve"
Sent: Monday 12 March 2018 14:14:29
Subject: [PVE-User] 4.15 based test kernel for PVE 5.x
Please read the doc here:
https://pve.proxmox.com/wiki/Pci_passthrough
- Original Message -
From: "Gilberto Nunes"
To: "proxmoxve"
Sent: Friday 2 March 2018 18:06:37
Subject: [PVE-User] Network Interface Card Passthroug
pve02:~# cat
CEPH Luminous source packages
On 13/2/18 10:58 pm, Alexandre DERUMIER wrote:
> https://git.proxmox.com/?p=ceph.git;a=summary
>
Any idea how I can clone the packages from that git system?
Mike
https://git.proxmox.com/?p=ceph.git;a=summary
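(The clone URL for that repo should be:
git clone git://git.proxmox.com/git/ceph.git
)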
- Original Message -
From: "Mike O'Connor"
To: "proxmoxve"
Sent: Tuesday 13 February 2018 01:56:47
Subject: [PVE-User] CEPH Luminous source packages
Hi All
Where can I find the source packages that the
Gbit
interface.
Do you have a link about these virtio issues in 4.10?
Kind regards,
--
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
From: Alexand
Try upgrading to kernel 4.13; there are known virtio bugs in 4.10.
(Not sure it's related, but it could help.)
Do you use bonding on your host? If yes, which mode?
- Original Message -
From: "Mark Schouten"
To: "proxmoxve"
Sent: Tuesday 2 January 2018
yes, this is normal.
The block job mirror is cancelled at the end, to avoid having the source VM
switch to the new disk.
- Original Message -
From: "Fabrizio Cuseo"
To: "proxmoxve"
Sent: Tuesday 31 October 2017 18:14:28
Subject: [PVE-User] PVE 5.1
already available in proxmox 5.1 :)
#qm importovf
- Original Message -
From: "Francois Deslauriers"
To: "proxmoxve"
Sent: Wednesday 8 November 2017 23:20:49
Subject: [PVE-User] OVA, OVF support
is there any plan to support OVA, OVF
>>Has anybody installed a Luminous client on Proxmox 4.4?
yes, it's working.
But I have only tested the Luminous client against Jewel and Luminous
clusters; I don't know if it's still compatible with Hammer.
- Original Message -
From: "Mark Schouten"
To: "proxmoxve"
Try asking the Debian devs to add it to the jessie-backports repo?
Or maybe try installing the deb package from stretch on jessie?
- Original Message -
From: "Lindsay Mathieson"
To: "proxmoxve"
Sent: Sunday 1 October 2017
> interesting to note: after deactivating the system console on the
> VGA/NoVNC and switching to a serial console the above phenomena are
> fixed and the OpenBSD guest is running smoothly like with Proxmox VE
> 4.4.
could it be related to the change from cirrus to vga for the VGA adapter?
5.255.255.0
> mtu 9000
>
> # interface for vlan 130 on bond without IP (just for VMs)
> auto bond0.130
> iface bond0.130 inet manual
>
> -
>
>
> On 23.08.2017 at 06:44, Alexandre DERUMIER wrote:
>> Hi,
>>
>>
Hi,
you can create a bond0.120 interface and set the IP on it
(and keep bond0 in the bridge).
Another way: enable the "vlan aware" option on the bridge (see GUI);
then you can create a vmbr0.120 and set the IP on it.
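(For example, with a hypothetical address:
auto vmbr0.120
iface vmbr0.120 inet static
    address 10.0.120.10
    netmask 255.255.255.0
)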
- Original Message -
From: "Devin Acosta"
To: "proxmoxve"
Can you post your vmid.conf?
- Original Message -
From: "Bill Arlofski"
To: "proxmoxve"
Sent: Tuesday 15 August 2017 07:02:12
Subject: [PVE-User] Random kernel panics of my KVM VMs
Hello everyone.
I am not sure this is the right place to ask,
you need to define the USB port before starting the VM (with the physical
port number or USB device ID).
After that, you can hotplug/unplug the USB device.
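(For example, by vendor:product id - the id here is a placeholder:
qm set 100 -usb0 host=1234:5678
)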
- Original Message -
From: "Gilberto Nunes"
To: "proxmoxve"
Sent: Friday 11 August 2017
This seems to be a multicast problem.
Does it work with omping?
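(For example, run this on all nodes at the same time; node names are
placeholders:
omping -c 10000 -i 0.001 -F -q node1 node2 node3
)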
- Original Message -
From: "Chris Tomkins"
To: "proxmoxve"
Sent: Friday 11 August 2017 12:02:00
Subject: [PVE-User] Cluster won't reform so I can't restart VMs
Hi Proxmox users,
I
CPU hotplug/unplug works fine.
Memory hotplug works fine.
Memory unplug is mostly broken in Linux and not implemented in Windows.
See the notes here:
https://pve.proxmox.com/wiki/Hotplug_(qemu_disk,nic,cpu,memory)
Alexandre Derumier
Systems and Storage Engineer
Manager
Hi,
there is no hook script for VM stop/start,
but maybe you can try to hack
/var/lib/qemu-server/pve-bridge
This is the script that QEMU executes when the VM stops/starts, or when a NIC
is hotplugged.
(I'm not sure about live migration, as you need to announce the IP only when
the VM is resumed on the target
also a discussion about youtube performance:
https://www.spinics.net/lists/spice-devel/msg27403.html
"Since you are on el7 system you can test our nightly builds:
https://copr.fedorainfracloud.org/coprs/g/spice/nightly/
which provides ability to switch the video encoder in spicy (package
another interesting article, in German:
http://linux-blog.anracom.com/2017/07/06/kvmqemu-mit-qxl-hohe-aufloesungen-und-virtuelle-monitore-im-gastsystem-definieren-und-nutzen-i/
- Original Message -
From: "aderumier"
To: "proxmoxve"
Sent: Tuesday
oVirt has a good draft for auto-tuned values; I wonder if we could use this
for Proxmox?
http://www.ovirt.org/documentation/draft/video-ram/
Default values are ram='65536', vram='65536', vgamem='16384', heads='1'.
Also, it seems that a new vram64 value is available
hi,
I see that qemu 2.9 has new flags to make qemu-img convert async:
http://git.qemu.org/?p=qemu.git;a=commit;h=2d9187bc65727d9dd63e2c410b5500add3db0b0d
"This patches introduces 2 new cmdline parameters. The -m parameter to specify
the number of coroutines running in parallel (defaults
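(For example, something like this should run the convert with 8 coroutines and
out-of-order writes - file names are hypothetical:
qemu-img convert -p -m 8 -W -O raw src.qcow2 dst.raw
)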
Forgot to say: this is with Proxmox 4, corosync 2.4.2-2~pve4+1.
CPUs are E5-2687W v3 @ 3.10GHz.
- Original Message -
From: "dietmar"
To: "aderumier"
Cc: "proxmoxve" , "pve-devel"
Sent: Wednesday
>>how many running VMs/Containers?
on a 20-node cluster, around 1000 VMs
on a 10-node cluster, 800 VMs + 800 CTs
on a 9-node cluster, 400 VMs
- Original Message -
From: "dietmar"
To: "aderumier" , "pve-devel"
Cc:
Hi,
just for the record,
I have migrated all my clusters to unicast, including big clusters with 16-20
nodes, and it's working fine.
"pvedaemon: ipcc_send_rec failed: Transport endpoint is not connected" seems
to be gone.
I don't see any errors on the cluster.
Traffic is around 3-4mbit/s on each
Note that I'm seeing, from time to time (around once an hour):
pvedaemon: ipcc_send_rec failed: Transport endpoint is not connected
But I don't have any corosync errors / retransmits.
- Original Message -
From: "aderumier"
To: "pve-devel" ,
Hi,
I'm looking to remove multicast from my network (I don't have much time to
explain, but we have a multicast storm problem because of an igmp snooping
bug).
Is somebody running this with "big" clusters? (10-16 nodes)
I'm currently testing it with 9 nodes (1200 VMs+containers); I'm seeing around
>>What's the general opinion regarding incremental backups with a dirty map
>>like feature? http://wiki.qemu.org/Features/IncrementalBackup
>>
>>I see that as a very important and missing feature.
It needs to be implemented in the Proxmox vma format and backup code. (as
proxmox doesn't use qemu
Congrats to the Proxmox team!
BTW, small typo on
https://www.proxmox.com/en/training/video-tutorials
"New open-source storage replikation stack"
s/replikation/replication
- Original Message -
From: "Martin Maurer"
To: "pve-devel" , "proxmoxve"
It will not be included in Proxmox 5.0 (the Proxmox devs are working on other
things). I'll try to push it for Proxmox 5.1.
- Original Message -
From: lemonni...@ulrar.net
To: "proxmoxve"
Sent: Friday 30 June 2017 19:48:25
Subject: Re: [PVE-User] [pve-devel] Proxmox VE 5.0
uot; <aderum...@odiso.com>
Sent: Monday 19 June 2017 06:38:22
Subject: Re: [PVE-User] [pve-devel] Proxmox VE 5.0 beta2 released!
On 19 June 2017, at 11:45 AM, lyt_yudi <lyt_y...@icloud.com> wrote:
On 18 June 2017, at 11:50 PM, Alexandre Derumier <
- Original Message -
From: "lyt_yudi" <lyt_y...@icloud.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "proxmoxve" <pve-user@pve.proxmox.com>
Sent: Saturday 17 June 2017 03:13:24
Subject: Re: [PVE-User] [pve-devel] Proxmox VE 5.0 beta2 released!
On
>>So we've seen a few examples in this thread of how to use it with qm set,
>>which
>>is great, but how would we do it through the API? Is there some kind of
>>documentation
>>for this, or should I just use pvesh on 5.0 to try and find the new API
>>endpoints?
>>'cause I have to admit, having
>>yes, the other params are normal now.
OK, great :)
Currently I have only tested it on Debian Jessie.
So, if you have time to test it on other distros, I'm interested to see the
results :)