Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Andreas Steinel
On Fri, Oct 14, 2016 at 9:15 PM, Michael Rasmussen wrote:
> On Fri, 14 Oct 2016 19:56:04 +0200 Andreas Steinel wrote:
> > Isn't there a chicken and egg problem now? Where is the name defined if I
> > install via PXE in an automatic fashion?
> For Debian based distributions preseed exists. For Redhat based distributions kickstart exists. …

Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Michael Rasmussen
On Fri, 14 Oct 2016 19:56:04 +0200 Andreas Steinel wrote:
> Isn't there a chicken and egg problem now? Where is the name defined if I
> install via PXE in an automatic fashion?

For Debian based distributions preseed exists. For Redhat based distributions kickstart exists. Both are capable of …
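For illustration, a minimal sketch of how the hostname could be pinned at install time with both tools; the hostname vm-web01 and domain example.lan are hypothetical values, not from the thread:

    # Debian preseed (preseed.cfg): set hostname/domain during automated install
    d-i netcfg/get_hostname string vm-web01
    d-i netcfg/get_domain string example.lan

    # RedHat kickstart (ks.cfg): the equivalent directive
    network --hostname=vm-web01.example.lan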

Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Andreas Steinel
Yes, that's one way to go. We used DDNS almost 10 years ago and went for a "full static" setup. Isn't there a chicken and egg problem now? Where is the name defined if I install via PXE in an automatic fashion? Until now I set up the VM automatically (CLI), give it a name, and retrieve the MAC, …
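That CLI workflow could look roughly like this; a minimal sketch using qm, with the VMID, name, and bridge as hypothetical values:

    # create the VM with a name (VMID 100, name and bridge are hypothetical)
    qm create 100 --name vm-web01 --memory 2048 --net0 virtio,bridge=vmbr0
    # read back the auto-generated MAC address from the VM config
    qm config 100 | grep '^net0'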

Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Michael Rasmussen
On Fri, 14 Oct 2016 16:59:52 +0200 Andreas Steinel wrote:
> On Fri, Oct 14, 2016 at 4:45 PM, Michael Rasmussen wrote:
> > On Fri, 14 Oct 2016 16:09:48 +0200 Andreas Steinel wrote:
> > > How do you guys solve this problem in big environments? Is there a simpler
> > > way I don't see right now? …

Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Andreas Steinel
On Fri, Oct 14, 2016 at 4:45 PM, Michael Rasmussen wrote:
> On Fri, 14 Oct 2016 16:09:48 +0200 Andreas Steinel wrote:
> > How do you guys solve this problem in big environments? Is there a simpler
> > way I don't see right now?
> You could use DHCP assigned IP and a DNS server which automatically adds or removes IP from domain. …

Re: [pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Michael Rasmussen
On Fri, 14 Oct 2016 16:09:48 +0200 Andreas Steinel wrote:
> How do you guys solve this problem in big environments? Is there a simpler
> way I don't see right now?

You could use a DHCP assigned IP and a DNS server which automatically adds or removes the IP from the domain. If some VMs need a static IP …
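dnsmasq is one example of such a setup, since it registers DHCP client hostnames in DNS automatically. A minimal sketch, with the domain, address ranges, MAC, and names all as hypothetical values:

    # /etc/dnsmasq.conf: DHCP leases become resolvable as <hostname>.example.lan
    domain=example.lan
    expand-hosts
    dhcp-range=192.168.1.100,192.168.1.200,12h
    # pin a static address for selected VMs by MAC address
    dhcp-host=DE:AD:BE:EF:00:01,vm-db01,192.168.1.10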

[pve-devel] Finding VMs by hostname, not id

2016-10-14 Thread Andreas Steinel
Hi, I'd like to discuss a feature request about having a "real" hostname on KVM machines or some other mechanism to solve my problem. I have a rather big environment with over a hundred KVM VMs and also different networks including different DNS settings. Currently I "encode" further VM information …
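As a stopgap, the name stored in each VM config can already be resolved to a VMID from the shell; a minimal sketch, where the VM name is hypothetical and the pvesh option syntax may vary by PVE version:

    # on a single node: map a VM name to its VMID
    qm list | awk '$2 == "vm-web01" {print $1}'
    # cluster-wide: query all VM resources, including their names, via the API
    pvesh get /cluster/resources --type vm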

[pve-devel] [RFC zfsonlinux 0/3] Update ZFS to 0.6.5.8

2016-10-14 Thread Fabian Grünbichler
This patch series moves from the ZoL package base to a Debian Jessie package base. Because of the different package names, this requires adding some transitional packages. The new packaging base is much closer to upstream and has some other nice features:
- cleaner packaging scripts
- included scr…

[pve-devel] [RFC zfsonlinux 2/3] add transitional packages and relations for upgrades

2016-10-14 Thread Fabian Grünbichler
---
 Makefile                                        |  11 +-
 zfs-patches/fix-dependencies-for-upgrades.patch | 137
 zfs-patches/series                              |   1 +
 3 files changed, 147 insertions(+), 2 deletions(-)
 create mode 100644 zfs-patches/fix-dependencies-for-upgrades.patch

[pve-devel] [RFC zfsonlinux 1/3] switch pkg source to Debian Jessie

2016-10-14 Thread Fabian Grünbichler
update to 0.6.5.8
drop unneeded patches
refresh no-DKMS and no-dracut patches
---
Note: this patch does not apply because of the binary diff lines, remove those before applying.
 Makefile       |  26 +--
 pkg-spl.tar.gz | Bin 144777…

[pve-devel] [RFC zfsonlinux 3/3] bump version to 0.6.5.8-pve11/-pve7

2016-10-14 Thread Fabian Grünbichler
---
Note: included for easily distinguished package names when test-building
 Makefile             |  6 +++---
 spl-changelog.Debian |  8
 zfs-changelog.Debian | 10 ++
 3 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/Makefile b/Makefile
index 7cbabaf..ca27979 100644…

Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Michael Rasmussen
That might explain the difference.

On October 14, 2016 12:15:42 PM GMT+02:00, Andreas Steinel wrote:
> On Fri, Oct 14, 2016 at 12:08 PM, datanom.net wrote:
> > On 2016-10-14 11:13, Andreas Steinel wrote:
> > > So, what was your test environment? How big was the difference?
> > > Are you running your ZFS pool on the proxmox node? …

[pve-devel] [PATCH v2 container 1/2] fix #1147: allow marking non-volume mps as shared

2016-10-14 Thread Fabian Grünbichler
This introduces a new option for non-volume mount points, modeled after the way we define 'shared' storages: the boolean flag 'shared' marks a mount point as available on other nodes (default: false). When migrating containers with non-volume mount points, this new property is checked, and a migrat…
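With this option, a bind mount marked as shared might look like this in a container config; a minimal sketch, with the VMID and paths as hypothetical values:

    # /etc/pve/lxc/100.conf: non-volume mount point available on all nodes
    mp0: /mnt/nfs/data,mp=/data,shared=1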

[pve-devel] [PATCH v2 container 2/2] fix spelling: 'mountpoint' 'mount point'

2016-10-14 Thread Fabian Grünbichler
---
just rebased
 src/PVE/API2/LXC.pm    | 14 +++---
 src/PVE/CLI/pct.pm     |  2 +-
 src/PVE/LXC.pm         |  2 +-
 src/PVE/LXC/Config.pm  | 14 +++---
 src/PVE/LXC/Migrate.pm |  4 ++--
 src/PVE/VZDump/LXC.pm  |  6 +++---
 6 files changed, 21 insertions(+), 21 deletions(-)
diff -…

Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Andreas Steinel
On Fri, Oct 14, 2016 at 12:08 PM, datanom.net wrote:
> On 2016-10-14 11:13, Andreas Steinel wrote:
> > So, what was your test environment? How big was the difference?
> > Are you running your ZFS pool on the proxmox node?

Yes, everything local on the node itself.

Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread datanom.net
On 2016-10-14 11:13, Andreas Steinel wrote:
> Hi Mir,
> On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen wrote:
> > I use virtio-scsi-single exclusively because of the huge performance
> > gain in comparison to virtio-scsi so I can concur to that.
> I just benchmarked it on a full-SSD-ZFS system of mine and got reverse results. …

Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Alexandre DERUMIER
> > So, what was your test environment? How big was the difference?

That's strange; there are technical differences between virtio-scsi and virtio-scsi-single. With virtio-scsi-single you have one virtio-scsi controller per disk. For iothread, you should see a difference with multiple disks in one VM. This…
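In VM config terms, the difference might look like this; a minimal sketch, with the VMID and storage names as hypothetical values:

    # /etc/pve/qemu-server/100.conf: one virtio-scsi controller per disk,
    # each disk with its own iothread
    scsihw: virtio-scsi-single
    scsi0: local-zfs:vm-100-disk-1,iothread=1
    scsi1: local-zfs:vm-100-disk-2,iothread=1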

Re: [pve-devel] pve-manager and disk IO monitoring

2016-10-14 Thread Andreas Steinel
Hi Mir,

On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen wrote:
> I use virtio-scsi-single exclusively because of the huge performance
> gain in comparison to virtio-scsi so I can concur to that.

I just benchmarked it on a full-SSD-ZFS system of mine and got reverse results. I used 4 cores, …
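The preview does not say which benchmark was used; a comparable run could be reproduced with fio, for example (all parameters below are assumptions, not the original test):

    fio --name=randrw --ioengine=libaio --direct=1 --rw=randrw --bs=4k \
        --numjobs=4 --iodepth=32 --size=1G --runtime=60 --time_based \
        --group_reporting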

[pve-devel] applied: [PATCH kernel] Fix #927: add IPoIB performance regression fix

2016-10-14 Thread Fabian Grünbichler
applied

[pve-devel] Applied [PATCH kernel] update to Ubuntu 4.4.0-43.63, bump version to 4.4.21-69

2016-10-14 Thread Fabian Grünbichler
---
Note: already applied
 Makefile          |   4 ++--
 changelog.Debian  |   8
 ubuntu-xenial.tgz | Bin 145659146 -> 145650761 bytes
 3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/Makefile b/Makefile
index 6a608c5..0f41f5a 100644
--- a/Makefile
+++ b/Makefile
@@ -2,7 +…