On Fri, Oct 14, 2016 at 9:15 PM, Michael Rasmussen wrote:
> On Fri, 14 Oct 2016 19:56:04 +0200 Andreas Steinel
> wrote:
> > Isn't there a chicken and egg problem now? Where is the name defined if I
> > install via PXE in an automatic fashion?
>
> For Debian-based distributions preseed exists. Fo
On Fri, 14 Oct 2016 19:56:04 +0200
Andreas Steinel wrote:
>
> Isn't there a chicken and egg problem now? Where is the name defined if I
> install via PXE in an automatic fashion?
>
For Debian-based distributions preseed exists. For Red Hat-based
distributions kickstart exists. Both are capable o
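For illustration, a minimal sketch of how the hostname could be pinned in
either mechanism (hostname and domain below are made-up placeholders):

  # Debian preseed: answer the netcfg questions up front
  d-i netcfg/get_hostname string vm-web-01
  d-i netcfg/get_domain   string lab.example.internal

  # Red Hat kickstart: the equivalent network line
  network --bootproto=dhcp --hostname=vm-web-01.lab.example.internal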
Yes, that's one way to go. We used ddns almost 10 years ago and went for a
"full static" setup.
Isn't there a chicken and egg problem now? Where is the name defined if I
install via PXE in an automatic fashion?
Until now I set up the VM automatically (CLI), give it a name, and
retrieve the MAC,
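For reference, a rough sketch of that flow on a Proxmox node (VMID, name and
bridge are made up, and the MAC shown is only illustrative):

  # create the VM with a name and a virtio NIC
  qm create 9001 --name vm-web-01 --memory 2048 --net0 virtio,bridge=vmbr0

  # read the generated MAC address back from the VM config
  qm config 9001 | grep ^net0
  # net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0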
On Fri, 14 Oct 2016 16:59:52 +0200
Andreas Steinel wrote:
> On Fri, Oct 14, 2016 at 4:45 PM, Michael Rasmussen wrote:
>
> > On Fri, 14 Oct 2016 16:09:48 +0200
> > Andreas Steinel wrote:
> >
> > >
> > > How do you guys solve this problem in big environments? Is there a simpler
> > > w
On Fri, Oct 14, 2016 at 4:45 PM, Michael Rasmussen wrote:
> On Fri, 14 Oct 2016 16:09:48 +0200
> Andreas Steinel wrote:
>
> >
> > How do you guys solve this problem in big environments? Is there a simpler
> > way I don't see right now?
> >
> You could use DHCP-assigned IPs and a DNS server whic
On Fri, 14 Oct 2016 16:09:48 +0200
Andreas Steinel wrote:
>
> How do you guys solve this problem in big environments? Is there a simpler
> way I don't see right now?
>
You could use DHCP-assigned IPs and a DNS server which automatically
adds or removes IPs from the domain. If some VMs need static I
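As a sketch of that approach with ISC dhcpd (zone name, key and addresses are
placeholders; dnsmasq can do the same with less configuration):

  # /etc/dhcp/dhcpd.conf (excerpt)
  ddns-update-style interim;
  ddns-updates on;
  ddns-domainname "lab.example.internal.";

  key ddns-key {
    algorithm hmac-md5;
    secret "<key generated with ddns-confgen>";
  };

  zone lab.example.internal. {
    primary 192.0.2.53;     # authoritative name server
    key ddns-key;
  }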
Hi,
I'd like to discuss a feature request about having a "real" hostname on KVM
machines or some other mechanism to solve my problem.
I have a rather big environment with over a hundred KVM VMs spread across
different networks with different DNS settings. Currently I "encode"
further VM informati
This patch series moves from the ZoL package base to a Debian Jessie package
base. Because of the different package names, this requires adding some
transitional packages.
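For those who have not seen the pattern: a transitional package is an empty
package under the old name that just pulls in the new one, roughly like this
in debian/control (the names here only illustrate the pattern):

  Package: zfsutils
  Architecture: all
  Section: oldlibs
  Depends: zfsutils-linux, ${misc:Depends}
  Description: transitional dummy package for zfsutils-linux
   This is a transitional package. It can safely be removed.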
the new packaging base is much closer to upstream and has some other nice
features:
- cleaner packaging scripts
- included scr
---
Makefile | 11 +-
zfs-patches/fix-dependencies-for-upgrades.patch | 137
zfs-patches/series | 1 +
3 files changed, 147 insertions(+), 2 deletions(-)
create mode 100644 zfs-patches/fix-dependencies
update to 0.6.5.8
drop unneeded patches
refresh no-DKMS and no-dracut patches
---
Note: this patch does not apply because of the binary diff lines,
remove those before applying.
Makefile | 26 +--
pkg-spl.tar.gz | Bin 144777
---
Note: included for easily distinguished package names when test-building
Makefile | 6 +++---
spl-changelog.Debian | 8
zfs-changelog.Debian | 10 ++
3 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/Makefile b/Makefile
index 7cbabaf..ca27979 100644
That might explain the difference.
On October 14, 2016 12:15:42 PM GMT+02:00, Andreas Steinel
wrote:
>On Fri, Oct 14, 2016 at 12:08 PM, datanom.net wrote:
>
>> On 2016-10-14 11:13, Andreas Steinel wrote:
>>>
>>> So, what was your test environment? How big was the difference?
>>>
>>> Are you running your ZFS pool on the proxmox node?
this introduces a new option for non-volume mount points,
modeled after the way we define 'shared' storages: the
boolean flag 'shared' marks a mount point as available on
other nodes (default: false).
when migrating containers with non-volume mount points,
this new property is checked, and a migrat
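As an example of the intended usage, a container whose bind mount exists on
every node could then be configured roughly like this (paths and IDs are made
up; the exact syntax is whatever this series implements):

  # /etc/pve/lxc/123.conf (excerpt)
  rootfs: local-zfs:subvol-123-disk-1,size=8G
  mp0: /mnt/pve/nfs-export,mp=/data,shared=1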
---
just rebased
src/PVE/API2/LXC.pm | 14 +++---
src/PVE/CLI/pct.pm | 2 +-
src/PVE/LXC.pm | 2 +-
src/PVE/LXC/Config.pm | 14 +++---
src/PVE/LXC/Migrate.pm | 4 ++--
src/PVE/VZDump/LXC.pm | 6 +++---
6 files changed, 21 insertions(+), 21 deletions(-)
diff -
On Fri, Oct 14, 2016 at 12:08 PM, datanom.net wrote:
> On 2016-10-14 11:13, Andreas Steinel wrote:
>>
>> So, what was your test environment? How big was the difference?
>>
>> Are you running your ZFS pool on the proxmox node?
Yes, everything local on the node itself.
On 2016-10-14 11:13, Andreas Steinel wrote:
Hi Mir,
On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen
wrote:
I use virtio-scsi-single exclusively because of the huge performance
gain in comparison to virtio-scsi, so I can concur with that.
I just benchmarked it on a full-SSD-ZFS system of
>>So, what was your test environment? How big was the difference?
That's strange; there are technical differences between virtio-scsi &&
virtio-scsi-single.
with virtio-scsi-single you have 1 virtio-scsi controller per disk.
for iothread, you should see a difference with multiple disks in 1 VM.
This
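For completeness, that combination looks roughly like this in a VM config
(storage and disk names are placeholders):

  scsihw: virtio-scsi-single
  scsi0: local-zfs:vm-100-disk-1,iothread=1
  scsi1: local-zfs:vm-100-disk-2,iothread=1

With plain virtio-scsi all scsiN disks share one controller, which is why the
per-disk iothread only really pays off in the -single mode.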
Hi Mir,
On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen wrote:
> I use virtio-scsi-single exclusively because of the huge performance
> gain in comparison to virtio-scsi, so I can concur with that.
I just benchmarked it on a full-SSD-ZFS system of mine and got the reverse
results.
I used 4 cores,
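The thread doesn't say which tool was used; one way to compare the two
controller types from inside the guest would be something like fio (parameters
are arbitrary, and the target disk must be scratch space since this destroys
its contents):

  fio --name=randrw --filename=/dev/sdb --direct=1 --rw=randrw \
      --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based \
      --group_reporting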
applied
---
Note: already applied
Makefile | 4 ++--
changelog.Debian | 8
ubuntu-xenial.tgz | Bin 145659146 -> 145650761 bytes
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/Makefile b/Makefile
index 6a608c5..0f41f5a 100644
--- a/Makefile
+++ b/Makefile
@@ -2,7 +