On Wed, Jun 3, 2020 at 11:34 AM Andreas Steinel wrote:
> Hopefully, we'll see the storage plugin framework happening in the future,
> so that only additive changes are required in order to support new plugins.
> This would hugely improve third-party-support and solve the problem of
> integrating yet another Storage-Plugin.
[1] https://pve.proxmox.com/wiki/Developer_Documentation
[2]
https://deepdoc.at/dokuwiki/doku.php?id=virtualisierung:proxmox_kvm_und_lxc:proxmox_debian_als_zfs-over-iscsi_server_verwenden
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
Signed-off-by: Andreas Steinel
---
pvesm.adoc | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/pvesm.adoc b/pvesm.adoc
index 5340c3d..b76ce87 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -84,8 +84,8 @@ data to different nodes.
^1^: On file based storages, snapshots are
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
da?
>
> so we probably want: winver < 6 -> AC97 else intel hda ?
> or is there any special reason why non windows machines get ac97?
>
Yes, that makes sense. I'll try to implement the q35 thing.
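For reference, the QEMU arguments this would generate look roughly like the
following. The device names are real QEMU devices, but the ids and the exact
codec wiring are only my assumption, not the final patch:

  # winver < 6 (pre-Vista Windows): legacy AC97 codec
  -device AC97,id=sound0
  # newer Windows: Intel HDA controller plus a codec on its bus
  -device intel-hda,id=sound0 -device hda-duplex,id=sound0-codec0,bus=sound0.0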
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
guest for sharing files.
Based on the Windows version, one of two possible sound devices is
used, as described on the SPICE page of the wiki.
Signed-off-by: Andreas Steinel
Andreas Steinel (3):
Fix #2041: add spice webdav / folder sharing
fix #413: add SPICE audio device
---
PVE/QemuServer.pm | 32
1 file changed, 20 insertions(+), 12 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 173ae82..657cfad 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -607,6 +607,12 @@ EODESCR
default => "1 (autog
Adding the device and serial port for the service spice-webdavd on Linux and
Windows.
Signed-off-by: Andreas Steinel
---
PVE/QemuServer.pm | 5 +
PVE/QemuServer/PCI.pm | 1 +
2 files changed, 6 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1ccdccf..225f0c0
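For context, the folder-sharing channel on the QEMU command line looks
roughly like this; the channel name org.spice-space.webdav.0 is fixed by
SPICE, while the ids here are just examples:

  -device virtio-serial-pci,id=virtio-serial0
  -chardev spiceport,id=charchannel0,name=org.spice-space.webdav.0
  -device virtserialport,bus=virtio-serial0.0,chardev=charchannel0,name=org.spice-space.webdav.0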
If you enable SPICE, the audio device will be automatically added: Intel
HDA for newer Windows and AC97 otherwise.
---
PVE/QemuServer.pm | 9 +
PVE/QemuServer/PCI.pm | 1 +
2 files changed, 10 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 225f0c0..173ae82 100644
Hi Dominik,
On Wed, Jan 2, 2019 at 2:54 PM Dominik Csapak wrote:
> On 12/28/18 6:59 PM, Andreas Steinel wrote:
> > Adding the device and serial port for the service spice-webdavd on Linux
> and
> > Windows.
> >
>
> hi, thanks for the patch, a few points though
https://www.spice-space.org/spice-user-manual.html
Andreas Steinel (1):
Fix #2041: add spice webdav / folder sharing
PVE/QemuServer.pm | 5 +
PVE/QemuServer/PCI.pm | 1 +
2 files changed, 6 insertions(+)
--
2.11.0
the forums with grub2 was that the Debian version was
just too old.
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
swap,
compressed memory-backed, but still swap.
Unfortunately, zram-config is still not in Debian, but the Ubuntu
package works fine. It's just some systemd config files.
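Without the package, a minimal manual setup is just a few lines (the size is
an example):

  # load the module and size the compressed device
  modprobe zram num_devices=1
  echo 2G > /sys/block/zram0/disksize
  # turn it into swap with a higher priority than disk swap
  mkswap /dev/zram0
  swapon -p 100 /dev/zram0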
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
On Mon, Oct 1, 2018 at 3:53 PM Thomas Lamprecht wrote:
> Looks good to me, I'll let it still ripe a bit on the mailing list though,
> for other possible commenters :-)
Great!
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
mechanism, so I had to come up with a way to boot my
PVE ZFS, which worked, but is obviously not a rock solid solution.
https://pve.proxmox.com/wiki/Booting_a_ZFS_root_file_system_via_UEFI
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
I thought this API (the big JSON thing)
is generated somehow from the code itself, as a kind of documentation or
reference.
I'm willing to fix things if I find parts that are not so well documented.
Can you point
me to the file(s) and package(s) in question to change?
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
case - maybe
others, yet I only tried a subset of the API. I'm going to switch back
to interfacing and checking the API manually, which also works.
Thanks again for the explanation.
On Tue, Apr 17, 2018 at 8:43 AM, Fabian Grünbichler
wrote:
> On Mon, Apr 16, 2018 at 06:03:58PM +0200,
type" : "array"
}
}
The return type is a bit more complex:
root@pve-5-apitest:~# pvesh get /nodes/pve-5-apitest/storage
200 OK
[
{
"active" : 1,
"avail" : 23488102400,
"content" : "rootdir,images",
"enabled" : 1,
> already quite a few people over in #zfsonlinux that use it
> quite heavily.
And that is great. We need those people.
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
Hi Fabian,
On Wed, Apr 4, 2018 at 9:45 AM, Fabian Grünbichler
wrote:
> On Tue, Apr 03, 2018 at 08:45:59PM +0200, Andreas Steinel wrote:
>> Hi everyone,
>>
>> are you (Proxmox staff) actively testing encrypted ZFS or are you
>> waiting for the upstream "activation"?
Hi everyone,
are you (Proxmox staff) actively testing encrypted ZFS or are you
waiting for the upstream "activation"?
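For anyone who wants to play with it already, a dataset is encrypted at
creation time, roughly like this (the names are examples):

  # create a passphrase-encrypted dataset
  zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/secure
  # after a reboot, load the key before mounting
  zfs load-key rpool/secure && zfs mount rpool/secure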
--
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
Hi Dietmar,
On Wed, Mar 28, 2018 at 5:17 PM, Dietmar Maurer wrote:
> I thought OAuth2 is not even an authentication protocol, so how do you
> want to implement authentication on top of OAuth2? OpenID connect?
Both should work (at least with GitLab). I just tried - for another
project - the OAuth2
Hi,
Is OAuth2 on the list of features you want to have in PVE and if so,
is someone working on it?
We're migrating every service in our infrastructure step by step
towards OAuth2, and it would be great to authenticate against OAuth2
too.
Best,
Lnxbil / Andreas Steinel
-docs/api-viewer/apidoc.js
With kind regards / Mit freundlichen Grüßen
Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
On Sun, Feb 25, 2018 at 9:08 PM, Gandalf Corvotempesta
wrote:
> As I can see, old data still keep the old parity, with MD all data are
> rewritten thus making use of all disks for performance reason
Yes, data has to be "resynched" manually via send/receive - same
way as with an expansion of disks
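A minimal sketch of such a manual resync, assuming a pool rpool with a
dataset data (names are examples):

  # rewrite the data so it is striped over the new layout
  zfs snapshot -r rpool/data@rebalance
  zfs send -R rpool/data@rebalance | zfs receive rpool/data_new
  # after verifying the copy, swap the datasets
  zfs destroy -r rpool/data && zfs rename rpool/data_new rpool/data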
On Sat, Feb 24, 2018 at 11:16 AM, Gandalf Corvotempesta
wrote:
> In example, you can add/remove single disks
> You can change from any raid level to any raid level on the fly with no
> downtime or loss of redundancy
> You can grow/shrink a volume
Good points, especially the shrink, that doe
On Fri, Feb 23, 2018 at 7:11 PM, Gandalf Corvotempesta
wrote:
> MD is way more flexible than zfs
Please elaborate.
On Thu, Feb 22, 2018 at 6:04 PM, Gilberto Nunes
wrote:
>>> proven to work for decades now and small footprint
>
> Yes! EXT4 is very stable, but lacks advanced tools such as
> compression, dedup and RAID... Well AFAIK, ext4 doesn't have these
> features... Perhaps I am wrong?
Yeah
On Tue, Jan 30, 2018 at 2:05 PM, Geert Stappers
wrote:
> Which git repository has the file PVE/API2/LXC.pm ?
>
Check via dpkg:
root@proxmox ~ > dpkg -S PVE/API2/LXC.pm
pve-container: /usr/share/perl5/PVE/API2/LXC.pm
so
https://git.proxmox.com/?p=pve-container.git;a=history;f=src/PVE/
After a discussion on the forums [1], @fabian suggested to reopen this one.
What is the current way to import a whole ZFS tree into a container? My
solution was to have more bind mount devices and manually add each one of
them; stupid, but easy to implement.
[1]
https://forum.proxmox.com/threads/l
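The manual variant mentioned above boils down to one mount point entry per
child filesystem (VMID and paths are examples):

  pct set 100 -mp0 /rpool/data/projects,mp=/data/projects
  pct set 100 -mp1 /rpool/data/media,mp=/data/media
  pct set 100 -mp2 /rpool/data/archive,mp=/data/archive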
On Sun, Jan 21, 2018 at 1:14 PM, Gilberto Nunes
wrote:
> Why not use some server with Ubuntu 16.04, which is LTS 'till 2021?
> This Ubuntu brings iSCSI Enterprise Target (iscsitarget-dkms).
> I have implemented this with some customers (iscsitarget + zfs), and it works
> very well!
>
I also used it
On Sun, Jan 21, 2018 at 2:19 PM, Gilberto Nunes
wrote:
> > Proxmox only accepts COMSTAR, ISTGT and IET as ZFS-over-iSCSI backends.
> Is that right?
>
According to the source code, yes.
Hi everyone,
Inspired by a discussion on the forums, I tried to use istgt in Debian
Stretch (iscsitarget is not present anymore) so it could work with the
ZFS-over-iSCSI implementation. I'm unfamiliar with istgt in general but got
it to work by tweaking the example configuration and also adapted
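The gist of it, as far as I understand istgt (names and paths are examples,
and the config keys are recalled from the istgt sample config, so treat this
as a sketch):

  # create a zvol to export
  zfs create -V 32G rpool/iscsi/vm-100-disk-1
  # map it as a LUN in /etc/istgt/istgt.conf, roughly:
  #   [LogicalUnit1]
  #     TargetName vm-100-disk-1
  #     Mapping PortalGroup1 InitiatorGroup1
  #     UnitType Disk
  #     LUN0 Storage /dev/zvol/rpool/iscsi/vm-100-disk-1 Auto
  systemctl restart istgt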
Hi Adam,
Will you open source your Go library?
Best,
Andreas
For simple containers/systems it is probably OK, but I cannot back up a
multi-TB system on tmpfs.
Also, the backup is normally limited by CPU power or network
throughput, not storage speed (at least for my machines).
On Sun, Jul 23, 2017 at 7:40 PM, Martin Lablans wrote:
> Dear all,
>
> allowin
Hi Rickard,
Not a bad idea.
I "simulate" this by moving them to a specific VMID range and a
specifiy pool or unpack it into my ZFS backup space. If I need to use
it again, just repack the backup and restore on a PVE box.
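The workflow is essentially this (VMIDs, paths and storage names are
examples):

  # restore an archived guest under a dedicated "archive" VMID range
  qmrestore /tank/backup/vzdump-qemu-100.vma.lzo 9100 --storage local-zfs
  # or, for containers
  pct restore 9100 /tank/backup/vzdump-lxc-100.tar.lzo --storage local-zfs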
Best,
Andreas
On Fri, Jul 21, 2017 at 2:10 PM, Rickard Eriksson wrote:
> H
On Sat, Jun 17, 2017 at 5:05 PM, Michael Rasmussen wrote:
> On Sat, 17 Jun 2017 13:33:38 +0200
> Andreas Steinel wrote:
>
>>
>> You said that you do not have the required hardware, however, you do
>> not need additional hardware for that.
>>
>> I can provid
On Sat, Jun 17, 2017 at 10:33 AM, Michael Rasmussen wrote:
> On Sat, 17 Jun 2017 10:15:25 +0200
> Andreas Steinel wrote:
>
>> Hi Michael,
>>
>> On Fri, Jun 16, 2017 at 3:10 AM, Michael Rasmussen wrote:
>> > MPIO is not tested since I don't have the r
Hi Michael,
On Fri, Jun 16, 2017 at 3:10 AM, Michael Rasmussen wrote:
> MPIO is not tested since I don't have the required hardware to do so.
Is there more than adding another IP to the FreeNAS machine? I've done
that on a Debian-based target and it automatically logged into each
portal, so mult
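On the initiator side it really is little more than logging in to a second
portal (the IPs are examples):

  iscsiadm -m discovery -t sendtargets -p 10.0.0.10
  iscsiadm -m discovery -t sendtargets -p 10.0.1.10
  iscsiadm -m node --login
  # with multipath-tools installed, both paths appear as one device
  multipath -ll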
Hi mir,
On Thu, Jun 15, 2017 at 7:23 PM, Michael Rasmussen wrote:
> On Thu, 15 Jun 2017 18:54:29 +0200
> Andreas Steinel wrote:
>> And I can't see how ZFS-over-NFS would be any different than
>> ZFS-over-iSCSI? Am I seeing this so wrong? What are the use cases for
>> ZF
On Thu, Jun 15, 2017 at 1:04 PM, Michael Rasmussen wrote:
> I still don't see a use case for this?
And I can't see how ZFS-over-NFS would be any different than
ZFS-over-iSCSI? Am I seeing this so wrong? What are the use cases for
ZFS-over-iSCSI exactly?
On Thu, Jun 15, 2017 at 10:50 AM, Michael Rasmussen wrote:
> Hi Andreas,
> Even better, I can send you the deb packages?
That would also work, I then can create a diff to see what you changed
and try to incorporate my idea with ZFS-over-NFS.
Hi mir,
do you have a public git repository with all the changes? I'd like to try, but I
don't want to extract all the patches from this list, figure out
which ones are already taken care of, and manually apply them.
Best,
LnxBil
On Wed, Jun 14, 2017 at 9:55 PM, Michael Rasmussen wrote:
> Signed-off-by:
On Fri, May 26, 2017 at 11:14 PM, Michael Rasmussen wrote:
> For containers via nfs you can just use the current nfs in freenas.
Yeah, but that is not the point. I want snapshot capability inside
Proxmox VE for NFS, the same as ZFS-over-iSCSI provides for iSCSI.
I talked about this some time ago
Hi mir,
I'd really love to have ZFS-over-NFS for containers. I'd like to
assist if you like and if you can explain how you set up your test
environment.
Best,
Andreas Steinel
(LnxBil)
On Fri, May 26, 2017 at 6:20 PM, Michael Rasmussen wrote:
> Yes, I intend to upstream this plugi
Also a good idea would be docker-machine support for Proxmox VE. I started
a driver for that (docker vocabulary) but I'm no Go programmer, so I lost
interest after a few hours of painfully slow trial-and-error cycles.
On Mon, Mar 13, 2017 at 8:11 PM, Gilberto Nunes
wrote:
> Never more!
>
> I found
Hi,
on the forums, there are often problems regarding improper upgrades of
Proxmox VE. Maybe this can be handled better by a small hack.
Unfortunately, I did not find any apt hook to fire in such a situation and
I don't think it's manageable to handle this in the apt-get source package,
so a simpl
migrated to other node with less load... Something like
> that...
> Sorry if I messed up the concepts...
> Perhaps you can enlight me.
>
> Regards
>
> 2017-03-01 18:25 GMT-03:00 Andreas Steinel :
>
>> Hi Gilberto,
>>
>> What do you want to load balance? PV
Hi Gilberto,
What do you want to load balance? PVE is an IaaS architecture, and
normally you would implement balancing in a SaaS architecture, so one
layer higher (towards the application).
Best,
Andreas
On Wed, Mar 1, 2017 at 12:31 PM, Gilberto Nunes
wrote:
> Hello friends
>
> Is there any plan
On Sun, Feb 19, 2017 at 3:33 PM, Michael Rasmussen wrote:
>
> No matter what you think this is the truth.
> https://wiki.list.org/DOC/I%20use%20Gmail-Googlemail%2C%20but%20I%20can%27t%20tell%20if%20any%20of%20my%20messages%20have%20been%20posted%20to%20the%20list
I can confirm that gmail "
Hi Detlef,
I really cannot understand why you do not create your own templates
automatically instead of blaming Proxmox. I create my own dab templates
with my own mirrors and generate a whole bunch of them weekly with my
settings, keys and so on for internal and external use, such that you can
cho
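For reference, building such a template boils down to a handful of dab calls
in a directory with a prepared dab.conf (the extra packages are an example):

  dab init        # fetch the package indexes
  dab bootstrap   # install the base system
  dab install openssh-server vim
  dab finalize    # pack the appliance into a template archive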
Good! I presume you did not implement the variable page block size due to
the "controlled kernel" you provide, which always has default 4K pages on
x64?
On Wed, Jan 25, 2017 at 4:03 PM, Fabian Grünbichler <
f.gruenbich...@proxmox.com> wrote:
> set properties according to upstream documentation
>
>
- Original Mail -
> From: "Fabian Grünbichler"
> To: "pve-devel"
> Cc: "aderumier" , "Andreas Steinel" <
> a.stei...@gmail.com>
> Sent: Thursday, 19 January 2017 09:35:43
> Subject: transparent huge pages support / disk passthrough corr
Only some experience:
I was not able to easily interchange simple OVA files generated by VMware with
OVA files generated by VirtualBox without manual intervention. If you build
something that works, that should be fine. I also tried to import OVA
exported from VMware into Xen and it also did not work, not ev
Hi Marco,
I faced a similar problem and we ended up using different storages for
different groups of people like this:
ZFS base and Proxmox VE with different groups, e.g. group1
rpool/group1/data (for KVM and LXC)
rpool/group1/local as filesystem for templates
rpool/general/local as filesystem
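Setting up such a group is roughly (the names are examples):

  zfs create -p rpool/group1/data
  zfs create rpool/group1/local
  pvesm add zfspool group1-data --pool rpool/group1/data --content images,rootdir
  pvesm add dir group1-local --path /rpool/group1/local --content vztmpl,iso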
rvers (not only Proxmox VE) and find it very useful, because it's a
"when" it crashes, not an "if". For most customers, it's very
important to at least report a crash back to support and it's always
nice to have a crashdump ready for that.
What do you guys
On Fri, Dec 9, 2016 at 1:17 AM, Alexandre DERUMIER
wrote:
>
> - implement a gui/tool to manage multiple clusters
>
That would be awesome!
Hi everyone,
I do not know if this is a real bug or simply undocumented behaviour,
but if I set up a masqueraded, private bridge (e.g. with
https://pve.proxmox.com/wiki/Network_Model#Masquerading_.28NAT.29_with_iptables)
everything works as long as I do not enable firewalling for the containers
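For reference, the setup from the wiki boils down to an
/etc/network/interfaces stanza like this (the subnet is an example):

  auto vmbr1
  iface vmbr1 inet static
      address 10.10.10.1
      netmask 255.255.255.0
      bridge_ports none
      bridge_stp off
      bridge_fd 0
      post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
      post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
      post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE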
On Fri, Oct 14, 2016 at 9:15 PM, Michael Rasmussen wrote:
> On Fri, 14 Oct 2016 19:56:04 +0200 Andreas Steinel
> wrote:
> > Isn't there a chicken and egg problem now? Where is the name defined if I
> > install via PXE in an automatic fashion?
>
> For Debian based
reating a VM, but we still need to identify it afterwards if you see it in
the GUI. Therefore I add the IP/Hostname to the comment such that I can
find it easily.
On Fri, Oct 14, 2016 at 7:01 PM, Michael Rasmussen wrote:
> On Fri, 14 Oct 2016 16:59:52 +0200
> Andreas Steinel wrote:
>
>
On Fri, Oct 14, 2016 at 4:45 PM, Michael Rasmussen wrote:
> On Fri, 14 Oct 2016 16:09:48 +0200
> Andreas Steinel wrote:
>
> >
> > How do you guys solve this problem in big environments? Is there a
> simpler
> > way I don't see right now?
> >
> You
Hi,
I'd like to discuss a feature request about having a "real" hostname on KVM
machines or some other mechanism to solve my problem.
I have a rather big environment with over a hundred KVM VMs and also
different networks including different DNS settings. Currently I "encode"
further VM informati
On Fri, Oct 14, 2016 at 12:08 PM, datanom.net wrote:
> On 2016-10-14 11:13, Andreas Steinel wrote:
>>
>> So, what was your test environment? How big was the difference?
>>
>> Are you running your ZFS pool on the proxmox node?
Yes, everythi
Hi Mir,
On Fri, Oct 14, 2016 at 8:02 AM, Michael Rasmussen wrote:
> I use virtio-scsi-single exclusively because of the huge performance
> gain in comparison to virtio-scsi so I can concur to that.
I just benchmarked it on a full-SSD ZFS system of mine and got the reverse
results.
I used 4 cores,
On Mon, Oct 10, 2016 at 4:27 PM, Alexandre DERUMIER wrote:
>>>Having e.g. LXC working over NFS with ZFS on another server.
>
> Do you want to manage snapshot/clone on the ZFS server?
Yes, the purpose is to use a ZFS-based storage on multiple nodes
without iscsi. I don't know if Proxmox VE LXC is
This is a similar thing that I wanted to discuss with my ZFS-over-NFS
question a while ago. Having e.g. LXC working over NFS with ZFS on
another server.
On Mon, Oct 10, 2016 at 1:57 PM, Alexandre DERUMIER wrote:
> I think the more difficult part is to manage iscsi lun add|remove with
> iscsiadm.
Thank you Thomas, I really like the discussion
On Thu, Sep 29, 2016 at 8:09 AM, Thomas Lamprecht
wrote:
> [...]
>
> The ORAC and a Proxmox VE do different stuff, one is a application with
> quasi fail-silent characteristics running on the application level, the
> other is an operating system runn
On Wed, Sep 28, 2016 at 6:30 PM, Michael Rasmussen wrote:
> If you search the forum you will find a step guide for using a rpi as third
> node.
Yes, I know that.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailma
documentation of this all (which I'm doing
> atm).
>
> cheers,
> Thomas
>
>
>
> On 09/28/2016 03:26 PM, Andreas Steinel wrote:
>>
>> Hi,
>>
>> I'd like to ask if there are any plans to use e.g. the shared storage
>> as a quorum/voting
Hi,
I'd like to ask if there are any plans to use e.g. the shared storage
as a quorum/voting disk, like the Oracle Grid Infrastructure has done to
get a two-node HA cluster (for almost a decade). This obviously only
works for NAS or SAN storage.
Best,
Andreas
On Mon, Jul 25, 2016 at 9:07 PM, Dmitry Petuhov
wrote:
> Why not use qcow2 format over generic NFS? It will give you
> snapshot-rollback
> features and I don't think that with much worse speed than these features
> on ZFS level.
>
I also want send/receive, and I use QCOW2 on top of ZFS so it can
be used in a cluster.
I really miss a cluster filesystem with snapshot capability (like the one
from Proxmox VE) which can be used by "ordinary" users.
On Mon, Jul 25, 2016 at 6:02 PM, Michael Rasmussen wrote:
> On Mon, 25 Jul 2016 17:52:58 +0200
> Andreas Steinel wrote:
>
> >
Hi there,
are there any plans to support ZFS filesystems exported by NFS to be
managed by Proxmox? It'd be great to have ZFS-backed storage similar to
ZFS-over-iSCSI, but with file storage instead of block storage.
Is this even a wanted feature?
Best,
Andreas
That sounds very good, Martin. I'd suggest adding settings for automatic
watchdog and guest agent (GA) as well.
I could also add a page to the wiki describing how to preseed/kickstart
with an automatic detection of these features and installing the
appropriate tools inside the guest (only of course if preseed/kic
It is also used for backups (e.g. filesystem freeze), which is very handy
for hook scripts.
One thing I couldn't find is how long the timeout for the agent to respond
is. If some hook scripts take too long, I get timeouts while backing up.
It would be great to know the limit and maybe increase it.
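For reference, qemu-guest-agent calls /etc/qemu/fsfreeze-hook (when started
with -F) around every freeze, passing the phase as argument, so a hook script
is basically this (the logged messages are examples):

  #!/bin/sh
  # called by qemu-ga with "freeze" before and "thaw" after the snapshot
  case "$1" in
      freeze) logger "quiescing application before snapshot" ;;
      thaw)   logger "resuming application after snapshot" ;;
  esac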
It'd be great if we could have ZFS (L2)ARC stats as well as IOPS for the
pool. Maybe if this integration problem is solved properly, we'll have a
simple pipeline for adding new things.
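The raw numbers are already exposed by the kernel modules, e.g.:

  # ARC / L2ARC counters straight from the SPL kstats
  grep -E '^(size|c_max|hits|misses|l2_hits|l2_misses) ' /proc/spl/kstat/zfs/arcstats
  # per-pool IOPS and bandwidth, one sample per second
  zpool iostat -v rpool 1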
On Mon, Jul 11, 2016 at 11:47 PM, Darryl Dixon
wrote:
> Hi,
>
> I have been working through the logistics of providing a
Hi Wolfgang,
sure. #1050
On Fri, Jul 8, 2016 at 10:45 AM, Wolfgang Bumiller
wrote:
> On Thu, Jul 07, 2016 at 05:26:48PM +0200, Andreas Steinel wrote:
> > Hi,
> >
> > I currently only have one big 3.4 install (>150 VMs), on which I compared
> > the generated MA
Hi,
I currently only have one big 3.4 install (>150 VMs), on which I compared
the generated MACs and found out that they are completely random. Are there
plans, or is there perhaps already an implementation, to generate them only
from a specific region? I did not find any changes in the roadmap.
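A sketch of what I mean by "specific region": keep a fixed, locally
administered prefix and only randomize the tail (the prefix is arbitrary):

  # 02:... = locally administered; only the last three octets are random
  printf '02:DE:AD:%02X:%02X:%02X\n' \
      $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))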
I also tried that, yet depending on the order of mounting and starting of
some PVE daemons, the files get recreated and then the ZFS could not be
mounted again.
On Wed, May 18, 2016 at 5:42 AM, Dietmar Maurer wrote:
> Not sure, but maybe http://bindfs.org/ can help here?
>
Could be, but fuse is not known for its speed. I'll stick with the normal
bind mounts.
On Tue, May 17, 2016 at 3:47 PM, Dietmar Maurer wrote:
> > Because ZFS filesystems are cheap and should be used everywhere ... at
> > least as I have read.
>
> But such setup it is really clumsy for container as long
> as there is no recursive bind mount.
>
Unfortunately, that's right. Therefore
On Sun, May 15, 2016 at 11:48 AM, Dietmar Maurer
wrote:
> > I have a filesystem called images with child filesystems of each year
> going
> > back to the 80s.
>
> And what is the purpose of such setup? Wouldn't it be simpler to merge
> those old child file systems into a single one?
>
Because ZF
Hi Dietmar,
On Sun, May 15, 2016 at 7:19 AM, Dietmar Maurer wrote:
>
> > I needed more (bind of ZFS filesystem with a lot of children) so I
> > increased the maximum in /usr/share/perl5/PVE/LXC/Config.pm and it worked
> > fine.
>
> Would you mind explaining why you need more than 10 mounts? How m
Hi,
is there a reason for the maximum of 10 bind mounts for LXC?
I needed more (bind of ZFS filesystem with a lot of children) so I
increased the maximum in /usr/share/perl5/PVE/LXC/Config.pm and it worked
fine.
Best,
Andreas
On Fri, May 13, 2016 at 8:32 AM, Alexandre DERUMIER
wrote:
> >>Also, allocation of 1GB pages can fail due to fragmentation, while
> >>allocation of 2MB pages still works?
>
> yes.
> But this is a little bit more complex to manage.
>
In my experience, it always works, but you might end up with a
Normally hugepages are set up via sysctl.conf. The default size is always 2 MB.
No need for kernel command-line editing.
Why exactly are you using hugepages? Can KVM handle hugepages? Normally
hugepages imply no KSM, isn't that right?
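For the default 2 MB pages, a sysctl is really all there is to it; only other
page sizes need the kernel command line (the counts are examples):

  # reserve 1024 x 2 MB hugepages at runtime
  echo 'vm.nr_hugepages = 1024' >> /etc/sysctl.conf
  sysctl -p
  # 1 GB pages, in contrast, must be reserved at boot:
  #   default_hugepagesz=1G hugepagesz=1G hugepages=4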
On Thu, May 12, 2016 at 12:57 PM, Dietmar Maurer
wrote:
> > host
The host vendor. This is strange if it is presented as Pentium but is
indeed AMD.
On Sat, Apr 30, 2016 at 4:50 PM, Dietmar Maurer wrote:
> Thanks for the patch!
>
> I am curios - what does qemu use if we do not set that flag?
>
>
Signed-off-by: Andreas Steinel
---
PVE/QemuServer.pm | 22 ++
1 file changed, 22 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0e3a95e..670cf90 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3076,6 +3076,28 @@ sub config_to_command
From e18310a44ac11b0de4faefc0e7cabdf6e052a8be Mon Sep 17 00:00:00 2001
From: Andreas Steinel
Date: Thu, 14 Apr 2016 19:58:47 +0200
Subject: [PATCH] Fixes missing dependency
If you only install @dab@ you'll end up with this error:
root@dab-pve-builder:/# dab
Can't l
Hi all,
I don't know if this is news or not, but I encountered this problem once a
week for a couple of weeks and finally, there is hope. Hopefully the patch
will be available soon in the upstream Ubuntu kernel.
https://github.com/zfsonlinux/zfs/issues/4355
Best,
Andreas
Hi Stefan,
That's really slow.
I use a similar setup, but with ZFS, and I back up 6 nodes in parallel to the
storage and saturate the 1 Gbit network connection.
I use LZOP on the Proxmox side as the best tradeoff between size and
online-compression speed.
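In vzdump terms (VMID, storage and mode are examples):

  vzdump 100 --compress lzo --storage backup-nfs --mode snapshot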
On Tue, Feb 16, 2016 at 9:22 AM, Stefan Prie
Nice!
On Tue, Nov 17, 2015 at 5:39 PM, Alexandre Derumier
wrote:
> Signed-off-by: Alexandre Derumier
> ---
> PVE/QemuServer.pm | 16
> 1 file changed, 16 insertions(+)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 81a1c84..87b7d20 100644
> --- a/PVE/QemuServer
Hi,
I ran into the PXE boot DHCP problem, which can be solved by a more recent
version of the iPXE stack in QEMU. (
http://forum.proxmox.com/threads/19306-PXE-booting-VMs-not-working?p=120764#post120764
)
Booting an iPXE CD works just fine, and I used today's git version to
build the virtio-net.r
Hi all,
Due to the lack of non-anonymous bind, I solved it by building a
replicating LDAP instance bound only to localhost on each Proxmox node. This
is a pain in the ass and very error-prone, especially on schema changes,
which have to be propagated to all nodes.
On Thu, Oct 8, 2015 at 11:57 AM,
I'd really want to add the requirement of 4K-block-aligned
storage in backup.txt, which would really be another big
advantage of the vma backup mechanism.
Best regards,
Andreas Steinel
nown.
>
> How much memory do you use in your VM? What is the database software?
>
> - Original Mail -
> From: "Andreas Steinel"
> To: "Daniel Hunsaker"
> Cc: "aderumier" , "dietmar" ,
> "pve-devel"
> Sent: Wednesday, 9