applied
___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
comment inline ...
> I have rethought reloading the config, without an extra daemon:
>
>
> "
> datacenter level:
> - commit config: mv /etc/pve/networks.cfg.new /etc/pve/networks.cfg
> - call each (online) node's reload API.
>
>
> local node:
> reload api: -> gen
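The commit-then-reload flow sketched above could look roughly like this (a Python sketch; the path and the per-node reload call are stand-ins, not the actual PVE implementation):

```python
import os

def commit_network_config(cfg_path, staged_text):
    """Write the staged config to <cfg>.new, then atomically rename it over
    the live file -- the datacenter-level commit step described above.
    cfg_path stands in for /etc/pve/networks.cfg (an assumption)."""
    new_path = cfg_path + ".new"
    with open(new_path, "w") as f:
        f.write(staged_text)
    # ... validation of new_path would happen here ...
    os.rename(new_path, cfg_path)  # atomic replace within one filesystem
    # ... then each online node's reload API would be called ...
```

The rename is what makes the commit atomic: readers see either the old or the new file, never a half-written one.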
applied
> On 03 May 2019 at 11:00 Alexandre Derumier wrote:
>
>
> Alexandre Derumier (7):
> vxlanmulticast: add mtu to vxlan interface too.
> vnet: dynamic require of qemuserver && lxc
> vnet: rename read_local_vm_config to read_cluster_vm_config
> vnet: update_hook: verify if tag alrea
maybe it is worth introducing a volume_exists() helper?
> On 30 April 2019 at 14:20 Mira Limbeck wrote:
>
>
> use file_size_info to check for existence of cloudinit disk instead of
> '-e'. this should solve the problem with rbd where the path returned by
> PVE::Storage::path is not checkable w
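A volume_exists() helper along these lines might be sketched as follows (hypothetical: the size probe is a stand-in for a storage-layer call like file_size_info, which also works for volumes such as rbd whose path cannot be tested with -e):

```python
def volume_exists(size_probe, volid):
    """Check existence via a size probe instead of a filesystem test.

    size_probe is any callable that returns the volume size or raises
    when the volume is missing -- a stand-in for the storage layer.
    """
    try:
        size_probe(volid)
        return True
    except (OSError, KeyError):
        return False
```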
We need that if tasks run inside multi-threaded applications (several
tasks inside one process).
---
Utils.js | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/Utils.js b/Utils.js
index 93ccc01..a9843a3 100644
--- a/Utils.js
+++ b/Utils.js
@@ -571,17 +571,20 @@ Ext
applied, a few questions inline - I am not really happy with this patch.
> On 04 April 2019 at 16:12 Alexandre Derumier wrote:
>
>
> ---
> PVE/API2/Network/Network.pm | 3 +-
> PVE/Network/Network/VnetPlugin.pm | 58
> +++
> 2 files changed, 60 insert
applied
> >>I agree. Also that low level C/corosync hacking is no real fun ...
> >>
> >>Another option is to use pve-daemon to apply the settings.
>
> I wonder if we could re-use the pve-firewall daemon, and transform it to
> something global for all network things.
>
> for now network config, but they
> On 04 April 2019 at 12:16 Stoiko Ivanov wrote:
>
>
> On Thu, 4 Apr 2019 11:57:38 +0200 (CEST)
> Alexandre DERUMIER wrote:
>
> > > But how does it work ? who is currently listening for changes in
> > > pmxcfs ? (through inotify?)
> >
> > >>This is low-level C-code inside pmxcfs (corosync)
> >>So the idea is to detect network.cfg changes inside pmxcfs, and if we
> >>detect changes
> >>do a network reload.
> >>
> >>That way we can apply the config without an additional daemon - sounds good.
>
> Sounds good. (so we can do changes in network.cfg.tmp, still have the test
> button(api c
applied
> Two ideas that came up in my head (not sure if they are good or
> sensibly implementable):
>
> * The networking config has the common property with the corosync
> configuration (the chicken and egg problem - if it's wrong the
> cluster cannot push the corrected config to a broken node) so wh
applied
> >>or locally assigned ip addresses or routes. A copy of
> >>/etc/network/interfaces does not provide all necessary information.
>
> What do you mean by locally assigned? Manually, with the ip command?
> Because it'd be overwritten by a network service restart/reload. (if the
> interface is defi
> > >>I think of this like deploying a network configuration with ansible (or
> > >>other tools).
> >
> > Do you have an idea where to report a local error configuration ?
>
> Maybe an extra file inside /etc/pve/nodes/<node>/ ...
>
> (not sure about that).
Please ignore that suggestion. I guess it
applied
applied
> >>It is still unclear to me how you do those tests? AFAIK, ifreload does not
> >>have a --dry-run option.
> with ifupdown2, ifreload -a --no-act.
> (+ tests with our current read_network_interface code)
Ok, thanks. (This flag is not documented in the manual page).
>
> >>Even when it has suc
> I have rethought it; I have (again ;) a new idea for implementation.
>
> The main problem is how to test a change at datacenter level, as we need to
> test the local configuration of each node.
>
> and it's not currently in /etc/pve , but in /etc/network/interfaces of each
> node.
I
applied, some questions below:
> On 03 April 2019 at 00:19 Alexandre Derumier wrote:
>
>
> add vnet api
> reorganize plugins to Network/Transport and Network/Vnet
I wonder if it would be possible to use a single vnet config file?
You currently use:
/etc/pve/network/vnet.cfg
/etc/pve/network/tr
applied
> On 02 April 2019 at 12:09 Alexandre Derumier wrote:
>
>
> - Add a small fix on vlanplugin vlan-aware option
> - Implement network transport api
> maybe better:
>
> in gui, at network,datacenter level
>
> at each change, make a
> /etc/pve/networks/vnet.cfg.
>
>
> on the local node, the daemon detects the new version, makes its verification,
> and updates /etc/pve/nodes/<node>/.networkconfigstatus
>
> version: verify:ok
I don't really get why you want t
> I think it's better than blindly trying to reload the network config on each
> change, as network is critical,
> and sometimes admins need to change multiple parameters and apply them at once.
Can't we simply add a manual "apply" button for now? Just by using backup
config files:
vnet.cfg.new
vnet.cfg
?
I have always tried to avoid such things. They are clumsy and error prone
> I have thought about a way to generate the config and reload it on different
> nodes
>
>
> "
>
> make changes in /etc/pve/network/*.cfg
>
> at datacenter level, network panel , click button ->verify config,
> this cr
applied
> On 29 March 2019 at 00:23 Alexandre Derumier wrote:
>
>
> - vlan-protocol need to be defined on vlan interface, not bridge
> - remove check from duplicate interface in vlan plugin,
> and do it in INotify read network interfaces. (patch sent for pve-common)
>
> Alexandre Derumier
> This is a first attempt to create a pve-network package,
> to allow defining networks at the datacenter level (vnet)
> with a plugin infrastructure to handle different kind of network
> (vlan,vxlan)
Great! I just set up a new repository for this, and applied a few cleanups:
https://git.proxmox.
This looks way too complicated to me ... Do we really want to maintain
that, considering there are very few users?
> On 27 March 2019 at 11:16 Wolfgang Bumiller wrote:
>
>
> Another round of u2f patches. The u2f parts are now always stored in
> /etc/pve/priv/tfa.cfg. pve-access-control now con
> I'm still working on it, but after some discussions with my co-workers using
> vmware a lot and with students at the last training,
> I have some changes for proposal.
>
> 1)
>
> in /etc/network/interfaces, don't use "transport-zone" as name for option,
> but use "uplink", this is the name in vmware, s
> Maybe we could reuse pvestatd?
maybe
> maybe we could add a version parameter in /etc/pve/networks.cfg (the user needs
> to increment it to apply the config on different nodes, like pushing a
> "commit" button in the gui),
>
> then pvestatd simply needs to compare this version with the local version (should
> On February 28, 2019 at 9:20 AM Alexandre DERUMIER
> wrote:
>
>
> >>Or just activate when needed (at VM start)? But yes, a separate config is
> >>preferable.
>
> Another thing is if we want to update the config (change the multicast address,
> add a new unicast node, ...),
> when the vm are alr
> >>Not sure if we need those extra switch settings?
>
> yes, indeed, I think something like vnet[0-4096] could be better,
>
> Can't we combine
> >>switch and transportzones? i.e.
> >>
> >>vnet1: vxlanfrr
> >>name: zone4 # not really required
> >>transportzone zone4
> >>
> I'll work next week on /etc/pve/networks.cfg,
great!
> I have taken time to polish the config files; I would like to have some
> feedback before coding.
>
>
> 1) add transportzone in /etc/network/interface.
> only on physical interfaces (eth/bond), not tagged interfaces.
> This is only
> That means that when we do a live migration,
> the rules are not applied until the config file is moved. (and the vm resumes
> just after).
>
> So, we can have some seconds where the rules are not yet applied.
>
>
> I'm not sure how we could handle this correctly ?
>
> 1) force rules update after th
CI has hundreds of options, and we should not try to configure all
that stuff with proxmox. This was intentionally left out.
I think the user should configure such things inside the vm instead.
> On January 29, 2019 at 3:18 PM David Limbeck wrote:
>
>
> package_upgrade default is still 'true'
> > Yes - that is exactly my point (it makes no sense to have Sys.PowerMgmt on
> > node).
>
> It makes full sense. The way this is intended to work is: we have three nodes
> A, B, C. C is powered off. A user has the Sys.PowerMgmt permission on only
> node C. As it's powered off he natu
> > check => ['perm', '/nodes/', [ 'Sys.PowerMgmt' ]],
> >
>
> No, this does not get proxied to the {node},
Yes - that is exactly my point (it makes no sense to have Sys.PowerMgmt on
node).
> +permissions => {
> + check => ['perm', '/nodes/{node}', [ 'Sys.PowerMgmt' ]],
> +},
You can wake up any host in the network? If so, we may want to
restrict that too:
check => ['perm', '/nodes/', [ 'Sys.PowerMgmt' ]],
Not sure about that
I suggest to always use the zone id as prefix for vlan/vxlan
devices. It's simple to implement and avoids problems in the future.
Although most people will have only one zone?
> On December 13, 2018 at 11:46 AM Alexandre DERUMIER
> wrote:
>
>
> looking at kernel code in this patch
> https://lo
> On December 13, 2018 at 9:12 AM Alexandre DERUMIER
> wrote:
>
>
> >>I just noticed that we can have v(x)lan IDs multiple times,
> >>once for each transport zone? So we need a better
> >>naming scheme, for example:
>
> >>vxlan2 in zone1 => z1vxlan2
> >>vxlan2 in zone2 => z2vxlan2
>
> it's not
> >>Do we want to name "transport zones"?
> maybe, not a big fan of id without meaning.
I just noticed that we can have v(x)lan IDs multiple times,
once for each transport zone? So we need a better
naming scheme, for example:
vxlan2 in zone1 => z1vxlan2
vxlan2 in zone2 => z2vxlan2
Network device
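The proposed naming scheme is easy to express as a small helper (hypothetical, not PVE code; the 15-character cap reflects the Linux limit on interface names):

```python
def zone_devname(zone_id, kind, vid):
    """Build a zone-prefixed device name, e.g. z1vxlan2 for vxlan 2 in
    zone 1, so the same v(x)lan ID can exist once per transport zone."""
    name = f"z{zone_id}{kind}{vid}"
    if len(name) > 15:  # Linux interface names max out at 15 characters
        raise ValueError(f"interface name too long: {name}")
    return name
```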
> >>That "transport zone" looks interesting.
>
> >>You just mark physical interfaces (or bridges?) as part of a transport
> >>zone.
> >>Then we have everything to setup the vxlan (each vlan belongs to a zone)?
>
> yes, it should work. (interface could be better I think).
>
> /etc/network/i
> I need to check, they have a concept of "transport zone", which seems to be an
> abstraction between distributed switch and physical host. (and some kind of
> vrf/vlan isolation)
> https://www.youtube.com/watch?v=Lsgz88OvxDk
That "transport zone" looks interesting.
We you just mark physical interface
> >>Another way could be to make some kind of template on each local host. (as we
> >>only need to duplicate them for each vlan/vxlan).
I would really prefer a declarative configuration style instead.
applied, thanks.
> I'll have time to work again on the /etc/pve/network.cfg idea.
>
> I don't know if you have some time to check my idea about using ifupdown2
> "alias"
IMHO this looks like a hack - I wonder how VMware associates the global net to
local devices on the host?
> BTW,talking with students on last tra
> Just to throw in another idea:
> How about using something like shorewall (shorewall.net) to handle the
> whole firewall generation code from a higher level. I'm using it in
> really complex setups for years and I am very happy with it. (I know
> this won't solve the nftables problem right no
applied
> but I can't get any log for a vm rule with a drop/reject.
>
> It only works with the default vm drop/reject action.
Yes. We currently try to keep the log rate as low as possible.
> I found an old patch about adding log by rules
> https://pve.proxmox.com/pipermail/pve-devel/2017-September/028816.htm
applied, thanks.
BTW, we have an extra list for PMG development:
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pmg-devel
applied.
> >>Feel free to choose a better name ;-) We can then mark this API as
> >>unstable/experimental, and modify
> >>the parameters/types. IMHO most existing parameters do not really make
> >>sense with external migration.
> >>I guess it is still possible to factor out most common code to avoid c
> > +
> > + my $storage_is_shared = $cfg->{ids}->{$storeid}->{shared};
> > + $storage_is_shared = defined($storage_is_shared) ?
> > $storage_is_shared : 0;
>
> above 2 lines look quite strange; at least I do not understand what they do
> exactly. Simply use:
>
> my $storag
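The simplification being hinted at is a plain defaulting read; in Python terms (a sketch mirroring the quoted Perl, not the actual patch):

```python
def storage_is_shared(cfg, storeid):
    """Return the storage's 'shared' flag, defaulting to 0 when unset --
    one expression instead of the two-line defined() check quoted above."""
    shared = cfg["ids"][storeid].get("shared")
    return shared if shared is not None else 0
```

In the Perl original this collapses to a single defined-or assignment.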
I would like to move forward with that, but changing an existing API makes that
difficult.
I would suggest to add a second API entry point instead:
__PACKAGE__->register_method({
name => 'external_migrate_vm',
path => '{vmid}/external_migrate',
method => 'POST',
...
Feel free to cho
comments inline
> diff --git a/PVE/API2/Storage/Content.pm b/PVE/API2/Storage/Content.pm
> index e941cb6..5c75d87 100644
> --- a/PVE/API2/Storage/Content.pm
> +++ b/PVE/API2/Storage/Content.pm
> @@ -285,6 +285,11 @@ __PACKAGE__->register_method ({
> type => 'string',
>
> If an image on a storage is not referenced by any guest or
> replication config, we can safely delete it in the GUI.
> Also, if a config exists on another node, we can delete it too.
Only if the image is on local storage ...
> But if an image has a vmid encoded in the image name and a guest
> exists
applied without changes to www/manager6/node/Config.js
> On October 30, 2018 at 10:33 AM David Limbeck wrote:
>
>
> workaround to keep the subscription popup on login even without 'Sys.Audit'
> permissions but remove the subscription menu in the GUI for unauthorized
> users
>
> Signed-off-by:
I am quite unsure if I want to add that. The current strategy was
to add minimal network configuration support. The rest should be
done using automation tools. I am aware of users with hundreds
of IP addresses! Configuring them inside the VM config would be
a big mess.
applied
applied, but fixed version to "18.10" in commit message
applied
Do not add/extract standard options if the method itself defines properties
using the same names (like 'quiet').
Signed-off-by: Dietmar Maurer
---
Changes in v2:
- remove unused code (as suggested by David)
PVE/CLI/pvesh.pm | 28 ++--
1 file changed, 22
Do not add/extract standard options if the method itself defines properties
using the same names (like 'quiet').
Signed-off-by: Dietmar Maurer
---
PVE/CLI/pvesh.pm | 31 +--
1 file changed, 25 insertions(+), 6 deletions(-)
diff --git a/PVE/CLI/pvesh.pm
> >>Please can you try to solve those issues marked as 'clean me'?
>
> I'm not sure what is the best/cleanest way to read/write and parse
> /etc/network/interfaces ?
I think the current way is OK, but we can improve
the error handling and pass the correct filename, i.e.
#clean-me
my $fh =
applied, and added a few cleanups on top.
Please can you try to solve those issues marked as 'clean me'?
> On October 2, 2018 at 9:19 AM Alexandre Derumier wrote:
>
>
> This add a new api to online reload networking configuration
> with ifupdown2.
>
> This work with native ifupdown2 modules,
looks wrong to me - will do some tests next week.
> On October 5, 2018 at 8:46 PM Fabian Grünbichler
> wrote:
>
>
> since Tools is used by the simulator as well, which does not need
> PVE::Cluster otherwise.
>
> the bash completion methods are only used by ha-manager's CLI tools, and
> parse
I thought we all agreed that swap does not make sense on
high-memory systems?
> On Mon, Oct 01, 2018 at 07:43:41PM +0200, Fabian Grünbichler wrote:
> >
> > wouldn't it make more sense to create a swap partition on all boot vdevs
> > instead? if you need swap, this is very cumbersome to do after
>
fixes https://bugzilla.proxmox.com/show_bug.cgi?id=1936
> On October 1, 2018 at 10:53 AM Dietmar Maurer wrote:
>
>
> Signed-off-by: Dietmar Maurer
> ---
> PVE/CLI/pvesm.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/PVE/CLI/pvesm.pm
Signed-off-by: Dietmar Maurer
---
PVE/CLI/pvesm.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index d95b5f5..650288e 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -370,7 +370,7 @@ our $cmddef
applied, thanks!
Signed-off-by: Dietmar Maurer
---
debian/control| 4
debian/pve-qemu-kvm.links | 5 +
debian/rules | 2 +-
3 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index d2c40ff..5b46622 100644
--- a/debian/control
+++ b
# git am yourpatch.eml
Applying: API2 : Network : add network config reload
.git/rebase-apply/patch:33: trailing whitespace.
name => 'reload_network_config',
.git/rebase-apply/patch:34: trailing whitespace.
path => '',
.git/rebase-apply/patch:43: space before tab in indent.
additi
ignore me - the test looks good.
> On September 26, 2018 at 6:47 AM Dietmar Maurer wrote:
>
>
> > + foreach my $bridge (keys %$bridges_delete) {
> > +
> > + my (undef, $interface) =
> > dir_glob_regex("/sys/class/net/$bridge/brif", '
> + foreach my $bridge (keys %$bridges_delete) {
> +
> + my (undef, $interface) =
> dir_glob_regex("/sys/class/net/$bridge/brif", '(tap|veth|fwpr).*');
Why 'fwpr'? Maybe you wanted to check for 'fwbr' ?
I am unable to reproduce that here. Please can you test with the latest
version of pve-common from git?
> On September 24, 2018 at 8:43 PM Eduard Ahmatgareev
> wrote:
>
>
> Hi All,
>
> Does anybody have issues with the CLI on the latest version?
>
> root@cluster-13-1:~# pvesh get /access/users --output-form
> I'm not sure if it's possible to try to reload directly
> /etc/network/interfaces.new, then if it's ok, overwrite
> /etc/network/interfaces. I'll look at this.
Thanks.
Unrelated topic, but I get the following with ifquery:
# ifquery -a -t json
error: No JSON object could be decoded
Any idea
This allows to request a mapped device/path explicitly,
regardless of the storage option, e.g. the krbd option in the RBDPlugin.
Signed-off-by: Dietmar Maurer
---
PVE/Storage.pm | 24 +
PVE/Storage/Plugin.pm| 12 +
PVE/Storage/RBDPlugin.pm | 69
> > I thought somebody would suggest a better name for that parameter so that it
> > looks more reasonable?
>
> FWIW, I like Thomas suggestion of pulling out the bdev path generation
> into a private sub (or, since we are already adding to the storage
> API[1], evaluate whether "give me the block d
> + raise_param_exc({ config => "reloading config with ovs changes is not
> possible currently\n" })
> + if $ovs_changes && !$param->{restart};
> +
> + foreach my $bridge (keys %$bridges_delete) {
> +
> + my (undef, $interface) = dir_glob_regex("/sys/class/net/$bridge/brif",
> '(tap|veth|fw
> I know you just do this to not duplicate the blockdevice path assembly,
> but it feels a bit weird to directly have a map with a $nomap method call
> in unmap; maybe pull the common parts out into their own ($private) helper sub?
I thought somebody would suggest a better name for that parameter so that
This allows to request a mapped device/path explicitly,
regardless of the storage option, e.g. the krbd option in the RBDPlugin.
Signed-off-by: Dietmar Maurer
---
PVE/Storage.pm | 24 +++
PVE/Storage/Plugin.pm| 12 ++
PVE/Storage/RBDPlugin.pm | 62
Signed-off-by: Dietmar Maurer
---
src/PVE/LXC.pm | 12 +---
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 0b57ae9..448ea34 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1302,12 +1302,9 @@ sub mountpoint_mount {
my
> On August 17, 2018 at 3:44 PM Alwin Antreich wrote:
>
>
> This allows methods to request a mapped device/path explicitly,
> regardless of the storage option, e.g. the krbd option in the RBDPlugin.
You basically add an additional map parameter to all methods - what for? I
would prefer to keep exis
Why do we need --unusedOnly and --disksizeOnly? This does not
really make sense to me, and we do not have those options with "qm rescan"
> On September 18, 2018 at 11:16 AM Alwin Antreich
> wrote:
>
>
> This patch implements the same feature as already exists for qm
> 'rescan'. With options
> You could do this automatically for all VMs, while it comes from
> windows it's intended to be guest OS agnostic and is exposed over
> fw_cfg/ACPI, AFAIS.[0]
>
> Maybe add none if there's any "hide that we virtualize" flag is on,
> but else I do not really see a point in not doing this?
> (allow
Signed-off-by: Dietmar Maurer
---
PVE/Storage/RBDPlugin.pm | 22 +++---
1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index ee373d6..0acfb2d 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
> Not quite sure whether this approach is not too liberal.
> It does fix the bug reported, thus I'm sending it as an RFC.
It's easy to improve it a bit more, like:
my $result = $raw ne '' ? JSON::decode_json($raw) : [];
+my $result;
+if ($raw eq '') {
+ $result = [];
+} elsif ($r
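In Python terms the suggested handling of empty command output is (a sketch mirroring the Perl one-liner above, not the actual patch):

```python
import json

def decode_cmd_output(raw):
    """Treat empty output as an empty result list instead of letting the
    JSON decoder fail on ''."""
    return json.loads(raw) if raw != "" else []
```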
applied with the fixes you pointed out - thanks!
> On September 17, 2018 at 1:53 PM Wolfgang Bumiller
> wrote:
>
>
> On Mon, Sep 17, 2018 at 01:41:38PM +0200, Dietmar Maurer wrote:
> > You need to install package qemu-user-static which provides
> > the emulation
You need to install package qemu-user-static which provides
the emulation toolkit.
- emulate arm on x86
- emulate x86 on arm
Signed-off-by: Dietmar Maurer
---
src/PVE/LXC/Setup.pm | 40
1 file changed, 40 insertions(+)
diff --git a/src/PVE/LXC/Setup.pm
We can now detect arm64 and armhf containers.
Signed-off-by: Dietmar Maurer
---
src/PVE/LXC/Config.pm | 2 +-
src/PVE/LXC/Create.pm | 23 ++-
2 files changed, 15 insertions(+), 10 deletions(-)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 3b1e2df
applied (with typo fixed)
> I'd like to see a more general approach. This breaks my native aarch64 on
> aarch64
> container.
Sorry, why does it break anything?
We can now detect arm64 containers.
Signed-off-by: Dietmar Maurer
---
src/PVE/LXC/Config.pm | 2 +-
src/PVE/LXC/Create.pm | 22 +-
2 files changed, 14 insertions(+), 10 deletions(-)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 3b1e2df..196cfac 100644
You need to install package qemu-user-static which provides
the emulation toolkit.
Signed-off-by: Dietmar Maurer
---
src/PVE/LXC/Setup.pm | 12
1 file changed, 12 insertions(+)
diff --git a/src/PVE/LXC/Setup.pm b/src/PVE/LXC/Setup.pm
index 1b89f28..7d522cc 100644
--- a/src/PVE/LXC
applied
applied
applied with small improvements:
> On September 13, 2018 at 2:55 PM Dominik Csapak wrote:
>
>
> to get and set the content of /etc/hosts
>
> Signed-off-by: Dominik Csapak
> ---
> changes from v3:
> * add digest code on read
> PVE/API2/Nodes.pm | 78
>
I applied v2 instead.
> On September 13, 2018 at 2:55 PM Dominik Csapak wrote:
>
>
> Signed-off-by: Dominik Csapak
> ---
> changes from v3:
> * removed digest code (now in api)
> src/PVE/INotify.pm | 48
> 1 file changed, 48 insertions(+)
>
>
applied
applied
> changes from v3:
> * moved all of the digest code to the api (instead of INotify)
This change produces a quite ugly interface, because we now need
to return both 'data' and 'raw'. I really prefer the old code where
we compute the digest directly.
Also, we need to compute the digest for /etc