Which bug number are you talking about, exactly?
> Hello, in the past (proxmox v4 and v5) we've used Proxmox's clustering
> features and found problems when the whole cluster would shut down, when
> we turned it back on it wouldn't synchronize. Has this problem been
> fixed yet?
You need to create a pool for each such user, and give them permissions
to create and use VMs on that pool only.
see
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_user_management
section: 13.8.5. Pools
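As a rough sketch (the pool, user, and role names here are examples, not taken from this thread; see the pveum man page for exact syntax):

```sh
# Create a pool, then grant one user VM-management rights on that pool only
pvesh create /pools -poolid pool-alice
pveum aclmod /pool/pool-alice -user alice@pve -role PVEVMAdmin
```

For cloning, the user typically also needs suitable permissions on the source template and on the target storage.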
> I'd like to have individual users who can clone VMs. These cloned VMs
>
> Does anyone have an assessment of the risk we would run? I still don't
> understand the security implications of the mapping of higher UIDs.
> However this is quickly becoming a major issue for us.
The risk is that it is not supported by us. Thus, we do not
test that and I do not know what
> I fear
> this might be a container-related issue but I don't understand it and I
> don't know if there is a solution or a workaround.
>
> Any help or hint is highly appreciated
Yes, we only map 65535 IDs for a single container. We cannot allow
the full range for security reasons.
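For reference, that default mapping corresponds to an idmap like the following in the container config (shown with the usual PVE defaults as an illustration, not taken from this thread):

```
# /etc/pve/lxc/<CTID>.conf - one 64K range of high host IDs per unprivileged container
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
```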
> Just we are here... 'pve-ha-manager' is an alternative to 'watchdog',
> right?
You cannot use the Debian watchdog package with Proxmox.
> Also, the 'watchdog' daemon does other things, like rebooting if the load goes
> over a threshold and so on - all things that are probably BAD in a virtualized
>
> Here: https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x we can
> read about IPMI watchdog, and to configure it like this:
> options ipmi_watchdog action=power_cycle panic_wdt_timeout=10
>
> The question is: would it give us anything, if we also configured that?
> As we have seen the
> is there a specific reason, why PROXMOX VMs and containers are numbered
> from 100 and not from - e.g. - 001? Can the starting number be changed?
This has historical reasons (IDs 0-99 were reserved by OpenVZ for internal use).
___
pve-user mailing list
> I would like to know if QEMU 4.x brings VM fault tolerance, like COLO or
> micro-checkpointing, and if Proxmox will incorporate those features in the
> near future!
Those things are not stable yet ...
___
> Zhuoyun Wei
>
> On Sat, Feb 16, 2019, at 00:34, Dietmar Maurer wrote:
> > Version looks OK. But it seems there is another local disk:
> >
> > 2019-02-15 04:41:01 found local disk 'local-lvm:vm-115-disk-0' (in
> > current VM config)
> >
> >
>
___
> It seems that PVE considers the cloud-init drive as a local resource that
> cannot be moved. After removing the cloud-init drive from the VM, the
> migration succeeded.
>
> IMHO, the cloud-init drive could be treated just like a normal disk image
> that could be dd'ed and copied to another
> I don't know the idea behind keeping a VM from starting up when there is no
> quorum. From my point of view, it has maybe been the worst part of managing a
> Proxmox cluster, because the stability of services (VMs up and running)
> should come first (before the sync of information, for instance).
>
> Is
> I thought that I could view a particular rule but I can't have more
> information than:
> pvesh get /nodes/toto/qemu/107/firewall/rules/5
> ┌─────┬───────┐
> │ key │ value │
> ├─────┼───────┤
> │ pos │ 5     │
> └─────┴───────┘
>
> The API viewer describes "Get single rule
> Is there a way to change the .members file located in /etc/pve?
> This file is read-only!
no, you cannot change that file. But you can add/remove cluster members - the
file is changed accordingly.
___
> Am 07.09.2018 um 10:35 schrieb Dietmar Maurer:
> >> But what is the timing for starting VM100 on another node? Is it
> >> guaranteed that this only happens after 60 seconds?
> >
> > yes, that is the idea.
>
> I miss the point how this is achieved. Is
> What happens now exactly when HA is configured for VM100?
>
> According to https://pve.proxmox.com/wiki/High_Availability node 3 will
> reboot after 60 seconds ("When a cluster member determines that it is no
> longer in the cluster quorum, the LRM waits for a new quorum to form. As
> long as
> Isn't this 802.3ad supposed to aggregate the speed of all available NICs?
No, not really. A single connection is limited to 1 Gbit/s. If you start more
parallel connections you can gain more speed.
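For illustration, a bond that at least hashes per connection (interface names are placeholders; /etc/network/interfaces fragment):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
```

With layer3+4 hashing, different TCP connections can be spread across the slaves, but any single connection still uses one link.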
___
> IMO it is a real pity that DRBD is not supported anymore.
You can get DRBD support from Linbit.
___
> Why has the Proxmox team not incorporated software RAID in the
> install process?
Because we consider mdraid unreliable and dangerous.
> So that we could include redundancy and lvm advantages
> when using local disks.
Sorry, but we do have software RAID included - ZFS provides that.
> On June 23, 2018 at 6:21 PM José Manuel Giner wrote:
>
>
> Hello Dietmar,
>
> thanks for answering. If I understand correctly, you propose to make a
> customized installation of the operating system and then install
> cloudinit and convert it to template.
>
> But this approach is much
> What would be the way to do it?
You can customize the template instead.
___
> could still ping it did I discover this. I did not think it would let
> me do it if it would screw something up. It kind of backed me off of HA.
> Sure this is all better now, right?
If you reported a bug, you can view the status in the bug tracker.
> > All volume belongs to a VM, indicated by the encoded VMID. If you
> > remove a VM, we remove all volumes belonging to that VM.
> >
>
> You remove anything containing the VMID, even volumes that the VM config
> isn't referring to. That's really, really strange and should be warned
This is
> On June 21, 2018 at 3:48 PM Simone Piccardi wrote:
>
>
> On 21/06/2018 10:43, Dietmar Maurer wrote:
In general, you should never mount storage on different clusters at the same
time. This is always dangerous - mostly because there is no locking and
because of VMID conflicts. If you do, mount at least read-only.
> Not sure this is a bug, but if it's not there should be a huge red
> warning in
> Hi, I find a problem when restoring a backup of a VPS with Cloud-init.
>
> The problem is that the VPS does not start because the Cloudinit CDROM
> drive has not been included in the backup.
>
> Should I report it in the Bugzilla?
Yes, please do.
> It's complicated why this is 0; in short, CPU usage is a counter, and it's hard
> to calculate actual usage in % without history data.
>
> /cluster/resources uses pvestatd which calculates that,
> /nodes/{node}/(qemu|lxc)/status/current doesn't.
Yes, this is probably the best workaround. Please note
> I want to increase vm-100-disk-1.
>
> How do I proceed? Shut down VM and run:
>
> qm resize 100 virtio0 +5G ?
>
> I am not sure which driver I should use
Please post your VM config (the driver is set there).
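As a sketch, assuming the disk really is on virtio0 as in the question (check the config first):

```sh
# Check which bus/driver the disk uses, then grow it by 5G
qm config 100 | grep disk-1
qm resize 100 virtio0 +5G
```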
___
> This is still a test machine, the backup disk is a single external USB
> drive - but even that should give ~ 1GB/minute. What could explain the
> slow backup of the first container?
Many small files, or a fragmented file system?
___
Hi all,
I think there are many opinions when it comes to storage technologies, and
that is the reason why there are so many different storage projects out there.
And for that reason, we have a plugin system for different storage types :-)
> On April 4, 2018 at 9:50 AM Eneko Lacunza
> Now, since 4.4 I have the issue that I can no longer get info or
> commands from other nodes than the node I'm running this script on. I
> get "500 proxy not allowed" as soon as I get to 'get_vm_description()'.
>
> What am I doing wrong? Thanks!
Maybe you connect to the wrong port (what api
Yes
Original message
From: Frank Thommen
Date: 19.03.18 11:04 (GMT+01:00)
To: PVE User List
Subject: Re: [PVE-User] How to fence w/o iTCO_wdt watchdog (AMD platform)
Thanks. That means, that if I
> We don't understand why the reinstalled second node is not able to see
> the LVM virtual disks over the DRBD unit.
> May someone help us with this problem?
This looks like a DRBD specific problem, so I would ask on the DRBD list
instead.
___
> On December 30, 2017 at 2:27 AM Lindsay Mathieson
> wrote:
>
>
> On 30/12/2017 5:48 AM, Gerald Brandt wrote:
> > I have a VM with 2 snapshots. The display of snapshots for the VM is
> > blank, so I can't delete the snapshot from there.
> >
> > This is a conf
> I greatly respect the work you do on Proxmox but this specific response
> is under your habitual standards from a security standpoint.
Exactly. That is why we provide the enterprise repository.
___
This is why we have an enterprise repository! Please use the enterprise
repository
if you want SSL.
> On November 30, 2017 at 12:22 PM Florent B wrote:
>
>
> Up !
>
>
> On 30/05/2017 15:21, Florent B wrote:
> > Hi PVE team,
> >
> > Would it be possible to include
> > Could someone with insight into the backup process explain why kvm is
> > started?
>
> It uses the qemu copy-on-write feature to make sure the state is consistent.
> You can immediately work with that VM, while qemu makes sure that everything
> is consistent.
In your case (you stopped the VM
___
> Please take this then as a bug report for the subcommand (or a "-h" help option)
> and as a request to update the wiki
> article to include the info, that a PATH argument can be given.
OK ;-) Will try to improve things ...
___
> pveperf as described in [1] doesn't work anymore. Even as root I get:
>
> root@pxmx-02:~# pveperf help
> CPU BOGOMIPS: 89368.48
> REGEX/SECOND: 1505926
> df: help: No such file or directory
> DNS EXT: 13.68 ms
> DNS INT: 19.98 ms (localdomain)
see "man pveperf"
> What was really frustrating, whether in a window or full screen, the
> flyout on the left kept obscuring stuff.
>
> eg
>
> http://www.zimagez.com/zimage/screenshot-261017-164001.php
>
> Can the flyout be moved at all ?
You can simply move it to the other side (Drag and Drop).
> if I create manual snapshots via the CLI for a VM or CT, they are not listed in
> the GUI web interface (only snapshots created via the GUI are listed). Via "zfs
> list -t all" I see all snapshots, regardless of where they were taken. Is that
> intended? Even zfs-auto-snaps are not displayed in the GUI.
That is
> Is it possible to update the keepalived version to the latest in Proxmox 4?
>
>
> Current version is 1.2.13 which has a bug with keepalived grabbing
> master on startup, even if the state on all nodes is set to BACKUP and
> the priorities are the same.
>
>
> 1.3.x resolves the issue. I can build it
> One first question: when a VM is assigned to a node and the node fails (some
> hard case like power loss or an atomic bomb), could I start the VM on one of
> the other nodes?
You need to 'steal' the VM from that node. Please note that you need
to be absolutely sure that the node is down.
Then run
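One commonly documented recovery step (node names here are placeholders; only do this if the failed node is certainly powered off) is to move the VM config inside the cluster file system:

```sh
# On a surviving node: claim VM 100 from the dead node
mv /etc/pve/nodes/deadnode/qemu-server/100.conf /etc/pve/nodes/livenode/qemu-server/
```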
> Thanks for the clarification.
> Now, which is the best way to make a Feature Request?
better send patches ...
___
No, that is currently not implemented.
> On September 6, 2017 at 6:10 PM Christian Jacobsen
> wrote:
>
>
> Is there a way to allocate quota (cpu, memory, or disk) for users or
> groups to restrict the editing or creation of VMs?
>
> The idea is to allocate
> If, for some reason, I need to deactivate 2 nodes of my 3-node cluster, the
> 1 remaining node turns /etc/pve into a read-only state!
> I cannot understand why this happens!
You simply lose quorum.
> Could Proxmox just allow me to write into /etc/pve and after the others
> nodes are on
> So I cannot figure out why LVM-over-iSCSI is so slow.
I guess your benchmark is simply wrong. You are testing the
local cache, because you do not sync the data back to the storage.
___
> >>how many running VMs/Containers?
>
> on 20 cluster nodes, around 1000 vm
>
> on a 10 cluster nodes, 800vm + 800ct
>
> on a 9 cluster nodes, 400vm
Interesting. So far I did not know anybody using that with more than 6 nodes ...
___
> just for the record,
>
> I have migrated all my clusters to unicast, also big clusters with 16-20
> nodes, and it's working fine.
>
>
> "pvedaemon: ipcc_send_rec failed: Transport endpoint is not connected " seem
> to be gone.
>
> don't see any error on the cluster.
>
> traffic is around
> the new storage replication feature is really great! But there is an issue:
> Unfortunately the replication breaks completely if somebody does a rollback to
> an older snapshot than the last sync of a container and destroys that snapshot
> before the next sync.
AFAIK it simply syncs from
> Everything is VLAN-separated ... all three multipath links have its own
> subnets and the link between zfs local storages uses its own
> VLAN-separated link (actually vmbr1 -> intranet link )
Usually VLAN separation does not help to prevent network overload. Or do you
have some special
> Do you have any clue when?
All information is available on the developer list (pve-devel).
___
> I don't see any reference about Cloud Init... Is it in this release??
No, it is not.
___
I don't think docker works inside LXC.
> On June 27, 2017 at 2:28 PM Gilberto Nunes wrote:
>
>
> Hi list
>
> I am trying to install Collabora CODE inside LXC with Ubuntu 16.04, but no
> matter whether I run the command inside or outside the container, I get the error
>
> I am looking for a KVM/container hypervisor solution. Proxmox seems to me to
> be the best solution for my needs. However, I have a question: is
> Proxmox able to manage quotas for different users/groups
no
___
> pvedaemon[3237036]: Can't locate PVE/ReplicationConfig.pm
This file is in pve-guest-common package... Maybe you need to update from git
___
> On June 2, 2017 at 4:26 PM Thomas Naumann wrote:
>
>
> hi,
>
> in the proxmox 4.4 repo there is a package "pve-sheepdog".
> I couldn't find that package in the test repo for proxmox 5.
I can find it without problems:
> So please give additional details about future DRBD support in Proxmox.
> Proxmox should clarify whether we should give up using DRBD on Proxmox
> and switch to Ceph for example...
DRBD9 is supported by LINBIT directly. Proxmox will ship the default
upstream kernel module for drbd (whatever
> Alternatively, is there a way exclude one of my 6 nodes from the HA
> quorum voting?
Hint: The number of votes per node is configurable.
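For example, in /etc/pve/corosync.conf a node's vote count can be set to zero (name and address are placeholders):

```
node {
    name: node6
    nodeid: 6
    quorum_votes: 0
    ring0_addr: 192.0.2.16
}
```

Remember to bump config_version when editing corosync.conf.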
___
> On April 25, 2017 at 10:40 AM Mark Schouten <m...@tuxis.nl> wrote:
>
>
> On Tue, 2017-04-25 at 06:01 +0200, Dietmar Maurer wrote:
> > > We are thinking about deploying the firewall in order to limit
> > > traffic to
> > > certain virtual machine
> We are thinking about deploying the firewall in order to limit traffic to
> certain virtual machines.
AFAIK there is no traffic shaping functionality.
> One question I have is if enabling the firewall at the datacenter level is a
> requirement
Yes, that is the global firewall on/off flag.
> Should the cluster always be composed by an odd number of hosts?
That is not really necessary, because we use quorum (majority decides).
A cluster partition switches to read-only mode if it has no quorum, so
there is no danger.
> I'm not using HA so if I move some VM from one host to the other I'm
>
> On March 2, 2017 at 10:15 PM Pavel Kolchanov
> wrote:
>
>
> Hello.
>
> I have enabled GRE and PPtP macro in firewall:
>
> cat /etc/pve/firewall/cluster.fw
> [OPTIONS]
>
> policy_in: REJECT
> enable: 1
>
> [RULES]
>
> GROUP vpn
> GROUP basic-node
>
> [group
> If I may add another question: how are you planning to handle those dynamic
> interface names that were introduced a few
> years ago? See https://en.wikipedia.org/wiki/Consistent_Network_Device_Naming
The plan is to support systemd predictable network interface names:
> Can it be overwritten somehow or should I stick to only eth* devices? Or
> do you accept some prefixes as well?
we also support the "enXXX" prefix.
>
> Is it Proxmox "functionality" or OVS'?
Proxmox
___
> GUI says that "g10" is not a OVSPort. If I type in "eth2" - it magically
> works.
We use the device name to find out the device type. g10 does not look
like an ethernet device, so this does not work.
___
> In case of a split between the datacenters, none of both sites would have a
> quorum and HA would not work. I could fiddle with the number of votes on
> one site, but then HA would only work on one site if there is a connection
> loss.
I would not use such a setup because of the above problem.
What is the output of
# pveversion -v
> On December 18, 2016 at 3:10 PM Tom wrote:
>
>
> Change reverted.
>
> A friend pointed this out to me: Dec 18 13:53:53 kappa corosync[7047]:
> [VOTEQ ] flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice:
> No QdeviceAlive:
> Tried adding the token thing to the config, no change.
Please revert that change. It makes no sense to fix things which are
already working ;-)
Is there any hint in /var/log/syslog?
___
> I've had a similar issue. Someone kindly suggested me to set the 'token'
> value to 4000 in corosync.conf.
Tom already told us that the corosync cluster status is OK, so why do you
think this would help? The cluster works already.
___
> On December 18, 2016 at 10:04 AM Tom wrote:
>
>
> pvecm status runs fine showing everything is okay, and the only storage that's
> there is the local /var/lib/vz
I asked for the output of
# pvesm status
Also, please make sure the system time is correct on all hosts.
> Does anyone have any solutions/pointers?
And "pvesm status" runs without any delay?
# pvesm status
Or is there a storage which hangs?
___
pvesh prints that to stderr, so you just need to redirect stderr.
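A minimal sketch of the redirect (the echo pair below stands in for pvesh, which writes the data to stdout and the status line to stderr):

```shell
# Stand-in for "pvesh get /version": data on stdout, "200 OK" on stderr.
# Redirecting stderr to /dev/null keeps only the data.
out=$( ( echo '{"release":"5"}'; echo "200 OK" >&2 ) 2>/dev/null )
echo "$out"
# For the real thing:  pvesh get /version 2>/dev/null
```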
> That's also on the forum, if I get an answer here I'll update there.
>
> Is there a simple way to prevent pvesh from outputting 200 OK to the tty ?
> Redirecting 1 and 2 doesn't seem to do anything so I assume it writes directly
> Background: I want to prevent errors if I call functions I don't have the
> right to call.
Oh, the call just returns the access control list, but it is not trivial to
do the actual check. We have no real API for that currently.
___
# pvesh get access/acl
> On December 2, 2016 at 11:49 AM IMMO WETZEL wrote:
>
>
> Hi,
>
> how can I check my own access rights on a specific node/qemu instance?
> Is there an existing API function that I couldn't find?
>
> Background. I want to prevent errors if
> How does this affect existing Proxmox VE 4.x / DRBD9 setups?
>
> Does "removing the storage driver" mean, that there is no DRBD kernel
> module available from the next release, or is it just the manageability due to the
> removal of drbdmanage?
We will keep the kernel module for now, unless Linbit
Please note that our software license is AGPL.
You talk about subscriptions here - and this is something very different.
> What if the license is renewed after a year? Then you have 3 installs again?
Sure. Also, you can simply contact our support if you need more than 3
installs. We usually
> For now I've added the pve-no-subscription repository.
>
> What's the difference between the pve-enterprise and the pve-no-subscription
> repository? Are updates just better tested in the pve-enterprise repo?
Basically yes.
___
> I subscribed for a license to support the project and of course to get
> updates. Now I’m in a testing phase so I installed my license a couple of
> times. I think I hit a maximum because I can't reactivate my license at the
> moment. I raised a ticket over at Maurer IT. I was not aware of this
>
HTTP: GET /api2/json/nodes/{node}/tasks/{upid}
CLI: pvesh get /nodes/{node}/tasks/{upid}
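For example (using the same {node}/{upid} placeholders; the .../status sub-path reports the task state and, once finished, the exit status):

```sh
# Poll the task until it stops; "status" is "running" or "stopped",
# and "exitstatus" is "OK" on success
pvesh get /nodes/{node}/tasks/{upid}/status
```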
> On November 17, 2016 at 6:49 PM IMMO WETZEL wrote:
>
>
> Hi,
> Every task started by API gets a unique task id.
> How can I check the state of this task via API?
>
> Immo
>
Hi all,
We just want to inform you that Linbit changed the License
for their 'drbdmanage' toolkit.
The commit message says ("Philipp Reisner"):
--
basically we do not want that others (who have not contributed to the
development) act as parasites in our support business
> So something is not good with QCOW2 disk format.
I guess this is just because it changes a sequential write
order to something more random. You will get different
results if you use other benchmark tools ...
___
> 1) First guest inside qcow2 image, located on NFS share (via 10gbit
What values do you get with raw images?
___
> I just noticed two different values on the node Summary tab :
>
> Numbers : RAM usage 92.83% (467.65 GiB of 503.79 GiB)
>
> And graphs : Total RAM : 540.94GB and Usage : 504.53GB
Indeed, that looks strange. Please note that the units are
different (GiB vs. GB), but the values are still wrong.
> What I understand so far is that every state/service change from the LRM
> must be acknowledged (cluster-wise) by the CRM master.
> So if a multicast disruption occurs, and I assume the LRM wouldn't be able
> to talk to the CRM MASTER, then it also couldn't reset the watchdog, am I
> right?
Nothing
> On November 11, 2016 at 6:41 PM Dhaussy Alexandre
> wrote:
>
>
> > you lost quorum, and the watchdog expired - that is how the watchdog
> > based fencing works.
>
> I don't expect to lose quorum when _one_ node joins or leaves the cluster.
This was probably a
> Responding to myself, I find this interesting:
>
> Nov 8 10:39:01 proxmoxt35 corosync[35250]: [TOTEM ] A new membership
> (10.xx.xx.11:684) was formed. Members joined: 13
> Nov 8 10:39:58 proxmoxt35 watchdog-mux[28239]: client watchdog expired -
> disable watchdog updates
you lost quorum,
> Oct 24 14:49:10 focsa02 pve-firewall[2049]: status update error:
> iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore
> --help' for more information.
> Oct 24 14:49:20 focsa02 pve-firewall[2049]: status update error:
> iptables_restore_cmdlist: Try `iptables-restore
Please can you test if the problem is Glusterfs related?
Or does it occur with other storage types also?
> On October 23, 2016 at 12:39 PM Fabrizio Cuseo wrote:
>
>
> Glusterfs too
___
> Also nearly all VM starts invoked via the gui are failing with a timeout
> error, though the VM actually starts.
I am unable to reproduce that. Please can you post the VM config? What kind
of storage do you use?
___
what is the output of
# pveversion -v
Seems there are missing packages. Try to fix with
# apt-get install proxmox-ve
___
> After this operation, 0 B of additional disk space will be used.
> Do you want to continue? [Y/n] y
> Setting up pve-firewall (2.0-31) ...
> Job for pve-firewall.service failed. See 'systemctl status
> pve-firewall.service' and 'journalctl -xn' for details.
what is the output of
systemctl
> Sure enough, I can't find an LXC.pm
Seems your installation is broken. Try to fix that with
# apt-get update
# apt-get dist-upgrade
___
> https://www.pictshare.net/cb2c08d9ca.png
Seems you try to display lists of Nodes/Guests in:
Offline Nodes:
Guest with errors:
IMHO such lists can be quite long, so how do you plan to display
such long lists here?
___
> What are we overlooking? Or is the manpage simply wrong?
yes, it is misleading - sorry.
> The fact that this means the storage device is not really obvious.
> Why size=5 is not working is not understood by us.
size is only used after creation.
> Some more comments/examples for such an