Re: [PVE-User] Some features that I miss in the PVE WebUI

2020-05-22 Thread Frank Thommen

On 22/05/2020 16:10, Thomas Lamprecht wrote:

Hi,

On 5/22/20 4:03 PM, Frank Thommen wrote:

Dear all,

having worked with oVirt in the past, there are some features that I really 
miss in PVE in my daily work:

a) a tabular overview of all virtual machines. This should/might also include 
some performance data and the description.  See the attached partial screenshot 
from oVirt, where this is implemented quite nicely. This is /not/ meant to replace 
proper monitoring but to provide a quick overview of the PVE-based 
infrastructure


Isn't this what Datacenter -> Search could be? Note that the list drops any 
attachments.


Partially.  It is still missing some data that I really like in the 
oVirt overview (e.g. comments/descriptions, IPs and the thumbnailed 
timelines(!) for CPU, memory, network etc.)


I have uploaded the screenshot to https://pasteboard.co/J9B4cEc.png.



a1) the possibility to provide the virtual machines and containers with a 
short, one-line description


Why not use the VM/CT notes? Editable via the VM/CT -> Summary panel?


We do use these notes heavily, but unfortunately they don't seem to be 
searchable/usable for filtering.  It would be nice to have a 
"Description" which would be a one-liner, like the description for the 
HA resources.
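
For reference, the Notes field can also be maintained from the CLI; a minimal 
sketch, assuming placeholder IDs 101 for a VM and 102 for a container:

$ qm set 101 --description "one-line summary of this VM"     # sets the VM notes
$ pct set 102 --description "one-line summary of this CT"    # sets the container notes

This only fills the Summary-panel notes, not a separately searchable field.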




b) the possibility to use keywords from the Notes field or the description (see a1 above) 
in the search box.  Our hosts are all named -vm which forces 
us to keep a separate list for the mapping of services to hostnames


Dominik has done some patches adding "tags", which would then be searchable.
Some backend support is there, but we had some discussion about how to integrate
them in the frontend. I think this will be picked up soonish and should provide
what you seek in b), maybe also in a1.


yes, such tags would come in very handy and could partially replace the 
description if they can be used for filtering.


Frank



cheers,
Thomas




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Some features that I miss in the PVE WebUI

2020-05-22 Thread Frank Thommen

Dear all,

having worked with oVirt in the past, there are some features that I 
really miss in PVE in my daily work:


a) a tabular overview of all virtual machines. This should/might also 
include some performance data and the description.  See the attached 
partial screenshot from oVirt, where this is implemented quite nicely. 
This is /not/ meant to replace proper monitoring but to provide a quick 
overview of the PVE-based infrastructure


a1) the possibility to provide the virtual machines and containers with 
a short, one-line description


b) the possibility to use keywords from the Notes field or the 
description (see a1 above) in the search box.  Our hosts are all named 
-vm which forces us to keep a separate list for the 
mapping of services to hostnames


It would be great to see these features in some future PVE release.

Cheers, Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] UIDs > 65535 not valid in container

2020-03-17 Thread Frank Thommen

On 17.03.20 09:33, Dietmar Maurer wrote:

Does anyone have an assessment of the risk we would run?  I still don't
understand the security implications of the mapping of higher UIDs.
However this is quickly becoming a major issue for us.


The risk is that it is not supported by us. Thus, we do not
test that and I do not know what problems this may trigger...



ok.  I will take the risk then, because w/o that mapping we cannot use 
the containers.


Thanks
Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] UIDs > 65535 not valid in container

2020-03-17 Thread Frank Thommen

Dear all,

On 13.03.20 14:13, Frank Thommen wrote:

On 3/12/20 7:58 PM, Frank Thommen wrote:

On 3/12/20 5:57 PM, Dietmar Maurer wrote:

I fear
this might be a container-related issue but I don't understand it and I
don't know if there is a solution or a workaround.

Any help or hint is highly appreciated


Yes, we only map 65535 IDs for a single container. We cannot allow
the full range for security reasons.


What is the security-related impact of higher UIDs?  This is kind of a 
showstopper for us, as we planned several such minimal services which 
all need to be able to map all existing UIDs in the AD.


The idea was to move them away from heavy full VMs to more lightweight 
containers.


Or the other way round: what are the risks if we change the hardcoded 
limits in /usr/share/perl5/PVE/LXC.pm (apart from the fact that we 
will have to port the changes after each update and upgrade)?


Does anyone have an assessment of the risk we would run?  I still don't 
understand the security implications of the mapping of higher UIDs. 
However this is quickly becoming a major issue for us.


Cheers
Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] UIDs > 65535 not valid in container

2020-03-13 Thread Frank Thommen

On 3/12/20 7:58 PM, Frank Thommen wrote:

On 3/12/20 5:57 PM, Dietmar Maurer wrote:

I fear
this might be a container-related issue but I don't understand it and I
don't know if there is a solution or a workaround.

Any help or hint is highly appreciated


Yes, we only map 65535 IDs for a single container. We cannot allow
the full range for security reasons.


What is the security-related impact of higher UIDs?  This is kind of a 
showstopper for us, as we planned several such minimal services which 
all need to be able to map all existing UIDs in the AD.


The idea was to move them away from heavy full VMs to more lightweight 
containers.


Or the other way round: what are the risks if we change the hardcoded 
limits in /usr/share/perl5/PVE/LXC.pm (apart from the fact that we 
will have to port the changes after each update and upgrade)?


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] UIDs > 65535 not valid in container

2020-03-12 Thread Frank Thommen

On 3/12/20 5:57 PM, Dietmar Maurer wrote:

I fear
this might be a container-related issue but I don't understand it and I
don't know if there is a solution or a workaround.

Any help or hint is highly appreciated


Yes, we only map 65535 IDs for a single container. We cannot allow
the full range for security reasons.


What is the security-related impact of higher UIDs?  This is kind of a 
showstopper for us, as we planned several such minimal services which 
all need to be able to map all existing UIDs in the AD.


The idea was to move them away from heavy full VMs to more lightweight 
containers.


Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] UIDs > 65535 not valid in container

2020-03-12 Thread Frank Thommen



On 3/12/20 6:10 PM, Daniel Berteaud wrote:



- On 12 Mar 20, at 16:35, Frank Thommen f.thom...@dkfz-heidelberg.de wrote:


Dear all,

we have a strange issue with a CentOS 7 container running on PVE 6.1-3,
that UIDs > 65535 are invalid.  The container is used as a "SSH
jumphost" to access a special network: Users log in to the host and SSH
to the special network from there. sssd is running in the container. The
directory service is an Active Directory.

However users with UID > 65535 cannot login:

/var/log/secure:
[...]
Mar 12 13:48:32 XX sshd[1021]: fatal: seteuid 86544: Invalid argument
[...]


and chown isn't possible either:

$ chown 65535 /home/test
$ chown 65536 /home/test
chown: changing ownership of ‘/home/test’: Invalid argument
$


There are no problems with such UIDs on any other systems and there is
no problem with users with an UID <= 65535 within the container.  I fear
this might be a container-related issue but I don't understand it and I
don't know if there is a solution or a workaround.

Any help or hint is highly appreciated


You can work with higher UIDs in LXC like this:

   * Edit /etc/subuid and change the range, e.g.

root:10:400039

   * Do the same for /etc/subgid
   * Edit your container config (/etc/pve/lxc/XXX.conf) and add

lxc.idmap: u 0 10 200020
lxc.idmap: g 0 10 200020

Those are the values I'm using for some AD member containers. Note however that 
the native PVE restore code might refuse to work with those UIDs (I recall the 
65535 max UID being hardcoded somewhere in the restore path, but I can't remember 
exactly where).
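
A minimal sketch of what such a mapping could look like, assuming container 
VMID 101 and a range of 300000 IDs starting at host ID 100000 (the numeric 
values in the quoted lines above appear truncated by the archive):

/etc/subuid and /etc/subgid:
root:100000:300000

/etc/pve/lxc/101.conf:
# map container UIDs/GIDs 0-299999 to host IDs 100000-399999
lxc.idmap: u 0 100000 300000
lxc.idmap: g 0 100000 300000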


Unfortunately that doesn't work.  The container will not start any more 
with the following messages in the debug log (shortened):



[...]
lxc-start 101 20200312185335.631 INFO conf - 
conf.c:run_script_argv:372 - Executing script 
"/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config 
section "lxc"
lxc-start 101 20200312185336.964 DEBUGconf - conf.c:run_buffer:340 - 
Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start 
produced output: unable to detect OS distribution


lxc-start: 101: conf.c: run_buffer: 352 Script exited with status 2
lxc-start: 101: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start 
for container "101"
lxc-start: 101: start.c: __lxc_start: 2032 Failed to initialize 
container "101"

Segmentation fault


Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] UIDs > 65535 not valid in container

2020-03-12 Thread Frank Thommen

Dear all,

we have a strange issue with a CentOS 7 container running on PVE 6.1-3, 
that UIDs > 65535 are invalid.  The container is used as a "SSH 
jumphost" to access a special network: Users log in to the host and SSH 
to the special network from there. sssd is running in the container. The 
directory service is an Active Directory.


However users with UID > 65535 cannot login:

/var/log/secure:
[...]
Mar 12 13:48:32 XX sshd[1021]: fatal: seteuid 86544: Invalid argument
[...]


and chown isn't possible either:

$ chown 65535 /home/test
$ chown 65536 /home/test
chown: changing ownership of ‘/home/test’: Invalid argument
$


There are no problems with such UIDs on any other systems and there is 
no problem with users with an UID <= 65535 within the container.  I fear 
this might be a container-related issue but I don't understand it and I 
don't know if there is a solution or a workaround.


Any help or hint is highly appreciated

Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Why are images and rootdir not supported on CephFS?

2020-02-17 Thread Frank Thommen



On 2/17/20 7:23 AM, Fabian Grünbichler wrote:

On February 16, 2020 5:26 pm, Frank Thommen wrote:

Thank you for the link.

Even though Fabian Gruenbichler writes in the bugreport
(https://bugzilla.proxmox.com/show_bug.cgi?id=2490#c2) that Ceph RBD
offers all features of CephFS, this doesn't seem to be true(?), as
CephFS supports "vztmpl iso backup snippets" and Ceph RBD "images
rootdir" (https://pve.proxmox.com/pve-docs/chapter-pvesm.html), so these
two storage types are complementary and RBD cannot replace CephFS.


that comment was about using CephFS for guest image/volume storage. for
that use case, RBD has all the same features (even more), but with
better performance. obviously I didn't mean that RBD is a file system ;)


What would be a good practice (CephFS and RBD are already set up):
Create an RBD storage on the same (PVE based) Ceph storage that already
has CephFS on top of it and use one for templates and backups and the
other for images and rootdir?


yes


Won't it create problems when using the same Ceph pool with CephFS /and/
RBD (this is probably rather a Ceph question, though)


you don't use the same Ceph pool, just the same OSDs (pools are logical
in Ceph, unlike with ZFS), so this is not a problem.
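
A minimal sketch of how the two storages could then sit side by side in 
/etc/pve/storage.cfg (storage IDs and the pool name are placeholder assumptions):

rbd: ceph-vm
    pool vm-pool
    content images,rootdir

cephfs: cephfs
    path /mnt/pve/cephfs
    content vztmpl,iso,backup,snippets

Both entries end up on the same OSDs, but RBD carries the guest disks while 
CephFS carries the file-type content.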


Additionally this might create problems with our inhouse tape backup, as
I don't think it supports backing up object storage...


the usual backup options are available - use vzdump, and then backup the
VMA files. or use some backup solution inside the guest. or both ;)


Thank you for all the hints above.  I will go along these lines.  Still 
struggling with some Ceph concepts ;-)


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Why are images and rootdir not supported on CephFS?

2020-02-16 Thread Frank Thommen
I have now configured a Directory storage which points to the CephFS 
mountpoint.  When creating a container (Alpine Linux, 10 GB disk, 2 GB 
Memory), this happens in the blink of an eye when using Ceph RBD or 
local storage as root disk, but it takes very, very long (10 to 20 times 
longer) when using the CephFS-directory as root disk.
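
For reference, such a directory storage can be added like this; a minimal 
sketch, assuming the CephFS is mounted at /mnt/pve/cephfs (path and storage 
name are placeholders):

$ pvesm add dir cephfs-dir --path /mnt/pve/cephfs --content images,rootdir

The slowness is plausible: on a dir storage the container root disk is created 
as a raw file plus a mkfs on top of CephFS, which is much more work than 
allocating an RBD volume.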




On 16/02/2020 17:26, Frank Thommen wrote:

Thank you for the link.

Even though Fabian Gruenbichler writes in the bugreport 
(https://bugzilla.proxmox.com/show_bug.cgi?id=2490#c2) that Ceph RBD 
offers all features of CephFS, this doesn't seem to be true(?), as 
CephFS supports "vztmpl iso backup snippets" and Ceph RBD "images 
rootdir" (https://pve.proxmox.com/pve-docs/chapter-pvesm.html), so these 
two storage types are complementary and RBD cannot replace CephFS.


What would be a good practice (CephFS and RBD are already set up): 
Create an RBD storage on the same (PVE based) Ceph storage that already 
has CephFS on top of it and use one for templates and backups and the 
other for images and rootdir?


Won't it create problems when using the same Ceph pool with CephFS /and/ 
RBD (this is probably rather a Ceph question, though)


Additionally this might create problems with our inhouse tape backup, as 
I don't think it supports backing up object storage...


frank


On 15/02/2020 00:38, Gianni Milo wrote:
This has been discussed in the past, see the post below for some 
answers...


https://www.mail-archive.com/pve-user@pve.proxmox.com/msg10160.html




On Fri, 14 Feb 2020 at 22:57, Frank Thommen wrote:


Dear all,

the PVE documentation on
https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types says,
that "[File level based storage technologies] allow you to store content
of any type".  However I found now - after having combined all available
disks in a big CephFS setup - that this is not true for CephFS, which
does not support images and rootdir.

What is the reason that, of all filesystems, CephFS doesn't support the
main PVE content type? :-)

Ceph RBD, on the other hand, doesn't support backup.

This seems to rule out Ceph (in whatever variant) as a unifying, shared
storage for PVE.  Or do I miss an important point here?

Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Why are images and rootdir not supported on CephFS?

2020-02-16 Thread Frank Thommen

Thank you for the link.

Even though Fabian Gruenbichler writes in the bugreport 
(https://bugzilla.proxmox.com/show_bug.cgi?id=2490#c2) that Ceph RBD 
offers all features of CephFS, this doesn't seem to be true(?), as 
CephFS supports "vztmpl iso backup snippets" and Ceph RBD "images 
rootdir" (https://pve.proxmox.com/pve-docs/chapter-pvesm.html), so these 
two storage types are complementary and RBD cannot replace CephFS.


What would be a good practice (CephFS and RBD are already set up): 
Create an RBD storage on the same (PVE based) Ceph storage that already 
has CephFS on top of it and use one for templates and backups and the 
other for images and rootdir?


Won't it create problems when using the same Ceph pool with CephFS /and/ 
RBD (this is probably rather a Ceph question, though)


Additionally this might create problems with our inhouse tape backup, as 
I don't think it supports backing up object storage...


frank


On 15/02/2020 00:38, Gianni Milo wrote:

This has been discussed in the past, see the post below for some answers...

https://www.mail-archive.com/pve-user@pve.proxmox.com/msg10160.html




On Fri, 14 Feb 2020 at 22:57, Frank Thommen 
wrote:


Dear all,

the PVE documentation on
https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types says,
that "[File level based storage technologies] allow you to store content
of any type".  However I found now - after having combined all available
disks in a big CephFS setup - that this is not true for CephFS, which
does not support images and rootdir.

What is the reason that, of all filesystems, CephFS doesn't support the
main PVE content type? :-)

Ceph RBD, on the other hand, doesn't support backup.

This seems to rule out Ceph (in whatever variant) as a unifying, shared
storage for PVE.  Or do I miss an important point here?

Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user





___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Why are images and rootdir not supported on CephFS?

2020-02-14 Thread Frank Thommen

Dear all,

the PVE documentation on 
https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types says, 
that "[File level based storage technologies] allow you to store content 
of any type".  However I found now - after having combined all available 
disks in a big CephFS setup - that this is not true for CephFS, which 
does not support images and rootdir.


What is the reason that, of all filesystems, CephFS doesn't support the 
main PVE content type? :-)


Ceph RBD, on the other hand, doesn't support backup.

This seems to rule out Ceph (in whatever variant) as a unifying, shared 
storage for PVE.  Or do I miss an important point here?


Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph: Monitors not running but cannot be destroyed or recreated

2020-01-26 Thread Frank Thommen

On 26/01/2020 16:46, Frank Thommen wrote:

On 26/01/2020 14:14, Frank Thommen wrote:

Dear all,

I am trying to destroy "old" Ceph monitors but they can't be deleted 
and also cannot be recreated:


I am currently configuring Ceph on our PVE cluster (3 nodes running 
PVE 6.1-3).  There have been some "remainders" of a previous Ceph 
configuration which I had tried to configure while the nodes were not 
in a cluster configuration yet (and I had used the wrong network).  
However I had purged these configurations with `pveceph purge`.  I 
have redone the basic Ceph configuration through the GUI on the first 
node and I have deleted the still existing managers through the GUI 
(to have a fresh start).


A new monitor has been created on the first node automatically, but I 
am unable to delete the monitors on nodes 2 and 3.  They show up as 
Status=stopped and Address=Unknown in the GUI and they cannot be 
started (no error message).  In the syslog window I see (after 
rebooting node odcf-pve02):



Jan 26 13:51:53 odcf-pve02 systemd[1]: Started Ceph cluster monitor 
daemon.
Jan 26 13:51:55 odcf-pve02 ceph-mon[1372]: 2020-01-26 13:51:55.450 
7faa98ab9280 -1 mon.odcf-pve02@0(electing) e1 failed to get devid for 
: fallback method has serial ''but no model



On the other hand I see the same message on the first node, and there 
the monitor seems to work fine.


Trying to destroy them results in the message that there is no such 
monitor, and trying to create a new monitor on these nodes results in 
the message that the monitor already exists.  I am stuck in this 
existence loop.  Destroying or creating them also doesn't work on the 
command line.


Any idea on how to fix this?  I'd rather not completely reinstall the 
nodes :-)


Cheers
frank



In an attempt to clean up the Ceph setup again, I ran

   pveceph stop ceph.target
   pveceph purge

on the first node.  Now I get an

    rados_connect failed - No such file or directory (500)

when I select Ceph in the GUI of any of the three nodes.  A reboot of 
all nodes didn't help.


frank


I was finally able to completely purge the old settings and reconfigure 
Ceph with the various instructions from this 
(https://forum.proxmox.com/threads/not-able-to-use-pveceph-purge-to-completely-remove-ceph.59606/) 
post.
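
For the record, the steps described there boil down to roughly the following 
per node (a hedged paraphrase, not a verified recipe - the exact file list 
should be taken from the linked thread):

$ systemctl stop ceph.target          # stop all Ceph daemons
$ pveceph purge                       # remove the PVE-side Ceph configuration
$ rm -rf /etc/ceph /var/lib/ceph      # wipe leftover keyrings and daemon state
$ rm -f /etc/pve/ceph.conf            # cluster-wide Ceph config, if still present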


Maybe this information could be added to the official documentation 
(unless there is a nicer way of completely resetting Ceph in a PROXMOX 
cluster)?


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph: Monitors not running but cannot be destroyed or recreated

2020-01-26 Thread Frank Thommen

On 26/01/2020 14:14, Frank Thommen wrote:

Dear all,

I am trying to destroy "old" Ceph monitors but they can't be deleted and 
also cannot be recreated:


I am currently configuring Ceph on our PVE cluster (3 nodes running PVE 
6.1-3).  There have been some "remainders" of a previous Ceph 
configuration which I had tried to configure while the nodes were not in 
a cluster configuration yet (and I had used the wrong network).  However 
I had purged these configurations with `pveceph purge`.  I have redone 
the basic Ceph configuration through the GUI on the first node and I 
have deleted the still existing managers through the GUI (to have a 
fresh start).


A new monitor has been created on the first node automatically, but I am 
unable to delete the monitors on nodes 2 and 3.  They show up as 
Status=stopped and Address=Unknown in the GUI and they cannot be started 
(no error message).  In the syslog window I see (after rebooting node 
odcf-pve02):



Jan 26 13:51:53 odcf-pve02 systemd[1]: Started Ceph cluster monitor daemon.
Jan 26 13:51:55 odcf-pve02 ceph-mon[1372]: 2020-01-26 13:51:55.450 
7faa98ab9280 -1 mon.odcf-pve02@0(electing) e1 failed to get devid for : 
fallback method has serial ''but no model



On the other hand I see the same message on the first node, and there 
the monitor seems to work fine.


Trying to destroy them results in the message that there is no such 
monitor, and trying to create a new monitor on these nodes results in 
the message that the monitor already exists.  I am stuck in this 
existence loop.  Destroying or creating them also doesn't work on the 
command line.


Any idea on how to fix this?  I'd rather not completely reinstall the 
nodes :-)


Cheers
frank



In an attempt to clean up the Ceph setup again, I ran

  pveceph stop ceph.target
  pveceph purge

on the first node.  Now I get an

   rados_connect failed - No such file or directory (500)

when I select Ceph in the GUI of any of the three nodes.  A reboot of 
all nodes didn't help.


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Ceph: Monitors not running but cannot be destroyed or recreated

2020-01-26 Thread Frank Thommen

Dear all,

I am trying to destroy "old" Ceph monitors but they can't be deleted and 
also cannot be recreated:


I am currently configuring Ceph on our PVE cluster (3 nodes running PVE 
6.1-3).  There have been some "remainders" of a previous Ceph 
configuration which I had tried to configure while the nodes were not in 
a cluster configuration yet (and I had used the wrong network).  However 
I had purged these configurations with `pveceph purge`.  I have redone 
the basic Ceph configuration through the GUI on the first node and I 
have deleted the still existing managers through the GUI (to have a 
fresh start).


A new monitor has been created on the first node automatically, but I am 
unable to delete the monitors on nodes 2 and 3.  They show up as 
Status=stopped and Address=Unknown in the GUI and they cannot be started 
(no error message).  In the syslog window I see (after rebooting node 
odcf-pve02):



Jan 26 13:51:53 odcf-pve02 systemd[1]: Started Ceph cluster monitor daemon.
Jan 26 13:51:55 odcf-pve02 ceph-mon[1372]: 2020-01-26 13:51:55.450 
7faa98ab9280 -1 mon.odcf-pve02@0(electing) e1 failed to get devid for : 
fallback method has serial ''but no model



On the other hand I see the same message on the first node, and there 
the monitor seems to work fine.


Trying to destroy them results in the message that there is no such 
monitor, and trying to create a new monitor on these nodes results in 
the message that the monitor already exists.  I am stuck in this 
existence loop.  Destroying or creating them also doesn't work on the 
command line.


Any idea on how to fix this?  I'd rather not completely reinstall the 
nodes :-)


Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE Cluster: New authentication is required to access each node from GUI

2020-01-25 Thread Frank Thommen
It was indeed the time: it was off by a few minutes between two of the 
servers, but by several hours on the third.


I don't like ntpd anyway and I will probably replace it by chronyd.
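
A minimal sketch of the kind of check/switch involved on each node (a hedged 
example, not part of the original exchange):

$ timedatectl status     # shows whether the system clock is synchronized
$ apt install chrony     # on Debian this conflicts with and replaces ntp
$ chronyc tracking       # verify the remaining offset afterwards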

Thanks for the hint.

Cheers, frank


On 25/01/2020 18:32, Gianni Milo wrote:

Things I would check or modify...

- output of 'pvecm s' and 'pvecm n' commands.
- syslog on each node for any clues.
- ntp.
- separate the cluster (corosync) network from the storage network (i.e. in your
case, use --link2, LAN).

G.


On Sat, 25 Jan 2020 at 15:44, Frank Thommen 
wrote:


Dear all,

I have installed a 3-node PVE cluster as instructed on
https://pve.proxmox.com/pve-docs/chapter-pvecm.html (using the command line).
   When I now connect via GUI to one node and select one of the other
nodes, I get a "401" error message and then I am asked to authenticate
to the other node.  So to see all nodes from all other nodes via GUI I
would have to authenticate nine times.  I don't think that is as it
should be ;-). I would assume that once I am logged in on the GUI of one
of the cluster nodes, I can look at the other two nodes w/o additional
authentication from this GUI.

The situation is somehow similar to the one described on

https://forum.proxmox.com/threads/3-node-cluster-permission-denied-invalid-pve-ticket-401.56038/,

but the suggested "pvecm updatecerts" (run on each node) only helped for
a short time.  After a reboot of the nodes I am back to the potential
nine authentications.

My three nodes are connected through a full 10GE mesh
(https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server) using
broadcast bonds.  This mesh will finally also be used for Ceph.  I
configured this mesh to be the cluster network (--link0).  As fallback
(--link1) I used the regular LAN.

Does anyone have an idea what could be wrong and how this could be
fixed?  Could the mesh with the broadcast bonds be the problem?  If yes,
should I use an other type of mesh?  Unfortunately a full dedicated
PVE-only network with a switch is not an option.  I can either use a
mesh or the regular LAN in the datacenter.

The systems are running PVE 6.1-3.

Any help or hint is appreciated.

Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user





___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] PVE Cluster: New authentication is required to access each node from GUI

2020-01-25 Thread Frank Thommen

Dear all,

I have installed a 3-node PVE cluster as instructed on 
https://pve.proxmox.com/pve-docs/chapter-pvecm.html (using the command line). 
 When I now connect via GUI to one node and select one of the other 
nodes, I get a "401" error message and then I am asked to authenticate 
to the other node.  So to see all nodes from all other nodes via GUI I 
would have to authenticate nine times.  I don't think that is as it 
should be ;-). I would assume that once I am logged in on the GUI of one 
of the cluster nodes, I can look at the other two nodes w/o additional 
authentication from this GUI.


The situation is somehow similar to the one described on 
https://forum.proxmox.com/threads/3-node-cluster-permission-denied-invalid-pve-ticket-401.56038/, 
but the suggested "pvecm updatecerts" (run on each node) only helped for 
a short time.  After a reboot of the nodes I am back to the potential 
nine authentications.
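
For reference, the workaround referenced above amounts to running, on every 
node (a minimal sketch; the service restart is a hedged addition, not from 
the forum thread):

$ pvecm updatecerts
$ systemctl restart pvedaemon pveproxy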


My three nodes are connected through a full 10GE mesh 
(https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server) using 
broadcast bonds.  This mesh will finally also be used for Ceph.  I 
configured this mesh to be the cluster network (--link0).  As fallback 
(--link1) I used the regular LAN.


Does anyone have an idea what could be wrong and how this could be 
fixed?  Could the mesh with the broadcast bonds be the problem?  If yes, 
should I use an other type of mesh?  Unfortunately a full dedicated 
PVE-only network with a switch is not an option.  I can either use a 
mesh or the regular LAN in the datacenter.


The systems are running PVE 6.1-3.

Any help or hint is appreciated.

Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to get rid of default thin-provisioned default LV "data"?

2020-01-19 Thread Frank Thommen

That's what I have written in our internal documentation:

--
We delete the current thin-provisioned LV, create a new 
"thick-provisioned" LV instead, format it, mount it and configure it as 
"dir" storage for PVE:


First manually remove the lvmthin entry from /etc/pve/storage.cfg, then

$ lvremove /dev/pve/data
Do you really want to remove and DISCARD active logical volume pve/data? 
[y/n]: yes

  Logical volume "data" successfully removed
$ lvcreate -n data -l 100%FREE pve
  Logical volume "data" created.
$ mkfs -t ext4 /dev/pve/data
mke2fs 1.44.5 (15-Dec-2018)
Creating filesystem with 216501248 4k blocks and 54132736 inodes
Filesystem UUID: 5d457f4c-40a5-4423-9c3a-03db1ae76869
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

$ mkdir /data
$ echo "/dev/pve/data /data ext4 defaults 0 2" >> /etc/fstab && mount -a
$ pvesm add dir data --path /data --content 
images,rootdir,vztmpl,iso,backup,snippets

$

done...
--
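
A short verification sketch that could follow (a hedged addition; "data" is 
the storage configured above):

$ lvs pve          # 'data' should now be a plain (non-thin) LV
$ df -h /data      # the new filesystem is mounted
$ pvesm status     # the 'data' dir storage shows up as active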

HTH, frank


On 19.01.20 00:53, Saint Michael wrote:

can somebody post the full chain of commands?
That is the only reason why I am not using the software.


On Sat, Jan 18, 2020 at 2:54 PM Frank Thommen 
wrote:


Thanks Gianni,

that worked just fine.  I've now created a regular LVM LV instead,
formatted and mounted it and configured it as "dir" storage.

frank


On 16.01.20 21:19, Gianni Milo wrote:

This should be a case of removing (or commenting out) its corresponding
entry in /etc/pve/storage.cfg and then removing it by using the usual lvm
commands.
You can then "convert" it to thick lvm and re-add it in the config file...


Gianni

On Thu, 16 Jan 2020 at 18:26, Frank Thommen <f.thom...@dkfz-heidelberg.de> wrote:


Dear all,

when installing PVE on our servers with 1 TB boot disk, the installer
creates regular /root and swap LVs (ca. 100 GB in total) and the rest of
the disk (800 GB) is used for a thin-provisioned LV named "data".
Personally I don't like thin-provisioning and would like to get rid of
it and have a "regular" LV instead.  However I couldn't find a way to
achieve this through the gui (PVE 6.1-3).

Did I miss some gui functionality or is it ok to remove the LV with
regular lvm commands?  Will I break something if I remove it?  No
VM/container has been installed so far.  The hypervisors are freshly
installed and they contain additional several internal disks which we
plan to use for local storage/Ceph.

Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user






--
Frank Thommen  | HD-HuB / DKFZ Heidelberg
 | f.thom...@dkfz-heidelberg.de
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user






--
Frank Thommen  | HD-HuB / DKFZ Heidelberg
   | f.thom...@dkfz-heidelberg.de
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to get rid of default thin-provisioned default LV "data"?

2020-01-18 Thread Frank Thommen

Thanks Gianni,

that worked just fine.  I've now created a regular LVM LV instead, 
formatted and mounted it and configured it as "dir" storage.


frank


On 16.01.20 21:19, Gianni Milo wrote:

This should be a case of removing (or commenting out) its corresponding entry
in /etc/pve/storage.cfg and then removing it by using the usual lvm
commands.
You can then "convert" it to thick lvm and re-add it in the config file...

Gianni

On Thu, 16 Jan 2020 at 18:26, Frank Thommen 
wrote:


Dear all,

when installing PVE on our servers with 1 TB boot disk, the installer
creates regular /root and swap LVs (ca. 100 GB in total) and the rest of
the disk (800 GB) is used for a thin-provisioned LV named "data".
Personally I don't like thin-provisioning and would like to get rid of
it and have a "regular" LV instead.  However I couldn't find a way to
achieve this through the gui (PVE 6.1-3).

Did I miss some gui functionality or is it ok to remove the LV with
regular lvm commands?  Will I break something if I remove it?  No
VM/container has been installed so far.  The hypervisors are freshly
installed and they contain additional several internal disks which we
plan to use for local storage/Ceph.

Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user






--
Frank Thommen  | HD-HuB / DKFZ Heidelberg
   | f.thom...@dkfz-heidelberg.de
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] How to get rid of default thin-provisioned default LV "data"?

2020-01-16 Thread Frank Thommen

Dear all,

when installing PVE on our servers with 1 TB boot disk, the installer 
creates regular /root and swap LVs (ca. 100 GB in total) and the rest of 
the disk (800 GB) is used for a thin-provisioned LV named "data". 
Personally I don't like thin-provisioning and would like to get rid of 
it and have a "regular" LV instead.  However I couldn't find a way to 
achieve this through the gui (PVE 6.1-3).


Did I miss some gui functionality or is it ok to remove the LV with 
regular lvm commands?  Will I break something if I remove it?  No 
VM/container has been installed so far.  The hypervisors are freshly 
installed and they contain additional several internal disks which we 
plan to use for local storage/Ceph.


Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] LVM/pvesm udev database initialization warnings after import of KVM images

2020-01-10 Thread Frank Thommen

Dear all,

after having (successfully) imported two KVM disk images from oVirt, LVM 
and pvesm complain about some udev initialization problem:


root@pve01:~# pvesm status
  WARNING: Device /dev/dm-8 not initialized in udev database even after 
waiting 1000 microseconds.
  WARNING: Device /dev/dm-9 not initialized in udev database even after 
waiting 1000 microseconds.
  WARNING: Device /dev/dm-8 not initialized in udev database even after 
waiting 1000 microseconds.
  WARNING: Device /dev/dm-9 not initialized in udev database even after 
waiting 1000 microseconds.
Name          Type      Status      Total        Used         Available    %
local         dir       active      98559220     45640836     47868836     46.31%
local-lvm     lvmthin   active      832245760    71573135     760672624    8.60%

root@pve01:~#

root@pve01:~# lvdisplay
  WARNING: Device /dev/dm-8 not initialized in udev database even after 
waiting 1000 microseconds.
  WARNING: Device /dev/dm-9 not initialized in udev database even after 
waiting 1000 microseconds.

  --- Logical volume ---
  LV Path/dev/pve/swap
  LV Nameswap
  VG Namepve
  [...]
root@pve01:~#

However this doesn't seem to influence the functionality of the VMs.

Any idea what could be the problem and how to fix it?

Thank you very much in advance
Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Are two bridges to the same network possible?

2019-12-18 Thread Frank Thommen

On 12/17/19 10:35 PM, Chris Hofstaedtler | Deduktiva wrote:

* Frank Thommen  [191217 22:10]:

I probably don't understand the question correctly: One has the IP traffic
to the server/hypervisor, the other has IP traffic for the attached VMs and
containers.  Is there no need for a gateway if the bridge only provisions
VMs and containers with network connectivity? Should I think of the bridge
as a dumb connector on the link level (I should probably and urgently reread
my networking materials :-)


You can think of it as a dumb switch. As a special feature, Linux
allows you to also setup host network connectivity on a bridge - but
it doesn't have to do that.

I would guess that on the bridge you want to use for your VMs, you
don't need an IP address (and also no gateway then).

Generally speaking, having more than one default gateway per host is
an advanced configuration and you really need to know what you're
doing then. Having multiple interfaces with IP addresses is a common
thing, but you'll need to understand how your routing setup works.
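
A minimal /etc/network/interfaces sketch of such an address-less bridge, 
reusing the bond1/vmbr1 names from this thread (the stanza itself is a hedged 
example, not taken from the actual config):

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0

No address and no gateway line - the host just forwards frames for the guests.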


Thanks, that helps.  I think I will additionally remove vmbr0 (using 
bond0), as this connection will not be used for any virtual machine or 
container, but only to access the hypervisor itself.


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Are two bridges to the same network possible?

2019-12-17 Thread Frank Thommen

On 17.12.19 21:42, Chris Hofstaedtler | Deduktiva wrote:

* Frank Thommen  [191217 20:29]:

(how) can I connect two virtual bridges to the same network (using the same
gateway)?

Our currently-being-built-up PVE server has four Linux network bonds
connecting to three networks. One is being used for the PVE server
itself (administration, web UI etc.), the other three for virtual machines
which will reside in three different networks.  One of these three networks
is the same as the one where the PVE server is residing, but I'd still like
to use separate NICs for the VMs.

The server itself is attached to the default vmbr0 (using bond0).  But as
soon as I want to configure a second bridge vmbr1 (using bond1) to the same
network, PROXMOX complains that the default gateway already exists.

Is it technically possible/supported to have multiple bridges to the same
network (with the same gateways)?


Do you actually need IP traffic on the host on the second interface?
If not, then don't configure IP addresses, gateways, etc...

Chris



I probably don't understand the question correctly: One has the IP 
traffic to the server/hypervisor, the other has IP traffic for the 
attached VMs and containers.  Is there no need for a gateway if the 
bridge only provisions VMs and containers with network connectivity? 
Should I think of the bridge as a dumb connector on the link level (I 
should probably and urgently reread my networking materials :-)


Cheers, frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Are two bridges to the same network possible?

2019-12-17 Thread Frank Thommen

Hello,

(how) can I connect two virtual bridges to the same network (using the 
same gateway)?


Our currently-being-built-up PVE server has four Linux network bonds 
connecting to three networks. One is being used for the PVE server 
itself (administration, web UI etc.), the other three for virtual 
machines which will reside in three different networks.  One of these 
three networks is the same as the one where the PVE server is residing, 
but I'd still like to use separate NICs for the VMs.


The server itself is attached to the default vmbr0 (using bond0).  But 
as soon as I want to configure a second bridge vmbr1 (using bond1) to 
the same network, PROXMOX complains that the default gateway already exists.


Is it technically possible/supported to have multiple bridges to the 
same network (with the same gateways)?


Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-17 Thread Frank Thommen

On 12/17/18 9:23 AM, Eneko Lacunza wrote:

Hi,

On 16/12/18 at 17:16, Frank Thommen wrote:
I understand that with the new PVE release PVE hosts (hypervisors) can be

used as Ceph servers.  But it's not clear to me if (or when) that makes
sense.  Do I really want to have Ceph MDS/OSD on the same hardware 
as my
hypervisors?  Doesn't that a) accumulate multiple POFs on the same 
hardware
and b) occupy computing resources (CPU, RAM), that I'd rather use 
for my VMs
and containers?  Wouldn't I rather want to have a separate Ceph 
cluster?

The integration of Ceph services in PVE started with Proxmox VE 3.0.
With PVE 5.3 (current) we added CephFS services to the PVE. So you can
run a hyper-converged Ceph with RBD/CephFS on the same servers as your
VM/CT.

a) can you please be more specific about what you see as multiple points of
failure?


not only do I run the hypervisor which controls containers and virtual 
machines on the server, but also the file service which is used to 
store the VM and container images.
I think you have fewer points of failure :-) because you'll have 3 points 
(nodes) of failure in a hyperconverged scenario and 6 in a separate 
virtualization/storage cluster scenario...  it depends how you look at it.


Right, but I look at it from the service side: one hardware failure -> 
one service affected vs. one hardware failure -> two services affected.





b) depends on the workload of your nodes. Modern server hardware has
enough power to be able to run multiple services. It all comes down to
have enough resources for each domain (eg. Ceph, KVM, CT, host).

I recommend to use a simple calculation for the start, just to get a
direction.

In principle:

==CPU==
core='CPU with HT on'

* reserve a core for each Ceph daemon
   (preferably on the same NUMA node as the network; higher frequency is
   better)
* one core for the network card (higher frequency = lower latency)
* rest of the cores for OS (incl. monitoring, backup, ...), KVM/CT usage
* don't overcommit

==Memory==
* 1 GB per TB of used disk space on an OSD (more on recovery)
Note this is not true anymore with Bluestore, because you have to take 
cache space into account (1 GB for HDD and 3 GB for SSD OSDs, if I recall 
correctly), and also currently OSD processes aren't that good with RAM 
use accounting... :)

* enough memory for KVM/CT
* free memory for OS, backup, monitoring, live migration
* don't overcommit

==Disk==
* one OSD daemon per disk, even disk sizes throughout the cluster
* more disks, more hosts, better distribution

==Network==
* at least 10 GbE for storage traffic (more the better),
   see our benchmark paper
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/ 

10Gbit helps a lot with latency; small clusters can work perfectly with 
2x1Gbit if they aren't latency-sensitive (we have been running a 
handful of those for some years now).


I will keep the two points in mind.  Thank you.
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-17 Thread Frank Thommen

Hi Alwin,

On 12/16/18 7:47 PM, Alwin Antreich wrote:

On Sun, Dec 16, 2018 at 05:16:50PM +0100, Frank Thommen wrote:

Hi Alwin,

On 16/12/18 15:39, Alwin Antreich wrote:

Hello Frank,

On Sun, Dec 16, 2018 at 02:28:19PM +0100, Frank Thommen wrote:

Hi,

I understand that with the new PVE release PVE hosts (hypervisors) can be
used as Ceph servers.  But it's not clear to me if (or when) that makes
sense.  Do I really want to have Ceph MDS/OSD on the same hardware as my
hypervisors?  Doesn't that a) accumulate multiple POFs on the same hardware
and b) occupy computing resources (CPU, RAM), that I'd rather use for my VMs
and containers?  Wouldn't I rather want to have a separate Ceph cluster?

The integration of Ceph services in PVE started with Proxmox VE 3.0.
With PVE 5.3 (current) we added CephFS services to the PVE. So you can
run a hyper-converged Ceph with RBD/CephFS on the same servers as your
VM/CT.

a) can you please be more specific about what you see as multiple points of
failure?


not only do I run the hypervisor which controls containers and virtual machines
on the server, but also the file service which is used to store the VM and
container images.

Sorry, I am still not quite sure what your question/concern is.
Failure tolerance needs to be planned into the system design, irrespective
of service distribution.

Proxmox VE has a HA stack that restarts all services from a failed node
(if configured) on a other node.
https://pve.proxmox.com/pve-docs/chapter-ha-manager.html

Ceph does selfhealing (if enough nodes
are available) or still works in a degraded state.
http://docs.ceph.com/docs/luminous/start/intro/


Yes, I am aware of PVE and Ceph failover/healing capabilities.  But I 
have always liked to separate basic and central services on the hardware 
level.  This way, if one server "explodes", only one service is affected. 
With PVE+Ceph on one node, such an outage would affect two basic 
services at once.  I don't say they wouldn't continue to run 
productively, but they would run in a degraded and non-failure-safe mode - 
assuming we had three such nodes in the cluster - until the broken node 
can be restored.


But that's probably just my old-fashioned conservative approach.  That's 
why I wanted to ask the list members for their assessment ;-)



[...]


Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-16 Thread Frank Thommen

Hi Alwin,

On 16/12/18 15:39, Alwin Antreich wrote:

Hello Frank,

On Sun, Dec 16, 2018 at 02:28:19PM +0100, Frank Thommen wrote:

Hi,

I understand that with the new PVE release PVE hosts (hypervisors) can be
used as Ceph servers.  But it's not clear to me if (or when) that makes
sense.  Do I really want to have Ceph MDS/OSD on the same hardware as my
hypervisors?  Doesn't that a) accumulate multiple POFs on the same hardware
and b) occupy computing resources (CPU, RAM), that I'd rather use for my VMs
and containers?  Wouldn't I rather want to have a separate Ceph cluster?

The integration of Ceph services in PVE started with Proxmox VE 3.0.
With PVE 5.3 (current) we added CephFS services to the PVE. So you can
run a hyper-converged Ceph with RBD/CephFS on the same servers as your
VM/CT.

a) can you please be more specific about what you see as multiple points of
failure?


not only do I run the hypervisor which controls containers and virtual 
machines on the server, but also the file service which is used to store 
the VM and container images.




b) depends on the workload of your nodes. Modern server hardware has
enough power to be able to run multiple services. It all comes down to
have enough resources for each domain (eg. Ceph, KVM, CT, host).

I recommend to use a simple calculation for the start, just to get a
direction.

In principle:

==CPU==
core='CPU with HT on'

* reserve a core for each Ceph daemon
   (preferably on the same NUMA node as the network; higher frequency is
   better)
* one core for the network card (higher frequency = lower latency)
* rest of the cores for OS (incl. monitoring, backup, ...), KVM/CT usage
* don't overcommit

==Memory==
* 1 GB per TB of used disk space on an OSD (more on recovery)
* enough memory for KVM/CT
* free memory for OS, backup, monitoring, live migration
* don't overcommit

==Disk==
* one OSD daemon per disk, even disk sizes throughout the cluster
* more disks, more hosts, better distribution

==Network==
* at least 10 GbE for storage traffic (more the better),
   see our benchmark paper
   https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/
* separate networks, cluster, storage, client traffic,
   additional for separate the migration network from any other
* use two physical networks for corosync (ring0 & ring1)

This list doesn't cover every aspect (eg. how much failure is allowed),
but I think it is a good start. With the above points for the sizing of
your cluster, the question of a separation of a hyper-converged service
might be a little easier.
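
A rough worked example of that kind of estimate (purely illustrative 
assumptions, not a recommendation): for a node with 16 cores, 6 OSD disks of 
4 TB each and one 10 GbE card, the guideline gives roughly

  CPU:    6 cores (OSDs) + 1 (NIC) + 1-2 (OS/backup/monitoring) -> ~8 cores left for KVM/CT
  Memory: 6 OSDs x 4 TB x 1 GB/TB = 24 GB reserved for Ceph, plus whatever the guests and host need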


Thanks a lot.  This sure helps in our planning.

frank



--
Cheers,
Alwin
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-16 Thread Frank Thommen

Hi,

I understand that with the new PVE release PVE hosts (hypervisors) can 
be used as Ceph servers.  But it's not clear to me if (or when) that 
makes sense.  Do I really want to have Ceph MDS/OSD on the same hardware 
as my hypervisors?  Doesn't that a) accumulate multiple POFs on the same 
hardware and b) occupy computing resources (CPU, RAM), that I'd rather 
use for my VMs and containers?  Wouldn't I rather want to have a 
separate Ceph cluster?


Or didn't I get the point of the Ceph integration?

Cheers
Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cluster network via directly connected interfaces?

2018-11-22 Thread Frank Thommen

Good point.  Thanks a lot
frank


On 11/22/2018 07:51 PM, Uwe Sauter wrote:

FYI:

I had such a thing working. What you need to keep in mind is that you 
should configure both interfaces per host on the same (software) bridge 
and keep STP on… that way, when you lose the link from node A to node B, 
the traffic will be going through node C.


++
|    |
| Node A   br0   |
| /   \  |
|   eth0   eth1  |
+--/---\-+
   / \
+/--+  +-\+
|  eth1 |  |    eth0  |
|  /    |  |   \  |
| br0--eth0-eth1--br0 |
|   Node B  |  |  Node C  |
+---+  +--+
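
A minimal per-node sketch of that setup in /etc/network/interfaces (interface 
names match the diagram, the address is a placeholder assumption):

auto br0
iface br0 inet static
    address 10.10.10.1/24
    bridge-ports eth0 eth1
    bridge-stp on
    bridge-fd 2

With STP on, the bridge blocks the redundant path in normal operation and 
re-opens it when one of the direct links goes down.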




On 22.11.18 at 19:42, Frank Thommen wrote:
What I /really/ meant was "but the throughput would /not/ be higher 
when using a switch"...



On 11/22/2018 07:37 PM, Frank Thommen wrote:
But the throughput would be higher when using a switch, would it?  
It's still just 1Gbit


frank


On 11/22/2018 07:34 PM, Mark Schouten wrote:
Other than limited throughput, I can’t think of a problem. But 
limited throughput might cause unforeseen situations.


Mark Schouten

On 22 Nov 2018 at 19:30, Frank Thommen wrote:


Please excuse if this is too basic, but after reading 
https://pve.proxmox.com/wiki/Cluster_Manager I wondered if the 
cluster/corosync network could be built with directly connected 
network interfaces.  I.e. not like this:


+---+
| pve01 |--+
+---+  |
    |
+---+ ++
| pve02 |-| network switch |
+---+ ++
    |
+---+  |
| pve03 |--+
+---+


but like this:

+---+
| pve01 |---+
+---+   |
 |   |
+---+   |
| pve02 |   |
+---+   |
 |   |
+---+   |
| pve03 |---+
+---+

(all connections 1Gbit, there are currently no plans to extend 
beyond three nodes)


I can't see any drawback in that solution.  It would remove one 
layer of hardware dependency and potential spof (the switch).  If 
we don't trust the interfaces, we might be able to configure a 
second network with the three remaining interfaces.


Is such a "direct-connection" topology feasible?  Recommended? 
Strictly not recommended?


I am currently just planning and thinking and there is no cluster 
(or even a PROXMOX server) in place.


Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cluster network via directly connected interfaces?

2018-11-22 Thread Frank Thommen
What I /really/ meant was "but the throughput would /not/ be higher when 
using a switch"...



On 11/22/2018 07:37 PM, Frank Thommen wrote:
But the throughput would be higher when using a switch, would it?  It's 
still just 1Gbit


frank


On 11/22/2018 07:34 PM, Mark Schouten wrote:
Other than limited throughput, I can’t think of a problem. But limited 
throughput might cause unforeseen situations.


Mark Schouten

On 22 Nov 2018 at 19:30, Frank Thommen wrote:


Please excuse if this is too basic, but after reading 
https://pve.proxmox.com/wiki/Cluster_Manager I wondered if the 
cluster/corosync network could be built with directly connected network 
interfaces.  I.e. not like this:


+---+
| pve01 |--+
+---+  |
    |
+---+ ++
| pve02 |-| network switch |
+---+ ++
    |
+---+  |
| pve03 |--+
+---+


but like this:

+---+
| pve01 |---+
+---+   |
 |   |
+---+   |
| pve02 |   |
+---+   |
 |   |
+---+   |
| pve03 |---+
+---+

(all connections 1Gbit, there are currently no plans to extend beyond 
three nodes)


I can't see any drawback in that solution.  It would remove one layer 
of hardware dependency and potential spof (the switch).  If we don't 
trust the interfaces, we might be able to configure a second network 
with the three remaining interfaces.


Is such a "direct-connection" topology feasible?  Recommended? 
Strictly not recommended?


I am currently just planning and thinking and there is no cluster (or 
even a PROXMOX server) in place.


Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user






--
Frank Thommen  | HD-HuB / DKFZ Heidelberg
   | frank.thom...@uni-heidelberg.de
   | MMK:  +49-6221-54-3637 (Mo-Mi, Fr)
   | IPMB: +49-6221-54-5823 (Do)
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cluster network via directly connected interfaces?

2018-11-22 Thread Frank Thommen
But the throughput would be higher when using a switch, would it?  It's 
still just 1Gbit


frank


On 11/22/2018 07:34 PM, Mark Schouten wrote:

Other than limited throughput, I can’t think of a problem. But limited 
throughput might cause unforeseen situations.

Mark Schouten


On 22 Nov 2018 at 19:30, Frank Thommen wrote:

Please excuse if this is too basic, but after reading 
https://pve.proxmox.com/wiki/Cluster_Manager I wondered if the 
cluster/corosync network could be built with directly connected network 
interfaces.  I.e. not like this:

+---+
| pve01 |--+
+---+  |
|
+---+ ++
| pve02 |-| network switch |
+---+ ++
|
+---+  |
| pve03 |--+
+---+


but like this:

+---+
| pve01 |---+
+---+   |
 |   |
+---+   |
| pve02 |   |
+---+   |
 |   |
+---+   |
| pve03 |---+
+---+

(all connections 1Gbit, there are currently no plans to extend beyond three 
nodes)

I can't see any drawback in that solution.  It would remove one layer of 
hardware dependency and potential spof (the switch).  If we don't trust the 
interfaces, we might be able to configure a second network with the three 
remaining interfaces.

Is such a "direct-connection" topology feasible?  Recommended? Strictly not 
recommended?

I am currently just planning and thinking and there is no cluster (or even a 
PROXMOX server) in place.

Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user




--
Frank Thommen  | HD-HuB / DKFZ Heidelberg
   | frank.thom...@uni-heidelberg.de
   | MMK:  +49-6221-54-3637 (Mo-Mi, Fr)
   | IPMB: +49-6221-54-5823 (Do)
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Cluster network via directly connected interfaces?

2018-11-22 Thread Frank Thommen
Please excuse if this is too basic, but after reading 
https://pve.proxmox.com/wiki/Cluster_Manager I wondered if the 
cluster/corosync network could be built by directly connected network 
interfaces, i.e. not like this:


 +---+
 | pve01 |--+
 +---+  |
|
 +---+ ++
 | pve02 |-| network switch |
 +---+ ++
|
 +---+  |
 | pve03 |--+
 +---+


but like this:

 +---+
 | pve01 |---+
 +---+   |
 |   |
 +---+   |
 | pve02 |   |
 +---+   |
 |   |
 +---+   |
 | pve03 |---+
 +---+

(all connections 1Gbit, there are currently no plans to extend over 
three nodes)


I can't see any drawback in that solution.  It would remove one layer of 
hardware dependency and potential spof (the switch).  If we don't trust 
the interfaces, we might be able to configure a second network with the 
three remaining interfaces.
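
For illustration, the per-node network configuration I have in mind would
look roughly like the following (a minimal sketch; interface names,
addresses and the one-small-subnet-per-direct-link layout are my
assumptions):

  # /etc/network/interfaces on pve01 (excerpt)
  # direct link to pve02 (which would use 10.10.12.2)
  auto eth1
  iface eth1 inet static
      address 10.10.12.1
      netmask 255.255.255.252

  # direct link to pve03 (which would use 10.10.13.2)
  auto eth2
  iface eth2 inet static
      address 10.10.13.1
      netmask 255.255.255.252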


Is such a "direct-connection" topology feasible?  Recommended? Strictly 
not recommended?


I am currently just planning and thinking and there is no cluster (or 
even a PROXMOX server) in place.


Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] HTTPS for download.proxmox.com

2017-11-30 Thread Frank Thommen

On 11/30/2017 03:11 PM, lemonni...@ulrar.net wrote:

This is dumb. I agree that it wouldn't cost them anything to set up
HTTPS, but I also agree that it is useless. The packages are signed and
apt already checks the signature; HTTPS wouldn't add anything at all.


Not true: it gives you the certainty that you are connected to the "real" 
proxmox page and not a fake page, e.g. after being redirected through a 
hacked nameserver or local resolver.


And afaik, those using the community version don't have access to the 
enterprise repos.


frank





Unless you want to hide the fact that you are installing proxmox itself,
but the connection to proxmox's repo itself gives that away.

On Thu, Nov 30, 2017 at 03:01:53PM +0100, John Crisp wrote:

On 30/11/17 14:34, Dietmar Maurer wrote:

On 11/30/2017 02:21 PM, Dietmar Maurer wrote:

I greatly respect the work you do on Proxmox but this specific response
is under your habitual standards from a security standpoint.


Exactly. That is why we provide the enterprise repository.


IMHO the times where security and confidentiality (https) are limited to
enterprise/paid services are long gone.  As the OP noted, https comes at
no cost and there is no reason not to have it configured.  I'd even say
that https is mandatory for every site publishing more than just
personal statements.


Again, please use the enterprise repository if you want https.











___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user







--
Frank Thommen  | HD-HuB / DKFZ Heidelberg
   | frank.thom...@uni-heidelberg.de
   | MMK:  +49-6221-54-3637 (Mo-Mi, Fr)
   | IPMB: +49-6221-54-5823 (Do)
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] HTTPS for download.proxmox.com

2017-11-30 Thread Frank Thommen

On 11/30/2017 02:21 PM, Dietmar Maurer wrote:

I greatly respect the work you do on Proxmox but this specific response
is under your habitual standards from a security standpoint.


Exactly. That is why we provide the enterprise repository.


IMHO the times where security and confidentiality (https) are limited to 
enterprise/paid services are long gone.  As the OP noted, https comes at 
no cost and there is no reason not to have it configured.  I'd even say 
that https is mandatory for every site publishing more than just 
personal statements.


Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] CT is locked (backup) - qm unlock doesn't help

2017-01-12 Thread Frank Thommen

On 01/12/2017 11:30 AM, Fabian Grünbichler wrote:

On Thu, Jan 12, 2017 at 11:27:24AM +0100, Frank Thommen wrote:

Hi,

one of my LXC containers is locked: It cannot be started and cannot be
backed up.  The error message is "CT is locked (backup)".  The only
suggestion I found to solve this problem is `qm unlock XXX`.  However in my
case, this results in the error message

  Configuration file 'nodes/pve02/qemu-server/XX.conf' does not exist


yes, because "qm" is for qemu VMs.



What else can I try to unlock this container?



use the container tool "pct" -> "man pct" or "pct help"


too easy :-).  Everything works again.  Thanks a lot.
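
For the archive, the commands involved are roughly (container ID 105 is
just a placeholder):

  pct unlock 105    # remove the stale 'backup' lock
  pct start 105     # start the container again
  pct status 105    # confirm it is running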
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Which service is used to mount datacenter NFS Share ?

2017-01-04 Thread Frank Thommen



On 01/04/2017 01:46 PM, Florent B wrote:

On 01/04/2017 01:42 PM, Fábio Rabelo wrote:

Hellows ...

Look at storage view, then add .

One of the options is nfs; you need to supply the IP of the share, or, if
it has a DNS name, you can use that.


Fábio Rabelo

2017-01-04 10:38 GMT-02:00 Florent B :

Hi everyone,

I couldn't find the answer.  I have an NFS share configured in datacenter
storage.

Which service (or systemd unit) is used to mount (and unmount) these
shares ?

Thank you.

Florent

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Hi Fábio,

Excuse me if my message was not clear enough, but that was not my question.

I already configured a NFS share.

I just want to know, when nodes boot, which service (system service) is
mounting those shares ?


There is no service involved.  You need the mounts to be configured in 
the mount table (/etc/fstab or automounter files) and the NFS client 
tools (mount.nfs & co.) need to be present.
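
A minimal sketch of such a static mount entry in /etc/fstab (server name,
export and mount point are just examples):

  fileserver.example.org:/export/data  /mnt/data  nfs  vers=3,hard  0  0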


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Help for new server

2016-10-28 Thread Frank Thommen

On 10/28/2016 09:21 AM, Eneko Lacunza wrote:

Hi,

El 27/10/16 a las 15:07, Frank Thommen escribió:



I also need the battery for smart arrays?


If you do not plan to use ZFS, which sort of nullifies the use of a
RAID controller, then yes. If you care for your data a BBU is
absolutely vital and will give you a noticeable performance boost.

I understand that BBU improves performance, but in what way does it
"care for your data - absolutely vital"?

My understanding is that in fact it does add some risk to your data if
RAID controller fails - bye bye write cache data!


The BBU will store your unwritten cached data in case of a power
failure and write it down after power has been restored.  W/o BBU, you
would lose that data completely.  Of course, if you don't use cache,
you also don't need the BBU ;-)

Sure, but BBU is not vital for data care. If you want performance, you
need BBU, but you can safely use write-through without BBU. BBU does not
improve the consistency of your data.


Your argumentation is not completely correct: if you want performance, 
you need caching (not BBU).  And if you use caching, you should really, 
really use BBU (your RAID controller might not even allow caching w/o 
BBU anyway)


And yes, it does improve data consistency very much in the case of a 
power failure.


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Help for new server

2016-10-27 Thread Frank Thommen

On 10/27/2016 10:05 AM, Eneko Lacunza wrote:

Hi,

Not trolling, just curious about a thing you said...

El 26/10/16 a las 18:20, Michael Rasmussen escribió:

I also need the battery for smart arrays?


If you do not plan to use ZFS, which sort of nullifies the use of a
RAID controller, then yes. If you care for your data a BBU is
absolutely vital and will give you a noticeable performance boost.

I understand that BBU improves performance, but in what way does it
"care for your data - absolutely vital"?

My understanding is that in fact it does add some risk to your data if
RAID controller fails - bye bye write cache data!


The BBU will store your unwritten cached data in case of a power failure 
and write it down after power has been restored.  W/o BBU, you would 
lose that data completely.  Of course, if you don't use cache, you also 
don't need the BBU ;-)


And if the RAID controller is broken, you have a big problem anyway, no 
matter if you use caching or not ;-)


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Manager Skins/Themes

2016-10-07 Thread Frank Thommen

On 10/04/2016 10:47 AM, John Crisp wrote:

On 03/10/16 21:12, Brian :: wrote:
[...]

The double vertical is just insane - I can't think of anywhere else that
uses it. It just grates every time I look at it. There is no separation
from the first column.

We are all used to a vertical column left, and a horizontal row, in
multiple programs (think say email programs, file managers, and just
about anything else).

With wide screens there is no need to try and cram everything to the left.

If double columns were a good GUI choice you would see them everywhere,
but you don't, which means GUI designers the world over consider them a
bad design and do not use them.

Anyway, I guess I am in a minority of one and will be ignored.


You are not alone.  I am with you, even though I wouldn't have 
formulated it so harshly :-).


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 4.3: Not enough space for side-by-side performance graphs

2016-09-30 Thread Frank Thommen

On 09/30/2016 07:47 AM, Fabian Grünbichler wrote:

On Thu, Sep 29, 2016 at 07:01:14PM +0200, Frank Thommen wrote:

Hi all,

the upgrade to PVE 4.3 introduced some nice features in the GUI, e.g. the
nice CPU, memory and swap usage bars.  But unfortunately it also brought
back the screen space waste issue solved back in June (see
http://pve.proxmox.com/pipermail/pve-user/2016-May/010371.html and
http://pve.proxmox.com/pipermail/pve-user/2016-June/010583.html).  By moving
the menu from a horizontal to a vertical position, there is now - on a
standard 24" screen - not enough horizontal space left to show the graphs
side-by-side w/o zooming out.


not sure what a standard 24" screen is? All of this does not depend on
screen size, but on resolution. On a FHD (1920x1080) screen, two graphs
should fit.


yes, that's what I meant by 'standard 24" screen'.  Anyway: on my 
1920x1080 the two graphs don't fit.  The browser (Firefox) is at maximum 
zoom but not fullscreen.




Is there an (upgrade-safe) way to make the frames slightly smaller, so that
they fit side-by-side w/o being forced to zoom out?


you can make the leftmost tree smaller by grabbing the divider, maybe
this helps?


It does, but that's a manual intervention (mouse) like zooming 
(keyboard) which I'd like to avoid, since it's a question of just a 
few pixels.


Cheers
frank



anyway, dietmar's suggestion about wasting less space by moving the
legends seems like a good one :)

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Upgrade to Proxmox VE 4.3

2016-09-29 Thread Frank Thommen

On 09/29/2016 06:34 PM, Luis G. Coralle wrote:

Hi all, I have a single node Proxmox VE 4.2-2

To upgrade from 4.2 to 4.3 I run:
apt-get update
apt-get dist-upgrade

After the process the version is 4.2-2 (same version)
Do I need an enterprise subscription?


Do you have this

  deb http://download.proxmox.com/debian jessie pve-no-subscription

repository in your apt sources list?
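
If not, a rough sketch of adding it and re-running the upgrade (the file
name is arbitrary):

  echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
      > /etc/apt/sources.list.d/pve-no-subscription.list
  apt-get update
  apt-get dist-upgrade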

frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [Feature] Float Div's on the summary pages

2016-06-07 Thread Frank Thommen

Great.  Thanks for the hint.
f.

On 06/07/2016 01:01 PM, Felix Brucker wrote:

I'm using 4.2-11/2c626aa1 currently, but it was also present in at least 4.2-10; 
upgrade your system and clear the browser cache.

Best regards
Felix


From: frank.thom...@uni-heidelberg.de
To: pve-user@pve.proxmox.com
Date: Tue, 7 Jun 2016 12:56:09 +0200
Subject: Re: [PVE-User] [Feature] Float Div's on the summary pages

"My" graphs (PVE 4.2-5/7cf09667) /always/ stack onto each other, no
matter how wide the window is, leaving a wide, unused, blank column on
the right side (see https://ibin.co/2jpiRv8X5iXY.png).  /I/'d like to
see PVE use this space instead of wasting it (i.e. like in the mockup
viewable on https://ibin.co/2jpiguIVbBuq.png)

frank



On 06/06/2016 12:51 PM, Felix Brucker wrote:

Hi,

love seeing it got integrated, thanks :)

Best regards
Felix

From: felix.bruc...@live.de
To: pve-user@pve.proxmox.com
Subject: [Feature] Float Div's on the summary pages
Date: Thu, 12 May 2016 00:18:56 +0200




Hi all,

I don't know if this is the correct place to ask, but it would be great if the 
div's from the summary pages would float left, so that they take up the whole 
width of the screen but stack above each other if there is not enough free 
space.
If it's not, it would be great if someone could point me in the right direction; the forum 
does not seem to have some sort of "Feature Request" section.

Best regards
Felix Brucker

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user





--
Frank Thommen  | HD-HuB / DKFZ Heidelberg
  | frank.thom...@uni-heidelberg.de
  | TP3:  +49-6221-42-3562 (Mo+Di)
  | IPMB: +49-6221-54-5823 (Mi-Do)

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user





--
Frank Thommen   | DKFZ Heidelberg / B080
 | System Administrator
 | f.thom...@dkfz-heidelberg.de
 | MMK:  currently n.a.   (Mo+Mi, Fr)
 | IPMB: +49-6221-54-5823 (Do)


--
Frank Thommen  | HD-HuB / DKFZ Heidelberg
   | frank.thom...@uni-heidelberg.de
   | TP3:  +49-6221-42-3562 (Mo+Di)
   | IPMB: +49-6221-54-5823 (Mi-Do)
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [Feature] Float Div's on the summary pages

2016-06-07 Thread Frank Thommen
"My" graphs (PVE 4.2-5/7cf09667) /always/ stack onto each other, no 
matter how wide the window is, leaving a wide, unused, blank column on 
the right side (see https://ibin.co/2jpiRv8X5iXY.png).  /I/'d like to 
see PVE use this space instead of wasting it (i.e. like in the mockup 
viewable on https://ibin.co/2jpiguIVbBuq.png)


frank



On 06/06/2016 12:51 PM, Felix Brucker wrote:

Hi,

love seeing it got integrated, thanks :)

Best regards
Felix

From: felix.bruc...@live.de
To: pve-user@pve.proxmox.com
Subject: [Feature] Float Div's on the summary pages
Date: Thu, 12 May 2016 00:18:56 +0200




Hi all,

I don't know if this is the correct place to ask, but it would be great if the 
div's from the summary pages would float left, so that they take up the whole 
width of the screen but stack above each other if there is not enough free 
space.
If it's not, it would be great if someone could point me in the right direction; the forum 
does not seem to have some sort of "Feature Request" section.

Best regards
Felix Brucker

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user





--
Frank Thommen  | HD-HuB / DKFZ Heidelberg
| frank.thom...@uni-heidelberg.de
| TP3:  +49-6221-42-3562 (Mo+Di)
| IPMB: +49-6221-54-5823 (Mi-Do)

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] NFSv4 support for NFS storage?

2016-05-06 Thread Frank Thommen

Hi,

new fileservers in our department are configured to require NFSv4 and do 
not support NFSv3 any more.  That worked fine for PVE 3.x by patching 
the "values.options" assignment in 
/usr/share/pve-manager/ext?/pvemanagerlib.js.


When applying the same patch to PVE 4.x, the storage is mounted 
correctly and also shown in the summary, but the status is inactive and 
`pvesm list ` returns "mount error: mount.nfs: 
/mnt/pve/ is busy or already mounted".  In the content tab, the 
error "mount error: mount.nfs: /mnt/pve/ is busy or already 
mounted (500)" is shown.


Is there any other patch which has to be applied to make NFSv4 work 
with PVE 4.x, and is there already a timeframe for when NFSv4 will be 
supported out of the box?
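
For reference, what I am effectively after is an /etc/pve/storage.cfg
entry along these lines (storage name, server and export path are
placeholders; the options line carries the same mount-option string the
GUI patch exposes, assuming the NFS storage plugin passes it through to
mount.nfs):

  nfs: fileserver-v4
          server nfs.example.org
          export /export/pve
          path /mnt/pve/fileserver-v4
          content images,rootdir
          options vers=4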


We are currently running "pve-manager/4.1-33/de386c1a (running kernel: 
4.2.6-1-pve)".


Cheers
Frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] User experience issues with Proxmox' new webUI ("Summary" pages)

2016-05-03 Thread Frank Thommen

On 05/03/2016 10:24 AM, Emmanuel Kasper wrote:

On 04/29/2016 11:54 AM, Frank Thommen wrote:

Hi,

I have updated one of our PVE systems from 4.1-1 to 4.1-33 in one go and
now I am seeing an overhauled webUI (specifically the summary pages).
While I like the zoomable graphs, I have several issues with the new UI:

a) font and line spacing have changed and are now using more (IMHO too
much) space on the screen while not becoming more readable.
Screenspace is valuable and I'd like it to be used as effectively as
possible

b) the additional space used in the "Status" box and the bigger size of
the graphs have pushed the second graph ("Memory usage") out of my
screen (fairly standard 24", 1920x1080).  The old layout (I'm comparing
with a still running 3.4-6) allowed me to see all the cpu usage and the
top half of the memory usage w/o any scrolling.  This was very efficient
and in most cases just the information I was looking for.  To get the
same information with 4.1-33 I have to scroll down for each vm/container.

c) graph buildup is now "animated" but also considerably slower.  The
old layout/graphs allowed me to very quickly browse through all
vms/containers and check cpu and memory "by eye" as text boxes and
graphs showed up almost instantaneously.  With 4.1-33 I have to wait 1-2
secs for each vm/container to show the graphs.


does zooming out with your browser help to see more content here?


Theoretically, but then everything becomes so tiny that I find it hard 
to see anything.  I'd rather reshuffle the graphs, so that the big 
unused (=wasted) whitespace to the right of the graphs is also used by 
some of the timelines...or I have to buy a rotatable screen and use it 
in portrait mode :-)




please note that without client-side rendering of the graphs, it would
not be possible to zoom in and out of the graphs themselves,
and client rendering takes a bit more time, of course


I understand that from the technical side.  But still, my client is 
not /so/ slow...




  d) when by mistake one of the fields in the "Status" box is selected,

the page always "jumps" back as soon as the page has been scrolled down...


Yes, I could see that issue.
The text fields here should actually not be selectable.
Can you please submit a bug report in https://bugzilla.proxmox.com/ for
that?


Done: https://bugzilla.proxmox.com/show_bug.cgi?id=976

frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] User experience issues with Proxmox' new webUI ("Summary" pages)

2016-04-29 Thread Frank Thommen

Hi,

I have updated one of our PVE systems from 4.1-1 to 4.1-33 in one go and 
now I am seeing an overhauled webUI (specifically the summary pages). 
While I like the zoomable graphs, I have several issues with the new UI:


a) font and line spacing have changed and are now using more (IMHO too 
much) space on the screen while not becoming more readable. 
Screenspace is valuable and I'd like it to be used as effectively as 
possible


b) the additional space used in the "Status" box and the bigger size of 
the graphs have pushed the second graph ("Memory usage") out of my 
screen (fairly standard 24", 1920x1080).  The old layout (I'm comparing 
with a still running 3.4-6) allowed me to see all the cpu usage and the 
top half of the memory usage w/o any scrolling.  This was very efficient 
and in most cases just the information I was looking for.  To get the 
same information with 4.1-33 I have to scroll down for each vm/container.


c) graph buildup is now "animated" but also considerably slower.  The 
old layout/graphs allowed me to very quickly browse through all 
vms/containers and check cpu and memory "by eye" as text boxes and 
graphs showed up almost instantaneously.  With 4.1-33 I have to wait 1-2 
secs for each vm/container to show the graphs.


d) when by mistake one of the fields in the "Status" box is selected, 
the page always "jumps" back as soon as the page has been scrolled down...


These experiences have been made with openSuSE 13.2 and Firefox 45.0.


Is there a way to customize fonts/line spacing, graph sizes and graph 
"animations" in the webUI, so as to restore its previous great speed and 
efficiency?



On the other hand I think that the following two features could improve 
the overview's efficiency and usefulness a lot:


a) a possibility to zoom into all graphs simultaneously (like Ctrl-zoom 
or similar)


b) graphs which show average and max in a single graph


Cheers
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Debian 8 Template w/o systemd?

2016-04-20 Thread Frank Thommen

On 04/20/2016 06:14 PM, Dietmar Maurer wrote:

I'm a little bit puzzled that the PVE template for the Debian 8
container comes w/o systemd.  Instead it looks like a SysV init system.
Is there a specific reason for this?


yes, systemd inside debian 8 is very old, and many users had problems in the
past.
Not sure if that is still the case.


On a completely updated Debian 8 container on PVE 3.4 (started from 
Debian 7 and dist-upgraded to 8), systemd is version 215.  According to 
https://github.com/systemd/systemd/releases?after=v220 this version is 
of July 3, 2014.  I assume this /is/ considered old :-).  On the other 
hand I haven't experienced any problems so far.


On https://wiki.debian.org/systemd it says "Please make sure that you 
are using Debian jessie or newer to get a recent version of systemd.". 
Hmm, "recent"...? ;-)


But not including systemd makes the task of streamlining an IT 
environment harder.  If you try to keep bare metal and virtual hosts on 
the same distro and release (Debian 8), you still cannot use the same 
mechanisms and settings everywhere (systemd on metal, SysV init in 
containers).  Switching to systemd once a system has been installed as a 
SysV init system hasn't worked for me (yet).
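
For reference, the standard Debian procedure would be something like the
following (a sketch; whether it actually works inside an LXC container
seems to depend on the PVE/LXC setup):

  apt-get update
  apt-get install systemd systemd-sysv   # systemd-sysv makes systemd the default init
  reboot                                 # restart the container with the new init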


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] No 32bit container templates for PVE 4.x?

2016-03-07 Thread Frank Thommen

On 03/03/2016 11:14 PM, Ilya Konyakhin wrote:


On Thu, Mar 3, 2016 at 9:31 PM, Frank Thommen
<frank.thom...@uni-heidelberg.de
<mailto:frank.thom...@uni-heidelberg.de>> wrote:

What do you mean by "the previous generated 32 bit templates"?
Should additional templates be downloadable somewhere?  I couldn't
find them on linuxcontainers.org <http://linuxcontainers.org>


Frank, I think Thomas was talking about openvz templates. You can find
them here: http://wiki.openvz.org/Download/template/precreated


Meaning that I can use openVZ templates for LXC containers?  That's 
great.  I had assumed that the templates contain configurations or 
information specific to the selected container mechanism.
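
If so, a rough sketch of the workflow would presumably be (template file
name, VMID and options are just examples):

  cd /var/lib/vz/template/cache
  wget http://download.openvz.org/template/precreated/debian-8.0-x86.tar.gz
  pct create 200 local:vztmpl/debian-8.0-x86.tar.gz \
      --hostname deb8-32 --memory 256 \
      --rootfs local:4 --net0 name=eth0,bridge=vmbr0,ip=dhcp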


Thanks for the clarification
frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] No 32bit container templates for PVE 4.x?

2016-03-03 Thread Frank Thommen

Hi,

thanks for the quick answer.

On 03/03/2016 04:11 PM, Thomas Lamprecht wrote:

Hi,

No, no reason at all; LXC, and thus PVE, is able to run 32bit containers fine.

You can use the ones provided by lxc (at least for the distros we
support: debian, ubuntu, arch, centos, alpine and suse, if I
did not miss anything), and you normally should also be able to run the
previous generated 32 bit templates just fine.


What do you mean by "the previous generated 32 bit templates"?  Should 
additional templates be downloadable somewhere?  I couldn't find them on 
linuxcontainers.org




Our dab and aab (debian and archlinux appliance builder) can generate 32
bit templates/appliances.

So the reason that there are no 32 bit ones at our template host is
unknown to me but probably simply nobody thought of this yet,
as the overhead is often quite small and 64 bit has its advantages.


OK.  I might just switch to 64bit containers, then.



Building (at least a debian or arch one) and uploading it should be no
problem, but no promises about how fast :)


Debian would the one I'm currently specifically looking for :-)

Cheers
Frank




cheers,
Thomas

On 03.03.2016 14:09, Frank Thommen wrote:

Hello,

is there a reason why there are no 32bit container templates provided
for PVE 4.x?  With PVE 3.x I used the 32bit openvz templates, as I'm
mostly running small services and I want to keep the memory and disk
footprints as small as possible.  However for PVE 4.x all offered
templates seem to be amd64.

frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



--
Frank Thommen   | HD-HuB / DKFZ Heidelberg
| frank.thom...@uni-heidelberg.de
| TP3:  +49-6221-42-3562 (Mo+Di)
| IPMB: +49-6221-54-5823 (Mi-Fr)

___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] No 32bit container templates for PVE 4.x?

2016-03-03 Thread Frank Thommen

Hello,

is there a reason why there are no 32bit container templates provided 
for PVE 4.x?  With PVE 3.x I used the 32bit openvz templates, as I'm 
mostly running small services and I want to keep the memory and disk 
footprints as small as possible.  However for PVE 4.x all offered 
templates seem to be amd64.


frank
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user