Re: [PVE-User] High I/O waits, not sure if it's a ceph issue.

2020-07-03 Thread Mark Schouten
On Thu, Jul 02, 2020 at 03:57:32PM +0200, Alexandre DERUMIER wrote:
> Hi,
> 
> you should give Ceph Octopus a try. librbd has greatly improved for
> writes, and I can recommend enabling writeback by default now

Yes, that will probably work even better. But what I'm trying to
determine here is whether krbd is a better choice than librbd :)
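
For anyone wanting to compare the two: switching an existing RBD storage
between librbd and krbd is just a storage option, so testing both is cheap.
A sketch (the storage name is an example; running guests need a stop/start
to actually switch drivers):

  # use the kernel RBD driver for this storage
  pvesm set ceph-rbd --krbd 1
  # back to librbd
  pvesm set ceph-rbd --krbd 0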

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] LVM-thin from one server to other seems show wrong size

2020-07-03 Thread Mark Schouten
On Fri, Jul 03, 2020 at 09:19:19AM -0300, Gilberto Nunes wrote:
> Any body?

After migrating, the disk is no longer thin-provisioned. Enable discard on
the virtual disk, mount Linux VMs with the discard option or run fstrim. On
Windows, use the Optimize button in the defrag tool.
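
For example, assuming a SCSI disk on VM 100 on storage local-lvm (IDs and
names are placeholders):

  # let the guest pass discards down to the thin LV
  qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
  # inside a Linux guest: trim all mounted filesystems once
  fstrim -av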

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] Ceph Bluestore - lvmcache versus WAL/DB on SSD

2020-06-30 Thread Mark Schouten
On Tue, Jun 30, 2020 at 12:07:40AM +1000, Lindsay Mathieson wrote:
> As per the title :) I have 23 OSD spinners on 5 hosts, Data+WAL+DB all on
> the disk. All VMs are Windows, running with Writeback Cache. Performance is
> adequate, but I see occasional high I/O loads that make the VMs sluggish.

Could be that (deep) scrubs are periodically killing your performance.
There are some tweaks available to make them less invasive:

osd_scrub_chunk_min=20 # 5
osd_scrub_sleep=4 # 0

And then some:
https://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
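
On Nautilus or newer you can apply those at runtime without restarting the
OSDs, roughly like this (values as above):

  ceph config set osd osd_scrub_sleep 4
  ceph config set osd osd_scrub_chunk_min 20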

The best option is really to pair the spinning disks with SSDs.

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] High I/O waits, not sure if it's a ceph issue.

2020-06-30 Thread Mark Schouten
On Tue, Jun 30, 2020 at 11:28:51AM +1000, Lindsay Mathieson wrote:
> Do you have KRBD set for the Proxmox Ceph Storage? That helps a lot.

I think this is incorrect. Using KRBD means using the kernel driver, which is
usually older than the userland version. Also, upgrading is easier when
not using KRBD.

I'd like to hear that I'm wrong, am I? :)

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] External Ceph cluster for PVE6.1-5

2020-01-29 Thread Mark Schouten
On Wed, Jan 29, 2020 at 09:23:53AM +0100, Alwin Antreich wrote:
> > We just upgraded one of our clusters to PVE 6.1-5. It's not hyperconverged, 
> > so Ceph is running on an external cluster. That cluster runs Luminous, and 
> > we installed the Nautilus client on the proxmox-cluster. I can't find any 
> > documentation if this is supported or not. 
> The stock Ceph version on PVE 6 is Luminous. ;)

https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1 and the installed
packages suggest otherwise:

root@proxmox2-1:~# dpkg -l | grep ceph
ii  ceph-common  14.2.6-pve1
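
For a quick comparison of what the client and the external cluster are
running (the second command needs Luminous or newer on the cluster side):

  ceph --version   # local client / library version
  ceph versions    # per-daemon versions, run against the cluster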

> > Is it OK to have a Nautilus client on Proxmox for an external Luminous 
> > cluster, or will that break stuff?
> Yes, the client should work one LTS version apart.
> Luminous <- Nautilus -> Octopus.

I know Ceph does that. But I did lose data in a PVE 4 to PVE 5
upgrade where Ceph was not yet upgraded and images got shrunk because
Ceph Hammer behaved differently from Ceph Luminous. I am wondering whether
that is the case here too.

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


[PVE-User] External Ceph cluster for PVE6.1-5

2020-01-28 Thread Mark Schouten

Hi,

We just upgraded one of our clusters to PVE 6.1-5. It's not hyperconverged, so 
Ceph is running on an external cluster. That cluster runs Luminous, and we 
installed the Nautilus client on the Proxmox cluster. I can't find any
documentation on whether this is supported or not.


Is it OK to have a Nautilus client on Proxmox for an external Luminous cluster, 
or will that break stuff?

Thanks.

--
Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



[PVE-User] CIFS Mount off- and online

2020-01-14 Thread Mark Schouten

Hi,

We have a CIFS mount to nl.dadup.eu over IPv6. The mount works fine, but
pvestatd continuously flips between 'Offline' and 'Online', even though the
mount works at that time. What can I do to help debug this issue?

root@node03:/mnt/pve/TCC_Marketplace# pveversion 
pve-manager/6.1-3/37248ce6 (running kernel: 5.3.10-1-pve)


storage 'TCC_Marketplace' is not online

root@node03:/mnt/pve/TCC_Marketplace# find
.
./dump
./dump/vzdump-qemu-160-2020_01_13-02_24_16.vma.lzo
./template
./template/iso
./template/iso/debian-10.1.0-amd64-netinst.iso
./template/cache
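
For reference, the state can also be checked by hand while pvestatd flags it
offline (a sketch; the nc flags assume the openbsd netcat):

  # does PVE consider the storage online right now?
  pvesm status --storage TCC_Marketplace
  # is the SMB port reachable over IPv6 at that moment?
  nc -6 -zv nl.dadup.eu 445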


--
Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



Re: [PVE-User] Proxmox 5 to root on ZFS

2019-12-16 Thread Mark Schouten

Here's what I did..

https://pve.proxmox.com/pipermail/pve-user/2018-November/170210.html

--

Mark Schouten 


Tuxis, Ede, https://www.tuxis.nl


T: +31 318 200208 

 



- Original Message -


From: Miguel González via pve-user (pve-user@pve.proxmox.com)
Date: 15-12-2019 21:16
To: pve-user@pve.proxmox.com
Cc: Miguel González (miguel_3_gonza...@yahoo.es)
Subject: [PVE-User] Proxmox 5 to root on ZFS




Re: [PVE-User] Images on CephFS?

2019-11-28 Thread Mark Schouten

Yes, this works.

I've created bug 2490 for this.

https://bugzilla.proxmox.com/show_bug.cgi?id=2490
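
For anyone hitting the same limitation: the workaround Marco describes below
(a directory storage on top of the CephFS mount) is roughly this, with storage
ID and path as examples:

  pvesm add dir cephfs-images --path /mnt/pve/cephfs --content images --shared 1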

--
Mark Schouten
Tuxis B.V.
https://www.tuxis.nl/ | +31 318 200208

-- Original Message --
From: "Marco M. Gabriel" 
To: "Mark Schouten" 
Cc: "PVE User List" 
Sent: 9/25/2019 4:49:51 PM
Subject: Re: [PVE-User] Images on CephFS?


Hi Mark,

as a temporary fix, you could just add a "directory" based storage
that points to the CephFS mount point.

Marco

Am Mi., 25. Sept. 2019 um 15:49 Uhr schrieb Mark Schouten 
:


Hi,

Just noticed that this is not a PVE 6 change. It's also changed in
5.4-3. We're using this actively, which makes me wonder what will happen
if we stop/start a VM using disks on CephFS...

Any way we can enable it again?

--
Mark Schouten
Tuxis B.V.
https://www.tuxis.nl/ | +31 318 200208

------ Original Message --
From: "Mark Schouten" 
To: "PVE User List" 
Sent: 9/19/2019 9:15:17 AM
Subject: [PVE-User] Images on CephFS?

>
>Hi,
>
>We just built our latest cluster with PVE 6.0. We also offer CephFS
>'slow but large' storage with our clusters, on which people can create
>images for backupservers. However, it seems that in PVE 6.0, we can no
>longer use CephFS for images?
>
>
>Can anybody confirm (and explain?) or am I looking in the wrong
>direction?
>
>--
>Mark Schouten 
>
>Tuxis, Ede, https://www.tuxis.nl
>
>T: +31 318 200208
>
>
>___
>pve-user mailing list
>pve-user@pve.proxmox.com
>https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user



Re: [PVE-User] Images on CephFS?

2019-09-25 Thread Mark Schouten

Hi,


> huh, AFAICT we never allowed that, the git-history of the CephFS
> storage plugin is quite short[0] so you can confirm yourself..
> The initial commit did not allow VM/CT images either[1]..

Haha. That's cool. :) I'm pretty sure I never needed to 'hack' anything
to allow it. I can't find an un-updated cluster to test it on.



>> Any way we can enable it again?


> IIRC, the rationale was that if Ceph is used, RBD will be preferred
> for CT/VM anyhow - but CephFS seems to be quite performant, and as
> all functionality should be there (or get added easily) we could
> enable it just fine..
>
> Just scratching my head how you were able to use it for images if
> the plugin was never told to allow it..


The good news is, I can create the image and configure the
VM config file. It works fine, just not for a normal user. It performs
fine as well, although I would recommend only raw images. I think I
remember issues with qcow2 and snapshots.
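
Allocating a raw image on such a storage by hand looks roughly like this
(storage ID, VMID, size and bus are examples):

  pvesm alloc cephfs-images 100 vm-100-disk-0.raw 32G
  qm set 100 --scsi1 cephfs-images:100/vm-100-disk-0.raw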


--
Mark Schouten
Tuxis B.V.
https://www.tuxis.nl/ | +31 318 200208



Re: [PVE-User] Images on CephFS?

2019-09-25 Thread Mark Schouten

Hi,

Just noticed that this is not a PVE 6-change. It's also changed in 
5.4-3. We're using this actively, which makes me wonder what will happen 
if we stop/start a VM using disks on CephFS...


Any way we can enable it again?

--
Mark Schouten
Tuxis B.V.
https://www.tuxis.nl/ | +31 318 200208

-- Original Message --
From: "Mark Schouten" 
To: "PVE User List" 
Sent: 9/19/2019 9:15:17 AM
Subject: [PVE-User] Images on CephFS?



Hi,

We just built our latest cluster with PVE 6.0. We also offer CephFS 
'slow but large' storage with our clusters, on which people can create 
images for backupservers. However, it seems that in PVE 6.0, we can no 
longer use CephFS for images?



Can anybody confirm (and explain?) or am I looking in the wrong
direction?


--
Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208




[PVE-User] Images on CephFS?

2019-09-19 Thread Mark Schouten

Hi,

We just built our latest cluster with PVE 6.0. We also offer CephFS 'slow but 
large' storage with our clusters, on which people can create images for 
backupservers. However, it seems that in PVE 6.0, we can no longer use CephFS 
for images? 


Can anybody confirm (and explain?) or am I looking in the wrong direction?

--
Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



Re: [PVE-User] pve-firewall, clustering and HA gone bad

2019-06-25 Thread Mark Schouten

np!

--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



- Original Message -


From: Thomas Lamprecht (t.lampre...@proxmox.com)
Date: 25-06-2019 10:31
To: PVE User List (pve-user@pve.proxmox.com), Mark Schouten (m...@tuxis.nl)
Subject: Re: [PVE-User] pve-firewall, clustering and HA gone bad


On 6/25/19 9:44 AM, Thomas Lamprecht wrote:
> And as also said (see quote below), for more specific hints I need the raw
> logs, unmerged and as untouched as possible.

It may just be that I did not see the mail in my inbox, so it looks like
you already sent it to me; sorry about missing it.





Re: [PVE-User] pve-firewall, clustering and HA gone bad

2019-06-25 Thread Mark Schouten
On Thu, Jun 13, 2019 at 12:34:28PM +0200, Thomas Lamprecht wrote:
> > 2: ha-manager should not be able to start the VM's when they are running
> > elsewhere
> 
> This can only happen if fencing fails, and that fencing works is always
> a base assumption we must take (as else no HA is possible at all).
> So it would be interesting why fencing did not worked here (see below
> for the reason I could not determine that yet as I did not have your logs
> at hand)

Reading the emails from that specific night, I saw this message:

 The node 'proxmox01' failed and needs manual intervention.

 The PVE HA manager tries to fence it and recover the
 configured HA resources to a healthy node if possible.

 Current fence status: SUCCEED
 fencing: acknowledged - got agent lock for node 'proxmox01'

This seems to suggest that the cluster is confident that the fencing
succeeded. How does it determine that?

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] pve-firewall, clustering and HA gone bad

2019-06-21 Thread Mark Schouten
On Fri, Jun 14, 2019 at 08:25:07AM +0200, Thomas Lamprecht wrote:
> Would work, but it more intrusive than it needs to be. What I would do is:
> 
> 1. Do an omping check[0] with all the new addresses you plan to replace the
>hostnames from ring0_addr *first*, as this shows if the cluster can talk
>with each other through those addresses at all. You can also get the
>currently used IPs by using the following command (maybe grep for 'ip')
># corosync-cmapctl runtime.member
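
For reference, the omping check from the docs looks roughly like this (node
names are placeholders, run simultaneously on all nodes):

  omping -c 10000 -i 0.001 -F -q proxmox01 proxmox02 proxmox03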

I'll schedule this procedure for the next maintenance window we have for
this customer. They'll be thrilled to know I'm going to work on it ;)

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] pve-firewall, clustering and HA gone bad

2019-06-21 Thread Mark Schouten
On Thu, Jun 13, 2019 at 01:30:15PM +0200, Mark Schouten wrote:
> > From a quick look at the code: That seems true and is definitively the
> > wrong behavior :/
> 
> Ok, I'll file a bug for that.

https://bugzilla.proxmox.com/show_bug.cgi?id=2245

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] pve-firewall, clustering and HA gone bad

2019-06-13 Thread Mark Schouten
On Thu, Jun 13, 2019 at 12:34:28PM +0200, Thomas Lamprecht wrote:
> Hi,
> Do your ringX_addr in corosync.conf use the hostnames or the resolved
> addresses? As with nodes added on newer PVE (at least 5.1, IIRC) we try
> to resolve the nodename and use the resolved address to exactly avoid
> such issues. If it don't uses that I recommend changing that instead
> of the all nodes in al /etc/hosts approach.

It has the hostnames. It's a cluster upgraded from 4.2 up to current.
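
For reference, a nodelist entry with a resolved address instead of a hostname
would look roughly like this in /etc/pve/corosync.conf (values are examples;
remember to bump config_version in the totem section when editing):

  node {
    name: proxmox01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.11
  }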

> > It seems that pve-firewall tries to detect localnet, but failed to do so
> > correct. localnet should be 192.168.1.0/24, but instead it detected the
> > IPv6 addresses. Which isn't entirely incorrect, but IPv6 is not used for
> > clustering, so I should open IPv4 in the firewall not IPv6. So it seems
> > like nameresolving is used to define localnat, and not what corosync is
> > actually using.
> 
> From a quick look at the code: That seems true and is definitively the
> wrong behavior :/

Ok, I'll file a bug for that.

> > 2: ha-manager should not be able to start the VM's when they are running
> > elsewhere
> 
> This can only happen if fencing fails, and that fencing works is always
> a base assumption we must take (as else no HA is possible at all).
> So it would be interesting why fencing did not worked here (see below
> for the reason I could not determine that yet as I did not have your logs
> at hand)

We must indeed make assumptions. Are there ways we can assume better? :)

> The list trims attachments, could you please send them directly to my
> address? I'd really like to see those.

Attached again, so you should receive it now.

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


[PVE-User] pve-firewall, clustering and HA gone bad

2019-06-13 Thread Mark Schouten

Hi,

Let me start off by saying that I am not finger-pointing at anyone,
merely looking for how to prevent sh*t from happening again!

Last month I emailed about issues with pve-firewall. I was told that
there were fixes in the newest packages, so this maintenance window I started
by upgrading pve-firewall before anything else. That went well for
just about all the clusters I upgraded.

Then I ended up at the last (biggest, 9 nodes) cluster, and stuff got
pretty ugly. Here's what happened:

1: I enabled IPv6 on the cluster interfaces in the last month. I've done
this before on other clusters, nothing special there. So I added the
IPv6 addresses on the interfaces and added all nodes in all the
/etc/hosts files. I've had issues with not being able to start clusters
because hostnames could not resolve, so all my nodes in all my clusters
have all the hostnames and addresses of their respective peers in
/etc/hosts.
2: I upgraded pve-firewall on all the nodes, no issues there
3: I started dist-upgrading on proxmox01 and proxmox02, restarting
pve-firewall with `pve-firewall restart` because of [1], and noticed that
pvecm status did not list any of the other nodes in the list of peers. So we
had:
  proxmox01: proxmox01
  proxmox02: proxmox02
  proxmox03-proxmox09: proxmox03-proxmox09

Obviously, /etc/pve was readonly on proxmox01 and proxmox02, since they
had no quorum.
4: HA is heavily used on this cluster. Just about all VM's have it
enabled. So since 'I changed nothing', I restarted pve-cluster a few
times on the broken nodes. Nothing helped.
5: I then restarted pve-cluster on proxmox03, and all of a sudden,
proxmox01 looked happy again.
6: In the meantime, ha-manager had kicked in and started VMs on other
nodes, but did not actually let proxmox01 fence itself, but I did not
notice this.
7: I tried restarting pve-cluster on yet another node, and then all
nodes except proxmox01 and proxmox02 fenced themselves, rebooting
alltogether.

After rebooting, the cluster was not completely happy, because the
firewall was still confused. So why was this firewall confused? Nothing
changed, remember? Well, nothing except bullet 1.

It seems that pve-firewall tries to detect localnet, but failed to do so
correctly. localnet should be 192.168.1.0/24, but instead it detected the
IPv6 addresses. Which isn't entirely incorrect, but IPv6 is not used for
clustering, so I should open IPv4 in the firewall, not IPv6. So it seems
like name resolution is used to define localnet, and not what corosync is
actually using.

I fixed the current situation by adding the correct [ALIASES] in
cluster.fw, and now all is well (except for the broken VM's that were
running on two nodes and have broken images).
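
For the record, the override in /etc/pve/firewall/cluster.fw was something
like this (a sketch, using the subnet mentioned above):

  [ALIASES]
  local_network 192.168.1.0/24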

So I think there are two issues here:
1: pve-firewall should better detect the IP's used for essential
services
2: ha-manager should not be able to start the VM's when they are running
elsewhere

Obviously, this is a faulty situation which causes unexpected results.
Again, I'm not pointing fingers, I would like to discuss how we can
improve these kinds of faulty situations.

In the attachment, you can find a log with dpkg, pmxcfs, pve-ha-(lc)rm
from all nodes. So maybe someone can better assess what went wrong.


[1]: https://bugzilla.proxmox.com/show_bug.cgi?id=1823

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


[PVE-User] Defaults for disk-io-limits

2019-06-06 Thread Mark Schouten
Hi,

Is it possible to configure defaults for disk I/O limits somewhere? I
cannot find it in the documentation, but the term 'Default' in the GUI
suggests a default can be set somewhere :)

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


[PVE-User] pvesm export

2019-05-21 Thread Mark Schouten

Hi,

I was playing around with pvesm export, but I cannot find it for Ceph backed 
images. Is that unsupported? I'm thinking it might be more efficient than 
vzdump ...

--
Mark Schouten 
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208 
 



Re: [PVE-User] Ceph and firewalling

2019-05-09 Thread Mark Schouten
On Thu, May 09, 2019 at 11:10:46AM +0200, Thomas Lamprecht wrote:
> The issue you ran into was the case where pve-cluster (pmxcfs) was
> upgraded and restarted and pve-firewall thought that the user deleted
> all rules and thus flushed them, is already fixed for most common cases
> (package upgrade and normal restart of pve-cluster), so this shouldn't
> be an issue with pve-firewall in version 3.0-20

Cool, thanks. So I should upgrade the pve-firewall package before
pve-cluster on the remaining clusters to upgrade. And after 3.0-20 this
issue should be gone.

> But, Stoiko offered to re-take a look at this and try doing additional
> error handling if the fw config read fails (as in pmxcfs not mounted)
> and keep the current rules un-touched in this case (i.e., no remove,
> no add) or maybe also moving the management rules above the conntrack,
> but we need to take a close look here to ensure this has no non-intended
> side effects.

Great, thanks.

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] Ceph and firewalling

2019-05-09 Thread Mark Schouten

https://bugzilla.proxmox.com/show_bug.cgi?id=2206

--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



- Original Message -


From: Mark Schouten (m...@tuxis.nl)
Date: 09-05-2019 10:11
To: PVE User List (pve-user@pve.proxmox.com)
Subject: Re: [PVE-User] Ceph and firewalling


On Thu, May 09, 2019 at 07:53:50AM +0200, Alexandre DERUMIER wrote:
> But to really be sure to not have the problem anymore :
>
> add in /etc/sysctl.conf
>
> net.netfilter.nf_conntrack_tcp_be_liberal = 1

This is very useful info. I'll create a bug for Proxmox, so they can
consider it to set this in pve-firewall, which seems a good default if
you ask me.

--
Mark Schouten     | Tuxis B.V.
KvK: 74698818     | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] Ceph and firewalling

2019-05-09 Thread Mark Schouten
On Thu, May 09, 2019 at 07:53:50AM +0200, Alexandre DERUMIER wrote:
> But to really be sure to not have the problem anymore :
> 
> add in /etc/sysctl.conf
> 
> net.netfilter.nf_conntrack_tcp_be_liberal = 1

This is very useful info. I'll create a bug for Proxmox, so they can
consider it to set this in pve-firewall, which seems a good default if
you ask me.
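
It can also be applied at runtime, without waiting for a reboot:

  sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1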

-- 
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


[PVE-User] Ceph and firewalling

2019-05-07 Thread Mark Schouten

Hi,

While upgrading two clusters tonight, it seems that the Ceph cluster got
confused by tonight's updates. I think it has something to do with the
firewall and connection tracking. A restart of ceph-mon on a node seems to help.

I *think* the issue is that when pve-firewall is upgraded, the conntrack table
is emptied, and existing connections are caught by the 'ctstate
INVALID' rule. But it is kind of hard to reproduce.

If you ask me, the rules for the 'management' ipset should be applied before 
the conntrack-rules, or am I setting things up incorrectly?


The following packages are updated in this run:
root@proxmox01:~# grep upgrade /var/log/dpkg.log
2019-05-08 02:09:46 upgrade base-files:amd64 9.9+deb9u8 9.9+deb9u9
2019-05-08 02:09:46 upgrade ceph-mds:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:47 upgrade ceph-mgr:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:48 upgrade ceph-mon:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:49 upgrade ceph:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:49 upgrade ceph-osd:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:51 upgrade ceph-base:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:52 upgrade ceph-common:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:54 upgrade librbd1:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:54 upgrade python-rados:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:54 upgrade python-rbd:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:54 upgrade python-rgw:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:54 upgrade python-ceph:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:54 upgrade python-cephfs:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:54 upgrade libcephfs2:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:54 upgrade librgw2:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:55 upgrade libradosstriper1:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:55 upgrade librados2:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:55 upgrade ceph-fuse:amd64 12.2.11-pve1 12.2.12-pve1
2019-05-08 02:09:56 upgrade libhttp-daemon-perl:all 6.01-1 6.01-2
2019-05-08 02:09:56 upgrade libjs-jquery:all 3.1.1-2 3.1.1-2+deb9u1
2019-05-08 02:09:56 upgrade libmariadbclient18:amd64 10.1.37-0+deb9u1 
10.1.38-0+deb9u1
2019-05-08 02:09:56 upgrade libpng16-16:amd64 1.6.28-1 1.6.28-1+deb9u1
2019-05-08 02:09:56 upgrade libpq5:amd64 9.6.11-0+deb9u1 9.6.12-0+deb9u1
2019-05-08 02:09:56 upgrade rsync:amd64 3.1.2-1+deb9u1 3.1.2-1+deb9u2
2019-05-08 02:09:56 upgrade pve-cluster:amd64 5.0-33 5.0-36
2019-05-08 02:09:56 upgrade libpve-storage-perl:all 5.0-39 5.0-41
2019-05-08 02:09:57 upgrade pve-firewall:amd64 3.0-18 3.0-20
2019-05-08 02:09:57 upgrade pve-ha-manager:amd64 2.0-8 2.0-9
2019-05-08 02:09:57 upgrade pve-qemu-kvm:amd64 2.12.1-2 2.12.1-3
2019-05-08 02:09:59 upgrade pve-edk2-firmware:all 1.20181023-1 1.20190312-1
2019-05-08 02:10:00 upgrade qemu-server:amd64 5.0-47 5.0-50
2019-05-08 02:10:00 upgrade libpve-common-perl:all 5.0-47 5.0-51
2019-05-08 02:10:00 upgrade libpve-access-control:amd64 5.1-3 5.1-8
2019-05-08 02:10:00 upgrade libpve-http-server-perl:all 2.0-12 2.0-13
2019-05-08 02:10:00 upgrade libssh2-1:amd64 1.7.0-1 1.7.0-1+deb9u1
2019-05-08 02:10:00 upgrade linux-libc-dev:amd64 4.9.144-3.1 4.9.168-1
2019-05-08 02:10:08 upgrade pve-kernel-4.15:all 5.3-3 5.4-1
2019-05-08 02:10:08 upgrade postfix-sqlite:amd64 3.1.9-0+deb9u2 3.1.12-0+deb9u1
2019-05-08 02:10:08 upgrade postfix:amd64 3.1.9-0+deb9u2 3.1.12-0+deb9u1
2019-05-08 02:10:10 upgrade proxmox-widget-toolkit:all 1.0-23 1.0-26
2019-05-08 02:10:10 upgrade pve-container:all 2.0-35 2.0-37
2019-05-08 02:10:10 upgrade pve-docs:all 5.3-3 5.4-2
2019-05-08 02:10:11 upgrade pve-i18n:all 1.0-9 1.1-4
2019-05-08 02:10:11 upgrade pve-xtermjs:amd64 3.10.1-2 3.12.0-1
2019-05-08 02:10:11 upgrade pve-manager:amd64 5.3-11 5.4-5
2019-05-08 02:10:11 upgrade proxmox-ve:all 5.3-1 5.4-1
2019-05-08 02:10:11 upgrade pve-kernel-4.15.18-12-pve:amd64 4.15.18-35 
4.15.18-36
2019-05-08 02:10:19 upgrade python-cryptography:amd64 1.7.1-3 1.7.1-3+deb9u1
2019-05-08 02:10:19 upgrade unzip:amd64 6.0-21 6.0-21+deb9u1
2019-05-08 02:10:19 upgrade ruby2.3-dev:amd64 2.3.3-1+deb9u4 2.3.3-1+deb9u6
2019-05-08 02:10:19 upgrade libruby2.3:amd64 2.3.3-1+deb9u4 2.3.3-1+deb9u6
2019-05-08 02:10:20 upgrade publicsuffix:all 20181003.1334-0+deb9u1 
20190415.1030-0+deb9u1
2019-05-08 02:10:20 upgrade ruby2.3:amd64 2.3.3-1+deb9u4 2.3.3-1+deb9u6


--
Mark Schouten 
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208 
 



Re: [PVE-User] Proxmox 5.2, CEPH 12.2.12: still CephFS looks like jewel

2019-05-06 Thread Mark Schouten
On Mon, May 06, 2019 at 03:02:36PM +0700, Igor Podlesny wrote:
> On Mon, 6 May 2019 at 14:33, Mark Schouten  wrote:
> > `ceph features` will list all connected daemons and clients.
> > Probably, there is still a client connected that runs a Jewel version.
> 
> Can't be anything "outside" -- it's Proxmox 5.2 self-contained CEPH
> install only. No Jewel hence.

I'm not saying 'outside', I'm saying 'a client'.

> > Maybe a Kernel CephFS mount somewhere that needs a reboot?
> 
> As I described in the very fist message this warning occurs when I
> enable CephFS on a single node of that cluster.

This command is cluster-wide, the node should not matter. But more
importantly, does `ceph features` confirm that there are no jewel
clients connected?

[Also, for someone looking for help, your responses seem somewhat rude]

-- 
Mark Schouten  | Tuxis B.V.
KvK: 74698818  | http://www.tuxis.nl/
T: +31 318 200208 | i...@tuxis.nl


Re: [PVE-User] Proxmox 5.2, CEPH 12.2.12: still CephFS looks like jewel

2019-05-06 Thread Mark Schouten

Hi,

`ceph features` will list all connected daemons and clients. Probably, there is 
still a client connected that runs a Jewel version. Maybe a Kernel CephFS mount 
somewhere that needs a reboot?

--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



- Original Message -


From: Igor Podlesny (pve-u...@poige.ru)
Date: 04-05-2019 19:39
To: PVE User List (pve-user@pve.proxmox.com)
Subject: [PVE-User] Proxmox 5.2, CEPH 12.2.12: still CephFS looks like jewel


root@pve-40:~# ceph osd set-require-min-compat-client luminous
set require_min_compat_client to luminous

After enabling CephFS on a single node:

root@pve-40:~# ceph osd set-require-min-compat-client luminous
Error EPERM: cannot set require_min_compat_client to luminous: 1
connected client(s) look like jewel (missing 0x800); add
--yes-i-really-mean-it to do it anyway

CephFS seems to work just fine, but why this warning occurs at all?

libcephfs2                           12.2.12-pve1

--
End of message. Next message?


Re: [PVE-User] NFS Storage issue

2019-04-25 Thread Mark Schouten

Might also be the number of nfsd threads started on your Synology.

--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



- Original Message -


From: Gerald Brandt (g...@majentis.com)
Date: 25-04-2019 14:51
To: pve-user@pve.proxmox.com
Subject: Re: [PVE-User] NFS Storage issue


On 2019-04-25 7:34 a.m., Gianni Milo wrote:
> Hello
>
> I have a Synology with High Availability serving NFS storage to Proxmox.
>> A couple of months ago, this started happening:
>>
>> Apr 24 12:32:51 proxmox-1 pvestatd[3298]: unable to activate storage
>> 'NAS' - directory '/mnt/pve/NAS' does not exist or is unreachable
>> Apr 24 13:08:00 proxmox-1 pvestatd[3298]: unable to activate storage
>> 'NAS' - directory '/mnt/pve/NAS' does not exist or is unreachable
>>
>>
> It appears that the NFS storage target was unreachable during that time...
>
>
>> It affecting backups and VM speeds. Any idea what is causing this?
>
> I would guess that the network link used for accessing the NFS target is
> getting saturated. This sometimes can be caused by running backup jobs,
> and/or other tasks like VMs demanding high i/o to that destination.
>
> There are no errors in the Synology logs, and the storage stays mounted and
>> accessible.
>>
> You mean that it stays accessible after the backups are finished (or
> failed) and after the i/o operations return to the normal levels ?
> Do you recall if there was any of the above happening during that time ?
>
>
>> Any ideas what's going on?
>>
> I would set bandwidth limits either on the storage target or on the backup
> jobs. See documentation for details on how to achieve this.
> Even better, separate VMs from the backups if that's possible (i.e use
> different storage/network links).
>
> G.
> ___


Thanks for the reply. Although it does happen during backups, causing
serious issues that require Proxmox server restart, it also happens
during the day. I don't see any saturation issues during the day, but
I'll start monitoring that.

So you're saying to set network bandwidth limits on the storage network
(I do keep a separate network for storage). I guess upgrading the
synology's with something that has 10GE would help as well.

Thanks,

Gerald




Re: [PVE-User] API users

2019-04-24 Thread Mark Schouten

https://bugzilla.proxmox.com/show_bug.cgi?id=2187

Thanks.

--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



- Original Message -


From: Thomas Lamprecht (t.lampre...@proxmox.com)
Date: 24-04-2019 13:38
To: Mark Schouten (m...@tuxis.nl), Dominik Csapak (d.csa...@proxmox.com), PVE 
User List (pve-user@pve.proxmox.com)
Subject: Re: [PVE-User] API users


Am 4/24/19 um 1:26 PM schrieb Mark Schouten:
>
> The goal would indeed be to be able to limit the less secured users to 
> specific source addresses. At this moment, we managed to limit API-calls by 
> looking for the X-requested-by header, combined with the API URL with an 
> exclude for novnc, but the user is still able to login to the web frontend.
>
> API-users and API-client-addresses would be the best fix, if you ask me.

This sounds legitimate and would be the easiest solution for this providing a
"real" fix, and at the moment I cannot think about an easy workaround achieving
something like this? Could you please open an "enhancement" request at:
https://bugzilla.proxmox.com/ it probably won't be seen as too high priority,
but it shouldn't be too hard either, once one really thinks about what makes sense.
(black/whitelist? per realm or per user, ...?)

cheers,
Thomas

> But since the GUI just uses the API, I guess that is more difficult than 
> you'd expect. :/
>
> --
>
> Mark Schouten 
>
> Tuxis, Ede, https://www.tuxis.nl
>
> T: +31 318 200208 
>  
>
>
>
> - Original Message -
>
>
> From: Thomas Lamprecht (t.lampre...@proxmox.com)
> Date: 24-04-2019 12:34
> To: PVE User List (pve-user@pve.proxmox.com), Mark Schouten (m...@tuxis.nl), 
> Dominik Csapak (d.csa...@proxmox.com)
> Subject: Re: [PVE-User] API users
>
>
> Am 4/24/19 um 12:19 PM schrieb Mark Schouten:
>>
>> Hi,
>>
>> Sorry, that doesn't answer my question. I want users that have 2FA to be 
>> able to use the GUI, and I want to be able to disallow the GUI for certain 
>> users. I know that the GUI just uses the API as a backend.
>
> That's not possible, what's your use case for this? If one has API access he 
> can do everything you can do through WebUI anyway?
>
> And even _if_ we would add some sort of "WebUI" lockout, the API user could 
> just setup pve-manager's WebUI part to point at the API backend endpoint and 
> use that one.
> Or the user could just create a own gui? So I think this is not really 
> dooable and does not fits at all with REST APIs... You just can't control the 
> frontend there...
>
> If you want to make internal API users more secure you can choose a random, 
> very big (e.g. 64 chars) password for them and be done, nobody will guess 
> that and the user name in a realistic time with the 3 seconds block on wrong 
> login?
>
> What could _maybe_ make sense is to allow to restrict logins from certain 
> (sub)networks only, so that internal users are not exposed to less trusted 
> networks...
>
>>
>> By 'do not allow access to /', do you mean for the user, or at a HTTP-level? 
>> Because at HTTP-level, that would completely disable the GUI, which you 
>> obviously don't want. Or do you mean in the permissions for the user?
>>
>> Thanks,
>>
>> --
>>
>> Mark Schouten 
>>
>> Tuxis, Ede, https://www.tuxis.nl
>>
>> T: +31 318 200208
>>  
>>
>>
>>
>> - Original Message -
>>
>>
>> From: Dominik Csapak (d.csa...@proxmox.com)
>> Date: 24-04-2019 12:08
>> To: PVE User List (pve-user@pve.proxmox.com), Mark Schouten (m...@tuxis.nl)
>> Subject: Re: [PVE-User] API users
>>
>>
>> On 4/24/19 11:54 AM, Mark Schouten wrote:
>>>
>>> Hi,
>>>
>>> we want all users to authenticate using 2FA, but we also want to use the 
>>> API externally, and 2FA with the API is quite difficult.
>>>
>>> In the latest version, you can enable 2FA per user, but you cannot disable 
>>> GUI access for e.g. API users. So a API user can just login without 2FA. Is 
>>> there a way to enable 2FA, and disable the GUI for users without 2FA? 
>>> Perhaps by revoking a rolepermission?
>>>
>>
>> Hi,
>>
>> The GUI and TFA are two independent things. The GUI uses the API in the
>> same way as any external api client would use it (via ajax calls).
>> If you want to disable just the gui, simply do not allow access to '/'
>> via a reverse proxy or something similar.
>>
>> If you want to enforce TFA, you have to enable it on the realm, then it
>> is enforced for all users of that realm
>>
>> The per user TFA is to enable single users to enhance the security of
>> their account, not to enforce using them.
>>
>> hope this answers your question
>>
>>
>>
>> ___
>> pve-user mailing list
>> pve-user@pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
>
>
>





Re: [PVE-User] API users

2019-04-24 Thread Mark Schouten

The goal would indeed be to be able to limit the less secured users to specific 
source addresses. At this moment, we managed to limit API-calls by looking for 
the X-requested-by header, combined with the API URL with an exclude for novnc, 
but the user is still able to log in to the web frontend.

API-users and API-client-addresses would be the best fix, if you ask me. But 
since the GUI just uses the API, I guess that is more difficult than you'd 
expect. :/

--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



- Original Message -


From: Thomas Lamprecht (t.lampre...@proxmox.com)
Date: 24-04-2019 12:34
To: PVE User List (pve-user@pve.proxmox.com), Mark Schouten (m...@tuxis.nl), 
Dominik Csapak (d.csa...@proxmox.com)
Subject: Re: [PVE-User] API users


Am 4/24/19 um 12:19 PM schrieb Mark Schouten:
>
> Hi,
>
> Sorry, that doesn't answer my question. I want users that have 2FA to be able 
> to use the GUI, and I want to be able to disallow the GUI for certain users. 
> I know that the GUI just uses the API as a backend.

That's not possible, what's your use case for this? If one has API access he 
can do everything you can do through WebUI anyway?

And even _if_ we would add some sort of "WebUI" lockout, the API user could 
just setup pve-manager's WebUI part to point at the API backend endpoint and 
use that one.
Or the user could just create their own GUI? So I think this is not really doable
and does not fit at all with REST APIs... You just can't control the frontend
there...

If you want to make internal API users more secure you can choose a random, 
very big (e.g. 64 chars) password for them and be done, nobody will guess that 
and the user name in a realistic time with the 3 seconds block on wrong login?

What could _maybe_ make sense is to allow to restrict logins from certain 
(sub)networks only, so that internal users are not exposed to less trusted 
networks...

>
> By 'do not allow access to /', do you mean for the user, or at a HTTP-level? 
> Because at HTTP-level, that would completely disable the GUI, which you 
> obviously don't want. Or do you mean in the permissions for the user?
>
> Thanks,
>
> --
>
> Mark Schouten 
>
> Tuxis, Ede, https://www.tuxis.nl
>
> T: +31 318 200208 
>  
>
>
>
> - Original Message -
>
>
> From: Dominik Csapak (d.csa...@proxmox.com)
> Date: 24-04-2019 12:08
> To: PVE User List (pve-user@pve.proxmox.com), Mark Schouten (m...@tuxis.nl)
> Subject: Re: [PVE-User] API users
>
>
> On 4/24/19 11:54 AM, Mark Schouten wrote:
>>
>> Hi,
>>
>> we want all users to authenticate using 2FA, but we also want to use the API 
>> externally, and 2FA with the API is quite difficult.
>>
>> In the latest version, you can enable 2FA per user, but you cannot disable 
>> GUI access for e.g. API users. So a API user can just login without 2FA. Is 
>> there a way to enable 2FA, and disable the GUI for users without 2FA? 
>> Perhaps by revoking a rolepermission?
>>
>
> Hi,
>
> The GUI and TFA are two independent things. The GUI uses the API in the
> same way as any external api client would use it (via ajax calls).
> If you want to disable just the gui, simply do not allow access to '/'
> via a reverse proxy or something similar.
>
> If you want to enforce TFA, you have to enable it on the realm, then it
> is enforced for all users of that realm
>
> The per user TFA is to enable single users to enhance the security of
> their account, not to enforce using them.
>
> hope this answers your question
>
>
>
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>





Re: [PVE-User] API users

2019-04-24 Thread Mark Schouten

Hi,

Sorry, that doesn't answer my question. I want users that have 2FA to be able 
to use the GUI, and I want to be able to disallow the GUI for certain users. I 
know that the GUI just uses the API as a backend.

By 'do not allow access to /', do you mean for the user, or at a HTTP-level? 
Because at HTTP-level, that would completely disable the GUI, which you 
obviously don't want. Or do you mean in the permissions for the user?

Thanks,

--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



- Original Message -


From: Dominik Csapak (d.csa...@proxmox.com)
Date: 24-04-2019 12:08
To: PVE User List (pve-user@pve.proxmox.com), Mark Schouten (m...@tuxis.nl)
Subject: Re: [PVE-User] API users


On 4/24/19 11:54 AM, Mark Schouten wrote:
>
> Hi,
>
> we want all users to authenticate using 2FA, but we also want to use the API 
> externally, and 2FA with the API is quite difficult.
>
> In the latest version, you can enable 2FA per user, but you cannot disable 
> GUI access for e.g. API users. So a API user can just login without 2FA. Is 
> there a way to enable 2FA, and disable the GUI for users without 2FA? Perhaps 
> by revoking a rolepermission?
>

Hi,

The GUI and TFA are two independent things. The GUI uses the API in the
same way as any external api client would use it (via ajax calls).
If you want to disable just the gui, simply do not allow access to '/'
via a reverse proxy or something similar.

If you want to enforce TFA, you have to enable it on the realm, then it
is enforced for all users of that realm

The per user TFA is to enable single users to enhance the security of
their account, not to enforce using them.

hope this answers your question





[PVE-User] API users

2019-04-24 Thread Mark Schouten

Hi,

we want all users to authenticate using 2FA, but we also want to use the API 
externally, and 2FA with the API is quite difficult.

In the latest version, you can enable 2FA per user, but you cannot disable GUI 
access for e.g. API users. So an API user can just log in without 2FA. Is there a
way to enable 2FA, and disable the GUI for users without 2FA? Perhaps by
revoking a role permission?

--
Mark Schouten 
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208 
 



Re: [PVE-User] Request to update wiki, risk of dataloss

2019-04-12 Thread Mark Schouten
Great! Thanks.

Mark Schouten

> Op 12 apr. 2019 om 17:49 heeft Thomas Lamprecht  het 
> volgende geschreven:
> 
> Hi,
> 
>> On 4/12/19 4:41 PM, Mark Schouten wrote:
>> Hi,
>> 
>> I'm in the process of upgrading some older 4.x clusters with Ceph to current 
>> versions. All goes well, but we hit a bug that is understandable, but 
>> undocumented. To prevent others from hitting it, I think it would be wise to 
>> document to issue.
>> 
>> It is when you already upgraded Ceph to Luminous and not yet Proxmox to 5.x. 
>> Resizing a disk makes Proxmox request the current disk size. Because 
>> Luminous says 'GiB' or 'MiB' and older versions say 'GB' or 'MB', $size = 
>> 0+$increaseresizesize. A customer of mine resized their disk from 50GB to 
>> 20GB because of this issue.
>> 
>> So maybe a warning on https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous 
>> and/or https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 is possible?
>> 
> 
> It added one here:
> https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous#Check_cluster_status_and_adjust_settings
> at the bottom, hope this makes it clearer.
> 
> Thanks for the suggestion!
> 
> cheers,
> Thomas



[PVE-User] Request to update wiki, risk of dataloss

2019-04-12 Thread Mark Schouten

Hi,

I'm in the process of upgrading some older 4.x clusters with Ceph to current 
versions. All goes well, but we hit a bug that is understandable, but 
undocumented. To prevent others from hitting it, I think it would be wise to 
document to issue.

It is when you already upgraded Ceph to Luminous and not yet Proxmox to 5.x. 
Resizing a disk makes Proxmox request the current disk size. Because Luminous 
says 'GiB' or 'MiB' and older versions say 'GB' or 'MB', $size = 
0+$increaseresizesize. A customer of mine resized their disk from 50GB to 20GB 
because of this issue.

So maybe a warning on https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous 
and/or https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 is possible?

--
Mark Schouten 
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208 
 



Re: [PVE-User] Intel Corporation Gigabit ET2 Quad Port Server Adapter

2019-04-09 Thread Mark Schouten

I think udev is making them 'disappear'.

Do you have anything in /etc/udev/rules.d/70-persistent-net.rules? That sometimes
renames interfaces..

--

Mark Schouten 

Tuxis, Ede, https://www.tuxis.nl

T: +31 318 200208 
 



- Original Message -


From: David Lawley (da...@upilab.com)
Date: 09-04-2019 13:56
To: pve-user@pve.proxmox.com
Subject: [PVE-User] Intel Corporation Gigabit ET2 Quad Port Server Adapter


Intel Corporation Gigabit ET2 Quad Port Server Adapter


Anyone using any of these?

I have 2 per host

Currently running  PVE3.4, but while rebuilding the server with 5.3, the
installation was only finding 6 of the 8 ports.

Due to some other issues at the time I had to push the old PVE disks
back and abort the install.

My next step is to try a stock Debian install and see what happens in
the next opportunity I get.

Just wondering if anyone been down this path might have a word to add?







Re: [PVE-User] sequential node backup

2019-02-25 Thread Mark Schouten
Hi!


On Thu, Feb 21, 2019 at 08:14:43PM +, Roberto Alvarado wrote:
> If someone can help me with this…. I’m looking for a way to do a sequential 
> backup of all my proxmox nodes, for example for the node1, node2, node3, 
> start the backup process with the node1 when this node finish the backup, now 
> start with the node2 , and the same when this node finish the backup process 
> start with the node3.

I'm running PMRB, a self-built script you can find here:
https://gitlab.tuxis.nl/mark/pmrb

That runs backups every night for VMs that are running and that haven't
had a backup for 7 days. It queues all the VMs needing a backup and
creates a lockfile on the cluster filesystem. So you can run pmrb on all
nodes concurrently, but only one backup will run at a time. It will
also stop starting backups after a set time, to prevent load issues
during the day.

-- 
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


[PVE-User] Loadbalancing SPICE

2019-02-19 Thread Mark Schouten
Hi,

We deliver customers their own Proxmox cluster and decided to do that
IPv6-only, behind an IPv4-enabled HAProxy setup. This all works well,
the webinterface functions as expected, NoVNC nicely switches along with
migrations, not an issue. Until a customer wanted to use SPICE.

I've been looking into SPICE and what I can configure for it, and it
seems that it is like HTTP, but not suitable for vhosting. But I'm no
expert on SPICE, and I expect that there are people on this list that
know much more about it.

Has anyone succeeded in loadbalancing SPICE, or does anybody know what I
should change in Proxmox settings to make this possible?

Thanks!

-- 
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


[PVE-User] Migrating from LVM to ZFS

2018-11-27 Thread Mark Schouten

Hi,

one of my colleagues mistakenly installed a Proxmox node with LVM instead of 
ZFS, and I want to fix that without reinstalling. I tested the following steps,
which seem to work as they should. But maybe somebody can think of
something that I forgot. So I thought I'd share it here.

Feel free to comment!

/dev/sdb is the unused device, /dev/sda is the currently in-use device.


root@proxmoxlvmzfs:~# apt install parted
root@proxmoxlvmzfs:~# parted -s /dev/sdb mktable gpt
root@proxmoxlvmzfs:~# parted -s /dev/sdb mkpart extended 34s 2047s
root@proxmoxlvmzfs:~# parted -s /dev/sdb mkpart extended 2048s 100%
root@proxmoxlvmzfs:~# parted -s /dev/sdb set 1 bios_grub on

root@proxmoxlvmzfs:~# zpool create -f rpool /dev/sdb2
root@proxmoxlvmzfs:~# zfs create rpool/ROOT
root@proxmoxlvmzfs:~# zfs create rpool/ROOT/pve-1
root@proxmoxlvmzfs:~# zfs create rpool/data
root@proxmoxlvmzfs:~# zfs create rpool/swap -V 8G
root@proxmoxlvmzfs:~# mkswap /dev/zvol/rpool/swap 
root@proxmoxlvmzfs:~# cd /rpool/ROOT/pve-1
root@proxmoxlvmzfs:/rpool/ROOT/pve-1# rsync -avx / ./
root@proxmoxlvmzfs:/rpool/ROOT/pve-1# mount --bind /proc proc
root@proxmoxlvmzfs:/rpool/ROOT/pve-1# mount --bind /dev dev
root@proxmoxlvmzfs:/rpool/ROOT/pve-1# mount --bind /sys sys
root@proxmoxlvmzfs:/rpool/ROOT/pve-1# swapoff -a
root@proxmoxlvmzfs:/rpool/ROOT/pve-1# chroot .
 fstab fix 
Change swap partition to /dev/zvol/rpool/swap
Remove / mount entry
 fstab fix 
 grub fix 
In /etc/default/grub, set:
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
 grub fix 
root@proxmoxlvmzfs:/# zpool set bootfs=rpool/ROOT/pve-1 rpool
root@proxmoxlvmzfs:/# grub-install /dev/sda
root@proxmoxlvmzfs:/# grub-install /dev/sdb
root@proxmoxlvmzfs:/# update-grub
root@proxmoxlvmzfs:/# zfs set mountpoint=/ rpool/ROOT/pve-1

Reboot

root@proxmoxlvmzfs:~# lvchange -an pve
root@proxmoxlvmzfs:~# sgdisk /dev/sdb -R /dev/sda
root@proxmoxlvmzfs:~# sgdisk -G /dev/sda
root@proxmoxlvmzfs:~# zpool attach rpool /dev/sdb2 /dev/sda2
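
After the attach, ZFS resilvers the new mirror member in the background;
progress can be watched with:

root@proxmoxlvmzfs:~# zpool status rpool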

--
Mark Schouten 
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208 
  



Re: [PVE-User] Cluster network via directly connected interfaces?

2018-11-22 Thread Mark Schouten
Other than limited throughput, I can’t think of a problem. But limited 
throughput might cause unforeseen situations.

Mark Schouten

> Op 22 nov. 2018 om 19:30 heeft Frank Thommen 
>  het volgende geschreven:
> 
> Please excuse, if this is too basic, but after reading 
> https://pve.proxmox.com/wiki/Cluster_Manager I wondered, if the 
> cluster/corosync network could be built by directly connected network 
> interfaces.  I.e not like this:
> 
> +---+
> | pve01 |--+
> +---+  |
>|
> +---+ ++
> | pve02 |-| network switch |
> +---+ ++
>|
> +---+  |
> | pve03 |--+
> +---+
> 
> 
> but like this:
> 
> +---+
> | pve01 |---+
> +---+   |
> |   |
> +---+   |
> | pve02 |   |
> +---+   |
> |   |
> +---+   |
> | pve03 |---+
> +---+
> 
> (all connections 1Gbit, there are currently not plans to extend over three 
> nodes)
> 
> I can't see any drawback in that solution.  It would remove one layer of 
> hardware dependency and potential spof (the switch).  If we don't trust the 
> interfaces, we might be able to configure a second network with the three 
> remaining interfaces.
> 
> Is such a "direct-connection" topology feasible?  Recommended? Strictly not 
> recommended?
> 
> I am currently just planning and thinking and there is no cluster (or even a 
> PROXMOX server) in place.
> 
> Cheers
> frank
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Filesystem corruption on a VM?

2018-11-15 Thread Mark Schouten

Obviously, a misbehaving SAN is a much better explanation for
filesystem corruption...
Mark


From: Marco Gaiarin (g...@sv.lnf.it)
Date: 15-11-2018 11:57
To: pve-user@pve.proxmox.com
Subject: Re: [PVE-User] Filesystem corruption on a VM?


Mandi! Luis G. Coralle
 In chel di` si favelave...

> Hi, I have a lot of VM ( debian 8 and debian 9 ) with 512 MB of RAM on PVE
> 4.4-24 version and have not problem.

...i have a second cluster, but with ceph storage, not iSCSI/SAN, with
simlar VM, but no troubles at all. True.


> Have you enough free space on the storage?

Now, yes. As just stated, i've had a temporary fill of SAN space
(something on my trim tasks, or on the SAN, goes wrong) but now all are
back as normal.


> How much ram memory do you have on PVE?

Nodes have 64GB of RAM, 52% full.

-- 
dott. Marco Gaiarin                            GNUPG Key ID: 240A3D66
 Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
 Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
 marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

          Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
     http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
     (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)



--

Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
 



[PVE-User] Risks for using writeback on Ceph RBD

2018-11-12 Thread Mark Schouten
Hi,

We've noticed some performance wins from using writeback for Ceph RBD
devices, but I'm wondering how we should assess the risks of using
writeback. Writeback isn't very unsafe, but what are the risks in case
of power loss on a host?

Thanks,

-- 
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl




Re: [PVE-User] HA Failover if shared storage fails on one Node

2018-10-17 Thread Mark Schouten
On Wed, 2018-10-17 at 13:05 +0200, Martin Holub wrote:
> my Test VM, Proxmox seems to not recognize the Storage outtage and
> therefore did not migrate the VM to a different blade or removed that
> Node from the Cluster (either by resetting it or fencing it somehow
> else). Any hints on how to get that solved?

HA Detects outages between the Proxmox Nodes. Not if storage is
reachable.

-- 
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [SOLVED] dual port cross over cable bonding

2018-10-05 Thread Mark Schouten
On Fri, 2018-10-05 at 16:01 +0100, Adam Weremczuk wrote:
> My config was ok and it's working like a charm after both servers
> have 
> been rebooted.
> For some reason "systemctl restart networking" wasn't enough.

This is not working anymore, unfortunately...

-- 
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] dual host HA solution in 5.2

2018-10-01 Thread Mark Schouten
On Mon, 2018-10-01 at 11:06 +0100, Adam Weremczuk wrote:
> Or purchasing a new switch for them to share and introducing a
> single 
> point of failure.

In your situation, I would wonder if a switch is really that big of a
SPOF.

It looks like you're trying to beat this:

http://organikseo.com/wp-content/uploads/2015/10/Fast-Cheap-Good-Triangle.jpg

-- 
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to change .members file??

2018-10-01 Thread Mark Schouten
On Sat, 2018-09-29 at 10:19 +0200, Dietmar Maurer wrote:
> > Is there a way to change the .members fils locate in /etc/pve ??
> > This file is read-only!
> 
> no, you cannot change that file. But you can add/remove cluster
> members - the file is changed accordingly.

I've had invalid entries in that file once. That was because my /etc/hosts
file was broken. Restarting pve-cluster (with the corrected hosts file)
helped.

-- 
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] In-place upgrade and clusters...

2018-09-05 Thread Mark Schouten
On Wed, 2018-09-05 at 16:20 +0200, Marco Gaiarin wrote:
> So, probably the best upgrade path will be:
> 
> 1) move all the VMs/CTs out of a node (or, if possible, stop them).
> 
> 2) upgrade the node from 4.X to 5.X, reboot (repeat as needed)
> 
> 3) move all the VMs/CTs in a node to the upgraded node(s), repeat
> step
>   2).

Yes. That works for me. But beware that you need to upgrade Ceph first
(if used) and not reboot in between, because otherwise you end up
with mons and OSDs of different versions in the same cluster, which
is unsupported.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] In-place upgrade and clusters...

2018-09-04 Thread Mark Schouten
I’ve been able to migrate VM’s between upgraded and ‘old’ nodes. Not all VM’s 
survived it without crashing, but I’m not sure what caused it (might be Ceph 
client stuff as well).


> On 4 Sep 2018, at 18:29, Marco Gaiarin  wrote:
> 
> 
> I've upgraded some standalone PVE installation with 'In-place upgrade'
> method:
>   https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0#In-place_upgrade
> without a fuss.
> 
> And in this page the note:
> 
>   If you run a PVE 4 cluster it's tested and supported to add a PVE 5 
> node and migrate your guests to the new host.
> 
> seem confident to me that the answer to my answer is 'yes'... but...
> 
> 
> Upgrading a cluster, can i upgrade node by node, migrating VMs/CTs to
> other nodes, without troubles if the source node and the target node
> run different PVE versions (latest 4.X and latest 5.X)?
> 
> Or i've to shutdown all the VMs/CTs and upgrade all the cluster at the
> same time?
> 
> 
> Thanks.
> 
> -- 
> dott. Marco Gaiarin   GNUPG Key ID: 240A3D66
>  Associazione ``La Nostra Famiglia''  http://www.lanostrafamiglia.it/
>  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
>  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797
> 
>   Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
>  http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
>   (cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ceph disk replace

2018-09-04 Thread Mark Schouten
On Tue, 2018-09-04 at 10:56 +0200, lists wrote:
> When again HEALTH_OK:
> - remove the OSD from pve gui
> but at this point ceph started rebalancing again, which to us was 
> unexpected..?
> 
> It is now rebalancing nicely, but can we prevent this data movement
> next 
> time..? (and HOW?)

I think you can't prevent it. Removing an OSD changes the CRUSH map,
and thus triggers a rebalance. However, there is no need to wait for it to
finish rebalancing again.

You can add the new OSD directly after removing the old one.
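
Roughly, the CLI equivalent of that looks like this (just a sketch; <osd-id> and
/dev/sdX are placeholders, and exact command names can differ per Ceph/PVE version):

ceph osd out <osd-id>
systemctl stop ceph-osd@<osd-id>
ceph osd crush remove osd.<osd-id>
ceph auth del osd.<osd-id>
ceph osd rm <osd-id>
# then create the replacement OSD on the new disk, via the GUI or:
pveceph createosd /dev/sdX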

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox Ceph with differents HDD Size

2018-08-30 Thread Mark Schouten
On Thu, 2018-08-30 at 09:30 -0300, Gilberto Nunes wrote:
> Any advice to, at least, mitigate the low performance?

Balance the number of spinning disks and the size per server. This will
probably be the safest.

Unbalanced nodes aren't guaranteed to degrade performance; the point is that
they potentially can.


-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox Ceph with differents HDD Size

2018-08-30 Thread Mark Schouten
On Wed, 2018-08-29 at 14:04 +0200, Eneko Lacunza wrote:
> You should change the weight of the 8TB disk, so that they have the
> same 
> as the other 4TB disks.
> 
> Thanks should fix the performance issue, but you'd waste half space
> on 
> those 8TB disks :)

Wouldn't it be more efficient to just place one 4TB and one 8TB disk in
each server?

Changing weight will not cause the available space counters to drop
accordingly, I think. So it's probably confusing..

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] mlock'ing VM processes

2018-08-29 Thread Mark Schouten
Hi,

I have issues on clusters with lots of Windows VM's, where the host
decides to swap VM memory in favor of (it seems) filecache.

root@proxmox05:~# free -m
              total       used       free     shared    buffers     cached
Mem:         386874     382772       4101        326          0     208023
-/+ buffers/cache:      174749     212125
Swap:          8191       1775       6416

The customers are experiencing slower VM's because of parts of their
memory being swapped out.

It looks like libvirt has a solution to lock a VM's memory into 'real'
memory. [1] Is there a way we can make Proxmox do the same?


[1]: 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-memory-backing
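
For reference, the libvirt knob for this is a per-domain XML element, roughly like the
following (nothing Proxmox exposes directly; just to illustrate what I mean):

<memoryBacking>
  <locked/>
</memoryBacking>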

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VZDump and Ceph Snapshots

2018-08-29 Thread Mark Schouten
On Wed, 2018-08-29 at 08:11 +0200, Alexandre DERUMIER wrote:
> vma backup only work on running vm (attached disk),
> so no, it's not possible currently.

Doesn't vma just create an archive of config files and raw images in a
directory?

> Currently, I'm doing ceph backup to another remote ceph backup
> cluster
> with custom script

I want my customers to be able to restore easily without Ceph-hassle.
So I want to put the functionality in vzdump..

> * Start 
> * guest-fs-freeze 
> * rbd snap $image@vzdump_$timstamp 
> * guest-fs-thaw 
> * rbd export $image@vzdump_$timstamp  | rbd import 

I think we should be able to do an rbd export to STDOUT and pipe that into vma?
Proxmox developers?

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] VZDump and Ceph Snapshots

2018-08-28 Thread Mark Schouten
Hi,

I'm currently using vzdump with the snapshot method to periodically
generate vma-files for disasterrecovery. vzdump in snapshot mode
instructs Qemu to start backing up the disk to a specific location, and
while doing so, VM users can suffer poor performance.

We run practically all VMs on Ceph storage, which has snapshot
functionality.

Would it be feasible to alter VZDump to use the following flow (a rough sketch follows the list):

* Start
* guest-fs-freeze
* rbd snap $image@vzdump_$timstamp
* guest-fs-thaw
* qemu-img convert -O raw rbd:$image@vzdump_$timstamp $tmpdir
* vma create
* rbd snap rm $image@vzdump_$timstamp
* Done
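
As a very rough sketch of what that could look like on the CLI (vmid, pool, timestamp
and paths are placeholders; the agent subcommands may be named differently per version):

qm agent <vmid> fsfreeze-freeze
rbd snap create <pool>/vm-<vmid>-disk-1@vzdump_<timestamp>
qm agent <vmid> fsfreeze-thaw
qemu-img convert -O raw rbd:<pool>/vm-<vmid>-disk-1@vzdump_<timestamp> <tmpdir>/vm-<vmid>-disk-1.raw
# package <tmpdir> plus the VM config into a vma archive with 'vma create'
rbd snap rm <pool>/vm-<vmid>-disk-1@vzdump_<timestamp>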

Regards,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] removing vm snapshot (on ceph) fails

2018-08-21 Thread Mark Schouten
On Tue, 2018-08-21 at 12:55 +0200, Ralf Storm wrote:
> manually remove the snapshot and the lock from the config - file and
> proceed

Yes. Also, check that Ceph has no leftover snapshots that you thought you had
already deleted :)

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] DHCP for non cloudinit VM

2018-08-21 Thread Mark Schouten
On Tue, 2018-08-21 at 09:48 +0200, José Manuel Giner wrote:
> We are talking about auto-configuring the network on the VM, and 
> therefore you cannot install the qemu-guest-agent on the VM if you
> do 
> not have a network yet.

I deploy using templates, with packages required already installed om
them. So this would not be an issue for me, personally.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] DHCP for non cloudinit VM

2018-08-21 Thread Mark Schouten
On Tue, 2018-08-21 at 08:51 +0200, José Manuel Giner wrote:
> I know that already :) and it doesn't change anything because the 
> management difficulty still exists.
> 
> Everything would be simpler with native integration.

I disagree. As would many people. But, nothing stops you from writing
your own qemu-guest-agent script to configure the IP address on the VM,
I think.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-07 Thread Mark Schouten
On Tue, 2018-08-07 at 11:30 -0400, Denis Morejon wrote:
> This is not possible because I don't use ZFS on all eight nodes, just
> on 4 of them, and modifying /etc/pve/storage.cfg is a cluster-wide
> operation!

Ah yes, crap. That's right.. 

> Now the problem is that I have 4 nodes with local-lvm storage active
> and 
> 4 nodes with just local storage active, because in these  last nodes
> the 
> local-lvm is disabled! (Due to the non-existence of any lmv volumen
> group)
> 
> So, the migrations of MVs between all these nodes cause problems.

Ok. Not sure if this is supposed to work, but what if you create a ZFS
Volume (zfs create -V 100G rpool/lvm) and make that a PV (pvcreate
/dev/zvol/rpool/lvm) and make a VG (vgcreate pve /dev/zvol/rpool/lvm)
and then a LV (lvcreate -L100% pve/data) ? Does that allow you to use
local-lvm?

(Not 100% sure about the commands, check before executing)
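
Something like this, untested, and if you want it to behave like the stock local-lvm
(a thin pool); sizes and names are just examples:

zfs create -V 100G rpool/lvm
pvcreate /dev/zvol/rpool/lvm
vgcreate pve /dev/zvol/rpool/lvm
# leave some room for thin-pool metadata instead of using 100%:
lvcreate -l 90%FREE --thinpool data pve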

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-07 Thread Mark Schouten
On Tue, 2018-08-07 at 10:49 -0400, Denis Morejon wrote:
> So, on 4 nodes of the cluster I am able to use local-lvm to put CTs
> and 
> VMs over there, but I am not able to put VMs and CTs on local-lvm in
> the 
> others.
> That's why I want to create pve VGs on the zfs nodes.

Like I said, I think you should be able to rename 'local-zfs' to
'local-lvm' in /etc/pve/storage.cfg, and not worry about LVM anymore.
But maybe we're misunderstanding each other.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-07 Thread Mark Schouten
On Tue, 2018-08-07 at 09:42 -0400, Denis Morejon wrote:
> So local-lvm is not active by default. Then, when you add this node
> to 
> others with local-lvm storage active, And you try to migrate VMs
> between
> 
> them, there are problems...

I don't think the actual storage technique matters for migrating local
storage; it's just the storage name that has to match. You can create images on
ZFS; you don't need LVM to be able to create VMs with storage.
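
For example, the stock ZFS storage entry in /etc/pve/storage.cfg looks roughly like this
(names as created by the installer; adjust pool and content to taste):

zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir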

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] How to use lvm on zfs ?

2018-08-07 Thread Mark Schouten
On Tue, 2018-08-07 at 08:13 -0400, Denis Morejon wrote:
> I installed a Proxmox 5.1 server with 4 sata hdds. I built a Raidz1 
> (Raid 5 aquivalent) to introduce storage redundance. But no lvm is 
> present. I want to use lvm storage on top of zpool. What should I do
> ?

I'm curious about why you would want to do that..

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ceph jewel -> luminous upgrade docs

2018-07-09 Thread Mark Schouten
On Mon, 2018-07-09 at 16:32 +0200, lists wrote:
> Hi Mark,
> 
> On 9-7-2018 15:57, Mark Schouten wrote:
> > A default Proxmox setup runs as admin, doesn't it? With all caps?
> 
> I think the answer to #1 is yes, but I'm not sure how to tell about
> the 
> second caps question. Perhaps this tell you something:

In /etc/pve/priv/ceph/ is the file you (and your VM's) are using as a
key-file for Ceph.


> > 
> > caps: [osd] allow *

This (*) allows all capabilities.
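
You can check, and if needed extend, the caps of a more restricted client roughly like
this (client name and existing OSD caps are placeholders, per the Luminous upgrade notes):

ceph auth get client.admin
ceph auth caps client.<id> mon 'allow r, allow command "osd blacklist"' osd '<existing osd caps>'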

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ceph jewel -> luminous upgrade docs

2018-07-09 Thread Mark Schouten
On Mon, 2018-07-09 at 15:38 +0200, lists wrote:
> "Verify that all RBD client users have sufficient caps to blacklist 
> other client users. RBD client users with only "allow r" monitor
> caps 
> should to be updated as follows"
> 
> Does this not apply to a 'regular' proxmox install? We are managing
> the 
> cluster (and installed it) using the regular proxmox 4 installer.
> (this 
> gave us hammer at the time, which we have upgraded to jewel)

A default Proxmox setup runs as admin, doesn't it? With all caps?

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM remains in snap-delete state

2018-07-03 Thread Mark Schouten
Hi,

On Wed, 2018-06-27 at 15:09 +0200, Mark Schouten wrote:
> I have a VM that remains in snap-delete state. I'm wondering what the
> safest way is to proceed. I think I can do a qm unlock, and click
> remove again, but I'm not sure. Please advise. See current qm config
> below:


Just found the following error in the task list:
trying to aquire cfs lock 'storage-REDACTED' ...TASK ERROR: got lock
request timeout

I see the snapshot still exists, as does the state-image. Would the
best solution be to just qm unlock and click delete snapshot again?

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pve-firewall bug, net.bridge.bridge-nf-call-iptables is not set at boot

2018-06-27 Thread Mark Schouten
Hi,

I've reported Bug 1823 [1] last week. Is anyone else seeing this? If
so, maybe you can confirm the bug in bugzilla?

[1]: https://bugzilla.proxmox.com/show_bug.cgi?id=1823

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] VM remains in snap-delete state

2018-06-27 Thread Mark Schouten
Hi,

I have a VM that remains in snap-delete state. I'm wondering what the
safest way is to proceed. I think I can do a qm unlock, and click
remove again, but I'm not sure. Please advise. See current qm config
below:

root@proxmox01:~# cat /etc/pve/qemu-server/111.conf 
#REDACTED
balloon: 8192
boot: cdn
bootdisk: virtio0
cores: 4
ide2: none,media=cdrom
lock: snapshot-delete
memory: 24576
name: tux178
net0: bridge=vmbr999,virtio=66:36:62:35:64:65,tag=229
net1: bridge=vmbr999,virtio=66:32:38:63:63:33,tag=3
numa: 0
ostype: l26
parent: Just_before_polpoly_10_16_5_fp1
smbios1: uuid=5f30d44f-2685-4072-b8d6-1f9e3b8c57fc
sockets: 1
virtio0: rbd:vm-111-disk-1,size=1T

[Just_before_polpoly_10_16_5_fp1]
balloon: 8192
boot: cdn
bootdisk: virtio0
cores: 4
ide2: none,media=cdrom
machine: pc-i440fx-2.9
memory: 32768
name: tux178
net0: bridge=vmbr999,virtio=66:36:62:35:64:65,tag=229
net1: bridge=vmbr999,virtio=66:32:38:63:63:33,tag=3
numa: 0
ostype: l26
smbios1: uuid=5f30d44f-2685-4072-b8d6-1f9e3b8c57fc
snapstate: delete
snaptime: 1526563253
sockets: 1
virtio0: rbd:vm-111-disk-1,size=1T
vmstate: rbd:vm-111-state-Just_before_polpoly_10_16_5_fp1

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] high cpu load on 100mbits/sec download with virtio nic

2018-06-07 Thread Mark Schouten
On Thu, 2018-06-07 at 14:49 +, Maxime AUGER wrote:
> I'm doubtful about the performance I get.
> 
> On proxmox HOST a single wget download consume 12% single CPU load at
> 11MBytes/sec (1518 packets size)

Where are you wgetting to? Benchmarking this is better done with iperf or
similar.

If you want to wget, do wget -O /dev/null so you're not suffering from
disk performance..
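
For example (the VM IP and URL are placeholders):

# on the VM:
iperf -s
# on the host, or another machine on the same network:
iperf -c <vm-ip> -t 30
# or, staying with wget but keeping the disk out of the measurement:
wget -O /dev/null http://<some-server>/<large-file>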

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Check, check, double check

2018-05-30 Thread Mark Schouten
Hi,

I've setup a new Ceph cluster next to my Proxmox cluster and I am in
the process of migrating disks to the new storage cluster, which works
fine.

I've used templates and linked clones, so I have a lot of VM's with a
base-image, and their own image over it, like so:
ceph01:base-703-disk-1/vm-198-disk-1

I'm pretty sure I can just delete the Unused disk at the VM level, and
that it won't delete the base-disk. But I'd rather be completely sure.
Can anyone confirm? :)

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Toggling 'krbd' checkbox

2018-05-30 Thread Mark Schouten
On Tue, 2018-05-29 at 11:32 +0300, Dmitry Petuhov wrote:
> Also it may affect live-migration of running machines.

Affect as in 'break', or affect as in 'update the new situation'? :)

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Backula proxmox backup

2018-05-29 Thread Mark Schouten
On Tue, 2018-05-29 at 11:35 +0300, Dmitry Petuhov wrote:
> Looks like generic PVE backup (vzdump), ran by bacula's scheduler 
> instead of PVE's crontab with image taken by bacula instead of being 
> laid on some PVE's storage.

Yes, that's what I thought. But if they have an interface in which you
can 'mount' the vzdump-image and extract files from it, that would be a
nice feature.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Toggling 'krbd' checkbox

2018-05-29 Thread Mark Schouten
Hi,

On one of our clusters, we have the krbd checkbox checked, because the
customer expected to play with containers. However, they don't. So I'm
inclined to just uncheck the box.

I expect that this only affects a VM that is shut off, and started
again, and running boxes will just keep on going. Am I right?

Regards,


-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Backula proxmox backup

2018-05-29 Thread Mark Schouten
Hi,

On Thu, 2018-05-24 at 17:54 +0200, Fabrizio Cuseo wrote:
> Someone tested bacula enterprise backup for proxmox ? And someone
> knows the pricing level ? 
> 
> https://www.baculasystems.com/corporate-data-backup-software-solution
> s/bacula-enterprise-data-backup-tools/backup-and-recovery-for-proxmox

I don't know it. But please share your experiences, as we might be
interested as well. :)

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Update a cluster from 5.1-46 to 5.2

2018-05-24 Thread Mark Schouten
there is no mention on upgrading from latest 4.4 to 5.0
> > 
> > Is that intentional..? Or should it work just like a regular
> > update?
> > 

Worked fine for me.

> It's easier as 3 to 4 as the cluster communication can stay intact
> between 4.4 and 5.0 but not completely as easy as a 5.X to 5.Y
> upgrade.
> 
> See: https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0

The breaking-changes note seems to be incorrect. I was not able to live-migrate
with the changed VGA setting, but I was able to live-migrate when I
changed nothing.

Some guests froze after migration though. Not sure why; I don't mind
that very much while upgrading a full cluster.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Hanging storage tasks in all RH based VMs after update

2018-05-03 Thread Mark Schouten
Hi,

On Thu, 2018-05-03 at 07:26 +0200, Uwe Sauter wrote:
>    services:
>  mon: 6 daemons, quorum 0,1,2,3,px-echo-cluster,px-foxtrott-
> cluster

Not a response to your current issue, but an even number of monitors is
not recommended. Also, with a six-node cluster, I would personally be
happy with three.

Just my 2 cts.


-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ipfilter functionality

2018-04-19 Thread Mark Schouten
On Fri, 2018-04-13 at 11:34 +0200, Wolfgang Bumiller wrote:
> Either way, it would be great if this would be fixed!
> 
> Agreed.

Is there any idea of a timeframe for when this will be fixed? The current
setup renders the 'ipfilter' functionality useless, as someone can run
services using whichever IP they want. The description of the ipfilter
function suggests that exactly that is prevented.

(I don't mind testing/spending time on speeding this up)

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ipfilter functionality

2018-04-13 Thread Mark Schouten
On Fri, 2018-04-13 at 11:11 +0200, Wolfgang Bumiller wrote:
> For simple connections this works, but then you also break multicast
> traffic unless you add all multicast IPs to the ipfilter as well. The
> real solution would be to move the conntrack rules from PVEFW-FORWARD
> into tap/veth${vmid}i* to below the ipfilter.

True. But moving the conntrack rules to every individual chain extends
the ruleset, a lot. Multicast addresses are pretty much limited to
two(?) subnets, which could be added to an already existing ipset,
which the kernel already visits.

I'm no kernel guru, but I have the feeling that the larger ruleset is
more resource-hungry.

Either way, it would be great if this would be fixed!

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ipfilter functionality

2018-04-13 Thread Mark Schouten
On Fri, 2018-04-13 at 10:08 +0200, Mark Schouten wrote:
> It's not really MAC filtering I'm looking for. But wouldn't this be
> fixed if the connection inbound would be filtered as well as
> outbound?
> So add the ipfilter-rules to $interface-IN as well?

Like so:
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 8f545e7..1bf0725 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -2202,6 +2202,12 @@ sub ruleset_create_vm_chain {
 	}
 	ruleset_addrule($ruleset, $chain, "", "-j MARK --set-mark $FWACCEPTMARK_OFF"); # clear mark
     }
+    if ($direction eq 'IN') {
+	if ($ipfilter_ipset) {
+	    ruleset_addrule($ruleset, $chain, "-m set ! --match-set $ipfilter_ipset dst", "-j DROP");
+	}
+    }
+
 
     my $accept_action = $direction eq 'OUT' ? '-g PVEFW-SET-ACCEPT-MARK' : "-j $accept";
     ruleset_chain_add_ndp($ruleset, $chain, $ipversion, $options, $direction, $accept_action);

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] ipfilter functionality

2018-04-13 Thread Mark Schouten
On Fri, 2018-04-13 at 08:31 +0200, Wolfgang Bumiller wrote:
> This is currently due to the connection tracking rules happening too
> early. Similarly MAC filtering only happens for IP packets.
> If you do not need to disable MAC filtering you can try the
> pve-firewall >= 3.0-8 package from pvetest which will setup ebtables
> for
> MAC filtering, that should help. But to make it work completely as
> most
> users expect it we'll have to move the conntrack rules from the
> forward
> chain into the device specific chains.
> It's on my todo list along with another round of nftables testing.

It's not really MAC filtering I'm looking for. But wouldn't this be
fixed if inbound connections were filtered as well as outbound ones?
That is, add the ipfilter rules to $interface-IN as well?

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Firewalling, local_network & Ceph

2018-04-12 Thread Mark Schouten
Hi,

Wouldn't it make sense to include Ceph services in the default
local_network firewalling rules?

Regards,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] ipfilter functionality

2018-04-11 Thread Mark Schouten
Hi,

We've been struggling with ipfilter for a few days, thinking it doesn't
work, because inbound connections kept working, even though there was
not a single IP in the ipfilter-net0 IPSet.

But it looks like only outbound connections are dropped, while inbound
connections keep working. That works as far as it goes, but it doesn't prevent
anyone from spoofing a neighbour's address, so the feature isn't complete.

Am I missing something?

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Custom storage in ProxMox 5

2018-04-03 Thread Mark Schouten
On Sat, 2018-03-31 at 09:58 +1000, Lindsay Mathieson wrote:
> The performance I got with Ceph was sub optimal - as mentioned
> earlier, 
> if you throw lots of money at Enterprise hardware & SSD's then its
> ok, 
> but that sort of expenditure was not possible for our SMB. Something
> not 
> mentioned is that it does not do well on small setups. A bare minimum
> is 
> 5 nodes with multiple OSD's.

Samsung PM863(a) disks are not that expensive and perform very well.
Also, a three-node setup with a single OSD per node works perfectly.

> Adding/replacing/removing OSD's can be a nightmare. The potential to 
> trash your pools is only one mistep away.

I really don't see how trashing a pool is one misstep away. That would
really take an enormous idiot.

I don't know Lizardfs, so I have no opinion about that. But it seems
your experiences with Ceph are not in sync with any other experience I
hear/read.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pmbalance

2018-03-25 Thread Mark Schouten
Hi,

> On 23 Mar 2018, at 18:11, Alexandre DERUMIER  wrote:
> 
> Thanks for your script, I was looking for something like that.
> 
> I'll try it next week and try to debug.

I have it working now. I’ll post the working version tomorrow.

> I'm seeing 2 improvements:
> 
> - check that shared storage is available on target node
> - take ksm value in memory count. (with ksm enable, we can have all nodes at 
> 80% memory usage, but with differents ksm usage)

I’m always pro improvements. :)

Mark___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pmbalance

2018-03-22 Thread Mark Schouten
On Thu, 2018-03-22 at 10:53 +0100, Dietmar Maurer wrote:
> > Now, since 4.4 I have the issue that I can no longer get info or
> > commands from other nodes than the node I'm running this script on.
> > I
> > get "500 proxy not allowed" as soon as I get to
> > 'get_vm_description()'.
> > 
> > What am I doing wrong? Thanks!
> 
> Maybe you connect to the wrong port (what api port do you use)?

As you can see in the script (https://tuxis.bestandonline.nl/index.php/s/JzH3cz2wGdY8QXs/download?path=%2F=pmbalance), I'm using the Perl API library provided by Proxmox.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] pmbalance

2018-03-22 Thread Mark Schouten
Hi,

I have a script that is able to automatically move around VM's so the
load on the clusternodes is somewhat equal. You can find the script
here: https://tuxis.bestandonline.nl/index.php/s/JzH3cz2wGdY8QXs

It's called pmbalance.

Now, since 4.4 I have the issue that I can no longer get info or
commands from other nodes than the node I'm running this script on. I
get "500 proxy not allowed" as soon as I get to 'get_vm_description()'.

What am I doing wrong? Thanks!


root@proxmox2-4:~# ./pmbalance --verbose --dry-run
V: Looking for nodes on this cluster..
V: Found proxmox2-4
V: Found proxmox2-5
V: Found proxmox2-2
V: Found proxmox2-1
V: Found proxmox2-3
V: Calculating the average memory free on the cluster
V: proxmox2-2 is at 26% free
V: proxmox2-4 is at 39% free
V: proxmox2-1 is at 69% free
V: proxmox2-3 is at 19% free
V: proxmox2-5 is at 20% free
V: Which is: 34%
V: We should lower usage of proxmox2-3 by: 15% to reach 34%
V: 15 of 67524956160 = 10128743424 bytes
V: proxmox2-1 will receive the VM
V: Looking for migratable VMs on proxmox2-3
500 proxy not allowed



-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] scripts to monitor via Sensu

2018-03-20 Thread Mark Schouten
On Tue, 2018-03-20 at 14:09 +0100, Geert Stappers wrote:
> I'm using the scripts attached to monitor via Sensu.
> 
> Those scripts weren't found attached.
> 
> Not transmitted or stripped by some blunt filter??

Mailing list software, I assume.

https://tuxis.bestandonline.nl/index.php/s/JzH3cz2wGdY8QXs

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Request New Features for Web-UI

2018-03-20 Thread Mark Schouten
On Tue, 2018-03-20 at 10:16 +0100, Thomas Lamprecht wrote:
> 2.  Feature to download backup archives (or templates) from web UI.
> > 
> 
> Sounds reasonable for convenience on smaller setups, bigger normally
> use an automated way to pull backups from PVE nodes to remote, if
> wished.
> 
> Could be a problem in hosted environments where traffic is limited,
> but this could be just ignored, we're a administration platform not
> a hoster one... 

Yes. This could come in handy.

> > 3.  Email Notification feature if something goes wrong w/ cluster
> > or system
> > resource (similar to zfs-zed).
> > 
> 
> sounds reasonable and we talked about wanting such a system (maybe I
> just
> talked with Dominik off list, not sure anymore).
> But currently nobody works on it, AFAIK.

I personally think 'real' monitoring (Nagios, Zabbix, Sensu) makes more
sense than emailing. 

I'm using the scripts attached to monitor via Sensu.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Linux crashes while running backups

2018-02-13 Thread Mark Schouten
Hi,

I have written PMRB (ProxMox Rolling Backups), which automatically runs vzdump
on VMs that have a backup image older than 1 week.

However, we are experiencing hard crashes in the VM while running the backup.
We have noticed before that the VM slows down if the target storage is slow,
but complete crashes are new. The cluster in this case was just upgraded to PVE 5.x

Does anybody have tips or tricks on how to improve stability during backups? 
Please let me 
know.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] CephFS on Proxmox cluster

2018-02-06 Thread Mark Schouten
Hi,

Is anyone actively using CephFS served by a Proxmox cluster? AFAICS there is 
no technical obstacle to configuring a Proxmox cluster to serve CephFS, but I'm 
looking for pros and cons.

Thanks,
-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Feedback on 4.x to 5.x upgrade documentation

2018-01-30 Thread Mark Schouten
On dinsdag 30 januari 2018 08:29:12 CET Fabian Grünbichler wrote:
> could you provide some details nevertheless? the following might be
> interesting:
> 
> - VM config of an affected guest
> - storage config
> - task log of a failed migration
> - syslog/journal on both source and target node around the time of the
>   same failed migration
> 
> did all VMs with disks mapped via KRBD fail to migrate, or just some?

Sent this info off-list.

I tried a few, that failed. And then stopped trying. So I'm not sure.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl

signature.asc
Description: This is a digitally signed message part.
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Loosing network connectivity in a VM

2018-01-02 Thread Mark Schouten
No logs, whatsoever. Nothing.



19:21:42.186515 62:1e:3b:f6:a3:a4 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), 
length 42: Request who-has 10.10.9.254 tell 10.10.9.4, length 28
19:21:42.249916 2e:c6:df:de:ea:a0 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), 
length 60: Request who-has 10.10.9.254 tell 10.10.9.3, length 46
19:21:42.728193 66:39:34:36:62:38 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), 
length 42: Request who-has 10.10.9.254 tell 10.10.9.1, length 28


is what I do see, and nothing else.


So, flipping the disconnect-checkbox fixes this. So it seems like a Qemu bug?

Met vriendelijke groeten,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



 Van:   dORSY <dors...@yahoo.com> 
 Aan:   PVE User List <pve-user@pve.proxmox.com>, Mark Schouten <m...@tuxis.nl> 
 Verzonden:   2-1-2018 19:54 
 Onderwerp:   Re: [PVE-User] Loosing network connectivity in a VM 





What does stops responding mean?


Is there anything in the host's logs (bridge/networking related)? 

What do you see in the vm's when its happening? Is interface up? Is there an 
IP? What are the routes?

We had this bug, for example, for a while (though it only produced warnings for me):

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1715609

On Tuesday, 2 January 2018, 19:28:30 CET, Mark Schouten <m...@tuxis.nl> wrote:

 

 

I just had it again. Toggling the 'disconnect' checkbox in Proxmox 'fixes' the 
issue.


I do not use bonding. It is as simple as it gets. A bridge on a 10Gbit 
interface.


Do you have a link about these virtio issues in 4.10 ?



Met vriendelijke groeten,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



 Van:   Alexandre DERUMIER <aderum...@odiso.com> 
 Aan:   proxmoxve <pve-user@pve.proxmox.com> 
 Verzonden:   2-1-2018 17:49 
 Onderwerp:   Re: [PVE-User] Loosing network connectivity in a VM 


Try to upgrade to kernel 4.13; there are known virtio bugs in 4.10.
(not sure it's related, but it could help)

do you use bonding on your host ? if yes, which mode ?

- Mail original -
De: "Mark Schouten" <m...@tuxis.nl>
À: "proxmoxve" <pve-user@pve.proxmox.com>
Envoyé: Mardi 2 Janvier 2018 13:41:43
Objet: [PVE-User] Loosing network connectivity in a VM

Hi, 

I have a VM running Debian Stretch, with three interfaces. Two VirtIO 
interfaces and one E1000. 

In the last few days one of the Virtio interfaces stopped responding. The 
other interfaces are working flawlessly. 

The Virtio-interface in question is the busiest interface, but at the moment 
the interface stops responding, it's not busy at all. 

tcpdump shows me ARP-traffic going out, but nothing coming back. 

I experienced this with other VM's on this (physical) machine as well, making me 
believe it is a new bug in KVM/Virtio. 

The config of the VM is quite default, as is the config for other VM's on which 
I experienced this same issue. 

Has anybody seen this behaviour before, does anyone have an idea what to do 
about it? 

root@host02:~# pveversion 
pve-manager/5.0-30/5ab26bc (running kernel: 4.10.17-2-pve) 

# info version 
2.9.0pve-qemu-kvm_2.9.0-4 

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
  ___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Loosing network connectivity in a VM

2018-01-02 Thread Mark Schouten
I just had it again. Toggling the 'disconnect' checkbox in Proxmox 'fixes' the 
issue.


I do not use bonding. It is as simple as it gets. A bridge on a 10Gbit 
interface.


Do you have a link about these virtio issues in 4.10 ?



Met vriendelijke groeten,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



 Van:   Alexandre DERUMIER <aderum...@odiso.com> 
 Aan:   proxmoxve <pve-user@pve.proxmox.com> 
 Verzonden:   2-1-2018 17:49 
 Onderwerp:   Re: [PVE-User] Loosing network connectivity in a VM 

Try to upgrade to kernel 4.13; there are known virtio bugs in 4.10.
(not sure it's related, but it could help)

do you use bonding on your host ? if yes, which mode ?

- Mail original -----
De: "Mark Schouten" <m...@tuxis.nl>
À: "proxmoxve" <pve-user@pve.proxmox.com>
Envoyé: Mardi 2 Janvier 2018 13:41:43
Objet: [PVE-User] Loosing network connectivity in a VM

Hi, 

I have a VM running Debian Stretch, with three interfaces. Two VirtIO 
interfaces and one E1000. 

In the last few days one of the Virtio interfaces stopped responding. The 
other interfaces are working flawlessly. 

The Virtio-interface in question is the busiest interface, but at the moment 
the interface stops responding, it's not busy at all. 

tcpdump shows me ARP-traffic going out, but nothing coming back. 

I experienced this with other VM's on this (physical) machine as well, making me 
believe it is a new bug in KVM/Virtio. 

The config of the VM is quite default, as is the config for other VM's on which 
I experienced this same issue. 

Has anybody seen this behaviour before, does anyone have an idea what to do 
about it? 

root@host02:~# pveversion 
pve-manager/5.0-30/5ab26bc (running kernel: 4.10.17-2-pve) 

# info version 
2.9.0pve-qemu-kvm_2.9.0-4 

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Loosing network connectivity in a VM

2018-01-02 Thread Mark Schouten
Hi,

I have a VM running Debian Stretch, with three interfaces. Two VirtIO 
interfaces and one E1000.

In the last few days one of the Virtio interfaces stopped responding. The 
other interfaces are working flawlessly.

The Virtio-interface in question is the busiest interface, but at the moment 
the interface stops responding, it's not busy at all.

tcpdump shows me ARP-traffic going out, but nothing coming back.

I experienced this with other VM's on this (physical) machine as well, making me 
believe it is a new bug in KVM/Virtio. 

The config of the VM is quite default, as is the config for other VM's on which 
I experienced this same issue.

Has anybody seen this behaviour before, does anyone have an idea what to do 
about it?

root@host02:~# pveversion 
pve-manager/5.0-30/5ab26bc (running kernel: 4.10.17-2-pve)

# info version
2.9.0pve-qemu-kvm_2.9.0-4

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox 4.4, Ceph Hammer and Ceph Luminous

2017-10-20 Thread Mark Schouten
Hi,


>>Has anybody installed a Luminous client on Proxmox 4.4 ?

yes, it's working.

But I have tested luminous client with jewel and luminous cluster only.

don't known if it's still compatible with hammer




I've learned from a Ceph expert that Ceph 'guarantees' compatibility across one 
LTS version in between. So Hammer SHOULD be able to talk to Jewel, but does not 
have to work with Luminous per se.


I also learned that Qemu/KVM uses the librados-version that you have installed.


So I'm going to upgrade my Proxmox-nodes to Ceph Jewel clients. And upgrade my 
hammer cluster afterwards..


Met vriendelijke groeten,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Proxmox 4.4, Ceph Hammer and Ceph Luminous

2017-10-18 Thread Mark Schouten
Hi,


I have a Proxmox 4.4 cluster, which is connected to two Ceph clusters. One of 
them is running Hammer, the other is running Luminous (since this morning).


All running VM's seem to be fine and working, but the Proxmox webinterface 
froze. It looks like the default Rados-client in Debian (
root@proxmox2-1:~# rados -v
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
)
is incompatible with Ceph Luminous.



2017-10-18 10:43:54.285544 7fa394755700 -1 failed to decode message of type 59 
v1: buffer::malformed_input: __PRETTY_FUNCTION__ unknown encoding version > 9


That is a bit surprising, imho. But OK, there are a few versions in between.


The question is, can I install a Ceph Luminous client on the Proxmox machines 
(add Ceph Luminous to the sources.list, dist-upgrade) and be happy? Or am I 
going to have trouble connecting to my older Hammer cluster using a Luminous 
client?


For now, I disabled the whole storage-entry in storage.cfg for the Luminous 
cluster to prevent Proxmox from hanging on /usr/bin/rados. Has anybody 
installed a Luminous client on Proxmox 4.4 ?

Met vriendelijke groeten,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Bonding and packetloss

2017-09-04 Thread Mark Schouten
You cannot just run LACP across two separate switches; they need to form a switch stack.
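
With stacked switches, the bond stanza in /etc/network/interfaces would look roughly
like this (interface names are just examples):

auto bond0
iface bond0 inet manual
        slaves eno1 eno2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4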


Met vriendelijke groeten,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



 Van:   Daniel <dan...@linux-nerd.de> 
 Aan:   PVE User List <pve-user@pve.proxmox.com> 
 Verzonden:   1-9-2017 22:22 
 Onderwerp:   [PVE-User] Bonding and packetloss 

Hi there, 
 
here is a small overview if my Network: 
 
2x HP Switches. Both are connected with 4x 1Gbit with a LACP Trunk to each 
other – Working as expected. 
 
Now my problem: I configured all my hosts with bond mode 6 and connected one NIC 
to Switch One and the other to Switch Two. 
Sometimes I got packetloss and see a Kernel error like this: vmbr0: received 
packet on bond0 with own address as source address (addr:0c:c4:7a:aa:5c:e4, 
vlan:0) 
 
Some hosts are working pretty well and some have packet loss. 
After adding some "rules" to a host which has loss, the error messages disappear, 
but the loss (less of it) still exists. 
Is there any hint as to what could be the matter? When I change to 
active/passive mode all is fine. 
 
This is my interfaces config which has packetloss: 
 
auto lo 
iface lo inet loopback 
 
iface eno1 inet manual 
 
iface eno2 inet manual 
 
auto bond0 
iface bond0 inet manual 
                slaves eno1 eno2 
                bond_miimon 100 
                bond_mode 6 
 
auto vmbr0 
iface vmbr0 inet static 
                address  10.0.2.111 
                netmask  255.255.255.0 
                gateway  10.0.2.1 
                bridge_ports bond0 
                bridge_stp off 
                bridge_fd 0 
                bridge_maxage 0 
                bridge_ageing 0 
                bridge_maxwait 0 
 
I am absolutely without any clue ☹ I've tested a lot and nothing really helps to 
solve this problem. 
 
-- 
Grüsse 
 
Daniel 
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] proxmox zfs mirror installation question

2017-08-15 Thread Mark Schouten
Create a single-disk setup. Later on, attach the second disk with 'zpool attach'.
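
Roughly like this (device names are placeholders; on a ZFS boot disk, also remember to
replicate the partition layout and bootloader to the second disk):

zpool status rpool                          # note the existing device
zpool attach rpool <existing-device> <new-device>
zpool status rpool                          # wait for the resilver to finish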


Met vriendelijke groeten,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



 Van:   Nils Privat <gismo3...@gmail.com> 
 Aan:   <pve-user@pve.proxmox.com> 
 Verzonden:   14-8-2017 16:43 
 Onderwerp:   [PVE-User] proxmox zfs mirror installation question 

Hello,
I want to install Proxmox 5 on two SSDs in a mirror. I have some older
Corsair SSDs, one 60GB and one 80GB. The Proxmox installer doesn't let me
install ZFS on a mirror because the SSDs are not the same size. How
can I tell Proxmox that the mirror should be created with the smallest
size in the array, so that the extra 20GB is simply left unused for
overprovisioning on the 80GB SSD? Must I first create a 60GB partition on the
80GB disk to install? I don't know if that is right, because the
install script probably creates several partitions of its own for
booting/EFI etc. Thanks for any help and suggestions.
It would be great if the installer automatically detected the smallest
size in the array, used that as hdsize and left the remaining space
free.

thanks
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] Conntrack on FORWARD-chain

2017-07-06 Thread Mark Schouten
Hi,

We have a cluster with the firewall enabled on cluster- and host-level, not on 
VM-level.

One of the VMs is a firewall which routes traffic for the other VMs. We ran 
into issues because the Proxmox firewall is looking at the FORWARD chain and 
dropping ctstate INVALID, and it apparently considers the routed traffic to be 
in state INVALID.

Everything starts working as soon as I do a `iptables -D PVEFW-FORWARD 1`. Am 
I misinterpreting stuff, doing something wrong, or is this something else?
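
For reference, the rule numbering can be checked with something like:

iptables -L PVEFW-FORWARD -n -v --line-numbers
# rule 1 here is the ctstate INVALID drop mentioned above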

Thanks,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Missing node in ha-manager

2017-05-16 Thread Mark Schouten
On Tue, 2017-05-16 at 10:31 +0200, Thomas Lamprecht wrote:
> Honestly, I would just stop the HA services (this marks the VMs 
> currently under
> HA) and then do a clean restart of the pve-cluster corosync services,
> I'd do this for all nodes but not all at the same time :) This is as 
> safe as it
> can get, no VM should be interrupted, and I expected that even if we 
> know the
> full trigger of this it will result in the same action.

Sorry, forgot to let everybody know. I dist-upgraded the cluster last
week, which restarted clustering and that 'fixed' everything.

Thanks for your support.

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Missing node in ha-manager

2017-05-05 Thread Mark Schouten
Thomas, pretty please? :)

On Wed, 2017-05-03 at 09:45 +0200, Mark Schouten wrote:
> On Tue, 2017-05-02 at 09:05 +0200, Thomas Lamprecht wrote:
> > Can you control that the config looks the same on all nodes?
> > Especially the difference between working and misbehaving nodes
> > would
> > be 
> > interesting.
> 
> Please see the attachment. That includes /etc/pve/.members and
> /etc/pve/corosync.conf from all nodes. Only the .members file of the
> misbehaving node is off.
> 
> > In general you could just restart CRM, but the CRM is capable of
> > syncing 
> > in new nodes while running, so there shouldn't be any need for
> > that,
> > the 
> > patches you linked also do not change that, AFAIK.
> 
> I would like to do a sync without a restart as well, but what would
> trigger this?
> 
> > As /etc/pve.members doesn't shows the new node on the misbehaving
> > one 
> > the problem is another one.
> > Who is the current master? Can you give me an output of:
> > # ha-manager status
> > # pvecm status
> > # cat /etc/pve/corosync.conf
> 
> Output in the attachment. Because the misbehaving node also is the
> master, output of ha-manager is identical on all nodes.
> 
> 
> ___
> pve-user mailing list
> pve-user@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Missing node in ha-manager

2017-05-03 Thread Mark Schouten
On Tue, 2017-05-02 at 09:05 +0200, Thomas Lamprecht wrote:
> Can you control that the config looks the same on all nodes?
> Especially the difference between working and misbehaving nodes would
> be 
> interesting.

Please see the attachment. That includes /etc/pve/.members and
/etc/pve/corosync.conf from all nodes. Only the .members file of the
misbehaving node is off.

> In general you could just restart CRM, but the CRM is capable of
> syncing 
> in new nodes while running, so there shouldn't be any need for that,
> the 
> patches you linked also do not change that, AFAIK.

I would like to do a sync without a restart as well, but what would
trigger this?

> As /etc/pve.members doesn't shows the new node on the misbehaving
> one 
> the problem is another one.
> Who is the current master? Can you give me an output of:
> # ha-manager status
> # pvecm status
> # cat /etc/pve/corosync.conf

Output in the attachment. Because the misbehaving node also is the
master, output of ha-manager is identical on all nodes.


-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl

root@proxmox01:~# cat /etc/pve/.members 
{
"nodename": "proxmox01",
"version": 23,
"cluster": { "name": "redacted01", "version": 4, "nodes": 4, "quorate": 1 },
"nodelist": {
  "proxmox02": { "id": 2, "online": 1, "ip": "10.1.1.2"},
  "proxmox01": { "id": 1, "online": 1, "ip": "10.1.1.1"},
  "proxmox04": { "id": 4, "online": 1, "ip": "10.1.1.4"},
  "proxmox03": { "id": 3, "online": 1, "ip": "10.1.1.3"}
  }
}

root@proxmox02:~# cat /etc/pve/.members 
{
"nodename": "proxmox02",
"version": 21,
"cluster": { "name": "redacted01", "version": 4, "nodes": 4, "quorate": 1 },
"nodelist": {
  "proxmox02": { "id": 2, "online": 1, "ip": "10.1.1.2"},
  "proxmox01": { "id": 1, "online": 1, "ip": "10.1.1.1"},
  "proxmox04": { "id": 4, "online": 1, "ip": "10.1.1.4"},
  "proxmox03": { "id": 3, "online": 1, "ip": "10.1.1.3"}
  }
}

root@proxmox03:~# cat /etc/pve/.members 
{
"nodename": "proxmox03",
"version": 24,
"cluster": { "name": "redacted01", "version": 3, "nodes": 3, "quorate": 1 },
"nodelist": {
  "proxmox02": { "id": 2, "online": 1, "ip": "10.1.1.2"},
  "proxmox03": { "id": 3, "online": 1, "ip": "10.1.1.3"},
  "proxmox01": { "id": 1, "online": 1, "ip": "10.1.1.1"}
  }
}

root@proxmox04:~# cat /etc/pve/.members 
{
"nodename": "proxmox04",
"version": 6,
"cluster": { "name": "redacted01", "version": 4, "nodes": 4, "quorate": 1 },
"nodelist": {
  "proxmox02": { "id": 2, "online": 1, "ip": "10.1.1.2"},
  "proxmox01": { "id": 1, "online": 1, "ip": "10.1.1.1"},
  "proxmox04": { "id": 4, "online": 1, "ip": "10.1.1.4"},
  "proxmox03": { "id": 3, "online": 1, "ip": "10.1.1.3"}
  }
}

root@proxmox01:~# cat /etc/pve/corosync.conf 
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
name: proxmox02
nodeid: 2
quorum_votes: 1
ring0_addr: proxmox02
  }

  node {
name: proxmox04
nodeid: 4
quorum_votes: 1
ring0_addr: proxmox04
  }

  node {
name: proxmox01
nodeid: 1
quorum_votes: 1
ring0_addr: proxmox01
  }

  node {
name: proxmox03
nodeid: 3
quorum_votes: 1
ring0_addr: proxmox03
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: redacted01
  config_version: 4
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
bindnetaddr: 10.1.1.1
ringnumber: 0
  }

}

root@proxmox02:~# cat /etc/pve/corosync.conf 
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
name: proxmox02
nodeid: 2
quorum_votes: 1
ring0_addr: proxmox02
  }

  node {
name: proxmox04
nodeid: 4
quorum_votes: 1
ring0_addr: proxmox04
  }

  node {
name: pro

[PVE-User] Missing node in ha-manager

2017-05-01 Thread Mark Schouten
Hi,

I recently added a new node to a cluster which is also running with HA. The 
fourth node seems to be working fine, but one of the other nodes is confused. 
pvecm nodes shows the full hostname for the new node, and the short one for the 
existing nodes, probably as a result of an imperfect /etc/hosts file. I 
corrected the hosts file on all nodes, but pvecm nodes still shows the 
incorrect output.
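
For reference, the hosts entries are now identical on all four nodes, along
these lines (addresses as in our cluster; add the FQDN as an extra alias if
you use one):

10.1.1.1    proxmox01
10.1.1.2    proxmox02
10.1.1.3    proxmox03
10.1.1.4    proxmox04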

Also, in HA, the new node does not exist on the misbehaving node.  In the logs 
I see:
May 01 09:38:36 proxmox03 pve-ha-crm[2777]: node 'proxmox04': state changed 
from 'unknown' => 'gone'
May 01 09:38:36 proxmox03 pve-ha-crm[2777]: crm command error - node not 
online: migrate vm:222 proxmox04

Which is a result of 
http://pve-devel.pve.proxmox.narkive.com/Eafo8CAz/patch-pve-ha-manager-handle-node-deletion-in-the-ha-stack
. I understand why this is done, but I would like to fix this without 
rebooting the misbehaving node. Can I restart pve-ha-crm to make things right 
again? /etc/pve/.members on the misbehaving node does not mention the new node 
at all…

Please advise.

— 
Mark Schouten
Tuxis Internet Engineering <m...@tuxis.nl>
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox storage summary

2017-04-25 Thread Mark Schouten
On Tue, 2017-04-25 at 12:04 +0300, Sten Aus wrote:
> But the problem is that Proxmox GUI itself does not show the correct 
> amount of the storage and also when I select storage pool from the 
> server view I see that Usage is N/A. See attachment.

Does your user have all the permissions in Ceph?

> Have I forgotten something that needs to be done in order to get
> Proxmox 
> web GUI to show Ceph storage amount?
> Do I need to configure ceph keyring to /etc/ceph/ as well?

Yes:

root@proxmox04:~# ls -al /etc/ceph/ceph.conf 
lrwxrwxrwx 1 root root 18 Apr 25 11:21 /etc/ceph/ceph.conf ->
/etc/pve/ceph.conf
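
And the keyring does not go to /etc/ceph/ but under /etc/pve/priv/. A
minimal sketch, assuming the RBD storage is defined in /etc/pve/storage.cfg
with the ID 'ceph01' (the ID and the source path of the keyring below are
just examples -- the file must be named after your actual storage ID):

# PVE looks for the keyring of an RBD storage entry at
# /etc/pve/priv/ceph/<storage ID>.keyring
mkdir -p /etc/pve/priv/ceph
cp /root/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph01.keyring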


-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076  | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

