Re: [pve-devel] pvedaemon hanging because of qga retry

2018-05-17 Thread Dietmar Maurer
If you simply skip commands like 'guest-fsfreeze-thaw',
your VM will become totally unusable (frozen). So I am not
sure what you are suggesting?

A correct fix would be to implement an async command queue inside qemu...


> On May 18, 2018 at 7:13 AM Alexandre DERUMIER  wrote:
> 
> 
> Seems to have been introduced a long time ago, in 2012:
> 
> https://git.proxmox.com/?p=qemu-server.git;a=blobdiff;f=PVE/QMPClient.pm;h=9829986ae77e82d340974e4d4128741ef85b4a0e;hp=d026f4d4c3012203d96660a311b1890e84e6aa18;hb=6d04217600f2145ee80d5d62231b8ade34f2e5ff;hpb=037a97463447b06ebf79a7f1d40c596d9955acee
> 
> Previously, the connect timeout was 1s.
> 
> I think we didn't have qga support at that time. Not sure why it has been
> increased for qmp commands?
> 
> 
> (With 1s, it works fine if the qga agent is down.)
> 
> 
> 
> ----- Original Mail -----
> From: "aderumier" 
> To: "pve-devel" 
> Sent: Friday, 18 May 2018 00:37:30
> Subject: Re: [pve-devel] pvedaemon hanging because of qga retry
> 
> In QMPClient, open_connection() does:
> 
> for (;;) {
>     $count++;
>     $fh = IO::Socket::UNIX->new(Peer => $sname, Blocking => 0, Timeout => 1);
>     last if $fh;
>     if ($! != EINTR && $! != EAGAIN) {
>         die "unable to connect to VM $vmid $sotype socket - $!\n";
>     }
>     my $elapsed = tv_interval($starttime, [gettimeofday]);
>     if ($elapsed >= $timeout) {
>         die "unable to connect to VM $vmid $sotype socket - timeout after $count retries\n";
>     }
>     usleep(10);
> }
> 
> 
> We use $elapsed >= $timeout.
> 
> Isn't this timeout meant for the command execution time rather than the connect time?
> 
> I see at the end:
> $self->{mux}->set_timeout($fh, $timeout);
> 
> That seems to be the command execution timeout in the muxer.
> 
> 
> 
> 
> 
> ----- Original Mail -----
> From: "Alexandre Derumier" 
> To: "pve-devel" 
> Sent: Thursday, 17 May 2018 23:16:36
> Subject: [pve-devel] pvedaemon hanging because of qga retry
> 
> Hi, 
> I had some strange behaviour today,
> 
> with a VM running + qga enabled, but the qga service down in the VM.
> 
> After these attempts,
> 
> May 17 21:54:01 kvm14 pvedaemon[20088]: VM 745 qmp command failed - VM 745 qmp
> command 'guest-fsfreeze-thaw' failed - unable to connect to VM 745 qga socket
> - timeout after 101 retries 
> May 17 21:55:10 kvm14 pvedaemon[20088]: VM 745 qmp command failed - VM 745 qmp
> command 'guest-fsfreeze-thaw' failed - unable to connect to VM 745 qga socket
> - timeout after 101 retries 
> 
> 
> some API requests returned 596 errors, mainly for VM 745
> (/api2/json/nodes/kvm14/qemu/745/status/current),
> but also for the node kvm14 on /api2/json/nodes/kvm14/qemu.
> 
> 
> Restarting pvedaemon fixed the problem:
> 
> [... long access-log excerpt with 596 responses snipped here; the full listing is in the original mail further down ...]

Re: [pve-devel] pvedaemon hanging because of qga retry

2018-05-17 Thread Dietmar Maurer
> We use $elapsed >= $timeout.
> 
> Isn't this timeout meant for the command execution time rather than the connect time?
> 
> I see at the end:
> $self->{mux}->set_timeout($fh, $timeout);
> 
> That seems to be the command execution timeout in the muxer.
> 

I guess both should be shorter than $timeout?
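
A minimal sketch of how the two timeouts could be separated (purely illustrative,
based on the open_connection() code quoted earlier in this thread; the 1s connect
cap and the back-off value are assumptions, not an agreed fix):

    # cap the connect phase separately from the overall command timeout
    my $connect_timeout = 1;                      # assumption: 1s, as in the pre-2012 code
    $connect_timeout = $timeout if $timeout < $connect_timeout;

    for (;;) {
        $count++;
        $fh = IO::Socket::UNIX->new(Peer => $sname, Blocking => 0, Timeout => 1);
        last if $fh;
        die "unable to connect to VM $vmid $sotype socket - $!\n"
            if $! != EINTR && $! != EAGAIN;
        my $elapsed = tv_interval($starttime, [gettimeofday]);
        die "unable to connect to VM $vmid $sotype socket - timeout after $count retries\n"
            if $elapsed >= $connect_timeout;      # only the connect phase uses the short cap
        usleep(10);                               # retry back-off, as in the current code
    }

    # the full $timeout still applies to command execution in the multiplexer
    $self->{mux}->set_timeout($fh, $timeout);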

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] pvedaemon hanging because of qga retry

2018-05-17 Thread Alexandre DERUMIER
Seems to have been introduced a long time ago, in 2012:

https://git.proxmox.com/?p=qemu-server.git;a=blobdiff;f=PVE/QMPClient.pm;h=9829986ae77e82d340974e4d4128741ef85b4a0e;hp=d026f4d4c3012203d96660a311b1890e84e6aa18;hb=6d04217600f2145ee80d5d62231b8ade34f2e5ff;hpb=037a97463447b06ebf79a7f1d40c596d9955acee

Previously, the connect timeout was 1s.

I think we didn't have qga support at that time. Not sure why it has been
increased for qmp commands?


(With 1s, it works fine if the qga agent is down.)



----- Original Mail -----
From: "aderumier" 
To: "pve-devel" 
Sent: Friday, 18 May 2018 00:37:30
Subject: Re: [pve-devel] pvedaemon hanging because of qga retry

In QMPClient, open_connection() does:

for (;;) {
    $count++;
    $fh = IO::Socket::UNIX->new(Peer => $sname, Blocking => 0, Timeout => 1);
    last if $fh;
    if ($! != EINTR && $! != EAGAIN) {
        die "unable to connect to VM $vmid $sotype socket - $!\n";
    }
    my $elapsed = tv_interval($starttime, [gettimeofday]);
    if ($elapsed >= $timeout) {
        die "unable to connect to VM $vmid $sotype socket - timeout after $count retries\n";
    }
    usleep(10);
}


We use $elapsed >= $timeout.

Isn't this timeout meant for the command execution time rather than the connect time?

I see at the end:
$self->{mux}->set_timeout($fh, $timeout);

That seems to be the command execution timeout in the muxer.





----- Original Mail -----
From: "Alexandre Derumier" 
To: "pve-devel" 
Sent: Thursday, 17 May 2018 23:16:36
Subject: [pve-devel] pvedaemon hanging because of qga retry

Hi, 
I had some strange behaviour today,

with a VM running + qga enabled, but the qga service down in the VM.

After these attempts,

May 17 21:54:01 kvm14 pvedaemon[20088]: VM 745 qmp command failed - VM 745 qmp 
command 'guest-fsfreeze-thaw' failed - unable to connect to VM 745 qga socket - 
timeout after 101 retries 
May 17 21:55:10 kvm14 pvedaemon[20088]: VM 745 qmp command failed - VM 745 qmp 
command 'guest-fsfreeze-thaw' failed - unable to connect to VM 745 qga socket - 
timeout after 101 retries 


some API requests returned 596 errors, mainly for VM 745
(/api2/json/nodes/kvm14/qemu/745/status/current),
but also for the node kvm14 on /api2/json/nodes/kvm14/qemu.


Restarting pvedaemon fixed the problem:

[... long access-log excerpt with 596 responses snipped here; the full listing is in the original mail further down ...]

Re: [pve-devel] pvedaemon hanging because of qga retry

2018-05-17 Thread Alexandre DERUMIER
In QMPClient, open_connection() does:

   for (;;) {
    $count++;
    $fh = IO::Socket::UNIX->new(Peer => $sname, Blocking => 0, Timeout => 1);
    last if $fh;
    if ($! != EINTR && $! != EAGAIN) {
        die "unable to connect to VM $vmid $sotype socket - $!\n";
    }
    my $elapsed = tv_interval($starttime, [gettimeofday]);
    if ($elapsed >= $timeout) {
        die "unable to connect to VM $vmid $sotype socket - timeout after $count retries\n";
    }
    usleep(10);
    }


We use $elapsed >= $timeout.

Isn't this timeout meant for the command execution time rather than the connect time?

I see at the end:
$self->{mux}->set_timeout($fh, $timeout);

That seems to be the command execution timeout in the muxer.





----- Original Mail -----
From: "Alexandre Derumier" 
To: "pve-devel" 
Sent: Thursday, 17 May 2018 23:16:36
Subject: [pve-devel] pvedaemon hanging because of qga retry

Hi, 
I had some strange behaviour today,

with a VM running + qga enabled, but the qga service down in the VM.

After these attempts,

May 17 21:54:01 kvm14 pvedaemon[20088]: VM 745 qmp command failed - VM 745 qmp 
command 'guest-fsfreeze-thaw' failed - unable to connect to VM 745 qga socket - 
timeout after 101 retries 
May 17 21:55:10 kvm14 pvedaemon[20088]: VM 745 qmp command failed - VM 745 qmp 
command 'guest-fsfreeze-thaw' failed - unable to connect to VM 745 qga socket - 
timeout after 101 retries 


some API requests returned 596 errors, mainly for VM 745
(/api2/json/nodes/kvm14/qemu/745/status/current),
but also for the node kvm14 on /api2/json/nodes/kvm14/qemu.


Restarting pvedaemon fixed the problem:

[... long access-log excerpt with 596 responses snipped here; the full listing is in the original mail below ...]

[pve-devel] pvedaemon hanging because of qga retry

2018-05-17 Thread Alexandre DERUMIER
Hi,
I had some strange behaviour today,

with a VM running + qga enabled, but the qga service down in the VM.

After these attempts,

May 17 21:54:01 kvm14 pvedaemon[20088]: VM 745 qmp command failed - VM 745 qmp 
command 'guest-fsfreeze-thaw' failed - unable to connect to VM 745 qga socket - 
timeout after 101 retries
May 17 21:55:10 kvm14 pvedaemon[20088]: VM 745 qmp command failed - VM 745 qmp 
command 'guest-fsfreeze-thaw' failed - unable to connect to VM 745 qga socket - 
timeout after 101 retries


some API requests returned 596 errors, mainly for VM 745
(/api2/json/nodes/kvm14/qemu/745/status/current),
but also for the node kvm14 on /api2/json/nodes/kvm14/qemu.


Restarting pvedaemon fixed the problem:

10.59.100.141 - root@pam [17/05/2018:21:53:51 +0200] "POST 
/api2/json/nodes/kvm14/qemu/745/agent/fsfreeze-freeze HTTP/1.1" 596 -
10.59.100.141 - root@pam [17/05/2018:21:55:00 +0200] "POST 
/api2/json/nodes/kvm14/qemu/745/agent/fsfreeze-freeze HTTP/1.1" 596 -
10.59.100.141 - root@pam [17/05/2018:22:01:28 +0200] "POST 
/api2/json/nodes/kvm14/qemu/745/agent/fsfreeze-freeze HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:01:30 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.59.100.141 - root@pam [17/05/2018:22:02:21 +0200] "GET 
/api2/json/nodes/kvm14/qemu HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:03:05 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.59.100.141 - root@pam [17/05/2018:22:03:32 +0200] "GET 
/api2/json/nodes/kvm14/qemu HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:04:40 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.59.100.141 - root@pam [17/05/2018:22:05:01 +0200] "GET 
/api2/json/nodes/kvm14/qemu HTTP/1.1" 596 -
10.59.100.141 - root@pam [17/05/2018:22:05:59 +0200] "GET 
/api2/json/nodes/kvm14/qemu HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:06:15 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:07:50 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:09:25 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:11:00 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:12:35 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.59.100.141 - root@pam [17/05/2018:22:14:19 +0200] "GET 
/api2/json/nodes/kvm14/qemu HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:15:44 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:17:19 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:18:54 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:20:29 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:22:04 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:23:39 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:25:14 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:26:49 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:28:24 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:29:59 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:31:34 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:34:44 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.18 - root@pam [17/05/2018:22:35:30 +0200] "GET 
/api2/json/nodes/kvm14/qemu/733/status/current HTTP/1.1" 596 -
10.59.100.141 - root@pam [17/05/2018:22:37:16 +0200] "GET 
/api2/json/nodes/kvm14/qemu HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:37:24 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:38:59 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -
10.3.99.10 - root@pam [17/05/2018:22:40:08 +0200] "GET 
/api2/json/nodes/kvm14/qemu/745/status/current HTTP/1.1" 596 -



I don't see error logs for fsfreeze (called directly through the API),

but

    } elsif ($cmd->{execute} eq 'guest-fsfreeze-freeze') {
        # freeze syncs all guest FS, if we kill it it stays in an unfreezable
        # locked state with high probability, so use a generous timeout
        $timeout = 60*60; # 1 hour


so was it still running in pvedaemon?

same with
# qm agent 745 
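
One possible mitigation (a rough sketch only; $qga_cmd is a hypothetical helper,
and this is not the async queue fix suggested above) would be to probe the agent
with a cheap command and a short timeout before issuing the freeze:

    # hypothetical guard before guest-fsfreeze-freeze ($qga_cmd is a made-up helper)
    eval { $qga_cmd->($vmid, 'guest-ping', 3) };   # short 3s timeout
    die "guest agent not responding - skipping fsfreeze\n" if $@;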

[pve-devel] Little Bug in GUI for iSCSI Storage

2018-05-17 Thread Bastian Sebode
Hey Guys,

when I disable an iSCSI storage, it gets disabled in /etc/pve/storage.cfg
by adding a "disable" line. The GUI shows it correctly, but you can't
reactivate it, because when you edit it, the dialog thinks it is enabled.

When you enable it by removing the line in storage.cfg, everything is
fine again. I guess it's just a wrong if statement somewhere.

Peace
Bastian

-- 
Bastian Sebode
Fachinformatiker Systemintegration

LINET Services GmbH | Cyriaksring 10a | 38118 Braunschweig
Tel. 0531-180508-0 | Fax 0531-180508-29 | http://www.linet-services.de

LINET in den sozialen Netzwerken:
www.twitter.com/linetservices | www.facebook.com/linetservices
Wissenswertes aus der IT-Welt: www.linet-services.de/blog/

Geschäftsführung: Timo Springmann, Mirko Savic und Moritz Bunkus
HR B 9170 Amtsgericht Braunschweig

USt-IdNr. DE 259 526 516
___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH firewall] introduce ebtables_enable option to cluster config

2018-05-17 Thread Stoiko Ivanov
Minimally fixes #1764 by introducing ebtables_enable as an option in cluster.fw.


Signed-off-by: Stoiko Ivanov 
---
Note: a better option would be to just not overwrite any output of ebtables-save
that does not contain PVE-specific interface names or 'PVE'; however, this patch
should at least fix the problem described in #1764.

 src/PVE/Firewall.pm | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 96cf9bd..4bd1f89 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -2667,6 +2667,9 @@ sub parse_clusterfw_option {
if (($value > 1) && ((time() - $value) > 60)) {
$value = 0
}
+} elsif ($line =~ m/^(ebtables_enable):\s*(0|1)\s*$/i) {
+   $opt = lc($1);
+   $value = int($2);
 } elsif ($line =~ m/^(policy_(in|out)):\s*(ACCEPT|DROP|REJECT)\s*$/i) {
$opt = lc($1);
$value = uc($3);
@@ -3422,7 +3425,7 @@ sub compile {
$vmfw_configs = read_vm_firewall_configs($cluster_conf, $vmdata, undef, 
$verbose);
 }
 
-return ({},{},{}) if !$cluster_conf->{options}->{enable};
+return ({},{},{},{}) if !$cluster_conf->{options}->{enable};
 
 my $localnet;
 if ($cluster_conf->{aliases}->{local_network}) {
@@ -3441,7 +3444,6 @@ sub compile {
 my $rulesetv6 = compile_iptables_filter($cluster_conf, $hostfw_conf, 
$vmfw_configs, $vmdata, 6, $verbose);
 my $ebtables_ruleset = compile_ebtables_filter($cluster_conf, 
$hostfw_conf, $vmfw_configs, $vmdata, $verbose);
 my $ipset_ruleset = compile_ipsets($cluster_conf, $vmfw_configs, $vmdata);
-
 return ($ruleset, $ipset_ruleset, $rulesetv6, $ebtables_ruleset);
 }
 
@@ -3657,13 +3659,14 @@ sub compile_ipsets {
 sub compile_ebtables_filter {
 my ($cluster_conf, $hostfw_conf, $vmfw_configs, $vmdata, $verbose) = @_;
 
-return ({}, {}) if !$cluster_conf->{options}->{enable};
+if (!($cluster_conf->{options}->{ebtables_enable} // 1)) {
+   return {};
+}
 
 my $ruleset = {};
 
 ruleset_create_chain($ruleset, "PVEFW-FORWARD");
 
-
 ruleset_create_chain($ruleset, "PVEFW-FWBR-OUT");
 #for ipv4 and ipv6, check macaddress in iptables, so we use conntrack 
'ESTABLISHED', to speedup rules
 ruleset_addrule($ruleset, 'PVEFW-FORWARD', '-p IPv4', '-j ACCEPT');
@@ -3852,6 +3855,7 @@ sub get_ruleset_cmdlist {
 sub get_ebtables_cmdlist {
 my ($ruleset, $verbose) = @_;
 
+return (wantarray ? ('', 0) : '') if ! keys (%$ruleset);
 my $changes = 0;
 my $cmdlist = "*filter\n";
 
@@ -3995,7 +3999,7 @@ sub apply_ruleset {
 
 ipset_restore_cmdlist($ipset_delete_cmdlist) if $ipset_delete_cmdlist;
 
-ebtables_restore_cmdlist($ebtables_cmdlist);
+ebtables_restore_cmdlist($ebtables_cmdlist) if $ebtables_cmdlist;
 
 $tmpfile = "$pve_fw_status_dir/ebtablescmdlist";
 PVE::Tools::file_set_contents($tmpfile, $ebtables_cmdlist || '');
-- 
2.11.0
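
For reference, with this patch a cluster-wide firewall configuration could then
disable the ebtables rule generation like this (a sketch only; the option name
and the 0/1 values follow the regex added above, and the default stays enabled):

    # /etc/pve/firewall/cluster.fw
    [OPTIONS]

    enable: 1
    ebtables_enable: 0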


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 storage] Cephfs storage plugin

2018-05-17 Thread Fabian Grünbichler
some general remarks from a quick run-through:
- backup does not seem to work, probably requires
  qemu-server/pve-container changes? [1]
- parts of this could be merged with RBDPlugin code, either by extending
  it, or by refactoring to use a common helper module
- ceph-fuse and the kernel have a very different view of the same cephfs
  storage[2]

some comments inline

1: "TASK ERROR: could not get storage information for 'cephfs_ext':
can't use storage type 'cephfs' for backup"

2: $ pvesm status | grep cephfs
cephfs_ext        cephfs   active   402030592   49885184   352145408   12.41%
cephfs_ext_fuse   cephfs   active   108322816     831488   107491328    0.77%

On Thu, May 17, 2018 at 11:17:05AM +0200, Alwin Antreich wrote:
>  - ability to mount through kernel and fuse client
>  - allow mount options
>  - get MONs from ceph config if not in storage.cfg
>  - allow the use of ceph config with fuse client
> 
> Signed-off-by: Alwin Antreich 
> ---
>  PVE/API2/Storage/Config.pm  |   2 +-
>  PVE/Storage.pm  |   2 +
>  PVE/Storage/CephFSPlugin.pm | 262 
> 
>  PVE/Storage/Makefile|   2 +-
>  PVE/Storage/Plugin.pm   |   1 +
>  debian/control  |   2 +
>  6 files changed, 269 insertions(+), 2 deletions(-)
>  create mode 100644 PVE/Storage/CephFSPlugin.pm
> 
> diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
> index 3b38304..368a5c9 100755
> --- a/PVE/API2/Storage/Config.pm
> +++ b/PVE/API2/Storage/Config.pm
> @@ -171,7 +171,7 @@ __PACKAGE__->register_method ({
>   PVE::Storage::activate_storage($cfg, $baseid);
>  
>   PVE::Storage::LVMPlugin::lvm_create_volume_group($path, 
> $opts->{vgname}, $opts->{shared});
> - } elsif ($type eq 'rbd' && !defined($opts->{monhost})) {
> + } elsif (($type eq 'rbd' || $type eq 'cephfs') && 
> !defined($opts->{monhost})) {
>   my $ceph_admin_keyring = 
> '/etc/pve/priv/ceph.client.admin.keyring';
>   my $ceph_storage_keyring = 
> "/etc/pve/priv/ceph/${storeid}.keyring";

This does not work - we need to put the key/secret there, not the whole keyring.
Maybe we can also name the files accordingly (.secret instead of .keyring) to
make it more obvious.

For external clusters, this needs to go into the documentation as well.
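
A rough sketch of what storing only the secret could look like (the file name and
helper usage here are assumptions, not the final implementation):

    # extract the base64 key from the admin keyring and store only that
    my $keyring = PVE::Tools::file_get_contents($ceph_admin_keyring);
    my ($secret) = $keyring =~ m/^\s*key\s*=\s*(\S+)\s*$/m;
    die "unable to parse ceph keyring '$ceph_admin_keyring'\n" if !defined($secret);
    PVE::Tools::file_set_contents("/etc/pve/priv/ceph/${storeid}.secret", "$secret\n");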

>  
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index d733380..f9732fe 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -28,6 +28,7 @@ use PVE::Storage::NFSPlugin;
>  use PVE::Storage::CIFSPlugin;
>  use PVE::Storage::ISCSIPlugin;
>  use PVE::Storage::RBDPlugin;
> +use PVE::Storage::CephFSPlugin;
>  use PVE::Storage::SheepdogPlugin;
>  use PVE::Storage::ISCSIDirectPlugin;
>  use PVE::Storage::GlusterfsPlugin;
> @@ -46,6 +47,7 @@ PVE::Storage::NFSPlugin->register();
>  PVE::Storage::CIFSPlugin->register();
>  PVE::Storage::ISCSIPlugin->register();
>  PVE::Storage::RBDPlugin->register();
> +PVE::Storage::CephFSPlugin->register();
>  PVE::Storage::SheepdogPlugin->register();
>  PVE::Storage::ISCSIDirectPlugin->register();
>  PVE::Storage::GlusterfsPlugin->register();
> diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
> new file mode 100644
> index 000..a368c5b
> --- /dev/null
> +++ b/PVE/Storage/CephFSPlugin.pm
> @@ -0,0 +1,262 @@
> +package PVE::Storage::CephFSPlugin;
> +
> +use strict;
> +use warnings;
> +use IO::File;
> +use Net::IP;
> +use File::Path;
> +use PVE::Tools qw(run_command);
> +use PVE::ProcFSTools;
> +use PVE::Storage::Plugin;
> +use PVE::JSONSchema qw(get_standard_option);
> +
> +use base qw(PVE::Storage::Plugin);
> +
> +my $hostlist = sub {
> +my ($list_text, $separator) = @_;
> +
> +my @monhostlist = PVE::Tools::split_list($list_text);
> +return join($separator, map {
> + my ($host, $port) = PVE::Tools::parse_host_and_port($_);
> + $port = defined($port) ? ":$port" : '';
> + $host = "[$host]" if Net::IP::ip_is_ipv6($host);
> + "${host}${port}"
> +} @monhostlist);
> +};

Probably a candidate for merging with RBDPlugin.pm (or a common helper module, as
mentioned in the general remarks above).
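
For illustration, such a shared helper could look roughly like this (the module
name PVE::Storage::CephTools is hypothetical; the body is just the sub quoted
above, moved out of the plugin):

    package PVE::Storage::CephTools;   # hypothetical shared helper module

    use strict;
    use warnings;
    use Net::IP;
    use PVE::Tools;

    # build a monitor host string usable by both RBDPlugin and CephFSPlugin
    sub hostlist {
        my ($list_text, $separator) = @_;

        my @monhostlist = PVE::Tools::split_list($list_text);
        return join($separator, map {
            my ($host, $port) = PVE::Tools::parse_host_and_port($_);
            $port = defined($port) ? ":$port" : '';
            $host = "[$host]" if Net::IP::ip_is_ipv6($host);
            "${host}${port}"
        } @monhostlist);
    }

    1;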

> +
> +my $parse_ceph_config = sub {
> +my ($filename) = @_;
> +
> +my $cfg = {};
> +
> +return $cfg if ! -f $filename;
> +
> +my $fh = IO::File->new($filename, "r") ||
> + die "unable to open '$filename' - $!\n";
> +
> +my $section;
> +
> +while (defined(my $line = <$fh>)) {
> + $line =~ s/[;#].*$//;
> + $line =~ s/^\s+//;
> + $line =~ s/\s+$//;
> + next if !$line;
> +
> + $section = $1 if $line =~ m/^\[(\S+)\]$/;
> + if (!$section) {
> + warn "no section - skip: $line\n";
> + next;
> + }
> +
> + if ($line =~ m/^(.*?\S)\s*=\s*(\S.*)$/) {
> + $cfg->{$section}->{$1} = $2;
> + }
> +
> +}
> +
> +return $cfg;
> +};

this is only used once, see below

> +
> +my $get_monaddr_list = sub {
> +my ($scfg, $configfile) = @_;
> 

[pve-devel] [PATCH V2 ifupdown2 0/2] ifupdown2 package

2018-05-17 Thread Alexandre Derumier
Changelog v2:
 - use submodule for ifupdown2 src
 - split proxmox/extra patches
 - add description in 0004-add-dummy-mtu-bridgevlanport-modules.patch
 - add a note in this cover letter about systemd-networkd and ipv6 madness

Hi,

These last months I have been working on a vxlan implementation (I'll send info in
the coming weeks).

I have worked with classic ifupdown, but it's not super clean to implement
when we have complex configurations.

ifupdown2 has been well maintained by Cumulus since 2014 and supports all
features of recent kernels
(vxlan (unicast, multicast, frr, arp suppression, vrf, vlan-aware bridge,
 vlan attributes on interfaces, ...))
and is compatible with the classic ifupdown syntax.


This package is based on the Cumulus branch
https://github.com/CumulusNetworks/ifupdown2/tree/cl3u18
as the master/debian branch is old and doesn't have all features
(Cumulus is planning to rebase it in the coming months).

For now, it could be great to simply propose ifupdown2 as an alternative to
Proxmox users,
and maybe in 1 or 2 years, if it's working great, make it the default for Proxmox 6?

Some advantages vs classic ifupdown:

 -we can reload the configuration! (ifreload -a, or systemctl reload networking).
 ifupdown2 maintains a dependency graph between interfaces.

 (Note that as we don't define tap/veth interfaces in /etc/network/interfaces,
 they are not re-bridged if you do ifdown/ifup vmbr0,
  but they are not removed on ifreload vmbr0.)

 -we can define ipv4/ipv6 on the same interface
  (no more need for iface inet6 static, iface inet static, or iface inet manual,
but the old iface inet syntax is still supported):

  auto eth0
  iface eth0
address 192.168.0.1
address 2001:db8::1:1/64
address 2001:db8::2:2/64

 or multiple IPs on loopback:

auto lo
iface lo inet loopback
address 10.3.3.3/32
address 10:3:3::3/128
 -classic pre-up scripts still work (if users have custom configs)

 -for OVS I just needed a small workaround in the OVS ifupdown script
(see my OVS patch),
  and a small config change (replace allow-ovs by auto).
  Currently, I don't do this in the ifupdown2 post-install script.

 -templating support: example: creating vxlan interfaces from vxlan30->vxlan100

   auto all
   %for v in range(30,100):

   auto vxlan${v}
   iface vxlan${v}
vxlan-id ${v}
vxlan-local-tunnelip 10.59.100.231
bridge-learning off
bridge-arp-nd-suppress on
bridge-unicast-flood off
bridge-multicast-flood off
bridge-access ${v}
%endfor

some documentation here:
 
https://support.cumulusnetworks.com/hc/en-us/articles/202933638-Comparing-ifupdown2-Commands-with-ifupdown-Commands


About systemd-networkd:
 - Currently it can't reload configuration
   https://github.com/systemd/systemd/issues/6654
 - unicast vxlan it not supported
   https://github.com/systemd/systemd/issues/5145
 - I don't think we want to have to maintain a systemd package if we need to extend it
 - new features seem to take years to arrive
 - IPv6: systemd-networkd reimplements kernel features (IPv6 RA, ...) with tons
of bugs (some not yet fixed)
 http://ipv6-net.blogspot.fr/2016/11/ipv6-systemd-another-look.html
 http://ipv6-net.blogspot.fr/2016/04/systemd-oh-you-wanted-to-run-ipv6.html
 https://github.com/systemd/systemd/issues/8906


Alexandre Derumier (2):
  add debian dir
  add ifupdown2 submodule

 .gitmodules|   3 +
 debian/changelog   | 174 +
 debian/compat  |   1 +
 debian/control |  31 
 debian/copyright   |  28 
 ...0001-start-networking-add-usr-bin-in-PATH.patch |  28 
 ...ns-scripts-fix-ENV-for-interfaces-options.patch |  29 
 ...3-netlink-IFLA_BRPORT_ARP_SUPPRESS-use-32.patch |  31 
 .../extra/0004-add-vxlan-physdev-support.patch | 159 +++
 debian/patches/pve/0001-config-tuning.patch|  52 ++
 .../pve/0002-manual-interfaces-set-link-up.patch   |  58 +++
 ...e-tap-veth-fwpr-interfaces-from-bridge-on.patch |  27 
 ...0004-add-dummy-mtu-bridgevlanport-modules.patch |  74 +
 debian/patches/series  |   8 +
 debian/rules   |  21 +++
 ifupdown2  |   1 +
 16 files changed, 725 insertions(+)
 create mode 100644 .gitmodules
 create mode 100644 debian/changelog
 create mode 100644 debian/compat
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100644 
debian/patches/extra/0001-start-networking-add-usr-bin-in-PATH.patch
 create mode 100644 
debian/patches/extra/0002-addons-scripts-fix-ENV-for-interfaces-options.patch
 create mode 100644 
debian/patches/extra/0003-netlink-IFLA_BRPORT_ARP_SUPPRESS-use-32.patch
 create mode 100644 debian/patches/extra/0004-add-vxlan-physdev-support.patch
 create mode 

[pve-devel] [PATCH V2 ifupdown2 1/2] add debian dir

2018-05-17 Thread Alexandre Derumier
---
 debian/changelog   | 174 +
 debian/compat  |   1 +
 debian/control |  31 
 debian/copyright   |  28 
 ...0001-start-networking-add-usr-bin-in-PATH.patch |  28 
 ...ns-scripts-fix-ENV-for-interfaces-options.patch |  29 
 ...3-netlink-IFLA_BRPORT_ARP_SUPPRESS-use-32.patch |  31 
 .../extra/0004-add-vxlan-physdev-support.patch | 159 +++
 debian/patches/pve/0001-config-tuning.patch|  52 ++
 .../pve/0002-manual-interfaces-set-link-up.patch   |  58 +++
 ...e-tap-veth-fwpr-interfaces-from-bridge-on.patch |  27 
 ...0004-add-dummy-mtu-bridgevlanport-modules.patch |  74 +
 debian/patches/series  |   8 +
 debian/rules   |  21 +++
 14 files changed, 721 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/compat
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100644 
debian/patches/extra/0001-start-networking-add-usr-bin-in-PATH.patch
 create mode 100644 
debian/patches/extra/0002-addons-scripts-fix-ENV-for-interfaces-options.patch
 create mode 100644 
debian/patches/extra/0003-netlink-IFLA_BRPORT_ARP_SUPPRESS-use-32.patch
 create mode 100644 debian/patches/extra/0004-add-vxlan-physdev-support.patch
 create mode 100644 debian/patches/pve/0001-config-tuning.patch
 create mode 100644 debian/patches/pve/0002-manual-interfaces-set-link-up.patch
 create mode 100644 
debian/patches/pve/0003-don-t-remove-tap-veth-fwpr-interfaces-from-bridge-on.patch
 create mode 100644 
debian/patches/pve/0004-add-dummy-mtu-bridgevlanport-modules.patch
 create mode 100644 debian/patches/series
 create mode 100755 debian/rules

diff --git a/debian/changelog b/debian/changelog
new file mode 100644
index 000..9609ca6
--- /dev/null
+++ b/debian/changelog
@@ -0,0 +1,174 @@
+ifupdown2 (1.1-cl3u18) RELEASED; urgency=medium
+
+  * Closes: CM-20069: Link down does not work on SVI configured in a VRF
+  * Closes: CM-20027: ifreload causes MTU to drop on bridge SVIs
+  * Closes: CM-20002: addons: addressvirtual: check if SVI name is first in 
routing table
+  * Closes: CM-19587: ifreload error on deleting bond slaves from an already 
configured bond
+  * Closes: CM-19882: ifupdown2 error is confusing when netmask is specified 
for vxlan-local-tunnelip
+  * Closes: CM-19760: ifupdown2 syntax check needed for vxlan interfaces
+  * Closes: CM-19081: vxlan-ageing default timer doesn't align with 
bridge-ageing
+  * Closes: CM-14031: Error with "ifreload -a -n" when MGMT VRF is not Applied
+  * Closes: CM-19075: using reserved VLAN range reports error but ifreload 
returns 0
+  * Closes: CM-18882: unable to set bridge-portmcrouter to "2"
+  * Closes: CM-19760: vxlan syntax-check warn on missing vxlan-local-tunnelip
+  * Closes: github #39: addons: vrf: fix vrf slave link kind
+  * New. Enabled: addons: vxlan: add support for vxlan-port attribute
+
+ -- dev-support   Thu, 08 Feb 2018 10:42:42 
+0100
+
+ifupdown2 (1.1-cl3u17) RELEASED; urgency=medium
+
+  * Closes: CM-19671: ip[6]-forward attributes not set at boot
+
+ -- dev-support   Thu, 08 Feb 2018 09:48:37 
+0100
+
+ifupdown2 (1.1-cl3u16) RELEASED; urgency=medium
+
+  * Closes: CM-18647, CM-19279. fix python exception on macvlans address dump
+  * Closes: CM-19332. fix eth0 doesn't acquire DHCP address when mgmt VRF is 
enabled
+
+ -- dev-support   Tue, 09 Jan 2018 02:02:58 
+0100
+
+ifupdown2 (1.1-cl3u15) RELEASED; urgency=medium
+
+  * New. Enabled: bridge: add support for bridge-l2protocol-tunnel
+  * New. Enabled: bridge attributes, when removed reset to default
+  * New. Enabled: vxlan attributes, when removed reset to default
+  * New. Enabled: improve handling of optional resources (if missing 
bridge-utils/ethtool)
+  * Closes: CM-17577 & CM-18951. fix policy "iface_defaults" not supported for 
MTU
+  * Closes: CM-18161. fix address module: handling of ipv4 & ipv6 (add/remove)
+  * Closes: CM-18262. fix warning for vlan reserved range
+  * Closes: CM-18886. fix MTU handling on bridge SVIs
+
+ -- dev-support   Wed, 22 Nov 2017 19:07:43 
+0100
+
+ifupdown2 (1.1-cl3u14) RELEASED; urgency=medium
+
+  * New. Enabled: default policy for bridge MAC address
+  * Closes: CM-18458. ethtool: don't set link speed and duplex if autoneg is on
+
+ -- dev-support   Wed, 25 Oct 2017 23:12:27 
+0200
+
+ifupdown2 (1.1-cl3u13) RELEASED; urgency=medium
+
+  * Closes: CM-17789: fix: VRF: ssh session not killed on ifreload
+
+ -- dev-support   Fri, 15 Sep 2017 22:43:12 
+0200
+
+ifupdown2 (1.1-cl3u12) RELEASED; urgency=medium
+
+  * New. Enabled: mpls-enable attribute
+  

[pve-devel] [PATCH V2 ifupdown2 2/2] add ifupdown2 submodule

2018-05-17 Thread Alexandre Derumier
---
 .gitmodules | 3 +++
 ifupdown2   | 1 +
 2 files changed, 4 insertions(+)
 create mode 100644 .gitmodules
 create mode 16 ifupdown2

diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 000..a9d2457
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "ifupdown2"]
+   path = ifupdown2
+   url = https://github.com/CumulusNetworks/ifupdown2
diff --git a/ifupdown2 b/ifupdown2
new file mode 16
index 000..57069d0
--- /dev/null
+++ b/ifupdown2
@@ -0,0 +1 @@
+Subproject commit 57069d0247c7e57580a4df968742c5f4cf72b3a9
-- 
2.11.0

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 storage] Cephfs storage plugin

2018-05-17 Thread Alwin Antreich
 - ability to mount through kernel and fuse client
 - allow mount options
 - get MONs from ceph config if not in storage.cfg
 - allow the use of ceph config with fuse client

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Storage/Config.pm  |   2 +-
 PVE/Storage.pm  |   2 +
 PVE/Storage/CephFSPlugin.pm | 262 
 PVE/Storage/Makefile|   2 +-
 PVE/Storage/Plugin.pm   |   1 +
 debian/control  |   2 +
 6 files changed, 269 insertions(+), 2 deletions(-)
 create mode 100644 PVE/Storage/CephFSPlugin.pm

diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 3b38304..368a5c9 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -171,7 +171,7 @@ __PACKAGE__->register_method ({
PVE::Storage::activate_storage($cfg, $baseid);
 
PVE::Storage::LVMPlugin::lvm_create_volume_group($path, 
$opts->{vgname}, $opts->{shared});
-   } elsif ($type eq 'rbd' && !defined($opts->{monhost})) {
+   } elsif (($type eq 'rbd' || $type eq 'cephfs') && 
!defined($opts->{monhost})) {
my $ceph_admin_keyring = 
'/etc/pve/priv/ceph.client.admin.keyring';
my $ceph_storage_keyring = 
"/etc/pve/priv/ceph/${storeid}.keyring";
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index d733380..f9732fe 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -28,6 +28,7 @@ use PVE::Storage::NFSPlugin;
 use PVE::Storage::CIFSPlugin;
 use PVE::Storage::ISCSIPlugin;
 use PVE::Storage::RBDPlugin;
+use PVE::Storage::CephFSPlugin;
 use PVE::Storage::SheepdogPlugin;
 use PVE::Storage::ISCSIDirectPlugin;
 use PVE::Storage::GlusterfsPlugin;
@@ -46,6 +47,7 @@ PVE::Storage::NFSPlugin->register();
 PVE::Storage::CIFSPlugin->register();
 PVE::Storage::ISCSIPlugin->register();
 PVE::Storage::RBDPlugin->register();
+PVE::Storage::CephFSPlugin->register();
 PVE::Storage::SheepdogPlugin->register();
 PVE::Storage::ISCSIDirectPlugin->register();
 PVE::Storage::GlusterfsPlugin->register();
diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
new file mode 100644
index 000..a368c5b
--- /dev/null
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -0,0 +1,262 @@
+package PVE::Storage::CephFSPlugin;
+
+use strict;
+use warnings;
+use IO::File;
+use Net::IP;
+use File::Path;
+use PVE::Tools qw(run_command);
+use PVE::ProcFSTools;
+use PVE::Storage::Plugin;
+use PVE::JSONSchema qw(get_standard_option);
+
+use base qw(PVE::Storage::Plugin);
+
+my $hostlist = sub {
+my ($list_text, $separator) = @_;
+
+my @monhostlist = PVE::Tools::split_list($list_text);
+return join($separator, map {
+   my ($host, $port) = PVE::Tools::parse_host_and_port($_);
+   $port = defined($port) ? ":$port" : '';
+   $host = "[$host]" if Net::IP::ip_is_ipv6($host);
+   "${host}${port}"
+} @monhostlist);
+};
+
+my $parse_ceph_config = sub {
+my ($filename) = @_;
+
+my $cfg = {};
+
+return $cfg if ! -f $filename;
+
+my $fh = IO::File->new($filename, "r") ||
+   die "unable to open '$filename' - $!\n";
+
+my $section;
+
+while (defined(my $line = <$fh>)) {
+   $line =~ s/[;#].*$//;
+   $line =~ s/^\s+//;
+   $line =~ s/\s+$//;
+   next if !$line;
+
+   $section = $1 if $line =~ m/^\[(\S+)\]$/;
+   if (!$section) {
+   warn "no section - skip: $line\n";
+   next;
+   }
+
+   if ($line =~ m/^(.*?\S)\s*=\s*(\S.*)$/) {
+   $cfg->{$section}->{$1} = $2;
+   }
+
+}
+
+return $cfg;
+};
+
+my $get_monaddr_list = sub {
+my ($scfg, $configfile) = @_;
+
+my $server;
+my $no_mon = !defined($scfg->{monhost});
+
+if (($no_mon) && defined($configfile)) {
+   my $config = $parse_ceph_config->($configfile);
+   $server = join(',', sort { $a cmp $b }
+   map { $config->{$_}->{'mon addr'} } grep {/mon/} %{$config});
+}else {
+   $server = $hostlist->($scfg->{monhost}, ',');
+}
+
+return $server;
+};
+
+my $get_configfile = sub {
+my ($storeid) = @_;
+
+my $configfile;
+my $pve_cephconfig = '/etc/pve/ceph.conf';
+my $storeid_cephconfig = "/etc/pve/priv/ceph/${storeid}.conf";
+
+if (-e $pve_cephconfig) {
+   if (-e $storeid_cephconfig) {
+   warn "ignoring custom ceph config for storage '$storeid', 'monhost' 
is not set (assuming pveceph managed cluster)!\n";
+   }
+   $configfile = $pve_cephconfig;
+} elsif (-e $storeid_cephconfig) {
+   $configfile = $storeid_cephconfig;
+} else {
+   die "Missing ceph config for ${storeid} storage\n";
+}
+
+return $configfile;
+};
+
+sub cephfs_is_mounted {
+my ($scfg, $storeid, $mountdata) = @_;
+
+my $no_mon = !defined($scfg->{monhost});
+
+my $configfile = $get_configfile->($storeid) if ($no_mon);
+my $server = $get_monaddr_list->($scfg, $configfile);
+
+my $subdir = $scfg->{subdir} ? 

[pve-devel] applied: [PATCH manager] ui: vm: allow to add socket backed serial devices

2018-05-17 Thread Wolfgang Bumiller
applied

On Wed, May 16, 2018 at 03:30:59PM +0200, Thomas Lamprecht wrote:
> We show and can remove serial devices but couldn't add new ones
> through the WebUI.
> Add a simple component to allow adding serial ports backed by a
> socket, which can be especially useful now with xterm.js
> 
> Passing through serial devices from /dev isn't possible with this, as
> it is normally a root only operation and not that often used.
> 
> Signed-off-by: Thomas Lamprecht 
> ---
>  www/manager6/Makefile |  1 +
>  www/manager6/qemu/HardwareView.js | 13 +++
>  www/manager6/qemu/SerialEdit.js   | 81 
> +++
>  3 files changed, 95 insertions(+)
>  create mode 100644 www/manager6/qemu/SerialEdit.js
> 
> diff --git a/www/manager6/Makefile b/www/manager6/Makefile
> index 7e9877b2..a2bd4576 100644
> --- a/www/manager6/Makefile
> +++ b/www/manager6/Makefile
> @@ -129,6 +129,7 @@ JSSRC=
> \
>   qemu/Config.js  \
>   qemu/CreateWizard.js\
>   qemu/USBEdit.js \
> + qemu/SerialEdit.js  \
>   qemu/AgentIPView.js \
>   qemu/CloudInit.js   \
>   qemu/CIDriveEdit.js \
> diff --git a/www/manager6/qemu/HardwareView.js 
> b/www/manager6/qemu/HardwareView.js
> index 17e755a8..a87a9df1 100644
> --- a/www/manager6/qemu/HardwareView.js
> +++ b/www/manager6/qemu/HardwareView.js
> @@ -570,6 +570,19 @@ Ext.define('PVE.qemu.HardwareView', {
>   win.show();
>   }
>   },
> + {
> + text: gettext('Serial Port'),
> + itemId: 'addserial',
> + iconCls: 'pve-itype-icon-serial',
> + disabled: !caps.vms['VM.Config.Options'],
> + handler: function() {
> + var win = Ext.create('PVE.qemu.SerialEdit', 
> {
> + url: '/api2/extjs/' + baseurl
> + });
> + win.on('destroy', reload);
> + win.show();
> + }
> + },
>   {
>   text: gettext('CloudInit Drive'),
>   itemId: 'addci',
> diff --git a/www/manager6/qemu/SerialEdit.js b/www/manager6/qemu/SerialEdit.js
> new file mode 100644
> index ..794c7fa2
> --- /dev/null
> +++ b/www/manager6/qemu/SerialEdit.js
> @@ -0,0 +1,81 @@
> +/*jslint confusion: true */
> +Ext.define('PVE.qemu.SerialnputPanel', {
> +extend: 'Proxmox.panel.InputPanel',
> +
> +autoComplete: false,
> +
> +setVMConfig: function(vmconfig) {
> + var me = this, i;
> + me.vmconfig = vmconfig;
> +
> + for (i = 0; i < 4; i++) {
> + var port = 'serial' +  i.toString();
> + if (!me.vmconfig[port]) {
> + me.down('field[name=serialid]').setValue(i);
> + break;
> + }
> + }
> +
> +},
> +
> +onGetValues: function(values) {
> + var me = this;
> +
> + var id = 'serial' + values.serialid;
> + delete values.serialid;
> + values[id] = 'socket';
> + return values;
> +},
> +
> +items: [
> + {
> + xtype: 'proxmoxintegerfield',
> + name: 'serialid',
> + fieldLabel: gettext('Serial Port'),
> + minValue: 0,
> + maxValue: 3,
> + allowBlank: false,
> + validator: function(id) {
> + if (!this.rendered) {
> + return true;
> + }
> + var me = this.up('panel');
> + if (me.vmconfig !== undefined && 
> Ext.isDefined(me.vmconfig['serial' + id])) {
> + return "This device is already in use.";
> + }
> + return true;
> + }
> + }
> +]
> +});
> +
> +Ext.define('PVE.qemu.SerialEdit', {
> +extend: 'Proxmox.window.Edit',
> +
> +vmconfig: undefined,
> +
> +isAdd: true,
> +
> +subject: gettext('Serial Port'),
> +
> +initComponent : function() {
> + var me = this;
> +
> + // for now create of (socket) serial port only
> + me.isCreate = true;
> +
> + var ipanel = Ext.create('PVE.qemu.SerialnputPanel', {});
> +
> + Ext.apply(me, {
> + items: [ ipanel ]
> + });
> +
> + me.callParent();
> +
> + me.load({
> + success: function(response, options) {
> + ipanel.setVMConfig(response.result.data);
> + }
> + });
> +}
> +});
> -- 
> 2.14.2

___
pve-devel mailing list

Re: [pve-devel] [PATCH 1/1] initial ifupdown2 package

2018-05-17 Thread Wolfgang Bumiller
On Wed, May 16, 2018 at 12:01:40PM +0200, Alexandre Derumier wrote:
> ---
>  Makefile   |  42 +
>  debian/changelog   | 174 
> +
>  debian/compat  |   1 +
>  debian/control |  31 
>  debian/copyright   |  28 
>  debian/ifupdown2.postinst  |  86 ++
>  ...0001-start-networking-add-usr-bin-in-PATH.patch |  28 
>  ...ns-scripts-fix-ENV-for-interfaces-options.patch |  29 
>  debian/patches/0003-config-tuning.patch|  52 ++
>  .../0004-manual-interfaces-set-link-up.patch   |  58 +++
>  ...e-tap-veth-fwpr-interfaces-from-bridge-on.patch |  27 
>  ...6-netlink-IFLA_BRPORT_ARP_SUPPRESS-use-32.patch |  29 
>  ...0007-add-dummy-mtu-bridgevlanport-modules.patch |  69 
>  .../patches/0008-add-vxlan-physdev-support.patch   | 159 +++
>  debian/patches/series  |   8 +
>  debian/rules   |  21 +++
>  16 files changed, 842 insertions(+)
>  create mode 100644 Makefile
>  create mode 100644 debian/changelog
>  create mode 100644 debian/compat
>  create mode 100644 debian/control
>  create mode 100644 debian/copyright
>  create mode 100644 debian/ifupdown2.postinst
>  create mode 100644 
> debian/patches/0001-start-networking-add-usr-bin-in-PATH.patch
>  create mode 100644 
> debian/patches/0002-addons-scripts-fix-ENV-for-interfaces-options.patch
>  create mode 100644 debian/patches/0003-config-tuning.patch
>  create mode 100644 debian/patches/0004-manual-interfaces-set-link-up.patch
>  create mode 100644 
> debian/patches/0005-don-t-remove-tap-veth-fwpr-interfaces-from-bridge-on.patch
>  create mode 100644 
> debian/patches/0006-netlink-IFLA_BRPORT_ARP_SUPPRESS-use-32.patch
>  create mode 100644 
> debian/patches/0007-add-dummy-mtu-bridgevlanport-modules.patch
>  create mode 100644 debian/patches/0008-add-vxlan-physdev-support.patch
>  create mode 100644 debian/patches/series
>  create mode 100755 debian/rules
> (...)

Looks interesting, seems like you spent quite some time with this
already.

If we add this, a couple of things:

I'd like to have more info on patch 0007-add-dummy-..., probably best
to just write a more descriptive commit message there.

Note that we started using git submodules with mirrors of upstream git
repos for several packages by now, rather than having a `make download`
target producing a tarball. We could do that here as well. I think it's
more convenient, too.
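
For example (standard git commands, just to illustrate that workflow):

    git submodule add https://github.com/CumulusNetworks/ifupdown2 ifupdown2
    git submodule update --init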

As for the patch workflow, I see there are a bunch which are going
upstream and a bunch of pve-specific ones. Would be good to separate
them somehow. (I haven't been very consistent in that regard, though.
E.g. in qemu we have a pve/ and an extra/ directory, while in lxc we have
the pve patches directly in the patches dir and a fixes/ subdir.)
One or the other might be nice to use. That way it's also more
convenient to (automatically) update patch files if you have your custom
changes as a branch in a git-clone.

PS: You don't need to use a cover letter for single-patch mails,
especially if it only copies the commit message, as it's already in the
patch itself anyway ;-)

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 storage] Cephfs storage plugin

2018-05-17 Thread Alwin Antreich
 - ability to mount through kernel and fuse client
 - allow mount options
 - get MONs from ceph config if not in storage.cfg
 - allow the use of ceph config with fuse client

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Storage/Config.pm  |   2 +-
 PVE/API2/Storage/Status.pm  |   2 +-
 PVE/Storage.pm  |   2 +
 PVE/Storage/CephFSPlugin.pm | 262 
 PVE/Storage/Makefile|   2 +-
 PVE/Storage/Plugin.pm   |   1 +
 debian/control  |   2 +
 7 files changed, 270 insertions(+), 3 deletions(-)
 create mode 100644 PVE/Storage/CephFSPlugin.pm

diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 3b38304..368a5c9 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -171,7 +171,7 @@ __PACKAGE__->register_method ({
PVE::Storage::activate_storage($cfg, $baseid);
 
PVE::Storage::LVMPlugin::lvm_create_volume_group($path, 
$opts->{vgname}, $opts->{shared});
-   } elsif ($type eq 'rbd' && !defined($opts->{monhost})) {
+   } elsif (($type eq 'rbd' || $type eq 'cephfs') && 
!defined($opts->{monhost})) {
my $ceph_admin_keyring = 
'/etc/pve/priv/ceph.client.admin.keyring';
my $ceph_storage_keyring = 
"/etc/pve/priv/ceph/${storeid}.keyring";
 
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index ab07146..2d8d143 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE/API2/Storage/Status.pm
@@ -335,7 +335,7 @@ __PACKAGE__->register_method ({
my $scfg = PVE::Storage::storage_check_enabled($cfg, $param->{storage}, 
$node);
 
die "cant upload to storage type '$scfg->{type}'\n" 
-   if !($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs' || 
$scfg->{type} eq 'glusterfs');
+   if !($scfg->{type} eq 'dir' || $scfg->{type} eq 'nfs' || 
$scfg->{type} eq 'glusterfs' || $scfg->{type} eq 'cephfs');
 
my $content = $param->{content};
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index d733380..f9732fe 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -28,6 +28,7 @@ use PVE::Storage::NFSPlugin;
 use PVE::Storage::CIFSPlugin;
 use PVE::Storage::ISCSIPlugin;
 use PVE::Storage::RBDPlugin;
+use PVE::Storage::CephFSPlugin;
 use PVE::Storage::SheepdogPlugin;
 use PVE::Storage::ISCSIDirectPlugin;
 use PVE::Storage::GlusterfsPlugin;
@@ -46,6 +47,7 @@ PVE::Storage::NFSPlugin->register();
 PVE::Storage::CIFSPlugin->register();
 PVE::Storage::ISCSIPlugin->register();
 PVE::Storage::RBDPlugin->register();
+PVE::Storage::CephFSPlugin->register();
 PVE::Storage::SheepdogPlugin->register();
 PVE::Storage::ISCSIDirectPlugin->register();
 PVE::Storage::GlusterfsPlugin->register();
diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
new file mode 100644
index 000..a368c5b
--- /dev/null
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -0,0 +1,262 @@
+package PVE::Storage::CephFSPlugin;
+
+use strict;
+use warnings;
+use IO::File;
+use Net::IP;
+use File::Path;
+use PVE::Tools qw(run_command);
+use PVE::ProcFSTools;
+use PVE::Storage::Plugin;
+use PVE::JSONSchema qw(get_standard_option);
+
+use base qw(PVE::Storage::Plugin);
+
+my $hostlist = sub {
+my ($list_text, $separator) = @_;
+
+my @monhostlist = PVE::Tools::split_list($list_text);
+return join($separator, map {
+   my ($host, $port) = PVE::Tools::parse_host_and_port($_);
+   $port = defined($port) ? ":$port" : '';
+   $host = "[$host]" if Net::IP::ip_is_ipv6($host);
+   "${host}${port}"
+} @monhostlist);
+};
+
+my $parse_ceph_config = sub {
+my ($filename) = @_;
+
+my $cfg = {};
+
+return $cfg if ! -f $filename;
+
+my $fh = IO::File->new($filename, "r") ||
+   die "unable to open '$filename' - $!\n";
+
+my $section;
+
+while (defined(my $line = <$fh>)) {
+   $line =~ s/[;#].*$//;
+   $line =~ s/^\s+//;
+   $line =~ s/\s+$//;
+   next if !$line;
+
+   $section = $1 if $line =~ m/^\[(\S+)\]$/;
+   if (!$section) {
+   warn "no section - skip: $line\n";
+   next;
+   }
+
+   if ($line =~ m/^(.*?\S)\s*=\s*(\S.*)$/) {
+   $cfg->{$section}->{$1} = $2;
+   }
+
+}
+
+return $cfg;
+};
+
+my $get_monaddr_list = sub {
+my ($scfg, $configfile) = @_;
+
+my $server;
+my $no_mon = !defined($scfg->{monhost});
+
+if (($no_mon) && defined($configfile)) {
+   my $config = $parse_ceph_config->($configfile);
+   $server = join(',', sort { $a cmp $b }
+   map { $config->{$_}->{'mon addr'} } grep {/mon/} %{$config});
+}else {
+   $server = $hostlist->($scfg->{monhost}, ',');
+}
+
+return $server;
+};
+
+my $get_configfile = sub {
+my ($storeid) = @_;
+
+my $configfile;
+my $pve_cephconfig = '/etc/pve/ceph.conf';
+my $storeid_cephconfig = "/etc/pve/priv/ceph/${storeid}.conf";
+
+if (-e $pve_cephconfig) {
+   if (-e 

[pve-devel] [PATCH v2 manager] Cephfs storage wizard

2018-05-17 Thread Alwin Antreich
 Add internal and external storage wizard for cephfs

Signed-off-by: Alwin Antreich 
---
 www/manager6/Makefile  |  1 +
 www/manager6/Utils.js  | 10 ++
 www/manager6/storage/CephFSEdit.js | 71 ++
 3 files changed, 82 insertions(+)
 create mode 100644 www/manager6/storage/CephFSEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 7e9877b2..6f9b40ca 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -161,6 +161,7 @@ JSSRC=  
\
storage/IScsiEdit.js\
storage/LVMEdit.js  \
storage/LvmThinEdit.js  \
+   storage/CephFSEdit.js   \
storage/RBDEdit.js  \
storage/SheepdogEdit.js \
storage/ZFSEdit.js  \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index ad5a0a61..f41a9562 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -427,6 +427,16 @@ Ext.define('PVE.Utils', { utilities: {
hideAdd: true,
faIcon: 'building'
},
+   cephfs: {
+   name: 'CephFS (PVE)',
+   ipanel: 'PVECephFSInputPanel',
+   faIcon: 'building'
+   },
+   cephfs_ext: {
+   name: 'CephFS (external)',
+   ipanel: 'CephFSInputPanel',
+   faIcon: 'building'
+   },
rbd: {
name: 'RBD',
ipanel: 'RBDInputPanel',
diff --git a/www/manager6/storage/CephFSEdit.js 
b/www/manager6/storage/CephFSEdit.js
new file mode 100644
index ..8f745b63
--- /dev/null
+++ b/www/manager6/storage/CephFSEdit.js
@@ -0,0 +1,71 @@
+Ext.define('PVE.storage.CephFSInputPanel', {
+extend: 'PVE.panel.StorageBase',
+
+initComponent : function() {
+   var me = this;
+
+   if (!me.nodename) {
+   me.nodename = 'localhost';
+   }
+   me.type = 'cephfs';
+
+   me.column1 = [];
+
+   if (me.pveceph) {
+   me.column1.push(
+   {
+   xtype: me.isCreate ? 'textfield' : 'displayfield',
+   nodename: me.nodename,
+   name: 'username',
+   value: '',
+   emptyText: gettext('admin'),
+   fieldLabel: gettext('User name'),
+   allowBlank: true
+   }
+   );
+   } else {
+   me.column1.push(
+   {
+   xtype: me.isCreate ? 'textfield' : 'displayfield',
+   name: 'monhost',
+   vtype: 'HostList',
+   value: '',
+   fieldLabel: 'Monitor(s)',
+   allowBlank: false
+   },
+   {
+   xtype: me.isCreate ? 'textfield' : 'displayfield',
+   name: 'username',
+   value: '',
+   emptyText: gettext('admin'),
+   fieldLabel: gettext('User name'),
+   allowBlank: true
+   }
+   );
+   }
+
+   // here value is an array,
+   // while before it was a string
+   /*jslint confusion: true*/
+   me.column2 = [
+   {
+   xtype: 'pveContentTypeSelector',
+   cts: ['backup', 'iso', 'vztmpl'],
+   fieldLabel: gettext('Content'),
+   name: 'content',
+   value: ['backup'],
+   multiSelect: true,
+   allowBlank: false
+   }
+   ];
+   /*jslint confusion: false*/
+
+   me.callParent();
+}
+});
+
+Ext.define('PVE.storage.PVECephFSInputPanel', {
+extend: 'PVE.storage.CephFSInputPanel',
+
+pveceph: 1
+});
-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [RFC v2 storage/manager 0/2] Cephfs storage plugin

2018-05-17 Thread Alwin Antreich
This patch series is an update and adds CephFS to our list of storages.
You can mount the storage through the kernel or the fuse client. The plugin for
now allows all content formats, but this needs further testing.

Config and keyfile locations are the same as in the RBD plugin.

Example entry:
cephfs: cephfs0
monhost 192.168.1.2:6789
path /mnt/pve/cephfs0
content iso,backup,images,vztmpl,rootdir
subdir /blubb
fuse 0
username admin

Comments and tests are very welcome. ;)

Changes in V2:
After some testing, I decided to remove the image/rootfs option from the
plugin in this version.
Also, CephFS does not report sparse files correctly through the stat() system call,
as it doesn't track which parts are written. This will confuse users looking at
their image files and directories with tools such as du.

My test results:
### directly on cephfs
# fio --filename=/mnt/pve/cephfs0/testfile --size=10G --direct=1 --sync=1 
--rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based 
--group_reporting --name=cephfs-test
  WRITE: io=273200KB, aggrb=4553KB/s, minb=4553KB/s, maxb=4553KB/s, 
mint=60001msec, maxt=60001msec

### /dev/loop0 -> raw image on cephfs
# fio --filename=/dev/loop0 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 
--iodepth=1 --runtime=60 --time_based --group_reporting --name=cephfs-test
  WRITE: io=258644KB, aggrb=4310KB/s, minb=4310KB/s, maxb=4310KB/s, 
mint=60001msec, maxt=60001msec

### /dev/rbd0 -> rbd image mapped 
# fio --filename=/dev/rbd0 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 
--iodepth=1 --runtime=60 --time_based --group_reporting --name=cephfs-test
  WRITE: io=282064KB, aggrb=4700KB/s, minb=4700KB/s, maxb=4700KB/s, 
mint=60001msec, maxt=60001msec

### ext4 on mapped rbd image
# fio --ioengine=libaio --filename=/opt/testfile --size=10G --direct=1 --sync=1 
--rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based 
--group_reporting --name=fio
  WRITE: io=122608KB, aggrb=2043KB/s, minb=2043KB/s, maxb=2043KB/s, 
mint=60002msec, maxt=60002msec

### timed cp -r linux kernel source from tempfs
# -> cephfs
real0m23.522s
user0m0.744s
sys 0m3.292s

# -> /root/ (SSD MX100)
real0m3.318s
user0m0.502s
sys 0m2.770s

# -> rbd mapped ext4 (SM863a)
real0m3.313s
user0m0.441s
sys 0m2.826s


Alwin Antreich (1):
  Cephfs storage plugin

 PVE/API2/Storage/Config.pm  |   2 +-
 PVE/API2/Storage/Status.pm  |   2 +-
 PVE/Storage.pm  |   2 +
 PVE/Storage/CephFSPlugin.pm | 262 
 PVE/Storage/Makefile|   2 +-
 PVE/Storage/Plugin.pm   |   1 +
 debian/control  |   2 +
 7 files changed, 270 insertions(+), 3 deletions(-)
 create mode 100644 PVE/Storage/CephFSPlugin.pm

Alwin Antreich (1):
  Cephfs storage wizard

 www/manager6/Makefile  |  1 +
 www/manager6/Utils.js  | 10 ++
 www/manager6/storage/CephFSEdit.js | 71 ++
 3 files changed, 82 insertions(+)
 create mode 100644 www/manager6/storage/CephFSEdit.js

-- 
2.11.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel