Re: [pve-devel] firewall : possible bug/race when cluster.fw is replicated and rules are updated ?

2019-01-08 Thread Alexandre DERUMIER
>>But that is true for file systems in general. Even if you're on a local
>>"standard" filesystem and have a reader/writer process pair, you need to ensure
>>some level of atomicity, either with a (shared) rw_lock or, as in our
>>case, where the reader reopens the file on every read loop iteration anyway, a
>>"write to tmp then rename" is enough: with this, the reader never opens the
>>file while write operations are still in flight. So nothing specific to
>>pmxcfs.

Thanks Thomas. I really didn't know about this (or didn't remember).

I found a small Vim script that does atomic writes (.tmp + mv):
https://github.com/vim-scripts/Atomic-Save
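The same ".tmp + mv" pattern can be sketched as a small shell helper (a minimal sketch; the function name and paths are just examples, not an existing tool):

```shell
#!/bin/sh
# atomic_replace TARGET  (new content on stdin)
# Write stdin to a temp file in TARGET's own directory, then rename it
# over TARGET. rename(2) is atomic within one filesystem, so a reader
# sees either the old file or the new one, never a truncated one.
atomic_replace() {
    target=$1
    # temp file must live in the same directory (same filesystem)
    tmp=$(mktemp "$(dirname "$target")/.$(basename "$target").XXXXXX")
    cat > "$tmp"          # write the full new content first
    mv "$tmp" "$target"   # atomic rename over the old file
}
```

Usage would be e.g. `printf 'enable: 1\n' | atomic_replace /etc/pve/firewall/cluster.fw`.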


>>Hmm, but if one wants to restore the defaults by simply deleting the file, he'd
>>need to restart the firewall daemon too. Not really sure if this is ideal
>>either... Even if we could add heuristics to check whether the file was really
>>removed/truncated (double checks), that would just feel hacky and, as said
>>above, such actions can get you in trouble with any process pair of readers and
>>writers, so this should be handled by the one updating the file.

OK, I understand.
I'm also thinking of the case where we have a corosync/network failure,
so that /etc/pve can no longer be mounted or is not readable;
in that case the firewall would be off too.
That seems bad for security.



>>But maybe a note like "As with all other filesystems, you need to ensure a
>>write operation is seen as atomic by any reading process, by writing to a
>>temporary file and then renaming (moving) it."

Sounds great :)


- Original Message -
From: "Thomas Lamprecht" 
To: "pve-devel" , "aderumier" , 
"Stefan Priebe, Profihost AG" 
Sent: Wednesday, 9 January 2019 08:16:46
Subject: Re: [pve-devel] firewall : possible bug/race when cluster.fw is 
replicated and rules are updated ?

On 1/8/19 10:19 PM, Alexandre DERUMIER wrote: 
>>> for those cases i use something like (pseudocode - i use salt not puppet): 
>>> 
>>> - manage copy of file 
>>> - if file has changed trigger: 
>>> - mv -v $managedfile $realfile 
>>> 
> 
>>> Greets, 
>>> Stefan 
> 
> Thanks Stefan, works fine indeed. 
> 
> I really didn't know/remember that /etc/pve was not atomic without a move. 
> Maybe we should add a warning about this in the documentation. 

But that is true for file systems in general. Even if you're on a local 
"standard" filesystem and have a reader/writer process pair, you need to ensure 
some level of atomicity, either with a (shared) rw_lock or, as in our 
case, where the reader reopens the file on every read loop iteration anyway, a 
"write to tmp then rename" is enough: with this, the reader never opens the 
file while write operations are still in flight. So nothing specific to pmxcfs. 

> 
> also, maybe for firewall, could we add a protection for cluster.fw ( 

Hmm, but if one wants to restore the defaults by simply deleting the file, he'd 
need to restart the firewall daemon too. Not really sure if this is ideal 
either... Even if we could add heuristics to check whether the file was really 
removed/truncated (double checks), that would just feel hacky and, as said 
above, such actions can get you in trouble with any process pair of readers and 
writers, so this should be handled by the one updating the file. 

But maybe a note like "As with all other filesystems, you need to ensure a write 
operation is seen as atomic by any reading process, by writing to a temporary 
file and then renaming (moving) it." 


> 
> sub update { 
> my $code = sub { 
> 
> my $cluster_conf = load_clusterfw_conf(); 
> + return if !$cluster_conf; 
> 
> my $cluster_options = $cluster_conf->{options}; 
> 
> if (!$cluster_options->{enable}) { 
> PVE::Firewall::remove_pvefw_chains(); 
> return; 
> } 
> 
> 
> - Original Message - 
> From: "Stefan Priebe, Profihost AG"  
> To: "pve-devel" , "aderumier" 
> , "Thomas Lamprecht"  
> Sent: Tuesday, 8 January 2019 21:59:44 
> Subject: Re: [pve-devel] firewall : possible bug/race when cluster.fw is 
> replicated and rules are updated ? 
> 
> Hi Alexandre, 
> 
> 
> Am 08.01.19 um 21:55 schrieb Alexandre DERUMIER: 
>>>> But, file_set_contents - which save_clusterfw_conf uses - does this 
>>>> already[0], 
>>>> so maybe this is the "high-level fuse rename isn't atomic" bug again... 
>>>> May need to take a closer look tomorrow. 
>> 
>> mmm, ok. 
>> 
>> In my case, it was with a simple file copy (cp /tmp/cluster.fw 
>> /etc/pve/firewall/cluster.fw). (I manage cluster.fw through puppet for 
>> multiple clusters.) 
>> I can reproduce it with a simple vi edit, too. 
>> 
>> I think other users could trigger this too, if they use scripts to automate 
>> ipset (blacklist, ). 
>>
>> Maybe it would be better to only disable the firewall when the option is 
>> explicitly set to "enabled:0", and, if cluster.fw is missing, not to change 
>> any rules. 
> 
> 
> for those cases i use something like (pseudocode - i use salt not puppet): 
> 
> - manage copy of file 
> - if file has changed trigger: 
> - 

Re: [pve-devel] firewall : possible bug/race when cluster.fw is replicated and rules are updated ?

2019-01-08 Thread Thomas Lamprecht
On 1/8/19 10:19 PM, Alexandre DERUMIER wrote:
>>> for those cases i use something like (pseudocode - i use salt not puppet):
>>>
>>> - manage copy of file
>>> - if file has changed trigger:
>>>   - mv -v $managedfile $realfile
>>>
> 
>>> Greets,
>>> Stefan
> 
> Thanks Stefan, works fine indeed.
> 
> I really didn't know/remember that /etc/pve was not atomic without a move.
> Maybe we should add a warning about this in the documentation.

But that is true for file systems in general. Even if you're on a local
"standard" filesystem and have a reader/writer process pair, you need to ensure
some level of atomicity, either with a (shared) rw_lock or, as in our
case, where the reader reopens the file on every read loop iteration anyway, a
"write to tmp then rename" is enough: with this, the reader never opens the
file while write operations are still in flight. So nothing specific to pmxcfs.

> 
> also, maybe for firewall, could we add a protection for cluster.fw (

Hmm, but if one wants to restore the defaults by simply deleting the file, he'd
need to restart the firewall daemon too. Not really sure if this is ideal
either... Even if we could add heuristics to check whether the file was really
removed/truncated (double checks), that would just feel hacky and, as said
above, such actions can get you in trouble with any process pair of readers and
writers, so this should be handled by the one updating the file.

But maybe a note like "As with all other filesystems, you need to ensure a write
operation is seen as atomic by any reading process, by writing to a temporary
file and then renaming (moving) it."


> 
> sub update {
> my $code = sub {
> 
> my $cluster_conf = load_clusterfw_conf();
> +   return if !$cluster_conf;
> 
> my $cluster_options = $cluster_conf->{options};
> 
> if (!$cluster_options->{enable}) {
> PVE::Firewall::remove_pvefw_chains();
> return;
> }
> 
> 
> - Original Message -
> From: "Stefan Priebe, Profihost AG" 
> To: "pve-devel" , "aderumier" 
> , "Thomas Lamprecht" 
> Sent: Tuesday, 8 January 2019 21:59:44
> Subject: Re: [pve-devel] firewall : possible bug/race when cluster.fw is 
> replicated and rules are updated ?
> 
> Hi Alexandre, 
> 
> 
> Am 08.01.19 um 21:55 schrieb Alexandre DERUMIER: 
>>>> But, file_set_contents - which save_clusterfw_conf uses - does this 
>>>> already[0], 
>>>> so maybe this is the "high-level fuse rename isn't atomic" bug again... 
>>>> May need to take a closer look tomorrow. 
>>
>> mmm, ok. 
>>
>> In my case, it was with a simple file copy (cp /tmp/cluster.fw 
>> /etc/pve/firewall/cluster.fw). (I manage cluster.fw through puppet for 
>> multiple clusters.) 
>> I can reproduce it with a simple vi edit, too. 
>>
>> I think other users could trigger this too, if they use scripts to automate 
>> ipset (blacklist, ). 
>>
>> Maybe it would be better to only disable the firewall when the option is 
>> explicitly set to "enabled:0", and, if cluster.fw is missing, not to change 
>> any rules. 
> 
> 
> for those cases i use something like (pseudocode - i use salt not puppet): 
> 
> - manage copy of file 
> - if file has changed trigger: 
> - mv -v $managedfile $realfile 
> 
> 
> Greets, 
> Stefan 
> 
>>
>>
>>
>> - Original Message - 
>> From: "Thomas Lamprecht"  
>> To: "pve-devel" , "aderumier" 
>>  
>> Sent: Tuesday, 8 January 2019 20:58:51 
>> Subject: Re: [pve-devel] firewall : possible bug/race when cluster.fw is 
>> replicated and rules are updated ? 
>>
>> Hi, 
>>
>> On 1/8/19 7:37 PM, Alexandre DERUMIER wrote: 
>>> I'm able to reproduce with: 
>>> --- 
>>> on 1 host: 
>>>
>>> cluster.fw: 
>>> [OPTIONS] 
>>>
>>> enable: 1 
>>> policy_in: ACCEPT 
>>>
>>>
>>>
>>>
>>> #!/usr/bin/perl 
>>>
>>> use IO::File; 
>>> use PVE::Firewall; 
>>> use Data::Dumper; 
>>> use Time::HiRes qw ( time alarm sleep usleep ); 
>>>
>>> while(1){ 
>>>
>>> $filename = "/etc/pve/firewall/cluster.fw"; 
>>>
>>> if (my $fh = IO::File->new($filename, O_RDONLY)) { 
>>>
>>> $cluster_conf = PVE::Firewall::parse_clusterfw_config($filename, $fh, 
>>> $verbose); 
>>> my $cluster_options = $cluster_conf->{options}; 
>>>
>>> if (!$cluster_options->{enable}) { 
>>> print Dumper($cluster_options); 
>>> die "error\n"; 
>>> } 
>>>
>>> } 
>>> usleep(100); 
>>> }; 
>>>
>>>
>>> the script is running fine. 
>>>
>>>
>>> on another host, edit the file (simple open/write), 
>>> then the script on first host, return 
>>>
>>> $VAR1 = {}; 
>>> error 
>>
>> that is expected, AFAICT; a modify operation shouldn't be: 
>> * read FILE -> modify -> write FILE 
>> but rather: 
>> * read FILE -> modify -> write FILE.TMP -> move FILE.TMP to FILE 
>> if you want readers to always see valid content. Otherwise, yes, you may have 
>> a small time window where the file is truncated. 
>>
>> But, file_set_contents - which save_clusterfw_conf uses - does this 
>> already[0], 
>> so maybe this is the "high-level fuse rename isn't atomic" bug again... 
>> May need to take a 

Re: [pve-devel] [PATCH qemu-server v2 0/3] Fix #2041 and #413

2019-01-08 Thread Alexandre DERUMIER
Hi,

There is also ich9-intel-hda as a sound controller
(maybe when q35 is used).

It could be great to use Intel HDA for Linux too, and not only for modern Windows.

- Original Message -
From: "Andreas Steinel" 
To: "pve-devel" 
Sent: Tuesday, 8 January 2019 23:57:58
Subject: [pve-devel] [PATCH qemu-server v2 0/3] Fix #2041 and #413

Add a new option 'spicedesktop' to enable audio and folder sharing. 

More precisely, add another serial port in order to get the service 
spice-webdavd working on Linux and Windows. Afterwards you can use 
remote-viewer and enable folder sharing therein to get a new virtual 
folder inside your guest for sharing files. 

Based on the Windows version, one of two possible sound implementations 
is used, as described on the SPICE page of the wiki. 

Signed-off-by: Andreas Steinel  

Andreas Steinel (3): 
Fix #2041: add spice webdav / folder sharing 
fix #413: add SPICE audio device 
Adding new config option 'spicedesktop' 

PVE/QemuServer.pm | 22 ++ 
PVE/QemuServer/PCI.pm | 2 ++ 
2 files changed, 24 insertions(+) 

-- 
2.11.0 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 



[pve-devel] [PATCH qemu-server v2 0/3] Fix #2041 and #413

2019-01-08 Thread Andreas Steinel
Add a new option 'spicedesktop' to enable audio and folder sharing.

More precisely, add another serial port in order to get the service
spice-webdavd working on Linux and Windows. Afterwards you can use
remote-viewer and enable folder sharing therein to get a new virtual
folder inside your guest for sharing files.

Based on the Windows version, one of two possible sound implementations
is used, as described on the SPICE page of the wiki.

Signed-off-by: Andreas Steinel 

Andreas Steinel (3):
  Fix #2041: add spice webdav / folder sharing
  fix #413: add SPICE audio device
  Adding new config option 'spicedesktop'

 PVE/QemuServer.pm | 22 ++
 PVE/QemuServer/PCI.pm |  2 ++
 2 files changed, 24 insertions(+)

-- 
2.11.0



[pve-devel] [PATCH qemu-server v2 3/3] Adding new config option 'spicedesktop'

2019-01-08 Thread Andreas Steinel
---
 PVE/QemuServer.pm | 32 
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 173ae82..657cfad 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -607,6 +607,12 @@ EODESCR
default => "1 (autogenerated)",
optional => 1,
 },
+spicedesktop => {
+   optional => 1,
+   type => 'boolean',
+   description => "Enable/disable SPICE folder sharing and audio support",
+   default => 0,
+},
 };
 
 my $confdesc_cloudinit = {
@@ -3768,18 +3774,20 @@ sub config_to_command {
 	push @$devices, '-chardev', "spicevmc,id=vdagent,name=vdagent";
 	push @$devices, '-device', "virtserialport,chardev=vdagent,name=com.redhat.spice.0";
 
-	$pciaddr = print_pci_addr("spicewebdav", $bridges, $arch, $machine_type);
-	push @$devices, '-device', "virtio-serial,id=webdav$pciaddr";
-	push @$devices, '-chardev', "spiceport,id=webdav,name=org.spice-space.webdav.0";
-	push @$devices, '-device', "virtserialport,chardev=webdav,name=org.spice-space.webdav.0";
-
-	$pciaddr = print_pci_addr("audio", $bridges, $arch, $machine_type);
-	if ($winversion >= 6) {
-	    push @$devices, '-device', "intel-hda,id=sound5$pciaddr";
-	    push @$devices, '-device', "hda-micro,id=sound5-codec0,bus=sound5.0,cad=0";
-	    push @$devices, '-device', "hda-duplex,id=sound5-codec1,bus=sound5.0,cad=1";
-	} else {
-	    push @$devices, '-device', "AC97$pciaddr";
+	if (defined($conf->{spicedesktop}) && $conf->{spicedesktop} == 1) {
+	    $pciaddr = print_pci_addr("spicewebdav", $bridges, $arch, $machine_type);
+	    push @$devices, '-device', "virtio-serial,id=webdav$pciaddr";
+	    push @$devices, '-chardev', "spiceport,id=webdav,name=org.spice-space.webdav.0";
+	    push @$devices, '-device', "virtserialport,chardev=webdav,name=org.spice-space.webdav.0";
+
+	    $pciaddr = print_pci_addr("audio", $bridges, $arch, $machine_type);
+	    if ($winversion >= 6) {
+	        push @$devices, '-device', "intel-hda,id=sound5$pciaddr";
+	        push @$devices, '-device', "hda-micro,id=sound5-codec0,bus=sound5.0,cad=0";
+	        push @$devices, '-device', "hda-duplex,id=sound5-codec1,bus=sound5.0,cad=1";
+	    } else {
+	        push @$devices, '-device', "AC97$pciaddr";
+	    }
 	}
 }
 
-- 
2.11.0



[pve-devel] [PATCH qemu-server v2 1/3] Fix #2041: add spice webdav / folder sharing

2019-01-08 Thread Andreas Steinel
Adding the device and serial port for the service spice-webdavd on Linux and
Windows.

Signed-off-by: Andreas Steinel 
---
 PVE/QemuServer.pm | 5 +
 PVE/QemuServer/PCI.pm | 1 +
 2 files changed, 6 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1ccdccf..225f0c0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3767,6 +3767,11 @@ sub config_to_command {
 	push @$devices, '-device', "virtio-serial,id=spice$pciaddr";
 	push @$devices, '-chardev', "spicevmc,id=vdagent,name=vdagent";
 	push @$devices, '-device', "virtserialport,chardev=vdagent,name=com.redhat.spice.0";
+
+	$pciaddr = print_pci_addr("spicewebdav", $bridges, $arch, $machine_type);
+	push @$devices, '-device', "virtio-serial,id=webdav$pciaddr";
+	push @$devices, '-chardev', "spiceport,id=webdav,name=org.spice-space.webdav.0";
+	push @$devices, '-device', "virtserialport,chardev=webdav,name=org.spice-space.webdav.0";
 }
 
 # enable balloon by default, unless explicitly disabled
diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index f22f5ad..45c1c90 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -68,6 +68,7 @@ my $devices = {
 'net30' => { bus => 1, addr => 25 },
 'net31' => { bus => 1, addr => 26 },
 'xhci' => { bus => 1, addr => 27 },
+'spicewebdav' => { bus => 1, addr => 28 },
 'virtio6' => { bus => 2, addr => 1 },
 'virtio7' => { bus => 2, addr => 2 },
 'virtio8' => { bus => 2, addr => 3 },
-- 
2.11.0



[pve-devel] [PATCH qemu-server v2 2/3] fix #413: add SPICE audio device

2019-01-08 Thread Andreas Steinel
If you enable SPICE, an audio device will be added automatically: Intel
HDA for newer Windows, and AC97 otherwise.
---
 PVE/QemuServer.pm | 9 +
 PVE/QemuServer/PCI.pm | 1 +
 2 files changed, 10 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 225f0c0..173ae82 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3772,6 +3772,15 @@ sub config_to_command {
 	push @$devices, '-device', "virtio-serial,id=webdav$pciaddr";
 	push @$devices, '-chardev', "spiceport,id=webdav,name=org.spice-space.webdav.0";
 	push @$devices, '-device', "virtserialport,chardev=webdav,name=org.spice-space.webdav.0";
+
+	$pciaddr = print_pci_addr("audio", $bridges, $arch, $machine_type);
+	if ($winversion >= 6) {
+	    push @$devices, '-device', "intel-hda,id=sound5$pciaddr";
+	    push @$devices, '-device', "hda-micro,id=sound5-codec0,bus=sound5.0,cad=0";
+	    push @$devices, '-device', "hda-duplex,id=sound5-codec1,bus=sound5.0,cad=1";
+	} else {
+	    push @$devices, '-device', "AC97$pciaddr";
+	}
 }
 
 # enable balloon by default, unless explicitly disabled
diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 45c1c90..c0ee435 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -69,6 +69,7 @@ my $devices = {
 'net31' => { bus => 1, addr => 26 },
 'xhci' => { bus => 1, addr => 27 },
 'spicewebdav' => { bus => 1, addr => 28 },
+'audio' => { bus => 1, addr => 29 },
 'virtio6' => { bus => 2, addr => 1 },
 'virtio7' => { bus => 2, addr => 2 },
 'virtio8' => { bus => 2, addr => 3 },
-- 
2.11.0



Re: [pve-devel] firewall : possible bug/race when cluster.fw is replicated and rules are updated ?

2019-01-08 Thread Alexandre DERUMIER
>>for those cases i use something like (pseudocode - i use salt not puppet):
>>
>>- manage copy of file
>>- if file has changed trigger:
>>   - mv -v $managedfile $realfile
>>

>>Greets,
>>Stefan

Thanks Stefan, works fine indeed.

I really didn't know/remember that /etc/pve was not atomic without a move.
Maybe we should add a warning about this in the documentation.


also, maybe for firewall, could we add a protection for cluster.fw (

sub update {
my $code = sub {

my $cluster_conf = load_clusterfw_conf();
+   return if !$cluster_conf;

my $cluster_options = $cluster_conf->{options};

if (!$cluster_options->{enable}) {
PVE::Firewall::remove_pvefw_chains();
return;
}


- Original Message -
From: "Stefan Priebe, Profihost AG" 
To: "pve-devel" , "aderumier" , 
"Thomas Lamprecht" 
Sent: Tuesday, 8 January 2019 21:59:44
Subject: Re: [pve-devel] firewall : possible bug/race when cluster.fw is 
replicated and rules are updated ?

Hi Alexandre, 


Am 08.01.19 um 21:55 schrieb Alexandre DERUMIER: 
>>> But, file_set_contents - which save_clusterfw_conf uses - does this 
>>> already[0], 
>>> so maybe this is the "high-level fuse rename isn't atomic" bug again... 
>>> May need to take a closer look tomorrow. 
> 
> mmm, ok. 
> 
> In my case, it was with a simple file copy (cp /tmp/cluster.fw 
> /etc/pve/firewall/cluster.fw). (I manage cluster.fw through puppet for 
> multiple clusters.) 
> I can reproduce it with a simple vi edit, too. 
> 
> I think other users could trigger this too, if they use scripts to automate 
> ipset (blacklist, ). 
> 
> Maybe it would be better to only disable the firewall when the option is 
> explicitly set to "enabled:0", and, if cluster.fw is missing, not to change 
> any rules. 


for those cases i use something like (pseudocode - i use salt not puppet): 

- manage copy of file 
- if file has changed trigger: 
- mv -v $managedfile $realfile 


Greets, 
Stefan 

> 
> 
> 
> - Original Message - 
> From: "Thomas Lamprecht"  
> To: "pve-devel" , "aderumier"  
> Sent: Tuesday, 8 January 2019 20:58:51 
> Subject: Re: [pve-devel] firewall : possible bug/race when cluster.fw is 
> replicated and rules are updated ? 
> 
> Hi, 
> 
> On 1/8/19 7:37 PM, Alexandre DERUMIER wrote: 
>> I'm able to reproduce with: 
>> --- 
>> on 1 host: 
>> 
>> cluster.fw: 
>> [OPTIONS] 
>> 
>> enable: 1 
>> policy_in: ACCEPT 
>> 
>> 
>> 
>> 
>> #!/usr/bin/perl 
>> 
>> use IO::File; 
>> use PVE::Firewall; 
>> use Data::Dumper; 
>> use Time::HiRes qw ( time alarm sleep usleep ); 
>> 
>> while(1){ 
>> 
>> $filename = "/etc/pve/firewall/cluster.fw"; 
>> 
>> if (my $fh = IO::File->new($filename, O_RDONLY)) { 
>> 
>> $cluster_conf = PVE::Firewall::parse_clusterfw_config($filename, $fh, 
>> $verbose); 
>> my $cluster_options = $cluster_conf->{options}; 
>> 
>> if (!$cluster_options->{enable}) { 
>> print Dumper($cluster_options); 
>> die "error\n"; 
>> } 
>> 
>> } 
>> usleep(100); 
>> }; 
>> 
>> 
>> the script is running fine. 
>> 
>> 
>> on another host, edit the file (simple open/write), 
>> then the script on first host, return 
>> 
>> $VAR1 = {}; 
>> error 
> 
> that is expected, AFAICT; a modify operation shouldn't be: 
> * read FILE -> modify -> write FILE 
> but rather: 
> * read FILE -> modify -> write FILE.TMP -> move FILE.TMP to FILE 
> if you want readers to always see valid content. Otherwise, yes, you may have 
> a small time window where the file is truncated. 
> 
> But, file_set_contents - which save_clusterfw_conf uses - does this 
> already[0], 
> so maybe this is the "high-level fuse rename isn't atomic" bug again... 
> May need to take a closer look tomorrow. 
> 
> [0]: 
> https://git.proxmox.com/?p=pve-common.git;a=blob;f=src/PVE/Tools.pm;h=accf6539da94d2b5d5b6f4539310fe5c4d526c7e;hb=HEAD#l213
>  
> 
>> 
>> - Original Message - 
>> From: "aderumier"  
>> To: "pve-devel"  
>> Sent: Tuesday, 8 January 2019 19:15:06 
>> Subject: [pve-devel] firewall : possible bug/race when cluster.fw is 
>> replicated and rules are updated ? 
>> 
>> Hi, 
>> I'm currently debugging a possible firewalling problem. 
>> I'm running some cephfs clients in VMs, firewalled by Proxmox. 
>> cephfs clients are really sensitive to network problems, mainly to 
>> packet loss or dropped packets. 
>> 
>> I'm really not sure, but I currently have puppet updating my cluster.fw at 
>> regular intervals, 
>> and sometimes all the VMs on a specific host (or on multiple hosts) have a 
>> small disconnect at the same time (maybe a few seconds). 
>> 
>> 
>> I would like to know whether cluster.fw replication is atomic in /etc/pve/, 
>> or whether there is any chance that, during file replication, the firewall 
>> reads the file while it is empty. 
>> 
>> 
>> I just wonder (I'm really really not sure) if I could trigger this: 
>> 
>> 
>> sub update { 
>> my $code = sub { 
>> 
>> my $cluster_conf = load_clusterfw_conf(); 
>> my $cluster_options = 

Re: [pve-devel] firewall : possible bug/race when cluster.fw is replicated and rules are updated ?

2019-01-08 Thread Stefan Priebe - Profihost AG
Hi Alexandre,


Am 08.01.19 um 21:55 schrieb Alexandre DERUMIER:
>>> But, file_set_contents - which save_clusterfw_conf uses - does this 
>>> already[0],
>>> so maybe this is the "high-level fuse rename isn't atomic" bug again...
>>> May need to take a closer look tomorrow.
> 
> mmm, ok.
> 
> In my case, it was with a simple file copy (cp /tmp/cluster.fw 
> /etc/pve/firewall/cluster.fw). (I manage cluster.fw through puppet for 
> multiple clusters.)
> I can reproduce it with a simple vi edit, too.
> 
> I think other users could trigger this too, if they use scripts to automate 
> ipset (blacklist, ).
> 
> Maybe it would be better to only disable the firewall when the option is 
> explicitly set to "enabled:0", and, if cluster.fw is missing, not to change 
> any rules.


for those cases i use something like (pseudocode - i use salt not puppet):

- manage copy of file
- if file has changed trigger:
   - mv -v $managedfile $realfile
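A concrete shell version of this trigger could look as follows (a sketch; the function name and paths are assumptions, not Stefan's actual Salt state):

```shell
#!/bin/sh
# sync_managed MANAGEDFILE REALFILE
# Config management drops its copy at MANAGEDFILE; if it differs from
# the live file, mv it over. Keep MANAGEDFILE on the same filesystem as
# REALFILE (e.g. also under /etc/pve): only then is mv a rename(2) and
# atomic; across filesystems mv falls back to copy+unlink.
sync_managed() {
    managedfile=$1
    realfile=$2
    if ! cmp -s "$managedfile" "$realfile"; then
        mv -v "$managedfile" "$realfile"
    fi
}
```

If the files are identical, nothing happens and the managed copy stays in place; `cmp -s` also returns non-zero when the live file does not exist yet, so the first run installs it.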


Greets,
Stefan

> 
> 
> 
> - Original Message -
> From: "Thomas Lamprecht" 
> To: "pve-devel" , "aderumier" 
> Sent: Tuesday, 8 January 2019 20:58:51
> Subject: Re: [pve-devel] firewall : possible bug/race when cluster.fw is 
> replicated and rules are updated ?
> 
> Hi, 
> 
> On 1/8/19 7:37 PM, Alexandre DERUMIER wrote: 
>> I'm able to reproduce with: 
>> --- 
>> on 1 host: 
>>
>> cluster.fw: 
>> [OPTIONS] 
>>
>> enable: 1 
>> policy_in: ACCEPT 
>>
>>
>>
>>
>> #!/usr/bin/perl 
>>
>> use IO::File; 
>> use PVE::Firewall; 
>> use Data::Dumper; 
>> use Time::HiRes qw ( time alarm sleep usleep ); 
>>
>> while(1){ 
>>
>> $filename = "/etc/pve/firewall/cluster.fw"; 
>>
>> if (my $fh = IO::File->new($filename, O_RDONLY)) { 
>>
>> $cluster_conf = PVE::Firewall::parse_clusterfw_config($filename, $fh, 
>> $verbose); 
>> my $cluster_options = $cluster_conf->{options}; 
>>
>> if (!$cluster_options->{enable}) { 
>> print Dumper($cluster_options); 
>> die "error\n"; 
>> } 
>>
>> } 
>> usleep(100); 
>> }; 
>>
>>
>> the script is running fine. 
>>
>>
>> on another host, edit the file (simple open/write), 
>> then the script on first host, return 
>>
>> $VAR1 = {}; 
>> error 
> 
> that is expected, AFAICT; a modify operation shouldn't be: 
> * read FILE -> modify -> write FILE 
> but rather: 
> * read FILE -> modify -> write FILE.TMP -> move FILE.TMP to FILE 
> if you want readers to always see valid content. Otherwise, yes, you may have 
> a small time window where the file is truncated. 
> 
> But, file_set_contents - which save_clusterfw_conf uses - does this 
> already[0], 
> so maybe this is the "high-level fuse rename isn't atomic" bug again... 
> May need to take a closer look tomorrow. 
> 
> [0]: 
> https://git.proxmox.com/?p=pve-common.git;a=blob;f=src/PVE/Tools.pm;h=accf6539da94d2b5d5b6f4539310fe5c4d526c7e;hb=HEAD#l213
>  
> 
>>
>> - Original Message - 
>> From: "aderumier"  
>> To: "pve-devel"  
>> Sent: Tuesday, 8 January 2019 19:15:06 
>> Subject: [pve-devel] firewall : possible bug/race when cluster.fw is 
>> replicated and rules are updated ? 
>>
>> Hi, 
>> I'm currently debugging a possible firewalling problem. 
>> I'm running some cephfs clients in VMs, firewalled by Proxmox. 
>> cephfs clients are really sensitive to network problems, mainly to 
>> packet loss or dropped packets. 
>>
>> I'm really not sure, but I currently have puppet updating my cluster.fw at 
>> regular intervals, 
>> and sometimes all the VMs on a specific host (or on multiple hosts) have a 
>> small disconnect at the same time (maybe a few seconds). 
>>
>>
>> I would like to know whether cluster.fw replication is atomic in /etc/pve/, 
>> or whether there is any chance that, during file replication, the firewall 
>> reads the file while it is empty. 
>>
>>
>> I just wonder (I'm really really not sure) if I could trigger this: 
>>
>>
>> sub update { 
>> my $code = sub { 
>>
>> my $cluster_conf = load_clusterfw_conf(); 
>> my $cluster_options = $cluster_conf->{options}; 
>>
>> if (!$cluster_options->{enable}) { 
>> PVE::Firewall::remove_pvefw_chains(); 
>> return; 
>> } 
>>
>>
>> cluster.fw not readable/absent, so remove_pvefw_chains is called; 
>> then, after some seconds, the rules are applied again. 
>>
>>
>> I'm going to add some logging to try to reproduce it. (BTW, it would be great 
>> to log rule changes; an audit log with a diff would be great.) 

Re: [pve-devel] firewall : possible bug/race when cluster.fw is replicated and rules are updated ?

2019-01-08 Thread Alexandre DERUMIER
>>But, file_set_contents - which save_clusterfw_conf uses - does this 
>>already[0],
>>so maybe this is the "high-level fuse rename isn't atomic" bug again...
>>May need to take a closer look tomorrow.

mmm, ok.

In my case, it was with a simple file copy (cp /tmp/cluster.fw 
/etc/pve/firewall/cluster.fw). (I manage cluster.fw through puppet for multiple 
clusters.)
I can reproduce it with a simple vi edit, too.

I think other users could trigger this too, if they use scripts to automate 
ipset (blacklist, ).

Maybe it would be better to only disable the firewall when the option is 
explicitly set to "enabled:0", and, if cluster.fw is missing, not to change 
any rules.



- Original Message -
From: "Thomas Lamprecht" 
To: "pve-devel" , "aderumier" 
Sent: Tuesday, 8 January 2019 20:58:51
Subject: Re: [pve-devel] firewall : possible bug/race when cluster.fw is 
replicated and rules are updated ?

Hi, 

On 1/8/19 7:37 PM, Alexandre DERUMIER wrote: 
> I'm able to reproduce with: 
> --- 
> on 1 host: 
> 
> cluster.fw: 
> [OPTIONS] 
> 
> enable: 1 
> policy_in: ACCEPT 
> 
> 
> 
> 
> #!/usr/bin/perl 
> 
> use IO::File; 
> use PVE::Firewall; 
> use Data::Dumper; 
> use Time::HiRes qw ( time alarm sleep usleep ); 
> 
> while(1){ 
> 
> $filename = "/etc/pve/firewall/cluster.fw"; 
> 
> if (my $fh = IO::File->new($filename, O_RDONLY)) { 
> 
> $cluster_conf = PVE::Firewall::parse_clusterfw_config($filename, $fh, 
> $verbose); 
> my $cluster_options = $cluster_conf->{options}; 
> 
> if (!$cluster_options->{enable}) { 
> print Dumper($cluster_options); 
> die "error\n"; 
> } 
> 
> } 
> usleep(100); 
> }; 
> 
> 
> the script is running fine. 
> 
> 
> on another host, edit the file (simple open/write), 
> then the script on first host, return 
> 
> $VAR1 = {}; 
> error 

that is expected, AFAICT; a modify operation shouldn't be: 
* read FILE -> modify -> write FILE 
but rather: 
* read FILE -> modify -> write FILE.TMP -> move FILE.TMP to FILE 
if you want readers to always see valid content. Otherwise, yes, you may have a 
small time window where the file is truncated. 

But, file_set_contents - which save_clusterfw_conf uses - does this already[0], 
so maybe this is the "high-level fuse rename isn't atomic" bug again... 
May need to take a closer look tomorrow. 

[0]: 
https://git.proxmox.com/?p=pve-common.git;a=blob;f=src/PVE/Tools.pm;h=accf6539da94d2b5d5b6f4539310fe5c4d526c7e;hb=HEAD#l213
 

> 
- Original Message - 
From: "aderumier"  
To: "pve-devel"  
Sent: Tuesday, 8 January 2019 19:15:06 
Subject: [pve-devel] firewall : possible bug/race when cluster.fw is replicated 
and rules are updated ? 
> 
> Hi, 
> I'm currently debugging a possible firewalling problem. 
> I'm running some cephfs clients in VMs, firewalled by Proxmox. 
> cephfs clients are really sensitive to network problems, mainly to 
> packet loss or dropped packets. 
> 
> I'm really not sure, but I currently have puppet updating my cluster.fw at 
> regular intervals, 
> and sometimes all the VMs on a specific host (or on multiple hosts) have a 
> small disconnect at the same time (maybe a few seconds). 
> 
> 
> I would like to know whether cluster.fw replication is atomic in /etc/pve/, 
> or whether there is any chance that, during file replication, the firewall 
> reads the file while it is empty. 
> 
> 
> I just wonder (I'm really really not sure) if I could trigger this: 
> 
> 
> sub update { 
> my $code = sub { 
> 
> my $cluster_conf = load_clusterfw_conf(); 
> my $cluster_options = $cluster_conf->{options}; 
> 
> if (!$cluster_options->{enable}) { 
> PVE::Firewall::remove_pvefw_chains(); 
> return; 
> } 
> 
> 
> cluster.fw not readable/absent, so remove_pvefw_chains() is called; 
> then after some seconds, the rules are applied again. 
> 
> 
> I'm going to add some logging to try to reproduce it. (BTW, it could be 
> great to log rule changes; maybe an audit log with a diff.) 
> ___ 
> pve-devel mailing list 
> pve-devel@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
> 



Re: [pve-devel] firewall : possible bug/race when cluster.fw is replicated and rules are updated ?

2019-01-08 Thread Thomas Lamprecht
Hi,

On 1/8/19 7:37 PM, Alexandre DERUMIER wrote:
> I'm able to reproduce with:
> ---
> on 1 host:
>
> cluster.fw:
> [OPTIONS]
>
> enable: 1
> policy_in: ACCEPT
>
>
>
>
> #!/usr/bin/perl
>
> use strict;
> use warnings;
> use IO::File;
> use PVE::Firewall;
> use Data::Dumper;
> use Time::HiRes qw(usleep);
>
> my $filename = "/etc/pve/firewall/cluster.fw";
> my $verbose = 0;
>
> while (1) {
>     if (my $fh = IO::File->new($filename, O_RDONLY)) {
>         my $cluster_conf = PVE::Firewall::parse_clusterfw_config($filename, $fh, $verbose);
>         my $cluster_options = $cluster_conf->{options};
>
>         if (!$cluster_options->{enable}) {
>             print Dumper($cluster_options);
>             die "error\n";
>         }
>     }
>     usleep(100);
> }
>
>
> The script runs fine.
>
>
> On another host, edit the file (a simple open/write);
> the script on the first host then prints:
>
> $VAR1 = {};
> error

That is expected, AFAICT. A modify operation shouldn't be:
* read FILE -> modify -> write FILE
but rather:
* read FILE -> modify -> write FILE.TMP -> move FILE.TMP to FILE
if you want readers to always see valid content. Otherwise, yes, you may have a
small time window where the file is truncated.

But, file_set_contents - which save_clusterfw_conf uses - does this already[0],
so maybe this is the "high-level fuse rename isn't atomic" bug again...
May need to take a closer look tomorrow.

[0]: 
https://git.proxmox.com/?p=pve-common.git;a=blob;f=src/PVE/Tools.pm;h=accf6539da94d2b5d5b6f4539310fe5c4d526c7e;hb=HEAD#l213

>
> - Mail original -
> De: "aderumier" 
> À: "pve-devel" 
> Envoyé: Mardi 8 Janvier 2019 19:15:06
> Objet: [pve-devel] firewall : possible bug/race when cluster.fw is replicated 
> and rules are updated ?
>
> Hi, 
> I'm currently debugging a possible firewalling problem. 
> I'm running some cephfs clients in VMs, firewalled by Proxmox. 
> cephfs clients are really sensitive to network problems, mainly packet 
> loss or dropped packets. 
>
> I'm really not sure, but I currently have Puppet updating my cluster.fw at 
> regular intervals, 
> and sometimes all the VMs on a specific host (or multiple hosts) have a 
> small disconnect at the same time (maybe a few seconds). 
>
>
> I would like to know if cluster.fw replication is atomic in /etc/pve/, 
> or if there is any chance that, during file replication, the firewall 
> could read the file while it is empty? 
>
>
> I just wonder (I'm really really not sure) if I could trigger this: 
>
>
> sub update { 
>     my $code = sub { 
>         my $cluster_conf = load_clusterfw_conf(); 
>         my $cluster_options = $cluster_conf->{options}; 
>
>         if (!$cluster_options->{enable}) { 
>             PVE::Firewall::remove_pvefw_chains(); 
>             return; 
>         } 
>
>
> cluster.fw not readable/absent, so remove_pvefw_chains() is called; 
> then after some seconds, the rules are applied again. 
>
>
> I'm going to add some logging to try to reproduce it. (BTW, it could be 
> great to log rule changes; maybe an audit log with a diff.) 
>





Re: [pve-devel] firewall : possible bug/race when cluster.fw is replicated and rules are updated ?

2019-01-08 Thread Alexandre DERUMIER
I'm able to reproduce with:
---
on 1 host:

cluster.fw:
[OPTIONS]

enable: 1
policy_in: ACCEPT




#!/usr/bin/perl

use strict;
use warnings;
use IO::File;
use PVE::Firewall;
use Data::Dumper;
use Time::HiRes qw(usleep);

my $filename = "/etc/pve/firewall/cluster.fw";
my $verbose = 0;

while (1) {
    if (my $fh = IO::File->new($filename, O_RDONLY)) {
        my $cluster_conf = PVE::Firewall::parse_clusterfw_config($filename, $fh, $verbose);
        my $cluster_options = $cluster_conf->{options};

        if (!$cluster_options->{enable}) {
            print Dumper($cluster_options);
            die "error\n";
        }
    }
    usleep(100);
}


The script runs fine.


On another host, edit the file (a simple open/write);
the script on the first host then prints:

$VAR1 = {};
error
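
The empty result above is consistent with a plain overwrite: opening a file with `'>'` truncates it immediately, before any new data is written, so a concurrent reader can observe zero bytes. A small standalone demo of that window (the path is made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Shows why a plain open-for-write can expose an empty file: open(..., '>')
# truncates the target right away, before the new content lands.
sub truncation_window {
    my ($f) = @_;
    open(my $w, '>', $f) or die "open $f failed: $!\n";
    print {$w} "old content\n";
    close $w;

    # Re-open for overwrite: '>' truncates the file immediately ...
    open($w, '>', $f) or die "open $f failed: $!\n";
    my $size_during = -s $f;    # ... so a reader at this instant sees 0 bytes
    print {$w} "new content\n";
    close $w;
    unlink $f;
    return $size_during;
}

print "size observable during overwrite: ",
    truncation_window("/tmp/truncate_demo_$$"), " bytes\n";
```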





- Mail original -
De: "aderumier" 
À: "pve-devel" 
Envoyé: Mardi 8 Janvier 2019 19:15:06
Objet: [pve-devel] firewall : possible bug/race when cluster.fw is replicated 
and rules are updated ?

Hi, 
I'm currently debugging a possible firewalling problem. 
I'm running some cephfs clients in VMs, firewalled by Proxmox. 
cephfs clients are really sensitive to network problems, mainly packet loss 
or dropped packets. 

I'm really not sure, but I currently have Puppet updating my cluster.fw at 
regular intervals, 
and sometimes all the VMs on a specific host (or multiple hosts) have a small 
disconnect at the same time (maybe a few seconds). 


I would like to know if cluster.fw replication is atomic in /etc/pve/, 
or if there is any chance that, during file replication, the firewall could 
read the file while it is empty? 


I just wonder (I'm really really not sure) if I could trigger this: 


sub update { 
    my $code = sub { 
        my $cluster_conf = load_clusterfw_conf(); 
        my $cluster_options = $cluster_conf->{options}; 

        if (!$cluster_options->{enable}) { 
            PVE::Firewall::remove_pvefw_chains(); 
            return; 
        } 


cluster.fw not readable/absent, so remove_pvefw_chains() is called; 
then after some seconds, the rules are applied again. 


I'm going to add some logging to try to reproduce it. (BTW, it could be great 
to log rule changes; maybe an audit log with a diff.) 



[pve-devel] firewall : possible bug/race when cluster.fw is replicated and rules are updated ?

2019-01-08 Thread Alexandre DERUMIER
Hi,
I'm currently debugging a possible firewalling problem.
I'm running some cephfs clients in VMs, firewalled by Proxmox.
cephfs clients are really sensitive to network problems, mainly packet loss 
or dropped packets.

I'm really not sure, but I currently have Puppet updating my cluster.fw at 
regular intervals,
and sometimes all the VMs on a specific host (or multiple hosts) have a small 
disconnect at the same time (maybe a few seconds).


I would like to know if cluster.fw replication is atomic in /etc/pve/,
or if there is any chance that, during file replication, the firewall could 
read the file while it is empty?


I just wonder (I'm really really not sure) if I could trigger this:


sub update {
    my $code = sub {
        my $cluster_conf = load_clusterfw_conf();
        my $cluster_options = $cluster_conf->{options};

        if (!$cluster_options->{enable}) {
            PVE::Firewall::remove_pvefw_chains();
            return;
        }


cluster.fw not readable/absent, so remove_pvefw_chains() is called;
then after some seconds, the rules are applied again.


I'm going to add some logging to try to reproduce it. (BTW, it could be great 
to log rule changes; maybe an audit log with a diff.)
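
One defensive option for the failure mode described above, sketched under assumptions: this is NOT the actual PVE::Firewall code, and the three callbacks are invented for illustration. The idea is to distinguish "config read and explicitly disabled" from "config unreadable", and keep the current ruleset in the latter case:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical guard (not PVE code): only flush the firewall chains when the
# config was actually read and explicitly disables the firewall; on a read
# error, keep the current ruleset untouched instead of removing the chains.
sub update_guarded {
    my ($load_conf, $remove_chains, $apply_rules) = @_;

    my $conf = eval { $load_conf->() };
    if ($@ || !defined($conf)) {
        warn "cluster.fw unreadable, keeping current rules: $@";
        return 'kept';
    }
    if (!$conf->{options}->{enable}) {
        $remove_chains->();    # firewall explicitly disabled (or unset)
        return 'removed';
    }
    $apply_rules->($conf);
    return 'applied';
}
```

This trades "fail closed" for "fail with last known rules"; whether that is acceptable depends on the security posture, as a stale ruleset may also be wrong.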


[pve-devel] applied: [PATCH ifupdown2 0/3] bump to 1.2.2

2019-01-08 Thread Thomas Lamprecht
On 12/31/18 11:20 AM, Alexandre Derumier wrote:
> This need to update mirror_ifupdown2 to tag 1.2.2-1
> 
> Alexandre Derumier (3):
>   Makefile : bump to 1.2.2
>   changelog : dump to 1.2.2
>   update config tuning patch
> 
>  Makefile|  2 +-
>  debian/changelog|  6 +
>  debian/patches/pve/0001-config-tuning.patch | 34 
> ++---
>  3 files changed, 24 insertions(+), 18 deletions(-)
> 

Applied the series, thanks! But I squashed the separate "version bump" patches
for the makefile/changelog into one and reordered them, so that the submodule
gets updated first, then the "update config tuning patch" is applied, and the
version bump comes last. Hope this is OK for you.

Also pushed the 1.2.2-1 tag. While 1.2.3-1 is available (released at about the
same time you sent those patches), it only includes a fix for sysvinit systems,
which we do not support on the PVE host itself, so I did not worry about that
one.

Side question: our mirrored master branch is currently quite empty and has a
single "development happens on master-next" README; is this still true? The
upstream repo's master is currently newer than upstream/master-next, and only
master contained the 1.2.2-1 commit (thus I only pushed the tag for now). Do
you know anything about this? Looking at it now, I'd assume that master is now
the stable dev tree?
