[pve-devel] Snapshots over DRBD

2016-07-27 Thread Gilou
Hi,

I just set up a PoC using Proxmox 4.2 & DRBD 9, and realized afterwards
that I can't snapshot those.

I remember a discussion about someone working on patching it, but I
guess it didn't make it through... I see that DRBD cleverly uses
drbdmanage to set each thin LV as primary accordingly; would it be
harder to allow DRBD snapshots than local snapshots?
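
For what it's worth, at the thin-LVM level the snapshot itself looks
cheap; a rough sketch of what I mean, with made-up volume group and LV
names (the real question is how drbdmanage would coordinate it across
nodes):

# names below are invented; the point is only that the backing store is
# thin LVM, so a snapshot is a cheap metadata operation on the local node
lvs drbdpool
lvcreate -s -n vm-100-disk-1-snap1 drbdpool/vm-100-disk-1
lvs drbdpool    # the snapshot shows up as another thin LV sharing the same pool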

Cheers,

Gilles


Re: [pve-devel] Upload and import disk images

2016-01-14 Thread Gilou
On 13/01/2016 12:17, Timo Grodzinski wrote:
> Hi,
> 
> we plan to enable the user to upload and import disk images (qcow2,
> vmdk, raw, ... ).
> 
> Is anyone interested in this feature to go upstream in the public pve repos?
> 
> Any suggestions, hints or feature requests?
> 
> Best regards,
> Timo Grodzinski

Hi,

That is a good idea; for now it requires quite a bit of know-how to
import existing images, so that would be nice.
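
For now I do it by hand, roughly like this (the storage name, VM ID and
paths are just examples from my setup):

# convert an uploaded VMDK to qcow2 and drop it where a directory storage expects it
# (VM 105 and the 'local' storage are only examples)
qemu-img convert -p -f vmdk -O qcow2 /tmp/uploaded.vmdk \
    /var/lib/vz/images/105/vm-105-disk-1.qcow2
# then reference it from the VM config, e.g. in /etc/pve/qemu-server/105.conf:
#   virtio0: local:105/vm-105-disk-1.qcow2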

As for features, it doesn't even have to stop at the image, it could
cover the config too, hehe... Why not go as far as importing an
.ovf/.ova? ;)

Cheers,

Gilles



Re: [pve-devel] Proxmox 4 feedback

2015-10-09 Thread Gilou
On 09/10/2015 18:55, Michael Rasmussen wrote:
> On Fri, 9 Oct 2015 18:17:52 +0200
> Gilou <contact+...@gilouweb.com> wrote:
> 
>>
>> Maybe related, I have a lot of that in the logs:
>> Oct 09 18:14:56 pve2 pve-ha-lrm[1224]: watchdog update failed - Broken pipe
>>
> Are you running your nodes in virtualbox?
> 
> When I test in virtualbox I see the same in the syslog window.


Those are 3 real nodes, running on Dell T1700 machines with 2 NICs.



Re: [pve-devel] Proxmox 4 feedback

2015-10-09 Thread Gilou
On 09/10/2015 18:43, Thomas Lamprecht wrote:
> What's your cluster setup? Three nodes?

3 nodes, yes

> 
> Output from:
> # ha-manager status

root@pve2:~# date
Fri Oct  9 19:00:32 CEST 2015
root@pve2:~# ha-manager status
quorum OK
master pve2 (old timestamp - dead?, Fri Oct  9 18:57:03 2015)
lrm pve1 (active, Fri Oct  9 19:00:30 2015)
lrm pve2 (active, Fri Oct  9 19:00:31 2015)
lrm pve3 (active, Fri Oct  9 19:00:30 2015)

I removed the service manually, I'm going to start again and record it...

>> They were down (so I guess the fencing worked). But still?
> 'They were down' means they rebooted in the meantime?
> If the services are frozen it means for sure that a graceful shutdown
> of the LRM happened; that normally also comes together with a graceful
> shutdown of the whole node.

Nothing happened but the cables being unplugged.



Re: [pve-devel] Proxmox 4 feedback

2015-10-09 Thread Gilou
On 09/10/2015 19:01, Gilou wrote:
> On 09/10/2015 18:43, Thomas Lamprecht wrote:
>> What's your cluster setup? Three nodes?
> 
> 3 nodes, yes
> 
>>
>> Output from:
>> # ha-manager status
> 
> root@pve2:~# date
> Fri Oct  9 19:00:32 CEST 2015
> root@pve2:~# ha-manager status
> quorum OK
> master pve2 (old timestamp - dead?, Fri Oct  9 18:57:03 2015)
> lrm pve1 (active, Fri Oct  9 19:00:30 2015)
> lrm pve2 (active, Fri Oct  9 19:00:31 2015)
> lrm pve3 (active, Fri Oct  9 19:00:30 2015)
> 
> I removed the service manually, I'm going to start again and record it...
> 
>>> They were down (so I guess the fencing worked). But still?
>> 'They were down' means they rebooted in the meantime?
>> If the services are frozen it means for sure that a graceful shutdown
>> of the LRM happened; that normally also comes together with a graceful
>> shutdown of the whole node.
> 
> Nothing happened but the cables being unplugged.
> 

OK, well, I'm going to start fresh, I don't know what is happening:
failed state.
Oct 09 19:05:49 pve2 watchdog-mux[1619]: watchdog set timeout: Invalid argument
Oct 09 19:05:49 pve2 systemd[1]: watchdog-mux.service: main process exited, code=exited, status=1/FAILURE
Oct 09 19:05:49 pve2 systemd[1]: Unit watchdog-mux.service entered failed state.
Oct 09 19:05:49 pve2 watchdog-mux[1621]: watchdog set timeout: Invalid argument
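
For reference, this is what I'm checking on the node when that happens;
nothing fancy, just the standard softdog/systemd bits (adjust if you use
a hardware watchdog):

# is the watchdog device there, and which module provides it?
ls -l /dev/watchdog
lsmod | grep -i -e softdog -e wdt
dmesg | grep -i watchdog
# state and logs of the mux and of the HA daemons
systemctl status watchdog-mux pve-ha-lrm pve-ha-crm
journalctl -u watchdog-mux -b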


But this is the second time I've tried this, and it's inconsistent and
not really pleasant. I'll come back with a full install of PVE 4 as of
the release and try again. Stay tuned...




Re: [pve-devel] Proxmox 4 feedback

2015-10-09 Thread Gilou
On 09/10/2015 18:36, Gilou wrote:
> On 09/10/2015 18:21, Dietmar Maurer wrote:
>>> So I tried again.. HA doesn't work.
>>> Both resources are now frozen (?), and they didn't restart... Even after
>>> 5 minutes...
>>> service vm:102 (pve1, freeze)
>>> service vm:303 (pve1, freeze)
>>
>> The question is why they are frozen. The only action which
>> puts them to 'freeze' is when you shut down a node.
>>
> 
> I pulled the ethernet cables out of the to-be-failing node when I
> tested. It didn't shut down. I plugged them back in 20 minutes later.
> They were down (so I guess the fencing worked). But still?
> 

OK, so I reinstalled 3 nodes fresh from the PVE 4 ISO; they use a single
NIC to communicate with an NFS server and with each other. The cluster
is up, and one VM is protected:
# ha-manager status
quorum OK
master pve1 (active, Fri Oct  9 19:55:06 2015)
lrm pve1 (active, Fri Oct  9 19:55:12 2015)
lrm pve2 (active, Fri Oct  9 19:55:07 2015)
lrm pve3 (active, Fri Oct  9 19:55:10 2015)
service vm:100 (pve2, started)
# pvecm status
Quorum information
------------------
Date:             Fri Oct  9 19:55:22 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x0001
Ring ID:          12
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
    0x0002          1 192.168.44.129
    0x0003          1 192.168.44.132
    0x0001          1 192.168.44.143 (local)

On one of the nodes (incidentally, the one running the HA VM), I already
get these:
Oct 09 19:55:07 pve2 pve-ha-lrm[1211]: watchdog update failed - Broken pipe

Not good.
I tried to migrate it to pve1 to see what happens:
Executing HA migrate for VM 100 to node pve1
unable to open file '/etc/pve/ha/crm_commands.tmp.3377' - No such file or directory
TASK ERROR: command 'ha-manager migrate vm:100 pve1' failed: exit code 2

OK... so we can't migrate running HA VMs? What did I get wrong here?
So I remove the VM from HA, migrate it to pve1, and see what happens. It
works. OK. I stop the VM. Enable HA. It won't start.
service vm:100 (pve1, freeze)
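
For the record, the sequence I went through was roughly this (from
memory, with VM 100 as above):

ha-manager remove vm:100    # take the VM out of HA management
qm migrate 100 pve1         # offline migration then works fine
ha-manager add vm:100       # put it back under HA -> it immediately shows 'freeze'
ha-manager status           # ...and it never gets started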

OK. And now, on pve1:
Oct 09 19:59:16 pve1 pve-ha-crm[1202]: watchdog update failed - Broken pipe

OK... Let's try pve3: cold migrate without HA, enable HA again...
Interesting, now we have:
# ha-manager status
quorum OK
master pve1 (active, Fri Oct  9 20:09:46 2015)
lrm pve1 (old timestamp - dead?, Fri Oct  9 19:58:57 2015)
lrm pve2 (active, Fri Oct  9 20:09:47 2015)
lrm pve3 (active, Fri Oct  9 20:09:50 2015)
service vm:100 (pve3, started)

Why is pve1 not reporting properly...

And now on 3 nodes:
Oct 09 20:10:40 pve3 pve-ha-lrm[1208]: watchdog update failed - Broken pipe
Oct 09 20:10:50 pve3 pve-ha-lrm[1208]: watchdog update failed - Broken pipe
Oct 09 20:11:00 pve3 pve-ha-lrm[1208]: watchdog update failed - Broken pipe

WTF? omping reports that multicast is getting through, but I'm not sure
what the issue would be there... It worked on 3.4 on the same physical
setup. So?
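
(For reference, this is the multicast test I ran, more or less straight
from the wiki, started on all three nodes at the same time; the
hostnames are mine:)

# each node should see multicast replies from the two others
omping -c 600 -i 1 -q pve1 pve2 pve3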



Re: [pve-devel] Proxmox 4 feedback

2015-10-09 Thread Gilou
On 09/10/2015 20:14, Gilou wrote:
> [...]

Well, I still tried to see some failover, so I unplugged pve3, which had
the VM, and something happened:

Oct  9 20:18:26 pve1 pve-ha-crm[1202]: node 'pve3': state changed from 'online' => 'unknown'
Oct  9 20:19:16 pve1 pve-ha-crm[1202]: service 'vm:100': state changed from 'started' to 'fence'
Oct  9 20:19:16 pve1 pve-ha-crm[1202]: node 'pve3': state changed from 'unknown' => 'fence'
Oct  9 20:20:26 pve1 pve-ha-crm[1202]: successfully acquired lock 'ha_agent_pve3_lock'
Oct  9 20:20:26 pve1 pve-ha-crm[1202]: fencing: acknowleged - got agent lock for node 'pve3'
Oct  9 20:20:26 pve1 pve-ha-crm[1202]: node 'pve3': state changed from 'fence' => 'unknown'
Oct  9 20:20:26 pve1 pve-ha-crm[1202]: service 'vm:100': state changed from 'fence' to 'stopped'
Oct  9 20:20:36 pve1 pve-ha-crm[1202]: watchdog update failed - Broken pipe
Oct  9 20:20:36 pve1 pve-ha-crm[1202]: service 'vm:100': state changed from 'stopped' to 'started' (node = pve1)
Oct  9 20:20:36 pve1 pve-ha-crm[1202]: service 'vm:100': state changed from 'started' to 'freeze'

OK, frozen. Great.
root@pve1:~# ha-manager status
quorum OK
master pve1 (active, Fri Oct  9 20:23:26 2015)
lrm pve1 (old timestamp - dead?, Fri Oct  9 19:58:57 2015)
lrm pve2 (active, Fri Oct  9 20:23:27 2015)
lrm pve3 (old timestamp - dead?, Fri Oct  9 20:18:10 2015)
service vm:100 (pve1, freeze)

What to do?
(And starting it manually doesn't work... the only way out is to pull it
out of HA... and then it's the same circus all over again.)





Re: [pve-devel] Proxmox 4 feedback

2015-10-09 Thread Gilou
On 09/10/2015 20:27, Gilou wrote:
> [...]

Re: [pve-devel] Proxmox 4 feedback

2015-10-04 Thread Gilou
On 04/10/2015 10:50, Thomas Lamprecht wrote:
> 
> 
> On 03.10.2015 at 22:40, Gilou wrote:
>> On 02/10/2015 15:18, Thomas Lamprecht wrote:
>>>
>>> On 10/02/2015 11:59 AM, Gilou wrote:
>>>> Hi,
>>>>
>>>> I just installed PVE 4 Beta2 (43 ?), and played a bit with it.
>>>>
>>>> I do not notice the same bug I had on 3.4 with Windows 2012
>>>> rollbacks of
>>>> snapshots: it just works, that is great.
>>>>
>>>> However, I keep on getting an error on different pages: "Too many
>>>> redirections (599)". Any clue what could cause that? It happens even
>>>> more often on the storage contents...
>>> How do you connect to the web interface, network and browser? We do not
>>> redirect from our proxy, AFAIK.
>>>> I have an issue installing it over an old debian (that was running PVE
>>>> 3.4), it seems it has a hard time properly partitioning the local
>>>> disk,
>>>> I'll investigate a bit further on that.
>>> How did you install it over the old debian? PVE4 is based on debian
>>> jessie, whereas PVE3.4 is based on wheezy so an upgrade is needed, but
>>> that's not that trivial.
>>> You can install PVE4 on a freshly installed plain debian jessie, though.
>> I did it that way (I didn't want to try the upgrade from Debian,
>> assuming it would end badly ;)). But it seems that it barfed on the
>> partition layout. I deleted the partition table, and it installed just
>> fine. I haven't reproduced it or troubleshot it any further.
>>
>> Also, I managed to kill HA, it was working fine on PVE3, but in PVE4 it
>> just didn't work... I have to read more about the new resource manager,
>> but I wasn't impressed by the reliability of the beta overall. I'll get
>> back to it next week and investigate further.
> How did you kill HA? And what didn't seem reliable? We welcome bug
> reports!
> I did only some small stuff with 3.4 HA stack and a lot with the 4.0beta
> stack, and for me the new HA manager was easier to configure and also
> reliable on my test cases.
> Although I have to say that as someone who develops on and with the ha
> manager I'm probably a bit biased. :-)
> Look at:
> http://pve.proxmox.com/wiki/High_Availability_Cluster_4.x
> and
>> man ha-manager
> those should give some information how it works.

That's what I intend to read more thoroughly, though I already skimmed
through it while I set it up. HA simply didn't work... Softdog behaving
erratically, corosync reporting a quorate cluster (and multicast
behaving properly), while the resource manager was unable to start, let
alone protect or migrate VMs...

I will investigate and report back (unless I find I did something
wrong). I also tried to explicitly separate the NIC for the filer from
the management/cluster one, but it didn't work (and I can't say I
believe it's actually being checked, but it's mentioned in the docs ;))
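
To be clear about what I mean by separating them, this is roughly the
layout I tried in /etc/network/interfaces (interface names and addresses
are only placeholders):

# management + corosync cluster traffic on the first NIC
auto vmbr0
iface vmbr0 inet static
        address 192.168.44.143
        netmask 255.255.255.0
        gateway 192.168.44.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# dedicated NIC towards the NFS filer
auto eth1
iface eth1 inet static
        address 10.0.0.143
        netmask 255.255.255.0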

Also, I won't insist too much, but LXC live migration is a requirement
for a proper upgrade plan, as it works with OpenVZ, and it's not
possible for us to lose that feature.

Cheers,
Gilles


Re: [pve-devel] Proxmox 4 feedback

2015-10-03 Thread Gilou
On 02/10/2015 15:18, Thomas Lamprecht wrote:
> 
> 
> On 10/02/2015 11:59 AM, Gilou wrote:
>> Hi,
>>
>> I just installed PVE 4 Beta2 (43 ?), and played a bit with it.
>>
>> I do not notice the same bug I had on 3.4 with Windows 2012 rollbacks of
>> snapshots: it just works, that is great.
>>
>> However, I keep on getting an error on different pages: "Too many
>> redirections (599)". Any clue what could cause that? It happens even
>> more often on the storage contents...
> How do you connect to the web interface, network and browser? We do not
> redirect from our proxy, AFAIK.
>>
>> I have an issue installing it over an old debian (that was running PVE
>> 3.4), it seems it has a hard time properly partitioning the local disk,
>> I'll investigate a bit further on that.
> How did you install it over the old debian? PVE4 is based on debian
> jessie, whereas PVE3.4 is based on wheezy so an upgrade is needed, but
> that's not that trivial.
> You can install PVE4 on a freshly installed plain debian jessie, though.

I did it that way (I didn't want to try the upgrade from Debian,
assuming it would end badly ;)). But it seems that it barfed on the
partition layout. I deleted the partition table, and it installed just
fine. I haven't reproduced it or troubleshot it any further.
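
(In case someone hits the same thing: blanking the old partition table
before rerunning the installer was enough for me; something along these
lines, with the disk name adapted of course:)

# wipe filesystem/LVM signatures and the partition table on the target disk
wipefs -a /dev/sdX
# or, more bluntly, zero the first MiB
dd if=/dev/zero of=/dev/sdX bs=1M count=1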

Also, I managed to kill HA; it was working fine on PVE 3, but in PVE 4
it just didn't work... I have to read more about the new resource
manager, but I wasn't impressed by the reliability of the beta overall.
I'll get back to it next week and investigate further.

Working on a 4.x kernel & with LXC is quite refreshing though...

Gilles


[pve-devel] Proxmox 4 feedback

2015-10-02 Thread Gilou
Hi,

I just installed PVE 4 Beta2 (43 ?), and played a bit with it.

I do not notice the same bug I had on 3.4 with Windows 2012 rollbacks of
snapshots: it just works, that is great.

However, I keep on getting an error on different pages: "Too many
redirections (599)". Any clue what could cause that? It happens even
more often on the storage contents...

I have an issue installing it over an old Debian (that was running PVE
3.4); it seems it has a hard time properly partitioning the local disk.
I'll investigate a bit further on that.

Cheers,
Gilles


Re: [pve-devel] Running KVM as root is a security issue

2015-07-27 Thread Gilou
On 27/07/2015 15:29, Eric Blevins wrote:
> I have no idea if CVE-2015-5154 that Stephan inquired about affects Proxmox.
> 
> But when I see exploits like that, the first thought in my mind is how
> easy it would be for such an exploit to get root on the Proxmox host.
> 
> I've done some experimenting. If I take the KVM command as generated
> by Proxmox and simply add -runas nobody, the VM starts up and runs
> without a problem.
> 
> However, when I try to open a console the KVM process fails.
> I suspect this is just a permissions issue in creating the socket, but
> I have not investigated.
> 
> A patch exists to prevent a crash when a socket cannot be opened.
> https://lists.gnu.org/archive/html/qemu-devel/2015-05/msg00577.html
> 
> Any chance this security issue can be fixed before the 4.0 release?
> 
> Eric

Hi,

Maybe it could even go further, allowing some VMs to be separated under
different user names to isolate them from each other somehow?
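
Conceptually something like this (completely made-up user names, just to
illustrate the idea):

# one unprivileged system user per VM...
useradd --system --no-create-home --shell /usr/sbin/nologin kvm-vm100
useradd --system --no-create-home --shell /usr/sbin/nologin kvm-vm101
# ...and the generated KVM command line would gain, per VM, something like:
#   -runas kvm-vm100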

Cheers
Gilles