[pve-devel] updated 3.10.0 kernel

2014-10-15 Thread Dietmar Maurer
Just uploaded a 3.10.0 kernel update to pvetest:

pve-kernel-3.10.0-5-pve_3.10.0-19_amd64.deb

Please test,

- Dietmar


Re: [pve-devel] updated 3.10.0 kernel

2014-10-15 Thread Alexandre DERUMIER
Thanks, I'll try it tomorrow.

- Original Message -

From: Dietmar Maurer diet...@proxmox.com
To: pve-devel@pve.proxmox.com
Sent: Wednesday, 15 October 2014 08:23:51
Subject: [pve-devel] updated 3.10.0 kernel



Just uploaded a 3.10.0 kernel update to pvetest:

pve-kernel-3.10.0-5-pve_3.10.0-19_amd64.deb 

Please test, 

- Dietmar 


[pve-devel] pveproxy issue

2014-10-15 Thread Guy Loesch


Hi Dietmar,

I am using Proxmox (great product) in a lab context for my students.
Maybe you remember our mail exchange last year, where we had an issue
with identical MAC address generation when all of my students created
a new machine at the same time. You fixed that bug very quickly.
Thanks again.


This year I have a new issue. Before starting this year's course I  
updated Proxmox to the latest version.


When the students (11 in all) are all working on the Proxmox GUI at the
same time, pveproxy hangs after quite a short time and they get
"Communication failure (0)", "Loading... Server Offline?" or "Time out"
messages. I have to restart the pveproxy service to get back to a
working GUI. All the machines work fine when accessed via noVNC or
remote desktop, so the problem is only related to pveproxy.


Observations:

I did not have that problem last year. Are there any parameters related
to pveproxy that changed with the upgrade?


I discovered the keep-alive value in /usr/bin/pveproxy and tried values
from 10 to 900 (the default is 100), with no positive effect.
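
For reference, this is roughly how I located the value (a sketch; the
exact variable name inside /usr/bin/pveproxy may differ between
versions, so inspect the script first):

# find where the keep-alive constant is defined
grep -n -i 'keep.*alive' /usr/bin/pveproxy
# after editing the value, restart so the workers pick it up
service pveproxy restart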


I installed nginx in a container. Accessing the Proxmox GUI via nginx
improved the situation a lot: I still get the same hangs, but they
appear much later and less often, and they recover after a while. I do
not have to restart pveproxy.

Unfortunately, I have no solution yet to access noVNC this way.
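
For reference, my nginx setup is roughly the following minimal sketch
(hostname and certificate paths are placeholders; writing the config
from the shell is only for illustration):

cat > /etc/nginx/conf.d/pveproxy.conf <<'EOF'
server {
    listen 443 ssl;
    server_name pve.example.org;              # placeholder
    ssl_certificate     /etc/nginx/pve.pem;   # placeholder
    ssl_certificate_key /etc/nginx/pve.key;   # placeholder
    location / {
        # forward everything to pveproxy on its standard port
        proxy_pass https://127.0.0.1:8006;
        proxy_set_header Host $host;
        proxy_buffering off;    # the GUI uses long-polling requests
    }
}
EOF
nginx -t && service nginx reload

The noVNC part presumably also needs the websocket Upgrade headers
forwarded; that is the piece I have not solved yet.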


Is there anything else I can try?

Guy



[pve-devel] Add functionalities

2014-10-15 Thread Serge NOEL
Hello,

we plan to add some functionalities to the qm command:

qm copy_disk vmid disk storage [OPTIONS]
  will behave like move disk, but won't switch the active disk at the end
qm change_disk vmid disk storage [OPTIONS]
  will allow changing the disk used by Proxmox
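
For illustration, intended usage would look like this (the vmid, disk
and storage names are examples):

qm copy_disk 100 virtio0 nas-backup     # copy the disk; active disk unchanged
qm change_disk 100 virtio0 nas-backup   # point the VM at the copy after a failure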

Our needs:

We have 3 Proxmox servers that share a single SAN LVM volume via iSCSI
(IBM DS3300).

In case of a SAN failure, we don't want to wait for a restore... So we
plan to periodically copy VM disks to a NAS volume (qm copy_disk).
When a failure occurs, we just have to switch to the copy (qm
change_disk).

We also plan to make this behaviour available from the web interface
(as a scheduled backup).

We will use the Developer Documentation and publish our work when available.

Please let us know if something similar already exists, or if you plan
to deliver it. We want to push it as a standard feature of Proxmox.

Best regards
Serge NOEL


Re: [pve-devel] pveproxy issue

2014-10-15 Thread Dietmar Maurer

 When the students (11 in all) are all working on the Proxmox GUI at the same
 time, pveproxy hangs after quite a short time and they get "Communication
 failure (0)", "Loading... Server Offline?" or "Time out" messages. I have to
 restart the pveproxy service to get back to a working GUI.

Is there any error message in /var/log/syslog?

Is there a reliable way to reproduce the behavior?
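
For example, something like:

grep pveproxy /var/log/syslog | tail -n 50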




Re: [pve-devel] Add functionalities

2014-10-15 Thread Dietmar Maurer
 In case of a SAN failure, we don't want to wait for a restore... So we
 plan to periodically copy VM disks to a NAS volume (qm copy_disk).
 When a failure occurs, we just have to switch to the copy (qm change_disk).

So you don’t want to use normal backup, because restore takes too long?


Re: [pve-devel] Add functionalities

2014-10-15 Thread Serge NOEL
Exactly. We are working on disaster scenarios and want a quick solution
in case of a SAN failure.
In this case, we imagine that the SAN will be unavailable, and we hope
to be able to restart with the NAS (iSCSI) as a replacement target, if
we can copy all the VMs during normal operations (it seems OK; I tried
to copy the LVM volume group with dd and it works). Then, in case of
failure, all we have to do is tell Proxmox to use the disks from a
different iSCSI (or NFS) target.
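
For illustration, the copy I tried looks roughly like this (device
paths are examples from my setup; the VM should be stopped, or a
snapshot used, so the copy is consistent):

lvs                                        # list the logical volumes on the SAN VG
dd if=/dev/sanvg/vm-100-disk-1 \
   of=/mnt/nas/vm-100-disk-1.raw bs=4M conv=fsync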

Regards


2014-10-15 16:41 GMT+02:00 Dietmar Maurer diet...@proxmox.com:

  In case of a SAN failure, we don't want to wait for a restore... So we
  plan to periodically copy VM disks to a NAS volume (qm copy_disk). When
  a failure occurs, we just have to switch to the copy (qm change_disk).

 So you don’t want to use normal backup, because restore takes too long?



Re: [pve-devel] Add functionalities

2014-10-15 Thread Michael Rasmussen
On Wed, 15 Oct 2014 17:02:25 +0200
Serge NOEL serge.noel2...@gmail.com wrote:

 Exactly. We are working on disaster scenarios and want a quick solution
 in case of a SAN failure.
 In this case, we imagine that the SAN will be unavailable, and we hope
 to be able to restart with the NAS (iSCSI) as a replacement target, if
 we can copy all the VMs during normal operations (it seems OK; I tried
 to copy the LVM volume group with dd and it works). Then, in case of
 failure, all we have to do is tell Proxmox to use the disks from a
 different iSCSI (or NFS) target.
Isn't that dangerous? I mean, your approach will surely cause data
loss, since all changes between the creation of the fall-back disk and
the incident will be lost!

If you need uptime like 99.999%, you should drop your single point of
failure (the SAN) and move to clustered storage, since clustered
storage is the only viable solution for such a high demand on uptime.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
This fortune intentionally says nothing.




Re: [pve-devel] Add functionalities

2014-10-15 Thread Serge NOEL
Sure it is, but:
  we will make one copy per week, like a full backup
  we will make backups of the data in a classical way (rsync or
something similar...)

So we plan to use it as a fast recovery process (which provides a base
machine). It doesn't allow us to bypass backups; it is just a way to
quickly recover services...

Moreover, only the file server has changing files; the others are TSE
servers and stay relatively stable over time, and losing one week of
changes is better than being blocked for a whole day.

Everything is a matter of trade-offs, neither the best nor the worst...

Ideas are welcome...

Serge

2014-10-15 17:22 GMT+02:00 Michael Rasmussen m...@datanom.net:

 On Wed, 15 Oct 2014 17:02:25 +0200
 Serge NOEL serge.noel2...@gmail.com wrote:

  Exactly. We are working on disaster scenarios and want a quick
  solution in case of a SAN failure.
  In this case, we imagine that the SAN will be unavailable, and we hope
  to be able to restart with the NAS (iSCSI) as a replacement target, if
  we can copy all the VMs during normal operations (it seems OK; I tried
  to copy the LVM volume group with dd and it works). Then, in case of
  failure, all we have to do is tell Proxmox to use the disks from a
  different iSCSI (or NFS) target.
 Isn't that dangerous? I mean, your approach will surely cause data
 loss, since all changes between the creation of the fall-back disk and
 the incident will be lost!

 If you need uptime like 99.999%, you should drop your single point of
 failure (the SAN) and move to clustered storage, since clustered
 storage is the only viable solution for such a high demand on uptime.





Re: [pve-devel] Add functionalities

2014-10-15 Thread VELARTIS Philipp Dürhammer
Hi,

a better solution would be to work with LVM snapshots and deltas. You
just have to move the small delta, and therefore you can do this every
hour or even more often.
I don't know if it is also possible with vzdump, but it is also capable
of making deltas. It's not the same as applying the delta directly to
the storage target as with LVM, though...

This way it should be possible to achieve something like VMware already
has. A tool like this would actually be cool in Proxmox for people with
smaller environments.
For bigger ones, and maybe for you, using Ceph instead of your SAN
would be better.
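
A rough sketch of the snapshot side (volume names are examples;
shipping only the changed extents needs a third-party tool such as
lvmsync, whose syntax below is purely illustrative):

# take a snapshot before the initial full copy, then leave it in place
lvcreate -s -L 5G -n vm-100-snap /dev/sanvg/vm-100-disk-1
# later, a tool such as lvmsync can read the snapshot's copy-on-write
# table and transfer only the blocks that changed since the snapshot
# was taken (illustrative syntax):
#   lvmsync /dev/sanvg/vm-100-snap root@nas:/dev/nasvg/vm-100-disk-1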

-Original Message-
From: pve-devel [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of
Michael Rasmussen
Sent: Wednesday, 15 October 2014 17:23
To: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] Add functionalities

On Wed, 15 Oct 2014 17:02:25 +0200
Serge NOEL serge.noel2...@gmail.com wrote:

 Exactly. We are working on disaster scenarios and want a quick
 solution in case of a SAN failure.
 In this case, we imagine that the SAN will be unavailable, and we hope
 to be able to restart with the NAS (iSCSI) as a replacement target, if
 we can copy all the VMs during normal operations (it seems OK; I tried
 to copy the LVM volume group with dd and it works). Then, in case of
 failure, all we have to do is tell Proxmox to use the disks from a
 different iSCSI (or NFS) target.
Isn't that dangerous? I mean, your approach will surely cause data
loss, since all changes between the creation of the fall-back disk and
the incident will be lost!

If you need uptime like 99.999%, you should drop your single point of
failure (the SAN) and move to clustered storage, since clustered
storage is the only viable solution for such a high demand on uptime.



Re: [pve-devel] pveproxy issue

2014-10-15 Thread Guy Loesch
No, unfortunately not; sometimes there is only a warning:

Oct 15 09:26:04 misspiggy pveproxy[996374]: WARNING: proxy detected vanished client connection
Oct 15 09:26:04 misspiggy pveproxy[996374]: WARNING: proxy detected vanished client connection
Oct 15 09:28:22 misspiggy pveproxy[994305]: received terminate request
Oct 15 09:28:22 misspiggy pveproxy[994305]: worker 994306 finished
Oct 15 09:28:22 misspiggy pveproxy[994305]: worker 994307 finished
Oct 15 09:28:22 misspiggy pveproxy[994305]: worker 996374 finished
Oct 15 09:28:22 misspiggy pveproxy[994305]: server closing
Oct 15 09:28:24 misspiggy pveproxy[997353]: starting server
Oct 15 09:28:24 misspiggy pveproxy[997353]: starting 3 worker(s)
Oct 15 09:28:24 misspiggy pveproxy[997353]: worker 997354 started
Oct 15 09:28:24 misspiggy pveproxy[997353]: worker 997355 started
Oct 15 09:28:24 misspiggy pveproxy[997353]: worker 997356 started
Oct 15 09:28:25 misspiggy pveproxy[997356]: WARNING: proxy detected vanished client connection
Oct 15 09:29:54 misspiggy pveproxy[997353]: worker 997354 finished
Oct 15 09:29:54 misspiggy pveproxy[997353]: starting 1 worker(s)

I don't have a reliable way to reproduce the behavior. Students connect
and pveproxy hangs after a relatively short time.

Guy



On 15/10/2014 16:09, Dietmar Maurer wrote:
 When the students (11 in all) are all working on the Proxmox GUI at the same
 time, pveproxy hangs after quite a short time and they get "Communication
 failure (0)", "Loading... Server Offline?" or "Time out" messages. I have to
 restart the pveproxy service to get back to a working GUI.
 Is there any error message in /var/log/syslog?

 Is there a reliable way to reproduce the behavior?




