Re: [pve-devel] Make LXC code independent of storage plugin names

2016-09-18 Thread Fabian Grünbichler
On Sat, Sep 17, 2016 at 10:25:08PM +0300, Dmitry Petuhov wrote:
> Please consider applying this patchset. Last week I began using it with my
> Netapp (https://github.com/mityarzn/pve-storage-custom-mpnetapp) and Dell
> PS-series (https://github.com/mityarzn/pve-storage-custom-dellps) SAN
> plugins, as well as with the 'local' standard storage (both raw and subvol
> formats). All seems to work.
> 

I sent you some feedback on patch #1 yesterday. Since it breaks stuff,
it can't be merged (yet). Feel free to send an updated v2 (or, if you
feel the feedback requires discussion, feel free to respond).

Thanks!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] intel pstate: wrong cpu frequency with performance governor

2016-09-18 Thread Dietmar Maurer
> The cpufreq and intel pstate driver were somewhat broken in 4.4; there were
> a lot of changes in 4.5 or 4.6 (can't remember). I'm using around 20 cpufreq
> patches (also a lot of optimizations) in 4.4.
> 
> I grabbed those from Mr. Hoffstaette, who has his own repo of 4.4 patches
> and backports.
> 
> See here and look for the prefix cpufreq:
> https://github.com/hhoffstaette/kernel-patches/tree/master/4.4.21

@alexandre: can you please test whether that solves your problem?

@stefan: Do you also use the btrfs patches from hhoffstaette/... ?



Re: [pve-devel] intel pstate: wrong cpu frequency with performance governor

2016-09-18 Thread Stefan Priebe - Profihost AG
The cpufreq and intel pstate driver were somewhat broken in 4.4; there were a
lot of changes in 4.5 or 4.6 (can't remember). I'm using around 20 cpufreq
patches (also a lot of optimizations) in 4.4.

I grabbed those from Mr. Hoffstaette, who has his own repo of 4.4 patches and
backports.

See here and look for the prefix cpufreq:
https://github.com/hhoffstaette/kernel-patches/tree/master/4.4.21

Greets,
Stefan

Excuse my typos; sent from my mobile phone.

> On 19.09.2016 at 06:49, Dietmar Maurer wrote:
> 
> 
>> and I have found that 3 hosts of my 15-host cluster have the wrong cpu
>> frequency.
>> 
>> All nodes are Dell R630, with Xeon v3 3.1 GHz (all with the latest
>> BIOS/microcode updates, latest Proxmox kernel).
> 
> model name: Intel(R) Xeon(R) CPU E3-1231 v3 @ 3.40GHz
> 
> but I have never observed such problems. I get:
> 
> # cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq 
> 340
> 


Re: [pve-devel] intel pstate: wrong cpu frequency with performance governor

2016-09-18 Thread Dietmar Maurer

> and I have found that 3 hosts of my 15-host cluster have the wrong cpu
> frequency.
> 
> All nodes are Dell R630, with Xeon v3 3.1 GHz (all with the latest
> BIOS/microcode updates, latest Proxmox kernel).

model name  : Intel(R) Xeon(R) CPU E3-1231 v3 @ 3.40GHz

but I have never observed such problems. I get:

# cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq 
340
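
To spot per-core mismatches like the ones described above, it can help to compare `cpuinfo_cur_freq` across all cores instead of checking cpu0 alone. A minimal sketch (standard cpufreq sysfs paths; the 100 MHz tolerance is an arbitrary choice, not a Proxmox default):

```python
import glob
import os

def read_cpu_freqs(sysfs_root="/sys/devices/system/cpu"):
    """Read cpuinfo_cur_freq (in kHz) for every CPU; None if unreadable."""
    freqs = {}
    for cpu_dir in sorted(glob.glob(os.path.join(sysfs_root, "cpu[0-9]*"))):
        path = os.path.join(cpu_dir, "cpufreq", "cpuinfo_cur_freq")
        try:
            with open(path) as f:
                freqs[os.path.basename(cpu_dir)] = int(f.read().strip())
        except (OSError, ValueError):
            # cpufreq driver absent for this core, or file not readable
            freqs[os.path.basename(cpu_dir)] = None
    return freqs

def outliers(freqs, tolerance_khz=100_000):
    """Return CPUs whose frequency lags the fastest core by more than tolerance."""
    known = [f for f in freqs.values() if f is not None]
    if not known:
        return {}
    top = max(known)
    return {cpu: f for cpu, f in freqs.items()
            if f is not None and top - f > tolerance_khz}
```

Run on a node, `outliers(read_cpu_freqs())` should be empty when the performance governor is behaving.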



Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-18 Thread Dietmar Maurer
> It's becoming difficult to manage, as we can't easily migrate VMs across
> clusters

But is this difficult because there is no shared storage in that case?



Re: [pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-18 Thread Dietmar Maurer
> I'm not an expert in cluster messaging, but I have found some projects which
> seem interesting:
> 
> serf:
> https://www.serf.io/intro/index.html
> consul:
> https://www.consul.io/

What problem do you want to solve exactly? Only more nodes? Or nodes
between physically distant data centers? 

An alternative plan would be to write a management tool which can deal
with multiple (corosync) clusters. I have always wanted such a tool,
and I guess it would not be very hard to write one.

Things like HA failover would still be restricted to a single
corosync cluster. What do you think about this idea?
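
As a rough illustration of that idea (entirely hypothetical; `ClusterClient` here is a stand-in for a real per-cluster Proxmox API client, not an existing interface), the tool would mostly aggregate views across independent clusters:

```python
class ClusterClient:
    """Hypothetical stand-in for an API client bound to one corosync cluster."""
    def __init__(self, name, vms):
        self.name = name
        self._vms = vms  # {vmid: node} — would come from the cluster's API

    def list_vms(self):
        return dict(self._vms)


class MultiClusterManager:
    """Aggregates several independent clusters; each keeps its own corosync."""
    def __init__(self, clients):
        self.clients = clients

    def all_vms(self):
        """Flat inventory across clusters as (cluster, vmid, node) tuples."""
        inventory = []
        for c in self.clients:
            for vmid, node in sorted(c.list_vms().items()):
                inventory.append((c.name, vmid, node))
        return inventory

    def locate(self, vmid):
        """Find which cluster/node a VM lives on (first match wins)."""
        for c in self.clients:
            node = c.list_vms().get(vmid)
            if node is not None:
                return (c.name, node)
        return None
```

Cross-cluster migration would then be a `locate` on the source side plus an export/import against the two clusters' APIs, while HA stays per-cluster as noted above.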



[pve-devel] question/idea : managing big proxmox cluster (100nodes), get rid of corosync ?

2016-09-18 Thread Alexandre DERUMIER
Hi,

I'm looking ahead (1-2 years) at managing big Proxmox clusters
(50-100 nodes).

Currently I manage multiple clusters with at most 16 nodes each, because
corosync && multicast can be very difficult to tune.
(I'm also managing clusters across datacenters.)

It's becoming difficult to manage, as we can't easily migrate VMs across
clusters, and we need to connect to different clusters to manage VMs.



I'm not an expert in cluster messaging, but I have found some projects which 
seem interesting:

serf:
https://www.serf.io/intro/index.html

a decentralized cluster messaging system, based on a gossip protocol
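
For intuition, here is a toy simulation of push-gossip dissemination, the general idea serf's membership layer builds on (this is not serf's actual algorithm — just the principle that each informed node tells a few random peers per round, so information spreads without multicast):

```python
import random

def gossip_rounds(num_nodes, seeds, fanout=2, max_rounds=1000, rng=None):
    """Simulate push-gossip: each round, every informed node tells `fanout`
    random peers. Returns the number of rounds until all nodes are informed,
    or None if max_rounds is exhausted."""
    rng = rng or random.Random()
    informed = set(seeds)
    if len(informed) >= num_nodes:
        return 0
    for rnd in range(1, max_rounds + 1):
        updates = set()
        for node in informed:
            peers = [p for p in range(num_nodes) if p != node]
            updates.update(rng.sample(peers, min(fanout, len(peers))))
        informed |= updates
        if len(informed) == num_nodes:
            return rnd
    return None
```

The appeal for large clusters is that per-node traffic stays O(fanout) per round while convergence takes roughly O(log n) rounds, instead of relying on multicast reaching everyone at once.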



consul:
https://www.consul.io/

cluster messaging (it uses serf) + distributed key-value store + service
management
(some beta FUSE-fs implementations exist too)



I don't know how much work it would be to switch away from corosync, pmxcfs, etc.

Maybe something like serf is enough to replace corosync (and keep pmxcfs, HA
management, ...).
consul looks like corosync + pmxcfs + HA service management.


So, it's just an idea, but I think getting rid of multicast/corosync could be
a great step for Proxmox.