https://git.proxmox.com/ ?
On Mon, Jun 29, 2015, at 06:24 PM, Albert Dengg wrote:
hi,
it's probably a dumb question, but:
where do i find the sources for the pve-kernel-2.6.32-39-pve
package?
if i try to just add
deb-src https://enterprise.proxmox.com/debian wheezy pve-enterprise
i
This is probably due to blowfish being faster than AES.
Proxmox uses ssh for migrations and other tasks, and since these (mostly)
run on private networks, there is no need for strong encryption.
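For reference, here is how to check which ciphers your client offers and a rough way to measure the difference yourself. The host 10.0.0.2 is a placeholder, and note that blowfish-cbc was later dropped from OpenSSH defaults (and removed in 7.6+):

```shell
# List the ciphers this OpenSSH client supports:
ssh -Q cipher

# Rough throughput test of one cipher against a placeholder host;
# dd prints the transfer rate to stderr when the pipe closes:
dd if=/dev/zero bs=1M count=200 | ssh -c blowfish-cbc 10.0.0.2 'cat > /dev/null'
```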
On Wed, Oct 22, 2014, at 06:42 AM, Simone Piccardi wrote:
Hi,
I got some problems with the
I have the exact same issue. The Java console works, but noVNC gives me the
same error with exit code 1
On Mon, Sep 15, 2014, at 07:20 AM, Dhaussy Alexandre wrote:
No ideas? Or should I blame my English? x)
On 10/09/2014 16:05, Alexandre DHAUSSY wrote:
Hello,
I'm getting a timeout when i try to
Most tests show that the internal PVE webserver (the one on port 8006) is
vulnerable.
On Sun, Apr 13, 2014, at 05:05 AM, Thinker Rix wrote:
Hi all,
are there any findings yet on whether - and if yes, to what extent -
Proxmox is affected by Heartbleed, and what countermeasures should be
taken?
+1 for the HP 1810-24G. I have the v2 in my home lab; great device for its price.
On Sat, Apr 12, 2014, at 11:57 AM, Patrick Westenberg wrote:
Leslie-Alexandre DENIS wrote:
For example, a smart managed HP 1810-24G with 24 RJ45 Gigabit ports costs
around 100-150€ on eBay.
Or a J9279A / 2510-24G which
I have the same problem. One VM's backup sometimes stops at 40-43%,
leaving it locked. Very annoying.
On Tue, Feb 11, 2014, at 10:45 PM, Lindsay Mathieson wrote:
On 12 February 2014 16:03, Dietmar Maurer diet...@proxmox.com wrote:
Several times it has just stopped responding, Java Console and Spice
It's not possible to map a CPU core to a specific VM. You can, however,
pin a VM's KVM process to a specific core.
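A minimal sketch of such pinning, assuming a hypothetical VMID 101, a made-up core list, and a PVE host where qemu-server writes the KVM pid to /var/run/qemu-server/<vmid>.pid:

```shell
VMID=101                                    # hypothetical VM id
PID=$(cat /var/run/qemu-server/$VMID.pid)   # pid of the VM's KVM process
taskset -cp 0,1 "$PID"                      # pin the process to cores 0 and 1
taskset -cp "$PID"                          # show the resulting affinity
```

Note the affinity is lost when the VM restarts, so this has to be reapplied (or hooked into a start script).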
On Thu, Feb 6, 2014, at 01:57 AM, Muhammad Yousuf Khan wrote:
what we assign as CPU in KVM is virtual.
is it possible that we can assign a physical or dedicated core of a
processor to
My personal proxmox server in Germany:
13:43:46 up 444 days, 20:10, 1 user, load average: 0.02, 0.03, 0.01
On Wed, Dec 25, 2013, at 01:43 AM, Lindsay Mathieson wrote:
Mine are pitiful - new install and can't resist fiddling with it :)
Currently
at 5 days.
Anyone over a year?
--
Lindsay
Hello, title says it all. Are there any guides/common mistakes/etc. for
moving a cluster to a different subnet?
___
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
Storage is on a completely separate network with dedicated interfaces, so
there is no problem with that.
So for the cluster I only need resolvable hostnames, as they are written in
/etc/pve/cluster.conf?
On Sun, Dec 15, 2013, at 04:28 AM, Michael Rasmussen wrote:
On Sun, 15 Dec 2013 03:53:53 -0800
Lex
By LVM mirror I mean mirrored LVs, with their legs on separate PVs.
(The PVs are iSCSI-based disks running on two separate servers.)
Like RAID1.
On Fri, Dec 6, 2013, at 03:59 PM, Marco Gabriel - inett GmbH wrote:
-Original Message-
Hello, has anyone tried LVM mirror for storage
Hello, has anyone tried LVM mirror for storage HA?
We managed to get a mirror running with two iSCSI disks, but when one of
the devices goes offline, the test VM acts strangely.
Sometimes it _may_ keep working after one disk failure and connection
recovery, but the more common result is the VM going into a zombie state.
In
Not sure about completely disabling encryption, but this is how I use
less CPU-hungry ciphers on the local network:
in $HOME/.ssh/config:
Host 10.* 172.* 192.168.*
Ciphers blowfish-cbc
On Sat, Nov 2, 2013, at 05:48 AM, Laurent CARON wrote:
Hi,
I remember seeing a post about speeding up live
Depends on the storage model.
AFAIK you can't thin-provision a RAW image, which is what the LVM+iSCSI
scenario uses.
In the case of qcow2 on NFS, only the image metadata is pre-allocated.
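To make the distinction concrete, here is a local demonstration of a sparse file, the mechanism thin provisioning on a filesystem relies on (the path is arbitrary; `qemu-img create -f qcow2 -o preallocation=metadata` behaves analogously, writing only metadata up front and growing the data on demand):

```shell
# Create a file with a 1 GiB logical size but no allocated data blocks:
truncate -s 1G /tmp/sparse-demo.img

# Logical size is 1073741824 bytes, but almost no blocks are allocated:
stat -c 'logical=%s bytes, allocated=%b blocks' /tmp/sparse-demo.img

rm /tmp/sparse-demo.img
```

A raw LV, by contrast, always occupies its full size on the volume group, which is why the LVM+iSCSI case can't be thin.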
On Mon, Oct 28, 2013, at 07:28 AM, franci...@retina.sld.cu wrote:
Hi to all, I need to know how to create virtual auto-expandable
Try this:
dpkg -l 'pve-kernel-*' | awk '/^ii/{ print $2}' | grep -v -e "$(uname -r |
cut -f1,2 -d-)" | grep -e '[0-9]' | xargs sudo apt-get -y purge
you can remove -y from the last command to review the list before
uninstalling; otherwise it will wipe old kernels without confirmation.
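To see what that pipeline actually selects, here is the same filter run over fabricated `dpkg -l` lines (the package names and the running-kernel version are made up):

```shell
RUNNING=2.6.32-39   # pretend this is what `uname -r | cut -f1,2 -d-` printed
printf 'ii  pve-kernel-2.6.32-37-pve  2.6.32-37  old kernel\nii  pve-kernel-2.6.32-39-pve  2.6.32-39  running kernel\n' \
  | awk '/^ii/{ print $2 }' | grep -v -e "$RUNNING" | grep -e '[0-9]'
# -> pve-kernel-2.6.32-37-pve   (only the non-running kernel is selected for purge)
```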
On Sat, Oct 5, 2013, at
HA (running a VM on a spare machine in the cluster if one fails) - yes, but
you need IPMI/iLO/etc. on the host nodes for fencing.
FT (AFAIK it simultaneously runs one VM on two or more hosts for instant
takeover in case of failure) - nope, at least not yet.
On Fri, Sep 20, 2013, at 05:08 AM, Wiedenmann Georg
These are completely different virtualization schemes. No, that's not
possible.
Use dpkg --get-selections / --set-selections to transfer the list of
installed packages and reproduce a similar setup in a different VM/CT.
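A sketch of that workflow (the file name is a placeholder; the set side must run as root on the target):

```shell
# On the reference machine: dump the selection state of every package.
dpkg --get-selections > selections.txt

# On the new VM/CT, after copying selections.txt over:
dpkg --set-selections < selections.txt
apt-get -y dselect-upgrade    # installs everything marked "install"
```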
On Tue, Sep 17, 2013, at 02:57 AM, sanjay kumar wrote:
Namaskar!
Is there a way to convert vmdk file in
Not with proxmox. You can try plain qemu.
On Fri, Sep 13, 2013, at 06:27 AM, Luis G. Coralle wrote:
Hi all, Any idea if can I create a VM with mips hardware?
--
Luis G. Coralle
Nexenta CE is pretty good.
On Thu, Aug 22, 2013, at 05:10 AM, Muhammad Yousuf Khan wrote:
Need suggestions based on experience.
We want an SMB SAN storage that's reasonably priced, with HA and
multipathing for iSCSI.
Do you guys think Openfiler could be a good option for production, or is
it
with ZFS is just not stable
This usually happens when you enable deduplication :)
On Thu, Aug 22, 2013, at 09:25 AM, Fábio Rabelo wrote:
2013/8/22 Marco Gabriel - inett GmbH [1]mgabr...@inett.de
Nexenta CE may only be used for testing, not allowed for production.
If you go for a cheap
I thought live migration between different versions was never
(officially) supported.
On Wed, Aug 21, 2013, at 05:19 AM, Fabrizio Cuseo wrote:
Another problem: live migrating back a VM from a 3.0 to a 3.1 host, I
have:
No problems with different kernels in a cluster. I still have a few Proxmox
2.3 nodes in a 3.0 cluster and have not seen any problems.
On Wed, Aug 7, 2013, at 07:25 AM, Rob Fantini wrote:
Hello Martin
Our cluster has 4 nodes.
For testing a new kernel is it generally OK to use it on just some
The RHEL kernel is not old. RHEL's 2.6.32 != vanilla 2.6.32. Take a diff and
compare it yourself; they backport a shitload of features, patches, etc.
from newer kernels.
The RHEL kernel is more like a separate fork of the vanilla kernel.
Nothing stops you from installing and running the wheezy kernel, but
you
Sometimes while migrating a VM from one storage to another (the typical
scenario is LVM on top of iSCSI to LVM on top of iSCSI) I get this error
right after copying the disk image:
---
device-mapper: remove ioctl on failed: Device or resource busy
Logical volume vm-146-disk-1 successfully removed
Hello.
I have a machine which has a span port from a physical switch for VMs.
Interface config:
auto eth3
allow-hotplug eth3
iface eth3 inet manual
up /sbin/ifconfig $IFACE up
down /sbin/ifconfig $IFACE down
post-up ethtool -K eth4 sg off
post-up ethtool -K eth4
Well silly me. brctl location changed from /usr/sbin in squeeze to just
/sbin in wheezy. Sorry for bothering, it's working now :)
On Mon, Jul 8, 2013, at 06:32 AM, Lex Rivera wrote:
Hello.
I have a machine which has a span port from a physical switch for VMs.
Interface config:
auto eth3
Leftovers. The previous network card required that tuning; otherwise
mirrored frames would be duplicated.
Thanks for notifying.
On Mon, Jul 8, 2013, at 09:10 AM, Alexandre Kouznetsov wrote:
Hello.
El 08/07/13 08:44, Lex Rivera escribió:
Well silly me. brctl location changed from /usr/sbin
Yep, the config folder is shared, and AFAIK the format of the config files
hasn't changed. You can simply tar the /etc/pve directory just in case.
Reinstalling and reconnecting will probably work, but there may be problems
with SSH keys, since the reinstalled node will get a new one; I don't know
how proxmox handles
And migrate (offline migration is also ok) VMs between them?
So it supports live storage migration?
On Mon, Jun 10, 2013, at 05:21 AM, Alexandre DERUMIER wrote:
Wiki is not yet updated, but
From gui, you have a new move disk button, on your vm hardware tab.
(works offline or online)
command line is : qm move_disk vmid disk storage
-
3.2.0-4 is a Debian kernel, not a Proxmox one. You should probably reboot
again and choose a -pve kernel, if any is available; otherwise you can't use
OpenVZ (and probably some other features which are only available in the
Proxmox kernel). If no -pve kernel is available, you probably need to
upgrade your
Hello.
I recently upgraded one of my lab machines to Proxmox 3.0 and found out
that it can't run pfSense anymore.
ATM I solved it by extracting the BIOS from pve-qemu-kvm-1.3.0, setting the
path to that BIOS via -bios in the VM .conf, and disabling KVM hardware
virtualization. That way it works.
With recent PVE