I wonder when 16 VirtIO disks per VM aren't enough; at the moment I can't
really see a use case where it's necessary to use 16 different VirtIO
disks on a single virtual machine.
On 07/30/2015 10:25 PM, Keri Alleyne wrote:
Good day,
I'm monitoring this thread:
https://forum.proxmox.com/threads/
They haven't tagged a stable release in the last five years, and
always including the latest master may not be a good idea, as it can
introduce bugs and complications with changes.
Do you have an idea how they handle that? Are there any releases which are
considered (somewhat) stable?
On 08/24/2015 07:44 PM, Falko Trojahn wrote:
Hi all,
after upgrading our 3-node proxmox 3.4 system (using wheezy) from the stock
3.2.0-4-amd64 debian kernel to 3.10.0-11-pve, we have, only on one node
and only on its cpu0, approx. 93..100% load constantly with ksoftirqd.
ksoftirq should normally
thanx for your answer.
Thomas Lamprecht wrote on 27.08.2015 at 10:01:
after upgrading our 3-node proxmox 3.4 system (using wheezy) from
the stock
3.2.0-4-amd64 debian kernel to 3.10.0-11-pve, we have, only on one node
and only on its cpu0, approx. 93..100% load constantly with ksoftirqd.
On 09/01/2015 03:19 PM, Falko Trojahn wrote:
On 01.09.2015 at 14:45, Thomas Lamprecht wrote:
There is a huge amount of interrupts from the network card and also
from your RAID.
I'd guess that the main problem is your network card; make sure the
drivers of eth0 and eth3 are conf
Hi,
https://pve.proxmox.com/wiki/User_Management#Command_Line_Tool
this should help you.
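If you need to script a bulk import, something along these lines could work (a rough sketch; the realm name "ldap" and the users.txt input file are assumptions, see pveum(1)):
  # users.txt: one username per line
  while read -r u; do
      pveum useradd "$u@ldap" -comment "imported from domain"
  done < users.txt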
> IMMO WETZEL wrote on 1 September 2015 at 17:03:
>
> Hi,
>
> Is there any way to import users from a domain with a simple command?
>
> creating every user is not th
On 09/03/2015 06:51 PM, bobs wrote:
I'm looking at: http://pve.proxmox.com/pve2-api-doc/
I want to use OpenVZ to make a NoVNC console work in a web page.
I can create a VNC proxy just fine from the console:
pvesh create /nodes/{node}/openvz/{vmid}/vncproxy
This returns the cert, port, ti
hich only has access to the VM console (VM.Console is
the permission IIRC), as setting the access cookie lets the user log in
to the interface.
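Roughly, the flow could look like this (a sketch; node name and VMID are placeholders, and the vncwebsocket parameters should be double-checked against the API viewer):
  # 1. request a proxy; the response contains ticket, port and cert
  pvesh create /nodes/mynode/openvz/100/vncproxy
  # 2. point noVNC at the matching websocket endpoint with those values:
  # GET /api2/json/nodes/mynode/openvz/100/vncwebsocket?port=<port>&vncticket=<ticket>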
On 09/04/2015 03:42 AM, Thomas Lamprecht wrote:
On 09/03/2015 06:51 PM, bobs wrote:
I'm looking at: http://pve.proxmox.com/pve2-api-doc/
I wa
On 04.09.2015 at 18:43, Gilberto Nunes wrote:
I'm in trouble here with cluster construction...
Did you have some trouble with cluster configuration?
Yes, I did have troubles - adding a node failed. What's the problem in
your case?
How did you add the node? What was the failure message?
Best
On 09/08/2015 09:30 AM, Frank, Petric (Petric) wrote:
Hello,
I got a little further.
After viewing the script I realized that I have to set the env variables
http(s)_proxy
http://search.cpan.org/~ether/libwww-perl-6.13/lib/LWP/UserAgent.pm#Proxy_attributes
look at the 'env_proxy' entry.
And it worked? https should be preferred
though, to counter man-in-the-middle attacks and other security issues.
Regards
Kind regards
Petric
-----Original Message-----
From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On Behalf Of
Thomas Lamprecht
Sent: Tuesday, 8 Sep
neral use, as we _want_ cert checks, else https is insecure.
Just to know:
https_proxy=https://your.proxy pveceph install -version hammer
didn't work?
Kind regards
Petric
-----Original Message-----
From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On Behalf Of
Thomas Lamprecht
Hi,
is your proxy still on port 3128 as stated in your bug report?
Because the spiceproxy from PVE normally runs there, so maybe it should
be another port?
On 09/08/2015 11:11 AM, Nicolas Costes wrote:
On Tuesday, 8 September 2015 at 08:43:27, Frank, Petric wrote:
Hello,
yes, our proxy is able
Hello,
there's a bug report on this case, but it lacks some info.
https://bugzilla.proxmox.com/show_bug.cgi?id=702
what does ls /sys/block output?
On 09/08/2015 10:59 AM, Frank, Petric (Petric) wrote:
Hello,
following the guide
http://pve.proxmox.com/wiki/Ceph_Server
I am in the stage c
-----Original Message-----
From: Thomas Lamprecht [mailto:t.lampre...@proxmox.com]
Sent: Tuesday, 8 September 2015 11:18
To: Frank, Petric (Petric); pve-user@pve.proxmox.com
Subject: Re: [PVE-User] Ceph install failed
On 09/08/2015 10:43 AM, Frank, Petric (Petric) wrote:
Hello,
yes, our p
You can use groups for that, see in the Datacenter -> HA tab, there you
can configure a group with only the two desired ones enabled. Add the
group to your services and they will only migrate to nodes in the group.
with the "restricted" option set they never migrate to an node which
isn't in t
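On the CLI this could look roughly like the following (a sketch; group, node and service names are placeholders, see ha-manager(1)):
  # group limited to two nodes; restricted = never run anywhere else
  ha-manager groupadd only-these-two -nodes "node1,node2" -restricted 1
  # attach an HA-managed VM to that group
  ha-manager add vm:100 -group only-these-two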
Adapt the corosync.conf file in the /etc/pve folder: set
quorum_votes of one of the node entries from 1 to 2.
This achieves the same as your current setup but with much less overhead,
as no VM acting as a PVE node is needed.
*NOTE*: for real HA, three real nodes are needed.
If you re
Oh, and after you added the votes, delete the PVE VM and remove it from
the cluster with:
pvecm delnode oldvmnode-hostname
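The relevant nodelist entry in /etc/pve/corosync.conf would then look roughly like this (a sketch; node name and address are placeholders, and remember to also bump config_version in the totem section when editing):
  node {
    name: nodeA
    nodeid: 1
    quorum_votes: 2
    ring0_addr: nodeA
  }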
On 09/24/2015 05:08 PM, Thomas Lamprecht wrote:
Adapt the corosync.conf file in the /etc/pve folder: set
quorum_votes of one of the node entries from 1 to 2
Hi, can you look into the syslog of your current master and see if there
are messages like:
detected unknown node state
On 09/24/2015 09:33 PM, Gilberto Nunes wrote:
Hi
Any idea how I can remove a deleted node from HA?
It still appears when I run
ha-manager status
quorum OK
master proxmox
9-25 11:38 GMT-03:00 Thomas Lamprecht <t.lampre...@proxmox.com>:
Hi, can you look into the syslog of your current master and see if
there are messages like:
detected unknown node state
On 09/24/2015 09:33 PM, Gilberto Nunes wrote:
Hi
Any idea how ca
Hi,
socat should cover that just fine, see `man socat`
something like:
socat /dev/ttyS0,raw,echo=0,crnl /dev/ttyS1,raw,echo=0,crnl
should work.
For reusing/multiplexing ports take a look at my forum entry here:
http://forum.proxmox.com/threads/23379-Sharing-physical-COM-port-with-two-servers
Hi,
Note that this is a mailing list related to using Proxmox. Although your
VM may run on a Proxmox host, the lists/websites of the operating system
running in the VM should be used when searching for or asking about OS-specific stuff.
With Ubuntu you can configure network interfaces in the
/etc/net
On 11/30/2015 03:44 AM, Lindsay Mathieson wrote:
In the past two weeks we've had several instances of servers on our 3
node cluster doing a hard reboot; this is since upgrading to PVE 4.0.
It's been most curious. Once it was all three servers rebooting
simultaneously. Twice it has been a single se
On 11/30/2015 07:44 AM, Lindsay Mathieson wrote:
On 30/11/15 16:14, Thomas Lamprecht wrote:
Do you have HA resources configured?
Yes, several VMs - only set up since the 4.0 upgrade
If yes do you have quorum problems (with duration > 50-60 seconds)?
Not that I know of :) Is the
Hi,
simply reboot; this is a small, uncritical, but quite annoying
"bug" whose fix wasn't backported to 3.2 when it was discovered (out
of date).
3.4 should fix this completely, as does 4.0+. But as it occurs only after
about 2-3 years of uptime you should have been able to upgrade unti
Hi,
I didn't look at the entire command to check if there is an
error/misconfiguration; too tiresome for me.
Can you please also post the VM's config file and maybe some additional
info you have: is this VM special in any sense? Did the VM never work
(since creation), or did it stop working?
Further, is
02:00 Thomas Lamprecht <t.lampre...@proxmox.com>:
Hi,
I didn't look at the entire command to check if there is an
error/misconfiguration; too tiresome for me.
Can you please also post the VM's config file and maybe some
additional info you have: is this VM specia
Hi,
On 23.01.2016 19:05, Gilberto Nunes wrote:
Hi list
I have a big VM file, 2 TB in size...
The VM is hosted on a GlusterFS server, with Ubuntu 14.04.3 LTS
(GNU/Linux 4.3.0-040300-lowlatency x86_64), acting as a server...
Between PVE and the Ubuntu server, I have a bonded network interface, with 3 x
Hi,
the obvious questions: what did you do before the error came to light?
With this error the HA CRM cannot do any action (start, stop, migrate, ...)
What's the output of
# systemctl status pve-ha-lrm
# ha-manager status
on this node (or both nodes), be sure that pve-ha-lrm is started!
Al
nks a lot
2016-02-05 10:15 GMT-02:00 Thomas Lamprecht <t.lampre...@proxmox.com>:
Hi,
the obvious questions: what did you do before the error came to
light?
With this error the HA CRM cannot do any action (start, stop,
migrate, ...)
What's the outpu
If it's a call where a worker gets forked we have a UPID which
represents this action, and we have a UPID wait function where you can
wait for this task to finish.
If you're writing perl scripts on a PVE host it'd be really easy:
use PVE::API2::Qemu;
use PVE::ProcFSTools;
my $par
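Since the Perl snippet above is cut off, here is a rough shell alternative that polls the task status endpoint instead (a sketch; node name and UPID value are placeholders):
  upid='UPID:mynode:...'   # as returned by the forking API call
  while pvesh get /nodes/mynode/tasks/$upid/status 2>/dev/null | grep -q '"running"'; do
      sleep 1
  done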
dware) I think
polling once per second should be fast enough, and that is surely
manageable by PVE.
On 5 February 2016 at 15:34, Thomas Lamprecht <t.lampre...@proxmox.com> wrote:
If it's a call where a worker gets forked we have an UPID which
represent
On 02/05/2016 04:26 PM, Gilberto Nunes wrote:
> Also if you have no more VM under HA you can remove the HA resource
config
>
> # rm /etc/pve/ha/resources.cfg
But if I do this, in the future, when I realize that I need to insert the
previously removed node, will it work?
Yes, if you add the othe
On 02/09/2016 11:29 AM, Mohamed Sadok Ben Jazia wrote:
Hello list,
I'm implementing an API for a full virtualization solution with proxmox.
Isn't that the Proxmox API2 already? :-)
For simple usage (create CT, start, reboot, ...) I need to get a
ticket for each API call.
As I know, the ticke
Hi,
can you show a (simple) example of how you do it? Are you making nextid
calls from all over the cluster (in parallel)?
If you're doing it serially, like:
new_vmid = call-to-nextid();
create-ct(new_vmid)
you're fine; doing parallel calls, you get race conditions and atomicity
violations.
Solvi
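For reference, a minimal serial sketch with pvesh (node name and template volume are placeholders; /cluster/nextid returns a currently unused VMID):
  vmid=$(pvesh get /cluster/nextid)
  pvesh create /nodes/mynode/lxc -vmid "$vmid" \
      -ostemplate local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz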
h a solution.
I'm also thinking about making a patch which adds another parameter to
the create Container and Virtual Machine API calls, which results in
auto-selecting a free VMID when creating one and returning that when
finished.
This seems like a usable "feature" for other use cases
Note that /etc/cluster/cluster.conf isn't needed anymore; everything
cluster relevant will be read out of /etc/pve/corosync.conf (which looks
good as far as I can see).
You said you upgraded; are you really _really_ sure you did not miss a
step (no offense)?
I assume you rebuilt the cluster cleanl
This should be fixed with pve-qemu-kvm 2.5-7
AFAIK it's available, at least, on pvetest.
cheers,
Thomas
On 02/25/2016 12:45 PM, Eneko Lacunza wrote:
> Hi all,
>
> I'm in the process of patching our office 4-node Proxmox cluster. It
> seems I'm unable to online migrate VMs between the newest PV
* shutdown: gracefully shut the container down; this is like you would
normally shut down a real computer
* stop: stop the container instantly (kill the process(es)); this is
like pulling out the power cord, or pressing the power button long
In general shutdown is, as expected, recommended.
cheer
Oh, and you have "timeout" and "force" parameters on the shutdown
command; they could be interesting for you.
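For example (a sketch; 100 is a placeholder VMID, option names as in pct(1) for PVE 4.x):
  # wait up to 60 seconds for a clean shutdown
  pct shutdown 100 -timeout 60
  # same, but hard-stop the container if it is still running afterwards
  pct shutdown 100 -timeout 60 -forceStop 1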
On 03/02/2016 03:58 PM, Thomas Lamprecht wrote:
>
> * shutdown: gracefully shut the container down, this is like you would
> normally shutdown a real
Hi,
no, no reason at all; LXC, and so we, are able to run 32-bit containers fine.
You can use the ones provided by lxc (at least those whose distro we
support: debian, ubuntu, arch, centos, alpine, suse are the ones, if I
did not miss anything) and you normally should also be able to run the
previous ge
On 03/18/2016 12:36 PM, Jean-Laurent Ivars wrote:
> You can let it go, it's ok; I reverted the conf file from my other node, I
> restarted the corosync service from the web interface and the folder
> is back.
>
> As you're saying, *pvecm expected 1* is working despite what pvecm
> status says because bo
Hi.
On 03/18/2016 02:02 PM, Jean-Laurent Ivars wrote:
> Honestly I don't see the point of your message; you basically show the
> same as I did, except the fact that you have three servers in your
> cluster. My servers are hosted, so I'm very sorry but I can't pull or
> plug a cable…
Yes, but I show
Hi,
this idea was proposed quite some time ago and we planned to implement
it in the pve-ha-manager stack,
as it provides a lot of functionality needed for that.
The general idea from our side is:
* wait for the cluster to become stable (e.g. a few minutes with no
cluster action),
* evaluate the load
s, max ram), as far as I've understood.
best regards,
Thomas
> On 5 April 2016 at 10:42, Thomas Lamprecht wrote:
>
>> Hi,
>>
>> this idea was proposed quite some time ago and we planned to implement
>> it in the pve-ha-manager stack,
>> as it provides a
s not friendly once it takes time.
I would also strongly be _against_ backup/restore in this case.
cheers,
Thomas
> I'm going to commit this small method in github and upgrade it later.
> Best regards
> On 6 Apr 2016 7:28 AM, "Thomas Lamprecht" wrote:
>
>>
Hi,
yes, this still holds, as we continue to use postfix by default.
So yes, you can follow the tutorial and set postfix up so that it uses a
provider like GMail as relay.
cheers,
Thomas
On 04/11/2016 04:08 AM, Daniel Bayerdorffer wrote:
> Hello,
>
> Does anyone have any advice on how to setup Pr
Hi,
On 04/22/2016 12:31 AM, Maks wrote:
> Alwin,
>
> Yes I did apt-get update prior to upgrade and dist-upgrade. And several
> times after ;)
I see you resolved the issue already but as a hint for the future:
apt-get upgrade is not recommended, always update with:
apt-get dist-upgrade
cheers,
Hi,
On 24.04.2016 10:16, Lindsay Mathieson wrote:
> On 24/04/2016 5:58 PM, Laurent Dumont wrote:
>> Hi!
>>
>> I assume that the instructions from here :
>> https://pve.proxmox.com/wiki/Upgrade_from_3.x_to_4.0 are still valid?
>
>
> Good question, the latest ver is 4.1
>
> I'd presume they are
Hi,
On 05/11/2016 11:55 AM, Holger Hampel | RA Consulting wrote:
> Hello,
>
> trying the new Ubuntu version, I have trouble with the guest agent. The
> agent will not shut down the VM. Running in the foreground:
>
> 1462959272.439482: debug: received EOF
> 1462959272.539653: debug: received EOF
> 146
Anything in the logs, errors, or a service not stopped?
Can you try systemctl reboot next time (although your version should
work)?
On 05/11/2016 02:59 PM, Lindsay Mathieson wrote:
> I encountered that today too, though I just did a "reboot"
>
> Had to drive in and hard reset.
>
> On 11 May 20
On 05/11/2016 03:22 PM, Eneko Lacunza wrote:
> 3rd and 4th nodes in the cluster have done "init 6" right.
>
> Are you using NFS storage in your setup?
>
> I use it for backups, but it was disabled before the reboot.
>
> I have noticed that after disabling NFS storage from GUI, for some
> time at
On 05/11/2016 03:45 PM, Lindsay Mathieson wrote:
> On 11/05/2016 11:32 PM, Thomas Lamprecht wrote:
>> Yes, if there is an NFS still mounted but unreachable this will hang
>> forever, yes.
>>
>> Ensure that you deactivate it before the NFS goes offline.
>
> I h
On 05/10/2016 08:50 PM, Gaël Jobin wrote:
> Hi everyone,
>
> A while ago, to build a continuous integration system, I installed some
> Linux, OSX and Windows VMs under Proxmox 3.2 and the installations went
> fine (Windows 7 32 and 64 bits). Now, under Proxmox VE 4.2, my VMs are
> still running/boo
Hi,
Did you run pmxcfs manually in a local mode?
ps aux | grep pmxcfs
if yes stop it and restart the pve-cluster service with:
systemctl restart pve-cluster
if that fails ensure pmxcfs is stopped then remove the lockfile with:
rm /var/lib/pve-cluster/.pmxcfs.lockfile
and retry.
cheers,
Thoma
man kvm shows:
> -D logfile
>Output log in logfile instead of to stderr
qm set VMID --args '-D /path/to/log'
Should do the trick, have no erroneous qemu VM available at the moment
so couldn't check :)
cheers,
Thomas
On 05/20/2016 02:29 PM, Lindsay Mathieson wrote:
> Is there a way to captu
On 05/20/2016 03:49 PM, Lindsay Mathieson wrote:
> On 20/05/2016 10:59 PM, Thomas Lamprecht wrote:
>> qm set VMID --args '-D /path/to/log'
>>
>> Should do the trick, have no erroneous qemu VM available at the moment
>> so couldn't check:)
>
> Thank
On 28.05.2016 11:07, haoyun wrote:
hello, everyone~
I have a cluster with 2 physical machines, and they are pve4.2
my physical machine:
root@cna5:~# free
             total       used       free     shared    buffers     cached
Mem:      65674780   21937328   43737452      93316  166488153
Hi,
On 27.05.2016 21:39, Daniel Eschner wrote:
seems to be :-(
Yes, offline storage migration should work for more or less all storages, and if not it
is on the todo list of a proxmox dev, AFAIK.
Online storage migration is not implemented :)
but it should not be a big deal to copy all files ;)
Exac
Hi,
On 04.07.2016 16:27, Eneko Lacunza wrote:
Hi all,
I have continued looking into this, and it seems I have to use the -acpitable
option with an image of the original host SLIC table. But a fix in QEMU is
also needed:
https://bugzilla.redhat.com/show_bug.cgi?id=1248758#c27
Is this upstream pa
Hi,
On 07/05/2016 07:53 AM, Eneko Lacunza wrote:
Hi Thomas,
On 04/07/16 at 18:00, Thomas Lamprecht wrote:
I have continued looking into this, and it seems I have to use
the -acpitable option with an image of the original host SLIC table.
But a fix in QEMU is also needed:
https
Hi,
On 07/05/2016 09:53 PM, William Gnann wrote:
Hi,
I have created a cluster with two nodes to concentrate the administration of my
machines within a single interface. But one of the machines got a hardware
problem and went down.
I ran "pvecm e 1" to bring back the master node to quorate
Hi
On 07/14/2016 12:51 PM, Kilian Ries wrote:
Just tested it, ssh works in both directions.
As additional information here is the migration output from proxmox:
###
Jul 14 12:46:03 starting migration of VM 101 to node 'proxmox2'
(192.168.100.253)
Jul 14 12:46:03 copying disk images
Jul 14 12
Hi,
On 09/09/2016 04:35 PM, IMMO WETZEL wrote:
Hi,
is there any way to capture the startup output of the VM?
Maybe a video output, or better, a text-based output...
You may use a serial terminal:
https://pve.proxmox.com/wiki/Serial_Terminal
I used it a while ago with Linux guests, worked
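Following that wiki, a minimal sketch (100 is a placeholder VMID; the guest itself must also be configured to put a console on its serial port):
  # add a serial socket device to the VM
  qm set 100 -serial0 socket
  # then attach to it from the host
  qm terminal 100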
On 09/21/2016 11:37 AM, Bart Lageweg | Bizway wrote:
Hi,
I want to delete an offline node from a Proxmox 4 cluster.
Node is not listed in pvecm nodes since it is offline.
How to delete?
You can delete the node just fine with:
pvecm delnode <nodename>
Use the name of the offline node to delete it.
If
g the pveproxy service also :)
cheers,
Thomas
-----Original Message-----
From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On behalf of Thomas Lamprecht
Sent: Wednesday, 21 September 2016 12:03
To: pve-user@pve.proxmox.com
Subject: Re: [PVE-User] Missing offline node Proxmox 4
On
On 09/22/2016 03:25 PM, Denis Morejon wrote:
Is there any portable Proxmox wiki, or a way to download it?
I tried to download it using wget, but I couldn't. I need it offline.
If you have an updated version of Proxmox VE take a look at our documentation,
which now comes included with PVE:
https
Hi,
On 10/04/2016 09:30 AM, Marco Gaiarin wrote:
Probably a dumb question, but I've not found an answer in the wiki...
no dumb question!
Is there a way to 'run' a VM for non intel/amd CPUs? And manage it via the
PVE interface?
No, currently not; our KVM/QEMU package gets compiled with the x86_6
Hi,
On 05.10.2016 14:43, mj wrote:
Hi Kevin,
On 10/05/2016 02:25 PM, Kevin Lemonnier wrote:
Just as it's weird and very very annoying that you can't "Stop" a VM
when HA is enabled. If you click on "Stop", it actually tries to do a
shutdown on it, and only does an actual stop after that times o
Hi,
On 05.10.2016 14:11, mj wrote:
Hi,
Just noticed something we find counterintuitive in proxmox:
We are using HA for some of our machines.
As we needed to work on an HA-managed machine, we disabled HA, so that we could
manually halt/reboot/start it.
But to our surprise, the "disable" butt
On 05.10.2016 17:06, Michael Rasmussen wrote:
On Wed, 5 Oct 2016 17:01:23 +0200
Thomas Lamprecht wrote:
Our reason for this is that an HA-enabled service should be *cleanly* shut down.
Stopping a VM is like pulling the power cord out, which can be bad :)
We can re-evaluate that. For us HA
Hi,
On 10/05/2016 09:52 PM, Lindsay Mathieson wrote:
On 6/10/2016 1:09 AM, Thomas Lamprecht wrote:
Please *always* read the documentation if you want HA, and if you want
to use it
in production you surely shouldn't assume anything; read the docs
first.
Ignores the point people are m
On 10/06/2016 09:08 AM, Kevin Lemonnier wrote:
He wanted to remove it from HA management but told HA it should be
stopped; nothing to do with the stop/shutdown button confusion :)
Yeah, sorry about that, I hijacked this thread with my somewhat unrelated
complaint, that's my bad :).
No worries,
where someone disables a VM because he
thinks that means disabling HA temporarily.
I opened a bug report regarding this:
https://bugzilla.proxmox.com/show_bug.cgi?id=1142
hope this will help a bit.
cheers,
Thomas
On 06/10/16 at 09:39, Thomas Lamprecht wrote:
On 10/06/2016 09:08 AM, Kevin Lemon
On 10/06/2016 11:51 AM, Kevin Lemonnier wrote:
Yes, I would find that logical. And perhaps also a third option: to
disable HA management, so we can work on a machine without having to
completely remove it from the HA system.
That's not a bad idea. I assume we all make the mistake for the same
-
De: "Phil Kauffman"
À: "proxmoxve"
Envoyé: Mercredi 19 Octobre 2016 18:23:54
Objet: [PVE-User] Dedicated Migration Network?
I am resubmitting this to the user list after a conversation with Thomas
Lamprecht during the proxtalks conference.
Is setting up a dedicated migratio
Hi,
On 10/24/2016 09:23 AM, Marco Gaiarin wrote:
Mandi! Daniel
In chel di` si favelave...
Thanks to all. Clearly I'm on shared storage.
You have to turn off the Container then you can migrate it.
Sure, and effectively I do that. And it works.
I'm simply asking why, if I deselect 'online',
On 10/24/2016 10:26 AM, Marco Gaiarin wrote:
Mandi! Thomas Lamprecht
In chel di` si favelave...
Currently this is not implemented, but AFAIK it's planned to do so in
the near future. Offline is at the moment only migration of a
stopped CT, including storage migration.
Ok. Thanks.
Made a
Hi,
On 07.11.2016 18:26, lists wrote:
Hi,
We are using collectd/grafana to monitor various servers. We would like to use
that also to check our VMs on proxmox.
There is a plugin for collectd: libvirt (or virt), but this requires libvirt,
which is not installed/used on proxmox.
Is there anot
Hi,
On 09.11.2016 16:29, Dhaussy Alexandre wrote:
I tried to remove it from HA in the GUI, but nothing happens.
There are some services in "error" or "fence" state.
Now I tried to remove the non-working nodes from the cluster... but I
still see those nodes in /etc/pve/ha/manager_status.
Can you p
iously?
nov. 09 17:08:22 proxmoxt34 pve-ha-lrm[26282]: status change startup =>
wait_for_agent_lock
nov. 09 17:12:07 proxmoxt34 pve-ha-lrm[26282]: ipcc_send_rec failed:
Noeud final de transport n'est pas connecté (Transport endpoint is not connected)
We are also investigating on a possible network problem..
Multicast properly working?
ake place on this node one way or
the other.
Thanks again for the help.
Alexandre.
On 09/11/2016 at 20:54, Thomas Lamprecht wrote:
On 09.11.2016 18:05, Dhaussy Alexandre wrote:
I have done a cleanup of resources with echo "" >
/etc/pve/ha/resources.cfg
It seems to have re
On 10.11.2016 21:34, Lindsay Mathieson wrote:
qm migrate 506 vnb --online
400 Parameter verification failed.
target: target is local node.
qm migrate <vmid> <target> [OPTIONS]
root@vnb:/etc/pve/softlog# qm migrate 506 vng --online
ERROR: unknown command 'mtunnel'
Are you sure you upgraded all, i.e. used:
a
On 11/10/2016 10:35 PM, Lindsay Mathieson wrote:
On 11/11/2016 7:11 AM, Thomas Lamprecht wrote:
Are you sure you upgraded all, i.e. used:
apt update
apt full-upgrade
Resolved it thanks Thomas - I hadn't updated the *destination* server.
makes sense, should have been made sense a few
On 14.11.2016 11:50, Dhaussy Alexandre wrote:
On 11/11/2016 at 19:43, Dietmar Maurer wrote:
On November 11, 2016 at 6:41 PM Dhaussy Alexandre
wrote:
you lost quorum, and the watchdog expired - that is how watchdog-based
fencing works.
I don't expect to lose quorum when _one_ node jo
est VM first to see if it works.
cheers,
Thomas
Thanks!
On 11/11/16, 2:05 AM, "pve-user on behalf of Thomas Lamprecht"
wrote:
On 11/10/2016 10:35 PM, Lindsay Mathieson wrote:
> On 11/11/2016 7:11 AM, Thomas Lamprecht wrote:
>> Are you sure you upgraded al
e-cluster package on the old node, then it should work also.
cheers,
Thomas
Thanks!
On 11/18/16, 11:44 AM, "pve-user on behalf of Thomas Lamprecht"
wrote:
Hi,
On 11/18/2016 05:02 PM, Chance Ellis wrote:
> Hello,
>
> I am running a sma
Hi,
On 11/21/2016 10:44 PM, IMMO WETZEL wrote:
Hi,
I try to change the description of a VM via a pvesh call.
pvesh set /nodes/localhost/qemu/100/config --description 'test 12'
works for me here.
So with HTTP you would use a PUT request (instead of set with pvesh) to
/nodes/localhost/qem
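A hedged curl sketch of that PUT (host, ticket and CSRF token are placeholders; both values come from a prior POST to /api2/json/access/ticket, and the CSRFPreventionToken header is required for write requests):
  curl -k -X PUT \
    -H "Cookie: PVEAuthCookie=$TICKET" \
    -H "CSRFPreventionToken: $CSRF" \
    --data-urlencode "description=test 12" \
    https://pve.example.com:8006/api2/json/nodes/localhost/qemu/100/config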
Hi all,
regarding the discussion about our HA stack on the pve-user list in October
we made some changes, which - hopefully - should address some problems and
reduce some common pitfalls.
* What has changed or is new:
pct shutdown / qm shutdown and the Shutdown button in the web interface work
Hi,
On 11/30/2016 09:06 AM, F.Rust wrote:
Hi all,
I've been using proxmox 4.2 for a while now and am quite satisfied.
But now I somehow managed to accidentally switch the web frontend to only show the main
content but not the left tree pane and not the bottom "Tasks/Cluster Log" pane.
How can I get ba
the cache: CTRL + SHIFT + R
Else I could imagine that an addon is interfering here; maybe you
accidentally did a
"right click + Block Element" on the tree panel so that your ad blocker
blocks it,
just shooting in the dark here :)
cheers,
Thomas
Am 30.11.2016 um 10:22 schr
On 12/15/2016 11:35 AM, IMMO WETZEL wrote:
Hi
That's my current script to get all VM names from the cluster. Afterwards I
check the new name against the list to prevent errors.
#!/usr/bin/env bash
nodes=$(pvesh get /nodes/ 2>/dev/null | sed -n -E '/\"id\"/
s/.*:\s\"(.*)\".*/\1/p' | sed -n -
On 12/20/2016 01:21 PM, IMMO WETZEL wrote:
How can I set multiline config descriptions ?
root@node01:~# pvesh set /nodes/node04/qemu/315/config -description
"line1\nline2"
=> Description shows an 'n', not a line break
=> update VM 315: -description line1nline2
=> 200 OK
root@node01:~# pvesh set /
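One workaround sketch: let the shell produce a real newline before pvesh sees the value, e.g. with bash ANSI-C quoting (assuming the API preserves literal newlines in the description):
  pvesh set /nodes/node04/qemu/315/config -description $'line1\nline2'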
Hi Vadim and Jatani,
On 01/19/2017 01:41 PM, Vadim Bulst wrote:
Hi Jatani,
you answered out of thread and still no error message.
I guess he attached it, but as it was too big (possibly a screenshot) the
mailing list software removed the attachment.
If it was an image please consider posting
On 01/26/2017 10:25 AM, Yannis Milios wrote:
Your question is quite generic because it depends on how you have configured
your PVE, what the storage backends will be, etc.
I will assume that you have one PVE server with VMs stored in
local storage like LVM or ZFS. You have a NAS as well where you
Hi,
just answering the additional question; the rest may be better answered
by our community.
On 02/02/2017 10:22 AM, Uwe Sauter wrote:
Hi all,
I would like to hear recommendations regarding the network setup of a Proxmox
cluster. The situation is the following:
* Proxmox hosts have seve
Hi,
But this setup is exactly what I'd want to avoid. Imagine you have a VM running
on Node A that needs VLAN 7. With this kind of
setup Proxmox could migrate the VM to Node B or C in case of failure of node A.
But if the VM is put on Node C the VM has no
connectivity to VLAN 7 which is again
Hi,
On 09.02.2017 at 18:27, Daniel wrote:
Hi there,
after a power supply replacement, one node is not able to access the cluster
again:
Feb 9 18:25:27 host04 pmxcfs[2026]: [quorum] crit: quorum_initialize failed: 2
Feb 9 18:25:27 host04 pmxcfs[2026]: [confdb] crit: cmap_initialize failed:
Hi,
On 02/10/2017 09:04 PM, Ricardo Persaud wrote:
Hi,
I am trying to install Proxmox VE 4.4 on an IBM server. When I boot from the
install CD or USB, I get the menu screen, and when I choose to install Proxmox
VE my monitor starts to display different colors and patterns. I attached a
screensh