Hi all,
I would like to hear recommendations regarding the network setup of a Proxmox
cluster. The situation is the following:
* Proxmox hosts have several ethernet links
* multiple VLANs are used in our datacenter
* I cannot guarantee that the VLANs are on the same interface for each host
Hi Kevin,
thanks for explaining your setup. Comments below.
On 06.02.2017 at 12:57, Kevin Lemonnier wrote:
>> * How does fencing work in Proxmox (technically)?
>> Due to fencing being based on watchdogs I assume that some piece of
>> software
>> regularly resets the watchdog's clock so
Dear all,
I'm a bit confused by the Wiki pages regarding high availability [1] & [2].
I would appreciate it if my questions could be answered. (Searched the mailing list
archive but didn't find threads regarding HA that are new enough to cover v4.x)
* When does a Proxmox cluster become a HA
Hi Alwin,
thanks for your suggestion. Comments below.
On 04.02.2017 at 12:04, Alwin Antreich wrote:
[…]
>>
>> What kind of network setup would you recommend?
>
> We also use multiple VLANs on our network. As linux bridges are
> VLAN-aware (bridge-vlan-aware yes), we set the VLAN in the VM
If I may add another question: how are you planning to handle those dynamic
interface names that were introduced a few
years ago? See https://en.wikipedia.org/wiki/Consistent_Network_Device_Naming
Regards,
Uwe
On 20.02.2017 at 20:58, Sten Aus wrote:
> Hi! Thanks for fast response!
>
>
difficulties in cluster communication. Have a
> look at these notes:
>
> https://pve.proxmox.com/wiki/Multicast_notes
>
>
>
> On Fri, 24 Feb 2017 at 22:45, Uwe Sauter <uwe.sauter...@gmail.com
> <mailto:uwe.sauter...@gmail.com>> wrote:
>
> Hi,
>
Restarting the service on all nodes restored full operation of the web GUI.
Perhaps someone from Proxmox could add this piece of knowledge to the wiki?
Regards,
Uwe
On 25.02.2017 at 09:23, Uwe Sauter wrote:
> I'm sorry I forgot to mention that I already switched to "transport: udpu"
Hi,
I have a GUI problem with a four node cluster that I installed recently. I was
able
to follow this up to ext-all.js but I'm no web developer so this is where I got
stuck.
Background:
* four node cluster
* each node has two interfaces in use
** eth0 is 1Gb used for management and some VMs
>
> on every node???
>
> 2017-02-24 15:04 GMT-03:00 Uwe Sauter <uwe.sauter...@gmail.com
> <mailto:uwe.sauter...@gmail.com>>:
>
> Hi,
>
> I have a GUI problem with a four node cluster that I installed recently.
> I was able
>
Hi,
I'm trying to use NAT in one of my VMs as I have no official IP address for it.
I found [1] which explains how to set up
masquerading but I'm a bit confused. [1] uses 10.10.10.0/24 as source address.
In the PVE documentation [2] it is mentioned that
PVE will serve addresses in the
I have a setup where I don't use Proxmox's own VLAN management but have one
bridge per VLAN that I use:
/etc/network/interfaces
###
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.253.200
netmask 255.255.255.0
gateway 192.168.253.254
auto
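The file above is cut off, but for the one-bridge-per-VLAN approach each VLAN would get a stanza along these lines (a sketch only; the VLAN ID 100, the parent NIC eth1 and the bridge name vmbr100 are made-up examples):

```
# hypothetical VLAN sub-interface and bridge for VLAN 100
auto eth1.100
iface eth1.100 inet manual

auto vmbr100
iface vmbr100 inet manual
    bridge_ports eth1.100
    bridge_stp off
    bridge_fd 0
```

VMs that should live in VLAN 100 are then simply attached to vmbr100, with no VLAN tag set in the VM configuration.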
Check that there are no firewalls blocking communication. I had a problem like this a couple of weeks ago and all I
needed was to properly configure the settings for pveproxy. (There are other firewall settings, too.)
On 14.03.2017 at 20:15, Kevin Lemonnier wrote:
Looks like they can't find
Hi,
I was installing the latest updates to PVE 4.4 yesterday and it got stuck after
the configuration step for Ceph.
I was able to trace this to a process "systemd-tty-ask-password-agent --watch" while systemd was restarting ceph.target.
It seems that systemd confused its internal state
Yes, you can add arbitrary sized disks to Ceph. Usually the disk size is used
as the OSD's weight factor which influences the placement of data.
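To illustrate the convention (by default Ceph derives the initial CRUSH weight from the disk size, roughly 1.0 per TiB), a quick sketch — this is a model of the convention, not Ceph code:

```python
# Sketch: Ceph's default initial CRUSH weight is roughly the disk size in TiB,
# so a bigger disk receives proportionally more data.

def crush_weight(size_bytes: int) -> float:
    """Approximate default CRUSH weight for a disk of the given size."""
    return round(size_bytes / 2**40, 4)

# A 4 TiB disk gets about twice the weight (and thus data) of a 2 TiB disk.
print(crush_weight(4 * 2**40))  # → 4.0
print(crush_weight(2 * 2**40))  # → 2.0
```

The weight can still be adjusted manually afterwards if the default placement is not what you want.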
On 2 March 2017 22:49:23 CET, Daniel wrote:
>Hi there,
>
>I am playing a bit with Ceph for some weeks and I just wanted to
to either the SSD or the HDD, as they are run as
> independent storage systems.
>
> https://eXtremeSHOK.com
>
> On 03/03/2017 12:05 AM, Uwe Sauter wrote:
>> Yes, you can add arbitrary sized disks
> 6- Configure your switch (both ports AND bond) to your VLANS, all tagged.
>
> 7- Reboot.
>
> In your Network settings page, you should see only OVS elements (+ the two
> eths of the bond as Network Devices).
>
> You can assign IPs directly to vmbrs when you don't need ot
Hi Yannick,
I'll give it a try tomorrow.
Thanks for the suggestion.
Regards,
Uwe
On 28.02.2017 at 19:45, Yannick Palanque wrote:
> Hello,
>
> At 2017-02-28T13:20:24+0100,
> Uwe Sauter <uwe.sauter...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm tryin
Hi,
I'd like to make you aware of a security flaw in virtfs [1] that was published
about 2 weeks ago.
Might be worthwhile to get this into the coming update, if this applies to PVE.
Regards,
Uwe
[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=1035
On 27.02.2017 at
If it is a multicast problem and your cluster is not that big (~10 nodes) you
could switch to using "udpu" in corosync.conf
totem {
[…]
config_version: +=1 # increment with every change you do
transport: udpu
}
On 11.08.2017 at 13:48, Alexandre DERUMIER wrote:
> seem to be a
Are there any reasons on your side to use Ubuntu? If you want to stay
compatible you could also install Proxmox including Ceph but
not use those hosts for virtualization…
On 14.08.2017 at 14:07, Gilberto Nunes wrote:
> Hi
>
> Regard Ceph, can I use 3 Ubuntu Server 16 Xenial to build a Ceph
-14 9:30 GMT-03:00 Uwe Sauter <uwe.sauter...@gmail.com
> <mailto:uwe.sauter...@gmail.com>>:
>
> Are there any reasons on your side to use Ubuntu? If you want to stay
> compatible you could also install Proxmox including Ceph but
> not use those hosts for virtu
>
> 2017-08-14 9:51 GMT-03:00 Uwe Sauter <uwe.sauter...@gmail.com
> <mailto:uwe.sauter...@gmail.com>>:
>
> Then the question is if
>
> a) you'd want to integrate those Ubuntu servers into an existing Ceph
> cluster (managed by Proxmox) or
>
-1~bpo80+1
On 07.07.2017 at 17:38, Nicola Ferrari (#554252) wrote:
> On 20/06/2017 18:19, Uwe Sauter wrote:
>>
>> Can someone explain under which circumstances this output is displayed
>> instead of just the short message that migration
>> was started?
>
> I
Ah, thanks. (Sorry for the late reply, Gmail put your answer into the spam
folder.)
On 20.06.2017 at 19:01, Michael Rasmussen wrote:
> The former is for HA VMs, the latter for non-HA VMs
>
> On June 20, 2017 6:19:36 PM GMT+02:00, Uwe Sauter wrote:
Hi all,
1) I was wondering how a PVE (4.4) cluster will behave when one of the nodes is
restarted / shut down either via WebGUI or via
commandline. Will hosted, HA-managed VMs be migrated to other hosts before
shutting down or will they be stopped (and restarted on
another host once HA recognizes
Hi Thomas,
thank you for your insight.
>> 1) I was wondering how a PVE (4.4) cluster will behave when one of the nodes
>> is restarted / shut down either via WebGUI or via
>> commandline. Will hosted, HA-managed VMs be migrated to other hosts before
>> shutting down or will they be stopped
Thomas,
>>> An idea is to allow the configuration of the behavior and add two
>>> additional behaviors,
>>> i.e. migrate away and relocate away.
>> What's the difference between migration and relocation? Temporary vs.
>> permanent?
>
> Migration does an online migration if possible (=on VMs)
An example off the top of my head:
/etc/network/interfaces
-
# management interface
auto eth0
iface eth0 inet static
address 10.100.100.8
netmask 255.255.255.0
gateway 10.100.100.254
# 1st interface in bond
auto eth1
iface eth1 inet manual
mtu 9000
# 2nd interface in bond
# interface for vlan 120 on bond with IP
auto bond0.120
iface bond0.120 inet static
address 10.100.100.8
netmask 255.255.255.0
mtu 9000
# interface for vlan 130 on bond without IP (just for VMs)
auto bond0.130
iface bond0.130 inet manual
-
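The truncated middle of the example above would, in a typical setup, contain the second slave and the bond itself; a sketch (the slave names eth1/eth2 and the LACP bond options are assumptions, not taken from the original mail):

```
# 2nd interface in bond (assumed name eth2)
auto eth2
iface eth2 inet manual
    mtu 9000

# the bond itself (LACP assumed)
auto bond0
iface bond0 inet manual
    bond-slaves eth1 eth2
    bond-mode 802.3ad
    bond-miimon 100
    mtu 9000
```

The VLAN sub-interfaces bond0.120 and bond0.130 shown above then sit on top of this bond.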
On 23.08.2017 at 07:27, Uwe Sauter wrote
https://pve.proxmox.com/wiki/Ceph_Server#Ceph_on_Proxmox_VE_5.0
"Note: the current Ceph Luminous 12.1.x is the release candidate, for
production ready Ceph Cluster packages please wait for
version 12.2.x "
On 17.08.2017 at 16:58, Gilberto Nunes wrote:
> Hi guys
>
> Ceph Luminous is
sistet (compare lines 4296 and 33465).
This is the reason why I have 2 sub_filters for basically the same replacement.
On 09.05.2017 at 11:01, Thomas Lamprecht wrote:
> Hi,
>
> On 05/05/2017 06:18 PM, Uwe Sauter wrote:
>> Hi,
>>
>> I've seen the wiki page [1] that ex
Hi Thomas,
thank you for the effort of explaining.
>
> Hmm, there are some problems as we mostly set absolute paths on resources
> (images, JS and CSS files)
> so the loading fails...
> I.e., pve does not know that it is accessed from
> https://example.com/pve-node/ and tries to load the
Hi,
as my Proxmox hosts don't have enough local storage I wanted to do backups into
the "network". One option that came into mind was
using the existing Ceph installation to do backups. What's currently missing
for that (as far as I can tell) is Proxmox support
for a Ceph-backed filesystem
Hi all,
usually when I update my PVE cluster I do it in a rolling fashion:
1) empty one node from running VMs
2) update & reboot that node
3) go to next node
4) migrate all running VMs to already updated node
5) go to 2 until no more nodes need update
For step 1 (or 4) I usually do:
# qm list
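To empty a node, the `qm list` output can be filtered for running guests and each one migrated. A sketch — the `awk` filter is shown against canned sample output so it can be checked offline; in real use you would pipe `qm list` itself and migrate each VMID to a target node of your choice:

```shell
#!/bin/sh
# Sketch: extract the VMIDs of running guests from `qm list` output.
# In real use you would then run, per VMID:
#   qm migrate "$vmid" <target-node> --online
# Canned sample output is used here for illustration.

sample_output='      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       101 webserver            running    2048              16.00 1234
       102 database             stopped    4096              32.00 0
       105 monitoring           running    1024               8.00 5678'

# Column 3 is the status; print column 1 (VMID) for running guests.
echo "$sample_output" | awk '$3 == "running" { print $1 }'
```

For the sample above this prints the VMIDs 101 and 105, one per line.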
Hi all,
after having succeeded in getting an almost TCP-based NFS share mounted (see
yesterday's thread), I'm now struggling with the backup
process itself.
Definition of NFS share in /etc/pve/storage.cfg is:
nfs: aurel
export /backup/proxmox-infra
path /mnt/pve/aurel
Hi Fabian,
thanks for looking into this.
As I already mentioned yesterday, my NFS setup tries to use TCP as much as
possible, so the only UDP port used / allowed in the NFS
server's firewall is udp/111 for Portmapper (to allow showmount to work).
>> Issue 1:
>> Backups failed tonight with "Error:
On 19.05.2017 at 11:53, Fabian Grünbichler wrote:
> On Fri, May 19, 2017 at 11:26:35AM +0200, Uwe Sauter wrote:
>> Hi Fabian,
>>
>> thanks for looking into this.
>>
>> As I already mentioned yesterday my NFS setup tries to use TCP as much as
>> possi
On 18.05.2017 at 15:04, Emmanuel Kasper wrote:
>
>
> On 05/18/2017 02:56 PM, Uwe Sauter wrote:
>> # mount -t nfs -o vers=4,rw,sync :$SHARE /mnt
>> mount.nfs: mounting aurel:/proxmox-infra failed, reason given by server: No
>> such file or directory
>
> aurel:
>>> perl -e 'use strict; use warnings; use PVE::ProcFSTools; use Data::Dumper;
>>> print Dumper(PVE::ProcFSTools::parse_proc_mounts());'
>>>
>>
>> $VAR1 = [
>>
>> [
>> ':/backup/proxmox-infra',
>> '/mnt/pve/aurel',
>> 'nfs',
>>
>>
>>
>> the culprit is likely that your storage.cfg contains the IP, but your
>> /proc/mounts contains the hostname (with a reverse lookup inbetween?).
>>
>
> I was following https://pve.proxmox.com/wiki/Storage:_NFS , quote: "To avoid
> DNS lookup delays, it is usually preferable to use an
> IP
On 22.05.2017 at 15:40, Uwe Sauter wrote:
>
>>
>> I discovered a different issue with this definition: If I go to Datacenter
>> -> node -> storage aurel -> content I only get "mount
>> error: mount.nfs: /mnt/pve/aurel is busy or already mounted (500)".
>
> I discovered a different issue with this definition: If I go to Datacenter ->
> node -> storage aurel -> content I only get "mount
> error: mount.nfs: /mnt/pve/aurel is busy or already mounted (500)".
>
> The share is mounted again with IP address though I didn't change the config
> after
l by running
> unconfigured.sh.
>
> So basically, if I boot to the shell, how can I start the install from the
> contents of the CD/ISO.
>
>
>
> On 18 May 2017 at 19:04, Uwe Sauter <uwe.sauter...@gmail.com
> <mailto:uwe.sauter...@gmail.com>> wrote:
>
>
Don't know what your situation is but there is a wiki page [1] that describes
the installation of Proxmox on top of an
existing Debian.
[1] https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
On 18.05.2017 at 19:55, Steve wrote:
> In version 3.2 ISO there was this script to start
but again, due to showmount not using TCP, PVE will not mount it automatically.
Regards,
Uwe
On 18.05.2017 at 11:40, Uwe Sauter wrote:
> Hi,
>
> as my Proxmox hosts don't have enough local storage I wanted to do backups
> into the "network". One option
Hi Fabian,
>> I was following https://pve.proxmox.com/wiki/Storage:_NFS , quote: "To avoid
>> DNS lookup delays, it is usually preferable to use an
>> IP address instead of a DNS name". But yes, the DNS in our environment is
>> configured to allow reverse lookups.
>
> which - AFAIK - is still
Hi,
I just noticed an (intentional?) inconsistency between the WebUI's Ceph OSD
page vs. the tasks view on the bottom and
the CLI:
If you go to Datacenter -> node -> Ceph -> OSD and select one of the OSDs you
can "remove" it with a button in the upper
right corner. If you do so the task is
Hi,
I've seen the wiki page [1] that explains how to operate a PVE host behind a
reverse proxy.
I'm currently in the situation that I have several services already behind a
reverse proxy that are accessible with different
webroots, e.g.
https://example.com/dashboard
https://example.com/owncloud
Hi,
suppose I have several snapshots of a VM:
Snap1
└── Snap2
└── Snap3
└── Snap4
└── Snap5
Is there a way to determine the size of each snapshot?
Regards,
Uwe
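For Ceph-backed disks (as mentioned in the follow-up in this thread), `rbd du <pool>/<image>` lists provisioned and used size per snapshot. A sketch that parses canned sample output, so it can be checked offline — the pool/image names, snapshot names and numbers are made up:

```shell
#!/bin/sh
# Sketch: `rbd du` prints one line per snapshot plus one for the live image.
# Real use would be something like:  rbd du vms/vm-100-disk-1
# Canned sample output below so the parsing can be checked offline.

sample='NAME                 PROVISIONED USED
vm-100-disk-1@Snap1  16 GiB      1.2 GiB
vm-100-disk-1@Snap2  16 GiB      2.5 GiB
vm-100-disk-1        16 GiB      4.0 GiB'

# Print only the snapshot lines (their names contain an '@'),
# with the USED columns.
echo "$sample" | awk '/@/ { print $1, $4, $5 }'
```

For the sample above this prints one line per snapshot with its used size.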
___
pve-user mailing list
pve-user@pve.proxmox.com
>
>
> On Thu, Sep 21, 2017 at 8:30 AM, Uwe Sauter <uwe.sauter...@gmail.com
> <mailto:uwe.sauter...@gmail.com>> wrote:
>
> Hi,
>
> thanks, but I forgot to mention that all my VMs have Ceph as backend and
> thus snapshots can'
Hi,
I'm currently facing the following problem:
VM is defined with several disks:
scsi0 -> ceph:vm-201-disk1,discard=on, size=16G
scsi1 -> ceph:vm-201-disk2,discard=on, size=16G
scsi2 -> ceph:vm-201-disk3,discard=on, size=4G
scsi3 -> ceph:vm-201-disk4,discard=on, size=4G
scsi4 ->
) to the OS.
Having a look at the dmesg output it seems to be a timing issue: highest LUN is
recognized as first.
On 29.08.2017 at 13:30, Lindsay Mathieson wrote:
> On 29/08/2017 9:17 PM, Uwe Sauter wrote:
>> Is there any way to force scsi1 to /dev/sdb, scsi2 to /dev/sdc, etc. so that
>
Hi all,
I'm a bit shocked. I wanted to create a "safe" backup where the VM is shut down
and thus all filesystems are in a
consistent state. For that I shut down my VM and then started a backup (backup
mode=stop, compression=lzo) and what must
I see:
INFO: starting new backup job: vzdump 106
Thanks for clarification!
On 19.11.2017 at 09:11, Dietmar Maurer wrote:
>>> Could someone with insight into the backup process explain why kvm is
>>> started?
>>
>> It uses the qemu copy-on-write feature to make sure the state is consistent.
>> You can immediately work with that VM, while qemu
Hi,
is it still correct to set tunables to "hammer" even with Proxmox 5? This is
mentioned in the wiki [1].
Regards,
Uwe
[1] https://pve.proxmox.com/wiki/Ceph_Server#Set_the_Ceph_OSD_tunables
Hi
running a cluster with PVE 5.1 and Ceph.
pveperf as described in [1] doesn't work anymore. Even as root I get:
root@pxmx-02:~# pveperf help
CPU BOGOMIPS: 89368.48
REGEX/SECOND: 1505926
df: help: No such file or directory
DNS EXT: 13.68 ms
DNS INT: 19.98 ms
True, my bad. But every other PVE related command I used so far had a
"help" subcommand so I didn't look into
the man page.
Please take this then as a bug report for the subcommand (or a "-h" help option)
and as a request to update the wiki
article to include the info that a PATH argument can
Hi,
now that 5.1 is released, will there be documentation on how to upgrade from 4.4?
Is the wiki page [1] valid for 5.1?
Did someone already try the upgrade? Any experience is appreciated.
Regards,
Uwe
[1] https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
Hi,
since kernel 4.15.x was released in pve-nosubscription I have I/O performance
regressions that lead to 100% iowait in VMs, dropped (audit) log records and
instability in general.
All VMs that present this behavior run up-to-date CentOS 7 on Ceph-backed
storage
with kvm64 as CPU.
This
Hi all,
I recently discovered that one of the updates since the turn of the year introduced
options to let the VM know about Meltdown/Spectre
mitigation on the host (VM configuration -> processors -> advanced -> PCID &
SPEC-CTRL).
I'm not sure if I understand the documentation correctly so please
as I can tell.
>
> On Tue, May 08, 2018 at 03:31:52PM +0200, Uwe Sauter wrote:
>> Hi all,
>>
>> I recently discovered that one of the updates since turn of the year
>> introduced options to let the VM know about Meltdown/Spectre
>> mitigation on the host (V
on 4.13.16 then no blocking OSDs happen (as far as
I have seen until now).
Has anyone repeatedly seen OSDs with blocked requests when running 4.15.17, or
is it just me?
Regards,
Uwe
On 09.05.2018 at 11:51, Uwe Sauter wrote:
> Hi,
>
> since kernel 4.15.x was released in pve-nos
Hi all,
I updated my cluster this morning (version info see end of mail) and rebooted all hosts sequentially, live migrating VMs
between hosts. (Six hosts connected via 10GbE, all participating in a Ceph cluster.)
Since then I experience hanging storage tasks inside the VMs (e.g. jbd2 on VMs
Hi Lindsay,
I updated my cluster this morning (version info see end of mail) and rebooted all hosts sequentially, live migrating
VMs between hosts. (Six hosts connected via 10GbE, all participating in a Ceph cluster.)
What's your ceph status? It's probably doing a massive backfill after the
Mathieson:
On 3/05/2018 6:27 AM, Uwe Sauter wrote:
I updated my cluster this morning (version info see end of mail) and rebooted all hosts sequentially, live migrating
VMs between hosts. (Six hosts connected via 10GbE, all participating in a Ceph cluster.)
What's your ceph status? It's probably doing
Looks like this was caused by pve-kernel-4.15.15-1-pve. After rebooting into
pve-kernel-4.13.16-2-pve performance is back to normal.
Hopefully the next kernel update will address this.
Regards,
Uwe
On 02.05.2018 at 22:27, Uwe Sauter wrote:
> Hi all,
>
> I updated m
Hi,
I'm trying to use the virtualization support that Mellanox ConnectX-3 cards
provide. In [1] you can find a document by Mellanox
that describes the necessary steps for KVM.
Currently I'm trying to install Mellanox OFED but the installation fails
because there is no package
On 26.07.2018 at 11:22, Thomas Lamprecht wrote:
> Hi,
>
> On 07/26/2018 at 11:05 AM, Brent Clark wrote:
>> Good day Guys
>>
>> I did a sslscan on my proxmox host, and I got the following:
>>
>> snippet:
>> Preferred TLSv1.0 256 bits ECDHE-RSA-AES256-SHA Curve P-256 DHE
>> 256
>>
y_pass https://localhost:8006;
> proxy_buffering off;
> client_max_body_size 0;
> proxy_connect_timeout 3600s;
> proxy_read_timeout 3600s;
> proxy_send_timeout 3600s;
> send_timeout 3600s;
> }
> }
Would you mind sharing the relevant parts of your nginx config? Does
forwarding NoVNC traffic work?
On 26.07.2018 at 13:22, Ian Coetzee wrote:
> Hi All,
>
> I know this has been answered.
>
> What I did was to drop a reverse proxy (nginx) in front of pveproxy
> listening on port 443 then
>>>
* pve-kernel 4.13 is based on
http://kernel.ubuntu.com/git/ubuntu/ubuntu-artful.git/ ?
>>>
>>> Yes. (Note that this may not get much updates anymore)
>>>
* pve-kernel 4.15 is based on
http://kernel.ubuntu.com/git/ubuntu/ubuntu-bionic.git/ ?
>>>
>>> Yes. We're
Hi Thomas,
On 22.08.18 at 09:55, Thomas Lamprecht wrote:
> Hi Uwe,
>
> On 8/22/18 9:48 AM, Uwe Sauter wrote:
>> Hi all,
>>
>> some quick questions:
>>
>> * As far as I can tell the PVE kernel is a modified version of Ubuntu
>> kernels, correct
Hi all,
some quick questions:
* As far as I can tell the PVE kernel is a modified version of Ubuntu kernels,
correct?
Modifications can be viewed in the pve-kernel.git repository
(https://git.proxmox.com/?p=pve-kernel.git;a=tree).
* pve-kernel 4.13 is based on
ming knowledge)
>
> Marcus Haarmann
>
> ----------
> *From: *"Uwe Sauter"
> *To: *"pve-user"
> *Sent: *Wednesday, 22 August 2018 09:4
One thing that speaks against this being PTI is that both types of nodes have
secondary OSDs causing slow requests.
Though it still is an option to try before giving up completely.
On 22.08.18 at 11:45, Uwe Sauter wrote:
> Hi Marcus,
>
> no, I haven't disabled Spectre/Meltdown mitigat
encountered stuck I/O on rbd devices.
> And kernel says it is losing a mon connection and hunting for a new mon all
> the time (when backup takes
> place and heavy I/O is done).
>
> Marcus Haarmann
>
> ------
If using standard 802.3ad (LACP) you will always get only the performance of a
single link between one host and another.
Using "bond-xmit-hash-policy layer3+4" might get you a better performance but
is not standard LACP.
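The effect can be illustrated with a toy model of slave selection: a layer2 (MAC-based) hash maps every packet between the same two hosts to the same slave, while a layer3+4 hash also mixes in the TCP/UDP ports, so different connections can land on different links. This uses Python's built-in `hash` as a stand-in, not the kernel's actual algorithm:

```python
# Toy model of bonding transmit-hash policies (NOT the kernel's real hash).

def pick_slave_layer2(src_mac: str, dst_mac: str, n_slaves: int) -> int:
    # Same host pair -> same hash input -> always the same slave.
    return hash((src_mac, dst_mac)) % n_slaves

def pick_slave_layer34(src_ip, dst_ip, src_port, dst_port, n_slaves: int) -> int:
    # Ports are part of the hash, so separate connections may differ.
    return hash((src_ip, dst_ip, src_port, dst_port)) % n_slaves

# Between one host pair, the layer2 policy always picks the same link:
choices = {pick_slave_layer2("aa:bb", "cc:dd", 2) for _ in range(100)}
assert len(choices) == 1

# With layer3+4, many connections (different source ports) spread across
# both links -- though a single connection still only uses one link.
spread = {pick_slave_layer34("10.0.0.1", "10.0.0.2", p, 5201, 2)
          for p in range(40000, 40100)}
print(sorted(spread))
```

This is why a single large transfer never exceeds one link's speed in either mode, but many parallel connections can with layer3+4.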
On 24.08.18 at 12:01, Gilberto Nunes wrote:
> So what bond mode I
-------
> *From: *"uwe sauter de"
> *To: *"Thomas Lamprecht" , "pve-user"
>
> *Sent: *Wednesday, 22 August 2018 10:50:19
> *Subject: *Re: [PVE-User] PVE kernel
As long as your microcode is older than June 2017 there is no way that it contains
mitigations for Meltdown and Spectre, as Intel was
only made aware of the flaws in June 2017.
Same goes for the BIOS as the vendors require the microcode from Intel to
include into their updates.
Regarding the
Hi all,
I discourage you from updating ZFS to version 0.7.7 as it contains a
regression. Version 0.7.8 was released today that reverts the
commit that introduced the regression.
For info, check: https://github.com/zfsonlinux/zfs/issues/7401
Regards,
Uwe
Hi there,
I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but
migrations fail with this setting.
# log of failed insecure migration #
2018-03-23 14:58:44 starting migration of VM 101 to node 'px-bravo-cluster'
(169.254.42.49)
2018-03-23 14:58:44 copying disk
Thanks, I'll try again.
On 23.03.2018 at 15:15, Thomas Lamprecht wrote:
> Hi Uwe!
>
> On 3/23/18 3:02 PM, Uwe Sauter wrote:
>> Hi there,
>>
>> I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but
>> migrations fail with this
Ah, syntax. Thanks again.
Have a nice weekend.
On 23.03.2018 at 15:35, Thomas Lamprecht wrote:
> Uwe,
>
> On 3/23/18 3:31 PM, Uwe Sauter wrote:
>> a quick follow-up: is it possible to create PVE firewall rules for port
>> ranges? It seems that only a single port is allo
lid format - invalid port '6-60050'
Best,
Uwe
On 23.03.2018 at 15:15, Thomas Lamprecht wrote:
> Hi Uwe!
>
> On 3/23/18 3:02 PM, Uwe Sauter wrote:
>> Hi there,
>>
>> I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but
>&g
Hi,
I'm trying to manually migrate VM images with snapshots from pool "vms" to pool
"vdisks" but it fails:
# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format
2 - vdisks/vm-102-disk-2
rbd: import header failed.
rbd: import failed: (22) Invalid argument
Exporting
Hi,
first problem is that you seem to be using some client that replaces verbose text with links to facebook. Could you
please resend your mail as a plain text message (no HTML). This should also take care of the formatting (currently no
monospace font, which makes it much harder to find the
Hi,
in the documentation to pvecm [1] it says:
At this point you must power off hp4 and make sure that it will not power on
again (in the network) as it is.
Important:
As said above, it is critical to power off the node before removal, and make
sure that it will never power on again (in the
Hi all,
thanks for looking into this.
With help from the ceph-users list I was able to migrate my images.
So no need anymore.
Best,
Uwe
On 08.11.18 at 16:38, Thomas Lamprecht wrote:
> On 11/8/18 1:43 PM, Alwin Antreich wrote:
>> On Wed, Nov 07, 2018 at 09:01:09PM +0100, U
You could use
qm terminal
to connect to the serial console. Ctrl + o will quit the session.
You need to configure your VMs to provide a serial console, e.g. by adding "console=tty0 console=ttyS0,115200n8" to
GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and running "grub-mkconfig -o
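Sketched end to end (the VM ID 100 is a placeholder, and the exact grub output path is an assumption for a typical Debian-style guest):

```
# On the PVE host: give VM 100 a serial port (VM ID is a placeholder)
qm set 100 -serial0 socket

# Inside the guest, in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet console=tty0 console=ttyS0,115200n8"

# Then regenerate the grub config and reboot the guest, e.g.:
grub-mkconfig -o /boot/grub/grub.cfg

# Afterwards, on the host:
qm terminal 100
```

With that in place, kernel and login output appear on the serial console that `qm terminal` attaches to.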
Frank Thommen:
Good point. Thanks a lot
frank
On 11/22/2018 07:51 PM, Uwe Sauter wrote:
FYI:
I had such a thing working. What you need to keep in mind is that you should configure both interfaces per host on the
same (software) bridge and keep STP on… that way when you lose the link from
FYI:
I had such a thing working. What you need to keep in mind is that you should configure both interfaces per host on the
same (software) bridge and keep STP on… that way when you lose the link from node A to node B the traffic will be going
through node C.
And how would you handle the situation where you want to use dport 10200 on
several VMs on the same host?
I don't think that this will work reliably in a cluster, where VMs migrate
between hosts.
On 06.04.19 at 16:25, Gilberto Nunes wrote:
Hi there...
Is there any way to use port forward
No in-place way to convert between them, but you can set one drive at a time to "out"
to migrate data away, then to "off" and destroy.
Then remove the OSD and recreate with filestore. Let it sync and once finished,
do the next drive.
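The per-drive cycle described above could look roughly like this (OSD ID 7 is a placeholder, and the exact command spellings vary between Ceph/PVE releases, so treat this as a sketch rather than a recipe):

```
# Sketch of the drain-and-recreate cycle for one OSD (ID 7 is a placeholder).

ceph osd out 7                            # "out": data is migrated away
ceph -s                                   # wait until the rebalance has finished
systemctl stop ceph-osd@7                 # "off"
ceph osd purge 7 --yes-i-really-mean-it   # destroy / remove the OSD
pveceph createosd /dev/sdX                # recreate on the same disk (options vary)

# Let the cluster resync, then repeat with the next drive.
```

Doing this one drive at a time keeps the configured number of replicas available throughout the migration.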
On 19.02.19 at 12:16, Gilberto Nunes wrote:
> Hi
>
> I have 15
In dmesg output, are there lines like "e1000e :00:19.0 enp0s25: renamed
from eth0" ?
The question I see is: are all of your interfaces detected but udev is doing
something wrong, or does the kernel not detect all
interfaces (though it seems to see all PCIe devices)?
You could try to create
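One way to pin interface names (a sketch; the MAC address and the name lan0 are placeholders) is a systemd .link file, e.g. /etc/systemd/network/10-lan0.link:

```
# /etc/systemd/network/10-lan0.link (placeholder MAC and name)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
```

After changing it, the initramfs usually has to be regenerated (update-initramfs -u) for the name to be applied at boot.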
On 11.04.19 at 16:07, Thomas Lamprecht wrote:
On 4/11/19 2:47 PM, Uwe Sauter wrote:
Thanks for all your effort. Two questions though:
From the release notes:
HA improvements and added flexibility
It is now possible to set a datacenter wide HA policy which can change the
way guests
Thanks for all your effort. Two questions though:
From the release notes:
HA improvements and added flexibility
It is now possible to set a datacenter wide HA policy which can change the
way guests are treated upon a Node shutdown or
reboot. The choices are:
freeze: always freeze
To be most flexible in a HA setup you would take the lowest common denominator on the
CPU architecture / feature side.
E.g. if you have 5 hosts with SandyBridge CPUs and 5 Hosts with Skylake CPUs
you would limit the CPU type to SandyBridge. This
enables you to migrate VMs back from a Skylake node to
Hi all,
is it possible to move a VM's disks from one Ceph cluster to another, including
all snapshots that those disks have? The GUI
doesn't let me do it but is there some commandline magic that will move the
disks and all I have to do is edit the VM's config file?
Background: I have two PVE
unza, wrote:
>
>> Hi Uwe,
>>
>> El 19/8/19 a las 10:14, Uwe Sauter escribió:
>>> is it possible to move a VM's disks from one Ceph cluster to another,
>> including all snapshots that those disks have? The GUI
>>> doesn't let me do it but is there some com