t it's just a case of loading the appropriate modules
with the modprobe command.
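A rough sketch of what that could look like for a ConnectX-5 with the in-kernel
mlx5 driver (interface name and VF count are only placeholders, and SR-IOV has
to be enabled in the card's firmware beforehand):

modprobe mlx5_core
modprobe mlx5_ib
# create e.g. 4 virtual functions on the physical function
echo 4 > /sys/class/net/enp65s0f0/device/sriov_numvfs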
G.
On Fri, 15 May 2020 at 15:29, Thomas Lamprecht <t.lampre...@proxmox.com> wrote:
On 5/15/20 9:00 AM, Uwe Sauter wrote:
> Chris,
>
> thanks for taking a look.
>
>
Chris,
thanks for taking a look.
Am 14.05.20 um 23:13 schrieb Chris Hofstaedtler | Deduktiva:
> * Uwe Sauter [200514 22:23]:
> [...]
>> More details:
>>
>> I followed these two instructions:
>>
>> https://community.mellanox.com/s/article/howto-config
Hi all,
I had to change the hardware of one of my Proxmox installations and now have the problem that I cannot configure a
Mellanox ConnectX-5 card for SR-IOV/passthrough. To be more precise, I can boot the VM and it also recognizes the
Infiniband device, but I'm unable to assign a Node GUID and
If you can afford the downtime of the VMs you might be able to migrate the disk
images using "rbd export | ncat" and "ncat | rbd
import".
I haven't tried this with such a large difference in versions, but from Proxmox
5.4 to 6.1 it worked without a problem.
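A rough sketch of that approach, with pool/image name, host and port as
placeholders (start the receiving side first):

# on the destination cluster
ncat -l 7777 | rbd import - vdisks/vm-100-disk-0
# on the source cluster
rbd export vdisks/vm-100-disk-0 - | ncat destination-host 7777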
Regards,
Uwe
Am 30.01.20
s of the management
interfaces are not in that set.
Can anybody confirm that this is indeed an incomplete macro or is something
wrong with my configuration?
Regards,
Uwe Sauter
Hi all,
today I created some kind of a mess.
TL;DR
I added one node with the wrong hostname to an existing cluster and tried to rename the node; now some parts of the WebUI
don't work anymore. Any thoughts?
I have a 3 node cluster with up-to-date software and added a fourth and fifth node. By
Am 05.12.19 um 07:58 schrieb Thomas Lamprecht:
> Hi,
>
> On 12/4/19 11:17 PM, Uwe Sauter wrote:
>> Hi,
>>
>> upgraded a cluster of three servers to 6.1. Currently I'm in the process of
>> rebooting them one after the other.
>>
>
> Upgrade fro
Hi,
upgraded a cluster of three servers to 6.1. Currently I'm in the process of
rebooting them one after the other.
When trying to migrate VMs to a host that was already rebooted I get the
following in the task viewer window in the web UI:
Check VM 109: precondition check passed
Migrating VM
Am 06.09.19 um 12:32 schrieb Alwin Antreich:
> On Fri, Sep 06, 2019 at 11:44:10AM +0200, Uwe Sauter wrote:
>> root@px-bravo-cluster:~# rbd -p vdisks create vm-112-disk-0 --size 1G
>> rbd: create error: (17) File exists
>> 2019-09-06 11:35:20.943998 7faf704660c0 -1 librbd: rb
I'll need to think about naming then.
But this keeps me wondering why it only failed for one VM while the other six I
moved today caused no problems.
Thank you.
Regards,
Uwe
>
> On Fri, 6 Sep 2019, 12:45 Uwe Sauter, <uwe.sauter...@gmail.com> wrote:
>
>
Hello Alwin,
Am 06.09.19 um 11:32 schrieb Alwin Antreich:
> Hello Uwe,
>
> On Fri, Sep 06, 2019 at 10:41:18AM +0200, Uwe Sauter wrote:
>> Hi,
>>
>> I'm having trouble moving a disk image to Ceph. Moving between local disks
>> and NFS share
Hi,
I'm having trouble moving a disk image to Ceph. Moving between local disks and
an NFS share is working.
The error given is:
create full clone of drive scsi0 (aurel-cluster1-VMs:112/vm-112-disk-0.qcow2)
rbd: create error: (17) File exists
TASK ERROR: storage migration failed: error
Am 03.09.19 um 12:09 schrieb Fabian Grünbichler:
> On September 3, 2019 11:46 am, Thomas Lamprecht wrote:
>> Hi Uwe,
>>
>> On 03.09.19 09:18, Uwe Sauter wrote:
>>> Hi all,
>>>
>>> on a freshly installed PVE 6 my /etc/aliases looks like:
>>&g
Thanks to all. I got it working. rbd export/import were the right hint.
Am 21.08.19 um 01:26 schrieb Mike O'Connor:
On 20/8/19 12:19 am, Mark Adams wrote:
On Mon, 19 Aug 2019 at 11:59, Uwe Sauter wrote:
Hi,
@Eneko
Both clusters are hyper-converged PVE clusters each running its own Ceph
unza, wrote:
>
>> Hi Uwe,
>>
>> El 19/8/19 a las 10:14, Uwe Sauter escribió:
>>> is it possible to move a VM's disks from one Ceph cluster to another,
>> including all snapshots that those disks have? The GUI
>>> doesn't let me do it but is there some com
Hi all,
is it possible to move a VM's disks from one Ceph cluster to another, including
all snapshots that those disks have? The GUI
doesn't let me do it but is there some commandline magic that will move the
disks and all I have to do is edit the VM's config file?
Background: I have two PVE
To be most flexible in an HA setup you would take the lowest common denominator on the
CPU architecture / feature side.
E.g. if you have 5 hosts with SandyBridge CPUs and 5 hosts with Skylake CPUs
you would limit the CPU type to SandyBridge. This
enables you to migrate VMs back from a Skylake node to
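For example, the CPU type can be set per VM (the VMID is a placeholder) with:

qm set 100 --cpu SandyBridge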
Am 11.04.19 um 16:07 schrieb Thomas Lamprecht:
On 4/11/19 2:47 PM, Uwe Sauter wrote:
Thanks for all your effort. Two questions though:
From the release notes:
HA improvements and added flexibility
It is now possible to set a datacenter wide HA policy which can change the
way guests
Thanks for all your effort. Two questions though:
From the release notes:
HA improvements and added flexibility
It is now possible to set a datacenter wide HA policy which can change the
way guests are treated upon a Node shutdown or
reboot. The choices are:
freeze: always freeze
In dmesg output, are there lines like "e1000e 0000:00:19.0 enp0s25: renamed
from eth0"?
The question I see is: are all of your interfaces detected but udev is doing
something wrong, or does the kernel not detect all
interfaces (though it seems to see all PCIe devices)?
You could try to create
And how would you handle the situation where you want to use dport 10200 on
several VMs on the same host?
I don't think that this will work reliably in a cluster, where VMs migrate
between hosts.
Am 06.04.19 um 16:25 schrieb Gilberto Nunes:
Hi there...
Is there any way to use port forward
There is no in-place way to convert between the two, but you can set one drive at a time to "out"
to migrate its data away, then to "off" and destroy it.
Then remove the OSD and recreate it with filestore. Let it sync and, once finished,
do the next drive.
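Roughly, per OSD (the OSD id is a placeholder; wait until the cluster is back
to HEALTH_OK before touching the next one):

ceph osd out 12                # "out": data starts migrating away
ceph -s                        # wait for the rebalance to finish
systemctl stop ceph-osd@12     # "off"
pveceph destroyosd 12          # remove it ("pveceph osd destroy" on newer releases), then recreate with filestore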
Am 19.02.19 um 12:16 schrieb Gilberto Nunes:
> Hi
>
> I have 15
You could use
qm terminal
to connect to the serial console. Ctrl + o will quit the session.
You need to configure your VMs to provide a serial console, e.g. by adding "console=tty0 console=ttyS0,115200n8" to
GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and running "grub-mkconfig -o
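A rough sketch of the whole procedure, using the usual default paths and a
placeholder VMID:

# on the PVE host: give the VM a serial port
qm set 100 -serial0 socket
# inside the guest, in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet console=tty0 console=ttyS0,115200n8"
# then regenerate the grub config and reboot the guest
grub-mkconfig -o /boot/grub/grub.cfg
# afterwards, from the host:
qm terminal 100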
Frank Thommen:
Good point. Thanks a lot
frank
On 11/22/2018 07:51 PM, Uwe Sauter wrote:
FYI:
I had such a thing working. What you need to keep in mind is that you should configure both interfaces per host on the
same (software) bridge and keep STP on… that way when you lose the link from
FYI:
I had such a thing working. What you need to keep in mind is that you should configure both interfaces per host on the
same (software) bridge and keep STP on… that way when you lose the link from node A to node B the traffic will go
through node C.
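A rough sketch of such a bridge on one node (interface names and address are
placeholders; the other nodes get the same layout):

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bridge_ports eth1 eth2    # one link to each neighbour node
    bridge_stp on             # STP breaks the loop in the ring
    bridge_fd 2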
Hi,
first problem is that you seem to be using some client that replaces verbose text with links to facebook. Could you
please resend your mail as a plain text message (no HTML). This should also take care of the formatting (currently no
monospace font, which makes it much harder to find the
Hi all,
thanks for looking into this.
With help from the ceph-users list I was able to migrate my images.
So no need anymore.
Best,
Uwe
Am 08.11.18 um 16:38 schrieb Thomas Lamprecht:
> On 11/8/18 1:43 PM, Alwin Antreich wrote:
>> On Wed, Nov 07, 2018 at 09:01:09PM +0100, U
Hi,
I'm trying to manually migrate VM images with snapshots from pool "vms" to pool
"vdisks" but it fails:
# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format
2 - vdisks/vm-102-disk-2
rbd: import header failed.
rbd: import failed: (22) Invalid argument
Exporting
Hi,
in the documentation to pvecm [1] it says:
At this point you must power off hp4 and make sure that it will not power on
again (in the network) as it is.
Important:
As said above, it is critical to power off the node before removal, and make
sure that it will never power on again (in the
-------
> *Von: *"uwe sauter de"
> *An: *"Thomas Lamprecht" , "pve-user"
>
> *Gesendet: *Mittwoch, 22. August 2018 10:50:19
> *Betreff: *Re: [PVE-User] PVE kernel
If using standard 802.3ad (LACP) you will always get only the performance of a
single link between one host and another.
Using "bond-xmit-hash-policy layer3+4" might get you a better performance but
is not standard LACP.
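For reference, a sketch of such a bond in /etc/network/interfaces (interface
names are placeholders). Note that layer3+4 hashes per TCP/UDP stream, so a
single stream is still limited to one link:

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4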
Am 24.08.18 um 12:01 schrieb Gilberto Nunes:
> So what bond mode I
One thing that speaks against this being PTI is that both types of nodes have
secondary OSDs causing slow requests.
Though it still is an option to try before giving up completely.
Am 22.08.18 um 11:45 schrieb Uwe Sauter:
> Hi Marcus,
>
> no, I haven't disabled Spectre/Meltdown mitigat
encountered stuck I/O on rbd devices.
> And kernel says it is losing a mon connection and hunting for a new mon all
> the time (when backup takes
> place and heavy I/O is done).
>
> Marcus Haarmann
>
> ------
>>>
* pve-kernel 4.13 is based on
http://kernel.ubuntu.com/git/ubuntu/ubuntu-artful.git/ ?
>>>
>>> Yes. (Note that this may not get much updates anymore)
>>>
* pve-kernel 4.15 is based on
http://kernel.ubuntu.com/git/ubuntu/ubuntu-bionic.git/ ?
>>>
>>> Yes. We're
ming knowledge)
>
> Marcus Haarmann
>
> ----------
> *Von: *"Uwe Sauter"
> *An: *"pve-user"
> *Gesendet: *Mittwoch, 22. August 2018 09:4
Hi Thomas,
Am 22.08.18 um 09:55 schrieb Thomas Lamprecht:
> Hi Uwe,
>
> On 8/22/18 9:48 AM, Uwe Sauter wrote:
>> Hi all,
>>
>> some quick questions:
>>
>> * As far as I can tell the PVE kernel is a modified version of Ubuntu
>> kernels, correct
Hi all,
some quick questions:
* As far as I can tell the PVE kernel is a modified version of Ubuntu kernels,
correct?
Modifications can be viewed in the pve-kernel.git repository
(https://git.proxmox.com/?p=pve-kernel.git;a=tree).
* pve-kernel 4.13 is based on
y_pass https://localhost:8006;
> proxy_buffering off;
> client_max_body_size 0;
> proxy_connect_timeout 3600s;
> proxy_read_timeout 3600s;
> proxy_send_timeout 3600s;
> send_timeout 3600s;
> }
> }
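In case it helps, a minimal sketch of a location block like the quoted one,
extended with the websocket upgrade headers that the noVNC/xterm.js console
needs (that part is an assumption, not taken from the config above):

location / {
    proxy_pass https://localhost:8006;
    proxy_buffering off;
    client_max_body_size 0;
    # required for the websocket upgrade used by the console
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}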
Would you mind sharing the relevant parts of your nginx config? Does
forwarding NoVNC traffic work?
Am 26.07.2018 um 13:22 schrieb Ian Coetzee:
> Hi All,
>
> I know this has been answered.
>
> What I did was to drop a reverse proxy (nginx) in front of pveproxy
> listening on port 443 then
Am 26.07.2018 um 11:22 schrieb Thomas Lamprecht:
> Hi,
>
> Am 07/26/2018 um 11:05 AM schrieb Brent Clark:
>> Good day Guys
>>
>> I did a sslscan on my proxmox host, and I got the following:
>>
>> snippet:
>> Preferred TLSv1.0 256 bits ECDHE-RSA-AES256-SHA Curve P-256 DHE
>> 256
>>
g
on 4.13.16 then no blocking OSDs happen (as far as
I have seen until now).
Has anyone repeatedly seen OSDs with blocked requests when running 4.15.17, or
is it just me?
Regards,
Uwe
Am 09.05.2018 um 11:51 schrieb Uwe Sauter:
> Hi,
>
> since kernel 4.15.x was released in pve-nos
Hi,
since kernel 4.15.x was released in pve-nosubscription I have I/O performance
regressions that lead to 100% iowait in VMs, dropped (audit) log records and
instability in general.
All VMs that present this behavior run up-to-date CentOS 7 on Ceph-backed
storage
with kvm64 as CPU.
This
as I can tell.
>
> On Tue, May 08, 2018 at 03:31:52PM +0200, Uwe Sauter wrote:
>> Hi all,
>>
>> I recently discovered that one of the updates since turn of the year
>> introduced options to let the VM know about Meltdown/Spectre
>> mitigation on the host (V
Hi all,
I recently discovered that one of the updates since the turn of the year introduced
options to let the VM know about Meltdown/Spectre
mitigation on the host (VM configuration -> processors -> advanced -> PCID &
SPEC-CTRL).
I'm not sure if I understand the documentation correctly so please
Looks like this was caused by pve-kernel-4.15.15-1-pve. After rebooting into
pve-kernel-4.13.16-2-pve performance is back to normal.
Hopefully the next kernel update will address this.
Regards,
Uwe
Am 02.05.2018 um 22:27 schrieb Uwe Sauter:
> Hi all,
>
> I updated m
Mathieson:
On 3/05/2018 6:27 AM, Uwe Sauter wrote:
I updated my cluster this morning (version info see end of mail) and rebooted all hosts sequentially, live migrating
VMs between hosts. (Six hosts connected via 10GbE, all participating in a Ceph cluster.)
What's your ceph status? It's probably doing
Hi Lindsay,
I updated my cluster this morning (version info see end of mail) and rebooted all hosts sequentially, live migrating
VMs between hosts. (Six hosts connected via 10GbE, all participating in a Ceph cluster.)
What's your ceph status? It's probably doing a massive backfill after the
Hi all,
I updated my cluster this morning (version info see end of mail) and rebooted all hosts sequentially, live migrating VMs
between hosts. (Six hosts connected via 10GbE, all participating in a Ceph cluster.)
Since then I experience hanging storage tasks inside the VMs (e.g. jbd2 on VMs
Hi all,
I discourage you from updating ZFS to version 0.7.7 as it contains a
regression. Version 0.7.8 was released today that reverts the
commit that introduced the regression.
For Infos check: https://github.com/zfsonlinux/zfs/issues/7401
Regards,
Uwe
Ah, syntax. Thanks again.
Have a nice weekend.
Am 23.03.2018 um 15:35 schrieb Thomas Lamprecht:
> Uwe,
>
> On 3/23/18 3:31 PM, Uwe Sauter wrote:
>> a quick follow-up: is it possible to create PVE firewall rules for port
>> ranges? It seems that only a single port is allo
lid format - invalid port '6-60050'
Best,
Uwe
Am 23.03.2018 um 15:15 schrieb Thomas Lamprecht:
> Hi Uwe!
>
> On 3/23/18 3:02 PM, Uwe Sauter wrote:
>> Hi there,
>>
>> I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but
>&g
Thanks, I'll try again.
Am 23.03.2018 um 15:15 schrieb Thomas Lamprecht:
> Hi Uwe!
>
> On 3/23/18 3:02 PM, Uwe Sauter wrote:
>> Hi there,
>>
>> I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but
>> migrations fail with this
Hi there,
I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but
migrations fail with this setting.
# log of failed insecure migration #
2018-03-23 14:58:44 starting migration of VM 101 to node 'px-bravo-cluster'
(169.254.42.49)
2018-03-23 14:58:44 copying disk
As long as your microcode is older than June 2017 there is no way that it contains
mitigations for Meltdown and Spectre, as Intel was
only made aware of the flaws back in June 2017.
The same goes for the BIOS, as the vendors need the microcode from Intel to
include in their updates.
Regarding the
Thanks for clarification!
Am 19.11.2017 um 09:11 schrieb Dietmar Maurer:
>>> Could someone with insight into the backup process explain why kvm is
>>> started?
>>
>> It uses the qemu copy-on-write feature to make sure the state is consistent.
>> You can immediately work with that VM, while qemu
Hi all,
I'm a bit shocked. I wanted to create a "safe" backup where the VM is shut down
and thus all filesystems are in a
consistent state. For that I shut down my VM and then started a backup (backup
mode=stop, compression=lzo) and what do
I see:
INFO: starting new backup job: vzdump 106
True, my bad. But every other PVE-related command I used so far had a
"help" subcommand, so I didn't look into
the man page.
Please take this then as a bug report for the subcommand (or a "-h" help option)
and as a request to update the wiki
article to include the info that a PATH argument can
Hi
running a cluster with PVE 5.1 and Ceph.
pveperf as described in [1] doesn't work anymore. Even as root I get:
root@pxmx-02:~# pveperf help
CPU BOGOMIPS: 89368.48
REGEX/SECOND: 1505926
df: help: No such file or directory
DNS EXT: 13.68 ms
DNS INT: 19.98 ms
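As noted above, pveperf expects a path rather than a subcommand, e.g.:

# benchmark the filesystem holding the default storage directory
pveperf /var/lib/vz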
Hi,
is it still correct to set tunables to "hammer" even with Proxmox 5? This is
mentioned in the wiki [1].
Regards,
Uwe
[1] https://pve.proxmox.com/wiki/Ceph_Server#Set_the_Ceph_OSD_tunables
Hi,
now that 5.1 is released, will there be documentation on how to upgrade from 4.4?
Is the wiki page [1] valid for 5.1?
Did someone already try the upgrade? Any experience is appreciated.
Regards,
Uwe
[1] https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
Hi,
I'm trying to use the virtualization support that Mellanox ConnectX-3 cards
provide. In [1] you can find a document by Mellanox
that describes the necessary steps for KVM.
Currently I'm trying to install Mellanox OFED but the installation fails
because there is no package
M 12636M
>
>
> On Thu, Sep 21, 2017 at 8:30 AM, Uwe Sauter <uwe.sauter...@gmail.com> wrote:
>
> Hi,
>
> thanks, but I forgot to mention that all my VMs have Ceph as backend and
> thus snapshots can'
Hi,
suppose I have several snapshots of a VM:
Snap1
└── Snap2
└── Snap3
└── Snap4
└── Snap5
Is there a way to determine the size of each snapshot?
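One option, assuming the images have the object-map/fast-diff features enabled
(otherwise the numbers are estimated by scanning), is "rbd du", which lists the
usage per snapshot (pool/image are placeholders):

rbd du vdisks/vm-100-disk-1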
Regards,
Uwe
) to the OS.
Having a look at the dmesg output, it seems to be a timing issue: the highest LUN is
recognized first.
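If the ordering inside the guest can't be pinned down, a workaround is to stop
relying on /dev/sdX altogether and reference the disks by something stable,
e.g. (mount point and filesystem are placeholders):

# inside the VM: see which stable names point at which sdX device
ls -l /dev/disk/by-id/ /dev/disk/by-path/
# and mount by filesystem UUID in /etc/fstab instead of /dev/sdb1
UUID=<filesystem-uuid>  /data  ext4  defaults  0  2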
Am 29.08.2017 um 13:30 schrieb Lindsay Mathieson:
> On 29/08/2017 9:17 PM, Uwe Sauter wrote:
>> Is there any way to force scsi1 to /dev/sdb, scsi2 to /dev/sdc, etc. so that
>
Hi,
I'm currently facing the following problem:
VM is defined with several disks:
scsi0 -> ceph:vm-201-disk1,discard=on, size=16G
scsi1 -> ceph:vm-201-disk2,discard=on, size=16G
scsi2 -> ceph:vm-201-disk3,discard=on, size=4G
scsi3 -> ceph:vm-201-disk4,discard=on, size=4G
scsi4 ->
on bond with IP
auto bond0.120
iface bond0.120 inet static
    address 10.100.100.8
    netmask 255.255.255.0
    mtu 9000
# interface for vlan 130 on bond without IP (just for VMs)
auto bond0.130
iface bond0.130 inet manual
-
Am 23.08.2017 um 07:27 schrieb Uwe Sauter
An example out of my head:
/etc/network/interfaces
-
# management interface
auto eth0
iface eth0 inet static
    address 10.100.100.8
    netmask 255.255.255.0
    gateway 10.100.100.254
# 1st interface in bond
auto eth1
iface eth1 inet manual
    mtu 9000
# 2nd
https://pve.proxmox.com/wiki/Ceph_Server#Ceph_on_Proxmox_VE_5.0
"Note: the current Ceph Luminous 12.1.x is the release candidate, for
production ready Ceph Cluster packages please wait for
version 12.2.x "
Am 17.08.2017 um 16:58 schrieb Gilberto Nunes:
> Hi guys
>
> Ceph Luminous is
t;
>
> 2017-08-14 9:51 GMT-03:00 Uwe Sauter <uwe.sauter...@gmail.com>:
>
> Then the question is if
>
> a) you'd want to integrate those Ubuntu servers into an existing Ceph
> cluster (managed by Proxmox) or
>
&
-14 9:30 GMT-03:00 Uwe Sauter <uwe.sauter...@gmail.com>:
>
> Are there any reasons on your side to use Ubuntu? If you want to stay
> compatible you could also install Proxmox including Ceph but
> not use those hosts for virtu
Are there any reasons on your side to use Ubuntu? If you want to stay
compatible you could also install Proxmox including Ceph but
not use those hosts for virtualization…
Am 14.08.2017 um 14:07 schrieb Gilberto Nunes:
> Hi
>
> Regard Ceph, can I use 3 Ubuntu Server 16 Xenial to build a Ceph
If it is a multicast problem and your cluster is not that big (~10 nodes) you
could switch to using "udpu" in corosync.conf
totem {
    […]
    config_version: +=1   # increment with every change you do
    transport: udpu
}
Am 11.08.2017 um 13:48 schrieb Alexandre DERUMIER:
> seem to be a
Ah, thanks. (Sorry for the late reply, Gmail put your answer into the spam
folder.)
Am 20.06.2017 um 19:01 schrieb Michael Rasmussen:
> The former is for HA VMs, the latter for non-HA VMs
>
> On June 20, 2017 6:19:36 PM GMT+02:00, Uwe Sauter <http://uwe.sauter.de>@g
-1~bpo80+1
Am 07.07.2017 um 17:38 schrieb Nicola Ferrari (#554252):
> On 20/06/2017 18:19, Uwe Sauter wrote:
>>
>> Can someone explain under which circumstances this output is displayed
>> instead of just the short message that migration
>> was started?
>
> I
Thomas,
>>> An idea is to allow the configuration of the behavior and add two
>>> additional behaviors,
>>> i.e. migrate away and relocate away.
>> What's the difference between migration and relocation? Temporary vs.
>> permanent?
>
> Migration does an online migration if possible (=on VMs)
Hi Thomas,
thank you for your insight.
>> 1) I was wondering how a PVE (4.4) cluster will behave when one of the nodes
>> is restarted / shutdown either via WebGUI or via
>> commandline. Will hosted, HA-managed VMs be migrated to other hosts before
>> shutting down or will they be stopped
Hi all,
1) I was wondering how a PVE (4.4) cluster will behave when one of the nodes is
restarted / shutdown either via WebGUI or via
commandline. Will hosted, HA-managed VMs be migrated to other hosts before
shutting down or will they be stopped (and restarted on
another host once HA recognizes
Hi all,
usually when I update my PVE cluster I do it in a rolling fashion:
1) empty one node from running VMs
2) update & reboot that node
3) go to next node
4) migrate all running VMs to already updated node
5) go to 2 until no more nodes need update
For step 1 (or 4) I usually do:
# qm list
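One way to do that (the target node name is a placeholder):

# migrate every running VM on this node, online
for id in $(qm list | awk '/running/ {print $1}'); do
    qm migrate "$id" targetnode --online
done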
Hi Fabian,
>> I was following https://pve.proxmox.com/wiki/Storage:_NFS , quote: "To avoid
>> DNS lookup delays, it is usually preferable to use an
>> IP address instead of a DNS name". But yes, the DNS in our environment is
>> configured to allow reverse lookups.
>
> which - AFAIK - is still
Am 22.05.2017 um 15:40 schrieb Uwe Sauter:
>
>>
>> I discovered a different issue with this definition: If I go to Datacenter
>> -> node -> storage aurel -> content I only get "mount
>> error: mount.nfs: /mnt/pve/aurel is busy or already mounted (500)&q
>
> I discovered a different issue with this definition: If I go to Datacenter ->
> node -> storage aurel -> content I only get "mount
> error: mount.nfs: /mnt/pve/aurel is busy or already mounted (500)".
>
> The share is mounted again with IP address though I didn't change the config
> after
>>
>> the culprit is likely that your storage.cfg contains the IP, but your
>> /proc/mounts contains the hostname (with a reverse lookup inbetween?).
>>
>
> I was following https://pve.proxmox.com/wiki/Storage:_NFS , quote: "To avoid
> DNS lookup delays, it is usually preferable to use an
> IP
>>> perl -e 'use strict; use warnings; use PVE::ProcFSTools; use Data::Dumper;
>>> print Dumper(PVE::ProcFSTools::parse_proc_mounts());'
>>>
>>
>> $VAR1 = [
>>
>> [
>> ':/backup/proxmox-infra',
>> '/mnt/pve/aurel',
>> 'nfs',
>>
>>
Am 19.05.2017 um 11:53 schrieb Fabian Grünbichler:
> On Fri, May 19, 2017 at 11:26:35AM +0200, Uwe Sauter wrote:
>> Hi Fabian,
>>
>> thanks for looking into this.
>>
>> As I already mentioned yesterday my NFS setup tries to use TCP as much as
>> possi
Hi Fabian,
thanks for looking into this.
As I already mentioned yesterday, my NFS setup tries to use TCP as much as
possible, so the only UDP port used / allowed in the NFS
server's firewall is udp/111 for the portmapper (to allow showmount to work).
>> Issue 1:
>> Backups failed tonight with "Error:
Hi all,
after having succeeded in getting an almost TCP-based NFS share mounted (see
yesterday's thread) I'm now struggling with the backup
process itself.
Definition of NFS share in /etc/pve/storage.cfg is:
nfs: aurel
    export /backup/proxmox-infra
    path /mnt/pve/aurel
l by running
> unconfigured.sh.
>
> So basically, if I boot to the shell, how can I start the install from the
> contents of the CD/ISO?
>
>
>
> On 18 May 2017 at 19:04, Uwe Sauter <uwe.sauter...@gmail.com> wrote:
>
>
Don't know what your situation is but there is a wiki page [1] that describes
the installation of Proxmox on top of an
existing Debian.
[1] https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
Am 18.05.2017 um 19:55 schrieb Steve:
> In version 3.2 ISO there was this script to start
Am 18.05.2017 um 15:04 schrieb Emmanuel Kasper:
>
>
> On 05/18/2017 02:56 PM, Uwe Sauter wrote:
>> # mount -t nfs -o vers=4,rw,sync :$SHARE /mnt
>> mount.nfs: mounting aurel:/proxmox-infra failed, reason given by server: No
>> such file or directory
>
> aurel:
but again, due to showmount not using TCP, PVE will not mount it automatically.
Regards,
Uwe
Am 18.05.2017 um 11:40 schrieb Uwe Sauter:
> Hi,
>
> as my Proxmox hosts don't have enough local storage I wanted to do backups
> into the "network". One option
Hi,
as my Proxmox hosts don't have enough local storage I wanted to do backups into
the "network". One option that came into mind was
using the existing Ceph installation to do backups. What's currently missing
for that (as far as I can tell) is Proxmox support
for a Ceph-backed filesystem
Hi,
I just noticed an (intentional?) inconsistency between the WebUI's Ceph OSD
page, the tasks view at the bottom, and
the CLI:
If you go to Datacenter -> node -> Ceph -> OSD and select one of the OSDs you
can "remove" it with a button in the upper
right corner. If you do so the task is
sistet (compare lines 4296 and 33465).
This is the reason why I have 2 sub_filters for basically the same replacement.
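For illustration only (the concrete strings aren't shown here): when the same
absolute path appears in the JS with different quoting, one logical replacement
ends up as two rules, roughly like

    sub_filter_once off;
    sub_filter '"/pve2/' '"/pve-node/pve2/';
    sub_filter "'/pve2/" "'/pve-node/pve2/";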
Am 09.05.2017 um 11:01 schrieb Thomas Lamprecht:
> Hi,
>
> On 05/05/2017 06:18 PM, Uwe Sauter wrote:
>> Hi,
>>
>> I've seen the wiki page [1] that ex
Hi Thomas,
thank you for the effort of explaining.
>
> Hmm, there are some problems as we mostly set absolute paths on resources
> (images, JS and CSS files)
> so the loading fails...
> I.e., pve does not know that it is accessed from
> https://example.com/pve-node/ and tries to load the
Hi,
I've seen the wiki page [1] that explains how to operate a PVE host behind a
reverse proxy.
I'm currently in the situation that I have several services already behind a
rev proxy that are accessible with different
webroots, e.g.
https://example.com/dashboard
https://example.com/owncloud
schrieb Uwe Sauter:
Check that there are no firewalls blocking communication. I had a problem like
this a couple of weeks ago and all I
needed was to properly configure the settings for pveproxy. (There are other
firewall settings, too.)
Am 14.03.2017 um 20:15 schrieb Kevin Lemonnier:
Looks
Check that there are no firewalls blocking communication. I had a problem like this a couple of weeks ago and all I
needed was to properly configure the settings for pveproxy. (There are other firewall settings, too.)
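Those settings live in /etc/default/pveproxy; a minimal sketch (the networks
are placeholders):

ALLOW_FROM="10.0.0.0/8,192.168.16.0/24"
DENY_FROM="all"
POLICY="allow"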
Am 14.03.2017 um 20:15 schrieb Kevin Lemonnier:
Looks like they can't find
Hi,
I was installing the latest updates to PVE 4.4 yesterday and it got stuck after
the configuration step for Ceph.
I was able to trace this to a process "systemd-tty-ask-password-agent --watch" while systemd was restarting ceph.target.
It seems that systemd confused its internal state
to either the SSD or the HDD, as they are run as
> independent storage systems.
>
> https://eXtremeSHOK.com
>
> On 03/03/2017 12:05 AM, Uwe Sauter wrote:
>> Yes, you can add arbitrary sized disks
Yes, you can add arbitrarily sized disks to Ceph. Usually the disk size is used
as the OSD's weight factor which influences the placement of data.
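The weights can be inspected and, if needed, overridden per OSD (id and value
are placeholders):

ceph osd df tree                      # shows the CRUSH weight per OSD
ceph osd crush reweight osd.5 1.819   # e.g. a 2 TB disk's size in TiB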
Am 2. März 2017 22:49:23 MEZ schrieb Daniel :
>Hi there,
>
>I have been playing a bit with Ceph for some weeks and I just wanted to