Hi,
On 5/22/20 4:03 PM, Frank Thommen wrote:
> Dear all,
>
> having worked with oVirt in the past there are some features that I am really
> missing in PVE in my daily work:
>
> a) a tabular overview over all virtual machines. This should/might also
> include some performance data and the
On 5/15/20 9:00 AM, Uwe Sauter wrote:
> Chris,
>
> thanks for taking a look.
>
>
> Am 14.05.20 um 23:13 schrieb Chris Hofstaedtler | Deduktiva:
>> * Uwe Sauter [200514 22:23]:
>> [...]
>>> More details:
>>>
>>> I followed these two instructions:
>>>
>>>
Hi,
On 5/12/20 8:33 PM, Stephan Leemburg wrote:
> One question though. What is meant by:
>
> * Improve default settings to support hundreds to thousands* of
> parallel running Containers per node (* thousands only with simple
> distributions like Alpine Linux)
>
> Is that setting feature
Hi,
On 2/17/20 2:21 PM, Demetri A. Mkobaranov wrote:
> 1) From the Proxmox manual it seems like a cluster, without HA, offers just
> the ability to migrate a guest from a node to another one. Is this correct?
That plus some other things:
* you can manage all nodes by connecting to any one of them
Hi,
On 2/17/20 8:30 AM, Amin Vakil wrote:
> This link is broken and gives 404 not found.
>
> http://ceph.com/papers/weil-thesis.pdf
>
> I think this is the new and working link:
>
> https://ceph.com/wp-content/uploads/2016/08/weil-thesis.pdf
Yes, that seems right, thanks for telling us. Can you
Hi,
On 2/11/20 12:25 PM, Dmytro O. Redchuk wrote:
> Hi masters,
>
> please is it possible to attach backup hook scripts on per-vm basics,
> via GUI or CLI?
>
Currently you can only specify a hook script for a whole backup job,
either by uncommenting and setting the node-wide "script:
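For illustration, a minimal sketch of both variants (the file path and script
name are placeholders, not taken from the original mail):

In /etc/vzdump.conf (node-wide default):
  script: /usr/local/bin/vzdump-hook.sh

Or per backup run on the command line:
  # vzdump 100 --script /usr/local/bin/vzdump-hook.sh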
On 1/15/20 3:58 PM, Martin Holub via pve-user wrote:
> Is there any way to add an Alias instead the reported "hostname" for
> InfluxDB. My problem is, i have severall Hosts with hostname "s1" but
> FQDN like s1.$location.domain.tld. As the Influx exported seems to
> report only the hostname, but
On 12/6/19 2:21 PM, Lindsay Mathieson wrote:
> Solved it - there were a lot off ssl errors in syslog, needed to run:
>
> * pvecm updatecerts -f
>
>
> Dunno how it became a problem as I've never fiddled with custom certs
>
maybe you got hit by the stricter security policy on Debian 10:
avoiding a cluster rebuild as it was necessary for the PVE 3.4 to 4.x upgrade.
Hopefully the "to 7.x" will be again a smooth-as-silk upgrade :)
cheers,
Thomas
> On Fri, 6 Dec 2019 at 15:51, Thomas Lamprecht
> wrote:
>
>> On 12/6/19 1:31 AM, Lindsay Mathieson wrote:
On 12/6/19 1:31 AM, Lindsay Mathieson wrote:
> As per the subject, I have the error : "FAIL: Corosync transport explicitly
> set to 'udpu' instead of implicit default!"
>
>
> Can I ignore that for the upgrade? I had constant problems with multicast,
> udpu is quite reliable.
>
FAILures from
On 12/5/19 8:47 AM, Uwe Sauter wrote:
> Am 05.12.19 um 07:58 schrieb Thomas Lamprecht:
>> On 12/4/19 11:17 PM, Uwe Sauter wrote:
>>> When trying to migrate VMs to a host that was already rebooted I get the
>>> following in the task viewer window in the web ui:
>>
Hi,
On 12/4/19 11:17 PM, Uwe Sauter wrote:
> Hi,
>
> upgraded a cluster of three servers to 6.1. Currently I'm in the process of
> rebooting them one after the other.
>
Upgrade from 5.4 to 6.1, or from 6.0 to 6.1?
> When trying to migrate VMs to a host that was already rebooted I get the
>
Hi,
On 12/4/19 3:36 PM, Olivier Benghozi wrote:
> I suggest you should just leave appart the proxmox iso installer. Had only
> problems with it.
We did tens of installation tests just this week on many different HW
combinations. Not a single one where it didn't work here.
We'd be happy if
On 12/4/19 3:33 PM, Roland @web.de wrote:
> thanks for making proxmox!
>
> unfortunatly i cannot install it on fujitsu rx300 s6, mouse/keyboard
> won't work in installer screen anymore. 6.0 and before works without
> problems.
We try hard to fix Installer issues, so more information would be
Hey,
On 12/2/19 11:18 PM, Adrian Petrescu wrote:
> Hey all, I have a pretty intriguing issue.
>
> I'm spinning up VMs through a Terraform
> provider(https://github.com/Telmate/terraform-provider-proxmox
> if it matters), which goes through the /api2/json endpoints. They are
> all full clones of
Hi,
On 11/25/19 6:14 AM, k...@zimmer.net wrote:
> Hi,
> i updated my Proxmox VE Host (via 'apt-get update; apt-get upgrade').
That's the wrong way to upgrade a Proxmox VE host[0] and is probably the
cause of your problems. Use
apt-get update
apt-get dist-upgrade
or the more modern interface:
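(The preview cuts off here; assuming the "more modern interface" refers to the
apt front end, the equivalent sequence would be:)

  # apt update
  # apt full-upgrade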
Hi,
On 11/23/19 9:29 AM, arjenvanweel...@gmail.com wrote:
> Hi,
>
> Yesterday evening, I was surprised by the same PCI passthrough issue as
> described in
> https://forum.proxmox.com/threads/pci-passthrough-not-working-after-update.60580/
> . A VM failed to start with the error "no pci device
On 11/8/19 4:22 PM, Mark Adams wrote:
> I didn't configure it to do
> this myself, so is this an automatic feature? Everything I have read says
> it should be configured manually.
Maybe my previous mail did not answer this point well.
You need to configure *hardware-based* Watchdogs
Hi,
On 11/8/19 4:35 PM, Daniel Berteaud wrote:
> - Le 8 Nov 19, à 16:22, Mark Adams m...@openvs.co.uk a écrit :
>> Hi All,
>>
>> This cluster is on 5.4-11.
>>
>> This is most probably a hardware issue either with ups or server psus, but
>> wanted to check if there is any default watchdog or
Hi,
On 10/15/19 4:43 PM, Adam Weremczuk wrote:
> Hi all,
>
> It started failing following Debian 9->10 and PVE 5->6 upgrade:
>
> pveam update
> update failed - see /var/log/pveam.log for details
>
> "apt-key list" wasn't showing it so I've added it:
>
> wget
>
Hi,
On 10/15/19 11:58 AM, Adam Weremczuk wrote:
> Hello,
>
> I'm running PVE 5.4-13 (Debian 9.11 based) using free no-subscription repos.
>
> Recently I've deployed a few Debian 10.0 containers which I later upgraded to
> 10.1.
>
> I'm having constant issues with these CTs such as delayed
Hi,
On 10/10/19 5:47 PM, JR Richardson wrote:
> Hi All,
>
> I'm testing ceph in the lab. I constructed a 3 node proxmox cluster
> with latest 5.4-13 PVE all updates done yesterday and used the
> tutorials to create ceph cluster, added monitors on each node, added 9
> OSDs, 3 disks per ceph
Hi,
On 10/1/19 4:08 AM, Roberto Alvarado wrote:
> Hi Folks,
>
> Do you know what is the best way to upgrade the smartmon tools package to
> version 7?
>
just upgrade to Proxmox VE 6.x, it has smartmontools 7:
# apt show smartmontools
Package: smartmontools
Version: 7.0-pve2
For now we have
Hi,
On 9/27/19 10:30 AM, Mark Adams wrote:
> Hi All,
>
> I'm trying out one of these new processors, and it looks like I need at
> least 5.2 kernel to get some support, preferably 5.3.
>
We're on to a 5.3 based kernel; it may need a bit until a build gets
released for testing though.
But the
Hi,
On 9/25/19 3:46 PM, Mark Schouten wrote:
> Hi,
>
> Just noticed that this is not a PVE 6-change. It's also changed in 5.4-3.
> We're using this actively, which makes me wonder what will happen if we
> stop/start a VM using disks on CephFS...
huh, AFAICT we never allowed that, the
On 03.09.19 12:39, Musee Ullah via pve-user wrote:
> On 2019/09/03 3:14, Uwe Sauter wrote:
>> I'd suggest to do:
>> sed -i -e 's/^www:/www: /' /etc/aliases
>>
>> so that lines that were changed by a user are also caught.
>
> just pointing out that consecutive package updates'll continuously add
>
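A small addition not taken from this thread: after changing /etc/aliases (by
hand or via the quoted sed call), the alias database normally has to be
rebuilt, e.g.:

  # newaliases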
Hi Uwe,
On 03.09.19 09:18, Uwe Sauter wrote:
> Hi all,
>
> on a freshly installed PVE 6 my /etc/aliases looks like:
>
> # cat /etc/aliases
> postmaster: root
> nobody: root
> hostmaster: root
> webmaster: root
> www:root
>
> and I get this output from mailq
>
> # mailq
> -Queue ID- --Size--
Hi,
On 8/13/19 at 10:37 AM, lord_Niedzwiedz wrote:
>
> I run a "Stop" backup on proxmox, it shuts down the machine.
> Starts making a copy.
> But it immediately turns it on ("restarts only", doesn't stop for the
> duration of the copy !! - why ??).
> "resuming VM again after 21 seconds" ?? !!
On 8/6/19 at 3:57 PM, Hervé Ballans wrote:
> Our OSDs are currently in 'filestore' backend. Does Nautilus handle this
> backend or do we have to migrate OSDs in 'Bluestore' ?
Nautilus can still handle Filestore.
But we do not support adding new Filestore OSDs through our tooling
any more (you
Hi,
On 7/16/19 5:16 PM, Adam Weremczuk wrote:
> I've just deployed a test Debian 10.0 container on PVE 5.4.6 from the default
> template.
>
> It installed fine, network is working ok across the LAN and I can ssh to it.
>
> Regardless whether I disable IPv6 or not
On 7/16/19 5:37 PM, Alain péan wrote:
> I shall indeed test carefully on a test cluster. But the problem is that I
> have one still in filestore, and the other in bluestore, so perhaps, I shall
> have to migrate all to bluestore in a first step...
You can still use Filestore backed Clusters,
On 7/17/19 12:47 PM, mj wrote:
> Question: is it possible to add some extra ceph OSD storage nodes, without
> proxmox virtualisation, and thus without the need to purchase additional
> proxmox licenses?
>
> Anyone doing that?
>
> We are wondering for example if the extra mon nodes & OSDs would
On 7/16/19 9:28 PM, Ricardo Correa wrote:
> systemd[1]: Starting The Proxmox VE cluster filesystem...
> systemd[1]: pve-cluster.service: Start operation timed out. Terminating.
> pmxcfs[13267]: [main] crit: read error: Interrupted system call
That's strange, that an error happening initially at
On 7/16/19 11:26 PM, Chris Hofstaedtler | Deduktiva wrote:
> * Fabian Grünbichler [190716 21:55]:
> [..]
>>>
>>> dpkg: error processing package corosync (--configure):
>>> dependency problems - leaving unconfigured
>>> Processing triggers for libc-bin (2.24-11+deb9u4) ...
>>> Processing triggers
On 7/16/19 4:57 PM, Thomas Lamprecht wrote:
> On 7/16/19 4:38 PM, Alain péan wrote:
>> *ceph-disk has been removed*: After upgrading it is not possible to create
>> new OSDs without upgrading to Ceph Nautilus.
>>
>> So it willbe mandatory to upgrade to Ceph Nautilu
On 7/16/19 4:38 PM, Alain péan wrote:
> *ceph-disk has been removed*: After upgrading it is not possible to create
> new OSDs without upgrading to Ceph Nautilus.
>
> So it willbe mandatory to upgrade to Ceph Nautilus, in addition to the other
> changes ?
yes, if you upgrade to 6.x you will
Hi,
On 7/8/19 6:04 PM, bsd--- via pve-user wrote:
> Hello,
>
> There is a JS in Proxmox VE v.5.4.6 which reloads the page and forces all
> menu item at the top every 5".
A full page reload? We only do that on cluster creation, as there
the website's TLS certificate changed, and thus it's
On 7/8/19 at 12:13 PM, Fabian Grünbichler wrote:
> On Mon, Jul 08, 2019 at 09:10:48AM +0200, Thomas Lamprecht wrote:
>> Am 7/8/19 um 8:05 AM schrieb Fabian Grünbichler:
>>> On Mon, Jul 08, 2019 at 02:16:34AM +0200, Chris Hofstaedtler | Deduktiva
>>> wrote:
>>>
On 7/8/19 at 9:56 AM, arjenvanweel...@gmail.com wrote:
> Is just installing haveged sufficient? Can the Proxmox-team decide to
> add haveged to it's dependencies? Or is more discussion required?
It would be; the service is then enabled and running by default.
For me it'd be OK to add as a
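For anyone following along, a minimal sketch of the opt-in route discussed
above (plain Debian commands, assumed rather than quoted from the thread):

  # apt-get install haveged
  # systemctl status haveged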
On 7/8/19 at 9:34 AM, arjenvanweel...@gmail.com wrote:
> Having this (as an option) in the GUI would be very nice,
> and 'apt-get install haveged' is quick and easy.
Opt-in is surely no problem; my concerns would rather be for
the case where we just add this for VMs with Linux as ostype,
On 7/8/19 at 8:05 AM, Fabian Grünbichler wrote:
> On Mon, Jul 08, 2019 at 02:16:34AM +0200, Chris Hofstaedtler | Deduktiva
> wrote:
>> Hello,
>>
>> while doing some test upgrades I ran into the buster RNG problem [1],
>> where the newer kernel and systemd use a lot more randomness during
>>
Hi,
On 7/5/19 9:32 AM, mj wrote:
> Looks like a great new release!
>
> Does corosync 3.0 mean that the notes on
> [https://pve.proxmox.com/wiki/Multicast_notes] are no longer relevant?
We will update the documentation and wiki articles regarding this in
the following days, until the final PVE
On 7/4/19 12:35 PM, Marco Gaiarin wrote:
> We had a major power outgage here, and our cluster have some trouble on
> restart. The worster was:
>
> Jul 3 19:58:40 pvecn1 corosync[3443]: [MAIN ] Corosync Cluster Engine
> ('2.4.4-dirty'): started and ready to provide service.
> Jul 3 19:58:40
On 6/25/19 9:44 AM, Thomas Lamprecht wrote:
> And as also said (see quote below), for more specific hinters I need the raw
> logs, unmerged and as untouched as possible.
may just be that I did not see the mail in my inbox, so it looks like
you already sent it to me, sorry about m
On 6/25/19 9:10 AM, Mark Schouten wrote:
> On Thu, Jun 13, 2019 at 12:34:28PM +0200, Thomas Lamprecht wrote:
>>> 2: ha-manager should not be able to start the VM's when they are running
>>> elsewhere
>>
>> This can only happen if fencing fails, and that fencing wor
Hi,
On 6/13/19 10:08 PM, JR Richardson wrote:
>> On 6/13/19 3:29 PM, Horace wrote:
>>> Should this stuff be in 'help' documentation ?
>>
>> The thing with the resolved ringX_ addresses?
>>
>> Hmm, it would not hurt if something regarding this is written there.
>> But it isn't as black and white,
own into that too..
Stefan (CCd), would you be willing to take a look at this and expand the
"Cluster Network" section from the pvecm chapter in pve-docs a bit
regarding this? That'd be great.
>
> On 6/13/19 12:29 PM, Thomas Lamprecht wrote:
>> On 6/13/19 1:30 PM, Mark Schout
On 6/13/19 1:30 PM, Mark Schouten wrote:
> On Thu, Jun 13, 2019 at 12:34:28PM +0200, Thomas Lamprecht wrote:
>> Hi,
>> Do your ringX_addr in corosync.conf use the hostnames or the resolved
>> addresses? As with nodes added on newer PVE (at least 5.1, IIRC) we try
>> to r
Hi,
On 6/13/19 11:47 AM, Mark Schouten wrote:
> Let me start off with saying that I am not fingerpointing at anyone,
> merely looking for how to prevent sh*t from happening again!
>
> Last month I emailed about issues with pve-firewall. I was told that
> there were fixes in the newest packages,
Hi,
On 5/23/19 10:43 AM, Thomas Naumann wrote:
> there is an extra point "improved SDN support" under roadmap in
> official proxmox-wiki. Who can give a hint what this means in detail?
>
Maybe you did not see it, but Alexandre already answered the same mail
on pve-devel[0].
[0]:
On 5/17/19 4:07 PM, Igor Podlesny wrote:
> On Fri, 17 May 2019 at 17:59, Saint Michael wrote:
>>
>> Maybe you should share the patch here so we benefit from it.
>
> Thomas said everything is kept in public git repository, what else are
> you looking to benefit from? :)
>
The original poster of
On 5/17/19 9:53 AM, Christian Balzer wrote:
> On Fri, 17 May 2019 08:05:21 +0200 Thomas Lamprecht wrote:
>> On 5/17/19 4:27 AM, Christian Balzer wrote:
>>> is there anything that's stopping the current PVE to work with an
>>> externally configured Ceph Nautilus cluste
Hi,
On 5/17/19 2:57 AM, Mike O'Connor wrote:
> Hi Guys
>
> Where can I download the source code for the PVE kernels with there
> patches (including old releases) ? I want to apply a patch to fix an issue.
>
All our sources are available at: https://git.proxmox.com/
For cloning the kernel do:
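(The command itself is cut off in this listing; a hedged completion, based on
the repository path referenced elsewhere in this archive:)

  # git clone git://git.proxmox.com/git/pve-kernel.git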
Hi,
On 5/17/19 4:27 AM, Christian Balzer wrote:
>
> Hello,
>
> is there anything that's stopping the current PVE to work with an
> externally configured Ceph Nautilus cluster?
In short: rather not, but you need to try it out to be sure.
You probably cannot use the kernel RBD as its support may
Hi,
On 5/15/19 9:34 AM, Anton Blau wrote:
> Hello,
>
> for better clarity, I have assigned 4-digit IDs for some VMs (eg 1250).
>
> In the menu Data Center -> Backup -> Add these VMs do no longer appear.
>
> Is this a bug or did I do something wrong?
>
Same as Dominic, I cannot reproduce this
On 5/9/19 10:09 AM, Mark Schouten wrote:
> On Thu, May 09, 2019 at 07:53:50AM +0200, Alexandre DERUMIER wrote:
>> But to really be sure to not have the problem anymore :
>>
>> add in /etc/sysctl.conf
>>
>> net.netfilter.nf_conntrack_tcp_be_liberal = 1
>
> This is very useful info. I'll create a
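Not part of the quoted mails, but for completeness: the sysctl.conf entry
quoted above takes effect after a reload or reboot; it can also be set at
runtime:

  # sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1
  # sysctl -p    (re-reads /etc/sysctl.conf)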
On 5/8/19 10:15 AM, Igor Podlesny wrote:
> On Wed, 8 May 2019 at 15:02, Thomas Lamprecht wrote:
> [...]
>>> -- I didn't open no ticket, neither did I __complain__. I just let
>>> others know there's a pitfall, meanwhile thoroughly describing what it
>>> was. That'
On 5/8/19 9:37 AM, Igor Podlesny wrote:
> On Wed, 8 May 2019 at 14:14, Thomas Lamprecht wrote:
>> On 5/8/19 8:57 AM, Igor Podlesny wrote:
>>> On Wed, 8 May 2019 at 13:11, Thomas Lamprecht
>>> wrote:
> [...]
>>> In short: pain, suffering and all That.
On 5/3/19 12:57 PM, Igor Podlesny wrote:
> On Fri, 3 May 2019 at 14:44, Iztok Gregori wrote:
>>
>> Hi to all!
>>
>> So what happens when one of the configured servers fails, Proxmox
>> recognize the failure and mounts the secondary? If this so the running
>
> Proxmox tells you go suffer, that's
On 4/26/19 at 4:56 PM, Roland @web.de wrote:
>> will run at the lowest common denominator. In other words, if you have 3
>> hosts each with CPU frequencies being 2.1 GHz, 2.3 GHz, and 2.5 GHz
>> respectively, the entire cluster will run at a 2.1 GHz level.
>
> huh, really? never heard of that,
On 4/26/19 at 2:05 PM, Craig Jones wrote:
> Hello,
>
> To my understanding, in the vSphere world, a cluster with hosts of mixed
> CPU frequencies and generations (let's assume consistent manufacturer)
> will run at the lowest common denominator. In other words, if you have 3
> hosts each with
On 4/24/19 at 8:13 PM, David Lawley wrote:
> I did that as part of the migration
>
and the guest agent works? i.e., things like
# qm guest cmd VMID get-osinfo
also the guest config could be interesting:
# qm config VMID
> Serial driver? Don't have have any odd devices showing up in the
user, ...?)
cheers,
Thomas
> But since the GUI just uses the API, I guess that is more difficult than
> you'd expect. :/
>
> --
>
> Mark Schouten
>
> Tuxis, Ede, https://www.tuxis.nl
>
> T: +31 318 200208
>
>
>
>
> - Original Message -
On 4/24/19 at 12:19 PM, Mark Schouten wrote:
>
> Hi,
>
> Sorry, that doesn't answer my question. I want users that have 2FA to be able
> to use the GUI, and I want to be able to disallow the GUI for certain users.
> I know that the GUI just uses the API as a backend.
That's not possible,
Hi,
On 4/12/19 4:41 PM, Mark Schouten wrote:
> Hi,
>
> I'm in the process of upgrading some older 4.x clusters with Ceph to current
> versions. All goes well, but we hit a bug that is understandable, but
> undocumented. To prevent others from hitting it, I think it would be wise to
> document
On 4/11/19 2:47 PM, Uwe Sauter wrote:
> Thanks for all your effort. Two questions though:
>
>
> From the release notes:
>
> HA improvements and added flexibility
>
> It is now possible to set a datacenter wide HA policy which can change
> the way guests are treated upon a Node shutdown or
On 4/6/19 8:39 AM, Igor Podlesny wrote:
> -- Beyond of the obvious "well, it's for redundancy". That's obvious..
> but "What subsystems and under what circumstances are gonna use it?"
> -- isn't at all.
>
> I have strong suspicion that qemu-kvm isn't capable of fail-over
> switching in case its
On 3/22/19 3:17 PM, Eneko Lacunza wrote:
> Hi Alwin,
>
> El 22/3/19 a las 15:04, Alwin Antreich escribió:
>> On a point release, a ISO is generated and the release info is needed
>> for that.
>>
>> The volume of
On 3/26/19 8:09 AM, lord_Niedzwiedz wrote:
> root@ave:~# apt upgrade
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Calculating upgrade... Done
> The following packages have been kept back:
> zfs-initramfs zfsutils-linux
> 0 upgraded, 0 newly
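(The actual reply is not included in this preview. As general Debian
background, not quoted from the answer: packages "kept back" by a plain apt
upgrade are normally pulled in by a dist-upgrade:)

  # apt update
  # apt dist-upgrade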
On 3/20/19 3:28 PM, DL Ford wrote:
> I am not sure if this will effect everyone or if something really strange
> just happened to my system, but after upgrading to PVE 4.15.18-35, all of my
> network name assignments have gone back to the old style (e.g. in my case
> enp4s0 is now eth0, enp6s0
On 3/7/19 7:42 PM, David Lawley wrote:
> sorry brain faart
>
> I'm on 3,x, more work.
>
> I think the last time I researched this I just decided it was time for a
> refresh anyway
Surely the easiest and cleanest way; then you can also go straight to a
fresh PVE 5.X installation.
>
> On
Hi,
On 3/1/19 11:09 AM, Patrick Westenberg wrote:
> Hi everyone,
>
> I configured PAM authentication to use yubico but I can't login anymore.
>
> Mar 1 11:02:23 pve01 pvedaemon[4917]: authentication failure;
> rhost=172.31.0.1 user=root@pam msg=Invalid response from server: 410 Gone
>
> Is it
Hi,
On 2/25/19 6:22 PM, Frederic Van Espen wrote:
> Hi,
>
> We're designing a new datacenter network where we will run proxmox nodes on
> about 30 servers. Of course, shared storage is a part of the design.
>
> What kind of shared storage would anyone recommend based on their
> experience and
On 2/25/19 6:03 PM, José Manuel Giner wrote:
> According to this link, Proxmox VE 5 is affected.
>
> https://www.cloudlinux.com/cloudlinux-os-blog/entry/major-9-8-vulnerability-affects-multiple-linux-kernels-cve-2019-8912-af-alg-release
>
> We have a patch?
>
ah yeah, the hyped CVE ^^ but yes,
On 1/23/19 10:27 AM, Fabian Grünbichler wrote:
> The APT package manager used by Proxmox VE and Proxmox Mail Gateway was
> recently discovered to be affected by CVE-2019-3462, allowing a
> Man-In-The-Middle or malicious mirror server to execute arbitrary code
> with root privileges when affected
Hi,
On 1/23/19 7:03 PM, Gilberto Nunes wrote:
> I am facing some trouble with 2 LXC that cannot access either kill it.
> Already try lxc-stop -k -n but no effect.
> Any advice will be welcome...
does it have processes in the "D" (uninterruptible) state?
Probably because some network mount where
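As an illustrative aside (not from the original reply), one quick way to list
such D-state processes:

  # ps -eo pid,stat,comm,wchan | awk 'NR==1 || $2 ~ /D/'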
On 1/7/19 7:39 PM, Denis Morejon wrote:
> Could you give me an example please?
Dietmar already did; research split brain.
>> In practice, I know a lot of people that are afraid of building a cluster
>> because of the lost of quorum, an have a plain html web page with the url of
>> each node
On 12/4/18 10:27 AM, lord_Niedzwiedz wrote:
> root@hayne:~# systemctl start pve-container@108
> Job for pve-container@108.service failed because the control process exited
> with error code.
> See "systemctl status pve-container@108.service" and "journalctl -xe" for
> details.
>
> root@hayne:~#
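(The reply itself is truncated out of this preview. As a generic hint, an
assumption rather than a quote from the mail: container start failures can
additionally be debugged by starting the container in the foreground with
debug logging:)

  # lxc-start -n 108 -F -l DEBUG -o /tmp/lxc-108.log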
On 11/22/18 7:29 PM, Frank Thommen wrote:
> Please excuse, if this is too basic, but after reading
> https://pve.proxmox.com/wiki/Cluster_Manager I wondered, if the
> cluster/corosync network could be built by directly connected network
> interfaces. I.e not like this:
>
> +---+
> |
On 11/8/18 1:43 PM, Alwin Antreich wrote:
> On Wed, Nov 07, 2018 at 09:01:09PM +0100, Uwe Sauter wrote:
>> This is a bug in 12.2.8 [1] and has been fixed in this PR [2].
>>
>> Would it be possible to get this backported as it is not recommended to
>> upgrade to 12.2.9?
> Possible yes, but it
Hi,
On 11/6/18 12:56 PM, Uwe Sauter wrote:
> Hi,
>
> in the documentation to pvecm [1] it says:
>
>
> At this point you must power off hp4 and make sure that it will not power on
> again (in the network) as it is.
> Important:
> As said above, it is critical to power off the node before
Hi!
On 10/29/2018 at 05:36 PM, Dewangga Alam wrote:
> On 29/10/18 16.14, Thomas Lamprecht wrote:
>> Am 10/28/2018 um 02:54 PM schrieb Dewangga Alam: Hello!
>>
>> I was new in proxmox and am trying to build large scale proxmox
>> 5.2 cluster (>128 nodes). My `/etc/pv
Hi!
On 10/28/2018 at 02:54 PM, Dewangga Alam wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hello!
>
> I was new in proxmox and am trying to build large scale proxmox 5.2
> cluster (>128 nodes). My `/etc/pve/corosync.conf` configuration like :
>
> ```
> nodelist {
> node {
>
Hi,
On 10/22/18 5:29 PM, Eneko Lacunza wrote:
> El 22/10/18 a las 17:17, Eneko Lacunza escribió:
>>
>> I'm looking at Ceph Jewel to Luminuous wiki page as preparation for a PVE 4
>> to 5 migration:
>> https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous
>>
>> I see that after the procedure, there
On 10/22/18 7:02 AM, Юрий Авдеев wrote:
> What I need: Two hosts (node1 and node2) with one virtual machine in
> replication without shared storage.
> If one of two hosts is dead - virtual machine will starts in other hosts.
> Node3 is online, only for quorum, not for virt.
> I using ZFS for
On 10/12/18 6:57 PM, Denis Morejon wrote:
> The 10 nodes lost the communication with each other. And they were working
> fine for a month. They all have version 5.1.
>
any environment changes? E.g., switch change or software update
(which then could block multicast)?
Can you also see if the
On 10/4/18 9:22 AM, lord_Niedzwiedz wrote:
> root@hayneee:~# apt install pve5-usb-automount
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> E: Unable to locate package pve5-usb-automount
>
>> apt install pve5-usb-automount
>>
>>
>>
>>
>> On Oct 3,
Hi,
On 9/20/18 6:22 PM, Gilberto Nunes wrote:
> HI there
>
> PVE 5.2
> CentOS guest with kernel 2.6.32
>
> With is safer: virtio or realtek?
hard to say, but 2.6.32 has virtio-net support, and it's normally
faster, so I'd start there. If you still run into problems you
can always try realtek
Hi all,
As you may have read[0], some bugs in the package manager APK in Alpine Linux
surfaced.
The most serious one allows Remote Code Execution (RCE) if the host suffers
a Man-in-the-Middle attack.
To mitigate this please update your APK version to:
* Alpine Linux v3.5: 2.6.10
* Alpine
Hi,
On 9/10/18 10:49 AM, John Crisp wrote:
> I have been critical of some things in the past with Proxmox, so to be
> even handed I thought I'd just drop a note to say over the weekend I did
> 2 in place upgrades from v4 -> v5
>
> Both went pretty well as smooth as silk, the only issue being
On 9/7/18 4:28 PM, Klaus Darilion wrote:
> Am 07.09.2018 um 10:35 schrieb Dietmar Maurer:
>>> But what is the timing for starting VM100 on another node? Is it
>>> guaranteed that this only happens after 60 seconds?
>>
>> yes, that is the idea.
>
> I miss the point how this is achieved. Is there
On 9/6/18 10:33 AM, lord_Niedzwiedz wrote:
> Hi,
> I get yours offical Fedora 27.
you should now be able to get the Fedora 28 template directly from us.
# pveam update
should pull the newest appliance index (gets normally done automatically,
once a day) then either download it through
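(To complete the picture with the pveam CLI; the template name is a
placeholder, not taken from the original mail:)

  # pveam available --section system
  # pveam download local <fedora-28-template-name>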
PVE 5.2 contains a newer version of OVMF (the EFI implementation we use) with
a lot of fixes; updating could help - but is certainly no guarantee -
especially as your Windows is already in the process of booting.
On 8/28/18 11:20 AM, lists wrote:
> If, during windows iso boot, I press F8 for
Hi,
On 8/24/18 11:51 AM, Dreyer, Jan, SCM-IT wrote:
> Hi,
>
> my configuration:
> HP DL380 G5 with Smart Array P400
> Proxmox VE 5.2-1
> name: 4.4.128-1-pve #1 SMP PVE 4.4.128-111 (Wed, 23 May 2018 14:00:02 +)
> x86_64 GNU/Linux
> This system is currently running ZFS filesystem version 5.
>
On 8/22/18 9:58 AM, Uwe Sauter wrote:
> Am 22.08.18 um 09:55 schrieb Thomas Lamprecht:
>> On 8/22/18 9:48 AM, Uwe Sauter wrote:
>>> Hi all,
>>>
>>> some quick questions:
>>>
>>> * As far as I can tell the PVE kernel is a modified version of
Hi,
On 8/21/18 9:01 PM, Gilberto Nunes wrote:
> Hi there
>
> Can I download a kernel from here:
>
> http://kernel.ubuntu.com/~kernel-ppa/mainline/
>
> And use it with proxmox?
>
You can, just install the .deb with dpkg, but you won't have ZFS and
a few other things included.
It may work but
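(Illustratively, with placeholder file names; the exact set of .debs depends
on the chosen mainline build:)

  # dpkg -i linux-image-*.deb linux-modules-*.deb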
Hi Uwe,
On 8/22/18 9:48 AM, Uwe Sauter wrote:
> Hi all,
>
> some quick questions:
>
> * As far as I can tell the PVE kernel is a modified version of Ubuntu
> kernels, correct?
> Modifications can be viewed in the pve-kernel.git repository (
> https://git.proxmox.com/?p=pve-kernel.git;a=tree
On 07/30/2018 at 11:57 AM, lyt_yudi wrote:
sorry, got it
It’s fixed!
great, thanks for reporting and testing again!
On 2018-07-30 at 5:54 PM, lyt_yudi wrote:
Hi
On 2018-07-30 at 4:43 PM, Thomas Lamprecht wrote:
# pvesh create nodes/localhost/qemu/131/agent/set-user-password --password test123456
On 07/30/2018 at 09:15 AM, lyt_yudi wrote:
On 2018-07-30 at 2:25 PM, Dominik Csapak wrote:
yes there was a perl import missing, i already sent a fix on the devel list,
see:
https://pve.proxmox.com/pipermail/pve-devel/2018-July/033180.html
(note: re-send as I forgot to hit answer all, thus the list wasn't included)
On 07/27/2018 at 08:45 PM, Eric Germann wrote:
> I have two new Proxmox boxes in a virgin cluster. No VM's, etc. The
> only thing setup on them is networking.
>
> I created a cluster on the first one successfully.
>