On 09/07/2017 12:54 PM, prash...@stella-telecom.fr wrote:
Hello Guys,
I would like to lock a group of users to a specific node (hypervisor) so that
they can create their VMs on this node only.
I have used /node/hyp01 for path with role PVEVMAdmin for that group, but still
the group has access
I'd like to get the basic idea reviewed first. :-)
cheers,
Thomas
Thomas Lamprecht (4):
remove unused variables
kvm_ostype: move to store-like format
OSType edit: switch to combobox
wizard: merge CD/DVD and OS panels
www/manager6/Utils.js | 54 +++-
www/manager
Hi,
On 08/23/2017 08:21 PM, Gilberto Nunes wrote:
I am a bit confused here... I see that there are two Replication entries, one in
Datacenter, and one on each node of a cluster.
What is the Datacenter replication for?
The Datacenter one is a global overview of all configured jobs,
you do not see
Hi,
On 08/22/2017 02:27 PM, Nils Privat wrote:
Hello,
just a question: How do i properly blacklist a module? I found three
possible ways.
A) On the internet I often read to create a file with the module name in
'/etc/modprobe.d', so e.g. '/etc/modprobe.d/bbbridge.conf', and write
into
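For reference, the /etc/modprobe.d mechanism the question describes boils down to a one-line `blacklist` entry; a minimal sketch (module name taken from the question; a temp dir stands in for /etc/modprobe.d so it runs without root):

```shell
# Sketch of option (A): a modprobe blacklist file.
# On a real system this file belongs in /etc/modprobe.d/ (root required);
# a temp dir is used here so the example runs anywhere.
conf_dir=$(mktemp -d)
printf 'blacklist bbbridge\n' > "$conf_dir/bbbridge.conf"
cat "$conf_dir/bbbridge.conf"
# On the host, follow up with: update-initramfs -u
# so the blacklist also covers modules loaded from the initramfs.
```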
On 10/18/2017 12:13 PM, Жюль Верн wrote:
> I read the documentation and know how to create a cluster. The problem now is that
> there is no quorum between node1 and node2 of my cluster. I searched for the
> problem in the logs, but found nothing. I tried the pvecm expected 1 command to
> add node3 to the cluster but it did not help.
>
I assume
On 10/18/2017 02:39 PM, Жюль Верн wrote:
> I have a proxmox cluster with ceph in my production. Versions 3.4. I need
> to update it to 5. For the upgrade test, I rented 3 servers. And now I ran
> into a problem. Why is there no quorum?
>
Please, could you describe your plan to upgrade in
Hi,
On 12/04/2017 07:51 PM, Mark Adams wrote:
> On 17 November 2017 at 10:55, Thomas Lamprecht <t.lampre...@proxmox.com>
> wrote:
>> On 11/16/2017 07:20 PM, Mark Adams wrote:
>>> Hi all,
>>>
>>> It looks like in newer versions of proxmox,
On 12/05/2017 10:25 AM, Mark Adams wrote:
> On 5 December 2017 at 08:52, Thomas Lamprecht <t.lampre...@proxmox.com>
> wrote:
>> On 12/04/2017 07:51 PM, Mark Adams wrote:
>>> On 17 November 2017 at 10:55, Thomas Lamprecht <t.lampre...@proxmox.com>
>>>> w
Hi,
At the end of last week we updated the container system appliances,
hosted on http://download.proxmox.com/images/
As previously, they are available to download through the Proxmox VE
webUI storage content panel.
Here a quick overview of what changed:
New:
* Ubuntu Artful (17.10)
* Alpine
Hi,
On 11/16/2017 05:52 PM, Lonnie Cumberland wrote:
> Greetings All,
>
> I am actually researching various hypervisors for a project that I am
> trying to coordinate to move forward and have narrowed it down to:
>
> 1. SmartOS (Illumos-based, via OpenSolaris) -- this is a wonderful
>
Hi,
On 11/16/2017 07:20 PM, Mark Adams wrote:
> Hi all,
>
> It looks like in newer versions of proxmox, the only fencing type advised
> is watchdog. Is that the case?
>
Yes, since PVE 4.0 watchdog fencing is the norm.
There is a patch set of mine which implements the use of external fence
Hi,
On 11/09/2017 06:36 PM, Chase, Brian E wrote:
> I was able to use the GUI to add a USB device and subsequently mount it on a
> guest QEMU Virtual Machine, but those same options are not present in the web
> UI for containers, so I found some related documentation here:
>
>
Hi,
some more information would be great to check this.
First, do you have a daemon(like) service loading sysctl
configs on the fly? If not we may rule out the sysctl config problem
as a trigger for this.
On 12/06/2017 06:43 PM, Andreas Herrmann wrote:
> Hi there,
>
> be warned: the actual
Hi,
On 12/11/2017 10:31 AM, F.Rust wrote:
> Hi all,
>
> is it possible to set a different starting number for VM ids?
No, currently not, I'm afraid.
> We have different clusters and don’t want to have overlapping vm ids.
> So it would be great to simply say
> Cluster 1 start VM-ids at 100
>
roxmox.com/wiki/FAQ (Point 10)
cheers,
Thomas
On 2018-05-07 11:46, Thomas Lamprecht wrote:
Hi,
On 05/05/2018 12:35 PM, Harald Leithner wrote:
Hi, yesterday we did a kernel update on one of the cluster nodes to
4.15.15-1-pve, after some hours the node freezes and got fenced.
There seem
Hi,
On 05/05/2018 12:35 PM, Harald Leithner wrote:
Hi,
yesterday we did a kernel update on one of the cluster nodes to
4.15.15-1-pve, after some hours the node freezes and got fenced.
There seem to be some IO regressions in this kernel release,
we updated it and pushed a version to
On 5/24/18 11:02 AM, mj wrote:
> Hi, another question on the same subject. On the mentioned page
>
> On 05/24/2018 10:17 AM, Stefan M. Radman wrote:
>> The update instructions are here:
>> https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_5.x_to_latest_5.2
>
>
On 5/24/18 11:28 AM, Mark Schouten wrote:
> there is no mention on upgrading from latest 4.4 to 5.0
>>>
>>> Is that intentional..? Or should it work just like a regular
>>> update?
>>>
>
> Worked fine for me.
>
>> It's easier than 3 to 4, as the cluster communication can stay intact
>> between 4.4
On 5/24/18 12:53 PM, Simone Piccardi wrote:
> On 24/05/2018 10:17, Stefan M. Radman wrote:
>> The update instructions are here:
>> https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_5.x_to_latest_5.2
>
> The instructions mention the enterprise repository,
On 6/11/18 10:40 PM, Klaus Darilion wrote:
> On 08.06.2018 14:44, Thomas Lamprecht wrote:
>> On 6/7/18 2:20 PM, Klaus Darilion wrote:
>>>
>>> On 07.06.2018 12:56, Thomas Lamprecht wrote:
>>>> so I wouldn't be opposed to backport this for our next
On 6/7/18 2:20 PM, Klaus Darilion wrote:
>
> On 07.06.2018 12:56, Thomas Lamprecht wrote:
>> so I wouldn't be opposed to backport this for our next kernel
>> update.
>
> That would be great. I volunteer for testing ;-)
>
That'd be perfect! Backported, packag
Hi,
On 06/01/2018 04:52 PM, Daniel Berteaud wrote:
Hi.
Writing some script to monitor my guests on a proxmox 5.2 cluster (3
nodes), I find something strange. When I ask the current status of a
guest, eg
pvesh get /nodes/pve1/qemu/109/status/current
I get most stats I want, except for CPU
Hi,
On 6/7/18 12:27 PM, Klaus Darilion wrote:
> Hi!
> We are using Proxmox with OVS and SUN NICs and it seems we are hit by
> this bug:
> https://github.com/torvalds/linux/commit/14224923c3600bae2ac4dcae3bf0c3d4dc2812be#diff-0bb9a1cc5be29abf50531852db2df75f
>
> It is included in 4.17. Proxmox
On 7/2/18 4:16 AM, Vinicius Barreto wrote:
> Hello,
> please, would anyone know to tell me which service is responsible for
> mounting the NFS storages during the startup of the Proxmox?
> Note: Added by GUI or Pvesm.
>
We have logic that activates volumes only as they are needed.
E.g., when a VM is
On 6/21/18 10:04 AM, ronny+pve-u...@aasen.cx wrote:
> On 20 June 2018 12:44, Tonči Stipičević wrote:
>> Hello to all
>>
>> I'm testing pve-storage-zsync and it works pretty well on my 3-node cluster.
>> One VM from the origin node is replicating to the other two nodes, and when I
>> shut down the origin
Hi,
On 1/29/18 4:17 PM, Roberto Alvarado wrote:
> Hi Folks,
>
> After upgrading a node I got the following problem: if the VM has a traffic
> shaping / network rate limit rule active, it won't boot on the latest version of
> PVE kernel/Proxmox-VE:
>
> What is ":1"?
> Usage: ... basic [ match
Hi,
On 1/31/18 11:50 AM, Gilberto Nunes wrote:
> Hi Ian
>
> In my case, I just installed it to run some tests, so I installed it
> inside another Proxmox, you know? Nested virt.
> So I guess this is the cause, 'cause in this scenario, there's no way to
> use multicast, or is there?
>
For
On 6/20/18 2:22 PM, Andreas Heinlein wrote:
> On 20.06.2018 13:01, Thomas Lamprecht wrote:
>> please don't do that, it's unnecessarily complicated and not good practice
>> to have a node depend on an additional vote routed back to itself through indirection...
>> And if you add one in both nodes you
Hi,
On 07/27/2018 11:02 AM, Marcus Haarmann wrote:
Hi experts,
we are using a Proxmox cluster with an underlying ceph storage.
Versions are pve 5.2-2 with kernel 4.15.18-1-pve and ceph luminous 12.2.5
We are running a couple of VM and also Containers there.
3 virtual NIC (as bond
kernel-4.15.18-1-pve 4.15.18-15
pve-libspice-server1 0.12.8-3
pve-manager 5.2-5
pve-qemu-kvm 2.11.2-1
pve-xtermjs 1.0-5
Regards
Brent
On 26/07/2018 11:22, Thomas Lamprecht wrote:
Hi,
On 07/26/2018 11:05 AM, Brent Clark wrote:
Good day Guys
I did a sslscan on my proxmox host, and I g
(note: re-sent as I forgot to hit reply-all, so the list wasn't included)
On 07/27/2018 08:45 PM, Eric Germann wrote:
> I have two new Proxmox boxes in a virgin cluster. No VM’s, etc. The
only thing setup on them is networking.
>
> I created a cluster on the first one successfully.
>
On 07/30/2018 09:15 AM, lyt_yudi wrote:
On 30 July 2018, at 2:25 PM, Dominik Csapak wrote:
yes, there was a Perl import missing; I already sent a fix to the devel list,
see:
https://pve.proxmox.com/pipermail/pve-devel/2018-July/033180.html
On 07/30/2018 11:57 AM, lyt_yudi wrote:
sorry, got it
It’s fixed!
great, thanks for reporting and testing again!
On 30 July 2018, at 5:54 PM, lyt_yudi wrote:
Hi
On 30 July 2018, at 4:43 PM, Thomas Lamprecht wrote:
# pvesh create nodes/localhost/qemu/131/agent/set-user-password --password
test123456
Hi Uwe,
On 8/22/18 9:48 AM, Uwe Sauter wrote:
> Hi all,
>
> some quick questions:
>
> * As far as I can tell the PVE kernel is a modified version of Ubuntu
> kernels, correct?
> Modifications can be viewed in the pve-kernel.git repository (
> https://git.proxmox.com/?p=pve-kernel.git;a=tree
Hi,
On 8/21/18 9:01 PM, Gilberto Nunes wrote:
> Hi there
>
> Can I download a kernel from here:
>
> http://kernel.ubuntu.com/~kernel-ppa/mainline/
>
> And use it with proxmox?
>
You can, just install the .deb with dpkg, but you won't have ZFS and
a few other things included.
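As a hedged sketch of that workflow (the .deb file name below is an invented placeholder, not a real build):

```shell
# Hedged sketch: installing a mainline kernel build on a PVE host.
# The file name is a made-up placeholder; pick an actual build from
# http://kernel.ubuntu.com/~kernel-ppa/mainline/ for your architecture.
deb="linux-image-unsigned-5.0.0-xxxxxx-generic_amd64.deb"
# Install with dpkg (run as root on the PVE host):
install_cmd="dpkg -i $deb"
echo "$install_cmd"
# Caveat from the reply: mainline builds ship without the ZFS modules,
# so ZFS-backed storage will not work under such a kernel.
```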
It may work but
On 8/22/18 9:58 AM, Uwe Sauter wrote:
> On 22.08.18 09:55, Thomas Lamprecht wrote:
>> On 8/22/18 9:48 AM, Uwe Sauter wrote:
>>> Hi all,
>>>
>>> some quick questions:
>>>
>>> * As far as I can tell the PVE kernel is a modified version of
Hi,
On 8/24/18 11:51 AM, Dreyer, Jan, SCM-IT wrote:
> Hi,
>
> my configuration:
> HP DL380 G5 with Smart Array P400
> Proxmox VE 5.2-1
> name: 4.4.128-1-pve #1 SMP PVE 4.4.128-111 (Wed, 23 May 2018 14:00:02 +)
> x86_64 GNU/Linux
> This system is currently running ZFS filesystem version 5.
>
PVE 5.2 contains a newer version of OVMF (the EFI implementation we use) with
a lot of fixes; updating could help - but is certainly no guarantee -
especially as your Windows is already in the process of booting.
On 8/28/18 11:20 AM, lists wrote:
> If, during windows iso boot, I press F8 for
Hi,
On 07/24/2018 07:36 PM, JR Richardson wrote:
Hi All,
Which permission role grants view access to the datacenter summary
page? I configured a user for view-only with PVEAuditor but only see
Nodes and VMs. The idea is to export the datacenter summary page out
to our NOC so they can see
On 9/7/18 4:28 PM, Klaus Darilion wrote:
> On 07.09.2018 10:35, Dietmar Maurer wrote:
>>> But what is the timing for starting VM100 on another node? Is it
>>> guaranteed that this only happens after 60 seconds?
>>
>> yes, that is the idea.
>
> I don't see how this is achieved. Is there
Hi,
On 9/10/18 10:49 AM, John Crisp wrote:
> I have been critical of some things in the past with Proxmox, so to be
> even handed I thought I'd just drop a note to say over the weekend I did
> 2 in place upgrades from v4 -> v5
>
> Both went pretty well as smooth as silk, the only issue being
On 9/6/18 10:33 AM, lord_Niedzwiedz wrote:
> Hi,
> I got your official Fedora 27.
you should now be able to get the Fedora 28 template directly from us.
# pveam update
should pull the newest appliance index (this normally happens automatically,
once a day) then either download it through
Hi,
On 1/22/18 11:29 AM, Carles Xavier Munyoz Baldó wrote:
> Hi,
> I have a two-node Proxmox cluster with a directory storage with DRBD in
> Primary/Primary mode. I can live migrate kvm virtual machines without
> problems, but I have problems when I migrate a LXC container from one
> node to the
On 3/1/18 12:21 PM, Gregor Burck wrote:
> Hi,
Hi,
> I want to start building a cluster.
>
> I start with one machine, fresh install.
>
> Is it possible to add a node to the cluster which has existed longer and has
> existing virtual machines?
Normally it's easier to create the cluster on the
On 3/8/18 7:23 PM, Alexis Huxley wrote:
> I'm currently using KVM and libvirt with two *unclustered* servers and
> shared storage [*], but am considering migrating to Proxmox because
> I want to switch to lighter weight containers.
>
> At the moment, with libvirt, on the first server I create a
Hi,
On 03/30/2018 03:20 AM, Lindsay Mathieson wrote:
I was working on a custom storage plugin (lizardfs) for VE 4.x, looking to
revisit it. Has the API changed much (or at all) for PX 5? Is there any
documentation for it?
The base work for per-storage bandwidth limiting was added,
see
Hi,
On 3/19/18 5:00 PM, Toan Pham wrote:
> Hi, I am very new to proxmox and I have a few questions/suggestions:
>
>
> 1. The web-management interface is well designed, but is there a way
> (perhaps a new feature) to add custom commands to the webUI? Since I have a
> node that's always powered
On 3/21/18 1:51 PM, Wolfgang Link wrote:
>
>> So does this mean that all those processes are sitting in a "queue" waiting
>> to execute? wouldn't it be more sensible for the script to terminate if a
>> process is already running for the same job?
>>
> No because as I wrote 15 is default, but we
Hi Uwe!
On 3/23/18 3:02 PM, Uwe Sauter wrote:
> Hi there,
>
> I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but
> migrations fail with this setting.
>
> # log of failed insecure migration #
> 2018-03-23 14:58:44 starting migration of VM 101 to node
Best,
> Uwe
>
>
> On 23.03.2018 15:15, Thomas Lamprecht wrote:
>> Hi Uwe!
>>
>> On 3/23/18 3:02 PM, Uwe Sauter wrote:
>>> Hi there,
>>>
>>> I wanted to test "migration: type=insecure" in /etc/pve/datacenter.cfg but
>&
On 6/20/18 12:51 PM, Dean Mumby wrote:
> just create a Proxmox VM within each node and join them to the cluster, then
> you should be able to have each node run independently.
>
please don't do that, it's unnecessarily complicated and not good practice
to have a node depend on an additional vote routed back to itself through
On 10/12/18 6:57 PM, Denis Morejon wrote:
> The 10 nodes lost communication with each other. And they were working
> fine for a month. They all have version 5.1.
>
any environment changes? E.g., switch change or software update
(which then could block multicast)?
Can you also see if the
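For context, multicast health between cluster nodes is usually checked with `omping`; a minimal sketch (node names are placeholders; the invocation follows the commonly documented pattern and should be run on all nodes in parallel):

```shell
# Placeholder node names; substitute your actual cluster hosts.
nodes="node1 node2 node3"
# ~10000 packets at 1 ms intervals roughly models corosync's traffic:
omping_cmd="omping -c 10000 -i 0.001 -F -q $nodes"
echo "$omping_cmd"
```

A high loss rate here typically points at the switch dropping multicast (e.g. IGMP snooping without a querier).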
On 10/22/18 7:02 AM, Юрий Авдеев wrote:
> What I need: two hosts (node1 and node2) with one virtual machine in
> replication without shared storage.
> If one of the two hosts dies, the virtual machine starts on the other host.
> Node3 is online only for quorum, not for virtualization.
> I am using ZFS for
Hi,
On 10/22/18 5:29 PM, Eneko Lacunza wrote:
> On 22/10/18 at 17:17, Eneko Lacunza wrote:
>>
>> I'm looking at the Ceph Jewel to Luminous wiki page as preparation for a PVE 4
>> to 5 migration:
>> https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous
>>
>> I see that after the procedure, there
Hi!
On 10/28/2018 02:54 PM, Dewangga Alam wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hello!
>
> I am new to Proxmox and am trying to build a large-scale Proxmox 5.2
> cluster (>128 nodes). My `/etc/pve/corosync.conf` configuration looks like:
>
> ```
> nodelist {
> node {
>
Hi!
On 10/29/2018 05:36 PM, Dewangga Alam wrote:
> On 29/10/18 16.14, Thomas Lamprecht wrote:
>> On 10/28/2018 02:54 PM, Dewangga Alam wrote: Hello!
>>
>> I am new to Proxmox and am trying to build a large-scale Proxmox
>> 5.2 cluster (>128 nodes). My `/etc/pv
On 11/8/18 1:43 PM, Alwin Antreich wrote:
> On Wed, Nov 07, 2018 at 09:01:09PM +0100, Uwe Sauter wrote:
>> This is a bug in 12.2.8 [1] and has been fixed in this PR [2].
>>
>> Would it be possible to get this backported as it is not recommended to
>> upgrade to 12.2.9?
> Possible yes, but it
Hi,
On 11/6/18 12:56 PM, Uwe Sauter wrote:
> Hi,
>
> in the documentation to pvecm [1] it says:
>
>
> At this point you must power off hp4 and make sure that it will not power on
> again (in the network) as it is.
> Important:
> As said above, it is critical to power off the node before
On 10/4/18 9:22 AM, lord_Niedzwiedz wrote:
> root@hayneee:~# apt install pve5-usb-automount
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> E: Unable to locate package pve5-usb-automount
>
>> apt install pve5-usb-automount
>>
>>
>>
>>
>> On Oct 3,
Hi,
On 9/20/18 6:22 PM, Gilberto Nunes wrote:
> HI there
>
> PVE 5.2
> CentOS guest with kernel 2.6.32
>
> With is safer: virtio or realtek?
hard to say, but 2.6.32 has virtio-net support, and it's normally
faster, so I'd start there. If you still run into problems you
can always try realtek
Hi,
On 1/23/19 7:03 PM, Gilberto Nunes wrote:
> I am facing some trouble with 2 LXC containers that I can neither access nor kill.
> I already tried lxc-stop -k -n but with no effect.
> Any advice will be welcome...
does it have processes in the "D" (uninterruptible) state?
Probably because of some network mount where
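A quick way to spot such processes (a generic sketch, not PVE-specific):

```shell
# List processes in uninterruptible sleep ("D" state).
# Such processes typically hang on I/O, e.g. a dead network mount,
# and cannot be killed until the I/O completes or times out.
ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /^D/'
```

An empty result (just the header) means nothing is currently stuck in D state.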
On 12/4/18 10:27 AM, lord_Niedzwiedz wrote:
> root@hayne:~# systemctl start pve-container@108
> Job for pve-container@108.service failed because the control process exited
> with error code.
> See "systemctl status pve-container@108.service" and "journalctl -xe" for
> details.
>
> root@hayne:~#
On 11/22/18 7:29 PM, Frank Thommen wrote:
> Please excuse, if this is too basic, but after reading
> https://pve.proxmox.com/wiki/Cluster_Manager I wondered, if the
> cluster/corosync network could be built by directly connected network
> interfaces. I.e not like this:
>
> +---+
> |
On 1/7/19 7:39 PM, Denis Morejon wrote:
> Could you give me an example please?
Dietmar did already; research "split brain".
>> In practice, I know a lot of people that are afraid of building a cluster
>> because of the loss of quorum, and have a plain HTML web page with the URL of
>> each node
Hi all,
As you may have read[0], some bugs in the APK package manager in Alpine Linux
have surfaced.
The most serious one allows Remote Code Execution (RCE) if the host suffers a
Man-In-The-Middle attack.
To mitigate this please update your APK version to:
* Alpine Linux v3.5: 2.6.10
* Alpine
On 3/22/19 3:17 PM, Eneko Lacunza wrote:
> Hi Alwin,
>
> El 22/3/19 a las 15:04, Alwin Antreich escribió:
>> On a point release, an ISO is generated and the release info is needed
>> for that.
>>
>> The volume of
On 3/26/19 8:09 AM, lord_Niedzwiedz wrote:
> root@ave:~# apt upgrade
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Calculating upgrade... Done
> The following packages have been kept back:
> zfs-initramfs zfsutils-linux
> 0 upgraded, 0 newly
On 4/6/19 8:39 AM, Igor Podlesny wrote:
> -- Beyond the obvious "well, it's for redundancy". That's obvious..
> but "What subsystems and under what circumstances are gonna use it?"
> -- isn't at all.
>
> I have strong suspicion that qemu-kvm isn't capable of fail-over
> switching in case its
On 2/25/19 6:03 PM, José Manuel Giner wrote:
> According to this link, Proxmox VE 5 is affected.
>
> https://www.cloudlinux.com/cloudlinux-os-blog/entry/major-9-8-vulnerability-affects-multiple-linux-kernels-cve-2019-8912-af-alg-release
>
> We have a patch?
>
ah yeah, the hyped CVE ^^ but yes,
Hi,
On 2/25/19 6:22 PM, Frederic Van Espen wrote:
> Hi,
>
> We're designing a new datacenter network where we will run proxmox nodes on
> about 30 servers. Of course, shared storage is a part of the design.
>
> What kind of shared storage would anyone recommend based on their
> experience and
Hi,
On 3/1/19 11:09 AM, Patrick Westenberg wrote:
> Hi everyone,
>
> I configured PAM authentication to use yubico but I can't login anymore.
>
> Mar 1 11:02:23 pve01 pvedaemon[4917]: authentication failure;
> rhost=172.31.0.1 user=root@pam msg=Invalid response from server: 410 Gone
>
> Is it
On 3/20/19 3:28 PM, DL Ford wrote:
> I am not sure if this will affect everyone or if something really strange
> just happened to my system, but after upgrading to PVE 4.15.18-35, all of my
> network name assignments have gone back to the old style (e.g. in my case
> enp4s0 is now eth0, enp6s0
On 3/7/19 7:42 PM, David Lawley wrote:
> sorry, brain fart
>
> I'm on 3.x, more work.
>
> I think the last time I researched this I just decided it was time for a
> refresh anyway
surely the easiest and cleanest way, then you can also go straight to a
fresh PVE 5.X installation.
>
> On
On 1/23/19 10:27 AM, Fabian Grünbichler wrote:
> The APT package manager used by Proxmox VE and Proxmox Mail Gateway was
> recently discovered to be affected by CVE-2019-3462, allowing a
> Man-In-The-Middle or malicious mirror server to execute arbitrary code
> with root privileges when affected
On 4/11/19 2:47 PM, Uwe Sauter wrote:
> Thanks for all your effort. Two questions though:
>
>
> From the release notes:
>
> HA improvements and added flexibility
>
> It is now possible to set a datacenter wide HA policy which can change
> the way guests are treated upon a Node shutdown or
Hi,
On 4/12/19 4:41 PM, Mark Schouten wrote:
> Hi,
>
> I'm in the process of upgrading some older 4.x clusters with Ceph to current
> versions. All goes well, but we hit a bug that is understandable, but
> undocumented. To prevent others from hitting it, I think it would be wise to
> document
Hi,
On 5/17/19 4:27 AM, Christian Balzer wrote:
>
> Hello,
>
> is there anything that's stopping the current PVE to work with an
> externally configured Ceph Nautilus cluster?
Short: rather not; you need to try it out to be sure, though.
You probably cannot use the kernel RBD, as its support may
On 5/17/19 9:53 AM, Christian Balzer wrote:
> On Fri, 17 May 2019 08:05:21 +0200 Thomas Lamprecht wrote:
>> On 5/17/19 4:27 AM, Christian Balzer wrote:
>>> is there anything that's stopping the current PVE to work with an
>>> externally configured Ceph Nautilus cluste
Hi,
On 5/17/19 2:57 AM, Mike O'Connor wrote:
> Hi Guys
>
> Where can I download the source code for the PVE kernels with there
> patches (including old releases) ? I want to apply a patch to fix an issue.
>
All our sources are available at: https://git.proxmox.com/
For cloning the kernel do:
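As a hedged sketch, assuming the repository layout browsable on git.proxmox.com (the exact clone URL and build steps may differ):

```shell
# Hedged sketch: cloning the PVE kernel source, assuming the layout
# browsable at https://git.proxmox.com/?p=pve-kernel.git.
repo="https://git.proxmox.com/git/pve-kernel.git"
clone_cmd="git clone $repo"
echo "$clone_cmd"
# The repo carries the applied patches in its patches/ directory;
# after adding one, the package is built with 'make' inside the checkout.
```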
Hi,
On 6/13/19 11:47 AM, Mark Schouten wrote:
> Let me start off with saying that I am not fingerpointing at anyone,
> merely looking for how to prevent sh*t from happening again!
>
> Last month I emailed about issues with pve-firewall. I was told that
> there were fixes in the newest packages,
On 6/25/19 9:44 AM, Thomas Lamprecht wrote:
> And as also said (see quote below), for more specific hinters I need the raw
> logs, unmerged and as untouched as possible.
it may just be that I did not see the mail in my inbox, so it looks like
you already sent it to me, sorry about m
On 6/25/19 9:10 AM, Mark Schouten wrote:
> On Thu, Jun 13, 2019 at 12:34:28PM +0200, Thomas Lamprecht wrote:
>>> 2: ha-manager should not be able to start the VM's when they are running
>>> elsewhere
>>
>> This can only happen if fencing fails, and that fencing wor
On 6/13/19 1:30 PM, Mark Schouten wrote:
> On Thu, Jun 13, 2019 at 12:34:28PM +0200, Thomas Lamprecht wrote:
>> Hi,
>> Do your ringX_addr in corosync.conf use the hostnames or the resolved
>> addresses? As with nodes added on newer PVE (at least 5.1, IIRC) we try
>> to r
own into that too..
Stefan (CCd), would you be willing to take a look at this and expand the
"Cluster Network" section from the pvecm chapter in pve-docs a bit
regarding this? That'd be great.
>
> On 6/13/19 12:29 PM, Thomas Lamprecht wrote:
>> On 6/13/19 1:30 PM, Mark Schout
On 5/9/19 10:09 AM, Mark Schouten wrote:
> On Thu, May 09, 2019 at 07:53:50AM +0200, Alexandre DERUMIER wrote:
>> But to really be sure to not have the problem anymore :
>>
>> add in /etc/sysctl.conf
>>
>> net.netfilter.nf_conntrack_tcp_be_liberal = 1
>
> This is very useful info. I'll create a
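A sketch of persisting that setting (the file name is an arbitrary choice; a temp dir stands in for /etc/sysctl.d/ so the example runs without root):

```shell
# Sketch: persist the conntrack setting via a sysctl drop-in.
# On the host this file would go to /etc/sysctl.d/ (root required);
# a temp dir is used so the example runs anywhere.
conf_dir=$(mktemp -d)
printf 'net.netfilter.nf_conntrack_tcp_be_liberal = 1\n' \
    > "$conf_dir/10-conntrack-liberal.conf"
cat "$conf_dir/10-conntrack-liberal.conf"
# Apply immediately on the host with:
#   sysctl -p /etc/sysctl.d/10-conntrack-liberal.conf
```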
Hi,
On 5/15/19 9:34 AM, Anton Blau wrote:
> Hello,
>
> for better clarity, I have assigned 4-digit IDs to some VMs (e.g. 1250).
>
> In the menu Data Center -> Backup -> Add, these VMs no longer appear.
>
> Is this a bug or did I do something wrong?
>
Same as Dominic, I cannot reproduce this
Hi,
On 5/23/19 10:43 AM, Thomas Naumann wrote:
> there is an extra point "improved SDN support" under roadmap in the
> official Proxmox wiki. Who can give a hint what this means in detail?
>
Maybe you did not see it but Alexandre answered already to the same mail
on pve-devel[0].
[0]:
On 5/17/19 4:07 PM, Igor Podlesny wrote:
> On Fri, 17 May 2019 at 17:59, Saint Michael wrote:
>>
>> Maybe you should share the patch here so we benefit from it.
>
> Thomas said everything is kept in public git repository, what else are
> you looking to benefit from? :)
>
The original poster of
On 5/3/19 12:57 PM, Igor Podlesny wrote:
> On Fri, 3 May 2019 at 14:44, Iztok Gregori wrote:
>>
>> Hi to all!
>>
>> So what happens when one of the configured servers fails? Does Proxmox
>> recognize the failure and mount the secondary? If so, the running
>
> Proxmox tells you go suffer, that's
On 5/8/19 9:37 AM, Igor Podlesny wrote:
> On Wed, 8 May 2019 at 14:14, Thomas Lamprecht wrote:
>> On 5/8/19 8:57 AM, Igor Podlesny wrote:
>>> On Wed, 8 May 2019 at 13:11, Thomas Lamprecht
>>> wrote:
> [...]
>>> In short: pain, suffering and all That.
>&
On 5/8/19 10:15 AM, Igor Podlesny wrote:
> On Wed, 8 May 2019 at 15:02, Thomas Lamprecht wrote:
> [...]
>>> -- I didn't open no ticket, neither did I __complain__. I just let
>>> others know there's a pitfall, meanwhile thoroughly describing what it
>>> was. That'
On 4/26/19 2:05 PM, Craig Jones wrote:
> Hello,
>
> To my understanding, in the vSphere world, a cluster with hosts of mixed
> CPU frequencies and generations (let's assume consistent manufacturer)
> will run at the lowest common denominator. In other words, if you have 3
> hosts each with
Hi,
On 7/5/19 9:32 AM, mj wrote:
> Looks like a great new release!
>
> Does corosync 3.0 mean that the notes on
> [https://pve.proxmox.com/wiki/Multicast_notes] are no longer relevant?
We will update the documentation and wiki articles regarding this in
the following days, until the final PVE
On 7/4/19 12:35 PM, Marco Gaiarin wrote:
> We had a major power outage here, and our cluster had some trouble on
> restart. The worst was:
>
> Jul 3 19:58:40 pvecn1 corosync[3443]: [MAIN ] Corosync Cluster Engine
> ('2.4.4-dirty'): started and ready to provide service.
> Jul 3 19:58:40
On 7/8/19 8:05 AM, Fabian Grünbichler wrote:
> On Mon, Jul 08, 2019 at 02:16:34AM +0200, Chris Hofstaedtler | Deduktiva
> wrote:
>> Hello,
>>
>> while doing some test upgrades I ran into the buster RNG problem [1],
>> where the newer kernel and systemd use a lot more randomness during
>>
On 7/8/19 9:34 AM, arjenvanweel...@gmail.com wrote:
> Having this (as an option) in the GUI would be very nice,
> and 'apt-get install haveged' is quick and easy.
opt-in is surely no problem; my concerns would rather be about
the case where we just add this for VMs with Linux as ostype,
On 7/8/19 9:56 AM, arjenvanweel...@gmail.com wrote:
> Is just installing haveged sufficient? Can the Proxmox team decide to
> add haveged to its dependencies? Or is more discussion required?
It'd be, the service is then enabled and running by default.
For me it'd be OK to add as a
On 7/8/19 12:13 PM, Fabian Grünbichler wrote:
> On Mon, Jul 08, 2019 at 09:10:48AM +0200, Thomas Lamprecht wrote:
>> On 7/8/19 8:05 AM, Fabian Grünbichler wrote:
>>> On Mon, Jul 08, 2019 at 02:16:34AM +0200, Chris Hofstaedtler | Deduktiva
>>> wrote:
>>>
Hi,
On 7/8/19 6:04 PM, bsd--- via pve-user wrote:
> Hello,
>
> There is JS in Proxmox VE v5.4.6 which reloads the page and forces all
> menu items to the top every 5 seconds.
A full page reload? We only do that on cluster creation, as there
the website's TLS certificate changed, and thus it's
On 4/24/19 8:13 PM, David Lawley wrote:
> I did that as part of the migration
>
and the guest agent works? i.e., things like
# qm guest cmd VMID get-osinfo
also the guest config could be interesting:
# qm config VMID
> Serial driver? Don't have any odd devices showing up in the