> Yoslan Raul Jimenez Carvajal wrote on 16 June 2016 at
> 16:47:
>
>
> Hello list, it turns out that I made a backup of an LXC container in Proxmox
> 4.2, and when I try to restore it the restore works fine, but it does not
> restore /var/log. Any idea about it?
>
> Thank you
>
> Jean-Laurent Ivars wrote on 18 March 2016 at
> 11:15:
>
>
> Thank you for your answer, but do you know how to tell corosync.conf to allow
> quorum with one node? (How do I set the directive expected_votes=1?)
$ man pvecm
...
pvecm expected
Tells
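In practice, and assuming a two-node cluster where the other node is gone for good, the `pvecm expected` subcommand from the man page boils down to a single call on the surviving node (sketch):

```sh
pvecm expected 1    # tell corosync to expect a single vote for quorum
```

As far as I know this only adjusts the running corosync instance, so it has to be repeated after a corosync restart unless corosync.conf itself is changed.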
> Paul Gray wrote on 28 March 2016 at 02:00:
>
>
> I upgraded a 5-node 3.x cluster this weekend. The original cluster has
> a valid commercial star-certificate + intermediate certificate for both
> the proxy and inter-node communication.
>
> It was working under
> Paul Gray <g...@cs.uni.edu> wrote on 29 March 2016 at 13:19:
> On 03/29/2016 01:32 AM, Fabian Grünbichler wrote:
> >
> > Please see https://pve.proxmox.com/wiki/HTTPSCertificateConfiguration for an
> > up-to-date recommendation on how to confi
> Lindsay Mathieson wrote on 27 April 2016 at
> 02:01:
>
>
> Getting this when trying to move a Ceph RBD image to NFS. 100% reproducible
> at the moment.
>
> create full clone of drive ide0 (ceph1:vm-401-disk-1)
> Formatting
On Thu, Jul 21, 2016 at 10:18:00AM +0200, Eneko Lacunza wrote:
> Hi all,
>
> During some restore test, I have found that stopping a restore leaves rbd
> images unremoved:
>
Could you please file a bug report for this at bugzilla.proxmox.com ?
Thanks!
On Fri, Jul 15, 2016 at 11:35:37AM +0200, Fabrizio Cuseo wrote:
> Hello.
>
> With PVE 4.2, using the ceph server, removing a snapshot on a powered-on VM is
> not supported; with PVE 3.4 I was able to remove it.
>
> Was this feature removed, or is it planned for a future release?
>
> Regards, Fabrizio
>
>
> Gwenn Gueguen wrote on 31 January 2017 at
> 12:09:
> Hi all,
>
> When trying to connect to SPICE console via remote-viewer from a Debian
> testing system, SSL/TLS connection fails.
see https://bugzilla.proxmox.com/show_bug.cgi?id=1243 ,
On Thu, Feb 02, 2017 at 10:55:25AM +0100, Florent B wrote:
> Hi everyone,
>
> On a testing cluster, I have a problem when I destroy some VM, sometimes
> the task is "OK" but VM disk is not removed on RBD.
this is https://bugzilla.proxmox.com/show_bug.cgi?id=1063 , for which I
sent a v1 patch set
On Tue, Feb 21, 2017 at 02:37:49PM +0200, Dimitri Alexandris wrote:
> I have a 4.4-12 proxmox server, with two additional pools, the main one and
> a second raid (2 disks) pool P2T which was meant to be removable/portable.
> My h/w supports hot swap ok.
>
>
> When I export the pool (zpool
On Tue, Feb 14, 2017 at 12:09:44AM +0100, Kevin Lemonnier wrote:
> > So I need to create the VM on the new host and then attach the moved disk –
> > is there any doc about it? I guess I should also copy the configuration file.
>
> Yeah, I usually just re-create it beforehand, move the disk, then plan
On Fri, Aug 19, 2016 at 08:38:48AM +0200, Eneko Lacunza wrote:
> Hi,
>
> El 19/08/16 a las 05:23, Nguyễn Tấn Vỹ escribió:
> > Regarding the flip-feng-shui attack on a VM when KSM is enabled (
> > https://www.vusec.net/projects/flip-feng-shui/). Should I worry about
> > the flip-feng-shui attack? I am using DDR3
On Tue, Feb 28, 2017 at 08:50:55PM +0100, Uwe Sauter wrote:
> Hi,
>
> I'd like to make you aware of a security flaw in virtfs [1] that was
> published about 2 weeks ago.
>
> Might be worthwhile to get this into the coming update, if this applies to
> PVE.
>
> Regards,
>
> Uwe
>
>
>
On Fri, Sep 09, 2016 at 01:31:45PM +0200, Nicola Ferrari (#554252) wrote:
> Hi everybody.
> I upgraded my cluster to pve4 some weeks ago.
>
> I noticed that if I select the "no backup" option on a disk, the string
> added to the config is now "backup=0".
> In the past (and also now on my
On Thu, Sep 29, 2016 at 07:01:14PM +0200, Frank Thommen wrote:
> Hi all,
>
> the upgrade to PVE 4.3 introduced some nice features in the GUI, e.g. the
> nice CPU, memory and swap usage bars. But unfortunately it also brought
> back the screen space waste issue solved back in June
>
On Fri, Sep 30, 2016 at 11:36:56AM +0100, Brian :: wrote:
> Hi guys
>
> This doesn't seem to work for me..
>
> I get blank screen in disks section of gui.
>
> The command you use in Diskmanage.pm translates to:
>
> /usr/sbin/smartctl -a -f brief /dev/device
>
> If I run that for
On Thu, Oct 06, 2016 at 03:17:03PM +0200, Marco Gaiarin wrote:
> Mandi! Alwin Antreich
> In chel di` si favelave...
>
> > > OK, i've removed 'ntp' in 'apt-get install' command on the wiki page.
> > As a remark, you need to configure /etc/systemd/timesyncd.conf with a
> > time source and restart
On Mon, Sep 26, 2016 at 11:52:04AM +0200, Florent B wrote:
> Hi,
>
> When I remove 2 VMs via the GUI, and the first one has not finished
> destroying, I get this error on the second:
>
> trying to aquire cfs lock 'storage-PVE01-RBD03' ...TASK ERROR: got lock
> request timeout
>
> This is a RBD
On Wed, Nov 09, 2016 at 09:46:13AM +0100, Szabolcs F. wrote:
> It feels as if the disks weren't ready when grub tries to mount the
> LVM volumes. Any ideas how to fix this? Maybe adding some other type of
> wait to the grub config?
>
"rootdelay" is probably what you are looking for (man
On Mon, Nov 07, 2016 at 11:53:20AM +, Guy Plunkett wrote:
>> On 7 Nov 2016, at 11:50, Kevin Lemonnier wrote:
>>
>>> moving the /etc/pve/nodes/pve11/qemu-server/*.conf files to another node
>>> worked well. I didn't have to restart any services.
>>
>> Still, that seems
On Thu, Oct 20, 2016 at 02:39:34PM +0200, Alexandre DERUMIER wrote:
> > migration: network=192.168.0.0/24,unsecure=on|off
>
> >>I like the latter more! I look a bit into that.
>
> As tls support is now available in qemu 2.6,
>
> maybe
>
> migration: network=192.168.0.0/24,encryption=ssh|tls|off
On Mon, Nov 14, 2016 at 09:43:40AM +0100, Daniel wrote:
> >
> > but I would advise you to use vzdump to backup containers - you get a
> > (compressed) tar archive, the config is backed up as well and you get
> > consistency "for free" (or almost free ;)). normally, you want to
> > restore
Hello,
I'd like people that have non-productive test setups to participate in
testing the updated grub2 packages that are available in the pvetest
repository.
We already tested them on all of our available hardware and setups, but
since issues with grub tend to be rather ugly to fix once
On Fri, Nov 18, 2016 at 02:04:48PM +0100, Marco Gaiarin wrote:
>
> > What am I missing?! Thanks.
>
> Sorry, probably I'm missing some background info on LVM. Trying to
> reset and restart from the ground up.
>
>
> With LVM, you define a storage with a VG, and proxmox itself creates an
> LV for every
On Thu, Nov 17, 2016 at 04:36:31PM +, IMMO WETZEL wrote:
> Could it be that this function is not described in the current api doc?
it is ;) are you using the online version[1]?
> I would expect at least these parameters:
> node,vmid,snapshotname,description,savevmstate{Boolean}
>
> cos qm snapshot
On Mon, Nov 21, 2016 at 06:56:21PM +1000, Lindsay Mathieson wrote:
> On 21 November 2016 at 16:19, Alexandre DERUMIER wrote:
> > qemu already support block migration to remote nbd (network block device)
> > server.
>
> Thanks, I'll have a look into that.
>
> >
> > qemu 2.8
On Sat, Nov 12, 2016 at 08:46:53PM +0100, Daniel wrote:
> Hi there,
>
> before we used LVM-thin we were able to back up all containers directly from
> the host system.
> Now everything is LVM. Is there any known and easy way to back up all hosts,
> including all VMs?
> For example with rsync or
On Wed, Nov 02, 2016 at 10:27:51AM +0100, Lutz Markus Willek wrote:
> Nope. Not in the /etc directory, as written in my first mail.
>
> Lutz Willek
>
instead, you now have the opposite problem: whenever we change something
in our service file, you have to notice this and check whether you need
On Wed, Oct 12, 2016 at 03:42:19PM +0200, Karsten Becker wrote:
> Hi,
>
> we are currently sitting here in the advanced training, trying to create
> a VM from the commandline according to the manpage of qm.
>
> We are reading this in the manpage:
>
> > -virtio[n] [file=] [,aio=]
On Wed, Dec 07, 2016 at 12:31:43PM +0100, Florent B wrote:
> Jewel 10.2.4 is released today
> (https://raw.githubusercontent.com/ceph/ceph/master/doc/release-notes.rst),
> is the bug fixed ? (and which one is it ?)
>
http://tracker.ceph.com/issues/16255 , should be fixed.
please note that
On Thu, Jan 12, 2017 at 11:27:24AM +0100, Frank Thommen wrote:
> Hi,
>
> one of my LXC containers is locked: It cannot be started and cannot be
> backed up. The error message is "CT is locked (backup)". The only
> suggestion I found to solve this problem is `qm unlock XXX`. However in my
>
On Wed, Jan 04, 2017 at 01:46:07PM +0100, Florent B wrote:
> On 01/04/2017 01:42 PM, Fábio Rabelo wrote:
> > Hellows ...
> >
> > Look at the storage view, then Add.
> >
> > One of the options is NFS; you need to supply the IP of the share or, if
> > it has a DNS name, you can use it.
> >
> >
> >
On Wed, Jan 04, 2017 at 02:50:02PM +0100, Florent B wrote:
> On 01/04/2017 01:54 PM, Fabian Grünbichler wrote:
> > On Wed, Jan 04, 2017 at 01:46:07PM +0100, Florent B wrote:
> >> Hi Fábio,
> >>
> >> Excuse me if my message was not clear enough, but that was
installer environment in Puppet modules
> export FACTER_is_installer=true
> # passing a non-existent tag like "no_such_tag" to the puppet agent only
> initializes the node
> /usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags
> no_such_tag --server urzlxdep
On Mon, Jan 09, 2017 at 01:26:03PM +0100, Fabian Grünbichler wrote:
> see comment inline
>
> On Mon, Jan 09, 2017 at 12:03:02PM +0100, Vadim Bulst wrote:
> > Sorry my fault. It was the Debian Jessie howto and not Wheezy. Attached are
> > some logs. Let me know
On Sun, Jan 08, 2017 at 09:48:00PM +0100, Vadim Bulst wrote:
> Dear all,
>
> I'm trying to automate the PVE server installation with Foreman and Puppet
> based on Debian stable. Well - I don't have any luck installing the
> packages. I use
>
> "apt-get --force-yes -y install proxmox-ve ssh
On Wed, Jan 04, 2017 at 06:16:54PM +0100, Marco Gaiarin wrote:
>
> In a cluster of 5 PVE servers I receive, from only one of them, logs
> like:
> Jan 4 17:02:52 thor pveproxy[58010]: Clearing outdated entries from
> certificate cache
>
> Before Christmas, I got the same line from another
On Tue, Dec 20, 2016 at 01:32:39PM +0100, Thomas Lamprecht wrote:
> On 12/20/2016 01:21 PM, IMMO WETZEL wrote:
> > How can I set multiline config descriptions ?
> >
> > root@node01:~# pvesh set /nodes/node04/qemu/315/config -description
> > "line1\nline2"
> >
> > => Description shows a n not a
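The backslash-n is being passed through literally by the shell: inside double quotes, `\n` is just two characters. A runnable POSIX sketch of the difference (the commented `pvesh` call is the hypothetical application, not executed here):

```shell
# Inside double quotes, \n stays a literal backslash followed by 'n':
plain="line1\nline2"

# printf interprets \n, so command substitution yields a real newline:
real="$(printf 'line1\nline2')"

# hypothetical application on a PVE node:
#   pvesh set /nodes/node04/qemu/315/config -description "$real"
printf '%s\n' "$real"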
On Thu, Mar 23, 2017 at 11:37:52AM +0100, Eneko Lacunza wrote:
> Yes, I searched the mailing list archives and didn't find any announcement.
> Roadmap also doesn't contain any reference to this. (only found some commits
> in pve-devel).
>
> Thanks for the pointer, will begin upgrading our office
On Fri, Mar 24, 2017 at 10:26:37PM +0900, ribbon wrote:
> in Proxmox VE 5.0 beta1, there are many untranslated messages.
> I would like to translate untranslated messages.
> Is there a way to translate it somewhere?
>
AFAIK the information about 4.x is valid for 5.x as well (the "master"
branch
On Thu, Mar 02, 2017 at 02:41:16PM +, Cédric Bernard wrote:
> Hello
>
> I try to clone a qemu template with the API over HTTP but I get a "null"
> result.
>
> I use this command :
> curl --silent --insecure --cookie "$(<~/cookie)" --header "$(<~/csrftoken)"
> -X POST --data
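For reference, the clone call goes to `POST /api2/json/nodes/{node}/qemu/{vmid}/clone` with a `newid` parameter, plus the ticket cookie and CSRF token headers, mirroring the curl invocation above. A sketch that only builds the request with Python's stdlib (host, node, IDs and token values are placeholders, not taken from the original mail):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder values -- substitute your own host, node and VM IDs.
host, node, vmid, newid = "pve.example.com", "node1", 100, 200

body = urlencode({"newid": newid, "full": 1}).encode()
req = Request(
    f"https://{host}:8006/api2/json/nodes/{node}/qemu/{vmid}/clone",
    data=body,
    method="POST",
)
# Write calls need both the ticket cookie and the CSRF prevention token:
req.add_header("Cookie", "PVEAuthCookie=<ticket>")
req.add_header("CSRFPreventionToken", "<token>")

print(req.get_method(), req.full_url)
```

A "null" data field usually still comes with an HTTP status code and status message, so checking those is the first debugging step.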
On Sun, Aug 13, 2017 at 12:25:14PM +0200, Nils Privat wrote:
> I noticed that proxmox 5 is using kernel 4.10. Kernel 4.10 has been EOL since
> May and proxmox stable was released in July, so is it true that there will be
> no security fixes/bugfixes anymore? Are Proxmox devs planning to upgrade to
> 4.12
On Fri, May 19, 2017 at 12:49:21PM +0200, Uwe Sauter wrote:
> On 19.05.2017 at 11:53, Fabian Grünbichler wrote:
> > On Fri, May 19, 2017 at 11:26:35AM +0200, Uwe Sauter wrote:
> >> Hi Fabian,
> >>
> >> thanks for looking into this.
> >>
> >&g
On Fri, May 19, 2017 at 09:40:54AM +0100, Steve wrote:
> I tried proxdebug. No extra messages are generated after the network has
> initialised.
> There is no log in the /tmp folder? Has the log moved?
>
> How do I run xinit? Where is it?
> Do you mean init? If I run this it says must be run as PID
On Fri, May 19, 2017 at 10:43:26AM +0200, Uwe Sauter wrote:
> Hi all,
>
> after having succeeded to have an almost TCP-based NFS share mounted (see
> yesterday's thread) I'm now struggling with the backup
> process itself.
>
> Definition of NFS share in /etc/pve/storage.cfg is:
>
> nfs: aurel
> Eugen Mayer wrote on 19 May 2017 at
> 12:04:
>
>
> Hello,
>
> having an issue with proxmox 4.x running any VM with more than 1 core.
>
> Hardware:
> - Intel(R) Xeon(R) CPU E3-1275 v5 @ 3.60GHz
> - 64GB ram
> - HW raid SSD
>
> PVE 4.x just got
On Fri, May 19, 2017 at 10:27:20AM +0100, Steve wrote:
> If I type xinit it says /bin/sh: xinit: not found
>
> I am the author of easy2boot which is a USB multiboot tool to allow people
> to boot from 100's of different ISOs (or images) all from one USB stick.
> I have been asked by a user to get
On Mon, May 22, 2017 at 02:52:13PM +0200, Uwe Sauter wrote:
> >>> perl -e 'use strict; use warnings; use PVE::ProcFSTools; use
> >>> Data::Dumper; print Dumper(PVE::ProcFSTools::parse_proc_mounts());'
> >>>
> >>
> >> $VAR1 = [
> >>
> >> [
> >> ':/backup/proxmox-infra',
On Mon, May 22, 2017 at 08:34:49AM +1000, Lindsay Mathieson wrote:
> I've had this happen several times on the latest pve-no-subscription repo.
> Take a snapshot of a windows vm, which completes successfully, and later I
> notice that no snapshots or the "Now" entry are listed in the GUI.
>
>
> I
On Fri, May 19, 2017 at 01:59:27PM +0200, Uwe Sauter wrote:
>
> >>
> >> I suspect that something just doesn't send emails in that specific error
> >> case…
> >
> > yes, seems like activate_storage is called very early on to retrieve
> > maxfiles and dumpdir via PVE::API2::VZDump (POST) ->
On Thu, May 18, 2017 at 07:08:36PM +0100, Steve wrote:
> Thanks for the quick reply.
> I am booting from the ISO file itself which is on a multiboot USB drive.
> In previous versions, you could boot to the shell, mount the ISO as /mnt
> and then start the install by running unconfigured.sh.
>
>
On Wed, Jun 07, 2017 at 11:17:09AM +0200, Marco Gaiarin wrote:
> Mandi! Fabian Grünbichler
> In chel di` si favelave...
>
> > OSDs are supposed to be enabled by UDEV rules automatically. This does
> > not work on all systems, so PVE installs a ceph.service which trigger
On Mon, Jun 05, 2017 at 04:12:15PM -0300, Gilberto Nunes wrote:
> Hi friends...
>
> I have Proxmox 5 (the latest beta) running inside my LAN.
> The hostname is 'qapla', but I access it from outside as
>
> https://homekonnectati.ddns.net:8006
>
> It works nicely, but when I open the Spice Terminal, I can't
On Mon, Jun 05, 2017 at 10:04:47AM +0200, Marco Gaiarin wrote:
>
> Again my Ceph cluster suffered a main power outage. ;-(
>
> The cluster went down well, but after that the power came back a bit
> intermittently, so the servers booted and shut down a few times...
>
>
> When the power came back, all server
On Thu, May 04, 2017 at 09:15:03AM +0200, Alessandro Briosi wrote:
> Hi all,
> I have had some trouble with one server in a cluster.
> As the VMs all have their disks on shared storage, I thought it would have been
> possible to migrate them from the current out-of-order server to the
> others in the
On Fri, May 05, 2017 at 08:52:23PM +, Daniel wrote:
> Hi all,
>
> I have a VM which is on Ceph storage: rootfs: ceph:vm-171-disk-1,size=20G
>
> Is there any way to mount this image on the local PMX host? I need this to
> copy some data ;)
>
for future reference: "pct mount CTID"
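Spelled out (171 taken from the rootfs line above; the mount path is from memory, so verify it on your host):

```sh
pct mount 171      # mounts the CT rootfs, typically under /var/lib/lxc/171/rootfs
# ... copy the data ...
pct unmount 171
```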
On Fri, May 19, 2017 at 08:28:17AM +0100, Steve wrote:
> Thanks
> I tried that.
> I made a new .sh from the portion of the initrd that mounts all the
> squashfs files and runs unconfigured.sh.
> It seems to almost work, but it gets to Detecting network settings... done
> and then says
>
On Tue, Jun 06, 2017 at 09:16:12AM -0300, Gilberto Nunes wrote:
> Well, after installing letsencrypt following these steps, I can't connect
> anymore:
>
> https://blog.hostonnet.com/install-letsencrypt-ssl-proxmox
>
that is a third-party howto with wrong information. please follow our
howto for
On Fri, Sep 22, 2017 at 10:42:35PM +0200, Eric Dillmann wrote:
> Hi,
>
> I'm using Proxmox 5.0 with the latest kernel: 4.10.17-3-pve
>
> I get the following warning in kernel logs :
>
> Has this something to do with :
> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1715609 ?
>
yes, a
On Sun, Sep 24, 2017 at 05:11:11PM +, Daniel wrote:
> Does it cause any network trouble, or is it working fine and I just see that
> error in the kernel log?
it is just a warning, will be fixed with the next round of kernel updates.
pve-user mailing list
On Fri, Nov 10, 2017 at 06:02:51PM +, Mark Adams wrote:
> Hi All,
>
> On proxmox 5.1, with ceph as storage, I'm trying to disable the
> snapshotting of a specific disk on a VM.
>
> This is not an option in the gui, but I've added the option to the disk in
> the conf file
>
> scsi1:
On Thu, Nov 30, 2017 at 12:42:29PM +0100, Florent B wrote:
> So https on the repository is not possible without a subscription? Ok,
> wonderful...
>
> https costs nothing nowadays.
>
you don't need TLS to ensure the integrity of updates, the packages are
hashed and (transitively) signed, all of which
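The truncated point can be illustrated with a toy sketch of that hash-and-sign chain (illustrative only, not APT's actual code): the signed Release file pins the checksum of the package index, and the index in turn pins each .deb, so a plain-HTTP mirror cannot alter packages without breaking a verified hash.

```python
import hashlib

# A toy stand-in for a package index fetched over plain HTTP:
packages_index = b"mypkg_1.0_amd64.deb SHA256:abc123..."

# The Release file (whose own integrity is protected by a GPG signature)
# records the checksum of the index:
signed_release = {"Packages": hashlib.sha256(packages_index).hexdigest()}

# After verifying the Release signature, the client checks the chain:
ok = hashlib.sha256(packages_index).hexdigest() == signed_release["Packages"]
print("index intact:", ok)
```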
> Francesco Ongaro hat am 30. November 2017 um
> 16:34 geschrieben:
>
>
> On 30/11/2017 15:11, lemonni...@ulrar.net wrote:
> > This is dumb. I agree that it wouldn't cost them anything to setup
> > HTTPS, but I also agree that it is useless. The packages are
On Wed, Dec 06, 2017 at 01:25:35PM -0500, David Lawley wrote:
>
>
> On 12/6/2017 12:32 PM, Andreas Herrmann wrote:
>
> Ok, got it. I see the area you are talking about.
>
> Guess I must be missing it, as fs.aio-max-nr is incorrect too.
>
> sysctl -a is showing fs.aio-max-nr = 65536
>
> pve.conf
On Fri, Oct 20, 2017 at 12:55:32PM +0200, Uwe Sauter wrote:
> Hi,
>
> I'm trying to use the virtualization support that Mellanox ConnectX-3 cards
> provide. In [1] you can find a document by Mellanox
> that describes the necessary steps for KVM.
>
> Currently I'm trying to install Mellanox OFED
On Mon, Oct 23, 2017 at 09:56:29AM +0200, Andreas Herrmann wrote:
> Hi there,
>
> after installing all the updates (see update.list) on the first of five nodes,
> the server is not able to reboot correctly - see start.log
>
> The zpool cannot be imported!
>
> After manually issuing the commands
On Mon, Jan 29, 2018 at 05:02:51PM +0100, Mark Schouten wrote:
> Hi,
>
> today I upgraded a 4.x cluster with Hammer (Ceph) to 5.1. I had almost no
> issues thanks to documentation and quality software.
>
good to hear! :)
>
> Then, upgrading to PVE 5.x, I ran into an issue which it seems is
On Fri, Feb 16, 2018 at 08:48:42PM +1030, Mike O'Connor wrote:
> Hi All
>
> I'm trying to build CEPH from the proxmox source before applying some
> patches and I'm getting the following build error.
>
...
> bin.v2/libs/python/build/gcc-6.3.0/release/link-static/threading-multi/tuple.o
> c++:
On Wed, Jul 11, 2018 at 10:45:11PM +0200, Krystian Basara wrote:
> Hello.
> I've got a problem with restoring a VM
> (all other API functions work correctly for me).
> Example of what I'm using:
> POST methods:
>
> /nodes/NODE/qemu?vmid=102&storage=local&archive=local:backup/vzdump-kvm-102-2018_07_10-09_45_29.tar
>
a pve-kernel-4.15 meta package depending on a preview build based on
Ubuntu Bionic's 4.15 kernel is available on pvetest. it is provided as
opt-in package in order to catch potential regressions and hardware
incompatibilities early on, and allow testing on a wide range of systems
before the
On Mon, Mar 12, 2018 at 07:43:09PM +0100, Alexandre DERUMIER wrote:
> Hi,
>
> Is retpoline support enabled like the ubuntu build? (built with a recent gcc?)
yes, it has KPTI for v3/Meltdown, full RETPOLINE for v2, and masking of
pointers passed from user space via array_index_mask_nospec for v1.
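On kernels that expose mitigation status (assuming sysfs is mounted; older kernels may lack these files), the active state can be checked at runtime:

```shell
# Print the mitigation status per vulnerability, if the kernel exposes it:
grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null \
    || echo "mitigation status not exposed by this kernel"
```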
On Tue, Apr 10, 2018 at 11:43:11AM +0200, Uwe Sauter wrote:
> Hi all,
>
> I discourage you from updating ZFS to version 0.7.7 as it contains a
> regression. Version 0.7.8 was released today that reverts the
> commit that introduced the regression.
>
> For Infos check:
On Tue, Apr 10, 2018 at 11:09:34AM +, Lindsay Mathieson wrote:
> I notice that zfs 0.7.7 has arrived in the no-sub repo – it has a reasonably
> serious data-loss bug:
>
> https://github.com/zfsonlinux/zfs/issues/7401
>
> 0.7.8 has been released with a fix:
>
>
On Thu, Mar 22, 2018 at 05:32:23PM +0100, Colin 't Hart wrote:
> Hi,
>
> The 4.15 kernel seems to work fine for me. I'm also installing
> proposed updates from the Debian repository. The 4.13 kernel was
> broken with the newer version of apparmor, but the 4.15 works with it
> so I'm happy.
The APT package manager used by Proxmox VE and Proxmox Mail Gateway was
recently discovered to be affected by CVE-2019-3462, allowing a
Man-In-The-Middle or malicious mirror server to execute arbitrary code
with root privileges when affected systems attempt to install upgrades.
To securely
On Mon, Feb 18, 2019 at 07:21:04PM +0100, lists wrote:
> Hi,
>
> I would like (if possible) to store VMs and backups on local 6TB mirrored
> zfs volumes on our three-node proxmox cluster. (secondary to the ceph
> storage that we also use)
>
> So, what I did: In the proxmox GUI created on all
On Fri, Jul 05, 2019 at 09:10:09AM +0200, Alain Péan wrote:
> It seems that the upgrade from 5.4 to 6.0 will be a hard way. New version of
> Debian, and major version of corosync, 3.0...
new (major) versions of PVE are always based on new Debian releases ;)
the Corosync upgrade should be pretty
On Sat, Jul 06, 2019 at 07:57:57AM +0200, arjenvanweel...@gmail.com wrote:
> On Thu, 2019-07-04 at 21:06 +0200, Martin Maurer wrote:
> > Hi all!
> >
> > We're happy to announce the first beta release for the Proxmox VE 6.x
> > family! It's based on the great Debian Buster (Debian 10) and a 5.0
>
On Mon, Jul 08, 2019 at 02:16:34AM +0200, Chris Hofstaedtler | Deduktiva wrote:
> Hello,
>
> while doing some test upgrades I ran into the buster RNG problem [1],
> where the newer kernel and systemd use a lot more randomness during
> boot, causing startup delays.
>
> Very clearly noticeable in
On Mon, Jul 08, 2019 at 09:10:48AM +0200, Thomas Lamprecht wrote:
> Am 7/8/19 um 8:05 AM schrieb Fabian Grünbichler:
> > On Mon, Jul 08, 2019 at 02:16:34AM +0200, Chris Hofstaedtler | Deduktiva
> > wrote:
> >> Hello,
> >>
> >> while doing some test upg
On September 3, 2019 11:46 am, Thomas Lamprecht wrote:
> Hi Uwe,
>
> On 03.09.19 09:18, Uwe Sauter wrote:
>> Hi all,
>>
>> on a freshly installed PVE 6 my /etc/aliases looks like:
>>
>> # cat /etc/aliases
>> postmaster: root
>> nobody: root
>> hostmaster: root
>> webmaster: root
>> www:root
>>
On Mon, Jul 29, 2019 at 03:36:38PM +0200, Marco Gaiarin wrote:
>
> > In those servers I also have some other FSes, but the ext4 ones change
> > little, being mounted RO or with noatime; there are also some filesystems on
> > XFS, which do not seem to suffer.
>
> I've disabled 'discard' for /dev/sda in both server, and
> Ricardo Correa wrote on 16 July 2019 at 21:28:
>
>
> Hello all,
>
> While following the instructions for upgrade I encountered the following
> issue:
>
> ~# systemctl stop pve-ha-lrm
> ~# systemctl stop pve-ha-crm
> ~# echo "deb http://download.proxmox.com/debian/corosync-3/
On Wed, Jul 17, 2019 at 08:10:11AM +0200, Thomas Lamprecht wrote:
> On 7/16/19 11:26 PM, Chris Hofstaedtler | Deduktiva wrote:
> > * Fabian Grünbichler [190716 21:55]:
> > [..]
> >>>
> >>> dpkg: error processing package corosync (--configure):
> >&
On September 27, 2019 10:30 am, Mark Adams wrote:
> Hi All,
>
> I'm trying out one of these new processors, and it looks like I need at
> least 5.2 kernel to get some support, preferably 5.3.
>
> At present the machine will boot in to proxmox, but IOMMU does not work,
> and I can see ECC memory
On November 6, 2019 10:18 am, Marco Gaiarin wrote:
> Mandi! Fabian Grünbichler
> In chel di` si favelave...
>
>> > What is the correct syntax? Thanks.
>
>> --rootfs STORAGE:SIZE_IN_GB
>> e.g.,
>> --rootfs local:4
>> see 'Storage Backed Mount Po
On November 5, 2019 6:30 pm, Marco Gaiarin wrote:
>
> I need to 'resize' (shrink) a container, so I've done a backup, and
> following some google-fu and the 'pct' manpage I've done:
>
> root@tma-18:~# pct restore 130
> /mnt/pve/backup/dump/vzdump-lxc-130-2019_11_05-17_39_08.tar.lzo --rootfs
>
On October 22, 2019 2:43 pm, Gilberto Nunes wrote:
> Folks,
> When you create a VM, it generates an ID, for example 100, 101, 102 ... etc
no. when you create a VM in the GUI, it suggests the first free slot in
the guest ID range. you can choose whatever you want ;)
> ...
> By removing this VM
On March 3, 2020 9:14 pm, Mechtilde wrote:
> I can also use the dialog to create a VM and connect it to the *.iso.
>
> After that I get the message "Failed to connect to server" and the
> splash screen for noVNC.
did the VM actually start? you should have a visual indication of that
in the left
On February 28, 2020 8:20 pm, Alarig Le Lay wrote:
> Hi,
>
> I would like to test (and integrate) a patch to local kernels, but if I
> try to build it, it fails:
>
> alarig@pikachu | master *%= pve-kernel % make
> test -f "submodules/ubuntu-eoan/README" || git submodule update --init
>
On February 5, 2020 2:35 am, Bryan Fields wrote:
> On 2/1/20 11:47 PM, Bryan Fields wrote:
>> greetings,
>> I have a policy in iptables for forwarded traffic below:
>>
>> iptables -t filter -A INPUT -j ACCEPT --in-interface $INET_IF --protocol \
>> icmp --icmp-type echo-request --match limit
On February 16, 2020 5:26 pm, Frank Thommen wrote:
> Thank you for the link.
>
> Even though Fabian Gruenbichler writes in the bugreport
> (https://bugzilla.proxmox.com/show_bug.cgi?id=2490#c2) that Ceph RBD
> offers all features of CephFS, this doesn't seem to be true(?), as
> CephFS supports
On March 26, 2020 6:34 pm, Vadim Bulst wrote:
> Hi PVE users,
>
> I've been using PVE for some years now. I have a running cluster of 6 machines
> on version 6.x. Today I was going to add two additional machines, but
> the installation was aborting on both servers. There are two 10G NICs
> for the
On March 31, 2020 12:21 pm, Mikhail wrote:
> Hello,
>
> On one of our clusters we're seeing issues with VM backup task - the
> backup task fails with the following:
>
> ERROR: Node 'drive-scsi0' is busy: block device is in use by block job:
> mirror
> INFO: aborting backup job
> ERROR: Backup of
On March 31, 2020 5:07 pm, Mikhail wrote:
> On 3/31/20 2:53 PM, Fabian Grünbichler wrote:
>> you should be able to manually clean the messup using the QMP/monitor
>> interface:
>>
>> `man qemu-qmp-ref` gives a detailed tour, you probably want
>> `q
On April 1, 2020 9:49 am, Mikhail wrote:
> On 4/1/20 10:45 AM, Mikhail wrote:
>> At the time of writing this message my colleague is doing some other
>> Disk move on the cluster and he said he hit same problem with another
>> VM's disk - 40GB in size - task stuck at the very beginning:
>>
On May 13, 2020 3:29 pm, Amin Vakil wrote:
> I forgot to add the errors:
>
> Using CF_Account_ID and CF_Token:
>
> [Wed May 13 17:56:58 +0430 2020] Error
> [Wed May 13 17:56:58 +0430 2020] Error add txt for
> domain:_acme-challenge.subdomain.domain.com
> TASK ERROR: command 'setpriv --reuid