On 5/12/2018 2:50 pm, Lindsay Mathieson wrote:
Turned out to be my zabbix.list in the end, it was referencing jessie.
Once I updated it to stretch, all was good.
And turns out I was mistaken, still had the problem.
It was my zfs setup - I used to have zfsonlinux installed, must have
On 5/12/2018 12:39 am, Stoiko Ivanov wrote:
The /var/cache/apt-build/ lines look like you might have some leftover
installation of apt-build? [0]
Unless you know that you need it, remove the corresponding .list file
(probably in /etc/apt/sources.list.d/apt-build.list).
The lizardfs line still
vmbr10 is a bridge (or a switch by another name)
if you want the switch to work reliably with multicast you probably need
to enable multicast querier.
echo 1 > /sys/devices/virtual/net/vmbr0/bridge/multicast_querier
or you can disable snooping, so that it treats multicast as broadcast.
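To make the multicast_querier setting survive reboots and ifdown/ifup cycles, a post-up line in /etc/network/interfaces can re-apply it each time the bridge comes up. A minimal sketch; the address and bridge_ports here are illustrative, adjust them to your bridge:

```
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        # re-enable the querier every time the bridge comes up
        post-up echo 1 > /sys/devices/virtual/net/vmbr0/bridge/multicast_querier
```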
Hi all,
Seems I found the solution.
eth3 on proxmox1 is a Broadcom 1Gbit card connected to an HPE switch; it is
VLAN 10 untagged on the switch end.
I changed the vmbr10 bridge to use eth4.10 on the X540 card, and after
ifdown/ifup and corosync and pve-cluster restart, now everything seems
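For reference, a sketch of what the changed bridge stanza in /etc/network/interfaces might look like; the address is illustrative, and eth4.10 carries VLAN 10 tagged on the X540:

```
auto vmbr10
iface vmbr10 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports eth4.10
        bridge_stp off
        bridge_fd 0
```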
hi Marcus,
On 4/12/18 at 16:09, Marcus Haarmann wrote:
Hi,
you did not provide details about your configuration.
How is the network card set up? Bonding?
Send your /etc/network/interfaces details.
If bonding is active, check if the mode is correct in /proc/net/bonding.
We encountered
Hi,
you did not provide details about your configuration.
How is the network card set up? Bonding?
Send your /etc/network/interfaces details.
If bonding is active, check if the mode is correct in /proc/net/bonding.
We encountered differences between /etc/network/interfaces setup and
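To check the active mode, a quick sketch against sample /proc/net/bonding output; on a live host you would read /proc/net/bonding/bond0 directly, and bond0 and the mode shown here are illustrative:

```shell
# Sample /proc/net/bonding/bond0 content (your file will differ):
sample='Ethernet Channel Bonding Driver: v3.7.1

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up'

# Extract the active mode; on a real host replace the pipeline with:
#   awk -F': ' '/Bonding Mode/ {print $2}' /proc/net/bonding/bond0
mode=$(printf '%s\n' "$sample" | awk -F': ' '/Bonding Mode/ {print $2}')
echo "$mode"
```

Compare the printed mode with the bond-mode requested in /etc/network/interfaces; a mismatch is exactly the kind of difference mentioned above.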
Hi,
On Wed, 5 Dec 2018 00:25:36 +1000
Lindsay Mathieson wrote:
> On 5/12/2018 12:17 am, Stoiko Ivanov wrote:
> > Hi,
> >
> > to me the most strange thing is the removal of grub (which in turn
> > probably leads to the removal of proxmox-ve)
>
>
> Get:1 file:/var/cache/apt-build/repository
On 5/12/2018 12:23 am, Alwin Antreich wrote:
Please check your repository entries in '/etc/apt/sources.list' &
'/etc/apt/sources.list.d/'; are they pointing to the right repositories?
sources.list should be ok:
deb http://ftp.au.debian.org/debian stretch main contrib
# PVE pve-no-subscription
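For a stretch install, the full file would typically look like the sketch below; the Proxmox line is the standard no-subscription entry from the PVE documentation, and mirror hostnames may differ on your system:

```
deb http://ftp.au.debian.org/debian stretch main contrib

# PVE pve-no-subscription repository
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

# security updates
deb http://security.debian.org stretch/updates main contrib
```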
On 5/12/2018 12:17 am, Stoiko Ivanov wrote:
Hi,
to me the most strange thing is the removal of grub (which in turn
probably leads to the removal of proxmox-ve)
Get:1 file:/var/cache/apt-build/repository apt-build InRelease
Ign:1 file:/var/cache/apt-build/repository apt-build InRelease
Get:2
Hi Lindsay,
On Tue, Dec 04, 2018 at 11:59:41PM +1000, Lindsay Mathieson wrote:
> One server has upgraded clean so far, but the 2nd one wants to remove pve :(
>
> apt-get dist-upgrade
> The following packages were automatically installed and are no longer
> required:
> apparmor ceph-fuse criu
Hi,
to me the most strange thing is the removal of grub (which in turn
probably leads to the removal of proxmox-ve)
could you please post the complete output of `apt update`?
do you have any other repositories configured
(/etc/apt/sources.list, /etc/apt/sources.list.d/*)?
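A quick way to answer both questions at once is to grep every active 'deb' line. Sketched here against a throwaway directory so it is self-contained; on the real host, point grep at /etc/apt/sources.list and /etc/apt/sources.list.d/ instead:

```shell
# Build a throwaway stand-in for /etc/apt (illustrative contents;
# the apt-build line mirrors the leftover repo seen in the report):
tmp=$(mktemp -d)
mkdir -p "$tmp/sources.list.d"
echo 'deb http://ftp.au.debian.org/debian stretch main contrib' > "$tmp/sources.list"
echo 'deb file:/var/cache/apt-build/repository apt-build main' > "$tmp/sources.list.d/apt-build.list"

# Every active 'deb' line, prefixed with the file it came from:
repos=$(grep -r '^deb' "$tmp/sources.list" "$tmp/sources.list.d/")
printf '%s\n' "$repos"
rm -rf "$tmp"
```

Any line pointing at a stale release (jessie) or an unwanted local repo (apt-build) shows up immediately, together with the file to edit or remove.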
Thanks,
stoiko
On
One server has upgraded clean so far, but the 2nd one wants to remove pve :(
apt-get dist-upgrade
The following packages were automatically installed and are no longer
required:
apparmor ceph-fuse criu ebtables fonts-font-awesome hdparm ipset
libapparmor-perl libboost-iostreams1.55.0
what a way to close the year
Many congratulations to the entire Proxmox team!
On 12/4/18 7:24 AM, Martin Maurer wrote:
> Hi all!
>
> We are very excited to announce the general availability of Proxmox VE
> 5.3!
>
> Proxmox VE now integrates CephFS, a distributed, POSIX-compliant file
>
root@hayne:~# systemctl start pve-container@108
Job for pve-container@108.service failed because the control process
exited with error code.
See "systemctl status pve-container@108.service" and "journalctl
-xe" for details.
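When the systemctl/journalctl output is not enough, the PVE documentation suggests starting the container in the foreground with debug logging so the real failure reason is visible. A guarded sketch; 108 is the CT ID from the report and the log path is illustrative:

```shell
# Start CT 108 in the foreground with debug logging (standard PVE
# debugging step). Guarded so this is a harmless no-op on hosts
# without the lxc tools installed.
if command -v lxc-start >/dev/null 2>&1; then
    lxc-start -n 108 -F -l DEBUG -o /tmp/lxc-108.log || true
    result="see /tmp/lxc-108.log"
else
    result="lxc-start not found; run this on the PVE node"
fi
echo "$result"
```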
root@hayne:~# systemctl status pve-container@108.service
●
On 04.12.2018 at 10:41, Thomas Lamprecht wrote:
On 12/4/18 10:27 AM, lord_Niedzwiedz wrote:
root@hayne:~# systemctl start pve-container@108
Job for pve-container@108.service failed because the control process exited
with error code.
See "systemctl status pve-container@108.service" and
Hi there!
I would like to say a big THANK YOU!
Proxmox has evolved a lot since previous versions... Indeed, I knew this
wonderful VE in version 1.x and, comparing it to today, I see a lot of
improvements
Thanks a lot.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Hi all!
We are very excited to announce the general availability of Proxmox VE 5.3!
Proxmox VE now integrates CephFS, a distributed, POSIX-compliant file
system which serves as an interface to the Ceph storage (like the RBD).
You can store backup files, ISO images, and container templates.
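Once CephFS is set up, adding it as storage is a single stanza in /etc/pve/storage.cfg (or a few clicks in the GUI). A sketch; the storage ID and path here are illustrative:

```
cephfs: cephfs-store
        path /mnt/pve/cephfs-store
        content backup,iso,vztmpl
```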
On 12/4/18 10:27 AM, lord_Niedzwiedz wrote:
> root@hayne:~# systemctl start pve-container@108
> Job for pve-container@108.service failed because the control process exited
> with error code.
> See "systemctl status pve-container@108.service" and "journalctl -xe" for
> details.
>
> root@hayne:~#