Hi,
Not sure about this particular situation, but you could try:
apt-get --with-new-pkgs upgrade
or otherwise you could try manually installing the kept-back packages with
apt-get install libpve-access-control libpve-common-perl ...
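In case it helps, this is roughly how that usually goes (the package names below are just examples, not necessarily your exact list):

# a plain "upgrade" shows which packages are being kept back
apt-get upgrade
#   -> "The following packages have been kept back: ..."
# allow apt to install the new dependencies that were holding them back
apt-get --with-new-pkgs upgrade
# or let apt resolve everything, which may also remove packages
apt-get dist-upgrade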
Perhaps that helps?
MJ
On 5/29/20 11:14 AM, Olivier wrote:
Hi Ronny and Eneko,
Just to say: thank you for your replies!
MJ
On 3/2/20 9:26 AM, Eneko Lacunza wrote:
Hi MJ,
On 2/29/20 at 12:21, mj wrote:
Hi,
We have a failing filestore OSD HDD in our pve 5.4 cluster on ceph
12.2.13.
I have ordered a replacement SSD, but we have the following
particulars to consider:
MJ
Hi Stefan,
Thanks for your reply!
MJ
On 2/26/20 4:14 PM, Stefan Reiter wrote:
Hi!
On 2/26/20 3:35 PM, mj wrote:
Hi,
Just to make sure I understand something. We have an identical
three-node hyperconverged pve cluster, on Intel Xeon CPUs.
Now we would like to expand, and we
us, and what potential
other drawbacks this could have.
Has anyone ever done much testing on this subject? Anyone with
interesting insights / knowledge / experiences to share on this subject?
MJ
ipmi_watchdog.
But this is more of a feeling - I do not have numbers.
That is exactly the kind of feedback we were after. We'll keep it the
way it is now, with softdog and bmc-watchdog for resetting a crashed
machine.
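For anyone searching the archives later: as far as I know the module is selected in /etc/default/pve-ha-manager, and softdog is what pve falls back to when nothing is set there:

# /etc/default/pve-ha-manager
# select watchdog module (default is softdog)
#WATCHDOG_MODULE=ipmi_watchdog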
Thanks!
MJ
your NTP issues
are only a symptom.
Good luck!
MJ
On 9/11/19 5:14 PM, Geoffray Levasseur wrote:
Hi,
I have some difficulties on a Ryzen 5 2400G node. I recently switched to the
new version 6 with no problems, except performance on that machine. The new
kernel was supposed to have better support
Hi,
Just our 2 cts: You should check out freenas/truenas as well. It will do
what you require (and a bit more) :-)
MJ
On 8/3/19 3:02 PM, JR Richardson wrote:
Gilberto,
I have 14 hypervisor nodes hitting 6 storage servers, Dell 610s, and a
couple of backup servers. All VMs, 200 Linux
I find it surprising that a setup such as described above *would* be
supported, and a (in our eyes simpler and more elegant) setup with one
or two ceph-osd-only nodes is not supported.
MJ
We are wondering, for example, if the extra mon nodes & OSDs would show up
in the pve gui.
Of course we could setup a test cluster and simply try it out, but is
anyone doing this? Any reasons why we should / should not consider this?
MJ
Hi Fabian,
Thank you for your thorough reply. We appreciate it. :-)
Best,
MJ
On 2/20/19 10:27 AM, Fabian Grünbichler wrote:
usually you'd use the same pool name on all hosts, as PVE differentiates
between local and shared storages anyway. for containers, PVE assumes
that ZFS datasets use
not respond to heartbeat messages, why swap usage is
100%, and why there are multiple high-cpu kworker threads running on
this host only.
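(For the record, this is what I have been looking at so far - nothing fancy, just the usual tools:)

# swap usage and overall memory pressure
free -m
vmstat 1 5
# which kworker threads are eating cpu
top -b -n 1 | grep -i kworker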
MJ
+scrubbing+deep
io:
client: 18.3MiB/s rd, 39.8MiB/s wr, 87op/s rd, 539op/s wr
What can I do to get rid of these messages..? They sound serious. More
info required, just let me know...!
Thanks very much in advance,
MJ
ok,
I added a second disk to the win2012 machine, tried enlarging that,
which worked as it should. So I ended up moving all contents from my old
data-disk to this new (now larger) disk, and now I'm happy.
Have a nice weekend all,
MJ
On 1/26/19 1:05 PM, mj wrote:
Hi Alfio,
Yes, I realise
question: am I missing anything? Could there be an obvious
reason for this behaviour? Am I missing a step? Any tips, ideas..?
MJ
On 1/25/19 1:47 PM, Alfio munoz wrote:
Hi, it's a Windows task, not a Proxmox problem: go to Computer Management,
right-click the disk you need to increase, and choose
is
not recognising the new disk size. I have refreshed and rescanned: no
change. No empty space at the end of the device is showing up, and Extend
Volume is greyed out.
Am I missing a step somewhere..?
MJ
Hi,
On 09/08/2018 12:11 AM, prox...@elchaka.de wrote:
That is the way you should go. But do it in a gentle way, so there shouldn't
be much impact for your clients - gently drain the crush weight for the OSD
in question to the value "0".
This way you only have one rebalance instead of two!
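For the archives, a rough sketch of such a gentle drain (osd.12 and the step size are placeholders, and you'd wait for HEALTH_OK between steps):

# lower the crush weight in steps, so data moves away gradually
ceph osd crush reweight osd.12 0.5
ceph -s
ceph osd crush reweight osd.12 0
# once it is empty, take it out and remove it from the cluster
ceph osd out 12
systemctl stop ceph-osd@12
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12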
Thanks,
onf, no restart or anything, we
could add the OSD/journal as expected.
Hope this helps someone else in the future :-)
MJ
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
..?
Strange that performance turns out to be ~3Gbps, instead of the expected
4...
Anyone with more information on this subject?
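In case anyone wants to compare numbers, I measured roughly like this (node2 is a placeholder, with iperf running in server mode on the other end):

# a single tcp stream gets hashed onto one physical link by lacp
iperf -c node2 -t 30
# several parallel streams should spread across the links
iperf -c node2 -t 30 -P 8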
Have a nice weekend all!
MJ
should work nicely.
MJ
On 08/24/2018 12:45 PM, Uwe Sauter wrote:
If using standard 802.3ad (LACP) you will always get only the performance of a
single link between one host and another.
Using "bond-xmit-hash-policy layer3+4" might get you better performance but
is not standard L
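For reference, a sketch of how that policy would look in /etc/network/interfaces with ifenslave (the interface names are made up):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 eno3 eno4
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4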
fd 0
#dedicated proxmox vlan
auto bond0.99
iface bond0.99 inet static
    address a.b.c.10
    netmask 255.255.255.0
    gateway a.b.c.1
#dedicated ceph vlan
auto bond0.100
iface bond0.100 inet static
    address ...
    netmask
- Original Message -
From: "mj"
To: "proxmoxve"
Sent: J
up ip addr add 10.10.89.10/11/12 dev vmbr0 || true (ceph mon IPs)
down ip addr del 10.100.222.1/24 dev vmbr0 || true
Any feedback on the above? As this is production, I'd like to be
reasonably sure that this would work, before trying.
Your comments will be ve
a new question. :-)
It says: "if using ceph, to the Luminous release *before you upgrade*"
So, keeping ceph at jewel and upgrading just proxmox is not an option?
And just out of curiosity... why is that..?
MJ
4.4 to 5.0
Is that intentional..? Or should it work just like a regular update?
MJ
Hi,
I set them to optimal, took the hit of the rebalance, and things are
fine now. (we're on jewel now)
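(For the archives, the command in question - only run it when you can afford the resulting data movement:)

ceph osd crush tunables optimal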
MJ
On 11/03/2017 11:32 AM, Uwe Sauter wrote:
Hi,
is it still correct to set tunables to "hammer" even with Proxmox 5? This is
mentioned in the wiki [1].
Regards,
On the guest< have verified that the spice-vdagent is actually running.
Even more details: on proxmox, the guest is started with:
root 25870 1.9 0.3 5773784 504344 ? Sl 08:52 0:29 /usr/bin/kvm
-id 502 -chardev socket,id=qmp,path=/var/run/qemu-server/502.qmp,server,nowait
-mon chardev=qmp,mode=control -pidfile /var/run/qemu-server/502.pid -daemonize
ing text in the virt-viewer cli console is not possible,
and the mouse disappears when above the virt-viewer window.
Surprised that it works for you. Am I doing something wrong?
MJ
On 10/31/2017 01:41 PM, Martin Maurer wrote:
Hi,
works for me.
pls describe exactly your setup.
- guest VM (Debian Stre
No replies means stupid question, or..?
MJ
On 10/26/2017 09:46 PM, mj wrote:
Hi,
Reading this page:
https://www.linux-kvm.org/page/SPICE
I gather that using spice, we should be able to copy and paste to (and
from) a guest.
So, we installed on a stretch guest the spice-vdagent, and selected
like to copy
and paste to and from the guest :-)
Thanks!
MJ
not sure if it's a wise
decision, and it's difficult to find info on it, as mostly the results
are about using btrfs as storage for ceph.
So, is it a good idea to use btrfs in a guest, or better stick with (for
example) xfs?
Have a nice Sunday everybody!
MJ
Very interesting read!
On 03/23/2017 01:52 PM, Jeff Palmer wrote:
I just saw this recent blog post on the ceph blog. It helps you figure
out how many objects will be moved when adding new OSDs and such.
http://ceph.com/planet/how-many-objects-will-move-when-changing-a-crushmap/
Thanks Jeff!
Hi Thomas,
Thank you for these improvements.
(I did not participate much in the following discussion, but I was the
one who started the thread "[PVE-User] HA question")
MJ
On 11/24/2016 10:05 AM, Thomas Lamprecht wrote:
Hi all,
regarding the discussion about our HA stack on th
, but I must say: in the discussion I started (a few days back) on my
feeling that the HA interface is illogical, the developers turned out
to be really quite the contrary! :-)
MJ
name, since the id isn't really
that great for identifying the VM in the list.
I'm generally much more interested in my machine names than in those
numerical ids...
MJ
management, so we can work on a machine, without having to
completely remove it from the HA system.
Nice to see so much discussion following up on my initial email! :-)
MJ
s willing
to change this design.
MJ
stop/shutdown/start control has to be done manually.
Are we the only ones feeling this?
MJ