Thanks a lot, I will give it a try; at least the online docs/FAQ
should be updated ;)
Cheers
On 13/2/20 at 17:39, Thomas Lamprecht wrote:
On 2/13/20 4:12 PM, Eneko Lacunza wrote:
"
Does KVM support live migration from an AMD host to an Intel host and back?
Yes. [...]
Hi again,
It is also documented elsewhere:
https://www.linux-kvm.org/page/Migration#Note
Cheers
On 13/2/20 at 16:12, Eneko Lacunza wrote:
Hi Stefan,
On 13/2/20 at 15:58, Stefan Reiter wrote:
Hi,
On 2/13/20 1:27 PM, Eneko Lacunza wrote:
Hi Stefan,
Do you think this could help specify a custom CPU model that would let
VMs with more than 1 core migrate between Intel and AMD CPUs?
https://bugzilla.proxmox.com/show_bug.cgi?id
Hi Stefan,
Do you think this could help specify a custom CPU model that would let
VMs with more than 1 core migrate between Intel and AMD CPUs?
https://bugzilla.proxmox.com/show_bug.cgi?id=1660
Thanks a lot
Eneko
On 12/2/20 at 16:11, Stefan Reiter wrote:
Based on the RFC and following
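(For context: the custom CPU model support under discussion stores models in /etc/pve/virtual-guest/cpu-models.conf. A minimal sketch of a lowest-common-denominator model for a mixed Intel/AMD cluster might look like the following; the model name and flag list are illustrative, not a tested recommendation.)

    cpu-model: common-migratable
        # vendor-neutral base model; add only flags present on every host
        reported-model kvm64
        flags +aes;+sse4.1;+sse4.2

    # referenced from a VM config as:
    #   cpu: custom-common-migratable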
Hi Gilberto,
I took a look at their website, but didn't find any hint about why
LizardFS would be better than currently supported storages like Ceph or
GlusterFS.
Did you find some use cases where this solution will be better than
currently supported ones?
Cheers
On 27/01/18 at
Be aware that for upgrades they seem to have missed a patch for ceph-mgr
creation:
http://tracker.ceph.com/issues/20950
On 30/08/17 at 10:41, Florent B wrote:
Ceph Luminous is released
On 04/07/2017 17:25, Martin Maurer wrote:
Hi all,
We are very happy to announce the final release
Hi Timofey,
On 22/08/17 at 11:45, Timofey Titovets wrote:
Hi list,
At the suggestion of Fabian Grünbichler,
I'm opening this topic on the dev list.
So, my suggestion (copy-pasted from the issue):
since kernel 3.11 Linux supports zswap, and since kernel 3.15 Linux supports zram.
(I'm the creator of
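(For reference, both features can be tried without a custom kernel; these are the standard upstream knobs, with illustrative values.)

    # zswap: enable at runtime, if built into the kernel:
    echo 1 > /sys/module/zswap/parameters/enabled
    echo lz4 > /sys/module/zswap/parameters/compressor
    echo 20 > /sys/module/zswap/parameters/max_pool_percent
    # or persistently via the kernel command line:
    #   zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20

    # zram: a compressed RAM disk used as swap (size illustrative):
    modprobe zram
    echo 2G > /sys/block/zram0/disksize
    mkswap /dev/zram0 && swapon -p 100 /dev/zram0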
Hi Stefan,
This is a "just in case", not that I think this is really your issue.
We have seen lost packets with bridge + VLANs and pfSense and Debian
guests; switching to a virtual interface per VLAN in pfSense fixed the
problem, but we didn't dig very deep into the problem (we saw this
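(Seen from the Proxmox side, the per-VLAN layout means one guest NIC per VLAN with the tag set on the bridge port, instead of trunking every VLAN into one NIC. A sketch in qemu-server config syntax; MACs and VLAN IDs are illustrative.)

    # one trunk NIC carrying all VLANs into the guest (the problematic setup):
    #   net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0
    # one NIC per VLAN, tagged on the host side (the workaround):
    net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0,tag=10
    net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr0,tag=20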
Hi,
On 28/06/17 at 05:58, Dietmar Maurer wrote:
Digging a bit deeper, I found the following:
BlueStore can use multiple block devices for storing different data,
for example: a Hard Disk Drive (HDD) for the data, a Solid-State Drive (SSD) for
metadata, Non-Volatile Memory (NVM) or Non-volatile
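(A sketch of that device split using ceph-volume, a newer tool than what shipped at the time; device paths are illustrative.)

    # data on an HDD, RocksDB metadata on an SSD, WAL on NVMe:
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/sdc1 \
        --block.wal /dev/nvme0n1p1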
How refreshing to see a 4 year old issue get taken care of! ;)
Thanks
On 01/08/16 at 14:20, Emmanuel Kasper wrote:
Note that we *hide* the corresponding actions, instead of disabling by greying
out the menu command.
Disabling here does not make sense, since a low-privilege user
has no
On 01/08/16 at 08:50, Alexandre Derumier wrote:
As reported by Eneko Lacunza,
ceph/rbd by default uses writethrough even if writeback is defined, until a
flush is detected.
We currently use writeback without sending any flush for 3 things:
qemu-img convert: (default is unsafe, but we
On 01/08/16 at 09:26, Dominik Csapak wrote:
On 08/01/2016 08:51 AM, Alexandre Derumier wrote:
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index
Sorry, this was in my /tmp; I didn't intend to re-send, please ignore.
On 27/07/16 at 09:45, elacu...@binovo.es wrote:
From: Eneko Lacunza <elacu...@pve-test.binovo.net>
Signed-off-by: Eneko Lacunza <elacu...@binovo.es>
---
.../0054-vma-force-enable-rbd-cache-for-qmrestor
r, "rbd_cache", "true");
}
+ if (flags & BDRV_O_NO_FLUSH) {
+ rados_conf_set(s->cluster, "rbd_cache_writethrough_until_flush", "false");
+ }
I really think this is the right way (otherwise it's impossible to use the unsafe
cache option) and it could be pushed to
: "Eneko Lacunza" <elacu...@binovo.es>
À: "pve-devel" <pve-devel@pve.proxmox.com>
Cc: "Eneko Lacunza" <elacu...@pve-test.binovo.net>
Envoyé: Mardi 26 Juillet 2016 15:18:55
Objet: [pve-devel] [PATCH] Add patch to improve qmrestore to RBD,
activat
Hi Thomas,
On 26/07/16 at 13:55, Thomas Lamprecht wrote:
Hi, first, thanks for the contribution! Not commenting on the code
itself, but we need a CLA to be able to include your contributions.
We use the Harmony CLA, a community-centered CLA for FOSS projects;
see
Hi,
I just tested this patch; it works as well as the previous one. Instead of
setting rbd_cache_writethrough_until_flush=false in devfn, it issues a bogus
flush so that Ceph activates the rbd cache.
---
Index: b/vma.c
===================================================================
--- a/vma.c
+++ b/vma.c
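(The patch body is cut off in the archive; in the diff idiom of this thread, the bogus-flush idea would look roughly like the hunk below. The placement and the bs variable are assumptions, not the actual patch.)

    +    /* bogus flush: one flush is enough to make Ceph's
    +     * rbd_cache_writethrough_until_flush heuristic switch the
    +     * cache to writeback before the bulk restore writes start */
    +    if (bdrv_flush(bs) < 0) {
    +        /* non-fatal: the restore continues in writethrough mode */
    +        fprintf(stderr, "initial bogus flush failed\n");
    +    }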
Hi,
On 26/07/16 at 10:32, Alexandre DERUMIER wrote:
There is no reason to flush a restored disk until the very end, really.
Issuing flushes every x MB could needlessly hurt other storages.
I'm curious to see host memory usage for a big local file storage restore
(100GB), with writeback
On 26/07/16 at 13:15, Dietmar Maurer wrote:
Index: b/vma.c
===================================================================
--- a/vma.c
+++ b/vma.c
@@ -328,6 +328,12 @@ static int extract_content(int argc, char **argv)
}
+/* Force rbd cache */
+if (0 ==
Hi all,
This is my first code contribution to Proxmox. Please correct my
mistakes with patch creation/code style/solution etc. :-)
This small patch adds a flag to devfn to force rbd cache (writeback
cache) activation for qmrestore, to improve performance on restore to
RBD. This follows
Hi,
On 26/07/16 at 10:04, Alexandre DERUMIER wrote:
I think qmrestore isn't issuing any flush request (until maybe the end),
Needs to be checked! (but I think we open the restore block storage with
writeback, so I hope we send a flush)
so for the Ceph storage backend we should set
Hi,
On 21/07/16 at 09:34, Dietmar Maurer wrote:
But you can try to assemble larger blocks, and write them once you get
an out-of-order block...
Yes, this is the plan.
I always thought the Ceph libraries do (or should do) that anyway?
(write combining)
Reading the docs:
On 20/07/16 at 17:46, Dietmar Maurer wrote:
This is called from restore_extents, where a comment precisely says "try
to write whole clusters to speedup restore", so this means we're writing
64KB-8Byte chunks, which gives Ceph-RBD a hard time because this
means lots of ~64KB IOPS.
…implement and test this to check whether restore time improves.
Thanks a lot
Eneko
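(A self-contained sketch of the write-combining plan discussed above: accumulate contiguous 64KB chunks and issue one large write when an out-of-order block arrives or the buffer fills. Names and sizes are illustrative, not the eventual patch.)

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    #define COALESCE_MAX (4 * 1024 * 1024)  /* accumulate up to 4MB, illustrative */

    static unsigned char coalesce_buf[COALESCE_MAX];
    static uint64_t coalesce_start;  /* device offset the buffer begins at */
    static size_t coalesce_len;      /* bytes accumulated so far */

    /* write out whatever has been accumulated */
    static void coalesce_flush(int fd)
    {
        if (coalesce_len > 0) {
            pwrite(fd, coalesce_buf, coalesce_len, (off_t)coalesce_start);
            coalesce_len = 0;
        }
    }

    /* append contiguous chunks; flush on an out-of-order block or a full buffer */
    static void coalesce_write(int fd, const void *chunk, size_t len, uint64_t offset)
    {
        if (len > COALESCE_MAX) {            /* oversized chunk: write it directly */
            coalesce_flush(fd);
            pwrite(fd, chunk, len, (off_t)offset);
            return;
        }
        if (coalesce_len > 0 &&
            (offset != coalesce_start + coalesce_len ||
             coalesce_len + len > COALESCE_MAX)) {
            coalesce_flush(fd);
        }
        if (coalesce_len == 0) {
            coalesce_start = offset;
        }
        memcpy(coalesce_buf + coalesce_len, chunk, len);
        coalesce_len += len;
    }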
On 20/07/16 at 08:24, Eneko Lacunza wrote:
On 16/02/16 at 15:52, Stefan Priebe - Profihost AG wrote:
On 16.02.2016 at 15:50, Dmitry Petuhov wrote:
16.02.2016 13:20, Dietmar Maurer wrote:
Storage Backend is
Hi all,
On 16/02/16 at 15:52, Stefan Priebe - Profihost AG wrote:
On 16.02.2016 at 15:50, Dmitry Petuhov wrote:
16.02.2016 13:20, Dietmar Maurer wrote:
Storage backend is Ceph using 2x 10Gbit/s, and I'm able to read from it
with 500-1500MB/s. See below for an example.
The backup
On 01/06/16 at 17:01, Alexandre DERUMIER wrote:
@Alexandre: do you really need the other direction?
No. Only old->new.
I don't think users try to migrate from new->old anyway.
Users try strange things, but I don't think a failure to migrate to an
older version would be a surprise.
Hi,
On 19/04/16 at 21:27, Daniel Hunsaker wrote:
Why is it worth mentioning? I would just use:
win8: 'Microsoft Windows 8/10/2012',
?
As a server virtualization platform, we should list the current Microsoft
server OS
(which is 2012r2).
MS users like to see their OS listed as supported.
Hi Cesar,
I don't know how DRBD9 is integrated in PVE 4.x, but I think you have one
thing wrong...
On 15/04/16 at 02:34, Cesar Peschiera wrote:
Hi PVE developers team
Please let me make a suggestion (this topic is a bit complicated):
I suggest that in PVE 4.x, in the use of the PVE
On 26/02/16 at 09:33, Dietmar Maurer wrote:
but I changed the limit to 8GB.
This is a welcome limit. Do you even get a usable virtualization server
with 4-6GB of swap in use?
On some of our Proxmox servers I had to reduce the swap size to 1-2GB or
even deactivate it entirely after
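(For reference, the usual knobs for taming swap on a host; values are illustrative.)

    sysctl vm.swappiness=10   # prefer reclaiming page cache over swapping
    swapoff -a                # or deactivate swap entirely, as mentioned above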
On 14/01/16 at 16:48, Stefan Priebe wrote:
On 14.01.2016 at 16:46, Dietmar Maurer wrote:
Hi Dietmar,
Hi Alexandre,
just to be sure. Nobody interested?
I personally do not need such a feature. What is the use case exactly?
The idea is to enable people to easily migrate from VMware,
/12 Eneko Lacunza, add second argument for backup oldness
# - 2014/09/29 Eneko Lacunza, fix VM name extraction for snapshot support
if [ -f /etc/default/locale ]
then
. /etc/default/locale
export LANG
fi
# vzdump log directory
LOG=/var/log/vzdump
# VMs' .conf directory
Hi Mario,
On 09/11/15 at 16:48, Mario Loderer wrote:
it works :) Thanks!
Fine, we use it internally and at a couple of customers; glad to see it is
useful for someone else :)
Is it right that I can only check one backup per check? Or is it
possible to check all VMs and define excludes?
On 20/08/15 at 17:48, Cesar Peschiera wrote:
Hi to all
Please excuse me if I'm asking this question in the wrong place, but I asked the
same question on several dates in the PVE forum, and no one has responded
(since June 28, 2015).
The question is: how to quickly restore a backup of a VM via the CLI
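(The CLI restore being asked about is qmrestore; archive path, VM ID, and target storage here are illustrative.)

    # restore a vzdump archive to VM ID 100 on a given storage:
    qmrestore /var/lib/vz/dump/vzdump-qemu-100-2015_06_28-00_00_00.vma.lzo 100 --storage local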
Hi,
Part of the problem is that there isn't any ISO uploading/removing
option in the GUI, which is something that new users always find quite odd.
Having an ISO removal option in the GUI could help: it could check whether the
ISO is mounted in some VM, and refuse to remove it if that is the case.
Another option
Hi Philipp,
What Ceph packages?
We have 3 small clusters using ceph/rbd and I'm not seeing this.
The biggest VM is 6GB of RAM though, the biggest disk is 200GB, and one VM has a
total of 425GB in 5 disks.
Are you using Proxmox Ceph Server?
What cache setting is in the VM hard disk config?
Cheers
Eneko
On
On 28/05/15 17:54, Dietmar Maurer wrote:
If you provide a buffer to the kernel that you change while it is
working with it, I don't know why you would expect a reliable/predictable
result. Especially (but not only) if you tell it not to make a copy!
Note that without O_DIRECT you won't get a correct
Hi Stanislav,
On 28/05/15 13:10, Stanislav German-Evtushenko wrote:
Alexandre,
The important point is whether O_DIRECT is used with Ceph or not.
Don't you know?
qemu rbd access is userland only, so the host doesn't have any cache or
buffer.
If the RBD device does not use the host cache then it is very
Hi,
I'm not a kernel/IO expert in any way, but I think this test program has a
race condition, so it is not helping us diagnose the problem.
We're writing to buffer x while it is in use by the write syscall. This is
plainly wrong in userspace.
Cheers
Eneko
On 28/05/15 11:27, Wolfgang Bumiller
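(To make the race concrete, a minimal program in the spirit of the one being criticized: one thread keeps rewriting the buffer while the main thread hands it to write(2) with O_DIRECT. Illustrative only, not the original test program; compile with -pthread.)

    #define _GNU_SOURCE        /* for O_DIRECT */
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BUF_SIZE 4096

    static char *buf;   /* aligned buffer handed to write(2) */

    /* keeps scribbling over buf while the main thread may be inside write() */
    static void *mutator(void *arg)
    {
        (void)arg;
        for (;;) {
            memset(buf, rand() & 0xff, BUF_SIZE);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0 || posix_memalign((void **)&buf, 4096, BUF_SIZE) != 0) {
            return 1;
        }
        pthread_create(&t, NULL, mutator, NULL);
        for (int i = 0; i < 1000; i++) {
            /* the kernel DMAs straight from buf, which is changing
             * underneath it: what lands on disk (or on each mirror
             * leg of a software RAID) is undefined */
            pwrite(fd, buf, BUF_SIZE, 0);
        }
        close(fd);
        return 0;
    }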
On 28/05/15 12:38, dea wrote:
On Thu, 28 May 2015 12:02:21 +0200 (CEST), Dietmar Maurer wrote
I've found this...
http://www.psc.edu/index.php/hpn-ssh
What do you all think?
This is great, but unfortunately the ssh people rejected those patches
(AFAIK). So the default ssh tools from Debian do not
On 28/05/15 13:44, Stanislav German-Evtushenko wrote:
Eneko,
I'm not a kernel/IO expert in any way, but I think this test program has a race
condition, so it is not helping us diagnose the problem.
We're writing to buffer x while it is in use by the write syscall. This is
plainly wrong in
On 28/05/15 13:49, Dietmar Maurer wrote:
I'm not a kernel/IO expert in any way, but I think this test program has a
race condition, so it is not helping us diagnose the problem.
We're writing to buffer x while it is in use by the write syscall. This is
plainly wrong in userspace.
For this test, we
On 28/05/15 15:01, Stanislav German-Evtushenko wrote:
Note that without O_DIRECT you won't get a correct result either; the disk
may end up not containing the data that was in the buffer when write was called.
Soft-mirror data will be identically uncertain :)
You are right. That is why I suppose there is a bug
On 28/05/15 15:32, Stanislav German-Evtushenko wrote:
What does it mean that operations with the buffer are not guaranteed to be
thread-safe in QEMU?
O_DIRECT doesn't guarantee that reading of the buffer is finished when write
returns, if I read man -s 2 open correctly.
The statement seems to be not
Hi,
On 05/05/15 09:39, Martin Waschbüsch wrote:
Hi Andrew, hi Alexandre,
On 05.05.2015 at 09:12, Alexandre DERUMIER <aderum...@odiso.com> wrote:
We have been running 3.10 in production for over 2 years now. No
real problems.
Same here, running since around
Hi Moula,
On 04/05/15 14:29, Moula BADJI wrote:
On 04/05/2015 10:40, Alexandre DERUMIER wrote:
root@pve-ceph6:/home/moula# pvecm add 192.168.100.40
root@192.168.100.40's password:
root@192.168.100.40's password:
root@192.168.100.40's password:
unable to copy ssh ID
Something seems to be
On 13/04/15 18:24, Dietmar Maurer wrote:
Otherwise a change to the parameters from
the GUI can reset the hand-set MTU in /etc/network/interfaces
We never reset parameters like MTU. Do you have a test case?
I can change anything from the GUI without overwriting the MTU settings.
Oops, I was sure
Hi,
On 14/04/15 09:44, Eneko Lacunza wrote:
On 13/04/15 18:24, Dietmar Maurer wrote:
Otherwise a change to the parameters from
the GUI can reset the hand-set MTU in /etc/network/interfaces
We never reset parameters like MTU. Do you have a test case?
I can change anything from the GUI, without
Hi all,
It would be useful to be able to set the MTU for a node's interfaces
along with the rest of the parameters. Otherwise a change to the parameters from
the GUI can reset the hand-set MTU in /etc/network/interfaces
Cheers
Eneko
--
Technical Director
Binovo IT Human Project,
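(For reference, the hand-set MTU in question lives in /etc/network/interfaces, e.g.; addresses are illustrative.)

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        mtu 9000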
Hi Detlef,
I have seen problems similar to this previously, but only on test clusters.
What package versions?
My diagnosis was that some process had gone crazy and affected the whole
node. I don't think I had CTs there. Did you have VMs running?
On 05/03/15 07:55, Detlef Bracker
Hi Detlef,
Have you checked syslog and dmesg for other errors? Is NFS on another
machine? What OS does VM 997 run? What network and disks?
On 22/02/15 16:05, Detlef Bracker wrote:
Dear,
I don't know why, but for the 2nd time the scheduled backup hangs without
any information as to why - it has now been running for
Thanks a lot everyone involved!
On 19/02/15 14:38, Martin Maurer wrote:
Hi all,
We just released Proxmox VE 3.4 - the new version is centered around
ZFS, including a new ISO installer supporting all ZFS raid levels.
Small but quite useful additions include hotplug, pending changes,
Hi Alexandre,
On 06/02/15 08:16, Alexandre DERUMIER wrote:
Bugs:
- resize disk qcow2: not working when a snapshot exists (qemu error)
- ceph: OSD daemons are not always starting at boot (maybe related to /etc/pve
and the pve-cluster service?)
I haven't seen this in our 3 clusters and in 3
Hi Alexandre,
On 30/01/15 11:16, Alexandre DERUMIER wrote:
Hi,
I'm going to FOSDEM in Brussels tomorrow
https://fosdem.org/2015/schedule/event/leveragingceph/
Loïc Dachary invited me to talk briefly about the Ceph integration in Proxmox :)
Sage Weil will be there too,
I'll try to talk with
Hi,
On 26/01/15 18:36, Michael Rasmussen wrote:
I usually change it to soft because eventually this kind of problem happens.
You just have to accept that using soft mounts does not guarantee
against data loss.
True, but this prevents the nodes using that NFS share from becoming unstable.
If
On 26/01/15 18:21, Dietmar Maurer wrote:
I usually change it to soft because eventually this kind of problem happens.
Maybe it would be helpful to be able to set a soft mount in the GUI storage
configuration; this would help not to forget it ;)
Somebody wants to provide a patch?
Will try to do
I usually change it to soft because eventually this kind of problem happens.
Maybe it would be helpful to be able to set a soft mount in the GUI storage
configuration; this would help not to forget it ;)
On 26/01/15 16:59, Dietmar Maurer wrote:
Is it really expected to mount the NFS share with hard
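(Until there is a GUI option, the soft mount can be set in /etc/pve/storage.cfg; the server, export, and storage name here are illustrative.)

    nfs: backup-nfs
        server 192.0.2.5
        export /srv/backup
        path /mnt/pve/backup-nfs
        content backup
        options vers=3,soft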
Hi,
On 08/11/14 11:02, Dietmar Maurer wrote:
SCENARIO: proxmox HA cluster, VM images exclusively on RBD
GOAL: use Ceph auth for fencing
Currently this can't be done, but I think the following changes would allow it:
- Move storage client auth keyring from /etc/pve/priv/ceph to somewhere
Hi all,
I have been teaching a Proxmox+Ceph hands-on workshop for the last 3 days.
Talking about HA and fencing, I came up with an idea I'm not sure could
work, but I wanted to discuss it with you.
SCENARIO: proxmox HA cluster, VM images exclusively on RBD
GOAL: use Ceph auth for fencing
Currently
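(A sketch of the fencing primitive the idea relies on, assuming one Ceph client key per node; key names, caps, and the pool are illustrative.)

    # give each node its own key instead of one shared keyring:
    ceph auth get-or-create client.pve-node1 mon 'allow r' osd 'allow rwx pool=vmimages'
    # to fence a failed node, revoke its key so it can no longer touch RBD:
    ceph auth del client.pve-node1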
I think shared storage is the correct solution. If you want the feature,
you'll have to provide the shared storage, which is not difficult to do;
otherwise current local storage works OK.
This would keep things as simple as possible, which is quite good for
reliable logs.
On 06/10/14 10:43,
Hi all,
I just found out about this blog:
http://ayufan.eu/projects/proxmox-ve-differential-backups/
You surely know about it; I was curious about the reasons to reject the
patches and also about whether there are any plans for incremental or
differential backups. I think it would be a
Hi Martin,
On 19/09/14 11:07, Martin Maurer wrote:
I just found out about this blog:
http://ayufan.eu/projects/proxmox-ve-differential-backups/
You surely know about it; I was curious about the reasons to reject the patches
and also about whether there are any plans for incremental or
AM, Eneko Lacunza elacu...@binovo.es wrote:
Hi Martin,
On 19/09/14 11:07, Martin Maurer wrote:
I just found out about this blog:
http://ayufan.eu/projects/proxmox-ve-differential-backups/
You surely know about it; I was curious about the reasons to reject the
patches
and also about whether