Hello
09.10.2019 18:39, Thomas Lamprecht wrote:
I would actually really like to change this to scsi-hd, but we need to be sure
it's OK for all possible supported setups.
So I tried to investigate a bit how it came to the use of scsi-generic, and came
across a commit[0] from Alexandre (CCd) which
Under Windows 7 I've just hit a bug with q35-4.0 where the timer runs a few times
faster than it should. See
https://bugzilla.redhat.com/show_bug.cgi?id=1704375
I had to switch the machine type from q35 to i440fx to work around it.
29.07.2019 23:11, Kamil Trzciński wrote:
It seems that this has some
IIRC, BTRFS can use /sys/block/DEV/queue/rotational to enable the
SSD-optimized allocator at mount time, and mkfs.btrfs uses it to select the
metadata allocation scheme.
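As an aside, a minimal sketch (not from the mail) of reading that sysfs flag; the device name 'sda' is only an example:

use strict;
use warnings;

# Read the rotational flag of a block device from sysfs.
# "1" = rotational (spinning disk), "0" = SSD / non-rotational.
sub is_rotational {
    my ($dev) = @_;
    my $path = "/sys/block/$dev/queue/rotational";
    open(my $fh, '<', $path) or die "cannot open $path: $!";
    chomp(my $flag = <$fh>);
    close($fh);
    return $flag;
}

my $rot = is_rotational('sda');   # example device name
print $rot ? "rotational\n" : "non-rotational\n";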
08.11.2018 9:07, Alexandre DERUMIER wrote:
That's not the case—it's not enabled by default on any controller (my
previous e-mail
18.05.2017 06:59, Dietmar Maurer wrote:
>> However when going to the Summary page for a lizardfs storage Instance
>> it shows the usages details etc correctly, but the Type comes up as
>> "Unknown"
> You need to patch the GUI code to display something else.
>
>
18.04.2017 19:00, Luca De Andreis wrote:
Very similar to iscsi over zfs, but more standard (iscsi over zfs has problems
with BSD versions, Linux ietd, etc.).
Is it a bad idea?
I don't think a proprietary storage plugin will be included in PVE,
because to support proprietary storage, the PVE team
17.01.2017 10:28, Jasmin J. wrote:
Hi!
THX for describing it, it would have been very helpful when I was writing
my DRBD8 plugin.
On 01/16/2017 09:46 PM, Dmitry Petuhov wrote:
+# public
+# returns $path to image/directory to be passed to qemu/lxc.
sub path {
my ($class, $scfg, $volname
Add comments for the base class's methods that are meant to be overloaded in custom
storage plugins. Also add a comment at the end of the file with examples of methods
that must be present in child classes but are not present in the base class.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
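Purely as an illustration (not the wording of the actual patch), the added comments might look roughly like this; the method names in the trailing example are only placeholders:

# public
# Returns the filesystem path of a volume, to be passed to qemu/lxc.
# Overload this in custom storage plugins when volumes need special handling.
#
# Methods that must be implemented in child classes (not present in this base class), e.g.:
#   sub plugindata { ... }   # storage type, supported content and formats
#   sub properties { ... }   # plugin-specific configuration properties
#   sub options    { ... }   # which properties are required/optional/fixed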
Described returned values.
Needed for some storages with backing block devices to do online resize.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bc26da2..07c0c05 100644
---
With krbd we resize the volume and tell QemuServer to notify the running QEMU
with a zero $size by returning undef.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
This patch depends on
[PATCH qemu-server] Set zero $size and continue if volume_resize() returns false
PVE/Storage/RBDPlugin.
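For illustration only, a rough sketch of the idea described above; this is hypothetical code, not the actual patch, and the rbd invocation is simplified:

use PVE::Tools qw(run_command);

# Hypothetical sketch: with krbd the plugin grows the RBD image itself and
# returns undef, so the caller sends a zero size to QEMU's block_resize.
sub volume_resize {
    my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
    # 'rbd resize' takes the new size in MB here; pool/auth options omitted
    run_command(['rbd', 'resize', '--size', int($size / (1024*1024)), $volname]);
    # undef => QemuServer notifies the running QEMU with a zero $size and QEMU
    # re-reads the size of the already-grown krbd block device
    return undef;
}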
Yes, you can. Just install the pve-headers package corresponding to your running
kernel. You will also have to manually install headers on every kernel update.
Or you can just wait for the next PVE kernel release. It usually contains the latest
RAID (including aacraid) and NIC drivers.
But I don't think
13.01.2017 12:26, Thomas Lamprecht wrote:
For the beginning, a wiki would probably also be a little better
suited than pve-docs, as docs is more an admin guide than a developer
guide.
In that case POD could contain the developer guide, while pve-docs contains the
admin guide.
But a wiki is OK,
13.01.2017 11:21, Lindsay Mathieson wrote:
Is there any documentation or samples for this?
BTW, PVE team: if I document Plugin.pm in POD format, will you apply it?
Or any other format?
Or should I maybe just write a wiki article?
13.01.2017 11:21, Lindsay Mathieson wrote:
Is there any documentation or samples for this?
No documentation currently.
You can look at /usr/share/perl5/PVE/Storage/*Plugin.pm as examples.
Also, see my plugins at
https://github.com/mityarzn/pve-storage-custom-mpnetapp and
Make volume_resize() return the new volume size to be reported to qemu
instead of just true.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage/LVMPlugin.pm | 7 ---
PVE/Storage/Plugin.pm | 2 +-
PVE/Storage/RBDPlugin.pm | 2 +-
PVE/Storage/SheepdogPlu
Because after this patch series a plugin's volume_resize() should return the
new actual volume size, or zero for some block storages. Returning just
1 will break online resize.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage.pm | 2 +-
1 file changed, 1 insertion(+), 1 de
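For illustration only, a minimal sketch of what a plugin's volume_resize() could return after this change; the real implementations differ per storage type and the alignment value below is just an example:

# Illustrative sketch: return the actual new size in bytes, not 1.
sub volume_resize {
    my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
    # ... perform the real resize of the backing volume here ...
    my $align = 4 * 1024 * 1024;   # e.g. 4 MiB extents, purely an example value
    my $newsize = int(($size + $align - 1) / $align) * $align;
    return $newsize;   # reported back to qemu by the caller
}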
Resize the volume and notify qemu about it with a zero-size block_resize message.
A non-zero size notification for the block device would cause an error.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage/RBDPlugin.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff
The actual volume size may differ slightly from the requested one because of
alignment and truncation, so feed qemu the actual size.
Also, some plugins may need to send zero instead of the actual size to qemu.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/QemuServer.pm | 4 ++--
[PATCH qemu-server] Honour volume size returned by storage plugin.
[PATCH pve-storage 1/3] Make volume_resize() return new volume size
[PATCH pve-storage 2/3] Allow Ceph volume resize with krbd enabled.
[PATCH pve-storage 3/3] Increment storage plugin API version
This patch series allows plugins
12.01.2017 16:18, Fabian Grünbichler wrote:
On Thu, Jan 12, 2017 at 03:33:48PM +0300, Dmitry Petuhov wrote:
Set zero size for backing block devices in the qmp call. In that case qemu
sets the size of the device in the guest to the current size of the backing
device, which was resized earlier. Otherwise, any non-zero
Set zero size for backing block devices in the qmp call. In that case qemu
sets the size of the device in the guest to the current size of the backing
device, which was resized earlier. Otherwise, any non-zero value causes an error here.
---
PVE/QemuServer.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git
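To make the intent concrete, a hedged sketch of the QemuServer side; the function name follows the surrounding context, but the body is illustrative and not the exact patch (check_running, vm_mon_cmd and PVE::Storage::volume_resize are assumed from the surrounding module):

# Illustrative sketch only, not the actual patch.
sub qemu_block_resize {
    my ($vmid, $deviceid, $storecfg, $volid, $size) = @_;
    my $running = check_running($vmid);
    # the plugin may return 0/undef for backing block devices (e.g. krbd);
    # a zero size makes qemu take the current size of the backing device
    $size = PVE::Storage::volume_resize($storecfg, $volid, $size, $running) // 0;
    return if !$running;
    vm_mon_cmd($vmid, "block_resize", device => $deviceid, size => int($size));
}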
12.01.2017 14:02, Alexandre DERUMIER wrote:
I think because of this, resize is failing for CEPH with krbd enabled:
qemu block resize can't work with krbd, only librbd.
Seems to be a regression in proxmox, because we didn't do it before.
We can add in
/usr/share/perl5/PVE/Storage/RBDPlugin.pm
12.01.2017 13:57, Dietmar Maurer wrote:
So maybe we could do something like
-vm_mon_cmd($vmid, "block_resize", device => $deviceid, size => int($size));
+vm_mon_cmd($vmid, "block_resize", device => $deviceid, size => ceil($size/1024/1024) . "M");
in PVE::QemuServer::qemu_block_resize()
11.12.2016 21:16, Dietmar Maurer wrote:
> I simply do not understand why you think the current approach is wrong. It
> works
> with all major storage types, i.e. Ceph, sheepdog, iscsi, NFS, Gluster,
> DRBD on LVM in dual primary mode, DRBD9, ...
Actually, the current approach can theoretically (I
10.10.2016 23:30, Alexandre DERUMIER wrote:
>> We could rework the iscsi-manipulation code into another behavior. For example,
>> Dell PS-series SANs export each volume in a separate target, as lun 0. So we can
>> log into this target in activate_volume() and log out in
>> deactivate_volume(). See my
10.10.2016 21:08, Alexandre DERUMIER wrote:
>>> This is because the Lun is persisted through the scsi bus so the
>>> following should do it:
>>> echo 1 > /sys/bus/scsi/devices/${H:B:T:L}/delete (Where H = host:B = bus:T
>>> = target:L = lun)
> yes, but you need to do it on all nodes to be
30.09.2016 09:18, Dietmar Maurer wrote:
This is not really true - it seems scsi-block and scsi-generic are about the same
speed.
So we could use iscsi-inq or iscsi-readcapacity16 to see what the volume
actually is (a block device or, say, a streamer) and select the appropriate
device type for qemu.
Also, with
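As a hypothetical illustration of that idea (iscsi-inq ships with libiscsi, but the helper below and its output parsing are only a sketch, not real PVE code):

# Hypothetical helper: inspect the LUN's peripheral device type with iscsi-inq
# and pick a qemu device type; the output parsing is only illustrative.
sub select_scsi_device_type {
    my ($portal, $target, $lun) = @_;
    my $out = `iscsi-inq "iscsi://$portal/$target/$lun" 2>/dev/null`;
    # peripheral device type 0 = direct-access (disk), 1 = sequential-access (tape)
    return ($out =~ /device type\s*:?\s*0\b/i) ? 'scsi-hd' : 'scsi-generic';
}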
29.09.2016 09:21, Michael Rasmussen wrote:
On Thu, 29 Sep 2016 09:17:56 +0300
Dmitry Petuhov <mityapetu...@gmail.com> wrote:
It's a side effect of scsi pass-through, which is used by default for
[libi]scsi volumes with the scsi VM disk interface. QEMU is just not aware of VM
bl
29.09.2016 09:05, Michael Rasmussen wrote:
On Thu, 29 Sep 2016 07:38:09 +0200 (CEST)
Alexandre DERUMIER wrote:
iostats are coming from qemu.
what is the output of monitor "info blockstats" for the vm where you don't have
stats ?
Two examples below:
# info blockstats
I planned to try gfs2 on a couple of my clusters. It will not work without
corosync at all, because corosync is needed for DLM.
21.09.2016 2:25, Alexandre DERUMIER wrote:
> Another question about my first idea (replace corosync),
>
> is it really difficult to replace corosync with something else ?
>
>
Because it actually supports subvols; it just was not checked before.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage/NFSPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
index df00f37..b
This patch series simplifies LXC volume creation and makes it independent
of storage plugin names. It will allow using LXC with custom plugins.
Fixed version according to the review comments.
Instead, rely on plugin-defined supported formats to decide if it
supports subvols.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
Still work around the 'rbd' plugin, which cannot be used for LXC without
krbd. Maybe a better way is to pass $scfg into the plugin's plugindata()
in storage co
19.09.2016 01:29, Alexandre DERUMIER wrote:
Hi,
I have had some strange behaviour on some hosts last week (cpu performance
degrading),
and I have found that 3 hosts of my 15-host cluster have the wrong cpu frequency.
All nodes are Dell R630, with Xeon v3 3.1GHz (all with the latest bios/microcode
19.09.2016 08:51, Fabian Grünbichler wrote:
I sent you some feedback on patch #1 yesterday.. Since it breaks
stuff, it can't be merged (yet). Feel free to send an updated v2 (or
if you feel the feedback requires discussion, feel free to respond).
Thanks!
Sorry, I did not receive it. Maybe
). All seems to work.
11.09.2016 9:07, Dmitry Petuhov wrote:
> This patch series simplifies LXC volume creation and makes it independent
> of storage plugin names. It will allow using LXC with custom plugins.
>
> We assume here that all storages (except rbd) can create raw volumes,
>
plugin's capabilities, which may or may not be available
in an instance.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
src/PVE/LXC.pm | 45 +++--
1 file changed, 15 insertions(+), 30 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
Because it actually supports subvols; it just was not checked before.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage/NFSPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
index df00f37..b
This patch series simplifies LXC volume creation and makes it independent
of storage plugin names. It will allow using LXC with custom plugins.
We assume here that all storages (except rbd) can create raw volumes,
that we can format and mount, which isn't actually true. But this case
is being
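As a hedged sketch of the generic flow assumed above: the PVE::Storage calls are real, but the helper, its arguments and the mkfs/mount steps are only illustrative, not the actual series:

use PVE::Tools qw(run_command);
use PVE::Storage;

# Illustrative only: allocate a raw volume, format it and mount it for a container.
sub alloc_and_mount_raw {
    my ($cfg, $storeid, $vmid, $size_kb, $mountpoint) = @_;
    my $volid = PVE::Storage::vdisk_alloc($cfg, $storeid, $vmid, 'raw', undef, $size_kb);
    my $path  = PVE::Storage::path($cfg, $volid);
    run_command(['mkfs.ext4', '-F', '-q', $path]);          # format the raw volume
    run_command(['mount', '-o', 'loop', $path, $mountpoint]); # mount it for the CT
    return $volid;
}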
and other things for new custom
plugins integration
- Original Message -
From: "Dmitry Petuhov" <mityapetu...@gmail.com>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Friday, 26 August 2016 15:06:33
Subject: [pve-devel] [PATCH v4] Add support for custom storage
26.08.2016 13:00, Alexandre DERUMIER wrote:
Hi,
One thing that should be improved is in Storage/Plugin.pm:
if ($type eq 'iscsi' || $type eq 'nfs' || $type eq 'rbd' || $type eq 'sheepdog' || $type eq 'iscsidirect' || $type eq 'glusterfs' || $type eq 'zfs' || $type eq 'drbd') {
-manager to select input elements for storage plugins to target
this.
Currently tested with my NetApp plugin.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage.pm | 32 ++--
1 file changed, 30 insertions(+), 2 deletions(-)
diff --git
Hello.
Some of today's storages export 4K LUNs. There may be 3 options:
512-byte physical sectors and 512-byte logical sectors, 4096 phys and
512 log (512e), and 4096 phys and 4096 log (native 4K). The last one is
incompatible with the other two. There will be more and more 4K-only
devices over
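For context, a small sketch (not from the mail) showing how the logical and physical sector sizes can be read from sysfs to tell 512n, 512e and 4Kn apart; 'sda' is only an example device name:

# (512, 512) = 512n, (512, 4096) = 512e, (4096, 4096) = native 4K
sub sector_sizes {
    my ($dev) = @_;
    my $read = sub {
        open(my $fh, '<', $_[0]) or return undef;
        chomp(my $v = <$fh>);
        close($fh);
        return $v;
    };
    my $logical  = $read->("/sys/block/$dev/queue/logical_block_size");
    my $physical = $read->("/sys/block/$dev/queue/physical_block_size");
    return ($logical, $physical);
}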
to update the module. Until the module is updated,
the corresponding storage will just disappear from PVE, so it shall
not cause any data damage because of the API change.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage.pm | 24 +++-
1 file changed, 23 insertions(+), 1 de
10.08.2016 14:58, Dietmar Maurer wrote:
The idea of this patch is to add a folder /usr/share/perl5/PVE/Storage/Custom
where users can place their plugins; PVE will automatically load
them on start, or warn if it could not and continue. Maybe we could
even load all plugins (except PVE::Storage::Plugin
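A rough sketch of what such an autoload loop could look like; this is illustrative only, with error handling simplified, and the register() call is how existing PVE storage plugins add themselves:

# Hypothetical sketch of the autoload idea.
my $custom_dir = '/usr/share/perl5/PVE/Storage/Custom';
if (-d $custom_dir) {
    opendir(my $dh, $custom_dir) or die "cannot open $custom_dir: $!";
    for my $file (sort readdir($dh)) {
        next if $file !~ /^(\w+)\.pm$/;
        my $module = "PVE::Storage::Custom::$1";
        eval "require $module; $module->register();";
        warn "skipping custom storage plugin $module: $@" if $@;
    }
    closedir($dh);
}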
10.08.2016 14:58, Dietmar Maurer wrote:
This is by intention - we want people to use open source software.
``Use open source software'' and ``patch Storage.pm on every update''
are rather different things.
Users are still free to share their code if it loads automatically, but
autoload eases
input elements for storage plugins to target
this.
Currently tested with my NetApp plugin.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage.pm | 22 +-
1 file changed, 21 insertions(+), 1 deletion(-)
mode change 100755 => 100644 PVE/Storage
Add 2 functions to enable/disable multipath devices based on WWN
to PVE::Storage::Plugin, plus a helper script. The script is a simplified
and parallelized version of /sbin/rescan-scsi-bus from sg3-utils;
rescan-scsi-bus is too slow and noisy.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.
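A hypothetical sketch of the two helpers described above; the actual patch ships its own faster rescan script, and these direct multipath calls are only illustrative:

use PVE::Tools qw(run_command);

# Illustrative only, not the code from the patch.
sub enable_multipath_device {
    my ($wwn) = @_;
    run_command(['multipath', '-a', $wwn]);  # whitelist the WWID
    run_command(['multipath']);              # let multipathd assemble the map
}

sub disable_multipath_device {
    my ($wwn) = @_;
    run_command(['multipath', '-f', $wwn]);  # flush the map for this WWID
}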
Third try. Rewritten in a less general form, as Wolfgang and Dominik
suggested.
The first patch is a generic layer for low-level SCSI and multipath
device manipulation. The script does almost the same as sg3-utils's
rescan-scsi-bus (the parts we need from it), but much faster and
more quietly.
For this patch
26.07.2016 19:08, Andreas Steinel wrote:
>> Why not use the qcow2 format over generic NFS? It will give you snapshot-rollback
>> features, and I don't think with much worse speed than these features
>> on the ZFS level.
> I want to have send/receive also and I use QCOW2 on top of ZFS to have
>
Why not use the qcow2 format over generic NFS? It will give you snapshot-rollback
features, and I don't think with much worse speed than these features on
the ZFS level.
But in my experience, NFS storage for VMs is a bad idea: it causes huge latencies
under load, leading to crashes. If you want
::Simple skips array creation in
data.
- Delete unused functions in NetApp.pm.
- Disable inline compression (too slow).
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage.pm| 2 +
PVE/Storage/LunCmd/Makefile | 2 +-
PVE/Storage/LunCmd/NetApp.pm
. Changes from v1:
- Fixed status output.
- Fixed `shared' option.
- Fixed typo.
- Code cleanups.
- Rewrote the switch-case as if-elsif to remove an extra dependency.
- Added libxml-simple-perl dependency to control.in.
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Stor
14.07.2016 17:55, Dominik Csapak wrote:
> first, i believe your commit message got merged in the title?
Yes, I'm still learning to use git :-)
> second, i believe your intention is good (support for netapp and so on), but
> your approach is backwards:
>
> instead of a multipath plugin with
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage.pm| 2 +
PVE/Storage/LunCmd/NetApp.pm | 459 ++
PVE/Storage/MPDirectPlugin.pm | 383 +++
3 files changed, 844 insertions(+)
New version of the patch for multipath-backed storage.
Additional Debian packages required: multipath-tools, scsitools.
Multipath devices are added/deleted in [de]activate_storage().
Backing scsi devices ("paths") are updated on activate_storage()
and free_image(). This makes these
with qemu.
- Original Message -
From: "Dmitry Petuhov" <mityapetu...@gmail.com>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Monday, 11 July 2016 10:53:59
Subject: Re: [pve-devel] [PATCHv2] Add support for multipath-backed direct
attached storage
Okay, there
Okay, there's a problem: Linux is a little ugly when it comes to deleting an
external LUN.
So we have to unlink (rescan-scsi-bus.sh -r or echo 1 >
/sys/block/sdX/device/delete) all paths to this LUN from all nodes, and
only then we may safely delete it. BTW, iSCSI (not direct) storage is
also affected
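A hedged sketch of the per-path cleanup this implies, i.e. the Perl equivalent of the echo above; the device name is only an example, and it has to be run on every node:

# Illustrative only: drop a single SCSI path, like "echo 1 > /sys/block/sdX/device/delete".
sub delete_scsi_path {
    my ($dev) = @_;    # e.g. 'sdc' (example name)
    my $path = "/sys/block/$dev/device/delete";
    open(my $fh, '>', $path) or die "cannot open $path: $!";
    print $fh "1\n";
    close($fh);
}

delete_scsi_path('sdc');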
promises this will get merged.
>
> On Thu, Jun 30, 2016 at 01:21:27PM +0300, Dmitry Petuhov wrote:
>> This patch series adds storage type "mpdirect". Before using it,
>> multipath-tools
>> package must be installed. After that, multipathd will create
>
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage/MPDirectPlugin.pm | 63 ++-
1 file changed, 50 insertions(+), 13 deletions(-)
diff --git a/PVE/Storage/MPDirectPlugin.pm b/PVE/Storage/MPDirectPlugin.pm
index 76ced94..e9f30f2
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
www/manager6/lxc/ResourceEdit.js | 3 ++-
www/manager6/qemu/Clone.js | 1 +
www/manager6/qemu/HDEdit.js | 3 ++-
www/manager6/qemu/HDMove.js | 1 +
4 files changed, 6 insertions(+), 2 deletions(-)
diff --git
29.06.2016 18:58, Dietmar Maurer wrote:
>>> I wonder if this is maybe a too general approach to what you're really
>>> after?
>> General approaches are cool. We will be able to use many
>> enterprise-grade storages with single universal plugin. Also later we'll
>> be able to control them via
with single universal plugin. Also later we'll
be able to control them via LunCmd/* specialised modules.
A few inline comments follow:
On Wed, Jun 29, 2016 at 02:00:42PM +0300, Dmitry Petuhov wrote:
Signed-off-by: Dmitry Petuhov <mityapetu...@gmail.com>
---
PVE/Storage.pm
This patch series adds the storage type "mpdirect". Before using it, the multipath-tools
package must be installed. After that, multipathd will create
/dev/disk/by-id/wwn-0x*
devices, which can be directly attached to VMs via GUI. Partitions are ignored.
I'm not skilled in UI programming, so there's no
.
27.06.2016 12:24, Wolfgang Link wrote:
Alexandre has written such a plugin for netapp, may be it fits to your
needs.
https://forum.proxmox.com/threads/proxmox-netapp-storage-plugin.20966/#post-112335
On 06/27/2016 11:14 AM, Dmitry Petuhov wrote:
I want to write a storage plugin for my Netapp
I want to write a storage plugin for my NetApp iSCSI SAN. But I'm a little
confused about what to use as a base.
ISCSIDirectPlugin.pm is the first candidate, but currently it looks not very
usable. Do I understand correctly that the only way to use it is to manually
create a LUN on the SAN and write its number in the VM
16.02.2016 13:20, Dietmar Maurer wrote:
Storage Backend is ceph using 2x 10Gbit/s and i'm able to read from it
with 500-1500MB/s. See below for an example.
The backup process reads 64KB blocks, and it seems this slows down ceph.
This is a known behavior, but I found no solution to speed it up.
Hi. I have a cluster with shared FC storage. On top of it I created a
clustered LVM group and added it as storage to PVE (currently the latest
from pve-no-subscription). The problem is that newly created machines
cannot start with the error: TASK ERROR: can't activate LV
Damn. Sorry, thunderbird broke formatting. Again.
Hi.
I have a cluster with shared FC storage. On top of it I created a clustered
LVM group and added it as storage to PVE (currently the latest from
pve-no-subscription).
The problem is that newly created machines cannot start with the error:
TASK
30.11.2015 15:17, Dietmar Maurer wrote:
The problem is that newly created machines cannot start with the error:
TASK ERROR: can't activate LV '/dev/EVA6000-1-2TB/vm-101-disk-1': Error
locking on node 1: Device or resource busy
I cannot reproduce that bug here. I can create and start a VM on
22.07.2015 12:42, Alexandre DERUMIER wrote:
just found this:
http://openvswitch.org/support/ovscon2014/17/1030-conntrack_nat.pdf
I'll try to look at this in the next months. (ovs firewall without
iptables/bridge trick)
Maybe better to adopt nftables in PVE 4.0? It works on all network layers
Maybe run pigz with nice, always with the maximum number of cores? Then it will
always run at max speed while not bothering the main processes.
11.07.2015 10:25, Cesar Peschiera wrote:
Yes, that makes sense to me.
Or maybe the PVE GUI can also have a third option to use pigz, and the user
also can
25.03.2015 09:18, Dietmar Maurer wrote:
(You can plug 256 drives on 1 controller,
so with 1 disk per controller, I can only manage 14 drives (which is the
limit of the gui currently))
We slowly run out of free PCI slots ...
So if we add this patch, we need a way to add another pci bus - any idea?
25.03.2015 11:09, Dietmar Maurer wrote:
And I think it is maybe better to have different scsihw types:
Maybe better to always have a thread per controller and the ability to manually
add controllers,
and a selection of which controller to connect each drive to.
Like:
scsihw0: virtio-scsi-pci
scsihw1:
27.11.2014 10:46, Stefan Priebe - Profihost AG wrote:
Am 26.11.2014 um 21:55 schrieb Dietmar Maurer:
does anybody know if it is intended that qemu uses a full core while hanging in
the boot process / looping.
AFAIK BIOS code uses busy waiting.
Any solution or fix for this? This is really
27.10.2014 16:15, Cesar Peschiera wrote:
@Dmitry:
Excuse me please, I did not express myself properly. What I meant is that with
130,000 IP addresses and 1 rule in iptables, this rule will check 130,000 IP
addresses, and in this case I believe that this firewall will be very slow
because for each
27.10.2014 0:31, Cesar Peschiera wrote:
I guess that your firewall will not be functioning optimally if you add the
130,000 rules in ipset, because for each network packet the firewall must
do 130,000 checks.
What? Did you mean a plain list of single-address rules? Because IPSET
20.10.2014 02:20, Lindsay Mathieson wrote:
True and in my case I wonder if the problem is my NAS. Backups to
locally attached USB drive are no problem, but I've always had
problems backing up to my NAS over GB ethernet, using NFS or SMB. I'd
often get 1 or 2 VM's failing with disk write
16.10.2014 22:33, VELARTIS Philipp Dürhammer wrote:
Why do backups with ceph make so high iops?
I get around 600 iops for 40mb/sec which is by the way very slow for a backup.
When I make a disk clone from local to ceph I get 120mb/sec (which is the
network limit from the old proxmox nodes) and
zero block on read.
For restore there is no problem.
For drive-mirror writes to the target, it's also missing a feature to skip
zero-block writes.
- Original Message -
From: Dmitry Petuhov mityapetu...@gmail.com
To: pve-devel@pve.proxmox.com
Sent: Friday, 17 October 2014 08:04:04
Subject: Re: [pve-devel
Still no errors. All seems OK.
08.09.2014 21:03, Dmitry Petuhov wrote:
After 2 days of torture the VMs are still starting OK.
Maybe it's not enough. Tomorrow I'll add some load to the host to increase
memory pressure.
05.09.2014 12:40, Alexandre DERUMIER wrote:
Hi,
here the kernel with the patch
Please test!
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Dmitry Petuhov mityapetu...@gmail.com
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, 4 September 2014 15:51:17
Subject: Re: [pve-devel] [PATCH] vhost-net: extend device allocation to vmalloc
Thanks,
I'll build a kernel
.odiso.net/pve-kernel-3.10.0-4-pve_3.10.0-17_amd64.deb
Please test!
- Original Message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Dmitry Petuhov mityapetu...@gmail.com
Cc: pve-devel@pve.proxmox.com
Sent: Thursday, 4 September 2014 15:51:17
Subject: Re: [pve-devel] [PATCH] vhost-net: extend
06.09.2014 17:39, Alexandre DERUMIER wrote:
Now I'll start the workload that
caused memory fragmentation last time (copying hundreds of gigabytes of small
files via cephfs) for a couple of days to try to reproduce it.
Do you use cephfs on the proxmox host? If yes, for what?
As storage :).
The MDS is on one of the guests
Yes, the patch is mostly unchanged; the only difference is in this piece:
--8<--
+@@ -736,7 +749,7 @@ static int vhost_net_open(struct inode *
+ }
+ r = vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX);
+ if (r < 0) {
+- kfree(n);
++ vhost_net_free(n);
+ kfree(vqs);
+ return r;
+ }
--8<--
There's no such
Backport upstream commit 23cc5a991c7a9fb7e6d6550e65cee4f4173111c5 to the
3.10 kernel.
In upstream, this code is modified later in patch
d04257b07f2362d4eb550952d5bf5f4241a8046d, but it's not applicable to 3.10
because there's still no open-coded kvfree() function (it appeared in
v3.15-rc5).
Should
Yes.
04.09.2014 17:51, Alexandre DERUMIER wrote:
Thanks,
I'll build a kernel for you tomorrow,
do you have time to test it ?
- Original Message -
From: Dmitry Petuhov mityapetu...@gmail.com
To: pve-devel@pve.proxmox.com
Sent: Thursday, 4 September 2014 12:23:18
Subject: [pve-devel] [PATCH