On 20.01.2017 at 08:20, Yann Ylavic wrote:
> Hi,
>
> On Fri, Jan 20, 2017 at 8:03 AM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>> Hi Stefan,
>>
>> On 19.01.2017 at 22:44, Stefan Eissing wrote:
>>> this seems to be a tough
ve just a bunch of live webshops producing this. So
just real users - most probably GET + POST.
Stefan
>
> Thanks for the help!
>
> Cheers,
>
> Stefan
>
>> On 19.01.2017 at 22:01, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>
>&
Stefan
On 19.01.2017 at 21:48, Stefan Eissing wrote:
> On top please. There is only one way: forward!
>
>> On 19.01.2017 at 21:47, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>
>>
>> On 19.01.2017 at 21:39, Stefan Eissing wrote:
>&g
On 19.01.2017 at 21:39, Stefan Eissing wrote:
> Thanks, Stefan. Can you give the attached patch a try?
Sure. On top of the last one? Or should I drop it?
Stefan
>> On 19.01.2017 at 19:33, Stefan Priebe <s.pri...@profihost.ag> wrote:
>>
>> Here some more segfaul
Greets,
Stefan
On 19.01.2017 at 16:47, Stefan Priebe - Profihost AG wrote:
Argh, sorry, my fault.
Here is a complete trace:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x7fc1c23e0f23 in apr_brigade_length () from
/usr/lib/x86_64-linux-gnu/libaprutil-1.so.0
(gdb) bt
#0
0x7fc1c1cc262d in clone () from /lib/x86_64-linux-gnu/libc.so.6
Stefan
On 19.01.2017 at 16:45, Stefan Priebe - Profihost AG wrote:
>
> On 19.01.2017 at 16:34, Stefan Eissing wrote:
>> Yann might already have asked this: any chance to compile with symbols and
>> get a more readable stacktr
o pass a specific option to configure
Stefan
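For backtraces that resolve to function names instead of `?? ()` frames, httpd can be rebuilt with debug symbols. A minimal sketch, assuming a source build; the prefix, configure flags, and core path below are assumptions, not taken from this thread:

```shell
# Hypothetical rebuild with debug info: -g keeps symbols, -O0 stops the
# optimizer from collapsing frames, so gdb can resolve every function.
CFLAGS="-g -O0" ./configure --prefix=/usr/local/apache2 \
    --enable-http2 --with-mpm=event
make && make install

# On the next crash, dump all frames with local variables from the core:
gdb /usr/local/apache2/bin/httpd /path/to/core -batch -ex "bt full"
```

With symbols present, the `apr_brigade_length ()` frame above would be followed by named httpd/mod_http2 frames rather than bare addresses.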
>> On 19.01.2017 at 16:30, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>
>> With stock 2.4.25 + patch I'm getting this one again:
>> (gdb) bt
>> #0 0x00521dcd in h2_stream_out_pre
in ?? ()
#7 0x in ?? ()
Stefan
On 19.01.2017 at 16:28, Stefan Priebe - Profihost AG wrote:
> I'm now testing stock 2.4.25 + patch.
>
> Might this configure option have an influence?
> --enable-nonportable-atomics=yes
>
> Greets,
> Stefan
>
> On 19.01
I'm now testing stock 2.4.25 + patch.
Might this configure option have an influence?
--enable-nonportable-atomics=yes
Greets,
Stefan
On 19.01.2017 at 15:35, Yann Ylavic wrote:
> Hi,
>
> On Thu, Jan 19, 2017 at 3:00 PM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag>
:35, Yann Ylavic wrote:
> Hi,
>
> On Thu, Jan 19, 2017 at 3:00 PM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>>
>> @Yann:
>> should I use V7 or V6?
>
> I'd prefer you'd use none (such that we can verify the patch with
> stock
()
#17 0x0004 in ?? ()
#18 0x7f4c206f50a0 in ?? ()
#19 0x7f4c22fec3c0 in ?? ()
#20 0x in ?? ()
I've now removed every mpm_event patch and will try again with v1.8.8 +
your patch.
Stefan
>
>> On 19.01.2017 at 15:00, Stefan Priebe - Profihost AG wrote:
>> > Cheers, Stefan
>
>
>
>
>
>> On 19.01.2017 at 12:45, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>
>> On 19.01.2017 at 11:56, Stefan Eissing wrote:
>>> Stefan,
>>>
>>> yes, that is a known one that wa
00 in ?? ()
Greets,
Stefan
>
> Cheers,
>
> Stefan
>
>> On 19.01.2017 at 11:39, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>
>> Hi,
>>
>> with V7 and mod_http2 core I'm always seeing exactly THIS trace and
>>
b6599100a0 in ?? ()
#5 0x7fb659910f70 in ?? ()
#6 0x7fb65bfeeac0 in ?? ()
#7 0x in ?? ()
not all the others like with v1.8.8. So could this be a different one?
Stefan
On 19.01.2017 at 09:21, Stefan Priebe - Profihost AG wrote:
> And another one:
>
> #0
in ?? ()
#3 0x7f7d7112ca8c in ?? ()
#4 0x7f7d7112ca90 in ?? ()
#5 0x7f7d60a4bad0 in ?? ()
#6 0x in ?? ()
On 19.01.2017 at 09:20, Stefan Priebe - Profihost AG wrote:
> Hi,
> On 19.01.2017 at 09:11, Yann Ylavic wrote:
>> On Thu, Jan 19, 2017 at 9:05 AM, S
Hi,
On 19.01.2017 at 09:11, Yann Ylavic wrote:
> On Thu, Jan 19, 2017 at 9:05 AM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>> With a vanilla apache 2.4.25 I got this one:
>>
>> Core was generated by `/usr/local/apache2/bin/httpd -k start'.
>
sources. Might this be relevant?
Still not observing any crashes running apache 2.4.23.
Greets,
Stefan
On 19.01.2017 at 08:22, Stefan Priebe - Profihost AG wrote:
> Hi Stefan,
>
> Apache 2.4.25 + mpm_event V7 and core mod_http2:
> Core was generated by `/usr/local/apache2/bin/ht
On 19.01.2017 at 08:26, Stefan Eissing wrote:
> Will look into this. With which patch do you have it most frequently?
It happens pretty fast while using:
Apache 2.4.25 + mpm_event V6 + mod_http2 V1.8.8
Greets,
Stefan
> Cheers, Stefan
>
>> On 19.01.2017 at 08:22,
mod_http2 and mod_http2 v1.8.8.
Is a regression in mod_http2 possible?
Greets,
Stefan
On 19.01.2017 at 07:55, Stefan Priebe - Profihost AG wrote:
> Dear Stefan,
> dear Yann,
>
> a longer test run shows it's also crashing without any mpm_event patch
> at all. So I'm really sorry.
in 2.4.25 itself or in mod_http2.
Stefan
On 19.01.2017 at 00:52, Yann Ylavic wrote:
> On Wed, Jan 18, 2017 at 10:49 PM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>>
>> v5 does not apply to 2.4.25. If you can send me a v5 version that
>> applies
On 18.01.2017 at 22:53, Yann Ylavic wrote:
> On Wed, Jan 18, 2017 at 10:50 PM, Yann Ylavic <ylavic@gmail.com> wrote:
>> On Wed, Jan 18, 2017 at 10:44 PM, Yann Ylavic <ylavic@gmail.com> wrote:
>>> On Wed, Jan 18, 2017 at 10:23 PM, Stefan Priebe - Profihost
On 18.01.2017 at 22:50, Yann Ylavic wrote:
> On Wed, Jan 18, 2017 at 10:44 PM, Yann Ylavic <ylavic@gmail.com> wrote:
>> On Wed, Jan 18, 2017 at 10:23 PM, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>>
>>
>> Also, do
On 18.01.2017 at 22:44, Yann Ylavic wrote:
> On Wed, Jan 18, 2017 at 10:23 PM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>>
>> sadly it does not solve the issue.
>
> OK, thanks, back to the paper.
>
> Any [error] log maybe?
What kind of log?
I also tried to remove rev 1762701 from the v6 patchset, but it does not
apply cleanly.
Stefan
On 18.01.2017 at 22:23, Stefan Priebe - Profihost AG wrote:
> Hi Yann,
>
> sadly it does not solve the issue.
>
> Trace looks still the same:
> Core was generated by `/usr/local/apache2/
:38 PM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>> OK, I'll wait to see whether you provide a fix or I should revert that part.
>
> Patch updated in PR 57399.
>
> Thanks,
> Yann.
>
OK, I'll wait to see whether you provide a fix or I should revert that part.
Stefan
Excuse my typo sent from my mobile phone.
> On 18.01.2017 at 16:32, Yann Ylavic <ylavic@gmail.com> wrote:
>
> On Wed, Jan 18, 2017 at 4:17 PM, Stefan Priebe - Profihost AG
> <s.pri..
Hi Yann,
is it enough to revert that commit?
http://svn.apache.org/r1762701
Stefan
On 18.01.2017 at 12:49, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> The site is mostly used with http2, so it may be totally unrelated to
> mod_http2. Sorry for the noise then. I just thought i
On 18.01.2017 at 12:49, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> The site is mostly used with http2, so it may be totally unrelated to
> mod_http2. Sorry for the noise then. I just assumed it from digging
> through the traces.
>
> Stefan
>
> On 18.01.2017 at 12:34
Hi,
The site is mostly used with http2, so it may be totally unrelated to
mod_http2. Sorry for the noise then. I just assumed it from digging
through the traces.
Stefan
On 18.01.2017 at 12:34, Yann Ylavic wrote:
> Hi Stefan,
>
> On Wed, Jan 18, 2017 at 11:33 AM, Stefan Priebe - Pro
0x7fd2d82140a0 in ?? ()
#17 0x0004 in ?? ()
#18 0x7fd2d82140a0 in ?? ()
#19 0x7fd2c97e93c0 in ?? ()
#20 0x in ?? ()
Greets,
Stefan
On 17.01.2017 at 21:53, Stefan Priebe wrote:
> Hi Yann,
>
> while testing V6 I'm experiencing segfaults.
>
&
Hi Yann,
while testing V6 I'm experiencing segfaults.
exit signal Segmentation
server-error.log:
AH00052: child pid 14110 exit signal Segmentation fault (11)
Currently I'm trying to grab a core dump.
Greets,
Stefan
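Getting a core at all usually needs a couple of knobs first. A sketch, assuming a root shell and a source-built httpd; the directory names here are made up for illustration (CoreDumpDirectory is the real httpd directive):

```shell
# Allow crashing children to write core files at all.
ulimit -c unlimited

# Give the unprivileged httpd children a writable dump location;
# then point httpd at it in httpd.conf:
#   CoreDumpDirectory /tmp/apache-cores
mkdir -p /tmp/apache-cores && chmod 1777 /tmp/apache-cores

# Name cores per executable and pid so parallel crashes don't collide
# (kernel-side setting).
echo "/tmp/apache-cores/core.%e.%p" > /proc/sys/kernel/core_pattern
```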
On 26.12.2016 at 21:18, Stefan Priebe - Profihost AG wrote:
On 23.12.2016
https://www.marc.info/?l=linux-btrfs&m=148338312525137&w=2
Stefan
> Thanks,
> Qu
>
> On 12/31/2016 03:31 PM, Stefan Priebe - Profihost AG wrote:
>> Any news on this series? I can't see it in 4.9 nor in 4.10-rc
>>
>> Stefan
>>
>> On 11.11.2016 at 09:39, Wang
Any news on this series? I can't see it in 4.9 nor in 4.10-rc
Stefan
On 11.11.2016 at 09:39, Wang Xiaoguang wrote:
> With compression enabled, Stefan Priebe often got ENOSPC errors
> though the fs still has much free space. Qu Wenruo also has submitted a
> fstests test case
Hi,
W: Failed to fetch
http://download.proxmox.com/debian/dists/jessie/pve-no-subscription/binary-amd64/Packages
Hash Sum mismatch
Stefan
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
On 23.12.2016 at 01:41, Yann Ylavic wrote:
> Hi Stefan,
>
> On Tue, Dec 20, 2016 at 1:52 PM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>>
>> Today I had another server giving no answers to any requests. apache
>> fullstatus did not re
Hi Yann,
here is a follow-up on this old thread with some problems again. See below.
On 23.09.2016 at 21:26, Stefan Priebe - Profihost AG wrote:
> Hi Yann,
> Hi Luca,
>
> right now I had another server which shows:
> AH00485: scoreboard is full, not at MaxRequestWorkers msg.
&
On 19.12.2016 at 12:03, Stefan Hajnoczi wrote:
> On Fri, Dec 16, 2016 at 10:00:36PM +0100, Stefan Priebe - Profihost AG wrote:
>>
>> On 15.12.2016 at 07:46, Alexandre DERUMIER wrote:
>>> does rolling back the kernel to the previous version fix the problem?
>>
>>
On 19.12.2016 at 08:40, Fabian Grünbichler wrote:
> On Mon, Dec 19, 2016 at 07:23:35AM +0100, Stefan Priebe - Profihost AG wrote:
>> Anything wrong or a bug?
>>
>> Greets,
>> Stefan
>
> Nothing wrong. Unlocking a VM is possible with the special "qm unloc
I think you can simply remove it. It's already upstream and I'm not sure if
there are users in 3.10.
Thanks.
Stefan
Excuse my typo sent from my mobile phone.
> On 19.12.2016 at 07:08, Dietmar Maurer wrote:
>
> Hi Stefan,
>
> I just updated the 3.10.0 kernel:
>
>
Anything wrong or a bug?
Greets,
Stefan
Excuse my typo sent from my mobile phone.
> On 16.12.2016 at 22:19, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>
> Hello,
>
> is there a way to unlock a VM through the API?
>
> I tried it this way bu
Hello,
is there a way to unlock a VM through the API?
I tried it this way but this does not work:
pve:/nodes/testnode1/qemu/100> set config -delete lock
VM is locked (migrate)
Greets,
Stefan
ing another
profile like throughput-performance everything is fine again.
Greets,
Stefan
>
> I'm not sure if "perf" could give you some hints
> - Original message -
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "aderumier"
Maybe it could be related to the vhost-net module too.
>
>
> - Original message -----
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "qemu-devel" <qemu-devel@nongnu.org>
> Sent: Wednesday, 14 December 2016 16:04:08
> Subject: [
Hello,
after upgrading a cluster (OS, Qemu, ...) I'm experiencing slow and
volatile network speeds inside my VMs.
Currently I've no idea what causes this, but it's related to the host
upgrades. Before, I was running Qemu 2.6.2.
I'm using virtio for the network cards.
Greets,
Stefan
Hello,
after upgrading a PVE cluster from 3.4 to 4.4 I have some higher-volume
VMs which have high ping times even when they're on the same node, and
slow network speed tested with iperf.
Has anybody seen something like this before?
--- 192.168.0.11 ping statistics ---
20 packets transmitted, 20
Hi,
since starting to upgrade some nodes to PVE 4.x I've seen that a lot of
them have a failed watchdog-mux service.
Is there any reason why this one is enabled by default?
# systemctl --failed
UNIT                   LOAD   ACTIVE SUB    DESCRIPTION
● watchdog-mux.service loaded failed failed
Isn't there a way to move free space to unallocated space again?
On 03.12.2016 at 05:43, Andrei Borzenkov wrote:
> On 01.12.2016 18:48, Chris Murphy wrote:
>> On Thu, Dec 1, 2016 at 7:10 AM, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>>
>>&
On 01.12.2016 at 16:48, Chris Murphy wrote:
> On Thu, Dec 1, 2016 at 7:10 AM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>>
>> On 01.12.2016 at 14:51, Hans van Kranenburg wrote:
>>> On 12/01/2016 09:12 AM, Andrei Borzenkov wrote:
>>>
On 01.12.2016 at 14:51, Hans van Kranenburg wrote:
> On 12/01/2016 09:12 AM, Andrei Borzenkov wrote:
>> On Thu, Dec 1, 2016 at 10:49 AM, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>> ...
>>>
>>> Custom 4.4 kernel with patches up t
On 01.12.2016 at 09:12, Andrei Borzenkov wrote:
> On Thu, Dec 1, 2016 at 10:49 AM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
> ...
>>
>> Custom 4.4 kernel with patches up to 4.10. But i already tried 4.9-rc7
>> which does the same.
>>
&
On 01.12.2016 at 00:02, Chris Murphy wrote:
> On Wed, Nov 30, 2016 at 2:03 PM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>> Hello,
>>
>> # btrfs balance start -v -dusage=0 -musage=1 /ssddisk/
>> Dumping filters: flags 0x7, state 0x
Hello,
# btrfs balance start -v -dusage=0 -musage=1 /ssddisk/
Dumping filters: flags 0x7, state 0x0, force is off
DATA (flags 0x2): balancing, usage=0
METADATA (flags 0x2): balancing, usage=1
SYSTEM (flags 0x2): balancing, usage=1
ERROR: error during balancing '/ssddisk/': No space left on
On 29.11.2016 at 10:29, Dietmar Maurer wrote:
>> So it seems that the whole firewall breaks if something somewhere
>> is wrong.
>>
>> I think especially for the firewall it's important to just skip that
>> line but process all other values.
>
> That is how it should work. If there is a
On 29.11.2016 at 10:24, Fabian Grünbichler wrote:
> On Tue, Nov 29, 2016 at 10:10:53AM +0100, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> today I've noticed that the firewall is nearly inactive on a node.
>>
>> systemctl status says:
>> Nov 29 10
-v4_swap PVEFW-120-letsencrypt-v4
flush PVEFW-120-letsencrypt-v4_swap
destroy PVEFW-120-letsencrypt-v4_swap
which fails:
ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
CIDR parameter of the IP address is invalid
Stefan
On 29.11.2016 at 10:10, Stefan Priebe - Profihost
Hello,
today I've noticed that the firewall is nearly inactive on a node.
systemctl status says:
Nov 29 10:07:05 node2 pve-firewall[2534]: status update error:
ipset_restore_cmdlist: ipset v6.23: Error in line 3: The value of the
CIDR parameter of the IP address is invalid
Nov 29 10:07:14 node2
On 23.11.2016 at 19:23, Holger Hoffstätte wrote:
> On 11/23/16 18:21, Stefan Priebe - Profihost AG wrote:
>> On 04.11.2016 at 20:20, Liu Bo wrote:
>>> If we have
>>>
>>> |0--hole--4095||4096--preallocate--12287|
>>>
>>> instead of using prea
Hi,
sorry, the last mail was from the wrong box.
On 04.11.2016 at 20:20, Liu Bo wrote:
> If we have
>
> |0--hole--4095||4096--preallocate--12287|
>
> instead of using preallocated space, an 8K direct write will just
> create a new 8K extent and it'll end up with
>
> |0--new
On 22.11.2016 at 14:38, Fabian Grünbichler wrote:
> On Tue, Nov 22, 2016 at 01:57:47PM +0100, Fabian Grünbichler wrote:
>> On Tue, Nov 22, 2016 at 01:09:17PM +0100, Stefan Priebe - Profihost AG wrote:
>>> Hi,
>>>
>>> On 22.11.2016 at 12:26, Fabian Gr
Hi,
On 22.11.2016 at 12:26, Fabian Grünbichler wrote:
> On Tue, Nov 22, 2016 at 12:11:22PM +0100, Stefan Priebe - Profihost AG wrote:
>> On 22.11.2016 at 11:56, Dietmar Maurer wrote:
>>> I think this commit should solve the issue:
>>>
>>> https://git.proxmox
oot certificate, in PEM format)"
With the full chain it's not working. I then removed the whole chain and
put only my final crt into that one, and now it's working fine. With
the full chain $depth was 2 in my case.
Greets,
Stefan
>>> On November 22, 2016 at 11:49 AM Stefan Priebe - Pr
Hi,
while using a custom certificate was working fine for me with V3, I'm
getting the following error message if I'm connected to node X and want
to view the HW tab of a VM running on node Y:
596 ssl3_get_server_certificate: certificate verify failed
Request
Ignore me, my fault...
Stefan
On 22.11.2016 at 10:15, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> in the past / with V3 I was able to move qemu-server VM config files
> around simply with mv.
>
> Under V4 it seems this no longer works; the files automagically move to
&g
Hi,
in the past / with V3 I was able to move qemu-server VM config files
around simply with mv.
Under V4 it seems this no longer works; the files automagically move back to
their old location.
Here an example:
[node4 ~]# for VM in $(ps aux|grep "kvm"|grep -- "-id"|sed -e "s/.*-id
//" -e "s/ .*//"); do
On 17.11.2016 at 08:42, Fabian Grünbichler wrote:
> On Thu, Nov 17, 2016 at 07:01:24AM +0100, Dietmar Maurer wrote:
>>> It is really hard to review patches without descriptions. Please
>>> can you add minimal information?
>>
>> Oh, just saw you sent that in a separate mail - please ignore me!
>
On 17.11.2016 at 07:33, Dietmar Maurer wrote:
> AFAIK we only protect base volumes, and we 'unprotect' that in the code.
> So what is the purpose of this patch?
>
good question ;-) I just remember that I had this situation where I did
clones from snapshots, which resulted in protected snapshots.
On 17.11.2016 at 07:20, Dietmar Maurer wrote:
>> While blessing it is good practice to provide the class. This also makes
>> it possible to use
>> QemuServer as a base / parent class.
>
> Why do you want another class (QemuServer) as base?
We have a custom class PHQemuServer which has
, ext4, btrfs, ...
So there are several cases where you want to shrink a volume, without
downtime of the server.
Greets,
Stefan
>
>> On November 16, 2016 at 8:13 PM Stefan Priebe <s.pri...@profihost.ag> wrote:
>>
>>
>> Signed-off-by: Stefan Priebe <s.p
h.com/issues/15991
> https://github.com/ceph/ceph/pull/9408
Both are closed or marked as refused, so I don't think this will change
in the future.
Greets,
Stefan
> - Original message -----
> From: "Stefan Priebe, Profihost AG" <s.pri...@profihost.ag>
> To: "pv
It took me some time to find that a custom modification was causing a
whole plugin to fail loading.
The warn also hides in the systemctl status -l / journal output. I think
dying is better if a plugin contains an error.
[PATCH] VZDump: die with error if plugin loading fails
Signed-off-by: Stefan Priebe <s.pri...@profihost.ag>
---
PVE/VZDump.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index a4b40ce..42e34d1 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -191,7 +191,7 @@ foreach my $plug (@pve_vzdump_c
Signed-off-by: Stefan Priebe <s.pri...@profihost.ag>
---
PVE/Storage/RBDPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 09562fe..eb0c256 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlu
Signed-off-by: Stefan Priebe <s.pri...@profihost.ag>
---
PVE/Storage/RBDPlugin.pm | 36 +++++++++++++++++++++++++-----------
1 file changed, 25 insertions(+), 11 deletions(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index c1f88e4..09562fe 100644
--- a/PVE/S
While blessing it is good practice to provide the class. This also makes
it possible to use
QemuServer as a base / parent class.
[PATCH] VZDump/QemuServer: set bless class correctly
Signed-off-by: Stefan Priebe <s.pri...@profihost.ag>
---
PVE/VZDump/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 5322b37..6a58a79 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuSer
On 15.11.2016 at 12:07, Ladi Prosek wrote:
> Hi,
>
> On Tue, Nov 15, 2016 at 11:37 AM, Stefan Priebe - Profihost AG
> <s.pri...@profihost.ag> wrote:
>> Hello,
>>
>> On 15.11.2016 at 11:30, Dr. David Alan Gilbert wrote:
>>> * Stefan Priebe - Profihost
Hello,
On 15.11.2016 at 11:30, Dr. David Alan Gilbert wrote:
> * Stefan Priebe - Profihost AG (s.pri...@profihost.ag) wrote:
>> Hello,
>>
>> today I did a first live migration from Qemu 2.6.2 to Qemu 2.7.0. The VM
>> is running Windows and virtio-balloon and with
peration not permitted
>
> mmm, maybe incompatibility between qemu 2.3 (proxmox3) and qemu 2.(7)?
> (proxmox 4).
>
> When I have done the migration, it was between qemu 2.3 and 2.5.
Yes, might be. My migration path was Qemu 2.6.2 => Qemu 2.7.0.
>
>
> - Original message ---
Hello,
today I did a first live migration from Qemu 2.6.2 to Qemu 2.7.0. The VM
is running Windows and virtio-balloon and with machine type pc-i440fx-2.5.
The output of the target qemu process was:
kvm_apic_post_load: Yeh
kvm_apic_post_load: Yeh
kvm_apic_post_load: Yeh
kvm_apic_post_load: Yeh
On 14.11.2016 at 11:32, Stefan Priebe - Profihost AG wrote:
> On 14.11.2016 at 08:28, Alexandre DERUMIER wrote:
>> Hi Stefan,
>> I have sent a howto on the forum
>>
>> https://forum.proxmox.com/threads/howto-proxmox-3-4-4-2-upgrade-with-qemu-live-migration.27348/
On 14.11.2016 at 08:28, Alexandre DERUMIER wrote:
> Hi Stefan,
> I have sent a howto on the forum
>
> https://forum.proxmox.com/threads/howto-proxmox-3-4-4-2-upgrade-with-qemu-live-migration.27348/
THX. Seems not that easy ;-) Will try it.
Stefan
> - Original message
Hello list,
is there a way to do the upgrade without shutting down the VMs? I have
in mind that Alexandre did so.
Greets,
Stefan
Excuse my typo sent from my mobile phone.
Seems the stable kernel line misses this one:
http://git.kernel.org/cgit/linux/kernel/git/jejb/scsi.git/commit/?id=5e5ec1759dd663a1d5a2f10930224dd009e500e8
On 12.11.2016 at 12:47, Stefan Priebe wrote:
Hello,
the mentioned commit introduces a regression for me in v4.4.31. After
upgrading from
Hello,
the mentioned commit introduces a regression for me in v4.4.31. After
upgrading from 4.4.30 to 4.4.31 my megasas controller no longer exports
any drives to the OS.
dmesg:
Nov 12 11:09:54 cloud2-1394 kernel: [ 17.381898] scsi 0:2:13:0:
Direct-Access
On 12.11.2016 at 03:18, Liu Bo wrote:
> On Wed, Nov 09, 2016 at 09:19:21PM +0100, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> found this one from 2014:
>> https://patchwork.kernel.org/patch/5551651/
>>
>> is this still valid?
>
> The spac
Hello,
found this one from 2014:
https://patchwork.kernel.org/patch/5551651/
is this still valid?
On 09.11.2016 at 09:09, Stefan Priebe - Profihost AG wrote:
> Dear list,
>
> even though there's a lot of free space on my disk:
>
> # df -h /vmbackup/
> Filesystem
Dear list,
even though there's a lot of free space on my disk:
# df -h /vmbackup/
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/stripe0-backup   37T   24T   13T  64% /backup
# btrfs filesystem df /backup/
Data, single: total=23.75TiB, used=22.83TiB
System, DUP:
On 08.11.2016 at 10:17, Kees Meijs wrote:
> Hi,
>
> As promised, our findings so far:
>
> * For the time being, the new scrubbing parameters work well.
Which parameters do you refer to? Currently we're on hammer.
> * Using CFQ for spinners and NOOP for SSD seems to spread load over
>
Hi,
currently I've an fs which triggers this one on mount, while originally
having 50% disk free - but btrfs-progs fails too.
# btrfs check --repair -p /dev/vdb1
enabling repair mode
couldn't open RDWR because of unsupported option features (3).
ERROR: cannot open file system
[ 164.378512]
, used=155045216256,
pinned=0, reserved=0, may_use=524288, readonly=65536
Greets,
Stefan
On 29.09.2016 at 09:27, Stefan Priebe - Profihost AG wrote:
> On 29.09.2016 at 09:13, Wang Xiaoguang wrote:
>>>> I found that compression sometimes reports ENOSPC errors even in 4.8-rc8,
>>>
Hello list,
just want to report again that I've not seen a single ENOSPC msg with
this series applied.
Now working fine for 18 days.
Stefan
On 14.10.2016 at 15:09, Stefan Priebe - Profihost AG wrote:
>
> On 06.10.2016 at 04:51, Wang Xiaoguang wrote:
>> This issue was revealed
On 17.10.2016 at 03:50, Qu Wenruo wrote:
> At 10/17/2016 02:54 AM, Stefan Priebe - Profihost AG wrote:
>> On 16.10.2016 at 00:37, Hans van Kranenburg wrote:
>>> Hi,
>>>
>>> On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
>>>>
>>
On 16.10.2016 at 21:48, Hans van Kranenburg wrote:
> On 10/16/2016 08:54 PM, Stefan Priebe - Profihost AG wrote:
>> On 16.10.2016 at 00:37, Hans van Kranenburg wrote:
>>> On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
>>>>
>>>> cp --reflink=
On 16.10.2016 at 00:37, Hans van Kranenburg wrote:
> Hi,
>
> On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
>>
>> cp --reflink=always sometimes takes very long (i.e. 25-35 minutes).
>>
>> An example:
>>
>> source file:
>> #
Hello,
cp --reflink=always sometimes takes very long (i.e. 25-35 minutes).
An example:
source file:
# ls -la vm-279-disk-1.img
-rw-r--r-- 1 root root 204010946560 Oct 14 12:15 vm-279-disk-1.img
target file after around 10 minutes:
# ls -la vm-279-disk-1.img.tmp
-rw-r--r-- 1 root root
Hi,
On 14.10.2016 at 15:19, Stefan Priebe - Profihost AG wrote:
> Dear Julian,
>
>> On 14.10.2016 at 14:26, Julian Taylor wrote:
>> On 10/14/2016 08:28 AM, Stefan Priebe - Profihost AG wrote:
>>> Hello list,
>>>
>>> while running the same workload on two
Dear Julian,
On 14.10.2016 at 14:26, Julian Taylor wrote:
> On 10/14/2016 08:28 AM, Stefan Priebe - Profihost AG wrote:
>> Hello list,
>>
>> while running the same workload on two machines (a single Xeon and a
>> dual Xeon), both with 64GB RAM.
>>
>> I need
ion path.
>
> With this patch, we can fix these false ENOSPC errors for compression.
>
> Signed-off-by: Wang Xiaoguang <wangxg.f...@cn.fujitsu.com>
Tested-by: Stefan Priebe <s.pri...@profihost.ag>
Works fine for 8 days now - no ENOSPC errors anymore.
Greets,
Stefan
ve_meta
>Just as a place holder.
> 2) Increase *accurate* outstanding_extents at set_bit_hooks()
>This is the real increaser.
> 3) Decrease *INACCURATE* outstanding_extents before returning
>This makes outstanding_extents to correct value.
>
> For 128M BTRFS_MAX_EXTENT
Hello list,
While running the same workload on two machines (a single Xeon and a
dual Xeon), both with 64GB RAM,
I need to run echo 3 >/proc/sys/vm/drop_caches every 15-30 minutes to
keep the speed as good as on the non-NUMA system. I'm not sure whether
this is related to NUMA.
Is there any sysctl