Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-02-01 Thread Ian Campbell
On Fri, 2016-01-29 at 15:59 +0000, Andy Smith wrote:
> Hi Ian,
> 
> On Fri, Jan 29, 2016 at 02:57:23PM +0000, Ian Campbell wrote:
> > I spent a bit of time investigating this, but sadly I'm not able to
> > reproduce the basic failure.
> 
> FWIW it was me who reported this with the packages in Debian stable
> (linux-image-3.16.0-4-amd64 3.16.7-ckt20-1+deb8u3,
> xen-hypervisor-4.4-amd64 4.4.1-9+deb8u3) when using
> "dom0_mem=1024M,max:1024M" on the hypervisor command line.
> 
> I must admit I found this bug when searching for the error message,
> and have only been seeing it printed a couple of times at guest
> shutdown, not thousands of times.
> 
> So if having it printed a couple of times isn't considered a bug,
> I'm sorry if I've led you astray here.

No worries, thanks for letting me know.

>  Might be worth finding a way
> to remove it anyway though; anyone having a problem is going to keep
> searching for it thinking it is relevant to their case.

Indeed. That might be tricky to arrange 100% reliably, given how ballooning,
maximums and dynamic changes from backend use play out in practice; it would
be an upstream change in any case.

> At the moment I am avoiding seeing the message at all by running with
> only "dom0_mem=1024M" on the command line. What's the disadvantage
> of not having the "max:1024M" there?

I'm not 100% sure. It looks like it causes no max to be set (LONG_MAX is
the default), which I suppose would allow dom0 (from Xen's PoV; the kernel
might have its own limitations) to balloon to more than 1024M if it tried
to, which would explain why omitting it works around this issue.

Ian.



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-29 Thread Ian Campbell
On Wed, 2016-01-27 at 10:57 +, Ian Campbell wrote:
> 
> I'm still unable to spot what might have changed between 3.16.7-ckt20-
> 1+deb8u2 and 4.3.3-5 though to explain it going away, which I'd still
> quite like to get to the bottom of in order to fix in Jessie.

I spent a bit of time investigating this, but sadly I'm not able to
reproduce the basic failure.

I've tried the combinations below and all are OK. Some of them produce 1 or
2 of the "-17" messages (I should have noted which but didn't, I think it
was most) but in no case did I see thousands of them.

Ian.

for i in $(seq 1 15) ; do xl reboot debian.guest.osstest ; sleep 10s; done

dom0_mem=2048M,max:2056M
L: 3.16.7-ckt20-1+deb8u3
X: 4.6.0-1+nmu1

=> OK

dom0_mem=2048M,max:2048M
L: 3.16.7-ckt20-1+deb8u3
X: 4.6.0-1+nmu1

=> OK

dom0_mem=2048M,max:2048M
L: 3.16.7-ckt9-3~deb8u1
X: 4.6.0-1+nmu1

=> OK

-

for i in $(seq 1 15) ; do xl create /etc/xen/debian.guest.osstest.cfg ; sleep 10s ; xl shutdown -w debian.guest.osstest ; sleep 5s ; done

dom0_mem=2048M,max:2048M
L: 3.16.7-ckt9-3~deb8u1
X: 4.6.0-1+nmu1

=> OK

dom0_mem=2048M,max:2048M
L: 3.16.7-ckt9-3~deb8u1
X: 4.4.1-9+deb8u3

=> OK



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-29 Thread Andy Smith
Hi Ian,

On Fri, Jan 29, 2016 at 02:57:23PM +0000, Ian Campbell wrote:
> I spent a bit of time investigating this, but sadly I'm not able to
> reproduce the basic failure.

FWIW it was me who reported this with the packages in Debian stable
(linux-image-3.16.0-4-amd64 3.16.7-ckt20-1+deb8u3,
xen-hypervisor-4.4-amd64 4.4.1-9+deb8u3) when using
"dom0_mem=1024M,max:1024M" on the hypervisor command line.

I must admit I found this bug when searching for the error message,
and have only been seeing it printed a couple of times at guest
shutdown, not thousands of times.

So if having it printed a couple of times isn't considered a bug,
I'm sorry if I've led you astray here. Might be worth finding a way
to remove it anyway though; anyone having a problem is going to keep
searching for it thinking it is relevant to their case.

At the moment I am avoiding seeing the message at all by running with
only "dom0_mem=1024M" on the command line. What's the disadvantage
of not having the "max:1024M" there?

Cheers,
Andy



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-27 Thread Ian Campbell
On Tue, 2016-01-26 at 19:46 +0200, KSB wrote:
> > This is actually useful, because it shows that the issue occurs even
> > with
> > Xen 4.6, which I think rules out a Xen side issue (otherwise we'd have
> > had
> > lots more reports from 4.4 through to 4.6) and points to a kernel side
> > issue somewhere.
> > 
> > > But I checked logs more thoroughly and found it even on more recent
> > > kernels:
> > > 1) Lot of messages on 3.14-2-amd64 with xen-4.6, 13 domU's.
> > 
> > Just to be clear, "Lots" here means "hundreds or thousands"? I think it
> > is
> > expected to see one or two around the time a VM is started or stopped,
> > so
> > with 13 domUs a couple of dozen messages wouldn't seem out of line to
> > me.
> > 
> pkg 3.14.15-2
> ~1600 from last dmesg cleanup which was 23h ago, but all of them 
> distributed in last 15h
> 
> 
> > > 2) 4.3.0-1-amd64 xen-4.6, only two messages shortly after boot, only
> > > 1
> > > domU running:
> > > [   12.473778] xen:balloon: Cannot add additional memory (-17)
> > > [   21.673298] xen:balloon: Cannot add additional memory (-17)
> > > uptime 17 days.
> > > 
> > > Previous on same machine was 4.2.0-1-amd64 with more (-17)'s
> > 
> > Was it running xen-4.6 when it was running 4.2.0 or was that also
> > older?
> 
> 4.3.3-5 xen-4.6.0 and previous 4.2.6-1 xen-4.4.1

Thanks. And just to clarify, with Linux 4.2.6-1 xen-4.4.1 you were or were
not seeing this issue?

To summarise what I can tell from this bug log the following combinations
are/are not prone to this issue:

Kernel                 Xen             Affected?
3.14.15-2              xen-4.6.0       Y [1]
3.16.7-ckt7-1          xen-4.1         N [1]
3.16.7-ckt9-3~deb8u1   xen-???         Y [2]
3.16.7-ckt20-1+deb8u2  4.4.1-9+deb8u3  Y [3]
4.2.6-1                xen-4.4.1       ? [1]
4.3.3-5                xen-4.6.0       N, N [1]
4.3.3-7                xen-4.6.0       N [1]

[1] KSB
[2] ML (original report, Xen version unknown)
[3] AS (with dom0_mem=1024M,max:1024M, but not dom0_mem=1024M)

The N for xen-4.1 + linux-3.16.7-ckt7-1 (KSB's #4) seems anomalous. Perhaps
that version is susceptible but not exhibiting it during the span of the logs.

The ? for xen-4.4.1 + linux-4.2.6-1 is the "just to clarify" above.

In any case it does appear to correlate with the Linux version and not the Xen 
version, and it does appear to be fixed in 4.3.3-5, or possibly even 4.2.6-1.

I'm still unable to spot what might have changed between 3.16.7-ckt20-1+deb8u2 
and 4.3.3-5 though to explain it going away, which I'd still quite like to get 
to the bottom of in order to fix in Jessie.

Thanks,


Ian.



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-27 Thread Chad Dougherty

On 2016-01-27 05:57, Ian Campbell wrote:

> To summarise what I can tell from this bug log the following combinations
> are/are not prone to this issue:
>
> Kernel                 Xen             Affected?
> 3.14.15-2              xen-4.6.0       Y [1]
> 3.16.7-ckt7-1          xen-4.1         N [1]
> 3.16.7-ckt9-3~deb8u1   xen-???         Y [2]
> 3.16.7-ckt20-1+deb8u2  4.4.1-9+deb8u3  Y [3]
> 4.2.6-1                xen-4.4.1       ? [1]
> 4.3.3-5                xen-4.6.0       N, N [1]
> 4.3.3-7                xen-4.6.0       N [1]
>
> [1] KSB
> [2] ML (original report, Xen version unknown)
> [3] AS (with dom0_mem=1024M,max:1024M, but not dom0_mem=1024M)



Although it may not add a lot to the situation at this point, you can 
add my configuration as being affected:

- kernel 3.16.7-ckt11-1+deb8u5
- Xen 4.4.1-9+deb8u2
- autoballoon="off" in xl.conf
- GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:2056M" (and also just 
"dom0_mem=2048M").


--
-Chad



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-26 Thread Ian Campbell
On Mon, 2016-01-25 at 20:36 +0200, KSB wrote:
> > Do you have a package version which you know to be good? How confident
> > are
> > you that it is ok (sometimes the problem is intermittent)?
> > 
> > Lastly, is there any chance you upgraded the Xen packages at the same
> > time?
> > I'm starting to wonder if maybe this is not a kernel issue.
> > 
> Sorry, but there is chance, sadly.

This is actually useful, because it shows that the issue occurs even with
Xen 4.6, which I think rules out a Xen side issue (otherwise we'd have had
lots more reports from 4.4 through to 4.6) and points to a kernel side
issue.

> But I checked logs more thoroughly and found it even on more recent
> kernels:
> 1) Lot of messages on 3.14-2-amd64 with xen-4.6, 13 domU's.

Just to be clear, "Lots" here means "hundreds or thousands"? I think it is
expected to see one or two around the time a VM is started or stopped, so
with 13 domUs a couple of dozen messages wouldn't seem out of line to me.

> 2) 4.3.0-1-amd64 xen-4.6, only two messages shortly after boot, only 1 
> domU running:
> [   12.473778] xen:balloon: Cannot add additional memory (-17)
> [   21.673298] xen:balloon: Cannot add additional memory (-17)
> uptime 17 days.
> 
> Previous on same machine was 4.2.0-1-amd64 with more (-17)'s

Was it running xen-4.6 when it was running 4.2.0 or was that also older?

Also 4.2.0-1-amd64 here (and all the other numbers you gave) is the ABI,
not the package version. The package version is either in the dpkg
database or you can find it in /proc/version:

Linux version 4.1.0-2-amd64 (debian-kernel@lists.debian.org) (gcc version 4.9.3 
(Debian 4.9.3-3) ) #1 SMP Debian 4.1.6-1 (2015-08-23)

Here "4.1.0-2-amd64" is the ABI and "4.1.6-1" is the package VERSION.
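A quick way to pull the two apart in a shell (a sketch: the banner string is the sample quoted above, and the sed pattern assumes Debian's standard version banner; on a live system you would use `banner="$(cat /proc/version)"` instead):

```shell
# Split the ABI and the Debian package version out of a kernel version banner.
# The sample banner is the one quoted above.
banner='Linux version 4.1.0-2-amd64 (debian-kernel@lists.debian.org) (gcc version 4.9.3 (Debian 4.9.3-3) ) #1 SMP Debian 4.1.6-1 (2015-08-23)'

abi=$(echo "$banner" | awk '{print $3}')                        # third field: the ABI
pkg=$(echo "$banner" | sed -n 's/.*Debian \([^ ]*\) (.*$/\1/p') # after the last "Debian "

echo "ABI:     $abi"   # ABI:     4.1.0-2-amd64
echo "package: $pkg"   # package: 4.1.6-1
```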

> 3) 4.3.0-1-amd64, one month, several reboots, average 4 domU's, and no 
> messages

Any idea which Xen?

> 4) 3.16.0-4-amd64, xen-4.1, 22 domU's, uptime 188 days, in last month I 
> see only
> Jan 7 14:12:08
> Jan 7 14:12:08
> Jan 7 14:12:08
> Jan 7 14:12:08
> Jan 7 14:27:47
> Jan 7 14:27:47
> Jan 7 14:27:47
> Jan 7 14:27:48
> and this is roughly the time last machine was created(started).
> 
> 
> 



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-26 Thread KSB

> This is actually useful, because it shows that the issue occurs even with
> Xen 4.6, which I think rules out a Xen side issue (otherwise we'd have had
> lots more reports from 4.4 through to 4.6) and points to a kernel side
> issue somewhere.
>
> > But I checked logs more thoroughly and found it even on more recent
> > kernels:
> > 1) Lot of messages on 3.14-2-amd64 with xen-4.6, 13 domU's.
>
> Just to be clear, "Lots" here means "hundreds or thousands"? I think it is
> expected to see one or two around the time a VM is started or stopped, so
> with 13 domUs a couple of dozen messages wouldn't seem out of line to me.

pkg 3.14.15-2
~1600 from last dmesg cleanup which was 23h ago, but all of them
distributed in last 15h

> > 2) 4.3.0-1-amd64 xen-4.6, only two messages shortly after boot, only 1
> > domU running:
> > [   12.473778] xen:balloon: Cannot add additional memory (-17)
> > [   21.673298] xen:balloon: Cannot add additional memory (-17)
> > uptime 17 days.
> >
> > Previous on same machine was 4.2.0-1-amd64 with more (-17)'s
>
> Was it running xen-4.6 when it was running 4.2.0 or was that also older?

4.3.3-5 xen-4.6.0 and previous 4.2.6-1 xen-4.4.1

> Also 4.2.0-1-amd64 is the ABI, not the package version. The package
> version is either in the dpkg database or you can find it in /proc/version:
>
> Linux version 4.1.0-2-amd64 (debian-kernel@lists.debian.org) (gcc version 4.9.3
> (Debian 4.9.3-3) ) #1 SMP Debian 4.1.6-1 (2015-08-23)
>
> Here "4.1.0-2-amd64" is the ABI and "4.1.6-1" is the package VERSION.

OK, if the pkg version is more important, then I have updated all the data
with pkg versions in this post.

> > 3) 4.3.0-1-amd64, one month, several reboots, average 4 domU's, and no
> > messages
>
> Any idea which Xen?

kernel pkg 4.3.3-5 and 4.3.3-7 and xen-4.6.0

> > 4) 3.16.0-4-amd64, xen-4.1, 22 domU's, uptime 188 days, in last month I
> > see only
> > Jan 7 14:12:08
> > Jan 7 14:12:08
> > Jan 7 14:12:08
> > Jan 7 14:12:08
> > Jan 7 14:27:47
> > Jan 7 14:27:47
> > Jan 7 14:27:47
> > Jan 7 14:27:48
> > and this is roughly the time last machine was created(started).

pkg 3.16.7-ckt7-1



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-26 Thread Ian Campbell
On Mon, 2016-01-25 at 20:36 +0200, KSB wrote:
> > Do you have a package version which you know to be good? How confident
> > are
> > you that it is ok (sometimes the problem is intermittent)?
> > 
> > Lastly, is there any chance you upgraded the Xen packages at the same
> > time?
> > I'm starting to wonder if maybe this is not a kernel issue.
> > 
> Sorry, but there is chance, sadly.

This is actually useful, because it shows that the issue occurs even with
Xen 4.6, which I think rules out a Xen side issue (otherwise we'd have had
lots more reports from 4.4 through to 4.6) and points to a kernel side
issue somewhere.

> But I checked logs more thoroughly and found it even on more recent
> kernels:
> 1) Lot of messages on 3.14-2-amd64 with xen-4.6, 13 domU's.

Just to be clear, "Lots" here means "hundreds or thousands"? I think it is
expected to see one or two around the time a VM is started or stopped, so
with 13 domUs a couple of dozen messages wouldn't seem out of line to me.

> 2) 4.3.0-1-amd64 xen-4.6, only two messages shortly after boot, only 1 
> domU running:
> [   12.473778] xen:balloon: Cannot add additional memory (-17)
> [   21.673298] xen:balloon: Cannot add additional memory (-17)
> uptime 17 days.
> 
> Previous on same machine was 4.2.0-1-amd64 with more (-17)'s

Was it running xen-4.6 when it was running 4.2.0 or was that also older?

Also 4.2.0-1-amd64 is the ABI, not the package version. The package
version is either in the dpkg database or you can find it in /proc/version:

Linux version 4.1.0-2-amd64 (debian-kernel@lists.debian.org) (gcc version 4.9.3 
(Debian 4.9.3-3) ) #1 SMP Debian 4.1.6-1 (2015-08-23)

Here "4.1.0-2-amd64" is the ABI and "4.1.6-1" is the package VERSION.

> 3) 4.3.0-1-amd64, one month, several reboots, average 4 domU's, and no 
> messages

Any idea which Xen?

> 4) 3.16.0-4-amd64, xen-4.1, 22 domU's, uptime 188 days, in last month I 
> see only
> Jan 7 14:12:08
> Jan 7 14:12:08
> Jan 7 14:12:08
> Jan 7 14:12:08
> Jan 7 14:27:47
> Jan 7 14:27:47
> Jan 7 14:27:47
> Jan 7 14:27:48
> and this is roughly the time last machine was created(started).
> 
> 
> 



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-25 Thread Ian Campbell
On Fri, 2016-01-22 at 21:38 +0200, KSB wrote:
> Seen this behavior on earlier kernels (i.e. 3.14-2-amd64 pkg 3.14.15-2.) 
> and seems to be gone at least in 4.3

That's useful info thanks, I've been unable to pinpoint a culprit for this
for ages now.

Do you have a package version which you know to be good? How confident are
you that it is ok (sometimes the problem is intermittent)?

Lastly, is there any chance you upgraded the Xen packages at the same time?
I'm starting to wonder if maybe this is not a kernel issue.

Ian.



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-25 Thread KSB

> Do you have a package version which you know to be good? How confident are
> you that it is ok (sometimes the problem is intermittent)?
>
> Lastly, is there any chance you upgraded the Xen packages at the same time?
> I'm starting to wonder if maybe this is not a kernel issue.

Sorry, but there is a chance, sadly.

But I checked logs more thoroughly and found it even on more recent kernels:
1) Lot of messages on 3.14-2-amd64 with xen-4.6, 13 domU's.
2) 4.3.0-1-amd64 xen-4.6, only two messages shortly after boot, only 1 
domU running:

[   12.473778] xen:balloon: Cannot add additional memory (-17)
[   21.673298] xen:balloon: Cannot add additional memory (-17)
uptime 17 days.

Previous on same machine was 4.2.0-1-amd64 with more (-17)'s

3) 4.3.0-1-amd64, one month, several reboots, average 4 domU's, and no 
messages


4) 3.16.0-4-amd64, xen-4.1, 22 domU's, uptime 188 days, in last month I 
see only

Jan 7 14:12:08
Jan 7 14:12:08
Jan 7 14:12:08
Jan 7 14:12:08
Jan 7 14:27:47
Jan 7 14:27:47
Jan 7 14:27:47
Jan 7 14:27:48
and this is roughly the time last machine was created(started).



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-22 Thread KSB
Seen this behavior on earlier kernels (e.g. 3.14-2-amd64, pkg 3.14.15-2)
and it seems to be gone, at least in 4.3.




Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2016-01-13 Thread Andy Smith
Hi,

I'm also seeing this.

With:

GRUB_CMDLINE_XEN="dom0_mem=1024M,max:1024M …"

I see a flurry of "xen:balloon: Cannot add additional memory (-17)"
messages every time a domU is shut down.

In this configuration I note that I also see with "free -m" that
dom0 has 929M, though "xl list" shows 1024M.

I can run "xl mem-set 0 1024" without visible error but it does not
prevent the error messages being logged.

With:

GRUB_CMDLINE_XEN="dom0_mem=1024M …"

I do not see any of the balloon error messages.

In this configuration "free -m" shows 768M RAM for dom0, though "xl
list" still shows 1024M.

xen-hypervisor-4.4-amd644.4.1-9+deb8u3
linux-image-3.16.0-4-amd64  3.16.7-ckt20-1+deb8u2

I'm guessing that the reason I end up with 768M RAM in dom0 is
because I have not specified the *maximum* RAM for dom0, so the
kernel scales structures for the possible case of dom0 getting the
entire 128G of RAM at some point.

I added "mem=1G" to the Linux kernel command line and then dom0
boots with "free -m" showing 930M.

So, at the moment I am running with "dom0_mem=1024M" on the Xen
command line, "mem=1G" on the Linux kernel command line, and
"autoballoon=off" in /etc/xen/xl.conf.

This seems to avoid the xen:balloon error messages and provide
something more like the expected amount of dom0 memory. Is there a
downside of not specifying the ",max:1024M" part?

Cheers,
Andy



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2015-10-31 Thread Andre'
Hello,

I'm seeing the same.

After a fresh boot before starting any guests xl shows:
root@tank0:~# xl list 0
Name                 ID   Mem  VCPUs  State  Time(s)
Domain-0              0  4096      8  r-         8.4

I did run "xl mem-set 0 4096" then started an guest.
This produced a load of the "xen:balloon: Cannot add additional memory (-17)"
messages.

xl now shows:
root@tank0:~# xl list 0
Name                 ID   Mem  VCPUs  State  Time(s)
Domain-0              0  4094      8  r-        24.4

Running "xl mem-set 0 4094" after the guest had started seems to have
stopped the messages for the moment.
(Without running mem-set at this point, the messages continued at
frequent, irregular intervals.)

Xen options are "dom0_mem=4096M,max:4096M" .
Dom0 linux kernel options are "mem=4096" .
xl.conf got autoballoon="off" .

Have a nice day,
 Andre



Bug#784688: Thousands of "xen:balloon: Cannot add additional memory (-17) messages" despite dom0 ballooning disabled

2015-09-08 Thread Chad Dougherty
I'm experiencing this bug too.  The effect is that one particular guest 
cannot transmit on its vif.


xl.conf has 'autoballoon="off"', /etc/default/grub has 
'GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:2048M"'


Trying the 'xl mem-set 0 4065' from earlier in the thread results in:

root@xen4:/sys/devices/system/xen_memory/xen_memory0# xl mem-set 0 4065
libxl: error: libxl.c:4098:libxl_set_memory_target: memory_dynamic_max must be less than or equal to memory_static_max


Any updates on this issue?
Thanks!

-Chad



Bug#784688: Thousands of xen:balloon: Cannot add additional memory (-17) messages despite dom0 ballooning disabled

2015-06-04 Thread Martin Lucina
On Wednesday, 27.05.2015 at 10:05, Martin Lucina wrote:
> > Does xl mem-set 0 4051 make the messages stop? If not what about using
> > 4050?
>
> I'm not sure if this is due to my rebooting the dom0 since I reported this
> issue, it now appears to be using a slightly different amount of memory
> just under the target:
>
> Domain-0              0  4065      8  r-     36437.3
>
> xl mem-set 0 4065 seems to have made the messages stop. Will run with it
> for a while and see what happens.

No messages for ~4 days, then it started again.

What is weird is that dom0 now thinks it has less memory:

Domain-0 0  4064 8 r-   41240.6

4064 vs. 4065. I did not change anything.

So, something is causing dom0 to change its memory target, despite the fact
that ballooning should be disabled.

Relevant files from xen_memory0:

./info/current_kb:
4162252
./info/low_kb:
0
./info/high_kb:
0
./max_schedule_delay:
20
./retry_count:
1
./target_kb:
4162560
./target:
4262461440
./uevent:
./schedule_delay:
1
./max_retry_count:
0
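The target/current gap in the dump above is small but nonzero, which would be consistent with the balloon driver repeatedly trying (and failing with -17) to reach its target. A quick check of the numbers (values copied verbatim from the dump above, arithmetic only):

```shell
# Values copied from the xen_memory0 dump above.
target_kb=4162560
current_kb=4162252
echo "shortfall: $((target_kb - current_kb)) KiB"   # shortfall: 308 KiB
```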

Any ideas?

Martin


-- 
To UNSUBSCRIBE, email to debian-kernel-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/20150604095408.gc20...@nodbug.lucina.net



Bug#784688: Thousands of xen:balloon: Cannot add additional memory (-17) messages despite dom0 ballooning disabled

2015-05-27 Thread Martin Lucina
On Sunday, 17.05.2015 at 15:19, Ian Campbell wrote:
> One last thing... If you are able to try it then it would be interesting
> to know if the 4.0.x kernel from sid exhibits this behaviour too.

I can try this at some point next week when I have time to go and
physically kick the box in case it doesn't boot for whatever reason.

Martin


Archive: https://lists.debian.org/20150527080633.gb27...@nodbug.lucina.net



Bug#784688: Thousands of xen:balloon: Cannot add additional memory (-17) messages despite dom0 ballooning disabled

2015-05-27 Thread Martin Lucina
On Sunday, 17.05.2015 at 15:14, Ian Campbell wrote:
> On Thu, 2015-05-07 at 21:02 +0200, Martin Lucina wrote:
> > Note that despite the memory setting above, the dom0 has not been allocated
> > exactly the amount asked for:
> >
> > # xl list 0
> > Name                 ID   Mem  VCPUs  State  Time(s)
> > Domain-0              0  4051      8  r-      5606.1
>
> Could you also post the output of xenstore-ls -fp | grep target
> please.

/local/domain/1/memory/target = 2097153   (n0,r1)
/local/domain/76/memory/target = 2097153   (n0,r76)
/local/domain/276/memory/target = 262145   (n0,r276)
/local/domain/314/memory/target = 65537   (n0,r314)
/local/domain/318/memory/target = 65537   (n0,r318)
/local/domain/319/memory/target = 65537   (n0,r319)
/local/domain/323/memory/target = 65537   (n0,r323)
/local/domain/325/memory/target = 65537   (n0,r325)

> You should have a directory /sys/devices/system/xen_memory/xen_memory0.
> Please could you post the contents of each of the files in there. In
> particular target*, *retry_count and info/*.

/sys/devices/system/xen_memory/xen_memory0/info/current_kb:
4163284
/sys/devices/system/xen_memory/xen_memory0/info/low_kb:
0
/sys/devices/system/xen_memory/xen_memory0/info/high_kb:
0
/sys/devices/system/xen_memory/xen_memory0/max_schedule_delay:
20
/sys/devices/system/xen_memory/xen_memory0/retry_count:
1
/sys/devices/system/xen_memory/xen_memory0/power/control:
auto
/sys/devices/system/xen_memory/xen_memory0/power/async:
disabled
/sys/devices/system/xen_memory/xen_memory0/power/runtime_enabled:
disabled
/sys/devices/system/xen_memory/xen_memory0/power/runtime_active_kids:
0
/sys/devices/system/xen_memory/xen_memory0/power/runtime_active_time:
0
/sys/devices/system/xen_memory/xen_memory0/power/autosuspend_delay_ms:
cat: /sys/devices/system/xen_memory/xen_memory0/power/autosuspend_delay_ms: 
Input/output error
/sys/devices/system/xen_memory/xen_memory0/power/runtime_status:
unsupported
/sys/devices/system/xen_memory/xen_memory0/power/runtime_usage:
0
/sys/devices/system/xen_memory/xen_memory0/power/runtime_suspended_time:
0
/sys/devices/system/xen_memory/xen_memory0/target_kb:
4194304
/sys/devices/system/xen_memory/xen_memory0/target:
4294967296
/sys/devices/system/xen_memory/xen_memory0/uevent:
/sys/devices/system/xen_memory/xen_memory0/schedule_delay:
1
/sys/devices/system/xen_memory/xen_memory0/max_retry_count:
0

> It might also be interesting to see the result of xl debug-keys q
> followed by xl dmesg.

(XEN) 'q' pressed - dumping domain info (now=0x70765:8E6C9FAB)
(XEN) General information for domain 0:
(XEN) refcnt=3 dying=0 pause_count=0
(XEN) nr_pages=1040821 xenheap_pages=5 shared_pages=0 paged_pages=0 
dirty_cpus={3-4} max_pages=1048576
(XEN) handle=---- vm_assist=000d
(XEN) Rangesets belonging to domain 0:
(XEN) I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-cfb, d00-1007, 100c- }
(XEN) Interrupts { 1-51 }
(XEN) I/O Memory { 0-fedff, fef00- }
(XEN) Memory pages belonging to domain 0:
(XEN) DomPage list too long to display
(XEN) XenPage 0082e6a4: caf=c002, taf=7402
(XEN) XenPage 0082e6a3: caf=c001, taf=7401
(XEN) XenPage 0082e6a2: caf=c001, taf=7401
(XEN) XenPage 0082e6a1: caf=c001, taf=7401
(XEN) XenPage 000cfd04: caf=c002, taf=7402
(XEN) NODE affinity for domain 0: [0]
(XEN) VCPU information and callbacks for domain 0:
(XEN) VCPU0: CPU2 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-7}
(XEN) pause_count=0 pause_flags=1
(XEN) No periodic timer
(XEN) VCPU1: CPU6 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-7}
(XEN) pause_count=0 pause_flags=1
(XEN) No periodic timer
(XEN) VCPU2: CPU7 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-7}
(XEN) pause_count=0 pause_flags=1
(XEN) No periodic timer
(XEN) VCPU3: CPU5 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-7}
(XEN) pause_count=0 pause_flags=1
(XEN) No periodic timer
(XEN) VCPU4: CPU4 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={4} cpu_affinity={0-7}
(XEN) pause_count=0 pause_flags=1
(XEN) No periodic timer
(XEN) VCPU5: CPU3 [has=T] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={3} cpu_affinity={0-7}
(XEN) pause_count=0 pause_flags=0
(XEN) No periodic timer
(XEN) VCPU6: CPU7 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-7}
(XEN) pause_count=0 pause_flags=1
(XEN) No periodic timer
(XEN) VCPU7: CPU4 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-7}
(XEN) pause_count=0 pause_flags=1
(XEN) No periodic timer
(XEN) 

Bug#784688: Thousands of xen:balloon: Cannot add additional memory (-17) messages despite dom0 ballooning disabled

2015-05-17 Thread Ian Campbell
On Sun, 2015-05-17 at 15:14 +0100, Ian Campbell wrote:
> On Thu, 2015-05-07 at 21:02 +0200, Martin Lucina wrote:
> > Note that despite the memory setting above, the dom0 has not been allocated
> > exactly the amount asked for:
> >
> > # xl list 0
> > Name                 ID   Mem  VCPUs  State  Time(s)
> > Domain-0              0  4051      8  r-      5606.1
>
> Could you also post the output of xenstore-ls -fp | grep target
> please.
>
> You should have a directory /sys/devices/system/xen_memory/xen_memory0.
> Please could you post the contents of each of the files in there. In
> particular target*, *retry_count and info/*.
>
> It might also be interesting to see the result of xl debug-keys q
> followed by xl dmesg.
>
> Does xl mem-set 0 4051 make the messages stop? If not what about using
> 4050?
>
> If you try and balloon up to 4096 (with xl mem-set) does the amount of
> memory in xl list change?

One last thing... If you are able to try it then it would be interesting
to know if the 4.0.x kernel from sid exhibits this behaviour too.

Cheers,
Ian.


Archive: https://lists.debian.org/1431872367.5748.69.ca...@debian.org



Bug#784688: Thousands of xen:balloon: Cannot add additional memory (-17) messages despite dom0 ballooning disabled

2015-05-17 Thread Ian Campbell
On Thu, 2015-05-07 at 21:02 +0200, Martin Lucina wrote:
> Note that despite the memory setting above, the dom0 has not been allocated
> exactly the amount asked for:
>
> # xl list 0
> Name                 ID   Mem  VCPUs  State  Time(s)
> Domain-0              0  4051      8  r-      5606.1

Could you also post the output of xenstore-ls -fp | grep target
please.

You should have a directory /sys/devices/system/xen_memory/xen_memory0.
Please could you post the contents of each of the files in there. In
particular target*, *retry_count and info/*.

It might also be interesting to see the result of xl debug-keys q
followed by xl dmesg.

Does xl mem-set 0 4051 make the messages stop? If not what about using
4050?

If you try and balloon up to 4096 (with xl mem-set) does the amount of
memory in xl list change?

Ian.


Archive: https://lists.debian.org/1431872073.5748.65.ca...@debian.org



Bug#784688: Thousands of xen:balloon: Cannot add additional memory (-17) messages despite dom0 ballooning disabled

2015-05-17 Thread Ian Campbell
On Thu, 2015-05-07 at 21:02 +0200, Martin Lucina wrote:
> Bug #776448 claims to have fixed this problem, however it seems that the
> fix is incomplete?

Yes, it was. It looks like I was even told and didn't notice:
http://article.gmane.org/gmane.comp.emulators.xen.devel/230401 :-/

I'll queue fd8b79511349efd1f0decea920f61b93acb34a75 for a Jessie update.

Ian.


Archive: https://lists.debian.org/1431870665.5748.57.ca...@debian.org



Bug#784688: Thousands of xen:balloon: Cannot add additional memory (-17) messages despite dom0 ballooning disabled

2015-05-17 Thread Ian Campbell
On Sun, 2015-05-17 at 14:51 +0100, Ian Campbell wrote:
> On Thu, 2015-05-07 at 21:02 +0200, Martin Lucina wrote:
> > Bug #776448 claims to have fixed this problem, however it seems that the
> > fix is incomplete?
>
> Yes, it was. It looks like I was even told and didn't notice:
> http://article.gmane.org/gmane.comp.emulators.xen.devel/230401 :-/
>
> I'll queue fd8b79511349efd1f0decea920f61b93acb34a75 for a Jessie update.

Actually, it looks like fd8b79511349 _was_ added as part of #776448. So
I'm still in the dark as to what causes this issue.

Ian.


Archive: https://lists.debian.org/1431871052.5748.59.ca...@debian.org



Bug#784688: Thousands of xen:balloon: Cannot add additional memory (-17) messages despite dom0 ballooning disabled

2015-05-07 Thread Martin Lucina
Package: linux-image-3.16.0-4-amd64
Version: 3.16.7-ckt9-3~deb8u1
Severity: normal

Dear maintainer,

my Xen dom0 is printing thousands of these messages to its logs:

# journalctl -b | grep "xen:balloon: Cannot add additional memory (-17)" | wc -l
126892
# uptime
 20:50:08 up 3 days,  8:28,  7 users,  load average: 0.21, 0.07, 0.06

The messages start occurring the first time I shut down a domU. Because I use
this dom0 for Xen unikernel testing, such shutdowns are a common occurrence.

Some shutdowns cause a few of these messages, some cause hundreds.

The dom0 has memory ballooning disabled:

# grep dom0_mem /etc/default/grub 
GRUB_CMDLINE_XEN="dom0_mem=4096M,max:4096M loglvl=info guest_loglvl=info"
# grep autoballoon /etc/xen/xl.conf 
autoballoon=0

Note that despite the memory setting above, the dom0 has not been allocated
exactly the amount asked for:

# xl list 0
Name                 ID   Mem  VCPUs  State  Time(s)
Domain-0              0  4051      8  r-      5606.1

I am also including an excerpt from xl dmesg, in case this helps:

(XEN) *** LOADING DOMAIN 0 ***
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x100 - 0x1f18000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   00081400-00081800 (1031107 pages to be 
allocated)
(XEN)  Init. ramdisk: 00082fbc3000-00082212
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: 8100-81f18000
(XEN)  Init. ramdisk: 81f18000-82354212
(XEN)  Phys-Mach map: 82355000-82b55000
(XEN)  Start info:82b55000-82b554b4
(XEN)  Page tables:   82b56000-82b6f000
(XEN)  Boot stack:82b6f000-82b7
(XEN)  TOTAL: 8000-82c0
(XEN)  ENTRY ADDRESS: 819021f0
(XEN) Dom0 has maximum 8 VCPUs

Bug #776448 claims to have fixed this problem, however it seems that the
fix is incomplete?

Thanks,

Martin


Archive: https://lists.debian.org/20150507190215.gd5...@nodbug.lucina.net