On 03/15/2017 03:27 PM, John Ferlan wrote:
> Fix a "bug" in the storage pool test driver code which "assumed"
> testStoragePoolObjSetDefaults should fill in the configFile for
> both the Define/Create (persistent) and CreateXML (transient) pools
> by just VIR_FREE()'ing it during CreateXML.
On 03/15/2017 03:27 PM, John Ferlan wrote:
> Commit id 'bb74a7ffe' added a fairly non-specific message when providing
> only the wwnn or the wwpn instead of providing
> both wwnn and wwpn. This patch just modifies the message to be more specific
> about which one was missing.
>
> Signed-off-by: John Ferlan
On 03/15/2017 01:59 PM, John Ferlan wrote:
> [...]
>
>>>
>>> 4. Using virPCIDeviceAddressPtr in getSCSIHostNumber and getAdapterName.
>>> Ironically the "original" series I had passed along the
>>> virStorageAdapterSCSIHostPtr, but since it's been decreed that a
>>> src/util function cannot
On 03/14/2017 12:58 PM, Andrea Bolognani wrote:
> ---
> docs/news.xml | 9 +
> 1 file changed, 9 insertions(+)
>
> diff --git a/docs/news.xml b/docs/news.xml
> index 04783aa..434a9f7 100644
> --- a/docs/news.xml
> +++ b/docs/news.xml
> @@ -17,6 +17,15 @@
>The
On 03/14/2017 12:58 PM, Andrea Bolognani wrote:
> We want pcie-root-ports to be used for aarch64/virt guests
> when available in QEMU, but at the same time we need to
> ensure that other machine type and hosts where QEMU releases
> lacking the new device type are not affected.
> ---
>
Hi. Is it possible to limit VM CPU speed, for example to 1 GHz or 500 MHz?
I need to simulate some hardware with a specific CPU speed and test my
application inside this VM. I want to measure results from each test
and need a constant CPU speed for testing (I need to run the test not only
on my notebook but
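There is no way to set an exact clock frequency through libvirt; the closest lever is capping CPU bandwidth via cputune in the domain XML. A minimal sketch (the values are illustrative, not tuned for any workload):

```xml
<cputune>
  <!-- each vCPU may run 50ms out of every 100ms: roughly 50% of one host core -->
  <period>100000</period>
  <quota>50000</quota>
</cputune>
```

Note this caps throughput rather than frequency, and throttled guests run in bursts, so benchmark timings may still vary; host frequency scaling also needs to be pinned for reproducible numbers.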
On Wed, Mar 15, 2017 at 02:11:23PM -0300, Marcelo Tosatti wrote:
On Wed, Mar 15, 2017 at 03:59:31PM +0100, Martin Kletzander wrote:
On Wed, Mar 15, 2017 at 02:23:26PM +, Daniel P. Berrange wrote:
>On Wed, Mar 15, 2017 at 03:11:26PM +0100, Martin Kletzander wrote:
>>On Mon, Mar 06, 2017 at
On 03/14/2017 12:58 PM, Andrea Bolognani wrote:
> ioh3420 is emulated Intel hardware, so it always looked
> quite out of place in aarch64/virt guests.
>
> If pcie-root-port is available in QEMU, use that device
> instead.
> ---
> It was mentioned somewhere, at some point, that we might
> want to
Fix a "bug" in the storage pool test driver code which "assumed"
testStoragePoolObjSetDefaults should fill in the configFile for
both the Define/Create (persistent) and CreateXML (transient) pools
by just VIR_FREE()'ing it during CreateXML. Because the configFile
was filled in, during Destroy the
Commit id 'bb74a7ffe' added a fairly non-specific message when providing
only the wwnn or the wwpn instead of providing
both wwnn and wwpn. This patch just modifies the message to be more specific
about which one was missing.
Signed-off-by: John Ferlan
---
src/conf/node_device_conf.c | 17
The first patch is related to review comments from my Converge Storage
Pool vHBA logic Patch1 w/r/t a weak error message
The second patch is pulled from Patch17 of that series and is just an
adjustment in the storage pool create xml logic
Neither of these require news.xml updates...
John Ferlan
On 03/14/2017 12:58 PM, Andrea Bolognani wrote:
> QEMU 2.9 introduces the pcie-root-port device, which is
> a generic version of the existing ioh3420 device.
>
> Make the new device available to libvirt users.
>
> Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1408808
Yes, but kind of
Hi,
On Wed, Mar 15, 2017 at 04:55:04PM +, Daniel P. Berrange wrote:
> Linux still defaults to a 1024 open file handle limit. This causes
> scalability problems for libvirtd / virtlockd / virtlogd on large
> hosts which might want > 1024 guest to be running. In fact if each
> guest needs > 1
On 03/15/2017 10:46 AM, Daniel P. Berrange wrote:
> On Fri, Mar 10, 2017 at 04:10:49PM -0500, John Ferlan wrote:
>> If we have a connection pointer there's no sense walking through the
>> sysfs in order to create/destroy the vHBA. Instead, let's make use of
>> the node device create/destroy
On 03/15/2017 11:19 AM, Laine Stump wrote:
> On 03/15/2017 10:08 AM, John Ferlan wrote:
>>
>>
>> On 03/12/2017 06:35 PM, Laine Stump wrote:
>>> On 03/10/2017 04:10 PM, John Ferlan wrote:
Move the bulk of the code to the node_device_conf and rename to
virNodeDevice{Create|Delete}Vport.
On Wed, Mar 15, 2017 at 06:48:56PM +0100, Guido Günther wrote:
> Hi,
> while looking into a regression failing to start any mips qemu systems
> (http://bugs.debian.org/854125) I noticed that querying cpu definition
> does not work for lots of non intel architectures like mips due to lack
> of
If the SASL config does not have any mechanisms, we currently
just report an empty list to the client, which will then
fail to identify a usable mechanism. This is a server config
error, so we should fail immediately on the server side.
Signed-off-by: Daniel P. Berrange
---
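The fail-fast idea described above is simple to state; a Python sketch of the guard (not the actual libvirt C code, just the shape of the check):

```python
def usable_sasl_mechanisms(configured):
    """Server-side guard: if the SASL config yields no mechanisms,
    report a config error immediately instead of sending an empty
    list that the client can only fail on."""
    mechs = [m for m in configured if m]
    if not mechs:
        raise RuntimeError("no SASL mechanisms configured on the server")
    return mechs
```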
Signed-off-by: Daniel P. Berrange
---
src/rpc/virnettlscontext.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/rpc/virnettlscontext.c b/src/rpc/virnettlscontext.c
index 847d457..0d5928e 100644
--- a/src/rpc/virnettlscontext.c
+++ b/src/rpc/virnettlscontext.c
@@
When providing explicit x509 cert/key paths in libvirtd.conf,
the user must provide all three. If one or more is missed,
this leads to obscure errors at runtime when negotiating
the TLS session
Signed-off-by: Daniel P. Berrange
---
daemon/libvirtd.c | 16
1
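The validation being added amounts to an all-or-nothing check over the three paths; a Python sketch of the logic (the names mirror libvirtd.conf's ca_file/cert_file/key_file settings, but this is an illustration, not the patch itself):

```python
def check_tls_paths(ca_file, cert_file, key_file):
    """Startup-time guard: if any explicit x509 path is set, all three
    must be, so the misconfiguration is reported up front rather than
    surfacing as an obscure failure during TLS session negotiation."""
    paths = {"ca_file": ca_file, "cert_file": cert_file, "key_file": key_file}
    given = [n for n, v in paths.items() if v]
    missing = [n for n, v in paths.items() if not v]
    if given and missing:
        raise ValueError("missing TLS config entries: " + ", ".join(sorted(missing)))
```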
[...]
>>
>> 4. Using virPCIDeviceAddressPtr in getSCSIHostNumber and getAdapterName.
>> Ironically the "original" series I had passed along the
>> virStorageAdapterSCSIHostPtr, but since it's been decreed that a
>> src/util function cannot include a src/conf header, I had to back that off.
>
>
Hi,
while looking into a regression failing to start any mips qemu systems
(http://bugs.debian.org/854125) I noticed that querying the cpu definition
does not work for lots of non-Intel architectures like MIPS due to lack
of support for the query-cpu-definition monitor command:
2017-03-15
On 03/15/2017 12:55 PM, Daniel P. Berrange wrote:
> Linux still defaults to a 1024 open file handle limit. This causes
> scalability problems for libvirtd / virtlockd / virtlogd on large
> hosts which might want > 1024 guest to be running. In fact if each
> guest needs > 1 FD, we can't even get to
On Wed, Mar 15, 2017 at 03:59:31PM +0100, Martin Kletzander wrote:
> On Wed, Mar 15, 2017 at 02:23:26PM +, Daniel P. Berrange wrote:
> >On Wed, Mar 15, 2017 at 03:11:26PM +0100, Martin Kletzander wrote:
> >>On Mon, Mar 06, 2017 at 06:06:32PM +0800, Eli Qiao wrote:
> >>> This patch adds new xml
Linux still defaults to a 1024 open file handle limit. This causes
scalability problems for libvirtd / virtlockd / virtlogd on large
hosts which might want > 1024 guests to be running. In fact if each
guest needs > 1 FD, we can't even get to 500 guests. This is not
good enough when we see machines
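A quick back-of-the-envelope illustration of the scaling claim (plain Python, nothing libvirt-specific; the per-guest FD cost and the number of reserved descriptors are assumptions):

```python
def max_guests(fd_limit, fds_per_guest, reserved=64):
    """How many guests fit under an open-file limit after reserving
    some descriptors for logging, sockets and IPC."""
    return max(0, (fd_limit - reserved) // fds_per_guest)

# With the default 1024 soft limit and 2 FDs per guest, fewer than
# 500 guests fit, matching the problem described above.
```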
Having this information available will make it easier to determine the
culprit when the MAC or vlan tag appears not to be set, e.g.:
https://bugzilla.redhat.com/1364073
(This patch doesn't fix that bug, just makes it easier to diagnose)
---
src/util/virnetdev.c | 17 ++---
1 file
For consistency with the new virHostdevSaveNetConfig() and
virHostdevSetNetConfig().
---
src/util/virhostdev.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/src/util/virhostdev.c b/src/util/virhostdev.c
index a402c01..dce0bbe 100644
--- a/src/util/virhostdev.c
+++
On Tue, Mar 14, 2017 at 05:57:39PM +0100, Jiri Denemark wrote:
When starting a domain with custom guest CPU specification QEMU may add
or remove some CPU features. There are several reasons for this, e.g.,
QEMU/KVM does not support some requested features or the definition of
the requested CPU
Add code to call the appropriate monitor command and code to look up the
given disk backing chain member.
---
src/qemu/qemu_driver.c | 70
src/qemu/qemu_monitor.c | 13
src/qemu/qemu_monitor.h | 5
On Tue, Mar 14, 2017 at 05:57:44PM +0100, Jiri Denemark wrote:
The checks are now in a dedicated qemuProcessVerifyHypervFeatures
function.
Signed-off-by: Jiri Denemark
---
src/qemu/qemu_process.c | 88 ++---
1 file changed, 55
Our code calls it when starting or re-starting the domain or when
hotplugging the disk so there's nothing to be detected.
---
src/qemu/qemu_driver.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2032fac71..06bd442ee 100644
---
---
.../qemumonitorjson-nodename-2.json| 2270
.../qemumonitorjson-nodename-2.result | 60 +
tests/qemumonitorjsontest.c|1 +
3 files changed, 2331 insertions(+)
create mode 100644
The code is rather magic so a test case will help make sure that
everything works well. The first case is a simple backing chain.
---
.../qemumonitorjson-nodename-1.json| 268 +
.../qemumonitorjson-nodename-1.result | 15 ++
To allow matching the node names gathered via 'query-named-block-nodes'
we need to query and then use the top level nodes from 'query-block'.
Add the data to the structure returned by qemuMonitorGetBlockInfo.
---
src/qemu/qemu_domain.h | 1 +
src/qemu/qemu_monitor.c | 12 +++-
Detect the node names when setting the block threshold, when reconnecting,
and when they are cleared after a block job finishes. This operation will
become a no-op once we fully support node names.
---
src/qemu/qemu_block.c| 98
This is another version of the stuff that I've posted here:
https://www.redhat.com/archives/libvir-list/2017-February/msg01391.html
which was partially based on the very old discussion at
https://www.redhat.com/archives/libvir-list/2015-May/msg00580.html
This version fixes some of the review
The new API can be used to configure the threshold at which
VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD should be fired.
---
include/libvirt/libvirt-domain.h | 5
src/driver-hypervisor.h | 8 +++
src/libvirt-domain.c | 51
When using thin provisioning, management tools need to resize the disk
in certain cases. To avoid having them poll disk usage, introduce an
event which will be fired when a given offset of the storage is written
by the hypervisor. Together with the API which will be added later, it
will allow
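The semantics can be sketched in a few lines (a hypothetical helper, not the proposed API):

```python
def write_threshold_event(write_end_offset, threshold):
    """Return an event payload if a write crossed the configured
    threshold offset, else None. The excess length tells management
    how far past the threshold the guest has already written."""
    if threshold and write_end_offset > threshold:
        return {"event": "BLOCK_THRESHOLD",
                "excess": write_end_offset - threshold}
    return None
```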
Add a simple wrapper which allows setting the threshold for
delivering the event.
---
tools/virsh-domain.c | 64
tools/virsh.pod | 8 +++
2 files changed, 72 insertions(+)
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
---
src/qemu/qemu_capabilities.c| 2 ++
src/qemu/qemu_capabilities.h| 1 +
tests/qemucapabilitiesdata/caps_2.1.1.x86_64.xml| 1 +
tests/qemucapabilitiesdata/caps_2.4.0.x86_64.xml| 1 +
Looks up a disk and its corresponding backing chain element by node
name.
---
src/qemu/qemu_domain.c | 43 +++
src/qemu/qemu_domain.h | 6 ++
2 files changed, 49 insertions(+)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index
Bind it to qemu's BLOCK_WRITE_THRESHOLD event. Look up the disk by
node name and construct the string to return.
---
src/qemu/qemu_process.c | 40
1 file changed, 40 insertions(+)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index
Add monitor tooling for calling query-named-block-nodes. The code
returns the data as the raw JSON array received from the
monitor.
Unfortunately the logic to extract the node names for a complete backing
chain will be so complex that I won't be able to extract any meaningful
subset of
The event is fired when a given block backend node (identified by the
node name) experiences a write beyond the bound set via the
block-set-write-threshold QMP command. This wires up the monitor code to
extract the data and allows us to receive the events and the capability.
---
qemu has for some time already set node names automatically for the block
nodes. This patch adds code that attempts a best-effort detection of the
node names for the backing chain from the output of
'query-named-block-nodes'. The only drawback is that the data provided
by qemu needs to be matched by
---
src/qemu/qemu_domain.c | 37 +
src/qemu/qemu_domain.h | 3 +++
2 files changed, 40 insertions(+)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 0c633af33..402b0730e 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@
oVirt uses relative names with directories in them. Test such
configuration. Also tests a snapshot done with _REUSE_EXTERNAL and a
relative backing file pre-specified in the qcow2 metadata.
---
.../qemumonitorjson-nodename-relative.json | 554 +
'nodeformat' should be used for strings which describe the storage
format object, and 'nodebacking' for the actual storage object itself.
---
src/libvirt_private.syms | 1 +
src/util/virstoragefile.c | 40
src/util/virstoragefile.h | 10 ++
3
---
.../qemumonitorjson-nodename-gluster.json | 135 +
.../qemumonitorjson-nodename-gluster.result| 10 ++
tests/qemumonitorjsontest.c| 1 +
3 files changed, 146 insertions(+)
create mode 100644
Since we have to match the images by filename, a common backing image
will break the detection process. Add a test case to verify that the code
correctly did not continue the detection process.
---
.../qemumonitorjson-nodename-same-backing.json | 316 +
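The matching constraint described above (filename-only matching, so any shared or missing filename must abort the best-effort detection) can be sketched like this; the data shapes are stand-ins, not the actual query-named-block-nodes output:

```python
def match_node_names(chain_files, node_table):
    """node_table maps node-name -> filename; chain_files is the backing
    chain from top to bottom. Matching is by filename only, so a filename
    claimed by zero or several nodes (e.g. a shared backing image) makes
    detection give up and return None."""
    by_file = {}
    for node, fname in node_table.items():
        by_file.setdefault(fname, []).append(node)
    result = {}
    for fname in chain_files:
        nodes = by_file.get(fname, [])
        if len(nodes) != 1:
            return None  # missing or ambiguous: best effort only
        result[fname] = nodes[0]
    return result
```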
The code is currently simple, but if we later add node names, it will be
necessary to generate the names based on the node name. Add a helper so
that there's a central point to fix once we add self-generated node
names.
---
src/qemu/qemu_domain.c | 22 ++
The function has very specific semantics. Split out the part that parses
the backing store specification string into a separate helper so that it
can be reused later while keeping the wrapper with existing semantics.
Note that virStorageFileParseChainIndex is pretty well covered by the
test
It will be useful to set the indentation level to 0 after formatting a
nested structure rather than having to track the depth.
---
src/libvirt_private.syms | 1 +
src/util/virbuffer.c | 19 +++
src/util/virbuffer.h | 2 ++
3 files changed, 22 insertions(+)
diff --git
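The adjust-versus-set distinction can be mirrored in a tiny Python stand-in (virBuffer itself is C; this only illustrates why an absolute setter is convenient after nesting):

```python
class IndentBuffer:
    """Tiny stand-in for a virBuffer-like API: adjust() moves the
    indentation relative to the current level, set_indent() jumps to an
    absolute level, e.g. back to 0 after a nested structure."""
    def __init__(self):
        self.level = 0
        self.lines = []

    def adjust(self, delta):
        self.level = max(0, self.level + delta)

    def set_indent(self, level):
        self.level = level

    def add(self, text):
        self.lines.append("  " * self.level + text)

    def value(self):
        return "\n".join(self.lines)
```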
On Tue, Mar 14, 2017 at 05:57:42PM +0100, Jiri Denemark wrote:
The attribute can be used to request a specific way of checking whether
the virtual CPU created by the hypervisor matches the
specification in the domain XML.
Signed-off-by: Jiri Denemark
---
On 03/14/2017 02:36 PM, John Ferlan wrote:
>
>
> On 03/09/2017 11:06 AM, Michal Privoznik wrote:
>> So, majority of the code is just ready as-is. Well, with one
>> slight change: differentiate between dimm and nvdimm in places
>> like device alias generation, generating the command line and so
On 03/14/2017 03:58 PM, John Ferlan wrote:
>
>
> On 03/09/2017 11:06 AM, Michal Privoznik wrote:
>> Now that we have APIs for relabel memdevs on hotplug, fill in the
>> missing implementation in qemu hotplug code.
>>
>> The qemuSecurity wrappers might look like overkill for now,
>> because qemu
On 03/15/2017 10:22 AM, John Ferlan wrote:
>
>
> On 03/12/2017 09:20 PM, Laine Stump wrote:
>> On 03/10/2017 04:10 PM, John Ferlan wrote:
>>> If we have a connection pointer there's no sense walking through the
>>> sysfs in order to create/destroy the vHBA. Instead, let's make use of
>>> the
On 03/15/2017 10:08 AM, John Ferlan wrote:
>
>
> On 03/12/2017 06:35 PM, Laine Stump wrote:
>> On 03/10/2017 04:10 PM, John Ferlan wrote:
>>> Move the bulk of the code to the node_device_conf and rename to
>>> virNodeDevice{Create|Delete}Vport.
>>>
>>> Alter the create algorithm slightly in
On 03/15/2017 09:33 AM, John Ferlan wrote:
>
>
> On 03/12/2017 05:53 PM, Laine Stump wrote:
>> On 03/10/2017 04:10 PM, John Ferlan wrote:
>>> Move the virStoragePoolSourceAdapter from storage_conf.h and rename
>>> to virStorageAdapter.
>>>
>>> Continue with code realignment for brevity and flow.
On Mon, Mar 06, 2017 at 06:06:33PM +0800, Eli Qiao wrote:
virResCtrlSetCacheBanks: Set cache banks of a libvirt domain. It will
create a new resource domain under
`/sys/fs/resctrl` and fill the schemata according to
the cache
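For reference, the schemata file takes one line per resource with per-cache-id masks; a sketch of just the formatting step (resctrl imposes further rules, e.g. contiguous bitmasks, that this ignores):

```python
def format_l3_schemata(masks):
    """Format an L3 line for /sys/fs/resctrl/<group>/schemata from a
    mapping of cache id -> capacity bitmask."""
    return "L3:" + ";".join("%d=%x" % (cache_id, mask)
                            for cache_id, mask in sorted(masks.items()))
```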
On Wed, Mar 15, 2017 at 02:23:26PM +, Daniel P. Berrange wrote:
On Wed, Mar 15, 2017 at 03:11:26PM +0100, Martin Kletzander wrote:
On Mon, Mar 06, 2017 at 06:06:32PM +0800, Eli Qiao wrote:
> This patch adds new xml element to support cache tune as:
>
>
> ...
>
I tend to view use of
On Wed, Mar 15, 2017 at 03:40:04PM +0100, Andrea Bolognani wrote:
On Wed, 2017-03-15 at 14:10 +0100, Martin Kletzander wrote:
I'll take the opportunity to repeat here what I said in another
news-related thread.
The good thing about the new news file layout is that we can express
freely what
On Wed, 2017-03-15 at 14:10 +0100, Martin Kletzander wrote:
I would s/Adaptive/Use adaptive/ though.
Oh, and it should be "QEMU" rather than "qemu" in the
description ;)
--
Andrea Bolognani / Red Hat / Virtualization
--
libvir-list mailing list
libvir-list@redhat.com
When enabling IPv6 on all interfaces, we may get the host Router
Advertisement routes discarded. To avoid this, the user needs to set
accept_ra to 2 for the interfaces with such routes.
See https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
on this topic.
To avoid users mistakenly
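In sysctl.conf terms, the per-interface setting described above looks like this (eth0 is a placeholder for the interface carrying the RA routes):

```
# 2 = accept Router Advertisements even when forwarding is enabled
net.ipv6.conf.eth0.accept_ra = 2
```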
Hi Laine, all,
Here is the v2 of my series. The changes are:
* Add a commit to create a virNetDevGetName() function
* Fix Laine's comments
Cédric Bosdonnat (5):
util: extract the request sending code from virNetlinkCommand()
util: add virNetlinkDumpCommand()
bridge_driver.c: more uses
virNetlinkCommand() processes only one response message, while some
netlink commands, like route dumping, need to process several.
Add virNetlinkDumpCommand() as a sister of virNetlinkCommand().
---
src/libvirt_private.syms | 1 +
src/util/virnetlink.c| 58
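The difference between the two can be sketched as a loop over reply messages (the messages here are stand-in dicts, not real netlink framing):

```python
def consume_dump(replies):
    """A netlink dump reply is a multipart sequence terminated by an
    NLMSG_DONE message, whereas a plain command gets a single reply;
    this loop over the parts is the extra work the dump variant needs."""
    payloads = []
    for msg in replies:
        if msg["type"] == "DONE":
            break
        if msg["type"] == "ERROR":
            raise RuntimeError("netlink error in dump reply")
        payloads.append(msg["payload"])
    return payloads
```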
Allow reusing as much as possible from virNetlinkCommand(). This
commit prepares for the introduction of virNetlinkDumpCommand(),
which differs only in how it handles the responses.
---
src/util/virnetlink.c | 89 +++
1 file changed, 54 insertions(+),
Add a function that gets the name of a network interface from its index.
---
src/libvirt_private.syms | 1 +
src/util/virnetdev.c | 19 +++
src/util/virnetdev.h | 2 ++
3 files changed, 22 insertions(+)
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
Replace a few occurrences of /proc/sys with the corresponding macro
defined a few lines later: SYSCTL_PATH
---
src/network/bridge_driver.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/src/network/bridge_driver.c b/src/network/bridge_driver.c
index c5ec2823d..3f6561055
On Fri, Mar 10, 2017 at 04:10:49PM -0500, John Ferlan wrote:
> If we have a connection pointer there's no sense walking through the
> sysfs in order to create/destroy the vHBA. Instead, let's make use of
> the node device create/destroy API's.
In general we should *not* call out to the public API
On Wed, 2017-03-15 at 14:10 +0100, Martin Kletzander wrote:
> I'll take the opportunity to repeat here what I said in another
> news-related thread.
>
> The good thing about the new news file layout is that we can express
> freely what is the feature that was added and we don't have to
>
On Wed, Mar 15, 2017 at 03:11:26PM +0100, Martin Kletzander wrote:
> On Mon, Mar 06, 2017 at 06:06:32PM +0800, Eli Qiao wrote:
> > This patch adds new xml element to support cache tune as:
> >
> >
> > ...
> >
> Again, this was already discussed, probably, I just can't find the
> source of
Michal Privoznik wrote:
> On 03/15/2017 02:16 AM, Jim Fehlig wrote:
>> On 03/13/2017 06:29 AM, Michal Privoznik wrote:
>>> There were couple of reports on the list (e.g. [1]) that guests
>>> with huge amounts of RAM are unable to start because libvirt
>>> kills qemu in the initialization phase.
On 03/12/2017 09:20 PM, Laine Stump wrote:
> On 03/10/2017 04:10 PM, John Ferlan wrote:
>> If we have a connection pointer there's no sense walking through the
>> sysfs in order to create/destroy the vHBA. Instead, let's make use of
>> the node device create/destroy API's.
>>
>> Since we don't
On Mon, Mar 06, 2017 at 06:06:32PM +0800, Eli Qiao wrote:
This patch adds a new xml element to support cache tuning as:
...
Again, this was already discussed, probably, I just can't find the
source of it. But host_id actually already selects what cache is
supposed to be used, so instead of
On 03/12/2017 06:35 PM, Laine Stump wrote:
> On 03/10/2017 04:10 PM, John Ferlan wrote:
>> Move the bulk of the code to the node_device_conf and rename to
>> virNodeDevice{Create|Delete}Vport.
>>
>> Alter the create algorithm slightly in order to return the name of
>> the scsi_host vHBA that was
On 03/12/2017 05:53 PM, Laine Stump wrote:
> On 03/10/2017 04:10 PM, John Ferlan wrote:
>> Move the virStoragePoolSourceAdapter from storage_conf.h and rename
>> to virStorageAdapter.
>>
>> Continue with code realignment for brevity and flow.
>>
>> Signed-off-by: John Ferlan
On Wed, Mar 15, 2017 at 01:06:12PM +0100, Michal Privoznik wrote:
Signed-off-by: Michal Privoznik
---
docs/news.xml | 15 ++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/docs/news.xml b/docs/news.xml
index 04783aa5e..6ce6ab362 100644
---
On Mon, Mar 06, 2017 at 06:06:31PM +0800, Eli Qiao wrote:
This patch exposes cache information in the host's capabilities xml.
For l3 cache allocation
For l3 cache allocation with cdp supported (separate data/code):
I know this was discussed before, but why have a vector
On Mon, Mar 06, 2017 at 06:06:30PM +0800, Eli Qiao wrote:
This patch adds some utility structs and functions to expose resctrl
information.
virResCtrlAvailable: whether the resctrl interface exists on the host.
virResCtrlGet: get resource control information of a specific type.
virResCtrlInit: initialize resctrl
Signed-off-by: Michal Privoznik
---
docs/news.xml | 15 ++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/docs/news.xml b/docs/news.xml
index 04783aa5e..6ce6ab362 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -43,7 +43,20 @@
On Mon, Mar 06, 2017 at 06:06:30PM +0800, Eli Qiao wrote:
This patch adds some utility structs and functions to expose resctrl
information.
virResCtrlAvailable: whether the resctrl interface exists on the host.
virResCtrlGet: get resource control information of a specific type.
virResCtrlInit: initialize resctrl
On 03/15/2017 02:16 AM, Jim Fehlig wrote:
> On 03/13/2017 06:29 AM, Michal Privoznik wrote:
>> There were couple of reports on the list (e.g. [1]) that guests
>> with huge amounts of RAM are unable to start because libvirt
>> kills qemu in the initialization phase. The problem is that if
>> guest
On 03/14/2017 03:08 PM, Laine Stump wrote:
> On 03/14/2017 02:30 PM, John Ferlan wrote:
>>
>> On 03/11/2017 08:16 PM, Laine Stump wrote:
>>> On 03/10/2017 04:10 PM, John Ferlan wrote:
Rather than use virXPathString, pass along a virXPathNode and alter
the parsing to use
[...]
int
-virStoragePoolSourceAdapterParseValidate(virStoragePoolDefPtr ret)
+virStorageAdapterParseValidate(virStoragePoolDefPtr ret)
>>>
>>> This function should take a virStoragePoolSourceAdapterPtr rather than
>>> virStoragePoolDefPtr, and the name should just be
>>>
On Wed, 2017-03-15 at 08:59 +0100, Jiri Denemark wrote:
> > > Removing all memory locking limits should be something that
> > > admins very carefully opt-in into, because of the potential
> > > host DoS consequences. Certainly not the default.
> >
> > There's no opt-in with , it is mandatory to
On Tue, Mar 14, 2017 at 15:54:25 -0400, Luiz Capitulino wrote:
> On Tue, 14 Mar 2017 20:28:12 +0100
> Andrea Bolognani wrote:
>
> > On Tue, 2017-03-14 at 14:54 -0400, Luiz Capitulino wrote:
> > > > It's unfortunate that the current, buggy behavior made
> > > > it look like
Thanks Cédric, I'll try it this way, like I do with lxcfs and libvirt to
get good counters on the LXC domain side (RAM / disk / CPU).
I posted here because I'm not sure whether it's a usage question or a
limitation of libvirt; I'll try the other mailing list in the future.
On Wed, 15 Mar 2017 09:53:15 +0100
Hello Pierre-Jacques,
First note that you posted your message on the developer's mailing list.
For such user questions, please email:
https://www.redhat.com/mailman/listinfo/libvirt-users
According to
https://linuxcontainers.org/lxc/manpages//man5/lxc.container.conf.5.html
lxc.kmsg is only
Hi all, I'm trying to find a way to obtain the same behaviour as the
lxc config file setting lxc.kmsg=0.
How can I do that with the xml of an lxc domain? I have made lots of
attempts without success.
Is there another way to deny access to dmesg for a libvirt lxc
domain?
---
Kogitae AE
Infogérance Linux /