I have to say that I wouldn't do the networking that way - in fact, in the
clusters I manage, we haven't done the networking that way :-). Rather
than layer 3 routing between VMs, we've chosen to use layer 2 virtual
switching (yes, using openvswitch). We have the luxury of multiple 10G
NICs
You say you can ping but not ssh. If you install tcpdump on the VM, can you
see the ping packets arriving and leaving? If not, I suspect an address
collision - especially if ping continues to work with the VM shut down. If
you can't ping, check the other end of your bridge. I'm more familiar with
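A minimal capture for this kind of check, assuming the guest's interface is eth0 and ssh on the standard port 22 (both placeholders for your setup):

```shell
# Inside the VM: watch ICMP and ssh traffic on the guest interface.
# "eth0" and port 22 are assumptions - substitute your own.
tcpdump -ni eth0 'icmp or tcp port 22'
```

If pings show up here but ssh SYNs never arrive, the problem is upstream of the guest rather than in the guest itself.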
Not directly. Is this for security, or for administrative convenience? If
convenience, the usual approach would be to add a DHCP reservation. If
security, you'll also want some network infrastructure to expect a
particular IP address on the virtual port. Without more information on your
Why do you reckon this is to do with your virtualisation system (presumably
qemu/kvm, though you don't say) rather than Windows 10?
Peter
On 13 Feb 2018 7:06 p.m., "Benjammin2068" wrote:
>
>
> Hey all,
>
> This horrid bug seems to be back after the system got all
Pacemaker always knows where its resources are running. Query it, stop the
domain, then use the queried location as the host to which to issue the
snapshot?
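A sketch of that sequence, assuming a Pacemaker resource named vm_mydomain managing a libvirt domain mydomain (both names illustrative):

```shell
# Ask Pacemaker where the resource currently runs; the node name is the
# last field of crm_resource's "is running on:" output.
node=$(crm_resource --locate --resource vm_mydomain | awk '{print $NF}')

# Stop the domain cleanly, then take the snapshot on the reported host.
virsh -c "qemu+ssh://${node}/system" shutdown mydomain
virsh -c "qemu+ssh://${node}/system" snapshot-create-as mydomain pre-maintenance
```

In practice you would want the resource unmanaged or the cluster in maintenance mode first, so Pacemaker doesn't treat the shutdown as a failure and restart the domain.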
Cheers,
Peter
On Mon, 15 Oct 2018, 20:36 Lentes, Bernd, <
bernd.len...@helmholtz-muenchen.de> wrote:
> Hi,
>
> i have a two node cluster
On Wed, 26 Dec 2018 at 16:26, b f31415 wrote:
> If not, is there a way with one of the virt command line tools to create
> the XML (with the PCI addresses specified) so that I can process that XML
> and re-write the PCI addressing values? Right now the only way I’ve been
> able to get that
You've not given libvirt anything to install; therefore, it has nothing it
can tell you about the IP address. Are you intending, for example, to add a
mapping between MAC and IP address to a DHCP server for later use?
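If the network in question is a libvirt-managed one, a reservation can be added to its dnsmasq instance with virsh net-update (the MAC, IP, and network name "default" here are all placeholders):

```shell
# Add a DHCP reservation to the 'default' libvirt network,
# both in the running network (--live) and persistently (--config).
virsh net-update default add ip-dhcp-host \
  "<host mac='52:54:00:aa:bb:cc' ip='192.168.122.45'/>" \
  --live --config
```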
- Peter
On Mon, 17 Sep 2018 at 13:23, Devis Reagan Ponraj -ERS, HCL Tech <
The file may have been unlinked (name to inode mapping removed from the
filesystem), but the inode and therefore the data will stay until no
process references the inode. I *think* this may be what's happening in
your case?
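The behaviour is easy to demonstrate from a shell (a generic sketch, not specific to your files): unlink a file while a descriptor still references it, and the data remains readable.

```shell
# Create a file, keep a descriptor open, unlink the name, then read the
# data back through the descriptor: the inode outlives the directory entry.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"   # fd 3 now references the inode
rm "$tmp"        # removes the name; the inode survives while fd 3 is open
cat <&3          # prints: still here
exec 3<&-        # last reference gone; only now is the space reclaimed
```

On a live system, `lsof +L1` lists open files whose link count is zero, which is a quick way to find the process still holding such an inode.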
Peter
On Wed, 15 May 2019, 14:25 Lentes, Bernd, <
On Tue, 30 Apr 2019 at 16:43, Michal Privoznik wrote:
> Long story short, why bother with /system if you can't use it and not
> use /session instead?
>
> Because according to the FAQ, /session isn't suitable for my use:
- You will definitely want to use qemu:///system if your VMs are acting
On Tue, 30 Apr 2019 at 10:40, Michal Privoznik wrote:
> Is there any problem running libvirtd as root?
>
> Yes, in the regulated environment in which I work! I have to do far more
thorough threat analysis than I would need to if I knew which capabilities it
had. So far, we've accepted the extra
On Tue, 30 Apr 2019 at 10:48, Daniel P. Berrangé
wrote:
> On Tue, Apr 30, 2019 at 10:45:03AM +0100, Peter Crowther wrote:
> > On Tue, 30 Apr 2019 at 10:40, Michal Privoznik
> wrote:
> >
> > > Is there any problem running libvirtd as root?
> > >
> > >
*may or may not be* enough - a huge amount depends on your workloads.
As ever, watching the available memory and how much CPU is in use in
"top" will give you a better idea of how your infrastructure is
behaving, as will watching the server charts in virt-manager if it's
available. For example,
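A quick non-interactive snapshot of the same figures, if you want them from a script (plain procps top, nothing exotic):

```shell
# One batch-mode iteration of top; the first few lines carry the load
# average, task counts, CPU split, and memory usage.
top -b -n 1 | head -n 5
```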
On Thu, 8 Aug 2019 at 10:08, Sven Vogel wrote:
> The question for me is why they use the smallest?
>
Because otherwise some frames won't get through: the interface can't receive
them, and the job of network equipment is to enable and maintain
communication :-).
>
> Cheers
>
> Sven
>
Architecturally, separating the data and control channels feels like the
right approach - whether nbd or something else. Would need signposting for
those of us who routinely implement firewalling on hosts, but that's a
detail.
I presume there's no flow control on streams at the moment?
Cheers,
On Fri, 24 Apr 2020 at 21:10, Mauricio Tavares wrote:
> Let's say I have libvirt
>
> [root@vmhost2 ~]# virsh version
> [...]
> Running hypervisor: QEMU 2.12.0
> [root@vmhost2 ~]#
> [...]
>
> When I try to start the guest I get the following error message:
>
> [root@vmhost2 ~]# virsh start
Seconds since the UNIX epoch: midnight UTC on 1 January 1970. See, for example,
https://www.epochconverter.com/ to convert to a human-readable format.
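Or locally with GNU date (the timestamp here is just an illustration):

```shell
# Convert an epoch timestamp to human-readable UTC (GNU date syntax).
date -u -d @1589497680 '+%Y-%m-%d %H:%M:%S UTC'
# → 2020-05-14 23:08:00 UTC
```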
Cheers,
Peter
On Fri, 15 May 2020, 00:08 Santhosh Kumar Gunturu, <
santhosh.978@gmail.com> wrote:
> I see the output.
>
>
Bernd, another option would be a mismatch between the message that "virsh
destroy" issues and the message that force_stop() in the pacemaker agent
expects to receive. Pacemaker is trying to determine the success or
failure of the destroy based on the concatenation of the text of the exit
code and
Paul, if you can set up a VLAN on your network infrastructure between the
two hosts, I'll share the recipe I use with Open VSwitch. We trunk a VLAN
between our hosts for sandboxed guests, setting up an OVS bridge on each
host that handles guests but also has a connection onto the VLAN. Are you
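The core of that recipe in Open vSwitch terms (bridge name, uplink NIC, and VLAN tag are placeholders; the real setup has more to it):

```shell
# On each host: one OVS bridge, with the physical NIC as the uplink.
# An OVS port with no tag trunks all VLANs by default.
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eno2

# Guest vNICs attach as access ports so each guest sees only VLAN 42.
ovs-vsctl add-port ovsbr0 vnet0 tag=42
```

With libvirt, the guest-side tagging is normally expressed in the network or domain XML with `<virtualport type='openvswitch'/>` and a `<vlan>` element rather than by running ovs-vsctl by hand.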
Specify the MAC address as part of the domain XML for the bootstrap node.
See https://libvirt.org/formatdomain.html#elementsNICS.
If using virt-install, set it as part of the --network option: "--network
NETWORK,mac=12:34..."
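In the domain XML, that corresponds to an `<interface>` element along these lines (MAC address, network name, and model here are illustrative):

```xml
<interface type='network'>
  <mac address='52:54:00:12:34:56'/>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```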
- Peter
On Fri, 12 Jun 2020 at 18:07, Ian Pilcher wrote:
> Is it
Do you want this on CentOS 8, Ubuntu 20.04, or something else? The help we
can give you will vary quite considerably depending on the distro you use;
and, as you point out, RedHat-derived and Debian-derived distributions are
very different.
For CentOS, do you have the advanced virtualization
... hang on. Why does the *bridge* have an IP address? Think of a bridge
as being like a switch; it has no address of its own.
Cheers,
Peter
On Tue, 15 Feb 2022 at 20:21, Wolf wrote:
> On 15 Feb 2022, at 20:04, Peter Crowther
> wrote:
>
>
> And eno1 and eno2 are *
And eno1 and eno2 are *both* connected to the same external switch, yes?
Cheers,
Peter
On Tue, 15 Feb 2022 at 17:17, Wolf wrote:
> Hi!
>
> 1) I have two network ports on my server.
> - eno1 has the IP: XX1.XX1.XX1.150
>
> - bridge0 has the IP: XX2.XX2.XX2.100
> and has
virt-install is expecting the installation process to need a reboot partway
through, so carefully restarts the guest on its first shutdown. Then it
gets out of the way. Either change what virt-install thinks it's installing
(not sure how, never done it), or write and enable a service during