The version of ovirt-engine is 4.2.8
The version of ovirt-node is 4.2.8
When I create a new storage domain of type NFS, it reports:
VDSM command ActivateStorageDomainVDS failed: Unknown pool id, pool not
connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',)
The error of
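A first hedged check when a domain fails to activate like this is whether the
host can reach the export at all; the server name and path below are
placeholders:

# list the exports the server offers, then try a throwaway mount
showmount -e nfs-server
mkdir -p /mnt/nfscheck
mount -t nfs nfs-server:/export/data /mnt/nfscheck
df -h /mnt/nfscheck
umount /mnt/nfscheck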
Does it make sense to install nodectl utility on plain CentOS 7.x nodes?
Or any other alternative for plain OS nodes vs ovirt-node-ng ones?
On my updated CentOS 7.6 oVirt node I don't have the command; I think it is
provided by the package ovirt-node-ng-nodectl, which is one of the available
ones if
On Thu, Aug 22, 2019 at 10:41 PM wrote:
> Hey Paul,
>
> Thanks for the reply!
>
> Not really sure here, I read the oVirt 3.0 PDF and it says you need to
> enable LACP for Cisco switches.
> This is really no longer a learning setup, just a headache.
>
>
On Fri, Aug 23, 2019 at 8:10 AM wrote:
> The version of ovirt-engine is 4.2.8
> The version of ovirt-node is 4.2.8
>
>
Hi, please note oVirt 4.2 reached End Of Life state a few months ago.
If this is a new deployment, please redeploy using 4.3 instead.
If this is an existing
Allen, please create bonds as described in
https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks.html#creating-a-bond-device-using-the-administration-portal
and avoid manual steps on the host.
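If this needs scripting rather than the portal, the bond can also be created
through the API's setupnetworks action; a hedged, untested sketch where the
host id, NIC names, and bond options are placeholders:

curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<action><modified_bonds><host_nic><name>bond0</name><bonding>
        <options><option><name>mode</name><value>4</value></option></options>
        <slaves><host_nic><name>eth0</name></host_nic>
                <host_nic><name>eth1</name></host_nic></slaves>
      </bonding></host_nic></modified_bonds></action>' \
  'https://engine.example.com/ovirt-engine/api/hosts/123/setupnetworks'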
On Fri, Aug 23, 2019 at 11:37 AM Sandro Bonazzola
wrote:
>
>
> On Thu, Aug 22, 2019
On Thu, Aug 22, 2019 at 1:18 PM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:
> On Wed, Aug 21, 2019 at 9:18 AM wrote:
> >
> > Good day.
> > Currently I am testing oVirt on a single box and set up some tagged VMs
> > and a non-tagged VM.
> > The non-tagged VM is a firewall but it has
Relevant error in the logs seems to be:
MainThread::DEBUG::2016-04-30
19:45:56,428::unified_persistence::46::root::(run)
upgrade-unified-persistence upgrade persisting networks {} and bondings {}
MainThread::INFO::2016-04-30
19:45:56,428::netconfpersistence::187::root::(_clearDisk) Clearing
Hi,
this is a bug in the scheduler. Currently, it ignores hugepages when
evaluating NUMA pinning.
There is a bugzilla ticket[1] that was originally reported as a similar
case, but then later the reporter changed it.
Could you open a new bugzilla ticket and attach the details from this email?
On Fri, Aug 23, 2019 at 11:27 AM Gianluca Cecchi <
gianluca.cec...@gmail.com> wrote:
> Does it make sense to install nodectl utility on plain CentOS 7.x nodes?
>
No, it doesn't make sense; nodectl checks for oVirt Node-specific
configuration.
# nodectl check
Status: OK
Bootloader ...
On Mon, Jul 8, 2019 at 9:44 PM Christopher Cox wrote:
> On the node in question, the metadata isn't coming across, state-wise.
> It shows VMs being in an unknown state (some are up and some are down),
> some show as migrating and there are 9 forever hung migrating tasks. We
>
In the UI one can create hosts using two authentication methods: 'Password' and
'SSH Public Key'.
I have only found the Password authentication in the API Docs
(/ovirt-engine/apidoc/#/services/hosts/methods/add).
My question is: How can I create hosts using SSH Public Key authentication via
the
Good day.
Sorry if I got you guys confused.
For clarity:
I have a server with two NICs; currently one NIC is connected to the public
network and the other one is disconnected.
And I have a VM that will be the firewall of the other VMs inside this
standalone/self-hosted oVirt.
Then I am figuring out how
Have the VM and the firewall on the same L2 network. Configure the VM with
a default gateway pointing to the firewall's interface.
Is that what you're looking for?
On Fri., 23 Aug. 2019, 21:15 Ernest Clyde Chua,
wrote:
> Good day.
> Sorry if I got you guys confused.
> For clarity:
>
> I have a
Hi,
the following request should work, but I didn't test it.
POST /ovirt-engine/api/hosts

<host>
  <name>myhost</name>
  <address>myhost.example.com</address>
  <ssh>
    <authentication_method>publickey</authentication_method>
  </ssh>
</host>
Here is the relevant API documentation:
http://ovirt.github.io/ovirt-engine-api-model/4.4/#types/ssh
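If it helps, the same call as a curl sketch, also untested; the engine URL and
credentials are placeholders:

curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<host><name>myhost</name><address>myhost.example.com</address><ssh><authentication_method>publickey</authentication_method></ssh></host>' \
  'https://engine.example.com/ovirt-engine/api/hosts'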
Regards,
Andrej
On Fri, 23 Aug 2019
On Fri, Aug 23, 2019 at 2:32 PM Sandro Bonazzola
wrote:
>
>
> Or any other alternative for plain OS nodes vs ovirt-node-ng ones?
>>
>
> What's the use case here? Checking host sanity? Because nodectl is not
> checking that; it just checks that the node config matches the requirements
> to be able to perform
On Fri, Aug 23, 2019 at 2:25 PM Sandro Bonazzola
wrote:
> Relevant error in the logs seems to be:
>
> MainThread::DEBUG::2016-04-30
> 19:45:56,428::unified_persistence::46::root::(run)
> upgrade-unified-persistence upgrade persisting networks {} and bondings {}
> MainThread::INFO::2016-04-30
>
Good day.
Yes, the VMs and the firewall are on the same L2 network; the firewall is
also hosted in oVirt alongside the VMs. Currently there is no external switch
connected to the NIC, and I would like to know if it is possible to pass tags
internally.
On Fri, Aug 23, 2019 at 9:21 PM Tony Pearce wrote:
Dear oVirt,
Would you be so kind as to help me, or point me to, how to find which hooks,
and in which order, are triggered when a VM is being migrated?
Kindly awaiting your reply.
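For reference, VDSM ships dedicated hook points for migration; assuming the
default layout, the migration hook scripts live under these directories on
the source and destination hosts:

# source-side hooks
ls /usr/libexec/vdsm/hooks/before_vm_migrate_source/
ls /usr/libexec/vdsm/hooks/after_vm_migrate_source/
# destination-side hooks
ls /usr/libexec/vdsm/hooks/before_vm_migrate_destination/
ls /usr/libexec/vdsm/hooks/after_vm_migrate_destination/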
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
Maybe I misunderstand, but there is no need for any tag on the same layer 2 network.
On Fri., 23 Aug. 2019, 22:15 Ernest Clyde Chua,
wrote:
> Good day.
> Yes, the VMs and the firewall are on the same L2 network; the firewall is also
> hosted in oVirt alongside the VMs. Currently there is no external switch
>
Sorry to dead bump this, but I'm beginning to suspect that maybe it's
not STP that's the problem.
2 of my hosts just went down when a few VMs tried to migrate.
Do any of you have any idea what might be going on here? I don't even
know where to start. I'm going to include the dmesg in case it
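Some generic link-state forensics that might help once the hosts are
reachable again; the interface name em1 is a placeholder:

# link state, speed, and error counters
ip -s link show em1
ethtool em1
# kernel messages around the outage
dmesg -T | grep -i -e link -e bond
journalctl --since '2 hours ago' -u NetworkManager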
On Thu, Aug 22, 2019, 04:47 wrote:
> Hi,
> I have a 4.3.5 hyperconverged setup with 3 hosts, each host has 2x10G NIC
> ports
>
> Host1:
> NIC1: 192.168.1.11
> NIC2: 192.168.0.67 (Gluster)
>
> Host2:
> NIC1: 10.10.1.12
> NIC2: 192.168.0.68 (Gluster)
>
> Host3:
> NIC1: 10.10.1.13
> NIC2:
Is your storage connected via NFS?
Can you manually access the storage on the host?
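A hedged way to check from the host; the export path and server name are
placeholders:

# oVirt expects NFS exports to be owned by vdsm:kvm (uid/gid 36:36)
ls -ldn /export/data
# active domains are mounted under /rhev/data-center/mnt/ on the host
ls /rhev/data-center/mnt/
sudo -u vdsm touch '/rhev/data-center/mnt/nfs-server:_export_data/write-test'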
On Fri, Aug 23, 2019 at 5:19 PM Curtis E. Combs Jr.
wrote:
> Sorry to dead bump this, but I'm beginning to suspect that maybe it's
> not STP that's the problem.
>
> 2 of my hosts just went down when a few VMs
Hey Dominik,
Thanks for helping. I really want to try to use oVirt.
When these events happen, I cannot even SSH to the nodes due to the
link being down. After a little while, the hosts come back...
On Fri, Aug 23, 2019 at 11:30 AM Dominik Holler wrote:
>
> Is your storage connected via NFS?
>
Also, if it helps, the hosts will sit there, quietly, for hours or
days before anything happens. They're up and working just fine. But
then, when I manually migrate a VM from one host to another, they
become completely inaccessible.
These are vanilla-as-possible CentOS 7 nodes. Very basic oVirt
On Fri, Aug 23, 2019 at 5:41 PM Curtis E. Combs Jr.
wrote:
> Also, if it helps, the hosts will sit there, quietly, for hours or
> days before anything happens. They're up and working just fine. But
> then, when I manually migrate a VM from one host to another, they
> become completely
Sure! Right now, I only have a 500 GB partition on each node shared over
NFS, added as storage domains. This is on each node - so, currently 3.
How can the storage cause a node to drop out?
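One hedged guess at the mechanism: hosts hold sanlock leases on every storage
domain, and a host that cannot renew its leases can be rebooted by the
watchdog. This can be inspected on a host:

# list the sanlock leases currently held by this host
sanlock client status
# the watchdog daemon that acts when leases cannot be renewed
systemctl status wdmd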
On Fri, Aug 23, 2019, 11:46 AM Dominik Holler wrote:
>
>
> On Fri, Aug 23, 2019 at 5:41 PM Curtis E.
and here are the contents of yum.log-20190823, which contains logs since
March this year, if it can help:
https://drive.google.com/file/d/1zKXbY2ySLPM4TSyzzZ1_AvUrA8sCMYOm/view?usp=sharing
Thanks,
Gianluca
On Fri, Aug 23, 2019 at 5:49 PM Curtis E. Combs Jr.
wrote:
> Sure! Right now, I only have a 500 GB partition on each node shared over
> NFS, added as storage domains. This is on each node - so, currently 3.
>
> How can the storage cause a node to drop out?
>
>
Thanks, I got it.
All three links go
Unfortunately, I can't check on the switch. Trust me, I've tried.
These servers are in a Co-Lo and I've put 5 tickets in asking about
the port configuration. They just get ignored - but that's par for the
coarse for IT here. Only about 2 out of 10 of our tickets get any
response and usually the
Thanks Dominik,
Went through this documentation earlier and it looked as if this was done after
the Engine is installed.
I am looking to get networking completely set up before the Engine so I have a
template and can duplicate this effort across multiple hosts during install.
Possibly using a
> POST /ovirt-engine/api/hosts
>
> <host>
>   <name>myhost</name>
>   <address>myhost.example.com</address>
>   <ssh>
>     <authentication_method>publickey</authentication_method>
>   </ssh>
> </host>
>
It works, thank you very much!
On Fri, Aug 23, 2019 at 6:45 PM Curtis E. Combs Jr.
wrote:
> Unfortunately, I can't check on the switch. Trust me, I've tried.
> These servers are in a Co-Lo and I've put 5 tickets in asking about
> the port configuration. They just get ignored - but that's par for the
> course for IT here. Only
This little cluster isn't in production or anything like that yet.
So, I went ahead and used your ethtool commands to disable pause
frames on both interfaces of each server. I then chose a few VMs to
migrate around at random.
swm-02 and swm-03 both went out again. Unreachable. Can't ping, can't
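The ethtool commands referenced above are not shown in this digest; disabling
pause frames is typically done like this, with em1 as a placeholder device
name:

# turn off pause-frame autonegotiation and rx/tx pause
ethtool -A em1 autoneg off rx off tx off
# verify the new pause settings
ethtool -a em1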
On Fri, Aug 23, 2019 at 8:03 PM Curtis E. Combs Jr.
wrote:
> This little cluster isn't in production or anything like that yet.
>
> So, I went ahead and used your ethtool commands to disable pause
> frames on both interfaces of each server. I then chose a few VMs to
> migrate around at random.
It took a while for my servers to come back on the network this time.
I think it's due to oVirt continuing to try to migrate the VMs around
like I requested. The 3 servers' names are "swm-01, swm-02 and
swm-03". Eventually (about 2-3 minutes ago) they all came back online.
So I disabled and
On Fri, Aug 23, 2019 at 9:19 PM Dominik Holler wrote:
>
>
> On Fri, Aug 23, 2019 at 8:03 PM Curtis E. Combs Jr.
> wrote:
>
>> This little cluster isn't in production or anything like that yet.
>>
>> So, I went ahead and used your ethtool commands to disable pause
>> frames on both interfaces of
Is the NIC to the network staying up or going down for a period?
I'm just thinking: if the network has been configured to block unknown
unicast traffic, the VM would need to send a layer 2 frame to the
network before the network would send any frames to that switch port
destined for the
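If blocked unknown unicast is the suspicion, one hedged test is to have the
VM announce itself with an unsolicited (gratuitous) ARP; the interface and
address are placeholders:

# send three gratuitous ARP replies for the VM's own address
arping -U -c 3 -I eth0 192.0.2.10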
On Fri, Aug 23, 2019 at 8:50 PM Tony Pearce wrote:
>
> Is the NIC to the network staying up or going down for a period?
Which NIC? The one on the pserver or the virtual machine? For clarity,
I've only ever referred to the one on the pserver. I can't even reach
the VM when the pserver becomes