Re: [ovirt-users] ovirt4.0.1 and new node, migration fails

2016-08-26 Thread Bill James
Can anyone offer suggestions on how to figure out why, when I create a 
new VM, go to "Host", and try to select "Specific Host", although 2 
nodes are listed it won't let me pick one of them?


No updates on the bug reported.
I tried ovirt-engine-4.0.2.7-1.el7.centos.noarch on both ovirt engine 
and the new node, no change.


The 2 nodes are not identical if that makes a difference. Both are HP 
DL360 G8's but one has 128GB RAM and the other has 32GB. Does that matter?


If after creating the VM I go to "Run Once", then I can select the second 
node and the VM starts up fine.

Or I can migrate the VM once it's started.
Why doesn't the initial "New VM" window allow me to select a host?



On 8/15/16 8:51 AM, Bill James wrote:

sure did. No word yet on that.
*bug 1363900*



On 08/14/2016 03:43 AM, Yaniv Dary wrote:

Did you open a bug like Michal asked?

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Fri, Aug 12, 2016 at 2:23 AM, Bill James wrote:


just fyi, I tried updating to
ovirt-engine-4.0.2.6-1.el7.centos.noarch
and it's still the same story: it won't let me select a specific host to
deploy a VM to, and it doesn't tell me why not.

However I can migrate the VM from one node to the other one just
fine.

Any ideas why?

I provided logs earlier.




On 08/03/2016 02:32 PM, Bill James wrote:

opened bug 1363900.
It also includes recent logs from all 3 servers.

Also tried updating vdsm to 4.18.10-1 and restarted ovirt-engine.
No change in results.

I can migrate VMs to the new node but not initially assign VMs to
that node.


On 07/29/2016 09:31 AM, Bill James wrote:

ok, I will raise a bug. Yes, it is very frustrating just
having a button not work without any explanation.
I don't know if this is related: the new host that I am
having trouble with is running 4.0.1.1-1,
the other host is running 4.0.0.6-1.

I was planning on migrating the VMs and then upgrading the older host.
Also, the cluster is still in 3.6 mode, also waiting for the
upgrade of the older node.
All storage domains are on the older node; it's the NFS server.


hmm, just retried migration so I could get vdsm logs from
both hosts.
1 VM worked, the other 3 failed.
And I still can't assign VMs to the new node.

VM that worked: f4cd4891-977d-44c2-8554-750ce86da7c9

Not sure what's special about it.
It and 45f4f24b-2dfa-401b-8557-314055a4662c are clones
from same template.


Attaching logs from 2 nodes and engine now.



On 7/29/16 4:00 AM, Michal Skrivanek wrote:

On 28 Jul 2016, at 18:37, Bill James wrote:

I'm trying to test out oVirt 4.0.1.
I added a new hardware node to the cluster. Its
status is Up.
But for some reason, when I create a new VM and
try to select "Specific Host", the radio button
doesn't let me select it, so I can't assign a VM
to the new node.

please raise that as a bug if there is no explanation why

When I try to migrate a VM to new node it fails with:

MigrationError: Domain not found: no domain with
matching uuid '45f4f24b-2dfa-401b-8557-314055a4662c'


What did I miss?

for migration issues you always need to include vdsm
logs from both source and destination host. Hard to
say otherwise. Anything special about your deployment?

ovirt-engine-4.0.1.1-1.el7.centos.noarch
vdsm-4.18.6-1.el7.centos.x86_64

storage is NFS.

Attached logs.


___

Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users







--

Check it out Tomorrow night:
http://thebilljamesgroup.com/Events.html

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-26 Thread Yaniv Kaul
On Fri, Aug 26, 2016 at 1:33 PM, InterNetX - Juergen Gotteswinter <
j...@internetx.com> wrote:

>
>
> On 25.08.2016 at 15:53, Yaniv Kaul wrote:
> >
> >
> > On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter
> > wrote:
> >
> > iSCSI & oVirt is an awful combination, no matter if multipathed or
> > bonded. It's always gambling how long it will work, and when it fails,
> > why did it fail.
> >
> >
> > I disagree. In most cases, it's actually a lower-layer issue. In most
> > cases, btw, it's because multipathing was not configured (or not
> > configured correctly).
> >
>
> Experience tells me it is like I said; this is something I have seen
> from 3.0 up to 3.6, oVirt and - surprise - RHEV. Both act the same
> way. I am absolutely aware of multipath configurations; iSCSI multipathing
> is in very widespread use in our DC. But such problems are an exclusive
> oVirt/RHEV feature.
>

I don't think the resentful tone is appropriate for the oVirt community
mailing list.


>
> >
> >
> > It's supersensitive to latency, and superfast at setting a host to
> > inactive because the engine thinks something is wrong with it. In
> > most cases there was no real reason for it.
> >
> >
> > Did you open bugs for those issues? I'm not aware of 'no real reason'
> > issues.
> >
>
> Support tickets for the RHEV installation: after support (even after massive
> escalation requests) kept telling me the same thing again and again, I gave up
> and we dropped the RHEV subscriptions to migrate the VMs to a different
> platform solution (still with an iSCSI backend). Problems gone.
>

I wish you peace of mind with your new platform solution.
From a (shallow!) search I've made on oVirt bugs, I could not find any
oVirt issue you've reported or commented on.
I am aware of the request to set rp_filter correctly for setups with
multiple interfaces in the same IP subnet.
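For reference, that tuning usually amounts to loosening reverse-path
filtering on the iSCSI interfaces, roughly like this (a sketch only; the
interface names are just examples and have to match your own iSCSI NICs):

  # loose-mode reverse-path filtering on the two iSCSI interfaces
  sysctl -w net.ipv4.conf.enp9s0f0.rp_filter=2
  sysctl -w net.ipv4.conf.enp9s0f1.rp_filter=2
  # persist the same keys under /etc/sysctl.d/ so they survive a reboot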


>
>
> >
> >
> > We had this in several different hardware combinations: self-built
> > filers based on FreeBSD/Illumos & ZFS, Equallogic SAN, Nexenta filer.
> >
> > Been there, done that, won't do again.
> >
> >
> > We've had good success and reliability with most enterprise level
> > storage, such as EMC, NetApp, Dell filers.
> > When properly configured, of course.
> > Y.
> >
>
> Dell Equallogic? Can't really believe it, since oVirt / RHEV and the
> Equallogic network configuration won't play nice together (EQL wants all
> interfaces in the same subnet). And they only work as expected when
> their HIT Kit driver package is installed. Without it, path failover is like
> Russian roulette. But oVirt hates the HIT Kit, so this combo ends up in
> a huge mess, because oVirt makes changes to iSCSI, as well as the HIT Kit
> -> kaboom. Host not available.
>

Thanks - I'll look into this specific storage.
I'm aware it's unique in some cases, but I don't have experience with it
specifically.


>
> There are several KB Articles in the RHN, without real Solution.
>
>
> But as you try to tell between the lines, this must be the customer's
> misconfiguration. Yep, a typical support-killer answer. Same style as in
> the RHN tickets; I am done with this.
>

A funny sentence I've read yesterday:
Schrödinger's backup: "the condition of any backup is unknown until restore
is attempted."

In a sense, this is similar to a no-SPOF high-availability setup - in many
cases you don't know whether it works well until it is needed.
There are simply many variables and components involved.
That was all I meant, nothing between the lines and I apologize if I've
given you a different impression.
Y.


> Thanks.
>
> >
> >
> >
> > On 24.08.2016 at 16:04, Uwe Laverenz wrote:
> > > Hi Elad,
> > >
> > > thank you very much for clearing things up.
> > >
> > > Initiator/iface 'a' tries to connect target 'b' and vice versa. As
> 'a'
> > > and 'b' are in completely separate networks this can never work as
> > long
> > > as there is no routing between the networks.
> > >
> > > So it seems the iSCSI-bonding feature is not useful for my setup. I
> > > still wonder how and where this feature is supposed to be used?
> > >
> > > thank you,
> > > Uwe
> > >
> > > On 24.08.2016 at 15:35, Elad Ben Aharon wrote:
> > >> Thanks.
> > >>
> > >> You're getting an iSCSI connection timeout [1], [2]. It means the
> > host
> > >> cannot connect to the targets from iface: enp9s0f1 nor iface:
> > enp9s0f0.
> > >>
> > >> This causes the host to lose its connection to the storage and
> > >> also, the connection to the engine becomes inactive. Therefore, the host
> > >> changes its status to Non-responsive [3] and since it's the SPM, the
> > >> whole DC, with all its storage domains, becomes inactive.
> > >>
> > >>
> > >> vdsm.log:
> > >> [1]
> > >> Traceback (most recent call last):
> > >>   File 

Re: [ovirt-users] VM Live Migration issues

2016-08-26 Thread Yaniv Dary
OVS is experimental and there is an open item on making migration work:
https://bugzilla.redhat.com/show_bug.cgi?id=1362495

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Fri, Aug 26, 2016 at 11:25 AM, Anantha Raghava <
rag...@exzatechconsulting.com> wrote:

> Hi,
>
> Not at all.
>
> Just a newly created virtual machine. No user is accessing the VM yet. Only
> the Ubuntu Trusty Tahr (14.04) OS with the oVirt Guest Agent is installed. The
> VM and both hosts are in the same network as well.
>
> The applied migration policy is "Minimum Downtime". The bandwidth limit is set
> to Auto. The cluster switch is set to "OVS". I can send the screenshot tomorrow.
>
> Host hardware configuration: Intel Xeon CPUs, 2 sockets x 16 cores per
> socket. Installed memory is 256 GB on each host. I have 4 x 10Gbps NICs, 2
> used for VM traffic and 2 used for iSCSI traffic. oVirt management is on
> a separate 1 x 1Gbps NIC.
>
> Yet, migration fails.
>
> --
>
> Thanks & Regards,
>
>
> Anantha Raghava eXza Technology Consulting & Services
>
> Do not print this e-mail unless required. Save Paper & trees.
> On Friday 26 August 2016 09:31 PM, Yaniv Dary wrote:
>
> Is the VM very busy?
> Did you apply the new cluster migration policies?
>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com
> IRC : ydary
>
>
> On Fri, Aug 26, 2016 at 1:54 AM, Anantha Raghava <
> rag...@exzatechconsulting.com> wrote:
>
>> Hi,
>>
>> In our setup we have configured two hosts, both of the same CPU type and the
>> same amount of memory, and the master storage domain is created on iSCSI
>> storage and is live.
>>
>> I created a single VM with Ubuntu Trusty as OS. It installed properly and
>> when I attempted to migrate the running VM, the migration failed.
>>
>> Engine log, Host 1 log and Host 2 logs are attached for your reference.
>> Since logs are running into several MBs, I have compressed them and
>> attached here.
>>
>> Can someone help us to solve this issue?
>>
>> --
>>
>> Thanks & Regards,
>> Anantha Raghava
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Live Migration issues

2016-08-26 Thread Yaniv Dary
Is the VM very busy?
Did you apply the new cluster migration policies?

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Fri, Aug 26, 2016 at 1:54 AM, Anantha Raghava <
rag...@exzatechconsulting.com> wrote:

> Hi,
>
> In our setup we have configured two hosts, both of the same CPU type and the
> same amount of memory, and the master storage domain is created on iSCSI
> storage and is live.
>
> I created a single VM with Ubuntu Trusty as OS. It installed properly and
> when I attempted to migrate the running VM, the migration failed.
>
> Engine log, Host 1 log and Host 2 logs are attached for your reference.
> Since logs are running into several MBs, I have compressed them and
> attached here.
>
> Can someone help us to solve this issue?
>
> --
>
> Thanks & Regards,
> Anantha Raghava
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-26 Thread Nicolas Ecarnot

On 26/08/2016 at 12:33, InterNetX - Juergen Gotteswinter wrote:



On 25.08.2016 at 15:53, Yaniv Kaul wrote:



On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter
wrote:

iSCSI & oVirt is an awful combination, no matter if multipathed or
bonded. It's always gambling how long it will work, and when it fails,
why did it fail.


I disagree. In most cases, it's actually a lower-layer issue. In most
cases, btw, it's because multipathing was not configured (or not
configured correctly).



Experience tells me it is like I said; this is something I have seen
from 3.0 up to 3.6, oVirt and - surprise - RHEV. Both act the same
way. I am absolutely aware of multipath configurations; iSCSI multipathing
is in very widespread use in our DC. But such problems are an exclusive
oVirt/RHEV feature.




It's supersensitive to latency, and superfast at setting a host to
inactive because the engine thinks something is wrong with it. In most
cases there was no real reason for it.


Did you open bugs for those issues? I'm not aware of 'no real reason'
issues.



Support tickets for the RHEV installation: after support (even after massive
escalation requests) kept telling me the same thing again and again, I gave up
and we dropped the RHEV subscriptions to migrate the VMs to a different
platform solution (still with an iSCSI backend). Problems gone.





We had this in several different hardware combinations: self-built
filers based on FreeBSD/Illumos & ZFS, Equallogic SAN, Nexenta filer.

Been there, done that, won't do again.


We've had good success and reliability with most enterprise level
storage, such as EMC, NetApp, Dell filers.
When properly configured, of course.
Y.



Dell Equallogic? Can't really believe it, since oVirt / RHEV and the
Equallogic network configuration won't play nice together (EQL wants all
interfaces in the same subnet). And they only work as expected when
their HIT Kit driver package is installed. Without it, path failover is like
Russian roulette. But oVirt hates the HIT Kit, so this combo ends up in
a huge mess, because oVirt makes changes to iSCSI, as well as the HIT Kit
-> kaboom. Host not available.

There are several KB Articles in the RHN, without real Solution.


I've been working with Dell, EMC and EQL for 7 years, and with oVirt for 4 
years, and I must admit that what Juergen said is true: the HIT Kit is not 
compatible with the way oVirt uses iSCSI (or the other way round).

What he says about LVM is also true.
I still love oVirt, and won't quit. I'm just very patient and hoping my 
BZ and RFE get fixed.



--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.0.2 Post Release update is now available

2016-08-26 Thread Sandro Bonazzola
On Fri, Aug 26, 2016 at 12:06 PM, Gianluca Cecchi  wrote:

> On Mon, Aug 22, 2016 at 2:04 PM, Sandro Bonazzola 
> wrote:
>
>> The oVirt Project is pleased to announce the availability of an oVirt
>> 4.0.2 post release update, as of August 22nd, 2016.
>>
>> This update includes new builds of the oVirt Engine SDKs, which fix password
>> leaking into HTTPD logs when obtaining an SSO token.
>>
>>
>>
> So in the case of an environment already at the previous 4.0.2 level, do we
> have to run engine-setup again?
> Also, are hosts to be patched, or only engines?
>

This post-release update includes only the oVirt Engine SDKs, so just a yum/dnf
update is enough.
It may be applied both on nodes and the manager, and on any other host where the
SDK is used by 3rd-party tools.
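In practice that should be no more than something like the following on each
machine that has the SDK installed (the package glob is an assumption; adjust
it to the SDKs you actually use, e.g. ovirt-engine-sdk-python):

  # refresh only the SDK packages; no engine-setup run is needed for this update
  yum update 'ovirt-engine-sdk*'
  # or, on dnf-based systems
  dnf update 'ovirt-engine-sdk*'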





> Thanks,
> Gianluca
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-26 Thread InterNetX - Juergen Gotteswinter
One more thing, which I am sure most people are not aware of.

When using thin-provisioned disks for VMs hosted on an iSCSI SAN, oVirt
uses what is, to me, an unusual way to do this.

oVirt adds a new LVM LV for a VM and generates a thin qcow2 image which is
written directly, raw, onto that LV. So far, OK, it can be done like this.

But try generating some write load from within the guest and see for
yourself what will happen. Support's answer to this is: use raw, without
thin provisioning.

Seems to me like a wrong design decision, and I can only warn everyone
against using this.
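For anyone who wants to see that layout on their own hosts, a rough way to
inspect it (the VG and LV names below are placeholders; on block storage
domains the VG is normally named after the storage domain UUID):

  # list the logical volumes backing the images of an iSCSI storage domain
  lvs <storage-domain-vg>
  # for a thin-provisioned disk, qemu-img reports a qcow2 image sitting
  # directly on the raw LV
  qemu-img info /dev/<storage-domain-vg>/<image-lv>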

On 26.08.2016 at 12:33, InterNetX - Juergen Gotteswinter wrote:
> 
> 
> On 25.08.2016 at 15:53, Yaniv Kaul wrote:
>>
>>
>> On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter
>> wrote:
>>
>> iSCSI & oVirt is an awful combination, no matter if multipathed or
>> bonded. It's always gambling how long it will work, and when it fails,
>> why did it fail.
>>
>>
>> I disagree. In most cases, it's actually a lower-layer issue. In most
>> cases, btw, it's because multipathing was not configured (or not
>> configured correctly).
>>  
> 
> Experience tells me it is like I said; this is something I have seen
> from 3.0 up to 3.6, oVirt and - surprise - RHEV. Both act the same
> way. I am absolutely aware of multipath configurations; iSCSI multipathing
> is in very widespread use in our DC. But such problems are an exclusive
> oVirt/RHEV feature.
> 
>>
>>
>> It's supersensitive to latency, and superfast at setting a host to
>> inactive because the engine thinks something is wrong with it. In most
>> cases there was no real reason for it.
>>
>>
>> Did you open bugs for those issues? I'm not aware of 'no real reason'
>> issues.
>>  
> 
> Support tickets for the RHEV installation: after support (even after massive
> escalation requests) kept telling me the same thing again and again, I gave up
> and we dropped the RHEV subscriptions to migrate the VMs to a different
> platform solution (still with an iSCSI backend). Problems gone.
> 
> 
>>
>>
>> We had this in several different hardware combinations: self-built
>> filers based on FreeBSD/Illumos & ZFS, Equallogic SAN, Nexenta filer.
>>
>> Been there, done that, won't do again.
>>
>>
>> We've had good success and reliability with most enterprise level
>> storage, such as EMC, NetApp, Dell filers.
>> When properly configured, of course.
>> Y.
>>
> 
> Dell Equallogic? Can't really believe it, since oVirt / RHEV and the
> Equallogic network configuration won't play nice together (EQL wants all
> interfaces in the same subnet). And they only work as expected when
> their HIT Kit driver package is installed. Without it, path failover is like
> Russian roulette. But oVirt hates the HIT Kit, so this combo ends up in
> a huge mess, because oVirt makes changes to iSCSI, as well as the HIT Kit
> -> kaboom. Host not available.
> 
> There are several KB Articles in the RHN, without real Solution.
> 
> 
> But as you try to tell between the lines, this must be the customer's
> misconfiguration. Yep, a typical support-killer answer. Same style as in
> the RHN tickets; I am done with this.
> 
> Thanks.
> 
>>  
>>
>>
>> > On 24.08.2016 at 16:04, Uwe Laverenz wrote:
>> > Hi Elad,
>> >
>> > thank you very much for clearing things up.
>> >
>> > Initiator/iface 'a' tries to connect target 'b' and vice versa. As 'a'
>> > and 'b' are in completely separate networks this can never work as
>> long
>> > as there is no routing between the networks.
>> >
>> > So it seems the iSCSI-bonding feature is not useful for my setup. I
>> > still wonder how and where this feature is supposed to be used?
>> >
>> > thank you,
>> > Uwe
>> >
>> > On 24.08.2016 at 15:35, Elad Ben Aharon wrote:
>> >> Thanks.
>> >>
>> >> You're getting an iSCSI connection timeout [1], [2]. It means the
>> host
>> >> cannot connect to the targets from iface: enp9s0f1 nor iface:
>> enp9s0f0.
>> >>
>> >> This causes the host to lose its connection to the storage and also,
>> >> the connection to the engine becomes inactive. Therefore, the host
>> >> changes its status to Non-responsive [3] and since it's the SPM, the
>> >> whole DC, with all its storage domains, becomes inactive.
>> >>
>> >>
>> >> vdsm.log:
>> >> [1]
>> >> Traceback (most recent call last):
>> >>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
>> >> connectStorageServer
>> >> conObj.connect()
>> >>   File "/usr/share/vdsm/storage/storageServer.py", line 508, in
>> connect
>> >> iscsi.addIscsiNode(self._iface, self._target, self._cred)
>> >>   File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
>> >> iscsiadm.node_login(iface.name, portalStr,
>> >> target.iqn)
>> >>   File 

Re: [ovirt-users] ovirt-ha-agent

2016-08-26 Thread David Gossage
On Fri, Aug 26, 2016 at 1:38 AM, Renout Gerrits  wrote:

> Depends on your systemd configuration. The ovirt-ha-agent and broker daemons
> both log to stdout and to their own logfiles. All messages to stdout will go to
> journald and be forwarded to /var/log/messages (ForwardToSyslog=yes in
> /etc/systemd/journald.conf, I think).
> So the ovirt-ha-agent doesn't log to /var/log/messages, journald does. Whether
> it should log to stdout is another discussion, but maybe there's a good
> reason for that, backwards compatibility, I don't know.
>
> An easy fix is redirecting the output of the daemon to /dev/null: in
> /usr/lib/systemd/system/ovirt-ha-agent.service, add StandardOutput=null to
> the [Service] section.
>

This did not start occurring until after I updated oVirt on August 12th. I
jumped a couple of updates, I think, so I'm not sure which update it started with.

Aug 12 21:54:58 Updated: ovirt-vmconsole-1.0.2-1.el7.centos.noarch
Aug 12 21:55:23 Updated: ovirt-vmconsole-host-1.0.2-1.el7.centos.noarch
Aug 12 21:55:34 Updated: ovirt-setup-lib-1.0.1-1.el7.centos.noarch
Aug 12 21:55:42 Updated: libgovirt-0.3.3-1.el7_2.4.x86_64
Aug 12 21:56:37 Updated: ovirt-hosted-engine-ha-1.3.5.7-1.el7.centos.noarch
Aug 12 21:56:37 Updated: ovirt-engine-sdk-python-3.6.7.0-1.el7.centos.noarch
Aug 12 21:56:38 Updated:
ovirt-hosted-engine-setup-1.3.7.2-1.el7.centos.noarch
Aug 12 21:58:46 Updated: 1:ovirt-release36-3.6.7-1.noarch

So is this considered normal behavior that everyone should need to
reconfigure their logging to deal with, or did someone leave it going to
stdout for debugging and not turn it back off?  I know I can manipulate
logging myself if I need to, but I'd rather not have to customize that and
then check after every update that it's still applied, or that I still get
logs at all.



> Renout
>
> On Thu, Aug 25, 2016 at 10:39 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> This service seems to be logging to both /var/log/messages
>> and /var/log/ovirt-hosted-engine-ha/agent.log
>>
>> Anything that may be causing that?  Centos7 ovirt 3.6.7
>>
>> MainThread::INFO::2016-08-25 15:38:36,912::ovf_store::109::
>> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>> Extracting Engine VM OVF from the OVF_STORE
>> MainThread::INFO::2016-08-25 15:38:36,976::ovf_store::116::
>> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>> OVF_STORE volume path: /rhev/data-center/mnt/glusterS
>> D/ccgl1.gl.local:HOST1/6a0bca4a-a1be-47d3-be51-64c627
>> 7d1f0f/images/c12c8000-0373-419b-963b-98b04adca760/fb6e250
>> 9-4786-433d-868f-a6303dd69cca
>> MainThread::INFO::2016-08-25 15:38:37,097::config::226::ovi
>> rt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>> Found an OVF for HE VM, trying to convert
>> MainThread::INFO::2016-08-25 15:38:37,102::config::231::ovi
>> rt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
>> Got vm.conf from OVF_STORE
>> MainThread::INFO::2016-08-25 15:38:37,200::hosted_engine::4
>> 62::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Current state EngineDown (score: 3400)
>> MainThread::INFO::2016-08-25 15:38:37,201::hosted_engine::4
>> 67::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Best remote host ccovirt1.carouselchecks.local (id: 1, score: 3400)
>> MainThread::INFO::2016-08-25 15:38:47,346::hosted_engine::6
>> 13::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
>> Initializing VDSM
>> MainThread::INFO::2016-08-25 15:38:47,439::hosted_engine::6
>> 58::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
>> Connecting the storage
>> MainThread::INFO::2016-08-25 15:38:47,454::storage_server::218::
>> ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
>> Connecting storage server
>> MainThread::INFO::2016-08-25 15:38:47,618::storage_server::222::
>> ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
>> Connecting storage server
>> Aug 25 15:38:40 ccovirt3 ovirt-ha-broker: INFO:ovirt_hosted_engine_ha.br
>> oker.listener.ConnectionHandler:Connection closed
>>
>> Aug 25 15:38:40 ccovirt3 ovirt-ha-broker: INFO:mem_free.MemFree:memFree:
>> 16466
>> Aug 25 15:38:40 ccovirt3 ovirt-ha-broker: 
>> INFO:engine_health.CpuLoadNoEngine:VM
>> not on this host
>> Aug 25 15:38:45 ccovirt3 ovirt-ha-broker: INFO:mgmt_bridge.MgmtBridge:Found
>> bridge ovirtmgmt with ports
>> Aug 25 15:38:45 ccovirt3 ovirt-ha-broker: 
>> INFO:cpu_load_no_engine.EngineHealth:VM
>> not on this host
>> Aug 25 15:38:45 ccovirt3 ovirt-ha-broker: 
>> INFO:cpu_load_no_engine.EngineHealth:System
>> load total=0.1022, engine=0., non-engine=0.1022
>> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.ag
>> ent.hosted_engine.HostedEngine:Initializing VDSM
>> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.ag
>> 

Re: [ovirt-users] Upgrade datacenter from 3.6 to 4.0

2016-08-26 Thread Arnaud Lauriou
Upgrade done. In my case, the engine was already updated to the latest 
4.x release before the hosts.

Following steps 1 and 2 worked fine.
After upgrading the cluster compatibility version, I got a message to reboot 
all the VMs.

I don't know if I really have to do that.


On 08/23/2016 02:28 PM, Barak Korren wrote:


The following process worked for me (a rough command-level sketch of steps
2 and 3 follows after this list):
1. Ensure all hosts are running el7 or above, and all clusters have
been upgraded to the 3.6 level.
2. Upgrade vdsm to 4.x on all hosts (assuming you are not using
ovirt-node) by switching to maintenance, upgrading, and activating one
by one to avoid VM downtime.
3. Upgrade the engine by following one of the documented procedures (we
just installed a new machine for it and switched DNS after
backup/restore).
4. Once satisfied that rolling back to the 3.6 engine will not be needed,
upgrade the cluster level.
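A very rough sketch of what steps 2 and 3 look like on the command line
(assuming plain CentOS hosts with the 4.0 repositories already enabled, and
the in-place engine upgrade variant; putting hosts into maintenance and
activating them again is done from the engine UI or API and is not shown):

  # on each host, once it is in maintenance:
  yum update 'vdsm*'
  systemctl restart vdsmd
  # then activate the host again and move on to the next one

  # on the engine machine:
  yum update 'ovirt*setup*'
  engine-setup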



On 23 Aug 2016 at 12:43, "Arnaud Lauriou" wrote:


Hi,

What is the best way to upgrade a datacenter from ovirt 3.6 to 4.0 ?
Upgrade engine, hosts then cluster ?
If right, how do you upgrade a host running ovirt 3.6.7 ? Simple
upgrade or fresh install ?
During the update process of a cluster, some hosts will be in
3.6.7 and others in 4.0.2, is it a problem for migrating VMs ?

Found docs about engine upgrade but not a lot about host and cluster.

Thanks,

Arnaud
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-ha-agent

2016-08-26 Thread Renout Gerrits
Depends on your systemd configuration. The ovirt-ha-agent and broker daemons
both log to stdout and to their own logfiles. All messages to stdout will go to
journald and be forwarded to /var/log/messages (ForwardToSyslog=yes in
/etc/systemd/journald.conf, I think).
So the ovirt-ha-agent doesn't log to /var/log/messages, journald does. Whether
it should log to stdout is another discussion, but maybe there's a good
reason for that, backwards compatibility, I don't know.

An easy fix is redirecting the output of the daemon to /dev/null: in
/usr/lib/systemd/system/ovirt-ha-agent.service, add StandardOutput=null to
the [Service] section.
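A sketch of one way to wire that up without touching the packaged unit file
(the drop-in file name is my own choice; editing the shipped unit as described
above also works, but may be overwritten by a package update):

  # create a systemd drop-in that silences stdout for the agent
  mkdir -p /etc/systemd/system/ovirt-ha-agent.service.d
  printf '[Service]\nStandardOutput=null\n' \
    > /etc/systemd/system/ovirt-ha-agent.service.d/10-no-stdout.conf
  systemctl daemon-reload
  systemctl restart ovirt-ha-agent
  # (alternatively, ForwardToSyslog=no in /etc/systemd/journald.conf stops the
  # forwarding to /var/log/messages, but for every unit, not just this one)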

Renout

On Thu, Aug 25, 2016 at 10:39 PM, David Gossage  wrote:

> This service seems to be logging to both /var/log/messages
> and /var/log/ovirt-hosted-engine-ha/agent.log
>
> Anything that may be causing that?  Centos7 ovirt 3.6.7
>
> MainThread::INFO::2016-08-25 15:38:36,912::ovf_store::109::
> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> Extracting Engine VM OVF from the OVF_STORE
> MainThread::INFO::2016-08-25 15:38:36,976::ovf_store::116::
> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> OVF_STORE volume path: /rhev/data-center/mnt/glusterSD/ccgl1.gl.local:
> HOST1/6a0bca4a-a1be-47d3-be51-64c6277d1f0f/images/c12c8000-
> 0373-419b-963b-98b04adca760/fb6e2509-4786-433d-868f-a6303dd69cca
> MainThread::INFO::2016-08-25 15:38:37,097::config::226::
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.
> config::(refresh_local_conf_file) Found an OVF for HE VM, trying to
> convert
> MainThread::INFO::2016-08-25 15:38:37,102::config::231::
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.
> config::(refresh_local_conf_file) Got vm.conf from OVF_STORE
> MainThread::INFO::2016-08-25 15:38:37,200::hosted_engine::
> 462::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 3400)
> MainThread::INFO::2016-08-25 15:38:37,201::hosted_engine::
> 467::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host ccovirt1.carouselchecks.local (id: 1, score: 3400)
> MainThread::INFO::2016-08-25 15:38:47,346::hosted_engine::
> 613::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_vdsm) Initializing VDSM
> MainThread::INFO::2016-08-25 15:38:47,439::hosted_engine::
> 658::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_storage_images) Connecting the storage
> MainThread::INFO::2016-08-25 15:38:47,454::storage_server::
> 218::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> MainThread::INFO::2016-08-25 15:38:47,618::storage_server::
> 222::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> Aug 25 15:38:40 ccovirt3 ovirt-ha-broker: INFO:ovirt_hosted_engine_ha.
> broker.listener.ConnectionHandler:Connection closed
>
> Aug 25 15:38:40 ccovirt3 ovirt-ha-broker: INFO:mem_free.MemFree:memFree:
> 16466
> Aug 25 15:38:40 ccovirt3 ovirt-ha-broker: 
> INFO:engine_health.CpuLoadNoEngine:VM
> not on this host
> Aug 25 15:38:45 ccovirt3 ovirt-ha-broker: INFO:mgmt_bridge.MgmtBridge:Found
> bridge ovirtmgmt with ports
> Aug 25 15:38:45 ccovirt3 ovirt-ha-broker: 
> INFO:cpu_load_no_engine.EngineHealth:VM
> not on this host
> Aug 25 15:38:45 ccovirt3 ovirt-ha-broker: 
> INFO:cpu_load_no_engine.EngineHealth:System
> load total=0.1022, engine=0., non-engine=0.1022
> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine:Initializing VDSM
> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine:Connecting the storage
> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.
> lib.storage_server.StorageServer:Connecting storage server
> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.
> lib.storage_server.StorageServer:Connecting storage server
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users