Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread Scott
Nope, not in any official repo.  I only use those suggested by oVirt, i.e.:

http://centos.bhs.mirrors.ovh.net/ftp.centos.org/7/storage/x86_64/gluster-3.7/

No 3.7.14 there.  Thanks though.
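
For anyone double-checking their own mirrors, yum can list every build a
configured repo actually offers. A minimal sketch (assumes the CentOS storage
SIG package naming; adjust to your repo):

  yum clean metadata
  yum --showduplicates list glusterfs-server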

Scott

On Sat, Aug 13, 2016 at 11:23 AM David Gossage 
wrote:

> On Sat, Aug 13, 2016 at 11:00 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Sat, Aug 13, 2016 at 8:19 AM, Scott  wrote:
>>
>>> Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it
>>> works for me too where 3.7.12/13 did not.
>>>
>>> I did find that you should NOT turn off network.remote-dio or turn
>>> on performance.strict-o-direct as suggested earlier in the thread.  They
>>> will prevent dd (using direct flag) and other things from working
>>> properly.  I'd leave those at network.remote-dio=enabled
>>> and performance.strict-o-direct=off.
>>>
>>
>> Those were actually just suggested during a testing phase, trying to trace
>> down the issue.  Neither of those two, I think, has ever been suggested as
>> good practice, at least not for VM storage.
>>
>>
>>> Hopefully we can see Gluster 3.7.14 moved out of testing repo soon.
>>>
>>
> Is it still in the testing repo? I updated my production cluster, I think
> two weeks ago, from the default repo on CentOS 7.
>
>
>>> Scott
>>>
>>> On Tue, Aug 2, 2016 at 9:05 AM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 So far gluster 3.7.14 seems to have resolved issues, at least on my test
 box.  dd commands that failed previously now work with sharding on a ZFS
 backend.

 Where before I couldn't even mount a new storage domain, it now mounted
 and I have a test VM being created.

 Still have to let the VM run for a few days and make sure no locking or
 freezing occurs, but it looks hopeful so far.

 *David Gossage*
 *Carousel Checks Inc. | System Administrator*
 *Office* 708.613.2284

 On Tue, Jul 26, 2016 at 8:15 AM, David Gossage <
 dgoss...@carouselchecks.com> wrote:

> On Tue, Jul 26, 2016 at 4:37 AM, Krutika Dhananjay <
> kdhan...@redhat.com> wrote:
>
>> Hi,
>>
>> 1.  Could you please attach the glustershd logs from all three nodes?
>>
>
> Here are ccgl1 and ccgl2.  As previously mentioned, the third node, ccgl3,
> was down from a bad NIC, so no relevant logs would be on that node.
>
>
>>
>> 2. Also, so far what we know is that the 'Operation not permitted'
>> errors are on the main vm image itself and not its individual shards (ex
>> deb61291-5176-4b81-8315-3f1cf8e3534d). Could you do the following:
>> Get the inode number of
>> .glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d (ls -li) from the
>> first brick. I'll call this number INODE_NUMBER.
>> Execute `find . -inum INODE_NUMBER` from the brick root on first
>> brick to print the hard links against the file in the prev step and share
>> the output.
>>
> [dgossage@ccgl1 ~]$ sudo ls -li
> /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
> 16407 -rw-r--r--. 2 36 36 466 Jun  5 16:52
> /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
> [dgossage@ccgl1 ~]$ cd /gluster1/BRICK1/1/
> [dgossage@ccgl1 1]$ sudo find . -inum 16407
> ./7c73a8dd-a72e-4556-ac88-7f6813131e64/dom_md/metadata
> ./.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
>
>
>
>>
>> 3. Did you delete any vms at any point before or after the upgrade?
>>
>
> Immediately before or after, on the same day, I'm pretty sure I deleted
> nothing.  During the week prior I deleted a few dev VMs that were never set
> up, and some in the week after the upgrade: I was preparing to move disks
> off and back onto storage to get them sharded, and felt it would be easier
> to just recreate some disks that had no data yet rather than move them off
> and on later.
>
>>
>> -Krutika
>>
>> On Mon, Jul 25, 2016 at 11:30 PM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>>
>>> On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay <
>>> kdhan...@redhat.com> wrote:
>>>
 OK, could you try the following:

 i. Set network.remote-dio to off
 # gluster volume set <volname> network.remote-dio off

 ii. Set performance.strict-o-direct to on
 # gluster volume set <volname> performance.strict-o-direct on

 iii. Stop the affected vm(s) and start again

 and tell me if you notice any improvement?


>>> The previous install I had the issue with is still on gluster 3.7.11.
>>>
>>> My test install of oVirt 3.6.7 and gluster 3.7.13, with 3 bricks on a
>>> local disk, right now isn't allowing me to add the gluster storage at all.
>>>
>>> I keep getting some type of UI error:
>>>
>>> 2016-07-25 12:49:09,277 ERROR
>>> 

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread Scott
Sounds good, except they aren't even that ("suggestions during a testing
phase").  They will flat-out break the configuration.  So they shouldn't be
tests at all; they shouldn't be anything except a "don't do this."

Thanks.

Scott

On Sat, Aug 13, 2016 at 11:01 AM David Gossage 
wrote:

> On Sat, Aug 13, 2016 at 8:19 AM, Scott  wrote:
>
>> Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it
>> works for me too where 3.7.12/13 did not.
>>
>> I did find that you should NOT turn off network.remote-dio or turn
>> on performance.strict-o-direct as suggested earlier in the thread.  They
>> will prevent dd (using direct flag) and other things from working
>> properly.  I'd leave those at network.remote-dio=enabled
>> and performance.strict-o-direct=off.
>>
>
> Those were actually just suggested during a testing phase, trying to trace
> down the issue.  Neither of those two, I think, has ever been suggested as
> good practice, at least not for VM storage.
>
>
>> Hopefully we can see Gluster 3.7.14 moved out of testing repo soon.
>>
>> Scott
>>
>> On Tue, Aug 2, 2016 at 9:05 AM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> So far gluster 3.7.14 seems to have resolved issues, at least on my test
>>> box.  dd commands that failed previously now work with sharding on a ZFS
>>> backend.
>>>
>>> Where before I couldn't even mount a new storage domain, it now mounted
>>> and I have a test VM being created.
>>>
>>> Still have to let the VM run for a few days and make sure no locking or
>>> freezing occurs, but it looks hopeful so far.
>>>
>>> *David Gossage*
>>> *Carousel Checks Inc. | System Administrator*
>>> *Office* 708.613.2284
>>>
>>> On Tue, Jul 26, 2016 at 8:15 AM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 On Tue, Jul 26, 2016 at 4:37 AM, Krutika Dhananjay  wrote:

> Hi,
>
> 1.  Could you please attach the glustershd logs from all three nodes?
>

 Here are ccgl1 and ccgl2.  As previously mentioned, the third node, ccgl3,
 was down from a bad NIC, so no relevant logs would be on that node.


>
> 2. Also, so far what we know is that the 'Operation not permitted'
> errors are on the main vm image itself and not its individual shards (ex
> deb61291-5176-4b81-8315-3f1cf8e3534d). Could you do the following:
> Get the inode number of
> .glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d (ls -li) from the
> first brick. I'll call this number INODE_NUMBER.
> Execute `find . -inum INODE_NUMBER` from the brick root on first brick
> to print the hard links against the file in the prev step and share the
> output.
>
 [dgossage@ccgl1 ~]$ sudo ls -li
 /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
 16407 -rw-r--r--. 2 36 36 466 Jun  5 16:52
 /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
 [dgossage@ccgl1 ~]$ cd /gluster1/BRICK1/1/
 [dgossage@ccgl1 1]$ sudo find . -inum 16407
 ./7c73a8dd-a72e-4556-ac88-7f6813131e64/dom_md/metadata
 ./.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d



>
> 3. Did you delete any vms at any point before or after the upgrade?
>

 Immediately before or after, on the same day, I'm pretty sure I deleted
 nothing.  During the week prior I deleted a few dev VMs that were never set
 up, and some in the week after the upgrade: I was preparing to move disks
 off and back onto storage to get them sharded, and felt it would be easier
 to just recreate some disks that had no data yet rather than move them off
 and on later.

>
> -Krutika
>
> On Mon, Jul 25, 2016 at 11:30 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>> On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay <
>> kdhan...@redhat.com> wrote:
>>
>>> OK, could you try the following:
>>>
>>> i. Set network.remote-dio to off
>>> # gluster volume set <volname> network.remote-dio off
>>>
>>> ii. Set performance.strict-o-direct to on
>>> # gluster volume set <volname> performance.strict-o-direct on
>>>
>>> iii. Stop the affected vm(s) and start again
>>>
>>> and tell me if you notice any improvement?
>>>
>>>
>> The previous install I had the issue with is still on gluster 3.7.11.
>>
>> My test install of oVirt 3.6.7 and gluster 3.7.13, with 3 bricks on a
>> local disk, right now isn't allowing me to add the gluster storage at all.
>>
>> I keep getting some type of UI error:
>>
>> 2016-07-25 12:49:09,277 ERROR
>> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
>> (default task-33) [] Permutation name: 430985F23DFC1C8BE1C7FDD91EDAA785
>> 2016-07-25 12:49:09,277 ERROR
>> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
>> (default task-33) [] Uncaught exception: : 

Re: [ovirt-users] oVirt 4 with custom SSL-certificate and SPICE HTML5 browser client -> WebSocket error: Can't connect to websocket on URL: wss://ovirt.engine.fqdn:6100/

2016-08-13 Thread Jiri Belka
I have different files for those variables; maybe this is the cause?

Review again.

j.

- Original Message -
From: "aleksey maksimov" 
To: "Jiri Belka" 
Cc: "users" 
Sent: Saturday, August 13, 2016 4:57:45 PM
Subject: Re: [ovirt-users] oVirt 4 with custom SSL-certificate and SPICE HTML5 
browser client -> WebSocket error: Can't connect to websocket on URL: 
wss://ovirt.engine.fqdn:6100/


I changed my file /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf 
to:


PROXY_PORT=6100
#SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/websocket-proxy.cer
#SSL_KEY=/etc/pki/ovirt-engine/keys/websocket-proxy.key.nopass
#CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/certs/engine.cer
SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/apache.cer
SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass
CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/apache-ca.pem
SSL_ONLY=True

...and restarted the HostedEngine VM.
The problem still exists.

13.08.2016, 17:52, "aleksey.maksi...@it-kb.ru" :
> It does not work for me. Any ideas?
>
> 02.08.2016, 17:22, "Jiri Belka" :
>>  This works for me:
>>
>>  # cat /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
>>  PROXY_PORT=6100
>>  SSL_CERTIFICATE=/etc/pki/ovirt-engine/apache-ca.pem
>>  SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass
>>  CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/certs/engine.cer
>>  SSL_ONLY=True
>>
>>  - Original Message -
>>  From: "aleksey maksimov" 
>>  To: "users" 
>>  Sent: Monday, August 1, 2016 12:13:38 PM
>>  Subject: [ovirt-users] oVirt 4 with custom SSL-certificate and SPICE HTML5 
>> browser client -> WebSocket error: Can't connect to websocket on URL: 
>> wss://ovirt.engine.fqdn:6100/
>>
>>  Hello oVirt gurus!
>>
>>  I have successfully replaced the oVirt 4 site SSL certificate according to
>> the instructions in the "Replacing oVirt SSL Certificate" section of the
>> "oVirt Administration Guide":
>>  http://www.ovirt.org/documentation/admin-guide/administration-guide/
>>
>>  3 files have been replaced:
>>
>>  /etc/pki/ovirt-engine/certs/apache.cer
>>  /etc/pki/ovirt-engine/keys/apache.key.nopass
>>  /etc/pki/ovirt-engine/apache-ca.pem
>>
>>  Now the oVirt site is using my certificate and everything works fine, but
>> when I try to use the SPICE HTML5 browser client in Firefox or Chrome I see
>> a gray screen and a message under the "Toggle messages output" button:
>>
>>  WebSocket error: Can't connect to websocket on URL: 
>> wss://ovirt.engine.fqdn:6100/eyJ...0=[object Event]
>>
>>  Before replacing the certificates, the SPICE HTML5 browser client worked.
>>  The native SPICE client works fine.
>>
>>  What should I do to get the SPICE HTML5 browser client working?
>>  ___
>>  Users mailing list
>>  Users@ovirt.org
>>  http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] unable to find valid certification path to requested target

2016-08-13 Thread Bill Bill
I recently upgraded ovirt-engine and now cannot log in. I am getting the error below:


sun.security.validator.ValidatorException: PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
valid certification path to requested target
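
One generic way to see which certificate chain the engine is presenting, and
to add a CA to a Java truststore, is sketched below; the truststore path,
alias, and password are placeholders, not oVirt defaults, and the actual fix
for this error may differ:

  # show the chain served on the engine's HTTPS port
  openssl s_client -connect engine.fqdn:443 -showcerts </dev/null

  # import a CA certificate into a JVM truststore
  keytool -importcert -alias my-ca -file ca.pem \
      -keystore /path/to/truststore.jks -storepass changeit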

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt Reports

2016-08-13 Thread Fernando Fuentes
Team,

I have two oVirt servers: one on 4.0.2 and one on 3.6.x.
I love the new 4.0 dashboard, but I need a way to continue building
reports with ease; 3.6 can do this with ovirt-reports. How can I make my
oVirt 4.0.2 send data for reports to my ovirt-reports on 3.6?

Thanks for the help team!

Regards,

-- 
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread David Gossage
On Sat, Aug 13, 2016 at 11:00 AM, David Gossage  wrote:

> On Sat, Aug 13, 2016 at 8:19 AM, Scott  wrote:
>
>> Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it
>> works for me too where 3.7.12/13 did not.
>>
>> I did find that you should NOT turn off network.remote-dio or turn
>> on performance.strict-o-direct as suggested earlier in the thread.  They
>> will prevent dd (using direct flag) and other things from working
>> properly.  I'd leave those at network.remote-dio=enabled
>> and performance.strict-o-direct=off.
>>
>
> Those were actually just suggested during a testing phase, trying to trace
> down the issue.  Neither of those two, I think, has ever been suggested as
> good practice, at least not for VM storage.
>
>
>> Hopefully we can see Gluster 3.7.14 moved out of testing repo soon.
>>
>
Is it still in the testing repo? I updated my production cluster, I think
two weeks ago, from the default repo on CentOS 7.


>> Scott
>>
>> On Tue, Aug 2, 2016 at 9:05 AM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> So far gluster 3.7.14 seems to have resolved issues, at least on my test
>>> box.  dd commands that failed previously now work with sharding on a ZFS
>>> backend.
>>>
>>> Where before I couldn't even mount a new storage domain, it now mounted
>>> and I have a test VM being created.
>>>
>>> Still have to let the VM run for a few days and make sure no locking or
>>> freezing occurs, but it looks hopeful so far.
>>>
>>> *David Gossage*
>>> *Carousel Checks Inc. | System Administrator*
>>> *Office* 708.613.2284
>>>
>>> On Tue, Jul 26, 2016 at 8:15 AM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>
 On Tue, Jul 26, 2016 at 4:37 AM, Krutika Dhananjay  wrote:

> Hi,
>
> 1.  Could you please attach the glustershd logs from all three nodes?
>

 Here are ccgl1 and ccgl2.  As previously mentioned, the third node, ccgl3,
 was down from a bad NIC, so no relevant logs would be on that node.


>
> 2. Also, so far what we know is that the 'Operation not permitted'
> errors are on the main vm image itself and not its individual shards (ex
> deb61291-5176-4b81-8315-3f1cf8e3534d). Could you do the following:
> Get the inode number of 
> .glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
> (ls -li) from the first brick. I'll call this number INODE_NUMBER.
> Execute `find . -inum INODE_NUMBER` from the brick root on first brick
> to print the hard links against the file in the prev step and share the
> output.
>
 [dgossage@ccgl1 ~]$ sudo ls -li /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
 16407 -rw-r--r--. 2 36 36 466 Jun  5 16:52 /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
 [dgossage@ccgl1 ~]$ cd /gluster1/BRICK1/1/
 [dgossage@ccgl1 1]$ sudo find . -inum 16407
 ./7c73a8dd-a72e-4556-ac88-7f6813131e64/dom_md/metadata
 ./.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d



>
> 3. Did you delete any vms at any point before or after the upgrade?
>

 Immediately before or after, on the same day, I'm pretty sure I deleted
 nothing.  During the week prior I deleted a few dev VMs that were never set
 up, and some in the week after the upgrade: I was preparing to move disks
 off and back onto storage to get them sharded, and felt it would be easier
 to just recreate some disks that had no data yet rather than move them off
 and on later.

>
> -Krutika
>
> On Mon, Jul 25, 2016 at 11:30 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>> On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay <
>> kdhan...@redhat.com> wrote:
>>
>>> OK, could you try the following:
>>>
>>> i. Set network.remote-dio to off
>>> # gluster volume set <volname> network.remote-dio off
>>>
>>> ii. Set performance.strict-o-direct to on
>>> # gluster volume set <volname> performance.strict-o-direct on
>>>
>>> iii. Stop the affected vm(s) and start again
>>>
>>> and tell me if you notice any improvement?
>>>
>>>
>> The previous install I had the issue with is still on gluster 3.7.11.
>>
>> My test install of oVirt 3.6.7 and gluster 3.7.13, with 3 bricks on a
>> local disk, right now isn't allowing me to add the gluster storage at all.
>>
>> I keep getting some type of UI error:
>>
>> 2016-07-25 12:49:09,277 ERROR
>> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
>> (default task-33) [] Permutation name: 430985F23DFC1C8BE1C7FDD91EDAA785
>> 2016-07-25 12:49:09,277 ERROR
>> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
>> (default task-33) [] Uncaught exception: : java.lang.ClassCastException
>> at Unknown.ps(https://ccengine2.c

Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread David Gossage
On Sat, Aug 13, 2016 at 8:19 AM, Scott  wrote:

> Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it
> works for me too where 3.7.12/13 did not.
>
> I did find that you should NOT turn off network.remote-dio or turn
> on performance.strict-o-direct as suggested earlier in the thread.  They
> will prevent dd (using direct flag) and other things from working
> properly.  I'd leave those at network.remote-dio=enabled
> and performance.strict-o-direct=off.
>

Those were actually just suggested during a testing phase, trying to trace
down the issue.  Neither of those two, I think, has ever been suggested as
good practice, at least not for VM storage.


> Hopefully we can see Gluster 3.7.14 moved out of testing repo soon.
>
> Scott
>
> On Tue, Aug 2, 2016 at 9:05 AM, David Gossage wrote:
>
>> So far gluster 3.7.14 seems to have resolved issues, at least on my test
>> box.  dd commands that failed previously now work with sharding on a ZFS
>> backend.
>>
>> Where before I couldn't even mount a new storage domain, it now mounted
>> and I have a test VM being created.
>>
>> Still have to let the VM run for a few days and make sure no locking or
>> freezing occurs, but it looks hopeful so far.
>>
>> *David Gossage*
>> *Carousel Checks Inc. | System Administrator*
>> *Office* 708.613.2284
>>
>> On Tue, Jul 26, 2016 at 8:15 AM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>> On Tue, Jul 26, 2016 at 4:37 AM, Krutika Dhananjay 
>>> wrote:
>>>
 Hi,

 1.  Could you please attach the glustershd logs from all three nodes?

>>>
>>> Here are ccgl1 and ccgl2.  As previously mentioned, the third node, ccgl3,
>>> was down from a bad NIC, so no relevant logs would be on that node.
>>>
>>>

 2. Also, so far what we know is that the 'Operation not permitted'
 errors are on the main vm image itself and not its individual shards (ex
 deb61291-5176-4b81-8315-3f1cf8e3534d). Could you do the following:
 Get the inode number of 
 .glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
 (ls -li) from the first brick. I'll call this number INODE_NUMBER.
 Execute `find . -inum INODE_NUMBER` from the brick root on first brick
 to print the hard links against the file in the prev step and share the
 output.

>>> [dgossage@ccgl1 ~]$ sudo ls -li /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
>>> 16407 -rw-r--r--. 2 36 36 466 Jun  5 16:52 /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
>>> [dgossage@ccgl1 ~]$ cd /gluster1/BRICK1/1/
>>> [dgossage@ccgl1 1]$ sudo find . -inum 16407
>>> ./7c73a8dd-a72e-4556-ac88-7f6813131e64/dom_md/metadata
>>> ./.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
>>>
>>>
>>>

 3. Did you delete any vms at any point before or after the upgrade?

>>>
>>> Immediately before or after, on the same day, I'm pretty sure I deleted
>>> nothing.  During the week prior I deleted a few dev VMs that were never
>>> set up, and some in the week after the upgrade: I was preparing to move
>>> disks off and back onto storage to get them sharded, and felt it would be
>>> easier to just recreate some disks that had no data yet rather than move
>>> them off and on later.
>>>

 -Krutika

 On Mon, Jul 25, 2016 at 11:30 PM, David Gossage <
 dgoss...@carouselchecks.com> wrote:

>
> On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay <
> kdhan...@redhat.com> wrote:
>
>> OK, could you try the following:
>>
>> i. Set network.remote-dio to off
>> # gluster volume set <volname> network.remote-dio off
>>
>> ii. Set performance.strict-o-direct to on
>> # gluster volume set <volname> performance.strict-o-direct on
>>
>> iii. Stop the affected vm(s) and start again
>>
>> and tell me if you notice any improvement?
>>
>>
> The previous install I had the issue with is still on gluster 3.7.11.
>
> My test install of oVirt 3.6.7 and gluster 3.7.13, with 3 bricks on a
> local disk, right now isn't allowing me to add the gluster storage at all.
>
> I keep getting some type of UI error:
>
> 2016-07-25 12:49:09,277 ERROR
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-33) [] Permutation name: 430985F23DFC1C8BE1C7FDD91EDAA785
> 2016-07-25 12:49:09,277 ERROR
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-33) [] Uncaught exception: : java.lang.ClassCastException
> at Unknown.ps(https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@3837)
> at Unknown.ts(https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@20)
> at Unknown.vs(https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1

Re: [ovirt-users] oVirt 4 with custom SSL-certificate and SPICE HTML5 browser client -> WebSocket error: Can't connect to websocket on URL: wss://ovirt.engine.fqdn:6100/

2016-08-13 Thread aleksey . maksimov

I changed my file /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf 
to:


PROXY_PORT=6100
#SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/websocket-proxy.cer
#SSL_KEY=/etc/pki/ovirt-engine/keys/websocket-proxy.key.nopass
#CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/certs/engine.cer
SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/apache.cer
SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass
CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/apache-ca.pem
SSL_ONLY=True

...and restarted the HostedEngine VM.
The problem still exists.
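
A few checks that may help narrow this down, sketched on the assumption that
the proxy runs on the engine host as the ovirt-websocket-proxy service (the
hostname is a placeholder):

  systemctl restart ovirt-websocket-proxy
  journalctl -u ovirt-websocket-proxy -n 50

  # confirm the proxy is listening on the configured port
  ss -tlnp | grep 6100

  # check which certificate the proxy actually presents
  openssl s_client -connect ovirt.engine.fqdn:6100 </dev/null \
      | openssl x509 -noout -subject -issuer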

13.08.2016, 17:52, "aleksey.maksi...@it-kb.ru" :
> It does not work for me. Any ideas?
>
> 02.08.2016, 17:22, "Jiri Belka" :
>>  This works for me:
>>
>>  # cat /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
>>  PROXY_PORT=6100
>>  SSL_CERTIFICATE=/etc/pki/ovirt-engine/apache-ca.pem
>>  SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass
>>  CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/certs/engine.cer
>>  SSL_ONLY=True
>>
>>  - Original Message -
>>  From: "aleksey maksimov" 
>>  To: "users" 
>>  Sent: Monday, August 1, 2016 12:13:38 PM
>>  Subject: [ovirt-users] oVirt 4 with custom SSL-certificate and SPICE HTML5 
>> browser client -> WebSocket error: Can't connect to websocket on URL: 
>> wss://ovirt.engine.fqdn:6100/
>>
>>  Hello oVirt gurus!
>>
>>  I have successfully replaced the oVirt 4 site SSL certificate according to
>> the instructions in the "Replacing oVirt SSL Certificate" section of the
>> "oVirt Administration Guide":
>>  http://www.ovirt.org/documentation/admin-guide/administration-guide/
>>
>>  3 files have been replaced:
>>
>>  /etc/pki/ovirt-engine/certs/apache.cer
>>  /etc/pki/ovirt-engine/keys/apache.key.nopass
>>  /etc/pki/ovirt-engine/apache-ca.pem
>>
>>  Now the oVirt site is using my certificate and everything works fine, but
>> when I try to use the SPICE HTML5 browser client in Firefox or Chrome I see
>> a gray screen and a message under the "Toggle messages output" button:
>>
>>  WebSocket error: Can't connect to websocket on URL: 
>> wss://ovirt.engine.fqdn:6100/eyJ...0=[object Event]
>>
>>  Before replacing the certificates, the SPICE HTML5 browser client worked.
>>  The native SPICE client works fine.
>>
>>  What should I do to get the SPICE HTML5 browser client working?
>>  ___
>>  Users mailing list
>>  Users@ovirt.org
>>  http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4 with custom SSL-certificate and SPICE HTML5 browser client -> WebSocket error: Can't connect to websocket on URL: wss://ovirt.engine.fqdn:6100/

2016-08-13 Thread aleksey . maksimov

It does not work for me. Any ideas?

02.08.2016, 17:22, "Jiri Belka" :
> This works for me:
>
> # cat /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
> PROXY_PORT=6100
> SSL_CERTIFICATE=/etc/pki/ovirt-engine/apache-ca.pem
> SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass
> CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/certs/engine.cer
> SSL_ONLY=True
>
> - Original Message -
> From: "aleksey maksimov" 
> To: "users" 
> Sent: Monday, August 1, 2016 12:13:38 PM
> Subject: [ovirt-users] oVirt 4 with custom SSL-certificate and SPICE HTML5 
> browser client -> WebSocket error: Can't connect to websocket on URL: 
> wss://ovirt.engine.fqdn:6100/
>
> Hello oVirt gurus!
>
> I have successfully replaced the oVirt 4 site SSL certificate according to
> the instructions in the "Replacing oVirt SSL Certificate" section of the
> "oVirt Administration Guide":
> http://www.ovirt.org/documentation/admin-guide/administration-guide/
>
> 3 files have been replaced:
>
> /etc/pki/ovirt-engine/certs/apache.cer
> /etc/pki/ovirt-engine/keys/apache.key.nopass
> /etc/pki/ovirt-engine/apache-ca.pem
>
> Now the oVirt site is using my certificate and everything works fine, but
> when I try to use the SPICE HTML5 browser client in Firefox or Chrome I see
> a gray screen and a message under the "Toggle messages output" button:
>
> WebSocket error: Can't connect to websocket on URL: 
> wss://ovirt.engine.fqdn:6100/eyJ...0=[object Event]
>
> Before replacing the certificates, the SPICE HTML5 browser client worked.
> The native SPICE client works fine.
>
> What should I do to get the SPICE HTML5 browser client working?
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 3.6.6 and gluster 3.7.13

2016-08-13 Thread Scott
Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it
works for me too where 3.7.12/13 did not.

I did find that you should NOT turn off network.remote-dio or turn
on performance.strict-o-direct as suggested earlier in the thread.  They
will prevent dd (using direct flag) and other things from working
properly.  I'd leave those at network.remote-dio=enabled
and performance.strict-o-direct=off.
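
For reference, a minimal way to confirm what a volume currently has set and
to reproduce the direct-I/O test (a sketch; `myvol` and the mount path are
placeholders, and `volume get` assumes gluster 3.7 or later):

  gluster volume get myvol network.remote-dio
  gluster volume get myvol performance.strict-o-direct

  # dd with the direct flag, run against a FUSE mount of the volume
  dd if=/dev/zero of=/mnt/myvol/dd-test.img bs=1M count=100 oflag=direct
  rm -f /mnt/myvol/dd-test.img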

Hopefully we can see Gluster 3.7.14 moved out of testing repo soon.

Scott

On Tue, Aug 2, 2016 at 9:05 AM, David Gossage 
wrote:

> So far gluster 3.7.14 seems to have resolved issues, at least on my test
> box.  dd commands that failed previously now work with sharding on a ZFS
> backend.
>
> Where before I couldn't even mount a new storage domain, it now mounted
> and I have a test VM being created.
>
> Still have to let the VM run for a few days and make sure no locking or
> freezing occurs, but it looks hopeful so far.
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
> On Tue, Jul 26, 2016 at 8:15 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Tue, Jul 26, 2016 at 4:37 AM, Krutika Dhananjay 
>> wrote:
>>
>>> Hi,
>>>
>>> 1.  Could you please attach the glustershd logs from all three nodes?
>>>
>>
>> Here are ccgl1 and ccgl2.  As previously mentioned, the third node, ccgl3,
>> was down from a bad NIC, so no relevant logs would be on that node.
>>
>>
>>>
>>> 2. Also, so far what we know is that the 'Operation not permitted'
>>> errors are on the main vm image itself and not its individual shards (ex
>>> deb61291-5176-4b81-8315-3f1cf8e3534d). Could you do the following:
>>> Get the inode number of 
>>> .glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
>>> (ls -li) from the first brick. I'll call this number INODE_NUMBER.
>>> Execute `find . -inum INODE_NUMBER` from the brick root on first brick
>>> to print the hard links against the file in the prev step and share the
>>> output.
>>>
>> [dgossage@ccgl1 ~]$ sudo ls -li /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
>> 16407 -rw-r--r--. 2 36 36 466 Jun  5 16:52 /gluster1/BRICK1/1/.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
>> [dgossage@ccgl1 ~]$ cd /gluster1/BRICK1/1/
>> [dgossage@ccgl1 1]$ sudo find . -inum 16407
>> ./7c73a8dd-a72e-4556-ac88-7f6813131e64/dom_md/metadata
>> ./.glusterfs/de/b6/deb61291-5176-4b81-8315-3f1cf8e3534d
>>
>>
>>
>>>
>>> 3. Did you delete any vms at any point before or after the upgrade?
>>>
>>
>> Immediately before or after, on the same day, I'm pretty sure I deleted
>> nothing.  During the week prior I deleted a few dev VMs that were never set
>> up, and some in the week after the upgrade: I was preparing to move disks
>> off and back onto storage to get them sharded, and felt it would be easier
>> to just recreate some disks that had no data yet rather than move them off
>> and on later.
>>
>>>
>>> -Krutika
>>>
>>> On Mon, Jul 25, 2016 at 11:30 PM, David Gossage <
>>> dgoss...@carouselchecks.com> wrote:
>>>

 On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay  wrote:

> OK, could you try the following:
>
> i. Set network.remote-dio to off
> # gluster volume set <volname> network.remote-dio off
>
> ii. Set performance.strict-o-direct to on
> # gluster volume set <volname> performance.strict-o-direct on
>
> iii. Stop the affected vm(s) and start again
>
> and tell me if you notice any improvement?
>
>
 The previous install I had the issue with is still on gluster 3.7.11.

 My test install of oVirt 3.6.7 and gluster 3.7.13, with 3 bricks on a
 local disk, right now isn't allowing me to add the gluster storage at all.

 I keep getting some type of UI error:

 2016-07-25 12:49:09,277 ERROR
 [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
 (default task-33) [] Permutation name: 430985F23DFC1C8BE1C7FDD91EDAA785
 2016-07-25 12:49:09,277 ERROR
 [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
 (default task-33) [] Uncaught exception: : java.lang.ClassCastException
 at Unknown.ps(https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@3837)
 at Unknown.ts(https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@20)
 at Unknown.vs(https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@18)
 at Unknown.iJf(https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@19)
 at Unknown.Xab(https://ccengine2.carouselchecks.local/ovirt-engine/webadmin/430985F23DFC1C8BE1C7FDD91EDAA785.cache.html@48)
 at Unknown.P8o(https://ccengine2.carouselchecks.local/ovirt-

Re: [ovirt-users] ovirt 3.6 python sdk how to find logical network from a host nic?

2016-08-13 Thread Juan Hernández
On 08/13/2016 12:17 AM, Huan He (huhe) wrote:
> Assuming the logical network ovirtmgmt has been configured in host NIC
> enp6s0.
> 
> host = api.hosts.get('host-123')
> host_nic = host.nics.get('enp6s0')
> 
> How do I get the logical network name ovirtmgmt?
> 
> I basically need to find which NIC ovirtmgmt is configured on.
> 
> Thanks,
> Huan
> 

To do this first you need to find the identifier of the "ovirtmgmt"
network of the relevant cluster (the same network name can be used in
multiple clusters) and then iterate the network attachments to find
which network interfaces are connected to that network. Something like this:

---8<---
# Find the host:
host_name = 'myhost'
host = api.hosts.get(name=host_name)

# Find the identifier of the cluster that the host belongs to:
cluster_id = host.get_cluster().get_id()

# Find the networks available in the cluster, and locate the
# ones with the name we are looking for:
network_name = 'ovirtmgmt'
network_ids = []
networks = api.clusters.get(id=cluster_id).networks.list()
for network in networks:
    if network.get_name() == network_name:
        network_ids.append(network.get_id())

# Find the network interfaces of the host that have the network attached:
nic_ids = []
network_attachments = host.networkattachments.list()
for network_attachment in network_attachments:
    if network_attachment.get_network().get_id() in network_ids:
        nic_ids.append(network_attachment.get_host_nic().get_id())

# Print the details of the nics:
for nic_id in nic_ids:
    nic = host.nics.get(id=nic_id)
    print(nic.get_name())
--->8---
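
The same lookup can also be sketched directly against the REST API with curl
(the engine URL, credentials, and host id below are placeholders); the
networkattachments sub-collection is what the SDK code above iterates:

  curl -k -u 'admin@internal:password' \
      'https://engine.fqdn/ovirt-engine/api/hosts/<host-id>/networkattachments'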

-- 
Commercial address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Registered in the Madrid Mercantile Registry – C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users