Re: [ovirt-users] Very poor GlusterFS performance

2017-06-19 Thread Mahdi Adnan
Hi,


Can you share some numbers? What tests are you running?

I'm running oVirt with Gluster without performance issues, but I'm running
replica 2 on all SSDs.

Gluster logs might help too.


--

Respectfully
Mahdi A. Mahdi


From: users-boun...@ovirt.org  on behalf of Chris Boot 

Sent: Monday, June 19, 2017 5:46:08 PM
To: oVirt users
Subject: [ovirt-users] Very poor GlusterFS performance

Hi folks,

I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.

Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of >=
9.20 Gbits/sec between the three servers).

To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.

My volume configuration looks like this:

Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable

I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.

Cheers,
Chris

--
Chris Boot
bo...@bootc.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Changing MAC Pool

2017-06-19 Thread Mahdi Adnan
Thank you very much.


I am running engine 4.1 but the cluster version is still 4.0.

I extended the MAC pool and it worked fine.


Thanks again.


--

Respectfully
Mahdi A. Mahdi


From: Michael Burman 
Sent: Monday, June 19, 2017 7:56:24 AM
To: Mahdi Adnan
Cc: Ovirt Users
Subject: Re: [ovirt-users] Changing MAC Pool

Hi Mahdi

What version are you running? It is possible to extend the MAC pool range.

Before 4.1 you can extend the MAC pool range globally with the engine-config
command, for example:

- engine-config -s 
MacPoolRanges=00:00:00:00:00:00-00:00:00:10:00:00,00:00:00:02:00:00-00:03:00:00:00:0A

- restart ovirt-engine service
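
(For reference, a quick way to check the currently configured range before
extending it could look like this; the range below is only a placeholder, not
a recommendation:)

  # show the current global MAC pool range
  engine-config -g MacPoolRanges
  # set a wider range, then restart the engine so it takes effect
  engine-config -s MacPoolRanges=00:1a:4a:00:00:00-00:1a:4a:00:ff:ff
  systemctl restart ovirt-engine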

From version 4.1, the MAC pool range moved to the cluster level and it's
now possible to create/edit/extend the MAC pool range per cluster
separately via the UI:

- 'Clusters' > edit cluster > 'MAC Address Pool' range sub-tab >
add/edit/extend/remove
- Or via 'Configure' it is possible to create MAC pool entities and then assign
them to the desired clusters.

Cheers)

On Sun, Jun 18, 2017 at 1:25 PM, Mahdi Adnan 
> wrote:

Hi,


I ran into an issue where I have no more MACs left in the MAC pool.

I used the default MAC pool and now I want to create a new one for the cluster.

Is it possible to create a new MAC pool for the cluster without affecting the
VMs?


Appreciate your help.


--

Respectfully
Mahdi A. Mahdi


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




--
Michael Burman
RedHat Israel, RHV-M Network QE

Mobile: 054-5355725
IRC: mburman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine network

2017-06-19 Thread Arsène Gschwind

Hi Jenny,

Thanks for the explanations..

Please find vm.conf attached, it looks like the ovirtmgmt network is defined

Regards,
Arsène


On 06/19/2017 01:46 PM, Evgenia Tokar wrote:

Hi,

It should be in one of the directories on your storage domain:
/cd1f6775-61e9-4d04-b41c-c64925d5a905/images/<image_id>/<volume_id>

To see which one you can run the following command:

vdsm-client Volume getInfo volumeID=<volume_id> imageID=<image_id> 
storagedomainID=<storage_domain_id> storagepoolID=<storage_pool_id>


the storage domain id is: cd1f6775-61e9-4d04-b41c-c64925d5a905
the storage pool id can be found using: vdsm-client StorageDomain 
getInfo storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905


The volume that has "description": "HostedEngineConfigurationImage" is 
the one you are looking for.
Untar it and it should contain the original vm.conf which was used to 
start the hosted engine.


Jenny Tokar


On Mon, Jun 19, 2017 at 12:59 PM, Arsène Gschwind 
> wrote:


Hi Jenny,

1. I couldn't locate any tar file containing vm.conf, do you know
the exact place where it is stored?

2. The ovirtmgmt appears in the network dropdown but I'm not able
to change since it complains about locked values.

Thanks a lot for your help.

Regards,
Arsène



On 06/14/2017 01:26 PM, Evgenia Tokar wrote:

Hi Arseny,

Looking at the log the ovf doesn't contain the ovirtmgmt network.

1. Can you provide the original vm.conf file the engine was
started with? It is located in a tar archive on your storage domain.
2. It's unclear from the screenshot, in the network dropdown do
you have an option to add an ovirtmgmt network?

Thanks,
Jenny


On Tue, Jun 13, 2017 at 11:19 AM, Arsène Gschwind
> wrote:

Sorry for that, I haven't checked.

I've replaced the log file with a new version which should
work, I hope.

Many Thanks.

Regards,
Arsène


On 06/12/2017 02:33 PM, Martin Sivak wrote:

I am sorry to say so, but it seems the log archive is corrupted. I
can't open it.

Regards

Martin Sivak

On Mon, Jun 12, 2017 at 12:47 PM, Arsène Gschwind
   wrote:

Please find the logs here:

https://www.dropbox.com/sh/k2zk7ig4tbd9tnj/AAB2NKjVk2z6lVPQ15NIeAtCa?dl=0



Thanks.

Regards,
Arsène

Hi,

Sorry for this, it seems that the attachment has been detached.

So let's try again

Regards,
Arsène


On 06/12/2017 11:59 AM, Martin Sivak wrote:

Hi,

I would love to help you, but I didn't get the log file..

Regards

Martin Sivak

On Mon, Jun 12, 2017 at 11:49 AM, Arsène Gschwind
   wrote:

Hi all,

Any chance of getting help or a hint to solve my problem? I have no idea how
to change this configuration, since it is not possible using the WebUI.

Thanks a lot.

Regards,
Arsène


On 06/07/2017 11:50 AM, Arsène Gschwind wrote:

Hi all,

Please find attached the agent.log DEBUG and a screenshot from webui

Thanks a lot

Best regards,

Arsène


On 06/07/2017 11:27 AM, Martin Sivak wrote:

Hi all,

Yanir is right, the local vm.conf is just a cache of what was
retrieved from the engine.

It might be interesting to check what the configuration of the engine
VM shows when edited using the webadmin. Or enable debug logging [1]
for hosted engine and attach the OVF dump we send there now and then (the
xml representation of the VM).

[1] See /etc/ovirt-hosted-engine-ha/agent-log.conf and change the
level for root logger to DEBUG

Best regards

Martin Sivak

On Wed, Jun 7, 2017 at 11:12 AM, Yanir Quinn 
  wrote:

If I'm not mistaken the values of vm.conf are repopulated from the
database, but I wouldn't recommend meddling with DB data.
Maybe the network device wasn't set properly during the hosted engine
setup?

On Wed, Jun 7, 2017 at 11:47 AM, Arsène Gschwind 

wrote:

Hi,

Any chance of getting a hint on how to change the vm.conf file so it will not
be overwritten constantly?

Thanks a lot.

Arsène


On 06/06/2017 09:50 AM, Arsène Gschwind wrote:

Hi,

I've migrated our oVirt engine to hosted-engine located on an FC storage
LUN, so far so good.
For some reason I'm not able to start the

Re: [ovirt-users] Very poor GlusterFS performance

2017-06-19 Thread Ralf Schenk
Hello,

Gluster performance is bad. That's why I asked for native qemu-libgfapi
access to gluster volumes for oVirt VMs, which I thought had been possible
since 3.6.x. The documentation is misleading, and even in 4.1.2 oVirt is
still using FUSE to mount gluster-based VM disks.

Bye


On 19.06.2017 at 17:23, Darrell Budic wrote:
> Chris-
>
> You probably need to head over to gluster-us...@gluster.org
>  for help with performance issues.
>
> That said, what kind of performance are you getting, via some form or
> testing like bonnie++ or even dd runs? Raw bricks vs gluster
> performance is useful to determine what kind of performance you’re
> actually getting.
>
> Beyond that, I’d recommend dropping the arbiter bricks and re-adding
> them as full replicas, they can’t serve distributed data in this
> configuration and may be slowing things down on you. If you’ve got a
> storage network setup, make sure it’s using the largest MTU it can,
> and consider adding/testing these settings that I use on my main
> storage volume:
>
> performance.io-thread-count: 32
> client.event-threads: 8
> server.event-threads: 3
> performance.stat-prefetch: on
>
> Good luck,
>
>   -Darrell
>
>
>> On Jun 19, 2017, at 9:46 AM, Chris Boot > > wrote:
>>
>> Hi folks,
>>
>> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
>> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
>> 6 bricks, which themselves live on two SSDs in each of the servers (one
>> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
>> SSDs. Connectivity is 10G Ethernet.
>>
>> Performance within the VMs is pretty terrible. I experience very low
>> throughput and random IO is really bad: it feels like a latency issue.
>> On my oVirt nodes the SSDs are not generally very busy. The 10G network
>> seems to run without errors (iperf3 gives bandwidth measurements of >=
>> 9.20 Gbits/sec between the three servers).
>>
>> To put this into perspective: I was getting better behaviour from NFS4
>> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
>> feel right at all.
>>
>> My volume configuration looks like this:
>>
>> Volume Name: vmssd
>> Type: Distributed-Replicate
>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x (2 + 1) = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet6
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 1
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> features.shard-block-size: 128MB
>> performance.strict-o-direct: on
>> network.ping-timeout: 30
>> cluster.granular-entry-heal: enable
>>
>> I would really appreciate some guidance on this to try to improve things
>> because at this rate I will need to reconsider using GlusterFS
>> altogether.
>>
>> Cheers,
>> Chris
>>
>> -- 
>> Chris Boot
>> bo...@bootc.net 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very poor GlusterFS performance

2017-06-19 Thread Darrell Budic
Chris-

You probably need to head over to gluster-us...@gluster.org 
 for help with performance issues.

That said, what kind of performance are you getting, via some form of testing 
like bonnie++ or even dd runs? Raw bricks vs gluster performance is useful to 
determine what kind of performance you’re actually getting.
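
(For example, a couple of quick dd runs inside a guest give a rough feel for
sequential throughput vs. write latency; this is only a sketch, the file path
and sizes are arbitrary:)

  # sequential throughput, bypassing the guest page cache
  dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=1024 oflag=direct
  # small synchronous writes, which tend to expose latency problems
  dd if=/dev/zero of=/var/tmp/ddtest bs=4k count=5000 oflag=dsync
  rm -f /var/tmp/ddtest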

Beyond that, I’d recommend dropping the arbiter bricks and re-adding them as 
full replicas, they can’t serve distributed data in this configuration and may 
be slowing things down on you. If you’ve got a storage network setup, make sure 
it’s using the largest MTU it can, and consider adding/testing these settings 
that I use on my main storage volume:

performance.io-thread-count: 32
client.event-threads: 8
server.event-threads: 3
performance.stat-prefetch: on
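
(These are applied per volume with "gluster volume set"; a sketch, using the
volume name from the original mail:)

  gluster volume set vmssd performance.io-thread-count 32
  gluster volume set vmssd client.event-threads 8
  gluster volume set vmssd server.event-threads 3
  gluster volume set vmssd performance.stat-prefetch on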

Good luck,

  -Darrell


> On Jun 19, 2017, at 9:46 AM, Chris Boot  wrote:
> 
> Hi folks,
> 
> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
> 6 bricks, which themselves live on two SSDs in each of the servers (one
> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
> SSDs. Connectivity is 10G Ethernet.
> 
> Performance within the VMs is pretty terrible. I experience very low
> throughput and random IO is really bad: it feels like a latency issue.
> On my oVirt nodes the SSDs are not generally very busy. The 10G network
> seems to run without errors (iperf3 gives bandwidth measurements of >=
> 9.20 Gbits/sec between the three servers).
> 
> To put this into perspective: I was getting better behaviour from NFS4
> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
> feel right at all.
> 
> My volume configuration looks like this:
> 
> Volume Name: vmssd
> Type: Distributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet6
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard-block-size: 128MB
> performance.strict-o-direct: on
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
> 
> I would really appreciate some guidance on this to try to improve things
> because at this rate I will need to reconsider using GlusterFS altogether.
> 
> Cheers,
> Chris
> 
> -- 
> Chris Boot
> bo...@bootc.net
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine

2017-06-19 Thread Joel Diaz
Ok.

Simone,

Please let me know if I can provide any additional log files.

Thanks for taking the time to look into this.

Joel

On Jun 16, 2017 8:59 AM, "Sahina Bose"  wrote:

> I don't notice anything wrong on the gluster end.
>
> Maybe Simone can help take a look at HE behaviour?
>
> On Fri, Jun 16, 2017 at 6:14 PM, Joel Diaz  wrote:
>
>> Good morning,
>>
>> Info requested below.
>>
>> [root@ovirt-hyp-02 ~]# hosted-engine --vm-start
>>
>> Exception in thread Client localhost:54321 (most likely raised during
>> interpreter shutdown):VM exists and its status is Up
>>
>>
>>
>> [root@ovirt-hyp-02 ~]# ping engine
>>
>> PING engine.example.lan (192.168.170.149) 56(84) bytes of data.
>>
>> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=1 Destination
>> Host Unreachable
>>
>> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=2 Destination
>> Host Unreachable
>>
>> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=3 Destination
>> Host Unreachable
>>
>> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=4 Destination
>> Host Unreachable
>>
>> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=5 Destination
>> Host Unreachable
>>
>> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=6 Destination
>> Host Unreachable
>>
>> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=7 Destination
>> Host Unreachable
>>
>> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=8 Destination
>> Host Unreachable
>>
>>
>>
>>
>>
>> [root@ovirt-hyp-02 ~]# gluster volume status engine
>>
>> Status of volume: engine
>>
>> Gluster process                                TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick 192.168.170.141:/gluster_bricks/engine/engine
>>                                                49159     0          Y       1799
>> Brick 192.168.170.143:/gluster_bricks/engine/engine
>>                                                49159     0          Y       2900
>> Self-heal Daemon on localhost                  N/A       N/A        Y       2914
>> Self-heal Daemon on ovirt-hyp-01.example.lan   N/A       N/A        Y       1854
>>
>> Task Status of Volume engine
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>>
>>
>> [root@ovirt-hyp-02 ~]# gluster volume heal engine info
>>
>> Brick 192.168.170.141:/gluster_bricks/engine/engine
>>
>> Status: Connected
>>
>> Number of entries: 0
>>
>>
>>
>> Brick 192.168.170.143:/gluster_bricks/engine/engine
>>
>> Status: Connected
>>
>> Number of entries: 0
>>
>>
>>
>> Brick 192.168.170.147:/gluster_bricks/engine/engine
>>
>> Status: Connected
>>
>> Number of entries: 0
>>
>>
>>
>> [root@ovirt-hyp-02 ~]# cat /var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirt-hyp-01.example.lan\:engine.log
>>
>> [2017-06-15 13:37:02.009436] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk]
>> 0-glusterfs: No change in volfile, continuing
>>
>>
>>
>>
>>
>> Each of the three hosts sends out the following notifications about every
>> 15 minutes.
>>
>> Hosted engine host: ovirt-hyp-01.example.lan changed state:
>> EngineDown-EngineStart.
>>
>> Hosted engine host: ovirt-hyp-01.example.lan changed state:
>> EngineStart-EngineStarting.
>>
>> Hosted engine host: ovirt-hyp-01.example.lan changed state:
>> EngineStarting-EngineForceStop.
>>
>> Hosted engine host: ovirt-hyp-01.example.lan changed state:
>> EngineForceStop-EngineDown.
>>
>> Please let me know if you need any additional information.
>>
>> Thank you,
>>
>> Joel
>>
>>
>>
>> On Jun 16, 2017 2:52 AM, "Sahina Bose"  wrote:
>>
>>> From the agent.log,
>>> MainThread::INFO::2017-06-15 11:16:50,583::states::473::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
>>> Engine vm is running on host ovirt-hyp-02.reis.com (id 2)
>>>
>>> It looks like the HE VM was started successfully? Is it possible that
>>> the ovirt-engine service could not be started on the HE VM? Could you try
>>> to start the HE vm using the command below and then log into the VM console.
>>> #hosted-engine --vm-start
>>>
>>> Also, please check
>>> # gluster volume status engine
>>> # gluster volume heal engine info
>>>
>>> Please also check if there are errors in gluster mount logs - at
>>> /var/log/glusterfs/rhev-data-center-mnt...log
>>>
>>>
>>> On Thu, Jun 15, 2017 at 8:53 PM, Joel Diaz  wrote:
>>>
 Sorry. I forgot to attach the requested logs in the previous email.

 Thanks,

 On Jun 15, 2017 9:38 AM, "Joel Diaz"  wrote:

 Good morning,

 Requested info below. Along with some additional info.

 You'll notice the data volume is not mounted.

 Any help in getting HE back running would be greatly appreciated.

 Thank you,

 Joel

 [root@ovirt-hyp-01 ~]# hosted-engine --vm-status




[ovirt-users] Very poor GlusterFS performance

2017-06-19 Thread Chris Boot
Hi folks,

I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.

Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of >=
9.20 Gbits/sec between the three servers).

To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.
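
(As a rough way to quantify the random-IO latency inside a guest, a fio run
along these lines could be used; the parameters are only an example, not part
of the original report:)

  fio --name=randwrite --filename=/var/tmp/fiotest --size=1G \
      --rw=randwrite --bs=4k --iodepth=16 --ioengine=libaio \
      --direct=1 --runtime=60 --time_based --group_reporting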

My volume configuration looks like this:

Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable

I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.

Cheers,
Chris

-- 
Chris Boot
bo...@bootc.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] when creating VMs, I don't want hosted_storage to be an option

2017-06-19 Thread Mike Farnam
Hi All - Is there a way to mark hosted_storage somehow so that it’s not 
available to add new VMs to?  Right now it’s the default storage domain when 
adding a VM.  At the least, I’d like to make another storage domain the default.
Is there a way to do this?

Thanks

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] cloud init hostname from pools

2017-06-19 Thread Paul
Hi,

 

I would like to automatically set the hostname of a VM to be the same as the
ovirt machine name seen in the portal.

This can be done by creating a template and activating cloud-init in the
initial run tab. 

A new VM named "test" based on this template is created and the hostname is
"test" - it works perfectly!

 

But when I create a pool (e.g. "testpool") based on this template, I get
machines with names "testpool-1", "testpool-2", etc., but the machine name is
not present in the metadata and cannot be set as the hostname. This is probably
due to the fact that the machine names are auto-generated by the oVirt pool.

 

Is this expected/desired behavior for cloud-init from pools? 

If so, what would be the best way to retrieve the machine name (as seen in
the portal) and manually set it as the hostname via cloud-init (e.g. runcmd -
hostnamectl set-hostname $(hostname))?

 

Kind regards,

 

Paul

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine network

2017-06-19 Thread Evgenia Tokar
Hi,

It should be in one of the directories on your storage domain:
/cd1f6775-61e9-4d04-b41c-c64925d5a905/images/<image_id>/<volume_id>

To see which one you can run the following command:

vdsm-client Volume getInfo volumeID=<volume_id> imageID=<image_id>
storagedomainID=<storage_domain_id> storagepoolID=<storage_pool_id>

the storage domain id is: cd1f6775-61e9-4d04-b41c-c64925d5a905
the storage pool id can be found using: vdsm-client StorageDomain getInfo
storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905

The volume that has "description": "HostedEngineConfigurationImage" is the
one you are looking for.
Untar it and it should contain the original vm.conf which was used to start
the hosted engine.
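
(Put together, the sequence could look roughly like this; the storage domain
ID is the one above, all other UUIDs are placeholders to be filled in from the
previous command's output:)

  # find the storage pool id
  vdsm-client StorageDomain getInfo storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905
  # inspect a volume; repeat until the one with description
  # "HostedEngineConfigurationImage" is found
  vdsm-client Volume getInfo volumeID=<volume_id> imageID=<image_id> \
      storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905 storagepoolID=<storage_pool_id>
  # extract vm.conf from that volume
  tar -xvf <path_to_that_volume> vm.conf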

Jenny Tokar


On Mon, Jun 19, 2017 at 12:59 PM, Arsène Gschwind  wrote:

> Hi Jenny,
>
> 1. I couldn't locate any tar file containing vm.conf, do you know the
> exact place where it is stored?
>
> 2. The ovirtmgmt appears in the network dropdown but I'm not able to
> change since it complains about locked values.
>
> Thanks a lot for your help.
>
> Regards,
> Arsène
>
>
>
> On 06/14/2017 01:26 PM, Evgenia Tokar wrote:
>
> Hi Arseny,
>
> Looking at the log the ovf doesn't contain the ovirtmgmt network.
>
> 1. Can you provide the original vm.conf file the engine was started with?
> It is located in a tar archive on your storage domain.
> 2. It's unclear from the screenshot, in the network dropdown do you have
> an option to add an ovirtmgmt network?
>
> Thanks,
> Jenny
>
>
> On Tue, Jun 13, 2017 at 11:19 AM, Arsène Gschwind <
> arsene.gschw...@unibas.ch> wrote:
>
>> Sorry for that, I haven't checked.
>>
>> I've replaced the log file with a new version which should work, I hope.
>>
>> Many Thanks.
>>
>> Regards,
>> Arsène
>>
>> On 06/12/2017 02:33 PM, Martin Sivak wrote:
>>
>> I am sorry to say so, but it seems the log archive is corrupted. I
>> can't open it.
>>
>> Regards
>>
>> Martin Sivak
>>
>> On Mon, Jun 12, 2017 at 12:47 PM, Arsène Gschwind 
>>  wrote:
>>
>> Please find the logs here:
>> https://www.dropbox.com/sh/k2zk7ig4tbd9tnj/AAB2NKjVk2z6lVPQ15NIeAtCa?dl=0
>>
>> Thanks.
>>
>> Regards,
>> Arsène
>>
>> Hi,
>>
>> Sorry for this, it seems that the attachment has been detached.
>>
>> So let's try again
>>
>> Regards,
>> Arsène
>>
>>
>> On 06/12/2017 11:59 AM, Martin Sivak wrote:
>>
>> Hi,
>>
>> I would love to help you, but I didn't get the log file..
>>
>> Regards
>>
>> Martin Sivak
>>
>> On Mon, Jun 12, 2017 at 11:49 AM, Arsène Gschwind 
>>  wrote:
>>
>> Hi all,
>>
>> Any chance of getting help or a hint to solve my problem? I have no idea how
>> to change this configuration, since it is not possible using the WebUI.
>>
>> Thanks a lot.
>>
>> Regards,
>> Arsène
>>
>>
>> On 06/07/2017 11:50 AM, Arsène Gschwind wrote:
>>
>> Hi all,
>>
>> Please find attached the agent.log DEBUG and a screenshot from webui
>>
>> Thanks a lot
>>
>> Best regards,
>>
>> Arsène
>>
>>
>> On 06/07/2017 11:27 AM, Martin Sivak wrote:
>>
>> Hi all,
>>
>> Yanir is right, the local vm.conf is just a cache of what was
>> retrieved from the engine.
>>
>> It might be interesting to check what the configuration of the engine
>> VM shows when edited using the webadmin. Or enable debug logging [1]
>> for hosted engine and attach the OVF dump we send there now and then (the
>> xml representation of the VM).
>>
>> [1] See /etc/ovirt-hosted-engine-ha/agent-log.conf and change the
>> level for root logger to DEBUG
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Wed, Jun 7, 2017 at 11:12 AM, Yanir Quinn  
>>  wrote:
>>
>> If I'm not mistaken the values of vm.conf are repopulated from the database,
>> but I wouldn't recommend meddling with DB data.
>> Maybe the network device wasn't set properly during the hosted engine setup?
>>
>> On Wed, Jun 7, 2017 at 11:47 AM, Arsène Gschwind  
>> 
>> wrote:
>>
>> Hi,
>>
>> Any chance of getting a hint on how to change the vm.conf file so it will
>> not be overwritten constantly?
>>
>> Thanks a lot.
>>
>> Arsène
>>
>>
>> On 06/06/2017 09:50 AM, Arsène Gschwind wrote:
>>
>> Hi,
>>
>> I've migrated our oVirt engine to hosted-engine located on an FC storage
>> LUN, so far so good.
>> For some reason I'm not able to start the hosted-engine VM; after digging
>> in the log files I could figure out the reason. The network device was set
>> to "None" as follows:
>>
>> devices={nicModel:pv,macAddr:00:16:3e:3a:6b:60,linkActive:true,network:None,deviceId:56cb4d71-13ff-42a8-bb83-7faef99fd3ea,address:{slot:0x03,bus:0x00,domain:0x,type:pci,function:0x0},device:bridge,type:interface}
>>
>> I've created a new config file /var/run/ovirt-hosted-engine-ha/vm.conf.new
>> and set the nic device to ovirtmgmt, then I could start the hosted-engine
>> using:
>> hosted-engine --vm-start
>> --vm-conf=var/run/ovirt-hosted-engine-ha/vm.conf.new
>>
>> The nic  device line in vm.conf.new looks 

Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-19 Thread Evgenia Tokar
From the output it looks like the agent is down, try starting it by
running: systemctl start ovirt-ha-agent.
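
(A minimal check/restart sequence on the host might look like this, assuming
both HA services are installed there:)

  systemctl status ovirt-ha-broker ovirt-ha-agent
  systemctl restart ovirt-ha-broker
  systemctl restart ovirt-ha-agent
  hosted-engine --vm-status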

The engine is supposed to see the hosted engine storage domain and import
it to the system, then it should import the hosted engine vm.

Can you attach the agent log from the host
(/var/log/ovirt-hosted-engine-ha/agent.log)
and the engine log from the engine vm (/var/log/ovirt-engine/engine.log)?

Thanks,
Jenny


On Mon, Jun 19, 2017 at 12:41 PM, cmc  wrote:

>  Hi Jenny,
>
> > What version are you running?
>
> 4.1.2.2-1.el7.centos
>
> > For the hosted engine vm to be imported and displayed in the engine, you
> > must first create a master storage domain.
>
> To provide a bit more detail: this was a migration of a bare-metal
> engine in an existing cluster to a hosted engine VM for that cluster.
> As part of this migration, I built an entirely new host and ran
> 'hosted-engine --deploy' (followed these instructions:
> http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/).
> I restored the backup from the engine and it completed without any
> errors. I didn't see any instructions regarding a master storage
> domain in the page above. The cluster has two existing master storage
> domains, one is fibre channel, which is up, and one ISO domain, which
> is currently offline.
>
> > What do you mean the hosted engine commands are failing? What happens
> when
> > you run hosted-engine --vm-status now?
>
> Interestingly, whereas when I ran it before, it exited with no output
> and a return code of '1', it now reports:
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : False
> Hostname   : kvm-ldn-03.ldn.fscfc.co.uk
> Host ID: 1
> Engine status  : unknown stale-data
> Score  : 0
> stopped: True
> Local maintenance  : False
> crc32  : 0217f07b
> local_conf_timestamp   : 2911
> Host timestamp : 2897
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=2897 (Thu Jun 15 16:22:54 2017)
> host-id=1
> score=0
> vm_conf_refresh_time=2911 (Thu Jun 15 16:23:08 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=AgentStopped
> stopped=True
>
> Yet I can login to the web GUI fine. I guess it is not HA due to being
> in an unknown state currently? Does the hosted-engine-ha rpm need to
> be installed across all nodes in the cluster, btw?
>
> Thanks for the help,
>
> Cam
>
> >
> > Jenny Tokar
> >
> >
> > On Thu, Jun 15, 2017 at 6:32 PM, cmc  wrote:
> >>
> >> Hi,
> >>
> >> I've migrated from a bare-metal engine to a hosted engine. There were
> >> no errors during the install, however, the hosted engine did not get
> >> started. I tried running:
> >>
> >> hosted-engine --status
> >>
> >> on the host I deployed it on, and it returns nothing (exit code is 1
> >> however). I could not ping it either. So I tried starting it via
> >> 'hosted-engine --vm-start' and it returned:
> >>
> >> Virtual machine does not exist
> >>
> >> But it then became available. I logged into it successfully. It is not
> >> in the list of VMs however.
> >>
> >> Any ideas why the hosted-engine commands fail, and why it is not in
> >> the list of virtual machines?
> >>
> >> Thanks for any help,
> >>
> >> Cam
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1.2 sanlock issue with nfs data domain

2017-06-19 Thread Moritz Baumann



On 19.06.2017 11:51, Markus Stockhausen wrote:
Maybe the NFS mounts use version 4.2 and no SELinux nfs_t rule is defined
on the server side?


Both (nfs-server and ovirt host) are in selinux permissive mode.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine network

2017-06-19 Thread Arsène Gschwind

Hi Jenny,

1. I couldn't locate any tar file containing vm.conf, do you know the 
exact place where it is stored?


2. The ovirtmgmt appears in the network dropdown but I'm not able to 
change since it complains about locked values.


Thanks a lot for your help.

Regards,
Arsène



On 06/14/2017 01:26 PM, Evgenia Tokar wrote:

Hi Arseny,

Looking at the log the ovf doesn't contain the ovirtmgmt network.

1. Can you provide the original vm.conf file the engine was started 
with? It is located in a tar archive on your storage domain.
2. It's unclear from the screenshot, in the network dropdown do you 
have an option to add an ovirtmgmt network?


Thanks,
Jenny


On Tue, Jun 13, 2017 at 11:19 AM, Arsène Gschwind 
> wrote:


Sorry for that, I haven't checked.

I've replaced the log file with a new version which should work, I
hope.

Many Thanks.

Regards,
Arsène


On 06/12/2017 02:33 PM, Martin Sivak wrote:

I am sorry to say so, but it seems the log archive is corrupted. I
can't open it.

Regards

Martin Sivak

On Mon, Jun 12, 2017 at 12:47 PM, Arsène Gschwind
   wrote:

Please find the logs here:
https://www.dropbox.com/sh/k2zk7ig4tbd9tnj/AAB2NKjVk2z6lVPQ15NIeAtCa?dl=0


Thanks.

Regards,
Arsène

Hi,

Sorry for this, it seems that the attachment has been detached.

So let's try again

Regards,
Arsène


On 06/12/2017 11:59 AM, Martin Sivak wrote:

Hi,

I would love to help you, but I didn't get the log file..

Regards

Martin Sivak

On Mon, Jun 12, 2017 at 11:49 AM, Arsène Gschwind
   wrote:

Hi all,

Any chance of getting help or a hint to solve my problem? I have no idea how to
change this configuration, since it is not possible using the WebUI.

Thanks a lot.

Regards,
Arsène


On 06/07/2017 11:50 AM, Arsène Gschwind wrote:

Hi all,

Please find attached the agent.log DEBUG and a screenshot from webui

Thanks a lot

Best regards,

Arsène


On 06/07/2017 11:27 AM, Martin Sivak wrote:

Hi all,

Yanir is right, the local vm.conf is just a cache of what was
retrieved from the engine.

It might be interesting to check what the configuration of the engine
VM shows when edited using the webadmin. Or enable debug logging [1]
for hosted engine and attach the OVF dump we send there now and then (the
xml representation of the VM).

[1] See /etc/ovirt-hosted-engine-ha/agent-log.conf and change the
level for root logger to DEBUG

Best regards

Martin Sivak

On Wed, Jun 7, 2017 at 11:12 AM, Yanir Quinn 
  wrote:

If I'm not mistaken the values of vm.conf are repopulated from the database,
but I wouldn't recommend meddling with DB data.
Maybe the network device wasn't set properly during the hosted engine setup?

On Wed, Jun 7, 2017 at 11:47 AM, Arsène Gschwind 

wrote:

Hi,

Any chance of getting a hint on how to change the vm.conf file so it will not
be overwritten constantly?

Thanks a lot.

Arsène


On 06/06/2017 09:50 AM, Arsène Gschwind wrote:

Hi,

I've migrated our oVirt engine to hosted-engine located on an FC storage
LUN, so far so good.
For some reason I'm not able to start the hosted-engine VM; after digging
in the log files I could figure out the reason. The network device was set
to "None" as follows:


devices={nicModel:pv,macAddr:00:16:3e:3a:6b:60,linkActive:true,network:None,deviceId:56cb4d71-13ff-42a8-bb83-7faef99fd3ea,address:{slot:0x03,bus:0x00,domain:0x,type:pci,function:0x0},device:bridge,type:interface}

I've created a new config file /var/run/ovirt-hosted-engine-ha/vm.conf.new
and set the nic device to ovirtmgmt, then I could start the hosted-engine
using:
hosted-engine --vm-start
--vm-conf=var/run/ovirt-hosted-engine-ha/vm.conf.new

The nic  device line in vm.conf.new looks like:


devices={nicModel:pv,macAddr:00:16:3e:3a:6b:60,linkActive:true,network:ovirtmgmt,deviceId:56cb4d71-13ff-42a8-bb83-7faef99fd3ea,address:{slot:0x03,bus:0x00,domain:0x,type:pci,function:0x0},device:bridge,type:interface}

I tried to find a way to change this setting in vm.conf but it is
constantly rewritten; even when using the webui it says:
HostedEngine:

There was an attempt to change Hosted Engine VM values that are locked.

Is there a way to modify/correct the hosted-engine vm.conf file so it will
stay and not be overwritten?

Thanks a lot for any hint/help

rgds,
arsène

--

Arsène Gschwind

Re: [ovirt-users] ovirt 4.1.2 sanlock issue with nfs data domain

2017-06-19 Thread Markus Stockhausen
Maybe the NFS mounts use version 4.2 and no SELinux nfs_t rule is defined on
the server side?
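
(A quick way to check both points on the host could be, for example, the
following; the mount path below is a placeholder:)

  nfsstat -m      # negotiated NFS version and mount options
  getenforce      # Enforcing or Permissive
  ls -lZ /rhev/data-center/mnt/<server>:_<export>/<domain_uuid>/dom_md/ids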

Sent from mobile...

On 19.06.2017 at 11:01 AM, Moritz Baumann wrote:
> Is there a way to "reinitialize" the lockspace so one node can become
> SPM again and we can run VMS.

errors in /var/log/sanlock.log look like this:


2017-06-19 10:57:00+0200 1617673 [126217]: open error -13
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
2017-06-19 10:57:00+0200 1617673 [126217]: s51 open_disk
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
error -13
2017-06-19 10:57:01+0200 1617674 [880]: s51 add_lockspace fail result -19
2017-06-19 10:57:02+0200 1617674 [881]: open error -13
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases
2017-06-19 10:57:02+0200 1617674 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:10+0200 1617683 [881]: s52 lockspace
c17d9d7f-e578-4626-a5d9-94ea555d7115:2:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids:0
2017-06-19 10:57:10+0200 1617683 [126235]: open error -13
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
2017-06-19 10:57:10+0200 1617683 [126235]: s52 open_disk
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
error -13
2017-06-19 10:57:11+0200 1617684 [881]: s52 add_lockspace fail result -19
2017-06-19 10:57:13+0200 1617685 [880]: open error -13
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases
2017-06-19 10:57:13+0200 1617685 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:15+0200 1617688 [881]: open error -13
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases
2017-06-19 10:57:15+0200 1617688 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:20+0200 1617693 [881]: s53 lockspace
c17d9d7f-e578-4626-a5d9-94ea555d7115:2:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids:0
2017-06-19 10:57:20+0200 1617693 [126255]: open error -13
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
2017-06-19 10:57:20+0200 1617693 [126255]: s53 open_disk
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
error -13
2017-06-19 10:57:21+0200 1617694 [881]: s53 add_lockspace fail result -19
2017-06-19 10:57:26+0200 1617699 [880]: open error -13
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases
2017-06-19 10:57:26+0200 1617699 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:29+0200 1617702 [881]: open error -13
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases
2017-06-19 10:57:29+0200 1617702 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:30+0200 1617703 [880]: s54 lockspace
c17d9d7f-e578-4626-a5d9-94ea555d7115:2:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids:0



ovirt-node01[0]:/var/log# ls -ld
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids

-rw-rw. 1 vdsm kvm 1048576 28. Mai 23:13
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids

The nfs share is writeable:

ovirt-node01[0]:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md#
touch blabla
ovirt-node01[0]:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md#
ls -l
total 3320
-rw-r--r--. 1 root root0 19. Jun 11:00 blabla
-rw-rw. 1 vdsm kvm   1048576 28. Mai 23:13 ids
-rw-rw. 1 vdsm kvm  16777216 19. Jun 10:56 inbox
-rw-rw. 1 vdsm kvm   2097152 22. Mai 15:48 leases
-rw-r--r--. 1 vdsm kvm   361  1. Mär 18:21 metadata
-rw-rw. 1 vdsm kvm  16777216 22. Mai 15:48 outbox
-rw-rw. 1 vdsm kvm   1305088  1. Mär 18:21 xleases

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-19 Thread cmc
 Hi Jenny,

> What version are you running?

4.1.2.2-1.el7.centos

> For the hosted engine vm to be imported and displayed in the engine, you
> must first create a master storage domain.

To provide a bit more detail: this was a migration of a bare-metal
engine in an existing cluster to a hosted engine VM for that cluster.
As part of this migration, I built an entirely new host and ran
'hosted-engine --deploy' (followed these instructions:
http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/).
I restored the backup from the engine and it completed without any
errors. I didn't see any instructions regarding a master storage
domain in the page above. The cluster has two existing master storage
domains, one is fibre channel, which is up, and one ISO domain, which
is currently offline.

> What do you mean the hosted engine commands are failing? What happens when
> you run hosted-engine --vm-status now?

Interestingly, whereas when I ran it before, it exited with no output
and a return code of '1', it now reports:

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : False
Hostname   : kvm-ldn-03.ldn.fscfc.co.uk
Host ID: 1
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : 0217f07b
local_conf_timestamp   : 2911
Host timestamp : 2897
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2897 (Thu Jun 15 16:22:54 2017)
host-id=1
score=0
vm_conf_refresh_time=2911 (Thu Jun 15 16:23:08 2017)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True

Yet I can login to the web GUI fine. I guess it is not HA due to being
in an unknown state currently? Does the hosted-engine-ha rpm need to
be installed across all nodes in the cluster, btw?

Thanks for the help,

Cam

>
> Jenny Tokar
>
>
> On Thu, Jun 15, 2017 at 6:32 PM, cmc  wrote:
>>
>> Hi,
>>
>> I've migrated from a bare-metal engine to a hosted engine. There were
>> no errors during the install, however, the hosted engine did not get
>> started. I tried running:
>>
>> hosted-engine --status
>>
>> on the host I deployed it on, and it returns nothing (exit code is 1
>> however). I could not ping it either. So I tried starting it via
>> 'hosted-engine --vm-start' and it returned:
>>
>> Virtual machine does not exist
>>
>> But it then became available. I logged into it successfully. It is not
>> in the list of VMs however.
>>
>> Any ideas why the hosted-engine commands fail, and why it is not in
>> the list of virtual machines?
>>
>> Thanks for any help,
>>
>> Cam
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1.2 sanlock issue with nfs data domain

2017-06-19 Thread Moritz Baumann
Is there a way to "reinitialize" the lockspace so one node can become 
SPM again and we can run VMs?


errors in /var/log/sanlock.log look like this:


2017-06-19 10:57:00+0200 1617673 [126217]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
2017-06-19 10:57:00+0200 1617673 [126217]: s51 open_disk 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids 
error -13

2017-06-19 10:57:01+0200 1617674 [880]: s51 add_lockspace fail result -19
2017-06-19 10:57:02+0200 1617674 [881]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:02+0200 1617674 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:10+0200 1617683 [881]: s52 lockspace 
c17d9d7f-e578-4626-a5d9-94ea555d7115:2:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids:0
2017-06-19 10:57:10+0200 1617683 [126235]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
2017-06-19 10:57:10+0200 1617683 [126235]: s52 open_disk 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids 
error -13

2017-06-19 10:57:11+0200 1617684 [881]: s52 add_lockspace fail result -19
2017-06-19 10:57:13+0200 1617685 [880]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:13+0200 1617685 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:15+0200 1617688 [881]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:15+0200 1617688 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:20+0200 1617693 [881]: s53 lockspace 
c17d9d7f-e578-4626-a5d9-94ea555d7115:2:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids:0
2017-06-19 10:57:20+0200 1617693 [126255]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids
2017-06-19 10:57:20+0200 1617693 [126255]: s53 open_disk 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids 
error -13

2017-06-19 10:57:21+0200 1617694 [881]: s53 add_lockspace fail result -19
2017-06-19 10:57:26+0200 1617699 [880]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:26+0200 1617699 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:29+0200 1617702 [881]: open error -13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/leases

2017-06-19 10:57:29+0200 1617702 [828]: ci 2 fd 11 pid -1 recv errno 104
2017-06-19 10:57:30+0200 1617703 [880]: s54 lockspace 
c17d9d7f-e578-4626-a5d9-94ea555d7115:2:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids:0




ovirt-node01[0]:/var/log# ls -ld 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids


-rw-rw. 1 vdsm kvm 1048576 28. Mai 23:13 
/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md/ids


The nfs share is writeable:

ovirt-node01[0]:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md# 
touch blabla
ovirt-node01[0]:/rhev/data-center/mnt/scratch-inf.inf.ethz.ch:_export_scratch_ovirt_data/c17d9d7f-e578-4626-a5d9-94ea555d7115/dom_md# 
ls -l

total 3320
-rw-r--r--. 1 root root0 19. Jun 11:00 blabla
-rw-rw. 1 vdsm kvm   1048576 28. Mai 23:13 ids
-rw-rw. 1 vdsm kvm  16777216 19. Jun 10:56 inbox
-rw-rw. 1 vdsm kvm   2097152 22. Mai 15:48 leases
-rw-r--r--. 1 vdsm kvm   361  1. Mär 18:21 metadata
-rw-rw. 1 vdsm kvm  16777216 22. Mai 15:48 outbox
-rw-rw. 1 vdsm kvm   1305088  1. Mär 18:21 xleases

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt 4.1.2 sanlock issue with nfs data domain

2017-06-19 Thread Moritz Baumann

Hi,
I'm still struggling to get our ovirt 4.1.2 back to life.

The data domain is NFS and the NFS mount works fine. However, it appears 
that sanlock does not work anymore.


Is there a way to "reinitialize" the lockspace so one node can become 
SPM again and we can run VMs?


Best,
Mo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error while installing oVirt Self Hosted Engine

2017-06-19 Thread Simone Tiraboschi
On Fri, Jun 16, 2017 at 5:00 AM, Jon Bornstein  wrote:

> Thanks - makes sense.
>
> I've worked on this a bit more and have pushed a bit further, but from
> looking through my new log, it looks like the engine is erroring out
> because my engine FQDN cannot be resolved to an IP address.
>
> *The error: *
> [ ERROR ] Host name is not valid: *engine.example.rocks* did not resolve
> into an IP address
>
> *engine.example.rocks* is the FQDN I supplied when answering the
> following:
>
>  Please provide the FQDN you would like to use for the engine appliance.
>>  Note: This will be the FQDN of the engine VM you are now going to launch.
>>  It should not point to the base host or to any other existing machine.
>>  Engine VM FQDN: (leave it empty to skip):
>
>
>
>
> *My /etc/hosts file: *
> 192.168.1.44 host.example.rocks host
> 192.168.1.45 engine.example.rocks engine
>
>
> I can see why it's erroring, but I'm not sure what I need to do now to get
> it working.  The IP 192.168.1.45 is one I just made up, because the only
> system I have access to is the one I'm currently using (192.168.1.44)
>

Could you please share your hosted-engine-setup log file from the latest
attempt?


>
> Jon
>
> On Thu, Jun 15, 2017 at 12:01 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Thu, Jun 15, 2017 at 4:08 PM, Jon Bornstein <
>> bornstein.jonat...@gmail.com> wrote:
>>
>>> My lack of Linux proficiency is going to show here, but..
>>>
>>> I guess I'm a bit confused on how to correctly configure my network
>>> interface(s) for oVirt.
>>>
>>> I currently have two network interfaces:
>>>
>>> enp0s25 -
>>> This is my Ethernet interface, but it is unused.  It currently is set to
>>> DHCP and has no IP address.  However, it is the only interface that oVirt
>>> suggests I use when configuring which nic to set the bridge on.
>>>
>>> wlo1 -
>>> My wireless interface, and IS how i'm connecting to the internet.  This
>>> is the IP address that I was using in my /etc/hosts file.
>>>
>>> Is it not possible to have a system that can run oVirt as well as
>>> maintain an internet connection?
>>>
>>>
>> oVirt by default works in bridge mode.
>> This means that it is going to create a bridge on your host, and the vnics of
>> your VMs will be connected to that bridge as well.
>>
>> oVirt is composed of a central engine managing physical hosts through an
>> agent deployed on each host.
>> So the engine has to be able to reach the managed hosts; this happens
>> through what we call the management network.
>>
>> hosted-engine is a special deployment where, for HA reasons, the oVirt engine
>> is going to run on a VM hosted on the host that it's managing.
>> So, wrapping up, with hosted-engine setup you are going to create a VM
>> for the engine, the engine VM will have a nic on the management network, and
>> this means that you have a management bridge on your host.
>> The host has to have an address on the management network in order for
>> the engine to be able to reach your host.
>>
>> That's why hosted-engine-setup is checking the address of the interface
>> you choose for the management network.
>>
>>
>>
>>>
>>> On Thu, Jun 15, 2017 at 9:02 AM, Simone Tiraboschi 
>>> wrote:
>>>


 On Thu, Jun 15, 2017 at 2:55 PM, Jon Bornstein <
 bornstein.jonat...@gmail.com> wrote:

> Hi Marton,
>
> Here is the log: https://gist.github.com/a
> nonymous/ac777a70b8e8fc23016c0b6731f24706
>


 You tried to create the management bridge over enp0s25 but it wasn't
 configured with an IP address for your host.
 Could you please configure it or choose a correctly configured
 interface?
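
(One way to do that with NetworkManager could be something like the following;
the addresses are just examples and need to match your actual network:)

  nmcli con add type ethernet ifname enp0s25 con-name enp0s25 \
      ip4 192.168.1.50/24 gw4 192.168.1.1
  nmcli con up enp0s25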


 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init
 cloud_init._getMyIPAddress:115 Acquiring 'enp0s25' address
 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init
 plugin.executeRaw:813 execute: ('/sbin/ip', 'addr', 'show', 'enp0s25'),
 executable='None', cwd='None', env=None
 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init
 plugin.executeRaw:863 execute-result: ('/sbin/ip', 'addr', 'show',
 'enp0s25'), rc=0
 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init
 plugin.execute:921 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25')
 stdout:
 2: enp0s25:  mtu 1500 qdisc
 pfifo_fast state DOWN qlen 1000
 link/ether c4:34:6b:26:6a:d1 brd ff:ff:ff:ff:ff:ff

 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init
 plugin.execute:926 execute-output: ('/sbin/ip', 'addr', 'show', 'enp0s25')
 stderr:


 2017-06-14 16:32:22 DEBUG otopi.plugins.gr_he_common.vm.cloud_init
 cloud_init._getMyIPAddress:132 address: None
 2017-06-14 16:32:22 DEBUG otopi.context context._executeMethod:142
 method exception
 Traceback (most recent call last):
   File