[ovirt-users] Re: qemu-img info showed iscsi/FC lun size 0

2019-02-22 Thread Nir Soffer
On Fri, Feb 22, 2019 at 7:14 PM Nir Soffer  wrote:

> On Fri, Feb 22, 2019 at 5:00 PM Jingjie Jiang 
> wrote:
>
>> What about qcow2 format?
>>
> qcow2 reports the real size regardless of the underlying storage, since
qcow2 manages the allocations itself. However, the size is reported by
qemu-img check in the image-end-offset field.

$ dd if=/dev/zero bs=1M count=10 | tr "\0" "\1" > test.raw

$ truncate -s 200m test.raw

$ truncate -s 1g backing

$ sudo losetup -f backing --show
/dev/loop2

$ sudo qemu-img convert -f raw -O qcow2 test.raw /dev/loop2

$ sudo qemu-img info --output json /dev/loop2
{
"virtual-size": 209715200,
"filename": "/dev/loop2",
"cluster-size": 65536,
"format": "qcow2",
"actual-size": 0,
"format-specific": {
"type": "qcow2",
"data": {
"compat": "1.1",
"lazy-refcounts": false,
"refcount-bits": 16,
"corrupt": false
}
},
"dirty-flag": false
}

$ sudo qemu-img check --output json /dev/loop2
{
"image-end-offset": 10813440,
"total-clusters": 3200,
"check-errors": 0,
"allocated-clusters": 160,
"filename": "/dev/loop2",
"format": "qcow2"
}

We use this for reducing volumes to their optimal size after merging
snapshots, but we don't report this value to the engine.
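
For illustration, a minimal sketch of reading image-end-offset
programmatically (my own example, not oVirt's actual code; the 128 MiB
rounding unit is an assumption):

    import json
    import subprocess

    EXTENT = 128 * 1024**2  # assumed rounding unit, for illustration only

    def optimal_size(device):
        # qemu-img check exits non-zero on errors/leaks; check_output
        # raises CalledProcessError in that case.
        out = subprocess.check_output(
            ["qemu-img", "check", "--output", "json", device])
        end_offset = json.loads(out)["image-end-offset"]
        # Round up to a whole extent so the volume can be reduced safely.
        return (end_offset + EXTENT - 1) // EXTENT * EXTENT

    print(optimal_size("/dev/loop2"))  # 134217728 for the example above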

Is there a choice to create a vm disk with format qcow2 instead of raw?
>>
> Not for LUNs, only for images.
>
> The available formats in 4.3 are documented here:
>
> https://ovirt.org/develop/release-management/features/storage/incremental-backup.html#disk-format
>
> "incremental" means you checked the "Enable incremental backup" checkbox
> when creating a disk.
> But note that the fact that we create a qcow2 image is an implementation
> detail, and the behavior may change in the future. For example, qemu is
> expected to provide a way to do incremental backup with raw volumes, and
> in that case we may create a raw volume instead of a qcow2 volume
> (actually a raw data volume and a qcow2 metadata volume).
>
> If you want to control the disk format, the only way is via the REST API
> or SDK, where you can specify the format instead of the allocation policy.
> However, even if you specify the format in the SDK, the system may choose
> to change the format when copying the disk to another storage type. For
> example, if you copy a qcow2/sparse image from block storage to file
> storage, the system will create a raw/sparse image.
>
> If you want to control the format both from the UI and the REST API/SDK,
> and ensure that the system will never change the selected format, please
> file a bug explaining the use case.
>
> On 2/21/19 5:46 PM, Nir Soffer wrote:
>>
>>
>>
>> On Thu, Feb 21, 2019, 21:48 wrote:
>>> Hi,
>>> Based on oVirt 4.3.0, I have a data domain from an FC LUN, then I create
>>> a new vm with a disk from the FC data domain.
>>> After the VM was created, according to qemu-img info, the disk size is 0.
>>> # qemu-img info
>>> /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
>>>
>>> image:
>>> /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
>>> file format: raw
>>> virtual size: 10G (10737418240 bytes)
>>> disk size: 0
>>>
>>> I tried on iSCSI with the same result.
>>>
>>> Is the behaviour expected?
>>>
>>
>> It is expected, in a way. Disk size is the amount of storage actually
>> used, and block devices have no way to tell that.
>>
>> oVirt reports the size of the block device in this case, which is more
>> accurate than zero.
>>
>> However, the real size allocated on the underlying storage is somewhere
>> between zero and the device size, and depends on the implementation of
>> the storage. Neither qemu-img nor oVirt can tell the real size.
>>
>> Nir
>>
>>
>>> Thanks,
>>> Jingjie
>>>


[ovirt-users] Gluster setup Problem

2019-02-22 Thread Matthew Roth
I have 3 servers: node 1 has a 3TB /dev/sda, and nodes 2 and 3 each have
a 3TB /dev/sdb.

I start the process for gluster deployment. I change node 1 to sda and all
the other ones to sdb. I get no errors; however,

when I get to
"Creating physical volume", all it does is spin forever and doesn't get any
further. I can leave it there for 5 hours and it doesn't go anywhere.

#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
cmdnode1.cmd911.com
cmdnode2.cmd911.com
cmdnode3.cmd911.com

[script1:cmdnode1.cmd911.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda -h 
cmdnode1.cmd911.com, cmdnode2.cmd911.com, cmdnode3.cmd911.com

[script1:cmdnode2.cmd911.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
cmdnode1.cmd911.com, cmdnode2.cmd911.com, cmdnode3.cmd911.com

[script1:cmdnode3.cmd911.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
cmdnode1.cmd911.com, cmdnode2.cmd911.com, cmdnode3.cmd911.com

[disktype]
raid6

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no

[pv1:cmdnode1.cmd911.com]
action=create
devices=sda
ignore_pv_errors=no

[pv1:cmdnode2.cmd911.com]
action=create
devices=sdb
ignore_pv_errors=no

[pv1:cmdnode3.cmd911.com]
action=create
devices=sdb
ignore_pv_errors=no

[vg1:cmdnode1.cmd911.com]
action=create
vgname=gluster_vg_sda
pvname=sda
ignore_vg_errors=no

[vg1:cmdnode2.cmd911.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[vg1:cmdnode3.cmd911.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:cmdnode1.cmd911.com]
action=create
poolname=gluster_thinpool_sda
ignore_lv_errors=no
vgname=gluster_vg_sda
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB

[lv2:cmdnode2.cmd911.com]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB

[lv3:cmdnode3.cmd911.com]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=41GB
poolmetadatasize=1GB

[lv4:cmdnode1.cmd911.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sda
mount=/gluster_bricks/engine
size=100GB
lvtype=thick

[lv5:cmdnode1.cmd911.com]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sda
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sda
virtualsize=500GB

[lv6:cmdnode1.cmd911.com]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sda
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sda
virtualsize=500GB

[lv7:cmdnode2.cmd911.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick

[lv8:cmdnode2.cmd911.com]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[lv9:cmdnode2.cmd911.com]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[lv10:cmdnode3.cmd911.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=20GB
lvtype=thick

[lv11:cmdnode3.cmd911.com]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv12:cmdnode3.cmd911.com]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=cmdnode1.cmd911.com:/gluster_bricks/engine/engine,cmdnode2.cmd911.com:/gluster_bricks/engine/engine,cmdnode3.cmd911.com:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3

[ovirt-users] Re: Design question: one big NFS share or several smaller ones?

2019-02-22 Thread Nir Soffer
On Mon, Feb 18, 2019 at 2:00 PM Markus Schaufler <
markus.schauf...@digit-all.at> wrote:

> Hi all,
>
> I've got a design question:
> Are there any best practices regarding NFS datastores especially datastore
> sizing? Should I use one big NFS datastore and expand it on demand or
> should the size not exceed a certain limit and start with a new NFS
> datastore?
>

Unless you have a reason to use multiple storage domains, like separating
storage for different groups (production, testing, development), fewer
storage domains are better. Every storage domain includes monitoring and an
ioprocess child process, so you don't want to create many storage domains
for no reason.

We don't have any limit on the size of a storage domain. We do support up
to 50 storage domains.


> Are there any other configuration considerations (NFS v3 or v4(.1) and
> mounting options)?
>

I think the only NFS version that should be used now is 4.2. It gives
*huge* performance improvements because it supports sparseness.

Here are some examples (see the sketch after this list):
- creating a preallocated disk is instant, instead of the minutes/hours
  (depending on disk size) it takes with NFS < 4.2
- copying preallocated disks can be much faster because qemu can read only
  the allocated parts and can zero unallocated parts instantly.
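
A quick way to see the sparseness support from the shell (a sketch; the
path is an example and the mount must use vers=4.2):

$ time fallocate -l 10G /mnt/nfs42/disk.raw   # returns instantly, the
                                              # allocation is done on the
                                              # server side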

You can see here how much faster copying a raw disk from NFS 4.2 is vs
block storage:
https://bugzilla.redhat.com/show_bug.cgi?id=1511891#c57

Copying the relevant part here:

## Copying from NFS 4.2 to FC storage domain
...
image     qemu-img  qemu-img/-W  dd    parallel-dd
--------------------------------------------------
100/19G   242       41           165   128

## Copying from FC storage domain to FC storage domain
...
image     qemu-img  qemu-img/-W  dd    parallel-dd
--------------------------------------------------
100/19G   383       194          178   141

As you can see, copying from NFS 4.2 was about 4.5 times faster (41 vs
194). Without -W, the difference is much smaller (242 vs 383).

You can add mount options if you have special needs via the engine UI or
the REST API/SDK.
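
For example, with the Python SDK you can request NFS 4.2 explicitly when
creating the domain (a minimal sketch; the URL, credentials and names are
placeholders):

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal', password='...', ca_file='ca.pem')

    sds_service = connection.system_service().storage_domains_service()
    sds_service.add(
        types.StorageDomain(
            name='mydata',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='myhost'),
            storage=types.HostStorage(
                type=types.StorageType.NFS,
                nfs_version=types.NfsVersion.V4_2,  # force NFS 4.2
                address='nfs.example.com',
                path='/exports/data')))
    connection.close()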

Nir


[ovirt-users] Re: HC : JBOD or RAID5/6 for NVME SSD drives?

2019-02-22 Thread Jayme
Personally I feel like RAID underneath GlusterFS replication is too
wasteful. It would give you a few advantages, such as being able to replace
a failed drive at the RAID level vs replacing bricks with Gluster.

In my production HCI setup I have three Dell hosts, each with two 2TB SSDs
in JBOD. I find this setup works well for me, but I have not yet run into
any drive failure scenarios.

What PERC card do you have in the Dell machines? JBOD is tough with most
PERC cards; in many cases, to do JBOD you have to fake it using an
individual RAID 0 for each drive. Only some PERC controllers allow true
JBOD passthrough.

On Fri, Feb 22, 2019 at 12:30 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> Hi,
>
> We have been evaluating oVirt HyperConverged for 9 months now with a test
> cluster of 3 DELL hosts with hardware RAID5 on a PERC card.
> We were not impressed with the performance...
> No SSD for LV cache on these hosts, but I tried anyway with LV cache on a
> RAM device. Performance was almost unchanged.
>
> It seems that LV cache is its own source of bugs and problems anyway, so
> we are thinking of going with full NVMe drives when buying the production
> cluster.
>
> What would the recommendation be in that case, JBOD or RAID?
>
> Thanks
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group


[ovirt-users] Re: Installing oVirt on Physical machine V4.3

2019-02-22 Thread emmanualvnebu1
You mean, by pressing Tab & adding the above to the parameters?

ovirt-node-ng-installer-4.3.0-2019020409.el7.iso is my iso file name.

How do I find out what the USB device name is? Because I am only at the
boot screen.

Below are the parameters set by default in the Install option:

> vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=OVIRT quiet
> inst.ks=hd:LABEL=CentOS\x207\x20x86_64:\interactive-defaults.ks


[ovirt-users] Re: Installing oVirt on Physical machine V4.3

2019-02-22 Thread Sandro Bonazzola
On Fri, Feb 22, 2019 at 6:24 PM wrote:

> iso, bootable usb made with rufus.
>

can you please try just using "dd if=<iso file> of=/dev/<usb device>",
changing the values according to your case?
I'd like to exclude Rufus issues.
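
For example (a sketch; /dev/sdX is a placeholder for the USB stick, check
it with lsblk before writing):

$ sudo dd if=ovirt-node-ng-installer-4.3.0-2019020409.el7.iso \
    of=/dev/sdX bs=4M status=progress conv=fsync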





-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com



[ovirt-users] Re: Installing oVirt on Physical machine V4.3

2019-02-22 Thread emmanualvnebu1
iso, bootable usb made with rufus.


[ovirt-users] Re: qemu-img info showed iscsi/FC lun size 0

2019-02-22 Thread Nir Soffer
On Fri, Feb 22, 2019 at 5:00 PM Jingjie Jiang 
wrote:

> What about qcow2 format?
>
> Is there a choice to create a vm disk with format qcow2 instead of raw?
>
Not for LUNs, only for images.

The available formats in 4.3 are documented here:
https://ovirt.org/develop/release-management/features/storage/incremental-backup.html#disk-format

"incremental" means you checked the "Enable incremental backup" checkbox
when creating a disk.
But note that the fact that we create a qcow2 image is an implementation
detail, and the behavior may change in the future. For example, qemu is
expected to provide a way to do incremental backup with raw volumes, and in
that case we may create a raw volume instead of a qcow2 volume (actually a
raw data volume and a qcow2 metadata volume).

If you want to control the disk format, the only way is via the REST API
or SDK, where you can specify the format instead of the allocation policy.
However, even if you specify the format in the SDK, the system may choose
to change the format when copying the disk to another storage type. For
example, if you copy a qcow2/sparse image from block storage to file
storage, the system will create a raw/sparse image.

If you want to control the format both from the UI and the REST API/SDK,
and ensure that the system will never change the selected format, please
file a bug explaining the use case.
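
For example, with the Python SDK (a minimal sketch; the URL, credentials
and names are placeholders):

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal', password='...', ca_file='ca.pem')

    disks_service = connection.system_service().disks_service()
    disks_service.add(
        types.Disk(
            name='mydisk',
            format=types.DiskFormat.COW,   # request qcow2 explicitly
            sparse=True,
            provisioned_size=10 * 2**30,   # 10 GiB
            storage_domains=[types.StorageDomain(name='mydata')]))
    connection.close()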

On 2/21/19 5:46 PM, Nir Soffer wrote:
>
>
>
> On Thu, Feb 21, 2019, 21:48 wrote:
>> Hi,
>> Based on oVirt 4.3.0, I have a data domain from an FC LUN, then I create
>> a new vm with a disk from the FC data domain.
>> After the VM was created, according to qemu-img info, the disk size is 0.
>> # qemu-img info
>> /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
>>
>> image:
>> /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
>> file format: raw
>> virtual size: 10G (10737418240 bytes)
>> disk size: 0
>>
>> I tried on iSCSI with the same result.
>>
>> Is the behaviour expected?
>>
>
> It is expected, in a way. Disk size is the amount of storage actually
> used, and block devices have no way to tell that.
>
> oVirt reports the size of the block device in this case, which is more
> accurate than zero.
>
> However, the real size allocated on the underlying storage is somewhere
> between zero and the device size, and depends on the implementation of
> the storage. Neither qemu-img nor oVirt can tell the real size.
>
> Nir
>
>
>> Thanks,
>> Jingjie
>>
>


[ovirt-users] Re: qemu-img info showed iscsi/FC lun size 0

2019-02-22 Thread Jingjie Jiang

Hi Nir,

Thanks for your reply.

What about qcow2 format?

Is there a choice to create a vm disk with format qcow2 instead of raw?


Thanks,

Jingjie


On 2/21/19 5:46 PM, Nir Soffer wrote:



On Thu, Feb 21, 2019, 21:48  wrote:


Hi,
Based on oVirt 4.3.0, I have a data domain from an FC LUN, then I
create a new vm with a disk from the FC data domain.
After the VM was created, according to qemu-img info, the disk size is 0.
# qemu-img info

/rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b

image:

/rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 0

I tried on iSCSI with the same result.

Is the behaviour expected?


It is expected, in a way. Disk size is the amount of storage actually
used, and block devices have no way to tell that.

oVirt reports the size of the block device in this case, which is more
accurate than zero.

However, the real size allocated on the underlying storage is somewhere
between zero and the device size, and depends on the implementation of the
storage. Neither qemu-img nor oVirt can tell the real size.


Nir


Thanks,
Jingjie




[ovirt-users] HC : JBOD or RAID5/6 for NVME SSD drives?

2019-02-22 Thread Guillaume Pavese
Hi,

We have been evaluating oVirt HyperConverged for 9 months now with a test
cluster of 3 DELL hosts with hardware RAID5 on a PERC card.
We were not impressed with the performance...
No SSD for LV cache on these hosts, but I tried anyway with LV cache on a
RAM device. Performance was almost unchanged.

It seems that LV cache is its own source of bugs and problems anyway, so we
are thinking of going with full NVMe drives when buying the production
cluster.

What would the recommendation be in that case, JBOD or RAID?

Thanks

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


[ovirt-users] Re: Installing oVirt on Physical machine V4.3

2019-02-22 Thread Yuval Turgeman
How are you installing it (iso, pxe) ?

On Fri, Feb 22, 2019, 17:15  wrote:

> Motherboard :   ASUS M5A78L-M/USB3
> UEFI Support:   No
> Processor   :   AMD FX-4300 (3800MHz)
> :   Virtualization Enabled in BIOS
> NIC :   1. Ethernet
> 2. Wifi Adapter
> HDD :   1TB WD
> RAM :   8GB
>
> I am working on a VDI project & testing on this machine.
>
> I had centos 7 earlier & installed oVirt. It was working fine & I was
> able to access the service url.
>
> Now, I am trying to remove the base OS from the infra & go ahead with
> oVirt-node.


[ovirt-users] Problem with snapshots in illegal status

2019-02-22 Thread Bruno Rodriguez
Hello,

We are experiencing some problems with some snapshots in illegal status
generated with the Python API. I think I'm not the only one; that is not a
relief, but I hope someone can help with it.

I'm a bit scared because, from what I see, the creation date in the engine
for every snapshot is way different from the date when it was really
created. The name of the snapshot is in the format
backup_snapshot_YYYYMMDD-HHMMSS, but as you can see in the following
examples, the stored date is totally random...

Size    Creation Date              Snapshot Description             Status   Disk Snapshot ID
------  -------------------------  -------------------------------  -------  ------------------------------------
33 GiB  Mar 2, 2018, 5:03:57 PM    backup_snapshot_20190217-011645  Illegal  5734df23-de67-41a8-88a1-423cecfe7260
33 GiB  May 8, 2018, 10:02:56 AM   backup_snapshot_20190216-013047  Illegal  f649d9c1-563e-49d4-9fad-6bc94abc279b
10 GiB  Feb 21, 2018, 11:10:17 AM  backup_snapshot_20190217-010004  Illegal  2929df28-eae8-4f27-afee-a984fe0b07e7
43 GiB  Feb 2, 2018, 12:55:51 PM   backup_snapshot_20190216-015544  Illegal  4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f
11 GiB  Feb 13, 2018, 12:51:08 PM  backup_snapshot_20190217-010541  Illegal  fbaff53b-30ce-4b20-8f10-80e70becb48c
11 GiB  Feb 13, 2018, 4:05:39 PM   backup_snapshot_20190217-011207  Illegal  c628386a-da6c-4a0d-ae7d-3e6ecda27d6d
11 GiB  Feb 13, 2018, 4:38:25 PM   backup_snapshot_20190216-012058  Illegal  e9ddaa5c-007d-49e6-8384-efefebb00aa6
11 GiB  Feb 13, 2018, 10:52:09 AM  backup_snapshot_20190216-012550  Illegal  5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4
55 GiB  Jan 22, 2018, 5:02:29 PM   backup_snapshot_20190217-012659  Illegal  7efe2e7e-ca24-4b27-b512-b42795c79ea4
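
(For anyone wanting to audit these, a minimal sketch of listing disk
snapshots and their status with the Python SDK; the connection details are
placeholders, and it assumes DiskSnapshot exposes the same status field as
Disk:)

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal', password='...', ca_file='ca.pem')

    sds_service = connection.system_service().storage_domains_service()
    for sd in sds_service.list():
        # Disk snapshots are exposed per data storage domain.
        snaps_service = sds_service.storage_domain_service(
            sd.id).disk_snapshots_service()
        for snap in snaps_service.list():
            if snap.status == types.DiskStatus.ILLEGAL:
                print(sd.name, snap.id, snap.description)
    connection.close()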


When I look at the logs for the first one, to check what happened to it,
I see the following:

2019-02-17 01:16:45,839+01 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default
task-100) [96944daa-c90a-4ad7-a556-c98e66550f87] START,
CreateVolumeVDSCommand(
CreateVolumeVDSCommandParameters:{storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
ignoreFailoverLimit='false',
storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
imageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f',
imageSizeInBytes='32212254720', volumeFormat='COW',
newImageId='fa154782-0dbb-45b5-ba62-d6937259f097', imageType='Sparse',
newImageDescription='', imageInitialSizeInBytes='0',
imageId='5734df23-de67-41a8-88a1-423cecfe7260',
sourceImageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f'}), log id:
497c168a
2019-02-17 01:18:26,506+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] START,
GetVolumeInfoVDSCommand(HostName = hood13.pic.es,
GetVolumeInfoVDSCommandParameters:{hostId='0a774472-5737-4ea2-b49a-6f0ea4572199',
storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
imageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f',
imageId='5734df23-de67-41a8-88a1-423cecfe7260'}), log id: 111a34cf
2019-02-17 01:18:26,764+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] Successfully
added Download disk 'vm.example.com_Disk1' (id
'5734df23-de67-41a8-88a1-423cecfe7260') for image transfer command
'11104d8c-2a9b-4924-96ce-42ef66725616'
2019-02-17 01:18:27,310+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.AddImageTicketVDSCommand]
(default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] START,
AddImageTicketVDSCommand(HostName = hood11.pic.es,
AddImageTicketVDSCommandParameters:{hostId='79cbda85-35f4-44df-b309-01b57bc2477e',
ticketId='0d389c3e-5ea5-4886-8ea7-60a1560e3b2d', timeout='300',
operations='[read]', size='35433480192',
url='file:///rhev/data-center/mnt/blockSD/e655abce-c5e8-44f3-8d50-9fd76edf05cb/images/c5cc464e-eb71-4edf-a780-60180c592a6f/5734df23-de67-41a8-88a1-423cecfe7260',
filename='null'}), log id: f5de141
2019-02-17 01:22:28,898+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-100)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] Renewing transfer ticket for
Download disk 'vm.example.com_Disk1' (id
'5734df23-de67-41a8-88a1-423cecfe7260')
2019-02-17 01:26:33,588+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-100)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] Renewing transfer ticket for
Download disk 'vm.example.com_Disk1' (id
'5734df23-de67-41a8-88a1-423cecfe7260')
2019-02-17 01:26:49,319+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-6)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] Finalizing successful transfer for
Download disk 'vm.example.com_Disk1' (id
'5734df23-de67-41a8-88a1-423cecfe7260')
2019-02-17 01:27:17,771+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-6)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] EVENT_ID:

[ovirt-users] Re: Fencing : SSL or not?

2019-02-22 Thread Martin Perina
On Fri, Feb 22, 2019 at 4:00 PM Nicolas Ecarnot  wrote:

> On 22/02/2019 15:45, Martin Perina wrote:
>
> If I understand that correctly, this is a request to open session to IPMI.
> If you haven't received any response, then I'd check:
>
> 1. Do you have IPMI enabled?
>
>
> Hello Martin,
>
> you hit the point.
>
> IPMI was not enabled (anymore).
>
> IPMI is activated by default since years in all our hosts.
>
> But recent firmware upgrades on some of our Dell hosts, and especially to
> the iDRAC firmware, led to IPMI being disabled.
>
>
> I'm sorry for having bothered you and the audience. Sorry for this waste
> of time. Thank you Dell :-\
>

No problem, I'm glad the issue is solved.

Have a nice weekend!
Martin

>
> --
> Nicolas ECARNOT
>
>

-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.


[ovirt-users] Re: Ovirt 4.2.8.. Possible bug?

2019-02-22 Thread Sandro Bonazzola
On Fri, Feb 22, 2019, 15:31 matteo fedeli wrote:

> sorry, but I don't understand...
>


I added an engineer to the discussion who may be able to help you.




[ovirt-users] Re: update to 4.2.8 fails

2019-02-22 Thread Sandro Bonazzola
Nir Levy, can you please add this to the upgrade guide? Please add Yuval
to the reviewers for the PR.

On Sun, Feb 17, 2019, 10:49 Yuval Turgeman wrote:

> It's mentioned in the release notes [1]; probably worth mentioning it in
> the general upgrade guide.
>
> [1]
> https://ovirt.org/release/4.3.0/#install--upgrade-from-previous-versions
>
>
> On Fri, Feb 15, 2019 at 1:05 PM Greg Sheremeta 
> wrote:
>
>>
>> On Thu, Feb 14, 2019 at 11:06 PM Vincent Royer 
>> wrote:
>>
>>> Greg,
>>>
>>> Can I contribute?
>>>
>>
>> I'm so glad you asked -- YES, please do! :)
>>
>> The docs, along with the rest of the ovirt.org site, are in GitHub, and
>> very easily editable. To edit a single page -- in the page footer, click
>> "Edit this page on GitHub". That will create a simple Pull Request for
>> review.
>>
>> You can also clone and fork like any other project, if you are more
>> comfortable with that. If you want to run the site locally,
>> https://github.com/oVirt/ovirt-site/blob/master/CONTRIBUTING.md
>>
>> Let me know if you have any questions!
>>
>>
>>>
>>>
>>>
>>> On Thu, Feb 14, 2019 at 2:05 PM Greg Sheremeta 
>>> wrote:
>>>

 On Thu, Feb 14, 2019 at 2:16 PM Vincent Royer 
 wrote:

> Greg,
>
> The first thing on the list in your link is to check what repos are
> enabled with yum repolist, but it makes no mention of what repos
> *should* be enabled on node-ng, nor what to do about it if you have
> the wrong ones. I've never had an oVirt update go the way it was
> "supposed" to go, despite having a bunch of documentation at hand.
> Usually I end up blowing away the hosts and starting with a fresh ISO.
>
> The page you linked makes no mention of the command Edward mentioned
> that got things working for me:
>
> yum update ovirt-node-ng-image-update
>
> All the instructions I can find just say to run yum update, but that
> resulted in a bunch of dependency errors for me, on a normal 4.2.6 node
> install that I haven't touched since installation.  Why?  If I'm following
> the instructions, shouldn't it work?
>
>
> running this command in the upgrade guide:
>
> [image: image.png]
>
> Gives me "This system is not registered with an entitlement server".
>  Is that an outdated instruction?  Does it apply to the particular update
> I'm trying to apply?   No way to tell...
>

 It only applies to RHEL. If you are not on RHEL, you wouldn't run that.
 So it definitely needs improvement.


>
> What would really help is a clear separation between commands intended
> for centos/RHEL and commands intended for Node.  As an outsider, it's very
> difficult to know.   Every chapter, where there is any difference in
> procedure, the documents should be split with RHEL on one side and NODE on
> the other.
>

 +1.


>
> The documentation would also benefit from tags like dates and versions
> that they apply to.  "Valid for Ovirt 4.2.6 to 4.2.8 as of Feb 2, 2019".
> Then the documents should be tested and the dates/versions adjusted, or 
> the
> docs adjusted, as needed.
>

 Agree.


>
> Ovirt is awesome.
>

 Agree :)


> But the docs are the project's worst enemy.
>

 I understand your frustrations. We've been trying to improve the
 documentation lately, and feedback like yours is crucial. So thank you.
 I opened https://github.com/oVirt/ovirt-site/issues/1906 to track this.

 Best wishes,
 Greg



>
>
>
> On Thu, Feb 14, 2019 at 3:34 AM Greg Sheremeta 
> wrote:
>
>> Hi,
>>
>> On Wed, Feb 13, 2019 at 11:18 PM Vincent Royer 
>> wrote:
>>
>>> wow am I crazy or is that not mentioned anywhere that I can find in
>>> the docs?
>>>
>>
>>
>> https://ovirt.org/documentation/upgrade-guide/appe-Manually_Updating_Hosts.html
>>
>> Does that make sense, or do you think it needs enhancement? If it
>> needs enhancement, please open a documentation bug:
>> https://github.com/oVirt/ovirt-site/issues/new
>>
>>
>>
>>>
>>> some combinations of commands and reboots finally got the update to
>>> take.
>>>
>>> Any idea about the messages about not being registered to an
>>> entitlement server?  whats that all about??
>>>
>>
>> If you're on CentOS, it's a harmless side effect of cockpit being
>> installed.
>> # cat  /etc/yum/pluginconf.d/subscription-manager.conf
>> [main]
>> enabled=1
>> # change to 0 if you prefer
>>
>>
>>
>>>
>>>
>>>
>>> On Wed, Feb 13, 2019 at 7:30 PM Edward Berger 
>>> wrote:
>>>
 If it's a node-ng install, you should just update the whole image
 with
 yum update ovirt-node-ng-image-update

 On Wed, Feb 13, 2019 

[ovirt-users] Re: Installing oVirt on Physical machine V4.3

2019-02-22 Thread emmanualvnebu1
Motherboard :   ASUS M5A78L-M/USB3
UEFI Support:   No
Processor   :   AMD FX-4300 (3800MHz)
:   Virtualization Enabled in BIOS
NIC :   1. Ethernet
2. Wifi Adapter
HDD :   1TB WD
RAM :   8GB

I am working on a VDI project & testing on this machine.

I had centos 7 earlier & installed oVirt. It was working fine & I was able
to access the service url.

Now, I am trying to remove the base OS from the infra & go ahead with 
oVirt-node.


[ovirt-users] Re: Design question: one big NFS share or several smaller ones?

2019-02-22 Thread Sandro Bonazzola
adding some people

On Mon, Feb 18, 2019, 13:00 Markus Schaufler wrote:

> Hi all,
>
> I've got a design question:
> Are there any best practices regarding NFS datastores especially datastore
> sizing? Should I use one big NFS datastore and expand it on demand or
> should the size not exceed a certain limit and start with a new NFS
> datastore?
> Are there any other configuration considerations (NFS v3 or v4(.1) and
> mounting options)?


[ovirt-users] Re: Fencing : SSL or not?

2019-02-22 Thread Nicolas Ecarnot

On 22/02/2019 15:45, Martin Perina wrote:
If I understand that correctly, this is a request to open a session to
IPMI. If you haven't received any response, then I'd check:


1. Do you have IPMI enabled?



Hello Martin,

you hit the point.

IPMI was not enabled (anymore).

IPMI is activated by default since years in all our hosts.

But recent firmware upgrades on some of our Dell hosts, and especially to
the iDRAC firmware, led to IPMI being disabled.



I'm sorry for having bothered you and the audience. Sorry for this waste 
of time. Thank you Dell :-\


--
Nicolas ECARNOT



[ovirt-users] Re: oVirt Node install failed

2019-02-22 Thread Sandro Bonazzola
On Wed, Feb 20, 2019, 06:43 wrote:

> Current version my oVirt 4.2.6.
> Maybe I need to update it?
>

A 4.2.6 engine is supposed to work with a 4.2.8 node, but yes, better to
upgrade.
If you are not using Gluster, I would recommend upgrading to 4.3, which is
the currently supported version.






[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Karli Sjöberg
On Feb 22, 2019, 15:51, Dominik Holler wrote:
> On Fri, 22 Feb 2019 15:50:23 +0100 (CET)
> Karli Sjöberg wrote:
> > Well, what about automatically migrated VM's?
>
> The same rules apply.

Cool! Seems like your question got answered Nic, right?

/K


[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Nicolas Ecarnot

On 22/02/2019 15:48, Dominik Holler wrote:

Hosts _need_ the same networks to be available in the same cluster.
Differently networked hosts need to be put in a separate cluster.



This is the most straightforward approach, which is supported by oVirt.
But there is the possibility to attach logical networks, which are
neither required in the cluster, nor attached to all hosts in the
cluster, to a VM. oVirt's scheduling will respect this.


So you're saying oVirt knows which other hosts in the cluster have the 
non-mandatory network(s) the VM has and only chooses between those a host to 
migrate the VM to?



Yes. If you try to trigger the migration manually, UI will provide you
the list of possible hosts to migrate the VM.
https://github.com/oVirt/ovirt-engine/blob/7d111f3aa089f77f92049f4d3ec792e5ff7e5324/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/NetworkPolicyUnit.java#L132




*THIS* is precisely the answer I was expecting.

Thank you Dominik.

--
Nicolas ECARNOT


[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Dominik Holler
On Fri, 22 Feb 2019 15:50:23 +0100 (CET)
Karli Sjöberg  wrote:

> 
> 
> On Feb 22, 2019, 15:48, Dominik Holler wrote:
> On Fri, 22 Feb 2019 15:46:00 +0100 (CET)
> > Karli Sjöberg wrote:
> > 
> > >
> > >
> > > On Feb 22, 2019, 15:35, Dominik Holler wrote:
> > > On Fri, 22 Feb 2019 15:02:10 +0100 (CET)
> > > > Karli Sjöberg wrote:
> > > >
> > > > >
> > > > >
> > > > > On Feb 22, 2019, 09:24, Nicolas Ecarnot wrote:
> > > > > Hello,
> > > > > >
> > > > > > I'm almost sure the following is useless as I think I know how it's
> > > > > > working, but as I'm preparing a major change in our infrastructure, 
> > > > > > I'd
> > > > > > rather be sure and not mess up. And also to be sure.
> > > > > > (Just to be sure)
> > > > > >
> > > > > > For some reasons, and for the first time in our infra., one of our
> > > > > > new DCs will temporarily include heterogeneous hosts: some networks
> > > > > > will be available only on parts of them.
> > > > > >
> > > >
> > > > Should work.
> > > >
> > > > > Hosts _need_ the same networks to be available in the same cluster.
> > > > > Differently networked hosts need to be put in a separate cluster.
> > > > >
> > > >
> > > > This is the most straightforward approach, which is supported by oVirt.
> > > > But there is the possibility to attach logical networks, which are
> > > > neither required in the cluster, nor attached to all hosts in the
> > > > cluster, to a VM. oVirt's scheduling will respect this.
> > > >
> > > So you're saying oVirt knows which other hosts in the cluster have the 
> > > non-mandatory network(s) the VM has and only chooses between those a host 
> > > to migrate the VM to?
> > >
> > 
> > Yes. If you try to trigger the migration manually, UI will provide you
> > the list of possible hosts to migrate the VM.
> > 
> Well, what about automatically migrated VM's?
> 

The same rules apply.

> /K
> 
> 
> > > /K
> > >
> > > Of course this introduces some obvious limitations, e.g. you cannot
> > > > hotplug a network to a VM, which runs on a host, which is not connected
> > > > to this network or neither you nor oVirt can schedule a VM to a host,
> > > > which does not provide all the networks attached to the VM.
> > > >
> > > > If you want to be even more sure, the reference to the relevant source
> > > > https://github.com/oVirt/ovirt-engine/blob/7d111f3aa089f77f92049f4d3ec792e5ff7e5324/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/NetworkPolicyUnit.java#L132
> > > >
> > > >
> > > > > /K
> > > > >
> > > > >
> > > > > > Please may someone confirm me that with every load balancing / VM
> > > > > > startup / VM migration / host choice, oVirt will smartly choose the
> > > > > > available host equipped with the adequate networks?
> > > > > >
> > > > > > --
> > > > > > Nicolas ECARNOT
> > > >
> > > >
> > 
> > 


[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Karli Sjöberg
On Feb 22, 2019, 15:48, Dominik Holler wrote:
> On Fri, 22 Feb 2019 15:46:00 +0100 (CET)
> Karli Sjöberg wrote:
> > So you're saying oVirt knows which other hosts in the cluster have the
> > non-mandatory network(s) the VM has and only chooses between those a
> > host to migrate the VM to?
>
> Yes. If you try to trigger the migration manually, UI will provide you
> the list of possible hosts to migrate the VM.

Well, what about automatically migrated VM's?

/K


[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Dominik Holler
On Fri, 22 Feb 2019 15:46:00 +0100 (CET)
Karli Sjöberg  wrote:

> 
> 
> On Feb 22, 2019, 15:35, Dominik Holler wrote:
> On Fri, 22 Feb 2019 15:02:10 +0100 (CET)
> > Karli Sjöberg wrote:
> > 
> > >
> > >
> > > On Feb 22, 2019, 09:24, Nicolas Ecarnot wrote:
> > > Hello,
> > > >
> > > > I'm almost sure the following is useless as I think I know how it's
> > > > working, but as I'm preparing a major change in our infrastructure, I'd
> > > > rather be sure and not mess up. And also to be sure.
> > > > (Just to be sure)
> > > >
> > > > For some reasons, and for the first time in our infra., one of our new
> > > > DCs will temporarily include heterogeneous hosts: some networks will be
> > > > available only on parts of them.
> > > >
> > 
> > Should work.
> > 
> > > Hosts _need_ the same networks to be available in the same cluster.
> > > Differently networked hosts need to be put in a separate cluster.
> > >
> > 
> > This is the most straightforward approach, which is supported by oVirt.
> > But there is the possibility to attach logical networks, which are
> > neither required in the cluster, nor attached to all hosts in the
> > cluster, to a VM. oVirt's scheduling will respect this.
> > 
> So you're saying oVirt knows which other hosts in the cluster have the 
> non-mandatory network(s) the VM has and only chooses between those a host to 
> migrate the VM to?
> 

Yes. If you try to trigger the migration manually, UI will provide you
the list of possible hosts to migrate the VM.

> /K
> 
> Of course this introduces some obvious limitations, e.g. you cannot
> > hotplug a network to a VM, which runs on a host, which is not connected
> > to this network or neither you nor oVirt can schedule a VM to a host,
> > which does not provide all the networks attached to the VM.
> > 
> > If you want to be even more sure, the reference to the relevant source
> > https://github.com/oVirt/ovirt-engine/blob/7d111f3aa089f77f92049f4d3ec792e5ff7e5324/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/NetworkPolicyUnit.java#L132
> > 
> > 
> > > /K
> > >
> > >
> > > > Please may someone confirm me that with every load balancing / VM
> > > > startup / VM migration / host choice, oVirt will smartly choose the
> > > > available host equipped with the adequate networks?
> > > >
> > > > --
> > > > Nicolas ECARNOT
> > 
> > 


[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Karli Sjöberg
On Feb 22, 2019, 15:35, Dominik Holler wrote:
> On Fri, 22 Feb 2019 15:02:10 +0100 (CET)
> Karli Sjöberg wrote:
> > Hosts _need_ the same networks to be available in the same cluster.
> > Differently networked hosts need to be put in a separate cluster.
>
> This is the most straightforward approach, which is supported by oVirt.
> But there is the possibility to attach logical networks, which are
> neither required in the cluster, nor attached to all hosts in the
> cluster, to a VM. oVirt's scheduling will respect this.

So you're saying oVirt knows which other hosts in the cluster have the
non-mandatory network(s) the VM has and only chooses between those a host
to migrate the VM to?

/K


[ovirt-users] Re: Fencing : SSL or not?

2019-02-22 Thread Martin Perina
On Fri, Feb 22, 2019 at 2:21 PM Nicolas Ecarnot  wrote:

> On 22/02/2019 at 12:13, Martin Perina wrote:
>
> Unfortunately, with fence_ipmilan it is not possible to display more
> debugging details, so as mentioned earlier, could you please run ipmitool
> directly?
>
> ipmitool -vv -I lanplus -H c-hv05.prd.sdis38.fr -p 623 -U stonith -P
>  -L ADMINISTRATOR chassis power status
>
> Above should display more details ...
>
>
> root@hv04:/etc# ipmitool -vv -I lanplus -H c-hv05.prd.sdis38.fr -p 623 -U 
> stonith -P 'xxx' -L ADMINISTRATOR chassis power status
>
> >> Sending IPMI command payload
> >>    netfn   : 0x06
> >>    command : 0x38
> >>    data    : 0x8e 0x04
>
>
If I understand that correctly, this is a request to open a session to IPMI.
If you haven't received any response, then I'd check:

1. Do you have IPMI enabled?
2. Is it exposed on the relevant IP/port?
3. Is there a firewall blocking the client's access to the IPMI
interface? (A quick way to probe this is sketched below.)
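
Note that IPMI RMCP+ uses UDP port 623, which a default nmap run (TCP
ports only) will not show, so a host can look fine on TCP while the BMC
port is still unreachable. A minimal probe sketch, assuming Python 3 on
the proxy host (the host name below is just the one from this thread):

#!/usr/bin/env python3
# Send an RMCP/ASF "Presence Ping" (the packet ipmiping sends) to UDP
# port 623 and report whether the BMC answers. Illustration only; not
# part of oVirt or fence-agents.
import socket

# RMCP header (version 6, seq 0xFF = no ack, class ASF) followed by an
# ASF Presence Ping: IANA number 4542 (0x000011BE), message type 0x80.
RMCP_PING = bytes([0x06, 0x00, 0xFF, 0x06,
                   0x00, 0x00, 0x11, 0xBE,
                   0x80, 0x00, 0x00, 0x00])

def bmc_answers(host, port=623, timeout=5.0):
    """Return True if the BMC replies with a Presence Pong."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(RMCP_PING, (host, port))
        try:
            s.recvfrom(1024)
            return True
        except socket.timeout:
            return False

print(bmc_answers("c-hv05.prd.sdis38.fr"))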


>
> >> Sending IPMI command payload
> >>    netfn   : 0x06
> >>    command : 0x38
> >>    data    : 0x8e 0x04
>
> >> Sending IPMI command payload
> >>    netfn   : 0x06
> >>    command : 0x38
> >>    data    : 0x8e 0x04
>
> >> Sending IPMI command payload
> >>    netfn   : 0x06
> >>    command : 0x38
> >>    data    : 0x8e 0x04
>
> >> Sending IPMI command payload
> >>    netfn   : 0x06
> >>    command : 0x38
> >>    data    : 0x0e 0x04
>
> >> Sending IPMI command payload
> >>    netfn   : 0x06
> >>    command : 0x38
> >>    data    : 0x0e 0x04
>
> >> Sending IPMI command payload
> >>    netfn   : 0x06
> >>    command : 0x38
> >>    data    : 0x0e 0x04
>
> >> Sending IPMI command payload
> >>    netfn   : 0x06
> >>    command : 0x38
> >>    data    : 0x0e 0x04
>
> Get Auth Capabilities error
> Error issuing Get Channel Authentication Capabilities request
> Error: Unable to establish IPMI v2 / RMCP+ session
> root@hv04:/etc#
>
> --
> Nicolas ECARNOT
>
>

-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IVJH7Y4V4TTETTXRJ5I4UT4SESI5RSR2/


[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Dominik Holler
On Fri, 22 Feb 2019 15:02:10 +0100 (CET)
Karli Sjöberg  wrote:

> 
> 
> On 22 Feb 2019 09:24, Nicolas Ecarnot wrote:
> Hello,
> > 
> > I'm almost sure the following is useless as I think I know how it's
> > working, but as I'm preparing a major change in our infrastructure, I'd
> > rather be sure and not mess up. And also to be sure.
> > (Just to be sure)
> > 
> > For some reasons, and for the first time in our infra., one of our new
> > DCs will temporarily include heterogeneous hosts: some networks will be
> > available on only some of them.
> > 

Should work.

> Hosts _need_ the same networks to be available in the same cluster.
> Hosts with different networks need to be put in a separate cluster.
> 

This is the most straightforward approach, and it is supported by oVirt.
But there is also the possibility to attach logical networks to a VM
which are neither required in the cluster nor attached to all hosts in
the cluster. oVirt's scheduling will respect this.
Of course this introduces some obvious limitations: e.g. you cannot
hotplug a network to a VM running on a host that is not connected to
this network, and neither you nor oVirt can schedule a VM onto a host
that does not provide all the networks attached to the VM.

If you want to be even more sure, here is a reference to the relevant source:
https://github.com/oVirt/ovirt-engine/blob/7d111f3aa089f77f92049f4d3ec792e5ff7e5324/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/scheduling/policyunits/NetworkPolicyUnit.java#L132
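
To illustrate the idea (just a Python sketch, not the engine's actual
Java code linked above): hosts that do not provide every network
attached to the VM's vNICs are filtered out of the candidate list.

# Sketch of the NetworkPolicyUnit filtering idea; names are made up.
def filter_hosts_for_vm(hosts, vm_nic_networks, host_networks):
    """Keep only hosts that provide every network used by the VM.

    hosts:           list of host names
    vm_nic_networks: set of logical networks attached to the VM's vNICs
    host_networks:   dict mapping host name -> set of networks it provides
    """
    return [h for h in hosts
            if vm_nic_networks <= host_networks.get(h, set())]

# Example: only hostA provides both networks, so it is the only candidate.
nets = {"hostA": {"ovirtmgmt", "storage"}, "hostB": {"ovirtmgmt"}}
print(filter_hosts_for_vm(["hostA", "hostB"], {"ovirtmgmt", "storage"}, nets))
# -> ['hostA']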


> /K
> 
> 
> > Please, may someone confirm that with every load balancing / VM
> > startup / VM migration / host choice, oVirt will smartly choose an
> > available host equipped with the adequate networks?
> > 
> > --
> > Nicolas ECARNOT
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OAO3TSWWPXX3SUXOIOLTARDYD4OEE3W4/


[ovirt-users] Re: Ovirt 4.2.8.. Possible bug?

2019-02-22 Thread matteo fedeli
Sorry, but I don't understand...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EVCRSE7EOWJPGFYHIJUDFYQY5RARGYTR/


[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Karli Sjöberg
On 22 Feb 2019 15:22, Nicolas Ecarnot wrote:
> On 22/02/2019 at 15:02, Karli Sjöberg wrote:
> > On 22 Feb 2019 09:24, Nicolas Ecarnot wrote:
> > > Hello,
> > >
> > > I'm almost sure the following is useless as I think I know how it's
> > > working, but as I'm preparing a major change in our infrastructure,
> > > I'd rather be sure and not mess up. And also to be sure.
> > > (Just to be sure)
> > >
> > > For some reasons, and for the first time in our infra., one of our
> > > new DCs will temporarily include heterogeneous hosts: some networks
> > > will be available on only some of them.
>
> Hi Karli,
>
> > Hosts _need_ the same networks to be available in the same cluster.
>
> Correct me if I'm wrong, but I think that your statement is true *if*
> the networks are set as mandatory, which is not automatically wanted
> nor true. In our case, we have to disable this mandatory attribute.

Correct. I took a hasty look at the logical networks page[*] and didn't
see any mention of it and thought they must've removed that option.

Can you still do that? Doesn't make much sense to me, purposely having
non-operational/unavailable hosts in a cluster due to network
incompatibility...

/K

[*] https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks.html

> I agree that when the networks are mandatory, every host unable to use
> them will end up unavailable.
>
> > Hosts with different networks need to be put in a separate cluster.
> >
> > /K
>
> --
> Nicolas ECARNOT
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VS4ILF2JD2QYTUJDVTZ7PZN6YYNVEM4W/


[ovirt-users] Re: Installing oVirt on Physical machine V4.3

2019-02-22 Thread Sandro Bonazzola
Adding node team.
Can you provide some more info on the host? Is it a UEFI system?

On Fri, 22 Feb 2019 at 12:11,  wrote:

> Hi Guys,
>
> I am trying to install oVirt node on a physical machine.
>
> Once I selected the option to install oVirt 4.3, it gives a
> "dracut-initqueue time out" screen, then drops into emergency mode and
> gives a dracut command line.
>
> If I exit the command line, it continues booting into the installation
> window.
>
> However, it gives the message "Kickstart file /run/install/ks.config is
> missing" (Pane is dead).
>
> Tried the same on VMware; it boots fine and I am able to install it.
>
> Can someone tell me how could I fix this?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4MPRTMIA5G423GD2SMNSY3UEJUVU4ZE/


[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Nicolas Ecarnot

On 22/02/2019 at 15:02, Karli Sjöberg wrote:



On 22 Feb 2019 09:24, Nicolas Ecarnot wrote:

Hello,

I'm almost sure the following is useless as I think I know how it's
working, but as I'm preparing a major change in our infrastructure, I'd
rather be sure and not mess up. And also to be sure.
(Just to be sure)

For some reasons, and for the first time in our infra., one of our new 
DCs will temporarily include heterogeneous hosts: some networks will be 
available on only some of them.




Hi Karli,

Hosts _need_ the same networks to be available in the same cluster. 


Correct me if I'm wrong, but I think that your statement is true *if* 
the networks are set as mandatory, which is not automatically wanted nor 
true. In our case, we have to disable this mandatory attribute.


I agree that when the networks are mandatory, every host unable to use 
them will end up unavailable.



Hosts with different networks need to be put in a separate cluster.

/K



--
Nicolas ECARNOT
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CGPHGFXYI3OZX2XKTLCFZ6W3GN4Q6U4Q/


[ovirt-users] Re: Ovirt 4.2.8.. Possible bug?

2019-02-22 Thread Sandro Bonazzola
Adding Sahina.

On Fri, 22 Feb 2019 at 14:01, matteo fedeli wrote:

> Hi, considering that the deploy with 4.2.7.8 failed, I tried to reinstall
> oVirt with version 4.2.8, and two strange things happened.
> During the volume step, if I choose JBOD mode, the deploy conf keeps the
> raid6 type... Why? To work around it, I tried manually editing the file at
> the line about the volume type, and the deploy got stuck on creating a
> physical volume...
>
> This is my conf file (I used 3 HDDs of 500 GB each; node, engine + vmstore
> and data):
>
> #gdeploy configuration generated by cockpit-gluster plugin
> [hosts]
> kansas.planet.bn
> germany.planet.bn
> singapore.planet.bn
>
> [script1:kansas.planet.bn]
> action=execute
> ignore_script_errors=no
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> kansas.planet.bn, germany.planet.bn, singapore.planet.bn
>
> [script1:germany.planet.bn]
> action=execute
> ignore_script_errors=no
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> kansas.planet.bn, germany.planet.bn, singapore.planet.bn
>
> [script1:singapore.planet.bn]
> action=execute
> ignore_script_errors=no
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> kansas.planet.bn, germany.planet.bn, singapore.planet.bn
>
> [disktype]
> jbod
>
> [diskcount]
> 12
>
> [stripesize]
> 256
>
> [service1]
> action=enable
> service=chronyd
>
> [service2]
> action=restart
> service=chronyd
>
> [shell2]
> action=execute
> command=vdsm-tool configure --force
>
> [script3]
> action=execute
> file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
> ignore_script_errors=no
>
> [pv1:kansas.planet.bn]
> action=create
> devices=sdb
> ignore_pv_errors=no
>
> [pv1:germany.planet.bn]
> action=create
> devices=sdb
> ignore_pv_errors=no
>
> [pv1:singapore.planet.bn]
> action=create
> devices=sdb
> ignore_pv_errors=no
>
> [vg1:kansas.planet.bn]
> action=create
> vgname=gluster_vg_sdb
> pvname=sdb
> ignore_vg_errors=no
>
> [vg1:germany.planet.bn]
> action=create
> vgname=gluster_vg_sdb
> pvname=sdb
> ignore_vg_errors=no
>
> [vg1:singapore.planet.bn]
> action=create
> vgname=gluster_vg_sdb
> pvname=sdb
> ignore_vg_errors=no
>
> [lv1:kansas.planet.bn]
> action=create
> poolname=gluster_thinpool_sdb
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> lvtype=thinpool
> size=1005GB
> poolmetadatasize=5GB
>
> [lv2:germany.planet.bn]
> action=create
> poolname=gluster_thinpool_sdb
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> lvtype=thinpool
> size=1005GB
> poolmetadatasize=5GB
>
> [lv3:singapore.planet.bn]
> action=create
> poolname=gluster_thinpool_sdb
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> lvtype=thinpool
> size=1005GB
> poolmetadatasize=5GB
>
> [lv4:kansas.planet.bn]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> size=100GB
> lvtype=thick
>
> [lv5:kansas.planet.bn]
> action=create
> lvname=gluster_lv_data
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/data
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=500GB
>
> [lv6:kansas.planet.bn]
> action=create
> lvname=gluster_lv_vmstore
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/vmstore
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=500GB
>
> [lv7:germany.planet.bn]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> size=100GB
> lvtype=thick
>
> [lv8:germany.planet.bn]
> action=create
> lvname=gluster_lv_data
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/data
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=500GB
>
> [lv9:germany.planet.bn]
> action=create
> lvname=gluster_lv_vmstore
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/vmstore
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=500GB
>
> [lv10:singapore.planet.bn]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> size=100GB
> lvtype=thick
>
> [lv11:singapore.planet.bn]
> action=create
> lvname=gluster_lv_data
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/data
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=500GB
>
> [lv12:singapore.planet.bn]
> action=create
> lvname=gluster_lv_vmstore
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/vmstore
> lvtype=thinlv
> poolname=gluster_thinpool_sdb
> virtualsize=500GB
>
> [selinux]
> yes
>
> [service3]
> action=restart
> service=glusterd
> slice_setup=yes
>
> [firewalld]
> action=add
>
> ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
> services=glusterfs
>
> [script2]
> action=execute
> file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
>
> [shell3]
> action=execute
> command=usermod -a -G gluster qemu
>
> [volume1]
> action=create
> volname=engine
> transport=tcp
> replica=yes
> 

[ovirt-users] Re: Host choice when migrating VMs

2019-02-22 Thread Karli Sjöberg
On 22 Feb 2019 09:24, Nicolas Ecarnot wrote:
> Hello,
>
> I'm almost sure the following is useless as I think I know how it's
> working, but as I'm preparing a major change in our infrastructure, I'd
> rather be sure and not mess up. And also to be sure.
> (Just to be sure)
>
> For some reasons, and for the first time in our infra., one of our new
> DCs will temporarily include heterogeneous hosts: some networks will be
> available on only some of them.

Hosts _need_ the same networks to be available in the same cluster.
Hosts with different networks need to be put in a separate cluster.

/K

> Please, may someone confirm that with every load balancing / VM
> startup / VM migration / host choice, oVirt will smartly choose an
> available host equipped with the adequate networks?
>
> --
> Nicolas ECARNOT
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5WIML5YC5WES3ERTOJR3D7QHB7LC3KRF/


[ovirt-users] Re: Fencing : SSL or not?

2019-02-22 Thread Nicolas Ecarnot

On 22/02/2019 at 12:13, Martin Perina wrote:

Unfortunately, with fence_ipmilan it is not possible to display more 
debugging details, so as mentioned earlier, could you please run 
ipmitool directly?


ipmitool -vv -I lanplus -H c-hv05.prd.sdis38.fr -p 623 -U stonith -P  -L 
ADMINISTRATOR chassis power status


Above should display more details ...


root@hv04:/etc# ipmitool -vv -I lanplus -H c-hv05.prd.sdis38.fr -p 623 -U 
stonith -P 'xxx' -L ADMINISTRATOR chassis power status


>> Sending IPMI command payload
>>    netfn   : 0x06
>>    command : 0x38
>>    data    : 0x8e 0x04

>> Sending IPMI command payload
>>    netfn   : 0x06
>>    command : 0x38
>>    data    : 0x8e 0x04

>> Sending IPMI command payload
>>    netfn   : 0x06
>>    command : 0x38
>>    data    : 0x8e 0x04

>> Sending IPMI command payload
>>    netfn   : 0x06
>>    command : 0x38
>>    data    : 0x8e 0x04

>> Sending IPMI command payload
>>    netfn   : 0x06
>>    command : 0x38
>>    data    : 0x0e 0x04

>> Sending IPMI command payload
>>    netfn   : 0x06
>>    command : 0x38
>>    data    : 0x0e 0x04

>> Sending IPMI command payload
>>    netfn   : 0x06
>>    command : 0x38
>>    data    : 0x0e 0x04

>> Sending IPMI command payload
>>    netfn   : 0x06
>>    command : 0x38
>>    data    : 0x0e 0x04

Get Auth Capabilities error
Error issuing Get Channel Authentication Capabilities request
Error: Unable to establish IPMI v2 / RMCP+ session
root@hv04:/etc#

--
Nicolas ECARNOT

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DQKUC2G745CKN6BT2SC3T6LSCEEML7NN/


[ovirt-users] Ovirt 4.2.8.. Possible bug?

2019-02-22 Thread matteo fedeli
Hi, considering that the deploy with 4.2.7.8 failed, I tried to reinstall 
oVirt with version 4.2.8, and two strange things happened.
During the volume step, if I choose JBOD mode, the deploy conf keeps the 
raid6 type... Why? To work around it, I tried manually editing the file at 
the line about the volume type, and the deploy got stuck on creating a 
physical volume...

This is my conf file (I used 3 HDDs of 500 GB each; node, engine + vmstore 
and data):

#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
kansas.planet.bn
germany.planet.bn
singapore.planet.bn

[script1:kansas.planet.bn]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
kansas.planet.bn, germany.planet.bn, singapore.planet.bn

[script1:germany.planet.bn]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
kansas.planet.bn, germany.planet.bn, singapore.planet.bn

[script1:singapore.planet.bn]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 
kansas.planet.bn, germany.planet.bn, singapore.planet.bn

[disktype]
jbod

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no

[pv1:kansas.planet.bn]
action=create
devices=sdb
ignore_pv_errors=no

[pv1:germany.planet.bn]
action=create
devices=sdb
ignore_pv_errors=no

[pv1:singapore.planet.bn]
action=create
devices=sdb
ignore_pv_errors=no

[vg1:kansas.planet.bn]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[vg1:germany.planet.bn]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[vg1:singapore.planet.bn]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:kansas.planet.bn]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB

[lv2:germany.planet.bn]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB

[lv3:singapore.planet.bn]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB

[lv4:kansas.planet.bn]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick

[lv5:kansas.planet.bn]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[lv6:kansas.planet.bn]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[lv7:germany.planet.bn]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick

[lv8:germany.planet.bn]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[lv9:germany.planet.bn]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[lv10:singapore.planet.bn]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick

[lv11:singapore.planet.bn]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[lv12:singapore.planet.bn]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=kansas.planet.bn:/gluster_bricks/engine/engine,germany.planet.bn:/gluster_bricks/engine/engine,singapore.planet.bn:/gluster_bricks/engine/engine
ignore_volume_errors=no

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3

[ovirt-users] Quota Actual Consumption

2019-02-22 Thread Andrey Rusakov
Hi,

Is it possible to configure oVirt quotas to work with actual vCPU consumption?

Currently I have 1 VM with 2 vCPUs and 8 GB.
In the VM stats:
0% of CPU in use
0% of RAM in use
(it is a fresh Linux installation)

In quota consumption I can see:
40% of CPU (vCPU quota = 5)
27% of RAM (memory quota = 30 GB)

Is it possible to adjust quota management to work with the actual 
workload instead of the VM configuration?

P.S. It seems to work that way for storage, as I can see 0% consumption 
in the quota (1 GB of thick disk with 20 GB max and 200 GB in quota).
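
For reference, the CPU and RAM percentages above line up with the
configured resources rather than the measured load. A quick sketch of
the arithmetic (assuming quota consumption is simply allocated/quota):

# Quota consumption appears to count configured resources, not load:
vcpu_alloc, vcpu_quota = 2, 5          # 2 vCPUs of a 5 vCPU quota
mem_alloc_gb, mem_quota_gb = 8, 30     # 8 GB of a 30 GB memory quota
print(f"CPU: {vcpu_alloc / vcpu_quota:.0%}")      # -> CPU: 40%
print(f"RAM: {mem_alloc_gb / mem_quota_gb:.0%}")  # -> RAM: 27%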
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K5KCQUTZTQY6H23DACFZGXPG2AQYEG66/


[ovirt-users] Re: Fencing : SSL or not?

2019-02-22 Thread Martin Perina
On Fri, Feb 22, 2019 at 11:39 AM Nicolas Ecarnot 
wrote:

> Hi Martin,
>
> On 21/02/2019 at 13:04, Martin Perina wrote:
> > Hi Nicolas,
> >
> > see my reply inline
>
> See mine below.
>
> >
> > On Mon, Feb 18, 2019 at 9:51 AM Nicolas Ecarnot wrote:
> >
> > Hello,
> >
> > As fence_idrac has never worked for us, and as fence_ipmilan has
> > worked nicely for years, we are using fence_ipmilan with the
> > lanplus=1 option and we're happy with it.
> >
> > We upgraded to 4.3.0.4 and we're witnessing that we cannot fence our
> > hosts anymore:
> >
> > 2019-02-18 09:42:08,678+01 ERROR
> > [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-11)
> > [2f78ed99-6703-4d92-b7cb-948c2d24b623] Can not run fence action on
> > host 'x', no suitable proxy host was found.
> >
> >
> > This is not related to the fence_ipmi issue below. The engine, in
> > order to be able to execute a fencing operation, needs at least one
> > other host in Up status, which is used as a proxy host to perform the
> > fencing operation. So do you have at least one host in Up status in
> > the same cluster/datacenter as the host you want to run the fencing
> > operation on?
>
> Yes.
>
> > If so, then please enable debug information to find out why we cannot
> > find any host acting as fence proxy:
> >
> > 1. Please download log-control.sh script from
> > https://github.com/oVirt/ovirt-engine/tree/master/contrib#log-control-sh
> > and save on engine machine
> > 2. Please execute the following on the engine machine:
> >    log-control.sh org.ovirt.engine.core.bll.pm DEBUG
> > 3. Go to the problematic host, click Edit, go to the Power Management
> > tab, click on the existing fence agent and click on the Test button
> > 4. Take a look at engine.log; there should be logged information about
> > why we were not able to find a fence proxy
>
> I followed the instructions above, but I feel this is not the best debug
> path. I learned nothing new.
> The fence proxy is not missing. It is known and found, and it is trying
> to do its job, as written below:
>
> >
> >
> > and on the SPM :
> >
> > fence_ipmilan: Failed: Unable to obtain correct plug status or plug
> > is not available
> >
> >
> > Could you please provide the debug output of the command below?
> >
> > ipmitool -vv -I lanplus -H  -p 623 -U 
> > -P  -L ADMINISTRATOR chassis power status
>
> See below a debug session.
> I'm comparing two hosts, and only one is answering fence status queries.
>
> I must add that before the upgrade to 4.3, both hosts were responding
> correctly.
>
> fence_ipmilan --username=stonith --password='xxx' --lanplus
> --ip=c-serv-hv-prds01.sdis.isere.fr --action=status -v
> 2019-02-22 11:34:01,537 INFO: Executing: /usr/bin/ipmitool -I lanplus -H
> c-serv-hv-prds01.sdis.isere.fr -p 623 -U stonith -P [set] -L
> ADMINISTRATOR chassis power status
>
> 2019-02-22 11:34:01,654 DEBUG: 0 Chassis Power is on
>
>
> Status: ON
> root@hv04:/etc# fence_ipmilan --username=stonith --password='xxx'
> --lanplus --ip=c-hv05.prd.sdis38.fr --action=status -v
> 2019-02-22 11:34:15,335 INFO: Executing: /usr/bin/ipmitool -I lanplus -H
> c-hv05.prd.sdis38.fr -p 623 -U stonith -P [set] -L ADMINISTRATOR chassis
> power status
>
> 2019-02-22 11:34:35,338 ERROR: Connection timed out
>

Unfortunately, with fence_ipmilan it is not possible to display more
debugging details, so as mentioned earlier, could you please run ipmitool
directly?

ipmitool -vv -I lanplus -H c-hv05.prd.sdis38.fr -p 623 -U stonith -P
 -L ADMINISTRATOR chassis power status

Above should display more details ...
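
As a side note, the same status check that the Test button triggers can
also be requested through the API. A rough sketch with the Python SDK
(ovirtsdk4) follows; the URL, credentials and host name are placeholders,
and the exact return shape should be double-checked against the SDK docs:

# Rough sketch: ask the engine to run a power management status check
# (what the Test button does). URL, credentials and host name below are
# placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # use ca_file=... in production
)
try:
    hosts_service = connection.system_service().hosts_service()
    host = hosts_service.list(search='name=hv05')[0]  # placeholder name
    pm = hosts_service.host_service(host.id).fence(fence_type='status')
    print(pm.status)  # expected to report the power status if it worked
finally:
    connection.close()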


>
> root@hv04:/etc# nmap c-serv-hv-prds01.sdis.isere.fr
>
> Starting Nmap 6.40 ( http://nmap.org ) at 2019-02-22 11:34 CET
> Nmap scan report for c-serv-hv-prds01.sdis.isere.fr (192.168.53.2)
> Host is up (0.010s latency).
> rDNS record for 192.168.53.2: c-5g3yxx1.sdis.isere.fr
> Not shown: 996 closed ports
> PORT STATE SERVICE
> 22/tcp   open  ssh
> 80/tcp   open  http
> 443/tcp  open  https
> 5900/tcp open  vnc
>
> Nmap done: 1 IP address (1 host up) scanned in 0.45 seconds
> root@hv04:/etc# nmap c-hv05.prd.sdis38.fr
>
> Starting Nmap 6.40 ( http://nmap.org ) at 2019-02-22 11:34 CET
> Nmap scan report for c-hv05.prd.sdis38.fr (192.168.50.194)
> Host is up (0.00060s latency).
> rDNS record for 192.168.50.194: C-550W2S2.sdis.isere.fr
> Not shown: 996 closed ports
> PORT STATE SERVICE
> 22/tcp   open  ssh
> 80/tcp   open  http
> 443/tcp  open  https
> 5900/tcp open  vnc
> MAC Address: CC:C5:E5:57:26:E0 (Unknown)
>
> Nmap done: 1 IP address (1 host up) scanned in 0.20 seconds
> root@hv04:/etc# ping -c 1 c-serv-hv-prds01.sdis.isere.fr
> PING c-5g3yxx1.sdis.isere.fr (192.168.53.2) 56(84) bytes of data.
> 64 bytes from c-5g3yxx1.sdis.isere.fr (192.168.53.2): icmp_seq=1 ttl=61
> time=2.37 ms
>
> --- c-5g3yxx1.sdis.isere.fr ping statistics ---
> 1 packets 

[ovirt-users] Installing oVirt on Physical machine V4.3

2019-02-22 Thread emmanualvnebu1
Hi Guys, 

I am trying to install oVirt node on a physical machine. 

Once I selected the option to install oVirt 4.3, it gives a 
"dracut-initqueue time out" screen, then drops into emergency mode and 
gives a dracut command line.

If I exit the command line, it continues booting into the installation 
window.

However, it gives the message "Kickstart file /run/install/ks.config is 
missing" (Pane is dead).

Tried the same on VMware; it boots fine and I am able to install it.

Can someone tell me how could I fix this?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MIFTFXRFAOGL4GBS3UOSBHSYH3GCJRTK/


[ovirt-users] Re: [ovirt-devel] Installing oVirt on Physical machine

2019-02-22 Thread Greg Sheremeta
Redirecting to the users list.

Greg

On Fri, Feb 22, 2019 at 6:00 AM  wrote:

> Hi Guys,
>
> I am trying to install oVirt node on a physical machine.
>
> Once I selected the option to install oVirt 4.3, it gives a
> "dracut-initqueue time out" screen, then drops into emergency mode and
> gives a dracut command line.
>
> If I exit the command line, it continues booting into the installation
> window.
>
> However, it gives the message "Kickstart file /run/install/ks.config is
> missing" (Pane is dead).
>
> Tried the same on VMware; it boots fine and I am able to install it.
>
> Can someone tell me how could I fix this?


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2TMEZZPC5JML6R3RQ5EDWMYUK2BLNWAG/


[ovirt-users] Re: Fencing : SSL or not?

2019-02-22 Thread Nicolas Ecarnot

Hi Martin,

On 21/02/2019 at 13:04, Martin Perina wrote:

Hi Nicolas,

see my reply inline


See mine below.



On Mon, Feb 18, 2019 at 9:51 AM Nicolas Ecarnot wrote:


Hello,

As fence_idrac has never worked for us, and as fence_ipmilan has worked
nicely for years, we are using fence_ipmilan with the lanplus=1 option
and we're happy with it.

We upgraded to 4.3.0.4 and we're witnessing that we cannot fence our
hosts anymore:

2019-02-18 09:42:08,678+01 ERROR
[org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-11)
[2f78ed99-6703-4d92-b7cb-948c2d24b623] Can not run fence action on host
'x', no suitable proxy host was found.


This is not related to the fence_ipmi issue below. The engine, in order 
to be able to execute a fencing operation, needs at least one other host 
in Up status, which is used as a proxy host to perform the fencing 
operation. So do you have at least one host in Up status in the same 
cluster/datacenter as the host you want to run the fencing operation on?


Yes.

If so, then please enable debug information to find out why we cannot 
find any host acting as fence proxy:


1. Please download log-control.sh script from 
https://github.com/oVirt/ovirt-engine/tree/master/contrib#log-control-sh 
and save on engine machine

2. Please execute the following on the engine machine:
   log-control.sh org.ovirt.engine.core.bll.pm DEBUG
3. Go to the problematic host, click Edit, go to the Power Management tab, 
click on the existing fence agent and click on the Test button
4. Take a look at engine.log; there should be logged information about 
why we were not able to find a fence proxy


I followed the instructions above, but I feel this is not the best debug 
path. I learned nothing new.
The fence proxy is not missing. It is known and found, and it is trying 
to do its job, as written below:





and on the SPM :

fence_ipmilan: Failed: Unable to obtain correct plug status or plug is
not available


Could you please provide the debug output of the command below?

ipmitool -vv -I lanplus -H  -p 623 -U  
-P  -L ADMINISTRATOR chassis power status


See below a debug session.
I'm comparing two hosts, and only one is answering fence status queries.

I must add that before the upgrade to 4.3, both hosts were responding 
correctly.


fence_ipmilan --username=stonith --password='xxx' --lanplus 
--ip=c-serv-hv-prds01.sdis.isere.fr --action=status -v
2019-02-22 11:34:01,537 INFO: Executing: /usr/bin/ipmitool -I lanplus -H 
c-serv-hv-prds01.sdis.isere.fr -p 623 -U stonith -P [set] -L 
ADMINISTRATOR chassis power status


2019-02-22 11:34:01,654 DEBUG: 0 Chassis Power is on


Status: ON
root@hv04:/etc# fence_ipmilan --username=stonith --password='xxx' 
--lanplus --ip=c-hv05.prd.sdis38.fr --action=status -v
2019-02-22 11:34:15,335 INFO: Executing: /usr/bin/ipmitool -I lanplus -H 
c-hv05.prd.sdis38.fr -p 623 -U stonith -P [set] -L ADMINISTRATOR chassis 
power status


2019-02-22 11:34:35,338 ERROR: Connection timed out


root@hv04:/etc# nmap c-serv-hv-prds01.sdis.isere.fr

Starting Nmap 6.40 ( http://nmap.org ) at 2019-02-22 11:34 CET
Nmap scan report for c-serv-hv-prds01.sdis.isere.fr (192.168.53.2)
Host is up (0.010s latency).
rDNS record for 192.168.53.2: c-5g3yxx1.sdis.isere.fr
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
443/tcp  open  https
5900/tcp open  vnc

Nmap done: 1 IP address (1 host up) scanned in 0.45 seconds
root@hv04:/etc# nmap c-hv05.prd.sdis38.fr

Starting Nmap 6.40 ( http://nmap.org ) at 2019-02-22 11:34 CET
Nmap scan report for c-hv05.prd.sdis38.fr (192.168.50.194)
Host is up (0.00060s latency).
rDNS record for 192.168.50.194: C-550W2S2.sdis.isere.fr
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
443/tcp  open  https
5900/tcp open  vnc
MAC Address: CC:C5:E5:57:26:E0 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 0.20 seconds
root@hv04:/etc# ping -c 1 c-serv-hv-prds01.sdis.isere.fr
PING c-5g3yxx1.sdis.isere.fr (192.168.53.2) 56(84) bytes of data.
64 bytes from c-5g3yxx1.sdis.isere.fr (192.168.53.2): icmp_seq=1 ttl=61 
time=2.37 ms


--- c-5g3yxx1.sdis.isere.fr ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.371/2.371/2.371/0.000 ms
root@hv04:/etc# ping -c 1 c-hv05.prd.sdis38.fr
PING c-550w2s2.prd.sdis38.fr (192.168.50.194) 56(84) bytes of data.
64 bytes from C-550W2S2.sdis.isere.fr (192.168.50.194): icmp_seq=1 
ttl=64 time=0.189 ms


--- c-550w2s2.prd.sdis38.fr ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms




Above is the command which fence_ipmi is internally executing, and -vv 
adds debugging output which can reveal issues with the plug status.


Regards,
Martin


I found the suggested workaround here:


[ovirt-users] Re: Novnc issues are not available to regular users

2019-02-22 Thread Greg Sheremeta
The VM Portal enhancement for this is
https://github.com/oVirt/ovirt-web-ui/pull/956

On Fri, Feb 22, 2019 at 1:21 AM  wrote:

> Ok, thank you very much. There is a problem: we cannot add this when we
> create normal users of the management platform.
> Could you create a user that can be added directly to the admin console?
>

Yes, see the documentation here to see what best fits your use case:
https://www.ovirt.org/documentation/admin-guide/chap-Global_Configuration.html#roles
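
If what you need is to grant an existing user a role from a script, a
rough sketch with the Python SDK (ovirtsdk4) follows; the URL,
credentials and user ID are placeholders, and the exact service calls
should be double-checked against the SDK examples:

# Rough sketch: grant a system-wide role to an existing user.
# URL, credentials and the user ID below are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # use ca_file=... in production
)
try:
    permissions_service = connection.system_service().permissions_service()
    permissions_service.add(
        types.Permission(
            role=types.Role(name='UserRole'),
            user=types.User(id='USER-UUID'),  # placeholder user ID
        ),
    )
finally:
    connection.close()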

Best wishes,
Greg


> You still need to add it by command.
> Thank you very much!!


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com    IRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XEBM6CGK7LRKKNOWG53I3NRIH5ODSBVV/


[ovirt-users] Host choice when migrating VMs

2019-02-22 Thread Nicolas Ecarnot

Hello,

I'm almost sure the following is useless as I think I know how it's 
working, but as I'm preparing a major change in our infrastructure, I'd 
rather be sure and not mess up. And also to be sure.

(Just to be sure)

For some reasons, and for the first time in our infra., one of our new 
DCs will temporarily include heterogeneous hosts: some networks will be 
available on only some of them.


Please, may someone confirm that with every load balancing / VM 
startup / VM migration / host choice, oVirt will smartly choose an 
available host equipped with the adequate networks?


--
Nicolas ECARNOT
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGX3PHA4T3SXXDTYZ4VGY6UHECO7P6V5/


[ovirt-users] Re: Ovirt Glusterfs

2019-02-22 Thread Sandro Bonazzola
Adding Sahina

On Fri, 22 Feb 2019 at 06:51, Strahil wrote:

> I have done some testing and it seems that storhaug + ctdb + nfs-ganesha
> is showing decent performance in a 3-node hyperconverged setup.
> FUSE mounts are hitting some kind of limit when mounting gluster-3.12.15
> volumes.
>
> Best Regards,
> Strahil Nikolov


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQCLXX2VFTPZWALHSO32JMVNILBKFCYJ/