Re: [ovirt-users] oVirt monitoring with libvirt-snmp

2015-10-25 Thread Dan Kenigsberg
On Fri, Oct 23, 2015 at 01:31:10PM +0200, Kevin COUSIN wrote:
> Hi Dan, 
> 
> I want to monitor my oVirt infrastructure with Zabbix. I found a Zabbix 
> template, but it needs libvirtd.conf to be modified so that libvirt-snmp can 
> access VM information (https://github.com/jensdepuydt/zabbix-ovirt).

If all you need is monitoring, libvirt-snmp should be happy to use a
read-only connection to libvirt, which is kept open by default.

If this is impossible, I suggest filing a bug against libvirt-snmp.

If you trust libvirt-snmp and Zabbix not to modify VM state, you can
replace "sasl" with "none", and restart libvirt, supervdsm and vdsm.



Re: [ovirt-users] oVirt 3.6 disk speeds

2015-10-25 Thread Yaniv Kaul
On Fri, Oct 23, 2015 at 6:55 AM, Julian De Marchi wrote:

> heya--
>
> Playing around with oVirt 3.6 on some new kit before I rack and stack in
> the DC.
>
>   - Dell EqualLogic PS4210 (RAID-6)
>   - Dell R630
>   - Dell N400 10GbE switch
>
> I have my 10GbE interfaces bonded and in a port-channel. I have set up a
> basic 3.6 oVirt self-hosted stack.
>
> I am seeing some pretty big differences in disk speeds between the
> self-hosted engine and a VM guest. Both are direct LUNs via iSCSI from the
> SAN on the 10GbE port-channel. Normal network traffic does not go over the
> 10GbE links.
>
> On the engine I'm seeing this:
>
> # dd if=/dev/zero of=~/testfile bs=1G count=1 oflag=dsync
>

You are not using direct I/O; please add the relevant flag.
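
For instance (a sketch; with oflag=direct the write bypasses the page cache,
so the number reflects the storage path rather than host RAM):

# dd if=/dev/zero of=~/testfile bs=1G count=1 oflag=direct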


> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB) copied, 1.58877 s, 676 MB/s
>
> On a test VM I'm seeing this:
>
> # dd if=/dev/zero of=~/testfile bs=1G count=1 oflag=dsync
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB) copied, 22.0732 s, 48.6 MB/s
>
> I have played with the disk drivers for my guest VM; changing between
> drivers does not change the speed at all.
>

I assume both VMs have the same image format and drivers?


>
> What ideas do you folks have for me to narrow down this slowdown? I don't
> really know where to start with this one. I also noticed that 3.6 does not
> yet have ovirt-guest-agent available.
>
> I know dd is not the best test for disk speeds, but it works for what I
> need right now; iozone will be next when I'm happy.
>

dd is by no means a benchmarking tool for disk speeds, especially not when
you are writing zeros. I suggest fio.
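
A minimal sketch of an fio run (the parameters are illustrative; adjust
size, block size and iodepth to your setup):

fio --name=writetest --filename=testfile --rw=randwrite --bs=4k \
    --size=1G --direct=1 --ioengine=libaio --iodepth=16
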
Y.


>
> Many thanks!
>
> --julian


Re: [ovirt-users] schedule a VM backup in ovirt 3.5

2015-10-25 Thread Yedidyah Bar David
On Fri, Oct 23, 2015 at 6:46 AM, Indunil Jayasooriya wrote:
>
>>
>> If you know the name of the VM then you can find it, including its id,
>> doing a search:
>>
>> https://engine.example.com/ovirt-engine/api/vms?search=name%3Dmyvm
>
>
>   Thanks for the above info. It worked.
>
>>
>>
>> If you prefer to use a script, which is probably the right thing, then
>> you can do something like this, using the Python SDK:
>>
>
> Can I schedule a VM backup with the oVirt Manager GUI? If possible, please
> let me know.

Not at this moment, and this is not planned for the future, AFAIK.

Did you play with the other SDK examples etc. suggested during this thread?

May I suggest that this is best done as a plugin/contribution to a backup
solution, not standalone. I guess most sites with more than a few hosts use
some such solution - FOSS (amanda/bacula/backuppc/etc.) or proprietary
(netbackup/backupexec/acronis/etc.) - and backup of VMs (oVirt or other)
should be handled in this scope.

Searching the net I managed to find [1] and [2], which might be what
you want. Note that I have never tried either of them myself.

[1] https://github.com/ovirt-china/vm-backup-scheduler
[2] https://github.com/wefixit-AT/oVirtBackup
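
Also, for the snapshot listing/removal questions quoted below, a minimal
untested sketch along the same lines (that vm.snapshots exposes list() and
that a fetched snapshot exposes delete() is an assumption, based on the
SDK's usual collection/resource pattern):

---8<---
#!/usr/bin/python

from ovirtsdk.api import API

# Connect to the server (same parameters as the earlier example):
api = API(
url="https://engine.example.com/ovirt-engine/api",
username="admin@internal",
password="**",
ca_file="/etc/pki/ovirt-engine/ca.pem",
debug=False
)

# Find the VM:
vm = api.vms.get(name="myvm")

# List the VM's snapshots:
for snap in vm.snapshots.list():
    print(snap.get_id() + " " + snap.get_description())

# Remove the snapshot created earlier, found by its description:
for snap in vm.snapshots.list():
    if snap.get_description() == "My snapshot":
        snap.delete()

# Disconnect:
api.disconnect()
--->8---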

Best,

>
>
>>
>> ---8<---
>> #!/usr/bin/python
>>
>> from ovirtsdk.api import API
>> from ovirtsdk.xml import params
>>
>> # Connect to the server:
>> api = API(
>> url="https://engine.example.com/ovirt-engine/api;,
>> username="admin@internal",
>> password="**",
>> ca_file="/etc/pki/ovirt-engine/ca.pem",
>> debug=False
>> )
>>
>> # Find the VM:
>> vm = api.vms.get(name="myvm")
>>
>> # Print the id:
>> print(vm.get_id())
>>
>> # Disconnect:
>> api.disconnect()
>> --->8---
>>
>
> Many thanks for the above script. I ran it on Ovirt manager. It worked. It
> gave me the ID of my CentOS_71 VM that I want to backup.
>
>
>
>>
>> Once you have the VM you can create a snapshot like this:
>>
>> ---8<---
>> vm.snapshots.add(
>> params.Snapshot(description="My snapshot")
>> )
>> --->8---
>
>
> I rewrote the script in this way and ran it.
>
> #!/usr/bin/python
>
> from ovirtsdk.api import API
> from ovirtsdk.xml import params
>
> # Connect to the server:
> api = API(
> url="https://engine.example.com/ovirt-engine/api",
> username="admin@internal",
> password="**",
> ca_file="/etc/pki/ovirt-engine/ca.pem",
> debug=False
> )
>
> # Find the VM:
> vm = api.vms.get(name="myvm")
>
> # Print the id:
> print(vm.get_id())
>
> vm.snapshots.add(
> params.Snapshot(description="My snapshot")
> )
>
> # Disconnect:
> api.disconnect()
>
>
> No error was given. I have no idea whether the snapshot was added or NOT by
> the above script.
>
> I can't display it.
>
> How do I display it?
>
> Then, how do I back up the VM with this snapshot?
>
> Then, finally, how do I delete this snapshot? To delete it,
> I tried the commands below. They did NOT work.
>
>   vm.snapshots.delete()
>
> or
>   vm.snapshots.detach(
>
> or
>   vm.snapshots.remove(
>
> I think I have completed about 50% of this backup process. If you can
> write down the above steps, I would be very grateful.
>
>
> I searched a whole lot, but I still can't do it.
>
> These are the links I came across.
>
> https://github.com/laravot/backuprestoreapi/blob/master/example.py
>
> http://www.ovirt.org/Testing/PythonApi#Create_a_Basic_Environment_using_ovirt-engine-sdk
>
> This one is yours:
>
> http://users.ovirt.narkive.com/Z29BQWAD/ovirt-users-python-sdk-attach-disk-snapshot-to-another-virtual-machine
>
>
> Hmm,
>
> How do I list the snapshot?
> How do I back up the VM with the snapshot?
> Finally, how do I remove this snapshot?
>
> Then, I think, it will be OVER. Yesterday I tried a lot, but with NO success.
>
> Hope to hear from you.
>
>>
>> You can also use the Java SDK, if you prefer Java.
>>
>> --
>> Business address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
>> 3ºD, 28016 Madrid, Spain
>> Registered in the Commercial Registry of Madrid – C.I.F. B82657941 - Red Hat S.L.
>
>
>
>
> --
> cat /etc/motd
>
> Thank you
> Indunil Jayasooriya
> http://www.theravadanet.net/
> http://www.siyabas.lk/sinhala_how_to_install.html   -  Download Sinhala
> Fonts
>
>



-- 
Didi


Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-25 Thread Maor Lipchuk
Hi Devin,

Is your host running with SELinux enabled?

If so, can you attach the output of the following commands:
   ls -lh /dev/vgname/  # show which dm-x devices are used
   ls -Z /dev/dm-*  # show selinux label of these devices
   ps -efZ  # show vdsm selinux context

After that (if SELinux was enabled), can you try permissive mode,
and then try to reinstall the host again using oVirt.
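
Switching to permissive mode for the test is standard SELinux tooling, not
oVirt-specific; a minimal sketch:

   getenforce     # show the current mode (Enforcing/Permissive/Disabled)
   setenforce 0   # permissive until the next reboot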

Regards,
Maor


- Original Message -
> From: "Devin A. Bougie" 
> To: Users@ovirt.org
> Sent: Friday, October 23, 2015 11:12:23 PM
> Subject: [ovirt-users] Error while executing action New SAN Storage Domain: 
> Cannot zero out volume
> 
> Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error
> while executing action New SAN Storage Domain: Cannot zero out volume"
> error.
> 
> iscsid does login to the node, and the volumes appear to have been created.
> However, I cannot use it to create or import a Data / iSCSI storage domain.
> 
> [root@lnx84 ~]# iscsiadm -m node
> #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1
> 
> [root@lnx84 ~]# iscsiadm -m session
> tcp: [1] #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 (non-flash)
> 
> [root@lnx84 ~]# pvscan
>   PV /dev/mapper/1IET_00010001   VG f73c8720-77c3-42a6-8a29-9677db54bac6
>   lvm2 [547.62 GiB / 543.75 GiB free]
> ...
> [root@lnx84 ~]# lvscan
>   inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata'
>   [512.00 MiB] inherit
>   inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/outbox'
>   [128.00 MiB] inherit
>   inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/leases' [2.00
>   GiB] inherit
>   inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/ids' [128.00
>   MiB] inherit
>   inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/inbox' [128.00
>   MiB] inherit
>   inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/master' [1.00
>   GiB] inherit
> ...
> 
> Any help would be greatly appreciated.
> 
> Many thanks,
> Devin
> 
> Here are the relevant lines from engine.log:
> --
> 2015-10-23 16:04:56,925 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
> (ajp--127.0.0.1-8702-8) START, GetDeviceListVDSCommand(HostName = lnx84,
> HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageType=ISCSI), log id:
> 44a64578
> 2015-10-23 16:04:57,681 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
> (ajp--127.0.0.1-8702-8) FINISH, GetDeviceListVDSCommand, return: [LUNs
> [id=1IET_00010001, physicalVolumeId=wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn,
> volumeGroupId=8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu,
> serial=SIET_VIRTUAL-DISK, lunMapping=1, vendorId=IET,
> productId=VIRTUAL-DISK, _lunConnections=[{ id: null, connection: #.#.#.#,
> iqn: iqn.2015-10.N.N.N.lnx88:lnx88.target1, vfsType: null, mountOptions:
> null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };],
> deviceSize=547, vendorName=IET, pathsDictionary={sdi=true}, lunType=ISCSI,
> status=Used, diskId=null, diskAlias=null, storageDomainId=null,
> storageDomainName=null]], log id: 44a64578
> 2015-10-23 16:05:06,474 INFO
> [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
> (ajp--127.0.0.1-8702-8) [53dd8c98] Running command:
> AddSANStorageDomainCommand internal: false. Entities affected :  ID:
> aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
> CREATE_STORAGE_DOMAIN with role type ADMIN
> 2015-10-23 16:05:06,488 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
> (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateVGVDSCommand(HostName =
> lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26,
> storageDomainId=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19,
> deviceList=[1IET_00010001], force=true), log id: 12acc23b
> 2015-10-23 16:05:07,379 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
> (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateVGVDSCommand, return:
> dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P, log id: 12acc23b
> 2015-10-23 16:05:07,384 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (ajp--127.0.0.1-8702-8) [53dd8c98] START,
> CreateStorageDomainVDSCommand(HostName = lnx84, HostId =
> a650e161-75f6-4916-bc18-96044bf3fc26,
> storageDomain=StorageDomainStatic[lnx88,
> cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19],
> args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P), log id: cc93ec6
> 2015-10-23 16:05:10,356 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (ajp--127.0.0.1-8702-8) [53dd8c98] Failed in CreateStorageDomainVDS method
> 2015-10-23 16:05:10,360 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (ajp--127.0.0.1-8702-8) [53dd8c98] Command
> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand
> return value
>  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=374,
>  mMessage=Cannot zero out volume:
>  ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)]]

[ovirt-users] Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE

2015-10-25 Thread Giuseppe Ragusa
Hi all,
I'm experiencing some difficulties with the latest oVirt 3.6 snapshot.

I'm trying to trick the hosted-engine setup into creating a custom engine VM
with 3 NICs (with fixed MACs/UUIDs).

The GlusterFS volume (3.7.5, hyperconverged, replica 3, for the engine VM) and
the network bridges (ovirtmgmt plus two other bridges, called nfs and lan, for
the engine VM) have been preconfigured on the initial fully patched CentOS 7.1
host (plus two other identical hosts which are waiting to be added).

I'm stuck at the point where the engine VM starts successfully but with only
one NIC present (connected to the ovirtmgmt bridge).

I'm trying to obtain the modified engine VM by means of a trick which used to
work in a previous oVirt 3.5 test setup about a year ago, maybe more (a setup
aborted for lack of GlusterFS-by-libgfapi support): I'm substituting the
standard /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in with the
following:

vmId=@VM_UUID@
memSize=@MEM_SIZE@
display=@CONSOLE_TYPE@
devices={index:2,iface:ide,address:{controller:0, target:0, unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
devices={device:scsi,model:virtio-scsi,type:controller}
devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00, slot:0x0c, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
vmName=@NAME@
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp=@VCPUS@
cpuType=@CPU_TYPE@
emulatedMachine=@EMULATED_MACHINE@

but unfortunately the VM gets created like this (output from "ps"; note that
I'm attaching a CentOS 7.1 netinstall ISO with an embedded kickstart: the
installation should proceed over HTTP on the lan network but obviously fails):

/usr/libexec/qemu-kvm -name HostedEngine -S
 -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096
 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1
 -uuid f49da721-8aa6-4422-8b91-e91a0e38aa4a
 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a0e38aa4a
 -no-user-config -nodefaults
 -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control
 -rtc base=2015-10-25T11:22:22,driftfix=slew
 -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on
 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
 -drive file=/var/tmp/engine.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
 -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
 -drive file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-disk0,format=raw,serial=b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc,cache=none,werror=stop,rerror=stop,aio=threads
 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0
 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27
 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0x3
 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.com.redhat.rhevm.vdsm,server,nowait
 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
 -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.qemu.guest_agent.0,server,nowait
 -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
 -chardev

Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-25 Thread Devin A. Bougie
Hi Maor,

On Oct 25, 2015, at 12:03 PM, Maor Lipchuk wrote:
> A few questions:
> Which RHEL version is installed on your host?

7.1

> Can you please share the output of "ls -l 
> /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/"

[root@lnx84 ~]# ls -l /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/
total 0
lrwxrwxrwx 1 root root 8 Oct 23 16:05 ids -> ../dm-23
lrwxrwxrwx 1 root root 8 Oct 23 16:05 inbox -> ../dm-24
lrwxrwxrwx 1 root root 8 Oct 23 16:05 leases -> ../dm-22
lrwxrwxrwx 1 root root 8 Oct 23 16:05 metadata -> ../dm-20
lrwxrwxrwx 1 root root 8 Oct 23 16:05 outbox -> ../dm-21

> What happens when you run this command on your host:
> /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero 
> of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 seek=0 
> skip=0 conv=notrunc count=40 oflag=direct

[root@lnx84 ~]# /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd 
if=/dev/zero of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 
seek=0 skip=0 conv=notrunc count=40 oflag=direct
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.435552 s, 96.3 MB/s

> Also, please consider opening a bug at 
> https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt, with all the logs 
> and output so it can be resolved ASAP.

I'll open a bug report in the morning unless you have any other suggestions.  
Many thanks for following up!

Devin


Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-25 Thread Maor Lipchuk
Hi Devin,

A few questions:
Which RHEL version is installed on your host?
Can you please share the output of "ls -l 
/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/"

What happens when you run this command on your host:
/usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero 
of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 seek=0 skip=0 
conv=notrunc count=40 oflag=direct

Also, please consider opening a bug at 
https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt, with all the logs and 
output so it can be resolved ASAP.

Thanks,
Maor


- Original Message -
> From: "Devin A. Bougie" 
> To: "Maor Lipchuk" 
> Cc: Users@ovirt.org
> Sent: Sunday, October 25, 2015 4:10:25 PM
> Subject: Re: [ovirt-users] Error while executing action New SAN Storage 
> Domain: Cannot zero out volume
> 
> Hi Maor,
> 
> On Oct 25, 2015, at 6:36 AM, Maor Lipchuk wrote:
> > Is your host running with SELinux enabled?
> 
> No, SELinux is disabled.  Sorry, I should have mentioned that initially.
> 
> Any other suggestions would be greatly appreciated.
> 
> Many thanks!
> Devin
> 
> > - Original Message -
> >> 
> >> Every time I try to create a Data / iSCSI Storage Domain, I receive an
> >> "Error
> >> while executing action New SAN Storage Domain: Cannot zero out volume"
> >> error.
> >> 
> >> iscsid does login to the node, and the volumes appear to have been
> >> created.
> >> However, I cannot use it to create or import a Data / iSCSI storage
> >> domain.
> >> 
> >> [root@lnx84 ~]# iscsiadm -m node
> >> #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1
> >> 
> >> [root@lnx84 ~]# iscsiadm -m session
> >> tcp: [1] #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 (non-flash)
> >> 
> >> [root@lnx84 ~]# pvscan
> >>  PV /dev/mapper/1IET_00010001   VG f73c8720-77c3-42a6-8a29-9677db54bac6
> >>  lvm2 [547.62 GiB / 543.75 GiB free]
> >> ...
> >> [root@lnx84 ~]# lvscan
> >>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata'
> >>  [512.00 MiB] inherit
> >>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/outbox'
> >>  [128.00 MiB] inherit
> >>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/leases'
> >>  [2.00
> >>  GiB] inherit
> >>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/ids' [128.00
> >>  MiB] inherit
> >>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/inbox'
> >>  [128.00
> >>  MiB] inherit
> >>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/master'
> >>  [1.00
> >>  GiB] inherit
> >> ...
> >> 
> >> Any help would be greatly appreciated.
> >> 
> >> Many thanks,
> >> Devin
> >> 
> >> Here are the relevant lines from engine.log:
> >> --
> >> 2015-10-23 16:04:56,925 INFO
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
> >> (ajp--127.0.0.1-8702-8) START, GetDeviceListVDSCommand(HostName = lnx84,
> >> HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageType=ISCSI), log id:
> >> 44a64578
> >> 2015-10-23 16:04:57,681 INFO
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
> >> (ajp--127.0.0.1-8702-8) FINISH, GetDeviceListVDSCommand, return: [LUNs
> >> [id=1IET_00010001,
> >> physicalVolumeId=wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn,
> >> volumeGroupId=8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu,
> >> serial=SIET_VIRTUAL-DISK, lunMapping=1, vendorId=IET,
> >> productId=VIRTUAL-DISK, _lunConnections=[{ id: null, connection: #.#.#.#,
> >> iqn: iqn.2015-10.N.N.N.lnx88:lnx88.target1, vfsType: null, mountOptions:
> >> null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };],
> >> deviceSize=547, vendorName=IET, pathsDictionary={sdi=true}, lunType=ISCSI,
> >> status=Used, diskId=null, diskAlias=null, storageDomainId=null,
> >> storageDomainName=null]], log id: 44a64578
> >> 2015-10-23 16:05:06,474 INFO
> >> [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
> >> (ajp--127.0.0.1-8702-8) [53dd8c98] Running command:
> >> AddSANStorageDomainCommand internal: false. Entities affected :  ID:
>> aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
> >> CREATE_STORAGE_DOMAIN with role type ADMIN
> >> 2015-10-23 16:05:06,488 INFO
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
> >> (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateVGVDSCommand(HostName =
> >> lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26,
> >> storageDomainId=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19,
> >> deviceList=[1IET_00010001], force=true), log id: 12acc23b
> >> 2015-10-23 16:05:07,379 INFO
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
> >> (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateVGVDSCommand, return:
> >> dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P, log id: 12acc23b
> >> 2015-10-23 16:05:07,384 INFO
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> >> (ajp--127.0.0.1-8702-8) [53dd8c98] START,
> >> 

Re: [ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume

2015-10-25 Thread Devin A. Bougie
Hi Maor,

On Oct 25, 2015, at 6:36 AM, Maor Lipchuk wrote:
> Is your host running with SELinux enabled?

No, SELinux is disabled.  Sorry, I should have mentioned that initially.

Any other suggestions would be greatly appreciated.

Many thanks!
Devin

> - Original Message -
>> 
>> Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error
>> while executing action New SAN Storage Domain: Cannot zero out volume"
>> error.
>> 
>> iscsid does login to the node, and the volumes appear to have been created.
>> However, I cannot use it to create or import a Data / iSCSI storage domain.
>> 
>> [root@lnx84 ~]# iscsiadm -m node
>> #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1
>> 
>> [root@lnx84 ~]# iscsiadm -m session
>> tcp: [1] #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 (non-flash)
>> 
>> [root@lnx84 ~]# pvscan
>>  PV /dev/mapper/1IET_00010001   VG f73c8720-77c3-42a6-8a29-9677db54bac6
>>  lvm2 [547.62 GiB / 543.75 GiB free]
>> ...
>> [root@lnx84 ~]# lvscan
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata'
>>  [512.00 MiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/outbox'
>>  [128.00 MiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/leases' [2.00
>>  GiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/ids' [128.00
>>  MiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/inbox' [128.00
>>  MiB] inherit
>>  inactive  '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/master' [1.00
>>  GiB] inherit
>> ...
>> 
>> Any help would be greatly appreciated.
>> 
>> Many thanks,
>> Devin
>> 
>> Here are the relevant lines from engine.log:
>> --
>> 2015-10-23 16:04:56,925 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
>> (ajp--127.0.0.1-8702-8) START, GetDeviceListVDSCommand(HostName = lnx84,
>> HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageType=ISCSI), log id:
>> 44a64578
>> 2015-10-23 16:04:57,681 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
>> (ajp--127.0.0.1-8702-8) FINISH, GetDeviceListVDSCommand, return: [LUNs
>> [id=1IET_00010001, physicalVolumeId=wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn,
>> volumeGroupId=8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu,
>> serial=SIET_VIRTUAL-DISK, lunMapping=1, vendorId=IET,
>> productId=VIRTUAL-DISK, _lunConnections=[{ id: null, connection: #.#.#.#,
>> iqn: iqn.2015-10.N.N.N.lnx88:lnx88.target1, vfsType: null, mountOptions:
>> null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };],
>> deviceSize=547, vendorName=IET, pathsDictionary={sdi=true}, lunType=ISCSI,
>> status=Used, diskId=null, diskAlias=null, storageDomainId=null,
>> storageDomainName=null]], log id: 44a64578
>> 2015-10-23 16:05:06,474 INFO
>> [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] Running command:
>> AddSANStorageDomainCommand internal: false. Entities affected :  ID:
>> aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
>> CREATE_STORAGE_DOMAIN with role type ADMIN
>> 2015-10-23 16:05:06,488 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateVGVDSCommand(HostName =
>> lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26,
>> storageDomainId=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19,
>> deviceList=[1IET_00010001], force=true), log id: 12acc23b
>> 2015-10-23 16:05:07,379 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateVGVDSCommand, return:
>> dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P, log id: 12acc23b
>> 2015-10-23 16:05:07,384 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] START,
>> CreateStorageDomainVDSCommand(HostName = lnx84, HostId =
>> a650e161-75f6-4916-bc18-96044bf3fc26,
>> storageDomain=StorageDomainStatic[lnx88,
>> cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19],
>> args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P), log id: cc93ec6
>> 2015-10-23 16:05:10,356 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] Failed in CreateStorageDomainVDS method
>> 2015-10-23 16:05:10,360 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] Command
>> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand
>> return value
>> StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=374,
>> mMessage=Cannot zero out volume:
>> ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)]]
>> 2015-10-23 16:05:10,367 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--127.0.0.1-8702-8) [53dd8c98] HostName = lnx84
>> 2015-10-23 16:05:10,370 ERROR
>>