Re: [ovirt-users] multiple ip routing table issue

2017-11-20 Thread Edward Haas
On Tue, Nov 21, 2017 at 1:24 AM, Edward Clay 
wrote:

> Hello,
>
> We have an issue where hosts are configured with the public-facing network
> interface as the ovirtmgmt network, and its default route is added to an
> oVirt-created table but not to the main routing table. From my searching
> I've found this snippet from
> https://www.ovirt.org/develop/release-management/features/network/multiple-gateways/
> which seems to explain why I can't ping anything or communicate with any
> other system needing a default route.
>

By default, the default route is set on the ovirtmgmt network (the default
one, defined on the interface/IP through which you added the host to Engine).
Do you have a different network set up on which you would like to set the
default route?
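For reference, a quick way to see where the default route currently lives on
the host (a minimal sketch; the table id below is the one from your output
further down and will differ per setup):

# list the source-based rules vdsm creates per network
ip rule show
# default route used for traffic originating on the host
ip route show table main | grep default
# default route inside the per-network table created for ovirtmgmt
ip route show table 1138027711 | grep default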


>
> "And finally, here's the host's main routing table. Any traffic coming in
> to the host will use the ip rules and an interface's routing table. The
> main routing table is only used for traffic originating from the host."
>
> I'm seeing the following main and custom ovirt created tables.
>
> main:
> # ip route show table main
> 10.0.0.0/8 via 10.4.16.1 dev enp3s0.106
> 10.4.16.0/24 dev enp3s0.106 proto kernel scope link src 10.4.16.15
> 1.1.1.0/24 dev PUBLICB proto kernel scope link src 1.1.1.1
> 169.254.0.0/16 dev enp6s0 scope link metric 1002
> 169.254.0.0/16 dev enp3s0 scope link metric 1003
> 169.254.0.0/16 dev enp7s0 scope link metric 1004
> 169.254.0.0/16 dev enp3s0.106 scope link metric 1020
> 169.254.0.0/16 dev PRIVATE scope link metric 1022
> 169.254.0.0/16 dev PUBLIC scope link metric 1024
>
> table 1138027711
> # ip route show table 1138027711
> default via 1.1.1.1 dev PUBLIC
> 1.1.1.0/24 via 1.1.1.1 dev PUBLIC
>
> If I manually execute the following command to add the default route to the
> main table as well, I can ping outside of the local network.
>
> ip route add 0.0.0.0/0 via 1.1.1.1 dev PUBLIC
>
> If I modify /etc/sysconfig/network-scripts/route-PUBLIC and reboot the
> server, the file is recreated by vdsm on boot, as one would expect.
>
> What I'm looking for is the correct way to set up a default gateway for the
> main routing table so the hosts can get OS updates and communicate with the
> outside world.
>

Providing the output of "ip addr" may help clear things up.
It looks like the host's default route is currently set to 10.4.16.1 (on
enp3s0.106); could you elaborate on what this interface is?

Thanks,
Edy.




[ovirt-users] multiple ip routing table issue

2017-11-20 Thread Edward Clay
Hello,

We have an issue where hosts are configured with the public-facing
network interface as the ovirtmgmt network, and its default route is
added to an oVirt-created table but not to the main routing table.  From
my searching I've found this snippet from
https://www.ovirt.org/develop/release-management/features/network/multiple-gateways/
which seems to explain why I can't ping anything or communicate with any
other system needing a default route.

"And finally, here's the host's main routing table. Any traffic coming
in to the host will use the ip rules and an interface's routing table.
The main routing table is only used for traffic originating from the
host."

I'm seeing the following main and custom ovirt created tables.

main:
# ip route show table main
10.0.0.0/8 via 10.4.16.1 dev enp3s0.106 
10.4.16.0/24 dev enp3s0.106 proto kernel scope link src 10.4.16.15 
1.1.1.0/24 dev PUBLICB proto kernel scope link src 1.1.1.1
169.254.0.0/16 dev enp6s0 scope link metric 1002
169.254.0.0/16 dev enp3s0 scope link metric 1003 
169.254.0.0/16 dev enp7s0 scope link metric 1004 
169.254.0.0/16 dev enp3s0.106 scope link metric 1020 
169.254.0.0/16 dev PRIVATE scope link metric 1022 
169.254.0.0/16 dev PUBLIC scope link metric 1024 

table 1138027711
# ip route show table 1138027711
default via 1.1.1.1 dev PUBLIC
1.1.1.0/24 via 1.1.1.1 dev PUBLIC

If I manually execute the following command to add the default route to
the main table as well, I can ping outside of the local network.

ip route add 0.0.0.0/0 via 1.1.1.1 dev PUBLIC

If I modify the /etc/sysconfig/network-scripts/route-PUBLIC file and
reboot the server, the file is recreated by vdsm on boot, as one would
expect.
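A possible stopgap (an assumption on my part, not the oVirt-managed way --
vdsm will keep regenerating route-PUBLIC) is to re-add the route from
/sbin/ifup-local, which the CentOS/RHEL initscripts call from ifup-post when
the file exists and is executable:

#!/bin/sh
# /sbin/ifup-local -- ifup-post passes the interface name as $1
if [ "$1" = "PUBLIC" ]; then
    # put the default route back into the main table once PUBLIC is up
    ip route replace default via 1.1.1.1 dev PUBLIC
fi

(Remember to chmod +x /sbin/ifup-local.)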

What I'm looking for is the correct way to set up a default gateway for
the main routing table so the hosts can get OS updates and communicate
with the outside world.


Re: [ovirt-users] Why was HE Install made OVA only?

2017-11-20 Thread Simone Tiraboschi
On Mon, Nov 20, 2017 at 4:39 PM, Alan Griffiths 
wrote:

> Hi,
>
> What was the reasoning behind making Hosted Engine install OVA only?
> The PXEBoot feature always worked really well for me, and now I have a
> number of extra steps to achieve the same end result.
>
>
Do you mean that you were customizing the image shipped via PXE?
Deploying from the OVA is pretty convenient: if you just want to forget
about it, you only have to install the ovirt-engine-appliance rpm alongside
the ovirt-hosted-engine-setup one.
I don't see any other additional steps.
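In other words (a minimal sketch):

# install the setup tool together with the prebuilt engine appliance
yum install ovirt-hosted-engine-setup ovirt-engine-appliance
# the deploy flow can then pick up the locally installed appliance OVA
hosted-engine --deploy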



> Thanks,
>
> Alan


Re: [ovirt-users] Openshift in VMs created from oVirt

2017-11-20 Thread Wayna Runa
Thanks Evgheni for your message.

Minishift/Minikube require preparing the VM (boot2docker, docker-machine,
drivers, etc.) before running OpenShift/Kubernetes inside. Anyway, I'm
going to explore these suggestions.

1) oVirt templates info:
https://www.ovirt.org/documentation/vmm-guide/chap-Templates

2) Is this (https://cloud-init.io) Cloud Init?

3) Does that mean executing the command (minishift start ...) in the VM
from oVirt right after the VM has been created? Do you have any info about
this?

I really appreciate your support.
Regards.

-- wr


On 20 November 2017 at 11:48, Evgheni Dereveanchin 
wrote:

> Hi Wayna,
>
> A VM with OpenShift is no different from any other VM. Users should be
> able to reach it via SSH if your network/firewall settings allow that. If
> you want to auto-provision minishift I see several options:
> 1) install minishift manually in a VM and save it as a template to
> provision new VMs from
> 2) use a cloud image from glance and invoke cloud-init to run the installer
> 3) deploy new VMs using your standard workflow, then use Ansible or other
> configuration management tools to set up minishift after install.
>
> Hope this helps.
>
> On Fri, Nov 17, 2017 at 2:26 PM, Wayna Runa  wrote:
>
>> Hi there!
>>
>> I have oVirt running and providing several VMs (CentOS and Red Hat); now I
>> want to provide developers with VMs running a minimal OpenShift cluster
>> inside.
>> Minishift creates a VM with OpenShift locally, which is fine, but now we
>> have oVirt to provide VMs which the developers can access remotely.
>>
>> How can I use oVirt to do that?
>> Thanks in advance.
>>
>> --
>> *Wayna Runa*
>>
>>


[ovirt-users] Why was HE Install made OVA only?

2017-11-20 Thread Alan Griffiths
Hi,

What was the reasoning behind making Hosted Engine install OVA only?
The PXEBoot feature always worked really well for me, and now I have a
number of extra steps to achieve the same end result.

Thanks,

Alan


[ovirt-users] slow performance with export storage on glusterfs

2017-11-20 Thread Jiří Sléžka
Hi,

I am trying to work out why exporting a VM to an export storage domain on
GlusterFS is so slow.

I am using oVirt and RHV, both installations on version 4.1.7.

Hosts have dedicated NICs for the rhevm network (1 Gbps); the data storage
itself is on FC.

The GlusterFS cluster lives separately on 4 dedicated hosts. It has slow
disks, but I can achieve about 200-400 Mbit/s throughput in other
applications (we use it for "cold" data, mostly backups).

I am using this GlusterFS cluster as the backend for an export storage
domain. When I export a VM I see only about 60-80 Mbit/s throughput.

What could be the bottleneck here?

Could it be qemu-img utility?

vdsm  97739  0.3  0.0 354212 29148 ?S
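To separate the two, it may help to compare a raw write to the export mount
with a manual qemu-img copy of one disk image (the paths below are
assumptions; adjust them to your export domain mount and a real source
image):

# raw sequential write to the gluster-backed export mount, bypassing the page cache
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/SERVER:_export/ddtest.img bs=1M count=1000 oflag=direct
# copy one image roughly the way the export flow does, with progress reporting
qemu-img convert -p -f raw -O raw /path/to/source-disk.img /rhev/data-center/mnt/glusterSD/SERVER:_export/converted.img

If dd is fast but qemu-img is not, the bottleneck is more likely in how
qemu-img issues its writes than in the network itself.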



Re: [ovirt-users] oVirt 4.1.7: Duplicate MAC addresses had to be introduced into mac pool violating no duplicates setting.

2017-11-20 Thread nicolas

Hi Alona,

On 2017-11-19 08:31, Alona Kaplan wrote:

Hi Nicolas,

There was a bug in oVirt [1] where, in some cases, a MAC pool that
doesn't support duplicate MACs could get duplicates.
The bug was (hopefully) fixed.

The error that you see means that you have a MAC pool that was
affected by this bug and has duplicate MACs although it doesn't
support them.
The second part of the error should contain the list of the duplicate
MACs ("Following MACs violates duplicity restriction, and was pushed
into MAC pool without respect to it:").



Indeed, being this message:

engine.log-20171117.gz:2017-11-16 10:34:55,373Z ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ServerService Thread Pool -- 52) [] EVENT_ID: 
MAC_ADDRESS_VIOLATES_NO_DUPLICATES_SETTING(10,917), Following MACs 
violates duplicity restriction, and was pushed into MAC pool without 
respect to it:[00:1a:4a:97:5f:16, 00:1a:4a:97:5f:05, 00:1a:4a:97:5f:1d, 
00:1a:4a:97:5f:07, 00:1a:4a:97:5f:0f, 00:1a:4a:4d:cc:1c, 
00:1a:4a:97:5f:15, 00:1a:4a:97:5f:0d]



It is just a warning; you can ignore it.
But we advise you to go over the duplicate MACs and fix them.


However, recently I made a small script to find duplicate MACs because I 
was having an issue with dupe MACs (which BTW you helped me to debug in 
BZ [1]). I'm attaching the script just in case something is wrong in it, 
but the script shows no dupe MAC addresses currently. Maybe in the past
(before the upgrade) there were dupes with those MAC addresses, but not
now.


The output of the script shows only one machine per MAC address:

00:1a:4a:97:5f:16 ['VM1']
00:1a:4a:97:5f:05 ['VM2']
00:1a:4a:97:5f:1d ['VM3']
00:1a:4a:97:5f:07 ['VM4']
00:1a:4a:97:5f:0f ['VM5']
00:1a:4a:4d:cc:1c ['VM6']
00:1a:4a:97:5f:15 ['VM7']
00:1a:4a:97:5f:0d ['VM8']

Should I file a bug or simply ignore it?

Thanks.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1497242



Alona.

[1] https://bugzilla.redhat.com/1485688

On Fri, Nov 17, 2017 at 4:54 PM,  wrote:


Hi,

We just upgraded from 4.1.6 to 4.1.7 and just after the engine was
brought back the following event showed up:

   Duplicate MAC addresses had to be introduced into mac pool
violating no duplicates setting.

Can someone explain why is that?

Thanks.

Nicolás




#!/usr/bin/env python

from sys import exit

from ovirtsdk4 import Connection
# scriptsparams is a small local module holding the engine connection settings
from scriptsparams import URI, USERNAME, PASSWORD, CERTPATH

conn = Connection(
    url=URI,
    username=USERNAME,
    password=PASSWORD,
    ca_file=CERTPATH
)

if not conn.test(raise_exception=False):
    print "ERROR: Cannot connect. Is engine up? Are credentials ok?"
    exit(1)

# map each MAC address to the list of VM names using it
vms_macs = {}

sys_serv = conn.system_service()
vms_serv = sys_serv.vms_service()

for vm in vms_serv.list():
    nics = conn.follow_link(vm.nics)
    for nic in nics:
        if nic.mac.address in vms_macs:
            vms_macs[nic.mac.address].append(vm.name)
        else:
            vms_macs[nic.mac.address] = [vm.name]

# the MACs reported by the engine warning
print '00:1a:4a:97:5f:16', vms_macs['00:1a:4a:97:5f:16']
print '00:1a:4a:97:5f:05', vms_macs['00:1a:4a:97:5f:05']
print '00:1a:4a:97:5f:1d', vms_macs['00:1a:4a:97:5f:1d']
print '00:1a:4a:97:5f:07', vms_macs['00:1a:4a:97:5f:07']
print '00:1a:4a:97:5f:0f', vms_macs['00:1a:4a:97:5f:0f']
print '00:1a:4a:4d:cc:1c', vms_macs['00:1a:4a:4d:cc:1c']
print '00:1a:4a:97:5f:15', vms_macs['00:1a:4a:97:5f:15']
print '00:1a:4a:97:5f:0d', vms_macs['00:1a:4a:97:5f:0d']


Re: [ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-20 Thread Arman Khalatyan
On Mon, Nov 20, 2017 at 12:23 PM, Yaniv Kaul  wrote:
>
>
> Define QoS on the NIC.
> But I think you wish to limit IO, no?
> Y.
>
For the moment QoS is unlimited.
Actually, for some tasks I wish to allocate 80% of the 10 Gbit interface,
but the VM interface is always 1 Gbit.
Inside the QoS of the host interface I cannot put 8000 for the Rate Limit;
it claims that the rate limit should be a number between 1 and 1024. It
looks like it assumes only 1 Gbit interfaces?
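As a manual test outside of oVirt (just a sketch -- the tap device name
vnet0 is an assumption, check "ip link" on the host for the VM's actual
vNIC), tc can cap the interface well above 1 Gbit:

# cap egress on the VM's tap device at 8 Gbit/s with a token bucket filter
tc qdisc add dev vnet0 root tbf rate 8gbit burst 10mb latency 50ms
# inspect and remove it again
tc -s qdisc show dev vnet0
tc qdisc del dev vnet0 root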
a.


Re: [ovirt-users] Openshift in VMs created from oVirt

2017-11-20 Thread Evgheni Dereveanchin
Hi Wayna,

A VM with OpenShift is no different from any other VM. Users should be able
to reach it via SSH if your network/firewall settings allow that. If you
want to auto-provision minishift I see several options:
1) install minishift manually in a VM and save it as a template to
provision new VMs from
2) use a cloud image from glance and invoke cloud-init to run the installer
3) deploy new VMs using your standard workflow, then use Ansible or other
configuration management tools to set up minishift after install.

Hope this helps.
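For option 1, the preparation inside the template VM is roughly (a sketch;
the release URL and version below are placeholders -- take the real ones
from the minishift releases page):

# fetch a minishift release and put the binary on the PATH
curl -L -o minishift.tgz https://github.com/minishift/minishift/releases/download/vX.Y.Z/minishift-X.Y.Z-linux-amd64.tgz
tar xzf minishift.tgz
install -m 0755 minishift-*/minishift /usr/local/bin/minishift
# verify, then seal the VM and create the oVirt template from it
minishift version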

On Fri, Nov 17, 2017 at 2:26 PM, Wayna Runa  wrote:

> Hi there!
>
> I have oVirt running and providing several VMs (CentOS and Red Hat); now I
> want to provide developers with VMs running a minimal OpenShift cluster
> inside.
> Minishift creates a VM with OpenShift locally, which is fine, but now we
> have oVirt to provide VMs which the developers can access remotely.
>
> How can I use oVirt to do that?
> Thanks in advance.
>
> --
> *Wayna Runa*
>
>


-- 
Regards,
Evgheni Dereveanchin


Re: [ovirt-users] Some tests results: lustrefs over nfs on VM

2017-11-20 Thread Yaniv Kaul
On Nov 19, 2017 11:05 PM, "Arman Khalatyan"  wrote:

Hi Yaniv,
yes, for sure I hit some cache in between, but not the VM cache; it has 4GB
of RAM. With oflag=direct I get about 120MB/s.
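For reference, the two variants side by side (the test path is the same one
used in the original mail; adjust as needed):

# goes through the page cache, so it largely measures RAM until memory pressure kicks in
dd if=/dev/zero of=/test/tmp/cached.tmp bs=128K count=100000
# bypasses the cache and measures the real NFS/Lustre write path
dd if=/dev/zero of=/test/tmp/direct.tmp bs=128K count=100000 oflag=direct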

for the data analysis the cache is our friend:)

The backend is Lustre 2.10.x.
Yes, we have a dedicated 10G interface on the hosts. Where can we limit the
VM interface to 10 Gbit?


Define QoS on the NIC.
But I think you wish to limit IO, no?
Y.


On 19.11.2017 at 8:33 PM, "Yaniv Kaul" wrote:



On Sun, Nov 19, 2017 at 7:08 PM, Arman Khalatyan  wrote:

> Hi, in our environment we got pretty good I/O performance on a VM, with the
> following configuration:
> lustrebox: /lust mounted on "GATEWAY" over IB
> GATEWAY: export /lust as nfs4 on 10G interface
> VM(test.vm): import as NFS over 10G interface
>
> [r...@test.vm  ~]# dd if=/dev/zero bs=128K count=100000
>

Without oflag=direct, you are hitting (somewhat) the cache.


>
> of=/test/tmp/test.tmp
> 100000+0 records in
> 100000+0 records out
> 13107200000 bytes (13 GB) copied, 20.8402 s, 629 MB/s
> looks promising for the future deployments.
>

Very - what's the backend storage?


>
> Only one problem remains: on heavy I/O I get some warnings that the VM
> network is saturated. Is there a way to configure the bandwidth limit to
> 10G for the VM interface?
>

Yes, but you really need a dedicated storage interface, no?
Y.


>
>
> thank you beforehand,
> Arman.
>


Re: [ovirt-users] VDSM command GetVmsInfoVDS failed: Missing OVF file from VM

2017-11-20 Thread Benny Zlotnik
Yes, you can remove it
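If you want a safety net, rmdir only succeeds on an empty directory (the
path is shortened here; use the full one from your ls output):

# fails if anything is still inside, so nothing real can be lost
rmdir .../master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6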

On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:

> I found an empty directory in the Export domain storage:
>
> # ls -la /rhev/data-center/mnt/fs01.my.dom-holding.com:_mnt_quadstor-
> vv1_ovirt-vm-backup/3a514c90-e574-4282-b1ee-779602e35f24/
> master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6
>
> total 16
> drwxr-xr-x.   2 vdsm kvm  4096 Nov  9 02:32 .
> drwxr-xr-x. 106 vdsm kvm 12288 Nov  9 02:32 ..
>
> Can I just remove this directory?
>
> 19.11.2017, 18:51, "Benny Zlotnik" :
>
> + ovirt-users
>
> On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik 
> wrote:
>
> Hi,
>
> There are a couple of issues here; can you please open a bug at
> https://bugzilla.redhat.com/ so we can track this properly, and attach all
> relevant logs?
>
> I went over the logs; are you sure the export domain was formatted
> properly? I couldn't find it in the engine.log.
> Looking at the logs it seems VMs were found on the export domain
> (id=3a514c90-e574-4282-b1ee-779602e35f24)
>
> 2017-11-19 13:18:13,007+0300 INFO  (jsonrpc/2) [storage.StorageDomain]
> vmList=[u'01a4f53e-699e-4ea5-aef4-458638f23ce9',
> u'03c9e965-710d-4fc8-be06-583abbd1d7a9', 
> u'07dab4f6-d677-4faa-9875-97bd6d601f49',
> u'0b94a559-b31a-475d-9599-36e0dbea579a', 
> u'13b42f3a-3057-4eb1-ad4b-f4e52f6ff196',
> u'151a4e75-d67a-4603-8f52-abfb46cb74c1', 
> u'177479f5-2ed8-4b6c-9120-ec067d1a1247',
> u'18945b31-3ba5-4e54-9bf0-8fdc3a7d7411', 
> u'1e72be16-f540-4cfd-b0e9-52b66220a98b',
> u'1ec85134-a7b5-46c2-9c6c-eaba340c5ffd', 
> u'20b88cfc-bfae-4983-8d83-ba4e0c7feeb7',
> u'25fa96d1-6083-4daa-9755-026e632553d9', 
> u'273ffd05-6f93-4e4a-aac9-149360b5f0b4',
> u'28188426-ae8b-4999-8e31-4c04fbba4dac', 
> u'28e9d5f2-4312-4d0b-9af9-ec1287bae643',
> u'2b7093dc-5d16-4204-b211-5b3a1d729872', u'32ecfcbb-2678-4f43-8d59-418e
> 03920693', u'3376ef0b-2af5-4a8b-9987-18f28f6bb334',
> u'34d1150f-7899-44d9-b8cf-1c917822f624', 
> u'383bbfc6-6841-4476-b108-a1878ed9ce43',
> u'388e372f-b0e8-408f-b21b-0a5c4a84c457', 
> u'39396196-42eb-4a27-9a57-a3e0dad8a361',
> u'3fc02ca2-7a03-4d5e-bc21-688f138a914f', 
> u'4101ac1e-0582-4ebe-b4fb-c4aed39fadcf',
> u'44e10588-8047-4734-81b3-6a98c229b637', 
> u'4794ca9c-5abd-4111-b19c-bdfbf7c39c86',
> u'47a83986-d3b8-4905-b017-090276e967f5', 
> u'49d83471-a312-412e-b791-8ee0badccbb5',
> u'4b1b9360-a48a-425b-9a2e-19197b167c99', 
> u'4d783e2a-2d81-435a-98c4-f7ed862e166b',
> u'51976b6e-d93f-477e-a22b-0fa84400ff84', 
> u'56b77077-707c-4949-9ea9-3aca3ea912ec',
> u'56dc5c41-6caf-435f-8146-6503ea3eaab9', 
> u'5729e036-5f6e-473b-9d1d-f1c4c5c55b2d',
> u'5873f804-b992-4559-aff5-797f97bfebf7', 
> u'58b7a4ea-d572-4ab4-a4f1-55dddc5dc8e8',
> u'590d1adb-52e4-4d29-af44-c9aa5d328186', 
> u'5c79f970-6e7b-4996-a2ce-1781c28bff79',
> u'5feab1f2-9a3d-4870-a0f3-fd97ea3c85c3', 
> u'63749307-4486-4702-ade9-4324f5bfe80c',
> u'6555ac11-7b20-4074-9d71-f86bc10c01f9', 
> u'66b4b8a0-b53b-40ea-87ab-75f6d9eef728',
> u'672c4e12-628f-4dcd-a57e-b4ff822a19f3', 
> u'679c0445-512c-4988-8903-64c0c08b5fab',
> u'6ae337d0-e6a0-489f-82e6-57a85f63176a', u'6d713cb9-993d-4822-a030-ac75
> 91794050', u'72a50ef0-945d-428a-a336-6447c4a70b99',
> u'751dfefc-9e18-4f26-bed6-db412cdb258c', 
> u'7587db59-e840-41bc-96f3-b212b7b837a4',
> u'778c969e-1d22-46e3-bdbe-e20e0c5bb967', 
> u'7810dec1-ee1c-4291-93f4-18e9a15fa8e2',
> u'7a6cfe35-e493-4c04-8fc6-e0bc72efc72d', 
> u'7a7d814e-4586-40d5-9750-8896b00a6490',
> u'7af76921-4cf2-4c3c-9055-59c24d9e8b08', 
> u'7d781e21-6613-41f4-bcea-8b57417e1211',
> u'7da51499-d7db-49fd-88f6-bcac30e5dd86', 
> u'850a8041-77a4-4ae3-98f9-8d5f3a5778e6',
> u'85169fe8-8198-492f-b988-b8e24822fd01', 
> u'87839926-8b84-482b-adec-5d99573edd9e',
> u'8a7eb414-71fa-4f91-a906-d70f95ccf995', 
> u'8a9a1071-b005-4448-ba3f-c72bd7e0e34b',
> u'8b73e593-8513-4a8e-b051-ce91765b22bd', 
> u'8cbd5615-4206-4e4a-992d-8705b2f2aac2',
> u'92e9d966-c552-4cf9-b84a-21dda96f3f81', 
> u'95209226-a9a5-4ada-8eed-a672d58ba72c',
> u'986ce2a5-9912-4069-bfa9-e28f7a17385d', 
> u'9f6c8d1d-da81-4020-92e5-1c14cf082d2c',
> u'9ff87197-d089-4b2d-8822-b0d6f6e67292', 
> u'a0a0c756-fbe9-4f8e-b6e9-1f2d58f1d957',
> u'a46d5615-8d9f-4944-9334-2fca2b53c27e', 
> u'a6a50244-366b-4b7c-b80f-04d7ce2d8912',
> u'aa6a4de6-cc9e-4d79-a795-98326bbd83db', 
> u'accc0bc3-c501-4f0b-aeeb-6858f7e894fd',
> u'b09e5783-6765-4514-a5a3-86e5e73b729b', 
> u'b1ecfe29-7563-44a9-b814-0faefac5465b',
> u'baa542e1-492a-4b1b-9f54-e9566a4fe315', 
> u'bb91f9f5-98df-45b1-b8ca-9f67a92eef03',
> u'bd11f11e-be3d-4456-917c-f93ba9a19abe', 
> u'bee3587e-50f4-44bc-a199-35b38a19ffc5',
> u'bf573d58-1f49-48a9-968d-039e0916c973', 
> u'c01d466a-8ad8-4afe-b383-e365deebc6b8',
> u'c0be5c12-be26-47b7-ad26-3ec2469f1d3f', 
> u'c31f4f53-c22b-40ff-8408-f36f591f55b5',
> u'c530e339-99bf-48a2-a63a-cfd2a4dba198', 
> u'c8a610c8-72e5-4217-b4d9-130f85db1db7',
> u'ca0567e1-d445-4875-94b1-85e31f331b87', 
> u'd2c3dab7-eb8c-410e-87c1-3d0139d9903c',
> u'd330c4b6-bfa0-4061-aa1a-0f016598f2d0', 
>