[ovirt-users] Re: ISO Upload

2020-01-17 Thread Nir Soffer
On Fri, Jan 17, 2020 at 6:41 AM Strahil Nikolov 
wrote:

> On January 17, 2020 12:10:56 AM GMT+02:00, Chris Adams 
> wrote:
> >Once upon a time, Nir Soffer  said:
> >> On Tue, Jan 7, 2020 at 4:02 PM Chris Adams  wrote:
> >> > Once upon a time, m.skrzetu...@gmail.com 
> >said:
> >> > > I'd give up on the ISO domain. I started like you and then read
> >the docs
> >> > which said that ISO domain is deprecated.
> >> > > I'd upload all files to a data domain.
> >> >
> >> > Note that that only works if your data domain is NFS... iSCSI data
> >> > domains will let you upload ISOs, but connecting them to a VM
> >fails.
> >>
> >> ISO on iSCSI/FC domains works fine for starting a VM from ISO, which
> >is the
> >> main use case.
> >
> >Okay - it didn't the last time I tried it (I just got errors).  Thanks.
>
> I have opened an RFE for ISO checksumming, as currently the uploader can
> silently corrupt your DVD.
>

Can you share the bug number?


> With gluster, I have an option to check the ISO checksum and
> verify/replace the file, but with block-based storage that will be quite
> difficult.
>

Checksumming is a general feature not related to ISO uploads. But
checksumming tools do not understand sparseness, so you should really use a
tool designed for comparing disk images, like "qemu-img compare".

Here is an example:

1. Create a Fedora 30 image for testing:

$ virt-builder fedora-30 -o fedora-30.raw
...
$ qemu-img info fedora-30.raw
image: fedora-30.raw
file format: raw
virtual size: 6 GiB (6442450944 bytes)
disk size: 1.15 GiB

2. Create a checksum of the image:

$ time shasum fedora-30.raw
991c2efee723e04b7d41d75f70d19bade02b400d  fedora-30.raw

real 0m14.641s
user 0m12.653s
sys 0m1.749s

3. Create a compressed qcow2 image with the same content:

$ qemu-img convert -f raw -O qcow2 -c fedora-30.raw fedora-30.qcow2
...
$ qemu-img info fedora-30.qcow2
image: fedora-30.qcow2
file format: qcow2
virtual size: 6 GiB (6442450944 bytes)
disk size: 490 MiB
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

This is a typical file format used for publishing disk images. The contents
of this image are the same as the raw version from the guest's point of
view.

4. Compare the image content:

$ time qemu-img compare fedora-30.raw fedora-30.qcow2
Images are identical.

real 0m4.680s
user 0m4.273s
sys 0m0.553s

Comparing two images in different formats is about 3 times faster than
creating a checksum of a single image.
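
If you want to automate this check, here is a minimal sketch in Python
(the images_identical() helper is a name I made up, just for illustration;
it assumes qemu-img is on PATH) that shells out to "qemu-img compare" and
interprets its exit status:

import subprocess

def images_identical(path_a, path_b):
    # "qemu-img compare" exits with 0 when the guest-visible content is
    # identical and 1 when the images differ; any other status indicates
    # an error (unreadable image, unknown format, ...).
    result = subprocess.run(
        ["qemu-img", "compare", path_a, path_b],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    if result.returncode == 0:
        return True
    if result.returncode == 1:
        return False
    raise RuntimeError("qemu-img compare failed: " + result.stderr.strip())

print(images_identical("fedora-30.raw", "fedora-30.qcow2"))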

Now let's see how we can use this to verify uploads.

5. Upload the compressed qcow2 image to a new raw disk (requires oVirt 4.4
alpha3):

$ python3 upload_disk.py --engine-url https://engine/ --username
admin@internal --password-file password \
--cafile ca.pem --sd-name nfs1-export2 --disk-format raw --disk-sparse
fedora-30.qcow2

6. Download the image to raw format:

$ python3 download_disk.py --engine-url https://engine/ --username
admin@internal --password-file password \
--cafile ca.pem --format raw f40023a5-ddc4-4fcf-b8e2-af742f372104
fedora-30.download.raw

7. Compare the original and downloaded images:

$ qemu-img compare fedora-30.qcow2 fedora-30.download.raw
Images are identical.

Back to the topic of ISO uploads to block storage. Block volumes in oVirt
are always aligned to 128 MiB, so when you upload an image which is not
aligned to 128 MiB, oVirt creates a bigger block device. The contents of the
device after the image content are not defined, unless you zero this area
during upload. The current upload_disk.py example does not zero the end of
the device, since the guest does not care about it, but this makes verifying
uploads harder.

The best way to handle this issue is to extend (truncate up) the ISO image
to the next multiple of 128 MiB before uploading it:

$ ls -l Fedora-Server-dvd-x86_64-30-1.2.iso
-rw-rw-r--. 1 nsoffer nsoffer 3177185280 Nov  8 23:09
Fedora-Server-dvd-x86_64-30-1.2.iso

$ python3 -c 'n = 3177185280 + 128 * 1024**2 - 1; print(n - (n % (128 *
1024**2)))'
3221225472

$ truncate -s 3221225472 Fedora-Server-dvd-x86_64-30-1.2.iso
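
The same rounding can be done with a small helper. This is only a sketch
(pad_iso() is a name I made up, not part of the SDK examples); it rounds the
file size up to the next multiple of 128 MiB and extends the file, which is
exactly what the truncate command above does:

import os

ALIGNMENT = 128 * 1024**2  # oVirt block volumes are aligned to 128 MiB

def pad_iso(path):
    # Round the current size up to the next multiple of 128 MiB and
    # extend the file; the appended range reads back as zeros.
    size = os.path.getsize(path)
    padded = (size + ALIGNMENT - 1) // ALIGNMENT * ALIGNMENT
    if padded != size:
        os.truncate(path, padded)
    return padded

print(pad_iso("Fedora-Server-dvd-x86_64-30-1.2.iso"))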

The contents of the ISO image are now the same as they will be on the block
device after the upload, and uploading this padded image will zero the end
of the device.

If we upload this image, we can check the upload using qemu-img compare:

$ python3 upload_disk.py --engine-url https://engine/ --username
admin@internal --password-file password \
--cafile ca.pem --sd-name iscsi-1 --disk-format raw
Fedora-Server-dvd-x86_64-30-1.2.iso

$ python3 download_disk.py --engine-url https://engine/ --username
admin@internal --password-file password \
--cafile ca.pem --format raw 5f0b5347-bbbc-4521-9ca0-8fc17670bab0
iso.raw

$ qemu-img compare iso.raw Fedora-Server-dvd-x86_64-30-1.2.iso
Images are identical.

This is not easy to use and requires knowledge about oVirt internals
(128 MiB alignment), so I guess we need to make this simpler.
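
Until then, one way to make this less error prone is to wrap the existing
steps in a small helper. This is only a sketch, not a finished tool: it
assumes the upload_disk.py and download_disk.py SDK examples are in the
current directory, reuses the connection options shown above, and takes the
disk UUID (printed by the upload, or visible in the UI) as an argument:

import subprocess

# Connection options for the SDK example scripts; adjust to your setup.
CONNECTION = [
    "--engine-url", "https://engine/",
    "--username", "admin@internal",
    "--password-file", "password",
    "--cafile", "ca.pem",
]

def upload_iso(iso, sd_name):
    # Upload the (already padded) ISO as a raw disk.
    subprocess.run(
        ["python3", "upload_disk.py"] + CONNECTION
        + ["--sd-name", sd_name, "--disk-format", "raw", iso],
        check=True)

def verify_upload(iso, disk_id):
    # Download the disk back to a raw file and compare it with the padded
    # ISO; qemu-img compare exits non-zero if the contents differ.
    subprocess.run(
        ["python3", "download_disk.py"] + CONNECTION
        + ["--format", "raw", disk_id, "downloaded.raw"],
        check=True)
    subprocess.run(
        ["qemu-img", "compare", "downloaded.raw", iso],
        check=True)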

Nir

[ovirt-users] Re: [moVirt] Expanding oVirt

2020-01-17 Thread Michal Skrivanek
[fwding to the right list]

> On 15 Jan 2020, at 22:57, mwe...@maetrix.tech wrote:
> 
> I am planning a basic 3-host hyperconverged oVirt cluster and am very happy 
> with tests I have conducted regarding the deployment. 
> 
> I have a question regarding expanding the cluster and can't seem to find a 
> direct answer. My hosts have a limited number of HDD slots, and I am curious 
> about expanding the gluster volumes. Is this a simple process of adding 
> another host or hosts as I go and adding the gluster bricks to the volume and 
> rebalancing? I also recall seeing a hard limit of nine hosts in a cluster. Is 
> this correct?
> Thank you,


[ovirt-users] Re: Ovirt OVN help needed

2020-01-17 Thread Miguel Duarte de Mora Barroso
On Fri, Jan 10, 2020 at 4:45 PM Strahil  wrote:
>
> Hi Miguel,
>
> It seems the Cluster's switch is of type 'Linux Bridge'.

I apologize, but for some reason, this thread does not have the whole
conversation; I lost track of what we're trying to solve here.

What exactly is your problem?

>
> Best Regards,
> Strahil Nikolov
> On Jan 10, 2020 12:37, Miguel Duarte de Mora Barroso 
>  wrote:
> >
> > On Mon, Jan 6, 2020 at 9:21 PM Strahil Nikolov  
> > wrote:
> > >
> > > Hi Miguel,
> > >
> > > I had read some blogs about OVN and I tried to collect some data that 
> > > might hint where the issue is.
> > >
> > > I still struggle to "decode" that , but it may be easier for you or 
> > > anyone on the list.
> > >
> > > I am eager to receive your reply.
> > > Thanks in advance and Happy New Year !
> >
> > Hi,
> >
> > Sorry for not noticing your email before. Hope late is better than never ..
> >
> > >
> > >
> > > Best Regards,
> > > Strahil Nikolov
> > >
> > > В сряда, 18 декември 2019 г., 21:10:31 ч. Гринуич+2, Strahil Nikolov 
> > >  написа:
> > >
> > >
> > > That's a good question.
> > > ovirtmgmt is using linux bridge, but I'm not so sure about the br-int.
> > > 'brctl show' does not recognize br-int, so I guess 
> > > openvswitch.
> > >
> > > This is still a guess, so you can give me the command to verify that :)
> >
> > You can use the GUI for that; access "Compute > clusters" , choose the
> > cluster in question, hit 'edit', then look for the 'Switch type'
> > entry.
> >
> >
> > >
> > > As the system was first built on 4.2.7, most probably it never used 
> > > anything except openvswitch.
> > >
> > > Thanks in advance for your help. I really appreciate that.
> > >
> > > Best Regards,
> > > Strahil Nikolov
> > >
> > >
> > > В сряда, 18 декември 2019 г., 17:53:31 ч. Гринуич+2, Miguel Duarte de 
> > > Mora Barroso  написа:
> > >
> > >
> > > On Wed, Dec 18, 2019 at 6:35 AM Strahil Nikolov  
> > > wrote:
> > > >
> > > > Hi Dominik,
> > > >
> > > > sadly reinstall of all hosts is not helping.
> > > >
> > > > @ Miguel,
> > > >
> > > > I have 2 clusters
> > > > 1. Default (amd-based one) -> ovirt1 (192.168.1.90) & ovirt2 
> > > > (192.168.1.64)
> > > > 2. Intel (intel-based one and a gluster arbiter) -> ovirt3 
> > > > (192.168.1.41)
> > >
> > > But what are the switch types used on the clusters: openvswitch *or*
> > > legacy / linux bridges ?
> > >
> > >
> > >
> > > >
> > > > The output of the 2 commands (after I ran reinstall on all hosts):
> > > >
> > > > [root@engine ~]# ovn-sbctl list encap
> > > > _uuid  : d4d98c65-11da-4dc8-9da3-780e7738176f
> > > > chassis_name: "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> > > > ip  : "192.168.1.90"
> > > > options: {csum="true"}
> > > > type: geneve
> > > >
> > > > _uuid  : ed8744a5-a302-493b-8c3b-19a4d2e170de
> > > > chassis_name: "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> > > > ip  : "192.168.1.64"
> > > > options: {csum="true"}
> > > > type: geneve
> > > >
> > > > _uuid  : b72ff0ab-92fc-450c-a6eb-ab2869dee217
> > > > chassis_name: "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> > > > ip  : "192.168.1.41"
> > > > options: {csum="true"}
> > > > type: geneve
> > > >
> > > >
> > > > [root@engine ~]# ovn-sbctl list chassis
> > > > _uuid  : b1da5110-f477-4c60-9963-b464ab96c644
> > > > encaps  : [ed8744a5-a302-493b-8c3b-19a4d2e170de]
> > > > external_ids: {datapath-type="", 
> > > > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
> > > >  ovn-bridge-mappings=""}
> > > > hostname: "ovirt2.localdomain"
> > > > name: "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> > > > nb_cfg  : 0
> > > > transport_zones: []
> > > > vtep_logical_switches: []
> > > >
> > > > _uuid  : dcc94e1c-bf44-46a3-b9d1-45360c307b26
> > > > encaps  : [b72ff0ab-92fc-450c-a6eb-ab2869dee217]
> > > > external_ids: {datapath-type="", 
> > > > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
> > > >  ovn-bridge-mappings=""}
> > > > hostname: "ovirt3.localdomain"
> > > > name: "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> > > > nb_cfg  : 0
> > > > transport_zones: []
> > > > vtep_logical_switches: []
> > > >
> > > > _uuid  : 897b34c5-d1d1-41a7-b2fd-5f1fa203c1da
> > > > encaps  : [d4d98c65-11da-4dc8-9da3-780e7738176f]
> > > > external_ids: {datapath-type="", 
> > > > iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
> > > >  ovn-bridge-mappings=""}
> > > > hostname: "ovirt1.localdomain"
> > > > name: "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> > > > nb_cfg  : 0
> > > > transport_zones: []
> >