[ovirt-devel] Re: [ovirt-users] Re: Re: oVirt HCI point-to-point interconnection

2018-06-29 Thread Yaniv Kaul
On Fri, Jun 29, 2018 at 12:24 PM, Stefano Zappa wrote:

> Hi Alex,
> I have evaluated this approach using the latest version of oVirt Node.
>
> In principle I would like to follow the guidelines of the oVirt development
> team and adopt its best practices, trying to limit excessive customizations.
>
> So I installed oVirt Node on 3 hosts and manually configured the two
> additional interfaces for the point-to-point interconnections, on which I
> then configured Gluster.
> This was done very easily, and it works!
>

Sounds like an interesting item for the ovirt.org blog!
Y.


>
> I know that SDN is supported by oVirt, even if by default it bridges to the
> outside.
> I have no experience with SDN, but I realized that VxLAN or maybe Geneve
> was the way to go, and now I also have your confirmation.
> The idea is to create Geneve L3 point-to-point tunnels between the nodes,
> configure an L2 overlay network on top of them, shared by the oVirt pool
> hosts, and then move all internal oVirt communications onto that overlay.
>
> On first analysis this change looks structural and not trivial. I think it
> would be better to evaluate it carefully and, if it proves interesting, to
> write up a best practice or, better, integrate the functionality into the
> solution.
> In the end the system must be stable and, above all, upgradable with
> traditional procedures.
>
> Stefano.
>
>
>
>
> Stefano Zappa
> IT Specialist CAE - TT-3
> Technical Department
>
> Industrie Saleri Italo S.p.A.
> Phone:+39 0308250480
> Fax:+39 0308250466
>
> This message contains confidential information and is intended only for
> rightkickt...@gmail.com, yk...@redhat.com, devel@ovirt.org,
> us...@ovirt.org, sab...@redhat.com, sbona...@redhat.com,
> stira...@redhat.com. If you are not rightkickt...@gmail.com,
> yk...@redhat.com, devel@ovirt.org, us...@ovirt.org, sab...@redhat.com,
> sbona...@redhat.com, stira...@redhat.com you should not disseminate,
> distribute or copy this e-mail. Please notify stefano.za...@saleri.it
> immediately by e-mail if you have received this e-mail by mistake and
> delete this e-mail from your system. E-mail transmission cannot be
> guaranteed to be secure or error-free as information could be intercepted,
> corrupted, lost, destroyed, arrive late or incomplete, or contain viruses.
> Stefano Zappa therefore does not accept liability for any errors or
> omissions in the contents of this message, which arise as a result of
> e-mail transmission. If verification is required please request a hard-copy
> version.
> 
> From: Alex K
> Sent: Thursday, June 28, 2018 21:34
> To: Stefano Zappa
> Cc: Yaniv Kaul; devel; us...@ovirt.org
> Subject: Re: [ovirt-users] Re: [ovirt-devel] Re: oVirt HCI point-to-point
> interconnection
>
> Hi,
>
> Network virtualization is already here and widely used. I am still not
> sure what is the gain of this approach. You can do the same with VxLANs or
> Geneve. And how do you scale this? Is it only for 3 node clusters?
>
> Alex
>
> On Thu, Jun 28, 2018, 11:47 Stefano Zappa <stefano.za...@saleri.it> wrote:
> Hi Yaniv,
> no, the final purpose is not to save a couple of ports.
>
> The benefit is the trend to convergence and consolidation of the
> networking, as already done for the storage.
> The external interconnection is a critical element of the HCI solution, a
> configuration without switches would give more soundness and more
> independence.
>
> If we think about HCI as the transition of IT infrastructure from
> hardware-defined to software-defined, the first step was virtualization of
> the servers, the second step was storage virtualization, the third step
> will be virtualization of networking.
>
> Of course this also has an economic benefit, reducing the total cost of
> ownership and traditional data-center inefficiencies.
>
> For the external uplink we assume something very traditional and simple: we
> can use the interfaces integrated into the motherboard.
> This link is not critical for the integrity of our HCI solution; it does not
> require broad bandwidth or low latency, since it is used exclusively to
> interconnect the solution with the outside world through a traditional
> Ethernet switch.
>
> We intend to give more attention to the point-to-point interconnection of
> the three nodes, using 100 or 200 GbE cards for direct interconnection
> without switches.
> The purchase of these 3 cards will not be cheap, sure, but it will cost
> twenty times less than introducing expensive switches that would certainly
> have more ports than necessary.
>
> Thanks for your attention,
> Stefano.
>
>
>
>

[ovirt-devel] Re: [VDSM] test_echo(1024, False) (stomp_test.StompTests) fails

2018-06-29 Thread Piotr Kliczewski
Here is the patch [1] which should fix it.

[1] https://gerrit.ovirt.org/#/c/92690/
On Thu, Jun 28, 2018 at 2:44 PM Piotr Kliczewski  wrote:
>
> Nir,
>
> It looks like we have a race condition where the request arrives sooner
> than the subscribe frame:
>
> 16:19:02 2018-06-27 14:17:54,653 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] 
> RPC call echo succeeded in 0.01 seconds (__init__:311)
> 16:19:02 2018-06-27 14:17:54,661 INFO  (Detector thread) 
> [Broker.StompAdapter] Subscribe command received (stompserver:123)
>
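The shape of that race, and of the fix, can be sketched as follows. This is an illustrative toy, not the actual vdsm code: the class and method names (`StompClient`, `on_subscribe_ack`) are invented for the example. The client simply refuses to issue an RPC call until the reader thread has confirmed the SUBSCRIBE frame, so a response can never arrive before there is a subscription to route it to.

```python
import threading

class StompClient:
    """Toy client: RPC calls wait until the subscription is acknowledged."""

    def __init__(self):
        self._subscribed = threading.Event()

    def on_subscribe_ack(self):
        # Invoked from the reader thread when the server confirms SUBSCRIBE.
        self._subscribed.set()

    def call(self, method, timeout=5.0):
        # Sending a request before the subscription exists means the reply
        # has nowhere to be routed -- the race visible in the log above.
        if not self._subscribed.wait(timeout):
            raise RuntimeError("no response for request: %s" % method)
        return "result of %s" % method

client = StompClient()
# Simulate the server ack arriving slightly later on another thread.
threading.Timer(0.05, client.on_subscribe_ack).start()
print(client.call("echo"))
```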
> Thanks for reporting. I will push a patch to fix it.
>
> Thanks,
> Piotr
>
>
> On Wed, Jun 27, 2018 at 5:40 PM, Piotr Kliczewski  wrote:
>>
>> Ok, I'll take a look.
>>
>> On Wed, Jun 27, 2018, 17:36 Nir Soffer wrote:
>>>
>>> On Wed, Jun 27, 2018 at 6:13 PM Piotr Kliczewski  
>>> wrote:

 On Wed, Jun 27, 2018 at 5:01 PM, Nir Soffer  wrote:
>
> This test used to fail in the past, but since we fixed it (or the related
> code) it never failed again.
>
>
> Maybe the slave was overloaded?


 Very possible. Can you paste a link to the job which failed?
>>>
>>>
>>> Here: 
>>> https://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/24130/
>>>
>>> The next build passed.
>>>
>>> Maybe we need to solve the flakiness of some tests by re-running a flaky
>>> test and letting the build fail only if the test failed twice. I wonder if
>>> there is some pytest plugin doing this.
>>>
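There is such a plugin: pytest-rerunfailures re-runs failing tests and fails the build only if a test keeps failing. A sketch of how it could be wired in (the invocation below is generic, not taken from the actual vdsm CI configuration):

```shell
# Install the rerun plugin and retry each failing test once;
# the build fails only if a test fails twice in a row.
pip install pytest-rerunfailures
pytest --reruns 1 tests/stomp_test.py

# Alternatively, mark only the tests known to be flaky and keep
# everything else strict:
#   @pytest.mark.flaky(reruns=1)
#   def test_echo(...): ...
```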


>
>
> 14:19:02 
> ==
> 14:19:02 ERROR:
>
> 14:19:02 
> --
> 14:19:02 Traceback (most recent call last):
> 14:19:02   File 
> "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/testlib.py",
>  line 143, in wrapper
> 14:19:02 return f(self, *args)
> 14:19:02   File 
> "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/stomp_test.py",
>  line 95, in test_echo
> 14:19:02 str(uuid4())),
> 14:19:02   File 
> "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/lib/yajsonrpc/jsonrpcclient.py",
>  line 77, in callMethod
> 14:19:02 raise exception.JsonRpcNoResponseError(method=methodName)
> 14:19:02 JsonRpcNoResponseError: No response for JSON-RPC request: 
> {'method': 'echo'}
> 14:19:02  >> begin captured logging << 
> 
> 14:19:02 2018-06-27 14:17:54,524 DEBUG (MainThread) 
> [vds.MultiProtocolAcceptor] Creating socket (host='::1', port=0, 
> family=10, socketype=1, proto=6) (protocoldetector:225)
> 14:19:02 2018-06-27 14:17:54,526 INFO  (MainThread) 
> [vds.MultiProtocolAcceptor] Listening at ::1:36713 (protocoldetector:183)
> 14:19:02 2018-06-27 14:17:54,535 DEBUG (MainThread) [Scheduler] Starting 
> scheduler test.Scheduler (schedule:98)
> 14:19:02 2018-06-27 14:17:54,537 DEBUG (test.Scheduler) [Scheduler] START 
> thread  
> (func= 0x7fd74ca00390>>, args=(), kwargs={}) (concurrent:193)
> 14:19:02 2018-06-27 14:17:54,538 DEBUG (test.Scheduler) [Scheduler] 
> started (schedule:140)
> 14:19:02 2018-06-27 14:17:54,546 DEBUG (JsonRpc (StompReactor)) [root] 
> START thread  140562629256960)> (func= >, args=(), 
> kwargs={}) (concurrent:193)
> 14:19:02 2018-06-27 14:17:54,547 DEBUG (MainThread) [Executor] Starting 
> executor (executor:128)
> 14:19:02 2018-06-27 14:17:54,549 DEBUG (MainThread) [Executor] Starting 
> worker jsonrpc/0 (executor:286)
> 14:19:02 2018-06-27 14:17:54,553 DEBUG (jsonrpc/0) [Executor] START 
> thread  (func= method _Worker._run of  0x7fd74ca00650>>, args=(), kwargs={}) (concurrent:193)
> 14:19:02 2018-06-27 14:17:54,554 DEBUG (jsonrpc/0) [Executor] Worker 
> started (executor:298)
> 14:19:02 2018-06-27 14:17:54,557 DEBUG (MainThread) [Executor] Starting 
> worker jsonrpc/1 (executor:286)
> 14:19:02 2018-06-27 14:17:54,558 DEBUG (MainThread) [Executor] Starting 
> worker jsonrpc/2 (executor:286)
> 14:19:02 2018-06-27 14:17:54,559 DEBUG (jsonrpc/2) [Executor] START 
> thread  (func= method _Worker._run of  0x7fd74c9fb350>>, args=(), kwargs={}) (concurrent:193)
> 14:19:02 2018-06-27 14:17:54,560 DEBUG (jsonrpc/2) [Executor] Worker 
> started (executor:298)
> 14:19:02 2018-06-27 14:17:54,561 DEBUG (MainThread) [Executor] Starting 
> worker jsonrpc/3 (executor:286)
> 14:19:02 2018-06-27 14:17:54,562 DEBUG (jsonrpc/3) [Executor] START 
> thread  (func= method _Worker._run of  0x7fd74bc80290>>, args=(), kwargs={}) (concurrent:193)
> 14:19:02 2018-06-27 14:17:54,563 DEBUG (jsonrpc/3) [Executor] Worker 
> started (executor:298)
> 14:19:02 2018-06-27 14:17:54,564 DEBUG (MainThread) [Executor] Starting 
> worker jsonrpc/4 (executor:286)
> 

[ovirt-devel] Re: [ovirt-users] Re: Re: oVirt HCI point-to-point interconnection

2018-06-29 Thread Stefano Zappa
Hi Alex,
I have evaluated this approach using the latest version of oVirt Node.

In principle I would like to follow the guidelines of the oVirt development
team and adopt its best practices, trying to limit excessive customizations.

So I installed oVirt Node on 3 hosts and manually configured the two
additional interfaces for the point-to-point interconnections, on which I
then configured Gluster.
This was done very easily, and it works!
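For reference, a rough sketch of that manual setup. All specifics here are assumptions, not taken from the actual deployment: interface names, the /30 point-to-point subnets, host names, and brick paths are invented. Each pair of hosts gets its own point-to-point subnet, /etc/hosts maps one Gluster name per peer onto the correct direct link, and the replica-3 volume is created over those names.

```shell
# On host1 (hypothetical): ens1f0 goes directly to host2, ens1f1 to host3.
nmcli con add type ethernet ifname ens1f0 con-name p2p-host2 \
    ipv4.method manual ipv4.addresses 10.10.12.1/30
nmcli con add type ethernet ifname ens1f1 con-name p2p-host3 \
    ipv4.method manual ipv4.addresses 10.10.13.1/30

# Each host resolves its peers via the direct links (shown for host1;
# host2 and host3 need the mirror-image entries).
cat >> /etc/hosts <<'EOF'
10.10.12.2  gluster-host2
10.10.13.2  gluster-host3
EOF

# Then the usual Gluster steps, run once from host1:
gluster peer probe gluster-host2
gluster peer probe gluster-host3
gluster volume create data replica 3 \
    gluster-host1:/gluster_bricks/data/data \
    gluster-host2:/gluster_bricks/data/data \
    gluster-host3:/gluster_bricks/data/data
```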

I know that SDN is supported by oVirt, even if by default it bridges to the
outside.
I have no experience with SDN, but I realized that VxLAN or maybe Geneve
was the way to go, and now I also have your confirmation.
The idea is to create Geneve L3 point-to-point tunnels between the nodes,
configure an L2 overlay network on top of them, shared by the oVirt pool
hosts, and then move all internal oVirt communications onto that overlay.
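As a rough illustration of that idea (a sketch with assumed device names, VNIs, and addresses, not a tested oVirt configuration): one Geneve tunnel per direct link, enslaved to a bridge that carries the overlay. Note that with three nodes the full tunnel mesh forms an L2 loop, so STP (or a loop-free topology) is needed.

```shell
# On host1: one Geneve tunnel per point-to-point neighbor.
# A distinct VNI per link; each VNI only needs to match on the two
# ends of its tunnel. The 10.10.x.x underlay addresses are assumed.
ip link add gnv-host2 type geneve id 12 remote 10.10.12.2
ip link add gnv-host3 type geneve id 13 remote 10.10.13.2

# Bridge the tunnels into a single L2 overlay for oVirt traffic;
# enable STP because the 3-node full mesh is a loop.
ip link add br-overlay type bridge stp_state 1
ip link set gnv-host2 master br-overlay
ip link set gnv-host3 master br-overlay
ip link set gnv-host2 up
ip link set gnv-host3 up
ip link set br-overlay up
ip addr add 172.16.100.1/24 dev br-overlay
```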

On first analysis this change looks structural and not trivial. I think it
would be better to evaluate it carefully and, if it proves interesting, to
write up a best practice or, better, integrate the functionality into the
solution.
In the end the system must be stable and, above all, upgradable with
traditional procedures.

Stefano.





From: Alex K
Sent: Thursday, June 28, 2018 21:34
To: Stefano Zappa
Cc: Yaniv Kaul; devel; us...@ovirt.org
Subject: Re: [ovirt-users] Re: [ovirt-devel] Re: oVirt HCI point-to-point
interconnection

Hi,

Network virtualization is already here and widely used. I am still not sure 
what is the gain of this approach. You can do the same with VxLANs or Geneve. 
And how do you scale this? Is it only for 3 node clusters?

Alex

On Thu, Jun 28, 2018, 11:47 Stefano Zappa <stefano.za...@saleri.it> wrote:
Hi Yaniv,
no, the final purpose is not to save a couple of ports.

The benefit is the trend to convergence and consolidation of the networking, as 
already done for the storage.
The external interconnection is a critical element of the HCI solution, a 
configuration without switches would give more soundness and more independence.

If we think about HCI as the transition of IT infrastructure from 
hardware-defined to software-defined, the first step was virtualization of the 
servers, the second step was storage virtualization, the third step will be 
virtualization of networking.

Of course this also has an economic benefit, reducing the total cost of
ownership and traditional data-center inefficiencies.

For the external uplink we assume something very traditional and simple: we
can use the interfaces integrated into the motherboard.
This link is not critical for the integrity of our HCI solution; it does not
require broad bandwidth or low latency, since it is used exclusively to
interconnect the solution with the outside world through a traditional
Ethernet switch.

We intend to give more attention to the point-to-point interconnection of
the three nodes, using 100 or 200 GbE cards for direct interconnection
without switches.
The purchase of these 3 cards will not be cheap, sure, but it will cost
twenty times less than introducing expensive switches that would certainly
have more ports than necessary.

Thanks for your attention,
Stefano.





[ovirt-devel] Re: oVirt on Fedora 28 for developers

2018-06-29 Thread Sandro Bonazzola
2018-06-29 9:32 GMT+02:00 Dan Kenigsberg :

>
>
> On Fri, Jun 29, 2018 at 12:26 AM, Nir Soffer  wrote:
>
>> On Thu, Jun 28, 2018 at 8:22 PM Greg Sheremeta 
>> wrote:
>>
>>> On Thu, Jun 28, 2018 at 12:04 PM Sandro Bonazzola 
>>> wrote:
>>>


 2018-06-28 15:42 GMT+02:00 Nir Soffer :

> I want to share the current state of oVirt on Fedora 28, hopefully it
> will
> save time for other developers.
>

 Thanks for the summary!

>>>
>>> +1
>>>
>>>



>
>
> 1. Why should I run oVirt on Fedora 28?
>
> - Fedora is the base for future CentOS. The bugs we find *today*
> on Fedora 28
> are the bugs that we will not have next year when we try oVirt on
> CentOS.
>
> - I want to contribute to projects that require Python 3. For example,
> virt-v2v
>   requires Python 3.6 upstream. If you want to contribute you need
> to test
>   it on Fedora 28.
>
> - I need to develop with latest libvirt and qemu. One example is
> incremental
>   backup; you will need Fedora 28 to work on this.
>
> - CentOS is old and boring, I want to play with the newest bugs :-)
>
>
> 2. How to install oVirt with Fedora 28 hosts
>
> Warning: ugly hacks below!
>
> - Install ovirt-release-master.rpm on all hosts
>
> dnf install http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
>
> - Install or build engine on CentOS 7.5 (1804)[1]
>

>>> Why have only half the fun? Engine works on fc28.
>>>
>>
>> I needed the shortest way to Fedora 28 host. Next time I'll move it to
>> Fedora :-)
>>
>
> According to  https://ovirt-jira.atlassian.net/browse/OVIRT-2259
> 
> Engine-on-fc28 cannot add fc28 hosts. ssh to the host fails with "The key
> algorithm 'EC' is not supported".
>
> Have any of you noticed this outside OST?
>

Yes, tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1591801



>
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QIP4PHKCRJIQBDQSYXVW3GBFQTLHFT6Z/
>
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com



[ovirt-devel] Re: oVirt on Fedora 28 for developers

2018-06-29 Thread Dan Kenigsberg
On Fri, Jun 29, 2018 at 12:26 AM, Nir Soffer  wrote:

> On Thu, Jun 28, 2018 at 8:22 PM Greg Sheremeta 
> wrote:
>
>> On Thu, Jun 28, 2018 at 12:04 PM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> 2018-06-28 15:42 GMT+02:00 Nir Soffer :
>>>
 I want to share the current state of oVirt on Fedora 28, hopefully it
 will
 save time for other developers.

>>>
>>> Thanks for the summary!
>>>
>>
>> +1
>>
>>
>>>
>>>
>>>


 1. Why should I run oVirt on Fedora 28?

 - Fedora is the base for future CentOS. The bugs we find *today* on
 Fedora 28
 are the bugs that we will not have next year when we try oVirt on
 CentOS.

 - I want to contribute to projects that require Python 3. For example,
 virt-v2v
   requires Python 3.6 upstream. If you want to contribute you need to
 test
   it on Fedora 28.

 - I need to develop with latest libvirt and qemu. One example is
 incremental
   backup; you will need Fedora 28 to work on this.

 - CentOS is old and boring, I want to play with the newest bugs :-)


 2. How to install oVirt with Fedora 28 hosts

 Warning: ugly hacks below!

 - Install ovirt-release-master.rpm on all hosts

 dnf install http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm

 - Install or build engine on CentOS 7.5 (1804)[1]

>>>
>> Why have only half the fun? Engine works on fc28.
>>
>
> I needed the shortest way to Fedora 28 host. Next time I'll move it to
> Fedora :-)
>

According to  https://ovirt-jira.atlassian.net/browse/OVIRT-2259

Engine-on-fc28 cannot add fc28 hosts. ssh to the host fails with "The key
algorithm 'EC' is not supported".

Have any of you noticed this outside OST?
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/message/OTNITEZ54BQWTORKNIVJWRBZXRIMYXWH/