[opnfv-tech-discuss] [dovetail] weekly meeting agenda 7/21

2017-07-19 Thread Wenjing Chu
Hi Dovetailers,

I am proposing we continue with the same topics for this week:

1) Update on the most recent Dovetail release 0.3
2) Another round of review on the addendum document
3) Continue to look through the open task list, including docs that still
remain in drafts

Anything else to add?

Regards
Wenjing
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [Bottlenecks] Minutes of Bottlenecks Discussion on July 20, 2017

2017-07-19 Thread Yuyang (Gabriel)
Minutes of Bottlenecks Discussion on July 20, 2017

  *   Date and Time
 *   Thursday at 0100-0200 UTC, July 20, 2017
 *   Thursday at 0900-1000 Beijing, July 20, 2017
 *   Wednesday at 1800-1900 PDT, July 20, 2017
  *   Minutes
 *   Yang Yu
  *   Participants (9 people):
 *   Ace Lee
 *   Jack Chen
 *   Jing Lu
 *   Kubi
 *   Manuel Rebellon
 *   Rex
 *   Ross Brattain
Agenda:

  *   Discussion with Yardstick about the scaling test
  *   Stress Testing & Release Discussion
  *   Action Items Review
Discussion & Action Items

1.  Discussion with Yardstick about the scaling test

  *   https://etherpad.opnfv.org/p/yardstick_release_e
 *   Scale-out test
 *   Scale-up test
  *   Ross briefly introduced the scaling test cases
 *   The test cases are similar to the netperf and ping tests in the 
Bottlenecks repo
  *   Ross asked how Bottlenecks passes parameters to Yardstick
 *   There is a separate config file in the Bottlenecks repo and a modified 
test case yaml file in the Yardstick repo. Bottlenecks then passes the stack 
number to Yardstick in each testing iteration
*   
https://gerrit.opnfv.org/gerrit/gitweb?p=bottlenecks.git;a=blob;f=testsuites/posca/testcase_cfg/posca_factor_ping.yaml;h=ed1e3475321668d50f4dcdd658d1f00f579e1f77;hb=HEAD
*   
https://gerrit.opnfv.org/gerrit/gitweb?p=yardstick.git;a=blob;f=samples/ping_bottlenecks.yaml;h=01977a1dee1f0c9318ce272609d4ad9a253d7625;hb=HEAD
*   Some modifications of the testing code are also needed to 
accommodate the changes in Yardstick
*   Bottlenecks calls Yardstick in a more stable way by using the Docker 
library to issue a docker exec command that makes Yardstick run a given test 
case (see the first sketch after this section)
 *   Action on Ross to provide an example yaml for scaling and to discuss it 
with Bottlenecks; Bottlenecks will then implement the testing part
  *   Ross asked how Bottlenecks processes the results
 *   Bottlenecks gets the test results from the result file produced by 
Yardstick, filtering out the KPIs that Bottlenecks wants to collect. The KPIs 
are then plotted per iteration in Kibana, and the monitored time series results 
in Grafana (see the second sketch after this section)
 *   Gabriel also introduced the monitoring tool recently merged into 
Bottlenecks, Prometheus, which monitors host status
  *   Ross asked about the stop criteria for the testing iterations
 *   Currently Bottlenecks examines the increase/decrease of the 
monitoring results.
 *   Once the increase/decrease between 2 consecutive iterations is 
below the preset threshold, e.g., 2.5%, the test is stopped.
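
First sketch - driving a Yardstick test case from Bottlenecks via docker exec. 
This is a minimal illustration of the call pattern described above using the 
Docker SDK for Python; the container name, the in-container paths and the exact 
Yardstick command line are assumptions for the example, not details taken from 
these minutes.

# Sketch: make a running Yardstick container execute one test case.
import docker

def run_yardstick_case(container_name, task_yaml, output_file):
    client = docker.from_env()
    yardstick = client.containers.get(container_name)
    cmd = "yardstick task start {} --output-file {}".format(task_yaml, output_file)
    # Blocks until the command finishes; the result carries the exit code/output.
    return yardstick.exec_run(cmd)

run_yardstick_case("yardstick",
                   "/home/opnfv/repos/yardstick/samples/ping_bottlenecks.yaml",
                   "/tmp/yardstick.out")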
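
Second sketch - KPI filtering and the iteration stop criterion. The rule is 
simple arithmetic: stop once the relative change of a KPI between two 
consecutive iterations falls below the preset threshold (e.g., 2.5%). The 
result-file field names below are made up for illustration; the real Yardstick 
output format may differ.

# Sketch: extract one KPI per iteration from a result file (one JSON record per
# line) and decide whether the scaling test should stop.
import json

def load_kpi(result_file, kpi_name):
    values = []
    with open(result_file) as f:
        for line in f:
            record = json.loads(line)
            if kpi_name in record.get("data", {}):
                values.append(float(record["data"][kpi_name]))
    return values

def should_stop(kpi_history, threshold=0.025):
    if len(kpi_history) < 2 or kpi_history[-2] == 0:
        return False
    change = abs(kpi_history[-1] - kpi_history[-2]) / abs(kpi_history[-2])
    return change < threshold

# Example: throughput flattens out, |234 - 230| / 230 is about 1.7% < 2.5%.
print(should_stop([100.0, 180.0, 230.0, 234.0]))   # True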

2.  Stress Testing & Release Discussion

  *   Long duration POD for stress testing
 *   Report from the last TSC meeting
*   Questions raised about the choice of SUT and stability criteria
   *   Jose will organize a discussion at the Testperf meeting focusing 
on the plan
*   Need to go to the TSC next week
 *   Cross-project stress testing is under consideration
*   Running VSperf while StorPerf is executed
*   Bottlenecks acts as load manager while monitoring/analyzing
  *   Intern Projects
 *   Have contacted the intern for the CPU limit topic
*   An interview has been held and the intern for CPU limit has been 
recruited
*   Action on Gabriel to confirm the communication media
  *   Euphrates Planning updated
 *   https://wiki.opnfv.org/display/bottlenecks/Bottlenecks+Release+Plan
 *   Testing Framework Refactoring
*   The monitoring module has been installed in Bottlenecks
   *   Prometheus is now supported in Bottlenecks, covering 
installation, collectd and node integration.
   *   Working on installing Grafana together with Prometheus
   *   Prometheus is a powerful tool for monitoring the test process 
and alerting when a KPI passes a certain threshold (see the sketch after this 
section). Gabriel showed the dashboards and results for Prometheus and Grafana.
   *   Manuel asked whether Prometheus is open source software
  *   https://prometheus.io/docs/introduction/overview/
 *   Initiate Container Testing
*   This plan has been assigned low priority
*   Prakash should provide feedback about Kolla deployment/testing
*   K8S, OpenRetriever, ...
 *   Initiate Tuning Testing
*   This plan has been assigned medium priority
*   Action item on Gabriel to reorganize the priorities
*   The wiki page should be elaborated
   *   https://wiki.opnfv.org/pages/viewpage.action?pageId=12386549
*   Ceph or CPU tuning is under consideration now
   *   Needs investigation work
   *   Action item on Gabriel
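
Sketch referenced above - checking a host KPI against a threshold through the 
Prometheus HTTP API. The Prometheus address and the query expression are 
assumptions for the example, not values taken from the Bottlenecks setup.

# Sketch: poll Prometheus for a host-level KPI and flag a threshold crossing.
import requests

PROM_URL = "http://localhost:9090"   # assumed address of the Prometheus server

def query_instant(expr):
    resp = requests.get(PROM_URL + "/api/v1/query", params={"query": expr})
    resp.raise_for_status()
    # Each result carries a (timestamp, value) pair; the value is a string.
    return [float(sample["value"][1]) for sample in resp.json()["data"]["result"]]

threshold = 90.0   # e.g., alert when host CPU usage exceeds 90%
expr = '100 - (avg by (instance) (rate(node_cpu{mode="idle"}[1m])) * 100)'
for value in query_instant(expr):
    if value > threshold:
        print("CPU usage KPI above threshold: %.1f%%" % value)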

3.  Project Engagement

  *   More contributors are welcome to Bottlenecks
  *   Cooperation with other projects
 *   Integration with upstream
  *   Non-active committer contact
 *   Has sent out emails to 

Re: [opnfv-tech-discuss] [releng][opnfvdocs]document verify rtd job failed

2017-07-19 Thread Trevor Bramwell
Hi Matthew, Sofia, Mark, et. al.

I've uploaded a patch for this[1]. This appears to be a bug introduced
by docutils > 0.12 related to parsing remote image links in docs[2].

Got it fixed locally, though I had to remove the tox virtualenv
after introducing the fix, so we may need to do the same on the docs
build server.

Regards,
Trevor Bramwell

[1] https://gerrit.opnfv.org/gerrit/#/c/37807/
[2] https://sourceforge.net/p/docutils/bugs/301/

On Wed, Jul 19, 2017 at 08:54:54PM +, Beierl, Mark wrote:
> Hello,
> 
> It looks like since this build job #1582 [1], every rtd-verify job is failing 
> with this error at the end:
> 
> 
> Exception occurred:
>   File 
> "/home/jenkins-ci/opnfv/slave_root/workspace/docs-verify-rtd-master/.tox/docs/local/lib/python2.7/site-packages/docutils/writers/_html_base.py",
>  line 671, in depart_document
> assert not self.context, 'len(context) = %s' % len(self.context)
> AssertionError: len(context) = 1
> The full traceback has been saved in /tmp/sphinx-err-TRcc6N.log, if you want 
> to report the issue to the developers.
> Please also report this if it was a user error, so that a better error 
> message can be provided next time.
> A bug report can be filed in the tracker at 
> . Thanks!
> ERROR: InvocationError: 
> '/home/jenkins-ci/opnfv/slave_root/workspace/docs-verify-rtd-master/.tox/docs/bin/sphinx-build
>  -b html -n -d 
> /home/jenkins-ci/opnfv/slave_root/workspace/docs-verify-rtd-master/.tox/docs/tmp/doctrees
>  ./docs/ 
> /home/jenkins-ci/opnfv/slave_root/workspace/docs-verify-rtd-master/docs/_build/html'
> 
> [1] https://build.opnfv.org/ci/view/opnfvdocs/job/docs-verify-rtd-master/1582
> 
> Regards,
> Mark
> 
> Mark Beierl
> SW System Sr Principal Engineer
> Dell EMC | Office of the CTO
> mobile +1 613 314 8106
> mark.bei...@dell.com
> 
> On Jul 19, 2017, at 05:20, Sofia Wallin wrote:
> 
> Hi Matthew,
> Thanks for reaching out.
> 
> If I look here it seems like this patch is the one failing.
> But I’m not sure how to solve this…
> 
> Hoping that Aric or Trevor will manage to help.
> 
> //Sofia
> 
> 
> From: Lijun
> Date: Wednesday, 19 July 2017 at 09:45
> To: Aric Gardner, "tbramw...@linuxfoundation.org", Sofia Wallin, 
> "julien...@gmail.com", "shang.xiaod...@zte.com.cn"
> Cc: OPNFV
> Subject: [opnfv-tech-discuss][releng][opnfvdocs]document verify rtd job failed
> 
> Hi
> 
> Since 07/18 the document verify rtd job for every project's documents has 
> failed, such as this one:
> https://build.opnfv.org/ci/job/docs-verify-rtd-master/1590/console
> 
> Can somebody try to solve this? One assumption is that it is caused by 
> https://gerrit.opnfv.org/gerrit/#/c/37609/
> since it has some .rst files not included under the /docs directory and has no 
> doc-verify-rtd job? I am not sure.
> 
> Any suggestions would be helpful; this blocks many patches.
> 
> 
> Best regards
> 
> /MatthewLi
> ___
> opnfv-tech-discuss mailing list
> opnfv-tech-discuss@lists.opnfv.org
> https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
> 


signature.asc
Description: PGP signature
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [StorPerf] Weekly meeting notes

2017-07-19 Thread Beierl, Mark
Hello,

You can find the weekly meeting notes in the wiki here [1].  Shrenik and I went 
over the process for manually starting the new StorPerf docker-compose 
containers as a developer using the local git repo.

[1] https://wiki.opnfv.org/display/meetings/StorPerf+2017-07-19+Meeting+Notes

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [releng][opnfvdocs]document verify rtd job failed

2017-07-19 Thread Beierl, Mark
Hello,

It looks like since this build job #1582 [1], every rtd-verify job is failing 
with this error at the end:


Exception occurred:
  File 
"/home/jenkins-ci/opnfv/slave_root/workspace/docs-verify-rtd-master/.tox/docs/local/lib/python2.7/site-packages/docutils/writers/_html_base.py",
 line 671, in depart_document
assert not self.context, 'len(context) = %s' % len(self.context)
AssertionError: len(context) = 1
The full traceback has been saved in /tmp/sphinx-err-TRcc6N.log, if you want to 
report the issue to the developers.
Please also report this if it was a user error, so that a better error message 
can be provided next time.
A bug report can be filed in the tracker at 
. Thanks!
ERROR: InvocationError: 
'/home/jenkins-ci/opnfv/slave_root/workspace/docs-verify-rtd-master/.tox/docs/bin/sphinx-build
 -b html -n -d 
/home/jenkins-ci/opnfv/slave_root/workspace/docs-verify-rtd-master/.tox/docs/tmp/doctrees
 ./docs/ 
/home/jenkins-ci/opnfv/slave_root/workspace/docs-verify-rtd-master/docs/_build/html'

[1] https://build.opnfv.org/ci/view/opnfvdocs/job/docs-verify-rtd-master/1582

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jul 19, 2017, at 05:20, Sofia Wallin wrote:

Hi Matthew,
Thanks for reaching out.

If I look here it seems like this patch is the one failing.
But I’m not sure how to solve this…

Hoping that Aric or Trevor will manage to help.

//Sofia


From: Lijun
Date: Wednesday, 19 July 2017 at 09:45
To: Aric Gardner, "tbramw...@linuxfoundation.org", Sofia Wallin, 
"julien...@gmail.com", "shang.xiaod...@zte.com.cn"
Cc: OPNFV
Subject: [opnfv-tech-discuss][releng][opnfvdocs]document verify rtd job failed

Hi

Since 07/18 the document verify rtd job for every project's documents has 
failed, such as this one:
https://build.opnfv.org/ci/job/docs-verify-rtd-master/1590/console

Can somebody try to solve this? One assumption is that it is caused by 
https://gerrit.opnfv.org/gerrit/#/c/37609/
since it has some .rst files not included under the /docs directory and has no 
doc-verify-rtd job? I am not sure.

Any suggestions would be helpful; this blocks many patches.


Best regards

/MatthewLi
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss

___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [infra] xci-aio How to consume as a user for hands-on investigation and manual testing

2017-07-19 Thread Dave Urschatz
I’m able to run openstack commands from the ansible_ssh_host, but I would also 
like to use the Horizon dashboard and the console to VMs.

Is there an easy way to get the horizon dashboard url in the all-in-one 
deployment?
Is there a default user and password?
Any documentation?

Best Regards,
Dave

Dave Urschatz
Senior Technical Lead
555 Legget Drive| Tower A | Suite 600| Ottawa ON | K2K 2X3 | 613-963-1201

___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] Agenda for this week's Technical Discussion call

2017-07-19 Thread Raymond Paik
All,

You can find the agenda for tomorrow's call at
https://wiki.opnfv.org/display/PROJ/Tc+Agenda+20170720

Thanks,

Ray
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [infra] xci-aio deployment failed on on an Intel NUC Kit NUC6i7KYK

2017-07-19 Thread Dave Urschatz

Hi Markos.

That worked. Thank you.
xci: aio has been installed

I will also attempt the same on SuperMicro and Kontron hardware in the next 
several days.

Regards,
Dave

 

On 2017-07-19, 8:35 AM, "Markos Chandras"  wrote:

Hello,

On 19/07/17 13:30, Dave Urschatz wrote:
> Hi Markos.
> 
> Thanks for your reply.
> 
> The distribution is Ubuntu 16.04 as per XCI Wiki.
> 
> Here is the failure:
> 

The real problem is this

> self.install_requires:\\nAttributeError: Distribution instance has no 
attribute 'install_requires'\",

This has been reported in upstream openstack
http://lists.openstack.org/pipermail/openstack-dev/2017-July/119694.html

This is caused by the latest setuptools release

The upstream OpenStack Ansible project has a workaround in place
https://review.openstack.org/#/c/483874/

but we don't have that in the XCI yet

Your best bet right now is to do the following

export OPENSTACK_OSA_VERSION=d4ae08646d1c1192e5806be2f81b1748f520fd39
./xci-deploy.sh

but it's not guaranteed to work at this point.

-- 
markos

SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg


___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [StorPerf] Master instability ahead

2017-07-19 Thread Beierl, Mark
Hello,

As part of the container decomposition project, StorPerf is going through a 
transition period in master where it will not be able to report statistics 
until the cutover is complete.  If you rely on working metrics as part of 
integration or other testing, please use danube.3.0 until further notice.

My apologies for the instability, but I want to ensure that Shrenik (the intern 
working on this) has time and opportunity to learn and execute this change 
without pressure :)

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

2017-07-19 Thread Aaron Smith
Hi Rami/Maryam,
  Thanks for the quick response.  We had started looking at RedFish for a
different topic.
I did a cursory review of the API, but will need to look at it in more
detail.  If you have any notes
to share that would be great as well.

As a start, could you give examples of the aspects of the RedFish API that
you think are suitable?
Are you thinking the Yang modeling tools are also appropriate?

Aaron

On Wed, Jul 19, 2017 at 10:48 AM, Rosen, Rami  wrote:

> Hi Aaron,
>
> Indeed from initial exploration of the Redfish spec, it seems a very good
> candidate for a REST API that sits in front of collectd.
>
> As said this is a WIP, we are exploring further.
>
>
>
> Regards,
>
> Rami Rosen
>
>
>
> *From:* Tahhan, Maryam
> *Sent:* Wednesday, July 19, 2017 17:15
> *To:* Aaron Smith ; opnfv-tech-discuss@lists.opnfv.org
> *Cc:* Mcmahon, Tony B ; Rosen, Rami <
> rami.ro...@intel.com>
> *Subject:* RE: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime
> config)
>
>
>
> Hey Aaron
>
>
>
> Rami in CC has been looking at this, and we’ve been using the Redfish API
> definition in DMTF as a reference. It’s very much a WIP but  I will add
> this as a discussion topic to the next barometer call.
>
> BR
> Maryam
>
>
>
> *From:* opnfv-tech-discuss-boun...@lists.opnfv.org [
> mailto:opnfv-tech-discuss-boun...@lists.opnfv.org
> ] *On Behalf Of *Aaron Smith
> *Sent:* Wednesday, July 19, 2017 3:01 PM
> *To:* opnfv-tech-discuss@lists.opnfv.org
> *Subject:* [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)
>
>
>
> Has any work been done on further definition of
>
> BAROMETER-63 (https://jira.opnfv.org/browse/BAROMETER-63)?
>
> We would like to coordinate with any work that has been done.
>
>
>
> Aaron Smith
>
> --
>
> *AARON SMITH*
>
> SENIOR PRINCIPAL SOFTWARE ENGINEER, NFVPE
>
> Red Hat
>
> 
>
> 314 Littleton Rd, Westford, MA 01886
>
> aasm...@redhat.com   M: 617.877.4814
>
> 
>
>
>



-- 

AARON SMITH

SENIOR PRINCIPAL SOFTWARE ENGINEER, NFVPE

Red Hat



314 Littleton Rd, Westford, MA 01886

aasm...@redhat.com   M: 617.877.4814

___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] SFC Project

2017-07-19 Thread Manuel Buil
Hi Pavan,

Seems like it is not blocking. Can you check that your packets are
being classified at all? Look into table=11 on the compute node:

ovs-ofctl -O Openflow13 dump-flows br-int table=11

Regards,
Manuel

On Wed, 2017-07-19 at 17:54 +0530, Pavan Gupta wrote:
> Hi Andres/Manuel,
> I could run the test cases to completion, however, each one passed
> partially. I am not sure if the error msgs from the neutron log point to
> a subset failure of each test case. Let me know if you have any further
> pointers.
> Pavan
> 
> SFC.log
> 2017-07-19 11:33:49,911 - __main__ - INFO - Results of test case
> 'sfc_one_chain_two_service_functions - ODL-SFC Testing SFs when they
> are located on the same chain':
> {'status': 'FAIL', 'details': [{'HTTP works': 'PASS'}, {'HTTP not blocked': 'FAIL'}]}
> 
> 2017-07-19 11:39:17,153 - __main__ - INFO - Results of test case
> 'sfc_two_chains_SSH_and_HTTP - ODL-SFC tests':
> {'status': 'FAIL', 'details': [{'SSH Blocked': 'FAIL'}, {'HTTP works': 'PASS'}, {'HTTP Blocked': 'FAIL'}, {'SSH works': 'PASS'}]}
> 
> 2017-07-19 11:40:57,321 - __main__ - INFO - Results of test case
> 'sfc_symmetric_chain - Verify the behavior of a symmetric service chain':
> {'status': 'FAIL', 'details': [{'HTTP works': 'PASS'}, {'HTTP Blocked': 'FAIL'}]}
> 
> Neutron-all.log
> 2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache Traceback (most recent call last):
> 2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache   File "/usr/lib/python2.7/dist-packages/networking_odl/common/cache.py", line 120, in fetch_all
> 2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache     for key, value in self._fetch_all(new_entry_keys):
> 2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache   File "/usr/lib/python2.7/dist-packages/networking_odl/ml2/network_topology.py", line 228, in _fetch_and_parse_network_topology
> 2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache     .format(', '.join(addresses)))
> 2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache ValueError: No such topology element for given host addresses: node-4.domain.tld
> 2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache
> 2017-07-19T11:35:17.097015+00:00 node-1 neutron-server: 2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology [req-960c5cd1-947a-47a6-a2dd-d002dce56176 - - - - -] Network topology element has failed binding port:
> 2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology Traceback (most recent call last):
> 2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology   File "/usr/lib/python2.7/dist-packages/networking_odl/ml2/network_topology.py", line 117, in bind_port
> 2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology     port_context, vif_type, self._vif_details)
> 2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology   File "/usr/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py", line 175, in bind_port
> 2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology     _('Unable to find any valid segment in given context.'))
> 2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology ValueError: Unable to find any valid segment in given context.
> 2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology
> 2017-07-19T11:35:17.097555+00:00 node-1 neutron-server: 2017-07-19 11:35:17.097 12923 ERROR networking_odl.ml2.network_topology [req-960c5cd1-947a-47a6-a2dd-d002dce56176 - - - - -] Unable to bind port element for given host and valid VIF types:
> 2017-07-19T11:35:17.098497+00:00 node-1 neutron-server: 2017-07-19 11:35:17.098 12923 ERROR neutron.plugins.ml2.managers [req-960c5cd1-947a-47a6-a2dd-d002dce56176 - - - - -] Failed to bind port d09ae072-6b8b-423b-8f07-f562b819ee6c on host node-4.domain.tld for vnic_type normal using segments [{'segmentation_id': None, 'physical_network': u'physnet1', 'id': u'd106fed7-3947-4a29-8506-7841d8a91992', 'network_type': u'flat'}]
> 
> > On 18-Jul-2017, at 6:31 PM, andres.sanchez.ra...@estudiant.upc.edu wrote:
> > Hello Pavan,
> > 
> > I encountered similar errors in my SFC log, my script also gave an error
> > when waiting for the instance to come up, but I have not been able to
> > resolve it. Looking into your logs I think the problem is in Neutron, so
> > you should probably check the Neutron logs, and also validate that you
> > can manually start instances and assign floating IPs to them. I will let
> > you know if I am able to resolve my issues.
> > 
> > Best regards,
> > 
> > Quoting "Pavan Gupta" :
> > 
> > > Hi Andres,
> > > I ran ‘functest openstack clean’ and that took care of the ‘SFC
> > > already exist’ error. The test ran further till it hit the
> > > following issue. In case, you have come across 

Re: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

2017-07-19 Thread Bugenhagen, Michael K
Just an FYI..

  Some of us providers are looking at starting a project using Redfish to 
manage “objects” --- specifically as a “UI” (user interface) …

If you take a step back and consider:

  1.  What will the user API be – well, 95% chance that’s REST.
  2.  Will they want to read, or have experience using, the protocols and data 
models – sure, that’s Java for them & JSON.
  3.  What are they using, DOS or Windows (GUIs)? – Well, now we realize we need 
a GUI pattern to display that managed object.

If you then walk over to a cloud group – you immediately find out that:

  1.  Cloud templates are full of “managed objects”
  2.  The managed object data model (pattern) supports showing the object in a 
GUI (like the OpenStack dashboard).


  *   All of a sudden people realize .. hey, that is what cloud does.

The nail in the coffin –

Agile to a cloud customer is giving them the objects to manage / 
configure, ….   (ask ONUG SD-WAN – they have been pounding the table to get 
this like they have it in cloud).

i.e. – if one decides to build a lifecycle manager for 
“customer objects” – you inexplicably add the internal IT and network 
controller dev to meeting the customer requirement.
Aka – you are out of the race…  (when I say inexplicably – 
you can’t explain to a customer that this takes 9-12 months to deliver) …

The punch line –
Software isn’t agile – user-managed objects are..   (you can 
orchestrate a resource to give it to the customer to manage, or just give it 
as part of the platform without orchestration)

The project scope is being hammered out now…
More to come.

Best,
Mike





From:  on behalf of "Rosen, Rami" 

Date: Wednesday, July 19, 2017 at 9:57 AM
To: "Tahhan, Maryam" , Aaron Smith 
, "opnfv-tech-discuss@lists.opnfv.org" 

Cc: "Mcmahon, Tony B" 
Subject: Re: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

Hi Aaron,
Indeed from initial exploration of the Redfish spec, it seems a very good 
candidate for a REST API that sits in front of collectd.
As said this is a WIP, we are exploring further.

Regards,
Rami Rosen

From: Tahhan, Maryam
Sent: Wednesday, July 19, 2017 17:15
To: Aaron Smith ; opnfv-tech-discuss@lists.opnfv.org
Cc: Mcmahon, Tony B ; Rosen, Rami 

Subject: RE: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

Hey Aaron

Rami in CC has been looking at this, and we’ve been using the Redfish API 
definition in DMTF as a reference. It’s very much a WIP but  I will add this as 
a discussion topic to the next barometer call.

BR
Maryam

From: 
opnfv-tech-discuss-boun...@lists.opnfv.org
 [mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of Aaron Smith
Sent: Wednesday, July 19, 2017 3:01 PM
To: 
opnfv-tech-discuss@lists.opnfv.org
Subject: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

Has any work been done on further definition of
BAROMETER-63 (https://jira.opnfv.org/browse/BAROMETER-63)?
We would like to coordinate with any work that has been done.

Aaron Smith
--

AARON SMITH

SENIOR PRINCIPAL SOFTWARE ENGINEER, NFVPE

Red Hat



314 Littleton Rd, Westford, MA 01886

aasm...@redhat.com   M: 617.877.4814


This communication is the property of CenturyLink and may contain confidential 
or privileged information. Unauthorized use of this communication is strictly 
prohibited and may be unlawful. If you have received this communication in 
error, please immediately notify the sender by reply e-mail and destroy all 
copies of the communication and any attachments.
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

2017-07-19 Thread Rosen, Rami
Hi Aaron,
Indeed from initial exploration of the Redfish spec, it seems a very good 
candidate for a REST API that sits in front of collectd.
As said this is a WIP, we are exploring further.

Regards,
Rami Rosen

From: Tahhan, Maryam
Sent: Wednesday, July 19, 2017 17:15
To: Aaron Smith ; opnfv-tech-discuss@lists.opnfv.org
Cc: Mcmahon, Tony B ; Rosen, Rami 

Subject: RE: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

Hey Aaron

Rami in CC has been looking at this, and we’ve been using the Redfish API 
definition in DMTF as a reference. It’s very much a WIP but  I will add this as 
a discussion topic to the next barometer call.

BR
Maryam

From: 
opnfv-tech-discuss-boun...@lists.opnfv.org
 [mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of Aaron Smith
Sent: Wednesday, July 19, 2017 3:01 PM
To: 
opnfv-tech-discuss@lists.opnfv.org
Subject: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

Has any work been done on further definition of
BAROMETER-63 (https://jira.opnfv.org/browse/BAROMETER-63)?
We would like to coordinate with any work that has been done.

Aaron Smith
--

AARON SMITH

SENIOR PRINCIPAL SOFTWARE ENGINEER, NFVPE

Red Hat



314 Littleton Rd, Westford, MA 01886

aasm...@redhat.com   M: 617.877.4814


___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] New project proposal for ONAP-Automated OPNFV (Auto)

2017-07-19 Thread Tina Tsou
Dear all,

Hope things are going well.

We will present a new project proposal ONAP-Automated OPNFV (Auto) at 
tomorrow's weekly Technical Community Discussion meeting.

OPNFV is an SDNFV system integration project for open source components, which 
so far have been mostly limited to the NFVI+VIM as generally described by ETSI. 
In particular, OPNFV has yet to integrate higher-level automation features. 
This project will focus on ONAP component integration and verification. More 
details can be found at the wiki page. 
https://wiki.opnfv.org/pages/viewpage.action?pageId=12387216

Talk to you then.


Thank you,

Bryan & Tina
IMPORTANT NOTICE: The contents of this email and any attachments are 
confidential and may also be privileged. If you are not the intended recipient, 
please notify the sender immediately and do not disclose the contents to any 
other person, use it for any purpose, or store or copy the information in any 
medium. Thank you.
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

2017-07-19 Thread Tahhan, Maryam
Hey Aaron

Rami in CC has been looking at this, and we’ve been using the Redfish API 
definition in DMTF as a reference. It’s very much a WIP but  I will add this as 
a discussion topic to the next barometer call.

BR
Maryam

From: opnfv-tech-discuss-boun...@lists.opnfv.org 
[mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of Aaron Smith
Sent: Wednesday, July 19, 2017 3:01 PM
To: opnfv-tech-discuss@lists.opnfv.org
Subject: [opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

Has any work been done on further definition of
BAROMETER-63 (https://jira.opnfv.org/browse/BAROMETER-63)?
We would like to coordinate with any work that has been done.

Aaron Smith
--

AARON SMITH

SENIOR PRINCIPAL SOFTWARE ENGINEER, NFVPE

Red Hat



314 Littleton Rd, Westford, MA 01886

aasm...@redhat.com   M: 617.877.4814


___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [BAROMETER] BAROMETER-63 (runtime config)

2017-07-19 Thread Aaron Smith
Has any work been done on further definition of
BAROMETER-63 (https://jira.opnfv.org/browse/BAROMETER-63)?
We would like to coordinate with any work that has been done.

Aaron Smith
-- 

AARON SMITH

SENIOR PRINCIPAL SOFTWARE ENGINEER, NFVPE

Red Hat



314 Littleton Rd, Westford, MA 01886

aasm...@redhat.com   M: 617.877.4814

___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [StorPerf] Weekly Meeting

2017-07-19 Thread Beierl, Mark
BEGIN:VCALENDAR
METHOD:REQUEST
PRODID:Microsoft Exchange Server 2010
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Eastern Standard Time
BEGIN:STANDARD
DTSTART:16010101T02
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=1SU;BYMONTH=11
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T02
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=2SU;BYMONTH=3
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
ORGANIZER;CN="Beierl, Mark":MAILTO:mark.bei...@emc.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=opnfv-tech
 -disc...@lists.opnfv.org:MAILTO:opnfv-tech-discuss@lists.opnfv.org
DESCRIPTION;LANGUAGE=en-US:When: Wednesday\, July 19\, 2017 10:00 AM-11:00 
 AM. (UTC-05:00) Eastern Time (US & Canada)\n\n*~*~*~*~*~*~*~*~*~*\n\nSorry
  for the late invite.  Please be welcome to join in today's StorPerf weekl
 y team meeting\, starting in 10 minutes.\n\nhttps://meet.emc.com/mark.beie
 rl/69MEZFLU\n\n\nFind a local number:\nhttp://www.emcconferencing.com/glob
 alaccess/\n\nConference ID: 58108948\n
SUMMARY;LANGUAGE=en-US:[StorPerf] Weekly Meeting
DTSTART;TZID=Eastern Standard Time:20170719T10
DTEND;TZID=Eastern Standard Time:20170719T11
UID:6FBCEE59-FE0B-4DAB-B9FB-63143FD5E280
CLASS:PUBLIC
PRIORITY:5
DTSTAMP:20170719T135032Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:0
X-MICROSOFT-CDO-APPT-SEQUENCE:0
X-MICROSOFT-CDO-OWNERAPPTID:2115475657
X-MICROSOFT-CDO-BUSYSTATUS:TENTATIVE
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-INSTTYPE:0
X-MICROSOFT-DISALLOW-COUNTER:FALSE
BEGIN:VALARM
ACTION:DISPLAY
DESCRIPTION:REMINDER
TRIGGER;RELATED=START:-PT5M
END:VALARM
END:VEVENT
END:VCALENDAR
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] OPNFV Docker builds on Dockerhub account

2017-07-19 Thread Fatih Degirmenci
Hi,

It looks nice!

Here are some comments.

The most important comment I have is about the ability to run docker builds for 
patches as part of verify jobs and post feedback to OPNFV Gerrit. If we go this 
path, we will have pre-merge builds done on our machines and post-merge builds 
done on Docker Hub, which will result in at least the same amount of maintenance 
effort if not more. Also, things might turn out differently due to having 2 
different environments where the builds are done.

The other comments are about the synching, the number of concurrent builds, and 
the visibility.

The synching will definitely slow things down as we need to wait for it to be 
done. This might be annoying when a crucial bugfix needs to be merged/built. 
The other possibility is if/when we have issues with synching, which might 
further delay builds.

The number of concurrent builds will cause limitations from time to time, and 
some builds will have to wait in the Docker Hub queue.

And finally there will be 2 places to look at for the builds/logs for different 
things: OPNFV Jenkins and Docker Hub.

Also, the synching and the concurrent build limit will contribute to the 
increase in time to get feedback as well. Time to get feedback will increase to 
time to synch + possible queueing from direct/post-merge triggered builds on our 
Jenkins.

Ps. I will be one of the happy persons if we move to Docker Hub so I can get 
rid of maintaining build servers. But I also need to highlight some of the 
limitations if we do this.

/Fatih

On 19 Jul 2017, at 15:30, Jose Lausuch  wrote:

Hi,
 
Following up on the discussion about how to build our Docker images, I started 
a trial with Trevor Bramwell with automated builds on Dockerhub for some of the 
new Functest Docker images:
https://hub.docker.com/r/opnfv/functest-core/builds/
https://hub.docker.com/r/opnfv/functest-smoke/builds/
https://hub.docker.com/r/opnfv/functest-healthcheck/builds/
 
It triggers a build after the corresponding repository in Github (mirror) has 
new code. Basically, whenever a new patch is merged on OPNFV gerrit and synched 
with Github.
 
There is currently a small limitation in the OPNFV Dockerhub account: it can 
build only 1 image at a time, but we can change that up to 5 or more parallel 
builds by requesting an account upgrade to LF. You can see the pricing plan 
here:
https://hub.docker.com/account/billing-plans/
 
We could use this to avoid load on our build servers, use them for something 
else and of course stop maintaining docker builds in OPNFV.
I would like to know your opinion on that.
 
Thanks,
Jose
 
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] SFC Project

2017-07-19 Thread Pavan Gupta
Hi Andres/Manuel,
I could run the test cases to completion, however, each one passed partially. I 
am not sure if the error msgs from the neutron log point to a subset failure of 
each test case. Let me know if you have any further pointers.
Pavan

SFC.log
2017-07-19 11:33:49,911 - __main__ - INFO - Results of test case 
'sfc_one_chain_two_service_functions - ODL-SFC Testing SFs when they are 
located on the same chain':
{'status': 'FAIL', 'details': [{'HTTP works': 'PASS'}, {'HTTP not blocked': 
'FAIL'}]}

2017-07-19 11:39:17,153 - __main__ - INFO - Results of test case 
'sfc_two_chains_SSH_and_HTTP - ODL-SFC tests':
{'status': 'FAIL', 'details': [{'SSH Blocked': 'FAIL'}, {'HTTP works': 'PASS'}, 
{'HTTP Blocked': 'FAIL'}, {'SSH works': 'PASS'}]}

2017-07-19 11:40:57,321 - __main__ - INFO - Results of test case 
'sfc_symmetric_chain - Verify the behavior of a symmetric service chain':
{'status': 'FAIL', 'details': [{'HTTP works': 'PASS'}, {'HTTP Blocked': 
'FAIL'}]}



Neutron-all.log
2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache Traceback (most 
recent call last):
2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache   File 
"/usr/lib/python2.7/dist-packages/networking_odl/common/cache.py", line 120, in 
fetch_all
2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache for key, 
value in self._fetch_all(new_entry_keys):
2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache   File 
"/usr/lib/python2.7/dist-packages/networking_odl/ml2/network_topology.py", line 
228, in _fetch_and_parse_network_topology
2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache .format(', 
'.join(addresses)))
2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache ValueError: No 
such topology element for given host addresses: node-4.domain.tld
2017-07-19 11:35:17.094 12923 ERROR networking_odl.common.cache
2017-07-19T11:35:17.097015+00:00 node-1 neutron-server: 2017-07-19 11:35:17.096 
12923 ERROR networking_odl.ml2.network_topology 
[req-960c5cd1-947a-47a6-a2dd-d002dce56176 - - - - -] Network topology element 
has failed binding port:
2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology 
Traceback (most recent call last):
2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology   File 
"/usr/lib/python2.7/dist-packages/networking_odl/ml2/network_topology.py", line 
117, in bind_port
2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology 
port_context, vif_type, self._vif_details)
2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology   File 
"/usr/lib/python2.7/dist-packages/networking_odl/ml2/ovsdb_topology.py", line 
175, in bind_port
2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology 
_('Unable to find any valid segment in given context.'))
2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology 
ValueError: Unable to find any valid segment in given context.
2017-07-19 11:35:17.096 12923 ERROR networking_odl.ml2.network_topology
2017-07-19T11:35:17.097555+00:00 node-1 neutron-server: 2017-07-19 11:35:17.097 
12923 ERROR networking_odl.ml2.network_topology 
[req-960c5cd1-947a-47a6-a2dd-d002dce56176 - - - - -] Unable to bind port 
element for given host and valid VIF types:
2017-07-19T11:35:17.098497+00:00 node-1 neutron-server: 2017-07-19 11:35:17.098 
12923 ERROR neutron.plugins.ml2.managers 
[req-960c5cd1-947a-47a6-a2dd-d002dce56176 - - - - -] Failed to bind port 
d09ae072-6b8b-423b-8f07-f562b819ee6c on host node-4.domain.tld for vnic_type 
normal using segments [{'segmentation_id': None, 'physical_network': 
u'physnet1', 'id': u'd106fed7-3947-4a29-8506-7841d8a91992', 'network_type': 
u'flat'}]

> On 18-Jul-2017, at 6:31 PM, andres.sanchez.ra...@estudiant.upc.edu wrote:
> 
> Hello Pavan,
> 
> I encountered similar errors in my SFC log, my script also gave an error when 
> waiting for the instance to come up, but I have not been able to resolve it. 
> Looking into your logs I think the problem is in Neutron, so you should 
> probably check the Neutron logs, and also validate that you can manually start 
> instances and assign floating IPs to them. I will let you know if I am able 
> to resolve my issues.
> 
> Best regards,
> 
> Quoting "Pavan Gupta" :
> 
>> Hi Andres,
>> I ran ‘functest openstack clean’ and that took care of the ‘SFC already exist’ 
>> error. The test ran further till it hit the following issue. In case, you 
>> have come across this issue, let me know.
>> 
>> 
>> 
>> SFC.log
>> 
>> 2017-07-18 08:45:38,445 - ovs_logger - ERROR - list index out of range
>> 2017-07-18 08:45:38,447 - sfc.lib.utils - INFO - This is the first_RSP:
>> 2017-07-18 08:45:38,589 - ovs_logger - ERROR - list index out of range
>> 2017-07-18 08:45:38,590 - sfc.lib.utils - INFO - These are the rsps: 
>> [u'0x24d']
>> 2017-07-18 08:45:39,592 - sfc.lib.utils - INFO - classification rules updated
>> 2017-07-18 08:45:39,592 - functest_utils - INFO - 

Re: [opnfv-tech-discuss] [infra] xci-aio deployment failed on on an Intel NUC Kit NUC6i7KYK

2017-07-19 Thread Markos Chandras
Hello,

On 19/07/17 13:30, Dave Urschatz wrote:
> Hi Markos.
> 
> Thanks for your reply.
> 
> The distribution is Ubuntu 16.04 as per XCI Wiki.
> 
> Here is the failure:
> 

The real problem is this

> self.install_requires:\\nAttributeError: Distribution instance has no 
> attribute 'install_requires'\",

This has been reported in upstream openstack
http://lists.openstack.org/pipermail/openstack-dev/2017-July/119694.html

This is caused by the latest setuptools release

The upstream OpenStack Ansible project has a workaround in place
https://review.openstack.org/#/c/483874/

but we don't have that in the XCI yet

Your best bet right now is to do the following

export OPENSTACK_OSA_VERSION=d4ae08646d1c1192e5806be2f81b1748f520fd39
./xci-deploy.sh

but it's not guaranteed to work at this point.

-- 
markos

SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [infra] xci-aio deployment failed on on an Intel NUC Kit NUC6i7KYK

2017-07-19 Thread Dave Urschatz
Hi Markos.

Thanks for your reply.

The distribution is Ubuntu 16.04 as per XCI Wiki.

Here is the failure:

fatal: [aio1]: FAILED! => (3 attempts, rc=2, start 2017-07-15 01:30:30.238887, end 2017-07-15 01:30:31.556473)

cmd: python /opt/get-pip.py --isolated pip==9.0.1 setuptools==33.1.1 wheel==0.29.0

stderr:
Exception:
Traceback (most recent call last):
  File "/tmp/tmpV8cx9j/pip.zip/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/tmp/tmpV8cx9j/pip.zip/pip/commands/install.py", line 342, in run
    prefix=options.prefix_path,
  File "/tmp/tmpV8cx9j/pip.zip/pip/req/req_set.py", line 784, in install
    **kwargs
  File "/tmp/tmpV8cx9j/pip.zip/pip/req/req_install.py", line 851, in install
    self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
  File "/tmp/tmpV8cx9j/pip.zip/pip/req/req_install.py", line 1064, in move_wheel_files
    isolated=self.isolated,
  File "/tmp/tmpV8cx9j/pip.zip/pip/wheel.py", line 247, in move_wheel_files
    prefix=prefix,
  File "/tmp/tmpV8cx9j/pip.zip/pip/locations.py", line 140, in distutils_scheme
    d = Distribution(dist_args)
  File "/usr/local/lib/python2.7/dist-packages/setuptools/dist.py", line 365, in __init__
    self._finalize_requires()
  File "/usr/local/lib/python2.7/dist-packages/setuptools/dist.py", line 372, in _finalize_requires
    if not self.install_requires:
AttributeError: Distribution instance has no attribute 'install_requires'

stdout:
Requirement already up-to-date: pip==9.0.1 in /usr/local/lib/python2.7/dist-packages
Collecting setuptools==33.1.1
  Using cached setuptools-33.1.1-py2.py3-none-any.whl
Collecting wheel==0.29.0
  Using cached wheel-0.29.0-py2.py3-none-any.whl
Installing collected packages: setuptools, wheel
  Found existing installation: setuptools 36.2.0
    Uninstalling setuptools-36.2.0:
      Successfully uninstalled setuptools-36.2.0
  Rolling back uninstall of setuptools

[opnfv-tech-discuss] OPNFV Docker builds on Dockerhub account

2017-07-19 Thread Jose Lausuch
Hi,

Following up on the discussion about how to build our Docker images, I started 
a trial with Trevor Bramwell with automated builds on Dockerhub for some of the 
new Functest Docker images:

https://hub.docker.com/r/opnfv/functest-core/builds/

https://hub.docker.com/r/opnfv/functest-smoke/builds/

https://hub.docker.com/r/opnfv/functest-healthcheck/builds/



It triggers a build after the corresponding repository in Github (mirror) has 
new code. Basically, whenever a new patch is merged on OPNFV gerrit and synched 
with Github.



There is currently a small limitation in the OPNFV Dockerhub account: it can 
build only 1 image at a time, but we can change that up to 5 or more parallel 
builds by requesting an account upgrade to LF. You can see the pricing plan 
here:

https://hub.docker.com/account/billing-plans/



We could use this to avoid load on our build servers, use them for something 
else and of course stop maintaining docker builds in OPNFV.

I would like to know your opinion on that.
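
As a quick way to sanity-check that a merge actually triggered a new automated 
build, a minimal sketch against the public Docker Hub v2 repository API (the 
endpoint layout and field names are the generic Docker Hub ones, assumed here 
rather than taken from the trial setup):

# Sketch: list the most recent tags of an image on Docker Hub to confirm that
# an automated build published after a merge. Uses the public v2 API.
import requests

def latest_tags(repo, count=5):
    url = "https://hub.docker.com/v2/repositories/{}/tags/".format(repo)
    resp = requests.get(url, params={"page_size": count})
    resp.raise_for_status()
    return [(tag["name"], tag["last_updated"]) for tag in resp.json()["results"]]

for name, updated in latest_tags("opnfv/functest-core"):
    print(name, updated)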



Thanks,

Jose


___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [releng][opnfvdocs]document verify rtd job failed

2017-07-19 Thread Sofia Wallin
Hi Matthew,
Thanks for reaching out.

If I look here it seems like this patch is the one failing.
But I’m not sure how to solve this…

Hoping that Aric or Trevor will manage to help.

//Sofia


From: Lijun 
Date: Wednesday, 19 July 2017 at 09:45
To: Aric Gardner , 
"tbramw...@linuxfoundation.org" , Sofia Wallin 
, "julien...@gmail.com" , 
"shang.xiaod...@zte.com.cn" 
Cc: OPNFV 
Subject: [opnfv-tech-discuss][releng][opnfvdocs]document verify rtd job failed

Hi

Since 07/18 the document verify rtd job for every project's documents has 
failed, such as this one:
https://build.opnfv.org/ci/job/docs-verify-rtd-master/1590/console

Can somebody try to solve this? One assumption is that it is caused by 
https://gerrit.opnfv.org/gerrit/#/c/37609/
since it has some .rst files not included under the /docs directory and has no 
doc-verify-rtd job? I am not sure.

Any suggestions would be helpful; this blocks many patches.


Best regards

/MatthewLi
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [SFC] Service Function Chaining in OpenStack using OpenDaylight

2017-07-19 Thread Raúl Álvarez Pinilla
Manuel, Tim, thank you very much for the information on the current status.


I will try to finish testing the ODL demos (SFC103 and SFC104) and I will continue 
with the test case scenario that you mention, Manuel.


Best regards,


Raúl Álvarez Pinilla



From: Tim Rozet 
Sent: Monday, 17 July 2017 21:25
To: Manuel Buil
Cc: Raúl Álvarez Pinilla; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [SFC] Service Function Chaining in OpenStack 
using OpenDaylight

Hi Manuel,
I only tested networking-sfc <--> ODL <--> OVS NSH with devstack.  We are 
working in the next OPNFV release to support it in the Apex installer.  I think 
we will need some fixes to the networking-sfc driver for ODL (lives in 
networking-odl) and ODL itself for Carbon.

Thanks,

Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Manuel Buil" 
To: "Raúl Álvarez Pinilla" , 
opnfv-tech-discuss@lists.opnfv.org
Cc: "Tim Rozet" 
Sent: Thursday, July 13, 2017 6:29:03 AM
Subject: Re: [opnfv-tech-discuss] [SFC] Service Function Chaining in OpenStack 
using OpenDaylight

Hello Raul,

When using fuel, the available scenario is Tacker + OpenStack + ODL +
OVS(+NSH) implemented in OPNFV. We have also two test cases running
against that deployment and we are running them everyday
(successfully!):

https://build.opnfv.org/ci/job/functest-fuel-baremetal-daily-danube/870
/ (this is yesterday's run)

Note that we are currently using a non-maintained version of Tacker
which configures ODL directly through an ODL plug-in inside Tacker. The
upstream version of Tacker configures ODL through networking-sfc, a
neutron subproject which is capable of configuring ODL. The link you
mention is talking about that integration with networking-sfc. We would
like to add that integration in the next release of OPNFV SFC and
Miguel Lavalle from our team is looking into that. That way we will be
able to use upstream tacker. Anyway, if you want to mirror what we test
everyday in OPNFV, follow this guide:

https://wiki.opnfv.org/display/sfc/OPNFV-SFC+Functest+test+cases

And if you want to collaborate and help us, for example, with the
integration of networking-sfc, you are more than welcome! We need
people!

Having said so, I think networking-sfc is already working in the APEX
installer. @Tim: can you confirm this?

Regards,
Manuel


On Thu, 2017-07-13 at 10:07 +, Raúl Álvarez Pinilla wrote:
>
>
> Hi all,
>
>
>
> > > > I have deployed OPNFV Danube 2.0 with Fuel and I would like to test
SFC in the OpenStack environment through OpenDaylight. I am using
OpenDaylight and NSH plugins in Fuel but I am not sure if this
feature is completely implemented or not.
>
>
>
> > > I have seen SFC103 and SFC104 Demos of OpenDaylight but nothing
related to OpenStack. In addition, this page (https://docs.openstack.
org/networking-odl/latest/specs/sfc-driver.html)
> >  mention that 'currently there is no formal integration mechanism to
consume OpenDaylight as an SFC provider for networking-sfc'.
>
>
>
> > So, is this working right now? Are there some manuals related to this
OpenDaylight SFC integration with OpenStack?
>
>
>
> Thank you very much.
>
>
>
> Best regards.
>
>
>
>
>
>
>
>
>
>
> ___
> opnfv-tech-discuss mailing list
> opnfv-tech-discuss@lists.opnfv.org
> https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [releng][opnfvdocs]document verify rtd job failed

2017-07-19 Thread Lijun (Matthew)
Hi

Since 07/18 the document verify rtd job for every project's documents has 
failed, such as this one:
https://build.opnfv.org/ci/job/docs-verify-rtd-master/1590/console

Can somebody try to solve this? One assumption is that it is caused by 
https://gerrit.opnfv.org/gerrit/#/c/37609/
since it has some .rst files not included under the /docs directory and has no 
doc-verify-rtd job? I am not sure.

Any suggestions would be helpful; this blocks many patches.


Best regards

/MatthewLi
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [Bottlenecks] Bottlenecks weekly meeting 7-20 (1:00-2:00 UTC, Thursday, 9:00-10:00 Beijing Time, Thursday, PDT 18:00-19:00 Wednesday )

2017-07-19 Thread Yuyang (Gabriel)
BEGIN:VCALENDAR
METHOD:REQUEST
PRODID:Microsoft Exchange Server 2010
VERSION:2.0
BEGIN:VTIMEZONE
TZID:W. Australia Standard Time
BEGIN:STANDARD
DTSTART:16010101T00
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T00
TZOFFSETFROM:+0800
TZOFFSETTO:+0800
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
ORGANIZER;CN=Yuyang (Gabriel):MAILTO:gabriel.yuy...@huawei.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=Tianhongbo
 :MAILTO:hongbo.tianhon...@huawei.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=Lijun (Mat
 thew):MAILTO:matthew.li...@huawei.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=liangqi (D
 ):MAILTO:liang...@huawei.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=Liyiting:M
 AILTO:liyit...@huawei.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='mrebellon
 @sandvine.com':MAILTO:mrebel...@sandvine.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=wangyaogua
 ng (A):MAILTO:sunshine.w...@huawei.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='michael.a
 .ly...@intel.com':MAILTO:michael.a.ly...@intel.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=limingjian
 g:MAILTO:limingji...@huawei.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=邓灵莉/
 Lingli Deng:MAILTO:denglin...@chinamobile.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='qwyang012
 6...@gmail.com':MAILTO:qwyang0...@gmail.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=Prakash Ra
 mchandran:MAILTO:prakash.ramchand...@huawei.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=opnfv-tech
 -disc...@lists.opnfv.org:MAILTO:opnfv-tech-discuss@lists.opnfv.org
DESCRIPTION;LANGUAGE=zh-CN:Hi\,\n\nThe Bottlenecks weekly meeting will be h
 eld at 1:00-2:00 UTC\, Thursday\, 9:00-10:00 Beijing Time\, Thursday\, PDT
  18:00-19:00 Wednesday.\nWelcome to join our discussion. Details of this m
 eeting are shown below.\n\n\nAgenda:\n1.  Discussion with Yardstick ab
 out the scaling test\n2.  Bottlenecks E Rel. Discussion\n3.  Stres
 s testing Discussion\n4.  Action Item Review\n\nMeeting Resources\n\nP
 lease join the meeting from your computer\, tablet or smartphone.\nhttps:/
 /global.gotomeeting.com/join/391235029\n\nYou can also dial in using your 
 phone.\nUnited States (Toll-free): 1 877 309 2070\nUnited States : +1 (312
 ) 757-3119\n\n\nAccess Code: 882-532-573\n\n\nBest\,\nYang\n\n\n\n\n\n
SUMMARY;LANGUAGE=zh-CN:[Bottlenecks] Bottlenecks weekly meeting 7-20 (1:00-
 2:00 UTC\, Thursday\, 9:00-10:00 Beijing Time\, Thursday\, PDT 18:00-19:00
  Wednesday )
DTSTART;TZID=W. Australia Standard Time:20170720T09
DTEND;TZID=W. Australia Standard Time:20170720T10
UID:04008200E00074C5B7101A82E00830B8B18EA200D301000
 010005F79673E12EF504D9368EF70C5CB32DA
CLASS:PUBLIC
PRIORITY:5
DTSTAMP:20170719T072048Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:0
LOCATION;LANGUAGE=zh-CN:https://global.gotomeeting.com/join/391235029
X-MICROSOFT-CDO-APPT-SEQUENCE:0
X-MICROSOFT-CDO-OWNERAPPTID:-485263391
X-MICROSOFT-CDO-BUSYSTATUS:TENTATIVE
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-INSTTYPE:0
X-MICROSOFT-DISALLOW-COUNTER:FALSE
BEGIN:VALARM
ACTION:DISPLAY
DESCRIPTION:REMINDER
TRIGGER;RELATED=START:-PT15M
END:VALARM
END:VEVENT
END:VCALENDAR
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss