Re: [openstack-dev] [Tempest][qa] Adding tags to commit messages

2013-12-24 Thread Masayuki Igawa
Hi,

On Tue, Dec 24, 2013 at 3:47 PM, Yair Fried yfr...@redhat.com wrote:
 Hi,
 Suggestion: Please consider tagging your Tempest commit messages the same way 
 you do your mails in the mailing list

 Explanation: Since tempest is a single project testing multiple OpenStack 
 projects, we have a very diverse collection of patches as well as reviewers. 
 Tagging our commit messages will allow us to classify patches and thus:
 1. Allow reviewers to focus on patches related to their area of expertise
 2. Track trends in patches - I think we all know that we lack in Neutron 
 testing, for example, but can we assess how many network-related patches are 
 awaiting review?
 3. Future automation of flagging interesting patches

 You can usually tell all of this from reviewing the patch, but by then - 
 you've spent time on a patch you might not even be qualified to review.
 I suggest we tag our patches with, to start with, the components we are 
 looking to test, and the type of test (scenario, api, ...), and that reviewers 
 should -1 untagged patches.

 I think the tagging should be the 2nd line in the message:

 ==
 Example commit message

 [Neutron][Nova][Network][Scenario]

 Explanation of how this scenario tests both Neutron and Nova
 Network performance

 Change-Id: XXX
 ===

 I would like this to start immediately but what do you guys think?
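Point 3 of the proposal (future automation) could start as a small script run over commit messages; a hypothetical sketch — not an existing gate job — that flags messages whose line after the subject carries no [Tag] markers:

```python
import re

# A tag line is one or more bracketed words, e.g. [Neutron][Nova][Scenario]
TAG_RE = re.compile(r'^(\[[A-Za-z-]+\])+$')

def has_tag_line(commit_message):
    """Return True if the first line after the subject consists of [Tag] markers."""
    lines = [l.strip() for l in commit_message.splitlines() if l.strip()]
    # lines[0] is the subject; the proposal puts the tags on the next line
    return len(lines) > 1 and bool(TAG_RE.match(lines[1]))

print(has_tag_line("Example commit message\n\n"
                   "[Neutron][Nova][Network][Scenario]\n\nDetails"))  # True
```

A reviewer (or a bot) could -1 anything where this returns False.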

+1

And how about also doing the service tagging in the subject (1st line)?
For example:
  Neutron: Example commit subject

Because the gerrit dashboard currently shows only the subject.
I think reviewers could find interesting patches easily if the
dashboard showed the tags.
This is not a strong opinion, because some scenario tests may have
several service tags.

-- 
Masayuki Igawa

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest][qa] Adding tags to commit messages

2013-12-24 Thread David Kranz

On 12/24/2013 06:32 AM, Sean Dague wrote:

On 12/24/2013 01:47 AM, Yair Fried wrote:

Hi,
Suggestion: Please consider tagging your Tempest commit messages the same way 
you do your mails in the mailing list

Explanation: Since tempest is a single project testing multiple OpenStack 
projects, we have a very diverse collection of patches as well as reviewers. 
Tagging our commit messages will allow us to classify patches and thus:
1. Allow reviewers to focus on patches related to their area of expertise
2. Track trends in patches - I think we all know that we lack in Neutron 
testing, for example, but can we assess how many network-related patches are 
awaiting review?
3. Future automation of flagging interesting patches

You can usually tell all of this from reviewing the patch, but by then - you've 
spent time on a patch you might not even be qualified to review.
I suggest we tag our patches with, to start with, the components we are looking 
to test, and the type of test (scenario, api, ...), and that reviewers should -1 
untagged patches.

I think the tagging should be the 2nd line in the message:

==
Example commit message

[Neutron][Nova][Network][Scenario]

Explanation of how this scenario tests both Neutron and Nova
Network performance

Change-Id: XXX
===

I would like this to start immediately but what do you guys think?



-2

I think this is just extra clutter, please don't.

Also, it's Holiday season so tons of people are out, policy changes are
completely on hold until January.

Yes


The commit message should be meaningful so I can read it, a bunch of
tags I find just ugly and don't want to go near. We already have this
information in the directory structure for API tests. And in service
tags for the scenario tests.
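For context, the service tags Sean mentions work roughly like the sketch below — a simplified illustration of the decorator pattern, not tempest's actual implementation:

```python
def services(*service_names):
    """Attach service tags to a test method so a runner can filter on them."""
    def decorator(fn):
        fn._services = service_names  # stored as a plain function attribute
        return fn
    return decorator

class TestCrossService:
    @services('compute', 'network')
    def test_server_connectivity(self):
        pass

print(TestCrossService.test_server_connectivity._services)  # ('compute', 'network')
```

A test runner can then select or report tests by inspecting the `_services` attribute instead of parsing commit messages.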

2 & 3 you can get through gerrit API queries. Replicating that information
in another place is just error prone.
Perhaps so.  Maybe we can figure out some helpful workflows for tempest
reviewers and share useful queries.
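A worked example of the kind of query being discussed — a sketch that builds a Gerrit search URL (assuming a Gerrit version with the `/changes/` REST endpoint; the host and search terms are illustrative):

```python
from urllib.parse import quote

def gerrit_query_url(host, project, keyword):
    """Build a Gerrit REST changes query, e.g. open tempest patches
    whose commit message mentions a given service."""
    q = 'status:open project:{} message:{}'.format(project, keyword)
    return 'https://{}/changes/?q={}'.format(host, quote(q))

print(gerrit_query_url('review.openstack.org', 'openstack/tempest', 'neutron'))
```

The same operators work in the web UI search box, so a shared list of such queries would cost reviewers nothing to adopt.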

-Sean





Re: [openstack-dev] Gate is broken right now

2013-12-24 Thread Jeremy Stanley
On 2013-12-24 08:41:29 -0600 (-0600), Matt Riedemann wrote:
 On Tuesday, December 24, 2013 6:11:25 AM, Chmouel Boudjnah wrote:
 We currently have an issue with the grenade job due to a new release
 of boto.
 
 Sean was kind enough (on his vacation) to fix/workaround it here:
 [...]
 
 The bug to recheck against is 1263824.

Also, the fix is merged as of a few hours ago, so we shouldn't
expect any new recurrences.
-- 
Jeremy Stanley



[openstack-dev] [TripleO] icehouse-1 test disk images setup

2013-12-24 Thread James Slagle
I built some vm image files for testing with TripleO based off of the
icehouse-1 milestone tarballs for Fedora and Ubuntu.  If folks are
interested in giving them a try you can find a set of instructions and
how to download the images at:

https://gist.github.com/slagle/981b279299e91ca91bd9

The steps are similar to the devtest process, but you use the prebuilt
vm images for the undercloud and overcloud and don't need a seed vm.
When the undercloud vm is started it uses the OpenStack Configuration
Drive as a data source for cloud-init.  This eliminates some of the
manual configuration that would otherwise be needed.  To that end, the
steps currently use some different git repos for some of the tripleo
tooling since not all of that functionality is upstream yet.  I can
submit those upstream, but they didn't make a whole lot of sense
without the background, so I wanted to provide that first.

At the very least, this could be an easier way for developers to get
set up with tripleo to do a test overcloud deployment, to develop on
things like Tuskar.


-- 
James Slagle



Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2013-12-24 Thread Clint Byrum
Excerpts from James Slagle's message of 2013-12-24 08:50:32 -0800:
 I built some vm image files for testing with TripleO based off of the
 icehouse-1 milestone tarballs for Fedora and Ubuntu.  If folks are
 interested in giving them a try you can find a set of instructions and
 how to download the images at:
 
 https://gist.github.com/slagle/981b279299e91ca91bd9
 

This is great, thanks for working hard to make the onramp shorter. :)

 The steps are similar to the devtest process, but you use the prebuilt
 vm images for the undercloud and overcloud and don't need a seed vm.
 When the undercloud vm is started it uses the OpenStack Configuration
 Drive as a data source for cloud-init.  This eliminates some of the
 manual configuration that would otherwise be needed.  To that end, the
 steps currently use some different git repos for some of the tripleo
 tooling since not all of that functionality is upstream yet.  I can
 submit those upstream, but they didn't make a whole lot of sense
 without the background, so I wanted to provide that first.
 

Why would config drive be easier than putting a single json file in
/var/lib/heat-cfntools/cfn-init-data the way the seed works?

Do you experience problems with that approach that we haven't discussed?

If I were trying to shrink devtest from 3 clouds to 2, I'd eliminate the
undercloud, not the seed. The seed is basically an undercloud in a VM
with a static configuration. That is what you have described but done
in a slightly different way. I am curious what the benefits of this
approach are.

 At the very least, this could be an easier way for developers to get
 setup with tripleo to do a test overcloud deployment to develop on
 things like Tuskar.
 

Don't let my questions discourage you. This is great as-is!



Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2013-12-24 Thread James Slagle
On Tue, Dec 24, 2013 at 12:26 PM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from James Slagle's message of 2013-12-24 08:50:32 -0800:
 I built some vm image files for testing with TripleO based off of the
 icehouse-1 milestone tarballs for Fedora and Ubuntu.  If folks are
 interested in giving them a try you can find a set of instructions and
 how to download the images at:

 https://gist.github.com/slagle/981b279299e91ca91bd9


 This is great, thanks for working hard to make the onramp shorter. :)

 The steps are similar to the devtest process, but you use the prebuilt
 vm images for the undercloud and overcloud and don't need a seed vm.
 When the undercloud vm is started it uses the OpenStack Configuration
 Drive as a data source for cloud-init.  This eliminates some of the
 manual configuration that would otherwise be needed.  To that end, the
 steps currently use some different git repos for some of the tripleo
 tooling since not all of that functionality is upstream yet.  I can
 submit those upstream, but they didn't make a whole lot of sense
 without the background, so I wanted to provide that first.


 Why would config drive be easier than putting a single json file in
 /var/lib/heat-cfntools/cfn-init-data the way the seed works?

 Do you experience problems with that approach that we haven't discussed?

That approach works fine if you're going to build the seed image.  In
devtest, you modify the cfn-init-data with a sed command, then include
it in your seed image build.  So, pretty much everyone that runs devtest
ends up with a unique seed image.

In this approach, everyone uses the same undercloud vm image.  In
order to make that work, there's a script to build the config drive
iso, and that is then used to make config changes at boot time to the
undercloud.  Specifically, there's cloud-init data on the config drive
iso to update the virtual power manager user and ssh key, and to set
the user's ssh key in authorized_keys.
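The config drive layout described here can be reproduced in a few lines — a hedged sketch, not the actual script from the gist (file contents, uuid, and image names are illustrative):

```python
import json
import os
import tempfile

def build_config_drive_tree(root, ssh_key):
    """Lay out the files cloud-init's ConfigDrive datasource reads, then
    return the genisoimage command that packs them into an iso."""
    latest = os.path.join(root, 'openstack', 'latest')
    os.makedirs(latest)
    with open(os.path.join(latest, 'meta_data.json'), 'w') as f:
        json.dump({'uuid': 'undercloud', 'public_keys': {'default': ssh_key}}, f)
    with open(os.path.join(latest, 'user_data'), 'w') as f:
        # cloud-config that puts the user's key into authorized_keys
        f.write('#cloud-config\nssh_authorized_keys:\n  - %s\n' % ssh_key)
    # cloud-init only probes volumes labelled "config-2"
    return ['genisoimage', '-R', '-V', 'config-2', '-o', 'config.iso', root]

cmd = build_config_drive_tree(tempfile.mkdtemp(), 'ssh-rsa AAAA example-user')
```

Booting the prebuilt undercloud vm with the resulting iso attached lets cloud-init apply the per-user settings without rebuilding the image.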


 If I were trying to shrink devtest from 3 clouds to 2, I'd eliminate the
 undercloud, not the seed. The seed is basically an undercloud in a VM
 with a static configuration. That is what you have described but done
 in a slightly different way. I am curious what the benefits of this
 approach are.

True, there's not a whole lot of difference between eliminating the
seed or the undercloud.  You eliminate either one, then call your
first cloud whichever you want.  To me, the seed has always seemed
short lived, once you use it to deploy the undercloud it can go away
(eventually, anyway).  So, that's why I am calling the first cloud
here the undercloud.  Plus, since it will eventually include Tuskar
and deploy the overcloud, it seemed more in line with the current
devtest flow to call it an undercloud.


 At the very least, this could be an easier way for developers to get
 setup with tripleo to do a test overcloud deployment to develop on
 things like Tuskar.


 Don't let my questions discourage you. This is great as-is!

Great, thanks.  I appreciate the feedback!


-- 
James Slagle



Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2013-12-24 Thread Clint Byrum
Excerpts from James Slagle's message of 2013-12-24 10:40:23 -0800:
 On Tue, Dec 24, 2013 at 12:26 PM, Clint Byrum cl...@fewbar.com wrote:
  Excerpts from James Slagle's message of 2013-12-24 08:50:32 -0800:
  I built some vm image files for testing with TripleO based off of the
  icehouse-1 milestone tarballs for Fedora and Ubuntu.  If folks are
  interested in giving them a try you can find a set of instructions and
  how to download the images at:
 
  https://gist.github.com/slagle/981b279299e91ca91bd9
 
 
  This is great, thanks for working hard to make the onramp shorter. :)
 
  The steps are similar to the devtest process, but you use the prebuilt
  vm images for the undercloud and overcloud and don't need a seed vm.
  When the undercloud vm is started it uses the OpenStack Configuration
  Drive as a data source for cloud-init.  This eliminates some of the
  manual configuration that would otherwise be needed.  To that end, the
  steps currently use some different git repos for some of the tripleo
  tooling since not all of that functionality is upstream yet.  I can
  submit those upstream, but they didn't make a whole lot of sense
  without the background, so I wanted to provide that first.
 
 
  Why would config drive be easier than putting a single json file in
  /var/lib/heat-cfntools/cfn-init-data the way the seed works?
 
  Do you experience problems with that approach that we haven't discussed?
 
 That approach works fine if you're going to build the seed image.   In
 devtest, you modify the cfn-init-data with a sed command, then include
 it in your build seed image.  So, everyone that runs devtest ends up
 with a unique seed image pretty much.
 

I had not considered this but it makes sense.

 In this approach, everyone uses the same undercloud vm image.  In
 order to make that work, there's a script to build the config drive
 iso, and that is then used to make config changes at boot time to the
 undercloud.  Specifically, there's cloud-init data on the config drive
 iso to update the virtual power manager user and ssh key, and to set
 the user's ssh key in authorized_keys.
 

Is this because it is less work to build an iso than to customize an
existing seed image? How hard would it be to just mount the guest image
and drop the json file in it?
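Dropping the json file into the guest image is indeed only a couple of commands with libguestfs; a hypothetical sketch (the target path is from this thread, the config keys are made up):

```python
import json
import os
import tempfile

def inject_cfn_init_data(image_path, config):
    """Build a virt-copy-in (libguestfs) command that drops a heat-cfntools
    JSON blob into a guest image at /var/lib/heat-cfntools/cfn-init-data."""
    workdir = tempfile.mkdtemp()
    # virt-copy-in keeps the source basename, so name the file correctly here
    src = os.path.join(workdir, 'cfn-init-data')
    with open(src, 'w') as f:
        json.dump(config, f)
    return ['virt-copy-in', '-a', image_path, src, '/var/lib/heat-cfntools/']

cmd = inject_cfn_init_data('undercloud.qcow2', {'power_manager_user': 'stack'})
# subprocess.check_call(cmd) would perform the copy on a real image
```

This avoids attaching any extra media at boot, at the cost of modifying the downloaded image itself.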

Anyway I like the approach, though I generally do not like config drive.
:)

 
  If I were trying to shrink devtest from 3 clouds to 2, I'd eliminate the
  undercloud, not the seed. The seed is basically an undercloud in a VM
  with a static configuration. That is what you have described but done
  in a slightly different way. I am curious what the benefits of this
  approach are.
 
 True, there's not a whole lot of difference between eliminating the
 seed or the undercloud.  You eliminate either one, then call your
 first cloud whichever you want.  To me, the seed has always seemed
 short lived, once you use it to deploy the undercloud it can go away
 (eventually, anyway).  So, that's why I am calling the first cloud
 here the undercloud.  Plus, since it will eventually include Tuskar
 and deploy the overcloud, it seemed more in line with the current
 devtest flow to call it an undercloud.
 

The more I think about it the more I think we should just take the three
cloud approach. The seed can be turned off as soon as the undercloud is
running, but it allows testing and modification of the seed to undercloud
transfer, which is something we are going to need to put work into at
some point. It would be a shame to force developers to switch gears and
use something entirely different when they need to get into that.

Perhaps we could just use your config drive approach for the seed all
the time. Then users can start with pre-built images, but don't have to
change everything when they want to start changing said images.

I'm not 100% convinced that it is needed, but I'd rather have one path
than two if we can manage that and not drive away potential
contributors.



Re: [openstack-dev] [trove] Delivering datastore logs to customers

2013-12-24 Thread Vipul Sabhaya
On Mon, Dec 23, 2013 at 8:59 AM, Daniel Morris
daniel.mor...@rackspace.com wrote:

   Vipul,

  I know we discussed this briefly in the Wednesday meeting but I still
 have a few questions.  I am not bought into the idea that we do not need
 to maintain the records of saved logs.   I agree that we do not need to
 enable users to download and manipulate the logs themselves via Trove (
 that can be left to Swift), but at a minimum, I believe that the system
 will still need to maintain a mapping of where the logs are stored in
 swift.  This is a simple addition to the list of available logs per
 datastore (an additional field of its swift location – a location exists,
 you know the log has been saved).  If we do not do this, how then does the
 user know where to find the logs they have saved or if they even exist in
 Swift without searching manually?  It may be that this is covered, but I
 don't see this represented in the BP.  Is the assumption that it is some
 known path?  I would expect to see the Swift location retuned on a GET of
 the available logs types for a specific instance (there is currently only a
 top-level GET for logs available per datastore type).

 The Swift location can be returned in the response to the POST/‘save’
operation.  We may consider returning a top-level immutable resource (like
‘flavors’) that, when queried, could return the base path for logs in Swift.


Logs are not meaningful to Trove, since you can’t act on them or perform
other meaningful Trove operations on them.  Thus I don’t believe they
qualify as a resource in Trove.  Multiple ‘save’ operations should not
result in a replace of the previous logs, it should just add to what may
already be there in Swift.


  I am also assuming in this case, and per the BP, that the user does
 not have the ability to select the storage location in Swift, and that this is
 controlled exclusively by the deployer; that you would only allow one
 occurrence of the log per datastore / instance; and that the behavior of
 writing a log more than once to the same location is that it will overwrite
 / append – but this is not detailed in the BP.

 The location should be decided by Trove, not the user.  We’ll likely need
to group them in Swift by InstanceID buckets.  I don’t believe we should do
appends/overwrites - new Logs saved would just add to what may already
exist.  If the user chooses they don’t need the logs, they can perform the
delete directly in Swift.
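The "add, don't overwrite" behavior combined with per-instance grouping implies object names something like the sketch below — purely illustrative, not the blueprint's actual naming scheme:

```python
from datetime import datetime, timezone

def log_object_path(instance_id, log_name, now=None):
    """Compute a per-instance Swift object name; the timestamp suffix means
    repeated 'save' calls add new objects instead of overwriting."""
    now = now or datetime.now(timezone.utc)
    return '{}/{}.{}'.format(instance_id, log_name,
                             now.strftime('%Y%m%dT%H%M%SZ'))

print(log_object_path('inst-123', 'mysql-error.log',
                      datetime(2013, 12, 24, 12, 0, 0)))
```

With a scheme like this, the "base path" a client needs is just the container plus the instance-ID prefix; everything under it can be listed and deleted directly in Swift.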



   Thanks,
 Daniel
 From: Vipul Sabhaya vip...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Friday, December 20, 2013 2:14 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [trove] Delivering datastore logs to
 customers

   Yep agreed, this is a great idea.

  We really only need two API calls to get this going:
 - List available logs to ‘save’
 - Save a log (to swift)

  Some additional points to consider:
  - We don’t need to create a record of every Log ‘saved’ in Trove.  These
 entries, treated as a Trove resource, aren't useful, since you don't
 actually manipulate that resource.
 - Deletes of Logs shouldn’t be part of the Trove API, if the user wants to
 delete them, just use Swift.
 - A deployer should be able to choose which logs can be ‘saved’ by their
 users


 On Wed, Dec 18, 2013 at 2:02 PM, Michael Basnight mbasni...@gmail.com wrote:

  I think this is a good idea and I support it. In today's meeting [1]
 there were some questions, and I encourage them to get brought up here. My
 only question is in regard to the tail of a file we discussed in irc.
 After talking about it w/ other trovesters, I think it doesn't make sense to
 tail the log for most datastores. I can't imagine finding anything useful in,
 say, a java application's last 100 lines (especially if a stack trace was
 present). But I don't want to derail, so let's try to focus on the deliver
 to swift first option.

  [1]
 http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-12-18-18.13.log.txt

  On Wed, Dec 18, 2013 at 5:24 AM, Denis Makogon dmako...@mirantis.com wrote:

  Greetings, OpenStack DBaaS community.

  I'd like to start discussion around a new feature in Trove. The
 feature I would like to propose covers manipulating  database log files.



  Main idea. Give user an ability to retrieve database log file for
 any purposes.

 Goals to achieve. Suppose we have an application (binary
  application, without source code) which requires a DB connection to perform
  data manipulations, and a user would like to perform development and debugging
  of that application; logs would also be useful for the audit process. Trove
  itself provides access only for CRUD operations inside of the database, so the
  user cannot access the instance directly and analyze its log files.
  Therefore, Trove should be able to provide ways to allow a user to download
 

[openstack-dev] [Nova][BluePrint Register] Shrink the volume when file in the instance was deleted.

2013-12-24 Thread Qixiaozhen
Hi,all

A blueprint has been registered about shrinking thin-provisioned volumes.

Thin provisioning means disk space is allocated only when the instance first 
writes data to an area of the volume.

However, when files in the instance are deleted, thin provisioning cannot 
handle this situation: the space those files had allocated cannot be 
released.

So it is necessary to shrink the volume when files are deleted in the 
instance.

The shrink operation could be executed manually by the user through the web 
portal or a CLI command, or run periodically in the background.
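For context, part of this can already be done offline today: if the guest first zeroes or discards the freed blocks (e.g. with fstrim or zerofree), re-converting the image drops those clusters, since qemu-img skips data that reads as zero. A hedged sketch of the command involved (not the blueprint's proposed mechanism):

```python
def resparsify_cmd(src, dst):
    """Build a qemu-img convert command that rewrites a qcow2 image,
    omitting clusters that read as zero (i.e. space the guest has
    explicitly freed and zeroed)."""
    return ['qemu-img', 'convert', '-O', 'qcow2', src, dst]

print(resparsify_cmd('volume.qcow2', 'volume-shrunk.qcow2'))
```

The blueprint's value would be doing this online and automatically, rather than as an offline maintenance step.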

Sincerely

Qi


Qi Xiaozhen
CLOUD OS PDU, IT Product Line, Huawei Enterprise Business Group
Mobile: +86 13609283376    Tel: +86 29-89191578





[openstack-dev] [Nova]Question about TaskAPI Blueprint

2013-12-24 Thread haruka tanizawa
Hi!
OpenStackers.

I am interested in Task API[0].
I saw this blueprint's milestone target was changed from icehouse-2 to
icehouse-3.

Is there any schedule about possibility of implementation?
Or is there any ideas and detailed design?
If you already have code, I really want to see the code.

Thanks.

Sincerely, Haruka Tanizawa

[0] https://blueprints.launchpad.net/nova/+spec/instance-tasks-api


[openstack-dev] [TripleO] UnderCloud & OverCloud

2013-12-24 Thread LeslieWang
Dear All,
Merry Christmas & Happy New Year!
I'm new to TripleO. After some investigation, I have one question on UnderCloud 
& OverCloud. Per my understanding, the UnderCloud pre-installs and sets up all 
the baremetal servers used for the OverCloud. It seems to assume all baremetal 
servers are installed in advance. My question comes from a green and elasticity 
point of view: initially, the OverCloud should have zero baremetal servers. On 
user demand, the OverCloud Nova scheduler should decide whether more baremetal 
servers are needed, then ask the UnderCloud to allocate them, which would use 
Heat to orchestrate the baremetal server start-up. Does that make sense? Is it 
already planned in the roadmap?

If UnderCloud resources are created/deleted elastically, why not have the 
OverCloud talk to Ironic to allocate resources directly? It seems it could 
achieve the same goal. What other features will the UnderCloud provide? Thanks 
in advance.
Best Regards,
Leslie


Re: [openstack-dev] Devstack Ceph

2013-12-24 Thread Rushi Agrawal
Hi Sebastien,

+1 from my side if Ceph can be installed on a single node.

I am interested in making a contribution towards this effort, but my
understanding of Ceph is only elementary at present.



Regards,
Rushi Agrawal,
OpenStack storage engineer,
Reliance Jio Infocomm
Ph: (+91) 99 4518 4519


On Tue, Dec 24, 2013 at 5:19 PM, Sebastien Han
sebastien@enovance.com wrote:

 Hello everyone,

 I’ve been working on a new feature for Devstack that includes a native
 support for Ceph.
 The patch includes the following:

 * Ceph installation (using the ceph.com repo)
 * Glance integration
 * Cinder integration (+ nova virsh secret)
 * Cinder backup integration
 * Partial Nova integration since master is currently broken. Lines are
 already there, the plan is to un-comment those lines later.
 * Everything works with Cephx (the Ceph authentication system).

 Would anyone be interested in seeing this go into Devstack mainstream?

 Cheers.

 
 Sébastien Han
 Cloud Engineer

 “Always give 100%. Unless you're giving blood.”

 Phone: +33 (0)1 49 70 99 72
 Mail: sebastien@enovance.com
 Address : 10, rue de la Victoire - 75009 Paris
 Web : www.enovance.com - Twitter : @enovance




Re: [openstack-dev] Fwd: ./run_test.sh Fails

2013-12-24 Thread Sayali Lunkad
Hey,

 I checked ~/.pip but I do not see any pip.conf file; I see only the pip log.

Thanks,
Sayali



On Mon, Dec 23, 2013 at 9:48 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2013-12-21 01:45, Sayali Lunkad wrote:


 Subject: ./run_test.sh fails to build environment

 Hello,
 I get this error when I try to set the environment for Horizon. Any idea
 why this is happening? I am running Devstack on a VM with Ubuntu 12.04.

 sayali@sayali:/opt/stack/horizon$ ./run_tests.sh

 [snip]

 Downloading/unpacking iso8601>=0.1.8 (from -r
 /opt/stack/horizon/requirements.txt (line 9))
   Error urlopen error [Errno -2] Name or service not known while getting
 https://pypi.python.org/packages/source/i/iso8601/iso8601-0.1.8.tar.gz#md5=b207ad4f2df92810533ce6145ab9c3e7 (from
 https://pypi.python.org/simple/iso8601/)
 Cleaning up...
 Exception:
 Traceback (most recent call last):
   File
 /opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/basecommand.py,
 line 134, in main
 status = self.run(options, args)
   File
 /opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/commands/install.py,
 line 236, in run
 requirement_set.prepare_files(finder, force_root_egg_info=self.bundle,
 bundle=self.bundle)
   File
 /opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/req.py,
 line 1092, in prepare_files
 self.unpack_url(url, location, self.is_download)
   File
 /opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/req.py,
 line 1238, in unpack_url
 retval = unpack_http_url(link, location, self.download_cache,
 self.download_dir)
   File
 /opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/download.py,
 line 602, in unpack_http_url
 resp = _get_response_from_url(target_url, link)
   File
 /opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/download.py,
 line 638, in _get_response_from_url
 resp = urlopen(target_url)
   File
 /opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/download.py,
 line 176, in __call__
 response = self.get_opener(scheme=scheme).open(url)
   File /usr/lib/python2.7/urllib2.py, line 400, in open
 response = self._open(req, data)
   File /usr/lib/python2.7/urllib2.py, line 418, in _open
 '_open', req)
   File /usr/lib/python2.7/urllib2.py, line 378, in _call_chain
 result = func(*args)
   File
 /opt/stack/horizon/.venv/local/lib/python2.7/site-packages/pip/download.py,
 line 155, in https_open
 return self.do_open(self.specialized_conn_class, req)
   File /usr/lib/python2.7/urllib2.py, line 1177, in do_open
 raise URLError(err)
 URLError: urlopen error [Errno -2] Name or service not known

 Storing complete log in /home/sayali/.pip/pip.log
 Command tools/with_venv.sh pip install --upgrade -r
 /opt/stack/horizon/requirements.txt -r
 /opt/stack/horizon/test-requirements.txt failed.
 None

 This looks like a simple download failure.  It happens sometimes with
 pypi.  It's probably not a bad idea to just configure pip to use our mirror
 as it's generally more stable.  You can see what we do in
 tripleo-image-elements here:
 https://github.com/openstack/tripleo-image-elements/blob/master/elements/pypi-openstack/pre-install.d/00-configure-openstack-pypi-mirror
 Mostly I think you just need to look at the pip.conf part.
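For anyone hitting the same failure, a minimal ~/.pip/pip.conf along the lines suggested would look like this — the index URL below is an example placeholder; substitute the mirror configured by the element linked above:

```ini
; ~/.pip/pip.conf (create the file if, as here, it does not exist yet)
[global]
; example placeholder - point this at the mirror you actually trust
index-url = http://pypi.example.org/simple
```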

 -Ben

