[openstack-dev] [Fuel] Let's change the way we distribute Fuel (was: [Fuel] Remove MOS DEB repo from master node)

2015-09-10 Thread Yuriy Taraday
Hello, thread!

First let me address some of the very good points Alex raised in his email.

On Wed, Sep 9, 2015 at 10:33 PM Alex Schultz  wrote:

> Fair enough, I just wanted to raise the UX issues around these types of
> things as they should go into the decision making process.
>

UX issues are definitely something we should address, even for ourselves: the
number of things that need to happen to deploy Master with just one small
change is enormous.


> Let me explain why I think having a local MOS mirror by default is bad:
>> 1) I don't see any reason why we should treat the MOS repo any differently
>> than all the other online repos. A user sees on the settings tab a list of
>> repos, one of which is local by default while the others are online. That can
>> make the user a little confused, can't it? A user can also be confused by the
>> fact that some of the repos can be cloned locally by fuel-createmirror while
>> others can't. That is not straightforward, NOT good fuel-createmirror UX.
>>
>
> I agree. The process should be the same and it should be just another
> repo. It doesn't mean we can't include a version on an ISO as part of a
> release.  Would it be better to provide the mirror on the ISO but not have
> it enabled by default for a release so that we can gather user feedback on
> this? This would include improved documentation and possibly allowing a
> user to choose their preference so we can collect metrics?
>

I think that instead of designing for the average user of a hypothetical
"spherical" Fuel, we should let the user decide what goes onto the ISO.

2) Having local MOS mirror by default makes things much more convoluted. We
>> are forced to have several directories with predefined names and we are
>> forced to manage these directories in nailgun, in upgrade script, etc. Why?
>> 3) When putting MOS mirror on ISO, we make people think that ISO is equal
>> to MOS, which is not true. It is possible to implement really flexible
>> delivery scheme, but we need to think of these things as they are
>> independent.
>>
>
> I'm not sure what you mean by this. Including a point in time copy on an
> ISO as a release is a common method of distributing software. Is this a
> messaging thing that needs to be addressed? Perhaps I'm not familiar with
> people referring to the ISO as being MOS.
>

It is so common that some people think it's very broken. But we can fix
that.

>> For large users it is easy to build a custom ISO and put there what they
>> need, but first we need to have a simple working scheme that is clear for
>> everyone. I think dealing with all repos the same way is what is going to
>> make things simpler.
>>
>
> Who is going to build a custom ISO? How does one request that? What
> resources are consumed by custom ISO creation process/request? Does this
> scale?
>

How about the user building the ISO on their own workstation?

This thread is not about internet connectivity, it is about aligning things.
>>
>
> You are correct in that this thread is not explicitly about internet
> connectivity, but they are related. Any changes to remove a local
> repository and only provide an internet based solution makes internet
> connectivity something that needs to be included in the discussion.  I just
> want to make sure that we properly evaluate this decision based on end user
> feedback not because we don't want to manage this from a developer
> standpoint.
>

We can make use of Internet connectivity in places other than the target data center.

Now, what do I mean by all that? Let's make a Fuel distribution that's easier
to develop and distribute, while making it more comfortable to use in the
process.

As Alex pointed out, the common way to distribute an OS is to put some
number of packages from a snapshot of a golden repo onto an ISO and let the
user install that. Let's call it the DVD way (although there was a time when
an OS could fit on a CD). The other, less common way of distributing an OS is
a small minimal ISO that uses an online repo to install everything. Let's call
it the MiniCD way.

Fuel currently uses the DVD way: we put everything the user will ever need
onto an ISO and hand it to them. Vladimir's proposal was to use something
similar to the MiniCD way: put only Fuel on the ISO and keep an online repo
running.

Note that I'll speak of Fuel as the installer people put on the MiniCD. It's a
bit bigger, but it deploys clouds, not just separate machines. "Packages and
OS" then translate to everything needed to deploy OpenStack: packages and
deploy scripts (puppet manifests, which could be packaged as well). We could
apply the same logic to the distribution of Fuel itself, but let's not get
into that right now.

Let's compare these two ways from the distributor's (D) and the user's (U)
points of view.

DVD way.
Pros:
- (D) a single piece to deliver to the user;
- (D,U) a snapshot of the repo put on the ISO is easier to cover with QA, so
it's better tested;
- (U) a one-time download for everything;
- (U) no need for Internet connectivity while you're installing the OS;
- (U) you can store the ISO and reuse it any number of times.
Cons:
- (D) you still have to maintain an online repo for updates;
- (D,U) it's hard to create a custom 

[openstack-dev] [Fuel][Plugins] SDK is updated with the latest information

2015-09-10 Thread Irina Povolotskaya
Hi to all,

Please be informed that the Fuel Plugin SDK now has a set of useful
instructions that cover the following topics:
- how to create a new project for Fuel Plugins in the /openstack namespace [1];
- how to add your plugin to DriverLog [2];
- how to form documentation for your plugin [3].

If you think any topics are still missing, please let me know.

Thanks.


[1] https://wiki.openstack.org/wiki/Fuel/Plugins#How_to_create_a_project

[2]
https://wiki.openstack.org/wiki/Fuel/Plugins#Add_your_plugin_to_DriverLog
[3]
https://wiki.openstack.org/wiki/Fuel/Plugins#Creating_documentation_for_Fuel_Plugins
-- 
Best regards,

Irina

*Business Analyst*


Re: [openstack-dev] [kilo-devstack] [disk-usage]

2015-09-10 Thread Sean Dague
disk = 0 does not mean there is no disk. It means the disk image won't
be expanded to a larger size. The disk used will be whatever your image
size is.

-Sean

On 09/10/2015 06:54 AM, Abhishek Talwar wrote:
> Hi Folks,
> 
> I have installed the devstack kilo version of OpenStack and created an
> instance on it with the flavor “m1.nano”, which gives a disk of 0 to the
> instance.
> 
> But while checking the disk usage of the instance using the “disk.usage”
> meter, it gives an output greater than 0, so how is that possible?
> 
> 
> stack@abhishek:/opt/stack/ceilometer$ nova show e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa
> 
> +--------------------------------------+----------------------------------------------------------------+
> | Property                             | Value                                                          |
> +--------------------------------------+----------------------------------------------------------------+
> | OS-DCF:diskConfig                    | AUTO                                                           |
> | OS-EXT-AZ:availability_zone          | nova                                                           |
> | OS-EXT-SRV-ATTR:host                 | tcs-HP-Compaq-Elite-8300-SFF                                   |
> | OS-EXT-SRV-ATTR:hypervisor_hostname  | tcs-HP-Compaq-Elite-8300-SFF                                   |
> | OS-EXT-SRV-ATTR:instance_name        | instance-0002                                                  |
> | OS-EXT-STS:power_state               | 1                                                              |
> | OS-EXT-STS:task_state                | -                                                              |
> | OS-EXT-STS:vm_state                  | active                                                         |
> | OS-SRV-USG:launched_at               | 2015-09-10T05:24:19.00                                         |
> | OS-SRV-USG:terminated_at             | -                                                              |
> | accessIPv4                           |                                                                |
> | accessIPv6                           |                                                                |
> | config_drive                         | True                                                           |
> | created                              | 2015-09-10T05:24:10Z                                           |
> | flavor                               | m1.nano (42)                                                   |
> | hostId                               | 4a3e03e0a89fbf3790a1b1cd59b1b10acbaad6aa31a4361996d52440       |
> | id                                   | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa                           |
> | image                                | cirros-0.3.2-x86_64-uec (221c46b3-9619-485e-8f60-0e1a363fc0e5) |
> | key_name                             | -                                                              |
> | metadata                             | {}                                                             |
> | name                                 | vmssasa                                                        |
> | os-extended-volumes:volumes_attached | []                                                             |
> | progress                             | 0                                                              |
> | public network                       | 172.24.4.4                                                     |
> | security_groups                      | default                                                        |
> | status                               | ACTIVE                                                         |
> | tenant_id                            | 5f4f5ee531a441d7bb3830529e611c7d                               |
> | updated                              | 2015-09-10T05:24:19Z                                           |
> | user_id                              | d04b218204414a1891646735befd449c                               |
> +--------------------------------------+----------------------------------------------------------------+
> 
> stack@abhishek:/opt/stack/ceilometer$ nova flavor-show m1.nano
> 
> +----------------------------+---------+
> | Property                   | Value   |
> +----------------------------+---------+
> | OS-FLV-DISABLED:disabled   | False   |
> | OS-FLV-EXT-DATA:ephemeral  | 0       |
> | disk                       | 0       |
> | extra_specs                | {}      |
> | id                         | 42      |
> | name                       | m1.nano |
> | os-flavor-access:is_public | True    |
> | ram                        | 64      |
> | rxtx_factor                | 1.0     |
> | swap                       |         |
> | vcpus                      | 1       |
> +----------------------------+---------+
> 
> stack@abhishek:/opt/stack/ceilometer$ ceilometer sample-list -m 'cpu_util' -q "resource_id=e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa"
> 
> +--------------------------------------+------------+-------+--------+------+---------------------+
> | Resource ID                          | Name       | Type  | Volume | Unit | Timestamp           |
> +--------------------------------------+------------+-------+--------+------+---------------------+
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T10:30:54 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T10:20:54 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T10:10:54 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T10:00:54 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T09:48:25 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T09:38:25 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T09:21:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T09:11:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T09:01:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T08:51:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T08:41:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T08:31:42 |
> | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa | disk.usage | gauge | 2448.0 | B    | 2015-09-10T08:21:42 |
> +--------------------------------------+------------+-------+--------+------+---------------------+
> 
> 
> 

Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit server

2015-09-10 Thread Asselin, Ramy
I added Fnst OpenStackTest CI to the third-party CI group.
Ramy

From: Evgeny Antyshev [mailto:eantys...@virtuozzo.com]
Sent: Thursday, September 10, 2015 3:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit 
server


On 10.09.2015 11:30, Xie, Xianshan wrote:
Hi, all,
   In my CI environment, after submitting a patch into openstack-dev/sandbox,
the Jenkins Job can be launched automatically, and the result message of the 
job also can be posted into the gerrit server successfully.
Everything seems fine.

But in the "Verified" column, there is no verified vote, such as +1 or -1.
You will be able to vote once your CI account is added to the "Third-Party CI"
group on review.openstack.org:
https://review.openstack.org/#/admin/groups/270,members
I advise you to ask for this permission in an IRC meeting for third-party CI
maintainers:
https://wiki.openstack.org/wiki/Meetings/ThirdParty
But you still won't be able to vote on other projects, except the sandbox.


(patch url: https://review.openstack.org/#/c/222049/,
CI name:  Fnst OpenStackTest CI)

Although I have already added the "verified" label to layout.yaml under the
check pipeline, it does not work yet.

And my configuration is set as follows:
Layout.yaml
---
pipelines:
  - name: check
    trigger:
      gerrit:
        - event: patchset-created
        - event: change-restored
        - event: comment-added
...
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

jobs:
  - name: noop-check-communication
    parameter-function: reusable_node

projects:
  - name: openstack-dev/sandbox
    - noop-check-communication
---

And the projects.yaml of Jenkins job:
---
- project:
    name: sandbox
    jobs:
      - noop-check-communication:
          node: 'devstack_slave || devstack-precise-check || d-p-c'
...
---

Could anyone help me? Thanks in advance.

Xiexs







Re: [openstack-dev] [horizon] [keystone] [docs] Two kinds of 'region' entity: finding better names for them

2015-09-10 Thread Timur Sufiev
I went ahead and filed a bug for this issue (since we agreed that it
should be fixed): https://bugs.launchpad.net/horizon/+bug/1494251
The code is already in Gerrit (see the links in the bug); feel free to review.
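
For context, the AVAILABLE_REGIONS setting discussed in the quoted thread below
lives in Horizon's local_settings.py. A minimal sketch of the two-endpoint setup
Timur describes might look roughly like this (the port and identity API version
in the URLs are illustrative assumptions, not values from the thread):

# local_settings.py (sketch): each entry is a separate Keystone endpoint that
# Horizon can authenticate against; Horizon currently calls these "regions".
AVAILABLE_REGIONS = [
    ('http://keystone.europe:5000/v2.0', 'Europe'),
    ('http://keystone.asia:5000/v2.0', 'Asia'),
]
# The proposed rename would keep the same structure under a clearer name,
# e.g. AVAILABLE_KEYSTONE_ENDPOINTS, to avoid clashing with Keystone's own
# catalog regions ('RegionOne', 'RegionTwo', ...).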

On Fri, Jul 10, 2015 at 1:51 AM Douglas Fish  wrote:

> I think another important question is how to represent this to the user on
> the login screen. "Keystone Endpoint:" matches the setting, but seems like
> a weird choice to me. Is there a better terminology to use for the label
> for this on the login screen?
>
> I see the related selector has no label at all in the header. Maybe using
> the same label there would be a good idea.
>
> Doug
>
> Thai Q Tran/Silicon Valley/IBM@IBMUS wrote on 07/08/2015 01:05:53 PM:
>
> > From: Thai Q Tran/Silicon Valley/IBM@IBMUS
> > To: "OpenStack Development Mailing List \(not for usage questions\)"
> > 
> > Date: 07/09/2015 01:17 PM
> > Subject: Re: [openstack-dev] [horizon] [keystone] [docs] Two kinds
> > of 'region' entity: finding better names for them
> >
> > I had the same issue when I worked on the context selection menu for
> > switching domain and project. I think it makes sense to rename it to
> > AVAILABLE_KEYSTONE_ENDPOINTS. Since it is in local_settings.py, it's
> > going to affect some folks (maybe even break them) until they also update
> > their setting, something that would have to be done manually.
> >
> > -Jay Pipes  wrote: -
> > To: openstack-dev@lists.openstack.org
> > From: Jay Pipes 
> > Date: 07/08/2015 07:14AM
> > Subject: Re: [openstack-dev] [horizon] [keystone] [docs] Two kinds
> > of 'region' entity: finding better names for them
>
> > Got it, thanks for the excellent explanation, Timur! Yeah, I think
> > renaming to AVAILABLE_KEYSTONE_ENDPOINTS would be a good solution.
> >
> > Best,
> > -jay
> >
> > On 07/08/2015 09:53 AM, Timur Sufiev wrote:
> > > Hi, Jay!
> > >
> > > As Doug said, Horizon regions are just different Keystone endpoints that
> > > Horizon could use to authorize against (and retrieve the whole catalog
> > > from any of them afterwards).
> > >
> > > Another example of how complicated things could be: imagine that the
> > > Horizon config has two Keystone endpoints inside the AVAILABLE_REGIONS
> > > setting, http://keystone.europe and http://keystone.asia, each of them
> > > hosting a different catalog with service endpoints pointing to Europe- or
> > > Asia-located services. For the European Keystone all Europe-based services
> > > are marked as 'RegionOne'; for the Asian Keystone all its Asia-based
> > > services are marked as 'RegionOne'. Then, imagine that each Keystone also
> > > has a 'RegionTwo' region: for the European Keystone the Asian services are
> > > marked so, and for the Asian Keystone the opposite is true. One customer
> > > did roughly the same thing (with both Keystones using a common LDAP
> > > backend), and understanding what exactly in Horizon didn't work well was a
> > > puzzling experience.
> > >
> > > On Wed, Jul 8, 2015 at 4:37 PM Jay Pipes  > > > wrote:
> > >
> > > On 07/08/2015 08:50 AM, Timur Sufiev wrote:
> > >  > Hello, folks!
> > >  >
> > >  > Somehow it happened that we have 2 different kinds of regions: the
> > >  > service regions inside the Keystone catalog and the AVAILABLE_REGIONS
> > >  > setting inside Horizon, yet we use the same name 'regions' for both of
> > >  > them. That creates a lot of confusion when solving region-related
> > >  > issues at the Horizon/Keystone junction; even explaining what exactly
> > >  > is broken poses a serious challenge when our common language has such
> > >  > a flaw!
> > >  >
> > >  > I propose to invent 2 distinct terms for these entities, so at least
> > >  > we won't be terminologically challenged when fixing the related bugs.
> > >
> > > Hi!
> > >
> > > I understand what the Keystone region represents: a simple,
> > > non-geographically-connotated division of the entire OpenStack
> > > deployment.
> > >
> > > Unfortunately, I don't know what the Horizon regions represent.
> Could
> > > you explain?
> > >
> > > Best,
> > > -jay
> > >
> > >
> >

[openstack-dev] [kilo-devstack] [disk-usage]

2015-09-10 Thread Abhishek Talwar



	
	
	
Hi Folks,

I have installed devstack kilo version of OpenStack and created an instance on
it with flavor “m1.nano” that gives a disk of 0 to your instance.

But while checking disk usage of the instance using the “disk.usage” meter it
gives an output greater than 0, so how is that possible?

stack@abhishek:/opt/stack/ceilometer$ nova show e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa

+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-SRV-ATTR:host                 | tcs-HP-Compaq-Elite-8300-SFF                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | tcs-HP-Compaq-Elite-8300-SFF                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0002                                                  |
| OS-EXT-STS:power_state               | 1                                                              |
| OS-EXT-STS:task_state                | -                                                              |
| OS-EXT-STS:vm_state                  | active                                                         |
| OS-SRV-USG:launched_at               | 2015-09-10T05:24:19.00                                         |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| config_drive                         | True                                                           |
| created                              | 2015-09-10T05:24:10Z                                           |
| flavor                               | m1.nano (42)                                                   |
| hostId                               | 4a3e03e0a89fbf3790a1b1cd59b1b10acbaad6aa31a4361996d52440       |
| id                                   | e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa                           |
| image                                | cirros-0.3.2-x86_64-uec (221c46b3-9619-485e-8f60-0e1a363fc0e5) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
| name                                 | vmssasa                                                        |
| os-extended-volumes:volumes_attached | []                                                             |
| progress                             | 0                                                              |
| public network                       | 172.24.4.4                                                     |
| security_groups                      | default                                                        |
| status                               | ACTIVE                                                         |
| tenant_id                            | 5f4f5ee531a441d7bb3830529e611c7d                               |
| updated                              | 2015-09-10T05:24:19Z                                           |
| user_id                              | d04b218204414a1891646735befd449c                               |
+--------------------------------------+----------------------------------------------------------------+

stack@abhishek:/opt/stack/ceilometer$ nova flavor-show m1.nano

+----------------------------+---------+
| Property                   | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 0       |
| extra_specs                | {}      |
| id                         | 42      |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+

stack@abhishek:/opt/stack/ceilometer$ ceilometer sample-list -m 'cpu_util' -q "resource_id=e3fd6b56-25df-4498-aa9d-8d9af3dfb4fa"

+--------------------------------------+------------+-------+--------+------+---------------------+
| Resource ID                          | Name       | Type  | Volume | Unit | Timestamp           |
+--------------------------------------+------------+-------+--------+------+---------------------+

Re: [openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit server

2015-09-10 Thread Evgeny Antyshev



On 10.09.2015 11:30, Xie, Xianshan wrote:

Hi, all,

   In my CI environment, after submitting a patch into openstack-dev/sandbox,
the Jenkins Job can be launched automatically, and the result message of the
job also can be posted into the gerrit server successfully.
Everything seems fine.

But in the “Verified” column, there is no verified vote, such as +1 or -1.

You will be able to vote once your CI account is added to the "Third-Party CI"
group on review.openstack.org:
https://review.openstack.org/#/admin/groups/270,members
I advise you to ask for this permission in an IRC meeting for third-party CI
maintainers:
https://wiki.openstack.org/wiki/Meetings/ThirdParty
But you still won't be able to vote on other projects, except the sandbox.

(patch url: https://review.openstack.org/#/c/222049/,
CI name:  Fnst OpenStackTest CI)

Although I have already added the “verified” label to layout.yaml under the
check pipeline, it does not work yet.

And my configuration is set as follows:

Layout.yaml
---
pipelines:
  - name: check
    trigger:
      gerrit:
        - event: patchset-created
        - event: change-restored
        - event: comment-added
…
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

jobs:
  - name: noop-check-communication
    parameter-function: reusable_node

projects:
  - name: openstack-dev/sandbox
    - noop-check-communication
---

And the projects.yaml of Jenkins job:
---
- project:
    name: sandbox
    jobs:
      - noop-check-communication:
          node: 'devstack_slave || devstack-precise-check || d-p-c'
…
---

Could anyone help me? Thanks in advance.

Xiexs





Re: [openstack-dev] [rootwrap] rootwrap and libraries - RFC

2015-09-10 Thread Sean Dague
On 09/09/2015 07:16 PM, Doug Hellmann wrote:
> Excerpts from Matt Riedemann's message of 2015-09-09 13:45:29 -0500:
>>
>> On 9/9/2015 1:04 PM, Doug Hellmann wrote:
>>> Excerpts from Sean Dague's message of 2015-09-09 13:36:37 -0400:

>> The problem with the static file paths in rootwrap.conf is that we don't 
>> know where those other library filter files are going to end up on the 
>> system when the library is installed.  We could hard-code nova's 
>> rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d" but then 
> 
> I thought the configuration file passed to rootwrap was something the
> deployer could change, which would let them fix the paths on their
> system. Did I misunderstand what the argument was?
> 
>> that means the deploy/config management tooling that installing this 
>> stuff needs to copy that directory structure from the os-brick install 
>> location (which we're finding non-deterministic, at least when using 
>> data_files with pbr) to the target location that rootwrap.conf cares about.
>>
>> That's why we were proposing adding things to rootwrap.conf that 
>> oslo.rootwrap can parse and process dynamically using the resource 
>> access stuff in pkg_resources, so we just say 'I want you to load the 
>> os-brick.filters file from the os-brick project, thanks.'.
>>
> 
> Doesn't that put the rootwrap config file for os-brick in a place the
> deployer can't change it? Maybe they're not supposed to? If they're not,
> then I agree that burying the actual file inside the library and using
> something like pkgtools to get its contents makes more sense.
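
To make the pkg_resources idea above concrete, a minimal sketch of how a
filters file shipped inside a library could be located at runtime might look
like the following (the package name and resource path are illustrative
assumptions, not the actual os-brick layout):

import pkg_resources

# Resolve a filter file that a library ships as package data; no absolute
# path in rootwrap.conf and no copying into /etc is needed for this lookup.
filters_path = pkg_resources.resource_filename('os_brick', 'os-brick.filters')

# oslo.rootwrap could then load it like any other filter file it is pointed at.
with open(filters_path) as f:
    print(f.read())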

Right now, they are all a bunch of files, and they can be anywhere. And then
you have other files that have to reference these files by path, which can
also be anywhere. We could just punt on that part and say "every installer
and configuration-management system needs to solve this on their own." I'm
not convinced that's a good answer. The os-brick filters aren't really
config. If you change them, all that happens is terribleness: stuff stops
working, and you don't know why. They are data to exchange with another
process about how to function. Honestly, they should probably be Python
code that's imported by rootwrap.

Much like the issues around clouds failing when you try to GET /v2 on
the Nova API (because we have a bunch of knobs you have to align for SSL
termination, and a bunch of deployers didn't), I don't think we should
be satisfied with "there's a config for that!" when all that config
means is that someone can break their configuration if they don't get it
exactly right.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-10 Thread Igor Kalnitsky
Mike,

> still not exactly true for some large enterprises. Due to all the security, 
> etc.,
> there are sometimes VPNs / proxies / firewalls with very low throughput.

It's their problem, and their policies. We can't and shouldn't handle
all possible cases. If some enterprise has a "no Internet" policy, I bet
it won't be a problem for their IT guys to create an intranet mirror
for MOS packages. Moreover, I also bet they already have a mirror for
Ubuntu or another Linux distribution. So it's basically a question of how
to consume our mirrors.

On Thu, Sep 10, 2015 at 12:30 PM, Vladimir Kuklin  wrote:
> Folks
>
> I think Mike is completely right here - we need an option to build an
> all-in-one ISO which can be tried out/deployed unattended without internet
> access. Let's let a user make a choice about what he wants, not push him into
> an embarrassing situation. We still have many parts of Fuel which make choices
> for the user that cannot be overridden. Let's not pretend that we know more
> than the user does about his environment.
>
> On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh 
> wrote:
>>
>> The reason people want offline deployment feature is not because of poor
>> connection, but rather the enterprise intranets where getting subnet with
>> external access sometimes is a real pain in various body parts.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky 
>> wrote:
>>>
>>> Hello,
>>>
>>> I agree with Vladimir - the idea of online repos is a right way to
>>> move. In 2015 I believe we can ignore this "poor Internet connection"
>>> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
>>> distributives - most of them fetch needed packages from the Internet
>>> during installation, not from CD/DVD. The netboot installers are
>>> popular, I can't even remember when was the last time I install my
>>> Debian from the DVD-1 - I use netboot installer for years.
>>>
>>> Thanks,
>>> Igor
>>>
>>>
>>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang  wrote:
>>> >
>>> >
>>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz 
>>> > wrote:
>>> >>
>>> >>
>>> >> Hey Vladimir,
>>> >>
>>> >>>
>>> >>>
>>> >
>>> > 1) There won't be such things in like [1] and [2], thus less
>>> > complicated flow, less errors, easier to maintain, easier to
>>> > understand,
>>> > easier to troubleshoot
>>> > 2) If one wants to have local mirror, the flow is the same as in
>>> > case
>>> > of upstream repos (fuel-createmirror), which is clrear for a user
>>> > to
>>> > understand.
>>> 
>>> 
>>>  From the issues I've seen,  fuel-createmirror isn't very straight
>>>  forward and has some issues making it a bad UX.
>>> >>>
>>> >>>
>>> >>> I'd say the whole approach of having such tool as fuel-createmirror
>>> >>> is a
>>> >>> way too naive. Reliable internet connection is totally up to network
>>> >>> engineering rather than deployment. Even using proxy is much better
>>> >>> that
>>> >>> creating local mirror. But this discussion is totally out of the
>>> >>> scope of
>>> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
>>> >>> straightforward (installed as rpm, has just a couple of command line
>>> >>> options). The quality of this script is also out of the scope of this
>>> >>> thread. BTW we have plans to improve it.
>>> >>
>>> >>
>>> >>
>>> >> Fair enough, I just wanted to raise the UX issues around these types
>>> >> of
>>> >> things as they should go into the decision making process.
>>> >>
>>> >>
>>> >>>
>>> >
>>> >
>>> > Many people still associate ISO with MOS, but it is not true when
>>> > using
>>> > package based delivery approach.
>>> >
>>> > It is easy to define necessary repos during deployment and thus it
>>> > is
>>> > easy to control what exactly is going to be installed on slave
>>> > nodes.
>>> >
>>> > What do you guys think of it?
>>> >
>>> >
>>> 
>>>  Reliance on internet connectivity has been an issue since 6.1. For
>>>  many
>>>  large users, complete access to the internet is not available or not
>>>  desired.  If we want to continue down this path, we need to improve
>>>  the
>>>  tools to setup the local mirror and properly document what
>>>  urls/ports/etc
>>>  need to be available for the installation of openstack and any
>>>  mirror
>>>  creation process.  The ideal thing is to have an all-in-one CD
>>>  similar to a
>>>  live cd that allows a user to completely try out fuel wherever they
>>>  want
>>>  with out further requirements of internet access.  If we don't want
>>>  to
>>>  continue with that, we need to do a better job around providing the
>>>  tools
>>>  for a user to get up and running in a timely fashion.  Perhaps
>>>  providing an
>>>  net-only iso and an all-included iso would be a better solution so
>>>  people
>>>  will have their expectations

Re: [openstack-dev] [TripleO] Releasing tripleo-common on PyPI

2015-09-10 Thread Jan Provaznik

On 09/09/2015 12:15 PM, Dougal Matthews wrote:

Hi,

The tripleo-common library appears to be registered on PyPI but hasn't yet had
a release [1]. I am not familiar with the release process - what do we need to
do to make sure it is regularly released with other TripleO packages?

We will also want to do something similar with the new python-tripleoclient
which doesn't seem to be registered on PyPI yet at all.

Thanks,
Dougal

[1]: https://pypi.python.org/pypi/tripleo-common



Hi Dougal,
thanks for moving this forward. I never finished the release process
upstream; there was no interest in or consumer of this lib upstream, as the
UI/CLI decided to use the midstream version. I'm excited to see this changing now.


Jan



Re: [openstack-dev] Question about generating an oslo.utils release

2015-09-10 Thread Davanum Srinivas
Paul,

Usually there are releases every week from the oslo team. At the moment,
oslo.* releases are frozen until the stable/liberty branches are cut. You can
also request a new oslo library release by proposing a review against the
openstack/releases repository.

The Depends-On tag works for things installed from git, NOT for libraries from
PyPI; hence the failure. If you want to try the change locally, you can use
the LIBS_FROM_GIT config option in devstack's configuration files to specify
the library in question. However, you would do that once your patch has been
merged into the master branch. There are additional toggles in devstack's
local.conf to do this from a pending review as well, if you really want to
try it.

-- Dims

On Thu, Sep 10, 2015 at 3:53 AM, Paul Carlton  wrote:

> Hi
>
> I have an oslo.utils change merged (220620
> ).  A nova change (220622
> ) depends on this.  What is the
> process for creating a new version of oslo.utils?  Is this performed
> periodically by a release manager or do I need to do something myself?
>
> Incidentally, despite including a depends-on tag in my nova change's
> commit message, my tests that depend on the oslo.utils change failed in CI.
> I thought the use of depends-on would cause it to load oslo.utils using the
> referenced development commit?
>
> Thanks
>
> --
> Paul Carlton
> Software Engineer
> Cloud Services
> Hewlett Packard
> BUK03:T242
> Longdown Avenue
> Stoke Gifford
> Bristol BS34 8QZ
>
> Mobile:+44 (0)7768 994283
> Email:mailto:paul.carlt...@hp.com 
>
>
>
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims


[openstack-dev] Instance STOPS/STARTS while taking snapshot

2015-09-10 Thread James Galvin
OpenStack Kilo release, Ceph storage backend

I have run into an issue lately where I create a snapshot of a running
instance: while the snapshot state is "image pending upload" and the image is
in the "Queued" state in the image service, I can't access my instance.

I can't access it via the console or ssh, and a continuous ping to the
floating IP times out, but the instance still shows as running on the
dashboard.

As soon as the instance state is "Image uploading" and the image service shows
"saving", the instance becomes available again.

Is this a known issue? All of this is done via the Horizon dashboard.

I can see the following in the logs on the compute:

2015-09-09 14:33:39.265 23261 INFO nova.compute.manager 
[req-0252f823-73f5-4c37-aa86-efbe6536e4f6 d2b1cc9566d44a909de46689569118e3 
3b5e03b8a83e44dd9a7140d868d28a9e - - -] [instance: 
a0e855e5-c205-4d61-bd48-99384d6310f5] instance snapshotting 2015-09-09 
14:33:39.877 23261 INFO nova.compute.manager 
[req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: 
a0e855e5-c205-4d61-bd48-99384d6310f5] VM Paused (Lifecycle Event) 2015-09-09 
14:33:40.049 23261 INFO nova.compute.manager 
[req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: 
a0e855e5-c205-4d61-bd48-99384d6310f5] During sync_power_state the instance has 
a pending task (image_snapshot). Skip. 2015-09-09 14:33:50.756 23261 INFO 
nova.virt.libvirt.driver [req-0252f823-73f5-4c37-aa86-efbe6536e4f6 
d2b1cc9566d44a909de46689569118e3 3b5e03b8a83e44dd9a7140d868d28a9e - - -] 
[instance: a0e855e5-c205-4d61-bd48-99384d6310f5] Beginning cold snapshot 
process 2015-09-09 14:33:50.759 23261 INFO nova.compute.manager 
[req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: 
a0e855e5-c205-4d61-bd48-99384d6310f5] VM Stopped (Lifecycle Event) 2015-09-09 
14:33:50.939 23261 INFO nova.compute.manager 
[req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: 
a0e855e5-c205-4d61-bd48-99384d6310f5] During sync_power_state the instance has 
a pending task (image_snapshot). Skip. 2015-09-09 14:35:57.831 23261 INFO 
nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] 
[instance: a0e855e5-c205-4d61-bd48-99384d6310f5] VM Started (Lifecycle Event) 
2015-09-09 14:35:57.990 23261 INFO nova.compute.manager 
[req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: 
a0e855e5-c205-4d61-bd48-99384d6310f5] During sync_power_state the instance has 
a pending task (image_pending_upload). Skip. 2015-09-09 14:35:57.991 23261 INFO 
nova.compute.manager [req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] 
[instance: a0e855e5-c205-4d61-bd48-99384d6310f5] VM Resumed (Lifecycle Event) 
2015-09-09 14:35:58.151 23261 INFO nova.compute.manager 
[req-1a3446d4-c183-4729-86b3-d63e15fe38d7 - - - - -] [instance: 
a0e855e5-c205-4d61-bd48-99384d6310f5] During sync_power_state the instance has 
a pending task (image_pending_upload). Skip. 2015-09-09 14:35:58.498 23261 INFO 
nova.virt.libvirt.driver [req-0252f823-73f5-4c37-aa86-efbe6536e4f6 
d2b1cc9566d44a909de46689569118e3 3b5e03b8a83e44dd9a7140d868d28a9e - - -] 
[instance: a0e855e5-c205-4d61-bd48-99384d6310f5] Snapshot extracted, beginning 
image upload

Any help with this would be appreciated :)

Thanks
James


Re: [openstack-dev] [TripleO] trello

2015-09-10 Thread Derek Higgins



On 09/09/15 18:03, Jason Rist wrote:

On 09/09/2015 07:09 AM, Derek Higgins wrote:



On 08/09/15 16:36, Derek Higgins wrote:

Hi All,

 Some of ye may remember that some time ago we used to organize TripleO-based
jobs/tasks on a trello board [1]; at some stage this board fell out of use
(the exact reason I can't put my finger on). This morning I was putting
together a list of things that need to be done in the area of CI and needed
somewhere to keep track of them.

I propose we get back to using this trello board and that each of us add
cards, at the very least for the things we are working on.

This should give each of us a lot more visibility into what is currently
ongoing in the tripleo project. Unless I hear any objections,
tomorrow I'll start archiving all cards on the board and removing
people no longer involved in tripleo. We can then start adding items, and
anybody who wants in can be added again.


This is now done, see
https://trello.com/tripleo

Please ping me on irc if you want to be added.



thanks,
Derek.

[1] - https://trello.com/tripleo


Derek - you weren't on today when I went to ping you, can you please add me so 
I can track it for RHCI purposes?


Done



Thanks!





Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-10 Thread Vladimir Kuklin
Folks

I think Mike is completely right here - we need an option to build an
all-in-one ISO which can be tried out/deployed unattended without
internet access. Let's let a user make a choice about what he wants, not push
him into an embarrassing situation. We still have many parts of Fuel which make
choices for the user that cannot be overridden. Let's not pretend that we know
more than the user does about his environment.

On Thu, Sep 10, 2015 at 10:33 AM, Oleg Gelbukh 
wrote:

> The reason people want offline deployment feature is not because of poor
> connection, but rather the enterprise intranets where getting subnet with
> external access sometimes is a real pain in various body parts.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky 
> wrote:
>
>> Hello,
>>
>> I agree with Vladimir - the idea of online repos is a right way to
>> move. In 2015 I believe we can ignore this "poor Internet connection"
>> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
>> distributives - most of them fetch needed packages from the Internet
>> during installation, not from CD/DVD. The netboot installers are
>> popular, I can't even remember when was the last time I install my
>> Debian from the DVD-1 - I use netboot installer for years.
>>
>> Thanks,
>> Igor
>>
>>
>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang  wrote:
>> >
>> >
>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz 
>> wrote:
>> >>
>> >>
>> >> Hey Vladimir,
>> >>
>> >>>
>> >>>
>> >
>> > 1) There won't be such things in like [1] and [2], thus less
>> > complicated flow, less errors, easier to maintain, easier to
>> understand,
>> > easier to troubleshoot
>> > 2) If one wants to have local mirror, the flow is the same as in
>> case
>> > of upstream repos (fuel-createmirror), which is clrear for a user to
>> > understand.
>> 
>> 
>>  From the issues I've seen,  fuel-createmirror isn't very straight
>>  forward and has some issues making it a bad UX.
>> >>>
>> >>>
>> >>> I'd say the whole approach of having such tool as fuel-createmirror
>> is a
>> >>> way too naive. Reliable internet connection is totally up to network
>> >>> engineering rather than deployment. Even using proxy is much better
>> that
>> >>> creating local mirror. But this discussion is totally out of the
>> scope of
>> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
>> >>> straightforward (installed as rpm, has just a couple of command line
>> >>> options). The quality of this script is also out of the scope of this
>> >>> thread. BTW we have plans to improve it.
>> >>
>> >>
>> >>
>> >> Fair enough, I just wanted to raise the UX issues around these types of
>> >> things as they should go into the decision making process.
>> >>
>> >>
>> >>>
>> >
>> >
>> > Many people still associate ISO with MOS, but it is not true when
>> using
>> > package based delivery approach.
>> >
>> > It is easy to define necessary repos during deployment and thus it
>> is
>> > easy to control what exactly is going to be installed on slave
>> nodes.
>> >
>> > What do you guys think of it?
>> >
>> >
>> 
>>  Reliance on internet connectivity has been an issue since 6.1. For
>> many
>>  large users, complete access to the internet is not available or not
>>  desired.  If we want to continue down this path, we need to improve
>> the
>>  tools to setup the local mirror and properly document what
>> urls/ports/etc
>>  need to be available for the installation of openstack and any mirror
>>  creation process.  The ideal thing is to have an all-in-one CD
>> similar to a
>>  live cd that allows a user to completely try out fuel wherever they
>> want
>>  with out further requirements of internet access.  If we don't want
>> to
>>  continue with that, we need to do a better job around providing the
>> tools
>>  for a user to get up and running in a timely fashion.  Perhaps
>> providing an
>>  net-only iso and an all-included iso would be a better solution so
>> people
>>  will have their expectations properly set up front?
>> >>>
>> >>>
>> >>> Let me explain why I think having local MOS mirror by default is bad:
>> >>> 1) I don't see any reason why we should treat MOS  repo other way than
>> >>> all other online repos. A user sees on the settings tab the list of
>> repos
>> >>> one of which is local by default while others are online. It can make
>> user a
>> >>> little bit confused, can't it? A user can be also confused by the
>> fact, that
>> >>> some of the repos can be cloned locally by fuel-createmirror while
>> others
>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>> >>
>> >>
>> >>
>> >> I agree. The process should be the same and it should be just another
>> >> repo. It doesn't mean we can't include a version on an ISO as part of a
>> >> release.  Would it be better to provide the mirror on the ISO

Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-10 Thread Vladimir Kuklin
Folks

I have a strong +1 for the proposal to decouple the master node and slave nodes.
Here are the strengths of this approach:
1) We can always decide which particular node runs which particular set of
manifests. This will allow us to apply/roll back changes node-by-node. This is
very important from an operations perspective.
2) We can decouple master and slave node manifests and not drag a new
library version onto the master node when it is not needed. This allows us
to decrease the probability of regressions.
3) This makes life easier for the user - you just run 'apt-get/yum install'
instead of some difficult-to-digest `mco` command.

The only weakness that I see here is the one mentioned by Andrey. I think we
can fix it by providing developers with a clean and simple way of building the
library package on the fly. This will make developers' lives easy enough to
work with the proposed approach.

Also, we need to provide ways for better UX, like providing one button/API
call for:

1) updating all manifests on particular nodes (e.g. all or only a part of the
nodes of the cluster) to a particular version
2) reverting all manifests back to the version which is provided by the
particular GA release
3) 

So far I would mark the need for a simple package-building system for
developers as a dependency of the proposed change, but I do not see any other
way than proceeding with it.



On Thu, Sep 10, 2015 at 11:50 AM, Sergii Golovatiuk <
sgolovat...@mirantis.com> wrote:

> Oleg,
>
> Alex gave a perfect example regarding support folks when they need to fix
> something really quickly. It's the client's choice what to patch or not. You
> may like it or not, but it's the client's choice.
>
> On 10 Sep 2015, at 09:33, Oleg Gelbukh  wrote:
>
> Alex,
>
> I absolutely understand the point you are making about need for deployment
> engineers to modify things 'on the fly' in customer environment. It's makes
> things really flexible and lowers the entry barrier for sure.
>
> However, I would like to note that in my opinion this kind of 'monkey
> patching' is actually a bad practice for any environments other than dev
> ones. It immediately leads to the emergence of unsupportable frankenclouds. I
> would welcome any modification to the workflow that discourages people
> from doing that.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Wed, Sep 9, 2015 at 5:56 PM, Alex Schultz 
> wrote:
>
>> Hey Vladimir,
>>
>>
>>
>>> Regarding plugins: plugins are welcome to install specific additional
>>> DEB/RPM repos on the master node, or just configure cluster to use
>>> additional online repos, where all necessary packages (including plugin
>>> specific puppet manifests) are to be available. Current granular deployment
>>> approach makes it easy to append specific pre-deployment tasks
>>> (master/slave does not matter). Correct me if I am wrong.
>>>
>>>
>> Don't get me wrong, I think it would be good to move to a fuel-library
>> distributed via package only.  I'm bringing these points up to indicate
>> that there are many other things that live in the fuel-library puppet path
>> than just the fuel-library package.  The plugin example is just one place
>> that we will need to invest in further design and work to move to the
>> package only distribution.  What I don't want is some partially executed
>> work that only works for one type of deployment and creates headaches for
>> the people actually having to use fuel.  The deployment engineers and
>> customers who actually perform these actions should be asked about
>> packaging and their comfort level with this type of requirements.  I don't
>> have a complete understanding of the all the things supported today by the
>> fuel plugin system so it would be nice to get someone who is more familiar
>> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
>> don't think we are building fuel-library debs at this time either.  So
>> without some work on both sides, we cannot move to just packages.
>>
>>
>>> Regarding flexibility: having several versioned directories with puppet
>>> modules on the master node, having several fuel-libraryX.Y packages
>>> installed on the master node makes things "exquisitely convoluted" rather
>>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>>> rsync, etc. if you really need to do things manually. But we have
>>> convenient service (Perestroika) which builds packages in minutes if you
>>> need. Moreover, In the nearest future (by 8.0) Perestroika will be
>>> available as an application independent from CI. So, what is wrong with
>>> building fuel-library package? What if you want to troubleshoot nova (we
>>> install it using packages)? Should we also use rsync for everything else
>>> like nova, mysql, etc.?
>>>
>>>
>> Yes, we do have a service like Perestroika to build packages for us.
>> That doesn't mean everyone else does or has access to do that today.
>> Setting up a build system is a major undertaking and making that a hard
>> requirement to interact wit

Re: [openstack-dev] [Ironic] Command structure for OSC plugin

2015-09-10 Thread Lucas Alvares Gomes
Hi,

> Disclaimer: I don't know much about OSC or its syntax, command
> structure, etc. These may not be well-formed thoughts. :)
>

Same here, I don't know much about OSC in general.

> So, many of the nova commands (openstack server foo) don't make sense in
> an Ironic context, and vice versa. It would also be difficult to
> determine if the commands should go through Nova or through Ironic.
> The path could be something like: check that Ironic exists, see if user
> has access, hence standalone mode (oh wait, operators probably have
> access to manage Ironic *and* deploy baremetal through Nova, what do?).
>

I was looking at the list of OSC commands [1]; some that I think could
possibly be mapped to Ironic functions are:

* openstack server create
* openstack server delete
* openstack server list
* openstack server show
* openstack server reboot
* openstack server rebuild

But when I get to the specifics, I find it hard to map all the
parameters supported for them in the Ironic context. For example, the
"openstack server list" command [2] supports parameters such as "--flavor"
or "--instance-name"; searching by flavor or instance name wouldn't be
possible to implement in Ironic at present (we don't keep such
information registered with the deployed nodes), and the same goes for
"--ip", "--ip6", etc...

So I think it may be worth doing more thorough research on those commands
and their parameters to see what can be reused in the Ironic context.
But, at first glance, it also seems to me that having generic commands for
different services is going to bring more confusion around the usage of
the CLI than actual help.
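
To make the command-structure question more concrete, a minimal, hypothetical
sketch of an Ironic-flavoured OSC plugin command is shown below. It uses the
cliff Lister base class that OSC plugin commands build on; the class name,
command behaviour, and columns are illustrative assumptions, not an existing
python-ironicclient command:

import logging

from cliff import lister


class ListBaremetalNode(lister.Lister):
    """List bare metal nodes (an "openstack baremetal list" style command)."""

    log = logging.getLogger(__name__ + '.ListBaremetalNode')

    def get_parser(self, prog_name):
        parser = super(ListBaremetalNode, self).get_parser(prog_name)
        # Only expose filters Ironic can actually answer, unlike e.g.
        # --flavor or --instance-name on "openstack server list".
        parser.add_argument('--maintenance', action='store_true',
                            help='List only nodes in maintenance mode')
        return parser

    def take_action(self, parsed_args):
        # A real plugin would call the Ironic API via self.app.client_manager;
        # a static row keeps this sketch self-contained.
        columns = ('UUID', 'Power State', 'Maintenance')
        data = [('1be26c0b-03f2-4d2e-ae87-c02d7f33c123', 'power off',
                 parsed_args.maintenance)]
        return columns, data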

[1] 
http://docs.openstack.org/cli-reference/content/openstackclient_commands.html
[2] 
http://docs.openstack.org/cli-reference/content/openstackclient_commands.html#openstackclient_subcommand_server_list

Cheers,
Lucas



[openstack-dev] Fwd: Re: [neutron][L3][dvr][fwaas] FWaaS

2015-09-10 Thread bharath

Hi,

Instance creation has been failing with the error below for the last 4 days.

*2015-09-10 02:14:00.583 WARNING neutron.plugins.ml2.drivers.mech_agent 
[req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
24109c82ae76465c8fb20562cce67a4f] Attempting to bind with dead agent: 
{'binary': u'neutron-openvswitch-agent', 'des
cription': None, 'admin_state_up': True, 'heartbeat_timestamp': 
datetime.datetime(2015, 9, 10, 9, 6, 57), 'alive': False, 'topic': 
u'N/A', 'host': u'ci-jslave-base', 'agent_type': u'Open vSwitch agent', 
'created_at': datetime.datetime(2
015, 9, 10, 9, 4, 57), 'started_at': datetime.datetime(2015, 9, 10, 9, 
6, 57), 'id': u'aa9098fe-c412-449e-b979-1f5ab46c3c1d', 'configurations': 
{u'in_distributed_mode': False, u'arp_responder_enabled': False, 
u'tunneling_ip': u'192.168.
30.41', u'devices': 0, u'log_agent_heartbeats': False, u'l2_population': 
False, u'tunnel_types': [u'vxlan'], u'enable_distributed_routing': 
False, u'bridge_mappings': {u'ext': u'br-ext', u'mng': u'br-mng'}}}*
2015-09-10 02:14:00.583 DEBUG neutron.plugins.ml2.drivers.mech_agent 
[req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
24109c82ae76465c8fb20562cce67a4f] Attempting to bind port 
6733610d-e7dc-4ecd-a810-b2b791af9b97 on network c6fb26cc-96
1e-4f38-bf40-bfc72cc59f67 from (pid=25516) bind_port 
/opt/stack/neutron/neutron/plugins/ml2/drivers/mech_agent.py:60
*2015-09-10 02:14:00.588 ERROR neutron.plugins.ml2.managers 
[req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
24109c82ae76465c8fb20562cce67a4f] Failed to bind port 
6733610d-e7dc-4ecd-a810-b2b791af9b97 on host ci-jslave-base
2015-09-10 02:14:00.588 ERROR neutron.plugins.ml2.managers 
[req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
24109c82ae76465c8fb20562cce67a4f] Failed to bind port 
6733610d-e7dc-4ecd-a810-b2b791af9b97 on host ci-jslave-base*
2015-09-10 02:14:00.608 DEBUG neutron.plugins.ml2.db 
[req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
24109c82ae76465c8fb20562cce67a4f] For port 
6733610d-e7dc-4ecd-a810-b2b791af9b97, host ci-jslave-base, cleared 
binding levels from (pi
d=25516) clear_binding_levels 
/opt/stack/neutron/neutron/plugins/ml2/db.py:189
2015-09-10 02:14:00.608 DEBUG neutron.plugins.ml2.db 
[req-44530c97-56fa-4d5d-ad35-c5e988ab4644 neutron 
24109c82ae76465c8fb20562cce67a4f] Attempted to set empty binding levels 
from (pid=25516) set_binding_levels /opt/stack/neutron/neutro

n/plugins/ml2/db.py:164


A recent commit seems to have broken this.

During stacking I am getting the error below, but I don't know whether it's
related to the above issue or not:

2015-09-09 15:18:48.658 | ERROR: openstack 'module' object has no attribute 
'UpdateDataSource'

I would love some help with this issue.

Thanks,
bharath


Re: [openstack-dev] [glance] [nova] Verification of glance images before boot

2015-09-10 Thread Bhandaru, Malini K
Brianna, I can imagine a denial-of-service attack by uploading images whose
signature is invalid if we allow them to reside in Glance in a "killed" state.
This would be less of an issue if "killed" images did not still consume
storage quota until actually deleted.
Also, given that MD5 is less secure, why not have the default hash be SHA-1 or SHA-2?
Regards
Malini

-Original Message-
From: Poulos, Brianna L. [mailto:brianna.pou...@jhuapl.edu] 
Sent: Wednesday, September 09, 2015 9:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: stuart.mcla...@hp.com
Subject: Re: [openstack-dev] [glance] [nova] Verification of glance images 
before boot

Stuart is right about what will currently happen in Nova when an image is 
downloaded, which protects against unintentional modifications to the image 
data.

What is currently being worked on is adding the ability to verify a signature 
of the checksum.  The flow of this is as follows:
1. The user creates a signature of the "checksum hash" (currently MD5) of the 
image data offline.
2. The user uploads a public key certificate, which can be used to verify the 
signature to a key manager (currently Barbican).
3. The user creates an image in glance, with signature metadata properties.
4. The user uploads the image data to glance.
5. If the signature metadata properties exist, glance verifies the signature of 
the "checksum hash", including retrieving the certificate from the key manager.
6. If the signature verification fails, glance moves the image to a killed 
state, and returns an error message to the user.
7. If the signature verification succeeds, a log message indicates that it 
succeeded, and the image upload finishes successfully.

8. Nova requests the image from glance, along with the image properties, in 
order to boot it.
9. Nova uses the signature metadata properties to verify the signature (if a 
configuration option is set).
10. If the signature verification fails, nova does not boot the image, but 
errors out.
11. If the signature verification succeeds, nova boots the image, and a log 
message notes that the verification succeeded.
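
For illustration only, here is a minimal sketch of steps 1 and 5-6 above (signing the 
MD5 "checksum hash" offline and verifying it against a public key). This is not 
glance's implementation: the exact signature scheme, the signature metadata property 
names, and the Barbican interaction are omitted, and the function names are invented 
for the example.

import hashlib

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def _md5_checksum(image_path):
    # Same role as glance's "checksum hash": MD5 of the raw image data.
    with open(image_path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


def sign_checksum(image_path, private_key_pem_bytes, password=None):
    # Step 1: the user signs the checksum offline (assuming an RSA key here).
    key = serialization.load_pem_private_key(private_key_pem_bytes,
                                             password=password)
    return key.sign(_md5_checksum(image_path).encode('utf-8'),
                    padding.PKCS1v15(), hashes.SHA256())


def verify_checksum_signature(image_path, signature, public_key):
    # Steps 5-6: recompute the checksum and verify the signature; a failure
    # raises InvalidSignature (glance would move the image to "killed").
    public_key.verify(signature, _md5_checksum(image_path).encode('utf-8'),
                      padding.PKCS1v15(), hashes.SHA256())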

Regarding what is currently in Liberty, the blueprint mentioned [1] has merged, 
and code [2] has also been merged in glance, which handles steps
1-7 of the flow above.

For steps 7-11, there is currently a nova blueprint [3], along with code [4], 
which are proposed for Mitaka.

Note that we are in the process of adding official documentation, with examples 
of creating the signature as well as the properties that need to be added for 
the image before upload.  In the meantime, there's an etherpad that describes 
how to test the signature verification functionality in Glance [5].

Also note that this is the initial approach, and there are some limitations.  
For example, ideally the signature would be based on a cryptographically secure 
(i.e. not MD5) hash of the image.  There is a spec in glance to allow this hash 
to be configurable [6].

[1] https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
[2] https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107ddb4b04ff6e
[3] https://review.openstack.org/#/c/188874/
[4] https://review.openstack.org/#/c/189843/
[5] https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
[6] https://review.openstack.org/#/c/191542/


Thanks,
~Brianna




On 9/9/15, 12:16 , "Nikhil Komawar"  wrote:

>That's correct.
>
>The size and the checksum are to be verified outside of Glance, in this 
>case Nova. However, you may want to note that it's not necessary that 
>all Nova virt drivers would use py-glanceclient so you would want to 
>check the download specific code in the virt driver your Nova 
>deployment is using.
>
>Having said that, essentially the flow seems appropriate. Error must be 
>raise on mismatch.
>
>The signing BP was to help prevent the compromised Glance from changing 
>the checksum and image blob at the same time. Using a digital 
>signature, you can prevent download of compromised data. However, the 
>feature has just been implemented in Glance; Glance users may take time to 
>adopt.
>
>
>
>On 9/9/15 11:15 AM, stuart.mcla...@hp.com wrote:
>>
>> The glance client (running 'inside' the Nova server) will 
>> re-calculate the checksum as it downloads the image and then compare 
>> it against the expected value. If they don't match an error will be raised.
>>
>>> How can I know that the image that a new instance is spawned from - 
>>> is actually the image that was originally registered in glance - and 
>>> has not been maliciously tampered with in some way?
>>>
>>> Is there some kind of verification that is performed against the 
>>> md5sum of the registered image in glance before a new instance is spawned?
>>>
>>> Is that done by Nova?
>>> Glance?
>>> Both? Neither?
>>>
>>> The reason I ask is some 'paranoid' security (that is their job I
>>> suppose) people have raised these questions.
>>>
>>> I know there is

Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

2015-09-10 Thread Bhandaru, Malini K
Thank you! -- Malini

-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
Sent: Wednesday, September 09, 2015 8:06 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

FYI, this was granted FFE.

On 9/8/15 11:02 AM, Nikhil Komawar wrote:
> Malini,
>
> Your note on the etherpad [1] went unnoticed as we had that sync on 
> Friday outside of our regular meeting and weekly meeting agenda 
> etherpad was not fit for discussion purposes.
>
> It would be nice if you all can update & comment on the spec, ref. the 
> note or have someone send a relative email here that explains the 
> redressal of the issues raised on the spec and during Friday sync [2].
>
> [1] https://etherpad.openstack.org/p/glance-team-meeting-agenda
> [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstac
> k-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>
> On 9/5/15 4:40 PM, Bhandaru, Malini K wrote:
>> Thank you Nikhil and Glance team on the FFE consideration.
>> We are committed to making the revisions per suggestion and separately seek 
>> help from the Flavio, Sabari, and Harsh.
>> Regards
>> Malini, Kent, and Jakub
>>
>>
>> -Original Message-
>> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
>> Sent: Friday, September 04, 2015 9:44 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>> proposal
>>
>> Hi Malini et.al.,
>>
>> We had a sync up earlier today on this topic and a few items were discussed 
>> including new comments on the spec and existing code proposal.
>> You can find the logs of the conversation here [1].
>>
>> There are 3 main outcomes of the discussion:
>> 1. We hope to get a commitment on the feature (spec and the code) that the 
>> comments would be addressed and code would be ready by Sept 18th; after 
>> which the RC1 is planned to be cut [2]. Our hope is that the spec is merged 
>> way before and implementation to the very least is ready if not merged. The 
>> comments on the spec and merge proposal are currently implementation details 
>> specific so we were positive on this front.
>> 2. The decision to grant FFE will be on Tuesday Sept 8th after the spec has 
>> newer patch sets with major concerns addressed.
>> 3. We cannot commit to granting a backport to this feature so, we ask the 
>> implementors to consider using the plug-ability and modularity of the 
>> taskflow library. You may consult developers who have already worked on 
>> adopting this library in Glance (Flavio, Sabari and Harsh). Deployers can 
>> then use those scripts and put them back in their Liberty deployments even 
>> if it's not in the standard tarball.
>>
>> Please let me know if you have more questions.
>>
>> [1]
>> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23opensta
>> ck-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>> [2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>>
>> On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
>>> Thank you Nikhil and Brian!
>>>
>>> -Original Message-
>>> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
>>> Sent: Thursday, September 03, 2015 9:42 AM
>>> To: openstack-dev@lists.openstack.org
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>> proposal
>>>
>>> We agreed to hold off on granting it a FFE until tomorrow.
>>>
>>> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
>>> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and 
>>> cast your vote.
>>>
>>> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
 I added an agenda item for this for today's Glance meeting:
https://etherpad.openstack.org/p/glance-team-meeting-agenda

 I'd prefer to hold my vote until after the meeting.

 cheers,
 brian


 On 9/3/15, 6:14 AM, "Kuvaja, Erno"  wrote:

> Malini, all,
>
> My current opinion is -1 for FFE based on the concerns in the spec 
> and implementation.
>
> I'm more than happy to realign my stand after we have updated spec 
> and a) it's agreed to be the approach as of now and b) we can 
> evaluate how much work the implementation needs to meet with the 
> revisited spec.
>
> If we end up to the unfortunate situation that this functionality 
> does not merge in time for Liberty, I'm confident that this is one 
> of the first things in Mitaka. I really don't think there is too 
> much to go, we just might run out of time.
>
> Thanks for your patience and endless effort to get this done.
>
> Best,
> Erno
>
>> -Original Message-
>> From: Bhandaru, Malini K [mailto:malini.k.bhand...@intel.com]
>> Sent: Thursday, September 03, 2015 10:10 AM
>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>> usage
>> questions)
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Excep

Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-10 Thread Sergii Golovatiuk
Oleg,

Alex gave a perfect example regarding support folks when they need to fix
something really quickly. It's the client's choice what to patch or not. You may
like it or not, but it's the client's choice.

On 10 Sep 2015, at 09:33, Oleg Gelbukh  wrote:

Alex,

I absolutely understand the point you are making about need for deployment
engineers to modify things 'on the fly' in customer environment. It's makes
things really flexible and lowers the entry barrier for sure.

However, I would like to note that in my opinion this kind on 'monkey
patching' is actually a bad practice for any environments other than dev
ones. It immediately leads to emergence of unsupportable frankenclouds. I
would greet any modification to the workflow that will discourage people
from doing that.

--
Best regards,
Oleg Gelbukh

On Wed, Sep 9, 2015 at 5:56 PM, Alex Schultz  wrote:

> Hey Vladimir,
>
>
>
>> Regarding plugins: plugins are welcome to install specific additional
>> DEB/RPM repos on the master node, or just configure cluster to use
>> additional online repos, where all necessary packages (including plugin
>> specific puppet manifests) are to be available. Current granular deployment
>> approach makes it easy to append specific pre-deployment tasks
>> (master/slave does not matter). Correct me if I am wrong.
>>
>>
> Don't get me wrong, I think it would be good to move to a fuel-library
> distributed via package only.  I'm bringing these points up to indicate
> that there is many other things that live in the fuel library puppet path
> than just the fuel-library package.  The plugin example is just one place
> that we will need to invest in further design and work to move to the
> package only distribution.  What I don't want is some partially executed
> work that only works for one type of deployment and creates headaches for
> the people actually having to use fuel.  The deployment engineers and
> customers who actually perform these actions should be asked about
> packaging and their comfort level with this type of requirements.  I don't
> have a complete understanding of the all the things supported today by the
> fuel plugin system so it would be nice to get someone who is more familiar
> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
> don't think we are building fuel-library debs at this time either.  So
> without some work on both sides, we cannot move to just packages.
>
>
>> Regarding flexibility: having several versioned directories with puppet
>> modules on the master node, having several fuel-libraryX.Y packages
>> installed on the master node makes things "exquisitely convoluted" rather
>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>> rsync, etc. if you really need to do things manually. But we have
>> convenient service (Perestroika) which builds packages in minutes if you
>> need. Moreover, In the nearest future (by 8.0) Perestroika will be
>> available as an application independent from CI. So, what is wrong with
>> building fuel-library package? What if you want to troubleshoot nova (we
>> install it using packages)? Should we also use rsync for everything else
>> like nova, mysql, etc.?
>>
>>
> Yes, we do have a service like Perestroika to build packages for us.  That
> doesn't mean everyone else does or has access to do that today.  Setting up
> a build system is a major undertaking and making that a hard requirement to
> interact with our product may be a bit much for some customers.  In
> speaking with some support folks, there are times when files have to be
> munged to get around issues because there is no package or things are on
> fire so they can't wait for a package to become available for a fix.  We
> need to be careful not to impose limits without proper justification and
> due diligence.  We already build the fuel-library package, so there's no
> reason you couldn't try switching the rsync to install the package if it's
> available on a mirror.  I just think you're going to run into the issues I
> mentioned which need to be solved before we could just mark it done.
>
> -Alex
>
>
>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz 
>> wrote:
>>
>>> I agree that we shouldn't need to sync as we should be able to just
>>> update the fuel-library package. That being said, I think there might be a
>>> few issues with this method. The first issue is with plugins and how to
>>> properly handle the distribution of the plugins as they may also include
>>> puppet code that needs to be installed on the other nodes for a deployment.
>>> Currently I do not believe we install the plugin packages anywhere except
>>> the master and when they do get installed there may be some post-install
>>> actions that are only valid for the master.  Another issue is being
>>> flexible enough to allow for deployment engineers to make custom changes
>>> for a given environment.  Unless we can provide an improved process to
>>> allow for people to p

Re: [openstack-dev] [glance] differences between def detail() and def index() in glance/registry/api/v1/images.py

2015-09-10 Thread Kuvaja, Erno
This was the case until about two weeks ago.

Since 1.0.0 release we have been defaulting to Images API v2 instead of v1 [0].

If you want to exercise the v1 functionality from the CLI client, you would need 
to specify either the environment variable OS_IMAGE_API_VERSION=1 or use the 
command line option --os-image-api-version 1. In either case --debug can be used 
with glanceclient to provide detailed information about where the request is 
being sent and what the responses are.

If you haven't moved to the latest client yet, forget about the above apart 
from the --debug part.

[0] 
https://github.com/openstack/python-glanceclient/blob/master/doc/source/index.rst
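
As an illustration (not part of the client), here is a tiny sketch of calling the two 
v1 endpoints that back those registry methods: /v1/images maps to index() and 
/v1/images/detail maps to detail(). The endpoint URL and token below are placeholders.

import requests

GLANCE_URL = 'http://glance.example.com:9292'  # placeholder endpoint
HEADERS = {'X-Auth-Token': 'replace-with-a-valid-keystone-token'}

# index(): brief listing (name, id, size, checksum, formats only)
brief = requests.get(GLANCE_URL + '/v1/images', headers=HEADERS).json()

# detail(): full listing (status, owner, properties, timestamps, ...)
full = requests.get(GLANCE_URL + '/v1/images/detail', headers=HEADERS).json()

print(len(brief['images']), len(full['images']))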


-  Erno

From: Fei Long Wang [mailto:feil...@catalyst.net.nz]
Sent: Thursday, September 10, 2015 1:04 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] differences between def detail() and def 
index() in glance/registry/api/v1/images.py

I assume you're using the Glance client. If so, by default, when you issue the 
command 'glance image-list', it will call /v1/images/detail instead of /v1/images; 
you can use curl or any HTTP client to see the difference. Basically, just like the 
endpoint name suggests, /v1/images/detail will give you more details. See the 
difference in their responses below.

Response from /v1/images/detail
{
    "images": [
        {
            "status": "active",
            "deleted_at": null,
            "name": "fedora-21-atomic-3",
            "deleted": false,
            "container_format": "bare",
            "created_at": "2015-09-03T22:56:37.00",
            "disk_format": "qcow2",
            "updated_at": "2015-09-03T23:00:15.00",
            "min_disk": 0,
            "protected": false,
            "id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
            "min_ram": 0,
            "checksum": "d3b3da0e07743805dcc852785c7fc258",
            "owner": "5f290ac4b100440b8b4c83fce78c2db7",
            "is_public": true,
            "virtual_size": null,
            "properties": {
                "os_distro": "fedora-atomic"
            },
            "size": 770179072
        }
    ]
}

Response with /v1/images
{
    "images": [
        {
            "name": "fedora-21-atomic-3",
            "container_format": "bare",
            "disk_format": "qcow2",
            "checksum": "d3b3da0e07743805dcc852785c7fc258",
            "id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
            "size": 770179072
        }
    ]
}
On 10/09/15 11:46, Su Zhang wrote:

Hello,

I am hitting an error and its trace passes def index () in 
glance/registry/api/v1/images.py.

I assume def index() is called by glance image-list. However, while testing 
glance image-list I realized that def detail() is called under 
glance/registry/api/v1/images.py instead of def index().

Could someone let me know what's the difference between the two functions? How 
can I test out def index() under glance/registry/api/v1/images.py through CLI 
or API?

Thanks,

--
Su Zhang



__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Cheers & Best regards,

Fei Long Wang (王飞龙)

--

Senior Cloud Software Engineer

Tel: +64-48032246

Email: flw...@catalyst.net.nz

Catalyst IT Limited

Level 6, Catalyst House, 150 Willis Street, Wellington

--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [CI] [zuul] Can not vote +/-1 verified into gerrit server

2015-09-10 Thread Xie, Xianshan
Hi, all,
   In my CI environment, after submitting a patch to openstack-dev/sandbox,
the Jenkins job is launched automatically, and the result message of the
job is also posted to the gerrit server successfully.
Everything seems fine.

But in the “Verified” column, there is no verified vote, such as +1 or -1.
(patch url: https://review.openstack.org/#/c/222049/,
CI name:  Fnst OpenStackTest CI)

Although I have already added the “verified” label into the layout.yaml, under 
the check pipeline, it does not work yet.

And my configuration info is set as follows:
layout.yaml:
---
pipelines:
  - name: check
    trigger:
      gerrit:
        - event: patchset-created
        - event: change-restored
        - event: comment-added
        …
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1

jobs:
  - name: noop-check-communication
    parameter-function: reusable_node

projects:
  - name: openstack-dev/sandbox
    - noop-check-communication
---


And the projects.yaml of the Jenkins job:
---
- project:
    name: sandbox
    jobs:
      - noop-check-communication:
          node: 'devstack_slave || devstack-precise-check || d-p-c'
    …
---

Could anyone help me? Thanks in advance.

Xiexs

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Command structure for OSC plugin

2015-09-10 Thread Dmitry Tantsur

On 09/09/2015 06:48 PM, Jim Rollenhagen wrote:

On Tue, Sep 01, 2015 at 03:47:03PM -0500, Dean Troyer wrote:

[late catch-up]

On Mon, Aug 24, 2015 at 2:56 PM, Doug Hellmann 
wrote:


Excerpts from Brad P. Crochet's message of 2015-08-24 15:35:59 -0400:

On 24/08/15 18:19 +, Tim Bell wrote:



>From a user perspective, where bare metal and VMs are just different

flavors (with varying capabilities), can we not use the same commands
(server create/rebuild/...) ? Containers will create the same conceptual
problems.


OSC can provide a converged interface but if we just replace '$ ironic

' by '$ openstack baremetal ', this seems to be a missed
opportunity to hide the complexity from the end user.


Can we re-use the existing server structures ?




I've wondered about how users would see doing this, we've done it already
with the quota and limits commands (blurring the distinction between
project APIs).  At some level I am sure users really do not care about some
of our project distinctions.



To my knowledge, overriding or enhancing existing commands like that

is not possible.

You would have to do it in tree, by making the existing commands
smart enough to talk to both nova and ironic, first to find the
server (which service knows about something with UUID XYZ?) and
then to take the appropriate action on that server using the right
client. So it could be done, but it might lose some of the nuance
between the server types by munging them into the same command. I
don't know what sorts of operations are different, but it would be
worth doing the analysis to see.



I do have an experimental plugin that hooks the server create command to
add some options and change its behaviour so it is possible, but right now
I wouldn't call it supported at all.  That might be something that we could
consider doing though for things like this.

The current model for commands calling multiple project APIs is to put them
in openstackclient.common, so yes, in-tree.

Overall, though, to stay consistent with OSC you would map operations into
the current verbs as much as possible.  It is best to think in terms of how
the CLI user is thinking and what she wants to do, and not how the REST or
Python API is written.  In this case, 'baremetal' is a type of server, a
set of attributes of a server, etc.  As mentioned earlier, containers will
also have a similar paradigm to consider.


Disclaimer: I don't know much about OSC or its syntax, command
structure, etc. These may not be well-formed thoughts. :)


With the same disclaimer applied...



While it would be *really* cool to support the same command to do things
to nova servers or do things to ironic servers, I don't know that it's
reasonable to do so.

Ironic is an admin-only API, that supports running standalone or behind
a Nova installation with the Nova virt driver. The API is primarily used
by Nova, or by admins for management. In the case of a standalone
configuration, an admin can use the Ironic API to deploy a server,
though the recommended approach is to use Bifrost[0] to simplify that.
In the case of Ironic behind Nova, users are expected to boot baremetal
servers through Nova, as indicated by a flavor.

So, many of the nova commands (openstack server foo) don't make sense in
an Ironic context, and vice versa. It would also be difficult to
determine if the commands should go through Nova or through Ironic.
The path could be something like: check that Ironic exists, see if user
has access, hence standalone mode (oh wait, operators probably have
access to manage Ironic *and* deploy baremetal through Nova, what do?).


I second this. I'd also like to add that in the case of Ironic, "server 
create" may actually involve several complex actions that do not map to 
'nova boot'. First we create a node record in the database, second we 
check its power credentials, third we do properties inspection, and finally 
we do cleaning. None of these make any sense in a virtual environment.
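
To make the command-structure question a bit more concrete, below is a minimal 
sketch of how an OSC plugin command is typically built on cliff. The class, options 
and behaviour are purely illustrative and are not the proposed ironic plugin; real 
plugin commands are wired up through entry points in setup.cfg.

from cliff import command


class EnrollBaremetalNode(command.Command):
    """Enroll a new baremetal node (illustrative only)."""

    def get_parser(self, prog_name):
        parser = super(EnrollBaremetalNode, self).get_parser(prog_name)
        parser.add_argument('--driver', required=True,
                            help='Ironic driver to manage this node')
        parser.add_argument('--power-address',
                            help='BMC/IPMI address used to check power state')
        return parser

    def take_action(self, parsed_args):
        # A real plugin would call python-ironicclient here to create the node
        # record, validate power credentials, trigger inspection, and so on.
        self.app.stdout.write('Would enroll node with driver %s\n'
                              % parsed_args.driver)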




I think we should think of "openstack baremetal foo" as commands to
manage the baremetal service (Ironic), as that is what the API is
primarily intended for. Then "openstack server foo" just does what it
does today, and if the flavor happens to be a baremetal flavor, the user
gets a baremetal server.

// jim

[0] https://github.com/openstack/bifrost

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Question about generating an oslo.utils release

2015-09-10 Thread Paul Carlton

Hi

I have an oslo.utils change merged (220620).  A nova change (220622) depends 
on this.  What is the process for creating a new version of oslo.utils?  Is this 
performed periodically by a release manager or do I need to do something myself?


Incidentally, despite including a Depends-On tag in my nova change's 
commit message, my tests that depend on the oslo.utils change failed in 
CI. I thought the use of Depends-On would cause it to load oslo.utils 
using the referenced development commit?


Thanks

--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hp.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] questions about nova compute monitors extensions

2015-09-10 Thread Hou Gang HG Liu
Hi all,

I notice that the nova compute monitor now only tries to load monitors from the 
namespace "nova.compute.monitors.cpu", and only one monitor per namespace can be 
enabled (
https://review.openstack.org/#/c/209499/6/nova/compute/monitors/__init__.py
).

Is there a plan to make MonitorHandler.NAMESPACES configurable, or will it stay a 
hard-coded constraint as it is now? And how can the compute monitor support 
user-defined monitors, as it did before?
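
For what it's worth, here is a rough sketch (not nova's code) of how stevedore could 
load monitors from a configurable list of namespaces instead of a hard-coded one. The 
namespace is the one mentioned above; the plugin name is just an example.

from stevedore import named

# These values would come from configuration; they are shown here only as examples.
MONITOR_NAMESPACES = ['nova.compute.monitors.cpu']  # could list more namespaces
ENABLED_MONITORS = ['virt_driver']                  # example plugin name


def load_monitors(namespaces=MONITOR_NAMESPACES, names=ENABLED_MONITORS):
    """Return the monitor plugin classes found in the given namespaces."""
    plugins = []
    for namespace in namespaces:
        manager = named.NamedExtensionManager(namespace=namespace,
                                              names=names,
                                              invoke_on_load=False)
        # ext.plugin is the entry-point class; nothing is instantiated here.
        plugins.extend(ext.plugin for ext in manager)
    return plugins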

Thanks!
B.R

Hougang Liu (刘侯刚)
Developer - IBM Platform Resource Scheduler
Systems and Technology Group

Mobile: 86-13519121974 | Phone: 86-29-68797023 | Tie-Line: 87023
E-mail: liuh...@cn.ibm.com
3F, Zhongqing Mansion, No. 42 Gaoxin 6th Road, Xi'an, Shaanxi Province 710075, China


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] RFE process question

2015-09-10 Thread Gal Sagie
Hi James,

I think that https://review.openstack.org/#/c/216021/ might be what you are
looking for.
Please review it and see whether it fits your requirement.
Hopefully this gets approved for the next release and I can start working on
it. If you (or anyone on your team) would like to join and contribute, I would
love any help with that.

Thanks
Gal.

On Thu, Sep 10, 2015 at 8:59 AM, Armando M.  wrote:

>
> On 10 September 2015 at 11:04, James Dempsey 
> wrote:
>
>> Greetings Devs,
>>
>> I'm very excited about the new RFE process and thought I'd test it by
>> requesting a feature that is very often requested by my users[1].
>>
>> There are some great docs out there about how to submit an RFE, but I
>> don't know what should happen after the submission to launchpad. My RFE
>> bug seems to have been untouched for a month and I'm unsure if I've done
>> something wrong. So, here are a few questions that I have.
>>
>>
>> 1. Should I be following up on the dev list to ask for someone to look
>> at my RFE bug?
>> 2. How long should I expect it to take to have my RFE acknowledged?
>> 3. As an operator, I'm a bit ignorant as to whether or not there are
>> times during the release cycle during which there simply won't be
>> bandwidth to consider RFE bugs.
>> 4. Should I be doing anything else?
>>
>> Would love some guidance.
>>
>
> you did nothing wrong, the team was simply busy going through the existing
> schedule. Having said that, you could have spared a few more words on the
> use case and what you mean by annotations.
>
> I'll follow up on the RFE for more questions.
>
> Cheers,
> Armando
>
>
>>
>> Cheers,
>> James
>>
>> [1] https://bugs.launchpad.net/neutron/+bug/1483480
>>
>> --
>> James Dempsey
>> Senior Cloud Engineer
>> Catalyst IT Limited
>> +64 4 803 2264
>> --
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-10 Thread Oleg Gelbukh
Alex,

I absolutely understand the point you are making about the need for deployment
engineers to modify things 'on the fly' in a customer environment. It makes
things really flexible and lowers the entry barrier for sure.

However, I would like to note that in my opinion this kind of 'monkey
patching' is actually a bad practice for any environments other than dev
ones. It immediately leads to the emergence of unsupportable frankenclouds. I
would welcome any modification to the workflow that discourages people
from doing that.

--
Best regards,
Oleg Gelbukh

On Wed, Sep 9, 2015 at 5:56 PM, Alex Schultz  wrote:

> Hey Vladimir,
>
>
>
>> Regarding plugins: plugins are welcome to install specific additional
>> DEB/RPM repos on the master node, or just configure cluster to use
>> additional online repos, where all necessary packages (including plugin
>> specific puppet manifests) are to be available. Current granular deployment
>> approach makes it easy to append specific pre-deployment tasks
>> (master/slave does not matter). Correct me if I am wrong.
>>
>>
> Don't get me wrong, I think it would be good to move to a fuel-library
> distributed via package only.  I'm bringing these points up to indicate
> that there is many other things that live in the fuel library puppet path
> than just the fuel-library package.  The plugin example is just one place
> that we will need to invest in further design and work to move to the
> package only distribution.  What I don't want is some partially executed
> work that only works for one type of deployment and creates headaches for
> the people actually having to use fuel.  The deployment engineers and
> customers who actually perform these actions should be asked about
> packaging and their comfort level with this type of requirements.  I don't
> have a complete understanding of the all the things supported today by the
> fuel plugin system so it would be nice to get someone who is more familiar
> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
> don't think we are building fuel-library debs at this time either.  So
> without some work on both sides, we cannot move to just packages.
>
>
>> Regarding flexibility: having several versioned directories with puppet
>> modules on the master node, having several fuel-libraryX.Y packages
>> installed on the master node makes things "exquisitely convoluted" rather
>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>> rsync, etc. if you really need to do things manually. But we have
>> convenient service (Perestroika) which builds packages in minutes if you
>> need. Moreover, In the nearest future (by 8.0) Perestroika will be
>> available as an application independent from CI. So, what is wrong with
>> building fuel-library package? What if you want to troubleshoot nova (we
>> install it using packages)? Should we also use rsync for everything else
>> like nova, mysql, etc.?
>>
>>
> Yes, we do have a service like Perestroika to build packages for us.  That
> doesn't mean everyone else does or has access to do that today.  Setting up
> a build system is a major undertaking and making that a hard requirement to
> interact with our product may be a bit much for some customers.  In
> speaking with some support folks, there are times when files have to be
> munged to get around issues because there is no package or things are on
> fire so they can't wait for a package to become available for a fix.  We
> need to be careful not to impose limits without proper justification and
> due diligence.  We already build the fuel-library package, so there's no
> reason you couldn't try switching the rsync to install the package if it's
> available on a mirror.  I just think you're going to run into the issues I
> mentioned which need to be solved before we could just mark it done.
>
> -Alex
>
>
>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz 
>> wrote:
>>
>>> I agree that we shouldn't need to sync as we should be able to just
>>> update the fuel-library package. That being said, I think there might be a
>>> few issues with this method. The first issue is with plugins and how to
>>> properly handle the distribution of the plugins as they may also include
>>> puppet code that needs to be installed on the other nodes for a deployment.
>>> Currently I do not believe we install the plugin packages anywhere except
>>> the master and when they do get installed there may be some post-install
>>> actions that are only valid for the master.  Another issue is being
>>> flexible enough to allow for deployment engineers to make custom changes
>>> for a given environment.  Unless we can provide an improved process to
>>> allow for people to provide in place modifications for an environment, we
>>> can't do away with the rsync.
>>>
>>> If we want to go completely down the package route (and we probably
>>> should), we need to make sure that all of the other pieces that currently
>>> go

Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-10 Thread Oleg Gelbukh
The reason people want the offline deployment feature is not poor Internet
connectivity, but rather enterprise intranets where getting a subnet with
external access is sometimes a real pain in various body parts.

--
Best regards,
Oleg Gelbukh

On Thu, Sep 10, 2015 at 8:52 AM, Igor Kalnitsky 
wrote:

> Hello,
>
> I agree with Vladimir - the idea of online repos is a right way to
> move. In 2015 I believe we can ignore this "poor Internet connection"
> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
> distributives - most of them fetch needed packages from the Internet
> during installation, not from CD/DVD. The netboot installers are
> popular, I can't even remember when was the last time I install my
> Debian from the DVD-1 - I use netboot installer for years.
>
> Thanks,
> Igor
>
>
> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang  wrote:
> >
> >
> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz 
> wrote:
> >>
> >>
> >> Hey Vladimir,
> >>
> >>>
> >>>
> >
> > 1) There won't be such things in like [1] and [2], thus less
> > complicated flow, less errors, easier to maintain, easier to
> understand,
> > easier to troubleshoot
> > 2) If one wants to have local mirror, the flow is the same as in case
> > of upstream repos (fuel-createmirror), which is clrear for a user to
> > understand.
> 
> 
>  From the issues I've seen,  fuel-createmirror isn't very straight
>  forward and has some issues making it a bad UX.
> >>>
> >>>
> >>> I'd say the whole approach of having such tool as fuel-createmirror is
> a
> >>> way too naive. Reliable internet connection is totally up to network
> >>> engineering rather than deployment. Even using proxy is much better
> that
> >>> creating local mirror. But this discussion is totally out of the scope
> of
> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
> >>> straightforward (installed as rpm, has just a couple of command line
> >>> options). The quality of this script is also out of the scope of this
> >>> thread. BTW we have plans to improve it.
> >>
> >>
> >>
> >> Fair enough, I just wanted to raise the UX issues around these types of
> >> things as they should go into the decision making process.
> >>
> >>
> >>>
> >
> >
> > Many people still associate ISO with MOS, but it is not true when
> using
> > package based delivery approach.
> >
> > It is easy to define necessary repos during deployment and thus it is
> > easy to control what exactly is going to be installed on slave nodes.
> >
> > What do you guys think of it?
> >
> >
> 
>  Reliance on internet connectivity has been an issue since 6.1. For
> many
>  large users, complete access to the internet is not available or not
>  desired.  If we want to continue down this path, we need to improve
> the
>  tools to setup the local mirror and properly document what
> urls/ports/etc
>  need to be available for the installation of openstack and any mirror
>  creation process.  The ideal thing is to have an all-in-one CD
> similar to a
>  live cd that allows a user to completely try out fuel wherever they
> want
>  with out further requirements of internet access.  If we don't want to
>  continue with that, we need to do a better job around providing the
> tools
>  for a user to get up and running in a timely fashion.  Perhaps
> providing an
>  net-only iso and an all-included iso would be a better solution so
> people
>  will have their expectations properly set up front?
> >>>
> >>>
> >>> Let me explain why I think having local MOS mirror by default is bad:
> >>> 1) I don't see any reason why we should treat MOS  repo other way than
> >>> all other online repos. A user sees on the settings tab the list of
> repos
> >>> one of which is local by default while others are online. It can make
> user a
> >>> little bit confused, can't it? A user can be also confused by the
> fact, that
> >>> some of the repos can be cloned locally by fuel-createmirror while
> others
> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
> >>
> >>
> >>
> >> I agree. The process should be the same and it should be just another
> >> repo. It doesn't mean we can't include a version on an ISO as part of a
> >> release.  Would it be better to provide the mirror on the ISO but not
> have
> >> it enabled by default for a release so that we can gather user feedback
> on
> >> this? This would include improved documentation and possibly allowing a
> user
> >> to choose their preference so we can collect metrics?
> >>
> >>
> >>> 2) Having local MOS mirror by default makes things much more
> convoluted.
> >>> We are forced to have several directories with predefined names and we
> are
> >>> forced to manage these directories in nailgun, in upgrade script, etc.
> Why?
> >>> 3) When putting MOS mirror on ISO, we make people think that ISO is
> equal
> >>> to MOS, which is no

Re: [openstack-dev] [Ansible][Infra] Moving ansible roles into big tent?

2015-09-10 Thread Yolanda Robla Mota

Hi,
I will be interested as well. Having these playbooks in Ansible can also be
useful in order to integrate with the infra-ansible project.
I really see that collection as a valid alternative to the puppet modules,
with the advantages that Ansible can provide, but of course moving from puppet
to Ansible on infra internally is something that cannot be done easily, and
needs a wider discussion.
If we limit the scope of the Ansible playbooks only to infra components,
I think that the infra namespace is the way to go, having an independent
group of reviewers.

Best
Yolanda


On 09/09/15 at 21:31, Ricardo Carrillo Cruz wrote:
I'm interested in ansible roles for openstack-infra, but as there is
overlap in functionality with the current openstack-infra puppet roles
I'm not sure what the stance is from the openstack-infra core members and PTL.

I think they should go to openstack-infra, since Nodepool/Zuul/etc are
very specific to the OpenStack CI.

The question is whether we should have a subgroup within the openstack-infra
namespace for 'stuff that is not used by OpenStack CI but interesting from a
CI perspective and/or used by other downstream groups'.

Regards

2015-09-09 19:22 GMT+02:00 Paul Belanger >:


On Tue, Sep 08, 2015 at 06:50:38PM -0400, Emilien Macchi wrote:
>
>
> On 09/08/2015 10:57 AM, Paul Belanger wrote:
> > Greetings,
> >
> > I wanted to start a discussion about the future of ansible /
ansible roles in
> > OpenStack. Over the last week or so I've started down the
ansible path, starting
> > my first ansible role; I've started with ansible-role-nodepool[1].
> >
> > My initial question is simple, now that big tent is upon us, I
would like
> > some way to include ansible roles into the opentack git
workflow.  I first
> > thought the role might live under openstack-infra however I am
not sure that
> > is the right place.  My reason is, -infra tents to include
modules they
> > currently run under the -infra namespace, and I don't want to
start the effort
> > to convince people to migrate.
>
> I'm wondering what would be the goal of ansible-role-nodepool
and what
> it would orchestrate exactly. I did not find README that
explains it,
> and digging into the code makes me think you try to prepare nodepool
> images but I don't exactly see why.
>
> Since we already have puppet-nodepool, I'm curious about the
purpose of
> this role.
> IMHO, if we had to add such a new repo, it would be under
> openstack-infra namespace, to be consistent with other repos
> (puppet-nodepool, etc).
>
> > Another thought might be to reach out to the
os-ansible-deployment team and ask
> > how they see roles in OpenStack moving foward (mostly the
reason for this
> > email).
>
> os-ansible-deployment aims to setup OpenStack services in containers
> (LXC). I don't see relation between os-ansible-deployment (openstack
> deployment related) and ansible-role-nodepool (infra related).
>
> > Either way, I would be interested in feedback on moving
forward on this. Using
> > travis-ci and github works but OpenStack workflow is much better.
> >
> > [1] https://github.com/pabelanger/ansible-role-nodepool
> >
>
> To me, it's unclear how and why we are going to use
ansible-role-nodepool.
> Could you explain with use-case?
>
The most basic use case is managing nodepool using ansible, for
the purpose of
CI.  Bascially, rewrite puppet-nodepool using ansible.  I won't go
into the
reasoning for that, except to say people do not want to use puppet.

Regarding os-ansible-deployment, they are only related due to both
using
ansible. I wouldn't see os-ansible-deployment using the module,
however I would
hope to learn best practices and code reviews from the team.

Where ever the module lives, I would hope people interested in ansible
development would be group somehow.

> Thanks,
> --
> Emilien Macchi
>
>
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
Open

Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-10 Thread Adam Heczko
Agreed. We should understand that this is not an engineering decision but
rather a political/business one, and a fully functional ISO (or ISO + some
QCOW2/whatever image) is a strong requirement.

regards,

A.

On Thu, Sep 10, 2015 at 8:53 AM, Yaguang Tang  wrote:

>
>
> On Thu, Sep 10, 2015 at 1:52 PM, Igor Kalnitsky 
> wrote:
>
>> Hello,
>>
>> I agree with Vladimir - the idea of online repos is a right way to
>> move. In 2015 I believe we can ignore this "poor Internet connection"
>> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
>> distributives - most of them fetch needed packages from the Internet
>> during installation, not from CD/DVD. The netboot installers are
>> popular, I can't even remember when was the last time I install my
>> Debian from the DVD-1 - I use netboot installer for years.
>>
>
> You are think in a way of developers, but the fact is that Fuel are widely
> used by various  enterprises in the world wide, there are many security
> policies  for enterprise to have no internet connection.
>
>
>
>> Thanks,
>> Igor
>>
>>
>> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang  wrote:
>> >
>> >
>> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz 
>> wrote:
>> >>
>> >>
>> >> Hey Vladimir,
>> >>
>> >>>
>> >>>
>> >
>> > 1) There won't be such things in like [1] and [2], thus less
>> > complicated flow, less errors, easier to maintain, easier to
>> understand,
>> > easier to troubleshoot
>> > 2) If one wants to have local mirror, the flow is the same as in
>> case
>> > of upstream repos (fuel-createmirror), which is clrear for a user to
>> > understand.
>> 
>> 
>>  From the issues I've seen,  fuel-createmirror isn't very straight
>>  forward and has some issues making it a bad UX.
>> >>>
>> >>>
>> >>> I'd say the whole approach of having such tool as fuel-createmirror
>> is a
>> >>> way too naive. Reliable internet connection is totally up to network
>> >>> engineering rather than deployment. Even using proxy is much better
>> that
>> >>> creating local mirror. But this discussion is totally out of the
>> scope of
>> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
>> >>> straightforward (installed as rpm, has just a couple of command line
>> >>> options). The quality of this script is also out of the scope of this
>> >>> thread. BTW we have plans to improve it.
>> >>
>> >>
>> >>
>> >> Fair enough, I just wanted to raise the UX issues around these types of
>> >> things as they should go into the decision making process.
>> >>
>> >>
>> >>>
>> >
>> >
>> > Many people still associate ISO with MOS, but it is not true when
>> using
>> > package based delivery approach.
>> >
>> > It is easy to define necessary repos during deployment and thus it
>> is
>> > easy to control what exactly is going to be installed on slave
>> nodes.
>> >
>> > What do you guys think of it?
>> >
>> >
>> 
>>  Reliance on internet connectivity has been an issue since 6.1. For
>> many
>>  large users, complete access to the internet is not available or not
>>  desired.  If we want to continue down this path, we need to improve
>> the
>>  tools to setup the local mirror and properly document what
>> urls/ports/etc
>>  need to be available for the installation of openstack and any mirror
>>  creation process.  The ideal thing is to have an all-in-one CD
>> similar to a
>>  live cd that allows a user to completely try out fuel wherever they
>> want
>>  with out further requirements of internet access.  If we don't want
>> to
>>  continue with that, we need to do a better job around providing the
>> tools
>>  for a user to get up and running in a timely fashion.  Perhaps
>> providing an
>>  net-only iso and an all-included iso would be a better solution so
>> people
>>  will have their expectations properly set up front?
>> >>>
>> >>>
>> >>> Let me explain why I think having local MOS mirror by default is bad:
>> >>> 1) I don't see any reason why we should treat MOS  repo other way than
>> >>> all other online repos. A user sees on the settings tab the list of
>> repos
>> >>> one of which is local by default while others are online. It can make
>> user a
>> >>> little bit confused, can't it? A user can be also confused by the
>> fact, that
>> >>> some of the repos can be cloned locally by fuel-createmirror while
>> others
>> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
>> >>
>> >>
>> >>
>> >> I agree. The process should be the same and it should be just another
>> >> repo. It doesn't mean we can't include a version on an ISO as part of a
>> >> release.  Would it be better to provide the mirror on the ISO but not
>> have
>> >> it enabled by default for a release so that we can gather user
>> feedback on
>> >> this? This would include improved documentation and possibly allowing
>> a user
>> >> to choose their preference so we can collect metrics?
>>

Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-10 Thread Adam Heczko
Folks, what I can see is that most of you represent the 'engineering' point of
view.
The way Fuel installs OpenStack is not an 'engineering' decision - it is a
political and business related decision.
I believe that the possibility to get a fully working, non internet dependent
ISO, or an ISO plus some additional (let's say QCOW2) disk image holding all
the necessary stuff required to deploy OpenStack, is a crucial determining
factor for most enterprise customers.
This is absolutely not about internet connectivity and it is not a technical
question; we are touching on a philosophical approach to software distribution
and on the way all kinds of 'C'-grade managers understand things.
For OpenStack to succeed, a 'no internet connectivity' approach is a
must have.
If not directly from within the ISO, then we have to provide easy to use and
easy to understand guidance on how to deploy OpenStack without any internet
connectivity, e.g. fuel-createmirror or other similar scripts.

Regards,

A.


On Thu, Sep 10, 2015 at 7:52 AM, Igor Kalnitsky 
wrote:

> Hello,
>
> I agree with Vladimir - the idea of online repos is a right way to
> move. In 2015 I believe we can ignore this "poor Internet connection"
> reason, and simplify both Fuel and UX. Moreover, take a look at Linux
> distributives - most of them fetch needed packages from the Internet
> during installation, not from CD/DVD. The netboot installers are
> popular, I can't even remember when was the last time I install my
> Debian from the DVD-1 - I use netboot installer for years.
>
> Thanks,
> Igor
>
>
> On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang  wrote:
> >
> >
> > On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz 
> wrote:
> >>
> >>
> >> Hey Vladimir,
> >>
> >>>
> >>>
> >
> > 1) There won't be such things in like [1] and [2], thus less
> > complicated flow, less errors, easier to maintain, easier to
> understand,
> > easier to troubleshoot
> > 2) If one wants to have local mirror, the flow is the same as in case
> > of upstream repos (fuel-createmirror), which is clrear for a user to
> > understand.
> 
> 
>  From the issues I've seen,  fuel-createmirror isn't very straight
>  forward and has some issues making it a bad UX.
> >>>
> >>>
> >>> I'd say the whole approach of having such tool as fuel-createmirror is
> a
> >>> way too naive. Reliable internet connection is totally up to network
> >>> engineering rather than deployment. Even using proxy is much better
> that
> >>> creating local mirror. But this discussion is totally out of the scope
> of
> >>> this letter. Currently,  we have fuel-createmirror and it is pretty
> >>> straightforward (installed as rpm, has just a couple of command line
> >>> options). The quality of this script is also out of the scope of this
> >>> thread. BTW we have plans to improve it.
> >>
> >>
> >>
> >> Fair enough, I just wanted to raise the UX issues around these types of
> >> things as they should go into the decision making process.
> >>
> >>
> >>>
> >
> >
> > Many people still associate ISO with MOS, but it is not true when
> using
> > package based delivery approach.
> >
> > It is easy to define necessary repos during deployment and thus it is
> > easy to control what exactly is going to be installed on slave nodes.
> >
> > What do you guys think of it?
> >
> >
> 
>  Reliance on internet connectivity has been an issue since 6.1. For
> many
>  large users, complete access to the internet is not available or not
>  desired.  If we want to continue down this path, we need to improve
> the
>  tools to setup the local mirror and properly document what
> urls/ports/etc
>  need to be available for the installation of openstack and any mirror
>  creation process.  The ideal thing is to have an all-in-one CD
> similar to a
>  live cd that allows a user to completely try out fuel wherever they
> want
>  with out further requirements of internet access.  If we don't want to
>  continue with that, we need to do a better job around providing the
> tools
>  for a user to get up and running in a timely fashion.  Perhaps
> providing an
>  net-only iso and an all-included iso would be a better solution so
> people
>  will have their expectations properly set up front?
> >>>
> >>>
> >>> Let me explain why I think having local MOS mirror by default is bad:
> >>> 1) I don't see any reason why we should treat MOS  repo other way than
> >>> all other online repos. A user sees on the settings tab the list of
> repos
> >>> one of which is local by default while others are online. It can make
> user a
> >>> little bit confused, can't it? A user can be also confused by the
> fact, that
> >>> some of the repos can be cloned locally by fuel-createmirror while
> others
> >>> can't. That is not straightforward, NOT fuel-createmirror UX.
> >>
> >>
> >>
> >> I agree. The process should be the same and it should be just another
> >> repo. It doesn't me

[openstack-dev] [Fuel] IRC meeting today

2015-09-10 Thread Mike Scherbakov
Hi folks,
please add topics to the agenda before the meeting:
https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda

It would be great to discuss:

   - Critical bugs, and the pipeline of bugs which may become Criticals
   with larger audience.
   - Current status of builds from master
   - Updates on progress in other areas not related to bugs

Thanks,


-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi Node Stack - keystone federation

2015-09-10 Thread SHTILMAN, Tomer (Tomer)
>>On 09/09/15 04:10, SHTILMAN, Tomer (Tomer) wrote:
>> We are currently building in our lab multi cloud setup with keystone 
>> federation and I will check if my understating is correct, I am 
>> planning for propose a BP for this once will be clear
> On 09/09/15 Zane wrote:
>There was further interest in this at the IRC meeting today (from Daniel 
>Gonzalez), so I raised this blueprint:
>
>https://blueprints.launchpad.net/heat/+spec/multi-cloud-federation
>
>I left the Drafter and Assignee fields blank, so whoever starts working on the 
>spec and the code, respectively, should put their names in those fields. If 
>you see someone else's name there, you should co-ordinate with them to avoid 
>double-handling.
>
>cheers,
>Zane.
>
Hi Zane
For some reason I couldn't change the assignee and the drafter on this. Can you 
please assign me to this BP?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-10 Thread Mike Scherbakov
+1 to Alex & Andrey. Let's just be very careful, and consider all the use
cases before making a change.
If we can have answers to all the use cases, then we are good to go.

The important thing we need to fix now is to enable an easy UX for making
changes to environments after deployment, like standard configuration
management allows you to do. Namely, being able to:
a) modify params on the settings tab
b) modify templates / puppet manifests
and apply the changes easily to nodes.

Now, we can do b) easily: just click the Deploy button or run two or three
commands [1]. a) requires changes in Nailgun code to allow post-deployment
modification of settings (we currently lock the settings tab after deployment).

If we switch to package installation and this workflow (change the
manifests + 2-3 commands to rsync/run puppet on nodes) becomes a
nightmare, then we'll need to figure out something else. It has to be easy
to do development and to use Fuel as a configuration management tool.

[1] https://bugs.launchpad.net/fuel/+bug/1385615

On Wed, Sep 9, 2015 at 8:01 AM Alex Schultz  wrote:

> Hey Vladimir,
>
>
>
>> Regarding plugins: plugins are welcome to install specific additional
>> DEB/RPM repos on the master node, or just configure cluster to use
>> additional online repos, where all necessary packages (including plugin
>> specific puppet manifests) are to be available. Current granular deployment
>> approach makes it easy to append specific pre-deployment tasks
>> (master/slave does not matter). Correct me if I am wrong.
>>
>>
> Don't get me wrong, I think it would be good to move to a fuel-library
> distributed via package only.  I'm bringing these points up to indicate
> that there is many other things that live in the fuel library puppet path
> than just the fuel-library package.  The plugin example is just one place
> that we will need to invest in further design and work to move to the
> package only distribution.  What I don't want is some partially executed
> work that only works for one type of deployment and creates headaches for
> the people actually having to use fuel.  The deployment engineers and
> customers who actually perform these actions should be asked about
> packaging and their comfort level with this type of requirements.  I don't
> have a complete understanding of the all the things supported today by the
> fuel plugin system so it would be nice to get someone who is more familiar
> to weigh in on this idea. Currently plugins are only rpms (no debs) and I
> don't think we are building fuel-library debs at this time either.  So
> without some work on both sides, we cannot move to just packages.
>
>
>> Regarding flexibility: having several versioned directories with puppet
>> modules on the master node, having several fuel-libraryX.Y packages
>> installed on the master node makes things "exquisitely convoluted" rather
>> than flexible. Like I said, it is flexible enough to use mcollective, plain
>> rsync, etc. if you really need to do things manually. But we have
>> convenient service (Perestroika) which builds packages in minutes if you
>> need. Moreover, In the nearest future (by 8.0) Perestroika will be
>> available as an application independent from CI. So, what is wrong with
>> building fuel-library package? What if you want to troubleshoot nova (we
>> install it using packages)? Should we also use rsync for everything else
>> like nova, mysql, etc.?
>>
>>
> Yes, we do have a service like Perestroika to build packages for us.  That
> doesn't mean everyone else does or has access to do that today.  Setting up
> a build system is a major undertaking and making that a hard requirement to
> interact with our product may be a bit much for some customers.  In
> speaking with some support folks, there are times when files have to be
> munged to get around issues because there is no package or things are on
> fire so they can't wait for a package to become available for a fix.  We
> need to be careful not to impose limits without proper justification and
> due diligence.  We already build the fuel-library package, so there's no
> reason you couldn't try switching the rsync to install the package if it's
> available on a mirror.  I just think you're going to run into the issues I
> mentioned which need to be solved before we could just mark it done.
>
> -Alex
>
>
>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz 
>> wrote:
>>
>>> I agree that we shouldn't need to sync as we should be able to just
>>> update the fuel-library package. That being said, I think there might be a
>>> few issues with this method. The first issue is with plugins and how to
>>> properly handle the distribution of the plugins as they may also include
>>> puppet code that needs to be installed on the other nodes for a deployment.
>>> Currently I do not believe we install the plugin packages anywhere except
>>> the master and when they do get installed there may be some post-install
>>> actions that are only val

<    1   2