Re: [openstack-dev] VPC Proposal

2014-02-15 Thread Allamaraju, Subbu
Harshad,

Curious to know if there is broad interest in an AWS-compatible API in the 
community? To clarify my concern: the incremental path from an AWS-compatible API to 
an OpenStack model is not clear.

Subbu

On Feb 15, 2014, at 10:04 PM, Harshad Nakil  wrote:

> 
> I agree with the problem as you have defined it; addressing it will require more 
> fundamental changes.
> Meanwhile, many users will benefit from AWS VPC API compatibility.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-15 Thread Allamaraju, Subbu
True. The domain hierarchy isn't sufficient to capture resource sharing across a 
VPC. For instance, if a VPC admin would like to scope certain networks or 
images to projects managed within a VPC, there isn't an abstraction for that today.

Subbu

On Feb 14, 2014, at 11:42 AM, Martin, JC  wrote:

> Arvind,
> 
> Thanks for pointing me to the blueprint. I'll add it to the related blueprints.
> 
> I think this could be part of the solution, but in addition to defining 
> administrative boundaries, we need to change the way object sharing works. 
> Today, there are only two levels: project-private or public. You can share 
> objects between projects, but there is no single model across OpenStack to 
> define resource scope; each component has a slightly different model. The VPC 
> implementation will also have to address that.
> 
> JC
> 
> On Feb 14, 2014, at 11:26 AM, "Tiwari, Arvind"  wrote:
> 
>> Hi JC,
>> 
>> I have proposed a BP to address VPC using domain hierarchy and hierarchical 
>> administrative boundaries.
>> 
>> https://blueprints.launchpad.net/keystone/+spec/hierarchical-administrative-boundary
>> 
>> 
>> Thanks,
>> Arvind
>> -Original Message-
>> From: Martin, JC [mailto:jch.mar...@gmail.com] 
>> Sent: Friday, February 14, 2014 12:09 PM
>> To: OpenStack Development Mailing List
>> Subject: [openstack-dev] VPC Proposal
>> 
>> 
>> There is a blueprint targeted for Icehouse-3 that aims to implement the 
>> AWS VPC API. I don't think that this blueprint provides the necessary 
>> constructs to really implement a VPC, and it does not take into account 
>> domains or the proposed multi-tenant hierarchy. In addition, I could not find a 
>> discussion about this topic leading to its approval.
>> 
>> For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
>> discussion on how to really implement VPC, and eventually split it into 
>> multiple real blueprints for each area.
>> 
>> Please, provide feedback on the following document, and on the best way to 
>> move this forward.
>> 
>> https://wiki.openstack.org/wiki/Blueprint-VPC
>> 
>> Thanks,
>> 
>> JC.
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Trove-Gate timeouts

2014-02-15 Thread Mirantis
Hello, Mathew.

I'm seeing the same issues with the gate.
I also tried to find out why the gate job is failing. I first ran into an issue related 
to a Cinder installation failure in devstack, but then I found the same problem 
you described. The best option is to increase the job time limit. 
Thanks for the research. I hope the gate will be fixed in the simplest way and in 
the shortest time.

Best regards
Denis Makogon.
Sent from an iPad

> On Feb 16, 2014, at 00:46, "Lowery, Mathew"  wrote:
> 
> Hi all,
> 
> Issue #1: Jobs that need more than one hour
> 
> Of the last 30 Trove-Gate builds (spanning three days), 7 have failed due to 
> a Jenkins job-level timeout (not a proboscis timeout). These jobs had no 
> failed tests when the timeout occurred.
> 
> Not having access to the job config to see what the job looks like, I used 
> the console output to guess what was going on. It appears that a Jenkins 
> plugin named boot-hpcloud-vm is booting a VM and running the commands given, 
> including redstack int-tests. From the console output, it states that it was 
> supplied with an ssh_shell_timeout="7200". This is passed down to another 
> library called net-ssh-simple. net-ssh-simple has two timeouts: an idle 
> timeout and an operation timeout.
> 
> In the latest boot-hpcloud-vm, ssh_shell_timeout is passed down to 
> net-ssh-simple for both the idle timeout and the operation timeout. But in 
> older versions of boot-hp-cloud-vm, ssh_shell_timeout is passed down to 
> net-ssh-simple for only the idle timeout, leaving a default operation timeout 
> of 3600. This is why I believe these jobs are failing after exactly one hour.
> 
> FYI: Here are the jobs that failed due to the Jenkins job-level timeout (and 
> had no test failures when the timeout occurred) along with their associated 
> patch sets:
> https://rdjenkins.dyndns.org/job/Trove-Gate/2532/console 
> (http://review.openstack.org/73786)
> https://rdjenkins.dyndns.org/job/Trove-Gate/2530/console 
> (http://review.openstack.org/73736)
> https://rdjenkins.dyndns.org/job/Trove-Gate/2517/console 
> (http://review.openstack.org/63789)
> https://rdjenkins.dyndns.org/job/Trove-Gate/2514/console 
> (https://review.openstack.org/50944)
> https://rdjenkins.dyndns.org/job/Trove-Gate/2513/console 
> (https://review.openstack.org/50944)
> https://rdjenkins.dyndns.org/job/Trove-Gate/2504/console 
> (https://review.openstack.org/73147)
> https://rdjenkins.dyndns.org/job/Trove-Gate/2503/console 
> (https://review.openstack.org/73147)
> 
> Suggested action items:
> If it is acceptable to have jobs that run over one hour, then install the 
> latest boot-hpcloud-vm plugin for Jenkins, which will make the 
> operation timeout match the idle timeout.
> 
> Issue #2: The running time of all jobs is 1 hr 1 min
> 
> While the Jenkins job-level timeout will end the job after one hour, it also 
> appears to keep every job running for a minimum of one hour.  To be more 
> precise, the timeout (or minimum running time) occurs on the part of the 
> Jenkins job that runs commands on the VM; the VM provision (which takes about 
> one minute) is excluded from this timeout which is why the running time of 
> all jobs is around 1 hr 1 min. A sampling of console logs showing the time 
> the int-tests completed and when the timeout kicks in:
> 
> https://rdjenkins.dyndns.org/job/Trove-Gate/2531/console (00:01:03 wasted)
> 04:51:12 COMMAND_0: echo refs/changes/36/73736/2
> ...
> 05:50:10 335.41 proboscis.case.MethodTest (test_instance_created)
> 05:50:10 194.05 proboscis.case.MethodTest 
> (test_instance_returns_to_active_after_resize)
> 05:51:13 **
> 05:51:13 ** STDERR-BEGIN **
> 
> https://rdjenkins.dyndns.org/job/Trove-Gate/2521/console (00:06:44 wasted)
> 21:11:44 COMMAND_0: echo refs/changes/89/63789/13
> ...
> 22:05:00 195.11 proboscis.case.MethodTest 
> (test_instance_returns_to_active_after_resize)
> 22:05:00 186.89 proboscis.case.MethodTest (test_resize_down)
> 22:11:44 **
> 22:11:44 ** STDERR-BEGIN **
> 
> https://rdjenkins.dyndns.org/job/Trove-Gate/2518/consoleFull (00:06:01 wasted)
> 17:46:59 COMMAND_0: echo refs/changes/02/64302/20
> ...
> 18:40:57 210.03 proboscis.case.MethodTest 
> (test_instance_returns_to_active_after_resize)
> 18:40:57 187.89 proboscis.case.MethodTest (test_resize_down)
> 18:46:58 **
> 18:46:58 ** STDERR-BEGIN **
> 
> Suggested action items:
> Given that the minimum running time is one hour, I assume the problem is in 
> the net-ssh-simple library. Needs more investigation.
> 
> Issue #3: Jenkins console log line timestamps different between full and 
> truncated views
> 
> I assume this is due to JENKINS-17779.
> 
> Suggested action items:
> Upgrade the timestamper plugin.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.

Re: [openstack-dev] VPC Proposal

2014-02-15 Thread Harshad Nakil
Comments Inline

Regards
-Harshad


On Sat, Feb 15, 2014 at 3:18 PM, Martin, JC  wrote:

> Harshad,
>
> Thanks. What happens when I create two VPCs? Besides the project private
> networks, what is isolated?
>

Since a VPC is mapped to a project, all the isolation provided by the project
is available.

>
> What do you call a DC admin? I know of two administrators:
>    - Cloud administrator
>    - VPC administrator
>

I mean the cloud administrator.

>
> Are you saying that VPCs cannot have their own external gateways and NAT
> pools  ?
>

Yes, conceptually, as far as AWS API compatibility is concerned.

>
> Also, maybe more importantly, why try to build an AWS API before the
> functionality is available in OpenStack? Why not wait and do it properly before
> defining the API mapping?
>
Actually, as far as the AWS API is concerned, we have all the proper building
blocks in OpenStack.

I agree with the problem as you have defined it; addressing it will require more
fundamental changes.
Meanwhile, many users will benefit from AWS VPC API compatibility.

>
> JC
> On Feb 15, 2014, at 8:47 AM, Harshad Nakil 
> wrote:
>
> > EIPs will be allocated from public pools. So in effect public pools and
> > shared networks are only DC admin functions, not available to VPC
> > users.
> > There is an implicit external gateway. When one creates a NAT instance or
> > VPN instance, the external interfaces of these instances come from the
> > shared network, which can be configured by the DC admin.
> >
> >
> > Regards
> > -Harshad
> >
> >
> >> On Feb 14, 2014, at 10:07 PM, "Martin, JC" 
> wrote:
> >>
> >> Harshad,
> >>
> >> I'm not sure I understand what you mean by:
> >>> However, many of these concepts are not exposed to AWS customers and
> >>> the API works well.
> >>
> >> So for example in :
> >>
> >>
> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#VPC_EIP_EC2_Differences
> >>
> >> When it says :
> >> "When you allocate an EIP, it's for use only in a VPC."
> >>
> >> Are you saying that the behavior of your API would be consistent
> without scoping the external networks to a VPC and using the public pool
> instead ?
> >>
> >> I believe that your api may work for basic features on a small
> deployments with only one VPC, but as soon as you have complex setups with
> external gateways that need to be isolated, I'm not sure that it will
> provide parity anyway with what EC2 provides.
> >>
> >>
> >> Maybe I missed something.
> >>
> >>
> >> JC
> >>
> >>> On Feb 14, 2014, at 7:35 PM, Harshad Nakil 
> wrote:
> >>>
> >>> Hi JC,
> >>>
> >>> You have put it aptly. The goal of the blueprint is to present a facade for
> >>> the AWS VPC API, as the name suggests.
> >>> As per your definition of VPC, shared networks will have issues.
> >>> However, many of these concepts are not exposed to AWS customers and
> >>> the API works well.
> >>> While we work incrementally towards your definition of VPC, we can
> >>> maintain compatibility with the AWS API that we are proposing, as we are
> >>> a subset of your proposal and don't expose all features within VPC.
> >>>
> >>> Regards
> >>> -Harshad
> >>>
> >>>
>  On Feb 14, 2014, at 6:22 PM, "Martin, JC" 
> wrote:
> 
>  Rudra,
> 
>  I do not agree that the current proposal provides the semantic of a
> VPC. If the goal is to only provide a facade through the EC2 API, it may
> address this, but unless you implement the basic features of a VPC, what
> good is it doing ?
> 
>  I do believe that the work can be done incrementally if we agree on
> the basic properties of a VPC, for example :
>  - allowing projects to be created while using resources defined at
> the VPC level
>  - preventing resources not explicitly defined at the VPC level to be
> used by a VPC.
> 
>  I do not see in the current proposal how resources are scoped to a
> VPC, and how, for example, you prevent shared network to be used within a
> VPC, or how you can define shared networks (or other shared resources) to
> only be scoped to a VPC.
> 
>  I think we already raised our concern to you several months ago, but
> it did not seem to have been addressed in the current proposal.
> 
>  thanks,
> 
>  JC
> 
> > On Feb 14, 2014, at 3:50 PM, Rudra Rugge  wrote:
> >
> > Hi JC,
> >
> > We agree with your proposed model of a VPC resource object. The proposal
> > you are making makes sense to us, and we would like to collaborate further
> > on this. After reading your blueprint, two things come to mind.
> >
> > 1. The VPC vision for OpenStack? (Your blueprint is proposing this vision.)
> > 2. Providing AWS VPC API compatibility within the current constraints of
> > the OpenStack structure.
> >
> > The blueprint that we proposed targets #2.
> > It gives a way to implement an "AWS VPC API"-compatible API. This helps a
> > subset of customers migrate their workloads from AWS to OpenStack-based
> > clouds. In our implementation we tied a VPC to a project. That was the easiest way
> > to keep isolation with c

Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-15 Thread Jay Pipes
On Sat, 2014-02-15 at 17:20 +1300, Robert Collins wrote:
> On 15 February 2014 14:34, Fox, Kevin M  wrote:
> > I think a lot of projects don't bother to gate, because it's far too much 
> > work to set up a workable system.
> >
> > I can think of several projects I've worked on that would benefit from it 
> > but haven't because of time/cost of setting it up.
> >
> > If I could just say "solum create project foo" and get it, I'm sure it 
> > would be much more used.
> >
> > The same has been said of Unit tests and CI in the past. "We don't need 
> > it". When you give someone a simple to use system though, they see its 
> > value pretty quickly.
> >
> > Yeah, Gerrit and Jenkins are a pain to set up. That's one of the things that 
> > might make Solum great: that it removes that pain.
> 
> Gating is hard, so we should do more of it.
> 
> +1 on gating by default, rather than being nothing more than a remote
> git checkout - there are lots of those systems already, and being one
> won't make Solum stand out.

Personally, I believe having Gerrit and Jenkins as the default will turn
more people off Solum than attract them to it.

Just because we in the OpenStack community love our gating workflow and
think it's all groovy does not mean that view is common, wanted, or
understood by the vast majority of users of Heroku-like solutions.

Who is the audience here? It is not experienced developers who already
understand things like Gerrit and Jenkins. It's developers who just want
to simplify the process of pushing code up to some system other than
Github or their workstation. Adding the awkwardness of Gerrit's code
review system -- and the associated pain of trying to understand how to
define Jenkins jobs -- is something that I don't think the *default*
Solum experience should invite.

The default experience should be a simple push code, run merge tests,
and deploy into the deployment unit (whatever that is called in Solum
nowadays). There should be well-documented ways to add commit hooks into
this workflow, but having a complex Gerrit and Jenkins gated workflow is
just overkill for the default experience.
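
For illustration, the kind of opt-in commit hook mentioned above could be as
small as a git post-receive script (a rough sketch only; "deploy-to-solum" is a
placeholder command, not an existing Solum CLI):

    #!/usr/bin/env python
    # Hypothetical post-receive hook: run merge tests, then hand the new
    # revision to the deployment unit. "deploy-to-solum" is a placeholder.
    import subprocess
    import sys

    for line in sys.stdin:
        old_rev, new_rev, ref_name = line.split()
        if ref_name == "refs/heads/master":
            subprocess.check_call(["tox", "-e", "py27"])         # merge tests
            subprocess.check_call(["deploy-to-solum", new_rev])  # placeholder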

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][VMWare] VMwareVCDriver related to resize/cold migration

2014-02-15 Thread Jay Lau
Hey,

I have one question related to the OpenStack vmwareapi.VMwareVCDriver
resize/cold migration.

The following is my configuration:

DC
|
|-- Cluster1
|     |
|     |-- 9.111.249.56
|
|-- Cluster2
      |
      |-- 9.111.249.49

*Scenario 1:*
I started two nova-compute services to manage the two clusters:
1) nova-compute1.conf
cluster_name=Cluster1

2) nova-compute2.conf
cluster_name=Cluster2

3) Start up the two nova-compute services on host1 and host2 separately
4) Create one VM instance; it was booted on the Cluster2 node
9.111.249.49
| OS-EXT-SRV-ATTR:host | host2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  |
domain-c16(Cluster2) |
5) Cold migrate the VM instance
6) After the migration finished, the VM went to VERIFY_RESIZE status, and "nova
show" indicates that the VM is now located on host1:Cluster1
| OS-EXT-SRV-ATTR:host | host1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  |
domain-c12(Cluster1) |
7) But the vSphere client indicates that the VM is still running on
Cluster2
8) Trying to confirm the resize fails. The root cause is
that the nova-compute on host2 has no knowledge of domain-c12(Cluster1)

2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2810, in
do_confirm_resize
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
migration=migration)
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2836, in
_confirm_resize
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
network_info)
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 420,
in confirm_migration
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
_vmops = self._get_vmops_for_compute_node(instance['node'])
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 523,
in _get_vmops_for_compute_node
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
resource = self._get_resource_for_node(nodename)
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/driver.py", line 515,
in _get_resource_for_node
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
raise exception.NotFound(msg)
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp
NotFound: NV-3AB798A The resource domain-c12(Cluster1) does not exist
2014-02-16 07:10:17.166 12720 TRACE nova.openstack.common.rpc.amqp


*Scenario 2:*

I started two nova-compute services to manage the two clusters, but both computes
have the same cluster configuration in nova.conf:
1) nova-compute1.conf
cluster_name=Cluster1
cluster_name=Cluster2

2) nova-compute2.conf
cluster_name=Cluster1
cluster_name=Cluster2

3) Then create and resize/cold migrate a VM; it always succeeds.
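
For what it's worth, here is a minimal sketch of what the Scenario 1 traceback
suggests (illustrative only, not the actual VMwareVCDriver code): each
nova-compute builds its node-to-vmops map only from the clusters listed in its
own nova.conf, so host2 cannot resolve domain-c12(Cluster1), while in Scenario 2
both computes know both clusters and the lookup always succeeds.

    # Illustrative sketch, not the real driver: the per-compute node map is
    # derived from the configured cluster_name entries in nova.conf.
    class FakeVCDriver(object):
        def __init__(self, nodenames):
            self._vmops_by_node = dict((n, object()) for n in nodenames)

        def confirm_migration(self, nodename):
            if nodename not in self._vmops_by_node:
                raise Exception("The resource %s does not exist" % nodename)
            return self._vmops_by_node[nodename]

    host2 = FakeVCDriver(["domain-c16(Cluster2)"])                         # Scenario 1
    both = FakeVCDriver(["domain-c12(Cluster1)", "domain-c16(Cluster2)"])  # Scenario 2
    both.confirm_migration("domain-c12(Cluster1)")   # succeeds
    host2.confirm_migration("domain-c12(Cluster1)")  # raises, matching the trace above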


*Questions:*
For multi-cluster management, does the VMware driver require all nova-compute services
to have the same cluster configuration so that resize/cold migration can succeed?

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] "bad" default values in conf files

2014-02-15 Thread Michael Chapman
> Have the folks creating our puppet modules and install recommendations
> taken a close look at all the options and determined that the defaults are
> appropriate for deploying RHEL OSP in the configurations we are recommending?

If by our puppet modules you mean the ones in stackforge, in the vast
majority of cases they follow the defaults provided. I check that this is
the case during review, and the only exceptions should be stuff like the db
and mq locations that have to change for almost every install.

 - Michael



On Sat, Feb 15, 2014 at 10:15 AM, Dirk Müller  wrote:

> >> were not appropriate for real deployment, and our puppet modules were
> >> not providing better values
> >> https://bugzilla.redhat.com/show_bug.cgi?id=1064061.
>
> I'd agree that raising the caching timeout is not a good "production
> default" choice. I'd also argue that the underlying issue is fixed
> with https://review.openstack.org/#/c/69884/
>
> In our testing this patch has sped up the revocation retrieval by a factor
> of 120.
>
> > The default probably is too low, but raising it too high will cause
> > concern with those who want revoked tokens to take effect immediately
> > and are willing to scale the backend to get that result.
>
> I agree, and changing defaults has a cost as well: Every deployment
> solution out there has to detect the value change, update their config
> templates and potentially also migrate the setting from the old to the
> new default for existing deployments. Being in that situation, it has
> happened that we were "surprised" by default changes that had
> undesireable side effects, just because we chose to overwrite a
> different default elsewhere.
>
> I'm totally on board with having "production ready" defaults, but that
> also means that they seldom change, and change only for a very
> good, possibly documented reason.
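
To make that cost concrete, a deployment that wants to diverge from such a
default ends up carrying a snippet like the following in its config templates
(illustrative only; the option name and value here are an assumption about the
auth_token middleware of that era, not a recommendation):

    [keystone_authtoken]
    # deployment-specific override; has to be revisited whenever the
    # upstream default changes
    revocation_cache_time = 10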
>
>
> Greetings,
> Dirk
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Urgent questions on Service Type Framework for VPNaaS

2014-02-15 Thread Paul Michali
Hi Nachi and other cores!

I'm very close to publishing my vendor-based VPNaaS driver (the service driver is 
ready, the device driver is a day or two out), but I have a bit of an issue. This 
code uses the Service Type Framework, which, as you know, is still out for 
review (and has been idle for a long time). I updated the STF client code, and 
that update is in Gerrit.

I saw you put a -1 on your STF server code. Is the feature being abandoned or 
was that for some other reason?

If we are going forward with it, can you update the server STF code, or should I do it 
(I have a branch with the STF based on master from about 2 weeks ago, so it 
should update OK)?

Also, I'm wondering (worried) about the logistics of my reviews. I wanted to do 
my service driver and device driver separately (I guess making the latter 
dependent on the former in Gerrit). However, because of the STF, I'd need to 
make my service driver dependent on the STF server code too (my current branch 
has both code pieces). Really worried about the complexity there and about it 
getting hung up, if there is more delay on the STF review.

I've been working on another branch without the STF dependency; however, that 
branch has to hack in part of the STF to be able to select the service driver based on 
config rather than being hardwired to the reference driver.

Should I proceed with the STF review chaining or push out my code w/o the STF?

Thanks!

PCM (Paul Michali)

MAIL        p...@cisco.com
IRC         pcm_  (irc.freenode.net)
TW          @pmichali
GPG key     4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-15 Thread Paul Michali
Great!

Looks like we have a bunch of people interested in this. Given the Neutron 
deadline for I-3, I'll wait and try to set up an IRC discussion for next week. From there, 
if there's enough interest, we can try to get a session to discuss it at the 
Summit.

I'd love to hear the ideas and thoughts on this topic… hopefully we can enhance 
both the core and vendor services capabilities.

Kind regards.

PCM (Paul Michali)

MAIL        p...@cisco.com
IRC         pcm_  (irc.freenode.net)
TW          @pmichali
GPG key     4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

On Feb 12, 2014, at 11:43 AM, Stephen Wong  wrote:

> Hi Paul,
> 
> I am interested in this topic - please let me know if there is any update 
> on meeting or discussions.
> 
> Thanks,
> - Stephen
> 
> 
> On Mon, Feb 3, 2014 at 2:19 PM, Paul Michali  wrote:
> I'd like to see if there is interest in discussing vendor plugins for L3 
> services. The goal is to strive for consistency across vendor plugins/drivers 
> and across service types (if possible/sensible). Some of this could/should 
> apply to reference drivers as well. I'm thinking about these topics (based on 
> questions I've had on VPNaaS - feel free to add to the list):
> 
> - How to handle vendor-specific validation (e.g. say a vendor has restrictions 
>   or added capabilities compared to the reference drivers for attributes).
> - Providing "client" feedback (e.g. should help and validation be extended to 
>   include vendor capabilities, or should it be delegated to server reporting?)
> - Handling and reporting of errors to the user (e.g. how to indicate to the 
>   user that a failure has occurred establishing an IPSec tunnel in the device 
>   driver?)
> - Persistence of vendor-specific information (e.g. should new tables be used, or 
>   should/can existing reference tables be extended?).
> - Provider selection for resources (e.g. should we allow the --provider attribute 
>   on VPN IPSec policies to have vendor-specific policies, or should we rely on 
>   checks at connection creation for policy compatibility?)
> - Handling of multiple device drivers per vendor (e.g. have the service driver 
>   determine which device driver to send RPC requests to, or have the agent determine 
>   which driver requests should go to - say, based on the router type)
> If you have an interest, please reply to me and include some days/times that 
> would be good for you, and I'll send out a notice on the ML of the time/date 
> and we can discuss.
> 
> Looking forward to hearing from you!
> 
> PCM (Paul Michali)
> 
> MAIL        p...@cisco.com
> IRC         pcm_  (irc.freenode.net)
> TW          @pmichali
> GPG key     4525ECC253E31A83
> Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-15 Thread Martin, JC
Harshad,

Thanks. What happens when I create two VPCs? Besides the project private 
networks, what is isolated?

What do you call a DC admin? I know of two administrators:
   - Cloud administrator
   - VPC administrator

Are you saying that VPCs cannot have their own external gateways and NAT pools?

Also, maybe more importantly, why try to build an AWS API before the functionality 
is available in OpenStack? Why not wait and do it properly before defining the 
API mapping?

JC
On Feb 15, 2014, at 8:47 AM, Harshad Nakil  wrote:

> EIPs will be allocated from public pools. So in effect public pools and
> shared networks are only DC admin functions, not available to VPC
> users.
> There is an implicit external gateway. When one creates a NAT instance or
> VPN instance, the external interfaces of these instances come from the
> shared network, which can be configured by the DC admin.
> 
> 
> Regards
> -Harshad
> 
> 
>> On Feb 14, 2014, at 10:07 PM, "Martin, JC"  wrote:
>> 
>> Harshad,
>> 
>> I'm not sure I understand what you mean by:
>>> However, many of these concepts are not exposed to AWS customers and
>>> the API works well.
>> 
>> So for example in :
>> 
>> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#VPC_EIP_EC2_Differences
>> 
>> When it says :
>> "When you allocate an EIP, it's for use only in a VPC."
>> 
>> Are you saying that the behavior of your API would be consistent without 
>> scoping the external networks to a VPC and using the public pool instead ?
>> 
>> I believe that your API may work for basic features on small deployments 
>> with only one VPC, but as soon as you have complex setups with external 
>> gateways that need to be isolated, I'm not sure that it will provide parity 
>> with what EC2 provides.
>> 
>> 
>> Maybe I missed something.
>> 
>> 
>> JC
>> 
>>> On Feb 14, 2014, at 7:35 PM, Harshad Nakil  
>>> wrote:
>>> 
>>> Hi JC,
>>> 
>>> You have put it aptly. The goal of the blueprint is to present a facade for
>>> the AWS VPC API, as the name suggests.
>>> As per your definition of VPC, shared networks will have issues.
>>> However, many of these concepts are not exposed to AWS customers and
>>> the API works well.
>>> While we work incrementally towards your definition of VPC, we can
>>> maintain compatibility with the AWS API that we are proposing, as we are
>>> a subset of your proposal and don't expose all features within VPC.
>>> 
>>> Regards
>>> -Harshad
>>> 
>>> 
 On Feb 14, 2014, at 6:22 PM, "Martin, JC"  wrote:
 
 Rudra,
 
 I do not agree that the current proposal provides the semantic of a VPC. 
 If the goal is to only provide a facade through the EC2 API, it may 
 address this, but unless you implement the basic features of a VPC, what 
 good is it doing ?
 
 I do believe that the work can be done incrementally if we agree on the 
 basic properties of a VPC, for example :
 - allowing projects to be created while using resources defined at the VPC 
 level
 - preventing resources not explicitly defined at the VPC level from being used 
 by a VPC.
 
 I do not see in the current proposal how resources are scoped to a VPC, 
 and how, for example, you prevent shared networks from being used within a VPC, 
 or how you can define shared networks (or other shared resources) to only 
 be scoped to a VPC.
 
 I think we already raised our concern to you several months ago, but it 
 did not seem to have been addressed in the current proposal.
 
 thanks,
 
 JC
 
> On Feb 14, 2014, at 3:50 PM, Rudra Rugge  wrote:
> 
> Hi JC,
> 
> We agree with your proposed model of a VPC resource object. The proposal you 
> are making makes sense to us, and we would like to collaborate further on 
> this. After reading your blueprint, two things come to mind.
> 
> 1. The VPC vision for OpenStack? (Your blueprint is proposing this vision.)
> 2. Providing AWS VPC API compatibility within the current constraints of 
> the OpenStack structure.
> 
> The blueprint that we proposed targets #2.
> It gives a way to implement an "AWS VPC API"-compatible API. This helps a 
> subset of customers migrate their workloads from AWS to OpenStack-based 
> clouds. In our implementation we tied a VPC to a project. That was the 
> easiest way to keep isolation with the current structure. We agree that what 
> you are proposing is more generic. One way is to implement our current 
> proposal, which maps one VPC to one project. As your blueprint 
> matures we will move to mapping a VPC to multiple projects.
> 
> We feel that instead of throwing away all the work done we can take an 
> incremental approach.
> 
> Regards,
> Rudra
> 
> 
>> On Feb 14, 2014, at 11:09 AM, Martin, JC  wrote:
>> 
>> 
>> There is a Blueprint targeted for Icehouse-3 that is aiming to implement 
>> the AWS VPC api. I don't think that t

[openstack-dev] [Trove] Trove-Gate timeouts

2014-02-15 Thread Lowery, Mathew
Hi all,

Issue #1: Jobs that need more than one hour

Of the last 30 Trove-Gate builds 
(spanning three days), 7 have failed due to a Jenkins job-level timeout (not a 
proboscis timeout). These jobs had no failed tests when the timeout occurred.

Not having access to the job config to see what the job looks like, I used the 
console output to guess what was going on. It appears that a Jenkins plugin 
named boot-hpcloud-vm is booting a VM and running the commands given, including 
redstack int-tests. From the console output, it states that it was supplied with an 
ssh_shell_timeout="7200". This is passed down to another library called 
net-ssh-simple. net-ssh-simple has two timeouts: an idle timeout and an operation 
timeout.

In the latest boot-hpcloud-vm, ssh_shell_timeout is passed down to net-ssh-simple 
for both the idle timeout and the operation timeout. But in older versions of 
boot-hpcloud-vm, ssh_shell_timeout is passed down to net-ssh-simple for only the 
idle timeout, leaving a default operation timeout of 3600. This is why I believe 
these jobs are failing after exactly one hour.
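
To make the distinction concrete, here is a toy model of the two timeouts (a
plain illustration, not net-ssh-simple itself): the operation timeout is a hard
cap on total run time, while the idle timeout only trips after a long stretch of
silence, which is why a chatty two-hour job dies at exactly 3600 seconds.

    # Toy model (not net-ssh-simple): "chunks" is a list of
    # (seconds_of_silence, output) pairs produced by the job.
    def run_with_timeouts(chunks, idle_timeout, operation_timeout):
        elapsed = 0
        for silence, _output in chunks:
            elapsed += silence
            if silence > idle_timeout:
                return "idle timeout after %ds of silence" % silence
            if elapsed > operation_timeout:
                return "operation timeout after %ds" % elapsed
        return "completed"

    job = [(60, "test output")] * 120   # ~2 hours of tests, output every minute

    # Older boot-hpcloud-vm: only the idle timeout gets ssh_shell_timeout=7200,
    # the operation timeout stays at the 3600 default -> dies at ~1 hour.
    print(run_with_timeouts(job, idle_timeout=7200, operation_timeout=3600))

    # Latest boot-hpcloud-vm: both timeouts get ssh_shell_timeout=7200.
    print(run_with_timeouts(job, idle_timeout=7200, operation_timeout=7200))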

FYI: Here are the jobs that failed due to the Jenkins job-level timeout (and 
had no test failures when the timeout occurred) along with their associated 
patch sets:
https://rdjenkins.dyndns.org/job/Trove-Gate/2532/console 
(http://review.openstack.org/73786)
https://rdjenkins.dyndns.org/job/Trove-Gate/2530/console 
(http://review.openstack.org/73736)
https://rdjenkins.dyndns.org/job/Trove-Gate/2517/console 
(http://review.openstack.org/63789)
https://rdjenkins.dyndns.org/job/Trove-Gate/2514/console 
(https://review.openstack.org/50944)
https://rdjenkins.dyndns.org/job/Trove-Gate/2513/console 
(https://review.openstack.org/50944)
https://rdjenkins.dyndns.org/job/Trove-Gate/2504/console 
(https://review.openstack.org/73147)
https://rdjenkins.dyndns.org/job/Trove-Gate/2503/console 
(https://review.openstack.org/73147)

Suggested action items:

  *   If it is acceptable to have jobs that run over one hour, then install the 
latest boot-hpcloud-vm plugin for Jenkins, which will make the 
operation timeout match the idle timeout.

Issue #2: The running time of all jobs is 1 hr 1 min

While the Jenkins job-level timeout will end the job after one hour, it also 
appears to keep every job running for a minimum of one hour. To be more 
precise, the timeout (or minimum running time) occurs on the part of the 
Jenkins job that runs commands on the VM; the VM provision (which takes about 
one minute) is excluded from this timeout, which is why the running time of all 
jobs is around 1 hr 1 min. A sampling of console logs showing the time the 
int-tests completed and when the timeout kicks in:

https://rdjenkins.dyndns.org/job/Trove-Gate/2531/console (00:01:03 wasted)

04:51:12 COMMAND_0: echo refs/changes/36/73736/2

...

05:50:10 335.41 proboscis.case.MethodTest (test_instance_created)
05:50:10 194.05 proboscis.case.MethodTest 
(test_instance_returns_to_active_after_resize)
05:51:13 **
05:51:13 ** STDERR-BEGIN **

https://rdjenkins.dyndns.org/job/Trove-Gate/2521/console (00:06:44 wasted)

21:11:44 COMMAND_0: echo refs/changes/89/63789/13

...

22:05:00 195.11 proboscis.case.MethodTest 
(test_instance_returns_to_active_after_resize)
22:05:00 186.89 proboscis.case.MethodTest (test_resize_down)
22:11:44 **
22:11:44 ** STDERR-BEGIN **


https://rdjenkins.dyndns.org/job/Trove-Gate/2518/consoleFull (00:06:01 wasted)

17:46:59 COMMAND_0: echo refs/changes/02/64302/20

...

18:40:57 210.03 proboscis.case.MethodTest 
(test_instance_returns_to_active_after_resize)
18:40:57 187.89 proboscis.case.MethodTest (test_resize_down)
18:46:58 **
18:46:58 ** STDERR-BEGIN **


Suggested action items:

  *   Given that the minimum running time is one hour, I assume the problem is in the 
net-ssh-simple library. Needs more investigation.


Issue #3: Jenkins console log line timestamps different between full and 
truncated views

I assume this is due to JENKINS-17779.

Suggested action items:

  *   Upgrade the timestamper plugin.
___

Re: [openstack-dev] [Murano] Need a new DSL for Murano

2014-02-15 Thread Clint Byrum
Excerpts from Alexander Tivelkov's message of 2014-02-14 18:17:10 -0800:
> Hi folks,
> 
> Murano matures, and we are getting more and more feedback from our early
> adopters. The overall reception is very positive, but at the same time
> there are some complaints as well. By now the most significant complaint is
> that it is hard to write workflows for application deployment and maintenance.
> 
> The current version of the workflow definition markup really has some design
> drawbacks which limit its potential adoption. They are caused by the fact
> that it was never intended for Application Catalog use cases.
> 

Just curious, is there any reason you're not collaborating on Mistral
for this rather than both having a workflow engine?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-15 Thread Clint Byrum
Excerpts from James Slagle's message of 2014-02-15 13:02:36 -0800:
> On Fri, Feb 14, 2014 at 11:12 PM, Robert Collins
>  wrote:
> > On 15 February 2014 08:42, Dan Prince  wrote:
> >>
> >
> >> Option B is we make our job easy by strong arming everyone into the same 
> >> defaults of our "upstream" choosing.
> >
> > Does Nova strong arm everyone into using kvm? Its the default. Or
> > keystone into using the SQL token store - its the default?
> >
> > No - defaults are not strong arming. But the defaults are obviously
> > defaults, and inherited by downstreams. And some defaults are larger
> > than others -  we've got well defined interfaces in OpenStack, which
> > have the primary characteristic of 'learn once, apply everywhere' -
> > even though in principle you can replace them. At the low level REST
> > and message-bus RPCs, at a level up Keystone and more recently Nova
> > and Neutron have become that as we get higher order code like Heat and
> > Savanna that depend on them. I hope none would replace Nova with
> > Eucalyptus and then say they're running OpenStack - in the same way
> > we're both defining defaults, *and* building interfaces. *That* is our
> > job - making OpenStack *upstream* deployable, in the places, and on
> > the platforms, with the options, that our users want.
> >
> > Further to that, upstream we're making choices with the thoughts of
> > our *users* in mind - both cloud consumers and cloud operators. They
> > are why we ask questions like 'is having every install have
> > potentially different usernames for the nova service a good idea'. The
> > only answer so far has been 'because distros have chosen different
> > usernames already and we need to suck it up'. Thats not a particularly
> > satisfying answer.
> 
> It seems we're talking defaults now. I did not get that from the email
> that started the discussion, it sounded like an "either or" to me.
> 
> So, if we're talking defaults, let the upstream architecture be the
> default. Let your #A be a choice. If someone wants to come along and
> do #B, we let them, but I really don't think you'd find anyone :).
> 
> We question and we challenge new implementations (just like we always
> do and as you're doing here, which is good).  But at the end of day if
> someone is saying they have a real documented need in order for them
> to adopt OpenStack, and it makes sense for that option to be "in tree"
> for OpenStack,  we empower them by having a flexible framework, and
> hopefully our tools are good enough to handle what differences there
> are. If not, we make them better.
> 
> I'm not sure I entirely get the comparisons to Nova you're making. Let
> me take a shot though and try to further it along as I see it:
> 
> Let the dib style element be the "API". What the implementation is,
> whether a source install or a package install shouldn't affect the
> interface. And there's no reason to "coerce" all the implementations
> to be exactly the same.
> 
> The Nova virt drivers (libvirt/kvm, libvirt/xen, xenapi, etc) are our
> different install types today, source-install, package-install. Our
> different install type implementations still conform to the framework,
> our install scripts have to be in the right place, we make use of
> install-packages, we use os-svc-*, we use os-*-config, etc. Just like
> the python code for each Nova virt driver has requirements (the
> correct base class inheritance, methods, etc).
> 
> However, Nova makes no enforcement of how each driver is *actually*
> implemented. It doesn't particularly care about what's happening in
> each driver's power_on method.  It doesn't enforce that all
> hypervisors kernel modules are called the same thing so that it's
> easier to check if you loaded the right one. No one said that we can't
> have a Nova baremetal driver b/c what if someone drops down to the
> console to troubleshoot and they run "virsh list" and don't see
> baremetal instances.  Coercing the implementation to be the same just
> doesn't make sense to me, and I think where this comparison you're
> making really breaks down.
> 
> The abstracting away of the hypervisor differences happens in the virt
> drivers. For TripleO's purposes, the abstracting away of the install
> type particularities should happen by the install and setup code. Not
> by trying to coerce things to look the same after the fact, which
> sounds to me exactly like what Nova *doesn't* do and require.
> 
> >> Take /mnt/state for example. This isn't the normal place for things to 
> >> live. Why not use the read only root mechanism some distributions already 
> >> have and work with that instead. Or perhaps have /mnt/state as a backup 
> >> solution which can be used if a mechanism doesn't exist or is faulty?
> >
> > Currently we have two options for upgrading images. A) /mnt/state, B)
> > a SAN + cinder. We haven't tested B), and I expect for many installs B
> > won't be an option. /mnt/state is 100% technical, as no other options
> > exist - non

Re: [openstack-dev] [Openstack-operators] [TripleO] consistency vs packages in TripleO

2014-02-15 Thread Clint Byrum
Excerpts from Dan Prince's message of 2014-02-15 06:05:30 -0800:
> 
> - Original Message -
> > From: "Robert Collins" 
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> > 
> > Cc: openstack-operat...@lists.openstack.org
> > Sent: Friday, February 14, 2014 11:12:12 PM
> > Subject: Re: [Openstack-operators] [openstack-dev] [TripleO] consistency vs 
> >packages in TripleO
> > 
> > Currently we have two options for upgrading images. A) /mnt/state, B)
> > a SAN + cinder. We haven't tested B), and I expect for many installs B
> > won't be an option. /mnt/state is 100% technical, as no other options
> > exist - none of the Linux distro 'read only root' answers today answer
> > the problem /mnt/state solves in a way compatible with Nova.
> 
> I would argue that we haven't tried all of the read-only root mechanisms
> either... at least not to the point where we can say they definitely don't work. Sure
> the data has to go somewhere... but it is how we present this to the
> end user that is the point in this thread, no?
> 
> All I'm arguing for here is the ability to avoid doing this to our nova.conf 
> file (what we do today in TripleO):
> 
>  state_path=/mnt/state/var/lib/nova
> 
> And instead simply use the Nova default (/var/lib/nova) and have some
> other mechanism take care of the mapping for us (bind mounts, etc). In
> the context of end user documentation I certainly think this has less
> of an impact and is more pleasing to all.
> 

Could I divert you a bit into explaining what other readonly root schemes
exist? I've not looked into them very much as none seemed "simple" to
me when you consider that things like /var/lib/dpkg are intended to be
readonly parts of the image.

I'll start by saying that I'm very suspicious of any mechanism which tries
to intermix writable data with readonly data. A TripleO user needs only
learn once that "/mnt is preserved, everything else is not". It is also
extremely discoverable. So I am quite curious to hear what alternative
we could use that would make this simpler.
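
For illustration, the two layouts being compared look roughly like this (a
sketch of the idea, not a tested recipe):

    # TripleO today: point Nova at the preserved mount explicitly (nova.conf)
    state_path=/mnt/state/var/lib/nova

    # The alternative: keep Nova's default state_path (/var/lib/nova) and have
    # the image map it onto the preserved mount, e.g. with a bind mount
    #   mount --bind /mnt/state/var/lib/nova /var/lib/nova
    # or an /etc/fstab entry such as
    #   /mnt/state/var/lib/nova  /var/lib/nova  none  bind  0  0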

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [TripleO] consistency vs packages in TripleO

2014-02-15 Thread Chris Jones
Hi

Assuming I am interpreting your mail correctly, I think option A makes
vastly more sense, with one very specific provision I'd add. More on that
in a moment.

Option B, the idea that we would mangle a package-installed environment to
suit our desired layout, is not going to work out well for us at all.

Firstly, I think we'll find there are a ton of problems here (just thinking
about assuming that we can alter a system username, in the context of the
crazy things you can hook into libnss, makes me shiver).
Secondly, it is also going to cause a significant increase in
distro-specific manglery in our DIB elements, no? Right now, handling the
username case could be simplified if we recognised "ok, this is a thing
that can vary", abstracted them into environment variables and allowed the
distros to override them in their base OS element. That is not very much
code, it would simplify the elements from where we are today and the
documentation could be auto-generated to account for it, or made to refer
to the usernames in a way the operator can dereference locally. Maybe we
can't do something like that for every friction point we hit, but I'd wager
we could for most.

Back to the specific provision I mentioned for option A. Namely, put the
extra work you mention, on the distros. If they want to get their atypical
username layout into TripleO, ask them to provide a fork of our
documentation that accounts for it, and keep that up to date. If their
choice is do that, or have their support department maintain a fork of all
their openstack support material, because it might be wildly different if
the customer is using TripleO, I suspect they'd prefer to do a bit of work
on our docs.

I completely agree with your comment later in the thread that "our job is
to be the upstream installer", so I suggest we do our best to only focus on
upstream, but in a way that enables our downstreams to engage with our
element repositories, by shouldering most of the burdens of their
divergence from upstream.

For me, the absolute worst case scenario in any distro's adoption of
TripleO, is that they are unwilling to be part of the community that
maintains tripleo-image-elements, and instead have their own elements that
are streamlined for their OS, but lack our community's polish and don't
contribute back their features/fixes. I think option B would drive any
serious distro away from us purely on the grounds that it would be a
nightmare for them to support.

Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model change - Multiple services per floating IP

2014-02-15 Thread Eugene Nikanorov
Stephen,

> Aah, Ok. FWIW, splitting up the VIP into instance/"floating IP entity"
Right now I'm not sure what would be best. Currently we don't have an
implementation that allows creating a VIP on an external network directly. For
example, when a haproxy VIP is created, it has an address on the tenant network
and a floating IP is then associated with the VIP address. Other providers could
allow creating a VIP on an external network directly.

Basically, tenants can't share a floating IP because they can't specify a
floating IP on the external network.
However, a VIP address may or may not be internet-facing. If it's an internal
address on a tenant network, then nothing prevents another tenant from having a
VIP with the same IP address on its own tenant network. Internet-facing
addresses, however, will obviously be different.
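
For concreteness, the flow described above looks roughly like this with
python-neutronclient (a hedged sketch; the IDs are placeholders and the exact
request bodies should be checked against your client version):

    # Sketch of the current haproxy model: the VIP gets a port on the tenant
    # subnet, and a floating IP from the external network is then pointed at
    # that port.
    from neutronclient.v2_0 import client

    # Placeholder IDs -- substitute real UUIDs from your deployment.
    TENANT_SUBNET_ID = 'tenant-subnet-uuid'
    POOL_ID = 'lb-pool-uuid'
    EXTERNAL_NET_ID = 'external-net-uuid'

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone:5000/v2.0')

    vip = neutron.create_vip({'vip': {
        'name': 'web-vip',
        'protocol': 'HTTP',
        'protocol_port': 80,
        'subnet_id': TENANT_SUBNET_ID,
        'pool_id': POOL_ID,
    }})['vip']

    fip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': EXTERNAL_NET_ID}})['floatingip']

    # Associate the floating IP with the VIP's tenant-network port.
    neutron.update_floatingip(fip['id'],
                              {'floatingip': {'port_id': vip['port_id']}})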

Thanks,
Eugene.



On Fri, Feb 14, 2014 at 3:57 AM, Stephen Balukoff wrote:

> Hi Eugene,
>
> Aah, Ok. FWIW, splitting up the VIP into instance/"floating IP entity"
> separate from listener (ie. carries most of the attributes of VIP, in
> current implementation) still allows us to ensure tenants don't end up
> accidentally sharing an IP address. The "instance" could be associated with
> the neutron network port, and the haproxy listeners (one process per
> listener) could simply be made to listen on that port (ie. in that network
> namespace on the neutron node). There wouldn't be a need for two instances
> to share a single neutron network port.
>
> Has any thought been put to preventing tenants from accidentally sharing
> an IP if we stick with the current model?
>
> Stephen
>
>
> On Thu, Feb 13, 2014 at 4:20 AM, Eugene Nikanorov  > wrote:
>
>> So we have some constraints here because of the existing haproxy driver
>> implementation; the particular reason is that the VIP created by haproxy is not a
>> floating IP, but an IP on the internal tenant network with a Neutron port. So IP
>> uniqueness is enforced at the port level and not at the VIP level. We need to allow
>> VIPs to share the port; that is part of the multiple-vips-per-pool blueprint.
>>
>> Thanks,
>> Eugene.
>>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request for testing new cloud foundation layer on bare metal

2014-02-15 Thread Aryeh Friedman
A very quick note: it turns out our mailing list archives were private; I
have now marked them as public. If the links didn't work for you in the
last 24 hrs, try again.


On Sat, Feb 15, 2014 at 2:40 AM, Aryeh Friedman wrote:

> We apologize for the lack of clarity in our wording both here and on
> our site (http://www.petitecloud.org).  Over the next few weeks we will
> work on improving our descriptions of various aspects of what PetiteCloud
> is and what it is not.  We will also add a set of tutorials showing what a
> cloud foundation layer (CFL) is and how it can make OpenStack more stable
> and robust in non-data-center environments.  In the meantime, hopefully my
> answers below will help with some immediate clarification.
>
> For general answers as to what a CFL is, see our 25 words or less
> answer on our site (http://petitecloud.org/cloudFoundation.jsp) or see
> the draft notes for a forthcoming white paper on the topic (
> http://lists.petitecloud.nyclocal.net/private.cgi/petitecloud-general-petitecloud.nyclocal.net/attachments/20140213/3fee4df0/attachment-0001.pdf).
> OpenStack does not currently have a cloud foundation layer of its own
> (creating one might be a good sub-project for OpenStack).
>
> Your specfic questions are answered inline:
>
>
>
> On Fri, Feb 14, 2014 at 11:28 PM, Robert Collins <
> robe...@robertcollins.net> wrote:
>
>> I'm sorry if this sounds rude, but I've been seeing your emails come
>> in, and I've read your website, and I still have 0% clue about what
>> PetiteCloud is.
>>
>> On 12 February 2014 21:56, Aryeh Friedman 
>> wrote:
>> > PetiteCloud is a 100% Free Open Source and Open Knowledge bare metal
>> capable
>> > Cloud Foundation Layer for Unix-like operating systems. It has the
>> following
>> > features:
>>
>> What is a Cloud Foundation Layer? Whats the relevance of OK here (I
>> presume you mean http://okfn.org/ ?).
>>
>
>
> We have no connection with the above site. Personally we agree with its
> goals, but our use of the term "Open Knowledge" is different and pertains
> only to technical knowledge. See our web site for details on what we mean
> by that term. http://petitecloud.org/fosok.jsp
>
>
>>
>> > * Support for bhyve (FreeBSD only) and QEMU
>> > * Any x86 OS as a guest (FreeBSD and Linux via bhyve or QEMU; all
>> others
>> > via QEMU only) and all supported software (including running OpenStack
>> on
>> > VM's)
>> > * Install, import, start, stop and reboot instances safely (guest OS
>> > needs to be controlled independently)
>> > * Clone, backup/export, delete stopped instances 100% safely
>>
>> So far it sounds like a hypervisor management layer - which is what Nova
>> is.
>>
>
> Nova is for running end user instances. PetiteCloud is designed (see
> below) to run instances that OpenStack can run on and then partition into
> end-user instances.
>
>
>>
>> > * Keep track of all your instances on one screen
>>
>> I think you'll need a very big screen eventually :)
>>
> Not a huge one.  A CFL needs to run only a relatively small number of
> instances itself. Remember that a cloud foundation layer's instances can be
> used as hosts (a.k.a. nodes) for a full-fledged IAAS platform such as
> OpenStack. Thus, for example, a set of just four PetiteCloud instances
> might serve as the complete compute, networking, storage, etc. nodes for an
> OpenStack installation which in turn is running, say 10 instances.
> Addtional compute, storage and/or hybrid nodes (real and virtual) can be
> added to the deploy via any combination of bare metal openstack nodes and
> CFL'ed ones. Since PetiteCloud does not, yet, have any API hooks you would
> need to limit this to a small number of PetiteCloud hosts.
>
>
>>
>> > * All transactions that change instance state are password
>> protected at
>> > all critical stages
>> > * Advanced options:
>> > * Ability to use/make bootable bare metal disks for backing
>> stores
>> > * Multiple NIC's and disks
>> > * User settable (vs. auto assigned) backing store locations
>>
>> if backing store == virtual disk, this sounds fairly straight forward,
>> though 'bootable bare metal disks' is certainly an attention grabbing
>> statement for a hypervisor.
>>
>
> As explained in the white paper, since we are a full layer 0 cloud
> platform instead of just a hypervisor manager, we can do things that would
> normally not be possible for an unmanaged hypervisor (or even wise if not
> managed by a full layer 0 platform). One of these is that you can make the
> storage target of your layer 0 instances a physical disk. Additionally,
> since PetiteCloud does not require any "guest modifications" when you
> install the OS (which is managed by the hypervisor), you can make your root
> disk a physical drive. You can take this to some really interesting
> extremes; for example, one of our core team members (not me) posted a few nights ago
> to our mailing list how to make a "cloud on a stick".
> http://lists.peti

Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-15 Thread James Slagle
On Fri, Feb 14, 2014 at 11:12 PM, Robert Collins
 wrote:
> On 15 February 2014 08:42, Dan Prince  wrote:
>>
>
>> Option B is we make our job easy by strong arming everyone into the same 
>> defaults of our "upstream" choosing.
>
> Does Nova strong arm everyone into using kvm? Its the default. Or
> keystone into using the SQL token store - its the default?
>
> No - defaults are not strong arming. But the defaults are obviously
> defaults, and inherited by downstreams. And some defaults are larger
> than others -  we've got well defined interfaces in OpenStack, which
> have the primary characteristic of 'learn once, apply everywhere' -
> even though in principle you can replace them. At the low level REST
> and message-bus RPCs, at a level up Keystone and more recently Nova
> and Neutron have become that as we get higher order code like Heat and
> Savanna that depend on them. I hope none would replace Nova with
> Eucalyptus and then say they're running OpenStack - in the same way
> we're both defining defaults, *and* building interfaces. *That* is our
> job - making OpenStack *upstream* deployable, in the places, and on
> the platforms, with the options, that our users want.
>
> Further to that, upstream we're making choices with the thoughts of
> our *users* in mind - both cloud consumers and cloud operators. They
> are why we ask questions like 'is having every install have
> potentially different usernames for the nova service a good idea'. The
> only answer so far has been 'because distros have chosen different
> usernames already and we need to suck it up'. Thats not a particularly
> satisfying answer.

It seems we're talking defaults now. I did not get that from the email
that started the discussion, it sounded like an "either or" to me.

So, if we're talking defaults, let the upstream architecture be the
default. Let your #A be a choice. If someone wants to come along and
do #B, we let them, but I really don't think you'd find anyone :).

We question and we challenge new implementations (just like we always
do and as you're doing here, which is good).  But at the end of day if
someone is saying they have a real documented need in order for them
to adopt OpenStack, and it makes sense for that option to be "in tree"
for OpenStack,  we empower them by having a flexible framework, and
hopefully our tools are good enough to handle what differences there
are. If not, we make them better.

I'm not sure I entirely get the comparisons to Nova you're making. Let
me take a shot though and try to further it along as I see it:

Let the dib style element be the "API". What the implementation is,
whether a source install or a package install shouldn't affect the
interface. And there's no reason to "coerce" all the implementations
to be exactly the same.

The Nova virt drivers (libvirt/kvm, libvirt/xen, xenapi, etc) are our
different install types today, source-install, package-install. Our
different install type implementations still conform to the framework,
our install scripts have to be in the right place, we make use of
install-packages, we use os-svc-*, we use os-*-config, etc. Just like
the python code for each Nova virt driver has requirements (the
correct base class inheritance, methods, etc).

However, Nova makes no enforcement of how each driver is *actually*
implemented. It doesn't particularly care about what's happening in
each driver's power_on method.  It doesn't enforce that all
hypervisors' kernel modules are called the same thing so that it's
easier to check if you loaded the right one. No one said that we can't
have a Nova baremetal driver b/c what if someone drops down to the
console to troubleshoot and they run "virsh list" and don't see
baremetal instances.  Coercing the implementation to be the same just
doesn't make sense to me, and I think that's where this comparison you're
making really breaks down.

The abstracting away of the hypervisor differences happens in the virt
drivers. For TripleO's purposes, the abstracting away of the install
type particularities should happen in the install and setup code, not
by trying to coerce things to look the same after the fact, which
sounds to me exactly like what Nova *doesn't* do or require.
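
To make the analogy concrete, here's a toy sketch (plain Python, not actual
TripleO or Nova code) of coercing the interface rather than the
implementation:

class InstallType(object):
    """The "API": every install type has to provide the same hooks."""
    def install_service(self, name):
        raise NotImplementedError()

class SourceInstall(InstallType):
    def install_service(self, name):
        # How the bits get onto the image is this driver's business.
        return "pip install %s" % name

class PackageInstall(InstallType):
    def install_service(self, name):
        # Same interface, completely different implementation.
        return "apt-get install -y %s" % name

for driver in (SourceInstall(), PackageInstall()):
    print(driver.install_service("nova"))

The framework only cares that install_service() exists and behaves; it
doesn't care what happens inside it.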

>> Take /mnt/state for example. This isn't the normal place for things to live. 
>> Why not use the read only root mechanism some distributions already have and 
>> work with that instead. Or perhaps have /mnt/state as a backup solution 
>> which can be used if a mechanism doesn't exist or is faulty?
>
> Currently we have two options for upgrading images. A) /mnt/state, B)
> a SAN + cinder. We haven't tested B), and I expect for many installs B
> won't be an option. /mnt/state is 100% technical, as no other options
> exist - none of the Linux distro 'read only root' answers today answer
> the problem /mnt/state solves in a way compatible with Nova.
>
>> In the end I think option A is the way we have to go. Is it more work... 
>> maybe. But in the end users will like us

Re: [openstack-dev] [ceilometer] Unable to run unit test cases

2014-02-15 Thread Clark Boylan
On Sat, Feb 15, 2014 at 6:45 AM, Henry Gessau  wrote:
> On Sat, Feb 15, at 4:41 am, Akhil Sadashiv Hingane  wrote:
>
>>
>> When I try to run the test cases for ceilometer, it fails with
>>
>> 
>>
>> Traceback (most recent call last):
>> File "/usr/local/bin/tox", line 9, in 
>> load_entry_point('tox==1.7.0', 'console_scripts', 'tox')()
>> File "/usr/local/lib/python2.7/dist-packages/tox/_cmdline.py", line 25, in 
>> main
>> config = parseconfig(args, 'tox')
>> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 47, in
>> parseconfig
>> parseini(config, inipath)
>> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 281, in
>> __init__
>> config)
>> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 346, in
>> _makeenvconfig
>> vc.commands = reader.getargvlist(section, "commands")
>> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 518, in
>> getargvlist
>> commandlist.append(self._processcommand(current_command))
>> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 550, in
>> _processcommand
>> new_word = self._replace(word)
>> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 666, in
>> _replace
>> return RE_ITEM_REF.sub(self._replace_match, x)
>> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 662, in
>> _replace_match
>> return handler(match)
>> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 633, in
>> _replace_substitution
>> val = self._substitute_from_other_section(sub_key)
>> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 627, in
>> _substitute_from_other_section
>> "substitution key %r not found" % key)
>> tox.ConfigError: ConfigError: substitution key 'posargs' not found
>
> This happens with tox 1.7.0. You need to downgrade to tox 1.6.1:
> "sudo pip install -U tox==1.6.1"
>
> By the way, I googled for "openstack tox.ConfigError: ConfigError:
> substitution key 'posargs' not found" and the first result gave me the 
> solution.
>

And for those interested in more details, the upstream tox bug can be
found at [1] and my proposed fix is at [2].

[1] https://bitbucket.org/hpk42/tox/issue/150/posargs-configerror
[2] 
https://bitbucket.org/hpk42/tox/pull-request/85/fix-command-expansion-and-parsing/diff

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-15 Thread Harshad Nakil
EIPs will be allocated from public pools. So, in effect, public pools and
shared networks are DC-admin-only functions, not available to VPC
users.
There is an implicit external gateway. When one creates a NAT instance or a
VPN instance, the external interfaces of these instances come from the
shared network, which can be configured by the DC admin.


Regards
-Harshad


> On Feb 14, 2014, at 10:07 PM, "Martin, JC"  wrote:
>
> Harshad,
>
> I'm not sure I understand what you mean by:
>> However, many of these concepts are not exposed to AWS customers and
>> the APIs work well.
>
> So for example in :
>
> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#VPC_EIP_EC2_Differences
>
> When it says :
> "When you allocate an EIP, it's for use only in a VPC."
>
> Are you saying that the behavior of your API would be consistent without
> scoping the external networks to a VPC and using the public pool instead?
>
> I believe that your API may work for basic features on small deployments
> with only one VPC, but as soon as you have complex setups with external
> gateways that need to be isolated, I'm not sure that it will provide parity
> with what EC2 provides anyway.
>
>
> Maybe I missed something.
>
>
> JC
>
>> On Feb 14, 2014, at 7:35 PM, Harshad Nakil  
>> wrote:
>>
>> Hi JC,
>>
>> You have put it aptly. The goal of the blueprint is to present a facade
>> for the AWS VPC API, as the name suggests.
>> As per your definition of VPC, shared networks will have issues.
>> However, many of these concepts are not exposed to AWS customers and
>> the APIs work well.
>> While we work incrementally towards your definition of VPC, we can
>> maintain API compatibility with the AWS API that we are proposing, as we
>> are a subset of your proposal and don't expose all features within a VPC.
>>
>> Regards
>> -Harshad
>>
>>
>>> On Feb 14, 2014, at 6:22 PM, "Martin, JC"  wrote:
>>>
>>> Rudra,
>>>
>>> I do not agree that the current proposal provides the semantics of a VPC. If
>>> the goal is only to provide a facade through the EC2 API, it may address
>>> this, but unless you implement the basic features of a VPC, what good is it
>>> doing?
>>>
>>> I do believe that the work can be done incrementally if we agree on the 
>>> basic properties of a VPC, for example :
>>> - allowing projects to be created while using resources defined at the VPC 
>>> level
>>> - preventing resources not explicitly defined at the VPC level from being
>>> used by a VPC.
>>>
>>> I do not see in the current proposal how resources are scoped to a VPC, and
>>> how, for example, you prevent shared networks from being used within a VPC, or
>>> how you can define shared networks (or other shared resources) to only be
>>> scoped to a VPC.
>>>
>>> I think we already raised our concern to you several months ago, but it did 
>>> not seem to have been addressed in the current proposal.
>>>
>>> thanks,
>>>
>>> JC
>>>
 On Feb 14, 2014, at 3:50 PM, Rudra Rugge  wrote:

 Hi JC,

 We agree with your proposed model of a VPC resource object. The proposal you
 are making makes sense to us and we would like to collaborate further on
 this. After reading your blueprint, two things come to mind.

 1. VPC vision for Openstack? (Your blueprint is proposing this vision)
 2. Providing AWS VPC API compatibility within the constraints of the current
 OpenStack structure.

 The blueprint that we proposed targets #2.
 It gives a way to implement an "AWS VPC API"-compatible API. This helps a
 subset of customers migrate their workloads from AWS to OpenStack-based
 clouds. In our implementation we tied a VPC to a project. That was the easiest
 way to keep isolation with the current structure. We agree that what you are
 proposing is more generic. One way is to implement our current proposal
 to have a one-VPC-to-one-project mapping. As your blueprint matures, we will
 move to a VPC-to-multiple-projects mapping.

 We feel that instead of throwing away all the work done we can take an 
 incremental approach.

 Regards,
 Rudra


> On Feb 14, 2014, at 11:09 AM, Martin, JC  wrote:
>
>
> There is a Blueprint targeted for Icehouse-3 that is aiming to implement 
> the AWS VPC api. I don't think that this blueprint is providing the 
> necessary constructs to really implement a VPC, and it is not taking into 
> account the domains, or proposed multi tenant hierarchy. In addition, I 
> could not find a discussion about this topic leading to the approval.
>
> For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
> discussion on how to really implement VPC, and eventually split it into 
> multiple real blueprints for each area.
>
> Please, provide feedback on the following document, and on the best way 
> to move this forward.
>
> https://wiki.openstack.org/wiki/Blueprint-VPC
>
> Thanks,
>
> JC.
> __

Re: [openstack-dev] [ceilometer] Unable to run unit test cases

2014-02-15 Thread Henry Gessau
On Sat, Feb 15, at 4:41 am, Akhil Sadashiv Hingane  wrote:

> 
> When I try to run the test cases for ceilometer, it fails with
> 
> 
> 
> Traceback (most recent call last):
> File "/usr/local/bin/tox", line 9, in 
> load_entry_point('tox==1.7.0', 'console_scripts', 'tox')()
> File "/usr/local/lib/python2.7/dist-packages/tox/_cmdline.py", line 25, in 
> main
> config = parseconfig(args, 'tox')
> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 47, in
> parseconfig
> parseini(config, inipath)
> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 281, in
> __init__
> config)
> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 346, in
> _makeenvconfig
> vc.commands = reader.getargvlist(section, "commands")
> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 518, in
> getargvlist
> commandlist.append(self._processcommand(current_command))
> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 550, in
> _processcommand
> new_word = self._replace(word)
> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 666, in
> _replace
> return RE_ITEM_REF.sub(self._replace_match, x)
> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 662, in
> _replace_match
> return handler(match)
> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 633, in
> _replace_substitution
> val = self._substitute_from_other_section(sub_key)
> File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 627, in
> _substitute_from_other_section
> "substitution key %r not found" % key)
> tox.ConfigError: ConfigError: substitution key 'posargs' not found

This happens with tox 1.7.0. You need to downgrade to tox 1.6.1:
"sudo pip install -U tox==1.6.1"

By the way, I googled for "openstack tox.ConfigError: ConfigError:
substitution key 'posargs' not found" and the first result gave me the solution.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [TripleO] consistency vs packages in TripleO

2014-02-15 Thread Dan Prince


- Original Message -
> From: "Robert Collins" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: openstack-operat...@lists.openstack.org
> Sent: Friday, February 14, 2014 11:12:12 PM
> Subject: Re: [Openstack-operators] [openstack-dev] [TripleO] consistency vs   
> packages in TripleO
> 
> On 15 February 2014 08:42, Dan Prince  wrote:
> >
> 
> > Let me restate the options the way I see it:
> >
> > Option A is we do our job... by making it possible to install OpenStack
> > using various distributions using a set of distro agnostic tools
> > (TripleO).
> 
> So our job is to be the upstream installer. If upstream said 'we only
> support RHEL and Ubuntu', our job would arguably end there. And in
> fact, we're being much more inclusive than many other parts of
> upstream - Suse isn't supported upstream, nor is Ubuntu non-LTS, nor
> Fedora.
> 
> > Option B is we make our job easy by strong arming everyone into the same
> > defaults of our "upstream" choosing.
> 
> Does Nova strong arm everyone into using kvm? It's the default. Or
> keystone into using the SQL token store - it's the default?


Your initial email didn't use the word default at all. You said "we coerce the
package into reference upstream shape as part of installing it". That sounds
like strong arming to me... and that is what I'm concerned about.

> 
> No - defaults are not strong arming. But the defaults are obviously
> defaults, and inherited by downstreams. And some defaults are larger
> than others -  we've got well defined interfaces in OpenStack, which
> have the primary characteristic of 'learn once, apply everywhere' -
> even though in principle you can replace them. At the low level REST
> and message-bus RPCs, at a level up Keystone and more recently Nova
> and Neutron have become that as we get higher order code like Heat and
> Savanna that depend on them. I hope none would replace Nova with
> Eucalyptus and then say they're running OpenStack - in the same way
> we're both defining defaults, *and* building interfaces. *That* is our
> job - making OpenStack *upstream* deployable, in the places, and on
> the platforms, with the options, that our users want.
> 
> Further to that, upstream we're making choices with the thoughts of
> our *users* in mind - both cloud consumers and cloud operators. They
> are why we ask questions like 'is having every install have
> potentially different usernames for the nova service a good idea'. The
> only answer so far has been 'because distros have chosen different
> usernames already and we need to suck it up'. That's not a particularly
> satisfying answer.
> 
> > Does option B look appealing? Perhaps at first glance. By taking away the
> > differences it seems like we are making everyone's lives easier by
> > "streamlining" our depoyment codebase. There is this one rub though: it
> > isn't what users expect.
> 
> I don't know what users expect: There's an assumption stated in some
> of the responses that people who choose 'TripleO + Packages' do that
> for a reason. I think this is likely going to be wrong much of the
> time. Why? Because upstream doesn't offer someone to ring when there
> is a problem. So people will grab RDO, or Suse's offering, or
> Rackspace Private Cloud, or HP Cloud OS, or
> $distribution-of-openstack-of-choice : and I don't expect for most
> people that 'and we used a nova deb package' vs 'and we used a nova
> pip package' is going to be *why* they choose that vendor, so as a
> result many people will get TripleO+Packages because their vendor
> chose that for them. That places a responsibility on the vendors and
> on us. The vendors need to understand the consequences of their
> packages varying trivially from upstream - the
> every-unix-is-a-little-different death of a thousand cuts problem -
> and we need to help vendors understand the drivers that lead them to
> need to build images via packages.
> 
> > Take /mnt/state for example. This isn't the normal place for things to
> > live. Why not use the read only root mechanism some distributions already
> > have and work with that instead. Or perhaps have /mnt/state as a backup
> > solution which can be used if a mechanism doesn't exist or is faulty?
> 
> Currently we have two options for upgrading images. A) /mnt/state, B)
> a SAN + cinder. We haven't tested B), and I expect for many installs B
> won't be an option. /mnt/state is 100% technical, as no other options
> exist - none of the Linux distro 'read only root' answers today answer
> the problem /mnt/state solves in a way compatible with Nova.

I would argue that we haven't tried all of the read-only root mechanisms
either... at least not to the point where we can say they definitely don't work.
Sure, the data has to go somewhere... but it is how we present this to the end
user that is the point of this thread, no?

All I'm arguing for here is the ability to avoid doing this to our nova.conf 
file (what we do today in TripleO):

 state_path=/mnt/state/var/

[openstack-dev] [Nova][EC2][Cinder] Asking for time to review my patches

2014-02-15 Thread Rushi Agrawal
Over the last two months, I have submitted a few patches which increase
support for block storage volumes in OpenStack's EC2 API. The blueprints for
them have been approved, and the code is ready for review.

* Expose volume type
 https://review.openstack.org/#/c/61041/

* Expose volume tags
https://review.openstack.org/#/c/64690/

* Expose volume snapshot tags
https://review.openstack.org/#/c/66291/

* Expose filtering of volumes
https://review.openstack.org/#/c/62350/
https://review.openstack.org/#/c/70085/

I would like to solicit some feedback from interested/affected folks, so
that I can incorporate it sooner, so as not to bother you people near the
end of the milestone :)

Any help would be greatly appreciated.

Thanks and regards,
Rushi Agrawal
Cloud engineer,
Reliance Jio Infocomm, India
Ph: (+91) 99 4518 4519
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-dev Digest, Vol 22, Issue 39

2014-02-15 Thread Vinod Kumar Boppanna

Dear Vish,

I completely agree with you. It's like a trade-off between getting
re-authenticated (when, in a hierarchy, a user has different roles at different
levels) and parsing the entire hierarchy down to the leaf and including all the
roles the user has at each level in the scope.

I am OK with either one (both have some advantages and disadvantages).

But one point I didn't understand: why should we parse the tree above the level
at which the user gets authenticated (as you specified in the reply)? For
example, if a user is authenticated at level 3, do we mean that the roles at
level 2 and level 1 should also be passed?
Why is this needed? I only see that we either pass only the role at the level
at which the user is getting authenticated, or pass the roles from that level
down to the leaf.
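
Just to be sure we are talking about the same thing, here is my rough sketch
of the prefix-based check from your reply (not Keystone code; the dotted
project ids are only an assumption for the example):

def is_authorized(token, target_project_id, required_role='project-admin'):
    # Roles granted at the token's project are inherited by every
    # project underneath it in the hierarchy.
    return (required_role in token['roles'] and
            target_project_id.startswith(token['project_id']))

token = {'project_id': 'orga.proja', 'roles': ['project-admin']}
print(is_authorized(token, 'orga.proja.proja2'))  # True - child project
print(is_authorized(token, 'orga.projb'))         # False - outside the subtree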

Regards,
Vinod Kumar Boppanna

Message: 21
Date: Fri, 14 Feb 2014 10:13:59 -0800
From: Vishvananda Ishaya 
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] Hierarchicical Multitenancy Discussion
Message-ID: <4508b18f-458b-4a3e-ba66-22f9fa47e...@gmail.com>
Content-Type: text/plain; charset="windows-1252"

Hi Vinod!

I think you can simplify the roles in the hierarchical model by only passing 
the roles for the authenticated project and above. All roles are then inherited 
down. This means it isn't necessary to pass a scope along with each role. The
scope is just passed once with the token and the project-admin role (for 
example) would be checking to see that the user has the project-admin role and 
that the project_id prefix matches.

There is only one case that this doesn't handle, and that is when the user has
one role (say member) in ProjA and project-admin in ProjA2. If the user is
authenticated to ProjA, he can't do project-adminy stuff for ProjA2 without
reauthenticating. I think this is a reasonable sacrifice considering how much 
easier it would be to just pass the parent roles instead of going through all 
of the children.

Vish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][EC2] attach and detach volume response status

2014-02-15 Thread Rushi Agrawal
I remember seeing the same while attaching -- return value is 'detached'.
So I can confirm this is a bug.

I couldn't locate a bug report for it, so I created one:
https://bugs.launchpad.net/nova/+bug/1280572

Please mark it as a dup if you already have a bug report.
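
For reference, a small sketch of what I would expect the attachment status
handling to look like (this is not the actual Nova EC2 code; the field names
are just the ones mentioned in this thread, and per the AWS response quoted
below the state should start at 'attaching'):

# Valid EC2 attachment states according to the AWS documentation.
EC2_ATTACHMENT_STATES = ('attaching', 'attached', 'detaching', 'detached')

def format_attachment(volume):
    # The EC2 layer should report the volume's attach_status, not the
    # volume-level 'status' field, and right after a successful
    # AttachVolume call it should not be 'detached'.
    state = volume.get('attach_status', 'detached')
    assert state in EC2_ATTACHMENT_STATES
    return {'volumeId': volume['id'], 'status': state}

print(format_attachment({'id': 'vol-0001', 'attach_status': 'attaching'}))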

Regards,
Rushi Agrawal
Ph: (+91) 99 4518 4519


On Sat, Feb 15, 2014 at 11:56 AM, wu jiang  wrote:

> Hi,
>
> I checked the AttachVolume in AWS EC2:
>
> http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-AttachVolume.html
>
> The status returned is 'attaching':
>
> <AttachVolumeResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
>   <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
>   <volumeId>vol-1a2b3c4d</volumeId>
>   <instanceId>i-1a2b3c4d</instanceId>
>   <device>/dev/sdh</device>
>   <status>attaching</status>
>   <attachTime>YYYY-MM-DDTHH:MM:SS.000Z</attachTime>
> </AttachVolumeResponse>
>
>
> So I think it's a bug, IMO. Thanks~
>
>
> wingwj
>
>
> On Sat, Feb 15, 2014 at 11:35 AM, Rui Chen  wrote:
>
>> Hi Stackers;
>>
>> I use the Nova EC2 interface to attach a volume; the attach succeeds, but the
>> volume status is "detached" in the response message.
>>
>> # euca-attach-volume -i i-000d -d /dev/vdb vol-0001
>> ATTACHMENT  vol-0001i-000d  detached
>>
>> This confuses me; I think the status should be "attaching" or "in-use".
>>
>> I find that the attach and detach volume interfaces return
>> volume['attach_status'], but the describe volume interface returns
>> volume['status'].
>>
>> Is it a bug, or is it for other considerations that I do not know about?
>>
>> Thanks
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Need a new DSL for Murano

2014-02-15 Thread Stan Lagun
Just to add my 2 cents.

1. YAML is already familiar to OpenStack developers, as Heat and others use
it. So at least the syntax (not to be confused with the semantics) doesn't have
to be learned.

2. The YAML parser is very flexible and can be extended with additional types
or constructs, like a Key:  to include the content of another file, etc. (a
minimal sketch follows after this list). The Murano DSL may operate on already
deserialized (parsed) data, leaving access to the YAML files to the hosting
application. Thus the engine itself can be independent of how and where the
YAML files are stored. This is very good for the App Catalog, which stores its
data in Glance. Also, it is always possible to serialize it to some other
format (XML, JSON, whatever) if that is required for some purpose.

3. YAML declarations can be processed by the Murano dashboard, 3rd-party
software, and various tooling for Murano. If it weren't YAML but some
handcrafted syntax, then we would have to provide them with an embeddable
parser for that syntax.

4. Implementing a parser for a full-blown language is not a trivial task, to
say the least. It would require much more time for development and greatly
increase the probability of shooting ourselves in the foot. And we really don't
want to have something like a year between Murano versions.
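
To illustrate point 2, here is a minimal sketch of such an extension using
PyYAML (the "!include" tag name and the file layout are just assumptions for
the example, not anything Murano actually defines):

import yaml

def include_constructor(loader, node):
    # Treat the scalar value of the node as a path to another YAML file
    # and splice its parsed content into the surrounding document.
    path = loader.construct_scalar(node)
    with open(path) as f:
        return yaml.safe_load(f)

yaml.SafeLoader.add_constructor('!include', include_constructor)

# Write a small fragment to disk so the example is self-contained.
with open('instance.yaml', 'w') as f:
    f.write('flavor: m1.small\nimage: fedora-20\n')

print(yaml.safe_load('Instance: !include instance.yaml'))
# {'Instance': {'flavor': 'm1.small', 'image': 'fedora-20'}}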


On Sat, Feb 15, 2014 at 12:18 PM, Alexander Tivelkov  wrote:

> Hi Joshua,
>
> Thank you very much for your feedback!
> This is really a great question, and it was the first question we've asked
> ourselves when we started thinking about this new design.
> We've considered both options: to have our own syntax (python-like,
> java-like, something-else-like) or to use YAML. We've chosen the latter,
> and here are the reasons.
>
> The most important point: this is not a general-purpose language, but a
> domain-specific one. And Murano's domain is manipulating with complex
> nested data-structures, which are usually represented by YAML. Murano's
> environments are in YAML. Murano's VM-side commands are wrapped into YAML.
> The primary building blocks of Murano - Heat templates - are written on
> YAML. Actually, at a simplified level murano's workflows can be thought of
> as algorithms that just generate yaml fragments. So, it would be beneficial
> for Murano to manipulate with YAML-constructs at the DSL level. If we use
> YAML notation, yaml-blocks become first-class citizens in the language,
> while in a regular python-like language they would be just
> formatted-strings. For example, look at this code snippet which generates a
> heat template to deploy an instance:
> http://paste.openstack.org/show/65725/
> As you may see, the code on lines 7-11 is a Heat-template, seamlessly
> embedded inside Murano's workflow code. It has the variables right inside
> the template, and the Murano engine will substitute them with
> user-specified (or workflow-computed) data.
>
> Another reason for YAML: the notation is very easy to extend: you'll just
> have to add some new predefined key and a handler for its value: the format
> will not be broken, so older code will run out of the box, and even the
> newer code will most probably run fine on the older parser (the unknown
> sections will simply be ignored). This will allow us to extend the language
> without making any breaking changes. Regular languages do not provide
> this flexibility: they will have problems when detecting unrecognised lexemes
> or constructs.
>
> Last but not least: the perception. You are absolutely right when you say
> that this is actually a full programming language. However, we don't want
> to rush all its capabilities to unprepared developers. If some developer
> does not want any complexity, they may think about it as about some fancy
> configuration markup language: a declarative, heat-template-like header
> followed by a sequence of simple actions. And only when needed does the power
> come into play: variable assignments, flow control, flexible data
> contracts, complex compositions, inheritance trees... I can imagine a lot of
> scary stuff here :)
> But at the same time, YAML's indent-based syntax will look familiar to
> python developers.
>
> Yes, everything comes at a cost, and YAML may seem a bit bulky at first
> glance. But I believe that people will get used to it soon enough, and the
> benefits are really important.
>
>
> I hope this answers your concern. We'll come up with more examples and
> ideas: this thing has just emerged, nothing is set in stone yet, I am
> actively seeking feedback and ideas. So thanks a lot for your
> question; I really appreciate it.
>
>
>
> --
> Regards,
> Alexander Tivelkov
>
>
> On Fri, Feb 14, 2014 at 6:41 PM, Joshua Harlow wrote:
>
>>  An honest question,
>>
>>  U are mentioning what appears to be the basis for a full programming
>> language (variables, calling other workflows - similar to functions) but
>> then u mention this is being stuffed into yaml.
>>
>>  Why?
>>
>>  It appears like u might as well spend the effort and define a grammar
>> and simplistic language that is not stuffed inside yaml. Shoving one into

Re: [openstack-dev] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-15 Thread Sergey Lukjanov
Sukhdev, that's awesome. I think it'll be great to enable folks to start
from something easily configurable like the gerrit trigger plugin.


On Sat, Feb 15, 2014 at 6:01 AM, Sukhdev Kapur
wrote:

>
>
>
> On Thu, Feb 13, 2014 at 12:39 PM, Jay Pipes  wrote:
>
>> On Thu, 2014-02-13 at 12:34 -0800, Sukhdev Kapur wrote:
>> > Jay,
>> >
>> > Just an FYI. We have modified the Gerrit plugin to accept/match a regex
>> > to generate notifications for "recheck no bug/bug ###". It turned
>> > out to be a very simple fix and we (Arista Testing) are now triggering on
>> > recheck comments as well.
>>
>> Thanks for the update, Sukhdev! Is this updated Gerrit plugin somewhere
>> where other folks can use it?
>>
>
>
> Yes, the patch is ready. I am documenting it as a part of the overall
> description of the Arista Testing Setup and will be releasing it soon as part
> of the document that I am writing.
> Hopefully next week.
>
> regards..
> -Sukhdev
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] Unable to run unit test cases

2014-02-15 Thread Akhil Sadashiv Hingane

When I try to run the test cases for ceilometer, it fails with 


 


Traceback (most recent call last): 
File "/usr/local/bin/tox", line 9, in  
load_entry_point('tox==1.7.0', 'console_scripts', 'tox')() 
File "/usr/local/lib/python2.7/dist-packages/tox/_cmdline.py", line 25, in main 
config = parseconfig(args, 'tox') 
File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 47, in 
parseconfig 
parseini(config, inipath) 
File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 281, in 
__init__ 
config) 
File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 346, in 
_makeenvconfig 
vc.commands = reader.getargvlist(section, "commands") 
File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 518, in 
getargvlist 
commandlist.append(self._processcommand(current_command)) 
File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 550, in 
_processcommand 
new_word = self._replace(word) 
File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 666, in 
_replace 
return RE_ITEM_REF.sub(self._replace_match, x) 
File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 662, in 
_replace_match 
return handler(match) 
File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 633, in 
_replace_substitution 
val = self._substitute_from_other_section(sub_key) 
File "/usr/local/lib/python2.7/dist-packages/tox/_config.py", line 627, in 
_substitute_from_other_section 
"substitution key %r not found" % key) 
tox.ConfigError: ConfigError: substitution key 'posargs' not found 

 
Regards, 
Akhil 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Need a new DSL for Murano

2014-02-15 Thread Alexander Tivelkov
Hi Joshua,

Thank you very much for your feedback!
This is really a great question, and it was the first question we've asked
ourselves when we started thinking about this new design.
We've considered both options: to have our own syntax (python-like,
java-like, something-else-like) or to use YAML. We've chosen the latter,
and here are the reasons.

The most important point: this is not a general-purpose language, but a
domain-specific one. And Murano's domain is manipulating with complex
nested data-structures, which are usually represented by YAML. Murano's
environments are in YAML. Murano's VM-side commands are wrapped into YAML.
The primary building blocks of Murano - Heat templates - are written on
YAML. Actually, at a simplified level murano's workflows can be thought of
as algorithms that just generate yaml fragments. So, it would be beneficial
for Murano to manipulate with YAML-constructs at the DSL level. If we use
YAML notation, yaml-blocks become first-class citizens in the language,
while in a regular python-like language they would be just
formatted-strings. For example, look at this code snippet which generates a
heat template to deploy an instance: http://paste.openstack.org/show/65725/
As you may see, the code on lines 7-11 is a Heat-template, seamlessly
embedded inside Murano's workflow code. It has the variables right inside
the template, and the Murano engine will substitute them with
user-specified (or workflow-computed) data.

Another reason for YAML: the notation is very easy to extend: you'll just
have to add some new predefined key and a handler for its value: the format
will not be broken, so older code will run out of the box, and even the
newer code will most probably run fine on the older parser (the unknown
sections will simply be ignored). This will allow us to extend the language
without making any breaking changes. Regular languages do not provide
this flexibility: they will have problems when detecting unrecognised lexemes
or constructs.
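
To make that concrete, a toy illustration (not Murano code; the section names
are made up) of how an older engine can simply skip unknown keys:

def handle_properties(body):
    print('validating properties: %s' % body)

def handle_workflow(body):
    print('executing workflow: %s' % body)

HANDLERS = {'Properties': handle_properties, 'Workflow': handle_workflow}

definition = {
    'Properties': {'name': 'string'},
    'Workflow': ['deploy'],
    'SomeFutureSection': {'added': 'in a later version'},
}

for key, body in definition.items():
    handler = HANDLERS.get(key)
    if handler:  # unknown sections are ignored instead of being fatal
        handler(body)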

Last but not least: the perception. You are absolutely right when you say
that this is actually a full programming language. However, we don't want
to rush all its capabilities to unprepared developers. If some developer
does not want any complexity, they may think about it as about some fancy
configuration markup language: a declarative, heat-template-like header
followed by a sequence of simple actions. And only when needed does the power
come into play: variable assignments, flow control, flexible data
contracts, complex compositions, inheritance trees... I can imagine a lot of
scary stuff here :)
But at the same time, YAML's indent-based syntax will look familiar to
python developers.

Yes, everything comes at a cost, and YAML may seem a bit bulky at first
glance. But I believe that people will get used to it soon enough, and the
benefits are really important.


I hope this answers your concern. We'll come up with more examples and
ideas: this thing has just emerged, nothing is set in stone yet, I am
actively seeking feedback and ideas. So thanks a lot for your
question; I really appreciate it.



--
Regards,
Alexander Tivelkov


On Fri, Feb 14, 2014 at 6:41 PM, Joshua Harlow wrote:

>  An honest question,
>
>  U are mentioning what appears to be the basis for a full programming
> language (variables, calling other workflows - similar to functions) but
> then u mention this is being stuffed into yaml.
>
>  Why?
>
>  It appears like u might as well spend the effort and define a grammar
> and simplistic language that is not stuffed inside yaml. Shoving one into
> yaml syntax seems like it gets u none of the benefits of syntax checking,
> parsing, validation (highlighting...) and all the pain of yaml.
>
>  Something doesn't seem right about the approach of creating languages
> inside the yaml format (in a way it becomes like xsl, yet xsl at least has
> a spec and is well defined).
>
>  My 2 cents
>
> Sent from my really tiny device...
>
> On Feb 14, 2014, at 7:22 PM, "Alexander Tivelkov" 
> wrote:
>
>Hi folks,
>
>  Murano matures, and we are getting more and more feedback from our early
> adopters. The overall reception is very positive, but at the same time
> there are some complaints as well. By now the most significant complaint is
> that it is hard to write workflows for application deployment and maintenance.
>
> The current version of the workflow definition markup really has some design
> drawbacks which limit its potential adoption. They are caused by the fact
> that it was never intended for Application Catalog use-cases.
>
>  I'll briefly touch on these drawbacks first:
>
>
>1. Murano's workflow engine is actually a state machine, however the
>workflow markup does not explicitly define the states and transitions.
>2. There is no data isolation within any environment, which causes
>both potential security vulnerabilities and unpredictable workflow
>behaviours.
>3. There is no easy way to reuse the wo