Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Qiu Yu
On Mon, May 2, 2016 at 12:38 PM, Steven Dake (stdake) 
wrote:

> Yup but that didn't happen with kolla-mesos and I didn't catch it until 2
> weeks after it was locked in stone.  At that point I asked for the ABI to
> be unified to which I got a "shrug" and no action.
>
> If it had been in one repo, everyone would have seen the multiple ABIs and
> rejected the patch in the first place.
>
> FWIW I am totally open to extending the ABI however is necessary to make
> Kolla containers be the reference that other projects use for their
> container deployment technology tooling.  In this case the ABI was
> extended without consultation and without repair after the problem was
> noticed.


ABI has been mentioned a lot in both this thread and the spec code review.
Does it refer to the container image only, or does it also cover other
parts, such as the jinja2 templates used for config generation?

That is the part I think needs more clarification. Even though we treat
Kubernetes as just another deployment tool, if it still relies on Ansible to
generate configurations (as proposed in the spec[1]), then there's no clean
way to centralize all the Kubernetes-related work in a separate repo.

If we're going to re-use Kolla's jinja2 templates and ini merging (which
heavily depend on Ansible modules as of now), I think it is practically
easier to bootstrap the Kubernetes work in the same Kolla repo. Other than
that, I'm in favor of a separate kolla-kubernetes repo.

[1] https://review.openstack.org/#/c/304182
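
For readers less familiar with that part, the ini merging is conceptually
just layering config fragments, with later sources overriding earlier ones.
A minimal sketch of the idea in plain Python (my own illustration, not the
actual Ansible module Kolla uses):

    import configparser

    def merge_ini(fragments):
        # Merge INI fragments in order; later fragments override earlier ones.
        parser = configparser.ConfigParser()
        for text in fragments:
            parser.read_string(text)
        merged = {section: dict(parser.items(section)) for section in parser.sections()}
        merged['DEFAULT'] = dict(parser.defaults())
        return merged

    base = "[DEFAULT]\ndebug = False\n\n[database]\nconnection = sqlite://\n"
    override = "[DEFAULT]\ndebug = True\n"
    print(merge_ini([base, override]))   # debug ends up 'True', connection is kept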

QY
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Hide CI comments in Gerrit

2014-05-28 Thread Qiu Yu
Hi Rado,

Thanks for this cool userscript!

It works great. The only problem I found is that TamperMonkey has an issue
with hashes in @include pattern matching[1], so I ended up using
https://review.openstack.org/* instead of https://review.openstack.org/#/c/*

[1] http://tampermonkey.net/documentation.php#@include

Thanks,
--
QY


On Sun, May 25, 2014 at 8:23 PM, Radoslav Gerganov 
wrote:

> Hi,
>
> I created a small userscript that allows you to hide CI comments in
> Gerrit. That way you can read only comments written by humans and hide
> everything else. I’ve been struggling for a long time to follow discussions
> on changes with many patch sets because of the CI noise. So I came up with
> this userscript:
>
> https://gist.github.com/rgerganov/35382752557cb975354a
>
> It adds “Toggle CI” button at the bottom of the page that hides/shows CI
> comments. Right now it is configured for Nova CIs, as I contribute mostly
> there, but you can easily make it work for other projects as well. It
> supports both the “old” and “new” screens that we have.
>
> How to install on Chrome: open chrome://extensions and drag&drop the
> script there
> How to install on Firefox: install Greasemonkey first and then open the
> script
>
> Known issues:
>  - you may need to reload the page to get the new button
>  - I tried to add the button somewhere close to the collapse/expand links
> but it didn’t work for some reason
>
> Hope you will find it useful. Any feedback is welcome :)
>
> Thanks,
> Rado
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Meaning of qvo, qve, qbr, qr, qg and so on

2014-05-19 Thread Qiu Yu
Hi Eduard,

Hopefully the diagrams on the following page can clear things up:
http://docs.openstack.org/admin-guide-cloud/content/under_the_hood_openvswitch.html

Terms explained below, based on my understanding:
- qvo: veth pair, Open vSwitch side
- qvb: veth pair, bridge side
- qbr: Linux bridge
- qr: L3-agent-managed port, router side
- qg: L3-agent-managed port, gateway side

Not sure about the qve you mentioned; I didn't see it anywhere in the code.
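
As a rough illustration of where the names come from (this is my own sketch
based on the usual prefix-plus-truncated-port-UUID convention, not the
actual nova/neutron code):

    # Hypothetical sketch: derive the per-port device names for a Neutron port ID.
    NIC_NAME_LEN = 14   # assumed interface-name length limit for these devices

    def devices_for_port(port_id):
        trimmed = port_id[:NIC_NAME_LEN - 3]          # leave room for the 3-char prefix
        return {
            'bridge': 'qbr' + trimmed,                # Linux bridge between VM and OVS
            'veth_bridge_side': 'qvb' + trimmed,      # veth end attached to the bridge
            'veth_ovs_side': 'qvo' + trimmed,         # veth end attached to br-int (OVS)
        }

    print(devices_for_port('3f8a9c2e-1234-5678-9abc-def012345678'))
    # {'bridge': 'qbr3f8a9c2e-12', ...}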

Thanks,
--
Qiu Yu


On Fri, May 16, 2014 at 2:50 AM, Eduard barrera wrote:

> Hi all,
>
> I was wondering what is the meaning of this acronyms...
> Are they acronyms, right ?
>
> Quantum
> Virtual
> O??
>
> Quantum
> B
> Ridge ?
>
> qg ?
> qve ?
>
> What do they mean ?
>
> Thanks
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Centralized policy rules and quotas

2014-02-18 Thread Qiu Yu
On Feb 18, 2014 5:48 PM, "Vinod Kumar Boppanna" <
vinod.kumar.boppa...@cern.ch> wrote:



> The file "quota.py" contains the domain quota driver implemented by Tiago
> and his team (I have just added a few more functions to complete it).
>
> Hope this is ok. If you are finding problems with this as well, then let
> me know... I will try to create a patch then (I am new to all this code
> commit, patch etc., so please bear with me for the inconvenience).
>
> Tiago once gave me this link
>
> https://github.com/tellesnobrega/nova/tree/master/nova (where he has put
> up the domain quota driver). But again, as I said, it was a little
> incomplete and some of the functions are missing.
>

Thanks Vinod, that answers all my questions. Thank you so much for the
detailed information! :)

Thanks,
--
Qiu Yu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Centralized policy rules and quotas

2014-02-18 Thread Qiu Yu
On Tue, Feb 18, 2014 at 4:59 PM, Vinod Kumar Boppanna <
vinod.kumar.boppa...@cern.ch> wrote:

>  Dear Qiu Yu,
>
> The domain quota driver as well as the APIs to access the domain quota
> driver is available. Please check the following
>
> BluePrint ->
> https://blueprints.launchpad.net/nova/+spec/domain-quota-driver-api
> Wiki Page -> https://wiki.openstack.org/wiki/APIs_for_Domain_Quota_Driver
> GitHub Code -> https://github.com/vinodkumarboppanna/DomainQuotaAPIs
>

Vinod,

Thank you for sharing. I did try to dig into your repo before sending the
last email.

It looks like the domain quota driver code has already been included in your
base commit, which makes it not easy for me to read. Do you happen to have a
link to a clean commit / patch of just the domain quota driver code itself?
Thanks!

Thanks,
Qiu Yu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Centralized policy rules and quotas

2014-02-18 Thread Qiu Yu
On Fri, Feb 7, 2014 at 4:46 AM, Raildo Mascena  wrote:

> Hello,
>
> Currently, there is a blueprint for a new Domain Quota Driver which is
> waiting for approval, but it is already implemented. I believe it is
> worth checking out.
>
> https://blueprints.launchpad.net/nova/+spec/domain-quota-driver
>
> Any questions I am available.
>
> Regards,
>
> Raildo Mascena
>

Hi Raildo,

Is the domain quota driver code now available for review?

I'm asking because the work items in the blueprint[1] have already been
marked as done, and there is also some relevant work (quotas for domains)
mentioned by Vinod in another thread, but I could not find the domain quota
driver code anywhere. I would appreciate it if you could share some pointers.

[1] https://blueprints.launchpad.net/nova/+spec/domain-quota-driver

Thanks,
--
Qiu Yu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Gate] qemu: linux kernel too old to load a ram disk

2014-02-14 Thread Qiu Yu
On Fri, Feb 14, 2014 at 4:58 PM, sahid wrote:

> Hello,
>
> It looks like for the last 12 hours the gate has been failing in 100% of
> cases because of an error with libvirt (logs/libvirtd.txt):
> qemu: linux kernel too old to load a ram disk
>
>
> Bug reported on openstack-ci:
> https://bugs.launchpad.net/openstack-ci/+bug/1280142
>
> Fingerprint:
>
> http://logstash.openstack.org/#eyJzZWFyY2giOiIgbWVzc2FnZTpcInFlbXU6IGxpbnV4IGtlcm5lbCB0b28gb2xkIHRvIGxvYWQgYSByYW0gZGlza1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIiLCJzdGFtcCI6MTM5MjM2NzU5MTY1MX0=
>
> s.
>
>
Just marked it as a duplicate of
https://bugs.launchpad.net/openstack-ci/+bug/1280072

It seems glance is not happy with the newly released python-swiftclient 2.0,
and then, with corrupted images, all VM provisioning simply fails.

--
Qiu Yu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Gate broken

2014-02-07 Thread Qiu Yu
On Sat, Feb 8, 2014 at 3:29 PM, Gary Kotton  wrote:

> Hi,
> Anyone aware of:
>
>
It is caused by the new boto 2.25 release.
Joe Gordon filed a new bug on this[1], and I just submitted a patch[2] to
fix it.

[1] https://bugs.launchpad.net/nova/+bug/1277790
[2] https://review.openstack.org/#/c/72066/

Thanks,
--
Qiu Yu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] policy has no effect because of hard coded assert_admin?

2013-12-12 Thread Qiu Yu
On Fri, Dec 13, 2013 at 2:40 AM, Morgan Fainberg  wrote:

> As Dolph stated, V3 is where the policy file protects.  This is one of the
> many reasons why I would encourage movement to using V3 Keystone over V2.
>
> The V2 API is officially deprecated in the Icehouse cycle, I think that
> moving the decorator potentially could cause more issues than not as stated
> for compatibility.  I would be very concerned about breaking compatibility
> with deployments and maintaining the security behavior with the
> encouragement to move from V2 to V3.  I am also not convinced passing the
> context down to the manager level is the right approach.  Making a move on
> where the protection occurs likely warrants a deeper discussion (perhaps in
> Atlanta?).
>
>
Thanks for the background info. However, after a quick walk-through of the
keystone V3 API and the existing BPs, two questions still confuse me
regarding policy enforcement.

#1 It seems the V3 policy API[1] has nothing to do with the policy rules. It
seems to deal with access / secret keys only, so in my understanding it might
be used for access-key authentication and related control.

Is there any use case / example for the V3 policy API? Is it even related to
the policy rules in the JSON file?

#2 I found these slides[2] by Adam Young online. On page 27, he mentions that
"isAdmin", which currently lives in nova, actually belongs in keystone.

Some pointers would be really appreciated: an ML discussion, a BP (I haven't
found any so far), etc.

[1] http://api.openstack.org/api-ref-identity.html#Policy_Calls
[2] http://www.slideshare.net/kamesh001/openstack-keystone

Thanks,
--
Qiu Yu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] policy has no effect because of hard coded assert_admin?

2013-12-11 Thread Qiu Yu
Hi,

I was trying to fine-tune some keystone policy rules. Basically I want to
grant the "create_project" action to users in an "ops" role. The following
are my steps.

1. Adding a new user "usr1"
2. Creating new role "ops"
3. Granting this user the "ops" role in the "service" tenant
4. Adding new lines to keystone policy file

"ops_required": [["role:ops"]],
"admin_or_ops": [["rule:admin_required"], ["rule:ops_required"]],

5. Change

"identity:create_project": [["rule:admin_required"]],
to
"identity:create_project": [["rule:admin_or_ops"]],

6. Restart keystone service

keystone tenant-create with the credentials of user "usr1" still returns a
403 Forbidden error:
“You are not authorized to perform the requested action, admin_required.
(HTTP 403)”

After a quick scan, it seems that the create_project function has a
hard-coded assert_admin call[1], which does not respect the settings in the
policy file.

Any ideas why? Is this a bug that should be fixed? Thanks!
BTW, I'm running the keystone havana release with the V2 API.

[1]
https://github.com/openstack/keystone/blob/master/keystone/identity/controllers.py#L105
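
To make the difference concrete, here is a rough, simplified sketch of the
two patterns I'm describing (my own illustration, not keystone's actual
code): a hard-coded admin check versus a check driven by the policy rules.

    # Hypothetical, simplified sketch of the two patterns (not keystone's real code).
    POLICY = {'identity:create_project': {'admin', 'ops'}}   # stand-in for policy.json rules

    class Forbidden(Exception):
        pass

    def create_project_hardcoded(context, tenant):
        # The pattern I hit in the v2 controller: only "admin" passes,
        # whatever policy.json says.
        if 'admin' not in context['roles']:
            raise Forbidden('admin_required')
        return {'name': tenant}

    def create_project_policy_driven(context, tenant):
        # What I expected: the decision comes from the policy rules themselves.
        allowed = POLICY['identity:create_project']
        if not allowed & set(context['roles']):
            raise Forbidden('identity:create_project')
        return {'name': tenant}

    ops_ctx = {'roles': ['ops']}
    print(create_project_policy_driven(ops_ctx, 'demo'))    # succeeds for the "ops" role
    try:
        create_project_hardcoded(ops_ctx, 'demo')
    except Forbidden as exc:
        print('403-style rejection: %s' % exc)              # what I see despite the policy change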

Thanks,
--
Qiu Yu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Scheduler] about scheduler-as-a-service

2013-12-10 Thread Qiu Yu
On Tue, Dec 10, 2013 at 5:30 PM, Lingxian Kong  wrote:

> we know that there is a scheduler-as-a-service[1] working in progress now,
> aiming at smart resource placement and also providing the instance group
> API work for nova.
>
> But what I wonder is does it include the feature of DRS(Distributed
> Resource Scheduler, something like that), as it is in vCenter[2], or is
> there any project related to this? or some related bp?
>
> Any hints are appreciated. I apologize if this question was already
> covered and I missed it.
>
>
For the "smart" portion, maybe you should take a look at
https://blueprints.launchpad.net/nova/+spec/solver-scheduler

And for the DRS feature, I think it more likely fits into nova conductor's
role. After the migration tasks have all been moved to the conductor, a
feature like DRS could be discussed as the next step.
https://blueprints.launchpad.net/nova/+spec/cold-migrations-to-conductor
https://blueprints.launchpad.net/nova/+spec/unified-migrations

> [1]https://etherpad.openstack.org/p/icehouse-external-scheduler
> [2]https://www.vmware.com/cn/products/vsphere/features/drs-dpm.html
>
>
BTW, the second link you provided seems to be in Chinese. For those who are
interested, please use this one instead:
http://www.vmware.com/pdf/vmware_drs_wp.pdf

--
Qiu Yu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Design proposals for blueprints to record scheduler information

2013-12-09 Thread Qiu Yu
Hi, ALL

Recently I've been working on two blueprints[1][2], both of which involve
recording scheduling information, and I would like to hear some comments on
several design choices.

Problem Statement
--
* A NoValidHost exception might mask the real reason an instance failed to
spin up.

Consider the following event sequence: "run_instance" on host1 failed to
spin up an instance due to a port allocation failure in neutron. The request
was cast back to the scheduler to pick the next available host. It failed
again on host2 with the same port allocation error. After a maximum of 3
retries, the instance is set to the "ERROR" state with a NoValidHost
exception, and there's no easy way to find out what really went wrong.

* Currently, scheduling information is recorded in several different log
items, which is difficult to look up when debugging.

Design Proposal
--
1. Blueprint internal-scheduler[1] will try to address problem #1. After the
conductor retrieves the selected destination hosts from the scheduler, it
will create a "scheduler_records_allocations" item in the database for each
instance/host allocation.

Design choices:
a) Correlate this scheduler_records_allocations with the 'create' instance
action, and generate a combined view with instance-action events.
b) Add a separate new API to retrieve this information.

I prefer choice (a), because instance action events fit this use case
perfectly, and allocation records will supplement the necessary information
when viewing the 'create' action events of an instance.

Thoughts?

NOTE: Please find the following chart in link[3], in case of any
format/display issue.

  scheduler_records (one row per scheduler run)
    scheduler_record_id: 1210
    user_id: 'u_fakeid'
    project_id: 'p_fakeid'
    request_id: 'req-xxx'
    instance_uuids: ['inst1_uuid', 'inst2_uuid']
    request_spec: {...}
    filter_properties: {...}
    scheduler_records_allocations: [9001, 9002]
    start_time: ...
    finish_time: ...

  scheduler_records_allocations (one row per instance/host allocation attempt)
    allocation_id: 9001 | scheduler_record_id: 1210 | instance_uuid: inst1_uuid (instance1)
      host: host1 | weight: 197.0 | result: Failed  | reason: 'No more IP addresses'
    allocation_id: 9002 | scheduler_record_id: 1210 | instance_uuid: inst2_uuid (instance2)
      host: host2 | weight: 128.0 | result: Success | reason: -
    allocation_id: 9003 | scheduler_record_id: 1210 | instance_uuid: inst1_uuid (instance1)
      host: host2 | weight: 64.0  | result: Failed  | reason: 'No more IP addresses'

2. Blueprint record-scheduler-information[2] will try to solve problem #2 by
generating structured information for each scheduler run.

Design choices:
a) Record the 'scheduler_records' info in the database, which is easy to
query, but introduces a heavy burden in terms of performance, extra database
space usage, cleanup/archiving policy, security-related issues[4], etc.
b) Record the 'scheduler_records' in a separate log file, in JSON format,
with one line per record for each scheduler run, and then add a new API
extension to retrieve the last n (as a query parameter) scheduler records.
This approach avoids the database issues, plays well with external tooling,
and provides a central place to view the log. But as a compromise, we won't
be able to query logs for a specific request_id.
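
To make option (b) more concrete, here is a hypothetical example of the kind
of JSON line that could be written per scheduler run (field names are
illustrative only, reusing the sample data from the chart above):

    import json
    import time

    # Illustrative record only; field names follow the proposal above.
    record = {
        'scheduler_record_id': 1210,
        'request_id': 'req-xxx',
        'user_id': 'u_fakeid',
        'project_id': 'p_fakeid',
        'instance_uuids': ['inst1_uuid', 'inst2_uuid'],
        'allocations': [
            {'host': 'host1', 'instance_uuid': 'inst1_uuid', 'weight': 197.0,
             'result': 'Failed', 'reason': 'No more IP addresses'},
            {'host': 'host2', 'instance_uuid': 'inst2_uuid', 'weight': 128.0,
             'result': 'Success', 'reason': ''},
        ],
        'start_time': time.time(),
        'finish_time': time.time(),
    }

    # One such line would be appended to the dedicated scheduler log per run.
    print(json.dumps(record))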

So the question here is: is the database storage solution still desirable?
Or should we implement backend drivers that deployers could choose from?
However, in that case, the API would be the minimum set supporting both.

Any comments or thoughts are highly appreciated.

[1] https://blueprints.launchpad.net/nova/+spec/internal-scheduler
[2] https://blueprints.launchpad.net/nova/+spec/record-scheduler-information
[3]
https://docs.google

Re: [openstack-dev] Cells - Neutron Service

2013-08-30 Thread Qiu Yu
I found this launchpad discussion, in which Aaron mentioned network-awareness
integration with nova cells:

https://answers.launchpad.net/neutron/+question/228815

Could anyone share some pointers in that direction? Thanks!

Best Regards,
--
Qiu Yu


On Fri, Aug 30, 2013 at 6:40 AM, Ravi Chunduru  wrote:

> It's an interesting discussion you brought up today. I agree there is no
> clear definition of the neutron service in that table. A cell goes by its
> definition of the ability to create an instance anywhere. Then there needs
> to be inter-VM communication for a given network.
>
> I feel Neutron must be a shared service in Cells. Such depth is missing in
> Neutron today.
>
> Any thoughts?
>
> Thanks,
> -Ravi.
>
>
> On Thu, Aug 29, 2013 at 8:00 AM, Addepalli Srini-B22160 <
> b22...@freescale.com> wrote:
>
>>  Hi,
>>
>> While developing some neutron extensions, one question came up on Cells.
>> Appreciate any comments.
>>
>> According to the table below from the operations guide, a cell shares
>> nova-api and keystone, but the table does not talk about other services.
>>
>> I understand from a few people that the Neutron service needs to be
>> shared across cells if virtual networks are to be extended to multiple
>> cells. Otherwise, the Neutron service can be dedicated to each cell.
>>
>> I guess anybody developing neutron-related extensions needs to take care
>> of both scenarios.
>>
>> Is that understanding correct?
>>
>> Also, which deployments are more common – shared Neutron or dedicated
>> neutrons?
>>
>> Thanks
>> Srini
>>
>> (Table from
>> http://docs.openstack.org/trunk/openstack-ops/content/scaling.html)
>>
>> Cells
>>   Use when you need: A single API endpoint for compute, or you require a
>>   second level of scheduling.
>>   Example: A cloud with multiple sites where you can schedule VMs
>>   "anywhere" or on a particular site.
>>   Overhead: A new service, nova-cells; each cell has a full nova
>>   installation except nova-api.
>>   Shared services: Keystone, nova-api.
>>
>> Regions
>>   Use when you need: Discrete regions with separate API endpoints and no
>>   coordination between regions.
>>   Example: A cloud with multiple sites, where you schedule VMs to a
>>   particular site and you want a shared infrastructure.
>>   Overhead: A different API endpoint for every region; each region has a
>>   full nova installation.
>>   Shared services: Keystone.
>>
>> Availability Zones
>>   Use when you need: Logical separation within your nova deployment for
>>   physical isolation or redundancy.
>>   Example: A single site cloud with equipment fed by separate power
>>   supplies.
>>   Overhead: Configuration changes to nova.conf.
>>   Shared services: Keystone, all nova services.
>>
>> Host Aggregates
>>   Use when you need: To schedule a group of hosts with common features.
>>   Example: Scheduling to hosts with trusted hardware support.
>>   Overhead: Configuration changes to nova.conf.
>>   Shared services: Keystone, all nova services.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Ravi
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] cgroups cpu share allocation in grizzly seems incorrect

2013-08-26 Thread Qiu Yu
On Fri, Aug 23, 2013 at 6:00 AM, Chris Friesen
 wrote:
>
> I just noticed that in Grizzly regardless of the number of vCPUs the value
> of /sys/fs/cgroup/cpu/libvirt/qemu/instance-X/cpu.shares seems to be the
> same.  If we were overloaded, this would give all instances the same cpu
> time regardless of the number of vCPUs in the instance.
>
> Is this design intent?  It seems to me that it would be more correct to have
> the instance value be multiplied by the number of vCPUs.

I think it makes sense to give each vCPU an equal weight for scheduling
purposes. This makes each vCPU an equal entity with the same computing power.

For putting a hard cap on vCPU usage, CFS quota/period can be used; please
check the following BP for reference:
https://blueprints.launchpad.net/nova/+spec/quota-instance-resource
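
Just to illustrate the arithmetic being discussed (a sketch of the idea
only; the base value and the cap policy below are assumptions, not what
nova/libvirt actually compute):

    # Illustrative only; the constants and the per-vCPU cap policy are assumptions.
    BASE_SHARES = 1024          # typical default cpu.shares for a single entity
    CFS_PERIOD_US = 100000      # 100 ms scheduling period

    def scaled_shares(vcpus):
        # Chris's suggestion: weight an instance by its vCPU count.
        return BASE_SHARES * vcpus

    def cfs_quota_us(cpu_fraction):
        # Hard cap: e.g. cpu_fraction=0.5 expresses roughly 50% of one core.
        return int(CFS_PERIOD_US * cpu_fraction)

    print(scaled_shares(4))      # 4096: a 4-vCPU instance weighs 4x a 1-vCPU one
    print(cfs_quota_us(0.5))     # 50000: quota/period pair for a 50% cap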

--
Qiu Yu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ipdb debugging in Neutron test cases?

2013-07-17 Thread Qiu Yu
On Wed, Jul 17, 2013 at 3:30 PM, Roman Podolyaka
 wrote:
> Hi,
>
> Indeed, stable/grizzly contains the following code in the base test case
> class (quantum/tests/base.py):
>
> if os.environ.get('OS_STDOUT_NOCAPTURE') not in TRUE_STRING:
> stdout = self.useFixture(fixtures.StringStream('stdout')).stream
> self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
>

Oh, I see. Thank you so much, Roman.

> so stdout is captured by default, and you should use OS_STDOUT_NOCAPTURE=1
> instead.
>

Actually, both OS_STDOUT_NOCAPTURE=1 and OS_STDERR_NOCAPTURE=1 need to
be specified.
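
For example, for the same test case as in my original post:

    OS_STDOUT_NOCAPTURE=1 OS_STDERR_NOCAPTURE=1 python -m testtools.run \
        quantum.tests.unit.openvswitch.test_ovs_quantum_agent.TestOvsQuantumAgent.test_port_update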

> The behavior was changed in this commit
> https://github.com/openstack/neutron/commit/91bd4bbaeac37d12e61c9c7b033f55ec9f1ab562.
>
> Thanks,
> Roman

--
Qiu Yu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ipdb debugging in Neutron test cases?

2013-07-16 Thread Qiu Yu
On Wed, Jul 17, 2013 at 12:00 PM, Roman Podolyaka
 wrote:
> Hi,
>
> Ensure that stdout isn't captured by the corresponding fixture:
>
> OS_STDOUT_CAPTURE=0 python -m testtools.run
> neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
> Tests running...

Thanks Roman, ipdb works fine with test cases on the Neutron master
branch. And if you run 'python -m testtools.run {testcase}', stdout is
not captured by default.

However, the issue still exists on the Neutron stable/grizzly branch,
even with OS_STDOUT_CAPTURE=0. I'm not quite sure which change on trunk
resolved this issue.

Thanks,
--
Qiu Yu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ipdb debugging in Neutron test cases?

2013-07-16 Thread Qiu Yu
Hi,

I'm wondering whether anyone has ever tried using ipdb in Neutron test
cases. The same trick that works with Nova cannot be applied in Neutron.

For example, you can trigger one specific test case, but once an ipdb
line is added, the following exception is raised from IPython.

Any thoughts? How can I make ipdb work with Neutron test cases? Thanks!

$ source .venv/bin/activate
(.venv)$ python -m testtools.run
quantum.tests.unit.openvswitch.test_ovs_quantum_agent.TestOvsQuantumAgent.test_port_update

==
ERROR: 
quantum.tests.unit.openvswitch.test_ovs_quantum_agent.TestOvsQuantumAgent.test_port_update
--
Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "quantum/tests/unit/openvswitch/test_ovs_quantum_agent.py",
line 163, in test_port_update
from ipdb import set_trace; set_trace()
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/ipdb/__init__.py",
line 16, in 
from ipdb.__main__ import set_trace, post_mortem, pm, run,
runcall, runeval, launch_ipdb_on_exception
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/ipdb/__main__.py",
line 26, in 
import IPython
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/__init__.py",
line 43, in 
from .config.loader import Config
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/__init__.py",
line 16, in 
from .application import *
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/application.py",
line 31, in 
from IPython.config.configurable import SingletonConfigurable
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/configurable.py",
line 26, in 
from loader import Config
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/loader.py",
line 27, in 
from IPython.utils.path import filefind, get_ipython_dir
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/path.py",
line 25, in 
from IPython.utils.process import system
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/process.py",
line 27, in 
from ._process_posix import _find_cmd, system, getoutput, arg_split
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/_process_posix.py",
line 27, in 
from IPython.utils import text
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/text.py",
line 29, in 
from IPython.utils.io import nlprint
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/io.py",
line 78, in 
stdout = IOStream(sys.stdout, fallback=devnull)
  File 
"/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/io.py",
line 42, in __init__
setattr(self, meth, getattr(stream, meth))
AttributeError: '_io.BytesIO' object has no attribute 'name'


--
Qiu Yu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Combination of ComputeCapabilitiesFilter and AggregateInstanceExtraSpecsFilter

2013-07-05 Thread Qiu Yu
Russell,

Should ComputeCapabilitiesFilter also be restricted to the scoped format
only? Currently it recognizes and compares BOTH scoped and non-scoped keys,
which is causing the conflict.

I've already submitted a bug and a patch review for this:

https://bugs.launchpad.net/nova/+bug/1191185
https://review.openstack.org/#/c/33143/
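
A rough sketch of what "restricted to the scoped format only" means in
practice; this is simplified, hypothetical code, not the actual filter
implementation:

    # Hypothetical, simplified extra-spec matching with scope awareness.
    def capabilities_filter_passes(extra_specs, host_capabilities):
        for key, required in extra_specs.items():
            scope = key.split(':', 1)
            if len(scope) != 2 or scope[0] != 'capabilities':
                continue                      # not scoped for this filter: ignore, don't fail
            if str(host_capabilities.get(scope[1])) != str(required):
                return False
        return True

    extra_specs = {'capabilities:hypervisor_hostname': 'node1', 'class': 'good'}
    host_caps = {'hypervisor_hostname': 'node1'}
    print(capabilities_filter_passes(extra_specs, host_caps))   # True: 'class' is left to
                                                                # the aggregate filter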

--
Qiu Yu


On Sat, Jul 6, 2013 at 2:04 AM, Russell Bryant  wrote:
> On 07/05/2013 12:42 PM, Jérôme Gallard wrote:
>> Hi all,
>>
>> I'm trying to combine ComputeCapabilitiesFilter and
>> AggregateInstanceExtraSpecsFilter. However I probably missed
>> something, because it does not work :-)
>>
>> Both filters are activated with the following order:
>> ComputeCapabilitiesFilter, AggregateInstanceExtraSpecsFilter.
>>
>> I created a flavor with the following extra_spec:
>> * capabilities:hypervisor_hostname=node1
>> * class=good
>>
>> I created an aggregate containing node1 with an extra_spec:
>> * class=good
>>
>> When I start a new instance with the previously created flavor, the
>> ComputeCapabilitiesFilter can't find an available node. I put some
>> debug inside the filter. From my understanding, it seems that,
>> ComputeCapabilitiesFilter manage to find the first spec
>> "capabilities:hypervisor_hostname=node1" into the list of metadata
>> provided by the host node1 : the first iteration of the loop is OK.
>> Then this filter continues with the "class=good" spec and, of course,
>> it fails and the filter returns that there is no available host.
>>
>> Do you have an idea about what I'm missing? How to tell to
>> ComputeCapabilitiesFilter that the "class" key is not for it?
>>
>> I read the detailed documentation about filter_scheduler (
>> http://docs.openstack.org/developer/nova/devref/filter_scheduler.html
>> ). But I didn't manage to solve the issue.
>
> The AggregateInstanceExtraSpecs filter needs to have support added for
> scoped extra specs.  That way you can specify something like
> 'aggregate_capabilities:class=good', and the other filter will ignore it.
>
> I'll fix it up.  It should be pretty easy.
>
> https://bugs.launchpad.net/nova/+bug/1198290
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev