Re: [openstack-dev] [neutron][upgrade] new 'all things upgrade' subteam

2015-11-11 Thread Anna Kamyshnikova
Great news! Thanks Ihar!

I'm interested in working on this :) My TZ is UTC+3:00.

On Wed, Nov 11, 2015 at 12:14 AM, Martin Hickey 
wrote:

> I am interested too and will be available to the subteam.
>
> On Tue, Nov 10, 2015 at 9:03 PM, Sean M. Collins 
> wrote:
>
>> I'm excited. I plan on attending and being part of the subteam. I think
>> the tags that Dan Smith recently introduced could be our deliverables,
>> where this subteam focuses on working towards Neutron being tagged with
>> these tags.
>>
>> https://review.openstack.org/239771 - Introduce assert:supports-upgrade
>> tag
>>
>> https://review.openstack.org/239778 - Introduce
>> assert:supports-rolling-upgrade tag
>> --
>> Sean M. Collins
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-11 Thread Samta Rangare
Hi Sean,

Thanks for the reply; responses are inline.

On Mon, Nov 9, 2015 at 8:24 PM, Mooney, Sean K 
wrote:
> Hi
> Can you provide some more information regarding your deployment?
>
> Can you check which kernel you are using.
>
> uname -a

Linux ubuntu 3.16.0-50-generic #67~14.04.1-Ubuntu SMP Fri Oct 2 22:07:51
UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

>
> If you are using a 3.19 kernel, changes to some locking code in the kernel
broke synchronization in DPDK 2.0 and require DPDK 2.1 to be used instead.
> In general it is not advisable to use a 3.19 kernel with DPDK as it can
lead to non-deterministic behavior.
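[The kernel-version caveat above can be checked with a small script. This is a hedged sketch: the only flagged series is 3.19, per the statement above; adjust if other kernels turn out to be affected.]

```shell
# Hedged sketch: flag the 3.19 kernel series reported to break DPDK 2.0.
check_kernel() {
    case "$1" in
        3.19*) echo "warn: kernel $1 needs DPDK >= 2.1" ;;
        *)     echo "ok: no known DPDK 2.0 locking issue on kernel $1" ;;
    esac
}

check_kernel "$(uname -r)"
check_kernel "3.19.0-25-generic"   # -> warn: kernel 3.19.0-25-generic needs DPDK >= 2.1
```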
>
> When devstack hangs can you connect with a second ssh session and run
> sudo service ovs-dpdk status
> and
> ps aux | grep ovs
>
sudo service ovs-dpdk status
sourcing config
/opt/stack/logs/ovs-vswitchd.pid is not running
Not all processes are running restart!!!
1
ubuntu@ubuntu:~/samta/devstack$ ps -ef | grep ovs
root 13385 1  0 15:17 ?00:00:00 /usr/sbin/ovsdb-server
--detach --pidfile=/opt/stack/logs/ovsdb-server.pid
--remote=punix:/usr/local/var/run/openvswitch/db.sock
--remote=db:Open_vSwitch,Open_vSwitch,manager_options
ubuntu   24451 12855  0 15:45 pts/000:00:00 grep --color=auto ovs

>
> When the deployment hangs at sudo ovs-vsctl br-set-external-id br-ex
bridge-id br-ex
> It usually means that the ovs-vswitchd process has exited.
>
The above result shows that ovs-vswitchd is not running.
> This can happen for a number of reasons.
> The vswitchd process may exit if it failed to allocate memory (due to
memory fragmentation or lack of free hugepages).
> If the ovs-vswitchd.log is not available, can you check that the hugepage
mount point was created at
> /mnt/huge and that it is mounted?
> Run
> ls -al /mnt/huge
> and
> mount
>
ls -al /mnt/huge
total 4
drwxr-xr-x 2 libvirt-qemu kvm 0 Nov 11 15:18 .
drwxr-xr-x 3 root root 4096 May 15 00:09 ..

ubuntu@ubuntu:~/samta/devstack$ mount
/dev/mapper/ubuntu--vg-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs
(rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,relatime,net_cls)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_prio type cgroup (rw,relatime,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,relatime,hugetlb)
/dev/sda1 on /boot type ext2 (rw)
systemd on /sys/fs/cgroup/systemd type cgroup
(rw,noexec,nosuid,nodev,none,name=systemd)
hugetlbfs-kvm on /run/hugepages/kvm type hugetlbfs (rw,mode=775,gid=106)
nodev on /mnt/huge type hugetlbfs (rw,uid=106,gid=106)
nodev on /mnt/huge type hugetlbfs (rw,uid=106,gid=106)

> then check how many hugepages are allocated
>
> cat /proc/meminfo | grep Huge
>

cat /proc/meminfo | grep Huge
AnonHugePages:292864 kB
HugePages_Total:   5
HugePages_Free:5
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:1048576 kB
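[The output above shows five free 1 GB hugepages. A hedged sketch for checking this programmatically — the sample below reproduces the relevant fields from the thread; on a live host pass /proc/meminfo instead:]

```shell
# Parse HugePages_Free and Hugepagesize from a meminfo-format file.
check_hugepages() {
    awk '/^HugePages_Free:/ {free=$2}
         /^Hugepagesize:/   {size=$2}
         END {printf "%d free x %d kB\n", free, size}' "$1"
}

# Sample copied from the output above; use /proc/meminfo on a real system.
cat > /tmp/meminfo.sample <<'EOF'
HugePages_Total:   5
HugePages_Free:    5
Hugepagesize:   1048576 kB
EOF

check_hugepages /tmp/meminfo.sample   # -> 5 free x 1048576 kB
```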

>
> the vswitchd process may also exit if it fails to initialize DPDK
interfaces.
> This can happen if no interface is compatible with the igb-uio or
vfio-pci drivers
> (note in the vfio-pci case all interfaces in the same IOMMU group must be
bound to the vfio-pci driver and
> the IOMMU must be enabled on the kernel command line, with VT-d enabled in
the BIOS).
>
> Can you check which interfaces are bound to the DPDK driver by running
the following command
>
> /opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py --status
>
/opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver



Network devices using kernel driver
===
0000:01:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=p1p1 drv=ixgbe
unused=igb_uio
0000:02:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' if=p4p1 drv=i40e
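[The `--status` output above lists no devices under "Network devices using DPDK-compatible driver", which would explain ovs-vswitchd failing to start. A hedged sketch for extracting the PCI addresses of NICs still on kernel drivers — the sample reproduces the format shown; the PCI addresses are placeholders:]

```shell
# List PCI addresses of NICs still bound to kernel drivers,
# given dpdk_nic_bind.py --status output.
cat > /tmp/nic_status.sample <<'EOF'
Network devices using DPDK-compatible driver
============================================

Network devices using kernel driver
===================================
0000:01:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=p1p1 drv=ixgbe unused=igb_uio
0000:02:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' if=p4p1 drv=i40e
EOF

awk '/using kernel driver/         {in_kernel = 1; next}
     /using DPDK/                  {in_kernel = 0}
     in_kernel && /^[0-9a-fA-F]+:/ {print $1}' /tmp/nic_status.sample
# -> 0000:01:00.0
#    0000:02:00.0
```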

Re: [openstack-dev] [Fuel][Fuel-QA][Fuel-TechDebt] Code Quality: Do Not Hardcode - Fix Things Instead

2015-11-11 Thread Vladimir Kuklin
Matthew

Thanks for your feedback. Could you please elaborate on the statistics of
such tech-debt eliminations? My perception is that such bugs never actually
get fixed, which jeopardizes our bugfixing efforts and makes our statistics
misleading.

So far my suggestion is the following: if you can, please do not introduce
workarounds. If you have to, introduce a TODO/FIXME comment for it in the
code and create a tech-debt bug. If you see something of that kind that is
already there and does not have such a comment, add the TODO/FIXME and
create a tech-debt bug.

So this is a best-effort initiative, but I would encourage core reviewers
to be stricter with such workarounds and hacks - please do not let them
pass through your hands unless there is a really good reason to merge the
code with these hacks right now.
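[The collection step of this policy can be automated. A hedged sketch follows; the marker convention (a bug reference after the TODO/FIXME tag) and the file contents are assumptions for illustration, not an existing Fuel standard:]

```shell
# Collect TODO/FIXME markers in a source tree so tech-debt bugs
# can be filed (or cross-checked) against them.
mkdir -p /tmp/src
cat > /tmp/src/example.py <<'EOF'
def apply_network_config():
    # FIXME(vkuklin): workaround for bug #1234567, remove when fixed
    pass
EOF

grep -rnE 'TODO|FIXME' /tmp/src
# -> /tmp/src/example.py:2: ... FIXME(vkuklin): workaround for bug #1234567 ...
```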

On Wed, Nov 11, 2015 at 1:43 PM, Matthew Mosesohn 
wrote:

> Vladimir,
>
> Bugfixes and minor refactoring often belong in separate commits. Combining
> "extending foo to enable bar in XYZ" with "ensuring logs from service abc
> are sent via syslog" often makes little sense to code reviewers. In this
> case it is a feature enhancement + a bugfix.
>
> Looking at it from one perspective, if the bugfix is made poorly without a
> feature commit, then it looks like the scenario you described. However, it
> has the benefit that it can be cleanly backported. If we simply reverse the
> order of the commits (untangling the workaround), we get the same result,
> but get flamed.
>
> Sometimes both approaches are necessary. I agree that not growing tech
> debt is important, but perceptions really depend on trends over 3+ weeks.
> It's possible that such tech debt bugs are created and solved within 2-3
> days of the workaround. I know that's the exception, but I think we should
> be most concerned about what happens when we carry tech debt across entire
> Fuel releases.
> On Nov 11, 2015 10:28 AM, "Aleksandr Didenko" 
> wrote:
>
>> +1 from me
>>
>> On Tue, Nov 10, 2015 at 6:38 PM, Stanislaw Bogatkin <
>> sbogat...@mirantis.com> wrote:
>>
>>> I think that it is excellent thought.
>>> +1
>>>
>>> On Tue, Nov 10, 2015 at 6:52 PM, Vladimir Kuklin 
>>> wrote:
>>>
 Folks

 I wanted to raise awareness about one of the things I captured while
 doing reviews recently - we are sacrificing quality to bugfixing and
 feature development velocity, essentially moving from one heap to another -
 from bugs/features to 'tech-debt' bugs.

 I understand that we all have deadlines and need to meet them. But,
 folks, let's create the following policy:

 1) do not introduce hacks/workarounds/kludges if at all possible.
 2) when fixing things, if you have a hack/workaround/kludge that you
 need to work with - think of removing it instead of enhancing and extending
 it. If it is possible, fix it. Do not let our technical debt grow.
 3) if there is no way to avoid adding or extending a kludge, and there is
 no way to remove it - please add a 'TODO/FIXME' line above it, so that we
 can collect them in the future and fix them gradually.

 I suggest adding this requirement to the code-review policy.

 What do you think about this?

 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 35bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com 
 www.mirantis.ru
 vkuk...@mirantis.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype 

Re: [openstack-dev] [kuryr] gerrit/git review problem error code 10061

2015-11-11 Thread Znoinski, Waldemar
Hi Baohua
If you have a socks proxy to hand then you could check ‘tsocks’ - 
http://tsocks.sourceforge.net/ or check your distro’s package manager.
It’s quite ‘transparent’ proxying (over SOCKS 4 or 5) of pretty much any
process you run with it, e.g.:

tsocks scp -P29418 yangbao...@review.openstack.org:hooks/commit-msg .git/hooks/commit-msg
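[For reference, tsocks reads its proxy settings from /etc/tsocks.conf. A minimal hedged fragment — the server address, port, and local network below are placeholders for your environment:]

```ini
# Hypothetical /etc/tsocks.conf; replace addresses with your proxy details.
# Networks reachable directly, without the SOCKS proxy:
local = 192.168.0.0/255.255.0.0
# The SOCKS server and protocol version:
server = 10.0.0.1
server_type = 5
server_port = 1080
```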

From: Baohua Yang [mailto:yangbao...@gmail.com]
Sent: Wednesday, November 11, 2015 2:15 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [kuryr] gerrit/git review problem error code 10061

Thanks to all!
It seems to be a network connectivity problem. How sad again!  :(
I will try other ways.

On Tue, Nov 10, 2015 at 12:52 AM, Jeremy Stanley 
> wrote:
On 2015-11-09 10:13:33 +0800 (+0800), Baohua Yang wrote:
> Anyone recently meet such problem after cloning the latest code
> from kuryr? Try proxy also, but not solved.
[...]
> The following command failed with exit code 1
> "scp -P29418 
> yangbao...@review.openstack.org:hooks/commit-msg
> .git\hooks\commit-msg"
> ---
> FATAL: Unable to connect to relay host, errno=10061
> ssh_exchange_identification: Connection closed by remote host
[...]

I've checked our Gerrit SSH API authentication logs from the past 30
days and find no record of any yangbaohua authenticating. Chances
are this is a broken local proxy or some sort of intercepting
firewall which is preventing your 29418/tcp connection from even
reaching review.openstack.org.

If you use Telnet or NetCat to connect to port 29418 on
review.openstack.org directly, do you see an SSH 
banner starting
with a string like "SSH-2.0-GerritCodeReview_2.8.4-19-g4548330
(SSHD-CORE-0.9.0.201311081)" or something else?
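[A hedged sketch of that check. The live nc invocation is left commented out since netcat flags differ between flavours and the host may be unreachable; the banner classification logic is the point:]

```shell
# classify_banner: decide from the first SSH banner line whether the
# path to Gerrit is clean, SSH-but-unexpected-server, or blocked.
classify_banner() {
    case "$1" in
        SSH-2.0-GerritCodeReview*) echo "gerrit" ;;
        SSH-2.0-*)                 echo "other-ssh" ;;
        *)                         echo "blocked-or-intercepted" ;;
    esac
}

# Live check (sketch): grab the banner without authenticating.
# banner=$(printf '' | nc -w 5 review.openstack.org 29418 | head -n 1)
# classify_banner "$banner"

classify_banner "SSH-2.0-GerritCodeReview_2.8.4-19-g4548330"   # -> gerrit
classify_banner "HTTP/1.1 400 Bad Request"                     # -> blocked-or-intercepted
```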
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Best wishes!
Baohua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Linux kernel IPv4 configuration during the neutron installation

2015-11-11 Thread Neil Jerram
On 11/11/15 10:30, JinXing F wrote:
>
> Hi, guys:
>
> during the neutron installation guide, I found that we need to
> configure the Linux kernel as below: 
>
> net.ipv4.ip_forward=1
>
> net.ipv4.conf.all.rp_filter=0
>
> net.ipv4.conf.default.rp_filter=0
>
>
> the first one is the ip address translation between LAN and WLAN,
>

No, that's incorrect.  net.ipv4.ip_forward simply allows Linux to
forward IPv4 packets - i.e. to receive a packet on one network
interface, determine that it is not a packet that should be delivered
locally, and forward it on to its next IP hop.  There is no IP address
translation involved here.

> the second and third command is used for "Reverse Path Filtering".
>
> I can't understand the purpose of this config in Neutron.
>

Do you mean that you don't understand what "Reverse Path Filtering" is? 
Or that you don't understand why Neutron needs RPF to be disabled?

For the former, please see
https://en.wikipedia.org/wiki/Reverse_path_forwarding.
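For reference, the three settings are typically persisted in a sysctl configuration fragment such as the following (the filename is an assumption; any file under /etc/sysctl.d/ works):

```ini
# /etc/sysctl.d/99-neutron.conf (hypothetical filename)

# Allow the node to forward IPv4 packets between interfaces:
net.ipv4.ip_forward=1

# Disable reverse-path filtering so asymmetrically routed traffic
# (common with Neutron routers and floating IPs) is not dropped:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
```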

Regards,
Neil

> 1. If an instance on a compute node connects to an external network, what's
> the function of these three settings?
>
> 2. When instances connect to each other, what's the function of the
> three settings?
>
>
> I am very confused about this config. Please explain it to me. 
>
> Thanks. 
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] ML2 don't call ipam driver remove_subnet function

2015-11-11 Thread thanh le giang
Hi Gary

Thanks for the information, it's very helpful.

Thanh

2015-11-11 14:53 GMT+07:00 Gary Kotton :

> Hi,
> This should be resolved with this path -
> https://review.openstack.org/239885
> Good luck
> Gary
>
> From: thanh le giang 
> Reply-To: OpenStack List 
> Date: Wednesday, November 11, 2015 at 5:20 AM
> To: OpenStack List 
> Subject: [openstack-dev] [Neutron][IPAM] ML2 don't call ipam driver
> remove_subnet function
>
> Dear folks
>
> I have met a problem when implementing an IPAM driver: ML2 doesn't call the
> remove_subnet function of the IPAM driver, because ML2 doesn't use the
> delete_subnet function of NeutronDbPluginV2.
>
> For now I work around this by using the SUBNET BEFORE_DELETE event to notify
> the external IPAM that the subnet will be deleted, but I think it's not a good
> solution. According to my understanding, this event should be used for checking
> whether a subnet is in use or not. I think ML2 should call the IPAM driver in
> this situation, or provide an additional SUBNET AFTER_DELETE event.
>
> Thanks,
> Thanh
>
> Email: legiangt...@gmail.com 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Can we get some sanity in the Neutron logs please?

2015-11-11 Thread Rossella Sblendido

Hello Matt,


On 11/10/2015 07:33 PM, Matt Riedemann wrote:

Let me qualify by saying I'm not a Neutron person.

We know that gate-tempest-dsvm-neutron-full is failing hard as of the
last 24 hours [1].

An error that's been showing up in tempest runs with neutron a lot is:

"AssertionError: 0 == 0 : No IPv4 addresses found in: []"

So checking logstash [2] it's hitting a lot. It's only recent because
that failure message is new to Tempest in the last day or so, but it has
a lot of hits, so whatever it is, it's failing a lot.

So the next step is usually digging into service logs looking for
errors. I check the q-svc logs first. Not many errors but a bazillion
warnings for things not found (networks and devices). [3]

For example:

2015-11-10 17:13:02.542 WARNING neutron.plugins.ml2.rpc
[req-15a73753-1512-4689-9404-9658a0cd0c09 None None] Device
aaa525be-14eb-44a5-beb0-ed722896be93 requested by agent
ovs-agent-devstack-trusty-rax-iad-5785199 not found in database

2015-11-10 17:14:17.754 WARNING neutron.api.rpc.handlers.dhcp_rpc
[req-3d7e9848-6151-4780-907f-43f11a2a8545 None None] Network
b07ad9b2-e63e-4459-879d-3721074704e5 could not be found, it might have
been deleted concurrently.

Are several hundred of these warnings useful to an operator trying to
debug a problem? The point of the CI gate testing is to try and simulate
a production cloud environment. When something goes wrong, you check the
logs. With the amount of warning/error level logging that is in the
neutron logs, finding a real problem is like looking for a needle in a
haystack. Since everything is async, 404s are expected when racing to
delete a resource and they should be handled gracefully.

Anyway, the server log isn't useful so I go digging in the agent logs
and stacktraces there are aplenty. [4]

Particularly this:

"Exception: Port tapcea51630-e1 is not ready, resync needed"

That's due to a new change landing in the last 24 hours [5]. But the
trace shows up over 16K times since it landed [6].

Checking the code, it's basically a loop processing events and when it
hits an event it can't handle, it punts (breaking the loop so you don't
process the other events after it - which is a bug), and the code that
eventually handles it is just catching all Exception and tracing them
out assuming they are really bad.


As you noticed in the review [1], there was a dependent patch that was 
solving this. It was a big and pretty complex change; I tried to split 
it into a few patches. I should have split it in a better way.




At this point, as a non-neutron person, i.e. not well versed in the
operations of neutron or how to debug it in great detail, I assume
something is bad here but I don't really know - and the logs are so full
of noise that I can't distinguish real failures.

I don't mean to pick on this particular change, but it's a good example
of a recent thing.

I'd like to know if this is all known issue or WIP type stuff. I've
complained about excessively noisy neutron logs in channel before and
I'm usually told that they are either necessary (for whatever reason) or
that rather than complain about the verbosity, I should fix the race
that is causing it - which is not likely to happen since I don't have
the async rpc happy nature of everything in neutron in my head to debug
it (I doubt many do).


Yes, in this case it's WIP; the logs were not meant to stay there, and 
they were actually cleaned up by the dependent patch.





Anyway, this is a plea for sanity in the logs. There are logging
guidelines for openstack [7]. Let's please abide by them. Let's keep
operators in mind when we're looking at logs and be proactive about
making them useful (which includes more granular error handling and less
global try/except Exception: LOG.exception constructs).


We are aware of the guidelines and when reviewing code in Neutron we try 
to enforce them. We all want better logs :) so please keep giving 
feedback. Thanks for raising this point.


cheers,

Rossella




[1] http://tinyurl.com/ne3ex4v
[2]
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22AssertionError:%200%20==%200%20:%20No%20IPv4%20addresses%20found%20in:%20%5B%5D%5C%22%20AND%20tags:%5C%22console%5C%22

[3]
http://logs.openstack.org/85/239885/2/gate/gate-tempest-dsvm-neutron-full/602d864/logs/screen-q-svc.txt.gz?level=TRACE

[4]
http://logs.openstack.org/85/239885/2/gate/gate-tempest-dsvm-neutron-full/602d864/logs/screen-q-agt.txt.gz?level=TRACE

[5] https://review.openstack.org/#/c/164880/
[6]
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22Exception:%20Port%5C%22%20AND%20message:%5C%22is%20not%20ready,%20resync%20needed%5C%22%20AND%20tags:%5C%22screen-q-agt.txt%5C%22

[7]
http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [neutron][tap-as-a-service] weekly meeting

2015-11-11 Thread Takashi Yamamoto
On Wed, Nov 11, 2015 at 7:24 PM, Takashi Yamamoto  wrote:
> hi,
>
> i have no idea why the link is broken.

probably the meeting_id given to #startmeeting was wrong?

>
> today's meeting log is here:
> http://eavesdrop.openstack.org/meetings/tap_as_a_service_meeting/2015/tap_as_a_service_meeting.2015-11-11-06.36.html
>
> On Wed, Nov 11, 2015 at 7:08 PM, Neil Jerram  
> wrote:
>> Sounds interesting!  I'd like to look at some past meeting logs (including 
>> from today), but the 'past meetings' link at 
>> http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting does not work for 
>> me.
>>
>> Neil
>>
>> -Original Message-
>> From: Takashi Yamamoto [mailto:yamam...@midokura.com]
>> Sent: 11 November 2015 03:09
>> To: OpenStack Development Mailing List (not for usage questions) 
>> 
>> Subject: [openstack-dev] [neutron][tap-as-a-service] weekly meeting
>>
>> hi,
>>
>> tap-as-a-service meeting will be held weekly, starting today.
>> http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting
>> anyone interested in the project is welcome.
>> sorry for immediate notice.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel][puppet] Detached roles and globals.pp

2015-11-11 Thread Daniel Depaoli
Hi all.
I'm starting to resolve the TODO at these lines [1]. To solve it, I am
thinking of hardcoding the roles in the file, for example:
$swift_proxies = get_nodes_hash_by_roles($network_metadata,
['primary-swift-proxy', 'swift-proxy']) ? {
  true => get_nodes_hash_by_roles($network_metadata,
['primary-swift-proxy', 'swift-proxy']),
  false => get_nodes_hash_by_roles($network_metadata,
['primary-controller', 'controller']),
}


Is this the right way or do you suggest something more clean?

Thanks

[1]
https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/globals/globals.pp#L236:L242

-- 

Daniel Depaoli
CREATE-NET Research Center
Smart Infrastructures Area
Junior Research Engineer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2015-11-11 Thread Andrew Beekhof

> On 11 Nov 2015, at 6:26 PM, bdobre...@mirantis.com wrote:
> 
> Thank you Andrew.
> Answers below.
> >>>
> Sounds interesting, can you give any comment about how it differs to the 
> other[i] upstream agent?
> Am I right that this one is effectively A/P and wont function without some 
> kind of shared storage?
> Any particular reason you went down this path instead of full A/A?
> 
> [i] 
> https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster
> <<<
> It is based on multistate clone notifications. It requires nothing shared
> except the Corosync information base (CIB), where all Pacemaker resources are
> stored anyway. And it is fully A/A.

Oh!  So I should skip the A/P parts before “Auto-configuration of a cluster 
with a Pacemaker”? 
Is the idea that the master mode is for picking a node to bootstrap the cluster?

If so I don’t believe that should be necessary provided you specify 
ordered=true for the clone.
This allows you to assume in the agent that your instance is the only one 
currently changing state (by starting or stopping).
I notice that rabbitmq.com explicitly sets this to false… any particular reason?


Regarding the pcs command to create the resource, you can simplify it to:

pcs resource create --force --master p_rabbitmq-server 
ocf:rabbitmq:rabbitmq-server-ha \
  erlang_cookie=DPMDALGUKEOMPTHWPYKC node_port=5672 \
  op monitor interval=30 timeout=60 \
  op monitor interval=27 role=Master timeout=60 \
  op monitor interval=103 role=Slave timeout=60 OCF_CHECK_LEVEL=30 \
  meta notify=true ordered=false interleave=true master-max=1 master-node-max=1

If you update the stop/start/notify/promote/demote timeouts in the agent’s 
metadata.


Lines 1602, 1565, 1621, 1632, 1657, and 1678 have the notify command returning
an error.
Was this logic tested? Because pacemaker does not currently support/allow 
notify actions to fail.
IIRC pacemaker simply ignores them.

Modifying the resource state in notifications is also highly unusual.
What was the reason for that?

I notice that on node down, this agent makes disconnect_node and 
forget_cluster_node calls.
The other upstream agent does not, do you have any information about the bad 
things that might happen as a result?

Basically I’m looking for what each option does differently/better with a view 
to converging on a single implementation. 
I don’t much care in which location it lives.

I’m CC’ing the other upstream maintainer, it would be good if you guys could 
have a chat :-)

> All running rabbit nodes may process AMQP connections. The master state is
> only an initial point for the cluster, at which other slaves may join it.
> Note, here you can find events flow charts as well [0]
> [0] https://www.rabbitmq.com/pacemaker.html
> Regards,
> Bogdan
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][puppet] Detached roles and globals.pp

2015-11-11 Thread Sergey Vasilenko
On Wed, Nov 11, 2015 at 1:41 PM, Daniel Depaoli <
daniel.depa...@create-net.org> wrote:

> Hi all.
> I'm starting to resolve the todo at these line[1]. To solve this I think
> to hardcoded the role in the file, for example:
>
> $swift_proxies = get_nodes_hash_by_roles($network_metadata,
> ['primary-swift-proxy', 'swift-proxy']) ? {
>   true => get_nodes_hash_by_roles($network_metadata,
> ['primary-swift-proxy', 'swift-proxy']),
>   false => get_nodes_hash_by_roles($network_metadata,
> ['primary-controller', 'controller']),
> }
>
>
> Is this the right way or do you suggest something more clean?
>

The function get_nodes_hash_by_roles returns a hash, so the selector provided
above shouldn't work as a boolean switch. But this general approach is right.
IMHO the list of role names should be constructed beforehand.


/sv
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Add Fuel to OpenStack projects: status update

2015-11-11 Thread Andrew Woodward
I didn't read Jim's note as saying that Fuel integration tests need to run on
OpenStack infra. To me, it comes across as saying that we need to have a long
and complete conversation with infra about what we are doing in Fuel
integration. The deliverables would be: what can be done now in infra, what
infra wants road-mapped, built, and transitioned to infra at a later date,
and what infra does not want at this time or in the near future. As a
follow-up to this meeting I'd guess that some early draft specs and proposed
timelines are in order. In general it seems he wants alignment and approval
from infra on this.

It seems a fair ask at this time. AFAICT Fuel has the most comprehensive
multi-node and HA testing patterns that I'm aware of, and leaving this all in
the fuel-ci system seems anti-big-tent. Not to mention that moving it will
reduce our burden of maintaining it ourselves.

On Wed, Nov 11, 2015, 12:21 PM Davanum Srinivas  wrote:

> Dima,
>
> +1 to "additional scrutiny is there because they want to get this
> right. Lets prove that their trust in us is not misplaced."
>
> Thanks,
> Dims
>
> On Tue, Nov 10, 2015 at 10:10 PM, Dmitry Borodaenko
>  wrote:
> > As you may have guessed, many Fuel developers were holding their breath
> > for the Technical Committee meeting today, where the decision on whether
> > to accept Fuel into Big Tent as an OpenStack project [0] was on the
> > agenda [1].
> >
> > [0] https://review.openstack.org/199232
> > [1]
> http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-11-10-20.02.log.html#l-115
> >
> > Unfortunately, we'll have to hold breath for another week: our proposal
> > was not approved today, and the vote was postponed again. The good news
> > is, most of the TC members present were in favor and have acknowledged
> > that Fuel team has made significant progress in the right direction.
> >
> > The remaining objections are not new and not insurmountable: Jim Blair
> > has pointed out that it's not enough to have _most_ of Fuel repositories
> > covered by PTI compliant gate jobs, it has to be _all_ of them, and that
> > we still have a few gaps. Thierry was willing to let us get away with a
> > commitment that we complete this work by the end of the year, or be
> > removed from the projects if we fail. However, Jim's concerns were
> > seconded by Russel Bryant and Mark McClain who explicitly abstained
> > until, in Russel's words, "the Infra team is happy". Without their votes
> > and with 4 more TC members absent from the meeting, our proposal did not
> > get enough votes to pass.
> >
> > I have documented the specific gaps in the gate jobs in my comment to
> > the governance review linked above. To sum up, what's left to bring Fuel
> > into full compliance with PTI is:
> >
> > 1) Enable the currently non-voting gate jobs for the new repositories
> > extracted from fuel-web last week: fuel-menu, network-checker, shotgun.
> >
> > 2) Fix and enable the failing docs jobs in fuel-astute and fuel-docs.
> >
> > 3) Finish the unit test job for fuel-ostf.
> >
> > 4) Set up Ruby unit tests and syntax checks for fuel-astute and
> > fuel-nailgun-agent.
> >
> > While figuring out some of the job failures here is tricky, I believe we
> > should focus on remaining gaps and close all of them soon. It would be a
> > shame to have come this far and have our proposal rejected because of a
> > missing syntax check or a failure to compile HTML from RST.
> >
> > Jim's request to start work on running the more complex tests
> > (specifically, multi-node deployment tests from fuel-qa) turned out to
> > be more controversial, both because it is a new requirement that was
> > explicitly excluded during the previous round of discussions in July,
> > and because it's hard to objectively assess how much work, short of
> > complete implementation and full conversion, would be enough to prove
> > that there is a sufficient collaboration between Fuel and Infrastructure
> > teams.
> >
> > We had a good opening discussion on #openstack-dev about this after the
> > TC meeting [2]. Aleksandra Fedorova has mentioned that she actually
> > proposed a talk for Tokyo about exactly this topic (which was
> > unfortunately rejected), and promised to kick off a thread on
> > openstack-dev ML based on the research she has done so far. It's a
> > worthwhile long-term goal, I completely understand Infra team's desire
> > to make sure Fuel project can pull its own weight on OpenStack Infra,
> > and I will support efforts by Aleksandra and other Fuel Infra engineers
> > to fully align our CI with OpenStack Infra.
> >
> > [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2015-11-10.log.html#t2015-11-10T21:03:34
> >
> > Still, I believe that making this a hard requirement for Fuel's
> > acceptance into Big Tent would be one step too far down a slippery slope
> > into a whole new vat of worms. Objective inclusion criteria such as
> > Project Requirements and Project 

Re: [openstack-dev] [Fuel][Fuel-QA][Fuel-TechDebt] Code Quality: Do Not Hardcode - Fix Things Instead

2015-11-11 Thread Matthew Mosesohn
Vladimir,

Bugfixes and minor refactoring often belong in separate commits. Combining
"extending foo to enable bar in XYZ" with "ensuring logs from service abc
are sent via syslog" often makes little sense to code reviewers. In this
case it is a feature enhancement + a bugfix.

Looking at it from one perspective, if the bugfix is made poorly without a
feature commit, then it looks like the scenario you described. However, it
has the benefit that it can be cleanly backported. If we simply reverse the
order of the commits (untangling the workaround), we get the same result,
but get flamed.

Sometimes both approaches are necessary. I agree that not growing tech debt
is important, but perceptions really depend on trends over 3+ weeks. It's
possible that such tech debt bugs are created and solved within 2-3 days of
the workaround. I know that's the exception, but I think we should be most
concerned about what happens when we carry tech debt across entire Fuel
releases.
On Nov 11, 2015 10:28 AM, "Aleksandr Didenko"  wrote:

> +1 from me
>
> On Tue, Nov 10, 2015 at 6:38 PM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> I think that it is excellent thought.
>> +1
>>
>> On Tue, Nov 10, 2015 at 6:52 PM, Vladimir Kuklin 
>> wrote:
>>
>>> Folks
>>>
>>> I wanted to raise awareness about one of the things I captured while
>>> doing reviews recently - we are sacrificing quality to bugfixing and
>>> feature development velocity, essentially moving from one heap to another -
>>> from bugs/features to 'tech-debt' bugs.
>>>
>>> I understand that we all have deadlines and need to meet them. But,
>>> folks, let's create the following policy:
>>>
>>> 1) do not introduce hacks/workarounds/kludges if it is possible.
>>> 2) while fixing things if you have a hack/workaround/kludge that you
>>> need to work with - think of removing it instead of enhancing and extending
>>> it. If it is possible - fix it. Do not let our technical debt grow.
>>> 3) if there is no way to avoid kludge addition/enhancing, if there is no
>>> way to remove it - please, add a 'TODO/FIXME' line above it, so that we can
>>> collect them in the future and fix them gradually.
>>>
>>> I suggest to add this requirement into code-review policy.
>>>
>>> What do you think about this?
>>>
>>> --
>>> Yours Faithfully,
>>> Vladimir Kuklin,
>>> Fuel Library Tech Lead,
>>> Mirantis, Inc.
>>> +7 (495) 640-49-04
>>> +7 (926) 702-39-68
>>> Skype kuklinvv
>>> 35bk3, Vorontsovskaya Str.
>>> Moscow, Russia,
>>> www.mirantis.com 
>>> www.mirantis.ru
>>> vkuk...@mirantis.com
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] gerrit/git review problem error code 10061

2015-11-11 Thread ZZelle
Hi,

An alternative solution is to use https instead of ssh to interact with
gerrit:

  # It requires a "recent" gitreview
  git config gitreview.scheme https
  git config gitreview.port 443
  git review -s

It requires defining an HTTP password in Gerrit (Settings > HTTP Password).


Cédric/ZZelle@IRC

On Wed, Nov 11, 2015 at 10:03 AM, Znoinski, Waldemar <
waldemar.znoin...@intel.com> wrote:

> Hi Baohua
>
> If you have a socks proxy to hand then you could check ‘tsocks’ -
> http://tsocks.sourceforge.net/ or check your distro’s package manager.
>
> It’s quite ‘transparent’ proxying (over socks 4 or 5) of pretty much any
> process you run with it, i.e.:
>
>
>
> tsocks scp -P29418 yangbao...@review.openstack.org:hooks/commit-msg
> > .git\hooks\commit-msg"
>
>
>
> *From:* Baohua Yang [mailto:yangbao...@gmail.com]
> *Sent:* Wednesday, November 11, 2015 2:15 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [kuryr] gerrit/git review problem error
> code 10061
>
>
>
> Thanks to all!
>
> It seems to be a network connectivity problem. How sad again!  :(
>
> i will try other ways.
>
>
>
> On Tue, Nov 10, 2015 at 12:52 AM, Jeremy Stanley 
> wrote:
>
> On 2015-11-09 10:13:33 +0800 (+0800), Baohua Yang wrote:
> > Anyone recently meet such problem after cloning the latest code
> > from kuryr? Try proxy also, but not solved.
> [...]
> > The following command failed with exit code 1
> > "scp -P29418 yangbao...@review.openstack.org:hooks/commit-msg
> > .git\hooks\commit-msg"
> > ---
> > FATAL: Unable to connect to relay host, errno=10061
> > ssh_exchange_identification: Connection closed by remote host
> [...]
>
> I've checked our Gerrit SSH API authentication logs from the past 30
> days and find no record of any yangbaohua authenticating. Chances
> are this is a broken local proxy or some sort of intercepting
> firewall which is preventing your 29418/tcp connection from even
> reaching review.openstack.org.
>
> If you use Telnet or NetCat to connect to port 29418 on
> review.openstack.org directly, do you see an SSH banner starting
> with a string like "SSH-2.0-GerritCodeReview_2.8.4-19-g4548330
> (SSHD-CORE-0.9.0.201311081)" or something else?
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Best wishes!
> Baohua
>
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] State wrapping in the MessageHandlingServer

2015-11-11 Thread Matthew Booth
On Tue, Nov 10, 2015 at 6:46 PM, Joshua Harlow 
wrote:

> Matthew Booth wrote:
>
>> My patch to MessageHandlingServer is currently being reverted because it
>> broke Nova tests:
>>
>> https://review.openstack.org/#/c/235347/
>>
>> Specifically it causes a number of tests to take a very long time to
>> execute, which ultimately results in the total build time limit being
>> exceeded. This is very easy to replicate. The
>> test
>> nova.tests.functional.test_server_group.ServerGroupTest.test_boot_servers_with_affinity
>> is an example test which will always hit this issue. The problem is
>> that ServerGroupTest.setUp() does:
>>
>>  self.compute2 = self.start_service('compute', host='host2')
>>  self.addCleanup(self.compute2.kill)
>>
>> The problem with this is that start_service() adds a fixture which also
>> adds kill as a cleanup method. kill does stop(), wait(). This means that
>> the resulting call order is: start, stop, wait, stop, wait. The
>> redundant call to kill is obviously a wart, but I feel we should have
>> handled it anyway.
>>
>> The problem is that we decided it should be possible to restart a
>> server. There are some unit tests in oslo.messaging that do this. It's
>> not clear to me that there are any projects which do this, but after
>> this experience I feel like it would be good to check before changing it
>> :)
>>
>> The implication of that is that after wait() the state wraps, and we're
>> now waiting on start() again. Consequently, the second cleanup call
>> hangs.
>>
>> We could fix Nova (at least the usage we have seen) by removing the
>> wrapping. After wait() if you want to start a server again you need to
>> create a new one.
>>
>> So, to be specific, lets consider the following 2 call sequences:
>>
>> 1. start stop wait stop wait
>> 2. start stop wait start stop wait
>>
>> What should they do? The behaviours with and without wrapping are:
>>
>> 1. start stop wait stop wait
>> WRAP: start stop wait HANG HANG
>> NO WRAP: start stop wait NO-OP NO-OP
>>
>> 2. start stop wait start stop wait
>> WRAP: start stop wait start stop wait
>> NO WRAP: start stop wait NO-OP NO-OP NO-OP
>>
>> I'll refresh my memory on what they did before my change in the morning.
>> Perhaps it might be simpler to codify the current behaviour, but iirc I
>> proposed this because it was previously undefined due to races.
>>
>
> I personally prefer not allowing restarting, its needless code complexity
> imho and a feature that people imho probably aren't using anyway (just
> create a new server object if u are doing this), so I'd be fine with doing
> the above NO WRAP and turning those into NO-OPs (and for example raising a
> runtime error in the case of start stop wait start ... to denote that
> restarting isn't recommended/possible). If we have a strong enough reason
> to really to start stop wait start ...
>
> I might be convinced the code complexity is worth it but for now I'm not
> convinced...
>

I agree, and in the hopefully unlikely event that we did break anybody, at
least they would get an obvious exception rather than a hang. A lesson from
breaking nova was that the log messages were generated and were available
in the failed test runs, but nobody noticed them.

Incidentally, I think I'd also merge my second patch into the first before
resubmitting, which adds timeouts and the option not to log.

Matt
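For illustration, the NO WRAP semantics discussed in this thread can be sketched as a small state machine. This is a hypothetical model for discussion only — the class and state names are made up, and it is not the real oslo.messaging MessageHandlingServer:

```python
# Hypothetical sketch of the proposed NO WRAP semantics: once wait()
# has completed, further start()/stop()/wait() calls become no-ops,
# so a redundant cleanup (stop, wait, stop, wait) cannot hang.
# This models the mailing list discussion only; it is not the real
# oslo.messaging implementation.

class Server(object):
    NEW, STARTED, STOPPED, COMPLETE = range(4)

    def __init__(self):
        self._state = self.NEW

    def start(self):
        if self._state == self.COMPLETE:
            return  # NO-OP: a finished server is not restartable
        self._state = self.STARTED

    def stop(self):
        if self._state in (self.NEW, self.COMPLETE):
            return  # NO-OP
        self._state = self.STOPPED

    def wait(self):
        if self._state == self.COMPLETE:
            return  # NO-OP: the state does not wrap back to start()
        self._state = self.COMPLETE


# Sequence 1 from the mail: start stop wait stop wait
s = Server()
s.start(); s.stop(); s.wait()
s.stop(); s.wait()  # the duplicate kill() is harmless: NO-OP, no hang
assert s._state == Server.COMPLETE

# Sequence 2: start stop wait start stop wait
s = Server()
s.start(); s.stop(); s.wait()
s.start(); s.stop(); s.wait()  # all three are NO-OPs under NO WRAP
assert s._state == Server.COMPLETE
```

Under the WRAP behaviour, by contrast, the second wait() in sequence 1 would block on a start() that never comes, which matches the Nova test hang described above.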
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Linux kernel IPv4 configuration during the neutron installation

2015-11-11 Thread JinXing F
Hi, guys:

In the neutron installation guide, I found that we need to configure
the Linux kernel as below:

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0


The first one enables IP forwarding (routing between networks); the
second and third disable "Reverse Path Filtering".
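For reference, the three settings can be kept together as one fragment. The sketch below only renders a sysctl.conf-style snippet; the comments give the commonly cited rationale and are not text from the install guide:

```python
# Sketch only: the three sysctl settings from the guide, with the
# commonly cited rationale as comments. This just renders a
# sysctl.conf-style fragment; actually applying it (e.g. sysctl -p)
# requires root on the node.

settings = {
    # Let the node forward packets between its interfaces, which is
    # needed for routing/NAT done on behalf of instances.
    "net.ipv4.ip_forward": 1,
    # Disable reverse-path filtering: Neutron traffic can follow
    # asymmetric paths (a reply can arrive on a different interface),
    # and rp_filter=1 would silently drop such packets.
    "net.ipv4.conf.all.rp_filter": 0,
    "net.ipv4.conf.default.rp_filter": 0,
}

fragment = "\n".join("%s=%s" % (k, v) for k, v in settings.items())
print(fragment)
```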

I can't understand the purpose of this config in Neutron.

1. If an instance on a compute node connects to an external network, what is
the function of these three settings?

2. When instances connect to each other, what is the function of these three
settings?


I am very confused about this config. Please explain it to me.

Thanks.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Sripriya Seetharam to Tacker core

2015-11-11 Thread Bharath Thiruveedula
+1

On Tue, Nov 10, 2015 at 10:42 PM, Stephen Wong 
wrote:

> +1
>
> On Mon, Nov 9, 2015 at 6:22 PM, Sridhar Ramaswamy 
> wrote:
>
>> I'd like to propose Sripriya Seetharam to join the Tacker core team.
>> Sripriya
>> ramped up quickly in early Liberty cycle and had become an expert in the
>> Tacker
>> code base. Her major contributions include landing MANO API blueprint,
>> introducing unit test framework along with the initial unit-tests and
>> tirelessly
>> squashing hard to resolve bugs (including chasing the recent nova-neutron
>> goose
>> hunt). Her reviews are solid fine tooth comb and constructive [1].
>>
>> I'm glad to welcome Sripriya to the core team. Current cores members,
>> please vote
>> with your +1 / -1.
>>
>> [1]
>> http://stackalytics.com/?release=liberty&user_id=sseetha&project_type=openstack-others
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tap-as-a-service] weekly meeting

2015-11-11 Thread Neil Jerram
Sounds interesting!  I'd like to look at some past meeting logs (including from 
today), but the 'past meetings' link at 
http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting does not work for me.

Neil

-Original Message-
From: Takashi Yamamoto [mailto:yamam...@midokura.com] 
Sent: 11 November 2015 03:09
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [neutron][tap-as-a-service] weekly meeting

hi,

tap-as-a-service meeting will be held weekly, starting today.
http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting
anyone interested in the project is welcome.
sorry for immediate notice.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-11 Thread Stanislaw Bogatkin
Dmitry, I propose to give the needed Linux capabilities
(like CAP_NET_BIND_SERVICE) to the processes (services) which need them and
then start these processes as a non-privileged user. It will give you the
ability to run each process without 'sudo' at all, with fine-grained
permissions.

On Tue, Nov 10, 2015 at 11:06 PM, Dmitry Nikishov 
wrote:

> Stanislaw,
>
> I've been experimenting with 'capsh' on the 6.1 master node and it doesn't
> seem to preserve any capabilities when setting SECURE_NOROOT bit, even if
> explicitely told to do so (via either --keep=1 or "SECURE_KEEP_CAPS" bit).
>
> On Tue, Nov 10, 2015 at 11:20 AM, Dmitry Nikishov 
> wrote:
>
>> Bartolomiej, Adam,
>> Stanislaw is correct. And this is going to be ported to master. The goal
>> currently is to reach an agreement on the implementation so that there's
>> going to be a some kinf of compatibility during upgrades.
>>
>> Stanislaw,
>> Do I understand correctly that you propose using something like sucap to
>> launch from root, switch to a different user and then drop capabilities
>> which are not required?
>>
>> On Tue, Nov 10, 2015 at 3:11 AM, Stanislaw Bogatkin <
>> sbogat...@mirantis.com> wrote:
>>
>>> Bartolomiej, it's customer-related patches, they, I think, have to be
>>> done for 6.1 prior to 8+ release.
>>>
>>> Dmitry, it's nice to hear about it. Did you consider to use linux
>>> capabilities on fuel-related processes instead of just using non-extended
>>> POSIX privileged/non-privileged permission checks?
>>>
>>> On Tue, Nov 10, 2015 at 10:11 AM, Bartlomiej Piotrowski <
>>> bpiotrow...@mirantis.com> wrote:
>>>
 We don't develop features for already released versions… It should be
 done for master instead.

 BP

 On Tue, Nov 10, 2015 at 7:02 AM, Adam Heczko 
 wrote:

> Dmitry,
> +1
>
> Do you plan to port your patchset to future Fuel releases?
>
> A.
>
> On Tue, Nov 10, 2015 at 12:14 AM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Hey guys.
>>
>> I've been working on making Fuel not to rely on superuser privileges
>> at least for day-to-day operations. These include:
>> a) running Fuel services (nailgun, astute etc)
>> b) user operations (create env, deploy, update, log in)
>>
>> The reason for this is that many security policies simply do not
>> allow root access (especially remote) to servers/environments.
>>
>> This feature/enhancement means that anything that currently is being
>> run under root, will be evaluated and, if possible, put under a
>> non-privileged
>> user. This also means that remote root access will be disabled.
>> Instead, users will have to log in with "fueladmin" user.
>>
>> Together with Omar  we've put together a blueprint[0] and
>> a
>> spec[1] for this feature. I've been developing this for Fuel 6.1, so
>> there
>> are two patches into fuel-main[2] and fuel-library[3] that can give
>> you an
>> impression of current approach.
>>
>> These patches do following:
>> - Add fuel-admin-user package, which creates 'fueladmin'
>> - Make all other fuel-* packages depend on fuel-admin-user
>> - Put supervisord under 'fueladmin' user.
>>
>> Please review the spec/patches and let's have a discussion on the
>> approach to
>> this feature.
>>
>> Thank you.
>>
>> [0] https://blueprints.launchpad.net/fuel/+spec/fuel-nonsuperuser
>> [1] https://review.openstack.org/243340
>> [2] https://review.openstack.org/243337
>> [3] https://review.openstack.org/243313
>>
>> --
>> Dmitry Nikishov,
>> Deployment Engineer,
>> Mirantis, Inc.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not 

Re: [openstack-dev] [Fuel][fuel] How can I install Redhat-OSP using Fuel

2015-11-11 Thread Steven Hardy
On Tue, Nov 10, 2015 at 02:15:02AM +, Fei LU wrote:
>Greeting Fuel teams,
>My company is working on the installation of virtualization
>infrastructure, and we have noticed Fuel is a great tool, much better than
>our own installer. The question is that Mirantis is currently supporting
>OpenStack on CentOS and Ubuntu, while my company is using Redhat-OSP.
>I have read all the Fuel documents, including fuel dev doc, but I haven't
>found the solution how can I add my own release into Fuel. Or maybe I'm
>missing something.
>So, would you guys please give some guide or hints?

I'm guessing you already know this, but just in case - the
install/management tool for recent versions of RHEL-OSP is OSP director,
which is based directly on another OpenStack deployment project, TripleO.

So, it's only fair to point out that you may have a much easier time
participating in the TripleO community if your aim is primarily to support
deploying RHEL-OSP or RDO distributions.

http://docs.openstack.org/developer/tripleo-docs/

There are various pros/cons and differences between the TripleO and Fuel
tooling, but I hope that over time we can work towards less duplication and
more reuse between the two efforts.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc] How could an L2 agent extension access agent methods ?

2015-11-11 Thread Paul Carver

On 11/9/2015 9:59 PM, Vikram Choudhary wrote:

Hi Cathy,

Could you please check on this. My mother passed away yesterday and I
will be on leave for couple of weeks.


I'm very sorry to hear that. Please take all the time you need.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tap-as-a-service] weekly meeting

2015-11-11 Thread Vikram Hosakote (vhosakot)
Hi,

TAAS looks great for traffic monitoring.

Some questions about TAAS.

1) Can TAAS be used for provider networks as well, or just for tenant
networks?

2) Will there be any performance impact if every neutron port and every
packet is mirrored/duplicated?

3) How is TAAS better than non-mirroring approaches like packet sniffing
(wireshark/tcpdump) and tracking interface counters/metrics?

4) Is TAAS a legal/lawful way to intercept/duplicate customer traffic in a
production cloud? Or is TAAS used just for debugging/troubleshooting?

I was not able to find answers for these questions in
https://etherpad.openstack.org/p/mitaka-neutron-unplugged-track.

Thanks!


Regards,
Vikram Hosakote
vhosa...@cisco.com
Software Engineer
Cloud and Virtualization Group (CVG)
Cisco Systems
Boxborough MA USA

From: Takashi Yamamoto >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, November 10, 2015 at 10:08 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [neutron][tap-as-a-service] weekly meeting

hi,

tap-as-a-service meeting will be held weekly, starting today.
http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting
anyone interested in the project is welcome.
sorry for immediate notice.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, per-user projects, and Federation

2015-11-11 Thread Zane Bitter

On 10/11/15 11:32, Adam Young wrote:

On 11/10/2015 10:28 AM, Renat Akhmerov wrote:




On 09 Nov 2015, at 20:43, Adam Young  wrote:

On 11/06/2015 06:28 PM, Tim Hinrichs wrote:

Congress allows users to write a policy that executes an action
under certain conditions.

The conditions can be based on any data Congress has access to,
which includes nova servers, neutron networks, cinder storage,
keystone users, etc.  We also have some Ceilometer statistics; I'm
not sure about whether it's easy to get the Keystone notifications
that you're talking about today, but notifications are on our
roadmap.  If the user's login is reflected in the Keystone API, we
may already be getting that event.

The action could in theory be a mistral/heat API or an arbitrary
script.  Right now we're set up to invoke any method on any of the
python-clients we've integrated with.  We've got an integration with
heat but not mistral.  New integrations are typically easy.

Sounds like Mistral and Congress are competing here, then.  Maybe we
should merge those efforts.

I may be wrong on this but the difference is that Mistral provides
workflow. Meaning you can have a graph of tasks related by conditional
logic whereas Congress action is something simple like calling a
function. Correct me if my understanding is wrong. I actually don’t
know at this point whether a workflow is really needed, IMO it does
make sense if we need to create a bunch of heavy resources so it
should be an HA service managing the process of configuring/creating
the new tenant. The power of workflow is in automating long-running
stuff.

But both technologies are missing notifications part now.


This does not need to be super complicated; we need a listener that can
kick off workflows.  If congress is that listener, super.


Listening to what?

If it's the RabbitMQ bus then there has been a lot of talk about having 
some service listen to that and emit notifications to actual end-users. 
Ceilometer has been mentioned in the context of these discussions. I 
would love to see this happen. I know we discussed it in the Horizon 
track at Summit, and Heat also has use cases for this.


At the user level, there are many, many use cases that could benefit 
from being able to trigger a Mistral workflow from a Zaqar message 
(Renat and I have discussed this on a number of occasions), and this is 
possibly another.


The two are not mutually exclusive, either - potentially Ceilometer 
could be listening on the RabbitMQ bus and emitting notifications to 
Zaqar that could be picked up by Mistral. It sounds like that's not 
necessary though; Keystone could probably emit the notification wherever 
it likes.



I would think that it would then be

1. Keystone sends "new federated user" notification out on the message bus.
2. Congress Picks up the message and checks policy to see what should be
done.
3. Congress calls Heat to fire off template for autoprovisioning user.


This is the right solution if:
- The user template is somewhat complex
- Congress always needs control over the policy
- All we ever need to do is encapsulated in the Heat template


It could also be:

1. Keystone sends "new federated user" notification out on the message bus.
2. Murano Picks up the message and checks policy to see what should be
done.
3. Murano calls Heat to fire off template for autoprovisioning user.


Assuming s/Murano/Mistral/, this is the right solution if:

- The user template is somewhat complex
- We want it to also work without Congress in the loop

Another possibility:

1. Keystone sends "new federated user" notification out on the message bus.
2. Mistral picks up the message and checks policy to see what should be 
 done.

3. Mistral creates the autoprovisioning user directly.

This is the right solution if:
- Creating the user is no harder than creating a Heat stack
- We want it to also work without Congress in the loop


You are suggesting it should be:

1. Keystone sends "new federated user" notification out on the message bus.
2. Congress Picks up the message and checks policy to see what should be
done.
3. Congress calls Murano to fire off template for autoprovisioning user.


This is the right solution if:

- Creating the user is no harder than creating a Heat stack
- Congress always needs control over the policy
- The action may need to be customised in ways not encapsulated in a 
Heat template



And, the most complex solution:

1. Keystone sends "new federated user" notification out on the message bus.
2. Congress Picks up the message and checks policy to see what should be
done.
3. Congress calls Murano to fire off template for autoprovisioning user.
4. Murano calls Heat to fire off template for autoprovisioning user.


This is the right solution if:
- The user template is somewhat complex
- Congress always needs control over the policy
- The action may need to be customised in ways not encapsulated in a 
Heat template
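All of the variants above share the same skeleton: a listener receives the Keystone notification, a policy check decides what to do, and a provisioning action fires. A minimal sketch of that skeleton, with every class and event name invented for illustration — none of these are real Congress, Mistral or Heat APIs:

```python
# Hypothetical skeleton shared by all the options above: receive a
# "new federated user" event, consult policy, trigger provisioning.
# All names here are made up for illustration.

class PolicyEngine(object):
    """Stand-in for the policy step (e.g. Congress)."""
    def decide(self, event):
        if event.get("type") == "identity.user.federated.created":
            return "autoprovision"
        return None


class Provisioner(object):
    """Stand-in for the action step (e.g. a Mistral workflow or Heat stack)."""
    def __init__(self):
        self.provisioned = []

    def autoprovision(self, user_id):
        # In reality: launch a workflow or create a stack for the user.
        self.provisioned.append(user_id)


def on_notification(event, policy, provisioner):
    """The listener: called for each message taken off the bus."""
    if policy.decide(event) == "autoprovision":
        provisioner.autoprovision(event["user_id"])


policy, prov = PolicyEngine(), Provisioner()
on_notification({"type": "identity.user.federated.created",
                 "user_id": "alice"}, policy, prov)
on_notification({"type": "identity.user.deleted",
                 "user_id": "bob"}, policy, prov)
assert prov.provisioned == ["alice"]
```

The options differ only in which component plays which role and in how heavyweight the provisioning step is.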




Personally, I would prefer:

1. 

Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-11 Thread Murray, Paul (HP Cloud)
> Unfortunately, you're trying to work around misuse of the cloud API's, not
> missing features from them. Don't use those volume types, and don't build
> systems that rely on single ports and interfaces. IMO rebuild is a misguided
> concept (something that took me a long time to realize). 

Slightly tangential to this discussion but specifically to your comment about
rebuild: HP (and I believe RAX) kept asking OpenStack infra to use rebuild in
nodepool instead of constantly deleting and creating instances. The reason
being it would give them a significant performance advantage and dramatically
improve the use of resources in the operator's cloud. Nodepool would have
gained vastly better resource utilization. The fact infra did not do that (I
believe) is partly because it was difficult to refactor nodepool for the
purpose (after all, it works and there's other things to do) and partly
because the resources were free. In a pay-per-use scenario the decision would
have been different.

This is off topic because it's not about cinder volumes. But I guess there are
two takeaways:

On one hand, in the end they didn't really need rebuild - node pool works 
without it.

On the other, it would have reduced their costs (if paying) and made better
use of resources.

I'll leave it to the reader to decide which of these matter to you.

> It requires one
> service (Nova) to take all of the responsibility for cross-service 
> reinitialization
> and that doesn't happen so you get weird disconnects like this one. Heat
> would be a better choice, as you can simply deploy a new template, which
> will replace the old instance in a relatively safe way including detaching and
> re-attaching volumes and any VIP's.
> 
> So, to be clear:
> 
> *DO* build systems that store data on volumes that are _not_ the root disk.
> Boot a new instance, initialize the configuration using your tools, and then
> move the volume attachment from the old to the new.
> 
> *DO* build systems that use DNS or a VIP to communicate so that new ports
> can be allocated and attached to the new instance while the old one is still
> active.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Do we need to have a mid-cycle?

2015-11-11 Thread Ruby Loo
On 10 November 2015 at 12:08, Dmitry Tantsur  wrote:

> On 11/10/2015 05:45 PM, Lucas Alvares Gomes wrote:
>
>> Hi,
>>
>> In the last Ironic meeting [1] we started a discussion about whether
>> we need to have a mid-cycle meeting for the Mitaka cycle or not. Some
>> ideas about the format of the midcycle were presented in that
>> conversation and this email is just a follow up on that conversation.
>>
>> The ideas presented were:
>>
>> 1. Normal mid-cycle
>>
>> Same format as the previous ones, the meetup will happen in a specific
>> venue somewhere in the world.
>>
>
> I would really want to see you all as often as possible. However, I don't
> see much value in proper face-to-face mid-cycles as compared to improving
> our day-to-day online communications.


+2.

My take on mid-cycles is that if folks want to have one, that is fine, I
might not attend :)

My preference is 4) no mid-cycle -- and try to work more effectively with
people in different locations and time zones.

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-11 Thread Chris Friesen

On 11/11/2015 11:01 AM, Murray, Paul (HP Cloud) wrote:

Unfortunately, you're trying to work around misuse of the cloud API's, not
missing features from them. Don't use those volume types, and don't build
systems that rely on single ports and interfaces. IMO rebuild is a misguided
concept (something that took me a long time to realize).


Slightly tangential to this discussion but specifically to your comment about
rebuild: HP (and I believe RAX) kept asking OpenStack infra to use rebuild in
nodepool instead of constantly deleting and creating instances.
The reason was that it would give them a significant performance advantage
and dramatically improve the use of resources in the operator's cloud.
Nodepool would have gained vastly better resource utilization.


I didn't think that the overhead of deleting/creating an instance was *that* 
much different than rebuilding an instance.


Do you have any information about where the "significant performance advantage" 
was coming from?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, per-user projects, and Federation

2015-11-11 Thread Fox, Kevin M
I'm in the middle of deploying a keystone cluster to support multiple regions. 
We were told we wouldn't need rabbit for this case.

Just something to keep in mind. Keystone's kind of unique in that it can be 
shared between regions.

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Wednesday, November 11, 2015 9:43 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, 
per-user projects, and Federation

On 10/11/15 11:32, Adam Young wrote:
> On 11/10/2015 10:28 AM, Renat Akhmerov wrote:
>>
>>
>>> On 09 Nov 2015, at 20:43, Adam Young  wrote:
>>>
>>> On 11/06/2015 06:28 PM, Tim Hinrichs wrote:
 Congress allows users to write a policy that executes an action
 under certain conditions.

 The conditions can be based on any data Congress has access to,
 which includes nova servers, neutron networks, cinder storage,
 keystone users, etc.  We also have some Ceilometer statistics; I'm
 not sure about whether it's easy to get the Keystone notifications
 that you're talking about today, but notifications are on our
 roadmap.  If the user's login is reflected in the Keystone API, we
 may already be getting that event.

 The action could in theory be a mistral/heat API or an arbitrary
 script.  Right now we're set up to invoke any method on any of the
 python-clients we've integrated with.  We've got an integration with
 heat but not mistral.  New integrations are typically easy.
>>> Sounds like Mistral and Congress are competing here, then.  Maybe we
>>> should merge those efforts.
>> I may be wrong on this but the difference is that Mistral provides
>> workflow. Meaning you can have a graph of tasks related by conditional
>> logic whereas Congress action is something simple like calling a
>> function. Correct me if my understanding is wrong. I actually don’t
>> know at this point whether a workflow is really needed, IMO it does
>> make sense if we need to create a bunch of heavy resources so it
>> should be an HA service managing the process of configuring/creating
>> the new tenant. The power of workflow is in automating long-running
>> stuff.
>>
>> But both technologies are missing notifications part now.
>
> This does not need to be super complicated; we need a listener that can
> kick off workflows.  If congress is that listener, super.

Listening to what?

If it's the RabbitMQ bus then there has been a lot of talk about having
some service listen to that and emit notifications to actual end-users.
Ceilometer has been mentioned in the context of these discussions. I
would love to see this happen. I know we discussed it in the Horizon
track at Summit, and Heat also has use cases for this.

At the user level, there are many, many use cases that could benefit
from being able to trigger a Mistral workflow from a Zaqar message
(Renat and I have discussed this on a number of occasions), and this is
possibly another.

The two are not mutually exclusive, either - potentially Ceilometer
could be listening on the RabbitMQ bus and emitting notifications to
Zaqar that could be picked up by Mistral. It sounds like that's not
necessary though; Keystone could probably emit the notification wherever
it likes.

> I would think that it would then be
>
> 1. Keystone sends "new federated user" notification out on the message bus.
> 2. Congress Picks up the message and checks policy to see what should be
> done.
> 3. Congress calls Heat to fire off template for autoprovisioning user.

This is the right solution if:
- The user template is somewhat complex
- Congress always needs control over the policy
- All we ever need to do is encapsulated in the Heat template

> It could also be:
>
> 1. Keystone sends "new federated user" notification out on the message bus.
> 2. Murano Picks up the message and checks policy to see what should be
> done.
> 3. Murano calls Heat to fire off template for autoprovisioning user.

Assuming s/Murano/Mistral/, this is the right solution if:

- The user template is somewhat complex
- We want it to also work without Congress in the loop

Another possibility:

1. Keystone sends "new federated user" notification out on the message bus.
2. Mistral picks up the message and checks policy to see what should be
  done.
3. Mistral creates the autoprovisioning user directly.

This is the right solution if:
- Creating the user is no harder than creating a Heat stack
- We want it to also work without Congress in the loop

> You are suggesting it should be:
>
> 1. Keystone sends "new federated user" notification out on the message bus.
> 2. Congress Picks up the message and checks policy to see what should be
> done.
> 3. Congress calls Murano to fire off template for autoprovisioning user.

This is the right solution if:

- Creating the user is no harder than creating a Heat stack
- Congress always needs control over the policy

Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, per-user projects, and Federation

2015-11-11 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2015-11-11 09:43:43 -0800:
> 1. Keystone (or some Rabbit->Zaqar proxy service reading notifications 
> from Keystone) sends "new federated user" notification out via Zaqar.
> 2. Mistral picks up the message and checks policy to see what should be 
> done.
> 3. Mistral calls either Heat or Keystone to autoprovision user.
> 

Zane I like most of what you said here, and agree with nearly all of it.
I actually started typing a question asking why Zaqar, but I think I
understand, and you can correct me if I'm wrong.

There's a notification bus. It is generally accessible to all of the
things run by the operator if the operator wants it to be. Zaqar is for
communication toward the user, whether from user hosted apps or operator
hosted services. The thing we're discussing seems entirely operator
hosted, to operator hosted. Which to me, at first, meant we should just
teach Mistral to listen to Keystone notifications and to run workflows
using trusts acquired similarly to the way Heat acquires them.

However, it just occurred to me that if we teach Mistral to read messages
in a Zaqar queue belonging to a user, then there's no weirdness around
user authentication and admin powers. Messages in a user's queue are
entirely acted on using trusts for that user.

That said, I think this is overly abstracted. I'd rather just see
operator hosted services listen to the notification bus and react to the
notifications they care about. You have to teach Mistral about trusts
either way so it can do things as a user, and having the notification
go an extra step:

Keystone->[notifications]->Zaqar->Mistral

vs.

Keystone->[notifications]->Mistral

Doesn't make a ton of sense to me.
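For what it's worth, the direct Keystone->[notifications]->Mistral path could be prototyped as a small notification endpoint. This is only a sketch: the event type, payload keys, and workflow name below are illustrative assumptions, not real Keystone or Mistral identifiers, and in a real deployment the endpoint would be registered with an oslo.messaging notification listener rather than called by hand.

```python
class AutoProvisionEndpoint(object):
    """React to a Keystone 'user created' notification by starting a workflow."""

    def __init__(self, start_workflow):
        # start_workflow stands in for e.g. a python-mistralclient
        # executions.create() call made under a trust for the user.
        self.start_workflow = start_workflow

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Only react to the (assumed) federated-user-created event.
        if event_type == 'identity.user.created':
            self.start_workflow('autoprovision_user',
                                user_id=payload.get('resource_info'))


# Drive the endpoint directly instead of via a message bus.
started = []
endpoint = AutoProvisionEndpoint(lambda wf, **kw: started.append((wf, kw)))
endpoint.info({}, 'identity.localhost', 'identity.user.created',
              {'resource_info': 'abc123'}, {})
print(started)  # → [('autoprovision_user', {'user_id': 'abc123'})]
```

The only Mistral-specific part is the callable handed to the endpoint, which is why the same shape works whether the messages come straight off the notification bus or via a Zaqar queue.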

> But as Renat mentioned, the part about triggering Mistral workflows from 
> a message does not yet exist. As Tim pointed out, Congress could be a 
> solution to that (listening for a message and then starting the Mistral 
> workflow). That may be OK in the short term, but in the long term I'd 
> prefer that we implement the triggering thing in Mistral (since there 
> are *lots* of end-user use cases for this too), and have the workflow 
> optionally query Congress for the policy rather than having Congress in 
> the loop.
> 

I agree 100% on the positioning of Congress vs. Mistral here.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-11 Thread Clint Byrum
Excerpts from Murray, Paul (HP Cloud)'s message of 2015-11-11 09:01:16 -0800:
> > Unfortunately, you're trying to work around misuse of the cloud API's, not
> > missing features from them. Don't use those volume types, and don't build
> > systems that rely on single ports and interfaces. IMO rebuild is a misguided
> > concept (something that took me a long time to realize). 
> 
> Slightly tangential to this discussion but specifically to your comment about
> rebuild: HP (and I believe RAX) kept asking OpenStack infra to use rebuild in 
> nodepool instead of constantly deleting and creating instances.
> The reason was that it would give them a significant performance advantage
> and dramatically improve the use of resources in the operator's cloud.
> Nodepool would have gained vastly better resource utilization. The
> fact infra did not do that (I believe) is partly because it was difficult to 
> refactor
> nodepool for the purpose (after all, it works and there are other things to 
> do) and
> partly because the resources were free. In a pay-per-use scenario the decision
> would have been different.
> 
> This is off topic because it's not about cinder volumes. But I guess there 
> are two
> takeaways: 
> 
> On one hand, in the end they didn't really need rebuild - nodepool works 
> without it.
> 
> On the other, it would have reduced their costs (if paying) and made better 
> use of
> resources.
> 
> I'll leave it to the reader to decide which of these matter to you.
> 

Thanks, that is a helpful anecdote that I think shows the dangerous
nature of this feature. It _seems_ like a good thing because it lets the
operator expose cost savings to the user.

But IMO, the cloud operators were forced into this way of thinking
because deletes and creates have traditionally been expensive. They
go through schedulers, cause relational database churn, leave behind
dangling resources, etc. Rebuild is effectively a bypass around those
things, and for that reason I understand why it exists.

But, from a user perspective, any issue where you need to use rebuild
because it doesn't eat up more space on an already-full cloud or saves
you money in billed hours isn't real. If you can't build a new one
due to capacity, and you can handle downtime, simply stop the old one
before the build (leaving volumes and/or objects behind for the new one
to build from). No part of rebuild actually helps users other than
preserving ID's and attachments, which is not going to help them at all
if they need to move the workload to a new region/AZ/cloud.

So, if we actually make cloud churn scale well (which should not be
hard, and is important since cloud-native apps will want to work
like this anyway), rebuild becomes a quaint server-side way to stop,
create new, start, and delete old.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-11 Thread Zane Bitter

On 09/11/15 11:55, Adam Young wrote:

On 11/09/2015 10:57 AM, Tim Hinrichs wrote:

Congress happens to have the capability to run a script/API call under
arbitrary conditions on the state of other OpenStack projects, which
sounded like what you wanted.  Or did I misread your original question?

Congress and Mistral are definitely not competing. Congress lets
people declare which states of the other OpenStack projects are
permitted using a general purpose policy language, but it does not try
to make complex changes (often requiring a workflow) to eliminate
prohibited states.  Mistral lets people create a workflow that makes
complex changes to other OpenStack projects, but it doesn't have a
general purpose policy language that describes which states are
permitted. Congress and Mistral are complementary, and each can stand
on its own.


And why should not these two things be in a single project?


Because they're completely different projects with completely different 
architectures developed by completely different teams that do completely 
different things for completely different groups of users.


This is a bit like saying that Nova and Rally should be a single project 
because they both sometimes make API calls to Cinder. The premise is 
technically true but as an argument... it's not so great.



Arguably, Congress should have implemented their action invocation as a 
hard-coded call to Mistral so as to let users define an arbitrary 
workflow. Instead they made it pluggable and allowed users to specify 
any single API call (provided a plugin existed for it, and the Mistral 
one does not yet). This does duplicate functionality in Mistral in the 
sense that Mistral also has plugins to call OpenStack APIs, which it 
does as a step in a workflow. It's easy to see why they might have 
chosen that - no young project wants to hitch its wagon to any other 
relatively young project because it makes getting adoption that much 
harder. However there's an easy migration path (write a Mistral plugin, 
convert the other actions to one-step workflows, switch over to using 
the Mistral plugin exclusively), so it seems like a perfectly sensible 
decision to me. Merging the projects because of this one tiny bit of 
common functionality would be absurd.


cheers,
Zane.



Tim


On Mon, Nov 9, 2015 at 6:46 AM Adam Young
<ayo...@redhat.com> wrote:

On 11/06/2015 06:28 PM, Tim Hinrichs wrote:

Congress allows users to write a policy that executes an action
under certain conditions.

The conditions can be based on any data Congress has access to,
which includes nova servers, neutron networks, cinder storage,
keystone users, etc.  We also have some Ceilometer statistics;
I'm not sure about whether it's easy to get the Keystone
notifications that you're talking about today, but notifications
are on our roadmap.  If the user's login is reflected in the
Keystone API, we may already be getting that event.

The action could in theory be a mistral/heat API or an arbitrary
script.  Right now we're set up to invoke any method on any of
the python-clients we've integrated with.  We've got an
integration with heat but not mistral.  New integrations are
typically easy.


Sounds like Mistral and Congress are competing here, then.  Maybe
we should merge those efforts.




Happy to talk more.

Tim



On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann
> wrote:

Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28
-0600:
> On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann
> wrote:
>
> > Excerpts from Clint Byrum's message of 2015-11-05
10:09:49 -0800:
> > > Excerpts from Doug Hellmann's message of 2015-11-05
09:51:41 -0800:
> > > > Excerpts from Adam Young's message of 2015-11-05
12:34:12 -0500:
> > > > > Can people help me work through the right set of
tools for this use
> > case
> > > > > (has come up from several Operators) and map out a
plan to implement
> > it:
> > > > >
> > > > > Large cloud with many users coming from multiple
Federation sources
> > has
> > > > > a policy of providing a minimal setup for each user
upon first visit
> > to
> > > > > the cloud:  Create a project for the user with a
minimal quota, and
> > > > > provide them a role assignment.
> > > > >
> > > > > Here are the gaps, as I see it:
> > > > >
> > > > > 1.  Keystone provides a notification that a user
has logged in, but
> > > > > there is nothing capable of executing on this
notification at the
> > > > > moment.  Only Ceilometer listens to Keystone
notifications.
  

Re: [openstack-dev] [Ironic] Do we need to have a mid-cycle?

2015-11-11 Thread John Villalovos
My order of preference would be:

1. Coordinated regional mid-cycles
2. Normal mid-cycle
3. Virtual mid-cycle
4. No mid-cycle

Thanks,
John

On Tue, Nov 10, 2015 at 10:45 AM, Lucas Alvares Gomes  wrote:

> Hi,
>
> In the last Ironic meeting [1] we started a discussion about whether
> we need to have a mid-cycle meeting for the Mitaka cycle or not. Some
> ideas about the format of the midcycle were presented in that
> conversation and this email is just a follow up on that conversation.
>
> The ideas presented were:
>
> 1. Normal mid-cycle
>
> Same format as the previous ones, the meetup will happen in a specific
> venue somewhere in the world.
>
> 2. Virtual mid-cycle
>
> People doing a virtual hack session on IRC / google hangout /
> others... Something like virtual sprints [2].
>
> 3. Coordinated regional mid-cycles
>
> Having more than one meetup happening in different parts of the world
> with a preferable time overlap between them so we could use video
> conference for some hours each day to sync up what was done/discussed
> on each of the meetups.
>
> 4. Not having a mid-cycle at all
>
>
> So, what people think about it? Should we have a mid-cycle for the
> Mitaka release or not? If so, what format should we use?
>
> Other ideas are also welcome.
>
> [1]
> http://eavesdrop.openstack.org/meetings/ironic/2015/ironic.2015-11-09-17.00.log.html
> [2] https://wiki.openstack.org/wiki/VirtualSprints
>
> Cheers,
> Lucas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-11 Thread Flavio Percoco

On 09/11/15 21:30 -0600, Matt Riedemann wrote:

On 11/9/2015 9:12 PM, Matthew Treinish wrote:

On Mon, Nov 09, 2015 at 10:54:43PM +, Kuvaja, Erno wrote:

On Mon, Nov 09, 2015 at 05:28:45PM -0500, Doug Hellmann wrote:

Excerpts from Matt Riedemann's message of 2015-11-09 16:05:29 -0600:


On 11/9/2015 10:41 AM, Thierry Carrez wrote:

Hi everyone,

A few cycles ago we set up the Release Cycle Management team which
was a bit of a frankenteam of the things I happened to be leading:
release management, stable branch maintenance and vulnerability

management.

While you could argue that there was some overlap between those
functions (as in, "all these things need to be released"), logic
was not the primary reason they were put together.

When the Security Team was created, the VMT was spun out of the
Release Cycle Management team and joined there. Now I think we
should spin out stable branch maintenance as well:

* A good chunk of the stable team work used to be stable point
release management, but as of stable/liberty this is now done by
the release management team and triggered by the project-specific
stable maintenance teams, so there is no more overlap in tooling
used there

* Following the kilo reform, the stable team is now focused on
defining and enforcing a common stable branch policy[1], rather
than approving every patch. Being more visible and having more
dedicated members can only help in that very specific mission

* The release team is now headed by Doug Hellmann, who is focused
on release management and does not have the history I had with
stable branch policy. So it might be the right moment to refocus
release management solely on release management and get the stable
team its own leadership

* Empowering that team to make its own decisions, giving it more
visibility and recognition will hopefully lead to more resources
being dedicated to it

* If the team expands, it could finally own stable branch health
and gate fixing. If that ends up all falling under the same roof,
that team could make decisions on support timeframes as well,
since it will be the primary resource to make that work


Isn't this kind of already what the stable maint team does? Well,
that and some QA people like mtreinish and sdague.



So.. good idea ? bad idea ? What do current stable-maint-core[2]
members think of that ? Who thinks they could step up to lead that

team ?


[1]
http://docs.openstack.org/project-team-guide/stable-branches.html
[2] https://review.openstack.org/#/admin/groups/530,members



With the decentralizing of the stable branch stuff in Liberty [1] it
seems like there would be less use for a PTL for stable branch
maintenance - the cats are now herding themselves, right? Or at
least that's the plan as far as I understood it. And the existing
stable branch wizards are more or less around for help and answering

questions.


The same might be said about releasing from master and the release
management team. There's still some benefit to having people dedicated
to making sure projects all agree to sane policies and to keep up with
deliverables that need to be released.


Except the distinction is that relmgt is actually producing something. Relmgt
has the releases repo which does centralize library releases, reno to do the
release notes, etc. What does the global stable core do? Right now it's there
almost entirely to just add people to the project specific stable core teams.

-Matt Treinish



I'd like to move the discussion from what are the roles of the current 
stable-maint-core and more towards what the benefits would be having a 
stable-maint team rather than the -core group alone.

Personally I think the stable maintenance should be quite a lot more than 
unblocking gate and approving people allowed to merge to the stable branches.



Sure, but that's not what we're talking about here, is it? The other tasks, like
backporting changes for example, have been taken on by project teams. Even in
your other email you mentioned that you've been doing backports and other tasks
that you consider stable maint in a Glance-only context. That's something we
changed in kilo which ttx referenced in [1] to enable that to happen, and it was
the only way to scale things.

The discussion here is about the cross project effort around stable branches,
which by design is a more limited scope now. Right now the cross project effort
around stable branch policy is really 2 things (both of which ttx already
mentioned):

1. Keeping the gates working on the stable branches
2. Defining and enforcing stable branch policy.

The only lever on #2 is that the global stable-maint-core is the only group
which has add permissions to the per project stable core groups. (also the
stable branch policy wiki, but that rarely changes.) We specifically shrank it to
these 2 things in [1]. Well, really 3 things there, but since we're not doing
integrated stable point releases in the future its now only 2.

This is my whole argument that creating 

Re: [openstack-dev] [puppet] ::db classes

2015-11-11 Thread Clayton O'Neill
On Wed, Nov 11, 2015 at 9:50 AM, Clayton O'Neill  wrote:

> I discovered this issue last night and opened a bug on it (
> https://bugs.launchpad.net/puppet-tuskar/+bug/1515273).
>
> This affects most of the modules, and the short version of it is that the
> defaults in all the ::db classes are wrong for max_pool_size
> and max_overflow.  We're setting these to 10 and 20, but oslo_db actually
> has no internal default.
>

To clarify: The modules following this pattern are setting max_pool_size
and max_overflow to 10 and 20 respectively, but oslo_db has no internal
default.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] ::db classes

2015-11-11 Thread Clayton O'Neill
I discovered this issue last night and opened a bug on it (
https://bugs.launchpad.net/puppet-tuskar/+bug/1515273).

This affects most of the modules, and the short version of it is that the
defaults in all the ::db classes are wrong for max_pool_size
and max_overflow.  We're setting these to 10 and 20, but oslo_db actually
has no internal default.

The two options I see for fixing this are to either put in place the old
traditional pattern (sketched here with nova_config purely as an example):

if $max_pool_size {
  nova_config { 'database/max_pool_size': value => $max_pool_size }
} else {
  nova_config { 'database/max_pool_size': ensure => absent }
}

and do that for at least the two values with the wrong defaults, or
preferably, change the defaults for all of these parameters to use the
service default fact.

I prefer the latter approach.  I know there has been some discussion on
Trello about how much we want to be using the service default fact, but as
near as I can tell, the concerns seem to be mostly about not accidentally
reverting intentionally different values and breaking existing installs.

This scenario seems to be the ideal candidate for just not setting a value
unless the deployer has specifically asked for something special.
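As a sanity check on what "no internal default" means in practice: when oslo_db passes nothing through, SQLAlchemy's own documented QueuePool defaults apply — pool_size=5 and max_overflow=10, not the 10/20 the modules have been hard-coding. A quick illustration (plain SQLAlchemy, no oslo_db involved):

```python
from sqlalchemy.pool import QueuePool

# Construct a pool with no explicit options; SQLAlchemy's documented
# QueuePool defaults apply: pool_size=5 and max_overflow=10.
pool = QueuePool(creator=lambda: None)
print(pool.size())         # → 5
print(pool._max_overflow)  # → 10
```

So "ensure absent" and "hard-code 10/20" are genuinely different behaviors, which is why defaulting to the service default fact seems right here.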
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stackalytics] possibly breaking bug closing statistics?

2015-11-11 Thread Ilya Shakhat
2015-11-11 16:38 GMT+03:00 Thierry Carrez :

>
> date_fix_committed is probably not set if we directly switch the bug to
> "Fix released", and that is what we plan to do now with Launchpad bugs.
>
> We might therefore need a backward-compatible patch to Stackalytics so
> that it uses (date_fix_committed or date_fix_released) instead.


Good point, Thierry.

I have one bug that was transferred from New directly into Fix Released
state
(https://bugs.launchpad.net/stackalytics/+bug/1479791). Launchpad sets all
intermediate
states, including date_fix_committed:
"date_fix_committed": "2015-08-03T08:37:49.270140+00:00",
"date_fix_released": "2015-08-03T08:37:49.270140+00:00",

Not sure if it's documented behavior or not, so the patch to Stackalytics
would probably
be preferred.
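The backward-compatible fallback would be trivial on the Stackalytics side — something along these lines (illustrative only, not the actual Stackalytics code or its field names):

```python
def bug_close_date(bug):
    # Prefer date_fix_committed, but fall back to date_fix_released for
    # bugs that were moved straight to "Fix Released" and never had the
    # intermediate date set.
    return bug.get('date_fix_committed') or bug.get('date_fix_released')

bug = {'date_fix_committed': None,
       'date_fix_released': '2015-08-03T08:37:49.270140+00:00'}
print(bug_close_date(bug))  # → 2015-08-03T08:37:49.270140+00:00
```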

Thanks,
Ilya
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils

2015-11-11 Thread Doug Hellmann

> On Nov 11, 2015, at 3:54 AM, Andrey Kurilin  wrote:
> 
> 
> 
> On Tue, Nov 10, 2015 at 4:25 PM, Sean Dague  > wrote:
> On 11/10/2015 08:24 AM, Andrey Kurilin wrote:
> >>It was also proposed to reuse openstackclient or the openstack SDK.
> >
> > Openstack SDK was proposed a long time ago(it looks like it was several
> > cycles ago) as "alternative" for cliutils and apiclient, but I don't
> > know any client which use it yet. Maybe openstacksdk cores should try to
> > port any client as an example of how their project should be used.
> 
> The SDK is targeted for end user applications, not service clients. I do
> get there was lots of confusion over this, but SDK is not the answer
> here for service clients.
> 
> OK, thanks for the explanation, but there is another question in my head: if 
> openstacksdk is not for python-*clients, why was apiclient (which is actually used 
> by python-*clients) marked as deprecated in favor of openstacksdk? 

The Oslo team wanted to deprecate the API client code because it wasn’t being 
maintained. We thought at the time we did so that the SDK would replace the 
clients, but discussions since that time have changed direction.

> 
> The service clients are *always* going to have to exist in some form.
> Either as libraries that services produce, or by services deciding they
> don't want to consume the libraries of other clients and just put a
> targeted bit of rest code in their own tree to talk to other services.
> 
> -Sean
> 
> --
> Sean Dague
> http://dague.net 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> -- 
> Best regards,
> Andrey Kurilin.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [doc] [api] Propose Virtual Nova API Doc Sprint on Dec 8 and 9

2015-11-11 Thread Anne Gentle
Great! I'll help out on this side of the world.

Before the 8th, please review this patch to re-arrange and publish the
Compute API Guide (concept topics) to developer.openstack.org. nova:
https://review.openstack.org/#/c/230186/ It depends on this one in infra:
https://review.openstack.org/#/c/231000/

That'll ensure the concept topics are built to a findable location.

Thanks,
Anne

On Wed, Nov 11, 2015 at 6:51 AM, Alex Xu  wrote:

> Hi,
>
> At the nova API subteam weekly meeting, we decided to hold a 2-day virtual
> doc sprint to help with the Nova API documentation. The initially proposed
> dates are Dec 8 and 9 (let me know if they conflict with anything else). The
> sprint runs on local time for folks. People can work on patches and also
> help with reviews.
>
> We appreciate and welcome anyone joining this sprint to help with the API docs.
>
> Please sign up for this sprint first, if you are interested, at the top of
> the etherpad https://etherpad.openstack.org/p/nova-v2.1-api-doc . The tasks
> for the sprint are also in the etherpad; some contributors are already working on
> those doc tasks now, so feel free to join us now or join the sprint.
>
> Thanks
> Alex
>
>
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stackalytics] possibly breaking bug closing statistics?

2015-11-11 Thread Doug Hellmann
Excerpts from Ilya Shakhat's message of 2015-11-11 17:17:58 +0300:
> 2015-11-11 16:38 GMT+03:00 Thierry Carrez :
> 
> >
> > date_fix_committed is probably not set if we directly switch the bug to
> > "Fix released", and that is what we plan to do now with Launchpad bugs.
> >
> > We might therefore need a backward-compatible patch to Stackalytics so
> > that it uses (date_fix_committed or date_fix_released) instead.
> 
> 
> Good point, Thierry.
> 
> I have one bug that was transferred from New directly into Fix Released
> state
> (https://bugs.launchpad.net/stackalytics/+bug/1479791). Launchpad sets all
> intermediate
> states, including date_fix_committed:
> "date_fix_committed": "2015-08-03T08:37:49.270140+00:00",
> "date_fix_released": "2015-08-03T08:37:49.270140+00:00",
> 
> Not sure if it's documented behavior or not, so the patch to Stackalytics
> would probably
> be preferred.
> 
> Thanks,
> Ilya

This should get us started, though I'm not certain it's sufficient:
https://review.openstack.org/244142

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils

2015-11-11 Thread Andrey Kurilin
On Tue, Nov 10, 2015 at 4:25 PM, Sean Dague  wrote:

> On 11/10/2015 08:24 AM, Andrey Kurilin wrote:
> >>It was also proposed to reuse openstackclient or the openstack SDK.
> >
> > Openstack SDK was proposed a long time ago(it looks like it was several
> > cycles ago) as "alternative" for cliutils and apiclient, but I don't
> > know any client which use it yet. Maybe openstacksdk cores should try to
> > port any client as an example of how their project should be used.
>
> The SDK is targeted for end user applications, not service clients. I do
> get there was lots of confusion over this, but SDK is not the answer
> here for service clients.
>

OK, thanks for the explanation, but another question comes to mind: if
openstacksdk is not for the python-*clients, why was apiclient (which is
actually used by the python-*clients) marked as deprecated in favor of
openstacksdk?

>
> The service clients are *always* going to have to exist in some form.
> Either as libraries that services produce, or by services deciding they
> don't want to consume the libraries of other clients and just put a
> targeted bit of rest code in their own tree to talk to other services.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>



-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Graduate apiclient library

2015-11-11 Thread Andrey Kurilin
Hi!
I really like the idea of graduating apiclient into a separate library.

On Tue, Nov 10, 2015 at 8:29 AM, Kekane, Abhishek <
abhishek.kek...@nttdata.com> wrote:

> Hi Devs,
>
> In the Mitaka design summit session [1], it was decided to create a new
> apiclient library with completely new code (metadata classes) and to copy
> from oslo-incubator/openstack/common/apiclient only the code that is needed
> by all python client libraries.
>
> We have done an extensive analysis of how the different
> oslo-incubator/openstack/common/apiclient modules are used across the
> respective OpenStack python client libraries, and recorded all our
> observations in the Google spreadsheet [2].
>
> Please read the spreadsheet before reading the below information.
>
> All modules from oslo-incubator/openstack/common/apiclient are taken into
> consideration.
>
> auth.py
> ===
> A few python client libraries use auth.py from oslo-incubator for loading
> auth plugins and authentication, while others have their own auth_plugin.py,
> which does the same job in conjunction with the keystoneclient library.
>
> Differences between oslo.incubator/openstack/common/apiclient/auth.py and
> python-*client/*client/auth_plugin.py :
>
> a) The method names are the same but the implementations differ
> (discover_auth_systems, load_auth_system_opts, load_plugin, etc.)
>
> b) The auth_plugin.py module is present in the nova and cinder clients
> (both have almost the same auth_plugin module)
>
> c) The BaseAuthPlugin classes of auth_plugin.py and auth.py define the same
> methods, except:
>  i. the BaseAuthPlugin class of the auth.py module does not have a
> 'get_auth_url' method
>  ii. the 'authenticate' function takes different arguments
>
> Possible resolutions:
> 1) Remove auth_plugin.py from the respective python client libraries and add
> all required common functionality to auth.py. Add the auth.py module to the
> new apiclient library.
> 2) Remove auth.py completely and add the auth_plugin.py module to the
> respective python client libraries. No need to add auth.py to the new
> apiclient library.
> 3) Check if the keystoneauth library has all the functionality present in
> auth.py and auth_plugin.py. If present, don't include auth.py in the new
> client library; eliminate auth_plugin.py from the python client libraries
> and use keystoneauth instead wherever needed.
>
> base.py
> ===
> Various python client libraries use the ResourceClass and CrudManager
> classes and the getid() method from openstack/common/apiclient/base.py.
> Some python client libraries also implement their own getid(); those local
> copies should be deleted in favor of the one in base.py, and this module
> should be added to the new apiclient library as-is.
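
The getid() helper under discussion follows a small, well-known pattern; a
minimal sketch (not the exact incubator code) looks like this:

```python
def getid(obj):
    """Return obj.id when obj is a resource object, else obj unchanged.

    This lets manager methods accept either a resource instance or a
    raw id string interchangeably.
    """
    try:
        return obj.id
    except AttributeError:
        return obj


class Server(object):
    # Illustrative stand-in for an apiclient resource object.
    def __init__(self, id):
        self.id = id


print(getid(Server('abc-123')))  # abc-123
print(getid('abc-123'))          # abc-123 (already an id)
```

Because the helper is this small, duplicating it per client is pure drift
risk, which is why keeping the single copy in base.py makes sense.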
>
>
> client.py
> ===
>
> Only a few of the python client libraries use the get_class() static method
> of the BaseClient class from the
> oslo-incubator/openstack/common/apiclient/client.py module; a few implement
> the method themselves. We can simply omit client.py from the new apiclient
> library and add the missing get_class method to the python client libraries
> that need it.
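
For context, the get_class() being discussed resolves a versioned client
class by dynamic import. A rough sketch of that pattern follows; the
version-map shape and error handling here are illustrative assumptions, not
the exact incubator implementation:

```python
import importlib


def get_class(api_name, version, version_map):
    """Return the client class registered for a given API version.

    version_map maps version strings to dotted 'module.Class' paths.
    """
    try:
        class_path = version_map[str(version)]
    except KeyError:
        raise ValueError(
            "Invalid %s client version %r; supported versions: %s"
            % (api_name, version, ', '.join(sorted(version_map))))
    # Split 'package.module.Class' into module path and class name,
    # then import the module and pull the class off it.
    module_name, _, class_name = class_path.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

With a map like {'2': 'novaclient.v2.client.Client'}, get_class('compute',
2, version_map) would import and return the versioned class.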
>
>
> exceptions.py
> ===
>
> Please refer to the exception classes present in the respective python
> client libraries in the Google spreadsheet "exception_details".
> All common exceptions from the respective python client libraries should be
> moved to the exceptions.py module, and this module should be part of the new
> apiclient library.
>
>
>
I suppose this module can be useful only for new clients; it is a hard
task for existing clients. I broke the whole of OpenStack with [1][2] (the
changes were reverted) while working on [3] (its status is outdated now).
Also, a few weeks ago there was another attempt to change novaclient's
exceptions [4][5]. It was reverted too.

[1] - https://review.openstack.org/#/c/69837/6
[2] - https://review.openstack.org/#/c/94166/3
[3] -
https://blueprints.launchpad.net/oslo-incubator/+spec/common-client-library-2
[4] -
http://lists.openstack.org/pipermail/openstack-dev/2015-October/077985.html
[5] - https://review.openstack.org/#/c/235558/


> fake_client.py
> ===
>
> Retain this module as it is for unit testing purpose.
>
>
> utils.py
> ===
>
> The find_resource method from utils.py is used only by the manila client;
> all other clients have their own version.
>
> Possible resolutions:
> 1. Move utils.py to the new apiclient library, delete the find_resource
> method from all python client libraries, and make them use the one from the
> apiclient library.
> 2. Simply leave utils.py out of the new apiclient library and implement the
> find_resource method in the manila client.
> We prefer to implement option #1.
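
The id-then-name lookup that find_resource performs can be sketched as
below; this is a simplified illustration (the real helper handles more
lookup keys and raises client-specific exceptions):

```python
def find_resource(manager, name_or_id):
    """Find a resource by id first, falling back to a name lookup."""
    try:
        return manager.get(name_or_id)
    except Exception:  # simplified; real code catches NotFound-type errors
        pass
    # Fall back to scanning the listing for a unique name match.
    matches = [r for r in manager.list()
               if getattr(r, 'name', None) == name_or_id]
    if not matches:
        raise LookupError("No resource matching %r" % name_or_id)
    if len(matches) > 1:
        raise LookupError("Multiple matches for %r" % name_or_id)
    return matches[0]
```

Centralizing this in the shared library (option #1) keeps the ambiguity
rules (no match, multiple matches) identical across all clients.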
>
>
> Please have a look at it and let us know your suggestions on the same.
> Currently we are having Diwali Vacation in India and once we are back from
> the vacation, based on your 

Re: [openstack-dev] [telemetry][gnocchi] defining scope/domain

2015-11-11 Thread Julien Danjou
On Tue, Nov 10 2015, gord chung wrote:

Hi Gordon,

> i was doing some googling on the current state of time series databases to see
> if there was a worthwhile db to implement as a Gnocchi driver and it seems
> we're currently in the same state as before: there are flaws with all open
> source time series databases but if you have lots of money, Splunk is 
> legit[1].

The thing is, the Carbonara-based backend is starting to work pretty
well, so there's less and less attraction to other solutions
anyway.

> as i get asked what Gnocchi is/does pretty often, i usually point them to
> docs[2]. that said, do we think it's necessary to define and list explicit use
> cases for Gnocchi? Gnocchi is 'a resource indexing and metric storage service'
> is a nice one-liner, but would a comparison make Gnocchi easier to
> understand[3]? just wondering if there's a way we can make Gnocchi easier to
> understand and consume.

Yes, I think there is. We should write typical use cases and examples in
the documentation so that people can get a grasp of what Gnocchi is and
what kinds of problems it can or cannot solve.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tap-as-a-service] weekly meeting

2015-11-11 Thread Takashi Yamamoto
hi,

i have no idea why the link is broken.

today's meeting log is here:
http://eavesdrop.openstack.org/meetings/tap_as_a_service_meeting/2015/tap_as_a_service_meeting.2015-11-11-06.36.html

On Wed, Nov 11, 2015 at 7:08 PM, Neil Jerram  wrote:
> Sounds interesting!  I'd like to look at some past meeting logs (including 
> from today), but the 'past meetings' link at 
> http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting does not work for me.
>
> Neil
>
> -Original Message-
> From: Takashi Yamamoto [mailto:yamam...@midokura.com]
> Sent: 11 November 2015 03:09
> To: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: [openstack-dev] [neutron][tap-as-a-service] weekly meeting
>
> hi,
>
> tap-as-a-service meeting will be held weekly, starting today.
> http://eavesdrop.openstack.org/#Tap_as_a_Service_Meeting
> anyone interested in the project is welcome.
> sorry for the short notice.
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][puppet] Detached roles and globals.pp

2015-11-11 Thread Aleksandr Didenko
Hi,

I think something like this would be more flexible:

$swift_proxy_roles   = hiera('swift_proxy_roles',   ['primary-controller', 'controller'])
$swift_storage_roles = hiera('swift_storage_roles', ['primary-controller', 'controller'])
# ...
$swift_nodes        = get_nodes_hash_by_roles($network_metadata, $swift_storage_roles)
$swift_proxies      = get_nodes_hash_by_roles($network_metadata, $swift_proxy_roles)
$swift_proxy_caches = get_nodes_hash_by_roles($network_metadata, $swift_proxy_roles)

Regards,
Alex


On Wed, Nov 11, 2015 at 12:41 PM, Daniel Depaoli <
daniel.depa...@create-net.org> wrote:

> Hi all.
> I'm starting to resolve the TODO at these lines[1]. To solve this I am
> thinking of hardcoding the roles in the file, for example:
>
> $swift_proxies = get_nodes_hash_by_roles($network_metadata,
>     ['primary-swift-proxy', 'swift-proxy']) ? {
>   true  => get_nodes_hash_by_roles($network_metadata,
>     ['primary-swift-proxy', 'swift-proxy']),
>   false => get_nodes_hash_by_roles($network_metadata,
>     ['primary-controller', 'controller']),
> }
>
>
> Is this the right way or do you suggest something more clean?
>
> Thanks
>
> [1]
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/globals/globals.pp#L236:L242
>
> --
> 
> Daniel Depaoli
> CREATE-NET Research Center
> Smart Infrastructures Area
> Junior Research Engineer
> 
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][fuel] How can I install Redhat-OSP using Fuel

2015-11-11 Thread Vladimir Kuklin
Hi, Fei

It seems you will need to do several things with Fuel - create a new
release, associate your cluster with it at creation time, and provide paths
to the corresponding package repositories. Also, you will need to create a
base image for image-based provisioning. I am not sure we have 100% of the
code that supports this, but it should be possible with some additional
effort. Let me specifically refer you to the Fuel Agent team, who are
working on image-based provisioning, and the Nailgun folks, who should help
you figure out the patterns for repository URL configuration.

On Tue, Nov 10, 2015 at 5:15 AM, Fei LU  wrote:

> Greeting Fuel teams,
>
>
> My company is working on the installation of virtualization
> infrastructure, and we have noticed Fuel is a great tool, much better than
> our own installer. The question is that Mirantis is currently supporting
> OpenStack on CentOS and Ubuntu, while my company is using Redhat-OSP.
>
> I have read all the Fuel documents, including the fuel dev docs, but I
> haven't found how I can add my own release to Fuel. Or maybe I'm missing
> something.
>
> So, would you guys please give some guide or hints?
>
> Appreciating any help.
> Kane
>
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][docs][api] Propose Virtual Nova API Doc Sprint on Dec 8 and 9

2015-11-11 Thread Alex Xu
Sorry, add [docs] and [api] tags to the title!

2015-11-11 20:51 GMT+08:00 Alex Xu :

> Hi,
>
> At the Nova API subteam weekly meeting, we decided to hold a two-day virtual
> doc sprint to help with the Nova API documentation. The initially proposed
> dates are Dec 8 and 9 (let me know if they conflict with anything else). The
> sprint runs in each participant's local time. People can work on patches and
> can also help with reviews.
>
> Everyone is welcome to join this sprint and help with the API docs.
>
> If you are interested, please sign up first at the top of the etherpad
> https://etherpad.openstack.org/p/nova-v2.1-api-doc . The sprint tasks are
> also in the etherpad; some contributors are already working on those doc
> tasks, so feel free to join us now or during the sprint.
>
> Thanks
> Alex
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2015-11-11 Thread Vladimir Kuklin
Hi, Andrew

Let me answer your questions.

This agent is active/active; it actually marks one of the nodes as a
'pseudo'-master, which is used as a target for the other nodes to join. We
also check which node is the master and use it in the monitor action to
check whether a node is clustered with this 'master' node. When we
bootstrap the cluster, we need to decide which node to mark as the master.
Then, when it starts (actually, promotes), we can finally pick its name
through the notification mechanism and ask the other nodes to join its
cluster.

Regarding disconnect_node+forget_cluster_node, this is quite simple - we
need to eject the node from the cluster. Otherwise it stays in the list of
cluster nodes, and a lot of cluster actions, e.g. list_queues, will hang
forever, as will the forget_cluster_node action.

We also handle this case whenever a node leaves the cluster. If you
remember, I wrote an email to the Pacemaker ML about getting notifications
on node unjoin events: '[openstack-dev] [Fuel][Pacemaker][HA] Notifying
clones of offline nodes'. So we went another way and added a dbus daemon
listener that does the same when a node leaves the corosync cluster (we
know that this is a little bit racy, but the disconnect+forget action pair
is idempotent).

Regarding the notification commands - we changed the behaviour to one that
fit our use cases better and passed our destructive tests. It could be
Pacemaker-version dependent, so I agree we should consider changing this
behaviour. But so far it has worked for us.

On Wed, Nov 11, 2015 at 2:12 PM, Andrew Beekhof  wrote:

>
> > On 11 Nov 2015, at 6:26 PM, bdobre...@mirantis.com wrote:
> >
> > Thank you Andrew.
> > Answers below.
> > >>>
> > Sounds interesting, can you give any comment about how it differs to the
> other[i] upstream agent?
> > Am I right that this one is effectively A/P and wont function without
> some kind of shared storage?
> > Any particular reason you went down this path instead of full A/A?
> >
> > [i]
> >
> https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster
> > <<<
> > It is based on multistate clone notifications. It requries nothing
> shared but Corosync info base CIB where all Pacemaker resources stored
> anyway.
> > And it is fully A/A.
>
> Oh!  So I should skip the A/P parts before "Auto-configuration of a
> cluster with a Pacemaker”?
> Is the idea that the master mode is for picking a node to bootstrap the
> cluster?
>
> If so I don’t believe that should be necessary provided you specify
> ordered=true for the clone.
> This allows you to assume in the agent that your instance is the only one
> currently changing state (by starting or stopping).
> I notice that rabbitmq.com explicitly sets this to false… any particular
> reason?
>
>
> Regarding the pcs command to create the resource, you can simplify it to:
>
> pcs resource create --force --master p_rabbitmq-server
> ocf:rabbitmq:rabbitmq-server-ha \
>   erlang_cookie=DPMDALGUKEOMPTHWPYKC node_port=5672 \
>   op monitor interval=30 timeout=60 \
>   op monitor interval=27 role=Master timeout=60 \
>   op monitor interval=103 role=Slave timeout=60 OCF_CHECK_LEVEL=30 \
>   meta notify=true ordered=false interleave=true master-max=1
> master-node-max=1
>
> If you update the stop/start/notify/promote/demote timeouts in the agent’s
> metadata.
>
>
> Lines 1602, 1565, 1621, 1632, 1657, and 1678 have the notify command
> returning an error.
> Was this logic tested? Because pacemaker does not currently support/allow
> notify actions to fail.
> IIRC pacemaker simply ignores them.
>
> Modifying the resource state in notifications is also highly unusual.
> What was the reason for that?
>
> I notice that on node down, this agent makes disconnect_node and
> forget_cluster_node calls.
> The other upstream agent does not, do you have any information about the
> bad things that might happen as a result?
>
> Basically I’m looking for what each option does differently/better with a
> view to converging on a single implementation.
> I don’t much care in which location it lives.
>
> I’m CC’ing the other upstream maintainer, it would be good if you guys
> could have a chat :-)
>
> > All running rabbit nodes may process AMQP connections. Master state is
> only for a cluster initial point at wich other slaves may join to it.
> > Note, here you can find events flow charts as well [0]
> > [0] https://www.rabbitmq.com/pacemaker.html
> > Regards,
> > Bogdan
> >

Re: [openstack-dev] [fuel][puppet] Detached roles and globals.pp

2015-11-11 Thread Daniel Depaoli
By 'before', do you mean before in globals.pp or in another module? The
best solution would be to add it in the plugin override module, but that is
executed after globals, so it has no effect on globals.

On Wed, Nov 11, 2015 at 12:13 PM, Sergey Vasilenko 
wrote:

>
> On Wed, Nov 11, 2015 at 1:41 PM, Daniel Depaoli <
> daniel.depa...@create-net.org> wrote:
>
>> Hi all.
>> I'm starting to resolve the TODO at these lines[1]. To solve this I am
>> thinking of hardcoding the roles in the file, for example:
>>
>> $swift_proxies = get_nodes_hash_by_roles($network_metadata,
>>     ['primary-swift-proxy', 'swift-proxy']) ? {
>>   true  => get_nodes_hash_by_roles($network_metadata,
>>     ['primary-swift-proxy', 'swift-proxy']),
>>   false => get_nodes_hash_by_roles($network_metadata,
>>     ['primary-controller', 'controller']),
>> }
>>
>>
>> Is this the right way or do you suggest something more clean?
>>
>
> The get_nodes_hash_by_roles function returns a *hash*, so the switch
> provided above shouldn't work. But the approach itself is right.
> IMHO the list of role names should be constructed beforehand.
>
>
> /sv
>
>
>


-- 

Daniel Depaoli
CREATE-NET Research Center
Smart Infrastructures Area
Junior Research Engineer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Nova][Neutron] Multi-tenancy support

2015-11-11 Thread Sukhdev Kapur
Hi Vasyl,

I have not cross-checked every patch in your list, but the list looks
about right.
From the Ironic, Nova, and Neutron point of view, the code is pretty much in
place with these patches.

In this week's meeting we discussed the plan for merging these patches.
A couple of things are holding us up - namely CI and documentation. We are
working on getting the CI addressed so that automated testing can be kicked
off, which will enable us to merge these patches (hopefully in M1).
Documentation is also underway.

As for the ML2 driver (which you are looking for), in order to make the CI
work we are considering a couple of options - either write a canned ML2
driver to test this, or enhance the OVS driver to allow/accept/deal with the
new interface. We did not have a full quorum in this week's meeting;
hopefully we will have some concrete plans by next week. But this ML2 driver
is being considered for devstack/CI-related testing only.

In order to test real-world scenarios, you will need real hardware and a
vendor ML2 driver. The only two vendors I am aware of that have this working
are HP and Arista. I do not know if HP is in a position to release theirs
yet. Arista will take some time to release it, as we follow very strict
quality-control guidelines before releasing any software. I am only a techie
and do not control software releases, but my guess is that its release will
be aligned with the release of Mitaka.

If you believe a canned ML2 driver will be good enough for devstack
initially, that may become available much earlier.
We meet every Monday at 1700 UTC (8am Pacific time) on
#openstack-meeting-4. Feel free to drop by or join us - this is one of the
things we plan to discuss at next Monday's meeting. That will give you a
better feel for it.

Hope this helps.

-Sukhdev
P.S. feel free to ping me on IRC (IRC handle: Sukhdev) on neutron or Ironic
channels


On Tue, Nov 10, 2015 at 3:05 AM, Vasyl Saienko 
wrote:

> Hello community,
>
> I would like to start preliminary testing of the Ironic multi-tenant
> network setup which is supported by Neutron in Liberty according to [1]. I
> found the following patches on review. A Neutron ML2 plugin is also needed,
> but I can't find any plugin that supports multi-tenancy and Cisco
> (Catalyst)/Arista switches. I would be grateful for any information on the
> matter.
>
> *Ironic:*
>
> https://review.openstack.org/#/c/206232/
>
> https://review.openstack.org/#/c/206238/
>
> https://review.openstack.org/#/c/206243/
>
> https://review.openstack.org/#/c/206244/
>
> https://review.openstack.org/#/c/206245/
>
> https://review.openstack.org/#/c/139687/
>
> https://review.openstack.org/#/c/213262/
> https://review.openstack.org/#/c/228496/
>
> *Nova:*
>
> https://review.openstack.org/#/c/186855/
> https://review.openstack.org/#/c/194413/
>
> *python-ironicclient*:
> https://review.openstack.org/#/c/206144
>
>
> [1]
> https://blueprints.launchpad.net/neutron/+spec/neutron-ironic-integration
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] live migration sub-team meeting

2015-11-11 Thread Murray, Paul (HP Cloud)
So, to confirm:

The live migration IRC meeting will be held on: Tuesdays at 1400 UTC in 
#openstack-meeting-3, starting next week: Tuesday 17th November.

See: https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration

On that page you will find links to the tracking pages and bugs. Please check that 
the appropriate specs and code reviews are listed on the tracking pages. Please 
also take time to review the specs and code.

Please contact me or respond to this email thread if there is anything you want 
to see on the agenda for the first meeting.

Regards,
Paul

From: Murray, Paul (HP Cloud)
Sent: 10 November 2015 11:48
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] live migration sub-team meeting

Thank you for the prompting Michael. I was chasing up a couple of key people to 
make sure they were available.

The IRC meeting should be Tuesdays at 1400 UTC on #openstack-meeting-3 starting 
next week (too late for today).

I will get that sorted out with infra and send another email to confirm. I will 
also sort out all the specs and patches that I know about today. More 
information will be included about that too.

Paul

From: Michael Still [mailto:mi...@stillhq.com]
Sent: 09 November 2015 21:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] live migration sub-team meeting

So, it's been a week. What time are we picking?

Michael

On Thu, Nov 5, 2015 at 10:46 PM, Murray, Paul (HP Cloud) 
> wrote:
> > Most team members expressed they would like a regular IRC meeting for
> > tracking work and raising blocking issues. Looking at the contributors
> > here [2], most of the participants seem to be in the European
> > continent (in time zones ranging from UTC to UTC+3) with a few in the
> > US (please correct me if I am wrong). That suggests that a time around
> > 1500 UTC makes sense.
> >
> > I would like to invite suggestions for a day and time for a weekly
> > meeting -
>
> Maybe you could create a quick Doodle poll to reach a rough consensus on
> day/time:
>
> http://doodle.com/

Yes, of course, here's the poll:

http://doodle.com/poll/rbta6n3qsrzcqfbn






--
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Propose Virtual Nova API Doc Sprint on Dec 8 and 9

2015-11-11 Thread Alex Xu
Hi,

At the Nova API subteam weekly meeting, we decided to hold a two-day virtual
doc sprint to help with the Nova API documentation. The initially proposed
dates are Dec 8 and 9 (let me know if they conflict with anything else). The
sprint runs in each participant's local time. People can work on patches and
can also help with reviews.

Everyone is welcome to join this sprint and help with the API docs.

If you are interested, please sign up first at the top of the etherpad
https://etherpad.openstack.org/p/nova-v2.1-api-doc . The sprint tasks are
also in the etherpad; some contributors are already working on those doc
tasks, so feel free to join us now or during the sprint.

Thanks
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stackalytics] possibly breaking bug closing statistics?

2015-11-11 Thread Thierry Carrez
Ilya Shakhat wrote:
> Doug,
> 
> You are right, there should not be any changes in the stats. Bugs are
> mapped to releases only by date; the target milestone is not taken into
> account. For the resolved-bugs metric Stackalytics uses the
> 'date_fix_committed' field, for the filed-bugs metric - 'date_created'.

Thanks Ilya for the check.

date_fix_committed is probably not set if we directly switch the bug to
"Fix released", and that is what we plan to do now with Launchpad bugs.

We might therefore need a backward-compatible patch to Stackalytics so
that it uses (date_fix_committed or date_fix_released) instead.
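The suggested fallback could be as small as an `or` between the two fields. A minimal sketch — the `Bug` stub below is hypothetical and not the actual Stackalytics or Launchpad model:

```python
from datetime import datetime


class Bug:
    """Minimal stand-in for a Launchpad bug record (hypothetical fields)."""

    def __init__(self, date_fix_committed=None, date_fix_released=None):
        self.date_fix_committed = date_fix_committed
        self.date_fix_released = date_fix_released


def resolved_date(bug):
    # Prefer date_fix_committed (old workflow); fall back to
    # date_fix_released for bugs moved straight to "Fix released".
    return bug.date_fix_committed or bug.date_fix_released


old_style = Bug(date_fix_committed=datetime(2015, 11, 1))
new_style = Bug(date_fix_released=datetime(2015, 11, 2))
print(resolved_date(old_style))  # uses date_fix_committed
print(resolved_date(new_style))  # falls back to date_fix_released
```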

-- 
Thierry Carrez (ttx)



[openstack-dev] [neutron] [networking-sfc] We will resume our weekly Service Chain project IRC meeting starting on 11/12/2015

2015-11-11 Thread Cathy Zhang
Hi everyone,

Here is the meeting info:
Weekly on Thursday at 1700 UTC in #openstack-meeting-4 (IRC webclient).
Due to the daylight saving time change, the meeting will start at 9am Pacific time.

Thanks,
Cathy


[openstack-dev] [Fuel][Infra] HA deployment tests on nodepool

2015-11-11 Thread Aleksandra Fedorova
Hi, everyone,



in Fuel project we run two HA deployment tests for every commit in
openstack/fuel-library repository. Currently these tests are run by
Fuel CI - the standalone Third-Party CI system, which can vote on
openstack/fuel* projects. But we'd like it to be part of the gate.
As these tests require several vms to be available for the test run,
we'd like to use nodepool multinode support for that.



As I don't have a deep understanding of the multinode setup at the
moment, and I haven't found enough info in docs or specs, I think the
best way to start this discussion is to explain the workflow used in
Fuel HA tests. Then we can consider which of the steps are already
there, or can be implemented, and which items are potential blockers.
Any feedback is welcome.

Generic deployment test
=======================

To run deployment for Fuel you need:
  - ISO image for Fuel node [1]
  - fuel-devops [2] - the cli tool which manages virtual machines and
stores state (vm names, network interfaces..) in a PostgreSQL database
  - fuel-qa [3] - test framework based on proboscis

Basic setup is described in [4].

And the test flow is as follows:

- with fuel-devops tool:

create several vm's connected via internal network - a so called
'devops environment'

- with fuel-qa framework:

1. install Fuel node on first vm using the ISO image provided by the
local path on the host server

2. bootstrap other vms with basic OS image provided on Fuel ISO

3. configure Fuel environment via API according to certain scenario

4. run deployment


Test scenarios are described in fuel-qa documentation, see for example [5].
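Very roughly, the four steps could be sketched as one orchestration function. Every name below is hypothetical and merely stands in for the real fuel-devops/fuel-qa APIs:

```python
def run_deployment_test(env, iso_path, scenario):
    """Sketch of the generic Fuel deployment test flow.

    `env` is assumed to hold the vms created by fuel-devops; none of
    these names belong to the real fuel-qa API.
    """
    steps = []
    master, slaves = env["vms"][0], env["vms"][1:]

    # 1. install Fuel node on the first vm from the local ISO
    steps.append("install fuel@%s from %s" % (master, iso_path))

    # 2. bootstrap the other vms with the basic OS image from the ISO
    steps.extend("bootstrap %s" % vm for vm in slaves)

    # 3. configure the Fuel environment via API per the scenario
    steps.append("configure %s" % scenario)

    # 4. run deployment
    steps.append("deploy")
    return steps


plan = run_deployment_test(
    {"vms": ["vm0", "vm1", "vm2"]}, "/srv/fuel.iso", "neutron_vlan_ha")
print(plan)
```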

Deployment test on CI
=====================

fuel-library code is essentially a set of puppet manifests which are
used to deploy the environment configuration defined via the Fuel
interface. These manifests are delivered to the Fuel node as an RPM package
[6].

To save time and resources on CI we don't recreate the environment from
scratch for every test, but regularly take a "stable enough" ISO,
upload it to Jenkins slaves, create the base environment (steps 1. and 2.)
and snapshot all of its vms.

Then, on every commit we

1) rebuild a fuel-library package in a CentOS-based docker container

2) revert devops environment from snapshot

3) upload and install package on Fuel node

4) run the deployment test scenario (steps 3. and 4.)


You can refer to detailed logs in [7]

How this can be addressed in nodepool
=====================================

The nodepool driver approach
----------------------------

fuel-devops is essentially a wrapper and vm manager; it was
originally planned as a tool that could use multiple backends, with
libvirt as the default one. There is a still-under-discussion task to
implement a 'bare-metal driver' for fuel-devops, which would make it
possible to use vms from different servers for one particular test
run.

We can consider implementing nodepool as such a driver, so that it
provides vms, which are then wrapped by fuel-devops and passed on to
the fuel-qa framework.

To run the test we would then need a 'manager vm' where the fuel-devops
code is executed, plus several empty nodes from nodepool. We'd register
those empty nodes in the fuel-devops database and run the test as usual.
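The driver idea could look roughly like the sketch below. Neither fuel-devops nor nodepool exposes exactly this interface — every class and method name here is an assumption for illustration only:

```python
class NodepoolDriver:
    """Hypothetical fuel-devops backend that leases nodes from nodepool."""

    def __init__(self, client, registry):
        self.client = client      # talks to nodepool
        self.registry = registry  # stands in for the fuel-devops database

    def acquire_nodes(self, count):
        nodes = [self.client.request_node() for _ in range(count)]
        # Register the empty nodes so fuel-qa can treat them like the
        # usual libvirt-backed vms.
        for node in nodes:
            self.registry.add(node)
        return nodes

    def release_nodes(self, nodes):
        for node in nodes:
            self.registry.discard(node)
            self.client.return_node(node)


class FakeNodepool:
    """Stand-in for a nodepool client, for illustration only."""

    def __init__(self):
        self.counter = 0

    def request_node(self):
        self.counter += 1
        return "node-%d" % self.counter

    def return_node(self, node):
        pass


registry = set()
driver = NodepoolDriver(FakeNodepool(), registry)
nodes = driver.acquire_nodes(3)
print(nodes)     # ['node-1', 'node-2', 'node-3']
driver.release_nodes(nodes)
print(registry)  # empty again -> set()
```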

No fuel-devops approach
---

A direct approach would be to use nodepool's service for pre-built images.

Given a Fuel ISO image, we regularly generate one vm with a deployed
Fuel node (step 1.), and one with a basic node deployed with the
bootstrap image (step 2.). These images are stored in Glance or another
storage as usual.

Then for each fuel-library test we request 1 Fuel node and several
basic nodes, and then operate on them directly without wrappers.

For this scenario to work we would need to change a lot of fuel-qa code. But
this approach seems to be aligned with the initiative [8], which is
currently in development: if we manage to describe devops environments
in YAML files, we'd probably be able to map these descriptions to the
multinode configurations, and then support them in nodepool.

Side Question
=============

Can we build a package from a change request so that the package is then
used in the test? Are there any best practices?


P.S. All code used for current implementation of deployment tests in
Fuel CI is open, including puppet manifests for Jenkins master and
slaves and Jenkins Job Builder configs for actual jobs. See links
below.

[1] https://ci.fuel-infra.org/view/ISO/ - nightly ISO builds
[2] https://github.com/openstack/fuel-devops
[3] https://github.com/openstack/fuel-qa
[4] https://docs.fuel-infra.org/fuel-dev/devops.html - test environment setup
[5] 
https://docs.fuel-infra.org/fuel-qa/base_tests.html#module-fuelweb_test.tests.test_neutron
- test scenario
[6] 
https://github.com/openstack/fuel-library/blob/master/specs/fuel-library8.0.spec
[7] 
https://ci.fuel-infra.org/job/master.fuel-library.pkgs.ubuntu.neutron_vlan_ha/2699/console
- test run example
[8] 

Re: [openstack-dev] Help with getting keystone to migrate to Debian testing: fixing repoze.what and friends

2015-11-11 Thread Clint Byrum
Excerpts from Clint Byrum's message of 2015-11-11 10:57:26 -0800:
> Excerpts from Morgan Fainberg's message of 2015-11-10 20:17:12 -0800:
> > On Nov 10, 2015 16:48, "Clint Byrum"  wrote:
> > >
> > > Excerpts from Morgan Fainberg's message of 2015-11-10 15:31:16 -0800:
> > > > On Tue, Nov 10, 2015 at 3:20 PM, Thomas Goirand  wrote:
> > > >
> > > > > Hi there!
> > > > >
> > > > > All of Liberty would be migrating from Sid to Testing (which is the
> > > > > pre-condition for an upload to offical Debian backports) if I didn't
> > > > > have a really annoying situation with the repoze.{what,who} packages.
> > I
> > > > > feel like I could get some help from the Python export folks here.
> > > > >
> > > > > What is it about?
> > > > > =
> > > > >
> > > > > Here's the dependency chain:
> > > > >
> > > > > - Keystone depends on pysaml2.
> > > > > - Pysaml2 depends on python-repoze.who >=2, which I uploaded to Sid.
> > > > > - python-repoze.what depends on python-repoze.who < 1.99
> > > > >
> > > > > Unfortunately, python-repoze.who doesn't migrate to Debian Testing
> > > > > because it would make python-repoze.what broken.
> > > > >
> > > > > To make the situation worse, python-repoze.what build-depends on
> > > > > python-repoze.who-testutil, which itself doesn't work with
> > > > > python-repoze.who >= 2.
> > > > >
> > > > > Note: repoze.who-testutil is within the package
> > > > > python-repoze.who-plugins who also contains 4 other plugins which are
> > > > > all broken with repoze.who >= 2, but the others could be dropped from
> > > > > Debian easily). We can't drop repoze.what completely, because there's
> > > > > turbogears2 and another package who needs it.
> > > > >
> > > > > There's no hope from upstream, as all of these seem to be abandoned
> > > > > projects.
> > > > >
> > > > > So I'm a bit stuck here, helpless, and I don't know how to fix the
> > > > > situation... :(
> > > > >
> > > > > What to fix?
> > > > > 
> > > > > Make repoze.what and repoze.who-testutil work with repoze.who >= 2.
> > > > >
> > > > > Call for help
> > > > > =
> > > > > I'm a fairly experienced package maintainer, but I still consider
> > myself
> > > > > a poor Python coder (probably because I spend all my time packaging
> > > > > rather than programming in Python: I know a way better other
> > programing
> > > > > languages).
> > > > >
> > > > > So I would enjoy a lot having some help here, also because my time is
> > > > > very limited and probably better invested working on packages to
> > assist
> > > > > the whole OpenStack project, rather than upstream code on some weirdo
> > > > > dependencies that I don't fully understand.
> > > > >
> > > > > So, would anyone be able to invest a bit of time, and help me fix the
> > > > > problems with repoze.what / repoze.who in Debian? If you can help,
> > > > > please ping me on IRC.
> > > > >
> > > > > Cheers,
> > > > >
> > > > > Thomas Goirand (zigo)
> > > > >
> > > > >
> > > > It looks like pysaml2 might be ok with < 1.99 of repoze.who here:
> > > > https://github.com/rohe/pysaml2/blob/master/setup.py#L30
> > > >
> > > > I admit I haven't tested it, but the requirements declaration doesn't
> > seem
> > > > to enforce the need for > 2. If that is in-fact the case that > 2 is
> > > > needed, we are a somewhat of an impass with dead/abandonware holding us
> > > > ransom. I'm not sure what the proper handling of that ends up being in
> > the
> > > > debian world.
> > >
> > > repoze.who doesn't look abandoned to me, so it is just repoze.what:
> > >
> > > https://github.com/repoze/repoze.who/commits/master
> > >
> > > who's just not being released (does anybody else smell a Laurel and
> > > Hardy skit coming on?)
> > 
> > Seriously!
> > 
> > >
> > > Also, this may have been something temporary, that then got left around
> > > because nobody bothered to try the released versions:
> > >
> > >
> > https://github.com/repoze/repoze.what/commit/b9fc014c0e174540679678af99f04b01756618de
> > >
> > > note, 2.0a1 wasn't compatible.. but perhaps 2.2 would work fine?
> > >
> > >
> > 
> > Def something to try out. If this is still an outstanding issue next week
> > (when I have a bit more time) I'll see what I can do to test out the
> > variations.
> 
> FYI, I tried 2.0 and it definitely broke repoze.what's test suite. The API
> is simply incompatible (shake your fists at whoever did that please). For
> those not following along: please make a _NEW_ module when you break
> your API.
> 
> On the off chance it could just be dropped, I looked at turbogears2, and
> this seems to be the only line _requiring_ repoze.what-plugins:
> 
> https://github.com/TurboGears/tg2/blob/development/tg/configuration/app_config.py#L1042
> 
> So I opened this issue:
> 
> https://github.com/TurboGears/tg2/issues/69
> 
> Anyway, it seems like less work to just deprecate this particular feature
> that depends on an unmaintained library (which should be 

[openstack-dev] [QA] Meeting Thursday November 12th at 9:00 UTC

2015-11-11 Thread Ken'ichi Ohmichi
[QA] Meeting Thursday November 12th at 9:00 UTC

Hi everyone,

Please reminder that the weekly OpenStack QA team IRC meeting will be
Thursday, October 1st at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones the next
meeting will be at:

04:00 EDT
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT

Thanks



Re: [openstack-dev] [QA] Meeting Thursday November 12th at 9:00 UTC

2015-11-11 Thread Ken'ichi Ohmichi
Hi everyone,

# Sorry for sending this mail again.
# My previous mail contained some bugs.

This is a reminder that the weekly OpenStack QA team IRC meeting will be
Thursday, November 12th at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones the next
meeting will be at:

03:00 EDT
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT

Thanks



[openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-11 Thread Qiao,Liyong

hello all:

I will give an update on Magnum functional testing status. Functional/
integration testing is important to us: since we change/modify the Heat
templates rapidly, we need to verify that the modifications are correct,
so we need to cover all the templates Magnum has. Currently we only have
k8s testing (only tested with the atomic image); we need to add more,
like swarm (WIP) and mesos (under planning), and we may also need to
support the COS image.

Lots of work needs to be done.

We discussed the functional testing time cost during the Tokyo summit;
Adrian expected that we could reduce the time cost to 20 min.

I did some analysis of the functional/integration testing in the gate
pipeline. Taking k8s functional testing as an example, we run the
following test cases:

1) baymodel creation
2) bay(tls_disabled=True) creation/deletion
3) bay(tls_disabled=False) creation to test the k8s api, deleted after testing.


For each stage, the time cost is as follows:

 * devstack prepare: 5-6 mins
 * Running devstack: 15 mins(include downloading atomic image)
 * 1) and 2) 15 mins
 * 3) 15 +3 mins

In total it is about 60 mins currently; an example run is 1h 05m 57s.
see 
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html

for all time stamps.

I don't think it is possible to reduce the time to 20 mins, since devstack
setup alone already takes about 20 mins.


To reduce time, I suggest creating only 1 bay per pipeline and doing
various kinds of testing on this bay; if we want to test some specific
kind of bay (for example, a particular network_driver, etc.), create a
new pipeline.

So, I think we can *delete 2)*, since 3) does similar things
(create/delete); the difference is that 3) uses tls_disabled=False.
*What do you think*?
See https://review.openstack.org/244378 for the time cost; this will
reduce it to 45 min (48m 50s in the example).
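The arithmetic behind the proposal, using the approximate per-stage figures above. Note the 2-minute cost assumed for a baymodel-only stage is my guess, not a measured number, and observed runs add some untracked overhead on top:

```python
# Approximate per-stage costs in minutes, taken from the figures above.
stages = {
    "devstack_prepare": 6,
    "run_devstack": 15,                   # includes atomic image download
    "baymodel_and_tls_disabled_bay": 15,  # steps 1) and 2)
    "tls_enabled_bay": 18,                # step 3): 15 + 3
}

current = sum(stages.values())

# Dropping the tls_disabled bay keeps only the cheap baymodel creation;
# the 2-minute figure here is an assumption, not a measurement.
proposed = dict(stages, baymodel_and_tls_disabled_bay=2)

print(current)                 # 54 tracked minutes of a ~66-minute run
print(sum(proposed.values()))  # 41 tracked minutes of a ~49-minute run
```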


=
For other related functional testing works:
I've done the split of functional testing per COE; we have pipelines as follows:

 * gate-functional-dsvm-magnum-api 30 mins
 * gate-functional-dsvm-magnum-k8s 60 mins

And for the swarm pipeline, the patches are done and under review now
(working fine on gate):

https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao



Re: [openstack-dev] [nova] live migration management approach

2015-11-11 Thread Koniszewski, Pawel


> -Original Message-
> From: Paul Carlton [mailto:paul.carlt...@hpe.com]
> Sent: Tuesday, November 10, 2015 4:51 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [nova] live migration management approach
>
> All
>
> I inherited the task of producing an upstream spec for querying and aborting
> ongoing migrations from a co-worker and submitted a new spec
> https://review.openstack.org/#/c/228828/.  A related spec for pausing an
> instance being migrated to enable the migration to complete is also proposed
> https://review.openstack.org/#/c/229040/.
>
> However I am now wondering whether building these capabilities around the
> migration object is the right way to approach this, or whether it would be
> better to implement some additional operations on the instance object.
>
> I've become aware that details of the progress of a migration operation are
> already available to both cloud admins and instance owners via the server
> get details operation, which reports the progress of migration.
>
> Cancelling a migration could be implemented as a reset-task operation on an
> instance?

A reset-task operation would mean that we need to raise exceptions (bad
request or whatever else) for almost every vm task. How many of them are
really cancelable? Also, I have a feeling that this is a live-migration-related
action, so the current approach is OK with me.

> The proposal in https://review.openstack.org/#/c/229040/ relates to
> providing the facility to pause an instance that is being migrated so that
> the
> migration can complete (i.e. in the case where the instance is dirtying
> memory quicker than the migration processing can copy it).
> This could be implemented by simply allowing the existing pause operation
> on a migrating instance.  The existing implementation of live migration
> means
> that the instance will automatically resume again when the migration
> completes.  We could amend instance pause so that it issues a warning if a
> pause operation is performed on an instance that has a task state of
> migrating so the user is made aware that the pause will only be temporary.

My personal feeling is that we should not reuse the same operation for two
different purposes. IMHO the pause operation should stay as it is: the
operator pauses an instance and it remains paused after live migration
completes. Such a case is valid from the perspective of a user who might not
be aware of the ongoing live migration. However, this might require some work
in libvirt; I'm not sure we can do it without changing or adding libvirt
behavior.
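The two semantics under debate can be sketched as a toy state transition. None of this is real Nova or libvirt code, and 'paused_by_user' is an invented flag separating an explicit operator pause from a pause issued only to let the migration converge:

```python
def on_live_migration_complete(instance):
    """Toy model of the two pause semantics discussed above."""
    if instance.get("paused_by_user"):
        # Proposed semantics: an explicit operator pause survives the
        # migration instead of being silently undone.
        instance["state"] = "paused"
    else:
        # Current behaviour: the instance resumes automatically once
        # the migration finishes.
        instance["state"] = "active"
    return instance["state"]


print(on_live_migration_complete({"paused_by_user": True}))   # paused
print(on_live_migration_complete({"paused_by_user": False}))  # active
```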

Kind Regards,
Pawel Koniszewski




Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-11 Thread Adrian Otto
Eli,

I like this proposed approach. We did have a discussion with a few Stackers 
from openstack-infra in Tokyo to express our interest in using bare metal for 
gate testing. That’s still a way out, but that may be another way to speed this 
up further. A third idea would be to adjust the nova virt driver in our 
devstack image to use libvirt/lxc by default (instead of libvirt/kvm) which 
would allow for bays to be created more rapidly. This would potentially allow 
us to perform repeated bay creations in the same pipeline in a reasonable 
timeframe.

Adrian




Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-11 Thread Cathy Zhang
Agree with Paul and Louis. The networking-sfc repo should be preserved to 
support the service function chain functionality. Flow classifier is just 
needed to specify what flows will go through the service port chain. 

The flow classifier API is designed as a separate plugin which is independent 
of the port chain plugin. We will support the effort of evolving it to a common 
service classifier API and moving it out of the networking-sfc repo when the 
time comes. 

Thanks,
Cathy

-Original Message-
From: Henry Fourie [mailto:louis.fou...@huawei.com] 
Sent: Wednesday, November 11, 2015 2:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic 
classifiers

Paul,
   Agree completely that the networking-sfc repo should be preserved as it 
includes functionality beyond that of just a classifier - it defines the 
service chain structure. 

Work on a common service classifier API could be done by the networking-sfc 
team to help in evaluating that API.

 - Louis   

-Original Message-
From: Paul Carver [mailto:pcar...@paulcarver.us]
Sent: Wednesday, November 11, 2015 1:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic 
classifiers

On 11/10/2015 8:30 AM, Sean M. Collins wrote:
> On Mon, Nov 09, 2015 at 07:58:34AM EST, Jay Pipes wrote:
>
>> 2) Keep the security-group API as-is to keep outward compatibility with AWS.
>> Create a single, new service-groups and service-group-rules API for
>> L2 to L7 traffic classification using mostly the modeling that Sean has put 
>> together.
>> Remove the networking-sfc repo and obselete the classifier spec. Not 
>> sure what should/would happen to the FWaaS API, frankly.
>
> As to the REST-ful API for creating classifiers, I don't know if it 
> should reside in the networking-sfc project. It's a big enough piece 
> that it will most likely need to be its own endpoint and repo, and 
> have stakeholders from other projects, not just networking-sfc. That 
> will take time and quite a bit of wrangling, so I'd like to defer that 
> for a bit and just work on all the services having the same data 
> model, where we can make changes quickly, since they are not visible 
> to API consumers.
>

I agree that the service classifier API should NOT reside in the networking-sfc 
project, but I don't understand why Jay suggests removing the networking-sfc 
repo. The classifier specified by networking-sfc is needed only because there 
isn't a pre-existing classifier API. As soon as we can converge on a common 
classifier API I am completely in favor of using it in place of the one in the 
networking-sfc repo, but SFC is more than just classifying traffic. We need a 
classifier in order to determine which traffic to redirect, but we also need 
the API to specify how to redirect the traffic that has been identified by 
classifiers.





Re: [openstack-dev] [Fuel] Default PostgreSQL server encoding is 'ascii'

2015-11-11 Thread Igor Kalnitsky
Hello,

Yeah, that's true. There shouldn't be any VBox-specific hacks, and I
believe PostgreSQL uses the locale's encoding by default (which is ASCII
in the containers).

Well, basically the bug should be assigned to the library team - we should
specify UTF-8 explicitly in the PostgreSQL config, since Nailgun works
with UTF-8 [1] and we must be in sync here.
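The failure mode is easy to reproduce in plain Python, independent of PostgreSQL: UTF-8 bytes like the ones Nailgun sends simply cannot be interpreted by an ascii-only backend.

```python
node_name = "узел-1"                  # a non-ascii node name a user might type
payload = node_name.encode("utf-8")   # the bytes Nailgun sends to the db

# A UTF-8 database round-trips the value intact:
assert payload.decode("utf-8") == node_name

# A server whose encoding defaults to ascii cannot interpret the bytes:
try:
    payload.decode("ascii")
except UnicodeDecodeError as exc:
    print("ascii backend chokes:", exc.reason)
```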

Thanks,
Igor

[1] 
https://github.com/openstack/fuel-web/blob/35dda0f36c4c5e52bc68492ab7ad154d14747eef/nailgun/nailgun/db/sqlalchemy/__init__.py#L36


On Thu, Nov 5, 2015 at 12:38 AM, Evgeniy L  wrote:
> Hi,
>
> I believe we don't have any VirtualBox specific hacks, especially in terms
> of
> database configuration. By "development env" Vitaly meant fake UI, when
> developer installs and configures the database by himself, without any iso
> images, so probably his db is configured correctly with utf-8.
>
> Also we should make sure that upgrade works correctly, after the problem is
> fixed.
>
> Thanks,
>
> On Wed, Nov 4, 2015 at 2:30 PM, Artem Roma  wrote:
>>
>> Hi, folks!
>>
>> Recently I've been working on this bug [1] and have found that the default
>> encoding of the database server used by Fuel infrastructure components
>> (Nailgun, OSTF, etc.) is ascii. At least this is true for environments set
>> up via the VirtualBox scripts. This situation may cause hard-to-debug
>> problems (and, returning to the bug, already does) when dealing with
>> non-ascii string data supplied by users, such as names for nodes, clusters,
>> etc. Nailgun encodes such data in UTF-8 before sending it to the database,
>> so misinterpretation while saving it is a sure thing.
>>
>> I wonder if we have this situation on all Fuel environments or only on
>> those set up by the VB scripts, because it seems to me a pretty serious
>> flaw in our infrastructure. It would be great to have some comments from
>> people more competent in the relevant areas.
>>
>> [1] https://bugs.launchpad.net/fuel/+bug/1472275
>>
>> --
>> Regards!)
>>



[openstack-dev] [searchlight] Feature request and bug workflow

2015-11-11 Thread Tripp, Travis S
Searchlighters,

When we began this project, we had many discussions about process and made a 
conscious decision to support as lightweight a workflow for feature requests 
as possible. We all discussed how we want to encourage contribution from 
everybody by supporting both developers and non-developers who want to provide 
input, requests for features, and bug fixes. Specifically, we decided that we 
did not want to immediately use a separate spec repo and to try to better 
incorporate our normal documentation repo into the feature request process 
whenever Launchpad didn’t meet our needs.

We did not formally document any of the above, mostly because we didn’t have 
time in Liberty, but also because the concept was still a little nebulous on 
how we would better incorporate our normal documentation processes into the 
feature request process.

Now that we are starting Mitaka, I’ve already encountered a couple of features 
where I felt that we needed a better review tool (e.g. gerrit) than launchpad. 
So, I’ve made an attempt [1] at documenting how we can still follow our 
original intents that I mention above. I also have a dependent feature review 
that follows this process as an example [2].

Please take a look at the workflow proposal review and provide comments. We 
also will discuss this in our weekly meeting. I recommend starting with this 
file: doc/source/feature-requests-bugs.rst

[1] Workflow Proposal - https://review.openstack.org/#/c/243881/
[2] Zero Downtime Feature - https://review.openstack.org/#/c/243386/


Steve,

Regarding your email [3] below: I feel that the associated blueprint is an 
example of one that could benefit from a similar Gerrit review as 
described above. What do you think?

[3] Admin indexing - 
http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/68685

Thanks,
Travis


Re: [openstack-dev] [nova][cinder] About rebuilding volume-backed instances.

2015-11-11 Thread Jeremy Stanley
On 2015-11-11 11:25:09 -0600 (-0600), Chris Friesen wrote:
> I didn't think that the overhead of deleting/creating an instance was *that*
> much different than rebuilding an instance.
> 
> Do you have any information about where the "significant performance
> advantage" was coming from?

The main reason I recall the suggestion coming up is that, due to
IPv4 address starvation in some provider regions, nova boot was
taking an hour or more to return waiting for an IP address to be
assigned. Using nova rebuild would have supposedly avoided the
address churn and thus improved instance turnaround for us.

There may also have been other reasons for recommending rebuild to
us, but if so I don't recall what they were.
-- 
Jeremy Stanley



[openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (greg hill)

2015-11-11 Thread Joshua Harlow

Greetings all stackers,

I propose that we add Greg Hill[1] to the taskflow-core[2] team.

Greg (aka jimbo) has been actively contributing to taskflow for a
while now, both by helping make taskflow better via code
contributions and by helping spread usage/knowledge of taskflow
at Rackspace (since the big-data[3] team uses taskflow internally).
He has provided quality reviews, has a great grasp of the various
taskflow concepts, and is helping make taskflow the best it can be!

Overall I think he would make a great addition to the core review team.

Please respond with +1/-1.

Thanks much!

- Joshua Harlow

[1] https://launchpad.net/~greg-hill
[2] https://launchpad.net/taskflow
[3] http://www.rackspace.com/cloud/big-data



[openstack-dev] [app-catalog] IRC Meeting Thursday November 12th at 17:00UTC

2015-11-11 Thread Christopher Aedo
Greetings! Our next OpenStack Community App Catalog meeting will take
place this Thursday November 12th at 17:00 UTC in #openstack-meeting-3

The agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Please add agenda items if there's anything specific you would like to
discuss (or of course if the meeting time is not convenient for you
join us on IRC #openstack-app-catalog).

Please join us if you can!

-Christopher



Re: [openstack-dev] [stable] Making stable maintenance its own OpenStack project team

2015-11-11 Thread Matt Riedemann



On 11/11/2015 8:51 AM, Flavio Percoco wrote:

On 09/11/15 21:30 -0600, Matt Riedemann wrote:

On 11/9/2015 9:12 PM, Matthew Treinish wrote:

On Mon, Nov 09, 2015 at 10:54:43PM +, Kuvaja, Erno wrote:

On Mon, Nov 09, 2015 at 05:28:45PM -0500, Doug Hellmann wrote:

Excerpts from Matt Riedemann's message of 2015-11-09 16:05:29 -0600:


On 11/9/2015 10:41 AM, Thierry Carrez wrote:

Hi everyone,

A few cycles ago we set up the Release Cycle Management team which
was a bit of a frankenteam of the things I happened to be leading:
release management, stable branch maintenance and vulnerability
management.

While you could argue that there was some overlap between those
functions (as in, "all these things need to be released") logic
was not the primary reason they were put together.

When the Security Team was created, the VMT was spinned out of the
Release Cycle Management team and joined there. Now I think we
should spin out stable branch maintenance as well:

* A good chunk of the stable team work used to be stable point
release management, but as of stable/liberty this is now done by
the release management team and triggered by the project-specific
stable maintenance teams, so there is no more overlap in tooling
used there

* Following the kilo reform, the stable team is now focused on
defining and enforcing a common stable branch policy[1], rather
than approving every patch. Being more visible and having more
dedicated members can only help in that very specific mission

* The release team is now headed by Doug Hellmann, who is focused
on release management and does not have the history I had with
stable branch policy. So it might be the right moment to refocus
release management solely on release management and get the stable
team its own leadership

* Empowering that team to make its own decisions, giving it more
visibility and recognition will hopefully lead to more resources
being dedicated to it

* If the team expands, it could finally own stable branch health
and gate fixing. If that ends up all falling under the same roof,
that team could make decisions on support timeframes as well,
since it will be the primary resource to make that work


Isn't this kind of already what the stable maint team does? Well,
that and some QA people like mtreinish and sdague.



So.. good idea ? bad idea ? What do current stable-maint-core[2]
members think of that ? Who thinks they could step up to lead that
team ?


[1]
http://docs.openstack.org/project-team-guide/stable-branches.html
[2] https://review.openstack.org/#/admin/groups/530,members



With the decentralizing of the stable branch stuff in Liberty [1] it
seems like there would be less use for a PTL for stable branch
maintenance - the cats are now herding themselves, right? Or at
least that's the plan as far as I understood it. And the existing
stable branch wizards are more or less around for help and answering
questions.


The same might be said about releasing from master and the release
management team. There's still some benefit to having people dedicated
to making sure projects all agree to sane policies and to keep up with
deliverables that need to be released.


Except the distinction is that relmgt is actually producing something.
Relmgt has the releases repo which does centralize library releases,
reno to do the release notes, etc. What does the global stable core do?
Right now it's there almost entirely to just add people to the project
specific stable core teams.

-Matt Treinish



I'd like to move the discussion from what are the roles of the
current stable-maint-core and more towards what the benefits would
be having a stable-maint team rather than the -core group alone.

Personally I think the stable maintenance should be quite a lot more
than unblocking gate and approving people allowed to merge to the
stable branches.



Sure, but that's not what we're talking about here, is it? The other
tasks, like backporting changes for example, have been taken on by
project teams. Even in your other email you mentioned that you've been
doing backports and other tasks that you consider stable maint in a
Glance-only context. That's something we changed in kilo which ttx
referenced in [1] to enable that to happen, and it was the only way to
scale things.

The discussion here is about the cross project effort around stable
branches, which by design is a more limited scope now. Right now the
cross project effort around stable branch policy is really 2 things
(both of which ttx already mentioned):

1. Keeping the gates working on the stable branches
2. Defining and enforcing stable branch policy.

The only lever on #2 is that the global stable-maint-core is the only
group which has add permissions to the per project stable core groups.
(also the stable branch policy wiki, but that rarely changes) We
specifically shrunk it to these 2 things in [1]. Well, really 3 things
there, but since we're not doing integrated stable point releases in
the future it's now 

[openstack-dev] [oslo][all] Oslo libraries dropping python 2.6 compatability

2015-11-11 Thread Davanum Srinivas
Folks,

Any concerns? please chime in:
https://review.openstack.org/244275

Thanks
-- Dims

-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [release][stackalytics] possibly breaking bug closing statistics?

2015-11-11 Thread Robert Collins
On 12 November 2015 at 03:17, Ilya Shakhat  wrote:
> 2015-11-11 16:38 GMT+03:00 Thierry Carrez :
>>
>>
>> date_fix_committed is probably not set if we directly switch the bug to
>> "Fix released", and that is what we plan to do now with Launchpad bugs.
>>
>> We might therefore need a backward-compatible patch to Stackalytics so
>> that it uses (date_fix_committed or date_fix_released) instead.
>
>
> Good point, Thierry.
>
> I have one bug that was transferred from New directly into Fix Released
> state
> (https://bugs.launchpad.net/stackalytics/+bug/1479791). Launchpad sets all
> intermediate
> states, including date_fix_committed:
> "date_fix_committed": "2015-08-03T08:37:49.270140+00:00",
> "date_fix_released": "2015-08-03T08:37:49.270140+00:00",
>
> Not sure if it's documented behavior or not, so the patch to Stackalytics
> would probably
> be preferred.

Released implies committed: being defensive here won't hurt but is IMO
entirely unneeded.
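
For reference, the backward-compatible lookup discussed above is a
one-liner either way. Here is a sketch, assuming the bug record is a
dict shaped like the Launchpad excerpt quoted earlier; Stackalytics'
actual record type may differ:

```python
def resolution_date(bug):
    # Prefer date_fix_committed, but tolerate records where Launchpad
    # only populated date_fix_released -- the defensive fallback
    # suggested in this thread.
    return bug.get("date_fix_committed") or bug.get("date_fix_released")

# Bug moved straight to Fix Released: Launchpad filled in both fields
# (per the example quoted above), so either path gives the same answer.
bug = {
    "date_fix_committed": "2015-08-03T08:37:49.270140+00:00",
    "date_fix_released": "2015-08-03T08:37:49.270140+00:00",
}
print(resolution_date(bug))

# Hypothetical record with only date_fix_released set.
legacy = {"date_fix_committed": None,
          "date_fix_released": "2015-11-11T00:00:00+00:00"}
print(resolution_date(legacy))
```

Whether or not the fallback is strictly needed, it costs nothing and
tolerates records where only date_fix_released was set.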

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Neutron] Stop pushing stuff to the gate queue

2015-11-11 Thread Armando M.
On 10 November 2015 at 12:13, Armando M.  wrote:

> Neutron Cores,
>
> We have high failure rate (see [1] for more context). We have an initial
> bug report [2] filed, and more triaged is happening.
>
> Let's hold on before we push stuff to the gate queue. Once [2] is solved
> and the fire is put out, we'll resume the merge frenzy.
>
> As a general reminder, please be conscious of failure rate of [3], and pay
> attention to the Neutron dashboards [4], it helps us detect issues sooner
> rather than later.
>
>
A follow up:

The gate is on fire: in master we have [1,2,3,4] sneaking roughly at the
same time, and Liberty is partially broken due to lbaas and releasenotes
issues.

You might have seen that by now, so bear with us whilst we restore sanity.

A.

[1] https://bugs.launchpad.net/neutron/+bug/1514935
[2] https://bugs.launchpad.net/neutron/+bug/1515335
[3] https://bugs.launchpad.net/neutron/+bug/1515118
[4] https://bugs.launchpad.net/neutron/+bug/1515035


> Cheers,
> Armando
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-November/079133.html
> [2] https://bugs.launchpad.net/neutron/+bug/1514935
> [3] http://tinyurl.com/ne3ex4v
> [4] http://docs.openstack.org/developer/neutron/#dashboards
>


Re: [openstack-dev] Help with getting keystone to migrate to Debian testing: fixing repoze.what and friends

2015-11-11 Thread Morgan Fainberg
On Nov 11, 2015 10:57, "Clint Byrum"  wrote:
>
> Excerpts from Morgan Fainberg's message of 2015-11-10 20:17:12 -0800:
> > On Nov 10, 2015 16:48, "Clint Byrum"  wrote:
> > >
> > > Excerpts from Morgan Fainberg's message of 2015-11-10 15:31:16 -0800:
> > > > On Tue, Nov 10, 2015 at 3:20 PM, Thomas Goirand 
wrote:
> > > >
> > > > > Hi there!
> > > > >
> > > > > All of Liberty would be migrating from Sid to Testing (which is
> > > > > the pre-condition for an upload to official Debian backports) if
> > > > > I didn't have a really annoying situation with the
> > > > > repoze.{what,who} packages. I feel like I could get some help
> > > > > from the Python expert folks here.
> > > > >
> > > > > What is it about?
> > > > > =
> > > > >
> > > > > Here's the dependency chain:
> > > > >
> > > > > - Keystone depends on pysaml2.
> > > > > - Pysaml2 depends on python-repoze.who >=2, which I uploaded
> > > > > to Sid.
> > > > > - python-repoze.what depends on python-repoze.who < 1.99
> > > > >
> > > > > Unfortunately, python-repoze.who doesn't migrate to Debian Testing
> > > > > because it would make python-repoze.what broken.
> > > > >
> > > > > To make the situation worse, python-repoze.what build-depends on
> > > > > python-repoze.who-testutil, which itself doesn't work with
> > > > > python-repoze.who >= 2.
> > > > >
> > > > > Note: repoze.who-testutil is within the package
> > > > > python-repoze.who-plugins, which also contains 4 other plugins
> > > > > which are all broken with repoze.who >= 2, but the others could
> > > > > be dropped from Debian easily). We can't drop repoze.what
> > > > > completely, because there's turbogears2 and another package
> > > > > which needs it.
> > > > >
> > > > > There's no hope from upstream, as all of these seem to be
> > > > > abandoned projects.
> > > > >
> > > > > So I'm a bit stuck here, helpless, and I don't know how to fix the
> > > > > situation... :(
> > > > >
> > > > > What to fix?
> > > > > 
> > > > > Make repoze.what and repoze.who-testutil work with repoze.who >=
2.
> > > > >
> > > > > Call for help
> > > > > =
> > > > > I'm a fairly experienced package maintainer, but I still consider
> > > > > myself a poor Python coder (probably because I spend all my time
> > > > > packaging rather than programming in Python: I know other
> > > > > programming languages way better).
> > > > >
> > > > > So I would enjoy a lot having some help here, also because my
> > > > > time is very limited and probably better invested working on
> > > > > packages to assist the whole OpenStack project, rather than
> > > > > upstream code on some weirdo dependencies that I don't fully
> > > > > understand.
> > > > >
> > > > > So, would anyone be able to invest a bit of time, and help me
> > > > > fix the problems with repoze.what / repoze.who in Debian? If
> > > > > you can help, please ping me on IRC.
> > > > >
> > > > > Cheers,
> > > > >
> > > > > Thomas Goirand (zigo)
> > > > >
> > > > >
> > > > It looks like pysaml2 might be ok with < 1.99 of repoze.who here:
> > > > https://github.com/rohe/pysaml2/blob/master/setup.py#L30
> > > >
> > > > I admit I haven't tested it, but the requirements declaration
> > > > doesn't seem to enforce the need for > 2. If that is in fact the
> > > > case that > 2 is needed, we are somewhat at an impasse with
> > > > dead/abandonware holding us ransom. I'm not sure what the proper
> > > > handling of that ends up being in the debian world.
> > >
> > > repoze.who doesn't look abandoned to me, so it is just repoze.what:
> > >
> > > https://github.com/repoze/repoze.who/commits/master
> > >
> > > who's just not being released (does anybody else smell a Laurel and
> > > Hardy skit coming on?)
> >
> > Seriously!
> >
> > >
> > > Also, this may have been something temporary, that then got left
> > > around because nobody bothered to try the released versions:
> > >
> > > https://github.com/repoze/repoze.what/commit/b9fc014c0e174540679678af99f04b01756618de
> > >
> > > note, 2.0a1 wasn't compatible.. but perhaps 2.2 would work fine?
> > >
> > >
> >
> > Def something to try out. If this is still an outstanding issue next
> > week (when I have a bit more time) I'll see what I can do to test out the
> > variations.
>
> FYI, I tried 2.0 and it definitely broke repoze.what's test suite. The API
> is simply incompatible (shake your fists at whoever did that please). For
> those not following along: please make a _NEW_ module when you break
> your API.
>
> On the off chance it could just be dropped, I looked at turbogears2, and
> this seems to be the only line _requiring_ repoze.what-plugins:
>
>
https://github.com/TurboGears/tg2/blob/development/tg/configuration/app_config.py#L1042
>
> So I opened this issue:
>
> https://github.com/TurboGears/tg2/issues/69
>
> Anyway, it seems like less work to just deprecate this particular feature
> that depends on an unmaintained library (which should be grounds 

Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (greg hill)

2015-11-11 Thread Davanum Srinivas
+1 from me!

On Wed, Nov 11, 2015 at 3:02 PM, Joshua Harlow  wrote:
> Greetings all stackers,
>
> I propose that we add Greg Hill[1] to the taskflow-core[2] team.
>
> Greg (aka jimbo) has been actively contributing to taskflow for a
> while now, both in helping make taskflow better via code
> contribution(s) and by helping spread more usage/knowledge of taskflow
> at rackspace (since the big-data[3] team uses taskflow internally).
> He has helped provide quality reviews and is doing an awesome job
> with the various taskflow concepts and helping make taskflow the best
> it can be!
>
> Overall I think he would make a great addition to the core review team.
>
> Please respond with +1/-1.
>
> Thanks much!
>
> - Joshua Harlow
>
> [1] https://launchpad.net/~greg-hill
> [2] https://launchpad.net/taskflow
> [3] http://www.rackspace.com/cloud/big-data
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [oslo][all] Oslo libraries dropping python 2.6 compatability

2015-11-11 Thread Kevin L. Mitchell
On Wed, 2015-11-11 at 14:14 -0500, Davanum Srinivas wrote:
> Any concerns? please chime in:
> https://review.openstack.org/244275

Commented on the review, but I have to point out that python-novaclient,
which, to my knowledge, still supports Python 2.6, also happens to
depend on oslo.i18n, oslo.serialization, and oslo.utils.  If we drop
Python 2.6 compatibility on any of those three, we would also have to
drop it from novaclient (and potentially other clients).  Perhaps we
need to have a discussion about whether the clients still need to
support Python 2.6?
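
For context, a couple of the constructs that motivate the drop: they
are valid on Python 2.7/3.x but fail outright on 2.6. These are generic
illustrations, not taken from any oslo library:

```python
# Dict and set comprehensions: SyntaxError on Python 2.6.
squares = {n: n * n for n in range(4)}
evens = {n for n in range(10) if n % 2 == 0}

# Auto-numbered format fields ("{}" instead of "{0}"):
# raises ValueError on 2.6's str.format.
msg = "{} squared is {}".format(3, squares[3])

print(sorted(evens))  # [0, 2, 4, 6, 8]
print(msg)            # 3 squared is 9
```

Libraries that keep 2.6 in their support matrix have to avoid all of
the above, which is part of why dropping it in oslo (and, by extension,
the clients that depend on oslo) is attractive.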
-- 
Kevin L. Mitchell 
Rackspace




Re: [openstack-dev] [oslo][all] Oslo libraries dropping python 2.6 compatability

2015-11-11 Thread Davanum Srinivas
Kevin,

right, that's exactly the intention of this email :) we should drop
py2.6 from python-novaclient as well

-- Dims

On Wed, Nov 11, 2015 at 3:07 PM, Kevin L. Mitchell
 wrote:
> On Wed, 2015-11-11 at 14:14 -0500, Davanum Srinivas wrote:
>> Any concerns? please chime in:
>> https://review.openstack.org/244275
>
> Commented on the review, but I have to point out that python-novaclient,
> which, to my knowledge, still supports Python 2.6, also happens to
> depend on oslo.i18n, oslo.serialization, and oslo.utils.  If we drop
> Python 2.6 compatibility on any of those three, we would also have to
> drop it from novaclient (and potentially other clients).  Perhaps we
> need to have a discussion about whether the clients still need to
> support Python 2.6?
> --
> Kevin L. Mitchell 
> Rackspace
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [ceilometer] proposal to add Rohit Jaiswal to Ceilometer core

2015-11-11 Thread gord chung

thanks for the feedback.

it's my pleasure to welcome Rohit to the Ceilometer core team. everyone 
back to work! :)


On 05/11/15 08:45 AM, gord chung wrote:

hi folks,

i'd like to nominate Rohit Jaiswal as core for Ceilometer. he's done a
lot of good work recently like discovering and fixing many issues with
Events and implementing the configuration reloading functionality. he's
also been very active providing input and fixes for many bugs.


as we've been doing, please vote here: 
https://review.openstack.org/#/c/242058/


reviews:
https://review.openstack.org/#/q/reviewer:%22Rohit+Jaiswal+%253Crohit.jaiswal%2540hp.com%253E%22+project:openstack/ceilometer,n,z 



patches:
https://review.openstack.org/#/q/owner:%22Rohit+Jaiswal+%253Crohit.jaiswal%2540hp.com%253E%22+project:openstack/ceilometer,n,z 

https://review.openstack.org/#/q/owner:%22Rohit+Jaiswal+%253Crohit.jaiswal%2540hp.com%253E%22+project:openstack/python-ceilometerclient,n,z 



cheers,



--
gord




Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (greg hill)

2015-11-11 Thread Doug Hellmann
+1

Excerpts from Davanum Srinivas (dims)'s message of 2015-11-11 15:05:37 -0500:
> +1 from me!
> 
> On Wed, Nov 11, 2015 at 3:02 PM, Joshua Harlow  wrote:
> > Greetings all stackers,
> >
> > I propose that we add Greg Hill[1] to the taskflow-core[2] team.
> >
> > Greg (aka jimbo) has been actively contributing to taskflow for a
> > while now, both in helping make taskflow better via code
> > contribution(s) and by helping spread more usage/knowledge of taskflow
> > at rackspace (since the big-data[3] team uses taskflow internally).
> > He has helped provide quality reviews and is doing an awesome job
> > with the various taskflow concepts and helping make taskflow the best
> > it can be!
> >
> > Overall I think he would make a great addition to the core review team.
> >
> > Please respond with +1/-1.
> >
> > Thanks much!
> >
> > - Joshua Harlow
> >
> > [1] https://launchpad.net/~greg-hill
> > [2] https://launchpad.net/taskflow
> > [3] http://www.rackspace.com/cloud/big-data
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



[openstack-dev] [reno] [release] Where do release notes of stable branches go?

2015-11-11 Thread Kirill Zaitsev
I’m setting up reno for the murano repository and have been testing and
playing with it a bit, and this question seems unclear to me.

So where should we put release notes for stable releases? Into respective 
branches or into master?

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc


Re: [openstack-dev] [Fuel][Fuel-QA][Fuel-TechDebt] Code Quality: Do Not Hardcode - Fix Things Instead

2015-11-11 Thread Igor Kalnitsky
Folks,

I have one thing to add: if a workaround is unavoidable, please DO
comment it. Usually workarounds aren't obvious, and it would be
incredibly helpful to comment all of them; and do not hesitate to
write extensive comments. The clearer you write, the less time your
colleagues will spend next time they touch such code.
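
As a concrete illustration of the convention being discussed (the
function, field names, and bug number below are all made up):

```python
def get_node_status(node):
    # TODO(anyone): Remove this workaround once the agent reports its
    # status atomically; tracked in hypothetical tech-debt bug #1234567.
    #
    # The agent may briefly report an empty status while restarting.
    # Treat that transient state as "provisioning" instead of surfacing
    # an error to the operator.
    status = node.get("status")
    if not status:
        return "provisioning"
    return status

print(get_node_status({"status": ""}))       # provisioning
print(get_node_status({"status": "ready"}))  # ready
```

The point is that the TODO names an owner, says when the workaround can
go away, and links it to a tracking bug, so it can be found and removed
later instead of silently accumulating.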

Thanks,
Igor

On Wed, Nov 11, 2015 at 2:58 AM, Vladimir Kuklin  wrote:
> Matthew
>
> Thanks for your feedback. Could you please elaborate more on the statistics
> of such tech-debt eliminations? My perception is that such bugs do not ever
> get fixed, jeopardizing our efforts on bugfixing and making our
> statistics manipulative.
>
> So far my suggestion is the following - if you can, please do not introduce
> workarounds. If you have - introduce a TODO/FIXME comment for it in the code
> and create a tech-debt bug. If you see something of that kind that is
> already there and does not have such a comment - add this TODO/FIXME and
> create a tech-debt bug.
>
> So this is a best effort initiative, but I would encourage core reviewers to
> be stricter with such workarounds and hacks - please, do not get them pass
> through your hands unless there is a really good reason to merge this code
> with these hacks right now.
>
> On Wed, Nov 11, 2015 at 1:43 PM, Matthew Mosesohn 
> wrote:
>>
>> Vladimir,
>>
>> Bugfixes and minor refactoring often belong in separate commits. Combining
>> "extending foo to enable bar in XYZ" with "ensuring logs from service abc
>> are sent via syslog" often makes little sense to code reviewers. In this
>> case it is a feature enhancement + a bugfix.
>>
>> Looking at it from one perspective, if the bugfix is made poorly without a
>> feature commit, then it looks like the scenario you described. However, it
>> has the benefit that it can be cleanly backported. If we simply reverse the
>> order of the commits (untangling the workaround), we get the same result,
>> but get flamed.
>>
>> Sometimes both approaches are necessary. I agree that not growing tech
>> debt is important, but perceptions really depend on trends over 3+ weeks.
>> It's possible that such tech debt bugs are created and solved within 2-3
>> days of the workaround. I know that's the exception, but I think we should
>> be most concerned about what happens when we carry tech debt across entire
>> Fuel releases.
>>
>> On Nov 11, 2015 10:28 AM, "Aleksandr Didenko" 
>> wrote:
>>>
>>> +1 from me
>>>
>>> On Tue, Nov 10, 2015 at 6:38 PM, Stanislaw Bogatkin
>>>  wrote:

 I think that it is excellent thought.
 +1

 On Tue, Nov 10, 2015 at 6:52 PM, Vladimir Kuklin 
 wrote:
>
> Folks
>
> I wanted to raise awareness about one of the things I captured while
> doing reviews recently - we are sacrificing quality to bugfixing and 
> feature
> development velocity, essentially moving from one heap to another - from
> bugs/features to 'tech-debt' bugs.
>
> I understand that we all have deadlines and need to meet them. But,
> folks, let's create the following policy:
>
> 1) do not introduce hacks/workarounds/kludges if it is possible.
> 2) while fixing things if you have a hack/workaround/kludge that you
> need to work with - think of removing it instead of enhancing and 
> extending
> it. If it is possible - fix it. Do not let our technical debt grow.
> 3) if there is no way to avoid kludge addition/enhancing, if there is
> no way to remove it - please, add a 'TODO/FIXME' line above it, so that we
> can collect them in the future and fix them gradually.
>
> I suggest to add this requirement into code-review policy.
>
> What do you think about this?
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuk...@mirantis.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>

Re: [openstack-dev] Help with getting keystone to migrate to Debian testing: fixing repoze.what and friends

2015-11-11 Thread Clint Byrum
Excerpts from Morgan Fainberg's message of 2015-11-10 20:17:12 -0800:
> On Nov 10, 2015 16:48, "Clint Byrum"  wrote:
> >
> > Excerpts from Morgan Fainberg's message of 2015-11-10 15:31:16 -0800:
> > > On Tue, Nov 10, 2015 at 3:20 PM, Thomas Goirand  wrote:
> > >
> > > > Hi there!
> > > >
> > > > All of Liberty would be migrating from Sid to Testing (which is the
> > > > pre-condition for an upload to official Debian backports) if I didn't
> > > > have a really annoying situation with the repoze.{what,who} packages.
> > > > I feel like I could get some help from the Python expert folks here.
> > > >
> > > > What is it about?
> > > > =
> > > >
> > > > Here's the dependency chain:
> > > >
> > > > - Keystone depends on pysaml2.
> > > > - Pysaml2 depends on python-repoze.who >=2, which I uploaded to Sid.
> > > > - python-repoze.what depends on python-repoze.who < 1.99
> > > >
> > > > Unfortunately, python-repoze.who doesn't migrate to Debian Testing
> > > > because it would make python-repoze.what broken.
> > > >
> > > > To make the situation worse, python-repoze.what build-depends on
> > > > python-repoze.who-testutil, which itself doesn't work with
> > > > python-repoze.who >= 2.
> > > >
> > > > Note: repoze.who-testutil is within the package
> > > > python-repoze.who-plugins who also contains 4 other plugins which are
> > > > all broken with repoze.who >= 2, but the others could be dropped from
> > > > Debian easily). We can't drop repoze.what completely, because there's
> > > > turbogears2 and another package who needs it.
> > > >
> > > > There's no hope from upstream, as all of these seem to be abandoned
> > > > projects.
> > > >
> > > > So I'm a bit stuck here, helpless, and I don't know how to fix the
> > > > situation... :(
> > > >
> > > > What to fix?
> > > > 
> > > > Make repoze.what and repoze.who-testutil work with repoze.who >= 2.
> > > >
> > > > Call for help
> > > > =
> > > > I'm a fairly experienced package maintainer, but I still consider
> > > > myself a poor Python coder (probably because I spend all my time
> > > > packaging rather than programming in Python: I know other
> > > > programming languages way better).
> > > >
> > > > So I would enjoy a lot having some help here, also because my time is
> > > > very limited and probably better invested working on packages to
> assist
> > > > the whole OpenStack project, rather than upstream code on some weirdo
> > > > dependencies that I don't fully understand.
> > > >
> > > > So, would anyone be able to invest a bit of time, and help me fix the
> > > > problems with repoze.what / repoze.who in Debian? If you can help,
> > > > please ping me on IRC.
> > > >
> > > > Cheers,
> > > >
> > > > Thomas Goirand (zigo)
> > > >
> > > >
> > > It looks like pysaml2 might be ok with < 1.99 of repoze.who here:
> > > https://github.com/rohe/pysaml2/blob/master/setup.py#L30
> > >
> > > I admit I haven't tested it, but the requirements declaration doesn't
> seem
> > > to enforce the need for > 2. If that is in-fact the case that > 2 is
> > > needed, we are somewhat at an impasse with dead/abandonware holding us
> > > ransom. I'm not sure what the proper handling of that ends up being in
> the
> > > debian world.
> >
> > repoze.who doesn't look abandoned to me, so it is just repoze.what:
> >
> > https://github.com/repoze/repoze.who/commits/master
> >
> > who's just not being released (does anybody else smell a Laurel and
> > Hardy skit coming on?)
> 
> Seriously!
> 
> >
> > Also, this may have been something temporary, that then got left around
> > because nobody bothered to try the released versions:
> >
> >
> https://github.com/repoze/repoze.what/commit/b9fc014c0e174540679678af99f04b01756618de
> >
> > note, 2.0a1 wasn't compatible.. but perhaps 2.2 would work fine?
> >
> >
> 
> Def something to try out. If this is still an outstanding issue next week
> (when I have a bit more time) I'll see what I can do to test out the
> variations.

FYI, I tried 2.0 and it definitely broke repoze.what's test suite. The API
is simply incompatible (shake your fists at whoever did that please). For
those not following along: please make a _NEW_ module when you break
your API.

On the off chance it could just be dropped, I looked at turbogears2, and
this seems to be the only line _requiring_ repoze.what-plugins:

https://github.com/TurboGears/tg2/blob/development/tg/configuration/app_config.py#L1042

So I opened this issue:

https://github.com/TurboGears/tg2/issues/69

Anyway, it seems like less work to just deprecate this particular feature
that depends on an unmaintained library (which should be grounds enough
to remove python-repoze.what and python-repoze.what-plugins). Then the
dep can be dropped from python-turbogears2 and tg2-devtools.


Re: [openstack-dev] [oslo.messaging] State wrapping in the MessageHandlingServer

2015-11-11 Thread Joshua Harlow

Matthew Booth wrote:

On Tue, Nov 10, 2015 at 6:46 PM, Joshua Harlow > wrote:

Matthew Booth wrote:

My patch to MessageHandlingServer is currently being reverted because it
broke Nova tests:

https://review.openstack.org/#/c/235347/

Specifically it causes a number of tests to take a very long time to
execute, which ultimately results in the total build time limit being
exceeded. This is very easy to replicate. The test
nova.tests.functional.test_server_group.ServerGroupTest.test_boot_servers_with_affinity
is an example test which will always hit this issue. The problem is
that ServerGroupTest.setUp() does:

  self.compute2 = self.start_service('compute',
host='host2')
  self.addCleanup(self.compute2.kill)

The problem with this is that start_service() adds a fixture
which also
adds kill as a cleanup method. kill does stop(), wait(). This
means that
the resulting call order is: start, stop, wait, stop, wait. The
redundant call to kill is obviously a wart, but I feel we should
have
handled it anyway.

The problem is that we decided it should be possible to restart a
server. There are some unit tests in oslo.messaging that do
this. It's
not clear to me that there are any projects which do this, but after
this experience I feel like it would be good to check before
changing it :)

The implication of that is that after wait() the state wraps, and we're
now waiting on start() again. Consequently, the second cleanup call
hangs.

We could fix Nova (at least the usage we have seen) by removing the
wrapping. After wait() if you want to start a server again you
need to
create a new one.

So, to be specific, lets consider the following 2 call sequences:

1. start stop wait stop wait
2. start stop wait start stop wait

What should they do? The behaviours with and without wrapping are:

1. start stop wait stop wait
WRAP: start stop wait HANG HANG
NO WRAP: start stop wait NO-OP NO-OP

2. start stop wait start stop wait
WRAP: start stop wait start stop wait
NO WRAP: start stop wait NO-OP NO-OP NO-OP
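For illustration, a minimal sketch of the NO WRAP behaviour (this is not
the actual oslo.messaging implementation; the class and state names are
invented):

```python
# Sketch only: after wait() further stop()/wait() calls are no-ops,
# and a second start() raises instead of hanging.
class Server(object):
    def __init__(self):
        self._state = 'init'

    def start(self):
        if self._state == 'stopped':
            raise RuntimeError('restart unsupported; create a new server')
        self._state = 'running'

    def stop(self):
        if self._state == 'running':
            self._state = 'stopping'

    def wait(self):
        if self._state == 'stopping':
            self._state = 'stopped'


s = Server()
s.start(); s.stop(); s.wait()
s.stop(); s.wait()   # sequence 1: the extra stop/wait are no-ops, no hang
try:
    s.start()        # sequence 2: restart is refused explicitly
except RuntimeError as e:
    print(e)
```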

I'll refresh my memory on what they did before my change in the
morning.
Perhaps it might be simpler to codify the current behaviour, but
iirc I
proposed this because it was previously undefined due to races.


I personally prefer not allowing restarting, its needless code
complexity imho and a feature that people imho probably aren't using
anyway (just create a new server object if u are doing this), so I'd
be fine with doing the above NO WRAP and turning those into NO-OPs
(and for example raising a runtime error in the case of start stop
wait start ... to denote that restarting isn't
recommended/possible). If we have a strong enough reason to really
need to start stop wait start ...

I might be convinced the code complexity is worth it but for now I'm
not convinced...


I agree, and in the hopefully unlikely event that we did break anybody,
at least they would get an obvious exception rather than a hang. A
lesson from breaking nova was that the log messages were generated and
were available in the failed test runs, but nobody noticed them.

Incidentally, I think I'd also merge my second patch into the first
before resubmitting, which adds timeouts and the option not to log.



+1

Makes sense to me.

IMHO we can remove the log output later if we determine it's too noisy
for folks (and/or not helping)...



Matt



Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (greg hill)

2015-11-11 Thread Min Pae
+1 Greg has been actively contributing to taskflow with code, code reviews,
and general discussions, and helping users.  It would be great to have him
as a core.

On Wed, Nov 11, 2015 at 12:02 PM, Joshua Harlow 
wrote:

> Greetings all stackers,
>
> I propose that we add Greg Hill[1] to the taskflow-core[2] team.
>
> Greg (aka jimbo) has been actively contributing to taskflow for a
> while now, both in helping make taskflow better via code
> contribution(s) and by helping spread more usage/knowledge of taskflow
> at rackspace (since the big-data[3] team uses taskflow internally).
> He has provided quality reviews and is doing an awesome job
> with the various taskflow concepts and helping make taskflow the best
> it can be!
>
> Overall I think he would make a great addition to the core review team.
>
> Please respond with +1/-1.
>
> Thanks much!
>
> - Joshua Harlow
>
> [1] https://launchpad.net/~greg-hill
> [2] https://launchpad.net/taskflow
> [3] http://www.rackspace.com/cloud/big-data
>


Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-11 Thread Paul Carver

On 11/10/2015 8:30 AM, Sean M. Collins wrote:

On Mon, Nov 09, 2015 at 07:58:34AM EST, Jay Pipes wrote:


2) Keep the security-group API as-is to keep outward compatibility with AWS.
Create a single, new service-groups and service-group-rules API for L2 to L7
traffic classification using mostly the modeling that Sean has put together.
> Remove the networking-sfc repo and obsolete the classifier spec. Not sure
what should/would happen to the FWaaS API, frankly.


As to the REST-ful API for creating classifiers, I don't know if it
should reside in the networking-sfc project. It's a big enough piece
that it will most likely need to be its own endpoint and repo, and have
stakeholders from other projects, not just networking-sfc. That will
take time and quite a bit of wrangling, so I'd like to defer that for a
bit and just work on all the services having the same data model, where
we can make changes quickly, since they are not visible to API
consumers.



I agree that the service classifier API should NOT reside in the 
networking-sfc project, but I don't understand why Jay suggests removing 
the networking-sfc repo. The classifier specified by networking-sfc is 
needed only because there isn't a pre-existing classifier API. As soon 
as we can converge on a common classifier API I am completely in favor 
of using it in place of the one in the networking-sfc repo, but SFC is 
more than just classifying traffic. We need a classifier in order to 
determine which traffic to redirect, but we also need the API to specify 
how to redirect the traffic that has been identified by classifiers.






Re: [openstack-dev] [Fuel][fuel] How can I install Redhat-OSP using Fuel

2015-11-11 Thread Igor Kalnitsky
Hey Fei LU,

Thanks for being interested in Fuel. I'll help you with pleasure.

First of all, as Vladimir mentioned, you need to create a new release.
That could be done by a POST request to /api/v1/releases/. You can use
the JSON of the CentOS release with slight changes. When the release is
created you need to do two things:

1. Prepare a provisioning image and make it available via Nginx. Please
ensure you have the correct path to this image in your recently created
RedHat release.

2. Populate RedHat release with deployment tasks. It could be done by
executing the following command:

fuel rel --sync-deployment-tasks --dir "/etc/puppet/{release-version}"
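To illustrate the release-creation step, a rough sketch of building the
POST payload; the field names below are illustrative only, so take the
real JSON from a GET to /api/v1/releases/ rather than trusting them:

```python
import json

# Hypothetical subset of an existing CentOS release definition
# (fetch the real one with GET /api/v1/releases/).
centos_release = {
    'name': 'Kilo on CentOS 6.5',
    'version': '2015.1.0-7.0',
    'operating_system': 'CentOS',
}

# Clone it and adjust for RedHat (values are illustrative).
redhat_release = dict(centos_release)
redhat_release.update({
    'name': 'Kilo on RHEL',
    'operating_system': 'RHEL',
})

payload = json.dumps(redhat_release)
# Then POST it to the endpoint mentioned above, e.g.:
#   curl -X POST -H 'Content-Type: application/json' \
#        -d "$PAYLOAD" http://<fuel-master>:8000/api/v1/releases/
```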

I think most of the CentOS tasks should work fine on RedHat, though we
didn't test it. If you meet any problems, please feel free to contact us
using either this ML or the #fuel-dev IRC channel.

Thanks,
Igor

On Wed, Nov 11, 2015 at 3:41 AM, Vladimir Kuklin  wrote:
> Hi, Fei
>
> It seems you will need to do several things with Fuel - create a new
> release, associate your cluster with it when creating it and provide paths
> to corresponding repositories with packages. Also, you will need to create a
> base image for Image-based provisioning. I am not sure we have 100% of
> the code that supports it, but it should be possible to do so with some
> additional efforts. Let me specifically refer to Fuel Agent team who are
> working on Image-Based Provisioning and Nailgun folks who should help you
> with figuring out patterns for repositories URLs configuration.
>
> On Tue, Nov 10, 2015 at 5:15 AM, Fei LU  wrote:
>>
>> Greeting Fuel teams,
>>
>>
>> My company is working on the installation of virtualization
>> infrastructure, and we have noticed Fuel is a great tool, much better than
>> our own installer. The question is that Mirantis is currently supporting
>> OpenStack on CentOS and Ubuntu, while my company is using Redhat-OSP.
>>
>> I have read all the Fuel documents, including fuel dev doc, but I haven't
>> found the solution how can I add my own release into Fuel. Or maybe I'm
>> missing something.
>>
>> So, would you guys please give some guide or hints?
>>
>> Appreciating any help.
>> Kane
>>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 35bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com
> www.mirantis.ru
> vkuk...@mirantis.com



[openstack-dev] [Neutron] Priority management for new features

2015-11-11 Thread Armando M.
Hi neutronians,

Whilst I recover from the gate failure binge eating...I wanted to put out
there a couple of process changes that should help the drivers team and the
PTL to improve their ability to justify priority assignments for new
features.

Comments welcome.

Cheers,
Armando

[1] https://review.openstack.org/#/c/244302/
[2] https://review.openstack.org/#/c/244313/


Re: [openstack-dev] [oslo][all] Oslo libraries dropping python 2.6 compatability

2015-11-11 Thread Jeremy Stanley
On 2015-11-11 14:07:25 -0600 (-0600), Kevin L. Mitchell wrote:
> On Wed, 2015-11-11 at 14:14 -0500, Davanum Srinivas wrote:
> > Any concerns? please chime in:
> > https://review.openstack.org/244275
> 
> Commented on the review, but I have to point out that python-novaclient,
> which, to my knowledge, still supports Python 2.6, also happens to
> depend on oslo.i18n, oslo.serialization, and oslo.utils.  If we drop
> Python 2.6 compatibility on any of those three, we would also have to
> drop it from novaclient (and potentially other clients).  Perhaps we
> need to have a discussion about whether the clients still need to
> support Python 2.6?

The Infrastructure team's plan is to remove our CentOS 6.x job
workers and any jobs currently running on them (which would include
all the current Python 2.6 jobs) when stable/juno reaches EOL,
shortly after the 2014.2.4 release on November 19. If projects want
to restore Python 2.6 testing, they'll need some custom job to
install the desired interpreter from somewhere unofficial (e.g. the
deadsnakes PPA for Ubuntu Trusty) at runtime.
-- 
Jeremy Stanley



Re: [openstack-dev] [Neutron] Priority management for new features

2015-11-11 Thread Kyle Mestery
On Wed, Nov 11, 2015 at 4:19 PM, Armando M.  wrote:

> Hi neutronians,
>
> Whilst I recover from the gate failure binge eating...I wanted to put out
> there a couple of process changes that should help the drivers team and the
> PTL to improve their ability to justify priority assignments for new
> features.
>
> Comments welcome.
>
>
I commented in the review, but I thought I'd reply here as well. I don't
understand the reason to move to only using "High" and "Low" priority, it
seems somewhat arbitrary. Of course, you could argue our current system for
prioritizing is arbitrary as well, but I'd argue that utilizing all 4
priorities makes sense. Ultimately though, this is all mostly arbitrary
anyways, and we all likely understand the stuff which is important (e.g.
Essential). We have done a bad job at getting that stuff into a release in
the past though.

And now I feel like Salvatore and I'll stop pedantically meandering.


> Cheers,
> Armando
>
> [1] https://review.openstack.org/#/c/244302/
> [2] https://review.openstack.org/#/c/244313/
>


Re: [openstack-dev] [Ironic] Do we need to have a mid-cycle?

2015-11-11 Thread Kurt Taylor
On Wed, Nov 11, 2015 at 11:16 AM, Ruby Loo  wrote:

> On 10 November 2015 at 12:08, Dmitry Tantsur  wrote:
>
>> On 11/10/2015 05:45 PM, Lucas Alvares Gomes wrote:
>>
>>> Hi,
>>>
>>> In the last Ironic meeting [1] we started a discussion about whether
>>> we need to have a mid-cycle meeting for the Mitaka cycle or not. Some
>>> ideas about the format of the midcycle were presented in that
>>> conversation and this email is just a follow up on that conversation.
>>>
>>> The ideas presented were:
>>>
>>> 1. Normal mid-cycle
>>>
>>> Same format as the previous ones, the meetup will happen in a specific
>>> venue somewhere in the world.
>>>
>>
>> I would really want to see you all as often as possible. However, I don't
>> see much value in proper face-to-face mid-cycles as compared to improving
>> our day-to-day online communications.
>
>
> +2.
>
> My take on mid-cycles is that if folks want to have one, that is fine, I
> might not attend :)
>
> My preference is 4) no mid-cycle -- and try to work more effectively with
> people in different locations and time zones.
>

I was hoping to suggest that we have a mid-cycle co-located with neutron,
but they are not having a mid-cycle.  So, my preference would be 4) no mid
cycle. I would like for us to try a few virtual sprints on targeted
subjects. I did one for CI documentation and the hardest part about setting
that up was picking the time.
https://wiki.openstack.org/wiki/VirtualSprints

Kurt Taylor (krtaylor)


Re: [openstack-dev] [stackalytics] [metrics] Review metrics: average numbers

2015-11-11 Thread Jesus M. Gonzalez-Barahona
Hi, Mike,

I'm not sure what you are looking for exactly, but maybe you can have a
look at the quarterly reports. AFAIK, currently there is none specific
to Fuel, but for example for Nova, you have:

http://activity.openstack.org/dash/reports/2015-q3/pdf/projects/nova.pdf

In page 6, you have "time waiting for reviewer" (from the moment a new
patchset is produced, to the time a conclusive review vote is found in
Gerrit), and "time waiting for developer" (from the conclusive review
vote to next patchset).

We're working now in a visualization for that kind of information. For
now, we only have complete changeset values, check if you're
interested:

http://blog.bitergia.com/2015/10/22/understanding-the-code-review-process-in-openstack/

Saludos,

Jesus.

On Wed, 2015-11-11 at 21:45 +, Mike Scherbakov wrote:
> Hi stackers,
> I have a question about Stackalytics.
> I'm trying to get some more data from code review stats. For Fuel,
> for instance,
> http://stackalytics.com/report/reviews/fuel-group/open
> shows some useful stats. Do I understand right, that average numbers
> here are calculated out of open reviews, not total number of reviews?
> 
> The most important number which I'm trying to get, is an average time
> change requests waiting for reviewers since last vote or mark, from
> all requests (not only those which remain in open state, like it is
> now, I believe).
> 
> How hard would it be to get / extend Stackalytics to make it..?
> 
> Thanks!
> -- 
> Mike Scherbakov
> #mihgen
-- 
Bitergia: http://bitergia.com
/me at Twitter: https://twitter.com/jgbarah




[openstack-dev] [stackalytics] Review metrics: average numbers

2015-11-11 Thread Mike Scherbakov
Hi stackers,
I have a question about Stackalytics.
I'm trying to get some more data from code review stats. For Fuel, for
instance,
http://stackalytics.com/report/reviews/fuel-group/open
shows some useful stats. Do I understand right, that average numbers here
are calculated out of open reviews, not total number of reviews?

The most important number I'm trying to get is the average time change
requests spend waiting for reviewers since the last vote or mark, across
all requests (not only those which remain in the open state, as it is
now, I believe).

How hard would it be to get / extend Stackalytics to make it..?
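To make the metric concrete, a rough sketch with invented data:

```python
from datetime import datetime, timedelta

# For every change request -- not only the open ones -- take the time
# between the last reviewer vote/mark and the next activity (or now),
# then average over all of them. The timestamps are illustrative.
reviews = [
    # (change, last vote or mark, next activity / close time)
    ('Iabc', datetime(2015, 11, 1), datetime(2015, 11, 4)),  # waited 3 days
    ('Idef', datetime(2015, 11, 2), datetime(2015, 11, 3)),  # waited 1 day
]

waits = [end - last_vote for _change, last_vote, end in reviews]
average_wait = sum(waits, timedelta()) / len(waits)
print(average_wait)  # 2 days, 0:00:00
```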

Thanks!
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [keystone] [Mistral] [Heat] Autoprovisioning, per-user projects, and Federation

2015-11-11 Thread Tim Hinrichs
Excerpts from Clint Byrum's message of Wed, Nov 11, 2015 at 10:14 AM:

> > But as Renat mentioned, the part about triggering Mistral workflows from
> > a message does not yet exist. As Tim pointed out, Congress could be a
> > solution to that (listening for a message and then starting the Mistral
> > workflow). That may be OK in the short term, but in the long term I'd
> > prefer that we implement the triggering thing in Mistral (since there
> > are *lots* of end-user use cases for this too), and have the workflow
> > optionally query Congress for the policy rather than having Congress in
> > the loop.
> >
>
> I agree 100% on the positioning of Congress vs. Mistral here.
>
>
One problem that I'd imagine Mistral would want to solve if it's picking up
events off the bus and executing workflows is how the operator configures
the event-to-workflow mapping logic.  In Adam's example, the operator would
want to say that every time the 'new-user-login-event' shows up on the bus
that Mistral should kick off the 'create-quota' workflow and the
'create-role' workflow.  In simple cases, this would just be a dictionary,
but what happens if the operator wants to condition workflow execution on
an AND/OR/NOT expression evaluated over state from different projects (e.g.
run 'create-quota' when a new user logs in and that user doesn't already
have a nova quota).
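To make that concrete, a toy sketch (all names invented; this is not
Congress or Mistral code) of the simple dictionary mapping extended with
such a condition:

```python
# Condition evaluated over another project's state, e.g. "the user has
# no nova quota yet". Both the event shape and the state dict are made up.
def no_nova_quota(event, state):
    return event['user'] not in state.get('nova_quotas', {})

# event type -> (condition, workflow) entries
TRIGGERS = [
    ('new-user-login-event', no_nova_quota,       'create-quota'),
    ('new-user-login-event', lambda ev, st: True, 'create-role'),
]


def workflows_for(event, state):
    """Return the workflows to kick off for this event."""
    return [wf for etype, cond, wf in TRIGGERS
            if etype == event['type'] and cond(event, state)]


event = {'type': 'new-user-login-event', 'user': 'alice'}
print(workflows_for(event, {'nova_quotas': {}}))
# ['create-quota', 'create-role']
print(workflows_for(event, {'nova_quotas': {'alice': 10}}))
# ['create-role']
```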

For the operator, the problem becomes more complicated when multiple
OpenStack projects are listening to the bus and kicking off
workflows/scripts/etc. The operator now has N projects to configure
(possibly in different ways) and needs to feel confident that there's not
some (rare) sequence of events that puts OpenStack as a whole into a bad
state because the events/workflows she configured have opposing goals.

The benefit of Congress is that there's one rich, declarative language that
operators can use to control the event-to-workflow mapping.  The operator
dictates which events/states (drawn from any collection of OpenStack
projects) should cause which workflows/templates/APIs (again from any
OpenStack project) to be executed.  And because the mapping is written
declaratively, it's feasible to do some conflict detection.

I'm not arguing that Mistral can't or shouldn't be adapted as was
suggested.  I'm just articulating what Congress brings to the table.

Tim


Re: [openstack-dev] [reno] [release] Where do release notes of stable branches go?

2015-11-11 Thread Doug Hellmann
Excerpts from Kirill Zaitsev's message of 2015-11-12 00:26:02 +0300:
> I’m setting up reno for murano repository and been testing and playing with 
> it a bit and this question seems unclear to me.
> 
> So where should we put release notes for stable releases? Into respective 
> branches or into master?

Thanks for posting this to the list, Kirill.

Reno looks at the branch and tag history to figure out where a note
belongs based on where it was committed. So you want to put notes files
in the branch for the version where the change is, and you want to
commit the release notes before you tag the release.

Typically that will mean a note going into master with a fix, and then
being carried over in the backport into the stable branch. That backport
step is the reason reno uses lots of little files instead of one big
file -- it eliminates the merge conflict on the backport.

The *build* for the release notes happens from master, but it scans all
of the branches you tell it to (that's what the "branch" argument to the
release-notes directive does). So from master, the release notes build
can scan all of the relevant branches and publish their current release
notes together in one place. This last bit is why we ended up needing
the file with the release-notes directive that doesn't specify a branch
name (causing it to scan the "current" branch, which for a patch under
test includes any release notes files).

The shorter answer: Put the release note as close to the code change
as possible. In the same commit as a bug fix, or in one of the
patches in the series implementing a complex feature. Then make
sure that commit is going into the right branch. Reno will then use
the git history to figure out how to build the release notes document.
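For what it's worth, each note is just a tiny YAML file; a hypothetical
example (invented slug, suffix, and content):

```yaml
# releasenotes/notes/fix-foo-races-1234567890abcdef.yaml
---
fixes:
  - Fixed a race condition when foo and bar were updated concurrently.
```

Because each note lives in its own small file like this, the backport
cherry-picks cleanly instead of conflicting in one big changelog.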

Doug

> 
> -- 
> Kirill Zaitsev
> Murano team
> Software Engineer
> Mirantis, Inc



Re: [openstack-dev] [Ironic] Do we need to have a mid-cycle?

2015-11-11 Thread Michael Davies
On Wed, Nov 11, 2015 at 3:15 AM, Lucas Alvares Gomes 
wrote:
>
> So, what people think about it? Should we have a mid-cycle for the
> Mitaka release or not? If so, what format should we use?
>

I like the idea of having a midcycle as it's a useful sync point, so my
preference would be:

3. Coordinated regional mid-cycles (which probably means North America over
Europe for those in the Antipodes)
1. Normal mid-cycle
2. Virtual mid-cycle
4. Not having a mid-cycle at all

I find value in them, due to timezone challenges, but I'm probably unique
in this case.
-- 
Michael Davies   mich...@the-davies.net
Rackspace Cloud Builders Australia


Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2015-11-11 Thread Andrew Beekhof

> On 11 Nov 2015, at 11:35 PM, Vladimir Kuklin  wrote:
> 
> Hi, Andrew
> 
> Let me answer your questions.
> 
> This agent is active/active which actually marks one of the nodes as 
> 'pseudo'-master which is used as a target for other nodes to join to. We also 
> check which node is a master and use it in monitor action to check whether 
> this node is clustered with this 'master' node. When we do cluster bootstrap, 
> we need to decide which node to mark as a master node. Then, when it starts 
> (actually, promotes), we can finally pick its name through notification 
> mechanism and ask other nodes to join this cluster. 

Ah good, I understood it correctly then :)
I would be interested in your opinion of how the other agent does the 
bootstrapping (ie. without notifications or master/slave).

> 
> Regarding disconnect_node+forget_cluster_node this is quite simple - we need 
> to eject node from the cluster. Otherwise it is mentioned in the list of 
> cluster nodes and a lot of cluster actions, e.g. list_queues, will hang 
> forever as well as forget_cluster_node action. 

That makes sense, the part I’m struggling with is that it sounds like the other 
agent shouldn’t work at all.
Yet we’ve used it extensively and not experienced these kinds of hangs.

> 
> We also handle this case whenever a node leaves the cluster. If you remember, 
> I wrote an email to Pacemaker ML regarding getting notifications on node 
> unjoin event '[openstack-dev] [Fuel][Pacemaker][HA] Notifying clones of 
> offline nodes’.

Oh, I recall that now.

> So we went another way and added a dbus daemon listener that does the same 
> when node lefts corosync cluster (we know that this is a little bit racy, but 
> disconnect+forget actions pair is idempotent).
> 
> Regarding notification commands - we changed behaviour to the one that fitted 
> our use cases better and passed our destructive tests. It could be 
> Pacemaker-version dependent, so I agree we should consider changing this 
> behaviour. But so far it worked for us.

Changing the state isn’t ideal but there is precedent, the part that has me 
concerned is the error codes coming out of notify.
Apart from producing some log messages, I can’t think how it would produce any 
recovery.

Unless you’re relying on the subsequent monitor operation to notice the error 
state.
I guess that would work but you might be waiting a while for it to notice.

> 
> On Wed, Nov 11, 2015 at 2:12 PM, Andrew Beekhof  wrote:
> 
> > On 11 Nov 2015, at 6:26 PM, bdobre...@mirantis.com wrote:
> >
> > Thank you Andrew.
> > Answers below.
> > >>>
> > Sounds interesting, can you give any comment about how it differs to the 
> > other[i] upstream agent?
> > Am I right that this one is effectively A/P and wont function without some 
> > kind of shared storage?
> > Any particular reason you went down this path instead of full A/A?
> >
> > [i]
> > https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster
> > <<<
> > It is based on multistate clone notifications. It requries nothing shared 
> > but Corosync info base CIB where all Pacemaker resources stored anyway.
> > And it is fully A/A.
> 
> Oh!  So I should skip the A/P parts before "Auto-configuration of a cluster 
> with a Pacemaker”?
> Is the idea that the master mode is for picking a node to bootstrap the 
> cluster?
> 
> If so I don’t believe that should be necessary provided you specify 
> ordered=true for the clone.
> This allows you to assume in the agent that your instance is the only one 
> currently changing state (by starting or stopping).
> I notice that rabbitmq.com explicitly sets this to false… any particular 
> reason?
> 
> 
> Regarding the pcs command to create the resource, you can simplify it to:
> 
> pcs resource create --force --master p_rabbitmq-server 
> ocf:rabbitmq:rabbitmq-server-ha \
>   erlang_cookie=DPMDALGUKEOMPTHWPYKC node_port=5672 \
>   op monitor interval=30 timeout=60 \
>   op monitor interval=27 role=Master timeout=60 \
>   op monitor interval=103 role=Slave timeout=60 OCF_CHECK_LEVEL=30 \
>   meta notify=true ordered=false interleave=true master-max=1 
> master-node-max=1
> 
> If you update the stop/start/notify/promote/demote timeouts in the agent’s 
> metadata.
> 
> 
> Lines 1602,1565,1621,1632,1657, and 1678 have the notify command returning an 
> error.
> Was this logic tested? Because pacemaker does not currently support/allow 
> notify actions to fail.
> IIRC pacemaker simply ignores them.
> 
> Modifying the resource state in notifications is also highly unusual.
> What was the reason for that?
> 
> I notice that on node down, this agent makes disconnect_node and 
> forget_cluster_node calls.
> The other upstream agent does not, do you have any information about the bad 
> things that might happen as a result?
> 
> Basically I’m looking for what each option does differently/better with a 
> view to converging on a single implementation.
> I 

Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-11 Thread Henry Fourie
Paul,
   Agree completely that the networking-sfc repo should be preserved
as it includes functionality beyond that of just a classifier -
it defines the service chain structure. 

Work on a common service classifier API could be done by
the networking-sfc team to help in evaluating that API.

 - Louis   

-Original Message-
From: Paul Carver [mailto:pcar...@paulcarver.us] 
Sent: Wednesday, November 11, 2015 1:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic 
classifiers

On 11/10/2015 8:30 AM, Sean M. Collins wrote:
> On Mon, Nov 09, 2015 at 07:58:34AM EST, Jay Pipes wrote:
>
>> 2) Keep the security-group API as-is to keep outward compatibility with AWS.
>> Create a single, new service-groups and service-group-rules API for 
>> L2 to L7 traffic classification using mostly the modeling that Sean has put 
>> together.
>> Remove the networking-sfc repo and obsolete the classifier spec. Not 
>> sure what should/would happen to the FWaaS API, frankly.
>
> As to the REST-ful API for creating classifiers, I don't know if it 
> should reside in the networking-sfc project. It's a big enough piece 
> that it will most likely need to be its own endpoint and repo, and 
> have stakeholders from other projects, not just networking-sfc. That 
> will take time and quite a bit of wrangling, so I'd like to defer that 
> for a bit and just work on all the services having the same data 
> model, where we can make changes quickly, since they are not visible 
> to API consumers.
>

I agree that the service classifier API should NOT reside in the networking-sfc 
project, but I don't understand why Jay suggests removing the networking-sfc 
repo. The classifier specified by networking-sfc is needed only because there 
isn't a pre-existing classifier API. As soon as we can converge on a common 
classifier API I am completely in favor of using it in place of the one in the 
networking-sfc repo, but SFC is more than just classifying traffic. We need a 
classifier in order to determine which traffic to redirect, but we also need 
the API to specify how to redirect the traffic that has been identified by 
classifiers.





Re: [openstack-dev] [Neutron] Priority management for new features

2015-11-11 Thread Armando M.
On 11 November 2015 at 13:58, Kyle Mestery  wrote:

> On Wed, Nov 11, 2015 at 4:19 PM, Armando M.  wrote:
>
>> Hi neutronians,
>>
>> Whilst I recover from the gate failure binge eating...I wanted to put out
>> there a couple of process changes that should help the drivers team and the
>> PTL to improve their ability to justify priority assignments for new
>> features.
>>
>> Comments welcome.
>>
>>
> I commented in the review, but I thought I'd reply here as well. I don't
> understand the reason to move to only using "High" and "Low" priority, it
> seems somewhat arbitrary. Of course, you could argue our current system for
> prioritizing is arbitrary as well, but I'd argue that utilizing all 4
> priorities makes sense. Ultimately though, this is all mostly arbitrary
> anyways, and we all likely understand the stuff which is important (e.g.
> Essential). We have done a bad job at getting that stuff into a release in
> the past though.
>

Thanks Kyle, I'll elaborate more on the patch.


>
> And now I feel like Salvatore and I'll stop pedantically meandering.
>
>
>> Cheers,
>> Armando
>>
>> [1] https://review.openstack.org/#/c/244302/
>> [2] https://review.openstack.org/#/c/244313/
>>


Re: [openstack-dev] [fuel] Using upstream packages & modules

2015-11-11 Thread Alex Schultz
On Tue, Nov 10, 2015 at 11:10 AM, Vladimir Kuklin  wrote:
> Alex
>
> Thanks for your very detailed answer - it clarified things a bit. So, if you
> would allow me to rephrase it - you are actually researching the actual gap
> between our downstream/fork and upstream UCA/puppet-openstack. This seems to
> be a very promising initiative. Are you going to come up with an
> external-user-readable report soon?
>
> Also, regarding multiple-distro support: I think we need to come up with
> an approach that makes the 'release manager' piece of Nailgun data-driven
> and just allows a user to run any distribution he or she wants. Just
> register a set of repos with packages and run it. Actually, we already
> have it - we need to make base-image generation for RPM more flexible, and
> any RPM/DEB-based distro should work OK with it.
>
> The remaining piece is to actually support distro-specific things, such as
> CentOS/RHEL networking configuration (e.g. l23network stored-config puppet
> providers). But this is a distro-supporter/community burden.
>
>

Yes I hope to have something together by the end of the week.  I've
managed to get a controller and compute/cinder nodes up (and passing
basic OSTF tests) with what appears to be some minor adjustments to
the fuel-library code.  The one thing that gets dropped because of
lack of upstream support is nova floating network ranges. There's a
pending review that'll get that back in, but I also don't know how
important it would be to support this type of configuration.
Another issue is the upstream murano module is still a work in
progress so that won't work right now either. Hopefully that'll get
sorted out in time for the official release of the liberty puppet
modules.

As I've been working through this, I've noticed that it would be
possible to use fuel-plugins to only apply UCA packages to specific
nodes via a plugin role. An interesting follow-on to this effort would
be to use MOS packages for controllers and UCA for Compute, or vice
versa.  But that should probably be more an academic exercise than a
production one.

-Alex

>
> On Tue, Nov 10, 2015 at 6:25 PM, Alex Schultz  wrote:
>>
>> Hey Vladimir,
>>
>> On Tue, Nov 10, 2015 at 5:56 AM, Vladimir Kuklin 
>> wrote:
>> > Alex
>> >
>> > That's great to hear that. But be aware - making all of the components
>> > work
>> > exactly the way they work within MOS is actually identical to
>> > upstreaming
>> > MOS. We are using some components of different versions to satisfy many
> > requirements for our Reference Architecture implementation. It will not
>> > be
>> > so easy to base them upon existing 3rd party distributions. For example,
>> > `read timeout` for SQL requests is crucial for our HA as it handles
>> > cases
>> > when an SQL node dies while serving the request. And you will find
>> > myriads
>> > of them. And as we do not control things in upstream, we will always
>> > have
>> > some downstream divergence.
>> >
>>
>> Yes, I'm aware that it'll be an effort to make it work identically to
>> MOS.  Currently that's not my goal. My goal is to get it working at
>> all and be able to document the deficiencies when using upstream
>> packages/modules vs MOS provided ones.  Once we have documented these
>> differences we will be able to make decisions as to what efforts
>> should be made if we choose to address the differences.  The read
>> timeout thing seems to be an issue with which MySQL Python driver is
>> used, so that could easily be configurable based on a package or a
>> configuration option.
>>
>> > I guess, the optimal solution is to figure out the actual divergence
>> > between
>> > upstream and downstream and try to push things upstream as hard as we
>> > can,
>> > while retaining overrides for some packages and manifests on top of
>> > upstream
>> > versions. Do not get me wrong, but it seems there are exactly 0 (zero)
>> > ways
>> > you can get Fuel working with upstream packages unless they support
>> > exactly
>> > the same feature set and fix the same bugs in various components that
>> > Fuel
>> > expect them to support or fix. By 'working' I mean passing the same set
>> > of
>> > at least smoke and destructive tests, let alone passing scale tests.
>> >
>>
>> So I think this is where we are currently backwards in the way we're
>> doing things. As we hope to push Fuel as a community project, we need
>> to be more open to supporting the distributions' versions of these
>> packages.  If we want to continue with specific versions of
>> things we need to be able to modularize them and apply them to a
>> downstream version of Fuel that we can promote with the MOS package
>> set.  I agree that right now it's highly unlikely that one would be
>> able to use Fuel without any MOS packages. Because I don't think it's
>> possible right now, what I'm attempting to do is be able to deploy an
>> alternate OpenStack 
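On the read-timeout point raised above: with a PyMySQL-style driver, a driver-level read timeout can ride on the SQLAlchemy connection URL as a query argument, which is one way such a setting becomes a plain configuration option rather than a package-level patch. A minimal sketch — `read_timeout` is the PyMySQL argument name; other drivers may name it differently, so treat this as an assumption:

```python
# Sketch: append a driver-level read timeout to a SQLAlchemy-style
# connection URL without clobbering existing query arguments.
# `read_timeout` is PyMySQL's keyword; other MySQL drivers may differ.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def with_read_timeout(url, seconds):
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    # Preserve a timeout the operator already set explicitly.
    query.setdefault("read_timeout", str(seconds))
    return urlunsplit(parts._replace(query=urlencode(query)))

url = "mysql+pymysql://nova:secret@192.0.2.10/nova?charset=utf8"
print(with_read_timeout(url, 60))
# → mysql+pymysql://nova:secret@192.0.2.10/nova?charset=utf8&read_timeout=60
```

Because the timeout travels with the connection string, it can be templated per deployment instead of being baked into a forked package.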

Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-11 Thread Martin Millnert
On Mon, 2015-11-09 at 10:19 -0500, Adam Young wrote:
> I personally like Java, but feel like we should focus on limiting the 
> number of languages we need to understand in order to Do OpenStack 
> development.

That's quite a useful datapoint to collect in surveys: how many
languages are your components/apps written in?
I believe most "serious" production deployments already run Java apps
anyway, for other reasons. I'd wager that Erlang and Java are already
sunk costs, language-wise, in most deployments. It's the cost of
operation.

--
Martin Millnert




Re: [openstack-dev] [searchlight] Feature request and bug workflow

2015-11-11 Thread McLellan, Steven
I've already spoken to you about this and I think everyone would agree
that for large features, blueprints are cumbersome; my preference would be
for simple blueprints (uncontroversial and straightforward from a design
perspective) to leave a full description in launchpad but for larger ones
to link to a review (and possibly update launchpad once the feature's been
agreed upon). The admin indexing work is one that would benefit from
having reviews in gerrit.

Steve

On 11/11/15, 12:50 PM, "Tripp, Travis S"  wrote:

>Searchlighters,
>
>When we began this project, we had many discussions about process and
>made a conscious decision to support as lightweight of a workflow for
>feature requests as possible. We all discussed how we want to encourage
>contribution from everybody by supporting both developers and
>non-developers who want to provide input, requests for features, and bug
>fixes. Specifically, we decided that we did not want to immediately use a
>separate spec repo and to try to better incorporate our normal
>documentation repo into the feature request process whenever Launchpad
>didn't meet our needs.
>
>We did not formally document any of the above, mostly because we didn't
>have time in Liberty, but also because the concept was still a little
>nebulous as to how we would better incorporate our normal documentation
>processes into the feature request process.
>
>Now that we are starting Mitaka, I've already encountered a couple of
>features where I felt that we needed a better review tool (e.g. gerrit)
>than Launchpad. So, I've made an attempt [1] at documenting how we can
>still follow our original intent that I mention above. I also have a
>dependent feature review that follows this process as an example [2].
>
>Please take a look at the workflow proposal review and provide comments.
>We also will discuss this in our weekly meeting. I recommend starting
>with this file: doc/source/feature-requests-bugs.rst
>
>[1] Workflow Proposal - https://review.openstack.org/#/c/243881/
>[2] Zero Downtime Feature - https://review.openstack.org/#/c/243386/
>
>
>Steve,
>
>Regarding your email [3] below.  I feel that the associated blueprint is
>an example of a blueprint that could benefit from a similar Gerrit review
>as described above. What do you think?
>
>[3] Admin indexing -
>http://permalink.gmane.org/gmane.comp.cloud.openstack.devel/68685
>
>Thanks,
>Travis


Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-11 Thread Fox, Kevin M
Development is different than dependencies... So Erlang is a dependency for
rabbit, but no one in OpenStack currently writes anything in Erlang. Zookeeper
would be a dependency, not a target for development.

That being said, OpenStack at present doesn't have any hard dependencies on
Java, so this brings up a new one.

And currently, for at least 3 of our clouds, we don't have Java installed on
any of them, so it is an additional dependency. So I know there are clouds out
there that would consider it a new dep.

Thanks,
Kevin

From: Martin Millnert [mar...@millnert.se]
Sent: Wednesday, November 11, 2015 2:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

On Mon, 2015-11-09 at 10:19 -0500, Adam Young wrote:
> I personally like Java, but feel like we should focus on limiting the
> number of languages we need to understand in order to Do OpenStack
> development.

That's a quite useful datapoint to collect in surveys: How many
languages are your components/apps written in?
I believe most "serious" production deployments already run Java apps,
anyway, for other reasons. I'd wager that basically Erlang and Java are
already sunk, in most deployments, in terms of languages, already. It's
the cost of operation.

--
Martin Millnert




Re: [openstack-dev] [Oslo][TaskFlow] Proposal for new core reviewer (greg hill)

2015-11-11 Thread Brant Knudson
Looks like he's doing good reviews so +1. - Brant

On Wed, Nov 11, 2015 at 2:02 PM, Joshua Harlow 
wrote:

> Greetings all stackers,
>
> I propose that we add Greg Hill[1] to the taskflow-core[2] team.
>
> Greg (aka jimbo) has been actively contributing to taskflow for a
> while now, both in helping make taskflow better via code
> contribution(s) and by helping spread more usage/knowledge of taskflow
> at rackspace (since the big-data[3] team uses taskflow internally).
> He has provided quality reviews and is doing an awesome job
> with the various taskflow concepts and helping make taskflow the best
> it can be!
>
> Overall I think he would make a great addition to the core review team.
>
> Please respond with +1/-1.
>
> Thanks much!
>
> - Joshua Harlow
>
> [1] https://launchpad.net/~greg-hill
> [2] https://launchpad.net/taskflow
> [3] http://www.rackspace.com/cloud/big-data
>