Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-25 Thread Hugh Blemings

Hiya,

On 24/09/2016 03:46, Mike Perez wrote:

On 11:03 Sep 21, Doug Hellmann wrote:



A separate mailing list just for “important announcements” would
need someone to decide what is “important”. It would also need
everyone to be subscribed, or we would have to cross-post to the
existing list. That’s why we use topic tags on the mailing list, so
that it is possible to filter messages based on what is important
to the reader, rather than the sender.
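
As a concrete illustration of that reader-side filtering, here is a minimal
sketch assuming a local mbox archive of openstack-dev; the tag list and file
name are only examples, not a recommendation:

import mailbox
import re

INTERESTING = {"nova", "cinder", "elections"}  # tags this reader cares about
TAG_RE = re.compile(r"\[([^\]]+)\]")

def wanted(subject):
    # True if any [tag] in the subject matches our interests.
    tags = {t.strip().lower() for t in TAG_RE.findall(subject or "")}
    return bool(tags & INTERESTING)

for msg in mailbox.mbox("openstack-dev.mbox"):
    if wanted(msg.get("Subject", "")):
        print(msg["Subject"])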


This has come up in the past, and I have suggested that people who
can't spend that much time on the lists refer to the Dev Digest at
blog.openstack.org, which mentioned the PTL elections being open.


Fwiw, I'd endorse Mike's comments about the Dev digest - it's an easily
digestible (sorry!) and concise summary of what's happening on
openstack-dev - I refer to it regularly myself.

Two other sources that come to mind for less detailed but topical
summaries of traffic are Jason Baker's summary on opensource.com [0] and
Lwood [1], which I put together each week.  Both flag upcoming
election-related topics pretty reliably and might suit some folk.

For what my $0.20 is worth I don't think splitting out into further
logistics or announcement oriented lists would be beneficial in the long
term.

Cheers,
Hugh


[0] https://opensource.com/business/16/9/openstack-news-september-26
[1] http://hugh.blemings.id.au/openstack/lwood/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] tempest-cores update

2016-09-25 Thread Hugh Blemings

Hi Ohmichi-san,

Firstly congratulations on becoming PTL for Quality Assurance!


As mentioned in a previous mail, Marc has resigned from tempest-cores.
In addition, David has also stepped down from tempest-cores to concentrate on
new work.  Thank you both for your many contributions to the project, and I
wish you continued success.


I'm including a link to your email about tempest-cores in this week's
Lwood.  Could you please tell me David's surname so I can include it in
the item?


Kind Regards,
Hugh





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-25 Thread Huang Zhiteng
On Mon, Sep 26, 2016 at 12:05 PM, Joshua Harlow wrote:

> Huang Zhiteng wrote:
>
>> At eBay, we made some in-house changes to Nova so that our big-data type
>> of use case can have physical disks as the ephemeral disk for these
>> flavors.  It works well so far.   My 2 cents.
>>
>
> Is there a published patch (or patchset) anywhere that people can look at
> for said in-house changes?
>

Unfortunately no, but I think we can publish it if there is enough
interest.  However, I don't think it can be easily adopted into upstream
Nova, since it depends on other in-house changes we've made to Nova.

>
> -Josh
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards
Huang Zhiteng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-25 Thread Joshua Harlow

Huang Zhiteng wrote:

At eBay, we made some in-house changes to Nova so that our big-data type of
use case can have physical disks as the ephemeral disk for these
flavors.  It works well so far.   My 2 cents.


Is there a published patch (or patchset) anywhere that people can look 
at for said in-house changes?


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-25 Thread Huang Zhiteng
Hi Zhenyu and all,

If you look at the problem from a different angle, for example treating
local disks on hypervisors as the same kind of resource as GPUs/NICs, your
requirement doesn't necessarily need to involve Cinder.  Local disks become
a resource type associated with a certain group of hypervisors; scheduling
becomes easier, and provisioning is simpler because Nova no longer has to
talk to another service (Cinder) or coordinate with it.
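
To make the idea concrete, here is a minimal, hypothetical sketch (not our
actual patch; the aggregate-metadata and extra-spec keys are invented for
illustration) of matching a flavor that wants local physical disks against
hosts whose aggregate advertises them:

from nova.scheduler import filters


class LocalDiskFilter(filters.BaseHostFilter):
    """Pass only hosts that advertise local physical disks."""

    def host_passes(self, host_state, spec_obj):
        # Flavors that don't ask for local disks can land anywhere.
        extra_specs = spec_obj.flavor.extra_specs or {}
        if extra_specs.get('capabilities:local_disk') != 'true':
            return True
        # Otherwise require a host aggregate tagged local_disk=true.
        for aggregate in host_state.aggregates:
            if aggregate.metadata.get('local_disk') == 'true':
                return True
        return False

Enabled via scheduler_default_filters, something like this keeps everything
inside Nova, which is the point: no Nova/Cinder coordination is needed.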

At eBay, we made some in-house changes to Nova so that our big-data type of
use case can have physical disks as the ephemeral disk for these
flavors.  It works well so far.   My 2 cents.


On Mon, Sep 26, 2016 at 9:35 AM, Zhenyu Zheng wrote:

> Hi Matt,
>
> Yes, we can only do this using 1:1 AZs mapped to each compute node in the
> deployment, which is not very feasible in a commercial deployment. We could
> either pass some hints to Cinder (in the current code, Cinder's
> "InstanceLocalityFilter" takes an instance UUID as its parameter, so it is
> impossible for the user to pass it while booting an instance), or add
> filters or something else to Nova when doing Nova scheduling. And maybe we
> will have new solutions after "Generic-resource-pool" is reached?
>
> The implementation may vary, but this seems a reasonable demand, right?
>
> Thanks
>
> On Sun, Sep 25, 2016 at 1:02 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com> wrote:
>
>> On 9/23/2016 8:19 PM, Zhenyu Zheng wrote:
>>
>>> Hi,
>>>
>>> Thanks all for the information. As for the filter Erlon mentioned
>>> (InstanceLocalityFilter), it only solves part of the problem:
>>> we can create new volumes for existing instances using this filter and
>>> then attach them, but the root volume still cannot
>>> be guaranteed to be on the same host as the compute resource, right?
>>>
>>> The idea here is that all the volumes use local disks.
>>> I was wondering if we already have such a plan for after the Resource
>>> Provider structure has been completed?
>>>
>>> Thanks
>>>
>>> On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz wrote:
>>>
>>> Not sure exactly what you mean, but in Cinder using the
>>> InstanceLocalityFilter[1], you can  schedule a volume to the same
>>> compute node the instance is located. Is this what you need?
>>>
>>> [1] http://docs.openstack.org/developer/cinder/scheduler-filters.html#instancelocalityfilter
>>>
>>> On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant wrote:
>>>
>>> Kevin,
>>>
>>> This is functionality that has been requested in the past but
>>> has never been implemented.
>>>
>>> The best way to proceed would likely be to propose a
>>> blueprint/spec for this and start working this through that.
>>>
>>> -Jay
>>>
>>>
>>> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>>
 Hi Novaers and Cinders:

 Quite often application requirements demand using
 locally attached disks (or direct-attached disks) for
 OpenStack compute instances. One such example is running
 virtual Hadoop clusters via OpenStack.

 We can now achieve this by using BlockDeviceDriver as the Cinder
 driver and using AZs in Nova and Cinder, as illustrated in [1],
 which is not very feasible in a large-scale production deployment.

 Now that Nova is working on resource providers, trying to build
 a generic resource pool, is it possible to perform
 "volume-based scheduling" to build instances according to
 volume? This would make it much easier to build instances like
 those mentioned above.

 Or do we have any other ways of doing this?

 References:
 [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html

 Thanks,

 Kevin Zheng


 

Re: [openstack-dev] [AODH] event-alarm timeout discussion

2016-09-25 Thread Zhai, Edwin

On Fri, 23 Sep 2016, gordon chung wrote:




On 23/09/2016 2:18 AM, Zhai, Edwin wrote:



There are many targets (topics)/endpoints in the ceilometer code above. But
in AODH, we have just one topic, 'alarm.all', and one endpoint. If it is
still multi-threaded, there is already a potential race condition here,
and the event-alarm timeout makes it worse.

https://github.com/openstack/aodh/blob/master/aodh/event.py#L61-L63


see my reply to the other message, but yes, it is multithreaded. there's no
race currently because we don't do anything that needs to honour ordering.


Currently, we still need ordering. e.g.
2 events with different traits could trigger the same alarm. If they arrive
far enough apart, the alarm is triggered only once (the second event sees the
state as 'ALARM' and gives up).  If they arrive and are handled concurrently,
the alarm can be triggered twice (both events see the state as 'UNKNOWN').
This is wrong, as an event alarm is one-shot (if repeat_actions=False).
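
A bare-bones illustration of the window (schematic code, not the actual AODH
evaluator; names are invented):

import threading

alarm = {'state': 'UNKNOWN'}   # shared alarm record
fired = []                     # notifications actually sent
lock = threading.Lock()

def evaluate(event):
    # Racy check-then-act: both threads can pass the check before
    # either one writes, so the one-shot alarm fires twice.
    if alarm['state'] != 'ALARM':
        alarm['state'] = 'ALARM'
        fired.append(event)

def evaluate_serialized(event):
    # The naive fix: hold a lock across the check and the update.
    with lock:
        if alarm['state'] != 'ALARM':
            alarm['state'] = 'ALARM'
            fired.append(event)

t1 = threading.Thread(target=evaluate, args=('event-1',))
t2 = threading.Thread(target=evaluate, args=('event-2',))
t1.start(); t2.start(); t1.join(); t2.join()
print(fired)  # with the racy version, may contain both events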


Do you have any idea to resolve this race condition?





the event evaluator is triggered by events only, that is, it's not called at
all until the next event comes. If no event comes, the evaluator just sleeps,
so it can't check the timeout and update_alarm. In other words, 'timeout.end'
is just for waking up the evaluator.



what's the purpose of the thread being created? i thought the idea was
to receive the alarm.timeout.start event -> create a thread? can we not:
1. receive alarm.timeout.start -> create an alarm with timeout thread
2a. if event received, kill timeout thread, update alarm.
2b. if timeout reached, send alarm notification, update alarm.

^ that is just a random thought, i didn't think about exactly how to
implement. right now i'm not clear who is generating this
alarm.timeout.end event and why it needs to do that at all.



It's a good idea! We need one way to calculate the timeout: a new thread, or
an alarm signal. If the alarm signal is more stable, let's turn to it.
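
If we go the thread route, a rough schematic of your 1/2a/2b flow (invented
names, not real AODH code):

import threading

pending = {}  # alarm_id -> running Timer; guard with a lock in real code

def on_timeout_start(alarm_id, timeout_secs):
    # 1. alarm.timeout.start arms a timer for this alarm
    timer = threading.Timer(timeout_secs, on_timeout, args=(alarm_id,))
    pending[alarm_id] = timer
    timer.start()

def on_event(alarm_id, event):
    # 2a. the awaited event arrived: kill the timer, update the alarm
    timer = pending.pop(alarm_id, None)
    if timer is not None:
        timer.cancel()
    update_alarm(alarm_id, 'alarm', reason=event)

def on_timeout(alarm_id):
    # 2b. nothing arrived in time: send notification, update the alarm
    pending.pop(alarm_id, None)
    update_alarm(alarm_id, 'alarm', reason='timeout')

def update_alarm(alarm_id, state, reason):
    print('alarm %s -> %s (%s)' % (alarm_id, state, reason))

With this shape there is no alarm.timeout.end event at all; the timer itself
is the wake-up.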


We need one list to keep all alarms waiting for timeout, and to update the
list when the timeout signal arrives.


The alarm.timeout.end event is just for locking, and is generated by the new
thread or the alarm signal handler (your suggestion). If it is useless for
locking, we can give it up and just update the alarm directly, as you said.




cheers,
--
gord



Best Rgds,
Edwin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2016-09-25 Thread Zhenyu Zheng
Hi Matt,

Yes, we can only do this using 1:1 AZs mapped to each compute node in the
deployment, which is not very feasible in a commercial deployment. We could
either pass some hints to Cinder (in the current code, Cinder's
"InstanceLocalityFilter" takes an instance UUID as its parameter, so it is
impossible for the user to pass it while booting an instance), or add
filters or something else to Nova when doing Nova scheduling. And maybe we
will have new solutions after "Generic-resource-pool" is reached?

The implementation may vary, but this seems a reasonable demand, right?
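
To make the limitation concrete: the Cinder-side mechanism is driven by a
scheduler hint that names an existing instance, so it cannot help with the
root volume at boot time. A rough sketch with python-cinderclient (the
credentials and UUID are placeholders):

from cinderclient.v2 import client

cinder = client.Client('admin', 'password', 'admin',
                       'http://controller:5000/v2.0')

# Ask the Cinder scheduler (InstanceLocalityFilter) to place the volume on
# the same host as an already-running instance. At boot time no instance
# UUID exists yet, which is exactly the gap described above.
volume = cinder.volumes.create(
    size=10,
    scheduler_hints={'local_to_instance': 'INSTANCE-UUID-HERE'})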

Thanks

On Sun, Sep 25, 2016 at 1:02 AM, Matt Riedemann wrote:

> On 9/23/2016 8:19 PM, Zhenyu Zheng wrote:
>
>> Hi,
>>
>> Thanks all for the information. As for the filter Erlon mentioned
>> (InstanceLocalityFilter), it only solves part of the problem:
>> we can create new volumes for existing instances using this filter and
>> then attach them, but the root volume still cannot
>> be guaranteed to be on the same host as the compute resource, right?
>>
>> The idea here is that all the volumes use local disks.
>> I was wondering if we already have such a plan for after the Resource
>> Provider structure has been completed?
>>
>> Thanks
>>
>> On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz wrote:
>>
>> Not sure exactly what you mean, but in Cinder using the
>> InstanceLocalityFilter[1], you can  schedule a volume to the same
>> compute node the instance is located. Is this what you need?
>>
>> [1] http://docs.openstack.org/developer/cinder/scheduler-filters.html#instancelocalityfilter
>>
>> On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant wrote:
>>
>> Kevin,
>>
>> This is functionality that has been requested in the past but
>> has never been implemented.
>>
>> The best way to proceed would likely be to propose a
>> blueprint/spec for this and start working this through that.
>>
>> -Jay
>>
>>
>> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>
>>> Hi Novaers and Cinders:
>>>
>>> Quite often application requirements demand using
>>> locally attached disks (or direct-attached disks) for
>>> OpenStack compute instances. One such example is running
>>> virtual Hadoop clusters via OpenStack.
>>>
>>> We can now achieve this by using BlockDeviceDriver as the Cinder
>>> driver and using AZs in Nova and Cinder, as illustrated in [1],
>>> which is not very feasible in a large-scale production deployment.
>>>
>>> Now that Nova is working on resource providers, trying to build
>>> a generic resource pool, is it possible to perform
>>> "volume-based scheduling" to build instances according to
>>> volume? This would make it much easier to build instances like
>>> those mentioned above.
>>>
>>> Or do we have any other ways of doing this?
>>>
>>> References:
>>> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html
>>>
>>> Thanks,
>>>
>>> Kevin Zheng

[openstack-dev] [nova][vmware] Can the VMware NSX CI please get the recheck command in the error comment?

2016-09-25 Thread Matt Riedemann
I've asked this several times before, but every time the vmware NSX CI
fails, which is more often than not when I look, the recheck command
needed to re-run it isn't in the comment posted to the review. I'm
pretty sure including the recheck command in failure comments is a
requirement for third-party CIs anyway.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections] Candidate proposals for TC (Technical Committee) positions are now open

2016-09-25 Thread Tristan Cacqueray
Candidate proposals for the Technical Committee positions (6 positions)
are now open and will remain open until 2016-10-01, 23:45 UTC

All candidacies must be submitted as a text file to the
openstack/election repository as explained on the election website[1].
Please note that the name of the file has changed this cycle to be the
candidate's IRC nick, *not* their full name.

Also, for TC candidates, election officials refer to the community member
profiles at [2]; please take this opportunity to ensure that your
profile contains current information.

Candidates for the Technical Committee positions: any Foundation
individual member can propose their candidacy for an available,
directly-elected TC seat, except the seven (7) TC members who were
elected to a one-year seat in April[3].

The election will be held from October 3rd through to 23:45 October 8th,
2016 UTC. The electorate are the Foundation individual members that are
also committers to one of the official projects[4] over the
Mitaka-Newton timeframe (September 5, 2015 00:00 UTC to September 4,
2016 23:59 UTC), as well as the extra-ATCs who are acknowledged by the
TC[5].

Please see the website[1] for additional details about this election.
Please find below the timeline:

TC nomination starts   @ 2016-09-26, 00:00 UTC
TC nomination ends @ 2016-10-01, 23:45 UTC
TC elections begins@ 2016-10-03, 00:00 UTC
TC elections ends  @ 2016-10-08, 23:45 UTC

If you have any questions please be sure to either voice them on the
mailing list or raise them with the election officials[6].

Thank you, and we look forward to reading your candidate proposals,
-Tristan

[1] http://governance.openstack.org/election/#how-to-submit-your-candidacy
[2] http://www.openstack.org/community/members/
[3] https://wiki.openstack.org/wiki/TC_Elections_April_2016#Results
[4]
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2016-elections
 Note the tag for this repo, sept-2016-elections.
[5] Look for the extra-atcs element in [4]
[6] http://governance.openstack.org/election/#election-officials



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections] PTL Election Conclusion and Results

2016-09-25 Thread Tony Breeds
Thank you to the electorate, to all those who voted, and to all candidates
who put their name forward for PTL in this election. A healthy, open process
breeds trust in our decision-making capability; thank you to all those who
make this process possible.

Now for the results of the PTL election process, please join me in extending
congratulations to the following PTLs:

 * Astara: Ryan Petrello[#]
 * Barbican  : Dave Mccowan
 * Chef_OpenStack: Samuel Cassiba
 * Cinder: Sean Mcginnis
 * Cloudkitty: Christophe Sauthier
 * Community_App_Catalog : Christopher Aedo
 * Congress  : Eric Kao
 * Designate : Graham Hayes
 * Documentation : Lana Brindley
 * Dragonflow: Omer Anson
 * Ec2_Api   : Alexandre Levine
 * Freezer   : Pierre Mathieu
 * Fuel  : Alexey Shtokolov
 * Glance: Brian Rosmaita
 * Heat  : Rabi Mishra
 * Horizon   : Richard Jones
 * I18n  : Ian Y. Choi
 * Infrastructure: Jeremy Stanley
 * Ironic: Jim Rollenhagen
 * Karbor: Saggi Mizrahi
 * Keystone  : Steve Martinelli
 * Kolla : Michal Jastrzebski
 * Kuryr : Antoni Segura Puimedon
 * Magnum: Adrian Otto
 * Manila: Ben Swartzlander
 * Mistral   : Renat Akhmerov
 * Monasca   : Roland Hochmuth
 * Murano: Kirill Zaitsev
 * Neutron   : Armando Migliaccio
 * Nova  : Matt Riedemann
 * OpenStackAnsible  : Andy Mccrae
 * OpenStackClient   : Dean Troyer
 * OpenStackSalt : Ales Komarek[#]
 * OpenStack_Charms  : James Page
 * OpenStack_UX  : Piet Kruithof[*]
 * Oslo  : Joshua Harlow
 * Packaging_Deb : Thomas Goirand
 * Packaging_Rpm : Haïkel Guémar
 * Puppet_OpenStack  : Alex Schultz
 * Quality_Assurance : Ken'ichi Ohmichi
 * Rally : Andrey Kurilin
 * RefStack  : Catherine Diep
 * Release_Management: Doug Hellmann
 * Requirements  : Tony Breeds
 * Sahara: Vitaly Gridnev
 * Searchlight   : Steve Mclellan
 * Security  : Robert Clark[*]
 * Senlin: Yanyan Hu
 * Solum : Devdatta Kulkarni
 * Stable_Branch_Maintenance : Tony Breeds
 * Swift : John Dickinson
 * Tacker: Sridhar Ramaswamy
 * Telemetry : Julien Danjou
 * Tripleo   : Emilien Macchi
 * Trove : Amrith Kumar
 * Vitrage   : Ifat Afek
 * Watcher   : Antoine Cabot
 * Winstackers   : Claudiu Belu
 * Zaqar : Fei Long Wang
Elections:
 1: Freezer: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_662fe1dfea3b2980
 2: Ironic: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_5bbba65c5879783c
 3: Keystone: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f0432662b678f99f
 4: Kolla: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_9fa13adc6f6e7148
 5: Magnum: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2fd00175baa579a6
 6: Quality_Assurance: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_745c895dcf12c405
Footnotes:
 *: By TC Appointment
 #: Incumbent PTL.  This project is likely to be removed from the big tent

Thank you to all involved in the PTL election process.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][designate DNSaaS] AXFR failing for Secondary zone

2016-09-25 Thread Kelam, Koteswara Rao
I created a secondary zone as follows, but AXFR fails. I am running nsd on port 53.

$ openstack zone create --type SECONDARY example6.com. --master 16.154.201.34 --description "This is a slave"

2016-09-25 06:53:45.230 INFO designate.dnsutils [req-b11a90c6-e04b-4124-a8f0-75017b9bae42 2356f9454721446da00645e15edec9c9 d35b45f2e5e04763ad43572e8adff3aa - - -] Doing AXFR for example6.com. from {'zone_id': u'73cde005-3460-453b-8749-5d2879e88c99', 'created_at': datetime.datetime(2016, 9, 25, 10, 53, 45), 'updated_at': None, 'port': 53, 'host': u'16.154.201.34', 'version': 1, 'id': u'3e853f94-9b43-49bd-a189-7190e36f5c60'}
2016-09-25 06:53:45.231 ERROR designate.dnsutils [req-b11a90c6-e04b-4124-a8f0-75017b9bae42 2356f9454721446da00645e15edec9c9 d35b45f2e5e04763ad43572e8adff3aa - - -] Zone example6.com. is not present on {'zone_id': u'73cde005-3460-453b-8749-5d2879e88c99', 'created_at': datetime.datetime(2016, 9, 25, 10, 53, 45), 'updated_at': None, 'port': 53, 'host': u'16.154.201.34', 'version': 1, 'id': u'3e853f94-9b43-49bd-a189-7190e36f5c60'}. Trying next server.
2016-09-25 06:53:45.232 WARNING designate.mdns.xfr [req-b11a90c6-e04b-4124-a8f0-75017b9bae42 2356f9454721446da00645e15edec9c9 d35b45f2e5e04763ad43572e8adff3aa - - -] XFR failed for example6.com.. No servers in [{'zone_id': u'73cde005-3460-453b-8749-5d2879e88c99', 'created_at': datetime.datetime(2016, 9, 25, 10, 53, 45), 'updated_at': None, 'port': 53, 'host': u'16.154.201.34', 'version': 1, 'id': u'3e853f94-9b43-49bd-a189-7190e36f5c60'}] was reached.
2016-09-25 06:53:50.285 INFO designate.mdns.notify [req-b11a90c6-e04b-4124-a8f0-75017b9bae42 2356f9454721446da00645e15edec9c9 d35b45f2e5e04763ad43572e8adff3aa - - -] Sending 'NOTIFY' for 'example6.com.' to '16.154.201.34:533'.
2016-09-25 06:53:50.288 INFO designate.mdns.notify [req-b11a90c6-e04b-4124-a8f0-75017b9bae42 2356f9454721446da00645e15edec9c9 d35b45f2e5e04763ad43572e8adff3aa - - -] example6.com. not found on 16.154.201.34:533
2016-09-25 06:53:50.291 INFO designate.mdns.notify [req-b11a90c6-e04b-4124-a8f0-75017b9bae42 2356f9454721446da00645e15edec9c9 d35b45f2e5e04763ad43572e8adff3aa - - -] Sending 'SOA' for 'example6.com.' to '16.154.201.34:533'.
2016-09-25 06:53:50.294 INFO designate.mdns.notify [req-b11a90c6-e04b-4124-a8f0-75017b9bae42 2356f9454721446da00645e15edec9c9 d35b45f2e5e04763ad43572e8adff3aa - - -] example6.com. not found on 16.154.201.34:533
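
(Note that the NOTIFY and SOA messages above go to '16.154.201.34:533', not
port 53.)

To reproduce outside Designate what mdns is attempting, a minimal sketch
with dnspython (which Designate itself uses), host and zone taken from the
log above; if this manual AXFR also fails, the problem is on the nsd side:

import dns.query
import dns.zone

# Manual AXFR against the master; from_xfr() raises if the transfer fails.
xfr = dns.query.xfr('16.154.201.34', 'example6.com', port=53)
zone = dns.zone.from_xfr(xfr)
print(zone.origin, len(zone.nodes), 'nodes')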

Regards,
Koteswara

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][CI] Failed jobs because bad image on mirror server

2016-09-25 Thread Sagi Shnaidman
Hi, all

FYI, jobs failed after the last image promotion because of a corrupted image;
it seems the last promotion job failed to upload it correctly, and it didn't
match the md5. I've replaced it on the mirror server with the image from the
previous delorean hash run. This should be OK because we update the images
anyway, and it will be refreshed on the next promotion job run.
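
For reference, the verification that would catch this at promotion time is
just an md5 comparison of the uploaded image against its published checksum;
a minimal sketch (the file names are illustrative):

import hashlib

def md5sum(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk), b''):
            h.update(block)
    return h.hexdigest()

expected = open('overcloud-full.tar.md5').read().split()[0]
assert md5sum('overcloud-full.tar') == expected, 'corrupted upload'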

Thanks
-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections] Last hours to elect your PTL for Freezer, Ironic, Keystone, Kolla, Magnum and Quality_Assurance

2016-09-25 Thread Tristan Cacqueray
Hello Freezer, Ironic, Keystone, Kolla, Magnum and Quality_Assurance
contributors,

Just a quick reminder that elections are closing soon. If you haven't
already, you should use your right to vote and pick your favorite candidate!

Thanks for your time!
-Tristan



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] floating IP is DOWN

2016-09-25 Thread Barber, Ofer
Yes, the output log is for debugging.

I will check my neutron service.
It seems not to be working well.

Thank you,
Ofer

From: Salvatore Orlando [mailto:salv.orla...@gmail.com]
Sent: Friday, September 23, 2016 7:59 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] floating IP is DOWN

Probably that LOG statement is a line added for debugging purposes.
There are several possible causes for a floating IP being down. If you see
any traceback in the neutron server or l3-agent logs, that will probably
immediately reveal the root cause.

On the other hand, the lack of any traceback might indicate communication
issues between the server and the l3 agent.
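
A quick way to check both from the API side (a sketch with
python-neutronclient; the credentials and UUID are placeholders):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='password',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# Current status of the floating IP as the server sees it.
fip = neutron.show_floatingip('FLOATING-IP-UUID')['floatingip']
print(fip['floating_ip_address'], fip['status'])

# A dead l3-agent is a common reason the status never flips to ACTIVE.
for agent in neutron.list_agents(binary='neutron-l3-agent')['agents']:
    print(agent['host'], 'alive' if agent['alive'] else 'dead')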

Salvatore

On 22 September 2016 at 16:53, Brian Haley <brian.ha...@hpe.com> wrote:
On 09/22/2016 10:19 AM, Barber, Ofer wrote:
When I assign a floating IP to a server, I see that the status of the
floating IP is "down".

Why is that so?

Code:

LOG.info("\n<== float IP address: %s and status: %s  ==>",
         float_ip['floating_ip_address'], float_ip['status'])

Output:

<== float IP address: 10.63.101.225 and status: DOWN  ==>

I couldn't find that code anywhere; what release was this on?

From a Newton-based system created yesterday, this is the message in the 
l3-agent log when I associate a floating IP:

Floating ip 4c1b4571-a003-43f2-96a1-f7073cd1319d added, status ACTIVE

-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev