>> 3. Page through all the user's networks and filter client-side
>>
>> How is the user supposed to be assembling this giant UUID list? I'd think
>> it would be easier for them to specify a query (e.g. "get usage data for
>> all my production
do?
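Option 3 above (page through all the user's networks and filter client-side) can be sketched roughly like this; `get_networks` is an invented stand-in for whatever list call the client exposes, not a real API:

```python
# Sketch of "page through all the user's networks and filter client-side".
# get_networks is a fake, invented stand-in for a real paged list call.
def filter_networks(get_networks, wanted_ids, page_size=100):
    found, marker = [], None
    while True:
        page = get_networks(limit=page_size, marker=marker)
        if not page:
            return found
        found.extend(n for n in page if n["id"] in wanted_ids)
        marker = page[-1]["id"]

# Tiny in-memory simulation of a paged listing, for illustration only.
_nets = [{"id": "n%d" % i} for i in range(5)]

def fake_get_networks(limit, marker=None):
    ids = [n["id"] for n in _nets]
    start = 0 if marker is None else ids.index(marker) + 1
    return _nets[start:start + limit]

filter_networks(fake_get_networks, {"n1", "n4"}, page_size=2)
```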
>
> Thanks,
> doug
>
> On Jan 19, 2016, at 4:59 PM, Shraddha Pandhe <spandhe.openst...@gmail.com>
> wrote:
>
> Hi folks,
>
>
> I am writing a Neutron extension which needs to take 1000s of network-ids
> as argument for filtering. The CURL call is as follows:
curl -i -X GET
'http://hostname:port/neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
-H
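For what it's worth, the repeated net-id query parameters in a URL like the one above can be built programmatically; this is only a sketch of the URL construction, with hostname, port, and extension_name left as placeholders from the example:

```python
from urllib.parse import urlencode

# Placeholders carried over from the curl example; not a real endpoint.
base = "http://hostname:port/neutron/v2.0/extension_name.json"
net_ids = [
    "fffecbd1-0f6d-4f02-aee7-ca62094830f5",
    "fffeee07-4f94-4cff-bf8e-a2aa7be59e2e",
]
# Each network ID becomes its own net-id=... pair, joined with '&'.
query = urlencode([("net-id", nid) for nid in net_ids])
url = base + "?" + query
```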
On Tue, Nov 24, 2015 at 7:39 AM, Jim Rollenhagen <j...@jimrollenhagen.com>
wrote:
> On Mon, Nov 23, 2015 at 03:35:58PM -0800, Shraddha Pandhe wrote:
> > Hi,
> >
> > I would like to know how everyone is using maintenance mode and what is
> > expected from
a migration in the Mitaka time frame.
>
> John
>
> [1] https://bugs.launchpad.net/neutron/+bug/1516156
>
>
>
> On Nov 23, 2015, at 8:05 PM, Shraddha Pandhe <spandhe.openst...@gmail.com>
> wrote:
>
> Hi folks,
>
> What is the right way to use ipam reference implementation with devstack?
neutron-legacy file
>
> Thanh
>
> 2015-11-24 11:20 GMT+07:00 Shraddha Pandhe <spandhe.openst...@gmail.com>:
>
>> Hi John,
>>
>> Thanks for letting me know. How do I setup fresh devstack with pluggable
>> IPAM enabled in the meantime?
>>
Hi,
I would like to know how everyone is using maintenance mode and what is
expected from admins about nodes in maintenance. The reason I am bringing
up this topic is that most of the Ironic operations, including manual
cleaning, are not allowed for nodes in maintenance. That's a problem for
Hi folks,
What is the right way to use ipam reference implementation with devstack?
When setting up devstack, I didn't have the setting
ipam_driver = internal
I changed it afterwards. But now when I try to create a port, I get this
error:
2015-11-23 21:23:00.078 ERROR
Hi Carl,
Please find my reply inline
On Mon, Nov 9, 2015 at 9:49 AM, Carl Baldwin <c...@ecbaldwin.net> wrote:
> On Fri, Nov 6, 2015 at 2:59 PM, Shraddha Pandhe <
> spandhe.openst...@gmail.com> wrote:
>>
>> We have a similar requirement where we want to pick
> Date: Wednesday, November 4, 2015 at 11:38 PM
> To: OpenStack List <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam
> db tables
>
>
>
> On 4 November 2015 at 13:21, Shraddha Pandhe <spandhe.openst...@gmail.com>
estery.com> wrote:
>
>> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes <jaypi...@gmail.com> wrote:
>>
>>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>>
>>>> Hi Salvatore,
>>>>
>>>> Thanks for the feedback. I agree with you
Bumping this up :)
Folks, does anyone else have a similar requirement to ours? Are folks
making scheduling decisions based on networking?
On Thu, Nov 5, 2015 at 12:24 PM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:
> Hi,
>
> I agree with all of you about the REST
duler_Customization
[2]
http://www.dorm.org/blog/openstack-architecture-at-go-daddy-part-2-neutron/#Customizations_to_Abstract_Away_Layer_2
> Regards,
>Neil
>
>
>
> *From: *Shraddha Pandhe
> *Sent: *Friday, 6 November 2015 20:23
> *To: *OpenStack Development Mailing Li
use-cases are always
shared with the community.
On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery <mest...@mestery.com> wrote:
> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes <jaypi...@gmail.com> wrote:
>
>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>
On Wed, Nov 4, 2015 at 1:38 PM, Armando M. <arma...@gmail.com> wrote:
>
>
> On 4 November 2015 at 13:21, Shraddha Pandhe <spandhe.openst...@gmail.com>
> wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blo
ch associate vendor specific properties
> to allocation pools.
>
> Salvatore
>
> On 4 November 2015 at 21:46, Shraddha Pandhe <spandhe.openst...@gmail.com>
> wrote:
>
>> Hi folks,
>>
>> I have a small question/suggestion about IPAM.
>>
>> With IPAM,
Hi folks,
I have a small question/suggestion about IPAM.
With IPAM, we are allowing users to have their own IPAM drivers so that
they can manage IP allocation. The problem is, the new ipam tables in the
database have the same columns as the old tables. So, as a user, if I want
to have my own
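The kind of thing being asked for here (driver-private data alongside the standard IPAM rows, as in the "arbitrary JSON blobs" thread) can be sketched as an extension table keyed to the subnet with a free-form JSON column. All table and column names below are invented for illustration; this is not proposed Neutron schema:

```python
import json
import sqlite3

# Invented extension table: driver-private metadata keyed by subnet,
# with arbitrary JSON serialized into a text column.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE my_ipam_subnet_metadata ("
    " subnet_id TEXT PRIMARY KEY,"
    " blob TEXT)"
)
conn.execute(
    "INSERT INTO my_ipam_subnet_metadata VALUES (?, ?)",
    ("subnet-1", json.dumps({"rack": "r12", "backplane": "bp-3"})),
)
row = conn.execute(
    "SELECT blob FROM my_ipam_subnet_metadata WHERE subnet_id = ?",
    ("subnet-1",),
).fetchone()
meta = json.loads(row[0])
```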
e and finding the right solution.
> On Nov 4, 2015 3:58 PM, "Shraddha Pandhe" <spandhe.openst...@gmail.com>
> wrote:
>
>>
>>
>> On Wed, Nov 4, 2015 at 1:38 PM, Armando M. <arma...@gmail.com> wrote:
>>
>>>
>>>
>>>
Hi folks,
James Penick from Yahoo! presented a talk on Thursday about how Yahoo uses
Neutron for Ironic. I would like to follow up on one particular use case
that was discussed: Multi-IP support.
Here's our use-case for Multi-ip:
For Ironic, we want the user to specify the number of IPs on boot.
Hi Ionut,
I am working on a similar effort: Adding driver for neutron-dhcp-agent [1]
& [2]. Is it similar to what you are trying to do? My approach doesn't need
any extra database. There are two ways to achieve HA in my case:
1. Run multiple neutron-dhcp-agents and set agents_per_network >1 so
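For reference, the agents-per-network knob referred to above is, as far as I know, the `dhcp_agents_per_network` option in neutron.conf; a sketch (the value 2 is chosen arbitrarily):

```ini
[DEFAULT]
# Schedule each network onto more than one DHCP agent for HA.
dhcp_agents_per_network = 2
```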
On Wed, Jul 1, 2015 at 11:28 AM, Shraddha Pandhe
spandhe.openst...@gmail.com wrote:
Hi,
I had a discussion about this with Kevin Benton on IRC. Filed a bug:
https://bugs.launchpad.net/neutron/+bug/1470612
Thanks!
On Wed, Jul 1, 2015 at 11:03 AM, Shraddha Pandhe
spandhe.openst...@gmail.com
Hi,
I have added few more questions to the bug [1]. Please confirm my
understanding.
[1] https://bugs.launchpad.net/neutron/+bug/1470612/comments/12
On Tue, Jul 28, 2015 at 12:14 PM, Shraddha Pandhe
spandhe.openst...@gmail.com wrote:
Hi,
I started working on this patch for bug [0
Hi Shihan,
I think the problem is slightly different. Does your patch take care of the
scenario where a port was deleted AFTER agent restart (not when agent was
down)?
My problem is that, when the agent restarts, it loses its previous network
cache. As soon as the agent starts, as part of
Hi folks..
I have a question about neutron dhcp agent restart scenario. It seems like,
when the agent restarts, it recovers the known network IDs in cache, but we
don't recover the known ports [1].
So if a port that was present before the agent restarted is deleted after the
restart, the agent
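The recovery being suggested could be expressed as a simple set difference; the port names and the idea of persisting the pre-restart cache are illustrative only, not the agent's actual code:

```python
# Illustrative only: diff the ports known before restart against the ports
# the server reports now, to find ports deleted while the cache was empty.
cached_ports = {"port-a", "port-b", "port-c"}   # known before restart
current_ports = {"port-a", "port-c"}            # reported by the server now

stale = cached_ports - current_ports            # host entries to clean up
sorted(stale)
```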
that.
On Thu, Jun 11, 2015 at 3:34 PM, Shraddha Pandhe
spandhe.openst...@gmail.com wrote:
The idea is to round-robin between gateways by using some sort of mod
operation. So logically it can look something like:
idx = ip % len(gateways)
gateway = gateways[idx]
This is just
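Concretely, the mod idea above could look like this in Python; `pick_gateway` is an invented name and this is only a sketch of the selection scheme, not proposed Neutron code:

```python
import ipaddress

def pick_gateway(ip, gateways):
    # Map the host's IP (as an integer) onto the gateway list with a mod,
    # so hosts spread roughly evenly across the available gateways.
    idx = int(ipaddress.ip_address(ip)) % len(gateways)
    return gateways[idx]

gws = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
pick_gateway("10.0.0.7", gws)
```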
Hi everyone,
Any thoughts on supporting multiple gateway IPs for subnets?
wrote:
What gateway address do you give to regular clients via dhcp when you have
multiple?
On Jun 11, 2015 12:29 PM, Shraddha Pandhe spandhe.openst...@gmail.com
wrote:
Hi,
Currently, the Subnets in Neutron and Nova-Network only support one
gateway. For provider networks in large data
Hi,
Currently, the Subnets in Neutron and Nova-Network only support one gateway.
For provider networks in large data centers, quite often, the architecture is
such a way that multiple gateways are configured per subnet. These multiple
gateways are typically spread across backplanes so that the
Hi Daniel,
I see the following in your command
--dhcp-range=set:tag0,192.168.110.0,static,86400s
--dhcp-range=set:tag1,192.168.111.0,static,86400s
Is this expected? Was this command generated by the agent itself, or was
Dnsmasq manually started?
On Tuesday, June 9, 2015 4:41 AM, Kevin
Hi folks,
I am working on nova-network in Havana. I have a very unique use case where I
need to add duplicate VLANs in nova-network. I am trying to add multiple
networks in nova-network with same VLAN ID. The reason is as follows:
The cluster that I have has an L3 backplane. We have been