Re: [Openstack] Question for Quantum V2 subnet

2012-08-13 Thread Takaaki Suzuki
Hi

Thank you for your comment.
I see: a dual-stack network needs an IPv4 prefix plus a bunch of IPv6 prefixes
(link-local, global, temporary global, ...) on one network.

>> The reason for this is because you can have multiple subnets on the same
>> L2 bcast domain. You can use ip aliasing in order to use multiple subnets on
>> one virtual nic. For example ifconfig eth0:1 a.b.c.d/24; ifconfig eth0:2
>> d.e.f.g/24
Does this mean it works something like VLAN tagging?

Thanks!
Suzuki

On Tue, Aug 14, 2012 at 12:50 PM, Dan Wendlandt  wrote:
> In a dual stack deployment there may be a v4 and a v6 subnet on the same
> network.
>
> There's also the case that a service provider has a notion of a public
> network, which is represented by a UUID.  After a period of time, they may
> run out of IPs in one subnet, and want to assign another subnet as well,
> without forcing tenants to have to start using a new identifier for the
> public network.   Of course, the provider would be responsible for creating
> L3 connectivity between the two subnets.
>
> Others can chime in, but those were the two cases that I remember.
>
> Dan
>
>
> On Mon, Aug 13, 2012 at 8:31 PM, Aaron Rosen  wrote:
>>
>> The reason for this is because you can have multiple subnets on the same
>> L2 bcast domain. You can use ip aliasing in order to use multiple subnets on
>> one virtual nic. For example ifconfig eth0:1 a.b.c.d/24; ifconfig eth0:2
>> d.e.f.g/24
>>
>> Aaron
>>
>>
>> On Mon, Aug 13, 2012 at 7:52 PM, Takaaki Suzuki 
>> wrote:
>>>
>>> Hi all.
>>>
>>> I have one question. I prepared devstack with Quantum V2.
>>> Now I can create a Subnet for a Network,
>>> and I can add multiple Subnets to one Network. Can a VM use multiple
>>> subnets on one virtual NIC?
>>> Why can Quantum v2 create multiple subnets for one Network?
>>>
>>> quantum --os_token 1b73ace152c440ea939c2329fd115e56 --os_url
>>> http://localhost:9696/ net-list
>>>
>>> +----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
>>> | admin_state_up | id                                   | name | status | subnets                              | tenant_id                        |
>>> +----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
>>> | True           | d7a8106c-7ca6-4302-a065-6a87c859ed9c | test | ACTIVE | 474ea30c-9337-4f48-854c-9f572538a44c | 4fb66e3355304be5a6f3340d7067b369 |
>>> |                |                                      |      |        | 52ffda8c-61aa-465b-ae62-1ef57e9bed85 |                                  |
>>> |                |                                      |      |        | 9a659285-c6b1-4e6f-b3f0-c3e37341e0be |                                  |
>>> +----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
>>>
>>> quantum --os_token 1b73ace152c440ea939c2329fd115e56 --os_url
>>> http://localhost:9696/ subnet-list
>>>
>>> +-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
>>> | allocation_pools                                      | cidr             | gateway_ip    | id                                   | ip_version | name   | network_id                           | tenant_id                        |
>>> +-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
>>> | {"start": "192.168.100.2", "end": "192.168.100.254"}  | 192.168.100.0/24 | 192.168.100.1 | 474ea30c-9337-4f48-854c-9f572538a44c | 4          | test01 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
>>> | {"start": "192.168.210.2", "end": "192.168.210.254"}  | 192.168.210.0/24 | 192.168.210.1 | 52ffda8c-61aa-465b-ae62-1ef57e9bed85 | 4          | test03 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
>>> | {"start": "192.168.200.2", "end": "192.168.200.254"}  | 192.168.200.0/24 | 192.168.200.1 | 9a659285-c6b1-4e6f-b3f0-c3e37341e0be | 4          | test02 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
>>> +-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
>>>
>>> Thanks!
>>> Suzuki
>>>

Re: [Openstack] [openstack-dev] [nova] Disk attachment consistency

2012-08-13 Thread John Griffith
On Mon, Aug 13, 2012 at 10:16 PM, Nathanael Burton
 wrote:
> On Aug 13, 2012 11:37 PM, "Vishvananda Ishaya" 
> wrote:
>> The second proposal I have is to use a feature of kvm attach and set the
>> device serial number. We can set it to the same value as the device
>> parameter. This means that a device attached to /dev/vdb may not always be
>> at /dev/vdb (with old kvm guests), but it will at least show up at
>> /dev/disk/by-id/virtio-vdb consistently.
>
> What about setting the serial number to the volume_id? At least that way you
> could be sure it was the volume you meant, especially in the case where vdb
> in the guest ends up not being what you requested. What about other
> hypervisors?
>
>> (review coming soon)
>>
>> First question: should we return this magic path somewhere via the api? It
>> would be pretty easy to have horizon generate it but it might be nice to
>> have it show up. If we do return it, do we mangle the device to always show
>> the consistent one, or do we return it as another parameter? guest_device
>> perhaps?
>>
>> Second question: what should happen if someone specifies /dev/xvda against
>> a kvm cloud or /dev/vda against a xen cloud?
>> I see two options:
>> a) automatically convert it to the right value and return it
>> b) fail with an error message
>>
>> Third question: what do we do if someone specifies a device value to a kvm
>> cloud that we know will not work. For example the vm has /dev/vda and
>> /dev/vdb and they request an attach at /dev/vdf. In this case we know that
>> it will likely show up at /dev/vdc. I see a few options here and none of
>> them are amazing:
>>
>> a) let the attach go through as is.
>>   advantages: it will allow scripts to work without having to manually
>> find the next device.
>>   disadvantages: the device name will never be correct in the guest
>> b) automatically modify the request to attach at /dev/vdc and return it
>>   advantages: the device name will be correct some of the time (kvm guests
>> with newer kernels)
>>   disadvantages: sometimes the name is wrong anyway. The user may not
>> expect the device number to change
>> c) fail and say, the next disk must be attached at /dev/vdc:
>>   advantages: explicit
>>   disadvantages: painful, incompatible, and the place we say to attach may
>> be incorrect anyway (kvm guests with old kernels)
>>
>> The second proposal earlier will at least give us a consistent name to
>> find the volume in all these cases, although b) means we have to check the
>> return value to find out what that consistent location is like we do when we
>> don't pass in a device.
>>
>> I hope everything is clear, but if more explanation is needed please let
>> me know. If anyone has alternative/better proposals please tell me. The last
>> question I think is the most important.
>>
>> Vish
>>
>>

I've wondered about using mount-by-UUID as the long-term solution, i.e.
just mounting via libvirt using /dev/disk/by-uuid (and not taking a device
parameter at all).  Although I guess there are some issues here.

As for my input on your questions:
> What about setting the serial number to the volume_id? At least that way you
> could be sure it was the volume you meant, especially in the case where vdb
> in the guest ends up not being what you requested. What about other
> hypervisors?

+1

>First question: should we return this magic path somewhere via the api? It
>would be pretty easy to have horizon generate it but it might be nice to have
>it show up. If we do return it, do we mangle the device to always show the
>consistent one, or do we return it as another parameter? guest_device perhaps?

I think returning a distinct parameter would be best in this case.

>Second question: what should happen if someone specifies /dev/xvda against a
>kvm cloud or /dev/vda against a xen cloud?
>I see two options:
>a) automatically convert it to the right value and return it
>b) fail with an error message

I would vote for option a (auto-convert and return).

With respect to the third question:
> b) automatically modify the request to attach at /dev/vdc and return it

This seems like the best choice we have, given the balance between
compatibility and reliability.
The only way to get better reliability in these scenarios creates a
compatibility mess, in my opinion.  I also think that if we add a
field to volume show that includes the *real* path, it alleviates some
of the problem here.

John


Re: [Openstack] [openstack-dev] [nova] Disk attachment consistency

2012-08-13 Thread Nathanael Burton
On Aug 13, 2012 11:37 PM, "Vishvananda Ishaya" 
wrote:
> The second proposal I have is to use a feature of kvm attach and set the
> device serial number. We can set it to the same value as the device
> parameter. This means that a device attached to /dev/vdb may not always be
> at /dev/vdb (with old kvm guests), but it will at least show up at
> /dev/disk/by-id/virtio-vdb consistently.

What about setting the serial number to the volume_id? At least that way
you could be sure it was the volume you meant, especially in the case where
vdb in the guest ends up not being what you requested. What about other
hypervisors?

> (review coming soon)
>
> First question: should we return this magic path somewhere via the api?
> It would be pretty easy to have horizon generate it but it might be nice
> to have it show up. If we do return it, do we mangle the device to always
> show the consistent one, or do we return it as another parameter?
> guest_device perhaps?
>
> Second question: what should happen if someone specifies /dev/xvda
> against a kvm cloud or /dev/vda against a xen cloud?
> I see two options:
> a) automatically convert it to the right value and return it
> b) fail with an error message
>
> Third question: what do we do if someone specifies a device value to a
> kvm cloud that we know will not work. For example the vm has /dev/vda and
> /dev/vdb and they request an attach at /dev/vdf. In this case we know that
> it will likely show up at /dev/vdc. I see a few options here and none of
> them are amazing:
>
> a) let the attach go through as is.
>   advantages: it will allow scripts to work without having to manually
>   find the next device.
>   disadvantages: the device name will never be correct in the guest
> b) automatically modify the request to attach at /dev/vdc and return it
>   advantages: the device name will be correct some of the time (kvm
>   guests with newer kernels)
>   disadvantages: sometimes the name is wrong anyway. The user may not
>   expect the device number to change
> c) fail and say, the next disk must be attached at /dev/vdc:
>   advantages: explicit
>   disadvantages: painful, incompatible, and the place we say to attach
>   may be incorrect anyway (kvm guests with old kernels)
>
> The second proposal earlier will at least give us a consistent name to
> find the volume in all these cases, although b) means we have to check
> the return value to find out what that consistent location is like we do
> when we don't pass in a device.
>
> I hope everything is clear, but if more explanation is needed please let
> me know. If anyone has alternative/better proposals please tell me. The
> last question I think is the most important.
>
> Vish


Re: [Openstack] Question for Quantum V2 subnet

2012-08-13 Thread Dan Wendlandt
In a dual stack deployment there may be a v4 and a v6 subnet on the same
network.

There's also the case that a service provider has a notion of a public
network, which is represented by a UUID.  After a period of time, they may
run out of IPs in one subnet, and want to assign another subnet as well,
without forcing tenants to have to start using a new identifier for the
public network.   Of course, the provider would be responsible for creating
L3 connectivity between the two subnets.

Others can chime in, but those were the two cases that I remember.

Dan


On Mon, Aug 13, 2012 at 8:31 PM, Aaron Rosen  wrote:

> The reason for this is because you can have multiple subnets on the same
> L2 bcast domain. You can use ip aliasing in order to use multiple subnets
> on one virtual nic. For example ifconfig eth0:1 a.b.c.d/24; ifconfig eth0:2
> d.e.f.g/24
>
> Aaron
>
>
> On Mon, Aug 13, 2012 at 7:52 PM, Takaaki Suzuki wrote:
>
>> Hi all.
>>
>> I have one question. I prepared devstack with Quantum V2.
>> Now I can create a Subnet for a Network,
>> and I can add multiple Subnets to one Network. Can a VM use multiple
>> subnets on one virtual NIC?
>> Why can Quantum v2 create multiple subnets for one Network?
>>
>> quantum --os_token 1b73ace152c440ea939c2329fd115e56 --os_url
>> http://localhost:9696/ net-list
>>
>> +----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
>> | admin_state_up | id                                   | name | status | subnets                              | tenant_id                        |
>> +----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
>> | True           | d7a8106c-7ca6-4302-a065-6a87c859ed9c | test | ACTIVE | 474ea30c-9337-4f48-854c-9f572538a44c | 4fb66e3355304be5a6f3340d7067b369 |
>> |                |                                      |      |        | 52ffda8c-61aa-465b-ae62-1ef57e9bed85 |                                  |
>> |                |                                      |      |        | 9a659285-c6b1-4e6f-b3f0-c3e37341e0be |                                  |
>> +----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
>>
>> quantum --os_token 1b73ace152c440ea939c2329fd115e56 --os_url
>> http://localhost:9696/ subnet-list
>>
>> +-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
>> | allocation_pools                                      | cidr             | gateway_ip    | id                                   | ip_version | name   | network_id                           | tenant_id                        |
>> +-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
>> | {"start": "192.168.100.2", "end": "192.168.100.254"}  | 192.168.100.0/24 | 192.168.100.1 | 474ea30c-9337-4f48-854c-9f572538a44c | 4          | test01 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
>> | {"start": "192.168.210.2", "end": "192.168.210.254"}  | 192.168.210.0/24 | 192.168.210.1 | 52ffda8c-61aa-465b-ae62-1ef57e9bed85 | 4          | test03 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
>> | {"start": "192.168.200.2", "end": "192.168.200.254"}  | 192.168.200.0/24 | 192.168.200.1 | 9a659285-c6b1-4e6f-b3f0-c3e37341e0be | 4          | test02 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
>> +-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
>>
>> Thanks!
>> Suzuki
>>


-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~


Re: [Openstack] centos6.2+essex+kvm+virtio+flatdhcp+windows2003 blue screen and netcard break

2012-08-13 Thread Shake Chen
Hi

Red Hat now recommends RHEL 6.3 for OpenStack installs.

Maybe you can try CentOS 6.3.

On Mon, Aug 13, 2012 at 9:15 PM, halfss  wrote:

> hello, folks:
>
> When I use centos6.2 + essex + kvm(0.12) + virtio + flatdhcp +
> windows2003, the guest OS breaks frequently, like this:
>
> [screenshot attachments omitted]
>
> Does anyone know why?
>
> thanks :)
>
>


-- 
Shake Chen


[Openstack] [nova] Disk attachment consistency

2012-08-13 Thread Vishvananda Ishaya
Hey Everyone,

Overview


One of the things that we are striving for in nova is interface consistency, 
that is, we'd like someone to be able to use an openstack cloud without knowing 
or caring which hypervisor is running underneath. There is a nasty bit of 
inconsistency in the way that disks are hot attached to vms that shows through 
to the user. I've been debating ways to minimize this and I have some issues I 
need feedback on.

Background
--

There are three issues contributing to the bad user experience of attaching 
volumes.

1) The api we present for attaching a volume to an instance has a parameter 
called device. This is presented as where to attach the disk in the guest.

2) Xen picks minor device numbers on the host hypervisor side and the guest
driver follows instructions.

3) KVM picks minor device numbers on the guest driver side and doesn't expose
them to the host hypervisor side.

Resulting Issues


a) The device name only makes sense for linux. FreeBSD will select different
device names, and windows doesn't even use device names. In addition, xen uses
/dev/xvda and kvm uses /dev/vda.

b) The device sent in kvm will not match where it actually shows up. We can
consistently guess where it will show up if the guest kernel is >= 3.2;
otherwise we are likely to be wrong, and it may change on a reboot anyway.


Long term solutions
--

We probably shouldn't expose a device path, it should be a device number. This 
is probably the right change long term, but short term we need to make the 
device name make sense somehow. I want to delay the long term until after the 
summit, and come up with something that works short-term with our existing 
parameters and usage.

The first proposal I have is to make the device parameter optional. The system 
will automatically generate a valid device name that will be accurate for xen 
and kvm with guest kernel 3.2, but will likely be wrong for old kvm guests in 
some situations. I think this is definitely an improvement and only a very 
minor change to an extension api (making a parameter optional, and returning 
the generated value of the parameter).

(review at https://review.openstack.org/#/c/10908/)
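
With the change applied, the call would look roughly like this (a sketch,
assuming the nova CLI syntax of the time; the IDs are placeholders):

  # device omitted; nova picks a valid name and returns it
  nova volume-attach <server-id> <volume-id>
  # versus the previous mandatory form
  nova volume-attach <server-id> <volume-id> /dev/vdb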

The second proposal I have is to use a feature of kvm attach and set the device 
serial number. We can set it to the same value as the device parameter. This 
means that a device attached to /dev/vdb may not always be at /dev/vdb (with 
old kvm guests), but it will at least show up at /dev/disk/by-id/virtio-vdb 
consistently.

(review coming soon)
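
Under the hood this rides on the libvirt disk XML. A minimal sketch, assuming
a virtio disk and a hypothetical host source path; the <serial> element is
what udev in the guest turns into /dev/disk/by-id/virtio-vdb:

  cat > disk.xml <<'EOF'
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw'/>
    <source dev='/dev/sdX'/>
    <target dev='vdb' bus='virtio'/>
    <serial>vdb</serial>
  </disk>
  EOF
  virsh attach-device <instance-name> disk.xml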

First question: should we return this magic path somewhere via the api? It 
would be pretty easy to have horizon generate it but it might be nice to have 
it show up. If we do return it, do we mangle the device to always show the 
consistent one, or do we return it as another parameter? guest_device perhaps?

Second question: what should happen if someone specifies /dev/xvda against a 
kvm cloud or /dev/vda against a xen cloud?
I see two options:
a) automatically convert it to the right value and return it
b) fail with an error message

Third question: what do we do if someone specifies a device value to a kvm 
cloud that we know will not work. For example the vm has /dev/vda and /dev/vdb 
and they request an attach at /dev/vdf. In this case we know that it will 
likely show up at /dev/vdc. I see a few options here and none of them are 
amazing:

a) let the attach go through as is.
  advantages: it will allow scripts to work without having to manually find the 
next device.
  disadvantages: the device name will never be correct in the guest
b) automatically modify the request to attach at /dev/vdc and return it
  advantages: the device name will be correct some of the time (kvm guests with 
newer kernels)
  disadvantages: sometimes the name is wrong anyway. The user may not expect 
the device number to change
c) fail and say, the next disk must be attached at /dev/vdc:
  advantages: explicit
  disadvantages: painful, incompatible, and the place we say to attach may be 
incorrect anyway (kvm guests with old kernels)

The second proposal earlier will at least give us a consistent name to find the 
volume in all these cases, although b) means we have to check the return value 
to find out what that consistent location is like we do when we don't pass in a 
device.

I hope everything is clear, but if more explanation is needed please let me 
know. If anyone has alternative/better proposals please tell me. The last 
question I think is the most important.

Vish




[Openstack] Cannot associate/dissociate an floating IP to an instance from API

2012-08-13 Thread Sam Su
Hi,

I am trying to associate a floating IP with an instance via the API in my
Essex environment, and I get the error "404 Not Found". I found that someone
had filed an invalid bug (the link is
https://bugs.launchpad.net/nova/+bug/917064), so I followed the information
there and tried several request formats as below, but all failed.

1. curl -k -D - -H "X-Auth-Token: 7f48c07af3b842d1b9c0f15a37ddd956" -X
'POST' -d @test.json -v
http://localhost:8774/v1.1/53869b3cd0cc40a28a826422a37622da/os-floating-ips/1/associate
-H 'Content-type: application/json'

The file test.json:
{
    "associate_address" : {
        "fixed_ip" : "192.168.20.3"
    }
}
2. curl -k -D - -H "X-Auth-Token: 7f48c07af3b842d1b9c0f15a37ddd956" -X
'POST' -d @test.json -v
http://localhost:8774/v1.1/53869b3cd0cc40a28a826422a37622da/os-floating-ips/1/action
-H 'Content-type: application/json'

The file test.json:
{
    "addFloatingIp": {
        "address" : "10.100.20.17"
    },
    "associate_address" : {
        "fixed_ip" : "192.168.20.3"
    }
}

3. curl -k -D - -H "X-Auth-Token: 7f48c07af3b842d1b9c0f15a37ddd956" -X
'POST' -d @test.json -v
http://localhost:8774/v1.1/53869b3cd0cc40a28a826422a37622da/os-floating-ips/1/addFloatingIp
 -H 'Content-type: application/json'

The file test.json:
{
    "associate_address" : {
        "fixed_ip" : "192.168.20.3"
    }
}

I am wondering what the correct request format is, or whether this is just a
bug.  It would be much appreciated if someone could give me some hints.
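
For comparison, my understanding is that the addFloatingIp action is normally
POSTed against the server resource rather than the floating-IP resource; a
sketch, with $SERVER_ID as a placeholder:

curl -k -D - -H "X-Auth-Token: 7f48c07af3b842d1b9c0f15a37ddd956" -X 'POST' \
  -H 'Content-type: application/json' \
  -d '{"addFloatingIp": {"address": "10.100.20.17"}}' \
  http://localhost:8774/v1.1/53869b3cd0cc40a28a826422a37622da/servers/$SERVER_ID/action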

Thanks,
Sam


Re: [Openstack] Question for Quantum V2 subnet

2012-08-13 Thread Aaron Rosen
The reason for this is that you can have multiple subnets on the same L2
broadcast domain. You can use IP aliasing in order to use multiple subnets on
one virtual NIC. For example: ifconfig eth0:1 a.b.c.d/24; ifconfig eth0:2
d.e.f.g/24
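
The same thing with the iproute2 tools would look roughly like this (a
sketch; the addresses are placeholders):

  # add a second and third address to the same interface, no aliases needed
  ip addr add 192.168.100.5/24 dev eth0
  ip addr add 192.168.200.5/24 dev eth0
  ip addr show eth0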

Aaron

On Mon, Aug 13, 2012 at 7:52 PM, Takaaki Suzuki  wrote:

> Hi all.
>
> I have one question. I prepared devstack with Quantum V2.
> Now I can create a Subnet for a Network,
> and I can add multiple Subnets to one Network. Can a VM use multiple
> subnets on one virtual NIC?
> Why can Quantum v2 create multiple subnets for one Network?
>
> quantum --os_token 1b73ace152c440ea939c2329fd115e56 --os_url
> http://localhost:9696/ net-list
>
> +----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
> | admin_state_up | id                                   | name | status | subnets                              | tenant_id                        |
> +----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
> | True           | d7a8106c-7ca6-4302-a065-6a87c859ed9c | test | ACTIVE | 474ea30c-9337-4f48-854c-9f572538a44c | 4fb66e3355304be5a6f3340d7067b369 |
> |                |                                      |      |        | 52ffda8c-61aa-465b-ae62-1ef57e9bed85 |                                  |
> |                |                                      |      |        | 9a659285-c6b1-4e6f-b3f0-c3e37341e0be |                                  |
> +----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
>
> quantum --os_token 1b73ace152c440ea939c2329fd115e56 --os_url
> http://localhost:9696/ subnet-list
>
> +-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
> | allocation_pools                                      | cidr             | gateway_ip    | id                                   | ip_version | name   | network_id                           | tenant_id                        |
> +-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
> | {"start": "192.168.100.2", "end": "192.168.100.254"}  | 192.168.100.0/24 | 192.168.100.1 | 474ea30c-9337-4f48-854c-9f572538a44c | 4          | test01 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
> | {"start": "192.168.210.2", "end": "192.168.210.254"}  | 192.168.210.0/24 | 192.168.210.1 | 52ffda8c-61aa-465b-ae62-1ef57e9bed85 | 4          | test03 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
> | {"start": "192.168.200.2", "end": "192.168.200.254"}  | 192.168.200.0/24 | 192.168.200.1 | 9a659285-c6b1-4e6f-b3f0-c3e37341e0be | 4          | test02 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
> +-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
>
> Thanks!
> Suzuki
>


[Openstack] Question for Quantum V2 subnet

2012-08-13 Thread Takaaki Suzuki
Hi all.

I have one question. I prepared devstack with Quantum V2.
Now I can create a Subnet for a Network,
and I can add multiple Subnets to one Network. Can a VM use multiple
subnets on one virtual NIC?
Why can Quantum v2 create multiple subnets for one Network?

quantum --os_token 1b73ace152c440ea939c2329fd115e56 --os_url
http://localhost:9696/ net-list
+----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
| admin_state_up | id                                   | name | status | subnets                              | tenant_id                        |
+----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+
| True           | d7a8106c-7ca6-4302-a065-6a87c859ed9c | test | ACTIVE | 474ea30c-9337-4f48-854c-9f572538a44c | 4fb66e3355304be5a6f3340d7067b369 |
|                |                                      |      |        | 52ffda8c-61aa-465b-ae62-1ef57e9bed85 |                                  |
|                |                                      |      |        | 9a659285-c6b1-4e6f-b3f0-c3e37341e0be |                                  |
+----------------+--------------------------------------+------+--------+--------------------------------------+----------------------------------+

quantum --os_token 1b73ace152c440ea939c2329fd115e56 --os_url
http://localhost:9696/ subnet-list
+-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
| allocation_pools                                      | cidr             | gateway_ip    | id                                   | ip_version | name   | network_id                           | tenant_id                        |
+-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+
| {"start": "192.168.100.2", "end": "192.168.100.254"}  | 192.168.100.0/24 | 192.168.100.1 | 474ea30c-9337-4f48-854c-9f572538a44c | 4          | test01 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
| {"start": "192.168.210.2", "end": "192.168.210.254"}  | 192.168.210.0/24 | 192.168.210.1 | 52ffda8c-61aa-465b-ae62-1ef57e9bed85 | 4          | test03 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
| {"start": "192.168.200.2", "end": "192.168.200.254"}  | 192.168.200.0/24 | 192.168.200.1 | 9a659285-c6b1-4e6f-b3f0-c3e37341e0be | 4          | test02 | d7a8106c-7ca6-4302-a065-6a87c859ed9c | 4fb66e3355304be5a6f3340d7067b369 |
+-------------------------------------------------------+------------------+---------------+--------------------------------------+------------+--------+--------------------------------------+----------------------------------+

Thanks!
Suzuki



Re: [Openstack] [Nova] How common is user_data for instances?

2012-08-13 Thread Michael Still
On 14/08/12 08:54, Jay Pipes wrote:

> I was *going* to create a random-data table with the same average row
> size as the instances table in Nova to see how long the migration would
> take, and then I realized something... The user_data column is already
> of column type MEDIUMTEXT, not TEXT:
> 
> jpipes@uberbox:~$ mysql -uroot nova -e "DESC instances" | grep user_data
> user_data mediumtext  YES NULL
> 
> So the column can already store data up to 2^24 bytes long, or 16MB of
> data. So this might be a moot issue already? Do we expect user data to
> be more than 16MB?

The bug reports truncation at 64kb. The last schema change I can see for
that column is Essex version 82, which has:

$ grep user_data *.py
082_essex.py:Column('user_data', Text),

http://docs.sqlalchemy.org/en/latest/dialects/mysql.html says that Text
is "MySQL TEXT type, for text up to 2^16 characters".

Am I misunderstanding something here?

Mikal



Re: [Openstack] [Nova] How common is user_data for instances?

2012-08-13 Thread Jay Pipes
On 08/13/2012 06:02 PM, Michael Still wrote:
> On 14/08/12 01:24, Jay Pipes wrote:
> 
>> Or just set the column to the LONGTEXT type and both MySQL and
>> PostgreSQL will be just as happy.
> 
> This is what I was originally aiming at -- will large deployers be angry
> if I change this column to longtext? Will the migration be a significant
> problem for them?

From the MySQL standpoint, the migration impact is negligible. It's
essentially changing the row pointer size from 2 bytes to 4 bytes and
rewriting data pages. For InnoDB tables, it's unlikely many rows would
even be moved, as InnoDB stores a good chunk of these types of rows in
its main data pages -- I think up to 4KB if I remember correctly -- so
unless the user data exceeded that size, I don't think the rows would
even need to move data pages...

I would guess that an ALTER TABLE that changes the column from a TEXT to
a LONGTEXT would likely take less than a minute for even a pretty big
(millions of rows in the instances table) database.
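
A sketch of that one statement, assuming the column were still plain TEXT:

  mysql -uroot nova -e "ALTER TABLE instances MODIFY user_data LONGTEXT;"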

I was *going* to create a random-data table with the same average row
size as the instances table in Nova to see how long the migration would
take, and then I realized something... The user_data column is already
of column type MEDIUMTEXT, not TEXT:

jpipes@uberbox:~$ mysql -uroot nova -e "DESC instances" | grep user_data
user_data   mediumtext  YES NULL

So the column can already store data up to 2^24 bytes long, or 16MB of
data. So this might be a moot issue already? Do we expect user data to
be more than 16MB?

-jay



Re: [Openstack] [Nova] How common is user_data for instances?

2012-08-13 Thread Joshua Harlow
I'm pretty sure it's common, since it's the main way to get data into
cloud-init.
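
For instance, booting with a user-data file is a sketch along these lines
(the flag spelling is my assumption for the novaclient of the time, and the
file name is a placeholder):

  nova boot --image <image-id> --flavor 1 --user_data cloud-config.txt myvm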

-Josh

On 8/13/12 3:02 PM, "Michael Still"  wrote:

>On 14/08/12 01:24, Jay Pipes wrote:
>
>> Or just set the column to the LONGTEXT type and both MySQL and
>> PostgreSQL will be just as happy.
>
>This is what I was originally aiming at -- will large deployers be angry
>if I change this column to longtext? Will the migration be a significant
>problem for them?
>
>Mikal
>


Re: [Openstack] metadata service problem

2012-08-13 Thread Xin Zhao

Hello,

How can I check the value of $my_ip?

I notice that in the ARP table on the instance, the entry for IP
169.254.169.254 has a wrong MAC address, one that I don't see on either the
controller node or the worker node. After I change the entry for
169.254.169.254 to map to the real MAC address of the controller host (which
is also the metadata host), the curl command works. But I don't know why the
wrong MAC address gets entered into the ARP table in the first place.
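
For reference, the check and the workaround I used, as a sketch (the MAC is
a placeholder):

  arp -n | grep 169.254.169.254              # inspect the current entry
  arp -d 169.254.169.254                     # drop the stale entry
  arp -s 169.254.169.254 fa:16:3e:xx:xx:xx   # pin the metadata host's MAC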


Thanks,
Xin

On 8/10/2012 3:47 AM, tacy lee wrote:
> Suggest checking the iptables-save output; metadata_host defaults to
> $my_ip. If you often switch networks, your $my_ip may be set wrong.
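>
> A minimal way to check that on the metadata host would be a sketch like
> (assuming the default FlatDHCP setup, where a NAT rule redirects
> 169.254.169.254 to $my_ip):
>
>   iptables-save -t nat | grep 169.254.169.254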


On Fri, Aug 10, 2012 at 10:31 AM, Xin Zhao wrote:


> Hello,
>
> In my essex install on RHEL6, there is a problem with the metadata service.
> The metadata service works for instances running on the controller node,
> where the nova-api (metadata service) is running. But for the other worker
> nodes, the metadata service is intermittent, i.e. the instances sometimes
> can get their metadata, and sometimes it fails with errors like:
>
> $> curl -v http://169.254.169.254:8775/
> * About to connect() to 169.254.169.254 port 8775 (#0)
> *   Trying 169.254.169.254... Connection timed out
> * couldn't connect to host
> * Closing connection #0
> curl: (7) couldn't connect to host
>
> Any idea where I should start investigating this?
>
> Thanks,
> Xin



Re: [Openstack] [Nova] How common is user_data for instances?

2012-08-13 Thread Michael Still
On 14/08/12 01:24, Jay Pipes wrote:

> Or just set the column to the LONGTEXT type and both MySQL and
> PostgreSQL will be just as happy.

This is what I was originally aiming at -- will large deployers be angry
if I change this column to longtext? Will the migration be a significant
problem for them?

Mikal



Re: [Openstack] Manually attaching iSCSI volume to an instance

2012-08-13 Thread Rafi Khardalian
Great -- I just submitted it for stable/essex and will do the same for
Folsom once I can test it.

https://review.openstack.org/#/c/11303/

---
Rafi Khardalian
Vice President, Operations | Metacloud, Inc.
Email: r...@metacloud.com | Tel: 855-638-2256, Ext. 2662


-Original Message-
From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: Monday, August 13, 2012 1:08 PM
To: Rafi Khardalian
Cc: Samuel Winchenbach; openstack@lists.launchpad.net
Subject: Re: [Openstack] Manually attaching iSCSI volume to an instance

This patch would be most welcome.

Vish

On Aug 13, 2012, at 12:25 PM, Rafi Khardalian  wrote:

> We ran into this issue as well and just patched a fix into our
> distribution.  Basically, the patch will re-establish iSCSI
> connections anytime an instance is hard rebooted.  The Nova
> configuration option "start_guests_on_host_boot" ultimately results in
> executing the same code as a hard reboot, which is why we opted to patch
it into that function.
>
> I've attached a copy of the patch to my response, which should apply
> cleanly to stable/essex.  I'm planning to submit the patch upstream,
> since I do not see a fix for it anywhere, even in Folsom.
>
> Please let me know if this fixes the issue for you.
> ---
> Rafi Khardalian
> Vice President, Operations | Metacloud, Inc.
> Email: r...@metacloud.com | Tel: 855-638-2256, Ext. 2662
>
>
> -Original Message-
> From: openstack-bounces+rafi=metacloud@lists.launchpad.net
> [mailto:openstack-bounces+rafi=metacloud@lists.launchpad.net] On
> Behalf Of Samuel Winchenbach
> Sent: Friday, August 10, 2012 6:59 AM
> To: openstack@lists.launchpad.net
> Subject: [Openstack] Manually attaching iSCSI volume to an instance
>
> Hi,
>
> I believe I am a "victim" of this bug:
> https://bugs.launchpad.net/nova/+bug/1001088  ( I can not seem to find
> the fix that was committed )
>
> After rebooting my volumes are not restored and they become stuck in
> the "in-use" state.
>
> Is there a way to manually restore the volumes?  Using tgt-admin I
> verified that the target exists, I used iscsiadm to log in, and
> verified that it created a device node.  Here is a log showing what I just
> mentioned:
> http://paste2.org/p/2101666
>
> And here is a document I created showing my general setup, including
> nova.conf.  (I didn't include glance, or keystone configs because they
> seem to be working fine):
> https://docs.google.com/document/d/1pkwGa22OfATp62hVGYR3jWTEbhQVyHIpwOJgaSrbQ7M/edit
>
> Any help would be greatly appreciated.  Thanks, Sam
>



[Openstack] OpenStack Summit Tracks & Topics

2012-08-13 Thread Lauren Sell
Hi everyone,

We're planning the next OpenStack Summit, October 15-18, and wanted to share
the current thinking around tracks/topics and get input.

Based on the feedback from the last event, people wanted more design summit 
working sessions, more conference-style sessions (presentations/panels) and 
more hands-on workshops. To accommodate more content overall, the previously 
separate Design Summit (3 days) and Conference (2 days) events now run in 
parallel as one OpenStack Summit (4 days). You can review the original 
community discussion about the format change here:
http://www.openstack.org/blog/2012/06/openstack-summit-coming-october-15th-19th-to-san-diego-ca/

This means if you're a developer attending the Design Summit, you would attend 
Monday - Thursday. If you're a technical practitioner, you may also want to 
attend all four days for the Operations Summit (we're dedicating more time to 
it this year) and workshops happening Monday and Thursday. But for the majority 
of folks, the main days of the Summit with keynote speakers and 
presentations/panels will be Tuesday and Wednesday.

We have a public google doc with the list of proposed tracks, including the 
Design Summit topics, and a short description of each.  We've also included a 
proposed schedule to give a general idea how things would run, but there might 
be some minor adjustments.

Proposed Tracks & Schedule:
https://docs.google.com/spreadsheet/ccc?key=0AmUn0hzC1InKdEtNWVpRckt4R0Z0Q0Z3SUc1cUtDQXc#gid=0

Speaking submissions for the conference-style content are live at
http://www.openstack.org/summit/san-diego-2012/call-for-speakers/ (basically
everything except the Design Summit working sessions, which will open for
submissions in the next few weeks), and the deadline is August 30.  New this
time around, we want to publish all of the conference-style submissions for a
community vote (similar to the SXSW panel picker, if you're familiar), which
would go live in early September.  We are also planning to have track chairs
from various disciplines and organizations help fill in the gaps and shape
the rest of the schedule.

Please take a look at the google doc, and we would appreciate any feedback on 
the current tracks -- whether the topic is not necessary or needs less time, 
there's something missing, or the definition/scope could be changed.

Thanks!
Lauren


Re: [Openstack] [Quantum] Removing quantum-rootwrap

2012-08-13 Thread Dan Wendlandt
On Mon, Aug 13, 2012 at 12:51 PM, Vishvananda Ishaya
wrote:

> This is up to dan, I suppose, but the rootwrap stuff seems like something
> worth granting an FFE to…
>

I wasn't going to mention it, as the urgency of a nearby deadline can be
helpful :)

But yes, I'd grant an ffe to something this important, especially because
it applies across all uses of quantum.

Dan



>
> Vish
>
> On Aug 13, 2012, at 11:49 AM, j...@redhat.com wrote:
>
> >>   From: j...@redhat.com
> >>   Date: Fri, 10 Aug 2012 11:52:49 -0400
> > [...]
> >>   Very much, thanks.  More news as it happens...
> >
> > Here's where I've got to so far
> >
> > I've ported/transliterated code from nova/cinder to manage rootwrap
> > filter defs the same way in quantum.
> >
> > I've plowed through most of the quantum filter defs which were
> > embedded in the agent code, and changed them to newer format, in
> > /etc/quantum/rootwrap.d/*
> >
> > Current headache is getting my test environment back to working
> > condition, and then contriving enough tests to prove that the code
> > changes are working.  Once I get that done, I'll do a cleanup pass and
> > get a changeset posted for review.
> >
> > We're getting close to the tomorrow deadline.  I will work with Gary
> > and Bob and Chris to try to get this stuff nailed ASAP, or figure out
> > plan B if it looks like that's just too much of a stretch.
> >



-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~


Re: [Openstack] Manually attaching iSCSI volume to an instance

2012-08-13 Thread Vishvananda Ishaya
This patch would be most welcome.

Vish

On Aug 13, 2012, at 12:25 PM, Rafi Khardalian  wrote:

> We ran into this issue as well and just patched a fix into our
> distribution.  Basically, the patch will re-establish iSCSI connections
> anytime an instance is hard rebooted.  The Nova configuration option
> "start_guests_on_host_boot" ultimately results in executing the same code
> as a hard reboot, which is why we opted to patch it into that function.
> 
> I've attached a copy of the patch to my response, which should apply
> cleanly to stable/essex.  I'm planning to submit the patch upstream, since
> I do not see a fix for it anywhere, even in Folsom.
> 
> Please let me know if this fixes the issue for you.
> ---
> Rafi Khardalian
> Vice President, Operations | Metacloud, Inc.
> Email: r...@metacloud.com | Tel: 855-638-2256, Ext. 2662
> 
> 
> -Original Message-
> From: openstack-bounces+rafi=metacloud@lists.launchpad.net
> [mailto:openstack-bounces+rafi=metacloud@lists.launchpad.net] On
> Behalf Of Samuel Winchenbach
> Sent: Friday, August 10, 2012 6:59 AM
> To: openstack@lists.launchpad.net
> Subject: [Openstack] Manually attaching iSCSI volume to an instance
> 
> Hi,
> 
> I believe I am a "victim" of this bug:
> https://bugs.launchpad.net/nova/+bug/1001088  ( I can not seem to find the
> fix that was committed )
> 
> After rebooting my volumes are not restored and they become stuck in the
> "in-use" state.
> 
> Is there a way to manually restore the volumes?  Using tgt-admin I
> verified that the target exists, I used iscsiadm to log in, and verified
> that it created a device node.  Here is a log showing what I just mentioned:
> http://paste2.org/p/2101666
> 
> And here is a document I created showing my general setup, including
> nova.conf.  (I didn't include glance, or keystone configs because they
> seem to be working fine):
> https://docs.google.com/document/d/1pkwGa22OfATp62hVGYR3jWTEbhQVyHIpwOJgaSrbQ7M/edit
> 
> Any help would be greatly appreciated.  Thanks, Sam
> 




Re: [Openstack] [Quantum] Removing quantum-rootwrap

2012-08-13 Thread Vishvananda Ishaya
This is up to dan, I suppose, but the rootwrap stuff seems like something
worth granting an FFE to…

Vish

On Aug 13, 2012, at 11:49 AM, j...@redhat.com wrote:

>>   From: j...@redhat.com
>>   Date: Fri, 10 Aug 2012 11:52:49 -0400
> [...]
>>   Very much, thanks.  More news as it happens...
> 
> Here's where I've got to so far
> 
> I've ported/transliterated code from nova/cinder to manage rootwrap
> filter defs the same way in quantum.
> 
> I've plowed through most of the quantum filter defs which were
> embedded in the agent code, and changed them to newer format, in
> /etc/quantum/rootwrap.d/*
> 
> Current headache is getting my test environment back to working
> condition, and then contriving enough tests to prove that the code
> changes are working.  Once I get that done, I'll do a cleanup pass and
> get a changeset posted for review.
> 
> We're getting close to the tomorrow deadline.  I will work with Gary
> and Bob and Chris to try to get this stuff nailed ASAP, or figure out
> plan B if it looks like that's just too much of a stretch.
> 




Re: [Openstack] The Return of Hyper-V

2012-08-13 Thread Adam Young

On 08/13/2012 11:26 AM, Peter Pouliot wrote:


Hello Everyone,

I would like to take this moment to make everyone aware of the following:

https://review.openstack.org/#/c/11276/

I would like to thank the following individuals, who have given so 
much to help this project progress to this state.  Without their 
efforts this project would be dead in the water.


Jordan Rinke: Thank you for doing the original work to bring back base
Essex functionality; your effort helped jump-start this community, and for
that I am grateful.  You are one of the original champions of Hyper-V.


Alessandro Pilotti (CloudBase Solutions): Thank you for the tremendous
amount of code you produced.  We would not have the feature set, unit tests
or the Folsom integration without the many hours you graciously contributed
to the project.  I am truly grateful for the effort you put in.  At times I
thought this project was going to fail; without your effort it most
certainly would have.


Pedro Navarro Perez: Thank you for reaching out to me at the OpenStack
summit and being the first Hyper-V community member.  Your efforts on
volume attach/detach and boot from volume are greatly appreciated.  Thank
you for collaborating with other members to test, troubleshoot and advance
our efforts.  Your work this weekend helping Alessandro get to this point
was necessary for this submission to happen.


Jose Castro Leon: Jose, thank you for your assistance with testing.  Given
all that is going on these days at CERN, you still found time to help us
test Hyper-V.


To Vish, Monty and Stef, thank you for all of your guidance in this 
process.   Hopefully because of your help and wisdom it will be a 
quick and painless approval and integration process.


And last and most importantly:

To Hashir Abdi, my colleague at Microsoft.   I am grateful for you 
believing in this project.  From the early days in the Interop lab, 
you saw the true potential. Thank you for the hours you have spent 
making sure that this project progressed forward.   We have busy days 
ahead.


Once again, thank you everyone for your assistance.

p

Peter J. Pouliot, CISSP

Senior SDET, OpenStack

Microsoft

New England Research & Development Center

One Memorial Drive, Cambridge, MA 02142

ppoul...@microsoft.com | Tel: +1 (857) 453 6436






Well done, Peter and company.


Re: [Openstack] Manually attaching iSCSI volume to an instance

2012-08-13 Thread Rafi Khardalian
We ran into this issue as well and just patched a fix into our
distribution.  Basically, the patch will re-establish iSCSI connections
anytime an instance is hard rebooted.  The Nova configuration option
"start_guests_on_host_boot" ultimately results in executing the same code
as a hard reboot, which is why we opted to patch it into that function.

I've attached a copy of the patch to my response, which should apply
cleanly to stable/essex.  I'm planning to submit the patch upstream, since
I do not see a fix for it anywhere, even in Folsom.

Please let me know if this fixes the issue for you.
---
Rafi Khardalian
Vice President, Operations | Metacloud, Inc.
Email: r...@metacloud.com | Tel: 855-638-2256, Ext. 2662


-Original Message-
From: openstack-bounces+rafi=metacloud@lists.launchpad.net
[mailto:openstack-bounces+rafi=metacloud@lists.launchpad.net] On
Behalf Of Samuel Winchenbach
Sent: Friday, August 10, 2012 6:59 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Manually attaching iSCSI volume to an instance

Hi,

I believe I am a "victim" of this bug:
https://bugs.launchpad.net/nova/+bug/1001088  ( I can not seem to find the
fix that was committed )

After rebooting my volumes are not restored and they become stuck in the
"in-use" state.

Is there a way to manually restore the volumes?  Using tgt-admin I
verified that the target exists, I used iscsiadm to log in, and verified
that it created a device node.  Here is a log showing what I just mentioned:
http://paste2.org/p/2101666
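
For anyone retracing those steps by hand, the login side is a sketch along
these lines (portal and IQN are placeholders; the iqn prefix shown is my
understanding of nova's default target prefix):

  iscsiadm -m discovery -t sendtargets -p <volume-host>:3260
  iscsiadm -m node -T iqn.2010-10.org.openstack:volume-<id> \
    -p <volume-host>:3260 --login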

And here is a document I created showing my general setup, including
nova.conf.  (I didn't include glance, or keystone configs because they
seem to be working fine):
https://docs.google.com/document/d/1pkwGa22OfATp62hVGYR3jWTEbhQVyHIpwOJgaSrbQ7M/edit

Any help would be greatly appreciated.  Thanks, Sam



Re: [Openstack] [Quantum] Removing quantum-rootwrap

2012-08-13 Thread jrd
>   From: j...@redhat.com
>   Date: Fri, 10 Aug 2012 11:52:49 -0400
[...]
>   Very much, thanks.  More news as it happens...

Here's where I've got to so far

I've ported/transliterated code from nova/cinder to manage rootwrap
filter defs the same way in quantum.

I've plowed through most of the quantum filter defs which were
embedded in the agent code, and changed them to newer format, in
/etc/quantum/rootwrap.d/*
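
As an illustration of the newer format, a filter file is a sketch like the
following (the file name and entries are hypothetical, not the actual ported
defs):

  # /etc/quantum/rootwrap.d/example.filters
  [Filters]
  ip: CommandFilter, /sbin/ip, root
  dnsmasq: CommandFilter, /usr/sbin/dnsmasq, root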

Current headache is getting my test environment back to working
condition, and then contriving enough tests to prove that the code
changes are working.  Once I get that done, I'll do a cleanup pass and
get a changeset posted for review.

We're getting close to the tomorrow deadline.  I will work with Gary
and Bob and Chris to try to get this stuff nailed ASAP, or figure out
plan B if it looks like that's just too much of a stretch.



[Openstack] [Swift] community update and what's coming in Folsom

2012-08-13 Thread John Dickinson
We just released Swift 1.6.0 last Monday
( https://lists.launchpad.net/openstack/msg15505.html ). We've got a lot
of great features and improvements in it, and I wanted to take some
time to update the wider community about where Swift is.

Swift 1.4.8 was included with the last OpenStack release (Essex).
Since then, all of the OpenStack projects have been working towards
OpenStack's Folsom release. It is scheduled for the end of September.
This summer, Swift has made two major releases (1.5.0 and 1.6.0). We
will most likely have one more release of Swift before Folsom is cut.
This next release will be included in OpenStack Folsom.

So what can you expect from swift in the Folsom release? Looking at
the CHANGELOG, there are some exciting changes coming.

First, swift now has deep integration with statsd. This allows for
simple integration into existing statsd monitoring systems and
provides real-time monitoring of nearly every aspect of a swift
cluster. This feature is documented at
http://docs.openstack.org/developer/swift/admin_guide.html#reporting-metrics-to-statsd.
We have also expanded swift-recon to support all
types of servers in the cluster and to report on many of the
background processes used by swift. These features together allow
swift deployers to know exactly what is going on in their clusters.
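
As a sketch, pointing a server at a statsd daemon looks something like this
in its conf (option names per the admin guide linked above; the host and
port values are placeholders):

  [DEFAULT]
  log_statsd_host = localhost
  log_statsd_port = 8125
  log_statsd_default_sample_rate = 1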

Also, swift now supports versioned writes. With this feature enabled,
PUTs to an existing object will not overwrite that object but instead
move the current contents into a new location. A complete overview for
versioning is at 
http://docs.openstack.org/developer/swift/overview_object_versioning.html.
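
Enabling it on a container is a sketch along these lines (assuming the
cluster allows versioning; the names are placeholders):

  curl -X PUT -H "X-Auth-Token: $TOKEN" \
    -H "X-Versions-Location: my_versions" \
    $STORAGE_URL/my_container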

Swift has greatly improved its support for SSD-based account and
container storage. A new db_preallocation config flag can be set to
enable or disable preallocation of swift's sqlite databases. Enabling
preallocation minimizes disk fragmentation (good for spinning drives),
and disabling it maximizes usable space on the drive (good for SSDs).
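
For example (a sketch; I'm assuming the flag lives in the server's DEFAULT
section):

  [DEFAULT]
  db_preallocation = off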

We have also separated the client tools from swift and moved them into
python-swiftclient. This change benefits other projects that want to
integrate with swift. They can now install supported client tools
without needing to install all of swift.
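
So a client-only install becomes a sketch as simple as:

  pip install python-swiftclient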

We have also separated the swift3 middleware from swift. The code is
now managed apart from swift and is found at
https://github.com/fujita/swift3.

Finally, the swift-keystone middleware has moved from the keystone
project into the swift project. This allows those who know swift best
to support the code that ties the two projects together.

Swift's developer community has continued to grow. Since the Essex
release, Swift has had 30 contributors, 13 of whom are new. This
brings us to a total of 71 contributors.

I'm excited about delivering these features in Folsom. Thanks to all
of the contributors for your hard and thoughtful work on swift.

I'll be sending another email shortly about where swift is going in
grizzly and beyond. Stay tuned for more.

--John






Re: [Openstack] Does glance-scrubber.conf require sql_connection?

2012-08-13 Thread Lorin Hochstein

On Aug 13, 2012, at 1:52 PM, Jay Pipes  wrote:

> On 08/13/2012 01:45 PM, Lorin Hochstein wrote:
>> On Aug 13, 2012, at 11:33 AM, Jay Pipes  wrote:
>> 
>>> On 08/12/2012 10:12 PM, Lorin Hochstein wrote:
>>>> Doc question:
>>>>
>>>> Does glance-scrubber require sql_connection?  The Install and Deploy
>>>> Guide specifies the sql_connection parameter, but it wasn't clear to me
>>>> that the scrubber actually makes any queries against the database.
>>> 
>>> It used to make direct queries against the registry database, but now it
>>> makes queries via the registry's REST API. So this option can safely be
>>> removed now.
>> 
>> Does "now" mean as of essex or as of folsom?
> 
> Sorry, good point, Lorin :) This behaviour (of not requiring the
> registry database connection) was implemented in Essex:
> 
> https://bugs.launchpad.net/glance/+bug/836381
> 

Thanks, Jay. Docfix submitted: https://review.openstack.org/11294


Take care,

Lorin
--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com






Re: [Openstack] Does glance-scrubber.conf require sql_connection?

2012-08-13 Thread Jay Pipes
On 08/13/2012 01:45 PM, Lorin Hochstein wrote:
> On Aug 13, 2012, at 11:33 AM, Jay Pipes  wrote:
> 
>> On 08/12/2012 10:12 PM, Lorin Hochstein wrote:
>>> Doc question:
>>>
>>> Does glance-scrubber require sql_connection?  The Install and Deploy
>>> Guide specifies the sql_connection parameter,
>>> but it wasn't clear to me that the scrubber actually makes any queries
>>> against the database. 
>>
>> It used to make direct queries against the registry database, but now it
>> makes queries via the registry's REST API. So this option can safely be
>> removed now.
> 
> Does "now" mean as of essex or as of folsom?

Sorry, good point, Lorin :) This behaviour (of not requiring the
registry database connection) was implemented in Essex:

https://bugs.launchpad.net/glance/+bug/836381

Best,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Does glance-scrubber.conf require sql_connection?

2012-08-13 Thread Lorin Hochstein
On Aug 13, 2012, at 11:33 AM, Jay Pipes  wrote:

> On 08/12/2012 10:12 PM, Lorin Hochstein wrote:
>> Doc question:
>> 
>> Does glance-scrubber require sql_connection?  The Install and Deploy
>> Guide specifies the sql_connection parameter,
>> but it wasn't clear to me that the scrubber actually makes any queries
>> against the database. 
> 
> It used to make direct queries against the registry database, but now it
> makes queries via the registry's REST API. So this option can safely be
> removed now.


Does "now" mean as of essex or as of folsom?



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift + keystone integration

2012-08-13 Thread Miguel Alejandro González
Hey guys,

Thanks, it worked. I had a different address saved in the keystone endpoint
for swift, but I still get the same error on the dashboard.

Here's the command that worked and the output:
swift -v -V 2.0 -A http://10.17.12.163:5000/v2.0/ -U admin:admin -K admin stat
StorageURL: http://10.17.13.29:8080/v1/AUTH_db70ab799000449eb51c2b490c2591ee
Auth Token: 7030c42e528d4d70bdc48da26da59b4f
   Account: AUTH_db70ab799000449eb51c2b490c2591ee
Containers: 0
   Objects: 0
 Bytes: 0
Accept-Ranges: bytes
X-Trans-Id: tx3089d0c2ccb34155ad9d7934442e2007

Here's the error from the dashboard: http://pastebin.com/w6HxYfaB

How do I fix this?

On Sat, Aug 11, 2012 at 9:44 PM, Kuo Hugo  wrote:

> I used to debug via curl for separating the AUTH section(Keystone) and
> Data Section(Swift-proxy) .
>
>  #>curl -v -d {%json%}  http://keystone_ip:port/v2.0
>  #>curl -H "X-AUTH-TOKEN: %TOKEN%" http://swift_ip:port/v1/AUTH_%account%
>
> And monitor the log on both keystone and swift.
>
> Several steps you can follow:
>
> 1. Check keystone is working on the proper port
> 2. Check Swift is working on the proper port
> 3. Check the swift endpoint under Keystone's DB
> 4. Check the network is accessible between Keystone and Swift
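>
> For example, the same two steps in Python (stdlib only; the IPs, ports,
> credentials and account suffix are placeholders):
>
> import json, urllib2
>
> # 1. AUTH section: ask keystone for a token
> body = json.dumps({"auth": {"passwordCredentials":
>                             {"username": "admin", "password": "admin"},
>                             "tenantName": "admin"}})
> req = urllib2.Request("http://keystone_ip:5000/v2.0/tokens", body,
>                       {"Content-Type": "application/json"})
> token = json.load(urllib2.urlopen(req))["access"]["token"]["id"]
>
> # 2. Data section: hit the swift proxy directly with that token
> req = urllib2.Request("http://swift_ip:8080/v1/AUTH_%account%")
> req.add_header("X-Auth-Token", token)
> print urllib2.urlopen(req).getcode()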
>
>
>
> 2012/8/12 Miguel Alejandro González 
>
>> Hello
>>
>> I have 3 nodes with ubuntu 12.04 server and installed openstack with
>> packages from the ubuntu repos
>>
>>- controller (where keystone is installed)
>>- compute
>>- swift
>>
>> I'm trying to configure Swift with Keystone but I'm having some problems,
>> here's my proxy-server.conf
>>
>> [DEFAULT]
>> bind_port = 8080
>> user = swift
>> swift_dir = /etc/swift
>> [pipeline:main]
>> # Order of execution of modules defined below
>> pipeline = catch_errors healthcheck cache authtoken keystone proxy-server
>> [app:proxy-server]
>> use = egg:swift#proxy
>> allow_account_management = true
>> account_autocreate = true
>> set log_name = swift-proxy
>> set log_facility = LOG_LOCAL0
>> set log_level = INFO
>> set access_log_name = swift-proxy
>> set access_log_facility = SYSLOG
>> set access_log_level = INFO
>> set log_headers = True
>> account_autocreate = True
>> [filter:healthcheck]
>> use = egg:swift#healthcheck
>> [filter:catch_errors]
>> use = egg:swift#catch_errors
>> [filter:cache]
>> use = egg:swift#memcache
>> set log_name = cache
>> [filter:authtoken]
>> paste.filter_factory = keystone.middleware.auth_token:filter_factory
>> auth_protocol = http
>> auth_host = 10.17.12.163
>> auth_port = 35357
>> auth_token = admin
>> service_protocol = http
>> service_host = 10.17.12.163
>> service_port = 5000
>> admin_token = admin
>> admin_tenant_name = admin
>> admin_user = admin
>> admin_password = admin
>> delay_auth_decision = 0
>> [filter:keystone]
>> paste.filter_factory = keystone.middleware.swift_auth:filter_factory
>> operator_roles = admin, swiftoperator
>> is_admin = true
>>
>> On Horizon I get a Django error page that says [Errno 111] ECONNREFUSED
>>
>> From the Swift server I try this command:
>>
>> swift -v -V 2.0 -A http://10.17.12.163:5000/v2.0/ -U admin:admin -K
>> admin stat
>>
>> And I also get [Errno 111] ECONNREFUSED
>>
>>
>> Is there any way to debug this??? Is there any conf or package that I'm
>> missing for this to work on a multi-node deployment? Can you help me?
>>
>> Regards!
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
>
> --
> +Hugo Kuo+
> tonyt...@gmail.com
> + 886 935004793
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Experimenting with additional field for JSON filter

2012-08-13 Thread Heng Xu
Hello my friends,

I was trying to experiment with some of the scheduler's functionality. As a
mock-up, I was trying to augment the JSON filter, which can filter on anything
in the HostState class's member fields. I wanted to get the ID of a compute
node, as in the compute_nodes table in the database, so I added

self.currentid = 0

in __init__ method in HostState class, and added

databaseid = compute['id']
self.currentid = databaseid

in the update_from_compute_node method of the HostState class, in the hope
that I could specify an id when passing a hint to the JSON filter:

nova boot --image 827d564a-e636-4fc4-a376-d36f7ebe1747 --flavor 1 --hint
query='["=", "$currentid", 1]' server1

However, it always gives me an error on scheduling. I guess the scheduler
could not find a compute node with id 1, but I just could not figure out what
is missing. Any thoughts and help are greatly appreciated.
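
(For reference, the two edits described above, consolidated into one sketch;
the __init__ signature is from memory, so treat this as illustrative, not a
tested patch:)

# nova/scheduler/host_manager.py -- sketch of the edits above
class HostState(object):
    def __init__(self, host, topic, capabilities=None, service=None):
        # ... existing initialization unchanged ...
        self.currentid = 0  # new: database id of the compute node

    def update_from_compute_node(self, compute):
        # ... existing updates unchanged ...
        # 'compute' is a row from the compute_nodes table
        self.currentid = compute['id']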
Thanks
Heng
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [swift] Operational knowledge sharing

2012-08-13 Thread Greg Holt
On Aug 13, 2012, at 11:36 AM, Caitlin Bestler  
wrote:

> I'm not sure it's worth the compatibility hassles, but why would periodic 
> "Progress" returns that could be translated into a client status bar be 
> "useless"?

Sorry, poor choice of word I guess.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [swift] Operational knowledge sharing

2012-08-13 Thread Caitlin Bestler
Greg Holt wrote:

> Followup note: Though briefly mentioned by John, I'd like to emphasize this
> also affects COPY (or PUT with X-Copy-From) requests,
> and #1 (upping the lb timeout) is really the only solution unless we go crazy 
> and implement async requests with status checks. 
> Well, another weird solution is to have Swift return useless response bodies 
> very slowly as a keep alive. :)

I'm not sure it's worth the compatibility hassles, but why would periodic 
"Progress" returns that could be translated into a client status bar be 
"useless"?
If the operation takes long enough for network elements to forget about the 
connection then any human user will certainly be wondering what's going on as 
well.
Of course the challenge would be to introduce periodic feedback in a way that 
did not break existing automated clients and scripts.

Perhaps an option for periodic status reports?



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Cannot reach VMs via floating IP from hosts other than the one running the VM

2012-08-13 Thread Joshua Buss
Hello,

I'm having a very similar issue as the fellow who made this bug:
https://bugs.launchpad.net/nova/+bug/933640

...but my problem is the exact opposite.  I can reach my VMs via the public
IPs if I'm on the host that runs the VM - but I cannot reach the VMs via
public IPs from any other computer on my network.  I've detailed the issue
on this pastebin, including iptables output and the like, and the fact that
my arp tables are updating properly:  http://pastebin.com/Wyh16Pys

I've been stuck like this for a week now with no one in IRC able to help.
If you have ANY ideas, I'd love to hear them!

Thanks in advance!
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Does glance-scrubber.conf require sql_connection?

2012-08-13 Thread Jay Pipes
On 08/12/2012 10:12 PM, Lorin Hochstein wrote:
> Doc question:
> 
> Does glance-scrubber require sql_connection?  The Install and Deploy
> Guide specifies the sql_connection parameter
> ,
> but it wasn't clear to me that the scrubber actually makes any queries
> against the database. 

It used to make direct queries against the registry database, but now it
makes queries via the registry's REST API. So this option can safely be
removed now.

Jason, do you concur?

Best,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] The Return of Hyper-V

2012-08-13 Thread Peter Pouliot
Hello Everyone,

I would like to take this moment to make everyone aware of the following:

https://review.openstack.org/#/c/11276/


I would like to thank the following individuals, who have given so much to help 
this project progress to this state.  Without their efforts this project would 
be dead in the water.

Jordan Rinke: Thank you for doing the original work to bring back base Essex
functionality; your effort helped jump-start this community, and for that I am
grateful.  You are one of the original champions of Hyper-V.

Alessandro Pilotti (CloudBase Solutions):  Thank you for the tremendous amount
of code you produced.  We would not have the feature set, unit tests, or the
Folsom integration without the many hours you graciously contributed to the
project.  I am truly grateful for the effort you put in.  At times I thought
this project was going to fail; without your effort it most certainly would
have.

Pedro Navarro Perez:  Thank you for reaching out to me at the OpenStack summit
and being the first Hyper-V community member.  Your efforts on volume
attach/detach and boot from volume are greatly appreciated.  Thank you for
collaborating with other members to test, troubleshoot, and advance our
efforts.  Your work this weekend helping Alessandro get to this point was
necessary for this submission to occur.

Jose Castro Leon:  Jose, thank you for your assistance with testing.  Given
all that is going on these days at CERN, you still found time to help us test
Hyper-V.

To Vish, Monty, and Stef, thank you for all of your guidance in this process.
Hopefully, thanks to your help and wisdom, it will be a quick and painless
approval and integration process.

And last and most importantly:
To Hashir Abdi, my colleague at Microsoft: I am grateful to you for believing
in this project.  From the early days in the Interop lab, you saw the true
potential.  Thank you for the hours you have spent making sure that this
project progressed forward.  We have busy days ahead.

Once again, thank you everyone for your assistance.

p


Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive,Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Nova] How common is user_data for instances?

2012-08-13 Thread Jay Pipes
On 08/13/2012 09:53 AM, Stephen Gran wrote:
> Hi,
> 
> I think user_data is probably reasonably common - most people who use,
> eg, cloud-init will use it (we do).
> 
> As the 64k limit is a MySQL limitation, and not a nova limitation, why
> not just say, "if you want more storage, use postgres" (or similar)?  I
> have no issue with making the size guarded in the application, with a
> configurable limit, but the particular problem that started this off is
> an implementation issue rather than a code issue.

Or just set the column to a larger type (LONGTEXT on MySQL; plain TEXT is
already unbounded on PostgreSQL) and both databases will be just as happy.
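
(For illustration, a sketch of that change as a sqlalchemy-migrate script; the
table and column names are from this thread, the rest is assumption:)

# sketch: widen instances.user_data only where MySQL needs it
from sqlalchemy import MetaData, Table, Text
from sqlalchemy.dialects.mysql import LONGTEXT

def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table('instances', meta, autoload=True)
    # plain TEXT is already unbounded on PostgreSQL;
    # MySQL gets LONGTEXT (up to 2^32 bytes)
    instances.c.user_data.alter(
        type=Text().with_variant(LONGTEXT(), 'mysql'))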

> Storing the user_data in some place like the database is fairly
> important for making things like launch configs for autoscale groups
> work.  I'd like to not make that harder to implement.

Why is storing user_data in the database fairly important? You say above
you don't want an implementation issue to be misconceived as a code
issue -- and then go on to say that an implementation issue (storing
user_data in a database) isn't a code issue. I don't think you can have
it both ways. :)

Now, I totally buy the argument that there is a large existing
cloud-init userbase out there that relies on the EC2 Metadata API
service living on the hard-coded 169.254.169.254 address, and we
shouldn't do anything to mess up that experience. But I totally think
that config-drive or disk-injection is a better way to handle this stuff
-- and it certainly doesn't force an implementation that has proven to be a
major performance and scaling bottleneck (the EC2 Metadata service).

Best,
-jay

> Cheers,
> 
> On Mon, 2012-08-13 at 09:12 -0400, Dan Prince wrote:
>>
>> - Original Message -
>>> From: "Michael Still" 
>>> To: openstack@lists.launchpad.net, openstack-operat...@lists.openstack.org
>>> Sent: Saturday, August 11, 2012 5:12:22 AM
>>> Subject: [Openstack] [Nova] How common is user_data for instances?
>>>
>>> Greetings.
>>>
>>> I'm seeking information about how common user_data is for instances
>>> in
>>> nova. Specifically for large deployments (rackspace and HP, here's
>>> looking at you). What sort of costs would be associated with changing
>>> the data type of the user_data column in the nova database?
>>>
>>> Bug 1035055 [1] requests that we allow user_data of more than 65,535
>>> bytes per instance. Note that this size is a base64 encoded version
>>> of
>>> the data, so that's only a bit under 50k of data. This is because the
>>> data is a sqlalchemy Text column.
>>>
>>> We could convert to a LongText column, which allows 2^32 worth of
>>> data,
>>> but I want to understand the cost to operators of that change some
>>> more.
>>> Is user_data really common? Do you think people would start uploading
>>> much bigger user_data? Do you care?
>>
>> Nova has configurable quotas on most things so if we do increase the size of 
>> the DB column we should probably guard it in a configurable manner with 
>> quotas as well.
>>
>> My preference would actually be that we go the other way though and not have 
>> to store user_data in the database at all. That unfortunately may not be 
>> possible since some images obtain user_data via the metadata service which 
>> needs a way to look it up. Other methods of injecting metadata via disk 
>> injection, agents and/or config drive however might not need it to be stored
>> in the database right?
>>
>> As a simpler solution:
>>
>> Would setting a reasonable limit (hopefully smaller) and returning a HTTP 
>> 400 bad request if incoming requests exceed that limit be good enough to 
>> resolve this ticket? That way we don't have to increase the DB column at all 
>> and end users would be notified up front that user_data is too large (not 
>> silently truncated). They way I see it user_data is really for bootstrapping 
>> instances... we probably don't need it to be large enough to write an entire 
>> application, etc.
>>
>>
>>>
>>> Mikal
>>>
>>> 1: https://bugs.launchpad.net/nova/+bug/1035055
>>>
>>> ___
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
> 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Nova] How common is user_data for instances?

2012-08-13 Thread Jay Pipes
On 08/13/2012 09:12 AM, Dan Prince wrote:
> - Original Message -
>> From: "Michael Still" 
>> To: openstack@lists.launchpad.net, openstack-operat...@lists.openstack.org
>> Sent: Saturday, August 11, 2012 5:12:22 AM
>> Subject: [Openstack] [Nova] How common is user_data for instances?
>>
>> Greetings.
>>
>> I'm seeking information about how common user_data is for instances
>> in
>> nova. Specifically for large deployments (rackspace and HP, here's
>> looking at you). What sort of costs would be associated with changing
>> the data type of the user_data column in the nova database?
>>
>> Bug 1035055 [1] requests that we allow user_data of more than 65,535
>> bytes per instance. Note that this size is a base64 encoded version
>> of
>> the data, so that's only a bit under 50k of data. This is because the
>> data is a sqlalchemy Text column.
>>
>> We could convert to a LongText column, which allows 2^32 worth of
>> data,
>> but I want to understand the cost to operators of that change some
>> more.
>> Is user_data really common? Do you think people would start uploading
>> much bigger user_data? Do you care?
> 
> Nova has configurable quotas on most things so if we do increase the size of 
> the DB column we should probably guard it in a configurable manner with 
> quotas as well.
> 
> My preference would actually be that we go the other way though and not have 
> to store user_data in the database at all. That unfortunately may not be 
> possible since some images obtain user_data via the metadata service which 
> needs a way to look it up. Other methods of injecting metadata via disk 
> injection, agents and/or config drive however might not need it to be stored
> in the database right?

+1 When we can, let's not hobble ourselves to the EC2 API way of doing
things when we can have a more efficient and innovative solution.

> As a simpler solution:
> 
> Would setting a reasonable limit (hopefully smaller) and returning a HTTP 400 
> bad request if incoming requests exceed that limit be good enough to resolve 
> this ticket? That way we don't have to increase the DB column at all and end 
> users would be notified up front that user_data is too large (not silently 
> truncated). The way I see it, user_data is really for bootstrapping
> instances... we probably don't need it to be large enough to write an entire 
> application, etc.

Seems reasonable to me.

-jay

>>
>> Mikal
>>
>> 1: https://bugs.launchpad.net/nova/+bug/1035055
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] snapshot-temp-directory

2012-08-13 Thread Boris Filippov
This was actually fixed upstream (
https://github.com/openstack/nova/commit/70129ed19db187cc90f74abd5c93c86098d29c27
).

Which OpenStack version do you use?
Maybe this patch can be backported there.
There is a workaround for this problem, but it's (more than a bit) ugly:
http://www.mail-archive.com/openstack@lists.launchpad.net/msg13730.html
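
A generic stopgap until a backport: the temp path comes from Python's
tempfile module, which honors TMPDIR, so pointing nova-compute's environment
at the big filesystem should work (assumption: your init script lets you set
it):

# in nova-compute's environment (e.g. its init/upstart script)
export TMPDIR=/var/lib/nova/instances/tmp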


2012/8/13 Wolfgang Hennerbichler :
> hello fellow openstack users,
>
> I just tried to create a snapshot of a virtual machine, which was larger
> than average. Bad news: The snapshot couldn't be saved, but what I see here
> is some kind of design problem:
>
> 2012-08-13 13:44:41 DEBUG nova.utils
> [req-a678e738-9c18-4ee6-8e2e-f1f176a4674b 9b4ad74ecb69492db1bd3818efe6a47f
> 7edc3284f53f4e02b92d498db41b842d] Running cmd (subprocess): qemu-img convert
> -f qcow2 -O raw -s 34dd12038e7b400b9f1d772de4e01
> 70a /var/lib/nova/instances/instance-005c/disk
> /tmp/tmp1gRMY9/34dd12038e7b400b9f1d772de4e0170a from (pid=25365) execute
> /usr/lib/python2.7/dist-packages/nova/utils.py:219
>
> that's bad, I don't have so much disk space on /, but I have plenty in
> /var/lib/nova/instances.
>
> df -h | grep nova
> /dev/dm-8   1.4T  150G  1.3T  11% /var/lib/nova/instances
>
> is there a generic way to change this path?
>
> Wolfgang
>
> --
> DI (FH) Wolfgang Hennerbichler
> Software Development
> Unit Advanced Computing Technologies
> RISC Software GmbH
> A company of the Johannes Kepler University Linz
>
> IT-Center
> Softwarepark 35
> 4232 Hagenberg
> Austria
>
> Phone: +43 7236 3343 245
> Fax: +43 7236 3343 250
> wolfgang.hennerbich...@risc-software.at
> http://www.risc-software.at
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Nova] How common is user_data for instances?

2012-08-13 Thread Stephen Gran
Hi,

I think user_data is probably reasonably common - most people who use,
eg, cloud-init will use it (we do).

As the 64k limit is a MySQL limitation, and not a nova limitation, why
not just say, "if you want more storage, use postgres" (or similar)?  I
have no issue with making the size guarded in the application, with a
configurable limit, but the particular problem that started this off is
an implementation issue rather than a code issue.

Storing the user_data in some place like the database is fairly
important for making things like launch configs for autoscale groups
work.  I'd like to not make that harder to implement.

Cheers,

On Mon, 2012-08-13 at 09:12 -0400, Dan Prince wrote:
> 
> - Original Message -
> > From: "Michael Still" 
> > To: openstack@lists.launchpad.net, openstack-operat...@lists.openstack.org
> > Sent: Saturday, August 11, 2012 5:12:22 AM
> > Subject: [Openstack] [Nova] How common is user_data for instances?
> > 
> > Greetings.
> > 
> > I'm seeking information about how common user_data is for instances
> > in
> > nova. Specifically for large deployments (rackspace and HP, here's
> > looking at you). What sort of costs would be associated with changing
> > the data type of the user_data column in the nova database?
> > 
> > Bug 1035055 [1] requests that we allow user_data of more than 65,535
> > bytes per instance. Note that this size is a base64 encoded version
> > of
> > the data, so that's only a bit under 50k of data. This is because the
> > data is a sqlalchemy Text column.
> > 
> > We could convert to a LongText column, which allows 2^32 worth of
> > data,
> > but I want to understand the cost to operators of that change some
> > more.
> > Is user_data really common? Do you think people would start uploading
> > much bigger user_data? Do you care?
> 
> Nova has configurable quotas on most things so if we do increase the size of 
> the DB column we should probably guard it in a configurable manner with 
> quotas as well.
> 
> My preference would actually be that we go the other way though and not have 
> to store user_data in the database at all. That unfortunately may not be 
> possible since some images obtain user_data via the metadata service which 
> needs a way to look it up. Other methods of injecting metadata via disk 
> injection, agents and/or config drive however might not need it to be stored
> in the database right?
> 
> As a simpler solution:
> 
> Would setting a reasonable limit (hopefully smaller) and returning a HTTP 400 
> bad request if incoming requests exceed that limit be good enough to resolve 
> this ticket? That way we don't have to increase the DB column at all and end 
> users would be notified up front that user_data is too large (not silently 
> truncated). The way I see it, user_data is really for bootstrapping
> instances... we probably don't need it to be large enough to write an entire 
> application, etc.
> 
> 
> > 
> > Mikal
> > 
> > 1: https://bugs.launchpad.net/nova/+bug/1035055
> > 
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> > 
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

-- 
Stephen Gran
Senior Systems Integrator - guardian.co.uk

Please consider the environment before printing this email.
--
Visit guardian.co.uk - newspaper of the year

www.guardian.co.uk www.observer.co.uk www.guardiannews.com

On your mobile, visit m.guardian.co.uk or download the Guardian
iPhone app www.guardian.co.uk/iphone and iPad edition www.guardian.co.uk/iPad 
 
Save up to 37% by subscribing to the Guardian and Observer - choose the papers 
you want and get full digital access. 
Visit guardian.co.uk/subscribe 

-
This e-mail and all attachments are confidential and may also
be privileged. If you are not the named recipient, please notify
the sender and delete the e-mail and all attachments immediately.
Do not disclose the contents to another person. You may not use
the information for any purpose, or store, or copy, it in any way.
 
Guardian News & Media Limited is not liable for any computer
viruses or other material transmitted with or as part of this
e-mail. You should employ virus checking software.

Guardian News & Media Limited

A member of Guardian Media Group plc
Registered Office
PO Box 68164
Kings Place
90 York Way
London
N1P 2AP

Registered in England Number 908396


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

[Openstack] centos6.2+essex+kvm+virtio+flatdhcp+windows2003 blue screen and netcard break

2012-08-13 Thread halfss
Hello folks: when I use centos6.2+essex+kvm(0.12)+virtio+flatdhcp+windows2003, the guest OS breaks frequently, like this:

[inline screenshot not preserved in the archive]

Is there anyone who knows why? Thanks :)
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Nova] How common is user_data for instances?

2012-08-13 Thread Dan Prince


- Original Message -
> From: "Michael Still" 
> To: openstack@lists.launchpad.net, openstack-operat...@lists.openstack.org
> Sent: Saturday, August 11, 2012 5:12:22 AM
> Subject: [Openstack] [Nova] How common is user_data for instances?
> 
> Greetings.
> 
> I'm seeking information about how common user_data is for instances
> in
> nova. Specifically for large deployments (rackspace and HP, here's
> looking at you). What sort of costs would be associated with changing
> the data type of the user_data column in the nova database?
> 
> Bug 1035055 [1] requests that we allow user_data of more than 65,535
> bytes per instance. Note that this size is a base64 encoded version
> of
> the data, so that's only a bit under 50k of data. This is because the
> data is a sqlalchemy Text column.
> 
> We could convert to a LongText column, which allows 2^32 worth of
> data,
> but I want to understand the cost to operators of that change some
> more.
> Is user_data really common? Do you think people would start uploading
> much bigger user_data? Do you care?

Nova has configurable quotas on most things so if we do increase the size of 
the DB column we should probably guard it in a configurable manner with quotas 
as well.

My preference would actually be that we go the other way though and not have to 
store user_data in the database at all. That unfortunately may not be possible 
since some images obtain user_data via the metadata service which needs a way 
to look it up. Other methods of injecting metadata via disk injection, agents 
and/or config drive however might not need it to be stored in the database, right?

As a simpler solution:

Would setting a reasonable limit (hopefully smaller) and returning a HTTP 400 
bad request if incoming requests exceed that limit be good enough to resolve 
this ticket? That way we don't have to increase the DB column at all and end 
users would be notified up front that user_data is too large (not silently 
truncated). The way I see it, user_data is really for bootstrapping
instances... we probably don't need it to be large enough to write an entire 
application, etc.
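
(Illustrating that guard; the limit constant and helper name are made up,
while webob is what the API layer already uses for error responses:)

import webob.exc

MAX_USER_DATA_BYTES = 65535  # hypothetical configurable quota

def check_user_data(user_data):
    """Reject oversized user_data up front rather than truncating it."""
    if user_data and len(user_data) > MAX_USER_DATA_BYTES:
        raise webob.exc.HTTPBadRequest(
            explanation='user_data too large (max %d bytes)'
                        % MAX_USER_DATA_BYTES)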


> 
> Mikal
> 
> 1: https://bugs.launchpad.net/nova/+bug/1035055
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] snapshot-temp-directory

2012-08-13 Thread Wolfgang Hennerbichler

hello fellow openstack users,

I just tried to create a snapshot of a virtual machine, which was larger 
than average. Bad news: The snapshot couldn't be saved, but what I see 
here is some kind of design problem:


2012-08-13 13:44:41 DEBUG nova.utils 
[req-a678e738-9c18-4ee6-8e2e-f1f176a4674b 
9b4ad74ecb69492db1bd3818efe6a47f 7edc3284f53f4e02b92d498db41b842d] 
Running cmd (subprocess): qemu-img convert -f qcow2 -O raw -s 
34dd12038e7b400b9f1d772de4e01
70a /var/lib/nova/instances/instance-005c/disk 
/tmp/tmp1gRMY9/34dd12038e7b400b9f1d772de4e0170a from (pid=25365) execute 
/usr/lib/python2.7/dist-packages/nova/utils.py:219


that's bad, I don't have so much disk space on /, but I have plenty in 
/var/lib/nova/instances.


df -h | grep nova
/dev/dm-8   1.4T  150G  1.3T  11% /var/lib/nova/instances

is there a generic way to change this path?

Wolfgang

--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova ignores nova.conf

2012-08-13 Thread Simon Walter


On 08/12/2012 01:31 PM, Lorin Hochstein wrote:

On Aug 10, 2012, at 6:07 AM, Mark McLoughlin  wrote:


On Fri, 2012-08-10 at 00:23 -0900, Simon Walter wrote:

Nova does not respect the options set in the /etc/nova/nova.conf file.

...


If nova is being run with --config-file, then the syntax is

 [DEFAULT]
 flat_network_bridge=br0

OTOH, if it is being run with --flagfile, the syntax is:

 --flat_network_bridge=br0



I assumed that the nova-* services were auto-detecting the nova.conf
format. When I run on Ubuntu, the default nova.conf file is in the
deprecated flag file format, but I just edited the nova.conf file to use
the new ini-style format, and everything seemed to just work.



Thanks for that, guys. I've got OpenStack running with all the bells and
whistles on one node. Now I'm going to try to add more nodes.


Many thanks,

Simon

--
simonsmicrophone.com

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Common openstack client library

2012-08-13 Thread Chmouel Boudjnah
On Mon, Aug 13, 2012 at 9:39 AM, Alessio Ababilov
 wrote:
> from openstackclient_base.client import HttpClient
> http_client = HttpClient(username="...", password="...", tenant_name="...",
> auth_uri="...")

Shouldn't that be the role of python-keystoneclient?

Chmouel.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Common openstack client library

2012-08-13 Thread Alessio Ababilov
Hi!

I have an implementation of blueprint
https://blueprints.launchpad.net/nova/+spec/basic-client-library. The
library is accessible at
https://github.com/aababilov/python-openstackclient-base.

This library has been in active use at Grid Dynamics for two months and is
quite stable.

This library can be very useful for python-openstackclient (which has started
to build its own client wrapper) and the nova server.

Unfortunately, the nova, keystone, and glance clients are very inconsistent. A
lot of code is copied between all these clients instead of being moved to a
common library. The code was edited without synchronization between the
clients, so they have different behaviour:

   - all client constructors use different parameters (api_key in nova or
   password in keystone and so on);
   - keystoneclient authenticates immediately in __init__, while novaclient
   does so lazily during the first method call;
   - {keystone,nova}client can manage service catalogs and accept
   keystone's auth URI while glanceclient allows endpoints only;
   - keystoneclient can support authorization with an unscoped token but
   novaclient doesn't;
   - novaclient uses class composition while keystoneclient uses
   inheritance.

I have developed a library to unify current clients. The library can be
used as-is, but it would be better if openstack clients dropped their
common code (base.py, exceptions.py and so on) and just began to import
common code.

The library performs the file chunking that is necessary for the glance
client. It also re-authenticates if the token has expired.

Here is an example of using unified clients.

from openstackclient_base.base import monkey_patch # optional
monkey_patch() # optional

from openstackclient_base.client import HttpClient
http_client = HttpClient(username="...", password="...",
tenant_name="...", auth_uri="...")

from openstackclient_base.nova.client import ComputeClient
print ComputeClient(http_client).servers.list()

from openstackclient_base.keystone.client import IdentityPublicClient
print IdentityPublicClient(http_client).tenants.list()


-- 
Alessio Ababilov
Software Engineer
Grid Dynamics
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova DHCP

2012-08-13 Thread Aaron Rosen
You can definitely disable the DHCP and provide your own means of serving
DHCP. Do you have a specific use case in mind that isn't addressed by either
of the two already provided?
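
For example, Essex-era nova-network leaves DHCP entirely to you if you pick
the manager that doesn't spawn dnsmasq (a sketch; the bridge name is a
placeholder):

# nova.conf (ini style): FlatManager assigns/injects addresses
# statically and runs no dnsmasq, so an external DHCP server can
# own the segment
[DEFAULT]
network_manager = nova.network.manager.FlatManager
flat_network_bridge = br100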

Aaron

P.S.: the quantum dhcp agent now supports overlapping IPs :)

On Mon, Aug 13, 2012 at 2:30 AM, Trinath Somanchi <
trinath.soman...@gmail.com> wrote:

> Hi-
>
> Rather than using Nova DHCP from Essex or Quantum-based DHCP in the
> upcoming Folsom release, can we have a configurable option to use a custom
> DHCP?
>
> Has anyone tried this, or can anyone help me with how to do this using
> OpenStack Essex and also with the Folsom release?
>
> Thanking you all..
>
>
> --
> Regards,
> --
> Trinath Somanchi,
> +91 9866 235 130
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] [Netstack] [Quantum] Multi-host implementation

2012-08-13 Thread Aaron Rosen
The dhcp agent is now able to use network namespaces, so there are no longer
IP conflicts. Perhaps in the future the dhcp agent could implement some kind
of DHCP relay (ip helper) service, though currently it allocates an IP
address in each subnet that you want DHCP enabled on.

Aaron

On Mon, Aug 13, 2012 at 12:02 AM, Hua ZZ Zhang  wrote:

> hi,
>
> I have a question about the IP addresses consumed by the dhcp services: is
> it necessary to assign an individual IP to each dhcp daemon? Can we reserve
> only one IP address for all dhcp daemons on one subnet, since they won't run
> on the same host? Since the dhcp service only needs to communicate with
> local VM instances and the local dhcp agent (I don't know if this assumption
> is true), can we find a kind of isolation mechanism to avoid IP conflicts
> and the extra consumption of IP addresses to implement this feature?
>
> Best Regards,
>
> --
> Edward Zhang (张华)
> Staff Software Engineer
> Travel & Transportation Standards
> Emerging Technology Institute (ETI)
> IBM China Software Development Lab
> e-mail: zhu...@cn.ibm.com
> Notes ID: Hua ZZ Zhang/China/IBM
> Tel: 86-10-82450483
> Address: 3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
> West Road, Haidian District, Beijing, P.R.C. 100193
>
>
>
>
>
>
>
> MURAOKA Yusuke
> Sent by: openstack-bounces+zhuadl=cn.ibm@lists.launchpad.net
> 2012-08-09 00:19
>
> To: Dan Wendlandt
> Cc: OpenStack Development Mailing List, netst...@lists.launchpad.net,
> openstack@lists.launchpad.net
> Subject: Re: [Openstack] [Netstack] [openstack-dev] [Quantum] Multi-host
> implementation
>implementation
>
>
> Hi,
>
> I've updated the bp to correspond with current design spec.
> > https://blueprints.launchpad.net/quantum/+spec/quantum-multihost-dhcp
>
>
> I'd like to know of any case where this fails or is improper or insane.
> Anyway, comments and discussions are welcome.
>
> Thanks.
>
> --
> MURAOKA Yusuke
>
> Mail: yus...@jbking.org
>
>
On Tuesday, August 7, 2012 at 2:47, Nachi Ueno wrote:
>
> > Hi Dan
> >
> > Thank you for pointing this.
> >
> > Yusuke updated design spec.
> > https://blueprints.launchpad.net/quantum/+spec/quantum-multihost-dhcp
> >
> > 2012/8/6 Dan Wendlandt :
> > > Hi Nachi,
> > >
> > > I've reviewed the code and added comments. I'd like to see at least a
> basic
> > > spec describing the proposed approach (need only be a couple
> paragraphs,
> > > perhaps with a diagram) linked to the blueprint so we can have a design
> > > discussion around it. Thanks,
> > >
> > > Dan
> > >
> > >
> > > On Fri, Aug 3, 2012 at 1:03 PM, Nachi Ueno  wrote:
> > > >
> > > > Hi folks
> > > >
> > > > Sorry.
> > > > I added openstack-...@lists.openstack.org in this discussion.
> > > >
> > > > 2012/8/3 Nati Ueno  > > > (mailto:nati.u...@gmail.com
> )>:
> > > > > Hi folks
> > > > >
> > > > > > Gary
> > > > > Thank you for your comment. I wanna discuss your point on the
> mailing
> > > > > list.
> > > > >
> > > > > Yusuke pushed Multi-host implementation for review.
> > > > > https://review.openstack.org/#/c/10766/2
> > > > > This patch changes only quantum-dhcp-agent side.
> > > > >
> > > > > Gary's point is we should have host attribute on the port for
> > > > > scheduling.
> > > > > I agree with Gary.
> > > > >
> > > > > In the nova, vm has available_zone for scheduling.
> > > > > So Instead of using host properties.
> > > > > How about use available_zone for port?
> > > > >
> > > > > Format of availability_zone is something like this
> > > > > available_zone="zone_name:host".
> > > > >
> > > > > We can also add availability_zone attribute for the network as a
> > > > > default value of port.
> > > > > We can write this until next Monday.
> > > > > However I'm not sure quantum community will accept this or not, so
> I'm
> > > > > asking here.
> > > > >
> > > > > If there are no objections, we will push zone version for review.
> > > > > Thanks
> > > > > Nachi
> > > > >
> > > > > ___
> > > > > Mailing list: https://launchpad.net/~openstack
> > > > > Post to : openstack@lists.launchpad.net
> > > > > Unsubscribe : https://launchpad.net/~openstack
> > > > > More help : https://help.launchpad.net/ListHelp
> > > >
> > > >
> > > >
> > > > ___
> > > > OpenStack-dev mailing list
> > > > openstack-...@lists.openstack.org
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > ~~~
> > > Dan Wendla

Re: [Openstack] [Netstack] [openstack-dev] [Quantum] Multi-host implementation

2012-08-13 Thread Hua ZZ Zhang
hi,

I have a question about the IP addresses consumed by the dhcp services: is it
necessary to assign an individual IP to each dhcp daemon? Can we reserve only
one IP address for all dhcp daemons on one subnet, since they won't run on
the same host? Since the dhcp service only needs to communicate with local VM
instances and the local dhcp agent (I don't know if this assumption is true),
can we find a kind of isolation mechanism to avoid IP conflicts and the extra
consumption of IP addresses to implement this feature?

Best Regards,

 
Edward Zhang (张华)
Staff Software Engineer
Travel & Transportation Standards
Emerging Technology Institute (ETI)
IBM China Software Development Lab
e-mail: zhu...@cn.ibm.com
Notes ID: Hua ZZ Zhang/China/IBM
Tel: 86-10-82450483
Address: 3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang West
Road, Haidian District, Beijing, P.R.C. 100193





   
MURAOKA Yusuke
Sent by: openstack-bounces+zhuadl=cn.ibm.com@lists.launchpad.net
2012-08-09 00:19

To: Dan Wendlandt
Cc: OpenStack Development Mailing List, netst...@lists.launchpad.net,
openstack@lists.launchpad.net
Subject: Re: [Openstack] [Netstack] [openstack-dev] [Quantum] Multi-host
implementation
   
   
   
   
   
   




Hi,

I've updated the bp to correspond with current design spec.
> https://blueprints.launchpad.net/quantum/+spec/quantum-multihost-dhcp


I'd like to know of any case where this fails or is improper or insane.
Anyway, comments and discussions are welcome.

Thanks.

--
MURAOKA Yusuke

Mail: yus...@jbking.org


On Tuesday, August 7, 2012 at 2:47, Nachi Ueno wrote:

> Hi Dan
>
> Thank you for pointing this.
>
> Yusuke updated design spec.
> https://blueprints.launchpad.net/quantum/+spec/quantum-multihost-dhcp
>
> 2012/8/6 Dan Wendlandt mailto:d...@nicira.com)>:
> > Hi Nachi,
> >
> > I've reviewed the code and added comments. I'd like to see at least a
basic
> > spec describing the proposed approach (need only be a couple
paragraphs,
> > perhaps with a diagram) linked to the blueprint so we can have a design
> > discussion around it. Thanks,
> >
> > Dan
> >
> >
> > > On Fri, Aug 3, 2012 at 1:03 PM, Nachi Ueno  wrote:
> > >
> > > Hi folks
> > >
> > > Sorry.
> > > I added openstack-...@lists.openstack.org in this discussion.
> > >
> > > 2012/8/3 Nati Ueno mailto:nati.u...@gmail.com
)>:
> > > > Hi folks
> > > >
> > > > > Gary
> > > > Thank you for your comment. I wanna discuss your point on the
mailing
> > > > list.
> > > >
> > > > Yusuke pushed Multi-host implementation for review.
> > > > https://review.openstack.org/#/c/10766/2
> > > > This patch changes only quantum-dhcp-agent side.
> > > >
> > > > Gary's point is we should have host attribute on the port for
> > > > scheduling.
> > > > I agree with Gary.
> > > >
> > > > In the nova, vm has available_zone for scheduling.
> > > > So Instead of using host properties.
> > > > How about use available_zone for port?
> > > >
> > > > Format of availability_zone is something like this
> > > > available_zone="zone_name:host".
> > > >
> >

[Openstack] Review required - https://review.openstack.org/#/c/11016/

2012-08-13 Thread Gurjar, Unmesh
Hi All,

I have added an HTTP POST notifier to the openstack-common project and it
needs review.

Please review: https://review.openstack.org/#/c/11016/.


Thanks & Regards,
Unmesh Gurjar | Lead Engineer | NTT DATA Global Technology Services Private 
Limited | w. +91.20.6604.1500 x 379 | m. +91.982.324.7631 | 
unmesh.gur...@nttdata.com | Learn more at 
nttdata.com/americas


__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp