Re: [Openstack] Shared Storage with cinder ?

2013-07-05 Thread Heiko Krämer

Hey Clint,

Yeah, that's correct, but there are many older applications like WordPress or 
Magento that can't use an object store out of the box. And you have customers 
with applications who won't customize them just to host that kind of content.


This feature will land in the Havana release, but it will be more than six 
months before we can consider it stable enough to use.



Greetings
Heiko


Am 05.07.2013 18:31, schrieb Clint Byrum:

Excerpts from Heiko Krämer's message of 2013-07-05 09:24:03 -0700:

Heyho guys,

I'm looking for a solution to share storage across more than one instance.

Normally you attach a block device with cinder directly via iSCSI, GlusterFS
or whatever to one instance, and that's it. Multi-attach is not available.


Use case:

I have an application running on 4 application instances plus a database instance.
There are static files like images, movies and CSS, and these files
should be available on every application instance.

This use case is best served by object storage like swift and CEPH's radosgw.


Now you need to fire up a "storage" instance and attach a volume. After
that you can share your data with the application instances via NFS or
whatever, but I think this is a very big resource overhead for small
projects. You need an instance just to share your data, per project,
and that n times :(


If you only have one small app with 5 instances, running your own
OpenStack is quite overkill.  However, I suspect you have OpenStack so
you can have many small apps with a few instances, and thus you'll find
many of them can benefit from a good solid object store.
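For what it's worth, pointing an app's static assets at swift can be fairly
lightweight. A sketch with python-swiftclient (the container name and paths
are made up for the example; the container is made world-readable):

# create a public container and upload the static files
swift post -r '.r:*' static-assets
swift upload static-assets css/ images/ movies/
# the files are then served directly by the swift proxy, e.g.
# http://<swift-proxy>/v1/AUTH_<tenant-id>/static-assets/css/site.css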

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Shared Storage with cinder ?

2013-07-05 Thread Heiko Krämer

Heyho guys,

I'm looking for a solution to share storage across more than one instance.

Normally you attach a block device with cinder directly via iSCSI, GlusterFS
or whatever to one instance, and that's it. Multi-attach is not available.



Use case:

I have an application running on 4 application instances plus a database instance.
There are static files like images, movies and CSS, and these files
should be available on every application instance.
Now you need to fire up a "storage" instance and attach a volume. After
that you can share your data with the application instances via NFS or
whatever, but I think this is a very big resource overhead for small
projects. You need an instance just to share your data, per project,
and that n times :(




Is there currently any way to solve this use case without that resource
overhead? I've seen there is a blueprint for attaching cinder volumes to
multiple instances, but it is not implemented yet.
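For reference, the "storage instance" workaround described above boils down to
something like this (a sketch, assuming Ubuntu 12.04, a cinder volume attached
as /dev/vdb and a 10.0.0.0/24 tenant network):

# on the storage instance
apt-get install nfs-kernel-server
mkfs.ext4 /dev/vdb
mkdir -p /srv/share && mount /dev/vdb /srv/share
echo "/srv/share 10.0.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra

# on each application instance
apt-get install nfs-common
mount -t nfs <storage-instance-ip>:/srv/share /var/www/static

It works, but it costs one extra instance per project, which is exactly the
overhead in question.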




Thx for your hints and greetings
Heiko


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Neutron] Strange packet loss of rubygems.org

2013-07-05 Thread Heiko Krämer

Heyho Guys,

me again :)

Yeah, I post to the list and what happens? I find the problem myself :)

The problem is/was the same as with the OpenStack L3 agent three months ago:
the MTU of the network device inside the warden container is 1500, which is
too big for rubygems or GitHub once the GRE tunnel overhead is added.
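For anyone hitting the same thing, the usual workaround is to make the
instances use a smaller MTU so the packets still fit into the GRE tunnel.
A rough sketch (the value 1400 is just an example, not something from this setup):

# quick test inside the container/instance
ip link set eth0 mtu 1400

# or push it to all instances via the quantum DHCP agent (dnsmasq):
# /etc/quantum/dnsmasq-quantum.conf
dhcp-option-force=26,1400
# /etc/quantum/dhcp_agent.ini
dnsmasq_config_file = /etc/quantum/dnsmasq-quantum.conf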


Greetings
Heiko

Am 05.07.2013 11:36, schrieb Heiko Krämer:

Heyho guys,

I have a really strange issue and have been trying for a week to figure out
why I see packet loss with only one HTTP site.


Description:

I've set up Neutron with the L3, DHCP and metadata agents, using OVS and GRE
tunneling.
The whole network works perfectly with all required components, but I have
one strange issue.


I start an instance with Ubuntu 12.04 without any problems. The
instance can talk to the WAN, and I can curl http://rubygems.org
directly on the command line. The gem installer CLI tool (gem) works too.


Now I'm trying to deploy *CloudFoundry* on *OpenStack*, and it mostly works.
But if I deploy a Rails app to CloudFoundry, the warden on the DEA installs
some gems in the usual way, and the problem is that I get no data back from
rubygems, and only from this site. The first part of the HTTP handshake works:



curl http://rubygems.org

tcpdump in my instance, but outside of the warden container:

09:26:33.477668 IP (tos 0x0, ttl 63, id 16829, offset 0, flags [DF], proto TCP (6), length 60)
    10.100.0.33.56832 > 54.245.255.174.80: Flags [S], cksum 0xd35c (correct), seq 3100954858, win 14600, options [mss 1460,sackOK,TS val 17666391 ecr 0,nop,wscale 5], length 0
        0x0000:  4500 003c 41bd 4000 3f06 b8d6 0a64 0021  E..<A.@.?....d.!
09:26:33.835130 IP (tos 0x0, ttl 47, id 0, offset 0, flags [DF], proto TCP (6), length 52)
    54.245.255.174.80 > 10.100.0.33.56832: Flags [S.], cksum 0x028a (correct), seq 325285586, ack 3100954859, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
        0x0000:  4500 0034 0000 4000 2f06 0a9c 36f5 ffae  E..4..@./...6...
        0x0010:  0a64 0021 0050 de00 1363 76d2 b8d4 d0eb  .d.!.P...cv.....
        0x0020:  8012 3908 028a 0000 0204 05b4 0101 0402  ..9.............
        0x0030:  0103 0307
09:26:33.835328 IP (tos 0x0, ttl 63, id 16830, offset 0, flags [DF], proto TCP (6), length 40)
    10.100.0.33.56832 > 54.245.255.174.80: Flags [.], cksum 0x7a9b (correct), seq 3100954859, ack 325285587, win 457, length 0
        0x0000:  4500 0028 41be 4000 3f06 b8e9 0a64 0021  E..(A.@.?....d.!
        0x0010:  36f5 ffae de00 0050 b8d4 d0eb 1363 76d3  6......P.....cv.
        0x0020:  5010 01c9 7a9b 0000                      P...z...
09:26:33.835565 IP (tos 0x0, ttl 63, id 16831, offset 0, flags [DF], proto TCP (6), length 193)
    10.100.0.33.56832 > 54.245.255.174.80: Flags [P.], cksum 0xd305 (correct), seq 3100954859:3100955012, ack 325285587, win 457, length 153
        0x0000:  4500 00c1 41bf 4000 3f06 b84f 0a64 0021  E...A.@.?..O.d.!
        0x0010:  36f5 ffae de00 0050 b8d4 d0eb 1363 76d3  6......P.....cv.
        0x0020:  5018 01c9 d305 0000 4745 5420 2f20 4854  P.......GET./.HT
        0x0030:  5450 2f31 2e31 0d0a 5573 6572 2d41 6765  TP/1.1..User-Age
        0x0040:  6e74 3a20 6375 726c 2f37 2e31 392e 3720  nt:.curl/7.19.7.
        0x0050:  2878 3836 5f36 342d 7063 2d6c 696e 7578  (x86_64-pc-linux
        0x0060:  2d67 6e75 2920 6c69 6263 7572 6c2f 372e  -gnu).libcurl/7.
        0x0070:  3139 2e37 204f 7065 6e53 534c 2f30 2e39  19.7.OpenSSL/0.9
        0x0080:  2e38 6b20 7a6c 6962 2f31 2e32 2e33 2e33  .8k.zlib/1.2.3.3
        0x0090:  206c 6962 6964 6e2f 312e 3135 0d0a 486f  .libidn/1.15..Ho
        0x00a0:  7374 3a20 7275 6279 6765 6d73 2e6f 7267  st:.rubygems.org
        0x00b0:  0d0a 4163 6365 7074 3a20 2a2f 2a0d 0a0d  ..Accept:.*/*...
        0x00c0:  0a                                       .
09:26:34.025768 IP (tos 0x0, ttl 47, id 28779, offset 0, flags [DF], proto TCP (6), length 40)
    54.245.255.174.80 > 10.100.0.33.56832: Flags [.], cksum 0x7b50 (correct), seq 325285587, ack 3100955012, win 123, length 0
        0x0000:  4500 0028 706b 4000 2f06 9a3c 36f5 ffae  E..(pk@./..<6...
        0x0010:  0a64 0021 0050 de00 1363 76d3 b8d4 d184  .d.!.P...cv.....
        0x0020:  5010 007b 7b50 0000                      P..{{P..



On the network node I see the HTTP responses arriving correctly, but they
never reach the VM.

Sometimes the response reaches the VM a minute later.


In my instance there are some iptables rules that provide connectivity to
the warden container:


iptables -L


root@f10ee0c8-bab8-4fc1-9964-898fb76d518c:~# iptables -L
Chain INPUT (policy ACCEPT)
target          prot opt source      destination

Chain FORWARD (policy ACCEPT)
target          prot opt source      destination
warden-forward  all  --  anywhere    anywhere

Chain OUTPUT (policy ACCEPT)
target          prot opt source      destination

Chain warden-default (1 references)
target          prot opt source      destination

[Openstack] [Neutron] Strange packet loss of rubygems.org

2013-07-05 Thread Heiko Krämer

Heyho guys,

I have a really strange issue and have been trying for a week to figure out
why I see packet loss with only one HTTP site.


Description:

I've set up Neutron with the L3, DHCP and metadata agents, using OVS and GRE
tunneling.
The whole network works perfectly with all required components, but I have
one strange issue.


I start an instance with Ubuntu 12.04 without any problems. The instance
can talk to the WAN, and I can curl http://rubygems.org directly on the
command line. The gem installer CLI tool (gem) works too.


Now I'm trying to deploy *CloudFoundry* on *OpenStack*, and it mostly works.
But if I deploy a Rails app to CloudFoundry, the warden on the DEA installs
some gems in the usual way, and the problem is that I get no data back from
rubygems, and only from this site. The first part of the HTTP handshake works:



curl http://rubygems.org

tcpdump in my instance, but outside of the warden container:

09:26:33.477668 IP (tos 0x0, ttl 63, id 16829, offset 0, flags [DF], proto TCP (6), length 60)
    10.100.0.33.56832 > 54.245.255.174.80: Flags [S], cksum 0xd35c (correct), seq 3100954858, win 14600, options [mss 1460,sackOK,TS val 17666391 ecr 0,nop,wscale 5], length 0
        0x0000:  4500 003c 41bd 4000 3f06 b8d6 0a64 0021  E..<A.@.?....d.!
09:26:33.835130 IP (tos 0x0, ttl 47, id 0, offset 0, flags [DF], proto TCP (6), length 52)
    54.245.255.174.80 > 10.100.0.33.56832: Flags [S.], cksum 0x028a (correct), seq 325285586, ack 3100954859, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
        0x0000:  4500 0034 0000 4000 2f06 0a9c 36f5 ffae  E..4..@./...6...
        0x0010:  0a64 0021 0050 de00 1363 76d2 b8d4 d0eb  .d.!.P...cv.....
        0x0020:  8012 3908 028a 0000 0204 05b4 0101 0402  ..9.............
        0x0030:  0103 0307
09:26:33.835328 IP (tos 0x0, ttl 63, id 16830, offset 0, flags [DF], proto TCP (6), length 40)
    10.100.0.33.56832 > 54.245.255.174.80: Flags [.], cksum 0x7a9b (correct), seq 3100954859, ack 325285587, win 457, length 0
        0x0000:  4500 0028 41be 4000 3f06 b8e9 0a64 0021  E..(A.@.?....d.!
        0x0010:  36f5 ffae de00 0050 b8d4 d0eb 1363 76d3  6......P.....cv.
        0x0020:  5010 01c9 7a9b 0000                      P...z...
09:26:33.835565 IP (tos 0x0, ttl 63, id 16831, offset 0, flags [DF], proto TCP (6), length 193)
    10.100.0.33.56832 > 54.245.255.174.80: Flags [P.], cksum 0xd305 (correct), seq 3100954859:3100955012, ack 325285587, win 457, length 153
        0x0000:  4500 00c1 41bf 4000 3f06 b84f 0a64 0021  E...A.@.?..O.d.!
        0x0010:  36f5 ffae de00 0050 b8d4 d0eb 1363 76d3  6......P.....cv.
        0x0020:  5018 01c9 d305 0000 4745 5420 2f20 4854  P.......GET./.HT
        0x0030:  5450 2f31 2e31 0d0a 5573 6572 2d41 6765  TP/1.1..User-Age
        0x0040:  6e74 3a20 6375 726c 2f37 2e31 392e 3720  nt:.curl/7.19.7.
        0x0050:  2878 3836 5f36 342d 7063 2d6c 696e 7578  (x86_64-pc-linux
        0x0060:  2d67 6e75 2920 6c69 6263 7572 6c2f 372e  -gnu).libcurl/7.
        0x0070:  3139 2e37 204f 7065 6e53 534c 2f30 2e39  19.7.OpenSSL/0.9
        0x0080:  2e38 6b20 7a6c 6962 2f31 2e32 2e33 2e33  .8k.zlib/1.2.3.3
        0x0090:  206c 6962 6964 6e2f 312e 3135 0d0a 486f  .libidn/1.15..Ho
        0x00a0:  7374 3a20 7275 6279 6765 6d73 2e6f 7267  st:.rubygems.org
        0x00b0:  0d0a 4163 6365 7074 3a20 2a2f 2a0d 0a0d  ..Accept:.*/*...
        0x00c0:  0a                                       .
09:26:34.025768 IP (tos 0x0, ttl 47, id 28779, offset 0, flags [DF], proto TCP (6), length 40)
    54.245.255.174.80 > 10.100.0.33.56832: Flags [.], cksum 0x7b50 (correct), seq 325285587, ack 3100955012, win 123, length 0
        0x0000:  4500 0028 706b 4000 2f06 9a3c 36f5 ffae  E..(pk@./..<6...
        0x0010:  0a64 0021 0050 de00 1363 76d3 b8d4 d184  .d.!.P...cv.....
        0x0020:  5010 007b 7b50 0000                      P..{{P..



On the network node I see the HTTP responses arriving correctly, but they
never reach the VM.

Sometimes the response reaches the VM a minute later.


In my instance there are some iptables rules that provide connectivity to the
warden container:


iptables -L


root@f10ee0c8-bab8-4fc1-9964-898fb76d518c:~# iptables -L
Chain INPUT (policy ACCEPT)
target          prot opt source      destination

Chain FORWARD (policy ACCEPT)
target          prot opt source      destination
warden-forward  all  --  anywhere    anywhere

Chain OUTPUT (policy ACCEPT)
target          prot opt source      destination

Chain warden-default (1 references)
target          prot opt source      destination

Chain warden-forward (1 references)
target          prot opt source      destination
warden-instance-170m184dmam  all  --  anywhere    anywhere    [goto]
DROP            all  --  anywhere    anywhere

Chain warden-instance-170m184dmam (1 references)
target          prot opt source      destin
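By the way, a quick way to check whether path MTU is involved (a sketch, run
from inside the warden container; 1472 bytes of payload plus 28 bytes of
ICMP/IP headers makes a full 1500-byte packet):

ping -c 3 -M do -s 1472 rubygems.org
ping -c 3 -M do -s 1372 rubygems.org

If the large ping fails while the smaller one works, the usable MTU on the
path is below 1500.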

Re: [Openstack] Swift cleaning tenant after deletion on Keystone

2013-06-20 Thread Heiko Krämer
Hey Hugo,

ok, thx for your quick answer!

Greetings
Heiko

On 20.06.2013 11:56, Kuo Hugo wrote:
> Hi Heiko, 
>
> All objects won't be deleted if the tenant been deleted in Keystone. 
>
> Hugo
>
> +Hugo Kuo+
> h...@swiftstack.com 
> tonyt...@gmail.com
> 
> +886 935004793

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Swift cleaning tenant after deletion on Keystone

2013-06-20 Thread Heiko Krämer
Heyho guys,

I have a short question because I can't find anything in the docs.

Will Swift clean up after itself if I delete a tenant in Keystone?

Or do I need to ensure that all files and all buckets/containers are
deleted in Swift before the tenant is deleted in Keystone?
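In case the cleanup has to be done by hand, a rough sketch with
python-swiftclient and the keystone CLI (credentials and IDs are placeholders):

# as a user of the tenant: delete every container and object in the account
swift --os-auth-url http://<keystone>:5000/v2.0 \
      --os-tenant-name <tenant> --os-username <user> --os-password <pw> \
      delete --all

# afterwards remove the tenant in keystone
keystone tenant-delete <tenant-id>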

Greetings and thx
Heiko

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cinder problems with usage and caching ?

2013-06-14 Thread Heiko Krämer
Hey Ollie,

thanks for your reply. I would, but I don't have any more information than
in my previous post.
My DB looks clean:

mysql> select SUM(size) from volumes where deleted=0\G
*** 1. row ***
SUM(size): 88
1 row in set (0.00 sec)

mysql> select SUM(volume_size) from snapshots where deleted=0\G
*** 1. row ***
SUM(volume_size): 53
1 row in set (0.00 sec)


Those are the entries in my MySQL DB, but it seems that cinder adds every
newly created volume to the "usage" (which is correct), yet never subtracts
it again when the volume is deleted.
At first I thought it was a caching problem, so I restarted all memcached
services, but the problem stays the same.

I don't see anything in the cinder logs apart from the API entries in my
previous post.


It's weird :( but a big problem.
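If the counters really are stale, one option is to reset them directly in
cinder's quota_usages table so they match the sums above (a sketch against
the Grizzly schema; take a DB backup first, the values are placeholders):

mysql> select resource, in_use, reserved from quota_usages where project_id='<tenant-id>';
mysql> update quota_usages set in_use=<real value> where project_id='<tenant-id>' and resource='gigabytes';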


Thx and Greetings
Heiko


On 13.06.2013 13:30, Ollie Leahy wrote:
> Thanks for taking the trouble to do that Heiko,
>
> as you can see that's been open a while and we're having trouble
> reproducing the problem,
> any information you can supply about your situation would be welcome.
> eg, errors in log files
> or the contents of your database as Duncan suggested in that bug.
>
> Ollie
>
> On Thu 13 Jun 2013 09:55:53 IST, Heiko Krämer wrote:
>> Hey Ollie,
>>
>> yeah thx, I've found yesterday an existing bug report.
>>
>> https://bugs.launchpad.net/cinder/+bug/1174193
>>
>> Thx and greetings
>> Heiko
>>
>> On 12.06.2013 17:05, Ollie Leahy wrote:
>>> This looks like a bug, so you could raise a bug on cinder at
>>> https://bugs.launchpad.net/cinder/+filebug
>>>
>>> When you do you could include information about the version of cinder
>>> you are using, is it grizzy, folsom or are you testing on head?
>>>
>>> Also, if you can include any context information for example had that
>>> project id had used more quota in the past and deleted it?
>>>
>>> It would also be useful to search through any cinder logs for other
>>> error warnings, in case there was a failure in the past, when quota
>>> was either consumed or recovered by this project and where the
>>> operation was not completed successfully.
>>>
>>> Ollie
>>>
>>>
>>>
>>>
>>> On 12/06/13 10:02, Heiko Krämer wrote:
>>>> Hi guys,
>>>>
>>>> I'm running in a problem raised by cinder api.
>>>>
>>>> I'll show you the log output it's more then my explaination :)
>>>>
>>>>
>>>> 2013-06-12 10:50:13AUDIT [cinder.api.v1.volumes] Create volume of
>>>> 30 GB
>>>> 2013-06-12 10:50:13  WARNING [cinder.volume.api] Quota exceeded for
>>>> d4e1c14691d841f6b53a24b6c4c42a0e, tried to create 30G volume (172G of
>>>> 200G already consumed)
>>>> 2013-06-12 10:50:13ERROR [cinder.api.middleware.fault] Caught
>>>> error:
>>>> Requested volume or snapshot exceeds allowed Gigabytes quota
>>>>
>>>>
>>>> root@api2:~# cinder list
>>>> +--++-+--+-+--+--+
>>>>
>>>>
>>>> |  ID  | Status |
>>>> Display Name| Size | Volume Type | Bootable
>>>> | Attached to  |
>>>> +--++-+--+-+--+--+
>>>>
>>>>
>>>> | 6ce6f626-2d2b-4a17-8933-13e196fa650c | in-use |
>>>> bosh|  10  |   default   |  false   |
>>>> 567a4c86-08ab-43cd-b9bc-3b220f2bf262 |
>>>> | 8822b84b-595e-4b6f-9636-472dae7c33a4 | in-use |
>>>> volume-64e51c64-5da4-4981-9b05-f7abfc6695b1 |  16  | None|
>>>> false   | 65f33296-c2b0-4824-b887-359ee0462b56 |
>>>> | d56e5a86-f6d1-43ed-b125-2ff977aefa24 | in-use |
>>>> volume-363573c1-05d6-4484-9aad-0919e47546e0 |  5   | None|
>>>> false   | fbb809d5-71f3-4a78-9cb7-4913c1e0af83 |
>>>> | f7506174-4ae4-4a3c-928f-47b785bb35f5 | in-use |
>>>> volume-385997c8-709c-4fa2-9d5b-ca2bba9d4e87 |  7   | None|
>>>> false   | 0f1ab672-043a-4361-afd5-9f2ddd818ed8 |
>>>> +--++---

Re: [Openstack] Cinder problems with usage and caching ?

2013-06-13 Thread Heiko Krämer
Hey Ollie,

yeah thx, I've found yesterday an existing bug report.

https://bugs.launchpad.net/cinder/+bug/1174193

Thx and greetings
Heiko

On 12.06.2013 17:05, Ollie Leahy wrote:
> This looks like a bug, so you could raise a bug on cinder at
> https://bugs.launchpad.net/cinder/+filebug
>
> When you do you could include information about the version of cinder
> you are using, is it grizzy, folsom or are you testing on head?
>
> Also, if you can include any context information for example had that
> project id had used more quota in the past and deleted it?
>
> It would also be useful to search through any cinder logs for other
> error warnings, in case there was a failure in the past, when quota
> was either consumed or recovered by this project and where the
> operation was not completed successfully.
>
> Ollie
>
>
>
>
> On 12/06/13 10:02, Heiko Krämer wrote:
>> Hi guys,
>>
>> I'm running in a problem raised by cinder api.
>>
>> I'll show you the log output it's more then my explaination :)
>>
>>
>> 2013-06-12 10:50:13AUDIT [cinder.api.v1.volumes] Create volume of
>> 30 GB
>> 2013-06-12 10:50:13  WARNING [cinder.volume.api] Quota exceeded for
>> d4e1c14691d841f6b53a24b6c4c42a0e, tried to create 30G volume (172G of
>> 200G already consumed)
>> 2013-06-12 10:50:13ERROR [cinder.api.middleware.fault] Caught error:
>> Requested volume or snapshot exceeds allowed Gigabytes quota
>>
>>
>> root@api2:~# cinder list
>> +--++-+--+-+--+--+
>>
>> |  ID  | Status |
>> Display Name| Size | Volume Type | Bootable
>> | Attached to  |
>> +--++-+--+-+--+--+
>>
>> | 6ce6f626-2d2b-4a17-8933-13e196fa650c | in-use |
>> bosh|  10  |   default   |  false   |
>> 567a4c86-08ab-43cd-b9bc-3b220f2bf262 |
>> | 8822b84b-595e-4b6f-9636-472dae7c33a4 | in-use |
>> volume-64e51c64-5da4-4981-9b05-f7abfc6695b1 |  16  | None|
>> false   | 65f33296-c2b0-4824-b887-359ee0462b56 |
>> | d56e5a86-f6d1-43ed-b125-2ff977aefa24 | in-use |
>> volume-363573c1-05d6-4484-9aad-0919e47546e0 |  5   | None|
>> false   | fbb809d5-71f3-4a78-9cb7-4913c1e0af83 |
>> | f7506174-4ae4-4a3c-928f-47b785bb35f5 | in-use |
>> volume-385997c8-709c-4fa2-9d5b-ca2bba9d4e87 |  7   | None|
>> false   | 0f1ab672-043a-4361-afd5-9f2ddd818ed8 |
>> +--++-+--+-+--+--+
>>
>>
>>
>> root@api2:~# cinder quota-show d4e1c14691d841f6b53a24b6c4c42a0e
>> +---+---+
>> |  Property | Value |
>> +---+---+
>> | gigabytes |  200  |
>> | snapshots |   20  |
>> |  volumes  |   30  |
>> +---+---+
>>
>>
>>
>> you see I consume only 38GB of 200GB and not 172GB (log).
>> It's anything wrong with caching by cinder ? Have anyone the same
>> problem or any hints ?
>>
>>
>> Greetings
>> Heiko
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Cinder problems with usage and caching ?

2013-06-12 Thread Heiko Krämer
Hi guys,

I'm running into a problem raised by the cinder API.

I'll just show you the log output, it explains more than I could :)


2013-06-12 10:50:13    AUDIT [cinder.api.v1.volumes] Create volume of 30 GB
2013-06-12 10:50:13  WARNING [cinder.volume.api] Quota exceeded for
d4e1c14691d841f6b53a24b6c4c42a0e, tried to create 30G volume (172G of
200G already consumed)
2013-06-12 10:50:13    ERROR [cinder.api.middleware.fault] Caught error:
Requested volume or snapshot exceeds allowed Gigabytes quota


root@api2:~# cinder list
+--------------------------------------+--------+----------------------------------------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |                 Display Name                 | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+----------------------------------------------+------+-------------+----------+--------------------------------------+
| 6ce6f626-2d2b-4a17-8933-13e196fa650c | in-use | bosh                                         |  10  |   default   |  false   | 567a4c86-08ab-43cd-b9bc-3b220f2bf262 |
| 8822b84b-595e-4b6f-9636-472dae7c33a4 | in-use | volume-64e51c64-5da4-4981-9b05-f7abfc6695b1  |  16  |     None    |  false   | 65f33296-c2b0-4824-b887-359ee0462b56 |
| d56e5a86-f6d1-43ed-b125-2ff977aefa24 | in-use | volume-363573c1-05d6-4484-9aad-0919e47546e0  |  5   |     None    |  false   | fbb809d5-71f3-4a78-9cb7-4913c1e0af83 |
| f7506174-4ae4-4a3c-928f-47b785bb35f5 | in-use | volume-385997c8-709c-4fa2-9d5b-ca2bba9d4e87  |  7   |     None    |  false   | 0f1ab672-043a-4361-afd5-9f2ddd818ed8 |
+--------------------------------------+--------+----------------------------------------------+------+-------------+----------+--------------------------------------+


root@api2:~# cinder quota-show d4e1c14691d841f6b53a24b6c4c42a0e
+---+---+
|  Property | Value |
+---+---+
| gigabytes |  200  |
| snapshots |   20  |
|  volumes  |   30  |
+---+---+



As you can see, I actually consume only 38 GB of 200 GB, not the 172 GB the
log claims. Is something wrong with caching in cinder? Does anyone have the
same problem or any hints?
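One way to see what cinder itself thinks is consumed is to look at the
quota_usages and reservations tables directly (a sketch against the Grizzly
schema):

mysql> select resource, in_use, reserved from quota_usages where project_id='d4e1c14691d841f6b53a24b6c4c42a0e';
mysql> select resource, delta, expire from reservations where project_id='d4e1c14691d841f6b53a24b6c4c42a0e' and deleted=0;

If in_use is much higher than the real sum of the volumes, the counter is
stale rather than cached.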


Greetings
Heiko

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] CloudFoundry on Openstack Grizzly Part 1

2013-06-07 Thread Heiko Krämer
Heyho guys,

I've written the first part of a guide on how to deploy CloudFoundry on
OpenStack. The second part is coming soon.


http://honeybutcher.de/2013/06/cloudfoundry-micro-bosh-openstack-grizzly/


Greetings
Heiko

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Swift load balancing

2013-06-07 Thread Heiko Krämer
Hey Kotwani,

we are using a software load balancer, but at layer 3 (keepalived).
DNS round robin is not really a load balancer :) if one node is down, some
connections will still be sent to the dead host, and that's not the right way, I think.

An HTTP proxy is an option, but you turn it into a bottleneck for your WAN
connection because all traffic has to pass through the proxy server.

You can use keepalived as a layer 3 load balancer, so all incoming requests
are distributed across the swift proxy servers and answered by them directly.
You don't have a bottleneck because the WAN connection of every swift proxy
server is used, and you get automatic failover by running keepalived on a
second hot-standby load balancer (keepalived provides this failover out of
the box via VRRP, so you don't need a separate pacemaker + corosync setup).
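A minimal keepalived/LVS sketch for two swift proxies could look roughly like
this (all IPs, ports and timeouts are placeholders):

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.0.100
    }
}

virtual_server 192.168.0.100 8080 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.0.11 8080 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.0.12 8080 {
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

With lb_kind DR the proxies answer clients directly, which is what avoids the
bottleneck mentioned above.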


Greetings
Heiko

On 07.06.2013 06:40, Chu Duc Minh wrote:
> If you choose to use DNS round robin, you can set TTL small and use a
> script/tool to continous check proxy nodes to reconfigure DNS record
> if one proxy node goes down, and vice-versa.
>
> If you choose to use SW load-balancer, I suggest HAProxy for
> performance (many high-traffic websites use it) and NGinx for features
> (if you really need features provided by Nginx).
> IMHO, I like Nginx more than Haproxy. It's stable, modern, high
> performance, and full-featured.
>
>
> On Fri, Jun 7, 2013 at 6:28 AM, Kotwani, Mukul  > wrote:
>
> Hello folks,
>
> I wanted to check and see what others are using in the case of a
> Swift installation with multiple proxy servers for load
> balancing/distribution. Based on my reading, the approaches used
> are DNS round robin, or SW load balancers such as Pound, or HW
> load balancers. I am really interested in finding out what others
> have been using in their installations. Also, if there are issues
> that you have seen related to the approach you are using, and any
> other information you think would help would be greatly appreciated.
>
>  
>
> As I understand it, DNS round robin does not check the state of
> the service behind it, so if a service goes down, DNS will still
> send the record and the record requires manual removal(?). Also, I
> am not sure how well it scales or if there are any other issues.
> About Pound, I am not sure what kind of resources it expects and
> what kind of scalability it has, and yet again, what other issues
> have been seen.
>
>  
>
> Real world examples and problems seen by you guys would definitely
> help in understanding the options better.
>
>  
>
> Thanks!
>
> Mukul
>
>  
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> 
> Post to : openstack@lists.launchpad.net
> 
> Unsubscribe : https://launchpad.net/~openstack
> 
> More help   : https://help.launchpad.net/ListHelp
>
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Keystone] Policy settings not working correctly

2013-06-07 Thread Heiko Krämer
Hi Guang,

thanks for your hint, but that's not the answer, because in your example all
users with the KeystoneAdmin role get the same rights as admin, and that
defeats the purpose.

@Adam: so I have no chance of getting the policy management to work? I can't
express that the KeystoneAdmin role is only allowed to create and delete
users and nothing more?
I saw there is a MySQL-based policy backend as an alternative to the file,
but there are no CLI commands available for it, right?
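In principle the policy file can scope the role to just those calls, e.g.
(a sketch in the same policy.json syntax; the catch, as discussed below, is
that the v2 admin API used by the keystone CLI still hits the hard-wired
admin_required check):

"admin_or_kadmin": [["rule:admin_required"], ["role:KeystoneAdmin"]],
"identity:create_user": [["rule:admin_or_kadmin"]],
"identity:delete_user": [["rule:admin_or_kadmin"]],
"identity:list_users": [["rule:admin_or_kadmin"]],
"default": [["rule:admin_required"]],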


Thx and Greetings
Heiko

On 07.06.2013 07:59, Yee, Guang wrote:
>
> I think keystone client is still V2 by default, which is enforcing
> admin_required.
>
>  
>
> Try this
>
>  
>
> "admin_required": [["role:KeystoneAdmin"], ["role:admin"],
> ["is_admin:1"]],
>
>  
>
>  
>
> Guang
>
>  
>
>  
>
> *From:*Openstack
> [mailto:openstack-bounces+guang.yee=hp@lists.launchpad.net] *On
> Behalf Of *Adam Young
> *Sent:* Thursday, June 06, 2013 7:28 PM
> *To:* Heiko Krämer; openstack
> *Subject:* Re: [Openstack] [Keystone] Policy settings not working
> correctly
>
>  
>
> What is the actualy question here?  Is it "why is this failing" or
> "why was it done that way?"
>
>
> On 06/04/2013 07:47 AM, Heiko Krämer wrote:
>
> Heyho guys :)
>
> I've a little problem with policy settings in keystone. I've
> create a new rule in my policy-file and restarts keystone but
> keystone i don't have privileges.
>
>
> What is the rule?
>
>
> Example:
>
>
> keystone user-create --name kadmin --pw lala
> keystone user-role-add --
>
> keystone role-list --user kadmin --role KeystoneAdmin --tenant admin
>
> +--+--+
> |id| name |
> +--+--+
> | 3f5c0af585db46aeaec49da28900de28 |KeystoneAdmin |
> | dccfed0bd790420bbf1982686cbf7e31 | KeystoneServiceAdmin |
>
>
> cat /etc/keystone/policy.json
>
> {
> "admin_required": [["role:admin"], ["is_admin:1"]],
> "owner" : [["user_id:%(user_id)s"]],
> "admin_or_owner": [["rule:admin_required"], ["rule:owner"]],
> "admin_or_kadmin": [["rule:admin_required"], ["role:KeystoneAdmin"]],
>
> "default": [["rule:admin_required"]],
> [.]
> "identity:list_users": [["rule:admin_or_kadmin"]],
> []
>
> 
>
> keystone user-list
> Unable to communicate with identity service: {"error": {"message":
> "You are not authorized to perform the requested action:
> admin_required", "code": 403, "title": "Not Authorized"}}. (HTTP 403)
>
>
> In log file i see:
> DEBUG [keystone.policy.backends.rules] enforce admin_required:
> {'tenant_id': u'b33bf3927d4e449a98cec4a883148110', 'user_id':
> u'46a6a9e429db483f8346f0259e99d6a5', u'roles': [u'KeystoneAdmin']}
>
>
>
>
> Why does keystone enforce /admin_required/ rule instead of the defined
> rule (/admin_or_kadmin/).
>
>
> Historical reasons.  We are trying to clean this up. 
>
>
>
>
>
> Keystone conf:
> [...]
>
> # Path to your policy definition containing identity actions
> policy_file = policy.json
> [..]
> [policy]
> driver = keystone.policy.backends.rules.Policy
>
>
>
>
> Any have an idea ?
>
> Thx and greetings
> Heiko
>
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack 
> <https://launchpad.net/%7Eopenstack>
> Post to : openstack@lists.launchpad.net 
> <mailto:openstack@lists.launchpad.net>
> Unsubscribe : https://launchpad.net/~openstack 
> <https://launchpad.net/%7Eopenstack>
> More help   : https://help.launchpad.net/ListHelp
>
>  
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Keystone] Policy settings not working correctly

2013-06-04 Thread Heiko Krämer
Heyho guys :)

I have a little problem with policy settings in keystone. I've created a
new rule in my policy file and restarted keystone, but keystone tells me I
don't have the privileges.

Example:


keystone user-create --name kadmin --pw lala
keystone user-role-add --

keystone role-list --user kadmin --role KeystoneAdmin --tenant admin

+--+--+
|id| name |
+--+--+
| 3f5c0af585db46aeaec49da28900de28 |KeystoneAdmin |
| dccfed0bd790420bbf1982686cbf7e31 | KeystoneServiceAdmin |


cat /etc/keystone/policy.json

{
"admin_required": [["role:admin"], ["is_admin:1"]],
"owner" : [["user_id:%(user_id)s"]],
"admin_or_owner": [["rule:admin_required"], ["rule:owner"]],
"admin_or_kadmin": [["rule:admin_required"], ["role:KeystoneAdmin"]],

"default": [["rule:admin_required"]],
[.]
"identity:list_users": [["rule:admin_or_kadmin"]],
[]



keystone user-list
Unable to communicate with identity service: {"error": {"message": "You
are not authorized to perform the requested action: admin_required",
"code": 403, "title": "Not Authorized"}}. (HTTP 403)


In the log file I see:
DEBUG [keystone.policy.backends.rules] enforce admin_required:
{'tenant_id': u'b33bf3927d4e449a98cec4a883148110', 'user_id':
u'46a6a9e429db483f8346f0259e99d6a5', u'roles': [u'KeystoneAdmin']}




Why does keystone enforce the admin_required rule instead of the defined
rule (admin_or_kadmin)?



Keystone conf:
[...]

# Path to your policy definition containing identity actions
policy_file = policy.json
[..]
[policy]
driver = keystone.policy.backends.rules.Policy




Does anyone have an idea?

Thx and greetings
Heiko

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OVS] Loopingcall and high load on nova comp

2013-05-17 Thread Heiko Krämer
Hi guys,

I'm seeing a resource overhead on each nova compute node caused by the
quantum OVS agent.

Even with only the agent (plus nova-compute) running on the compute node,
I sometimes see a load of 2. The agent racks up a lot of CPU hours, and
when I look into the log file I see this many times:

2013-05-17 12:35:46  WARNING [quantum.openstack.common.loopingcall] task
run outlasted interval by 0.195565 sec
2013-05-17 12:35:50  WARNING [quantum.openstack.common.loopingcall] task
run outlasted interval by 0.24272 sec
2013-05-17 12:35:54  WARNING [quantum.openstack.common.loopingcall] task
run outlasted interval by 0.254162 sec
2013-05-17 12:35:58  WARNING [quantum.openstack.common.loopingcall] task
run outlasted interval by 0.131815 sec
2013-05-17 12:36:02  WARNING [quantum.openstack.common.loopingcall] task
run outlasted interval by 0.213518 sec
2013-05-17 12:36:07  WARNING [quantum.openstack.common.loopingcall] task
run outlasted interval by 0.212293 sec
2013-05-17 12:36:11  WARNING [quantum.openstack.common.loopingcall] task
run outlasted interval by 0.223576 sec

I've searched for this warning but without success.

Does anyone have an idea, or the same problem?
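One knob that might help is the agent's polling interval; the warning just
says each polling loop run takes longer than the configured interval.
A sketch (Grizzly quantum OVS agent config; 10 seconds is only an example value):

# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[AGENT]
polling_interval = 10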


Greetings
Heiko

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum] V2 Grizzly net-gateway-xxx 404

2013-04-26 Thread Heiko Krämer
So the use case is:

I have a shared private net for shared services that can be used by all tenants.

I've created a router, a shared net and a subnet for this.
Now I'm trying to attach an interface with an IP from the shared service net
to the user tenant's router, but that's not working:

quantum router-interface-add 424a4e14-54c1-4cd3-b0ef-69deeba3d24c
7e522217-4633-4def-b1eb-5be2eb3f2335
Unable to complete operation for network
39610c0f-228f-4a78-b65c-e523f49bbeed. The IP address 10.0.101.1 is in use.

Is it possible to attach an interface from another subnet to a router? The
goal is to allow communication between the user tenant and the shared tenant
(not via the external network).

I'm using Open vSwitch and GRE with namespaces, so there is one router per tenant.


I've been trying for many hours now to combine two internal networks...

Usecase:

tenant1 (10.0.0.0/24):
router1
vm1:
IP: 10.0.0.20

tenant2(10.0.1.0/24):
router2
vm2:
IP: 10.0.1.8

tenant3(10.0.2.0/24):
router3
vm3
ip: 10.0.2.9


Now it should be possible to:
connect from vm1 and vm2 to vm3,
but not from vm1 to vm2.

I need to attach a port from the shared net to the router of each tenant.


I'm going crazy :D
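One thing that might work is to give the shared subnet a second port with a
free address and attach the tenant router by port instead of by subnet. A
sketch (IDs are placeholders and the CLI syntax may differ slightly between
client versions):

quantum port-create <shared-net-id> --fixed-ip subnet_id=<shared-subnet-id>,ip_address=10.0.101.254
quantum router-interface-add <tenant-router-id> port=<port-id-from-above>

That should avoid the "IP address 10.0.101.1 is in use" error, presumably
caused by 10.0.101.1 (the subnet gateway) already being held by the existing
router port of the shared tenant.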


Greetings and thx
Heiko


On 26.04.2013 12:53, Akihiro MOTOKI wrote:
> Hi Heiko,
>
> net-gateway-* feature is provided by network gateway extension.
> This extension is specific to Nicira NVP plugin and not supported by
> other plugins (in Grizzly release).
>
> Thanks,
> Akihiro
>
>
>
> 2013/4/26 Heiko Krämer  <mailto:i...@honeybutcher.de>>
>
> Hey guys,
>
> I'm trying to use:
> /quantum net-gateway-list /
>
> or something else of this command. I get every time 404:
>
> /quantum net-gateway-list//
> //404 Not Found//
> //
> //The resource could not be found.//
> /
> I think any extension is missing in my conf?! But i can't find any
> doc :(
>
> quantum.conf:
> /
> core_plugin =
> quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2//
> //service_plugins =
> quantum.plugins.services.agent_loadbalancer.plugin.LoadBalancerPlugin//
> //router_scheduler_driver =
> quantum.scheduler.l3_agent_scheduler.ChanceScheduler/
>
>
>
> Greetings and Thx
> Heiko
>
> ___
> Mailing list: https://launchpad.net/~openstack
> <https://launchpad.net/%7Eopenstack>
> Post to : openstack@lists.launchpad.net
> <mailto:openstack@lists.launchpad.net>
> Unsubscribe : https://launchpad.net/~openstack
> <https://launchpad.net/%7Eopenstack>
> More help   : https://help.launchpad.net/ListHelp
>
>
>
>
> -- 
> Akihiro MOTOKI mailto:amot...@gmail.com>>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum] V2 Grizzly net-gateway-xxx 404

2013-04-26 Thread Heiko Krämer
All right,

thx for the information Akihiro.

greetings
Heiko

On 26.04.2013 12:53, Akihiro MOTOKI wrote:
> Hi Heiko,
>
> net-gateway-* feature is provided by network gateway extension.
> This extension is specific to Nicira NVP plugin and not supported by
> other plugins (in Grizzly release).
>
> Thanks,
> Akihiro
>
>
>
> 2013/4/26 Heiko Krämer  <mailto:i...@honeybutcher.de>>
>
> Hey guys,
>
> I'm trying to use:
> /quantum net-gateway-list /
>
> or something else of this command. I get every time 404:
>
> /quantum net-gateway-list//
> //404 Not Found//
> //
> //The resource could not be found.//
> /
> I think any extension is missing in my conf?! But i can't find any
> doc :(
>
> quantum.conf:
> /
> core_plugin =
> quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2//
> //service_plugins =
> quantum.plugins.services.agent_loadbalancer.plugin.LoadBalancerPlugin//
> //router_scheduler_driver =
> quantum.scheduler.l3_agent_scheduler.ChanceScheduler/
>
>
>
> Greetings and Thx
> Heiko
>
> ___
> Mailing list: https://launchpad.net/~openstack
> <https://launchpad.net/%7Eopenstack>
> Post to : openstack@lists.launchpad.net
> <mailto:openstack@lists.launchpad.net>
> Unsubscribe : https://launchpad.net/~openstack
> <https://launchpad.net/%7Eopenstack>
> More help   : https://help.launchpad.net/ListHelp
>
>
>
>
> -- 
> Akihiro MOTOKI mailto:amot...@gmail.com>>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Quantum] V2 Grizzly net-gateway-xxx 404

2013-04-26 Thread Heiko Krämer
Hey guys,

I'm trying to use:
quantum net-gateway-list

or other variants of this command. Every time I get a 404:

quantum net-gateway-list
404 Not Found

The resource could not be found.

I think some extension is missing in my conf?! But I can't find any doc :(

quantum.conf:
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
service_plugins = quantum.plugins.services.agent_loadbalancer.plugin.LoadBalancerPlugin
router_scheduler_driver = quantum.scheduler.l3_agent_scheduler.ChanceScheduler



Greetings and Thx
Heiko
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] horizon error

2013-04-25 Thread Heiko Krämer
Hi Mballo,

looks good to me. Please check your keystone endpoints as well; you need to
verify for the other services too that all endpoints are correct.
Do you see your images, networks and volumes on your project page? That's
usually a good indicator of whether horizon can communicate with those APIs.
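A quick way to verify that from the horizon host (a sketch, using admin
credentials):

keystone endpoint-list
# talk to the compute API directly with the same credentials horizon uses
nova --debug list

If the endpoint list shows wrong URLs or the nova call fails, horizon will
typically show exactly these "Unable to retrieve ..." errors.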



Greetings
Heiko

Am 23.04.2013 18:12, schrieb Mballo Cherif:
>
> Hi Hiho,
>
> Thanks you for your answers. In fact when I lauch "nova-manage service
> list" I get this:
>
>  
>
> /nova-certopenstack-grizzly.linux.gem 
> internal enabled:-)   2013-04-23 16:05:03/
>
> /nova-conductor   openstack-grizzly.linux.gem 
> internal enabled:-)   2013-04-23 16:05:03/
>
> /nova-consoleauth openstack-grizzly.linux.gem 
> internal enabled:-)   2013-04-23 16:05:03/
>
> /nova-scheduler   openstack-grizzly.linux.gem 
> internal enabled:-)   2013-04-23 16:05:03/
>
> /nova-compute openstack-grizzly.linux.gem 
> nova enabled:-)   2013-04-23 16:05:04/
>
>  
>
>  
>
> is it normal not having nova-api in the list ?. Otherwise when I check
> "service nova-api status" the service is running well (/nova-api
> start/running, process 13421/)
>
>  
>
>  
>
>  
>
> *From:*Openstack
> [mailto:openstack-bounces+cherif.mballo=gemalto@lists.launchpad.net]
> *On Behalf Of *Heiko Krämer
> *Sent:* mardi 23 avril 2013 17:30
> *To:* openstack@lists.launchpad.net
> *Subject:* Re: [Openstack] horizon error
>
>  
>
> Hiho,
>
> this occurs if an service not running or not reachable. In your case
> mostly api or compute.
> Check if each service are running and reachable from your Horizon host.
>
> Check if all endpoints in keystone are configured correctly.
>
> Greetings
> Heiko
>
>
> On 23.04.2013 17:25, Mballo Cherif wrote:
>
> Hi everybody, when I'm authenticate with horizon I have this
> message "*Error: *Unauthorized: Unable to retrieve usage
> information." And "*Error: *Unauthorized: Unable to retrieve quota
> information."
>
> How can I fix this issue?
>
>  
>
> Thanks you for your help!
>
>  
>
> Sheriff!
>
>  
>
>
>
>
> ___
>
> Mailing list: https://launchpad.net/~openstack 
> <https://launchpad.net/%7Eopenstack>
>
> Post to : openstack@lists.launchpad.net 
> <mailto:openstack@lists.launchpad.net>
>
> Unsubscribe : https://launchpad.net/~openstack 
> <https://launchpad.net/%7Eopenstack>
>
> More help   : https://help.launchpad.net/ListHelp
>
>  
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] horizon error

2013-04-23 Thread Heiko Krämer
Hiho,

this occurs when a service is not running or not reachable, in your case
most likely the API or compute service.
Check whether each service is running and reachable from your horizon host.

Check if all endpoints in keystone are configured correctly.

Greetings
Heiko


On 23.04.2013 17:25, Mballo Cherif wrote:
>
> Hi everybody, when I'm authenticate with horizon I have this message
> "*Error: *Unauthorized: Unable to retrieve usage information." And
> "*Error: *Unauthorized: Unable to retrieve quota information."
>
> How can I fix this issue?
>
>  
>
> Thanks you for your help!
>
>  
>
> Sheriff!
>
>  
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Short introduction to running swift with quotas

2013-04-19 Thread Heiko Krämer
Hi Guys,

I've written a short guide on enabling account quotas in Swift (1.8.0).

http://honeybutcher.de/2013/04/account-quotas-swift-1-8-0/
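The short version, for the impatient (a sketch: it assumes the account_quotas
middleware is in the proxy pipeline and that the posting user has the
ResellerAdmin role):

# limit the account to ~10 GB
swift post -m quota-bytes:10737418240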


I hope it's helpful.

Greetings
Heiko

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] which network controller is the best for quantum grizzly?

2013-04-17 Thread Heiko Krämer
Hmm, yeah, that's a simple case. You can use the default setup, the OVS
plugin without an external controller. It's well documented and works very well.
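A minimal Grizzly config for that scenario (OVS plugin with GRE tunnels and
namespaces) looks roughly like this; IPs are placeholders:

# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = 10.0.0.11

# /etc/quantum/l3_agent.ini and /etc/quantum/dhcp_agent.ini
use_namespaces = True

With that, one router between the two subnets is enough for the VMs to ping
each other.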

Greetings

On 18.04.2013 06:37, Liu Wenmao wrote:
> hi Heiko:
>
> My network topology is very simple: a router connecting with two
> subnets, each VM in the two subnets can ping each other.
>
> So it needs l3 layer routing, I also need namespace for quantum
> configuration. So is there a controller suitable for such a scenario?
>
> Thanks.
>
>
> On Wed, Apr 17, 2013 at 8:16 PM, Heiko Krämer  <mailto:i...@honeybutcher.de>> wrote:
>
> Hi Wenmao,
>
> i think you should plan your network topologie first and after
> that you can decide which controller are the best choice for you.
>
> Greetings
> Heiko
>
>
> On 17.04.2013 14:01, Liu Wenmao wrote:
>> I have tried floodlight, but it does not support namespace, so I
>> wonder is there a better network controller to support
>> quantum?(nox, ryu ..)
>>
>> Wenmao Liu
>>
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack 
>> <https://launchpad.net/%7Eopenstack>
>> Post to : openstack@lists.launchpad.net 
>> <mailto:openstack@lists.launchpad.net>
>> Unsubscribe : https://launchpad.net/~openstack 
>> <https://launchpad.net/%7Eopenstack>
>> More help   : https://help.launchpad.net/ListHelp
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> <https://launchpad.net/%7Eopenstack>
> Post to : openstack@lists.launchpad.net
> <mailto:openstack@lists.launchpad.net>
> Unsubscribe : https://launchpad.net/~openstack
> <https://launchpad.net/%7Eopenstack>
> More help   : https://help.launchpad.net/ListHelp
>
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Heiko Krämer
Thx for your replies!!

I've created a bug report: https://bugs.launchpad.net/cinder/+bug/1169928

I think something is wrong with the config parser.
If I find a quick fix I'll let you know.

Greetings
Heiko

On 17.04.2013 15:50, Jérôme Gallard wrote:
> Hi,
>
> Yes, it's very surprising. I manage to obtain your error by doing the
operations manually (compute and guest are ubuntu 12.04 and devstack
deployment).
>
> Another interesting thing is that, in my case, with multi-backend
enabled, tempest tells me everything is right:
>
> /opt/stack/tempest# nosetests -sv
tempest.tests.volume.test_volumes_actions.py
<http://tempest.tests.volume.test_volumes_actions.py>
> nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
>
tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_attach_detach_volume_to_instance[smoke]
... ok
>
tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_get_volume_attachment
... ok
>
> --
> Ran 2 tests in 122.465s
>
> OK
>
>
> I don't think that error is linked to the distribution. With my
configuration, if I remove the multi-backend option, attachment is possible.
>
> Regards,
> Jérôme
>
>
> On Wed, Apr 17, 2013 at 3:22 PM, Steve Heistand
mailto:steve.heist...@nasa.gov>> wrote:
>
> in my case (as near as I can tell) its something to do with the inability
> for ubuntu 12.04 (as a vm) to do hot plug pci stuff.
> the node itself in as 12.04 just the vm part that doesnt work as ubuntu.
> havent tried 12.10 or rarring as a vm.
>
> steve
>
> On 04/17/2013 05:42 AM, Heiko Krämer wrote:
> > Hi Steve,
>
> > yeah it's running ubuntu 12.04 on the nodes and on the vm.
>
> > But configuration parsing error should have normally nothing todo
> with a distribution
> > ?! Maybe the oslo version or something like that.
>
> > But thanks for your hint.
>
> > Greetings Heiko
>
> > On 17.04.2013 14:36, Steve Heistand wrote:
> >> what OS Are you running in the VM? I had similar issues with ubuntu
> 12.04 but
> >> things worked great with centos 6.4
> >>
> >>
> >> On 04/17/2013 01:15 AM, Heiko Krämer wrote:
> >>> Hi Guys,
> >>>
> >>> I'm running in a strange config issue with cinder-volume service.
> I try to use
> >>> the multi backend feature in grizzly and the scheduler works fine
> but the volume
> >>> service are not running correctly. I can create/delete volumes but
> not attach.
> >>>
> >>> My cinder.conf (abstract): / // Backend Configuration//
> >>> //scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler//
> >>>
> //scheduler_host_manager=cinder.scheduler.host_manager.HostManager// //
> >>> //enabled_backends=storage1,storage2// //[storage1]//
> >>> //volume_group=nova-volumes//
> >>> //volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver//
> >>> //volume_backend_name=LVM_ISCSI// //iscsi_helper=tgtadm// // //
> //[storage2]//
> >>> //volume_group=nova-volumes//
> >>> //volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver//
> >>> //volume_backend_name=LVM_ISCSI// //iscsi_helper=tgtadm/
> >>>
> >>>
> >>>
> >>> this section is on each host the same. If i try to attach an
> existing volume to
> >>> an instance i'll get the following error on cinder-volume:
> >>>
> >>> /2013-04-16 17:18:13AUDIT [cinder.service] Starting
> cinder-volume node
> >>> (version 2013.1)// //2013-04-16 17:18:13 INFO
> [cinder.volume.manager]
> >>> Updating volume status// //2013-04-16 17:18:13 INFO
> [cinder.volume.iscsi]
> >>> Creating iscsi_target for:
> volume-b83ff42b-9a58-4bf9-8d95-945829d3ee9d//
> >>> //2013-04-16 17:18:13 INFO
> [cinder.openstack.common.rpc.common] Connected to
> >>>  AMQP server on 10.0.0.104:5672// <http://10.0.0.104:5672//>
> //2013-04-16 17:18:13 INFO
> >>> [cinder.openstack.common.rpc.common] Connected to AMQP server on
> >>> 10.0.0.104:5672// <http://10.0.0.104:5672//> //2013-04-16
> 17:18:14 INFO [cinder.volume.manager] Updating
> >>> volume status// //2013-04-16 17:18:14 INFO
> >>> [cinder.openstack.common.rpc.common] Connected to AMQP server on
> >>> 10.0.0.104:5672// <http://10.0.0.104:5672//> //2013-04-16
> 17:18:14 INFO
> >>> [cinder.openstack.common.rpc.co

Re: [Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Heiko Krämer
Hi Steve,

yeah, it's Ubuntu 12.04 on the nodes and in the VM.

But a configuration parsing error should normally have nothing to do with the
distribution?! Maybe the oslo version or something like that.

But thanks for your hint.

Greetings
Heiko

On 17.04.2013 14:36, Steve Heistand wrote:
> what OS Are you running in the VM? I had similar issues with ubuntu 12.04
> but things worked great with centos 6.4
>
>
> On 04/17/2013 01:15 AM, Heiko Krämer wrote:
>> Hi Guys,
>>
>> I'm running in a strange config issue with cinder-volume service.
>> I try to use the multi backend feature in grizzly and the scheduler works 
>> fine 
>> but the volume service are not running correctly.
>> I can create/delete volumes but not attach.
>>
>> My cinder.conf (abstract):
>> /
>> // Backend Configuration//
>> //scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler//
>> //scheduler_host_manager=cinder.scheduler.host_manager.HostManager//
>> //
>> //enabled_backends=storage1,storage2//
>> //[storage1]//
>> //volume_group=nova-volumes//
>> //volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver//
>> //volume_backend_name=LVM_ISCSI//
>> //iscsi_helper=tgtadm//
>> //
>> //
>> //[storage2]//
>> //volume_group=nova-volumes//
>> //volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver//
>> //volume_backend_name=LVM_ISCSI//
>> //iscsi_helper=tgtadm/
>>
>>
>>
>> this section is on each host the same. If i try to attach an existing volume 
>> to 
>> an instance i'll get the following error on cinder-volume:
>>
>> /2013-04-16 17:18:13AUDIT [cinder.service] Starting cinder-volume node 
>> (version 2013.1)//
>> //2013-04-16 17:18:13 INFO [cinder.volume.manager] Updating volume 
>> status//
>> //2013-04-16 17:18:13 INFO [cinder.volume.iscsi] Creating iscsi_target 
>> for: 
>> volume-b83ff42b-9a58-4bf9-8d95-945829d3ee9d//
>> //2013-04-16 17:18:13 INFO [cinder.openstack.common.rpc.common] 
>> Connected to 
>> AMQP server on 10.0.0.104:5672//
>> //2013-04-16 17:18:13 INFO [cinder.openstack.common.rpc.common] 
>> Connected to 
>> AMQP server on 10.0.0.104:5672//
>> //2013-04-16 17:18:14 INFO [cinder.volume.manager] Updating volume 
>> status//
>> //2013-04-16 17:18:14 INFO [cinder.openstack.common.rpc.common] 
>> Connected to 
>> AMQP server on 10.0.0.104:5672//
>> //2013-04-16 17:18:14 INFO [cinder.openstack.common.rpc.common] 
>> Connected to 
>> AMQP server on 10.0.0.104:5672//
>> //2013-04-16 17:18:26ERROR [cinder.openstack.common.rpc.amqp] Exception 
>> during message handling//
>> //Traceback (most recent call last)://
>> //  File 
>> "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py", 
>> line 430, in _process_data//
>> //rval = self.proxy.dispatch(ctxt, version, method, **args)//
>> //  File 
>> "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py",
>>  
>> line 133, in dispatch//
>> //return getattr(proxyobj, method)(ctxt, **kwargs)//
>> //  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 
>> 665, 
>> in initialize_connection//
>> //return self.driver.initialize_connection(volume_ref, connector)//
>> //  File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 
>> 336, 
>> in initialize_connection//
>> //if self.configuration.iscsi_helper == 'lioadm'://
>> //  File "/usr/lib/python2.7/dist-packages/cinder/volume/configuration.py", 
>> line 
>> 83, in __getattr__//
>> //return getattr(self.local_conf, value)//
>> //  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1708, 
>> in 
>> __getattr__//
>> //return self._conf._get(name, self._group)//
>> //  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1513, 
>> in _get//
>> //value = self._substitute(self._do_get(name, group))//
>> //  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1529, 
>> in 
>> _do_get//
>> //info = self._get_opt_info(name, group)//
>> //  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1629, 
>> in 
>> _get_opt_info//
>> //raise NoSuchOptError(opt_name, group)//
>> //NoSuchOptError: no such option in group storage1: iscsi_helper/
>>
>>
>> It's very strange the 
>> '/volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver//'/ option should 
>> set 
>> the iscsi_helper=tgtadm per default.
>>
>>
>> Anyone have an idea or the same issue, otherwise i'll create a bug report.
>>
>> Greetings from Berlin
>> Heiko
>>


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] which network controller is the best for quantum grizzly?

2013-04-17 Thread Heiko Krämer
Hi Wenmao,

I think you should plan your network topology first; after that you can
decide which controller is the best choice for you.

Greetings
Heiko

On 17.04.2013 14:01, Liu Wenmao wrote:
> I have tried floodlight, but it does not support namespace, so I
> wonder is there a better network controller to support quantum?(nox,
> ryu ..)
>
> Wenmao Liu
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Heiko Krämer
Hi Guys,

I'm running into a strange config issue with the cinder-volume service.
I'm trying to use the multi-backend feature in Grizzly; the scheduler works
fine, but the volume service does not behave correctly.
I can create/delete volumes, but not attach them.

My cinder.conf (abstract):
# Backend configuration
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
scheduler_host_manager=cinder.scheduler.host_manager.HostManager

enabled_backends=storage1,storage2

[storage1]
volume_group=nova-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_ISCSI
iscsi_helper=tgtadm

[storage2]
volume_group=nova-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_ISCSI
iscsi_helper=tgtadm



This section is the same on each host. If I try to attach an existing volume
to an instance, I get the following error from cinder-volume:

2013-04-16 17:18:13    AUDIT [cinder.service] Starting cinder-volume node (version 2013.1)
2013-04-16 17:18:13     INFO [cinder.volume.manager] Updating volume status
2013-04-16 17:18:13     INFO [cinder.volume.iscsi] Creating iscsi_target for: volume-b83ff42b-9a58-4bf9-8d95-945829d3ee9d
2013-04-16 17:18:13     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
2013-04-16 17:18:13     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
2013-04-16 17:18:14     INFO [cinder.volume.manager] Updating volume status
2013-04-16 17:18:14     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
2013-04-16 17:18:14     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
2013-04-16 17:18:26    ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py", line 430, in _process_data
    rval = self.proxy.dispatch(ctxt, version, method, **args)
  File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py", line 133, in dispatch
    return getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 665, in initialize_connection
    return self.driver.initialize_connection(volume_ref, connector)
  File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 336, in initialize_connection
    if self.configuration.iscsi_helper == 'lioadm':
  File "/usr/lib/python2.7/dist-packages/cinder/volume/configuration.py", line 83, in __getattr__
    return getattr(self.local_conf, value)
  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1708, in __getattr__
    return self._conf._get(name, self._group)
  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1513, in _get
    value = self._substitute(self._do_get(name, group))
  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1529, in _do_get
    info = self._get_opt_info(name, group)
  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1629, in _get_opt_info
    raise NoSuchOptError(opt_name, group)
NoSuchOptError: no such option in group storage1: iscsi_helper


It's very strange; the volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
option should set iscsi_helper=tgtadm by default.


Does anyone have an idea or the same issue? Otherwise I'll create a bug report.
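
Before filing the bug I'll probably double-check where the iSCSI options
actually get registered (just a debugging sketch on my side; the paths are
from the Ubuntu packages):

  grep -n "iscsi_helper" /usr/lib/python2.7/dist-packages/cinder/volume/driver.py
  grep -n "append_config_values" /usr/lib/python2.7/dist-packages/cinder/volume/driver.py

If the iSCSI options are only registered globally and not in the per-backend
group, that would explain the NoSuchOptError above.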

Greetings from Berlin
Heiko
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Glance] 404 after upgrading to grizzly

2013-04-16 Thread Heiko Krämer
All right Guys,


I take that back: apparently glance wasn't the "bad guy" in this case. I've
checked swift and all the stored files, and I found that the image files are
not available. I think something went wrong with the swift upgrade, but all
other stored files of customers are present. It's totally crazy :)
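
For anyone wondering how I checked: roughly something like the following
(the "glance" container name and the service credentials are from my setup
and may differ in yours):

  swift -V 2.0 -A https://<keystone>:5000/v2.0 -U service:glance -K <password> \
    list glance | grep a9d4488d-305d-44ee-aded-923a9f3e7aa2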

I only need to upload the images again and take snapshots.

But it's very strange that I've lost these images while glance says they
are all present.


However Greetings
Heiko



On 16.04.2013 10:39, Heiko Krämer wrote:
> Heyho Guys,
>
> i've a strange issue with glance after I've upgraded from Folsom to
> Grizzly. All Images are stored in swift!
>
> I see all Images and the image details too but I can't download or
> modify this images. Nova-compute can't download it too.
>
> root@api2:~# glance image-list
> +--------------------------------------+----------------+-------------+------------------+-----------+--------+
> | ID                                   | Name           | Disk Format | Container Format | Size      | Status |
> +--------------------------------------+----------------+-------------+------------------+-----------+--------+
> | a9d4488d-305d-44ee-aded-923a9f3e7aa2 | Cirros-Test    | qcow2       | bare             | 9761280   | active |
> | b7dcf14e-4a1d-4370-86d8-7e4d2f5792f8 | default(12.04) | qcow2       | bare             | 251527168 | active |
> +--------------------------------------+----------------+-------------+------------------+-----------+--------+
>
>
> root@api2:~# glance image-download a9d4488d-305d-44ee-aded-923a9f3e7aa2 > test.img
> Request returned failure status.
> 404 Not Found
> Swift could not find image at URI.
> (HTTP 404)
>
>
> So i've checked, the db migrations have worked i think (example):
> ++--+---+-+-++-+
> | id | image_id |
> value 
>
> | created_at  | updated_at  | deleted_at | deleted |
> ++--+---+-+-++-+
> | 25 | a9d4488d-305d-44ee-aded-923a9f3e7aa2 |
> swift+https://service%3Aglance:@xx:35357/v2.0/glance/a9d4488d-305d-44ee-aded-923a9f3e7aa2
> | 2013-03-11 16:30:08 | 2013-03-11 16:30:09 | NULL   |   0 |
>
> I can't see any errors in Log's of the glance services (Debug mode on)
> or Keystone logs. In addition I don't see a request in my swift log.
>
> I've running all Services in Folsom without problems, so my Keystone
> endpoints should be ok:
>
> | de64976ee0974ddca7f2c6cfb3fe0fae |  nova  |
> https://swift.xxx.de/v1/AUTH_%(tenant_id)s  |
> https://10.0.0.103/v1/AUTH_%(tenant_id)s | 
> https://10.0.0.103/v1  | a7a2021c32354e6caff8bef14e1c5eb3 |
>
>
> I've upgraded last week my hole stack to grizzly and all have worked,
> yesterday i've upgraded glance and swift and now i can't start any
> instance :) because no images was found.
> I tried to upload a new image and download it after the process finished
> and it works normally.
>
>
>
> Do anyone have same trouble ? If you need more informations please ask :)
>
>
> Greetings and thanks
> Heiko
>
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Glance] 404 after upgrading to grizzly

2013-04-16 Thread Heiko Krämer
Heyho Guys,

I have a strange issue with glance after upgrading from Folsom to
Grizzly. All images are stored in swift!

I see all images and the image details too, but I can't download or
modify these images. Nova-compute can't download them either.

root@api2:~# glance image-list
+--------------------------------------+----------------+-------------+------------------+-----------+--------+
| ID                                   | Name           | Disk Format | Container Format | Size      | Status |
+--------------------------------------+----------------+-------------+------------------+-----------+--------+
| a9d4488d-305d-44ee-aded-923a9f3e7aa2 | Cirros-Test    | qcow2       | bare             | 9761280   | active |
| b7dcf14e-4a1d-4370-86d8-7e4d2f5792f8 | default(12.04) | qcow2       | bare             | 251527168 | active |
+--------------------------------------+----------------+-------------+------------------+-----------+--------+


root@api2:~# glance image-download a9d4488d-305d-44ee-aded-923a9f3e7aa2 > test.img
Request returned failure status.
404 Not Found
Swift could not find image at URI.
(HTTP 404)


So I've checked; the db migrations have worked, I think (example):
++--+---+-+-++-+
| id | image_id |
value   
 
| created_at  | updated_at  | deleted_at | deleted |
++--+---+-+-++-+
| 25 | a9d4488d-305d-44ee-aded-923a9f3e7aa2 |
swift+https://service%3Aglance:@xx:35357/v2.0/glance/a9d4488d-305d-44ee-aded-923a9f3e7aa2
| 2013-03-11 16:30:08 | 2013-03-11 16:30:09 | NULL   |   0 |

I can't see any errors in the logs of the glance services (debug mode on)
or in the Keystone logs. In addition, I don't see a request in my swift log.

I was running all services in Folsom without problems, so my Keystone
endpoints should be OK:

| de64976ee0974ddca7f2c6cfb3fe0fae |  nova  |
https://swift.xxx.de/v1/AUTH_%(tenant_id)s  |
https://10.0.0.103/v1/AUTH_%(tenant_id)s | 
https://10.0.0.103/v1  | a7a2021c32354e6caff8bef14e1c5eb3 |


Last week I upgraded my whole stack to Grizzly and everything worked;
yesterday I upgraded glance and swift, and now I can't start any
instance :) because no images are found.
I tried to upload a new image and download it after the process finished,
and that works normally.



Does anyone have the same trouble? If you need more information, please ask :)


Greetings and thanks
Heiko




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Create Route to physical net

2013-04-11 Thread Heiko Krämer
Hiho guys,

I'm running OpenStack Grizzly with Quantum and all the related services.
Everything is running fine, but I'm trying to get a connection from each
fixed network (namespaced) to a physical network.

Example:

Fixed-Network: 10.100.0.0/24
GRE tunneling net: 100.20.20.0/24

Physical network (3rd interface on the network node): 10.0.0.0/24

Now I want to create a route on router xy, with interface 10.100.0.1, towards
10.0.0.17 (the physical interface on the network host).
On the 10.0.0.0/24 network I'm running shared services like a MySQL
cluster, a search cluster and so on, and the goal is that each fixed
network can reach these shared services.
My problem is getting a connection between a router (namespace) and the
physical interface.
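
To make it concrete, this is roughly what I'm poking at on the network node
(just a sketch, router id shortened; I'm not sure this is the intended way):

  ip netns list
  ip netns exec qrouter-<router-uuid> ip route
  ip netns exec qrouter-<router-uuid> ping 10.0.0.17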


If you need more details please let me know :)


Greetings
Heiko

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Can't log in grizzly's dashboard

2013-04-11 Thread Heiko Krämer
Hi Mohammed,


Have you synced the db with keystone-manage db_sync?

Do you see any errors in your keystone.log?
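
For example, something like this (assuming the Ubuntu packages, so paths
and service names may differ):

  keystone-manage db_sync
  service keystone restart
  tail -n 50 /var/log/keystone/keystone.log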

Greetings
Heiko
On 11.04.2013 14:17, Mohammed Amine SAYA wrote:
> Hi all,
>
> I upgraded my install from folsom to grizzly but I can't log in
> dashboard. I keep getting this error : 
>
> HTTPConnectionPool(host='192.168.0.1', port=8776): Max retries
> exceeded with url:
> /v1/5690876e82414117b80e64167a3ee3f8/os-quota-sets/5690876e82414117b80e64167a3ee3f8
>
> I haven't changed the database content. I installed grizzly packages only.
> Keystone, nova and apache are running fine. I can list endpoints,
> users and tenants.
>
> nova-manage service list gives this:
> +------------------+------------+----------+---------+-------+----------------------------+
> | Binary           | Host       | Zone     | Status  | State | Updated_at                 |
> +------------------+------------+----------+---------+-------+----------------------------+
> | nova-cert        | openstack0 | internal | enabled | up    | 2013-04-11T12:17:07.00     |
> | nova-compute     | openstack1 | nova     | enabled | down  | 2013-04-03T12:45:08.00     |
> | nova-console     | openstack0 | internal | enabled | up    | 2013-04-11T12:17:11.00     |
> | nova-consoleauth | openstack0 | internal | enabled | down  | 2013-04-03T12:45:14.00     |
> | nova-scheduler   | openstack0 | internal | enabled | up    | 2013-04-11T12:17:11.00     |
> +------------------+------------+----------+---------+-------+----------------------------+
>
> Do you know how to fix this please?
>
> Thanks for your help.
> Amine.
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] grizzly on ubuntu precise: auth error using glance index

2013-03-08 Thread Heiko Krämer
Hi Olivier,

I think this will solve your problem:

In /etc/keystone/keystone.conf

[signing]
token_format = UUID
#certfile = /etc/keystone/ssl/certs/signing_cert.pem
#keyfile = /etc/keystone/ssl/private/signing_key.pem
#ca_certs = /etc/keystone/ssl/certs/ca.pem
#key_size = 1024
#valid_days = 3650
#ca_password = None


and restart Keystone.
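
As a quick check afterwards (assuming the keystone CLI is installed): with
token_format = UUID a fresh token should be a short UUID instead of the long
PKI blob, and glance should authenticate again.

  service keystone restart
  keystone token-get
  glance index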


Greetings
Heiko

Am 08.03.2013 14:50, schrieb Olivier Archer:
> Hi,
>   From the documentation here :
> http://docs.openstack.org/trunk/openstack-compute/install/apt/content/ap_installinggrizzlyubuntuprecise.html
>
> I've got problems with 'glance index' :
> # glance index
> Authorization Failed: Unable to communicate with identity service:
> {"error": {"message": "An unexpected error prevented the server from
> fulfilling your request. Command 'openssl' returned non-zero exit
> status 3", "code": 500, "title": "Internal Server Error"}}. (HTTP 500)
>
> /var/log/keystone/keystone.log give:
> ERROR [keystone.common.cms] Signing error: Error opening signer
> certificate /etc/keystone/ssl/certs/signing_cert.pem
>
> So I've run
> # sudo keystone-manage pki_setup
>
> to create certs file.
>
> But now, 'glance index' give me:
>
> Request returned failure status.
> Invalid OpenStack Identity credentials.
>
> and keystone.log give:
> WARNING [keystone.common.wsgi] Authorization failed. The request you
> have made requires authentication.
>
> my configuration is like the one in the doc:
>
> creds:
> export SERVICE_TOKEN=admin
> export OS_TENANT_NAME=admin
> export OS_USERNAME=admin
> export OS_PASSWORD=openstack
> export OS_AUTH_URL=http://100.10.10.115:5000/v2.0/
> export SERVICE_ENDPOINT=http://100.10.10.115:35357/v2.0/
>
> i've reinstalled everything from the begining from a fresh installed
> server, and i'm still stuck in this error...
>
>
>
>


-- 
B. Sc. Informatik
Heiko Krämer
CIO/Administrator

Twitter: @railshoster
Avarteq GmbH
Zweigstelle:
Prinzessinnenstr. 20, 10969 Berlin


Geschäftsführer: Alexander Faißt, Dipl.-Inf.(FH) Julian Fischer
Handelsregister: AG Saarbrücken HRB 17413, Ust-IdNr.: DE262633168
Sitz:
Science Park 2
66123 Saarbrücken

Tel: +49 (0)681 / 309 64 190
Fax: +49 (0)681 / 309 64 191

Visit:
http://www.enterprise-rails.de/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] File Upload Problems after Upgrade

2013-03-08 Thread Heiko Krämer
Hi Guys,

I've upgraded my swift setup (2 storage nodes and 2 proxy nodes) from
1.4.6 to 1.7.6. The upgrade went through without errors and I followed
these guides:
https://lists.launchpad.net/openstack/msg16188.html
https://wiki.openstack.org/wiki/ReleaseNotes/Folsom#OpenStack_Object_Storage_.28Swift.29

But since then I can't upload any bigger files; anything over about 10MB
fails :(

Every time I get:

413 Request Entity Too Large
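
To narrow it down I'll probably try a direct PUT against the proxy next, to
rule out the client (container and file names here are just examples):

  curl -i -X PUT -H "X-Auth-Token: $TOKEN" -T bigfile.bin \
    http://<proxy>:8080/v1/AUTH_<tenant_id>/testcontainer/bigfile.bin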
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Quantum] Scale out

2013-03-01 Thread Heiko Krämer
Heyho Guys,

I'm trying to set up OpenStack with Quantum. That's not a big deal and
all components are running, but the Quantum L3 agent and DHCP agent run on
one node (the network node). So this node is the gateway for external and
internal traffic. This topology will become a bottleneck in the near future.
My goal is now to scale Quantum out across other nodes, like nova-network can.


My first idea was to create a second router and configure it on
a second node (l3 agent router_id) with a second external network (l3
agent network_id). I can then use a second network node with a second
router and a second external network to balance the traffic between these
two nodes.

So before I had: 1 Gbit uplink to WAN => 1 x network node with 1 ext NIC
Now: 2 x 1 Gbit uplink to WAN => 2 x network nodes with 1 ext NIC each
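
Roughly what I have in mind for the second network node's l3_agent.ini (just
a sketch, option names as I understand them in Grizzly; I haven't verified
this end to end):

  [DEFAULT]
  # pin this agent to the second router and the second external network
  router_id = <uuid of the second router>
  gateway_external_network_id = <uuid of the second external network>
  handle_internal_only_routers = False
  external_network_bridge = br-ex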


The main goal is to use the external NICs of each compute node, or of
many network nodes, but the maintainability (which VM or tenant uses
which network node) is not really good.


I would prefer that Quantum could scale out of the box and manage the
port mapping across different nodes, like a port scheduler/mapper :)


Does anyone have experience with that? Ideas or network topologies?

 

Greetings
Heiko

-- 
B. Sc. Informatik
Heiko Krämer
CIO/Administrator

Twitter: @railshoster
Avarteq GmbH
Zweigstelle:
Prinzessinnenstr. 20, 10969 Berlin


Geschäftsführer: Alexander Faißt, Dipl.-Inf.(FH) Julian Fischer
Handelsregister: AG Saarbrücken HRB 17413, Ust-IdNr.: DE262633168
Sitz:
Science Park 2
66123 Saarbrücken

Tel: +49 (0)681 / 309 64 190
Fax: +49 (0)681 / 309 64 191

Visit:
http://www.enterprise-rails.de/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift][Keystone] Authentication problems with Swift and Keystone by Grizzly release

2013-03-01 Thread Heiko Krämer
Hi Adam,

Thanks for your reply. The problem was the new PKI authentication.

I've changed from PKI to

[signing]
token_format = UUID


and it works now :)
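
For anyone hitting the same thing: after switching to UUID tokens and
restarting keystone, the same check from my original mail succeeds, e.g.:

  swift -V 2.0 -A http://localhost:5000/v2.0 -U testing:user -K <password> stat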


Thx and Greetings
Heiko
On 17.02.2013 03:23, Adam Young wrote:
> On 02/14/2013 09:38 AM, Heiko Krämer wrote:
>> Heyho Guys,
>>
>> i'm testing Swift and Keystone (Grizzly).
>>
>> !NOTE!
>> I'm posting only the important stuff (output, responses, configs)
>>
>> I've upgraded and migrated the database, but the migration does not work
>> correctly (keystone-manage db_sync) because a new column is created in
>> the role table with NULL values, and this breaks the auth (first issue).
>>
>> The next keystone command you will need is
>> keystone-manage pki_setup => it finished without errors, but you will need
>> to change the permissions of the generated files.
>>
>>
>>
>> #
>> ## Output / Log ###
>>
>> My request to keystone is correct if I try to get a token with curl. I
>> get a token with all endpoints and other stuff.
>>
>> "token": {
>> "expires": "2013-02-15T14:29:59Z",
>> "id":
>> "MIIL-wYJKoZIhvcNAQcCoIIL8DCCC+wCAQExCTAHBgUrDgMCGjCCCtgGCSqGSIb3DQEHAaCCCskEggrFeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0wMi0xNFQxNDoyOTo1OS44NDI0MjQiLCAiZXhwaXJlcyI6ICIyMDEzLTAyLTE1VDE0OjI5OjU5WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiNTY5NzdiYjVhMDU1NDc2MWJmMGViOWQ2Y2E3NzBkNzUiLCAibmFtZSI6ICJ0ZXN0aW5nIn19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMC4wLjE6ODc3NC92Mi81Njk3N2JiNWEwNTU0NzYxYmYwZWI5ZDZjYTc3MGQ3NSIsICJyZWdpb24iOiAidGVzdGluZyIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMC4wLjE6ODc3NC92Mi81Njk3N2JiNWEwNTU0NzYxYmYwZWI5ZDZjYTc3MGQ3NSIsICJpZCI6ICJiOGQ3YTQzMWZjY2M0MWY2YTYzMzFjZTY3NjBlYjI1ZSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzg4LjE5OC42LjE1Mjo4Nzc0L3YyLzU2OTc3YmI1YTA1NTQ3NjFiZjBlYjlkNmNhNzcwZDc1In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjAuMC4xOjk2OTYiLCAicmVnaW9uIjogInRlc3RpbmciLCAiaW50ZXJuYWxVUkwi!
>>  OiAiaHR0cD
>> ovLzEwLjAuMC4xOjk2OTYiLCAiaWQiOiAiM2ZjNTcxNzUyMDA3NDY3OWI3MTlkM2VmNTlmZGViYzMiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMC4wLjAuMTo5Njk2In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogIm5ldHdvcmsiLCAibmFtZSI6ICJxdWFudHVtIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjAuMC4xOjkyOTIvdjIiLCAicmVnaW9uIjogInRlc3RpbmciLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjAuMC4xOjkyOTIvdjIiLCAiaWQiOiAiMWZlZTllNDQ1NjNjNDcwYzhkNjFmNjE5NDNjYmIxM2YiLCAicHVibGljVVJMIjogImh0dHA6Ly84OC4xOTguNi4xNTI6OTI5Mi92MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4wLjAuMTo4Nzc2L3YxLzU2OTc3YmI1YTA1NTQ3NjFiZjBlYjlkNmNhNzcwZDc1IiwgInJlZ2lvbiI6ICJ0ZXN0aW5nIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMC4wLjAuMTo4Nzc2L3YxLzU2OTc3YmI1YTA1NTQ3NjFiZjBlYjlkNmNhNzcwZDc1IiwgImlkIjogIjFmMjVlMDUwMjdmMTRmNGI5MDFmMWFmNjJiZTZhMzAwIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vODguMTk4LjYuMTUyOjg3NzYvdjEvNTY5NzdiYjVhMDU1NDc2MWJmMGViOWQ2Y2E3NzBkNzUifV0sICJlbmRwb2ludHN!
>>  fbGlua3MiO
>> iBbXSwgInR5cGUiOiAidm9sdW1lIiwgIm5hbWUiOiAiY2luZGVyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEwLjAuMC4xOjg3NzMvc2VydmljZXMvQWRtaW4iLCAicmVnaW9uIjogInRlc3RpbmciLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzEwLjAuMC4xOjg3NzMvc2VydmljZXMvQ2xvdWQiLCAiaWQiOiAiMWIyZTViZjkzNTI2NGI2ODljZmZkZWViMTk1ZDRjMWQiLCAicHVibGljVVJMIjogImh0dHA6Ly84OC4xOTguNi4xNTI6ODc3My9zZXJ2aWNlcy9DbG91ZCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJlYzIiLCAibmFtZSI6ICJlYzIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTAuMC4wLjE6ODA4MC92MSIsICJyZWdpb24iOiAidGVzdGluZyIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTAuMC4wLjE6ODA4MC92MS9BVVRIXzU2OTc3YmI1YTA1NTQ3NjFiZjBlYjlkNmNhNzcwZDc1IiwgImlkIjogIjI3YTEyYTBkMGI2ODQ2YjJhMDQzNjMwZmJlYzUwNmJhIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vODguMTk4LjYuMTUyOjgwODAvdjEvQVVUSF81Njk3N2JiNWEwNTU0NzYxYmYwZWI5ZDZjYTc3MGQ3NSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJvYmplY3Qtc3RvcmUiLCAibmFtZSI6ICJzd2lmdCJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4wLjAuMTozNTM1Ny92Mi4wIi!
>>  wgInJlZ2lv
>> biI6ICJ0ZXN0aW5nIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMC4wLjAuMTo1MDAwL3YyLjAiLCAiaWQiOiAiMDI2NWNmOTUyZDRmNGZhYWEyZjdlZGIzNGZlMGQxYTUiLCAicHVibGljVVJMIjogImh0dHA6Ly84OC4xOTguNi4xNTI6NTAwMC92Mi4wIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImlkZW50aXR5IiwgIm5hbWUiOiAia2V5c3RvbmUifV0sICJ1c2VyIjogeyJ1c2VybmFtZSI6ICJkbGVpZGlzY2giLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjRjZDRhNzRlMTVlMTQ4MmY5ZmExNmY1MjRhZmQ4ZWJlIiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9LCB7Im5hbWUiOiAiS2V5c3RvbmVTZXJ2aWNlQWRt

[Openstack] [Swift][Keystone] Authentication problems with Swift and Keystone by Grizzly release

2013-02-14 Thread Heiko Krämer
2424",
"tenant": {
"enabled": true,
"id": "56977bb5a0554761bf0eb9d6ca770d75",
"name": "testing"
}
},
"user": {
"id": "4cd4a74e15e1482f9fa16f524afd8ebe",
"name": "user",
"roles": [
{
"name": "admin"
},
{
"name": "KeystoneServiceAdmin"
},
{
"name": "KeystoneAdmin"
}
],
"roles_links": [],
"username": "user"
}
}
}


Next try with swift client:

swift -V 2.0 -A http://localhost:5000/v2.0 -U testing:user -K user_testing2013 stat
~> Account HEAD failed: http://xx.xx.xx.xx:8080/v1/AUTH_56977bb5a0554761bf0eb9d6ca770d75 401 Unauthorized



In Swift Log:

http://paste.ubuntu.com/1650988/




## Swift config ##
#
# The important parts of the config



[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit
authtoken keystoneauth container-quotas proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
recheck_account_existence = 60
recheck_container_existence = 60
set log_level = DEBUG
allow_account_management = true
account_autocreate = true

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = localhost
auth_port = 35357
auth_protocol = http
auth_uri = http://localhost:5000/
admin_tenant_name = service
admin_user = swift
admin_password = swift_testing2012
admin_token = xx
auth_token = xx
service_port = 5000
service_host = 127.0.0.1
delay_auth_decision = 1
signing_dir=/etc/swift


[filter:keystoneauth]
use = egg:swift#keystoneauth
# Operator roles is the role which user would be allowed to manage a
# tenant and be able to create container or give ACL to others.
operator_roles = admin, Member



I think the problem is the openssl validation or parsing, but I don't know.
You can see the exit status of openssl in the swift log, and I think that's
the problem. Is it a bug, or have I configured something wrong? Does anyone
run into a similar problem?


If anyone has questions or needs more detailed information, please let me know.

Greetings
Heiko

-- 
B. Sc. Informatik
Heiko Krämer
CIO/Administrator

Twitter: @railshoster
Avarteq GmbH
Zweigstelle:
Prinzessinnenstr. 20, 10969 Berlin


Geschäftsführer: Alexander Faißt, Dipl.-Inf.(FH) Julian Fischer
Handelsregister: AG Saarbrücken HRB 17413, Ust-IdNr.: DE262633168
Sitz:
Science Park 2
66123 Saarbrücken

Tel: +49 (0)681 / 309 64 190
Fax: +49 (0)681 / 309 64 191

Visit:
http://www.enterprise-rails.de/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] "multi-host" mode in quantum

2012-12-13 Thread Heiko Krämer
Hey Guys,

It's a good point. I hope this option will be included in Grizzly. Since
the switch to Quantum we are now hitting network I/O bottlenecks because
we cannot use all the NICs of our nodes.
So I'm looking forward to Grizzly.

Greetings
Heiko

Am 12.12.2012 17:11, schrieb Gary Kotton:
> On 12/12/2012 05:58 PM, Xin Zhao wrote:
>> Hello,
>>
>> If I understand it correctly, multi-host network mode is not
>> supported (yet) in quantum in Folsom.
>> I wonder what's the recommended way of running multiple network nodes
>> (for load balancing and
>> bandwidth concerns) in quantum?  Any documentation links will be
>> appreciated.
>
> At the moment this is in discussion upstream. It is currently not
> supported but we are hoping to have support for this in grizzly.
>>
>> Thanks,
>> Xin
>>
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp