[openstack-dev] [neutron][external networks] neutron net-external-list returns empty list after restart of neutron-server

2014-01-04 Thread rezroo

Hi all,
I'm testing the Havana devstack and I noticed that after killing and 
restarting the neutron server, public networks are no longer returned 
when queried via Horizon or the command line, whereas in a Grizzly 
devstack the query returns the external network even after a 
quantum-server restart:


Basically, before killing neutron-server, executing the below command as 
demo/demo/nova we have:


   stack@host1:~$ neutron net-external-list
   +--------------------------------------+--------+------------------------------------------------------+
   | id                                   | name   | subnets                                              |
   +--------------------------------------+--------+------------------------------------------------------+
   | 16c986b3-fa3d-4666-a6bd-a0dd9bfb5f19 | public | f0895c49-32ce-4ba2-9062-421c254892ec 172.24.4.224/28 |
   +--------------------------------------+--------+------------------------------------------------------+
   stack@host1:~$

After killing and restarting neutron-server we have:

   stack@host1:~$ neutron net-external-list

   stack@host1:~$


I can get around this problem by making the public network/subnet 
shared, and then everything starts working, but after that I'm not able 
to revert it back to private again. Checking against the Grizzly 
version, the external public network is listed for all tenants even 
when it is not shared, so making it shared is not a solution, only a 
way to confirm where the problem lies.


First, I think this is a neutron bug, and want to report it if not 
reported already. I didn't find a bug report, but if you know of it 
please let me know.


Second, I am looking for documentation that explains the security 
policy and permissions for external networks. Judging by both the 
legacy and the current behaviour, it seems that all tenants should be 
able to list all external networks even if they aren't shared, but I'm 
looking for documentation that explains the thinking and reasons behind 
this behaviour. Also confusing: if by default all tenants can see 
external networks, then what is the purpose of the shared flag, and why 
can making a network/subnet shared not be undone?
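
For anyone following along, the visibility of networks appears to be 
governed by neutron's policy.json; in the Havana-era defaults the 
relevant entries look roughly like the lines below (quoted from memory, 
so please check your own /etc/neutron/policy.json rather than taking 
them as authoritative):

   "external": "field:networks:router:external=True",
   "get_network": "rule:admin_or_owner or rule:shared or rule:external",

If those rules are in effect, any tenant should be able to list a 
router:external network even when it is not shared, which matches the 
Grizzly behaviour described above.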


Thanks in advance.






Re: [openstack-dev] [neutron][external networks] neutron net-external-list returns empty list after restart of neutron-server

2014-01-06 Thread rezroo
Eugene,
Bug 1254555 seems to be the opposite of what I'm observing in Havana
devstack. The bug states:

 I see that the ext-net network is not available after I do all of the
above router/subnet creation. It does become available to tenants as
soon as I restart neutron-server.

But in the case below the external net is available until I kill and
restart neutron-server. After that it remains unavailable no matter which
neutron daemon is killed and restarted - you cannot get anything from
neutron net-external-list unless you make the external network shared.

So how are the two bugs related?
Thanks,
Reza

On 01/05/2014 02:16 AM, Eugene Nikanorov wrote:
 Hi rezroo,

 This is a known bug for Havana, which has been fixed (but was not
 backported), please see:
 https://bugs.launchpad.net/neutron/+bug/1254555

 Thanks,
 Eugene.


 On Sun, Jan 5, 2014 at 1:25 AM, rezroo <r...@dslextreme.com> wrote:

 Hi all,
 I'm testing the Havana devstack and I noticed that after killing
 and restarting the neutron server public networks are not returned
 when queried via horizon or command line, which in Grizzly
 devstack the query returns the external network even after a
 quantum-server restart:

 Basically, before killing neutron-server, executing the below
 command as demo/demo/nova we have:

  stack@host1:~$ neutron net-external-list
  +--------------------------------------+--------+------------------------------------------------------+
  | id                                   | name   | subnets                                              |
  +--------------------------------------+--------+------------------------------------------------------+
  | 16c986b3-fa3d-4666-a6bd-a0dd9bfb5f19 | public | f0895c49-32ce-4ba2-9062-421c254892ec 172.24.4.224/28 |
  +--------------------------------------+--------+------------------------------------------------------+
  stack@host1:~$

 After killing and restarting neutron-server we have:

  stack@host1:~$ neutron net-external-list

  stack@host1:~$


 I can get around this problem by making the public
 network/subnet shared then everything starts working, but after
 that I'm not able to revert it back to private again. In checking
 with grizzly version the external public network is listed for
 all tenants even when it is not shared, so making it shared is not
 a solution, only verification of what the problem is.

 First, I think this is a neutron bug, and want to report it if not
 reported already. I didn't find a bug report, but if you know of
 it please let me know.

 Second, I am looking for documentation that explains the security
 policy and permissions for external networks. Although by checking
 legacy and current behaviour it seems that all tenants should be
 able to list all external networks even if they aren't shared, I'm
 looking for documentation that explains the thinking and reasons
 behind this behaviour. Also confusing is if by default all tenants
 can see external networks then what is the purpose of the shared
 flag, and why once a network/subnet is shared it cannot be undone.

 Thanks in advance.







[openstack-dev] [neutron][networking-ovn] OVN vs. OpenDaylight

2016-06-09 Thread rezroo
I'm trying to reconcile the differences and similarities between OVN and 
OpenDaylight in my head. Can someone help me compare these two 
technologies and explain whether they solve the same problem, or whether 
there are fundamental differences between them?


Thanks,

Reza




Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread rezroo
Interesting conversation, and I think I have more of a question than a 
comment. With my understanding of OpenStack architecture, I don't 
understand the point about making "Magnum dependent on Barbican". 
Wouldn't this issue be completely resolved by using a driver model, such 
as delegating the task to a separate class specified in magnum.conf, 
with a reference implementation using the Barbican API (like the vif 
driver in nova, or nova's chance vs. filter scheduler)? If people want 
choice, we know how to give them choice - decouple, and provide a base 
implementation. The rest is up to them. That's the framework's 
architecture. What am I missing?
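
To illustrate what I mean by a driver model, I'm picturing something 
along these lines in magnum.conf; the section and option names below 
are made up for the sake of the example, not necessarily what Magnum 
exposes today:

   [certificates]
   # reference implementation backed by the Barbican API
   cert_manager_type = barbican
   # hypothetical alternative backends a deployer could choose instead
   # cert_manager_type = local
   # cert_manager_type = keystone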

Thanks,
Reza

On 4/12/2016 9:16 PM, Fox, Kevin M wrote:
Ops are asking for you to make it easy for them to make their security 
weak. And as a user of other folks' clouds, I'd have no way to know the 
cloud is in that mode. That seems really bad for app developers/users.


Barbican, like some of the other services, won't become common if folks 
keep trying to reimplement it so they don't have to depend on it. Folks 
said the same things about Keystone. Ultimately it was worth making it 
a dependency.


Keystone doesn't support encryption, so you are asking for new 
functionality duplicating Barbican either way.


And we do understand the point of what you are trying to do. We just 
don't see eye to eye on it being a good thing to do. If you are 
invested enough in setting up an HA setup where you would need a 
clustered solution, Barbican is not that much of an extra lift compared 
to the other services you've already had to deploy. I've deployed both 
HA setups and Barbican before. HA is way worse.


Thanks,
Kevin


From: Adrian Otto
Sent: Tuesday, April 12, 2016 8:06:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates


Please don't miss the point here. We are seeking a solution that 
allows a location to place a client-side encrypted blob of data (a TLS 
cert) that multiple magnum-conductor processes on different hosts can 
reach over the network.


We *already* support using Barbican for this purpose, as well as 
storage in flat files (not as secure as Barbican, and only works with 
a single conductor) and are seeking a second alternative for clouds 
that have not yet adopted Barbican, and want to use multiple 
conductors. Once Barbican is common in OpenStack clouds, both 
alternatives are redundant and can be deprecated. If Keystone depends 
on Barbican, then we have no reason to keep using it. That will mean 
that Barbican is core to OpenStack.


Our alternative to using Keystone is storing the encrypted blobs in 
the Magnum database which would cause us to add an API feature in 
magnum that is the exact functional equivalent of the credential store 
in Keystone. That is something we are trying to avoid by leveraging 
existing OpenStack APIs.
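
For concreteness, the Keystone v3 credentials API takes an opaque blob, 
so storing a client-side encrypted cert would presumably look something 
like the request below (host and IDs are placeholders, and the exact 
payload Magnum would use is not settled):

   curl -s -X POST http://<keystone-host>:5000/v3/credentials \
     -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
     -d '{"credential": {"user_id": "<user-uuid>", "project_id": "<project-uuid>",
                         "type": "cert", "blob": "<client-side encrypted TLS cert>"}}'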


--
Adrian

On Apr 12, 2016, at 3:44 PM, Dolph Mathews wrote:




On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad wrote:


Keystone's credential API pre-dates Barbican. We started talking
about having the credential API backed by Barbican after Barbican
became a thing. I'm not sure if any work has been done to move the
credential API in this direction. From a security perspective, I
think it would make sense for Keystone to be backed by Barbican.


+1

And regarding the "inappropriate use of keystone," I'd agree... 
without this spec, keystone is entirely useless as any sort of 
alternative to Barbican:


https://review.openstack.org/#/c/284950/

I suspect Barbican will forever be a much more mature choice for Magnum.


On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu wrote:

Hi all,

In short, some Magnum team members proposed to store TLS
certificates in the Keystone credential store. As Magnum PTL, I
want to get agreement (or non-disagreement) from the OpenStack
community in general, and the Keystone community in particular,
before approving the direction.

In detail, Magnum leverages TLS to secure the API endpoint
of kubernetes/docker swarm. The usage of TLS requires a
secure store for the TLS certificates. Currently, we
leverage Barbican for this purpose, but we constantly
receive requests to decouple Magnum from Barbican (because
users normally don't have Barbican installed in their
clouds). Some Magnum team members proposed to leverage the
Keystone credential store as a Barbican alternative [1].
Therefore, I want to confirm what the Keystone team's position
is on this proposal (I remember someone from Keystone
mentioned this is 

[openstack-dev] [keystone] Using multiple token formats in a one openstack cloud

2016-03-08 Thread rezroo
Keystone supports both tokens and ec2 credentials simultaneously, but as 
far as I can tell, will only do a single token format (uuid, pki/z, 
fernet) at a time. Is it possible or advisable to configure keystone to 
issue multiple token formats? For example, I could configure two 
keystone servers, each using a different token format, so depending on 
endpoint used, I could get a uuid or pki token. Each service can use 
either token format, so is there a conceptual or implementation issue 
with this setup?
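
For reference, the token format seems to be a single setting in 
keystone.conf, which is why I'm imagining two keystone servers, each 
configured differently - roughly like this (the provider aliases may 
differ slightly by release, so treat this as a sketch):

   # keystone server A - /etc/keystone/keystone.conf
   [token]
   provider = uuid

   # keystone server B - /etc/keystone/keystone.conf
   [token]
   provider = pkiz    # or fernet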

Thanks,
Reza



Re: [openstack-dev] [keystone] Using multiple token formats in a one openstack cloud

2016-03-08 Thread rezroo
The basic idea is to let the openstack clients decide what sort of token 
optimization to use - for example, while a normal client uses uuid 
tokens, some services like heat or magnum may opt for pki tokens for 
their operations. A service like nova, configured for PKI, will validate 
that token without going to any keystone server, but if it gets a uuid 
token it validates it against a keystone endpoint. I'm under the 
impression that the different token formats have different use-cases, so 
I am wondering if there is a conceptual reason why multiple token 
formats are an either/or scenario.
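
To make this concrete, the service-side picture I have in mind is the 
auth_token middleware configuration, roughly like the sketch below (the 
option names are from memory, so please double-check them against the 
keystonemiddleware docs before relying on them):

   [keystone_authtoken]
   # endpoints used for online validation of uuid tokens
   auth_uri = http://keystone-a:5000
   identity_uri = http://keystone-a:35357
   # local cache of signing certs, used for offline validation of pki/pkiz tokens
   signing_dir = /var/cache/nova/keystone-signing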


On 3/8/2016 8:06 AM, Matt Fischer wrote:
This would be complicated to set up. How would the OpenStack services 
validate the token? Which keystone node would they use? A better 
question is: why would you want to do this?


On Tue, Mar 8, 2016 at 8:45 AM, rezroo <openst...@roodsari.us> wrote:


Keystone supports both tokens and ec2 credentials simultaneously,
but as far as I can tell, will only do a single token format
(uuid, pki/z, fernet) at a time. Is it possible or advisable to
configure keystone to issue multiple token formats? For example, I
could configure two keystone servers, each using a different token
format, so depending on endpoint used, I could get a uuid or pki
token. Each service can use either token format, so is there a
conceptual or implementation issue with this setup?
Thanks,
Reza



[openstack-dev] [keystone][nova] "admin" role and "rule:admin_or_owner" confusion

2016-09-02 Thread rezroo
Hello - I'm using Liberty release devstack for the below scenario. I 
have created project "abcd" with "john" as Member. I've launched one 
instance, I can use curl to list the instance. No problem.


I then modify /etc/nova/policy.json and redefine "admin_or_owner" as 
follows:


"admin_or_owner":  "role:admin or is_admin:True or 
project_id:%(project_id)s",


My expectation was that I would be able to list the instance in abcd 
using a token of admin. However, when I use the token of user "admin" in 
project "admin" to list the instances I get the following error:


stack@vlab:~/token$ curl http://localhost:8774/v2.1/378a4b9e0b594c24a8a753cfa40ecc14/servers/detail \
  -H "User-Agent: python-novaclient" -H "Accept: application/json" \
  -H "X-OpenStack-Nova-API-Version: 2.6" \
  -H "X-Auth-Token: f221164cd9b44da6beec70d6e1f3382f"
{"badRequest": {"message": "Malformed request URL: URL's project_id
'378a4b9e0b594c24a8a753cfa40ecc14' doesn't match Context's project_id
'f73175d9cc8b4fb58ad22021f03bfef5'", "code": 400}}


378a4b9e0b594c24a8a753cfa40ecc14 is project id of abcd and 
f73175d9cc8b4fb58ad22021f03bfef5 is project id of admin.


I'm confused by this behavior and the reported error, because if the 
project id used to acquire the token is the same as the project id in 
/servers/detail then I would be an "owner". So where is the "admin" in 
"admin_or_owner"? Shouldn't the "role:admin" allow me to do whatever 
functionality "rule:admin_or_owner" allows in policy.json, regardless of 
the project id used to acquire the token?


I do understand that I can use the admin user and project to get all 
instances of all tenants:
curl "http://localhost:8774/v2.1/f73175d9cc8b4fb58ad22021f03bfef5/servers/detail?all_tenants=1" \
  -H "User-Agent: python-novaclient" -H "Accept: application/json" \
  -H "X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: $1"


My question is more centered around why nova has the additional check 
to make sure that the token project id matches the url project id - and 
whether this is a keystone requirement, or whether only nova/cinder and 
other programs that have a project-id in their API choose to do this. 
In other words, is it the developers of each project who decide to only 
expose some APIs for administrative functionality (such as all-tenants) 
but restrict everything else to owners, or does keystone require this 
check?


Thanks,

Reza



Re: [openstack-dev] [keystone][nova] "admin" role and "rule:admin_or_owner" confusion

2016-09-26 Thread rezroo
I am still confused about how the "cloud admin" role is fulfilled in 
the Liberty release. For example, I used "nova --debug delete" to see 
how the project:admin/user:admin deletes an instance of the demo 
project. Basically, we use the project:admin/user:admin token to get a 
list of instances for all tenants and then reference the instance of 
demo using the admin project tenant-id in the URL:


curl -g -i -X DELETE 
http://172.31.5.216:8774/v2.1/85b0992a5845455083db84d909c218ab/servers/6c876149-ecc4-4467-b727-9dff7b059390


So 85b0992a5845455083db84d909c218ab is admin tenant id, and 
6c876149-ecc4-4467-b727-9dff7b059390 is owned by demo project.


I am able to reproduce this using curl commands - but what's confusing 
me is that the token I get from keystone clearly shows is_admin is 0:


"user": {"username": "admin", "roles_links": [], "id": 
"9b29c721bc3844a784dcffbb8c8a47f8", "roles": [{"name": "admin"}], 
"name": "admin"}, "metadata": {"is_admin": 0, "roles": 
["6a6893ea36394a2ab0b93d225ab01e25"]}}}


And the rules for compute:delete seem to require is_admin to be true. 
nova/policy.json has two rules for "compute:delete":


Line 81:  "compute:delete": "rule:admin_or_owner",
Line 88:  "compute:delete": "",

First question - why is line 88 needed?

Second, on line 3 the admin_or_owner definition requires is_admin to be true:

"admin_or_owner": "is_admin:True or project_id:%(project_id)s",

which, if my understanding is correct, is never true unless the 
keystone admin_token is used, and is certainly not true for the token I 
got using curl. So why is my curl request using this token able to 
delete the instance?
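
One thing I noticed while digging (an observation about JSON parsing, 
not an authoritative answer): when the same key appears twice in a JSON 
object, Python's json module keeps the last occurrence, so line 88's 
empty rule - which matches everything in the policy language - would 
silently override line 81. That is easy to check:

   $ python -c "import json; print(json.loads('{\"a\": \"rule:admin_or_owner\", \"a\": \"\"}'))"
   {u'a': u''}

If that is what's happening, compute:delete effectively has no 
restriction at all, which would explain why the admin-project token can 
delete the demo instance.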


Thanks,

Reza


On 9/2/2016 12:51 PM, Morgan Fainberg wrote:


On Sep 2, 2016 09:39, "rezroo" <openst...@roodsari.us> wrote:

>
> Hello - I'm using Liberty release devstack for the below scenario. I 
have created project "abcd" with "john" as Member. I've launched one 
instance, I can use curl to list the instance. No problem.

>
> I then modify /etc/nova/policy.json and redefine "admin_or_owner" as 
follows:

>
> "admin_or_owner":  "role:admin or is_admin:True or 
project_id:%(project_id)s",

>
> My expectation was that I would be able to list the instance in abcd 
using a token of admin. However, when I use the token of user "admin" 
in project "admin" to list the instances I get the following error:

>
> stack@vlab:~/token$ curl 
http://localhost:8774/v2.1/378a4b9e0b594c24a8a753cfa40ecc14/servers/detail 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: 
f221164cd9b44da6beec70d6e1f3382f"
> {"badRequest": {"message": "Malformed request URL: URL's project_id 
'378a4b9e0b594c24a8a753cfa40ecc14' doesn't match Context's project_id 
'f73175d9cc8b4fb58ad22021f03bfef5'", "code": 400}}

>
> 378a4b9e0b594c24a8a753cfa40ecc14 is project id of abcd and 
f73175d9cc8b4fb58ad22021f03bfef5 is project id of admin.

>
> I'm confused by this behavior and the reported error, because if the 
project id used to acquire the token is the same as the project id in 
/servers/detail then I would be an "owner". So where is the "admin" in 
"admin_or_owner"? Shouldn't the "role:admin" allow me to do whatever 
functionality "rule:admin_or_owner" allows in policy.json, regardless 
of the project id used to acquire the token?

>
> I do understand that I can use the admin user and project to get all 
instances of all tenants:
> curl 
http://localhost:8774/v2.1/f73175d9cc8b4fb58ad22021f03bfef5/servers/detail?all_tenants=1 
-H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: $1"

>
> My question is more centered around why nova has the additional 
check to make sure that the token project id matches the url project 
id - and whether this is a keystone requirement, or only nova/cinder 
and programs that have a project-id in their API choose to do this. In 
other words, is it the developers of each project that decide to only 
expose some APIs for administrative functionality (such all-tenants), 
but restrict everything else to owners, or keystone requires this check?

>
> Thanks,
>
> Reza
>
>

[openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/ocata image build by devstack

2018-05-17 Thread rezroo
Hello - I'm trying to install a working local.conf devstack ocata on a 
new server, and some python packages have changed, so I end up with this 
error during the build of the octavia image:


   2018-05-18 01:00:26.276 |   Found existing installation: Jinja2 2.8
   2018-05-18 01:00:26.280 | Uninstalling Jinja2-2.8:
   2018-05-18 01:00:26.280 |   Successfully uninstalled Jinja2-2.8
   2018-05-18 01:00:26.839 |   Found existing installation: PyYAML 3.11
   2018-05-18 01:00:26.969 | Cannot uninstall 'PyYAML'. It is a
   distutils installed project and thus we cannot accurately determine
   which files belong to it which would lead to only a partial uninstall.

   2018-05-18 02:05:44.768 | Unmount
   /tmp/dib_build.2fbBBePD/mnt/var/cache/apt/archives
   2018-05-18 02:05:44.796 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/pip
   2018-05-18 02:05:44.820 | Unmount
   /tmp/dib_build.2fbBBePD/mnt/tmp/in_target.d
   2018-05-18 02:05:44.844 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/ccache
   2018-05-18 02:05:44.868 | Unmount /tmp/dib_build.2fbBBePD/mnt/sys
   2018-05-18 02:05:44.896 | Unmount /tmp/dib_build.2fbBBePD/mnt/proc
   2018-05-18 02:05:44.920 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev/pts
   2018-05-18 02:05:44.947 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev
   2018-05-18 02:05:50.668 |
   +/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1
   exit_trap
   2018-05-18 02:05:50.679 | +./devstack/stack.sh:exit_trap:494
   local r=1
   2018-05-18 02:05:50.690 |
   ++./devstack/stack.sh:exit_trap:495 jobs -p
   2018-05-18 02:05:50.700 | +./devstack/stack.sh:exit_trap:495
   jobs=
   2018-05-18 02:05:50.710 | +./devstack/stack.sh:exit_trap:498
   [[ -n '' ]]
   2018-05-18 02:05:50.720 | +./devstack/stack.sh:exit_trap:504
   kill_spinner
   2018-05-18 02:05:50.731 | +./devstack/stack.sh:kill_spinner:390 
   '[' '!' -z '' ']'
   2018-05-18 02:05:50.741 | +./devstack/stack.sh:exit_trap:506
   [[ 1 -ne 0 ]]
   2018-05-18 02:05:50.751 | +./devstack/stack.sh:exit_trap:507
   echo 'Error on exit'
   2018-05-18 02:05:50.751 | Error on exit
   2018-05-18 02:05:50.761 | +./devstack/stack.sh:exit_trap:508
   generate-subunit 1526608058 1092 fail
   2018-05-18 02:05:51.148 | +./devstack/stack.sh:exit_trap:509
   [[ -z /tmp ]]
   2018-05-18 02:05:51.157 | +./devstack/stack.sh:exit_trap:512
   /home/stack/devstack/tools/worlddump.py -d /tmp

I've tried pip uninstalling PyYAML and pip installing it before running 
stack.sh, but the error comes back.


   $ sudo pip uninstall PyYAML
   The directory '/home/stack/.cache/pip/http' or its parent directory
   is not owned by the current user and the cache has been disabled.
   Please check the permissions and owner of that directory. If
   executing pip with sudo, you may want sudo's -H flag.
   Uninstalling PyYAML-3.12:
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/INSTALLER
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/METADATA
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/RECORD
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/WHEEL
    /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/top_level.txt
    /usr/local/lib/python2.7/dist-packages/_yaml.so
    Proceed (y/n)? y
    Successfully uninstalled PyYAML-3.12

I've posted my question to the pip folks and they think it's an 
openstack issue: https://github.com/pypa/pip/issues/4805


Is there a workaround here?
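
In case it helps anyone else who hits the same distutils-uninstall 
error, the generic pip-side workaround I'm aware of is to skip the 
uninstall step entirely with --ignore-installed; whether that flag can 
be threaded through the octavia/diskimage-builder element is the part I 
haven't figured out:

   sudo -H pip install --ignore-installed PyYAML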




[openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/pike image build by devstack

2018-05-18 Thread rezroo

Hi - let's try this again - this time with pike :-)
Any suggestions on how to get the image builder to create a larger loop 
device? I think that's what the problem is.
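
For context, my working assumption (please correct me if the octavia 
plugin sizes its amphora image differently) is that diskimage-builder 
honours the DIB_IMAGE_SIZE environment variable, in GB, when it creates 
the loop-mounted image, so exporting something like the following 
before running stack.sh might be all that's needed:

   export DIB_IMAGE_SIZE=5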

Thanks in advance.

   2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
   diskimage_builder.block_device.level1.mbr [-] Write partition entry
   blockno [0] entry [0] start [2048] length [4190208]
   2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo sync]
   2018-05-19 05:03:04.538 | 2018-05-19 05:03:04.537 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo kpartx -avs
   /dev/loop3]
   2018-05-19 05:03:04.642 | 2018-05-19 05:03:04.642 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mkfs -t ext4
   -i 4096 -J size=64 -L cloudimg-rootfs -U 376d4b4d-2597-4838-963a-3d
   9c5fcb5d9c -q /dev/mapper/loop3p1]
   2018-05-19 05:03:04.824 | 2018-05-19 05:03:04.823 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
   /tmp/dib_build.zv2VZo3W/mnt/]
   2018-05-19 05:03:04.833 | 2018-05-19 05:03:04.833 INFO
   diskimage_builder.block_device.level3.mount [-] Mounting
   [mount_mkfs_root] to [/tmp/dib_build.zv2VZo3W/mnt/]
   2018-05-19 05:03:04.834 | 2018-05-19 05:03:04.833 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mount
   /dev/mapper/loop3p1 /tmp/dib_build.zv2VZo3W/mnt/]
   2018-05-19 05:03:04.850 | 2018-05-19 05:03:04.850 INFO
   diskimage_builder.block_device.blockdevice [-] create() finished
   2018-05-19 05:03:05.527 | 2018-05-19 05:03:05.527 INFO
   diskimage_builder.block_device.blockdevice [-] Getting value for
   [image-block-device]
   2018-05-19 05:03:06.168 | 2018-05-19 05:03:06.168 INFO
   diskimage_builder.block_device.blockdevice [-] Getting value for
   [image-block-devices]
   2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
   diskimage_builder.block_device.blockdevice [-] Creating fstab
   2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
   /tmp/dib_build.zv2VZo3W/built/etc]
   2018-05-19 05:03:06.855 | 2018-05-19 05:03:06.855 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo cp
   /tmp/dib_build.zv2VZo3W/states/block-device/fstab
   /tmp/dib_build.zv2VZo3W/bui
   lt/etc/fstab]
   2018-05-19 05:03:12.946 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
   2018-05-19 05:03:12.947 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
   2018-05-19 05:03:12.947 | ++ export
   'DIB_BOOTLOADER_DEFAULT_CMDLINE=nofb nomodeset vga=normal'
   2018-05-19 05:03:12.947 | ++ DIB_BOOTLOADER_DEFAULT_CMDLINE='nofb
   nomodeset vga=normal'
   2018-05-19 05:03:12.948 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
   2018-05-19 05:03:12.950 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
   2018-05-19 05:03:12.950 |  dirname
   /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
   2018-05-19 05:03:12.951 | +++
   
PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/finalise.d/../environment.d/..'
   2018-05-19 05:03:12.951 | +++ dib-init-system
   2018-05-19 05:03:12.953 | ++ DIB_INIT_SYSTEM=systemd
   2018-05-19 05:03:12.953 | ++ export DIB_INIT_SYSTEM
   2018-05-19 05:03:12.954 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
   2018-05-19 05:03:12.955 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
   2018-05-19 05:03:12.955 | ++ export PIP_DOWNLOAD_CACHE=/tmp/pip
   2018-05-19 05:03:12.955 | ++ PIP_DOWNLOAD_CACHE=/tmp/pip
   2018-05-19 05:03:12.956 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
   2018-05-19 05:03:12.958 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
   2018-05-19 05:03:12.958 | ++ export DISTRO_NAME=ubuntu
   2018-05-19 05:03:12.958 | ++ DISTRO_NAME=ubuntu
   2018-05-19 05:03:12.958 | ++ export DIB_RELEASE=xenial
   2018-05-19 05:03:12.958 | ++ DIB_RELEASE=xenial
   2018-05-19 05:03:12.959 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash
   2018-05-19 05:03:12.961 | + source
   /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash
   2018-05-19 05:03:12.961 | ++ export DIB_DEFAULT_INSTALLTYPE=source
   2018-05-19 05:03:12.961 | ++ DIB_DEFAULT_INSTALLTYPE=source
   2018-05-19 05:03:12.962 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file