Re: [openstack-dev] Oslo.db possible module?

2013-09-16 Thread Flavio Percoco

On 13/09/13 18:04 +0000, Joshua Harlow wrote:

Hi guys,

In my attempt to not use oslo.cfg in taskflow I ended up re-creating a lot of
what oslo-incubator db has but without the strong connection to oslo.cfg,

I was thinking that a majority of this code (which is also partially ceilometer
influenced) could become oslo.db,

https://github.com/stackforge/taskflow/blob/master/taskflow/persistence/backends/impl_sqlalchemy.py
(search for SQLAlchemyBackend as the main class).

It should be generic enough that it could be easily extracted to be the basis
for oslo.db if that is desirable,

Thoughts/comments/questions welcome :-)



Not having looked at the code in detail, I'd like to ask: what are the
differences between this implementation and the one currently in
Oslo Incubator?

Also, when you say you're not using oslo.cfg, do you mean you're not
using the global instance, or that you're not using it at all? There
are good examples of how to avoid using the global instance in
oslo.messaging.
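
Roughly, the pattern looks like this (a minimal sketch, not
oslo.messaging's actual code): the library accepts a caller-owned
ConfigOpts instance instead of importing the global cfg.CONF.

from oslo.config import cfg

_opts = [cfg.StrOpt('connection', default='sqlite://',
                    help='SQLAlchemy connection URL.')]

class Backend(object):
    def __init__(self, conf):
        # 'conf' is a local ConfigOpts instance owned by the caller,
        # never the process-wide global cfg.CONF.
        conf.register_opts(_opts, group='database')
        self.connection = conf.database.connection

conf = cfg.ConfigOpts()
conf(args=[])  # parse nothing; this instance never touches global state
backend = Backend(conf)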

I'd like to hear boris-42's thoughts about this as well - since he's
been working on that with other folks - and perhaps bring this up at
the oslo.db session[0] - assuming it'll get accepted.

Cheers,
FF

[0] http://summit.openstack.org/cfp/details/13


--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Oslo.db possible module?

2013-09-16 Thread Boris Pavlovic
Hi Joshua,


+1 to discussing it at the oslo.db session! =)


Best regards,
Boris Pavlovic
--
Mirantis Inc.


On Mon, Sep 16, 2013 at 12:26 PM, Roman Podolyaka
rpodoly...@mirantis.com wrote:

 Hi Joshua,

 This looks great!

 We definitely should consider this to become the base of oslo.db, as
 currently the DB code in oslo-incubator depends on oslo-config and has a few
 drawbacks (e.g. global engine and session instances).
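
 To make that concrete, a minimal sketch of the alternative (illustrative
 only, not oslo-incubator's or taskflow's actual code): each backend owns
 its engine and session factory instead of sharing module-level globals.

 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker

 class SQLAlchemyBackend(object):
     def __init__(self, sql_connection):
         # Per-instance engine/session factory, no module globals.
         self._engine = create_engine(sql_connection)
         self._session_maker = sessionmaker(bind=self._engine)

     def get_session(self):
         return self._session_maker()

 # Two backends can now coexist in one process, which global
 # engine/session instances make impossible.
 backend_a = SQLAlchemyBackend('sqlite://')
 backend_b = SQLAlchemyBackend('sqlite://')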

 We could discuss this in details at the summit (Boris has already proposed
 a session for oslo.db lib - http://summit.openstack.org/cfp/details/13).

 Thanks,
 Roman


 On Fri, Sep 13, 2013 at 9:04 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

  Hi guys,

  In my attempt to not use oslo.cfg in taskflow I ended up re-creating a
 lot of what oslo-incubator db has but without the strong connection to
 oslo.cfg,

  I was thinking that a majority of this code (which is also partially
 ceilometer influenced) could become oslo.db,


 https://github.com/stackforge/taskflow/blob/master/taskflow/persistence/backends/impl_sqlalchemy.py
 (search for SQLAlchemyBackend as the main class).

  It should be generic enough that it could be easily extracted to be the
 basis for oslo.db if that is desirable,

  Thoughts/comments/questions welcome :-)

  -Josh



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-16 Thread Julien Danjou
On Fri, Sep 13 2013, Nachi Ueno wrote:

Hi Nachi,

That looks like a good idea, thanks for submitting.

 [1] Should we add an ElasticSearch query API to Ceilometer, or should
 we let users call the ElasticSearch API directly?

 Note that ElasticSearch has no tenant-based authentication; in that
 case we need to integrate Keystone and ElasticSearch (or Horizon).

This should provide data retrieval too; otherwise it is much less
interesting.

 [2] Log (syslog or any application log) should be stored in
 Ceilometer? (or it should be new OpenStack project? )

Ceilometer already has events/notifications storage on the roadmap, and I
think ES would fit well there. As I have some plans to use the notification
system as a logging back-end, that would probably cover part of this.

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Protecting the access to memcache

2013-09-16 Thread Chmouel Boudjnah
John Dickinson m...@not.mn writes:

 available for a WSGI pipeline. (Note that swift.common.middleware.acl
 may be misplaced by this definition, but it's only used by tempauth.)

and keystone_auth FYI.

Chmouel.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Quick review from a core please for simple bug fix

2013-09-16 Thread Day, Phil
Hi Folks,

Could one more core look at the following simple bug fix please: 
https://review.openstack.org/#/c/46486/ - which allows the system to clean up
VMs from deleted instances.


It's already got one +2 and four +1s.

Thanks
Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-16 Thread Michael Still
On Fri, Sep 13, 2013 at 7:51 AM, Dolph Mathews dolph.math...@gmail.com wrote:

 ++ Data backups are a solved problem, and no DB admin should trust an
 application to perform its own backups.

I'm not completely sure I agree. Consider the case where a cloud with
active users undertakes an upgrade. The migrations run, and they allow
user traffic to hit the installation. They then discover there is a
serious problem and now need to roll back. However, they can't just
restore a database backup, because the database is no longer in a
consistent state compared with the hypervisors -- users might have
created or deleted instances for example.

In this scenario if we could downgrade reliably, they could force a
downgrade with db sync, and then revert the packages they had
installed to the previous version.

How would they handle this scenario with just database backups?

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Quick review from a core please for simple bug fix

2013-09-16 Thread Michael Still
Done.

Cheers,
Michael

On Mon, Sep 16, 2013 at 8:21 PM, Day, Phil philip@hp.com wrote:
 Hi Folks,



 Could one more core look at the following simple bug fix please:
 https://review.openstack.org/#/c/46486/ - which allows the system to clean up
 VMs from deleted instances.





 It's already got one +2 and four +1s



 Thanks

 Phil






-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Additions to Hypervisor Support Matrix

2013-09-16 Thread Avishay Traeger

Hi all,
I have added a few features to the hypervisor support matrix that are
related to volume functions.
https://wiki.openstack.org/wiki/HypervisorSupportMatrix#Hypervisor_feature_support_matrix

1. iSCSI CHAP: Sets CHAP password on iSCSI connections
2. Fibre Channel: Use the FC protocol to attach volumes
3. Volume swap: Swap an attached volume with a different (unattached)
volume - data is copied over
4. Volume rate limiting: Rate limit the I/Os to a given volume

1+2 are not new (Grizzly or before), while 3+4 are new in Havana (Cinder
uses volume swap for live volume migration, and volume rate limiting for
QoS).

The purpose of this email is to notify hypervisor driver maintainers:
1. To update their entries
2. That these features exist and it would be great to have wide support

I know the libvirt driver supports them all, but maybe the maintainers
would like to update it themselves.

Thanks!
Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-16 Thread Dolph Mathews
On Mon, Sep 16, 2013 at 5:31 AM, Michael Still mi...@stillhq.com wrote:

 On Fri, Sep 13, 2013 at 7:51 AM, Dolph Mathews dolph.math...@gmail.com
 wrote:

  ++ Data backups are a solved problem, and no DB admin should trust an
  application to perform its own backups.

 I'm not completely sure I agree. Consider the case where a cloud with
 active users undertakes an upgrade. The migrations run, and they allow
 user traffic to hit the installation. They then discover there is a
 serious problem and now need to roll back. However, they can't just
 restore a database backup, because the database is no longer in a
 consistent state compared with the hypervisors -- users might have
 created or deleted instances for example.


 In this scenario if we could downgrade reliably, they could force a
 downgrade with db sync, and then revert the packages they had
 installed to the previous version.

 How would they handle this scenario with just database backups?


Great point, but I still wouldn't *rely* on an application to manage its
own data backups :)



 Michael

 --
 Rackspace Australia





-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] 503 Service Unavailable errors

2013-09-16 Thread Snider, Tim
When I'm doing large transfers, Swift often returns 503 errors with
proxy-server "Object PUT exceptions during send, 1/2 required connections" in
the log file.
Is this an indication of network issues, or can someone explain the cause and
a possible solution?
Thanks

*   Trying 192.168.10.90... connected
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
> PUT /v1/AUTH_test/load/1gbfile51_0 HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 192.168.10.90:8080
> Accept: */*
> X-Auth-Token: AUTH_tk422c579f97bb4da69528820add184204
> Content-Length: 1073741824

} [data not shown]
  0 1024M    0     0    0 3040k      0  3304k  0:05:17 --:--:--  0:05:17 3307k
  0 1024M    0     0    0 4512k      0  2058k  0:08:29  0:00:02  0:08:27 2060k
  0 1024M    0     0    0 5200k      0  1772k  0:09:51  0:00:02  0:09:49 1772k
  0 1024M    0     0    0 6544k      0  1668k  0:10:28  0:00:03  0:10:25 1669k
  0 1024M    0     0    0 8960k      0  1835k  0:09:31  0:00:04  0:09:27 1835k
  1 1024M    0     0    1 12.7M      0  2215k  0:07:53  0:00:05  0:07:48 2012k
  1 1024M    0     0    1 19.2M      0  2859k  0:06:06  0:00:06  0:06:00 3233k
  2 1024M    0     0    2 22.0M      0  2768k  0:06:18  0:00:08  0:06:10 3327k
  2 1024M    0     0    2 26.4M      0  3058k  0:05:42  0:00:08  0:05:34 4159k
  2 1024M    0     0    2 29.5M      0  2905k  0:06:00  0:00:10  0:05:50 3851k
  2 1024M    0     0    2 29.5M      0  2650k  0:06:35  0:00:11  0:06:24 3113k
  2 1024M    0     0    2 29.5M      0  2436k  0:07:10  0:00:12  0:06:58 1907k
  2 1024M    0     0    2 29.5M      0  2254k  0:07:45  0:00:13  0:07:32 1455k
  2 1024M    0     0    2 29.5M      0  2097k  0:08:19  0:00:14  0:08:05  560k
  2 1024M    0     0    2 29.5M      0  1961k  0:08:54  0:00:15  0:08:39     0
  2 1024M    0     0    2 29.5M      0  1841k  0:09:29  0:00:16  0:09:13     0
  2 1024M    0     0    2 29.5M      0  1736k  0:10:04  0:00:17  0:09:47     0
  2 1024M    0     0    2 29.5M      0  1641k  0:10:38  0:00:18  0:10:20     0
< HTTP/1.1 503 Service Unavailable
< Content-Length: 212
< Content-Type: text/html; charset=UTF-8
< X-Trans-Id: tx3c2f6133fc2e4b43bebc939aea2ae17f
< Date: Mon, 16 Sep 2013 02:02:37 GMT
* HTTP error before end of send, stop sending

{ [data not shown]
  2 1024M  100   212    2 29.5M     11  1586k  0:11:00  0:00:19  0:10:41     0
  2 1024M  100   212    2 29.5M     11  1586k  0:11:00  0:00:19  0:10:41     0
* Closing connection #0
<html>
<head>
  <title>503 Service Unavailable</title>
</head>
<body>
  <h1>503 Service Unavailable</h1>
  The server is currently unavailable. Please try again at a later time.<br /><br />
</body>


ssh -i /root/.ssh/id_rsa  root@10.113.193.90 grep 
tx3c2f6133fc2e4b43bebc939aea2ae17f /var/log/swift/*
/var/log/swift/proxy.error:Sep 15 19:02:37 swift14 proxy-server Object PUT 
exceptions during send, 1/2 required connections (txn: 
tx3c2f6133fc2e4b43bebc939aea2ae17f) (client_ip: 192.168.10.69)
/var/log/swift/proxy.log:Sep 15 19:02:37 swift14 proxy-server 192.168.10.69 
192.168.10.69 16/Sep/2013/02/02/37 PUT /v1/AUTH_test/load/1gbfile51_0 HTTP/1.0 
503 - 
curl/7.22.0%20%28x86_64-pc-linux-gnu%29%20libcurl/7.22.0%20OpenSSL/1.0.1%20zlib/1.2.3.4%20libidn/1.23%20librtmp/2.3
 test%2CAUTH_tk422c579f97bb4da69528820add184204 29556736 212 - 
tx3c2f6133fc2e4b43bebc939aea2ae17f - 19.0232 -


Thanks,
Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Savanna] Hadoop 1.1.2 replacement to 1.2.1 in vanilla plugin

2013-09-16 Thread Alexander Ignatov

Hi Savanna folks,

Due to the replacement of the Hadoop distro from 1.1.2 to 1.2.1 in the Vanilla
plugin, newly created CRs in the master branch may fail on integration tests.

Replacement related patch: https://review.openstack.org/#/c/46490/
DIB script changes: https://review.openstack.org/#/c/46720/

These changes were tested manually and all Savanna-related tests
worked fine; eventually savanna-ci set +1.


I will retrigger your failed tests manually. Sorry for the inconvenience.

--
Regards,
Alexander Ignatov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Core review request

2013-09-16 Thread Vitaliy Kolosov
Hi, guys.

Please review my changes:
https://review.openstack.org/#/c/46064/
https://review.openstack.org/#/c/46066/
https://review.openstack.org/#/c/46072/

-- 
;;
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Tempest whitebox tests in nova

2013-09-16 Thread Adalberto Medeiros

Hello!

I'm looking at where the most appropriate place would be for the
tempest whitebox tests within the nova unit tests. At first look,
nova/tests/db/test_db_api.py seems to be an appropriate place. As
previously in tempest, I can work directly with the db and change states
accordingly. However, the logic that allows certain actions depending on
instance states seems not to be covered at this level.


For example, one piece of logic tested is trying to delete an instance with
vm_state='resized' and task_state='resize_prep'. This should raise an
exception, but that does not happen when considering only the db level; it
would require importing manager methods in this case.
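
To make the gap concrete, here is a self-contained sketch (hypothetical
names, not nova's actual code) of the kind of manager-level guard that
db-only tests never exercise:

class InstanceInvalidState(Exception):
    pass

DELETABLE_TASK_STATES = (None, 'deleting')

def delete_instance(instance):
    # The state check lives above the db layer; a raw db-level delete
    # would skip it entirely.
    if instance['task_state'] not in DELETABLE_TASK_STATES:
        raise InstanceInvalidState(
            'cannot delete while task_state=%r' % instance['task_state'])
    instance['vm_state'] = 'deleted'

instance = {'vm_state': 'resized', 'task_state': 'resize_prep'}
try:
    delete_instance(instance)
except InstanceInvalidState as exc:
    print(exc)  # the exception the whitebox test expects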


On the other hand, if the whitebox tests live at the manager test level,
most db methods are stubbed out or faked, so the tests wouldn't
really be doing what is expected of whitebox testing.


I think one option is to import the manager at the db level to apply the
needed logic, but I'm looking for advice from the nova team and to
understand whether my assumptions are correct so far.


More information about the whitebox tests for servers in tempest (from 
the patch that removes those tests): 
https://review.openstack.org/#/c/46116/3/tempest/whitebox/test_servers_whitebox.py


The nova db tests: 
https://github.com/openstack/nova/blob/master/nova/tests/db/test_db_api.py


Regards,

--
Adalberto Medeiros
Linux Technology Center
Openstack and Cloud Development
IBM Brazil
Email: adal...@linux.vnet.ibm.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Savanna] problem starting namenode

2013-09-16 Thread Arindam Choudhury
Hi,

I am trying to write a custom plugin to provision hadoop 0.20.203.0 with
jdk1.6u45. So I created a custom pre-installed image by tweaking
savanna-image-elements and a new plugin called mango.
I am having this error on the namenode:

2013-09-16 13:34:27,463 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
STARTUP_MSG: 
/
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = test-master-starfish-001/192.168.32.2
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = 
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 
-r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
/
2013-09-16 13:34:27,784 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: 
loaded properties from hadoop-metrics2.properties
2013-09-16 13:34:27,797 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
MetricsSystem,sub=Stats registered.
2013-09-16 13:34:27,799 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
Scheduled snapshot period at 10 second(s).
2013-09-16 13:34:27,799 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
NameNode metrics system started
2013-09-16 13:34:27,964 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi 
registered.
2013-09-16 13:34:27,966 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
Source name ugi already exists!
2013-09-16 13:34:27,976 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm 
registered.
2013-09-16 13:34:27,976 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode 
registered.
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: VM type   = 
64-bit
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 
17.77875 MB
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: capacity  = 
2^21 = 2097152 entries
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: 
recommended=2097152, actual=2097152
2013-09-16 13:34:28,047 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2013-09-16 13:34:28,047 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-09-16 13:34:28,047 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-09-16 13:34:28,060 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
dfs.block.invalidate.limit=100
2013-09-16 13:34:28,060 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false 
accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-09-16 13:34:28,306 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered 
FSNamesystemStateMBean and NameNodeMXBean
2013-09-16 13:34:28,326 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
Caching file names occuring more than 10 times 
2013-09-16 13:34:28,329 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Storage directory /mnt/lib/hadoop/hdfs/namenode does not exist.
2013-09-16 13:34:28,330 ERROR 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
/mnt/lib/hadoop/hdfs/namenode is in an inconsistent state: storage directory 
does not exist or is not accessible.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:353)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:434)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
2013-09-16 13:34:28,330 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
/mnt/lib/hadoop/hdfs/namenode is in an inconsistent state: storage directory 
does not exist or is not accessible.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:353)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:434)
at 

[openstack-dev] While booting an instance each time, getting Status = ERROR.

2013-09-16 Thread chandan kumar
Hello,

I have deployed Devstack in a VM (RAM = 2 GB, 2 CPUs) using Fedora 18.

Here is my localrc file: http://fpaste.org/39848/33990213/

I created an image of Fedora 17 using glance.
Using that image, I am trying to boot an instance with the nova client.
After the build state ends during boot, the instance goes to status ERROR.

Below is the output of all the operations I performed while booting
the instance:
http://fpaste.org/39851/13793402/

I have also tried deploying devstack on bare metal, and I got the same
ERROR status there when booting an instance.

To find the reason for the ERROR status, I checked the logs directory,
but there are lots of files for the different screens.
Which file should I look at for the error log?
Here is the log link: http://fpaste.org/39855/37934048/

Here is the output of nova-manage service list:
http://fpaste.org/39857/40640137/
From there I found that all nova services are enabled. So why does
booting the instance give an error each time?


Thanks,
Chandan Kumar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] All needed Tuskar metrics and alerts mapped to what Ceilometer supports

2013-09-16 Thread Ladislav Smola

Hello,

this is a follow-up to T. Sedovic's old email, trying to identify all the
metrics we will need to track for Tuskar.
The Ceilometer API for Horizon is now in progress, so we have time to
finish the list of metrics and alarms we need. That may also raise
requests for some Ceilometer API optimizations.


This is meant as an open conversation that will lead to the final list.


Measurements
============

The old list sent by tsedovic:
-

* CPU utilisation for each CPU (percentage) (Ceilometer-Nova as cpu_util)
* RAM utilisation (GB) (Ceilometer-Nova as memory)
  - I assume this is the used value and the total value can be obtained
    from the service itself; needs confirmation
* Swap utilisation (GB) (Ceilometer-Nova as disk.ephemeral.size)
  - I assume this is the used value and the total value can be obtained
    from the service itself; needs confirmation
* Disk utilisation (GB) (Ceilometer-Cinder as volume.size and
  Ceilometer-Swift as storage.objects.size)
  - I assume this is the used value and the total value can be obtained
    from the service itself; needs confirmation
* System load -- see /proc/loadavg (percentage) (--)
* Incoming traffic for each NIC (Mbps) (Ceilometer-Nova as
  network.incoming.bytes)
* Outgoing traffic for each NIC (Mbps) (Ceilometer-Nova as
  network.outgoing.bytes)
  - These are tied to VM interfaces now; I expect the Baremetal
    agent (Hardware agent) will use the NICs; needs confirmation
* Number of currently running instances and the associated flavours
  (Ceilometer-Nova, using instance:type and group_by resource_id)


The additional meters used in wireframes
----------------------------------------

jcoufal, could you add the additional measurements from the last wireframes?


The measurements Ceilometer supports now
----------------------------------------

http://docs.openstack.org/developer/ceilometer/measurements.html

Feel free to include the others in the wireframes, jcoufal (I guess there
will have to be different overview pages for different Resource Classes,
based on their service type).

I am in the process of finding out whether all of these measurements
will also be collected by the Baremetal agent (Hardware agent). But based
on its description I would say yes (except the VM-specific metrics like
vcpus, I guess).

The missing meters
------------------

We will probably have to implement these (meaning implementing pollsters
for the Baremetal agent (Hardware agent) that will collect these metrics):

* System load -- see /proc/loadavg (percentage) (probably for all services?)

- Please add other Baremetal metrics you think we will need.


Alerts
======

Setting an Alarm
----------------

A simplified explanation of setting an alarm:
In order to have alerts, you have to set an alarm first. An alarm can
contain any statistics query, a threshold and an operator (e.g. fire the
alarm when avg cpu_util > 90% on all instances of project_1).
We can combine several alarms into one complex alarm. And you can browse
alarms.

(There can be actions set up on an alarm, but more about that later.)
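
As a rough illustration, the example alarm above might look like this as a
Ceilometer v2 API payload (a sketch using the Havana-era alarm fields - worth
double-checking against the current API reference before relying on it):

alarm = {
    'name': 'project_1-high-cpu',
    'type': 'threshold',
    'threshold_rule': {
        'meter_name': 'cpu_util',
        'statistic': 'avg',
        'comparison_operator': 'gt',
        'threshold': 90.0,
        'period': 60,
        'evaluation_periods': 3,
        # restrict the statistics query to project_1's instances
        'query': [{'field': 'project_id', 'op': 'eq', 'value': 'project_1'}],
    },
}

# Combining alarms into one complex alarm uses the 'combination' type;
# the alarm ids below are placeholders.
complex_alarm = {
    'name': 'project_1-unhealthy',
    'type': 'combination',
    'combination_rule': {
        'operator': 'and',
        'alarm_ids': ['<cpu-alarm-id>', '<disk-alarm-id>'],
    },
}

Both would be POSTed to /v2/alarms.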

Showing alerts
--------------

1. I would be bold enough to distinguish system meters (e.g. cpu_util > 90%,
as used for Heat autoscaling) from user-defined meters (the ones defined in
the UI). Will we show both in the UI? Probably in different sections. System
meters will require extra caution.


2. For the table view of alarms, I would see it as a general filterable,
orderable table of alarms. So we can easily show something like e.g. all
nova alarms, or all alarms for cpu_util with condition > 90%.


3. There is now an ongoing conversation with eglynn about how to show the
'aggregate alarm stats' and 'alarm time series':
https://wiki.openstack.org/wiki/Ceilometer/blueprints/alarm-audit-api-group-by#Discussion

Next to the overview page with predefined charts, we should have general
filterable, orderable charts (a similar interface to the table view above).

Here is one possible way the charts for alarms could look on the overview
page:
http://file.brq.redhat.com/~jcoufal/openstack-m/user_stories/racks_detail-overview.pdf
.
Any feedback is welcome. Also we should figure out what Alarms will be 
used for defining e.g. there is
something bad happening (like health chart?). Or what alarms to set and 
show as default (lot of them

is already being set by e.g. Heat)

4. There is a load of alerts used in the wireframes that are not currently
supported in Ceilometer (alerts can only be based on existing measurements),
like instance failures, disk failures, etc. We should write those down and
probably write agents and pollsters for them. It makes sense to integrate
them into Ceilometer, whatever they will be.


Dynamic Ceilometer
==================

Due to the dynamic architecture of the 

Re: [openstack-dev] python-simplejson 2.0.0 errors

2013-09-16 Thread Bhuvan Arumugam
On Sun, Sep 15, 2013 at 8:50 AM, Thomas Goirand z...@debian.org wrote:

 Hi,

 There's jsonschema 2.0.0 in Sid, and when I build some of the OpenStack
 packages, I get a huge list of requirement parsing errors:

 2013-09-12 17:05:55.720 26018 ERROR stevedore.extension [-] Could not
 load 'file': (jsonschema 2.0.0 (/usr/lib/python2.7/dist-packages),
 Requirement.parse('jsonschema>=0.7,<2'))
 2013-09-12 17:05:55.720 26018 ERROR stevedore.extension [-] (jsonschema
 2.0.0 (/usr/lib/python2.7/dist-packages),
 Requirement.parse('jsonschema>=0.7,<2'))



It's a fun tangle of interdependencies. Nova depends on python-glanceclient;
python-glanceclient depends on warlock>=1.0.1,<2. warlock 1.0.0 depends on
jsonschema>=0.7,<2. The latest warlock depends on a newer jsonschema
release (>=0.7,<3). To fix your issue, you may apply one of the
following workarounds:

1. Upgrade warlock to the latest release (sudo pip install warlock --upgrade)
2. Downgrade jsonschema to the earlier release v1.3.0 (sudo pip uninstall
jsonschema; sudo pip install jsonschema==1.3.0)
3. Install a newer python-glanceclient; it depends on the newer warlock (sudo
pip install python-glanceclient --upgrade)

References:
https://github.com/openstack/python-glanceclient/blob/master/requirements.txt
https://github.com/bcwaldon/warlock/blob/master/requirements.txt

Thank you,
Bhuvan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WebUI and user roles

2013-09-16 Thread Adam Young
Looks like this has grown into a full discussion.  Opening up to the dev 
mailing list.


On 09/16/2013 10:43 AM, Lyle, David (Cloud Services) wrote:

I did run into a couple of fundamental limitations with the policy API as 
implemented in Keystone.

1)  policy_list and policy_get are admin_required by default in the policy 
file.  Obviously this can be changed in the policy file itself, but this is a 
bad default.  A regular user is most in need of policy rule enforcement so the 
existing default does not make sense from a UI perspective.
Hmmm, this sounds like a mismatch of expectations.  I would think that 
the Horizon server would fetch the policy as an admin user, not the end 
user, and use that to tailor their UX.  It would only be a problem if 
that tailoring was done on the client side in JavaScript.  Why would it 
matter what the access control for the policy was?  Why would the end user 
be requesting the policy?




2)  The 3 param/return fields supported by the policy API are: blob, id, type (mime-type).  When 
trying to utilize multiple policy files (blobs) from several services we need a way to map the blob 
to a service type to know which rule set to apply.  I had considered lumping all the policy blobs 
into one, but there is no requirement that each policy rule will begin with e.g., 
identity: and several blobs could implement a rule default which could be 
specified differently.  So, I believe a service_type parameter is necessary.  Additionally, is 
there anything barring nova from uploading multiple policy blobs (perhaps different), each getting 
unique IDs, and then having several varying compute policy blobs to choose from?  Which one wins?
I haven't looked deeply at the policy API until now:   It looks broken.  
I would not be able to tell just from reading the code how to map a 
policy file to the service that needed it.  I would think that, upon 
service startup, it would request the policy file that mapped to it, 
either by endpoint with a fallback to a per-service call.


I would think that you would make a tree out of the rules.  At the root 
would be policy.  Underneath that would be the service (then the 
endpoint, in the future, when we support multiple per service), and then 
the rules underneath those.  The rules would be a json.dumps of the blob 
fetched from the policy API.
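
A minimal sketch of that tree (assuming the blobs are ordinary policy.json
documents and that the service each blob belongs to is known somehow - the
very mapping the thread says the API currently lacks):

import json

def build_policy_tree(blobs_by_service):
    # blobs_by_service: e.g. {'identity': '<policy.json blob>', ...}
    tree = {'policy': {}}
    for service, blob in blobs_by_service.items():
        tree['policy'][service] = json.loads(blob)
    return tree

tree = build_policy_tree({
    'identity': '{"identity:list_users": "rule:admin_required"}',
})
print(tree['policy']['identity']['identity:list_users'])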






Having devstack load the policy files into keystone would help, but 1 and 2 
need to be addressed before those files are usable in Horizon.

Thanks,
David

-Original Message-
From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, September 16, 2013 8:16 AM
To: Julie Pichon
Cc: Matthias Runge; Gabriel Hurley; Lyle, David (Cloud Services)
Subject: Re: WebUI and user roles

On 09/16/2013 07:33 AM, Julie Pichon wrote:

Adam Young ayo...@redhat.com wrote:

Gabriel and I talked at the last summit about how Horizon could
figure out what to show the user based on the roles that the user
had.  At the time, I was thinking it wasn't something we could figure out at 
run time.

I was wrong.

The answer is plain.  We have the policy files in Keystone already,
we just don't parse them.  Horizon has all the information it needs
to figure out based on a token, what can this user do?

I'm not certain how to make use of this, yet, but the kernel of the
idea is there.

Thanks Adam. David Lyle implemented RBAC functionality based on policy
files in Havana [0]. I think one of the problems he found was that
although policy files are in use, most services currently do not
upload them to Keystone so they are not yet queryable (?).

That is true, but it is a deployment issue that is easily solvable. We can have 
devstack, packstack, and whatever else, upload the policy files at the start.  
They are all in the various deployments, so it is really a trivial step to load 
them into Keystone.


Regards,

Julie


[0] https://blueprints.launchpad.net/horizon/+spec/rbac



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Tuskar Names Clarification Unification

2013-09-16 Thread Jaromir Coufal

Hi,

after a few days of gathering information, it looks like no more new ideas
are appearing, so let's take a last round of voting for the names you
prefer. It's important for us to get on the same page.


https://etherpad.openstack.org/tuskar-naming

Thanks guys
-- Jarda


On 2013/12/09 11:20, Jaromir Coufal wrote:

Hello everybody,

I just started an etherpad with various names of concepts in Tuskar.
It is important to get all of us on the same page, so that the usage and
discussions around Tuskar concepts are clear and easy to follow (also for
users, not just contributors!).


https://etherpad.openstack.org/tuskar-naming

Keep in mind that we will use these names in the API, CLI and UI as well,
so they should be as descriptive as possible while not being very long or
difficult.


Etherpad is not the best tool for markup, but I did my best. Each
concept which needs a name is bold and is followed by a bunch of bullets
- a description, name suggestions, plus discussion under each
suggestion of why yes or no.


Name suggestions are in underlined italic font.

Feel free to add & update & discuss anything in the document, because
I might have forgotten a bunch of stuff.


Thank you all and follow the etherpad :)
-- Jarda




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When will we stop adding new Python modules to requirements

2013-09-16 Thread Michael Basnight

On Sep 16, 2013, at 8:42 AM, Matthias Runge wrote:

 On 16/09/13 17:36, Michael Basnight wrote:
 
 
 Not to forget python-troveclient, which is currently a hard
 requirement for Horizon.
 
 During the review for python-troveclient, it was discovered that
 troveclient still references reddwarfclient (in docs/source).
 
 Are you saying it references another codebase? Or just that when we
 renamed it we forgot to update a reference or two? If it's the latter,
 is it relevant to this requirements issue? Also, I will gladly fix it
 and release it if it's making anyone's life hell :)
 
 
 In my understanding, this is just due to forgotten references during the
 rename; it's not relevant to the requirements issue.
 
 Currently, just the docs refer to reddwarf, resulting in build issues
 when building docs.
 

Whew! I'll fix it anyway. Thx for pointing it out.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When will we stop adding new Python modules to requirements

2013-09-16 Thread Matthias Runge
On 16/09/13 17:36, Michael Basnight wrote:

 
 Not to forget python-troveclient, which is currently a hard
 requirement for Horizon.
 
 During the review for python-troveclient, it was discovered that
 troveclient still references reddwarfclient (in docs/source).
 
 Are you saying it references another codebase? Or just that when we
 renamed it we forgot to update a reference or two? If it's the latter,
 is it relevant to this requirements issue? Also, I will gladly fix it
 and release it if it's making anyone's life hell :)
 

In my understanding, this is just due to forgotten references during the
rename; it's not relevant to the requirements issue.

Currently, just the docs refer to reddwarf, resulting in build issues
when building docs.

Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Meeting Minutes 2013-09-16

2013-09-16 Thread Denis Koryavov
Hello,

Below, you can see the meeting minutes from today's Murano meeting.

Minutes:
http://eavesdrop.openstack.org/meetings/murano/2013/murano.2013-09-16-15.04.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/murano/2013/murano.2013-09-16-15.04.txt
Log:
http://eavesdrop.openstack.org/meetings/murano/2013/murano.2013-09-16-15.04.log.html

The next meeting will be held on Sep 21st.

--
Denis
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When will we stop adding new Python modules to requirements

2013-09-16 Thread Michael Basnight
On Sep 16, 2013, at 12:24 AM, Matthias Runge mru...@redhat.com wrote:

 On 16/09/13 05:30, Monty Taylor wrote:
 
 
 On 09/15/2013 01:47 PM, Alex Gaynor wrote:
 Falcon was included as a result of Marconi moving from stackforge to
 being incubated. sphinxcontrib-programoutput doesn't appear to have been
 added at all, it's still under
 review: https://review.openstack.org/#/c/46325/
 
 I agree with Alex and Morgan. falcon was the marconi thing.
 diskimage-builder and tripleo-image-elements are part of an OpenStack
 program.
 
 sphinxcontrib-programoutput is only a build dependency for docs - but I
 think you've made a good point, and we should be in requirements freeze.
 Let's hold off on that one until icehouse opens.
 
 
 Not to forget python-troveclient, which is currently a hard requirement
 for Horizon.
 
 During the review for python-troveclient, it was discovered that troveclient
 still references reddwarfclient (in docs/source).

Are you saying it references another codebase? Or just that when we renamed it
we forgot to update a reference or two? If it's the latter, is it relevant to
this requirements issue? Also, I will gladly fix it and release it if it's
making anyone's life hell :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When will we stop adding new Python modules to requirements

2013-09-16 Thread Michael Basnight

On Sep 16, 2013, at 9:05 AM, Matthias Runge wrote:

 On 16/09/13 17:51, Michael Basnight wrote:
 
  Currently, just the docs refer to reddwarf, resulting in build
  issues when building docs.
  
  
  Whew! Ill fix it anyway. Thx fro pointing it out.
  
 Awesome, Michael. Very much appreciated!

https://review.openstack.org/#/c/46755/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WebUI and user roles

2013-09-16 Thread Dolph Mathews
On Mon, Sep 16, 2013 at 11:01 AM, Adam Young ayo...@redhat.com wrote:

 Looks like this has grown into a full discussion.  Opening up to the dev
 mailing list.

 On 09/16/2013 10:43 AM, Lyle, David (Cloud Services) wrote:

 I did run into a couple of fundamental limitations with the policy API as
 implemented in Keystone.

 1)  policy_list and policy_get are admin_required by default in the
 policy file.  Obviously this can be changed in the policy file itself, but
 this is a bad default.  A regular user is most in need of policy rule
 enforcement so the existing default does not make sense from a UI
 perspective.

 Hmmm, this sounds like a mismatch of expectations.  I would think that the
 Horizon server would fetch the policy as an admin user, not the end user,
 and use that to tailor their UX.  It would only be a problem if that
 tailoring was done on the Client side in Javascript.  Why would it matter
 what access control for the policy was?  Why would the end user be
 requesting the policy?


 2)  The 3 param/return fields supported by the policy API are: blob, id,
 type (mime-type).  When trying to utilize multiple policy files (blobs)
 from several services we need a way to map the blob to a service type to
 know which rule set to apply.  I had considered lumping all the policy
 blobs into one, but there is no requirement that each policy rule will
 begin with e.g., identity: and several blobs could implement a rule
 default which could be specified differently.  So, I believe a
 service_type parameter is necessary.  Additionally, is there anything
 barring nova from uploading multiple policy blobs (perhaps different), each
 getting unique IDs, and then having several varying compute policy blobs to
 choose from?  Which one wins?

 I haven't looked deeply at the policy API until now:   It looks broken.  I
 would not be able to tell just from reading the code how to map a policy
 file to the service that needed it.  I would think that, upon service
 startup, it would request the policy file that mapped to it, either by
 endpoint with a fallback to a per-service call.


We stopped short of any policy-to-service/endpoint mapping because there
were mixed expectations about how that should be done, and no clear use case
that fetching policies by ID / URL didn't satisfy a bit more simply.



 I would think that you would make a tree out of the rules.  At the root
 would be policy.  Underneath that would be the service, (then the endpoint
 in the future when we support multiple per service) and then the rules
 underneath those.  The rules would be a json dumps of the blob get from the
 policy_api.




 Having devstack load the policy files into keystone would help, but 1 and
 2 need to be addressed before those files are usable in Horizon.

 Thanks,
 David

 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com]
 Sent: Monday, September 16, 2013 8:16 AM
 To: Julie Pichon
 Cc: Matthias Runge; Gabriel Hurley; Lyle, David (Cloud Services)
 Subject: Re: WebUI and user roles

 On 09/16/2013 07:33 AM, Julie Pichon wrote:

 Adam Young ayo...@redhat.com wrote:

 Gabriel and I talked at the last summit about how Horizon could
 figure out what to show the user based on the roles that the user
 had.  At the time, I was thinking it wasn't something we could figure
 out at run time.

 I was wrong.

 The answer is plain.  We have the policy files in Keystone already,
 we just don't parse them.  Horizon has all the information it needs
 to figure out based on a token, what can this user do?

 I'm not certain how to make use of this, yet, but the kernel of the
 idea is there.

 Thanks Adam. David Lyle implemented RBAC functionality based on policy
 files in Havana [0]. I think one of the problems he found was that
 although policy files are in use, most services currently do not
 upload them to Keystone so they are not yet queryable (?).

 That is true, but it is a deployment issue that is easily solvable. We
 can have devstack, packstack, and whatever else, upload the policy files at
 the start.  They are all in the various deployments, so it is really a
 trivial step to load them into Keystone.

  Regards,

 Julie


 [0] https://blueprints.launchpad.net/horizon/+spec/rbac







-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] problem starting namenode

2013-09-16 Thread Alexander Ignatov

Hi, Arindam

Savanna's current vanilla plugin pushes two configs directly into
hdfs-site.xml for all DataNodes and the NameNode:

dfs.name.dir = /lib/hadoop/hdfs/namenode,
dfs.data.dir = /lib/hadoop/hdfs/datanode
https://github.com/stackforge/savanna/blob/master/savanna/plugins/vanilla/config_helper.py#L178-L181
All these paths are joined with the /mnt dir, which is the root place for
mounted ephemeral drives.
These configs are responsible for the placement of HDFS data. In particular,
/mnt/lib/hadoop/hdfs/namenode should be created before formatting the
NameNode.
I'm not sure about the proper behaviour of the Hadoop 0.20.203.0 you are
using in your plugin, but in the 1.1.2 version supported by the vanilla
plugin, /mnt/lib/hadoop/hdfs/namenode is created automatically while
formatting the namenode.
Maybe in 0.20.203.0 this is not implemented. I'd recommend you check it
with a manual cluster deployment, without Savanna cluster provisioning.
If that is the case, then your code should create these directories
before starting the Hadoop services.
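
Something along these lines in the plugin's startup path would cover it (a
sketch only - 'remote' stands in for however your plugin runs commands on an
instance, e.g. Savanna's instance remote; the paths and ownership are the
ones discussed above):

def prepare_hdfs_dirs(remote):
    # Create the HDFS directories that Hadoop 0.20.203.0 may not create
    # for itself, before 'hadoop namenode -format' runs.
    for path in ('/mnt/lib/hadoop/hdfs/namenode',
                 '/mnt/lib/hadoop/hdfs/datanode'):
        remote.execute_command(
            'sudo mkdir -p %(p)s && sudo chown hadoop:hadoop %(p)s'
            % {'p': path})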


Regards,
Alexander Ignatov
On 9/16/2013 6:11 PM, Arindam Choudhury wrote:

Hi,

I am trying to a custom plugin to provision hadoop 0.20.203.0 with 
jdk1.6u45. So I created a custom pre-installed image tweaking 
savanna-image-elements and a new plugin called mango.

I am having this error on namenode:

2013-09-16 13:34:27,463 INFO 
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:

/
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = test-master-starfish-001/192.168.32.2
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = 
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 
-r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011

/
2013-09-16 13:34:27,784 INFO 
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from 
hadoop-metrics2.properties
2013-09-16 13:34:27,797 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
MetricsSystem,sub=Stats registered.
2013-09-16 13:34:27,799 INFO 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot 
period at 10 second(s).
2013-09-16 13:34:27,799 INFO 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics 
system started
2013-09-16 13:34:27,964 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
ugi registered.
2013-09-16 13:34:27,966 WARN 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi 
already exists!
2013-09-16 13:34:27,976 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
jvm registered.
2013-09-16 13:34:27,976 INFO 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source 
NameNode registered.
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: VM 
type   = 64-bit
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: 2% max 
memory = 17.77875 MB
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: 
capacity  = 2^21 = 2097152 entries
2013-09-16 13:34:28,002 INFO org.apache.hadoop.hdfs.util.GSet: 
recommended=2097152, actual=2097152
2013-09-16 13:34:28,047 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2013-09-16 13:34:28,047 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-09-16 13:34:28,047 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
isPermissionEnabled=true
2013-09-16 13:34:28,060 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
dfs.block.invalidate.limit=100
2013-09-16 13:34:28,060 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), 
accessTokenLifetime=0 min(s)
2013-09-16 13:34:28,306 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered 
FSNamesystemStateMBean and NameNodeMXBean
2013-09-16 13:34:28,326 INFO 
org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names 
occuring more than 10 times
2013-09-16 13:34:28,329 INFO 
org.apache.hadoop.hdfs.server.common.Storage: Storage directory 
/mnt/lib/hadoop/hdfs/namenode does not exist.
2013-09-16 13:34:28,330 ERROR 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: 
Directory /mnt/lib/hadoop/hdfs/namenode is in an inconsistent state: 
storage directory does not exist or is not accessible.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:353)
at 

Re: [openstack-dev] WebUI and user roles

2013-09-16 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
FYI: We were thinking about using the new Keystone policy API, but fell back to 
using files on the file system due to not having a way to retrieve the policies 
from Keystone other than with an ID string. After saving the policy file you 
need to save the policy ID somewhere so you might as well just save the policy 
file as well. If the policy table also had a name field, then the policy file 
could be saved during OpenStack installation and retrieved later by each 
service using some algorithm on its name.

Mark

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Monday, September 16, 2013 9:19 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] WebUI and user roles


On Mon, Sep 16, 2013 at 11:01 AM, Adam Young 
ayo...@redhat.com wrote:
Looks like this has grown into a full discussion.  Opening up to the dev 
mailing list.

On 09/16/2013 10:43 AM, Lyle, David (Cloud Services) wrote:
I did run into a couple of fundamental limitations with the policy API as 
implemented in Keystone.

1)  policy_list and policy_get are admin_required by default in the policy 
file.  Obviously this can be changed in the policy file itself, but this is a 
bad default.  A regular user is most in need of policy rule enforcement so the 
existing default does not make sense from a UI perspective.
Hmmm, this sounds like a mismatch of expectations.  I would think that the 
Horizon server would fetch the policy as an admin user, not the end user, and 
use that to tailor their UX.  It would only be a problem if that tailoring was 
done on the Client side in Javascript.  Why would it matter what access control 
for the policy was?  Why would the end user be requesting the policy?

2)  The 3 param/return fields supported by the policy API are: blob, id, type 
(mime-type).  When trying to utilize multiple policy files (blobs) from several 
services we need a way to map the blob to a service type to know which rule set 
to apply.  I had considered lumping all the policy blobs into one, but there is 
no requirement that each policy rule will begin with e.g., identity: and 
several blobs could implement a rule default which could be specified 
differently.  So, I believe a service_type parameter is necessary.  
Additionally, is there anything barring nova from uploading multiple policy 
blobs (perhaps different), each getting unique IDs, and then having several 
varying compute policy blobs to choose from?  Which one wins?
I haven't looked deeply at the policy API until now:   It looks broken.  I 
would not be able to tell just from reading the code how to map a policy file 
to the service that needed it.  I would think that, upon service startup, it 
would request the policy file that mapped to it, either by endpoint with a 
fallback to a per-service call.

We stopped short of any policy-to-service/endpoint mapping because there were
mixed expectations about how that should be done, and no clear use case that
fetching policies by ID / URL didn't satisfy a bit more simply.


I would think that you would make a tree out of the rules.  At the root would 
be policy.  Underneath that would be the service, (then the endpoint in the 
future when we support multiple per service) and then the rules underneath 
those.  The rules would be a json dumps of the blob get from the policy_api.



Having devstack load the policy files into keystone would help, but 1 and 2 
need to be addressed before those files are usable in Horizon.

Thanks,
David

-Original Message-
From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, September 16, 2013 8:16 AM
To: Julie Pichon
Cc: Matthias Runge; Gabriel Hurley; Lyle, David (Cloud Services)
Subject: Re: WebUI and user roles

On 09/16/2013 07:33 AM, Julie Pichon wrote:
Adam Young ayo...@redhat.com wrote:
Gabriel and I talked at the last summit about how Horizon could
figure out what to show the user based on the roles that the user
had.  At the time, I was thinking it wasn't something we could figure out at 
run time.

I was wrong.

The answer is plain.  We have the policy files in Keystone already,
we just don't parse them.  Horizon has all the information it needs
to figure out based on a token, what can this user do?

I'm not certain how to make use of this, yet, but the kernel of the
idea is there.
Thanks Adam. David Lyle implemented RBAC functionality based on policy
files in Havana [0]. I think one of the problems he found was that
although policy files are in use, most services currently do not
upload them to Keystone so they are not yet queryable (?).
That is true, but it is a deployment issue that is easily solvable. We can have 
devstack, packstack, and whatever else, upload the policy files at the start.  
They are all in the various deployments, so it is really a trivial step to load 
them into Keystone.
Regards,

Julie


[0] https://blueprints.launchpad.net/horizon/+spec/rbac



[openstack-dev] [marconi] Minutes from today's meeting

2013-09-16 Thread Kurt Griffiths
Hi folks,

Today the Marconi team held its regular Monday meeting[1] in
#openstack-meeting-alt @ 1600 UTC. Among other things, we discussed
progress on marconi-proxy:

Minutes: http://goo.gl/kfBjF8
Log: http://goo.gl/GoUU4c

As always, you can catch us in #openstack-marconi in between the weekly
meetings.

@kgriffs

[1]: https://wiki.openstack.org/wiki/Meetings/Marconi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] Review request for adding ordereddict

2013-09-16 Thread Paul Bourke
Hi all,

I've submitted https://review.openstack.org/#/c/46474/ to add
ordereddict to openstack/requirements.

The reasoning behind the change is that we want ConfigParser to store
sections in the order they're read, which is the default behavior in
py2.7[1], but it must be specified in py2.6.
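
For reference, here is a minimal sketch of the py2.6 case this package
covers (the parser construction is the standard stdlib API; only the
fallback import is what this change adds):

import ConfigParser
try:
    from collections import OrderedDict   # stdlib on py2.7+
except ImportError:
    from ordereddict import OrderedDict   # the py2.6 backport being added

# On py2.7 OrderedDict is already the default dict_type; on py2.6 it has
# to be passed explicitly to keep sections in the order they are read.
parser = ConfigParser.RawConfigParser(dict_type=OrderedDict)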

The following two Glance features depend on this:

https://review.openstack.org/#/c/46268/
https://review.openstack.org/#/c/46283/

Can someone take a look at this change?

Thanks,
-Paul

[1] 
http://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Savanna usability

2013-09-16 Thread Erik Bergenholtz
I want to start a dialog around adding some usability features to Savanna, now 
that I have had a chance to spend a fair amount of time provisioning, scaling 
and changing clusters. Here is a list of items that I believe are important to 
address; comments welcomed: 

1. Changing an OpenStack flavor associated with a node-group template has a
ripple effect on both node-group and cluster templates. The reason for this is
that OpenStack does not support modification of flavors; when a flavor is
modified (RAM, CPU, Root Disk, etc.), the flavor is deleted and a new one is
created, resulting in a new flavor id. The implication is that both node-groups
referencing the flavor [id] and any cluster templates referencing the affected
node-group become stale and unusable. A user then has to start from scratch,
creating new node-groups and cluster templates for a simple flavor change.
    a. A possible solution to this is to internally associate the flavor
name with the node-group and look up the flavor id based on the flavor name
when provisioning instances (see the sketch after this list)
    b. At a minimum it should be possible to change the flavor id
associated with a node-group. See #2.


2. Cluster templates and node-group templates are immutable. This is more of
an issue at the node-group level, as I often want to make changes to a
node-group and have them affect all cluster templates that make use of that
node-group. I see this as being fairly commonplace.


3. Before provisioning a cluster, quotas should be checked to make sure that 
enough quota exists. I know this can be done transactionally (check quota, 
spawn cluster), but a basic check would go a long way.


4. Spawning a large cluster comes with some problems today, as Savanna will 
abort if a single VM fails. In deploying large clusters (hundreds to thousands 
of nodes), which will be commonplace, having a single slave VM (i.e. data node) 
not spawn properly should not necessarily be a reason to abort the entire 
deployment, in particular in a scaling operation. Obviously the failing VM 
cannot host a master service. This applies both to the plugin and to the 
controller, as they are both involved in the deployment. I could see a possible 
solution being the creation of an error/fault policy allowing a user to specify 
(perhaps optionally) a percentage or hard number for the minimum number of 
nodes that need to come up without aborting the deployment; a rough sketch 
follows this list.
   a. This also applies to scaling the cluster by larger increments.
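
As a rough sketch of what such a fault policy check might look like (all names 
here are hypothetical, not existing Savanna API):

    # Hypothetical helper: decide whether a partially spawned node group
    # is acceptable under a user-supplied fault policy.
    def enough_nodes_spawned(requested, spawned, min_ratio=None, min_count=None):
        if min_count is not None and spawned < min_count:
            return False
        if min_ratio is not None and spawned < requested * min_ratio:
            return False
        return True

    # e.g. tolerate up to 5% failed data nodes when spawning 1000 of them:
    assert enough_nodes_spawned(1000, 960, min_ratio=0.95)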
 
Just some thoughts based on my experience last week; comments welcomed.
 
Best,
 
Erik
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When will we stop adding new Python modules to requirements

2013-09-16 Thread Matthias Runge

On 16/09/13 17:51, Michael Basnight wrote:

 Currently, just the docs refer to reddwarf, resulting in build
 issues when building docs.
 
 
 Whew! I'll fix it anyway. Thx for pointing it out.
 
Awesome, Michael. Very much appreciated!

Matthias

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova]Connecting a VM from one tenant to a non-shared network in another tenant

2013-09-16 Thread Samuel Bercovici
Hi,

The bug opened in Nova https://bugs.launchpad.net/nova/+bug/1221320 has a fix 
pending core nova developer approval.
The 2nd bug opened for Neutron is fixed and approved.
As we need this quite urgently to complete our testing in time for Havana, I 
would appreciate it if another core reviewer could review 
https://review.openstack.org/#/c/45691/ and hopefully approve it.

Regards,
-Sam.


From: Avishay Balderman
Sent: Sunday, September 08, 2013 11:15 AM
To: OpenStack Development Mailing List; gong...@unitedstack.com
Subject: Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a 
non-shared network in another tenant

Hi
I have opened two bugs that are related to the topic below:

https://bugs.launchpad.net/neutron/+bug/1221315

https://bugs.launchpad.net/nova/+bug/1221320

Thanks

Avishay

From: Samuel Bercovici
Sent: Wednesday, August 07, 2013 1:05 PM
To: OpenStack Development Mailing List; 
gong...@unitedstack.commailto:gong...@unitedstack.com
Subject: Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a 
non-shared network in another tenant

Hi Yong,

Garry has recommended that I send you the following:

In: /opt/stack/nova/nova/network/neutronv2/api.py
In the def _get_available_networks function, the developer has added a specific 
line of code filtering networks by the tenant_id.
Around line 123: search_opts = {'tenant_id': project_id, 'shared': False}

As far as I understand, Neutron already filters non-shared networks by the 
tenant ID, so why do we need this explicit filter? Moreover, I think that by 
default Neutron will also return the shared networks in addition to the 
private ones, so instead of making two calls the code could make a single 
call to Neutron, filtering by net_ids if needed.
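
Paraphrasing the Havana-era code from memory (a sketch, not a verbatim quote; 
neutron here stands for a neutronclient instance), the two-call shape is 
roughly:

    # today: one call for the tenant's own networks, one for shared ones
    search_opts = {'tenant_id': project_id, 'shared': False}
    nets = neutron.list_networks(**search_opts).get('networks', [])
    search_opts = {'shared': True}
    nets += neutron.list_networks(**search_opts).get('networks', [])

    # proposed: let Neutron's own tenant scoping (and admin elevation)
    # do the filtering, optionally narrowing by the requested net_ids
    search_opts = {'id': net_ids} if net_ids else {}
    nets = neutron.list_networks(**search_opts).get('networks', [])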

Do you see a reason why the code should remain as is?

Thanks,
-Sam.



From: Samuel Bercovici
Sent: Thursday, August 01, 2013 10:58 AM
To: OpenStack Development Mailing List; 
sorla...@nicira.commailto:sorla...@nicira.com
Subject: Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a 
non-shared network in another tenant

There was another patch needed:
In: /opt/stack/nova/nova/network/neutronv2/api.py
In the def _get_available_networks function, the developer has added a specific 
line of code filtering networks by the tenant_id.
In general, as far as I understand, this might be unneeded, as Quantum will 
already filter the networks based on the tenant_id in the context, while if 
is_admin it will elevate and return all networks, which I believe is the 
behavior we want.

Do you think this can somehow be solved only on the Neutron side, or must it 
also be done by removing the tenant_id filter on the Nova side?

When removing the tenant_id filter plus the patch below, I get the behavior 
that as admin I can create VMs connected to another tenant's private network, 
but as non-admin I am not able to do so.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Wednesday, July 31, 2013 7:32 PM
To: OpenStack Development Mailing List; 
sorla...@nicira.commailto:sorla...@nicira.com
Subject: Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a 
non-shared network in another tenant

Hi Salvatore,

I thought that creating a qport would be enough, but it looks like I am still 
missing something else.
I have commented out the _validate_network_tenant_ownership call in the create 
function of /opt/stack/quantum/neutron/api/v2/base.py.
As an Admin user, I can now create a qport in tenant-a that is mapped to 
a private network in tenant-b.

The following still fails with ERROR: The resource could not be found. (HTTP 
404) ...
nova boot --flavor 1 --image image-id --nic port-id=port-id
Where port-id is the one I got from the port-create

Any ideas where I should look next?

Regards,
-Sam.


From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Wednesday, July 31, 2013 5:42 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a 
non-shared network in another tenant

Hi Sam,

is what you're trying to do tantamount to creating a port whose tenant_id is 
different from the network's tenant_id?
We have at the moment a fairly strict ownership check - which does not allow 
even admin users to do this operation.

I do not have a strong opinion against relaxing the check and allowing admin 
users to create ports on any network - I don't think this would constitute a 
potential vulnerability, as in Neutron, if someone manages to impersonate an 
admin user, he/she can do much more damage.

Salvatore

On 31 July 2013 16:11, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com wrote:
Hi All,

We are providing load balancing services via virtual machines running under an 
admin tenant that need to be connected to VMs attached to a non-shared/private 
tenant network.
The virtual machine fails to be provisioned connected to the private 

Re: [openstack-dev] Issues with IPTables

2013-09-16 Thread Qing He
The follow-up question is:
Has anyone faithfully walked through the guides posted there and seen whether 
they work without back-door tricks or tricks not documented there? 

-Original Message-
From: Qing He 
Sent: Monday, September 16, 2013 10:37 AM
To: 'Solly Ross'
Cc: OpenStack Development Mailing List
Subject: RE: Issues with IPTables

Solly,
It would be great if you could share the notes.  The reason I asked the question 
is that I'm trying to decide whether I need to allocate development time to 
installation when following the installation guide. The usual wisdom is that an 
installation with detailed instructions takes no time. However, your experience 
and mine show the contrary: I have not finished mine following the Ubuntu 
installation guide. Thus, I was interested in knowing the effort you spent 
on it, so I would know it was not just me who had issues with the supposedly 
plug-and-play installation from packages.
Thanks,
Qing

-Original Message-
From: Solly Ross [mailto:sr...@redhat.com] 
Sent: Monday, September 16, 2013 10:24 AM
To: Qing He
Cc: OpenStack Development Mailing List
Subject: Re: Issues with IPTables

Quite a while.  RDO's documentation for configuring multinode Packstack with 
Neutron was a bit lacking, so after attempting to get that working for a while, 
I switched to following the Basic Install Guide 
(http://docs.openstack.org/trunk/basic-install/content/basic-install_intro.html).
  I also found the basic install guide catered for Fedora 
(http://docs.openstack.org/trunk/basic-install/yum/content/basic-install_intro.html),
 but that is sorely lacking in the actual instruction department, and is 
missing several steps.

If you would like, I can attach the raw draft of my notes.  Eventually, some of 
the changes or clarifications should make their way into the actual OpenStack 
Docs.

Best Regards,
Solly Ross

- Original Message -
From: Qing He qing...@radisys.com
To: sr...@redhat.com
Sent: Monday, September 16, 2013 1:14:42 PM
Subject: RE: Issues with IPTables

Solly,
A side question, how long did this process take you?

Thanks,

Qing

-Original Message-
From: Solly Ross [mailto:sr...@redhat.com] 
Sent: Monday, September 16, 2013 10:11 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Issues with IPTables

In an effort to improve/verify the OpenStack documentation with regard to RHEL 
and Fedora, I've been attempting to follow the basic install guides.  I've 
managed to create a working installation and set of instructions.  However, to 
do so I needed to disable the Neutron IPTables firewall, as it was blocking 
non-VM traffic.  Namely, it was blocking the GRE packets being used by Neutron. 
 Did I miss something, or is this a bug?
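
For reference, if the root cause is the host firewall dropping GRE (IP 
protocol 47), a rule along these lines on the nodes carrying tunnel traffic 
would typically unblock it without disabling the firewall entirely (an 
assumption on my part, not something taken from the guide):

    iptables -I INPUT -p gre -j ACCEPT
    service iptables save    # persist across reboots on RHEL/Fedora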

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][IceHouse] Ceilometer + Kibana + ElasticSearch Integration

2013-09-16 Thread Nachi Ueno
Hi Julien

Thank you for your comment

2013/9/16 Julien Danjou jul...@danjou.info:
 On Fri, Sep 13 2013, Nachi Ueno wrote:

 Hi Nachi,

 That looks like a good idea, thanks for submitting.

 [1] We should add elastic search query api for ceilometer? or we
 should let the user kick the ElasticSearch API directly?

 Note that ElasticSearch has no tenant based authentication, in that
 case we need to integrate Keystone and ElasticSearch. (or Horizon)

 This should provide data retrieval too, otherwise it has much less
 interest.

OK, I'll propose adding the data retrieval API too, with ElasticSearch query 
support.
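
As a sketch of what such a retrieval call could look like against the 
ElasticSearch HTTP API (the index, document type and field names here are 
made up for illustration):

    import json
    import requests

    # hypothetical index/type layout for ceilometer events
    query = {'query': {'term': {'resource_id': 'instance-0001'}}}
    resp = requests.post('http://es-host:9200/ceilometer/events/_search',
                         data=json.dumps(query))
    print resp.json()['hits']['total']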

 [2] Log (syslog or any application log) should be stored in
 Ceilometer? (or it should be new OpenStack project? )

 Ceilometer already has on the roadmap events/notifications storage, ES
 would really fit here I think. As I've some plan to use the notification
 system as a logging back-end, that would probably cover part of this.

Cool. OK, so I'll continue working on this in Ceilometer.

Best
Nachi

 --
 Julien Danjou
 // Free Software hacker / independent consultant
 // http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with IPTables

2013-09-16 Thread Anne Gentle
On Mon, Sep 16, 2013 at 12:24 PM, Solly Ross sr...@redhat.com wrote:

 Quite a while.  RDO's documentation for configuring multinode Packstack
 with Neutron was a bit lacking, so after attempting to get that working for
 a while, I switched to following the Basic Install Guide (
 http://docs.openstack.org/trunk/basic-install/content/basic-install_intro.html).
  I also found the basic install guide catered for Fedora (
 http://docs.openstack.org/trunk/basic-install/yum/content/basic-install_intro.html),
 but that is sorely lacking in the actual instruction department, and is
 missing several steps.

 If you would like, I can attach the raw draft of my notes.  Eventually,
 some of the changes or clarifications should make their way into the
 actual OpenStack Docs.


Hi Solly,

We really need to get this guide into shape by Oct. 17th. That's not very
much time. Can you put your notes into a doc bug at
http://bugs.launchpad.net/openstack-manuals/ as soon as you can?

Thanks,
Anne



 Best Regards,
 Solly Ross

 - Original Message -
 From: Qing He qing...@radisys.com
 To: sr...@redhat.com
 Sent: Monday, September 16, 2013 1:14:42 PM
 Subject: RE: Issues with IPTables

 Solly,
 A side question, how long did this process take you?

 Thanks,

 Qing

 -Original Message-
 From: Solly Ross [mailto:sr...@redhat.com]
 Sent: Monday, September 16, 2013 10:11 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] Issues with IPTables

 In an effort to improve/verify the OpenStack documentation with regard to
 RHEL and Fedora, I've been attempting to follow the basic install guides.
  I've managed to create a working installation and set of instructions.
  However, to do so I needed to disable the Neutron IPTables firewall, as it
 was blocking non-VM traffic.  Namely, it was blocking the GRE packets being
 used by Neutron.  Did I miss something, or is this a bug?

 Best Regards,
 Solly Ross

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] While booting an instance each time, getting Status = ERROR.

2013-09-16 Thread Avi L
Hi Chandan,

devstack by default logs everything to standard output and you can use
screen to view the logs. Here are some steps that I had documented in my
blog for debugging devstack logs:

Redirect devstack output to log files :
http://www.datauniv.com/blogs/2013/06/20/how-to-debug-devstack/

Using screen to view devstack logs:

http://www.datauniv.com/blogs/2013/06/11/openstack-taking-for-a-spin/

Thanks
AL


On Mon, Sep 16, 2013 at 7:14 AM, chandan kumar 
chandankumar.093...@gmail.com wrote:

 Hello,

 I have deployed devstack in a VM (RAM = 2 GB, 2 CPUs) using Fedora 18.

 This is my localrc file: http://fpaste.org/39848/33990213/

 I have created an image of Fedora 17 using Glance.
 Using that image, I am trying to boot an instance with the nova client.
 During booting, at the end of the Build state, the instance goes to
 Status ERROR.

 Below is the output of all the operations that I performed while
 booting the instance:
 http://fpaste.org/39851/13793402/

 I have also tried to deploy devstack on bare metal; there too I got
 the same ERROR status when booting the instance.

 To find the reason for Status = ERROR, I checked the logs directory,
 but there are lots of files for the different screens. Please tell me
 which file to look in for the error log.
 Here is the log link: http://fpaste.org/39855/37934048/

 Here is the output of nova-manage service list:
 http://fpaste.org/39857/40640137/
 From there I found that all nova services are enabled. But why does
 booting the instance give an error each time?


 Thanks,
 Chandan Kumar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with IPTables

2013-09-16 Thread Solly Ross
Here you go.  Keep in mind that I structured them more like their own install 
guide.  Basic tweaks were integrated into the steps, but larger issues are 
noted at the bottom under the notes section.

Best Regards,
Solly Ross


- Original Message -
From: Qing He qing...@radisys.com
To: Solly Ross sr...@redhat.com
Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Monday, September 16, 2013 1:37:02 PM
Subject: RE: Issues with IPTables

Solly,
It would be great if you could share the notes.  The reason I asked the question 
is that I'm trying to decide whether I need to allocate development time to 
installation when following the installation guide. The usual wisdom is that an 
installation with detailed instructions takes no time. However, your experience 
and mine show the contrary: I have not finished mine following the Ubuntu 
installation guide. Thus, I was interested in knowing the effort you spent 
on it, so I would know it was not just me who had issues with the supposedly 
plug-and-play installation from packages.
Thanks,
Qing

-Original Message-
From: Solly Ross [mailto:sr...@redhat.com] 
Sent: Monday, September 16, 2013 10:24 AM
To: Qing He
Cc: OpenStack Development Mailing List
Subject: Re: Issues with IPTables

Quite a while.  RDO's documentation for configuring multinode Packstack with 
Neutron was a bit lacking, so after attempting to get that working for a while, 
I switched to following the Basic Install Guide 
(http://docs.openstack.org/trunk/basic-install/content/basic-install_intro.html).
  I also found the basic install guide catered for Fedora 
(http://docs.openstack.org/trunk/basic-install/yum/content/basic-install_intro.html),
 but that is sorely lacking in the actual instruction department, and is 
missing several steps.

If you would like, I can attach the raw draft of my notes.  Eventually, some of 
the changes or clarifications should make their way into the actual OpenStack 
Docs.

Best Regards,
Solly Ross

- Original Message -
From: Qing He qing...@radisys.com
To: sr...@redhat.com
Sent: Monday, September 16, 2013 1:14:42 PM
Subject: RE: Issues with IPTables

Solly,
A side question, how long did this process take you?

Thanks,

Qing

-Original Message-
From: Solly Ross [mailto:sr...@redhat.com] 
Sent: Monday, September 16, 2013 10:11 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Issues with IPTables

In an effort to improve/verify the OpenStack documentation with regard to RHEL 
and Fedora, I've been attempting to follow the basic install guides.  I've 
managed to create a working installation and set of instructions.  However, to 
do so I needed to disable the Neutron IPTables firewall, as it was blocking 
non-VM traffic.  Namely, it was blocking the GRE packets being used by Neutron. 
 Did I miss something, or is this a bug?

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Intro
=

We will be following the basic install guide at
http://docs.openstack.org/trunk/basic-install/.

Layout
==

We have three networks:

* vmnet10 (NAT, 192.168.0.x) -- management network
* vmnet11 (host-only, 10.10.10.x) -- data network
* vmnet12 (NAT, 192.168.230.x) -- external/API network

All networks have x.x.x.1 assigned as the host's IP, and NAT networks have
x.x.x.2 set as the default gateway/NAT box.  For this reason, we will start
all IPs at x.x.x.3 instead of x.x.x.1 (just add 2 to every IP in the guide)

controller.rdo-test
---

* eth0: 192.168.0.3 (mgmt)
* eth1: 192.168.230.7 (ext)

compute.rdo-test


* eth0: 192.168.0.5 (mgmt)
* eth1: 10.10.10.4 (data)

network.rdo-test


* eth0: 192.168.0.4 (mgmt)
* eth1: 10.10.10.3 (data)
* eth2: 192.168.230.8 (ext)

Setup
=

NOTE: make sure that the outside network is reachable
(for example, in our VMWare setup, add `DNS1=192.168.0.2`
and `GATEWAY=192.168.0.2` to /etc/sysconfig/network-scripts/ifcfg-eth0)

Controller Node (controller.rdo-test)
-

1. Add the repositories:
   1. `yum-config-manager --add-repo 
http://repos.fedorapeople.org/repos/openstack/openstack-trunk/el6-openstack-trunk.repo`
 (RDO)
   2. `yum install -y 
http://dl.fedoraproject.org/pub/epel/6Server/x86_64/epel-release-6-8.noarch.rpm`
 (EPEL)

2. Update to grab the new kernel, and reboot to use it
   `yum -y update  shutdown -r now`

3. Edit the network scripts to contain the correct lines:
   `$EDITOR /etc/sysconfig/network-scripts/ifcfg-eth{0,1}`
   1. `ONBOOT=yes`
   2. `NETMASK=255.255.255.0`
   3. `GATEWAY=x.x.x.2` (replace the `x.x.x` with the appropriate prefix)
   4. `BOOTPROTO=none`
   5. `IPADDR=[SEE LAYOUT SECTION]`

4. Edit sysctl.conf to disable route verification
   `$EDITOR /etc/sysctl.conf`
   1. `net.ipv4.conf.all.rp_filter = 0`
   2. 

Re: [openstack-dev] [qa] Pitfalls of parallel tempest execution

2013-09-16 Thread Matthew Treinish
On Fri, Aug 30, 2013 at 12:23:03PM -0400, David Kranz wrote:
 Now that we have achieved the goal of parallel tempest in the gate
 using testr we have to be careful that we don't introduce tests that
 are flaky.  This may be obvious to many of you but we should include
 some information in the tempest README. Here is a start.
 Improvements are welcome and I will soon add this to the README.
 
  -David
 
 A new test only has to pass twice to get into the gate. The default
 tenant isolation prevents most tests from trying to access the same
 state in a racy way, but there are some apis, particularly in the
 whitebox and admin area (also watch out for the cli tests) that
 affect more global resources. Such tests can cause race failures. In
 some cases a lock can be used to serialize execution for a set of
 tests. An example is AggregatesAdminTest.  Races between methods in
 a class are not a problem because parallelization is at the class
 level, but if there is a json and xml version of the class there
 could still be a problem. Reviewers need to keep on top of these
 issues.
 

Thanks for starting this David. This was definitely something that we needed to
add to the tempest docs, because some of the assumptions that we made before,
when things were running serially, aren't true in a parallel environment. Just
an FYI, I took this description and added a bit more detail to it. I pushed it
out for review here:
https://review.openstack.org/46774
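
For reference, the serialization pattern David mentions looks roughly like 
this in tempest (paraphrased from memory, so check the tree for the exact 
import path and base class name):

    from tempest.common import tempest_fixtures as fixtures

    class AggregatesAdminTest(AdminTestBase):  # base class name is a stand-in
        def setUp(self):
            # a named inter-process lock serializes every test class that
            # takes it, so two aggregate-mutating classes never overlap
            self.useFixture(fixtures.LockFixture('aggregates'))
            super(AggregatesAdminTest, self).setUp()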

Thanks,

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Tuskar Names Clarification Unification

2013-09-16 Thread Tomas Sedovic

On 09/16/2013 05:50 PM, Jaromir Coufal wrote:

Hi,

after few days of gathering information, it looks that no more new ideas
appear there, so let's take the last round of voting for names which you
prefer. It's important for us to get on the same page.

https://etherpad.openstack.org/tuskar-naming

Thanks guys
-- Jarda


Thanks Jarda,

I was thinking we could do the voting during the weekly IRC meeting (the 
bot has some cool voting capabilities).


Unfortunately, I've fallen ill and chances are I won't be able to drive 
the meeting. If you folks want to self-organise and start the vote, you 
have my blessing.


Otherwise, shall we do it on the IRC meeting after that?

T.




On 2013/12/09 11:20, Jaromir Coufal wrote:

Hello everybody,

I just started and etherped with various names of concepts in Tuskar.
It is important to get all of us on the same page, so the usage and
discussions around Tuskar concepts are clear and easy to use (also for
users, not just contributors!).

https://etherpad.openstack.org/tuskar-naming

Keep in mind, that we will use these names in API, CLI and UI as well,
so they should be as descriptive as possible and not very long or
difficult though.

Etherpad is not the best tool for mark up, but I did my best. Each
concept which needs name is bold and is followed with bunch of bullets
- description, suggestion of names, plus discussion under each
suggestion, why yes or not.

Name suggestions are in underlined italic font.

Feel free to add  update  discuss anything in the document, because
I might have forgotten bunch of stuff.

Thank you all and follow the etherpad :)
-- Jarda




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Review request for adding ordereddict

2013-09-16 Thread Dolph Mathews
On Mon, Sep 16, 2013 at 11:34 AM, Paul Bourke pauldbou...@gmail.com wrote:

 Hi all,

 I've submitted https://review.openstack.org/#/c/46474/ to add
 ordereddict to openstack/requirements.


Related thread:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015121.html


 The reasoning behind the change is that we want ConfigParser to store
 sections in the order they're read, which is the default behavior in
 py2.7[1], but it must be specified in py2.6.

 The following two Glance features depend on this:

 https://review.openstack.org/#/c/46268/
 https://review.openstack.org/#/c/46283/

 Can someone take a look at this change?

 Thanks,
 -Paul

 [1]
 http://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-16 Thread Matthew Farrellee
IMHO, Big Data is even more nebulous and currently being pulled in many 
directions. Hadoop-as-a-Service may be too narrow. So, something in 
between, such as Data Processing, is a good balance.


Best,


matt

On 09/13/2013 08:37 AM, Abhishek Lahiri wrote:

IMHO data processing is too broad; it makes more sense to clarify this
program as big data as a service or simply openstack-Hadoop-as-a-service.

Thanks  Regards
Abhishek Lahiri

On Sep 12, 2013, at 9:13 PM, Nirmal Ranganathan rnir...@gmail.com
mailto:rnir...@gmail.com wrote:





On Wed, Sep 11, 2013 at 8:39 AM, Erik Bergenholtz
ebergenho...@hortonworks.com mailto:ebergenho...@hortonworks.com
wrote:


On Sep 10, 2013, at 8:50 PM, Jon Maron jma...@hortonworks.com
mailto:jma...@hortonworks.com wrote:


Openstack Big Data Platform


On Sep 10, 2013, at 8:39 PM, David Scott
david.sc...@cloudscaling.com
mailto:david.sc...@cloudscaling.com wrote:


I vote for 'Open Stack Data'


On Tue, Sep 10, 2013 at 5:30 PM, Zhongyue Luo
zhongyue@intel.com mailto:zhongyue@intel.com wrote:

Why not OpenStack MapReduce? I think that pretty much says
it all?


On Wed, Sep 11, 2013 at 3:54 AM, Glen Campbell
g...@glenc.io mailto:g...@glenc.io wrote:

performant isn't a word. Or, if it is, it means
having performance. I think you mean high-performance.


On Tue, Sep 10, 2013 at 8:47 AM, Matthew Farrellee
m...@redhat.com mailto:m...@redhat.com wrote:

Rough cut -

Program: OpenStack Data Processing
Mission: To provide the OpenStack community with an
open, cutting edge, performant and scalable data
processing stack and associated management interfaces.



Proposing a slightly different mission:

To provide a simple, reliable and repeatable mechanism by which to
deploy Hadoop and related Big Data projects, including management,
monitoring and processing mechanisms driving further adoption of
OpenStack.



+1. I liked the data processing aspect as well, since EDP api directly
relates to that, maybe a combination of both.




On 09/10/2013 09:26 AM, Sergey Lukjanov wrote:

It sounds too broad IMO. Looks like we need to
define Mission Statement
first.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Sep 10, 2013, at 17:09, Alexander Kuznetsov
akuznet...@mirantis.com wrote:

My suggestion OpenStack Data Processing.


On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov
slukja...@mirantis.com wrote:

Hi folks,

due to the Incubator Application we
should prepare Program name
and Mission statement for Savanna, so, I
want to start mailing
thread about it.

Please, provide any ideas here.

P.S. List of existing programs:
https://wiki.openstack.org/wiki/Programs
P.P.S.
https://wiki.openstack.org/wiki/Governance/NewPrograms

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.



 

Re: [openstack-dev] WebUI and user roles

2013-09-16 Thread Dolph Mathews
On Mon, Sep 16, 2013 at 11:35 AM, Miller, Mark M (EB SW Cloud - RD -
Corvallis) mark.m.mil...@hp.com wrote:

  FYI: We were thinking about using the new Keystone policy API, but fell
 back to using files on the file system due to not having a way to retrieve
 the policies from Keystone other than with an ID string. After saving the
 policy file you need to save the policy ID somewhere so you might as well
 just save the policy file as well. If the policy table also had a name
 field, then the policy file could be saved during OpenStack installation
 and retrieved later by each service using some algorithm on its name.


The SQL policy driver supports names (and any other arbitrary attribute),
although it's not part of the spec. We just need some agreement on the
some algorithm bit (and an implementation!).
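
To make the "some algorithm" bit concrete, a hedged sketch with 
python-keystoneclient; the name attribute below is the arbitrary extra 
discussed above, not part of the v3 spec:

    from keystoneclient.v3 import client

    keystone = client.Client(token='ADMIN_TOKEN',
                             endpoint='http://keystone:35357/v3')

    # upload a service's policy blob, tagging it with an extra attribute
    keystone.policies.create(blob=open('policy.json').read(),
                             type='application/json',
                             name='compute')  # extra attr; SQL driver keeps it

    # later, a service (or Horizon) could look its policy up by that name
    mine = [p for p in keystone.policies.list()
            if getattr(p, 'name', None) == 'compute']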


 

 Mark

 From: Dolph Mathews [mailto:dolph.math...@gmail.com]
 Sent: Monday, September 16, 2013 9:19 AM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] WebUI and user roles

 On Mon, Sep 16, 2013 at 11:01 AM, Adam Young ayo...@redhat.com wrote:

 Looks like this has grown into a full discussion.  Opening up to the dev
 mailing list.

 On 09/16/2013 10:43 AM, Lyle, David (Cloud Services) wrote:

 I did run into a couple of fundamental limitations with the policy API as
 implemented in Keystone.

 1)  policy_list and policy_get are admin_required by default in the policy
 file.  Obviously this can be changed in the policy file itself, but this is
 a bad default.  A regular user is most in need of policy rule enforcement
 so the existing default does not make sense from a UI perspective.

 Hmmm, this sounds like a mismatch of expectations.  I would think that the
 Horizon server would fetch the policy as an admin user, not the end user,
 and use that to tailor their UX.  It would only be a problem if that
 tailoring was done on the Client side in Javascript.  Why would it matter
 what access control for the policy was?  Why would the end user be
 requesting the policy?


 2)  The 3 param/return fields supported by the policy API are: blob, id,
 type (mime-type).  When trying to utilize multiple policy files (blobs)
 from several services we need a way to map the blob to a service type to
 know which rule set to apply.  I had considered lumping all the policy
 blobs into one, but there is no requirement that each policy rule will
 begin with e.g., identity: and several blobs could implement a rule
 default which could be specified differently.  So, I believe a
 service_type parameter is necessary.  Additionally, is there anything
 barring nova from uploading multiple policy blobs (perhaps different), each
 getting unique IDs, and then having several varying compute policy blobs to
 choose from?  Which one wins?

 I haven't looked deeply at the policy API until now:   It looks broken.  I
 would not be able to tell just from reading the code how to map a policy
 file to the service that needed it.  I would think that, upon service
 startup, it would request the policy file that mapped to it, either by
 endpoint with a fallback to a pre-service call.


 We stopped short of any policy-to-service/endpoint mapping because there
 were mixed expectations about how that should be done and no clear use case
 that fetching policies by ID / URL didn't satisfy a bit more simply.

  


 I would think that you would make a tree out of the rules.  At the root
 would be policy.  Underneath that would be the service, (then the endpoint
 in the future when we support multiple per service) and then the rules
 underneath those.  The rules would be a json dumps of the blob get from the
 policy_api.


 


 Having devstack load the policy files into keystone would help, but 1 and
 2 need to be addressed before those files are usable in Horizon.

 Thanks,
 David

 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com]
 Sent: Monday, September 16, 2013 8:16 AM
 To: Julie Pichon
 Cc: Matthias Runge; Gabriel Hurley; Lyle, David (Cloud Services)
 Subject: Re: WebUI and user roles

 On 09/16/2013 07:33 AM, Julie Pichon wrote:

 Adam Young ayo...@redhat.com wrote:

 Gabriel and I talked at the last summit about how Horizon could
 figure out what to show the user based on the roles that the user
 had.  At the time, I was thinking it wasn't something we could figure out
 at run time.

 I was wrong.

 The answer is plain.  We have the policy files in Keystone already,
 we just don't parse them.  Horizon has all the information it needs
 to figure out based on a token, what can this user do?

 I'm not certain how to make use of this, yet, but the kernel of the
 idea is there.

 Thanks Adam. David Lyle implemented RBAC functionality based on policy
 files in Havana [0]. I think one of the problems he found was that
 although policy files are in use, most services currently do not
 upload them 

Re: [openstack-dev] Issues with IPTables

2013-09-16 Thread Solly Ross
Quite a while.  RDO's documentation for configuring multinode Packstack with 
Neutron was a bit lacking, so after attempting to get that working for a while, 
I switched to following the Basic Install Guide 
(http://docs.openstack.org/trunk/basic-install/content/basic-install_intro.html).
  I also found the basic install guide catered for Fedora 
(http://docs.openstack.org/trunk/basic-install/yum/content/basic-install_intro.html),
 but that is sorely lacking in the actual instruction department, and is 
missing several steps.

If you would like, I can attach the raw draft of my notes.  Eventually, some of 
the changes or clarifications should make their way into the actual OpenStack 
Docs.

Best Regards,
Solly Ross

- Original Message -
From: Qing He qing...@radisys.com
To: sr...@redhat.com
Sent: Monday, September 16, 2013 1:14:42 PM
Subject: RE: Issues with IPTables

Solly,
A side question, how long did this process take you?

Thanks,

Qing

-Original Message-
From: Solly Ross [mailto:sr...@redhat.com] 
Sent: Monday, September 16, 2013 10:11 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Issues with IPTables

In an effort to improve/verify the OpenStack documentation with regard to RHEL 
and Fedora, I've been attempting to follow the basic install guides.  I've 
managed to create a working installation and set of instructions.  However, to 
do so I needed to disable the Neutron IPTables firewall, as it was blocking 
non-VM traffic.  Namely, it was blocking the GRE packets being used by Neutron. 
 Did I miss something, or is this a bug?

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WebUI and user roles

2013-09-16 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
I was thinking of something simple like policy_name to go along with policy_id. 
Then we can name it whatever we like.

Mark

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Monday, September 16, 2013 11:25 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] WebUI and user roles


On Mon, Sep 16, 2013 at 11:35 AM, Miller, Mark M (EB SW Cloud - RD - 
Corvallis) mark.m.mil...@hp.commailto:mark.m.mil...@hp.com wrote:
FYI: We were thinking about using the new Keystone policy API, but fell back to 
using files on the file system due to not having a way to retrieve the policies 
from Keystone other than with an ID string. After saving the policy file you 
need to save the policy ID somewhere so you might as well just save the policy 
file as well. If the policy table also had a name field, then the policy file 
could be saved during OpenStack installation and retrieved later by each 
service using some algorithm on its name.

The SQL policy driver supports names (and any other arbitrary attribute), 
although it's not part of the spec. We just need some agreement on the some 
algorithm bit (and an implementation!).


Mark

From: Dolph Mathews 
[mailto:dolph.math...@gmail.commailto:dolph.math...@gmail.com]
Sent: Monday, September 16, 2013 9:19 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] WebUI and user roles


On Mon, Sep 16, 2013 at 11:01 AM, Adam Young 
ayo...@redhat.commailto:ayo...@redhat.com wrote:
Looks like this has grown into a full discussion.  Opening up to the dev 
mailing list.

On 09/16/2013 10:43 AM, Lyle, David (Cloud Services) wrote:
I did run into a couple of fundamental limitations with the policy API as 
implemented in Keystone.

1)  policy_list and policy_get are admin_required by default in the policy 
file.  Obviously this can be changed in the policy file itself, but this is a 
bad default.  A regular user is most in need of policy rule enforcement so the 
existing default does not make sense from a UI perspective.
Hmmm, this sounds like a mismatch of expectations.  I would think that the 
Horizon server would fetch the policy as an admin user, not the end user, and 
use that to tailor their UX.  It would only be a problem if that tailoring was 
done on the Client side in Javascript.  Why would it matter what access control 
for the policy was?  Why would the end user be requesting the policy?

2)  The 3 param/return fields supported by the policy API are: blob, id, type 
(mime-type).  When trying to utilize multiple policy files (blobs) from several 
services we need a way to map the blob to a service type to know which rule set 
to apply.  I had considered lumping all the policy blobs into one, but there is 
no requirement that each policy rule will begin with e.g., identity: and 
several blobs could implement a rule default which could be specified 
differently.  So, I believe a service_type parameter is necessary.  
Additionally, is there anything barring nova from uploading multiple policy 
blobs (perhaps different), each getting unique IDs, and then having several 
varying compute policy blobs to choose from?  Which one wins?
I haven't looked deeply at the policy API until now:   It looks broken.  I 
would not be able to tell just from reading the code how to map a policy file 
to the service that needed it.  I would think that, upon service startup, it 
would request the policy file that mapped to it, either by endpoint with a 
fallback to a pre-service call.

We stopped short of any policy-to-service/endpoint mapping because there were 
mixed expectations about how that should be done and no clear use case that 
fetching policies by ID / URL didn't satisfy a bit more simply.


I would think that you would make a tree out of the rules.  At the root would 
be policy.  Underneath that would be the service, (then the endpoint in the 
future when we support multiple per service) and then the rules underneath 
those.  The rules would be a json dumps of the blob get from the policy_api.


Having devstack load the policy files into keystone would help, but 1 and 2 
need to be addressed before those files are usable in Horizon.

Thanks,
David

-Original Message-
From: Adam Young [mailto:ayo...@redhat.commailto:ayo...@redhat.com]
Sent: Monday, September 16, 2013 8:16 AM
To: Julie Pichon
Cc: Matthias Runge; Gabriel Hurley; Lyle, David (Cloud Services)
Subject: Re: WebUI and user roles

On 09/16/2013 07:33 AM, Julie Pichon wrote:
Adam Young ayo...@redhat.commailto:ayo...@redhat.com wrote:
Gabriel and I talked at the last summit about how Horizon could
figure out what to show the user based on the roles that the user
had.  At the time, I was thinking it wasn't something we could figure out at 
run time.

I was wrong.

The answer is plain.  We have the policy files in Keystone already,
we just don't parse them.  Horizon has all the information it needs
to figure out based on a token, what can this user do?

I'm not certain how to make 

Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-16 Thread Mike Spreitzer
data processing is surely a superset of big data.  Either, by itself, 
is way too vague.  But the wording that many people favor, which I will 
quote again, uses the vague term in a qualified way that makes it 
appropriately specific, IMHO.  Here is the wording again:

``To provide a simple, reliable and repeatable mechanism by which to 
deploy Hadoop and related Big Data projects, including management, 
monitoring and processing mechanisms driving further adoption of 
OpenStack.''

I think that saying related Big Data projects after Hadoop is fairly 
clear.  OTOH, I would not mind replacing Hadoop and related Big Data 
projects with the Hadoop ecosystem.

Regards,
Mike

Matthew Farrellee m...@redhat.com wrote on 09/16/2013 02:39:20 PM:

 From: Matthew Farrellee m...@redhat.com
 To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
 Date: 09/16/2013 02:40 PM
 Subject: Re: [openstack-dev] [savanna] Program name and Mission 
statement
 
 IMHO, Big Data is even more nebulous and currently being pulled in many 
 directions. Hadoop-as-a-Service may be too narrow. So, something in 
 between, such as Data Processing, is a good balance.
 
 Best,
 
 
 matt
 
 On 09/13/2013 08:37 AM, Abhishek Lahiri wrote:
  IMHO data processing is too broad; it makes more sense to clarify 
this
  program as big data as a service or simply 
openstack-Hadoop-as-a-service.
 
  Thanks  Regards
  Abhishek Lahiri
 
  On Sep 12, 2013, at 9:13 PM, Nirmal Ranganathan rnir...@gmail.com
  mailto:rnir...@gmail.com wrote:
 
 
 
 
  On Wed, Sep 11, 2013 at 8:39 AM, Erik Bergenholtz
  ebergenho...@hortonworks.com mailto:ebergenho...@hortonworks.com
  wrote:
 
 
  On Sep 10, 2013, at 8:50 PM, Jon Maron jma...@hortonworks.com
  mailto:jma...@hortonworks.com wrote:
 
  Openstack Big Data Platform
 
 
  On Sep 10, 2013, at 8:39 PM, David Scott
  david.sc...@cloudscaling.com
  mailto:david.sc...@cloudscaling.com wrote:
 
  I vote for 'Open Stack Data'
 
 
  On Tue, Sep 10, 2013 at 5:30 PM, Zhongyue Luo
  zhongyue@intel.com mailto:zhongyue@intel.com wrote:
 
  Why not OpenStack MapReduce? I think that pretty much 
says
  it all?
 
 
  On Wed, Sep 11, 2013 at 3:54 AM, Glen Campbell
  g...@glenc.io mailto:g...@glenc.io wrote:
 
  performant isn't a word. Or, if it is, it means
  having performance. I think you mean 
high-performance.
 
 
  On Tue, Sep 10, 2013 at 8:47 AM, Matthew Farrellee
  m...@redhat.com mailto:m...@redhat.com wrote:
 
  Rough cut -
 
  Program: OpenStack Data Processing
  Mission: To provide the OpenStack community with an
  open, cutting edge, performant and scalable data
  processing stack and associated management 
interfaces.
 
 
  Proposing a slightly different mission:
 
  To provide a simple, reliable and repeatable mechanism by which 
to
  deploy Hadoop and related Big Data projects, including 
management,
  monitoring and processing mechanisms driving further adoption of
  OpenStack.
 
 
 
  +1. I liked the data processing aspect as well, since EDP api 
directly
  relates to that, maybe a combination of both.
 
 
 
  On 09/10/2013 09:26 AM, Sergey Lukjanov wrote:
 
  It sounds too broad IMO. Looks like we need to
  define Mission Statement
  first.
 
  Sincerely yours,
  Sergey Lukjanov
  Savanna Technical Lead
  Mirantis Inc.
 
  On Sep 10, 2013, at 17:09, Alexander Kuznetsov
  akuznet...@mirantis.com wrote:
 
  My suggestion OpenStack Data Processing.
 
 
  On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov
  slukja...@mirantis.com wrote:
 
  Hi folks,
 
  due to the Incubator Application we
  should prepare Program name
  and Mission statement for Savanna, so, 
I
  want to start mailing
  thread about it.
 
  Please, provide any ideas here.
 
  P.S. List of existing programs:
  https://wiki.openstack.org/wiki/Programs
  P.P.S.
  

[openstack-dev] [nova] [Pci passthrough] bug? -- 'NoneType' object has no attribute 'support_requests'

2013-09-16 Thread David Kang

 Hi,

 I'm testing PCI passthrough features on Havana (single node installation).
I've installed OpenStack on CentOS 6.4 using EPEL.
The pci_passthrough_filter doesn't seem to be able to get the object 
'host_state.pci_stats'. 
Is it a bug?

 Thanks,
 David

 Here is the information of the test environment:

1. /etc/nova.conf

pci_alias={name:test, product_id:7190, vendor_id:8086}
pci_passthrough_whitelist=[{vendor_id:8086,product_id:7190}]


scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,PciPassthroughFilter

2. flavor

# nova flavor-list --extra-specs
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+---------------------------------------+
| ID | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs                           |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+---------------------------------------+
| 1  | m1.tiny | 512       | 1    | 0         |      | 1     | 1.0         | True      | {u'pci_passthrough:alias': u'test:1'} |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+---------------------------------------+

3. RPM information of nova-scheduler:
Name: openstack-nova-scheduler
Arch: noarch
Version : 2013.2
Release : 0.19.b3.el6
Size: 2.3 k
Repo: installed
From repo   : openstack-havana


4. /var/log/nova/scheduler.log

2013-09-16 17:04:51.259 13088 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') _load_plugins 
/usr/lib/python2.6/site-packages/stevedore/extension.py:70
2013-09-16 17:04:51.259 13088 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') _load_plugins 
/usr/lib/python2.6/site-packages/stevedore/extension.py:70
2013-09-16 17:04:51.259 13088 WARNING nova.scheduler.utils 
[req-267b2f38-825f-4609-82ef-6d4164e227b1 8ace6a952a0f4a9d81c435a2c8194fe9 
656fecdc92df43c2a047316e5a1e3a24] [instance: 
9a7e57e1-8c6f-4a18-94f0-c406aae99f9a] Setting instance to ERROR state.
2013-09-16 17:04:51.398 13088 ERROR nova.openstack.common.rpc.amqp 
[req-267b2f38-825f-4609-82ef-6d4164e227b1 8ace6a952a0f4a9d81c435a2c8194fe9 
656fecdc92df43c2a047316e5a1e3a24] Exception during message handling
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp **args)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/scheduler/manager.py, line 160, in 
run_instance
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp context, 
ex, request_spec)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/scheduler/manager.py, line 147, in 
run_instance
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp 
legacy_bdm_in_spec)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py, line 87, 
in schedule_run_instance
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp 
filter_properties, instance_uuids)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py, line 
336, in _schedule
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp 
filter_properties, index=num)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/scheduler/host_manager.py, line 397, in 
get_filtered_hosts
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp hosts, 
filter_properties, index)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/filters.py, line 82, in 
get_filtered_objects
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp 
list_objs = list(objs)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/filters.py, line 43, in filter_all
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp if 
self._filter_one(obj, filter_properties):
2013-09-16 
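
For what it's worth, the failing check has roughly this shape (paraphrased 
from memory, not a verbatim quote of the filter), so the AttributeError means 
host_state.pci_stats is None, i.e. the compute node never reported PCI stats 
to the scheduler:

    # paraphrase of PciPassthroughFilter.host_passes
    def host_passes(self, host_state, filter_properties):
        pci_requests = filter_properties.get('pci_requests')
        if pci_requests:
            # raises "'NoneType' object has no attribute 'support_requests'"
            # when host_state.pci_stats is None
            return host_state.pci_stats.support_requests(pci_requests)
        return True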

Re: [openstack-dev] [DevStack] Generalize config file settings

2013-09-16 Thread Dean Troyer
On Fri, Sep 13, 2013 at 1:20 PM, Everett Toews
everett.to...@rackspace.comwrote:

 On Sep 13, 2013, at 6:10 AM, Sean Dague wrote:

  Because inevitably people ask for copies of other folks configs to
 duplicate things, and a single file is easier to pass around than a tree.
 But that would mean a unique parser to handle the top level stanza.

 +1

 I share localrc files all the time.


Well, I wrote a parser for a modified INI file format anyway and managed to
put everything into a single local.conf file.  There is a bit of clumsiness
around localrc in order to maintain backward-compatibility and a
deprecation cycle to go through for some existing config variables.

The current incarnation is in https://review.openstack.org/#/c/46768/, try
it out, let's see if it is any good.  I did write up some more detailed
docs at http://hackstack.org/x/blog/2013/09/07/devstack-local-config/ that
will morph into the doc page in devstack.org when we're done.
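
For anyone who has not read the post yet, the shape of the new file is roughly 
this (adapted from the blog post; treat it as a sketch of the format rather 
than a reference):

    [[local|localrc]]
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD

    [[post-config|$NOVA_CONF]]
    [DEFAULT]
    use_syslog = True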

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Timeline for upcoming PTL and TC elections

2013-09-16 Thread Thierry Carrez
It's that time of the year again !

In the next weeks we'll renew our PTLs (one for each OpenStack program)
and most Technical Committee members. The timeline for those elections
is as follows:

* Sep 20 - Sep 26: Open candidacy to PTL positions
* Sep 27 - Oct 3: PTL elections
* Oct 4 - Oct 10: Open candidacy to TC positions
* Oct 11 - Oct 17: TC elections

Anita Kuno volunteered to serve as election official for this round, so
you should expect some emails from her starting at the end of this week.

See more details at:
https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013
https://wiki.openstack.org/wiki/TC_Elections_Fall_2013

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WebUI and user roles

2013-09-16 Thread Lyle, David (Cloud Services)
Adam Young ayo...@redhat.com wrote:

 Looks like this has grown into a full discussion.  Opening up to the dev 
 mailing list.

 On 09/16/2013 10:43 AM, Lyle, David (Cloud Services) wrote:
 I did run into a couple of fundamental limitations with the policy API as 
 implemented in Keystone.

 1)  policy_list and policy_get are admin_required by default in the policy 
 file.  Obviously this can be changed in the policy file itself, but this is 
 a bad default.  A regular user is most in need of policy rule enforcement so 
 the existing default does not make sense from a UI perspective.
Hmmm, this sounds like a mismatch of expectations.  I would think that the 
Horizon server would fetch the policy as an admin user, not the end user, and 
use that to tailor their UX.  It would only be a problem if that tailoring was 
done on the Client side in Javascript.  Why would it matter what access 
control for the policy was?  Why would the end user be requesting the policy?

Horizon does not have an admin-authenticated user running in the background.  
All privileges are based on the roles returned in the token from keystone when 
authenticating.  So only by allowing read access to the policy file for 
non-admin users can the policy file be accessed at all.


 2)  The 3 param/return fields supported by the policy API are: blob, id, 
 type (mime-type).  When trying to utilize multiple policy files (blobs) from 
 several services we need a way to map the blob to a service type to know 
 which rule set to apply.  I had considered lumping all the policy blobs into 
 one, but there is no requirement that each policy rule will begin with e.g., 
 identity: and several blobs could implement a rule default which could 
 be specified differently.  So, I believe a service_type parameter is 
 necessary.  Additionally, is there anything barring nova from uploading 
 multiple policy blobs (perhaps different), each getting unique IDs, and then 
 having several varying compute policy blobs to choose from?  Which one wins?
I haven't looked deeply at the policy API until now:   It looks broken.  
I would not be able to tell just from reading the code how to map a policy 
file to the service that needed it.  I would think that, upon service startup, 
it would request the policy file that mapped to it, either by endpoint with a 
fallback to a pre-service call.

I would think that you would make a tree out of the rules.  At the root would 
be policy.  Underneath that would be the service, (then the endpoint in the 
future when we support multiple per service) and then the rules underneath 
those.  The rules would be a json dumps of the blob get from the policy_api.

A service type indicator would be the base addition, again to differentiate 
ambiguous rules between blobs.  When the policy blob is uploaded, the service 
type should be specified.  If that specifier is the service endpoint, that 
would work well and map extensibly.

David



 Having devstack load the policy files into keystone would help, but 1 and 2 
 need to be addressed before those files are usable in Horizon.

 Thanks,
 David

 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com]
 Sent: Monday, September 16, 2013 8:16 AM
 To: Julie Pichon
 Cc: Matthias Runge; Gabriel Hurley; Lyle, David (Cloud Services)
 Subject: Re: WebUI and user roles

 On 09/16/2013 07:33 AM, Julie Pichon wrote:
 Adam Young ayo...@redhat.com wrote:
 Gabriel and I talked at the last summit about how Horizon could 
 figure out what to show the user based on the roles that the user 
 had.  At the time, I was thinking it wasn't something we could figure out 
 at run time.

 I was wrong.

 The answer is plain.  We have the policy files in Keystone already, 
 we just don't parse them.  Horizon has all the information it needs 
 to figure out based on a token, what can this user do?

 I'm not certain how to make use of this, yet, but the kernel of the 
 idea is there.
 Thanks Adam. David Lyle implemented RBAC functionality based on 
 policy files in Havana [0]. I think one of the problems he found was 
 that although policy files are in use, most services currently do not 
 upload them to Keystone so they are not yet queryable (?).
 That is true, but it is a deployment issue that is easily solvable. We can 
 have devstack, packstack, and whatever else, upload the policy files at the 
 start.  They are all in the various deployments, so it is really a trivial 
 step to load them into Keystone.

 Regards,

 Julie


 [0] https://blueprints.launchpad.net/horizon/+spec/rbac

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WebUI and user roles

2013-09-16 Thread Dolph Mathews
On Mon, Sep 16, 2013 at 6:18 PM, Lyle, David (Cloud Services) 
david.l...@hp.com wrote:

 Adam Young ayo...@redhat.com wrote:

  Looks like this has grown into a full discussion.  Opening up to the dev
 mailing list.

  On 09/16/2013 10:43 AM, Lyle, David (Cloud Services) wrote:
  I did run into a couple of fundamental limitations with the policy API
 as implemented in Keystone.
 
  1)  policy_list and policy_get are admin_required by default in the
 policy file.  Obviously this can be changed in the policy file itself, but
 this is a bad default.  A regular user is most in need of policy rule
 enforcement so the existing default does not make sense from a UI
 perspective.
 Hmmm, this sounds like a mismatch of expectations.  I would think that
 the Horizon server would fetch the policy as an admin user, not the end
 user, and use that to tailor the UX.  It would only be a problem if that
 tailoring were done on the client side in Javascript.  Why would it matter
 what the access control for the policy was?  Why would the end user be
 requesting the policy?

 Horizon does not have an admin-authenticated user running in the
 background.  All privileges are based on the roles returned in the token
 from keystone when authenticating.  So allowing read access to the policy
 files for non-admin users is the only way Horizon can access them at all.


Given that keystone doesn't determine what privileges/capabilities a role
actually provides... Horizon shouldn't be expected to correctly interpret
every service's policy blob. A service is free to change its policy
enforcement, and Horizon shouldn't have to duplicate that functionality.

In addition, every service shouldn't be expected to use centralized policy
storage.

From my perspective, it only makes sense for Horizon to discover authorized
capabilities by making authenticated requests to each service (for
example, via OPTIONS).
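
As a rough sketch of what such discovery could look like (the endpoint, the
token value and the reliance on the Allow header are all assumptions; no
service currently advertises per-user capabilities this way):

    import requests

    # Probe a hypothetical compute endpoint with the user's scoped token;
    # if the service reported permitted verbs in the Allow header, Horizon
    # could tailor its UI from that instead of parsing policy blobs.
    resp = requests.options(
        "https://compute.example.com/v2/demo/servers",
        headers={"X-Auth-Token": "a-user-scoped-token"})
    allowed = resp.headers.get("Allow", "").split(", ")
    show_create_button = "POST" in allowed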



 
  2)  The 3 param/return fields supported by the policy API are: blob,
 id, type (mime-type).  When trying to utilize multiple policy files (blobs)
 from several services we need a way to map the blob to a service type to
 know which rule set to apply.  I had considered lumping all the policy
 blobs into one, but there is no requirement that each policy rule will
 begin with e.g. "identity:", and several blobs could implement a rule like
 "default" which could be specified differently.  So, I believe a
 service_type parameter is necessary.  Additionally, is there anything
 barring nova from uploading multiple policy blobs (perhaps different), each
 getting unique IDs, and then having several varying compute policy blobs to
 choose from?  Which one wins?
 I haven't looked deeply at the policy API until now: it looks broken.
 I would not be able to tell just from reading the code how to map a
 policy file to the service that needs it.  I would think that, upon
 service startup, it would request the policy file that mapped to it, either
 by endpoint with a fallback to a per-service call.

 I would think that you would make a tree out of the rules.  At the root
 would be policy.  Underneath that would be the service (then the endpoint,
 in the future when we support multiple per service), and then the rules
 underneath those.  The rules would be a JSON dump of the blob fetched from
 the policy API.

 A service type indicator would be the basic addition needed to
 differentiate ambiguous rules between blobs.  When the policy blob is
 uploaded, the service type should be specified.  If that specifier is the
 service endpoint, that would work well and map extensibly.

 David


 
  Having devstack load the policy files into keystone would help, but 1
 and 2 need to be addressed before those files are usable in Horizon.
 
  Thanks,
  David
 
  -Original Message-
  From: Adam Young [mailto:ayo...@redhat.com]
  Sent: Monday, September 16, 2013 8:16 AM
  To: Julie Pichon
  Cc: Matthias Runge; Gabriel Hurley; Lyle, David (Cloud Services)
  Subject: Re: WebUI and user roles
 
  On 09/16/2013 07:33 AM, Julie Pichon wrote:
  Adam Young ayo...@redhat.com wrote:
  Gabriel and I talked at the last summit about how Horizon could
  figure out what to show the user based on the roles that the user
  had.  At the time, I was thinking it wasn't something we could
 figure out at run time.
 
  I was wrong.
 
  The answer is plain.  We have the policy files in Keystone already,
  we just don't parse them.  Horizon has all the information it needs
  to figure out based on a token, what can this user do?
 
  I'm not certain how to make use of this, yet, but the kernel of the
  idea is there.
  Thanks Adam. David Lyle implemented RBAC functionality based on
  policy files in Havana [0]. I think one of the problems he found was
  that although policy files are in use, most services currently do not
  upload them to Keystone so they are not yet queryable (?).
  That is true, but it is a deployment issue that is easily solvable. We
 can have devstack, packstack, 

Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-16 Thread Joshua Hesketh


On 9/16/13 10:37 PM, Dolph Mathews wrote:


On Mon, Sep 16, 2013 at 5:31 AM, Michael Still mi...@stillhq.com wrote:


On Fri, Sep 13, 2013 at 7:51 AM, Dolph Mathews
dolph.math...@gmail.com wrote:

 ++ Data backups are a solved problem, and no DB admin should trust an
 application to perform its own backups.

I'm not completely sure I agree. Consider the case where a cloud with
active users undertakes an upgrade. The migrations run, and they allow
user traffic to hit the installation. They then discover there is a
serious problem and now need to rollback. However, they can't just
restore a database backup, because the database is no longer in a
consistent state compared with the hypervisors -- users might have
created or deleted instances for example.


In this scenario if we could downgrade reliably, they could force a
downgrade with db sync, and then revert the packages they had
installed to the previous version.

How would they handle this scenario with just database backups?


Great point, but I still wouldn't *rely* on an application to manage 
its own data backups :)


I don't think Michael was saying anybody should rely on migrations to 
manage their own backups, but that downgrades can serve an edge case that 
database snapshots cannot. In the scenario given, I would imagine that 
the administrators did have backups but wanted to avoid restoring them so 
as not to lose any newly entered data. If the migration downgrade were to 
fail, they would still have the backups and be no worse off than they would 
have been without them. However, if the migration downgrade works, then 
they get the benefit of not (necessarily) losing new user data.
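
For context, the kind of reversible migration under discussion looks roughly 
like the following sqlalchemy-migrate sketch (the table and column names are 
invented for illustration):

    import migrate  # noqa -- adds create_column/drop_column to Table
    from sqlalchemy import Column, MetaData, String, Table

    meta = MetaData()

    def upgrade(migrate_engine):
        # Adding a nullable column is cleanly reversible.
        meta.bind = migrate_engine
        instances = Table('instances', meta, autoload=True)
        instances.create_column(Column('node_label', String(255)))

    def downgrade(migrate_engine):
        # Dropping the column discards anything written to it since the
        # upgrade -- exactly the data-loss trade-off debated above.
        meta.bind = migrate_engine
        instances = Table('instances', meta, autoload=True)
        instances.drop_column('node_label')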


Cheers,
Josh

--
Rackspace Australia



Michael

--
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

-Dolph


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Tuskar Names Clarification Unification

2013-09-16 Thread Mike Spreitzer
 From: Jaromir Coufal jcou...@redhat.com
 To: openstack-dev@lists.openstack.org, 
 Date: 09/16/2013 11:51 AM
 Subject: Re: [openstack-dev] [Tuskar] Tuskar Names Clarification  
Unification
 
 Hi,
 
 after few days of gathering information, it looks that no more new 
 ideas appear there, so let's take the last round of voting for names
 which you prefer. It's important for us to get on the same page.

I am concerned that the proposals around the term 'rack' do not recognize 
that there might be more than one layer in the organization.

Is it more important to get appropriately abstract and generic terms, or 
is the desire to match common concrete terms?

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [CI][Ceilometer] Please help to understand why migrations are (not?) working in gate-tempest-devstack-vm-full

2013-09-16 Thread Jay Pipes

Hi all,

I submitted a patch earlier that adds a migration to Ceilometer:

https://review.openstack.org/#/c/46841/

The patch only adds a SQLAlchemy migration file. Nothing more.

My patch failed some gate tests, so I went to investigate the cause of 
the failures, and I saw in the console log [1] the following:


2013-09-17 04:09:09.876 | 2013-09-17 04:09:09 + mysql -uroot -psecret 
-hlocalhost -e 'DROP DATABASE IF EXISTS ceilometer;'
2013-09-17 04:09:09.879 | 2013-09-17 04:09:09 + mysql -uroot -psecret 
-hlocalhost -e 'CREATE DATABASE ceilometer CHARACTER SET utf8;'
2013-09-17 04:09:09.881 | 2013-09-17 04:09:09 + 
/usr/local/bin/ceilometer-dbsync
2013-09-17 04:09:09.883 | 2013-09-17 04:09:09 2013-09-17 04:09:09.131 
23924 INFO migrate.versioning.api [-] 0 - 1...

snip
2013-09-17 04:09:09.928 | 2013-09-17 04:09:09 2013-09-17 04:09:09.549 
23924 INFO migrate.versioning.api [-] 9 - 10...
2013-09-17 04:09:09.931 | 2013-09-17 04:09:09 2013-09-17 04:09:09.560 
23924 INFO migrate.versioning.api [-] done
2013-09-17 04:09:23.191 | Process leaked file descriptors. See 
http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build 
for more information

2013-09-17 04:09:23.707 | Build step 'Execute shell' marked build as failure

Here's where I'm stumped... I added the 16th migration to Ceilometer. 
But only 10 migrations ran. So...


1) What happened to the other migrations? I *think* they actually ran -- 
because in looking at the MySQL slow log for the test run, I saw this:


UPDATE migrate_version SET version=15 WHERE migrate_version.version = 14 
AND migrate_version.repository_id = 'ceilometer';


So at least up until migration 15 succeeded... but where is the output 
in the console log?
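
(Side note for anyone reproducing this: one way to confirm how far the 
migrations really got, independent of the console output, is to ask 
sqlalchemy-migrate directly. A sketch follows; the repository path is my 
assumption about where ceilometer keeps its migrate_repo:)

    from migrate.versioning import api

    # Reads the version recorded in the migrate_version table; the URL
    # mirrors the gate's mysql setup shown above.
    url = 'mysql://root:secret@localhost/ceilometer'
    repo = 'ceilometer/storage/sqlalchemy/migrate_repo'
    print(api.db_version(url, repo))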


2) What is the error that is failing the test run? I grep for "error" 
and "fail" and don't see anything that indicates an error occurred -- 
but perhaps I'm not looking in the right place?


3) Does the leaked file descriptors thing noted above have anything to 
do with running database migrations?


4) Do the Ceilometer logs get screen-scraped like the other services? I 
didn't see them in the log list [2], but perhaps I'm not looking in the 
right place?


Any and all insight would be greatly appreciated.

Best,
-jay

[1] 
http://logs.openstack.org/41/46841/2/check/gate-tempest-devstack-vm-full/c16c3be/console.html
[2] 
http://logs.openstack.org/41/46841/2/check/gate-tempest-devstack-vm-full/c16c3be/logs/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [scheduler] Bringing things together for Icehouse

2013-09-16 Thread Mike Spreitzer
I have written a brief document, with pictures.  See 
https://docs.google.com/document/d/1hQQGHId-z1A5LOipnBXFhsU3VAMQdSe-UXvL4VPY4ps

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Program name and Mission statement

2013-09-16 Thread Alexander Kuznetsov
Another variant: *Big Data Processing*. This mission reflects the nature of
Savanna more precisely than just Data Processing. Also, this name is less
confusing than just Data Processing.


On Tue, Sep 17, 2013 at 12:11 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

 "data processing" is surely a superset of "big data".  Either, by itself,
 is way too vague.  But the wording that many people favor, which I will
 quote again, uses the vague term in a qualified way that makes it
 appropriately specific, IMHO.  Here is the wording again:

 ``To provide a simple, reliable and repeatable mechanism by which to
 deploy Hadoop and related Big Data projects, including management,
 monitoring and processing mechanisms driving further adoption of
 OpenStack.''

 I think that saying "related Big Data projects" after "Hadoop" is fairly
 clear.  OTOH, I would not mind replacing "Hadoop and related Big Data
 projects" with "the Hadoop ecosystem".

 Regards,
 Mike

 Matthew Farrellee m...@redhat.com wrote on 09/16/2013 02:39:20 PM:

  From: Matthew Farrellee m...@redhat.com
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org,
  Date: 09/16/2013 02:40 PM
  Subject: Re: [openstack-dev] [savanna] Program name and Mission statement
 
  IMHO, Big Data is even more nebulous and currently being pulled in many
  directions. Hadoop-as-a-Service may be too narrow. So, something in
  between, such as Data Processing, is a good balance.
 
  Best,
 
 
  matt
 
  On 09/13/2013 08:37 AM, Abhishek Lahiri wrote:
   IMHO data processing is too broad; it makes more sense to clarify this
   program as big data as a service or simply
   openstack-Hadoop-as-a-service.
  
   Thanks  Regards
   Abhishek Lahiri
  
   On Sep 12, 2013, at 9:13 PM, Nirmal Ranganathan rnir...@gmail.com wrote:
  
  
  
  
   On Wed, Sep 11, 2013 at 8:39 AM, Erik Bergenholtz
   ebergenho...@hortonworks.com wrote:
  
  
   On Sep 10, 2013, at 8:50 PM, Jon Maron jma...@hortonworks.com wrote:
  
   Openstack Big Data Platform
  
  
   On Sep 10, 2013, at 8:39 PM, David Scott
   david.sc...@cloudscaling.com wrote:
  
   I vote for 'Open Stack Data'
  
  
   On Tue, Sep 10, 2013 at 5:30 PM, Zhongyue Luo
   zhongyue@intel.com wrote:
  
   Why not OpenStack MapReduce? I think that pretty much says
   it all?
  
  
   On Wed, Sep 11, 2013 at 3:54 AM, Glen Campbell
   g...@glenc.io wrote:
  
   performant isn't a word. Or, if it is, it means
   having performance. I think you mean
 high-performance.
  
  
   On Tue, Sep 10, 2013 at 8:47 AM, Matthew Farrellee
   m...@redhat.com wrote:
  
   Rough cut -
  
   Program: OpenStack Data Processing
   Mission: To provide the OpenStack community with an
   open, cutting edge, performant and scalable data
   processing stack and associated management
 interfaces.
  
  
   Proposing a slightly different mission:
  
   To provide a simple, reliable and repeatable mechanism by which to
   deploy Hadoop and related Big Data projects, including management,
   monitoring and processing mechanisms driving further adoption of
   OpenStack.
  
  
  
   +1. I liked the data processing aspect as well, since EDP api directly
   relates to that, maybe a combination of both.
  
  
  
   On 09/10/2013 09:26 AM, Sergey Lukjanov wrote:
  
   It sounds too broad IMO. Looks like we need to
   define Mission Statement
   first.
  
   Sincerely yours,
   Sergey Lukjanov
   Savanna Technical Lead
   Mirantis Inc.
  
   On Sep 10, 2013, at 17:09, Alexander Kuznetsov
   akuznet...@mirantis.com wrote:
  
   My suggestion OpenStack Data Processing.
  
  
   On Tue, Sep 10, 2013 at 4:15 PM, Sergey Lukjanov
   slukja...@mirantis.com