Re: [openstack-dev] [fuel] Fuel API settings reference

2015-06-15 Thread Przemyslaw Kaminski
Well, I suggest continuing

https://review.openstack.org/#/c/179051/

It basically requires updating the docstrings of the handler functions
according to [1]. This way the documentation stays as close to the code
as possible.

With some work, one could probably add automatic generation of docs
from the JSONSchema as well.
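For illustration, a handler docstring in the sphinxcontrib-httpdomain format could look roughly like the sketch below; the class name, route, and query parameter are hypothetical, not actual Nailgun code:

```python
# Hypothetical handler with an httpdomain-style docstring; the class,
# route, and parameters are illustrative, not the real Nailgun code.
class NodeCollectionHandler(object):
    def GET(self):
        """Return the list of all nodes.

        .. http:get:: /api/nodes/

           :query cluster_id: optional environment ID to filter by
           :statuscode 200: node list returned successfully
        """
        return []
```

Sphinx with the sphinxcontrib-httpdomain extension can then render such docstrings into REST API reference pages.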

P.

[1] http://pythonhosted.org/sphinxcontrib-httpdomain/

On 06/15/2015 03:21 PM, Andrew Woodward wrote:
 I think there is some desire to see more documentation around here as
 there are some odd interactions with parts of the data payload, and
 perhaps documenting these may improve some of them.
 
 I think the gaps in order of most used are:
 * node object create / update
 * environment networks (the fact that metadata can't be updated kills me)
 * environment settings (the separate API for hidden and non-hidden settings kills me)
 * release update
 * role add/update
 
 After these are updated I think we can move on to common but less used
 * node interface assignment
 * node disk assignment
 
 
 
 On Mon, Jun 15, 2015 at 8:09 AM Oleg Gelbukh ogelb...@mirantis.com wrote:
 
 Good day, fellow fuelers
 
 The Fuel API is a powerful tool that allows very fine tuning of
 deployment settings and parameters, and we all know that the UI exposes
 only a fraction of the full range of attributes a client can pass to
 the Fuel installer.
 
 However, there is very little documentation explaining which settings
 are accepted by Fuel objects, what their meanings are, and what their
 syntax is. There is a main reference document for the API [1], but it
 gives almost no insight into the payload of parameters that each
 entity accepts. What they are and what they are for seems to be mostly
 scattered tribal knowledge.
 
 I would like to understand whether there is a need for such a document
 among the developers and deployers who consume the Fuel API. Or
 perhaps such a document, or an effort to create one, already exists?
 
 --
 Best regards,
 Oleg Gelbukh
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 -- 
 --
 Andrew Woodward
 Mirantis
 Fuel Community Ambassador
 Ceph Community 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pecan migration status

2015-05-08 Thread Przemyslaw Kaminski
Ping, with [1] as an additional argument for migrating.

[1]
https://openstack.nimeyo.com/43700/openstack-keystone-rehashing-pecan-falcon-other-wsgi-debate?qa_q=rehashing+pecan

P.

On 03/24/2015 09:09 AM, Przemyslaw Kaminski wrote:
 BTW, the old URLs do not yet exactly match the new ones. We need to
 write a test that checks the whole urls.py list against the new
 handlers' URLs to make sure nothing was missed.
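Such a coverage check could be sketched roughly as follows; the URL lists here are invented placeholders, since the real test would read them from urls.py and from the Pecan route table:

```python
# Sketch of a URL-coverage check; OLD_URLS and NEW_URLS are placeholder
# data, not the actual Nailgun route tables.
OLD_URLS = ["/nodes/", "/nodes/(?P<obj_id>\\d+)/", "/clusters/"]
NEW_URLS = ["/nodes/", "/clusters/"]

def missing_urls(old, new):
    """Return every old URL with no counterpart among the new handlers."""
    return sorted(set(old) - set(new))

# A non-empty result means some handler was missed during the migration.
missing = missing_urls(OLD_URLS, NEW_URLS)
```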
 
 P.
 
 On 03/24/2015 08:46 AM, Przemyslaw Kaminski wrote:
 Hello,

 I want to summarize the work I've done in my spare time on migrating our
 API to Pecan [1]. This is partially based on Nicolay's previous work [2].
 One test is still failing there, but it's some DB lock and I'm not 100%
 sure whether it's caused by something not yet done on the Pecan side or
 whether some bug popped up (I was getting a different DB lock before,
 but it disappeared after rebasing a fix for [5]).

 My main contribution here is the 'reverse' method [3], which is not
 provided by default in Pecan. I have kept compatibility with the
 original reverse method in our code. I have additionally added a 'qs'
 keyword argument that is used for adding a query string to the URL (see
 test_node_nic_handler.py::test_change_mac_of_assigned_nics::get_nodes,
 test_node_nic_handler.py::test_remove_assigned_interfaces::get_nodes).

 I decided to keep Nicolay's original idea of copying all handlers to
 the v2 directory rather than just modifying the original v1 handlers,
 and concentrated instead on removing the hacks around Pecan as in [6]
 (with post_all, post_one, put_all, put_one, etc.). Merging the current
 v2 into v1 should drastically decrease the number of changed lines in
 this patchset.

 I have so far found one fault in our API's URLs that isn't easily
 handled by Pecan (some custom _route function would help) and IMHO
 should be fixed by rearranging the URLs instead of adding _route hacks:
 /nodes/interfaces and /nodes/1/interfaces require the same get_all
 method in the interfaces controller, and the only usage of
 /nodes/interfaces is a batch node interface update via PUT. The current
 v2 can be merged into v1 with some effort.

 We sometimes use a PUT request without specifying an object's ID --
 this is unsupported in Pecan but can easily be worked around by adding
 a dummy keyword argument to the function's definition:

 def put(self, dummy=None)

 Some bugs in tests were found and fixed (for example, wrong content-type
 in headers in [4]).

 I haven't put enough thought into error handling there yet; some stuff
 is implemented in hooks/error.py, but I'm not fully satisfied with it.
 Most of the unfinished stuff is marked with TODO(pkaminski).

 P.

 [1] https://review.openstack.org/158661
 [2] https://review.openstack.org/#/c/99069/6
 [3]
 https://review.openstack.org/#/c/158661/35/nailgun/nailgun/api/__init__.py
 [4]
 https://review.openstack.org/#/c/158661/35/nailgun/nailgun/test/unit/test_handlers.py
 [5] https://bugs.launchpad.net/fuel/+bug/1433528
 [6]
 https://review.openstack.org/#/c/99069/6/nailgun/nailgun/api/v2/controllers/base.py


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Swagger documentation

2015-05-05 Thread Przemyslaw Kaminski
Hello,

I prepared a small PoC of Swagger [1] as a proposal for [2]. If you want
to test it out, check out that commit into your repo, start Nailgun
locally and point your browser to [3]. Basically you just need to put
Swagger-UI [4] somewhere and point your browser to /dist/index.html
there, filling in the URL. An OPTIONS handler with appropriate CORS
settings is required on the API side if Swagger UI's host is different
from the API's. I've enabled this when the settings.DEVELOPMENT variable
in the API is set to True.
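For reference, the response headers such an OPTIONS handler typically needs look roughly like this; the helper function is an illustrative sketch, not the actual Nailgun implementation, though the header names are the standard CORS ones:

```python
# Illustrative CORS headers for an OPTIONS preflight response, so that a
# Swagger UI served from a different host can call the API. The function
# itself is a sketch, not the actual Nailgun code.
def cors_headers(allowed_origin="*"):
    return {
        "Access-Control-Allow-Origin": allowed_origin,
        "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type, Authorization",
    }
```

In production one would restrict the allowed origin rather than use "*", which is why gating this behind settings.DEVELOPMENT makes sense.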

As a demo I modified the documentation of LogEntryCollectionHandler.
Basically we should fix our docstrings to comply with [5] and extend and
clean up my docutils parsing logic in swagger.py. The plus side is that
our Sphinx documentation will get better too.

Please test it and give feedback.

P.

[1] https://review.openstack.org/#/c/179051/
[2] https://bugs.launchpad.net/fuel/+bug/1449030
[3]
http://172.18.163.4/swagger-ui/dist/index.html?url=http://localhost:8000/api/v1/docs#!/default/get_logs
[4] https://github.com/swagger-api/swagger-ui
[5] http://pythonhosted.org/sphinxcontrib-httpdomain/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Julia Aranovich for fuel-web core

2015-04-30 Thread Przemyslaw Kaminski
+1, indeed Julia's reviews are very thorough.

P.

On 04/30/2015 11:28 AM, Vitaly Kramskikh wrote:
 Hi,
 
 I'd like to nominate Julia Aranovich
 (http://stackalytics.com/report/users/jkirnosova) for the fuel-web
 (https://github.com/stackforge/fuel-web) core team. Julia's reviews are
 always thorough and have decent quality. She is one of the top
 contributors and reviewers in fuel-web repo (mostly for JS/UI stuff).
 
 Please vote by replying with +1/-1.
 
 -- 
 Vitaly Kramskikh,
 Fuel UI Tech Lead,
 Mirantis, Inc.
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Przemyslaw Kaminski
Since there was no reply here, I have taken steps to become a core
reviewer of the (orphaned) repos [1], [2], [3], [4].

Should anyone want to take responsibility for them, please write to me.

I have also taken steps to get the fuel-qa script working and will make
sure the tests pass with the new manifests. I will also update the
manifests' version so that there are no deprecation warnings.

P.

[1]
https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-glusterfs,access
[2]
https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-group-based-policy,access
[3]
https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-nfs,access
[4]
https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-cinder-netapp,access

On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
 Hello,
 
 I've been investigating bug [1] concentrating on the
 fuel-plugin-external-glusterfs.
 
 First of all, [2] shows there are no core reviewers in Gerrit for this
 repo, so even if there were a patch to fix [1], no one could merge it.
 fuel-plugin-external-nfs has the same issue; I haven't checked other
 repos. Why is this? Can we fix it quickly?
 
 Second, the plugin throws:
 
 DEPRECATION WARNING: The plugin has old 1.0 package format, this format
 does not support many features, such as plugins updates, find plugin in
 new format or migrate and rebuild this one.
 
 I don't think this is appropriate for a plugin that is listed in the
 official catalog [3].
 
 Third, I created a supposed fix for this bug [4] and wanted to test it
 with the fuel-qa scripts. Basically I built an .fp file with
 fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH variable
 to point to that .fp file and then ran the
 group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
 Then I reverted the changes from the patch and the test still failed
 [6]. But installing the plugin by hand shows that it's available there,
 so I don't know whether the plugin test is broken or whether I'm still
 missing something.
 
 It would be nice to get some QA help here.
 
 P.
 
 [1] https://bugs.launchpad.net/fuel/+bug/1415058
 [2] https://review.openstack.org/#/admin/groups/577,members
 [3] https://fuel-infra.org/plugins/catalog.html
 [4] https://review.openstack.org/#/c/169683/
 [5]
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
 [6]
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Przemyslaw Kaminski
Hello,

Done, added you.

I have already created something that should fix the tests for glusterfs: [1]

Also, fuel-qa is not entirely correct for testing the glusterfs
plugin; here's the proposed fix: [2]

Unfortunately, the tests still fail with this message: [3]

I got an error about GLUSTER_CLUSTER_ENDPOINT being undefined, so I set
it like GLUSTER_CLUSTER_ENDPOINT=127.0.0.2:/mnt, but I'm not sure if
that's correct (the CI job has some custom-set-up server with glusterfs
for this).

Here are the logs [4]. Will you take over?

P.

[1] https://review.openstack.org/#/c/169683/
[2] https://review.openstack.org/170094
[3] http://sprunge.us/BYVY
[4]
https://www.dropbox.com/s/io6aeogidc49qxk/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_02__10_52_22.tar.xz?dl=0

On 04/02/2015 12:07 PM, Stanislaw Bogatkin wrote:
 Hi Przemyslaw,
 I would be glad to be a core reviewer for fuel-plugin-glusterfs, as it
 seems I was the only person who pushed commits to it.
 
 On Thu, Apr 2, 2015 at 10:47 AM, Przemyslaw Kaminski
 pkamin...@mirantis.com wrote:
 
 Since there was no reply here, I have taken steps to become a core
 reviewer of the (orphaned) repos [1], [2], [3], [4].
 
 Should anyone want to take responsibility for them, please write to me.
 
 I have also taken steps to get the fuel-qa script working and will make
 sure the tests pass with the new manifests. I will also update the
 manifests' version so that there are no deprecation warnings.
 
 P.
 
 [1]
 
 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-glusterfs,access
 [2]
 
 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-group-based-policy,access
 [3]
 
 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-nfs,access
 [4]
 
 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-cinder-netapp,access
 
 On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
  Hello,
 
  I've been investigating bug [1] concentrating on the
  fuel-plugin-external-glusterfs.
 
  First of all: [2] there are no core reviewers for Gerrit for this repo
  so even if there was a patch to fix [1] no one could merge it. I saw
  also fuel-plugin-external-nfs -- same issue, haven't checked other
  repos. Why is this? Can we fix this quickly?
 
  Second, the plugin throws:
 
  DEPRECATION WARNING: The plugin has old 1.0 package format, this
 format
  does not support many features, such as plugins updates, find
 plugin in
  new format or migrate and rebuild this one.
 
  I don't think this is appropriate for a plugin that is listed in the
  official catalog [3].
 
  Third, I created a supposed fix for this bug [4] and wanted to test it
  with the fuel-qa scripts. Basically I built an .fp file with
  fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH
 variable
  to point to that .fp file and then ran the
  group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
  Then I reverted the changes from the patch and the test still failed
  [6]. But installing the plugin by hand shows that it's available there
  so I don't know if it's broken plugin test or am I still missing
 something.
 
  It would be nice to get some QA help here.
 
  P.
 
  [1] https://bugs.launchpad.net/fuel/+bug/1415058
  [2] https://review.openstack.org/#/admin/groups/577,members
  [3] https://fuel-infra.org/plugins/catalog.html
  [4] https://review.openstack.org/#/c/169683/
  [5]
 
 
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
  [6]
 
 
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Przemyslaw Kaminski
Investigating the cinder-netapp plugin [1] (a 'certified' one) shows a
fuel-plugin-builder error:

(fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ fpb --build
.


Unexpected error
Cannot find directories ./repositories/ubuntu for release
{'repository_path': 'repositories/ubuntu', 'version': '2014.2-6.0',
'os': 'ubuntu', 'mode': ['ha', 'multinode'], 'deployment_scripts_path':
'deployment_scripts/'}
(fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ls
deployment_scripts  environment_config.yaml  LICENSE  metadata.yaml
pre_build_hook  README.md  tasks.yaml
(fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ag
'repositories'
metadata.yaml
18:repository_path: repositories/ubuntu
23:repository_path: repositories/centos

Apparently some files are missing from the git repo, or the manifest is
incorrect. Does anyone know anything about this?

P.

[1] https://github.com/stackforge/fuel-plugin-cinder-netapp

On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
 Hello,
 
 I've been investigating bug [1] concentrating on the
 fuel-plugin-external-glusterfs.
 
 First of all: [2] there are no core reviewers for Gerrit for this repo
 so even if there was a patch to fix [1] no one could merge it. I saw
 also fuel-plugin-external-nfs -- same issue, haven't checked other
 repos. Why is this? Can we fix this quickly?
 
 Second, the plugin throws:
 
 DEPRECATION WARNING: The plugin has old 1.0 package format, this format
 does not support many features, such as plugins updates, find plugin in
 new format or migrate and rebuild this one.
 
 I don't think this is appropriate for a plugin that is listed in the
 official catalog [3].
 
 Third, I created a supposed fix for this bug [4] and wanted to test it
 with the fuel-qa scripts. Basically I built an .fp file with
 fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH variable
 to point to that .fp file and then ran the
 group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
 Then I reverted the changes from the patch and the test still failed
 [6]. But installing the plugin by hand shows that it's available there
 so I don't know if it's broken plugin test or am I still missing something.
 
 It would be nice to get some QA help here.
 
 P.
 
 [1] https://bugs.launchpad.net/fuel/+bug/1415058
 [2] https://review.openstack.org/#/admin/groups/577,members
 [3] https://fuel-infra.org/plugins/catalog.html
 [4] https://review.openstack.org/#/c/169683/
 [5]
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
 [6]
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Przemyslaw Kaminski
Well then, we need to fix fuel-plugin-builder to accept such
situations.

Actually, it is an issue with fpb, since git does not accept empty
directories [1], so pulling fresh from such a repo will result in the
'repositories' dir missing even when the developer had it locally.

I hope no files were accidentally forgotten during the commit there?
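A common workaround is to keep the otherwise-empty directories under git with .gitkeep placeholder files; for the fpb plugin layout that would look like:

```shell
# Recreate the plugin's expected (empty) repository directories and add
# .gitkeep placeholders so that git tracks them.
mkdir -p repositories/ubuntu repositories/centos
touch repositories/ubuntu/.gitkeep repositories/centos/.gitkeep
```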

P.

[1]
http://stackoverflow.com/questions/115983/how-can-i-add-an-empty-directory-to-a-git-repository

On 04/02/2015 03:46 PM, Sergey Kulanov wrote:
 Hi, Przemyslaw
 
 1) There should be two repositories folders. Please check the correct
 structure:
 mkdir -p repositories/{ubuntu,centos}
 
 
 root@55725ffa6e80:~/fuel-plugin-cinder-netapp# tree
 .
 |-- LICENSE
 |-- README.md
 |-- cinder_netapp-1.0.0.fp
 |-- deployment_scripts
 |   |-- puppet
 |   |   `-- plugin_cinder_netapp
 |   |   `-- manifests
 |   |   `-- init.pp
 |   `-- site.pp
 |-- environment_config.yaml
 |-- metadata.yaml
 |-- pre_build_hook
 |-- repositories
 |   |-- centos
 |   `-- ubuntu
 `-- tasks.yaml
 
 Then you can build the plugin.
 
 2) Actually, this should not be the issue while creating plugins from
 scratch using fpb tool itself [1]:
 
 fpb --create test
 
 root@55725ffa6e80:~# tree test
 test
 |-- LICENSE
 |-- README.md
 |-- deployment_scripts
 |   `-- deploy.sh
 |-- environment_config.yaml
 |-- metadata.yaml
 |-- pre_build_hook
 |-- repositories
 |   |-- centos
 |   `-- ubuntu
 `-- tasks.yaml
 
 
 
 [1]. https://pypi.python.org/pypi/fuel-plugin-builder/1.0.2
 
 
 2015-04-02 16:30 GMT+03:00 Przemyslaw Kaminski pkamin...@mirantis.com:
 
 Investigating the cinder-netapp plugin [1] (a 'certified' one) shows a
 fuel-plugin-builder error:
 
 (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ fpb --build
 .
 
 
 Unexpected error
 Cannot find directories ./repositories/ubuntu for release
 {'repository_path': 'repositories/ubuntu', 'version': '2014.2-6.0',
 'os': 'ubuntu', 'mode': ['ha', 'multinode'], 'deployment_scripts_path':
 'deployment_scripts/'}
 (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ls
 deployment_scripts  environment_config.yaml  LICENSE  metadata.yaml
 pre_build_hook  README.md  tasks.yaml
 (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ag
 'repositories'
 metadata.yaml
 18:repository_path: repositories/ubuntu
 23:repository_path: repositories/centos
 
 Apparently some files are missing from the git repo, or the manifest is
 incorrect. Does anyone know anything about this?
 
 P.
 
 [1] https://github.com/stackforge/fuel-plugin-cinder-netapp
 
 On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
  Hello,
 
  I've been investigating bug [1] concentrating on the
  fuel-plugin-external-glusterfs.
 
  First of all: [2] there are no core reviewers for Gerrit for this repo
  so even if there was a patch to fix [1] no one could merge it. I saw
  also fuel-plugin-external-nfs -- same issue, haven't checked other
  repos. Why is this? Can we fix this quickly?
 
  Second, the plugin throws:
 
  DEPRECATION WARNING: The plugin has old 1.0 package format, this
 format
  does not support many features, such as plugins updates, find
 plugin in
  new format or migrate and rebuild this one.
 
  I don't think this is appropriate for a plugin that is listed in the
  official catalog [3].
 
  Third, I created a supposed fix for this bug [4] and wanted to test it
  with the fuel-qa scripts. Basically I built an .fp file with
  fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH
 variable
  to point to that .fp file and then ran the
  group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
  Then I reverted the changes from the patch and the test still failed
  [6]. But installing the plugin by hand shows that it's available there
  so I don't know if it's broken plugin test or am I still missing
 something.
 
  It would be nice to get some QA help here.
 
  P.
 
  [1] https://bugs.launchpad.net/fuel/+bug/1415058
  [2] https://review.openstack.org/#/admin/groups/577,members
  [3] https://fuel-infra.org/plugins/catalog.html
  [4] https://review.openstack.org/#/c/169683/
  [5]
 
 
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
  [6]
 
 
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ

Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Przemyslaw Kaminski
Well, this directory structure does not hold for [1], and my suggestion
about fpb was for such repos.

P.

[1] https://github.com/stackforge/fuel-plugin-cinder-netapp

On 04/02/2015 04:08 PM, Sergey Kulanov wrote:
 I've just printed the tree with hidden files, so actually it's OK with fpb:
 
 root@55725ffa6e80:~# tree -a test/
 test/
 |-- .gitignore
 |-- LICENSE
 |-- README.md
 |-- deployment_scripts
 |   `-- deploy.sh
 |-- environment_config.yaml
 |-- metadata.yaml
 |-- pre_build_hook
 |-- repositories
 |   |-- centos
 |   |   `-- .gitkeep
 |   `-- ubuntu
 |       `-- .gitkeep
 `-- tasks.yaml
 
 
 
 2015-04-02 17:01 GMT+03:00 Przemyslaw Kaminski pkamin...@mirantis.com:
 
 Well then either we need to fix fuel-plugin-builder to accept such
 situations.
 
 Actually it is an issue with fpb since git does not accepty empty
 directories [1] so pulling fresh from such repo will result in
 'repositories' dir missing even when the developer had it.
 
 I hope no files were accidentaly forgotten during commit there?
 
 P.
 
 [1]
 
 http://stackoverflow.com/questions/115983/how-can-i-add-an-empty-directory-to-a-git-repository
 
 On 04/02/2015 03:46 PM, Sergey Kulanov wrote:
  Hi, Przemyslaw
 
  1) There should be two repositories folders. Please check the correct
  structure (marked with bold):
  mkdir -p repositories/{ubuntu,centos}
 
 
  root@55725ffa6e80:~/fuel-plugin-cinder-netapp# tree
  .
  |-- LICENSE
  |-- README.md
  |-- cinder_netapp-1.0.0.fp
  |-- deployment_scripts
  |   |-- puppet
  |   |   `-- plugin_cinder_netapp
  |   |   `-- manifests
  |   |   `-- init.pp
  |   `-- site.pp
  |-- environment_config.yaml
  |-- metadata.yaml
  |-- pre_build_hook
  |-- repositories
  |   |-- centos
  |   `-- ubuntu
  `-- tasks.yaml
 
  Then you can build the plugin.
 
  2) Actually, this should not be the issue while creating plugins from
  scratch using fpb tool itself [1]:
 
  fpb --create test
 
  root@55725ffa6e80:~# tree test
  test
  |-- LICENSE
  |-- README.md
  |-- deployment_scripts
  |   `-- deploy.sh
  |-- environment_config.yaml
  |-- metadata.yaml
  |-- pre_build_hook
  |-- repositories
  |   |-- centos
  |   `-- ubuntu
  `-- tasks.yaml
 
 
 
  [1]. https://pypi.python.org/pypi/fuel-plugin-builder/1.0.2
 
 
  2015-04-02 16:30 GMT+03:00 Przemyslaw Kaminski pkamin...@mirantis.com:
 
  Investigating the cinder-netapp plugin [1] (a 'certified' one)
 shows
  fuel-plugin-build error:
 
  (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$
 fpb --build
  .
 
 
  Unexpected error
  Cannot find directories ./repositories/ubuntu for release
  {'repository_path': 'repositories/ubuntu', 'version':
 '2014.2-6.0',
  'os': 'ubuntu', 'mode': ['ha', 'multinode'],
 'deployment_scripts_path':
  'deployment_scripts/'}
  (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ls
  deployment_scripts  environment_config.yaml  LICENSE 
 metadata.yaml
  pre_build_hook  README.md  tasks.yaml
  (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ag
  'repositories'
  metadata.yaml
  18:repository_path: repositories/ubuntu
  23:repository_path: repositories/centos
 
  Apparently some files are missing from the git repo or the
 manifest is
  incorrect. Does anyone know something about this?
 
  P.
 
  [1] https://github.com/stackforge/fuel-plugin-cinder-netapp
 
  On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
   Hello,
  
   I've been investigating bug [1] concentrating on the
   fuel-plugin-external-glusterfs.
  
   First of all: [2] there are no core reviewers for Gerrit for
 this repo
   so even if there was a patch to fix [1] no one could merge
 it. I saw
   also fuel-plugin-external-nfs -- same issue, haven't checked
 other
   repos. Why is this? Can we fix this quickly?
  
   Second, the plugin throws:
  
   DEPRECATION WARNING: The plugin has old 1.0 package format, this
  format
   does not support many features, such as plugins updates, find
  plugin in
   new format or migrate and rebuild this one.
  
   I don't think this is appropriate for a plugin that is
 listed in the
   official catalog [3].
  
   Third, I created a supposed fix for this bug [4] and wanted
 to test

[openstack-dev] [Fuel] glusterfs plugin

2015-04-01 Thread Przemyslaw Kaminski
Hello,

I've been investigating bug [1] concentrating on the
fuel-plugin-external-glusterfs.

First of all, [2] shows there are no core reviewers in Gerrit for this
repo, so even if there were a patch to fix [1], no one could merge it.
fuel-plugin-external-nfs has the same issue; I haven't checked other
repos. Why is this? Can we fix it quickly?

Second, the plugin throws:

DEPRECATION WARNING: The plugin has old 1.0 package format, this format
does not support many features, such as plugins updates, find plugin in
new format or migrate and rebuild this one.

I don't think this is appropriate for a plugin that is listed in the
official catalog [3].

Third, I created a supposed fix for this bug [4] and wanted to test it
with the fuel-qa scripts. Basically I built an .fp file with
fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH variable
to point to that .fp file and then ran the
group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
Then I reverted the changes from the patch and the test still failed
[6]. But installing the plugin by hand shows that it's available there,
so I don't know whether the plugin test is broken or whether I'm still
missing something.

It would be nice to get some QA help here.

P.

[1] https://bugs.launchpad.net/fuel/+bug/1415058
[2] https://review.openstack.org/#/admin/groups/577,members
[3] https://fuel-infra.org/plugins/catalog.html
[4] https://review.openstack.org/#/c/169683/
[5]
https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
[6]
https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] fuel-dev-tools repo

2015-03-27 Thread Przemyslaw Kaminski
Hello,

In accordance with the consensus reached on the ML, I've set up the
fuel-dev-tools repository [1]. It will be the target repo for merging
my 2 private repos [2] and [3] (I don't think it's necessary to set up 2
different repos for this now). The core reviewers are the fuel-core
group. I needed core permissions to set things up and merged a
Cookiecutter patchset [4] to test things. After that I revoked my own
core permissions, leaving only the fuel-core team.

P.

[1] https://github.com/stackforge/fuel-dev-tools
[2] https://github.com/stackforge/fuel-dev-tools
[3] https://github.com/CGenie/vagrant-fuel-dev
[4] https://review.openstack.org/#/c/167968/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel-dev-tools repo

2015-03-27 Thread Przemyslaw Kaminski
Sorry, I meant

[2] https://github.com/CGenie/fuel-utils/

P.

On 03/27/2015 08:34 AM, Przemyslaw Kaminski wrote:
 Hello,
 
 In accordance with the consensus reached on the ML, I've set up the
 fuel-dev-tools repository [1]. It will be the target repo for merging
 my 2 private repos [2] and [3] (I don't think it's necessary to set up 2
 different repos for this now). The core reviewers are the fuel-core
 group. I needed core permissions to set things up and merged a
 Cookiecutter patchset [4] to test things. After that I revoked my own
 core permissions, leaving only the fuel-core team.
 
 P.
 
 [1] https://github.com/stackforge/fuel-dev-tools
 [2] https://github.com/stackforge/fuel-dev-tools
 [3] https://github.com/CGenie/vagrant-fuel-dev
 [4] https://review.openstack.org/#/c/167968/
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Pecan migration status

2015-03-24 Thread Przemyslaw Kaminski
Hello,

I want to summarize the work I've done in my spare time on migrating our
API to Pecan [1]. This is partially based on Nicolay's previous work [2].
One test is still failing there, but it's some DB lock and I'm not 100%
sure whether it's caused by something not yet done on the Pecan side or
just a bug that popped up (I was getting a different DB lock before, but
it disappeared after rebasing a fix for [5]).

My main contribution here is the 'reverse' method [3], which Pecan does
not provide by default. I have kept compatibility with the original
reverse method in our code. I have additionally added a 'qs' keyword
argument that is used for appending a query string to the URL (see
test_node_nic_handler.py::test_change_mac_of_assigned_nics::get_nodes,
test_node_nic_handler.py::test_remove_assigned_interfaces::get_nodes)
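For illustration, a minimal sketch of what such a reverse helper with a
'qs' argument could look like (the route table, handler names, and
function signature below are hypothetical, not the actual nailgun API):

```python
from urllib.parse import urlencode

# Hypothetical route table: handler name -> URL pattern with :placeholders.
ROUTES = {
    'NodeCollectionHandler': '/nodes',
    'NodeHandler': '/nodes/:node_id',
}

def reverse(name, kwargs=None, qs=None):
    """Build a URL from a route name, substituting :placeholders from
    kwargs and appending an optional query string from the qs dict."""
    url = ROUTES[name]
    for key, value in (kwargs or {}).items():
        url = url.replace(':%s' % key, str(value))
    if qs:
        url += '?' + urlencode(qs)
    return url

print(reverse('NodeHandler', {'node_id': 1}, qs={'type': 'json'}))
# -> /nodes/1?type=json
```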

I decided to keep Nicolay's original idea of copying all handlers to the
v2 directory rather than modifying the original v1 handlers, and
concentrated instead on removing hacks around Pecan as in [6] (with
post_all, post_one, put_all, put_one, etc.). Merging the current v2 into
v1 should drastically decrease the number of changed lines in this patchset.

So far I have found one fault in our API's URLs that isn't easily
handled by Pecan (some custom _route function would help) and IMHO
should be fixed by rearranging URLs instead of adding _route hacks:
/nodes/interfaces and /nodes/1/interfaces require the same get_all
method in the interfaces controller, and the only use of
/nodes/interfaces is a batch node interface update via PUT. The current
v2 can be merged into v1 with some effort.

We sometimes use a PUT request without specifying an object's ID -- this
is unsupported in Pecan but can easily be worked around by giving a
dummy keyword argument in the function's definition:

def put(self, dummy=None)
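A toy illustration (plain Python, not Pecan itself) of why the dummy
default works: REST-style routers pass the trailing URL segments to the
handler method as positional arguments, so a defaulted argument lets
both the collection-wide PUT and the single-object PUT dispatch to the
same method without a TypeError:

```python
class NodeController:
    # With a default for the ID argument, both PUT /nodes and
    # PUT /nodes/1 can dispatch to the same method.
    def put(self, dummy=None):
        if dummy is None:
            return 'collection update'
        return 'update of node %s' % dummy

def dispatch(controller, segments):
    # Toy stand-in for how a REST router passes trailing URL
    # segments as positional arguments to the handler method.
    return controller.put(*segments)

print(dispatch(NodeController(), []))     # PUT /nodes    -> collection update
print(dispatch(NodeController(), ['1']))  # PUT /nodes/1  -> update of node 1
```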

Some bugs in tests were found and fixed (for example, a wrong
content-type in the headers in [4]).

I haven't put enough thought into error handling yet; some of it is
implemented in hooks/error.py but I'm not fully satisfied with it.
Most unfinished parts are marked with a TODO(pkaminski).

P.

[1] https://review.openstack.org/158661
[2] https://review.openstack.org/#/c/99069/6
[3]
https://review.openstack.org/#/c/158661/35/nailgun/nailgun/api/__init__.py
[4]
https://review.openstack.org/#/c/158661/35/nailgun/nailgun/test/unit/test_handlers.py
[5] https://bugs.launchpad.net/fuel/+bug/1433528
[6]
https://review.openstack.org/#/c/99069/6/nailgun/nailgun/api/v2/controllers/base.py



Re: [openstack-dev] [Fuel] Pecan migration status

2015-03-24 Thread Przemyslaw Kaminski
BTW, the old URLs do not yet exactly match the new ones. We need a test
that checks the whole urls.py list against the new handlers' URLs to
make sure nothing was missed.
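Such a coverage test could be sketched roughly like this (the route
tables below are made-up placeholders for what would actually be
collected from urls.py and the new Pecan controllers):

```python
# Hypothetical old and new route tables; in nailgun these would be
# collected from the v1 urls.py and from the Pecan controllers.
V1_URLS = {'/nodes', '/nodes/:id', '/clusters', '/clusters/:id'}
V2_URLS = {'/nodes', '/nodes/:id', '/clusters'}

def missing_urls(old, new):
    """Return the old URLs that have no counterpart in the new routing."""
    return sorted(old - new)

# The test would simply assert that nothing from v1 is missing in v2;
# here it deliberately catches the made-up gap.
assert missing_urls(V1_URLS, V2_URLS) == ['/clusters/:id']
```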

P.




Re: [openstack-dev] [Fuel] development tools

2015-03-20 Thread Przemyslaw Kaminski
From what I can see, it is something different.

Repos can be called fuel-dev-utils and fuel-vagrant-dev.

P.

On 03/19/2015 09:43 PM, Andrew Woodward wrote:
 we already have a package with the name fuel-utils please see [1]. I
 -1'd the CR over it.
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2015-March/059206.html
 
 On Thu, Mar 19, 2015 at 7:11 AM, Alexander Kislitsky
 akislit...@mirantis.com wrote:
 +1 for moving fuel_development into separate repo.

 On Thu, Mar 19, 2015 at 5:02 PM, Evgeniy L e...@mirantis.com wrote:

 Hi folks,

 I agree, lets create separate repo with its own cores and remove
 fuel_development from fuel-web.

 But in this case I'm not sure if we should merge the patch which
 has links to non-stackforge repositories, because location is going
 to be changed soon.

 Also it will be cool to publish it on pypi.

 Thanks,

 On Thu, Mar 19, 2015 at 4:21 PM, Sebastian Kalinowski
 skalinow...@mirantis.com wrote:

 As I wrote in the review already: I like the idea of merging
 those two tools and making a separate repository. After that
 we could make them more visible in our documentation and wiki
 so they could benefit from being used by a broader audience.

 Same for the vagrant configuration -- if it's useful (and it is,
 since newcomers are using it) we could at least move it under the
 Mirantis organization on GitHub.

 Best,
 Sebastian






[openstack-dev] [Fuel] development tools

2015-03-19 Thread Przemyslaw Kaminski
Hello,

Some time ago I wrote some small tools that make Fuel development easier
and it was suggested to add info about them to the documentation --
here's the review link [1].

Evgeniy L correctly pointed out that we already have something like
fuel_development in fuel-web. I think though that we shouldn't
mix such stuff directly into fuel-web; I mean, we recently migrated the
CLI to a separate repo to make fuel-web thinner.

So a suggestion -- maybe make these tools more official and create
stackforge repos for them? I think the dev ecosystem could benefit from
having some standard way of dealing with the ISO (for example, we get
questions from people on how to apply a new openstack.yaml config to the DB).

At the same time we could get rid of fuel_development and merge that
into the new repos (it has the useful 'revert' functionality that I
didn't think of :))

P.

[1] https://review.openstack.org/#/c/140355/9/docs/develop/env.rst



Re: [openstack-dev] [Fuel] development tools

2015-03-19 Thread Przemyslaw Kaminski
+1 -- there is no point in committing that review with external URLs if
those repos are to be created in stackforge.

P.

On 03/19/2015 03:02 PM, Evgeniy L wrote:
 Hi folks,
 
 I agree, lets create separate repo with its own cores and remove
 fuel_development from fuel-web.
 
 But in this case I'm not sure if we should merge the patch which
 has links to non-stackforge repositories, because location is going
 to be changed soon.
 
 Also it will be cool to publish it on pypi.
 
 Thanks,
 
 On Thu, Mar 19, 2015 at 4:21 PM, Sebastian Kalinowski
 skalinow...@mirantis.com wrote:
 
 As I wrote in the review already: I like the idea of merging
 those two tools and making a separate repository. After that
 we could make them more visible in our documentation and wiki
 so they could benefit from being used by a broader audience.
 
 Same for the vagrant configuration -- if it's useful (and it is,
 since newcomers are using it) we could at least move it under the
 Mirantis organization on GitHub.
 
 Best,
 Sebastian
 
 
 
 


Re: [openstack-dev] [Fuel] Deprecation warnings in python-fuelclient-6.1.*

2015-03-04 Thread Przemyslaw Kaminski
Maybe add a Changelog in the repo and maintain it?

http://keepachangelog.com/

Option #2 is OK, but it can cause pain when testing -- upon each fresh
installation from the ISO we would get that message and it might break
some tests, though that is fixable. Option #3 is OK too. #1 is the worst
and I wouldn't do it.
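For illustration, option #2's "show once, then mute" behavior usually
comes down to a first-run marker file; a minimal sketch (the marker path
and notice text are made up), which also shows why tests can control it
by simply creating or removing the marker:

```python
import os
import tempfile

MARKER = os.path.join(tempfile.gettempdir(), '.fuelclient_notice_shown')

# Start clean so the demo below is deterministic.
if os.path.exists(MARKER):
    os.remove(MARKER)

def maybe_show_deprecation_notice():
    """Print the deprecation summary only on the first run; a marker
    file records that it was already shown. Tests (and fresh ISO
    installs) can pre-create or remove the marker to control this."""
    if os.path.exists(MARKER):
        return False
    with open(MARKER, 'w'):
        pass
    print('NOTE: the CLI and the API library will change in the next major release.')
    return True

first = maybe_show_deprecation_notice()   # prints the notice
second = maybe_show_deprecation_notice()  # muted
print(first, second)  # -> True False
```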

Or maybe display that info when showing all the commands (typing 'fuel'
or 'fuel -h')? We already have a deprecation warning there concerning
client/config.yaml; it is not very disturbing and shouldn't break any
currently used automation scripts.

P.


On 03/03/2015 03:52 PM, Roman Prykhodchenko wrote:
 Hi folks!
 
 
 According to the refactoring plan [1] we are going to release the 6.1 version 
 of python-fuelclient which is going to contain recent changes but will keep 
 backwards compatibility with what was before. However, the next major release 
 will bring users the fresh CLI that won’t be compatible with the old one and 
 the new, actually usable IRL API library that also will be different.
 
 The issue this message is about is the fact that there is a strong need to 
 let both CLI and API users know about those changes. At the moment I can see 3 
 ways of resolving it:
 
 1. Show deprecation warning for commands and parameters which are going to be 
 different. Log deprecation warnings for deprecated library methods.
 The problem with this approach is that the structure of both CLI and the 
 library will be changed, so deprecation warning will be raised for mostly 
 every command for the whole release cycle. That does not look very user 
 friendly, because users will have to run all commands with --quiet for the 
 whole release cycle to mute deprecation warnings.
 
 2. Show the list of the deprecated stuff and planned changes on the first run, 
 then mute it.
 The disadvantage of this approach is that there is a need to store the info 
 about the first run in a file. However, it may be cleaned after an upgrade.
 
 3. The same as #2 but publish the warning online.
 
 I personally prefer #2, but I’d like to get more opinions on this topic.
 
 
 References:
 
 1. https://blueprints.launchpad.net/fuel/+spec/re-thinking-fuel-client
 
 
 - romcheg
 
 
 


[openstack-dev] [Fuel] Network verification status flag

2015-02-26 Thread Przemyslaw Kaminski
Hello,

Recently I've been asked to implement the Python side of a simple feature:
before deployment, tell the UI user that network verification for the
current cluster configuration has not been performed. Moreover, on the UI
side it's possible to run network checking on unsaved cluster data -- in
that case treat it as if no network checking was performed. Unfortunately
it turned out to be not at all that simple to implement on the backend,
and I'll try to explain why.

I ended up with an implementation [1] that added a tri-valued flag to
the Cluster model. Surprisingly, I got stuck at the idempotency test
of network configuration: I sent a GET request for the network config,
then sent a PUT with the received data and asserted that nothing
changed. Strangely, in about 1/4 of cases this test failed because some
IPs got assigned differently. I wasn't able to explain why (I had other
tasks to do and this one was somewhat of a side project). BTW, it turned
out that we have at least 2 functions that are used to deeply compare 2
objects, both unnecessary IMHO as there are 3rd-party libs for this,
like [3].
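For reference, a deep-compare helper of the kind mentioned boils down to
something like this sketch (dictdiffer [3] provides this, and more, out
of the box; the function name and output format here are made up):

```python
def deep_diff(old, new, path=''):
    """Recursively compare two nested dicts and return a list of
    (path, old_value, new_value) tuples for every difference."""
    diffs = []
    for key in sorted(set(old) | set(new)):
        p = '%s.%s' % (path, key) if path else str(key)
        if key not in old:
            diffs.append((p, None, new[key]))        # key was added
        elif key not in new:
            diffs.append((p, old[key], None))        # key was removed
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            diffs.extend(deep_diff(old[key], new[key], p))  # recurse
        elif old[key] != new[key]:
            diffs.append((p, old[key], new[key]))    # value changed
    return diffs

old = {'networks': {'public': {'cidr': '10.0.0.0/24'}}, 'dns': '8.8.8.8'}
new = {'networks': {'public': {'cidr': '10.0.1.0/24'}}, 'dns': '8.8.8.8'}
print(deep_diff(old, new))
# -> [('networks.public.cidr', '10.0.0.0/24', '10.0.1.0/24')]
```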

Another issue was that the network configuration PUT returns a task while
there is actually no asynchronicity there at all; it's just a huge
validator that executes everything synchronously. This was already
heavily commented on in [2] and it's proposed to remove that task
completely. Moreover, the Nova and Neutron backends returned different
statuses although their verification code was almost the same. A
unification of these 2 handlers was proposed in [1].

Another issue is that we have to somehow invalidate the flag that says
cluster verification is done. It is not difficult to override the save
method of Cluster so that any change to the cluster invalidates network
checking. But it's not that simple. First of all, not all cluster
changes should invalidate the network check. Second, not only cluster
changes invalidate it -- adding nodes to the cluster, for example,
invalidates network checking too. Adding triggers all over the code to
check this doesn't seem to be a good solution.

So instead of having a simple flag like in [1], I proposed to store the
whole JSON object with the serialized network configuration. The UI,
upon deployment, will ask the API about the cluster, and we will return
an additional key called 'network_check_status' that is 'failed',
'passed' or 'not_performed'. The API will determine that flag by taking
the saved JSON object and comparing it with a freshly serialized one.
This way we don't need to alter the flag upon save or anything; we just
compute on demand whether the config was changed.
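A sketch of that on-demand computation (all names, the dict-based
cluster, and the serializer below are hypothetical stand-ins for the
real nailgun models and serializers):

```python
import json

def serialize_network_config(cluster):
    # Stand-in for nailgun's real network serializer.
    return {'networks': cluster['networks']}

def network_check_status(cluster):
    """Compare the config saved at verification time against a fresh
    serialization; any drift invalidates the stored result."""
    saved = cluster.get('verified_network_config')
    if saved is None:
        return 'not_performed'
    if json.loads(saved) != serialize_network_config(cluster):
        return 'not_performed'   # config changed since the check ran
    return cluster['verification_result']  # 'passed' or 'failed'

cluster = {
    'networks': {'public': {'cidr': '172.16.0.0/24'}},
    'verification_result': 'passed',
}
# Simulate a successful verification: snapshot the serialized config.
cluster['verified_network_config'] = json.dumps(serialize_network_config(cluster))
print(network_check_status(cluster))  # -> passed
# Any later change to the cluster invalidates the result on its own.
cluster['networks']['public']['cidr'] = '172.16.1.0/24'
print(network_check_status(cluster))  # -> not_performed
```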

I guess this change grew so big that it requires a blueprint and can
be done for 7.0. The feature can be implemented on the UI side only;
that covers most (but not all) of the problems and is good enough for 6.1.

[1] https://review.openstack.org/153556
[2]
https://review.openstack.org/#/c/137642/15/nailgun/nailgun/api/v1/handlers/network_configuration.py
[3] https://github.com/inveniosoftware/dictdiffer

P.



Re: [openstack-dev] [Fuel] fake threads in tests

2015-02-18 Thread Przemyslaw Kaminski
Yes, I agree; basically, the logic of introducing promises (or fake
threads or whatever they are called) should itself be tested too.

Basically what this is all about is mocking Astute and being able to
easily program its responses in tests.

P.


On 02/18/2015 09:27 AM, Evgeniy L wrote:
 Hi Przemyslaw,
 
 Thanks for bringing up the topic. A long time ago we had similar
 topic, I agree that the way it works now is not good at all,
 because it leads to a lot of problems, I remember the time when our
 tests were randomly broken because of deadlocks and race conditions
 with fake thread.
 
 We should write some helpers for receiver module, to explicitly and
 easily change state of the system, as you mentioned it should be
 done in synchronous fashion.
 
 But of course we cannot just remove fake and we should continue
 supporting it, some fake thread specific tests should be added to
 make sure that it's not broken.
 
 Thanks,
 
 On Mon, Feb 16, 2015 at 2:54 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
 
 Hello,
 
 This somehow relates to [1]: in integration tests we have a class 
 called FakeThread. It is responsible for spawning threads to
 simulate asynchronous tasks in fake env. In BaseIntegrationTest
 class we have a method called _wait_for_threads that waits for all
 fake threads to terminate.
 
 In my understanding what these things actually do is that they
 just simulate Astute's responses. I'm thinking if this could be
 replaced by a better solution, I just want to start a discussion on
 the topic.
 
 My suggestion is to get rid of all this stuff and implement a 
 predictable solution: something along promises or coroutines that 
 would execute synchronously. With either promises or coroutines we 
 could simulate tasks responses any way we want without the need to 
 wait using unpredictable stuff like sleeping, threading and such.
 No need for waiting or killing threads. It would hopefully make our
 tests easier to debug and get rid of the random errors that are
 sometimes getting into our master branch.
 
 P.
 
 [1] https://bugs.launchpad.net/fuel/+bug/1421599
 


Re: [openstack-dev] [Fuel] [UI] Sorting and filtering of node list

2015-02-17 Thread Przemyslaw Kaminski
+1 for that; it should be done with pagination too. IMHO pagination and
simple filtering by an object's status can be done generically on the API
side for all GET methods that derive from CollectionHandler.
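A minimal sketch of what such generic handling could look like (plain
lists of dicts stand in for SQLAlchemy queries; the function and
parameter names are assumptions, not the actual nailgun API):

```python
def filter_and_paginate(objects, status=None, page=1, per_page=50):
    """Generic status filter plus pagination, as a CollectionHandler GET
    could apply based on query-string parameters."""
    if status is not None:
        objects = [o for o in objects if o.get('status') == status]
    start = (page - 1) * per_page
    return objects[start:start + per_page]

# Six fake nodes: odd ids 'ready', even ids 'error'.
nodes = [{'id': i, 'status': 'ready' if i % 2 else 'error'}
         for i in range(1, 7)]
print(filter_and_paginate(nodes, status='ready', page=1, per_page=2))
# -> [{'id': 1, 'status': 'ready'}, {'id': 3, 'status': 'ready'}]
```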

P.


On 02/17/2015 10:18 AM, Lukasz Oles wrote:
 Hello Julia,
 
 I think node filtering and sorting is a great feature and it will
 improve UX, but we need to remember that an increasing number of
 nodes means an increasing number of automation tasks. If a Fuel user
 wants to automate something, he will use the fuel client, not the
 Fuel GUI. This is why I think sorting and filtering should be done
 on the backend side. We should stop thinking that the Fuel UI is the
 only way to interact with Fuel.
 
 Regards,
 
 On Sat, Feb 14, 2015 at 9:27 AM, Julia Aranovich 
 jkirnos...@mirantis.com wrote:
 Hi All,
 
 Currently we [Fuel UI team] are planning the features of sorting
 and filtering of node list to introduce it in 6.1 release.
 
 Now the user can filter nodes only by name or MAC address, and no
 sorters are available. That's rather poor UI for managing a 200+
 node environment. So, the current suggestion is to filter and
 sort nodes by the following parameters:
 
 - name
 - manufacturer
 - IP address
 - MAC address
 - CPU
 - memory
 - disks total size (we need to think about less than/more than
   representation)
 - interfaces speed
 - status (Ready, Pending Addition, Error, etc.)
 - roles
 
 
 It will be a form-based filter. Items [1-4] should go into a single
 text input and the others go into separate controls. There is also
 an idea to translate the user's filter selection into a query and add
 it to the location string, like it's done for the logs search:
 #cluster/x/logs/type:local;source:api;level:info.
 
 Please also note, that the changes we are thinking about should
 not affect backend code.
 
 
 I will be very grateful if you share your ideas about this or
 describe some cases that would be useful to you when working with
 real deployments. We would like to introduce really useful tools
 based on your feedback.
 
 
 Best regards, Julia
 
 -- Kind Regards, Julia Aranovich, Software Engineer, Mirantis,
 Inc +7 (905) 388-82-61 (cell) Skype: juliakirnosova 
 www.mirantis.ru jaranov...@mirantis.com
 


Re: [openstack-dev] [Fuel] fake threads in tests

2015-02-16 Thread Przemyslaw Kaminski


On 02/16/2015 01:55 PM, Jay Pipes wrote:
 On 02/16/2015 06:54 AM, Przemyslaw Kaminski wrote:
 Hello,
 
 This somehow relates to [1]: in integration tests we have a
 class called FakeThread. It is responsible for spawning threads
 to simulate asynchronous tasks in fake env. In
 BaseIntegrationTest class we have a method called
 _wait_for_threads that waits for all fake threads to terminate.
 
 In my understanding what these things actually do is that they
 just simulate Astute's responses. I'm thinking if this could be
 replaced by a better solution, I just want to start a discussion
 on the topic.
 
 My suggestion is to get rid of all this stuff and implement a 
 predictable solution: something along promises or coroutines
 that would execute synchronously. With either promises or
 coroutines we could simulate tasks responses any way we want
 without the need to wait using unpredictable stuff like sleeping,
 threading and such. No need for waiting or killing threads. It
 would hopefully make our tests easier to debug and get rid of the
 random errors that are sometimes getting into our master branch.
 
 Hi!
 
 For integration/functional tests, why bother faking out the threads
 at all? Shouldn't the integration tests be functionally testing the
 real code, not mocked or faked stuff?
 

Well, you'd need Rabbit/Astute etc. fully set up and working, so this was
made for less painful testing I guess. These tests concern only the
Nailgun side, so I think it's OK to have fake tasks like this. Full
integration tests with all components are made by the QA team.

P.

 Best, -jay
 


[openstack-dev] [Fuel] fake threads in tests

2015-02-16 Thread Przemyslaw Kaminski
Hello,

This somehow relates to [1]: in integration tests we have a class
called FakeThread. It is responsible for spawning threads to simulate
asynchronous tasks in fake env. In BaseIntegrationTest class we have a
method called _wait_for_threads that waits for all fake threads to
terminate.

In my understanding what these things actually do is that they just
simulate Astute's responses. I'm thinking if this could be replaced by
a better solution, I just want to start a discussion on the topic.

My suggestion is to get rid of all this stuff and implement a
predictable solution: something along promises or coroutines that
would execute synchronously. With either promises or coroutines we
could simulate tasks responses any way we want without the need to
wait using unpredictable stuff like sleeping, threading and such. No
need for waiting or killing threads. It would hopefully make our tests
easier to debug and get rid of the random errors that are sometimes
getting into our master branch.
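A minimal sketch of such a synchronous Astute mock (class and method
names are hypothetical; the point is that programmed responses resolve
immediately, like already-settled promises, so tests never wait on
threads or sleeps):

```python
class FakeAstute:
    """Synchronous stand-in for Astute: each cast immediately invokes
    the receiver callback with a pre-programmed response, so tests
    need no threads, sleeps, or waiting."""
    def __init__(self):
        self.responses = {}

    def program(self, task_name, response):
        # Tests decide up front what the "async" task will report.
        self.responses[task_name] = response

    def cast(self, task_name, on_response):
        # Resolve right away, like an already-settled promise.
        on_response(self.responses[task_name])

results = []
astute = FakeAstute()
astute.program('deploy', {'status': 'ready', 'progress': 100})
astute.cast('deploy', results.append)
print(results)  # -> [{'status': 'ready', 'progress': 100}]
```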

P.

[1] https://bugs.launchpad.net/fuel/+bug/1421599



Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-09 Thread Przemyslaw Kaminski

On 02/07/2015 12:09 PM, Dmitriy Shulyak wrote:
 
 On Thu, Jan 15, 2015 at 6:20 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:
 
 I want to discuss possibility to add network verification status 
 field for environments. There are 2 reasons for this:
 
 1) One of the most frequent reasons for deployment failure is wrong 
 network configuration. In the current UI network verification is 
 completely optional and sometimes users are even unaware that this 
 feature exists. We can warn the user before the start of
 deployment if the network check failed or wasn't performed.
 
 2) Currently network verification status is partially tracked by 
 status of the last network verification task. Sometimes its
 results become stale, and the UI removes the task. There are a few
 cases when the UI does this, like changing network settings, adding
 a new node, etc (you can grep removeFinishedNetworkTasks to see
 all the cases). This definitely should be done on backend.
 
 
 
 An additional field on cluster like network_check_status? When will it
 be populated with the result? I think it will simply duplicate the
 task.status of the task named network_verify.
 
 Network check is not a single task. Right now there are two, and 
 probably we will need one more in this release (set up the public 
 network and ping the gateway). And AFAIK there is a need for other 
 pre-deployment verifications.
 
 I would prefer to make a separate tab with pre-deployment
 verifications, similar to OSTF. But if you guys want to make something
 right now, compute the status of network verification based on the
 task named network_verify; if you deleted this task from the UI (for
 some reason), just add a warning that verification wasn't performed. If
 there is more than one network_verify task for any given cluster,
 pick the latest one.

Well, there are some problems with this solution:
1. No 'pick the latest one filtered to network_verify' handler is
available currently.
2. Tasks are ephemeral entities -- they get deleted here and there.
Look at nailgun/task/manager.py for example -- lines 83-88 or lines
108-120 and others.
3. Just having the network verification status as ready is NOT enough.
From the UI you can fire off network verification for unsaved changes.
Some JSON request is made, the network configuration is validated by tasks
and an RPC call is made returning that all is OK, for example. But if you
haven't saved your changes then in fact you haven't verified your
current configuration, just some other one. So in this case task
status 'ready' doesn't mean that the current cluster config is valid. What
do you propose in this case? Fail the task on purpose? I only see a
solution to this by introducing a new flag, and network_check_status
seems to be an appropriate one.
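To make problem 1 concrete, the missing 'pick the latest network_verify task' selection could look roughly like this (an illustrative sketch over dict-based task records; the function and fields are made up, since neither such a handler nor a finished_at column exists in Nailgun today):

```python
def latest_network_verify_task(tasks):
    """Return the newest 'network_verify' task for a cluster, or None.

    Hypothetical sketch: with no finished_at timestamp on tasks, the
    highest id is used as a stand-in for 'latest'.
    """
    candidates = [t for t in tasks if t["name"] == "network_verify"]
    return max(candidates, key=lambda t: t["id"]) if candidates else None


tasks = [
    {"id": 3, "name": "network_verify", "status": "error"},
    {"id": 7, "name": "deploy", "status": "ready"},
    {"id": 9, "name": "network_verify", "status": "ready"},
]
print(latest_network_verify_task(tasks)["status"])  # ready
```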

P.

 
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-09 Thread Przemyslaw Kaminski

On 02/09/2015 12:06 PM, Dmitriy Shulyak wrote:
 
 On Mon, Feb 9, 2015 at 12:51 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
 
 Well, there are some problems with this solution: 1. No 'pick
 latest one with filtering to network_verify' handler is available
 currently.
 
 
 Well, I think there should be a finished_at field anyway, so why not
 add it for this purpose?

So you're suggesting adding another column and modifying all tasks for
this one feature?

 
 2. Tasks are ephemeral entities -- they get deleted here and
 there. Look at nailgun/task/manager.py for example -- lines 83-88
 or lines 108-120 and others
 
 
 I don't actually recall what the reason was to delete them, but if
 it happens, IMO it is OK to show right now that network verification
 wasn't performed.

Is this how one writes predictable and easy-to-understand software?
Sometimes we'll say that verification is OK, other times that it wasn't
performed?

 
 3. Just having the network verification status as ready is NOT enough. 
 From the UI you can fire off network verification for unsaved
 changes. Some JSON request is made, the network configuration is validated
 by tasks and an RPC call is made returning that all is OK, for example. But
 if you haven't saved your changes then in fact you haven't verified
 your current configuration, just some other one. So in this case
 task status 'ready' doesn't mean that the current cluster config is
 valid. What do you propose in this case? Fail the task on purpose?
 I only see a solution to this by introducing a new flag, and
 network_check_status seems to be an appropriate one.
 
 
 My point is that it has very limited UX. Right now the network check is:
 - L2 with VLANs verification
 - DHCP verification
 
 When we have time we will add:
 - multicast routing verification
 - public gateway verification
 Also there is more stuff that different users have been asking about.
 
 Then, I know that the VMware team also wants to implement
 pre-deployment verifications.
 
 So what will this net_check_status refer to at that point?

Issue #3 I described is still valid -- what is your solution in this case?

If someone implements pre-deployment network verifications and doesn't
add the procedures to the network verification task, then really no
solution can prevent the user from deploying a cluster with
some invalid configuration. It's not an issue with providing info that
the network checks were or weren't made.

As far as I understand, there's one supertask, 'verify_networks'
(called in nailgun/task/manager.py, line 751). It spawns other tasks
that do the verification. When all is OK, verify_networks calls the RPC
'verify_networks_resp' method and returns a 'ready' status, and at that
point I can inject code to also set the DB column on the cluster saying
that network verification was OK for the saved configuration. Adding
other tasks should in no way affect this behavior since they're just
subtasks of this task -- or am I wrong?
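Sketched in code, the injection point described above might look like this (hypothetical names and a fake DB stand-in; the real RPC receiver in Nailgun differs in detail):

```python
class FakeDB:
    """Minimal stand-in for Nailgun's DB session (illustration only)."""

    def __init__(self):
        self.saved = []

    def save(self, obj):
        self.saved.append(obj)


def verify_networks_resp(db, cluster, task):
    """Persist the supertask result on the cluster itself (hypothetical).

    Because subtasks spawned by verify_networks roll up into this single
    status, adding more subtasks would not change this code path.
    """
    if task["status"] == "ready":
        cluster["network_check_status"] = "verified"
    elif task["status"] == "error":
        cluster["network_check_status"] = "failed"
    db.save(cluster)


db = FakeDB()
cluster = {"id": 1, "network_check_status": None}
verify_networks_resp(db, cluster, {"name": "verify_networks", "status": "ready"})
print(cluster["network_check_status"])  # verified
```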

P.

 
 
 
 
 
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [nailgun] [UI] network_check_status field for environments

2015-02-09 Thread Przemyslaw Kaminski


On 02/09/2015 01:18 PM, Dmitriy Shulyak wrote:
 
 On Mon, Feb 9, 2015 at 1:35 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
 
 Well, I think there should be a finished_at field anyway, so why not
 add it for this purpose?
 
 So you're suggesting adding another column and modifying all tasks
 for this one feature?
 
 
 Such things as timestamps should be on all tasks anyway.
 
 
 I don't actually recall what the reason was to delete them, but
 if it happens, IMO it is OK to show right now that network
 verification wasn't performed.
 
 Is this how one writes predictable and easy-to-understand software? 
 Sometimes we'll say that verification is OK, other times that it
 wasn't performed?
 
 In my opinion the question that needs to be answered is: what is
 the reason or event for removing the verify_networks task history?
 
 
 3. Just having the network verification status as ready is NOT
 enough. From the UI you can fire off network verification for
 unsaved changes. Some JSON request is made, the network configuration
 is validated by tasks and an RPC call is made returning that all is OK,
 for example. But if you haven't saved your changes then in fact you
 haven't verified your current configuration, just some other one.
 So in this case task status 'ready' doesn't mean that the current
 cluster config is valid. What do you propose in this case? Fail
 the task on purpose?
 Issue #3 I described is still valid -- what is your solution in
 this case?
 
 OK, sorry. What do you think if in such a case we remove the old
 tasks? It seems to me that this is exactly the event after which the
 old verify_networks result is invalid anyway, so there is no point in
 storing history.

Well, not exactly. Configure networks, save the settings, run the network
check: assume all went fine. Now change one thing without saving and
check the settings: the check didn't pass, but it doesn't affect the flag
because that's some configuration different from the saved one. And your
original cluster is still OK. Without the flag, the user would have to run
the original check yet again. The plus of the network_check_status
column is that you don't need to store any history -- the task can be
deleted or whatever, and still the last checked saved configuration is
what matters. The user can perform other checks 'for free' and is not
required to rerun the checks for the working configuration.

With data depending on tasks you actually have to store a lot of
history, because you need to keep the last working saved configuration --
otherwise the user has to rerun the original configuration check. So from
a usability point of view this is a worse solution.
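The flag semantics argued for here can be pinned down in a few lines (a sketch; the column name and the flat config comparison are illustrative assumptions):

```python
def update_check_status(cluster, verified_config, task_status):
    """Touch the flag only when the *saved* configuration was verified.

    A check fired against an unsaved draft leaves the flag alone, so the
    status of the last verified saved configuration survives without any
    task history being stored.
    """
    if verified_config != cluster["saved_config"]:
        return cluster["network_check_status"]
    cluster["network_check_status"] = (
        "verified" if task_status == "ready" else "failed"
    )
    return cluster["network_check_status"]


cluster = {"saved_config": {"vlan": 100}, "network_check_status": None}
update_check_status(cluster, {"vlan": 100}, "ready")  # saved config: flag set
update_check_status(cluster, {"vlan": 999}, "error")  # unsaved draft: ignored
print(cluster["network_check_status"])  # verified
```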

 
 
 As far as I understand, there's one supertask 'verify_networks' 
 (called in nailgu/task/manager.py line 751). It spawns other tasks 
 that do verification. When all is OK verify_networks calls RPC's 
 'verify_networks_resp' method and returns a 'ready' status and at
 that point I can inject code to also set the DB column in cluster
 saying that network verification was OK for the saved
 configuration. Adding other tasks should in no way affect this
 behavior since they're just subtasks of this task -- or am I
 wrong?
 
 
 It is not that smooth, but in general yes - it can be done when the
 state of verify_networks changes. But let's say we have a
 some_settings_verify task. Would it be valid to add one more field
 on the cluster model, like some_settings_status?

Well, why not? Cluster deployment is a task and its status is saved
in a cluster column and not fetched from tasks. As you can see, the logic
of network task verification is not simply based on reading the
ready/error status but is more subtle. What other settings do you have in
mind? I guess when we have more of them one could create a separate table
to keep them, but for now I don't see a point in doing this.

P.

 
 
 
 
 


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-14 Thread Przemyslaw Kaminski
I just made a general remark regarding why migrating to 2.7 is
profitable (I understood Bartek's question this way).

The point about Red Hat guaranteeing security fixes to 2.6 is a good
one. Also, it's true we don't use SSL for fuelclient so yes, if other
OpenStack projects keep 2.6 we should stick to it also.

P.

On 01/14/2015 08:32 AM, Bartłomiej Piotrowski wrote:
 On 01/13/2015 11:16 PM, Tomasz Napierala wrote:
 
 On 13 Jan 2015, at 10:51, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
 
 For example
 
 https://www.python.org/download/releases/2.6.9/
 
 All official maintenance for Python 2.6, including security
 patches, has ended.
 
 https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS
 
 Especially the SSL stuff is interesting
 
 http://bugs.python.org/issue22935
 
 This looks like the final word here. We cannot provide software that
 has no security support.
 
 Regards,
 
 
 I can hardly see it as a justification for maintaining yet another 
 package on our own while Red Hat is supposed to provide backports
 of security fixes to python 2.6 until 2020.
 
 I wanted to hear exact use cases of 2.7 features that let us 
 accomplish things more easily than we can now with 2.6. As Doug already
 said, clients and Oslo libraries will maintain compatibility with
 2.6. So what is the real gain?
 
 Regards, Bartłomiej
 


Re: [openstack-dev] [Fuel] Dropping Python-2.6 support

2015-01-13 Thread Przemyslaw Kaminski
For example

https://www.python.org/download/releases/2.6.9/

All official maintenance for Python 2.6, including security patches,
has ended.

https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS

Especially the SSL stuff is interesting

http://bugs.python.org/issue22935

P.

On 01/13/2015 08:39 AM, Bartłomiej Piotrowski wrote:
 On 01/12/2015 03:55 PM, Roman Prykhodchenko wrote:
 Folks,
 
 as planned and then announced at the OpenStack Summit,
 OpenStack services have deprecated Python 2.6 support. At the moment
 several services and libraries are already only compatible with
 Python >= 2.7. And there is no common sense in trying to bring back
 compatibility with Py2.6 because OpenStack infra does not run
 tests for that version of Python.
 
 The point of this email is that some components of Fuel, say,
 Nailgun and Fuel Client, are still only tested with Python 2.6.
 Fuel Client, in its turn, is about to use OpenStack CI's
 python-jobs for running unit tests. That means that in order to
 make it compatible with Py2.6 there is a need to run a separate
 Python job in FuelCI.
 
 However, I believe that forcing things to be compatible with
 2.6 when the rest of the ecosystem decided not to go with it, and when
 Py2.7 is already available in the main CentOS repo, sounds like a
 battle against common sense. So my proposal is to drop 2.6
 support in Fuel 6.1.
 
 While I come from the lands where being bleeding edge is preferred,
 I ask myself (as a non-programmer) one thing: what does 2.7 provide
 that you cannot easily achieve in 2.6?
 
 Regards, Bartłomiej
 


Re: [openstack-dev] [Fuel] fuel master monitoring

2015-01-07 Thread Przemyslaw Kaminski
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hello,

The updated version of monitoring code is available here:

https://review.openstack.org/#/c/137785/

This is based on monit, as was agreed in this thread. The drawback of
monit is that it's basically a very simple system that doesn't track the
state of its checkers, so some Python code is still needed so that the
user isn't spammed with low-disk-space notifications every minute.
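The kind of state tracking meant here is roughly the following (a standalone sketch, not the code under review; monit re-fires a failing check every polling cycle, so the wrapper suppresses repeats within a cooldown window):

```python
import time


class AlertDeduplicator:
    """Suppress repeats of the same alert within a cooldown window."""

    def __init__(self, cooldown=3600, clock=time.time):
        self.cooldown = cooldown
        self.clock = clock          # injectable for testing
        self._last_sent = {}

    def should_notify(self, key):
        now = self.clock()
        last = self._last_sent.get(key)
        if last is not None and now - last < self.cooldown:
            return False            # alerted recently; stay quiet
        self._last_sent[key] = now
        return True


fake_now = [0]
dedup = AlertDeduplicator(cooldown=3600, clock=lambda: fake_now[0])
print(dedup.should_notify("low_disk"))  # True  (first alert goes out)
fake_now[0] = 60
print(dedup.should_notify("low_disk"))  # False (suppressed a minute later)
fake_now[0] = 4000
print(dedup.should_notify("low_disk"))  # True  (cooldown elapsed)
```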

On 01/05/2015 10:40 PM, Andrew Woodward wrote:
 There are two threads here that need to be unraveled from each 
 other.
 
 1. We need to prevent Fuel from doing anything if the OS is out of 
 disk space. It leads to a very broken database which then requires 
 a developer to reset it to a usable state. From this point we need to:
 * develop a method for locking down DB writes so that Fuel becomes
 read-only until space is freed

It's true that full disk space plus DB writes can result in fatal
database failure. I just don't know if we can lock the DB just like
that? What if a deployment is in progress?

I think the first way to reduce disk space usage would be to set the
logging level to WARNING instead of DEBUG. It's good to have DEBUG
during development but I don't think it's that good for production.
Besides, it slows down deployment a lot, from what I observed.

 * develop a method (or re-use an existing one) to notify the user that a 
 serious error state exists on the host (one that could not be 
 dismissed)

Well, this is done already in the review I've linked above. It
basically posts a notification to the UI system. Everything still
works as before, though, until the disk is full. The CLI doesn't
communicate in any way with notifications AFAIK, so the warning is not
shown there.

 * we need some API that can lock/unlock the DB
 * we need some monitor process that will trigger the lock/unlock

This one can be easily changed with the code in the above review request.

 
 2. We need monitoring for the master node and fuel components in 
 general as discussed at length above. unless we intend to use this
  to also monitor the services on deployed nodes (likely bad), then
  what we use to do this is irrelevant to getting this started. If 
 we are intending to use this to also monitor deployed nodes, (again
 bad for the fuel node to do) then we need to standardize with what
 we monitor the cloud with (Zabbix currently) and offer a single
 pane of glass. Federation in the monitoring becomes a critical
 requirement here as having more than one pane of glass is an
 operations nightmare.

AFAIK installation of Zabbix is optional. We want obligatory
monitoring of the master, which would somehow force Zabbix's
installation on the cloud nodes.

P.

 
 Completing #1 is very important in the near term as I have had to 
 un-brick several deployments over it already. Also, in my mind 
 these are also separate tasks.
 
 On Thu, Nov 27, 2014 at 1:19 AM, Simon Pasquier 
 spasqu...@mirantis.com wrote:
 I've added another option to the Etherpad: collectd can do basic
  threshold monitoring and run any kind of scripts on alert 
 notifications. The other advantage of collectd would be the RRD 
 graphs for (almost) free. Of course since monit is already 
 supported in Fuel, this is the fastest path to get something 
 done. Simon
 
 On Thu, Nov 27, 2014 at 9:53 AM, Dmitriy Shulyak 
 dshul...@mirantis.com wrote:
 
 Is it possible to send HTTP requests from monit, e.g. for 
 creating notifications? I scanned through the docs and found 
 only alerts for sending mail. Also, where will the token
 (username/pass) for monit be stored?
 
 Or maybe there is another plan, without any API interaction?
 
 On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
 
 This I didn't know. It's true in fact; I checked the 
 manifests. Though monit is not deployed yet because of the lack 
 of packages in the Fuel ISO. Anyway, I think the argument about
 using yet another monitoring service is now rendered 
 invalid.
 
 So +1 for monit? :)
 
 P.
 
 
 On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:
 
 Monit is easy and is used to control states of Compute nodes.
 We can adopt it for master node.
 
 -- Best regards, Sergii Golovatiuk, Skype #golserge IRC 
 #holser
 
 On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin 
 sbogat...@mirantis.com wrote:
 
 As for me, Zabbix is overkill for one node. Zabbix Server
 + Agent + Frontend + DB + HTTP server, and all of it for 
 one node? Why not use something that was developed for 
 monitoring one node, doesn't have many deps, and works out of
 the box? Not necessarily Monit, but something similar.
 
 On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:
 
 We want to monitor Fuel master node while Zabbix is only
  on slave nodes and not on master. The monitoring service
  is supposed to be installed on Fuel master host (not 
 inside a Docker container) and provide basic info about 
 free disk space, etc.
 
 P.
 
 
 On 11/26/2014 02:58 PM, Jay Pipes wrote:
 
 On 11/26

Re: [openstack-dev] [Fuel] Building Fuel plugins with UI part

2014-12-15 Thread Przemyslaw Kaminski
First of all, compiling the statics shouldn't be a required step. No one 
does this during development.
For production-ready plugins, the compiled files should already be 
included in the GitHub repos, and installation of a plugin should just be 
a matter of downloading it. The API should then take care of informing 
the UI what plugins are installed.

The npm install step is mostly one-time.
The grunt build step for the plugin should basically just compile the 
static files of the plugin and not the whole project. Besides, with one 
file this is not extensible -- for N plugins would we build 2^N files 
with all possible combinations of including the plugins? :)


P.

On 12/15/2014 11:35 AM, Anton Zemlyanov wrote:
My experience with building Fuel plugins with UI part is following. To 
build a ui-less plugin, it takes 3 seconds and those commands:


git clone https://github.com/AlgoTrader/test-plugin.git
cd ./test-plugin
fpb --build ./

When UI added, build start to look like this and takes many minutes:

git clone https://github.com/AlgoTrader/test-plugin.git
git clone https://github.com/stackforge/fuel-web.git
cd ./fuel-web
git fetch https://review.openstack.org/stackforge/fuel-web 
refs/changes/00/112600/24 && git checkout FETCH_HEAD

cd ..
mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin
cp -R ./test-plugin/ui/* ./fuel-web/nailgun/static/plugins/test-plugin
cd ./fuel-web/nailgun
npm install && npm update
grunt build --static-dir=static_compressed
cd ../..
rm -rf ./test-plugin/ui
mkdir ./test-plugin/ui
cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/* 
./test-plugin/ui

cd ./test-plugin
fpb --build ./

I think we need something not so complex and fragile

Anton






Re: [openstack-dev] [Fuel] Building Fuel plugins with UI part

2014-12-15 Thread Przemyslaw Kaminski


On 12/15/2014 02:26 PM, Anton Zemlyanov wrote:

The building of the UI plugin has several things I do not like

1) I need to extract the UI part of the plugin and copy/symlink it to 
fuel-web


This is required; the UI part should live somewhere in statics/js. This 
directory is served by nginx, and symlinking/copying is, I think, the best 
way, far better than adding new directories to the nginx configuration.



2) I have to run grunt build on the whole fuel-web


This shouldn't at all be necessary.


3) I have to copy files back to original location to pack them


Shouldn't be necessary.

4) I cannot easily switch between development/production versions (no 
way to easily change entry point)


Development and production versions should only differ by serving 
raw vs. compressed files. The compressed files should be published by the 
plugin author.




The only way to install plugin is `fuel plugins --install`, no matter 
development or production, so even development plugins should be 
packed to tar.gz


The UI part should be working immediately after symlinking it somewhere in 
the statics/js directory IMHO (and after the API is aware of the new plugin).


P.



Anton

On Mon, Dec 15, 2014 at 3:30 PM, Przemyslaw Kaminski 
pkamin...@mirantis.com wrote:


First of all, compiling of statics shouldn't be a required step.
No one does this during development.
For production-ready plugins, the compiled files should already be
included in the GitHub repos and installation of plugin should
just be a matter of downloading it. The API should then take care
of informing the UI what plugins are installed.
The npm install step is mostly one-time.
The grunt build step for the plugin should basically just compile
the staticfiles of the plugin and not the whole project. Besides
with one file this is not extendable -- for N plugins we would
build 2^N files with all possible combinations of including the
plugins? :)

P.


On 12/15/2014 11:35 AM, Anton Zemlyanov wrote:

My experience with building Fuel plugins with UI part is
following. To build a ui-less plugin, it takes 3 seconds and
those commands:

git clone https://github.com/AlgoTrader/test-plugin.git
cd ./test-plugin
fpb --build ./

When UI added, build start to look like this and takes many minutes:

git clone https://github.com/AlgoTrader/test-plugin.git
git clone https://github.com/stackforge/fuel-web.git
cd ./fuel-web
git fetch https://review.openstack.org/stackforge/fuel-web
refs/changes/00/112600/24 && git checkout FETCH_HEAD
cd ..
mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin
cp -R ./test-plugin/ui/*
./fuel-web/nailgun/static/plugins/test-plugin
cd ./fuel-web/nailgun
npm install && npm update
grunt build --static-dir=static_compressed
cd ../..
rm -rf ./test-plugin/ui
mkdir ./test-plugin/ui
cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/*
./test-plugin/ui
cd ./test-plugin
fpb --build ./

I think we need something not so complex and fragile

Anton






Re: [openstack-dev] [Fuel][Nailgun] Web framework

2014-12-03 Thread Przemyslaw Kaminski
For me, the only useful paradigm when writing Flask is MethodViews [1], 
because decorators seem hard to refactor in large projects. Please look 
at how URLs are added -- one has to additionally specify the methods to 
match those from the MethodView -- this is code duplication and looks ugly.


It seems though that Flask-RESTful [2] fixes this, but then we're 
dependent on 2 projects.


I don't like that Flask uses a global request object [3]. From the Flask 
documentation:

Basically you can completely ignore that this is the case unless you 
are doing something like unit testing. You will notice that code which 
depends on a request object will suddenly break because there is no 
request object. The solution is creating a request object yourself and 
binding it to the context.


Yeah, let's make testing even harder...
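A stripped-down illustration of that testing concern, using plain Python stand-ins rather than actual Flask or Pecan code:

```python
# Stand-in for Flask's context-local `request` proxy (illustration only;
# real Flask uses a thread-local proxy, not a plain module global).
_current_request = None


def handler_with_global():
    # Depends on ambient state: calling this outside a request context,
    # e.g. in a unit test, fails unless a request is bound first.
    return {"node": _current_request["args"]["node"]}


def handler_explicit(request):
    # Trivially testable: the request is just an argument.
    return {"node": request["args"]["node"]}


print(handler_explicit({"args": {"node": "42"}}))  # {'node': '42'}
```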

Pecan looks better with respect to RESTful services [4].
POST parameters are cleanly passed as arguments to the post method. It 
also provides custom JSON serialization hooks [5], so we can forget about 
explicit serialization in handlers.


So from these 2 choices I'm for Pecan.

[1] http://flask.pocoo.org/docs/0.10/views/#method-views-for-apis
[2] https://flask-restful.readthedocs.org/en/0.3.0/
[3] http://flask.pocoo.org/docs/0.10/quickstart/#accessing-request-data
[4] http://pecan.readthedocs.org/en/latest/rest.html
[5] http://pecan.readthedocs.org/en/latest/jsonify.html


P.

On 12/03/2014 10:57 AM, Alexander Kislitsky wrote:
We used Flask in fuel-stats. It was easy and pleasant, and all 
project requirements were satisfied. And I saw the difficulties and 
workarounds with Pecan when Nick integrated it into Nailgun.

So +1 for Flask.


On Tue, Dec 2, 2014 at 11:00 PM, Nikolay Markov nmar...@mirantis.com wrote:


Michael, we already solved all issues I described, and I just don't
want to solve them once again after moving to another framework. Also,
I think, nothing of these wishes contradicts with good API design.

On Tue, Dec 2, 2014 at 10:49 PM, Michael Krotscheck
 krotsch...@gmail.com wrote:
 This sounds more like you need to pay off technical debt and
clean up your
 API.

 Michael

 On Tue Dec 02 2014 at 10:58:43 AM Nikolay Markov
 nmar...@mirantis.com
 wrote:

 Hello all,

  I actually tried to use Pecan and even created a couple of PoCs, but
  due to historical reasons of how our API is organized it will
  take much more time to implement all the workarounds we need for issues
  Pecan doesn't solve out of the box, like working with non-RESTful
  URLs, reverse URL lookups, returning a custom body in a 404 response,
  wrapping errors into JSON automatically, etc.

  As far as I see, each OpenStack project implements its own workarounds
  for these issues, but still it requires much less manpower and time
  for us to move to Flask-RESTful instead of Pecan, because all these
  problems are already solved there.

 BTW, I know a lot of pretty big projects using Flask (it's the
second
 most popular Web framework after Django in Python Web
community), they
 even have their own hall of fame:
 http://flask.pocoo.org/community/poweredby/ .

  On Tue, Dec 2, 2014 at 7:13 PM, Ryan Brown rybr...@redhat.com wrote:
  On 12/02/2014 09:55 AM, Igor Kalnitsky wrote:
  Hi, Sebastian,
 
  Thank you for raising this topic again.
 
  [snip]
 
  Personally, I'd like to use Flask instead of Pecan, because
first one
  is more production-ready tool and I like its design. But I
believe
  this should be resolved by voting.
 
  Thanks,
  Igor
 
  On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski
  skalinow...@mirantis.com wrote:
  Hi all,
 
  [snip explanation+history]
 
  Best,
  Sebastian
 
  Given that Pecan is used for other OpenStack projects and has
plenty of
  builtin functionality (REST support, sessions, etc) I'd
prefer it for a
  number of reasons.
 
  1) Wouldn't have to pull in plugins for standard (in Pecan)
things
  2) Pecan is built for high traffic, where Flask is aimed at
much smaller
  projects
  3) Already used by other OpenStack projects, so common
patterns can be
  reused as oslo libs
 
  Of course, the Flask community seems larger (though the
average flask
  project seems pretty small).
 
  I'm not sure what determines production readiness, but it
seems to me
  like Fuel developers fall more in Pecan's target audience than in
  Flask's.
 
  My $0.02,
  Ryan
 
  --
  Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
 
Re: [openstack-dev] [Fuel][Nailgun] Web framework

2014-12-03 Thread Przemyslaw Kaminski
Yeah, I didn't notice that. Honestly, I'd prefer both to be accessible as 
instance attributes, just like in [1], but it's more a matter of taste I 
guess.


[1] 
http://tornado.readthedocs.org/en/latest/web.html#tornado.web.RequestHandler.request


P.

On 12/03/2014 02:03 PM, Sebastian Kalinowski wrote:


2014-12-03 13:47 GMT+01:00 Igor Kalnitsky ikalnit...@mirantis.com:


 I don't like that Flask uses a global request object [3].

Przemyslaw, actually Pecan does use global objects too. BTW, what's
wrong with global objects? They are thread-safe in both Pecan and
Flask.


To be fair, Pecan can also pass the request and response explicitly to 
the method [1].


[1] http://pecan.readthedocs.org/en/latest/contextlocals.html




Re: [openstack-dev] [Fuel] [Nailgun] Unit tests improvement meeting minutes

2014-12-01 Thread Przemyslaw Kaminski


On 11/28/2014 05:15 PM, Ivan Kliuk wrote:

Hi, team!

Let me please present ideas collected during the unit tests 
improvement meeting:

1) Rename class ``Environment`` to something more descriptive
2) Remove hardcoded self.clusters[0], etc., from ``Environment``. 
Let's use parameters instead
3) run_tests.sh should invoke an alternate syncdb() for cases where we 
don't need to test the migration procedure, i.e. create_db_schema()
4) Consider using a custom fixture provider. The main functionality 
should combine loading from a YAML/JSON source and support for fixture 
inheritance

5) The project needs a document (policy) which describes:
- Test creation techniques;
- Test categorization (integration/unit) and approaches to testing 
different parts of the code base

-
6) Review the tests and refactor unit tests as described in the test 
policy

7) Mimic Nailgun module structure in unit tests
8) Explore Swagger tool http://swagger.io/


Swagger is a great tool; we used it at my previous job. We used Tornado, 
and attached some hand-crafted code to the RequestHandler class so that 
it inspected all its subclasses (i.e. the different endpoints with REST 
methods), generated a swagger file and presented the Swagger UI 
(https://github.com/swagger-api/swagger-ui) under some /docs/ URL.
What this gave us is that we could add a YAML specification directly 
to the docstring of the handler method and it would automatically appear 
in the UI. It's worth noting that the UI provides an interactive form 
for sending requests to the API, so that tinkering with the API is easy [1].


[1] 
https://www.dropbox.com/s/y0nuxull9mxm5nm/Swagger%20UI%202014-12-01%2012-13-06.png?dl=0
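The subclass-inspection trick can be sketched in plain Python (with a stand-in base class instead of tornado.web.RequestHandler; the '---' docstring convention here is just an illustration of where the YAML spec would live):

```python
class RequestHandler:
    """Stand-in for tornado.web.RequestHandler (illustration only)."""


class NodeHandler(RequestHandler):
    def get(self):
        """Return a node.
        ---
        responses:
          200:
            description: the node object
        """


def build_swagger_paths(base):
    """Collect per-method docstrings from all handler subclasses."""
    paths = {}
    for cls in base.__subclasses__():
        methods = {}
        for verb in ("get", "post", "put", "delete"):
            fn = getattr(cls, verb, None)
            if fn is not None and fn.__doc__:
                # Everything after '---' is the spec shown in the UI.
                methods[verb] = fn.__doc__.split("---", 1)[-1].strip()
        paths[cls.__name__] = methods
    return paths


print("200" in build_swagger_paths(RequestHandler)["NodeHandler"]["get"])  # True
```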


P.


--
Sincerely yours,
Ivan Kliuk




Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-27 Thread Przemyslaw Kaminski
I mean, with monit you can execute arbitrary scripts, so use curl? Or 
save the notifications directly to the DB?


http://omgitsmgp.com/2013/09/07/a-monit-primer/

I guess some data has to be stored in a configuration file (at least the 
DB credentials or the Nailgun API URL, if we were to create 
notifications via the API). I proposed a hand-crafted solution


https://review.openstack.org/#/c/135314/

that lives in the fuel-web code and uses settings.yaml, so no config file 
is necessary. It has the drawback, though, that the nailgun code lives 
inside a Docker container, so the monitoring data isn't reliable.
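For comparison, a monit exec hook could be a small script along these lines (the endpoint and payload shape are hypothetical, and the HTTP transport is injected so the sketch needs no live master node):

```python
import json


def make_notification(topic, message):
    # Hypothetical payload shape; the real Nailgun notification schema
    # would need to be checked against the API handlers.
    return {"topic": topic, "message": message}


def send_notification(api_url, payload, http_post):
    """http_post(url, body) is injected; requests/urllib in real use."""
    return http_post(api_url + "/api/notifications", json.dumps(payload))


sent = []
send_notification(
    "http://10.20.0.2:8000",
    make_notification("warning", "Low disk space on master"),
    http_post=lambda url, body: sent.append((url, body)),
)
print(sent[0][0])  # http://10.20.0.2:8000/api/notifications
```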


P.

On 11/27/2014 09:53 AM, Dmitriy Shulyak wrote:
Is it possible to send HTTP requests from monit, e.g. for creating 
notifications?

I scanned through the docs and found only alerts for sending mail.
Also, where will the token (username/pass) for monit be stored?

Or maybe there is another plan, without any API interaction?

On Thu, Nov 27, 2014 at 9:39 AM, Przemyslaw Kaminski 
pkamin...@mirantis.com wrote:


This I didn't know. It's true, in fact -- I checked the manifests.
Monit is not deployed yet, though, because of the lack of packages in
the Fuel ISO. Anyway, I think the argument about using yet another
monitoring service is now rendered invalid.

So +1 for monit? :)

P.


On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:

Monit is easy and is used to control states of Compute nodes. We
can adopt it for master node.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin
sbogat...@mirantis.com mailto:sbogat...@mirantis.com wrote:

As for me, Zabbix is overkill for one node. Zabbix Server +
Agent + Frontend + DB + HTTP server, and all of it for one
node? Why not use something that was developed for monitoring
one node, doesn't have many deps, and works out of the box? Not
necessarily Monit, but something similar.

On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:

We want to monitor Fuel master node while Zabbix is only
on slave nodes and not on master. The monitoring service
is supposed to be installed on Fuel master host (not
inside a Docker container) and provide basic info about
free disk space, etc.

P.


On 11/26/2014 02:58 PM, Jay Pipes wrote:

On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

So then in the end, there will be 3 monitoring
systems to learn,
configure, and debug? Monasca for cloud users,
zabbix for most of the
physical systems, and sensu or monit to be small?

Seems very complicated.

If not just monasca, why not the zabbix that's
already being deployed?


Yes, I had the same thoughts... why not just use
zabbix since it's used already?

Best,
-jay



Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Przemyslaw Kaminski

I agree, this was supposed to be small.

P.

On 11/26/2014 11:03 AM, Stanislaw Bogatkin wrote:

Hi all,
As I understand it, we just need to monitor one node -- the Fuel master.
For slave nodes we already have a solution: Zabbix.
So, in that case, why do we need complicated stuff like Monasca?
Let's use something small, like Monit or Sensu.


On Mon, Nov 24, 2014 at 10:36 PM, Fox, Kevin M kevin@pnnl.gov 
mailto:kevin@pnnl.gov wrote:


One of the selling points of TripleO is to reuse as much as
possible from the cloud, to make it easier to deploy. While
Monasca may be more complicated, if it ends up being a component
everyone learns, then it's not as bad as needing to learn two
different monitoring technologies. You could say the same thing
about cobbler vs. Ironic: the whole Ironic stack is much more
complicated, but for an OpenStack admin it's easier since a lot of
existing knowledge applies. Just something to consider.

Thanks,
Kevin

*From:* Tomasz Napierala
*Sent:* Monday, November 24, 2014 6:42:39 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Fuel] fuel master monitoring


 On 24 Nov 2014, at 11:09, Sergii Golovatiuk
sgolovat...@mirantis.com mailto:sgolovat...@mirantis.com wrote:

 Hi,

Monasca looks overcomplicated for the purposes we need. Also, it
requires Kafka, which is a Java-based transport.
I am proposing Sensu. Its architecture is tiny and elegant.
Also, it uses RabbitMQ as transport, so we won't need to introduce a
new protocol.

Do we really need such complicated stuff? Sensu is a huge project,
and its footprint is quite large. Monit can alert using scripts;
can we use it instead of the API?

Regards,
-- 
Tomasz 'Zen' Napierala

Sr. OpenStack Engineer
tnapier...@mirantis.com mailto:tnapier...@mirantis.com









Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Przemyslaw Kaminski
We want to monitor the Fuel master node, while Zabbix is only on slave 
nodes and not on the master. The monitoring service is supposed to be 
installed on the Fuel master host (not inside a Docker container) and 
provide basic info about free disk space, etc.


P.

On 11/26/2014 02:58 PM, Jay Pipes wrote:

On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

So then in the end, there will be 3 monitoring systems to learn,
configure, and debug? Monasca for cloud users, zabbix for most of the
physical systems, and sensu or monit to be small?

Seems very complicated.

If not just monasca, why not the zabbix that's already being deployed?


Yes, I had the same thoughts... why not just use zabbix since it's 
used already?


Best,
-jay



Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-26 Thread Przemyslaw Kaminski
This I didn't know. It's true, in fact -- I checked the manifests. Monit 
is not deployed yet, though, because of the lack of packages in the Fuel 
ISO. Anyway, I think the argument about using yet another monitoring 
service is now rendered invalid.


So +1 for monit? :)

P.

On 11/26/2014 05:55 PM, Sergii Golovatiuk wrote:
Monit is easy and is used to control states of Compute nodes. We can 
adopt it for master node.


--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Nov 26, 2014 at 4:46 PM, Stanislaw Bogatkin 
sbogat...@mirantis.com mailto:sbogat...@mirantis.com wrote:


As for me, Zabbix is overkill for one node. Zabbix Server + Agent
+ Frontend + DB + HTTP server, and all of it for one node? Why not
use something that was developed for monitoring one node, doesn't
have many deps, and works out of the box? Not necessarily Monit, but
something similar.

On Wed, Nov 26, 2014 at 6:22 PM, Przemyslaw Kaminski
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:

We want to monitor Fuel master node while Zabbix is only on
slave nodes and not on master. The monitoring service is
supposed to be installed on Fuel master host (not inside a
Docker container) and provide basic info about free disk
space, etc.

P.


On 11/26/2014 02:58 PM, Jay Pipes wrote:

On 11/26/2014 08:18 AM, Fox, Kevin M wrote:

So then in the end, there will be 3 monitoring systems
to learn,
configure, and debug? Monasca for cloud users, zabbix
for most of the
physical systems, and sensu or monit to be small?

Seems very complicated.

If not just monasca, why not the zabbix that's already
being deployed?


Yes, I had the same thoughts... why not just use zabbix
since it's used already?

Best,
-jay



Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-24 Thread Przemyslaw Kaminski

I proposed monasca-agent in a previous mail in this thread.

P.

On 11/21/2014 04:48 PM, Fox, Kevin M wrote:

How about this?
https://wiki.openstack.org/wiki/Monasca

Kevin

*From:* Dmitriy Shulyak
*Sent:* Friday, November 21, 2014 12:57:45 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Fuel] fuel master monitoring


I have nothing against using some 3rd party service. But I thought
this was to be small -- disk monitoring only & notifying the user,
not stats collecting. That's why I added the code to Fuel
codebase. If you want external service you need to remember about
such details as, say, duplicate settings (database credentials at
least) and I thought this was an overkill for such simple
functionality.

Yes, it will be much more complex than a simple daemon that creates 
notifications, but our application operates in isolated containers, 
and most of the resources can't be discovered from any particular 
container. So if we want to extend it with another task, like 
monitoring the pool of DHCP addresses, we will end up with some kind of 
server-agent architecture, and that is a lot of work to do.


Also, for a 3rd party service, notification injecting code still
needs to be written as a plugin -- that's why I also don't think
Ruby is a good idea :)

AFAIK there is a way to write Python plugins for Sensu, but if there 
is a monitoring app in Python that has friendly support for 
extensions, I am +1 for Python.


So in the end I don't know if we'll have that much less code with
a 3rd party service. But if you want a statistics collector then
maybe it's OK.

I think that a monitoring application fits there, and we are kind of 
already reinventing our own wheel for collecting
statistics from OpenStack. I would like to know what the guys who were 
working on stats in 6.0 think about it. So it is TBD.






Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-24 Thread Przemyslaw Kaminski

And it all started out with simple free disk space monitoring :)

I created a document

https://etherpad.openstack.org/p/fuel-master-monitoring

Let's write down exactly what we want to monitor and what actions to take. 
Then it will be easier to decide which system we want.


P.

On 11/24/2014 04:32 PM, Rob Basham wrote:

Rob Basham

Cloud Systems Software Architecture
971-344-1999


Tomasz Napierala tnapier...@mirantis.com wrote on 11/24/2014 
06:42:39 AM:


 From: Tomasz Napierala tnapier...@mirantis.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 11/24/2014 06:46 AM
 Subject: Re: [openstack-dev] [Fuel] fuel master monitoring


  On 24 Nov 2014, at 11:09, Sergii Golovatiuk
 sgolovat...@mirantis.com wrote:
 
  Hi,
 
 Monasca looks overcomplicated for the purposes we need. Also it
 requires Kafka, which is a Java-based transport.
What scale are you proposing to support?

 I am proposing Sensu. Its architecture is tiny and elegant. Also
 it uses RabbitMQ as transport, so we won't need to introduce a new 
protocol.
We use Sensu on our smaller clouds and really like it there, but it 
doesn't scale sufficiently for our bigger clouds.


 Do we really need such complicated stuff? Sensu is a huge project, and
 its footprint is quite large. Monit can alert using scripts; can we
 use it instead of the API?
I assume you weren't talking about Sensu here but rather about 
Monasca. I like Monasca for monitoring at large scale. Kafka and 
Apache Storm are proven technologies at scale. Do you really think 
you can just pick one monitoring protocol that fits the needs of 
everybody? Frankly, I'm skeptical of that.



 Regards,
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com









Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-21 Thread Przemyslaw Kaminski
I have nothing against using some 3rd party service. But I thought this 
was to be small -- disk monitoring only & notifying the user, not stats 
collecting. That's why I added the code to the Fuel codebase. If you want 
an external service, you need to remember about such details as, say, 
duplicate settings (database credentials at least), and I thought this 
was overkill for such simple functionality. Also, for a 3rd party 
service, notification-injecting code still needs to be written as a 
plugin -- that's why I also don't think Ruby is a good idea :)


So in the end I don't know if we'll have that much less code with a 3rd 
party service. But if you want a statistics collector then maybe it's OK.


I found some Python services that might suit us:

https://github.com/google/grr
https://github.com/BrightcoveOS/Diamond

P.

On 11/20/2014 09:13 PM, Dmitriy Shulyak wrote:

Guys, maybe we can use existing software, for example Sensu [1]?
Maybe I am wrong, but I don't like the idea of starting to write our 
own small monitoring applications.
Also, something well designed and extendable can be reused for the 
statistics collector.



1. https://github.com/sensu

On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala 
tnapier...@mirantis.com mailto:tnapier...@mirantis.com wrote:



On 06 Nov 2014, at 12:20, Przemyslaw Kaminski
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:

 I didn't mean a robust monitoring system, just something
simpler. Notifications is a good idea for FuelWeb.

I’m all for that, but if we add it, we need to document ways to
clean up space.
We could also add some kind of simple job to remove rotated logs,
obsolete snapshots, etc., but this is out of scope for 6.0 I guess.

Regards,
--
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com mailto:tnapier...@mirantis.com









Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-21 Thread Przemyslaw Kaminski

BTW, there's also Monit

http://mmonit.com/monit/

(though it's in C) that looks quite nice. Some config examples:

http://omgitsmgp.com/2013/09/07/a-monit-primer/

P.

On 11/20/2014 09:13 PM, Dmitriy Shulyak wrote:

Guys, maybe we can use existing software, for example Sensu [1]?
Maybe I am wrong, but I don't like the idea of starting to write our 
own small monitoring applications.
Also, something well designed and extendable can be reused for the 
statistics collector.



1. https://github.com/sensu

On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala 
tnapier...@mirantis.com mailto:tnapier...@mirantis.com wrote:



On 06 Nov 2014, at 12:20, Przemyslaw Kaminski
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:

 I didn't mean a robust monitoring system, just something
simpler. Notifications is a good idea for FuelWeb.

I’m all for that, but if we add it, we need to document ways to
clean up space.
We could also add some kind of simple job to remove rotated logs,
obsolete snapshots, etc., but this is out of scope for 6.0 I guess.

Regards,
--
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com mailto:tnapier...@mirantis.com









Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-21 Thread Przemyslaw Kaminski

There's also OpenStack's monasca-agent:

https://github.com/stackforge/monasca-agent

We could try to run it standalone (without the Monasca API), add a plugin 
for it that checks the disk and sends a notification straight to the Fuel 
DB, and omit generating Forwarder requests. Or set up a fake API, though 
both ways seem a bit hackish.


I wasn't aware that we wanted email notifications too?

P.

On 11/21/2014 10:35 AM, Igor Kalnitsky wrote:

I have heard a lot of good reviews of Monit, but unfortunately it looks
like Monit doesn't support plugins and doesn't provide an API. That may
be a stumbling block if one day we decide to go deeper into the
monitoring task.

On Fri, Nov 21, 2014 at 11:01 AM, Matthew Mosesohn
mmoses...@mirantis.com wrote:

I'm okay with Sensu or Monit, just as long as the results of
monitoring can be represented in a web UI and there is a configurable
option for email alerting. Tight integration with Fuel Web is a
nice-to-have (via AMQP perhaps), but anything that can solve our
out-of-disk scenario is ideal. I did my best to tune our logging and
log rotation, but monitoring is the most sensible approach.

-Matthew

On Fri, Nov 21, 2014 at 12:21 PM, Przemyslaw Kaminski
pkamin...@mirantis.com wrote:

BTW, there's also Monit

http://mmonit.com/monit/

(though it's in C) that looks quite nice. Some config examples:

http://omgitsmgp.com/2013/09/07/a-monit-primer/

P.

On 11/20/2014 09:13 PM, Dmitriy Shulyak wrote:

Guys, maybe we can use existing software, for example Sensu [1]?
Maybe I am wrong, but I don't like the idea of starting to write our own
small monitoring applications.
Also, something well designed and extendable can be reused for the
statistics collector.


1. https://github.com/sensu

On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:


On 06 Nov 2014, at 12:20, Przemyslaw Kaminski pkamin...@mirantis.com
wrote:


I didn't mean a robust monitoring system, just something simpler.
Notifications is a good idea for FuelWeb.

I’m all for that, but if we add it, we need to document ways to clean up
space.
We could also add some kind of simple job to remove rotated logs, obsolete
snapshots, etc., but this is out of scope for 6.0 I guess.

Regards,
--
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com









Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-06 Thread Przemyslaw Kaminski
I didn't mean a robust monitoring system, just something simpler. 
Notifications are a good idea for FuelWeb.


P.

On 11/06/2014 09:59 AM, Anton Zemlyanov wrote:
We can add a notification to FuelWeb; no additional software or user 
actions are required. I would not overestimate this method, though -- it 
is in no way a robust monitoring system. Forcing the user to do 
something on a regular basis is unlikely to work.


Anton

On Thu, Nov 6, 2014 at 11:55 AM, Przemyslaw Kaminski 
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:


I think we're missing the point here. What I meant adding a simple
monitoring system that informed the user via UI/CLI/email/whatever
of low resources on fuel master node. That's it. HA here is not an
option -- if, despite of warnings, the user still continues to use
fuel and disk becomes full, it's the user's fault. By adding these
warnings we have a way of saying "We told you so!" Without
warnings we get bugs like [1] I mentioned in the first post.

Of course user can check disk space by hand but since we do have a
full-blown UI telling the user to periodically log in to the
console and check disks by hand seems a bit of a burden.

We can even implement such monitoring functionality as a Nailgun
plugin -- installing it would be optional and at the same time we
would grow our plugin ecosystem.

P.


On 11/05/2014 08:42 PM, Dmitry Borodaenko wrote:

Even one additional hardware node required to host the Fuel
master is seen by many users as excessive. Unless you can come up
with an architecture that adds HA capability to Fuel without
increasing its hardware footprint by 2 more nodes, it's just not
worth it.

The only operational aspect of the Fuel master node that you
don't want to lose even for a short while is logging. You'd be
better off redirecting OpenStack environments' logs to a
dedicated highly available logging server (which, of course, you
already have in your environment), and deal with Fuel master node
failures by restoring it from backups.

On Wed, Nov 5, 2014 at 8:26 AM, Anton Zemlyanov
azemlya...@mirantis.com mailto:azemlya...@mirantis.com wrote:

Monitoring of the Fuel master's disk space is the special
case. I really wonder why Fuel master have no HA option, disk
overflow can be predicted but many other failures cannot. HA
is a solution of the 'single point of failure' problem.

The current monitoring recommendations

(http://docs.openstack.org/openstack-ops/content/logging_monitoring.html)
are based on analyzing logs and manual checks, that are
rather reactive way of fixing problems. Zabbix is quite good
for preventing failures that are predictable but for the
abrupt problems Zabbix just reports them 'post mortem'.

The only way to remove the single failure point is to
implement redundancy/HA

Anton

On Tue, Nov 4, 2014 at 6:26 PM, Przemyslaw Kaminski
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:

Hello,

In extension to my comment in this bug [1] I'd like to
discuss the possibility of adding Fuel master node
monitoring. As I wrote in the comment, when disk is full
it might be already too late to perform any action since
for example Nailgun could be down because DB shut itself
down. So we should somehow warn the user that disk is
running low (in the UI and fuel CLI on stderr for
example) before it actually happens.

For now the only meaningful value to monitor would be
disk usage -- do you have other suggestions? If not then
probably a simple API endpoint with statvfs calls would
suffice. If you see other usages of this then maybe it
would be better to have some daemon collecting the stats
we want.

If we opted for a daemon, then I'm aware that the user
can optionally install Zabbix server although looking at
blueprints in [2] I don't see anything about monitoring
Fuel master itself -- is it possible to do? The
installation of Zabbix, though, is not mandatory, so it
still doesn't completely solve the problem.

[1] https://bugs.launchpad.net/fuel/+bug/1371757
[2]
https://blueprints.launchpad.net/fuel/+spec/monitoring-system

Przemek


Re: [openstack-dev] [Fuel] fuel master monitoring

2014-11-05 Thread Przemyslaw Kaminski
I think we're missing the point here. What I meant was adding a simple 
monitoring system that informs the user via UI/CLI/email/whatever of 
low resources on the Fuel master node. That's it. HA here is not an option 
-- if, despite the warnings, the user still continues to use Fuel and the 
disk becomes full, it's the user's fault. By adding these warnings we 
have a way of saying "We told you so!" Without warnings we get bugs like 
[1] I mentioned in the first post.


Of course the user can check disk space by hand, but since we do have a 
full-blown UI, telling the user to periodically log in to the console and 
check disks by hand seems a bit of a burden.


We can even implement such monitoring functionality as a Nailgun plugin 
-- installing it would be optional and at the same time we would grow 
our plugin ecosystem.


P.

On 11/05/2014 08:42 PM, Dmitry Borodaenko wrote:
Even one additional hardware node required to host the Fuel master is 
seen by many users as excessive. Unless you can come up with an 
architecture that adds HA capability to Fuel without increasing its 
hardware footprint by 2 more nodes, it's just not worth it.


The only operational aspect of the Fuel master node that you don't 
want to lose even for a short while is logging. You'd be better off 
redirecting OpenStack environments' logs to a dedicated highly 
available logging server (which, of course, you already have in your 
environment), and deal with Fuel master node failures by restoring it 
from backups.


On Wed, Nov 5, 2014 at 8:26 AM, Anton Zemlyanov 
azemlya...@mirantis.com mailto:azemlya...@mirantis.com wrote:


Monitoring of the Fuel master's disk space is a special case. I
really wonder why the Fuel master has no HA option; disk overflow can
be predicted, but many other failures cannot. HA is a solution to
the 'single point of failure' problem.

The current monitoring recommendations
(http://docs.openstack.org/openstack-ops/content/logging_monitoring.html)
are based on analyzing logs and manual checks, which is a rather
reactive way of fixing problems. Zabbix is quite good at
preventing failures that are predictable, but abrupt
problems Zabbix just reports 'post mortem'.

The only way to remove the single point of failure is to implement
redundancy/HA.

Anton

On Tue, Nov 4, 2014 at 6:26 PM, Przemyslaw Kaminski
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:

Hello,

In extension to my comment in this bug [1] I'd like to discuss
the possibility of adding Fuel master node monitoring. As I
wrote in the comment, when disk is full it might be already
too late to perform any action since for example Nailgun could
be down because DB shut itself down. So we should somehow warn
the user that disk is running low (in the UI and fuel CLI on
stderr for example) before it actually happens.

For now the only meaningful value to monitor would be disk
usage -- do you have other suggestions? If not then probably a
simple API endpoint with statvfs calls would suffice. If you
see other usages of this then maybe it would be better to have
some daemon collecting the stats we want.

If we opted for a daemon, then I'm aware that the user can
optionally install Zabbix server although looking at
blueprints in [2] I don't see anything about monitoring Fuel
master itself -- is it possible to do? The installation
of Zabbix, though, is not mandatory, so it still doesn't
completely solve the problem.

[1] https://bugs.launchpad.net/fuel/+bug/1371757
[2] https://blueprints.launchpad.net/fuel/+spec/monitoring-system

Przemek





--
Dmitry Borodaenko




[openstack-dev] [Fuel] fuel master monitoring

2014-11-04 Thread Przemyslaw Kaminski

Hello,

As an extension to my comment in this bug [1], I'd like to discuss the 
possibility of adding Fuel master node monitoring. As I wrote in the 
comment, when the disk is full it might already be too late to perform any 
action, since for example Nailgun could be down because the DB shut itself 
down. So we should somehow warn the user that disk space is running low (in 
the UI and on the fuel CLI's stderr, for example) before it actually happens.


For now the only meaningful value to monitor would be disk usage -- do 
you have other suggestions? If not, then probably a simple API endpoint 
with statvfs calls would suffice. If you see other uses for this, then 
maybe it would be better to have some daemon collecting the stats we want.
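A minimal sketch of the statvfs idea -- a plain function that a
hypothetical API endpoint could wrap (the function and field names here
are illustrative):

```python
import os

def disk_usage(path="/"):
    # statvfs reports block counts; multiply by the fragment size
    # (f_frsize) to get bytes.
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    # f_bavail is the space available to unprivileged users, which is
    # what matters before the DB refuses to write.
    free = st.f_bavail * st.f_frsize
    return {"total": total, "free": free,
            "percent_free": 100.0 * free / total}

usage = disk_usage("/")
print(usage["percent_free"])
```

An endpoint could then compare percent_free against a threshold and
emit the UI/CLI warning discussed above.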


If we opted for a daemon, then I'm aware that the user can optionally 
install a Zabbix server, although looking at the blueprints in [2] I don't 
see anything about monitoring the Fuel master itself -- is it possible to 
do? The installation of Zabbix, though, is not mandatory, so it still 
doesn't completely solve the problem.


[1] https://bugs.launchpad.net/fuel/+bug/1371757
[2] https://blueprints.launchpad.net/fuel/+spec/monitoring-system

Przemek
