[openstack-dev] [Horizon][Trove] About Trove policies
Hi all,

I'm currently working on two blueprints to improve the Trove support in Horizon:

1. blueprint trove-list-datastores-and-versions
2. blueprint datatable-column-level-policy-checks <-- needed by the first

When requesting the Trove API for datastore versions, the response contains more data if the call is made with an admin context. I want to process and display this additional data in a DataTable with "admin only" columns.

My first implementation was based on permissions: process and show some columns only for an admin. The feedback on Gerrit suggested using policies instead, so my current implementation is based on policies.

As I said on Gerrit, my concern is that there is no policy.json file in Trove, so it seems a bit odd to have a trove_policy.json in Horizon. I have several questions:

1. Is it planned to add a policy file to Trove?
2. If not, should we keep trove_policy.json in Horizon?
3. If not, should we revert to permissions?

Links:
Patchsets for bp trove-list-datastores-and-versions: https://review.openstack.org/#/c/163196/
Patchsets for bp datatable-column-level-policy-checks: https://review.openstack.org/#/c/164010/

Thanks,
Romain
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
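To make the question concrete, here is a minimal, Horizon-independent sketch of what a column-level policy check could look like. Every name here (the rule name, the column names, the POLICY mapping) is a hypothetical illustration, not the actual blueprint implementation:

```python
# POLICY stands in for rules that would normally be loaded from a
# trove_policy.json file; check() mimics a much-simplified policy engine.
# All rule and column names below are made up for illustration.
POLICY = {"datastore_version:show_admin_fields": {"admin"}}

def check(rule, roles):
    """Return True if any of the caller's roles satisfies the rule."""
    return bool(POLICY.get(rule, set()) & set(roles))

ALL_COLUMNS = ["name", "version", "image_id", "packages"]
ADMIN_ONLY = {"image_id": "datastore_version:show_admin_fields",
              "packages": "datastore_version:show_admin_fields"}

def visible_columns(roles):
    """Columns a DataTable would render for a user with the given roles."""
    return [c for c in ALL_COLUMNS
            if c not in ADMIN_ONLY or check(ADMIN_ONLY[c], roles)]
```

With this sketch, `visible_columns(["admin"])` returns all four columns, while a non-admin user only sees the public ones; the permission-based alternative would replace `check()` with a role-membership test.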
Re: [openstack-dev] [all] [glance] do NOT ever sort requirements.txt
On Wed, 2014-09-03 at 14:03 +0100, Daniel P. Berrange wrote:
> On Wed, Sep 03, 2014 at 08:37:17AM -0400, Sean Dague wrote:
> > I'm not sure why people keep showing up with "sort requirements" patches
> > like https://review.openstack.org/#/c/76817/6, however, they do.
> >
> > All of these need to be -2ed with prejudice.
> >
> > requirements.txt is not a declarative interface. The order is important,
> > as pip processes it in the order it is written. Changing the order has
> > impacts on the overall integration which can cause wedges later.
>
> Can requirements.txt contain comment lines? If so, it would be worth adding:
>
> # The ordering of modules in this file is important
> # Do not attempt to re-sort the lines
>
> Because 6 months hence people will have probably forgotten about
> this mail, or if they're new contributors, never know it existed.
>
> Regards,
> Daniel

Yes, requirements.txt can contain comment lines. It's a good idea to keep this information in the file itself.

Best,
Romain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
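For the record, a requirements.txt following Daniel's suggestion might begin like this. The package entries are just placeholders; pip treats lines starting with `#` as comments:

```
# The ordering of modules in this file is important
# Do not attempt to re-sort the lines
pbr>=0.6
Babel>=1.3
```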
Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend
On Tue, 2014-09-02 at 17:34 +0300, Dmitriy Ukhlov wrote:
> Hi Romain!
>
> Thank you for useful info about your Cassandra backuping.

It's always a pleasure to talk about Cassandra :)

> We have not tried to tune Cassandra compaction properties yet.
> MagnetoDB is a DynamoDB-like REST API, which means that it is a key-value
> storage itself and it should be able to work for different kinds of load,
> because that depends on the user application which uses MagnetoDB.

The choice of compaction strategy really matters when setting up a cluster. In a use case such as MagnetoDB's, we can assume that the database will be updated frequently, so LCS (leveled compaction strategy) is more suitable than STCS (size-tiered compaction strategy).

> Do you have some recommendation or comments based on information about
> read/write ratio?

Yes: if the read/write ratio is >= 2, then LCS is a must-have. Just be aware that LCS is more IO-intensive during compaction than STCS, but it's for a good cause. You'll find more information here:
http://www.datastax.com/dev/blog/when-to-use-leveled-compaction

Best,
Romain
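For reference, switching an existing table to LCS is a one-line schema change in CQL. The keyspace and table names below are hypothetical, and the sstable_size_in_mb value is just a commonly used setting, not a recommendation for MagnetoDB specifically:

```sql
-- Hypothetical keyspace/table; compaction options take effect on the
-- next compaction cycle after the ALTER.
ALTER TABLE magnetodb.user_data
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 160};
```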
Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend
Hi Mirantis guys,

I have set up two Cassandra backups: the first backup procedure was similar to the one you want to achieve; the second used SAN features (EMC VNX snapshots), so it was very specific to the environment.

Backing up an entire cluster (and therefore all replicas) is challenging when dealing with big data, and not really needed. If your replicas are spread across several data centers, you could back up just one data center; in that case you back up only one replica. Depending on your needs, you may want to back up twice (I mean "back up the backup", using a tape library for example) and then store it in an external location for disaster recovery, requirements specifications, norms, etc.

The snapshot command issues a flush before it effectively takes the snapshot, so the flush command is not necessary:
https://github.com/apache/cassandra/blob/c7ebc01bbc6aa602b91e105b935d6779245c87d1/src/java/org/apache/cassandra/db/ColumnFamilyStore.java#L2213
(snapshotWithoutFlush() is used by the scrub command)

Just out of curiosity, have you tried the leveled compaction strategy? It seems that you use STCS. Does your use case imply many updates? What is your read/write ratio?

Best,
Romain

- Original Message -
From: "Denis Makogon"
To: "OpenStack Development Mailing List (not for usage questions)"
Sent: Friday, August 29, 2014 4:33:59 PM
Subject: Re: [openstack-dev] [magnetodb] Backup procedure for Cassandra backend

On Fri, Aug 29, 2014 at 4:29 PM, Dmitriy Ukhlov < dukh...@mirantis.com > wrote:

Hello Denis,

Thank you for very useful knowledge sharing. But I have one more question: as far as I understood, if we have replication factor 3, our backup may contain three copies of the same data. It may also contain a set of SSTables that has not been compacted. Do we have any ability to compact the collected backup data before moving it to the backup storage?

Thanks for the fast response, Dmitriy.
With replication factor 3 - yes, this looks like a feature that allows backing up only one node instead of 3 of them; in other cases, we would need to iterate over each node, as you know.

Correct, it is possible to have uncompacted SSTables. To accomplish compaction we might need to use the compaction mechanism provided by nodetool, see http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsCompact.html ; we just need to take into account that the SSTable may already have been compacted, in which case forcing compaction wouldn't give valuable benefits.

Best regards,
Denis Makogon

On Fri, Aug 29, 2014 at 2:01 PM, Denis Makogon < dmako...@mirantis.com > wrote:

Hello, stackers. I'd like to start a thread about the backup procedure for MagnetoDB, to be precise, for the Cassandra backend. In order to accomplish a backup procedure for Cassandra, we need to understand how backups work.

To perform a backup:
1. SSH into each node.
2. Call 'nodetool snapshot' with appropriate parameters.
3. Collect the backup.
4. Send the backup to a remote storage.
5. Remove the initial snapshot.

Let's take a look at how 'nodetool snapshot' works. Cassandra backs up data by taking a snapshot of all on-disk data files (SSTable files) stored in the data directory. Each time an SSTable is flushed and snapshotted, the snapshot becomes a set of hard links to the initial SSTables, pinned to a specific timestamp. Snapshots are taken per keyspace or per column family, while the system is online. However, nodes must be taken offline in order to restore a snapshot.

Using a parallel SSH tool (such as pssh), you can flush and then snapshot an entire cluster. This provides an eventually consistent backup. Although no one node is guaranteed to be consistent with its replica nodes at the time a snapshot is taken, a restored snapshot can resume consistency using Cassandra's built-in consistency mechanisms.
After a system-wide snapshot has been taken, you can enable incremental backups on each node (disabled by default) to back up data that has changed since the last snapshot was taken. Each time an SSTable is flushed, a hard link is copied into a /backups subdirectory of the data directory.

Now let's see how we can deal with a snapshot once it's taken. Below is the list of commands that need to be executed to prepare a snapshot.

Flush SSTables for consistency:
nodetool flush

Create snapshots (for example, of all keyspaces):
nodetool snapshot -t %(backup_name)s 1>/dev/null
where
* backup_name - the name of the snapshot

Once that's done, we need to collect all hard links into a common directory (keeping the initial file hierarchy):
sudo tar cpzfP /tmp/all_ks.tar.gz $(sudo find %(datadir)s -type d -name %(backup_name)s)
where
* backup_name - the name of the snapshot,
* datadir - the storage location (/var/lib/cassandra/data by default)

Note that this operation can be extended:
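Put together, steps 1-5 on a single node could look roughly like the sketch below, to be run on every node via pssh. The remote-copy target, paths and naming scheme are assumptions, and error handling is omitted:

```shell
#!/bin/sh
# Sketch of a per-node Cassandra backup; names and targets are hypothetical.
BACKUP_NAME="backup_$(date +%Y%m%d%H%M%S)"
DATADIR=/var/lib/cassandra/data

nodetool flush                                  # optional: snapshot flushes too
nodetool snapshot -t "$BACKUP_NAME" 1>/dev/null # hard-link SSTables per keyspace
sudo tar cpzfP "/tmp/${BACKUP_NAME}.tar.gz" \
    $(sudo find "$DATADIR" -type d -name "$BACKUP_NAME")
scp "/tmp/${BACKUP_NAME}.tar.gz" backup-host:/backups/  # hypothetical storage
nodetool clearsnapshot -t "$BACKUP_NAME"        # remove the local snapshot
```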
Re: [openstack-dev] [horizon] Support for Django 1.7: there's a bit of work, though it looks fixable to me...
Hi,

Note that Django 1.7 requires Python 2.7 or above[1], while Juno is still required to be compatible with Python 2.6 (Suse ES 11 uses 2.6, if my memory serves me).

[1] https://docs.djangoproject.com/en/dev/releases/1.7/#python-compatibility

Best,
Romain

- Original Message -
From: "Thomas Goirand"
To: "OpenStack Development Mailing List"
Sent: Sunday, August 3, 2014 12:55:19 PM
Subject: [openstack-dev] [horizon] Support for Django 1.7: there's a bit of work, though it looks fixable to me...

Hi,

The Debian maintainer of Django would like to upload Django 1.7 before Jessie is frozen on the 5th of November. As for OpenStack, I would like Icehouse to be in Jessie, since it will be supported by major companies (RedHat and Canonical will both use Icehouse as an LTS, and will work on security for longer than previously planned in the OpenStack community). However, Horizon Icehouse doesn't currently work with Django 1.7.

The first thing to fix would be the TEMPLATE_DIRS thing:

./run_tests.sh -N -P || true
Running Horizon application tests
Traceback (most recent call last):
  File "/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/manage.py", line 25, in
    execute_from_command_line(sys.argv)
  File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
    utility.execute()
  [... not useful stack dump ...]
  File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 42, in _setup
    self._wrapped = Settings(settings_module)
  File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 110, in __init__
    "Please fix your settings." % setting)
django.core.exceptions.ImproperlyConfigured: The TEMPLATE_DIRS setting must be a tuple. Please fix your settings.
Running openstack_dashboard tests
WARNING:root:No local_settings file found.

Then of course, the rest of the tests are completely broken because there's no local_settings.
Adding a comma at the end of:

TEMPLATE_DIRS = (os.path.join(ROOT_PATH, 'tests', 'templates'))

in horizon/test/settings.py fixes the issue. Note that this works in both Django 1.6 and 1.7. Some other TEMPLATE_DIRS declarations already have the comma, so I guess it's fine to add it. Which is why I did this:
https://review.openstack.org/111561

FYI, there's a document that talks about it:
https://docs.djangoproject.com/en/1.7/releases/1.7/#backwards-incompatible-changes-in-1-7

Then, after fixing this, I get this error:

==
ERROR: Failure: TypeError (Error when calling the metaclass bases function() argument 1 must be code, not str)
--
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/loader.py", line 414, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/horizon/test/tests/tables.py", line 28, in
    from horizon.test import helpers as test
  File "/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/horizon/test/helpers.py", line 184, in
    class JasmineTests(SeleniumTestCase):
TypeError: Error when calling the metaclass bases function() argument 1 must be code, not str

There's the same issue in the definition of SeleniumTestCase() in openstack_dashboard/test/helpers.py (line 365 in Icehouse). Since I don't really care about Selenium (it can't be tested in Debian because it's non-free), I commented out the class JasmineTests(SeleniumTestCase), and then I get more errors.
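The pitfall is plain Python: parentheses alone do not make a tuple; the trailing comma does. A tiny self-contained illustration (ROOT_PATH here is a dummy value, not the real Horizon setting):

```python
import os

ROOT_PATH = '/opt/horizon'  # dummy path for illustration

# Without a trailing comma the parentheses just group the expression,
# so this is a plain string -- which Django 1.7 rejects for TEMPLATE_DIRS:
not_a_tuple = (os.path.join(ROOT_PATH, 'tests', 'templates'))

# With the trailing comma it becomes a one-element tuple:
TEMPLATE_DIRS = (os.path.join(ROOT_PATH, 'tests', 'templates'),)

print(type(not_a_tuple).__name__)    # str
print(type(TEMPLATE_DIRS).__name__)  # tuple
```

Django 1.6 silently tolerated the string form (strings are iterable), which is why the missing comma went unnoticed until 1.7 started validating the setting.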
A few instances of this one:

  File "/home/zigo/sources/openstack/icehouse/horizon/build-area/horizon-2014.1.1/horizon/tables/base.py", line 206, in
    "average": lambda data: sum(data, 0.0) / len(data)
TypeError: unsupported operand type(s) for +: 'float' and 'str'

I'm not a Django expert, so it'd be awesome to get help on this. The best outcome would be that:
1/ Support for Django 1.7 is added to Juno
2/ The changes are backported to Icehouse (even if this doesn't make it into the stable branch because of "let's stay safe", I can add the patches as Debian-specific ones)

Thoughts from the Horizon team would be welcome.

Cheers,
Thomas Goirand (zigo)
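The TypeError above comes from `sum(data, 0.0)` hitting a non-numeric cell value. One possible fix, shown here as a standalone sketch rather than the actual Horizon patch, is to skip non-numeric values before summing:

```python
def column_average(data):
    """Average of the numeric values in a column, ignoring string
    cells (such as '-' or '') that would break sum(data, 0.0)."""
    nums = [v for v in data if isinstance(v, (int, float))]
    return sum(nums, 0.0) / len(nums) if nums else 0.0

# Mixed input no longer raises TypeError:
print(column_average([1, 2.5, '-', 3]))  # averages 1, 2.5 and 3
```

Whether ignoring the offending cells is the right behavior for Horizon's summation row is a design question for the Horizon team; the sketch only shows that the crash itself is easy to avoid.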
Re: [openstack-dev] [MagnetoDB] Best practices for uploading large amounts of data
Bulk loading with sstableloader is blazingly fast (the price to pay is that it's not portable, of course). It is also network-efficient thanks to SSTable compression; if the network is not a limiting factor, then LZ4 will be great.

On Friday, 28 March 2014 at 13:46, Aleksandr Chudnovets wrote:

Dmitriy Ukhlov wrote:
> I guess if we are talking about Cassandra batch loading, the fastest way is
> to generate SSTables locally and load them into Cassandra via JMX or
> sstableloader:
> http://www.datastax.com/dev/blog/bulk-loading

Good idea, Dmitriy. IMHO bulk loading is a back-end-specific task, so using specialized tools seems like a good idea to me.

Regards,
Alexander Chudnovets
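For anyone curious, a typical sstableloader invocation looks roughly like this. The host addresses and the SSTable directory are placeholders:

```
# Stream the SSTables in the given directory to the cluster; the last two
# path components must match the target keyspace and table names.
sstableloader -d 10.0.0.1,10.0.0.2 /path/to/sstables/magnetodb/user_data
```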
Re: [openstack-dev] Split of the openstack-dev list
Good idea.

-romain
[openstack-dev] Re : Re: Manual to build an application on open stack
> I am trying to build an app that resides on the cloud and would want to
> perform some basic storage management operations.

So, you want to build an app atop Swift?

-Romain
Re: [openstack-dev] Re : welcoming new committers
> Indeed :) Can you share with us briefly what you found interesting in
> the sessions of Upstream University? Which ones did you go to?

Upstream University acted like a starter for me. I attended the September session in Paris. I have been interested in OpenStack since Essex, but I did not dare submit my first patch; the folks at UU made that happen. During the two-day group session we learned the basics of Open Source and how to interact with such a community. Then, weekly assessments and phone calls allowed us to stay motivated and not give up, and we were given valuable advice on getting our fixes accepted. The people at Upstream University are really cool and proficient.

> Well, I think that anybody's opinion matters and you're not a new
> OpenStack developer anymore. You have your own experience and your
> reviews may definitely help somebody even newer than you to get his/her
> patch refined before the more experienced developers get to it. I'm sure
> your comments, even without a vote, would help. Chime in then :)

Thanks for your kind words. I'm going to start reviewing. I take note of what Jeremy Stanley said: "OpenStack does not lack developers... it lacks reviewers."

-Romain
[openstack-dev] Re : welcoming new committers (was Re: When is it okay for submitters to say 'I don't want to add tests' ?)
Hi all,

Adding a message for newcomers is a good idea. I am a new Horizon contributor; some of my fixes have been merged (thanks to Upstream University :-) and the reviewers, of course) but I still hesitate to do code review. To my mind, it seemed reserved to "known" developers whose opinion matters...

- Romain