I have made the following changes intended for: CE:MW:Shared / tracker

Please review and accept or decline.

BOSS has already run some checks on this request. See the "Messages from BOSS" section below.
https://build.pub.meego.com//request/show/8373

Thank You,
martyone

[This message was auto-generated]

---
Request # 8373:

Messages from BOSS:

State: review at 2013-03-13T18:27:09 by bossbot

Reviews:
  accepted by bossbot: Prechecks succeeded.
  new for CE-maintainers: Please replace this text with a review and
  approve/reject the review (not the SR). BOSS will take care of the rest.

Changes:

submit: home:martyone:branches:CE:MW:Shared / tracker -> CE:MW:Shared / tracker

changes files:
--------------

--- tracker.changes
+++ tracker.changes
@@ -0,0 +1,15 @@
+* Mon Mar 04 2013 Martin Kampas <[email protected]> - 0.14.4
+- Fix most of NEMO#537
+- Add tracker-tests-do-not-su-meego.patch
+- Add tracker-tests-fix-undefined-function-call.patch
+- Add tracker-tests-synchronous-terminate-pre-step.patch
+- Add tracker-tests-increase-timeout.patch
+- Add tracker-tests-fix-helper-starting-order.patch
+- Add tracker-tests-make-400-extractor-work-with-testrunner.patch
+- Add tracker-tests-allow-reuse-graph-updated-signal-handling.patch
+- Add tracker-forget-removed-files.patch
+- Add tracker-tests-310-fts-indexing-use-graph-updated-signal.patch
+- Add tracker-tests-501-writeback-details-fix-invalid-metadata-key.patch
+- Add tracker-miner-fs-deal-with-data-inserted-by-other-apps.patch
+- Add tracker-tests-400-extractor-skip-unsupported-file-types.patch
+

new:
----
tracker-forget-removed-files.patch
tracker-miner-fs-deal-with-data-inserted-by-other-apps.patch
tracker-tests-310-fts-indexing-use-graph-updated-signal.patch
tracker-tests-400-extractor-skip-unsupported-file-types.patch
tracker-tests-501-writeback-details-fix-invalid-metadata-key.patch
tracker-tests-allow-reuse-graph-updated-signal-handling.patch
tracker-tests-do-not-su-meego.patch
tracker-tests-fix-helper-starting-order.patch
tracker-tests-fix-undefined-function-call.patch
tracker-tests-increase-timeout.patch
tracker-tests-make-400-extractor-work-with-testrunner.patch
tracker-tests-synchronous-terminate-pre-step.patch

spec files:
-----------

--- tracker.spec
+++ tracker.spec
@@ -25,6 +25,18 @@
 Patch2: tracker-0.10.37-fix-linking-with-newer-toolchain.patch
 Patch3: tracker-0.10.37-fix-linking-with-newer-glib.patch
 Patch4: 0001-Remove-tracker-tests.aegis-from-config_SCRIPTS-if-ma.patch
+Patch5: tracker-tests-do-not-su-meego.patch
+Patch6: tracker-tests-fix-undefined-function-call.patch
+Patch7: tracker-tests-synchronous-terminate-pre-step.patch
+Patch8: tracker-tests-increase-timeout.patch
+Patch9: tracker-tests-fix-helper-starting-order.patch
+Patch10: tracker-tests-make-400-extractor-work-with-testrunner.patch
+Patch11: tracker-tests-allow-reuse-graph-updated-signal-handling.patch
+Patch12: tracker-forget-removed-files.patch
+Patch13: tracker-tests-310-fts-indexing-use-graph-updated-signal.patch
+Patch14: tracker-tests-501-writeback-details-fix-invalid-metadata-key.patch
+Patch15: tracker-miner-fs-deal-with-data-inserted-by-other-apps.patch
+Patch16: tracker-tests-400-extractor-skip-unsupported-file-types.patch
 Requires: gst-plugins-base >= 0.10
 Requires: unzip
 Requires(post): /sbin/ldconfig
@@ -134,6 +146,30 @@
 %patch3 -p1
 # 0001-Remove-tracker-tests.aegis-from-config_SCRIPTS-if-ma.patch
 %patch4 -p1
+# tracker-tests-do-not-su-meego.patch
+%patch5 -p1
+# tracker-tests-fix-undefined-function-call.patch
+%patch6 -p1
+# tracker-tests-synchronous-terminate-pre-step.patch
+%patch7 -p1
+# tracker-tests-increase-timeout.patch
+%patch8 -p1
+# tracker-tests-fix-helper-starting-order.patch
+%patch9 -p1
+# tracker-tests-make-400-extractor-work-with-testrunner.patch
+%patch10 -p1
+# tracker-tests-allow-reuse-graph-updated-signal-handling.patch
+%patch11 -p1
+# tracker-forget-removed-files.patch
+%patch12 -p1
+# tracker-tests-310-fts-indexing-use-graph-updated-signal.patch
+%patch13 -p1
+# tracker-tests-501-writeback-details-fix-invalid-metadata-key.patch
+%patch14 -p1
+# tracker-miner-fs-deal-with-data-inserted-by-other-apps.patch
+%patch15 -p1
+# tracker-tests-400-extractor-skip-unsupported-file-types.patch
+%patch16 -p1
 # >> setup
 # << setup

other changes:
--------------

++++++ tracker-forget-removed-files.patch (new)
--- tracker-forget-removed-files.patch
+++ tracker-forget-removed-files.patch
@@ -0,0 +1,97 @@
+Fix one aspect of Gnome bug #643388 - Sqlite constraint violation
+
+I met this bug while executing the (slightly modified)
+tracker-tests/310-fts-indexing.py.
+
+The error can be seen in the output of `dbus-monitor --session`. For some
+reason I sometimes do not find it in the log file.
+
+method call sender=:1.46 -> dest=org.freedesktop.Tracker1 serial=73
+path=/org/freedesktop/Tracker1/Steroids;
+interface=org.freedesktop.Tracker1.Steroids; member=UpdateArray
+  (dbus-monitor too dumb to decipher arg type 'h')
+method return sender=:1.48 -> dest=:1.46 reply_serial=73
+  array [
+    string "org.freedesktop.Tracker1.SparqlError.Internal"
+    string "column nie:url is not unique (strerror of errno (not necessarily
+related): No such file or directory)"
+  ]
+
+Steps to reproduce:
+
+The key is to repeat the steps (1) remove an indexed file, (2) recreate it,
+(3) change its content. In my case the error is triggered when I repeat the
+steps two times. After that, the error appears every time the content of the
+file changes, i.e., steps (1) and (2) no longer need to be repeated.
+
+Start with a clean environment
+
+  *** Do not run the `rm -rf` inside your actual home directory! ***
+
+  $ tracker-control -t
+  $ rm -rf `find ~/ -iwholename '*tracker*'`
+
+This is how 310-fts-indexing.py sets up the tracker configuration
+
+  $ gconftool-2 \
+      /org/freedesktop/tracker/miner/files/index-recursive-directories \
+      -s --type=list \
+      --list-type=string '[/home/nemo/tracker-tests/test-monitored]'
+  $ gconftool-2 \
+      /org/freedesktop/tracker/miner/files/index-single-directories \
+      -s --type=list --list-type=string '[]'
+  $ gconftool-2 \
+      /org/freedesktop/tracker/miner/files/index-optical-discs \
+      -s --type=bool false
+  $ gconftool-2 \
+      /org/freedesktop/tracker/miner/files/index-removable-devices \
+      -s --type=bool false
+  $ gconftool-2 \
+      /org/freedesktop/tracker/miner/files/throttle -s --type=int 5
+
+Start tracker
+
+  $ mkdir -p tracker-tests/test-monitored
+  $ tracker-control -s
+
+After every step wait a couple of seconds (until tracker gets idle)
+
+  $ echo automobile > tracker-tests/test-monitored/xxx.txt
+
+  $ rm tracker-tests/test-monitored/xxx.txt
+  $ echo automobile > tracker-tests/test-monitored/xxx.txt
+  $ echo autooomobile > tracker-tests/test-monitored/xxx.txt
+  $ rm tracker-tests/test-monitored/xxx.txt
+  $ echo automobile > tracker-tests/test-monitored/xxx.txt
+  $ echo autooomobile > tracker-tests/test-monitored/xxx.txt
+
+The problem I found is that the GFile instance is not removed from the cache
+when a single regular file is deleted. Certain data, like the URN, are
+attached to the instance (via g_object_set_data()). When the file is
+recreated, it gets a new URN assigned, but the cached GFile instance still
+serves the old one. The cached invalid URN is then used on the first update
+of the file.
+
+Not sure if the patch is 100% valid/without side effects.
+
+Index: tracker-0.14.4/src/libtracker-miner/tracker-file-notifier.c
+===================================================================
+--- tracker-0.14.4.orig/src/libtracker-miner/tracker-file-notifier.c
++++ tracker-0.14.4/src/libtracker-miner/tracker-file-notifier.c
+@@ -840,12 +840,10 @@ monitor_item_deleted_cb (TrackerMonitor
+                           file, file_type, NULL);
+     g_signal_emit (notifier, signals[FILE_DELETED], 0, canonical);
+
+-    if (is_directory) {
+-        /* Remove all files underneath this dir from the cache */
+-        tracker_file_system_forget_files (priv->file_system,
+-                                          file,
+-                                          G_FILE_TYPE_UNKNOWN);
+-    }
++    /* Remove the file from the cache (works recursively for directories) */
++    tracker_file_system_forget_files (priv->file_system,
++                                      file,
++                                      G_FILE_TYPE_UNKNOWN);
+ }
+
+ static void

++++++ tracker-miner-fs-deal-with-data-inserted-by-other-apps.patch (new)
--- tracker-miner-fs-deal-with-data-inserted-by-other-apps.patch
+++ tracker-miner-fs-deal-with-data-inserted-by-other-apps.patch
@@ -0,0 +1,39 @@
+Fix one aspect of Gnome bug #643388 - Sqlite constraint violation
+
+Another case of the failed UNIQUE constraint on "nie:url" - this one was met
+while executing tracker-tests/600-applications-camera.py.
+
+In this test case:
+
+  1) meta data for the test file is manually inserted
+  2) the test file is created
+  3) the test file creation is noticed by tracker-miner-fs
+  4) item_add_or_update() is passed is_new=TRUE so it does not query the
+     store for the existing URN (existing meta data in general)
+  5) without the existing URN it tries to INSERT while it should UPDATE
+     (i.e. remove existing metadata and insert what it has extracted) -->
+     the INSERT fails - there is already meta data stored for the test file.
+
+I am not sure what performance impact the patch has.
+
+Index: tracker-0.14.4/src/libtracker-miner/tracker-miner-fs.c
+===================================================================
+--- tracker-0.14.4.orig/src/libtracker-miner/tracker-miner-fs.c
++++ tracker-0.14.4/src/libtracker-miner/tracker-miner-fs.c
+@@ -1292,10 +1292,14 @@ item_add_or_update (TrackerMinerFS *fs,
+     sparql = tracker_sparql_builder_new_update ();
+     g_object_ref (file);
+
+-    if (!is_new) {
++    /* Always query. No matter we are notified the file was just created, its
++     * meta data might already be in the store (possibly inserted by other
++     * application) - in such a case we have to UPDATE, not INSERT.
++     */
++    //if (!is_new) {
+         urn = tracker_file_notifier_get_file_iri (fs->priv->file_notifier,
+                                                   file);
+-    }
++    //}
+
+     if (!tracker_indexing_tree_file_is_root (fs->priv->indexing_tree, file)) {
+         parent = g_file_get_parent (file);

++++++ tracker-tests-310-fts-indexing-use-graph-updated-signal.patch (new)
--- tracker-tests-310-fts-indexing-use-graph-updated-signal.patch
+++ tracker-tests-310-fts-indexing-use-graph-updated-signal.patch
@@ -0,0 +1,95 @@
+The tracker_miner_fs_wait_for_idle() call is used to detect that the miner
+operation has completed. It returns when the miner's status changes to "Idle"
+(or on timeout). Unfortunately, at the time the miner goes idle it is not
+guaranteed that the data is already in the store - and the related test fails.
+
+A better way is to listen to the GraphUpdated signal sent by the store and
+wait until the desired resource is announced as being added or removed.
+
+Depends on tracker-tests-allow-reuse-graph-updated-signal-handling.patch
+
+Index: tracker-0.14.4/tests/functional-tests/310-fts-indexing.py
+===================================================================
+--- tracker-0.14.4.orig/tests/functional-tests/310-fts-indexing.py
++++ tracker-0.14.4/tests/functional-tests/310-fts-indexing.py
+@@ -29,6 +29,7 @@ the text contents are updated accordingl
+ import os
+ import shutil
+ import locale
++import time
+
+ import unittest2 as ut
+ from common.utils.helpers import log
+@@ -40,9 +41,13 @@ class CommonMinerFTS (CommonTrackerMiner
+     Superclass to share methods. Shouldn't be run by itself.
+     """
+     def setUp (self):
++        self.tracker.reset_graph_updates_tracking ()
+         self.testfile = "test-monitored/miner-fts-test.txt"
+         if os.path.exists (path (self.testfile)):
++            id = self._query_id (uri (self.testfile))
+             os.remove (path (self.testfile))
++            self.tracker.await_resource_deleted (id)
++            self.tracker.reset_graph_updates_tracking ()
+         # Shouldn't we wait here for the miner to idle? (it works without it)
+
+     def tearDown (self):
+@@ -54,7 +59,9 @@ class CommonMinerFTS (CommonTrackerMiner
+         f = open (path (self.testfile), "w")
+         f.write (text)
+         f.close ()
+-        self.system.tracker_miner_fs_wait_for_idle ()
++        self.tracker.await_resource_inserted (rdf_class = 'nfo:Document',
++                                              url = uri (self.testfile))
++        self.tracker.reset_graph_updates_tracking ()
+
+     def search_word (self, word):
+         """
+@@ -83,6 +90,11 @@ class CommonMinerFTS (CommonTrackerMiner
+         self.assertEquals (len (results), 1)
+         self.assertIn ( uri (self.testfile), results)
+
++    def _query_id (self, uri):
++        query = "SELECT tracker:id(?urn) WHERE { ?urn nie:url \"%s\". }" % uri
++        result = self.tracker.query (query)
++        assert len (result) == 1
++        return int (result[0][0])
+
+
+ class MinerFTSBasicTest (CommonMinerFTS):
+@@ -176,8 +188,9 @@ class MinerFTSFileOperationsTest (Common
+         TEXT = "automobile is red and big and whatnot"
+         self.basic_test (TEXT, "automobile")
+
++        id = self._query_id (uri (self.testfile))
+         os.remove ( path (self.testfile))
+-        self.system.tracker_miner_fs_wait_for_idle ()
++        self.tracker.await_resource_deleted (id)
+
+         results = self.search_word ("automobile")
+         self.assertEquals (len (results), 0)
+@@ -201,6 +214,7 @@ class MinerFTSFileOperationsTest (Common
+         self.basic_test (TEXT, "automobile")
+
+         self.set_text ("airplane is blue and small and wonderful")
++
+         results = self.search_word ("automobile")
+         self.assertEquals (len (results), 0)
+
+@@ -245,12 +259,15 @@ class MinerFTSFileOperationsTest (Common
+         TEST_16_DEST = "test-monitored/fts-indexing-text-16.txt"
+
+         self.__recreate_file (path (TEST_16_SOURCE), TEXT)
++        # the file is supposed to be ignored by tracker, so there is no notification..
++        time.sleep (5)
+
+         results = self.search_word ("airplane")
+         self.assertEquals (len (results), 0)
+
+         shutil.copyfile ( path (TEST_16_SOURCE), path (TEST_16_DEST))
+-        self.system.tracker_miner_fs_wait_for_idle ()
++        self.tracker.await_resource_inserted (rdf_class = 'nfo:Document',
++                                              url = uri (TEST_16_DEST))
+
+         results = self.search_word ("airplane")
+         self.assertEquals (len (results), 1)

++++++ tracker-tests-400-extractor-skip-unsupported-file-types.patch (new)
--- tracker-tests-400-extractor-skip-unsupported-file-types.patch
+++ tracker-tests-400-extractor-skip-unsupported-file-types.patch
@@ -0,0 +1,23 @@
+Index: tracker-0.14.4/tests/functional-tests/400-extractor.py
+===================================================================
+--- tracker-0.14.4.orig/tests/functional-tests/400-extractor.py
++++ tracker-0.14.4/tests/functional-tests/400-extractor.py
+@@ -238,11 +238,18 @@ def run_all ():
+     else:
+         TEST_DATA_PATH = os.path.join (cfg.DATADIR, "tracker-tests",
+                                        "test-extraction-data")
++    blacklist = ["video/video-1.expected",
++                 "video/video-2.expected",
++                 "audio/Jazz_Audio_OPLs0.expected"]
++    blacklist = [os.path.join (TEST_DATA_PATH, f) for f in blacklist]
+     print "Loading test descriptions from", TEST_DATA_PATH
+     extractionTestSuite = ut.TestSuite ()
+     for root, dirs, files in os.walk (TEST_DATA_PATH):
+         descriptions = [os.path.join (root, f) for f in files if f.endswith ("expected")]
+         for descfile in descriptions:
++            if descfile in blacklist:
++                print "Skipping '%s' - blacklisted (Nemo bug #537)" % descfile
++                continue
+             tc = ExtractionTestCase(descfile=descfile)
+             extractionTestSuite.addTest(tc)
+     result = ut.TextTestRunner (verbosity=1).run (extractionTestSuite)

++++++ tracker-tests-501-writeback-details-fix-invalid-metadata-key.patch (new)
--- tracker-tests-501-writeback-details-fix-invalid-metadata-key.patch
+++ tracker-tests-501-writeback-details-fix-invalid-metadata-key.patch
@@ -0,0 +1,16 @@
+The extractor output has changed. A similar fix can be seen in
+commit aabc8a8e07e90b8fae0172824185e50b6af68228.
+
+Index: tracker-0.14.4/tests/functional-tests/501-writeback-details.py
+===================================================================
+--- tracker-0.14.4.orig/tests/functional-tests/501-writeback-details.py
++++ tracker-0.14.4/tests/functional-tests/501-writeback-details.py
+@@ -77,7 +77,7 @@ class WritebackKeepDateTest (CommonTrack
+
+         # Check the value is written in the file
+         metadata = self.extractor.get_metadata (self.get_test_filename_jpeg (), "")
+-        self.assertIn (self.favorite, metadata ["nao:hasTag:prefLabel"],
++        self.assertIn (self.favorite, metadata ["nao:hasTag"],
+                        "Tag hasn't been written in the file")
+
+         # Now check the modification date of the files and it should be the same :)

++++++ tracker-tests-allow-reuse-graph-updated-signal-handling.patch (new)
--- tracker-tests-allow-reuse-graph-updated-signal-handling.patch
+++ tracker-tests-allow-reuse-graph-updated-signal-handling.patch
@@ -0,0 +1,426 @@
+In many test cases the tracker_miner_fs_wait_for_idle() call is used to
+detect that the miner operation has completed. It returns when the miner's
+status changes to "Idle" (or on timeout). Unfortunately, at the time the
+miner goes idle it is not guaranteed that the data is already in the store -
+and the related test fails.
+
+The test case 301-miner-resource-removal.py does it a better way - it listens
+to the GraphUpdated signal sent by the store and waits until the desired
+resource is announced as being added or removed.
+
+There is a comment inside 301-miner-resource-removal.py: "FIXME: put this
+stuff in StoreHelper". This patch follows that comment.
+
+Needed by tracker-tests-310-fts-indexing-use-graph-updated-signal.patch
+
+Index: tracker-0.14.4/tests/functional-tests/301-miner-resource-removal.py
+===================================================================
+--- tracker-0.14.4.orig/tests/functional-tests/301-miner-resource-removal.py
++++ tracker-0.14.4/tests/functional-tests/301-miner-resource-removal.py
+@@ -54,7 +54,6 @@ CONF_OPTIONS = [
+ REASONABLE_TIMEOUT = 30
+
+ class MinerResourceRemovalTest (ut.TestCase):
+-    graph_updated_handler_id = 0
+
+     # Use the same instances of store and miner-fs for the whole test suite,
+     # because they take so long to do first-time init.
+@@ -81,175 +80,16 @@ class MinerResourceRemovalTest (ut.TestC
+
+     @classmethod
+     def tearDownClass (self):
+-        self.store.bus._clean_up_signal_match (self.graph_updated_handler_id)
+         self.miner_fs.stop ()
+         self.extractor.stop ()
+         self.store.stop ()
+
+     def setUp (self):
+-        self.inserts_list = []
+-        self.deletes_list = []
+-        self.inserts_match_function = None
+-        self.deletes_match_function = None
+-        self.match_timed_out = False
+-
+-        self.graph_updated_handler_id = self.store.bus.add_signal_receiver (self._graph_updated_cb,
+-                                                                            signal_name = "GraphUpdated",
+-                                                                            path = "/org/freedesktop/Tracker1/Resources",
+-                                                                            dbus_interface = "org.freedesktop.Tracker1.Resources")
++        self.store.reset_graph_updates_tracking ()
+
+     def tearDown (self):
+         self.system.unset_up_environment ()
+
+-    # A system to follow GraphUpdated and make sure all changes are tracked.
+-    # This code saves every change notification received, and exposes methods
+-    # to await insertion or deletion of a certain resource which first check
+-    # the list of events already received and wait for more if the event has
+-    # not yet happened.
+-    #
+-    # FIXME: put this stuff in StoreHelper
+-    def _timeout_cb (self):
+-        self.match_timed_out = True
+-        self.store.loop.quit ()
+-        # Don't fail here, exceptions don't get propagated correctly
+-        # from the GMainLoop
+-
+-    def _graph_updated_cb (self, class_name, deletes_list, inserts_list):
+-        """
+-        Process notifications from tracker-store on resource changes.
+-        """
+-        matched = False
+-        if inserts_list is not None:
+-            if self.inserts_match_function is not None:
+-                # The match function will remove matched entries from the list
+-                (matched, inserts_list) = self.inserts_match_function (inserts_list)
+-            self.inserts_list += inserts_list
+-
+-        if deletes_list is not None:
+-            if self.deletes_match_function is not None:
+-                (matched, deletes_list) = self.deletes_match_function (deletes_list)
+-            self.deletes_list += deletes_list
+-
+-    def await_resource_inserted (self, rdf_class, url = None, title = None):
+-        """
+-        Block until a resource matching the parameters becomes available
+-        """
+-        assert (self.inserts_match_function == None)
+-
+-        def match_cb (inserts_list, in_main_loop = True):
+-            matched = False
+-            filtered_list = []
+-            known_subjects = set ()
+-
+-            #print "Got inserts: ", inserts_list, "\n"
+-
+-            # FIXME: this could be done in an easier way: build one query that filters
+-            # based on every subject id in inserts_list, and returns the id of the one
+-            # that matched :)
+-            for insert in inserts_list:
+-                id = insert[1]
+-
+-                if not matched and id not in known_subjects:
+-                    known_subjects.add (id)
+-
+-                    where = " ?urn a %s " % rdf_class
+-
+-                    if url is not None:
+-                        where += "; nie:url \"%s\"" % url
+-
+-                    if title is not None:
+-                        where += "; nie:title \"%s\"" % title
+-
+-                    query = "SELECT ?urn WHERE { %s FILTER (tracker:id(?urn) = %s)}" % (where, insert[1])
+-                    #print "%s\n" % query
+-                    result_set = self.store.query (query)
+-                    #print result_set, "\n\n"
+-
+-                    if len (result_set) > 0:
+-                        matched = True
+-                        self.matched_resource_urn = result_set[0][0]
+-                        self.matched_resource_id = insert[1]
+-
+-                if not matched or id != self.matched_resource_id:
+-                    filtered_list += [insert]
+-
+-            if matched and in_main_loop:
+-                glib.source_remove (self.graph_updated_timeout_id)
+-                self.graph_updated_timeout_id = 0
+-                self.inserts_match_function = None
+-                self.store.loop.quit ()
+-
+-            return (matched, filtered_list)
+-
+-
+-        self.matched_resource_urn = None
+-        self.matched_resource_id = None
+-
+-        log ("Await new %s (%i existing inserts)" % (rdf_class, len (self.inserts_list)))
+-
+-        # Check the list of previously received events for matches
+-        (existing_match, self.inserts_list) = match_cb (self.inserts_list, False)
+-
+-        if not existing_match:
+-            self.graph_updated_timeout_id = glib.timeout_add_seconds (REASONABLE_TIMEOUT, self._timeout_cb)
+-            self.inserts_match_function = match_cb
+-
+-            # Run the event loop until the correct notification arrives
+-            self.store.loop.run ()
+-
+-        if self.match_timed_out:
+-            self.fail ("Timeout waiting for resource: class %s, URL %s, title %s" % (rdf_class, url, title))
+-
+-        return (self.matched_resource_id, self.matched_resource_urn)
+-
+-
+-    def await_resource_deleted (self, id, fail_message = None):
+-        """
+-        Block until we are notified of a resources deletion
+-        """
+-        assert (self.deletes_match_function == None)
+-
+-        def match_cb (deletes_list, in_main_loop = True):
+-            matched = False
+-            filtered_list = []
+-
+-            #print "Looking for %i in " % id, deletes_list, "\n"
+-
+-            for delete in deletes_list:
+-                if delete[1] == id:
+-                    matched = True
+-                else:
+-                    filtered_list += [delete]
+-
+-            if matched and in_main_loop:
+-                glib.source_remove (self.graph_updated_timeout_id)
+-                self.graph_updated_timeout_id = 0
+-                self.deletes_match_function = None
+-
+-                self.store.loop.quit ()
+-
+-            return (matched, filtered_list)
+-
+-        log ("Await deletion of %i (%i existing)" % (id, len (self.deletes_list)))
+-
+-        (existing_match, self.deletes_list) = match_cb (self.deletes_list, False)
+-
+-        if not existing_match:
+-            self.graph_updated_timeout_id = glib.timeout_add_seconds (REASONABLE_TIMEOUT, self._timeout_cb)
+-            self.deletes_match_function = match_cb
+-
+-            # Run the event loop until the correct notification arrives
+-            self.store.loop.run ()
+-
+-        if self.match_timed_out:
+-            if fail_message is not None:
+-                self.fail (fail_message)
+-            else:
+-                self.fail ("Resource %i has not been deleted." % id)
+-
+-        return
+-
(227 more lines skipped)

++++++ tracker-tests-do-not-su-meego.patch (new)
--- tracker-tests-do-not-su-meego.patch
+++ tracker-tests-do-not-su-meego.patch
@@ -0,0 +1,11 @@
+--- tracker-0.14.4/tests/functional-tests/create-tests-xml.py.orig 2013-02-07 13:25:48.788494333 +0100
++++ tracker-0.14.4/tests/functional-tests/create-tests-xml.py 2013-02-07 13:27:23.715494319 +0100
+@@ -52,7 +52,7 @@ if (cfg.haveUpstart):
+ """
+ else:
+     PRE_STEPS = """    <pre_steps>
+-      <step>su - meego -c "tracker-control -t"</step>
++      <step>tracker-control -t</step>
+     </pre_steps>
+ """
+

++++++ tracker-tests-fix-helper-starting-order.patch (new)
--- tracker-tests-fix-helper-starting-order.patch
+++ tracker-tests-fix-helper-starting-order.patch
@@ -0,0 +1,34 @@
+The tracker-miner-fs process causes the tracker-store process to be started
+automatically via the D-Bus service autostart mechanism. As the test case
+needs to start and control the processes itself, it is necessary to start
+them in order of their dependencies, so that the D-Bus autostart does not
+happen.
+
+diff --git a/tests/functional-tests/301-miner-resource-removal.py b/tests/functional-tests/301-miner-resource-removal.py
+index 506e8d4..cf0af78 100755
+--- a/tests/functional-tests/301-miner-resource-removal.py
++++ b/tests/functional-tests/301-miner-resource-removal.py
+@@ -69,8 +69,6 @@ class MinerResourceRemovalTest (ut.TestCase):
+         self.system.set_up_environment (CONF_OPTIONS, None)
+         self.store = StoreHelper ()
+         self.store.start ()
+-        self.miner_fs = MinerFsHelper ()
+-        self.miner_fs.start ()
+
+         # GraphUpdated seems to not be emitted if the extractor isn't running
+         # even though the file resource still gets inserted - maybe because
+         self.extractor = ExtractorHelper ()
+         self.extractor.start ()
+
++        self.miner_fs = MinerFsHelper ()
++        self.miner_fs.start ()
++
+     @classmethod
+     def tearDownClass (self):
+         self.store.bus._clean_up_signal_match (self.graph_updated_handler_id)
+-        self.extractor.stop ()
+         self.miner_fs.stop ()
++        self.extractor.stop ()
+         self.store.stop ()
+
+     def setUp (self):

++++++ tracker-tests-fix-undefined-function-call.patch (new)
--- tracker-tests-fix-undefined-function-call.patch
+++ tracker-tests-fix-undefined-function-call.patch
@@ -0,0 +1,20 @@
+Bug introduced by git commit 690eecb143f6dd69ed3603934c3a7f521918baa8
+
+--- tracker-0.14.4/tests/functional-tests/common/utils/helpers.py 2013-02-07 11:58:21.535494883 +0100
++++ tracker-0.14.4/tests/functional-tests/common/utils/helpers.py 2013-02-07 11:59:52.712494948 +0100
+@@ -277,9 +277,12 @@ class StoreHelper (Helper):
+         """
+         try:
+             result = self.resources.SparqlQuery (QUERY % (ontology_class))
+-        except dbus.DBusException:
+-            self.connect ()
+-            result = self.resources.SparqlQuery (QUERY % (ontology_class))
++        except dbus.DBusException as (e):
++            if (e.get_dbus_name().startswith ("org.freedesktop.DBus")):
++                self.start ()
++                result = self.resources.SparqlQuery (QUERY % (ontology_class))
++            else:
++                raise (e)
+
+         if (len (result) == 1):
+             return int (result [0][0])

++++++ tracker-tests-increase-timeout.patch (new)
--- tracker-tests-increase-timeout.patch
+++ tracker-tests-increase-timeout.patch
@@ -0,0 +1,18 @@
+Some test cases need more than the default 90 seconds to complete. As the
+tests.xml is autogenerated, it is not possible to increase the timeout
+selectively without bigger changes to the create-tests-xml.py script. Thus it
+is increased globally.
+
+diff --git a/tests/functional-tests/create-tests-xml.py b/tests/functional-tests/create-tests-xml.py
+index e78ed7c..f3f488e 100755
+--- a/tests/functional-tests/create-tests-xml.py
++++ b/tests/functional-tests/create-tests-xml.py
+@@ -34,7 +34,7 @@ HEADER = """
+ <suite name="tracker">
+   <description>Functional tests for the brilliant tracker</description> """
+
+-TEST_CASE_TMPL = """    <case name="%s">
++TEST_CASE_TMPL = """    <case name="%s" timeout="180">
+       <description>%s</description>
+       <step>%s</step>
+     </case>"""

++++++ tracker-tests-make-400-extractor-work-with-testrunner.patch (new)
--- tracker-tests-make-400-extractor-work-with-testrunner.patch
+++ tracker-tests-make-400-extractor-work-with-testrunner.patch
@@ -0,0 +1,19 @@
+The tests.xml is autogenerated by create-tests-xml.py. It scans all *.py
+scripts for test class definitions and generates one test case for each class
+found. In that test case it invokes the script with the class name passed as
+an argument. 400-extractor.py expects a different kind of argument.
+
+diff --git a/tests/functional-tests/400-extractor.py b/tests/functional-tests/400-extractor.py
+index 063d562..552d98c 100755
+--- a/tests/functional-tests/400-extractor.py
++++ b/tests/functional-tests/400-extractor.py
+@@ -268,6 +268,9 @@ if __name__ == "__main__":
+     else:
+         if os.path.exists (sys.argv[1]) and sys.argv[1].endswith (".expected"):
+             run_one (sys.argv[1])
++        # FIXME: for the case when invoked by testrunner (see create-tests-xml.py)
++        elif sys.argv[1] == "ExtractionTestCase":
++            run_all ()
+         else:
+             print "Usage: %s [FILE.expected]" % (sys.argv[0])
+

++++++ tracker-tests-synchronous-terminate-pre-step.patch (new)
--- tracker-tests-synchronous-terminate-pre-step.patch
+++ tracker-tests-synchronous-terminate-pre-step.patch
@@ -0,0 +1,16 @@
+The command `tracker-control -t` simply kill(2)s all tracker processes and
+exits. It does not wait for, or check, that the processes have terminated. In
+some test cases it happens that the test case tries to launch its own
+instance of some tracker process, but it fails because the old process is
+still there.
+
+--- tracker-0.14.4/tests/functional-tests/create-tests-xml.py.orig 2013-02-14 10:55:48.788494333 +0100
++++ tracker-0.14.4/tests/functional-tests/create-tests-xml.py 2013-02-14 10:57:23.715494319 +0100
+@@ -52,7 +52,7 @@ if (cfg.haveUpstart):
+ """
+ else:
+     PRE_STEPS = """    <pre_steps>
+-      <step>tracker-control -t</step>
++      <step>while tracker-control -p |grep -q '^Found process ID '; do tracker-control -t; sleep 1; done</step>
+     </pre_steps>
+ """
+

++++++ tracker.yaml
--- tracker.yaml
+++ tracker.yaml
@@ -14,6 +14,18 @@
   - tracker-0.10.37-fix-linking-with-newer-toolchain.patch
   - tracker-0.10.37-fix-linking-with-newer-glib.patch
   - 0001-Remove-tracker-tests.aegis-from-config_SCRIPTS-if-ma.patch
+  - tracker-tests-do-not-su-meego.patch
+  - tracker-tests-fix-undefined-function-call.patch
+  - tracker-tests-synchronous-terminate-pre-step.patch
+  - tracker-tests-increase-timeout.patch
+  - tracker-tests-fix-helper-starting-order.patch
+  - tracker-tests-make-400-extractor-work-with-testrunner.patch
+  - tracker-tests-allow-reuse-graph-updated-signal-handling.patch
+  - tracker-forget-removed-files.patch
+  - tracker-tests-310-fts-indexing-use-graph-updated-signal.patch
+  - tracker-tests-501-writeback-details-fix-invalid-metadata-key.patch
+  - tracker-miner-fs-deal-with-data-inserted-by-other-apps.patch
+  - tracker-tests-400-extractor-skip-unsupported-file-types.patch
 ExtraSources:
   - tracker-store.service;%{_libdir}/systemd/user/
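Reviewer note: the core idea shared by several of these patches (record every GraphUpdated-style change notification, then let a wait for insertion/deletion first consult the backlog of events that arrived before the wait started) can be sketched without any D-Bus plumbing as follows. This is a minimal illustration only; the class and method names are made up for the sketch and are not the actual StoreHelper API, and a real helper would block on the GLib main loop with a timeout instead of returning False:

```python
class GraphUpdateTracker:
    """Minimal sketch of the event bookkeeping the patched test helpers do:
    every subject id announced by a GraphUpdated-like signal is recorded, so
    a later wait can match against events that already happened."""

    def __init__(self):
        # Backlog of deletion events not yet consumed by a waiter.
        self.deletes = []

    def on_graph_updated(self, deleted_ids):
        # Signal handler: just record what the store announced.
        self.deletes.extend(deleted_ids)

    def reset(self):
        # Counterpart of reset_graph_updates_tracking(): drop stale events
        # so the next wait does not match leftovers from a previous step.
        self.deletes = []

    def check_resource_deleted(self, resource_id):
        # Return True (consuming the event) if the deletion was already
        # announced; a real helper would otherwise run the main loop and
        # time out, rather than returning False immediately.
        if resource_id in self.deletes:
            self.deletes.remove(resource_id)
            return True
        return False


if __name__ == "__main__":
    t = GraphUpdateTracker()
    t.on_graph_updated([42])
    print(t.check_resource_deleted(42))  # True: event was already in the backlog
```

This avoids the race that tracker_miner_fs_wait_for_idle() has: an event that fires before the test starts waiting is still found in the backlog instead of being lost.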
