Thanks Brock, much appreciate the feedback. See comments below:
New webrev with your, Padraig's and Shawn's changes:
http://cr.opensolaris.org/~jmr/pm_8324_local_search_using_api_26Jun_5pm/
JR
Brock Pytlik wrote:
jmr wrote:
I see Padraig's response on the icon issue which might be at the root
of your problems.
I have just applied the webrev against the latest gate, it had some
hunk offsets but was still ok, then did a full make install and make
packages. I have sorted Padraig's issue on the Manage Repositories
Reload as he had suggested and generated a new webrev against the
latest gate.
http://cr.opensolaris.org/~jmr/pm_8324_local_search_using_api_25Jun_4pm/
packagemanager.py:
271: since this is only used to track search's elapsed time, maybe
the variable should include search in its name.
Sure
As a bigger question, why are we tracking and reporting the search
time to the user? From a UI perspective, I'm not sure why a user would
care whether search took 7 seconds or 9 seconds. Is there a reason a
user would want this info or is it more a debugging/development tool
for you?
Well in Google the search time is reported in brackets at the top of the
page, but it's just a nice-to-have. At present I wanted it there to help
with the debugging and optimization work for Search. Happy to change it
or take it out given feedback from users.
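To make the behaviour concrete, here is a minimal sketch of elapsed-time reporting that only surfaces the timing when a search is slow (the 1-second threshold matches what the webrev does; the function and constant names are illustrative, not from the actual code):

```python
import time

SEARCH_TIME_THRESHOLD = 1.0  # seconds; hypothetical constant


def format_search_time(elapsed):
    """Return a status-bar message only when the search was slow
    enough to be worth reporting (> 1 sec, as in the webrev)."""
    if elapsed < SEARCH_TIME_THRESHOLD:
        return ""
    return "Search completed in %.1f seconds" % elapsed


def timed_search(search_fn, *args):
    """Run a search callable and report its wall-clock elapsed time."""
    start = time.time()
    results = search_fn(*args)
    return results, time.time() - start
```

This keeps the timing useful for debugging while staying invisible for fast searches.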
I'm slightly confused about what the "current_not_show_repos" variable
means. On lines 807-810 it seems like those are exactly the ones
you're showing the user as having errors. Also, when does the variable
get cleared? If, and here I'm just guessing, it's a list of repos not
to search, does one failed search mean you won't retry that publisher
again until the next time packagemanager is restarted?
I agree "current_not_show_repos" is misleading, renamed to
"current_repos_with_search_errors", which captures the intent here. It
was being unset redundantly in a number of places; it is now unset only
at the start of __parse_api_search_error() where it is populated and
then used in __handle_api_search_error.
When errors occur searching a repo the user is notified by a dialog and
a yellow warning triangle in the status bar:
__do_api_search():
        :
        except api_errors.ProblematicSearchServers, ex:
                self.__parse_api_search_error(ex)
                gobject.idle_add(self.w_progress_dialog.hide)
                gobject.idle_add(self.__handle_api_search_error)
The user can elect to not have these errors reported again in the popup
dialog. These repos to ignore on error are stored and are persistent
across invocations of PM. It does not mean that you won't try to search
these repos, only that the user won't be bugged with errors telling them
there are problems with it. What happens in this case is that a yellow
triangle is all they see in the statusbar if there are errors on a given
repo they have chosen to ignore. Clicking on the triangle brings up the
dialog listing all the repos with errors and unsets the ignore state of
these repos.
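The behaviour described above can be sketched roughly like this (this is an illustrative stand-in, not the actual PM code; class and method names are invented):

```python
class SearchErrorTracker:
    """Sketch of the described behaviour: errors are always collected,
    but the error dialog is suppressed for repos the user chose to
    ignore; clicking the warning triangle lists all errors and clears
    the ignore state of the repos involved."""

    def __init__(self, ignored_repos=None):
        # Persisted across invocations in the real application.
        self.ignored_repos = set(ignored_repos or [])
        self.current_repos_with_search_errors = []

    def record_errors(self, failed_repos):
        # Reset at the start of each search, mirroring how
        # __parse_api_search_error() repopulates the list.
        self.current_repos_with_search_errors = list(failed_repos)

    def repos_to_report_in_dialog(self):
        # Ignored repos are still searched; they just don't trigger
        # the popup dialog.
        return [r for r in self.current_repos_with_search_errors
                if r not in self.ignored_repos]

    def show_warning_triangle(self):
        # The yellow triangle appears whenever any repo had errors.
        return bool(self.current_repos_with_search_errors)

    def on_triangle_clicked(self):
        # Listing the errors un-ignores the repos involved.
        self.ignored_repos -= set(self.current_repos_with_search_errors)
        return list(self.current_repos_with_search_errors)
```

The key point is that the ignore set only gates the dialog, never the search itself.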
For lines 780-782, why are you getting the string of the error and
testing the string instead of looking at the class of the exception
using, for example, isinstance? That would give you far more control
over the information you display to the user as well as allowing you
to get the specific url, error code, message etc... Also, it would
make more explicit which errors you've accounted for and which are
getting ignored. For example, if I've followed the code correctly, if
search encountered a BadStatusLine error for a repo, the user would
never be notified of this problem, and would simply see that their
search returned no results.
I am leaving this until J's stuff lands and will rework, given his and
your recent proposal on:
http://defect.opensolaris.org/bz/show_bug.cgi?id=9670#c2
I realize that a lot of this code will need to get rewhacked when J's
transport wad lands, but explicitly checking for specific errors and
handling them appropriately seems much closer to what the code will
look like in that world than the current approach.
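A minimal sketch of the isinstance-based dispatch Brock describes (the exception classes here are hypothetical stand-ins, not the real pkg.client.api_errors classes):

```python
class SearchError(Exception):
    """Hypothetical base class for this sketch only."""
    def __init__(self, url):
        self.url = url


class ServerReturnedError(SearchError):
    def __init__(self, url, code):
        SearchError.__init__(self, url)
        self.code = code


class BadStatusLineError(SearchError):
    pass


def describe_search_error(ex):
    """Dispatch on the exception class rather than on str(ex), so each
    known error type is handled explicitly, structured fields like the
    url and error code are available, and unrecognized errors are still
    surfaced to the user instead of silently dropped."""
    if isinstance(ex, ServerReturnedError):
        return "Server %s returned error code %d" % (ex.url, ex.code)
    if isinstance(ex, BadStatusLineError):
        return "Server %s sent a malformed response" % ex.url
    # Anything unrecognized is reported generically, never swallowed.
    return "Unexpected error searching %s: %r" % (ex.url, ex)
```

Compared with string matching, the fall-through branch makes the "which errors are unaccounted for" question explicit.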
1736: Why do case sensitive search by default? At least for command
line search, all the feedback we've gotten suggests that case
insensitive should be the default. Typically, if someone wants to
search for openoffice, they don't want to have to remember the exact
capitalization pattern used.
I thought that's what we were doing — the second argument to Query is
case_sensitive, and we pass False:

    [api.Query(" ".join(pargs), False, True, None,
        None)], servers=servers

    query_p.Query.__init__(self, text, case_sensitive, return_type,
        num_to_return, start_point)
Searching for OFFice and for office each turns up 26 hits on opensolaris.org.
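For illustration, the effect of a case_sensitive flag like Query's boils down to something like this (a standalone sketch; the real matching happens inside the Search API, not in PM):

```python
def matches(token, text, case_sensitive=False):
    """Minimal illustration of what a case_sensitive flag controls:
    when False, both sides are lowercased before comparing."""
    if not case_sensitive:
        token, text = token.lower(), text.lower()
    return token in text
```

With the flag off, OFFice and office behave identically, which matches the 26-hit observation above.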
1740-1750 or so: I realize this hasn't changed in this fix, but
I'm a bit confused about the approach taken here. Why create an
artificial limit on the number of results generated, why not treat
this as a generator and have it throw results to the user as soon as
they're ready? When we switched to that model on the cli, the user
experience was vastly improved because results started appearing much
sooner. Also, if you're going to limit the number of results shown to
the user to be 100, why not tell the servers not to return more than
100 results?
Good point - I like the idea of getting the results back incrementally.
I have submitted this as an RFE and will work on it after this wad goes
back.
9710 PM search should return results incrementally
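The generator model Brock suggests could look roughly like this (an illustrative sketch for the RFE, with invented names; in PM the results would be handed to the GUI thread rather than consumed directly):

```python
def search_publishers(publishers, query):
    """Yield each search result as it arrives instead of buffering
    (and capping) the full list. 'publishers' maps a publisher name
    to a callable returning that publisher's matches; both names
    are illustrative stand-ins for the Search API."""
    for name, do_search in publishers.items():
        for result in do_search(query):
            # In PM each yielded row would go to the GUI thread via
            # gobject.idle_add(), so results appear immediately.
            yield name, result
```

Because the caller pulls results lazily, the first rows can be displayed before slower publishers have responded, which is what made the cli experience feel so much faster.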
1775, 1781: Why do insert(0, ...), which is slow for long lists, and
then reverse the results? Can't this just be results.append(a_res),
so that you don't need to do the reverse either?
Done
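For the record, the two patterns side by side (variable names follow the review comment; the slow version is the one being removed):

```python
def collect_results_slow(incoming):
    """The original pattern: prepend each result, then reverse.
    Each insert(0, ...) shifts every existing element, so building
    the list is O(n^2) overall."""
    results = []
    for a_res in incoming:
        results.insert(0, a_res)
    results.reverse()
    return results


def collect_results_fast(incoming):
    """The suggested pattern: append is amortized O(1), and no final
    reverse is needed since arrival order is already preserved."""
    results = []
    for a_res in incoming:
        results.append(a_res)
    return results
```

Both produce the same ordering; only the cost differs.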
Nits:
239: commented line
Done
266: extra whitespace
Done
1673: why not just pass self.is_search_all here instead of creating
search_all?
Done
Thanks,
Brock
JR
Shawn Walker wrote:
On Jun 25, 2009, at 6:04 AM, jmr wrote:
Hi, here is a webrev to move PM over to using the Search API for
all searches, both against single publishers (previously used
GtkTreeView filtering) and all publishers (added in 2009.06,
already using Search API).
webrev:
http://cr.opensolaris.org/~jmr/pm_8324_local_search_using_api_25Jun_1150am/
9442 Use Search API in PM for all searches
This will cause a slowdown when searching against Single
Publishers. The timings for a search are listed in the Status Bar
if it takes longer than 1 sec. If a repo cannot be contacted there
is a timeout of 30 secs, which is caught and reported as a failed
search against the repo. When we have better exceptions here from
the transport layer [9670] we can do a better job of handling the
range of errors.
Any idea why I see this when trying to start packagemanager from my
workspace?
Traceback (most recent call last):
File "./packagemanager.py", line 4032, in ?
packagemanager = PackageManager()
File "./packagemanager.py", line 265, in __init__
self.__register_iconsets(self.search_options)
File "./packagemanager.py", line 644, in __register_iconsets
iconset = gtk.IconSet(pixbuf)
TypeError: pixbuf should be a GdkPixbuf
I've blown away my proto area and done a make install, and I've
checked that my PACKAGE_MANAGER_ROOT is set right.
I'm on build 111b here.
Cheers,
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss