https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10756
--- Comment #18 from Mason James <[email protected]> ---

(In reply to Jonathan Druart from comment #17)
> (In reply to Marcel de Rooy from comment #13)
> > On http://www.jacksasylum.eu/ContentFlow/ I only see changelogs until 2010?
> > If there is no further development(?), this might be a risk.
>
> I can see that as a blocker. Mason, would it be easy to update the plugin
> you used?

Hi, yes - it's very easy to swap the carousel plugin for anything else. I'm happy to change the carousel plugin to whatever people prefer.

> (In reply to Frédéric Demians from comment #14)
> > I don't see the advantage of this implementation against the Bywater
> > plugin.

AFAIK, the big advantage is that the Bywater plugin can't display an *automated* selection of verified cover images for recently added items. I think that with the Bywater plugin, a manual list (report?) would need to be created daily, and each item then manually verified for a matching Amazon image.

> > I rather see disadvantages, including ContentFlow.js obsolescence (not
> > updated since 2010, when jQuery Flipster used by the ByWater plugin is
> > actively maintained). Reading the code, I don't understand how
> > GetRecentBibs generates the list of 'recent' bibs. Why a new table
> > (carousel)? Is it necessary to read/re-read this table each time the OPAC
> > main page is loaded?
>
> Same for me, it's not conceivable to call this subroutine for each GET of
> the OPAC main page.

With a warmed cache table, the GetRecentBibs() sub takes around 10ms on my old, slow VM. 10ms seems acceptable?

> Could you please detail what the purpose of this subroutine is?

The subroutine returns a list of recently added bibs with verified matching Amazon cover images.

> Why do you need a new table - a cache of the image URLs, that's it?
Yes, that's all - a method of caching the URLs is needed for the feature to work at an acceptable speed. FYI: I did experiment with memcached, but the speed difference was small/negligible, so I settled on the convenience of a MySQL table instead.

> Additional comments:
> - kohastructure.sql changes are missing

Thanks, I can do this - no problem.

> - Amazon lookup should be optional

Amazon lookups effectively cease (i.e. drop to 0) as the cache table becomes populated, so this is probably not needed? (Unless I misunderstand your point.)

> - We have several subroutines in C4::Koha to deal with ISBNs, I am sure you
>   could reuse them

Thanks for the suggestion - I could use NormalizedISBN() instead.

> - What are the 150 and 300 hardcoded limits?

They are limits to reduce the size of the item list; the values were chosen to strike a balance between acceptable performance and a good selection of randomised items.

> - It would be better to use Koha::Object

Sure, I can do this - no problem.

> - It would be great to remove all the debug variables, it will ease the
>   readability

I would really prefer to keep the debug/profiling code (in case any future problems occur), but I'm happy to tidy it up and improve its readability.

--
You are receiving this mail because: You are watching all bug changes.
_______________________________________________
Koha-bugs mailing list
[email protected]
http://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-bugs
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/
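[Editor's note] The lazy-fill caching behaviour Mason describes above (remote lookups taper off to zero as the cache table fills) can be sketched as follows. This is an illustration only, not code from the patch: Python with an in-memory SQLite table stands in for Koha's MySQL cache table, and the table name `carousel`, its columns, and the `fetch_cover_url` stub are all assumptions.

```python
import sqlite3

def make_cache():
    # Stand-in for the MySQL cache table discussed in the thread;
    # the "carousel" schema here is hypothetical.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE carousel (isbn TEXT PRIMARY KEY, image_url TEXT)")
    return db

def fetch_cover_url(isbn):
    # Stub for the remote (Amazon) cover lookup.
    return f"https://images.example.org/covers/{isbn}.jpg"

def get_cover_url(db, isbn, lookups):
    """Return a cached cover URL, doing a remote lookup only on a miss."""
    row = db.execute(
        "SELECT image_url FROM carousel WHERE isbn = ?", (isbn,)
    ).fetchone()
    if row:
        return row[0]            # cache hit: no remote lookup needed
    lookups.append(isbn)         # record the miss, for illustration
    url = fetch_cover_url(isbn)
    db.execute("INSERT INTO carousel VALUES (?, ?)", (isbn, url))
    return url

db = make_cache()
lookups = []
get_cover_url(db, "9780143127741", lookups)  # miss: one remote lookup
get_cover_url(db, "9780143127741", lookups)  # hit: served from the table
print(len(lookups))  # prints 1: the second call never left the cache
```

Once every recent bib's URL is in the table, every request is a cache hit, which is why making the Amazon lookup "optional" has little practical effect after warm-up.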
