Hi Hans,

On 21.01.11 at 13:01, Hans K. Aspenberg wrote:
> I have done some further analysis by enabling logging (in PostgreSQL)
> of all SQL statements on our test server. These are typical entries in
> the log from displaying a sprint backlog. For every ticket I see, among
> other things, sequences of:
>
> 2011-01-21 12:07:50 CET LOG: statement: SELECT type FROM ticket WHERE id=315
> 2011-01-21 12:07:50 CET LOG: duration: 0.000 ms
> 2011-01-21 12:07:50 CET LOG: statement: SELECT summary,reporter,owner,description,type,status,milestone,resolution,time,changetime FROM ticket WHERE id=315
> 2011-01-21 12:07:50 CET LOG: duration: 0.000 ms
> 2011-01-21 12:07:50 CET LOG: statement: SELECT type FROM ticket WHERE id=315
> 2011-01-21 12:07:50 CET LOG: duration: 0.000 ms
>
> Altogether, more than 4700 SQL statements to display about 120 tickets.
> Also, I see a lot of BEGIN/ROLLBACK transaction statements around
> single SELECT statements asking for a single value.
>
> Displaying "Active tickets", which (just now) lists 450 tickets, ends
> up in 3 large SQL statements that all in all execute in 32 ms on our
> server...
>
> Hopefully without stepping on anyone's toes here, I assume that parts
> of Agilo are written using Python class libraries where SQL
> optimization hasn't been a particular focus.
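(For anyone following along: the repeated per-ticket SELECTs in Hans's log are the classic "N+1 query" pattern. Here is a minimal sketch of the difference between per-row fetches and one batched query, using Python's built-in sqlite3 module and a made-up ticket table; this is an illustration, not Agilo's or Trac's actual code.)

```python
import sqlite3

# Hypothetical minimal schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticket (id INTEGER PRIMARY KEY, type TEXT, summary TEXT)")
conn.executemany(
    "INSERT INTO ticket VALUES (?, ?, ?)",
    [(i, "story", "summary %d" % i) for i in range(1, 121)],
)

ids = [row[0] for row in conn.execute("SELECT id FROM ticket")]

# N+1 pattern, as seen in the log: one statement per ticket,
# so 120 tickets cost 120+ round trips to the database.
tickets_slow = {}
for tid in ids:
    row = conn.execute(
        "SELECT type, summary FROM ticket WHERE id=?", (tid,)
    ).fetchone()
    tickets_slow[tid] = row

# Batched alternative: one statement fetches all 120 tickets at once.
placeholders = ",".join("?" * len(ids))
tickets_fast = {
    row[0]: row[1:]
    for row in conn.execute(
        "SELECT id, type, summary FROM ticket WHERE id IN (%s)" % placeholders,
        ids,
    )
}

# Both approaches yield the same data; only the query count differs.
assert tickets_slow == tickets_fast
```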
Well, we rely on Trac's infrastructure to deliver the backlog contents (tickets) to us, and since we need the full ticket objects (to trigger behaviour on them), we can't easily go the route they have gone with the reports, which just pull out some information about all the tickets with arbitrarily generated SQL. (Also, this doesn't expose all features of tickets, as those rely on code on the ticket object. Hierarchies are one example, as are computed fields.) And sadly, the database layout of Trac is not... optimal for getting a lot of the real internal ticket objects.

Of course we could always rewrite big internal pieces of Trac, but so far we haven't had success getting a discussion started with the Trac developers about more optimal internal data structures, as they feel their current structures are simpler and therefore easier to maintain. (Sorry, I can't find the ticket with that discussion just now.)

What this gives us is a little trouble maintaining very large backlogs, as teasing them out of Trac is very costly. We have taken the benchmark that we want to support backlogs of a size that allows you to satisfy multiple teams for many months (years, even, if you use epics to track far-out things).

So the question to you would be: what value do you get from tracking very detailed work that is so far out that you have 400+ backlog items? Couldn't your product owner provide more value to the team's development if he spent more of his time on the stories that are approaching implementation than on creating and maintaining all these stories so far in the future?

Hope this helps!
Martin

--
Follow Agilo on Twitter: http://twitter.com/agiloforscrum
Please support us by reviewing and voting on:
http://userstories.com/products/8-agilo-for-scrum
http://ohloh.net/p/agilo-scrum
http://freshmeat.net/projects/agiloforscrum

You have received this message because you are subscribed to the "Agilo for Scrum" Google Group.
This group is focused on supporting Agilo for Scrum users and is moderated by Agilo Software GmbH <http://www.agiloforscrum.com>.
To post to this group, send email to [email protected].
To unsubscribe from this group, send an email to [email protected].
For more options, visit this group at http://groups.google.com/group/agilo

