On Mon, Mar 17, 2008 at 7:53 PM, Eddy Petrișor <[EMAIL PROTECTED]> wrote:
>
> Lars Lindner wrote:
>  > On Mon, Mar 17, 2008 at 4:33 PM, Luis Rodrigo Gallardo Cruz
>  > <[EMAIL PROTECTED]> wrote:
>  >> On Mon, Mar 17, 2008 at 02:21:14PM +0200, Eddy Petrișor wrote:
>  >>  > Luis Rodrigo Gallardo Cruz wrote:
>  >>  >> On Thu, Mar 13, 2008 at 02:33:54AM +0200, Eddy Petrișor wrote:
>  >>
>  >>  >>>> On to more promising lands:
>  >>  >>>>
>  >>  >>>> Could you try running with --debug-update, please?
>  >>  >>> I moved away the ~/.liferea_1.4 directory (safe copy) and ran:
>  >>  >>>
>  >>  >>> liferea --debug-update 2>&1 | tee liferea_update
>  >>  >>>
>  >>  >>> The log is attached.
>  >>  >>
>  >>  >> I hate Heisenbugs.
>  >>  >
>  >>  > Not sure this is one.
>  >>  >
>  >>  >> I see nothing obviously wrong in this log. Even worse, there are
>  >>  >> entries there about updating the feeds, mentioning relatively recent
>  >>  >> entries in, for example, Debian Planet.
>  >>  >>
>  >>  >> Did it keep failing after this update?
>  >>  >
>  >>  > Yes, for instance, Debian Planet is still stuck at that post, "Sami
>  >>  > Haahtinen: Installing Debian on NSLU2" from the 6th of March.
>  >>
>  >>  Mmm. This makes me think it's reading them but then not committing them
>  >>  to the database. Maybe something in the cache settings is malfunctioning.
>  >
>  > I can think of several scenarios you could be in:
>  >
>  > 1.) No updates saved to disk
>  > 2.) No updates executed at all
>  > 3.) No updates results processed
>  > 4.) No network connection.
>  >
>  > In case of 1.) you should see DB error messages on the command line.
>  > To verify this, please run it at least once from the command line and
>  > check for suspicious output. You might also run once with the command
>  > line option "--debug-db" and check for errors.
>
>  Log attached; my untrained eye didn't spot any of those.
>
>
>  > In case of 2.) you can check using the "Update Monitor" option in the
>  > "Tools" menu. This will open a dialog presenting a list of all
>  > subscriptions that are to be updated and all that are downloaded right
>  > now. Here you should check whether there are feeds in the queue at all
>  > and if those are processed after a while.
>
>  They are executed. I checked the Update Monitor and all the expected feeds
>  (I focused on Planet Debian) appear and are processed rather quickly
>  (I didn't have to wait more than a few seconds) in front of my eyes.
>
>
>  > In case of 3.) you should check the output of a run with command line
>  > option "--debug-update" for HTTP error codes.
>
>  The log for update is also attached (liferea-debug-upd.log)
>
>  I won't pretend I know the HTTP protocol well enough to know what I'm
>  talking about, but this looks odd:
>
>
>  UPDATE: trying to merge "Planet Debian" to node id "ymxftff"
>  UPDATE: -> not adding "Planet Debian" to node id "ymxftff"...
>  UPDATE: trying to merge "Planet Debian" to node id "ymxftff"
>  UPDATE: -> not adding "Planet Debian" to node id "ymxftff"...
>  UPDATE: trying to merge "The Linux Gang - Daily Top Blog Posts on Linux - Powered by SocialRank" to node id "ymxftff"
>  UPDATE: -> not adding "The Linux Gang - Daily Top Blog Posts on Linux - Powered by SocialRank" to node id "ymxftff"...
>  UPDATE: trying to merge "The Linux Gang - Daily Top Blog Posts on Linux - Powered by SocialRank" to node id "ymxftff"
>  UPDATE: -> not adding "The Linux Gang - Daily Top Blog Posts on Linux - Powered by SocialRank" to node id "ymxftff"...
>  [...]
>
>  This is also odd (why does it say the content didn't change?)
>
>  UPDATE: trying to merge "Planet Debian" to node id "ymxftff"
>  UPDATE: -> not adding "Planet Debian" to node id "ymxftff"...
>  UPDATE: 0 new items, cache limit is 100 -> dropping 0 items
>  UPDATE: merge itemset took 0,011s
>  UPDATE: download result - HTTP status: 304, error: 1, netio error:0, data: 0
>  UPDATE: request (http:) finished
>  UPDATE: processing request (http://www.debian.org/News/weekly/dwn.en.rdf)
>  UPDATE: downloading http://www.debian.org/News/weekly/dwn.en.rdf
>  UPDATE: download result - HTTP status: 200, error: 0, netio error:0, data: 19272560
>  UPDATE: request (http:) finished
>  UPDATE: processing request (http://planet.debian.org/rss20.xml)
>  UPDATE: downloading http://planet.debian.org/rss20.xml
>  UPDATE: discovered feed format: rss
>  UPDATE: old item set 0x2aaaac282da0 of (node id=slgapfb):
>  UPDATE: trying to merge "Package Build Status" to node id "slgapfb"
>  UPDATE: -> not adding "Package Build Status" to node id "slgapfb"...
>
>
>
>  > In case of 4.) please check if the online/offline icon in the lower
>  > left corner displays the "online" icon. You should also check whether
>  > the icons in the feed list symbolize an "unreachable" state (by being
>  > replaced with error symbols).
>
>  It is online.
>
>  All the feeds show their own individual icons, no error symbols.
>
>
>  > In general you should check what the status bar says when you perform
>  > an update (of either single feeds or all feeds).
>
>  The status bar explicitly says it is updating the feeds, one by one, but
>  it also says that nothing changed.
>
>
>  > To be honest, your error report is pretty vague and one cannot really
>  > determine what your problem is. You really need to provide more details!
>  > For example, I'm not sure you ever said how you trigger updates.
>
>  I find this rather offensive, since I have tried my best to provide all
>  the required info so far.
>
>  And, by the way, I already said how I trigger updates[1]. Also, the
>  updates are automatically triggered when the application starts.

I didn't really read the original reference, and I admit that was impolite.

Nonetheless, I have identified the problem: you mark a massive number of
posts as important (flagged). That is not forbidden, but it was something
I did not anticipate when I implemented the merging algorithm.

Flagged items have the property of never being dropped from the cache,
but at the same time there is a cache limit that the merging algorithm
has to respect. The current calculation is simple: if the cache limit is
100 (as in your case, and by default) and there are 100 or more flagged
items that must never be dropped, then there is simply no room left to
add new items.
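
Roughly, the current hard-limit logic amounts to the sketch below
(illustrative C only, not the actual Liferea source; the names are
made up):

    /* Illustrative sketch of the current hard cache limit behaviour.
     * Flagged items are never dropped, so they permanently occupy
     * cache slots. */
    static unsigned int
    room_for_new_items (unsigned int cache_limit,   /* e.g. 100 by default */
                        unsigned int flagged_count) /* never-dropped items */
    {
        /* With cache_limit or more flagged items there is no slot
         * left, and freshly downloaded items are silently discarded. */
        if (flagged_count >= cache_limit)
            return 0;

        return cache_limit - flagged_count;
    }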

As a temporary workaround you should increase the cache limit for
all affected feeds (like the Debian Planet feed).

For a real solution I need to think about something like a soft cache
limit that could be extended to

    <# flagged items> + <# items in downloaded feed>

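As a sketch (again illustrative C with made-up names, not a final
design), the soft limit could look like this:

    /* Illustrative sketch of a soft cache limit: never let flagged
     * items crowd out a freshly downloaded feed. */
    static unsigned int
    effective_cache_limit (unsigned int hard_limit,       /* configured limit */
                           unsigned int flagged_count,    /* never-dropped items */
                           unsigned int downloaded_count) /* items in new download */
    {
        unsigned int soft_limit = flagged_count + downloaded_count;

        /* Extend the limit only when the flagged items would
         * otherwise leave no room for the new items. */
        return (soft_limit > hard_limit) ? soft_limit : hard_limit;
    }
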
Arghh... I don't like changing the merging mechanism...

Best Regards,
Lars
