Tim Kynerd said:

> The first problem, with the Web site, remains AFAIK; I haven't been on the
> Web site since Saturday, so I can't say for sure.

Sorry, didn't want to reply 'til I got this one sorted. :(

> The second problem turned out to be initially a problem with my site file --
> my regexps weren't getting matched (duh).  But even after I fixed them, I
> wasn't picking up as much content as I would like.  I *think* this is
> because the links I want scooped are in a table.  Will setting
> "ContentsUseTableSmarts" to 0 solve this?  For the time being, I solved it
> by copying the HTML page to my hard drive and editing it, then scooping that
> file; this works beautifully.  But I was planning to contribute this site
> file once I got it working, and I suppose I can't do so as long as it's
> dependent on another file for the scooping to work -- or?

Yep, I think setting ContentsUseTableSmarts: 0 will do that; it turns off
the table-parsing heuristics so the links inside the table get picked up
directly.
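For reference, a site-file fragment along these lines should work -- note
that only ContentsUseTableSmarts is the setting in question here; the
other field values are made-up placeholders, not taken from your file:

```
# Hypothetical example site file. Only ContentsUseTableSmarts: 0 is the
# point under discussion; Name/URL/regexps below are placeholders.
Name: Example Site
URL: http://www.example.com/index.html
ContentsStart: <!-- story links start -->
ContentsEnd: <!-- story links end -->
ContentsUseTableSmarts: 0
```

With table smarts off, the contents regexps are matched against the raw
page, tables and all, so you shouldn't need the locally-edited copy any
more.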

Better get working on that /doc problem ;)

--j.

_______________________________________________
Sitescooper-talk mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/sitescooper-talk