Hi all,
Debajyoti Bera wrote:
Ahh... somehow my reply didn't go to the list. The bug was due to a recent patch.
I reverted it - that should fix the repeated indexing problem.
Stephan, could you sync with CVS and retry?
I fetched the latest version from CVS this morning, compiled and installed
it.
Hi,
Thanks for all the testing. It's very helpful.
I cleaned all indexes, logs, etc. and started from scratch with forced,
quick indexing:
setenv BEAGLE_EXERCISE_THE_DOG 1; beagled --fg --debug
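(A side note for anyone on a Bourne-style shell: the line above is csh/tcsh syntax. A minimal sketch of the equivalent for bash/sh, assuming the variable only needs to be present in beagled's environment:)

```shell
# setenv VAR value is csh syntax; the bash/sh equivalent is:
export BEAGLE_EXERCISE_THE_DOG=1
# then start the daemon in the foreground with debug logging:
# beagled --fg --debug
```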
A bit strange: even with an empty index, beagle reports removing files or
directories.
1. Files are being reindexed. This is easily seen in the logs.
2. I've yet to have Beagle finish indexing from a clean directory over several
days of updating from CVS. The tree crawling gets stuck with messages like:
061206 0722412395 16878 Beagle DEBUG: Running file crawl task
061206 0722417547
Hi,
On Wed, 2006-12-06 at 00:01 +0100, Rafał Próchniak wrote:
On Mon, 2006-12-04 at 16:08 -0500, Joe Shaw wrote:
* Making sure the date/time issues are all fixed;
Do I have to remove old indexes after upgrading beagle? I compiled the CVS
version after dBera's email.
Hi,
On Wed, 2006-12-06 at 11:38 -0500, Debajyoti Bera wrote:
I am not using extended attributes, and I'd rather not. I'm not sure if
that makes a difference.
Could be because of that. I am not able to reproduce it on my machine.
By the way, you can emulate not having extended attribute support by
running Beagle with BEAGLE_DISABLE_XATTR set.
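(For anyone wanting to try this, a minimal sketch; it assumes Beagle only checks whether the variable is set, not its value:)

```shell
# Emulate a filesystem without extended-attribute support for this session.
# Assumption: beagled only tests for the variable's presence, not its value.
export BEAGLE_DISABLE_XATTR=1
# then run the daemon as usual:
# beagled --fg --debug
```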
I ran into this while testing. BEAGLE_DISABLE_XATTR doesn't quite work
in CVS. There is a check missing from FSQ/FileSystemQueryable.cs.
--
A thought crossed my mind recently about having beagle search inside
subversion repos. Would I have to implement that as a backend? I've
only written fairly trivial filters so far, so I'm not sure how much
work that would involve, or whether it would be practical, but it would
be quite cool to be able to
Hi Alex,
On Wed, 2006-12-06 at 20:38 +, Alex Mac wrote:
Any possible pitfalls or things that would make this not worth doing?
The idea is to find .svn directories and deal with their contents,
correct? Or is there something else here I'm not following?
A backend is definitely the way to