Alexander Wagner wrote:
> Joost 't Hart wrote:

Hi Alexander, thanks for your response!

>
> Hi!
>
> First of all, I never sort a database, but I use some base
> of your size equivalent which gets some updates once a year,
> some deduping and thereafter compactification.

Could you, for a change, try it?

Functionally it even seems to work. I left it running this evening, and 
it DID complete... :-)

>
> Some silly points:
>
> - Compact works by writing the Scid base being compacted to a
>   temporary, potentially sorted, base which is then renamed
>   to the original file.  That is, in transit you need twice
>   the size of your base on disc, at least.
>
>   Your HDD does not happen to run into a low space state?
>
>   At this point your Linux file system's performance would
>   break down dramatically. (E.g. fragmentation starts and so
>   on.)

Sure, but no: 54 GB of 152 GB free.

>
> - I'm not sure where the sorting takes place, but I
>   assume Scid just uses memory for this. For a large
>   base quite an amount of RAM might be required. You do not
>   run into trouble at this end? (Asking xosview or gkrellm
>   or top might give a clue.) However, if this is the case
>   I'd suspect it to happen on Windows as well.

It is not the sorting, but the compaction.
Compacting is a pure disk operation. It creates a new game file, then it 
picks up all games from the old game file (one by one, in the order 
demanded by the index file, which may have changed because of the 
sorting itself) and copies them into the new game file.

On the fly, a new index entry is added to a new index file as well. So 
in a game-by-game loop the two <blabla>TEMP files are extended, and 
renamed to the original files once the loop is over. Pretty simple 
actually, see tkscid.cpp/sc_compact_games().
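
In (very much simplified) C++ the loop looks roughly like this. It is 
only a sketch of the shape, not the real code: the ".sg"/".si" 
suffixes, the IndexEntry layout and the name compactGames are 
placeholders I made up, not the actual structures used in 
sc_compact_games().

// Rough sketch only, not the real sc_compact_games(): file suffixes
// and the IndexEntry layout are simplified placeholders.
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

struct IndexEntry {      // placeholder: where one game lives in the game file
    uint64_t offset;
    uint32_t length;
};

// 'index' holds the (possibly freshly sorted) game order.
bool compactGames(const std::string& base, const std::vector<IndexEntry>& index)
{
    std::ifstream oldGames(base + ".sg", std::ios::binary);
    std::ofstream newGames(base + ".sg.TEMP", std::ios::binary);
    std::ofstream newIndex(base + ".si.TEMP", std::ios::binary);
    if (!oldGames || !newGames || !newIndex) return false;

    std::vector<char> buf;
    uint64_t newOffset = 0;

    // Game-by-game loop: pick each game up from its old position and
    // append it to the TEMP game file, emitting a fresh index entry.
    for (const IndexEntry& e : index) {
        buf.resize(e.length);
        oldGames.seekg(static_cast<std::streamoff>(e.offset));
        if (!oldGames.read(buf.data(), e.length)) return false;
        newGames.write(buf.data(), e.length);

        IndexEntry fresh{newOffset, e.length};
        newIndex.write(reinterpret_cast<const char*>(&fresh), sizeof fresh);
        newOffset += e.length;
    }

    oldGames.close();
    newGames.close();
    newIndex.close();

    // Once the loop is over the TEMP files replace the originals.
    return std::rename((base + ".sg.TEMP").c_str(), (base + ".sg").c_str()) == 0
        && std::rename((base + ".si.TEMP").c_str(), (base + ".si").c_str()) == 0;
}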

For the record, I did disable the progress bar, which is updated every 
500 games, as a test. To no avail. This pretty much rules out that Tcl 
is involved at all.

>
> - What FS do you use on Linux?
>
>   Some FS do not handle certain usages that well. My default
>   for years now is XFS everywhere (except /boot, yes I've a
>   partition there) which usually gives very good
>   performance except on deleting a huge number of very
>   small files (using XFS for a mail server e.g. is usually
>   not the best choice).
>
>   I remember that there was a similar issue with this Reiser
>   stuff for, AFAIK, larger files (typical for Scid e.g.).
>   However, it performs quite well on deletion (even
>   unintended <scnr/>) and it's, AFAIK, still the default on
>   SuSE e.g.

Hm, it is Reiser that I use, yes. After this I can easily run a test on 
ext3; I will let you know what that brings...

>
>   To the best of my knowledge, unfortunately, JFS never made
>   it into real maturity and had several problems in various
>   areas.
>
>   Could it be that you run into such a problem?
>
> - You do not use some mobile disk for your base, e.g. USB?

Nope :-)

>   Some setups are quite silly concerning caching of such
>   attachments. In the past I once had a setup that wrote 1.5
>   GB into the cache without even starting to flush it. Then
>   the system started to swap and got terribly slow. (It was
>   a compute server back then and I transferred some
>   simulation data...) Ok, this was surely a bug, I think in
>   RedHat back then, and I haven't experienced this in recent
>   setups again. But still, drives that get mounted via HAL
>   e.g. usually are not mounted SYNC and this is not always
>   the best choice, and I still experience some performance
>   breakdowns due to this sometimes. Usually, this is ok
>   today, however.
>

