On Jan 9, 2010, at 8:05 AM, Fischlin Andreas wrote:

> On 08/Jan/2010, at 16:19 , Adam R. Maxwell wrote:
> 
>> On Jan 8, 2010, at 12:45 AM, Fischlin Andreas wrote:
>> 
>>> My personal database has 13'591 records and I do not know how  
>>> sluggish BibDesk would get if I transferred all records to it.
>> 
>> File a bug with a sample if it's slow.  The largest file I tested  
>> with has ~25000 entries, and the only slowdowns I recall were in  
>> searching (fixed) and smart groups (which you can now hide in the  
>> source list).
> 
> Good to know. Also all with abstracts?

I don't know.  It's also been a couple of years since I did that profiling, and 
lots of things have changed.

> I just didn't dare to test it.  
> Will do some time in the future when I have some minutes to spare.

You can only find problems by testing, but here are my guesses: having 
abstracts in all of them would increase the memory footprint and make 
searching by "any field" somewhat slower.  If you have sufficient RAM and 
multiple cores, you won't notice that.  Smart groups that depend on the 
abstract field would likely be slow.



_______________________________________________
Bibdesk-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bibdesk-users