This is already drifting off topic. The patch submitter has to make a case, weighing the pros and cons. Ultimately it will be Wayne's decision, since it's his code.
I am actually trying to give the OP the information he needs to decide whether it is worth his time pursuing this. He also knows that I would never pursue it, because I don't care if my old software does not load new footprints. (I type $ make, and I can load the new footprints.) But if there is little loss, and a matching or better gain in Wayne's eyes, it might go through. Doing a benchmark is just two lines of code, and not worth another disagreement. And it has nothing to do with graphics.

On 05/08/2014 04:26 PM, Lorenzo Marcantonio wrote:
> On Thu, May 08, 2014 at 03:46:15PM -0500, Dick Hollenbeck wrote:
>> I think the guy who does not trim down his fp-lib-table to an
>> interesting subset may soon be trying to load a lot of footprints,
>> maybe even unknowingly.
>
> Aren't footprints loaded on demand? IIRC that was one of the reasons
> for the pretty library format.
>
> Currently I find the slowest thing in pcbnew is re-preparing the
> drawing (like when you toggle zone visibility). Profiling ages ago
> showed that most of the time was spent computing distances that
> weren't used anyway. Loading/saving may not be blazing, but it is
> usually a far rarer occurrence.
>
>> I do feel speed is important, but I am not sure how much slower the
>> DOM parser may be, and that is why a measurement is a good thing.
>
> It could be quite slow, I agree with that. A lot of that depends on
> the actual kind of structure used by the DOM tree; another point
> I think will matter is string comparison, since I don't think there
> is any interning/hashing going on in that parser. With interned
> strings a string comparison is a pointer comparison; otherwise it's
> strcmp time. With that many keywords to handle, it would probably
> make a difference.
>
> If the technique interests you and you don't know it yet, try looking
> at gperf; the current keyword recognizer could probably be enhanced
> that way.
> IIRC the current lexer uses dynamic hashing, which is still a good
> way to weed out strings, so maybe the gain wouldn't be spectacular.
>
>> Are you having any luck with Lisp on your PIC?
>
> Actually there are Lisp subsets targeted at the higher-end PICs
> (mostly toys, however). The Saturn processor in the HP calculators is
> underspecced compared to current PICs; it just has a lot of memory
> strapped on. Too bad core memory is not plentiful in the usual PICs
> we use (32-256 bytes :P); they are designed to be assembly based...
> some people like to use them with Forth, but I never tried it.

_______________________________________________
Mailing list: https://launchpad.net/~kicad-developers
Post to     : kicad-developers@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kicad-developers
More help   : https://help.launchpad.net/ListHelp