That's a good point. What's the overhead of computing digests like that? Also, does that open up the possibility, exceedingly small though it may be, of misidentifying a branch as already searched and so missing a qualifying subgraph?
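For a rough sense of scale, here is a back-of-the-envelope sketch, assuming MD5's 128-bit output behaves like a uniform random function (a standard modeling assumption, not something from this thread). The birthday bound puts the chance of any two of n keys colliding at roughly n^2 / 2^129:

    # Approximate probability that at least two of n uniformly random
    # 128-bit digests collide (birthday bound): p ~ n^2 / 2^129
    collision_prob <- function(n) n^2 / 2^129
    collision_prob(1e9)   # about 1.5e-21, even for a billion branches

So the misidentification risk is real in principle but negligible in practice.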
On Mar 16, 2007, at 2:02 PM, Seth Falcon wrote:

> Peter McMahan <[EMAIL PROTECTED]> writes:
>
>> Thanks, I'll give it a try. Does R have a limit on variable name
>> length?
>
> If you are going to have very long names, you might be better off
> computing a digest of some kind. You could use the digest package to
> compute an md5sum or the Ruuid package to generate a GUID.
>
>> Also, is it better to over-estimate or under-estimate the
>> size parameter?
>
> The environment will grow as needed. If you overestimate, you will
> use more memory than you need to. Whether this is a problem depends
> on whether you have extra memory available. Underestimating means
> that the underlying hashtable will need to be resized, and this has
> a performance impact.
>
> + seth
>
> --
> Seth Falcon | Computational Biology | Fred Hutchinson Cancer Research Center
> http://bioconductor.org
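For concreteness, here is a minimal sketch of the approach Seth describes, assuming the digest package is installed; the names visited and branch_key are illustrative, not from the thread:

    library(digest)

    # Pre-size the hashed environment; it will still grow if needed,
    # at the cost of an occasional rehash.
    visited <- new.env(hash = TRUE, size = 10000L)

    # A very long name, e.g. a serialized description of a branch.
    branch_key <- paste(sample(letters, 5000, replace = TRUE), collapse = "")

    # Store under the fixed-length md5 digest instead of the raw name.
    key <- digest(branch_key, algo = "md5")
    assign(key, TRUE, envir = visited)

    # Later: check whether this branch was already searched.
    # inherits = FALSE keeps the lookup from matching names in
    # parent environments.
    exists(key, envir = visited, inherits = FALSE)   # TRUE

Since the digest is always a 32-character hex string, name length stops being an issue; the trade-off is exactly the (vanishingly small) collision risk raised above.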
