On 10 Apr 2008, at 01:51, [EMAIL PROTECTED] wrote:

> Hey, I am new to this list and Perl in general. I was wondering which
> would be faster between a hash and an array with a large (1000000+)
> size? Specifically I need to be able to add new elements and access
> the lower elements in sequential order. I'm basically finding large
> prime numbers.
If you just want an n-sized bag of things, and the ability to access them in the order you stored them, you fundamentally want a list. I would guess that using push/pop and shift/unshift (perldoc -f shift, perldoc -f pop) to add and remove things from one would be the fastest way to access a sequence of items, but shift/pop will modify the original list, which might not be what you want. Speed is not everything, of course, and the performance of either would vary according to set size and access patterns.

Do you actually need to remember millions of values, or are you just referring to the potential integer bounds of the set, from which you will need to store a subset of numbers?

If you want *really* large sets of data that you can access quickly and in a structured fashion, it may make more sense to look at using structured data on disk - for simple key/value lookup something like DBM or CDB would be ideal, and either can be tied to a hash (perldoc perltie, or see the DBM or CDB_File CPAN modules for more about this). You might even want to use a lightweight SQL database like MySQL/MyISAM or SQLite, if you're already comfortable with that concept and API, although it's not a good match for the problem space IMO.

Premature optimisation being the root of all evil, my tendency in Perl is to reach for a hash first for structured data, because it tends to be more trivially self-documenting.

-- 
Regards,
Colin M. Strickland

_______________________________________________
BristolBathPM mailing list
[email protected]
http://mailman.bristolbath.org/mailman/listinfo/bristolbathpm
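P.S. To make the array suggestion concrete, here is a minimal sketch of finding primes with a simple sieve, pushing them onto an array and reading them back in order. The helper name primes_below and the limit of 30 are my own placeholders, not from the original question:

```perl
use strict;
use warnings;

# Sieve of Eratosthenes up to (but not including) $limit, collecting
# each prime onto an array with push as it is found.
sub primes_below {
    my ($limit) = @_;
    my @is_composite;
    my @primes;
    for my $n (2 .. $limit - 1) {
        next if $is_composite[$n];
        push @primes, $n;                       # append: cheap, keeps order
        for (my $m = $n * $n; $m < $limit; $m += $n) {
            $is_composite[$m] = 1;
        }
    }
    return @primes;
}

my @primes = primes_below(30);
print "@primes\n";                              # 2 3 5 7 11 13 17 19 23 29

# Destructive sequential access from the low end:
my $first = shift @primes;                      # 2; @primes now starts at 3
```

A plain `for my $p (@primes) { ... }` walks the same list without destroying it, which is usually what you want when you'll need the low primes again.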

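P.P.S. A minimal sketch of the tied-hash-on-disk idea, using the core SDBM_File module; the file path and the keys are placeholders I made up for illustration:

```perl
use strict;
use warnings;
use Fcntl;         # for the O_RDWR / O_CREAT flags
use SDBM_File;

# Tie a hash to an on-disk SDBM database: stores and lookups go to
# the .dir/.pag files on disk rather than living entirely in memory.
my $db = '/tmp/prime_flags';                    # placeholder path
my %prime_flag;
tie %prime_flag, 'SDBM_File', $db, O_RDWR | O_CREAT, 0666
    or die "cannot tie $db: $!";

$prime_flag{13} = 1;                            # written through to disk
print "13 looks prime\n" if $prime_flag{13};

untie %prime_flag;
```

SDBM keys and values are flat strings with a fairly small per-entry size limit, so for anything beyond simple flags DB_File (or a real database) is the better fit - the tie interface is the same.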