On Friday, 12 February 2021 at 01:23:14 UTC, Josh wrote:
> I'm trying to read in a text file that has many duplicated lines and output a file with all the duplicates removed.

If you only need to remove duplicates, keeping (and comparing) a string hash for each line is good enough. Memory usage should then be just n integers, one hash per distinct line.
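
A minimal sketch of that idea in Python (the file names are just placeholders): store one hash per line seen so far, and only write out lines whose hash hasn't appeared yet. Note the small caveat that two different lines could in theory share a hash, in which case one of them would be dropped.

    # Sketch: keep only a hash per line instead of the line itself.
    seen = set()
    with open("input.txt") as fin, open("deduped.txt", "w") as fout:
        for line in fin:
            h = hash(line)        # one integer per distinct line
            if h not in seen:     # first time we see this hash
                seen.add(h)
                fout.write(line)  # keep the first occurrence only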

