Thanks for the clarification.
Is the allocation of more memory done in fixed chunks, or is there
something "smarter" going on? If the former, and the chunks are too
small, then maybe I am doing a lot of reallocations. However, my
impression was that memory usage increased quite monotonically, not in
noticeable steps.
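To make the difference concrete, here is a small hypothetical sketch (not the actual reader code) that just counts how many elements get copied as an array grows. Fixed-chunk growth leads to total copying that is quadratic in the number of items, while geometric (doubling) growth keeps it linear, which is why "smart" allocators grow capacity geometrically:

```python
# Hypothetical sketch: compare total element copies when an array grows
# by a fixed chunk versus by doubling its capacity on overflow.

def copies_fixed_chunk(n_items, chunk=64):
    """Count elements copied if capacity grows by a fixed chunk (O(n^2) total)."""
    capacity, size, copied = 0, 0, 0
    for _ in range(n_items):
        if size == capacity:
            copied += size      # existing elements move to new storage
            capacity += chunk
        size += 1
    return copied

def copies_doubling(n_items):
    """Count elements copied if capacity doubles on overflow (O(n) total)."""
    capacity, size, copied = 1, 0, 0
    for _ in range(n_items):
        if size == capacity:
            copied += size
            capacity *= 2
        size += 1
    return copied

n = 100_000
print(copies_fixed_chunk(n))  # grows quadratically with n
print(copies_doubling(n))     # stays below 2 * n
```

If the reader grows its line array geometrically, reallocation is unlikely to be the dominant cost, which would match the smooth memory growth you observed.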
If the lines have to be sorted into bands, then the complexity of the
reading will increase in line with what I have noticed. And there is
likely not much to be done about it.
There are still two possible slowdowns. One is that you hit some line
count where you need to reallocate the array of lines because there are
too many. The other is that the search for placing each line in the
correct band becomes slow when there are more bands to look through.
The former would be just pure bad luck, so there's nothing to do about it.
I would suspect the latter is your problem. You need to search through
the existing bands for every new line to find where it belongs. Since
bands are often clustered closely together in frequency, this could slow
down the reading as you get more and more bands. A smaller frequency
range means fewer bands to look through.
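The suspected per-line band search can be sketched roughly like this (hypothetical code, not the actual reader; the band frequencies are made up). If the bands are kept sorted by frequency, a binary search finds the right band in O(log n) per line instead of the O(n) linear scan:

```python
# Hypothetical sketch of the suspected bottleneck: for every new line,
# search the existing bands for where it belongs. A linear scan costs
# O(n) per line; a binary search over sorted band frequencies costs
# O(log n).

import bisect

def find_band_linear(band_freqs, f):
    """O(n) scan: index of the first band at or above frequency f."""
    for i, bf in enumerate(band_freqs):
        if bf >= f:
            return i
    return len(band_freqs)

def find_band_bisect(band_freqs, f):
    """O(log n) search over the same sorted band frequencies."""
    return bisect.bisect_left(band_freqs, f)

bands = [100.0, 110.5, 118.75, 183.3]  # sorted band frequencies (made up)
for f in (99.0, 115.0, 183.3, 200.0):
    assert find_band_linear(bands, f) == find_band_bisect(bands, f)
```

With a linear scan, total reading time grows roughly with lines × bands, which would explain why a smaller frequency range (fewer bands) reads disproportionately faster.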
On Sun, Sep 19, 2021, 22:39 Patrick Eriksson
<patrick.eriks...@chalmers.se> wrote:
> It's expected to take a somewhat arbitrary time. It reads ASCII.
I have tried multiple times and the pattern is not changing.
> The start-up time is going to be large because of having to find the
> first frequency, which means you have to parse the text nonetheless.
Understood. But that overhead seems to be relatively small: in my test,
it took 4-7 s to reach the first frequency. Anyhow, this points in the
other direction. To minimise the parsing needed to reach the first
frequency, it should be better to read everything in one go, rather
than in parts (which is what I am doing).