Actually it would: each new file adds disk read time plus file open and
close overhead. Then, depending on the language, it may have to create a
new JSON parser, parse the file, and destroy the parser again. With 1-2
files it's not a big deal, but as the file count grows this overhead
becomes more and more of a bottleneck; the rough sketch below illustrates
the effect.
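To make the overhead concrete, here is a minimal Python sketch
(hypothetical message data and file counts, not pywikibot's actual i18n
layout) that parses the same messages once from a single combined JSON
file and once from many small files:

import json
import os
import tempfile
import time

N_MESSAGES = 10000
N_FILES = 1000  # hypothetical counts, chosen only for illustration

messages = {'msg-%d' % i: 'text %d' % i for i in range(N_MESSAGES)}

with tempfile.TemporaryDirectory() as tmp:
    # Write one combined file holding every message.
    combined = os.path.join(tmp, 'all.json')
    with open(combined, 'w') as f:
        json.dump(messages, f)

    # Write the same messages split across N_FILES small files.
    items = list(messages.items())
    chunk = N_MESSAGES // N_FILES
    parts = []
    for i in range(N_FILES):
        path = os.path.join(tmp, 'part-%d.json' % i)
        with open(path, 'w') as f:
            json.dump(dict(items[i * chunk:(i + 1) * chunk]), f)
        parts.append(path)

    # Time parsing the single combined file.
    start = time.perf_counter()
    with open(combined) as f:
        json.load(f)
    print('1 file:    %.4fs' % (time.perf_counter() - start))

    # Time parsing the many files: each one pays open/close plus
    # JSON parser setup and teardown.
    start = time.perf_counter()
    for path in parts:
        with open(path) as f:
            json.load(f)
    print('%d files: %.4fs' % (N_FILES, time.perf_counter() - start))

The total data parsed is identical in both runs, so any gap between the
two timings is pure per-file overhead, and it tends to grow with the
file count.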

On Wed, Apr 2, 2014 at 1:29 PM, <[email protected]> wrote:

> https://bugzilla.wikimedia.org/show_bug.cgi?id=63327
>
> --- Comment #2 from Niklas Laxström <[email protected]> ---
> It shouldn't matter too much for the time it takes to parse N messages
> whether they are in 50 or 1000 files (made-up numbers).
>
