On Wednesday, July 26, 2017 at 1:39:10 AM UTC-4, [email protected] 
wrote:
>
> On Sunday, 12 June 2016 22:36:41 UTC-7, Pavel wrote:
>>
>> Greetings,
>>
>> I am evaluating ledger for a use case where a large accounting dataset 
>> contains tens of thousands of transactions every month, over several years. 
>> Transactions are mostly currency conversions, and my initial search 
>> specifically led me to ledger due to its flexible commodity handling. A few 
>> questions related to the large size of the journal:
>>
>> (1) Would it be practical to try the program with such a large 
>> transaction dataset at all? *Has anyone tried 10^5 - 10^6 transactions?*
>>
>
> Out of curiosity, I wrote a *small script in Awk to generate 10^6 
> transactions* for testing (attached). The transactions it generates just 
> move random amounts in random currencies amongst accounts, so it's an 
> oversimplification of what you want to do. I imagine that adding 
> conversions will increase processing time linearly. 
>
> On my one-step-above-a-netbook (Celeron N2840 @ 2.1 GHz × 2), running a 
> simple balance report took about 40s, while subtotalling increased time 
> as the number of calculations increased. 
>

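Since the attachment is easy to miss, here is a rough sketch of my own of 
a generator along those lines (not the original script; the account and 
currency names are made up):

    #!/usr/bin/awk -f
    # Emit n balanced Ledger transactions that move random amounts in
    # random currencies between random accounts.
    # Usage: awk -v n=1000000 -f gen.awk > big.journal
    BEGIN {
        ncur  = split("USD EUR GBP JPY", cur, " ")
        nacct = split("Assets:Bank Assets:Cash Expenses:Misc Income:Sales", acct, " ")
        srand(42)                      # fixed seed, so runs are comparable
        if (n == 0) n = 1000000        # default when -v n=... is not given
        for (i = 1; i <= n; i++) {
            printf "%04d/%02d/%02d txn %d\n", 2014 + i % 4, 1 + i % 12, 1 + i % 28, i
            amt = sprintf("%.2f", rand() * 1000)
            c = cur[1 + int(rand() * ncur)]
            a = 1 + int(rand() * nacct)
            b = 1 + int(rand() * nacct)
            if (b == a) b = 1 + a % nacct   # make the two accounts differ
            printf "    %s    %s %s\n", acct[a], amt, c
            printf "    %s   -%s %s\n\n", acct[b], amt, c
        }
    }
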
I tried the benchmark script mentioned in this thread and had great 
results with 100k transactions, but when I increased to 500k for 
comparison, the process died after approximately 2 minutes (my system: 
Core i7-2600 @ 3.40 GHz with 8 GB RAM). I haven't investigated further to 
determine whether Ubuntu (17.04 x64) killed the process or whether Ledger 
(3.1.2-20160801) errored out (a quick way to check is below), but it seems 
you may similarly hit a ceiling unless you use the one-file-per-month 
approach to keep the per-run transaction count *relatively* small (see the 
sketch at the end of this message).
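
A quick way to tell those two failure modes apart: the kernel logs OOM 
kills, while an error from ledger itself should leave a message on stderr. 
On Ubuntu, something like

    $ dmesg | grep -i 'killed process'

will show whether the kernel stepped in, and, if GNU time is installed,

    $ /usr/bin/time -v ledger -f big.journal balance

reports the peak resident memory of a run that survives (big.journal being 
whatever file the generator wrote).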

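For the one-file-per-month approach, the simplest reading is to point each 
report at a single month's journal; the file name here is hypothetical:

    $ ledger -f 2017-06.journal balance

Ledger's include directive can still tie the months together in a master 
journal, but a report against the master file parses every included file, 
so splitting only helps if the reports themselves run per month.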