Thanks. I do have a lot of transactions that have a unique reference. I have one core (of eight) that is running at 100% processing this. It has now been running for three and a half hours, and the trace file is 557 MB and growing...
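As an aside, one workaround for the unique-reference problem is to clean the descriptions in the bank export before importing. This is only a hypothetical sketch, not anything GnuCash does itself: the regexes and the `clean_description` helper are my own illustration of stripping UUID-like and long numeric tokens so the Bayesian match tables don't fill up with once-seen strings.

```python
import re

# Matches a standard 8-4-4-4-12 hex UUID, the kind of token government
# departments and pension funds tend to append to payment descriptions.
UUID_RE = re.compile(
    r"\b[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\b"
)

# Matches long all-digit reference numbers (10+ digits); shorter numbers
# are left alone since they may be stable, meaningful identifiers.
LONG_REF_RE = re.compile(r"\b\d{10,}\b")

def clean_description(desc: str) -> str:
    """Remove UUIDs and long numeric references, then collapse whitespace."""
    desc = UUID_RE.sub("", desc)
    desc = LONG_REF_RE.sub("", desc)
    return " ".join(desc.split())

print(clean_description(
    "PENSION PAYMENT 123e4567-e89b-12d3-a456-426614174000 ref 9876543210123"
))
# -> "PENSION PAYMENT ref"
```

Run over a CSV/QIF export before import, something like this would leave only the stable part of each description for the matcher to learn from.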
David

On Sun, 1 Mar 2020 at 21:33, David Cousens <[email protected]> wrote:
> David,
>
> I have a 16.8 MB uncompressed data file which takes 3-4 seconds at most to
> load at startup in V3.8 on Linux Mint 19.3 (Ubuntu 18.04). The Import Map
> Editor comes up almost immediately on selecting it from the menu. I
> possibly have fewer accounts than you have, and roughly half the
> transactions and splits.
>
> If you have a transaction source (or several of them) which uses a unique
> number each time in the transaction description, you can get a lot of
> import-matching data being stored with a very low frequency of occurrence,
> which effectively does little to help with matching. I have 6-8
> transactions a month which regularly can't be matched because of this.
> Government departments and pension funds are becoming specialists at
> tagging UUIDs onto things. This of course has to be imported and sorted
> through during import and matching, which is probably why you want to use
> the Import Map Editor. Unfortunately, as the match data is stored in the
> data file, there is no simple way to remove it and start again with
> retraining. I haven't yet taken the time to prune the matching data, but
> the ability to do this is obviously there for when there is a problem.
>
> If you upload the whole trace file, Geert or John may be able to spot
> something in it, but if the excerpt is typical it would appear to be
> building up the tables of information that the Import Map Editor uses.
> There may be nothing for it but to allow it to run to completion. You may
> need to turn off any power-saving measures in Ubuntu which automatically
> hibernate or suspend the system, to allow this to finish.
>
> Linux has a sometimes annoying indexing process which seems to grab a lot
> of CPU time, runs with a high priority, and can stop other programs from
> operating by denial of service until it is complete (it has taken several
> hours on a few occasions).
> This is usually accompanied by a lot of disk access. There are fixes on
> the various Linux forums if this is the problem.
>
> I seem to remember there may be a bug or feature request around possibly
> parsing such numbers where there is a fixed component which would identify
> a specific account and a random component, but this is not going to be
> easy unless there is a separator of some sort. I don't know if anyone has
> looked at this yet.
>
> David Cousens
>
> -----
> David Cousens
> --
> Sent from: http://gnucash.1415818.n4.nabble.com/GnuCash-User-f1415819.html
> _______________________________________________
> gnucash-user mailing list
> [email protected]
> To update your subscription preferences or to unsubscribe:
> https://lists.gnucash.org/mailman/listinfo/gnucash-user
> If you are using Nabble or Gmane, please see
> https://wiki.gnucash.org/wiki/Mailing_Lists for more information.
> -----
> Please remember to CC this list on all your replies.
> You can do this by using Reply-To-List or Reply-All.

-- 
David Whiting
