Hi Joe, I seemed to get the best performance doing a commit + prune
every day. I tried every 7 days first (see at) but in my experience the
prune needs to happen more often.
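For readers following along, a minimal sketch of that cadence, assuming a
hypothetical +Tx entity class and one list of records per day (none of these
names are from the thread):

```picolisp
# Sketch: bulk import that commits and prunes once per day of data.
# (prune 0) initializes pruning of cached external symbols, a plain
# (prune) then releases unused ones, and (prune T) terminates pruning.
(prune 0)
(for Batch Batches               # 'Batches': one list of records per day
   (for Rec Batch
      (new (db: +Tx) '(+Tx) 'd (car Rec) 't (cdr Rec)) )
   (commit)
   (prune) )
(prune T)
```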
On Mon, Feb 10, 2014 at 6:37 PM, Joe Bogner joebog...@gmail.com wrote:
> Henrik - Thank you for posting the code. I enjoyed
Hi Joe + Henrik,
On Mon, Feb 10, 2014 at 06:37:34AM -0500, Joe Bogner wrote:
> If you end up speeding up please share. I know it's just a mock example so
> may not be worth the time. It's nice to have small reproducible examples.
Oops! I just noticed that the 'prune' semantics Henrik uses is
So by (735580 53948) you mean a +Ref +List? Is it possible to get a range
by way of collect with that setup?
I tested with two separate relations, i.e. one +Ref +Time and one +Ref +Date;
the database file ended up the same size.
On Mon, Feb 10, 2014 at 9:31 PM, Alexander Burger
On Mon, Feb 10, 2014 at 09:57:13PM +0700, Henrik Sarvell wrote:
> So by (735580 53948) you mean a +Ref +List? Is it possible to get a range
> by way of collect with that setup?
No, not a +Ref +List. As I proposed on Feb 08:
(rel d (+Aux +Ref +Date) (t)) # Date
(rel t (+Time)) #
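With that +Aux setup the index keys are (date time) pairs, so a range fetch
can pass a start and an end pair to collect. A hedged sketch (the +Tx class
name is an assumption; the relations are the ones quoted above):

```picolisp
(class +Tx +Entity)
(rel d (+Aux +Ref +Date) (t))    # Date, with Time as auxiliary key
(rel t (+Time))                  # Time

# All transactions between Feb 08 12:00 and Feb 10 18:00, 2014:
(collect 'd '+Tx
   (list (date 2014 2 8) (time 12 0 0))
   (list (date 2014 2 10) (time 18 0 0)) )
```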
Hey Alex -
On Mon, Feb 10, 2014 at 9:31 AM, Alexander Burger a...@software-lab.de wrote:
> Also, you can save quite some time if you pre-allocate memory, to avoid
> an increase with each garbage collection. I would call (gc 800) in the
> beginning, to allocate 800 MB, and (gc 0) in the end.
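As a hedged fragment, that advice might wrap a bulk run like this (the +Tx
class and the loop body are assumptions; bench just prints the elapsed time):

```picolisp
(gc 800)                         # pre-allocate 800 MB up front
(bench
   (do 1000000                   # hypothetical bulk insert
      (new (db: +Tx) '(+Tx) 'd (date) 't (time)) ) )
(commit)
(gc 0)                           # let the heap shrink again afterwards
```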
The index file is 1.3GB in the +Bag case, 2GB in the +String case, doesn't
seem like a big deal to me given that the main entity file ends up being
32GB.
> Now I haven't checked, but due to the relative size of the files the range
> query might be comparably faster but in my case a tenth of a second
Yes, a bit perhaps.
I tested; it is of no consequence (at least for my applications). Given one
transaction per second for a full year, fetching a random +Ref +String day
takes a fraction of a second on my PC equipped with an SSD. Here is the code:
Note that it's only the collect at the end that
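The code itself did not survive in this archive. A minimal sketch of such a
benchmark, with a hypothetical +Tx class holding the day as a +Ref +String
(names are illustrative, not from the thread):

```picolisp
(class +Tx +Entity)
(rel day (+Ref +String))         # creation day as an indexed string

# Pick a random day of 2014 and fetch all its transactions;
# only this final 'collect' does the actual index work:
(let D (dat$ (+ (date 2014 1 1) (rand 0 364)) "-")
   (collect 'day '+Tx D) )
```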
Hi Henrik,
On Fri, Feb 07, 2014 at 08:29:07PM +0700, Henrik Sarvell wrote:
> Given a very large amount of external objects, representing for instance
> transactions, what would be the quickest way of handling the creation stamp
> be with regards to future lookups by way of start stamp and end stamp?