Not quite the same issue, but I've set up triggers to generate a journal
record whenever a record is added to, changed in, or deleted from a table.
This mechanism (triggers) could easily be used to generate a 'version' record.
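A minimal sketch of that trigger approach, driven from Python's sqlite3 module (the `items` table and its columns are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, price REAL);
CREATE TABLE items_journal (
    ver     INTEGER PRIMARY KEY AUTOINCREMENT,
    op      TEXT,                 -- 'I', 'U', or 'D'
    item_id INTEGER,
    name    TEXT,
    price   REAL,
    stamp   TEXT DEFAULT (datetime('now'))
);
-- One trigger per operation copies the affected row into the journal.
CREATE TRIGGER items_ins AFTER INSERT ON items BEGIN
    INSERT INTO items_journal(op, item_id, name, price)
    VALUES ('I', NEW.id, NEW.name, NEW.price);
END;
CREATE TRIGGER items_upd AFTER UPDATE ON items BEGIN
    INSERT INTO items_journal(op, item_id, name, price)
    VALUES ('U', NEW.id, NEW.name, NEW.price);
END;
CREATE TRIGGER items_del AFTER DELETE ON items BEGIN
    INSERT INTO items_journal(op, item_id, name, price)
    VALUES ('D', OLD.id, OLD.name, OLD.price);
END;
""")

con.execute("INSERT INTO items(name, price) VALUES ('widget', 1.0)")
con.execute("UPDATE items SET price = 2.0 WHERE name = 'widget'")
con.execute("DELETE FROM items WHERE name = 'widget'")

# Every version of the row survives in the journal, oldest first.
history = con.execute(
    "SELECT op, name, price FROM items_journal ORDER BY ver").fetchall()
print(history)
```

Note the DELETE trigger reads OLD values, so the last journal row records the state the row had when it was removed.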
*** Doug F.
John Stanton wrote:
> We perform some versioning by holding column material in XML and using
> RCS to maintain reverse deltas and versions.
Hi Sam,
On Wed, 20 Jun 2007 15:33:23 -0400, you wrote:
>Not specific to SQLite, but we're working on an app that needs to keep
>versioned data (i.e., the current values plus all previous values). The
>versioning is integral to the app so it's more than just an audit trail or
>history.
>
>Can an
We perform some versioning by holding column material in XML and using
RCS to maintain reverse deltas and versions.
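The reverse-delta idea (store only the newest text in full, older versions as deltas back from it) can be sketched without RCS using Python's difflib; this is an illustration of the technique, not of RCS itself, and the class name is invented:

```python
import difflib

class ReverseDeltaStore:
    """Keep the newest text in full; older versions as ndiff reverse deltas."""
    def __init__(self, text: str):
        self.current = text.splitlines(keepends=True)
        self.deltas = []   # each delta rebuilds the previous head from the new one

    def commit(self, text: str):
        new = text.splitlines(keepends=True)
        # Delta from the new head back to the old head (a is new, b is old).
        self.deltas.append(list(difflib.ndiff(new, self.current)))
        self.current = new

    def version(self, back: int) -> str:
        """back=0 is the newest version, back=1 the one before, and so on."""
        lines = self.current
        for delta in reversed(self.deltas[len(self.deltas) - back:]):
            lines = list(difflib.restore(delta, 2))  # 2 = restore sequence b (older)
        return "".join(lines)

store = ReverseDeltaStore("name: widget\nprice: 1\n")
store.commit("name: widget\nprice: 2\n")
store.commit("name: widget\nprice: 3\n")
print(store.version(0))  # newest
print(store.version(2))  # oldest, rebuilt by applying two deltas
```

Like RCS, retrieving the newest version is free and cost grows only as you walk further back.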
Samuel R. Neff wrote:
Not specific to SQLite, but we're working on an app that needs to keep
versioned data (i.e., the current values plus all previous values). The
versioning is
Thank you John Stanton. This has opened new doors for me, and I think it
would be helpful for others on the list too.
Thanks and Regards
Lloyd
On Thu, 2007-04-12 at 12:34 -0500, John Stanton wrote:
> We use a very simple data retrieval method for smallish datasets. The
> data is just stored in memory or as a memory mapped file and a
> sequential search used.
Lloyd,
If you want some code examples contact me and I shall send you some.
[EMAIL PROTECTED]
Lloyd wrote:
Thank you John Stanton. This has opened new doors for me, and I think it
would be helpful for others on the list too.
Thanks and Regards
Lloyd
On Thu, 2007-04-12 at 12:34 -0500, John Stanton wrote:
At 17:35 11/04/2007, you wrote:
>Lloyd wrote:
>>
>>Sorry, I am not talking about the limitations of the system in our side,
>>but end user who uses our software. I want the tool to be run at its
>>best on a low end machine also.
>>I don't want the capabilities of a data base here. Just want to sto
We use a very simple data retrieval method for smallish datasets. The
data is just stored in memory or as a memory mapped file and a
sequential search used. It sounds crude but when you use a fast search
algorithm like Boyer-Moore it outperforms index methods up to a
surprisingly large number of records.
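A sketch of that sequential-search approach, using the simplified Horspool variant of Boyer-Moore (the skip table lets the search jump several bytes per mismatch instead of advancing one at a time):

```python
def horspool_find(haystack: bytes, needle: bytes) -> int:
    """Boyer-Moore-Horspool substring search; returns the first offset or -1."""
    m, n = len(needle), len(haystack)
    if m == 0:
        return 0
    if m > n:
        return -1
    # How far we may shift when the byte aligned with the needle's last
    # position mismatches; bytes absent from the needle allow a full shift.
    skip = {b: m - 1 - i for i, b in enumerate(needle[:-1])}
    i = 0
    while i <= n - m:
        if haystack[i:i + m] == needle:
            return i
        i += skip.get(haystack[i + m - 1], m)
    return -1

# Scan a small in-memory "dataset" sequentially (stand-in data).
data = b"name=alpha;name=beta;name=gamma;"
pos = horspool_find(data, b"beta")
print(pos)
```

For a dataset that already fits in memory this avoids building or maintaining any index at all, which is the point John is making.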
valgrind
-Original Message-
From: Lloyd [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 12, 2007 12:26 AM
To: [EMAIL PROTECTED]
Subject: Re: [sqlite] Data structure
Would anybody suggest a good tool for performance measurement (on
Linux) ?
On Wed, 2007-04-11 at 10:35 -0500, John Stanton wrote:
You might want to check out kazlib for your data structure lookups.
It contains code to implement linked list, hash, and dictionary access data
structures.
The hashing code is really quite fast for in-memory retrievals, plus it is
dynamic so that you don't have to preconfigure your hash table.
I've used callgrind to get a hierarchy of calls; it's good to graphically see
where you're spending time in the code.
Also you might want to check out oprofile. It's more of a system-based
profiler.
And if you want to spend $$$, Rational Rose (I think it's an IBM product now)
Purify i
Would anybody suggest a good tool for performance measurement (on
Linux) ?
On Wed, 2007-04-11 at 10:35 -0500, John Stanton wrote:
> You might discover that you can craft a very effective memory
> resident
> storage system using a compression system like Huffman Encoding and
> an
> index method a
Thank you all. I got so many new ideas from your replies. Now I just
have to derive the best solution for me, thanks :)
Lloyd
On Wed, 2007-04-11 at 10:35 -0500, John Stanton wrote:
> You might discover that you can craft a very effective memory
> resident
> storage system using a compression sy
Lloyd wrote:
On Wed, 2007-04-11 at 10:00 -0500, P Kishor wrote:
I think, looking from Lloyd's email address, (s)he might be limited to
what CDAC, Trivandrum might be providing its users.
Lloyd, you already know what size your data sets are. Esp. if it
doesn't change, putting the entire dataset
Subject: Re: [sqlite] Data structure
On Wed, 2007-04-11 at 10:00 -0500, P Kishor wrote:
I think, looking from Lloyd's email address, (s)he might be limited to
what CDAC, Trivandrum might be providing its users.
Lloyd, you already know what size your data sets are. Esp. if it
doesn
I used an approach similar to the Bloom Filter for data retrieval. It
could be very fast at retrieving substrings from large data sets but was
fairly complex to implement.
I would not go with that approach unless you had some very broad
retrieval requirements and a very large data set.
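For reference, the core of a Bloom filter is only a bit array and a few hash probes; this sketch uses sha256 for the probes (a real implementation would pick cheaper hashes, and the sizes here are arbitrary):

```python
import hashlib

class BloomFilter:
    """Probabilistic set: no false negatives, tunable false-positive rate."""
    def __init__(self, size_bits: int = 8192, hashes: int = 4):
        self.size = size_bits
        self.k = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions by salting the item with the probe index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: str) -> bool:
        # All k bits set -> probably present; any bit clear -> definitely absent.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
for word in ("alpha", "beta", "gamma"):
    bf.add(word)
print(bf.might_contain("beta"))
```

The 95%-hit-rate case Lloyd describes is where this shines: a negative answer is certain and skips the expensive lookup entirely.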
Lloyd wrote:
So, you have to be a lot more specific.
-dave
-Original Message-
From: Lloyd [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 11, 2007 11:12 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Data structure
>
> I was just wondering what the odds were of doing a better job than the
> filing system pros, how much time/code that would take on your part and
> how much that time would cost versus speccing a bigger/faster machine.
If it is just read-only access to data then storing the data in memory
with an index which can be either a hashing method or a binary tree
would be the fastest. An easy to handle method is to store the data and
index in a flat file and load it into memory. Loading it in virtual
memory gives
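The flat-file-plus-index idea above might look like this: map the file into memory and build a hash index from key to byte offset, so a lookup is one dict probe plus a slice of the mapping (file name and record format are invented for the example):

```python
import mmap
import os
import tempfile

# Write a small flat file of key=value lines (stand-in data).
records = {"alpha": "1", "beta": "2", "gamma": "3"}
path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    for k, v in records.items():
        f.write(f"{k}={v}\n")

# Map the file into memory and build a hash index: key -> byte offset.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

index = {}
offset = 0
for line in iter(mm.readline, b""):
    key = line.split(b"=", 1)[0]
    index[key] = offset
    offset += len(line)

# Lookup: one hash probe, then read the record straight out of the mapping.
mm.seek(index[b"beta"])
value = mm.readline().split(b"=", 1)[1].strip().decode()
print(value)
```

Because the mapping is demand-paged, this also gives the virtual-memory behaviour John mentions: untouched records never need to be read from disk.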
On Wed, 2007-04-11 at 10:00 -0500, P Kishor wrote:
> I think, looking from Lloyd's email address, (s)he might be limited to
> what CDAC, Trivandrum might be providing its users.
>
> Lloyd, you already know what size your data sets are. Esp. if it
> doesn't change, putting the entire dataset in RAM
>
> I was just wondering what the odds were of doing a better job than the
> filing system pros, how much time/code that would take on your part and
> how much that time would cost versus speccing a bigger/faster machine.
>
> Martin
I am not fully clear. I just want my program to run at most
On 4/11/07, Martin Jenkins <[EMAIL PROTECTED]> wrote:
Lloyd wrote:
> hi Puneet and Martin,
> On Wed, 2007-04-11 at 14:27 +0100, Martin Jenkins wrote:
>> File system cache and plenty of RAM?
>>
>
> It is meant to run on an end user system (eg. Pentium 4 1GB RAM). If you
> mean Swap space as file system cache, it is also limited, may be 2GB.
Lloyd wrote:
hi Puneet and Martin,
On Wed, 2007-04-11 at 14:27 +0100, Martin Jenkins wrote:
File system cache and plenty of RAM?
It is meant to run on an end user system (eg. Pentium 4 1GB RAM). If you
mean Swap space as file system cache, it is also limited, may be 2GB.
I was just wondering
hi Puneet and Martin,
On Wed, 2007-04-11 at 14:27 +0100, Martin Jenkins wrote:
> File system cache and plenty of RAM?
>
It is meant to run on an end user system (eg. Pentium 4 1GB RAM). If you
mean Swap space as file system cache, it is also limited, may be 2GB.
Puneet Kishor
> you haven't prov
I'm not sure I understand the question, but I'll take a stab at it
anyway.
If the data is to be loaded by and queried from the same program
execution, you may want to consider using a temporary table as opposed
to a regular (permanent) one that will go to disk. The time you might
save has to do w
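A sketch of the temporary-table suggestion from Python; with `temp_store = MEMORY` the TEMP table never touches the filesystem, and it is private to the connection that created it (the file name and data are invented):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(db_path)
con.execute("PRAGMA temp_store = MEMORY")   # keep TEMP tables out of the filesystem
con.execute("CREATE TEMP TABLE scratch (k TEXT PRIMARY KEY, v INTEGER)")
con.executemany("INSERT INTO scratch VALUES (?, ?)",
                [("a", 1), ("b", 2), ("c", 3)])
total = con.execute("SELECT sum(v) FROM scratch").fetchone()[0]

# The temp table is invisible to other connections and vanishes on close.
other = sqlite3.connect(db_path)
tables = other.execute(
    "SELECT name FROM sqlite_master WHERE name = 'scratch'").fetchall()
print(total, tables)
```

The saving is exactly what the paragraph describes: inserts into the TEMP table skip the durability work a regular on-disk table pays for.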
Lloyd wrote:
Hi,
I don't know whether this is an irrelevant question in SQLite list, but
I don't see a better place to ask.
Which data structure is best to store and retrieve data very fast?
There is a 95% chance that the searched data to be present in the data
structure. There will be 1000s o
On 4/11/07, Lloyd <[EMAIL PROTECTED]> wrote:
Hi,
I don't know whether this is an irrelevant question in SQLite list, but
I don't see a better place to ask.
Which data structure is best to store and retrieve data very fast?
There is a 95% chance that the searched data to be present in the data