Would anybody suggest a good tool for performance measurement (on
Linux)?
On Wed, 2007-04-11 at 10:35 -0500, John Stanton wrote:
> You might discover that you can craft a very effective memory resident
> storage system using a compression system like Huffman Encoding and an
> index method
Thank you all. I got so many new ideas from your replies. Now I just
have to derive the best solution for me, thanks :)
Lloyd
On Wed, 2007-04-11 at 10:35 -0500, John Stanton wrote:
> You might discover that you can craft a very effective memory resident
> storage system using a compression
I figured it out: I needed to pass a parameter to Makefile.PL to force it to
use the local SQLite source.
Jim Dodgen wrote:
I'm having a problem getting the Perl DBD working with 3.3.15.
I integrated the 3.3.15 source with the Perl module and all seemed OK.
All is fine with the command-line version
[EMAIL PROTECTED] wrote:
Here is an excerpt from my requirements doc:
If a transaction is aborted all database state is reset
("rolled-back") to its value at the time the transaction was opened.
Nested transactions abort or commit relative to their
parent transaction. In the case of an
--- Mike Johnston <[EMAIL PROTECTED]> wrote:
> Are there any significant performance benefits by limiting the number of
> columns in a single table select statement? Does joining (<5 tables) make
> a significant difference to that answer?
If you need the columns, you don't have much choice
Are there any significant performance benefits by limiting the number of
columns in a single table select statement? Does joining (<5 tables) make a
significant difference to that answer?
TIA
At 3:33 PM -0600 4/11/07, Dennis Cote wrote:
You have lost me here. If this transaction is considered successful
without executing the middle step (which is the same as executing it
and then undoing that execution) then that step does not belong in
this transaction.
Instead of
begin
> What is worse is that VACUUM didn't really help that much. It takes
> forever, and it doesn't really "fix" the fragmentation either.
That used to be the case, but VACUUM is vastly improved in the latest
version of SQLite.
Steve Krulewitz wrote:
Hey all,
On sqlite 3.3.12, I see the following:
create table test_in (data text, ordinal text);
insert into test_in values ('foo', '0');
insert into test_in values ('bar', '1');
insert into test_in values ('baz', '2');
Running:
select count(1) from test_in where
Unfortunately, the sqlite3 command-line shell is not part of the test suite,
despite its widespread use as an administrative tool for SQLite databases.
http://marc.info/?l=sqlite-users&m=117253099812346&w=2
But you know this already - you're the same guy as in this previous post. :-)
--- Travis
--- [EMAIL PROTECTED] wrote:
> Joe Wilson <[EMAIL PROTECTED]> wrote:
> > --- [EMAIL PROTECTED] wrote:
> > > It may be more difficult to implement this in a backwards-compatible
> > > way such that older versions of SQLite can rollback a journal created
> > > by a newer version if it encounters
Ramon Ribó wrote:
Imagine one application that can import data from a file. You want
that, in case of computer crash, either all the data of the file is
imported or none. At the same time, you want the user to manually
accept or reject every section of the file.
This example can be
Hey all,
On sqlite 3.3.12, I see the following:
create table test_in (data text, ordinal text);
insert into test_in values ('foo', '0');
insert into test_in values ('bar', '1');
insert into test_in values ('baz', '2');
Running:
select count(1) from test_in where ordinal in (select ordinal
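The query above is cut off; a minimal self-contained illustration of a COUNT with an IN (subquery) filter on this exact schema, using Python's sqlite3. The completed WHERE clause is a guess for illustration only, not the poster's original query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_in (data TEXT, ordinal TEXT)")
conn.executemany("INSERT INTO test_in VALUES (?, ?)",
                 [("foo", "0"), ("bar", "1"), ("baz", "2")])

# Hypothetical completion of the truncated query: count rows whose
# ordinal appears in a subquery over the same table.
n = conn.execute(
    "SELECT COUNT(1) FROM test_in WHERE ordinal IN "
    "(SELECT ordinal FROM test_in WHERE data <> 'foo')").fetchone()[0]
print(n)  # 2
```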
Change log for 3.3.15 says:
Many improvements to the test suite. Test coverage now exceeded 98%
What does this mean?
Does it mean that (say) the sqlite3 command line tool (especially the .dump
command) is tested at each release now?
---
I'm asking this because previously on this
Thanks, Samuel, I forgot that trick :)
cheers,
-steve
On 4/6/07, Samuel R. Neff <[EMAIL PROTECTED]> wrote:
Instead of using LIKE, use '<' to get a count of records before the one you're
targeting. Something like this would work (names are from my schema):
SELECT PermissionRef
FROM
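The query above is truncated; a minimal sketch of the same '<' counting trick via Python's sqlite3. The table contents are invented for illustration; only the column name PermissionRef comes from the thread.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE perms (PermissionRef TEXT)")
conn.executemany("INSERT INTO perms VALUES (?)",
                 [("alpha",), ("bravo",), ("charlie",), ("delta",)])

# Count the rows that sort before the target instead of scanning with LIKE;
# the count is the target's zero-based position in sorted order.
def position_of(conn, target):
    cur = conn.execute(
        "SELECT COUNT(*) FROM perms WHERE PermissionRef < ?", (target,))
    return cur.fetchone()[0]

print(position_of(conn, "charlie"))  # 2 ('alpha' and 'bravo' sort before it)
```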
On 11.04.2007 21:38 CE(S)T, Igor Tandetnik wrote:
> No. The other table affects the result of the query, so the join still
> has to be performed.
I see. I haven't thought of that effect. Thank you for the reply.
--
Yves Goergen "LonelyPixel" <[EMAIL PROTECTED]>
Visit my web laboratory at
I forgot to mention that the stream data contains a BEGIN TRANSACTION and END
TRANSACTION marker.
Ray
[EMAIL PROTECTED] wrote:
> In my case, I am a slave device that must accept a stream of commands from an
> external device. I'm not sure if I can make intelligent decisions about
>
[EMAIL PROTECTED] wrote:
I'm not sure if I can make intelligent decisions about choosing what I commit
to the database.
Things don't look too bright for you or your users then. ;-)
I couldn't resist. :-)
Dennis Cote
> Are you using indices? Do you have an integer primary key
> on the table, but the data is not inserted in key order?
> Do you delete and insert records a lot?
Yes, we do have indices on a combination of fields in the table and we
do not guarantee insert of the data in
Darren Duncan wrote:
While it is true in some cases that an application can be written to
know in advance whether certain SQL statements need to be run or not,
there are other cases where it can only easily know after having tried
it.
One type of situation that stands out the most to me is
In my case, I am a slave device that must accept a stream of commands from an
external device. I'm not sure if I can make intelligent decisions about
choosing what I commit to the database.
Ray
Darren Duncan <[EMAIL PROTECTED]> wrote:
> At 12:49 PM -0600 4/11/07, Dennis Cote wrote:
>
At 12:49 PM -0600 4/11/07, Dennis Cote wrote:
[EMAIL PROTECTED] wrote:
It appears that my requirements are to be able to do the following:
BEGIN parent;
insert into t values ('a');
BEGIN child;
insert into t values ('b');
insert into t values ('c');
ROLLBACK child;
Hi,
> I am running into some issues that seem related to the current database
> file size.
I think it has to do with the file system cache: if your
database is small, the entire database is held in your
system's file cache. Once the database exceeds a certain
size, real disk operations have to
I'm having a problem getting the Perl DBD working with 3.3.15.
I integrated the 3.3.15 source with the Perl module and all seemed OK.
All is fine with the command-line version (sqlite3), but when I run a Perl
script I get this:
--- cut here ---
[EMAIL PROTECTED] perl]# ./sqlite_version.pl
Yves Goergen <[EMAIL PROTECTED]>
wrote:
I was thinking about what happens when I do an SQL query that names
tables and joins between them that aren't used at all. For example
this:
SELECT m.Sender, m.Subject FROM Message m, MessageTag mt;
Full cartesian product, aka cross-product. For every
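Igor's point can be checked directly; a small sketch using Python's sqlite3, with table and column names taken from the query in the thread (the row data is invented). The "unused" table still multiplies the result set because there is no join condition.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Message (Sender TEXT, Subject TEXT)")
conn.execute("CREATE TABLE MessageTag (Tag TEXT)")
conn.executemany("INSERT INTO Message VALUES (?, ?)",
                 [("a@x", "hi"), ("b@x", "yo")])
conn.executemany("INSERT INTO MessageTag VALUES (?)",
                 [("spam",), ("inbox",), ("sent",)])

# No join condition: every Message row pairs with every MessageTag row.
rows = conn.execute(
    "SELECT m.Sender, m.Subject FROM Message m, MessageTag mt").fetchall()
print(len(rows))  # 2 * 3 = 6
```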
Not that I would suggest it, but you could create a separate database for
each table and then attach the databases to a single connection to join data
together. The indexes will need to be in the same database as the table
they index.
But why do you want each table in a different file?
Sam
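A sketch of the attach-per-file workaround Sam describes, via Python's sqlite3. In-memory databases stand in here for the separate database files; all table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each table lives in its own database; with files these would be
# paths like 'orders.db' and 'customers.db' instead of ':memory:'.
conn.execute("ATTACH DATABASE ':memory:' AS orders")
conn.execute("ATTACH DATABASE ':memory:' AS customers")
conn.execute("CREATE TABLE orders.o (cust_id INTEGER, total REAL)")
conn.execute("CREATE TABLE customers.c (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers.c VALUES (1, 'Ann')")
conn.execute("INSERT INTO orders.o VALUES (1, 9.5)")

# Joins work across attached databases on a single connection.
row = conn.execute(
    "SELECT c.name, o.total FROM customers.c c JOIN orders.o o "
    "ON o.cust_id = c.id").fetchone()
print(row)  # ('Ann', 9.5)
```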
If you want to store tables in different files, you may want to consider
a heavier relational db , like Postgres, MySQL, Oracle, SQLServer...
-Original Message-
From: Guilherme C. Hazan [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 11, 2007 3:10 PM
To: sqlite-users@sqlite.org
Hi,
From what I've seen, SQLite stores all tables in a single file. Is there an
easy way to change this behaviour and store the tables and indexes in
separate files?
thanks
guich
Hello,
I was thinking about what happens when I do an SQL query that names
tables and joins between them that aren't used at all. For example this:
SELECT m.Sender, m.Subject FROM Message m, MessageTag mt;
Does it open and read the table MessageTag at all? Also, what happens in
a more complex
[EMAIL PROTECTED] wrote:
It appears that my requirements are to be able to do the following:
BEGIN parent;
insert into t values ('a');
BEGIN child;
insert into t values ('b');
insert into t values ('c');
ROLLBACK child; // child aborts
insert into t values ('d');
It appears that my requirements are to be able to do the following:
BEGIN parent;
insert into t values ('a');
BEGIN child;
insert into t values ('b');
insert into t values ('c');
ROLLBACK child; // child aborts
insert into t values ('d');
COMMIT parent;
As a result
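The SQLite of this thread's era had no nested transactions, but the SAVEPOINT mechanism added in a later release maps directly onto the requirement sketched above. A hedged sketch using Python's sqlite3 (not available on 3.3.x):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE t (x TEXT)")

conn.execute("BEGIN")                       # BEGIN parent
conn.execute("INSERT INTO t VALUES ('a')")
conn.execute("SAVEPOINT child")             # BEGIN child
conn.execute("INSERT INTO t VALUES ('b')")
conn.execute("INSERT INTO t VALUES ('c')")
conn.execute("ROLLBACK TO child")           # ROLLBACK child: undoes 'b', 'c'
conn.execute("INSERT INTO t VALUES ('d')")
conn.execute("COMMIT")                      # COMMIT parent

print([r[0] for r in conn.execute("SELECT x FROM t ORDER BY x")])  # ['a', 'd']
```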
It seems to me that the biggest worry about nested transactions is backwards
compatibility. So, first, let me reiterate some points:
1). Actual changes to the database occur through only one user at a time and
modify the database directly.
2). The journal is created by this one user and is a record
Lloyd wrote:
On Wed, 2007-04-11 at 10:00 -0500, P Kishor wrote:
I think, looking from Lloyd's email address, (s)he might be limited to
what CDAC, Trivandrum might be providing its users.
Lloyd, you already know what size your data sets are. Esp. if it
doesn't change, putting the entire
Darren Duncan wrote:
I wasn't speaking in absolutes there, only in the context that if
child transactions were implemented, and there were 2 ways to do it,
then which way might have the least impact would be the way that most
closely resembles SQLite's current behaviour.
Yes, I know. I
Hi,
If you are using C++, then try hash_map. I've used this on strings with more
than 50,000 records - in memory. Very fast. Much easier to program than
BerkeleyDB.
- Original Message -
From: "Lloyd" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, April 11, 2007
I used an approach similar to the Bloom Filter for data retrieval. It
could be very fast at retrieving substrings from large data sets but was
fairly complex to implement.
I would not go with that approach unless you had some very broad
retrieval requirements and a very large data set.
There are many important variables to consider. What kind of data? How
many tables (if a relational DB approach even makes sense)? How many
records in each table? How will this data be queried? Many different,
often unpredictable ways? Or just a couple of static ways?
There's nothing faster
If it is just a read-only access to data then storing the data im memory
with an index which can be either a hashing method or a binary tree
would be the fastest. An easy to handle method is to store the data and
index in a flat file and load it into memory. Loading it in virtual
memory
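A minimal sketch of the load-and-index approach described above, in Python (the thread's C++ hash_map suggestion maps to a plain dict here). The 'key<TAB>value' record format is an assumption for illustration; the post does not specify one.

```python
# Read-only lookup: load records from a flat file into memory once,
# then index them with a hash table for O(1) average lookups.
import io

# Stands in for a real flat file opened with open(path).
flat_file = io.StringIO("alice\t42\nbob\t7\ncarol\t99\n")

index = {}
for line in flat_file:
    key, value = line.rstrip("\n").split("\t")
    index[key] = value

print(index["bob"])  # '7'
```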
On Wed, 2007-04-11 at 10:00 -0500, P Kishor wrote:
> I think, looking from Lloyd's email address, (s)he might be limited to
> what CDAC, Trivandrum might be providing its users.
>
> Lloyd, you already know what size your data sets are. Esp. if it
> doesn't change, putting the entire dataset in
>
> I was just wondering what the odds were of doing a better job than the
> filing system pros, how much time/code that would take on your part and
> how much that time would cost versus speccing a bigger/faster machine.
>
> Martin
I am not fully clear. I just want my program to run at most
On 4/11/07, Martin Jenkins <[EMAIL PROTECTED]> wrote:
Lloyd wrote:
> hi Puneet and Martin,
> On Wed, 2007-04-11 at 14:27 +0100, Martin Jenkins wrote:
>> File system cache and plenty of RAM?
>>
>
> It is meant to run on an end user system (eg. Pentium 4 1GB RAM). If you
> mean Swap space as file
Lloyd wrote:
hi Puneet and Martin,
On Wed, 2007-04-11 at 14:27 +0100, Martin Jenkins wrote:
File system cache and plenty of RAM?
It is meant to run on an end-user system (e.g. Pentium 4, 1 GB RAM). If you
mean swap space as file system cache, it is also limited, maybe 2 GB.
I was just
Jef Driesen wrote:
I want to store a tree in an sqlite database. My first choice was the
adjacency list model:
CREATE TABLE tree (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  name TEXT,
  parent_id INTEGER
);
But this method requires multiple queries to display the entire tree (or
a
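The drawback Jef mentions, that walking an adjacency list needs a fresh query per node visited, can be sketched against his schema using Python's sqlite3 (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tree (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT,
    parent_id INTEGER)""")
conn.executemany("INSERT INTO tree (name, parent_id) VALUES (?, ?)",
                 [("root", None), ("docs", 1), ("src", 1), ("main.c", 3)])

# Adjacency list: every level of the tree costs another query, which is
# the "multiple queries" drawback described in the post.
def subtree(conn, parent_id=None, depth=0, out=None):
    out = [] if out is None else out
    cur = conn.execute(
        "SELECT id, name FROM tree WHERE parent_id IS ?", (parent_id,))
    for node_id, name in cur.fetchall():
        out.append("  " * depth + name)
        subtree(conn, node_id, depth + 1, out)
    return out

print("\n".join(subtree(conn)))
```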
hi Puneet and Martin,
On Wed, 2007-04-11 at 14:27 +0100, Martin Jenkins wrote:
> File system cache and plenty of RAM?
>
It is meant to run on an end-user system (e.g. Pentium 4, 1 GB RAM). If you
mean swap space as file system cache, it is also limited, maybe 2 GB.
Puneet Kishor
> you haven't
I'm not sure I understand the question, but I'll take a stab at it
anyway.
If the data is to be loaded by and queried from the same program
execution, you may want to consider using a temporary table as opposed
to a regular (permanent) one that will go to disk. The time you might
save has to do
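The temporary-table suggestion, as a minimal sketch via Python's sqlite3 (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A TEMP table is visible only to this connection and is dropped when
# the connection closes; with a file database it keeps scratch data
# out of the main database file.
conn.execute("CREATE TEMP TABLE scratch (k TEXT, v INTEGER)")
conn.executemany("INSERT INTO scratch VALUES (?, ?)",
                 [("a", 1), ("b", 2)])
total = conn.execute("SELECT SUM(v) FROM scratch").fetchone()[0]
print(total)  # 3
```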
Indeed the query in the new code is prepared once and then reset and rebound
between queries, but maybe I should go double-check whether that was the case
with the code that performed poorly. I remember asking the developer if he did
it like that, so I presumed it was done correctly when the answer was
On 4/11/07, Dan Kennedy <[EMAIL PROTECTED]> wrote:
On Tue, 2007-04-10 at 09:26 -0700, Scott Hess wrote:
> On 4/10/07, Dan Kennedy <[EMAIL PROTECTED]> wrote:
> > I checked the code and conflict handling mechanisms (OR ERROR,
> > OR ABORT, OR REPLACE) do not apply to virtual tables.
>
> >
Lloyd wrote:
Hi,
I don't know whether this is an irrelevant question for the SQLite list, but
I don't see a better place to ask.
Which data structure is best for storing and retrieving data very fast?
There is a 95% chance that the searched data is present in the data
structure. There will be 1000s
On 4/11/07, Lloyd <[EMAIL PROTECTED]> wrote:
Hi,
I don't know whether this is an irrelevant question for the SQLite list, but
I don't see a better place to ask.
Which data structure is best for storing and retrieving data very fast?
There is a 95% chance that the searched data is present in the data
I'm speaking purely from an intellectual knowledge of "reading the docs"
(and having been an Oracle database developer since 1990), but I would do
two things in your coding of the SQL query:
1. use a bind variable instead of inlining the SQL if you aren't already, so
that you can prepare the
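The bind-variable point can be sketched with Python's sqlite3, whose `?` placeholders serve as bind variables (table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")

# Inlining values builds a new SQL string per call, defeating statement
# reuse (and inviting injection). Binding keeps one prepared statement,
# rebound with fresh values each time.
names = ["pen", "book", "lamp"]
conn.executemany("INSERT INTO items (name) VALUES (?)", [(n,) for n in names])

hits = conn.execute("SELECT COUNT(*) FROM items WHERE name = ?",
                    ("book",)).fetchone()[0]
print(hits)  # 1
```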
Hi,
I don't know whether this is an irrelevant question for the SQLite list, but
I don't see a better place to ask.
Which data structure is best for storing and retrieving data very fast?
There is a 95% chance that the searched data is present in the data
structure. There will be 1000s of nodes in the
Hi,
I am no expert, but try increasing your btree page size to the default page
size of your storage. I think SQLite defaults to a 1K page size, but I'm sure
you can bump it up to 4K and see if that helps. I work with rather large
databases (5-8 GB) and although increasing my page size from 1K to
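The suggested pragma, as a sketch via Python's sqlite3. The page size must be set before the database is initialized, i.e. before any table is created (or, for a file database, before the first write):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Set the page size first; once pages exist it can only be changed
# by a VACUUM.
conn.execute("PRAGMA page_size = 4096")
conn.execute("CREATE TABLE t (x)")
size = conn.execute("PRAGMA page_size").fetchone()[0]
print(size)  # 4096
```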
Done re-coding it back to using SQL to do the lookups, and there seem to be no
performance issues whatsoever O_o. Although I did not actually write the
previous implementation using SQL to do the lookups, I remember going over
the code and all seemed fine. No idea what went wrong there, but I am
Joe Wilson <[EMAIL PROTECTED]> wrote:
> --- [EMAIL PROTECTED] wrote:
> > It may be more difficult to implement this in a backwards-compatible
> > way such that older versions of SQLite can rollback a journal created
> > by a newer version if it encounters one.
>
> I wonder if there are many
On Tue, 2007-04-10 at 09:26 -0700, Scott Hess wrote:
> On 4/10/07, Dan Kennedy <[EMAIL PROTECTED]> wrote:
> > I checked the code and conflict handling mechanisms (OR ERROR,
> > OR ABORT, OR REPLACE) do not apply to virtual tables.
>
> > Something to think about anyhow... Do we want conflict