[sqlite] LIKE syntax with BLOBs

2006-01-31 Thread Shane Baker
I have BLOBs in my schema and the data will often start with bytes of 0
value.

I'm having a tough time coming up with the proper SQL syntax to select all
the rows whose BLOB value starts with two zero bytes (or any number of zeros).

I have tried:

SELECT * FROM mytable WHERE myblob LIKE 0%;

SELECT * FROM mytable WHERE myblob LIKE X'0%';

SELECT * FROM mytable WHERE myblob LIKE X'0'%;

and a few other combinations that use double quotes.

Is there a LIKE syntax that will work with BLOBs in this way?  Thank you
very much.

Shane
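A workaround sketch (not from the thread; Python's stdlib sqlite3 is used purely for illustration): LIKE compares its operands as TEXT, and an embedded NUL byte cuts the text short, so matching the raw bytes needs something like hex() or a substr() prefix comparison instead.

```python
import sqlite3

# LIKE coerces a BLOB operand to TEXT, so leading NUL bytes defeat it.
# Comparing a hex rendering, or a blob prefix via substr(), sidesteps
# the coercion and works on the raw bytes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (myblob BLOB)")
conn.executemany(
    "INSERT INTO mytable VALUES (?)",
    [(b"\x00\x00\xab",), (b"\x00\x01\xcd",), (b"\xff\x00\x00",)],
)

# Match blobs whose first two bytes are zero, via hex()...
rows = conn.execute(
    "SELECT myblob FROM mytable WHERE hex(myblob) LIKE '0000%'"
).fetchall()

# ...or by comparing a two-byte prefix against a blob literal.
rows2 = conn.execute(
    "SELECT myblob FROM mytable WHERE substr(myblob, 1, 2) = X'0000'"
).fetchall()
```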


Re: [sqlite] Transforming BLOB results in SELECT using sqlite?

2006-01-30 Thread Shane Baker
Thanks very much, the quote() helps a lot.

On Mon, 30 Jan 2006 [EMAIL PROTECTED] wrote:

> Shane Baker <[EMAIL PROTECTED]> wrote:
> > Are there any mechanisms that will display the [BLOB]
> > data in a human readable format?
>
> Assuming the table is:  CREATE TABLE t1(x BLOB)
> You can do this:
>
>   SELECT quote(x) FROM t1;
>
> > For that matter, can I view an INTEGER
> > column as hex in the output window?
>
> Not without modifying the shell to implement some kind
> of custom function to do so.  At least no way that I can
> think of right off hand.
> --
> D. Richard Hipp   <[EMAIL PROTECTED]>
>
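A sketch of both suggestions above, using Python's stdlib sqlite3 in place of the shell (the function name `to_hex` is made up; registering it from an application stands in for "modifying the shell"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (x BLOB, n INTEGER)")
conn.execute("INSERT INTO t1 VALUES (?, ?)", (b"\x00\x00\xfe", 48879))

# quote() renders the BLOB as an X'...' literal, readable in any output.
blob_literal = conn.execute("SELECT quote(x) FROM t1").fetchone()[0]

# A custom scalar function can render an INTEGER column as hex.  Here it
# is registered from the application; the stock shell has no such hook.
conn.create_function("to_hex", 1, lambda v: format(v, "x"))
hex_n = conn.execute("SELECT to_hex(n) FROM t1").fetchone()[0]
```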


[sqlite] Transforming BLOB results in SELECT using sqlite?

2006-01-30 Thread Shane Baker
I don't know if what I want to do is possible or not.

I have a schema that has a couple of BLOB columns where the data is
either:

1.  Really random, or
2.  May contain leading 0's

Either way, the results aren't really readable with SELECT using the
sqlite command line tool.  Are there any mechanisms that will display the
data in a human readable format?  For that matter, can I view an INTEGER
column as hex in the output window?

I could transform the data myself on the way into and out of the database,
but that would generally be wasteful since it only helps when reading the
data directly from the DB and it would make the program more complex
because I'd need to switch to either keeping track of the transformed data
myself or switching to SQLITE_TRANSIENT when I bind the BLOB for
insertion.

I hope I have asked this question clearly enough.  Thanks for any
suggestions,

Shane


RE: [sqlite] can anyone reproduce ticket# 1540 (failure to create a primary key)?

2005-11-28 Thread Shane Baker
I'm going to state right up front that I have very little experience with
sqlite and, if I can help demonstrate that this ticket is valid, I will.

That said, it comes as no surprise that I had never looked at
sqlite_master before.  So, I checked out the contents and found that, with
my very simple test database, there is only one *_autoindex_* and it is
on the table that has a BLOB for its primary key.  None of the tables with
integer primary keys had any such index.

I'm not sure why an autoindex would be expected on a table for which a
physical integer column was declared to be the primary key.  As for
detecting the column which has the primary key, that is not typical
functionality provided by a database subsystem.  Constraints exist to
maintain consistency and are typically not queryable (at least not in my
experience).  Maybe this is typically queryable and I've just never had a
need that exposed me to it.

My uninformed guess is that the autoindex that you would see when putting
a UNIQUE constraint on the second column is on the second column for more
efficient constraint checking.  Again, I don't really know how the
autoindex is used or anything so I may be full of it.
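The pattern described above can be checked directly in sqlite_master (a sketch using Python's stdlib sqlite3; table names are made up): an INTEGER PRIMARY KEY is an alias for the rowid and needs no separate index, while a primary key of any other type gets a sqlite_autoindex_* to enforce uniqueness.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY is an alias for the rowid: no separate index needed.
conn.execute("CREATE TABLE int_pk (id INTEGER PRIMARY KEY, v)")
# Any other PRIMARY KEY type needs a UNIQUE index to enforce the constraint.
conn.execute("CREATE TABLE blob_pk (id BLOB PRIMARY KEY, v)")

# Only the BLOB-keyed table shows an automatic index in sqlite_master.
indexes = conn.execute(
    "SELECT name, tbl_name FROM sqlite_master WHERE type = 'index'"
).fetchall()
```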

On Mon, 28 Nov 2005, Bogdan Ureche wrote:

> Will,
>
> Thanks for the reply. Yes, the table gets created; however, the primary key
> index (sqlite_autoindex_tablename_1), if it is created, is not visible as a
> distinct row in sqlite_master.
>
> Are you saying that the index is created even if it is not added to
> sqlite_master? If the answer is yes, then how could one find the primary key
> for a table (preferably without parsing sqlite_master.SQL)?
>
> Regards,
>
> Bogdan
>
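One way to answer the question above without parsing sqlite_master.SQL is PRAGMA table_info, whose last column marks primary-key membership (a sketch with Python's stdlib sqlite3, reusing the table from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE TestTable (TestField1 INTEGER PRIMARY KEY, TestField2 VARCHAR)"
)
# Each row is (cid, name, type, notnull, dflt_value, pk); pk > 0 marks
# membership (and position) in the primary key.
pk_cols = [
    row[1]
    for row in conn.execute("PRAGMA table_info(TestTable)")
    if row[5] > 0
]
```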


Re: [sqlite] can anyone reproduce ticket# 1540 (failure to create a primary key)?

2005-11-28 Thread Shane Baker
I have the same question.  Specifically, what indication is there that no
primary key was created?

Certainly, executing the command below verbatim (using the sqlite3
command-line tool) and following it with ".schema" indicates that the table
was created with a primary key.

This is with 3.2.7 on Windows XP.


On Mon, 28 Nov 2005, Will Leshner wrote:

>
> On Nov 28, 2005, at 6:39 PM, Bogdan Ureche wrote:
>
> > CREATE TABLE TestTable (TestField1 INTEGER PRIMARY KEY, TestField2
> > VARCHAR)
>
>
> Does the table itself get created? It is impossible to have a table
> without an INTEGER PRIMARY KEY in SQLite, so it is hard to believe
> that you can create the table without an INTEGER PRIMARY KEY. How can
> you tell the key isn't created?
>


Re: [sqlite] Performance problem with 3.2.7

2005-11-21 Thread Shane Baker
Thank you very much.  I am happy to hear that the performance I am seeing
is in line with what others have observed.  I am running this on Windows
XP.

On Tue, 22 Nov 2005, Akira Higuchi wrote:

> Hi,
>
> On Mon, 21 Nov 2005 10:56:41 -0500 (EST)
> Shane Baker <[EMAIL PROTECTED]> wrote:
>
> > I just need to figure out why my performance is about 30x slower than what
> > others are reporting when using the library in similar ways.
>
> Are you using sqlite on windows or MacOS X?
>
> In my tests, sqlite performs about 10 write transactions per second on
> Windows and Mac OS X. On Linux, sqlite performs up to 300 write
> transactions per second if the HDD write cache is enabled. However, when
> the HDD cache is disabled (/sbin/hdparm -W0 /dev/hda, etc.), it drops to
> about 10 transactions per second.
>
> To ensure ACID compliance, sqlite seems to use FlushFileBuffers() on
> Windows, the F_FULLFSYNC fcntl() on Mac OS X, and fsync() on other Unixes.
> FlushFileBuffers() and the F_FULLFSYNC fcntl() flush the HDD write-back
> cache, and they are very slow. However, many implementations of fsync()
> don't flush the HDD cache (they flush the OS cache only). That's very fast,
> but dangerous (not ACID compliant) if the HDD cache has no battery backup.
> (So I'm using sqlite on Linux with the HDD cache off.)
>
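The flush behavior described above is what PRAGMA synchronous controls. A rough sketch (Python's stdlib sqlite3; absolute timings vary wildly by OS and disk cache settings, so none are asserted here):

```python
import os
import sqlite3
import tempfile
import time

# Each durable commit must wait for the platform flush described above;
# PRAGMA synchronous controls how hard SQLite tries.  Timings here are
# illustrative only.
path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (v INTEGER)")

def insert_rows(n):
    start = time.perf_counter()
    for i in range(n):
        conn.execute("INSERT INTO t VALUES (?)", (i,))  # one txn per insert
    return time.perf_counter() - start

conn.execute("PRAGMA synchronous = FULL")   # flush on every commit
t_full = insert_rows(50)
conn.execute("PRAGMA synchronous = OFF")    # hand off to the OS and return
t_off = insert_rows(50)
```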


Re: [sqlite] Performance problem with 3.2.7

2005-11-21 Thread Shane Baker
Thank you very much for the feedback.  I understand your point, hardware
takes a deterministic amount of time.

I have been basing my assumptions on these sources:

http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations (See
"Transactions and performance")
http://blog.amber.org/2004/11/28/sqlite-insertion-performance/

There was one other, but I can't find it.

For the time being, I don't think that inserts are going to happen very
frequently in my application and I can probably roll updates into
transactions.

Thanks again.


On Mon, 21 Nov 2005, Christian Smith wrote:

> On Mon, 21 Nov 2005, Shane Baker wrote:
>
> >I'm sure I must be doing something wrong.  This is my first attempt at
> >working with SQLite.
>
>
> We'll see...
>
>
> >
> >I have a simple table, with 7 columns.  There are 6 integers and a BLOB,
> >with the primary key being on an integer.  When I try to run inserts (one
> >insert per transaction - I know this is not optimal, but it represents my
> >application's usage), I am only getting about 7 inserts per second, on
> >average.
> >
> >My first suspect was the BLOB and the fact that I was binding this
> >parameter using SQLITE_TRANSIENT (using sqlite3_bind_blob).  I removed the
> >BLOB from the schema altogether, leaving just 6 integers, and I still have
> >the same performance.
>
>
> The performance problem is the synchronous IO bottleneck of doing only a
> single insert per transaction.
>
>
> >
> >For reference, I am getting around 10,000 queries per second when I lookup
> >a row based on the primary key column.
> >
> >All performance measurements I've seen posted by others suggest between
> >200 and 300 inserts per second with one insert per transaction.
>
>
> Probably not, unless this is to a FLASH device, for example. The
> bottleneck in hard disk IO is the rotational and head-movement latencies
> to write data to the platters. Assuming no head movement, a 7200 rpm disk
> will only allow the same sector to be rewritten 7200 times a minute,
> which is 120 times a second. Add in that many different sectors need to be
> updated synchronously, and throughput drops dramatically.
>
> A quick test indicates that I can almost double the performance on
> Linux/ext3 by having "data=journal" option set in the mount flags. This is
> because head movement is reduced significantly. A test that previously
> took ~500 seconds (13785 inserts without transactions) took 280 seconds
> with "data=journal". For reference, the same data inserted with a single
> transaction took ~1.2 seconds!
>
>
> >
> >I haven't run a profiler yet but hope to do this tomorrow.  Does anyone
> >have any ideas as to what I might be doing wrong, or where I should look?
>
>
> If you can change your model to insert more than 1 row per transaction,
> you should see a significant performance increase. You'll see roughly N
> times the performance for small N.
>
> If this is not an option, look at your storage and how you can reduce
> latency. FLASH devices have low latency, being solid state, and some RAID
> controllers have battery backed buffers, and so may have lower latency.
>
> >
> >Thanks in advance.
> >
>
> Christian
>
> --
> /"\
> \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
>  X   - AGAINST MS ATTACHMENTS
> / \
>


Re: [sqlite] Performance problem with 3.2.7

2005-11-21 Thread Shane Baker
No, as I mentioned in my original message, I am not wrapping them.  I
don't want to test an unrealistic scenario for my application.  In my
application, there are multiple sources that will be inserting into the
database and pooling the information for a bulk insert won't work.

I understand that I will get better performance by inserting all of my
rows inside a transaction.  What I don't understand is why, when NOT using
a transaction, I get about 7 inserts per second compared to others who are
reporting between 200 and 300.  I am working with reasonable hardware.

I just need to figure out why my performance is about 30x slower than what
others are reporting when using the library in similar ways.



On Mon, 21 Nov 2005, Chris Schirlinger wrote:

> Are you wrapping the transactions in between Begin/End Transactions?
>
> BEGIN TRANSACTION;
> INSERT INTO table (foo) VALUES (bar);
> INSERT INTO table (foo) VALUES (par);
> INSERT INTO table (foo) VALUES (tar);
> INSERT INTO table (foo) VALUES (far);
> ..
> INSERT INTO table (foo) VALUES (car);
> INSERT INTO table (foo) VALUES (jar);
> INSERT INTO table (foo) VALUES (mar);
> COMMIT TRANSACTION;
>
> Check out this document for more info
> http://www.sqlite.org/lang_transaction.html
>
> > I'm sure I must be doing something wrong.  This is my first attempt at
> > working with SQLite.
>