Re: [sqlite] Sqlite-3.5.9: getting sqlite_autoindex error

2009-06-19 Thread Tim Bradshaw
On 19 Jun 2009, at 06:36, hiral wrote:

> I am running an application which uses the db over NFS. If I move the
> db to the local drive then it works fine.
> So I'm observing this bug frequently in the NFS environment.
> In the same test environment (in which the db gets corrupted), if I
> use sqlite-3.6.4 or above it works perfectly fine.

A couple of things to look at.
* what are the NFS mount options?  In particular do not, ever, use
soft mounts[*] (see the sketch below for how I'd check).
* what client and server are you using?  Typically Linux / anything
not Linux is a recipe for trouble, as Linux's NFS implementation is a
bit of a joke in terms of standards (actually, it has been more than a
bit of a joke at various times).
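
To be concrete about the first point, here is a rough sketch of what I
would check; the exact syntax varies by platform, and the server name
and mount point are made up:

$ nfsstat -m                # lists each NFS mount and its options
# if "soft" appears in the options, unmount and remount it hard,
# e.g. (hypothetical server and mount point):
$ mount -o hard,intr dbserver:/export/db /mnt/db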

--tim

[*] Really, not ever


Re: [sqlite] How can I create an automatically generated primary key id column.

2009-06-14 Thread Tim Bradshaw
On 14 Jun 2009, at 18:33, Simon Slavin wrote:

>> I hope sqlite3 > .import FILE
>> can do it
>
> It can't.

Well, it can if you are willing to be a bit devious.  Something like  
this works:

create table frobs (
  id integer primary key,
  frob varchar);

create temporary table frobs_import (
  frob varchar);

.import data frobs_import

insert into frobs(frob)
  select * from frobs_import;
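
(For completeness: I just feed the whole script, dot-command and all,
to the shell in one go; the database and script names here are made up:

$ sqlite3 frobs.db < import-frobs.sql

The .import fills the temporary table and the final insert copies the
rows across, letting SQLite assign the ids as it goes.)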

--tim



Re: [sqlite] Build problem of sqlite-3.6.14.2 on Solaris 10 with gcc 4.4.0

2009-06-13 Thread Tim Bradshaw
On 13 Jun 2009, at 12:29, Dr. David Kirkby wrote:

> I believe that if Sun made some public-access machines available,
> there would be a benefit to Sun, and it would hopefully avoid a lot
> of the GNUisms one sees in software. Whether the cost would outweigh
> the benefit I have no idea. There is obviously the cost of power,
> hardware and staff to run it.

I guess we probably should just agree to differ about that.  I really  
don't think people who can't be bothered to download & install a free  
copy of Solaris (in a VM, so with no hardware cost in almost all  
cases) are interested in fixing stuff on some platform other than  
their favoured one.



Re: [sqlite] Build problem of sqlite-3.6.14.2 on Solaris 10 with gcc 4.4.0

2009-06-13 Thread Tim Bradshaw
On 12 Jun 2009, at 23:46, Dr. David Kirkby wrote:

> I don't know if you work for Sun, but if you do, it would be really  
> good
> if Sun made some open-access Suns available for developers to test  
> their
> code, like HP do.

Surely this would only matter for SPARC-related issues (which I don't
think we're looking at here, are we?).  I can't imagine anyone finding
it too much hard work to install Solaris x86 in a VM (both available
free of charge, from Sun even, if you use VirtualBox).  The situation
is different for HP-UX & AIX, which don't run on (VMs on) commodity HW
and, even if they did, might not be legal to run there.

The Sun compilers are free as well (though not open source to my  
knowledge).

My experience (I'm contracted through Sun at the moment) is that
virtually the only place where you need to care about the x86/SPARC
difference is booting, which won't matter for SQLite.  I guess compiler
options might differ, and certainly getting good performance from the
CMT systems would probably require access to them.


Re: [sqlite] Advice on which to use (SQLite or SQL Server) for the following app.

2009-06-11 Thread Tim Bradshaw
On 11 Jun 2009, at 16:19, Jim Wilcoxson wrote:

> SSDs usually have poor write performance, because to do a write they
> have to use read, erase, write sequences across large blocks like 64K.
> Most of the SSD benchmarks that quote good write performance are for
> sequential write performance.  If you skip all over the disk doing
> small writes, like a database does

I think it's not the case that a database needs to skip all over the
disk. For a start, you can write to a log at the DB level, but even if
you don't, at least some modern filesystems are copy-on-write, so they
never rewrite blocks in place (well, they do write, but not in the
read-modify-write sense).  ZFS is one such filesystem.


Re: [sqlite] Advice on which to use (SQLite or SQL Server) for the following app.

2009-06-11 Thread Tim Bradshaw
On 11 Jun 2009, at 20:05, Jim Wilcoxson wrote:

> If you partition the database into multiple databases, you will have
> to place each on its own physical disk drive to increase transaction
> rates.  If your base transaction rate with one drive is T, with N
> drives it should be N*T;  4 drives gives you 4x the transaction rate,
> etc.  Each of the drives has to be completely independent - no
> filesystems crossing drives.

I think - if you are serious about the problem - you can just rely on  
a disk array to do this for you. You see something that looks like a  
disk but which is of course spread over lots of spindles and with a  
bucketload of NV cache in front of it, and which can sustain really  
high numbers of ops/second.

Of course that may be what the previous person meant by "special  
hardware".


Re: [sqlite] Getting readline to work on Solaris

2009-06-11 Thread Tim Bradshaw
On 11 Jun 2009, at 09:42, Tim Bradshaw wrote:

>
> * configuring with
>   ./configure --prefix=/home/tfb/packages/sqlite-3.6.14.2 \
>--enable-static=no \
>--enable-readline=yes \
>LIBS="-L/opt/sfw/lib -R/opt/sfw/lib -lreadline -lcurses" \
>INCLUDES="-I/opt/sfw/include"
> * gmake; gmake install

I lost patience with this and, for what it's worth, the following works:

$ ./configure --prefix=/home/tfb/packages/sqlite-3.6.14.2 \
--enable-static=no \
--enable-readline=yes \
LIBS="-L/opt/sfw/lib -R/opt/sfw/lib -lreadline -lcurses" \
CPPFLAGS="-I/opt/sfw/include"

(INCLUDES is not right; it has to be CPPFLAGS.)  Then edit the Makefile
and add -DHAVE_READLINE=1 in the appropriate place.  After that it
builds with readline, and works.
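
(A possibly simpler variant, which I have not tested, would be to skip
the Makefile edit and put the define into CPPFLAGS at configure time,
along these lines:

$ ./configure --prefix=/home/tfb/packages/sqlite-3.6.14.2 \
--enable-static=no \
--enable-readline=yes \
LIBS="-L/opt/sfw/lib -R/opt/sfw/lib -lreadline -lcurses" \
CPPFLAGS="-I/opt/sfw/include -DHAVE_READLINE=1"

Treat that as a sketch rather than something I know works.)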


[sqlite] Getting readline to work on Solaris

2009-06-11 Thread Tim Bradshaw
I've built various versions of SQLite on Solaris 10 (u7 currently,  
with the companion CD).  Things keep breaking so I have to change how  
I build it, but recently I've been building it by:
* Fetching the amalgamation
* configuring with
  ./configure --prefix=/home/tfb/packages/sqlite-3.6.14.2 \
      --enable-static=no \
      --enable-readline=yes \
      LIBS="-L/opt/sfw/lib -R/opt/sfw/lib -lreadline -lcurses" \
      INCLUDES="-I/opt/sfw/include"
* gmake; gmake install

This produces a binary which works fine and is linked with readline:

$ ldd ~/packages/sqlite-3.6.14.2/bin/sqlite3
        libsqlite3.so.0 =>    /home/tfb/packages/sqlite-3.6.14.2/lib/libsqlite3.so.0
        libreadline.so.4 =>   /opt/sfw/lib/libreadline.so.4
        libcurses.so.1 =>     /lib/libcurses.so.1
        libc.so.1 =>          /lib/libc.so.1
        libgcc_s.so.1 =>      /usr/sfw/lib/libgcc_s.so.1
        libm.so.2 =>          /lib/libm.so.2

But history etc does not work.

In significantly older versions history *did* work (but building was  
much different then).

Has anyone had any better success getting this to work than me?  I  
suspect my problem is mostly "not running on a Linux distribution  
sufficiently similar to ", but that's not really an option for me  
unfortunately.

Thanks

--tim


Re: [sqlite] 3.4.2 (or 3.5.0) on Solaris 10?

2007-09-17 Thread Tim Bradshaw

On 14 Sep 2007, at 16:20, Tim Bradshaw wrote:

Has anyone successfully got either of these to build on Solaris 10,
using the gcc that ships with it?  I've tried on 10u4 on x86 and
(after fixing the known problem with B_FALSE/B_TRUE for 3.4.2) they
both failed at some point while linking.  I just did a


./configure --prefix=/what/ever

with no special options.


To answer my own question: the problem is building static libraries.
I'm not sure you can do that at all on Solaris 10, but even if you can,
whatever configure generates can't.  So this works:


../sqlite-3.4.2/configure --prefix=/what/ever --enable-static=no

--tim




Re: [sqlite] 3.4.2 (or 3.5.0) on Solaris 10?

2007-09-17 Thread Tim Bradshaw

On 17 Sep 2007, at 05:58, Halton Huo wrote:


I did not build sqlite on Solaris 10, but I did build it successfully
on Solaris Express Community Edition.


Are you using Sun's compiler or gcc?  I guess I could install some  
version of Studio on my host if the former...


--tim




[sqlite] 3.4.2 (or 3.5.0) on Solaris 10?

2007-09-14 Thread Tim Bradshaw
Has anyone successfully got either of these to build on Solaris 10,
using the gcc that ships with it?  I've tried on 10u4 on x86 and
(after fixing the known problem with B_FALSE/B_TRUE for 3.4.2) they
both failed at some point while linking.  I just did a


./configure --prefix=/what/ever

with no special options.  The eventual target would be SPARC boxes  
but I don't think the problem looks architecture-related.


Thanks

--tim





Re: [sqlite] Curious performance issue with large number of inserts

2007-05-08 Thread Tim Bradshaw

On 5 May 2007, at 17:11, Joe Wilson wrote:


What timings do you get when you run the perl script in my last email?


Your script, inevitably, was fine.  What seems to cause the trouble is
something nasty in the pathnames.  I've modified my perl script to
sanitize them by removing any non-ASCII characters, and this makes it
all just work.  Unfortunately I can't find the offending pathname.
It would be interesting to know what went wrong - I'd imagine
seriously bad things in strings would make it die.  It's possible
that it missed a closing quote char and ended up eating a basically
infinite string (though I'd expect it to grow much faster than it did
in that case, if it grew at all).
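
(For the record, the sanitising amounts to something like the
following, though the real thing lives inside my perl script rather
than a shell pipeline, and the file names here are made up; it keeps
printable ASCII plus newlines and drops everything else:

$ tr -cd '\n\40-\176' < raw-paths > clean-paths

Whether that hides a real problem rather than fixing it is another
question, of course.)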


So I guess this was basically an SQL injection attack in the form of  
carefully chosen Unix pathnames :-)


Thanks for the help!

--tim




Re: [sqlite] Curious performance issue with large number of inserts

2007-05-05 Thread Tim Bradshaw

On 5 May 2007, at 17:11, Joe Wilson wrote:


What timings do you get when you run the perl script in my last email?


I'll try it Tuesday, I'm not back at work till then.




Re: [sqlite] Curious performance issue with large number of inserts

2007-05-05 Thread Tim Bradshaw

On 5 May 2007, at 15:59, Joe Wilson wrote:

Ignore the idea above - an insert of NULL or an incrementing integer
for the INTEGER PRIMARY KEY yields the same timings.


I might try taking it out anyway - I don't really need it other than  
to be able to say "look at row n" to someone.


I did originally have indices before doing the inserts, but I took  
them away.




I can't reproduce your problem. I can insert 16M records into your
table schema in 25 minutes on a 5 year old Windows machine. The
sqlite3 process had peak RAM usage of less than 20M.


Rats, I suspect it must be some compiler/architecture-specific thing.
I can rebuild it for x86, since I don't really care where it runs.  It
definitely isn't running out of memory, though (at least, not unless
it's throttling itself somehow; the machine has loads free when it
gets sick).



Thanks

--tim




[sqlite] Curious performance issue with large number of inserts

2007-05-04 Thread Tim Bradshaw
Apologies if I should have found an answer to this by searching or  
being less naive about SQL!


I'm using 3.3.17 built by me with gcc (3.4.3? can't check just now as  
machine is at work - it's the one Sun ship anyway) on a SPARC Solaris  
10u3 box.


I don't have any significant experience of SQL databases so I may be  
doing something trivially wrong.


I have a table defined by

create table filemap(id integer primary key,
  uid integer, gid integer, mtime integer,
  vol integer,
  path varchar(1024));

It has no indices built yet.

I'm adding quite a lot of records to it using a perl script which  
generates SQL like this:


begin;
 insert into filemap values(null, 1, 1, , 0, "/path/to/file");
 ...  more like this ...
commit;
... repeat above ...

The uid, gid and mtime fields vary obviously, but there are very many  
paths for each uid/gid pair.  The idea is that I need to be able to  
say `show me all the files owned by UID x on volume y?', and we have  
enough data that awk can't hack it.
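
(Concretely, the kind of query I mean is something like the following,
with made-up uid and volume numbers; files.db stands for whatever the
database file ends up being called.)

$ sqlite3 files.db 'select path from filemap where uid = 1234 and vol = 7;'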


Before doing this I've done a

pragma synchronous=off;

All this is just being piped into sqlite3 (I know I should use the  
proper interface, but it's a very quick & dirty hack).


I have about 16,000,000 records.  When adding them it goes really  
quickly for about the first 1,000,000 (using the big transaction  
trick made it much faster, as did the synchronous=off thing).  But  
then it slows down dramatically, perhaps by a factor of 100 or 1000  
or something.  I've actually left it running, but I'm not convinced  
it will have done all 16 million by Monday.


I have not looked at what the program is doing in the sense of system  
calls or any more detailed profiling.  It is clear that disk activity  
falls right off when it becomes slow.


Am I doing anything obviously stupid here?  I suspect I must be.

Thanks

--tim



