Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Brandon, Nicholas (UK)

> my development system is a Win XP, with of course NTFS, I 
> don't know which drive it has, I guess a standard 7200 rpm.
> 

What file extension (i.e. the letters after the dot in the filename) do
you give the database?

I faintly recall there is a Windows peculiarity with System Restore or
something similar that archives certain file extensions in the
background. That may contribute to your slowdown.

Nick




[sqlite] Techniques to delay writes to SQLite

2009-02-03 Thread Brandon, Nicholas (UK)


I would like some advice on how best to implement delays in writing to a
SQLite file (in my case using PHP) to maximise concurrent access.

The web application I'm developing mainly issues quick reads and writes
on a local SQLite file using PHP 5. This works satisfactorily. However,
in the future there may occasionally be a relatively long-running SELECT
statement performing some reporting analysis. I would like to reduce the
potential for contention, where the long read blocks a write operation
and causes it to time out. I'm not concerned about a small delay before
the database is updated.

I'm aware of one technique that creates and uses temporary tables from a
SELECT statement, but I would like something more robust, because the
complexity of the long-running SELECT statement is indeterminate: it can
be modified by an authorised user.

One idea I had was to use a shared flag across the PHP processes. A
potentially long-running SELECT statement would set this flag to true.
All write operations would check this flag and, when it is true, open a
new SQLite file and write the raw SQL strings into it to act as a queue.
Something similar to the Undo example comes to mind
(http://www.sqlite.org/cvstrac/wiki?p=UndoRedo). On completion, the
long-running SELECT statement would open the new SQLite file and "play"
the SQL strings in order back into the original SQLite file. How would I
write binary-safe INSERT/UPDATE SQL statements like that in PHP?
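
A rough sketch of the sort of queue table I have in mind (table and
column names are purely illustrative, and storing the statement and its
parameters as BLOBs is only one way to sidestep the binary-safety
question):

  CREATE TABLE write_queue(
    id         INTEGER PRIMARY KEY,  -- preserves playback order
    sql_text   TEXT NOT NULL,        -- statement with ? placeholders
    param_blob BLOB                  -- parameters, serialised by the application
  );
  -- Playback would step through the rows in id order and re-bind the
  -- parameters against the original database.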

I'm open to other techniques, particularly if they would be simpler to
implement and manage!

Nick




Re: [sqlite] Exporting database to CSV file

2009-01-07 Thread Brandon, Nicholas (UK)

> 
> Is there a way to do this entirely through php?  I would like 
> to make a query on a table and write the results to a csv 
> file so that the user can have the option of downloading it.  
> Has anyone ever done something similar to this?
> 
> Thanks
> 

I believe there is a PHP function called 'fputcsv' which may work for you.

However, I would test the multi-line output mentioned in the earlier
email from Sylvain. I recall trying to use the function and finding that
it suffers from the same problem. You may find it easier just to code it
directly yourself.




[sqlite] Significance of Sqlite version?

2008-12-02 Thread Brandon, Nicholas (UK)

I noticed recently that the SQLite version has gone from a three-part
number (e.g. 3.6.2) to a four-part number (e.g. 3.6.6.2).

Should I read any significance into this change? Are there going to be
two strands of SQLite development/release, or will the current practice
of 'the latest is the best' still hold true?

Regards
Nick





Re: [sqlite] Incorporating FTS into existing database schema

2008-08-29 Thread Brandon, Nicholas (UK)

> 
> Any advice will be greatly appreciated. If there is any FM I 
> should R kindly point me to it :)
> 

FTS information is difficult to find. Try
http://www.sqlite.org/cvstrac/wiki?p=FtsOne. I can't remember how I came
across this link, because I can never find it linked from the SQLite
website.
 
Ignore the references to FTS1, as the SQL syntax is the same for FTS3 (I
believe it is only the internals that have changed).
 
One tip is to read the document several times. There are a number of
important but subtle concepts you need to grasp to use FTS effectively.
In particular, understand how the tokeniser works. Words containing
hyphens or UTF-8 characters may not behave as you expect.
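
For example, a quick check of the default tokeniser (the table and data
are purely illustrative, assuming FTS3 is available in your build):

  CREATE VIRTUAL TABLE docs USING fts3(body);
  INSERT INTO docs(body) VALUES ('full-text search');
  -- The default tokeniser splits on the hyphen, so both terms match the row:
  SELECT * FROM docs WHERE body MATCH 'full';
  SELECT * FROM docs WHERE body MATCH 'text';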






Re: [sqlite] Help with Sqlite

2008-08-20 Thread Brandon, Nicholas (UK)

> I'm using the install of Firefox that comes with the Wubi 
> install of Linux.  I like sqlite, but have a little problem.  
> Perhaps someone can help.
> 
> When I add a new record to a database, an entry screen comes 
> up with my fields and the ability to enter the new record.  
> But the information I type into the input field seems placed 
> in the field a bit too low -- about half of each letter is 
> cut off at the bottom and I can't really read what I'm typing.
> 
> Is there a way to fix this?  Has anyone else had this experience?

I suspect you might be using the SQLite Manager add-on for Firefox. You
can check the add-ons used in Firefox by going to the menu
"Tools->Add-ons". When the window pops up, select the "Extensions" tab at
the top.

More information about SQLite Manager can be found at:
http://code.google.com/p/sqlite-manager/

Nick




Re: [sqlite] Multiple connection to in-memory database

2008-08-20 Thread Brandon, Nicholas (UK)

> 
> I would like to use transactions from separate threads, each 
> thread having one connection to a single in-memory db.
> 

If your production environment is a modern Linux distribution, you may
find the temporary directory ("/tmp") is already a memory-backed drive
using the tmpfs filesystem. If not, it is very easy to create one; search
the internet for more information.

Multiple processes or threads can then access the database by referring
to the same file path.

Nick




Re: [sqlite] FTS3 Snippet function on two column MATCHes

2008-08-18 Thread Brandon, Nicholas (UK)

> 
> I would like to generate Snippets from MATCHes in two 
> columns, however, I get the following error: "unable to use 
> function MATCH in the requested context" with the following query --
> 
> SELECT poem_id, context
> FROM poems a JOIN (
>   SELECT
>   rowid,
>   Snippet(fts_poems, '', '', '') AS context
>   FROM fts_poems
>   WHERE poem MATCH ? OR history MATCH ?
> ) b ON a.poem_id = b.rowid
> 

The query above does not use the correct MATCH syntax for FTS. See the
extract below from http://www.sqlite.org/cvstrac/wiki?p=FtsOne:

Any term in a query string may be preceded by the name of a particular
column to use for matching that term:

  sqlite> select name, ingredients from recipe where recipe match 'name:pie ingredients:onions';
  broccoli pie|broccoli cheese onions flour
  sqlite>

The following are entirely equivalent:

  sqlite> select name from recipe where ingredients match 'sugar';
  sqlite> select name from recipe where recipe match 'ingredients:sugar';

When a specific column name appears to the left of the MATCH operator,
that column is used for matching any term without an explicit column
qualifier. Thus, the following are equivalent:

  sqlite> select name from recipe where recipe match 'name:pie ingredients:onions';
  sqlite> select name from recipe where name match 'pie ingredients:onions';
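
Applied to your query, matching against the FTS table itself with
column-qualified terms should avoid the "requested context" error. An
untested sketch (the search term is illustrative, snippet() is left with
its default markers, and this assumes your FTS3 build supports the OR
operator):

  SELECT a.poem_id, b.context
  FROM poems a
  JOIN (
    SELECT rowid, Snippet(fts_poems) AS context
    FROM fts_poems
    WHERE fts_poems MATCH 'poem:love OR history:love'
  ) b ON a.poem_id = b.rowid;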





Re: [sqlite] Finding similar duplicates

2008-07-31 Thread Brandon, Nicholas (UK)

> 
> You probably want
> 
> x.first_name like substr(y.first_name, 1,2) || '%'
> 
> or
> 
> substr(x.first_name, 1, 2) = substr(y.first_name, 1, 2)
> 
> Igor Tandetnik 
> 

Igor, Peter, Thanks very much for your help.





[sqlite] Finding similar duplicates

2008-07-31 Thread Brandon, Nicholas (UK)

This should be simple but my brain's not functioning, so I would
appreciate some help from the SQL masters...

I have a table with a first_name and a last_name column. I would like to
find similar duplicates by finding rows with the same last_name and
matching first two characters of first_name. Therefore if the table has
the following rows:

ind_id  last_name  first_name
100     Smith      david
101     Smith      dave
102     Smith      irene

I would like it to pick out the top two rows. I'm using the following;
the syntax is accepted but the last WHERE expression below is wrong:

select * from current as x, current as y where x.last_name = y.last_name
and x.ind_id != y.ind_id and x.first_name like substr(y.first_name, 0,2)

In my English I was trying to write "match where the first name of x
begins with the first two characters of y's first name". Using a literal
example from above, the SQL would end with: where x.first_name like 'da%'

(Note: the ind_id comparison avoids a row matching itself.)

Thanks
Nick










Re: [sqlite] restricting access to sqlite database

2008-04-22 Thread Brandon, Nicholas (UK)


> 
> Thanks for your reply! However, apache has to be able to 
> access /a/ totally/different/path/to/db, so this means that 
> any user on the same server can access it via e.g. a PHP web 
> page, if they know that path, is that correct?
> 

Yes, but

>
> >>  In MySQL for example, this is not a problem because of 
> the different  
> >> users/privileges, but what is the common way around this in SQLite?
> >

MySQL would actually suffer from a similar problem, just in a different
way. Imagine the scenario where your forum accesses a MySQL database
using username and password strings stored in a PHP script. This script
would need to be readable by Apache for the forum to work.

If someone else knew the name of that script, they could craft a rogue
PHP page to display it and copy the username/password. They could then
use those credentials to access your MySQL database and corrupt or
delete it.

There are alternative solutions. One I know of (but have never used) is
the CGI version of PHP, which can run under different usernames. The
best place to ask would be a PHP list.

Nick




Re: [sqlite] [PHP] Compiling with latest SQLite?

2008-03-27 Thread Brandon, Nicholas (UK)


> >Hello
> >
> >I noticed that the PDO::SQLite driver that comes with PHP 5.2.5 is 
> >3.3.17, while the non-OOP version is 2.8.17.
> >
> >Does someone know how to recompile PHP with the latest SQLite source?
> 
> You don't have to, the alternative is:
> 
> extension=php_pdo_sqlite_external.dll
> plus the current sqlite3.dll
> That works for me.
> 
> php_pdo_sqlite_external.dll can be in the  php/ext directory, 
> as usual. 
> You may have to copy  sqlite3.dll  to the  apache/bin directory.
> 

Kees,

That's fine for Windows; unfortunately, the same facility is not
available in the Unix world.

I compiled a PDO module against 3.5.4 (I think) using the source code
from SQLite.org. It took a little fiddling but I eventually got it to
work. It passed the same PHP tests as the bundled PDO version (3.3.17),
so I presume it works okay. It was compiled on Ubuntu 7.04 x86. If anyone
wants this shared library, let me know and I can send a copy.

Nick




[sqlite] Shared Cache for Processes

2008-01-24 Thread Brandon, Nicholas (UK)

Hi all,

Could the 'Shared Cache' option in SQLite theoretically improve the
performance of the db if used by multiple processes? The application in
particular is Apache using pre-fork processes accessing the same db.
The info at http://www.sqlite.org/sharedcache.html seems to indicate it
could benefit threads only.

I believe it would not, but would like confirmation from someone else.

Thanks
Nick





[sqlite] Advice on adding page cache statistics to Sqlite

2008-01-24 Thread Brandon, Nicholas (UK)

Hi,

I'm looking into adding cache statistics (at the pager level) to SQLite
to try to better understand how often SQLite is 'hitting' the disk for a
particular application. Two ways I've considered doing this are:

1. Add a static array to SQLite and populate it from the function
   'pagerAcquire'.
   Pros: simple.
   Cons: not particularly accessible, resides in memory, harder to
   adjust dynamically.

2. Use some sort of special table (similar to sqlite_master) and
   populate it using SQL.
   Pros: data is accessible from the application.
   Cons: not sure where to start.

At this stage I'm only looking to record primitive information. If, for
example, we went down route 2, the table definition below could be
suitable.

 "CREATE TABLE sqlite_cache_stats(\n"
 "  total_page_reads_from_cache integer,\n"
 "  total_page_reads_from_disk integer,\n"
 "  session_page_reads_from_cache integer,\n"
 "  session_page_reads_from_disk integer\n"
 ")"

The 'total_*' fields would be kept for the life of the database file and
the 'session_*' fields for the life of the connection, resetting on
opening the database.

I would appreciate some suggestions on how to progress this. Perhaps
something similar already exists, so any pointers would be beneficial.

Thanks
Nick







RE: [sqlite] Re: Disk caching impacts performance.

2007-11-09 Thread Brandon, Nicholas (UK)


  
> 
> I just tried (hadn't noticed that option before) to go from 
> 2000 to 4000 and 8000, without noticing any difference. I 
> might try next week to raise the page size to 50k and see if 
> it makes a difference?
> 

On the presumption that SQLite allocates new pages on page boundaries, I
would suggest you use multiples of the file system page size. I believe
(but I could be wrong) that both Linux filesystems and Windows NTFS
default to 4096 bytes. If you chose 4,500 bytes in this scenario, every
call to retrieve one SQLite page would require two calls to the OS. For
what it's worth, when I did some performance testing a few years ago I
recall there was no significant difference using larger page sizes on a
standard desktop machine, but your mileage may vary.

On a related note, I do remember that when I was testing large reads (1M,
10M, 100M) using a single 'fread' call, compiled with MSVC 2005 on
Windows XP SP2, there was an interesting side effect: it performed the
operation by calling a lower-level API multiple times with a size of
65,355 bytes, regardless of the original size requested in 'fread'. I
believe this can be shown using the file system tools from SysInternals.
Therefore I suspect there will be little to no benefit from page sizes
greater than 64 KiB on Windows XP.
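
For reference, the page size can be set and checked from SQL; note that
it only takes effect while the database is still empty (newer versions
can also apply it via VACUUM), so treat this as a sketch:

  PRAGMA page_size = 4096;   -- must be a power of two
  PRAGMA page_size;          -- confirm the value actually in use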

Nick
 





RE: [sqlite] Suggests for improving the SQLite website

2007-11-09 Thread Brandon, Nicholas (UK)


> 
> Please continue to provide feedback.
> 

Assuming the build process is fairly automated and not too onerous to
implement, I would like to see nightly/weekly source and precompiled
binaries of SQLite. I would imagine that, like me, many of us are behind
company firewalls with no facility for using CVS externally. The thought
of downloading every file using
http://www.sqlite.org/cvstrac/dir?d=sqlite is not particularly pleasing
;)

Admittedly not a presentation comment but rather an improvement to what
the website offers.






[sqlite] CURRENT_TIMESTAMP value in single transaction

2007-08-28 Thread Brandon, Nicholas \(UK\)


When enclosed in a single transaction, would inserting many rows into a
table using the special default keyword 'CURRENT_TIMESTAMP' result in
all of the rows being guaranteed the same timestamp value?
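
For illustration, this is the pattern I have in mind (the schema is
purely illustrative):

  CREATE TABLE log(
    id      INTEGER PRIMARY KEY,
    created TEXT DEFAULT CURRENT_TIMESTAMP
  );
  BEGIN;
  INSERT INTO log(id) VALUES (NULL);
  INSERT INTO log(id) VALUES (NULL);
  COMMIT;
  -- 1 if the two rows share a timestamp, 2 otherwise:
  SELECT count(DISTINCT created) FROM log;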

If not, is there a recommended way to assign a single common value to a
collection of inserts in one transaction generated from a trigger?

Thanks in advance
Nick





RE: [sqlite] How can I get my query to run as fast as SQLiteSpy?

2007-05-25 Thread Brandon, Nicholas \(UK\)


> I have a simple table with five columns and 450,000 rows.  In 
> SQLiteSpy, I can run "SELECT * FROM trend_data" and get all 
> 450,000 rows in 4.5 seconds.  But in my program, if I use 
> sqlite3_prepare() and
> sqlite3_step() until I run out of data, it takes 55 seconds 
> to get through all rows.  A test with hard-coded junk data 
> showed that my program is account for only 2 seconds of that. 
>  If I use sqlite3_get_table(), I can cut my time in half, 
> which is nice, but I'm still taking 25 seconds to get the 
> same data SQLiteSpy is getting in 4.
> How is SQLiteSpy doing it, and can I use the same trick?

I suspect that SQLiteSpy is not extracting all the rows, since you can't
see 450,000 rows on the screen at any one time. It probably uses some
form of double-buffering method which extracts data as you scroll through
the rows, to give the illusion that it has extracted them all.

You can limit the number of rows, and the position you start from, using
the LIMIT and OFFSET keywords within your SQL statement. See
http://www.sqlite.org/lang_select.html
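
For example, fetching the data a page at a time (using your table name;
the page size of 1000 rows is arbitrary):

  SELECT * FROM trend_data LIMIT 1000 OFFSET 0;     -- rows 1-1000
  SELECT * FROM trend_data LIMIT 1000 OFFSET 1000;  -- rows 1001-2000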

Rgds





RE: [sqlite] Performance analysis of SQLite statements

2007-04-12 Thread Brandon, Nicholas \(UK\)

 

> > You used to be able to compile with -DVDBE_PROFILE=1 to enable some 
> > special assembly-language instructions that would use 
> hi-res timers on 
> > ix586 chips to provide the cycle counts needed to execute each 
> > instruction in a VDBE program.  But I haven't used that feature in 
> > years so I don't know if it still works or not.
> 
> If you are talking of the "rdtsc" instruction, then to work 
> with current dual core CPUs the test must be "pinned" to a 
> single CPU, or you could be reading TSC values from different CPU's.
> 
> Other than that, as long as the code works on a "single-core" 
> CPU, it should work on newer ones.
> 
> Off course, there is that word: "should" ;-)
> 

I would add that if you are using Windows, I recommend
QueryPerformanceCounter() instead of "rdtsc", as this call already
handles the above-mentioned issue affecting multiprocessor/multicore
systems. In a previous job I measured it, since I presumed it would be
slow, but I remember being pleasantly surprised that it was as fast
(circa 4 clock cycles to retrieve and store in an integer) as using
assembler code. This was using the VS C++ 2003 compiler.

I'm not aware of a similar call in Unix.

Regards
Nick






RE: [sqlite] what's the fastest way to get the record count of a table?

2007-03-28 Thread Brandon, Nicholas \(UK\)


> > Hi,
> > I want to check the record count of a table every 5 seconds.
> > It seems there's only one way to get a table's record count:
> > select count(*) from ATable
> > 

> 
> Create a separate table that has a single row and single 
> column for storing the record count:
> 
> CREATE TABLE reccount(cnt INTEGER);
> 
> Then create triggers that fire on every insert or delete and 
> update the record count table.
> 
> CREATE TRIGGER rc1 AFTER INSERT ON tableA BEGIN
>   UPDATE reccount SET cnt=cnt+1;
> END;
> CREATE TRIGGER rc2 AFTER DELETE ON tableA BEGIN
>   UPDATE reccount SET cnt=cnt-1;
> END;
> 
> Then to get the record count do:
> 
>SELECT cnt FROM reccount;
> 
> --
> D. Richard Hipp  <[EMAIL PROTECTED]>
> 

I've noticed this question has been raised a few times in the past, and
the workaround proposed is to create a "count" table to eliminate the
need for a table scan. I was wondering whether it is worth adding this
feature to SQLite, so that a call to "select count(*)..." (one not
restricted by a WHERE clause) is answered from an internal "count" table,
i.e. making this performance workaround part of the core SQLite code.

Just an idea.

Regards
Nick





RE: [sqlite] Backing up a SQlite database

2007-02-12 Thread Brandon, Nicholas \(UK\)


> Derrell,
>
> Just to clarify, you don't need to use an exclusive transaction. That
> will acquire a write lock and unnecessarily block all other readers as
> well. You only need to hold a read lock to prevent any other process
> from acquiring a write lock.
>
> Dennis Cote

I asked a similar question last year...
http://article.gmane.org/gmane.comp.db.sqlite.general/17946







RE: [sqlite] Retrieving id after insert

2006-11-23 Thread Brandon, Nicholas \(UK\)

> How would I go about retrieving the id that was assigned to it? I
> can't just execute another query such as "select lab_id from lab_table
> where lab_name = x", because if there are two or more similar lab
> names, I have a problem. Thanks.

http://www.sqlite.org/capi3ref.html#sqlite3_last_insert_rowid

Regards
Nick










RE: [sqlite] From Windows file format to MacOSX

2006-08-04 Thread Brandon, Nicholas (UK)



> I did the test with two versions of SQLite on Windows: with the 3.3 it
> doesn't work (which seems more or less normal according to the
> documentation).
> However, with version 2.8, the Mac can't read the file. The error is:
>
> Error: file is encrypted or is not a database

How did you transfer the file from Windows to Mac? (FTP, AppleTalk?)









RE: [sqlite] Performance of two queries, why such big difference ?

2006-07-04 Thread Brandon, Nicholas (UK)



> Yes, but it completes the first query in 1ms somehow, so where is the
> difference?

From your original email...

> WHERE channel = ? AND time >= ?
> ORDER BY time ASC

> WHERE channel IN (-2,?) AND time >= ?
> ORDER BY time ASC

...the WHERE clause is different; that is why you are getting different
processing times. I think you are actually trying to find out why one
query is 200x faster than the other.

There could be a few reasons why they differ. One possibility is that you
have a large number of records where the channel value is set to "-2".
Dan has also suggested trying a different approach that gets the same
answer as the second query.

Without more information, there is a limit to how much help members of
this list can give. You may wish to post the results of the following
queries, which may help identify your problem.

SELECT count(*)
FROM files
WHERE channel = ? AND time >= ?
ORDER BY time ASC
LIMIT ?

SELECT count(*)
FROM files
WHERE channel IN (-2,?) AND time >= ?
ORDER BY time ASC
LIMIT ?

Also tell us the values used for the "?"

As I mentioned earlier, try using the EXPLAIN keyword, which describes
how the internals of SQLite are working. If you have difficulty
understanding the output of EXPLAIN, post the results to this mailing
list and I'm sure someone (with greater knowledge of this command than
me) will respond.

Nick



















RE: [sqlite] Performance of two queries, why such big difference ?

2006-07-03 Thread Brandon, Nicholas (UK)


> It takes <1ms to return 16 rows using the first query, but over 200ms
> when using the second one. What is wrong? Is there a way to speed up
> the second query?

If you are using a newer version of SQLite, I suggest you run the two
queries again, prefixing each SQL statement with EXPLAIN QUERY PLAN.

Things to read:
http://www.sqlite.org/cvstrac/wiki?p=QueryPlans
http://www.sqlite.org/lang_explain.html
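
For example, against the first query, substituting representative values
for the parameters (the table and column names are assumed from the
queries you posted):

  EXPLAIN QUERY PLAN
  SELECT * FROM files WHERE channel = 5 AND time >= 1000 ORDER BY time ASC;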

The time a query takes is related to how many records SQLite will return
for your WHERE clause (in your case you have to ignore LIMIT, because it
has to pull *all* the matching records first in order to do the ORDER
BY).

Personally, searching and sorting ~1 million records in a fifth of a
second sounds quite quick to me.

Nick





RE: [sqlite] Sqlite crashes when i imort huge list

2006-06-26 Thread Brandon, Nicholas (UK)


>The use of sequential numbers as the trailing part of the 'word'
>results in a continual rebalancing of the b-tree with each insertion.

Is that right, considering it looks like you have not created an index on
the word column before inserting the data?

Nick






RE: [sqlite] disabling large file support

2006-06-09 Thread Brandon, Nicholas (UK)


> gcc -g -O2 -DOS_BEOS=1 -DSQLITE_DISABLE_LFS -DHAVE_USLEEP=1 -I. -I./src
>   -DNDEBUG -DTHREADSAFE=1 -DSQLITE_THREAD_OVERRIDE_LOCK=-1
>   -DSQLITE_OMIT_CURSOR -DHAVE_READLINE=0 -o sqlite3 ./src/shell.c
>   ./.libs/libsqlite3.a -lroot -lbe -lreadline
>
> $ sqlite3 test.db
> SQLite version 3.3.5
> Enter ".help" for ...

Have you tried '-DSQLITE_DISABLE_LFS=1'?










RE: [sqlite] OffTopic: Locking on Windows XP

2006-06-01 Thread Brandon, Nicholas (UK)


>// Try and lock the file for writing
>ULARGE_INTEGER bytesToLock.QuadPart = 2147483648; // 2^31 if
(LockFile(h, 0, 0, bytesToLock.LowPart,
>bytesToLock.HighPart) == 0) { ...
>}

Sorry, finger trouble :)
Should read:

// Try and lock the file for writing
ULARGE_INTEGER bytesToLock;
bytesToLock.QuadPart = 2147483648; // 2^31
if (LockFile(h, 0, 0, bytesToLock.LowPart, bytesToLock.HighPart) == 0)
{
    ...
}







[sqlite] OffTopic: Locking on Windows XP

2006-06-01 Thread Brandon, Nicholas (UK)

Hi,

This is more of a heads up than anything else. I'm developing an app
that writes to files using the Windows API. I actually used some code
from the SQLite file library, hence this message.

The code snippet is:

// Create a new file for writing (CREATE_NEW fails if it already exists)
h = CreateFileA(path,
                GENERIC_WRITE,
                0,          // no sharing
                NULL,
                CREATE_NEW,
                (FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN),
                NULL);

if (h == INVALID_HANDLE_VALUE)
{
    ...
}

// Try and lock the file for writing
ULARGE_INTEGER bytesToLock;
bytesToLock.QuadPart = 2147483648; // 2^31
if (LockFile(h, 0, 0, bytesToLock.LowPart, bytesToLock.HighPart) == 0)
{
    ...
}

The app keeps the file locked until completion. The app takes a few
minutes to run, so I thought I would test the locking. I opened Explorer
and then opened the file in Wordpad, added a few garbage words and hit
'Save'. To my surprise it saved; I confirmed this by closing and
reopening the file in Wordpad and seeing the change.

I checked my code and also tried commenting out the LockFile call (just
using the CreateFile call with the share-mode parameter set to 0). Still
the same result.

I then changed the directory the app was writing to, to the local hard
drive. This time I got the usual "This file is opened by another
process..." popup when trying to open the file in Wordpad. So the
difference was the output directory.

It so happened that the original output directory was on a network drive
that had been "Made available offline" (not sure of the proper term) and
was offline at the time the app was running.

I then changed the output directory of the app to another online network
drive. Again, this time I got the usual "This file is opened by another
process..." popup when trying to open in Wordpad.

So unless someone can point out a code or human error, I believe locking
capabilities are nonexistent for network drives that have file
synchronisation enabled and are in offline mode.

Obviously DRH needs to verify this, but I'd imagine it could be a problem
for a multi-thread/multi-process application accessing a SQLite db on the
particular (though unlikely) network setup described above.

Regards
Nick





RE: [sqlite] Purging the mailing list roles. Was: Please RestoreYour Account Access

2006-05-30 Thread Brandon, Nicholas (UK)


>+1 for moderated posting status after  days, -10 for all of the
>other suggestions that will cause me to jump through hoops to stay
subscribed.

I would imagine 95% of the content of this list comes from 20 or so
posters. I'm not one of them.

For similar reasons to those for staying subscribed, I don't want to jump
through hoops to post occasionally. If I feel I have something to
contribute, I just want to reply and have that be the end of it.

So +1 for moderated posting status, *only* if it's done without
additional effort from the poster (human moderator?).









RE: [sqlite] Purging the mailing list roles. Was: Please Restore Your Account Access

2006-05-30 Thread Brandon, Nicholas (UK)

>
> I wonder if I need to implement some kind of mechanism that requires
> you to either send a message to the mailing list or else renew your
> subscription every 3 months.  Does anybody have any experience with
> other mailing lists that require such measures?
>

As with most lists, these are either email replies or a link to a
website.

Obviously it depends on your desired goal. If it is keeping automated
scripts away, then I would suggest a link to a website to confirm signup;
on that website, ask the user to enter a random set of digits displayed
on the page. (That will keep away the scripts that open links in
confirmation emails.)

If the goal is a general cleansing, then your suggestion to regularly
"opt in" sounds fine.

Nick









[sqlite] Serialising memory DB

2006-05-10 Thread Brandon, Nicholas (UK)

Is there a way to serialise a memory DB so that it can be sent over a
network socket and re-opened on another computer as a memory DB?

I was thinking of using a SQLite table as a configuration system (like a
.ini file) that can be sent over the network. Obviously you could use a
file-based DB instead and pass that around, but I would like to be able
to use a memory DB for computers that either don't have hard drives or
whose OS doesn't handle POSIX file operations (fopen et al.).

Regards
Nick






[sqlite] Hot Backups

2006-03-18 Thread Brandon, Nicholas (UK)


For simple databases (say tens of MB), I assume the safest way of backing up 
the database is to copy the file. For a modern desktop/server, this should take 
less than 20 seconds or so.

While the database won't be under heavy utilisation, I would like to ensure 
that the database file is not corrupted during the copy, so I plan to do the 
following:

1. Connect to database and issue a BEGIN IMMEDIATE
2. Copy the database file to a new location
3. Issue a rollback and disconnect from the database.

Is this sufficient to ensure a smooth database backup?
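
In SQL terms, a rough sketch of what I mean (the file copy itself happens
outside SQLite):

  -- 1. Take a write-reserving lock so no other writer can start;
  --    readers carry on unaffected.
  BEGIN IMMEDIATE;
  -- 2. Copy the database file with an OS copy command here.
  -- 3. Release the lock without modifying the database.
  ROLLBACK;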

Thanks
Nick







RE: [sqlite] large table performance

2006-03-03 Thread Brandon, Nicholas (UK)



>Given the schema below, feeding a million INSERTs into the database by
>sqlite3_exec() takes about 30 minutes (this includes transactions, indices
>and "PRAGMA synchronous=off" as well as a fully optimized build of sqlite).

>Is there any chance to speed this up? Production datasets could easily bring a
>billion genotypes ...

I assumed from your description that you populate many rows in one shot. If 
that is the case, I recommend that you just create the tables without indices 
and populate the DB with the data, then create the indices afterwards to 
improve read performance.
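
A rough sketch of the idea (the schema is purely illustrative, since I
haven't seen yours):

  CREATE TABLE genotype(sample_id INTEGER, marker_id INTEGER, call TEXT);
  BEGIN;
  -- bulk INSERTs go here, with no indices defined yet
  COMMIT;
  CREATE INDEX genotype_marker_idx ON genotype(marker_id);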







RE: [sqlite] Problems with character '

2006-02-28 Thread Brandon, Nicholas (UK)

Hi Niels,

You've just found the first problem with using TEXT columns. What you need to 
do is put another ' in front of it, i.e. ''. Depending on the language you are 
using, you may find it includes a function that "escapes" special characters 
like '; for example, PHP supplies "sqlite_escape_string".

The alternative is to store the data as binary. To do this, I believe you need 
to use the prepare/bind/exec calls, so the value is passed as a bound parameter 
rather than embedded in the SQL text.
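
For example, doubling the quote inside a string literal (the table is
illustrative):

  INSERT INTO names(first_name) VALUES ('O''Brien');
  -- stores the value O'Brien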

Hope that's of some help.

Regards
Nick

-Original Message-
From: Niels Boldt [mailto:[EMAIL PROTECTED]
Sent: 28 February 2006 10:26
To: sqlite-users@sqlite.org
Subject: [sqlite] Problems with character '




Hi Guys

I'm having some problems dealing with the character '. Sqlite thinks,
fair enough, that it should terminate the string, which I would like it
to avoid. Are there any solutions to my problem?

Thanks, Niels







RE: [sqlite] New benchmark comparisons

2006-02-07 Thread Brandon, Nicholas (UK)



>It did pretty well against Sql Server Mobile as well.  I just posted those
>benchmarks here:
>http://sqlite.phxsoftware.com/forums/623/ShowPost.aspx

I just had a quick look and noticed a peculiarity. SQLite seems to do pretty 
well across the board except in the graph for "INSERT 1 Row Test (implicit 
transaction)", where it does well except when using the SD card. Then I 
noticed your labelling is not consistent across all the pictures. Is that a 
mistake or is it what you had expected?

For me it would be easier to compare if you had them in the same label order 
across the pictures (e.g. SQLite (M), SQLMobile (M), SQLite (C), ...).

Regards
Nick


