Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Simon Slavin

On 13 Jan 2013, at 1:38am, Ted Heng  wrote:

> Thanks for all the help.  I was able to get my SQLite database imported from 
> the dump file after correcting several issues with the exported SQL 
> statements, like VARCHAR with no length and a few other things.  The fact that 
> I have to use REAL for DATETIME fields will be an issue, but I'll have to try 
> to work around it.

Glad it works, Ted.  Once you have your data in SQL Server I bet you can figure 
out a SQL 'UPDATE' command which will convert the datetime fields into the 
format you want.
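The REAL values in Ted's dump (e.g. 378513273.224674) look like Core Data timestamps, i.e. seconds since 2001-01-01 00:00:00 UTC, given the Z_-prefixed schema; that epoch is an assumption worth verifying against a known record. A quick sanity check in Python:

```python
from datetime import datetime, timedelta, timezone

# Assumed epoch: the Core Data / NSDate reference date (2001-01-01 00:00:00 UTC).
CORE_DATA_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def core_data_to_datetime(seconds: float) -> datetime:
    """Interpret a REAL column value as seconds past the Core Data epoch."""
    return CORE_DATA_EPOCH + timedelta(seconds=seconds)

# One of the ZDATECREATED values from the dump lands in late December 2012,
# which is plausible for a database exported in January 2013.
print(core_data_to_datetime(378513273.224674).isoformat())
```

If that epoch checks out, the same arithmetic in SQL Server would be an UPDATE built on DATEADD from '2001-01-01'; verify on a few rows before converting the whole table.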

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Ted Heng
Hi Simon,

Thanks for all the help.  I was able to get my SQLite database imported from 
the dump file after correcting several issues with the exported SQL statements 
like VARCHAR with no length and a few other things.  The fact that I have to use 
REAL for DATETIME fields will be an issue, but I'll have to try to work around 
it.

Ted

On Jan 12, 2013, at 1:38 PM, Simon Slavin  wrote:


On 12 Jan 2013, at 9:22pm, Ted Heng  wrote:

> Here's another problem.  I presume the problem is that the dumped value is a 
> fractional number instead of a date in a format like 'MMDD ...', etc.  Can I 
> change the date format of the dump?

SQLite doesn't have a date format.  You can choose to store a date in various 
formats including text, real numbers and integers, with various interpretations 
for each.  Fortunately, DATETIME2 is a SQL Server format, and you shouldn't have 
much trouble interpreting these.  I would import them into REAL fields (change 
the table definition line in your SQL text file) and later use SQL Server code 
to UPDATE using a 'cast()'.

No idea what a SQL Server-only type is doing in a SQLite database.

Simon.


[sqlite] Warning when querying FTS3 table

2013-01-12 Thread Mike King
Hi All,

I'm using the latest System.Data.SQlite downloaded from the website.
When I query a FTS3 table (called FreeText) using the code below I get
the following warning in the VS output window.

SQLite error (1): no such table: main.FreeText_stat

From reading the docs I believe FreeText_stat is created if I'm using
FTS4 which I'm not as it is not enabled in the .Net Wrapper.

Am I safe to ignore this warning? The code seems to function fine.

using (SQLiteConnection conn = DBUtilities.OpenConnection(
    Path.Combine(dataFolder, DocFetchConst.DocDataName)))
{
    using (SQLiteCommand command = conn.CreateCommand())
    {
        command.CommandText =
            "SELECT f.Content FROM FreeText f, FreeTextData fd " +
            "WHERE f.rowid = fd.FreeTextId " +
            "AND fd.DocumentVersionId = @DocumentVersionId " +
            "AND fd.Page = @Page";
        command.Parameters.Add(
            new SQLiteParameter("@DocumentVersionId", docVerData.DocumentVersionId));
        command.Parameters.Add(new SQLiteParameter("@Page", page));
        return DBUtilities.GetValue(command.ExecuteScalar(), "");
    }
}

Thanks,


Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Simon Slavin

On 12 Jan 2013, at 9:22pm, Ted Heng  wrote:

> Here's another problem.  I presume the problem is that the dumped value is a 
> fractional number instead of a date in a format like 'MMDD ...', etc.  Can I 
> change the date format of the dump?

SQLite doesn't have a date format.  You can choose to store a date in various 
formats including text, real numbers and integers, with various interpretations 
for each.  Fortunately, DATETIME2 is a SQL Server format, and you shouldn't have 
much trouble interpreting these.  I would import them into REAL fields (change 
the table definition line in your SQL text file) and later use SQL Server code 
to UPDATE using a 'cast()'.

No idea what a SQL Server-only type is doing in a SQLite database.

Simon.


Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Ted Heng
It looks like all the DATE fields are exported as TIMESTAMP by SQLite.  Can we 
change this so they are exported in a regular date format?

SQL Server also allows only one TIMESTAMP column per table.


On Jan 12, 2013, at 1:22 PM, Ted Heng  wrote:

Solved the BLOB issue -- replaced X' with 0x and removed the enclosing 
apostrophe (').
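That search-and-replace can be scripted. A minimal sketch, assuming the dump uses SQLite's standard X'hexdigits' literal syntax and that no text value happens to contain a matching pattern:

```python
import re

# SQLite blob literal X'DEADBEEF' -> SQL Server binary literal 0xDEADBEEF.
BLOB_LITERAL = re.compile(r"[xX]'([0-9A-Fa-f]*)'")

def sqlite_blobs_to_mssql(sql: str) -> str:
    """Rewrite SQLite X'...' blob literals as SQL Server 0x... literals."""
    return BLOB_LITERAL.sub(lambda m: "0x" + m.group(1), sql)

line = "INSERT INTO \"Z_METADATA\" VALUES (1, 'abc', X'62706C69');"
print(sqlite_blobs_to_mssql(line))
# INSERT INTO "Z_METADATA" VALUES (1, 'abc', 0x62706C69);
```

The regex is deliberately naive; for a dump containing arbitrary text columns it would be safer to process only lines known to hold blob literals.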

Here's another problem.  I presume the problem is that the dumped value is a 
fractional number instead of a date in a format like 'MMDD ...', etc.  Can I 
change the date format of the dump?

Operand type clash: numeric is incompatible with datetime2

CREATE TABLE ZADDRESS ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, 
ZADDRESSID INTEGER, ZADDRESSTYPE INTEGER, ZCITY INTEGER, ZORGANIZATION INTEGER, 
ZPERSON INTEGER, Z23_PERSON INTEGER, ZDATECREATED DATETIME2, ZDATEUPDATED 
DATETIME2, ZLATITUDE REAL, ZLONGITUDE REAL, ZSTREET1 VARCHAR, ZSTREET2 VARCHAR, 
ZZIP VARCHAR );
INSERT INTO "ZADDRESS" 
VALUES(1,1,3,1,NULL,NULL,NULL,NULL,NULL,378513273.224674,378513273.224674,53.307262,-6.219077,NULL,NULL,NULL);


On Jan 12, 2013, at 11:44 AM, Simon Slavin  wrote:


On 12 Jan 2013, at 7:32pm, Ted Heng  wrote:

> INSERT INTO "Z_METADATA" VALUES (1, '052A3800-692C-4A92-ACE8-F6CE6A3B204A', 
> X'62706C6973743030D601020304050607090A5556575F101E4E5353746F72654D6F64656C56657273696F6E4964656E746966696572735F101D4E5350657273697374656E63654672616D65776F726B56657273696F6E5F1019

Okay, the format

X'hexdigits'

is the way SQLite turns a BLOB into text.  So if you can figure out the format 
SQL Server uses to express a BLOB as text, you can probably work out how to 
turn one into another using global search and replace commands in a text editor.

As an alternative you could globally search and replace

', X'

with

', '

which may allow SQL Server to import those values as hex.  Then you can write 
your own code in SQL Server to turn those hex values into BLOBs.  But once 
again: this is a SQLite list here, and if you want help with SQL Server you 
will have to explain SQL Server syntax to us or ask on a SQL Server list.

Simon.


Re: [sqlite] Concurrent read performance

2013-01-12 Thread Wayne Bradney
Keith, I started Experiment 5 soon after the original post, which was to do as 
you suggested and run concurrent queries on a single connection shared across 
multiple threads. For both shared cache databases (file- and memory-backed) it 
didn't seem to make any difference in my test case -- queries still appear to 
be serialized. Would this approach require SQLITE_CONFIG_MULTITHREAD or 
SQLITE_OPEN_NOMUTEX, I wonder?
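For what it's worth, the shared-cache in-memory arrangement Wayne describes can be reproduced with a URI filename; a sketch in Python (the name memdb1 is arbitrary, and this demonstrates only that two connections see one database, not anything about query concurrency):

```python
import sqlite3

# Named shared-cache in-memory database: any connection using the same URI
# sees the same data, for as long as at least one connection stays open.
URI = "file:memdb1?mode=memory&cache=shared"

conn1 = sqlite3.connect(URI, uri=True)
conn1.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn1.execute("INSERT INTO t VALUES (1, 'hello')")
conn1.commit()

# A second, independent connection reads what the first one wrote.
conn2 = sqlite3.connect(URI, uri=True)
print(conn2.execute("SELECT val FROM t").fetchone()[0])  # hello
```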
 > Date: Sat, 12 Jan 2013 13:39:35 -0700
> From: kmedc...@dessus.com
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Concurrent read performance
> 
> 
> If your application can reasonably interleave "in-engine" operations -vs- 
> out-of-engine operations, then you might want to try multiple threads against 
> a single connection.  In any case, make sure that you are not opening and 
> closing connections.  Open the connections when your application starts, and 
> close them when it is done.  The page cache will have almost no effect if you 
> delete it between each use (think of the page cache as loading the database 
> into RAM -- when you open a connection, the entire database must be copied 
> into RAM -- when you close the connection, you are discarding that copy.  If 
> you then open another connection, you have to load all your data from disk 
> into that connection too.  As you can see, opening and closing connections 
> will incur a significant overhead to repetitively read data which should 
> already be in the cache, were it not for you discarding it from memory.)  As 
> long as you are properly resetting and disposing of your prepared statements 
> and make sure that you do not keep "unnecessary crap" (like forgetting to commit 
> transactions or reset statements), keeping a connection open for 
> minutes/hours/days/years/decades/centuries does not have any ill effect 
> (though it will make your programming errors jump out at you, often with odd 
> behaviour and crashes).
> 
> ---
> ()  ascii ribbon campaign against html e-mail
> /\  www.asciiribbon.org
> 
> 
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> > boun...@sqlite.org] On Behalf Of Wayne Bradney
> > Sent: Saturday, 12 January, 2013 12:36
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] Concurrent read performance
> > 
> > Increasing cache_size didn't seem to have an effect. I think I'm going to
> > lean towards taking the 15% or so hit and use a file-backed db without 
> > shared
> > cache for now -- I'll have to see what the impact of that will be on write
> > performance. If that's OK, then maybe a ramdisk will get back some of that
> > performance. I guess I was hoping that a memory-backed db would simply 
> > behave
> > exactly like a ramdisk in the first place, but the shared cache requirement
> > kills us.
> >  > From: mdblac...@yahoo.com
> > > To: sqlite-users@sqlite.org
> > > Date: Sat, 12 Jan 2013 12:02:25 -0600
> > > Subject: Re: [sqlite] Concurrent read performance
> > >
> > > Also...does increasing cache_size help?
> > > Are you able to use a RAM disk?
> > >
> > >
> > > -Original Message-
> > > From: sqlite-users-boun...@sqlite.org
> > > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Wayne Bradney
> > > Sent: Saturday, January 12, 2013 11:39 AM
> > > To: sqlite-users@sqlite.org
> > > Subject: [sqlite] Concurrent read performance
> > >
> > > I have a requirement for which I'm proposing SQLite as a solution, and
> > would
> > > appreciate some feedback to clarify my thinking. The application is a
> > shared
> > > service (.NET/WCF) that caches relational data in-memory and serves it up
> > > when (read-only) requests come in (via operation contracts) from multiple
> > > clients. Such client requests are high-level, not relational, and could
> > > result in several (potentially many) individual queries to the cache
> > itself.
> > > These individual cache queries are dynamically generated and could be
> > > arbitrarily complex, but are all essentially relational in nature. The
> > > service itself will periodically load data from an external data source,
> > > transform it and update the cache. There's no requirement currently for 
> > > the
> > > cache to ever be persisted - it can be reloaded from the external source 
> > > if
> > > necessary, but performance (especially read performance) is critical. The
> > > amount of data/indexes to cache potentially could be quite large (of the
> > > order of several gigabytes, let's
> > >  say). I've already worked up an initial implementation that uses an 
> > > in-memory
> > > SQLite database via System.Data.SQLite (1.0.82.0). The service maintains a
> > > "mode=memory&cache=shared" database, and each SQL query happens on its own
> > > connection, and in its own thread. Some observations of our read
> > performance
> > > (when the cache is fully populated and there are no writers): [Experiment
> > 1:
> > > Memory-backed, single query]
> > > For simple client requests that only 

Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Ted Heng
Solved the BLOB issue -- replaced X' with 0x and removed the enclosing 
apostrophe (').

Here's another problem.  I presume the problem is that the dumped value is a 
fractional number instead of a date in a format like 'MMDD ...', etc.  Can I 
change the date format of the dump?

Operand type clash: numeric is incompatible with datetime2

CREATE TABLE ZADDRESS ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, 
ZADDRESSID INTEGER, ZADDRESSTYPE INTEGER, ZCITY INTEGER, ZORGANIZATION INTEGER, 
ZPERSON INTEGER, Z23_PERSON INTEGER, ZDATECREATED DATETIME2, ZDATEUPDATED 
DATETIME2, ZLATITUDE REAL, ZLONGITUDE REAL, ZSTREET1 VARCHAR, ZSTREET2 VARCHAR, 
ZZIP VARCHAR );
INSERT INTO "ZADDRESS" 
VALUES(1,1,3,1,NULL,NULL,NULL,NULL,NULL,378513273.224674,378513273.224674,53.307262,-6.219077,NULL,NULL,NULL);


On Jan 12, 2013, at 11:44 AM, Simon Slavin  wrote:


On 12 Jan 2013, at 7:32pm, Ted Heng  wrote:

> INSERT INTO "Z_METADATA" VALUES (1, '052A3800-692C-4A92-ACE8-F6CE6A3B204A', 
> X'62706C6973743030D601020304050607090A5556575F101E4E5353746F72654D6F64656C56657273696F6E4964656E746966696572735F101D4E5350657273697374656E63654672616D65776F726B56657273696F6E5F1019

Okay, the format

X'hexdigits'

is the way SQLite turns a BLOB into text.  So if you can figure out the format 
SQL Server uses to express a BLOB as text, you can probably work out how to 
turn one into another using global search and replace commands in a text editor.

As an alternative you could globally search and replace

', X'

with

', '

which may allow SQL Server to import those values as hex.  Then you can write 
your own code in SQL Server to turn those hex values into BLOBs.  But once 
again: this is a SQLite list here, and if you want help with SQL Server you 
will have to explain SQL Server syntax to us or ask on a SQL Server list.

Simon.


Re: [sqlite] Problem whit sqlite_prepare_v2() in 3.7.5 versionandlater

2013-01-12 Thread Igor Tandetnik

On 1/12/2013 1:20 PM, a...@zator.com wrote:

I wish I had thought of this before, especially because I remember that some time 
ago I had problems for the same reason; it seems Visual Studio has some problems 
with the path of the current working directory.  In fact, it uses different 
directories according to whether "Debug" or "Release" mode is used.


I'm not sure what you mean by "problems". Visual Studio sets the working 
directory to whatever you have specified under Project | Properties | 
Debugging.

--
Igor Tandetnik



Re: [sqlite] Concurrent read performance

2013-01-12 Thread Keith Medcalf

If your application can reasonably interleave "in-engine" operations -vs- 
out-of-engine operations, then you might want to try multiple threads against a 
single connection.  In any case, make sure that you are not opening and closing 
connections.  Open the connections when your application starts, and close them 
when it is done.  The page cache will have almost no effect if you delete it 
between each use (think of the page cache as loading the database into RAM -- 
when you open a connection, the entire database must be copied into RAM -- when 
you close the connection, you are discarding that copy.  If you then open 
another connection, you have to load all your data from disk into that 
connection too.  As you can see, opening and closing connections will incur a 
significant overhead to repetitively read data which should already be in the 
cache, were it not for you discarding it from memory.)  As long as you are 
properly resetting and disposing of your prepared statements and make sure 
that you do not keep "unnecessary crap" (like forgetting to commit 
transactions or reset statements), keeping a connection open for 
minutes/hours/days/years/decades/centuries does not have any ill effect (though 
it will make your programming errors jump out at you, often with odd behaviour 
and crashes).
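Keith's pattern -- one long-lived connection whose statements are reset and reused rather than re-prepared -- looks roughly like this in Python, whose sqlite3 module keeps a per-connection statement cache (cached_statements); a sketch, not SQLite C API code:

```python
import sqlite3

# Open once at application start and keep for the life of the process;
# cached_statements lets repeated SQL reuse its prepared statement.
conn = sqlite3.connect(":memory:", cached_statements=128)
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("alpha",), ("beta",), ("gamma",)])
conn.commit()

def lookup(item_id):
    # Identical SQL text on every call, so the statement is fetched from
    # the connection's cache instead of being prepared again.
    return conn.execute("SELECT name FROM items WHERE id = ?",
                        (item_id,)).fetchone()

print(lookup(2)[0])  # beta
```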

---
()  ascii ribbon campaign against html e-mail
/\  www.asciiribbon.org


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Wayne Bradney
> Sent: Saturday, 12 January, 2013 12:36
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Concurrent read performance
> 
> Increasing cache_size didn't seem to have an effect. I think I'm going to
> lean towards taking the 15% or so hit and use a file-backed db without shared
> cache for now -- I'll have to see what the impact of that will be on write
> performance. If that's OK, then maybe a ramdisk will get back some of that
> performance. I guess I was hoping that a memory-backed db would simply behave
> exactly like a ramdisk in the first place, but the shared cache requirement
> kills us.
>  > From: mdblac...@yahoo.com
> > To: sqlite-users@sqlite.org
> > Date: Sat, 12 Jan 2013 12:02:25 -0600
> > Subject: Re: [sqlite] Concurrent read performance
> >
> > Also...does increasing cache_size help?
> > Are you able to use a RAM disk?
> >
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Wayne Bradney
> > Sent: Saturday, January 12, 2013 11:39 AM
> > To: sqlite-users@sqlite.org
> > Subject: [sqlite] Concurrent read performance
> >
> > I have a requirement for which I'm proposing SQLite as a solution, and
> would
> > appreciate some feedback to clarify my thinking. The application is a
> shared
> > service (.NET/WCF) that caches relational data in-memory and serves it up
> > when (read-only) requests come in (via operation contracts) from multiple
> > clients. Such client requests are high-level, not relational, and could
> > result in several (potentially many) individual queries to the cache
> itself.
> > These individual cache queries are dynamically generated and could be
> > arbitrarily complex, but are all essentially relational in nature. The
> > service itself will periodically load data from an external data source,
> > transform it and update the cache. There's no requirement currently for the
> > cache to ever be persisted - it can be reloaded from the external source if
> > necessary, but performance (especially read performance) is critical. The
> > amount of data/indexes to cache potentially could be quite large (of the
> > order of several gigabytes, let's
> >  say). I've already worked up an initial implementation that uses an in-memory
> > SQLite database via System.Data.SQLite (1.0.82.0). The service maintains a
> > "mode=memory&cache=shared" database, and each SQL query happens on its own
> > connection, and in its own thread. Some observations of our read
> performance
> > (when the cache is fully populated and there are no writers): [Experiment
> 1:
> > Memory-backed, single query]
> > For simple client requests that only result in a single (albeit complex)
> SQL
> > query to the database, performance is excellent, and I'm very happy to get
> > the maintenance benefits of using a flexible query language against the
> > cache. [Experiment 2: Memory-backed, concurrent queries]
> > For any client request that results in multiple simultaneous SQL queries to
> > the database, those queries seem to be serialized rather than concurrent,
> > and the whole request actually performs much worse than the old
> > home-grown-but-horribly-limited caching/querying mechanism that was in
> place
> > beforehand, and I'm sad. [Experiment 3: File-backed, concurrent queries,
> > with shared cache]
> > I switched to a file-backed database (but still "cache=shared") and it
> > appears that the queries are 

Re: [sqlite] Concurrent read performance

2013-01-12 Thread Michael Black
What about using 2 or more databases?

Wayne Bradney  wrote:

>>>All access in SQLite is serialized. Apologies if I'm missing something 
>>>fundamental here, but that's not what I'm seeing with a file-backed database 
>>>when shared cache is OFF.  My test has a single table with 1M rows, and four 
>>>queries that each yield 100K different rows.  I run them two ways: 1. All 
>>>queries in a loop on the same thread in the same connection.  2. Each query in 
>>>parallel on separate threads, each with its own connection.  If all access 
>>>were serialized, I would expect these two tests to take about the same 
>>>amount of time overall, wouldn't I?  In fact, with a file-backed database and 
>>>shared cache OFF, the second run takes about 70% less time.  With shared cache 
>>>ON, they're the same. As to your second point, I probably should have made 
>>>it clear that this isn't an internal project, it's a software product, and 
>>>we don't control where it runs. I understand what an SSD is and why it's 
>>>better than a spindle drive, but my question wasn't really meant to solicit 
>>>suggestions for performance 
> improvements outside the proposal at hand, which was to retire our existing 
> home-grown in-memory cache implementation (which is very fast for concurrent 
> reads, but is extremely limited in how it can be queried), and replace it 
> with a SQL-capable, relational store and still get roughly the same 
> performance. Our expectation was that we could achieve this with SQLite, but 
> were surprised by the apparent lack of read-concurrency, and wanted to get 
> some input on what our options might be in terms of SQLite configuration of 
> memory-backed databases.
>> From: slav...@bigfraud.org
>> Date: Sat, 12 Jan 2013 17:48:56 +
>> To: sqlite-users@sqlite.org
>> Subject: Re: [sqlite] Concurrent read performance
>> 
>> 
>> On 12 Jan 2013, at 5:38pm, Wayne Bradney  wrote:
>> 
>> > "mode=memory&cache=shared"
>> 
>> 
>> > 1. when shared cache is enabled, all reads are serialized, and
>> 
>> All access in SQLite is serialised.  All transactions require locking the 
>> entire database.  SQLite is very simple -- 'lite' -- so queries run 
>> extremely quickly, so you don't normally realise that any locking has taken 
>> place.
>> 
>> > 2. there doesn't seem to be any way to have a memory-backed database that 
>> > can be accessed by multiple connections without using a shared cache,  
>> > then I guess I MUST use a file-backed database to get concurrent reads, 
>> > even though I don't need the persistence and don't want to take the I/O 
>> > hit. Am I making any sense? Anything I'm missing? 
>> 
>> You are putting programming effort into making your code fast, and this is 
>> costing you (or your employer) programmer time.  Have you tried doing this 
>> using an SSD instead of a spinning disk ?  A great deal of the time taken 
>> for on-disk SQLite is waiting for the disk to spin to the right place.  With 
>> an SSD all this time vanishes and access is nearly as fast as for in-memory 
>> databases.  The advantage is that you don't spend your time on clever 
>> optimal programming or have to do any of the things required for 
>> 'mode=memory'.  In fact it works very quickly without any special modes or 
>> PRAGMAs at all.  Though I don't know your setup in detail and it may not be 
>> of such a great advantage to you.
>> 
>> Simon.


Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Simon Slavin

On 12 Jan 2013, at 7:32pm, Ted Heng  wrote:

> INSERT INTO "Z_METADATA" VALUES (1, '052A3800-692C-4A92-ACE8-F6CE6A3B204A', 
> X'62706C6973743030D601020304050607090A5556575F101E4E5353746F72654D6F64656C56657273696F6E4964656E746966696572735F101D4E5350657273697374656E63654672616D65776F726B56657273696F6E5F1019

Okay, the format

X'hexdigits'

is the way SQLite turns a BLOB into text.  So if you can figure out the format 
SQL Server uses to express a BLOB as text, you can probably work out how to 
turn one into another using global search and replace commands in a text editor.

As an alternative you could globally search and replace

', X'

with

', '

which may allow SQL Server to import those values as hex.  Then you can write 
your own code in SQL Server to turn those hex values into BLOBs.  But once 
again: this is a SQLite list here, and if you want help with SQL Server you 
will have to explain SQL Server syntax to us or ask on a SQL Server list.

Simon.


Re: [sqlite] Concurrent read performance

2013-01-12 Thread Wayne Bradney
Increasing cache_size didn't seem to have an effect. I think I'm going to lean 
towards taking the 15% or so hit and use a file-backed db without shared cache 
for now -- I'll have to see what the impact of that will be on write 
performance. If that's OK, then maybe a ramdisk will get back some of that 
performance. I guess I was hoping that a memory-backed db would simply behave 
exactly like a ramdisk in the first place, but the shared cache requirement 
kills us.
 > From: mdblac...@yahoo.com
> To: sqlite-users@sqlite.org
> Date: Sat, 12 Jan 2013 12:02:25 -0600
> Subject: Re: [sqlite] Concurrent read performance
> 
> Also...does increasing cache_size help?
> Are you able to use a RAM disk?
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Wayne Bradney
> Sent: Saturday, January 12, 2013 11:39 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Concurrent read performance
> 
> I have a requirement for which I'm proposing SQLite as a solution, and would
> appreciate some feedback to clarify my thinking. The application is a shared
> service (.NET/WCF) that caches relational data in-memory and serves it up
> when (read-only) requests come in (via operation contracts) from multiple
> clients. Such client requests are high-level, not relational, and could
> result in several (potentially many) individual queries to the cache itself.
> These individual cache queries are dynamically generated and could be
> arbitrarily complex, but are all essentially relational in nature. The
> service itself will periodically load data from an external data source,
> transform it and update the cache. There's no requirement currently for the
> cache to ever be persisted - it can be reloaded from the external source if
> necessary, but performance (especially read performance) is critical. The
> amount of data/indexes to cache potentially could be quite large (of the
> order of several gigabytes, let's 
>  say). I've already worked up an initial implementation that uses an in-memory
> SQLite database via System.Data.SQLite (1.0.82.0). The service maintains a
> "mode=memory&cache=shared" database, and each SQL query happens on its own
> connection, and in its own thread. Some observations of our read performance
> (when the cache is fully populated and there are no writers): [Experiment 1:
> Memory-backed, single query]
> For simple client requests that only result in a single (albeit complex) SQL
> query to the database, performance is excellent, and I'm very happy to get
> the maintenance benefits of using a flexible query language against the
> cache. [Experiment 2: Memory-backed, concurrent queries]
> For any client request that results in multiple simultaneous SQL queries to
> the database, those queries seem to be serialized rather than concurrent,
> and the whole request actually performs much worse than the old
> home-grown-but-horribly-limited caching/querying mechanism that was in place
> beforehand, and I'm sad. [Experiment 3: File-backed, concurrent queries,
> with shared cache]
> I switched to a file-backed database (but still "cache=shared") and it
> appears that the queries are still being serialized, and is overall about
> 15% slower than Experiment 2. [Experiment 4: File-backed, concurrent
> queries, without shared cache]
> I switched to a file-backed database without a shared cache, and performance
> improved dramatically (about 70% faster than Experiment 3). It appears that
> the queries are now truly happening concurrently. So it appears that, since:
> 1. when shared cache is enabled, all reads are serialized, and
> 2. there doesn't seem to be any way to have a memory-backed database that
> can be accessed by multiple connections without using a shared cache,  then
> I guess I MUST use a file-backed database to get concurrent reads, even
> though I don't need the persistence and don't want to take the I/O hit. Am I
> making any sense? Anything I'm missing?
> 


Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Ted Heng
Here's one problem.

I got the exported SQL statements and Microsoft SQL Server Management Studio 
complains about this statement.

Here's the exact error message.

Msg 102, Level 15, State 1, Line 45568
Incorrect syntax near 
'62706C6973743030D601020304050607090A5556575F101E4E5353746F72654D6F64656C56657273696F6E4964656E746966696572735F101D4E5350657273697'.

The complete SQLite statement is below.

CREATE TABLE Z_METADATA (Z_VERSION INTEGER PRIMARY KEY, Z_UUID VARCHAR(255), 
Z_PLIST IMAGE);
INSERT INTO "Z_METADATA" VALUES (1, '052A3800-692C-4A92-ACE8-F6CE6A3B204A', 
X'62706C6973743030D601020304050607090A5556575F101E4E5353746F72654D6F64656C56657273696F6E4964656E746966696572735F101D4E5350657273697374656E63654672616D65776F726B56657273696F6E5F10194E5353746F72654D6F64656C56657273696F6E4861736865735B4E5353746F7265547970655F10125F4E534175746F56616375756D4C6576656C5F10204E5353746F72654D6F64656C56657273696F6E48617368657356657273696F6EA10855312E302E301101A3DF10250B0C0D0E0F101112131415161718191A1B1C1D1E1F202122232425262728292A2B2C2D2E2F303132333435363738393A3B3C3D3E3F404142434445464748494A4B4C4D4E4F5051525354554472696C6C56506572736F6E554576656E7457436F6E74656E745C4576656E7453706F6E736F725750726F66696C65534C6F675B436F6E74656E7454797065574D6573736167655F10124F7267616E697A6174696F6E4D656D62657257576562736974655A4E616D6550726566697854476F616C58446F63756D656E745A4E616D65537569785A4576656E7447726F75705C4576656E74436F6E746163745553706F727457436F756E7472795B53706F72744D656D626572544369
 
74795853706F72745469705C53706F727452616E6B696E6757416464726573735550686F6E655C4F7267616E697A6174696F6E5B4576656E744D656D6265725C53706F7274547261696E65725E4E617669676174696F6E4974656D594D6564696146696C655C416E6E6F756E63656D656E7455496D61676555456D61696C595570646174654C6F675F10134F7267616E697A6174696F6E436F6E746163745553746174666964656F4F1020E773A4342FB3E1B37A1A9F59832D7E13BDF9DD30973F54162CEBAEDC29A720D04F10202A73C9DBF9FD151B81F92C05E73E7F94AE686AE5E414FDD4F9628A457EE00BDD4F1020F1A18A0B64B04A36574016D9B99FFB426F3E03C48A0708E8751F2A46E8F8B02F4F1020CDE089F4596718CC5ACC347BE94D0F941FD4A527227E76C7531236DD0828A0394F10200EE2B1DFEA3B4687A6A4FA88F673D7F6943BE83DEAEB78F6F5D49B4117FBFFAC4F1020141747F2C276D7B5C3D8B991FE710B43102BBBF1F2B1820A0FA279B5D44DE4314F1020DFA32955E3947904F84326C7131CC018705AFEDBCFCB7EFDE88D11A0FA76E1E94F1020F6D65A98479AF0A8A5680237A2B768717632400466921BC7ACDDE8427EEC07284F10201576B1F926790A056CEA648BA1EE4204CE83D4254D6CE17AA167931A5A90F1B24F1020E2136F7F254F2F228D36E
 
8E65813D4F5697E1274EE7637E2EB404F2A82BB853E4F102060EAE3ACF45512EB824546A74CD351FFFD3DDC47F4C1AB84909967EF8E4DBF714F1020F39AFC6E92E932DCDBD8C1AF083A581ECF9D64205A33945253DD69FA7A42A31C4F102062A39B968CC3C64D366B06E6C08AC6C1CB1B4063BCB0F687BE66BD8788E18DC24F10202512F4D47C4188F706938E1E4FE8D3C4D820976051361D1E4702E30A51AC0F924F1020C19FFE2A49A324934512A111D7884AF1191A1AE6BC5B8407F51EC170539639CC4F1020A5896DB991787D6756CC9D85B34C9F5EA01C876CB8869DD4C7EE66E9AA96E0EE4F10203530C6B2076C86C0F76362C31A7E761882FA61257D63F8559E2B35EBE4543D834F10200FE346B2F912F53AF84EF8026445E6EF1A56980A692A1580243BF3A98A39A5284F102064C338E304C3308F9F6B7766600C36D353109DCF015CEFDA2A165AB2285A35F34F1020405E154581B854ABB33A44729581844E56DDDF3F0526C6EECBB13DCF9FDE68E84F10207F61D8AE76F6EE334909F6929CA735B9E40ED6760D72A60061A6E074FD20D2A64F1020B03CAF3D9290C441DDDAAFD361768B93AC0BFA8E163188D9EF89E3424EF276E74F10206ECF4DA4E12895904371551570739FF4D65B80D81EFF7D30DCD31B4FC6BF52EC4F102045294A2AB0A1ACAC6391724E9F88DC07125C1C
E715CEFB9152BEF64C7FE4CF814F1020AA53F0A8DD021CF265CF72AD6B26597C6255ADC83F7AF1387CCFE074A9098ED64F1020C569F7D4EA98D4F3E26981C965F787630C9CECF71CD6029385F57F352FDE2F644F102010DD93E1A084F42C880C06C524540FB58A16C46ADBBC4703F9289CA38D68D4F74F1020F538504187C648EFB73963C2DF6B981B81ABDEB46340E89F0653F7E3BEFF56EF4F10205439C5F8EE5C0262E92A6689AFAF45AF36849B1BB4BB274C5F58749F66F2CBB34F10207EDFAAD4A27E7C9C023177089D3C43A180C10864901B77746FF3F5ED166534044F10202DA73111004DFDD402FC90F1F7DD0739CACA71288111AEFC43B63615B5B359B84F10207AE6E1C2F022B5792F78BCC302120D50A811C1B98EE7091A27D8F56FA04B7B224F10201ABD600C7BEAAA7C49510417FCFF800C060048EED9B8517DF0F5F9F39630BDAF4F10207A60CE1A1B6AA5B99E255927278C7FBC073142A46F9433C1C4D201798A6D29734F1020B7993419053716B81582A1FB43695CD893B860A3C0B7FF6BE04AEA3AFDEE1D5D4F10201423D3EAB4837A8E37BAFA25F217AF73D87567BA69EB28A0BF6FE863B698D3524F1020AF716D945960B9E275C4CD9BF59E5F190CA3753503070DCED1AA3F807B697B265653514C6974655132100300080015003600560072007E009300B600B800B
E00C1010E0114011B012101290136013E0142014E0156016B0173017E0183018C019701A201AF01B501BD01C901CE01D701E401EC01F201FF020B021802270231023E0244024A0254026A02700276029902BC02DF030203250348036B038E03B103D403F7041A043D0460048304A604C904EC050F053205550578059B05BE05E106040627064A066D069006B306D606F9071C073F07620785078C078E020100580790');


On Jan 12, 2013, at 11:24 AM, Simon Slavin  wrote:



Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Simon Slavin

On 12 Jan 2013, at 7:13pm, Ted Heng  wrote:

> There are some incompatibilities in SQL statements between SQLite and SQL 
> Server, like X'' for blobs, etc.

The syntax produced by the SQLite shell tool is fairly conservative, attempting 
to be compatible with other SQL engines.  However, it is quite possible you'll 
run up against some incompatibilities: there are far too many combinations 
and options in the whole of SQL for us to discuss them all.  Try it with your 
particular data, and if you hit anything you don't understand, post about it 
specifically and we'll try to help.

> Do we have a list of known compatibility issues that I can use to change the 
> dump statements to the proper SQL Server syntax?

This is a SQLite list, so we (between us) understand SQLite.  If you want to 
discuss a specific issue in SQL Server, you're going to have to tell us what the 
problem is, or ask on a SQL Server list where they can be expected to 
understand it.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
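Simon's point about incompatibilities can be partly mechanized. As a rough illustration (not an exhaustive converter; `adapt_dump_line` and its three rules are made up for this example), a small filter over the dump file might look like:

```python
import re

def adapt_dump_line(line: str) -> str:
    """Rewrite a few SQLite-isms that SQL Server rejects. Illustrative
    only: these three rules are made up for this example, and a real
    dump will need more (and more careful) rules."""
    # SQLite blob literals X'ABCD' -> SQL Server binary literals 0xABCD
    line = re.sub(r"X'([0-9A-Fa-f]*)'", lambda m: "0x" + m.group(1), line)
    # Bare VARCHAR (no length) is invalid in SQL Server DDL
    line = re.sub(r"\bVARCHAR\b(?!\s*\()", "VARCHAR(255)", line)
    # Double-quoted identifiers -> bracketed identifiers (naive: this
    # would also rewrite quoted string data, so only run it on DDL lines)
    line = re.sub(r'"([A-Za-z_][A-Za-z0-9_]*)"', r"[\1]", line)
    return line

print(adapt_dump_line("INSERT INTO t VALUES(X'62706C69');"))
# INSERT INTO t VALUES(0x62706C69);
```

Running the whole dump through such a filter before handing it to SQL Server catches the mechanical issues; anything semantic (like the REAL-vs-DATETIME question) still needs an UPDATE on the SQL Server side.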


Re: [sqlite] Concurrent read performance

2013-01-12 Thread Wayne Bradney
> All access in SQLite is serialized.

Apologies if I'm missing something fundamental here, but that's not what I'm 
seeing with a file-backed database when shared cache is OFF. My test has a 
single table with 1M rows, and four queries that each yield 100K different 
rows. I run them two ways:

1. All queries in a loop on the same thread, in the same connection.
2. Each query in parallel on separate threads, each with its own connection.

If all access were serialized, I would expect these two tests to take about the 
same amount of time overall, wouldn't I? In fact, with a file-backed database 
and shared cache OFF, the second run takes about 70% less time. With shared 
cache ON, they're the same.

As to your second point, I probably should have made it clear that this isn't 
an internal project; it's a software product, and we don't control where it 
runs. I understand what an SSD is and why it's better than a spindle drive, but 
my question wasn't really meant to solicit suggestions for performance 
improvements outside the proposal at hand, which was to retire our existing 
home-grown in-memory cache implementation (very fast for concurrent reads, but 
extremely limited in how it can be queried) and replace it with a SQL-capable, 
relational store while still getting roughly the same performance. Our 
expectation was that we could achieve this with SQLite, but we were surprised 
by the apparent lack of read concurrency, and wanted some input on our options 
in terms of SQLite configuration of memory-backed databases.

> From: slav...@bigfraud.org
> Date: Sat, 12 Jan 2013 17:48:56 +
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Concurrent read performance
> 
> 
> On 12 Jan 2013, at 5:38pm, Wayne Bradney  wrote:
> 
> > "mode=memory=shared"
> 
> 
> > 1. when shared cache is enabled, all reads are serialized, and
> 
> All access in SQLite is serialised.  All transactions require locking the 
> entire database.  SQLite is very simple -- 'lite' -- so queries run extremely 
> quickly, so you don't normally realise that any locking has taken place.
> 
> > 2. there doesn't seem to be any way to have a memory-backed database that 
> > can be accessed by multiple connections without using a shared cache,  then 
> > I guess I MUST use a file-backed database to get concurrent reads, even 
> > though I don't need the persistence and don't want to take the I/O hit. Am 
> > I making any sense? Anything I'm missing? 
> 
> You are putting programming effort into making your code fast, and this is 
> costing you (or your employer) programmer time.  Have you tried doing this 
> using an SSD instead of a spinning disk ?  A great deal of the time taken for 
> on-disk SQLite is waiting for the disk to spin to the right place.  With an 
> SSD all this time vanishes and access is nearly as fast as for in-memory 
> databases.  The advantage is that you don't spend your time on clever optimal 
> programming or have to do any of the things required for 'mode=memory'.  In 
> fact it works very quickly without any special modes or PRAGMAs at all.  
> Though I don't know your setup in detail and it may not be of such a great 
> advantage to you.
> 
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
  


Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Ted Heng
There are some incompatibilities in SQL statements between SQLite and SQL 
Server, like X'' for blobs, etc.  Do we have a list of known compatibility 
issues that I can use to change the dump statements to the proper SQL Server 
syntax?

Thanks, Simon.


On Jan 12, 2013, at 11:01 AM, Simon Slavin  wrote:


On 12 Jan 2013, at 6:58pm, Ted Heng  wrote:

> I'm trying to export my SQLite database into SQL Server 2008, but it's very 
> difficult.  I'm building an iOS App using Core Data with SQLite and it works 
> very well.

Use a SQLite tool to dump to .csv or SQL format, then use a SQL Server tool to 
import the results into your SQL database. Since this is a SQLite list, we can 
help with a good SQLite tool:

http://www.sqlite.org/sqlite.html

downloadable from the SQLite site, but for the SQL Server tool you need to look 
elsewhere.

Simon.



Re: [sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Simon Slavin

On 12 Jan 2013, at 6:58pm, Ted Heng  wrote:

> I'm trying to export my SQLite database into SQL Server 2008, but it's very 
> difficult.  I'm building an iOS App using Core Data with SQLite and it works 
> very well.

Use a SQLite tool to dump to .csv or SQL format, then use a SQL Server tool to 
import the results into your SQL database. Since this is a SQLite list, we can 
help with a good SQLite tool:

http://www.sqlite.org/sqlite.html

downloadable from the SQLite site, but for the SQL Server tool you need to look 
elsewhere.

Simon.
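For reference, the dump step Simon describes is `sqlite3 mydb.db .dump > dump.sql` with the shell tool. The same SQL text can also be produced programmatically; a minimal sketch using Python's sqlite3 module (the example schema is made up, just to stand in for the real database):

```python
import sqlite3

# Build a small throwaway database in memory to stand in for the real one.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO person (name) VALUES (?)", [("Ted",), ("Simon",)])
src.commit()

# Connection.iterdump() yields the same SQL text as the shell's .dump
# command; write it to a file and feed that to the SQL Server import side.
dump_sql = "\n".join(src.iterdump())
print(dump_sql)
```

Either route gives you a plain-text SQL file you can then massage into SQL Server's dialect.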


[sqlite] Best Tool for Exporting SQLite Database into SQL Server 2008

2013-01-12 Thread Ted Heng
Hi,

I'm trying to export my SQLite database into SQL Server 2008, but it's very 
difficult.  I'm building an iOS App using Core Data with SQLite and it works 
very well.  However, I need to synchronize it with my ASP.NET website.  I 
really can't seem to find a good approach to accomplish this task.  I've tried 
the following tools, but they all seem to lack the features to fully migrate a 
SQLite database into SQL Server.  They offer export to SQL statements, but the 
statements they produce aren't compatible with SQL Server.

Core Data Editor
SQLite Professional
Navicat Premium Essentials
Base
SQLiteSync
Is there any advice I might have missed on the best approach for my iOS app 
to fully integrate with my ASP.NET website database?

Thanks in advance.
Ted



Re: [sqlite] Concurrent read performance

2013-01-12 Thread Wayne Bradney
read_uncommitted didn't seem to have any effect.

> From: mdblac...@yahoo.com
> To: sqlite-users@sqlite.org
> Date: Sat, 12 Jan 2013 11:47:55 -0600
> Subject: Re: [sqlite] Concurrent read performance
> 
> Did you try read-uncommitted?
> Sounds promising...
> 
> 2.2.1
> http://www.sqlite.org/sharedcache.html
> 
> PRAGMA read_uncommitted = <boolean>;
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Wayne Bradney
> Sent: Saturday, January 12, 2013 11:39 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Concurrent read performance
> 
> I have a requirement for which I'm proposing SQLite as a solution, and would
> appreciate some feedback to clarify my thinking. The application is a shared
> service (.NET/WCF) that caches relational data in-memory and serves it up
> when (read-only) requests come in (via operation contracts) from multiple
> clients. Such client requests are high-level, not relational, and could
> result in several (potentially many) individual queries to the cache itself.
> These individual cache queries are dynamically generated and could be
> arbitrarily complex, but are all essentially relational in nature. The
> service itself will periodically load data from an external data source,
> transform it and update the cache. There's no requirement currently for the
> cache to ever be persisted - it can be reloaded from the external source if
> necessary, but performance (especially read performance) is critical. The
> amount of data/indexes to cache potentially could be quite large (of the
> order of several gigabytes, let's 
>  say). I've already worked up an initial implementation that uses an in-memory
> SQLite database via System.Data.SQLite (1.0.82.0). The service maintains a
> "mode=memory&cache=shared" database, and each SQL query happens on its own
> connection, and in its own thread. Some observations of our read performance
> (when the cache is fully populated and there are no writers): [Experiment 1:
> Memory-backed, single query]
> For simple client requests that only result in a single (albeit complex) SQL
> query to the database, performance is excellent, and I'm very happy to get
> the maintenance benefits of using a flexible query language against the
> cache. [Experiment 2: Memory-backed, concurrent queries]
> For any client request that results in multiple simultaneous SQL queries to
> the database, those queries seem to be serialized rather than concurrent,
> and the whole request actually performs much worse than the old
> home-grown-but-horribly-limited caching/querying mechanism that was in place
> beforehand, and I'm sad. [Experiment 3: File-backed, concurrent queries,
> with shared cache]
> I switched to a file-backed database (but still "cache=shared") and it
> appears that the queries are still being serialized, and is overall about
> 15% slower than Experiment 2. [Experiment 4: File-backed, concurrent
> queries, without shared cache]
> I switched to a file-backed database without a shared cache, and performance
> improved dramatically (about 70% faster than Experiment 3). It appears that
> the queries are now truly happening concurrently. So it appears that, since:
> 1. when shared cache is enabled, all reads are serialized, and
> 2. there doesn't seem to be any way to have a memory-backed database that
> can be accessed by multiple connections without using a shared cache,  then
> I guess I MUST use a file-backed database to get concurrent reads, even
> though I don't need the persistence and don't want to take the I/O hit. Am I
> making any sense? Anything I'm missing?
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
  


Re: [sqlite] Problem whit sqlite_prepare_v2() in 3.7.5 versionandlater

2013-01-12 Thread ajm
Igor:

You're right. Thank you, and congratulations on your keen nose.

As indicated in the OP, I was indeed using a relative path with the "original" 
table.

I wish I had thought of it before, especially because I remember that some time 
ago I had problems for the same reason. It seems Visual Studio has some quirks 
with the current working directory: in fact, it uses different directories 
depending on whether "Debug" or "Release" mode is used.

Thank you again.

--
Adolfo J. Millan

>
>  Mensaje original 
> De: Igor Tandetnik 
> Para:   sqlite-users@sqlite.org
> Fecha:  Fri, 11 Jan 2013 14:49:23 +0100
> Asunto:  Re: [sqlite] Problem whit sqlite_prepare_v2() in 3.7.5 
> versionandlater
>
> 
>
>On 1/11/2013 5:34 AM, a...@zator.com wrote:
>> Problem (partially) solved:
>>
>> Using MS Windows 8; MS Visual Studio 2012 for desktop, and SQLite 3.7.15.1 I 
>> get consistently the following behavior for the application:
>>
>> Running in Debug-mode, fails as reported in the OP.
>>
>> Running in Release-mode (without debugging), fails as reported.
>>
>> Starting the resulting executable as stand-alone -from the desktop or from 
>> the CLI-, run as expected without error.
>
>Current working directory is different in these cases. I bet you are 
>using relative paths in your statements. In which case it's a simple 
>matter of SQLite using the right database file sometimes, but creating a 
>blank new one other times.
>-- 
>Igor Tandetnik
>


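Igor's diagnosis is easy to reproduce. A minimal sketch (the file and directory names are made up for illustration) showing how the same relative path silently yields two different databases, and the absolute-path fix:

```python
import os
import sqlite3
import tempfile

# Simulate two runs of the same program started from different working
# directories (as Visual Studio does for Debug vs Release), both opening
# the same *relative* path "app.db".
base = tempfile.mkdtemp()
dir_a = os.path.join(base, "Debug")
dir_b = os.path.join(base, "Release")
os.mkdir(dir_a)
os.mkdir(dir_b)

os.chdir(dir_a)
con_a = sqlite3.connect("app.db")          # creates Debug/app.db
con_a.execute("CREATE TABLE t (x)")
con_a.commit()
con_a.close()

os.chdir(dir_b)
con_b = sqlite3.connect("app.db")          # silently creates a brand-new,
tables = con_b.execute(                    # empty Release/app.db
    "SELECT name FROM sqlite_master").fetchall()
print(tables)                              # [] -- the table has "vanished"

# The fix: resolve one absolute path up front and use it everywhere.
con2 = sqlite3.connect(os.path.join(dir_a, "app.db"))
print(con2.execute("SELECT name FROM sqlite_master").fetchone())  # ('t',)
```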


Re: [sqlite] Concurrent read performance

2013-01-12 Thread Clemens Ladisch
Wayne Bradney wrote:
> 1. when shared cache is enabled, all reads are serialized,

Yes.

> I guess I MUST use a file-backed database to get concurrent reads,
> even though I don't need the persistence and don't want to take the
> I/O hit.

If the in-memory database works, you do have enough memory for the data,
so you could rely on the OS file cache to keep the data in memory.

If you don't trust the OS, you could put the DB on a RAM disk (like
ImDisk).


Regards,
Clemens


Re: [sqlite] Concurrent read performance

2013-01-12 Thread Michael Black
Also...does increasing cache_size help?
Are you able to use a RAM disk?


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Wayne Bradney
Sent: Saturday, January 12, 2013 11:39 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Concurrent read performance

I have a requirement for which I'm proposing SQLite as a solution, and would
appreciate some feedback to clarify my thinking. The application is a shared
service (.NET/WCF) that caches relational data in-memory and serves it up
when (read-only) requests come in (via operation contracts) from multiple
clients. Such client requests are high-level, not relational, and could
result in several (potentially many) individual queries to the cache itself.
These individual cache queries are dynamically generated and could be
arbitrarily complex, but are all essentially relational in nature. The
service itself will periodically load data from an external data source,
transform it and update the cache. There's no requirement currently for the
cache to ever be persisted - it can be reloaded from the external source if
necessary, but performance (especially read performance) is critical. The
amount of data/indexes to cache potentially could be quite large (of the
order of several gigabytes, let's 
 say). I've already worked up an initial implementation that uses an in-memory
SQLite database via System.Data.SQLite (1.0.82.0). The service maintains a
"mode=memory&cache=shared" database, and each SQL query happens on its own
connection, and in its own thread. Some observations of our read performance
(when the cache is fully populated and there are no writers): [Experiment 1:
Memory-backed, single query]
For simple client requests that only result in a single (albeit complex) SQL
query to the database, performance is excellent, and I'm very happy to get
the maintenance benefits of using a flexible query language against the
cache. [Experiment 2: Memory-backed, concurrent queries]
For any client request that results in multiple simultaneous SQL queries to
the database, those queries seem to be serialized rather than concurrent,
and the whole request actually performs much worse than the old
home-grown-but-horribly-limited caching/querying mechanism that was in place
beforehand, and I'm sad. [Experiment 3: File-backed, concurrent queries,
with shared cache]
I switched to a file-backed database (but still "cache=shared") and it
appears that the queries are still being serialized, and is overall about
15% slower than Experiment 2. [Experiment 4: File-backed, concurrent
queries, without shared cache]
I switched to a file-backed database without a shared cache, and performance
improved dramatically (about 70% faster than Experiment 3). It appears that
the queries are now truly happening concurrently. So it appears that, since:
1. when shared cache is enabled, all reads are serialized, and
2. there doesn't seem to be any way to have a memory-backed database that
can be accessed by multiple connections without using a shared cache,  then
I guess I MUST use a file-backed database to get concurrent reads, even
though I don't need the persistence and don't want to take the I/O hit. Am I
making any sense? Anything I'm missing?




Re: [sqlite] Concurrent read performance

2013-01-12 Thread Simon Slavin

On 12 Jan 2013, at 5:38pm, Wayne Bradney  wrote:

> "mode=memory=shared"


> 1. when shared cache is enabled, all reads are serialized, and

All access in SQLite is serialised.  All transactions require locking the 
entire database.  SQLite is very simple -- 'lite' -- so queries run extremely 
quickly, so you don't normally realise that any locking has taken place.

> 2. there doesn't seem to be any way to have a memory-backed database that can 
> be accessed by multiple connections without using a shared cache,  then I 
> guess I MUST use a file-backed database to get concurrent reads, even though 
> I don't need the persistence and don't want to take the I/O hit. Am I making 
> any sense? Anything I'm missing? 

You are putting programming effort into making your code fast, and this is 
costing you (or your employer) programmer time.  Have you tried doing this 
using an SSD instead of a spinning disk ?  A great deal of the time taken for 
on-disk SQLite is waiting for the disk to spin to the right place.  With an SSD 
all this time vanishes and access is nearly as fast as for in-memory databases. 
 The advantage is that you don't spend your time on clever optimal programming 
or have to do any of the things required for 'mode=memory'.  In fact it works 
very quickly without any special modes or PRAGMAs at all.  Though I don't know 
your setup in detail and it may not be of such a great advantage to you.

Simon.


Re: [sqlite] Concurrent read performance

2013-01-12 Thread Michael Black
Did you try read-uncommitted?
Sounds promising...

2.2.1
http://www.sqlite.org/sharedcache.html

PRAGMA read_uncommitted = <boolean>;


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Wayne Bradney
Sent: Saturday, January 12, 2013 11:39 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Concurrent read performance

I have a requirement for which I'm proposing SQLite as a solution, and would
appreciate some feedback to clarify my thinking. The application is a shared
service (.NET/WCF) that caches relational data in-memory and serves it up
when (read-only) requests come in (via operation contracts) from multiple
clients. Such client requests are high-level, not relational, and could
result in several (potentially many) individual queries to the cache itself.
These individual cache queries are dynamically generated and could be
arbitrarily complex, but are all essentially relational in nature. The
service itself will periodically load data from an external data source,
transform it and update the cache. There's no requirement currently for the
cache to ever be persisted - it can be reloaded from the external source if
necessary, but performance (especially read performance) is critical. The
amount of data/indexes to cache potentially could be quite large (of the
order of several gigabytes, let's 
>  say). I've already worked up an initial implementation that uses an in-memory
> SQLite database via System.Data.SQLite (1.0.82.0). The service maintains a
> "mode=memory&cache=shared" database, and each SQL query happens on its own
connection, and in its own thread. Some observations of our read performance
(when the cache is fully populated and there are no writers): [Experiment 1:
Memory-backed, single query]
For simple client requests that only result in a single (albeit complex) SQL
query to the database, performance is excellent, and I'm very happy to get
the maintenance benefits of using a flexible query language against the
cache. [Experiment 2: Memory-backed, concurrent queries]
For any client request that results in multiple simultaneous SQL queries to
the database, those queries seem to be serialized rather than concurrent,
and the whole request actually performs much worse than the old
home-grown-but-horribly-limited caching/querying mechanism that was in place
beforehand, and I'm sad. [Experiment 3: File-backed, concurrent queries,
with shared cache]
I switched to a file-backed database (but still "cache=shared") and it
appears that the queries are still being serialized, and is overall about
15% slower than Experiment 2. [Experiment 4: File-backed, concurrent
queries, without shared cache]
I switched to a file-backed database without a shared cache, and performance
improved dramatically (about 70% faster than Experiment 3). It appears that
the queries are now truly happening concurrently. So it appears that, since:
1. when shared cache is enabled, all reads are serialized, and
2. there doesn't seem to be any way to have a memory-backed database that
can be accessed by multiple connections without using a shared cache,  then
I guess I MUST use a file-backed database to get concurrent reads, even
though I don't need the persistence and don't want to take the I/O hit. Am I
making any sense? Anything I'm missing?


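For what it's worth, read_uncommitted only has an effect on connections that actually share a cache. A minimal sketch (in Python, with a made-up in-memory database name) of enabling it on a shared-cache connection:

```python
import sqlite3

# read_uncommitted only applies to connections that share a cache.
# (Database name is made up; this uses SQLite's URI filename syntax.)
uri = "file:ruc_demo?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True)
reader = sqlite3.connect(uri, uri=True)

# Turn the pragma on for the reading connection: it may now see rows that
# another shared-cache connection has written but not yet committed, in
# exchange for weaker isolation.
reader.execute("PRAGMA read_uncommitted = 1")
value = reader.execute("PRAGMA read_uncommitted").fetchone()[0]
print(value)  # 1

writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (42)")
writer.commit()
print(reader.execute("SELECT x FROM t").fetchone())  # (42,)
```

Whether this actually improves concurrency for a given workload is exactly what Wayne reports testing; it relaxes table locking between shared-cache readers and writers, but does not remove the shared-cache serialization itself.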


[sqlite] Concurrent read performance

2013-01-12 Thread Wayne Bradney
I have a requirement for which I'm proposing SQLite as a solution, and would 
appreciate some feedback to clarify my thinking. The application is a shared 
service (.NET/WCF) that caches relational data in-memory and serves it up when 
(read-only) requests come in (via operation contracts) from multiple clients. 
Such client requests are high-level, not relational, and could result in 
several (potentially many) individual queries to the cache itself. These 
individual cache queries are dynamically generated and could be arbitrarily 
complex, but are all essentially relational in nature. The service itself will 
periodically load data from an external data source, transform it and update 
the cache. There's no requirement currently for the cache to ever be persisted 
- it can be reloaded from the external source if necessary, but performance 
(especially read performance) is critical. The amount of data/indexes to cache 
potentially could be quite large (of the order of several gigabytes, let's 
 say). I've already worked up an initial implementation that uses an in-memory 
SQLite database via System.Data.SQLite (1.0.82.0). The service maintains a 
"mode=memory&cache=shared" database, and each SQL query happens on its own 
connection, and in its own thread. Some observations of our read performance 
(when the cache is fully populated and there are no writers): [Experiment 1: 
Memory-backed, single query]
For simple client requests that only result in a single (albeit complex) SQL 
query to the database, performance is excellent, and I'm very happy to get the 
maintenance benefits of using a flexible query language against the cache. 
[Experiment 2: Memory-backed, concurrent queries]
For any client request that results in multiple simultaneous SQL queries to the 
database, those queries seem to be serialized rather than concurrent, and the 
whole request actually performs much worse than the old 
home-grown-but-horribly-limited caching/querying mechanism that was in place 
beforehand, and I'm sad. [Experiment 3: File-backed, concurrent queries, with 
shared cache]
I switched to a file-backed database (but still "cache=shared") and it appears 
that the queries are still being serialized, and is overall about 15% slower 
than Experiment 2. [Experiment 4: File-backed, concurrent queries, without 
shared cache]
I switched to a file-backed database without a shared cache, and performance 
improved dramatically (about 70% faster than Experiment 3). It appears that the 
queries are now truly happening concurrently. So it appears that, since: 1. 
when shared cache is enabled, all reads are serialized, and
2. there doesn't seem to be any way to have a memory-backed database that can 
be accessed by multiple connections without using a shared cache,  then I guess 
I MUST use a file-backed database to get concurrent reads, even though I don't 
need the persistence and don't want to take the I/O hit. Am I making any sense? 
Anything I'm missing?
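The "named in-memory database shared between connections" setup under discussion looks like the following in URI form (sketched in Python with a made-up database name). Note that this is exactly the configuration where, per this thread, reads serialize on the shared cache:

```python
import sqlite3

# A named in-memory database opened through a URI filename: this is what
# "mode=memory&cache=shared" buys you -- the data lives only in RAM, but
# is not private to a single connection. (Database name is made up.)
uri = "file:svc_cache?mode=memory&cache=shared"

loader = sqlite3.connect(uri, uri=True)   # must stay open, or the DB vanishes
loader.execute("CREATE TABLE cache (k TEXT PRIMARY KEY, v TEXT)")
loader.executemany("INSERT INTO cache VALUES (?, ?)", [("a", "1"), ("b", "2")])
loader.commit()

# A second, independent connection sees the same data...
other = sqlite3.connect(uri, uri=True)
print(other.execute("SELECT v FROM cache WHERE k = 'b'").fetchone())  # ('2',)

# ...but, as discussed in this thread, every such connection contends on
# the one shared cache, so concurrent reads serialize.
```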


Re: [sqlite] FTS questions

2013-01-12 Thread Michael Black
Test it yourself:

create virtual table test using fts4(context text);
insert into test values ('c:\folders\video\עברית');
select * from test where context match 'עברית';
If you want a partial match add a wildcard
select * from test where context match 'עברית*';

I don't have the codepage set up so I can't test it, but it sure works for 
English.  I don't see why it wouldn't work for other languages.



-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of moti lahiani
Sent: Saturday, January 12, 2013 10:56 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] FTS questions

Thanks for your reply.
Why I care about the language: according to the documentation:
"A term is a contiguous sequence of eligible characters, where eligible
characters are all alphanumeric characters and all characters with Unicode
codepoint values greater than or equal to 128. All other characters are
discarded when splitting a document into terms. Their only contribution is
to separate adjacent terms."
If the path includes a folder or file name in French, Hebrew, or Arabic, and
the user wants to search for that word, then according to the above the FTS
will not find it. Am I wrong?

About the backslash/slash: if I have a path like the one below and the user
searches for עברית, will the FTS find it?
c:\folders\video\עברית

About the creation: I was confused; the table is not temporary, it's virtual,
so ignore my question.
Thanks




On Sat, Jan 12, 2013 at 5:58 PM, Michael Black  wrote:

> I'm not sure I understand your problem.
> Why do you care what language it is?  Aren't you just wanting to tokenize
> on
> backslash?
>
> Simple way is to replace all spaces in the path with another char (e.g.
> '_')
> then replace all backslashes with a space.
> Then you can just use the default tokenizer and make the same changes on
> any
> user queries.  So you map the user query to your storage format.
>
> Not sure why you would want to use some stem tokenizer on paths.
>
> And, of course, you're reinventing the wheel unless you have some special
> purpose in mind.
>
> http://locate32.cogit.net/
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of moti lahiani
> Sent: Saturday, January 12, 2013 4:37 AM
> To: sqlite-users@sqlite.org
> Cc: Moti LAHIANI
> Subject: [sqlite] FTS questions
>
> Hello all
>
> I'm new to sqlite3 and SQL.
>
> I have a database that includes path columns (file system paths like c:\bla
> bla\myFiles\1.txt)
>
> On those columns I need to do:
>
> 1)  search for patterns, in case the user wants to find a file or
> directory
>
> 2)  search for a prefix path, in case the user renames a file or directory
>
> The database includes about 1.5-2.5 million records, and using "LIKE" is
> not possible because of the response time.
>
> As an alternative I want to use FTS3 or FTS4, but I think I have a problem
> with what I read here: http://www.sqlite.org/fts3.html#section_1_4
>
> And here: http://www.sqlite.org/fts3.html#section_6_3
>
> I need to specify a language for FTS to use for tokenizing, but the path
> can include multiple languages. How can I configure the FTS table to handle
> all languages?
>
> How can I tell FTS to tokenize the path only on the character "\"
> ?
>
>
>
> More than that, when creating the FTS table it is created with the TEMPORARY
> keyword. My question is: do I need to create this table each time I run the
> database (because of the TEMPORARY keyword), or once for each connection (in
> case of multiple connections), or is this a table like all the other tables I
> declared, which stays in the database even if I restart my PC?
>
>
>
> Thanks a lot
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

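Michael's separator-mapping workaround can be captured in a pair of helper functions (the names are made up, and this shows only the string transform, not the FTS table itself): map real spaces to '_' so they survive, then map backslashes to spaces so the default tokenizer splits the path into components:

```python
def encode_path(path: str) -> str:
    """Prepare a path for storage in the FTS column: real spaces become
    '_' so they survive, then backslashes become spaces so the default
    tokenizer yields one term per path component. (Helper names are
    made up for this sketch.)"""
    return path.replace(" ", "_").replace("\\", " ")

def encode_query(term: str) -> str:
    # Apply the same mapping to user queries so they match storage.
    return term.replace(" ", "_").replace("\\", " ")

stored = encode_path(r"c:\bla bla\myFiles\1.txt")
print(stored)  # c: bla_bla myFiles 1.txt
print(encode_query("bla bla"))  # bla_bla
```

Inserting `encode_path(...)` values into the FTS table and running MATCH against `encode_query(...)` values keeps the two sides consistent, regardless of which language the path components are in.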


Re: [sqlite] FTS questions

2013-01-12 Thread moti lahiani
Thanks for your reply.

Why I care about the language: according to the documentation:

"A term is a contiguous sequence of eligible characters, where eligible
characters are all alphanumeric characters and all characters with Unicode
codepoint values greater than or equal to 128. All other characters are
discarded when splitting a document into terms. Their only contribution is
to separate adjacent terms."

If the path includes a folder or file name in French, Hebrew, or Arabic,
and the user wants to search for that word, then according to the above
FTS will not find it. Am I wrong?

About the backslash/slash: if I have a path like the one below and the
user searches for עברית, will FTS find it?
c:\folders\video\עברית

About the creation: I was confused. It is not TEMPORARY, it is VIRTUAL,
so ignore my question.
Thanks
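On the language question, the quoted passage can be exercised directly: characters with codepoint values of 128 or more are *eligible*, so a Hebrew term is indexed by the default tokenizer (an illustrative sketch using Python's sqlite3 module; it assumes an FTS4-enabled SQLite build and the backslash-to-space mapping suggested elsewhere in the thread):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE paths USING fts4(path)")
# Map backslashes to spaces so each path component is a separate term.
# The Hebrew component consists entirely of codepoints >= 128, which the
# default tokenizer treats as eligible (term) characters.
db.execute("INSERT INTO paths VALUES (?)",
           ("c:\\folders\\video\\עברית".replace("\\", " "),))
found = db.execute("SELECT count(*) FROM paths WHERE path MATCH ?",
                   ("עברית",)).fetchone()[0]
print(found)  # 1
```

So non-ASCII component names are findable; it is punctuation and other sub-128 non-alphanumeric characters that are discarded as separators.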




On Sat, Jan 12, 2013 at 5:58 PM, Michael Black  wrote:

> I'm not sure I understand your problem.
> Why do you care what language it is?  Aren't you just wanting to tokenize
> on backslash?
>
> A simple way is to replace all spaces in the path with another character
> (e.g. '_'), then replace all backslashes with a space.
> Then you can just use the default tokenizer and make the same changes on
> any user queries, so you map the user query to your storage format.
>
> Not sure why you would want to use a stemming tokenizer on paths.
>
> And, of course, you're reinventing the wheel unless you have some special
> purpose in mind.
>
> http://locate32.cogit.net/
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of moti lahiani
> Sent: Saturday, January 12, 2013 4:37 AM
> To: sqlite-users@sqlite.org
> Cc: Moti LAHIANI
> Subject: [sqlite] FTS questions
>
> Hello all,
>
> I'm new to sqlite3 and SQL.
>
> I have a database that includes path columns (file system paths like c:\bla
> bla\myFiles\1.txt).
>
> On those columns I need to:
>
> 1)  search for patterns, in case the user wants to find a file or
> directory
>
> 2)  search for a prefix path, in case the user renames a file or directory
>
> The database includes about 1.5-2.5 million records, so using "LIKE" is
> not possible because of the query time.
>
> As an alternative I want to use FTS3 or FTS4, but I think I have a problem
> with what I read here: http://www.sqlite.org/fts3.html#section_1_4
>
> And here: http://www.sqlite.org/fts3.html#section_6_3
>
> I need to specify the language for FTS to use as a tokenizer, but the paths
> can include multiple languages. How can I configure the FTS table to use
> all languages?
>
> How can I tell FTS to tokenize the path only on the character "\"?
>
> More than that, when creating an FTS table it is created with the TEMPORARY
> keyword. My question is: do I need to create this table each time I run the
> database (because of the TEMPORARY keyword), or once per connection (in the
> case of multiple connections), or is it a table like all the other tables I
> declared, which stays in the database even if I restart my PC?
>
> Thanks a lot
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS questions

2013-01-12 Thread Michael Black
I'm not sure I understand your problem.
Why do you care what language it is?  Aren't you just wanting to tokenize
on backslash?

A simple way is to replace all spaces in the path with another character
(e.g. '_'), then replace all backslashes with a space.
Then you can just use the default tokenizer and make the same changes on
any user queries, so you map the user query to your storage format.

Not sure why you would want to use a stemming tokenizer on paths.

And, of course, you're reinventing the wheel unless you have some special
purpose in mind.

http://locate32.cogit.net/
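The mapping described above can be sketched as follows (an illustrative example using Python's sqlite3 module; it assumes an SQLite build with FTS4 enabled, and the table name is made up):

```python
import sqlite3

def to_fts(path):
    # Spaces become '_' so that, after backslashes become spaces, a space
    # marks exactly one path separator; the default tokenizer then splits
    # the stored text on those separators (and on other punctuation).
    return path.replace(" ", "_").replace("\\", " ")

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE paths USING fts4(path)")
db.executemany("INSERT INTO paths VALUES (?)",
               [(to_fts(r"c:\bla bla\myFiles\1.txt"),),
                (to_fts(r"c:\folders\video\clip.avi"),)])

# Map the user's query through the same function before matching.
rows = db.execute("SELECT path FROM paths WHERE path MATCH ?",
                  (to_fts("myFiles"),)).fetchall()
print(rows)  # [('c: bla_bla myFiles 1.txt',)]
```

Queries then go through the FTS index instead of scanning all 1.5-2.5 million rows the way a LIKE with a leading wildcard would.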

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of moti lahiani
Sent: Saturday, January 12, 2013 4:37 AM
To: sqlite-users@sqlite.org
Cc: Moti LAHIANI
Subject: [sqlite] FTS questions

Hello all,

I'm new to sqlite3 and SQL.

I have a database that includes path columns (file system paths like c:\bla
bla\myFiles\1.txt).

On those columns I need to:

1)  search for patterns, in case the user wants to find a file or
directory

2)  search for a prefix path, in case the user renames a file or directory

The database includes about 1.5-2.5 million records, so using "LIKE" is
not possible because of the query time.

As an alternative I want to use FTS3 or FTS4, but I think I have a problem
with what I read here: http://www.sqlite.org/fts3.html#section_1_4

And here: http://www.sqlite.org/fts3.html#section_6_3

I need to specify the language for FTS to use as a tokenizer, but the paths
can include multiple languages. How can I configure the FTS table to use
all languages?

How can I tell FTS to tokenize the path only on the character "\"?

More than that, when creating an FTS table it is created with the TEMPORARY
keyword. My question is: do I need to create this table each time I run the
database (because of the TEMPORARY keyword), or once per connection (in the
case of multiple connections), or is it a table like all the other tables I
declared, which stays in the database even if I restart my PC?

Thanks a lot

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] FTS questions

2013-01-12 Thread moti lahiani
Hello all,

I'm new to sqlite3 and SQL.

I have a database that includes path columns (file system paths like c:\bla
bla\myFiles\1.txt).

On those columns I need to:

1)  search for patterns, in case the user wants to find a file or
directory

2)  search for a prefix path, in case the user renames a file or directory

The database includes about 1.5-2.5 million records, so using "LIKE" is
not possible because of the query time.

As an alternative I want to use FTS3 or FTS4, but I think I have a problem
with what I read here: http://www.sqlite.org/fts3.html#section_1_4

And here: http://www.sqlite.org/fts3.html#section_6_3

I need to specify the language for FTS to use as a tokenizer, but the paths
can include multiple languages. How can I configure the FTS table to use
all languages?

How can I tell FTS to tokenize the path only on the character "\"?

More than that, when creating an FTS table it is created with the TEMPORARY
keyword. My question is: do I need to create this table each time I run the
database (because of the TEMPORARY keyword), or once per connection (in the
case of multiple connections), or is it a table like all the other tables I
declared, which stays in the database even if I restart my PC?

Thanks a lot
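For requirement (2), a prefix-path search does not strictly need FTS: with an ordinary index on the path column, a half-open range scan returns every path under a given directory (an illustrative sketch; the table and index names are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files(path TEXT)")
db.execute("CREATE INDEX files_path_idx ON files(path)")
db.executemany("INSERT INTO files VALUES (?)",
               [("c:\\bla bla\\myFiles\\1.txt",),
                ("c:\\bla bla\\myFiles\\sub\\2.txt",),
                ("c:\\other\\3.txt",)])

prefix = "c:\\bla bla\\myFiles\\"
# path >= prefix AND path < prefix + '\uffff' bounds an index range scan,
# unlike a LIKE pattern, which may not use the index at all under the
# default (case-insensitive) LIKE behavior.
rows = db.execute("SELECT path FROM files WHERE path >= ? AND path < ?",
                  (prefix, prefix + "\uffff")).fetchall()
print(len(rows))  # 2
```

The `prefix + '\uffff'` upper bound is the usual trick for turning a string-prefix test into an index range under BINARY collation.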
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] convert a .sql file to a .db file

2013-01-12 Thread kevnm67
Thanks, Simon! I'll try that out.




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users