Re: [sqlite] LSM bug

2014-04-29 Thread sqlite
On Tuesday, April 29, 2014 03:38:57 AM Dan Kennedy wrote:
> On 04/29/2014 03:53 AM, sql...@charles.derkarl.org wrote:
> > I didn't know this list strips attachments, so the source file is here:
> > 
> > http://derkarl.org/~charles/runlsm.cpp
> 
> Thanks for this. It is a problem.
> 
> LSM accumulates data in a tree structure in shared-memory until there is
> "enough" (~1-2 MB) to flush through to the database file. But at the
> moment, it can only flush data to the db file between transactions. And
> the in-memory tree can only hold 2GB of data (including overhead). So
> things fail if a single transaction exceeds that limit. In the short
> term, it should be changed to return LSM_FULL for any transaction too
> large to handle. But the real fix should be to change things so that LSM
> can begin flushing data to the database file mid-transaction.

I'm also seeing a similar problem in which it silently discards entries, but I 
haven't been able to narrow down an example for you. Let me know if that would 
be helpful and I'll try harder.

What could I do to improve LSM?

Charles

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite and GPFS

2014-04-29 Thread Roman Fleysher
Dear SQLiters,

I understand that SQLite relies on the underlying filesystem to perform flushes 
to disk. This is a problem on network file systems (NFS) when a disk is mounted 
on several nodes of a compute cluster and SQLite is run on them. Essentially, 
NFS disallows running SQLite concurrently on several nodes accessing the same 
database.

What about GPFS? As I read its description, it promises to flush and report 
correctly.

Thank you,

Roman


Re: [sqlite] When to open/close connections

2014-04-29 Thread Simon Slavin

On 29 Apr 2014, at 2:24pm, Drago, William @ MWG - NARDAEAST wrote:

> Does closing the connection force, or at least encourage, the OS to write to 
> disk whatever it might have been caching?

Closing a connection closes the underlying database file handle (assuming the 
file was actually opened, i.e. at least one operation was performed on the 
database).  So the operating system does everything it would normally do when 
a file is closed.

This means closing a connection can take a long time because it does lots of 
things.  For example under Windows/Mac/Unix if the file has been changed it 
changes the 'last modified' date on the file.  Consequently you would not want 
to keep opening and closing a connection in a program that made a lot of 
changes to a SQLite database.
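To make the behaviour concrete, here is a small sketch using Python's sqlite3 bindings (the table and value are arbitrary illustrations): durability is established at COMMIT, and close() merely releases the file handle and lets the OS finish its bookkeeping.

```python
import sqlite3, os, tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")

# Write in one connection; the data becomes durable at commit().
conn = sqlite3.connect(db)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (42)")
conn.commit()   # database/journal written and synced here
conn.close()    # file handle closed; OS does its usual close-time work

# A fresh connection sees the committed data.
conn2 = sqlite3.connect(db)
value = conn2.execute("SELECT x FROM t").fetchone()[0]
conn2.close()
```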

Simon.


Re: [sqlite] When to open/close connections

2014-04-29 Thread Drago, William @ MWG - NARDAEAST
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Yuriy Kaminskiy
> Sent: Monday, April 28, 2014 8:36 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] When to open/close connections
>
> Simon Slavin wrote:
> > On 28 Apr 2014, at 11:11pm, RSmith  wrote:
> >
> >> The second approach is better when you rarely access the database, and it
> >> also ensures that releases happen (or at least surfaces errors immediately
> >> if they don't). But keeping a connection open is much faster when hundreds
> >> of accesses happen - especially for loads of small queries; for large
> >> queries it won't matter either way.
> >
> > I add two other reasons to Ryan's: if you are extremely short of memory
> > (for example on a tiny low-power device), or if your file system lacks
> > proper flushing because, although flush() is documented, in reality it
> > does nothing.
>
> s/flush/sync/.
> However, I don't see how closing/opening the db would make it any better.
> The database/journal is written and synced at transaction end, not at
> "connection" close.

Does closing the connection force, or at least encourage, the OS to write to 
disk whatever it might have been caching?




Re: [sqlite] Question : how to retrieve a byte array

2014-04-29 Thread Mike King
I'm not sure this sounds like a SQLite issue but I think File.ReadAllBytes
and File.WriteAllBytes in the System.IO namespace may do what you want.
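The same idea in Python, for anyone without .NET at hand: an SQLite database file is just a byte array, so reading the whole file and writing those bytes back yields a working copy. The file names here are arbitrary, and the connection must be closed first so the file is complete on disk.

```python
import sqlite3, os, tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "game.db")

# Build a small database, then close it so the file is complete on disk.
conn = sqlite3.connect(src)
conn.execute("CREATE TABLE save (slot INTEGER, data TEXT)")
conn.execute("INSERT INTO save VALUES (1, 'checkpoint-3')")
conn.commit()
conn.close()

# "ReadAllBytes": the whole database file as one byte array.
with open(src, "rb") as f:
    blob = f.read()

# "WriteAllBytes": writing those bytes back yields a working database.
dst = os.path.join(workdir, "restored.db")
with open(dst, "wb") as f:
    f.write(blob)

conn = sqlite3.connect(dst)
restored = conn.execute("SELECT data FROM save WHERE slot = 1").fetchone()[0]
conn.close()
```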

Cheers,


On 28 April 2014 22:58, Denis Bezeau  wrote:

> Hi, I am currently working on a video game using sqlite3, and I am having
> an issue when saving the database on the platform I'm on. The save function
> for these platforms uses either a string or a byte array. So I need to be
> able to get a byte array of the database, and to be able to rebuild the
> database from that byte array or string.
>
>
>
> So far I have not found anything that gets me the information I need, so I
> wrote my own Dump function that dumps all the SQL queries needed to rebuild
> the database from scratch, but it is pretty slow.
>
>
>
> I am working in C# under the Unity engine.
>
> Thanks for any help I can get.
>


Re: [sqlite] bug or inconsistency in column headers

2014-04-29 Thread Richard Hipp
On Tue, Apr 29, 2014 at 6:35 AM,  wrote:

>
> I'm not sure whether this is a bug or not.
> I stumbled across a situation where a table alias sometimes is reflected
> in the column name and sometimes not.
>
> Uncomment the last /*LEFT*/ JOIN and the column headers are id_flaeche,
> name, beschreibung etc. vs. vf.id_flaeche, mg.name, mvf.beschreibung as
> it stands now.
>

See FAQ #28:  http://www.sqlite.org/faq.html#q28
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] LSM bug

2014-04-29 Thread Dan Kennedy

On 04/29/2014 03:53 AM, sql...@charles.derkarl.org wrote:

> I didn't know this list strips attachments, so the source file is here:
>
> http://derkarl.org/~charles/runlsm.cpp


Thanks for this. It is a problem.

LSM accumulates data in a tree structure in shared-memory until there is 
"enough" (~1-2 MB) to flush through to the database file. But at the 
moment, it can only flush data to the db file between transactions. And 
the in-memory tree can only hold 2GB of data (including overhead). So 
things fail if a single transaction exceeds that limit. In the short 
term, it should be changed to return LSM_FULL for any transaction too 
large to handle. But the real fix should be to change things so that LSM 
can begin flushing data to the database file mid-transaction.
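Until that change lands, a workaround on the application side is to keep each transaction bounded. The LSM C API is different, but the chunked-commit pattern can be sketched with Python's sqlite3 module (the table name and chunk size here are illustrative assumptions, not part of LSM):

```python
import sqlite3

def bulk_load(db_path, rows, chunk_size=10_000):
    """Insert rows in bounded transactions so no single transaction
    grows without limit - the workaround pattern for engines that
    cannot flush data to disk mid-transaction."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    pending = 0
    for k, v in rows:
        conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))
        pending += 1
        if pending >= chunk_size:
            conn.commit()   # flush point: bounds in-memory transaction state
            pending = 0
    conn.commit()           # commit the final partial chunk
    conn.close()
```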


Dan.

On Monday, April 28, 2014 01:41:02 PM sql...@charles.derkarl.org wrote:

Hi,

I'm not exactly sure this is the right forum for my problem, as I know that
LSM is experimental, but here we go.

I tried loading a whole lot of data using LSM. The majority of the data
goes into a single huge transaction (begin/commit pair). My program
segfaults once we're 1.61GB into my data file.

I have attached the source code to my test program. You also will need my
data file, which is too big for email:

http://www.derkarl.org/~charles/lsmlog.bz2 (744 MiB)

Here is how you can run my test program:

bunzip2 < lsmlog.bz2 | pv | ./a.out lsmdbtocreate

(You can exclude "pv" from the pipeline if you don't have it installed)

Here is the backtrace:

treeShmalloc (pDb=pDb@entry=0x12b20a8, bAlign=bAlign@entry=1,
nByte=nByte@entry=12, pRc=pRc@entry=0x7fff2fd43f44)
 at src/lsm_tree.c:682
682 pNext->iNext = 0;
(gdb) bt
#0  treeShmalloc (pDb=pDb@entry=0x12b20a8, bAlign=bAlign@entry=1,
nByte=nByte@entry=12, pRc=pRc@entry=0x7fff2fd43f44)
 at src/lsm_tree.c:682
#1  0x0041122d in treeShmallocZero (pDb=pDb@entry=0x12b20a8,
nByte=nByte@entry=12, piPtr=piPtr@entry=0x7fff2fd43f4c,
 pRc=pRc@entry=0x7fff2fd43f44) at src/lsm_tree.c:711
#2  0x00413114 in newTreeLeaf (pRc=0x7fff2fd43f44,
piPtr=0x7fff2fd43f4c, pDb=0x12b20a8) at src/lsm_tree.c:726
#3  treeInsertLeaf (iSlot=1, iTreeKey=2146172860, pCsr=0x7fff2fd43f50,
pDb=0x12b20a8) at src/lsm_tree.c:1039
#4  treeInsertEntry (pDb=pDb@entry=0x12b20a8, flags=8,
pKey=pKey@entry=0x12b2058, nKey=nKey@entry=17, pVal=,
 pVal@entry=0x12bb638, nVal=21) at src/lsm_tree.c:1552
#5  0x0041329f in lsmTreeInsert (pDb=pDb@entry=0x12b20a8,
pKey=pKey@entry=0x12b2058, nKey=nKey@entry=17,
 pVal=pVal@entry=0x12bb638, nVal=) at src/lsm_tree.c:1587
#6  0x00404db0 in doWriteOp (pDb=0x12b20a8, bDeleteRange=, pKey=0x12b2058, nKey=17, pVal=0x12bb638,
 nVal=) at src/lsm_main.c:696
#7  0x0040305d in main (argc=2, argv=0x7fff2fd44418) at runlsm.cpp:41
(gdb) print pNext
$1 = (ShmChunk *) 0x



It's a pity that LSM isn't ready for production, because if the quality of
sqlite3 is any indication, I'm going to really enjoy using it!

Charles



[sqlite] Question : how to retrieve a byte array

2014-04-29 Thread Denis Bezeau
Hi, I am currently working on a video game using sqlite3, and I am having
an issue when saving the database on the platform I'm on. The save function
for these platforms uses either a string or a byte array. So I need to be
able to get a byte array of the database, and to be able to rebuild the
database from that byte array or string.

 

So far I have not found anything that gets me the information I need, so I
wrote my own Dump function that dumps all the SQL queries needed to rebuild
the database from scratch, but it is pretty slow.

 

I am working in C# under the Unity engine.

Thanks for any help I can get.
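For comparison, here is a sketch of the same dump-and-rebuild approach using Python's sqlite3 module, whose iterdump() produces exactly this kind of SQL script. The C#/Unity equivalent would differ, and the table and rows below are made up for illustration:

```python
import sqlite3

# Original in-memory database with some sample data.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE score (player TEXT, points INTEGER)")
src.execute("INSERT INTO score VALUES ('ada', 120), ('bob', 95)")
src.commit()

# Serialize the whole database to a single SQL string...
sql_text = "\n".join(src.iterdump())

# ...and rebuild an equivalent database from that string.
dst = sqlite3.connect(":memory:")
dst.executescript(sql_text)
rows = dst.execute(
    "SELECT player, points FROM score ORDER BY player"
).fetchall()
```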



[sqlite] bug or inconsistency in column headers

2014-04-29 Thread beell

I'm not sure whether this is a bug or not.
I stumbled across a situation where a table alias is sometimes reflected in the 
column name and sometimes not.
It *is* reflected when the last join is a *LEFT* one.
Script (sorry that I couldn't come up with a simpler test case):


CREATE TABLE t_variablen_flaeche (
  id           INT32,
  id_var       INT32,
  id_flaeche   INT32,
  wert         DOUBLE,
  herkunft     TEXT,
  datum        DATETIME,
  bemerkungen  TEXT,
  id_variante  INT32
);

CREATE TABLE t_variablen_flaeche_metadaten (
  id_var                 INT32,
  beschreibung           TEXT,
  iwg_einheit            TEXT,
  bemerkungen            TEXT,
  id_stoff               INT32,
  familie                TEXT,
  id_spezifikation       INT32,
  variante_beschreibung  TEXT,
  variante_datenquelle   TEXT
);

CREATE TABLE t_variablen_gesamt (
  id           INT32,
  art          TEXT,
  name         TEXT,
  bemerkungen  TEXT,
  herkunft     TEXT
);

INSERT INTO t_variablen_flaeche (id_var, id_flaeche, wert)
VALUES (1449, 10001, 291.158), (2271, 10001, 0.15);

INSERT INTO t_variablen_flaeche_metadaten (id_var, beschreibung, iwg_einheit)
VALUES (1449, 'Fläche des Analysegebiets', 21),
       (2271, 'geogener CD-Gehalt im Oberboden von natürlich bedecktem Land und Bergregionen, v2', 33);

INSERT INTO t_variablen_gesamt (id, name)
VALUES (1449, 'BI_AU_A'), (2271, 'ER_CONT_geo_HM_v2_CD');

SELECT
  vf.id_flaeche,
  mg.name,
  mvf.beschreibung,
  mvf.id_var,
  vf.wert,
  mvf.iwg_einheit
FROM
  t_variablen_gesamt mg
  JOIN t_variablen_flaeche_metadaten mvf
    ON mg.id = mvf.id_var
  /*LEFT*/ JOIN (
    SELECT
      id_var,
      id_flaeche,
      wert
    FROM
      t_variablen_flaeche
    WHERE
      id_flaeche = 10001
  ) vf
    ON vf.id_var = mvf.id_var
WHERE
  vf.id_flaeche IS NOT NULL
ORDER BY
  vf.id_flaeche,
  mg.name;
 
Uncomment the last /*LEFT*/ JOIN and the column headers are id_flaeche, name, 
beschreibung etc. vs. vf.id_flaeche, mg.name, mvf.beschreibung as it stands now.
Tested with the latest command line version under Windows, the latest .NET 
provider and SQLiteSpy (SQLite version 3.8.0.2).
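A small illustration of the underlying rule (FAQ #28), sketched with Python's sqlite3 bindings: the column name of an un-aliased expression is unspecified and may vary, while an explicit AS clause pins it down. The single-column table here is a made-up stand-in for the schema above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id_flaeche INTEGER)")
conn.execute("INSERT INTO t VALUES (10001)")

# Without AS: the result-column name is not guaranteed; it may come out
# as 'id_flaeche' or 'vf.id_flaeche' depending on version and query shape.
cur = conn.execute("SELECT vf.id_flaeche FROM t vf")
names_plain = [d[0] for d in cur.description]

# With an explicit AS clause, the name is fully determined.
cur = conn.execute("SELECT vf.id_flaeche AS id_flaeche FROM t vf")
names_as = [d[0] for d in cur.description]
conn.close()
```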


Re: [sqlite] Problems with INT columns

2014-04-29 Thread Simon Slavin

On 29 Apr 2014, at 10:15am, Kleiner Werner  wrote:

> If I understand the SQLite docs correctly there is no difference between INT
> and INTEGER, except when a column is used as an autoincrementing primary key.
> I thought an INT column in SQLite is always a 64-bit integer, so why does
> SQLiteDataReader recognize it as a 32-bit integer?

The problem is not inside SQLite itself.  As far as SQLite is concerned INT and 
INTEGER are treated identically.  You can verify this for yourself by doing

SELECT myColumn, typeof(myColumn) FROM myTable LIMIT 1

Both versions of your database should return exactly the string 'integer' for 
the type.  If they don't please post again because I'd love to see it.
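A quick check along those lines, sketched with Python's sqlite3 module (the 13-digit value is a made-up example): typeof() reports 'integer' for a value stored in an INT column and in an INTEGER column alike, and the full 64-bit value survives.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The same column declared INT in one table and INTEGER in the other.
conn.execute("CREATE TABLE a (n INT)")
conn.execute("CREATE TABLE b (n INTEGER)")

big = 1893369600000   # a 13-digit value, larger than any Int32
conn.execute("INSERT INTO a VALUES (?)", (big,))
conn.execute("INSERT INTO b VALUES (?)", (big,))

# SQLite itself reports the same storage class for both declarations.
type_a = conn.execute("SELECT typeof(n) FROM a").fetchone()[0]
type_b = conn.execute("SELECT typeof(n) FROM b").fetchone()[0]
val_a  = conn.execute("SELECT n FROM a").fetchone()[0]
conn.close()
```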

The problem must lie in your interface to SQLite, which is probably 
SQLiteDataReader itself.  I suspect that the problem is somewhere in or around

private SQLiteType GetSQLiteType(int i)

which is on line 965 of



but I don't know C# well enough to get any closer.  From what I can see, 
though, it assumes that SQLite has column types, which is, of course, not true.

> What is the difference if I declare a column as bigint or int?

In SQLite, none.  As you correctly understood from section 2.2 of



both pieces of text are actually interpreted as meaning "INTEGER".  However, it 
looks like SQLiteDataReader does things differently.

Simon.


[sqlite] Problems with INT columns

2014-04-29 Thread Kleiner Werner
Hello

In our SQLite DB there are some tables with columns of type "int" which contain 
13-digit values, like 189336960

It seems that the values are stored correctly in the SQLite table. But when our 
Windows tool using SQLiteDataReader tries to select such a value, it returns 
wrong values.

We tracked it down to the column being created as type INT instead of 
INTEGER.  SQLite treats these as the same, but System.Data.SQLite treats INT as 
Int32 and INTEGER as Int64 when using SQLiteDataReader.

Changing the columns in the affected table from INT to INTEGER makes the 
errors go away.

SQLiteDataReader reader = _sqlDbCommand.ExecuteReader();
int fieldCount = reader.FieldCount;

while (reader.Read())
{
    for (int i = 0; i < fieldCount; i++)
    {
        string name = reader.GetName(i);
        string value = reader.GetValue(i).ToString();
        string typeName = reader.GetDataTypeName(i);
        string typeName2 = reader.GetProviderSpecificFieldType(i).ToString();
        // ...
        // INT is treated as Int32
        // INTEGER is treated as Int64
        // ...
    }
}

Does anyone have an idea why this behavior occurs?
If I understand the SQLite docs correctly there is no difference between INT 
and INTEGER, except when a column is used as an autoincrementing primary key.
I thought an INT column in SQLite is always a 64-bit integer, so why does 
SQLiteDataReader recognize it as a 32-bit integer?

What is the difference if I declare a column as BIGINT vs. INT?

best regards
Hans

