Re: [sqlite] newbie has waited days for a DB build to complete. what's up with this.

2016-08-03 Thread Darren Duncan
One way to get a clue is to try doing this in stages. First start over and import a much smaller amount of data, say just a 1GB fraction; see if that completes, and if it does, note how long it takes and other factors like disk and memory usage. If 1GB doesn't work, start smaller yet, until you
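A minimal sketch of what one timed stage might look like from C, with a hypothetical single-column pos table standing in for the real schema (every name and the row count here are illustrative, not from the thread):

    #include <sqlite3.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *ins;
        time_t t0;

        if (sqlite3_open("stage_test.db", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS pos(p INTEGER)", 0, 0, 0);

        t0 = time(0);                      /* coarse wall-clock start */
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        sqlite3_prepare_v2(db, "INSERT INTO pos VALUES(?1)", -1, &ins, 0);
        for (int i = 0; i < 5000000; i++) {   /* one "stage" worth of rows */
            sqlite3_bind_int(ins, 1, i);
            sqlite3_step(ins);
            sqlite3_reset(ins);
        }
        sqlite3_finalize(ins);
        sqlite3_exec(db, "COMMIT", 0, 0, 0);

        printf("stage took ~%ld s\n", (long)(time(0) - t0));
        sqlite3_close(db);
        return 0;
    }

Scaling the stage size up (1x, 2x, 4x ...) and watching whether the time per row stays flat is what separates "it's just a lot of data" from "something degrades as the database grows".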

[sqlite] newbie has waited days for a DB build to complete. what's up with this.

2016-08-03 Thread Kevin O'Gorman
I'm working on a hobby project, but the data has gotten a bit out of hand. I thought I'd put it in a real database rather than flat ASCII files. I've got a problem set of about 1 billion game positions and 187GB to work on (no, I won't have to solve them all) that took about 4 hours for a
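For a one-off load at this scale, the usual levers are large transactions, relaxed durability, and building indexes only after the data is in. A hedged sketch of the kind of settings involved (values are illustrative; these trade crash safety for speed, so they only make sense for an import you can redo from the flat files):

    #include <sqlite3.h>

    /* One-off bulk-load settings: no rollback journal, no fsync per
     * commit, and a large page cache. NOT suitable for a live database. */
    static void tune_for_bulk_load(sqlite3 *db) {
        sqlite3_exec(db, "PRAGMA journal_mode=OFF", 0, 0, 0);
        sqlite3_exec(db, "PRAGMA synchronous=OFF", 0, 0, 0);
        sqlite3_exec(db, "PRAGMA cache_size=-1000000", 0, 0, 0); /* ~1 GB */
    }

Creating any indexes after the billion rows are inserted, rather than before, avoids paying the index-maintenance cost once per row.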

Re: [sqlite] core dump when writing the DB in the middle of the long read

2016-08-03 Thread Richard Hipp
I need more debugging information. Perhaps: (1) Recompile libsqlite3.a from source code, using -O0 (not -O2) and -g. (2) Rerun your program until it crashes. (3) Send me the new stack trace that shows exactly which line the error occurs on. (4) Also send the sqlite_source_id() for the specific version of
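For step (4), the source id is available both as the SQL function sqlite_source_id() and through the C interface; a minimal sketch:

    #include <sqlite3.h>
    #include <stdio.h>

    int main(void) {
        /* C-level equivalent of SELECT sqlite_source_id(); both report
         * the check-in hash and date of the exact SQLite build in use. */
        printf("version:   %s\n", sqlite3_libversion());
        printf("source id: %s\n", sqlite3_sourceid());
        return 0;
    }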

[sqlite] core dump when writing the DB in the middle of the long read

2016-08-03 Thread ChingChang Hsiao
Our SQLite version is "SQLite version 3.8.8.1". We modify configuration data while reading a large configuration DB (show configuration). The write goes through the busy handler and succeeds, but the configuration-reading code then core dumps. Any idea why it core dumps? Is it something to do
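For reference, the busy-handler contract is worth restating: a lock conflict should surface as SQLITE_BUSY or a retry, never as a crash, so a core dump usually points at something else, such as using a connection or statement from two threads at once, or after it was finalized. A minimal sketch of how a handler is registered (the back-off policy is illustrative):

    #include <sqlite3.h>
    #include <unistd.h>

    /* SQLite calls this when it cannot obtain a lock. Returning nonzero
     * asks SQLite to retry; returning zero makes the blocked call fail
     * with SQLITE_BUSY. */
    static int busy_cb(void *arg, int n_prior_calls) {
        (void)arg;
        if (n_prior_calls >= 100) return 0;  /* give up after ~100 tries */
        usleep(10000);                       /* back off 10 ms, retry */
        return 1;
    }

    void install_handler(sqlite3 *db) {
        sqlite3_busy_handler(db, busy_cb, 0);
    }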

Re: [sqlite] Is reading data from SQLite costing too much time for more than a few tens or hundreds of megabytes

2016-08-03 Thread Warren Young
On Aug 2, 2016, at 9:04 PM, 梨田 <1565050...@qq.com> wrote: > I find that when the data in the database is larger than tens of megabytes, it takes more than 5~10s to read it. If the time required to run a given SELECT call increases linearly as a function of the database size, you're probably
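If the SELECT is scanning the whole table, EXPLAIN QUERY PLAN will say so, and an index on the filtered column typically turns the linear cost into a logarithmic one. A sketch against a hypothetical history(ts, value) table (all names are illustrative):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Print the query plan: "SCAN" in the output means a full table
     * scan; "SEARCH ... USING INDEX" means the index is being used. */
    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *st;
        if (sqlite3_open("history.db", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS history(ts INTEGER, value TEXT)",
            0, 0, 0);
        sqlite3_exec(db,
            "CREATE INDEX IF NOT EXISTS idx_hist_ts ON history(ts)",
            0, 0, 0);

        sqlite3_prepare_v2(db,
            "EXPLAIN QUERY PLAN SELECT value FROM history WHERE ts > ?1",
            -1, &st, 0);
        while (sqlite3_step(st) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(st, 3));
        sqlite3_finalize(st);
        sqlite3_close(db);
        return 0;
    }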

[sqlite] No messages from this list since outage

2016-08-03 Thread Ralf Junker
Since the previous mailing list outage, I was able to post to this list (http://www.mail-archive.com/sqlite-users%40mailinglists.sqlite.org/msg98671.html and http://www.mail-archive.com/sqlite-users%40mailinglists.sqlite.org/msg98672.html), but I have not received any messages since. I have

Re: [sqlite] SQLite File-Locks on SMB-Share

2016-08-03 Thread Wolfgang Haupt
Clemens Ladisch wrote on Wed., Aug. 3, 2016 at 14:42: > Wolfgang Haupt wrote: > > I've been playing around recently at home with mysql > Oh? Sorry, of course that should be SQLite :) > > I was interested why smb breaks the oplock and found that every time I > > execute my command: > > SQLiteDataReader reader = command.ExecuteReader(); // C# code > > wireshark shows me 3 lock requests/responses to/from

Re: [sqlite] SQLite File-Locks on SMB-Share

2016-08-03 Thread Clemens Ladisch
Wolfgang Haupt wrote: > I've been playing around recently at home with mysql Oh? > I was interested why smb breaks the oplock and found that every time I > execute my command: > SQLiteDataReader reader = command.ExecuteReader(); // C# code > wireshark shows me 3 lock requests/responses to/from
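Each of those lock requests is a byte-range lock on the database file, and over SMB every one is a network round trip. If the file on the share truly never changes, the immutable=1 URI parameter is one documented way to skip the locking and change detection entirely; a sketch, with a hypothetical mapped-drive path:

    #include <sqlite3.h>

    /* Sketch: open a database on a read-only share without taking file
     * locks. immutable=1 tells SQLite the file cannot change, so it is
     * ONLY safe if nothing ever writes to it. The Z: path is made up. */
    int open_readonly_share(sqlite3 **db) {
        return sqlite3_open_v2(
            "file:///Z:/data/app.db?immutable=1",
            db,
            SQLITE_OPEN_READONLY | SQLITE_OPEN_URI,
            NULL);
    }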

[sqlite] SQLite File-Locks on SMB-Share

2016-08-03 Thread Wolfgang Haupt
Hi everyone, I've been playing around recently at home with mysql databases on my smb share. In my tests I only used SELECT statements on Windows 10, using the C# ADO.NET wrapper with the latest NuGet package. I've opened the database and selected a few records on my workstation, subsequent

Re: [sqlite] Is reading data from SQLite costing too much time for more than a few tens or hundreds of megabytes

2016-08-03 Thread Richard Hipp
On 8/2/16, 梨田 <1565050...@qq.com> wrote: > Dear friend: > Hi, I am an SQLite (3.7.7.1) user and I have a question for you. I find that > when the data in the database is larger than tens of megabytes, it takes > more than 5~10s to read it. Is that a reasonable amount of time?

[sqlite] Is reading data from SQLite costing too much time for more than a few tens or hundreds of megabytes

2016-08-03 Thread 梨田
Dear friend: Hi, I am an SQLite (3.7.7.1) user and I have a question for you. I find that when the data in the database is larger than tens of megabytes, it takes more than 5~10s to read it. Is that a reasonable amount of time? Situation: one history table, one