Simon - instead of using vacuum, it's much faster to create a new
database from the old one, then rename it. It's easy to do this in
Python using iterdump(), or you can connect to the new (empty)
database, do your create table statements, attach the old database as
olddb, then do:
insert into
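A rough Python sketch of the attach-and-copy rebuild (file, table, and column names here are made up for illustration):

```python
import os
import sqlite3

# Build a throwaway "old" database to rebuild from.
for f in ("old.db", "new.db"):
    if os.path.exists(f):
        os.remove(f)
old = sqlite3.connect("old.db")
old.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, data TEXT)")
old.executemany("INSERT INTO t VALUES (?, ?)", [(i, str(i)) for i in range(100)])
old.commit()
old.close()

# The rebuild: create the schema in a fresh database, attach the old
# one, bulk-copy the rows, then rename the new file over the old.
new = sqlite3.connect("new.db")
new.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, data TEXT)")
new.execute("ATTACH DATABASE 'old.db' AS olddb")
new.execute("INSERT INTO t SELECT * FROM olddb.t")
new.commit()
new.execute("DETACH DATABASE olddb")
rows = new.execute("SELECT count(*) FROM t").fetchone()[0]
new.close()
os.replace("new.db", "old.db")  # the rename step
print(rows)  # 100
```

The copy runs as one bulk INSERT ... SELECT, which avoids the page-by-page shuffling that vacuum does.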
If you have a table where rows are inserted but never deleted, and you
have a rowid column, you can use this:
select seq from sqlite_sequence where name = 'tablename'
This will return instantly, without scanning any rows or indexes, and
is much faster than max(rowid) for huge tables.
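Note that sqlite_sequence is only maintained for tables declared with AUTOINCREMENT; a minimal sketch (table name is an example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# sqlite_sequence only exists and updates for AUTOINCREMENT tables.
con.execute("CREATE TABLE log (id INTEGER PRIMARY KEY AUTOINCREMENT, msg TEXT)")
con.executemany("INSERT INTO log (msg) VALUES (?)",
                [("m%d" % i,) for i in range(1000)])
con.commit()
seq = con.execute("SELECT seq FROM sqlite_sequence"
                  " WHERE name = 'log'").fetchone()[0]
mx = con.execute("SELECT max(id) FROM log").fetchone()[0]
print(seq, mx)  # 1000 1000
```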
If no rows
To add to the real-world performance info, I recently upgraded the
backup program I'm developing, HashBackup, from SQLite 3.6.18 to
3.8.8, mostly to see if it fixed some occasional "Database is locked"
errors (multithreaded). For these tests, nothing was changed except
the SQLite library.
On Wed, Mar 16, 2011 at 3:15 PM, Travis Orr wrote:
> I am currently working on a project that requires retrieving a list of
> all the rows from a FTS3 table. The ordering of the results varies by
> search criteria. Since this is for an embedded project the list results
> are passed
>
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Jim Wilcoxson
> Sent: March-16-11 12:51 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Optimizing list retrieval with a FTS3 table
>
Eric Smith writes:
>
> Jay A. Kreibich wrote:
>
> > Try getting rid of the PK definition and see how much that buys you.
> > It might be worth it, or it might not.
>
> and Simon Slavin wrote:
>
> > We know you are doing a huge amount of writing to this database. Are
> >
On 6/22/10, Eric Smith wrote:
> I have confirmed that INSERT times are roughly logarithmic in
> the number of existing records after creating my specific user
> indices.
>
> But INSERT times appeared to be *linear* in the number of existing
> records before I had created any
On 6/22/10, Eric Smith <eas@gmail.com> wrote:
> Jim Wilcoxson wrote:
...
>> Did you see my earlier note about combining your two integers into the
>> primary key? This will also give you constant insert times, if you
>> insert items in the order:
...
> Thanks
On Fri, Jul 9, 2010 at 10:38 AM, Ke Tao wrote:
>
> HI All ,
>
> I have sqlite db name "wdb" and "wdb-journal" file was created by power
> failure something , when I do any db operation sqlite always prompt "disk
> I/O error" , but when I delete the "wdb-journal" ,there is no
On Fri, Jul 9, 2010 at 3:21 PM, Jim Wilcoxson <pri...@gmail.com> wrote:
> On Fri, Jul 9, 2010 at 10:38 AM, Ke Tao <kota...@gmail.com> wrote:
>
>>
>> HI All ,
>>
>> I have sqlite db name "wdb" and "wdb-journal" file was created by
On Tue, Jul 13, 2010 at 8:06 PM, Roger Binns wrote:
>
> On 07/13/2010 04:57 PM, Simon Slavin wrote:
> > One on each page and one for the entire file that checksums the page
> checksums ?
>
> One for each page plus one of the header would make the most sense, but the
>
I
On Wed, Jul 14, 2010 at 1:35 AM, Roger Binns <rog...@rogerbinns.com> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 07/13/2010 05:30 PM, Jim Wilcoxson wrote:
> > I don't think this would work, because the problem described is that the
> > w
On Fri, Jul 16, 2010 at 6:01 AM, Max Vlasov wrote:
> Hi,
>
> always wanted to have a possibility to calculate how much a table occupies.
> As long as I see from the archive, there's no out-of-the-box solution
> (CMIIW)
>
>
You mean, like this?
pragma page_count
pragma
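Those two pragmas give the size of the whole file, not of one table; a sketch of the arithmetic (the table here is an example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("a" * 100,) for _ in range(1000)])
con.commit()
page_count = con.execute("PRAGMA page_count").fetchone()[0]
page_size = con.execute("PRAGMA page_size").fetchone()[0]
db_bytes = page_count * page_size
# The database must hold at least the ~100 KB of raw row data.
print(db_bytes > 100 * 1000)  # True
```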
On Fri, Jul 16, 2010 at 9:13 AM, Jim Wilcoxson <pri...@gmail.com> wrote:
> On Fri, Jul 16, 2010 at 6:01 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> Hi,
>>
>> always wanted to have a possibility to calculate how much a table
>> occupies.
>> A
On 8/10/10, Paweł Hajdan, Jr. wrote:
> So this is another chromium patch I'd like to submit:
> http://src.chromium.org/viewvc/chrome/trunk/src/third_party/sqlite/preload-cache.patch?revision=26596=markup
>
> I'm not the author of that one, but the main idea seems to be
SQLite is a great product; I use SEE, the encrypted edition.
For those of us who are not using the C interface, I'd like to request
that when possible, new C features like hinting about file sizes and
extents also be added as pragmas. Pragmas make it very easy for non-C
interfaces to make use of
On 8/19/10, Simon Slavin wrote:
>
> On 19 Aug 2010, at 9:27pm, Taras Glek wrote:
>
>> I really appreciate that sqlite got this feature to reduce
>> fragmentation, but why not expose this as a pragma?
>
> Do you have figures which suggest that reducing fragmentation leads to
On 8/20/10, Cory Nelson wrote:
> +1 for this feature request. They've got a very specific and fairly
> rare use case, but when opportunity strikes partial indexes are much
> more convenient, straightforward, and efficient than the alternative.
>
> - If a table has
On 8/20/10, Cory Nelson <phro...@gmail.com> wrote:
> On Fri, Aug 20, 2010 at 7:39 AM, Jim Wilcoxson <pri...@gmail.com> wrote:
>> ...
>> The best I could come up with is a separate table. The problem is,
>> indexing the SHA1 normally means there is a copy in t
Using the SQLite Encryption Edition rev 3.7.0.1, the latest released
version, on OSX Tiger (10.4), results in an undefined reference to
gethostuuid. Is OSX 10.4 no longer supported?
Thanks,
Jim
--
HashBackup: easy onsite and offsite Unix backup
http://sites.google.com/site/hashbackup
On Mon,
I'd also be interested in a VACUUM TO feature, more for performance aspect
than the fragmentation, although that's a plus too. The backup program I'm
working on packs many files into archives, which are SQLite databases. I
have run some vacuum tests here; the filesystem cache was purged before
This may be a bug in 3.7.1, or I may have been taking advantage of a flaw in
3.6.18:
HashBackup uses SQLite to store backup data in a meta data db and file data
archive db. First I open the main database, then attach an archive
database. When the archive gets full, I detach it, create a new
This type of thing works with SQLite 3.6.18. I have a similar issue with
3.7. I believe the difference is that in 3.6.18, if you do a select on a
main database that doesn't involve an attached database, the attached
database isn't locked. In 3.7, if you do a select on the main database that
On Thu, Oct 21, 2010 at 1:27 PM, Dustin Sallings wrote:
>
> On Oct 21, 2010, at 10:05, Pavel Ivanov wrote:
>
> > I think it's not related to fragmentation, but to fill percentage of
> > b-tree pages. I guess your reconstructed table is much less in total
> > size than your
Jonathan - 500 queries per second is 2ms per query. You'll have a hard time
getting that kind of speed for random queries with any rotating media. Your
database needs to be in memory - all of it, not just indexes - or on a flash
drive.
If your queries are not random but are somehow related, eg,
On Thu, Nov 25, 2010 at 11:06 AM, Tito Ciuro wrote:
> On 25 Nov 2010, at 12:51, Igor Tandetnik wrote:
>
> > Run "PRAGMA integrity_check" right after opening.
>
> That could be a potentially slow operation if the database is valid and
> contains lots of
A lot of the SQLite overhead is going to be in the stuff surrounding your
actual data; I'd be surprised if you saved much space by using fixed-size
ints vs the varints used by SQLite. You didn't mention about indexes; if
you have any, they will take a lot of space because your row size is so
On Fri, Jan 21, 2011 at 6:53 PM, Josh Gibbs wrote:
> Could someone please clarify for me if the the resulting order of a UNION
> query will come back with the left data first, then the right data in
> the case
> that no ordering has been defined for the query.
>
> My need is to
It looks interesting. Should your except stmt reference apsw? -Jim
--
HashBackup: easy onsite and offsite Unix backup
http://www.hashbackup.com
On Wed, Jan 26, 2011 at 11:38 AM, Chris Wolf wrote:
>
>
> I know this is an old thread, but shortly after I read it, I attempted
On Thu, Feb 3, 2011 at 5:07 PM, Kevin Wojniak wrote:
>
> On Feb 3, 2011, at 11:41 AM, Petite Abeille wrote:
>
> > On Feb 3, 2011, at 6:53 PM, Kevin Wojniak wrote:
> >
> >> The trigger is run once via sqlite3_exec();
> >
> > Hmm... you mean the trigger is run every single
If you don't care about the order and the integers are smallish, like 31
bits or less, I'd do this:
create table t(k int primary key);
i = whatever
j = whatever
if i < j:
k = i<<32 | j
else:
k = j<<32 | i
insert k into table
To see if a pair is in the table, do the same steps and lookup
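A sketch of the packed-key idea in Python (values and table name are examples; the packing assumes both integers fit in 31 bits):

```python
import sqlite3

def pack(i, j):
    # Order-independent key for a pair of ints that each fit in 31 bits.
    return (i << 32) | j if i < j else (j << 32) | i

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY)")
con.execute("INSERT INTO t VALUES (?)", (pack(17, 42),))
# Lookup works regardless of the order the pair arrives in.
found = con.execute("SELECT 1 FROM t WHERE k = ?", (pack(42, 17),)).fetchone()
print(found is not None)  # True
```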
On Wed, Feb 9, 2011 at 5:25 PM, Simon Slavin wrote:
>
> On 9 Feb 2011, at 7:32pm, Black, Michael (IS) wrote:
>
> > Didn't we just determine a couple of days ago that triggers were
> performance killers?
> >
> > That's one reason I want to avoid those.
>
> Okay, then since
On Fri, Feb 11, 2011 at 10:50 PM, Thomas Fjellstrom <
tfjellst...@strangesoft.net> wrote:
> I've seen numbers on the internet stating that sqlite is super fast, should
> be
> able to handle tens of thousands of inserts or updates in a second or two,
> even in sync mode. So I'm a bit stumped as to
On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS) wrote:
> Here's a little benchmark program I wrote to test a super-simple
> one-integer insert to test sql speed and commit interval behavior.
>
> Running it on my machine (Windows XP64 8-core 3Ghz gcc (tdm64-1) 4.5.1)
On Sun, Feb 13, 2011 at 1:15 AM, Max Vlasov <max.vla...@gmail.com> wrote:
> On Sat, Feb 12, 2011 at 7:11 PM, Jim Wilcoxson <pri...@gmail.com> wrote:
>
> >
> >
> > Unless I'm missing something, SQLite has to update the first page of the
> > database on
On Sun, Feb 13, 2011 at 7:07 AM, Jean-Christophe Deschamps wrote:
>
> >So if the drive is 5400 rpm, 227 is much more than 5400/60=90 and even if
> >it's 7200 (manufacturers sometimes upgrade drives inside portable hd
> >without
> >prior notice), it's still twice as much as
On Sun, Feb 13, 2011 at 11:55 AM, Max Vlasov <max.vla...@gmail.com> wrote:
> On Sun, Feb 13, 2011 at 5:31 PM, Jim Wilcoxson <pri...@gmail.com> wrote:
>
> > On Sun, Feb 13, 2011 at 1:15 AM, Max Vlasov <max.vla...@gmail.com>
> wrote:
> > > So the f
On Mon, Feb 14, 2011 at 2:33 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> On Sun, Feb 13, 2011 at 8:53 PM, Jim Wilcoxson <pri...@gmail.com> wrote:
>
> > > But I thought about how it would be possible to test this explanation .
> > I'm
> > > going to
sing.
Jim
--
HashBackup: easy onsite and offsite Unix backup
http://www.hashbackup.com
>
> On Mon, Feb 14, 2011 at 8:49 AM, Jim Wilcoxson <pri...@gmail.com> wrote:
>> On Mon, Feb 14, 2011 at 2:33 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>>>
>>> On Sun, Feb 13, 2011
341s
user 0m0.193s
sys 0m0.535s
Jim
--
HashBackup: easy onsite and offsite Unix backup
http://www.hashbackup.com
>
>
> Pavel
>
> On Mon, Feb 14, 2011 at 8:49 AM, Jim Wilcoxson <pri...@gmail.com> wrote:
>> On Mon, Feb 14, 2011 at 2:33 AM, Max Vlasov <max.vla...@gma
On Mon, Feb 14, 2011 at 3:02 PM, Max Vlasov <max.vla...@gmail.com> wrote:
> On Mon, Feb 14, 2011 at 8:42 PM, Jim Wilcoxson <pri...@gmail.com> wrote:
>
>> On Mon, Feb 14, 2011 at 10:56 AM, Pavel Ivanov <paiva...@gmail.com> wrote:
>> >> So my question is,
This is a common issue on the mailing list. The first time you do
count(*), SQLite (actually your OS) has to load data into memory.
Most OS's will keep the file in a buffer cache, so the 2nd count(*)
doesn't have to read from disk.
Here's a timing from my own system, after a purge command to
On Mon, Feb 21, 2011 at 10:38 AM, Puneet Kishor <punk.k...@gmail.com> wrote:
> On Mon, Feb 21, 2011 at 10:17:03AM -0500, Jim Wilcoxson wrote:
>> This is a common issue on the mailing list. The first time you do
>> count(*), SQLite (actually your OS) has to load data into
On Mon, Feb 21, 2011 at 11:05 AM, Sven L wrote:
>
> Thank you for your detailed explanation!
> First, can you please tell me how to purge the cache in Windows 7? This could
> be very useful for my tests!
Sorry, dunno for Windows. On Mac OSX it is the purge command, in the
The SQLite cache size is in pages (2000 by default), so by increasing
the page size 8x, you're also increasing the cache size and memory
requirements by 8x. Not saying it's a bad thing, just something to be
aware of.
If you want to compare 1K and 8K page size and only compare the effect
page
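A sketch of keeping the cache's memory footprint constant while changing page size (the numbers are examples; cache_size is counted in pages, so scale it down as the page size goes up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA page_size = 8192")  # must be set before the db has content
con.execute("PRAGMA cache_size = 250")  # 250 pages x 8 KiB = same memory as
                                        # 2000 pages x 1 KiB (the old default)
con.execute("CREATE TABLE t (x)")       # forces the page size to take effect
page_size = con.execute("PRAGMA page_size").fetchone()[0]
cache_pages = con.execute("PRAGMA cache_size").fetchone()[0]
print(page_size * cache_pages)  # 2048000 bytes of cache either way
```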
On Wed, Feb 23, 2011 at 11:12 AM, Sven L wrote:
>
> Thanks for pointing this out!
>
> In my case I have spent much time on normalizing my tables, so the row size
> should be constant in most cases. I do wonder though, what if the row size is
> 32 bytes? Or is there a
I am creating an SQLite database via Python, and trying to understand
some performance issues.
This application does 3.8M inserts. Most inserts are to a main
database that ends up being around 293MB. Around 740K of the inserts
with larger data records are to 25 related databases of around
I had a similar experience: changing my page size decreased the run
time of my SQLite application from 4 hours to 80 minutes (Linux).
I think it would be very good if the default page size was changed to
4096, at least for Linux builds. It makes a huge difference in
performance.
Jim
On
Have you tried changing the page size to 4096 or 8192? Doing this
with my SQLite application and increasing the transaction size
decreased runtime from over 4 hours to 75 minutes. The runtime for
my app writing the same amount of data to flat files was 55 minutes,
so the time penalty for
If your goal is to handle 1 million inserts/updates per second, and a
hard transaction to disk takes 1/60th of a second, you need at least
16,667 inserts/updates per transaction.
Do your testing with a transaction size of 20,000 and see what kind of
performance you get. I'd probably set it higher,
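The arithmetic above can be sketched as a batching loop (batch size and schema are examples):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
BATCH = 20000                       # rows per commit
rows = [(i,) for i in range(100000)]
for start in range(0, len(rows), BATCH):
    with con:                       # one transaction (one sync) per batch
        con.executemany("INSERT INTO t VALUES (?)", rows[start:start + BATCH])
n = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(n)  # 100000
```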
You have specified how the movies table relates to the other tables,
but you haven't specified any independent selection criteria for any
tables. For example, in your query you need to add something like:
and genres.genre = 'drama'. For this query, only the movies and
genres tables are needed
You could eliminate met_grid_id from the cells table and replace it
with an expression cell_id/2500. This expression will automatically
truncate, giving you met_grid_id whenever you need it. This will save
around 5 MB for a 1M row cell table.
Also, queries in the cells table by met_grid_id, if
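A sketch of the derived-column idea (numbers per the message; SQLite's integer division truncates):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cells (cell_id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO cells VALUES (?)",
                [(i,) for i in (0, 2499, 2500, 7503)])
# met_grid_id is computed on the fly; integer / integer truncates in SQLite.
rows = con.execute("SELECT cell_id, cell_id / 2500 FROM cells").fetchall()
print(rows)  # [(0, 0), (2499, 0), (2500, 1), (7503, 3)]
```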
Drop is executed within a transaction, which means that every record
you touch has to be backed up to the journal first, then modified in
the database. I'm guessing that if you use pragma synchronous=off, it
would speed up the drop index, but you'd take a chance on corrupting
the database if the
Stevenson-Molnar <steve...@evergreen.edu> wrote:
> I've actually been running it with synchronous=off. Unfortunately, it
> doesn't seem to run any faster and still creates a journal file.
>
> _Nik
>
> On Mar 17, 2009, at 6:05 PM, Jim Wilcoxson wrote:
>
>> Drop
Well, it doesn't exactly say that an integer is returned. round()
always returns a float:
sqlite> select typeof(3);
integer
sqlite> select typeof(round(3));
real
sqlite>
You can do this:
sqlite> select typeof(cast(round(3.14) as
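Completing the idea with a cast (a sketch; the literal 3.14 is just an example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# round() yields a REAL; casting recovers an INTEGER when that is wanted.
t1 = con.execute("SELECT typeof(round(3))").fetchone()[0]
t2 = con.execute("SELECT typeof(cast(round(3.14) AS integer))").fetchone()[0]
v = con.execute("SELECT cast(round(3.14) AS integer)").fetchone()[0]
print(t1, t2, v)  # real integer 3
```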
I'm not sure I completely understand your data structure, but here are
some ideas:
First, conduct experiments with different page and cache sizes. I
don't know if you jumped to a 32K page size or experimented with each
size, but experimentation is the way to go.
I'm guessing that in your
The reason you're getting the same results is because you are
CPU-bound. I/O has nothing to do with this problem. From your
timings of your app, 31.76/33 = 96% CPU. If you were I/O bound, your
real time would be 33 seconds and your sys+user time would be 3
seconds, or something low.
My guess
What about:
select blah from blah where rowid < windowstartrowid order by
rowid desc limit 1
to get the row before, and:
select blah from blah where rowid > windowlastrowid limit 1
to get the row after.
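A sketch of the two window-boundary queries (the second query gets an explicit ORDER BY, which the shorthand above omits; table and bounds are examples):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (name TEXT)")
con.executemany("INSERT INTO events VALUES (?)",
                [("e%d" % i,) for i in range(1, 11)])

window_start, window_end = 4, 7     # hypothetical rowid window bounds
before = con.execute(
    "SELECT rowid FROM events WHERE rowid < ? ORDER BY rowid DESC LIMIT 1",
    (window_start,)).fetchone()[0]
after = con.execute(
    "SELECT rowid FROM events WHERE rowid > ? ORDER BY rowid LIMIT 1",
    (window_end,)).fetchone()[0]
print(before, after)  # 3 8
```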
Jim
On 3/22/09, sorka wrote:
>
> I have a table of events
Not sure if it will make a difference, but in your trigger stuff you
explicitly coded null for the primary key value. Have you tried
changing that so that you don't specify the primary key field at all?
I can't remember from the previous post, but I think it was (or should
be) set up as
Yes, you're right, but if the data is already in index order, you'll
do less I/O when creating the index. Whether the sort + create DB
time is less than "create DB from random input" time is another
question.
Jim
On 3/29/09, mrobi...@cs.fiu.edu wrote:
> question:
> When
You want:
delete from dummy where var=2;
In years of DB work, I've never used exists. If you're mentioning
this as a bug, I guess it could be: I'd have to lookup exists to see
exactly how it's supposed to work.
Jim
On 4/9/09, Dave Dyer wrote:
>
> This little
You'll get this if you have a database with an active journal
(incomplete transactions) and you don't have write access to the
database. In other words, the database needs a rollback from some
prior operations done under a different userid, but now you don't have
write access to do the rollback.
I think it would be a good idea for sqlite3 to display a message like
"Database opened for read-only" if you don't have permission to write.
I saw this problem myself where a rollback was necessary from a
previous root process, and as a new SQLite user, it was confusing and
made me think "great,
You could do a binary search to find the highest accessible rowid:
select rowid where rowid = 2147483647 (fails)
select rowid where rowid = 1073741824 (fails)
...
select rowid where rowid = 65536 (fails)
select rowid where rowid = 32768 (works!)
select rowid where rowid =
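A sketch of the probe loop (this assumes rowids are dense, i.e. no deletions, so existence is monotone in rowid; the table is an example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 40000)])

def exists(rowid):
    return con.execute("SELECT 1 FROM t WHERE rowid = ?",
                       (rowid,)).fetchone() is not None

# Each probe is an indexed lookup; about 63 probes cover the full range.
lo, hi = 0, 2**63 - 1
while lo < hi:
    mid = (lo + hi + 1) // 2
    if exists(mid):
        lo = mid
    else:
        hi = mid - 1
print(lo)  # 39999
```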
I'm not sure what you are considering a massive slowdown, but let's
assume that the entire database fits into memory and disk I/O isn't
the bottleneck. You said you're running 300 instances of the query on
several processors. If several means 3 CPUs, then in a perfect world,
running 300
disk, so
> a database in memory still shares a single disk resource.
>
> Jim Wilcoxson wrote:
>> I'm not sure what you are considering a massive slowdown, but let's
>> assume that the entire database fits into memory and disk I/O isn't
>> the bottleneck.
I don't know if it makes any difference, but is that where clause the same as:
WHERE name < 'zUmM' OR (name = 'zUmM' AND S.id < 8122)
The original way it was coded, all 3 conditions would have to be
evaluated most of the time. The new way might get most rows with 1
condition. Depends on the data
May I suggest an extension PRAGMA SYNCHRONOUS = 3 | ASYNC so that
non-C bindings can use the async functionality?
Thanks, this is a great enhancement!
Jim
>> On Wed, May 06, 2009 at 10:36:50PM -0400, D. Richard Hipp wrote:
>>>
>>> A new optional extension is included that implements an
>>>
base.
---
It sounds like it might use indexes for an OR after all.
Jim
On 5/8/09, Igor Tandetnik <itandet...@mvps.org> wrote:
> "Jim Wilcoxson" <pri...@gmail.com> wrote
> in message
> news:c5830b750905080813p6bf901bn526c81ca8ce7a...@mail.gmail.com
>> I don't
I think you have answered your own question. If you use
synchronous=off, you are saying "I don't care much about this
database." When you "save" documents, you are merely putting them in
a computer's cache (memory) and then confirming to the user that they
are on the hard drive, when they aren't
On 5/18/09, Kevin Gale wrote:
> According to the PRAGMA command documentation the database might become
> corrupted if the o/s crashes or the computer loses power before the data has
> been written to the disk surface. From the information we have from the
> customer
I'd like to suggest that features such as this be implemented with
PRAGMA commands whenever possible, the reason being that the new
functionality will then be immediately available to folks using non-C
bindings, without waiting for an update to a binding package that may
or may not occur.
Jim
On
For my money, I'd prefer to have a smaller, faster parser that worked
correctly on correct input at the expense of not catching all possible
syntax errors on silly input. There is a definite trade-off here, and
I could see where a totally complete parser that caught every possible
error in SQL
I'm running on Linux with ext3 and just wrote a Python test program to
insert rows into a table with a single column, no indexing, and doing
a commit after each insert. When I first ran it, I got around 440
inserts per second, which is clearly impossible. This is a 7200rpm
drive, so even if I
is doing the syncs correctly, but ext3 is getting
cached.
Jim
On 5/27/09, Jim Wilcoxson <pri...@gmail.com> wrote:
> I'm running on Linux with ext3 and just wrote a Python test program to
> insert rows into a table with a single column, no indexing, and doing
> a commit after each
What platform is this? I just posted a note today that my Linux box
running 2.6-25 and ext3 isn't doing fsyncs like it should, so I would
be susceptible to DB corruption if my machine crashed during DB I/O.
I posted a C program you could run on the specific machine with a
corrupt DB to see if
It would be handy for file synchronization if SQLite stored the
database file's mtime in the journal and reset it if a rollback
occurs. It's difficult to do this in an application:
1. If a DB is opened, a transaction started, some stuff happens, and a
rollback is executed, the file mtime only
I agree that adding this to the library, and making it accessible via
a pragma command would be very useful. For example, pragma commitrate
1000 would test the commit rate of 1000 commits and return the results
in transactions per second as a row.
If I install my app on a client's machine, I
SQLite has surprised me with its quick performance, not the other way
around. In fact, I've implemented all kinds of lookup queries that I
knew could be optimized by caching results so I didn't have to keep
repeating the SQL query, but the performance was so good even
repeating the queries that I
I read through the header after Simon's request, and saw the data
change counter he mentions. What I wasn't sure about is whether this
number gets reset during a rollback. For example, you start a
transaction, make some changes, some cache pages get flushed, and
SQLite changes this number so
.78447008133 TPS: 627.028688445
Obviously, there's no way a 7200RPM drive can actually achieve these
results. I'm building my own version of Python and sqlite here, and
after some research, I think I didn't build sqlite with the
-DHAVE_FULLFSYNC flag, so it isn't actually doing the fcntl() call.
J
Simon, the real point here is that Python and SQLite aren't doing real
transactions. What we should be seeing here is AT MOST 30 TPS with
synchronous=normal, assuming SQLite does 3 hard disk syncs per commit
as was described in an earlier post.
On my old AMD box, the one where fsync() is broken,
Microsoft has an interesting article on hard drive caches re: SQL Server:
http://support.microsoft.com/kb/234656
"Many disk drives (SATA, ATA, SCSI and IDE based) contain onboard
caches of 512 KB, 1 MB, and larger. Drive caches usually rely on a
capacitor and not a battery-backed solution. These
If you only have a handful of values for C and are already going to
the trouble of creating separate views for each C, you could partition
your data into separate tables for each value of C and maybe create
another table containing the list of values of C and maybe the number
of items in each C
Here's what I'd try:
1. Write a small server that accepts connections and writes to the
SQLite database using prepared statements. If you require 500
transactions per second, it's simply not possible with rotating media.
So the solution is to either turn off synchronous, which is dangerous,
of queries, and have only 1 process (or thread)
writing, while a pool of processes handles queries. Not sure how well
SQLite handles this situation, but since you are doing group commits,
it will greatly decrease your write load and potential concurrency
issues.
Jim
On 6/11/09, Jim Wilcoxson <
the effects of the read, erase, write cycle, usually by some
form of caching, but then you are also playing with losing the
transaction guarantees of a commit. Can't really have it both ways.
Jim
On 6/11/09, Sam Carleton <scarle...@gmail.com> wrote:
> Jim Wilcoxson wrote:
>> Here's what I
survives intact.
Jim
On 6/11/09, Jim Wilcoxson <pri...@gmail.com> wrote:
> SSD's usually have poor write performance, because to do a write, they
> have to use read, erase, write sequences across large blocks like 64K.
--
Software first. So
Yes, good point.
If you partition the database into multiple databases, you will have
to place each on its own physical disk drive to increase transaction
rates. If your base transaction rate with one drive is T, with N
drives it should be N*T; 4 drives gives you 4x the transaction rate,
etc.
There was a recent SQLite bug that caused the size of the SQLite cache
to shrink in some circumstances, and the longer a program ran, the
smaller the cache became. Maybe you are running into this bug. IIRC,
you had to do an update in the select loop to trigger the bug, so if
you're not doing
You are doing transactions here, which is a very different thing from
normal disk I/O. Your CPU is idle because it is waiting for the disk.
Your disk is idle because it is waiting for the platters to rotate
around again. The best case you can achieve on a 7200RPM disk is 120
transactions
Async I/O fits perfectly with my app: I don't need the durable
guarantee. But I do need the ability to activate it with a pragma
since it isn't a C app.
Are there plans to make async I/O available via pragma?
Jim
On 6/20/09, Dan wrote:
>
> I think we have quite
I'm using the Python sqlite3 (pysqlite) bindings. I'd like to be able
to do a select, and while fetching those rows and inserting new ones,
periodically do a commit. With the Python bindings, an error occurs
because a commit resets all pending select statements. Here is an
example showing the
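One workaround sketch: materialize the SELECT before interleaving inserts and commits (schema and commit interval are examples; newer pysqlite versions may not raise at all):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
con.commit()

# Fetch the whole result set first so no cursor is pending at commit time.
rows = con.execute("SELECT x FROM t").fetchall()
for i, (x,) in enumerate(rows):
    con.execute("INSERT INTO t VALUES (?)", (x + 1000,))
    if i % 10 == 9:
        con.commit()                # safe: the select is fully fetched
con.commit()
n = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(n)  # 200
```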
Roger Binns <rog...@rogerbinns.com> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Jim Wilcoxson wrote:
>> With the Python bindings, an error occurs
>> because a commit resets all pending select statements.
>
> Note that there are some constrain
Thanks for the comments and explanations everyone - much appreciated.
It seems there are a few alternatives I can check out.
Jim
On 6/30/09, Roger Binns <rog...@rogerbinns.com> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Jim Wilcoxson wrote:
>> I guess I
old or new data would be one thing, but in this example,
totally wrong data is returned. There's no explanation of why it was
happening.
On 6/30/09, Jim Wilcoxson <pri...@gmail.com> wrote:
> Thanks for the comments and explanations everyone - much appreciated.
> It seems the
Example:
[jim@mb ~]$ sqlite3 dummy
-- Loading resources from /Users/jim/.sqliterc
SQLite version 3.15.1 2016-11-04 12:08:49 with the Encryption Extension
Copyright 2016 Hipp, Wyrick & Company, Inc.
Enter ".help" for usage hints.
sqlite> create table t1(text);
sqlite> ^D
[jim@mb ~]$ ls -l dummy