Re: [sqlite] New word to replace "serverless"

2020-01-28 Thread Acer Yang
On Tue, Jan 28, 2020, 16:13 Cory Nelson  wrote:

> in-situ
>
> I think this distinguishes sqlite as being different from an "in-proc yet
> separate server".
>
> On Mon, Jan 27, 2020 at 2:19 PM Richard Hipp  wrote:
>
> > For many years I have described SQLite as being "serverless", as a way
> > to distinguish it from the more traditional client/server design of
> > RDBMSes.  "Serverless" seemed like the natural term to use, as it
> > seems to mean "without a server".
> >
> > But more recently, "serverless" has become a popular buzz-word that
> > means "managed by my hosting provider rather than by me."  Many
> > readers have internalized this new marketing-driven meaning for
> > "serverless" and are hence confused when they see my claim that
> > "SQLite is serverless".
> >
> > How can I fix this?  What alternative word can I use in place of
> > "serverless" to mean "without a server"?
> >
> > Note that "in-process" and "embedded" are not adequate substitutes for
> > "serverless".  An RDBMS might be in-process or embedded but still be
> > running a server in a separate thread. In fact, that is how most
> > embedded RDBMSes other than SQLite work, if I am not much mistaken.
> >
> > When I say "serverless" I mean that the application invokes a
> > function, that function performs some task on behalf of the
> > application, then the function returns, *and that is all*.  No threads
> > are left over, running in the background to do housekeeping.  The
> > function does not send messages to some other thread or process.  The
> > function does not have an event loop.  The function does not have its
> > own stack. The function (with its subfunctions) does all the work
> > itself, using the caller's stack, then returns control to the caller.
> >
> > So what do I call this, if I can no longer use the word "serverless"
> > without confusing people?
> >
> > "no-server"?
> > "sans-server"?
> > "stackless"?
> > "non-client/server"?
> >
> >
> > --
> > D. Richard Hipp
> > d...@sqlite.org
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
>
>
> --
> Cory Nelson
> http://int64.org
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


How about 'immediate'?
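
For illustration, the call pattern being described boils down to something
like this, a minimal C sketch (error handling omitted; the database file
name and the callback name are mine):

#include <stdio.h>
#include <sqlite3.h>

/* Invoked synchronously, on the caller's stack, once per result row. */
static int print_row(void *unused, int ncol, char **vals, char **names) {
    (void)unused; (void)names;
    for (int i = 0; i < ncol; i++)
        printf("%s%s", vals[i] ? vals[i] : "NULL", i + 1 < ncol ? "|" : "\n");
    return 0;
}

int main(void) {
    sqlite3 *db;
    if (sqlite3_open("test.db", &db) != SQLITE_OK) return 1;

    /* Each call does all of its work and then returns: no server process,
       no background thread, nothing left running afterwards. */
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x); INSERT INTO t VALUES(1);", 0, 0, 0);
    sqlite3_exec(db, "SELECT x FROM t;", print_row, 0, 0);

    sqlite3_close(db);
    return 0;
}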

>
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Prebuilt Windows x86 binary missing symbols per sqlite3.def

2017-06-03 Thread Acer Yang
Sqlite version is 3190220.
x86 and x64 prebuilt Windows binaries are downloaded from
https://www.sqlite.org/download.html

I compared sqlite3.def files with sort and diff tools.
Some symbols missing from the x86 .def file are listed below.

sqlite3_data_directory
sqlite3_fts5_may_be_corrupt
sqlite3_temp_directory
sqlite3_version

That should cause link errors for a .lib created from the .def file.

Best wishes,
Acer
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite custom function for regular expression using c/c++

2016-05-05 Thread Acer Yang
Hi Bhagwat,

I tried to accomplish that with sqlite3cpp (
https://github.com/yangacer/sqlite3cpp), a light wrapper of sqlite3 for C++.
Hope it helps :-)


// Require sqlite3cpp to be installed from source.
// Compile with: clang++-3.6 -g sqlite_re.cpp -lsqlite3cpp -lsqlite3 -std=c++11
#include <iostream>
#include <sstream>
#include <string>
#include <regex>
#include <tuple>

#include "sqlite3cpp.h"


int main() {
    using namespace sqlite3cpp;

    database db(":memory:");

    // Register the lambda
    db.create_scalar("re_replace",
                     [](std::string pattern,
                        std::string value,
                        std::string text)
                     {
                         // Replace regex |pattern| found in |text| with |value|
                         std::stringstream out;
                         std::regex re(pattern);

                         std::regex_replace(std::ostreambuf_iterator<char>(out),
                                            text.begin(), text.end(),
                                            re, value);
                         return out.str();
                     });

    // Test data
    auto csr = db.make_cursor();
    csr.executescript(
      "CREATE TABLE T (data TEXT);"
      "INSERT INTO T VALUES('Quick brown fox');"
      );

    // Replace vowels with '*'
    char const *query = "SELECT re_replace('a|e|i|o|u', '*', data) FROM T";

    // Execute the query and print out the replaced results
    for (auto const &row : csr.execute(query)) {
        string_ref result;
        std::tie(result) = row.to<string_ref>();
        std::cout << result << std::endl;
    }

    // Should print:
    // Q**ck br*wn f*x

    return 0;
}


On Wed, May 4, 2016 at 8:25 PM, Bhagwat Balshetwar <
bhagwat.balshetwar at gmail.com> wrote:

> I want to write a custom function for regular expressions using C/C++.
> How do I write it? Is there any documentation or sample code available on this?
>
> -Bhagwat
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] sqlite3cpp, Yet Another C++ Wrapper of SQLite

2016-04-20 Thread Acer Yang
Dear SQLite users,

If you are interested in using SQLite in a modern C++ way,
sqlite3cpp ( https://github.com/yangacer/sqlite3cpp ) hopes to be your
favorite choice.

sqlite3cpp provides the following features:

1. Query with range for-loop and typed parameter binding
2. Create SQL scalar function with C++ lambda function
3. Create SQL aggregate with functor

Please check the above link for further details.

Best wishes,
Acer


[sqlite] question for auto increament

2014-04-22 Thread Yang Hong
Hello, all:

 

I am using VS2013 + sqlite.net for 4.5.1 (32-bit). I have a question about a
primary key with auto-increment: I can't figure out how to do it in C#. When
I use the sqlite datasource designer in VS2013, it has an option to add identity
checking; however, this feature doesn't work even when I check the option. Can
anybody help me out?

 

Regards,

 

yh

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQL statements not captured by sqlite_trace or sqlite_profile?

2013-02-07 Thread 杨苏立 Yang Su Li
Hi,

I am trying to use sqlite_trace and sqlite_profile to trace what SQL
statements have been executed.

In the callback function of sqlite3_trace/profile I simply print out the SQL
statement that triggers the callback, and inside sqlite3_open_v2() I have:
sqlite3_trace(*ppDb, print_sql_callback, *ppDb);
sqlite3_profile(*ppDb, print_sql_callback, *ppDb);
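
For concreteness, a stripped-down sketch of this kind of instrumentation
(simplified; note that sqlite3_trace and sqlite3_profile take
differently-typed callbacks, so they are shown here as two separate functions):

#include <stdio.h>
#include <sqlite3.h>

/* xTrace: invoked as a statement begins running; receives the SQL text. */
static void trace_callback(void *ctx, const char *sql) {
    (void)ctx;
    fprintf(stderr, "trace: %s\n", sql);
}

/* xProfile: invoked when a statement finishes; ns is an estimate of the
   statement's wall-clock running time, in nanoseconds. */
static void profile_callback(void *ctx, const char *sql, sqlite3_uint64 ns) {
    (void)ctx;
    fprintf(stderr, "profile: %s (%llu ns)\n", sql, (unsigned long long)ns);
}

int main(void) {
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_trace(db, trace_callback, db);
    sqlite3_profile(db, profile_callback, db);
    sqlite3_exec(db, "CREATE TABLE t(x); INSERT INTO t VALUES(1);", 0, 0, 0);
    sqlite3_close(db);
    return 0;
}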


However, it seems that not every SQL statement is captured. I also
instrumented sqlite3_step(), sqlite3_bind_*(), sqlite3_clear_bindings(), and
sqlite3_reset() to print a log, and some SQL statements that show up in those
logs do not seem to be printed by sqlite3_trace or sqlite3_profile.

Does anyone have an idea about what went wrong here?

Thanks a lot!

Suli

-- 
Suli Yang

Department of Physics
University of Wisconsin Madison

4257 Chamberlin Hall
Madison WI 53703
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Is there a way to measure query time in sqlite at runtime?

2012-11-17 Thread 杨苏立 Yang Su Li
I want to measure the execution time of each sql statement in sqlite.

I understand that in the sqlite shell you can just do .timer on, but I am
wondering how to do it in a programmable way, so that I can know how much
time a SQL statement takes while applications are accessing the database in
real time, not just by replaying that statement in a shell afterwards.

Could I just provide a plugin function for sqlite_profile?

Thanks a lot


Suli

-- 
Suli Yang

Department of Physics
University of Wisconsin Madison

4257 Chamberlin Hall
Madison WI 53703
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] light weight write barriers

2012-11-15 Thread 杨苏立 Yang Su Li
On Thu, Nov 15, 2012 at 10:29 AM, Simon Slavin  wrote:

>
> On 15 Nov 2012, at 4:14pm, 杨苏立 Yang Su Li  wrote:
>
> > 1. fsync actually does two things at the same time: ordering writes (in a
> > barrier-like manner), and forcing cached writes to disk. This makes it
> very
> > difficult to implement fsync efficiently. However, logically they are two
> > distinctive functionalities, and user might want one but not the other.
> > Particularly, users might want a barrier, but doesn't care about
> durability
> > (much). I have no idea why ordering and durability, which seem quite
> > different, ended up being bundled together in a single fsync call.
> >
> > 2. fsync semantic in POSIX is a bit vague (at least to me) in a
> concurrent
> > setting. What is the expected behavior when more than one thread write to
> > the same file descriptor, or different file descriptor associated with
> the
> > same file?
>
> And, as has been posted many times here and elsewhere, on many systems
> fsync does nothing at all.  It is literally implemented as a 'noop'.  So
> you cannot use it as a basis for barriers.
>

I think it is because it is so difficult to implement fsync efficiently that
some systems just stop trying.

>
> > In modern file system we do all kind of stuff to ensure ordering, and I
> > think I can see how leveraging ordered commands (when it is available
> from
> > hardware) could potentially boost performance.
>
> Similarly, on many hard disk subsystems (the circuit board and firmware
> provided with the hard disk), the 'wait until cache has been written'
> operation does nothing.  So even if you /could/ depend on fsync you still
> couldn't depend on the hardware.  Read the manufactuer's documentation:
> they don't hide it, they boast about it because it makes the hard drive far
> faster.  If you really want this feature to work you have to buy expensive
> server-quality hard drives and set the jumpers in the right positions.
>

When you cannot trust the hardware, there are still some things you can do to
ensure durability and consistency. There is some work from UW-Madison that
talks about how to do that without trusting the hardware, say, by using
coerced cache eviction. Of course, this is expensive, thus we want to
decouple ordering and durability even more.

Suli

>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] light weight write barriers

2012-11-15 Thread 杨苏立 Yang Su Li
On Thu, Nov 15, 2012 at 6:07 AM, David Lang  wrote:

> On Wed, 14 Nov 2012, Vladislav Bolkhovitin wrote:
>
>  Nico Williams, on 11/13/2012 02:13 PM wrote:
>>
>>> declaring groups of internally-unordered writes where the groups are
>>> ordered with respect to each other... is practically the same as
>>> barriers.
>>>
>>
>> Which barriers? Barriers meaning cache flush or barriers meaning commands
>> order, or barriers meaning both?
>>
>> There's no such thing as "barrier". It is fully artificial abstraction.
>> After all, at the bottom of your stack, you will have to translate it
>> either to cache flush, or commands order enforcement, or both.
>>
>
> When people talk about barriers, they are talking about order enforcement.
>
>
>  Your mistake is that you are considering barriers as something real,
>> which can do something real for you, while it is just a artificial
>> abstraction apparently invented by people with limited knowledge how
>> storage works, hence having very foggy vision how barriers supposed to be
>> processed by it. A simple wrong answer.
>>
>> Generally, you can invent any abstraction convenient for you, but farther
>> your abstractions from reality of your hardware => less you will get from
>> it with bigger effort.
>>
>> There are no barriers in Linux and not going to be. Accept it. And start
>> instead thinking about offload capabilities your storage can offer to you.
>>
>
> the hardware capabilities are not directly accessable from userspace (and
> they probably shouldn't be)
>
> barriers keep getting mentioned because they are a easy concept to
> understand. "do this set of stuff before doing any of this other set of
> stuff, but I don't care when any of this gets done" and they fit well with
> the requirements of the users.
>

Well, I think there are two questions to be answered here: what primitive
should be offered to the user by the file system (currently we have fsync);
and what primitive should be offered by the lower level and used by the
file system (currently we have barrier, or flushing and FUA).

I do agree that we should keep what is accessible from user-space simple
and stupid. However if you look into fsync semantics a bit closer, I think
there are two things to be noted:

1. fsync actually does two things at the same time: ordering writes (in a
barrier-like manner) and forcing cached writes to disk. This makes it very
difficult to implement fsync efficiently. However, logically they are two
distinct functionalities, and a user might want one but not the other.
In particular, users might want a barrier but not care (much) about
durability. I have no idea why ordering and durability, which seem quite
different, ended up being bundled together in a single fsync call.

2. fsync semantics in POSIX are a bit vague (at least to me) in a concurrent
setting. What is the expected behavior when more than one thread writes to
the same file descriptor, or to different file descriptors associated with
the same file?

So I do think in the user space, we need some kind of barrier (or other)
primitive which is not tied to durability guarantees; and hopefully this
primitive could be implemented more efficiently than fsync. And of course,
this primitive should be simple and intuitive, abstracting the complexity
out.


On the other hand, there is the question of what the file system should use.
Traditionally the block layer provides a barrier primitive, and now I think
they are moving to flushing and FUA, or even ordered commands
(http://lwn.net/Articles/400541/).

As to whether the file system should be exposed to the hardware capability,
in this case ordered commands, I personally think it should. In modern file
systems we do all kinds of stuff to ensure ordering, and I think I can see
how leveraging ordered commands (when they are available from the hardware)
could potentially boost performance. And all the complexity of, say,
topological ordering is dealt with within the file system and is not visible
to the user.

Of course, there are challenges when you want to do ordered writes in a
file system. As Ts'o mentioned, when you have entangled metadata updates,
i.e., you update file A and file B, and file A and B might share
metadata, it could be difficult to get the ordering right without
sacrificing performance. But I personally think it is worth exploring.

Suli


>
> Users readily accept that if the system crashes, they will lose the most
> recent stuff that they did, but they get annoyed when things get corrupted
> to the point that they lose the entire file.
>
> this includes things like modifying one option and a crash resulting in
> the config file being blank. Yes, you can do the 'write to temp file, sync
> file, sync directory, rename file" dance, but the fact that to do so the
> user must sit and wait for the syncs to take place can be a problem. It
> would be far better to be able to say "write to temp file, and after it's
> on disk, rename the file" and not have the user wait. The user doe

Re: [sqlite] light weight write barriers

2012-11-10 Thread 杨苏立 Yang Su Li
On Fri, Oct 26, 2012 at 8:54 PM, Vladislav Bolkhovitin  wrote:

>
> Theodore Ts'o, on 10/25/2012 01:14 AM wrote:
>
>> On Tue, Oct 23, 2012 at 03:53:11PM -0400, Vladislav Bolkhovitin wrote:
>>
>>> Yes, SCSI has full support for ordered/simple commands designed
>>> exactly for that task: to have steady flow of commands even in case
>>> when some of them are ordered.
>>>
>>
>> SCSI does, yes --- *if* the device actually implements Tagged Command
>> Queuing (TCQ).  Not all devices do.
>>
>> More importantly, SATA drives do *not* have this capability, and when
>> you compare the price of SATA drives to uber-expensive "enterprise
>> drives", it's not surprising that most people don't actually use
>> SCSI/SAS drives that have implemented TCQ.
>>
>
> What different in our positions is that you are considering storage as
> something you can connect to your desktop, while in my view storage is
> something, which stores data and serves them the best possible way with the
> best performance.
>
> Hence, for you the least common denominator of all storage features is the
> most important, while for me to get the best of what possible from storage
> is the most important.
>
> In my view storage should offload from the host system as much as
> possible: data movements, ordered operations requirements, atomic
> operations, deduplication, snapshots, reliability measures (eg RAIDs), load
> balancing, etc.
>
> It's the same as with 2D/3D video acceleration hardware. If you want the
> best performance from your system, you should offload from it as much as
> possible. In case of video - to the video hardware, in case of storage - to
> the storage. The same as with video, for storage better offload - better
> performance. On hundreds of thousands IOPS it's clearly visible.
>
> Price doesn't matter here, because it's completely different topic.
>
>
>  SATA's Native Command
>> Queuing (NCQ) is not equivalent; this allows the drive to reorder
>> requests (in particular read requests) so they can be serviced more
>> efficiently, but it does *not* allow the OS to specify a partial,
>> relative ordering of requests.
>>
>
> And so? If SATA can't do it, does it mean that nobody else can't do it
> too? I know a plenty of non-SATA devices, which can do the ordering
> requirements you need.
>

I would be very much interested in what kinds of devices support this kind of
"topological order", and in what settings they are typically used.

Do modern flash/SSD devices (esp. the ones used in smartphones) support this?

If you could point me to some information about this, that would be very
much appreciated.

Thanks a lot!

Suli

>
> Vlad
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] light weight write barriers

2012-10-11 Thread 杨苏立 Yang Su Li
I am not quite sure whether I should ask this question here, but in terms
of a light weight barrier/fsync, could anyone tell me why the device
driver / OS provides the barrier interface rather than some other
abstraction? I am sorry if this sounds like a stupid question
or if it has been discussed before.

I mean, most of the time we only need some ordering of writes; not a
complete order, but a partial, very simple topological order. And a
barrier seems to be a heavyweight solution to achieve this anyway:
you have to finish all writes before the barrier, then start all
writes issued after the barrier. That is an ordering which is much
stronger than what we need, isn't it?

As most of the time the order we need does not involve too many blocks
(certainly a lot fewer than all the cached blocks in the system or in
the disk's cache), that topological order isn't likely to be very
complicated, and I imagine it could be implemented efficiently in a
modern device, which already has complicated caching/garbage
collection/whatever going on internally. In particular, it seems not
too hard to implement on top of SCSI's ordered/simple task mode
(I believe Windows does this to an extent, but I am not quite sure).

Thanks a lot

Suli


On Wed, Oct 10, 2012 at 12:17 PM, Andi Kleen  wrote:
> Richard Hipp writes:
>>
>> We would really, really love to have some kind of write-barrier that is
>> lighter than fsync().  If there is some method other than fsync() for
>> forcing a write-barrier on Linux that we don't know about, please enlighten
>> us.
>
> Could you list the requirements of such a light weight barrier?
> i.e. what would it need to do minimally, what's different from
> fsync/fdatasync ?
>
> -Andi
>
> --
> a...@linux.intel.com -- Speaking for myself only
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Proper way to change temp directory

2011-11-20 Thread Yang Zhang
Cool beans, perhaps this should be added to the docs!
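
For the archives, the unix-side approach described below boils down to
something like this; a minimal sketch (the temp path is just a stand-in, and
the variable has to be set before SQLite first creates a temp file):

#include <stdlib.h>
#include <sqlite3.h>

int main(void) {
    /* Point SQLite's unix temp files at a roomier filesystem.
       Assumption: /mnt/bigdisk/tmp exists and is writable. */
    setenv("TMPDIR", "/mnt/bigdisk/tmp", 1);

    sqlite3 *db;
    sqlite3_open("big.db", &db);
    sqlite3_exec(db, "VACUUM;", 0, 0, 0);   /* temp space now comes from TMPDIR */
    sqlite3_close(db);
    return 0;
}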

On Sun, Nov 20, 2011 at 1:36 AM, Dan Kennedy  wrote:
> On 11/20/2011 04:00 PM, Yang Zhang wrote:
>>
>> Out of curiosity, what's the proper way to change the temp directory
>> (say, to avoid "Error: database or disk full" errors on vacuum, which
>> I ran into)?  temp_store_directory has been working for me but it's
>> deprecated and may be elided from builds.  Is the only option to
>> recompile sqlite?  Thanks.
>
> On unix setting the TMPDIR environment variable works.
>
> On windows SQLite uses GetTempPath(). So perhaps setting
> TMP or TEMP works there.
>
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Yang Zhang
http://yz.mit.edu/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Proper way to change temp directory

2011-11-20 Thread Yang Zhang
Out of curiosity, what's the proper way to change the temp directory
(say, to avoid "Error: database or disk full" errors on vacuum, which
I ran into)?  temp_store_directory has been working for me but it's
deprecated and may be elided from builds.  Is the only option to
recompile sqlite?  Thanks.

-- 
Yang Zhang
http://yz.mit.edu/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Is this a bug? Can't tell from docs....

2011-11-18 Thread Yang Zhang
I just got bit by some peculiar behavior in sqlite where

 id int primary key
is different from:
 id integer primary key
In particular, sqlite will generate values for the latter but not the former:
sqlite> create table a (a integer primary key, b integer);
sqlite> insert into a (b) values (0);
sqlite> select * from a;
1|0
sqlite> create table b (a int primary key, b integer);
sqlite> insert into b (b) values (0);
sqlite> select * from b;
|0
I couldn't find in http://www.sqlite.org/autoinc.html or
http://www.sqlite.org/datatype3.html any mention of this peculiar
distinguishing behavior.  Anyway, if this is intentional (as
I'm guessing), I wouldn't have been able to tell from the docs -
perhaps this would warrant special mention?  Just thought I'd bring
this to your attention.
-- 
Yang Zhang
http://yz.mit.edu/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] page cache vs OS cache

2010-11-19 Thread Yang
I wonder why we need page cache, since we already have OS file cache.

in other words, if we simply set OS cache to be very large, would it
have the same effect as page cache?

thanks
Yang
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] how can I use a larger cache ? "Application Defined Page Cache." ??

2010-11-19 Thread Yang
I read from the docs that by setting
"PRAGMA cache_size = Number-of-pages;"
(http://www.sqlite.org/pragma.html#pragma_cache_size)
I can use a larger cache, but the same paragraph mentions that how
much is used is actually "at the discretion of the application defined
page cache"

I do not set anything for the "application defined page cache", so
what is the behavior of the default one? The application-defined page
cache page does not describe this. With the default "application
defined page cache", would it utilize all of the cache I declared?

I am hoping to use a very large cache, for example 1G
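
Concretely, the setting I have in mind is just the PRAGMA issued from C (a
sketch; the page count assumes the default 1024-byte page size, so a real
value would have to be adjusted to the database's actual page_size):

#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    sqlite3_open("test.db", &db);

    /* cache_size is counted in pages; with 1 KiB pages, roughly 1G of cache
       is about 1,000,000 pages (assumption: default page_size of 1024). */
    sqlite3_exec(db, "PRAGMA cache_size = 1000000;", 0, 0, 0);

    sqlite3_close(db);
    return 0;
}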

Thanks
Yang
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] creating a sqlite db on raw disk ?

2010-11-19 Thread Yang
Thanks for your detailed explanation.

for your first question: I mean creating a new database file.

since you asserted "
>> innodb allows creating a db on a raw disk partition, can we do the
>> same on  sqlite?
>
> No.  You need a file system of some kind.  Or, at least, your operating 
> system needs to be able to address your storage using file-system calls, not 
> storage-structure calls.
",  that answers my question.



On Fri, Nov 19, 2010 at 5:38 AM, Simon Slavin  wrote:
>
> On 19 Nov 2010, at 7:04am, Yang wrote:
>
>> when I create a db on a file system, I guess a query process
>
> Wait ... are you talking here about creating a new database file or querying 
> one what already exists ?
>
>> has to go
>> through 2 levels of seeks ?
>> first sqlite finds the B-tree node that stores the index to the file
>> offset of my desired record, then sqlite uses that offset to make
>> syscall seek(offset),
>> then Kernel consults the FS implementation to find from its  OWN
>> B-tree (for example in ext3 fs )  the block location of that offset.
>
> Assuming that you're using a FS that uses B-trees, nodes, and indices (many 
> don't), that's a fair summary of what happens.  But SQLite runs fine on many 
> embedded systems that use linked lists instead of B-trees, or don't use 
> nodes, or have databases stored in non-writable sequential memory.
>
>> innodb allows creating a db on a raw disk partition, can we do the
>> same on  sqlite?
>
> No.  You need a file system of some kind.  Or, at least, your operating 
> system needs to be able to address your storage using file-system calls, not 
> storage-structure calls.
>
>> I tried directly creating a db on ramdisk, failed:
>>
>> javasqlite-20100727# sqlite3 /dev/ram0
>
> You didn't tell it what to call the database, just where you wanted it.  I 
> think you want something like
>
> # sqlite3 /dev/ram0/myDatabase.sqlite
>
> There's no reason this shouldn't work if your drivers and your version of 
> *n*x /fully/ support the use of /dev/ram0 for file storage, including support 
> for locking calls.  Works fine under Mac OS X, by the way.
>
> But SQLite does provide its own way of creating a database in memory.  See
>
> http://www.sqlite.org/inmemorydb.html
>
> So you'd use something like
>
> # sqlite3 :memory:
>
> Of course, that database will be visible only to the process that creates it. 
>  Which means that if you use the command-line tool to create it it will 
> effectively disappear as you quit the command-line tool.
>
> If you're using Linux or a Unix that supports it, you can also look into the 
> shared memory filesystem 'tmpfs'.  I haven't tried it, but this should also 
> support SQLite without any problems.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] creating a sqlite db on raw disk ?

2010-11-18 Thread Yang
When I create a db on a file system, I guess a query process has to go
through 2 levels of seeks?
First, sqlite finds the B-tree node that stores the index to the file
offset of my desired record, then sqlite uses that offset to make the
syscall seek(offset); then the kernel consults the FS implementation to find,
from its OWN B-tree (for example in the ext3 fs), the block location of that
offset.


InnoDB allows creating a db on a raw disk partition; can we do the
same with sqlite?

I tried directly creating a db on ramdisk, failed:


javasqlite-20100727# sqlite3 /dev/ram0
SQLite version 3.7.3
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table blah ( x int );
Error: disk I/O error


note that I was able to create an e2fs on the same ramdisk

Thanks
Yang
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] append to a column?

2010-11-18 Thread Yang
I wonder if I can do this efficiently in sqlite:

For a row and one column (most likely of blob type), I want to append
some value to the end of the blob, and I do this many times. Then I may
read back all the appended sections and write out the entire blob with a
new value.


Without any special support, I can read the existing blob, add some new
data to it, and update the column with the new blob value.
But because most incremental updates are small, reading the majority
of the old blob is a waste of time.

Ideally this would be a sparse table (each append creates a new "column";
there is no limit on how many columns each row could have, or any
schema for columns, for that matter). But I can't find an
off-the-shelf solution for sparse tables. (I can't use a columnar DB,
since my access pattern is still first based on rows.)


Thanks a lot

Yang
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] util.c compilation error in SQLite book

2009-07-26 Thread T.J. Yang

Hi,

I got the book "The Definitive Guide to SQLite" and am wondering if anyone has 
the same issue or knows the answer.

Following is the error when compiling the example C API code:

 



[tjy...@ibm examples]$ ls
capi  commonfoods.sql  perlruby tcl
capi_ext  foods.db  java   python  sql.sql
[tjy...@ibm examples]$


[tjy...@ibm capi]$ gmake
gcc -c  -D_REENTRANT -fPIC -pthread -W -Wall -g -D DEBUG  util.c
util.c: In function 'execute':
util.c:9: warning: passing argument 5 of 'sqlite3_exec' from incompatible pointer type
util.c:14: warning: passing argument 1 of 'sqlite3_free' discards qualifiers from pointer target type
util.c: In function 'print_error':
util.c:27: error: 's' undeclared (first use in this function)
util.c:27: error: (Each undeclared identifier is reported only once
util.c:27: error: for each function it appears in.)
util.c:32: warning: passing argument 1 of 'sqlite3_free' discards qualifiers from pointer target type
util.c:27: warning: second parameter of 'va_start' not last named argument
util.c: In function 'print_sql_result':
util.c:43: warning: unused variable 'stmt'
util.c:42: warning: unused variable 'tail'
util.c:41: warning: unused variable 'rc'
util.c:80: warning: control reaches end of non-void function
util.c: At top level:
util.c:195: warning: unused parameter 'data'
gmake: *** [util.o] Error 1
[tjy...@ibm capi]$

 

Tracing the C code, I found va_start using "s", which is undeclared:


>if(msg) {
>va_list ap;
>va_start(ap,s);
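
My guess at the fix: pass the function's last named parameter to va_start
instead of the undeclared s. A sketch, assuming the enclosing function is
declared roughly as below (the actual declaration in the book's util.c may
differ):

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical declaration; the book's util.c may differ. */
void print_error(const char *msg, ...)
{
    if (msg) {
        va_list ap;
        va_start(ap, msg);   /* last named parameter, not the undeclared 's' */
        vfprintf(stderr, msg, ap);
        va_end(ap);
    }
}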

 

 

 

[tjy...@ibm capi]$ ls
authorizer.c   capi.suoDebugMakefile   test.c
authorizer.ilk columns.c   ex1.cmprintf.c  test.db
authorizer.vcproj  columns.ilk exec_busy.c  parameters.c   util.c
auth_trans columns.vcproj  exec.c   parameters.ilk util.h
auth_trans.c   create.cexec.ilk parameters.vcproj
capi.cpp   create.ilk  exec.vcproj  select.c
capi.ncb   create.oget_table.c  select.ilk
capi.sln   create.vcproj   main.c   select.vcproj
[tjy...@ibm capi]$




T.J. Yang



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Roger Binns wrote:
> Yang Zhang wrote:
>> Actually, this is only because Python 3 str is Python 2 unicode.  Python 
>> 2 (which I'm currently using, and which I believe most of the world is 
>> using) str is a physical string of bytes, not a logical/decoded 
>> character string.  Python 2.6 introduces bytes as a synonym for str, but 
>> I am using Python 2.5 at the moment.
> 
> This is all pedantically true, but it is still a really bad way to
> structure your program?  Did you read the Joel Unicode and character
> sets link?

I *have* in fact read that article a very, very long time ago, but that 
is besides the point.  I am aware of character encoding issues, thanks.

> 
> It was because Python 2 messed up on bytes versus strings versus unicode
> that they had to clean it up in Python 3.  It is also why the SQLite
> wrappers in Python 2 return blobs as the buffer type so that there is no
> accidental mingling of bytes and strings.  (Disclosure: I am the author
> of the APSW wrapper)  SQLite *only* supports Unicode strings.  Other
> databases do support non-Unicode strings, character set conversions and
> all that other complexity.

I require str because that is what cPickle.loads() requires; you cannot 
pass it a buffer.  I need to store pickled objects in the database 
because I am implementing an SQLite backend for the Python 2 shelve module.
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Igor Tandetnik wrote:
> Yang Zhang  wrote:
>> Pavel Ivanov wrote:
>>> BTW, ACID that you mentioned has nothing to do with snapshot
>>> isolation that you want to achieve. AFAIK only Oracle supports this
>>> kind of statement isolation level.
>> Actually, Oracle, Postgresql, SQL Server, Firebird, and others support
>> snapshot isolation.  http://en.wikipedia.org/wiki/Snapshot_isolation
> 
> ... but not between two statements running within _the same 
> transaction_. Isolation level (snapshot or otherwise) describes how two 
> transactions are isolated from each other. In your example, you only 
> have one transaction, so any discussion of isolation levels is moot.

Right, I mean the whole reason why I originally wrote to this list was 
because I was under the (incorrect) impression that I was working with 
two separate transactions.
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Pavel Ivanov wrote:
> BTW, ACID that you mentioned has nothing to do with snapshot isolation
> that you want to achieve. AFAIK only Oracle supports this kind of
> statement isolation level.

Actually, Oracle, Postgresql, SQL Server, Firebird, and others support 
snapshot isolation.  http://en.wikipedia.org/wiki/Snapshot_isolation

And I certainly hope I did not convey that ACID implies snapshot isolation.
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Roger Binns wrote:
> Yang Zhang wrote:
>> I copied and pasted this code straight from my actual application, which 
>> uses blobs instead of integers, which I need to convert into strings 
>> (since Python interfaces with blobs using the `buffer` type, not `str`).
> 
> And for very good reason.  Blobs are buckets of bytes and those are not
> strings.  In your example there was no need to do the conversion since
> you can supply buffers as values too.  (In Python 3 the bytes type is used.)

Actually, this is only because Python 3 str is Python 2 unicode.  Python 
2 (which I'm currently using, and which I believe most of the world is 
using) str is a physical string of bytes, not a logical/decoded 
character string.  Python 2.6 introduces bytes as a synonym for str, but 
I am using Python 2.5 at the moment.

 From http://mail.python.org/pipermail/python-list/2009-January/696449.html:

> In Python 2.x, str means "string of bytes". This has been renamed "bytes" 
> in Python 3.
> 
> In Python 2.x, unicode means "string of characters". This has been 
> renamed "str" in Python 3.
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Yang Zhang wrote:
> John Elrick wrote:
>> Yang Zhang wrote:
>>> Roger Binns wrote:
>>>  
>>>> Yang Zhang wrote:
>>>>
>>>>> for i in (str(row[0]) for row in conn.cursor().execute('SELECT key 
>>>>> FROM shelf ORDER BY ROWID')):
>>>>>   
>>>> You are converting the key which is an integer into a string for no
>>>> apparent reason.
>>>> 
>>> I copied and pasted this code straight from my actual application, 
>>> which uses blobs instead of integers, which I need to convert into 
>>> strings (since Python interfaces with blobs using the `buffer` type, 
>>> not `str`).
>>>
>>>  
>>>> If you also ask for the ROWID you will see that what is happening is a
>>>> new rowid is generated for the replaced row so that if you are 
>>>> iterating
>>>> over the table while modifying it then you effectively have an infinite
>>>> length table.
>>>> 
>>> This is unusual for a RDBMS that claims to provide ACID properties - 
>>> in particular, this is not even as strong an isolation level as 
>>> snapshot isolation, as a reader transaction is able to see a 
>>> concurrent writer transaction's effects.  In fact, this is weaker 
>>> than the weakest isolation level in (say) Postgresql, which is READ 
>>> COMMITTED (in which any statement is guaranteed to not see the 
>>> effects of a transaction that is committed after the query has 
>>> started execution).
>>
>> As I am not an expert in the Python wrapper, I could be incorrect; 
>> however, your code as written appears to be equivalent to the following:
>>
>> begin transaction
>> for select(
>>   insert stuff
>> end
>> commit
>>
>> rather than your intended:
>>
>> s = select(...
>> begin transaction
>> for s...
>>   insert stuff
>> end
>> commit
>>
>> I say this because your example implies that the Python wrapper starts 
>> the transaction automatically inside the execute, and I would not be 
>> surprised if it did so BEFORE executing the SQL parameter.
> 
> The cursor() method that I call on the conn for the SELECT should give 
> me a separate transaction.

I also tried using separate connections, but that just ends up blocking 
and failing with a timeout on the lock acquisition because it appears 
that SQLite only has full-table locking, and not MVCC/snapshot 
isolation.  Do I need to manually extract out all the data first into 
another store, and then iterate over that to operate on original database?
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
John Elrick wrote:
> Yang Zhang wrote:
>> Roger Binns wrote:
>>   
>>> Yang Zhang wrote:
>>> 
>>>> for i in (str(row[0]) for row in conn.cursor().execute('SELECT key FROM 
>>>> shelf ORDER BY ROWID')):
>>>>   
>>> You are converting the key which is an integer into a string for no
>>> apparent reason.
>>> 
>> I copied and pasted this code straight from my actual application, which 
>> uses blobs instead of integers, which I need to convert into strings 
>> (since Python interfaces with blobs using the `buffer` type, not `str`).
>>
>>   
>>> If you also ask for the ROWID you will see that what is happening is a
>>> new rowid is generated for the replaced row so that if you are iterating
>>> over the table while modifying it then you effectively have an infinite
>>> length table.
>>> 
>> This is unusual for a RDBMS that claims to provide ACID properties - in 
>> particular, this is not even as strong an isolation level as snapshot 
>> isolation, as a reader transaction is able to see a concurrent writer 
>> transaction's effects.  In fact, this is weaker than the weakest 
>> isolation level in (say) Postgresql, which is READ COMMITTED (in which 
>> any statement is guaranteed to not see the effects of a transaction that 
>> is committed after the query has started execution).
> 
> As I am not an expert in the Python wrapper, I could be incorrect; 
> however, your code as written appears to be equivalent to the following:
> 
> begin transaction
> for select(
>   insert stuff
> end
> commit
> 
> rather than your intended:
> 
> s = select(...
> begin transaction
> for s...
>   insert stuff
> end
> commit
> 
> I say this because your example implies that the Python wrapper starts 
> the transaction automatically inside the execute, and I would not be 
> surprised if it did so BEFORE executing the SQL parameter.

The cursor() method that I call on the conn for the SELECT should give 
me a separate transaction.
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Roger Binns wrote:
> Yang Zhang wrote:
>> for i in (str(row[0]) for row in conn.cursor().execute('SELECT key FROM 
>> shelf ORDER BY ROWID')):
> 
> You are converting the key which is an integer into a string for no
> apparent reason.

I copied and pasted this code straight from my actual application, which 
uses blobs instead of integers, which I need to convert into strings 
(since Python interfaces with blobs using the `buffer` type, not `str`).

> 
> If you also ask for the ROWID you will see that what is happening is a
> new rowid is generated for the replaced row so that if you are iterating
> over the table while modifying it then you effectively have an infinite
> length table.

This is unusual for a RDBMS that claims to provide ACID properties - in 
particular, this is not even as strong an isolation level as snapshot 
isolation, as a reader transaction is able to see a concurrent writer 
transaction's effects.  In fact, this is weaker than the weakest 
isolation level in (say) Postgresql, which is READ COMMITTED (in which 
any statement is guaranteed to not see the effects of a transaction that 
is committed after the query has started execution).

> 
>> Any way to solve this problem?  
> 
> You currently have the SELECT results being read back one at a time
> (lazily) on each iteration of the for loop.  The simplest solution is to
> read them all in first.  Add .fetchall() after the execute.

Unfortunately in the original application the table is large (many GBs). 
  Any way to solve this problem without first reading everything into 
memory, and without manually creating a second copy of the table?  Is 
there no way to request a stronger isolation level, such as snapshot 
isolation?
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Hi, for some reason the following program will loop forever:

#!/usr/bin/env python

import sqlite3

conn = sqlite3.connect(':memory:')
conn.text_factory = bytes
conn.execute('CREATE TABLE shelf (key INTEGER NOT NULL, value INTEGER NOT NULL)')
for i in xrange(3):
    conn.execute('INSERT INTO shelf (key, value) VALUES (?,?)', (i, i))
conn.commit()

for i in (str(row[0]) for row in conn.cursor().execute('SELECT key FROM shelf ORDER BY ROWID')):
    conn.execute('REPLACE INTO shelf (key, value) VALUES (?,?)', (i, i))
    conn.commit()
    print i

Anybody understand why?  I thought the REPLACE and SELECT transactions 
should be (snapshot) isolated from each other, so why does the SELECT 
keep getting updated rows from the REPLACE?  Any way to solve this 
problem?  So far all I've found are commands that can change the 
connection-level isolation/locking, but not the cursor-level 
(transaction-level) isolation.

Thanks in advance for any answers!
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Database open problem

2008-04-24 Thread Yang WenYuan
Hi,

There is a very strange problem when I play with the sqlite-ce database.

When I put the database file and my application program together in the same
folder, I call sqlite3_open() and the database opens properly. I then call
sqlite3_step() and it returns SQLITE_ROW, which means it found the record in
the database.

However, if I put the database file on another disk, the problem occurs. I
call sqlite3_open() and the database still opens properly. However, when I call
sqlite3_step() it returns SQLITE_DONE, which means it cannot find the record in
the database.

The database file is exactly the same; the only difference is where it is
stored. I wonder whether this is a limitation of SQLite on WinCE?

Any comment? Thanks! Code is as follows:

{
sqlite3 *db;
char *zErrMsg = 0;
int rc, length;
char *sqlcmd;
sqlite3_stmt * stat;
TCHAR MsgChar[ 256 ], MsgTmpChar[ 256 ];

//   * Put in the same folder, it works properly. *
//  rc = sqlite3_open( "initprint.db", &db);
 
//   * Put in the different folder, it cannot work properly. * 
rc = sqlite3_open( "\\FlashDisk\\System\\initprint.db", &db);
if( rc )
{
sqlite3_close(db);
return false;
}

sqlcmd = sqlite3_mprintf( "SELECT * FROM PRINT WHERE TAGTYPE=%d AND 
BARCODE='%s';", intTagType, chrBarCode );
rc = sqlite3_prepare( db, sqlcmd, -1, &stat, 0 );
if( rc != SQLITE_OK )
{
mbstowcs( MsgTmpChar, sqlite3_errmsg( db ), strlen( 
sqlite3_errmsg( db ) ) );
wsprintf( MsgChar, _T( "SQL error: %s\n" ), MsgTmpChar );

MessageBox( MsgChar, _T( "Error Info" ), MB_OK );

sqlite3_free( sqlcmd );
sqlite3_close(db);
return false;
}

rc = sqlite3_step( stat );  
if( rc != SQLITE_ROW )
{
mbstowcs( MsgTmpChar, sqlite3_errmsg( db ), strlen( 
sqlite3_errmsg( db ) ) );
wsprintf( MsgChar, _T( "SQL error: %s\n" ), MsgTmpChar );

MessageBox( MsgChar, _T( "Error Info" ), MB_OK );

sqlite3_free( sqlcmd );
sqlite3_close(db);
return false;
}
else
{
const void *printdata = sqlite3_column_blob( stat, 2 );
length = sqlite3_column_bytes( stat, 2 );
memcpy( buffer, printdata, length );
buffer[ length ] = 0x00;
*len = length;
}

rc = sqlite3_finalize( stat );
if( rc != SQLITE_OK )
{
mbstowcs( MsgTmpChar, sqlite3_errmsg( db ), strlen( 
sqlite3_errmsg( db ) ) );
wsprintf( MsgChar, _T( "SQL error: %s\n" ), MsgTmpChar );

MessageBox( MsgChar, _T( "Error Info" ), MB_OK );

sqlite3_free( sqlcmd );
sqlite3_close(db);
return false;
}

sqlite3_free( sqlcmd );
sqlite3_close(db);
return true;

}


WenYuan



  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WinCE Memory Problem

2008-04-09 Thread Yang WenYuan
Now I have changed my code as follows, but the problem is
still there. The PDA memory usage becomes larger and
larger. Did I miss something? Is anything wrong with my
code?  Thanks.



sqlcmd = sqlite3_mprintf( "INSERT INTO TEST( BARCODE, TAGTYPE, PRINTDATA ) VALUES( ?, ?, ? );" );

rc = sqlite3_prepare( db, sqlcmd, -1, &stat, 0 );
if( rc != SQLITE_OK )
{
sqlite3_free( sqlcmd );
sqlite3_close(db);
return -1;
}

for( i = 0 ; i < 20; i ++ )
{
// ...
// chrBarCode, tagtype and chrPrintData are changed for every loop
// ...

rc = sqlite3_bind_text( stat, 1, chrBarCode, strlen( chrBarCode ), NULL ); 
if( rc != SQLITE_OK )
{
sqlite3_free( sqlcmd );
sqlite3_close(db);
return -1;
}

rc = sqlite3_bind_int( stat, 2, tagtype ); 
if( rc != SQLITE_OK )
{
sqlite3_free( sqlcmd );
sqlite3_close(db);
return -1;
}

rc = sqlite3_bind_blob( stat, 3, chrPrintData, length, NULL ); 
if( rc != SQLITE_OK )
{
sqlite3_free( sqlcmd );
sqlite3_close(db);
return -1;
}

rc = sqlite3_step( stat );  
if( rc != SQLITE_DONE )
{
sqlite3_free( sqlcmd );
sqlite3_close(db);
return -1;
}

rc = sqlite3_reset( stat );
if( rc != SQLITE_OK )
{
sqlite3_free( sqlcmd );
sqlite3_close(db);
return -1;
}

}

rc = sqlite3_finalize( stat );
if( rc != SQLITE_OK )
{
sqlite3_free( sqlcmd );
sqlite3_close(db);
return -1;
}



Best Regards,
WenYuan




--- Michael Ruck <[EMAIL PROTECTED]> wrote:

> No, you only need to create the statement once. And
> bind all variables
> inside the loop. This at least saves computation
> time. Additionally you
> should use sqlite3_mprintf instead
> of sprintf to protect against sql injection (if that
> is an issue for you.)
> 
> You're already doing it with the blob, why not with
> the other fields?
> 
> Mike
> 
> 
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf
> Of Yang WenYuan
> Sent: Wednesday, 9 April 2008 08:11
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] WinCE Memory Problem
> 
> Thanks, Mike. 
> 
> Because, each record has different barcode, tagtype,
> and printdata. That
> means, I need to call:
> 
> sprintf( sqlcmd, "INSERT INTO TEST( BARCODE,
> TAGTYPE, PRINTDATA ) VALUES(
> %s, %d, ? );", chrBarCode, tagtype ); rc =
> sqlite3_prepare( db, sqlcmd, -1,
> &stat, 0 );
> 
> to update the contents of the each records inside
> the loop. Am I right? 
> 
> Any suggestion? Thanks.
> 
> WenYuan
> 
> 
> 
> 
> 
> --- Michael Ruck <[EMAIL PROTECTED]>
> wrote:
> 
> > You should only prepare the statement once before
> the loop. The only 
> > thing you should do in the loop itself is bind
> varying data and call 
> > sqlite_step.
> > There's no need to call prepare, reset, finalize
> inside the loop. If 
> > chrPrintData doesn't change you can also move
> bind_blob in front of 
> > the loop.
> > 
> > Mike
> > 
> > -Ursprüngliche Nachricht-
> > Von: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] Im
> Auftrag von Yang WenYuan
> > Gesendet: Mittwoch, 9. April 2008 06:08
> > An: sqlite-users@sqlite.org
> > Betreff: [sqlite] WinCE Memory Problem
> > 
> > I use the Sqlite in the PDA which is WinCE OS. I
> need to add more than 
> > 200,000 records. I used sqlite3_prepare-> 
> > sqlite3_bind_blob->sqlite3_step->sqlite3_finalize
> to write each record 
> > in to Database. However, I found that after each
> record is inserted, 
> > the PDA memory became larger and larger. In the
> end, the whole PDA 
> > memory is occupied by this application and the
> system halt. Is there 
> > any thing I miss to release the memory?
> > Following is my code:
> > 
> > sqlite3 *db;
> > sqlite3_

Re: [sqlite] WinCE Memory Problem

2008-04-08 Thread Yang WenYuan
Thanks, Mike. 

Because each record has a different barcode, tagtype,
and printdata, I need to call:

sprintf( sqlcmd, "INSERT INTO TEST( BARCODE, TAGTYPE, PRINTDATA ) VALUES( %s, %d, ? );", chrBarCode, tagtype );
rc = sqlite3_prepare( db, sqlcmd, -1, &stat, 0 );

to update the contents of each record inside the
loop. Am I right? 

Any suggestion? Thanks.

WenYuan





--- Michael Ruck <[EMAIL PROTECTED]> wrote:

> You should only prepare the statement once before
> the loop. The only thing
> you should do in the loop itself is bind varying
> data and call sqlite_step.
> There's no need to call prepare, reset, finalize
> inside the loop. If
> chrPrintData doesn't change you can also move
> bind_blob in front of the
> loop.
> 
> Mike
> 
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf
> Of Yang WenYuan
> Sent: Wednesday, 9 April 2008 06:08
> To: sqlite-users@sqlite.org
> Subject: [sqlite] WinCE Memory Problem
> 
> I use the Sqlite in the PDA which is WinCE OS. I
> need to add more than
> 200,000 records. I used sqlite3_prepare->
> sqlite3_bind_blob->sqlite3_step->sqlite3_finalize to
> write each record in to
> Database. However, I found that after each record is
> inserted, the PDA
> memory became larger and larger. In the end, the
> whole PDA memory is
> occupied by this application and the system halt. Is
> there any thing I miss
> to release the memory?
> Following is my code:
> 
> sqlite3 *db;
> sqlite3_stmt * stat;
> char *zErrMsg = 0;
> char sqlcmd[ 512 ];
> int rc;
> char chrBarCode[ 16 ], chrPrintData[ 512 ]; int
> tagtype;
> 
> 
> 
> for( i = 0; i < 200; i ++ )
> { 
>   sprintf( sqlcmd, "INSERT INTO TEST( BARCODE,
> TAGTYPE, PRINTDATA )
> VALUES( %s, %d, ? );", chrBarCode, tagtype );
>   
>   rc = sqlite3_prepare( db, sqlcmd, -1, &stat, 0 );
>   if( rc != SQLITE_OK )
>   {
>   sqlite3_close(db);
>   return -1;
>   }
>   
>   rc = sqlite3_bind_blob( stat, 1, chrPrintData,
> length, NULL ); 
>   if( rc != SQLITE_OK )
>   {
>   sqlite3_close(db);
>   return -1;
>   }
>   
>   rc = sqlite3_step( stat );  
>   if( rc != SQLITE_DONE )
>   {
>   sqlite3_close(db);
>   return -1;
>   }
>   
>   rc = sqlite3_reset( stat );
>   if( rc != SQLITE_OK )
>   {
>   sqlite3_close(db);
>   return -1;
>   }
>   
>   rc = sqlite3_finalize( stat );
>   if( rc != SQLITE_OK )
>   {
>   sqlite3_close(db);
>   return -1;
>   }
>   
> }
> 
> 
> WenYuan
> 
> 
> 
>  
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
>
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
>
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] WinCE Memory Problem

2008-04-08 Thread Yang WenYuan
I use SQLite on a PDA running WinCE. I need
to add more than 200,000 records. I used
sqlite3_prepare ->
sqlite3_bind_blob -> sqlite3_step -> sqlite3_finalize to
write each record into the database. However, I found
that after each record is inserted, the PDA memory
usage becomes larger and larger. In the end, the whole PDA
memory is occupied by this application and the system
halts. Is there anything I missed to release the memory?
Following is my code:

sqlite3 *db;
sqlite3_stmt * stat;
char *zErrMsg = 0;
char sqlcmd[ 512 ];
int rc;
char chrBarCode[ 16 ], chrPrintData[ 512 ];
int tagtype;



for( i = 0; i < 200; i ++ )
{   
sprintf( sqlcmd, "INSERT INTO TEST( BARCODE, TAGTYPE,
PRINTDATA ) VALUES( %s, %d, ? );", chrBarCode, tagtype
);

rc = sqlite3_prepare( db, sqlcmd, -1, &stat, 0 );
if( rc != SQLITE_OK )
{
sqlite3_close(db);
return -1;
}

rc = sqlite3_bind_blob( stat, 1, chrPrintData,
length, NULL ); 
if( rc != SQLITE_OK )
{
sqlite3_close(db);
return -1;
}

rc = sqlite3_step( stat );  
if( rc != SQLITE_DONE )
{
sqlite3_close(db);
return -1;
}

rc = sqlite3_reset( stat );
if( rc != SQLITE_OK )
{
sqlite3_close(db);
return -1;
}

rc = sqlite3_finalize( stat );
if( rc != SQLITE_OK )
{
sqlite3_close(db);
return -1;
}

}


WenYuan



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users