We have a problem with the latest stable release of SQLite. When it is compiled for 32-bit Windows and we VACUUM a database file larger than 2 GB, memory usage climbs to 2 GB and then VACUUM crashes with an out-of-memory error. It seems that no temp file is generated, neither for a standard (rollback-journal) database nor for a WAL database.
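For what it's worth, where VACUUM builds its temporary copy is governed by PRAGMA temp_store (and the SQLITE_TEMP_STORE compile-time option); if either forces memory, the whole rebuild lands in RAM. A quick check, sketched here with Python's sqlite3 module against an in-memory stand-in for the real file:

```python
import sqlite3

# temp_store: 0 = DEFAULT (decided by SQLITE_TEMP_STORE at build time),
# 1 = FILE, 2 = MEMORY. With MEMORY, or a library built with
# SQLITE_TEMP_STORE=3, VACUUM rebuilds the whole database in RAM.
conn = sqlite3.connect(":memory:")  # stand-in for the real database file
temp_store = conn.execute("PRAGMA temp_store").fetchone()[0]
print(temp_store)
conn.close()
```

If this reports 2, or the library was compiled with SQLITE_TEMP_STORE=3, VACUUM will build the new copy entirely in memory regardless of file size.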

Has anybody else run into this issue?

Regards,
cc



On 24 October 2016 at 2:00:16 p.m., sqlite-users-requ...@mailinglists.sqlite.org wrote:

Send sqlite-users mailing list submissions to
        sqlite-users@mailinglists.sqlite.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
or, via email, send a message with subject or body 'help' to
        sqlite-users-requ...@mailinglists.sqlite.org

You can reach the person managing the list at
        sqlite-users-ow...@mailinglists.sqlite.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of sqlite-users digest..."


Today's Topics:

   1. Re: Import 5000 xml files in a sqlite database file
      (Sylvain Pointeau)
   2. Re: Import 5000 xml files in a sqlite database file
      (Eduardo Morras)
   3. Development environment recommendation (Philip Rhoades)
   4. How does the pager know whether its memory cache is still
      valid? (Rowan Worth)
   5. Re: How does the pager know whether its memory cache is still
      valid? (Clemens Ladisch)
   6. Re: How does the pager know whether its memory cache is still
      valid? (Rowan Worth)
   7. Best way to wipe out data of a closed database (Max Vlasov)
   8. Warning automatic index on (Werner Kleiner)
   9. Re: Warning automatic index on (Rowan Worth)
  10. Re: Development environment recommendation (Simon Slavin)
  11. Re: Best way to wipe out data of a closed database (Simon Slavin)
  12. Re: Best way to wipe out data of a closed database (Richard Hipp)
  13. Re: Import 5000 xml files in a sqlite database file (Kevin Youren)
  14. Re: Warning automatic index on (Simon Slavin)
  15. Re: Import 5000 xml files in a sqlite database file
      (Preguntón Cojonero Cabrón)
  16. Re: Virtual table acting as wrapper of a regular table
      (Hick Gunter)
  17. Re: Warning automatic index on (Richard Hipp)
  18. Re: Best way to wipe out data of a closed database (Max Vlasov)
  19. Re: Best way to wipe out data of a closed database (Max Vlasov)


----------------------------------------------------------------------

Message: 1
Date: Sun, 23 Oct 2016 18:03:57 +0200
From: Sylvain Pointeau <sylvain.point...@gmail.com>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Import 5000 xml files in a sqlite database file
Message-ID:
        <CAFfm6WvfTSB=oe8u0ymff_ja1m+pbcrkqguqhmbehircxxh...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

hello,

I am not sure if Oxygen or another XML-specialized software could do it;
however, it would be easily done using C, C++ or Java. The advantage is
that it is then easy to automate in batch mode. I can offer you my
services if you are interested.

Best regards,
Sylvain

On Saturday 22 October 2016, <bob_sql...@web.de> wrote:

> Hi,
>
> I have more than 5000 xml files. All files have the same xml-structure.
>
> Each file has different values (timestamps, numbers and strings). I would
> like to put all these values in a sqlite database table, all in one table.
> => Import the data values into a sqlite database table.
>
> Can you please tell me a software program, that can do this quickly?
>
> Thank you for your answers.
>
> Best regards
>
> Bob
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



------------------------------

Message: 2
Date: Sun, 23 Oct 2016 18:36:28 +0200
From: Eduardo Morras <emorr...@yahoo.es>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Import 5000 xml files in a sqlite database file
Message-ID: <20161023183628.ff445ccd40a80b1d20441...@yahoo.es>
Content-Type: text/plain; charset=US-ASCII

On Sat, 22 Oct 2016 19:26:42 +0200
bob_sql...@web.de wrote:

> Hi,
>
> I have more than 5000 xml files. All files have the same
> xml-structure.
>
> Each file has different values (timestamps, numbers and strings). I
> would like to put all these values in a sqlite database table, all in
> one table. => Import the data values into a sqlite database table.
>
> Can you please tell me a software program, that can do this quickly?

You can convert them to JSON and use sqlite3's JSON capabilities.
The DuckDuckGo search engine points me to:

http://web-notes.wirehopper.com/2013/12/06/linux-command-line-convert-xml

http://openlife.cc/blogs/2013/november/translating-reliably-between-xml-and-json-xml2json

These describe some apps that convert XML to JSON. Some are in
JavaScript, Python, PHP or Perl, and others are online converters.
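As a minimal sketch of that route, with a made-up one-record structure (the real files aren't shown, so the tag names are invented) and assuming the SQLite build includes the JSON functions:

```python
import json
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical one-record file; the real structure of the 5000 files
# is not shown, so the tag names here are invented.
xml_doc = "<record><ts>2016-10-22T19:26:42</ts><num>42</num><name>abc</name></record>"
root = ET.fromstring(xml_doc)
as_json = json.dumps({child.tag: child.text for child in root})

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (doc TEXT)")
conn.execute("INSERT INTO records VALUES (?)", (as_json,))
# With the JSON functions available, individual fields can be pulled
# straight out of the stored documents:
num = conn.execute(
    "SELECT json_extract(doc, '$.num') FROM records"
).fetchone()[0]
print(num)
```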


> Thank you for your answers.
>
> Best regards
>
> Bob
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>


------------------------------

Message: 3
Date: Mon, 24 Oct 2016 16:26:10 +1100
From: Philip Rhoades <p...@pricom.com.au>
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] Development environment recommendation
Message-ID: <6d3ac38eb8b64e389d7fa97f4ec7d1e6@localhost>
Content-Type: text/plain; charset=US-ASCII; format=flowed

People,

I previously posted about a "Simple SQLite app importing from mmssms.db
and using GContacts"


I periodically crash or otherwise do a factory reset on my Android
phone, but rather than restoring all the SMS messages I would like to
just import the backup mmssms.db into a Linux app and keep all the old
messages for historical posterity, allowing me to search and look up
messages etc. in a consolidated DB.  I can't find anything in existence
that does this nicely, so it seems the easiest way to do this would be
to build it myself using Ruby + SQLite, but I thought I would check
here first - has anyone already done something like this?  Any pointers
about where to get started?


but I didn't get any responses so I am posting again with a different
Subject.  My process is:

- rsync the current mmssms.db from my phone to my Fedora Linux system
prior to doing any factory resets or dangerous tinkering

- dump data out of the mmssms.db file

I want to import data into a DB that can keep accumulating stuff.

I will probably only ever want to look at historical stuff on my
computer but maybe a web app would be convenient for remote access.

What development environment would people suggest for building the
sqlite app?

Thanks,

Phil.


--
Philip Rhoades

PO Box 896
Cowra  NSW  2794
Australia
E-mail:  p...@pricom.com.au


------------------------------

Message: 4
Date: Mon, 24 Oct 2016 15:36:37 +0800
From: Rowan Worth <row...@dug.com>
To: General Discussion of SQLite Database
        <sqlite-users@mailinglists.sqlite.org>
Subject: [sqlite] How does the pager know whether its memory cache is
        still   valid?
Message-ID:
        <CAJtcO2T7c=nxygn569m8xghhdttjrcqd4uhshqjga+e+pya...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hi guys,

I haven't been able to figure this one out from the docs, nor have I
stumbled onto the right section of the source.

Say you have two separate processes accessing an sqlite DB. P1 starts a
transaction, reads page #5, ends transaction. P2 starts a transaction,
modifies page #5, ends transaction. P1 starts a new transaction and reads
page #5 again.

At this point P1 still has page #5 cached in memory, but clearly it needs
to read from disk to pick up the changes. How does sqlite determine that
the cached page is out of date?


Ultimately the question I'm trying to answer is whether increasing the size
of the pager cache will reduce the amount of I/O required by a single
process in the case where a database is modified concurrently.

Cheers,
-Rowan


------------------------------

Message: 5
Date: Mon, 24 Oct 2016 09:44:56 +0200
From: Clemens Ladisch <clem...@ladisch.de>
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] How does the pager know whether its memory cache
        is      still valid?
Message-ID: <45365e61-0122-ca88-e984-c66398bb5...@ladisch.de>
Content-Type: text/plain; charset=us-ascii

Rowan Worth wrote:
> How does sqlite determine that the cached page is out of date?

http://www.sqlite.org/fileformat2.html#file_change_counter

> Ultimately the question I'm trying to answer is whether increasing the size
> of the pager cache will reduce the amount of I/O required by a single
> process in the case where a database is modified concurrently.

The only way to determine how big the effect is in your specific
situation is to measure it yourself.
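For the record, the mechanism behind that link is the 4-byte big-endian file change counter at byte offset 24 of the database header, which each committed write transaction increments. A rough illustration (Python, default rollback-journal mode, throwaway file path):

```python
import os
import sqlite3
import struct
import tempfile

def change_counter(path):
    """Read the file change counter: 4 bytes, big-endian, at offset 24."""
    with open(path, "rb") as f:
        f.seek(24)
        return struct.unpack(">I", f.read(4))[0]

path = os.path.join(tempfile.mkdtemp(), "demo.db")  # throwaway database
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x)")
conn.commit()
before = change_counter(path)

conn.execute("INSERT INTO t VALUES (1)")
conn.commit()  # the committed write bumps the counter on disk
after = change_counter(path)
conn.close()
print(before, after)
```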


Regards,
Clemens


------------------------------

Message: 6
Date: Mon, 24 Oct 2016 16:31:43 +0800
From: Rowan Worth <row...@dug.com>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] How does the pager know whether its memory cache
        is still valid?
Message-ID:
        <cajtco2se-ifbxhqwtkfbhxwouok9tjqe6twh-1sxzk6tvqz...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On 24 October 2016 at 15:44, Clemens Ladisch <clem...@ladisch.de> wrote:

> Rowan Worth wrote:
>> How does sqlite determine that the cached page is out of date?
>
> http://www.sqlite.org/fileformat2.html#file_change_counter
>
>> Ultimately the question I'm trying to answer is whether increasing the
>> size of the pager cache will reduce the amount of I/O required by a
>> single process in the case where a database is modified concurrently.
>
> The only way to determine how big the effect is in your specific
> situation is to measure it yourself.


OK, so the entire cache is invalidated when another process updates the DB,
which is what I feared. In this case I'm looking at too many concurrent
updates for caching to add much value.

Thanks! I had been over the file format -- clearly I need to read slower :)
-Rowan


------------------------------

Message: 7
Date: Mon, 24 Oct 2016 11:58:27 +0300
From: Max Vlasov <max.vla...@gmail.com>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: [sqlite] Best way to wipe out data of a closed database
Message-ID:
        <CAJtDhuBLwkqov6hC0JW9kXP7j+K2xo5uoqH9e7ENeE7fFZo0=q...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hi,

in an application that implements encryption/decryption with a VFS,
what is the best way to ensure that the memory of the application
doesn't contain decrypted data after the database is closed?  That way
no other application could retrieve sensitive information by reading
this process's memory - not just the database as a whole, but also
fragments of database sectors anywhere in the process's address space.

One possible trick is to add zeroing to the global free handler, but
this would probably introduce performance penalties.

Is there any other way to do this?

Thanks,

Max


------------------------------

Message: 8
Date: Mon, 24 Oct 2016 10:59:53 +0200
From: Werner Kleiner <sqlitetes...@gmail.com>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: [sqlite] Warning automatic index on
Message-ID:
        <cae0bn6mkg5h42nmdesvrze-p9n_hwetrz0cyyc_u8zkmvdm...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Hello,

In an error log there is a message like:
SQlite warning (284) automatic index on is_mytable(internalvalue)

What does this mean?
Can SQLite not use the index correctly?
How can we check or optimize the index?
What do I have to do to silence the message?

Here is the Table DDL:

-- Table: is_mytable
CREATE TABLE "is_mytable" (
    "mytableid" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL  ,
    "compcid" INT  NULL DEFAULT 0 ,
    "installid" INT  NOT NULL DEFAULT 276 ,
    "internalvalue" VARCHAR(250)  NOT NULL  COLLATE NOCASE,
    "namekey" VARCHAR(250)  NOT NULL DEFAULT 'Document' COLLATE NOCASE,
    "textid" INT  NOT NULL  ,"defaultvalue" INT  NOT NULL DEFAULT 0 );

-- Index: InternalName
CREATE INDEX 'InternalName' ON 'is_mytable' (`internalvalue` DESC);

-- Index: OpenUI
CREATE INDEX 'OpenUI' ON 'is_mytable' (`namekey` DESC);

-- Index: idx_mytable_compcid
CREATE INDEX 'idx_mytable_compcid' ON 'is_mytable' (`compcid` DESC);

-- Index: idx_mytable_internalvalue
CREATE INDEX idx_mytable_internalvalue ON is_mytable (internalvalue);

regards
Werner


------------------------------

Message: 9
Date: Mon, 24 Oct 2016 17:11:02 +0800
From: Rowan Worth <row...@dug.com>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Warning automatic index on
Message-ID:
        <CAJtcO2Qgc5X6cFaYYcn32YtUf-DwXhU=zms2yzfyp_pwf+f...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On 24 October 2016 at 16:59, Werner Kleiner <sqlitetes...@gmail.com> wrote:

> Hello,
>
> In an error log there is a message like:
> SQlite warning (284) automatic index on is_mytable(internalvalue)
>
> What does this mean?


It means that SQLite's query optimiser has decided the most efficient way
to get the results you asked for is to:

(a) create an index on the 'internalvalue' column of 'is_mytable'
(b) use the index from (a) to help execute the query
(c) drop the index created in (a)
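This is easy to reproduce with EXPLAIN QUERY PLAN; a small sketch with illustrative table names (not your actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE big (id INT);
    CREATE TABLE other (id INT);  -- deliberately no index on other.id
""")
# CROSS JOIN forces 'other' into the inner loop; with no index available,
# the planner builds a transient one and reports it in the query plan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM big CROSS JOIN other WHERE big.id = other.id"
).fetchall()
for row in plan:
    print(row[-1])  # e.g. "SEARCH other USING AUTOMATIC COVERING INDEX (id=?)"
```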

However, I see you already have an index 'InternalName' which covers the
'internalvalue' column, so I'm not sure why that isn't being used. Maybe
the planner is confused by the DESC or the MySQL-style backticks?


What query are you running which produces the warning?

What version of SQLite are you using?
-Rowan


------------------------------

Message: 10
Date: Mon, 24 Oct 2016 11:02:01 +0100
From: Simon Slavin <slav...@bigfraud.org>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Development environment recommendation
Message-ID: <a5e7d6a1-ecda-4697-af87-598748c9a...@bigfraud.org>
Content-Type: text/plain; charset=us-ascii


On 24 Oct 2016, at 6:26am, Philip Rhoades <p...@pricom.com.au> wrote:

> What development environment would people suggest for building the sqlite app?

If I understand correctly, 'mmssms.db' is itself a SQLite database file. So your choice comes down to whatever programming language/environment you're familiar with, as long as it supports the SQLite API or has a library which does.

Ruby, which you mention, is fine:

<http://sqlite-ruby.rubyforge.org/sqlite3/faq.html>

Simon.

------------------------------

Message: 11
Date: Mon, 24 Oct 2016 11:34:58 +0100
From: Simon Slavin <slav...@bigfraud.org>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Best way to wipe out data of a closed database
Message-ID: <3654878e-861a-4c81-bb6c-12c290ea3...@bigfraud.org>
Content-Type: text/plain; charset=us-ascii


On 24 Oct 2016, at 9:58am, Max Vlasov <max.vla...@gmail.com> wrote:

> in an application that implements encryption/decryption with a VFS, what
> is the best way to ensure that the memory of the application doesn't
> contain decrypted data after the database is closed.

We can't answer about memory that your own application handles, of course.

To ensure zeroing out of memory I suggest you use the zero-malloc option provided by SQLite's memory allocator. For more details see sections 3.1.4 and 3.1.5 of

<https://www.sqlite.org/malloc.html>

It's also worth noting here that SQLite has the following PRAGMA:

        PRAGMA schema.secure_delete = boolean

which zeroes freed space in files. However, I don't remember whether this works by zeroing out the in-memory copy of the page and then writing that block to disk.
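A quick sketch of toggling it (Python; note this only affects what reaches the disk, not copies already sitting in memory):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA secure_delete = ON")  # or: PRAGMA main.secure_delete = ON
state = conn.execute("PRAGMA secure_delete").fetchone()[0]
print(state)  # 1 while enabled; deleted content is overwritten with zeros
```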

Simon.

------------------------------

Message: 12
Date: Mon, 24 Oct 2016 06:36:18 -0400
From: Richard Hipp <d...@sqlite.org>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Best way to wipe out data of a closed database
Message-ID:
        <CALwJ=MzZOBzuSWa8Gw4XBxTk4EtMqwgfVApWD=8gisarqp0...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On 10/24/16, Max Vlasov <max.vla...@gmail.com> wrote:

> One of the trick possible is to add additional zeroing out to the
> global free handler, but this can probably introduce performance
> penalties.
>
> Is there any other way to do this?

If you set up to use memsys5 at compile-time
(https://www.sqlite.org/malloc.html#zero_malloc_memory_allocator) then
after all use of SQLite has ended, you are left with a single big
chunk of memory that can be zeroed.

Memsys5 is also faster than your global system memory allocator
(before the extra overhead of zeroing, at least).  But on the other
hand, you have to know the maximum amount of memory SQLite will want
at the very beginning, and that memory will be used only by SQLite and
not other parts of your application, so memory utilization is not as
efficient.

--
D. Richard Hipp
d...@sqlite.org


------------------------------

Message: 13
Date: Mon, 24 Oct 2016 10:35:28 +1100
From: Kevin Youren <kevin.you...@gmail.com>
To: bob_sql...@web.de
Cc: sqlite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Import 5000 xml files in a sqlite database file
Message-ID: <1477265728.4714.28.ca...@gmail.com>
Content-Type: text/plain; charset="UTF-8"

Bob,

my name is Kevin Youren, and I did this task about 4 years ago in
Python 3, by parsing XML files and creating CSV files. The CSV files
were used to load SQLite tables, MS Excel spreadsheets and IBM mainframe
DB2 tables. The XML data was mildly complex, large, and error prone.

If you have a sample, say 2 or 3 of the normal files, I could make some
suggestions.

Please note that unless the data is ultra simple, XML is generally
better translated as several tables.

For example, my application stored Control-M scheduler information.

Two tables for the schedule group or table name.

The third table for the jobs in each schedule group/table.

The fourth table for the conditions for the jobs for the schedule
group/table.

Each table had columns for the tags or attributes.

regs,

Kev








------------------------------

Message: 14
Date: Mon, 24 Oct 2016 11:37:08 +0100
From: Simon Slavin <slav...@bigfraud.org>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Warning automatic index on
Message-ID: <41d5f042-ac3d-4b2e-92e0-2d1a18e3d...@bigfraud.org>
Content-Type: text/plain; charset=us-ascii


On 24 Oct 2016, at 10:11am, Rowan Worth <row...@dug.com> wrote:

> However I see you already have an index 'InternalName' which covers the
> 'internalvalue' column, so not sure why that isn't being used. Maybe it's
> confused by the DESC or the MySQL back-ticks?

I think it's the "DESC". Since SQLite understands how to search an index backwards when it needs to, try dropping the DESC on that index and see if the problem goes away.

Simon.

------------------------------

Message: 15
Date: Sun, 23 Oct 2016 18:12:44 +0200
From: Preguntón Cojonero Cabrón  <preguntoncojon...@gmail.com>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Import 5000 xml files in a sqlite database file
Message-ID:
        <cals96mqqvgo4dpjmgsnvav546ufxvcon6gob5nhtwmqacxx...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Scripting in PowerShell? C#?

On 23/10/2016 at 18:04, "Sylvain Pointeau" <sylvain.point...@gmail.com>
wrote:

> hello,
>
> I am not sure if Oxygen or another XML-specialized software could do it;
> however, it would be easily done using C, C++ or Java. The advantage is
> that it is then easy to automate in batch mode. I can offer you my
> services if you are interested.
>
> Best regards,
> Sylvain

> On Saturday 22 October 2016, <bob_sql...@web.de> wrote:

> Hi,
>
> I have more than 5000 xml files. All files have the same xml-structure.
>
> Each file has different values (timestamps, numbers and strings). I would
> like to put all these values in a sqlite database table, all in one
table.
> => Import the data values into a sqlite database table.
>
> Can you please tell me a software program, that can do this quickly?
>
> Thank you for your answers.
>
> Best regards
>
> Bob
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



------------------------------

Message: 16
Date: Mon, 24 Oct 2016 10:39:50 +0000
From: Hick Gunter <h...@scigames.at>
To: 'SQLite mailing list' <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Virtual table acting as wrapper of a regular
        table
Message-ID: <f52c0630e31cbbdf6d39a187571d1954d084a176@localhost>
Content-Type: text/plain; charset="utf-8"

Your xFilter method is blindly assuming that there is always an argv[0] without checking argc first.

You are incurring an extra "prepare" in your xConnect method, an extra "step" and "column" in your xFilter/xNext methods and an extra "result" in your xColumn function. Doing twice as much work taking twice as long seems quite reasonable.

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Dimitris Bil
Sent: Saturday, 22 October 2016 15:31
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] Virtual table acting as wrapper of a regular table

Hello,

I am trying to create a simple virtual table that acts as a wrapper for a normal database table for a specific query. For example, consider that I have tables A and B in my database and the query:
select count(A.id) from A CROSS JOIN B where A.id=B.id

Now I am using a virtual table acting as a wrapper for table B, so I can execute the query select count(A.id) from A CROSS JOIN w where A.id=w.id, where w is the virtual table instance, and get the same result. My problem is that the second query is about twice as slow. I would normally expect some overhead, but this seems quite slow, so I was wondering if there is something wrong with my code.

I am using a prepared statement in the connect method for the query "select id from B where id=?1", with reset/bind/step in the filter method and (if there are more results) step in the next method.

Here's my code (I am just setting id=0 to denote eof) http://pastebin.com/ce8b4aLL

Do you think there's something wrong, or is this an unavoidable overhead that comes with virtual table usage? Is there any chance to improve performance?

thanks,
Dimitris

_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


___________________________________________
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This communication (including any attachments) is intended for the use of the intended recipient(s) only and may contain information that is confidential, privileged or legally protected. Any unauthorized use or dissemination of this communication is strictly prohibited. If you have received this communication in error, please immediately notify the sender by return e-mail message and delete all copies of the original communication. Thank you for your cooperation.



------------------------------

Message: 17
Date: Mon, 24 Oct 2016 06:40:20 -0400
From: Richard Hipp <d...@sqlite.org>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Warning automatic index on
Message-ID:
        <CALwJ=Myu+_D=k96wKgR7di=n4jsqj4_kgtoofmwhoitd-h6...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On 10/24/16, Simon Slavin <slav...@bigfraud.org> wrote:

> I think it's the "DESC".  Since SQLite understands how to search an index
> backwards when it needs to, try dropping the DESC on that index and see if
> the problem goes away.

He already has another index without the DESC :-\

I think those indexes are redundant.  I cannot, off hand, think of a
set of queries that would benefit from having them both.  SQLite is
perfectly capable of using a DESC index for an ASC scan and vice versa.

I don't think the MySQL backticks are a factor either.  Though they
should be fixed, SQLite is at least able to deal with them.
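That is easy to confirm with EXPLAIN QUERY PLAN; a sketch with illustrative names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (v);
    CREATE INDEX tv_desc ON t (v DESC);  -- only a DESC index exists
""")
# An ascending ORDER BY is still satisfied by the DESC index: SQLite
# just scans it backwards instead of sorting.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT v FROM t ORDER BY v ASC"
).fetchall()
for row in plan:
    print(row[-1])  # the plan names tv_desc; no temp B-tree for ORDER BY
```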
--
D. Richard Hipp
d...@sqlite.org


------------------------------

Message: 18
Date: Mon, 24 Oct 2016 14:15:32 +0300
From: Max Vlasov <max.vla...@gmail.com>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Best way to wipe out data of a closed database
Message-ID:
        <CAJtDhuC8=84wr8r5q7a8oo0uouujxn2u23azrosqn9op9lj...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Simon, thanks.
I'd never heard of secure_delete - interesting, but probably of no use
in the case of a VFS layer that leaves only encrypted data on disk.
As for the zero-malloc option, it looks promising.

On Mon, Oct 24, 2016 at 1:34 PM, Simon Slavin <slav...@bigfraud.org> wrote:

> On 24 Oct 2016, at 9:58am, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> in an application that implements encryption/decryption with VFS, what
>> is the best way to ensure that the memory of the application doesn't
>> contain decrypted data after the database is closed.
>
> We can't answer about memory that your own application handles, of course.
>
> To ensure zeroing out of memory I suggest you use the zero-malloc option as provided by SQLite's memory allocator. For more details see sections 3.1.4 and 3.1.5 of
>
> <https://www.sqlite.org/malloc.html>
>
> It's also worth noting here that SQLite has the following PRAGMA:
>
>         PRAGMA schema.secure_delete = boolean
>
> which zeros space in files. However I don't remember this working by zeroing out the memory copy of the file then writing that block to disk.
>
> Simon.
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


------------------------------

Message: 19
Date: Mon, 24 Oct 2016 14:52:24 +0300
From: Max Vlasov <max.vla...@gmail.com>
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Best way to wipe out data of a closed database
Message-ID:
        <CAJtDhuDiTHEAuW1uVQySynT5MBWzE1TNyRAP-Og9K2m=5dd...@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

On Mon, Oct 24, 2016 at 1:36 PM, Richard Hipp <d...@sqlite.org> wrote:

> Memsys5 is also faster than your global system memory allocator
> (before the extra overhead of zeroing, at least).  But on the other
> hand, you have to know the maximum amount of memory SQLite will want
> at the very beginning, and that memory will be used only by SQLite and
> not other parts of your application, so memory utilization is not as
> efficient.


Thanks, I understand the risks and benefits, so it will probably be
either the zero-malloc allocator or my own allocator replacement.


------------------------------

Subject: Digest Footer

_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


------------------------------

End of sqlite-users Digest, Vol 106, Issue 24
*********************************************