Is it time to break out an API for schema lookup? That would seem to be the
least work for the developers and would give people the chance to implement
whatever strategy they need to manage large schemas, including storing them in
the database in a structured manner, or in a compressed in-memory representation.
On 3/17/17, Dominique Devienne wrote:
>
> But what prevents SQLite, if a constraint name is specified, from using
> that name for the index?
Backwards compatibility. This would change the file format, rendering
database files corrupt in the eyes of older versions of SQLite.
On Fri, Mar 17, 2017 at 9:30 AM, Simon Slavin wrote:
> On 17 Mar 2017, at 7:49am, Dominique Devienne wrote:
> > Richard, why is SQLite ignoring an attempt to give these an explicit name?
> —DD
> > […]
> >> sqlite> create table t (id constraint u1 unique);
>
> You are supplying a name for the constraint. But you're still leaving it up
> to SQLite to create the index that implements it.
On Thu, Mar 16, 2017 at 11:19 PM, Bob Friesenhahn
wrote:
> On Thu, 16 Mar 2017, Richard Hipp wrote:
>>
>>
>> Your 664K is a conservative estimate. On my (64-bit linux) desktop,
>> I'm showing 1.58MB of heap space used to store the schema. (Hint:
>> bring up the database in the command-line shell, load the schema by
>> doing something like ".tables", then type ".stats".)
On 17 Mar 2017, at 7:49am, Dominique Devienne wrote:
> Richard, why is SQLite ignoring an attempt to give these an explicit
> name? —DD
>
> […]
>
> sqlite> create table t (id constraint u1 unique);
You are supplying a name for the constraint. But you’re still leaving it up to
SQLite to create the index that implements it.
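The behavior under discussion is easy to reproduce with Python's stdlib sqlite3 module (the table name `t` and constraint name `u1` are taken from the example above): even with an explicit `CONSTRAINT u1`, the implicit index gets the generated `sqlite_autoindex_*` name.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Name the UNIQUE constraint explicitly...
con.execute("CREATE TABLE t (id CONSTRAINT u1 UNIQUE)")
# ...but the implicit index still receives an auto-generated name.
names = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
print(names)  # ['sqlite_autoindex_t_1']
```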
On Fri, Mar 17, 2017 at 12:00 AM, Richard Hipp wrote:
> On 3/16/17, Bob Friesenhahn wrote:
> > In sqlite_master I see quite a lot of "sql_autoindex" indexes. Do
> > these auto indexes consume the same RAM as explicit indexes?
>
> Yes. Those indexes are implementing UNIQUE constraints.
>
Richard, why is SQLite ignoring an attempt to give these an explicit name? —DD
On 3/16/17, Bob Friesenhahn wrote:
> In sqlite_master I see quite a lot of "sql_autoindex" indexes. Do
> these auto indexes consume the same RAM as explicit indexes?
Yes.
Those indexes are implementing UNIQUE constraints.
--
D. Richard Hipp
d...@sqlite.org
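One way to see which indexes are these constraint-enforcing auto-indexes, sketched with Python's stdlib sqlite3 module (table and index names here are hypothetical): auto-indexes have no `CREATE INDEX` statement of their own, so their `sql` column in `sqlite_master` is NULL.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (x TEXT UNIQUE, y TEXT UNIQUE);  -- two auto-indexes
    CREATE TABLE b (u TEXT);
    CREATE INDEX b_u ON b(u);                       -- one explicit index
""")
# Auto-indexes are not created by a CREATE INDEX statement, so sql IS NULL.
rows = con.execute(
    "SELECT name, sql IS NULL FROM sqlite_master "
    "WHERE type = 'index' ORDER BY name"
).fetchall()
print(rows)
# [('b_u', 0), ('sqlite_autoindex_a_1', 1), ('sqlite_autoindex_a_2', 1)]
```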
On 3/16/17, Bob Friesenhahn wrote:
>
> I just checked and the total character count for the trigger and index
> names themselves is only 23k, which is not even a tiny dent in 1.58MB.
> Is there a multiplying factor somewhere which would make this worth
> doing?
I did say it was a "small step" :-)
In sqlite_master I see quite a lot of "sql_autoindex" indexes. Do
these auto indexes consume the same RAM as explicit indexes?
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
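The 23k figure above can be reproduced against any database by summing name lengths in `sqlite_master`; a minimal sketch with Python's stdlib sqlite3 module (the verbose index and trigger names are hypothetical stand-ins):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A couple of stand-in objects with deliberately verbose names.
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE INDEX index_on_orders_status_for_reporting_queries ON orders(status);
    CREATE TRIGGER trigger_after_update_of_orders_status
        AFTER UPDATE OF status ON orders BEGIN SELECT 1; END;
""")
# Total characters spent on trigger and index names alone.
total = con.execute(
    "SELECT sum(length(name)) FROM sqlite_master "
    "WHERE type IN ('index', 'trigger')"
).fetchone()[0]
print(total)
```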
On Thu, 16 Mar 2017, Richard Hipp wrote:
One thing you can do right away to save space is pick shorter names
for your 650 triggers and indexes. SQLite stores the full name. But
as these names are not (normally) used by DML statements, you can call
them whatever you want. I'm showing your average […]
On Thu, 16 Mar 2017, Richard Hipp wrote:
Your 664K is a conservative estimate. On my (64-bit linux) desktop,
I'm showing 1.58MB of heap space used to store the schema. (Hint:
bring up the database in the command-line shell, load the schema by
doing something like ".tables", then type ".stats".)
On 3/16/17, Bob Friesenhahn wrote:
>
> I shared our database privately with Richard via email.
>
Your 664K is a conservative estimate. On my (64-bit linux) desktop,
I'm showing 1.58MB of heap space used to store the schema. (Hint:
bring up the database in the command-line shell, load the schema by
doing something like ".tables", then type ".stats".)
On Thu, 16 Mar 2017, Richard Hipp wrote:
On 3/16/17, Bob Friesenhahn wrote:
The schema (already stripped to remove white space and comments) for
our database has reached 664K
Yikes. That's about 10x or 20x what we typically see. Are you able
to share your schema with us?
I shared our database privately with Richard via email.
On 3/16/17, Bob Friesenhahn wrote:
>
> The schema (already stripped to remove white space and comments) for
> our database has reached 664K
Yikes. That's about 10x or 20x what we typically see. Are you able
to share your schema with us?
--
D. Richard Hipp
d...@sqlite.org
On 16 Mar 2017, at 8:09pm, Bob Friesenhahn wrote:
> Would it be reasonably feasible to compress the per-connection schema data
> (stored in RAM) and decompress it as needed? This would make
> prepared-statement and possibly other operations a bit slower but if objects
> are compressed at sufficiently small granularity, then the per-connection memory […]
Would it be reasonably feasible to compress the per-connection schema
data (stored in RAM) and decompress it as needed? This would make
prepared-statement and possibly other operations a bit slower but if
objects are compressed at sufficiently small granularity, then the
per-connection memory […]
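The proposal concerns SQLite's parsed in-memory schema objects rather than the SQL text, but the text gives a rough feel for how compressible schema data tends to be. A sketch using Python's stdlib sqlite3 and zlib modules (the repetitive `sensor_readings_N` tables are hypothetical):

```python
import sqlite3
import zlib

con = sqlite3.connect(":memory:")
# Build a large, repetitive schema of the kind that compresses well.
for i in range(200):
    con.execute(
        f"CREATE TABLE sensor_readings_{i} ("
        "id INTEGER PRIMARY KEY, recorded_at TEXT, value REAL, unit TEXT)"
    )
schema_text = "\n".join(
    row[0] for row in con.execute(
        "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL")
)
compressed = zlib.compress(schema_text.encode("utf-8"))
# The compressed form is a small fraction of the original text size.
print(len(schema_text), len(compressed))
```

Whether the same ratio holds for the parsed schema structures, and whether the decompress-on-demand cost is acceptable at prepare time, is exactly the trade-off being raised.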