Re: [sqlite] Maximum database size?

2010-03-04 Thread Collin Capano
Hi Roger,

Thanks for the explanation.

Collin

On 3/3/10 11:11 PM, Roger Binns wrote:
> Collin Capano wrote:
>
>> Why does it prefer to use
>> /var/tmp or some other temp directory as opposed to just using the
>> current directory? Is there some performance advantage in doing this?
>>  
> The temp tables are for a particular connection only.  If the process dies
> unexpectedly, no other process would ever look for the temp tables.
> Consequently the temp files backing the temp tables really are temporary
> files, and the OS-appropriate locations are used by default.  (For example,
> the OS may remove temp files not touched in the last 7 days.)
>
> By contrast the journal is looked for by other processes.  If a process is
> writing to the journal and dies unexpectedly then another SQLite based
> process will look for the journal and do a rollback as appropriate.
>
> As you saw, you can control the temporary location at compile and run time,
> and also change the behaviour between using files and using memory.
>
> In other words, SQLite has sensible defaults, but you can still make things
> work however you want.
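>
> For instance, a rough sketch of those knobs from Python (untested; the
> directory name here is made up):
>
>     import os, sqlite3
>
>     # Compile time: building with -DSQLITE_TEMP_STORE=3 forces temp
>     # tables into memory regardless of any run-time setting.
>
>     # Run time: newer versions of SQLite on unix also consult the
>     # SQLITE_TMPDIR environment variable when picking a temp directory
>     # (set it before opening the connection).
>     os.environ["SQLITE_TMPDIR"] = "/big/disk/tmp"
>
>     conn = sqlite3.connect("test.db")
>     # Keep temp tables in memory instead of in files...
>     conn.execute("PRAGMA temp_store = MEMORY;")
>     # ...or keep them in files, but in a directory of your choosing.
>     conn.execute("PRAGMA temp_store = FILE;")
>     conn.execute("PRAGMA temp_store_directory = '/big/disk/tmp';")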
>
> Roger



Re: [sqlite] Maximum database size?

2010-03-03 Thread Collin Capano
Hi all,

Thanks so much for the info! The problem does appear to have been due to 
temp_store_directory. It was set to /var/tmp; on our clusters /var/tmp 
lives on another disk, which has only about 3GB free, as opposed to the 
3TB free on the disk that the database lives on. I re-set it to the 
database's directory and the errors went away.
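
In case it's useful to anyone else, the fix boiled down to something like 
this (a minimal sketch using Python's sqlite3 module; the paths here are 
made up, not our real ones):

    import sqlite3

    # Open the large database (example path).
    conn = sqlite3.connect("/scratch/analysis/triggers.sqlite")
    conn.isolation_level = None  # autocommit, so VACUUM can run

    # Send SQLite's temporary files to the 3TB disk the database lives
    # on instead of the default /var/tmp.  Changing this pragma discards
    # any existing temp tables, so set it right after connecting.
    conn.execute("PRAGMA temp_store_directory = '/scratch/analysis';")

    conn.execute("VACUUM;")  # no longer runs /var/tmp out of space
    conn.close()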

Side question: looking at the sqlite source code, I saw that it tries 
several compiled-in temp directories, such as /var/tmp, before it will 
fall back to the current working directory. Why does it prefer /var/tmp 
or some other temp directory over just using the current directory? Is 
there some performance advantage in doing this?

Either way, thanks again for the help! This cleared up a lot of problems 
that have been bothering me for a while now.

Collin

On 3/3/10 4:52 AM, Dan Kennedy wrote:
> On Mar 3, 2010, at 12:57 PM, Collin Capano wrote:
>
>
>> Hello SQLite users,
>>
>> I've been running into some disk I/O errors when doing things such as
>> vacuuming and/or inserting things into temp tables in a database. The
>> databases that are giving me trouble are quite large: between 29 and
>> 55GB. However, as large as that is, I don't think running out of disk
>> space is the issue as I have about 3TB of free space on the disk.
>> So, my
>> question is, is there a maximum size that databases can be? If so,
>> what
>> is the limiting factor? The databases in question don't seem to be
>> corrupt; I can open them on the command line and in python programs
>> (using pysqlite) and can read triggers from them just fine. It's just
>> when I try to vacuum and create temp tables that I run into trouble.
>>  
> Running out of space on the /tmp partition perhaps. See pragma
> temp_store_directory:
>
> http://www.sqlite.org/pragma.html#pragma_temp_store_directory
>
> Dan.
>



[sqlite] Maximum database size?

2010-03-02 Thread Collin Capano
Hello SQLite users,

I've been running into some disk I/O errors when doing things such as 
vacuuming and/or inserting things into temp tables in a database. The 
databases that are giving me trouble are quite large: between 29 and 
55GB. However, as large as that is, I don't think running out of disk 
space is the issue as I have about 3TB of free space on the disk. So, my 
question is, is there a maximum size that databases can be? If so, what 
is the limiting factor? The databases in question don't seem to be 
corrupt; I can open them on the command line and in python programs 
(using pysqlite) and can read triggers from them just fine. It's just 
when I try to vacuum and create temp tables that I run into trouble.

If you need to know, I am running sqlite version 3.5.9 on CentOS 5.3.
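
In case it helps with diagnosis, the failing operations boil down to 
something like this (a stripped-down sketch; the file and table names 
are made up):

    import sqlite3

    conn = sqlite3.connect("triggers.sqlite")  # a ~30-55GB database
    conn.isolation_level = None                # autocommit, so VACUUM can run

    # Both of these write large temporary files while they work, and
    # both fail with "disk I/O error" on the big databases:
    conn.execute("CREATE TEMP TABLE scratch AS SELECT * FROM events;")
    conn.execute("VACUUM;")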

Thanks,
Collin Capano
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users