Re: [sqlite] Maximum database size?

2010-03-04 Thread Collin Capano
Hi Roger,

Thanks for the explanation.

Collin

On 3/3/10 11:11 PM, Roger Binns wrote:
> Collin Capano wrote:
>
>> Why does it prefer to use
>> /var/tmp or some other temp directory as opposed to just using the
>> current directory? Is there some performance advantage in doing this?
>>  
> The temp tables are for a particular connection only.  If the process dies
> unexpectedly, no other process would ever look for the temp tables.
> Consequently the temp files backing the temp tables really are temporary
> files, and the OS-appropriate locations are used by default.  (For example,
> the OS may remove temp files not touched in the last 7 days.)
>
> By contrast, the journal is looked for by other processes.  If a process is
> writing to the journal and dies unexpectedly, another SQLite-based
> process will look for the journal and do a rollback as appropriate.
>
> As you saw, you can control the temporary location at compile and run time,
> and also change the behaviour between using files and using memory.
>
> i.e. SQLite has sensible defaults, but you can still make things work however
> you want.
>
> Roger

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Maximum database size?

2010-03-03 Thread Roger Binns

Collin Capano wrote:
> Why does it prefer to use 
> /var/tmp or some other temp directory as opposed to just using the 
> current directory? Is there some performance advantage in doing this?

The temp tables are for a particular connection only.  If the process dies
unexpectedly, no other process would ever look for the temp tables.
Consequently the temp files backing the temp tables really are temporary
files, and the OS-appropriate locations are used by default.  (For example,
the OS may remove temp files not touched in the last 7 days.)

By contrast, the journal is looked for by other processes.  If a process is
writing to the journal and dies unexpectedly, another SQLite-based
process will look for the journal and do a rollback as appropriate.

As you saw, you can control the temporary location at compile and run time,
and also change the behaviour between using files and using memory.

i.e. SQLite has sensible defaults, but you can still make things work however
you want.
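For what it's worth, a minimal sketch of that runtime control using Python's
sqlite3 module (temp_store is the pragma that switches a connection between
file-backed and memory-backed temp storage):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# temp_store: 0 = use the compile-time default, 1 = file, 2 = memory
con.execute("PRAGMA temp_store = MEMORY")

# Read the setting back; MEMORY maps to the numeric value 2
mode = con.execute("PRAGMA temp_store").fetchone()[0]
print(mode)  # 2
```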

Roger


Re: [sqlite] Maximum database size?

2010-03-03 Thread Collin Capano
Hi all,

Thanks so much for the info! The problem does appear to have been due to 
temp_store_directory. It was set to /var/tmp; on our clusters /var/tmp 
exists on another disk, which only has about 3GB free, as opposed to the 
3TB I have on the disk that the database lives on. I reset it to the 
database's directory and the errors went away.
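For anyone scripting the same fix, a rough sketch using Python's sqlite3
module; the directory below is a throwaway stand-in for the database's own
directory (note the pragma only accepts a directory that already exists):

```python
import sqlite3
import tempfile

dbdir = tempfile.mkdtemp()  # stand-in for the directory the database lives in

con = sqlite3.connect(":memory:")
# Point this connection's temp files at dbdir instead of /var/tmp
con.execute("PRAGMA temp_store_directory = '%s'" % dbdir)

# Read the setting back to confirm it took effect
current = con.execute("PRAGMA temp_store_directory").fetchone()[0]
print(current)
```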

Side question: I looked at the SQLite source code and saw that, at compile 
time, it tries several temp directories such as /var/tmp before it falls 
back to the current working directory. Why does it prefer /var/tmp or some 
other temp directory over the current directory? Is there some performance 
advantage in doing this?

Either way, thanks again for the help! This cleared up a lot of problems 
that have been bothering me for a while now.

Collin

On 3/3/10 4:52 AM, Dan Kennedy wrote:
> On Mar 3, 2010, at 12:57 PM, Collin Capano wrote:
>
>
>> Hello SQLite users,
>>
>> I've been running into some disk I/O errors when doing things such as
>> vacuuming and/or inserting things into temp tables in a database. The
>> databases that are giving me trouble are quite large: between 29 and
>> 55GB. However, as large as that is, I don't think running out of disk
>> space is the issue as I have about 3TB of free space on the disk.
>> So, my
>> question is, is there a maximum size that databases can be? If so,
>> what
>> is the limiting factor? The databases in question don't seem to be
>> corrupt; I can open them on the command line and in python programs
>> (using pysqlite) and can read triggers from them just fine. It's just
>> when I try to vacuum and create temp tables that I run into trouble.
>>  
> Running out of space on the /tmp partition perhaps. See pragma
> temp_store_directory:
>
> http://www.sqlite.org/pragma.html#pragma_temp_store_directory
>
> Dan.
>
>



Re: [sqlite] Maximum database size?

2010-03-03 Thread Jay A. Kreibich
On Wed, Mar 03, 2010 at 12:57:22AM -0500, Collin Capano scratched on the wall:
> Hello SQLite users,
> 
> I've been running into some disk I/O errors when doing things such as 
> vacuuming and/or inserting things into temp tables in a database.

  Both of those operations require temp space.  Depending on your
  config, that may or may not be on the same disk as the database
  itself.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] Maximum database size?

2010-03-03 Thread Dan Kennedy

On Mar 3, 2010, at 12:57 PM, Collin Capano wrote:

> Hello SQLite users,
>
> I've been running into some disk I/O errors when doing things such as
> vacuuming and/or inserting things into temp tables in a database. The
> databases that are giving me trouble are quite large: between 29 and
> 55GB. However, as large as that is, I don't think running out of disk
> space is the issue as I have about 3TB of free space on the disk.  
> So, my
> question is, is there a maximum size that databases can be? If so,  
> what
> is the limiting factor? The databases in question don't seem to be
> corrupt; I can open them on the command line and in python programs
> (using pysqlite) and can read triggers from them just fine. It's just
> when I try to vacuum and create temp tables that I run into trouble.

Running out of space on the /tmp partition perhaps. See pragma
temp_store_directory:

   http://www.sqlite.org/pragma.html#pragma_temp_store_directory

Dan.



Re: [sqlite] Maximum database size?

2010-03-03 Thread Max Vlasov
On Wed, Mar 3, 2010 at 8:57 AM, Collin Capano wrote:

> The databases in question don't seem to be
> corrupt; I can open them on the command line and in python programs
> (using pysqlite) and can read triggers from them just fine. It's just
> when I try to vacuum and create temp tables that I run into trouble.
>
>
Did you try running PRAGMA integrity_check; on these databases? At least
you'll be sure SQLite reads all the necessary data and considers it correct.
I don't know how long it will take for a 55GB database, maybe an hour, maybe
a full night, but if you have enough time I'd do this before anything else.
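A sketch of that check using Python's sqlite3 module (the filename is a
placeholder; point it at one of the real databases):

```python
import sqlite3

# "suspect.db" is a placeholder path; integrity_check scans the whole
# file, so on a 55GB database expect it to run for a long time
con = sqlite3.connect("suspect.db")
result = con.execute("PRAGMA integrity_check").fetchone()[0]
print(result)  # "ok" means no corruption was found
```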

Max


Re: [sqlite] Maximum database size?

2010-03-03 Thread Simon Slavin

On 3 Mar 2010, at 5:57am, Collin Capano wrote:

> I've been running into some disk I/O errors when doing things such as 
> vacuuming and/or inserting things into temp tables in a database. The 
> databases that are giving me trouble are quite large: between 29 and 
> 55GB. However, as large as that is, I don't think running out of disk 
> space is the issue as I have about 3TB of free space on the disk. So, my 
> question is, is there a maximum size that databases can be?

Nothing of that level is built into SQLite.  And even if there were, it would 
not produce an I/O error; it would complain about pages or file size.  Run a 
hardware check on your computer: one of those programs that reads every sector 
of the disk and checks other pieces of hardware.

The other thing to do would be to use the command-line tool to turn your 
database into SQL commands, then use those commands to create a new database 
file, then delete the old database file and rename the new one.  This would 
definitively get rid of any structural problems in your database file and make 
sure it isn't occupying any faulty disk sectors.
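One way to script that rebuild is Python's Connection.iterdump(), which yields
the whole database as SQL text; this demo uses throwaway in-memory databases,
but with real files you would open the suspect database and a fresh new file
instead:

```python
import sqlite3

# Build a small source database to stand in for the real one
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])
src.commit()

# Replay the dump into a brand-new database
dst = sqlite3.connect(":memory:")
for stmt in src.iterdump():  # schema and data as SQL statements
    dst.execute(stmt)
dst.commit()

count = dst.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 2
```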

Simon.