On Wed, 7 Jan 2009 10:17:06 -0800, "Jim Dodgen"
<j...@dodgen.us> wrote in General Discussion of SQLite
Database <sqlite-users@sqlite.org>:


> I'm a little worried about how long it takes to open one 
> of 20,000,000 files in a directory on the NAS?

I agree. It would require a very cleverly constructed
directory tree, with very short (sub)directory names, to
reduce the effort of locating a file.
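One common way to build such a tree (my sketch, not something proposed in the thread) is to fan the files out over a fixed-depth hierarchy of short subdirectory names derived from a hash of the file name, so that no single directory ever holds millions of entries:

```python
import hashlib
import os

def hashed_path(root, name, depth=2, width=2):
    """Map a file name to root/ab/cd/name, where 'ab' and 'cd'
    are hex digits taken from a hash of the name. With depth=2
    and width=2 this yields 256*256 = 65,536 leaf directories,
    keeping each one small even with 20,000,000 files total."""
    digest = hashlib.md5(name.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(depth)]
    return os.path.join(root, *parts, name)

print(hashed_path("/nas", "user123.db"))
```

The lookup cost then stays bounded by the fixed depth rather than by the directory size; the path is deterministic, so the broker never needs to search for the file.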

"Edward J. Yoon" wrote:

>> Each NAS_000 ~ N storages have approximately 300,000
>> files, the average size of file is few MB (not over GB).
>> The broker servers (with SQLite library) are on the 
>> NAS 

It's not clear how many broker servers there are.
One per NAS?

>> and The front-end web servers (more than 200 servers)
>> communicate with living broker servers after request
>> location from location addressing system. 

Which is implemented in MySQL, right?

>> There are high frequency read/write/delete operations.

Let's take 50 MB as an upper bound for "a few MB"; then the
300,000 files on one NAS would hold
5E7 * 3E5 = 15E12 bytes = 15 TB.

There would have to be 20E6 / 3E5 = 67 NAS installations,
all connected to 200 webservers via broker servers.
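For reference, the back-of-the-envelope figures above work out as follows (still assuming the 50 MB upper bound per file):

```python
# Back-of-the-envelope capacity check for the figures quoted above.
avg_file_bytes = 50 * 10**6      # 50 MB upper bound per file
files_per_nas = 300_000
total_files = 20_000_000

bytes_per_nas = avg_file_bytes * files_per_nas  # 1.5e13 B = 15 TB
nas_count = -(-total_files // files_per_nas)    # ceiling division

print(bytes_per_nas / 10**12, "TB per NAS")  # 15.0 TB per NAS
print(nas_count, "NAS installations")        # 67 NAS installations
```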

I'm afraid the chosen architecture isn't scalable, and
tweaking SQLite's code will not help much.

Opening and closing one of 20,000,000 files for every
logical transaction is not suitable at that scale. An
operation of that size should be able to afford a
better-engineered solution.
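One mitigation short of a redesign (again my sketch, not something from the thread) is for each broker to keep a small LRU cache of open SQLite connections, so the open/close cost is paid only on a cache miss rather than on every logical transaction:

```python
import sqlite3
from collections import OrderedDict

class ConnectionCache:
    """Tiny LRU cache of open SQLite connections, so a broker
    does not reopen a database file on every logical transaction."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self._conns = OrderedDict()

    def get(self, path):
        conn = self._conns.pop(path, None)
        if conn is None:
            conn = sqlite3.connect(path)
            if len(self._conns) >= self.capacity:
                _, old = self._conns.popitem(last=False)  # evict LRU entry
                old.close()
        self._conns[path] = conn  # (re)insert as most recently used
        return conn

cache = ConnectionCache(capacity=2)
c = cache.get(":memory:")
c.execute("CREATE TABLE t(x)")
```

This only helps when accesses show locality; with uniformly random access across 20,000,000 files, the hit rate would stay low and the architectural objection stands.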

Or we still don't understand what's really going on.
-- 
  (  Kees Nuyt
  )
c[_]
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
