So the question I would ask goes more like: do you really need 32K
databases in one installation? Have you considered using schemas
instead? Databases are, by design, pretty heavyweight objects.
I agree, but at the same time, we might: a) update our documentation to
indicate it depends on
Not sure that this really belongs on pgsql-committers - maybe pgsql-hackers?
No matter what scheme PostgreSQL uses for storing the data, there can be
underlying file system limitations. One solution, for example, would be
to use a file system that does not have a limitation of 32k
(removed -committers)
Mark,
* Mark Mielke (m...@mark.mielke.cc) wrote:
No matter what scheme PostgreSQL uses for storing the data, there can be
underlying file system limitations.
This is true, but there's a reason we only create 1GB files too. I
wouldn't be against a scheme such as
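The 1GB point above refers to PostgreSQL splitting each relation into 1GB segment files named `<relfilenode>`, `<relfilenode>.1`, `<relfilenode>.2`, and so on. A minimal sketch of that mapping (the segment size shown is the default build-time value; the function is illustrative, not PostgreSQL code):

```python
# Sketch of PostgreSQL's 1 GB segmenting: a relation's on-disk data is
# split across files <relfilenode>, <relfilenode>.1, <relfilenode>.2, ...
# each at most SEGMENT_SIZE bytes (the default RELSEG_SIZE build setting).
SEGMENT_SIZE = 1 << 30  # 1 GB

def segment_file(relfilenode: int, byte_offset: int) -> str:
    """Return the segment file name containing the given byte offset."""
    seg = byte_offset // SEGMENT_SIZE
    # Segment 0 has no suffix; later segments append ".<n>".
    return str(relfilenode) if seg == 0 else f"{relfilenode}.{seg}"

print(segment_file(16384, 0))                  # first gigabyte: "16384"
print(segment_file(16384, 3 * SEGMENT_SIZE))   # fourth gigabyte: "16384.3"
```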
* Stephen Frost (sfr...@snowman.net) wrote:
Ehhh, it's likely to be cached... Sounds like a stretch to me that this
would actually be a performance hit. If it turns out to really be one,
we could just wait to move to subdirectories until some threshold (e.g.
30k) is hit.
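The threshold idea above could look something like the following sketch. This is purely hypothetical, not PostgreSQL's actual layout; the cutoff value and the bucketing rule are assumptions for illustration:

```python
# Hypothetical sketch (not PostgreSQL's layout): keep today's flat
# base/<oid> directories until a threshold is reached, then spill into
# numbered subdirectories so no single directory grows too large.
THRESHOLD = 30000  # assumed cutoff, per the "30k" suggestion in the thread

def db_path(oid: int) -> str:
    """Return a relative data-directory path for a database by OID."""
    if oid < THRESHOLD:
        # Below the threshold: flat layout, base/<oid>
        return f"base/{oid}"
    # Past the threshold: bucket by OID; bucket size is an assumption.
    bucket = oid // THRESHOLD
    return f"base/{bucket}/{oid}"

print(db_path(12345))   # flat:     base/12345
print(db_path(65000))   # bucketed: base/2/65000
```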
Thinking this through
On 09/12/2009 03:33 PM, Stephen Frost wrote:
* Mark Mielke (m...@mark.mielke.cc) wrote:
No matter what scheme PostgreSQL uses for storing the data, there can be
underlying file system limitations.
This is true, but there's a reason we only create 1GB files too. I
wouldn't be against
On 09/12/2009 03:48 PM, Stephen Frost wrote:
This would allow for 220M+ databases. I'm not sure how bad it'd be to
introduce another field to pg_database which provides the directory (as
it'd now be distinct from the oid..) or if that might require a lot of
changes. Not sure how easy it'd be to
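The exact scheme behind the 220M+ figure is cut off in the snippet above, but a generic two-level split by OID illustrates the scale. The fan-out constant here is an assumption chosen to stay well under a 32k-entries-per-directory limit, not the value from the thread:

```python
# Illustrative two-level layout keyed on database OID. With a fan-out of
# F top-level buckets, each holding at most F databases, the scheme
# accommodates roughly F * F databases while keeping every directory
# small. F = 15000 is an assumed value, safely under ext3's ~32k limit.
FANOUT = 15000

def db_path(oid: int) -> str:
    """Map an OID to base/<bucket>/<oid>; buckets stay under FANOUT entries
    as long as OIDs run from 0 up to FANOUT * FANOUT."""
    bucket = oid // FANOUT
    return f"base/{bucket}/{oid}"

# 15000 buckets of 15000 entries each -> 225 million databases, in the
# same ballpark as the 220M+ figure mentioned in the thread.
print(FANOUT * FANOUT)
```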
Stephen Frost sfr...@snowman.net writes:
* Mark Mielke (m...@mark.mielke.cc) wrote:
No matter what scheme PostgreSQL uses for storing the data, there can be
underlying file system limitations.
This is true, but there's a reason we only create 1GB files too. I
wouldn't be against a scheme
Mark Mielke m...@mark.mielke.cc writes:
My God - I thought 32k databases in the same directory was insane.
220M+???
Considering that the system catalogs alone occupy about 5MB per
database, that would require an impressive amount of storage...
In practice I think users would be
* Mark Mielke (m...@mark.mielke.cc) wrote:
There is no technical requirement for PostgreSQL to separate data in
databases or tables on subdirectory or file boundaries. Nothing wrong
with having one or more large files that contain everything.
Uhh, except where you run into system
Stephen Frost sfr...@snowman.net writes:
* Mark Mielke (m...@mark.mielke.cc) wrote:
I guess I'm not seeing how using 32k tables is a sensible model.
For one thing, there's partitioning. For another, there's a large user
base. 32K tables is, to be honest, not all that many, especially for
* Tom Lane (t...@sss.pgh.pa.us) wrote:
I believe the filesystem limit the OP is hitting is on the number of
*subdirectories* per directory, not on the number of plain files.
Right, I'm not entirely sure how we got onto the question of number of
tables.
So the question I would ask goes more
On 09/12/2009 04:17 PM, Stephen Frost wrote:
* Mark Mielke (m...@mark.mielke.cc) wrote:
There is no technical requirement for PostgreSQL to separate data in
databases or tables on subdirectory or file boundaries. Nothing wrong
with having one or more large files that contain everything.
Tom Lane wrote:
So the question I would ask goes more like: do you really need 32K
databases in one installation? Have you considered using schemas
instead? Databases are, by design, pretty heavyweight objects.
That's a fair question. OTOH, devising a scheme to
Andrew Dunstan and...@dunslane.net writes:
Tom Lane wrote:
So the question I would ask goes more like: do you really need 32K
databases in one installation? Have you considered using schemas
instead? Databases are, by design, pretty heavyweight objects.
That's a fair question. OTOH,