Tom Lane-2 wrote:
>
>
> I don't think it's a leak, exactly: it's just that the "relcache" entry
> for each one of these views occupies about 100K. A backend that touches
> N of the views is going to need about N*100K in relcache space. I can't
> get terribly excited about that. Trying to redu
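To put that figure in concrete terms (the per-view size is the ~100K estimate quoted above; the view counts below are purely illustrative):

```
relcache space per backend ≈ (distinct views touched) × 100 KB

e.g. a backend that has touched  1,000 views → ≈ 100 MB
     a backend that has touched 10,000 views → ≈ 1 GB
```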
Merlin Moncure-2 wrote:
>
>
> ... I've coded a
> lot of multi schema designs and they tend to either go the one
> session/schema route or the connection pooling route. Either way,
> cache memory usage tends to work itself out pretty well (it's never
> been a problem for me before at least). I
Merlin Moncure writes:
> I think you may have uncovered a leak (I stand corrected).
> The number of schemas in your test is irrelevant -- the leak is
> happening in proportion to the number of views (set via \setrandom
> tidx 1 10). At 1 I don't think it exists at all -- at 100 memory use
> grow
Merlin Moncure-2 wrote:
>
>
> I am not seeing your results. I was able to run your test on a stock
> config (cut down to 50 schemas though) on a vm with 512mb of memory.
> What is your shared buffers set to?
>
>
shared_buffers was set to 32MB, as in the default postgresql.conf.
To save you so
On Fri, Apr 8, 2011 at 2:00 PM, Shianmiin wrote:
> Further clarification,
>
> if I run two concurrent threads
>
> pgbench memoryusagetest -c 2 -j 2 -T180 -f test.sql
>
> both backend processes use 1.5GB, resulting in 3GB in total.
Yes. Could you please post a capture of top after running the mod
Further clarification,
if I run two concurrent threads
pgbench memoryusagetest -c 2 -j 2 -T180 -f test.sql
both backend processes use 1.5GB, resulting in 3GB in total.
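For context, a pgbench script in the shape of the test described in this thread might look like the following (the schema/view names and ranges here are illustrative; the actual test.sql is in the attachments linked in this thread):

```sql
-- hypothetical test.sql for pgbench (old \setrandom syntax, as used in this thread)
\setrandom sidx 1 100
\setrandom tidx 1 10
SELECT count(*) FROM tenant:sidx.v:tidx;
```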
Samuel
No, I didn't configure 1.5GB of shared memory. For this test I recreated a
database cluster and left everything in the configuration at its defaults.
As in the original post,
when the connection was first established, the memory usage of the backend
process shown in top was
VIRT = 182MB, RES = 6240K, SHR=
No. The high memory usage issue is still there.
We could change select count(*) to select * or select 1 if you like. There
is no data in the tables anyway.
Sent from my iPad
On 2011-04-08, at 8:25 AM, "Merlin Moncure-2 [via PostgreSQL]" <
ml-node+4290983-1196677718-196...@n5.nabble.com> wrote:
On
Thanks. Probably, but that's not the point here.
The issue here is how the PostgreSQL backend process uses memory, and I
wonder if there is any way to configure it.
If we go with the single-db-multiple-schema model, either our data access
layer will need to qualify all database objects with the proper schema
name, or, with PostgreSQL, we can just change the search path when the
connection is passed from the pool to the application code. Another model
under evaluation is singl
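A minimal sketch of that search-path approach, assuming hypothetical tenant and table names:

```sql
-- when the pool hands this connection to a request for tenant 42:
SET search_path TO tenant_42;
-- unqualified names now resolve inside tenant_42's schema
SELECT count(*) FROM invoices;   -- i.e. tenant_42.invoices
```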
Good point. Thanks.
The tests we did in house were all from the client side and definitely not in a
single transaction. I just found that this simplified test case can reproduce
the same memory usage issue and didn't pay too much attention to it.
If we repeatedly do smaller batches, we can still see the
Hi Merlin,
I revised the test code (attached files) and used pgbench to send the test
queries.
http://postgresql.1045698.n5.nabble.com/file/n4290723/dotest dotest
http://postgresql.1045698.n5.nabble.com/file/n4290723/initialize.sql
initialize.sql
http://postgresql.1045698.n5.nabble.com/file/n
On 04/07/11 1:42 PM, Shianmiin wrote:
Since the connection pool will be used by all tenants, eventually each
connection will hit all the tables/views.
Don't all connections in a given pool have to use the same user
credentials? Won't that be problematic for this architecture?
On Thu, Apr 7, 2011 at 3:42 PM, Shianmiin wrote:
> Hi there,
>
> We are evaluating using PostgreSQL to implement a multitenant database.
> Currently we are running some tests on the single-database-multiple-schema model
> (basically, all tenants have the same set of database objects under their own
> s
Hi there,
We are evaluating using PostgreSQL to implement a multitenant database.
Currently we are running some tests on the single-database-multiple-schema model
(basically, all tenants have the same set of database objects under their own
schema within the same database).
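A minimal sketch of that model, with hypothetical object names (each tenant gets an identical set of objects in its own schema):

```sql
CREATE SCHEMA tenant_1;
CREATE TABLE tenant_1.invoices (id serial PRIMARY KEY, amount numeric);
CREATE VIEW  tenant_1.v_invoices AS SELECT id, amount FROM tenant_1.invoices;

CREATE SCHEMA tenant_2;
CREATE TABLE tenant_2.invoices (id serial PRIMARY KEY, amount numeric);
CREATE VIEW  tenant_2.v_invoices AS SELECT id, amount FROM tenant_2.invoices;
```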
The application will maintai