At 09:40 PM 8/17/2005, Alan Stange wrote:
is there a simple way to limit the number of concurrent callers to a
stored proc?
The problem we have is about 50 clients come and perform the same
operation at nearly the same time. Typically, this query takes a
few seconds to run, but in the case
Hi Alan,
On Wed, 17 Aug 2005, Alan Stange wrote:
> Hello all,
>
> is there a simple way to limit the number of concurrent callers to a
> stored proc?
>
> The problem we have is about 50 clients come and perform the same
> operation at nearly the same time. Typically, this query takes a few
> sec
You could use a 1 column/1 row table perhaps. Use some sort of locking
mechanism.
Also, check out contrib/userlock
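Both of Chris's suggestions can be sketched in SQL. Hedged sketch: the proc name is hypothetical, and the advisory-lock calls shown are the built-in successors (PostgreSQL 8.2+) of what contrib/userlock provided at the time:

```sql
-- Advisory-lock variant: callers serialize on an application-chosen key.
SELECT pg_advisory_lock(42);      -- 42 is an arbitrary key; blocks until free
SELECT expensive_proc();          -- hypothetical stored proc; one caller at a time
SELECT pg_advisory_unlock(42);

-- One-row-table variant: callers queue on a row lock instead.
-- CREATE TABLE proc_mutex (id int PRIMARY KEY);  -- seeded with a single row
BEGIN;
SELECT 1 FROM proc_mutex WHERE id = 1 FOR UPDATE;
SELECT expensive_proc();
COMMIT;                           -- commit releases the row lock
```

The advisory-lock form avoids MVCC bloat on the mutex table, since advisory locks live outside table data.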
Chris
Alan Stange wrote:
Hello all,
is there a simple way to limit the number of concurrent callers to a
stored proc?
The problem we have is about 50 clients come and perfor

Jeremiah Jahn wrote:
> I just put together a system with 6GB of ram on a 14 disk raid 10 array.
> When I run my usual big painful queries, I get very little to no
> memory usage. My production box (raid 5, 4GB ram) hovers at 3.9GB used
> most of the time. The new devel box sits at around 250MB.
>
I just put together a system with 6GB of ram on a 14 disk raid 10 array.
When I run my usual big painful queries, I get very little to no
memory usage. My production box (raid 5, 4GB ram) hovers at 3.9GB used
most of the time. The new devel box sits at around 250MB.
I've switched to an 8.0 syste
Hello all,
is there a simple way to limit the number of concurrent callers to a
stored proc?
The problem we have is that about 50 clients come and perform the same
operation at nearly the same time. Typically, this query takes a few
seconds to run, but in the case of this thundering herd the que
Merlin,
> -> Index Scan using product_structure_file_pkey on
> product_structure_file (cost=0.00..45805.23 rows=5722 width=288)
> (actual time=0.063..0.063 rows=1 loops=1)
It appears that your DB is estimating the number of rows returned much too
high (5722 instead of 1). Please raise t
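The usual remedy for a row-count over-estimate like this is to raise the planner's per-column statistics target and re-analyze. Hedged sketch: the table name is taken from the plan above, but the column name and target value are illustrative:

```sql
-- part_id is a hypothetical column; 200 is an example target (default was 10).
ALTER TABLE product_structure_file
    ALTER COLUMN part_id SET STATISTICS 200;
ANALYZE product_structure_file;   -- rebuild statistics so the planner sees them
```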
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> Doing some testing on upcoming 8.1 devel and am having serious issues
> with new bitmap index scan feature. It is easy to work around (just
> disable it) but IMO the planner is using it when a regular index scan
> should be strongly favored.
I think
Manfred Koizar wrote:
> On Tue, 19 Jul 2005 11:51:51 +0100, Richard Huxton
> wrote:
>
>>You could get away with one query if you converted them to left-joins:
>>INSERT INTO ...
>>SELECT * FROM upload LEFT JOIN ... WHERE f3 IS NULL
>>UNION
>>SELECT * FROM upload LEFT JOIN ... WHERE f4 IS NULL
>
>
>
On Tue, 19 Jul 2005 11:51:51 +0100, Richard Huxton
wrote:
>You could get away with one query if you converted them to left-joins:
>INSERT INTO ...
>SELECT * FROM upload LEFT JOIN ... WHERE f3 IS NULL
>UNION
>SELECT * FROM upload LEFT JOIN ... WHERE f4 IS NULL
For the archives: This won't work.
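For reference, a pattern that expresses "rows missing from either lookup table" without the UNIONed left joins is an anti-join via NOT EXISTS. Hedged sketch with illustrative names: `upload` is from the thread, but `target_table`, `t3`, `t4`, and the column ownership are assumptions:

```sql
-- t3 and t4 are assumed to be the lookup tables keyed on f3 and f4.
INSERT INTO target_table
SELECT u.*
FROM upload u
WHERE NOT EXISTS (SELECT 1 FROM t3 WHERE t3.f3 = u.f3)
   OR NOT EXISTS (SELECT 1 FROM t4 WHERE t4.f4 = u.f4);
```

Unlike the UNION form, this scans `upload` once and cannot drop or duplicate rows through the set operation.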
Chris,
> I have a RHEL 2.1 box running with dual Xeon (2.6 GHz I believe and
> they have HT on). The box has 8GB memory. In my postgresql.conf, I
> have set the effective_cache_size = 530505 (~4GB).
>
> However, I am noticing on this machine, top is telling me that I have
> ~3.5GB in the buff an
On 8/17/05, Ron <[EMAIL PROTECTED]> wrote:
> At 05:15 AM 8/17/2005, Ulrich Wisser wrote:
> >Hello,
> >
> >thanks for all your suggestions.
> >
> >I can see that the Linux system is 90% waiting for disc io.
...
> 1= your primary usage is OLTP-like, but you are also expecting to do
> reports against
Michael,
> Well, you don't have to spend *quite* that much to get a decent storage
> array. :)
Yes, I'm just pointing out that it's only the extreme cases which are
clear-cut. Middle cases are a lot harder to define. For example, we've
found that on DBT2 running on a 14-drive JBOD, separating
Hello,
Doing some testing on upcoming 8.1 devel and am having serious issues
with new bitmap index scan feature. It is easy to work around (just
disable it) but IMO the planner is using it when a regular index scan
should be strongly favored. The performance of the bitmapscan in my
usage is actua
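The workaround Merlin describes (disabling the new bitmap scans) is a session-local planner toggle; a sketch:

```sql
SET enable_bitmapscan = off;   -- planner avoids bitmap index scans this session
EXPLAIN ANALYZE
SELECT ...;                    -- the affected query; re-check the chosen plan
RESET enable_bitmapscan;       -- restore the default afterwards
```

This is a diagnostic tool rather than a fix: it confirms whether the bitmap scan is the regression before filing the planner issue.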
At 05:15 AM 8/17/2005, Ulrich Wisser wrote:
Hello,
thanks for all your suggestions.
I can see that the Linux system is 90% waiting for disc io.
A clear indication that you need to improve your HD IO subsystem if possible.
At that time all my queries are *very* slow.
To be more precise, y
Sorry, forgot to state that we are still on PG 7.3.4.
On 8/17/05, Chris Hoover <[EMAIL PROTECTED]> wrote:
> I have some questions about tuning my effective_cache_size
>
> I have a RHEL 2.1 box running with dual Xeon (2.6 GHz I believe and
> they have HT on). The box has 8GB memory. In my postgr
I have some questions about tuning my effective_cache_size
I have a RHEL 2.1 box running with dual Xeon (2.6 GHz I believe and
they have HT on). The box has 8GB memory. In my postgresql.conf, I
have set the effective_cache_size = 530505 (~4GB).
However, I am noticing on this machine, top is tel
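effective_cache_size is counted in buffer pages of 8 kB each (the default PostgreSQL block size), so the arithmetic behind "530505 (~4GB)" checks out:

```python
# effective_cache_size is measured in 8 kB pages (default PostgreSQL BLCKSZ).
pages = 530505                 # value from Chris's postgresql.conf
block_size = 8192              # bytes per page
gb = pages * block_size / 1024**3
print(round(gb, 2))            # -> 4.05, i.e. ~4 GB of the 8 GB of RAM
```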
Ulrich,
> I believe the biggest problem is disc io. Reports for very recent data
> are quite fast; these are used very often and are therefore already in the
> cache. But reports can (and regularly do) contain very old data. In that
> case the whole system slows down. To me this sounds like the recent d
On Wed, 2005-08-17 at 11:15 +0200, Ulrich Wisser wrote:
> Hello,
>
> thanks for all your suggestions.
>
> I can see that the Linux system is 90% waiting for disc io. At that time
> all my queries are *very* slow. My scsi raid controller and disc are
> already the fastest available.
What RAID c
Mahesh Shinde wrote:
> Hi
...
> To connect To the PostgreSQL I am using **PostgreSQL Win32 ODBC and
> OLEDB client drivers 1.0.0.2**
>
...
> Since in the database I have one major table, the Debtor table, which
> is the master table and has around 55 lac (5.5 million) records. I have set debtorId as
>
Ulrich Wisser <[EMAIL PROTECTED]> writes:
> My machine has 2GB memory, please find postgresql.conf below.
> max_fsm_pages = 5 # min max_fsm_relations*16, 6 bytes each
FWIW, that index I've been groveling through in connection with your
other problem contains an astonishingly large a
Barry,
One way to do this is to turn logging on for calls over a certain
duration: set log_min_duration_statement in the config file. This will
only log calls taking longer than n milliseconds.
There's a tool called iron eye SQL that monitors JDBC calls.
http://www.irongrid.com/
Unfortunately, I am getting DNS errors.
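A note on the parameter: the setting that logs only statements exceeding a threshold is log_min_duration_statement (available since 7.4); log_duration itself is a boolean that logs the duration of every completed statement. A postgresql.conf sketch (the 250 ms threshold is an arbitrary example):

```
log_min_duration_statement = 250   # log statements taking longer than 250 ms
# log_min_duration_statement = -1  # the default: threshold logging disabled
```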
> When I fire a query to search a debtor id, it took around 5 seconds
> to return an answer for a query [...]
Are you sure that time is actually spent in the database engine?
Maybe there are DNS resolving issues or something...
Did you try to execute the queries directly on the server from
the
Hi
I am using Postgres version
*PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2
20030222 (Red Hat Linux 3.2.2-5).*
for a multi-user desktop application using VB 6.0 as a front-end
tool.
To connect To the PostgreSQL I am using *PostgreSQL
Hello,
thanks for all your suggestions.
I can see that the Linux system is 90% waiting for disc io. At that time
all my queries are *very* slow. My scsi raid controller and disc are
already the fastest available. The query plan uses indexes and "vacuum
analyze" is run once a day.
To avoid a
Quoting Barry Lind <[EMAIL PROTECTED]>:
>
>
> What I see when running the V3 protocol under 'top' is that the postgres
> processes are routinely using 15% or more of the CPU each, when running
> the V2 protocol they use more like 0.3%.
>
>
>
> Does anyone have any suggestions on an approac