I'm having an odd case where my system is locking: if I insert
into a table during a transaction and then start a new connection and
transaction, the second transaction blocks while trying to do a similar
insert until the first transaction is committed or rolled back.
The schema is rather complex (currently 157
Alvaro Herrera wrote:
> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
>
>>I'm having an odd case where my system is locking such that if I insert
>>into a table during a transaction, if I start a new connection and
>>transaction, it blocks while trying
Jeffrey Tenny wrote:
> I have a largely table-append-only application where most transactions
> are read-intensive and many are read-only. The transactions may span
> many tables, and in some cases might need to pull 70 MB of data out of a
> couple of the larger tables.
>
>
> In 7.3, I don't seem
Alvaro Herrera wrote:
> On Thu, Aug 11, 2005 at 03:36:31PM -0500, John A Meinel wrote:
>
...
>
> This is a known problem, solved in 8.1. A workaround for previous
> releases is to defer FK checks until commit:
So I don't know exactly what the fix was, but I just teste
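The workaround Alvaro mentions relies on deferrable constraints, so the foreign-key check (which before 8.1 took a row lock on the referenced parent row) happens only at commit. A minimal sketch, with invented table and constraint names:

```sql
-- The FK must be declared deferrable for SET CONSTRAINTS to affect it.
ALTER TABLE child
    ADD CONSTRAINT child_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent (id)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
-- Defer all deferrable constraint checks to commit time, shrinking the
-- window during which concurrent inserts against the same parent block.
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO child (parent_id, data) VALUES (42, 'example');
COMMIT;  -- the FK check runs here
```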
Ulrich Wisser wrote:
> Hello,
>
> one of our services is click counting for online advertising. We do
> this by importing Apache log files every five minutes. This results in a
> lot of insert and delete statements. At the same time our customers
> should be able to do online reporting.
What are
Alvaro Herrera wrote:
> On Tue, Aug 16, 2005 at 09:12:31AM -0700, Josh Berkus wrote:
>
>
>>However, you are absolutely correct in that it's *relative* advice, not
>>absolute advice. If, for example, you're using a $100,000 EMC SAN as your
>>storage you'll probably be better off giving it everythi
Anjan Dave wrote:
> Yes, that's true, though, I am a bit confused because the Clariion array
> document I am reading talks about how the write cache can eliminate the
> RAID5 Write Penalty for sequential and large IOs...resulting in better
> sequential write performance than RAID10.
>
> anjan
>
We
Anjan Dave wrote:
> Yes, that's true, though, I am a bit confused because the Clariion array
> document I am reading talks about how the write cache can eliminate the
> RAID5 Write Penalty for sequential and large IOs...resulting in better
> sequential write performance than RAID10.
>
> anjan
>
To
Tom Lane wrote:
> John A Meinel <[EMAIL PROTECTED]> writes:
>
>>Alvaro Herrera wrote:
>>
>>>I've been asked this a couple of times and I don't know the answer: what
>>>happens if you give XLog a single drive (unmirrored single spindle), and
>
Mahesh Shinde wrote:
> Hi
...
> To connect To the PostgreSQL I am using **PostgreSQL Win32 ODBC and
> OLEDB client drivers 1.0.0.2**
>
...
> Since in the database I have one major table, the Debtor table, which
> is the master table, having around 55 lac (5.5 million) records. I have set debtorId as
>
Manfred Koizar wrote:
> On Tue, 19 Jul 2005 11:51:51 +0100, Richard Huxton
> wrote:
>
>>You could get away with one query if you converted them to left-joins:
>>INSERT INTO ...
>>SELECT * FROM upload LEFT JOIN ... WHERE f3 IS NULL
>>UNION
>>SELECT * FROM upload LEFT JOIN ... WHERE f4 IS NULL
>
>
>
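Richard's one-query pattern, fleshed out with hypothetical table and column names: each branch is an anti-join (LEFT JOIN plus an IS NULL test on the joined side) that finds the upload rows with no match in one reference table, and UNION combines the two branches while removing duplicates.

```sql
-- Insert upload rows that fail the lookup in either reference table,
-- in a single statement instead of two separate queries.
INSERT INTO missing_rows (f3, f4)
SELECT u.f3, u.f4
FROM upload u LEFT JOIN ref3 r3 ON u.f3 = r3.f3
WHERE r3.f3 IS NULL          -- no match in ref3
UNION
SELECT u.f3, u.f4
FROM upload u LEFT JOIN ref4 r4 ON u.f4 = r4.f4
WHERE r4.f4 IS NULL;         -- no match in ref4
```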
Jeremiah Jahn wrote:
> I just put together a system with 6GB of ram on a 14 disk raid 10 array.
> When I run my usual big painful queries, I get very little to no
> memory usage. My production box (raid 5 4GB ram) hovers at 3.9GB used
> most of the time. The new devel box sits at around 250MB.
>
Qingqing Zhou wrote:
> "Alvaro Herrera" <[EMAIL PROTECTED]> writes
>
>>Interesting; do they use an overwriting storage manager like Oracle, or
>>a non-overwriting one like Postgres?
>>
>
>
> They call this MVCC "RLV(row level versioning)". I think they use rollback
> segment like Oracle (a.k.a "ver
Diego de Lima wrote:
> Hi list,
>
> I'm using Pg 8.0.3 on Linux FC2.
>
> This question may have a very simple answer (I hope), but I'm having
> lots of trouble solving it, and I couldn't find any other post about it
> or anything in the pg docs.
>
> I have some very complex select statements
Jeremiah Jahn wrote:
> Sorry about the formatting.
>
> On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>
>>Jeremiah Jahn wrote:
>>
>>
...
>>The expensive parts are the 4915 lookups into the litigant_details (each
>>one takes approx 4ms for a total of ~20s).
>>And then you do it again
Ron wrote:
> At 01:18 PM 8/19/2005, John A Meinel wrote:
>
>> Jeremiah Jahn wrote:
>> > Sorry about the formatting.
>> >
>> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>> >
>> >>Jeremiah Jahn wrote:
>> >>
>
Jeremiah Jahn wrote:
> On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:
>
>>Jeremiah Jahn wrote:
>>
...
>>
>>Well, in general, 3ms for a single lookup seems really long. Maybe your
>>index is bloated by not vacuuming often enough. Do you tend to get a
Jeremiah Jahn wrote:
> Rebuild in progress with just ext3 on the raid array...will see if this
> helps the access times. If it doesn't I'll mess with the stripe size. I
> have REINDEXED, CLUSTERED, tablespaced and cached with
> 'cat table/index >/dev/null'; none of this seems to have helped, or eve
[truncated dstat output: memory 1334M / 10M / 4596M / 25M, swap 2048M, CPU ~75% idle]
On Fri, 2005-08-19 at 16:07 -0500, John A Meinel wrote:
Jeremiah Jahn wrote:
Rebuild in progress with just ext3 on the raid array...will see if this
helps the acc
Ron wrote:
At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:
On Fri, 2005-08-19 at 16:03 -0500, John A Meinel wrote:
> Jeremiah Jahn wrote:
> > On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:
> >
>
> > it's cached alright. I'm getting a read rate of
Jeremiah Jahn wrote:
On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:
Ron wrote:
At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:
Well, since you can get a read of the RAID at 150MB/s, that means that
it is actual I/O speed. It may not be cached in RAM. Perhaps you could
try the same
Jeremiah Jahn wrote:
> On Sun, 2005-08-21 at 16:13 -0400, Ron wrote:
>
>>At 10:54 AM 8/21/2005, Jeremiah Jahn wrote:
>>
...
>>So you have 2 controllers each with 2 external slots? But you are
>>currently only using 1 controller and only one external slot on that
>>controller?
>
>
> Sorry, no. I
gokulnathbabu manoharan wrote:
> Hi all,
>
> I like to know the caching policies of Postgresql.
> What parameter in the postgresql.conf affects the
> cache size used by the Postgresql? As far as I have
> searched my knowledge of the parameters are
In general, you don't. The OS handles caching bas
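The parameters usually raised in this discussion look like this in an 8.0-era postgresql.conf; the values below are purely illustrative, for a machine with a few GB of RAM:

```
# postgresql.conf -- illustrative values only
shared_buffers = 10000          # PostgreSQL's own buffer cache, in 8KB pages (~80MB)
effective_cache_size = 262144   # planner hint: size of the OS filesystem cache, in pages (~2GB)
work_mem = 16384                # per-sort/hash memory in KB (called sort_mem before 8.0)
```

PostgreSQL deliberately keeps shared_buffers modest and relies on the OS page cache for the bulk of caching, which is why "the OS handles caching" is the usual answer.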
Martin Nickel wrote:
> Chris,
> Would you say that 3 pages is a good maximum for a Postgres install?
> We're running 8.0.3 on 64-bit SUSE on a dual Opteron box with 4G and have
> shared_buffers set at 12. I've moved it up and down (it was 16
> when I got here) without any measurable pe
Guido Neitzer wrote:
> Hi.
>
> I have an interesting problem with the JDBC drivers. When I use a
> select like this:
>
> "SELECT t0.aktiv, t0.id, t0.ist_teilnehmer, t0.nachname, t0.plz,
> t0.vorname FROM public.dga_dienstleister t0 WHERE t0.plz like
> ?::varchar(256) ESCAPE '|'" withBindings: 1:"5
Kevin wrote:
> Arnau wrote:
>
>> Hi all,
>>
>> >
>> > COPY FROM a file with all the ID's to delete, into a temporary
>> table, and do a joined delete to your main table (thus, only one query).
>>
>>
>> I already did this, but I don't have idea about how to do this join,
>> could you give me
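The joined delete Kevin describes can be sketched as follows; the table and column names are assumptions for illustration:

```sql
-- Load the IDs to delete into a temporary table in one bulk COPY,
-- then delete from the main table with a single joined statement.
CREATE TEMP TABLE ids_to_delete (id integer);
COPY ids_to_delete FROM '/tmp/ids.txt';

-- DELETE ... USING (available from 8.1) performs the join directly:
DELETE FROM main_table m
USING ids_to_delete d
WHERE m.id = d.id;

-- On older releases, the same effect with a subquery:
-- DELETE FROM main_table WHERE id IN (SELECT id FROM ids_to_delete);
```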
Alvaro Herrera wrote:
> On Wed, Sep 14, 2005 at 11:28:43AM -0700, Jeffrey W. Baker wrote:
>
>>On Wed, 2005-09-14 at 11:25 -0700, Peter Darley wrote:
>>
>>> I'm getting a new server for our database, and I have a quick question
>>>about RAID controllers with a battery backed cache. I understand
[EMAIL PROTECTED] wrote:
> I have a database of hundreds of millions of web links (between sites)
> in Postgres. For each link, we record the url, the referer, and the
> most recent date the link exists. I'm having some serious performance
> issues when it comes to writing new data into the datab
bm\mbn wrote:
> Hi Everyone
>
> The machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15k
> disks.
>
> 2 disks are in RAID1 and hold the OS, SWAP & pg_xlog
> 4 disks are in RAID10 and hold the Cluster itself.
>
> the DB will have two major tables 1 with 10 million rows and one with
> 100
Martin Foster wrote:
Simon Riggs wrote:
On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
[...]
I've seen this behavior before when restarting the web server during
heavy loads. Apache goes from zero connections to a solid 120,
causing PostgreSQL to spawn that many children in a short order of
Hasnul Fadhly bin Hasan wrote:
Hi Bryan,
Just wondering, i ran vacuumdb but didn't get the information that you
get about the free space even when i set the verbose option. How did
you get that?
Thanks,
Hasnul
I believe it is
VACUUM FULL ANALYZE VERBOSE;
At the very end you will get a listing
The first thing to check... Did you do a recent VACUUM ANALYZE? This
updates all the statistics. There are a number of places where it says
"rows=1000" which is usually the "I have no idea, let me guess 1000".
Also, there are a number of places where the estimates are pretty far
off. For instance:
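The check being suggested looks like this in practice (table and column names are hypothetical):

```sql
-- Refresh planner statistics, then compare estimated vs. actual row counts.
VACUUM ANALYZE mytable;
EXPLAIN ANALYZE SELECT * FROM mytable WHERE some_col = 42;
-- In the output, "rows=..." inside each plan node is the planner's estimate,
-- while "actual ... rows=..." is what really happened; rows=1000 on a table
-- with no statistics is the planner's default guess.
```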
Pallav Kalva wrote:
Hi Everybody.
I have a table in my production database which gets updated
regularly and the stats on this table in pg_class are totally wrong.
I used to run vacuumdb on the whole database daily once and when i
posted the same problem of wrong stats in the pg_class most
Pallav Kalva wrote:
John A Meinel wrote:
Pallav Kalva wrote:
Hi Everybody.
I have a table in my production database which gets updated
regularly and the stats on this table in pg_class are totally
wrong. I used to run vacuumdb on the whole database daily once and
when i posted the same
Dirk Lutzebaeck wrote:
Hi,
here is a query which produces over 1G temp file in pgsql_tmp. This
is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB
sort_mem and 320MB shared_mem.
Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All
tables have been analyzed before.
Can s
Dirk Lutzebaeck wrote:
Greg,
Thanks for your analysis. But I dont get any better after bumping
STATISTICS target from 10 to 200.
explain analyze shows that the optimizer is still way off estimating
the rows. Is this normal? It still produces a 1 GB temp file.
I simplified the query a bit, now onl
Dirk Lutzebaeck wrote:
Greg Stark wrote:
I gave a bunch of "explain analyze select" commands to test estimates
for
individual columns. What results do they come up with? If those are
inaccurate
then raising the statistics target is a good route. If those are
accurate
individually but the combina
Alex wrote:
Hi,
we just got a new dual processor machine and I wonder if there is a
way to utilize both processors.
Our DB server is basically fully dedicated to postgres. (its a dual
amd with 4gb mem.)
I have a batch job that periodically loads about 8 million records
into a table.
for this I
Alex wrote:
Thanks John.
Well as I mentioned. I have a Dual AMD Opteron 64 2.4ghz, 15k rpm
SCSI Disks, 4GB of memory.
Disks are pretty fast and memory should be more than enough. Currently
we dont have many concurrent connections.
Well, you didn't mention Opteron before (it makes a difference a
Tom Lane wrote:
"Ken Egervari" <[EMAIL PROTECTED]> writes:
Okay, here is the explain analyze I managed to get from work.
What platform is this on? It seems very strange/fishy that all the
actual-time values are exact integral milliseconds.
I always get round milliseconds on running. In fa
Ken Egervari wrote:
I took John's advice and tried to work with sub-selects. I tried
this variation, which actually seems like it would make a difference
conceptually since it drives on the person table quickly. But to my
surprise, the query runs at about 375 milliseconds. I think it's
becau
Jeremiah Jahn wrote:
I have about 5M names stored on my DB. Currently the searches are very
quick unless they are on a very common last name, e.g. SMITH. The Index
is always used, but I still hit 10-20 seconds on a SMITH or Jones
search, and I average about 6 searches a second and max out at about
3
John Arbash Meinel wrote:
Ken wrote:
Richard,
What do you mean by summary table? Basically a cache of the query
into a table with replicated column names of all the joins? I'd
probably have to whipe out the table every minute and re-insert the
data for each carrier in the system. I'm not sure ho
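A "summary table" in this sense is just a periodically rebuilt cache of an expensive join or aggregate. A hypothetical sketch matching Ken's wipe-and-reinsert description:

```sql
-- Precompute the expensive per-carrier aggregate into a plain table once.
CREATE TABLE carrier_summary AS
SELECT carrier_id, count(*) AS shipments, max(ship_date) AS last_ship
FROM shipments
GROUP BY carrier_id;

-- Refresh on a schedule (e.g. every minute from cron):
TRUNCATE carrier_summary;
INSERT INTO carrier_summary
SELECT carrier_id, count(*), max(ship_date)
FROM shipments
GROUP BY carrier_id;
```

Reporting queries then read carrier_summary instead of re-running the join each time, at the cost of data being up to one refresh interval stale.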
Tom Lane wrote:
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
There is. I believe QueryPerformanceCounter has sub-microsecond
resolution.
Can we just replace gettimeofday() with a version that's basically:
No, because it's also used for actual time-of-day calls. It'd be
necessary to hack executor
Tom Lane wrote:
John A Meinel <[EMAIL PROTECTED]> writes:
Can we just replace gettimeofday() with a version that's basically:
No, because it's also used for actual time-of-day calls. It'd be
necessary to hack executor/instrument.c in particular.
Or we modify the win32
Dave Held wrote:
There is always clock(). It's mandated by ANSI C, but my docs say
that POSIX requires CLOCKS_PER_SEC == 100 regardless of actual
timer resolution, which seems a little brain-dead to me.
__
David B. Held
My experience with clock() on win32 is that CLOCKS_PER_SEC was 1000, an
Tom Lane wrote:
John A Meinel <[EMAIL PROTECTED]> writes:
Dave Held wrote:
There is always clock().
My experience with clock() on win32 is that CLOCKS_PER_SEC was 1000, and
it had a resolution of 55 clocks/s. When I just did this:
The other problem is it measures p
Michael McFarland wrote:
I'm trying to understand why a particular query is slow, and it
seems like the optimizer is choosing a strange plan. See this summary:
...
explain select * from foo where barId = 412 order by id desc limit 25;
Limit ()
-> Index scan backward using primarykey_index
Markus Bertheau wrote:
Hi, I have the following strange situation:
...
oocms=# explain analyze select count(1) from objects where class = 'Picture';
QUERY PLAN
Gaetano Mendola wrote:
Markus Bertheau wrote:
Hi, I have the following strange situation:
that is no so strange. I have an example where:
SELECT * FROM my_view WHERE field1 = 'New'; ==> 800 seconds
SELECT * FROM my_view; ==> 2 seconds
the only solution I had was to write a function table
James G Wilkinson wrote:
All,
...
Firstly, I am frankly mystified on how to interpret all this. If anyone
could point me to a document or two that will help me decipher this,
I will greatly appreciate it.
I assume you have looked at:
http://www.postgresql.org/docs/8.0/static/performance-tips.html
Karim Nassar wrote:
From http://www.powerpostgresql.com/PerfList/
"even in a two-disk server, you can put the transaction log onto the
operating system disk and reap some benefits."
Context: I have a two disk server that is about to become dedicated to
postgresql (it's a sun v40z running gentoo
Joost Kraaijeveld wrote:
Hi all,
Is the number of rows in explain the number of rows that is expected to be
visited or retrieved?
Groeten,
Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail: [EMAIL PROTECTED]
web: www.askesis.nl