David Kerr wrote:
I don't think it's anything in the DB that's causing it (I drop and re-create
the DB between tests). I actually suspect a hardware issue somewhere.
You might find my "Database Hardware Benchmarking" talk, available at
http://projects.2ndquadrant.com/talks , useful to help
Scott Marlowe writes:
> On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging wrote:
>> I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3. I have
>> noticed that on the 8.4.* versions, a lot of our code is either taking much
>> longer to complete, or never completing. I think I have isolated
norn wrote:
>> (1) Try it without the ORDER BY clause and the LIMIT.
> W/o the 'order by' it works instantly (about 1ms!)
> W/o the limit it takes 1.4 seconds
>>(2) Temporarily take that top index out of consideration
> It works nice! Query takes about 0.6 seconds as expected!
> So, as we
On Tue, 20 Apr 2010, Dave Crooke wrote:
a. Make setFetchSize(1) the default
The reason this is not done is that the mechanism used for fetching a
piece of the results at a time can change the query plan used if using a
PreparedStatement. There are three ways to plan a PreparedStatement
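The streaming behavior described here only engages under specific conditions. A minimal sketch of that rule, assuming the driver's documented requirements (autocommit off, a positive fetch size, a forward-only ResultSet) — the class and predicate below are illustrative, not actual driver code:

```java
// Toy predicate capturing when the PG JDBC driver fetches rows through a
// server-side portal instead of buffering the whole result set in memory.
public class StreamCheck {
    static boolean willStream(boolean autoCommit, int fetchSize, boolean forwardOnly) {
        // All three conditions must hold for row-at-a-time fetching.
        return !autoCommit && fetchSize > 0 && forwardOnly;
    }

    public static void main(String[] args) {
        // Out-of-the-box settings: autocommit on, fetch size 0 (unlimited)
        // -- the driver buffers everything, hence OutOfMemoryError on huge tables.
        System.out.println(StreamCheck.willStream(true, 0, true));   // false
        // After conn.setAutoCommit(false) and stmt.setFetchSize(50):
        System.out.println(StreamCheck.willStream(false, 50, true)); // true
    }
}
```

In real code the equivalent calls are `conn.setAutoCommit(false)` and `stmt.setFetchSize(n)` before `executeQuery`.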
I digest this down to "this is the best that can be achieved on a connection
that's single threaded"
I think the big difference with Oracle is this:
i. in Oracle, a SELECT does not have to be a transaction, in the sense that
PG's SELECT does ... but in Oracle, a SELECT can fail mid-stream if you
On Tue, Apr 20, 2010 at 04:26:52PM -0400, Greg Smith wrote:
- David Kerr wrote:
- >the db, xlog and logs are all on separate areas of the SAN.
- >separate I/O controllers, etc. on the SAN. It's set up well, I wouldn't
- >expect
- >contention there.
- >
-
- Just because you don't expect it doesn't
David Kerr wrote:
the db, xlog and logs are all on separate areas of the SAN.
separate I/O controllers, etc. on the SAN. It's set up well, I wouldn't expect
contention there.
Just because you don't expect it doesn't mean it's not there.
Particularly something as complicated as a SAN setup, p
Dave Crooke wrote:
> a. the fact that Statement.executeQuery("select * from
> huge_table") works out of the box with every one of those
> databases, but results in java.lang.OutOfMemoryError with PG without
> special setup. Again, this is to the letter of the standard, it's
> just not very user friendly
On Tue, Apr 20, 2010 at 3:29 PM, Dave Crooke wrote:
>
> I wouldn't hold MySQL up to be a particularly good implementation of
> anything, other than speed (MyISAM) and usability (the CLI). I find
> Oracle's JDBC implementation to be both user friendly and (largely) standards
> compliant.
>
Dave,
On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging wrote:
> Hi,
>
> I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3. I have
> noticed that on the 8.4.* versions, a lot of our code is either taking much
> longer to complete, or never completing. I think I have isolated the
> problem to q
I don't want to get into a big debate about standards, but I will clarify a
couple of things inline below.
My key point is that the PG JDBC driver resets the expectations of people who have
used JDBC with other databases, and that is going to reflect negatively on
Postgres if Postgres is in the minority
On Tue, Apr 20, 2010 at 12:28 PM, David Kerr wrote:
> On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote:
> - So are you logging to the same drive that has pg_xlog and your
> - data/base directory on this machine?
> -
>
> the db, xlog and logs are all on separate areas of the SAN.
>
> s
On Tue, Apr 20, 2010 at 12:47 PM, David Kerr wrote:
> On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote:
> - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
> - > that thought occured to me while I was testing this. I ran a vacuumdb -z
> - > on my database during the load and it didn
Hi,
I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3. I have
noticed that on the 8.4.* versions, a lot of our code is either taking
much longer to complete, or never completing. I think I have isolated
the problem to queries using in(), not in() or not exists(). I've put
together
On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote:
- On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
- > that thought occured to me while I was testing this. I ran a vacuumdb -z
- > on my database during the load and it didn't impact performance at all.
-
- The window to run ANALYZE
On Tue, Apr 20, 2010 at 01:17:02PM -0500, Kevin Grittner wrote:
- David Kerr wrote:
-
- > Incidentally the code is written to work like this :
- >
- > while (read X lines in file){
- > Process those lines.
- > write lines to DB.
- > }
-
- Unless you're selecting from multiple database tables
On Tue, Apr 20, 2010 at 12:30:14PM -0600, Scott Marlowe wrote:
- On Tue, Apr 20, 2010 at 12:28 PM, David Kerr wrote:
- >
- > I'm logging via syslog, I've had trouble with that before. When I moved to
- > syslog-ng on my dev environments that mostly resolved the problem for me.
- > But these machines
On Tue, Apr 20, 2010 at 12:28 PM, David Kerr wrote:
>
> I'm logging via syslog, I've had trouble with that before. When I moved to
> syslog-ng on my dev environments that mostly resolved the problem for me.
> But these machines still have vanilla syslog.
Yea, I almost always log directly via
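For reference, the server can bypass syslog entirely and collect its own stderr output. A minimal postgresql.conf fragment (parameter names as of 8.3; values are illustrative):

```
log_destination = 'stderr'    # skip syslog entirely
logging_collector = on        # 8.3+: background process captures stderr
log_directory = 'pg_log'      # relative to the data directory
```

This avoids syslog's synchronous-write behavior, which is one common source of logging-induced I/O stalls.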
On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote:
- On Tue, Apr 20, 2010 at 12:20 PM, David Kerr wrote:
- > On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:
- > - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
- > -
- > - You can absolutely use copy if you like, but
On Tue, Apr 20, 2010 at 12:20 PM, David Kerr wrote:
> On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:
> - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
> -
> - > that thought occured to me while I was testing this. I ran a vacuumdb -z
> - > on my database during the load and
On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote:
- On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
-
- > that thought occured to me while I was testing this. I ran a vacuumdb -z
- > on my database during the load and it didn't impact performance at all.
- >
- > Incidentally the
On Tue, Apr 20, 2010 at 12:15 PM, Scott Marlowe wrote:
> On Tue, Apr 20, 2010 at 11:39 AM, David Kerr wrote:
>> Howdy all,
>>
>> I've got a huge server running just postgres. It's got 48 cores and 256GB of
>> ram. Redhat 5.4, Postgres 8.3.9.
>> 64bit OS. No users currently.
>
> What's your IO su
On Tue, 20 Apr 2010, Nikolas Everett wrote:
You can absolutely use COPY if you like, but you need to use a non-standard
JDBC driver: kato.iki.fi/sw/db/postgresql/jdbc/copy/. I've used it in the
past and it worked.
Copy support has been added to the 8.4 driver.
Kris Jurka
--
Sent via pgsql
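With COPY support in the 8.4 driver, a bulk load from Java looks roughly like the sketch below. The `CopyManager`/`getCopyAPI` names are from the 8.4 postgresql-jdbc API; the table name and the escaping helper are made up for illustration. COPY's default text format needs backslash, tab, and newline escaped, which is the part that can be tested without a server:

```java
// Sketch only: the live-connection usage is in the comment below;
// escapeCopyField is a hypothetical helper for COPY's default text format.
public class CopyLoad {
    // Escape one field for COPY ... FROM STDIN (text format):
    // backslash first, then the control characters.
    static String escapeCopyField(String s) {
        return s.replace("\\", "\\\\")
                .replace("\t", "\\t")
                .replace("\n", "\\n")
                .replace("\r", "\\r");
    }

    /* With a live connection, roughly (8.4 driver API):
     *
     *   CopyManager cm = ((PGConnection) conn).getCopyAPI();
     *   cm.copyIn("COPY staging FROM STDIN", new java.io.StringReader(data));
     */
    public static void main(String[] args) {
        System.out.println(CopyLoad.escapeCopyField("a\tb")); // prints a\tb
    }
}
```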
David Kerr wrote:
> Incidentally the code is written to work like this :
>
> while (read X lines in file){
> Process those lines.
> write lines to DB.
> }
Unless you're selecting from multiple database tables in one query,
effective_cache_size shouldn't make any difference. There's
probably
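The read-X-lines loop quoted above maps naturally onto JDBC batching. Here is a minimal sketch with the chunking logic separated out so it runs without a database; the INSERT statement, table, and column names in the comment are hypothetical:

```java
import java.util.*;

// Split the input into chunks of x lines; each chunk becomes one
// batched, committed INSERT round trip.
public class ChunkedLoad {
    static <T> List<List<T>> chunks(List<T> lines, int x) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += x)
            out.add(lines.subList(i, Math.min(i + x, lines.size())));
        return out;
    }

    /* For each chunk, roughly:
     *   conn.setAutoCommit(false);
     *   PreparedStatement ps = conn.prepareStatement("INSERT INTO t(v) VALUES (?)");
     *   for (String v : chunk) { ps.setString(1, v); ps.addBatch(); }
     *   ps.executeBatch();
     *   conn.commit();
     */
    public static void main(String[] args) {
        List<List<String>> c = ChunkedLoad.chunks(Arrays.asList("a", "b", "c", "d", "e"), 2);
        System.out.println(c.size()); // prints 3
    }
}
```

Batching amortizes the per-statement round-trip cost, which is usually the dominant overhead in this kind of loader short of switching to COPY.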
On Tue, Apr 20, 2010 at 11:39 AM, David Kerr wrote:
> Howdy all,
>
> I've got a huge server running just postgres. It's got 48 cores and 256GB of
> ram. Redhat 5.4, Postgres 8.3.9.
> 64bit OS. No users currently.
What's your IO subsystem look like? What did vmstat actually say?
On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
> that thought occured to me while I was testing this. I ran a vacuumdb -z
> on my database during the load and it didn't impact performance at all.
The window to run ANALYZE usefully is pretty short. If you run it
before the load is complete, y
On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
> that thought occured to me while I was testing this. I ran a vacuumdb -z
> on my database during the load and it didn't impact performance at all.
>
> Incidentally the code is written to work like this :
>
> while (read X lines in file){
> Proc
On Tue, Apr 20, 2010 at 01:44:18PM -0400, Robert Haas wrote:
- On Tue, Apr 20, 2010 at 1:39 PM, David Kerr wrote:
- > My initial thought was hardware issues so we got sar, vmstat, etc. all
- > running on the box and they didn't give
- > any indication that we had resource issues.
- >
- > So I decided
On Tue, Apr 20, 2010 at 1:39 PM, David Kerr wrote:
> Howdy all,
>
> I've got a huge server running just postgres. It's got 48 cores and 256GB of
> ram. Redhat 5.4, Postgres 8.3.9.
> 64bit OS. No users currently.
>
> I've got a J2EE app that loads data into the DB, it's got logic behind it so
> i
On Tue, 2010-04-20 at 10:39 -0700, David Kerr wrote:
> Howdy all,
>
> I've got a huge server running just postgres. It's got 48 cores and 256GB of
> ram. Redhat 5.4, Postgres 8.3.9.
> 64bit OS. No users currently.
>
> I've got a J2EE app that loads data into the DB, it's got logic behind it so
Howdy all,
I've got a huge server running just postgres. It's got 48 cores and 256GB of
RAM. Red Hat 5.4, Postgres 8.3.9.
64bit OS. No users currently.
I've got a J2EE app that loads data into the DB, it's got logic behind it so
it's not a simple bulk load, so
I don't think we can use COPY.
Bas
Dave Crooke wrote:
> AFAICT from the Java end, ResultSet.close() is supposed to be
> final.
For that ResultSet. That doesn't mean a ResultSet defines a cursor.
Such methods as setCursorName, setFetchSize, and setFetchDirection
are associated with a Statement. Think of the ResultSet as the
re
On Mon, 19 Apr 2010, Dave Crooke wrote:
Statement.close() appears to get the job done (in my environment, PG's
driver never sees a Connection.close() because of DBCP).
I'd consider the fact that ResultSet.close() does not release the implicit
cursor to be something of a bug, but it may well h
AFAICT from the Java end, ResultSet.close() is supposed to be final. There
is no way I know of in JDBC to get a handle back to the cursor on the server
side once you have made this call - in fact, its sole purpose is to inform
the server in a timely fashion that this cursor is no longer required, s
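The contract being argued over can be modeled in a few lines. This toy model (class names invented) encodes only the JDBC rule that closing a Statement closes its current ResultSet; it says nothing about what the PG driver does with the server-side portal:

```java
// Toy model of the JDBC close cascade: Statement.close() must close the
// Statement's current ResultSet. FakeResultSet/FakeStatement are invented
// stand-ins, not driver classes.
public class CloseModel {
    static class FakeResultSet {
        boolean closed;
        void close() { closed = true; }
    }
    static class FakeStatement {
        FakeResultSet rs = new FakeResultSet();
        // Per the JDBC spec, closing a Statement closes its ResultSet.
        void close() { rs.close(); }
    }

    public static void main(String[] args) {
        FakeStatement st = new FakeStatement();
        st.close();
        System.out.println(st.rs.closed); // prints true
    }
}
```

This is why closing the Statement (rather than only the ResultSet) is the reliable way to release resources in the scenario described above.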
On Fri, 2010-04-16 at 11:02 +0300, Віталій Тимчишин wrote:
> Hello.
>
>
> I have a query that performs very poor because there is a limit on
> join column that is not applied to other columns:
>
>
> select * from company this_ left outer join company_tag this_1_ on
> this_.id=this_1_.company_id
Hey folks
I am trying to do a full table scan on a large table from Java, using a
straightforward "select * from foo". I've run into these problems:
1. By default, the PG JDBC driver attempts to suck the entire result set
into RAM, resulting in *java.lang.OutOfMemoryError* ... this is not cool, i
Kevin,
thanks for your time!
Here the requested tests.
> (1) Try it without the ORDER BY clause and the LIMIT.
W/o the 'order by' it works instantly (about 1ms!)
Limit (cost=0.00..3.59 rows=5 width=4) (actual time=0.127..0.229
rows=5 loops=1)
-> Nested Loop (cost=0.00..277863.53 rows=386544
I have a lot of centos servers which are running postgres. Postgres isn't used
that heavily on any of them, but lately, the stats collector process keeps
causing tons of IO load. It seems to happen only on servers with centos 5.
The versions of postgres that are running are:
8.1.18
8.2.6
8.3.1
8
Dave Crooke wrote:
> I'd consider the fact that ResultSet.close() does not release the
> implicit cursor to be something of a bug
What's your reasoning on that? The definitions of cursors in the
spec, if memory serves, allow a cursor to be closed and re-opened;
why would this be treated differently
Hi,
I am using dbt2 on Linux 64 (CentOS release 5.3 (Final)). I have compiled the
latest postgresql-8.4.3 code on the machine and run dbt2 against it. I am a
little confused about the results. I ran dbt2 with the following
configuration:
DBT2 Options:
WAREHOUSES=75
DB_CONNECTIONS=20