Hi,
I think the problem is the ODBC driver not using cursors properly even
though it should. The database itself is not doing anything it shouldn't;
in fact it has all the functionality needed to handle this request in a
fast and effective way - just l
> Is there a reason you can't use cursors (explicitly, or via ODBC if it
> provides some glue on top of them) to keep the result set on the server?
>
> http://candle.pha.pa.us/main/writings/pgsql/sgml/sql-declare.html
> http://candle.pha.pa.us/main/writings/pgsql/sgml/sql-fetch.html
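For reference, the cursor approach looks roughly like this (table and
cursor names are illustrative; SCROLL is assumed so that absolute
positioning works):

```sql
BEGIN;                                -- cursors only live inside a transaction
DECLARE big_cur SCROLL CURSOR FOR SELECT * FROM test;
FETCH FORWARD 1000 FROM big_cur;      -- pull the first 1000 rows
MOVE ABSOLUTE 99999 IN big_cur;       -- position just before row 100,000
FETCH 1 FROM big_cur;                 -- fetch row 100,000
CLOSE big_cur;
COMMIT;
```

Only the fetched rows cross the wire; the rest of the result set stays
on the server.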
I can only us
Tom Lane wrote:
> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > On Tue, 26 Aug 2003, Ang Chin Han wrote:
> >> Veering aside a bit, since we usually pinpoint performance problems by
> >> looking at EXPLAIN ANALYZE's differences between the planner's
> >> estimation and actual execution's stats, what
On Thu, 2003-09-04 at 22:13, Relaxin wrote:
> Finally, someone who will actually assume/admit that it is returning the
> entire result set to the client.
> Whereas other DBMSs manage the records at the server.
Is there a reason you can't use cursors (explicitly, or via ODBC if it
provides some glue on top of them) to keep the result set on the server?
Thank you Christopher.
> Change fsync to true (you want your data to survive, right?) and
> increase shared buffers to something that represents ~10% of your
> system memory, in blocks of 8K.
I turned it off just in the hope that things would run faster.
> None of this is likely to substantially
A long time ago, in a galaxy far, far away, "Relaxin" <[EMAIL PROTECTED]> wrote:
>> Have you changed any of the settings yet in postgresql.conf,
>> specifically the shared_buffers setting?
>
> fsync = false
> tcpip_socket = true
> shared_buffers = 128
Change fsync to true (you want your data to survive, right?) and
increase shared buffers to something that represents ~10% of your
system memory, in blocks of 8K.
I reset the shared_buffers to 1000 from 128, but it made no difference.
"scott.marlowe" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> On Wed, 3 Sep 2003, Relaxin wrote:
>
> > I have a table with 102,384 records in it, each record is 934 bytes.
> >
> > Using the following select statement:
On Thu, 2003-09-04 at 19:50, Neil Conway wrote:
> On Thu, 2003-09-04 at 13:46, Rod Taylor wrote:
> > Run a VACUUM FULL ANALYZE between runs. This will force a full scan of
> > all data for stats
>
> It will? Are you sure about that?
You're right. According to the docs it won't.
I had a poor sta
The documentation led us to believe that it would not, but we are
testing just the same (at least checking that the pg_statistics are the
same after each load and VACUUM FULL ANALYZE). Will report back.
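One way to check that the statistics really are identical after each
load is to compare the system catalog directly; a sketch (the table
name is illustrative):

```sql
SELECT staattnum, stanullfrac, stawidth, stadistinct
FROM pg_statistic s
JOIN pg_class c ON c.oid = s.starelid
WHERE c.relname = 'lineitem'          -- illustrative table name
ORDER BY staattnum;
```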
On Thu, 2003-09-04 at 16:50, Neil Conway wrote:
> On Thu, 2003-09-04 at 13:46, Rod Taylor w
On Thu, 2003-09-04 at 13:46, Rod Taylor wrote:
> Run a VACUUM FULL ANALYZE between runs. This will force a full scan of
> all data for stats
It will? Are you sure about that?
-Neil
>
> Have you changed any of the settings yet in postgresql.conf,
> specifically the shared_buffers setting?
>
fsync = false
tcpip_socket = true
shared_buffers = 128
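Following the advice elsewhere in the thread, the settings would look
something like this in postgresql.conf (values are illustrative for a
512MB machine; shared_buffers is counted in 8K blocks):

```
fsync = true              # keep data crash-safe
tcpip_socket = true
shared_buffers = 6400     # ~50MB, roughly 10% of 512MB RAM, in 8K blocks
```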
Mary Edie Meredith <[EMAIL PROTECTED]> writes:
> Stephan Szabo kindly responded to our earlier queries suggesting we look
> at default_statistics_target and ALTER TABLE ALTER COLUMN SET
> STATISTICS.
> These determine the number of bins in the histogram for a given column.
> But for a large num
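The two knobs mentioned can be applied like this (table, column, and
target value are illustrative; lineitem is assumed from the DBT3/TPC-H
context):

```sql
-- widen the histogram for one column, then re-analyze
ALTER TABLE lineitem ALTER COLUMN l_shipdate SET STATISTICS 100;
ANALYZE lineitem;
```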
You would "get" all rows, but they'd be stored server side until your
client asked for them.
I.e. a cursor would level the field here, since you say that the other
test cases stored the entire result set on the server. Or did I
misunderstand what you meant there?
On Thu, 4 Sep 2003, Relaxin w
Hi,
I have a table that looks like this:
DATA | ID | TIME
-----|----|-----
The table holds app. 14M rows now and grows by app. 350k rows a day.
The ID-column holds about 1500 unique values (integer).
The TIME-columns is of type timestamp without timezone.
I have one index (b-tree) on th
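For queries that filter on the ID column and a TIME range, a composite
b-tree index is the usual answer; a sketch with assumed table and
column names:

```sql
-- serves queries like WHERE id = ? AND "time" BETWEEN ? AND ?
CREATE INDEX my_table_id_time_idx ON my_table (id, "time");
```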
On Thu, 4 Sep 2003, Federico Sevilla III wrote:
> (Please follow Mail-Followup-To, I'm not on the pgsql-performance
> mailing list but am on the Linux-XFS mailing list. My apologies too for
> the cross-post. I'm cc'ing the Linux-XFS mailing list in case people
> there will be interested in this, t
On Wed, 3 Sep 2003, Relaxin wrote:
> I have a table with 102,384 records in it, each record is 934 bytes.
>
> Using the following select statement:
> SELECT * from
>
> PG Info: version 7.3.4 under cygwin on Windows 2000
> ODBC: version 7.3.100
>
> Machine: 500 Mhz/ 512MB RAM / IDE HDD
>
>
> U
Vivek Khera wrote:
> > "BM" == Bruce Momjian <[EMAIL PROTECTED]> writes:
>
>
> BM> The 32k number is from Tatsuo testing a few years ago.
>
> Can you verify for me that these parameters in postgresql.conf are
> based on the BLCKSZ (i.e. one buffer is the size of the BLCKSZ macro):
>
> shared
On Wed, 3 Sep 2003, Vivek Khera wrote:
> > "SC" == Sean Chittenden <[EMAIL PROTECTED]> writes:
>
> >> I need to step in and do 2 things:
> SC> Thanks for posting that. Let me know if you have any questions while
> SC> doing your testing. I've found that using 16K blocks on FreeBSD
> SC> res
> "BM" == Bruce Momjian <[EMAIL PROTECTED]> writes:
BM> The 32k number is from Tatsuo testing a few years ago.
Can you verify for me that these parameters in postgresql.conf are
based on the BLCKSZ (i.e. one buffer is the size of the BLCKSZ macro):
shared_buffers
effective_cache_size
Logica
Vivek Khera wrote:
> > "MGF" == Marc G Fournier <[EMAIL PROTECTED]> writes:
>
> MGF> Just curious, but Bruce(?) mentioned that apparently a 32k block size was
> MGF> found to show a 15% improvement ... care to run one more test? :)
>
>
> Well, it is hard to tell based on my quick and dirty t
On Thu, 2003-09-04 at 13:41, Mary Edie Meredith wrote:
> Our port of OSDL DBT3 test suite to PostgreSQL (see Background
> information below) is nearing completion. We would also like to confirm
> our understanding of an outstanding consistency issue.
>
> We have not been able to do meaningful ker
Relaxin wrote:
I have a table with 102,384 records in it, each record is 934 bytes.
Using the following select statement:
SELECT * from
PG Info: version 7.3.4 under cygwin on Windows 2000
ODBC: version 7.3.100
Machine: 500 Mhz/ 512MB RAM / IDE HDD
Under PG: Data is returned in 26 secs!!
Under SQ
Our port of OSDL DBT3 test suite to PostgreSQL (see Background
information below) is nearing completion. We would also like to confirm
our understanding of an outstanding consistency issue.
We have not been able to do meaningful kernel testing since the runs
(all parameters/kernels being equal) a
I had these same issues with the PeerDirect version also.
"Hannu Krosing" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Relaxin wrote on Thu, 04.09.2003 at 17:35:
> > So after you did that, were you able to position to ANY record within the
> > resultset?
> >
> > Ex. Position 100,000;
Relaxin wrote on Thu, 04.09.2003 at 17:35:
> So after you did that, were you able to position to ANY record within the
> resultset?
>
> Ex. Position 100,000; then to Position 5; then to position 50,000, etc...
not in the case of:
time psql test100k -c 'select * from test' > /dev/null
as the who
> "MGF" == Marc G Fournier <[EMAIL PROTECTED]> writes:
MGF> Just curious, but Bruce(?) mentioned that apparently a 32k block size was
MGF> found to show a 15% improvement ... care to run one more test? :)
Well, it is hard to tell based on my quick and dirty test:
16k page size:
restore time
The table has been Vacuumed and seq_scan is turned on.
"Jean-Luc Lachance" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> You forgot that the original poster's query was:
> SELECT * from
>
> This should require a simple table scan. NO need for stats.
> Either the table has not been properly vacuumed or he's got seq_scan
> off...
You forgot that the original poster's query was:
SELECT * from
This should require a simple table scan. NO need for stats.
Either the table has not been properly vacuumed or he's got seq_scan
off...
JLL
Nick Fankhauser wrote:
>
> > Yes I Analyze also, but there was no need to because it wa
So after you did that, were you able to position to ANY record within the
resultset?
Ex. Position 100,000; then to Position 5; then to position 50,000, etc...
If you are able to do that and have your positioned row available to you
immediately, then I'll believe that it's the ODBC driver.
"Hannu K
Sean Chittenden <[EMAIL PROTECTED]> writes:
>> Just wonderin. What if you symlink WAL to a directory which is on
>> mounted USB RAM drive?
> USB 2.0 you mean? It supposedly runs at 1394 speeds, but USB 1.0/1.1
> runs at 1MB/s under ideal circumstances... that's slower than even old
> IDE drives.
Relaxin wrote on Thu, 04.09.2003 at 03:28:
> I have a table with 102,384 records in it, each record is 934 bytes.
I created a test database on my Linux (RH9) laptop with 30GB/4200RPM ide
drive and P3-1133Mhz, 768MB, populated it with 128000 rows of 930 bytes
each and did
On Wed, Sep 03, 2003 at 12:32:42PM -0700, Naveen Palavalli wrote:
> My server spawns a new connection to Postgresql for each client. My
I don't think you want to do that. You should use a pool. Back end
startup is mighty expensive.
> 1) Effects related to Vacuum :- I performed 10 trials of addin
All rows are required.
"Shridhar Daithankar" <[EMAIL PROTECTED]> wrote in
message news:[EMAIL PROTECTED]
> On 4 Sep 2003 at 0:48, Relaxin wrote:
> > All of the databases that I tested the query against gave me immediate
> > access to ANY row of the resultset once the data had been returned.
> >
On 4 Sep 2003 at 0:48, Relaxin wrote:
> All of the databases that I tested the query against gave me immediate
> access to ANY row of the resultset once the data had been returned.
> Ex. If I'm currently at the first row and then wanted to go to the 100,000th
> row, I would be there immediately, and
> Can you tell us what you were *actually* doing? Somehow it sounds as
> though the other databases were throwing away the data whereas
> PostgreSQL was returning it all "kawhump!" in one batch.
All of the databases that I tested the query against gave me immediate
access to ANY row of the result
> Just wonderin. What if you symlink WAL to a directory which is on
> mounted USB RAM drive?
USB 2.0 you mean? It supposedly runs at 1394 speeds, but USB 1.0/1.1
runs at 1MB/s under ideal circumstances... that's slower than even old
IDE drives.
> Will that increase any throughput?
Probably not.
On 3 Sep 2003 at 23:36, Rod Taylor wrote:
> > - the way PostgreSQL expects data to be written to disk without the
> >fsync calls for things not to get corrupted in the event of a crash,
> >and
>
> If you want the filesystem to deal with this, I believe it is necessary
> for it to write t