Sai Hertz And Control Systems <[EMAIL PROTECTED]> writes:
> I have created my tables without OIDS now my doubts are :
> 1. Will this speed up the data insertion process
Slightly. It means that each inserted row will be 4 bytes smaller (on
disk), which in turn means you can fit more tuples on a page
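In the 7.x releases under discussion, the hidden 4-byte oid column is added by default, so suppressing it must be requested explicitly. A minimal sketch (table and column names are illustrative, not from the thread):

```sql
-- WITHOUT OIDS drops the hidden 4-byte oid column described above,
-- making each stored row slightly smaller.
CREATE TABLE narrow_table (
    anumber  integer,
    sometext text
) WITHOUT OIDS;
```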
Russell Garrett wrote:
WAL on single drive: 7.990 rec/s
WAL on 2nd IDE drive: 8.329 rec/s
WAL on tmpfs: 13.172 rec/s
A huge jump in performance, but a bit scary having a WAL that can
disappear at any time. I'm gonna work up an rsync script and do some
power-off experiments to see how badly it gets mangled.
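A rough sketch of the setup being benchmarked, assuming a stock data directory layout (all paths and sizes are my assumptions, not from the thread; the periodic copy does not make this safe, since WAL written after the last rsync is lost on power-off):

```sh
# Sketch only -- paths are assumptions, not from the thread.
pg_ctl -D /var/lib/pgsql/data stop
mount -t tmpfs -o size=256M tmpfs /mnt/walfs
mv /var/lib/pgsql/data/pg_xlog /mnt/walfs/pg_xlog
ln -s /mnt/walfs/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start
# Periodic copy back to real disk; anything written after the
# last rsync is still gone on power loss.
rsync -a --delete /mnt/walfs/pg_xlog/ /var/lib/pgsql/wal-copy/
```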
Some arbitrary data processing job
Manfred Spraul <[EMAIL PROTECTED]> writes:
> One advantage of a separate write and fsync call is better performance
> for the writes that are triggered within AdvanceXLInsertBuffer: I'm not
> sure how often that's necessary, but it's a write while holding both the
> WALWriteLock and WALInsertLock
If you want to speed up the elapsed times, then the first thing would be
to attempt to reduce the IO using some indexes, e.g. on test1(anumber),
test2(anumber), test3((anumber%13)), test3((anumber%5)) and
test4((anumber%27))
However, if you wish to keep hammering the IO, then you would not use
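Spelled out as DDL, the suggested indexes would look like this (index names are mine; the expression indexes need the extra set of parentheses):

```sql
-- Plain indexes on the join column:
CREATE INDEX test1_anumber_idx ON test1 (anumber);
CREATE INDEX test2_anumber_idx ON test2 (anumber);
-- Expression indexes matching the modulo expressions in the queries:
CREATE INDEX test3_mod13_idx ON test3 ((anumber % 13));
CREATE INDEX test3_mod5_idx  ON test3 ((anumber % 5));
CREATE INDEX test4_mod27_idx ON test4 ((anumber % 27));
ANALYZE;  -- refresh statistics so the planner will consider the new indexes
```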
Hi everyone,
I found that performance gets worse as the size of a given table
increases. For example, I've just run some of the scripts shown at
http://www.potentialtech.com/wmoran/postgresql.php
I understand that those scripts are designed to see the behavior of postgresql under
different file
Shridhar Daithankar wrote:
FWIW, there are only two pieces of software that need to be 64-bit aware
for a typical server job: the kernel and glibc. The rest of the apps can
do fine as 32-bit unless you are Oracle and insist on outsmarting the OS.
In fact, running 32-bit apps on a 64-bit OS has plenty of advantages l
David Shadovitz <[EMAIL PROTECTED]> writes:
> If you think that you or anyone else would invest the time, I could post more
> info.
I doubt you will get any useful help if you don't post more info.
> I will also try Shridhar's suggestions on statistics_target and
> enable_hash_join.
It seemed t
Dear all,
I have created my tables without OIDs; now my doubts are:
1. Will this speed up the data insertion process?
2. Though I have not written any code in any of my pgsql functions
that depends on OIDs:
   1. Will the functions behave differently internally without OIDs?
   2. Will my applica
On Fri, 12 Dec 2003, Rhaoni Chiu Pereira wrote:
Hi, is there a switch in your pgsql/odbc connector to enable cursors? If
so, try turning that on.
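If the switch being referred to is psqlODBC's UseDeclareFetch option (an assumption on my part), it can be set in the DSN definition; the DSN name and fetch size below are illustrative:

```ini
; Assumed psqlODBC DSN -- names and values are illustrative.
[PostgreSQL-DSN]
Driver = PostgreSQL
UseDeclareFetch = 1   ; fetch rows through a cursor instead of all at once
Fetch = 100           ; rows retrieved per fetch
```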
> This is not very informative when you didn't show us the query nor
> the table schemas..
> BTW, what did you do with this, print and OCR it?
Tom,
I work in a classified environment, so I had to sanitize the query plan, print
it, and OCR it. I spent a lot of time fixing typos, but I guess at
David Shadovitz <[EMAIL PROTECTED]> writes:
> Well, now that I have the plan for my slow-running query, what do I
> do?
This is not very informative when you didn't show us the query nor
the table schemas (column datatypes and the existence of indexes
are the important parts). I have a feeling th
Hi List,
First of all, I tried to subscribe to the ODBC list, but it seems that the
subscription link is broken! So here it goes:
I have Delphi software using TTable components that converts DBF information
to PostgreSQL and Oracle databases. My problem is that PostgreSQL is too slow,
t
For clearer understanding: this is NOT about TRUNCATE being
slow "as such" vs. DELETE, but about a change in the order of
a (...) magnitude from 7.3.4 to 7.4...
Here's some more info, plus test results w/a "full" db:
300 tables, 2 pieces of modelled hw, so there's one table
w/2 entries
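A minimal way to compare the two operations from psql (table name is illustrative; `\timing` reports elapsed time per statement). One plausible cause of the 7.3.4 vs. 7.4 change is that 7.4 made TRUNCATE transaction-safe, but the thread is cut off before that is confirmed:

```sql
\timing
DELETE FROM hw_table;    -- row-by-row deletion, MVCC-visible
TRUNCATE TABLE hw_table; -- file-level truncation, the operation that slowed down
```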
David Shadovitz wrote:
Well, now that I have the plan for my slow-running query, what do I do? Where
should I focus my attention?
Briefly looking over the plan and seeing the estimated vs. actual row
mismatch, I can suggest the following:
1. VACUUM FULL the database. Probably you have already
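The first suggestion, plus the statistics_target idea mentioned elsewhere in the thread, as concrete statements (table and column names are illustrative, since the real schema was never posted):

```sql
VACUUM FULL ANALYZE;  -- reclaim dead space and refresh planner statistics
-- If estimated vs. actual row counts still diverge, raise the
-- per-column statistics target on the join column and re-analyze:
ALTER TABLE some_table ALTER COLUMN join_col SET STATISTICS 100;
ANALYZE some_table;
```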
Well, now that I have the plan for my slow-running query, what do I do? Where
should I focus my attention?
Thanks.
-David
Hash Join (cost=16620.59..22331.88 rows=40133 width=266) (actual
time=118773.28..580889.01 rows=57076 loops=1)
-> Hash Join (cost=16619.49..21628.48 rows=40133
On 2003-12-12 at 02:17, Aram Kananov wrote:
select localtimestamp into v;
raise notice ''Timestamp: %'', v;
Don't use localtimestamp, now(), or any other transaction-based time
function; they all return the same value throughout a transaction. The
only time function which can be
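The truncated sentence presumably refers to timeofday(), the one 7.x time function that advances within a transaction (an assumption on my part). A sketch in the 7.x single-quoted function-body style used above:

```sql
CREATE OR REPLACE FUNCTION demo_times() RETURNS void AS '
DECLARE
    v timestamp;
BEGIN
    SELECT localtimestamp INTO v;      -- frozen at transaction start
    RAISE NOTICE ''Transaction time: %'', v;
    RAISE NOTICE ''Wall clock: %'', timeofday();  -- advances during the transaction
END;
' LANGUAGE plpgsql;
```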