Joel Fradkin wrote:
I did think of something similar, just loading the data tables with junk
records, and I may visit that idea with Josh.
I did just do some comparisons on timing of a plain select * from tbl where
indexed column = x, and it was considerably slower than both MSSQL and MySQL,
so I am
[EMAIL PROTECTED] wrote:
Hmm,
I have asked some people on the list, and someone posted this link:
http://archives.postgresql.org/pgsql-performance/2004-12/msg00101.php
It is quite useful to read, but I am not sure that this trick is very
helpful.
I want to split my 1 GByte table into
Greg Sabino Mullane wrote:
Um, can't we just get that from pg_settings?
Anyway, I'll be deriving settings from the .conf file, since most of the
time the Configurator will be run on a new installation.
Aren't most of the settings all kept in the SHOW variables anyway?
As I said, it may
Greg Sabino Mullane wrote:
I wonder if this could be combined with the configurator somehow.
Currently, integration won't work with Perl, so maybe C for the core and
Perl for the interactive part would be better.
Probably so. Seems there is a bit of convergent evolution going on. When I
Bruce Momjian wrote:
Agha Asif Raza wrote:
Is there any MS SQL Server-like 'Profiler' available for PostgreSQL? A
profiler is a tool that monitors the database server and outputs a detailed
trace of all the transactions/queries that are executed on a database during
a specified period of
Jim C. Nasby wrote:
On Sun, Jul 31, 2005 at 08:51:06AM -0800, Matthew Schumacher wrote:
Ok, here is the current plan.
Change the spamassassin API to pass a hash of tokens into the storage
module, pass the tokens to the proc as an array, start a transaction,
load the tokens into a temp table
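The plan above (pass the tokens in as a batch, open one transaction, stage them in a temp table, then merge) can be sketched generically. This is a minimal illustration using SQLite as a stand-in for PostgreSQL; the table and column names (`token_store`, `token`, `hits`) are hypothetical, not from SpamAssassin's actual schema.

```python
import sqlite3

# Stand-in permanent token store; in the thread this would be a PostgreSQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token_store (token TEXT PRIMARY KEY, hits INTEGER)")

def load_tokens(conn, tokens):
    """Bulk-load a batch of tokens via a temp table inside one transaction,
    then upsert them into the permanent store with a single statement."""
    with conn:  # one transaction for the whole batch
        conn.execute("CREATE TEMP TABLE tok_in (token TEXT)")
        conn.executemany("INSERT INTO tok_in VALUES (?)", [(t,) for t in tokens])
        # WHERE true avoids SQLite's INSERT...SELECT upsert parse ambiguity.
        conn.execute("""
            INSERT INTO token_store (token, hits)
            SELECT token, COUNT(*) FROM tok_in WHERE true GROUP BY token
            ON CONFLICT(token) DO UPDATE SET hits = hits + excluded.hits
        """)
        conn.execute("DROP TABLE tok_in")

load_tokens(conn, ["spam", "ham", "spam"])
print(conn.execute("SELECT token, hits FROM token_store ORDER BY token").fetchall())
# -> [('ham', 1), ('spam', 2)]
```

The point of the staging table is that the per-token work happens in one set-based statement rather than one round trip per token.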
Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
If you don't want to optimize the whole application, I'd at least
just push the DB operations down to a very small number of
connections (*one* might even be optimal!), waiting on some kind of
thread-safe queue for updates from the
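The suggestion above (funnel all DB operations through a very small number of connections, fed by a thread-safe queue) can be sketched as follows. This is a minimal sketch, not the poster's actual code; `apply_update` is a hypothetical stand-in for the real database call.

```python
import queue
import threading

updates = queue.Queue()  # thread-safe queue shared by all producers
applied = []

def apply_update(item):
    applied.append(item)  # stand-in for e.g. cursor.execute(...)

def db_worker():
    """Single worker owning the (one) DB connection, draining the queue."""
    while True:
        item = updates.get()
        if item is None:  # sentinel: shut down
            break
        apply_update(item)
        updates.task_done()

worker = threading.Thread(target=db_worker)
worker.start()

for i in range(5):       # producers would normally be many threads
    updates.put(("UPDATE", i))

updates.put(None)        # tell the worker to finish
worker.join()
print(len(applied))      # -> 5
```

With one consumer, updates are serialized on a single connection, which is exactly the property the post argues might be optimal.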
Joe wrote:
The pages do use a number of queries to collect all the data for display
but nowhere near 50. I'd say it's probably less than a dozen.
The schema is fairly simple having two main tables: topic and entry
(sort of like account and transaction in an accounting scenario). There
NSO wrote:
Hello,
No, it is not a web app; I tested with a simple Delphi app and with pgAdmin
III... same results. A query from pgAdmin takes up to 30 seconds...
Displaying the data can take a long time on several platforms for
pgAdmin; complex controls tend to be dead slow on larger data sets.
NSO wrote:
Well, no. Delphi isn't better; same time just for downloading the data... But
as I said before, if for example pgAdmin III is running on the server machine it is
a lot faster. I do not know why; I was monitoring the network connection
between client and server and it is using only up to 2% of full
Dave Page wrote:
Now *I* am confused. What does pgAdmin do beyond handing the query to
the database?
Nothing - it just uses libpq's PQexec function. The speed issue in
pgAdmin is rendering the results in the grid, which can be slow on some
OSes due to inefficiencies in some grid
Qingqing Zhou wrote:
Jeff Frost [EMAIL PROTECTED] wrote
The hardware I have available is as follows:
* 2x dual Opteron 8G ram, 2x144G 15Krpm SCSI
* 2x dual Opteron 8G ram, 2x72G 15Krpm SCSI
* 1x dual Opteron 16G ram, 2x36G 15Krpm SCSI 16x400G 7200rpm SATA
(2) The hardware
David Lang wrote:
These boxes don't look like they were designed for a DB server. The first
two are very CPU bound, and the third may be a good choice for very large
amounts of streamed data, but not optimal for TP random access.
I don't know what you mean when you say that the first ones are CPU
Wolfgang Gehner wrote:
Hi there,
I need a simple but large table with several million records. I do batch
inserts with JDBC. After the first million or so records,
the inserts degrade and become VERY slow (like 8 minutes vs. initially 20
seconds).
The table has no indices except the PK while I
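The post above doesn't identify the cause of the slowdown, but a common ingredient in bulk loads is committing in batches through the driver's batch API rather than per row. A minimal sketch of that pattern, with SQLite standing in for PostgreSQL; the table name `big` and the batch size are arbitrary choices of mine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, payload TEXT)")

BATCH = 1000  # arbitrary; tune for the workload

def bulk_insert(conn, rows):
    """Insert rows in batches, with one commit per batch instead of per row."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == BATCH:
            with conn:  # commits once for the whole batch
                conn.executemany("INSERT INTO big VALUES (?, ?)", batch)
            batch.clear()
    if batch:  # flush the final partial batch
        with conn:
            conn.executemany("INSERT INTO big VALUES (?, ?)", batch)

bulk_insert(conn, ((i, "x") for i in range(2500)))
print(conn.execute("SELECT COUNT(*) FROM big").fetchone()[0])  # -> 2500
```

In JDBC the equivalent is `addBatch()`/`executeBatch()` with auto-commit off; the batching pattern is the same either way.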
Rodrigo Madera wrote:
Imagine a table named Person with first_name and age.
Now let's make it fancy and put in mother and father fields that are
references to the same table (Person). And to get even fuzzier, let's
drop in some siblings:
CREATE TABLE person(
    id bigint PRIMARY KEY,
    first_name text,
    age integer,
    mother bigint REFERENCES person(id),
    father bigint REFERENCES person(id)
);
Hélder M. Vieira wrote:
- Original Message - From: Andreas Pflug
[EMAIL PROTECTED]
Create a table sibling with parent_id, sibling_id and appropriate
FKs, allowing the model to reflect the relation. At the same time, you
can drop mother and father, because this relation is covered
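The suggestion above, a separate relation table from which siblings are derived by joining on a shared parent, can be sketched like this. SQLite stands in for PostgreSQL, and the sample data and column types are mine, not from the thread; only the table name `sibling` with `parent_id`/`sibling_id` follows the post.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, first_name TEXT);
    CREATE TABLE sibling (
        parent_id  INTEGER REFERENCES person(id),
        sibling_id INTEGER REFERENCES person(id),
        PRIMARY KEY (parent_id, sibling_id)
    );
""")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(1, "Eve"), (2, "Cain"), (3, "Abel")])
conn.executemany("INSERT INTO sibling VALUES (?, ?)", [(1, 2), (1, 3)])

# Siblings of Cain (id 2): everyone who shares a parent with him.
siblings = conn.execute("""
    SELECT DISTINCT p.first_name
    FROM sibling a
    JOIN sibling b ON b.parent_id = a.parent_id
                  AND b.sibling_id <> a.sibling_id
    JOIN person p ON p.id = b.sibling_id
    WHERE a.sibling_id = 2
""").fetchall()
print(siblings)  # -> [('Abel',)]
```

Because the relation lives in its own table, mother/father columns on person become redundant, which is the point made in the quoted post.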
Christopher Kings-Lynne wrote:
The pgAdmin query tool is known to give an answer about 5x the real
answer - don't believe it!
Everybody please forget the factor of 5 immediately. It's not a factor at all,
but GUI update time that is *added*, and that depends on rows * columns.
ryan groth wrote:
Antoine wrote:
Hi,
I have enabled the autovacuum daemon, but occasionally still get a
message telling me I need to run vacuum when I access a table in
pgadmin.
pgAdmin notices a discrepancy between the real rowcount and the estimated
rowcount and thus suggests running vacuum/analyze; it won't examine
Antoine wrote:
On 18/03/06, Andreas Pflug [EMAIL PROTECTED] wrote:
Antoine wrote:
Hi,
I have enabled the autovacuum daemon, but occasionally still get a
message telling me I need to run vacuum when I access a table in
pgadmin.
Bring up the postgresql.conf editor on that server, and watch
Jesper Krogh wrote:
Hi
I'm currently upgrading a PostgreSQL 7.3.2 database to
8.1.something-good.
I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
30 hours and gave me a 90 GB zipped file. Running
cat sqldump.gz | gunzip | psql
into the 8.1 database seems to take about
I have this table setup on an 8.1.4 server:
pj_info_attach(attachment_nr, some more cols) -- index, 50k rows
pj_info_attach_compressable() INHERITS (pj_info_attach) -- index, 1M rows
pj_info_attach_not_compressable() INHERITS (pj_info_attach) -- index, 0 rows
EXPLAIN ANALYZE SELECT aes FROM
Alvaro Herrera wrote:
Personally I think it would be neat. For example, the admin-tool guys
would be able to get a dump without invoking an external program.
Second, it would really be independent of core releases (other than being
tied to the output format). pg_dump would be just a simple
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Any explanation for this horror?
Existing releases aren't smart about planning joins to inheritance
trees.
Using a view that UNIONs SELECT .. ONLY as a replacement for the parent
table isn't any better. Is that improved too?
CVS
Josh Berkus wrote:
Folks,
In which case, why was 64-bit such a big deal?
We had this discussion with 16/32 bit too, back in those 286/386 times...
Not too many 16-bit apps left now :-)
Regards,
Andreas