Roberto Germano Vieweg Neto wrote:
My application is using Firebird 1.5.2
I have in my database:
- 150 Domains
- 318 tables
- 141 Views
- 365 Procedures
- 407 Triggers
- 75 generators
- 161 Exceptions
- 183 UDFs
- 1077 Indexes
My question is:
Will PostgreSQL be faster than Firebird? How much (in percent)?
Insert into a temp table, then use INSERT INTO ... SELECT FROM to insert
all rows that don't already have a relationship into the proper table.
Chris
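A minimal sketch of the temp-table approach Chris describes, using the
hypothetical column names (id, path, y) that appear later in the thread;
substitute the real schema:

-- Stage everything first (names are stand-ins).
CREATE TEMPORARY TABLE staging (id integer, path text, y integer);
COPY staging FROM '/tmp/new_rows.dat';
-- Insert only the rows whose relationship is not already present.
INSERT INTO final_table (id, path, y)
SELECT s.id, s.path, s.y
FROM staging s
WHERE NOT EXISTS (
    SELECT 1 FROM final_table f
    WHERE f.id = s.id AND f.path = s.path AND f.y = s.y
);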
Dan Harris wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to look
and see if the relationship is already there to prevent multiple entries.
[EMAIL PROTECTED] ("Jeffrey W. Baker") writes:
> I haven't tried this product, but the microbenchmarks seem truly
> slow. I think you would get a similar benefit by simply sticking a
> 1GB or 2GB DIMM -- battery-backed, of course -- in your RAID
> controller.
Well, the microbenchmarks were pretty
Easier and faster than doing the custom trigger is to simply define a
unique index and let the DB enforce the constraint with an index lookup,
something like:
create unique index happy_index ON happy_table(col1, col2, col3);
That should run faster than the custom trigger, but not as fast as the
temp-table approach.
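For row-at-a-time loading, one hedged way to let that unique index do the
policing, assuming PostgreSQL 8.0 or later (where plpgsql can trap errors);
the function name is hypothetical:

CREATE OR REPLACE FUNCTION insert_happy(a integer, b integer, c integer)
RETURNS void AS $$
BEGIN
    INSERT INTO happy_table (col1, col2, col3) VALUES (a, b, c);
EXCEPTION WHEN unique_violation THEN
    NULL;  -- the unique index says the row is already there; skip it
END;
$$ LANGUAGE plpgsql;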
What's your platform, Windows or Linux?
What's the data volume (up to how many million records)?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On behalf of Roberto Germano
Vieweg Neto
Sent: Tuesday, July 26, 2005 16:35
To: pgsql-performance@postgresql.org
Subject:
The number of objects in your system has virtually nothing to do with
performance (at least on any decent database...)
What is your application doing? What's the bottleneck right now?
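A first diagnostic step, for what it's worth (the query is a hypothetical
stand-in for whatever is slow):

-- EXPLAIN ANALYZE shows where the time actually goes.
EXPLAIN ANALYZE
SELECT * FROM some_table WHERE some_col = 42;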
On Tue, Jul 26, 2005 at 04:35:19PM -0300, Roberto Germano Vieweg Neto wrote:
> My application is using Firebird 1.5.2
> you'd be much better served by
> putting a big NVRAM cache in front of a fast disk array
I agree with the point below, but I think price was the issue of the
original discussion. That said, it seems that a single high-speed spindle
would give this a run for its money in both price and performance.
Matthew Nuzum wrote:
On 7/26/05, Dan Harris <[EMAIL PROTECTED]> wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is already there to prevent multiple entries.
On 7/26/05, Dan Harris <[EMAIL PROTECTED]> wrote:
> I am working on a process that will be inserting tens of millions of rows
> and need this to be as quick as possible.
>
> The catch is that for each row I could potentially insert, I need to
> look and see if the relationship is already there to prevent multiple entries.
My application is using Firebird 1.5.2
I have in my database:
- 150 Domains
- 318 tables
- 141 Views
- 365 Procedures
- 407 Triggers
- 75 generators
- 161 Exceptions
- 183 UDFs
- 1077 Indexes
My question is:
Will PostgreSQL be faster than Firebird? How much (in percent)?
I need a gain of about 20%.
Hannu,
On 7/26/05 11:56 AM, "Hannu Krosing" <[EMAIL PROTECTED]> wrote:
> On T, 2005-07-26 at 11:46 -0700, Luke Lonergan wrote:
>
>> Yah - that's a typical approach, and it would be excellent if the COPY
>> bypassed WAL for the temp table load.
>
> Don't *all* operations on TEMP tables bypass WAL?
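For reference, the temp-table load under discussion would look something
like this (names hypothetical); writes to a temp table are not WAL-logged,
though the final INSERT ... SELECT into the real table still is:

CREATE TEMPORARY TABLE load_tmp (id integer, path text, y integer);
COPY load_tmp FROM '/tmp/batch_0001.dat';  -- no WAL traffic for temp-table data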
On Tue, Jul 26, 2005 at 11:23:23AM -0700, Luke Lonergan wrote:
Yup - interesting and very niche product - it seems like its only obvious
application is for the PostgreSQL WAL problem :-)
On the contrary--it's not obvious that it is an ideal fit for a WAL. A
ram disk like this is optimized for random access, while WAL traffic is
mostly small sequential writes that a battery-backed controller cache
already serves well.
Alex Turner wrote:
Also seems pretty silly to put it on a regular SATA connection, when
all that can manage is 150MB/sec. If you made it connect directly
to 64-bit/66MHz PCI then it could actually _use_ the speed of the RAM, not
to mention PCI-X.
Alex Turner
NetEconomist
Well, the whole point
Please see:
http://www.newegg.com/Product/Product.asp?Item=N82E16820145309
and
http://www.newegg.com/Product/Product.asp?Item=N82E16820145416
The price of Reg ECC is not significantly higher than regular RAM at
this point. Plus if you go with super fast 2-2-2-6 then it's actually
more than good enough.
Also seems pretty silly to put it on a regular SATA connection, when
all that can manage is 150MB/sec. If you made it connect directly
to 64-bit/66MHz PCI then it could actually _use_ the speed of the RAM, not
to mention PCI-X.
Alex Turner
NetEconomist
On 7/26/05, John A Meinel <[EMAIL PROTECTED]> wrote:
On Tue, 2005-07-26 at 10:50 -0600, Dan Harris wrote:
> I am working on a process that will be inserting tens of millions of rows
> and need this to be as quick as possible.
>
> The catch is that for each row I could potentially insert, I need to
> look and see if the relationship is already there to prevent multiple entries.
John,
On 7/26/05 9:56 AM, "John A Meinel" <[EMAIL PROTECTED]> wrote:
> You could insert all of your data into a temporary table, and then do:
>
> INSERT INTO final_table SELECT * FROM temp_table t WHERE NOT EXISTS
> (SELECT 1 FROM final_table f WHERE f.id = t.id AND f.path = t.path AND f.y = t.y);
>
> Or you could load
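John's truncated alternative ("Or you could load...") presumably continues
with a join; here is the NOT EXISTS query above rewritten as an anti-join,
which planners of that era often executed far faster than a correlated
subquery (column names hypothetical, as in the quote):

INSERT INTO final_table (id, path, y)
SELECT t.id, t.path, t.y
FROM temp_table t
LEFT JOIN final_table f
       ON f.id = t.id AND f.path = t.path AND f.y = t.y
WHERE f.id IS NULL;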
Luke Lonergan wrote:
Yup - interesting and very niche product - it seems like its only obvious
application is for the PostgreSQL WAL problem :-)
Well, you could do it for any journaled system (XFS, JFS, ext3, reiserfs).
But yes, it seems specifically designed for a battery backed journal.
Yup - interesting and very niche product - it seems like its only obvious
application is for the PostgreSQL WAL problem :-)
The real differentiator is the battery backup part. Otherwise, the
filesystem caching is more effective, so put the RAM on the motherboard.
- Luke
I'm a little leery as it is definitely a version 1.0 product (it is
still using an FPGA as the controller, so they were obviously pushing to
get the card into production).
Not necessarily. FPGAs have become a sensible choice now. My RME studio
soundcard uses a big FPGA.
The performance
On Jul 26, 2005, at 12:34 PM, John A Meinel wrote:
Basically, it is a PCI card, which takes standard DDR RAM, and has
a SATA port on it, so that to the system, it looks like a normal
SATA drive.
The card costs about $100-150, and you fill it with your own RAM,
so for a 4GB (max size) disk
On Tue, 2005-07-26 at 11:34 -0500, John A Meinel wrote:
> I saw a review of a relatively inexpensive RAM disk over at
> anandtech.com, the Gigabyte i-RAM
> http://www.anandtech.com/storage/showdoc.aspx?i=2480
>
> Basically, it is a PCI card, which takes standard DDR RAM, and has a
> SATA port on it, so that to the system, it looks like a normal SATA drive.
Dan Harris wrote:
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is already there to prevent multiple
entries. Currently I am doing a SELECT before each INSERT to check for it.
[EMAIL PROTECTED] (John A Meinel) writes:
> I saw a review of a relatively inexpensive RAM disk over at
> anandtech.com, the Gigabyte i-RAM
> http://www.anandtech.com/storage/showdoc.aspx?i=2480
And the review shows that it's not *all* that valuable for many of the
cases they looked at.
> Basically, it is a PCI card, which takes standard DDR RAM, and has a SATA port on it.
I am working on a process that will be inserting tens of millions of rows
and need this to be as quick as possible.
The catch is that for each row I could potentially insert, I need to
look and see if the relationship is already there to prevent
multiple entries. Currently I am doing a SELECT before each INSERT to check for it.
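For context, that per-row check presumably looks something like this
(hypothetical schema and values), i.e. two statements per row:

-- e.g. for one incoming row (42, '/a/b', 7):
SELECT 1 FROM final_table WHERE id = 42 AND path = '/a/b' AND y = 7;
-- ...and only when no row comes back:
INSERT INTO final_table (id, path, y) VALUES (42, '/a/b', 7);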
I saw a review of a relatively inexpensive RAM disk over at
anandtech.com, the Gigabyte i-RAM
http://www.anandtech.com/storage/showdoc.aspx?i=2480
Basically, it is a PCI card, which takes standard DDR RAM, and has a
SATA port on it, so that to the system, it looks like a normal SATA drive.
On Jul 19, 2005, at 3:01 PM, Tom Lane wrote:
You could possibly get some improvement if you can re-use prepared plans
for the queries; but this will require some fooling with the client code
(I'm not sure if DBD::Pg even has support for it at all).
DBD::Pg 1.40+ by default uses server-side prepared statements.
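At the SQL level, the plan reuse in question looks like this (statement
name and columns hypothetical); DBD::Pg 1.40+ does the equivalent
automatically over the protocol:

-- Plan once, execute many times with different parameters.
PREPARE ins (integer, text, integer) AS
    INSERT INTO final_table (id, path, y) VALUES ($1, $2, $3);
EXECUTE ins(1, '/a/b', 42);
EXECUTE ins(2, '/a/c', 43);
DEALLOCATE ins;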
On Jul 26, 2005, at 8:15 AM, Chris Isaacson wrote:
I am using InnoDB with MySQL which appears to enforce true transaction
support. (http://dev.mysql.com/doc/mysql/en/innodb-overview.html) If
not, how is InnoDB "cheating"?
are you sure your tables are innodb?
chances are high that unless you explicitly created them as InnoDB, they
are actually the default MyISAM.
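A quick way to check, for what it's worth (table name hypothetical):

-- MySQL: the Engine column (Type, on pre-4.1 releases) shows InnoDB vs. MyISAM.
SHOW TABLE STATUS LIKE 'your_table';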
I need the chunks for each table COPYed within the same transaction
which is why I'm not COPYing concurrently via multiple
threads/processes. I will experiment without OIDs and with decreasing the
shared_buffers and wal_buffers.
Thanks,
Chris
-Original Message-
From: Gavin Sherry [mailto:[EMAIL PROTECTED]]
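The knobs Chris mentions live in postgresql.conf; the values below are
purely illustrative, not recommendations:

# postgresql.conf (8.0-era units: both settings count 8KB buffers)
shared_buffers = 10000
wal_buffers = 64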
John,
(FYI: got a failed to deliver to [EMAIL PROTECTED])
I do not have any foreign keys, and I need the indexes on during the
insert/copy because in production a few queries that depend heavily on the
indexes will be issued. These queries will be infrequent, but must be
fast when issued.
I am using InnoDB with MySQL, which appears to enforce true transaction support.
I do not have any foreign keys, and I need the indexes on during the
insert/copy because in production a few queries that depend heavily on the
indexes will be issued. These queries will be infrequent, but must be
fast when issued.
I am using InnoDB with MySQL, which appears to enforce true transaction
support.
Hi Chris,
Have you considered breaking the data into multiple chunks and COPYing
each concurrently?
Also, have you ensured that your table isn't storing OIDs?
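A sketch of both suggestions together (table and file names hypothetical);
WITHOUT OIDS avoids the per-row OID that releases of this era add by
default:

CREATE TABLE ticks (id integer, path text, y integer) WITHOUT OIDS;
-- Then, from several sessions in parallel, one chunk per session:
COPY ticks FROM '/data/chunk_01.dat';
COPY ticks FROM '/data/chunk_02.dat';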
On Mon, 25 Jul 2005, Chris Isaacson wrote:
> #---
>
> # RESOURCE USAGE (except WAL)
Tomeh, Husam wrote:
The other question I have: what would be the proper approach to rebuilding
indexes? I run REINDEX and then VACUUM/ANALYZE. Should I not use the
REINDEX approach, and instead drop the indexes, vacuum the tables, then
re-create the indexes and run ANALYZE?
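Both maintenance paths, sketched with hypothetical names; note that
ANALYZE gathers statistics on tables, so there is no separate analyze
pass for indexes:

-- Path 1: rebuild in place, then refresh statistics.
REINDEX TABLE mytable;
VACUUM ANALYZE mytable;
-- Path 2: drop, vacuum, re-create, analyze.
DROP INDEX mytable_col_idx;
VACUUM mytable;
CREATE INDEX mytable_col_idx ON mytable (col);
ANALYZE mytable;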