* Marc Mamin ([EMAIL PROTECTED]) wrote:
> I've just made a first test which resulted in a query 15KB in size and
> containing 63 UNIONs.
If the data is distinct from each other or you don't mind duplicate
records you might try using 'union all' instead of 'union'. Just a
thought.
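For illustration (hypothetical table names), the only difference is the
duplicate-elimination pass:

    -- UNION must sort or hash the combined result to remove duplicates
    SELECT id FROM orders_2004
    UNION
    SELECT id FROM orders_2005;

    -- UNION ALL just concatenates the branch results, no dedup pass
    SELECT id FROM orders_2004
    UNION ALL
    SELECT id FROM orders_2005;

With 63 branches, skipping that sort/unique step can be a substantial win.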
Stephen
* Dan Harris ([EMAIL PROTECTED]) wrote:
> On Jul 13, 2005, at 1:11 PM, John A Meinel wrote:
> >I might be wrong, but there may be something much more substantially
> >wrong than slow i/o.
>
> Yes, I'm afraid of that too. I just don't know what tools I should
> use to figure that out. I have so
* Donald Courtney ([EMAIL PROTECTED]) wrote:
> To be clear -
> I built PostgreSQL 8.1 64-bit on Solaris 10 a few months ago
> and side by side with the 32-bit PostgreSQL build saw no improvement.
> In fact the 64-bit result was slightly lower.
That makes some sense actually. It really depends o
* Ksenia Marasanova ([EMAIL PROTECTED]) wrote:
> Any tips are greatly appreciated.
EXPLAIN ANALYZE of the same queries would be much more useful.
Thanks,
Stephen
* Ksenia Marasanova ([EMAIL PROTECTED]) wrote:
> test=# explain analyze select * from user_ left join church on
> user_.church_id = church.id;
[...]
> Total runtime: 2025.946 ms
> (6 rows)
>
> test=# set enable_seqscan='false';
> SET
> test=# explain analyze select * from user_ left join church o
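A minimal sketch of the usual pattern for this kind of comparison; the
reset at the end matters so the setting doesn't leak into later tests:

    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM user_
        LEFT JOIN church ON user_.church_id = church.id;
    RESET enable_seqscan;

If the index plan turns out to be faster, lowering random_page_cost is
usually a better long-term fix than disabling seqscans.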
* Brandon Black ([EMAIL PROTECTED]) wrote:
> Ideally I'd like to commit the data separately, as the data could contain
> errors which abort the transaction, but it may come down to batching it and
> coding things such that I can catch and discard the offending row and retry
> the transaction if
* Luke Lonergan ([EMAIL PROTECTED]) wrote:
> > Luke, would it help to have one machine read the file and
> > have it connect to postgres on a different machine when doing
> > the copy? (I'm thinking that the first machine may be able to
> > do a lot of the parsing and conversion, leaving the se
* Luke Lonergan ([EMAIL PROTECTED]) wrote:
> On 12/2/05 12:18 PM, "Stephen Frost" <[EMAIL PROTECTED]> wrote:
> > Just a thought, but couldn't psql be made to use the binary mode of
> > libpq and do at least some of the conversion on the client side? Or
>
* Madison Kelly ([EMAIL PROTECTED]) wrote:
> If the performance difference comes from the 'COPY...' command being
> slower because of the automatic quoting can I somehow tell PostgreSQL
> that the data is pre-quoted? Could the performance difference be
> something else?
I doubt the issue is wi
* Bill Moran ([EMAIL PROTECTED]) wrote:
> What I'm fuzzy on is how to discretely know when I'm overflowing
> work_mem? Obviously, if work_mem is exhausted by a particular
> query, temp files will be created and performance will begin to suck,
I don't believe this is necessarily *always* the case.
Dave, et al,
* Dave Cramer ([EMAIL PROTECTED]) wrote:
> Don't the archived logs have to be copied as well as the regular WAL
> logs get recycled ?
Yes, but I'd expect at the point they're being copied off to some other
store (probably a separate disk, or even over the network to another
system,
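A minimal postgresql.conf sketch of that; the destination path here is
just an example:

    archive_command = 'cp %p /mnt/archive/%f'

%p expands to the path of the WAL segment to archive and %f to its file
name. In practice you'd want a command that fails loudly (rsync or a
script with error checking) rather than a bare cp.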
* Luke Lonergan ([EMAIL PROTECTED]) wrote:
> Hi Rajesh,
>
> On 5/29/07 7:18 PM, "Rajesh Kumar Mallah" <[EMAIL PROTECTED]> wrote:
>
> > D1 raid1 D2 raid1 D5 --> MD0
> > D3 raid1 D4 raid1 D6 --> MD1
> > MD0 raid0 MD1 --> MDF (final)
>
> AFAIK you can't RAID1 more than two drives, so the above d
* Peter Childs ([EMAIL PROTECTED]) wrote:
> Good point, also if you had Raid 1 with 3 drives with some bit errors at
> least you can take a vote on what's right. Whereas if you only have 2 and
> they disagree how do you know which is right other than pick one and hope...
> But whatever it will be s
* Tyrrill, Ed ([EMAIL PROTECTED]) wrote:
> Based on all this we will be going with 8.2.4.1, but it seems like
> currently the query planner isn't choosing the best plan for this case.
Was the 'work_mem' set to the same thing on all these runs? Also, you
might try increasing the 'work_mem' under 8
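work_mem can be changed per-session, so this is easy to test without
touching the server-wide setting (the value here is just an example):

    SET work_mem = '256MB';   -- session-local, example value
    SHOW work_mem;            -- confirm what the session is using

Then re-run the problem query under EXPLAIN ANALYZE and see whether the
plan changes, e.g. a sort or hash now staying in memory.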
* Ed Tyrrill ([EMAIL PROTECTED]) wrote:
> Yes, work_mem was set to 128MB for all runs. All settings were the same
> except for the change to default_statistics_target. I'm certainly
> memory constrained, but giving 2GB to one session doesn't allow
> other sessions to do anything. Possibly wh
* Ed Tyrrill ([EMAIL PROTECTED]) wrote:
> It seems to me that the first plan is the optimal one for this case, but
> when the planner has more information about the table it chooses not to
> use it. Do you think that if work_mem were higher it might choose the
> first plan again?
Seems likely to
* Jeff Frost ([EMAIL PROTECTED]) wrote:
> Here are the plans:
It's probably just me but, honestly, I find it terribly frustrating to
try and read a line-wrapped explain-analyze output... I realize it
might not be something you can control in your mailer, but you might
consider putting the various
Corin,
* Corin (wakath...@gmail.com) wrote:
> I'm running quite a large social community website (250k users, 16gb
> database). We are currently preparing a complete relaunch and thinking
> about switching from mysql 5.1.37 innodb to postgresql 8.4.2. The
> database server is a dual dualcore
Corin,
* Corin (wakath...@gmail.com) wrote:
> I fill this table with around 2,800,000 random rows (values between 1
> and 500,000 for user_id, ref_id).
Using random data really isn't a good test.
> The intention of the query is to find rows with no "partner" row. The
> offset and limit are j
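For the "no partner row" case, an anti-join is usually what you want; a
sketch using the column names from above (the table name is made up):

    -- NOT EXISTS form
    SELECT b.user_id
      FROM bookmarks b
     WHERE NOT EXISTS (SELECT 1 FROM bookmarks p
                        WHERE p.ref_id = b.user_id);

    -- equivalent LEFT JOIN / IS NULL form
    SELECT b.user_id
      FROM bookmarks b
      LEFT JOIN bookmarks p ON p.ref_id = b.user_id
     WHERE p.ref_id IS NULL;

As of 8.4 the planner turns the NOT EXISTS form into a proper anti-join.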
* David Jarvis (thanga...@gmail.com) wrote:
> I was hoping to eliminate this part of the query:
> (cast(extract( YEAR FROM m.taken ) + greatest( -1 *
> sign(
> (extract( YEAR FROM m.taken )||'-12-31')::date -
> (extract( YEAR FROM m.taken )||'-01-01')::date
* David Jarvis (thanga...@gmail.com) wrote:
> There are 72 child tables, each having a year index and a station index,
> which are defined as follows:
So, my thoughts:
Partition by something that makes sense... Typically, I'd say that you
would do it by the category id and when the measuremen
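To illustrate, with era-appropriate inheritance partitioning and the
stationid column name borrowed from later in the thread:

    CREATE TABLE measurement_s001 (
        CHECK ( stationid = 1 )
    ) INHERITS ( measurement );

    -- postgresql.conf (the 8.4 default):
    -- constraint_exclusion = partition

With constraint_exclusion enabled, the planner can skip child tables
whose CHECK constraint can't match the query's WHERE clause.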
* David Jarvis (thanga...@gmail.com) wrote:
> I was still referring to the measurement table. You have an index on
> > stationid, but still seem to be getting a sequential scan. Maybe the planner
> > does not realise that you are selecting a small number of stations. Posting
> > an EXPLAIN ANALYSE
* Yeb Havinga (yebhavi...@gmail.com) wrote:
>> Normalizing by date parts was fast. Partitioning the tables by year
>> won't do much good -- users will probably choose 1900 to 2009,
>> predominately.
> Ok, in that case it is a bad idea.
Yeah, now that I understand what the user actually wants,
* Eliot Gable (egable+pgsql-performa...@gmail.com) wrote:
> Would a query such as this obtain any performance improvement by being
> re-written using C?
I wouldn't expect the queries called by the pl/pgsql function to be much
faster if called through SPI from C instead. I think the question you
n
* Eliot Gable (egable+pgsql-performa...@gmail.com) wrote:
> Thanks for the quick follow-up. So, you are saying that if I can do SPI in
> _PG_init, then I could prepare all my queries there and they would be
> prepared once for the entire function when it is loaded? That would
> certainly achieve wh
* Eliot Gable (egable+pgsql-performa...@gmail.com) wrote:
> Since PostgreSQL is written in C, I assume there is no
> such additional overhead. I assume that the PL/PGSQL implementation at its
> heart also uses SPI to perform those executions. Is that a fair statement?
Right, but I also wouldn't ex
* Matthew Wakeling (matt...@flymine.org) wrote:
> The major case I found when writing pl/pgsql was when trying to build
> arrays row by row. AFAIK when I tried it, adding a row to an array caused
> the whole array to be copied, which put a bit of a damper on performance.
Using the built-ins now
Mozzi,
* Mozzi (mozzi.g...@gmail.com) wrote:
> Thanx mate. Create Index seems to be the culprit.
> Is it normal to just use 1 cpu tho?
Yes, PG can only use 1 CPU for a given query or connection. You'll
start to see the other CPUs going when you have more than one connection
to the database. If y
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
> Tom Wilcox wrote:
> > Is it possible to get postgres to make use of the available 96GB
> > RAM on a Windows 32-bit build?
>
> I would try setting shared_buffers to somewhere between 200MB and 1GB
> and set effective_cache_size = 90GB or so.
Jens,
* Jens Hoffrichter (jens.hoffrich...@gmail.com) wrote:
> I'm just curious if there is any way to improve the performance of
> those queries. I'm seeing SeqScans in the EXPLAIN ANALYZE, but nothing
> I have done yet has removed those.
SeqScans aren't necessarily bad. Also, providing your po
Ranga,
* Ranga Gopalan (ranga_gopa...@hotmail.com) wrote:
> It seems that this is an issue faced by others as well - Please see this
> link:
> http://stackoverflow.com/questions/2236776/efficient-querying-of-multi-partition-postgres-table
>
> Is this a known bug? Is this something that someone
* Ivan Voras (ivo...@freebsd.org) wrote:
> Total runtime: 0.507 ms
[...]
> Total runtime: 118.689 ms
>
> See in the first query where I have a simple LIMIT, it fetches random 10
> rows quickly, but in the second one, as soon as I give it to execute and
> calculate the entire result set before I
* Mason Harding (mason.hard...@gmail.com) wrote:
> Hi all. I Have the following query (tested in postgres 8.4 and 9.0rc1)
Can you provide \d output from all the tables involved..?
Also, what does the query plan look like w/o 'enable_seqscan=off' (which
is not a good setting to use...)? Increasi
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> The reason it's sorting by all the columns is the DISTINCT
You might also verify that you actually need/*should* have the DISTINCT,
if it's included today.. Often developers put that in without
understanding why they're getting dups (which can often be due
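For example (hypothetical schema), a one-to-many join papered over with
DISTINCT:

    SELECT DISTINCT c.id, c.name
      FROM customers c
      JOIN orders o ON o.customer_id = c.id;

    -- often what was actually meant, with no dedup sort needed:
    SELECT c.id, c.name
      FROM customers c
     WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);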
* Robert Haas (robertmh...@gmail.com) wrote:
> Hey, I didn't know about those. That sounds like it might be worth
> investigating, though I confess I lack a 48-core machine on which to
> measure the alleged benefit.
I've got a couple 24-core systems, if it'd be sufficiently useful to
test with..
* Robert Haas (robertmh...@gmail.com) wrote:
> It's good to be you.
They're HP BL465 G7's w/ 2x 12-core AMD processors and 48G of RAM.
Unfortunately, they currently only have local storage, but it seems
unlikely that would be an issue for this.
> I don't suppose you could try to replicate the lse
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
> Robert Haas wrote:
> > perhaps it would be possible by, say, increasing the number of
> > lock partitions by 8x. It would be nice to segregate these issues
> > though, because using pread/pwrite is probably a lot less work
> > than rewriting
* Dan Harris (f...@drivefaster.net) wrote:
> On 10/7/10 11:47 AM, Aaron Turner wrote:
>> Basically, each connection is taking about 100MB resident. As we need
>> to increase the number of threads to be able to query all the devices
>> in the 5 minute window, we're running out of memory.
> I think
* Aaron Turner (synfina...@gmail.com) wrote:
> The graphing front end CGI is all SELECT. There's 12k tables today,
> and new tables are created each month.
That's a heck of a lot of tables.. Probably more than you really need.
Not sure if reducing that number would help query times though.
> T
* Aaron Turner (synfina...@gmail.com) wrote:
> Basically, each connection is taking about 100MB resident
Errr.. Given that your shared buffers are around 100M, I think you're
confusing what you see in top with reality. The shared buffers are
visible in every process, but it's all the same actual
* Michael Kohl (michael.k...@tupalo.com) wrote:
> HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1
I'm amazed no one else has mentioned this yet, but you should look into
splitting your data and your WALs. Obviously, having another set of
SSDs to put your WALs on would be ideal.
You should probably also
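Moving the data side is straightforward with tablespaces (paths and
names here are made up):

    CREATE TABLESPACE ssd2 LOCATION '/mnt/ssd2/pgdata';
    ALTER TABLE busy_table SET TABLESPACE ssd2;

The WAL itself isn't a tablespace though; relocating it means moving the
pg_xlog directory to the other device and symlinking it back into the
data directory, with the cluster shut down while you do it.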
Robert,
* Robert Schnabel (schnab...@missouri.edu) wrote:
> Once the bulk data is inserted into the tables I generally
> do some updates on columns to set values which characterize the
> data.
Please tell me you're not running actual full-table UPDATE statements...
You would be *much* better off
Robert,
* Robert Schnabel (schnab...@missouri.edu) wrote:
>Depends on what you mean by that. The tables that I'm concerned with look
>something like bigint x2, char var x13, int x24, real x8, smallint x4 by
>about 65M rows, each. I only do the updates on one table at a time. The
>
* Scott Marlowe (scott.marl...@gmail.com) wrote:
> There's nothing wrong with whole table updates as part of an import
> process, you just have to know to "clean up" after you're done, and
> regular vacuum can't fix this issue, only vacuum full or reindex or
> cluster.
Just to share my experiences
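The pattern I usually reach for instead of a full-table UPDATE during a
load looks like this; f() and raw_cols are stand-ins for whatever
computes and feeds the derived column:

    BEGIN;
    CREATE TABLE big_new AS
        SELECT raw_cols, f(raw_cols) AS derived   -- f()/raw_cols: stand-ins
          FROM big;
    -- recreate indexes on big_new here
    DROP TABLE big;
    ALTER TABLE big_new RENAME TO big;
    COMMIT;

That writes every row exactly once, rather than leaving a dead copy of
the whole table behind for vacuum to deal with.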
* Jeremy Palmer (jpal...@linz.govt.nz) wrote:
> In normal circumstances does locking a table in access exclusive mode improve
> insert, update and delete operation performance on that table?
>
> Is MVCC disabled or somehow has less work to do?
MVCC certainly isn't disabled. Does it have less wo
* Mark Stosberg (m...@summersault.com) wrote:
> Recommendations?
PostGIS, geometry columns, and UTM.. I'm not sure where they are wrt
adding KNN support, but it's something they've been anxious to have for
a while, so I expect support will come quickly.
Thanks,
Stephen
Uwe,
* Uwe Bartels (uwe.bart...@gmail.com) wrote:
> So I checked this again and afterwards raised maintenance_work_mem step
> by step up to 64GB.
> I logged in via psql, run the following statements
> set maintenance_work_mem = '64GB';
I believe maintenance_work_mem suffers from the same problem tha
* Jay Greenfield ([EMAIL PROTECTED]) wrote:
> Database has one table with 1.2 million rows
> Query:
>
> UPDATE ntdn SET gha=area/1
>
> I could post the EXPLAIN ANALYZE results but it's 4,000+ lines long
How do you get 4,000+ lines of explain analyze for one update query in a
database with onl
* Tom Lane ([EMAIL PROTECTED]) wrote:
> Stephen Frost <[EMAIL PROTECTED]> writes:
> > While it's true that Access almost certainly takes some shortcuts, 24
> > minutes for an update across 1.2 million rows seems an awfully long time
> > for Postgres.
>
>
* Anthony Presley ([EMAIL PROTECTED]) wrote:
> I had an interesting discussion today w/ an Enterprise DB developer and
> sales person, and was told, twice, that the 64-bit linux version of
> Enterprise DB (which is based on the 64-bit version of PostgreSQL 8.1)
> is SIGNIFICANTLY SLOWER than the 32
* Steinar H. Gunderson ([EMAIL PROTECTED]) wrote:
> On Mon, Jun 12, 2006 at 10:44:01PM -0400, Tom Lane wrote:
> > (Personally, if I'd designed it, the libraries would actually live in
> > /usr/lib32 and /usr/lib64, and /usr/lib would be a symlink to whichever
> > you needed it to be at the moment.
* Tim Allen ([EMAIL PROTECTED]) wrote:
> The conclusion I'm drawing here is that this SAN does not perform at all
> well, and is not a good database platform. It's sounding from replies
> from other people that this might be a general property of SAN's, or at
> least the ones that are not strato
* John Vincent ([EMAIL PROTECTED]) wrote:
> >> I'd have to agree with you about the specific SAN/setup you're working
> >> with there. I certainly disagree that it's a general property of SAN's
> >> though. We've got a DS4300 with FC controllers and drives, hosts are
> >> generally dual-controlle
* andy rost ([EMAIL PROTECTED]) wrote:
> We're in the process of porting from Informix 9.4 to PostgreSQL 8.1.3.
> Our PostgreSQL server is an AMD Opteron Dual Core 275 with two 2.2 GHz
> 64-bit processors. There are two internal drives and an external
> enclosure containing 14 drives (configured
* Guoping Zhang ([EMAIL PROTECTED]) wrote:
> Obviously, if there is no better solution, the TCP round trip penalty will
> stop us doing so as we do have performance requirement.
Actually, can't you stick multiple inserts into a given 'statement'?
ie: insert into abc values (123); insert into abc values (234);
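(On 8.2 and later there's also the multi-row VALUES form, which is
cleaner than stacking statements:

    INSERT INTO abc VALUES (123), (234);

One statement, one round trip, one parse.)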
* Florian Weimer ([EMAIL PROTECTED]) wrote:
> * Stephen Frost:
> > Actually, can't you stick multiple inserts into a given 'statement'?
> > ie: insert into abc values (123); insert into abc values (234);
>
> IIRC, this breaks with PQexecParams, which is the recommended m
* Alex Turner ([EMAIL PROTECTED]) wrote:
> First off - very few third party tools support debian. Debian is a sure
> fire way to have an unsupported system. Use RedHat or SuSe (flame me all
> you want, it doesn't make it less true).
Yeah, actually, it does make it less true since, well, it's rea
Alvaro,
* Alex Turner ([EMAIL PROTECTED]) wrote:
> The other thing is you will probably want to turn on stats in postgres to
> figure out which queries are the bad ones (does anyone have good docs posted
> for this?). Once you have identified the bad queries, you can explain
> analyze them, and f
* David Lang ([EMAIL PROTECTED]) wrote:
> there's a huge difference between 'works on debian' and 'supported on
> debian'. I do use debian extensively, (along with slackware on my personal
> machines), so I am comfortable getting things to work. but 'supported'
> means that when you run into a pr
* Merlin Moncure ([EMAIL PROTECTED]) wrote:
> explain analyze select 5000!;
> A: 2.4 seconds
> B: 1.8 seconds
>
> explain analyze select count(*) from generate_series(1,50);
> A: 0.85 seconds
> B: 4.94 seconds
Try w/o the explain analyze. It adds quite a bit of overhead and that
might be inc
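psql's \timing gives you the raw execution time without the
instrumentation cost:

    \timing
    SELECT 5000!;
    SELECT count(*) FROM generate_series(1,50);

EXPLAIN ANALYZE calls gettimeofday() around every plan-node invocation,
and the cost of that varies a great deal between platforms, which could
easily account for a difference like this.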
* Bill ([EMAIL PROTECTED]) wrote:
> I currently have a mysql server running with a database of around 800
> gb. The problem is that the server is old (500 MHz Pentium III with 512
> MB RAM) and I want to change this to a new server and convert the
> existing database to Postgresql on Debian (I ass
* Scott Marlowe ([EMAIL PROTECTED]) wrote:
> On Wed, 2004-06-16 at 13:15, Bill wrote:
> > 2. It is advantageous to buy AMD 64 rather than the Pentium IV?
>
> Yes and no. If having more than 2 gigs of ram is important, 64 bit
> architectures run faster than 32 bit, where having over 2 gigs usually
* Tom Lane ([EMAIL PROTECTED]) wrote:
> [... thinks for awhile ...] It seems possible that they may use sort
> code that knows it is performing a DISTINCT operation and discards
> duplicates on sight. Given that there are only 534 distinct values,
> the sort would easily stay in memory if that we
* Tom Lane ([EMAIL PROTECTED]) wrote:
> Stephen Frost <[EMAIL PROTECTED]> writes:
> > * Tom Lane ([EMAIL PROTECTED]) wrote:
> >> [... thinks for awhile ...] It seems possible that they may use sort
> >> code that knows it is performing a DISTINCT operation and
* Stephen Frost ([EMAIL PROTECTED]) wrote:
> systems does in 40 seconds. My only other concern is the Oracle system
> having to do the write I/O while the postgres one doesn't... I don't
> see an obvious way to get around that though, and I'm not sure if it'd
* Markus Schaber ([EMAIL PROTECTED]) wrote:
> Generally, what is the fastest way for doing bulk processing of
> update-if-primary-key-matches-and-insert-otherwise operations?
This is a very good question, and I haven't seen much of an answer to it
yet. I'm curious about the answer myself, actuall
* Richard Rowell ([EMAIL PROTECTED]) wrote:
> I have included an EXPLAIN ANALYZE, relevant table counts, and relevant
> indexing information. If anyone has any suggestions on how to improve
> performance TIA!
Just a thought- do the UNIONs actually have to be UNIONs or would
having them be
* Matt Casters ([EMAIL PROTECTED]) wrote:
> I have the go-ahead of a customer to do some testing on PostgreSQL in a
> couple of weeks as a replacement for Oracle. The reason for the test is
> that the number of users of the warehouse is going to increase and this
> will have a serious impact
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> Is there any solution with PostgreSQL matching these needs ... ?
You might look into pg_pool. Another possibility would be slony, though
I'm not sure it's to the point you need it at yet, depends on if you can
handle some delay before an insert makes
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> On Thursday 20 January 2005 15:30, Stephen Frost wrote:
> > * Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> > > Is there any solution with PostgreSQL matching these needs ... ?
> >
> > You might look into pg_pool. Anot
* Christopher Kings-Lynne ([EMAIL PROTECTED]) wrote:
> PostgreSQL has replication, but not partitioning (which is what you want).
It doesn't have multi-server partitioning.. It's got partitioning
within a single server (doesn't it? I thought it did, I know it was
discussed w/ the guy from Cox Co
* Hervé Piedvache ([EMAIL PROTECTED]) wrote:
> I know they are not using PostgreSQL ... but how a company like Google do to
> get an incredible database in size and so quick access ?
They segment their data across multiple machines and have an algorithm
which tells the application layer which mac
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
> I think maybe a SAN in conjunction with tablespaces might be the answer.
> Still need one honking server.
That's interesting- can a PostgreSQL partition be across multiple
tablespaces?
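(Each child table can at least live in its own tablespace; a sketch with
made-up table and tablespace names, but the TABLESPACE clause on CREATE
TABLE is real:

    CREATE TABLE logs_2004 ( CHECK ( year = 2004 ) )
        INHERITS ( logs ) TABLESPACE ts_a;
    CREATE TABLE logs_2005 ( CHECK ( year = 2005 ) )
        INHERITS ( logs ) TABLESPACE ts_b;

)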
Stephen
Tore,
* Tore Halset ([EMAIL PROTECTED]) wrote:
> All of our existing servers are from Dell, but I want to look at some other
> options as well. We are currently looking at rack boxes with 8 internal SAS
> discs. Two mirrored for OS, Two mirrored for WAL and 4 in raid 10 for the
> base.
I'm a b
* Simon Riggs ([EMAIL PROTECTED]) wrote:
> On Thu, 2007-11-29 at 10:45 -0500, Tom Lane wrote:
> > Given that this list spends all day every day discussing cases where the
> > planner is wrong, I'd have to think that that's a bet I wouldn't take.
>
> I think you have a point, but the alternative is
* Ravi Chemudugunta ([EMAIL PROTECTED]) wrote:
> Which version is faster?
In general I would recommend that you benchmark them using
as-close-to-real load as possible against as-real-as-possible data.
> Does the exception mechanism add any overhead?
Yes, using exceptions adds a fair bit of overhead
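A sketch of why: each exception block is implemented with a
subtransaction, so something like this (made-up table) pays a savepoint
setup/teardown on every call:

    CREATE OR REPLACE FUNCTION insert_row(v integer) RETURNS void AS $$
    BEGIN
        BEGIN
            INSERT INTO t VALUES (v);
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- discard the duplicate; this block forces a subtransaction
        END;
    END;
    $$ LANGUAGE plpgsql;

If most rows are clean, batching without the exception handler and
retrying only the failed batches tends to win.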
* Mikkel Høgh ([EMAIL PROTECTED]) wrote:
> I have been testing it a bit performance-wise, and the numbers are
> worrying. In my test, MySQL (using InnoDB) had a 40% lead in
> performance, but I'm unsure whether this is indicative for PostgreSQL
> performance in general or perhaps a misconfigu
Craig,
* Craig Ringer (cr...@postnewspapers.com.au) wrote:
> I've been doing some testing for the Bacula project, which uses
> PostgreSQL as one of the databases in which it stores backup catalogs.
We also use Bacula with a PostgreSQL backend.
> I've been evaluating a schema change for Bacula th
* Matthew Wakeling (matt...@flymine.org) wrote:
> On Tue, 14 Apr 2009, Stephen Frost wrote:
>> Bacula should be using COPY for the batch data loads, so hopefully won't
>> suffer too much from having the fields split out. I think it would be
>> interesting to try doing
Craig,
* Craig Ringer (cr...@postnewspapers.com.au) wrote:
> In other cases, binary-format COPY would be unsafe without some way to
> determine remote endianness and sizeof(various types).
As Tom mentioned already, the binary protocol is actually pretty well
defined, and it's in network-byte-orde
David,
* da...@lang.hm (da...@lang.hm) wrote:
> I am working with the rsyslog developers to improve its performance in
> inserting log messages to databases.
Great!
> currently they have a postgres interface that works like all the other
> ones, where rsyslog formats an insert statement, pa
Greg,
* Greg Smith (gsm...@gregsmith.com) wrote:
> The win from switching from INSERT to COPY can be pretty big, further
> optimizing to BINARY you'd really need to profile to justify.
Have you done any testing to compare COPY vs. INSERT using prepared
statements? I'd be curious to know how
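Something along these lines is what I had in mind (hypothetical log
table):

    PREPARE ins (timestamptz, text) AS
        INSERT INTO log (ts, msg) VALUES ($1, $2);
    EXECUTE ins (now(), 'message one');
    EXECUTE ins (now(), 'message two');

    -- versus a single COPY for the same rows:
    COPY log (ts, msg) FROM STDIN;

The prepared form saves the parse/plan per row; COPY additionally saves
the per-statement executor startup.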
David,
* da...@lang.hm (da...@lang.hm) wrote:
> the database structure is not being defined by (or specifically for)
> rsyslog. so at compile time we have _no_ idea how many variables of what
> type there are going to be. my example of ($timestamp,$msg) was intended
> to just be a sample (avoi
David,
* da...@lang.hm (da...@lang.hm) wrote:
> any idea what sort of difference binary mode would result in?
It depends a great deal on your application..
> currently rsyslog makes use of its extensive formatting capabilities to
> format a string along the lines of
> $DBformat="insert into t
David,
* da...@lang.hm (da...@lang.hm) wrote:
> I thought that part of the 'efficiency' and 'performance' to be gained
> from binary modes were avoiding the need to parse commands, if it's only
> the savings in converting column contents from text to specific types,
> it's much less importan
* da...@lang.hm (da...@lang.hm) wrote:
> while I fully understand the 'benchmark your situation' need, this isn't
> that simple.
It really is. You know your application, you know its primary use
cases, and probably have some data to play with. You're certainly in a
much better situation to at
David,
* da...@lang.hm (da...@lang.hm) wrote:
> is this as simple as creating a database and doing an explain on each of
> these? or do I need to actually measure the time (at which point the
> specific hardware and tuning settings become an issue again)
No, you need to measure the time. An
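That is, plain EXPLAIN only prints the planner's estimates without
running anything, so it can't tell you which approach is actually
faster:

    EXPLAIN SELECT count(*) FROM generate_series(1,100000);          -- estimates only
    EXPLAIN ANALYZE SELECT count(*) FROM generate_series(1,100000);  -- actually runs it

And even then the numbers are only meaningful on hardware and settings
resembling production.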
* Ben Chobot (be...@silentmedia.com) wrote:
> On Mon, 20 Apr 2009, da...@lang.hm wrote:
>> one huge advantage of putting the sql into the configuration is the
>> ability to work around other users of the database.
>
> +1 on this. We've always found tools much easier to work with when they
> coul
* da...@lang.hm (da...@lang.hm) wrote:
> I think the key thing is that rsyslog today doesn't know anything about
> SQL variables, it just creates a string that the user and the database
> say looks like a SQL statement.
err, what SQL variables? You mean the $NUM stuff? They're just
placeholde
* da...@lang.hm (da...@lang.hm) wrote:
>> Ignoring the fact that this is horrible, horrible non-SQL,
>
> that example is for MySQL, nuff said ;-)
indeed.
> for some reason I was stuck on the idea of the config specifying the
> statement and variables separately, so I wasn't thinking this way, ho
* da...@lang.hm (da...@lang.hm) wrote:
> by the way, now that I understand how you were viewing this, I see why
> you were saying that there would need to be a SQL parser. I was missing
> that headache, by going the direction of having the user specify the
> individual components (which has its
* James Mansion (ja...@mansionfamily.plus.com) wrote:
> da...@lang.hm wrote:
>> on the other hand, when you have a full queue (lots of stuff to
>> insert) is when you need the performance the most. if it's enough of a
>> win on the database side, it could be worth more effort on the
>> applic
David,
* da...@lang.hm (da...@lang.hm) wrote:
> in a recent thread about prepared statements, where it was identified
> that since the planning took place at the time of the prepare you
> sometimes have worse plans than for non-prepared statements, a proposal
> was made to have a 'pre-parsed, b
* James Mansion (ja...@mansionfamily.plus.com) wrote:
> Fine. But like I said, I'd suggest measuring the fractional improvement
> for this when sending multi-row inserts before writing something complex.
> I think the big win will be doing multi-row inserts at all.
You're re-hashing t
Glenn,
* Glenn Maynard (glennfmayn...@gmail.com) wrote:
> This is all well-known, covered information, but perhaps some numbers
> will help drive this home. 4,000 inserts into a single-column,
> unindexed table; with predictable results:
Thanks for doing the work. I had been intending to but ha
* Glenn Maynard (glennfmayn...@gmail.com) wrote:
> >> separate inserts, no transaction: 21.21s
> >> separate inserts, same transaction: 1.89s
> >> 40 inserts, 100 rows/insert: 0.18s
> >> one 4,000-value insert: 0.16s
> >> 40 prepared inserts, 100 rows/insert: 0.15s
> >> COPY (text): 0.10s
> >> COPY
* Glenn Maynard (glennfmayn...@gmail.com) wrote:
> On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost wrote:
> > For a single column table, I wouldn't expect much either. With more
> > columns I think it would be a larger improvement.
>
> Maybe. I'm not sure why pars
* da...@lang.hm (da...@lang.hm) wrote:
> On Wed, 22 Apr 2009, Glenn Maynard wrote:
>> You're talking about round-trips to a *local* server, on the same
>> system, not a dedicated server with network round-trips, right?
>
> the use-case for a production setup for logging servers would probably
> i
* da...@lang.hm (da...@lang.hm) wrote:
> On Wed, 22 Apr 2009, Stephen Frost wrote:
>> Erm.. Prepared queries is about using PQexecPrepared(), not about
>> sending a text string as an SQL EXECUTE(). PQexecPrepared takes an
>> array of arguments. That gets translated into
* Glenn Maynard (glennfmayn...@gmail.com) wrote:
> I'd suggest this be mentioned in the sql-prepare documentation, then,
> because that documentation only discusses using prepared statements to
> eliminate redundant planning costs. (I'm sure it's mentioned in the
> API docs and elsewhere, but if i