On Fri, Apr 01, 2005 at 09:59:44PM -0800, Mischa wrote:
> >
> > select ... where first_date <= today and last_date >= today
> >
> > Whatever index we create, the system always does a sequential scan (which I can
> > understand). Does anyone have a smarter solution?
>
> Yep, standard SQL problem. The
Quoting Mischa <[EMAIL PROTECTED]>:
[deleted]
> SELECT People.* FROM People
> JOIN Widths
> ON People.start = today - today % Widths.width
> AND People.width = Widths.width
Yikes! I hit the SEND button one ohnosecond too fast.
(1) You still ALSO have to test:
... AND today between first_date and last_date
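For reference, the complete trick might look roughly like the sketch below. All names (people, widths, bucket_start) and the alignment epoch are illustrative assumptions, not from the thread; it also assumes each row's width was chosen at insert time as the smallest listed width whose aligned window fully contains [first_date, last_date]:

-- Names and epoch are stand-ins for the poster's actual schema.
CREATE TABLE widths (width int PRIMARY KEY);          -- e.g. 1, 2, 4, ... 1024
CREATE TABLE people (
    id           serial PRIMARY KEY,
    first_date   date NOT NULL,
    last_date    date NOT NULL,
    width        int  NOT NULL,  -- smallest width whose aligned window holds the span
    bucket_start date NOT NULL   -- first_date rounded down to a width boundary
);
CREATE INDEX people_bucket_idx ON people (width, bucket_start);

-- Subscribers active today: one indexable equality probe per candidate width,
-- plus the BETWEEN re-check from point (1) above.
SELECT p.*
FROM people p
JOIN widths w
  ON p.width = w.width
 AND p.bucket_start = current_date - (current_date - date '2000-01-01') % w.width
WHERE current_date BETWEEN p.first_date AND p.last_date;

Since widths holds only a handful of rows, the join turns the sequential scan into a few index probes.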
Quoting "H.J. Sanders" <[EMAIL PROTECTED]>:
>
> Anybody a solution for the next problem:
> people can subscribe to a service for 1 or more days (up to a max. of 365).
> So in the database is stored: first_date and last_date
> To select which people are subscribed for a certain date (e.g. today) we
On Fri, Apr 01, 2005 at 12:05:44PM +0200, H.J. Sanders wrote:
>
> people can subscribe to a service for 1 or more days (up to a max. of 365).
>
> So in the database is stored: first_date and last_date
>
> To select which people are subscribed for a certain date (e.g. today) we use
a select like
select ... where first_date <= today and last_date >= today
> > Well, I just did an insert of 27,500 records with 9 fields, averaging
> > around 118 bytes per record, each insert statement coming from a separate
> > SQL statement fed to psql, and it took a bit over 4 minutes, or about
> > 106 inserts per second.
>
> Is that with a separate transaction for each insert command?
Alex Turner <[EMAIL PROTECTED]> writes:
> On Apr 1, 2005 4:17 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
>> 1250/sec with record size average is 26 bytes
>> 800/sec with record size average is 48 bytes.
>> 250/sec with record size average is 618 bytes.
> Oh - this is with a separate transaction per command.
On Mar 31, 2005, at 9:01 PM, Steve Poe wrote:
Now, we need to purchase a good U320 RAID card now. Any suggestions
for those which run well under Linux?
Not sure if it works with linux, but under FreeBSD 5, the LSI MegaRAID
cards are well supported. You should be able to pick up a 320-2X with
1
Oh - this is with a separate transaction per command.
fsync is on.
Alex Turner
netEconomist
On Apr 1, 2005 4:17 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
> 1250/sec with record size average is 26 bytes
> 800/sec with record size average is 48 bytes.
> 250/sec with record size average is 618 bytes.
1250/sec with record size average is 26 bytes
800/sec with record size average is 48 bytes.
250/sec with record size average is 618 bytes.
Data from pg_stats and our own job monitoring
System has four partitions: two RAID 1s, a four disk RAID 10 and a six
disk RAID 10.
pg_xlog is on the four disk RAID 10.
Christopher Petrilli <[EMAIL PROTECTED]> writes:
> What seems to happen is it slams into a "wall" of some sort, the
> system goes into disk write frenzy (wait=90% CPU), and eventually
> recovers and starts running for a while at a more normal speed.
Checkpoint maybe? If so, tweaking the bgwriter parameters might help.
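For anyone tuning along at home, the knobs in question live in postgresql.conf. Parameter names are as of PostgreSQL 8.0 (current for this thread); every value below is an illustrative assumption, not a recommendation:

# Illustrative values only -- tune against your own workload.
bgwriter_delay = 200        # ms between background-writer rounds
bgwriter_percent = 10       # share of dirty buffers written each round
bgwriter_maxpages = 1000    # upper bound on pages written each round
checkpoint_segments = 30    # fewer, more spread-out checkpoints
checkpoint_timeout = 300    # seconds between forced checkpoints

The idea is to let the background writer trickle dirty buffers out continuously so a checkpoint never has to dump them all at once.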
Mike Nolan <[EMAIL PROTECTED]> writes:
> Well, I just did an insert of 27,500 records with 9 fields, averaging
> around 118 bytes per record, each insert statement coming from a separate
> SQL statement fed to psql, and it took a bit over 4 minutes, or about
> 106 inserts per second.
Is that with a separate transaction for each insert command?
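If it is one transaction per insert, each COMMIT waits for an fsync, which caps throughput near the disk's rotational rate regardless of record size. A rough sketch of the usual fixes (the table t, its columns, and the file path are hypothetical):

BEGIN;
INSERT INTO t (a, b) VALUES (1, 'x');
INSERT INTO t (a, b) VALUES (2, 'y');
-- ... thousands more rows ...
COMMIT;  -- one fsync for the whole batch instead of one per row

-- Or, faster still, bulk-load server-side:
COPY t (a, b) FROM '/path/to/batch.dat';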
On Apr 1, 2005 3:53 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
>
> > What seems to happen is it slams into a "wall" of some sort, the
> > system goes into disk write frenzy (wait=90% CPU), and eventually
> > recovers and starts running for a while at a more normal speed. What
> > I need though, is to not have that wall happen.
> What seems to happen is it slams into a "wall" of some sort, the
> system goes into disk write frenzy (wait=90% CPU), and eventually
> recovers and starts running for a while at a more normal speed. What
> I need though, is to not have that wall happen. It is easier for me
> to accept a consta
On Apr 1, 2005 3:42 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Christopher Petrilli <[EMAIL PROTECTED]> writes:
> > I can start at about 4,000 rows/second, but at about 1M rows, it
> > plummets, and by 4M it's taking 6-15 seconds to insert 1000 rows.
> > That's only about 15 rows/second, which is quite pathetic.
Christopher Petrilli <[EMAIL PROTECTED]> writes:
> I can start at about 4,000 rows/second, but at about 1M rows, it
> plummets, and by 4M it's taking 6-15 seconds to insert 1000 rows.
> That's only about 15 rows/second, which is quite pathetic. The
> problem seems to be related to my indexes, si
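When index maintenance is what dominates a bulk load, one common workaround is to drop the indexes, load, and rebuild once at the end. A sketch with hypothetical names (events, events_ts_idx), not Christopher's actual schema:

DROP INDEX events_ts_idx;
COPY events FROM '/path/to/batch.dat';
CREATE INDEX events_ts_idx ON events (ts);
ANALYZE events;  -- refresh planner statistics after the load

Building an index once over the finished table is far cheaper than updating it for every inserted row, though it only helps if queries can wait until the load completes.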
On Apr 1, 2005 1:06 PM, Marc G. Fournier <[EMAIL PROTECTED]> wrote:
>
> Just curious, but does anyone have an idea of what we are capable of? I
> realize that size of record would affect things, as well as hardware, but
> if anyone has some ideas on max, with 'record size', that would be
> appreciated ...
> Just curious, but does anyone have an idea of what we are capable of? I
> realize that size of record would affect things, as well as hardware, but
> if anyone has some ideas on max, with 'record size', that would be
> appreciated ...
Well, I just did an insert of 27,500 records with 9 fields, averaging
around 118 bytes per record, each insert statement coming from a separate
SQL statement fed to psql, and it took a bit over 4 minutes, or about
106 inserts per second.
Just curious, but does anyone have an idea of what we are capable of? I
realize that size of record would affect things, as well as hardware, but
if anyone has some ideas on max, with 'record size', that would be
appreciated ...
Thanks ...
Marc G. Fournier Hub.Org Networking Services
"Peterson, Bjorn" <[EMAIL PROTECTED]> writes:
>> That's what it's there for ... but it would be useful to look into why
>> the planner gets it so wrong without that hint. Could we see EXPLAIN
>> ANALYZE both ways?
> Below is my query and the output of EXPLAIN - I was not able to run EXPLAIN
> ANALYZE
-----Original Message-----
>From: Tom Lane [mailto:[EMAIL PROTECTED]
>Sent: Friday, April 01, 2005 10:37 AM
>To: Peterson, Bjorn
>Cc: pgsql-performance@postgresql.org
>Subject: Re: [PERFORM] enable_XXX options
>
>"Peterson, Bjorn" <[EMAIL PROTECTED]> writes:
>> I have a query in my application that takes an unreasonable amount of time
>> to complete (>4.5 hours execution time).
"Peterson, Bjorn" <[EMAIL PROTECTED]> writes:
> I have a query in my application that takes an unreasonable amount of time
> to complete (>4.5 hours execution time). After reviewing the EXPLAIN and
> EXPLAIN ANALYZE output for that and similar queries, my colleagues and I
> determined that turning off the enable_nestloop option might help
I have a query in my application that takes an unreasonable
amount of time to complete (>4.5 hours execution time). After
reviewing the EXPLAIN and EXPLAIN ANALYZE output for that and similar queries,
my colleagues and I determined that turning off the enable_nestloop option
might help –
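The setting under discussion can be scoped to a single session so it does not disturb other queries. A minimal sketch; the query shown is a placeholder, not Bjorn's:

SET enable_nestloop = off;   -- discourage nested-loop joins for this session
EXPLAIN ANALYZE
SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id;
RESET enable_nestloop;       -- back to the planner's default

As Tom notes, though, the planner choosing a nested loop badly enough to need this usually points at a row-estimate problem worth diagnosing.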
Hi All,
I have a trigger function that copies data from an input table to a table in
the actual data model. The data model table has a trigger after update on it.
Is the first trigger fired after the copy terminates or after each insert?
Is the second trigger fired after the first trigger is completed?
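The answer depends on how each trigger is declared, since PostgreSQL (7.4 and later) supports both granularities. A sketch with hypothetical table and function names:

CREATE TRIGGER copy_each_row
    AFTER INSERT ON input_table
    FOR EACH ROW EXECUTE PROCEDURE copy_one_row();      -- fires once per inserted row

CREATE TRIGGER copy_when_done
    AFTER INSERT ON input_table
    FOR EACH STATEMENT EXECUTE PROCEDURE finish_copy(); -- fires once per statement

A FOR EACH ROW trigger fires once for every row the copy inserts; a FOR EACH STATEMENT trigger fires once after the whole statement finishes.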
Hello,
What would be reasonable settings for quite heavily used
but not large database?
Database is under 1G in size and fits into the server cache (the server
has 2GB of memory). Two of the most used tables are ~100k rows each
but they get up to 50 inserts/updates/deletes per second.
How to tweak
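As a starting point only (PostgreSQL 8.0 parameter names; every value below is an assumption to be checked against measurement, not a prescription):

shared_buffers = 20000          # ~160MB of 8kB pages; the <1GB database stays hot
effective_cache_size = 100000   # ~800MB; tells the planner the OS caches the rest
work_mem = 8192                 # KB per sort/hash operation
checkpoint_segments = 10        # absorb the write bursts between checkpoints

With that update rate, vacuuming the two hot tables frequently (or running contrib's pg_autovacuum in 8.0) matters at least as much as any memory setting.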
On Apr 1, 2005 4:03 AM, Yudie Pg <[EMAIL PROTECTED]> wrote:
> > You need to look at what else is using RAM on that machine. And maybe buy
> > more.
>
> Ouch.. I had that feeling also. Then how can I know how much memory is
> needed for a certain number of words? And why is counting uncommon words
> fast
Anybody a solution for the next problem:
people can subscribe to a service for 1 or more days (up to a max. of 365).
So in the database is stored: first_date and last_date
To select which people are subscribed for a certain date (e.g. today) we use
a select like
select ... where first_date <= today and last_date >= today
I'd use two of your drives to create a mirrored partition where pg_xlog
resides separate from the actual data.
RAID 10 is probably appropriate for the remaining drives.
Fortunately, you're not using Dell, so you don't have to worry about
the Perc3/Di RAID controller, which is not so compatible with Linux.