It's been a while since I really worked with Postgres, but could you write a
trigger to convert empty strings to NULL on save? You'd have to carefully
apply it everywhere, but it'd get you the behavior where searching for NULL
also finds empty strings. If that is all you do, then you've got it.
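A minimal sketch of such a trigger, with a table and column invented for illustration (here "customers" and its text column "name"); you'd have to wire up one of these for every column and table you care about:

CREATE OR REPLACE FUNCTION empty_to_null() RETURNS trigger AS $$
BEGIN
    -- Hypothetical column; repeat for each text column you want normalized.
    IF NEW.name = '' THEN
        NEW.name := NULL;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customers_empty_to_null
    BEFORE INSERT OR UPDATE ON customers
    FOR EACH ROW EXECUTE PROCEDURE empty_to_null();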
Essentially, there isn't a switch
On Mon, Aug 5, 2013 at 9:22 PM, Josh Berkus wrote:
> Ivan,
>
> > Or, more generally, is there some set of circumstances under which the
> > catastrophic scenario will happen?
>
> Yes:
>
> SELECT * FROM table ORDER BY id DESC LIMIT 10 OFFSET 10
>
> This is the "high offset" problem, and affect
On Wed, May 15, 2013 at 11:52 AM, Heikki Linnakangas <
hlinnakan...@vmware.com> wrote:
> On 15.05.2013 18:31, Shaun Thomas wrote:
>
>> I've seen conversations on this since at least 2005. There were even
>> proposed patches every once in a while, but never any consensus. Anyone
>> care to comment?
On Fri, Apr 5, 2013 at 9:55 AM, Franck Routier wrote:
> Le 04/04/2013 21:08, Tom Lane a écrit :
> Maybe the statistics tables for sandia and saneds are in a bad shape ?
> (don't know how to check this).
>
> Regards,
>
> Franck
>
>
Could this be caused by system table bloat?
Also, can you check h
We had a problem where the actual query planning time blew up and took way
more time than the query execution. We reproduced the problem by forming a
new connection and then just explaining the query. If that takes more than
a couple of milliseconds you likely have the problem. The second plan w
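A hedged sketch of that test in psql (the table and predicate are placeholders; \timing reports the full round trip, which for a bare EXPLAIN is dominated by planning):

-- In a brand-new psql session:
\timing on
EXPLAIN SELECT * FROM some_table WHERE some_column = 42;
-- If this takes more than a few milliseconds, planning itself is the bottleneck.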
I can't really help, but I can make it clearer why Postgres is choosing
a _bitmap_ index scan rather than a regular index scan. With a regular
index scan it walks the index for the locations of the rows it points
to and loads those rows as it finds them. This works great if the rows in
th
It looks like it was a problem with NFS. We're not really sure what was
wrong with it but once we failed over to an iSCSI mount for the data
everything is running just fine.
On Thu, Dec 27, 2012 at 6:12 PM, Tom Lane wrote:
> Nikolas Everett writes:
> > On Thu, Dec 27, 2012 a
On Thu, Dec 27, 2012 at 4:33 PM, Tom Lane wrote:
> Nikolas Everett writes:
> > We straced the backend during the explain and it looked like the open
> > commands were taking several seconds each.
>
> Kind of makes me wonder if you have a whole lot of tables ("wh
One other thing - the query seems to get faster after the first time we
plan it. I'm not sure that this is the case but I think it might be.
On Thu, Dec 27, 2012 at 2:28 PM, Nikolas Everett wrote:
> Sorry for the confusion around the queries. Both queries are causing
> trou
?
Thanks!
Nik
On Thu, Dec 27, 2012 at 2:21 PM, Tom Lane wrote:
> Nikolas Everett writes:
> > We just upgraded from 8.3 to 9.1 and we're seeing some performance
> > problems. When we EXPLAIN ANALYZE our queries the explain result claim
> > that the queries are reasonabl
Actually that last paragraph doesn't make much sense. Please ignore it.
On Thu, Dec 27, 2012 at 12:58 PM, Nikolas Everett wrote:
> New news - the hot slave seems to be performing as expected with no long
> pauses.
>
> It looks like we're using an archive_timeout of
to clear all of the
dirty buffers every 60 seconds. With 48 gigs of shared buffers we could
have quite a few buffers to clear. Is there some place I could check on
how all that is going?
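One place to look (a hedged suggestion; the view has been available since 8.3) is the background writer and checkpoint statistics:

-- Checkpoint and buffer-cleaning activity since the stats were last reset.
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
FROM pg_stat_bgwriter;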
On Thu, Dec 27, 2012 at 12:45 PM, Nikolas Everett wrote:
> p
set to 48 gigs which is comfortably less than
the 146 gigs in the machine.
On Thu, Dec 27, 2012 at 12:21 PM, François Beausoleil
wrote:
>
> Le 2012-12-27 à 12:10, Nikolas Everett a écrit :
>
> > We just upgraded from 8.3 to 9.1 and we're seeing some performance
> problems. W
This probably isn't the right place to ask that question, but you may as well
try `pg_dump -t PATTERN`. See `man pg_dump` for more information on how to
form that pattern.
On Mon, Apr 18, 2011 at 11:05 AM, Sethu Prasad wrote:
> Hi List,
> I am using PostgreSQL 9.0.3 and I have a need to dump only the s
On Wed, Feb 2, 2011 at 3:29 PM, Nikolas Everett wrote:
>
>
> On Wed, Feb 2, 2011 at 2:53 PM, Nikolas Everett wrote:
>
>> Given that such a list would be difficult to maintain, is there some way I
>> can make Postgres spit out the list of what locks are taken?
>>
>
On Wed, Feb 2, 2011 at 2:53 PM, Nikolas Everett wrote:
> Given that such a list would be difficult to maintain, is there some way I
> can make Postgres spit out the list of what locks are taken?
>
> --Nik
>
I just answered my own question -
compile with -DLOCK_DEBUG in your src/
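For what it's worth, once the server is built with -DLOCK_DEBUG, the developer settings it unlocks can be flipped from SQL. A hedged sketch (trace_locks only exists in such builds, and the output goes to the server log):

-- Only available when the server was compiled with -DLOCK_DEBUG.
SET trace_locks = on;
-- Run the statements you care about; lock acquisitions and releases are logged.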
Given that such a list would be difficult to maintain, is there some way I
can make Postgres spit out the list of what locks are taken?
--Nik
On Wed, Feb 2, 2011 at 1:58 PM, Greg Smith wrote:
> Nikolas Everett wrote:
>
>> Is there an exhaustive list of what takes what locks and h
Dear list,
Is there an exhaustive list of what takes what locks and how long they last?
I'm asking because we just had some trouble doing a hot db change to an
8.3.6 system. I know it is an old version but it is what I have to work
with. You can reproduce it like so:
First:
DROP TABLE IF EXIST
DISTINCT ON is working really well! If I need to be able to index something
I might start thinking along those lines.
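For reference, a sketch of the DISTINCT ON form being discussed, written against the test(a, revision) table that appears later in this thread (the exact query is an assumption):

-- Latest revision per value of a, over the same range as the join-based version.
SELECT DISTINCT ON (a) *
FROM test
WHERE a BETWEEN 2 AND 200
ORDER BY a, revision DESC;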
On Fri, Jan 21, 2011 at 12:13 PM, Robert Haas wrote:
> On Fri, Jan 14, 2011 at 8:50 PM, Nikolas Everett
> wrote:
> >
> >
> > On Fri, Jan 14, 2011 at
On Fri, Jan 14, 2011 at 7:59 PM, Kevin Grittner wrote:
> Tom Lane wrote:
>
> > Shaun's example is a bit off
>
> > As for speed, either one might be faster in a particular
> > situation.
>
> After fixing a mistake in my testing and learning from Tom's example
> I generated queries against the OP'
On Fri, Jan 14, 2011 at 5:30 PM, Kevin Grittner wrote:
> SELECT y.*
> from (select a, max(revision) as revision
> from test where a between 2 and 200
> group by a) x
> join test y using (a, revision);
While certainly simpler than my temp table this really just exposes a flaw
I'm using 8.3 and I have a table that contains many revisions of the same
entity, and a query against it that is super slow. Please help! I'm going to
paste in some SQL to set up test cases and some plans below. If that isn't
the right way to post to this list, please let me know and I'll revise.
My
On Wed, Apr 21, 2010 at 11:30 AM, Tom Lane wrote:
> >> On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka wrote:
> >>> ... There are three ways to plan a PreparedStatement:
>
> FWIW, I think there is some consensus to experiment (in the 9.1 cycle)
> with making the server automatically try replanning o
On Wed, Apr 21, 2010 at 10:41 AM, Robert Haas wrote:
> On Tue, Apr 20, 2010 at 5:05 PM, Kris Jurka wrote:
> > The reason this is not done is that the mechanism used for fetching a
> piece
> > of the results at a time can change the query plan used if using a
> > PreparedStatement. There are thr
On Tue, Apr 20, 2010 at 3:29 PM, Dave Crooke wrote:
>
> I wouldn't hold MySQL up to be a particularly good implementation of
> anything, other than speed (MyISAM) and usability (the CLI). I find
> Oracle's JDBC implementation to be both user friendly and (largely) standards
> compliant.
>
Dave,
On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote:
> that thought occurred to me while I was testing this. I ran a vacuumdb -z
> on my database during the load and it didn't impact performance at all.
>
> Incidentally the code is written to work like this :
>
> while (read X lines in file){
> Proc
See http://www.postgresql.org/docs/current/static/storage-page-layout.html for
all of what is taking up the space. Short version:
Per-block overhead is at least 24 bytes.
Per-row overhead is 23 bytes + some alignment loss + the null bitmap if you
have nullable columns.
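A quick way to see the effect on disk (a sketch; the table name is a placeholder):

-- Compare the heap itself with the total including indexes and TOAST.
SELECT pg_size_pretty(pg_relation_size('some_table'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('some_table')) AS total_size;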
On Mon, Mar 29, 2010 at 8:24 AM, r
On Tue, Mar 16, 2010 at 7:24 AM, Pierre C wrote:
>
> I wonder about something, too : if your DB size is smaller than RAM, you
> could in theory set shared_buffers to a size larger than your DB provided
> you still have enough free RAM left for work_mem and OS writes management.
> How does this int
On Tue, Feb 23, 2010 at 6:49 AM, Pierre C wrote:
> Note that's power draw per bit. dram is usually much more densely
>> packed (it can be with fewer transistors per cell) so the individual
>> chips for each may have similar power draws while the dram will be 10
>> times as densely packed as the
The planner knows that that particular date range is quite selective so it
doesn't have to BitmapAnd two indexes together.
The problem is that a prepared statement asks the db to plan the query
without knowing anything about the parameters. I think functions behave in
exactly the same way. Its k
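A sketch of the difference with invented table and column names; the prepared form has to settle on a plan before it can see how selective the range really is:

-- Planned with concrete values: the planner sees a narrow, selective range.
EXPLAIN SELECT * FROM events
WHERE created_at BETWEEN '2010-02-01' AND '2010-02-02';

-- Planned once, before the parameter values are known.
PREPARE recent_events(timestamp, timestamp) AS
    SELECT * FROM events WHERE created_at BETWEEN $1 AND $2;
EXPLAIN EXECUTE recent_events('2010-02-01', '2010-02-02');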
2010/2/5 Wojtek
>
> partitions will typically have from 200k to 300k rows, I have 52
> partitions per year and I'm keeping around 4-5 years of history. However,
> they will query the last 3-4 months most often.
>
Do you mean 12 partitions a year or weekly partitions?
> Most of my queries will ha
On Tue, Feb 2, 2010 at 7:14 PM, Tom Lane wrote:
> "Davor J." writes:
> > Now, if one takes a subquery for "1", the optimizer evaluates it first
> > (let's say to "1"), but then searches for it (sequentially) in every
> > partition, which, for large partitions, can be very time-consuming and
> go
On Wed, Jan 27, 2010 at 9:54 AM, Kevin Grittner wrote:
> It is if you don't have an index on the table which has a foreign
> key defined which references the table in which you're doing
> deletes. The author of the benchmark apparently didn't realize that
> MySQL automatically adds such an index
Joining via a tinyint or something will make your large table smaller, which
is nice. Smaller tables = faster tables.
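A sketch of the idea with invented names (Postgres's smallest integer type is smallint rather than tinyint): move a wide, repetitive column out of the big table and join through a small key.

-- The lookup table holds each distinct label exactly once.
CREATE TABLE status_lookup (
    status_id smallint PRIMARY KEY,
    status    text NOT NULL UNIQUE
);

-- The big table stores only the 2-byte key instead of repeated text.
CREATE TABLE big_table (
    id        bigint PRIMARY KEY,
    status_id smallint NOT NULL REFERENCES status_lookup
);

SELECT b.id, s.status
FROM big_table b
JOIN status_lookup s USING (status_id);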
On Wed, Jan 6, 2010 at 11:21 PM, Radhika S wrote:
> Hi,
> I am going to test this out but would be good to know anyways. A large
> table is joined to a tiny table (8 rows) on a t
This table is totally unnormalized. Normalize it and try again. You'll
probably see a huge speedup. Maybe even 10x. My mantra has always been
less data stored means less data to scan means faster scans.
On Thu, Jan 7, 2010 at 12:57 PM, Gurgel, Flavio wrote:
> - "Matthew Wakeling" escreve
Inheritance would only make sense if each of your categories had more
columns. Say if you had a "wines" category and only it had a year column.
It's probably not worth it for one or two columns, but if you've got a big
crazy heterogeneous tree of stuff then it's probably appropriate.
I'm with Ric
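A sketch of what that inheritance layout might look like (all names invented for illustration):

-- Common columns live in the parent.
CREATE TABLE products (
    product_id serial PRIMARY KEY,
    name       text NOT NULL,
    price      numeric NOT NULL
);

-- Only the wines category carries the extra column.
CREATE TABLE wines (
    vintage_year integer
) INHERITS (products);

-- Selecting from the parent also sees rows inserted into wines.
SELECT name, price FROM products;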
On Fri, Dec 11, 2009 at 3:50 PM, Joshua D. Drake wrote:
> On Fri, 2009-12-11 at 15:43 -0500, Nikolas Everett wrote:
> > Turning fsync off on a dev database is a bad idea? Sure you might
> > kill it and have to start over, but that's kind of the point in a dev
> > database.
Turning fsync off on a dev database is a bad idea? Sure you might kill it
and have to start over, but that's kind of the point in a dev database.
On Fri, Dec 11, 2009 at 12:58 PM, Joshua D. Drake wrote:
> On Thu, 2009-12-10 at 20:38 -0500, Nikolas Everett wrote:
> > In my limited experi
In my limited experience ext4 as presented by Karmic is not db friendly. I
had to carve my swap partition into a swap partition and an xfs partition to
get better db performance. Try fsync=off first, but if that doesn't work
then try a mini xfs.
On Thu, Dec 10, 2009 at 5:09 PM, Andres Freund w
So you've got a query like:
SELECT SUM(CASE WHEN field >= 0 AND field < 10 THEN 1 ELSE 0 END) as
zeroToTen,
SUM(CASE WHEN field >= 10 AND field < 20 THEN 1 ELSE 0 END) as
tenToTwenty,
SUM(CASE WHEN field >= 20 AND field < 30 THEN 1 ELSE 0 END) as
twentyToThirty,
...
FROM b
If my un-word-wrapping is correct, you're running ~90% user CPU. Yikes. Could
you get away with fewer disks for this kind of thing?
On Mon, Oct 5, 2009 at 5:32 PM, Scott Marlowe wrote:
> On Mon, Oct 5, 2009 at 7:30 AM, Nikolas Everett wrote:
> >
> >> But you should plan
> But you should plan on partitioning to multiple db servers up front
> and save pain of conversion later on. A dual socket motherboard with
> 16 to 32 SAS drives and a fast RAID controller is WAY cheaper than a
> similar machine with 4 to 8 sockets is gonna be. And if you gotta go
> there anyway
2009/8/19 Grzegorz Jaśkiewicz
> that seems to be the killer:
>
> and time >= extract ('epoch' from timestamp '2009-08-12')
> and time < extract ('epoch' from timestamp '2009-08-13' )
>
> You probably need an index on time/epoch:
>
> CREATE INDEX foo ON table(extract ('epoch' from timestamp time )
Can you run those two queries with psql?
I remember having some trouble running multiple queries in the same pgAdmin
process. Both would get stuck until both finished, I think. I switched to
running a pgAdmin process per query.
On Tue, Jul 28, 2009 at 9:17 AM, Robert James wrote:
> Thanks for the r
>
> In our application we defer the updates to a separate asynchronous
> process using a simple queue mechanism, but in our case, we found that
> the updates are fast enough (in the order of a few milliseconds) not
> to warrant batching them into single transactions.
>
We do a very similar trick f
Option 1 is somewhere between 2 and 3 times more work for the database
than option 2.
Do you need every sensor update to hit the database? In a situation like
this I'd be tempted to keep the current values in the application itself and
then sweep them all into the database periodically. If
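A sketch of the sweep step (table and column names invented): accumulate readings in the application and flush them as one multi-row INSERT per sweep instead of one statement per sensor update.

-- One statement per sweep, carrying many readings at once.
INSERT INTO sensor_readings (sensor_id, reading, read_at)
VALUES (1, 20.4, now()),
       (2, 19.8, now()),
       (3, 21.1, now());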
The plan ought to be different when there are more scores and the table is
analyzed and your statistics target is high enough. At this point you don't
have enough data to merit doing anything but a seq scan. The overhead is
simply not worth it.
You could try inserting a lot more rows. I'd creat
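One way to generate enough rows for the planner's choice to change (a sketch with made-up names):

-- Bulk-load synthetic scores, then refresh the statistics.
INSERT INTO scores (player_id, score)
SELECT (random() * 1000)::int, (random() * 100)::int
FROM generate_series(1, 100000);

ANALYZE scores;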
On Thu, May 7, 2009 at 11:19 AM, Matthew Wakeling wrote:
> On Thu, 7 May 2009, David Brain wrote:
>
>> Certainly random access like this index scan can be extremely slow. 2-4
>>> MB/s
>>> is quite reasonable if you're fetching one 8kB block per disc seek - no
>>> more
>>> than 200 per second.
>>>
>
> Nested Loop Left Join (cost=0.00..6462463.96 rows=1894 width=110)
> -> Append (cost=0.00..6453365.66 rows=1894 width=118)
> -> Seq Scan on datatable sum (cost=0.00..10.75 rows=1 width=118)
> Filter: ((datapointdate >= '2009-04-01
> 00:00:00'::timestamp without time
>
>If you often do range queries on date, consider partitioning your
> table by date (something like 1 partition per month).
>Of course, if you also often do range queries on something other
> than date, and uncorrelated, forget it.
If you pick your partition to line up with your
The thing to keep in mind is that every update creates a new row version
that has to be indexed for all indexes on the table, not just the indexes on
the column updated. You can test the weight of indexes by copying the table
then trying your query again.
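A sketch of that test (names are placeholders): CREATE TABLE ... AS copies the rows but none of the indexes, so running the same statement against both tables shows roughly what the index maintenance costs.

-- The copy carries the data but no indexes.
CREATE TABLE orders_noidx AS SELECT * FROM orders;

-- Compare the same update against both (EXPLAIN ANALYZE actually runs the update).
EXPLAIN ANALYZE UPDATE orders       SET note = note || '' WHERE order_id < 10000;
EXPLAIN ANALYZE UPDATE orders_noidx SET note = note || '' WHERE order_id < 10000;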
I've heard tell that if you have a table
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> http://wiki.postgresql.org/wiki/Hint_Bits
On Thu, Sep 18, 2008 at 2:13 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> freezing old tuples
hash join was spilling out to temp files
>
Since this was a new table and the writes to the table's disk were very
List,
I'm a bit confused as to why this query writes to the disk:
SELECT count(*)
FROM bigbigtable
WHERE customerid IN (SELECT customerid FROM smallcustomertable)
  AND x != 'special'
  AND y IS NULL
It writes a whole bunch of data to the disk that holds the tablespace where
bigbigtable lives, as we
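The answer a couple of entries up was that the hash join was spilling out to temp files. A hedged way to confirm and address that (the work_mem value is only illustrative):

-- In EXPLAIN ANALYZE output the Hash node reports "Batches: N";
-- anything above 1 means the hash spilled to disk.
EXPLAIN ANALYZE
SELECT count(*)
FROM bigbigtable
WHERE customerid IN (SELECT customerid FROM smallcustomertable)
  AND x != 'special'
  AND y IS NULL;

-- Raising work_mem for the session lets the hash stay in memory if it fits;
-- re-run the EXPLAIN ANALYZE afterwards to compare.
SET work_mem = '256MB';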
On Mon, Sep 8, 2008 at 10:59 AM, Gregory Stark <[EMAIL PROTECTED]>wrote:
>
> That "media recovery" it's referring to sounds like precisely our WAL full
> page writes...
>
>
> --
> Gregory Stark
> EnterpriseDB http://www.enterprisedb.com
> Ask me about EnterpriseDB's PostGIS support!
postgres forces the underlying OS and file
system to do that stuff (sans the multi-host magic) using fsync. Is that
right?
It does look like there are some gotchas with NFS.
On Mon, Sep 8, 2008 at 10:16 AM, Gregory Stark <[EMAIL PROTECTED]>wrote:
>
> "Nikolas Everett" <
On Sat, Sep 6, 2008 at 3:46 PM, Bruce Momjian <[EMAIL PROTECTED]> wrote:
> Nikolas Everett wrote:
> > I seem to have answered my own question. I'm sending the answer to the
> list
> > in case someone else has the same question one day.
> >
> > According to
I seem to have answered my own question. I'm sending the answer to the list
in case someone else has the same question one day.
According to the NetApp documentation, it does protect me from partial page
writes. Thus, full_page_writes = off.
On Wed, Sep 3, 2008 at 12:03 PM, Nikolas Ev
I have the honor of configuring Postgres backed by a NetApp FAS3020
via fiber.
Does anyone know if the SAN protects me from breakage due to partial page
writes?
If anyone has any SAN-specific Postgres knowledge, I'd love to hear your
words of wisdom.
For reference:
[EMAIL PROTECTED] bonnie]
The only thing that's bitten me about Hibernate + Postgres is that when
inserting into partitioned tables, Postgres does not reply with the number
of rows that hibernate expected. My current (not great) solution is to
define a specific SQLInsert annotation and tell it not to do any checking
like so
ou find a better solution please let me know.
>
>
> Neil Peter Braggio
> [EMAIL PROTECTED]
>
>
> On Tue, May 13, 2008 at 11:48 AM, Nikolas Everett <[EMAIL PROTECTED]>
> wrote:
> > I figure this subject belongs on the performance mailing list because it
> is
>
I figure this subject belongs on the performance mailing list because it is
about partitioning, which is a performance issue.
I'm working on partitioning some of the tables used by an application that
uses OpenJPA. It turns out that OpenJPA is sensitive to the numbers
returned when you do an inse
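For context, a sketch of why the reported row count ends up as 0 with trigger-based partitioning (names invented; this follows the trigger-based partitioning pattern from the Postgres docs): the BEFORE trigger redirects the row into a child table and returns NULL, so as far as the server is concerned nothing was inserted into the parent, and the command tag reads "INSERT 0 0".

-- Parent table and one child partition.
CREATE TABLE measurements (logdate date NOT NULL, value int);
CREATE TABLE measurements_2008 (
    CHECK (logdate >= DATE '2008-01-01' AND logdate < DATE '2009-01-01')
) INHERITS (measurements);

CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO measurements_2008 VALUES (NEW.*);
    RETURN NULL;  -- suppresses the insert into the parent, hence the 0 row count
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trigger
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();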
On Thu, Apr 24, 2008 at 12:56 PM, PFC <[EMAIL PROTECTED]> wrote:
>
> Our ~600,000,000
>> row table is changed very infrequently and is on a 12 disk software raid-6
>> for historical reasons using an LSI Logic / Symbios Logic SAS1068 PCI-X
>> Fusion-MPT SAS Our ~50,000,000 row staging table is o
I have a question about index us and IO and am seeking advice.
We are running postgres 8.2. We have two big big tables. Our ~600,000,000
row table is changed very infrequently and is on a 12 disk software raid-6
for historical reasons using an LSI Logic / Symbios Logic SAS1068 PCI-X
Fusion-MPT
The thing to remember here is that prepared statements are only planned once
and straight queries are planned for each query.
When you give the query planner some concrete input like in your example
then it will happily use the index because it can check if the input starts
with % or _. If you use
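A sketch of the contrast (table, column, and pattern invented; it also assumes an index that supports LIKE prefix matching, e.g. one built with text_pattern_ops under a non-C locale):

-- Literal pattern: the planner can see it starts with 'abc', so an index range scan is possible.
EXPLAIN SELECT * FROM items WHERE name LIKE 'abc%';

-- Prepared form: the pattern is unknown at plan time, so the planner cannot assume
-- it is free of leading wildcards and tends to fall back to a seq scan.
PREPARE find_items(text) AS SELECT * FROM items WHERE name LIKE $1;
EXPLAIN EXECUTE find_items('abc%');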
Is there a way I can change where Postgres writes its temporary files? My
data directory is on a slow array, but we also have a fast array. I'm
looking to get all the temp file creation onto the fast array.
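A hedged sketch of one way to do that on 8.3 and later (the path is a placeholder): create a tablespace on the fast array and point temp_tablespaces at it.

-- The directory must already exist and be owned by the postgres OS user.
CREATE TABLESPACE fast_temp LOCATION '/mnt/fast_array/pg_temp';

-- Per session:
SET temp_tablespaces = 'fast_temp';
-- Or for everyone, set temp_tablespaces = 'fast_temp' in postgresql.conf.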