Setting join_collapse_limit=1 improves my performance dramatically.
Even on a query with only 3 tables.
This surprised me: since only 3 tables are being joined, I would
have assumed that the optimizer would have done the exhaustive search
rather than falling back to GEQO - and that this exhaustive
Tom Lane wrote:
[EMAIL PROTECTED] writes:
select *
from streetname_lookup as sl
join city_lookup as cl on (true)
left outer join tlid_smaller as ts on (sl.geo_streetname_id =
ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)
where str_name='alamo' and city='san antonio' and
Ron Mayer wrote:
Tom Lane wrote:
[EMAIL PROTECTED] writes:
select *
from streetname_lookup as sl
join city_lookup as cl on (true)
left outer join tlid_smaller as ts on (sl.geo_streetname_id =
ts.geo_streetname_id and cl.geo_city_id=ts.geo_city_id)
where str_name='
Rod Taylor wrote:
> On Fri, 2005-04-15 at 15:43 -0500, Matthew Nuzum wrote:
>> * I agree with the threads that more disks are better.
>> * I also agree that SCSI is better, but can be hard to justify
Here's another approach to spend $7000 that we're currently
trying but it'll only work for cert
If I have a freshly CLUSTERed table and queries that want to do a
merge join, it seems to me that quite a bit of time is spent
unnecessarily sorting the already-sorted table. An example such
query I found in my log files is shown below. If I read the
EXPLAIN ANALYZE output correctly, it's saying th
Josh Berkus wrote: [quoted out of order]
Ron,
Looking at your analyze, though, I think it's not the sort that's taking the
time as it is that the full sorted entity_id column won't fit in work_mem.
Try increasing it?
Yup, that indeed fixed this particular query since neither table was
particular
Christopher Browne wrote:
There is a common "use case" where MySQL(tm) ...
select * from some_table where id='some primary key value';
If your usage patterns differ from that...
However this is a quite common use-case; and I wonder what the
best practice for postgresql is for applications
t no one
really uses those.
Ron Mayer
When I need to insert a few hundred or thousand things in
a table from a 3-tier application, it seems I'm much better
off creating a big string of semicolon separated insert
statements rather than sending them one at a time - even
when I use the obvious things like wrapping the statements
in a tra
William Yu wrote:
Our SCSI drives have failed maybe a little less than our IDE drives.
Microsoft, in their TerraServer database showcase project, has
had the same experience. They studied multiple configurations
including a SCSI/SAN solution as well as a cluster of SATA boxes.
They measured a
Christopher Kings-Lynne wrote:
Quite seriously, if you're still using 7.2.4 for production purposes
you could justifiably be accused of negligence
Perhaps we should put a link on the home page underneath LATEST RELEASEs
saying
7.2: de-supported
with a link to a scary note along the lin
Tom Lane wrote:
[EMAIL PROTECTED] writes:
On Mon, 5 Dec 2005, Tom Lane wrote:
I speculate that the seq_scan wasn't really the slow part
compared to not using both parts of the index in the
second part of the plan. The table point_features has tens of
thousands of rows, while the table
Tom Lane wrote:
...planner is actually going to choose based on the ultimate join cost,
not on the subplan costs...
In this explanation, the reason for the change in plans over time could
be a change in the statistics for the other table. Is "facets" more
dynamic than "point_features"?
In to
Tom Lane wrote:
If you have background tasks doing ANALYZEs then this explanation seems
plausible enough. I'm willing to accept it anyway ...
Yup, there are such tasks. I could dig through logs to try to confirm
or reject it; but I think it's reasonably likely that this happened.
Basically,
Short summary:
* Papers studying priority inversion issues with
databases including PostgreSQL and realistic workloads
conclude setpriority() helps even in the presence of
priority inversion issues for TPC-C and TPC-W like
workloads.
* Avoiding priority inversion with priority in
Before asking them to remove it, are we sure priority inversion
is really a problem?
I thought this paper: http://www.cs.cmu.edu/~bianca/icde04.pdf
did a pretty good job of studying priority inversion on RDBMSs
including PostgreSQL on various workloads (TPC-W and TPC-C) and
found that the benefit
Mark Kirkwood wrote:
> Ron Mayer wrote:
>> Short summary:
>> * Papers studying priority inversion issues with
>> databases including PostgreSQL and realistic workloads
>> conclude setpriority() helps even in the presence of
>> priority inversio
Brian Hurt wrote:
> Mark Lewis wrote:
>> On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:
>>
>>> I have the same question. I've done some embedded real-time
>>> programming, so my innate reaction to priority inversions is that
>>> they're evil. But, especially given priority inheritance,
Brian Hurt wrote:
> Ron Mayer wrote:
>> Brian Hurt wrote:
>>> Mark Lewis wrote:
>>>> On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:
>>>>> But, especially given priority inheritance, is there any
>
> That second paper is interes
Bruno Wolff III wrote:
> On Thu, Dec 14, 2006 at 01:39:00 -0500,
> Jim Nasby <[EMAIL PROTECTED]> wrote:
>> On Dec 11, 2006, at 12:54 PM, Bruno Wolff III wrote:
>>> This appears to be changing under Linux. Recent kernels have write
>>> barriers implemented using cache flush commands (which
>>>
Xiaoning Ding wrote:
> Postgresql is 7.3.18. [...]
> 1 process takes 0.65 second to finish.
> I update PG to 8.2.3. The results are [...] now.
> 1 process: 0.94 seconds
You sure about your test environment? Anything else
running at the same time, perhaps?
I'm a bit surprised that 8.2.3 would
[EMAIL PROTECTED] wrote:
> 8*73GB SCSI 15k ...(dell poweredge 2900)...
> 24*320GB SATA II 7.2k ...(generic vendor)...
>
> raid10. Our main requirement is highest TPS (focused on a lot of INSERTS).
> Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?
It's worth asking the vendor
Craig A. James wrote:
> Merlin Moncure wrote:
>> Using surrogate keys is dangerous and can lead to very bad design
>> habits that are unfortunately so prevalent in the software industry
>> they are virtually taught in schools. ... While there is
>> nothing wrong with them in principle (you are ex
Dan Harris wrote:
> Daniel Haensse wrote:
>> Has anybody a nice
>> solution to change process priority? A shell script, maybe even for java?
One way is to write a stored procedure that sets its own priority.
An example is here:
http://weblog.bignerdranch.com/?p=11
> While this may technically wo
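The idea can be sketched in ordinary Python (a hypothetical helper, not the code from the linked post; a PL/Python function could call the same OS API from inside the backend):

```python
import os

def lower_my_priority(delta=5):
    """Raise this process's nice value, i.e. lower its CPU priority.

    Unprivileged processes may only increase niceness, so this is a
    one-way adjustment; restoring the old priority requires root.
    """
    current = os.getpriority(os.PRIO_PROCESS, 0)  # 0 = calling process
    target = min(19, current + delta)             # 19 is the weakest priority
    os.setpriority(os.PRIO_PROCESS, 0, target)
    return target
```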
Andrew Sullivan wrote:
> On Thu, May 10, 2007 at 05:10:56PM -0700, Ron Mayer wrote:
>> One way is to write a stored procedure that sets its own priority.
>> An example is here:
>> http://weblog.bignerdranch.com/?p=11
>
> Do you have evidence to show this will
Greg Smith wrote:
>
> Count me on the side that agrees adjusting the vacuuming parameters is
> the more straightforward way to cope with this problem.
Agreed for vacuum; but it still seems interesting to me that
across databases and workloads high priority transactions
tended to get through fast
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>> Greg Smith wrote:
>>> Count me on the side that agrees adjusting the vacuuming parameters is
>>> the more straightforward way to cope with this problem.
>
>> Agreed for vacuum; but it still seems
Greg Smith wrote:
>
> Let's break this down into individual parts:
Great summary.
> 4) Is vacuuming a challenging I/O demand? Quite.
>
> Add all this up, and that fact that you're satisfied with how nice has
> worked successfully for you doesn't have to conflict with an opinion
> that it's not
Seems Linux has IO scheduling through a program called ionice.
Has anyone here experimented with using it rather than
vacuum sleep settings?
http://linux.die.net/man/1/ionice
This program sets the io scheduling class and priority
for a program. As of this writing, Linux supports 3 schedulin
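As a small illustration of how one might invoke it (hypothetical wrapper; class 3 is the "idle" class described in the man page above):

```python
def ionice_cmd(cmd, io_class=3):
    """Prefix a command line with ionice so it runs in the given I/O
    scheduling class (3 = idle: the process only gets disk time when
    no other program has asked for it)."""
    return ["ionice", "-c", str(io_class)] + list(cmd)

# e.g. subprocess.run(ionice_cmd(["vacuumdb", "--all"]))
```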
Jay Kang wrote:
> Hello,
>
> I'm currently trying to decide on a database design for tags in my web
> 2.0 application. The problem I'm facing is that I have 3 separate tables
> i.e. cars, planes, and schools. All three tables need to interact with
> the tags, so there will only be one universal se
I notice that I get different plans when I run the
following two queries that I thought would be
identical.
select distinct test_col from mytable;
select test_col from mytable group by test_col;
Any reason why it favors one in one case but not the other?
d=# explain analyze select distinct
Trevor Talbot wrote:
>
> Lack of reliability compared to _UFS_? Can you elaborate on this?
What elaboration's needed? UFS seems to have one of the longest
histories of support from major vendors of any file system supported
on any OS (Solaris, HP-UX, SVR4, Tru64 Unix all use it).
Can you elab
Csaba Nagy wrote:
>
> Well, my problem was actually solved by rising the statistics target,
Would it do more benefit than harm if postgres increased the
default_statistics_target?
I see a fair number of people (myself included) asking questions whose
resolution was to ALTER TABLE SET STATISTICS;
Heikki Linnakangas wrote:
> Peter Schuller wrote:
>> to have a slow background process (similar to normal non-full vacuums
> ...
> I think it's doable, if you take a copy of the tuple, and set the ctid
> pointer on the old one like an UPDATE, and wait until the old tuple is
> no longer visible to
Greg Smith wrote:
> Bruce Momjian wrote:
>> I always assumed SCSI disks had a write-through cache and therefore
>> didn't need a drive cache flush command.
Some do. Some SCSI disks have write-back caches.
Some have both(!) - a write-back cache but the user can explicitly
send write-through requests.
Jon Schewe wrote:
> OK, so if I want the 15 minute speed, I need to give up safety (OK in
> this case as this is just research testing), or see if I can tune
> postgres better.
Depending on your app, one more possibility would be to see if you
can re-factor the application so it can do multiple w
Kevin Grittner wrote:
>
> ...Sybase named caches...segment off portions of the memory for
> specific caches... bind specific database
> objects (tables and indexes) to specific caches. ...
>
> When I posted to the list about it, the response was that LRU
> eviction was superior to any tuning an
Tom Lane wrote:
That guy doesn't actually have the foggiest idea what he's doing.
The reason there is no built-in capability to do that is that it *does
not work well*. Search the list archives for "priority inversion" to
find out why not.
I agree that that particular author seems clueless, bu
Combining the "insert" statements into a big concatenated string
joined by semicolons - rather than sending each individually -
can drastically speed up your inserts, making them much closer
to the speed of COPY.
For example, instead of sending them separately, it's much faster
to send a single strin
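A rough sketch of building such a string on the client side (hypothetical table and helper names; real code should prefer the driver's own escaping or parameter batching):

```python
def batch_insert_sql(table, columns, rows):
    """Build one semicolon-joined string of INSERT statements.

    The minimal literal quoting here is purely for illustration; a
    real client should use its driver's escaping facilities.
    """
    def lit(v):
        if v is None:
            return "NULL"
        if isinstance(v, (int, float)):
            return str(v)
        return "'" + str(v).replace("'", "''") + "'"

    cols = ", ".join(columns)
    stmts = (
        f"INSERT INTO {table} ({cols}) VALUES ({', '.join(lit(v) for v in row)})"
        for row in rows
    )
    return ";\n".join(stmts) + ";"
```

The whole string is then sent in a single round trip instead of one network exchange per row.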
Luke Lonergan wrote:
>
> I think the topic is similar to "cache bypass", used in cache capable vector
> processors (Cray, Convex, Multiflow, etc) in the 90's. When you are
> scanning through something larger than the cache, it should be marked
> "non-cacheable" and bypass caching altogether. Thi
Jim C. Nasby wrote:
>
> Index scans are also pretty picky about correlation. If you have really
> low correlation you don't want to index scan,
I'm still don't think "correlation" is the right metric
at all for making this decision.
If you have a list of addresses clustered by "zip"
the "correla
Jean-David Beyer wrote:
>
> Sure, some even read the entire cylinder. But unless the data are stored
> contiguously, this does little good. The Linux ext2 and ext3 file systems
> try to get more contiguity by allocating (IIRC) 8 blocks each time a write
> needs space
From where do you recall thi
toby wrote:
>
> That's not quite what I meant by "trust". Some drives lie about the
> flush.
Is that really true, or a misdiagnosed software bug?
I know many _drivers_ lie about flushing - for example EXT3
on Linux before early 2005 "did not have write barrier support
that issues the FLUSH CACH
Tom Lane wrote:
> "Craig A. James" <[EMAIL PROTECTED]> writes:
>> Here's something I found googling for "memory overcommitment"+linux
>> http://archives.neohapsis.com/archives/postfix/2000-04/0512.html
>
> That might have been right when it was written (note the reference to a
> 2.2 Linux kernel
On Fri, 23 Apr 2004, Manfred Koizar wrote:
>
> Setting shared_buffers to half your available memory is the worst thing
> you can do. You would end up caching exactly the same set of blocks in
> the internal buffers and in the OS cache, thus effectively making one of
> the caches useless.
One min
Any advice for settings for extremely IO constrained systems?
A demo I've set up for sales seems to be spending much of its time in
disk wait states.
The particular system I'm working with is:
Ext3 on Debian inside Microsoft VirtualPC on NTFS
on WindowsXP on laptops of our sales team.
Some
Richard Huxton wrote:
If you've got a web-application then you'll probably want to insert the
results into a cache table for later use.
If I have quite a bit of activity like this (people selecting 1 out
of a few million rows and paging through them in a web browser), would
it be good to have
Randolf Richardson wrote:
While this doesn't exactly answer your question, I use this little
tidbit of information when "selling" people on PostgreSQL. PostgreSQL
was chosen over Oracle as the database to handle all of the .org TLDs
information. ...
Do you have a link for that informatio
Ron Mayer wrote:
http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-2002-53
Wrong link...
http://research.microsoft.com/research/pubs/view.aspx?type=Technical%20Report&id=812
This is the one that discusses scalability, price, performance,
failover, power consumption, hard
Merlin Moncure wrote:
...You need to build a bigger, faster box with lots of storage...
Clustering ...
B: will cost you more, not less
Is this still true when you get to 5-way or 17-way systems?
My (somewhat outdated) impression is that up to about 4-way systems
they're price competitive; but bey
I sometimes also think it's fun to point out that PostgreSQL has
bigger companies supporting its software - like this one:
http://www.fastware.com.au/docs/FujitsuSupportedPostgreSQLWhitePaper.pdf
with $43 billion revenue -- instead of those little companies
like Mysql AB or Oracle.
:)
Short summary... the second query runs faster, and I think
they should be identical queries. Should the optimizer
have found this optimization?
I have two identical (or so I believe) queries; one where I
explicitly add a "is not null" comparison; and one where I
think it would implicitly onl
Tom Lane wrote:
Great Bridge did essentially that years ago, but I think we only got
away with it because we didn't say which DBs "Commercial Database A"
and "Commercial Database B" actually were. Even off the record, we
were only allowed to tell people that the commercial DBs were Oracle
and SQL
Josh Berkus wrote:
Now you can see why other DBMSs don't use the OS disk cache. ...
...as long as we use the OS disk cache, we can't
eliminate checkpoint spikes, at least on Linux.
Wouldn't the VM settings like the ones under /proc/sys/vm
and/or the commit=XXX mount option if using ext3 be a go
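For example, the knobs being referred to look like this (illustrative values and a hypothetical device path, not recommendations):

```
# /etc/sysctl.conf -- start background writeback early and cap dirty pages
vm.dirty_background_ratio = 1
vm.dirty_ratio = 10

# /etc/fstab (ext3): journal commit interval in seconds
/dev/sda1  /var/lib/postgresql  ext3  data=ordered,commit=5  0  2
```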
Merlin Moncure wrote:
readv and writev are in the single unix spec...and yes ...
On some systems they might just be implemented as a loop inside the
library, or even as a macro.
You sure?
Requirements like this:
http://www.opengroup.org/onlinepubs/007908799/xsh/write.html
"Write requests of {PIPE
I have a query that runs quite quickly using a hash join when run
standalone.
When I use this query as a subquery the planner always seems to
pick a different plan with an order of magnitude worse performance.
This bad plan is chosen even when the outer sql statement is
a trivial expression like
Tom Lane wrote:
And you can't just dismiss the issue of wrong cost models and say we can
get numbers anyway.
Is there a way to see more details about the cost estimates?
EXPLAIN ANALYZE seems to show the total time and rows; but not
information like how many disk pages were accessed.
I get the feel
I'm guessing your data is actually more "clustered" than the
"correlation" statistic thinks it is.
Alex Turner wrote:
> trendmls=# explain analyze select listnum from propmain where
> listprice<=30 and listprice>=20;
Is that a database of properties like land/houses?
If
Bill Moran wrote:
> On Fri, 9 Nov 2007 11:11:18 -0500 (EST)
> Greg Smith <[EMAIL PROTECTED]> wrote:
>> On Fri, 9 Nov 2007, Sebastian Hennebrueder wrote:
>>> If the queries are complex, this is understandable.
>> The queries used for this comparison are trivial. There's only one table
>> involved and
Tom Lane wrote:
> Michael Stone <[EMAIL PROTECTED]> writes:
>> OTOH, the planner can really screw up queries on really large databases.
>> ... I've got some queries that the
>> planner thinks will return on the order of 10^30 rows for that sort of
>> reason. In practice, the query may return 10^
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>
>> Also shown below it seems that if I use "OFFSET 0" as a "hint"
>> I can force a much (10x) better plan. I wonder if there's room for
>> a pgfoundry project for a patch set tha
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> ...given that that plan has a lower cost estimate, it
>>> should've picked it without any artificial constraints.
>
>>I think the reason it's not picking it was discu
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> There's something fishy about this --- given that that plan has a lower
>>> cost estimate, it should've picked it without any artificial
>>> constraints.
One final t
Joshua D. Drake wrote:
> Actually this is not true. Although I have yet to test 8.3. It is
> pretty much common knowledge that after 8 cores the acceleration of
> performance drops with PostgreSQL...
>
> This has gotten better every release. 8.1 for example handles 8 cores
> very well, 8.0 didn't
Greg Smith wrote:
On Sat, 1 Mar 2008, Steve Poe wrote:
SATA over SCSI?
I've collected up many of the past list comments on this subject and put
a summary at
http://www.postgresqldocs.org/index.php/SCSI_vs._IDE/SATA_Disks
Should this section:
ATA Disks... Always default to the write cache
Tom Lane wrote:
Bruce Momjian <[EMAIL PROTECTED]> writes:
I have _not_ added a TODO for this item. Let me know if one is needed.
Please do, I think it's an open issue.
* Consider Cartesian joins when both relations are needed to form an
indexscan qualification for a third relation
Woul
Tom Lane wrote:
Ron Mayer <[EMAIL PROTECTED]> writes:
Would another possible condition for considering
Cartesian joins be:
* Consider Cartesian joins when a unique constraint can prove
that at most one row will be pulled from one of the tables
that would be part of thi
Ron Mayer wrote:
Tom Lane wrote:
Ron Mayer <[EMAIL PROTECTED]> writes:
Would another possible condition for considering
Cartesian joins be:
* Consider Cartesian joins when a unique constraint can prove
that at most one row will be pulled from one of the tables
that wo
Matthew Wakeling wrote:
On Thu, 15 May 2008, Luke Lonergan wrote:
...HINT bit optimization, but avoids this whole "write the data,
write it to the log also, then write it again just for good measure"
...
The hint data will be four bits per tuple plus overheads, so it could be
made very compact
Greg Smith wrote:
some write cache in the SATA disks...Since all non-battery backed caches
need to get turned off for reliable database use, you might want to
double-check that on the controller that's driving the SATA disks.
Is this really true?
Doesn't the ATA "FLUSH CACHE" command (say,
Scott Carey wrote:
Some SATA drives were known to not flush their cache when told to.
Can you name one? The ATA commands seem pretty clear on the matter,
and ISTM most of the reports of these issues came from before
Linux had write-barrier support.
I've yet to hear of a drive with the problem
Scott Marlowe wrote:
I can attest to the 2.4 kernel not being able to guarantee fsync on
IDE drives.
Sure. But note that it won't for SCSI either, since AFAICT the write
barrier support was implemented at the same time for both.
Scott Marlowe wrote:
On Tue, Aug 12, 2008 at 10:28 PM, Ron Mayer ...wrote:
Scott Marlowe wrote:
I can attest to the 2.4 kernel ...
...SCSI...AFAICT the write barrier support...
Tested both by pulling the power plug. The SCSI was pulled 10 times
while running 600 or so concurrent pgbench
Greg Smith wrote:
The below disk writes impossibly fast when I issue a sequence of fsync
'k. I've got some homework. I'll be trying to reproduce similar
with md raid, old IDE drives, etc to see if I can reproduce them.
I assume test_fsync in the postgres source distribution is
a decent way to
Scott Marlowe wrote:
IDE came up corrupted every single time.
Greg Smith wrote:
you've drank the kool-aid ... completely
ridiculous ...unsafe fsync ... md0 RAID-1
array (aren't there issues with md and the barriers?)
Alright - I'll eat my words. Or mostly.
I still haven't found IDE drives
Greg Smith wrote:
On Wed, 13 Aug 2008, Ron Mayer wrote:
Second of all - ext3 fsync() appears to me to be *extremely* stupid.
It only seems to do the correct flushing (and waiting) for a
drive's cache when a file's inode has changed.
This is bad, b
Scott Carey wrote:
For reads, if your shared_buffers is large enough, your heavily used
indexes won't likely go to disk much at all.
ISTM this would happen regardless of your shared_buffers setting.
If you have enough memory the OS should cache the frequently used
pages regardless of shared_buf
Tom Lane wrote:
Scott Carey <[EMAIL PROTECTED]> writes:
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU eff
M. Edward (Ed) Borasky wrote:
> At the CMG meeting I asked the disk drive engineers, "well, if the
> drives are doing the scheduling, why does Linux go to all the trouble?"
> Their answer was something like, "smart disk drives are a relatively
> recent invention." But
One more reason?
I imagine th
Short summary:
* extremely slow intarray indexes with gist__int_ops
* gist__intbig_ops seems much faster even with short arrays
* gin seems much much faster for inserts and creation (and queries)
I was debugging a system with a table that slowed to where individual
inserts were taking
Ron Mayer wrote:
> This table summarizes some of the times, shown more completely
> in a script below.
> =
> create gist index on 1 = 5 seconds
> create gist index on 2 = 32 seconds
> create gist ind
Tom Lane wrote:
> Ron Mayer writes:
>> vm=# create index "gist7" on tmp_intarray_test using GIST (my_int_array
>> gist__int_ops);
>> CREATE INDEX
>> Time: 2069836.856 ms
>
>> Is that expected, or does it sound like a bug to take over
>>
Greg Smith wrote:
> On Wed, 1 Apr 2009, Scott Carey wrote:
>
>> Write caching on SATA is totally fine. There were some old ATA drives
>> that when paired with some file systems or OS's would not be safe. There are
>> some combinations that have unsafe write barriers. But there is a
>> standard
Greg Smith wrote:
> I keep falling into situations where it would be nice to host a server
> somewhere else. Virtual host solutions and the mysterious cloud are no
> good for the ones I run into though, as disk performance is important
> for all the applications I have to deal with.
It's worth no
Grzegorz Jaśkiewicz wrote:
>
> I thought that's where the difference is between postgresql and oracle
> mostly, ability to handle more transactions and better scalability .
>
Which were you suggesting had this "better scalability"?
I recall someone summarizing to a CFO where I used to work:
"Or
astro77 wrote:
> Thanks Kevin. I thought about using tsearch2 but I need to be able to select
> exact values on other numerical queries and cannot use "contains" queries.
You might be able to make use of a custom parser for tsearch2 that creates
something like a single "word" for xml fragments lik
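A toy sketch of the flattening idea (hypothetical function; a real tsearch2 parser would be written in C against its parser API):

```python
import re

def xml_fragment_word(fragment):
    """Collapse a small XML fragment into a single searchable token,
    e.g. '<id>42</id>' -> 'id=42'. Purely illustrative: one token per
    fragment lets exact-value lookups use the full-text index."""
    m = re.match(r"<(\w+)>([^<]*)</\1>", fragment.strip())
    if not m:
        return None
    return f"{m.group(1)}={m.group(2)}"
```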
Alan Hodgson wrote:
> On Monday 21 September 2009, Scott Marlowe wrote:
>> I'm looking at running session servers in ram.
>
> Use memcached for session data.
IMHO postgres is more appropriate for some types of session data.
One of the apps I work on involves session data that consists of
geospa
If the table can be clustered on that column, I suspect
it'd be a nice case for the grouped index tuples patch
http://community.enterprisedb.com/git/
Actually, simply clustering on that column might give
major speedups anyway.
Vikul Khosla wrote:
> Folks,
>
> We have just migrated from Oracle to
Steve Crawford wrote:
> Greg Smith wrote:
>> Jochen Erwied wrote:
>>> - Promise Technology Supertrak ES4650 + additional BBU
>>>
>> I've never seen a Promise controller that had a Linux driver you would
>> want to rely on under any circumstances...
> +1
>
> I haven't tried Promise recently, but
Bruce Momjian wrote:
> Greg Smith wrote:
>> A good test program that is a bit better at introducing and detecting
>> the write cache issue is described at
>> http://brad.livejournal.com/2116715.html
>
> Wow, I had not seen that tool before. I have added a link to it from
> our documentation, an
Bruce Momjian wrote:
>> For example, ext3 fsync() will issue write barrier commands
>> if the inode was modified; but not if the inode wasn't.
>>
>> See test program here:
>> http://www.mail-archive.com/linux-ker...@vger.kernel.org/msg272253.html
>> and read two paragraphs further to see how touchi
Bruce Momjian wrote:
> Greg Smith wrote:
>> Bruce Momjian wrote:
>>> I thought our only problem was testing the I/O subsystem --- I never
>>> suspected the file system might lie too. That email indicates that a
>>> large percentage of our install base is running on unreliable file
>>> systems ---
Jesper Krogh wrote:
> I have a table that consists of somewhere in the magnitude of 100.000.000
> rows and all rows are of this tuples
>
> (id1,id2,evalue);
>
> Then I'd like to speed up a query like this:
>
> explain analyze select id from table where id1 = 2067 or id2 = 2067 order
> by evalue
Bruce Momjian wrote:
> Agreed, though I thought the problem was that SSDs lie about their
> cache flush like SATA drives do, or is there something I am missing?
There's exactly one case I can find[1] where this century's IDE
drives lied more than any other drive with a cache:
Under 120GB Maxto
Bruce Momjian wrote:
> Greg Smith wrote:
>> If you have a regular SATA drive, it almost certainly
>> supports proper cache flushing
>
> OK, but I have a few questions. Is a write to the drive and a cache
> flush command the same?
I believe they're different as of ATAPI-6 from 2001.
>
Bruce Momjian wrote:
> Greg Smith wrote:
>> Bruce Momjian wrote:
>>> I have added documentation about the ATAPI drive flush command, and the
>>
>> If one of us goes back into that section one day to edit again it might
>> be worth mentioning that FLUSH CACHE EXT is the actual ATAPI-6 command
>