Ron Mayer wrote:
Tom Lane wrote:
[EMAIL PROTECTED] writes:
select *
from streetname_lookup as sl
join city_lookup as cl on (true)
left outer join tlid_smaller as ts
  on (sl.geo_streetname_id = ts.geo_streetname_id
      and cl.geo_city_id = ts.geo_city_id)
where str_name='alamo'
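For readers unfamiliar with the `join ... on (true)` idiom above: it is just a cross join spelled as an inner join, so the same query can be written with an explicit CROSS JOIN (assuming `str_name` lives on `streetname_lookup`):

```sql
SELECT *
FROM streetname_lookup AS sl
CROSS JOIN city_lookup AS cl
LEFT OUTER JOIN tlid_smaller AS ts
       ON sl.geo_streetname_id = ts.geo_streetname_id
      AND cl.geo_city_id = ts.geo_city_id
WHERE sl.str_name = 'alamo';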
[EMAIL PROTECTED] writes:
streetname_lookup
(for every street name used in the country)
streetid | name | type
----------+------+------
1 | Main | St
2 | 1st | St
As far as I'm concerned, I would add the column city_id since 2
The reason: if the power cord is yanked, the OS _must_ boot back up in
good condition. If the DB is corrupted, whatever, nuke it then re-
initialize it. But the OS must survive act-of-god events.
Well, in that case:
- Use reiserfs3 for your disks
- Use MySQL with MyISAM
Steve Wampler [EMAIL PROTECTED] wrote on 03/30/2005 03:58:12 PM:
[EMAIL PROTECTED] wrote:
Mohan, Ross wrote:
VOIP over BitTorrent?
Now *that* I want to see. Ought to be at least as interesting
as the TCP/IP over carrier pigeon experiment - and more
challenging to boot!
It
On 2005-03-31 15:19, [EMAIL PROTECTED] wrote:
Now *that* I want to see. Ought to be at least as interesting
as the TCP/IP over carrier pigeon experiment - and more
challenging to boot!
..
Interestingly, we had a follow on contract to investigate routing
optimization using flooding
On Wed, Mar 30, 2005 at 08:41:43PM -0600, John Arbash Meinel wrote:
If all you are doing is append only logging, the fastest thing is
probably just a flat file. You could have something that comes along
later to move it into the database. It doesn't really sound like you are
using any features
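The flat-file-then-load approach suggested above can be sketched with PostgreSQL's COPY command. This is only an illustration; the table layout and file path are hypothetical, not from the original thread:

```sql
-- hypothetical log table matching the flat file's layout
CREATE TABLE event_log (
    logged_at  timestamp,
    message    text
);

-- the application appends tab-separated lines to the flat file;
-- later, a batch job bulk-loads it in one pass (COPY is far
-- faster than row-by-row INSERTs)
COPY event_log FROM '/var/log/app/events.tsv';
```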
Stefan Weiss wrote:
On 2005-03-31 15:19, [EMAIL PROTECTED] wrote:
Now *that* I want to see. Ought to be at least as interesting
as the TCP/IP over carrier pigeon experiment - and more
challenging to boot!
..
Interestingly, we had a follow on contract to investigate routing
optimization
On Thu, 2005-03-31 at 00:15 -0800, Ron Mayer wrote:
Ron Mayer wrote:
Tom Lane wrote:
[EMAIL PROTECTED] writes:
select *
from streetname_lookup as sl
join city_lookup as cl on (true)
left outer join tlid_smaller as ts on (sl.geo_streetname_id =
[EMAIL PROTECTED] wrote on 03/31/2005 10:48:09 AM:
Stefan Weiss wrote:
On 2005-03-31 15:19, [EMAIL PROTECTED] wrote:
Now *that* I want to see. Ought to be at least as interesting
as the TCP/IP over carrier pigeon experiment - and more
challenging to boot!
..
Interestingly, we
I've tested several keyword counts from a 2-million-record book
description table indexed with tsearch2.
The result is always slow on the first query attempt.
This is my sample query:
-- test one phrase --
SELECT count(*) from table1
WHERE searchvector @@ to_tsquery('default', 'david') limit
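For context, a tsearch2-era setup like the one described typically pairs the tsvector column with a GiST index; a sketch using the names from the post (the index name is hypothetical, and the trailing LIMIT in the quoted query has no effect on a single-row aggregate anyway):

```sql
-- tsearch2 (contrib) stores the document vector in a tsvector column;
-- a GiST index makes @@ searches indexable
CREATE INDEX table1_searchvector_idx
    ON table1 USING gist (searchvector);

SELECT count(*)
  FROM table1
 WHERE searchvector @@ to_tsquery('default', 'david');
```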
Hi,
We are migrating to a new server with more memory and also from
postgres 7.4 to postgres 8.0.1 version.
Here are my settings on the current 7.4 version:
OS : RedHat 9
CPUS: 2 hyperthreaded
Memory: 4gig
shared_buffers: 65536
sort_mem: 16384
vacuum_mem: 32768
wal_buffers: 64
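When moving these settings to 8.0, note that two of them were renamed: `sort_mem` became `work_mem` and `vacuum_mem` became `maintenance_work_mem`. A postgresql.conf fragment carrying the same values over might look like this (values copied unchanged, not re-tuned for the new machine):

```
# postgresql.conf for 8.0.1 -- same values as the 7.4 box, renamed where needed
shared_buffers = 65536
work_mem = 16384              # was sort_mem in 7.4
maintenance_work_mem = 32768  # was vacuum_mem in 7.4
wal_buffers = 64
```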
Hi all,
I have a table with a little over 200,000 rows in it that I need
to update using a regular expression. I admit that though I am not a
beginner at postgres, I am also far from an expert. :p
I tried to create an Index that would optimize the UPDATE but I may
have made an error in
Philip Hallstrom wrote:
I'm not sure about this, which is why I'm replying off list, but your
index is on file_type, file_parent_dir, and file_name and your query
is on file_parent_dir and file_name.
I seem to remember reading that the index will only get used if the
columns in the where
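The point being made is that a multi-column B-tree index is most useful when the query constrains its leading column. A sketch (table and index names hypothetical, modeled on the columns quoted):

```sql
-- index with file_type as the leading column:
CREATE INDEX files_idx ON files (file_type, file_parent_dir, file_name);

-- this query does not constrain file_type, so the planner is
-- unlikely to use files_idx efficiently:
SELECT * FROM files
 WHERE file_parent_dir = '/home' AND file_name = 'notes.txt';

-- reordering the columns (or adding a second index) matches the query:
CREATE INDEX files_idx2 ON files (file_parent_dir, file_name);
```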
(This is the 2nd posting; maybe the 1st one didn't go through)
I've tested several keyword counts from a 2-million-record book
description table indexed with tsearch2.
The result is always slow on the first query attempt.
This is my sample query:
-- test one phrase --
SELECT count(*) from
Yudie,
(This is the 2nd posting; maybe the 1st one didn't go through)
I've tested several keyword counts from a 2-million-record book
description table indexed with tsearch2.
The result is always slow on the first query attempt.
Yes, this is because your tsearch2 index is getting pushed
So I need to either find an Index that will work with regexes or
re-write my code to update each subdirectory separately and use simpler
UPDATE statement for each.
Why don't you use an LTREE type to model your directory tree? It's been
designed specifically for this purpose and has
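As a sketch of that suggestion (contrib/ltree must be installed; the table and column names here are hypothetical):

```sql
-- an ltree value is a dot-separated label path, e.g. 'home.user.docs'
CREATE TABLE files (
    path      ltree,
    file_name text
);

-- GiST index supporting the ltree ancestor/descendant operators
CREATE INDEX files_path_gist ON files USING gist (path);

-- all rows under the subdirectory home.user, at any depth,
-- in one indexable query instead of per-subdirectory UPDATEs:
SELECT * FROM files WHERE path <@ 'home.user';
```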
On Fri, Mar 25, 2005 at 06:21:24PM -0500, Bruce Momjian wrote:
Can we issue a LOCK TABLE with a statement_timeout, and only do the
VACUUM FULL if we can get a lock quickly? That seems like a plan.
I think someone else's remark in this thread is important, though:
autovacuum shouldn't ever
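The lock-with-timeout idea quoted above can be sketched in plain SQL. Note that VACUUM FULL cannot run inside a transaction block, so the probe lock has to be released first, which leaves a small race window; the table name is a placeholder:

```sql
BEGIN;
SET LOCAL statement_timeout = 1000;   -- give up after 1 second of waiting
LOCK TABLE some_table IN ACCESS EXCLUSIVE MODE;
COMMIT;                               -- probe lock released here

-- only reached if the lock was obtained quickly; VACUUM FULL must
-- reacquire the exclusive lock itself
VACUUM FULL some_table;
```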
Thanks for everyone's feedback on to best improve our Postgresql
database for the animal hospital. I re-read the PostgreSQL 8.0
Performance Checklist just to keep focused.
We purchased (2) 4 x 146GB 10,000rpm SCSI U320 SCA drive arrays ($2600)
and (1) Sun W2100z dual AMD64 workstation with 4GB
You need to look at what else is using RAM on that machine. And maybe buy
more.
Ouch.. I had that feeling too. Then how can I know how much memory is
needed for a certain number of words? And why is counting uncommon
words faster than common ones?