Re: [GENERAL] RemoveIPC problem
Yeah, not my style. fsync is on, no caching RAID controller, etc. Thanks.

> On Aug 31, 2017, at 9:48 PM, Michael Paquier <michael.paqu...@gmail.com> wrote:
>
> As long as you don't run Postgres on scissors with things like fsync =
> off or full_page_writes = off, there should be no risk with the data
> consistency.

--
Scott Ribe
scott_r...@elevated-dev.com
(303) 722-0567

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] RemoveIPC problem
Yeah, I was kind of thinking that PG detects the semaphore not existing, bails immediately, and restarts clean, thus no problem. I just wanted to hear from people, like you, who know way more than I do about the internals.

> On Aug 31, 2017, at 9:08 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> scott ribe <scott_r...@elevated-dev.com> writes:
>> Thanks to a typo, I did not turn off systemd's RemoveIPC, and had many many
>> pg restarts before I figured out the problem.
>
>> Should my data be OK? Or do I need to dump & reload?
>
> I don't know of any reason to think that that poses a data corruption
> risk. (But I've been wrong before.)
>
> regards, tom lane

--
Scott Ribe
scott_r...@elevated-dev.com
(303) 722-0567
[GENERAL] RemoveIPC problem
Thanks to a typo, I did not turn off systemd's RemoveIPC, and had many, many pg restarts before I figured out the problem.

Should my data be OK? Or do I need to dump & reload?

--
Scott Ribe
scott_r...@elevated-dev.com
(303) 722-0567
[GENERAL] the "PgJDBC driver is not an executable" message
Well, you know the old saying: "the road to hell is paved with good intentions"! So an attempt was made to help people who download the JDBC driver and then try to "run" it:

https://github.com/pgjdbc/pgjdbc/pull/112/files

But here's a fun side effect: if you add a .jar that has a main() to an Eclipse project, Eclipse adds a new run configuration with that class as the main class, and further makes that the currently selected config, automatically and silently. Took me a little bit to figure that one out... (because I'm a novice Eclipse user...)

So, is it worth adding some explanatory text? It was really annoying to suddenly start getting this message when I never had any intention of "running" the JDBC driver ;-)

--
Scott Ribe
scott_r...@killerbytes.com
(303) 722-0567
[GENERAL] PG on SSD
Is it reasonable to run PG on a mirrored pair of something like the Intel SSD DC S3610 series? (For example: http://ark.intel.com/products/82935/Intel-SSD-DC-S3610-Series-480GB-2_5in-SATA-6Gbs-20nm-MLC) I'd *hope* that anything Intel classifies as a "Data Center SSD" would be reasonably reliable, have actually-working power-loss protection, etc., but is that the case?
Re: [GENERAL] different empty array syntax requirements
> On Apr 21, 2016, at 8:37 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> The parens turn into an actual parsetree node when
> operator_precedence_warning is on, and the cast-of-an-array hack doesn't
> know it should look through such a node.

I figured that. The mystery is why on my pg, and not on the other. I've asked the other guy to try it in a newly-created database.

> That's a bug. Will fix it.

OK, cool.

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
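Since operator_precedence_warning (a GUC added in 9.5, off by default) is the trigger here, a quick way to compare the two servers is simply to check the setting on each. A sketch (session is illustrative, not from the thread):

```sql
-- Off by default; if one server has it on (e.g. via postgresql.conf or
-- ALTER DATABASE ... SET), parenthesized expressions gain an extra parse
-- node, which is what tripped the empty-array cast before Tom's fix.
show operator_precedence_warning;

-- The unparenthesized cast works regardless of the setting:
select array[]::text[];
```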
Re: [GENERAL] different empty array syntax requirements
> On Apr 21, 2016, at 8:25 PM, Alvaro Aguayo Garcia-Rada <aagu...@opensysperu.com> wrote:
>
> Looks like one has the appropriate cast operator, while the other hasn't.
> Have you tried doing the same, on both servers, on an empty database
> created from template0?

Excellent suggestion:

pedcard=# create database test;
CREATE DATABASE
pedcard=# \c test
SSL connection (protocol: TLSv1, cipher: DHE-RSA-AES256-SHA, bits: 256, compression: off)
You are now connected to database "test" as user "admin".
test=# select (ARRAY[])::text[];
ERROR:  cannot determine type of empty array
LINE 1: select (ARRAY[])::text[];
                ^
HINT:  Explicitly cast to the desired type, for example ARRAY[]::integer[].

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
[GENERAL] different empty array syntax requirements
How is it that one pgsql build (both are 9.5.2) has different casting behavior for empty arrays?

ericmj=# select ARRAY[]::text[];
 array
-------
 {}
(1 row)

ericmj=# select (ARRAY[])::text[];
 array
-------
 {}
(1 row)

--VS--

pedcard=# select ARRAY[]::text[];
 array
-------
 {}
(1 row)

pedcard=# select (ARRAY[])::text[];
ERROR:  cannot determine type of empty array
LINE 1: select (ARRAY[])::text[];
                ^
HINT:  Explicitly cast to the desired type, for example ARRAY[]::integer[].

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
[GENERAL] recover from this error
Alright: check the kernel version, but what else? Dump & restore?

ERROR:  unexpected data beyond EOF in block 1 of relation base/16388/35954
HINT:  This has been seen to occur with buggy kernels; consider updating your system.

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
[GENERAL] quick q re execute scope of new
Easier to give an example than to describe the question: any chance of making something like this work?

execute('insert into ' || tblname || ' values(new.*)');

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
Re: [GENERAL] quick q re execute scope of new
> On Apr 2, 2015, at 10:14 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>
> EXECUTE 'insert into ' || quote_ident(tblname) || ' values(' || new.* || ')'

Not that easy: strings are not quoted correctly, and null values come out blank. There might be a function to translate new.* into a string as needed for this use, but I found another way based on Tom's suggestion:

execute('insert into ' || tblnm || ' select $1.*') using new;

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
Re: [GENERAL] quick q re execute scope of new
> On Apr 2, 2015, at 10:10 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> Not like that, for certain. It might work to use EXECUTE ... USING new.*
> or some variant of that.

Couldn't get a variant of that to work, but this did:

execute('insert into ' || tblnm || ' select $1.*') using new;

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
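Pulling the thread's answer together, a minimal routing trigger might look like the sketch below. The table-naming scheme and the `created_at` column are hypothetical, not from the thread; only the `EXECUTE ... select $1.* USING new` idiom is what was actually established.

```sql
create or replace function route_insert() returns trigger as $$
declare
    tblnm text;
begin
    -- Pick a target table name; this monthly naming scheme is made up.
    tblnm := 'log_' || to_char(new.created_at, 'YYYY_MM');
    -- NEW is only visible inside EXECUTE when passed via USING;
    -- "select $1.*" expands the passed row into a column list.
    execute 'insert into ' || quote_ident(tblnm) || ' select $1.*' using new;
    return null;  -- suppress the insert into the parent table
end;
$$ language plpgsql;
```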
[GENERAL] scope quirk in copy in function
create or replace function archive_some_stuff() returns void as $$
declare
    cutoff timestamptz;
begin
    cutoff := now() - '1 day'::interval;
    copy (select * from log where end_when < cutoff) to ...
    ...

Gives me an error that there is no column named cutoff. (Other uses of cutoff in queries not inside a copy, iow the delete from commands, work.) Is there any alternative to just duplicating the now() expression inside every copy?

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
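The usual explanation for this error is that COPY is a utility statement, and PL/pgSQL substitutes variables only into plain SQL statements, so `cutoff` gets parsed as a (nonexistent) column. A common workaround is to build the COPY dynamically; a sketch (the output path is made up):

```sql
-- %L quotes values as SQL literals, so the timestamptz variable and the
-- file path are interpolated safely into the utility command.
execute format(
    'copy (select * from log where end_when < %L::timestamptz) to %L',
    cutoff, '/tmp/log_archive.csv');
```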
[GENERAL] fastest dump/restore
Is there an article anywhere which documents the current state of the art for the fastest dump/restore? What dump/restore format & options? What things to tweak in the config? I've picked up a few bits here and there along the way, but was just wondering if there's a comprehensive source of current advice. (I want to do a prophylactic dump/restore, after a middle-of-the-day OS crash caused by a third-party in-kernel driver--which I am going to remove now.)

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] what checksum algo?
What checksum algorithm wound up in 9.3? (I found Simon Riggs's 12/2011 submittal using Fletcher's, Michael Paquier's 7/2013 post stating CRC32 reduced to 16 bits, and another post online claiming that it was changed from CRC before release but not stating what it was changed to.)

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Re: [GENERAL] like optimization
Thank you all. Both the double index and pg_trgm would be good solutions.

> On Oct 14, 2013, at 3:40 PM, Merlin Moncure <mmonc...@gmail.com> wrote:
>
> On Sat, Oct 12, 2013 at 4:28 PM, Torsten Förtsch <torsten.foert...@gmx.net> wrote:
>> On 12/10/13 20:08, Scott Ribe wrote:
>>> select * from test where tz >= start and tz <= end and colb like '%foobar%'
>>
>> I think you can use an index only for wildcard expressions that are
>> anchored at the beginning. So,
>>
>>   select * from test where tz >= start and tz <= end and colb like 'foobar%'
>>
>> can use an index on colb. You could perhaps
>>
>>   select * from test where tz >= start and tz <= end and colb like 'foobar%'
>>   union all
>>   select * from test where tz >= start and tz <= end and reverse(colb) like 'raboof%'
>>
>> Then you need 2 indexes, one on colb, the other on reverse(colb). You can
>> have duplicates in the result set if the table contains rows where
>> colb='foobar'. If that's a problem, use union distinct.
>>
>> Alternatively, if foobar is kind of a word (with boundaries), you could
>> consider full-text search.
>
> pg_trgm module optimizes 'like with wildcards' without those
> restrictions. It's very fast for what it does. Because of the GIST/GIN
> dependency, index-only scans are not going to be used through pg_trgm
> though.
>
> merlin

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
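For reference, the pg_trgm setup Merlin alludes to is roughly the following; the index name is made up, and the table/column names are taken from the thread's example:

```sql
-- pg_trgm ships in contrib; available via CREATE EXTENSION since 9.1
create extension if not exists pg_trgm;

-- A trigram GIN index supports like '%foobar%' with no anchoring
-- requirement (at the cost of no index-only scans).
create index test_colb_trgm on test using gin (colb gin_trgm_ops);
```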
[GENERAL] like optimization
PG 9.3. Consider a table test like:

tz timestamp not null,
cola varchar not null,
colb varchar not null

2 compound indexes:

tz_cola on (tz, cola)
tz_colb on (tz, colb varchar_pattern_ops)

Now a query, for some start & end timestamps:

select * from test where tz >= start and tz <= end and colb like '%foobar%'

Assume that the tz restriction is somewhat selective, say 1% of the table, and the colb restriction is extremely selective, say less than 0.1%. It seems to me that the fastest way to resolve this query is to use the tz_colb index directly, scanning the range between tz >= start and tz <= end for the colb condition. But pg wants to use the tz_cola index to find all rows in the time range, then filter those rows for the colb condition. (FYI, cola contains only very small values, while colb's values are typically several times longer.) Now if I tweak the time range, I can get it to seq scan the table for all conditions, or bitmap heap scan + re-check cond tz + filter colb + bitmap index scan tz_cola, but never use the tz_colb index... Am I right about the fastest way to perform the search? Is there some way to get pg to do this, or would this require an enhancement?

Here's a sample query plan:

Index Scan using tz_cola on test  (cost=0.56..355622.52 rows=23 width=106) (actual time=61.403..230.649 rows=4 loops=1)
  Index Cond: ((tz >= '2013-04-01 06:00:00-05'::timestamp with time zone) AND (tz <= '2013-04-30 06:00:00-05'::timestamp with time zone))
  Filter: ((colb)::text ~~ '%foobar%'::text)
  Rows Removed by Filter: 261725
Total runtime: 230.689 ms

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Re: [GENERAL] like optimization
> On Oct 12, 2013, at 4:21 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> The reason you're losing on this is that the select * command eliminates
> the possibility of an index-only scan (I'm assuming that that selects
> some columns that aren't in the index). Given that a plain indexscan
> will always involve fetching each heap row that satisfies the indexable
> condition (the one on tz), the planner figures it might as well use the
> physically-smaller index.

OK, that logic makes sense. In the particular case I'm looking at, the comparison to colb will match such a tiny fraction that I think it should be faster to use the index first before fetching heap rows. (It most certainly would be faster if the rows to be evaluated for the colb match were randomly dispersed, but because they tend to be naturally clustered on tz anyway, and the rows are pretty small, there's some chance an index scan might not save enough heap row I/O to offset its own I/O.)

> It's true that in principle we could use the index-only-scan index AM
> machinery to retrieve colb from the index, and then check the LIKE
> predicate on that value before we go to the heap to get the other
> values; but the code isn't factored that way at the moment. I'm not
> entirely sure that such cases arise often enough to be worth making it
> happen. I think there was discussion of this point back when the
> index-only-scan patch was being written, and we decided it didn't seem
> worth pursuing at the time.

It's not a common-enough case for me to worry about. This is a very rare query in this application--I just wanted to know if I was missing something wrt indexes or whatever. It took me a long time to even find varchar_pattern_ops. (This is one particular question where the top results from google searches are dominated by incorrect assertions. Yes, Virginia, it *IS* possible to use an index in evaluating a like '%whatever' condition--whether or not it helps in a particular query is an open question, but it most certainly is possible.)

Besides, you've given me the hint; if I really care about this I can try a covering index ;-)

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] how to tell master from replica
Assuming a master/replica pair set up using streaming replication, hot standby, 9.3: I'd like to have a single script on both machines distinguish whether it's running on the machine with the master or the replica, and take different actions accordingly. Is the way to do this to check for the presence of a wal sender process vs a wal receiver process? Or is there a query that could be executed against system tables to find the current running config of the local postmaster?

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Re: [GENERAL] how to tell master from replica
> On Sep 25, 2013, at 6:13 PM, Michael Paquier <michael.paqu...@gmail.com> wrote:
>
> SELECT pg_is_in_recovery(); can be used to make the difference between
> a master and a slave.

Exactly what I need; thanks.

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
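A sketch of how the suggested function can drive a dual-purpose script (the shell wrapper in the comments is illustrative, not from the thread):

```sql
-- Returns true on a hot standby (replica), false on the master.
select pg_is_in_recovery();

-- From a shell script, one might branch on the 't'/'f' output, e.g.:
--   role=$(psql -XAt -c "select pg_is_in_recovery()")
--   if [ "$role" = "t" ]; then <replica actions> else <master actions> fi
```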
[GENERAL] why does this not give a syntax error?
pg 9.2:

delete from ExternalDocument where id = 11825657and Billed = 'f';

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Re: [GENERAL] why does this not give a syntax error?
> On Jun 10, 2013, at 12:52 PM, Tom Lane wrote:
>
> Scott Ribe <scott_r...@elevated-dev.com> writes:
>> pg 9.2: delete from ExternalDocument where id = 11825657and Billed = 'f';
>
> 11825657and is not any more lexically ambiguous than 11825657+. It has
> to be two separate tokens, and that's how it's read.

But it's not read correctly. In other words:

delete from ExternalDocument where id = 11825657and Billed = 'f';

deleted 0 rows, while:

delete from ExternalDocument where id = 11825657 and Billed = 'f';

deleted 1 row. ???

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] update performance of degenerate index
I'm seeing occasional simple-looking updates take way longer than I think they should, and if my theory about it is correct, it's not actually a problem. Consider this index, intended to provide extremely quick access to a small number of items from a much larger table:

create index not_exported on exports(id) where exported_when is null

My guess is that if instead of a very small number of items there are 1000s or 10s of 1000s of items, and a process is updating them one at a time, then occasionally there will be an expensive update of that index that involves writing a lot of pages? If that's what's happening, great. (The processing is normally triggered by notify, and happens much faster than the rate at which these come in, so the number of items in that index should be 0 most of the time, occasionally 1 for a second, and possibly but rarely 2 or 3 for a second. The current situation of lots of entries in it has to do with one-time processing of legacy data.) If that can't be what's happening, then I would want to investigate further why an update of a smallish row with 3 small indexes sometimes takes 600ms.

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] seeking SQL book recommendation
For a client who needs to learn how to query the db:

- No SQL knowledge at all; needs to start from square 1.
- Smart, capable person, who will be in this position for a long time, using this db for a long time.
- No chance in hell this db will be moved off PG, so PG-centric is fine ;-)

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] 9.2 upgrade glitch with search_path
Built & installed 9.2.3. Dumped the 9.1 db (using 9.2 pg_dump, IIRC). Restored. The database search path was not restored; had to execute alter database ... set search_path to...

Dump commands:

pg_dumpall -g -f roles.dump
pg_dump -F c -Z 0 -v pedcard > db.dump

Restore commands:

psql -f roles.dump postgres
pg_restore -j 4 -veC -d postgres db.dump

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Re: [GENERAL] 9.2 upgrade glitch with search_path
> On Jan 13, 2013, at 2:51 PM, Tom Lane wrote:
>
> That's a hole in the particular dump methodology you selected:
>
>> pg_dumpall -g -f roles.dump
>> pg_dump -F c -Z 0 -v pedcard > db.dump
>
> pg_dump does not dump/restore database properties, only database
> contents. Properties are the responsibility of pg_dumpall, which you
> bypassed (for databases anyway). There's been some discussion of
> refactoring these responsibilities, but no consensus.

Ah, this is my first upgrade using that methodology, in order to get concurrent restore functionality. Prior to this I've always used pg_dumpall.

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] bug with indexing of infinite timestamps?
I hit this peculiar result last night with 9.1.3 (Intel, 64-bit). Check out the query and the explain analyze results:

explain analyze select count(*) from page_log where end_when > current_date - 1 and succeeded = 't';

QUERY PLAN
---------------------------------------------------------------
Aggregate  (cost=380325.72..380325.73 rows=1 width=0) (actual time=312.169..312.169 rows=1 loops=1)
  ->  Bitmap Heap Scan on page_log  (cost=5445.24..379741.58 rows=233654 width=0) (actual time=307.596..312.160 rows=20 loops=1)
        Recheck Cond: (end_when > (('now'::text)::date - 1))
        Filter: succeeded
        ->  Bitmap Index Scan on page_log__end_pager  (cost=0.00..5386.83 rows=233989 width=0) (actual time=278.954..278.954 rows=287118 loops=1)
              Index Cond: (end_when > (('now'::text)::date - 1))

287,118 rows found via the index scan, 20 rows after the recheck. There were no other transactions open at the time. (I'm just running some tests on my personal machine.) Analyzing the table did not change the results (as I expected). So: there had originally been 287,118 rows with an end_when value of 'infinity', then I updated 287,098 of them to some value further in the past than yesterday, leaving the 20 still at infinity as the only ones matched by the query. That update had finished about an hour before I got the above result--but bear in mind that this db is running on a crappy disk. Reindexing the table fixed the issue:

QUERY PLAN
---------------------------------------------------------------
Aggregate  (cost=378380.91..378380.92 rows=1 width=0) (actual time=0.059..0.059 rows=1 loops=1)
  ->  Bitmap Heap Scan on page_log  (cost=5371.15..377803.88 rows=230815 width=0) (actual time=0.031..0.056 rows=20 loops=1)
        Recheck Cond: (end_when > (('now'::text)::date - 1))
        Filter: succeeded
        ->  Bitmap Index Scan on page_log__end_pager  (cost=0.00..5313.45 rows=231146 width=0) (actual time=0.025..0.025 rows=20 loops=1)
              Index Cond: (end_when > (('now'::text)::date - 1))

So are the 'infinite' values somehow sticky in the index???

The table definition is simple:

\d page_log
                              Table "public.page_log"
   Column    |           Type           |                  Modifiers
-------------+--------------------------+----------------------------------------------
 id          | bigint                   | not null default nextval('rowids'::regclass)
 ip          | character varying        | not null
 pager_num   | character varying        | not null
 message     | character varying        | not null
 succeeded   | boolean                  | not null
 user__id    | bigint                   |
 by_group    | boolean                  | not null
 by_schedule | boolean                  | not null
 start_when  | timestamp with time zone | not null
 end_when    | timestamp with time zone | not null
 request__id | bigint                   |
Indexes:
    "page_log_pkey" PRIMARY KEY, btree (id)
    "page_count__2012_12_28" btree (start_when) WHERE succeeded = true AND start_when >= '2012-12-28 00:00:00-07'::timestamp with time zone
    "page_log__end" btree (end_when, succeeded)
    "page_log__end_pager" btree (end_when, pager_num)
    "page_log__pager_end" btree (pager_num, end_when)
    "page_log__start_ok" btree (start_when, succeeded)
Check constraints:
    "page_log_ip_check" CHECK (btrim(ip::text) <> ''::text)
    "page_log_message_check" CHECK (btrim(message::text) <> ''::text)
    "page_log_pager_num_check" CHECK (btrim(pager_num::text) <> ''::text)
Foreign-key constraints:
    "page_log_request__id_fkey" FOREIGN KEY (request__id) REFERENCES page_requests(id)
    "page_log_user__id_fkey" FOREIGN KEY (user__id) REFERENCES users(id) DEFERRABLE INITIALLY DEFERRED

(The currently-defined indexes are a little overlapping, because I'm experimenting with various queries and performance.)

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] progress of long running operation
Is there any way to get some insight into the progress of:

insert into foo select distinct on (...) from bar where...

It's got to do with importing some legacy data, which has no proper primary key, and duplicates, and garbage that won't be accepted. And there are 30,000,000 rows, and I'm running on a slow disk for testing--so I know this is going to be painfully slow. But after a few hours I'd like to know if it's going to finish overnight, or if it will take so long that I need to look at alternate approaches. (I upped my shared buffers & work mem, so explain on the select statement shows a bit better than a 50% reduction in predicted work for that part. And I will go ahead and drop all indexes on the target table.)

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Re: [GENERAL] progress of long running operation
> On Dec 27, 2012, at 12:46 PM, Tom Lane wrote:
>
> Or you could run contrib/pgstattuple's pgstattuple() function every
> so often --- it will report the uncommitted tuples as dead, which is
> inaccurate, but you'd be able to see how fast the number is increasing.

That's exactly the kind of thing I was hoping for. I'm actually inserting into an empty table, so dead tuples would be dead accurate in my case ;-)

Or I could suck it up and do them in batches instead of one giant pass...

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
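The monitoring loop Tom suggests boils down to something like the following sketch (the target table name `foo` is from the original question; run the select periodically and watch the counts grow):

```sql
-- pgstattuple ships in contrib
create extension pgstattuple;

-- While the big INSERT is still uncommitted, its tuples are reported
-- as dead, so repeated calls show the insert's progress.
select dead_tuple_count, dead_tuple_len
from pgstattuple('foo');
```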
[GENERAL] variadic function, query in, help with syntax/function
Briefly, what would it take to make the following work?

create function getbatch (variadic ids int8[]) returns setof foobar as $$
begin
    return query select * from foobar where id in (ids);
end;
$$ language plpgsql;

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Re: [GENERAL] variadic function, query in, help with syntax/function
> On Oct 21, 2012, at 11:01 AM, Pavel Stehule wrote:
>
> Hello
>
> 2012/10/21 Scott Ribe <scott_r...@elevated-dev.com>:
>> Briefly, what would it take to make the following work?
>>
>> create function getbatch (variadic ids int8[]) returns setof foobar as $$
>> begin
>>     return query select * from foobar where id in (ids);
>> end;
>> $$ language plpgsql;
>
> create function getbatch (variadic ids int8[]) returns setof foobar as $$
> begin
>     return query select * from foobar where id = any (ids);
> end;
> $$ language plpgsql;

Ah, thanks. I didn't know "any" worked in that context--I've just used it (and only seen examples) the other way: const_id = any(somecol).

> note, for these single statement functions, sql language is better

I greatly simplified it for the question; it's kind of a nasty function that's multiple unions of multiple joins.

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
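For completeness, calls to the fixed function look like this sketch (the literal ids are made up):

```sql
-- Variadic arguments are collected into the int8[] parameter:
select * from getbatch(11, 12, 13);

-- Or pass an existing array explicitly with the VARIADIC keyword:
select * from getbatch(variadic array[11, 12, 13]::int8[]);
```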
[GENERAL] options for ad-hoc web-based data queries
Anybody know of tools for adding an ad-hoc query builder to a web app? (Backed by PostgreSQL 9.1.) I'm familiar with HTSQL, and it looks good for more highly skilled & trained users. But I'm looking for something more graphical, you know: list of tables, select one, list of columns, enter conditions, click search, see the rows, right-click to see a list of tables related by foreign key... Being available within the web app is not a hard 100% requirement, but if we're talking about installing an application and JDBC driver on the user's machine, then I already know about lots & lots of options ;-)

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] ARD update warning (Mac stuff)
If you install the latest ARD update (which does not require a reboot), it apparently does something similar to:

sudo killall postmaster

Oops. Thanks, Apple.

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] evaluating subselect for each row
As part of anonymizing some data, I want to do something like:

update foo set bar = (select bar2 from fakes order by random() limit 1);

But of course, that sets them all to the same value, whereas I want them all different.

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
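The subselect gets evaluated once because it is uncorrelated with the outer query. One commonly suggested workaround is to add a spurious reference to the outer row, forcing the planner to re-run the subselect per row; a sketch (assumes foo has an id column, which is an assumption, not from the post):

```sql
update foo
set bar = (select bar2 from fakes
           where foo.id = foo.id    -- always true; exists only to correlate
                                    -- the subquery so it runs per row
           order by random() limit 1);
```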
[GENERAL] Rails pg setup question
What I'm trying to determine is: should I plan on using pgbouncer? With Rails & Passenger, do the app server processes take and hold connections to the db? Or take & release them with queries/connections? This is not a scalability question; given the traffic the site will have, I only need a small handful of connections. It's a latency question: I don't want to be starting up new pg backend processes excessively. (Yes, I thought about asking on the RoR list, since it's really a question about RoR behaviors, but on the other hand it involves pg optimization best practice, which is, ahem, different than with MySQL.)

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
[GENERAL] sql query bug???
Is this a bug (in 9.0.4), or have I just gone cross-eyed from too much work?

The query:

select t0.ICD9, t0.Description, count(*)
from (select distinct Person_Id, ICD9, Description from PatientDiagnoses) as t0
group by (t0.ICD9, t0.Description)
order by count(*) desc limit 10;

The error:

column "t0.ICD9" must appear in the GROUP BY clause or be used in an aggregate function

Huh? FWIW, I'm not providing the PatientDiagnoses def because it's a complex query involving a 3-way union of 5-way joins--intended for end-user querying. Of note, this query works (and performance is good enough as well):

select ICD9, count(*)
from (select distinct Person_Id, ICD9 from PatientDiagnoses) as t0
group by ICD9
order by count(*) desc limit 10;

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
Re: [GENERAL] sql query bug???
On Feb 5, 2012, at 10:46 PM, Tom Lane wrote: Drop the parentheses in the GROUP BY. I had the suspicion that it was some kind of a late-night brain fart ;-) I don't know where the hell the parens came from, since I've *NEVER* put spurious parens in a group by clause before. But it took someone pointing it out to me to get me to notice that irregularity. Fatigue... One more day of super-crunch and then I get to take a break... Thanks.
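For illustration, a sketch of why the parens mattered on 9.0: parenthesizing the list made it a single ROW value, so the individual columns were no longer grouping columns. (Later versions with grouping-sets support parse `GROUP BY (a, b)` as an ordinary column list, so the parenthesized form is accepted there.)

```sql
-- Groups by two columns; each column is then usable in SELECT:
SELECT icd9, description, count(*)
FROM t0
GROUP BY icd9, description;

-- On 9.0-era servers this groups by one composite ROW(icd9, description)
-- value, so referencing icd9 alone fails with "column t0.icd9 must appear
-- in the GROUP BY clause or be used in an aggregate function":
SELECT icd9, description, count(*)
FROM t0
GROUP BY (icd9, description);
```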
Re: [GENERAL] 7
On Oct 11, 2011, at 8:18 PM, The Great SunWuKung wrote: This shop is number 1 at my shop-list! So why the fuck is your spam title 7???
Re: [GENERAL] Installation woes via Macports on Mac OS X 10.7
On Oct 7, 2011, at 2:24 PM, René Fournier wrote: I've tried installation 8.4 and 9.0 on two different machines, and at the end can't start Postgresql. Here's the basic story: You can't change shmall and shmmax after boot. They must be set during startup. If you're on a recent version of OS X, you do this in /etc/sysctl.conf. Also, the /usr/bin/postgres that you see running is not where macports puts it, and not the one you tried to start a couple of lines earlier, so you have something already installed on your system that is running a postgres instance.
Re: [GENERAL] Installation woes via Macports on Mac OS X 10.7
On Oct 7, 2011, at 3:48 PM, René Fournier wrote: Well, I changed them at the shell and in sysctl, and restarted (on both machines). Check them at the shell. If they're not what you set in sysctl.conf, then your sysctl.conf is not correct, and since you can't change them after startup, you won't be able to launch pg with anything but a tiny shared buffers setting. There are restrictions on them, such as shmall being an even multiple of the VM page size, and some relationship or other between shmall and shmmax, but I don't remember the details. So one thing that's odd, I have two version of postgresql apparently running (although I've removed 9.0 from Macports)… Not sure if this is the cause. Any other suggestions? Yes. Figure out where the logging for the postgres instances you are trying to launch is going, and read the log messages. Or maybe they're just going to console--check that first. You'll have a bit of difficulty getting help here, because you're not likely to find people who know what those wrappers (daemondo postgresql84-server.wrapper) actually do, since they're not part of postgresql. One thing you should know: it is possible to have 2 versions of pg running at the same time, but it is not possible to have 2 versions running on the default port at the same time, or in the same data directory (cluster in pg terminology). Also, you don't have any actual pg instances running there. What you have is some kind of wrapper that tries to launch pg. That wrapper is failing to launch, and either looping, or quitting and being relaunched--depending on how it is set up, which I have no idea about.
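For reference, a hedged sketch of what /etc/sysctl.conf might contain on an OS X box of that era. The values are purely illustrative; shmall is counted in 4096-byte pages, so here shmall * 4096 = shmmax (1GB):

```
kern.sysv.shmmax=1073741824
kern.sysv.shmmin=1
kern.sysv.shmmni=256
kern.sysv.shmseg=64
kern.sysv.shmall=262144
```

After rebooting, verify the live values with `sysctl kern.sysv.shmmax kern.sysv.shmall`; if they don't match the file, the file wasn't applied.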
Re: [GENERAL] Getting PostGIS 1.5.3 working with Postgresql90 (Macports)
On Oct 7, 2011, at 5:38 PM, René Fournier wrote: (Sorry, I'm a MySQL guy. I'm just trying to get started without asking too many dumb questions.) Frankly, I think you'd be better served by deleting the entirety of the macports stuff and installing postgresql from source the normal UNIX way: ./configure, make, sudo make install... That's the way I do it, and it works fine on OS X.
Re: [GENERAL] Getting PostGIS 1.5.3 working with Postgresql90 (Macports)
On Oct 7, 2011, at 7:39 PM, René Fournier wrote: Plus, I find Macports makes it easy to automate a server build in one script. The problem as I see it isn't Macports but my unfamiliarity with Postgresql. I think it's both ;-) I tried macports for a couple of things and gave up on it. It works for getting dependencies for certain things. Then one day it doesn't work because you're trying a combination that wasn't accounted for, and you now have no idea how all those things you previously installed are configured... But if you can find the pg log, it will usually pretty explicitly tell you why the server is quitting on launch. So you might just need to read those wrapper scripts to see how exactly they invoke postgres.
Re: [GENERAL] How can i get record by data block not by sql?
On Oct 3, 2011, at 3:03 AM, 姜头 wrote: I want to read and write lots of data by data blocks, so i can form a disk-resident tree by recording the block address. Then you want some kind of lower-level b-tree manager, not a SQL database, and certainly not PostgreSQL. Before you go down that path, you should seriously consider whether you really need that, rather than a higher-level solution.
Re: [GENERAL] Inconsistency: varchar is equivalent to varchar(255) and also not equivalent?
On Oct 3, 2011, at 10:12 AM, Boszormenyi Zoltan wrote: But I would like to know why isn't the type conversion from unlimited varchar to varchar(255) invoked in the pl/pgsql function? What if t1 || t2 is longer than 255? You need to specify it explicitly.
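A sketch of making the intent explicit (names hypothetical), with the usual caveat about semantics: an explicit cast to varchar(255) silently truncates to 255 characters, whereas assigning an over-long value to a varchar(255) column raises an error.

```sql
-- Explicit cast: truncates silently if t1 || t2 exceeds 255 characters.
SELECT (t1 || t2)::varchar(255) FROM some_table;
```

If silent truncation is not acceptable, check the length yourself and raise an error in the function instead.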
Re: [GENERAL] (another ;-)) PostgreSQL-derived project ...
On Sep 25, 2011, at 2:11 AM, Albretch Mueller wrote: For what exactly? Isn't a comparison on 4 numeric bytes (1 (or 1/2) word in modern hardware) more efficient than comparing sequences of string characters? What on earth makes you think the db engine compares numbers as strings???
Re: [GENERAL] How to get Transaction Timestamp ?
On Sep 17, 2011, at 1:09 AM, Raghavendra wrote: However, I was curious to know any thing stored at Page-Level(like XID) to help me in getting the transaction timestamp. No, there is no such thing. If you want timestamps, you have to record them yourself.
Re: [GENERAL] integer instead of 'double precision'?
On Sep 9, 2011, at 8:42 AM, Henry Drexler wrote: any ideas on how to get this type of a manufactured column (not sure the right term for it) to show the double precision result? Use floating point types in the calculation to begin with: 1.0/3.0, 1::float8 / 3::float8, float8(1) / float8(3), 1.0/3, 1/3.0, 1::float8 / 3, ...
Re: [GENERAL] Variable column name
On Sep 2, 2011, at 2:31 PM, Bob Pawley wrote: It seems to work when I hard code the column name and array point, so I was hoping to make it work through a loop using variables for column and array point. Does this make sense?? Building queries this way is tedious and error-prone; that's just the way it is. Put the command into a variable, then raise a notice with that variable, then execute it. That way, when you get a failure, you just copy the failed SQL from the notice into an editor, tweak it until it works, then adjust your code accordingly to produce the corrected query.
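The build/notice/execute pattern might look like this in pl/pgsql (a sketch; table and column names are hypothetical). quote_ident() guards dynamically supplied identifiers, and EXECUTE ... USING binds the data values:

```sql
DO $$
DECLARE
    cmd text;
BEGIN
    -- Build the command from variable identifiers.
    cmd := 'UPDATE ' || quote_ident('my_table')
        || ' SET '   || quote_ident('my_column')
        || ' = $1 WHERE id = $2';
    -- Log exactly what will run, so failures can be copied into an editor.
    RAISE NOTICE 'about to run: %', cmd;
    -- $1 and $2 are bound from the USING list.
    EXECUTE cmd USING 42, 7;
END
$$;
```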
Re: [GENERAL] Variable column name
On Sep 1, 2011, at 9:04 AM, Bob Pawley wrote: Would it be possible for you to point me to an example?? The EXECUTE command is what you want.
Re: [GENERAL] out of memory - no sort
On Aug 31, 2011, at 9:51 AM, Don wrote: Both machines are 64bit. Are all your server and client builds 64-bit? With 32M rows, unless the rows are only 50 bytes each, you'll never be able to manipulate that selection in memory with a 32-bit app.
Re: [GENERAL] how do I disable automatic start on mac os x?
On Aug 31, 2011, at 8:46 AM, edwardIshaq wrote: if you do: open OS X will open the file in the plist editor. I tried doing that but didn't get a way with saving though :) Probably a privileges violation, right? In a prior message on this thread I suggested: sudo launchctl unload -w blahblahblah.plist I also said: The -w option causes it to not only unload the item, but also write a disabled key into it which will stop it from loading at launch. That is no longer true. Instead of the disabled key, 10.6 and up (or was it 10.5 and up?) keeps track of enabled/disabled elsewhere, so you really need to use launchctl instead of editing the plist.
Re: [GENERAL] out of memory - no sort
On Aug 31, 2011, at 10:52 AM, Don wrote: I had always thought that a 32bit machine could access up to 4GB. So what is the limiting factor ? - Half of your memory space may be given over to memory-mapped I/O. Now you're down to 2GB. - Your process's executable, plus any libraries it uses, plus all the system libraries that they touch, recursively all the way down, are mapped into this space. Now you're likely down to 1.5GB or less free. - Then of course your process allocates various data structures for each row, even if it's just a huge array of pointers to each row, that would be overhead. And of course the overhead is not nearly that simple--there will be allocations for pointers to varchars, and info about columns and data types, and heap data structures to keep track of allocated vs free blocks. - Memory will be fragmented of course, so you can't even use all of what's left. So no, you can't manipulate 32M of anything except plain numbers or very simple structs in RAM in a 32-bit process.
Re: [GENERAL] duplicate key violates unique on a nextval() field
On Aug 30, 2011, at 10:19 AM, Peter Warasin wrote: The message tells me furthermore that freeradius tries to insert a record with a radacctid which already exists. But how can that happen when it is bigserial? Postgres only assigns the value if it is not explicitly provided. Any client, freeradius included, could be assigning ids and could have bugs. Allowing pg to assign the value is safe, using nextval is safe--I'd look for client code that tries to get ranges ahead of time and cache them... The error message posted above tells me that the insert statement does even not succeed after the reconnect. (failed after reconnect) If freeradius is trying to insert a record with an id that already exists, after a re-connect there's no reason at all to assume that the previously existing record is gone and that the insert with the same id will now succeed. That DDL is also kind of nasty... Why the big effort to set the sequence to 1 immediately after creating the table? Why the creation of a unique index when the primary key attribute already causes a unique index to be created on the id? Ugh.
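The distinction in a minimal sketch (table and column names loosely follow the freeradius schema, but treat them as hypothetical):

```sql
CREATE TABLE radacct (
    radacctid bigserial PRIMARY KEY,
    username  text
);

-- Safe: the column is omitted, so the default nextval() assigns the id.
INSERT INTO radacct (username) VALUES ('alice');

-- Risky: an explicitly supplied id bypasses the sequence entirely,
-- and can collide with a value the sequence hands out later.
INSERT INTO radacct (radacctid, username) VALUES (42, 'bob');
```

A client that pre-fetches and caches id ranges is effectively doing the second form, which is where duplicate-key errors like this usually come from.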
Re: [GENERAL] IDLE queries taking up space
On Aug 30, 2011, at 10:03 AM, JD Wong wrote: How can I set postgres to qutomatically close connections that have finished their queries and now sit idle? They haven't finished their queries. They've opened transactions, and then are sitting there doing nothing. In other words, this is a bug in your clients, and no, you really would not want PG automatically terminating connections mid-transaction just because it thought the client was taking too long to get to the next step.
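To find the offending sessions, something like this works on 9.x-era servers (column names are version-dependent: later releases renamed procpid to pid and replaced the `<IDLE>` markers with a state column):

```sql
-- Sessions that opened a transaction and are now doing nothing:
SELECT procpid, usename, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';

-- On 9.2+ the equivalent is roughly:
--   SELECT pid, usename, state_change FROM pg_stat_activity
--   WHERE state = 'idle in transaction';
```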
Re: [GENERAL] COPY failure on directory I own
On Aug 30, 2011, at 11:14 AM, Rich Shepard wrote: The permissions on that directory are 755 and it's owned by me. Since I have no problems writing other files to that directory I must have the command syntax incorrect but I don't see where. Where is the server and where are you? You are issuing a command to the server to create a file at that path on the server.
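The two forms, for contrast (paths and table name hypothetical):

```sql
-- Server-side: the postgres server process writes the file, on the
-- server's filesystem, with the server account's permissions:
COPY my_table TO '/path/on/the/server/out.csv' CSV;

-- Client-side: psql runs the COPY over the connection and writes the
-- file locally, with your permissions:
\copy my_table TO 'out.csv' CSV
```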
Re: [GENERAL] SELECT Query on DB table preventing inserts
On Aug 30, 2011, at 8:22 AM, Dan Scott wrote: Perhaps because I'm locking the table with my query? Do you mean you're explicitly locking the table? If so, why???
Re: [GENERAL] Suspicious Bill
Mods: FYI, this is not a one-off thing. I've seen this email on 4 other lists so far this morning. So some turd is spamming every list he can subscribe to.
Re: [GENERAL] Using Postgresql as application server
On Aug 17, 2011, at 12:53 AM, Sim Zacks wrote: In your scenario, if you send the NOTIFY message and then you roll back the transaction, the helper application will still send the email. How? NOTIFY doesn't get delivered until the transaction commits.
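A minimal sketch of the transactional behavior (channel name hypothetical):

```sql
-- In the helper application's session:
LISTEN email_queue;

-- In another session:
BEGIN;
NOTIFY email_queue;
ROLLBACK;        -- the listener never receives this notification

BEGIN;
NOTIFY email_queue;
COMMIT;          -- the notification is delivered only now, after commit
```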
Re: [GENERAL] Using Postgresql as application server
On Aug 17, 2011, at 1:05 AM, Sim Zacks wrote: One problem we have with LISTEN/NOTIFY (and I haven't found the cause for this yet) is every once in a while my daemon stops listening. It may be after a month of use or longer, and may be caused by the database being restarted or something similar. When the daemon stops listening, it doesn't give any errors or indication that it isn't working anymore. So your daemon has a bug. When the database is restarted, connections will be closed, and the daemon should certainly notice that. Of course the cause may be something else, but either way I doubt it's a problem with NOTIFY/LISTEN.
Re: [GENERAL] INSERTing rows from external file
On Aug 16, 2011, at 4:13 PM, Rich Shepard wrote: Here's the full statement for the last row: psql:chem_too.sql:5517: ERROR: invalid input syntax for type boolean: LINE 1: ...NS','1996-11-21','Potassium','0.94988','mg/L','','','','... ^ The column is NULLable and if there's no value a NULL should be entered. An empty string is not null.
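To illustrate (hypothetical table; the real one is from the poster's chem_too.sql): '' is a zero-length string, not the absence of a value, and for a boolean column it is simply invalid input.

```sql
CREATE TABLE chem (analyte text, value numeric, units text, flag boolean);

-- Fails: ERROR: invalid input syntax for type boolean: ""
-- INSERT INTO chem VALUES ('Potassium', 0.94988, 'mg/L', '');

-- The generated INSERT should say NULL (unquoted) for a missing value:
INSERT INTO chem VALUES ('Potassium', 0.94988, 'mg/L', NULL);
```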
Re: [GENERAL] FREE hosting platforms with PostgreSQL, Java SDK, Tomcat, ecc.?
After open source for the software, we will wait for open resource for the hardware (this is just a first example http://www.arduino.cc/, even if of different nature). While the plans may be free, the actual hardware sure as hell won't be.
Re: [GENERAL] streaming replication does not work across datacenter with 20ms latency?
On Jul 23, 2011, at 6:50 AM, Yan Chunlu wrote: what does invalid record length and invalid magic number normally means? xlog corrupted? Thanks for any further help! It means your build settings for pg are not compatible across the 2 machines. For instance, one machine is 32-bit and the other is 64-bit, or one machine is big-endian and the other is little-endian...
Re: [GENERAL] streaming replication does not work across datacenter with 20ms latency?
On Jul 23, 2011, at 8:43 PM, Yan Chunlu wrote: I used apt-get to install postgresql, running pg_config showing they are exactly the same... BTW, forgot to mention this in my first message: I run streaming replication across the country with latency well over 100ms and no problems.
Re: [GENERAL] interesting finding on order by behaviour
On Jul 22, 2011, at 11:11 AM, Samuel Hwang wrote: results = SQL Server 2008 R2 (with case insensitive data, the ordering follows ASCII order) f1 --- AbC abc ABc cde CdE Well, if it's case insensitive, then AbC abc ABc are all equal, so any order for those 3 would be correct...
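If a stable, fully deterministic order is wanted anyway, one option (a sketch; table and column names from the example) is to sort case-insensitively first and break ties among case-variants with the raw value:

```sql
-- lower(f1) groups case-variants together; the second key fixes the
-- order within each group, so repeated runs return identical results.
SELECT f1 FROM t ORDER BY lower(f1), f1;
```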
Re: [GENERAL] Programmer ( Postgres), Milwaukee - offsite-Remote - onsite
On Jul 19, 2011, at 9:27 AM, Martin Gainty wrote: I do'nt believe Rao would discriminate against anyone that speaks the Kings English. So, what makes you think they won't hire us Americans?
[GENERAL] announcements regarding tools
I'm not sure to whom this specifically should be addressed, but something that's been bugging me for a while: announcements like this morning's "AnySQL Maestro 11.7 released", where the announcement mentions nothing about platform support. And it's not just the lack of that info in announcements; on many web sites you have to dig for a while to find info about platform support, and I do not enjoy finding what looks like a nice tool, only to have to spend 5-10 minutes to figure out that it is Windows only. My suggestion: all such announcements should include information about supported platforms. Any announcement submitted without that info should be rejected, and the vendor instructed to add it before re-submission.
Re: [GENERAL] Error Importing CSV File
On Jul 15, 2011, at 11:44 AM, Tom Lane wrote: Never heard of rake before, but I'm betting that it's doing stuff behind your back, like including an id column in the table definition. Try looking at the table in psql (\d geo_data), or enabling query logging on the server so you can see what the actual CREATE TABLE command sent to the server looks like. That's it. Rake is part of Ruby on Rails, and RoR wants every table to start with an integer synthetic key column.
Re: [GENERAL] Error Importing CSV File
On Jul 15, 2011, at 12:06 PM, Bryan Nelson wrote: Hi Scott, do you know if it's possible to force it not to create the extra field? If you do that, you are going to have to figure out how to get Rails to work with that table--which is probably far beyond the Rails help you're going to get on a Postgres mailing list ;-)
Re: [GENERAL] Real type with zero
On Jun 29, 2011, at 9:50 AM, David Johnston wrote: Aside from storing the true precision in a separate integer field, what solution is there in this situation? I think the only other way would be a custom data type encapsulating those 2 bits of info. Which might be the best solution, since if you really need to maintain info about the significant digits of measurements, you need that to carry through properly in calculations with those numbers. OTOH, it's possible to have that info in a separate field, and leave with clients the responsibility for correct calculations...
Re: [GENERAL] rationale behind quotes for camel case?
On Tue, Jun 28, 2011 at 11:09:52AM -0500, dennis jenkins wrote: Any suggestions on how to name tables when table names contain both multi-word nouns and multi-table many-many mappings? Example: Suppose that I have a table called foo and another table called barBiz (or bar_biz if you prefer). Further, both of these tables have a serial primary key. Now I want to create a third table that represents a many-to-many relationship between foo and barBiz. So far I have been keeping compound-noun table names in camel case, but mapping tables separate the base table names with underscores. Thus the table name would be foo_barBiz. However, I find the above distasteful, for many of the reasons that Merlin and others have outlined. Yet naming the table foo_bar_biz seems ambiguous to me, as does using just lower-case foo_barbiz / barbiz. These examples are contrived. The real table names are normal English words with subjective meaning. I'd like to ask the pgsql community for suggestions on how they name tables. Well, when I avoid camel case, I use _ to separate words in a table name, and __ to separate table names. Likewise with column names for foreign keys: __ between table and column name.
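That convention, sketched in DDL with the (hypothetical) names from the question rendered in lower case:

```sql
CREATE TABLE foo     (id serial PRIMARY KEY);
CREATE TABLE bar_biz (id serial PRIMARY KEY);   -- _ separates words in one name

-- __ separates the two base table names in the mapping table,
-- and table__column names each foreign key column:
CREATE TABLE foo__bar_biz (
    foo__id     integer NOT NULL REFERENCES foo(id),
    bar_biz__id integer NOT NULL REFERENCES bar_biz(id),
    PRIMARY KEY (foo__id, bar_biz__id)
);
```

The double underscore keeps the boundary between table names unambiguous even when the names themselves contain single underscores.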
Re: [GENERAL] 2 questions re RAID
On Jun 21, 2011, at 7:49 AM, Vick Khera wrote: Ok...there is *one* advantage: you can lose any two drives at the same time and still survive, with RAID-10 if you lose the wrong two drives you're hosed. Exactly. The performance advantage of RAID-10 over RAID-6 in this server is, I think, not useful.
Re: [GENERAL] insert a SYSTIMESTAMP value in postgres
On Jun 20, 2011, at 1:32 PM, Leon Match wrote: How can I insert a dynamic timestamp value in postgress, please? http://www.postgresql.org/docs/9.0/static/functions-datetime.html
Re: [GENERAL] Another RAID controller recommendation question
On Jun 19, 2011, at 12:33 AM, David Boreham wrote: One thing I don't understand is why is the BBU option never available with integrated LSI controllers? Because integrated means it's on the mobo to save costs.
Re: [GENERAL] 2 questions re RAID
On Jun 17, 2011, at 11:23 PM, Greg Smith wrote: I guess you could call Highpoint a RAID manufacturer, but I wouldn't do so. They've released so many terrible problems over the years that it's hard to take the fact that they may have something reasonable you can buy now (the 43XX cards I think?) seriously. Ah, I see. So they're on par with Apple's RAID controller instead of being the first step up. Atto is so Mac focused that you're not going to find much experience here, for the same reason you didn't get any response to your original question. Their cards are using the same Intel IO Processor (IOP) hardware as some known capable cards. For example, the ExpressSAS R348 is named that because it has an Intel 348 IOP. That's the same basic processor as on the medium sized Areca boards: http://www.areca.us/products/pcietosas1680series.htm So speed should be reasonable, presuming they didn't make any major errors in board design or firmware. Good info. Didn't know about their focus, because the last time I dealt with them was so many years ago they still had a significant focus on Windows, or so it seemed to me at the time. Focus on Mac says nothing about the firmware on the card, but it should bode well for the driver. The real thing you need to investigate is whether the write cache setup is done right, and whether monitoring is available in a way you can talk to. What you want is for the card to run in write-back mode normally, degrading to write-through when the battery stops working well. If you don't see that sort of thing clearly documented as available, you really don't want to consider their cards. Well, right up front in their marketing materials they make a major point about cache protection, how important it is, how good it is, using ultracapacitor+flash over batteries (on some of their controllers). So they have awareness and intent; competence and follow-through of course are not assured by marketing materials.
(Also they talk about background scanning of drives for defects.) And it looks like they offer all of: GUI setup/monitoring that runs on OS X, command-line setup/monitoring that runs on OS X, SNMP... You're basically asking if I don't write to the database, does the fact that write performance on RAID5 is slow matter? When asked that way, sure, it's fine. If after applying the write cache to help, your write throughput requirements don't ever exceed what a single disk can provide, then maybe RAID5 will be fine for you. Make sure you keep shared_buffers low though, because you're not going to be able to absorb a heavy checkpoint sync on RAID5. Yes, basically I wanted to confirm that's what I was actually asking ;-) The only circumstance under which I could see overflowing the card's write cache is during migrations. So my choice then really is better performance during rare migrations vs being able to lose any 2 drives out of 4 (RAID6). Which is OK, since neither choice is really bad--having been burned by bad disk runs before, I'll probably go for safety. (FYI this is not my only margin for failure. Two geographically-distributed WAL-streaming replicas with low-end RAID1 are the next line of defense. Followed by, god forbid I should ever have to use them, daily dumps.) Thanks for all the info. I guess about all I have remaining to do is sanity-check my beliefs about disk I/O.
[GENERAL] 2 questions re RAID
No responses to my earlier post, I'm assuming because OS X experience is rather thin in this group ;-) So a couple of more specific questions: 1) Is my impression correct that given a choice between Areca and Highpoint, it's a no-brainer to go with Areca? 2) I understand why RAID 5 is not generally recommended for good db performance. But if the database is not huge (10-20GB), and the server has enough RAM to keep most all of the db cached, and the RAID uses (battery-backed) write-back cache, is it still really an issue?
Re: [GENERAL] 2 questions re RAID
Thanks much for the specific info on Areca RAID cards. Very helpful. On Jun 17, 2011, at 11:20 AM, Scott Marlowe wrote: The problem with RAID-5 is crappy write performance. Being big or small won't change that. Plus if the db is small why use RAID-5? It's small enough that there are some other things running on the same small server with 4 disk bays ;-) My thinking was that write-back cache might mitigate the poor write performance enough to not be noticed. This db doesn't generally get big batch updates anyway, it's mostly a constant stream of small updates coming in, and I have a hard time imagining 256MB of cache filling up very often. (I have at least a fuzzy understanding of how WAL segments affect the write load.) RAID-1 and RAID-10 are not ruled out, I'm just exploring options. And I'm not actually wanting to use RAID 5; it's RAID 6 that I'm considering... -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] 2 questions re RAID
On Jun 17, 2011, at 11:20 AM, Scott Marlowe wrote: Generally, yes, but the model of the card is more important than the maker. I.e. an Areca 1880 or 1680 is a fantastic performer. But the older 1120 series aren't gonna set the world on fire or anything. And, in further digging, I discover that ATTO ExpressSAS is an option for me. Anyone got comments on these? (I notice that they use ultracapacitor/flash to protect cache...) -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] No implicit index created when adding primary key with ALTER TABLE
On Jun 16, 2011, at 6:52 AM, Stefan Keller wrote: IMO this decision is actually questionable. Agreed. One should not have to look at constraints to figure out if there's an index. One should be able to check that directly from the list of indexes. I would think this is really obvious. (But then again, db tools in general aren't really masters of the obvious when it comes to user interface...) -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
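For what it's worth, checking directly from the list of indexes is possible even when the index was created behind a constraint — a quick sketch against the system catalogs (the table name `foo` is made up for illustration):

```sql
-- List every index on a table, whether created explicitly or
-- implicitly by a PRIMARY KEY / UNIQUE constraint:
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'foo';
```

In psql, `\d foo` shows the same information, including constraint-backed indexes.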
[GENERAL] recommendations on storage, fairly low-end
- RAID 1 with fairly ordinary (7200RPM) SATA drives has been working fine performance-wise (helped by the battery-backed cache on the RAID controller) and it will continue to be fine for the growth we expect. - The RAID card's driver bugs and battery-conditioning glitches have been problematic though, and we need to bump storage from 2TB total to 3TB total, and the RAID card does not support that. - Server is Mac OS X, which limits some choices because of drivers... But I'll take all suggestions and filter them as needed to find something that I can use here. - Obviously, experience regarding particular brands of RAID cards is appreciated. - Also, external RAID boxes with eSATA connections could be considered. - There is the possibility of iSCSI to some managed enterprise storage, but here I would want to know about write order sync issues--what questions to ask regarding this and/or what brands/buzzwords to look for. Any and all advice and links appreciated ;-) -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] performance of count(*)
I need to optimize queries that deal with some aggregates regarding resource availability. My specific problem is, I think, very closely analogous to select count(*)... where... I know roughly how to do it, aggregated stats table, triggers appending to it, occasional updates to coalesce entries. I'd just like to see an example to confirm my own plan and see if I'm missing any details. I'm sure I've seen references to articles on ways to do this, but all google is getting me is generic complaints about count(*) performance and suggestions to use stats for estimated total rows in a table, nothing useful for this. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] performance of count(*)
On May 6, 2011, at 1:39 PM, Tomas Vondra wrote: Anyway I'd recommend to start with the eager approach, it's much easier to implement. You can implement the lazy approach later, if you find out it's needed. With the eager approach, I think I'm too likely to get write conflicts. Thanks for the reference to the paper, I believe that's what I was looking for. And you should strive to use the HOT feature (if you're on >= 8.4), especially with the eager approach - it often does a lot of updates and leads to bloat of the aggregated table. So decrease the fillfactor and do not index the columns that are updated by the triggers. See, that's the kind of info I'm looking for ;-) On May 6, 2011, at 1:59 PM, Andrew Sullivan wrote: If the WHERE clause is fairly selective and indexed, that should be fast. Not as fast as estimates based on trigger-written values in another table, of course, but reasonably fast. So the first order of business is usually to find or create indexes that will make SELECT on the same criteria fast. In this case, it depends on the result of a pretty complex join that involves some gnarly time calculations, and finding the unmatched rows from one side of an outer join. I really don't think there's a way to optimize the straight-up query to be faster than it is, I looked at that for a good long time, explain/analyze and all. Postgres is using the appropriate index to narrow things down as much as it can at the very beginning, it just then has to perform a heck of a lot of work to finish the join... And it's not taking ***that*** long--it's just that I want it faster! It's only unqualified SELECT count(*) that is slow. Generally, the system table is good enough for that, I find. (Someone: How long will this take? Me: There are about 400 million rows to go through. Even if you're off by 50 million at that point, it doesn't matter.) FYI, I have no need for unqualified select count(*) in this app--just doesn't happen, ever ;-)  Thanks.
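For reference, a minimal sketch of the eager approach under discussion — all table, column, and trigger names here are made up, and the UPDATE-then-INSERT upsert is simplified (a concurrent first insert for the same key can still race):

```sql
-- Aggregate table: low fillfactor leaves room on each page so the
-- trigger's frequent updates can stay HOT, per the advice above.
CREATE TABLE item_counts (
    category  integer PRIMARY KEY,
    n         bigint NOT NULL DEFAULT 0
) WITH (fillfactor = 50);

CREATE OR REPLACE FUNCTION maintain_item_counts() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE item_counts SET n = n + 1 WHERE category = NEW.category;
        IF NOT FOUND THEN
            -- First row for this category (simplified; not race-safe).
            INSERT INTO item_counts (category, n) VALUES (NEW.category, 1);
        END IF;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE item_counts SET n = n - 1 WHERE category = OLD.category;
    END IF;
    RETURN NULL;  -- AFTER trigger; the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_count_trg
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE maintain_item_counts();
```

Note that `n`, the column the trigger updates, is deliberately left unindexed — indexing it would defeat HOT updates.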
-- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] performance of count(*)
On May 6, 2011, at 4:15 PM, Andrew Sullivan wrote: Yeah, in that case the HOT suggestions are very important. I strongly recommend you experiment in a test system with real data and pathological cases in particular, in order to see what happens when the outlier cases inevitably, Murphy willing, crop up. That's not to say you should arrange your plans for them, but forewarned is forearmed. Again thanks. The HOT tip led me down the road of paying attention to my indexes, which led me to a nice realization about how to shrink the overall footprint of the materialized aggregates ;-) Which led me to a technique to seriously minimize updates... I didn't have to worry about bloat too much--overall activity level is not huge; the possibility of collisions on updates is mostly because users tend to work on the same very small (but ever-shifting) subset of the data at the same time, but now I think I'm really set! -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] SSDD reliability
On May 4, 2011, at 9:34 PM, David Boreham wrote: So ok, yeah...I said that chips don't just keel over and die mid-life and you came up with the one counterexample in the history of the industry Actually, any of us who really tried could probably come up with a dozen examples--more if we've been around for a while. Original design cutting corners on power regulation; final manufacturers cutting corners on specs; component manufacturers cutting corners on specs or selling outright counterfeit parts... -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] SSDD reliability
Yeah, on that subject, anybody else see this: Absolutely pathetic. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] SSDD reliability
On May 4, 2011, at 10:50 AM, Greg Smith wrote: Your link didn't show up on this. Sigh... Step 2: paste link in ;-) http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] SSDD reliability
On May 4, 2011, at 11:31 AM, David Boreham wrote: To be honest, like the article author, I'd be happy with 300+ days to failure, IF the drives provide an accurate predictor of impending doom. No problem with that, for a first step. ***BUT*** the failures in this article and many others I've read about are not in high-write db workloads, so they're not write wear, they're just crappy electronics failing. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pervasiveness of surrogate (also called synthetic) keys
On May 2, 2011, at 10:53 PM, Rob Sargent wrote: ...and you're at risk of having to reformat them when you buy out your competitor. The scheme described was awfully similar to one that a client of mine used: product family prefix, identifiers within the family. And guess what? The scheme, which had been stable for 20+ years, had to change when a new variant of product was introduced which cut across product families. I don't remember the details. I do remember that I hadn't used the supposedly stable product ids as PKs ;-) -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] SSDs with Postgresql?
On Apr 28, 2011, at 7:21 AM, David Boreham wrote: I don't think you can simply say that I am writing so many Gb WAL files, therefore according to the vendor's spec Also, I fully expect the vendors lie about erase cycles as baldly as they lie about MTBF, so I would divide by a very healthy skepticism factor. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] SSDs with Postgresql?
On Apr 28, 2011, at 8:48 AM, David Boreham wrote: As a former card-carrying semiconductor company employee, I'm not so sure about this. Well, yes, you have a good point that in many, if not all, cases we're dealing with different companies. That really should have occurred to me, that manufacturers of SSDDs (or at least some of them) might not have an ingrained culture of extreme cost cutting and deceptive ratings--I'm going to use feeling under the weather as my excuse. (Of course reliability of some early consumer-grade SSDDs was abysmal, but that should be a fairly easy problem to avoid.) MTBF otoh is a mythical computed value... It's not only mythical, it's not even remotely realistic, to the point that it is no exaggeration to call it a bald-faced lie. Sorry, don't remember the university, but there was a nice study of large numbers of disks in data centers, and the result was that actual lifespans were so far from MTBF specs, that the remaining disks would have to just about outlive the universe in order to get the mean near the same order of magnitude as the published numbers. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pervasiveness of surrogate (also called synthetic) keys
On Apr 28, 2011, at 11:53 AM, Rob Sargent wrote: Hm, I get the feeling that only the good folks at Hibernate seem to think using a natural key is the _only_ way to go. Well, natural keys are quite obviously the way to go, when they exist. The problem is, they usually don't really exist. What's usually proposed as a natural key, will upon further investigation, either not be guaranteed unique, or not guaranteed to be unchanging, or both. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] SSDs with Postgresql?
On Apr 21, 2011, at 9:44 AM, Florian Weimer wrote: But log files are recycled, so looking at the directory alone does not seem particularly helpful. You have to look at the file timestamps. From that you can get an idea of traffic. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] temp tables not dropping at end of script
On Apr 6, 2011, at 9:47 AM, Davenport, Julie wrote: We’ve never explicitly closed the connection, it just seemed to close automatically when the coldfusion script ended. My guess is you've also upgraded coldfusion, or changed its config, and now it's caching connections. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Trigger Function return values
On Mar 22, 2011, at 4:12 PM, Andy Chambers wrote: How is the return value of a trigger function defined in plpgsql used? I can't find anything in the documentation but some of the examples return NULL, and others return something like NEW. http://www.postgresql.org/docs/9.0/static/trigger-definition.html -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
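In short, per that page: for BEFORE row triggers the return value matters — returning NULL skips the operation for that row, while returning NEW (possibly modified) lets it proceed — and for AFTER row triggers the return value is ignored. A hypothetical sketch (table and column names made up):

```sql
-- Illustrative BEFORE trigger showing both return conventions.
CREATE OR REPLACE FUNCTION filter_payments() RETURNS trigger AS $$
BEGIN
    IF NEW.amount < 0 THEN
        RETURN NULL;  -- silently skip inserting/updating this row
    END IF;
    NEW.amount := round(NEW.amount, 2);  -- alter the row before it's stored
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER payments_filter
    BEFORE INSERT OR UPDATE ON payments
    FOR EACH ROW EXECUTE PROCEDURE filter_payments();
```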
Re: [GENERAL] query execution time
On Mar 21, 2011, at 9:55 AM, preetika tyagi wrote: For example, if A is 15 minutes, then B is 1.5 hrs. Well, considering that random disk access is on the order of 10,000 times slower than RAM... But you can answer the question yourself by comparing the query run against cold caches (after a reboot, or various command-line tricks to purge cache) vs against warm caches (twice back-to-back). -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
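One way to see the cache effect directly, assuming PostgreSQL 9.0 or later — the query here is just a made-up example:

```sql
-- BUFFERS counters distinguish cached from uncached page fetches:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;
-- In the output, "Buffers: shared hit=..." counts pages served from
-- PostgreSQL's own buffer cache, while "read=..." counts pages fetched
-- from the OS page cache or disk. A cold run shows mostly reads; a
-- warm back-to-back run shows mostly hits.
```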
Re: [GENERAL] query execution time
On Mar 21, 2011, at 12:03 PM, preetika tyagi wrote: I tried running the same query after reboot and back-to-back, and it took less time in both cases. It means the problem is something else. Can there be a reason which is more hardware/operating-system specific and due to which the behavior is not uniform? While I do have a couple of ideas, you're probably better served by letting those here with more optimization experience help you, as their answers will be more complete. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Primary key vs unique index
On Mar 17, 2011, at 6:19 AM, Voils, Steven M wrote: Is there a fundamental difference between a primary key and a unique index? Currently we have primary keys on tables that have significant amounts of updates performed on them, and as a result the primary key indexes are becoming significantly bloated. There are other indexes on the tables that also become bloated as a result of this, but these are automatically rebuilt periodically by the application (using the concurrently flag) when read usage is expected to be very low. We don't want to remove the unique constraint that the primary key is providing, but the space on disk will continue to grow unbounded so we must do something. Can we replace the primary key with a unique index that could be rebuilt concurrently, or would this be considered bad design? The reasoning behind this would be that the unique index could be rebuilt concurrently without taking the application down or exclusively locking the table for an extended period of time. Are there other advantages to a primary key outside of a uniqueness constraint and an index? So, you can't use REINDEX because it lacks the CONCURRENTLY option, and that would lock too much for too long? The only thing the primary key designation provides beyond not-null unique is the metadata about what is the primary key. Which for example in the db allows foreign key constraints to be created without specifying that column. And some ORMs/apps/frameworks can automatically make use of the information as well. I like having them for clarity, but you really can do away with them if your deployment needs to do so. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
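A hypothetical sketch of that swap (table, column, and index names are made up; the `ADD CONSTRAINT ... USING INDEX` form requires PostgreSQL 9.1 or later, and dropping the old constraint will fail if foreign keys reference it):

```sql
-- Build a fresh unique index without blocking writes:
CREATE UNIQUE INDEX CONCURRENTLY foo_pkey_new ON foo (id);

-- Then swap it in behind the primary key constraint; only this short
-- transaction takes an exclusive lock:
BEGIN;
ALTER TABLE foo DROP CONSTRAINT foo_pkey;
ALTER TABLE foo ADD CONSTRAINT foo_pkey PRIMARY KEY USING INDEX foo_pkey_new;
COMMIT;
```

On versions before 9.1, the fallback is to drop the primary key entirely and rely on a NOT NULL column plus the concurrently built unique index, giving up only the PK metadata as described above.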
Re: [GENERAL] NULL value vs. DEFAULT value.
On Mar 8, 2011, at 7:54 AM, James B. Byrne wrote: My question is: Why am I getting a NULL exception? Because you're trying to insert NULL explicitly? -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: *****SPAM***** [GENERAL] Index question
On Mar 2, 2011, at 11:31 AM, Michael Black wrote: Ok. I have been working with databases a few years but my first real venture in to PostgreSql. I just want a plain simple index regardless if there are duplicates or not. How do I accomplish this in PostgreSql? Same as any other SQL database: create index foobaridx on foo(bar)... -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Index question
On Mar 2, 2011, at 11:43 AM, Michael Black wrote: Thanks Scott. I just did not see the options in the PGAdmin III nor in the doc at You may want to bookmark this: http://www.postgresql.org/docs/9.0/static/sql-commands.html -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Thoroughly confused about time zones
On Feb 28, 2011, at 8:06 AM, Rob Richardson wrote: But if PostgreSQL doesn’t store time zones internally, then that difference is going to be 24 hours, which doesn’t help me. No, postgres stores timestamptz as UTC, so that calculation will work exactly like you want. -- Scott Ribe scott_r...@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
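A quick illustration of that normalization — the timestamps are arbitrary examples:

```sql
-- timestamptz input is converted to UTC, so two literals naming the
-- same instant in different zones compare equal:
SELECT timestamptz '2011-02-28 10:00-05' = timestamptz '2011-02-28 15:00+00';
-- true

-- And subtraction yields the actual elapsed time, regardless of the
-- zones the values were entered in:
SELECT timestamptz '2011-03-01 08:00-05' - timestamptz '2011-02-28 08:00-05';
-- interval '1 day'
```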