[GENERAL] VIEW problem

2000-10-04 Thread Tim Uckun
As long as the SQL statement that the view is based on is still valid, why does it care if the table is dropped and recreated?

Re: [GENERAL] VIEW problem

2000-10-05 Thread Tim Uckun
which is to say that it's nothing more than a SQL statement. As long as that SQL statement is valid, parseable, and returns a recordset, it really ought not to care about OIDs.
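
A minimal sketch of the dependency behaviour under discussion (generic names, not from the thread): PostgreSQL records the view's dependency on the table by OID, so the view does not survive a drop-and-recreate of the underlying table.

  CREATE TABLE t (id int, name text);
  CREATE VIEW v AS SELECT id, name FROM t;
  DROP TABLE t;            -- fails: view v depends on table t
  DROP TABLE t CASCADE;    -- succeeds, but drops the view as well
  CREATE TABLE t (id int, name text);
  CREATE VIEW v AS SELECT id, name FROM t;   -- the view must be recreated against the new table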

Re: [GENERAL] postgresql 7.1

2000-10-11 Thread Tim Uckun
engines. WAL is a backup system. TOAST is a system for working with rows that have to use more than the 8K limitation. AFAIK! What happened to outer joins? Don't you need outer joins to compete with the big boys?

Re: [GENERAL] postgresql 7.1

2000-10-12 Thread Tim Uckun
At 01:37 AM 10/12/2000 -0400, Tom Lane wrote: Tim Uckun [EMAIL PROTECTED] writes: What happened to outer joins? Don't you need outer joins to compete with the big boys? They're done too ;-) Wooo Hooo time to break out the champagne

Re: [GENERAL] Re: PostgreSQL book

2000-10-12 Thread Tim Uckun
At 04:58 PM 10/12/00 -0400, Louis Bertrand wrote: Thanks. It helped cheer me up: I'm fighting with MS-Access at the moment (and losing badly). Error Number 3135 There is no message for this error.

Re: [GENERAL] Simple Question: Case sensitivity - Performance?

2000-12-11 Thread Tim Uckun
the database in any way. I am surprised nobody has done this yet. Is there a document which describes how to create locales?

Re: [GENERAL] Re: RE: Re: MySQL and PostgreSQL speed compare

2001-01-03 Thread Tim Uckun
that are dribbled here and there... You are right of course, but what happens once you have learned it? For me, I never seem to be able to do the right thing, that being "now that I have solved the problem I should write it down and submit it to the maintainers of the document".

Re: [GENERAL] new type proposal

2001-02-06 Thread Tim Uckun
Is the currency datatype working with Access and ODBC yet?

[GENERAL] citext in windows.

2008-02-26 Thread Tim Uckun
I just downloaded postgres 8.3 for windows and noticed that citext is not an option for columns. The web site has the source code but no binaries for windows. I downloaded the enterprisedb and noticed that it has the citext.dll in the lib directory so I copied the dll over to the postgres lib

[GENERAL] UUID-OSSP for windows.

2008-02-26 Thread Tim Uckun
Where can I get uuid-ossp for Windows? Also, where can I get citext for Windows? These two are missing from the Windows installer. Thanks.

Re: [GENERAL] UUID-OSSP for windows.

2008-02-27 Thread Tim Uckun
citext is not part of core PostgreSQL or something we have any intention to include in the Windows distribution at this time. Is there an alternative for people wanting a case insensitive collation?

[GENERAL] calendar best practices for postgres

2008-04-06 Thread Tim Uckun
Does anybody know of an open source application which leverages postgres to build a scheduling/calendaring application. Especially if it uses some of the datetime types and functions that are unique to postgres. I am specifically interested in methods to deal with recurring events and dealing

[GENERAL] Is the primary key constraint also an index?

2008-08-12 Thread Tim Uckun
If I have a primary key constraint defined in the database, do I also need to create an index on that field for fast lookup? The documentation on the web seems to imply that the constraint is not an index. Is that right? What's the difference between creating a unique, not null index and setting a
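
For reference, a primary key constraint is enforced by a unique index that PostgreSQL creates automatically, so no extra index is needed for lookups on that column. A sketch with a hypothetical table name:

  CREATE TABLE demo (id integer PRIMARY KEY);
  -- the implicitly created index shows up alongside any manually created ones
  SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'demo';
  -- indexname: demo_pkey, indexdef: CREATE UNIQUE INDEX demo_pkey ON demo USING btree (id)

The practical difference from a hand-made unique, not-null index is largely declarative: the constraint also implies NOT NULL and documents intent, while the underlying index is the same kind of unique btree.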

[GENERAL] Question about NOT NULL and default values.

2008-10-16 Thread Tim Uckun
Hey all. I am using postgres 8.3 with a rails application. I have a column defined like this. ALTER TABLE provisions ADD COLUMN provider_id integer; ALTER TABLE provisions ALTER COLUMN provider_id SET STORAGE PLAIN; ALTER TABLE provisions ALTER COLUMN provider_id SET NOT NULL; ALTER TABLE

Re: [GENERAL] Question about NOT NULL and default values.

2008-10-16 Thread Tim Uckun
Not sure how you do this in rails but if you use DEFAULT in the query it works: create table a(a int default 5); CREATE TABLE insert into a(a) values (DEFAULT); INSERT 0 1 Unfortunately the SQL is being generated by the ORM. I really don't want to bypass the ORM; that would be way too
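
A possible workaround at the database level, assuming the ORM omits the column entirely rather than sending an explicit NULL: give the NOT NULL column a default, so the generated INSERT no longer violates the constraint.

  -- 0 is an arbitrary placeholder value, not from the thread
  ALTER TABLE provisions ALTER COLUMN provider_id SET DEFAULT 0;
  -- an INSERT that leaves provider_id out now gets 0 instead of NULL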

Re: [GENERAL] Question about NOT NULL and default values.

2008-10-16 Thread Tim Uckun
Are you using the ruby-pg interface? I was under the impression it handled this properly. I am using postgres-pr.

Re: [GENERAL] Question about NOT NULL and default values.

2008-10-16 Thread Tim Uckun
On Fri, Oct 17, 2008 at 4:40 PM, Scott Marlowe [EMAIL PROTECTED] wrote: On Thu, Oct 16, 2008 at 9:09 PM, Tim Uckun [EMAIL PROTECTED] wrote: Hey all. I am using postgres 8.3 with a rails application. I have a column defined like this. ALTER TABLE provisions ADD COLUMN provider_id integer

[GENERAL] Huge sample dataset for testing.

2009-04-28 Thread Tim Uckun
Does anybody know if there is a sample database or text files I can import to do some performance testing? I would like to have tables with tens of millions of records if possible.

Re: [GENERAL] Huge sample dataset for testing.

2009-04-28 Thread Tim Uckun
I would like to have tables with tens of millions of records if possible. It is easy to create such a table: test=# create table huge_data_table as select s, md5(s::text) from generate_series(1,10) s; Thanks, I'll try something like that. I guess I can create some random dates or something
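
A hedged variant of that approach with an explicit row count and a pseudo-random timestamp column (the numbers are arbitrary; expect several GB of disk and a long run time):

  CREATE TABLE huge_data_table AS
  SELECT s AS id,
         md5(s::text) AS payload,
         timestamp '2009-01-01' + (random() * 365 || ' days')::interval AS created_at
  FROM generate_series(1, 20000000) s;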

Re: [GENERAL] [PERFORM] Postgres Clustering

2009-07-07 Thread Tim Uckun
2009/5/28 Eddy Ernesto Baños Fernández eeba...@estudiantes.uci.cu: Try Cybercluster I looked into that. There is one piece of documentation that is less than ten pages long. There is no users group, no listserve, no community that I can discern. Do you have experience with it and if so

Re: [GENERAL] Replication

2009-07-07 Thread Tim Uckun
Does anybody have any experience with Tungsten or Sequoia they would like to share? How about pgcluster or cybercluster? There are a lot of options but not a lot of insight or documentation, really. Also note that the Postgres-R web site says it's not production ready and it will take months if

[GENERAL] ubuntu packages for 8.4

2009-07-09 Thread Tim Uckun
I don't see any Ubuntu packages for 8.4 in the default repositories. Does anybody know if they will be upgrading the postgresql package to 8.4 or creating a new package for it? I'd rather use the packages than compile it myself. If anybody has an argument as to why I should compile it I am

Re: [GENERAL] ubuntu packages for 8.4

2009-07-10 Thread Tim Uckun
On Fri, Jul 10, 2009 at 9:22 PM, Stuart Bishop stu...@stuartbishop.net wrote: On Fri, Jul 10, 2009 at 8:28 AM, Tim Uckun timuc...@gmail.com wrote: I don't see any ubuntu packages for 8.4 in the default repositories. Does anybody know if they will be upgrading the postgresql package to 8.4 or

[GENERAL] A question about pg_standby.

2009-07-23 Thread Tim Uckun
I am using pg_standby to set up a warm standby. Everything seems to be working OK so far but I do have one question. I wanted to check to make sure that the replication was happening, so I created the trigger file which put the database into live mode. I ran a query on the standby to make sure the

[GENERAL] A question about the permissions

2009-07-27 Thread Tim Uckun
I am trying to monitor replication lag using zabbix. I have written a simple script in ruby to get the lag it goes like this. require 'date' require 'yaml' y = YAML.load `/usr/lib/postgresql/8.3/bin/pg_controldata /var/lib/postgresql/8.3/main` last_checkpoint = DateTime.parse( y['Time of latest

Re: [GENERAL] A question about the permissions

2009-07-28 Thread Tim Uckun
On lots of systems, giving group permissions is nearly as bad as giving world permissions (eg, all the users might be in a users group). So we don't do it by default.  If you want to poke holes in the security of your own installation, go right ahead. I decided to see if I could do it without

[GENERAL] Monitoring with pg_controldata

2009-07-29 Thread Tim Uckun
I am trying to monitor my replication lag with pg_controldata and it's driving me nuts. If I run pg_controldata from the command line as user postgres or root I get the following line Time of latest checkpoint:Thu 30 Jul 2009 00:36:12 NZST If I run it from the crontab I get this

Re: [GENERAL] Monitoring with pg_controldata

2009-07-29 Thread Tim Uckun
O cron runs programs in a very limited environment. Things like TZ etc. are usually not set. To see what your cron sets, just run a shell script with something like #!/bin/sh env and look at the email you get with the output. Read the cron/crontab manpage. It tells you how to set

[GENERAL] Query not using the indexes properly.

2009-10-01 Thread Tim Uckun
I have a pretty simple query on a pretty simple table with about 60 million records in it. This is the query. SELECT * FROM changes WHERE (id > 1935759 and company_id = 4 and source_model_name = 'CommissionedVisit') ORDER BY id ASC LIMIT 1 The id field is the primary key. The other fields are

Re: [GENERAL] Query not using the indexes properly.

2009-10-01 Thread Tim Uckun
It probably thinks the id check is going to be better to limit the result set. How many records are there for id > 1935759? About 40 million or so. vs How many records for company_id = 4 and source_model_name = 'CommissionedVisit'? If this is a common query you could probably do a

Re: [GENERAL] Query not using the indexes properly.

2009-10-01 Thread Tim Uckun
If you try the multi-column index (which is a good idea), be sure that id is the last of the three columns, since that's the column on which you have an inequality test rather than an equality test; eg, (company_id,source_model_name,id). Interesting. I would have thought the order of the
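
A sketch of the suggested index (the index name is made up): equality columns first, the range-filtered id last, so the scan can jump straight to the matching company/model and then walk the ids in order.

  CREATE INDEX changes_company_model_id_idx
      ON changes (company_id, source_model_name, id);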

[GENERAL] Postgres won't start. Nothing in the log.

2009-10-04 Thread Tim Uckun
I just did an upgrade on two of my servers (the main and the failover). The main went OK but the postgres on the failover won't start. Unfortunately there is nothing anywhere telling me what the problem is. The log file is empty, there is nothing in the /var/log/messages or /var/log/syslog

Re: [GENERAL] Postgres won't start. Nothing in the log.

2009-10-06 Thread Tim Uckun
I know, from IRC, the problem has been solved, there was no space on the disk ... Unfortunately, I haven't got the logs. Yes. Thanks to everybody on the IRC who helped me out. The suggestion that was most helpful was to call the postgres binary directly: /usr/lib/postgresql/8.3/bin/postgres. Calling

[GENERAL] unable to restore. pg_restore: implied data-only restore

2011-05-24 Thread Tim Uckun
I am trying to backup one database and restore it into a new schema in another database. Database1 has the tables in the public schema; database2 has some tables in the public schema but their names will clash, so the database needs to be restored into a different schema. I back up like this:

Re: [GENERAL] unable to restore. pg_restore: implied data-only restore

2011-05-24 Thread Tim Uckun
There is no support for that built into pg_dump.  You could try: That's too bad. * dumping to a text script and doing search-and-replace for the schema name on the script file. I did a dump without privileges or owners so I was thinking I could just replace the SET search_path = public,

[GENERAL] Detecting duplicates in messy data

2011-06-06 Thread Tim Uckun
I have a couple of tables (people and addresses) which are using serials as primary keys and contain a lot of potentially duplicate data. The problem is that the data has not been input in a careful way, so for example you have first_name, middle_name and last_name fields but you could have

[GENERAL] Unexpected results with joins on dates

2011-07-11 Thread Tim Uckun
I have three tables: traffic, sales and dates. Both the traffic table and the sales table have multiple entries per date, with each row representing the date, some subdivision, and the total. For example, every day five divisions could be reporting their sales, so there would be five entries in the

Re: [GENERAL] Unexpected results with joins on dates

2011-07-12 Thread Tim Uckun
On Tue, Jul 12, 2011 at 3:01 PM, David Johnston pol...@yahoo.com wrote: If traffic has 5 records on a date and sales has 4 on the same date you would output 20 records for that date. What would I have to do in order to get 9 records instead of 20. Like a union but with dissimilar schema. --

[GENERAL] Unexpected results when joining on date fields

2011-07-12 Thread Tim Uckun
I have two tables, traffic and sales. Each one has a date field and lists the traffic and sales broken down by various parameters (multiple rows for each date). If I run select (select count(*) from traffic) as traffic, (select count(*) from sales) as sales; I get the following 49383;167807 if

Re: [GENERAL] Unexpected results with joins on dates

2011-07-12 Thread Tim Uckun
Only the first SELECT is used to define column types and names (in the case of NULL AS source2_* I am not positive if you need to cast the NULL or if it will use the type found in the second SELECT) and I generally put a source field into the output with a textual representation of which
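
A sketch of that UNION ALL shape with guessed column names (report_date, division, total are assumptions, not from the thread); it yields the 5 + 4 = 9 rows per date instead of the 5 × 4 = 20 a join would produce.

  SELECT report_date, division, total AS traffic_total, NULL::numeric AS sales_total, 'traffic' AS source
    FROM traffic
  UNION ALL
  SELECT report_date, division, NULL::numeric, total, 'sales'
    FROM sales
   ORDER BY report_date;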

[GENERAL] Select count with offset returns nothing.

2011-08-05 Thread Tim Uckun
I am using a library which is emitting SQL like this SELECT COUNT(*) FROM batches LIMIT 15 OFFSET 15 the library fails because on postgres this query returns nothing (not even zero as a result). Presumably it returns some valid value on mysql and other databases. Other than hacking the library

Re: [GENERAL] Select count with offset returns nothing.

2011-08-05 Thread Tim Uckun
Yeah, I figured it would need a subquery. I filed a ticket with the library. Hopefully they will fix it.
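
The reason the original query returns no rows is that LIMIT/OFFSET is applied after aggregation: COUNT(*) produces exactly one row, and OFFSET 15 skips past it. One way to count the rows of a page instead is to push the LIMIT into a subquery; a sketch:

  SELECT COUNT(*) FROM (SELECT 1 FROM batches LIMIT 15 OFFSET 15) AS page;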

[GENERAL] Getting TOAST errors

2011-09-19 Thread Tim Uckun
I am occasionally getting this kind of error when attempting a SELECT statement. PGError: ERROR: missing chunk number 0 for toast value 27143 in pg_toast_2619 What does this mean? Is some sort of corruption creeping into the database? Postgres 9.0, Linux.

Re: [GENERAL] Getting TOAST errors

2011-09-19 Thread Tim Uckun
Hard to tell.  We've seen enough reports like that to make it seem like there may be some bug buried there, but no one has provided anything to do any debugging work with.  Can you create a reproducible test case? Not really. I have a nightly process which downloads data and sticks it into a

Re: [GENERAL] Getting TOAST errors

2011-09-19 Thread Tim Uckun
Not really. I have a nightly process which downloads data and sticks it into a text field. Afterwards another process reads that text data and processes it creating rows in another table. The problem occurs in the last step and at seemingly random intervals. For example one time it might

Re: [GENERAL] Getting TOAST errors

2011-09-19 Thread Tim Uckun
A self-contained test case (code and data) that triggers the error. If it only does so probabilistically, once in every-so-many runs, that's fine. I'll see what I can do. Give me a few days. Cheers.

[GENERAL] HA options

2012-01-16 Thread Tim Uckun
Hey guys. It's been a while since I looked into this and it seems like new options have cropped up for postgres HA and scalability. Is there a consensus on the best way to achieve HA? My primary concern is HA, but of course any scalability gains would be more than welcome. All the servers will

Re: [GENERAL] HA options

2012-01-16 Thread Tim Uckun
On Tue, Jan 17, 2012 at 10:47 AM, David Morton davidmor...@xtra.co.nz wrote: Is shared storage an option for you ? We've had a fairly pleasant experience with shared storage partnered up with SLES and its HAE (high availability extension) suite using a Pacemaker cluster for resource control. On

Re: [GENERAL] HA options

2012-01-16 Thread Tim Uckun
virtual servers tend to have lousy storage performance, for what thats worth.  the actual physical resources are being shared by who knows what other workloads, and they tend to be higher latency than direct-attach storage, or proper SAN. I realize that. Eventually we might have to go to

Re: [GENERAL] HA options

2012-01-16 Thread Tim Uckun
I wonder.  If it's a write heavy database, I totally agree with you.  But if it's mostly read-only, and mostly fits in ram, then a pgpool of servers should be faster. Be nice to know the usage patterns of this database (and size). In this case the databases are small to medium and the usage

Re: [GENERAL] HA options

2012-01-16 Thread Tim Uckun
On Tue, Jan 17, 2012 at 12:31 PM, David Morton davidmor...@xtra.co.nz wrote: Have you looked at a 'shared storage' solution based on DRBD ? I configured a test environment using SLES HAE and DRBD with relative ease and it behaved very well (can probably supply a build script if you like), there

Re: [GENERAL] Does Version 9.1 Streaming Replication Supports Multi-Master?

2012-01-17 Thread Tim Uckun
Only a single-master. If you want a multi-master solution, see Postgres-XC. Is Postgres-XC production ready? Can I trust my most valuable data to it? Cheers.

Re: [GENERAL] HA options

2012-01-17 Thread Tim Uckun
I have a few clusters running on EC2 using DRBD to replicate between availability zones. It's not fast, but it works. If your write load is under 30MB/sec it's definitely an option. I run DRBD over SSH tunnels to get around the random IP address issue. I use heartbeat on top for resource

Re: [GENERAL] HA options

2012-01-17 Thread Tim Uckun
because you quickly get trapped into OS specific quicksand with these features. Isn't that an issue with just about every feature? Besides, the issues have mostly been solved already. pgpool already exists. Tatsuo Ishii says porting it to Windows is just a resource issue, as he doesn't have the

[GENERAL] A better COPY?

2012-02-26 Thread Tim Uckun
I have a situation where I am pulling CSV data from various sources and putting them into a database after they are cleaned up and such. Currently I am doing the bulk of the work outside the database using code, but I think the work would go much faster if I were to import the data into temp tables

Re: [GENERAL] A better COPY?

2012-02-27 Thread Tim Uckun
1. COPY from a text field in a table like this COPY from (select text_field from table where id =2) as text_data ... The syntax is a bit different: CREATE TABLE text_data AS select text_field from table where id=2 Really? Wow, I would have never guessed that. That's awesome. Thanks.

[GENERAL] Partial matches on full text searches.

2012-02-28 Thread Tim Uckun
I am trying to understand this bit of documentation about GiST and GIN searches: Also, * can be attached to a lexeme to specify prefix matching: SELECT to_tsquery('supern:*A & star:A*B'); to_tsquery -- 'supern':*A & 'star':*AB I tried various experiments but can't
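
A small experiment along the lines of that documentation snippet, using the english configuration:

  SELECT to_tsvector('english', 'supernovae stars') @@ to_tsquery('english', 'supern:*');  -- true: prefix match
  SELECT to_tsvector('english', 'supernovae stars') @@ to_tsquery('english', 'super');     -- false: without :* there is no prefix matching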

[GENERAL] full text search and ILIKE type clauses.

2012-03-11 Thread Tim Uckun
I want to be able to search a lot of fields using queries that use ILIKE and unfortunately many of the queries will be using the '%SOMETHING%' or '%SOMETHING' type clauses. Since indexes are useless on those I was thinking I could use tsvectors but I can't figure out how to accomplish this. One

Re: [GENERAL] full text search and ILIKE type clauses.

2012-03-11 Thread Tim Uckun
If you're using 9.1, you might look into contrib/pg_trgm instead. If I was to use trgm would it be better to create a trigram index on each text field? In the past I have created a text field which contains the rest of the fields concatenated. That works great as long as you are looking for
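
A sketch of the pg_trgm route on 9.1+ (table and column names are placeholders): a trigram GIN index can service LIKE/ILIKE patterns even with a leading wildcard.

  CREATE EXTENSION pg_trgm;
  CREATE INDEX people_name_trgm_idx ON people USING gin (name gin_trgm_ops);
  -- this pattern would otherwise force a sequential scan
  SELECT * FROM people WHERE name ILIKE '%something%';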

Re: [GENERAL] full text search and ILIKE type clauses.

2012-03-12 Thread Tim Uckun
We made most of our text/varchar columns citext data types so that we could do case insensitive searches.  Is this going to negate most of the index searches?  It appeared to our DBA that it would be easier to use the citext data type than to need to use ILIKE instead? In the same vein... Does

Re: [GENERAL] Upgrade questions

2012-03-12 Thread Tim Uckun
However, given the size of this table, I have no idea how long something like this might take.  In general I've had a tough time getting feedback from postgres on the progress of a query, how long something might take, etc. You can always do this which would result in minimum hassles.

Re: [GENERAL] full text search and ILIKE type clauses.

2012-03-12 Thread Tim Uckun
It is my understanding that since the extension citext is available, this gives you what you're asking for, and at least at this point it isn't going to be part of the core. For me it's more of a workaround than a solution, but yes, probably good enough. Collation is more subtle than case
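
For the workaround being discussed, a minimal citext sketch (9.1+; the table is hypothetical): comparisons on a citext column are case-insensitive without any ILIKE or lower() gymnastics.

  CREATE EXTENSION citext;
  CREATE TABLE accounts (email citext UNIQUE);
  INSERT INTO accounts VALUES ('Tim@Example.com');
  SELECT * FROM accounts WHERE email = 'tim@example.com';   -- matches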

[GENERAL] Backing up through a database connection (not pg_dump)

2012-03-26 Thread Tim Uckun
Is there a way to backup a database or a cluster through a database connection? I mean I want to write some code that connects to the database remotely and then issues a backup command like it would issue any other SQL command. I realize the backups would need to reside on the database server.

Re: [GENERAL] Backing up through a database connection (not pg_dump)

2012-03-26 Thread Tim Uckun
On Tue, Mar 27, 2012 at 1:00 PM, David Boreham david_l...@boreham.org wrote: fwiw we run db_dump locally, compress the resulting file and scp or rsync it to the remote server. I wanted to see if I can do that without running pg_dump on the remote server. That would involve connecting to the

Re: [GENERAL] Backing up through a database connection (not pg_dump)

2012-03-26 Thread Tim Uckun
We're also using libpq to trigger backups using NOTIFY from a client app. Do you have an example of how this is done?

[GENERAL] Updateable Views or Synonyms.

2012-05-29 Thread Tim Uckun
I am wondering if either of these features are on the plate for postgres anytime soon? I see conversations going back to 2007 on updateable views and some conversations about synonyms but obviously they have never been added to the database for some reason or another. With regards to synonyms. It

Re: [GENERAL] Updateable Views or Synonyms.

2012-06-03 Thread Tim Uckun
what sorts of operations do synonyms need to support? truncate? alter table? reindex? I am no expert or anything but I would think they would be like symlinks so yes to all of the above.

[GENERAL] Hot standby streaming replication doesn't work

2012-06-24 Thread Tim Uckun
I am following the instructions on the wiki https://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#PITR.2C_Warm_Standby.2C_Hot_Standby.2C_and_Streaming_Replication using the 10 minute version of the setup. On the master I have postgresql.conf wal_level = hot_standby max_wal_senders = 3

Re: [GENERAL] Hot standby streaming replication doesn't work

2012-06-24 Thread Tim Uckun
I am using 9.1. Apparently it's working now, it took a couple of restarts but it seems to be going. Thanks. On Mon, Jun 25, 2012 at 12:57 PM, Michael Nolan htf...@gmail.com wrote: On Sun, Jun 24, 2012 at 1:57 AM, Tim Uckun timuc...@gmail.com wrote: I am following the instructions
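
For 9.1-era streaming replication, a couple of quick sanity checks (later releases rename the *_location columns to *_lsn):

  -- on the master: one row per connected standby
  SELECT client_addr, state, sent_location, replay_location FROM pg_stat_replication;
  -- on the standby
  SELECT pg_is_in_recovery();                 -- true while running as a hot standby
  SELECT pg_last_xact_replay_timestamp();     -- commit time of the last replayed transaction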

[GENERAL] create database from template requires the source database to be unused

2012-06-27 Thread Tim Uckun
I am sure this is intended behavior but it seems odd (and inconvenient) to me. create database tim_test_copy template tim_test ERROR: source database tim_test is being accessed by other users DETAIL: There are 1 other session(s) using the database. I would presume only reads are required from
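
CREATE DATABASE ... TEMPLATE copies the source database at the file level, so it refuses to run while anyone else is connected to the source. A common workaround (not from the thread) is to terminate those sessions first; note the pg_stat_activity column is procpid before 9.2 and pid afterwards.

  SELECT pg_terminate_backend(procpid)
    FROM pg_stat_activity
   WHERE datname = 'tim_test' AND procpid <> pg_backend_pid();

  CREATE DATABASE tim_test_copy TEMPLATE tim_test;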

Re: [GENERAL] Shortcut evaluation in OR or IN

2013-05-07 Thread Tim Uckun
Thanks for the explanation. On Mon, May 6, 2013 at 8:43 PM, Jasen Betts ja...@xnet.co.nz wrote: On 2013-05-06, Tim Uckun timuc...@gmail.com wrote: Say I have a select like this. SELECT * FROM table where field

[GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-01 Thread Tim Uckun
I have the following query. with parsed_data as ( SELECT devicereportedtime , DATE_TRUNC('minute', devicereportedtime - (EXTRACT(minute FROM devicereportedtime)::integer % 5 || 'minutes')::interval) as interval_start FROM systemevents WHERE devicereportedtime =

Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tim Uckun
I can always convert it. I am just not getting the right offset when I convert. That's what's puzzling. On Wed, Oct 2, 2013 at 9:15 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote: Tim Uckun wrote: I have the following query. [...] SELECT interval_start, (interval_start AT TIME ZONE

Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tim Uckun
Theodore tony.theod...@gmail.com wrote: On 02/10/2013, at 6:49 PM, Tim Uckun timuc...@gmail.com wrote: The reason for that is that in PostgreSQL there is no time zone information stored along with a timestamp with time zone, it is stored in UTC. That seems unintuitive. What is the difference

Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tim Uckun
I do think it would help to add it to the docs. The name TIMESTAMP WITH TIME ZONE clearly implies the time zone is stored in the field. One day there will really be a timestamp type with the time zone embedded in it, and I wonder what they will call that.
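
A small illustration of what the thread settles on: timestamptz stores an absolute instant (normalised to UTC), the session time zone only affects parsing and display, and AT TIME ZONE converts between the two timestamp flavours (the example zone is chosen arbitrarily).

  SET timezone = 'Pacific/Auckland';
  SELECT '2013-10-01 12:00:00+00'::timestamptz;                     -- displayed as 2013-10-02 01:00:00+13
  SELECT '2013-10-01 12:00:00+00'::timestamptz AT TIME ZONE 'UTC';  -- timestamp without zone: 2013-10-01 12:00:00
  SELECT '2013-10-01 12:00:00'::timestamp AT TIME ZONE 'UTC';       -- timestamptz: the bare value is taken as UTC wall-clock time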

[GENERAL] invisible dependencies on a table?

2013-12-12 Thread Tim Uckun
I have a table foo. It has a serial column called id. I execute the following statements: ALTER TABLE table_name RENAME TO archived_table_name; CREATE TABLE table_name (LIKE archived_table_name INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES); Archive the table

Re: [GENERAL] invisible dependencies on a table?

2013-12-14 Thread Tim Uckun
I cannot readily speak to why you are not seeing sequence ownership as a dependent when looking at the now-archive table definition. pgAdmin knows it's a dependency, because when you try to drop it you get a message saying so, but I can't see it in the definition of the table. BTW is there a

Re: [GENERAL] invisible dependencies on a table?

2013-12-14 Thread Tim Uckun
As I've marked here, both default expressions are depending on the sequence, but there's only one ownership dependency of the sequence on a column. To complete the switchover you'd need to use ALTER SEQUENCE ... OWNED BY ... to move that ownership dependency to the new table. Then the old
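
The corresponding statement, with the sequence name guessed from the usual serial naming convention (table_name_id_seq):

  ALTER SEQUENCE table_name_id_seq OWNED BY table_name.id;
  -- the ownership dependency now points at the new table,
  -- so dropping the archived copy no longer takes the sequence with it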

[GENERAL] Database snapshots or clones for staging and testing.

2014-01-30 Thread Tim Uckun
Hi all. I have the following scenario I want to accomplish. In order to test a new branch of code I want to create a snapshot of the live database into a testing database. The code will be deployed after that and it may run some migrations which will change the schema of the database. The code

[GENERAL] I can't seem to put the right combination of magic into the pg_hba and pg_ident files.

2009-11-08 Thread Tim Uckun
I want to accomplish what I would think would be a simple thing. I want the root user to be able to connect to the postgres database as user postgres from the local machine without passwords. Since I am doing this from a program I don't want to use the su facility. I have tried a lot of

Re: [GENERAL] I can't seem to put the right combination of magic into the pg_hba and pg_ident files.

2009-11-08 Thread Tim Uckun
I suspect you are expecting that the map will cause root to be logged in as postgres without asking for that.  It won't. What it will do is allow psql -U postgres and similar to work. That's exactly what I am looking to do. In my case I have a script that runs as root. I want to log in as

Re: [GENERAL] I can't seem to put the right combination of magic into the pg_hba and pg_ident files.

2009-11-08 Thread Tim Uckun
then say you're postgres in the script with the -U (if you're using psql) AS ROOT: psql -U postgres -h remote_db dbname Note that ident doesn't work so well between machines, so you might want to look at .pgpass That's what I am trying to get working. In actuality I am using ruby and

Re: [GENERAL] I can't seem to put the right combination of magic into the pg_hba and pg_ident files.

2009-11-09 Thread Tim Uckun
either create a postgres user named 'root' and give it superuser privileges, In order to do that I need to connect to the database with my script which is running under the root account. or switch to a different method of authentication for LOCAL users I am confused. I presumed the proper

Re: [GENERAL] I can't seem to put the right combination of magic into the pg_hba and pg_ident files.

2009-11-09 Thread Tim Uckun
But afterwards, inside the script, you could use su to temporarily switch to a less priviledged user: ... commands running as root su postgres -c 'psql '  # running as postgres ... running as root again OK I will try this. I am very confused about something though. Not one person here

Re: [GENERAL] I can't seem to put the right combination of magic into the pg_hba and pg_ident files.

2009-11-09 Thread Tim Uckun
authentication type is controlled via the pg_hba.conf file. Frankly, I've never used the pg_ident file, it just seems like it would add more confusion to things. But it appears that to use it you need a map=/mapname/ primitive in your pg_hba.conf. That's what I attempted to do. I read the

Re: [GENERAL] I can't seem to put the right combination of magic into the pg_hba and pg_ident files.

2009-11-09 Thread Tim Uckun
I just tried with 8.4.1. Started with the default configuration, created data/pg_ident.conf with: pg_map root postgres pg_map postgres postgres Replaced in pg_hba.conf:   local   all         all                               trust by   local   all         all                              

[GENERAL] get a log of queries that take up a lot of CPU or take a very long time.

2009-11-23 Thread Tim Uckun
Is there a way I can get a list of the top 10 longest running queries for the day/week/month or the top 10 queries that took the most CPU? select * from pg_stat_activity only shows the current status.
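
Two standard options for this (neither is specific to the thread): set log_min_duration_statement in postgresql.conf to log any statement slower than a threshold, or load the contrib module pg_stat_statements (8.4+, it must be listed in shared_preload_libraries) and query its cumulative view. Column names below are from the pre-13 releases.

  SELECT query, calls, total_time
    FROM pg_stat_statements
   ORDER BY total_time DESC
   LIMIT 10;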

[GENERAL] timestams in the the pg_standby output

2010-01-04 Thread Tim Uckun
Is there a way to get pg_standby to put timestamps in the output it generates? I am currently piping the output to a log file but since it contains no timestamps it's of limited use to me.

Re: [GENERAL] timestams in the the pg_standby output

2010-01-04 Thread Tim Uckun
printf "%s %s\n", strftime("%Y-%m-%d %H:%M:%S", localtime(time)), $item (untested, and I am not a regular Perl programmer, but File::Tail is the best library I know of to do this sort of thing) Ah, if I am going to do that I suppose something like this would work: #!/bin/sh while read

Re: [GENERAL] timestams in the the pg_standby output

2010-01-05 Thread Tim Uckun
If that works, great.  I'm not sure if you'll run afoul of output buffering in this situation.  Clearly you've got the right idea, just need to make sure it behaves as you expect and doesn't clump the line reads into larger chunks. Actually I could not get it to send the output to the pipe

Re: [GENERAL] PostgreSQL Write Performance

2010-01-05 Thread Tim Uckun
You might use the copy command instead of insert, which is far faster. If you want the fastest possible inserts, then probably copy is the way to go instead of insert. Here is copy command via API: http://www.postgresql.org/docs/current/static/libpq-copy.html Here is copy command via SQL:

Re: [GENERAL] PostgreSQL Write Performance

2010-01-05 Thread Tim Uckun
Technically you *can* disable triggers, including RI checks, but it's VERY unwise and almost completely defeats the purpose of having the checks. In most such situations you're much better off dropping the constraints then adding them again at the end of the load. I know that the SQL server

Re: [GENERAL] PostgreSQL Write Performance

2010-01-05 Thread Tim Uckun
I, for one, would loudly and firmly resist the addition of such a feature. Almost-as-fast options such as intelligent re-checking of Even if it was not the default behavior? If you really want to do that, look at the manual for how to disable triggers, but understand that you are throwing

Re: [GENERAL] PostgreSQL Write Performance

2010-01-06 Thread Tim Uckun
On Thu, Jan 7, 2010 at 3:13 AM, Dimitri Fontaine dfonta...@hi-media.com wrote: Tim Uckun timuc...@gmail.com writes: Is there a command like COPY which will insert the data but skip all triggers and optionally integrity checks. pg_bulkload does that AFAIK. That's a great utility

[GENERAL] Postgres connection errors

2010-11-02 Thread Tim Uckun
Hello. I have lots of ruby daemons running connected to postgres. Some of them start getting connection errors after about a day or two of running. The odd thing is that they don't all get the same error. Some get this error: PGError: lost synchronization with server: got message type T Others

Re: [GENERAL] Postgres connection errors

2010-11-02 Thread Tim Uckun
Most of the cases we've seen like that have been because multiple threads in the client application were trying to use the same PGconn connection object concurrently.  There's no cross-thread synchronization built into libpq, so you have to provide the interlocks yourself if there's any

[GENERAL] Adding an and is not null on an indexed field slows the query down immensely.

2011-01-14 Thread Tim Uckun
I have this query it runs reasonably quickly. SELECT consolidated_urls.* FROM consolidated_urls INNER JOIN topical_urls ON consolidated_urls.id = topical_urls.consolidated_url_id WHERE ((topical_urls.domain_id = 157) AND ((topical_urls.hidden = 'f'))) ORDER BY index_delta DESC LIMIT 10 The
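
Not from the thread itself, but one common way to keep such a query fast when filtering out NULLs is a partial index matching the filter (assuming index_delta is the sort column in question):

  CREATE INDEX consolidated_urls_index_delta_nn_idx
      ON consolidated_urls (index_delta DESC)
   WHERE index_delta IS NOT NULL;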

Re: [GENERAL] Adding an and is not null on an indexed field slows the query down immensely.

2011-01-15 Thread Tim Uckun
2011/1/15 pasman pasmański pasma...@gmail.com: Try: order by index_delta+1 desc I have attached the explain analyze for that below. Why does this return instantly? Limit (cost=29910.05..29910.07 rows=10 width=1880) (actual time=42.563..42.563 rows=0 loops=1) - Sort

Re: [GENERAL] Adding an and is not null on an indexed field slows the query down immensely.

2011-01-15 Thread Tim Uckun
2011/1/16 pasman pasmański pasma...@gmail.com: I think this is a planner's bug. Can you send these explains to pgsql-bugs ? Sure. BTW I thought I would change the query a little by putting a AND index_value .100 instead of index_delta and it didn't help at all. I thought maybe using another

[GENERAL] Postgres federation

2011-05-11 Thread Tim Uckun
I want to set up a central database and several satellite databases which use some of the data from the central database. For example Say my central database contains people records, with a many to many relationship with clients records. Each client has their own database but needs read, write
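
One building block available at the time was the contrib module dblink, which lets a satellite database query the central one over an ordinary connection (host, table, and column list below are placeholders):

  -- CREATE EXTENSION dblink;   -- 9.1+; earlier releases install the contrib SQL script instead
  SELECT id, name
    FROM dblink('host=central.example.com dbname=central',
                'SELECT id, name FROM people')
         AS remote_people(id integer, name text);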
