[GENERAL] 10/13 SFPUG meeting, "The Mighty GUCS," video now available

2009-10-16 Thread Christophe Pettus
The video archive for the 10/13 SFPUG meeting, "The Mighty GUCS: A  
guide to the essential PostgreSQL settings you need to know," is now  
available:


http://thebuild.com/blog/2009/10/16/the-mighty-gucs/

It's also available on Vimeo:

http://vimeo.com/7109722
--
-- Christophe Pettus
   x...@thebuild.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Urgent Help required

2009-10-16 Thread Mike Christensen
Hmm would this be a bad time to ask for PostGres 1.0 support?

On Fri, Oct 16, 2009 at 1:55 PM, Jeff Davis  wrote:
> On Fri, 2009-10-16 at 11:26 +0100, Neha Patel wrote:
>> We are running PostgreSQL 7.3.2. We were trying to create an
>> index on a big table. The create index command ran for nearly 5 hours
>> at which point we decided to interrupt it. Since this was interrupted,
>> any operations attempted on the table on which the index was being
>> created gives the following error in the pgsql log:
>
>> LOG: all server processes terminated; reinitializing shared memory and
>> semaphor
>
> 1. Shut down postgresql and make a full filesystem copy of the PGDATA
> directory. This will ensure that anything else you do won't leave you in
> a worse position.
>
> 2. Upgrade to the latest version of postgresql 7.3, which is 7.3.21
>
> 3. Start up again
>
> 4. Try to fix the problem:
>  a. see if there are any indexes on the table
>  b. if so, drop them
>  c. try to get a good logical backup using pg_dump. You may want to
> disable index scans by using "SET enable_indexscan = f;".
>  d. If that doesn't work, you may have catalog corruption. Examine the
> catalogs (documented here:
> http://www.postgresql.org/docs/7.3/static/catalogs.html ), and look at
> entries related to your table in pg_class and pg_index, and see if
> anything looks wrong.
>  e. start from a fresh install and restore using the logical backup
>
> 5. Upgrade all data to a recent version of postgresql. You're on a
> really ancient version that has fallen out of official support. This may
> require several upgrade steps, but you should get upgraded to 8.3.8 or
> 8.4.1. A lot of bugs have been fixed, and as long as you are on 7.3, you
> will still be at serious risk.
>
> Regards,
>        Jeff Davis
>
>



Re: [GENERAL] Inheritance on foreign key

2009-10-16 Thread decibel

On Oct 12, 2009, at 1:21 PM, Erik Jones wrote:

On Oct 9, 2009, at 9:57 AM, Takeichi Kanzaki Cabrera wrote:

Hi everybody, I need your help. I have a hierarchy of tables, and
another table that has a foreign key to the top table of the
hierarchy. Can I insert a value into the "other" table when the value
it references is not in the parent table? (It's in one of its
children.)


No, foreign key checks do not (yet) follow inheritance
hierarchies.  Here's the specific clause in the manual
(http://www.postgresql.org/docs/8.4/interactive/ddl-inherit.html)
that covers this:


"All check constraints and not-null constraints on a parent table  
are automatically inherited by its children. Other types of  
constraints (unique, primary key, and foreign key constraints) are  
not inherited."



That said, there are ways around this. We're using inheritance to
deal with things like customer "accounts" such as bank accounts,
debit cards, etc. There's stuff that all of these have in common, and
stuff that's specific, so the bank_account and debit_card tables each
inherit from a customer_account table.
customer_account.customer_account_type_id specifies what type of
account a record is. Using that, we have a trigger that you can put
on some other table that references
customer_account.customer_account_id; that trigger implements part of
the functionality of a true foreign key. It only handles certain
cases because that's all we need, but I believe you should be able to
provide full foreign key support if you wanted to create all the
right trigger functions. The key is to have the trigger function look
at the parent table to determine what type of account / record it is,
and then use that information to go to the appropriate child table
and acquire a FOR UPDATE lock.

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] contrib/plantuner - enable PostgreSQL planner hints

2009-10-16 Thread Greg Stark
On Fri, Oct 16, 2009 at 10:04 AM, decibel  wrote:
> Out of curiosity, did you look at doing hints as comments in a query? I'm
> guessing you couldn't actually do that in just a contrib module, but it's
> how Oracle handles hints, and it seems to be *much* more convenient, because
> a hint only applies for a specific query.

Fwiw Oracle is moving away from this now. At OpenWorld the optimizer
folks were pleading with folks to get rid of all their hard-coded
hints by preparing plans for hinted queries and loading those as the
approved plans. In 11g there's a system which ensures the database
will not run any plan that isn't approved.

In fact it looks an *awful* lot like the system I sketched out 6 years
ago in: http://article.gmane.org/gmane.comp.db.postgresql.devel.general/11385

-- 
greg



Re: [GENERAL] db not dumping properly, or at least not restoring

2009-10-16 Thread Kirk Wythers
On Oct 16, 2009, at 4:51 PM, Scott Marlowe   
wrote:


On Fri, Oct 16, 2009 at 11:25 AM, Kirk Wythers   
wrote:



Any ideas what the problem could be here?


Use the pg_dump from the target (i.e. newer) pgsql.  I.e. if going
from 8.3.8 to 8.4.1, use the pg_dump that comes with 8.4.1 to dump the
8.3.8 database.


Can I assume that this is even more critical if going from 8.2 to 8.4?



I usually just do it like so:

(First migrate accounts:)
pg_dumpall --globals -h oldserver | psql -h newserver postgres


I'm a little confused here. Are you saying to use the network
connection between the two servers and pipe the dumpall output
directly into the psql load?



(then each database:)
createdb -h newserver dbname


Then create new databases on the new server to match the names from
the old server?




pg_dump -h oldserver dbname | psql -h newserver dbname
(repeat as needed, save output for error messages)


Then dump each database individually and pipe the dump to the psql load?


These two procedures seem to duplicate the goal? Or am I missing
something?




[GENERAL] pgadmin3 hangs during dump

2009-10-16 Thread Dennis Gearon
Is this the right place to post this?

I set up the same options on the console, and it runs fine. (COPY
commands will import back, right? That's what it output.)

On the console, it was:
pg_dump -vaF p -f dbase.sql -U user-name dbase-name

More details:
 about 11 tables, practically empty.
 no other fancy stuff like functions/views/installed languages/blah,blah
 running on an Ubuntu laptop

I don't know if I have log files set up or where they'd be. 
 

Dennis Gearon

Signature Warning

EARTH has a Right To Life

I agree with Bolivian President Evo Morales

# The right to life: "The right for no ecosystem to be eliminated by the 
irresponsible acts of human beings."

# The right of biosystems to regenerate themselves: "Development cannot be 
infinite. There's a limit on everything."

# The right to a clean life: "The right for Mother Earth to live without 
contamination, pollution. Fish and animals and trees have rights."

# The right to harmony and balance between everyone and everything: "We are all 
interdependent."


See the movie - 'Inconvenient Truth'
See the movie - 'Syriana'



Re: [GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Magnus Hagander
2009/10/17 Tom Lane :
> If you decide to go with this approach and use PAM as intermediary,
> you'll need the patch I just committed in response to bug #5121 --- it
> turns out nobody had ever tried that with Postgres before :-(.  But
> I think it's also possible to just use PG's native Kerberos support
> with AD, which would explain why nobody had tried it.

It most definitely is possible.  Note that the recommendation is to
use GSSAPI with Kerberos and not the native "krb5" authentication for
this if possible.
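
For reference, a GSSAPI entry in pg_hba.conf might look like the
following (a sketch only; the address range and realm are hypothetical
placeholders, and the option names are those documented for 8.4):

  # TYPE  DATABASE  USER  ADDRESS        METHOD
  host    all       all   192.0.2.0/24   gss include_realm=0 krb_realm=EXAMPLE.COM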

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Tom Lane
"Scot Kreienkamp"  writes:
> On 16/10/2009 19:38, Scot Kreienkamp wrote:
>> ...  We are a largely Windows shop with many app and
>> database servers running Linux.  The Linux environment is growing too
>> large not to do centralized authentication of some kind.  

> So I guess what I see taking shape is setting up everything to auth
> against PAM locally, then setting up local PAM to auth to a remote
> source.  

What are you using for central auth in the Windows portions of your
shop?

What I'd suggest is that you standardize on Kerberos auth (that's what
it's called in the Unix world, MS might have another name for it).
You can definitely plug Linux into an Active Directory server for this,
and I believe that you have the option to switch it around in future
if you decide you'd rather have a Linux machine as your central auth
server.

If you decide to go with this approach and use PAM as intermediary,
you'll need the patch I just committed in response to bug #5121 --- it
turns out nobody had ever tried that with Postgres before :-(.  But
I think it's also possible to just use PG's native Kerberos support
with AD, which would explain why nobody had tried it.

regards, tom lane



Re: [GENERAL] db not dumping properly, or at least not restoring

2009-10-16 Thread Scott Marlowe
On Fri, Oct 16, 2009 at 11:25 AM, Kirk Wythers  wrote:

> Any ideas what the problem could be here?

Use the pg_dump from the target (i.e. newer) pgsql.  I.e. if going
from 8.3.8 to 8.4.1, use the pg_dump that comes with 8.4.1 to dump the
8.3.8 database.

I usually just do it like so:

(First migrate accounts:)
pg_dumpall --globals -h oldserver | psql -h newserver postgres
(then each database:)
createdb -h newserver dbname
pg_dump -h oldserver dbname | psql -h newserver dbname
(repeat as needed, save output for error messages)



Re: [GENERAL] Partitioned tables as a poor mans columnar index?

2009-10-16 Thread Peter Hunsberger
On Fri, Oct 16, 2009 at 3:31 PM, marcin mank  wrote:
> On Fri, Oct 16, 2009 at 9:19 PM, Peter Hunsberger
>  wrote:
>
>> The basic problem I have is that I have some tables that are
>> potentially very long (100,000's to millions of rows) and very skinny,
>
>> and I end up with maybe a total of 12 bits of data in each row.
>
> Are you aware that there are some 20-ish bytes of metadata for each
> row? Saving 4 bytes buys you nothing. Give it up.

No, the metadata is a whole 'nother problem.  I'm just talking about
fk relationships here.  This isn't an isolated issue within this
particular domain.  If you were to use a conventional table design,
then once the rest of the associated tables get built along with their
associated indexes you'd be looking at on the order of a terabyte for
this half of the DB...

-- 
Peter Hunsberger



Re: [GENERAL] Urgent Help required

2009-10-16 Thread Jeff Davis
On Fri, 2009-10-16 at 11:26 +0100, Neha Patel wrote:
> We are running PostgreSQL 7.3.2. We were trying to create an
> index on a big table. The create index command ran for nearly 5 hours
> at which point we decided to interrupt it. Since this was interrupted,
> any operations attempted on the table on which the index was being
> created gives the following error in the pgsql log:

> LOG: all server processes terminated; reinitializing shared memory and
> semaphor

1. Shut down postgresql and make a full filesystem copy of the PGDATA
directory. This will ensure that anything else you do won't leave you in
a worse position.

2. Upgrade to the latest version of postgresql 7.3, which is 7.3.21

3. Start up again

4. Try to fix the problem:
 a. see if there are any indexes on the table
 b. if so, drop them
 c. try to get a good logical backup using pg_dump. You may want to
disable index scans by using "SET enable_indexscan = f;".
 d. If that doesn't work, you may have catalog corruption. Examine the
catalogs (documented here:
http://www.postgresql.org/docs/7.3/static/catalogs.html ), and look at
entries related to your table in pg_class and pg_index, and see if
anything looks wrong.
 e. start from a fresh install and restore using the logical backup

5. Upgrade all data to a recent version of postgresql. You're on a
really ancient version that has fallen out of official support. This may
require several upgrade steps, but you should get upgraded to 8.3.8 or
8.4.1. A lot of bugs have been fixed, and as long as you are on 7.3, you
will still be at serious risk.
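
For step 4c, one way to disable index scans for the whole dump session
is the PGOPTIONS environment variable, which libpq passes through to
the backend (a sketch; the database name is a placeholder):

  # take a logical backup with index scans disabled for this session
  PGOPTIONS="-c enable_indexscan=off" pg_dump mydb > mydb-backup.sql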

Regards,
Jeff Davis




Re: [GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Raymond O'Donnell
On 16/10/2009 19:38, Scot Kreienkamp wrote:
> Hey everyone,
> 
>  
> 
> I apologize in advance for going slightly off topic, but I have never
> set up a centralized authentication scheme under Linux.  My question is,
> what do most people do for centralized command line, X, and PG
> authentication?  From what I've read the main choices are NIS or LDAP.
> LDAP would be problematic as I would have to embed a login and plain
> text password in the ldap.conf file for binding to the MS AD.  On the
> other hand, it seems like NIS is old, inflexible, outdated, and possibly
> nearing end of life.  We are a largely Windows shop with many app and
> database servers running Linux.  The Linux environment is growing too
> large not to do centralized authentication of some kind.  
> 
>  
> 
> At this point I'm open to suggestions or comments.  SSH and X are
> required, PG would be nice to be able to auth centrally as well while
> I'm at it.

Does "PG" = PostgreSQL? If so, it can do LDAP, Kerberos and PAM, among
other things:

  http://www.postgresql.org/docs/8.4/static/client-authentication.html

Ray.


-- 
--  Raymond O'Donnell
--  Tela Design Ltd, Craughwell, Co. Galway, Ireland.
--  Software & graphic design and consultancy
--  r...@teladesign.ie
--



Re: [HACKERS] [GENERAL] contrib/plantuner - enable PostgreSQL planner hints

2009-10-16 Thread Jeff Davis
On Fri, 2009-10-16 at 12:04 -0500, decibel wrote:
> I'm guessing you couldn't actually do that in just a contrib module,  
> but it's how Oracle handles hints, and it seems to be *much* more  
> convenient, because a hint only applies for a specific query.

If that's the only reason, that seems easy enough to solve by using SET
right before the query. SET LOCAL might be convenient if you want to
forget the setting after the query. Connection pool software will do a
RESET ALL anyway.
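
For instance, a setting scoped to a single query by wrapping it in a
transaction (the query itself is a hypothetical placeholder):

  BEGIN;
  SET LOCAL enable_seqscan = off;  -- in effect only until COMMIT/ROLLBACK
  SELECT * FROM accounts WHERE id = 42;
  COMMIT;  -- the setting reverts here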

There are reasons that it might be convenient to use hints inside the
query itself -- for instance, if you want something to apply only to a
subquery. I'm still hoping that someone will come up with a more elegant
solution to solve that problem though.

Regards,
Jeff Davis




Re: [GENERAL] Partitioned tables as a poor mans columnar index?

2009-10-16 Thread marcin mank
On Fri, Oct 16, 2009 at 9:19 PM, Peter Hunsberger
 wrote:

> The basic problem I have is that I have some tables that are
> potentially very long (100,000's to millions of rows) and very skinny,

> and I end up with maybe a total of 12 bits of data in each row.

Are you aware that there are some 20-ish bytes of metadata for each
row? Saving 4 bytes buys you nothing. Give it up.

Also, these are actually pretty small tables (i.e. they fit in memory
of any non-toy server).

Greetings
Marcin



Re: [GENERAL] contrib/plantuner - enable PostgreSQL planner hints

2009-10-16 Thread Christophe Pettus


On Oct 16, 2009, at 10:04 AM, decibel wrote:

Out of curiosity, did you look at doing hints as comments in a query?


I don't think that a contrib module could change the grammar.

--
-- Christophe Pettus
   x...@thebuild.com




Re: [GENERAL] Inheritance on foreign key

2009-10-16 Thread decibel

On Oct 12, 2009, at 1:21 PM, Erik Jones wrote:

On Oct 9, 2009, at 9:57 AM, Takeichi Kanzaki Cabrera wrote:

Hi everybody, I need your help. I have a hierarchy of tables, and
another table that has a foreign key to the top table of the
hierarchy. Can I insert a value into the "other" table when the value
it references is not in the parent table? (It's in one of its
children.)


No, foreign key checks do not (yet) follow inheritance
hierarchies.  Here's the specific clause in the manual
(http://www.postgresql.org/docs/8.4/interactive/ddl-inherit.html)
that covers this:


"All check constraints and not-null constraints on a parent table  
are automatically inherited by its children. Other types of  
constraints (unique, primary key, and foreign key constraints) are  
not inherited."



That said, there are ways around this. We're using inheritance to  
deal with things like customer "accounts" such as bank accounts,  
debit cards, etc. There's stuff that all of these have in common, and  
stuff that's specific, so the bank_account and debit_card tables each  
inherit from a customer_account table.  
customer_account.customer_account_type_id specifies what type of  
account a record is. Using that, we have a trigger that you can put  
on some other table that's referencing  
customer_account.customer_account_id; that trigger implements part of  
the functionality of a true foreign key. It only handles certain  
cases because that's all we need, but I believe you should be able to  
provide full foreign key support if you wanted to create all the  
right trigger functions. The key is to have the trigger function look  
at the parent table to determine what type of account / record it is,  
and then use that information to go to the appropriate child table  
and acquire a FOR UPDATE lock.


I can probably provide a more concrete example of this if anyone's  
interested.
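
As a rough sketch of the INSERT side only (the table names and type
codes below are hypothetical; UPDATE/DELETE on the child tables would
need their own triggers for full foreign-key behavior):

  CREATE OR REPLACE FUNCTION check_customer_account_fk() RETURNS trigger AS $$
  DECLARE
      v_type int;
  BEGIN
      -- look at the parent table to find out what kind of account this is
      SELECT customer_account_type_id INTO v_type
          FROM customer_account
          WHERE customer_account_id = NEW.customer_account_id;
      IF v_type IS NULL THEN
          RAISE EXCEPTION 'customer_account % does not exist',
              NEW.customer_account_id;
      END IF;
      -- then lock the row in the appropriate child table
      IF v_type = 1 THEN  -- bank account
          PERFORM 1 FROM bank_account
              WHERE customer_account_id = NEW.customer_account_id FOR UPDATE;
      ELSE                -- debit card
          PERFORM 1 FROM debit_card
              WHERE customer_account_id = NEW.customer_account_id FOR UPDATE;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER referencing_table_fk
      BEFORE INSERT OR UPDATE ON some_referencing_table
      FOR EACH ROW EXECUTE PROCEDURE check_customer_account_fk();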

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] contrib/plantuner - enable PostgreSQL planner hints

2009-10-16 Thread decibel

On Oct 6, 2009, at 2:57 PM, Oleg Bartunov wrote:
this is an announcement of our new contribution module for  
PostgreSQL - Plantuner - enable planner hints

(http://www.sai.msu.su/~megera/wiki/plantuner).

=# set enable_seqscan=off;
=# set plantuner.forbid_index='id_idx2';



Out of curiosity, did you look at doing hints as comments in a query?  
I'm guessing you couldn't actually do that in just a contrib module,  
but it's how Oracle handles hints, and it seems to be *much* more  
convenient, because a hint only applies for a specific query. I think  
it's extremely unlikely you would intentionally want the same hint to  
apply to a bunch of queries, and extremely likely that you could  
accidentally forget to re-enable something.


That said, thanks for contributing this!
--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Scot Kreienkamp

On 16/10/2009 19:38, Scot Kreienkamp wrote:
> Hey everyone,
> 
>  
> 
> I apologize in advance for going slightly off topic, but I have never
> set up a centralized authentication scheme under Linux.  My question is,
> what do most people do for centralized command line, X, and PG
> authentication?  From what I've read the main choices are NIS or LDAP.
> LDAP would be problematic as I would have to embed a login and plain
> text password in the ldap.conf file for binding to the MS AD.  On the
> other hand, it seems like NIS is old, inflexible, outdated, and possibly
> nearing end of life.  We are a largely Windows shop with many app and
> database servers running Linux.  The Linux environment is growing too
> large not to do centralized authentication of some kind.  
> 
>  
> 
> At this point I'm open to suggestions or comments.  SSH and X are
> required, PG would be nice to be able to auth centrally as well while
> I'm at it.

Does "PG" = PostgreSQL? If so, it can do LDAP, Kerberos and PAM, among
other things:

[Scot Kreienkamp] 
But of course.  :)  

So I guess what I see taking shape is setting up everything to auth
against PAM locally, then setting up local PAM to auth to a remote
source.  


Thanks,

Scot Kreienkamp
skre...@la-z-boy.com




[GENERAL] Partitioned tables as a poor mans columnar index?

2009-10-16 Thread Peter Hunsberger
I just realized that my replies to my previous question on sparse
arrays went off list due to the way this list server  is set up
(sigh).  It has occurred to me that for my problem, one possible
solution is columnar indexes and that, in a way, partitioned tables in
Postgres might give me somewhat the same capability.

The basic problem I have is that I have some tables that are
potentially very long (100,000's to millions of rows) and very skinny,
essentially just a FK to a parent table and about 12 bits, maybe less,
of actual data.  Now if I use a traditional table an int for FK is as
large or even larger than the data itself which is why I had been
looking at a sparse array (in which case I don't even need 12 bits for
the data, since part of the information is positional in nature).
However, building the routines to manipulate the arrays could be
painful and it's been suggested that their performance will be
non-optimal.

One alternative might be to use partitioned tables. If I partition the
tables in such a way that the table name can be associated directly to
the FK then I no longer need to actually store the FK in the table,
and I end up with maybe a total of 12 bits of data in each row.
Normally, I'd shy away from such a scheme since it strikes me as
potentially problematic:

1) you've got to join through DB metadata back to the actual data in question;

2) some process has to have create table auths if any of this is to be automated
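
For concreteness, the sort of scheme being described (all names
hypothetical; the parent key lives in the child table's name rather
than in a stored column):

  -- parent of all the skinny partitions; no FK column at all
  CREATE TABLE skinny_data (payload bit(12));
  -- one child per parent-table key; the key 263515 exists only in the name
  CREATE TABLE skinny_data_263515 () INHERITS (skinny_data);
  -- the application derives the table name from the parent key at query time:
  --   SELECT payload FROM skinny_data_263515;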

My bias against 1) might just be because I come from an Oracle
background and it seems that in Postgres this is not as cumbersome or
as frowned on as it might be elsewhere?  The second issue seems
problematic no matter how I look at it.  However, I might be willing
to live with it for this particular problem, particularly if I can talk
myself into believing that I'm building a proxy for columnar indexes
:-)

So I guess two sets of questions:

1) In general how comfortable are members of the Postgres community in
using table names for partitioned tables where the table name itself
conveys some indirect join relationship?  It seems there is no extra
performance penalty in joining back to the system tables to do a look
up with Postgres so this is perhaps more of a best practices question
(or even a religious issue for relational purists) than anything
else...

2) If you're comfortable with the idea, how far would you go in
embracing it?  In particular, would you add code to Postgres to hide
the fact that you are joining via table name? Would you go as far as
to do it at the system level or would you stick to just wrapping it in
some functions (in which case the create table privilege is still
needed)?

-- 
Peter Hunsberger



Re: [GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Scot Kreienkamp
These are all RH4 and 5, so they do all have PAM.  I thought PAM had to
interface with something else, which is where NIS and LDAP enter the
picture, to authenticate to another server though.  Otherwise I'm not
sure how it works?

 

Thanks,

 

Scot Kreienkamp

skre...@la-z-boy.com

 

From: Scott Mead [mailto:scott.li...@enterprisedb.com] 
Sent: Friday, October 16, 2009 2:50 PM
To: Scot Kreienkamp
Cc: pgsql-general
Subject: Re: [GENERAL] slightly off-topic: Central Auth

 

 

On Fri, Oct 16, 2009 at 2:38 PM, Scot Kreienkamp 
wrote:

Hey everyone,

 

I apologize in advance for going slightly off topic, but I have never
set up a centralized authentication scheme under Linux.  My question is,
what do most people do for centralized command line, X, and PG
authentication?  From what I've read the main choices are NIS or LDAP.
LDAP would be problematic as I would have to embed a login and plain
text password in the ldap.conf file for binding to the MS AD.  


It sounds like PAM would be useful for you.  That's really what it
was built for.
--Scott



Re: [GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Scott Mead
On Fri, Oct 16, 2009 at 2:38 PM, Scot Kreienkamp wrote:

>  Hey everyone,
>
>
>
> I apologize in advance for going slightly off topic, but I have never set up
> a centralized authentication scheme under Linux.  My question is, what do
> most people do for centralized command line, X, and PG authentication?  From
> what I’ve read the main choices are NIS or LDAP.  LDAP would be problematic
> as I would have to embed a login and plain text password in the ldap.conf
> file for binding to the MS AD.
>

It sounds like PAM would be useful for you.  That's really what it was
built for.
--Scott


[GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Scot Kreienkamp
Hey everyone,

 

I apologize in advance for going slightly off topic, but I have never
set up a centralized authentication scheme under Linux.  My question is,
what do most people do for centralized command line, X, and PG
authentication?  From what I've read the main choices are NIS or LDAP.
LDAP would be problematic as I would have to embed a login and plain
text password in the ldap.conf file for binding to the MS AD.  On the
other hand, it seems like NIS is old, inflexible, outdated, and possibly
nearing end of life.  We are a largely Windows shop with many app and
database servers running Linux.  The Linux environment is growing too
large not to do centralized authentication of some kind.  

 

At this point I'm open to suggestions or comments.  SSH and X are
required, PG would be nice to be able to auth centrally as well while
I'm at it.

 

Thanks,

 

Scot Kreienkamp 

 



Re: [GENERAL] db not dumping properly, or at least not restoring

2009-10-16 Thread Thom Brown
2009/10/16 Kirk Wythers :
> I am trying to move databases to another machine (and update from 8.2 to 8.4
> along the way). I first tried pg_dumpall, but I found that one of the
> databases did not restore any data, just an empty db with no tables. Since then
> I have tried pg_dump with the following:
>
> bash-3.2$ /usr/local/pgsql/bin/pg_dump -o mn_timber >
> /Volumes/disk3/backup_db/mn_timber20091016.out
>
> then restore on the new machine with:
>
> kwythers$ pg_restore -C -d postgres mn_timber20091016.out
>
> But I am getting the error:
>
> pg_restore: [archiver] input file does not appear to be a valid archive
> onceler:~ kwythers$
>
> Looking at the db on the original machine, all looks good.
>
> mn_timber=# \d
>                   List of relations
>  Schema |          Name          |   Type   |  Owner
> --------+------------------------+----------+----------
>  public | all_timber_data        | view     | kwythers
>  public | county                 | table    | kwythers
>  public | forties                | table    | kwythers
>  public | geometry_columns       | table    | kwythers
>  public | grid_cell              | view     | kwythers
>  public | mn_pls_grid            | table    | kwythers
>  public | mn_pls_grid_gid_seq    | sequence | kwythers
>  public | rdir                   | table    | kwythers
>  public | session                | table    | kwythers
>  public | session_session_id_seq | sequence | kwythers
>  public | spatial_ref_sys        | table    | kwythers
>  public | timber_type            | table    | kwythers
>  public | timber_volume          | table    | kwythers
>  public | timber_volume_seq      | sequence | kwythers
> (14 rows)
>
> mn_timber=# SELECT * FROM timber_volume;
>  grid_id | tt_id | year | cords | mbm | poles | posts | tv_id
> ---------+-------+------+-------+-----+-------+-------+-------
>  263515 |    17 | 1920 |    11 |   2 |       |       | 10176
>  266999 |     6 | 1920 |     7 |  19 |       |       | 10869
>  1141653 |     5 | 1920 |    10 |   1 |       |       | 10238
>  1143744 |     5 | 1920 |     2 |   1 |       |       | 10293
>  263560 |     9 | 1920 |       |   5 |       |       | 10346
>  264027 |     3 | 1920 |    49 |   1 |       |       | 10391
>  264180 |     9 | 1920 |    70 |   5 |       |       | 10430
>  263728 |     4 | 1920 |       |     |   919 |  1495 | 10468
>  263667 |    17 | 1920 |     1 |     |       |       | 10501
>  263658 |    17 | 1920 |    15 |     |       |       | 10528
>  263984 |     3 | 1920 |    98 |     |       |       | 10554
>  264289 |    17 | 1920 |     1 |     |       |       | 10579
>  263973 |     4 | 1920 |       |     |    40 |    40 | 10601
> .
> .
> .
> (38437 rows)
>
> Any ideas what the problem could be here?
>
> Thanks in advance.
>
>
>
>
>

I believe pg_dump, by default, outputs in plain format.  pg_restore
only accepts the tar and custom formats, so you should be able to just
pass the file to psql, or take the dump again in a different format.

As the documentation states: "pg_restore is a utility for restoring a
PostgreSQL database from an archive created by pg_dump in one of the
non-plain-text formats."
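
To make that concrete, both options look roughly like this (a sketch;
the database and file names are the ones from the thread):

```shell
# Option 1: a plain-format dump is just SQL, so feed it to psql
psql -d mn_timber -f mn_timber20091016.out

# Option 2: take the dump again in custom format (-Fc), which pg_restore accepts
pg_dump -Fc -f mn_timber.dump mn_timber
pg_restore -C -d postgres mn_timber.dump
```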

Thom



[GENERAL] db not dumping properly, or at least not restoring

2009-10-16 Thread Kirk Wythers
I am trying to move databases to another machine (and upgrade from 8.2  
to 8.4 along the way). I first tried pg_dumpall, but I found that one  
of the databases did not restore any data, just an empty db with no  
tables. Since then I have tried pg_dump with the following:


bash-3.2$ /usr/local/pgsql/bin/pg_dump -o mn_timber > /Volumes/disk3/backup_db/mn_timber20091016.out


then restore on the new machine with:

kwythers$ pg_restore -C -d postgres mn_timber20091016.out

But I am getting the error:

pg_restore: [archiver] input file does not appear to be a valid archive
onceler:~ kwythers$

Looking at the db on the original machine, all looks good.

mn_timber=# \d
            List of relations
 Schema |          Name          |   Type   |  Owner
--------+------------------------+----------+----------
 public | all_timber_data        | view     | kwythers
 public | county                 | table    | kwythers
 public | forties                | table    | kwythers
 public | geometry_columns       | table    | kwythers
 public | grid_cell              | view     | kwythers
 public | mn_pls_grid            | table    | kwythers
 public | mn_pls_grid_gid_seq    | sequence | kwythers
 public | rdir                   | table    | kwythers
 public | session                | table    | kwythers
 public | session_session_id_seq | sequence | kwythers
 public | spatial_ref_sys        | table    | kwythers
 public | timber_type            | table    | kwythers
 public | timber_volume          | table    | kwythers
 public | timber_volume_seq      | sequence | kwythers
(14 rows)

mn_timber=# SELECT * FROM timber_volume;
 grid_id | tt_id | year | cords | mbm | poles | posts | tv_id
---------+-------+------+-------+-----+-------+-------+-------
  263515 |    17 | 1920 |    11 |   2 |       |       | 10176
  266999 |     6 | 1920 |     7 |  19 |       |       | 10869
 1141653 |     5 | 1920 |    10 |   1 |       |       | 10238
 1143744 |     5 | 1920 |     2 |   1 |       |       | 10293
  263560 |     9 | 1920 |       |   5 |       |       | 10346
  264027 |     3 | 1920 |    49 |   1 |       |       | 10391
  264180 |     9 | 1920 |    70 |   5 |       |       | 10430
  263728 |     4 | 1920 |       |     |   919 |  1495 | 10468
  263667 |    17 | 1920 |     1 |     |       |       | 10501
  263658 |    17 | 1920 |    15 |     |       |       | 10528
  263984 |     3 | 1920 |    98 |     |       |       | 10554
  264289 |    17 | 1920 |     1 |     |       |       | 10579
  263973 |     4 | 1920 |       |     |    40 |    40 | 10601
.
.
.
(38437 rows)

Any ideas what the problem could be here?

Thanks in advance.







Re: [GENERAL] Urgent Help required

2009-10-16 Thread John R Pierce

Neha Patel wrote:


Hi,

 


We are running with postgres sql 7.3.2. We were trying to...



well, right off the bat, that's a -really- old version.  Release notes 
say February 2003.  7.3 was updated to 7.3.21, and we're currently on 
8.4 (while still supporting 8.3, 8.2, 8.1, and 7.4).  There were a LOT 
of fixes between 7.3.2 and 7.3.21; see the release notes for each 
version in between:
http://www.postgresql.org/docs/current/static/release.html








Re: [GENERAL] Urgent Help required

2009-10-16 Thread Neha Patel
Hi David,

Many thanks for your reply. After a good 10 hours of work we managed to
restore from a backup.

Regards,
Neha
 

-Original Message-
From: David Fetter [mailto:da...@fetter.org] 
Sent: 16 October 2009 17:28
To: Neha Patel
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Urgent Help required

On Fri, Oct 16, 2009 at 11:26:40AM +0100, Neha Patel wrote:
> Hi,
>  
> We are running with postgres sql 7.3.2.

Whatever you thought your most urgent priority was, it's actually
getting your database off of a major version of PostgreSQL, 7.3, whose
end-of-life was well over a year ago.

Your second most urgent priority is creating an upgrade strategy and
integrating it into your development and deployment processes.

I hope you have good backups.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate





Re: [GENERAL] Urgent Help required

2009-10-16 Thread David Fetter
On Fri, Oct 16, 2009 at 11:26:40AM +0100, Neha Patel wrote:
> Hi,
>  
> We are running with postgres sql 7.3.2.

Whatever you thought your most urgent priority was, it's actually
getting your database off of a major version of PostgreSQL, 7.3, whose
end-of-life was well over a year ago.

Your second most urgent priority is creating an upgrade strategy and
integrating it into your development and deployment processes.

I hope you have good backups.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] Urgent Help required

2009-10-16 Thread Grzegorz Jaśkiewicz
On Fri, Oct 16, 2009 at 11:26 AM, Neha Patel  wrote:

>  Hi,
>
>
>
> We are running with postgres sql 7.3.2. We were trying to create an index
> on a big table. The create index command ran for nearly 5 hours at which
> point we decided to interrupt it. Since this was interrupted, any
> operations attempted on the table on which the index was being created
> gives following error in pgsql log:
>
Upgrade to whatever the newest 7.3.x version is first.  Also, 7.3 is
ancient history in terms of age, so you might want to look at upgrading
to 8.3 or newer soon.


>
>
> LOG:  all server processes terminated; reinitializing shared memory and
> semaphores
>
>
>
> LOG:  database system was interrupted at 2009-10-16 10:44:54 BST
>
>
> LOG:  checkpoint record is at 150/71A0C0CC
>
>
> LOG:  redo record is at 150/71A0C0CC; undo record is at 0/0; shutdown TRUE
>
>
> LOG:  next transaction id: 1757299460; next oid: 43508941
>
>
> LOG:  database system was not properly shut down; automatic recovery in
> progress
>
> LOG:  ReadRecord: record with zero length at 150/71A0C10C
>
>
> LOG:  redo is not required
>
>
> LOG:  database system is ready
>
>
>
>
> Any idea what this means and what we need to do to resolve access to this
> table again? We can see the next oid number increases over time. Access to
> all other tables in the database is fine. Any help would be greatly
> appreciated.
>
>
>
> Many Thanks
>
> *Neha.*
>



-- 
GJ


Re: [GENERAL] Can't find documentation for ~=~ operator

2009-10-16 Thread Viktor Rosenfeld
Tom Lane wrote:

> Viktor Rosenfeld  writes:
> > I can't find the documentation of the ~=~ operator anywhere on the
> > PostgreSQL homepage.
> 
> Which version's documentation are you reading?  It's gone as of 8.4.

I realize that, but I have to use 8.3 right now and can't find it in
that manual either.

The only hit for ~=~ when searching the docs of 8.2, 8.3 and 8.4 is the
following link:

http://archives.postgresql.org/pgsql-hackers/2006-12/msg00477.php

When I search for xxx_pattern_ops (which, if my memory is correct, was
part of the page that described ~=~) I get section 11.9 (Operator
Classes and Operator Families) as a hit, but that page doesn't contain a
table of operators either.

Cheers,
Viktor



[GENERAL] Urgent Help required

2009-10-16 Thread Neha Patel
Hi,
 
We are running with postgres sql 7.3.2. We were trying to create an
index on a big table. The create index command ran for nearly 5 hours at
which point we decided to interrupt it. Since this was interrupted, any
operations attempted on the table on which the index was being created
gives following error in pgsql log:
 
LOG:  all server processes terminated; reinitializing shared memory and
semaphores
LOG:  database system was interrupted at 2009-10-16 10:44:54 BST
LOG:  checkpoint record is at 150/71A0C0CC
LOG:  redo record is at 150/71A0C0CC; undo record is at 0/0; shutdown TRUE
LOG:  next transaction id: 1757299460; next oid: 43508941
LOG:  database system was not properly shut down; automatic recovery in
progress
LOG:  ReadRecord: record with zero length at 150/71A0C10C
LOG:  redo is not required
LOG:  database system is ready

 
Any idea what this means and what we need to do to resolve access to
this table again? We can see the next oid number increases over time.
Access to all other tables in the database is fine. Any help would be
greatly appreciated. 
 
Many Thanks
Neha.
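
Before attempting any repair on a cluster in this state, a cautious
first step is a cold filesystem backup, roughly like this (a sketch;
the data directory path is a placeholder):

```shell
# Stop the server cleanly, then copy the entire data directory
pg_ctl -D /usr/local/pgsql/data stop
cp -Rp /usr/local/pgsql/data /backup/pgdata-copy

# After upgrading to the latest 7.3.x release, restart and try a logical dump
pg_ctl -D /usr/local/pgsql/data start
pg_dump mydb > mydb.sql
```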


Re: [postgis-users] [GENERAL] pgsql2shp : Encoding headache

2009-10-16 Thread InterRob
I would run that last query searching for the 0xC29F character WITHOUT the
upper() function, directly on the source table, with the client encoding set
to the table's native UTF8. No result either?

Rob

2009/10/16 Arnaud Lesauvage 

> Arnaud Lesauvage wrote:
>
>> But then, if I dump it through a query to have my field in uppercase, I
>> get an error 'character 0xc29f of encoding UTF8 has no equivalent in
>> WIN1252' (translated by myself, the message is in French)
>> The command is simply :
>> pgsql2shp -f myouput.shp -u postgres -g mygeom mydatabase "SELECT
>> upper(mytext) as mytext, mygeom FROM mytemptable"
>>
>
> OK, I narrowed down the problem to the WIN1252 encoding.
> Using LATIN1 or LATIN9 for instance works correctly.
> Since my application seems to work with LATIN9, I'll go with it.
>
> I am still perplexed though. What is this 0xc29f character? An internet
> search tells me that it is some Kanji character. I am quite confident that
> if this is true, it would not convert any better to LATIN9 than to WIN1252.
>
> Also, doing a search like :
> SELECT * FROM mytable WHERE upper(myflied) ILIKE u&'%c29f%';
> Gives me 0 result.
> Am I wrong to think that the error 'character 0xc29f of UTF8' relates to
> the character with code point C29F in UTF8 ?
>
> Thanks again for your help/insights on this matter.
>
> Arnaud
>
>
> ___
> postgis-users mailing list
> postgis-us...@postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users
>


Re: [GENERAL] savepoint name vs prepared transaction name

2009-10-16 Thread Grzegorz Jaśkiewicz
2009/10/16 Tom Lane 

> =?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?=  writes:
> > Is there any reason, why I can't put quotes around name of savepoint, but
> I
> > have/can do this for prepare transaction ?
>
> Savepoint names are identifiers; the SQL spec says so.  Prepared
> transaction GIDs are string literals.  The relevant discussion
> about that is here:
> http://archives.postgresql.org/pgsql-hackers/2005-05/msg01292.php
>
> We could conceivably allow an identifier too in PREPARE TRANSACTION,
> but I think that that might be more dangerous than helpful, because
> of the implicit case folding for identifiers.  'XX' and 'xx' are
> distinct but XX and xx wouldn't be.  There are a few other places
> where we allow strings and identifiers interchangeably, but AFAIR
> they are all places where case doesn't matter.
>
Makes sense. Cheers, Tom.



-- 
GJ


Re: [GENERAL] Wrong estimation of rows for hash join

2009-10-16 Thread Tom Lane
Alban Hertroys  writes:
> I'm also somewhat surprised to see an array of what appear to be  
> integers be cast to bpchar[]. Did you define those coordinates(?) as  
> character types? Numerical comparisons tend to be faster than string  
> comparisons, which should make some difference on sequential scans.

Or, if the column can't be changed to an integer, at least consider
making it varchar not char.  The funny rules about trailing blanks
make char comparison a bit slower than varchar, IIRC.  They aren't
very conducive to fast hashing either ...

regards, tom lane



Re: [GENERAL] savepoint name vs prepared transaction name

2009-10-16 Thread Tom Lane
=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?=  writes:
> Is there any reason, why I can't put quotes around name of savepoint, but I
> have/can do this for prepare transaction ?

Savepoint names are identifiers; the SQL spec says so.  Prepared
transaction GIDs are string literals.  The relevant discussion
about that is here:
http://archives.postgresql.org/pgsql-hackers/2005-05/msg01292.php

We could conceivably allow an identifier too in PREPARE TRANSACTION,
but I think that that might be more dangerous than helpful, because
of the implicit case folding for identifiers.  'XX' and 'xx' are
distinct but XX and xx wouldn't be.  There are a few other places
where we allow strings and identifiers interchangeably, but AFAIR
they are all places where case doesn't matter.
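
The folding hazard is easy to demonstrate in psql (a sketch of the
behavior described above, not tied to any particular version):

```sql
BEGIN;
SAVEPOINT XX;      -- identifier: folded to lowercase, i.e. savepoint "xx"
SAVEPOINT "XX";    -- quoted identifier: a distinct savepoint
ROLLBACK;

-- GIDs are string literals, so 'XX' and 'xx' can never collide by folding:
-- PREPARE TRANSACTION 'XX';
```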

regards, tom lane



Re: [GENERAL] pgsql2shp : Encoding headache

2009-10-16 Thread Arnaud Lesauvage

Arnaud Lesauvage wrote:
But then, if I dump it through a query to have my field in uppercase, I 
get an error 'character 0xc29f of encoding UTF8 has no equivalent in 
WIN1252' (translated by myself, the message is in French)

The command is simply :
pgsql2shp -f myouput.shp -u postgres -g mygeom mydatabase "SELECT 
upper(mytext) as mytext, mygeom FROM mytemptable"


OK, I narrowed down the problem to the WIN1252 encoding.
Using LATIN1 or LATIN9 for instance works correctly.
Since my application seems to work with LATIN9, I'll go with it.

I am still perplexed though. What is this 0xc29f character? An internet 
search tells me that it is some Kanji character. I am quite confident 
that if this is true, it would not convert any better to LATIN9 than to 
WIN1252.


Also, doing a search like :
SELECT * FROM mytable WHERE upper(myflied) ILIKE u&'%c29f%';
Gives me 0 result.
Am I wrong to think that the error 'character 0xc29f of UTF8' relates to 
the character with code point C29F in UTF8?


Thanks again for your help/insights on this matter.

Arnaud
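
For what it's worth: read as UTF8, the bytes 0xC2 0x9F encode the
single code point U+009F, a C1 control character rather than a Kanji.
WIN1252 reassigns most of the 0x80-0x9F range to printable characters,
so U+009F has no equivalent there, while LATIN1/LATIN9 keep the C1
controls, which would explain why those targets work. (The u&'%c29f%'
pattern also searches for the four literal characters "c29f", which
would explain the zero results.) A quick sketch in Python:

```python
# The two bytes 0xC2 0x9F decode, as UTF-8, to the single code point U+009F
ch = b"\xc2\x9f".decode("utf-8")
assert ord(ch) == 0x9F            # a C1 control character, not a Kanji

# Latin-1 and Latin-9 (ISO 8859-15) keep the C1 control range, so these work
print(ch.encode("latin-1"))       # b'\x9f'
print(ch.encode("iso8859-15"))    # b'\x9f'

# Windows-1252 reassigns 0x9F to a printable character, so U+009F cannot map
try:
    ch.encode("cp1252")
except UnicodeEncodeError:
    print("no equivalent in WIN1252")
```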




Re: [GENERAL] Wrong estimation of rows for hash join

2009-10-16 Thread Alban Hertroys

On 16 Oct 2009, at 10:59, Christian Schröder wrote:


Hi list,
I have the following query:
  SELECT *
  FROM base
  INNER JOIN pt USING (x) WHERE pt.y IN ('1121', '11411', '11421',  
'1161', '1162');


"explain analyze" yields the following result:

                                    QUERY PLAN
----------------------------------------------------------------------
 Hash Join  (cost=529.18..164930.70 rows=28374 width=0) (actual time=10.834..4877.326 rows=245298 loops=1)
   Hash Cond: (b.x = pt.x)
   ->  Seq Scan on b  (cost=0.00..159579.93 rows=1210093 width=4) (actual time=0.018..2464.871 rows=1210250 loops=1)
   ->  Hash  (cost=527.41..527.41 rows=142 width=4) (actual time=10.775..10.775 rows=138 loops=1)
         ->  Seq Scan on pt  (cost=0.00..527.41 rows=142 width=4) (actual time=0.057..10.556 rows=138 loops=1)
               Filter: (y = ANY ('{1121,11411,11421,1161,1162}'::bpchar[]))
 Total runtime: 5170.837 ms

As you can see, the estimated number of rows in the join is much  
lower than the actual number. Normally, the join is part of a more  
complex query which gets really slow, probably (among other reasons)  
because the query planner uses the incorrect row estimate for its  
further plans.


Question is: What can I do to improve that estimation? There is a  
foreign key from base.x to pt.x and both fields are declared not null.


The planner seems to think that x and y have a low selectivity, hence  
the sequential scans. The estimate on the hash join seems to indicate  
that the planner is actually still being too optimistic about the  
selectivity of those columns. Is it really the case that those values  
are random, or can their frequency in the table be predicted?


If so, you can create a selectivity function and a domain that uses  
that selectivity function for equality. Check Nathan's recent video on  
the subject (posted just yesterday).


I'm also somewhat surprised to see an array of what appear to be  
integers be cast to bpchar[]. Did you define those coordinates(?) as  
character types? Numerical comparisons tend to be faster than string  
comparisons, which should make some difference on sequential scans.


Side question: What can I do to improve the speed of the hash join  
itself? I understand that 2.5 secs are needed to perform the  
sequential scan of table b, but where do the remaining 2.5 secs come  
from?



As I read it, the seq scans take up the first 2.5s and the actual Hash  
Join the other 2.5s.


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







[GENERAL] savepoint name vs prepared transaction name

2009-10-16 Thread Grzegorz Jaśkiewicz
Is there any reason, why I can't put quotes around name of savepoint, but I
have/can do this for prepare transaction ?

Ie:
SAVEPOINT 'foo';            --- doesn't work
SAVEPOINT foo;              --- all grand

PREPARE TRANSACTION 'foo';  --- grand
PREPARE TRANSACTION foo;    --- refuses to work

It is quite confusing; I feel like some consistency would be nice here.


-- 
GJ


Re: [GENERAL] pgsql2shp : Encoding headache

2009-10-16 Thread Arnaud Lesauvage

Raymond O'Donnell wrote:

If it's any help to you, you can get iconv (and a bunch of other helpful
stuff) from GnuWin32:

  http://gnuwin32.sourceforge.net/


Thanks for your help Raymond.

I tried iconv but I have other problems now.
I still have to load the file into PostgreSQL, because the shapefile's 
data file (.dbf) is associated with an index file and I have to use 
pgsql2shp to rebuild it.
I'll try some more though. Maybe iconv before shp2pgsql, then load with 
client_encoding set to WIN1252, then dump.

I'll see how it goes.

Arnaud



Re: [GENERAL] [postgis-users] pgsql2shp : Encoding headache

2009-10-16 Thread InterRob
Does that last query (invoking the upper() function) actually run well when
executed in pgsql console?

Rob

2009/10/16 Arnaud Lesauvage 

> Hi all!
>
> I have an UTF8 encoded shapefile, and an UTF8 encoded postgis-enabled
> database. I want my shapefile to be encoded in WIN1252, and a particular
> field to be in uppercase.
>
> Since I am on windows, I don't have an iconv executable. Therefore, I am
> trying to :
> - dump the shapefile with shp2pgsql to an sql text file
> - load the resulting sql file into a postgresql table
> - dump this table into a shapefile with pgsql2shp (in WIN1252)
>
> To load the shapefile into postgresql, I had to dump it without specifying
> a '-W' argument, set my client_encoding to UTF8, then load the sql file into
> postgresql.
>
> If I look at the data with pgAdmin (with client_encoding set to UTF8), it
> looks good : accentuation is fine, special characters are fine.
>
> To dump the data in a WIN1252-encoded shapefile, pgsql2shp has no encoding
> argument, so I set my client encoding to WIN1252 through the environment
> variable PGCLIENTENCODING.
>
> If I just dump the file this way, it seems to be fine. So this command
> works fine :
> pgsql2shp -f myouput.shp -u postgres -g mygeom mydatabase "SELECT mytext,
> mygeom FROM mytemptable"
> ->  [621679 rows]
>
> But then, if I dump it through a query to have my field in uppercase, I get
> an error 'character 0xc29f of encoding UTF8 has no equivalent in WIN1252'
> (translated by myself, the message is in French)
> The command is simply :
> pgsql2shp -f myouput.shp -u postgres -g mygeom mydatabase "SELECT
> upper(mytext) as mytext, mygeom FROM mytemptable"
>
> So I guess there is a problem with my 'upper' conversion, but I have no
> idea what this 0xc29f character could be.
>
> Any help would be greatly appreciated.
> Thanks a lot !
>
> --
> Arnaud Lesauvage
> ___
> postgis-users mailing list
> postgis-us...@postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users
>


Re: [GENERAL] pgsql2shp : Encoding headache

2009-10-16 Thread Raymond O'Donnell
On 16/10/2009 10:36, Arnaud Lesauvage wrote:

> I have an UTF8 encoded shapefile, and an UTF8 encoded postgis-enabled
> database. I want my shapefile to be encoded in WIN1252, and a particular
> field to be in uppercase.
> 
> Since I am on windows, I don't have an iconv executable. Therefore, I am
> trying to :

If it's any help to you, you can get iconv (and a bunch of other helpful
stuff) from GnuWin32:

  http://gnuwin32.sourceforge.net/


Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



[GENERAL] pgsql2shp : Encoding headache

2009-10-16 Thread Arnaud Lesauvage

Hi all!

I have an UTF8 encoded shapefile, and an UTF8 encoded postgis-enabled 
database. I want my shapefile to be encoded in WIN1252, and a particular 
field to be in uppercase.


Since I am on Windows, I don't have an iconv executable. Therefore, I am 
trying to:

- dump the shapefile with shp2pgsql to an sql text file
- load the resulting sql file into a postgresql table
- dump this table into a shapefile with pgsql2shp (in WIN1252)

To load the shapefile into postgresql, I had to dump it without 
specifying a '-W' argument, set my client_encoding to UTF8, then load 
the sql file into postgresql.


If I look at the data with pgAdmin (with client_encoding set to UTF8), 
it looks good : accentuation is fine, special characters are fine.


To dump the data in a WIN1252-encoded shapefile, pgsql2shp has no 
encoding argument, so I set my client encoding to WIN1252 through the 
environment variable PGCLIENTENCODING.


If I just dump the file this way, it seems to be fine. So this command 
works fine :
pgsql2shp -f myouput.shp -u postgres -g mygeom mydatabase "SELECT 
mytext, mygeom FROM mytemptable"

->  [621679 rows]

But then, if I dump it through a query to have my field in uppercase, I 
get an error 'character 0xc29f of encoding UTF8 has no equivalent in 
WIN1252' (translated by myself, the message is in French)

The command is simply :
pgsql2shp -f myouput.shp -u postgres -g mygeom mydatabase "SELECT 
upper(mytext) as mytext, mygeom FROM mytemptable"


So I guess there is a problem with my 'upper' conversion, but I have no 
idea what this 0xc29f character could be.


Any help would be greatly appreciated.
Thanks a lot !

--
Arnaud Lesauvage
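
The round trip described above, end to end, looks roughly like this (a
sketch; file, table and column names are the ones from the post, and it
assumes shp2pgsql/pgsql2shp accept these flags as in the commands shown):

```shell
# Load the UTF8 shapefile without -W, with the client encoding set to UTF8
shp2pgsql -g mygeom input.shp mytemptable > load.sql
PGCLIENTENCODING=UTF8 psql -d mydatabase -f load.sql

# Dump back out, forcing the client encoding through PGCLIENTENCODING
PGCLIENTENCODING=WIN1252 pgsql2shp -f myouput.shp -u postgres -g mygeom \
  mydatabase "SELECT upper(mytext) AS mytext, mygeom FROM mytemptable"
```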



[GENERAL] Wrong estimation of rows for hash join

2009-10-16 Thread Christian Schröder

Hi list,
I have the following query:
   SELECT *
   FROM base
   INNER JOIN pt USING (x) WHERE pt.y IN ('1121', '11411', '11421', 
'1161', '1162');


"explain analyze" yields the following result:

                                    QUERY PLAN
----------------------------------------------------------------------
 Hash Join  (cost=529.18..164930.70 rows=28374 width=0) (actual time=10.834..4877.326 rows=245298 loops=1)
   Hash Cond: (b.x = pt.x)
   ->  Seq Scan on b  (cost=0.00..159579.93 rows=1210093 width=4) (actual time=0.018..2464.871 rows=1210250 loops=1)
   ->  Hash  (cost=527.41..527.41 rows=142 width=4) (actual time=10.775..10.775 rows=138 loops=1)
         ->  Seq Scan on pt  (cost=0.00..527.41 rows=142 width=4) (actual time=0.057..10.556 rows=138 loops=1)
               Filter: (y = ANY ('{1121,11411,11421,1161,1162}'::bpchar[]))
 Total runtime: 5170.837 ms

As you can see, the estimated number of rows in the join is much lower 
than the actual number. Normally, the join is part of a more complex 
query which gets really slow, probably (among other reasons) because the 
query planner uses the incorrect row estimate for its further plans.


Question is: What can I do to improve that estimation? There is a 
foreign key from base.x to pt.x and both fields are declared not null.


Side question: What can I do to improve the speed of the hash join 
itself? I understand that 2.5 secs are needed to perform the sequential 
scan of table b, but where do the remaining 2.5 secs come from?


Thanks a lot in advance,
   Christian

--
Deriva GmbH Tel.: +49 551 489500-42
Financial IT and Consulting Fax:  +49 551 489500-91
Hans-Böckler-Straße 2  http://www.deriva.de
D-37079 Göttingen

Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer
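
One knob worth trying for the estimation problem (a sketch; the
statistics target of 1000 is arbitrary, and the names are taken from
the query above):

```sql
-- Collect more detailed per-column statistics on the filter and join columns
ALTER TABLE pt ALTER COLUMN y SET STATISTICS 1000;
ALTER TABLE pt ALTER COLUMN x SET STATISTICS 1000;
ALTER TABLE base ALTER COLUMN x SET STATISTICS 1000;
ANALYZE pt;
ANALYZE base;
```

This does not model the correlation between the filter on pt.y and the
join on x, so the estimate may still be off, but a larger MCV list on
the join columns often tightens it noticeably.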


