Re: [HACKERS] SSPI authentication

2007-07-17 Thread Paul Silveira

This is great.  I've worked on 2 projects in the last year that desperately
needed this.  It will certainly make the security model more seamless...

-Paul




Magnus Hagander wrote:
 
 A quick status update on the SSPI authentication part of the GSSAPI
 project.
 
 I have libpq SSPI working now, with a few hardcoded things still in
 there to be fixed. But it means that I can connect to a linux server
 using kerberos/GSSAPI *without* the need to set up MIT Kerberos
 libraries and settings on the client. This is great :-) The code is
 fairly trivial.
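 
 For reference, the server side should then only need a pg_hba.conf entry
 using the new gss auth method - a sketch, with a placeholder network:
 
 # pg_hba.conf (8.3 syntax): accept Kerberos-authenticated connections
 host    all    all    192.168.1.0/24    gss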
 
 I've set it up as a different way of doing GSSAPI authentication. This
 means that you can't have both SSPI and MIT KRB GSSAPI in the same
 installation. I don't see a problem with this - 99.9% of Windows users
 will just want the SSPI version anyway. But I figured I'd throw it out
 here to see if there are any objections to it.
 
 I'd like to make this enabled by default on Win32, since all supported
 Windows platforms have support for it. Then we can add a configure
 option to turn it *off* if we want to. Comments? Do we even need such an
 option?
 
 Right now, the SSPI path is hardcoded to just support Kerberos. Once we
 have both client and server with SSPI support I see no reason to keep
 this restriction. Anybody against that? (Not saying that'll happen for
 8.3, because it certainly needs a bunch of extra testing, but eventually)
 
 
 //Magnus
 
 
 





Re: [HACKERS] Proposal for Implementing read-only queries during WAL replay (SoC 2007)

2007-02-27 Thread Paul Silveira

Hello,

I just wanted to voice my opinion for this feature...  I've implemented a
few production applications with PostgreSQL now and would die for this
feature.  Right now, I am constantly trying to find ways to make my data
more available.  I've even resorted to using pg_dump to create read-only
copies of the database and placed them behind load balancers to make the
data more available.  Something like this would let me quickly stand up a
read-only node to scale out the application...  If it can at all be built,
it would get my first, second and third vote. :)
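
For what it's worth, the stopgap I keep rebuilding looks roughly like this
(a sketch; host and database names are placeholders):

# nightly refresh of a read-only copy sitting behind the load balancer
pg_dump -h primary-db -U postgres appdb | psql -h readonly-1 -U postgres appdb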

Regards,

Paul Silveira




Jonah H. Harris wrote:
 
 On 2/26/07, Bruce Momjian [EMAIL PROTECTED] wrote:
 Jonah, I have no idea what fault you are trying to blame on the
 community in the above statement.  The author didn't discuss the idea
 with the community before spending months on it so we have no obligation
 to accept it in the core.
 
 You're missing the point entirely.  The majority of the (vocal)
 community didn't even want the feature and, as such, failed to provide
 viable suggestions for him to move forward.  Since the majority of the
 community didn't want the feature, it wouldn't have made a difference
 had he proposed it first; the response would have been negative nonetheless.
 
 -- 
 Jonah H. Harris, Software Architect | phone: 732.331.1324
 EnterpriseDB Corporation            | fax: 732.331.1301
 33 Wood Ave S, 3rd Floor            | [EMAIL PROTECTED]
 Iselin, New Jersey 08830            | http://www.enterprisedb.com/
 
 
 





Re: [HACKERS] snapshot replication with pg_dump

2006-08-21 Thread Paul Silveira

Yes, the needs are simple.  I was also thinking about using DBI.  The most
important thing to me is that everything is kept in a transaction so that
users can still read the data while I'm snapshotting it.
If my transaction is isolated from all the reads happening, then it
shouldn't matter how long it takes for me to move the data over (granted,
that will increase latency, but this project isn't really that
latency-sensitive) and it will be transparent to the end users.

Does anyone have any examples of using pg_dump in a transaction with a
DELETE or TRUNCATE command?  I have begun writing this to get the job
done...

cat DELETE.sql COPYDATA.sql | psql -Upostgres -dMyDBName -hTestServer2

This command concatenates the two SQL files that I have (the first simply
deletes everything from a certain table; the second is a COPY command taken
from a previous pg_dump of that table) and then pipes the result to psql to
run it on the remote server.

I like what I have so far but would like to make it more dynamic.  If I
could eliminate the need for the two .sql files and make it all happen
within the command line, that would rock.  

I guess I'd need something like this... (Pseudo code...)

cat "DELETE FROM MyTable;" + (pg_dump MyDBName -hTestServer1 -a -tMyTableName) |
psql -Upostgres -dMyDBName -hTestServer2


I'm not sure how to prepend the DELETE to the COPY output coming from the
pg_dump and then pipe the whole thing to the remote server to be executed
as a single transaction, so that users can still read from that table while
my command is running.
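
Roughly, the shape I'm picturing is something like this (an untested
sketch, using the same table and server names as above):

(
  echo "BEGIN;"
  echo "DELETE FROM MyTable;"
  pg_dump MyDBName -hTestServer1 -a -tMyTableName
  echo "COMMIT;"
) | psql -Upostgres -dMyDBName -hTestServer2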

Any ideas???

Thanks in advance,

Paul






Christopher Browne wrote:
 
 [EMAIL PROTECTED] (Paul Silveira) writes:
 Does anyone have any good examples of implementing snapshot
 replication? I know that PostgreSQL does not have snapshot
 replication and that Slony-I is the recommended replication scenario,
 but I've configured it and it seems rather advanced for a shop that
 is implementing PostgreSQL for the first time.  I have an
 application that will be mostly reads and snapshot replication would
 probably be simple enough and would work.  I was thinking about just
 using pg_dump to do the trick because the DB should not get very
 large.  Does anyone have any advanced examples of doing something
 like this? Also, does anyone have any comments they'd like to share
 about this...
 
 If your database is small, and your needs simple, then using pg_dump
 to generate snapshots is a perfectly reasonable idea.
 
 I suppose the primary complication is whether or not you have multiple
 databases around on the cluster...  If you don't, or if they all need
 to be snapshotted, you might consider using pg_dumpall, which also
 creates users and databases.
 
 If pg_dumpall is unsuitable, then you'll still need to grab user
 information that isn't part of pg_dump output...
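 
 For instance, something along these lines (a sketch; host names are
 placeholders):
 
 # roles and other globals first, then the database itself
 pg_dumpall -g -h master | psql -h replica template1
 pg_dump -h master mydb | psql -h replica mydb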
 -- 
 (reverse (concatenate 'string gro.mca @ enworbbc))
 http://www3.sympatico.ca/cbbrowne/postgresql.html
 This .signature is  shareware.  Send in $20 for  the fully registered
 version...
 
 
 





Re: [HACKERS] snapshot replication with pg_dump

2006-08-21 Thread Paul Silveira

Can you do that if you have functions tied to the table?  Also, would that be
in a transaction?  I need to allow seamless access to the data while I'm
doing this snapshot.  I'm not sure the -c option (clean: drop objects before
recreating them) would work here.  I want to drop only a table, not the
entire DB, so that I'm not moving data that doesn't need to be moved.

The goal is to snapshot only data in tables that have changed.  I would like
to wrap that in a transaction.
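
For context, the per-table variant I'd be weighing looks something like
this (a sketch, reusing the names from my earlier mail):

pg_dump -c -tMyTableName MyDBName -hTestServer1 | psql -Upostgres -dMyDBName -hTestServer2

...though as far as I can tell psql autocommits each statement unless the
stream is wrapped in an explicit BEGIN/COMMIT, which is exactly my worry.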

-Paul






Martijn van Oosterhout wrote:
 
 On Mon, Aug 21, 2006 at 06:40:22AM -0700, Paul Silveira wrote:
 
 Yes the needs are simple.  I was also thinking about using DBI.  The most
 important thing to me is that everything is kept in a transaction so that
 users can still read the data while I'm snapshotting it at the same time. 
 If my transaction is isolated from all the reads happening, then it
 shouldn't matter how long it takes for me to move the data over (granted,
 that will increase latency, but in this project that's not really too
 sensitive) and it will be transparent to the end users.  
 
 Looks to me like the -c option to pg_dump should do what you want.
 
 [snip]
 
 Have a nice day,
 -- 
 Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to
 litigate.
 
 





[HACKERS] snapshot replication with pg_dump

2006-08-11 Thread Paul Silveira

Hello,

Does anyone have any good examples of implementing snapshot replication?
I know that PostgreSQL does not have snapshot replication and that Slony-I
is the recommended replication scenario, but I've configured it and it seems
rather advanced for a shop that is implementing PostgreSQL for the first
time.  I have an application that will be mostly reads, and snapshot
replication would probably be simple enough and would work.  I was thinking
about just using pg_dump to do the trick because the DB should not get very
large.  Does anyone have any advanced examples of doing something like this?
Also, does anyone have any comments they'd like to share about this...

Thanks in advance,

Paul






Re: [HACKERS] Better name/syntax for online index creation

2006-07-28 Thread Paul Silveira

I really like the CREATE INDEX CONCURRENTLY suggestion that I've seen in this
thread.  It seems like a good alternative to ONLINE and is very easy to
understand.
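
For anyone skimming the thread later, usage would presumably be as natural
as it sounds (index and table names here are made up):

psql -dMyDBName -c "CREATE INDEX CONCURRENTLY idx_orders_customer ON orders (customer_id);"

(CREATE INDEX CONCURRENTLY can't run inside a transaction block, which is
why it's issued as a single statement here.)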

Regards,

Paul




Re: [HACKERS] Better name/syntax for online index creation

2006-07-26 Thread Paul Silveira

I understand the negative implications of calling it ONLINE with regard
to the index rebuild, but I believe it would follow what the industry and
its professionals understand.  Oracle denotes this concept as ONLINE, and
Microsoft markets its new SQL Server 2005 as capable of ONLINE
reindexing.  (Rebuilding an index in SQL Server 2000 will cause blocking.)
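
From memory, the syntax those products use looks roughly like this
(paraphrased, not checked against the docs):

ALTER INDEX my_index REBUILD ONLINE;                          -- Oracle
ALTER INDEX my_index ON my_table REBUILD WITH (ONLINE = ON);  -- SQL Server 2005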

If I knew nothing about PostgreSQL and read in its manual that it had ONLINE
reindexing, the first thing I would think of is the ability to rebuild my
indexes without blocking writers or readers.

There might be a better keyword to use in this situation, but I don't
think ONLINE would be out of bounds...

Just my 2 cents...

Paul Silveira

