[GENERAL] CentOS 6 - www.pgrpms.org - SELinux

2011-09-02 Thread Michael A. Peters
I'm setting up a new server for a CMS I have written (er, partially, 
needs work) that uses PostgreSQL as a backend.


On all my existing CentOS 5 servers, I use pgrpms for PostgreSQL.

I would like to do the same with CentOS 6 but I also want to keep 
SELinux enabled on this box.


Do the RPMs in pgrpms work with SELinux out of the box, or will I need 
to do additional work after the install and after each update?


Secondly, why does the pgrpms repository repackage libevent 1.4.13 as 
compat-libevent14? Binary compatibility with upstream is VERY important 
to me. It seems to me that if PostgreSQL needs a newer libevent, the 
newer libevent should carry the compat package name, so that the vendor's 
libevent can continue to be maintained by the upstream vendor.


I'm probably going to have to exclude libevent from the PostgreSQL yum 
repo file, download the src.rpm for libevent 2, and repackage it myself 
with a proper compat name so that it does not conflict with upstream's 
packaging. This means more work for me, not only because I have to modify 
a spec file, but because I have to pay closer attention to patches, 
since I will be maintaining it myself.
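
A rough sketch of that workflow; the URL, file names, and package name here are placeholders, not real ones:

```shell
# First add  exclude=libevent*  under the pgrpms section of the .repo
# file so yum leaves the repackaged library off the system, then:
wget http://example.org/srpms/libevent-2.0.x.src.rpm   # placeholder URL
rpm -ivh libevent-2.0.x.src.rpm        # unpack spec + sources into ~/rpmbuild
# Edit the spec so the package is named e.g. compat-libevent2 and does
# not own the vendor libevent's file paths, then rebuild:
rpmbuild -ba ~/rpmbuild/SPECS/libevent.spec
```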


Please, in the future (assuming the rpm packagers are reading this), 
consider leaving vendor libraries alone and using compat packages for 
versions of libraries you require that differ from the upstream distro's 
packaging.


Obviously for PostgreSQL itself you shouldn't need to do that, but for 
third-party libraries your build links against, you should whenever possible.


Thank you.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MySQL versus Postgres

2010-08-09 Thread Michael A. Peters


 On Aug 8, 2010, at 2:45 AM, Torsten Zühlsdorff wrote:

*snip*

 I understand and appreciate your position.  Thanks for the
 clarification.

 While I believe that this thread has, for all intents and purposes,
 run its course (and I look forward to reading the documentation it
 informs), I'm going to go out on a limb and present an additional use-
 case that may be unpopular, or at least controversial.

 There are times when a documentation's audience is not interested in
 taking the subject matter to expert level (e.g. informed supervisory
 or vendor-client relationships, proof-of-concept development, hobbies,
 &c.). For those cases, a working understanding is all that's
 strictly necessary. Annotated, cookbook-style code reference is
 especially well suited for that mode of learning.

As a recent convert from MySQL (I needed PostGIS) who has also seen the
benefit of PostgreSQL over MySQL in numerous other areas, that's exactly
what I am doing for myself.

I have 4x6 cards on which I write the PostgreSQL way of doing what I used
to do with MySQL, so that I can easily reference them when I need to.

Should I sit down and read a book and go through the exercises?
Yes. But I need to get stuff done now, and the cheat sheets I make for
myself let me do just that.

I am not a DBA - I am not even a web developer.
I do both because I can't afford to hire them. When I have used stuff
created by them, very frequently their code is clearly crap and insecure,
and even I can see that. So unless I really want to pay the big bucks,
it's better for me to do it myself, and cheat sheets really help.

-
Michael A. Peters

http://www.shastaherps.org/



Re: [GENERAL] Data dumps to files - best methods?

2010-07-26 Thread Michael A. Peters

 Hi All


From what I can see in the specs and current output files, the client
 needs the data output in .xml format in order to use it on their side;
 still trying to understand why, though...

I don't know what they are doing, but XML is an excellent data storage
format. I use it for some stuff I don't need in a relational database; I
can just import the XML into DOMDocument and operate on it that way.

You can probably use the libxml2 facilities of your favorite scripting
language (PHP, Perl, Python, Ruby) to dump the database into whatever kind
of XML they want.
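
That approach can be sketched with Python's standard xml.etree module (standing in for a libxml2 binding). The table name, columns, and rows below are invented for illustration; in practice the rows would come from a database query.

```python
import xml.etree.ElementTree as ET

def rows_to_xml(table_name, columns, rows):
    """Build an XML document with one <row> element per result row."""
    root = ET.Element(table_name)
    for row in rows:
        row_el = ET.SubElement(root, "row")
        for col, value in zip(columns, row):
            # One child element per column, named after the column.
            ET.SubElement(row_el, col).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Invented sample data standing in for a query result set:
xml_doc = rows_to_xml("customers",
                      ["id", "name"],
                      [(1, "Alice"), (2, "Bob")])
print(xml_doc)
```

The same shape works from PHP (DOMDocument) or Perl; the client can then define whatever element names they actually expect.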

-
Michael A. Peters

http://www.shastaherps.org/



Re: [GENERAL] pg_dump, schemas, backup strategy

2010-07-25 Thread Michael A. Peters

 Michael A. Peters wrote:
 I already maintain my own php RPMs because RHEL php is too old (I need
 the DOMDocument stuff) so maybe I need to add Postgresql to that.


 Note that you don't even have to build them yourself; the set at
 https://public.commandprompt.com/projects/pgcore/wiki/ are a
 straightforward drop-in replacement for the ones that RedHat provides.
 Subscribe to that yum repo just for the postgresql* packages and you can
 easily run 8.3 or 8.4 instead of the system 8.1.  You might need to
 recompile your custom PHP against that afterwards, but you shouldn't
 have to build the database itself completely from source.  And you'll
 still get security updates and bug fix point upgrades from that yum
 repo, continuing after the ones for 8.1 slow down.

It looks like I might need to.
I did the transition on my test machine and everything seems to be working
well from the shell, except it won't connect from PHP.

I'm going out for a week, so I'll mess with it when I get back. Could be
something else trivial, too.

-
Michael A. Peters

http://www.shastaherps.org/



Re: [GENERAL] pg_dump, schemas, backup strategy

2010-07-25 Thread Michael A. Peters

 On 25 Jul 2010, at 00:23, Michael A. Peters
 mpet...@shastaherps.org wrote:
 OK. I already maintain my own php RPMs because RHEL php is too old
 (I need the DOMDocument stuff)

 IIRC, the CentOSplus repo already has the DOM stuff.

I basically just rebuild the src.rpm from Fedora, which I think is similar
to what they have in CentOSplus.


-
Michael A. Peters

http://www.shastaherps.org/



Re: [GENERAL] pg_dump, schemas, backup strategy

2010-07-24 Thread Michael A. Peters

 On 24 Jul 2010, at 24:20, Michael A. Peters wrote:

*snip*

 Schema's in Postgres are similar to different databases in MySQL. They
 allow you to organise your tables in groups of tables belonging to similar
 functionality, for example. They have their own permissions too, which is
 nice if you need to restrict certain users to certain functionality. And
 of course you can access tables cross-schema, if you aren't denied the
 permissions.

 In your case, you could move those troublesome tables into their own
 schema and adjust the search_path accordingly for the user your PHP
 application uses to connect to the DB.

I spent last night playing with schemas and I must say, they absolutely
rock. I especially like that you can still run queries involving multiple
schemas when you need to, since they are all part of the same database,
and that pg_dump keeps track of the various privileges granted on a
schema and its tables is class.

It's the right way to do things.
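
A minimal sketch of that setup; the schema, table, and role names are invented, and the role is assumed to already exist:

```sql
CREATE SCHEMA app;
CREATE SCHEMA reporting;
CREATE TABLE app.orders (id serial PRIMARY KEY, placed date, total numeric);
CREATE TABLE reporting.daily (day date PRIMARY KEY, order_count bigint);
-- Privileges are per schema (and per table), and pg_dump preserves them:
GRANT USAGE ON SCHEMA reporting TO report_user;
GRANT SELECT ON reporting.daily TO report_user;
-- Tables in different schemas of the same database can still be joined:
SELECT d.day, d.order_count, sum(o.total)
  FROM reporting.daily d
  JOIN app.orders o ON o.placed = d.day
 GROUP BY d.day, d.order_count;
```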

*snip*


 You could move Sphyder's tables into a separate schema too, but... if you
 disallow the accompanying role (let's say 'sphyder') access to the public
 schema, then it can't read various system tables either. That can cause
 issues with looking up FK constraints and the like.
 Mind that, since I've never been in a situation where I needed to
 disallow some roles access to the public schema, I'm not 100% sure
 about this - a simple test case is easy to create, though.

 I'd probably just put most (or all) of my main database in a schema other
 than 'public' so that the sphyder role can still access the system tables
 it needs (and it won't be able to change those if that role is set up with
 sufficiently restrictive permissions).

That's what I'm doing now.


 As an aside: I'm not familiar with Sphyder, but Postgres' TSearch2 is
 pretty good too. It has been built into the core database since version
 8.3, though not in your version. For 8.1 there is an extension with
 largely the same functionality, in case you're interested. I'm not sure
 how easy it would be to upgrade to the builtin version once you get to
 8.3 or newer, though...

I am going to look into that.


-
Michael A. Peters

http://www.shastaherps.org/



Re: [GENERAL] pg_dump, schemas, backup strategy

2010-07-24 Thread Michael A. Peters

 Michael A. Peters wrote:
 I run CentOS 5.x and I do not like to upgrade vendor supplied packages.
 My version of pg_dump is from postgresql-8.1.21-1.el5_5.1 - I'm assuming
 the 8.1.21 is the important part.


 That's a bad policy with PostgreSQL.  I guarantee you that the problems
 you will run into because you're on PostgreSQL 8.1 are far worse than
 any you might encounter because you've updated from RedHat's PostgreSQL
 to the RPM packages provided by the PostgreSQL packagers.  There are
 hundreds of known limitations in 8.1 you will absolutely suffer from as
 you expand your deployment that have been fixed in later versions.  Yes,
 you can run into a packaging problem after upgrading to the PostgreSQL
 provided 8.3 or 8.4 that doesn't exist with the 8.1 they ship.  But
 that's a *possible* issue, compared to the *guaranteed* limitations that
 are removed by using a later version of the database.

 Also, take a look at
 http://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy
 8.1 will be a frozen release no longer receiving bug fixes real soon now.

OK. I already maintain my own PHP RPMs because RHEL's PHP is too old (I
need the DOMDocument stuff), so maybe I need to add PostgreSQL to that.

PHP is the only thing I currently have that links against PostgreSQL anyway.



Re: [GENERAL] pg_dump, schemas, backup strategy

2010-07-24 Thread Michael A. Peters

 Tom Lane wrote:
 Please note also that Red Hat has been shipping PG 8.4 for RHEL5 for
 awhile --- it's the postgresql84-* package set.  I would hope CentOS
 has copied that by now.


 They have, as of CentOS's 5.5 back in May, and I keep forgetting it's
 there.  I'm not sure whether I like the trade-offs that come from using
 that packaging in every case yet though.  The dependency issues with
 httpd are particularly weird:
 http://wiki.centos.org/Manuals/ReleaseNotes/CentOS5.5 (last item in
 Known Issues).  I personally would rather just replace the system
 database with the newer version directly as the PGDG yums do, but you're
 right that some might prefer to use the system one instead.

I went with the upstream postgresql RPMs. They provided a compat package
for CentOS stuff that links against older client libs, so it works out
nicely.

The reason I went with CentOS for the server when I first started my
project was that I wanted a system with long-term vendor maintenance that
kept things stable rather than bleeding edge - a system that required
minimal package maintenance on my part. The PostgreSQL yum repo allows
that.

Since PostgreSQL has a five-year support commitment, even though it isn't
vendor packaging I can pretty much guarantee that I'll have upgraded the
server before that time limit expires. And even if they don't provide RPMs
for that long, I can maintain the src.rpm for the 8.4 series myself if I
need to (which I hope I don't).

The library version was not the issue with my PHP connection problem,
though building against newer client libs was probably a good idea anyway.
I'm still looking at it (yes, I checked and double-checked pg_hba.conf);
I'll figure it out.


-
Michael A. Peters

http://www.shastaherps.org/



Re: [GENERAL] pg_dump, schemas, backup strategy

2010-07-24 Thread Michael A. Peters


 The library version was not the issue with my php connection problem

I needed to GRANT CONNECT on the database.
I guess that must be new? Anyway, I knew it was likely something simple.
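
For reference, the fix with invented database and role names (it is indeed new relative to 8.1: the CONNECT privilege on databases was introduced in PostgreSQL 8.2):

```sql
-- Allow the role the PHP application connects as to connect at all:
GRANT CONNECT ON DATABASE mydb TO webapp;
```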


-
Michael A. Peters

http://www.shastaherps.org/



[GENERAL] pg_dump, schemas, backup strategy

2010-07-23 Thread Michael A. Peters
I've been using MySQL for years. I switched (er, mostly) to PostgreSQL
recently because I need to use PostGIS. It is all working now for the most
part, and PostGIS is absolutely wonderful.

I run CentOS 5.x and I do not like to upgrade vendor supplied packages. My
version of pg_dump is from postgresql-8.1.21-1.el5_5.1 - I'm assuming the
8.1.21 is the important part.

In writing my backup cron job, I ran into a small problem. It seems that
my version of pg_dump does not accept the -T option for excluding tables.
There are a couple of tables that never need to be included in the backup
(i.e. PHP session data). Since I prefer not to upgrade pgsql at this time,
I was wondering if this is where schemas might help - i.e. can I put those
few tables into a different schema and then tell pg_dump to only dump the
public schema? Schemas are kind of a new concept to me.
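
That workaround should be possible on 8.1; a sketch with invented database, schema, and table names (ALTER TABLE ... SET SCHEMA and pg_dump's -n/--schema option are both available in 8.1, unlike -T, which arrived in a later release):

```shell
psql -d mydb -c "CREATE SCHEMA transient;"
psql -d mydb -c "ALTER TABLE php_sessions SET SCHEMA transient;"
# Dump only the public schema, leaving the session data out:
pg_dump -n public mydb > backup.sql
```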

For my code, I use the PHP PEAR::MDB2 wrapper (which made moving from
MySQL to PostgreSQL much easier; I just had to fix some non-standard SQL I
had). If I move stuff out of the public schema, am I going to have to tell
MDB2 which schema it is in? I guess that may be better suited for a PHP
list, but hopefully someone here knows.

Finally, the one part of my site that is NOT moved over to PostgreSQL is
the site content search engine, which is Sphyder. I would like to move
that over, as I do not see a need to run two databases if one will suffice.
Sphyder also does not use a database layer or prepared statements (and I
love prepared statements for the security aspect), so to move it over it
looks like I should first port it to use MDB2 with prepared statements and
then fix any SQL that breaks in PostgreSQL.

When everything was MySQL, I ran Sphyder in its own database so that a
bug in the Sphyder code could not be exploited to attack my main database.
However, I'm wondering if that is an area where a schema would be better -
i.e. create a schema called sphyder and only give the sphyder user
permission to select from the sphyder schema. Is that what the concept of
schemas is for?
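
A sketch of that locked-down arrangement; all names (including the search_index table) are invented, and the privileges Sphyder actually needs would depend on its tables:

```sql
CREATE SCHEMA sphyder;
CREATE TABLE sphyder.search_index (url text, title text, content text);
CREATE ROLE sphyder LOGIN PASSWORD 'change-me';
GRANT USAGE ON SCHEMA sphyder TO sphyder;
GRANT SELECT ON sphyder.search_index TO sphyder;
-- No privileges are granted on the main application's schema, so SQL
-- injected through the search code cannot touch those tables.
```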

Thanks for helping out a n00b.


-
Michael A. Peters

http://www.shastaherps.org/
