Re: [GENERAL] Best practice on inherited tables

2013-05-19 Thread Frank Lanitz
On 17.05.2013 21:21, Alfonso Afonso wrote:
 Hi Frank
 
 Although you are thinking in OOP, SQL is itself a definition model
 that you should not ignore and, IMHO, you should try to follow the
 normalization rules.
 
 You can build a robust and normalized schema (table primarylocation,
 table secondlocation that has an idprimarylocation, etc.) and later
 build your OOP software translating this into the proper classes (in
 Java you could use a DAO/POJO class or Hibernate/JPA, for example).
 
 With this solution you obtain all the benefits of an RDBMS as well as
 robust OOP software :)

I was indeed thinking about going that way, as it's the traditional
relational model and certainly a rock-solid solution. But I fear it
would end up in a real mess of joins at the database layer, which is why
I thought of making use of the inheritance feature if it's available ;)

Cheers,
Frank





[GENERAL] Best practice on inherited tables

2013-05-17 Thread Frank Lanitz
Hi folks,

I'm looking for a nice way to build this scenario:
I've got a lot of locations with some special types. For example, I've
got workplaces, real places like laboratories, and virtual places like
a parcel service. For each of the different types I need to store some
common attributes as well as some special ones. Having OOP in mind I
arrived at inherited tables, so I created something like this (just a
minimal example):

CREATE TABLE locations(
id SERIAL PRIMARY KEY,
name varchar(50)
);
CREATE TABLE workplaces(
workers integer
) INHERITS (locations);

But now I'm stuck on the primary key: as described in the documentation,
it is not inherited. So I'm looking for the best way to also have the
unique constraint from locations on the workplaces table etc., so that
e.g. I can do something like this:

INSERT INTO workplaces (name, workers) VALUES ('My Place', 5);

and keep both the auto-incrementing and the uniqueness. I was thinking
of creating a number of triggers to do this for me, but I wonder whether
there is a better way.
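One common workaround - a sketch only, with the sequence name invented for this example - is to share a single sequence across the hierarchy and re-declare the key on each child, since INHERITS copies columns and their defaults but not PRIMARY KEY/UNIQUE constraints:

```sql
-- Sketch: one shared sequence keeps ids from colliding across tables.
CREATE SEQUENCE location_id_seq;

CREATE TABLE locations(
    id   integer PRIMARY KEY DEFAULT nextval('location_id_seq'),
    name varchar(50)
);

-- The id column is merged with the inherited one; the PRIMARY KEY
-- has to be declared again because constraints are not inherited.
CREATE TABLE workplaces(
    id      integer PRIMARY KEY DEFAULT nextval('location_id_seq'),
    workers integer
) INHERITS (locations);

INSERT INTO workplaces (name, workers) VALUES ('My Place', 5);
```

Note the caveat: each PRIMARY KEY is still per-table, so uniqueness across the whole hierarchy is not actually enforced - the shared sequence merely avoids collisions in practice.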

Cheers,
Frank

BTW: Using Postgres 9.2 and up


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Update

2013-04-11 Thread Frank Lanitz
On 11.04.2013 10:29, jpui wrote:
 Hi,
 I'm running a server using postgres 8.3 and I was advised to update it...
 what do I have to do in order to update it without stopping the service?

8.3 is out of support, so you will need to go to at least 8.4. This
cannot be done without restarting. Please check the HowTos for upgrading
postgres.
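For reference, the classic dump-and-restore route looks roughly like this; the ports and filenames are made up for illustration (pg_upgrade, available since 9.0, can shorten the downtime, but a restart is unavoidable either way):

```shell
# Sketch only - ports/paths are examples, not recommendations.
pg_dumpall -p 5432 > everything.sql         # dump from the old 8.3 cluster
# ... install the new PostgreSQL version, initdb a cluster on port 5433 ...
psql -p 5433 -d postgres -f everything.sql  # restore into the new cluster
```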

Cheers,
Frank




[GENERAL] HowTo Foreign tables anybody?

2013-03-08 Thread Frank Lanitz
Hi folks,

I'm looking for a HowTo on the Foreign Tables feature. We are thinking
of connecting two postgres databases this way, and I wanted to try it
before I say yes or no ;) However, I didn't find any good HowTo via
§search_engine.
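For the archives: on 9.3+ the postgres_fdw contrib extension covers exactly this case. A minimal sketch - every name, host and credential below is invented:

```sql
CREATE EXTENSION postgres_fdw;

CREATE SERVER remote_pg
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', port '5432', dbname 'otherdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_pg
    OPTIONS (user 'app', password 'secret');

CREATE FOREIGN TABLE remote_customers (
    id   integer,
    name text
) SERVER remote_pg
  OPTIONS (schema_name 'public', table_name 'customers');

-- Queries now run transparently against the remote database:
SELECT count(*) FROM remote_customers;
```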

Cheers,
Frank




[GENERAL] How to store version number of database layout

2013-02-12 Thread Frank Lanitz
Hi folks,

It's more a question of best practice:
How do you manage different versions of the database layout, e.g. across
software releases?
We are planning an application based on postgres and want to store some
version number of the database layout to ensure the correct upgrade
scripts can be applied in case of an upgrade. Is there any built-in for
this?
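There is no built-in for this; a common hand-rolled pattern is an append-only version table that every upgrade script writes to - a sketch with invented names:

```sql
CREATE TABLE schema_version (
    version    integer PRIMARY KEY,
    applied_at timestamptz NOT NULL DEFAULT now()
);

-- Every upgrade script ends by recording itself:
INSERT INTO schema_version (version) VALUES (42);

-- And the installer checks what is deployed before applying anything:
SELECT max(version) AS current_layout FROM schema_version;
```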

Cheers,
Frank




Re: [GENERAL] PG under OpenVZ?

2012-11-21 Thread Frank Lanitz

On 2012-11-13 14:53, François Beausoleil wrote:

Hi!

I've found an old thread on OpenVZ:

(2008): 
http://archives.postgresql.org/pgsql-performance/2008-03/msg00076.php


And a more recent question that scared me a bit:

(2011): 
http://serverfault.com/questions/281783/running-mongodb-with-openvz


On the PostgreSQL general mailing list, I've only found 54 results
when searching for OpenVZ. I'm wondering if OpenVZ is simply
unpopular, or not used at all for PG. What experiences do you have
with OpenVZ? Any performance problems?

We're buying bare metal to run our clusters on, and the supplier is
late delivering the machines. They suggested lending us a machine and
run PostgreSQL under OpenVZ. When the real hardware is ready, we'd
migrate the VZ over to the new physical servers. Thoughts on this?

I have no experience with OpenVZ itself, so if you have general
comments about its stability and/or performance, even unrelated to
PostgreSQL, I'd appreciate them.


I'm running a small PG server for private purposes on OpenVZ. I cannot 
complain so far.


Cheers,
Frank





Re: [GENERAL] Too much clients connected to the PostgreSQL Database

2012-10-30 Thread Frank Lanitz
On 30.10.2012 02:06, rodr...@paripassu.com.br wrote:
 BTW, 200 seems awfully high unless it's a *really* high-end machine. You
 may have fewer timeouts if you avoid swamping the server with a
 thundering herd of requests.

I was maintaining a setup which had > 1000 connections on a not very
high-end server (12GB of RAM). It was just that most of the connections
were idling most of the time: Tomcat with a high number of persistent
connections for some reason, plus stand-alone end-user clients which
establish a database connection on startup and keep it until shutdown.
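For this many-mostly-idle-connections pattern, a connection pooler such as PgBouncer is the usual remedy; a configuration sketch (all hostnames and numbers are illustrative, not recommendations):

```ini
; pgbouncer.ini sketch - hostnames and limits are made up
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction   ; share real backends between idle clients
max_client_conn = 1000    ; clients may keep connections open ...
default_pool_size = 20    ; ... while only 20 server backends exist
```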

Cheers,
Frank





Re: [GENERAL] Help estimating database and WAL size

2012-10-19 Thread Frank Lanitz

On 2012-10-15 23:13, John R Pierce wrote:

On 10/15/12 2:03 PM, Daniel Serodio (lists) wrote:

John R Pierce wrote:

On 10/08/12 1:39 PM, Daniel Serodio (lists) wrote:

3) Estimate the size of the transaction log
** We've got no idea how to estimate this, need advice **


postgres doesn't have a 'transaction log'; it has the WAL (Write-Ahead
Log).  WAL segments are typically 16MB each.  On databases with a
really heavy write load, I might bump checkpoint_segments as high as
60, which seems to result in about 120 of them being created, 2GB
total.  These files get reused, unless you are archiving them to
implement a continuous realtime backup system (which enables PITR,
Point-in-Time Recovery).
Thanks, I was using the term transaction log as a synonym for WAL. 
We're planning on enabling PITR; how can we calculate the WAL size and 
the WAL archive size in this case?



it's based on how much data you're writing to the database.  When
you write tuples (rows) to the database, they are stored in 8K
pages/blocks which are written to the current WAL file as they are
committed; when that WAL file fills up, or the checkpoint_timeout is
reached (the default is 30 seconds, I believe), the WAL file is
written to the archive.

To be able to utilize PITR, you need a complete base backup of the
file system, and /all/ the archived WAL files since that base backup
was taken.


In many cases you will also write these files to some kind of network
storage via e.g. CIFS or NFS, so that your warm-standby machines have
access to them. What I want to say is: this takes some storage, but it
can be sized somewhat independently of the database itself.
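The archiving described above is driven by a few postgresql.conf settings; a sketch for a 9.x-era server, where the archive path is an invented NFS/CIFS mount:

```
# postgresql.conf sketch - /mnt/wal_archive is a made-up mount point
wal_level = archive
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
# optionally force a segment switch so a quiet system still archives:
archive_timeout = 300
```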


Cheers,
Frank







Re: [GENERAL] PostgreSQL force create table / ignore constraints?

2012-10-03 Thread Frank Lanitz
On Wed, 3 Oct 2012 08:12:25 -0700 (PDT)
hartrc rha...@mt.gov wrote:

 Version Postgresql 9.1.6
 OS: SLES 11 64 bit
 
 Background:
 Our developers create database schemas in the development environment
 using PGAdmin (often using the GUI to generate the DDL). 
 We always deploy to production using a script, a single .sql file
 which we execute via the psql command line. This allows us to capture
 any errors in an output file and keep a good view of the deployment
 history over time. 
 
 Issue
 The issue we have is that developers generate the .sql script mainly
 by copying and pasting from PGAdmin's SQL pane, so the order of object
 creation matters: creating a table can fail when it has a foreign key
 constraint on another table that does not exist yet (but is created
 later in the script). This is not a big deal in a schema with 3 or 4
 tables, but when there are 20+ it is a time-consuming task to reorder
 all the create statements.
 
 Can anyone recommend a way of dealing with this? My only other
 thought has been pg_dump although i would prefer if the developers
 could generate the scripts themselves.

What about using pg_dump --schema-only when creating the files? (Or
are you talking about incremental changes?)
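Another way around the ordering problem, sketched here with invented table names: have the script create all tables without foreign keys and add the constraints in a second pass, where the order no longer matters:

```sql
-- Pass 1: tables in any order, no cross-references yet.
CREATE TABLE orders    (id serial PRIMARY KEY, customer_id integer);
CREATE TABLE customers (id serial PRIMARY KEY, name text);

-- Pass 2: constraints, after all tables exist.
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```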

Cheers, 
Frank

-- 
Frank Lanitz fr...@frank.uvena.de




Re: [GENERAL] Odd query result

2012-08-27 Thread Frank Lanitz
On Mon, 27 Aug 2012 10:55:43 +0200
Maximilian Tyrtania li...@contactking.de wrote:

 Hello from Berlin,
 
 I can't quite make sense of this (running PG 9.0.3):
 
 psql (9.0.3)
 Type help for help.
 
 FAKDB=# Select _rowid,f_firmen_isKunde(firmen),bezeichnung::text from firmen
 FAKDB-# where (firmen.bezeichnung='Microsoft Deutschland GmbH');
   _rowid  | f_firmen_iskunde |        bezeichnung
 ----------+------------------+----------------------------
   1214700 | f                | Microsoft Deutschland GmbH
  15779700 | t                | Microsoft Deutschland GmbH
    166300 | t                | Microsoft Deutschland GmbH
 (3 rows)
 
 FAKDB=# Select _rowid,f_firmen_isKunde(firmen),bezeichnung::text from 
 FAKDB-# firmen where 
 FAKDB-# (firmen.bezeichnung='Microsoft Deutschland GmbH') and 
 FAKDB-# (f_firmen_isKunde(firmen)=true) and firmen._rowid=15779700 ;
   _rowid  | f_firmen_iskunde |bezeichnung 
 --+--+
  15779700 | t| Microsoft Deutschland GmbH
 (1 row)
 
 Fine. But this record won't be found if I omit the last condition.
 
 FAKDB=# Select _rowid,f_firmen_isKunde(firmen),bezeichnung::text from firmen
 FAKDB-# where (firmen.bezeichnung='Microsoft Deutschland GmbH') and
 FAKDB-# (f_firmen_isKunde(firmen)=true);
  _rowid | f_firmen_iskunde |        bezeichnung
 --------+------------------+----------------------------
  166300 | t                | Microsoft Deutschland GmbH
 (1 row)
 
 
 What might be up there?

How is f_firmen_isKunde() defined? 

Cheers, 
Frank
-- 
Frank Lanitz fr...@frank.uvena.de




Re: [GENERAL] Result from Having count

2012-08-23 Thread Frank Lanitz
On 23.08.2012 09:52, Condor wrote:
 Hello ppl,
 
 I'm trying to write a query to see how many ids have more than one row.
 
 few records is:
 
 ids | val | some
  a  | 1   | x
  a  | 1   | v
  b  | 1   | x
  b  | 2   | c
 
 
 I focus on ids and val with:
 
 SELECT ids, val FROM table WHERE ids = 'a' GROUP BY ids, val HAVING
 COUNT(ids) > 1;
 
 and result is:
 
 ids | val
  a  |   1
 
 Well, in this condition pgsql should not return a positive result,
 because the documentation says HAVING COUNT works on the GROUP clause,
 and when I group these two records on ids = 'a' they become one row,
 and my condition is that the result after grouping is greater than 1.
 
 I use postgresql 9.1.4 x64
 
 
 Can anyone tell me what I'm missing?


Not sure I understand you correctly, but maybe count() will work for
you. Maybe you would also need a primary key to get good values.

cheers,
Frank





Re: [GENERAL] Result from Having count

2012-08-23 Thread Frank Lanitz
On 23.08.2012 10:45, Condor wrote:
 On , Frank Lanitz wrote:
 On 23.08.2012 09:52, Condor wrote:
 Hello ppl,

 I'm trying to write a query to see how many ids have more than one row.

 few records is:

 ids | val | some
  a  | 1   | x
  a  | 1   | v
  b  | 1   | x
  b  | 2   | c


 I focus on ids and val with:

 SELECT ids, val FROM table WHERE ids = 'a' GROUP BY ids, val HAVING
 COUNT(ids) > 1;

 and result is:

 ids | val
  a  |   1

 Well, in this condition pgsql should not return a positive result,
 because the documentation says HAVING COUNT works on the GROUP clause,
 and when I group these two records on ids = 'a' they become one row,
 and my condition is that the result after grouping is greater than 1.

 I use postgresql 9.1.4 x64


 Can anyone tell me what I'm missing?


 Not sure I understand you correctly, but maybe count() will work for
 you. Maybe you would also need a primary key to get good values.

 cheers,
 Frank
 
 
 Sorry for my email;
 after some thinking I understood my error and changed the query to:
 
 SELECT COUNT(DISTINCT val), ids FROM table WHERE ids = 'a' GROUP BY ids
 HAVING COUNT(DISTINCT val) > 1;
 
 and it works.

So I did indeed misunderstand your request ;) But I'm glad you found a
solution.
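A quick reproduction with the sample data from above (a sketch runnable in psql; the table is named demo here to avoid the keyword TABLE):

```sql
CREATE TABLE demo (ids text, val integer, some text);
INSERT INTO demo VALUES ('a',1,'x'), ('a',1,'v'), ('b',1,'x'), ('b',2,'c');

-- 'a' has only one distinct val, so no row comes back:
SELECT COUNT(DISTINCT val), ids FROM demo WHERE ids = 'a'
GROUP BY ids HAVING COUNT(DISTINCT val) > 1;

-- 'b' has two distinct vals and therefore qualifies:
SELECT COUNT(DISTINCT val), ids FROM demo WHERE ids = 'b'
GROUP BY ids HAVING COUNT(DISTINCT val) > 1;
```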

Cheers,
Frank





Re: [GENERAL] Best practice non privilege postgres-user

2012-08-20 Thread Frank Lanitz
On Fri, 17 Aug 2012 08:53:05 -0400
Moshe Jacobson mo...@neadwerx.com wrote:

 I do not know of anything that can't be done from within psql.
 We use non-privileged user roles in postgres for day-to-day
 operations. When I need to modify the schema, I become postgres (you
 can do \c - postgres) and do what I need to do, then revert back to
 my regular user.

It's not only about the things that can be done from within psql, at
least not originally. Some of our current workflows rely on real shell
access.

Cheers, 
Frank
-- 
Frank Lanitz fr...@frank.uvena.de




[GENERAL] Best practice non privilege postgres-user

2012-08-17 Thread Frank Lanitz
Hi folks,

I'm looking for some kind of best practice for a non-privileged postgres
user. As not all operations can be done from within psql, you might need
access to postgres on the command line from time to time. Currently this
is done via root privileges and »su - postgres« directly on the database
server - which might not be the best idea. Therefore our goal is to
limit access to a small number of people in the first place and not
necessarily give them root privileges on the database server. We
experimented a bit with sudo but had issues with some of the environment
variables. So my question is: do you have any best practice for managing
this? Is there any golden rule?
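One sketch of a middle ground - the group name is invented and this is not a vetted recipe: a narrow sudoers rule plus sudo's -i flag, which starts a clean login environment and sidesteps most environment variable trouble:

```
# /etc/sudoers.d/dba  (sketch - the 'dba' group is invented)
%dba ALL=(postgres) NOPASSWD: ALL

# Usage: a clean login shell as postgres, without anyone being root:
#   sudo -u postgres -i
```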

Cheers,
Frank




Re: [GENERAL] How to don't update sequence on rollback of a transaction

2012-08-03 Thread Frank Lanitz
Hi,

Thanks very much for the detailed answer. I totally missed the issue
with concurrent transactions.

On 03.08.2012 02:00, Craig Ringer wrote:

 It's interesting that you read the documentation and still got bitten by
 this. I'll have to think about writing a patch to add some
 cross-references and make the tx exception of sequences more obvious.

This would be great. I just read the transaction documentation and had
only a short look onto sequence documentation part. I totally missed the
important window at the end.

 The general idea with sequences is that they produce numbers that can be
 meaningfully compared for equality and for greater/less-than, but *not*
 for distance from each other. Because they're exempt from transactional
 rollback you shouldn't use them when you need a gap-less sequence of
 numbers.
 
 It's usually a sign of an application design problem when you need a
 gapless sequence. Try to work out a way to do what you need when there
 can be gaps. Sometimes it's genuinely necessary to have gapless
 sequences though - for example, when generating cheque or invoice numbers.

Yes. I understood now ;)

 Gap-less sequences are often implemented using a counter table and
 UPDATE ... RETURNING, eg:
 
 CREATE TABLE invoice_number (
 last_invoice_number integer primary key
 );
 
 -- PostgreSQL specific hack you can use to make
 -- really sure only one row ever exists
 CREATE UNIQUE INDEX there_can_be_only_one
 ON invoice_number( (1) );
 
 -- Start the sequence so the first returned value is 1
 INSERT INTO invoice_number(last_invoice_number) VALUES (0);
 
 -- To get a number; PostgreSQL specific but cleaner.
 UPDATE invoice_number
 SET last_invoice_number = last_invoice_number + 1
 RETURNING last_invoice_number;
 
 
 Note that the `UPDATE ... RETURNING` will serialize all transactions.
 Transaction n+1 can't complete the UPDATE ... RETURNING statement until
 transaction `n' commits or rolls back. If you are using gap-less
 sequences you should try to keep your transactions short and do as
 little else in them as possible

Thanks for the detailed explanation of how to do it correctly. I'm not
dealing with invoice numbers, but with something else where I also don't
want to have gaps.

Cheers,
Frank




Re: [GENERAL] How to don't update sequence on rollback of a transaction

2012-08-03 Thread Frank Lanitz
On 02.08.2012 17:15, Andrew Hastie wrote:
 Hi Frank,
 
 I believe this is by design. See the bottom of the documentation on
 sequences where it states ;-
 
 *Important:* To avoid blocking concurrent transactions that obtain
 numbers from the same sequence, a |nextval| operation is never rolled
 back; that is, once a value has been fetched it is considered used, even
 if the transaction that did the |nextval| later aborts. This means that
 aborted transactions might leave unused holes in the sequence of
 assigned values. |setval| operations are never rolled back, either.
 
 http://www.postgresql.org/docs/9.1/static/functions-sequence.html
 
 If you really want to reset the sequence, I think you would have to call
 SELECT SETVAL(.) at the point you request the roll-back.

Yepp, somehow I missed that part of the documentation. I don't think
setval will do what I want, but Craig's idea looks very good.

Thanks for feedback!

Cheers,
Frank




[GENERAL] How to don't update sequence on rollback of a transaction

2012-08-02 Thread Frank Lanitz
Hi folks,

I did a test with transactions and stumbled over behaviour I didn't
expect. At http://pastebin.geany.org/bYQNo/raw/ I posted a complete log
of it.

To make it short: I created a table with a serial column and started a
transaction. Then I inserted values into the table but did a rollback.
However, the sequence behind the serial field had been incremented by 1
on each insert (which is fine), but wasn't reset by the rollback of the
transaction.

The documentation states:
If, partway through the transaction, we decide we do not want to commit
(perhaps we just noticed that Alice's balance went negative), we can
issue the command ROLLBACK instead of COMMIT, and all our updates so far
will be canceled.

My understanding was that this includes sequences. Obviously I'm
wrong... but how do I do it right?
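A minimal session reproducing the behaviour described above:

```sql
CREATE TABLE t (id serial PRIMARY KEY, v text);

BEGIN;
INSERT INTO t (v) VALUES ('a');  -- nextval() fires, sequence is now at 1
ROLLBACK;                        -- the row is undone; the sequence advance is not

INSERT INTO t (v) VALUES ('b');
SELECT id, v FROM t;             -- one row with id = 2: a gap where 1 was
```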

Cheers,
Frank



[GENERAL] pg_database_size differs from df -s

2012-06-06 Thread Frank Lanitz
Hi folks,

I've got an issue where I'm not sure whether I have a misunderstanding.
When calling

select sum(pg_database_size(datid)) as total_size from pg_stat_database

the result is much bigger than that of running a df -s over the postgres
folder - it's off by about a factor of 5 to 10, depending on the
database.

My understanding was that pg_database_size is the database size on disk.
Am I misunderstanding the documentation here?
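For comparing against the on-disk numbers, a per-database breakdown can help; this uses only standard system functions:

```sql
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```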

Cheers,
Frank



Re: [GENERAL] pg_database_size differs from df -s

2012-06-06 Thread Frank Lanitz
On 06.06.2012 17:49, Tom Lane wrote:
 Frank Lanitz fr...@frank.uvena.de writes:
 I've got an issue where I'm not sure whether I have a
 misunderstanding. When calling
 
 select sum(pg_database_size(datid)) as total_size from pg_stat_database
 
 the result is much bigger than that of running a df -s over the
 postgres folder - it's about a factor of 5 to 10 depending on the
 database.
 
 Did you mean du -s?

Yepp, sure. Was too confused about the two numbers. ;)

 My understanding was, pg_database_size is the database size on disc. Am
 I misunderstanding the docu here?
 
 For me, pg_database_size gives numbers that match up fairly well with
 what du says.  I would not expect an exact match, since du probably
 knows about filesystem overhead (such as metadata) whereas
 pg_database_size does not.  Something's fishy if it's off by any large
 factor, though.  Perhaps you have some tables in a nondefault
 tablespace, where du isn't seeing them?

Nope. It's a pretty much clean database without any fancy stuff.

Cheers,
Frank



Re: [GENERAL] pg_database_size differs from df -s

2012-06-06 Thread Frank Lanitz
On Wed, 6 Jun 2012 20:31:36 +0200
Alban Hertroys haram...@gmail.com wrote:

 On 6 Jun 2012, at 16:33, Frank Lanitz wrote:
 
  the result is much bigger than running a df -s over the postgres
  folder
  - Its about factor 5 to 10 depending on database.
 
 
 Is your du reporting sizes in Bytes or blocks or ...?

Should be bytes, as it's Linux. 

cheers, 
Frank
-- 
Frank Lanitz fr...@frank.uvena.de




[GENERAL] Variables inside plpythonu

2012-05-12 Thread Frank Lanitz
Hi folks, 

I checked the documentation but didn't find the best way to use
plpythonu with a variable inside a query.
Let's say I want to run this query:

SELECT id FROM some_table WHERE date=date_from_function_call

What does the CREATE FUNCTION statement have to look like?
I have already figured out that the Python code should look similar to

plan = plpy.prepare("SELECT id FROM some_table WHERE date= ...")
rv = plpy.execute(plan)

But somehow the last piece is missing.

Can anybody help?

Cheers, 
Frank
-- 
Frank Lanitz fr...@frank.uvena.de




Re: [GENERAL] Lock out PostgreSQL users for maintenance

2012-05-12 Thread Frank Lanitz
On Sat, 12 May 2012 06:29:54 +0200
Alexander Farber alexander.far...@gmail.com wrote:

 Or should I edit pg_hba.conf and restart the process?

At least this is what we are doing: we have a normal pg_hba.conf and a
pg_hba.conf for maintenance, and switch between them on demand. Maybe
not the best solution, but it's working ;)

Cheers, 
Frank 
-- 
Frank Lanitz fr...@frank.uvena.de




Re: [GENERAL] Variables inside plpythonu

2012-05-12 Thread Frank Lanitz
On Sat, 12 May 2012 07:11:08 -0700
Adrian Klaver adrian.kla...@gmail.com wrote:

 Something like this?:
 
 create or replace function date_test(some_date date) returns void as
 $Body$
 date_plan = plpy.prepare("select id_fld from date_test where date_fld = $1", ["date"])
 date_rs = plpy.execute(date_plan, [some_date])
 plpy.notice(date_rs[0]["id_fld"])
 $Body$
 language plpythonu;

Yes. Gave me the missing piece. Thanks a lot!

Cheers, 
Frank
-- 
Frank Lanitz fr...@frank.uvena.de




[GENERAL] Is it possible to call other functions inside plpythonu?

2012-04-27 Thread Frank Lanitz
Hi folks,

I'm just looking for a nice server-side solution to implement some
fundamental logic for an application. plpythonu looks very good in these
terms, as I like the syntax of Python. However, a very old blog post at
[1] made me unsure whether I really should use it. Is it still (or has
it ever been) an issue that plpythonu has a lot of overhead and is not
able to make use of other functions? I didn't find anything about that
in the docs for 9.1.
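Regarding the "cannot use other functions" concern: a PL/Python function can at least reach any SQL-callable function through SPI. A sketch with invented names:

```sql
CREATE FUNCTION double_it(x integer) RETURNS integer AS
$$ SELECT $1 * 2 $$ LANGUAGE sql;

CREATE FUNCTION calls_other(x integer) RETURNS integer AS
$$
    # plpythonu can call any SQL-level function via plpy.execute
    plan = plpy.prepare("SELECT double_it($1) AS d", ["integer"])
    return plpy.execute(plan, [x])[0]["d"]
$$ LANGUAGE plpythonu;

SELECT calls_other(21);  -- expected: 42
```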

Cheers,
Frank

[1] http://spyced.blogspot.de/2005/04/plpython-intro.html



Re: [GENERAL] configuring RAID10 for data in Amazon EC2 cloud?

2012-03-27 Thread Frank Lanitz
On Tue, 27 Mar 2012 11:25:53 -0400
Welty, Richard rwe...@ltionline.com wrote:

 does anyone have any tips on this? Linux software RAID doesn't seem
 to be doing a very good job here, but I may well have missed
 something.
 
 I did a fairly naive setup using Linux software RAID on an Amazon
 Linux instance, 10 volumes (8G each), with the WAL on a separate EBS
 volume, and the following setup:
 

You might want to check with Amazon here. 

Cheers, 
Frank 

-- 
Frank Lanitz fr...@frank.uvena.de




[GENERAL] Values inside rolvaliduntil of pg_authid

2012-03-26 Thread Frank Lanitz
Hi folks,

I'm currently doing some checks on users. In doing so I've found a
couple of values in pg_authid.rolvaliduntil that I wasn't able to figure
out via the documentation. Maybe you can help me out as to whether these
are valid dates and how postgres interprets them:

- infinity (I assume it's treated like NULL in this column - an
unlimited password)
- 1970-01-01 00:00:00+01 (UNIX timestamp 0; I have no bloody idea here)


(I'm running 8.4 here)
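Both special values can be inspected directly, since the column is a plain timestamptz: NULL and 'infinity' both mean the password never expires, while the 1970 epoch value is simply a timestamp far in the past, i.e. a password that expired long ago. A sketch of a check:

```sql
SELECT rolname,
       rolvaliduntil,
       rolvaliduntil IS NULL
         OR rolvaliduntil = 'infinity' AS never_expires,
       rolvaliduntil < now()           AS already_expired  -- true for 1970
FROM pg_authid;
```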

cheers,
Frank




Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs

2012-03-23 Thread Frank Lanitz
On 23.03.2012 06:45, Gerhard Wiesinger wrote:
 With a database admin of a commercial database system I've discussed
 that they have to provide and they also achieve 2^31 transactions per
 SECOND!

Just curious: what is causing this many transactions?

Cheers,
Frank



Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs

2012-03-23 Thread Frank Lanitz
On 23.03.2012 14:23, Adrian Klaver wrote:
  I would say either they got the numbers wrong or someone is pulling
 your leg. That rate is not going to happen.

Maybe Twitter or Facebook all in all...

Cheers,
Frank



Re: [GENERAL] POSTGRESQL Newbie

2012-03-21 Thread Frank Lanitz
On 21.03.2012 12:35, Marti Raudsepp wrote:
 On Wed, Mar 21, 2012 at 11:10, Vincent Veyron vv.li...@wanadoo.fr wrote:
 However, I once read that the real reason is that mysql was available
 when ISPs came of existence, circa 1995. It lacked important features of
 an RDBMS (you can google the details), but it was enough to satisfy the
 needs of php scripts for instance.

 First to market, in short.
 
 Let's not forget that PostgreSQL sucked, too, back then.
 
 PostgreSQL's maintenance was absolutely horrific. And if you got it
 wrong, it would bog down all your hardware resources. MySQL lacked
 many features, but it just worked without maintenance.
 
 E.g. VACUUM/ANALYZE needed to be run manually and it used to take an
 *exclusive* lock on tables, for longish periods, preventing any
 queries! Failure to vacuum would cause the files to bloat without
 limit and slow down your queries gradually. In the worst case, you hit
 XID wraparound and the database would shut down entirely.
 
 Even still in 8.3 (which was newest until 2009) with autovacuum, if
 you got max_fsm_pages tuned wrong, vacuum would basically stop
 functioning and your tables would bloat.

Yepp... remembering back to when I started to get in contact with LAMP,
mysql just worked. It wasn't fast and didn't have a lot of fancy
features, but it just worked in its default config for day-to-day stuff.

Cheers,
Frank




Re: [GENERAL] Large PostgreSQL servers

2012-03-21 Thread Frank Lanitz
On Wed, 21 Mar 2012 20:31:08 +0100
Kjetil Nygård polpo...@gmail.com wrote:

 We are considering to migrate some of our databases to PostgreSQL. 
 
 We wonder if someone could give some hardware / configuration specs
 for large PostgreSQL installations. 
 We're interested in:
   - Number of CPUs
   - Memory on the server
   - shared_buffers
   - Size of the database on disk

I guess this depends heavily on how big your database is ... 

Cheers, 
Frank 
-- 
Frank Lanitz fr...@frank.uvena.de




Re: [GENERAL] Adding German Character Set to PostgresSQL

2012-01-03 Thread Frank Lanitz
On 02.01.2012 20:13, Hagen Finley wrote:
 I am using psql (8.2.15) and I would like to input German characters
 (e.g. ä,ß,ö) into char fields I have in a database

I see that you are using Outlook, which leads me to assume you are
running Windows as the host for your session, and are therefore probably
logged into the server with PuTTY. Can you make sure your PuTTY settings
are correct, so that ä etc. get posted properly to the database? Also,
what brings you to the conclusion that it's not working correctly?

Cheers,
Frank



Re: [GENERAL]

2011-12-10 Thread Frank Lanitz
On Thu, 08 Dec 2011 23:12:36 +
Raymond O'Donnell r...@iol.ie wrote:

 Just wondering, and without intending to cast any aspersions on the
 poster - is this spam or legit? I didn't take the risk of actually
 clicking it...
 
 There have been a few posts like this recently - links without any
 commentary or explanation.

Take it as spam and don't think twice about it. 

Cheers, 
Frank
-- 
Frank Lanitz fr...@frank.uvena.de




Re: [GENERAL] How to configure the connection timeout in PostgreSQL 8.3

2011-12-09 Thread Frank Lanitz
On 09.12.2011 16:02, Andre Lopes wrote:

 I'm using PostgreSQL 8.3 and I need to reduce the timeout. How can I
 configure the connection timeout?

Which connection timeout would you like to change? Most likely this is
an option you can change on your client.

Cheers,
Frank



Re: [GENERAL] pg_standby: How to check in which state the server is currently?

2011-12-06 Thread Frank Lanitz
Am 05.12.2011 17:02, schrieb Cédric Villemain:
 Le 5 décembre 2011 12:16, Frank Lanitz fr...@frank.uvena.de a écrit :
 Hi list,

 From time to time we have had the issue that the standby server
 stops recovering, creates a new timeline and comes up running.
 In parallel to checking for the reasons for this behavior, we are looking
 for a clean way to check whether the warm standby database is still in
 recovery mode or has gone live. I did some searching, but didn't find
 any ready-to-use script for doing this on a standalone basis (it should
 only be a temporary solution that will be replaced by real monitoring later)
 
 See check_postgres; I added that recently for a similar purpose.
 You give it one of '--assume-standby-mode' or '--assume-prod' when
 you check the last checkpoint.
 If the server is not in the expected mode, it emits CRITICAL (for Nagios,
 but check_postgres is a standalone script and can be used with other
 supervision/monitoring software).
 It is not yet released, see:
 https://github.com/bucardo/check_postgres/commit/0ff408711dab18b05de26656a945fa37e363f6aa
 (depends on other patches but you get the idea)
 http://bucardo.org/wiki/Check_postgres

At first view this looks very promising. I will have a deeper look ;)

Cheers,
Frank




[GENERAL] pg_standby: How to check in which state the server is currently?

2011-12-05 Thread Frank Lanitz
Hi list,

From time to time we have had the issue that the standby server
stops recovering, creates a new timeline and comes up running.
In parallel to checking for the reasons for this behavior, we are looking
for a clean way to check whether the warm standby database is still in
recovery mode or has gone live. I did some searching, but didn't find
any ready-to-use script for doing this on a standalone basis (it should
only be a temporary solution that will be replaced by real monitoring later).
I looked at solutions that check ps for pg_standby and try to
connect to the database. But I'm not sure how reliable this is in terms
of false positives and missed events.
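
One way to make the connect-and-check idea concrete: while an 8.4 warm standby
is still replaying WAL, the postmaster refuses connections (typically with
"FATAL: the database system is starting up"), whereas a server that has gone
live accepts them. A small sketch of classifying the outcome of a connection
attempt; the exact error wording is an assumption based on stock English
server messages:

```python
def classify_standby(connect_error):
    """Classify an 8.4/pg_standby warm standby from a connection attempt.

    connect_error is None if the connection succeeded, otherwise the
    libpq error message. While WAL replay is in progress the server
    refuses connections with "the database system is starting up";
    once recovery finishes, connections succeed and the server is live.
    """
    if connect_error is None:
        return "live"        # accepting connections -> no longer recovering
    if "starting up" in connect_error:
        return "recovering"  # still replaying WAL from the archive
    return "unknown"         # anything else (network problem etc.) -> alert

print(classify_standby("FATAL: the database system is starting up"))
```

This still misses the false-positive case where the standby has silently
switched timelines, which is why a checkpoint-based tool like check_postgres
is the more robust long-term answer.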

Can anybody put me onto the right way here?

Cheers,
Frank



Re: [GENERAL] CPU move

2011-11-27 Thread Frank Lanitz
On 26.11.2011 19:18, Carlos Henrique Reimer wrote:
 Hi,
 
 We're planning to move our postgreSQL database from one CPU box to
 another box.
 
 I'm considering an alternative procedure for the move as the standard
 one (pg_dump from the old, copy dump to the new box, psql to restore in
 the new) will take about 10 hours to complete. The idea is to install
 the same Linux and PostgreSQL versions in the new box and copy the
 entire database cluster directory from the old to the new one using the
 scp Linux command.

If you are using the same architecture, you could use the warm-standby
procedure to do the sync and then switch over the system. pg_standby is
available for this.

Cheers,
Frank





Re: [GENERAL] Postgres 8.4: archive_timeout vs. checkpoint_timeout

2011-10-10 Thread Frank Lanitz
Hi,

Thanks for your response.

On 07.10.2011 22:05, Derrick Rice wrote:

 On Thu, Oct 6, 2011 at 3:47 AM, Frank Lanitz fr...@frank.uvena.de
 mailto:fr...@frank.uvena.de wrote:
 
 Hi folks,
 
 I want to refer to a question Rob did back in 2008 at
 http://archives.postgresql.org/pgsql-general/2008-07/msg01167.php as we
 are currently running into a similar question:
 We are using warm standby via PITR using a shared drive between master
 and slave node.
 
 Our setup currently is set to archive_timeout = 60s and
 checkpoint_timeout = 600s.
 
  We expected that now every minute a WAL file is written to the share,
  but somehow we might have misunderstood some part of the documentation, as in
  periods with low traffic on the database the interval between WAL files is
  anywhere from one minute up to ten minutes.
 
 
 The 8.4 docs lack this detail, but the 9.0 docs explain this.  I don't
 believe it's a behavior change; I think it's just more clarification in
 the documents (
 http://www.postgresql.org/docs/9.0/interactive/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT
 )
 
  When this parameter is greater than zero, the server will switch to a
 new segment file whenever this many seconds have elapsed since the last
 segment file switch, ***and there has been any database activity,
 including a single checkpoint.*** (emphasis mine)
 
 Tom said something similar in the thread you referenced:
 
 http://archives.postgresql.org/pgsql-general/2008-07/msg01166.php
 
 One possible connection is that an xlog file switch will not actually
 happen unless some xlog output has been generated since the last switch.
 If you were watching an otherwise-idle system then maybe the checkpoint
 records are needed to make it look like a switch is needed.  OTOH if
 it's *that* idle then the checkpoints should be no-ops too.

We are seeing import failures on the slave after lowering the
archive_timeout below the checkpoint_timeout. Did I understand
correctly that these errors might be caused by this?

 However, the goal was to have a WAL file every minute so disaster
 recovering can be done fast with a minimum of lost data.
 
 
 
 If there was any data, it's existence in the transaction log would
 trigger the archive_timeout behavior.  With no database activity, you
 aren't missing anything.
  
 
 Question is: What did we miss? Do we need to put checkpoint_timeout also
 to 60s and does this makes sense at all?
 
 
 You are getting what you need (maximum 60s between data and the
 corresponding data being sent through archive_command), just not exactly
 what you thought you asked for.

 If you absolutely must have a file every minute in order to sleep well, you can
 lower checkpoint_timeout.  Keep in mind the cost of checkpoints.

We will have to think about this.

Cheers,
Frank
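
The rule quoted from the 9.0 docs above can be condensed into a one-line
model: a forced segment switch occurs only once archive_timeout seconds have
elapsed since the last switch and some database activity happened in between.
A simplified sketch for illustration, not actual server code:

```python
def forced_switch_due(seconds_since_switch, archive_timeout, had_activity):
    """Model of the documented archive_timeout rule: a forced WAL
    segment switch happens once archive_timeout seconds have elapsed
    since the last switch AND any database activity (even just a
    checkpoint record) occurred in between; a fully idle system
    produces no new segment at all.
    """
    return seconds_since_switch >= archive_timeout and had_activity

# Busy minute: a switch is due.  Idle minute: no new WAL file appears.
print(forced_switch_due(90, 60, had_activity=True))
print(forced_switch_due(90, 60, had_activity=False))
```

This is why the observed gap between archived files stretches toward the
checkpoint interval on a quiet system: there is simply nothing to switch.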



[GENERAL] Standby server witching into master role after import failure

2011-10-07 Thread Frank Lanitz
Hi folks,

I've got an issue I'm currently investigating and I'm not 100% sure about
the root cause.
The setup is pretty simple: a standby server via PITR is importing
WAL files from the master server. During this import there appears to have
been an error which caused the standby to finish importing and go
live. I'm copying in the log-file parts, which I tried to translate to
English (the server is running de_DE):

 2011-09-28 19:38:26 CEST  LOG:  unexpected page address 8B/5A00
at log file 144, segment 1, offset 0
 2011-09-28 19:38:26 CEST  LOG:  redo done at 90/1EB48
 2011-09-28 19:38:26 CEST  LOG:  last completed transaction was
at log time 2011-09-28 19:37:04.605199+02
 2011-09-28 19:38:26 CEST  LOG:  log file 00010090
restored from archive
 2011-09-28 19:39:26 CEST  LOG:  selected new timeline ID: 2
 2011-09-28 19:40:26 CEST  LOG:  archive recovery complete

Obviously there has been an issue with a WAL file. But is there any chance
to get details on what went wrong and what can be done to prevent such
things in the future?

It's all 8.4 running on SLES11 SP1 AMD64

Cheers,
Frank



[GENERAL] Postgres 8.4: archive_timeout vs. checkpoint_timeout

2011-10-06 Thread Frank Lanitz
Hi folks,

I want to refer to a question Rob did back in 2008 at
http://archives.postgresql.org/pgsql-general/2008-07/msg01167.php as we
are currently running into a similar question:
We are using warm standby via PITR using a shared drive between master
and slave node.

Our setup currently is set to archive_timeout = 60s and
checkpoint_timeout = 600s.

We expected that now every minute a WAL file is written to the share,
but somehow we might have misunderstood some part of the documentation, as in
periods with low traffic on the database the interval between WAL files is
anywhere from one minute up to ten minutes.

However, the goal was to have a WAL file every minute so that disaster
recovery can be done fast with a minimum of lost data.
The question is: what did we miss? Do we need to set checkpoint_timeout
to 60s as well, and does this make sense at all?

Cheers,
Frank



Re: [GENERAL] Implementing thick/fat databases

2011-07-25 Thread Frank Lanitz

On 22.07.2011 21:15, Karl Nack wrote:

to move as much business/transactional logic as
possible into the database, so that client applications become little
more than moving data into and out of the database using a well-defined
API, most commonly (but not necessarily) through the use of stored
procedures.


Besides the points already mentioned, doing this might cause bottlenecks 
if you have complicated transactions, as the DB cluster might not 
scale as well as a farm of application servers could.


Cheers,
Frank



Re: [GENERAL] Implementing thick/fat databases

2011-07-25 Thread Frank Lanitz

On 25.07.2011 10:12, Pavel Stehule wrote:

2011/7/25 Frank Lanitz fr...@frank.uvena.de:

Am 22.07.2011 21:15, schrieb Karl Nack:


to move as much business/transactional logic as
possible into the database, so that client applications become little
more than moving data into and out of the database using a well-defined
API, most commonly (but not necessarily) through the use of stored
procedures.


Besides the points already mentioned, doing this might cause bottlenecks
if you have complicated transactions, as the DB cluster might not
scale as well as a farm of application servers could.



Yes and no - this can decrease network overhead and can decrease data
conversion overhead. Sometimes I was surprised how much time I gained
by moving to stored procedures.


Yep. It always depends on what you are doing, I guess.

Cheers,
Frank



Re: [GENERAL] Implementing thick/fat databases

2011-07-25 Thread Frank Lanitz

On 25.07.2011 10:24, Sim Zacks wrote:

On 07/25/2011 11:06 AM, Frank Lanitz wrote:


On 22.07.2011 21:15, Karl Nack wrote:

to move as much business/transactional logic as
possible into the database, so that client applications become little
more than moving data into and out of the database using a well-defined
API, most commonly (but not necessarily) through the use of stored
procedures.


Besides the points already mentioned, doing this might cause
bottlenecks if you have complicated transactions, as the DB cluster
might not scale as well as a farm of application servers
could.

Cheers,
Frank



If I understand you correctly, you are saying that to handle business
logic processing, I may require X servers. Only a percentage of that
traffic actually requires database processing. If I use a cluster of
application servers against a single database, it will scale better than
if I have to cluster my database, which brings in all sorts of messy
master-master replication issues.

Is this accurate?


As I don't know your kind of application and business, or the structure 
of the code you already have, I cannot say for sure. There is 
no golden 100%-all-will-be-solved rule ... that is what I can say.


Cheers,
Frank

