Re: [GENERAL] what database schema version management system to use?

2016-04-08 Thread Karsten Hilbert
On Fri, Apr 08, 2016 at 09:09:22AM -0500, Merlin Moncure wrote:

> I rolled my own in bash.  It wasn't that difficult.  The basic tactic is to:
> 
> *) separate .sql that can be re-applied (views, functions, scratch tables,
> etc)  from .sql that can't be re-applied (create table, index, deployment
> data changes etc).  I call the former 'soft' and latter 'hard' changes.
> *) keep each database tracked in its own folder in the tree and put all the
> soft stuff there.  I keep all the hard stuff in a folder, 'schema'.  I also
> have a special library folder which tracks all databases
> *) redeploy 'soft' changes every release.  The bash script deploys files in
> mtime order after setting mtime to git commit time since git doesn't track
> mtime
> *) keep a tracking table in each database tracking deployed scripts

GNUmed does pretty much the same thing except we call it
"static" vs "dyamic" changes.

Also, with modern PostgreSQL versions (INSERT ... ON CONFLICT for
UPSERT, CREATE ... IF NOT EXISTS, DROP ... IF EXISTS) many items among
"index, deployment data changes" can be turned into soft (dynamic) changes.
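
For example (a sketch only, not GNUmed's actual upgrade scripts; the table
and index names below are made up, and ON CONFLICT / CREATE INDEX IF NOT
EXISTS need 9.5 or later):

-- re-runnable index changes:
CREATE INDEX IF NOT EXISTS idx_patient_dob ON clin.patient (dob);
DROP INDEX IF EXISTS idx_patient_obsolete;

-- re-runnable deployment data change: insert once, update on rerun
-- (assumes a hypothetical cfg.db_revision table with a unique "key"):
INSERT INTO cfg.db_revision (key, value)
VALUES ('db_version', '21')
ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value;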

We've never had a single bit of patient data get lost among
GNUmed database versions up to the current v21 (but of course
we are paranoid and check md5 sums of the schema before/after
upgrades and run automated data conversion sanity checks
after an upgrade).

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-08 Thread Tom Lane
"Bannert  Matthias"  writes:
> Thanks for your reply. I do think it is rather a postgres than an R issue, 
> here's why:
> a) R simply puts an SQL string together. What Charles had posted was an 
> excerpt of that string. 
> Basically we have 1.7 MB of that string. Everything else is equal just the 
> hstore contains 40K key value pairs. 

Well, as a test I ran a query that included an hstore literal with 4
million key/value pairs (a bit shy of 70MB of query text).  I didn't see
any misbehavior on a machine with 2MB max_stack_depth.  So there's
something else going on in your situation.

I concur with the suggestion to try to get a stack backtrace from the
point of the error.  Setting a breakpoint at errfinish() is usually
an effective strategy when you know that the query will provoke a SQL
error report.

https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

regards, tom lane




[GENERAL] pg_upgrade with an extension name change

2016-04-08 Thread Christophe Pettus
I'm attempting to upgrade a database from 9.2 to 9.5 using pg_upgrade.  The 9.2 
database has the "orafunc" extension installed, which appears to have changed 
names to "orafce".  pg_upgrade complains that it can't find "orafunc" on 9.5, 
which is true.  Is there a standard way of handling this situation?

--
-- Christophe Pettus
   x...@thebuild.com





Re: [GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-08 Thread Bannert Matthias
Thanks for your reply. I do think it is a Postgres rather than an R issue;
here's why:

a) R simply puts an SQL string together. What Charles had posted was an
excerpt of that string. Basically we have 1.7 MB of that string. Everything
else is equal; just the hstore contains 40K key/value pairs.

b) The error message clearly mentions max_stack_depth, which is a Postgres
parameter.

c) If I just take that SQL string (only the first part of it, i.e. the create
temp table and insert into part, without all the update and join gibberish),
put it into a .sql file and simply run it through a psql client like this:

\i myquery.sql

I get exactly the same error message (without any R involved at any stage):

psql:query.sql:3: ERROR:  stack depth limit exceeded
HINT:  Increase the configuration parameter "max_stack_depth" (currently 
7168kB), after ensuring the platform's stack depth limit is adequate.

d) I have run into quite a few R stack errors and they look different...
(C_STACK_SIZE)

Conclusion:
We are running a simple insert. Nothing special except for the fact that the
hstore has 40K key/value pairs. Could it be that building that hstore gets
rather large and thus a query string of only 1.7 MB exceeds the stack?
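
One difference from Tom's test worth noting: his 70MB case was a single
hstore literal, whereas the query above chains ~46'000 hstore(...) ||
hstore(...) calls, and deeply nested expressions can exceed max_stack_depth
even when the query text itself is small. A possible workaround (untested
here, just a sketch) would be to build one flat hstore from two parallel
arrays instead of chaining the || operator:

INSERT INTO ts_updates (ts_key, ts_data, ts_frequency)
VALUES ('somekey',
        hstore(ARRAY['1900-01-01', '1900-01-02', '1900-01-03'],
               ARRAY['-0.395131869823009', '-0.595131869823009', '-0.395131869823009']),
        NULL);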







From: Tom Lane [t...@sss.pgh.pa.us]
Sent: Friday, April 08, 2016 4:20 PM
To: Charles Clavadetscher
Cc: pgsql-general@postgresql.org; Bannert  Matthias
Subject: Re: [GENERAL] max_stack_depth problem though query is substantially 
smaller

"Charles Clavadetscher"  writes:
> When R processes the daily time series we get a stack size exceeded
> error, followed by the hint to increase the max_stack_depth.

Postgres doesn't generally allocate large values on the stack, and I doubt
that R does either.  Almost certainly, what is causing this is not data
size per se but unreasonable call nesting depth in your R code.  You may
have a function that's actually in infinite recursion, or maybe it's
recursing to a depth governed by the number of data elements.  If so,
consider revising it into iteration with an explicitly-represented state
stack.

regards, tom lane




Re: [GENERAL] pg_upgrade error regarding hstore operator

2016-04-08 Thread Feld, Michael (IMS)
Thanks for the reply Tom. template1 is definitely empty and does not contain 
any hstore objects. I did a little debugging and placed the below SQL before 
and after the hstore creation in the file produced by the pg_dump and 
determined that these operator objects only become present immediately after 
the creation of the hstore extension, and not before. Then, much later down the 
pg_dump file, it attempts to create the operator family for these 4 items 
producing the errors. I did a pg_dump of the same database on a 9.1 instance 
and it does not produce the operator creation objects SQL outside of the 
extension. This seems to be something that happened as part of the pg_upgrade. 
Any idea why these have shown up outside the extension? Is there anything I 
can do to fix this? Thanks for all of your help.

Mike

SELECT am.amname AS index_method,
   opf.opfname AS opfamily_name,
   amop.amopopr::regoperator AS opfamily_operator
FROM pg_am am, pg_opfamily opf, pg_amop amop
WHERE opf.opfmethod = am.oid AND
  amop.amopfamily = opf.oid and opf.opfname like '%hstore%'
ORDER BY index_method, opfamily_name, opfamily_operator;

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Wednesday, April 06, 2016 7:01 PM
To: Feld, Michael (IMS) 
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_upgrade error regarding hstore operator

"Feld, Michael (IMS)"  writes:
> Thanks for the assist Tom. That worked for us. Noticing a different
> issue following the pg_upgrade. If we take a pg_dump of a database on
> this upgraded instance with the hstore extension and try to pg_restore
> it back up to the same instance we get the following errors

Those are the *only* errors you get?  That seems rather odd.  I could believe 
something like this happening if, say, you had an "unpackaged"
(that is, pre-extensions) version of hstore lying about.  But then you'd 
probably get conflicts on all the hstore-related objects, not only the 
opclasses.

In any case, by far the most likely explanation is that you're trying to 
restore into a non-empty database, probably because you've put stuff into
template1 and are cloning the new database from there.

regards, tom lane







Re: [GENERAL] Bypassing NULL elements in row_to_json function

2016-04-08 Thread Michael Nolan
It looks like json_strip_nulls() may be what I need. I'm currently on 9.3,
which doesn't have that function, but may be in a position to upgrade to 9.5
this summer. I think the apps that would be receiving the data can deal
with any resulting 'holes' in the data set by just setting them to null.
--
Mike Nolan


Re: [GENERAL] recover from this error

2016-04-08 Thread Melvin Davidson
On Fri, Apr 8, 2016 at 11:44 AM, Scott Ribe 
wrote:

> Alright, check kernel version, but what else, dump & restore?
>
> ERROR:  unexpected data beyond EOF in block 1 of relation base/16388/35954
> HINT:  This has been seen to occur with buggy kernels; consider updating
> your system.
>
> --
> Scott Ribe
> scott_r...@elevated-dev.com
> http://www.elevated-dev.com/
> https://www.linkedin.com/in/scottribe/
> (303) 722-0567 voice

16388/35954 = DB/object

1. Get the object that is causing the problem:

SELECT datname FROM pg_database WHERE oid = 16388;

SELECT relname,
       CASE
          WHEN relkind = 'r' THEN 'TABLE'
          WHEN relkind = 'i' THEN 'INDEX'
          WHEN relkind = 'S' THEN 'SEQUENCE'
          WHEN relkind = 'v' THEN 'VIEW'
          WHEN relkind = 'm' THEN 'MATERIALIZED VIEW'
          WHEN relkind = 'c' THEN 'COMPOSITE TYPE'
          WHEN relkind = 'f' THEN 'FOREIGN TABLE'
          ELSE 'Unknown'
       END
  FROM pg_class
 WHERE relfilenode = 35954;

Make sure you have a good backup, then:
2. DROP the bad object.
3. RESTORE the object.
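
For instance, if step 1 shows the damaged object is an index, a rebuild may
be all that is needed (names below are made up):

REINDEX INDEX some_schema.some_damaged_index;
-- or, equivalently:
-- DROP INDEX some_schema.some_damaged_index;
-- CREATE INDEX some_damaged_index ON some_schema.some_table (some_column);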


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Postgresql 9.3.4 file system compatibility

2016-04-08 Thread John R Pierce

On 4/8/2016 7:20 AM, Scott Mead wrote:


I'm not sure that link exists; the general rule is that if it's POSIX,
it'll work. You'll find that most PostgreSQL-ers have strong
opinions and preferences in regards to filesystems. Personally, I
know that XFS will work, it's not *my* preference, but, to each their own.



and my experience is that in RHEL 6 and 7, XFS works very well, and IS 
my preference for data volumes.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Transitioning to a SQL db

2016-04-08 Thread Adrian Klaver

On 04/08/2016 08:04 AM, Karl O. Pinc wrote:

Hi Tim,

As arranged I am cc-ing the pgsql-general list in the hope
they will assist.  Your posts to the list may be delayed for
moderation, I can't say.

It could be helpful if you subscribed to the list, but it
is relatively high traffic and I know you have extremely limited
and expensive bandwidth.  For this reason I'm not sure I can
recommend subscribing.

On Thu, 7 Apr 2016 12:22:26 +0200
Tim Vink  wrote:


Many thanks for your kind offer to give us some advice. I am Tim Vink,
Research Technician at the Kalahari Research Trust and currently
Database and Networks manager of the project. Our project leaders,
Chris and Laura at the Meerkat project, are mainly involved with the main
Meerkat database that is currently in Access, where Chris is helping
me develop the additional databases (currently loose db and/or files,
read: up to 8 csv or mapsource files) and make a coherent structure
for these.



I can figure out some of what you want from Karl's answer to your 
original post. Still it would be nice to have some sort of outline form of:


1) How does data currently get into your system?

2) What do you do with the data?

3) What are your hardware resources?

4) Who needs to access the data and how and when?

5) What software languages are your developers comfortable with?




Regards,

Karl 
Free Software:  "You don't pay back, you pay forward."
  -- Robert A. Heinlein





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Bypassing NULL elements in row_to_json function

2016-04-08 Thread Adrian Klaver

On 04/08/2016 08:31 AM, Michael Nolan wrote:

I'm looking at the possibility of using JSON as a data exchange format
with some apps running on both PCs and Macs.

The table I would be exporting has a lot of NULL values in it.  Is
there any way to skip the NULL values in the row_to_json function and
include only the fields that are non-null?


I guess it depends on your data.

Are the NULLs all in one field or scattered across fields?

Imagining this scenario:

fld_1      fld_2      fld_3
'val1_1'   NULL       'val1_3'
NULL       'val2_2'   'val2_3'
'val3_1'   'val3_2'   NULL

How do you deal with the holes (NULLs) on the receiving end?



--
Mike Nolan
no...@tssi.com





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Bypassing NULL elements in row_to_json function

2016-04-08 Thread David G. Johnston
On Fri, Apr 8, 2016 at 8:53 AM, Raymond O'Donnell  wrote:

> On 08/04/2016 16:31, Michael Nolan wrote:
> > I'm looking at the possibility of using JSON as a data exchange format
> > with some apps running on both PCs and Macs.
> >
> > The table I would be exporting has a lot of NULL values in it.  Is
> > there any way to skip the NULL values in the row_to_json function and
> > include only the fields that are non-null?
>
> You could use a CTE to filter out the nulls (not tested - I haven't used
> JSON in PG (yet!)):
>
> with no_nulls as (
>   select ... from my_table
>   where whatever is not null
> )
> select row_to_json(no_nulls) from no_nulls;
>

One of us is confused.  I'm reading this as "I want a row_to_json call to
generate objects with different keys depending on whether a given key would
have a null - in which case exclude the key."

I think one would have to simply allow row_to_json to populate the keys
with null values and then post-process them away:

json_strip_nulls(row_to_json(...))

http://www.postgresql.org/docs/current/static/functions-json.html
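
A quick illustration (a sketch; json_strip_nulls needs 9.5 or later, and the
table below is made up):

CREATE TEMP TABLE people (id int, name text, nickname text);
INSERT INTO people VALUES (1, 'Ann', NULL), (2, 'Bob', 'Bobby');

SELECT json_strip_nulls(row_to_json(p)) FROM people p;
-- {"id":1,"name":"Ann"}
-- {"id":2,"name":"Bob","nickname":"Bobby"}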

David J.


Re: [GENERAL] Bypassing NULL elements in row_to_json function

2016-04-08 Thread Raymond O'Donnell
On 08/04/2016 16:31, Michael Nolan wrote:
> I'm looking at the possibility of using JSON as a data exchange format
> with some apps running on both PCs and Macs.
> 
> The table I would be exporting has a lot of NULL values in it.  Is
> there any way to skip the NULL values in the row_to_json function and
> include only the fields that are non-null?

You could use a CTE to filter out the nulls (not tested - I haven't used
JSON in PG (yet!)):

with no_nulls as (
  select ... from my_table
  where whatever is not null
)
select row_to_json(no_nulls) from no_nulls;

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




[GENERAL] recover from this error

2016-04-08 Thread Scott Ribe
Alright, check kernel version, but what else, dump & restore?

ERROR:  unexpected data beyond EOF in block 1 of relation base/16388/35954
HINT:  This has been seen to occur with buggy kernels; consider updating your 
system.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice









[GENERAL] Bypassing NULL elements in row_to_json function

2016-04-08 Thread Michael Nolan
I'm looking at the possibility of using JSON as a data exchange format
with some apps running on both PCs and Macs.

The table I would be exporting has a lot of NULL values in it.  Is
there any way to skip the NULL values in the row_to_json function and
include only the fields that are non-null?
--
Mike Nolan
no...@tssi.com




Re: [GENERAL] Transitioning to a SQL db

2016-04-08 Thread Karl O. Pinc
Hi Tim,

As arranged I am cc-ing the pgsql-general list in the hope
they will assist.  Your posts to the list may be delayed for
moderation, I can't say.

It could be helpful if you subscribed to the list, but it
is relatively high traffic and I know you have extremely limited
and expensive bandwidth.  For this reason I'm not sure I can
recommend subscribing.

On Thu, 7 Apr 2016 12:22:26 +0200
Tim Vink  wrote:

> Many thanks for your kind offer to give us some advice. I am Tim Vink,
> Research Technician at the Kalahari Research Trust and currently
> Database and Networks manager of the project. Our project leaders,
> Chris and Laura at the Meerkat project, are mainly involved with the main
> Meerkat database that is currently in Access, where Chris is helping
> me develop the additional databases (currently loose db and/or files,
> read: up to 8 csv or mapsource files) and make a coherent structure
> for these.
> 
> We are in the starting position and started off with MySQL using the
> Percona Server variant for more advanced / easy replication and
> databackups, would you recommend to move to PostGreSQL, what would be
> our main advantages over MySQL?

I no longer keep up with the MySQL feature set, and can't be
considered an expert in the overall status of databases
in the FOSS world.  But I am not entirely disconnected either
so should be of some help.

I can't give you a point-by-point comparison of MySQL and Postgres.
Overall, the difference is in approach.  The goal of Postgres
is to be SQL standards compliant, to be well designed,
to be ACID compliant, and to be reliable.  

The attention to design is most significant.  MySQL was written
as glue, layering an SQL interface on top of a number of underlying
database engines.  And they were looking for speed.  ACID compliance
was something that came later.  As a result, in MySQL there are
myriad corner cases and rough edges.  Postgres does not have these.
This really starts to matter for people who don't spend all their
time living cuddled up to their database and getting to know
all its quirks.

Postgres takes the time to ensure each new feature is "clean".
Although the emphasis is not on performance, this also includes
performance.  Again reliability is paramount.  Postgres goes to
great lengths to ensure that the data in the database is
never corrupted.  I already mentioned that it won't restore
a database dump unless the resulting referential integrity is intact.

A few minor examples:  PG distinguishes
between an empty string and NULL, the indeterminate data value.
Most other dbs represent the empty string as a string containing
a single space, and there is no distinction between an empty
string and a single space.  You can, with care, construct views in 
PG that act in every way like regular tables: they can be updated,
inserted into, deleted from, etc.  This is a great boon when
users, such as in your case, work directly with the db.  The
PG SQL variant is a "clean", "typical" variant -- generally tracking
Oracle.  (The SQL standard is huge and awful and allows
just about anything that any major vendor wanted to cram in.)
There are no weirdo ways to write SQL that supply hints as to
how to optimize the query, the PG query optimizer does the
right thing for you.  PG has nothing like the variation
in SQL case sensitivity depending on underlying OS platform
like you find in MySQL.  In PG transactions apply everywhere,
even to meta-data like table creation.  This is unusual,
and very useful when making test alterations to a test
database as it eliminates error-prone cleanup on failure
and consequential lack of synchronization between test and
production databases.
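
As a small sketch of the updatable-view point (simple single-table views are
automatically updatable from 9.3 on; the names below are invented):

CREATE TABLE observations (id serial PRIMARY KEY, animal text, weight numeric);

CREATE VIEW current_weights AS
    SELECT id, animal, weight FROM observations;

INSERT INTO current_weights (animal, weight) VALUES ('meerkat A', 0.73);
UPDATE current_weights SET weight = 0.75 WHERE animal = 'meerkat A';
DELETE FROM current_weights WHERE animal = 'meerkat A';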

The PG "window functions" (see the PG docs) are also incredibly
powerful for data analysis.  As is the ability to embed
R (r-project.org) into Postgres.  (Although embedding R
is dangerous from a security standpoint since it's then
reasonably impossible to prevent a PG user from writing
arbitrary files to disk.  This breaks the barrier between
db access and OS access.)
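
For instance (a sketch with invented names), a window function turns a
per-animal moving average over a time series into a single query:

SELECT animal,
       obs_date,
       weight,
       avg(weight) OVER (PARTITION BY animal
                         ORDER BY obs_date
                         ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS weight_7d_avg
  FROM daily_weights
 ORDER BY animal, obs_date;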

You may also be interested in PostGIS for geospatial
integration.  (postgis.org)
It is "way cool".

Regarding Percona Server, its FAQ says it tracks Oracle's MySQL.
I see the open source community moving away from Oracle's
MySQL to MariaDB and this could be a long-term concern.

> I take great example in your babase and ambaselli baboon project work.
> Where we have started (still very very much under development) a
> meerkat wiki. (meerkat.kalahariresearch.org)

You may also want to look at gombemi.ccas.gwu.edu for some
work I did for some of the Jane Goodall folk.  It has
some more advanced, from the standpoint of program
internals, web-based tools.

We use 2 idioms extensively.  The first is batch upload
from csv files.  This makes converting data to electronic
form, usually via MS Excel, a low-skill task.
We upload with custom programs, and with
generic uploads to tables and views.  We have a 

Re: [GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-08 Thread Tom Lane
"Charles Clavadetscher"  writes:
> When R processes the daily time series we get a stack size exceeded
> error, followed by the hint to increase the max_stack_depth.

Postgres doesn't generally allocate large values on the stack, and I doubt
that R does either.  Almost certainly, what is causing this is not data
size per se but unreasonable call nesting depth in your R code.  You may
have a function that's actually in infinite recursion, or maybe it's
recursing to a depth governed by the number of data elements.  If so,
consider revising it into iteration with an explicitly-represented state
stack.

regards, tom lane




Re: [GENERAL] Postgresql 9.3.4 file system compatibility

2016-04-08 Thread Scott Mead
On Fri, Apr 8, 2016 at 9:16 AM, Marllius  wrote:

> thank you, but i need a link in official postgresql documentation
>

I'm not sure that link exists; the general rule is that if it's POSIX,
it'll work. You'll find that most PostgreSQL-ers have strong opinions and
preferences in regards to filesystems. Personally, I know that XFS will
work, it's not *my* preference, but, to each their own.


>
> OCFS2 = oracle cluster file system 2
>
>
2016-04-08 10:00 GMT-03:00 Bob Lunney :
>
>> XFS absolutely does.  Its well supported on Redhat and CentOS 6.x and
>> 7.x.  Highly recommended.
>>
>> Don’t know about OCFS2.
>>
>> Bob Lunney
>> Lead Data Architect
>> MeetMe, Inc.
>>
>> > On Apr 8, 2016, at 8:56 AM, Marllius  wrote:
>> >
>> > Hi guys!
>> >
>> > The OCFS2 and XFS have compatibility with postgresql 9.3.4?
>> >
>>
>
I did some experimentation with ocfs2 back about 7 or 8 years ago
(admittedly, a Big-Bang away, so keep that in mind when reading my
comments).  At the time, OCFS2 was *mostly* POSIX compatible and would
indeed work with Postgres.  What we found (again, at the time) is that
OCFS2 started to have performance problems and eventually a race condition
when using a large number of [relatively] small files.  I believe the DB I
was working on had 10's of databases, each with 1,000+ tables in it, so,
lots of files.  It was really designed for use with Oracle (small number of
large files) and was passed over in favor of ASM.

If it were me, I'd stay away from OCFS2 for anything except Oracle (and in
that case, I'd use ASM).


> > I was looking the documentation but i not found it.
>> >
>> >
>> >
>>
>>
>
>
> --
> Atenciosamente,
>
> Márllius de Carvalho Ribeiro
>


Re: [GENERAL] what database schema version management system to use?

2016-04-08 Thread Merlin Moncure
On Wed, Apr 6, 2016 at 5:55 AM, Alexey Bashtanov  wrote:
> Hi all,
>
> I am searching for a proper database schema version management system.
>
> My criteria are the following:
> 0) Open-source, supports postgresql
> 1) Uses psql to execute changesets (to have no problems with COPY,
> transaction management or sophisticated DDL commands, and to benefit from
> scripting)
> 2) Support repeatable migrations (SQL files that get applied every time
> they are changed, it is useful for functions or views tracking).
>
> Reasonable?
>
> But unfortunately it looks like each of Liquibase, Flyway, SqlHawk,
> MigrateDB, Schema-evolution-manager, Depesz's Versioning, Alembic and
> Sqitch does not satisfy some of those, right?
>
> What DB VCS do you use and how does it related with the criteria listed
> above?
> Do you have any idea what other systems to try?

I rolled my own in bash.  It wasn't that difficult.  The basic tactic is to:

*) separate .sql that can be re-applied (views, functions, scratch tables,
etc)  from .sql that can't be re-applied (create table, index, deployment
data changes etc).  I call the former 'soft' and latter 'hard' changes.
*) keep each database tracked in its own folder in the tree and put all the
soft stuff there.  I keep all the hard stuff in a folder, 'schema'.  I also
have a special library folder which tracks all databases
*) redeploy 'soft' changes every release.  The bash script deploys files in
mtime order after setting mtime to git commit time since git doesn't track
mtime
*) keep a tracking table in each database tracking deployed scripts
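
A minimal sketch of what such a tracking table can look like (hypothetical
names; the actual schema used isn't shown here):

CREATE TABLE IF NOT EXISTS deployed_scripts (
    script_name text PRIMARY KEY,
    md5sum      text NOT NULL,
    deployed_at timestamptz NOT NULL DEFAULT now()
);

-- the deploy script skips anything already recorded, e.g.:
-- INSERT INTO deployed_scripts (script_name, md5sum)
-- VALUES ('20160408_add_index.sql', 'd41d8cd98f00b204e9800998ecf8427e');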

Here is an example of the output:
mmoncure@mernix2 09:07 AM (AO_3_9) ~/src/aodb/release/ao$ DRY_RUN=1
./deploy.sh

---DEPLOYMENT STARTING--

LOG: Dry run requested
LOG: Attempting connection to control database @ host=
rcoylsdbpgr001.realpage.com dbname=ysconfig port=5432
LOG: Got connection host=10.34.232.70 dbname=ysconfig port=5432 to ysconfig
LOG: Got connection host=10.34.232.70 dbname=ysanalysis port=5432 to
ysanalysis
LOG: Got connection host=10.34.232.70 dbname=revenueforecaster port=5432 to
revenue forecaster
LOG: Got connection host=10.34.232.68 dbname=cds2 to node
LOG: Release folder is /home/mmoncure/src/aodb/release/ao/SAT/1.0.0
LOG: Database host=10.34.232.70 dbname=ysconfig port=5432 is getting update
PropertyNicheSchema.sql
LOG: Database host=10.34.232.70 dbname=ysanalysis port=5432 is getting
update PropertyNicheSchema.sql
LOG: Database host=10.34.232.70 dbname=revenueforecaster port=5432 is
getting update PropertyNicheSchema.sql
LOG: Database host=10.34.232.68 dbname=cds2 is getting update
PropertyNicheSchema.sql
LOG: Database host=10.34.232.70 dbname=ysanalysis port=5432 is getting
update ca_scenario_position.sql
LOG: building SAT ysconfig update script
LOG: building SAT ysanalysis update script
LOG: building SAT revenueforecaster update script
LOG: building SAT node update script
LOG: Applying SAT ysconfig update to host=10.34.232.70 dbname=ysconfig
port=5432
LOG: ysconfig successfully updated!
LOG: Applying SAT ysanalysis update to host=10.34.232.70 dbname=ysanalysis
port=5432
LOG: ysanalysis successfully updated!
LOG: Applying SAT revenue forecaster update to host=10.34.232.70
dbname=revenueforecaster port=5432
LOG: revenueforecaster successfully updated!
LOG: Applying SAT node id 0 update to host=10.34.232.68 dbname=cds2
LOG: node 0 successfully updated!
LOG: Applying SAT node id 1 update to host=10.34.232.69 dbname=cds2
LOG: node 1 successfully updated!

LOG: Dry run requested: changes not committed!


merlin


Re: [GENERAL] Postgresql 9.3.4 file system compatibility

2016-04-08 Thread Marllius
Thank you, but I need a link in the official PostgreSQL documentation.

OCFS2 = oracle cluster file system 2

2016-04-08 10:00 GMT-03:00 Bob Lunney :

> XFS absolutely does.  Its well supported on Redhat and CentOS 6.x and
> 7.x.  Highly recommended.
>
> Don’t know about OCFS2.
>
> Bob Lunney
> Lead Data Architect
> MeetMe, Inc.
>
> > On Apr 8, 2016, at 8:56 AM, Marllius  wrote:
> >
> > Hi guys!
> >
> > The OCFS2 and XFS have compatibility with postgresql 9.3.4?
> >
> > I was looking the documentation but i not found it.
> >
> >
> >
>
>


-- 
Best regards,

Márllius de Carvalho Ribeiro


Re: [GENERAL] Postgresql 9.3.4 file system compatibility

2016-04-08 Thread Bob Lunney
XFS absolutely does.  It's well supported on Red Hat and CentOS 6.x and 7.x.
Highly recommended.

Don’t know about OCFS2.

Bob Lunney
Lead Data Architect
MeetMe, Inc.

> On Apr 8, 2016, at 8:56 AM, Marllius  wrote:
> 
> Hi guys!
> 
> The OCFS2 and XFS have compatibility with postgresql 9.3.4? 
> 
> I was looking the documentation but i not found it.
> 
> 
> 





[GENERAL] Postgresql 9.3.4 file system compatibility

2016-04-08 Thread Marllius
Hi guys!

Are OCFS2 and XFS compatible with PostgreSQL 9.3.4?

I was looking in the documentation but did not find it.


Re: [GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-08 Thread Albe Laurenz
Charles Clavadetscher wrote:
> We have a process in R which reads statistical raw data from a table and 
> computes time series values
> from them.
> The time series values are in a hstore field with the date as the key and the 
> value as the value.
> The process writes the computed value into a temporary table and locks the 
> corresponding row(s) of the
> target table for update.
> Finally the row(s) are updated if they already exist or inserted if they do 
> not exist.
> 
> This process runs nightly and processes many GB of data without generating 
> any error. Normally these
> are low frequency time series
> (monthly or quarterly data).
> 
> Now we have a daily time serie with about 46'000 key/value pairs. In near 
> future this will increase to
> 86'000 including data from
> previous years.
> 
> When R processes the daily time serie we get a stack size exceeded error, 
> followed by the hint to
> increase the max_stack_depth. My
> colleague, who wrote the R function and performed the test read the docs and 
> increased, according to
> ulimit -s the max_stack_depth
> to 7MB.
> 
> Here the details of OS and PG:
> OS: osx 10.10.5
> PG: 9.3.3
> 
> ulimit -s = 8192
> 
> The resize did work as *show max_stack_depth;" has shown. After this change, 
> however, the query states
> the same error as before,
> just with the new limit of 7 MB.
> 
> The query itself was written to a file in order to verify its size. The size 
> turned out to be 1.7MB,
> i.e. even below the
> conservative default limit of 2 MB, yet alone substantially below 7 MB.
> 
> Apart from the fact that we could consider using a different strategy to 
> store time series, we would
> like to understand what is
> causing the problem.
> 
> Here the query as it looks like in the R code:
> sql_query_data <- sprintf("BEGIN;
>CREATE TEMPORARY TABLE ts_updates(ts_key varchar, 
> ts_data hstore,
> ts_frequency integer) ON COMMIT DROP;
>INSERT INTO ts_updates(ts_key, ts_data) VALUES %s;
>LOCK TABLE %s.timeseries_main IN EXCLUSIVE MODE;
> 
>UPDATE %s.timeseries_main
>SET ts_data = ts_updates.ts_data
>FROM ts_updates
>WHERE ts_updates.ts_key = 
> %s.timeseries_main.ts_key;
> 
>INSERT INTO %s.timeseries_main
>SELECT ts_updates.ts_key, ts_updates.ts_data, 
> ts_updates.ts_frequency
>FROM ts_updates
>LEFT OUTER JOIN %s.timeseries_main ON 
> (%s.timeseries_main.ts_key =
> ts_updates.ts_key)
>WHERE %s.timeseries_main.ts_key IS NULL;
>COMMIT;",
>values, schema, schema, schema, schema, schema, 
> schema, schema)
> 
> And here is how it looks like at the end:
> 
> INSERT INTO ts_updates(ts_key, ts_data, ts_frequency) VALUES 
> ('somekey',hstore('1900-01-01','-
> 0.395131869823009')||
> 
> hstore('1900-01-02','-
> 0.595131869823009')||
> 
> hstore('1900-01-03','-
> 0.395131869823009')||
> [...] 
> 46'000 times
> 
> hstore('1900-01-04','-
> 0.395131869823009'),NULL);

I don't understand which query causes the error.
The queries you quote above are smaller than 1.7 MB...
You could log errors to find out which statement causes the error.

One idea would be to attach a debugger to the backend, set a breakpoint in 
check_stack_depth()
where the error is thrown, and take a stack trace when you hit the error.
Maybe that can show you what is going on.

Yours,
Laurenz Albe



Re: [GENERAL] Please let me know the latest PostgreSQL version available on Solaris 11?

2016-04-08 Thread Raymond O'Donnell
On 08/04/2016 11:50, M Tarkeshwar Rao wrote:
> Hi all,
> 
>  
> 
> Please let me know the latest PostgreSQL version available on Solaris 11?
> 
>  
> 
> Which PostgreSQL version will be supported on Solaris 11.x version and
> when the same will be available ?

http://www.postgresql.org/download/solaris/


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




[GENERAL] Please let me know the latest PostgreSQL version available on Solaris 11?

2016-04-08 Thread M Tarkeshwar Rao
Hi all,

Please let me know the latest PostgreSQL version available on Solaris 11?

Which PostgreSQL version will be supported on Solaris 11.x, and when will it
be available?

Regards
Tarkeshwar



[GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-08 Thread Charles Clavadetscher
Hello

We have a process in R which reads statistical raw data from a table and 
computes time series values from them.
The time series values are in a hstore field with the date as the key and the 
value as the value.
The process writes the computed value into a temporary table and locks the 
corresponding row(s) of the target table for update.
Finally the row(s) are updated if they already exist or inserted if they do not 
exist.

This process runs nightly and processes many GB of data without generating any 
error. Normally these are low frequency time series
(monthly or quarterly data).

Now we have a daily time series with about 46'000 key/value pairs. In the
near future this will increase to 86'000, including data from previous
years.

When R processes the daily time series we get a stack size exceeded error,
followed by the hint to increase the max_stack_depth. My colleague, who
wrote the R function and performed the test, read the docs and increased
the max_stack_depth to 7MB, in line with ulimit -s.

Here the details of OS and PG:
OS: osx 10.10.5
PG: 9.3.3

ulimit -s = 8192

The resize did work, as "SHOW max_stack_depth;" confirmed. After this change,
however, the query reports the same error as before, just with the new limit
of 7 MB.

The query itself was written to a file in order to verify its size. The size
turned out to be 1.7MB, i.e. even below the conservative default limit of
2 MB, let alone substantially below 7 MB.

Apart from the fact that we could consider using a different strategy to store 
time series, we would like to understand what is
causing the problem.

Here the query as it looks like in the R code:
sql_query_data <- sprintf("BEGIN;
   CREATE TEMPORARY TABLE ts_updates(ts_key varchar, 
ts_data hstore, ts_frequency integer) ON COMMIT DROP;
   INSERT INTO ts_updates(ts_key, ts_data) VALUES %s;
   LOCK TABLE %s.timeseries_main IN EXCLUSIVE MODE;

   UPDATE %s.timeseries_main
   SET ts_data = ts_updates.ts_data
   FROM ts_updates
   WHERE ts_updates.ts_key = %s.timeseries_main.ts_key;

   INSERT INTO %s.timeseries_main
   SELECT ts_updates.ts_key, ts_updates.ts_data, 
ts_updates.ts_frequency
   FROM ts_updates
   LEFT OUTER JOIN %s.timeseries_main ON 
(%s.timeseries_main.ts_key = ts_updates.ts_key)
   WHERE %s.timeseries_main.ts_key IS NULL;
   COMMIT;",
   values, schema, schema, schema, schema, schema, 
schema, schema)

And here is how it looks like at the end:

INSERT INTO ts_updates(ts_key, ts_data, ts_frequency) VALUES 
('somekey',hstore('1900-01-01','-0.395131869823009')||

hstore('1900-01-02','-0.595131869823009')||

hstore('1900-01-03','-0.395131869823009')||
[...] 
46'000 times

hstore('1900-01-04','-0.395131869823009'),NULL);

The computer where my colleague made the test is local. There are no other 
concurrent users.
We thank you for hints on what the problem may be and/or how to investigate it 
further.

Please reply to all, as my colleague is not yet subscribed to the mailing list.

Regards,
Charles and Matthias






[GENERAL] Shipping big WAL archives to hot standby

2016-04-08 Thread Jordi
I'm looking to extend my PostgreSQL 9.4 master with a few slaves in hot 
standby read-only for load balancing.


The idea would be to update the slaves only at defined times (once every 
24/48 hours) to avoid migration issues with the application server code 
and also because the "freshness" of the slaves is not so important.


What would be the effect of suddenly introducing 1-2 GB of WAL 
archives to the WAL restore folder on the slave? Would there be a big 
performance effect on the incoming queries to the slave? Would the slave 
be available for queries while the WAL logs are restored into the DB?


Thanks
Jordi





Re: [GENERAL] Trying to understand page structures in PG

2016-04-08 Thread Albe Laurenz
Rakesh Kumar wrote:
>> Every row has two system columns associated with it: xmin and xmax
>>
>> xmin is the transaction ID that created the row, while xmax is
>> the transaction ID that removed the row.
>>
>> So when an update takes place, xmax of the original row and xmin
>> of the new row are set to the transaction ID of the current transaction.
>>
>> Furthermore, the commit log (CLOG) logs for each transaction whether
>> it was committed or rolled back.
>>
>> Now when a backend examines a row, it first checks if the row is
>> visible, i.e. xmin must be less or equal to the current transaction ID
>> and xmax must be 0 or greater than the current transaction ID
>> or belong to a transaction that was rolled back.
>>
>> To save CLOG lookups, the first reader who consults the CLOG will
>> save the result in so-called "hint bits" on the row itself.
> 
> I am assuming the same must be true for the indexes also. Does PG look up
> primary keys by examining the rows like you described above?

Rows remain in the index as long as they remain in the table.

What actually removes old rows from the table - VACUUM - also
removes them from the index.

The index does not contain visibility information about the indexed row,
so every row found in an index must still be checked for visibility,
either as described above, or in the case of an index-only scan
by checking the visibility map if all tuples on that heap page are
visible to all transactions.
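
If you want to see this for yourself, the system columns can be selected
directly (a harmless sketch on a throwaway table):

CREATE TEMP TABLE t (id int);
INSERT INTO t VALUES (1);

-- xmin is the inserting transaction; xmax is 0 while the row is live
SELECT xmin, xmax, ctid, id FROM t;

BEGIN;
DELETE FROM t;   -- sets xmax; another session would now see it set
ROLLBACK;        -- the row stays; the aborted xmax is simply ignored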

Yours,
Laurenz Albe



Re: [GENERAL] Trying to understand page structures in PG

2016-04-08 Thread Albe Laurenz
Jeff Janes wrote:
>> I am curious because of "while xmax is the transaction ID that
>> *removed* the row".
> 
> "marked for removal" would be more accurate.  If the row were actually
> physically removed, it would no longer have a xmax to set.

Yes, thanks for the clarification.
I was thinking "logically removed", but I realize my wording was confusing.

Yours,
Laurenz Albe
