Re: [HACKERS] [RFC] What would be difficult to make data models pluggable for making PostgreSQL a multi-model database?

2017-08-25 Thread Henry M
This may be interesting... they implement cypher (unfortunately they had to
fork in order to have cypher be a first class query language with SQL).

https://github.com/bitnine-oss/agensgraph



On Mon, Aug 21, 2017 at 12:44 AM Chris Travers 
wrote:

> On Sun, Aug 20, 2017 at 4:10 AM, MauMau  wrote:
>
>> From: Chris Travers
>> > Why cannot you do all this in a language handler and treat as a user
>> defined function?
>> > ...
>> > If you have a language handler for cypher, why do you need in_region
>> or cast_region?  Why not just have a graph_search() function which
>> takes in a cypher query and returns a set of records?
>>
>> The language handler is for *stored* functions.  The user-defined
>> function (UDF) doesn't participate in the planning of the outer
>> (top-level) query.  And they both assume that they are executed in SQL
>> commands.
>>
>
> Sure but stored functions can take arguments, such as a query string which
> gets handled by the language handler.  There's absolutely no reason you
> cannot declare a function in C that takes in a Cypher query and returns a
> set of tuples.   And you can do a whole lot with preloaded shared libraries
> if you need to.
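To make the suggestion concrete, here is a sketch of how such a hand-rolled set-returning C function could be declared (the names graph_search and the module path are hypothetical placeholders, not an existing extension):

```sql
-- Hypothetical wrapper: a C function that takes a Cypher string and
-- returns tuples.  'MODULE_PATHNAME' and graph_search are placeholders.
CREATE FUNCTION graph_search(cypher_query text)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'graph_search'
LANGUAGE C STRICT;

-- As with any RECORD-returning SRF, the caller supplies a column list:
SELECT *
FROM graph_search('MATCH (n:Person) RETURN n.name, n.age')
     AS t(name text, age int);
```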
>
> The planning bit is more difficult, but see below as to where I see major
> limits here.
>
>>
>> I want the data models to meet these:
>>
>> 1) The query language can be used as a top-level session language.
>> For example, if an app specifies "region=cypher_graph" at database
>> connection, it can use the database as a graph database and submit
>> Cypher queries without embedding them in SQL.
>>
>
> That sounds like a foot gun.  I would probably think of those cases as
> being ideal for a custom background worker, similar to Mongress.
> Expecting to be able to switch query languages on the fly strikes me as
> adding totally needless complexity everywhere to be honest.  Having
> different listeners on different ports simplifies this a lot and having,
> additionally, query languages for ad-hoc mixing via language handlers might
> be able to get most of what you want already.
>
>>
>> 2) When a query contains multiple query fragments of different data
>> models, all those fragments are parsed and planned before execution.
>> The planner comes up with the best plan, crossing the data model
>> boundary.  To take the query example in my first mail, which joins a
>> relational table and the result of a graph query.  The relational
>> planner considers how to scan the table, the graph planner considers
>> how to search the graph, and the relational planner considers how to
>> join the two fragments.
>>
>
> It seems like all you really need is a planner hook for user defined
> languages (I.e. "how many rows does this function return with these
> parameters" right?).  Right now we allow hints but they are static.  I
> wonder how hard this would be using preloaded, shared libraries.
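For reference, the static hints being alluded to are the COST and ROWS attributes of CREATE/ALTER FUNCTION; the missing piece would be making them dynamic per call. Assuming a hypothetical set-returning function graph_search(text):

```sql
-- Static planner hints available today: fixed cost and row-count
-- estimates attached to the function declaration.
ALTER FUNCTION graph_search(text) COST 10000 ROWS 5000;
```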
>
>
>>
>> So in_region() and cast_region() are not functions to be executed
>> during execution phase, but are syntax constructs that are converted,
>> during analysis phase, into calls to another region's parser/analyzer
>> and an inter-model cast routine.
>>
>
> So basically they work like immutable functions except that you cannot
> index the output?
>
>>
>> 1. The relational parser finds in_region('cypher_graph', 'graph
>> query') and produces a parse node InRegion(region_name, query) in the
>> parse tree.
>>
>> 2. The relational analyzer looks up the system catalog to check whether
>> the specified region exists, then calls its parser/analyzer to produce
>> the query tree for the graph query fragment.  The relational analyzer
>> attaches the graph query tree to the InRegion node.
>>
>> 3. When the relational planner finds the graph query tree, it passes
>> the graph query tree to the graph planner to produce the graph
>> execution plan.
>>
>> 4. The relational planner produces a join plan node, based on the
>> costs/statistics of the relational table scan and graph query.  The
>> graph execution plan is attached to the join plan node.
>>
>> The parse/query/plan nodes have a label to denote a region, so that the
>> appropriate region's routines can be called.
>>
>
> It would be interesting to see how much of what you want you can get with
> what we currently have and what pieces are really missing.
>
> Am I right that if you wrote a function in C to take a Cypher query plan,
> and analyse it, and execute it, the only thing really missing would be
> feedback to the PostgreSQL planner regarding number of rows expected?
>
>>
>> Regards
>> MauMau
>>
>>
>
>
> --
> Best Regards,
> Chris Travers
> Database Administrator
>
> Tel: +49 162 9037 210 | Skype: einhverfr |
> www.adjust.com
> Saarbrücker Straße 37a, 10405 Berlin
>
>


Re: [HACKERS] Development build with uuid-ossp support - macOS

2016-09-25 Thread Enrique M
I am going to try and switch to macports instead... I see the documentation
provides the macports command for installing the toolset...
https://www.postgresql.org/docs/9.6/static/docguide-toolsets.html  Thank
you.

On Sat, Sep 24, 2016 at 4:52 PM Enrique M <enrique.mailing.li...@gmail.com>
wrote:

> I am trying to do a macOS build of postgresql (9.6 stable branch from
> github) with the uuid-ossp contrib by typing "make world" but it fails due
> to an openjade error (I did install openjade using homebrew using this
> setup https://github.com/petere/homebrew-sgml).
>
> Is there a way to build postgresql and install with uuid-ossp without
> having to build the documentation? I don't really need the documentation
> for my test.
>
> Thank you,
>
> Enrique
>
>
>


[HACKERS] Development build with uuid-ossp support - macOS

2016-09-24 Thread Enrique M
I am trying to do a macOS build of postgresql (9.6 stable branch from
github) with the uuid-ossp contrib by typing "make world" but it fails due
to an openjade error (I did install openjade using homebrew using this
setup https://github.com/petere/homebrew-sgml).

Is there a way to build postgresql and install with uuid-ossp without
having to build the documentation? I don't really need the documentation
for my test.

Thank you,

Enrique


Re: [HACKERS] Allowing GIN array_ops to work on anyarray

2016-08-11 Thread M Enrique
This is awesome. I will build it to start using and testing it in my
development environment. Thank you so much for making this change.

On Thu, Aug 11, 2016 at 11:33 AM Tom Lane  wrote:

> In
> https://www.postgresql.org/message-id/15293.1466536...@sss.pgh.pa.us
> I speculated that it might not take too much to replace all the variants
> of GIN array_ops with a single polymorphic opclass over anyarray.
> Attached is a proposed patch that does that.
>
> There are two bits of added functionality needed to make this work:
>
> 1. We need to abstract the storage type.  The patch does this by teaching
> catalog/index.c to recognize an opckeytype specified as ANYELEMENT with an
> opcintype of ANYARRAY, and doing the array element type lookup at index
> creation time.
>
> 2. We need to abstract the key comparator.  The patch does this by
> teaching gin/ginutil.c that if the opclass omits a GIN_COMPARE_PROC,
> it should look up the default btree comparator for the index key type.
>
> Both of these seem to me to be reasonable general-purpose behaviors with
> potential application to other opclasses.
>
> In the aforementioned message I worried that a core opclass defined this
> way might conflict with user-built opclasses for specific array types,
> but it seems to work out fine without any additional tweaks: CREATE INDEX
> already prefers an exact match if it finds one, and only falls back to
> matching anyarray when it doesn't.  Also, all the replaced opclasses are
> presently default for their types, which means that pg_dump won't print
> them explicitly in CREATE INDEX commands, so we don't have a dump/reload
> or pg_upgrade hazard from them disappearing.
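As a usage-level illustration of the fallback behavior described above (a sketch, not taken from the patch itself):

```sql
-- With a single default-for-anyarray GIN opclass, this keeps working:
CREATE TABLE t (tags uuid[]);
-- No explicit opclass: CREATE INDEX prefers an exact match for uuid[]
-- if one exists, and otherwise falls back to the anyarray opclass.
CREATE INDEX t_tags_gin ON t USING gin (tags);
SELECT * FROM t
WHERE tags @> ARRAY['a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid];
```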
>
> A potential downside is that for an opclass defined this way, we add a
> lookup_type_cache() call to each initGinState() call.  That's basically
> just a single dynahash lookup once the caches are populated, so it's not
> much added cost, but conceivably it could be measurable in bulk insert
> operations.  If it does prove objectionable my inclination would be to
> look into ways to avoid the repetitive function lookups of initGinState,
> perhaps by letting it cache that stuff in the index's relcache entry.
>
> I'll put this on the September commitfest docket.
>
> regards, tom lane
>
>


Re: [HACKERS] Gin index on array of uuid

2016-06-29 Thread M Enrique
Thank you.

On Tue, Jun 28, 2016 at 11:06 PM Oleg Bartunov <obartu...@gmail.com> wrote:

> On Wed, Jun 29, 2016 at 6:17 AM, M Enrique <
> enrique.mailing.li...@gmail.com> wrote:
>
>> What's a good source code entry point to review how this is working for
>> anyarray currently? I am new to the postgres code. I spent some time
>> looking for it, but all I found was the following (which I have not been
>> able to decipher yet).
>>
>
> Look on https://commitfest.postgresql.org/4/145/
>
>
>>
>> [image: pasted1]
>>
>> Thank you,
>> Enrique
>>
>>
>>
>> On Tue, Jun 21, 2016 at 12:20 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
>>
>>> Enrique MailingLists <enrique.mailing.li...@gmail.com> writes:
>>> > Currently creating an index on an array of UUID involves defining an
>>> > operator class. I was wondering if this would be a valid request to
>>> add as
>>> > part of the uuid-ossp extension? This seems like a reasonable operator
>>> to
>>> > support as a default for UUIDs.
>>>
>>> This makes me itch, really, because if we do this then we should
>>> logically
>>> do it for every other add-on type.
>>>
>>> It seems like we are not that far from being able to have just one GIN
>>> opclass on "anyarray".  The only parts of this declaration that are
>>> UUID-specific are the comparator function and the storage type, both of
>>> which could be gotten without that much trouble, one would think.
>>>
>>> > Any downsides to adding this as a default?
>>>
>>> Well, it'd likely break things at dump/reload time for people who had
>>> already created a competing "default for _uuid" opclass manually.  I'm
>>> not
>>> entirely sure, but possibly replacing the core opclasses with a single
>>> one
>>> that is "default for anyarray" could avoid such failures.  We'd have to
>>> figure out ambiguity resolution rules.
>>>
>>> regards, tom lane
>>>
>>


Re: [HACKERS] Gin index on array of uuid

2016-06-28 Thread M Enrique
What's a good source code entry point to review how this is working for
anyarray currently? I am new to the postgres code. I spent some time
looking for it, but all I found was the following (which I have not been
able to decipher yet).

[image: pasted1]

Thank you,
Enrique



On Tue, Jun 21, 2016 at 12:20 PM Tom Lane  wrote:

> Enrique MailingLists  writes:
> > Currently creating an index on an array of UUID involves defining an
> > operator class. I was wondering if this would be a valid request to add
> as
> > part of the uuid-ossp extension? This seems like a reasonable operator to
> > support as a default for UUIDs.
>
> This makes me itch, really, because if we do this then we should logically
> do it for every other add-on type.
>
> It seems like we are not that far from being able to have just one GIN
> opclass on "anyarray".  The only parts of this declaration that are
> UUID-specific are the comparator function and the storage type, both of
> which could be gotten without that much trouble, one would think.
>
> > Any downsides to adding this as a default?
>
> Well, it'd likely break things at dump/reload time for people who had
> already created a competing "default for _uuid" opclass manually.  I'm not
> entirely sure, but possibly replacing the core opclasses with a single one
> that is "default for anyarray" could avoid such failures.  We'd have to
> figure out ambiguity resolution rules.
>
> regards, tom lane
>
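For context, the kind of hand-written opclass the question refers to looks roughly like the snippet that commonly circulates for uuid[] (support-function signatures as usually posted; verify them against your server version before relying on this):

```sql
-- Roughly what users have had to create by hand to GIN-index uuid[]:
CREATE OPERATOR CLASS _uuid_ops DEFAULT FOR TYPE uuid[] USING gin AS
    OPERATOR 1 &&,
    OPERATOR 2 @>,
    OPERATOR 3 <@,
    OPERATOR 4 =,
    FUNCTION 1 uuid_cmp(uuid, uuid),
    FUNCTION 2 ginarrayextract(anyarray, internal, internal),
    FUNCTION 3 ginqueryarrayextract(anyarray, internal, smallint,
                                    internal, internal, internal, internal),
    FUNCTION 4 ginarrayconsistent(internal, smallint, anyarray, integer,
                                  internal, internal, internal, internal),
    STORAGE uuid;
```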


Re: [HACKERS] Identify user requested queries

2015-11-23 Thread Praveen M
Hi Craig,

Thanks for the input. I guess I need to read more code and see if it is
achievable. I started looking into the code very recently. Your input is
very valuable to me. Thanks.

Yes I am trying to do something similar to multi-tenancy. I will look at
the row level security.

Thanks
Praveen

On Mon, Nov 23, 2015 at 2:16 AM, Craig Ringer <cr...@2ndquadrant.com> wrote:

> On 23 November 2015 at 13:27, Praveen M <thr...@gmail.com> wrote:
>
>> Hi All,
>>
>> When the user attempts to make a connection with the database , the code
>> will look into various pg_catalog tables internally. However the user also
>> can query the pg_catalog tables. Is there a way to identify the user
>> requested (or typed query) vs the system requested (internal) queries?
>>
>
> As far as I know there is no simple and reliable method but I'm no
> expert.
>
> Most system accesses to common catalogs use the syscache, which doesn't go
> through the SQL parse/bind/execute process. Or they construct simple scans
> directly, again bypassing the full parser. The system will run internal
> queries with the SPI though, and that's full-fledged SQL. Triggers, rules,
> views, etc, use the SPI, as does plpgsql, fulltext search, XML support, and
> a few other parts of the system. So you cannot assume that anything using
> SQL is user-originated.
>
> Take a look at PostgresMain in src/backend/tcop/postgres.c for the
> top-level user query entry point. You'll see there that you cannot rely on
> testing isTopLevel because multiple statements sent as a single query
> string are treated as if they were a nested transaction block.
> (see exec_simple_query(), postgres.c around line 962). That'd also cause
> problems with use of PL/PgSQL.
>
> You can't assume that all SPI queries are safe, because the user can run
> queries via the SPI using plpgsql etc.
>
> I don't see any way to do this without introducing the concept of a
> "system query"... and in PostgreSQL that's not simple, because the system
> query could cause the invocation of user-defined operators, functions,
> triggers, etc, that then run user-defined code. You'd have to clear the
> "system query" flag whenever you entered user-defined code, then restore it
> on exit. That seems exceedingly hard to get right reliably.
>
> Reading between the lines, it sounds like you are looking for a way to
> limit end-user access to system catalogs as part of a lockdown effort,
> perhaps related to multi-tenancy. Correct? If so, you may wish to look at
> the current work on supporting row security on system catalogs, as that is
> probably closer to what you will need.
>
>
>> Also what procedure or function in the code that indicates the user can
>> write queries , something like I wanted to know the code where the
>> connection is created and available for user to use.
>>
>
> Start reading at src/backend/tcop/postgres.c .
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


[HACKERS] Identify user requested queries

2015-11-22 Thread Praveen M
Hi All,

When the user attempts to make a connection with the database , the code
will look into various pg_catalog tables internally. However the user also
can query the pg_catalog tables. Is there a way to identify the user
requested (or typed query) vs the system requested (internal) queries?

Also what procedure or function in the code that indicates the user can
write queries , something like I wanted to know the code where the
connection is created and available for user to use.

Please Help!!

Praveen


[HACKERS] CreateFunction Statement

2015-11-20 Thread Praveen M
Hi All,

I am trying to get the schema name of the CREATE FUNCTION call from the
parse tree. When I look at the structure of CreateFunctionStmt, I do
not see any schema-name field. Can you please help me understand
how to extract the schema name for the function?

typedef struct CreateFunctionStmt
{
    NodeTag     type;
    bool        replace;        /* T => replace if already exists */
    List       *funcname;       /* qualified name of function to create */
    List       *parameters;     /* a list of FunctionParameter */
    TypeName   *returnType;     /* the return type */
    List       *options;        /* a list of DefElem */
    List       *withClause;     /* a list of DefElem */
} CreateFunctionStmt;


[HACKERS] Need to print the raw_parse_tree in the Log file

2015-11-07 Thread Praveen M
Hi All,

I would like to print the raw parse tree into the log. Is there any
internal utility function to achieve this? If there is none, can you
please help me achieve this?
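For the post-analysis query tree there are existing GUCs that dump trees to the server log. The raw parse tree itself is trickier: not every raw-parse node type has an out-function in every release, so calling nodeToString() on it from a quick hack in exec_simple_query() may or may not work depending on version.

```
# postgresql.conf -- existing tree-dumping switches (also settable per
# session with SET by a superuser):
debug_print_parse = on        # query tree after parse analysis
debug_print_rewritten = on    # tree after rule rewriting
debug_print_plan = on         # execution plan tree
debug_pretty_print = on       # indent the dumped trees for readability
```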

Thanks as always
Praveen


[HACKERS] Eclipse Help

2015-10-13 Thread Praveen M
Hi All,

I was able to follow the debugging of the child process using this link,
https://wiki.postgresql.org/wiki/Working_with_Eclipse

As per the notes, I was able to set breakpoints and everything seems to be
working (hopefully). However, I am not able to see the debug messages in the
Eclipse console (for the attached process). Please help.

When I check the console in Eclipse, this is the last message I see.

0x773fad48 in poll () from /lib/x86_64-linux-gnu/libc.so.6

I added 2 lines in pl_exec.c and kept breakpoints on them. The
breakpoints work fine, but I am not able to see the console log.
I was able to use the log message "ereport(LOG, (errmsg("test here
started")));" in autovacuum.c line 414 and see the message in the console.
But this message is from the main process. I am having trouble seeing the
console log only for the attached process.

pl_exec.c:

Line 310:
    ereport(LOG,
            (errmsg("test here started")));

Line 311:
    elog(ERROR, "test here");

Thanks
Praveen


Re: [HACKERS] [Pgbuildfarm] buildfarm olinguito vs python

2015-05-27 Thread Davin M. Potts
On Mon, May 25, 2015 at 04:37:11PM -0400, Tom Lane wrote:
 Davin M. Potts da...@discontinuity.net writes:
  At Alvaro's suggestion, I'm forwarding my questions (see email thread
  further below) to this list.
 
  In short, building of PL/Python has been disabled on OpenBSD since 2005.
  The errors seen at the time (on OpenBSD and FreeBSD, both) may or may
  not still be an issue with modern builds of Python.  Can someone point
  me to examples of how these errors manifested themselves?  Has Peter
  Eisentraut or others poked at this recently enough to tell me this is
  not worth chasing down?
 
 I'm fairly sure that the errors were blatantly obvious, ie failure to
 build or failure to pass even basic regression tests.  If you can tell
 us that that configure check is inappropriate on modern openbsd, I'd
 be happy to see it go.

With Tom's bit of encouragement, I removed these four lines from the
config/python.m4 file:
  case $host_os in
  openbsd*)
AC_MSG_ERROR([threaded Python not supported on this platform])
;;
  esac

Though in truth, I did take the shortcut of not actually regenerating
the configure file from it and instead I simply commented out those same
exact four lines from the configure and did a proper clean build of HEAD.
The net result is that everything passed from configure through check and
the contrib checks too -- to the extent that we have tests for PL/Python,
all of those tests pass with Python 2.7.10 on OpenBSD (olinguito in the
buildfarm).

To verify that I hadn't done something boneheaded, I manually connected
with psql and did a couple of CREATE FUNCTION ... LANGUAGE plpythonu;
and exercised those new functions successfully.

PL/Python appears happy and healthy on OpenBSD, as best as I can tell
from the test suites passing and my own manual poking.  I suggest those
four lines specific to OpenBSD can be removed from the configure check.

Though I have only verified this against HEAD, if this change is made
across all the active branches, we will see its impact on olinguito's
buildfarm-builds of those branches too.  Or, I can walk through and
manually test each branch if that's preferred?



Davin



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Pgbuildfarm] buildfarm olinguito vs python

2015-05-27 Thread Davin M. Potts
On Mon, May 25, 2015 at 04:35:11PM -0300, Alvaro Herrera wrote:
 Davin M. Potts wrote:
  At Alvaro's suggestion, I'm forwarding my questions (see email thread
  further below) to this list.
  
  In short, building of PL/Python has been disabled on OpenBSD since 2005.
  The errors seen at the time (on OpenBSD and FreeBSD, both) may or may
  not still be an issue with modern builds of Python.  Can someone point
  me to examples of how these errors manifested themselves?  Has Peter
  Eisentraut or others poked at this recently enough to tell me this is
  not worth chasing down?
 
 http://www.postgresql.org/message-id/flat/20061015211642.gf...@nasby.net#20061015211642.gf...@nasby.net
 http://www.postgresql.org/message-id/flat/42f0d5b1.2060...@dunslane.net#42f0d5b1.2060...@dunslane.net

Slightly ironic is that at the moment I received this email from Álvaro,
I was sitting across from Jim Nasby (see first link) in a coffeeshop.


Davin





Re: [HACKERS] [Pgbuildfarm] buildfarm olinguito vs python

2015-05-27 Thread Davin M. Potts
On Mon, May 25, 2015 at 04:26:02PM -0400, Andrew Dunstan wrote:
 
 On 05/25/2015 03:35 PM, Andrew Dunstan wrote:
 
 On 05/25/2015 12:38 PM, Davin M. Potts wrote:
 At Alvaro's suggestion, I'm forwarding my questions (see email thread
 further below) to this list.
 
 In short, building of PL/Python has been disabled on OpenBSD since 2005.
 The errors seen at the time (on OpenBSD and FreeBSD, both) may or may
 not still be an issue with modern builds of Python.  Can someone point
 me to examples of how these errors manifested themselves?  Has Peter
 Eisentraut or others poked at this recently enough to tell me this is
 not worth chasing down?
 
 
 Thanks for any and all pointers.
 
 
 I'm inclined just to remove this in config/python.m4 and see what
 happens:
 
case $host_os in
   openbsd*)
 AC_MSG_ERROR([threaded Python not supported on this platform])
 ;;
esac
 
 
 
 Oh, I see what's happening, kinda. Is your threaded python
 linked against libc or libc_r?

It's built against libc -- I don't think we have libc_r on OpenBSD or if
it used to be there, it looks to be gone now.


Davin





Re: [HACKERS] [Pgbuildfarm] buildfarm olinguito vs python

2015-05-25 Thread Davin M. Potts
At Alvaro's suggestion, I'm forwarding my questions (see email thread
further below) to this list.

In short, building of PL/Python has been disabled on OpenBSD since 2005.
The errors seen at the time (on OpenBSD and FreeBSD, both) may or may
not still be an issue with modern builds of Python.  Can someone point
me to examples of how these errors manifested themselves?  Has Peter
Eisentraut or others poked at this recently enough to tell me this is
not worth chasing down?


Thanks for any and all pointers.


Davin


- Forwarded message from Davin M. Potts da...@discontinuity.net -

Date: Mon, 25 May 2015 11:12:53 -0500
From: Davin M. Potts da...@discontinuity.net
To: Alvaro Herrera alvhe...@2ndquadrant.com
Cc: Andrew Dunstan and...@dunslane.net, pgbuildf...@lists.commandprompt.com
pgbuildf...@lists.commandprompt.com
Subject: Re: [Pgbuildfarm] buildfarm olinguito vs python

I have rebuilt python to ensure that the relatively new dependency on
the shared-lib is satisfied.  This addresses the previous error that
terminated the configure step with the complaint that the shared-lib was
not found.

However, a new complaint is now encountered: "threaded Python not supported
on this platform", despite configure successfully verifying that Python
was indeed compiled with thread support.  Looking in python.m4, I see
specifically:
---
# threaded python is not supported on OpenBSD
AC_MSG_CHECKING(whether Python is compiled with thread support)
pythreads=`${PYTHON} -c "import sys; print(int('thread' in sys.builtin_module_names))"`
if test "$pythreads" = "1"; then
  AC_MSG_RESULT(yes)
  case $host_os in
  openbsd*)
AC_MSG_ERROR([threaded Python not supported on this platform])
---


Looking at the history on python.m4, this has been present for quite a
while, originating back to Bruce Momjian's commits from 2005.

I was convinced that I was able to successfully use the PL/Python
interface on OpenBSD in the past though admittedly I've exclusively
been doing so on other platforms in recent years.


The nature of the errors that were seen back in 2005/2006 are not easily
found.  Can somebody point me in the right direction?  Or have folks
like Peter Eisentraut tested this recently to verify that the problem
persists with modern builds of Python on OpenBSD?  (I can always disable
this test in the configure to see it build but it might not prove much
if I don't try to provoke the previously seen issues.)



Davin


On Tue, May 19, 2015 at 10:55:40AM -0300, Alvaro Herrera wrote:
 Davin M. Potts wrote:
  It may help to understand that olinguito uses the same build of python
  (same directory) for happily building all the REL9_X branches and only
  HEAD seems to have this problem.  I have tried cleaning everything (not
  just cache but blowing away the directories and doing a clean git pull
  with new, fresh directories) but HEAD's problems persist.  If you look
  back in the history, HEAD used to build happily on this system with this
  python build.  This seems to support the idea that we are looking at a
  regression.
 
 It's a deliberate change, not a regression.  See here:
 http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=d664a10f9623fd2198b257e513bce849d439a773
 
 
 -- 
 Álvaro Herrerahttp://www.twitter.com/alvherre

- End forwarded message -




[HACKERS] can you have any idea about toast missing chunk issu resolution

2015-01-15 Thread M Tarkeshwar Rao
Hi all,

We are getting the following error message on doing any action on the table
(like SELECT, or opening it from pgAdmin).

Please suggest.

ERROR:  missing chunk number 0 for toast value 54787 in pg_toast_2619
** Error **

ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
SQL state: XX000
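One note on this particular error (worth verifying for your version, and take a backup first): pg_toast_2619 is the TOAST table of pg_statistic (OID 2619), so the damaged rows here are only planner statistics, which can be discarded and rebuilt. A commonly suggested recovery is:

```sql
-- pg_toast_2619 belongs to pg_statistic, so the corrupt data is only
-- optimizer statistics, which can be thrown away and rebuilt:
REINDEX TABLE pg_toast.pg_toast_2619;
DELETE FROM pg_statistic;   -- remove the rows with dangling toast pointers
ANALYZE;                    -- regenerate the statistics
```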


CREATE TABLE mm_activealarm
(
  alarm_id integer NOT NULL,
  source_address character varying(255) NOT NULL,
  alarm_instance_id integer NOT NULL,
  alarm_raise_time bigint,
  alarm_update_time bigint,
  alarm_cease_time bigint,
  alarm_count integer,
  alarm_severity integer NOT NULL,
  source_type character varying(40) NOT NULL,
  alarm_state integer NOT NULL,
  event_type integer,
  notification_id integer NOT NULL,
  probable_cause integer NOT NULL,
  specific_problem integer NOT NULL,
  alarm_additional_text character varying(10240),
  alarm_ack_time bigint,
  alarm_ack_user character varying(100) NOT NULL,
  alarm_ack_system character varying(100) NOT NULL,
  alarm_proposed_repair_action character varying(10240) NOT NULL,
  CONSTRAINT mm_activealarm_pk PRIMARY KEY (alarm_id, source_address)
  USING INDEX TABLESPACE mgrdata
)
WITH (
  OIDS=FALSE
)
TABLESPACE mgrdata;
ALTER TABLE ss_activealarm
  OWNER TO ss_super;

Regards
Tarkeshwar



Re: [HACKERS] Postgres TR for missing chunk

2014-12-19 Thread M Tarkeshwar Rao
Hello friends,

Thanks for your useful inputs.

We are facing this issue and want to analyse it through logging.
Can you please share a sample Postgres config file to enable maximum logging
with syslog support?

What should be the debug level so that I can capture the failure information?
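A maximal-verbosity starting point might look like the fragment below. These are standard GUCs; debug5 is the noisiest log_min_messages level and produces enormous volume, so it is suitable only for short diagnostic windows.

```
# postgresql.conf -- verbose logging via syslog
log_destination = 'syslog'
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
log_min_messages = debug5           # noisiest server-message level
log_min_error_statement = error     # log the statement causing each error
log_statement = 'all'               # log every statement as it arrives
log_line_prefix = '%t [%p] %u@%d '  # timestamp, pid, user@database
log_connections = on
log_disconnections = on
```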

Regards
Tarkeshwar

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: 16 December 2014 22:25
To: Jaime Casanova
Cc: M Tarkeshwar Rao; PostgreSQL-development
Subject: Re: [HACKERS] Postgres TR for missing chunk

Jaime Casanova ja...@2ndquadrant.com writes:
 You know, that toast table name ringed a bell.
 Look at this thread maybe this is your problem, and if it is then is 
 already fixed and you should update.
 http://www.postgresql.org/message-id/12138.1336019...@sss.pgh.pa.us

That was about transient failures though, not persistent ones, which is what 
the OP seems to be claiming he's getting.

 Btw, when giving a bug report you should start but saying your 
 PostgreSQL's version and explain what you did based on Google's wisdom

Yeah.

regards, tom lane




Re: [HACKERS] Postgres TR for missing chunk

2014-12-15 Thread M Tarkeshwar Rao
Hello Friends,

Can you please tell me how I can track which bugs are fixed in which
release, and when they will be fixed? I want to track the analysis and
status of bugs raised against Postgres. Can I get this information?

For the last few days we have been struggling with the following issue:

1. We found that some operations on this table, such as SELECT or TRUNCATE,
are failing, with a more specific error thrown as below:

ERROR:  missing chunk number 0 for toast value 54787 in pg_toast_2619

** Error **

We have done all the things suggested on Google but have not been able to
resolve it. I want to know how to avoid this issue.

Can you please suggest when the following bugs will be resolved?

These are known bugs in Postgres. Details are below.

BUG #9187: corrupt toast tables

http://www.postgresql.org/message-id/30154.1392153...@sss.pgh.pa.us
http://www.postgresql.org/message-id/cafj8praufpttn5+ohfqpbcd1jzkersck51uakhcwd8nt4os...@mail.gmail.com
http://www.postgresql.org/message-id/20140211162408.2713.81...@wrigleys.postgresql.org

BUG #7819: missing chunk number 0 for toast value 1235919 in pg_toast_35328

http://www.postgresql.org/message-id/C62EC84B2D3CF847899CCF4B589CCF70B20AA08F@BBMBX.backbone.local

Thanks !!
Tarkeshwar


[HACKERS] issue in postgresql 9.1.3 in using arrow key in Solaris platform

2014-11-26 Thread M Tarkeshwar Rao
Hi all,

We are facing the following issue in PostgreSQL 9.1.3 when using the arrow
keys on the Solaris platform.
Can you please help us resolve it, tell us whether any newer release has a
fix for it, or suggest a workaround?

Issue: the psql client dumps core when the up arrow is used twice.

Platform: Solaris x86

Steps to reproduce:
=
1. Log in to any postgres database.
2. Execute any query, say \list.
3. Press the up arrow twice.
4. A segmentation fault occurs and a core is generated. The session is also
terminated.

Please find an example below.

# ./psql -U super -d mgrdb
Password for user super:
psql (9.1.3)
Type "help" for help.

mgrdb=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 mgrdb     | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(4 rows)

mgrdb=#
mgrdb=# select count(1) from operator_msm;Segmentation Fault (core dumped)

Regards
Tarkeshwar


[HACKERS] Insert query hangs

2014-07-09 Thread M Tarkeshwar Rao
Hi all,

We have a table named Test in a database DB1. We imported this database
from another machine.
When I fire an insert statement against it, the statement hangs.

Then I created another table with the same structure and the same data as
table Test.
When I fired the insert statement at that table, it worked fine.

I am not able to find the reason for this. Can you please help me out?
This scenario is easily reproducible.

I have a standalone system and postgresql  9.1 installed on it.

Regards
Tarkeshwar


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] FW: [postgres-discuss] Insert query hangs

2014-07-09 Thread M Tarkeshwar Rao
I have a hang issue in Postgres when inserting a record into a table. The
table structure is enclosed in the discussion.

We also found an issue with unique indexes: when I removed the unique index,
the insert operation works fine. I need help from this core group to determine
whether this is a bug in the Postgres code or we are using the unique index
wrongly.

Regards
Tarkeshwar

From: Niklas Andersson
Sent: 09 July 2014 18:21
To: M Tarkeshwar Rao; Leo Zhou; postgres-disc...@mailman.lmera.ericsson.se
Subject: RE: [postgres-discuss] Insert query hangs

I wouldn't advise you to drop the indexes in a production environment, as they
are usually very important for fast queries.

Your index doesn't seem to be of much use, though, as it looks like you are
only indexing one single column that is an integer; it seems it is not needed.
Usually you use indexes with two or more columns to speed up queries, or you
join on those columns.

If you want to make sure that that column is unique, I would advise you to
define it as a primary key. You could also use the keyword UNIQUE, but in this
case I would prefer to define it as a primary key.

Then, in order to always get a new, unique integer as a primary key, I would
suggest you have a look at CREATE SEQUENCE. The syntax comes from how Oracle
does it and it works very nicely [1]
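Niklas's suggestion can be sketched in a few lines of SQL. This is a
hypothetical example (table and sequence names invented here), not the
poster's actual schema:

```sql
-- A sequence-backed primary key: the PK enforces uniqueness by itself,
-- so no separate unique index is needed.
CREATE SEQUENCE eventlog_seq;

CREATE TABLE eventlog (
    tableindex integer PRIMARY KEY DEFAULT nextval('eventlog_seq'),
    object     character varying(80)
    -- ...remaining columns...
);

-- Each insert draws a new, unique value automatically:
INSERT INTO eventlog (object) VALUES ('login');
```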

But this doesn't explain why your current index is causing problems, because
it _shouldn't_ ;-/

I think you would need some tools to keep a check on the server load, or have
a look at how EXPLAIN works. Unfortunately I don't have that deep competence :-(

[1] http://www.postgresql.org/docs/8.1/static/sql-createsequence.html


Regards,
Niklas

From: M Tarkeshwar Rao
Sent: Wednesday, 09 July 2014 2:29 PM
To: Niklas Andersson; Leo Zhou; 
postgres-disc...@mailman.lmera.ericsson.se
Subject: RE: [postgres-discuss] Insert query hangs
What should I do resolve this issue?

Change the structure of Table or I should not create the index.

From: Niklas Andersson
Sent: 09 July 2014 17:58
To: M Tarkeshwar Rao; Leo Zhou; 
postgres-disc...@mailman.lmera.ericsson.se
Subject: RE: [postgres-discuss] Insert query hangs

Yes, and the more data, the longer it takes to rebuild the index.

 This is why you drop the indexes during certain copy operations, if you have 
indexes enabled the copy would take forever.

Regards,
Niklas

From: M Tarkeshwar Rao
Sent: Wednesday, 09 July 2014 2:22 PM
To: Niklas Andersson; Leo Zhou; 
postgres-disc...@mailman.lmera.ericsson.se
Subject: RE: [postgres-discuss] Insert query hangs
Fine now I understand why it is taking time.

Is it possible that insert operation will take time when unique index is 
already created on the table and table has some data within it?


From: Niklas Andersson
Sent: 09 July 2014 17:20
To: M Tarkeshwar Rao; Leo Zhou; 
postgres-disc...@mailman.lmera.ericsson.se
Subject: RE: [postgres-discuss] Insert query hangs

Can this be of help [1]?

[1] 
http://www.postgresql.org/docs/9.2/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY

Regards,
Niklas

From: M Tarkeshwar Rao
Sent: Wednesday, 09 July 2014 1:41 PM
To: Niklas Andersson; Leo Zhou; 
postgres-disc...@mailman.lmera.ericsson.se
Subject: RE: [postgres-discuss] Insert query hangs

CREATE TABLE eventlogentry
(
   tableindex integer,
   object character varying(80),
   method character varying(80),
   bgwuser character varying(80),
   time character(23),
   realuser character varying(80),
   host character varying(80),
   application character varying(80)
)
WITH (
   OIDS=FALSE
)
TABLESPACE mmdata;

ALTER TABLE eventlogentry
   OWNER TO mmsuper;

GRANT ALL ON TABLE eventlogentry TO mmsuper;
GRANT SELECT ON TABLE eventlogentry TO report;

CREATE UNIQUE INDEX ind1_eventlogentry
   ON eventlogentry
   USING btree
   (tableindex)
TABLESPACE mmindex;



I am sharing the table structure. When we removed the unique index, inserts
work fine, and when we created a normal (non-unique) index they also work fine.

After removing the unique index we tried to recreate it, but it keeps emitting
the following log messages:

concurrent insert in progress within table eventlogentry

caveat when building a unique index concurrently is that the uniqueness
constraint is already being enforced against other transactions when the second
table scan begins
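The second message quoted above is the standard caveat printed for CREATE
INDEX CONCURRENTLY. A hedged sketch of retrying the build that way, using the
names from the table definition earlier in this mail:

```sql
-- A failed concurrent build leaves an INVALID index behind; drop it
-- before retrying.
DROP INDEX IF EXISTS ind1_eventlogentry;

-- Build the unique index without a long write-blocking lock. Concurrent
-- inserts that violate uniqueness make the build fail rather than hang,
-- which matches the behaviour described above.
CREATE UNIQUE INDEX CONCURRENTLY ind1_eventlogentry
    ON eventlogentry USING btree (tableindex)
    TABLESPACE mmindex;
```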





Regards

Tarkeshwar


From: Niklas Andersson
Sent: 09 July 2014 16:10
To: M Tarkeshwar Rao; Leo Zhou; 
postgres-disc...@mailman.lmera.ericsson.se
Subject: RE: [postgres-discuss] Insert query hangs

Hi,

 You have some info on checking on corrupt tables here [1

[HACKERS] Wide area replication postgres 9.1.6 slon 2.1.2 large table failure.

2013-01-11 Thread Tory M Blue
So I started this thread on the slon forum, and they mentioned that I/we
should ask here.

Postgres 9.1.4 slon 2.1.1
-and-
Postgres 9.1.6 slon 2.1.2

Scenario:

Node 1 is on a gig circuit and is the master (West Coast)

Node 2 is also on a gig circuit and is the slave (Georgia)

Symptoms: slon immediately dies after transferring the biggest table in the
set (this happens with 2 of 3 sets; the set that actually completes has no
large tables).

Set 1 has a table that takes just under 6000 seconds, and set 2 has a table
that takes double that, and again it completes.

1224459-2013-01-11 14:21:10 PST CONFIG remoteWorkerThread_1: 5760.913
seconds to copy table cls.listings
1224560-2013-01-11 14:21:10 PST CONFIG remoteWorkerThread_1: copy table
cls.customers
1224642-2013-01-11 14:21:10 PST CONFIG remoteWorkerThread_1: Begin COPY of
table cls.customers
1224733-2013-01-11 14:21:10 PST ERROR  remoteWorkerThread_1: select
_admissioncls.copyFields(8);  --- this has the proper data
1224827:2013-01-11 14:21:10 PST WARN   remoteWorkerThread_1: data copy for
set 1 failed 1 times - sleep 15 seconds

Now in terms of postgres, if I do a copy from node 1 to node 2, the large
table (2 hours) completes without issue.

From Node 2:
-bash-4.1$ psql -h idb02 -d admissionclsdb -c "copy cls.listings to stdout" | wc
 4199441 600742784 6621887401

This worked fine.

I get no errors in the postgres logs, there is no network disconnect, and
since I can do a copy over the wire that completes, I'm at a loss. I don't
know what to look at, what to look for, or what to do. Obviously this is the
wrong place for slon issues.

One of the slon developers stated;
I wonder if there's something here that should get bounced over to
pgsql-hackers or such; we're poking at a scenario here where the use
of COPY to stream data between systems is proving troublesome, and
perhaps there may be meaningful opinions over there on that.

If the same table that sits at the end of a failed slon attempt completes
with a plain COPY, I'm just not sure what is going on.

Any suggestions, please ask for more data, I can do anything to the slave
node, it's a bit tougher on the source, but I can arrange to make changes
to it if need be.


I just upgraded to 9.1.6 and slon 2.1.2 but prior tests were on 9.1.4 and
slon 2.1.1 and a mix of postgres 9.1.4 slon 2.1.1 and postgres 9.1.6 slon
2.1.1 (node 2)

The other difference is node 1 is running on Fedora12 and node 2 is running
CentOS 6.2

Thanks in advance
Tory


[HACKERS] File Corruption recovery

2012-11-06 Thread m...@rpzdesign.com
I have been working on external replication for PostgreSQL 9.2 for a little
while (with too many interruptions blocking my progress!).

Who knows a good utility to aggressively analyze and recover PostgreSQL
databases?

The standard reply I see is "make regular backups", but that guarantees
maximum data loss as defined by the backup interval.

Our MariaDB MySQL/XtraDB/InnoDB friends have Aria_check and some other tools
to recover as much as possible up to the moment of failure.

While full replication is the ultimate safeguard, in split-brain mode I could
see a hardware failure causing loss of data back to the last replication
exchange or last backup interval.

During a data crash, I want the recovery tool to HELP me recover as much data
as possible and get back to operations. What I do not want to do is a bunch of
manual command-line file copies and deletes to guess my way back to
operational mode (some data loss is inevitable).

I could make a daily snapshot of the system catalog to assist the recovery
tool in restoring the database.

Who has ideas on this?








Re: [HACKERS] [BUGS] BUG #7521: Cannot disable WAL log while using pg_dump

2012-10-18 Thread Gezeala M . Bacuño II
You may disable full_page_writes, but as you can see from my previous
post, disabling it did not do the trick. My zfs' USED property
continues to increase.

On Wed, Oct 17, 2012 at 3:55 PM, ichbinrene rene.romer...@gmail.com wrote:
 I'm experiencing the exact same issue:

 PostgreSQL 9.1.6 on i686-pc-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305
 (Red Hat 4.4.6-4), 32-bit
 CentOS release 6.3 (Final)

 I might also turn full_page_writes off but I fear for data integrity in case
 of a crash .



 --
 View this message in context: 
 http://postgresql.1045698.n5.nabble.com/BUG-7521-Cannot-disable-WAL-log-while-using-pg-dump-tp5722846p5728727.html
 Sent from the PostgreSQL - bugs mailing list archive at Nabble.com.






Re: [HACKERS] Switching timeline over streaming replication

2012-09-26 Thread m...@rpzdesign.com

Josh:

The good part is that you are the first person to ask for a copy, so I will
send you the hook code that I have and you can be a good sport and put it on
GitHub. You can give us both credit for a joint effort: I do the code, you put
it on GitHub.

The not so good part is that the community has a bunch of other trigger work
and other stuff going on, so there was not much interest in non-WAL
replication hook code.

I do not have time to debate implementation or wait for the release of 9.3
with my needs not met, so I will just keep patching the hook code into
whatever release code base comes along.

The bad news is that I have not implemented the logic of the external 
replication daemon.


The other good and bad news is that you are free to receive the messages 
from the hook code
thru the unix socket and implement replication any way you want and the 
bad news is that you are free

to IMPLEMENT replication any way you want.

I am going to implement master-master-master-master SELF HEALING 
replication, but that is just my preference.
Should take about a week to get it operational and another week to see 
how it works in my geographically dispersed

servers in the cloud.

Send me a note if it is ok to send you a zip file with the source code 
files that I touched in the 9.2 code base so you

can shove it up on GitHub.

Cheers,

marco


On 9/26/2012 6:48 PM, Josh Berkus wrote:

I was able to patch the 9.2.0 code base in 1 day and change my entire
architecture strategy for replication
into self healing async master-master-master and the tiniest bit of
sharding code imaginable

Sounds cool.   Do you have a fork available on Github?  I'll try it out.







Re: [HACKERS] Switching timeline over streaming replication

2012-09-25 Thread m...@rpzdesign.com

Amit:

At some point, every master - slave replicator gets to the point where 
they need

to start thinking about master-master replication.

Instead of getting stuck in the weeds to finally realize that 
master-master is the ONLY way
to go, many developers do not start out planning for master - master, 
but they should, out of habit.


You can save yourself a lot of grief just be starting with master-master 
architecture.


But you don't have to USE it, you can just not send WRITE traffic to the 
servers that you do
not want to WRITE to, but all of them should be WRITE servers. That way, 
the only timeline
you ever need is your decision to send WRITE traffic request to them, 
but there is nothing
that prevents you from running MASTER - MASTER all the time and skip the 
whole slave thing

entirely.

At this point, I think synchronous replication is only for immediate 
local replication needs

and async for all the master - master stuff.

cheers,

marco


On 9/24/2012 9:44 PM, Amit Kapila wrote:

On Monday, September 24, 2012 9:08 PM m...@rpzdesign.com wrote:
What a disaster waiting to happen. Maybe the only replication should be
master-master replication
so there is no need to sequence timelines or anything, all servers are
ready masters, no backups or failovers.
If you really do not want a master serving, then it should only be
handled in the routing
of traffic to that server and not the replication logic itself.  The
only thing that ever came about
from failovers was the failure to turn over.  The above is opinion
only.

This feature is for users who want to use master-standby configurations.

What do you mean by :
then it should only be  handled in the routing of traffic to that server
and not the replication logic itself.

Do you have any idea other than proposed implementation or do you see any
problem in currently proposed solution?



On 9/24/2012 7:33 AM, Amit Kapila wrote:

On Tuesday, September 11, 2012 10:53 PM Heikki Linnakangas wrote:

I've been working on the often-requested feature to handle timeline
changes over streaming replication. At the moment, if you kill the
master and promote a standby server, and you have another standby
server that you'd like to keep following the new master server, you
need a WAL archive in addition to streaming replication to make it
cross the timeline change. Streaming replication will just error out.

Having a WAL archive is usually a good idea in complex replication
scenarios anyway, but it would be good to not require it.

Confirm my understanding of this feature:

This feature is for the case when standby-1, which is going to be promoted to
master, has archive mode 'on', as in that case only its timeline will change.

If the above is right, then there can be other similar scenarios where it can
be used:

Scenario-1 (1 Master, 1 Stand-by)
1. Master (archive_mode=on) goes down.
2. Master again comes up
3. Stand-by tries to follow it

Now in the above scenario also, due to the timeline mismatch it gives an
error, but your patch should fix it.



Some parts of this patch are just refactoring that probably make sense
regardless of the new functionality. For example, I split off the timeline
history file related functions to a new file, timeline.c. That's not very much
code, but it's fairly isolated, and xlog.c is massive, so I feel that anything
that we can move off from xlog.c is a good thing. I also moved off the two
functions RestoreArchivedFile() and ExecuteRecoveryCommand() to a separate
file. Those are also not much code, but are fairly isolated. If no-one objects
to those changes, and the general direction this work is going in, I'm going
to split off those refactorings to separate patches and commit them
separately.

I also made the timeline history file a bit more detailed: instead of
recording just the WAL segment where the timeline was changed, it now records
the exact XLogRecPtr. That was required for the walsender to know the
switchpoint without having to parse the XLOG records (it reads and parses the
history file instead).

IMO separating the timeline history file related functions to a new file is
good. However I am not sure about splitting RestoreArchivedFile() and
ExecuteRecoveryCommand() into a separate file. How about splitting out all the
archive-related functions:
static void XLogArchiveNotify(const char *xlog);
static void XLogArchiveNotifySeg(XLogSegNo segno);
static bool XLogArchiveCheckDone(const char *xlog);
static bool XLogArchiveIsBusy(const char *xlog);
static void XLogArchiveCleanup(const char *xlog);
..
..

In any case, it will be better if you can split it into multiple patches:

1. Having the new functionality of switching timeline over streaming
replication
2. Refactoring-related changes.

It can make my testing and review of the new feature patch a little easier.

With Regards,
Amit Kapila.










Re: [HACKERS] Switching timeline over streaming replication

2012-09-25 Thread m...@rpzdesign.com

John:

Who has the money for oracle RAC or funding arrogant bastard Oracle CEO 
Ellison to purchase another island?


Postgres needs CHEAP, easy to setup, self healing, 
master-master-master-master and it needs it yesterday.


I was able to patch the 9.2.0 code base in 1 day and change my entire 
architecture strategy for replication
into self healing async master-master-master and the tiniest bit of 
sharding code imaginable


That is why I suggest something to replace OIDs with ROIDs for 
replication ID.  (CREATE TABLE with ROIDS)

I implement ROIDs as a uniform design pattern for the table structures.

Synchronous replication may be fine between 2 local machines if absolutely no
local hardware failure is acceptable, but cheap, scalable, synchronous,
TRANSACTIONAL master-master-master-master is a real tough slog.


I could implement global locks in the external replication layer if I 
choose, but there are much easier ways in routing
requests thru the load balancer and request sharding than trying to 
manage global locks across the WAN.


Good luck with your HA patch for Postgres.

Thanks for all of the responses!

You guys are 15 times more active than the MySQL developer group, likely 
because

they do not have a single db engine that meets all the requirements like PG.

marco

On 9/25/2012 5:10 PM, John R Pierce wrote:

On 09/25/12 11:01 AM, m...@rpzdesign.com wrote:



At some point, every master - slave replicator gets to the point 
where they need
to start thinking about master-master replication. 


master-master and transactional integrity are mutually exclusive, 
except perhaps in special cases like Oracle RAC, where the masters 
share a coherent cache and implement global locks.












Re: [HACKERS] External Replication

2012-09-24 Thread m...@rpzdesign.com

Dmitri:

Thanks for the response.

I am resigned to just patch each major release with my own pile of hook 
code that I can quickly

graft into the code base, currently grafting 9.2.0.

My strategy is to let the PG code base float around with all the work of 
the fine hackers on this list,

maybe debate a couple of things with some friendly types, but really just put
my effort into the logic of the external replication daemon, which will NOT
change due to anything in the PG core.

If one day, the code base actually feeds me the event information I 
need, maybe I will change it.


And I have not seen anybody request my hook code, but a few have responded
that they are working on things in the code base, release date unknown.

Cheers,

marco

On 9/24/2012 10:20 AM, Dimitri Fontaine wrote:

m...@rpzdesign.com m...@rpzdesign.com writes:

You may want to consider changing the command TRIGGER into a command FILTER
and possibly post processing TRIGGER that
is determined to be called INSIDE the FILTER.  Or some way to pass
information between the FILTER and the post processing trigger.

The only current event supported by the system is the
ddl_command_start one. We mean to add some more, and triggers wanting
to communicate data in between ddl_command_start and ddl_command_end
(for example) will have to use something like a table.


Also, some information as to whether a series of statements was ROLLED
BACK would be helpful.

Event Triggers are not an autonomous transaction: any effect they have
in the database is rolled back when the main transaction is rolled
back. You can use LISTEN/NOTIFY or PGQ, which both know how to handle
that semantics.

Regards,






Re: [HACKERS] External Replication

2012-09-22 Thread m...@rpzdesign.com

Andres  those involved in the 9.3 Command triggers:

You may want to consider changing the command TRIGGER into a command 
FILTER and possibly post processing TRIGGER that
is determined to be called INSIDE the FILTER.  Or some way to pass 
information between the FILTER and the post processing trigger.


Also, some information as to whether a series of statements was
ROLLED BACK would be helpful.


How to rebuild the parsetree so external replication could catch a clue 
that all of the last series of updates was actually rolled back!


For example, I want to enforce a policy where all DROP TABLE  
commands become ALTER TABLE  RENAME to SIGy
All DROP TABLE Sig commands are not changed so that the SIG tables 
actually drop. (Almost like a TRASH Can for postgresql!)


ALL TRUNCATE TABLE  Become ALTER TABLE  RENAME to SIGy AND 
THEN CREATE TABLE  SELECT AS SIG


This way, there is no need to enforce a waiting period before replication
begins its work when errant DROPs / TRUNCATEs occur.
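As a sketch, the rewrite policy above amounts to the following SQL (all table
names here are hypothetical):

```sql
-- "DROP TABLE customers" becomes a rename into the trash-can prefix:
ALTER TABLE customers RENAME TO sig_customers;

-- "TRUNCATE orders" becomes a rename plus an empty re-creation
-- (note: CREATE TABLE AS does not copy indexes or constraints):
ALTER TABLE orders RENAME TO sig_orders;
CREATE TABLE orders AS SELECT * FROM sig_orders WHERE false;

-- Only tables already carrying the trash-can prefix are really dropped:
DROP TABLE sig_customers;
```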


All of this is important for Easy Setup/Initiation, Self-Healing, Fault 
Tolerant, Scaleable, INEXPENSIVE, External REPLICATION, a larger subject 
indeed.


I want CLOUD behavior without CLOUD prices.

Anybody who is working on the 9.3 COMMAND TRIGGER, drop me a note if you 
wish.


marco

On 9/21/2012 3:41 PM, Andres Freund wrote:

Hi,

On Friday, September 21, 2012 11:06:46 PM m...@rpzdesign.com wrote:

Thanks for the link on the GIT patch code.

I did a big major domo search and found some stuff related to command
triggers:

http://archives.postgresql.org/pgsql-hackers/2012-03/msg00169.php

Look into the new event triggers started recently. -

Are these command triggers currently in the 9.2.0 code base or is it in
a alpha 9.2.1xxx?

Its not in 9.2 and will only be in 9.3

Andres






Re: [HACKERS] [RFC][PATCH] wal decoding, attempt #2 - Design Documents (really attached)

2012-09-22 Thread m...@rpzdesign.com

Andres, nice job on the writeup.

I think one aspect you are missing is that there must be some way for 
the multi-masters to
re-stabilize their data sets and quantify any data loss.  You cannot do 
this without
some replication intelligence in each row of each table so that no 
matter how disastrous
the hardware/internet failure in the cloud, the system can HEAL itself 
and keep going, no human beings involved.


I am laying down a standard design pattern of columns for each row:

MKEY - Primary key guaranteed unique across ALL nodes in the CLOUD with 
NODE information IN THE KEY. (A876543 vs B876543 or whatever)(network 
link UP or DOWN)

CSTP - create time stamp based on unix time stamp
USTP - last update time stamp based on unix time stamp
UNODE - Node that updated this record

Many applications already need the above information, might as well 
standardize it so external replication logic processing can self heal.
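A minimal sketch of that per-row design pattern as DDL (all names here are
hypothetical, not taken from any real schema):

```sql
CREATE TABLE replicated_item (
    mkey    text PRIMARY KEY,  -- node-prefixed key, e.g. 'A876543' from node A
    cstp    bigint NOT NULL,   -- create time stamp (unix time)
    ustp    bigint NOT NULL,   -- last update time stamp (unix time)
    unode   text   NOT NULL,   -- node that updated this record
    payload text
);
```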


PostgreSQL tables have optional 32-bit int OIDs; you may want to consider
having a replication version, the ROID (replication object ID), and then
externalize the primary key generation into a loadable UDF.

Of course, ALL the nodes must be in contact with each other, not allowing
significant drift on their clocks while operating. (NTP is a starter.)

I just do not know of any other way to add self healing without the 
above information, regardless of whether you hold up transactions for 
synchronous
or let them pass thru asynch.   Regardless if you are getting your 
replication data from the WAL stream or thru the client libraries.


Also, your replication model does not really discuss busted link 
replication operations, where is the intelligence for that in the 
operation diagram?


Every time you package replication into the core, someone has to tear into
that pile to add some extra functionality, so definitely think about providing
sensible hooks for that extra bit of customization to override the base
function.


Cheers,

marco

On 9/22/2012 11:00 AM, Andres Freund wrote:

This time I really attached both...






[HACKERS] DB Schema

2012-09-21 Thread m...@rpzdesign.com
During the execution of the ProcessUtility() function in
/src/backend/xxx/utility.c,


the CreateStmt node type is processed to create a table.

Is there a global function in the context of the backend process that will
deliver the current database and schema names?

The querystring cannot be relied upon for discerning this information.

marco




[HACKERS] External Replication

2012-09-21 Thread m...@rpzdesign.com

Heikki:

Thanks for the response.  I am writing an external replication engine
and putting hooks into postgres to send signals via a unix socket to 
the outside world.


All of the logic and implementation will occur OUTSIDE the postgres 
codebase and

will not interfere with any WAL based replication schemes.

The usual Trigger level asynch replication does not suffice since 
it does not handle
new databases, new schemas, new tables, new indexes, alter everything, 
new functions, etc.


So I started putting into utility.c-xxx_ProcessUtility(..., stmt* 
parsetree,) so that discrete


Does anybody have other ideas where to better locate the Hooks for 
external replication/signaling

than utility.c?

One drawback is that I have to patch directly into those areas where new
relation IDs are created so I can pass them outside of the process to the
replication engine. ProcessUtility does a really bad job of retaining its work
as it processes the statements, so I have to patch the code where the relation
IDs are created. Those new IDs are never retained; once ProcessUtility
returns, its work is done.

Is there a way to put a Trigger on pg_class, pg_database, pg_namespace 
instead of patching the statically

compiled binary to simulate the triggers?

Cheers,

marco


On 9/21/2012 10:15 AM, Heikki Linnakangas wrote:

On 21.09.2012 17:58, m...@rpzdesign.com wrote:

During the execution of ProcessUtility() function in
/src/backend/xxx/utility.c,

the CreateStmt node type is processed to create a table.

Is there a global function in the context of the backend process that
will deliver the current database and schema names?


There's a global variable MyDatabaseId for the database the backend is 
connected to. It doesn't change after login.


There's no such thing as a current schema, but I think you'll want 
to take a look at src/backend/catalog/namespace.c, which handles the 
search_path. There's a variable activeCreationNamespace there; look at 
the functions in namespace.c to see how it works.


- Heikki








Re: [HACKERS] External Replication

2012-09-21 Thread m...@rpzdesign.com

Andres:

Thanks for the link on the GIT patch code.

I did a big major domo search and found some stuff related to command 
triggers:


http://archives.postgresql.org/pgsql-hackers/2012-03/msg00169.php

Look into the new event triggers started recently. -

Are these command triggers currently in the 9.2.0 code base or is it in 
a alpha 9.2.1xxx?


I searched the 9.2.0 code base and found nothing on CREATE TRIGGER that had
anything to do with anything other than TABLES and VIEWS.


I cannot wait for stable code to arrive, I need to add it today.

Since the hackers group is already working on this, I will not even try 
to contribute this work, unless someone wants it.


marco


On 9/21/2012 1:19 PM, Andres Freund wrote:

On Friday, September 21, 2012 08:12:26 PM m...@rpzdesign.com wrote:

Heikki:

Thanks for the response.  I am writing an external replication engine
and putting hooks into postgres to send signals via a unix socket to
the outside world.

All of the logic and implementation will occur OUTSIDE the postgres
codebase and
will not interfere with any WAL based replication schemes.

The usual Trigger level asynch replication does not suffice since
it does not handle
new databases, new schemas, new tables, new indexes, alter everything,
new functions, etc.

So I started putting into utility.c-xxx_ProcessUtility(..., stmt*
parsetree,) so that discrete

Does anybody have other ideas where to better locate the Hooks for
external replication/signaling
than utility.c?

Look into the new event triggers started recently.

Commits
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3855968f328918b6cd1401dd11d109d471a54d40
and
3a0e4d36ebd7f477822d5bae41ba121a40d22ccc

Look into earlier discussions around event/command triggers why putting stuff
plainly into ProcessUtility is not all that helpful...

Greetings,

Andres







Re: [HACKERS] [BUGS] BUG #7521: Cannot disable WAL log while using pg_dump

2012-09-09 Thread Gezeala M . Bacuño II
On Fri, Sep 7, 2012 at 11:40 AM, Gezeala M. Bacuño II geze...@gmail.com wrote:
 adding pgsql-bugs list in case OP posts back.

 On Fri, Sep 7, 2012 at 11:29 AM, Pavan Deolasee
 pavan.deola...@gmail.com wrote:
 (Adding -hackers. Did not realize it got dropped)

 On Fri, Sep 7, 2012 at 11:25 PM, Gezeala M. Bacuño II geze...@gmail.com
 wrote:

 On Fri, Sep 7, 2012 at 7:17 AM, Pavan Deolasee pavan.deola...@gmail.com
 wrote:
 
 
  On Fri, Sep 7, 2012 at 7:00 PM, Marie Bacuno II geze...@gmail.com
  wrote:
 
 
  On Sep 7, 2012, at 2:19, Pavan Deolasee pavan.deola...@gmail.com
  wrote:
 
 
   or have long running transactions ?
 
   Yes, but I don't think there were any when the snapshot was taken. Does the
   pg_xlog_location_diff() result from the latest and prior checkpoint upon
   start-up indicate the size of the replayed changes?
 
 
  Thats the amount of additional WAL generated after you started the
  server.
 
 
  
   BTW, the following query returns ~60GB. Thats the amount of WAL
   written after the server was started and at the end of pg_dump (I
   don't think pg_xlog_location_diff() is available in the older
   releases).
  
   postgres=# select pg_xlog_location_diff('4450/7A14F280',
   '4441/5E681F38')/(2^30);
  ?column?
   --
   60.1980484202504
 
  It'll be great to know what the WAL records modified...
 
 
  You would need something like xlogdump to decipher them. I quickly tried
  this and it seems to work against the 8.4 version that you are running.
  https://github.com/snaga/xlogdump
 
  Download the source code, compile and run it against one of the most
  recent
  WAL files in the cluster against which you ran pg_dump. You would need
  to
  set PATH to contain the pg_config of the server you are running. Please
  post
  the output.
 
  Thanks,
  Pavan
 
 

 Here you go:

 ## last WAL
 $ xlogdump -S /dbpool/data/pg_xlog/00014450007A

 /dbpool/data/pg_xlog/00014450007A:

 Unexpected page info flags 0003 at offset 0
 Skipping unexpected continuation record at offset 0
 ReadRecord: record with zero len at 17488/7A14F310
 Unexpected page info flags 0001 at offset 15
 Skipping unexpected continuation record at offset 15
 Unable to read continuation page?
  ** maybe continues to next segment **
 ---
 TimeLineId: 1, LogId: 17488, LogSegment: 122

 Resource manager stats:
   [0]XLOG  : 3 records, 120 bytes (avg 40.0 bytes)
  checkpoint: 3, switch: 0, backup end: 0
   [1]Transaction: 0 record, 0 byte (avg 0.0 byte)
  commit: 0, abort: 0
   [2]Storage   : 0 record, 0 byte (avg 0.0 byte)
   [3]CLOG  : 0 record, 0 byte (avg 0.0 byte)
   [4]Database  : 0 record, 0 byte (avg 0.0 byte)
   [5]Tablespace: 0 record, 0 byte (avg 0.0 byte)
   [6]MultiXact : 0 record, 0 byte (avg 0.0 byte)
   [7]Reserved 7: 0 record, 0 byte (avg 0.0 byte)
   [8]Reserved 8: 0 record, 0 byte (avg 0.0 byte)
   [9]Heap2 : 2169 records, 43380 bytes (avg 20.0 bytes)
   [10]Heap  : 0 record, 0 byte (avg 0.0 byte)
  ins: 0, upd/hot_upd: 0/0, del: 0
   [11]Btree : 0 record, 0 byte (avg 0.0 byte)
   [12]Hash  : 0 record, 0 byte (avg 0.0 byte)
   [13]Gin   : 0 record, 0 byte (avg 0.0 byte)
   [14]Gist  : 0 record, 0 byte (avg 0.0 byte)
   [15]Sequence  : 0 record, 0 byte (avg 0.0 byte)

 Backup block stats: 2169 blocks, 16551816 bytes (avg 7631.1 bytes)


 I think both my theories are holding up. The Heap2 resource manager is
 used only for vacuum freeze, lazy vacuum, or HOT prune. Given your access
 pattern, I bet it's the third activity that's kicking in on your database. You've
 got many pages with dead tuples, and they are getting cleaned at the first
 opportunity, which happens to be the pg_dump that's run immediately after the
 server restart. This is seen by all 2169 WAL records in the file being
 attributed to the Heap2 RM above.

 What's additionally happening is that each of these records is on a different
 heap page. The cleanup activity dirties those pages. Since each of these pages
 is being dirtied for the first time after a recent checkpoint and
 full_page_writes is turned ON, the entire page is backed up in the WAL record.
 You can see the exact number of backup blocks in the stats above.

 I don't think we have any mechanism to control or stop HOT from doing what
 it wants to do, unless you are willing to run a modified server for this
 reason. But you can at least bring down the WAL volume by turning
 full_page_writes OFF.

 Thanks,
 Pavan

 Great. Finally got some light on this. I'll disable full_page_writes
 on my next backup and will post back results tomorrow. Thanks.

Results with full_page_writes disabled:

pg_controldata right after cluster start:
Latest checkpoint location:   4469/B8352EB8
Prior checkpoint location:4469/B5EAE428
Latest checkpoint's REDO location:4469/B8352EB8
Latest checkpoint's TimeLineID:   1
Latest checkpoint's NextXID
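For reference, the ~60 GB figure from the query quoted earlier in the thread can be reproduced by hand. A sketch, assuming the pre-9.3 WAL addressing in which each xlogid spans 0xFF000000 bytes (255 segments of 16 MB; segment 0xFF is skipped), which is why the multiplier is not 2**32:

```python
def xlog_location_diff(loc1, loc2):
    """Byte distance between two pre-9.3 WAL locations of the form "XXX/YYY"."""
    # Assumption: 8.x/9.0-era WAL addressing, where each xlogid covers
    # 0xFF000000 bytes rather than a full 2**32.
    XLOGID_SIZE = 0xFF000000
    hi1, lo1 = (int(p, 16) for p in loc1.split("/"))
    hi2, lo2 = (int(p, 16) for p in loc2.split("/"))
    return (hi1 - hi2) * XLOGID_SIZE + (lo1 - lo2)

# The two locations from the thread:
diff = xlog_location_diff("4450/7A14F280", "4441/5E681F38")
print(diff / 2**30)  # ~60.198 GB, matching the server's answer
```

The agreement with the server's 60.1980484202504 is a useful sanity check that the 0xFF000000-per-xlogid layout is the right model for this release family.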

Re: [HACKERS] [BUGS] BUG #7521: Cannot disable WAL log while using pg_dump

2012-09-07 Thread Gezeala M . Bacuño II
adding pgsql-bugs list in case OP posts back.

On Fri, Sep 7, 2012 at 11:29 AM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
 (Adding -hackers. Did not realize it got dropped)

 On Fri, Sep 7, 2012 at 11:25 PM, Gezeala M. Bacuño II geze...@gmail.com
 wrote:

 On Fri, Sep 7, 2012 at 7:17 AM, Pavan Deolasee pavan.deola...@gmail.com
 wrote:
 
 
  On Fri, Sep 7, 2012 at 7:00 PM, Marie Bacuno II geze...@gmail.com
  wrote:
 
 
  On Sep 7, 2012, at 2:19, Pavan Deolasee pavan.deola...@gmail.com
  wrote:
 
 
   or have long running transactions ?
 
  Yes, but I don't think there were any when the snapshot was taken. Does the
  pg_xlog_location_diff() result from the latest and prior checkpoints upon
  start-up indicate the size of the replayed changes?
 
 
  That's the amount of additional WAL generated after you started the
  server.
 
 
  
    BTW, the following query returns ~60 GB. That's the amount of WAL
   written after the server was started and at the end of pg_dump (I
   don't think pg_xlog_location_diff() is available in the older
   releases).
  
   postgres=# select pg_xlog_location_diff('4450/7A14F280',
   '4441/5E681F38')/(2^30);
  ?column?
   --
   60.1980484202504
 
  It'll be great to know what the WAL records modified...
 
 
  You would need something like xlogdump to decipher them. I quickly tried
  this and it seems to work against the 8.4 version that you are running.
  https://github.com/snaga/xlogdump
 
  Download the source code, compile and run it against one of the most
  recent
  WAL files in the cluster against which you ran pg_dump. You would need
  to
  set PATH to contain the pg_config of the server you are running. Please
  post
  the output.
 
  Thanks,
  Pavan
 
 

 Here you go:

 ## last WAL
 $ xlogdump -S /dbpool/data/pg_xlog/00014450007A

 /dbpool/data/pg_xlog/00014450007A:

 Unexpected page info flags 0003 at offset 0
 Skipping unexpected continuation record at offset 0
 ReadRecord: record with zero len at 17488/7A14F310
 Unexpected page info flags 0001 at offset 15
 Skipping unexpected continuation record at offset 15
 Unable to read continuation page?
  ** maybe continues to next segment **
 ---
 TimeLineId: 1, LogId: 17488, LogSegment: 122

 Resource manager stats:
   [0]XLOG  : 3 records, 120 bytes (avg 40.0 bytes)
  checkpoint: 3, switch: 0, backup end: 0
   [1]Transaction: 0 record, 0 byte (avg 0.0 byte)
  commit: 0, abort: 0
   [2]Storage   : 0 record, 0 byte (avg 0.0 byte)
   [3]CLOG  : 0 record, 0 byte (avg 0.0 byte)
   [4]Database  : 0 record, 0 byte (avg 0.0 byte)
   [5]Tablespace: 0 record, 0 byte (avg 0.0 byte)
   [6]MultiXact : 0 record, 0 byte (avg 0.0 byte)
   [7]Reserved 7: 0 record, 0 byte (avg 0.0 byte)
   [8]Reserved 8: 0 record, 0 byte (avg 0.0 byte)
   [9]Heap2 : 2169 records, 43380 bytes (avg 20.0 bytes)
   [10]Heap  : 0 record, 0 byte (avg 0.0 byte)
  ins: 0, upd/hot_upd: 0/0, del: 0
   [11]Btree : 0 record, 0 byte (avg 0.0 byte)
   [12]Hash  : 0 record, 0 byte (avg 0.0 byte)
   [13]Gin   : 0 record, 0 byte (avg 0.0 byte)
   [14]Gist  : 0 record, 0 byte (avg 0.0 byte)
   [15]Sequence  : 0 record, 0 byte (avg 0.0 byte)

 Backup block stats: 2169 blocks, 16551816 bytes (avg 7631.1 bytes)


 I think both my theories are holding up. The Heap2 resource manager is
 used only for vacuum freeze, lazy vacuum, or HOT prune. Given your access
 pattern, I bet it's the third activity that's kicking in on your database. You've
 got many pages with dead tuples, and they are getting cleaned at the first
 opportunity, which happens to be the pg_dump that's run immediately after the
 server restart. This is seen by all 2169 WAL records in the file being
 attributed to the Heap2 RM above.

 What's additionally happening is that each of these records is on a different
 heap page. The cleanup activity dirties those pages. Since each of these pages
 is being dirtied for the first time after a recent checkpoint and
 full_page_writes is turned ON, the entire page is backed up in the WAL record.
 You can see the exact number of backup blocks in the stats above.

 I don't think we have any mechanism to control or stop HOT from doing what
 it wants to do, unless you are willing to run a modified server for this
 reason. But you can at least bring down the WAL volume by turning
 full_page_writes OFF.

 Thanks,
 Pavan

Great. Finally got some light on this. I'll disable full_page_writes
on my next backup and will post back results tomorrow. Thanks.




[HACKERS] pg_dump custom format specification

2012-01-21 Thread Guillermo M. Narvaja
Does anyone have some kind of specification or description of the pg_dump
custom format?

What I'm trying to do is a utility to remotely synchronize PostgreSQL
dumps, using lib_rsync to synchronize each table independently and
copying blobs only when their sizes differ.

I've done the same using the tar format, but I think most of the
features and optimizations, like parallel restore, are now only available
with the custom format.

I will write the script in Python, so I need to know how to parse and
write pg_dump archives in the custom format.
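A starting point for the parsing side: the custom-format archive opens with a fixed header, as laid out in the archiver source (src/bin/pg_dump/pg_backup_archiver.c). A hedged Python sketch that reads just that header — the field layout (magic "PGDMP", three version bytes, integer size, offset size, and a format byte) is taken from a reading of the 8.x-era sources and should be verified against the target version:

```python
import struct

def read_custom_header(path):
    # Hedged sketch of the custom-format fixed header (see ReadHead in
    # pg_backup_archiver.c); verify field layout against your pg_dump version.
    with open(path, "rb") as f:
        if f.read(5) != b"PGDMP":
            raise ValueError("not a pg_dump custom-format archive")
        vmaj, vmin, vrev, int_size, off_size, fmt = struct.unpack("6B", f.read(6))
        return {"version": (vmaj, vmin, vrev),
                "intSize": int_size, "offSize": off_size, "format": fmt}
```

Running `pg_restore -l` against the same file is a cheap cross-check that the header was parsed correctly.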

Thanks in advance.

-- 
Guillermo M. Narvaja
Lambda Sistemas S.R.L.
www.fierro-soft.com.ar
Tel: (5411) 4139-0493/4
Cel: (5411) 15-6783-4435
Email: guillermo.narv...@fierro-soft.com.ar
MSN: guillermo_narv...@hotmail.com
Skype: guillermonarvaja
Lavalleja 519 1er Piso - Ciudad de Buenos Aires - Argentina




Re: [HACKERS] POSIX shared memory redux

2011-04-13 Thread A . M .

On Apr 13, 2011, at 2:06 AM, Tom Lane wrote:

 A.M. age...@themactionfaction.com writes:
 On Apr 11, 2011, at 7:13 PM, Tom Lane wrote:
 Robert Haas robertmh...@gmail.com writes:
 I mean I'm not convinced that fcntl() locking will be as reliable.
 
 I'm not either.  Particularly not on NFS.
 
 Is there an example of a recent system where fcntl is broken (ignoring NFS)?
 
 Well, the fundamental point is that ignoring NFS is not the real
 world.  We can't tell people not to put data directories on NFS,
 and even if we did tell them not to, they'd still do it.  And NFS
 locking is not trustworthy, because the remote lock daemon can crash
 and restart (forgetting everything it ever knew) while your own machine
 and the postmaster remain blissfully awake.
 
 None of this is to say that an fcntl lock might not be a useful addition
 to what we do already.  It is to say that fcntl can't just replace what
 we do already, because there are real-world failure cases that the
 current solution handles and fcntl alone wouldn't.


The goal of this patch is to eliminate SysV shared memory, not to implement 
NFS-capable locking which, as you point out, is virtually impossible.

As far as I can tell, in the worst case, my patch does not change how 
postgresql handles the NFS case. SysV shared memory won't work across NFS, so 
that interlock won't trigger, leaving postgresql looking at a lock file 
containing the PID of a process on another machine, so that won't catch either. 
This patch does not alter the lock file semantics, but merely augments the file 
with file locking.

At least with this patch, there is a chance the lock might work across NFS. In 
the best case, it can allow for shared-storage postgresql failover, which is a 
new feature.

Furthermore, there is an improvement in shared memory handling in that it is 
unlinked immediately after creation, so only the postmaster and its children 
have access to it (through file descriptor inheritance). This means shared 
memory cannot be stomped on by any other process.

Considering that possibly working NFS locking is a side-effect of this patch 
and not its goal and, in the worst possible scenario, it doesn't change current 
behavior, I don't see how this can be a ding against this patch.
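For illustration, the augmentation being discussed amounts to something like the following, sketched here with Python's fcntl wrapper rather than the patch's C, and with a placeholder file name:

```python
import fcntl
import os

def try_exclusive_lock(path):
    # Take a non-blocking, exclusive fcntl-style lock on the lock file.
    # Locally this reliably detects a live peer; over NFS it depends on the
    # remote lock daemon, which is exactly the failure mode discussed above.
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd          # the lock is held until fd is closed
    except OSError:
        os.close(fd)
        return None        # another process holds the lock

fd = try_exclusive_lock("/tmp/demo.lock.pid")  # placeholder path
print("acquired" if fd is not None else "held elsewhere")
```

Note that POSIX record locks are per-process, so a second attempt from the same process succeeds; the check only guards against a *different* live process.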

Cheers,
M


[HACKERS] Calling Matlab function from Postgres

2011-04-11 Thread Susan M Farley
I'm trying to call MATLAB functions from PostgreSQL. I was trying to use Joshua 
Kaplan's Java MATLAB control. I installed my jar file, which in turn calls MATLAB 
to run my algorithm, but I get the error message ERROR:  
java.lang.NoClassDefFoundError: matlabcontrol/RemoteMatlabProxyFactory when I 
call the Java function. I saw that DerTech LLC developed a MATLAB interface 
for PostgreSQL, but their web site seems to be gone. Does anyone either have an 
idea of how to solve my error or have a copy of the code that DerTech developed 
for the interface?

Thank you,
Susan



[HACKERS] POSIX shared memory redux

2010-11-13 Thread A . M .
The goal of this work is to address all of the shortcomings of previous POSIX 
shared memory patches as pointed out mostly by Tom Lane.

Branch: 
http://git.postgresql.org/gitweb?p=users/agentm/postgresql.git;a=shortlog;h=refs/heads/posix_shmem
Main file: 
http://git.postgresql.org/gitweb?p=users/agentm/postgresql.git;a=blob;f=src/backend/port/posix_shmem.c;h=da93848d14eeadb182d8bf1fe576d741ae5792c3;hb=refs/heads/posix_shmem

Design goals:
1) ensure that shared memory creation collisions are impossible 
2) ensure that shared memory access collisions are impossible
3) ensure proper shared memory cleanup after backend and postmaster close
4) minimize API changes
http://archives.postgresql.org/pgsql-patches/2007-02/msg00527.php
http://archives.postgresql.org/pgsql-patches/2007-02/msg00558.php

This patch addresses the above goals and offers some benefits over SysV shared 
memory:

1) no kern.sysv management (one documentation page with platform-specific help 
can disappear)
2) shared memory allocation limited only by mmap usage
3) shared memory regions are completely cleaned up when the postmaster and all 
of its children are exited or killed for any reason (including SIGKILL)
4) shared memory creation race conditions or collisions between postmasters or 
backends are impossible
5) after postmaster startup, the postmaster becomes the sole arbiter of which 
other processes are granted access to the shared memory region 
6) mmap and munmap can be used on the shared memory region- this may be useful 
for offering the option to expand the memory region dynamically

The design goals are accomplished by a simple change in shared memory creation: 
after shm_open, the region name is immediately shm_unlink'd. Because POSIX 
shared memory relies on file descriptors, the shared memory is not deallocated 
in the kernel until the last referencing file descriptor is closed (in this 
case, on process exit). The postmaster then becomes the sole arbiter of passing 
the shared memory file descriptor (either through children or through file 
descriptor passing, if necessary).
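The same kernel behavior is easy to see from userland. A small Python illustration using multiprocessing.shared_memory (which wraps shm_open/shm_unlink) — purely an analogy for the C code on the branch, not part of the patch:

```python
from multiprocessing import shared_memory

# Create a POSIX shared memory segment, then immediately remove its name.
shm = shared_memory.SharedMemory(create=True, size=4096)
name = shm.name
shm.unlink()                      # name is gone; segment lives while mapped

shm.buf[:5] = b"hello"            # the existing mapping is still fully usable
data = bytes(shm.buf[:5])

# No other process can attach by name anymore:
try:
    shared_memory.SharedMemory(name=name)
    reopened = True
except FileNotFoundError:
    reopened = False

print(data, "reopened:", reopened)
shm.close()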

The patch is a reworked version of Chris Marcellino cmarcell...@apple.com's 
patch.

Details:

1) the shared memory name is based on getpid()- this ensures that no two 
starting postmasters (or other processes) will attempt to acquire the same 
shared memory segment.
2) the shared memory segment is created and immediately unlinked, preventing 
outside access to the shared memory region
3) the shared memory file descriptor is passed to backends via static int file 
descriptor (normal file descriptor inheritance)
* perhaps there is a better location to store the file descriptor- 
advice welcomed.
4) shared memory segment detach occurs when the process exits (kernel-based 
cleanup instead of scheduled in-process clean up)

Additional notes:
The feature whereby arbitrary postgres user processes could connect to the 
shared memory segment has been removed with this patch. If this is a desirable 
feature (perhaps for debugging or performance tools), this could be added by 
implementing a file descriptor passing server in the postmaster which would use 
SCM_RIGHTS control message passing to a) verify that the remote process is 
running as the same user as the postmaster b) pass the shared memory file 
descriptor to the process. I am happy to implement this, if required.

I am happy to continue work on this patch if the pg-hackers deem it worthwhile. 
Thanks!

Cheers,
M





[HACKERS] Discarding the resulting rows

2010-04-26 Thread Murali M. Krishna
Hello Hackers:

Two questions.

1.

I would like to execute queries such as 

select * from part and time the query. But I want to ignore how much time is 
taken for printing the result to a file or the screen.

Basically, I would like to discard the result rows after doing all the work 
required to execute the query.

I looked at the documentation and I saw something about using the keyword 
PERFORM rather than SELECT.

I tried PERFORM * from part; 

But this gave me a syntax error.

Please let me know how this can be done.

2.

How do I clear the buffer caches between two query runs?
I believe this is not possible in Postgres. Can someone please confirm this or 
tell me how it may be done.

Thanks,

Murali.



-
Please visit NumberFest.com for educational number puzzles & mind exercises for 
all ages! And please tell your friends about it. Thank You!



  

Re: [HACKERS] Discarding the resulting rows

2010-04-26 Thread Murali M. Krishna


Hello All:

The optimizer assumes that data is disk resident when computing the cost of a 
query plan.
I am trying to ascertain what the correlation is between times and costs of 
some benchmark queries to see how good the cost model is.

Since I have more than 100 queries, it would be painful to stop and start the 
server each time to force all the buffer pages out. Also, some of these queries 
have a large number of result rows. I don't want the time to be skewed by the 
output time.

Cheers,

Murali.


--- On Mon, 4/26/10, Tom Lane t...@sss.pgh.pa.us wrote:

From: Tom Lane t...@sss.pgh.pa.us
Subject: Re: [HACKERS] Discarding the resulting rows
To: Jaime Casanova jcasa...@systemguards.com.ec
Cc: Robert Haas robertmh...@gmail.com, Kevin Grittner 
kevin.gritt...@wicourts.gov, pgsql-hackers@postgresql.org, Murali M. 
Krishna murali1...@yahoo.com
Date: Monday, April 26, 2010, 1:25 PM

Jaime Casanova jcasa...@systemguards.com.ec writes:
 On Mon, Apr 26, 2010 at 3:03 PM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Apr 26, 2010 at 3:36 PM, Kevin Grittner
 kevin.gritt...@wicourts.gov wrote:
 I would use EXPLAIN ANALYZE SELECT ...
 
 There's some overhead to that, of course.

 he could see the actual time in the very first row of the EXPLAIN
 ANALYZE output... isn't that a value closer to what the OP is
 looking for?

Well, it will include the instrumentation overhead of EXPLAIN ANALYZE,
which can be nontrivial depending on your hardware and the query plan.

On the other hand, EXPLAIN skips the cost of converting the result data
to text form, not to mention the network overhead of delivering it; so
in another sense it's underestimating the work involved.

I guess the real question is exactly what the OP is hoping to measure
and why.
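One client-side approximation of "execution and row transfer, but no formatting or printing" is simply to fetch rows and throw them away while timing. A generic sketch; the `range` iterator below is only a stand-in for a real driver cursor streaming `SELECT * FROM part`:

```python
import time

def time_discarding(rows):
    """Consume an iterator of result rows without keeping or printing them."""
    start = time.perf_counter()
    count = 0
    for _ in rows:        # each row is fetched, counted, and dropped
        count += 1
    return count, time.perf_counter() - start

# Stand-in for a database cursor; a real one would stream the query result.
count, elapsed = time_discarding(iter(range(100000)))
print(count, "rows in", elapsed, "seconds")
```

Unlike EXPLAIN ANALYZE, this includes text conversion and network delivery but avoids the per-node instrumentation overhead Tom mentions, so it answers a slightly different question.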

            regards, tom lane




  

[HACKERS] debugger question

2010-04-12 Thread Murali M. Krishna
Hello:

I am brand new to Postgresql.

I ran the following commands.
./configure
gmake
su
gmake install
adduser postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test

I would like to start using gdb.

What is the simplest way of doing this? I read the instructions
on this page

http://sites.google.com/a/cs.berkeley.edu/cs186-fa09/home/assignments/assignment2/pggdb

, but this is what I get.


(gdb) attach 1731
Attaching to program: /usr/local/pgsql/bin/postgres, process 1731
ptrace: Operation not permitted.
(gdb) break cost_seqscan
Breakpoint 1 at 0x81cdf97: file costsize.c, line 163.
(gdb) c
The program is not being run.

-

Please help.

Thanks.

MMK.



  

Re: [HACKERS] debugger question

2010-04-12 Thread Murali M. Krishna
The OS is

Fedora 12.


--- On Mon, 4/12/10, to...@tuxteam.de to...@tuxteam.de wrote:

From: to...@tuxteam.de to...@tuxteam.de
Subject: Re: [HACKERS] debugger question
To: Murali M. Krishna murali1...@yahoo.com
Cc: pgsql-hackers@postgresql.org
Date: Monday, April 12, 2010, 9:28 PM

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mon, Apr 12, 2010 at 08:31:38PM -0700, Murali M. Krishna wrote:
 Hello:
 
 I am brand new to Postgresql.
 
 I ran the following commands.
 ./configure
 gmake
 su
 gmake install
 adduser postgres
 mkdir /usr/local/pgsql/data
 chown postgres /usr/local/pgsql/data
 su - postgres
 /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
 /usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
 /usr/local/pgsql/bin/createdb test
 /usr/local/pgsql/bin/psql test
 
 I would like to start using gdb.
 
 What is the simplest way of doing this? I read the instructions
 on this page
 
 http://sites.google.com/a/cs.berkeley.edu/cs186-fa09/home/assignments/assignment2/pggdb
 
 , but this is what I get.
 
 
 (gdb) attach 1731
 Attaching to program: /usr/local/pgsql/bin/postgres, process 1731
 ptrace: Operation not permitted.
 (gdb) break cost_seqscan
 Breakpoint 1 at 0x81cdf97: file costsize.c, line 163.
 (gdb) c
 The program is not being run.

Hm. Seems you got the right PID (gdb finds the executable after all).
Are you perhaps running under SELinux? (I just boldly assumed some
GNU/Linux.) Which distribution, and which kernel version? (There seems to be
a bug in 2.4-ish Linux kernels which manifests itself like that, but
that's quite a while ago.)

Next time, please tell us what OS is under you (although it might be fun
to watch people make wild guesses :)

Regards
- -- tomás
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFLw/LTBcgs9XrR2kYRArYMAJ9JHu/Sl5JWSAv77om9HXHIzZtrDACZAWWu
fpk1yLbio8KOcWjTEWCXrK4=
=z0qo
-END PGP SIGNATURE-




  

Re: [HACKERS] remove contrib/xml2

2010-02-17 Thread M Z
Hi Alvaro,

I followed your instructions but applied the patch to 8.4.2, as I found it
crashes there too. It looks like the server still crashes in the same way. Can you or
anyone give me some ideas on how to fix this bug?

==
conifer=# CREATE TABLE xpath_test (id integer NOT NULL, t xml);
CREATE TABLE
conifer=# INSERT INTO xpath_test VALUES (1, '<doc><int>1</int></doc>');
INSERT 0 1
conifer=# SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int',
'true') as t(id int4);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!
==

Best,
M Z



  CREATE TABLE xpath_test (id integer NOT NULL, t xml);
  INSERT INTO xpath_test VALUES (1, '<doc><int>1</int></doc>');
  SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int', 'true')
  as t(id int4);

  Hmm.  Well, all I know is that the first thing I tried crashed the
 server.



 This trivial patch lingering on my system fixes this crasher (this is
 for the 8.3 branch).  It makes the "problem in alloc set ExprContext"
 warning show up instead.

 There are still lotsa other holes, but hey, this is a start ...

 Index: contrib/xml2/xpath.c
 ===
 RCS file: /home/alvherre/Code/cvs/pgsql/contrib/xml2/xpath.c,v
 retrieving revision 1.16.2.1
 diff -c -p -r1.16.2.1 xpath.c
 *** contrib/xml2/xpath.c26 Mar 2008 01:19:11 -  1.16.2.1
 --- contrib/xml2/xpath.c27 Jan 2010 15:30:56 -
 *** xpath_table(PG_FUNCTION_ARGS)
 *** 793,798 
 --- 793,801 
   */
pgxml_parser_init();

 +   PG_TRY();
 +   {
 +
/* For each row i.e. document returned from SPI */
 for (i = 0; i < proc; i++)
{
 *** xpath_table(PG_FUNCTION_ARGS)
 *** 929,934 
 --- 932,944 
if (xmldoc)
pfree(xmldoc);
}
 +   }
 +   PG_CATCH();
 +   {
 +   xmlCleanupParser();
 +   PG_RE_THROW();
 +   }
 +   PG_END_TRY();

xmlCleanupParser();
  /* Needed to flag completeness in 7.3.1. 7.4 defines it as a no-op. */

 --
 Alvaro Herrera
 http://www.CommandPrompt.com/
 The PostgreSQL Company - Command Prompt, Inc.

 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers



[HACKERS] Error when building postgresql with contrib functions

2010-02-15 Thread M Z
Hi

I am trying to build PostgreSQL with contrib functions from source code
checked out from CVS, version 8.3.8, but I am getting an error:

==
conifer=# SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int',
'true') as t(id int4);
ERROR:  function xpath_table(unknown, unknown, unknown, unknown, unknown)
does not exist
LINE 1: SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int...
  ^
HINT:  No function matches the given name and argument types. You might need
to add explicit type casts.
==

Looks like the contrib functions have not been added in.


OS Ubuntu Karmic.

I checked out source code from branch 8.3.8. Before building postgresql,
$ sudo apt-get install libreadline6-dev zlib1g-dev libxml2-dev libxslt1-dev
bison flex

libreadline6-dev needs libncurses5-dev as a dependency, so libncurses5-dev was
also installed.

The installation steps I performed:
$ ./configure --with-libxml --with-libxslt
$ make
$ make check
$ sudo make install
$ export PGDATA=/data/pgsql/data
$ initdb
$ createdb conifer
$ pg_ctl start
$ psql

everything looks fine, but I got an error when doing:

conifer=# CREATE TABLE xpath_test (id integer NOT NULL, t xml);
CREATE TABLE
conifer=# INSERT INTO xpath_test VALUES (1, '<doc><int>1</int></doc>');
INSERT 0 1
conifer=# SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int',
'true') as t(id int4);
ERROR:  function xpath_table(unknown, unknown, unknown, unknown, unknown)
does not exist
LINE 1: SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int...
  ^
HINT:  No function matches the given name and argument types. You might need
to add explicit type casts.


Thank you very much for your help.

Best,
M Z


Re: [HACKERS] CVS checkout source code for different branches

2010-02-09 Thread M Z
Hi Andrew and all,

I am still struggling with the branch repositories. I tried to check out from
the branch repositories but got a "no such repository" error:

$ cvs -z3 -d  
:pserver:anoncvs:passw...@anoncvs.postgresql.org:/projects/REL8_4_STABLE
co -P pgsql
/projects/REL8_4_STABLE: no such repository

$ cvs -z3 -d 
:pserver:anoncvs:passw...@anoncvs.postgresql.org:/projects/REL8_3_STABLE
co -P pgsql
/projects/REL8_3_STABLE: no such repository

$ cvs -z3 -d :pserver:anoncvs:passw...@anoncvs.postgresql.org:/REL8_4_STABLE
co -P pgsql
/REL8_4_STABLE: no such repository

$ cvs -z3 -d :pserver:anoncvs:passw...@anoncvs.postgresql.org:/REL8_3_STABLE
co -P pgsql
/REL8_3_STABLE: no such repository

What are the correct repositories for these branches? Is there a way I can
list these branch repositories, and how?

Many thanks,
M Z

On Mon, Feb 8, 2010 at 11:56 PM, Andrew Dunstan and...@dunslane.net wrote:


 The only sane things to check out apart from HEAD are normally the STABLE
 branches. For release m.n those are always called RELm_n_STABLE. You can
 also get the tag set for a specific release. Those are called RELm_n_o for
 m.n.o releases.

 If you look at the output for cvs log configure.in you can see near the
 top a list of tag sets under the heading symbolic names.

 HTH.



 M Z wrote:

 For example, how can I list all the branches for postgresql 8.3 (and 8.4)?
 Now I can checkout code using:
 cvs -z3 -d :pserver:anoncvs:passw...@anoncvs.postgresql.org:/projects/cvsroot
 co -P pgsql

 But I don't know which version it is, and I want to get code from some
 postgresql 8.3 and 8.4 branches but I don't know their cvsroot

 Thanks
 M Z

 On Mon, Feb 8, 2010 at 11:04 PM, M Z jm80...@gmail.com mailto:
 jm80...@gmail.com wrote:

Hi Andrew,

Could you please give a little more detail how I can find
different CVS branches?

Thanks,
M Z



On Mon, Feb 8, 2010 at 10:38 PM, Andrew Dunstan
and...@dunslane.net mailto:and...@dunslane.net wrote:



M Z wrote:

Hi,

I am trying to checkout code from different branches (such
as 8.3, 8.4).

I found a few ways to checkout code from CVS:
1. webCVS: http://anoncvs.postgresql.org/cvsweb.cgi/

2. cvs -z3 -d
:pserver:anoncvs:passw...@anoncvs.postgresql.org:
 /projects/cvsroot
co -P pgsql

3. $ rsync --progress -avzCH --delete
anoncvs.postgresql.org::pgsql-cvs $CVSROOT

However, how can I checkout code from different branches
(such as 8.3, 8.4)?



CVS is documented here, among other places:
http://ximbiot.com/cvs/manual/cvs-1.11.20/cvs.html

To check out a particular branch such as REL8_4_STABLE, use
the -r option on the checkout command:

  cvs checkout -r branchname

cheers

andrew









Re: [HACKERS] CVS checkout source code for different branches

2010-02-09 Thread M Z
Finally, I tried:

$ cvs -z3 -d :pserver:anoncvs:passw...@anoncvs.postgresql.org:/projects/cvsroot
co -r REL8_3_STABLE -P pgsql

I checked the file configure.in. Around line 20, there is the line:
 AC_INIT([PostgreSQL], [8.3.9], [pgsql-b...@postgresql.org])

Does that indicate version 8.3.9?

But still, is there a testing/development version for 8.3? I remember Alvaro
made a patch to contrib/xml2/xpath.c. What version is that, and how can I get
it? Is there a way I can list all branches/versions on the CVS
server?


***
Index: contrib/xml2/xpath.c
===
RCS file: /home/alvherre/Code/cvs/pgsql/contrib/xml2/xpath.c,v
retrieving revision 1.16.2.1
diff -c -p -r1.16.2.1 xpath.c
*** contrib/xml2/xpath.c26 Mar 2008 01:19:11 -  1.16.2.1
--- contrib/xml2/xpath.c27 Jan 2010 15:30:56 -
*** xpath_table(PG_FUNCTION_ARGS)
*** 793,798 
--- 793,801 
  */
   pgxml_parser_init();

+   PG_TRY();
+   {
+
   /* For each row i.e. document returned from SPI */
   for (i = 0; i < proc; i++)
   {
*** xpath_table(PG_FUNCTION_ARGS)
*** 929,934 
--- 932,944 
   if (xmldoc)
   pfree(xmldoc);
   }
+   }
+   PG_CATCH();
+   {
+   xmlCleanupParser();
+   PG_RE_THROW();
+   }
+   PG_END_TRY();

   xmlCleanupParser();
 /* Needed to flag completeness in 7.3.1. 7.4 defines it as a no-op. */
***


Re: [HACKERS] CVS checkout source code for different branches

2010-02-09 Thread M Z
Thanks Robert,

Your reply helps a lot. Just after sending my post, I found the way to
list all branches on the CVS server.

Best,
M Z

On Tue, Feb 9, 2010 at 10:43 PM, Robert Haas robertmh...@gmail.com wrote:

 On Tue, Feb 9, 2010 at 10:35 PM, M Z jm80...@gmail.com wrote:
  Final, I tried:
 
  $ cvs -z3 -d
  :pserver:anoncvs:passw...@anoncvs.postgresql.org:/projects/cvsroot co -r
  REL8_3_STABLE -P pgsql
 
  Checked file configure.in. Around line 20, there is line:
   AC_INIT([PostgreSQL], [8.3.9], [pgsql-b...@postgresql.org])
 
  Not sure that indicates version 8.3.9?
 
  But still, is there a testing/developing version for 8.3?

 REL8_3_STABLE is the head of the stable branch for 8.3.  It says 8.3.9
 right now because 8.3.9 is the last release that was stamped off that
 branch, but what you checked out actually has any changes backpatched
 since then on it as well.

  I remember Alvaro
  made a patch to contrib/xml2/xpath.c. What is that version? and How can
 get
  this version?

 That patch hasn't been applied yet.  You have to download it from his
 message and apply it to your local copy using the patch program.

  Is there a way I can list all branches / versions in CVS server?

 Andrew already answered that question in his previous message: use cvs
 log on a file that's been around for a long time, like configure.in.

 It sounds like you need to find some documentation for CVS and read
 it, and/or find a CVS help mailing list and post to it.

 ...Robert



[HACKERS] CVS checkout source code for different branches

2010-02-08 Thread M Z
Hi,

I am trying to checkout code from different branches (such as 8.3, 8.4).

I found a few ways to checkout code from CVS:
1. webCVS: http://anoncvs.postgresql.org/cvsweb.cgi/

2. cvs -z3 -d :pserver:anoncvs:passw...@anoncvs.postgresql.org:/projects/cvsroot
co -P pgsql

3. $ rsync --progress -avzCH --delete anoncvs.postgresql.org::pgsql-cvs
$CVSROOT

However, how can I checkout code from different branches (such as 8.3, 8.4)?

Thank you.
M Z


Re: [HACKERS] CVS checkout source code for different branches

2010-02-08 Thread M Z
Hi Andrew,

Could you please give a little more detail how I can find different CVS
branches?

Thanks,
M Z


On Mon, Feb 8, 2010 at 10:38 PM, Andrew Dunstan and...@dunslane.net wrote:



 M Z wrote:

 Hi,

 I am trying to checkout code from different branches (such as 8.3, 8.4).

 I found a few ways to checkout code from CVS:
 1. webCVS: http://anoncvs.postgresql.org/cvsweb.cgi/

 2. cvs -z3 -d 
 :pserver:anoncvs:passw...@anoncvs.postgresql.org:/projects/cvsroot
 co -P pgsql

 3. $ rsync --progress -avzCH --delete anoncvs.postgresql.org::pgsql-cvs
 $CVSROOT

 However, how can I checkout code from different branches (such as 8.3,
 8.4)?



 CVS is documented here, among other places: 
 http://ximbiot.com/cvs/manual/cvs-1.11.20/cvs.html

 To check out a particular branch such as REL8_4_STABLE, use the -r option
 on the checkout command:

   cvs checkout -r branchname

 cheers

 andrew







Re: [HACKERS] CVS checkout source code for different branches

2010-02-08 Thread M Z
For example, how can I list all the branches for postgresql 8.3 (and 8.4)?
Now I can checkout code using:
cvs -z3 -d :pserver:anoncvs:passw...@anoncvs.postgresql.org:/projects/cvsroot
co -P pgsql

But I don't know which version it is, and I want to get code from the
postgresql 8.3 and 8.4 branches but I don't know their branch names

Thanks
M Z

On Mon, Feb 8, 2010 at 11:04 PM, M Z jm80...@gmail.com wrote:

 Hi Andrew,

 Could you please give a little more detail how I can find different CVS
 branches?

 Thanks,
 M Z



 On Mon, Feb 8, 2010 at 10:38 PM, Andrew Dunstan and...@dunslane.net wrote:



 M Z wrote:

 Hi,

 I am trying to checkout code from different branches (such as 8.3, 8.4).

 I found a few ways to checkout code from CVS:
 1. webCVS: http://anoncvs.postgresql.org/cvsweb.cgi/

 2. cvs -z3 -d 
 :pserver:anoncvs:passw...@anoncvs.postgresql.org:/projects/cvsroot
 co -P pgsql

 3. $ rsync --progress -avzCH --delete anoncvs.postgresql.org::pgsql-cvs
 $CVSROOT

 However, how can I checkout code from different branches (such as 8.3,
 8.4)?



 CVS is documented here, among other places: 
 http://ximbiot.com/cvs/manual/cvs-1.11.20/cvs.html

 To check out a particular branch such as REL8_4_STABLE, use the -r option
 on the checkout command:

   cvs checkout -r branchname

 cheers

 andrew








Re: [HACKERS] remove contrib/xml2

2010-02-06 Thread M Z
The thing is, why doesn't it crash on 8.3.8 but does crash on 8.4.2? Any
idea? Was a patch applied to 8.3 but not to 8.4.2?

Thanks,
M Z

On Fri, Feb 5, 2010 at 1:45 PM, Robert Haas robertmh...@gmail.com wrote:

 On Wed, Feb 3, 2010 at 8:49 AM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
  Robert Haas escribió:
  On Mon, Feb 1, 2010 at 5:23 PM, Andrew Dunstan and...@dunslane.net
 wrote:
   Robert Haas wrote:
   (2) add a very, very large warning that this will crash if you do
   almost anything with it.
  
   I think that's an exaggeration. Certain people are known to be using
 it
   quite successfully.
 
  Hmm.  Well, all I know is that the first thing I tried crashed the
 server.
 
  CREATE TABLE xpath_test (id integer NOT NULL, t xml);
  INSERT INTO xpath_test VALUES (1, '<doc><int>1</int></doc>');
  SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int', 'true')
  as t(id int4);
 
  This trivial patch lingering on my system fixes this crasher (this is
  for the 8.3 branch).  It makes the "problem in alloc set ExprContext"
  warning show up instead.
 
  There are still lotsa other holes, but hey, this is a start ...

 Interestingly M Z found he couldn't reproduce this crash on 8.3.  Can
 you?  If so, +1 for applying this and backpatching it as far as make
 sense.

 ...Robert

 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers



Re: [HACKERS] remove contrib/xml2

2010-02-04 Thread M Z
I did some tests following Robert's test cases on both postgresql 8.4.2-0ubu
and 8.3.8-1, OS: Ubuntu Karmic.

1) 1st test case: it doesn't crash on 8.3.8 but crashes on 8.4.2;
2) 2nd test case: both 8.3.8 and 8.4.2 are fine, with no warning (different
from Robert's test?);
3) 3rd test case (and a modified test case for 8.3.8): both 8.3.8 and 8.4.2
give incorrect results, same as Robert's test (8.5 beta?);


*
1st test case:
==
8.3.8
==
conifer=# CREATE TABLE xpath_test (id integer NOT NULL, t xml);
CREATE TABLE
conifer=# INSERT INTO xpath_test VALUES (1, '<doc><int>1</int></doc>');
INSERT 0 1
conifer=# SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int',
'true') as t(id int4);
 id
----
  1
(1 row)

==
8.4.2
==
conifer=# CREATE TABLE xpath_test (id integer NOT NULL, t xml);
CREATE TABLE
conifer=# INSERT INTO xpath_test VALUES (1, '<doc><int>1</int></doc>');
INSERT 0 1
conifer=# SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int',
'true') as t(id int4);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!
*

*
2nd test case
==
8.3.8 and 8.4.2
==
conifer=# CREATE TABLE xpath_test (id integer NOT NULL, t text);
CREATE TABLE
conifer=# INSERT INTO xpath_test VALUES (1, '<doc><int>1</int></doc>');
INSERT 0 1
conifer=# SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int',
'true') as t(id int4);
 id
----
  1
(1 row)
*

*
3rd test case
==
8.3.8 and 8.4.2
==
conifer=# CREATE TABLE xpath_test (id integer NOT NULL, t text);
CREATE TABLE
conifer=# INSERT INTO xpath_test VALUES (1, '<rowlist><row a="1"/><row a="2"
b="oops"/></rowlist>');
INSERT 0 1
conifer=# SELECT * FROM xpath_table('id', 't', 'xpath_test',
'/rowlist/row/@a|/rowlist/row/@b', 'true') as t(id int4, a text, b text);
 id | a |  b
----+---+------
  1 | 1 | oops
  1 | 2 |
(2 rows)


==
8.3.8 (modified 3rd test case, because 8.3.8 won't crash using xml)
==
conifer=# CREATE TABLE xpath_test (id integer NOT NULL, t xml);
CREATE TABLE
conifer=# INSERT INTO xpath_test VALUES (1, '<rowlist><row a="1"/><row a="2"
b="oops"/></rowlist>');
INSERT 0 1
conifer=# SELECT * FROM xpath_table('id', 't', 'xpath_test',
'/rowlist/row/@a|/rowlist/row/@b', 'true') as t(id int4, a text, b text);
 id | a |  b
----+---+------
  1 | 1 | oops
  1 | 2 |
(2 rows)
*


For the 1st test case, I am not sure if some patches applied to 8.3 haven't
been applied to 8.4, or whether other reasons cause the difference between
8.3.8 and 8.4.2.

Any ideas or comments?

Thank you,
M Z

On Mon, Feb 1, 2010 at 8:44 PM, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Feb 1, 2010 at 5:23 PM, Andrew Dunstan and...@dunslane.net
 wrote:
  Robert Haas wrote:
  (2) add a very, very large warning that this will crash if you do
  almost anything with it.
 
  I think that's an exaggeration. Certain people are known to be using it
  quite successfully.

 Hmm.  Well, all I know is that the first thing I tried crashed the server.

 CREATE TABLE xpath_test (id integer NOT NULL, t xml);
 INSERT INTO xpath_test VALUES (1, '<doc><int>1</int></doc>');
 SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int', 'true')
 as t(id int4);

 It doesn't crash if you change the type of t from xml to text; instead
 you get a warning about some sort of memory allocation problem.

 DROP TABLE xpath_test;
 CREATE TABLE xpath_test (id integer NOT NULL, t text);
 INSERT INTO xpath_test VALUES (1, '<doc><int>1</int></doc>');
 SELECT * FROM xpath_table('id', 't', 'xpath_test', '/doc/int', 'true')
 as t(id int4);

 yields:

 WARNING:  problem in alloc set ExprContext: bogus aset link in block
 0x14645e0, chunk 0x14648b8

 And then there's this (see also bug #5285):

 DELETE FROM xpath_test;
 INSERT INTO xpath_test VALUES (1, '<rowlist><row a="1"/><row a="2"
 b="oops"/></rowlist>');
 SELECT * FROM xpath_table('id', 't', 'xpath_test',
 '/rowlist/row/@a|/rowlist/row/@b', 'true') as t(id int4, a text, b
 text);

 which yields an answer that is, at least, extremely surprising, if not
 flat-out wrong:

  id | a |  b
 ----+---+------
  1 | 1 | oops
  1 | 2 |
 (2 rows)

 Bugs #4953 and #5079 can also be reproduced in CVS HEAD.  Both crash the
 server.

 ...Robert




Re: [HACKERS] contrib\xml2 package's xpath_table function in PostgreSQL

2010-02-01 Thread M Z
Is there a way to fix it so that those functions are usable in 8.4 without
crashing the server?

Thanks,
M Z

On Mon, Feb 1, 2010 at 10:50 AM, Robert Haas robertmh...@gmail.com wrote:

 The functions haven't actually been removed in 8.4, in spite of the
 deprecation notice.  But it's very easy to use them in a way that
 crashes the entire server, so you're playing with fire.  :-(

 ...Robert




Re: [HACKERS] contrib\xml2 package's xpath_table function in PostgreSQL

2010-02-01 Thread M Z
I am very interested in doing it. However, I am new to postgresql. Could you
or anyone here please give me some hints: which way should I go, and which
part of the code should I focus on to fix it?

Thanks,
M Z

On Mon, Feb 1, 2010 at 1:23 PM, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Feb 1, 2010 at 1:20 PM, M Z jm80...@gmail.com wrote:
  Is there a way to fix it so that those functions are usable in 8.4
 without
  crashing the server?

 Nobody seems to be interested enough to figure that out and submit a
 patch to fix it.  If someone does, I think it would have a good chance
 of being accepted.

 ...Robert



Re: [HACKERS] Listen / Notify rewrite

2009-11-11 Thread A . M .


On Nov 11, 2009, at 9:28 PM, Merlin Moncure wrote:

On Wed, Nov 11, 2009 at 5:48 PM, A.M.  
age...@themactionfaction.com wrote:
At least with this new payload, I can set the payload to the  
transaction ID
and be certain that all the notifications I sent are processed  
(and in order

even!) but could you explain why the coalescing is still necessary?


Christmas comes early this year! :-).

three reasons:
*) it works that way now...a lot of people use this feature for all
kinds of subtle things and the behavior should change as little as
possible
*) legacy issues aside, I think it's generally better behavior (how
many times do you need to be tapped on the shoulder?)
*) since you can trivially differentiate it (using xid, sequence,
etc), what's the fuss?


Except for the fact that the number of times a notification occurred  
may be valuable information.


I thought of a compromise: add the number of times a notification was  
generated (coalesced count+1) to the callback data. That would  
satisfy any backwards compatibility concerns and my use case too!


Cheers,
M
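The compromise described in this message — deliver one callback per distinct notification, plus a count of how many duplicates were coalesced into it — is easy to sketch outside the backend. This is only an illustration of the idea (the (channel, payload) key and the count convention are taken from the discussion above, not from any PostgreSQL code):

```python
from collections import OrderedDict

def coalesce(notifications):
    """Fold duplicate (channel, payload) notifications into one entry each,
    preserving first-seen order and counting how many were merged."""
    counts = OrderedDict()
    for channel, payload in notifications:
        key = (channel, payload)
        counts[key] = counts.get(key, 0) + 1
    # deliver one callback per distinct notification, plus its count
    return [(ch, pl, n) for (ch, pl), n in counts.items()]

events = [("jobs", "42"), ("jobs", "42"), ("jobs", "43"), ("jobs", "42")]
print(coalesce(events))
```

A listener that only needs a tap on the shoulder ignores the count; one that cares how many times the event fired still gets that information.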



[HACKERS] Problem with estimating pages for a table

2009-05-13 Thread Cristina M

Hello,

I posted to the general list and didn't receive any replies. Therefore, I am
trying this list now; hopefully this is the right mailing list for this type
of question.

I am trying to compute the number of pages of a table. I am using the formula:

pages = (columns width + 28) * no. of rows / block size

For each varchar column I add an extra 4 bytes.
For each numeric column I add an extra 8 bytes.
Add a 28-byte row overhead.

For example, if I have a table with col1: integer, col2: varchar, col3:
varchar, I will get:
pages = (col1width + col2width + 4 + col3width + 4 + 28) * no. of rows / block
size
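As a rough sketch, the heuristic above can be written out like this (the 28-byte row overhead and the +4/+8 per-column extras are the assumptions stated in this mail, not PostgreSQL's exact on-disk accounting, which also involves alignment padding and numeric's variable length):

```python
# Rough page-count estimator following the heuristic described above.
ROW_OVERHEAD = 28                      # assumed per-row overhead in bytes
EXTRA = {"varchar": 4, "numeric": 8}   # assumed per-type extra bytes

def estimate_pages(columns, n_rows, block_size=8192):
    """columns: list of (base_width_bytes, type_name) tuples."""
    row_width = ROW_OVERHEAD + sum(w + EXTRA.get(t, 0) for w, t in columns)
    # total bytes divided by block size, rounded up to whole pages
    return -(-row_width * n_rows // block_size)

# t1(c_custkey int, c_nationkey int, c_acctbal numeric): 4 + 4 + (8 + 8)
print(estimate_pages([(4, "int"), (4, "int"), (8, "numeric")], 150_000))
```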

The problem is that I get wrong estimates for some tables that have numeric
and varchar columns. I tested on the TPC-H database.

- for table t1(c_custkey int, c_nationkey int, c_acctbal numeric) I got a
result similar to the real number of pages. Here c_acctbal has 8 bytes, and I
added the extra 8 bytes.

- for table t2(l_orderkey int, l_partkey int, l_quantity, l_tax,
l_extendedprice, l_discount) I got an error of 42%. The last 4 columns are
numeric and I added an extra 8 bytes for each of them - 32 bytes:
(colwidths + 32 + 28) * no. of rows / block size
I would have got a correct value if I had added only 4 total bytes instead
of the 32: (colwidths + 4 + 28) * no. of rows / block size

One more question: I do not understand how to use the alignment value
property. Does it depend on the position of the attribute in the table?
I am using Postgres 8.3.

Thank you very much for any help in this regard,
Cristina


  

Re: [HACKERS] Lisp as a procedural language?

2008-10-19 Thread M. Edward (Ed) Borasky
On Sun, 2008-10-19 at 09:24 +0300, Volkan YAZICI wrote:
 M. Edward (Ed) Borasky [EMAIL PROTECTED] writes:
  Someone at the PostgreSQL West conference last weekend expressed an
  interest in a Lisp procedural language. The only two Lisp environments
  I've found so far that aren't GPL are Steel Bank Common Lisp (MIT,
  http://sbcl.sourceforge.net) and XLispStat (BSD,
  http://www.stat.uiowa.edu/~luke/xls/xlsinfo/xlsinfo.html). SBCL is a
  very active project, but I'm not sure about XLispStat.
 
 You see PL/scheme[1]?

I don't remember who it was at the conference, but when I suggested
Scheme, he said that it already existed, and that (Common) Lisp was
really what was wanted. 

Scheme is a much simpler beast. Both Scheme and Common Lisp are similar
in complexity at the core/virtual machine/interpreter/compiler level.
But once you load on all the libraries, object models (CLOS), etc.,
Common Lisp is much bigger.
-- 
M. Edward (Ed) Borasky
ruby-perspectives.blogspot.com

A mathematician is a machine for turning coffee into theorems. --
Alfréd Rényi via Paul Erdős





[HACKERS] Lisp as a procedural language?

2008-10-18 Thread M. Edward (Ed) Borasky
Someone at the PostgreSQL West conference last weekend expressed an
interest in a Lisp procedural language. The only two Lisp environments
I've found so far that aren't GPL are Steel Bank Common Lisp (MIT,
http://sbcl.sourceforge.net) and XLispStat (BSD,
http://www.stat.uiowa.edu/~luke/xls/xlsinfo/xlsinfo.html). SBCL is a
very active project, but I'm not sure about XLispStat. 
-- 
M. Edward (Ed) Borasky
ruby-perspectives.blogspot.com

A mathematician is a machine for turning coffee into theorems. --
Alfréd Rényi via Paul Erdős





Re: [HACKERS] Lisp as a procedural language?

2008-10-18 Thread M. Edward (Ed) Borasky
On Sat, 2008-10-18 at 20:43 -0400, Nikolas Everett wrote:
 From what I remember with tinkering with Lisp a while back, SBCL and
 CMUCL are the big free implementations.  I remember something about
 GCL being non-standard.  Either of those should make lisp hackers
 happy.

GCL (and Clisp) are both reasonable implementations of Common Lisp.
However, they are both GPL, which I think is an issue for PostgreSQL
community members. CMUCL development more or less stalled out, and many
of the heavyweights moved to Steel Bank Common Lisp (SBCL). It's kind of
a joke -- Carnegie = Steel, Mellon = Bank, so Carnegie Mellon
(University) Common Lisp = Steel Bank Common Lisp. :)

In any event, SBCL is MIT-licensed, which is free of some of the more
annoying GPL restrictions. BTW, I checked on XLispStat and it seems to
be frozen in time -- most of the people who used to use XLispStat
(including me) have moved on to R (which is GPL, unfortunately).
--
M. Edward (Ed) Borasky
ruby-perspectives.blogspot.com

A mathematician is a machine for turning coffee into theorems. --
Alfréd Rényi via Paul Erdős




Re: [HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-05 Thread Gavin M. Roy
2008-03-04 05:45:47 EST [6698]: [1-1] LOG:  process 6698 still waiting for
AccessShareLock on relation 1247 of database 16385 after 1001.519 ms
2008-03-04 05:45:47 EST [6698]: [2-1] STATEMENT:  VACUUM FULL
autograph.autograph_creators
2008-03-04 05:46:28 EST [6730]: [1-1] LOG:  process 6730 still waiting for
AccessShareLock on relation 1247 of database 16385 after 1000.887 ms
2008-03-04 05:46:28 EST [6730]: [2-1] STATEMENT:  VACUUM FULL
lunchmoney.totals
2008-03-04 05:47:55 EST [3809]: [18-1] LOG:  server process (PID 6742) was
terminated by signal 6: Aborted
2008-03-04 05:47:55 EST [3809]: [19-1] LOG:  terminating any other active
server processes
2008-03-04 05:47:55 EST [6741]: [12-1] WARNING:  terminating connection
because of crash of another server process


On Tue, Mar 4, 2008 at 9:56 PM, Tom Lane [EMAIL PROTECTED] wrote:

 Gavin M. Roy [EMAIL PROTECTED] writes:
  (gdb) where
  #0  0x003fe362e21d in raise () from /lib64/tls/libc.so.6
  #1  0x003fe362fa1e in abort () from /lib64/tls/libc.so.6
  #2  0x0063a2e3 in errfinish ()
  #3  0x005974c4 in DeadLockReport ()
  #4  0x0059381f in LockAcquire ()
  #5  0x00592357 in LockRelationOid ()
  #6  0x00457255 in relation_open ()
  #7  0x004574c3 in heap_open ()
  #8  0x0062cf41 in CatalogCacheInitializeCache ()
  #9  0x0062dfad in PrepareToInvalidateCacheTuple ()
  #10 0x0062e8c5 in CacheInvalidateHeapTuple ()
  #11 0x0045c803 in heap_page_prune ()
  #12 0x005086cd in vacuum_rel ()
  #13 0x005096bb in vacuum ()
  #14 0x005a163b in PortalRunUtility ()
  #15 0x005a1714 in PortalRunMulti ()
  #16 0x005a1d30 in PortalRun ()
  #17 0x0059f4b6 in PostgresMain ()
  #18 0x005760c0 in ServerLoop ()
  #19 0x0050 in PostmasterMain ()
  #20 0x0052fd3e in main ()

 So what did DeadLockReport put in the server log before panic'ing?

 I'm wondering exactly why CatalogCacheInitializeCache is being called
 here --- seems like that should have been done long before we got to
 VACUUM.  But it would be useful to know just what deadlock it saw.

regards, tom lane



Re: [HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-05 Thread Gavin M. Roy
On Wed, Mar 5, 2008 at 10:13 AM, Tom Lane [EMAIL PROTECTED] wrote:

 I wrote:
  In particular, if that's the problem, why has this not been seen before?
  The fact that it's going through heap_page_prune doesn't seem very
  relevant --- VACUUM FULL has certainly always had to invoke
  CacheInvalidateHeapTuple someplace or other.  So I still want to see
  the deadlock report ... we at least need to know which tables are
  involved in the deadlock.

 Actually, maybe it *has* been seen before.  Gavin, are you in the habit
 of running concurrent VACUUM FULLs on system catalogs, and if so have
 you noted that they occasionally get deadlock failures?


Generally no,  I've never noticed deadlocks before, but I'll go back and
look at some of the other the machines.

 A separate line of thought is whether it's a good idea that
  heap_page_prune calls the inval code inside a critical section.
  That's what's turning an ordinary deadlock failure into a PANIC.
  Even without the possibility of having to do cache initialization,
  that's a lot of code to be invoking, and it has obvious failure
  modes (eg, out of memory for the new inval list item).

 The more I think about this the more I don't like it.  The critical
 section in heap_page_prune is *way* too big.  Aside from the inval
 call, there are HeapTupleSatisfiesVacuum() calls, which could have
 failures during attempted clog references.

 The reason the critical section is so large is that we're manipulating
 the contents of a shared buffer, and we don't want a failure to leave a
 partially-modified page in the buffer.  We could fix that if we were to
 memcpy the page into local storage and do all the pruning work there.
 Then the critical section would only surround copying the page back to
 the buffer and writing the WAL record.  Copying the page is a tad
 annoying but heap_page_prune is an expensive operation anyway, and
 I think we really are at too much risk of PANIC the way it's being done
 now.  Has anyone got a better idea?

regards, tom lane
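The shape of the fix Tom proposes — do every fallible step on a private copy of the page so the critical section shrinks to just the copy-back and WAL write — is a general pattern. A toy sketch in Python, purely for illustration (this is not backend code; the names and the lock standing in for the buffer lock are made up):

```python
import threading

buffer_lock = threading.Lock()
shared_page = bytearray(b"old page contents")

def prune(page: bytearray) -> bytearray:
    """All the fallible work (which may raise) happens on a private copy."""
    work = bytearray(page)          # memcpy the page into local storage
    work[:3] = b"new"               # ... pruning logic would go here ...
    return work

def prune_and_publish():
    new_page = prune(shared_page)   # a failure here leaves the buffer intact
    with buffer_lock:               # "critical section": only the copy-back
        shared_page[:] = new_page   # (real code would also write WAL here)

prune_and_publish()
```

Because nothing inside the lock can fail, an error during pruning aborts the operation without ever exposing a half-modified page — which is exactly why it turns a would-be PANIC back into an ordinary error.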



Re: [HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-05 Thread Gavin M. Roy
On Wed, Mar 5, 2008 at 10:31 AM, Tom Lane [EMAIL PROTECTED] wrote:

 Gavin M. Roy [EMAIL PROTECTED] writes:
  2008-03-04 05:45:47 EST [6698]: [1-1] LOG:  process 6698 still waiting
 for
  AccessShareLock on relation 1247 of database 16385 after 1001.519 ms
  2008-03-04 05:45:47 EST [6698]: [2-1] STATEMENT:  VACUUM FULL
  autograph.autograph_creators
  2008-03-04 05:46:28 EST [6730]: [1-1] LOG:  process 6730 still waiting
 for
  AccessShareLock on relation 1247 of database 16385 after 1000.887 ms
  2008-03-04 05:46:28 EST [6730]: [2-1] STATEMENT:  VACUUM FULL
  lunchmoney.totals
  2008-03-04 05:47:55 EST [3809]: [18-1] LOG:  server process (PID 6742)
 was
  terminated by signal 6: Aborted
  2008-03-04 05:47:55 EST [3809]: [19-1] LOG:  terminating any other
 active
  server processes
  2008-03-04 05:47:55 EST [6741]: [12-1] WARNING:  terminating connection
  because of crash of another server process

 How annoying ... the PANIC message doesn't seem to have reached the log.
 elog.c is careful to fflush(stderr) before abort(), so that isn't
 supposed to happen.  But it looks like you are using syslog for logging
 (correct?) so maybe this is a problem with the syslog implementation
 you're using.  What's the platform exactly?

 I wonder if it'd be reasonable to put a closelog() call just before
 the abort() ...

regards, tom lane


I'm using stderr, emulated to look like syslog for pgfouine.

log_destination = 'stderr'
logging_collector = on  # Enable capturing of stderr and
csvlog
log_directory = '/var/log/postgres/' # Directory where log files are
written
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name pattern.


Re: [HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-05 Thread Gavin M. Roy
On Wed, Mar 5, 2008 at 10:31 AM, Tom Lane [EMAIL PROTECTED] wrote:

 Gavin M. Roy [EMAIL PROTECTED] writes:
  2008-03-04 05:45:47 EST [6698]: [1-1] LOG:  process 6698 still waiting
 for
  AccessShareLock on relation 1247 of database 16385 after 1001.519 ms
  2008-03-04 05:45:47 EST [6698]: [2-1] STATEMENT:  VACUUM FULL
  autograph.autograph_creators
  2008-03-04 05:46:28 EST [6730]: [1-1] LOG:  process 6730 still waiting
 for
  AccessShareLock on relation 1247 of database 16385 after 1000.887 ms
  2008-03-04 05:46:28 EST [6730]: [2-1] STATEMENT:  VACUUM FULL
  lunchmoney.totals
  2008-03-04 05:47:55 EST [3809]: [18-1] LOG:  server process (PID 6742)
 was
  terminated by signal 6: Aborted
  2008-03-04 05:47:55 EST [3809]: [19-1] LOG:  terminating any other
 active
  server processes
  2008-03-04 05:47:55 EST [6741]: [12-1] WARNING:  terminating connection
  because of crash of another server process

 How annoying ... the PANIC message doesn't seem to have reached the log.
 elog.c is careful to fflush(stderr) before abort(), so that isn't
 supposed to happen.  But it looks like you are using syslog for logging
 (correct?) so maybe this is a problem with the syslog implementation
 you're using.  What's the platform exactly?

 I wonder if it'd be reasonable to put a closelog() call just before
 the abort() ...

regards, tom lane


The panic may have made it if this is what you were looking for:

2008-03-04 05:45:20 EST [6742]: [7-1] PANIC:  deadlock detected
2008-03-04 05:58:33 EST [8751]: [3-1] PANIC:  deadlock detected

(it cored twice before I lowered the concurrency of vacuums)


Re: [HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-05 Thread Gavin M. Roy
2008-03-04 05:45:20 EST [6742]: [7-1] PANIC:  deadlock detected
2008-03-04 05:45:20 EST [6742]: [8-1] DETAIL:  Process 6742 waits for
AccessShareLock on relation 2619 of database 16385; blocked by process 6740.
Process 6740 waits for AccessShareLock on relation 1247 of database 16385;
blocked by process 6742.
2008-03-04 05:45:20 EST [6742]: [9-1] STATEMENT:  VACUUM FULL
pg_catalog.pg_type

Sorry, been juggling too many things this morning!

On Wed, Mar 5, 2008 at 10:45 AM, Tom Lane [EMAIL PROTECTED] wrote:

 Gavin M. Roy [EMAIL PROTECTED] writes:
  The panic may have made it if this is what you were looking for:

  2008-03-04 05:45:20 EST [6742]: [7-1] PANIC:  deadlock detected
  2008-03-04 05:58:33 EST [8751]: [3-1] PANIC:  deadlock detected

 That's what I expected to find, but where's the DETAIL for these?

regards, tom lane



[HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-04 Thread Gavin M. Roy
This morning I had a postgres 8.3 install core while multiple vacuum fulls
were taking place. I saved the core file; would anyone be interested in
dissecting it?  I've otherwise had no issues with this machine or pgsql
install.
Gavin


Re: [HACKERS] 8.3.0 Core with concurrent vacuum fulls

2008-03-04 Thread Gavin M. Roy
[EMAIL PROTECTED] backup]$ cat /etc/redhat-release
CentOS release 4.4 (Final)

BINDIR = /usr/local/pgsql/bin
DOCDIR = /usr/local/pgsql/doc
INCLUDEDIR = /usr/local/pgsql/include
PKGINCLUDEDIR = /usr/local/pgsql/include
INCLUDEDIR-SERVER = /usr/local/pgsql/include/server
LIBDIR = /usr/local/pgsql/lib
PKGLIBDIR = /usr/local/pgsql/lib
LOCALEDIR =
MANDIR = /usr/local/pgsql/man
SHAREDIR = /usr/local/pgsql/share
SYSCONFDIR = /usr/local/pgsql/etc
PGXS = /usr/local/pgsql/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--with-ldap' '--with-perl' '--enable-integer-datetimes'
CC = gcc
CPPFLAGS = -D_GNU_SOURCE
CFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
CFLAGS_SL = -fpic
LDFLAGS = -Wl,-rpath,'/usr/local/pgsql/lib'
LDFLAGS_SL =
LIBS = -lpgport -lz -lreadline -ltermcap -lcrypt -ldl -lm
VERSION = PostgreSQL 8.3.0

(gdb) where
#0  0x003fe362e21d in raise () from /lib64/tls/libc.so.6
#1  0x003fe362fa1e in abort () from /lib64/tls/libc.so.6
#2  0x0063a2e3 in errfinish ()
#3  0x005974c4 in DeadLockReport ()
#4  0x0059381f in LockAcquire ()
#5  0x00592357 in LockRelationOid ()
#6  0x00457255 in relation_open ()
#7  0x004574c3 in heap_open ()
#8  0x0062cf41 in CatalogCacheInitializeCache ()
#9  0x0062dfad in PrepareToInvalidateCacheTuple ()
#10 0x0062e8c5 in CacheInvalidateHeapTuple ()
#11 0x0045c803 in heap_page_prune ()
#12 0x005086cd in vacuum_rel ()
#13 0x005096bb in vacuum ()
#14 0x005a163b in PortalRunUtility ()
#15 0x005a1714 in PortalRunMulti ()
#16 0x005a1d30 in PortalRun ()
#17 0x0059f4b6 in PostgresMain ()
#18 0x005760c0 in ServerLoop ()
#19 0x0050 in PostmasterMain ()
#20 0x0052fd3e in main ()

On Tue, Mar 4, 2008 at 11:35 AM, Alvaro Herrera [EMAIL PROTECTED]
wrote:

 Gavin M. Roy wrote:
  This morning I had a postgres 8.3 install core this morning while
 multiple
  vacuum fulls were taking place. I saved the core file, would anyone be
  interested in dissecting it?  I've otherwise had no issues with this
 machine
  or pgsql install.

 Of course.  Please post the backtrace.

 --
 Alvaro Herrera
 http://www.CommandPrompt.com/
 The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] 8.3devel slower than 8.2 under read-only load

2007-11-24 Thread Gavin M. Roy
On 11/24/07, Gregory Stark [EMAIL PROTECTED] wrote:

 This is a conflict which will affect Postgres in the future as well.
 Generally
 I/O costs win over cpu costs in databases since only relatively small
 systems
 are cpu-bound. Large systems are typically I/O-bound.


That really depends on hardware doesn't it?  I'd say that I'm more concerned
with CPU than IO on high-end hardware as a generalization, especially with
the scaling issues beyond 32 CPU's.

Regards,

Gavin


Re: [HACKERS] Machine available for community use

2007-11-02 Thread Gavin M. Roy
Just a follow-up to note that Red Hat has graciously donated a 1 year
RHEL subscription and myYearbook is paying Command Prompt to setup the
RHEL box for community use.

We've not worked out a scheduling methodology, or how to best organize
the use of said hardware, but I know that Tom and others are
interested.

Does anyone have a scheduling solution for things like this to make
sure people aren't stepping on each other's toes processor/ram/disk
wise?

Also, what should the policies be for making sure that people can use
the box for what they need to use the box for?

Should people clean up after themselves data usage wise after their
scheduled time?

Should people only be able to run PostgreSQL in the context of their
own user?  Do we have experience with such setups in the past?  What
has worked well and what hasn't?

Gavin

On 7/25/07, Gavin M. Roy [EMAIL PROTECTED] wrote:
 Recently I've been involved in or overheard discussions about SMP
 scalability at both the PA PgSQL get together and in some list
 traffic.

 myYearbook.com would like to make one of our previous production
 machines available to established PgSQL Hackers who don't have access
 to this level of hardware for testing, benchmarking and development to
 work at improving SMP scalability and related projects.

 The machine is a HP 585 G1, 8 Core AMD, 32GB RAM with one 400GB 14
 Spindle DAS Array dedicated to community use.  I've attached a text
 file with dmesg and /proc/cpuinfo output.

 I'm working on how this will be setup and am open to suggestions on
 how to structure access.

 I'm currently in the process of having Gentoo linux reinstalled on the
 box since that is what I am most comfortable administering from a
 security perspective.  If this will be a blocker for developers who
 would actually work on it, please let me know.

 If you're interested in access, my only requirement is that you're a
 current PgSQL Hacker with a proven track-record of committing patches
 to the community.  This is a resource we could be using for something
 else, and I'd like to see the community get direct benefit from it as
 opposed to it being a play sandbox for people who want to tinker.

 Please let me know thoughts, concerns or suggestions.

 Gavin M. Roy
 CTO
 myYearbook.com
 [EMAIL PROTECTED]



---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] PAM authentication fails for local UNIX users

2007-08-20 Thread Dhanaraj M

Hi all,

http://archives.postgresql.org/pgsql-admin/2003-02/msg00301.php

I also try to address the same issue..

I run postmaster as postgres user and pg_hba.conf includes the following 
entry:


local   all dhanaraj  pam

However, the authentication fails for this unix local user, whereas it 
works for LDAP users.


bash-3.00$ psql -h superdesktop.india.sun.com -U dhanaraj mydb
Password for user dhanaraj:
psql: FATAL: PAM authentication failed for user dhanaraj

The following error message that I could see in the server log:
..
LOG: pam_authenticate failed: Conversation failure
FATAL: PAM authentication failed for user dhanaraj
LOG: pam_authenticate failed: No account present for user
FATAL: PAM authentication failed for user dhanaraj


The non-root user does not have permission to read other local Unix 
users' passwords.

I found two solutions:

1. usermod -K defaultpriv=Basic,file_dac_read  postgres

 - Gives the privilege to read all files. This solution works. Is it the 
right way to do it?


2. chmod +s processName

 - This does not work, because postgres never allows this.


Is there any other solution to this problem?
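
For completeness, two other common directions (a hedged sketch, not from the thread; the service name and exact option syntax are assumptions that vary by PostgreSQL version and PAM setup):

```
# pg_hba.conf -- hypothetical alternatives; adjust to your setup

# 1. use PostgreSQL's own md5 password auth for this local user,
#    sidestepping the shadow-file permission problem entirely
local   all   dhanaraj   md5

# 2. keep PAM, but name a dedicated PAM service whose modules do not
#    need to read other users' passwords (in newer releases the syntax
#    is pamservice=postgresql)
local   all   dhanaraj   pam   postgresql
```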

Thanks
Dhanaraj




Re: [HACKERS] Machine available for community use

2007-08-01 Thread Gavin M. Roy
Let us know when/if, and we'll pay Command Prompt to install the base OS on
the system.  All that we're waiting on at this point is the final decision on the OS.

Gavin

On 7/31/07, Tom Lane [EMAIL PROTECTED] wrote:

 Josh Berkus [EMAIL PROTECTED] writes:
  Hey, this is looking like a serious case of Bike Shedding.  That is, a
 dozen
  people are arguing about what color to paint the bike shed instead of
 getting
  it built.[1]

 FWIW, it's looking like Red Hat will donate a RHEL/RHN subscription if
 we want one, though I don't have final approval quite yet.

 regards, tom lane




Re: [HACKERS] Machine available for community use

2007-07-31 Thread Gavin M. Roy
It's actually in Texas, and we have no intention to put a time limit
on its availability. I think the availability will be there as long as
there is use and we're in the Texas data center, which I don't see
ending any time soon.

On 7/31/07, Josh Berkus [EMAIL PROTECTED] wrote:
 Gavin,

 I'm actually in the middle of assembling a general performance test lab for
 the PostgreSQL hackers, using equipment donated by Sun, Hi5, and (hopefully)
 Unisys and Intel.  While your machine would obviously stay in Pennsylvania,
 it would be cool if we could somehow arrange a unified authentication and
 booking system.

 I'm pretty sure I can even raise money to get one created.

 How long will this system remain available to us?

 --
 Josh Berkus
 PostgreSQL @ Sun
 San Francisco




Re: [HACKERS] Machine available for community use

2007-07-26 Thread Gavin M. Roy

Let me look at what makes sense there, I am open to it.

On 7/26/07, Tom Lane [EMAIL PROTECTED] wrote:

Greg Smith [EMAIL PROTECTED] writes:
 But this is pushing forward PostgreSQL development you're doing here.  If
 you've got a problem such that something works differently based on the
 order in which you built the packages, which is going to be unique to
 every Linux distribution already, that is itself noteworthy and deserves
 engineering out.  You might think of this high-end machine being a little
 different as usefully adding diversity robustness in a similar way to how
 the buildfarm helps improve the core right now.

Actually, the thing that's concerning me is *exactly* lack of diversity.
If we have just one of these things then there's a significant risk of
unconsciously tuning PG towards that specific platform.  I'd rather we
take that risk with a well-standardized, widely used platform than with
something no one else can reproduce.

Really there's a pretty good argument for having several different OS'es
available on the box --- I wonder whether Gavin is up to managing some
sort of VM or multiboot setup.

regards, tom lane






[HACKERS] Machine available for community use

2007-07-25 Thread Gavin M. Roy

Recently I've been involved in or overheard discussions about SMP
scalability at both the PA PgSQL get together and in some list
traffic.

myYearbook.com would like to make one of our previous production
machines available to established PgSQL Hackers who don't have access
to this level of hardware for testing, benchmarking and development to
work at improving SMP scalability and related projects.

The machine is an HP 585 G1, 8-core AMD, 32GB RAM, with one 400GB
14-spindle DAS array dedicated to community use.  I've attached a text
file with dmesg and /proc/cpuinfo output.

I'm working on how this will be set up and am open to suggestions on
how to structure access.

I'm currently in the process of having Gentoo linux reinstalled on the
box since that is what I am most comfortable administering from a
security perspective.  If this will be a blocker for developers who
would actually work on it, please let me know.

If you're interested in access, my only requirement is that you're a
current PgSQL Hacker with a proven track-record of committing patches
to the community.  This is a resource we could be using for something
else, and I'd like to see the community get direct benefit from it as
opposed to it being a play sandbox for people who want to tinker.

Please let me know thoughts, concerns or suggestions.

Gavin M. Roy
CTO
myYearbook.com
[EMAIL PROTECTED]
/proc/cpuinfo:

processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 33
model name  : AMD Opteron (tm) Processor 885
stepping: 2
cpu MHz : 1999.953
cache size  : 1024 KB
physical id : 0
siblings: 2
core id : 0
cpu cores   : 2
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm 3dnowext 3dnow pni
bogomips: 3599.40
TLB size: 1088 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp

processor   : 1
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 33
model name  : AMD Opteron (tm) Processor 885
stepping: 2
cpu MHz : 1999.953
cache size  : 1024 KB
physical id : 1
siblings: 2
core id : 0
cpu cores   : 2
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm 3dnowext 3dnow pni
bogomips: 3599.40
TLB size: 1088 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp

processor   : 2
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 33
model name  : AMD Opteron (tm) Processor 885
stepping: 2
cpu MHz : 1999.953
cache size  : 1024 KB
physical id : 2
siblings: 2
core id : 0
cpu cores   : 2
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm 3dnowext 3dnow pni
bogomips: 3599.40
TLB size: 1088 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp

processor   : 3
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 33
model name  : AMD Opteron (tm) Processor 885
stepping: 2
cpu MHz : 1999.953
cache size  : 1024 KB
physical id : 3
siblings: 2
core id : 0
cpu cores   : 2
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm 3dnowext 3dnow pni
bogomips: 3599.40
TLB size: 1088 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp

processor   : 4
vendor_id   : AuthenticAMD
cpu family  : 15
model   : 33
model name  : AMD Opteron (tm) Processor 885
stepping: 2
cpu MHz : 1999.953
cache size  : 1024 KB
physical id : 0
siblings: 2
core id : 1
cpu cores   : 2
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext lm 3dnowext 3dnow pni
bogomips: 3599.40
TLB size: 1088 4K pages
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual

Re: [HACKERS] Machine available for community use

2007-07-25 Thread Gavin M. Roy

If you're interested in using the box, name what you want installed.

On 7/25/07, Tom Lane [EMAIL PROTECTED] wrote:

Gavin M. Roy [EMAIL PROTECTED] writes:
 I'm currently in the process of having Gentoo linux reinstalled on the
 box since that is what I am most comfortable administering from a
 security perspective.  If this will be a blocker for developers who
 would actually work on it, please let me know.

Personally I'd prefer almost any of the other Linux distros.
Gentoo always leaves me wondering exactly what I'm running today,
and I think reproducibility is an important attribute for a benchmarking
machine.

regards, tom lane





Re: [HACKERS] Machine available for community use

2007-07-25 Thread Gavin M. Roy

Note it's a 28-disk system, and I can allocate more if needed, but I
was going to use one MSA for internal use.

On 7/25/07, Mark Wong [EMAIL PROTECTED] wrote:

On 7/25/07, Tom Lane [EMAIL PROTECTED] wrote:
 Gavin M. Roy [EMAIL PROTECTED] writes:
  I'm currently in the process of having Gentoo linux reinstalled on the
  box since that is what I am most comfortable administering from a
  security perspective.  If this will be a blocker for developers who
  would actually work on it, please let me know.

 Personally I'd prefer almost any of the other Linux distros.
 Gentoo always leaves me wondering exactly what I'm running today,
 and I think reproducibility is an important attribute for a benchmarking
 machine.

Tom, have any specific ideas in mind for using the system?  While I'm
used to having more disks it could be useful nonetheless for the tests
I used to run if there are no other ideas.

Rats, I've always liked Gentoo. ;)

Regards,
Mark





Re: [HACKERS] Machine available for community use

2007-07-25 Thread Gavin M. Roy

Ubuntu server?  Slackware?  Not a fan of Centos, RHEL or Fedora...
What about on the BSD side of things?

On 7/25/07, Tom Lane [EMAIL PROTECTED] wrote:

Gavin M. Roy [EMAIL PROTECTED] writes:
 If you're interested in using the box, name what you want installed.

Personally I use Fedora, but that's because of where I work ;-).
I have no objection to some other distro so long as it's one where
other people can duplicate your environment easily (no locally
compiled stuff).  A disadvantage of Fedora is its relatively short
support lifetime --- if you don't want to have to reinstall at least
once a year, something else would be better.

regards, tom lane





Re: [HACKERS] Machine available for community use

2007-07-25 Thread Gavin M. Roy

If RH can sponsor a license of RHEL I'm inclined to go there.  Not
that it was offered, but I think Dave's suggestion was Tom could field
that for the box if inclined.  If I'm wrong, let me know.  If that
can't happen, would people prefer CentOS or Ubuntu Server?  The people
I'm most concerned with are the people who will actually use it.  If
you consider yourself one of those people, pipe in now, I will tally
votes and go from there.  From a Gentoo side, I would have kept things
pretty stable, but I'd rather developers be comfortable with the
environment which will encourage you to use it.  I'm not interested in
running Debian, which I'm happy to talk about off topic, in private,
if anyone cares enough to want to discuss it.

What I'm most interested in to touch on Simon's request is SMP
scaling.  From another Hackers thread this month, which I can dig up,
I've walked away with the impression that after 4 cores, we don't see
the same level of per-processor performance improvement that we see at
4 cores or fewer.  What you actually do is up to you; we want to provide this
to the hacker community to use as they see fit to continue to improve
PostgreSQL which is integral to our operation.  Any performance,
scalability or even advocacy efforts (read benchmarking) will benefit
myYearbook.

Gavin


On 7/25/07, Simon Riggs [EMAIL PROTECTED] wrote:

On Wed, 2007-07-25 at 14:32 -0400, Tom Lane wrote:
 Dave Page [EMAIL PROTECTED] writes:
  Perhaps RH could donate us a RHEL/RHN licence for this?

 I could ask, if there's consensus we want it.

Please.

  It sounded like more
 people like Debian, though.

Well, if you don't we probably will go Debian.

--
  Simon Riggs
  EnterpriseDB  http://www.enterprisedb.com







Re: [HACKERS] Machine available for community use

2007-07-25 Thread Gavin M. Roy

One thing to take into account is that I don't have physical access to the
box (it is in TX, I am in PA).  All installs but Gentoo will be
performed by a well-trained NOC monkey. *cough*

On 7/25/07, Dave Page [EMAIL PROTECTED] wrote:



 --- Original Message ---
 From: Tom Lane [EMAIL PROTECTED]
 To: Greg Smith [EMAIL PROTECTED]
 Sent: 25/07/07, 18:54:50
 Subject: Re: [HACKERS] Machine available for community use

 Another fairly big issue is that we need to know whether measurements we
 take in August are comparable to measurements we take in October, so a
 fairly stable platform is important.  As you say, a fast-changing kernel
 would make it difficult to have any confidence about comparability over
 time.  That would tend to make me vote for RHEL/Centos, where long-term
 stability is an explicit development goal.  Debian stable might do too,
 though I'm not as clear about their update criteria as I am about Red Hat's.

Perhaps RH could donate us a RHEL/RHN licence for this?

/D






[HACKERS] pg_dump ignore tablespaces

2007-07-14 Thread Gavin M. Roy

Recently I ran into an issue where restoring from pg_dump from one machine
to another with non-matching tablespaces.  The primary issue is that index
creation will fail if the tablespace does not exist from the dump.  I was
thinking to best solution for this would be a pg_dump option such as
--ignore-tablespaces which would not dump any tablespace related data.  This
would benefit restoring a dump from one machine to another where there are
different disk or tablespace topologies.
If such a patch were presented and found to be acceptable code wise, would
it be a generally useful enough option to be included?
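
Until such an option exists, one workaround (a sketch, not from the thread; the tablespace name, index name, and path are hypothetical) is to pre-create the tablespaces the dump expects before restoring, then move objects off them afterwards if desired:

```sql
-- on the target cluster, before restoring:
CREATE TABLESPACE fastdisk LOCATION '/var/lib/pgsql/ts/fastdisk';

-- after the restore, objects can be relocated:
ALTER INDEX some_index SET TABLESPACE pg_default;
```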

Gavin


[HACKERS] Replacement of readline by libedit in PostgreSQL 8.1.x

2007-04-15 Thread Dhanaraj M

Hi all,

In the 8.2.x versions of Postgres, there is a configure switch:
--with-libedit-preferred  (prefer BSD Libedit over GNU Readline).

However, I don't see this switch in 8.1.x.
Since people still use 8.1.x version, is there any plan to back-port 
this feature?

If so, I like to work on this.

Thanks
Dhanaraj



Re: [HACKERS] [PATCHES] Allow the identifier length to be increased via a

2006-12-28 Thread Dhanaraj M

Tom Lane wrote:

Alvaro Herrera [EMAIL PROTECTED] writes:
  

Dhanaraj M wrote:


I am sending the patch for the following TODO item:
Allow the identifier length to be increased via a configure option
  


  

You should use pg_config.h, not mangle postgres_ext.h like that.  Or
maybe generate postgres_ext.h from a hypothetical postgres_ext.h.in (but
I wouldn't do that, really).



I'm wondering how this got into the TODO list.  It seems rather
pointless, and likely to create client compatibility problems (if not,
why is NAMEDATALEN exported at all?)
  

Will this TODO item be removed from the list?
Or I shall proceed with the suggestions given.

Thanks
Dhanaraj



Re: [HACKERS] Clarification needed

2006-10-11 Thread Dhanaraj M
In psql:

\d tableName

The underlying query is:

SELECT pg_catalog.format_type(a.atttypid, a.atttypmod)
FROM pg_attribute a, pg_class c
WHERE a.attrelid = c.oid
  AND c.relname = 'TableName'
  AND a.attname = 'ColName';
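
An alternative that avoids joining the catalogs by hand (a sketch; the table and column names are placeholders, and note that these string comparisons are case-sensitive) is the information_schema view:

```sql
SELECT data_type
FROM information_schema.columns
WHERE table_name = 'tablename'
  AND column_name = 'colname';
```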



Zdenek Kotala wrote:



Look at http://www.postgresql.org/docs/8.1/interactive/catalogs.html

Specially on pg_attribute, pg_class and pg_type table. Or you can use 
some features in the psql.


Zdenek


Indira Muthuswamy napsal(a):


Hai,
 
Can anyone of you help me in finding the datatype of a particular 
column in a table in Postgres?
 
Thanks and Regards,

M.Indira
 
 










[HACKERS] Is there any utility to update the table whenever text file gets changed?

2006-09-14 Thread Dhanaraj M

Is there any utility in postgresql which can do the following?

The utility must update the table whenever there is any change in the 
text file.

COPY command helps to do that, though this is not straight forward.
Can it be automated?

Thanks
Dhanaraj
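
There is no built-in file watcher; the usual approach is to schedule the reload externally (a hedged sketch, not a built-in facility; the file path and table name are hypothetical, and a server-side COPY requires the file to be readable by the server process):

```sql
-- reload.sql, run periodically (e.g. from cron: psql -d mydb -f reload.sql)
BEGIN;
TRUNCATE mytable;
COPY mytable FROM '/path/to/data.txt';
COMMIT;
```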



Re: [HACKERS] [PATCHES] Patch - Have psql show current values

2006-08-24 Thread Dhanaraj M

For \ds command,
this patch displays the current sequence value(last_value) for each 
sequence.

This was suggested during the earlier discussion.

Output of the current patch:

mydb=# \ds
             List of relations
 Schema | Name |   Type   |  Owner   | Seq Value
--------+------+----------+----------+-----------
 public | a    | sequence | Dhanaraj |         5
 public | b    | sequence | Dhanaraj |         2
 public | c    | sequence | Dhanaraj |         1
(3 rows)

Output without applying the patch:

mydb=# \ds
        List of relations
 Schema | Name |   Type   |  Owner
--------+------+----------+----------
 public | a    | sequence | Dhanaraj
 public | b    | sequence | Dhanaraj
 public | c    | sequence | Dhanaraj
(3 rows)

Peter Eisentraut wrote:


Dhanaraj M wrote:
 


This patch was discussed a few months ago.
I could not complete this patch at that time.
I hope that the current version of my patch is acceptable.
   



What is this patch supposed to be doing?

 






Re: [HACKERS] 8.2 features status

2006-08-06 Thread Agent M


On Aug 5, 2006, at 10:48 PM, Christopher Browne wrote:


Quoth [EMAIL PROTECTED] (David Fetter):

On Fri, Aug 04, 2006 at 02:37:56PM -0700, Neil Conway wrote:

On Fri, 2006-08-04 at 12:40 -0700, David Fetter wrote:

While I am not going to reopen the can of worms labeled 'bug
tracker', I think it would be good to have a little more formality
as far as claiming items goes.



What say?


I think this is a good plan for adding additional process overhead,
and getting essentially nothing of value in return. I'm not
convinced there's a problem in need of solving here...


Perhaps you'd like to explain how big a burden on the developer it is
to send a status update once a week, that being what I'm proposing here.

As far as the problem in need of solving, it's what Andrew Dunstan
referred to as splendid isolation, which is another way of saying,
letting the thing you've taken on gather dust while people think
you're working on it.


It seems to me once a week is a bit too often to demand, particularly
when trying to herd cats.

A burden of once a month may seem more reasonable.


One of the problems is that CVS branching is rather painful and some 
contributors can't commit. If there were some place where one could 
maintain a publicly-visible development branch just for feature X, that 
would make the work open source and trackable instead of 
open-source-once-I'm-done.


-M

AgentM
[EMAIL PROTECTED]




[HACKERS] Security bugs

2006-07-26 Thread Dhanaraj M


I heard that two security bugs were fixed in 8.1.4.
Since I'd like to upgrade from 8.1.3, I'd like to know about the bugs.
Can somebody give the description of those bugs?

Thanks
Dhanaraj



Re: [HACKERS] Progress bar updates

2006-07-19 Thread Agent M

Why make it so complicated?

There could be a guc to indicate that the client is interested in 
progress updates. For the execution phase, elog(INFO,...) could be 
emitted for each major plan node. (The client would probably run the 
explain plan beforehand or it would be embedded in the elog).


During the downloading of the rows, the client would display the bar 
relative to the number of estimated rows returned.


-M

On Jul 18, 2006, at 2:35 PM, Gregory Stark wrote:



Has anyone looked thought about what it would take to get progress 
bars from

clients like pgadmin? (Or dare I even suggest psql:)

My first thought would be a message like CancelQuery which would cause 
the
backend to peek into a static data structure and return a message that 
the
client could parse and display something intelligent. Various commands 
would

then stuff information into this data structure as they worked.


AgentM
[EMAIL PROTECTED]




Re: [HACKERS] SPI Elections and mailing list

2006-07-16 Thread Agent M
Sorry- perhaps I misunderstand the purpose of your group, but how can 
you claim to be making decisions on software in the public interest 
on a private, paid-member mailing list?


-M

On Jul 16, 2006, at 2:10 PM, Josh Berkus wrote:


Folks,

Hopefully by now a bunch of you have joined as Software in the Public 
Interest
Contributing members per my earlier e-mail and are aware that the SPI 
annual
board election has started.   If you are a registered contributing 
member

with SPI, elections are at: http://members.spi-inc.org/vote/
and candidate statements are at:
http://www.spi-inc.org/secretary/votes/vote5/

Voting closes July 28th.   If you did not already register as an SPI
contributing member, it is too late for this year.

Please also note that the current volume of e-mail on the spi-private 
mailing
list is due entirely to the election and is not at all typical of the 
list.


AgentM
[EMAIL PROTECTED]




Re: [HACKERS] Online index builds

2006-07-15 Thread Agent M
A great first step would be to add elog(INFO,...) in some standardized 
format over the wire so that clients can tell what's going on. It could 
be triggered by a GUC which is off by default.


-M

On Jul 15, 2006, at 9:10 PM, Greg Stark wrote:



Hannu Krosing [EMAIL PROTECTED] writes:


Maybe we can show progress indicators in status line (either
pg_stat_activity.current_query or commandline shown in ps), like

WAITING TO START PHASE 1 - WAITING FOR TRANSACTION XXX TO COMPLETE

or

INSERTING INDEX ENTRY N OF M

changing every few seconds.


Hm. That would be very interesting. I'll say that one of the things 
that
impressed me very much with Postgres moving from Oracle was the focus 
on
usability. Progress indicators would be excellent for a lot of 
operations.


That said I'm not sure how much I can do here. For a substantial index 
we
should expect most of the time will be spent in the tuplesort. It's 
hard to
see how to get any sort of progress indicator out of there and as long 
as we
can't it's hard to see the point of getting one during the heap scan 
or any of

the other i/o operations.

I think it does make sense to put something in current_query 
indicating when
it's waiting for transactions to end and when it's past that point. 
That's

something the DBA should be aware of.


AgentM
[EMAIL PROTECTED]




[HACKERS] 10th Anniversary Conference

2006-07-10 Thread Agent M

Dear Hackers,

I would like to thank all of you for organizing, hosting, and attending 
the 10th Anniversary PostgreSQL Conference last weekend. I was 
especially interested in future PostgreSQL directions and that was 
definitely the conference's theme. It was great to meet the community's 
big wigs, too.


Thanks especially to Josh, Gavin, and the other main organizers for 
making sure everything ran smoothly. I would most certainly be 
interested in future annual meetings.


If any of you are ever in the Boston area, drop me a line for a free 
beer.*


And, as mentioned by Bruce Momjian during his keynote, thanks for 
making a great database product that allows so many of us to pay the 
bills!


Best regards,
M

*Offer void where prohibited.




[HACKERS] binds only for s,u,i,d?

2006-07-03 Thread Agent M
Why are only select, insert, update, and delete supported for $X binds? 
Why can't preparation be used as a global anti-injection facility?


Example using the backend protocol for binds:
PREPARE TRANSACTION $1;
bind $1 ['text']
--syntax error at $1

Why am I able to prepare statements with the backend protocol that I 
can't prepare with PREPARE:

agentm=# prepare gonk as prepare transaction $1;
ERROR:  syntax error at or near prepare at character 17
LINE 1: prepare gonk as prepare transaction $1;

whereas the backend protocol only emits an error when the statement is 
executed [and the binds are ignored].


-M


AgentM
[EMAIL PROTECTED]



[HACKERS] Turning off disk caching

2006-06-27 Thread Dhanaraj M


Hi

Does anybody know how to turn off disk caching on Solaris
machines?

If so, pl. reply back.

Thanks
Dhanaraj



Re: [HACKERS] That EXPLAIN ANALYZE patch still needs work

2006-06-08 Thread Agent M
It's worth noting that on Darwin (on Apple hardware) gettimeofday is 
never a syscall whereas on Linux (AFAIK), it always is.


On Jun 8, 2006, at 7:58 PM, Mark Kirkwood wrote:


Tom Lane wrote:

Alvaro Herrera [EMAIL PROTECTED] writes:

Wow, that is slow.  Maybe a problem in the kernel?  Perhaps something
similar to this:
http://www.ussg.iu.edu/hypermail/linux/kernel/0603.2/index.html#1282

Yeah, that's a pretty interesting thread.  I came across something
similar on a Red Hat internal list.  It seems there are three or four
different popular standards for clock hardware in the Intel world,
and some good implementations and some pretty bad implementations
of each.  So the answer may well boil down to if you're using cheap
junk PC hardware then gettimeofday will be slow.


OS seems to matter as well - I've got two identical Supermicro P3TDER 
dual-Intel boxes, one running FreeBSD 6.1, one running Gentoo Linux 
2.6.16.


Doing the 'select count(*) vs explain analyze select count(*) on 
10 row table gives:


Freebsd : select 108 ms  explain analyze 688 ms
Linux   : select 100 ms  explain analyze 196 ms

Both systems have ACPI enabled in BIOS (which means there is a better 
timecounter than 'i8254' available (FreeBSD says its using 'ACPI-safe' 
- not sure how to check on Linux).
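
The per-row cost is easy to estimate from psql (a sketch; "bigtable" is a placeholder): with \timing on, the difference between the two runs divided by the row count approximates the overhead of the two gettimeofday() calls EXPLAIN ANALYZE makes per row.

```sql
\timing
SELECT count(*) FROM bigtable;                  -- plain scan
EXPLAIN ANALYZE SELECT count(*) FROM bigtable;  -- same scan, plus timing calls
```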


AgentM
[EMAIL PROTECTED]




[HACKERS] Performance Issues

2006-05-23 Thread Dhanaraj M

I have the following doubts.

1. Does postgres create an index on every primary key?  Usually, queries 
are performed against a table on the primary key, so, an index on it 
will be very useful.


2. If I'm executing a complex query and it takes 10 seconds to return the 
results, it takes 10 seconds to execute the next time also.  I'm 
wondering if there's any kind of caching that can be enabled, so that the 
next time it doesn't take 10 seconds to return the results.


Thanks
Dhanaraj



Re: [HACKERS] Performance Issues

2006-05-23 Thread Dhanaraj M
Thank you for your help. I found that an implicit index is created for 
the primary key in the current version. However, this is not done in the 
7.x versions.
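
This is easy to confirm on a current server (a minimal sketch; the table name is made up):

```sql
CREATE TABLE t (id integer PRIMARY KEY);
-- the server notices: CREATE TABLE / PRIMARY KEY will create
-- implicit index "t_pkey" for table "t"

-- the unique index backing the primary key is visible in pg_indexes:
SELECT indexname FROM pg_indexes WHERE tablename = 't';
```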



Mark Woodward wrote:


Dhanaraj M wrote:
   


I have the following doubts.

1. Does postgres create an index on every primary key?  Usually, queries
are performed against a table on the primary key, so, an index on it
will be very useful.
 


Yes, a unique index is used to enforce the primary-key.
   



Well, here is an interesting question that I have suddenly become very
curious about: if you have a primary key -- obviously a unique index -- does it,
in fact, use this index regardless of whether the table has been analyzed?


 


2. If 'm executing a complex query and it takes 10 seconds to return the
results -- it takes 10 seconds to execute the next time also.  I'm
wondering if there's any kind of caching that can be enabled -- so, the
next time it takes 10 seconds to return the results.
 


Not of query results. Obviously data itself might be cached. You might
want to look at memcached for this sort of thing.
   




I am looking at this string of posts and it occurs to me that he should
run analyze. Maybe I'm jumping at the wrong point.
 






Re: [HACKERS] Clarification required

2006-05-18 Thread Dhanaraj M
Does "the server log file" mean the files that are in pg_xlog, or some 
other files?


The log files in the pg_xlog dir are not human-readable. How would they be 
useful on the client side?


Dave Page wrote:





could not understand the following TODO item.
Can anyone explain this.

Monitoring:
===========

Allow server logs to be remotely read and removed using SQL commands


Does it mean that the server log file should be read by the 
client? What kind of removal (using SQL)?
   



This is possible using the functions in the adminpack contrib module
that's currently in the patch queue:
http://candle.pha.pa.us/mhonarc/patches/msg6.html

 


Also I like to know where the server log file is stored.
   


Check your postgresql.conf file.

Regards, Dave.
 






[HACKERS] Clarification required

2006-05-12 Thread Dhanaraj M

I could not understand the following TODO item.
Can anyone explain this.

Monitoring:
===========

Allow server logs to be remotely read and removed using SQL commands


Does it mean that the server log file should be read by the client? What 
kind of removal (using SQL)?

I'd also like to know where the server log file is stored.

Thanks
Dhanaraj



[HACKERS] Need some clarification

2006-05-10 Thread Dhanaraj M

I could see the following in the TODO list, but I am not clear what is
expected out of it. Can anyone explain?

1. Allow VIEW/RULE recompilation when the underlying tables change

   Another issue is whether underlying table changes should be reflected
   in the view, e.g. should SELECT * show additional columns if they are
   added after the view is created.

Dropping/altering a column is not allowed in the above-mentioned
scenario; it only fails for SELECT *.

What are the other cases in which it fails?

2. Invalidate prepared queries, like INSERT, when the table definition
   is altered

Does invalidation mean recompilation or deletion of the prepared
statement here? Both items look similar, i.e. both need recompilation of
the query after the table is altered.

Is that right?
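The question about item 2 (does "invalidate" mean recompile or delete?) can be pictured with a toy model -- this is purely hypothetical illustration, not PostgreSQL internals: each table carries a version counter that ALTER bumps, and a stale plan is lazily rebuilt on the next execute rather than deleted.

```python
class Catalog:
    """Toy schema catalog: each table carries a version that ALTER bumps."""
    def __init__(self):
        self.versions = {"t": 0}

    def alter_table(self, table):
        self.versions[table] += 1  # any definition change invalidates plans


class PreparedStatement:
    def __init__(self, catalog, table, sql):
        self.catalog, self.table, self.sql = catalog, table, sql
        self.plan_version = catalog.versions[table]  # version at plan time
        self.recompiled = 0

    def execute(self):
        # "Invalidation" here means lazy recompilation: if the table
        # definition changed since planning, rebuild the plan first.
        current = self.catalog.versions[self.table]
        if current != self.plan_version:
            self.plan_version = current
            self.recompiled += 1
        return f"executed: {self.sql}"


catalog = Catalog()
stmt = PreparedStatement(catalog, "t", "INSERT INTO t VALUES (1)")
stmt.execute()             # plan still valid, runs as planned
catalog.alter_table("t")   # table definition changes
stmt.execute()             # stale plan detected, recompiled once
```

Under this reading the two TODO items are indeed similar: both amount to detecting that a cached plan (for a view or a prepared statement) no longer matches the catalog, and rebuilding it.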



[HACKERS] Need a help - Performance issues

2006-05-09 Thread Dhanaraj M


I am currently looking at some performance issues. Our product uses
Postgres as the backend, and it uses a huge amount of storage space.

Is there a script available which does the following:

1. When you point it to a Postgres installation, the script first
   identifies what platform (Solaris SPARC / Solaris x86 / Niagara) it
   is running on,
2. walks through all known tuning parameters (in /etc/system and
   postgresql.conf) on the system,
3. lists which parameters are optimal/suboptimal for that platform, and
   makes recommendations,
4. identifies when the last vacuumdb / full vacuum (and other such
   database health maintenance commands) were run,
5. comments on the growth of certain tables/databases.

If anybody has such scripts, or can give some links, that would be
useful for our group.
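Steps 2-3 of such a script could be sketched as below. Everything here is a hypothetical starting point -- the `RECOMMENDED` values are placeholders, not real tuning advice, and a real script would load per-platform recommendations from a proper tuning guide:

```python
import os
import re
import tempfile

# Hypothetical per-platform recommendations; real values would come from
# platform-specific tuning documentation, not from this sketch.
RECOMMENDED = {"shared_buffers": "1000", "max_connections": "100"}


def parse_conf(path):
    """Read name = value pairs from a postgresql.conf-style file."""
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()   # drop comments
            m = re.match(r"(\w+)\s*=\s*'?([^']*?)'?\s*$", line)
            if m:
                settings[m.group(1)] = m.group(2)
    return settings


def report(settings):
    """List parameters that differ from (or lack) a recommended value."""
    notes = []
    for name, want in RECOMMENDED.items():
        have = settings.get(name)
        if have is None:
            notes.append(f"{name}: not set (recommended {want})")
        elif have != want:
            notes.append(f"{name}: {have} (recommended {want})")
    return notes


# Demo on a throwaway conf file
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("shared_buffers = 64   # too small\nport = 5432\n")
    path = f.name
notes = report(parse_conf(path))
os.unlink(path)
```

Steps 1, 4, and 5 would need platform probing (`uname`), a look at pg_stat-style views for vacuum times, and periodic size sampling, so they are left out of this sketch.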



Thanks
Dhanaraj



