Re: [GENERAL] Silent data loss in its pure form

2016-05-30 Thread Alex Ignatov


_
From: David G. Johnston 
Sent: Monday, May 30, 2016 23:44
Subject: Re: [GENERAL] Silent data loss in its pure form
To: Alex Ignatov 
Cc:  , Scott Marlowe 


On Mon, May 30, 2016 at 4:22 PM, Alex Ignatov  wrote:

_
From: Scott Marlowe 
Sent: Monday, May 30, 2016 20:14
Subject: Re: [GENERAL] Silent data loss in its pure form
To: Alex Ignatov 
Cc:  


On Mon, May 30, 2016 at 10:57 AM, Alex Ignatov  wrote:
> Following this bug report from Red Hat
> https://bugzilla.redhat.com/show_bug.cgi?id=845233
>
> it raises a dangerous issue:
>
> If for any reason your data file is zeroed after a power loss (historically
> the best-known issue on XFS), then when you do
> select count(*) from your_table you get zero if the table fit in a single
> 1 GB (default) segment file, or some other number that does not equal the
> pre-power-loss count(*).
> No errors, nothing suspicious in the logs, no checksum errors. Nothing.
>
> Silent data loss in its pure form.
>
> And thank all the gods if you notice it before the backups that still
> contain good data are recycled.
> Keep this in mind while checking your "backups" in any form (pg_dump, or
> the more dangerous and less talked-about PITR file backup).
>
> Your data is always in danger under the "a zeroed data file is a normal
> file" paradigm.

That bug shows as having been fixed in 2012. Are there any modern,
supported distros that would still have it? It sounds really bad btw.



It is not about modern distros; it is about possible silent data loss on old 
distros. We have replication and some form of data checksumming, but we are 
powerless against this XFS bug, simply because "a zeroed file is your good 
friend" in Postgres. With the "a zero file is a good file" paradigm plus this 
XFS bug, PG as it stands is a colossus with feet of clay: it can do many 
things, but it cannot even tell us that something is wrong with our precious 
data. No prevention or other magic is needed once the zero doomsday has come; 
what we need is an error report about a suspicious zeroed file, so that we 
know something went wrong and have to run a recovery. Today, PG's power-loss 
recovery combined with this XFS bug poisons our assurance that recovery went 
well: it "goes well" even with a zeroed file. That is not healthy behavior; 
it is like walking across a minefield with your eyes closed. I think it is a 
very dangerous view of the data to have data files without any header in them 
and without any file checking, at least at PG start. With this known XFS bug 
it can lead to undetected and unavoidable loss of data.

​For those not following -general this is basically an extension of the 
following thread.
"Deleting a table file does not raise an error when the table is touched 
afterwards, why?"
https://www.postgresql.org/message-id/flat/184509399.5590018.1464622534207.javamail.zim...@dbi-services.com#184509399.5590018.1464622534207.javamail.zim...@dbi-services.com
David J.
It is not an extension of that thread; it is about the XFS bug and how PG 
ignores a zeroed file even during power-loss recovery. That thread is just 
the topic starter on the important theme of how to silently lose your data 
with broken XFS and PG. The key words are "silently", without any human 
intervention, and the "a zero-length file is a good file" paradigm. It is not 
even like unlinking files by hand.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company



  

Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread David G. Johnston
On Mon, May 30, 2016 at 3:32 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> ​I have to think that we can reasonably ascribe unexpected system state to
> causes other than human behavior.  In both of the other examples PostgreSQL
> would fail to start so I'd say we have expected behavior in the face of
> those particular unexpected system states.
>

Alex Ignatov started a new thread on this topic as well...

https://www.postgresql.org/message-id/c571dfc5-91b0-0df2-4e3f-45bc94c11...@postgrespro.ru

I posted a link to this thread on his new one as well.

David J.​


Re: [GENERAL] Silent data loss in its pure form

2016-05-30 Thread David G. Johnston
On Mon, May 30, 2016 at 4:22 PM, Alex Ignatov 
wrote:

>
> _
> From: Scott Marlowe 
> Sent: Monday, May 30, 2016 20:14
> Subject: Re: [GENERAL] Silent data loss in its pure form
> To: Alex Ignatov 
> Cc: 
>
>
>
> On Mon, May 30, 2016 at 10:57 AM, Alex Ignatov 
> wrote:
> > Following this bug report from Red Hat
> > https://bugzilla.redhat.com/show_bug.cgi?id=845233
> >
> > it raises a dangerous issue:
> >
> > If for any reason your data file is zeroed after a power loss (historically
> > the best-known issue on XFS), then when you do
> > select count(*) from your_table you get zero if the table fit in a single
> > 1 GB (default) segment file, or some other number that does not equal the
> > pre-power-loss count(*).
> > No errors, nothing suspicious in the logs, no checksum errors. Nothing.
> >
> > Silent data loss in its pure form.
> >
> > And thank all the gods if you notice it before the backups that still
> > contain good data are recycled.
> > Keep this in mind while checking your "backups" in any form (pg_dump, or
> > the more dangerous and less talked-about PITR file backup).
> >
> > Your data is always in danger under the "a zeroed data file is a normal
> > file" paradigm.
>
> That bug shows as having been fixed in 2012. Are there any modern,
> supported distros that would still have it? It sounds really bad btw.
>
>
>
> It is not about modern distros; it is about possible silent data loss on
> old distros. We have replication and some form of data checksumming, but
> we are powerless against this XFS bug, simply because "a zeroed file is
> your good friend" in Postgres. With the "a zero file is a good file"
> paradigm plus this XFS bug, PG as it stands is a colossus with feet of
> clay: it can do many things, but it cannot even tell us that something is
> wrong with our precious data. No prevention or other magic is needed once
> the zero doomsday has come; what we need is an error report about a
> suspicious zeroed file, so that we know something went wrong and have to
> run a recovery. Today, PG's power-loss recovery combined with this XFS bug
> poisons our assurance that recovery went well: it "goes well" even with a
> zeroed file. That is not healthy behavior; it is like walking across a
> minefield with your eyes closed. I think it is a very dangerous view of
> the data to have data files without any header in them and without any
> file checking, at least at PG start. With this known XFS bug it can lead
> to undetected and unavoidable loss of data.
>


​For those not following -general this is basically an extension of the
following thread.

"Deleting a table file does not raise an error when the table is touched
afterwards, why?"

https://www.postgresql.org/message-id/flat/184509399.5590018.1464622534207.javamail.zim...@dbi-services.com#184509399.5590018.1464622534207.javamail.zim...@dbi-services.com

David J.


​


Re: [GENERAL] Slides for PGCon2016; "FTS is dead ? Long live FTS !"

2016-05-30 Thread Andreas Joseph Krogh
On Monday, 30 May 2016 at 22:27:11, Oleg Bartunov wrote:
> On Sun, May 29, 2016 at 12:59 AM, Oleg Bartunov wrote:
>> On Thu, May 26, 2016 at 11:26 PM, Andreas Joseph Krogh wrote:
>>> Hi.
>>>
>>> Any news about when slides for $subject will be available?
>>
>> I submitted slides to the pgcon site, but it usually takes a while, so you
>> can download our presentation directly:
>> http://www.sai.msu.su/~megera/postgres/talks/pgcon-2016-fts.pdf
>
> Please download the new version of the slides. I added CREATE INDEX commands
> to the examples.

Great!

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: [GENERAL] Slides for PGCon2016; "FTS is dead ? Long live FTS !"

2016-05-30 Thread Oleg Bartunov
On Sun, May 29, 2016 at 12:59 AM, Oleg Bartunov  wrote:

>
>
> On Thu, May 26, 2016 at 11:26 PM, Andreas Joseph Krogh  > wrote:
>
>> Hi.
>>
>> Any news about when slides for $subject will be available?
>>
>
> I submitted slides to pgcon site, but it usually takes awhile, so you can
> download our presentation directly
> http://www.sai.msu.su/~megera/postgres/talks/pgcon-2016-fts.pdf
>
>
Please download the new version of the slides. I added CREATE INDEX commands
to the examples.



> There are some missing features in rum index, but I hope we'll update
> github repository really soon.
>
>
>>
>> --
>> *Andreas Joseph Krogh*
>> CTO / Partner - Visena AS
>> Mobile: +47 909 56 963
>> andr...@visena.com
>> www.visena.com
>> 
>>
>
>


Re: [GENERAL] Silent data loss in its pure form

2016-05-30 Thread Alex Ignatov


_
From: Scott Marlowe 
Sent: Monday, May 30, 2016 20:14
Subject: Re: [GENERAL] Silent data loss in its pure form
To: Alex Ignatov 
Cc:  


On Mon, May 30, 2016 at 10:57 AM, Alex Ignatov  wrote:
> Following this bug report from Red Hat
> https://bugzilla.redhat.com/show_bug.cgi?id=845233
>
> it raises a dangerous issue:
>
> If for any reason your data file is zeroed after a power loss (historically
> the best-known issue on XFS), then when you do
> select count(*) from your_table you get zero if the table fit in a single
> 1 GB (default) segment file, or some other number that does not equal the
> pre-power-loss count(*).
> No errors, nothing suspicious in the logs, no checksum errors. Nothing.
>
> Silent data loss in its pure form.
>
> And thank all the gods if you notice it before the backups that still
> contain good data are recycled.
> Keep this in mind while checking your "backups" in any form (pg_dump, or
> the more dangerous and less talked-about PITR file backup).
>
> Your data is always in danger under the "a zeroed data file is a normal
> file" paradigm.

That bug shows as having been fixed in 2012. Are there any modern,
supported distros that would still have it? It sounds really bad btw.



It is not about modern distros; it is about possible silent data loss on old 
distros. We have replication and some form of data checksumming, but we are 
powerless against this XFS bug, simply because "a zeroed file is your good 
friend" in Postgres. With the "a zero file is a good file" paradigm plus this 
XFS bug, PG as it stands is a colossus with feet of clay: it can do many 
things, but it cannot even tell us that something is wrong with our precious 
data. No prevention or other magic is needed once the zero doomsday has come; 
what we need is an error report about a suspicious zeroed file, so that we 
know something went wrong and have to run a recovery. Today, PG's power-loss 
recovery combined with this XFS bug poisons our assurance that recovery went 
well: it "goes well" even with a zeroed file. That is not healthy behavior; 
it is like walking across a minefield with your eyes closed. I think it is a 
very dangerous view of the data to have data files without any header in them 
and without any file checking, at least at PG start. With this known XFS bug 
it can lead to undetected and unavoidable loss of data.
  

Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread David G. Johnston
On Mon, May 30, 2016 at 2:50 PM, Tom Lane  wrote:

> Daniel Westermann  writes:
> > - if the above is correct why does PostgreSQL only write a partial file
> back to disk/wal? For me this still seems dangerous as potentially nobody
> will notice it
>
> In quiescent circumstances, Postgres wouldn't have written anything at
> all, and the file would have disappeared completely at server shutdown,
> and you would have gotten some sort of file-not-found error when you tried
> the "count(*)" after restarting.  I hypothesize that you did an unclean
> shutdown leading to replaying some amount of WAL at restart, and that WAL
> included writing at least one block of the file (perhaps as a result of a
> hint-bit update, or some other not-user-visible maintenance operation,
> rather than anything you did explicitly).  The WAL replay code will
> recreate the file if it doesn't exist on-disk --- this is important for
> robustness.  Then you'd have a file that exists on-disk but is partially
> filled with empty pages, which matches the observed behavior.  Depending
> on various details you haven't provided, this might be indistinguishable
> from a valid database state.
>
>
I suspect that page checksums might have detected the broken state, at least
if any of the written pages were partial, since the not-yet-overwritten zeros
on those partially written pages would have resulted in a different checksum.
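
(As an aside, whether a given cluster even has page checksums enabled can be
confirmed from SQL on reasonably recent releases:)

show data_checksums;   -- "on" only if the cluster was initdb'd with --data-checksums (-k)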

> - PostgreSQL assumes that someone with write access to the files knows
> what she/he is doing. ok, but still, in the real world cases like this
> happen (for whatever reason)
>
> [ shrug... ] There's also an implied contract that you don't do "rm -rf /",
> or shoot the disk drive full of holes with a .45, or various other
> unrecoverable actions.  We're not really prepared to expend large amounts
> of developer effort, or large amounts of runtime overhead, to detect such
> cases.  (In particular, the fact that all-zero pages are a valid state is
> unfortunate from this perspective, but it's more or less forced by
> robustness concerns associated with table-extension behavior.  Most users
> would not thank us for making table extension slower in order to issue a
> more intelligible error for examples like this one.)
>

​rant​

​I have to think that we can reasonably ascribe unexpected system state to
causes other than human behavior.  In both of the other examples PostgreSQL
would fail to start so I'd say we have expected behavior in the face of
those particular unexpected system states.

​IMO too much attention is being paid to the act of recreation.  But even
if we presume that the only viable way to recreate this circumstance is to
do so intentionally we've documented a clever way for someone to mess with
the system in a subtle manner.

Up until Tom's last email I got very little out of the discussion.  It
doesn't fill me with confidence when such an important topic is taken too
glibly.  I suspect a large number of uses of PostgreSQL are in situations
where if the application works everything is assumed to be fine.  People
know that random things happen to hardware and that software can have
bugs.  That is what this thread describes -  a potential situation that
could happen due to non-human causes that results in a somewhat silently
mis-operating system.

​There is still quite a bit of hand-waving here though - and I don't know
whether being more precise really does an end-user enough good that it
would be worth writing up in the user-facing docs.  Like all areas I'm sure
this is open to improvement, but I'm sufficiently happy that an event of
this precision is unlikely enough to warrant the present behavior.

​/rant​

David J.


Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread Tom Lane
Daniel Westermann  writes:
> - if the above is correct why does PostgreSQL only write a partial file back 
> to disk/wal? For me this still seems dangerous as potentially nobody will 
> notice it 

In quiescent circumstances, Postgres wouldn't have written anything at
all, and the file would have disappeared completely at server shutdown,
and you would have gotten some sort of file-not-found error when you tried
the "count(*)" after restarting.  I hypothesize that you did an unclean
shutdown leading to replaying some amount of WAL at restart, and that WAL
included writing at least one block of the file (perhaps as a result of a
hint-bit update, or some other not-user-visible maintenance operation,
rather than anything you did explicitly).  The WAL replay code will
recreate the file if it doesn't exist on-disk --- this is important for
robustness.  Then you'd have a file that exists on-disk but is partially
filled with empty pages, which matches the observed behavior.  Depending
on various details you haven't provided, this might be indistinguishable
from a valid database state.

> - PostgreSQL assumes that someone with write access to the files knows what 
> she/he is doing. ok, but still, in the real world cases like this happen (for 
> whatever reason) 

[ shrug... ] There's also an implied contract that you don't do "rm -rf /",
or shoot the disk drive full of holes with a .45, or various other
unrecoverable actions.  We're not really prepared to expend large amounts
of developer effort, or large amounts of runtime overhead, to detect such
cases.  (In particular, the fact that all-zero pages are a valid state is
unfortunate from this perspective, but it's more or less forced by
robustness concerns associated with table-extension behavior.  Most users
would not thank us for making table extension slower in order to issue a
more intelligible error for examples like this one.)

regards, tom lane




Re: [GENERAL] plugin dev, oid to pointer map

2016-05-30 Thread Attila Soki

> On 30 May 2016, at 02:43, Julien Rouhaud  wrote:
> 
> On 29/05/2016 22:10, Attila Soki wrote:
>> i am about to begin with postgresql plugin development.
>> H Currently i'm trying to become somewhat familiar with the postgresql 
>> sources.
> 
>> 
>> Without going too deep into details about the plugin, i want to use
>> many Oid to pointer relations.
>> The pointer is a pointer to my own struct (allocated with palloc).
>> There will be approx. 1000 unique oid/pointer pairs.
>> 
>> Basically, what i want is, to be able to get the pointer to my struct by Oid.
>> 
>> Is there is a suitable hashmap or key-value storage solution in the pg code?
>> if so, please point me to the right part of the source.
>> 
> 
> Yes, there's an hashtable implementation, see dynahash.c
> 
> If you want to use that in shared memory in your extension, you can look
> at the pg_stat_statements extension (look for pgss_hash) for an example.


Hello Julien,

Exactly what I need.

Thank you

Attila Soki







Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread David W Noon
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mon, 30 May 2016 19:49:36 +0200 (CEST), Daniel Westermann
(daniel.westerm...@dbi-services.com) wrote about "Re: [GENERAL]
Deleting a table file does not raise an error when the table is
touched afterwards, why?" (in
<1247360337.5599235.1464630576430.javamail.zim...@dbi-services.com>):

[snip]
> Thanks all for your answers. Maybe I should have provided more 
> background information: We had an internal workshop today and one
> of the topics was backup/restore. One of the questions was what
> will happen if (for any reason) a file gets deleted so we tested
> it. I am aware that this is not the common use case. But still I
> want to understand why PostgreSQL works the way described. From the
> answers I understand this:
> 
> - the file is not really deleted because PostgreSQL is still using
> it => correct?

It is correct.

> - if the above is correct why does PostgreSQL only write a partial
> file back to disk/wal?

PG only writes modified pages to WAL, even if another process has
requested the deletion of the tablespace file. In fact, PG is not even
aware of the deletion request.

> For me this still seems dangerous as potentially nobody will notice
> it

It is intrinsically dangerous, which is why only root and postgres
userids have write permissions on physical filesystems to be used by PG.

Indeed, if you use an alternative security model even root will not
have write permissions, only postgres.

> - PostgreSQL assumes that someone with write access to the files
> knows what she/he is doing. ok, but still, in the real world cases
> like this happen (for whatever reason)

It is very unlikely to happen once; it should never happen twice. No
competent DBA would do that once, let alone twice; if a DBA does that
once, he/she ceases to be a DBA.

The issue you are raising here is the misuse of filesystem
permissions. There is nothing PG can do here beyond what it already
does: check that the permissions mask and ownership are as tight as
possible during start-up of each tablespace.

This reminds me of the old music hall joke: a patient goes to the
doctor, raises his arm and says "Doc, it hurts when I do this!", to
which the doctor replies "Well don't do that."

Basically, it behoves your support staff to manage the physical
filesystems correctly and not damage them.
- -- 
Regards,

Dave  [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
david.w.n...@googlemail.com (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iEYEARECAAYFAldMiSMACgkQogYgcI4W/5T4UgCgzQQGvhdo+yxr7VSUPyzTJMa8
xAwAn3vPHQ4UOOhSL4kjCtl6Cq5sVeb0
=/lld
-END PGP SIGNATURE-




Re: [GENERAL] swarm of processes in BIND state?

2016-05-30 Thread Jeff Janes
On Sat, May 28, 2016 at 11:32 AM, hubert depesz lubaczewski
 wrote:
> On Sat, May 28, 2016 at 10:32:15AM -0700, Jeff Janes wrote:

>> If that wasn't informative, I'd attach to one of the processes with
>> the gdb debugger and get a backtrace.  (You might want to do that a
>> few times, just in case the first one accidentally caught the code
>> during a part of its execution which was not in the bottlenecked
>> spot.)
>
> I did:
> for a in $( ps uww -U postgres | grep BIND | awk '{print $2}' ); do echo "bt" 
> | gdb -p $a > $a.bt.log 2>&1; done
>
> Since there is lots of output, I made a tarball with it, and put it on
> https://depesz.com/various/all.bt.logs.tar.gz
>
> The file is ~ 19kB.

If you look at the big picture, it is what I thought: the planner
probing the index end points when planning a merge join.  Although I
don't know how that turns into the low-level specifics you are seeing.

It looks like everyone becomes interested in the same disk page at the
same time.  One process starts reading in that page, and all the
others queue up on a lock waiting for it to finish.  So what
you see is 1 disk wait and N-1 semop waits.

But if the page is that popular, why is it not staying in cache?
Either which page is popular is moving around quickly (though it is hard
to see how that would be plausible if it represents the index
end-points), or there are so many simultaneously popular pages that
they can't all fit in cache.

So my theory is that you deleted a huge number of entries off either
end of the index, that transaction committed, and that commit became
visible to all.  Planning a mergejoin needs to dig through all those
tuples to probe the true end-point.  On the master, the index entries
quickly get marked as LP_DEAD so future probes don't have to do all
that work, but on the replicas those index hint bits are, for some
reason unknown to me, not getting set.  So it has to scour all the
heap pages which might hold the smallest/largest tuple on every
planning cycle, and that list of pages is very large, leading to
occasional IO stalls.

Or perhaps the master realizes the deleting transaction is
committed-visible-to-all, but the replicas believe there are still
some transactions which could care about them, and that is the reason
they are not getting hinted?
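
(One way to confirm that it is planning rather than execution that stalls: a
sketch, with hypothetical tables a and b standing in for the real
merge-joined relations, on 9.4 or later where EXPLAIN ANALYZE reports
planning time.)

explain (analyze)
select count(*) from a join b using (x);
-- on an affected replica the "Planning time:" line, not the execution nodes,
-- should absorb the stall while the planner probes the index end points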


>> > So far we've:
>> > 1. ruled out IO problems (enough io both in terms of bandwidth and iops)
>>
>> Are you saying that you are empirically not actually doing any IO
>> waits, or just that the IO capacity is theoretically sufficient?
>
> there are no iowaits per what iostat returns. Or, there are but very low.

If each IO wait has a pile-up of processes waiting behind it on
semops, then it could have a much larger effect than the raw numbers
would indicate.

Cheers,

Jeff




Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread Daniel Westermann
>>> 
>>>On Mon, 30 May 2016 17:35:34 +0200 (CEST), Daniel Westermann 
>>>(daniel.westerm...@dbi-services.com) wrote about "[GENERAL] Deleting a 
>>>table file does not raise an error when the table is touched 
>>>afterwards, why?" (in 
>>><184509399.5590018.1464622534207.javamail.zim...@dbi-services.com>): 
>>> 
>>>[snip] 
 Then I delete the file: 
 
 postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] rm 
 32809 
>>> 
>>>Actually, you are not deleting the file. You are asking the filesystem 
>>>driver to delete it when it has stopped being used. The directory 
>>>entry is removed immediately though, so that no other process can open i 
>>>t 
>>> 
 When doing the count(*) on the table again: 
 
 (postgres@[local]:5432) [sample] > select count(*) from t5; count 
 - 100 (1 row) 
 
 No issue in the log. This is probably coming from the cache, isn't 
 it? 
>>> 
>>>No, the file still exists because a PG back-end still has it open. 
>>> 
 Is this intended and safe? 
>>> 
>>>It is standard UNIX behaviour. It is not safe because you are not 
>>>supposed to do things that way. 
>>>- -- 
>>>Regards, 
>>> 
>>>Dave [RLU #314465] 

Thanks all for your answers. Maybe I should have provided more background 
information: We had an internal workshop today and one of the topics was 
backup/restore. One of the questions was what will happen if (for any reason) a 
file gets deleted so we tested it. I am aware that this is not the common use 
case. But still I want to understand why PostgreSQL works the way described. 
From the answers I understand this: 

- the file is not really deleted because PostgreSQL is still using it => 
correct? 
- if the above is correct why does PostgreSQL only write a partial file back to 
disk/wal? For me this still seems dangerous as potentially nobody will notice 
it 
- PostgreSQL assumes that someone with write access to the files knows what 
she/he is doing. ok, but still, in the real world cases like this happen (for 
whatever reason) 

Simon's answer: 
- It's a very good thing that we remain flying even with multiple bullet holes 
in the wings. 
Really? It depends on how you look at it, doesn't it? I'd prefer to get an 
error in this case; maybe I am wrong, but I would rather be notified that a 
file is missing than get results. 

Tom's answer: 
- Well, yes, but it would impose huge amounts of overhead in order to raise an 
error a bit sooner for a stupid user action. The ideal thing would be to 
prevent users from breaking their database in the first place --- but there's 
not much we can do in that direction beyond setting the directory permissions. 
Ok, makes sense. But "a bit sooner" than what? The count(*) just returns a 
result. From a user perspective I have no idea that the result is wrong. 

Thanks again 
Daniel 




Re: [GENERAL] Silent data loss in its pure form

2016-05-30 Thread Scott Marlowe
On Mon, May 30, 2016 at 10:57 AM, Alex Ignatov  wrote:
> Following this bug report from Red Hat
> https://bugzilla.redhat.com/show_bug.cgi?id=845233
>
> it raises a dangerous issue:
>
> If for any reason your data file is zeroed after a power loss (historically
> the best-known issue on XFS), then when you do
> select count(*) from your_table you get zero if the table fit in a single
> 1 GB (default) segment file, or some other number that does not equal the
> pre-power-loss count(*).
> No errors, nothing suspicious in the logs, no checksum errors. Nothing.
>
> Silent data loss in its pure form.
>
> And thank all the gods if you notice it before the backups that still
> contain good data are recycled.
> Keep this in mind while checking your "backups" in any form (pg_dump, or
> the more dangerous and less talked-about PITR file backup).
>
> Your data is always in danger under the "a zeroed data file is a normal
> file" paradigm.

That bug shows as having been fixed in 2012. Are there any modern,
supported distros that would still have it? It sounds really bad btw.




Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread David W Noon
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mon, 30 May 2016 17:35:34 +0200 (CEST), Daniel Westermann
(daniel.westerm...@dbi-services.com) wrote about "[GENERAL] Deleting a
table file does not raise an error when the table is touched
afterwards, why?" (in
<184509399.5590018.1464622534207.javamail.zim...@dbi-services.com>):

[snip]
> Then I delete the file:
> 
> postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] rm 
> 32809

Actually, you are not deleting the file. You are asking the filesystem
driver to delete it when it has stopped being used. The directory
entry is removed immediately though, so that no other process can open
it.

> When doing the count(*) on the table again:
> 
> (postgres@[local]:5432) [sample] > select count(*) from t5; count 
> - 100 (1 row)
> 
> No issue in the log. This is probably coming from the cache, isn't 
> it?

No, the file still exists because a PG back-end still has it open.

> Is this intended and safe?

It is standard UNIX behaviour. It is not safe because you are not
supposed to do things that way.
- -- 
Regards,

Dave  [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
david.w.n...@googlemail.com (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iEYEARECAAYFAldMcjcACgkQogYgcI4W/5T/xgCfaQBh6g0WCBRkeNOlRK4Kbc43
Gs4An0UXb+piw+BQUGJupPtN+oHJZjVH
=td+i
-END PGP SIGNATURE-




[GENERAL] Silent data loss in its pure form

2016-05-30 Thread Alex Ignatov
Following this bug report from Red Hat
https://bugzilla.redhat.com/show_bug.cgi?id=845233

it raises a dangerous issue:

If for any reason your data file is zeroed after a power loss (historically 
the best-known issue on XFS), then when you do
select count(*) from your_table you get zero if the table fit in a single 
1 GB (default) segment file, or some other number that does not equal the 
pre-power-loss count(*).

No errors, nothing suspicious in the logs, no checksum errors. Nothing.

Silent data loss in its pure form.

And thank all the gods if you notice it before the backups that still 
contain good data are recycled.
Keep this in mind while checking your "backups" in any form (pg_dump, or 
the more dangerous and less talked-about PITR file backup).

Your data is always in danger under the "a zeroed data file is a normal 
file" paradigm.
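
As a crude post-crash sanity check, something like the following can at least
flag relations whose main fork is smaller on disk than the catalogs expect (a
sketch only: relpages is just the estimate recorded by the last
VACUUM/ANALYZE, so treat a mismatch as a prompt to investigate, not as proof
of corruption):

select c.oid::regclass as rel,
       pg_relation_size(c.oid::regclass) as bytes_on_disk,
       c.relpages::bigint * current_setting('block_size')::bigint as bytes_expected
from pg_class c
where c.relkind in ('r', 'i', 't', 'm')
  and pg_relation_size(c.oid::regclass) <
      c.relpages::bigint * current_setting('block_size')::bigint;

A zero-length or truncated segment shows up here as an on-disk size far below
what relpages implies; a file that kept its size but was overwritten with
zeros would still need page-level checks (data checksums) to be caught.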



--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread Alex Ignatov


On 30.05.2016 18:35, Daniel Westermann wrote:

Hi,

I need to understand something: Lets assume I have a table t5 with 
1'000'000 rows:


(postgres@[local]:5432) [sample] > select count(*) from t5;
  count
---------
 1000000
(1 row)

Time: 2363.834 ms
(postgres@[local]:5432) [sample] >

I get the file for that table:

postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] oid2name 
-d sample -t t5

From database "sample":
  Filenode  Table Name
--
 32809  t5


Then I delete the file:

postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] rm 32809

When doing the count(*) on the table again:

(postgres@[local]:5432) [sample] > select count(*) from t5;
  count
---------
 1000000
(1 row)

No issue in the log. This is probably coming from the cache, isn't it? 
Is this intended and safe?


Then I restart the instance and do the select again:

2016-05-30 19:25:20.633 CEST - 9 - 2777 -  - @ FATAL:  could not open 
file "base/16422/32809": No such file or directory
2016-05-30 19:25:20.633 CEST - 10 - 2777 -  - @ CONTEXT:  writing 
block 8192 of relation base/16422/32809


(postgres@[local]:5432) [sample] > select count(*) from t5;
 count

 437920
(1 row)

Can someone please tell me the intention behind that? From my point of 
view this is dangerous. If nobody is monitoring the log (which sadly 
is the case in reality) nobody will notice that only parts of the 
table are there. Wouldn't it be much more safe to raise an error as 
soon as the table is touched?


PostgreSQL version:

(postgres@[local]:5432) [sample] > select version();
-[ RECORD 1 
]
version | PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by 
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit


Thanks in advance
Daniel



Hi. If you delete a file out from under an external process that has the 
file open, that external process will never notice. Only after it closes 
the file handle will you run into issues such as "file does not exist".


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread Tom Lane
Daniel Westermann  writes:
> Then I delete the file: 
> postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] rm 32809 

There's a reason why the database directory is not readable/writable
by unprivileged users: it's to prevent them from doing dumb things
like that.  People who do have write access on the database are
assumed to know better.

> Wouldn't it be much more safe to raise an error as soon as the table is 
> touched? 

Well, yes, but it would impose huge amounts of overhead in order to
raise an error a bit sooner for a stupid user action.  The ideal
thing would be to prevent users from breaking their database in the
first place --- but there's not much we can do in that direction
beyond setting the directory permissions.

regards, tom lane




Re: [GENERAL] pglogical

2016-05-30 Thread Simon Riggs
On 25 May 2016 at 18:26, Igor Neyman  wrote:

> This page:
>
> http://2ndquadrant.com/en/resources/pglogical/
>
> states that “It has also been submitted to PostgreSQL core as a candidate
> for inclusion in PostgreSQL 9.6.”
>
>
>
> So, my question is: did pglogical make the 9.6 core, or not?
>

pglogical is not in core for 9.6, though we have concrete plans for logical
replication to be in core for 10.0.


> The reason I’m asking is that pglogical is available (for 9.4 and 9.5) on
> various Linux platforms, and I’m stuck with Windows, hence the question.
>
> If it didn’t make 9.6 core, is there plan to include it in 9.7, or may be
> pglogical becomes available on Windows?
>
>
Currently pglogical does not support Windows.

It's free software, so funding for any new features or requirements is
always welcome.

-- 
Simon Riggs    http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread Simon Riggs
On 30 May 2016 at 16:35, Daniel Westermann <
daniel.westerm...@dbi-services.com> wrote:
...

> Then I delete the file:
>
...

> No issue in the log. This is probably coming from the cache, isn't it? Is
> this intended and safe?
>

Postgres manages your data for you. What you're doing is not a supported
use case and I recommend not to do that in the future.


> Can someone please tell me the intention behind that? From my point of
> view this is dangerous. If nobody is monitoring the log (which sadly is the
> case in reality) nobody will notice that only parts of the table are there.
> Wouldn't it be much more safe to raise an error as soon as the table is
> touched?
>

How would we know that an external agent had deleted the file? What action
should we take if we did notice?

It's a very good thing that we remain flying even with multiple bullet
holes in the wings.

-- 
Simon Riggs    http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread Francisco Olarte
Hi Daniel:

On Mon, May 30, 2016 at 5:35 PM, Daniel Westermann
 wrote:
> I get the file for that table:
...
> Then I delete the file:

Well, you corrupted the database and invoked undefined behaviour ( not
exactly, but postgres is not designed for this ).

> No issue in the log. This is probably coming from the cache, isn't it? Is
> this intended and safe?

It's probably not intended. It can come from the cache, or it can
arise from the fact that you are running a Unix flavour. In Unix (at
the OS level, in the classical filesystems) you do not delete a file,
you unlink it (remove the pointer to it in the directory); the file
is removed by the OS when nobody can reach it, which means nobody has
it open and no directory points to it (so no one else can open it; it
is like reference counting). (In fact this behaviour is used on purpose
for temporary files: you open the file, unlink it, and know that when
you exit, either normally or by crashing, the OS deletes it.) Postgres
has the file open, and probably does not bother checking whether
somebody removed it from the directory underneath it, as there is no
correct behaviour in this case, so no point in checking for it.

> Then I restart the instance and do the select again:
> 2016-05-30 19:25:20.633 CEST - 9 - 2777 -  - @ FATAL:  could not open file
> "base/16422/32809": No such file or directory

As expected.

> Can someone please tell me the intention behind that? From my point of view
> this is dangerous. If nobody is monitoring the log (which sadly is the case
> in reality) nobody will notice that only parts of the table are there.
> Wouldn't it be much more safe to raise an error as soon as the table is
> touched?

If you are going to implement idealised behaviour, prohibiting people
from deleting the file would be better.

Any user with minimum knowledge and enough privileges can put programs
in states from which they cannot recover; there is no point in
checking every corner case. In fact, if you can remove the file under
the server's feet you can probably also alter the running server's
memory: what do you think the correct behaviour would be for a 'poke
rand(),rand()' into the server process? It could keep triple-redundant
copies of every page and try to vote and detect corruption on each
instruction, but that is pointless.

Francisco Olarte.




Re: [GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread Melvin Davidson
On Mon, May 30, 2016 at 11:35 AM, Daniel Westermann <
daniel.westerm...@dbi-services.com> wrote:

> Hi,
>
> I need to understand something: Lets assume I have a table t5 with
> 1'000'000 rows:
>
> (postgres@[local]:5432) [sample] > select count(*) from t5;
>   count
> ---------
>  1000000
> (1 row)
>
> Time: 2363.834 ms
> (postgres@[local]:5432) [sample] >
>
> I get the file for that table:
>
> postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] oid2name -d
> sample -t t5
> From database "sample":
>   Filenode  Table Name
> --
>  32809  t5
>
>
> Then I delete the file:
>
> postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] rm 32809
>
> When doing the count(*) on the table again:
>
> (postgres@[local]:5432) [sample] > select count(*) from t5;
>   count
> ---------
>  1000000
> (1 row)
>
> No issue in the log. This is probably coming from the cache, isn't it? Is
> this intended and safe?
>
> Then I restart the instance and do the select again:
>
> 2016-05-30 19:25:20.633 CEST - 9 - 2777 -  - @ FATAL:  could not open file
> "base/16422/32809": No such file or directory
> 2016-05-30 19:25:20.633 CEST - 10 - 2777 -  - @ CONTEXT:  writing block
> 8192 of relation base/16422/32809
>
> (postgres@[local]:5432) [sample] > select count(*) from t5;
>  count
> 
>  437920
> (1 row)
>
> Can someone please tell me the intention behind that? From my point of
> view this is dangerous. If nobody is monitoring the log (which sadly is the
> case in reality) nobody will notice that only parts of the table are there.
> Wouldn't it be much more safe to raise an error as soon as the table is
> touched?
>
> PostgreSQL version:
>
> (postgres@[local]:5432) [sample] > select version();
> -[ RECORD 1
> ]
> version | PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc
> (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
>
> Thanks in advance
> Daniel
>
>
I have not heard of pg_essentials, but obviously it is external to the
PostgreSQL server.
PostgreSQL cannot tell if someone is intentionally messing with the file
system. You have removed only the first file node with rm 32809.
*First off, you should never do that.* *If you want to drop the table, then
do DROP TABLE t5;*
*That will drop all the file nodes for that table.*
*You may as well ask "If I shoot myself in the head, why don't I feel any
pain?"*.
*You could also do rm -r *.* if you really want to screw the pooch.* *The
O/S won't complain, but you will be very sorry!*
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Deleting a table file does not raise an error when the table is touched afterwards, why?

2016-05-30 Thread Daniel Westermann
Hi, 

I need to understand something: Lets assume I have a table t5 with 1'000'000 
rows: 

(postgres@[local]:5432) [sample] > select count(*) from t5; 
count 
---------
1000000
(1 row) 

Time: 2363.834 ms 
(postgres@[local]:5432) [sample] > 

I get the file for that table: 

postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] oid2name -d sample 
-t t5 
From database "sample": 
Filenode Table Name 
-- 
32809 t5 
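
(The same mapping is also available from SQL, without oid2name, via
pg_relation_filepath:)

select pg_relation_filepath('t5');   -- e.g. base/16422/32809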


Then I delete the file: 

postgres@pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] rm 32809 

When doing the count(*) on the table again: 

(postgres@[local]:5432) [sample] > select count(*) from t5; 
count 
---------
1000000
(1 row) 

No issue in the log. This is probably coming from the cache, isn't it? Is this 
intended and safe? 

Then I restart the instance and do the select again: 

2016-05-30 19:25:20.633 CEST - 9 - 2777 - - @ FATAL: could not open file 
"base/16422/32809": No such file or directory 
2016-05-30 19:25:20.633 CEST - 10 - 2777 - - @ CONTEXT: writing block 8192 of 
relation base/16422/32809 

(postgres@[local]:5432) [sample] > select count(*) from t5; 
count 
 
437920 
(1 row) 

Can someone please tell me the intention behind that? From my point of view 
this is dangerous. If nobody is monitoring the log (which sadly is the case in 
reality) nobody will notice that only parts of the table are there. Wouldn't it 
be much more safe to raise an error as soon as the table is touched? 

PostgreSQL version: 

(postgres@[local]:5432) [sample] > select version(); 
-[ RECORD 1 
]
 
version | PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 
4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit 

Thanks in advance 
Daniel 



Re: [GENERAL] How to find business partners from PostgreSQL communities?

2016-05-30 Thread CN
Hi! Adrian,

Many thanks for your wisdom!

I am going to take a look at the "announce" archive in order to get some
idea of its characteristics.

On Mon, May 30, 2016, at 10:30 PM, Adrian Klaver wrote:

> Deliver a working prototype of an idea that other folks can look at.

I do have product and services in place rather than just an idea.

Best Regards,
CN

-- 
http://www.fastmail.com - Email service worth paying for. Try it for free





Re: [GENERAL] How to find business partners from PostgreSQL communities?

2016-05-30 Thread Adrian Klaver

On 05/30/2016 02:15 AM, CN wrote:

I have a business plan for my product and services both developed on top
of PostgreSQL. I am looking for partners to form a start-up to work on
these product and services.

My ideal candidates are PostgreSQL endorsers. In addition, I hope the
technical details in my plan be exposed  during the discussions in
primitive stages as few as possible to potential competitors, non
PostgreSQL endorsers in particular.

Like most predecessors, I feel my idea is unprecedented. However, I also
understand that these days many people, including myself in most cases,
tend to interpret terminologies like "business plan", "idea",
"start-up", "cloud", "big data", etc. into "propaganda", "spam", or
worse - "scam". This is why I not only draft the following targets to
which I might send my solicitations, but also attach obvious concerns to
them:

- PostgreSQL mailing lists
  concerns:
(a) Will my messages be deemed as spam or harassment?
(b) Which mailing list is appropriate for my messages if it really
is? "-jobs" is definitely inappropriate because my pocket is empty
and I am unable to hire anyone.


Assuming you actually have something to announce, the only appropriate 
list I can think of is -announce. Elsewhere would be deemed spam, in my 
opinion.



Above all, the last result I want to get from my inquiries is *silence*.


Well you are basically 'cold calling' people, so I would expect there 
would be silence for the most part.




EnterpriseDB and Xtuple are two successful examples I heard of.
Hopefully I will be able to follow their patterns of success.  I wonder
how they achieved them, such as:

- How did those initiators find and attract in the first place those
individuals who are willing to discuss their great plans?
- How did they discuss their plans without fearing too many of their
sensitive plan details exposed before their companies were formed?

I need your enlightenment! What routes are the least offensive, yet most
effective, efficient, and appropriate ones for me to take to have my
awesome or yet another awful "idea" be delivered to those targeted
PostgreSQL endorsers?


Deliver a working prototype of an idea that other folks can look at.



Thank you in advance!

Best Regards,
CN




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] After replication failover: could not read block X in file Y read only 0 of 8192 bytes

2016-05-30 Thread Brian Sutherland
I'm running a streaming replication setup with PostgreSQL 9.5.2 and have
started seeing these errors on a few INSERTs:

ERROR:  could not read block 8 in file "base/3884037/3885279": read only 0 
of 8192 bytes

on a few tables. If I look at that specific file, it's only 6 blocks
long:

# ls -la base/3884037/3885279
-rw--- 1 postgres postgres 49152 May 30 12:56 base/3884037/3885279

It seems that this is the case on most tables in this state. I haven't
seen any error on SELECT, and I can SELECT * on all the tables I know
have this problem. The database machine is under reasonable load.

On some tables an "ANALYZE tablename" causes the error.

We recently had a streaming replication failover after loading a large
amount of data with pg_restore. The problems seem to have started after
that, but I'm not perfectly sure.

I have data_checksums switched on so am suspecting a streaming
replication bug.  Anyone know of a recent bug which could have caused
this?
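
For reference, here are a couple of catalog queries that map the failing file
back to its relation and show what the catalogs expect (a sketch, assuming
the file lives in the default tablespace, i.e. under base/, and a 9.4+ server
for pg_filenode_relation):

select pg_filenode_relation(0, 3885279);          -- 0 = the default tablespace
select relname, relpages, pg_relation_size(oid::regclass) as bytes_on_disk
from pg_class
where relfilenode = 3885279;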

-- 
Brian Sutherland




Re: [GENERAL] Slides for PGCon2016; "FTS is dead ? Long live FTS !"

2016-05-30 Thread Oleg Bartunov
On Sun, May 29, 2016 at 10:04 PM, Karsten Hilbert
 wrote:
>>> I submitted slides to pgcon site, but it usually takes awhile, so you can
>>> download our presentation directly
>>> http://www.sai.msu.su/~megera/postgres/talks/pgcon-2016-fts.pdf
>
> Looking at slide 39 (attached) I get the impression that I
> should be able to do the following:
>
>
> - turn a coding system (say, ICD-10) into a dictionary
>   by splitting the terms into single words
>
> say, "diabetes mellitus -> "diabetes", "mellitus"
>
> - define stop words like "left", "right", ...
>
> say, "fracture left ulna" -> the "left" doesn't
> matter as far as coding is concerned
>
> - also turn that coding system into queries by splitting
>   the terms into single words, concatenating them
>   with "&", and setting the ICD 10 code as tag on them
>
> say, "diabetes mellitus" -> "diabetes & mellitus [E11]"
>
> - run an inverse FTS (FQS) against a user supplied string
>   thereby finding queries (= tags = ICD10 codes) likely
>   relevant to the input
>
> say, to_tsvector("patient was suspected to suffer from diabetes 
> mellitus")
> -> tag = E11
>
>
> Possible, not possible, insane, unintended use ?

Why not? It's the same kind of usage I showed on slide #39.

create table icd10 (q tsquery, code text);
insert into icd10 values(to_tsquery('diabetes & mellitus'), '[E11]');
select * from icd10 where to_tsvector('patient was suspected to suffer
from diabetes mellitus') @@ q;
           q           | code
-----------------------+-------
 'diabet' & 'mellitus' | [E11]
(1 row)



>
> Thanks,
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>
>




Re: [GENERAL] UUID datatype

2016-05-30 Thread Sridhar N Bamandlapally
This I got; I need some implicit way. Maybe we can write this in a RULE on
SELECT?

Thanks
Sridhar
OpenText

On Mon, May 30, 2016 at 1:05 PM, Michael Paquier 
wrote:

> On Mon, May 30, 2016 at 4:25 PM, Sridhar N Bamandlapally
>  wrote:
> > Hi
> >
> > Is there a way to implicit SELECT on UUID datatype in uppercase ?
>
> You could always cast an UUID back to text and use that with upper(),
> though you are not explaining what you are tying to achieve:
> =# select upper(gen_random_uuid()::text);
>
>  upper
> --
>  057A3BC2-0E62-4D68-B01A-C44D20F91450
> (1 row)
> --
> Michael
>
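
For the implicit form, a view does the job without writing a RULE by hand (a
view's SELECT is implemented as an ON SELECT rule anyway). A minimal sketch,
assuming a hypothetical table t with a uuid column id; gen_random_uuid()
needs the pgcrypto extension before PostgreSQL 13:

create extension if not exists pgcrypto;
create table t (id uuid default gen_random_uuid(), payload text);
create view t_upper as
    select upper(id::text) as id, payload from t;

select id from t_upper;   -- e.g. 057A3BC2-0E62-4D68-B01A-C44D20F91450

Note that the view exposes id as text rather than uuid, so comparisons
against uuid values need a cast back.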


[GENERAL] How to find business partners from PostgreSQL communities?

2016-05-30 Thread CN
I have a business plan for my product and services both developed on top
of PostgreSQL. I am looking for partners to form a start-up to work on
these product and services.

My ideal candidates are PostgreSQL endorsers. In addition, I hope the
technical details in my plan be exposed  during the discussions in
primitive stages as few as possible to potential competitors, non
PostgreSQL endorsers in particular.

Like most predecessors, I feel my idea is unprecedented. However, I also
understand that these days many people, including myself in most cases,
tend to interpret terminologies like "business plan", "idea",
"start-up", "cloud", "big data", etc. into "propaganda", "spam", or
worse - "scam". This is why I not only draft the following targets to
which I might send my solicitations, but also attach obvious concerns to
them:

- PostgreSQL mailing lists
  concerns:
(a) Will my messages be deemed as spam or harassment?
(b) Which mailing list is appropriate for my messages if it really
is? "-jobs" is definitely inappropriate because my pocket is empty
and I am unable to hire anyone.

- manually compiling the names and their associated e-mail addresses of
core developers, hackers, contributers, users, etc.
  concerns:
(a) This is a very ineffective approach.
(b) I recall the horrible past that someone harvested years ago the
e-mail addresses in mailing list archives and used them in a way I
no longer remember now. That activity caused huge anger from many
community members.

- LinkedIn private message services
  concern: First I will have to invite many people I do not really know
  to link me in, then I ultimately fall into their black list.

- LinkedIn interest groups
  concern: I have a feeling that many articles and discussions posted in
  the groups I joined are in essence propaganda coated with technology.

- Twitter
  concerns:
(a) People do not like advertisements. Such messages are most likely
be ignored.
(b) I have few followers.

- Requesting for a new PostgreSQL mailing list, which might be called
"-biz-opportunities" or "-biz-partners"
  concern: My messages will be delivered only to few recipients because
  new list will have only quite a few subscribers.

Above all, the last result I want to get from my inquiries is *silence*.

EnterpriseDB and Xtuple are two successful examples I have heard of.
Hopefully I will be able to follow their patterns of success.  I wonder
how they achieved it, such as:

- How did those initiators find and attract, in the first place, those
individuals who are willing to discuss their great plans?
- How did they discuss their plans without fearing too many of their
sensitive plan details exposed before their companies were formed?

I need your enlightenment! What routes are the least offensive, yet most
effective, efficient, and appropriate ones for me to take to have my
awesome or yet another awful "idea" be delivered to those targeted
PostgreSQL endorsers?

Thank you in advance!

Best Regards,
CN

-- 
http://www.fastmail.com - Or how I learned to stop worrying and
  love email again





Re: [GENERAL] UUID datatype

2016-05-30 Thread Michael Paquier
On Mon, May 30, 2016 at 4:25 PM, Sridhar N Bamandlapally
 wrote:
> Hi
>
> Is there a way to implicit SELECT on UUID datatype in uppercase ?

You could always cast a UUID back to text and use that with upper(),
though you are not explaining what you are trying to achieve:
=# select upper(gen_random_uuid()::text);

 upper
--
 057A3BC2-0E62-4D68-B01A-C44D20F91450
(1 row)
-- 
Michael




[GENERAL] UUID datatype

2016-05-30 Thread Sridhar N Bamandlapally
Hi

Is there a way to implicitly SELECT a UUID datatype in uppercase?

Please

Thanks
Sridhar