Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Karsten Hilbert
On Thu, Dec 08, 2016 at 07:47:56PM -0500, Metare Solve wrote:

> But, what I'm gathering is, you think this is a crutch too. Will Python
> enable me to do the same things that I do with that kind of big data
> processing program?

Yes and no. Python will enable you to do _way_ more (because
you tell it what to do) but at first you can't do much of
anything (because you need to learn programming in it first
and don't have buttons to click).

> Should I be aiming for that as the ultimate rather than "mastering" KNIME

Yes.

> I was educated as a pianist, writer,
> and historian, lol. I have a lot to learn.

Well, not really.

Pianist - you need to tell the piano / computer _exactly_
  what to do and there's a special notation for it
  in each realm

Writer - plan your strategy for evolving a story / solving a problem

Historian - diligently research what you don't know and don't
know you don't know

The one thing people crossing over from other fields typically
don't bring is strong set-related skills. Maybe you can take a
class on set-based thinking somewhere ?  Other than that try to
see things from your background.
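For a taste of what I mean by set thinking, Python's built-in set type
gives the flavor (a toy sketch with made-up names; the rough SQL
counterparts are INTERSECT, EXCEPT, and UNION):

```python
# Toy example: set operations are the mental model behind much of SQL.
customers = {"ann", "bob", "cho", "dee"}
subscribers = {"bob", "dee", "eve"}

print(customers & subscribers)  # intersection -> rows present in both "tables"
print(customers - subscribers)  # difference   -> customers who don't subscribe
print(customers | subscribers)  # union        -> everyone seen anywhere
```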

Regards,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to loop through a multi-dimensional array in PL/pgSQL

2016-12-08 Thread VENKTESH GUTTEDAR
Hello,

Please help me in accessing a multi-dimensional array in PostgreSQL
PL/pgSQL.

for i in array_lower(product_list, 1) .. array_upper(product_list, 1)
LOOP
product_list[i][0];
END LOOP;

Is the above code right?

Or is there another way to access the elements? I am getting NULL for
product_list[i][0].
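One thing I noticed while reading the docs: PostgreSQL array subscripts
start at 1 by default, so product_list[i][0] may simply be out of range.
A self-contained sketch with a nested loop over both dimensions (the
sample array is invented):

```sql
-- Sketch: PostgreSQL array subscripts start at 1 by default, so
-- product_list[i][0] is out of range and yields NULL.
DO $$
DECLARE
    product_list integer[][] := ARRAY[[1, 2], [3, 4]];
BEGIN
    FOR i IN array_lower(product_list, 1) .. array_upper(product_list, 1) LOOP
        FOR j IN array_lower(product_list, 2) .. array_upper(product_list, 2) LOOP
            RAISE NOTICE 'product_list[%][%] = %', i, j, product_list[i][j];
        END LOOP;
    END LOOP;
END
$$;
```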

Any help would be appreciated.

-- 
Regards :
Venktesh Guttedar.


Re: [GENERAL] Would like to know if below scenario is possible for getting page/block corruption

2016-12-08 Thread Sreekanth Palluru
Correcting typos
Michael,
Thanks for your prompt reply

In my environment those two parameters are enabled. Just to give you a brief
of the PG database environment:
Version 9.2.4.1
Windows 7 Professional SP1
fsync=on
full_page_writes=on
wal_sync_method=open_datasync

My customer builds cancer-related systems, and we ship Dell
systems with a software image that contains PG. A few of the customers are
facing corruption issues, say around 5%.
We are in the process of reproducing the issue. Since there are different
variables involved in reproducing it (Dell HW, software image
versions, application versions, write-cache settings on RAID/disk, RAID
controllers with no battery backup, power failures, etc.), I am trying
to understand whether there is a possibility that PG can end up with
corrupted blocks due to a system crash even though we set these parameters.

a) As I understand it, fsync will write the block from memory to disk, and
the block just after step 4) would have been written to disk, assuming the
disk cache did not lie.
b) And assume that full_page_writes=on has dumped the whole 8k block into WAL
before it updates the block, i.e. after step 2) and before step 3).
c) If a crash happens after step 4), since there is no PageHeader data,
after the system restarts PG will complain about a corrupted block or an
invalid header.

Please correct me if my understanding of the interplay of fsync and
full_page_writes is correct. If so, I see that there is a possibility of
getting corruption whenever PG extends a relation and a crash happens just
after step 4).

I am not sure whether the same applies to an existing page (not a new page),
and how it is handled if there is a PageHeader available as part of
full_page_writes: can the same corruption happen, or can PG recover the
database? I am not sure the recovery process can update the PageHeader from
the WAL records it wrote (recptr) as part of step 4) during recovery.

-Sreekanth


On Fri, Dec 9, 2016 at 2:09 PM, Sreekanth Palluru  wrote:

> Michael,
> Thanks for your prompt reply
>
> In my environment those two parameters are enabled . Just give you brief
> of PG database envornment
> Version 9.2.4.1
> Windows 7 Professional SP1
> fsync=on
> full_page_writes=on
> wal_sync_method=open_datasync
>
> My Customer is into building Cancer related systems and we ship Dell
> systems with our software image contains PG. Few of the customers are
> facing corruption issues say around 5% .
> We are in process of reproducing the issue , since there are different
> variables involved in reproducing issue like  Dell HW, Software image
> versions, Application versions, write-cache settings RAID/Disk, RAID
> controllers with no backup and power failures  etc  , I am trying to
> understand is there possibility that PG can end up in having corrupted
> blocks due to system crash.
>
> 1)As I understand fsycn will write the block from memory to disk and block
> just after step 4) would have written disk assuming disk cache did not lie
> 2)and assume that full_page_writes=on has dumped the whole 8k block into
> WAL
> before it updates block i.e. after step 2) and before 3)
> 3) if crash happens after step4) , since there is no PageHeader data ,
> after system restarts PG will complain that it is corrupted block or
> invalid header
>
> Please correct me if my understanding about play fsync and
> full_page_writes are correct ? if so , I see that there is possibility
> getting corruptions whenever PG extends a relation and crash happens just
> after step 4)
>
> I am not sure will the same applicable to  existing page (not a new page)
> and how it handles if there is PageHeader available as part of
> full_page_writes, will same corruption can be happen or will PG can recover
> database as I am not sure
> recovery process can update the PageHeader   from WAL records it wrote recptr
> as part of step 4) during the recovery process .
>
>
> -Sreekanth
>
>
>
> On Fri, Dec 9, 2016 at 12:44 PM, Michael Paquier <
> michael.paqu...@gmail.com> wrote:
>
>> (Please don't top-post, that's annoying.)
>>
>> On Fri, Dec 9, 2016 at 10:28 AM, Sreekanth Palluru 
>> wrote:
>> > Can I generalize that, if after step 4)  page ( new page or old page)
>> got
>> > written disk from buffer  and crash happens between step 4) and 5)  we
>> > always get
>> > block corruption issues with Postgres which can only be recovered by
>> setting
>> > zero_damaged_pages if we just have pg_dump backups and we are OK lose
>> data
>> > in the affected blocks?
>> >
>> > I am also looking at ways of reproducing the issue ? appreciate your
>> advice
>> > on it ?
>>
>> Postgres is designed to avoid such corruption problems if
>> full_page_writes and fsync are enabled; that's a cornerstone of its
>> reliability. If you can create a self-contained scenario able to
>> reproduce a failure, that could be treated as a Postgres bug, but you
>> are giving no evidence that this is the case.
>> --
>> Michael
>>
>
>
>
> --
> Regards
> Sreekanth
>



-- 
Regards
Sreekanth


Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Tom Lane
Adrian Klaver  writes:
> On 12/08/2016 04:47 PM, Igor Korot wrote:
>> Igors-MacBook-Air:dbhandler igorkorot$ psql -U postgres -d postgres
>> psql: could not connect to server: No such file or directory
>> Is the server running locally and accepting
>> connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
>> 
>> Any idea why I can't connect?

> Because you are trying a local socket connection and psql cannot find 
> the socket. Not sure where the Apple install would create it, so cannot 
> help there.

I think "/var/pgsql_socket" *is* the Apple-blessed location; at least,
"strings" finds that string in /usr/lib/libpq.5.6.dylib on my Mac.

So I guess Igor is using Apple's copy of psql, or anyway a psql that is
linked to Apple's build of libpq, but the server he wishes to talk to is
using some other directory to keep the socket file in.  The out-of-the-box
default directory is /tmp, but if this server was built by someone else
they might have changed that.  Look around for a socket file named
".s.PGSQL.5432".

> FYI, not having the -h tells psql to connect using a socket.

Also, you can use -h /path/to/socket/dir to specify connecting
using a socket file in a specific directory.

regards, tom lane




Re: [GENERAL] Would like to know if below scenario is possible for getting page/block corruption

2016-12-08 Thread Sreekanth Palluru
Michael,
Thanks for your prompt reply

In my environment those two parameters are enabled. Just to give you a brief
of the PG database environment:
Version 9.2.4.1
Windows 7 Professional SP1
fsync=on
full_page_writes=on
wal_sync_method=open_datasync

My customer builds cancer-related systems, and we ship Dell
systems with a software image that contains PG. A few of the customers are
facing corruption issues, say around 5%.
We are in the process of reproducing the issue. Since there are different
variables involved in reproducing it (Dell HW, software image
versions, application versions, write-cache settings on RAID/disk, RAID
controllers with no battery backup, power failures, etc.), I am trying to
understand whether there is a possibility that PG can end up with corrupted
blocks due to a system crash.

1) As I understand it, fsync will write the block from memory to disk, and
the block just after step 4) would have been written to disk, assuming the
disk cache did not lie.
2) And assume that full_page_writes=on has dumped the whole 8k block into WAL
before it updates the block, i.e. after step 2) and before step 3).
3) If a crash happens after step 4), since there is no PageHeader data,
after the system restarts PG will complain about a corrupted block or an
invalid header.

Please correct me if my understanding of the interplay of fsync and
full_page_writes is correct. If so, I see that there is a possibility of
getting corruption whenever PG extends a relation and a crash happens just
after step 4).

I am not sure whether the same applies to an existing page (not a new page),
and how it is handled if there is a PageHeader available as part of
full_page_writes: can the same corruption happen, or can PG recover the
database? I am not sure the recovery process can update the PageHeader from
the WAL records it wrote (recptr) as part of step 4) during recovery.


-Sreekanth



On Fri, Dec 9, 2016 at 12:44 PM, Michael Paquier 
wrote:

> (Please don't top-post, that's annoying.)
>
> On Fri, Dec 9, 2016 at 10:28 AM, Sreekanth Palluru 
> wrote:
> > Can I generalize that, if after step 4)  page ( new page or old page)
> got
> > written disk from buffer  and crash happens between step 4) and 5)  we
> > always get
> > block corruption issues with Postgres which can only be recovered by
> setting
> > zero_damaged_pages if we just have pg_dump backups and we are OK lose
> data
> > in the affected blocks?
> >
> > I am also looking at ways of reproducing the issue ? appreciate your
> advice
> > on it ?
>
> Postgres is designed to avoid such corruption problems if
> full_page_writes and fsync are enabled; that's a cornerstone of its
> reliability. If you can create a self-contained scenario able to
> reproduce a failure, that could be treated as a Postgres bug, but you
> are giving no evidence that this is the case.
> --
> Michael
>



-- 
Regards
Sreekanth


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Steve Litt
On Thu, 8 Dec 2016 17:08:41 -0800 (PST)
Rich Shepard  wrote:

> On Thu, 8 Dec 2016, Adrian Klaver wrote:
> 
> > GUI's only get you so far. At some point you will need to dive
> > deeper to get what you want. I am mostly a self-taught
> > programmer (biologist by training) so I understand the hill you are
> > facing. The language I use is Python, mainly because to me it made
> > sense. For you it might be a good choice as it is quite prevalent
> > in the data analysis world. There are a lot of places to turn to to
> > learn how to use it. My suggestion for seeing if it is something
> > you can use would be to start here:  
> 
> +1 for Python
> 
> > Go through at least the Introduction to Python part. The rest has
> > to do with Django, Web framework built using Python.  
> 
>Mike Driscoll has a blog (I don't recall the URL) and his Python
> 101 is a very good introduction. There are also a lot of online
> tutorials.
> 
>I would suggest starting by learning a general programming language
> (specifically Python). That puts you in a learnable mindset. 

I absolutely concur that Python's a great starting point. It's easy, it
puts you in a learnable mindset, there are many directions you can
follow once you know Python, and Python knowledge might even get you a
job.

> SQL is a
> set-oriented language and is quite different from procedural, object
> oriented, and functional languages.

I think she could learn SQL concurrently with Python, as long as she
completely understands that they don't do anything the same way as each
other, and they're not even for the same purpose.
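For example, both can be practiced at once with Python's standard
library, which ships the SQLite engine (not Postgres, and the table and
data here are invented, but the division of labor is the point):

```python
import sqlite3

# In-memory SQLite database: practice SQL and Python side by side.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crimes (precinct TEXT, moon_phase TEXT, n INTEGER)")
conn.executemany(
    "INSERT INTO crimes VALUES (?, ?, ?)",
    [("1st", "full", 12), ("1st", "new", 7), ("2nd", "full", 5)],
)

# Set-oriented SQL does the aggregation...
rows = conn.execute(
    "SELECT moon_phase, SUM(n) FROM crimes GROUP BY moon_phase ORDER BY moon_phase"
).fetchall()

# ...and procedural Python does whatever comes next.
for phase, total in rows:
    print(f"{phase}: {total}")
conn.close()
```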

SteveT

Steve Litt 
November 2016 featured book: Quit Joblessness: Start Your Own Business
http://www.troubleshooters.com/startbiz




Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:47 PM, Igor Korot wrote:

Hi, guys,





This is the result of running "SQL shell":

[code]
Last login: Thu Dec  8 19:46:41 on ttys001
Igors-MacBook-Air:~ igorkorot$ /Library/PostgreSQL/9.1/scripts/runpsql.sh; exit
Server [localhost]:
Database [postgres]:
Port [5432]:
Username [postgres]:
Password for user postgres:
psql (9.1.24)
Type "help" for help.

postgres=#
[/code]

And this is the result of running "psql" command in Terminal:

[code]
Igors-MacBook-Air:dbhandler igorkorot$ psql -U postgres -d postgres
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
[/code]

Any idea why I can't connect?


Because you are trying a local socket connection and psql cannot find 
the socket. Not sure where the Apple install would create it, so cannot 
help there. However if you replicate the connection that worked with SQL 
shell you should be able to connect. So:


psql -U postgres -d postgres -h localhost

FYI, not having the -h tells psql to connect using a socket.



Also, is PostgreSQL initially set to use "transaction mode"? I.e., should
every SQL command end with COMMIT?


That is more of a client side attribute. For psql the default is to 
enclose every command in a transaction, so no explicit BEGIN/COMMIT is 
needed:


https://www.postgresql.org/docs/9.5/static/app-psql.html
"AUTOCOMMIT

When on (the default), each SQL command is automatically committed 
upon successful completion. To postpone commit in this mode, you must 
enter a BEGIN or START TRANSACTION SQL command. When off or unset, SQL 
commands are not committed until you explicitly issue COMMIT or END. The 
autocommit-off mode works by issuing an implicit BEGIN for you, just 
before any command that is not already in a transaction block and is not 
itself a BEGIN or other transaction-control command, nor a command that 
cannot be executed inside a transaction block (such as VACUUM).


Note: In autocommit-off mode, you must explicitly abandon any 
failed transaction by entering ABORT or ROLLBACK. Also keep in mind that 
if you exit the session without committing, your work will be lost.


Note: The autocommit-on mode is PostgreSQL's traditional 
behavior, but autocommit-off is closer to the SQL spec. If you prefer 
autocommit-off, you might wish to set it in the system-wide psqlrc file 
or your ~/.psqlrc file.


"
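For what it's worth, the same on/off distinction shows up in client
libraries too. Here is a sketch using Python's stdlib sqlite3 driver (a
different database engine, but the autocommit concept carries over;
isolation_level=None is that driver's autocommit switch):

```python
import sqlite3

# isolation_level=None -> autocommit: every statement commits immediately.
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE t (x INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")   # already durable, no commit() needed

# Default mode: the driver opens a transaction implicitly before DML;
# without commit(), a rollback throws the work away (conceptually like
# psql with AUTOCOMMIT off).
managed = sqlite3.connect(":memory:")
managed.execute("CREATE TABLE t (x INTEGER)")
managed.execute("INSERT INTO t VALUES (1)")
managed.rollback()                          # the INSERT is gone
print(managed.execute("SELECT COUNT(*) FROM t").fetchall())
```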



Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Would like to know if below scenario is possible for getting page/block corruption

2016-12-08 Thread Michael Paquier
(Please don't top-post, that's annoying.)

On Fri, Dec 9, 2016 at 10:28 AM, Sreekanth Palluru  wrote:
> Can I generalize that, if after step 4)  page ( new page or old page)  got
> written disk from buffer  and crash happens between step 4) and 5)  we
> always get
> block corruption issues with Postgres which can only be recovered by setting
> zero_damaged_pages if we just have pg_dump backups and we are OK lose data
> in the affected blocks?
>
> I am also looking at ways of reproducing the issue ? appreciate your advice
> on it ?

Postgres is designed to avoid such corruption problems if
full_page_writes and fsync are enabled; that's a cornerstone of its
reliability. If you can create a self-contained scenario able to
reproduce a failure, that could be treated as a Postgres bug, but you
are giving no evidence that this is the case.
-- 
Michael




Re: [GENERAL] Would like to know if below scenario is possible for getting page/block corruption

2016-12-08 Thread Sreekanth Palluru
Michael,
Can I generalize that, if after step 4) a page (new page or old page) got
written to disk from the buffer and a crash happens between step 4) and
step 5), we always get block corruption issues with Postgres, which can
only be recovered by setting zero_damaged_pages, if we just have pg_dump
backups and we are OK to lose data in the affected blocks?

I am also looking at ways of reproducing the issue; I would appreciate your
advice on it.


On Fri, Dec 9, 2016 at 12:01 PM, Michael Paquier 
wrote:

> On Fri, Dec 9, 2016 at 9:46 AM, Sreekanth Palluru 
> wrote:
> > Hi ,
> > I am working on page corruption issue want to know if below scenario is
> > possible
> >
> > 1)  Insert command from client , I understand heap_insert is called from
> > heampam.c
> > 2) Let us say table is full and relation is extended and added a new
> block
> > 3) Tuple is inserted into new page for the block
> RelationPutHeapTuple/hio.c
> > 4) Later  WAL record is inserted  through recptr = XLogInsert(RM_HEAP_ID,
> > info);
> > 5) Then backend update the PageHeader with WAL LSN details
> PageSetLSN(page,
> > recptr);
> >
> > If my server got crashed after step 4) is there a possibility that after
> > postgres database restart I get  below error when I access the relation
> or
> > vacuum is run on this relation or taking backup through pg_dump  ?
> > ERROR:  invalid page header in block 204 of relation base/16413/16900 ?
>
> So the block is corrupted. You may want to move to another server.
>
> > or
> > Postgres can automatically recover the page  without throwing any error ?
>
> At crash recovery, Postgres would redo things from a point where
> everything was consistent on disk. If this corrupted page made it to
> disk, there is not much that can be done except restoring from a
> backup. You could as well zero_damaged_pages to help here, but you
> would lose the data on this page, still you would be able to perform
> pg_dump and get back as much data as you can. At the same time,
> corruption can spread as well as if that's a hardware problem, so you
> are just seeing the beginning of a series of problems.
> --
> Michael
>



-- 
Regards
Sreekanth


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 05:14 PM, Metare Solve wrote:

I'm happy with postgres. It came recommended by a girl whose skills I
totally respect. When i told my old boss I was going to use it, she
redirected me to MySQL because that's what our CRM used. I've always
looked back at postgres, though, and for some reason like it,
conceptually, better.


Something I did not mention before, you can use different languages 
inside Postgres:


https://www.postgresql.org/docs/9.5/static/xplang.html

https://www.postgresql.org/docs/9.5/static/external-pl.html

In the end what you choose to work with is probably determined as much 
by what makes sense to you as anything else.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Metare Solve
I'm happy with postgres. It came recommended by a girl whose skills I
totally respect. When i told my old boss I was going to use it, she
redirected me to MySQL because that's what our CRM used. I've always looked
back at postgres, though, and for some reason like it, conceptually, better.


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Rich Shepard

On Thu, 8 Dec 2016, Metare Solve wrote:


Will Python enable me to do the same things that I do with that kind of
big data processing program?


  Yes.


I need SOMETHING to use for analysis for the tests I'm going to have to
take when I job hunt so I'm exploring KNIME right now and doing the data
manipulation into what I'd do in access for analysis. I know, I need stats
too. You were educated as a biologist? I was educated as a pianist,
writer, and historian, lol. I have a lot to learn.


  While R is a great statistical and data analytical language (and can do
general programming, too), you'd be better off learning Python first and
looking beyond that only when you hit a dead end.

Rich




Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:47 PM, Metare Solve wrote:

I'll check out the django girls, thanks. There's also a group of women
coders in DC who hold a lot of trainings and events. I'll see if they
have a level I python coming up.

You made me think about something about the GUI's: In my old job I used
alteryx because we were working with datasets from 20M-250M. When I got
laid off, I lost access to alteryx (unless I want to get a free trial).
I asked a guy who's pretty familiar with my skillset what I should be
using, R? He said to look for an open source gui/alteryx alternative. I
found one called KNIME that I like a lot.


R is very much a viable language for statistical analysis. FYI, Python 
has bindings to R; in other words, you can use Python to work with R. 
Though if you go that route I would suggest Pandas:


http://pandas.pydata.org/
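Even before reaching for Pandas or R, Python's standard library covers
the basics; a tiny sketch (the numbers are made up):

```python
import statistics

# Stdlib-only warm-up before Pandas/R: basic descriptive statistics.
crime_counts = [12, 7, 5, 9, 14, 8]  # invented sample data

print(statistics.mean(crime_counts))    # arithmetic mean (55/6 here)
print(statistics.median(crime_counts))  # middle value of the sorted data
print(statistics.stdev(crime_counts))   # sample standard deviation
```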




But, what I'm gathering is, you think this is a crutch too. Will Python
enable me to do the same things that I do with that kind of big data
processing program? Should I be aiming for that as the ultimate rather
than "mastering" KNIME (or whatever, just looking to design my
curriculum). At my old job when I asked the same guy about SQL and how
some of the coding bits we did in the tool configuration looked like
SQL, he said alteryx is SQL on crack.

I need SOMETHING to use for analysis for the tests I'm going to have to
take when I job hunt so I'm exploring KNIME right now and doing the data
manipulation into what I'd do in access for analysis. I know, I need
stats too. You were educated as a biologist? I was educated as a
pianist, writer, and historian, lol. I have a lot to learn.


There is nothing wrong with GUI's per se, it is just that you often are 
constrained by whatever tools the developers provide. They are a good way 
to start though, especially if they give you the ability to see what 
they are passing to the datasource. Then you can learn as you work.




However, we're veering away from postgres, which may not be appropriate
on the forum. I'd like to continue the conversation, though.


Hey, if it gets you using Postgres, all is good.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Rich Shepard

On Thu, 8 Dec 2016, Adrian Klaver wrote:


GUI's only get you so far. At some point you will need to dive deeper to
get what you want. I am mostly a self-taught programmer (biologist by training)
so I understand the hill you are facing. The language I use is Python,
mainly because to me it made sense. For you it might be a good choice as
it is quite prevalent in the data analysis world. There are a lot of
places to turn to to learn how to use it. My suggestion for seeing if it
is something you can use would be to start here:


+1 for Python


Go through at least the Introduction to Python part. The rest has to do
with Django, Web framework built using Python.


  Mike Driscoll has a blog (I don't recall the URL) and his Python 101 is a
very good introduction. There are also a lot of online tutorials.

  I would suggest starting by learning a general programming language
(specifically Python). That puts you in a learnable mindset. SQL is a
set-oriented language and is quite different from procedural, object
oriented, and functional languages.

  You can learn on your own, and help is readily available on various
maillists. I'm an ecologist who learned FORTRAN (mostly self-taught) in grad
school, C on my own, and then moved to Python on my own. If you're
interested in data analysis Python's the way to go. Your description of what
you want to accomplish can be achieved using three components: postgres (and
SQL as the language it uses), Python as the glue between the database back
end and the user interface, and a widget set (PyQt5 is a good one) for the
UI.

  If you want to develop a web-based application then replace the UI with
django.
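A minimal sketch of that three-part split, with SQLite standing in for
postgres so it runs without a server (the class, table, and column names
are all invented for illustration):

```python
import sqlite3

class MeasurementStore:
    """Back-end layer: the only code that speaks SQL (names are made up)."""
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS obs (site TEXT, value REAL)")

    def add(self, site, value):
        with self.conn:  # context manager commits, or rolls back on error
            self.conn.execute("INSERT INTO obs VALUES (?, ?)", (site, value))

    def totals(self):
        return self.conn.execute(
            "SELECT site, SUM(value) FROM obs GROUP BY site ORDER BY site"
        ).fetchall()

# "UI" layer: in a real app this would be PyQt5 or django; here, just print.
store = MeasurementStore()
store.add("upstream", 1.5)
store.add("upstream", 2.0)
store.add("downstream", 0.75)
for site, total in store.totals():
    print(site, total)
```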

Rich





Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread John R Pierce

On 12/8/2016 4:45 PM, Adrian Klaver wrote:

I don't know anything about Tableau, but I don't see anything on their
fluff-heavy website mentioning postgresql, rather, it all seems


http://onlinehelp.tableau.com/current/pro/desktop/en-us/help.htm#examples_postgresql.html


ah, more digging... the Desktop Professional Edition ($2000) and the 
Server Edition ($1 for 10 users) support postgresql.  The Desktop 
Personal Edition ($999) only supports:  Microsoft Excel; text files 
(like .csv files); Microsoft Access for Windows; JSON; statistical 
files; Tableau Data Extract; OData; Google Sheets; Web Data Connector




--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Would like to know if below scenario is possible for getting page/block corruption

2016-12-08 Thread Michael Paquier
On Fri, Dec 9, 2016 at 9:46 AM, Sreekanth Palluru  wrote:
> Hi ,
> I am working on page corruption issue want to know if below scenario is
> possible
>
> 1)  Insert command from client , I understand heap_insert is called from
> heampam.c
> 2) Let us say table is full and relation is extended and added a new block
> 3) Tuple is inserted into new page for the block RelationPutHeapTuple/hio.c
> 4) Later  WAL record is inserted  through recptr = XLogInsert(RM_HEAP_ID,
> info);
> 5) Then backend update the PageHeader with WAL LSN details  PageSetLSN(page,
> recptr);
>
> If my server got crashed after step 4) is there a possibility that after
> postgres database restart I get  below error when I access the relation or
> vacuum is run on this relation or taking backup through pg_dump  ?
> ERROR:  invalid page header in block 204 of relation base/16413/16900 ?

So the block is corrupted. You may want to move to another server.

> or
> Postgres can automatically recover the page  without throwing any error ?

At crash recovery, Postgres would redo things from a point where
everything was consistent on disk. If this corrupted page made it to
disk, there is not much that can be done except restoring from a
backup. You could also set zero_damaged_pages to help here, but you
would lose the data on this page; still, you would be able to perform
pg_dump and get back as much data as you can. At the same time,
corruption can spread as well if that's a hardware problem, so you
may just be seeing the beginning of a series of problems.
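For reference, the ordering behind that guarantee can be sketched like
this (a simplified illustration of the steps discussed above, not actual
PostgreSQL source):

```
-- the steps from this thread, annotated (illustration only):

1. heap_insert() called for the INSERT
2. relation extended; new block added
3. tuple placed into the new page (RelationPutHeapTuple)  -- in memory only
4. recptr = XLogInsert(RM_HEAP_ID, info)
     -- with full_page_writes=on, the record carries a full page image
5. PageSetLSN(page, recptr)

-- WAL rule: the WAL through recptr must be fsync'd before the data page
-- may be written out.  A crash between 4) and 5) therefore loses only an
-- in-memory page; on restart, redo replays the WAL record (including the
-- full page image) and rebuilds the page, header included.
```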
-- 
Michael




Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Metare Solve
I'll check out the django girls, thanks. There's also a group of women
coders in DC who hold a lot of trainings and events. I'll see if they have
a level I python coming up.

You made me think about something about the GUI's: In my old job I used
alteryx because we were working with datasets from 20M-250M. When I got
laid off, I lost access to alteryx (unless I want to get a free trial). I
asked a guy who's pretty familiar with my skillset what I should be using,
R? He said to look for an open source gui/alteryx alternative. I found one
called KNIME that I like a lot.

But, what I'm gathering is, you think this is a crutch too. Will Python
enable me to do the same things that I do with that kind of big data
processing program? Should I be aiming for that as the ultimate rather than
"mastering" KNIME (or whatever, just looking to design my curriculum). At
my old job when I asked the same guy about SQL and how some of the coding
bits we did in the tool configuration looked like SQL, he said alteryx is
SQL on crack.

I need SOMETHING to use for analysis for the tests I'm going to have to
take when I job hunt so I'm exploring KNIME right now and doing the data
manipulation into what I'd do in access for analysis. I know, I need stats
too. You were educated as a biologist? I was educated as a pianist, writer,
and historian, lol. I have a lot to learn.

However, we're veering away from postgres, which may not be appropriate on
the forum. I'd like to continue the conversation, though.

On Thu, Dec 8, 2016 at 7:38 PM, Adrian Klaver 
wrote:

> On 12/08/2016 04:19 PM, Metare Solve wrote:
>
>> Sorry, I got on so many lists yesterday. I'm really not that dense.
>>
>> I have absolutely no language programming skills and it is very very
>> frustrating. I can HTML and that's it. I desperately want to develop the
>> skills but whenever I try on my own, I hit the same wall as I do with
>> SQL. I'm just bad at the deep dives into code and really need some
>> honest help. I kind of had this kind of help at my old job but I got
>> laid off on 11/30 and I don't think it's appropriate to ask them anymore.
>>
>> SQL experience: Only MS Access. I know it sucks, but it's all my boss
>> "let" me use (it didn't occur to me for a while to pursue it on my own
>> after she told me "no" for whatever odd reason she had). But! I can use
>> that GUI like a champ. I used Access for all of my data crunching, my
>> biggest record set was 600k so I didn't need anything more powerful. The
>> reason I think I can learn SQL with just a bit of guidance is I know the
> concepts behind "where" and "group bys" and "unique," etc, but I miss a
>> ; and get an error and then get frustrated.
>>
>
> GUI's only get you so far. At some point you will need to dive deeper to
> get what you want. I am mostly a self-taught programmer (biologist by training)
> so I understand the hill you are facing. The language I use is Python,
> mainly because to me it made sense. For you it might be a good choice as it
> is quite prevalent in the data analysis world. There are a lot of places to
> turn to to learn how to use it. My suggestion for seeing if it is something
> you can use would be to start here:
>
> https://tutorial.djangogirls.org/en/
>
> Go through at least the Introduction to Python part. The rest has to do
> with Django, a Web framework built using Python.
>
> If you want an immersion into programming the events that Django Girls put
> on are a good place to start:
>
> https://djangogirls.org/events/
>
> I have seen it in action at my local Linux Fest and it is impressive.
>
>
>
>
>> Purpose of the project:
>>
>> Eh, a bunch of things.
>>
>> - The project is to build my Tableau skills, for one. There are a lot of
>> (work from home) jobs out there that call for Tableau skills and I know
>> the best way to learn something is to do it. I think using the SQL
>> database as a source will enable me to learn a lot of things in one
>> stroke.
>>
>> - The project will also give me a product (eventually multiple) to embed
>> on a website when I start looking for "clients," or what have you. Just
>> starting a portfolio.
>>
>> - I have two projects, one of them is to analyze crime rates around the
>> moon phases. Just a question my dad once posed as a science project that
>> I blew off. Now seems kind of interesting to me to pursue. Will give me
>> date experience, mapping if I want it, can go down to the precinct
>> level, etc. The other is some data I've been collecting for about 15
>> months on a pixel dragon game I play. I want to build a dashboard to
>> manage my lair/income/value, etc. That is definitely where the SQL
>> database comes in. I think the moon one is just data blending.
>>
>> - To give me intellectual stimulation because I am a nerd.
>>
>> Just a note, I'm a female gen x with a master's degree in library and
>> information science. I took a database design class in grad school and
>> it was the biggest waste of money and time. 

Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Igor Korot
Hi, guys,

On Thu, Dec 8, 2016 at 10:19 AM, Charles Clavadetscher
 wrote:
> Hello
>
>> -Original Message-
>> From: pgsql-general-ow...@postgresql.org 
>> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Adrian Klaver
>> Sent: Donnerstag, 8. Dezember 2016 16:09
>> To: Igor Korot 
>> Cc: pgsql-general@postgresql.org
>> Subject: Re: [GENERAL] Importing SQLite database
>>
>> On 12/08/2016 06:54 AM, Igor Korot wrote:
>> > Adrian,
>> >
>> > On Thu, Dec 8, 2016 at 9:47 AM, Adrian Klaver  
>> > wrote:
>> >> On 12/08/2016 04:54 AM, Igor Korot wrote:
>> >>>
>> >>> Hi, ALL,
>> >>> I have a text file which I got from exporting the SQLite database.
>> >>>
>> >>> The file contains an SQL statement which will generate the database.
>> >>
>> >>
>> >> Is there a CREATE DATABASE statement in the file you are referencing?
>> >
>> > Well there is no CREATE DATABASE() in the SQLite.
>> > But I created a database by hand, so no issue here.
>> >
>> >>
>> >> Otherwise you will have to create the database first and then load
>> >> the file into it.
>> >>
>> >>>
>> >>> Excerpt from this file:
>> >>>
>> >>> [code]
>> >>> CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
>> >>> VARCHAR(100),balance DOUBLE(10,2)); CREATE TABLE players(playerid
>> >>> INTEGER AUTOINCREMENT PRIMARY KEY, player_name VARCHAR(60),
>> >>> player_position CHAR(1)); CREATE TABLE player_draft(id INTEGER,
>> >>> playerid INTEGER, drafted_position CHAR(1), FOREIGN KEY id REFERENCE
>> >>> leagues(id), FOREIGN KEY playerid REFERENCE players(playerid));
>> >>> INSERT INTO leagues VALUES(1, 'Test1', 260.00); INSERT INTO leagues
>> >>> VALUES(2, 'Test2', 260.00); INSERT INTO players VALUES(1, 'Player1',
>> >>> 'D'); INSERT INTO players VALUES(2, ''Player2', 'F'); [/code]
>> >>>
>> >>> My question would be:
>> >>> Is there a command in Postgres which will open this file and execute
>> >>> all those commands one-by-one in a transaction?
>> >>
>> >>
>> >> Yes there is assuming the [code][/code] tags are for the email only.
>> >
>> > Yes, "code" tags are for e-mail only.
>> >
>> >>
>> >> The issue is that Postgres will not understand AUTOINCREMENT. In Postgres 
>> >> to
>> >> get the same behavior you would do:
>> >>
>> >> id SERIAL PRIMARY KEY
>> >
>> > I'm not very familiar with Postgres, so let me ask you - is
>> > autoincrementing behavior set
>> > by default for the primary key?
>> > Like if I want to insert the record in the table and if I omit this
>> > column it will get the last inserted
>> > value in this table plus 1.
>>
>> No that is a Sqlite thing:
>> http://sqlite.org/autoinc.html
>>
>>
>> If you want to replicate in Postgres you will need to use the SERIAL type:
>>
>> https://www.postgresql.org/docs/9.5/static/datatype-numeric.html#DATATYPE-SERIAL
>>
>> along with PRIMARY KEY so:
>>
>> id SERIAL PRIMARY KEY
>>
>> This sets up a SEQUENCE:
>>
>> https://www.postgresql.org/docs/9.5/static/sql-createsequence.html
>>
>> for the id column, which supplies an incrementing, but not necessarily
>> gap free sequence of numbers.
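
For reference, the quoted SQLite excerpt might translate to Postgres roughly like this (a sketch only: SERIAL stands in for AUTOINCREMENT, DOUBLE(10,2) is mapped to numeric(10,2), and the FOREIGN KEY ... REFERENCE clauses become column-level REFERENCES):

```sql
CREATE TABLE leagues(
    id      serial PRIMARY KEY,
    name    varchar(100),
    balance numeric(10,2)
);
CREATE TABLE players(
    playerid        serial PRIMARY KEY,
    player_name     varchar(60),
    player_position char(1)
);
CREATE TABLE player_draft(
    id               integer REFERENCES leagues(id),
    playerid         integer REFERENCES players(playerid),
    drafted_position char(1)
);
```

Note that the file's INSERT statements supply explicit id values, so after the import the serial sequences lag behind the data in the tables.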
>
> Adding to that: the sequence is unaware of the values that are already
> available in the column. Your insert statements will create rows without
> changing the sequence. That means that after finishing the import you will
> need to set the value of the sequence to the maximum value available in the
> column.
>
> Here is an example:
>
> db=> create table test (id serial primary key, txt text);
> CREATE TABLE
> db=> \d test
>  Table "public.test"
>  Column |  Type   | Modifiers
> --------+---------+---------------------------------------------------
>  id     | integer | not null default nextval('test_id_seq'::regclass)
>  txt    | text    |
> Indexes:
> "test_pkey" PRIMARY KEY, btree (id)
>
> db=> insert into test values (1,'asdf'),(2,'fdgd'),(3,'werwe');
> INSERT 0 3
>
> db=> select * from test_id_seq;
>  sequence_name | last_value | start_value | increment_by |      max_value      | min_value | cache_value | log_cnt | is_cycled | is_called
> ---------------+------------+-------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
>  test_id_seq   |          1 |           1 |            1 | 9223372036854775807 |         1 |           1 |       0 | f         | f
> (1 row)
>
> Since the value of the sequence still is 1 you may get into trouble:
>
> db=> insert into test (txt) values ('hallo');
> ERROR:  duplicate key value violates unique constraint "test_pkey"
> DETAIL:  Key (id)=(1) already exists.
>
> So you set the value of the sequence:
>
> db.archivedb=> select setval('test_id_seq'::regclass,(select max(id) from 
> test));
>  setval
> --------
>       3
> (1 row)
>
> And then everything works as expected.
>
> db=> insert into test (txt) values ('hallo');
> INSERT 0 1
> kofadmin@kofdb.archivedb=> select * from test;
>  id |  txt

[GENERAL] Would like to below scenario is possible for getting page/block corruption

2016-12-08 Thread Sreekanth Palluru
Hi ,
I am working on a page corruption issue and want to know if the scenario
below is possible:

1) Insert command from the client; I understand heap_insert is called
from heapam.c
2) Let us say the table is full, so the relation is extended and a new block is added
3) The tuple is inserted into the new page for the block (RelationPutHeapTuple in hio.c)
4) Later the WAL record is inserted through recptr = XLogInsert(RM_HEAP_ID, info);
5) Then the backend updates the PageHeader with the WAL LSN details:
 PageSetLSN(page, recptr);

If my server crashed after step 4), is there a possibility that after the
postgres database restarts I get the error below when I access the relation,
when vacuum is run on this relation, or when taking a backup through pg_dump?
ERROR:  invalid page header in block 204 of relation base/16413/16900

Or can Postgres automatically recover the page without throwing any error?

Appreciate your valuable response on this

-- 
Regards
Sreekanth


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:39 PM, John R Pierce wrote:

On 12/8/2016 4:19 PM, Metare Solve wrote:

- The project is to build my Tableau skills, for one. There are a lot
of (work from home) jobs out there that call for Tableau skills and I
know the best way to learn something is to do it. I think using the
SQL database as a source will enable me to learn a lot of things in
one stroke.


I don't know anything about Tableau, but I don't see anything on their
fluff-heavy website mentioning postgresql, rather, it all seems


http://onlinehelp.tableau.com/current/pro/desktop/en-us/help.htm#examples_postgresql.html


cloud-centric and fluffy, they mention Microsoft SQL Server, Hadoop,
Oracle, 'web sources', CSV files, and others, but no mention of
Postgresql.   and, wow.  $1000 for a single seat license of the desktop
'personal' version, $2000 for the 'professional' version ?  yikes.


I have taken a glance at:

http://docs.redash.io/en/latest/

which at least is free and Open Source.


mostly it seems to be a tool that extracts data from pre-existing data
sources and crunches and presents it in 'executive dashboard' sort of
formats, it doesn't actually use the source database as anything other
than a place to fetch from.

pgadmin is just a database administration tool, it's one way of managing
databases, creating and modifying table structures, etc, but it's not
really suitable for production data entry and stuff, for that you'd
normally have application software, or use ETL tools to bulk load data
from other sources, etc.






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread John R Pierce

On 12/8/2016 4:19 PM, Metare Solve wrote:
- The project is to build my Tableau skills, for one. There are a lot 
of (work from home) jobs out there that call for Tableau skills and I 
know the best way to learn something is to do it. I think using the 
SQL database as a source will enable me to learn a lot of things in 
one stroke.


I don't know anything about Tableau, but I don't see anything on their 
fluff-heavy website mentioning postgresql, rather, it all seems 
cloud-centric and fluffy, they mention Microsoft SQL Server, Hadoop, 
Oracle, 'web sources', CSV files, and others, but no mention of 
Postgresql.   and, wow.  $1000 for a single seat license of the desktop 
'personal' version, $2000 for the 'professional' version ?  yikes.


mostly it seems to be a tool that extracts data from pre-existing data 
sources and crunches and presents it in 'executive dashboard' sort of 
formats, it doesn't actually use the source database as anything other 
than a place to fetch from.


pgadmin is just a database administration tool, it's one way of managing
databases, creating and modifying table structures, etc, but it's not
really suitable for production data entry and stuff, for that you'd
normally have application software, or use ETL tools to bulk load data
from other sources, etc.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:19 PM, Metare Solve wrote:

Sorry, I got on so many lists yesterday. I'm really not that dense.

I have absolutely no programming language skills and it is very, very
frustrating. I can HTML and that's it. I desperately want to develop the
skills but whenever I try on my own, I hit the same wall as I do with
SQL. I'm just bad at the deep dives into code and really need some
honest help. I kind of had this kind of help at my old job but I got
laid off on 11/30 and I don't think it's appropriate to ask them anymore.

SQL experience: Only MS Access. I know it sucks, but it's all my boss
"let" me use (it didn't occur to me for a while to pursue it on my own
after she told me "no" for whatever odd reason she had). But! I can use
that GUI like a champ. I used Access for all of my data crunching, my
biggest record set was 600k so I didn't need anything more powerful. The
reason I think I can learn SQL with just a bit of guidance is I know the
concepts behind "where" and "group bys" and "unique," etc, but I miss a
; and get an error and then get frustrated.


GUIs only get you so far. At some point you will need to dive deeper to
get what you want. I am mostly a self-taught programmer (biologist by
training) so I understand the hill you are facing. The language I use is
Python, mainly because to me it made sense. For you it might be a good 
choice as it is quite prevalent in the data analysis world. There are a 
lot of places to turn to to learn how to use it. My suggestion for 
seeing if it is something you can use would be to start here:


https://tutorial.djangogirls.org/en/

Go through at least the Introduction to Python part. The rest has to do
with Django, a Web framework built using Python.


If you want an immersion into programming the events that Django Girls 
put on are a good place to start:


https://djangogirls.org/events/

I have seen it in action at my local Linux Fest and it is impressive.





Purpose of the project:

Eh, a bunch of things.

- The project is to build my Tableau skills, for one. There are a lot of
(work from home) jobs out there that call for Tableau skills and I know
the best way to learn something is to do it. I think using the SQL
database as a source will enable me to learn a lot of things in one stroke.

- The project will also give me a product (eventually multiple) to embed
on a website when I start looking for "clients," or what have you. Just
starting a portfolio.

- I have two projects, one of them is to analyze crime rates around the
moon phases. Just a question my dad once posed as a science project that
I blew off. Now seems kind of interesting to me to pursue. Will give me
date experience, mapping if I want it, can go down to the precinct
level, etc. The other is some data I've been collecting for about 15
months on a pixel dragon game I play. I want to build a dashboard to
manage my lair/income/value, etc. That is definitely where the SQL
database comes in. I think the moon one is just data blending.

- To give me intellectual stimulation because I am a nerd.

Just a note, I'm a female gen x with a master's degree in library and
information science. I took a database design class in grad school and
it was the biggest waste of money and time. lol, I wish I'd had these
project ideas then, I could have gone to the teacher with them.

Meta





On Thu, Dec 8, 2016 at 6:59 PM, Adrian Klaver wrote:

On 12/08/2016 03:46 PM, Metare Solve wrote:

Hi,

I'm looking for an online mentor, someone I can chat/email and ask
questions of. Maybe "meet up" and have a direct chat. I've been
trying
to get into SQL for years and I just... have... problems. It's very
frustrating and at this point in my career is going to keep me from
getting a new job. (I was just laid off on 11/30.)

I have a project in mind: I want to build an SQL db to house some
datasets and sit a tableau dashboard/storyboard on top. A data
engineer
at my old job said that would be his preferred method, and I
think it
would be a useful project to learn SQL.

I know this is the pgadmin support and not a postgres, but I'm
technically going to be using pgadmin so it's tangentially
related ;)


Actually this is the Postgres mailing list.

FYI, the pgAdmin list is here:

https://www.postgresql.org/list/pgadmin-support/


Though pgAdmin is often discussed on this list (pgsql-general).


Any input, even on the project?



What language(s) do you plan to program in?

What studying on SQL have you done?

What will the purpose of the project?



thanks,
meta



--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Metare Solve
Sorry, I got on so many lists yesterday. I'm really not that dense.

I have absolutely no programming language skills and it is very, very
frustrating. I can HTML and that's it. I desperately want to develop the
skills but whenever I try on my own, I hit the same wall as I do with SQL.
I'm just bad at the deep dives into code and really need some honest help.
I kind of had this kind of help at my old job but I got laid off on 11/30
and I don't think it's appropriate to ask them anymore.

SQL experience: Only MS Access. I know it sucks, but it's all my boss "let"
me use (it didn't occur to me for a while to pursue it on my own after she
told me "no" for whatever odd reason she had). But! I can use that GUI like
a champ. I used Access for all of my data crunching, my biggest record set
was 600k so I didn't need anything more powerful. The reason I think I can
learn SQL with just a bit of guidance is I know the concepts behind "where"
and "group bys" and "unique," etc, but I miss a ; and get an error and
then get frustrated.

Purpose of the project:

Eh, a bunch of things.

- The project is to build my Tableau skills, for one. There are a lot of
(work from home) jobs out there that call for Tableau skills and I know the
best way to learn something is to do it. I think using the SQL database as
a source will enable me to learn a lot of things in one stroke.

- The project will also give me a product (eventually multiple) to embed on
a website when I start looking for "clients," or what have you. Just
starting a portfolio.

- I have two projects, one of them is to analyze crime rates around the
moon phases. Just a question my dad once posed as a science project that I
blew off. Now seems kind of interesting to me to pursue. Will give me date
experience, mapping if I want it, can go down to the precinct level, etc.
The other is some data I've been collecting for about 15 months on a pixel
dragon game I play. I want to build a dashboard to manage my
lair/income/value, etc. That is definitely where the SQL database comes in.
I think the moon one is just data blending.

- To give me intellectual stimulation because I am a nerd.

Just a note, I'm a female gen x with a master's degree in library and
information science. I took a database design class in grad school and it
was the biggest waste of money and time. lol, I wish I'd had these project
ideas then, I could have gone to the teacher with them.

Meta





On Thu, Dec 8, 2016 at 6:59 PM, Adrian Klaver 
wrote:

> On 12/08/2016 03:46 PM, Metare Solve wrote:
>
>> Hi,
>>
>> I'm looking for an online mentor, someone I can chat/email and ask
>> questions of. Maybe "meet up" and have a direct chat. I've been trying
>> to get into SQL for years and I just... have... problems. It's very
>> frustrating and at this point in my career is going to keep me from
>> getting a new job. (I was just laid off on 11/30.)
>>
>> I have a project in mind: I want to build an SQL db to house some
>> datasets and sit a tableau dashboard/storyboard on top. A data engineer
>> at my old job said that would be his preferred method, and I think it
>> would be a useful project to learn SQL.
>>
>> I know this is the pgadmin support and not a postgres, but I'm
>> technically going to be using pgadmin so it's tangentially related ;)
>>
>
> Actually this is the Postgres mailing list.
>
> FYI, the pgAdmin list is here:
>
> https://www.postgresql.org/list/pgadmin-support/
>
> Though pgAdmin is often discussed on this list (pgsql-general).
>
>
>> Any input, even on the project?
>>
>
>
> What language(s) do you plan to program in?
>
> What studying on SQL have you done?
>
> What will the purpose of the project?
>
>
>
>> thanks,
>> meta
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Looking for an online mentor

2016-12-08 Thread Adrian Klaver

On 12/08/2016 03:46 PM, Metare Solve wrote:

Hi,

I'm looking for an online mentor, someone I can chat/email and ask
questions of. Maybe "meet up" and have a direct chat. I've been trying
to get into SQL for years and I just... have... problems. It's very
frustrating and at this point in my career is going to keep me from
getting a new job. (I was just laid off on 11/30.)

I have a project in mind: I want to build an SQL db to house some
datasets and sit a tableau dashboard/storyboard on top. A data engineer
at my old job said that would be his preferred method, and I think it
would be a useful project to learn SQL.

I know this is the pgadmin support and not a postgres, but I'm
technically going to be using pgadmin so it's tangentially related ;)


Actually this is the Postgres mailing list.

FYI, the pgAdmin list is here:

https://www.postgresql.org/list/pgadmin-support/

Though pgAdmin is often discussed on this list (pgsql-general).



Any input, even on the project?



What language(s) do you plan to program in?

What studying on SQL have you done?

What will the purpose of the project?




thanks,
meta



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Looking for an online mentor

2016-12-08 Thread Metare Solve
Hi,

I'm looking for an online mentor, someone I can chat/email and ask
questions of. Maybe "meet up" and have a direct chat. I've been trying to
get into SQL for years and I just... have... problems. It's very
frustrating and at this point in my career is going to keep me from getting
a new job. (I was just laid off on 11/30.)

I have a project in mind: I want to build an SQL db to house some datasets
and sit a tableau dashboard/storyboard on top. A data engineer at my old
job said that would be his preferred method, and I think it would be a
useful project to learn SQL.

I know this is the pgadmin support and not a postgres, but I'm technically
going to be using pgadmin so it's tangentially related ;)

Any input, even on the project?

thanks,
meta


Re: [GENERAL] How to config start/stop scripts in a different data dir for CentOS7/systemctl/PG9.6

2016-12-08 Thread Julien Rouhaud
On Thu, Dec 08, 2016 at 05:16:11PM -0300, Edilmar LISTAS wrote:
> I decided to use the pg_ctl to start/stop, I didn't find a way to
> configure systemctl script.

I didn't notice your first mail.

You can find all the needed information in this blog post:
https://people.planetpostgresql.org/devrim/index.php?/archives/82-Running-more-than-one-9.4-9.3-instance-in-parallel-on-RHEL-7.html
(This link is available in the "Yum Howto" section of yum.postgresql.org)

Last time I tried it worked as intended.
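
For the archives, the core of that post is a systemd override unit that points the stock service at a custom PGDATA. A sketch, using the paths from this thread (the unit file is written to the current directory here; it belongs in /etc/systemd/system):

```shell
# Override unit for the PGDG postgresql-9.6 service with a custom PGDATA.
# Install as /etc/systemd/system/postgresql-9.6.service, then:
#   systemctl daemon-reload && systemctl enable --now postgresql-9.6
cat > postgresql-9.6.service <<'EOF'
.include /lib/systemd/system/postgresql-9.6.service
[Service]
Environment=PGDATA=/sistemas/sat4/bdpg
EOF
```

The override inherits everything from the packaged unit and only replaces PGDATA, so package updates to the stock unit keep working.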

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [GENERAL] How to config start/stop scripts in a different data dir for CentOS7/systemctl/PG9.6

2016-12-08 Thread Edilmar LISTAS
I decided to use pg_ctl to start/stop; I didn't find a way to
configure a systemd unit.


On 05-12-2016 15:21, Edilmar LISTAS wrote:

Hi,

I do these steps to configure PostgreSQL 9.6 on CentOS 6.x using a
different data directory:
yum -y install postgresql96-server postgresql96-contrib pg_top96
chkconfig postgresql-9.6 on
service postgresql-9.6 initdb
service postgresql-9.6 start
echo "localhost:5432:*:postgres:mypass" > ~/.pgpass
chmod 0600 ~/.pgpass
echo "localhost:5432:*:postgres:mypass" > ~postgres/.pgpass
chmod 0600 ~postgres/.pgpass
chown postgres.postgres ~postgres/.pgpass
su - postgres
psql -U postgres -c "ALTER USER postgres WITH PASSWORD 'mypass';"
exit
#- => my data dir is
/sistemas/sat4/bdpg
service postgresql-9.6 stop
cd /etc/rc.d/init.d
ln postgresql-9.6 pgsat
cd /etc/sysconfig/pgsql
echo "PGDATA=/sistemas/sat4/bdpg" > pgsat
mkdir /sistemas/sat4/bdpg
chown postgres.postgres /sistemas/sat4/bdpg
chmod 700 /sistemas/sat4/bdpg
su - postgres
/usr/pgsql-9.6/bin/initdb -D /sistemas/sat4/bdpg
/usr/pgsql-9.6/bin/pg_ctl -D /sistemas/sat4/bdpg -l logfile start
sleep 5000
psql -U postgres -c "ALTER USER postgres WITH PASSWORD 'mypass';"
exit
#-
su - postgres
/usr/pgsql-9.6/bin/pg_ctl -D /sistemas/sat4/bdpg -l logfile stop
exit
service pgsat start
chkconfig --add pgsat
chkconfig pgsat on
chkconfig postgresql-9.6 off
#-

But now CentOS 7 uses the new systemd service control system.
I can only start PG manually like this:
su - postgres
/usr/pgsql-9.6/bin/pg_ctl -D /sistemas/sat4/bdpg -l logfile start











Re: [GENERAL] PDF files: to store in database or not

2016-12-08 Thread Rich Shepard

On Thu, 8 Dec 2016, Chris Travers wrote:


Assuming relatively small files, bytea makes much more sense than a large
object.


Hi Chris,

  Most of the documents are only a few pages in size.


LOBs work best when you need a streaming interface (seek and friends) while
bytea's are otherwise much more pleasant to work with.


  They're not referenced frequently, only occasionally.

Thanks,

Rich




Re: [GENERAL] PDF files: to store in database or not

2016-12-08 Thread John DeSoi

> On Dec 8, 2016, at 9:25 AM, Chris Travers  wrote:
> 
> Assuming relatively small files, bytea makes much more sense than a large 
> object.  However note that encoding and decoding can be relatively memory 
> intensive depending on your environment.  This is not a problem with small 
> files and I would typically start to worry when you get into the hundreds of 
> mb in size.  At least in Perl, I expect decoding to take about 8x the size of 
> the final file in RAM.
> 
> LOBs work best when you need a streaming interface (seek and friends) while 
> bytea's are otherwise much more pleasant to work with. 

Not much I can do on the Postgres side, but you can manage the amount of RAM 
needed on the client side by returning the bytea in chunks using a set 
returning function. In my case, this returns chunks to PHP that are immediately 
written to the download stream so there is no need to have the entire document 
in RAM on the application side. I have included the function I use below.

John DeSoi, Ph.D.


create or replace function blob_content_chunked(p_dbid integer)
returns setof bytea as $$
declare
  v_chunk integer = 1048576;  -- chunk size: 1 MB per returned row
  v_data bytea;
  v_size integer;
begin
  select into v_data content from blob where dbid = p_dbid;
  if found and v_data is not null then
    v_size = octet_length(v_data);
    if v_size <= v_chunk then
      -- small document: return it as a single row
      return next v_data;
    else
      -- emit successive 1 MB slices; the final slice may be shorter
      for i in 1..v_size by v_chunk loop
        return next substring(v_data from i for v_chunk);
      end loop;
    end if;
  end if;
end;
$$ language plpgsql stable;
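
The client then simply iterates over the returned rows, e.g. (42 is a hypothetical document id):

```sql
-- Each row is one chunk (up to 1 MB) of the document's bytea content,
-- in order; concatenating the rows reproduces the original document.
SELECT blob_content_chunked(42);
```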



Re: [GENERAL] PDF files: to store in database or not

2016-12-08 Thread Rich Shepard

On Thu, 8 Dec 2016, Adrian Klaver wrote:


http://initd.org/psycopg/docs/usage.html?highlight=binary#adapt-binary


  Thanks again, Adrian.

Rich




Re: [GENERAL] PDF files: to store in database or not

2016-12-08 Thread Chris Travers
On Thu, Dec 8, 2016 at 7:16 AM, Rich Shepard 
wrote:

> On Thu, 8 Dec 2016, John DeSoi wrote:
>
> I have been storing PDFs in Postgres for several years without any
>> problems. Documents range in size from a few pages to 100+ pages. I'm
>> using a bytea column, not large objects. I store the documents in a
>> separate database from the rest of the application data in order to make
>> it easy to exclude in database dumps or backup in some other way. I'm
>> currently managing about 600,000 documents.
>>
>
> John,
>
>   This is really good information. Rather than using a separate database I
> think that storing all PDFs in a separate table makes sense for my
> application. Backup practices will be the domain of those using the
> application (which I've decided to open-source and give away because I'm not
> in the software business). A simple join to the appropriate data table will
> make them available.
>
>   Not having used the bytea data type before I'll read how to work with it.
>

Assuming relatively small files, bytea makes much more sense than a large
object.  However note that encoding and decoding can be relatively memory
intensive depending on your environment.  This is not a problem with small
files and I would typically start to worry when you get into the hundreds
of mb in size.  At least in Perl, I expect decoding to take about 8x the
size of the final file in RAM.

LOBs work best when you need a streaming interface (seek and friends) while
bytea's are otherwise much more pleasant to work with.

>
> Thanks very much for your insights,
>
> Rich
>
>
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] PDF files: to store in database or not

2016-12-08 Thread Adrian Klaver

On 12/08/2016 07:16 AM, Rich Shepard wrote:

On Thu, 8 Dec 2016, John DeSoi wrote:


I have been storing PDFs in Postgres for several years without any
problems. Documents range in size from a few pages to 100+ pages. I'm
using a bytea column, not large objects. I store the documents in a
separate database from the rest of the application data in order to make
it easy to exclude in database dumps or backup in some other way. I'm
currently managing about 600,000 documents.


John,

  This is really good information. Rather than using a separate database I
think that storing all PDFs in a separate table makes sense for my
application. Backup practices will be the domain of those using the
application (which I've decided to open-source and give away because I'm not
in the software business). A simple join to the appropriate data table will
make them available.

  Not having used the bytea data type before I'll read how to work with it.


http://initd.org/psycopg/docs/usage.html?highlight=binary#adapt-binary



Thanks very much for your insights,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Charles Clavadetscher
Hello

> -Original Message-
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Adrian Klaver
> Sent: Donnerstag, 8. Dezember 2016 16:09
> To: Igor Korot 
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Importing SQLite database
> 
> On 12/08/2016 06:54 AM, Igor Korot wrote:
> > Adrian,
> >
> > On Thu, Dec 8, 2016 at 9:47 AM, Adrian Klaver  
> > wrote:
> >> On 12/08/2016 04:54 AM, Igor Korot wrote:
> >>>
> >>> Hi, ALL,
> >>> I have a text file which I got from exporting the SQLite database.
> >>>
> >>> The file contains an SQL statement which will generate the database.
> >>
> >>
> >> Is there a CREATE DATABASE statement in the file you are referencing?
> >
> > Well there is no CREATE DATABASE() in the SQLite.
> > But I created a database by hand, so no issue here.
> >
> >>
> >> Otherwise you will have to create the database first and then load
> >> the file into it.
> >>
> >>>
> >>> Excerpt from this file:
> >>>
> >>> [code]
> >>> CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
> >>> VARCHAR(100),balance DOUBLE(10,2)); CREATE TABLE players(playerid
> >>> INTEGER AUTOINCREMENT PRIMARY KEY, player_name VARCHAR(60),
> >>> player_position CHAR(1)); CREATE TABLE player_draft(id INTEGER,
> >>> playerid INTEGER, drafted_position CHAR(1), FOREIGN KEY id REFERENCE
> >>> leagues(id), FOREIGN KEY playerid REFERENCE players(playerid));
> >>> INSERT INTO leagues VALUES(1, 'Test1', 260.00); INSERT INTO leagues
> >>> VALUES(2, 'Test2', 260.00); INSERT INTO players VALUES(1, 'Player1',
> >>> 'D'); INSERT INTO players VALUES(2, ''Player2', 'F'); [/code]
> >>>
> >>> My question would be:
> >>> Is there a command in Postgres which will open this file and execute
> >>> all those commands one-by-one in a transaction?
> >>
> >>
> >> Yes there is assuming the [code][/code] tags are for the email only.
> >
> > Yes, "code" tags are for e-mail only.
> >
> >>
> >> The issue is that Postgres will not understand AUTOINCREMENT. In Postgres 
> >> to
> >> get the same behavior you would do:
> >>
> >> id SERIAL PRIMARY KEY
> >
> > I'm not very familiar with Postgres, so let me ask you: is
> > autoincrementing behavior set by default for the primary key?
> > That is, if I insert a record into the table and omit this
> > column, will it get the last inserted value in this table plus 1?
> 
> No, that is a SQLite thing:
> http://sqlite.org/autoinc.html
> 
> 
> If you want to replicate in Postgres you will need to use the SERIAL type:
> 
> https://www.postgresql.org/docs/9.5/static/datatype-numeric.html#DATATYPE-SERIAL
> 
> along with PRIMARY KEY so:
> 
> id SERIAL PRIMARY KEY
> 
> This sets up a SEQUENCE:
> 
> https://www.postgresql.org/docs/9.5/static/sql-createsequence.html
> 
> for the id column, which supplies an incrementing, but not necessarily
> gap-free sequence of numbers.

Adding to that: the sequence is unaware of the values that are already present 
in the column. Your INSERT statements will create rows without advancing the 
sequence. That means that after finishing the import you will need to set the 
value of the sequence to the maximum value present in the column.

Here is an example:

db=> create table test (id serial primary key, txt text);
CREATE TABLE
db=> \d test
                            Table "public.test"
 Column |  Type   |                     Modifiers
--------+---------+---------------------------------------------------
 id     | integer | not null default nextval('test_id_seq'::regclass)
 txt    | text    |
Indexes:
    "test_pkey" PRIMARY KEY, btree (id)

db=> insert into test values (1,'asdf'),(2,'fdgd'),(3,'werwe');
INSERT 0 3

db=> select * from test_id_seq;
 sequence_name | last_value | start_value | increment_by |      max_value      | min_value | cache_value | log_cnt | is_cycled | is_called
---------------+------------+-------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
 test_id_seq   |          1 |           1 |            1 | 9223372036854775807 |         1 |           1 |       0 | f         | f
(1 row)

Since the value of the sequence is still 1, you may get into trouble:

db=> insert into test (txt) values ('hallo');
ERROR:  duplicate key value violates unique constraint "test_pkey"
DETAIL:  Key (id)=(1) already exists.

So you set the value of the sequence:

db=> select setval('test_id_seq'::regclass, (select max(id) from test));
 setval
--------
      3
(1 row)

And then everything works as expected.

db=> insert into test (txt) values ('hallo');
INSERT 0 1
db=> select * from test;
 id |  txt
----+-------
  1 | asdf
  2 | fdgd
  3 | werwe
  4 | hallo
(4 rows)

Hope this helps.
Bye
Charles

> 
> >
> >>
> >> If you clean up the file you can do, assuming you created a database called
> >> some_database:
> >>
> >> psql -d some_database -U some_user -f your_file
> 

Re: [GENERAL] PDF files: to store in database or not

2016-12-08 Thread Rich Shepard

On Thu, 8 Dec 2016, John DeSoi wrote:


I have been storing PDFs in Postgres for several years without any
problems. Documents range in size from a few pages to 100+ pages. I'm
using a bytea column, not large objects. I store the documents in a
separate database from the rest of the application data in order to make
it easy to exclude in database dumps or backup in some other way. I'm
currently managing about 600,000 documents.


John,

  This is really good information. Rather than using a separate database I
think that storing all PDFs in a separate table makes sense for my
application. Backup practices will be the domain of those using the
application (which I've decided to open-source and give away because I'm not
in the software business). A simple join to the appropriate data table will
make them available.

  Not having used the bytea data type before, I'll read up on how to work with it.

Thanks very much for your insights,

Rich




Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Adrian Klaver

On 12/08/2016 07:02 AM, Igor Korot wrote:

Adrian,





Anything else you see?
Does Postgres use single quotes for literal values, or double quotes?


Single:

https://www.postgresql.org/docs/9.5/static/sql-syntax-lexical.html


I'm talking about
INSERT INTO... VALUES(); statements.

SQLite does use double quotes, but I already cleaned it using single ones.


That may be going away:

http://sqlite.org/lang_keywords.html

"For resilience when confronted with historical SQL statements, SQLite 
will sometimes bend the quoting rules above:


If a keyword in single quotes (ex: 'key' or 'glob') is used in a 
context where an identifier is allowed but where a string literal is not 
allowed, then the token is understood to be an identifier instead of a 
string literal.


If a keyword in double quotes (ex: "key" or "glob") is used in a 
context where it cannot be resolved to an identifier but where a string 
literal is allowed, then the token is understood to be a string literal 
instead of an identifier.


Programmers are cautioned not to use the two exceptions described in the 
previous bullets. We emphasize that they exist only so that old and 
ill-formed SQL statements will run correctly. Future versions of SQLite 
might raise errors instead of accepting the malformed statements covered 
by the exceptions above."
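In Postgres the rule is strict: single quotes always delimit string literals, and double quotes always delimit identifiers. A minimal illustration (the table and column names here are made up):

```sql
-- Double quotes quote an identifier; here they let us use the
-- reserved word "key" as a column name.
CREATE TABLE demo ("key" text);

-- Single quotes delimit a string literal.
INSERT INTO demo ("key") VALUES ('some value');

-- This would fail: "some value" would be parsed as a column name,
-- not as a string literal.
-- INSERT INTO demo ("key") VALUES ("some value");
```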




Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Adrian Klaver

On 12/08/2016 06:54 AM, Igor Korot wrote:

Adrian,

On Thu, Dec 8, 2016 at 9:47 AM, Adrian Klaver  wrote:

On 12/08/2016 04:54 AM, Igor Korot wrote:


Hi, ALL,
I have a text file which I got from exporting the SQLite database.

The file contains an SQL statement which will generate the database.



Is there a CREATE DATABASE statement in the file you are referencing?


Well, there is no CREATE DATABASE in SQLite.
But I created a database by hand, so no issue here.



Otherwise you will have to create the database first and then load the file
into it.



Excerpt from this file:

[code]
CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
VARCHAR(100),balance DOUBLE(10,2));
CREATE TABLE players(playerid INTEGER AUTOINCREMENT PRIMARY KEY,
player_name VARCHAR(60), player_position CHAR(1));
CREATE TABLE player_draft(id INTEGER, playerid INTEGER,
drafted_position CHAR(1), FOREIGN KEY id REFERENCE leagues(id),
FOREIGN KEY playerid REFERENCE players(playerid));
INSERT INTO leagues VALUES(1, 'Test1', 260.00);
INSERT INTO leagues VALUES(2, 'Test2', 260.00);
INSERT INTO players VALUES(1, 'Player1', 'D');
INSERT INTO players VALUES(2, ''Player2', 'F');
[/code]

My question would be:
Is there a command in Postgres which will open this file and execute all
those
commands one-by-one in a transaction?



Yes there is assuming the [code][/code] tags are for the email only.


Yes, "code" tags are for e-mail only.



The issue is that Postgres will not understand AUTOINCREMENT. In Postgres to
get the same behavior you would do:

id SERIAL PRIMARY KEY


I'm not very familiar with Postgres, so let me ask you: is
autoincrementing behavior set by default for the primary key?
That is, if I insert a record into the table and omit this
column, will it get the last inserted value in this table plus 1?


No, that is a SQLite thing:
http://sqlite.org/autoinc.html


If you want to replicate in Postgres you will need to use the SERIAL type:

https://www.postgresql.org/docs/9.5/static/datatype-numeric.html#DATATYPE-SERIAL

along with PRIMARY KEY so:

id SERIAL PRIMARY KEY

This sets up a SEQUENCE:

https://www.postgresql.org/docs/9.5/static/sql-createsequence.html

for the id column, which supplies an incrementing, but not necessarily 
gap-free, sequence of numbers.
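As an illustration, here is one way the SQLite excerpt from this thread might be rewritten for Postgres. This is a sketch, not a drop-in: NUMERIC(10,2) is an assumed stand-in for DOUBLE(10,2), and the foreign keys are rewritten to Postgres syntax (inline REFERENCES rather than the excerpt's FOREIGN KEY col REFERENCE form):

```sql
CREATE TABLE leagues(
    id      SERIAL PRIMARY KEY,   -- replaces INTEGER AUTOINCREMENT
    name    VARCHAR(100),
    balance NUMERIC(10,2)         -- Postgres has no DOUBLE(10,2)
);
CREATE TABLE players(
    playerid        SERIAL PRIMARY KEY,
    player_name     VARCHAR(60),
    player_position CHAR(1)
);
CREATE TABLE player_draft(
    id               INTEGER REFERENCES leagues(id),
    playerid         INTEGER REFERENCES players(playerid),
    drafted_position CHAR(1)
);
INSERT INTO leagues VALUES(1, 'Test1', 260.00);
INSERT INTO leagues VALUES(2, 'Test2', 260.00);

-- After inserting rows with explicit ids, resync each serial's
-- sequence (same for players_playerid_seq):
SELECT setval('leagues_id_seq', (SELECT max(id) FROM leagues));
```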






If you clean up the file you can do, assuming you created a database called
some_database:

psql -d some_database -U some_user -f your_file



The file also contains line "BEGIN TRANSACTION" as the first line and
"COMMIT" as last.
Is the syntax the same for Postgres or is it different?


It is the same.

See below for list of Postgres commands:

https://www.postgresql.org/docs/9.5/static/sql-commands.html



Thank you.



If you do not want to do the manual clean-up, then William's suggestion looks
interesting.




Or I will have to do a manual table creation, then split this file and
use "LOAD..."
command to load the data into the tables?

Hopefully I made myself clear.
Let me know if not and I will try to clarify further.

Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Igor Korot
Adrian,

On Thu, Dec 8, 2016 at 9:54 AM, Igor Korot  wrote:
> Adrian,
>
> On Thu, Dec 8, 2016 at 9:47 AM, Adrian Klaver  
> wrote:
>> On 12/08/2016 04:54 AM, Igor Korot wrote:
>>>
>>> Hi, ALL,
>>> I have a text file which I got from exporting the SQLite database.
>>>
>>> The file contains an SQL statement which will generate the database.
>>
>>
>> Is there a CREATE DATABASE statement in the file you are referencing?
>
> Well, there is no CREATE DATABASE in SQLite.
> But I created a database by hand, so no issue here.
>
>>
>> Otherwise you will have to create the database first and then load the file
>> into it.
>>
>>>
>>> Excerpt from this file:
>>>
>>> [code]
>>> CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
>>> VARCHAR(100),balance DOUBLE(10,2));
>>> CREATE TABLE players(playerid INTEGER AUTOINCREMENT PRIMARY KEY,
>>> player_name VARCHAR(60), player_position CHAR(1));
>>> CREATE TABLE player_draft(id INTEGER, playerid INTEGER,
>>> drafted_position CHAR(1), FOREIGN KEY id REFERENCE leagues(id),
>>> FOREIGN KEY playerid REFERENCE players(playerid));
>>> INSERT INTO leagues VALUES(1, 'Test1', 260.00);
>>> INSERT INTO leagues VALUES(2, 'Test2', 260.00);
>>> INSERT INTO players VALUES(1, 'Player1', 'D');
>>> INSERT INTO players VALUES(2, ''Player2', 'F');
>>> [/code]
>>>
>>> My question would be:
>>> Is there a command in Postgres which will open this file and execute all
>>> those
>>> commands one-by-one in a transaction?
>>
>>
>> Yes there is assuming the [code][/code] tags are for the email only.
>
> Yes, "code" tags are for e-mail only.
>
>>
>> The issue is that Postgres will not understand AUTOINCREMENT. In Postgres to
>> get the same behavior you would do:
>>
>> id SERIAL PRIMARY KEY

Anything else you see?
Does Postgres use single quotes for literal values, or double quotes?
I'm talking about
INSERT INTO... VALUES(); statements.

SQLite does use double quotes, but I already cleaned it using single ones.

Thank you.

>
> I'm not very familiar with Postgres, so let me ask you - is
> autoincrementing behavior set
> by default for the primary key?
> Like if I want to insert the record in the table and if I omit this
> column it will get the last inserted
> value in this table plus 1.
>
>>
>> If you clean up the file you can do, assuming you created a database called
>> some_database:
>>
>> psql -d some_database -U some_user -f your_file
>>
>
> The file also contains line "BEGIN TRANSACTION" as the first line and
> "COMMIT" as last.
> Is the syntax the same for Postgres or is it different?
>
> Thank you.
>
>>
>> If you do not want to do the manual clean-up, then William's suggestion looks
>> interesting.
>>
>>
>>
>>> Or I will have to do a manual table creation, then split this file and
>>> use "LOAD..."
>>> command to load the data into the tables?
>>>
>>> Hopefully I made myself clear.
>>> Let me know if not and I will try to clarify further.
>>>
>>> Thank you.
>>>
>>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com




Re: [GENERAL] Who dropped a role?

2016-12-08 Thread Melvin Davidson
On Thu, Dec 8, 2016 at 4:34 AM, Charles Clavadetscher <
clavadetsc...@swisspug.org> wrote:

> Hello
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:pgsql-general-owner@
> postgresql.org] *On Behalf Of *Durumdara
> *Sent:* Donnerstag, 8. Dezember 2016 10:13
> *To:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] Who dropped a role?
>
>
>
> Hello!
>
>
>
> Sorry, meanwhile I found it as Group Role.
>
> I never used this kind of role. How to restore it to normal login role?
>
>
>
> ALTER ROLE ... LOGIN;
>
>
>
> You probably need to reassign the password (I don’t remember right now if
> the pw is dropped when changing from LOGIN to NOLOGIN).
>
>
>
> \password ...
>
>
>
> Thanks for it!
>
>
>
> dd
>
>
>
> 2016-12-08 9:53 GMT+01:00 Durumdara :
>
> Dear PG Masters!
>
>
>
> In a server today I didn't find an important role.
>
> I don't know what my colleagues did with it, but it seems to be lost.
>
> Do you know a trick, how get info about it?
>
>
>
> 1.) Who dropped it?
>
> 2.) If not dropped what happened with it?
>
>
>
> Is role dropping logged somewhere?
>
>
>
> Thanks for your help!
>
>
>
> dd
>
>
>


In addition to the info on how to reset the role to login, you might want
to make sure that, at minimum, you have log_statement = 'ddl' in
postgresql.conf and do a SELECT pg_reload_conf();

That way, all DDL-type statements will be recorded in the postgres log.
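The same suggestion, sketched as SQL from a superuser session (ALTER SYSTEM requires PostgreSQL 9.4 or later; on older servers edit postgresql.conf directly):

```sql
-- Record all DDL (CREATE/ALTER/DROP, including DROP ROLE) in the server log
ALTER SYSTEM SET log_statement = 'ddl';

-- log_statement can be reloaded without a server restart
SELECT pg_reload_conf();

-- Confirm the new value
SHOW log_statement;
```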
-- 
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Igor Korot
Adrian,

On Thu, Dec 8, 2016 at 9:47 AM, Adrian Klaver  wrote:
> On 12/08/2016 04:54 AM, Igor Korot wrote:
>>
>> Hi, ALL,
>> I have a text file which I got from exporting the SQLite database.
>>
>> The file contains an SQL statement which will generate the database.
>
>
> Is there a CREATE DATABASE statement in the file you are referencing?

Well, there is no CREATE DATABASE in SQLite.
But I created a database by hand, so no issue here.

>
> Otherwise you will have to create the database first and then load the file
> into it.
>
>>
>> Excerpt from this file:
>>
>> [code]
>> CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
>> VARCHAR(100),balance DOUBLE(10,2));
>> CREATE TABLE players(playerid INTEGER AUTOINCREMENT PRIMARY KEY,
>> player_name VARCHAR(60), player_position CHAR(1));
>> CREATE TABLE player_draft(id INTEGER, playerid INTEGER,
>> drafted_position CHAR(1), FOREIGN KEY id REFERENCE leagues(id),
>> FOREIGN KEY playerid REFERENCE players(playerid));
>> INSERT INTO leagues VALUES(1, 'Test1', 260.00);
>> INSERT INTO leagues VALUES(2, 'Test2', 260.00);
>> INSERT INTO players VALUES(1, 'Player1', 'D');
>> INSERT INTO players VALUES(2, ''Player2', 'F');
>> [/code]
>>
>> My question would be:
>> Is there a command in Postgres which will open this file and execute all
>> those
>> commands one-by-one in a transaction?
>
>
> Yes there is assuming the [code][/code] tags are for the email only.

Yes, "code" tags are for e-mail only.

>
> The issue is that Postgres will not understand AUTOINCREMENT. In Postgres to
> get the same behavior you would do:
>
> id SERIAL PRIMARY KEY

I'm not very familiar with Postgres, so let me ask you: is
autoincrementing behavior set by default for the primary key?
That is, if I insert a record into the table and omit this
column, will it get the last inserted value in this table plus 1?

>
> If you clean up the file you can do, assuming you created a database called
> some_database:
>
> psql -d some_database -U some_user -f your_file
>

The file also contains line "BEGIN TRANSACTION" as the first line and
"COMMIT" as last.
Is the syntax the same for Postgres or is it different?

Thank you.

>
> If you do not want to do the manual clean-up, then William's suggestion looks
> interesting.
>
>
>
>> Or I will have to do a manual table creation, then split this file and
>> use "LOAD..."
>> command to load the data into the tables?
>>
>> Hopefully I made myself clear.
>> Let me know if not and I will try to clarify further.
>>
>> Thank you.
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: [GENERAL] Importing SQLite database

2016-12-08 Thread Adrian Klaver

On 12/08/2016 04:54 AM, Igor Korot wrote:

Hi, ALL,
I have a text file which I got from exporting the SQLite database.

The file contains an SQL statement which will generate the database.


Is there a CREATE DATABASE statement in the file you are referencing?

Otherwise you will have to create the database first and then load the 
file into it.




Excerpt from this file:

[code]
CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
VARCHAR(100),balance DOUBLE(10,2));
CREATE TABLE players(playerid INTEGER AUTOINCREMENT PRIMARY KEY,
player_name VARCHAR(60), player_position CHAR(1));
CREATE TABLE player_draft(id INTEGER, playerid INTEGER,
drafted_position CHAR(1), FOREIGN KEY id REFERENCE leagues(id),
FOREIGN KEY playerid REFERENCE players(playerid));
INSERT INTO leagues VALUES(1, 'Test1', 260.00);
INSERT INTO leagues VALUES(2, 'Test2', 260.00);
INSERT INTO players VALUES(1, 'Player1', 'D');
INSERT INTO players VALUES(2, ''Player2', 'F');
[/code]

My question would be:
Is there a command in Postgres which will open this file and execute all those
commands one-by-one in a transaction?


Yes there is assuming the [code][/code] tags are for the email only.

The issue is that Postgres will not understand AUTOINCREMENT. In 
Postgres to get the same behavior you would do:


id SERIAL PRIMARY KEY

If you clean up the file you can do, assuming you created a database 
called some_database:


psql -d some_database -U some_user -f your_file


If you do not want to do the manual clean-up, then William's suggestion 
looks interesting.




Or I will have to do a manual table creation, then split this file and
use "LOAD..."
command to load the data into the tables?

Hopefully I made myself clear.
Let me know if not and I will try to clarify further.

Thank you.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] PDF files: to store in database or not

2016-12-08 Thread John DeSoi

> On Dec 6, 2016, at 1:09 PM, Eric Schwarzenbach  
> wrote:
> 
> I've often wondered if we'd have been better off storing the files in the 
> database. This design decision was made some years ago, and our concerns 
> around this had to do with performance, but I don't know that we had any real 
> data that this should have been a concern, and I suspect you could ameliorate 
> if not eliminate this as an issue by careful design. I'd loved to hear this 
> idea confirmed or debunked by someone who has more expertise (and ideally, 
> done actual testing).

I have been storing PDFs in Postgres for several years without any problems. 
Documents range in size from a few pages to 100+ pages. I'm using a bytea 
column, not large objects. I store the documents in a separate database from 
the rest of the application data in order to make it easy to exclude in 
database dumps or backup in some other way. I'm currently managing about 
600,000 documents.

I created some functions that enable a subset of the document database to be 
synchronized elsewhere. For example, we need to keep only the last 3 years of 
documents on a website for user access. Using Postgres has made this easy to 
manage and verify. And with replication we automatically have the document 
database available on the backup web server without additional effort.
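For anyone wanting to try this pattern, here is a minimal sketch of such a table. All names are made up for illustration, not John's actual schema:

```sql
CREATE TABLE documents (
    id       serial PRIMARY KEY,
    filename text NOT NULL,
    added_at timestamptz NOT NULL DEFAULT now(),
    content  bytea NOT NULL               -- the raw PDF bytes
);

-- From a client library (e.g. psycopg), pass the file contents as a
-- bound parameter; the driver handles the bytea encoding.

-- Check how much space the documents take, e.g. for planning backups:
SELECT filename, octet_length(content) AS bytes
FROM documents
ORDER BY bytes DESC;
```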

John DeSoi, Ph.D.





Re: [GENERAL] Importing SQLite database

2016-12-08 Thread William Ivanski
OmniDB [1] is able to convert databases from one RDBMS to another. I
suggest you perform a conversion from your SQLite file to a PostgreSQL
database.
This page [2] shows how to add connections to OmniDB, and this other
page [3] explains how to use the conversion feature.

[1] http://www.omnidb.com.br/en_index.aspx
[2] https://github.com/OmniDB/OmniDB/wiki/3.-Creating-Users-and-Connections
[3] https://github.com/OmniDB/OmniDB/wiki/10.-Schema-Conversion

Em qui, 8 de dez de 2016 às 10:55, Igor Korot  escreveu:

> Hi, ALL,
> I have a text file which I got from exporting the SQLite database.
>
> The file contains an SQL statement which will generate the database.
>
> Excerpt from this file:
>
> [code]
> CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
> VARCHAR(100),balance DOUBLE(10,2));
> CREATE TABLE players(playerid INTEGER AUTOINCREMENT PRIMARY KEY,
> player_name VARCHAR(60), player_position CHAR(1));
> CREATE TABLE player_draft(id INTEGER, playerid INTEGER,
> drafted_position CHAR(1), FOREIGN KEY id REFERENCE leagues(id),
> FOREIGN KEY playerid REFERENCE players(playerid));
> INSERT INTO leagues VALUES(1, 'Test1', 260.00);
> INSERT INTO leagues VALUES(2, 'Test2', 260.00);
> INSERT INTO players VALUES(1, 'Player1', 'D');
> INSERT INTO players VALUES(2, ''Player2', 'F');
> [/code]
>
> My question would be:
> Is there a command in Postgres which will open this file and execute all
> those
> commands one-by-one in a transaction?
> Or I will have to do a manual table creation, then split this file and
> use "LOAD..."
> command to load the data into the tables?
>
> Hopefully I made myself clear.
> Let me know if not and I will try to clarify further.
>
> Thank you.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 

William Ivanski


Re: [GENERAL] When to use COMMENT vs --

2016-12-08 Thread Rich Shepard

On Thu, 8 Dec 2016, Charles Clavadetscher wrote:


IMHO the fact mentioned by Chris Travers that comments on objects included
in the database can be used by tools to generate the documentation is
probably the most important (besides their being persisted).


Charles,

  Chris' contribution was the only one to answer my question.


As a matter of fact we have integrated this feature to extract comments to
generate the DB documentation in our internal MediaWiki-based wiki.


  I'm not sure this is applicable to me. One project is for my business use,
the other for clients who will see only the entire application.

Thanks for explaining,

Rich




Re: [GENERAL] FreeBSD 10 => 11: Dump and reload your PostgreSQL database unless you like it broken

2016-12-08 Thread Tim Clarke
On 07/12/16 21:20, Rob Sargent wrote:
>
>
> On 12/07/2016 02:06 PM, Kevin Grittner wrote:
>> I would be much happier if most developers understood the
>> difference between a character set (e.g., Unicode) and a character
>> encoding scheme (e.g., UTF-8 or UTF-16) and how the two concepts
>> relate.  If we reached a point where most DBAs understood the point
>> of being able to set a client_encoding that is different from the
>> server_encoding, I think I would need to pop the cork on some
>> champagne.
>>
>> Hm.  Maybe a topic for a blog post
>>
>> -- 
>> Kevin Grittner
>> EDB: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>>
>>
> +1 - Confuses me every time
>
>

+1 me too

-- 
Tim Clarke





[GENERAL] Importing SQLite database

2016-12-08 Thread Igor Korot
Hi, ALL,
I have a text file which I got from exporting the SQLite database.

The file contains an SQL statement which will generate the database.

Excerpt from this file:

[code]
CREATE TABLE leagues( id INTEGER AUTOINCREMENT PRIMARY KEY, name
VARCHAR(100),balance DOUBLE(10,2));
CREATE TABLE players(playerid INTEGER AUTOINCREMENT PRIMARY KEY,
player_name VARCHAR(60), player_position CHAR(1));
CREATE TABLE player_draft(id INTEGER, playerid INTEGER,
drafted_position CHAR(1), FOREIGN KEY id REFERENCE leagues(id),
FOREIGN KEY playerid REFERENCE players(playerid));
INSERT INTO leagues VALUES(1, 'Test1', 260.00);
INSERT INTO leagues VALUES(2, 'Test2', 260.00);
INSERT INTO players VALUES(1, 'Player1', 'D');
INSERT INTO players VALUES(2, ''Player2', 'F');
[/code]

My question would be:
Is there a command in Postgres which will open this file and execute all those
commands one-by-one in a transaction?
Or I will have to do a manual table creation, then split this file and
use "LOAD..."
command to load the data into the tables?

Hopefully I made myself clear.
Let me know if not and I will try to clarify further.

Thank you.




Re: [GENERAL] pgbasebackup necessary for master slave change?

2016-12-08 Thread Michael Paquier
On Thu, Dec 8, 2016 at 7:08 PM, Chris Travers  wrote:
>
>
> On Thu, Dec 8, 2016 at 11:01 AM, Subhankar Chattopadhyay
>  wrote:
>>
>> pg_rewind cannot be used as I am on 9.4.
>>
>> Anything else?
>
>
> Upgrade to 9.5 or 9.6? ;-)
>
> This is a known limitation in Postgres that pg_rewind was written to
> address.

FWIW I maintain a version that works on 9.4:
https://github.com/vmware/pg_rewind
And that's battle-proven.
-- 
Michael




Re: [GENERAL] max_connections limit violation not showing in pg_stat_activity

2016-12-08 Thread Charles Clavadetscher
Hello Kevin

Getting back at this.

> -Original Message-
> From: Kevin Grittner [mailto:kgri...@gmail.com]
> Sent: Mittwoch, 23. November 2016 17:04
> To: Charles Clavadetscher 
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] max_connections limit violation not showing in 
> pg_stat_activity
> 
> On Wed, Nov 23, 2016 at 4:43 AM, Charles Clavadetscher 
>  wrote:
> > From: Kevin Grittner [mailto:kgri...@gmail.com]
> 
> >> Is it possible to upgrade?  You are missing over a year's worth of
> >> fixes for serious bugs and security vulnerabilities.
> >
> > Yes. Actually it is foreseen to move to 9.6 (and RedHat 7.x).
> >
> >> https://www.postgresql.org/support/versioning/
> 
> An upgrade from 9.3.x to 9.6.x is a major release upgrade, which gets you new 
> features and usually gets you
> improvements in performance and scalability.  The 9.3 major release will be 
> supported for almost 2 more years, so I
> wasn't so concerned about that as being on 9.3.10 when the latest bug fix 
> version of 9.3 is 9.3.15.  To avoid
> hitting bugs that others have already hit and reported, with fixes published, 
> it is wise to try to upgrade to the
> latest minor release fairly quickly.  If the minor release fixes a serious 
> security vulnerability, I think it is a
> good idea to update within a day or two of release.
> 
> > I will have to check with our hosting people how many cores we have or
> > can have on the new environment.  I have seen that there is pgBouncer
> > and pgPool. Would you recommend one of those?
> 
> pgBouncer is more lightweight, so if you don't need any of the features 
> present only pgPool, I would go with
> pgBouncer.  Depending on your application software environment, it may be 
> even better to use a pool built into the
> application development framework.  There are several very good pools 
> available to Java environments.  I can
> personally attest to the quality of Apache dbcp, but others have mentioned 
> that they like other Java connection
> pools even better.
> You might want to search the archives, or maybe someone will mention the 
> others again on this thread.

Finally I set up pgbouncer and a simple first test with a somewhat heavy load 
(1000 users in 5 minutes corresponding to a total amount of 12000 http requests 
and 5000 DB requests) shows an incredible improvement in performance. Without 
the pooler, roughly a fifth of the calls died in a timeout; the remaining showed 
an average response time of more than 8 seconds. With the pooler all requests 
went through without any error whatsoever and the mean response time dropped to 
23 ms (the maximum being at 193 ms). At first I thought that I had some error 
in my simulation somewhere, but all 5000 DB requests have been correctly 
performed as I could check in a log table that I prepared for that specific 
purpose. There is definetely not doubt about the beneficial effect of the 
connection pooler.

Now I will take some time to better understand the configuration settings.

Maybe a question on that. I have the following configuration entries:

pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 100
default_pool_size = 60
min_pool_size = 5
max_db_connections = 60
max_user_connections = 60

The pooler only serves a single application and only has a single connection 
string with the same user for all calls (that being the application user 
defined for that specific web application). Our current DB has a 
max_connections of 100. I assumed that max_client_conn should match this entry. 
With max_db_connections and max_user_connections I try to make sure that calls 
from this application won't take up all resources (as it happened without the 
pooler). This seems to work correctly. During the simulation other applications 
that also require the database but don't use the pooler, work normally, i.e. 
there are no noticeable effects.

If you have some suggestion or see potential problems in these settings, that I 
am not aware of so far, feel free to point me to them.

Thank you very much again for the excellent tips.
Bye
Charles

> >>> SELECT count(*) FROM pg_stat_activity; watch 1;
> >>
> >> At the times when the resources are overloaded by more connections
> >> than the resources can efficiently service -- well that's precisely
> >> the time that a sleeping "monitoring" process is least likely to be
> >> given a time slice to run. If you can manage to get pgbadger to run
> >> on your environment, and you turn on logging of connections and
> >> disconnections, you will be able to get far more accurate
> >> information.
> >
> > Yes, it sounds reasonable. I assumed that this kind of measurements
> > have a higher priority or reserved slots for them.
> > In those occasions is when they are most needed.
> 
> There is no such feature in PostgreSQL.  It might be worthwhile, although how 
> that would be implemented is not
> obvious, short of a sysadmin 

Re: [GENERAL] pgbasebackup necessary for master slave change?

2016-12-08 Thread Chris Travers
On Thu, Dec 8, 2016 at 11:01 AM, Subhankar Chattopadhyay <
subho@gmail.com> wrote:

> pg_rewind cannot be used as I am on 9.4.
>
> Anything else?
>

Upgrade to 9.5 or 9.6? ;-)

This is a known limitation in Postgres that pg_rewind was written to
address.

>
> On Thu, Dec 8, 2016 at 3:23 PM, Achilleas Mantzios
>  wrote:
> > On 08/12/2016 11:33, Subhankar Chattopadhyay wrote:
> >>
> >> Hi,
> >>
> >> A few days back I asked whether it is necessary to do a pg_basebackup
> >> for every database update. From John I understood that it is
> >> unnecessary: if the slave is syncing, even while it is catching up, it
> >> will get back in sync without a pg_basebackup. This also works for me.
> >>
> >> However, in a failover scenario where the master goes down and I
> >> promote the slave to master, when the old master comes back as a slave
> >> it cannot follow the new master unless I take a pg_basebackup from the
> >> new master. This is a kind of overhead. Is there a way to make the old
> >> master follow the new master without doing a full backup?
> >
> > pg_rewind
> >>
> >>
> >>
> >>
> >> Subhankar Chattopadhyay
> >> Bangalore, India
> >>
> >>
> >
> >
> > --
> > Achilleas Mantzios
> > IT DEV Lead
> > IT DEPT
> > Dynacom Tankers Mgmt
> >
> >
> >
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
>
>
>
> --
>
>
>
>
> Subhankar Chattopadhyay
> Bangalore, India
>
>
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] pgbasebackup necessary for master slave change?

2016-12-08 Thread Subhankar Chattopadhyay
pg_rewind cannot be used as I am on 9.4.

Anything else?

On Thu, Dec 8, 2016 at 3:23 PM, Achilleas Mantzios
 wrote:
> On 08/12/2016 11:33, Subhankar Chattopadhyay wrote:
>>
>> Hi,
>>
>> A few days back I asked whether it is necessary to do a pg_basebackup
>> for every database update. From John I understood that it is
>> unnecessary: if the slave is syncing, even while it is catching up, it
>> will get back in sync without a pg_basebackup. This also works for me.
>>
>> However, in a failover scenario where the master goes down and I
>> promote the slave to master, when the old master comes back as a slave
>> it cannot follow the new master unless I take a pg_basebackup from the
>> new master. This is a kind of overhead. Is there a way to make the old
>> master follow the new master without doing a full backup?
>
> pg_rewind
>>
>>
>>
>>
>> Subhankar Chattopadhyay
>> Bangalore, India
>>
>>
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



-- 




Subhankar Chattopadhyay
Bangalore, India




Re: [GENERAL] pgbasebackup necessary for master slave change?

2016-12-08 Thread Achilleas Mantzios

On 08/12/2016 11:33, Subhankar Chattopadhyay wrote:

Hi,

A few days back I asked whether it is necessary to do a pg_basebackup
for every database update. From John I understood that it is
unnecessary: if the slave is syncing, even while it is catching up, it
will get back in sync without a pg_basebackup. This also works for me.

However, in a failover scenario where the master goes down and I
promote the slave to master, when the old master comes back as a slave
it cannot follow the new master unless I take a pg_basebackup from the
new master. This is a kind of overhead. Is there a way to make the old
master follow the new master without doing a full backup?

pg_rewind




Subhankar Chattopadhyay
Bangalore, India





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: [GENERAL] pgbasebackup necessary for master slave change?

2016-12-08 Thread Subhankar Chattopadhyay
I am on 9.4. It doesn't look like pg_rewind is available on 9.4.

Do we have any other option?

On Thu, Dec 8, 2016 at 3:15 PM, Chris Travers  wrote:
>
>
> On Thu, Dec 8, 2016 at 10:33 AM, Subhankar Chattopadhyay
>  wrote:
>>
>> Hi,
>>
>> A few days back I asked whether it is necessary to do a pg_basebackup
>> for every database update. From John I understood that it is
>> unnecessary: if the slave is syncing, even while it is catching up, it
>> will get back in sync without a pg_basebackup. This also works for me.
>>
>> However, in a failover scenario where the master goes down and I
>> promote the slave to master, when the old master comes back as a slave
>> it cannot follow the new master unless I take a pg_basebackup from the
>> new master. This is a kind of overhead. Is there a way to make the old
>> master follow the new master without doing a full backup?
>
>
> Depending on your version and circumstance, pg_rewind may address your
> problem.
> https://www.postgresql.org/docs/9.5/static/app-pgrewind.html
>>
>>
>>
>>
>> Subhankar Chattopadhyay
>> Bangalore, India
>>
>>
>
>
>
>
> --
> Best Wishes,
> Chris Travers
>
> Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
> lock-in.
> http://www.efficito.com/learn_more



-- 




Subhankar Chattopadhyay
Bangalore, India




Re: [GENERAL] pgbasebackup necessary for master slave change?

2016-12-08 Thread Chris Travers
On Thu, Dec 8, 2016 at 10:33 AM, Subhankar Chattopadhyay <
subho@gmail.com> wrote:

> Hi,
>
> A few days back I asked whether it is necessary to do a pg_basebackup
> for every database update. From John I understood that it is
> unnecessary: if the slave is syncing, even while it is catching up, it
> will get back in sync without a pg_basebackup. This also works for me.
>
> However, in a failover scenario where the master goes down and I
> promote the slave to master, when the old master comes back as a slave
> it cannot follow the new master unless I take a pg_basebackup from the
> new master. This is a kind of overhead. Is there a way to make the old
> master follow the new master without doing a full backup?
>

Depending on your version and circumstance, pg_rewind may address your
problem.
https://www.postgresql.org/docs/9.5/static/app-pgrewind.html

>
>
>
> Subhankar Chattopadhyay
> Bangalore, India
>
>
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] Who dropped a role?

2016-12-08 Thread Charles Clavadetscher
Hello

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Durumdara
Sent: Donnerstag, 8. Dezember 2016 10:13
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Who dropped a role?

 

Hello!

 

Sorry, in the meantime I found it as a group role.

I have never used this kind of role. How do I restore it to a normal login role?

 

ALTER ROLE ... LOGIN;

 

You probably need to reassign the password (I don't remember right now if the 
password is dropped when changing from LOGIN to NOLOGIN).

 

\password ...
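Concretely, with a placeholder role name:

```sql
-- turn the group role back into a role that can log in
ALTER ROLE important_role LOGIN;

-- set a (new) password; in psql, \password important_role avoids
-- the plaintext literal appearing in the server log
ALTER ROLE important_role PASSWORD 'new-secret';
```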

 

Thanks for it!

 

dd

 

2016-12-08 9:53 GMT+01:00 Durumdara:

Dear PG Masters!

 

On a server today I could not find an important role.

I don't know what my colleagues did with it, but it seems to be lost.

Do you know a trick to get info about it?

1.) Who dropped it?

2.) If it was not dropped, what happened to it?

Is role dropping logged somewhere?

 

Thanks for your help!

 

dd
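Regarding the question of whether role drops are logged anywhere: by default
they are not, but DROP ROLE is DDL, so future ones can be captured in the
server log via postgresql.conf (a sketch):

```ini
# postgresql.conf -- log all DDL, including DROP ROLE,
# with the issuing user in the line prefix
log_statement = 'ddl'
log_line_prefix = '%m [%p] %u@%d '
```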

 



[GENERAL] pgbasebackup necessary for master slave change?

2016-12-08 Thread Subhankar Chattopadhyay
Hi,

A few days back I asked whether it is necessary to do a pg_basebackup
for every database update. From John I understood that it is
unnecessary: if the slave is syncing, even while it is catching up, it
will get back in sync without a pg_basebackup. This also works for me.

However, in a failover scenario where the master goes down and I
promote the slave to master, when the old master comes back as a slave
it cannot follow the new master unless I take a pg_basebackup from the
new master. This is a kind of overhead. Is there a way to make the old
master follow the new master without doing a full backup?



Subhankar Chattopadhyay
Bangalore, India




Re: [GENERAL] Who dropped a role?

2016-12-08 Thread Durumdara
Hello!

Sorry, in the meantime I found it as a group role.
I have never used this kind of role. How do I restore it to a normal login role?

Thanks for it!

dd

2016-12-08 9:53 GMT+01:00 Durumdara:

> Dear PG Masters!
>
> On a server today I could not find an important role.
> I don't know what my colleagues did with it, but it seems to be lost.
> Do you know a trick to get info about it?
>
> 1.) Who dropped it?
> 2.) If it was not dropped, what happened to it?
>
> Is role dropping logged somewhere?
>
> Thanks for your help!
>
> dd
>


[GENERAL] Who dropped a role?

2016-12-08 Thread Durumdara
Dear PG Masters!

On a server today I could not find an important role.
I don't know what my colleagues did with it, but it seems to be lost.
Do you know a trick to get info about it?

1.) Who dropped it?
2.) If it was not dropped, what happened to it?

Is role dropping logged somewhere?

Thanks for your help!

dd