Re: [GENERAL] Redirect sequence access to different schema

2010-07-26 Thread Magnus Reftel
On Jul 25, 2010, at 23:13 , Joe Conway wrote:
 On 07/25/2010 12:01 PM, Magnus Reftel wrote:
 create view myseq as select * from other_schema.foo_id_seq;
 
 but when I run select nextval('myseq'); I get an error saying that
 myseq is not a sequence. What other options are there?
 
 It isn't clear (to me, at least) what you are trying to accomplish, but
 does this do what you want?

Thanks for the reply! Sorry for not being clear. What I'm after is being able 
to have some code run on the database without having to modify the application 
or its database schema. The way I'm trying to achieve this is by setting it up 
to access a different schema than it usually would, and have that schema act as 
a proxy for the real schema using views and rules that perform the alterations 
I want. It works fine for tables, but I had trouble getting it to work with
sequences.

One solution I came up with is to not try to emulate the sequence, but the 
functions accessing the sequence, as in:

alter function currval(regclass) rename to real_currval;
create function inject.currval(unknown) returns bigint as
  'select real_currval(''actual.'' || CAST($1 as text));'
  language sql security definer;
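
The same trick should then cover nextval as well (untested sketch; as above,
"actual" stands in for the real schema name):

alter function nextval(regclass) rename to real_nextval;
create function inject.nextval(unknown) returns bigint as
  'select real_nextval(''actual.'' || CAST($1 as text));'
  language sql security definer;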

Best Regards
Magnus Reftel
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Converting BYTEA from/to BIGINT

2010-07-26 Thread Vincenzo Romano
Hi all.

I'd like to convert an 8-byte BYTEA into a BIGINT and possibly vice versa.
Is there any way to do it?

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting BYTEA from/to BIGINT

2010-07-26 Thread Pavel Stehule
Hello

you can try

postgres=# select int8send(256);
      int8send
--------------------
 \x0000000000000100
(1 row)

for converting from bytea to int8 you need a custom function - probably in C :(

Pavel


2010/7/26 Vincenzo Romano vincenzo.rom...@notorand.it:
 Hi all.

 I'd like to convert an 8-byte BYTEA into a BIGINT and possibly vice versa.
 Is there any way to do it?

 --
 Vincenzo Romano
 NotOrAnd Information Technologies
 NON QVIETIS MARIBVS NAVTA PERITVS

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Data dumps to files - best methods?

2010-07-26 Thread Machiel Richards
Hi All

   Thank you for the responses so far...

   I do however have a more specific question regarding this data dump that
I need to create for them.

   From what I can see in the specs and current output files, the client
needs the data output in .xml format in order to use this on their side,
still trying to understand why though...

   Is there a method of outputting / dumping the data into .xml format
as part of the scripts / crons/ db processes?

Machiel Richards
MySQL DBA
Email: machi...@rdc.co.za
Tel: 0861 732 732

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Ralf Schuchardt
Sent: 23 July 2010 01:37 PM
To: Machiel Richards
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Data dumps to files - best methods?

Hi,

Am 23.07.2010 um 10:32 schrieb Machiel Richards:

   As I am fairly new to postgresql I am trying to find some more info
regarding options to dump specific data to files.

 However, even though I can get the sql query, etc... how will I use this
to dump the data into the relevant files?

You can use the \copy command in psql to export the result of a query into a
file. For example:

 psql -c "\copy (select * from atable) to 'myfile.csv' with delimiter as ',' csv header" adb

will copy all rows from atable in adb to myfile.csv in csv format.


Ralf

Re: [GENERAL] Converting BYTEA from/to BIGINT

2010-07-26 Thread Vincenzo Romano
2010/7/26 Vincenzo Romano vincenzo.rom...@notorand.it:
 Hi all.

 I'd like to convert an 8-byte BYTEA into a BIGINT and possibly vice versa.
 Is there any way to do it?

Something like:

CREATE OR REPLACE FUNCTION bytea_to_int8( ba BYTEA, OUT res INT8 )
LANGUAGE plpgsql STRICT
AS $BODY$
DECLARE
  i INT;
BEGIN
  res := 0;
  FOR i IN 0 .. 7 LOOP
    res := 256*res + get_byte( ba,i );
  END LOOP;
END;
$BODY$;

gives me back an error (ERROR:  bigint out of range) because of overflow
at step no. 7

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting BYTEA from/to BIGINT

2010-07-26 Thread Vincenzo Romano
2010/7/26 Pavel Stehule pavel.steh...@gmail.com:
 Hello

 you can try

 postgres=# select int8send(256);
      int8send
 --------------------
  \x0000000000000100
 (1 row)

 for converting from bytea to int8 you need a custom function - probably in C 
 :(

int8send?

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Incorrect FTS result with GIN index

2010-07-26 Thread Artur Dabrowski

Hello Oleg,

I totally agree that the problem should be fixed. Having said this, I need to
add that:
- I have no knowledge of postgres development,
- I cannot dedicate any significant time to this problem,
- I am no longer working for the project where the problem occurred,
- In the mentioned project the problem is not considered business-critical
at the moment (although it may be in the future).

Nevertheless I think it should still be interesting for the postgres developer
community to fix it. The point is I have neither the needed knowledge nor the
time to fix it.

As to my postgres setup - it's nothing special, it's just a regular version
from postgres' webpage.

Best regards
Artur

Oleg Bartunov wrote:
 
 Artur,
 
 you could get many more problems in the future. The full text search
 problem may be a signature of a more general problem with your postgres
 setup. So, I'd recommend finding the source of the problem
 
 
 Oleg
 

-- 
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Incorrect-FTS-results-with-GIN-index-tp1928607p2227845.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server load statistics

2010-07-26 Thread Sam Mason
On Sun, Jul 25, 2010 at 12:15:06PM +0200, Torsten Bronger wrote:
 I need statistics about the PG server load.  At the moment, I use
 for this
 
 SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
  tup_deleted FROM pg_stat_database WHERE datname='mydb';
 
 However, the figures are absurdly high (> 100.000 rows per second).
 If a row is one dataset (one user account, one blog entry, one
 comment etc), I expect two or three orders of magnitude less.  Is my
 SQL statement nevertheless a good way to measure how busy the server
 is?

Yes, but I don't think it's measuring what you think it is.
tup_returned gives the number of tuples read during sequential scans,
so you've probably got some queries that are touching many more rows
than you're expecting.
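
To track down the culprits, something along these lines (untested) lists the
tables that sequential scans read the most rows from:

  SELECT relname, seq_scan, seq_tup_read
  FROM pg_stat_user_tables
  ORDER BY seq_tup_read DESC
  LIMIT 10;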

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread AlannY
Hi there.

I have a huge database with several tables. Some tables have statistics
information. And it's very huge. I don't want to lose any of this data.
But hard drives on my single server are not eternal. Very soon, there will
be no space left. And the most awful thing is that it's a 1U server, and I
can't install a new hard drive.

What can I do to enlarge space, without losing data and performance?

Thanks for patience.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread Ivan Voras
On 26.7.2010 12:43, AlannY wrote:
 Hi there.
 
 I have a huge database with several tables. Some tables have statistics
 information. And it's very huge. I don't want to lose any of this data.
 But hard drives on my single server are not eternal. Very soon, there will
 be no space left. And the most awful thing is that it's a 1U server, and I
 can't install a new hard drive.

 What can I do to enlarge space, without losing data and performance?

Absolutely nothing quick and easy. In fact, about the only thing you can
do which won't cause a (long term) data loss and performance degradation
is a full backup, installing bigger drives to replace the old ones, and
full restore.

Some other ideas which might help you if you don't want to swap drives,
but generally require a lot of work and you *will* lose either data or
performance:

* use a file system which supports compression (NTFS on Windows, ZFS on
FreeBSD & Solaris; I don't know of any on Linux)
* move unneeded data out from the database and into a separate,
compressed data storage format (e.g. move statistical data into gzipped
csv or text files or something to that effect; see the sketch after this list)
* buy external storage (NAS, or even an external USB drive), move the
database to it
* use an external data storage service like amazon s3 (actually, this is
a bad idea since you will need to completely rewrite your database and
application)
* decide that you really don't need some of the data and just delete it.
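
As a sketch of the gzipped-csv idea (untested; stats_2009, mydb and the target
path are placeholder names):

  # archive the table as compressed csv; verify the file before dropping
  psql -c "copy (select * from stats_2009) to stdout with csv header" mydb | gzip > /archive/stats_2009.csv.gz
  psql -c "drop table stats_2009" mydb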



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Data dumps to files - best methods?

2010-07-26 Thread Ralf Schuchardt
Hi,

Am 26.07.2010 um 11:46 schrieb Machiel Richards:

Is there a method of outputting / dumping the data into .xml format as 
 part of the scripts / crons/ db processes?

It depends on your requirements for the xml file format.
psql can output data in html/xhtml format (--html switch), which might or
might not be enough for you.

$ psql --html -c "select * from atable" adb > myfile.xhtml.part

Another way is to use the function described here: 
http://www.postgresql.org/docs/8.4/interactive/functions-xml.html#FUNCTIONS-XML-MAPPING
query_to_xml generates a proper xml version of the query results.
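
For example, something like this (untested) wraps the whole result set in a
single XML document; the boolean arguments control null handling and whether
each row becomes a separate document:

  SELECT query_to_xml('select * from atable', true, false, '');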


Ralf

 From: pgsql-general-ow...@postgresql.org 
 [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Ralf Schuchardt
 Sent: 23 July 2010 01:37 PM
 To: Machiel Richards
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Data dumps to files - best methods?
  
 Hi,
  
 Am 23.07.2010 um 10:32 schrieb Machiel Richards:
 
 
As I am fairly new to postgresql I am trying to find some more info 
 regarding options to dump specific data to files. 
 
 
  However, even though I can get the sql query,etc... how will I use this 
 to dump the data into the relevant files?
  
  You can use the \copy command in psql to export the result of a query into a
  file. For example:
  
  psql -c "\copy (select * from atable) to 'myfile.csv' with delimiter as ',' csv header" adb
  
 will copy all rows from atable in adb to myfile.csv in csv format.
  
  
 Ralf



Re: [GENERAL] Re: What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread A. Kretschmer
In response to Ivan Voras :
 * buy external storage (NAS, or even an external USB drive), move the
 database to it

buy external USB-Drive, and create a new tablespace, and move some large
table into this new tablespace and/or use the new tablespace for new
tables. You can also use table-partitioning with different tablespaces.
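
A sketch of the idea (untested; the path and names are made up):

  CREATE TABLESPACE usbspace LOCATION '/mnt/usbdisk/pgdata';
  ALTER TABLE stats_archive SET TABLESPACE usbspace;  -- rewrites the table there
  CREATE TABLE stats_2010 (LIKE stats_archive) TABLESPACE usbspace;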


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread Vincenzo Romano
2010/7/26 A. Kretschmer andreas.kretsch...@schollglas.com:
 In response to Ivan Voras :
 * buy external storage (NAS, or even an external USB drive), move the
 database to it

 buy external USB-Drive, and create a new tablespace, and move some large
 table into this new tablespace and/or use the new tablespace for new
 tables. You can also use table-partitioning with different tablespaces.

Table space on a USB drive?
You must really be sinking to go for this very option!
I'd rather move everything else from the crowded partition onto the USB drive,
as I would suppose that the database (performance and reliability) is
more important by far ...

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread Bill Thoen

A. Kretschmer wrote:

In response to Ivan Voras :

* buy external storage (NAS, or even an external USB drive), move the
database to it

buy external USB-Drive, and create a new tablespace, and move some large
table into this new tablespace and/or use the new tablespace for new
tables. You can also use table-partitioning with different tablespaces.

Can you then unmount that USB drive without causing any damage to the 
other databases?


--
- Bill Thoen 
 GISnet - www.gisnet.com

 303-786-9961


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread A. Kretschmer
In response to Vincenzo Romano :
 2010/7/26 A. Kretschmer andreas.kretsch...@schollglas.com:
  In response to Ivan Voras :
  * buy external storage (NAS, or even an external USB drive), move the
  database to it
 
  buy external USB-Drive, and create a new tablespace, and move some large
  table into this new tablespace and/or use the new tablespace for new
  tables. You can also use table-partitioning with different tablespaces.
 
 Table space on a USB drive?
 You must really be sinking to go for this very option!
 I'd rather move everything else from the crowded partition onto the USB drive,
 as I would suppose that the database (performance and reliability) is
 more important by far ...

Maybe, it depends, but why not? Maybe there are some big, but rarely
used, read-only tables?


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread Vincenzo Romano
2010/7/26 A. Kretschmer andreas.kretsch...@schollglas.com:
 In response to Vincenzo Romano :
 2010/7/26 A. Kretschmer andreas.kretsch...@schollglas.com:
  In response to Ivan Voras :
  * buy external storage (NAS, or even an external USB drive), move the
  database to it
 
  buy external USB-Drive, and create a new tablespace, and move some large
  table into this new tablespace and/or use the new tablespace for new
  tables. You can also use table-partitioning with different tablespaces.

 Table space on a USB drive?
 You must really be sinking to go for this very option!
 I'd rather move everything else from the crowded partition onto the USB
 drive,
 as I would suppose that the database (performance and reliability) is
 more important by far ...

 Maybe, it depends, but why not? Maybe there are some big, but rarely
 used, read-only tables?

... or maybe not.
Better move other stuff away, IMHO.

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread A. Kretschmer
In response to Bill Thoen :
 
 
 A. Kretschmer wrote:
 In response to Ivan Voras :
   
 * buy external storage (NAS, or even an external USB drive), move the
 database to it
 
 
 buy external USB-Drive, and create a new tablespace, and move some large
 table into this new tablespace and/or use the new tablespace for new
 tables. You can also use table-partitioning with different tablespaces.
   
 Can you then unmount that USB drive without causing any damage to the 
 other databases?

No!


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Data dumps to files - best methods?

2010-07-26 Thread Michael A. Peters

 Hi All


From what I can see in the specs and current output files, the client
 needs the data output in .xml format in order to use this on their side,
 still trying to understand why though...

I don't know what they are doing, but XML is an excellent data storage
format. I use it for some stuff I don't need in a relational database; I
can just import the xml into DOMDocument and operate on it that way.

You can probably use the libxml2 facilities of your favorite scripting
language (php,perl,python,ruby) to dump the database into whatever kind of
XML they want.

-
Michael A. Peters

http://www.shastaherps.org/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Blocked inserts on tables with FK to tables for which UPDATE has been revoked

2010-07-26 Thread Samuel Gilbert
Hello Alban,

  You are right on both counts about me not using the test user.  At first, it
was set up to use the test user, but later on, I wondered if the owner of a
database got treated differently.  The behaviour is the same regardless of
whether it's the owner or any other role.

  In the end, I got mixed up and sent the wrong version of the test script to
the list.  However, even if it is not the right user, the problem remains.
Samuel


On Saturday, July 24, 2010 06:08:23 Alban Hertroys wrote:
 On 23 Jul 2010, at 20:39, Samuel Gilbert wrote:
  Hello,
  
   I have encountered a problem with inserts failing because of permissions
  
  issues when the table in which I try to insert has foreign key
  constraints to tables for which UPDATE has been revoked.
  
  The script below shows how to reproduce the problem with a bare-bones
  test case.  Reproducibility is 100% on PostgreSQL 8.2.5 (I know it's not
  even the latest revision of the 8.2 line, but it's what I have to work
  with :( )  I Googled the error message and a couple of meaningful
  keywords, searched the documentation and the mailing list archives
  without success.
 
 It's not a solution to your problem, but one observation I made in your test 
script:
  CREATE DATABASE test WITH OWNER afsugil ENCODING 'LATIN1';
  
  CREATE ROLE test WITH NOCREATEDB NOCREATEROLE NOCREATEUSER INHERIT
  LOGIN;
  \c test afsugil
 
 You create a new user, but you still connect with the user who created the
 database.
 
  REVOKE UPDATE ON station FROM afsugil;
 
 And then you revoke rights from that user instead of from the test user.
 Effectively you're not using the 'test' user at all in your script. Was
 that intended?
 
 Alban Hertroys
 
 --
 Screwing up is an excellent way to attach something to the ceiling.
 
 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] sql dump

2010-07-26 Thread Scott Frankel


Hi all,

Is it possible to perform an SQL Dump without using pg_dump?

I have a special case situation wherein my application has access to a  
remotely-hosted PG (8.3) database, but does not have access to its  
admin tools.  (There's a longer backstory here that I'm happy to  
explain if necessary.)  I'm looking for an efficient way to dump all  
the data in the DB without having to SELECT * on each table.


Thanks in advance!
Scott

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sql dump

2010-07-26 Thread Raymond O'Donnell

On 26/07/2010 16:58, Scott Frankel wrote:


Hi all,

Is it possible to perform an SQL Dump without using pg_dump?

I have a special case situation wherein my application has access to a
remotely-hosted PG (8.3) database, but does not have access to its admin
tools. (There's a longer backstory here that I'm happy to explain if
necessary.) I'm looking for an efficient way to dump all the data in the
DB without having to SELECT * on each table.


I don't think so. Are you in a position to run pg_dump locally to the
application, using the -h parameter to connect to the remote DB? You
really are better off using pg_dump if you can at all, since it looks
after FK dependencies between tables, permissions, etc.
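
For example (hypothetical host, user and database names):

  pg_dump -h remote.example.com -U appuser appdb > appdb.sql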


Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sql dump

2010-07-26 Thread Vincenzo Romano
2010/7/26 Scott Frankel fran...@circlesfx.com:

 Hi all,

 Is it possible to perform an SQL Dump without using pg_dump?

 I have a special case situation wherein my application has access to a
 remotely-hosted PG (8.3) database, but does not have access to its admin
 tools.  (There's a longer backstory here that I'm happy to explain if
 necessary.)  I'm looking for an efficient way to dump all the data in the DB
 without having to SELECT * on each table.

 Thanks in advance!
 Scott

You can use COPY but you won't easily have the DDL, just the data.
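
For example, per table (placeholder names), either server-side or from psql on
the client side:

  COPY mytable TO STDOUT;
  \copy mytable to 'mytable.dump'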

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sql dump

2010-07-26 Thread Ian Barwick
2010/7/27 Scott Frankel fran...@circlesfx.com:

 Hi all,

 Is it possible to perform an SQL Dump without using pg_dump?

 I have a special case situation wherein my application has access to a
 remotely-hosted PG (8.3) database, but does not have access to its admin
 tools.  (There's a longer backstory here that I'm happy to explain if
 necessary.)  I'm looking for an efficient way to dump all the data in the DB
 without having to SELECT * on each table.

COPY might be of use:
http://www.postgresql.org/docs/8.3/interactive/sql-copy.html

though you will have to specify each table of course.
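
The table names can at least be pulled from the catalog to drive a script, e.g.:

  SELECT tablename FROM pg_tables WHERE schemaname = 'public';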


Ian Barwick

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sql dump

2010-07-26 Thread David Fetter
On Mon, Jul 26, 2010 at 08:58:59AM -0700, Scott Frankel wrote:
 
 Hi all,
 
 Is it possible to perform an SQL Dump without using pg_dump?

No, but there may be more options for using pg_dump than you have
looked at.  One example would be to use pg_dump on one with an SSH
tunnel to the other one's local PostgreSQL port (5432 by default, but
check which yours is).  For example:

ssh -fNR 5432:localhost:5432 postg...@your.host.dom

would let you connect to localhost:5432 with pg_dump and any other
PostgreSQL tools.
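
With the tunnel up, something like this should then work (user and database
names are placeholders):

pg_dump -h localhost -p 5432 -U appuser appdb > appdb.sql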

Cheers,
David.
 I have a special case situation wherein my application has access to
 a remotely-hosted PG (8.3) database, but does not have access to its
 admin tools.  (There's a longer backstory here that I'm happy to
 explain if necessary.)  I'm looking for an efficient way to dump all
 the data in the DB without having to SELECT * on each table.
 
 Thanks in advance!
 Scott
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Integral PG DB replication

2010-07-26 Thread Gauthier, Dave
Hi:

Will DB replication be integral in v9?  If so, when (approx) will that be out?

I have a need for this functionality to replicate a read-only copy of a DB
where the master and slave are 2 time zones away.  Estimating DML traffic, I'd
say maybe a few dozen writes per hour, but bursty, plus near 100% of them will
be inside of a 10 hour period/day.   (I realize that this would probably be
considered very low by most standards).

This is a production DB, so I'm not too keen on being a V9 beta site.
Actually, I am, but my management will not be :-)  .

-dave


Re: [GENERAL] sql dump

2010-07-26 Thread Gauthier, Dave
Maybe with a stored procedure that reads metadata plus data and spits 
everything out in sql-form to stdout somehow.  plperl might be a good language 
choice. 



-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Scott Frankel
Sent: Monday, July 26, 2010 11:59 AM
To: PostgreSQL List
Subject: [GENERAL] sql dump 


Hi all,

Is it possible to perform an SQL Dump without using pg_dump?

I have a special case situation wherein my application has access to a  
remotely-hosted PG (8.3) database, but does not have access to its  
admin tools.  (There's a longer backstory here that I'm happy to  
explain if necessary.)  I'm looking for an efficient way to dump all  
the data in the DB without having to SELECT * on each table.

Thanks in advance!
Scott

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Integral PG DB replication

2010-07-26 Thread Ben Chobot
On Jul 26, 2010, at 9:09 AM, Gauthier, Dave wrote:

 Hi:
  
 Will DB replication be integral in v9?  If so, when (approx) will that be out?
  
 I have a need for this functionality to replicate a read-only copy of a DB 
 where the master and slave are 2 time zones away.  Estimating DML traffic, 
 I'd say maybe a few dozen writes per hour, but bursty, plus near 100% of them
 will be inside of a 10 hour period/day.   (I realize that this would probably 
 be considered very low by most standards).  

I'm assuming you mean a hot read-only copy of the DB, because you can currently 
do a warm replication with either WAL shipping or, in your case, even DRBD on a 
special partition would probably work just fine. But if you need a hot slave, 
then yes, you'll have to wait till 9 - assuming you need the replication to be 
built into core.



Re: [GENERAL] Converting BYTEA from/to BIGINT

2010-07-26 Thread Vincenzo Romano
2010/7/26 Vincenzo Romano vincenzo.rom...@notorand.it:
 2010/7/26 Pavel Stehule pavel.steh...@gmail.com:
 Hello

 you can try

 postgres=# select int8send(256);
       int8send
 --------------------
  \x0000000000000100
 (1 row)

 for converting from bytea to int8 you need a custom function - probably in C 
 :(

 int8send?

It seems I have the solution.

CREATE OR REPLACE FUNCTION bytea_to_int8( ba BYTEA, OUT res INT8 )
LANGUAGE plpgsql STRICT
AS $BODY$
DECLARE
  i INT;
  nb INT;
  k CONSTANT INT8 := INT8( 2^(64-8-1) );
  b8 CONSTANT INT8 := 2^8;
BEGIN
  res := 0;
  IF length( ba ) >= 7 THEN
    nb := 6; -- all but the last one
  ELSE
    nb := length( ba ) - 1; -- all of them
  END IF;
  FOR i IN 0 .. nb LOOP
    res := b8*res + get_byte( ba,i );
  END LOOP;
  IF length( ba ) < 8 THEN
    RETURN;
  END IF;
  IF res > k-1 THEN
    res := (res - 2*k)*b8;
  ELSE
    res := res*b8;
  END IF;
  res := res + get_byte( ba,7 );
END;
$BODY$;

This function should get at most 8 bytes from a BYTEA and pack *all bits*
into a BIGINT (aka INT8) to be returned.
The function is somewhat more verbose than needed, in order to make it
clearer how it works and to make it more general.
The first 7 bytes are packed into the first 7 bytes of an INT8 in a
straightforward way.
Some maths is needed to pack the 8th byte without overflowing the INT8
(unsigned INT8 is not available at the moment).
Of course a C language function could have been faster, cleaner and
... less interesting to me.
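
As a quick sanity check, the round trip through int8send should give back the
original value:

SELECT bytea_to_int8( int8send( 256 ) );  -- expected: 256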

Now, why do this?
I am using a plain SEQUENCE to create a (kind of) session ID. That
is simple but predictable.
The idea is to use this function in conjunction with encrypt (from
pgcrypto) and the blowfish algorithm to make the sequence numbers
somewhat unpredictable.
I'm pretty sure there are better (or at least easier) solutions out
there, but there also needs to be some fun in here.
As usual, any hint is appreciated.
Flames can go to /dev/null :-)

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] sql dump

2010-07-26 Thread Scott Frankel


Thanks for all the suggestions.  COPY may work for my purposes.  The  
SSH tunnel option for using pg_dump is very interesting.


Thanks!
Scott




On Jul 26, 2010, at 9:18 AM, David Fetter wrote:


On Mon, Jul 26, 2010 at 08:58:59AM -0700, Scott Frankel wrote:


Hi all,

Is it possible to perform an SQL Dump without using pg_dump?


No, but there may be more options for using pg_dump than you have
looked at.  One example would be to use pg_dump on one with an SSH
tunnel to the other one's local PostgreSQL port (5432 by default, but
check which yours is).  For example:

   ssh -fNR 5432:localhost:5432 postg...@your.host.dom

would let you connect to localhost:5432 with pg_dump and any other
PostgreSQL tools.

Cheers,
David.

I have a special case situation wherein my application has access to
a remotely-hosted PG (8.3) database, but does not have access to its
admin tools.  (There's a longer backstory here that I'm happy to
explain if necessary.)  I'm looking for an efficient way to dump all
the data in the DB without having to SELECT * on each table.

Thanks in advance!
Scott

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


--
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread Greg Smith

A. Kretschmer wrote:

buy external USB-Drive, and create a new tablespace, and move some large
table into this new tablespace and/or use the new tablespace for new
tables. You can also use table-partitioning with different tablespaces.
  


There are zero USB drives on the market I'd trust to put a database on.  
That interface was not designed with things like the proper write 
caching controls needed for reliable operation.


There are some eSATA ones that might be useful for this purpose.  Those 
are essentially no different than directly connecting a drive.  Note 
that you have to be concerned about redundancy when you start doing this 
sort of thing.  External drives are more fragile than internal 
ones--there's a reason why the warranties are usually much shorter.


As for the original question here, I would look for tables that might 
have lots of dead space on them (located via VACUUM VERBOSE or 
pg_stat_user_tables) and run CLUSTER on them to try and reclaim some 
space, *before* you run out completely.  Once space is extremely 
limited, it becomes dramatically more difficult to reclaim it using that 
approach.
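
To spot candidates, something like this helps on 8.3+, where
pg_stat_user_tables exposes dead-tuple counts:

  SELECT relname, n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 10;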


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread Scott Marlowe
On Mon, Jul 26, 2010 at 4:43 AM, AlannY m...@alanny.ru wrote:
 Hi there.

 I have a huge database with several tables. Some tables have statistics
 information. And it's very huge. I don't want to lose any of this data.
 But hard drives on my single server are not eternal. Very soon, there will
 be no space left. And the most awful thing is that it's a 1U server, and I
 can't install a new hard drive.

 What can I do to enlarge space, without losing data and performance?

So, what can you do?  Can you add external storage of some kind?  Get
access to a network mounted drive, at least temporarily?  With the
restrictions you're listing I'm not sure you have a lot of
alternatives but to back up some of those tables, restore them
offsite, then drop the tables on the server.
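
Per table, that could look like this (placeholder names):

  pg_dump -t big_stats_table mydb | gzip > big_stats_table.sql.gz
  # copy the dump offsite and verify it, then:
  psql -c 'drop table big_stats_table' mydb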

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What to do if space for database decrease and no additional hard drive is possible?

2010-07-26 Thread John R Pierce

 On 07/26/10 3:43 AM, AlannY wrote:

Hi there.

I have a huge database with several tables. Some tables have statistics
information. And it's very huge. I don't want to lose any of this data.
But hard drives on my single server are not eternal. Very soon, there will
be no space left. And the most awful thing is that it's a 1U server, and I
can't install a new hard drive.

What can I do to enlarge space, without losing data and performance?


put a SAS card in the server, and use something like
http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/12169-304616-3930445-3930445-3930445-3954787-3954788-3954790.html

http://www-03.ibm.com/systems/storage/disk/exp3000/index.html

http://www.dell.com/us/en/business/storage/sas/ct.aspx?refid=sas&s=bsd&cs=04


etc etc.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general