Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Kern Sibbald
Hello,

Thanks for all the answers; I am a bit overwhelmed by the number, so I am 
going to try to answer everyone in one email.

The first thing to understand is that it is *impossible* to know what the 
encoding is on the client machine (FD -- or File daemon).  On say a 
Unix/Linux system, the user could create filenames with non-UTF-8 then switch 
to UTF-8, or restore files that were tarred on Windows or on Mac, or simply 
copy a Mac directory.  Finally, using system calls to create a file, you can 
put *any* character into a filename.

So, rather than trying to figure everything out (impossible, I think) and 
rather than failing to back up files, Bacula gets the raw filename from the 
OS and stores it on the Volume then puts it in the database.  We treat the 
filename as if it is UTF-8 for display purposes, but in all other cases, what 
we want is for the filename to go into the database and come back out 
unchanged.

On MySQL we use BLOBS.  On PostgreSQL, we use TEXT and set the encoding to 
SQL_ASCII so that PostgreSQL will not attempt to do any translation.  This 
works well, and I hope that PostgreSQL will continue to support letting 
Bacula insert text characters in the database with no character encoding 
checks in the future.
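The keep-the-raw-bytes approach described above can be sketched in a few lines of Python 3 (an illustration only, not Bacula's actual code; the helper names are invented). The filename is kept as the exact byte string the OS returns, and decoding happens purely for display:

```python
import os
import tempfile

# Sketch: treat each filename as an opaque byte string, exactly as the
# OS returned it, and decode only when showing it to a user.
def display_name(raw: bytes) -> str:
    # Decode as UTF-8 for display only; bytes that are not valid UTF-8
    # survive the round-trip thanks to surrogateescape.
    return raw.decode("utf-8", errors="surrogateescape")

def raw_name(shown: str) -> bytes:
    # Invert display_name: the original bytes come back unchanged.
    return shown.encode("utf-8", errors="surrogateescape")

# Assumes a POSIX filesystem, where filenames are arbitrary bytes.
with tempfile.TemporaryDirectory() as d:
    # A Latin-1 encoded name ("café") that is NOT valid UTF-8:
    raw = "café".encode("latin-1")
    os.mkdir(os.path.join(os.fsencode(d), raw))
    # Listing with a bytes argument returns raw bytes, no decoding:
    entries = os.listdir(os.fsencode(d))
    assert entries == [raw]
    # The display form may look odd, but the byte string round-trips intact:
    assert raw_name(display_name(raw)) == raw
```

Because only equality on unchanged byte strings is ever needed, no knowledge of the client's encoding is required.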

See more notes below ...


On Thursday 03 December 2009 03:54:07 Craig Ringer wrote:
 On 2/12/2009 9:18 PM, Kern Sibbald wrote:
  Hello,
 
  I am the project manager of Bacula.  One of the database backends that
  Bacula uses is PostgreSQL.

 As a Bacula user (though I'm not on the Bacula lists), first - thanks
 for all your work. It's practically eliminated all human intervention
 from something that used to be a major pain. Configuring it to handle
 the different backup frequencies, retention periods and diff/inc/full
 needs of the different data sets was a nightmare, but once set up it's
 been bliss. The 3.x `Accurate' mode is particularly nice.

  Bacula sets the database encoding to SQL_ASCII, because although
  Bacula supports UTF-8 character encoding, it cannot enforce it. 
  Certain operating systems such as Unix, Linux and MacOS can have
  filenames that are not in UTF-8 format.  Since Bacula stores filenames in
  PostgreSQL tables, we use SQL_ASCII.

 I noticed that while doing some work on the Bacula database a while ago.

 I was puzzled at the time about why Bacula does not translate file names
 from the source system's encoding to utf-8 for storage in the database,
 so all file names are known to be sane and are in a known encoding.

We don't and cannot know the encoding scheme on Unix/Linux systems (see 
above), so attempting to convert them to UTF-8 would just consume more CPU 
time and result in errors at some point.


 Because Bacula does not store the encoding or seem to transcode the file
 name to a single known encoding, it does not seem to be possible to
 retrieve files by name if the bacula console is run on a machine with a
 different text encoding to the machine the files came from. After all,
 café in utf-8 is a different byte sequence to café in iso-8859-1, and
 won't match in equality tests under SQL_ASCII.

If all the filenames go into the database in binary (literal) form, then any tests will 
work fine.  The only test Bacula does is equality.  Bacula doesn't worry 
about sorting.  Users may care, but for backup and restore the only test 
Bacula needs is equality, and as long as you are working with unchanged byte 
streams everything works on every system.

The one place where we do convert filenames is on Windows. We convert UCS to 
UTF-8.


 Additionally, I'm worried that restoring to a different machine with a
 different encoding may fail, and if it doesn't will result in hopelessly
 mangled file names. This wouldn't be fun to deal with during disaster
 recovery. (I don't yet know if there are provisions within Bacula its
 self to deal with this and need to do some testing).

Yes, if you restore on a different system with a different encoding, you will 
end up with the same binary string at the OS level, but when you see the 
filenames they may look different.


 Anyway, it'd be nice if Bacula would convert file names to utf-8 at the
 file daemon, using the encoding of the client, for storage in a utf-8
 database.

As I mention, this is not possible since Unix/Linux stores filenames as binary 
strings.  They can be in any encoding.


 Mac OS X (HFS Plus) and Windows (NTFS) systems store file names as
 Unicode (UTF-16 IIRC). Unix systems increasingly use utf-8, but may use
 other encodings. If a unix system does use another encoding, this may be
 determined from the locale in the environment and used to convert file
 names to utf-8.

As I mentioned above, on Linux/Unix systems, the user is free to change the 
encoding at will, and when he does so, existing filenames remain unchanged, 
so it is not possible to choose a particular encoding and back up and restore 
files without changing the filenames.  Bacula backs them up and restores them 
using binary strings, so we don't modify them.

Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Pavel Stehule
2009/12/3 Kern Sibbald k...@sibbald.com:
 Hello,

 Thanks for all the answers; I am a bit overwhelmed by the number, so I am
 going to try to answer everyone in one email.

 The first thing to understand is that it is *impossible* to know what the
 encoding is on the client machine (FD -- or File daemon).  On say a
 Unix/Linux system, the user could create filenames with non-UTF-8 then switch
 to UTF-8, or restore files that were tarred on Windows or on Mac, or simply
 copy a Mac directory.  Finally, using system calls to create a file, you can
 put *any* character into a filename.

 So, rather than trying to figure everything out (impossible, I think) and
 rather than failing to backup files, Bacula gets the raw filename from the
 OS and stores it on the Volume then puts it in the database.  We treat the
 filename as if it is UTF-8 for display purposes, but in all other cases, what
 we want is for the filename to go into the database and come back out
 unchanged.

 On MySQL we use BLOBS.  On PostgreSQL, we use TEXT and set the encoding to
 SQL_ASCII so that PostgreSQL will not attempt to do any translation.  This
 works well, and I hope that PostgreSQL will continue to support letting
 Bacula insert text characters in the database with no character encoding
 checks in the future.

Hello

Just use the bytea datatype instead of text.

http://www.postgresql.org/docs/8.4/interactive/datatype-binary.html

It is exactly what you want.

Regards

Pavel Stehule

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Craig Ringer
Kern Sibbald wrote:
 Hello,
 
 Thanks for all the answers; I am a bit overwhelmed by the number, so I am 
 going to try to answer everyone in one email.
 
 The first thing to understand is that it is *impossible* to know what the 
 encoding is on the client machine (FD -- or File daemon).  On say a 
 Unix/Linux system, the user could create filenames with non-UTF-8 then switch 
 to UTF-8, or restore files that were tarred on Windows or on Mac, or simply 
 copy a Mac directory.  Finally, using system calls to create a file, you can 
 put *any* character into a filename.

While true in theory, in practice it's pretty unusual to have filenames
encoded with an encoding other than the system LC_CTYPE on a modern
UNIX/Linux/BSD machine.

I'd _very_ much prefer to have Bacula back my machines up by respecting
LC_CTYPE and applying appropriate conversions at the fd if LC_CTYPE on
the fd's host is not utf-8 and the database is.

If the database was SQL_ASCII, it could retain its existing behaviour.

That way, people get to pick between the two rational behaviours:

(1) Store file names as raw byte strings (SQL_ASCII). Guaranteed to work
even on garbage file names that aren't valid in the current system
encoding, but has issues with searches, filename matches, restoring
to another system, etc.

(2) Store file names as UTF-8, performing any required translation from
the system charset at the file daemon. File names that are nonsense
in the system encoding are either (a) rejected with an error in the
fd logs, or (b) backed up with some form of name mangling.


I *strongly* suspect most people will pick the second option.

There's also a third possibility:

(3) As (2), but add a `bytea' column to `path' and `filename' tables
that's null if the fd was able to convert the filename from the
system LC_CTYPE to utf-8. In the rare cases it couldn't (due to
reasons like users running with different LC_CTYPE, nfs volumes
exported to systems with different LC_CTYPE, tarballs from
systems with different charsets, etc) the raw unconverted bytes
of the filename get stored in the bytea field, and a mangled
form of the name gets stored in the text field for user display
purposes only.

I don't know if that'd be worth the hassle, though. I'd just want to use
(2), and I suspect so would a majority of users.
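The decode-or-fall-back behaviour of options (2) and (3) might look roughly like this (a Python 3 sketch; `classify_filename` and `fd_encoding` are invented names standing in for the client's LC_CTYPE handling, not Bacula code):

```python
def classify_filename(raw: bytes, fd_encoding: str = "utf-8"):
    """Sketch of option (3): return (display_text, raw_bytes_or_None)."""
    try:
        # Happy path: the name is valid in the client's encoding, so only
        # its decoded form needs storing; the bytea column stays NULL.
        return raw.decode(fd_encoding), None
    except UnicodeDecodeError:
        # Fallback: keep the raw bytes, plus a mangled display-only name.
        mangled = raw.decode(fd_encoding, errors="replace")
        return mangled, raw

# Valid UTF-8 name: no bytea needed.
assert classify_filename("café".encode("utf-8")) == ("café", None)
# Latin-1 bytes on a UTF-8 client: raw bytes preserved, display mangled.
text, raw = classify_filename("café".encode("latin-1"))
assert raw == b"caf\xe9" and text.startswith("caf")
```

Under option (2a) the `except` branch would instead reject the file with an error in the fd logs.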

 On MySQL we use BLOBS.  On PostgreSQL, we TEXT and set the encoding to 
 SQL_ASCII so that PostgreSQL will not attempt to do any translation.  This 
 works well, and I hope that PostgreSQL will continue to support letting 
 Bacula insert text characters in the database with no character encoding 
 checks in the future.

Even if that was removed (which I can't see happening) you could use the
bytea type that's designed for exactly that purpose. Pity it's a bit of
a pain to work with :-(

 Because Bacula does not store the encoding or seem to transcode the file
 name to a single known encoding, it does not seem to be possible to
 retrieve files by name if the bacula console is run on a machine with a
 different text encoding to the machine the files came from. After all,
 café in utf-8 is a different byte sequence to café in iso-8859-1, and
 won't match in equality tests under SQL_ASCII.
 
 If all the filenames go into the database in binary (literal) form, then any 
 tests will work fine.  The only test Bacula does is equality.

Byte strings for the same sequence of characters in different encodings
are not equal. If you're looking for a file called café.txt that was
created on a machine with a latin-1 encoding, you cannot find it by
searching for 'café' in bconsole because bconsole will search for the
utf-8 byte sequence for 'café' not the latin-1 byte sequence for 'café'.

$ python
>>> x = u'café'
>>> x.encode('utf-8')
'caf\xc3\xa9'
>>> x.encode('latin-1')
'caf\xe9'
>>> x.encode('utf-8') == x.encode('latin-1')
False


or in Pg:


craig=> SHOW client_encoding;
 client_encoding
-----------------
 UTF8
(1 row)

craig=> CREATE TABLE test (x text);
CREATE TABLE

craig=> INSERT INTO test(x) VALUES ('café');
INSERT 0 1

craig=> SELECT x, x::bytea FROM test;
  x   |      x
------+-------------
 café | caf\303\251
(1 row)

craig=> SELECT convert_to(x, 'latin-1') from test;
 convert_to
------------
 caf\351
(1 row)

craig=> SELECT convert_to(x, 'utf-8') = x::bytea,
convert_to(x, 'utf-8') = convert_to(x, 'latin-1')
FROM test;
 ?column? | ?column?
----------+----------
 t        | f
(1 row)

 The one place where we do convert filenames is on Windows. We convert UCS to 
 UTF-8.

Cool. I thought that must be the case, but it's good to know.

What about Mac OS X? It stores file names in UTF-16 normalized form, but
has a variety of ways to access those files, including POSIX interfaces.

Hmm. A quick test suggests that, irrespective of LC_CTYPE, LANG and
LC_ALL, Mac OS X converts file names to UTF-8 for use with POSIX APIs. A
bit of digging helps confirm that:
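The decomposition issue is easy to demonstrate from Python 3 (a sketch; HFS Plus actually stores a slightly modified NFD form, but plain NFD shows the effect):

```python
import unicodedata

# 'café' composed (NFC) vs decomposed (NFD, roughly what HFS+ stores):
nfc = unicodedata.normalize("NFC", "café")
nfd = unicodedata.normalize("NFD", "café")
assert nfc != nfd                             # different code point sequences
assert nfc.encode("utf-8") == b"caf\xc3\xa9"  # é as one code point
assert nfd.encode("utf-8") == b"cafe\xcc\x81" # e + combining acute accent
# So even two valid UTF-8 spellings of the same visible name are unequal
# byte strings, and a byte-equality search for one form misses the other.
```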


Re: [GENERAL] How to auto-increment?

2009-12-03 Thread A. Kretschmer
In response to Andre Lopes :
 Hi,

Please answer to the list and not to me, okay?


 
 This is an exemple in ORACLE of what I need. I will see if this works in
 Postgres.

Why not? You will have to rewrite it for PostgreSQL, but the approach is okay.

 
 Another question. It is possible in Postgres to use more than one Trigger by
 table in Postgres?

Sure, why not? They fire in alphabetical order (by trigger name).


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



[GENERAL] numeric cast oddity

2009-12-03 Thread Sim Zacks
When I cast an integer to numeric using :: notation it ignores the scale
and precision that I specify, but when I use the cast function it uses
the scale and precision that I specify.

Sim

select version();
PostgreSQL 8.3.5 on i586-pc-linux-gnu, compiled by GCC
i586-pc-linux-gnu-gcc (GCC) 4.1.2 (Gentoo 4.1.2 p1.0.1)

select -1::numeric(20,4)
 ?column?
 numeric
----------
 -1.0000

select cast(-1 as numeric(20,4))
 numeric
 numeric(20,4)
--------------
 -1.0000
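(The difference comes from operator precedence: `::` binds more tightly than the unary minus, so the typmod from the cast is applied before negation, and the negation operator's result type is plain numeric. A sketch:)

```sql
-- "::" binds tighter than unary minus, so these differ in reported type:
SELECT -1::numeric(20,4);          -- parsed as -(1::numeric(20,4)); the
                                   -- unary minus returns plain "numeric"
SELECT (-1)::numeric(20,4);        -- cast applied last: type numeric(20,4)
SELECT cast(-1 as numeric(20,4));  -- same: numeric(20,4)
```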




Re: [GENERAL] numeric cast oddity

2009-12-03 Thread Thom Brown
2009/12/3 Sim Zacks s...@compulab.co.il

 When I cast an integer to numeric using :: notation it ignores the scale
 and precision that I specify, but when I use the cast function it uses
 the scale and precision that I specify.

 Sim

 select version();
 PostgreSQL 8.3.5 on i586-pc-linux-gnu, compiled by GCC
 i586-pc-linux-gnu-gcc (GCC) 4.1.2 (Gentoo 4.1.2 p1.0.1)

 select -1::numeric(20,4)
  ?column?
  numeric
 ----------
  -1.0000

 select cast(-1 as numeric(20,4))
  numeric
  numeric(20,4)
 --------------
  -1.0000



That looks right to me.  What you've effectively asked for is -1 stored with
precision 20 and scale 4, which resolves to -1.0000.

Regards

Thom


Re: [GENERAL] numeric cast oddity

2009-12-03 Thread Thom Brown
2009/12/3 Sim Zacks s...@compulab.co.il

 When I cast an integer to numeric using :: notation it ignores the scale
 and precision that I specify, but when I use the cast function it uses
 the scale and precision that I specify.

 Sim

 select version();
 PostgreSQL 8.3.5 on i586-pc-linux-gnu, compiled by GCC
 i586-pc-linux-gnu-gcc (GCC) 4.1.2 (Gentoo 4.1.2 p1.0.1)

 select -1::numeric(20,4)
  ?column?
  numeric
 ----------
  -1.0000

 select cast(-1 as numeric(20,4))
  numeric
  numeric(20,4)
 --------------
  -1.0000


I've just spotted what you mean.  Ignore my previous response.

Thom


[GENERAL] postgre...@fosdem 2010 - Call for talks

2009-12-03 Thread Dave Page
FOSDEM (http://www.fosdem.org/2010/) is a major Free and Open Source
event held annually in Brussels, Belgium, and attended by around 4000
people. As in recent years, the PostgreSQL project will have a devroom
where we will be presenting a number of talks. The event will be held
on 6-7 February 2010.

We're looking for developers, users and contributors to submit talks
for inclusion on the program. Any topic related to PostgreSQL is
acceptable as long as it is non-commercial in nature. Suggested topics
might include:

- Migration of systems to PostgreSQL
- Application development
- Benchmarking and tuning
- Spatial applications
- Hacking the code
- Data warehousing
- New features
- Tips and tricks
- Replication
- Case studies

We will have a number of 45-minute slots, and may split one or more
into 3 back-to-back 15 minute slots if we receive suitable proposals.

Please submit your proposals to:

 fos...@postgresql.eu

and include the following information:

- Your name
- The title of your talk (please be descriptive, as titles will be
listed with ~250 from other projects)
- A short abstract of one to two paragraphs
- A short biography introducing yourself
- Links to related websites/blogs etc.

The deadline for submissions is 22nd December 2009.

See you in Brussels!



Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Avi Rozen
Craig Ringer wrote:
 Kern Sibbald wrote:
   
 Hello,

 Thanks for all the answers; I am a bit overwhelmed by the number, so I am 
 going to try to answer everyone in one email.

 The first thing to understand is that it is *impossible* to know what the 
 encoding is on the client machine (FD -- or File daemon).  On say a 
 Unix/Linux system, the user could create filenames with non-UTF-8 then 
 switch 
 to UTF-8, or restore files that were tarred on Windows or on Mac, or simply 
 copy a Mac directory.  Finally, using system calls to create a file, you can 
 put *any* character into a filename.
 

 While true in theory, in practice it's pretty unusual to have filenames
 encoded with an encoding other than the system LC_CTYPE on a modern
 UNIX/Linux/BSD machine.
   

In my case garbage filenames are all too common. It's the sad
*reality*, when you're mixing languages (Hebrew and English in my case)
and operating systems. Garbage filenames are everywhere: directories and
files shared between different operating systems and file systems, mail
attachments, mp3 file names based on garbage id3 tags, files in zip
archives (which seem to not handle filename encoding at all), etc.

When I first tried Bacula (version 1.38), I expected to have trouble
with filenames, since this is what I'm used to. I was rather pleased to
find out that it could both backup and restore files, regardless of
origin and destination filename encoding.

I like Bacula because, among other things, it can take the punishment
and chug along, without me even noticing that there was supposed to be a
problem (a recent example: backup/restore files with a negative mtime ...)

My 2c
Avi



Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Daniel Verite
Craig Ringer wrote:

 While true in theory, in practice it's pretty unusual to have filenames
 encoded with an encoding other than the system LC_CTYPE on a modern
 UNIX/Linux/BSD machine.

It depends. In western Europe, where iso-8859-1[5] and utf8 are evenly used,
it's not unusual at all. You just have to extract an archive created by
someone who uses a different encoding than you. Since tar files don't carry
any information about the encoding of the filenames it contains, they come
out as they are, whatever LC_CTYPE is. The same problem exists for zip files.

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org



Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Kern Sibbald

 Craig Ringer wrote:
 Kern Sibbald wrote:

 Hello,

 Thanks for all the answers; I am a bit overwhelmed by the number, so I
 am
 going to try to answer everyone in one email.

 The first thing to understand is that it is *impossible* to know what
 the
 encoding is on the client machine (FD -- or File daemon).  On say a
 Unix/Linux system, the user could create filenames with non-UTF-8 then
 switch
 to UTF-8, or restore files that were tarred on Windows or on Mac, or
 simply
 copy a Mac directory.  Finally, using system calls to create a file,
 you can
 put *any* character into a filename.


 While true in theory, in practice it's pretty unusual to have filenames
 encoded with an encoding other than the system LC_CTYPE on a modern
 UNIX/Linux/BSD machine.


 In my case garbage filenames are all too common. It's the sad
 *reality*, when you're mixing languages (Hebrew and English in my case)
 and operating systems. Garbage filenames are everywhere: directories and
 files shared between different operating systems and file systems, mail
 attachments, mp3 file names based on garbage id3 tags, files in zip
 archives (which seem to not handle filename encoding at all), etc.

Yes, that is my experience too.  I understand Craig's comments, but I
would much prefer that Bacula just backup and restore and leave the
checking of filename consistencies to other programs.  At least for the
moment, that seems to work quite well.  Obviously if users mix character
sets, sometimes the display of filenames in Bacula will be weird, but
nevertheless Bacula will back up and restore them so that what was on the
system before the backup is what is restored.


 When I first tried Bacula (version 1.38), I expected to have trouble
 with filenames, since this is what I'm used to. I was rather pleased to
 find out that it could both backup and restore files, regardless of
 origin and destination filename encoding.

 I like Bacula because, among other things, it can take the punishment
 and chug along, without me even noticing that there was supposed to be a
 problem (a recent example: backup/restore files with a negative mtime ...)


Thanks.  Thanks also for using Bacula :-)

Best regards,

Kern





Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Sam Mason
On Thu, Dec 03, 2009 at 08:33:38AM +0100, Kern Sibbald wrote:
 Bacula gets the raw filename from the OS and stores it on the Volume
 then puts it in the database.  We treat the filename as if it is UTF-8
 for display purposes, but in all other cases, what we want is for the
 filename to go into the database and come back out unchanged.

How about also storing the encoding of the path/filename as well?  This
would allow the restore to do the right thing for display purposes and
also when going to a system that uses a different encoding.  Obviously
you wouldn't know this for Unix derivatives, but for most other systems
this would seem to help.
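Storing the encoding alongside the raw name could be sketched like this (a Python 3 illustration; `capture` and `restore_name` are invented names, not Bacula code):

```python
# Sketch: record both the raw name and the encoding it was captured under,
# so a restore to a differently-configured system can transcode on demand.
def capture(raw, src_encoding):
    # src_encoding is None on Unix, where the true encoding is unknowable;
    # Windows/Mac clients could record "utf-8", since they convert at the fd.
    return {"raw": raw, "encoding": src_encoding}

def restore_name(entry, dst_encoding):
    if entry["encoding"] is None:
        return entry["raw"]        # unknown source: restore verbatim
    # Known source encoding: transcode for the destination system.
    return entry["raw"].decode(entry["encoding"]).encode(dst_encoding)

e = capture("café".encode("utf-8"), "utf-8")
assert restore_name(e, "latin-1") == b"caf\xe9"   # transcoded on restore
u = capture(b"caf\xe9", None)                     # Unix: unknown encoding
assert restore_name(u, "utf-8") == b"caf\xe9"     # verbatim round-trip
```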

 On MySQL we use BLOBS.  On PostgreSQL, we use TEXT and set the encoding to
 SQL_ASCII so that PostgreSQL will not attempt to do any translation.
 This works well, and I hope that PostgreSQL will continue to support
 letting Bacula insert text characters in the database with no
 character encoding checks in the future.

As others have said; BYTEA is probably the best datatype for you to
use.  The encoding of BYTEA literals is a bit of a fiddle and may need
some changes, but it's going to be much more faithful to your needs of
treating the filename as an opaque blob of data.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] postgre...@fosdem 2010 - Call for talks

2009-12-03 Thread Greg Stark
On Thu, Dec 3, 2009 at 10:48 AM, Dave Page dp...@pgadmin.org wrote:

 We will have a number of 45 minutes slots, and may split one or more
 into 3 back-to-back 15 minute slots if we receive suitable proposals.

I would like to suggest we reduce the number of talks and have instead
some more participatory round-table discussions. I think it would be
better to find out what problems people are facing, what features they
would be interested in, what's stopping non-users from adopting
postgres, etc.  I would suggest having a few general topics like
security, replication, high availability, change management,
postgres upgrades, etc.

-- 
greg



Re: [GENERAL] [pgeu-general] postgre...@fosdem 2010 - Call for talks

2009-12-03 Thread Magnus Hagander
2009/12/3 Greg Stark gsst...@mit.edu:
 On Thu, Dec 3, 2009 at 10:48 AM, Dave Page dp...@pgadmin.org wrote:

 We will have a number of 45 minutes slots, and may split one or more
 into 3 back-to-back 15 minute slots if we receive suitable proposals.

 I would like to suggest we reduce the number of talks and have instead
 some more participatory round-table discussions. I think it would be
 better to find out what problems people are facing, what features they
 would be interested in, what's stopping non-users from adopting
 postgres, etc.  I would suggest having a few general topics like
 security, replication, high availability, change management,
 postgres upgrades, etc.


Not a bad idea. How about scheduling one or two talk slots for that,
and then maybe just have visitors tell us beforehand which topics are
interesting? As in not now, but the day before or so?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] postgre...@fosdem 2010 - Call for talks

2009-12-03 Thread Dave Page
On Thu, Dec 3, 2009 at 12:32 PM, Greg Stark gsst...@mit.edu wrote:
 On Thu, Dec 3, 2009 at 10:48 AM, Dave Page dp...@pgadmin.org wrote:

 We will have a number of 45 minutes slots, and may split one or more
 into 3 back-to-back 15 minute slots if we receive suitable proposals.

 I would like to suggest we reduce the number of talks and have instead
 some more participatory round-table discussions. I think it would be
 better to find out what problems people are facing, what features they
 would be interested in, what's stopping non-users from adopting
 postgres, etc.  I would suggest having a few general topics like
 security, replication, high availability, change management,
 postgres upgrades, etc.

We may not be able to adjust the sessions much time-wise (we're not
sure yet how our timeslots will be arranged), but we can certainly
consider proposals for roundtable sessions from people interested in
hosting them.


-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Bruce Momjian
Kern Sibbald wrote:
 Hello,
 
 Thanks for all the answers; I am a bit overwhelmed by the number, so I am 
 going to try to answer everyone in one email.

We aim to please, and overwhelm.  :-)

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] Unexpected EOF on client connection

2009-12-03 Thread Howard Cole

Howard Cole wrote:
Thanks Francisco - I currently have MinPoolSize set to 3 (I have a lot 
of databases on this cluster), I think this copes 90% of the time but 
I shall set it to 10 and see what happens.


Sampling the number of connections on my database I decided that the 
number of connections settled at 6 so I changed my MinPoolSize from 3 to 
6. I checked the current state of the database and the number of 
connections is currently 12. Tonight I shall change the MinPoolSize to 8 
and I am wondering if the number of connections used is going to appear 
as 16!




Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Greg Stark
On Thu, Dec 3, 2009 at 8:33 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:
 While true in theory, in practice it's pretty unusual to have filenames
 encoded with an encoding other than the system LC_CTYPE on a modern
 UNIX/Linux/BSD machine.

 I'd _very_ much prefer to have Bacula back my machines up by respecting
 LC_CTYPE and applying appropriate conversions at the fd if LC_CTYPE on
 the fd's host is not utf-8 and the database is.

a) it doesn't really matter how uncommon it is; backup software is
like databases: it's supposed to always work, not just usually work.

b) LC_CTYPE is an environment variable, it can be different for different users.

c) backup software that tries to fix up the data it's backing up to
what it thinks it should look like is bogus. If I can't trust my
backup software to restore exactly the same data with exactly the same
filenames then it's useless. The last thing I want to be doing when
recovering from a disaster is trying to debug some difference of
opinion between some third party commercial software and
postgres/bacula about unicode encodings.

 (3) As (2), but add a `bytea' column to `path' and `filename' tables
    that's null if the fd was able to convert the filename from the
    system LC_CTYPE to utf-8. In the rare cases it couldn't (due to
    reasons like users running with different LC_CTYPE, nfs volumes
    exported to systems with different LC_CTYPE, tarballs from
    systems with different charsets, etc) the raw unconverted bytes
    of the filename get stored in the bytea field, and a mangled
    form of the name gets stored in the text field for user display
    purposes only.

That's an interesting thought. I think it's not quite right -- you
want to always store the raw filename in the bytea and then also store
a text field with the visual representation. That way you can also
deal with broken encodings in some application specific way too,
perhaps by trying to guess a reasonable encoding.

An alternative would be to just store them in byteas and then handle
sorting and displaying by calling the conversion procedure on the fly.
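The on-the-fly alternative could look something like this in SQL (a sketch; the `filename`/`fname` names are assumptions based on the schema discussion above, not Bacula's actual catalog):

```sql
-- Store only raw bytes; decode per-query for display and ordering.
SELECT convert_from(fname, 'UTF8') AS display_name
FROM filename
ORDER BY convert_from(fname, 'UTF8');
```

Note that convert_from raises an error for bytes that are invalid in the named encoding, so a production version would still need a fallback for broken names.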

-- 
greg



Re: [GENERAL] pg_attribute.attnum - wrong column ordinal?

2009-12-03 Thread Alvaro Herrera
Greg Stark escribió:
 On Wed, Nov 25, 2009 at 1:03 AM, Konstantin Izmailov pgf...@gmail.com wrote:
 
  My question: can pg_attribute.attnum be used to determine the sequential
  ordinal positions of columns in a table? What is a right way to get the
  ordinal numbers?
 
 You could use something like:
 
 row_number() over (partition by T.schemaname,T.viewname order by
 attnum) as ORDINAL_POSITION

Should we recast the attributes and columns views in information_schema?
I notice they still use attnum.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] Unexpected EOF on client connection

2009-12-03 Thread David Boreham

Howard Cole wrote:

Howard Cole wrote:
Thanks Francisco - I currently have MinPoolSize set to 3 (I have a 
lot of databases on this cluster), I think this copes 90% of the time 
but I shall set it to 10 and see what happens.


Sampling the number of connections on my database I decided that the 
number of connections settled at 6 so I changed my MinPoolSize from 3 
to 6. I checked the current state of the database and the number of 
connections is currently 12. Tonight I shall change the MinPoolSize to 
8 and I am wondering if the number of connections used is going to 
appear as 16!



Pretty soon you'll discover that you have two instances of the pool :)





Re: [GENERAL] numeric cast oddity

2009-12-03 Thread Tom Lane
Sim Zacks s...@compulab.co.il writes:
 When I cast an integer to numeric using :: notation it ignores the scale
 and precision that I specify, but when I use the cast function it uses
 the scale and precision that I specify.

Really?  Your example doesn't seem to show that.

regards, tom lane



Re: [GENERAL] pg_attribute.attnum - wrong column ordinal?

2009-12-03 Thread Tom Lane
Alvaro Herrera <alvhe...@commandprompt.com> writes:
 Should we recast the attributes and columns views in information_schema?
 I notice they still use attnum.

I'd vote against it, at least until we have something better than a
row_number solution.  That would create another huge performance penalty
on views that are already ungodly slow.

When and if we get around to separating physical from logical column
position, the issue might go away for free.

regards, tom lane



Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Adrian Klaver
On Wednesday 02 December 2009 11:33:38 pm Kern Sibbald wrote:

  ( BTW, one way to handle incorrectly encoded filenames and paths might
  be to have a `bytea' field that's generally null to store such mangled
  file names. Personally though I'd favour just rejecting them. )
 
   We set SQL_ASCII by default when creating the database via the command
   recommended in recent versions of PostgreSQL (e.g. 8.1), with:
  
   CREATE DATABASE bacula ENCODING 'SQL_ASCII';
  
   However, with PostgreSQL 8.4, the above command is ignored because the
   default table copied is not template0.
 
  It's a pity that attempting to specify an encoding other than the safe
  one when using a non-template0 database doesn't cause the CREATE
  DATABASE command to fail with an error.

 I didn't actually run it myself, so it is possible that it produced an
 error message, but it did apparently create the database but with UTF-8
 encoding. Most of these things are done in script files, so certain
 non-fatal errors may be overlooked.

 As far as I can tell, it took the above encoding command, and perhaps
 printed an error message but went ahead and created the database with an
 encoding that was not correct.  If that is indeed the case, then it is,
 in my opinion, a bad design policy.  I would much prefer that either Postgres
 accept the command or that it not create the database.  This way, either
 the database would work as the user expects or there would be no database,
 and the problem would be resolved before it creates databases that cannot
 be read.

It does not CREATE the database. If the users are seeing that happen, then as 
others have suggested it is a bug. The other option is that they are 
un-commenting the #ENCODING=ENCODING 'UTF8' line in the 
create_postgresql_database.in script to get it to run. The interesting part in 
that script is the Note:

# KES: Note: the CREATE DATABASE, probably should be
#   CREATE DATABASE ${db_name} $ENCODING TEMPLATE template0

According to the git repository this showed up in July of this year:

http://bacula.git.sourceforge.net/git/gitweb.cgi?p=bacula/bacula;a=blob;f=bacula/src/cats/create_postgresql_database.in;hb=6e024d0fe47ea0d9e6d3fbec52c4165caa44967f


 In any case we have corrected the command to include the TEMPLATE, but this
 won't help people with older Bacula's.

Could they not just get the corrected version of create_postgresql_database.in?
It would run on the old versions as well.
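For reference, the corrected command that the script comment points at is just the one-liner:

```sql
-- Copy template0 so the encoding is not inherited from a
-- possibly-UTF8 template1.
CREATE DATABASE bacula ENCODING 'SQL_ASCII' TEMPLATE template0;
```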


 The other point I wanted to emphasize is that the documentation implied
 that future versions of Postgres may eliminate the feature of having
 SQL_ASCII (i.e. the ability to input arbitrary binary strings).  As I
 said, that would be a pity -- I suppose we could switch to using LOs or
 whatever they are called in Postgres, but that would be rather
 inconvenient.

Per Tom's previous post:
"You misread it.  We are not talking about eliminating SQL_ASCII --- as
you say, that's useful.  What is deprecated is trying to use SQL_ASCII
with a non-C locale, which is dangerous, and always has been.  If you've
been putting non-UTF8 data into a database that could be running under a
UTF8-dependent locale, I'm surprised you haven't noticed problems already."
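For anyone wanting to check what they actually have, a query along these lines shows each database's encoding:

```sql
-- datcollate exists from 8.4 on; earlier releases have a single
-- cluster-wide locale (shown by pg_controldata instead).
SELECT datname, pg_encoding_to_char(encoding) AS encoding, datcollate
FROM pg_database;
```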


 Thanks for all the responses,

 Best regards,

 Kern



-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Tom Lane
Sam Mason <s...@samason.me.uk> writes:
 As others have said; BYTEA is probably the best datatype for you to
 use.  The encoding of BYTEA literals is a bit of a fiddle and may need
 some changes, but it's going to be much more faithful to your needs of
 treating the filename as an opaque blob of data.

bytea might be theoretically the best choice, but the fact remains that
99% of the entries will be text that's readable in the user's encoding
(whatever that is).  bytea will just be a serious PITA because of its
escaping issues.  Also, the fact that 8.5 may change to hex display by
default will make bytea even more of a PITA for mostly-text data.
So I think Bacula's choice to use sql_ascii with text columns is
entirely defensible.
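The escaping fiddle Tom mentions looks roughly like this (a sketch using the pre-9.0 "escape" format; under the hex output format mentioned above the same value would display as \x616263 instead):

```sql
-- The same three bytes as plain text vs. as an escaped bytea literal.
SELECT 'abc'::text AS as_text,
       E'\\141\\142\\143'::bytea AS as_bytea;   -- octal escapes for a, b, c
```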

What concerns me is the claim that PG made a database with some
arbitrary parameters after having rejected a now-considered-invalid
command.   I frankly do not believe that, but if it did happen it's
a *serious* bug that requires investigation.

regards, tom lane



Re: [GENERAL] numeric cast oddity

2009-12-03 Thread Scott Marlowe
On Thu, Dec 3, 2009 at 8:03 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Sim Zacks <s...@compulab.co.il> writes:
 When I cast an integer to numeric using :: notation it ignores the scale
 and precision that I specify, but when I use the cast function it uses
 the scale and precision that I specify.

 Really?  Your example doesn't seem to show that.

I think he's talking about the headers



Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Craig Ringer
Frank Sweetser wrote:

 Unless, of course, you're at a good sized school with lots of
 international students, and have fileservers holding filenames created
 on desktops running in Chinese, Turkish, Russian, and other locales.

What I struggle with here is why they're not using ru_RU.UTF-8,
zh_CN.UTF-8, etc as their locales. Why mix charsets?

I don't think that these people should be forced to use a utf-8 database
and encoding conversion if they want to do things like mix-and-match
charsets for file name chaos on their machines, though. I'd just like to
be able to back up systems that _do_ have consistent charsets in ways
that permit me to later reliably search for files by name, restore to
any host, etc.

Perhaps I'm strange in thinking that all this mix-and-match encodings
stuff is bizarre and backward. The Mac OS X and Windows folks seem to
agree, though. Let the file system store unicode data, and translate at
the file system or libc layer for applications that insist on using
other encodings.

I do take Greg Stark's point (a) though. As *nix systems stand,
solutions will only ever be mostly-works, not always-works, which I
agree isn't good enough. Since there's no sane agreement about encodings
on *nix systems and everything is just byte strings that different apps
can interpret in different ways under different environmental
conditions, we may as well throw up our hands in disgust and give up
trying to do anything sensible. The alternative is saying that files the
file system considers legal can't be backed up because of file naming,
which I do agree isn't ok.

The system shouldn't permit those files to exist, either, but I suspect
we'll still have borked encoding-agnostic wackiness as long as we have
*nix systems at all since nobody will ever agree on anything for long
enough to change it.

Sigh. I think this is about the only time I've ever wished I was using
Windows (or Mac OS X).

Also: Greg, your point (c) goes two ways. If I can't trust my backup
software to restore my filenames from one host exactly correctly to
another host that may have configuration differences not reflected in
the backup metadata, a different OS revision, etc, then what good is it
for disaster recovery? How do I even know what those byte strings
*mean*? Bacula doesn't even record the default system encoding with
backup jobs so there's no way for even the end user to try to fix up the
file names for a different encoding. You're faced with some byte strings
in wtf-is-this-anyway encoding and guesswork. Even recording lc_ctype in
the backup job metadata and offering the _option_ to convert encoding on
restore would be a big step, (though it wouldn't fix the breakage with
searches by filename not matching due to encoding mismatches).

Personally, I'm just going to stick to a utf-8 only policy for all my
hosts, working around the limitation that way. It's worked ok thus far,
though I don't much like the way that different normalizations of
unicode won't match equal under SQL_ASCII so I can't reliably search for
file names.

--
Craig Ringer



Re: [GENERAL] numeric cast oddity

2009-12-03 Thread Sim Zacks




It is more than just a headers issue.  I have a view that has a column of
type numeric(20,4).  I modified the view and added a union which cast an
integer as a numeric(20,4) using the :: notation.  I received an error
stating that I could not change the column type.  When I used the cast
function notation it allowed it through.

The fact that it actually converts it to numeric(20,4) doesn't help me if
the view thinks that it is a regular numeric.


Sim





Scott Marlowe wrote:
 On Thu, Dec 3, 2009 at 8:03 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
  Sim Zacks <s...@compulab.co.il> writes:
   When I cast an integer to numeric using :: notation it ignores the scale
   and precision that I specify, but when I use the cast function it uses
   the scale and precision that I specify.
  Really?  Your example doesn't seem to show that.
 I think he's talking about the headers




Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Ivan Sergio Borgonovo
On Thu, 3 Dec 2009 12:22:50 +0100 (CET)
Kern Sibbald <k...@sibbald.com> wrote:

 Yes, that is my experience too.  I understand Craig's comments,
 but I would much prefer that Bacula just backup and restore and
 leave the checking of filename consistencies to other programs.
 At least for the moment, that seems to work quite well.  Obviously
 if users mix character sets, sometime display of filenames in
 Bacula will be weird, but nevertheless Bacula will backup and
 restore them so that what was on the system before the backup is
 what is restored.

I expect backup software to have a predictable, reversible behaviour
and to warn me if I'm shooting myself in the foot.

It should be the responsibility of the admin to restore files in a
proper place knowing that locales may be a problem.

I think Bacula is taking the right approach.

Still I'd surely appreciate as a feature a tool that will help me
to restore files in a system with a different locale than the
original one or warn me if the locale is different or it can't be
sure it is the same.
That's exactly what PostgreSQL is doing: at least warning you.
Even PostgreSQL is taking the right approach.

An additional guessed original locale field and a tool/option to
convert/restore with selected locale could be an interesting feature.

What is Bacula going to do with xattr on different systems?

PostgreSQL seems to offer a good choice of tools to convert between
encodings and deal with bytea.
Formally I'd prefer bytea but in real use it may just be an
additional pain and other DB may not offer the same tools for
encoding/bytea conversions.

Is it possible to search for a file in a backup set?
What is it going to happen if I'm searching from a system that has a
different locale from the one the backup was made on?
Can I use regexp? Can accents be ignored during searches?


-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Sam Mason
On Thu, Dec 03, 2009 at 10:46:54AM -0500, Tom Lane wrote:
 Sam Mason <s...@samason.me.uk> writes:
  As others have said; BYTEA is probably the best datatype for you to
  use.  The encoding of BYTEA literals is a bit of a fiddle and may need
  some changes, but it's going to be much more faithful to your needs of
  treating the filename as an opaque blob of data.
 
 bytea might be theoretically the best choice, but the fact remains that
 99% of the entries will be text that's readable in the user's encoding
 (whatever that is).

I agree it'll be fine most of the time and the more important thing is
normally the data rather than the filename.  Still, for non-english
speaking people I'd guess there are many more encodings floating around
than I'd ever expect to see on a daily basis.  Us English/US speakers
really do have a very easy life.

There's also the issue that the user's encoding doesn't necessarily
match the system's encoding.  Thus within an account everything may be
easy, but when a system daemon comes in and looks at things it's going
to be somewhat messy.

No hard numbers either way, I just know I see a very biased sample of
systems and would not like to make generalizations.

 What concerns me is the claim that PG made a database with some
 arbitrary parameters after having rejected a now-considered-invalid
 command.   I frankly do not believe that, but if it did happen it's
 a *serious* bug that requires investigation.

Yup, be interesting to hear more details from the OP about this.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] numeric cast oddity

2009-12-03 Thread Tom Lane
Scott Marlowe <scott.marl...@gmail.com> writes:
 On Thu, Dec 3, 2009 at 8:03 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
 Really?  Your example doesn't seem to show that.

 I think he's talking about the headers

The headers I get are

regression=# select -1::numeric(20,4);
 ?column? 
----------
  -1.0000
(1 row)

regression=# select cast(-1 as numeric(20,4));
 numeric 
---------
 -1.0000
(1 row)

which are indeed different (might be worth looking into why) but
don't seem to have anything to do with scale/precision.

regards, tom lane



Re: [GENERAL] numeric cast oddity

2009-12-03 Thread Tom Lane
I wrote:
 which are indeed different (might be worth looking into why)

Oh: the reason they're different is that these expressions are not
actually the same thing.  Minus binds less tightly than typecast.
You get consistent results if you input equivalent expressions:

regression=# select cast(-1 as numeric(20,4));
 numeric 
---------
 -1.0000
(1 row)

regression=# select (-1)::numeric(20,4);
 numeric 
---------
 -1.0000
(1 row)

regression=# select - cast(1 as numeric(20,4));
 ?column? 
----------
  -1.0000
(1 row)

regression=# select - 1::numeric(20,4);
 ?column? 
----------
  -1.0000
(1 row)

What we're actually seeing here is that the code to guess a default
column name doesn't descend through a unary minus operator, it just
punts upon finding an Op node.
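Where the default name matters, an explicit alias sidesteps the name-guessing code entirely:

```sql
SELECT - 1::numeric(20,4) AS minus_one;  -- column is named minus_one
```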

regards, tom lane



[GENERAL] code example for PQgetCopyData

2009-12-03 Thread Dave Huber
Does anybody have a snippet where they use PQgetCopyData? I must be calling it 
wrong as it keeps crashing my program. I've attached my code below. I am writing 
this for a Code Interface Node in LabVIEW.

Thanks,
Dave

MgErr CINRun(LStrHandle conninfo, LStrHandle copystr, TD1Hdl resultValues) {
  MgErr err = noErr;
  PGconn *pConn;
  PGresult* pResult = NULL;
  char* szCopyStr = NULL;
  char* errormsg = NULL;
  char** buffer = NULL;   // for retrieving the data
  int nLen; // length of returned data

  // connect to the database
  char* szConnInfo = new char[LHStrLen(conninfo)+1];
  LToCStr((*(conninfo)), (CStr)szConnInfo);
  pConn = PQconnectdb(szConnInfo);
  delete [] szConnInfo;

  // check for errors connecting to database
  if (PQstatus(pConn) != CONNECTION_OK)
  {
DbgPrintf("Connection to database failed: %s", 
PQerrorMessage(pConn));
  }
  else
  {
// start the copy command
szCopyStr = new char[LHStrLen(copystr)+1];
LToCStr((*(copystr)), (CStr)szCopyStr);

pResult = PQexec(pConn, szCopyStr);
delete [] szCopyStr;

// get the data
int i = 0;
while (nLen = PQgetCopyData(pConn, buffer, false) > 0)
{

  if (err = SetCINArraySize((UHandle)resultValues, 2, ++i))
goto out;
  if (err = NumericArrayResize(uB, 1L, 
(UHandle*)((*resultValues)->elt[i-1]), nLen-1))
goto out;
  LStrLen(*(*resultValues)->elt[i-1]) = nLen-1; 
// set the Labview String size
  MoveBlock(*buffer, LStrBuf(*(*resultValues)->elt[i-1]), 
nLen-1);  // copy the data to a new string

  PQfreemem(*buffer);   
  // free the memory from getCopy

}
(*resultValues)->dimSize = i;

out:
PQclear(pResult);

// see if there were errors
if (nLen == -2)
{
  DbgPrintf("Copy Out failed: %s", PQerrorMessage(pConn));
}

  }
  // close the connection
  PQfinish(pConn);

  return err;
}


This electronic mail message is intended exclusively for the individual(s) or 
entity to which it is addressed. This message, together with any attachment, is 
confidential and may contain privileged information. Any unauthorized review, 
use, printing, retaining, copying, disclosure or distribution is strictly 
prohibited. If you have received this message in error, please immediately 
advise the sender by reply email message to the sender and delete all copies of 
this message.
THIS E-MAIL IS NOT AN OFFER OR ACCEPTANCE: Notwithstanding the Uniform 
Electronic Transactions Act or any other law of similar import, absent an 
express statement to the contrary contained in this e-mail, neither this e-mail 
nor any attachments are an offer or acceptance to enter into a contract, and 
are not intended to bind the sender, LeTourneau Technologies, Inc., or any of 
its subsidiaries, affiliates, or any other person or entity.
WARNING: Although the company has taken reasonable precautions to ensure no 
viruses are present in this email, the company cannot accept responsibility for 
any loss or damage arising from the use of this email or attachments.



Re: [GENERAL] Build universal binary on Mac OS X 10.6?

2009-12-03 Thread Israel Brewster


On Dec 2, 2009, at 5:16 PM, Tom Lane wrote:


Israel Brewster <isr...@frontierflying.com> writes:

Well, I'm not trying to use the server or client programs from this
build - I just want the universal libraries for my programs. My point
in this last section, however, doesn't necessarily extend as far as
actual function, but rather is just with the build. MySQL and SQLite
build for multiple architectures quite happily, Postgres doesn't build
at all except for single architectures (the way I am trying at least).


Well, it's been done.


On 10.5. By myself among others. Thus the reason I think there is  
something wrong about the way I am going about this, and thus my  
question to this list. If I didn't think it was possible, I wouldn't  
bother asking how to do it :-)



Searching the PG archives for prior discussions
I find
http://archives.postgresql.org/pgsql-hackers/2008-07/msg00884.php
which describes success with multiple -arch flags in CFLAGS plus
hand creation of relevant .h files.


Thanks for the link. It explains why I wasn't finding anything in my  
searches - I was searching for OS X, while the post references  
Darwin. I'll keep that distinction in mind in the future - may help  
me solve my problems on my own, without needing outside help :-) That  
said, the post highlights a few points:


1) From the second paragraph of that post:
"If you add something like -arch i386 -arch ppc to CFLAGS and build  
normally, you get real working multiarch binaries and libraries."
Which is exactly the problem that started this whole thread - on 10.6,  
you DON'T (or at least I don't) get real working multiarch binaries  
and libraries. In fact, you don't get anything - the compile fails.  
Already we see that this post does not address my issue in any form,  
as it references a successful compile resulting in unusable binaries-  
not a failed compile, which is what I am dealing with.


The post goes on to say that if you only do that, only the arch you  
build on will work, but as I have already pointed out, I don't care -  
I just need the libraries. My app already runs fine on multiple  
platforms, I just need the multi-arch libraries to link against. From  
a Mac OS X 10.6 build machine - I already have it working just fine on  
a 10.5 build machine (with no header hacking), albeit with Postgres 8.2.


2) The post explains quite elegantly my point about Postgres being  
more difficult to create a universal binary with than other programs.  
I have compiled Qt, MySQL, SQLite, and several of my own programs for  
all four architectures, and never once needed to hack around with  
header files to get it to work. See, for example, this:
http://www.malisphoto.com/tips/mysql-on-os-x.html#Anchor-Build - a single
configure command, with the right arguments, creates a 4-way universal
binary of MySQL on Mac OS X.


That said, this really isn't an issue for me - I mean, it's not like  
I'm rebuilding Postgres every day or anything, so If I need to hack  
around with headers or whatever to get the build to work, that's fine.  
However, see point 1 - apparently the headers aren't the issue,  
because according to the posts you linked, incorrect headers result in  
non-functioning binaries, NOT in a failed build (it explicitly said  
the build itself worked).



[snip] On the whole
I'd still recommend building the reference .h files on the actual target
arch rather than trusting cross-compile to create them correctly.


If I was simply trying to build the Postgres server and/or client to  
run on my machines, I would agree. However, as that is not what I am  
doing (I'm not using the client or server portions of the build at  
all) this is simply not an option, for two reasons. First, it would  
require that I have four different machines to build on: ppc 32 and 64  
bit, and Intel 32 and 64 bit. I could probably get away with two (ppc  
and intel 32 bit) but regardless it would still require multiple build  
machines, and that will never happen.


Secondly, even if I had the machines, it wouldn't help me achieve what  
I need - a single, universal binary build of my application that can  
run on as many Mac OS X machines as possible. There is no way I am  
going to be distributing different binaries of my application for  
different machine architectures. That might be fine for your average  
Linux/Unix user, perhaps even for your average PosgreSQL database  
admin- but not for the general public  to whom my app is targeted.  
Most of my target audience is going to go "machine type? I dunno...  
it's a Mac." So I need to distribute an application that will run on a  
mac regardless of what chip is inside. For that, I need a single,  
universal, build of the Postgres libraries. Doing multiple builds on  
multiple machines is, quite simply, not an option.


I thank you for the time you have taken trying to explain this to me,  
and I apologize if I am being dense or stubborn. I'm not trying to be  
difficult, 

Re: [GENERAL] code example for PQgetCopyData

2009-12-03 Thread Bret
 




From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Dave Huber
Sent: Thursday, December 03, 2009 9:18 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] code example for PQgetCopyData



Does anybody have a snippet where they use PQgetCopyData? I must be calling
it wrong as it keep crashing my program. I've attached my code below. I am
writing this for a Code Interface Node in LabVIEW.

 

Thanks,

Dave

 

[original code snipped - see previous message]

 
Where is it blowing up?
 
 






Re: [GENERAL] DB terminating

2009-12-03 Thread Scott Felt

Thanks.  I altered log_statements and added the PID to the
log_line_prefix.  I have uploaded the log.  Forgive me if I'm missing
something obvious here, but I do not see that PID referenced elsewhere.  Is
there a crash report somewhere, or is that just these logs?

Thank you.
Scott



Tom Lane-2 wrote:
 
 Scott Felt <scott.f...@gmail.com> writes:
 Hello.  I have been having an issue with a database.  The logs
 consistently
 show this: 
 
 2009-11-17 16:03:55 ESTLOG:  0: server process (PID 9644) exited with
 exit code 128 
 
 This looks like a fairly garden-variety backend crash, but with only
 this much information there's no way to identify the cause.  You might
 try setting log_statements = all and see if there's any consistency in
 what the process was doing just before it crashed.  (You'd need to add
 PID to log_line_prefix so you could associate the log entries with the
 crash report.)
 
   regards, tom lane
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
 
 

-- 
View this message in context: 
http://old.nabble.com/DB-terminating-tp26412532p26630488.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] DB terminating

2009-12-03 Thread Scott Felt

Sorry.  Don't think I got that uploaded on the last message.



Scott Felt wrote:
 
 Thanks, Tom.  I will alter the config file to give greater details as you
 suggest.  Perhaps that will be sufficiently informative to point me toward
 a fix.  Thank you.
 
 --Scott
 
 
 Tom Lane-2 wrote:
 
  Scott Felt <scott.f...@gmail.com> writes:
 Hello.  I have been having an issue with a database.  The logs
 consistently
 show this: 
 
 2009-11-17 16:03:55 ESTLOG:  0: server process (PID 9644) exited
 with
 exit code 128 
 
 This looks like a fairly garden-variety backend crash, but with only
 this much information there's no way to identify the cause.  You might
 try setting log_statements = all and see if there's any consistency in
 what the process was doing just before it crashed.  (You'd need to add
 PID to log_line_prefix so you could associate the log entries with the
 crash report.)
 
  regards, tom lane
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
 
 
 
 
http://old.nabble.com/file/p26630517/postgresql-2009-12-01_195253.log
postgresql-2009-12-01_195253.log 
-- 
View this message in context: 
http://old.nabble.com/DB-terminating-tp26412532p26630517.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] code example for PQgetCopyData

2009-12-03 Thread Tom Lane
Dave Huber <dhu...@letourneautechnologies.com> writes:
 Does anybody have a snippet where they use PQgetCopyData? I must be calling 
 it wrong as it keeps crashing my program. I've attached my code below. I am 
 writing this for a Code Interface Node in LabVIEW.

One thing you're missing is that you should check that the result from
the PQexec actually shows successful entry into COPY_OUT state.
But I think the crash is because you're confused about the indirection
level.  buffer should be "char *", not "char **", and the argument ought to
be &buffer so that the function can assign to buffer.

regards, tom lane



Re: [GENERAL] code example for PQgetCopyData

2009-12-03 Thread Dave Huber

 Where is it blowing up?

I'm sorry, I wasn't clear. It bombs on the PQgetCopyData call. If I comment out 
the entire while loop, the program runs fine. If I simply comment out the 
contents of the while loop...kablooey!

Dave





Re: [GENERAL] Build universal binary on Mac OS X 10.6?

2009-12-03 Thread Tom Lane
Israel Brewster isr...@frontierflying.com writes:
 1) From the second paragraph of that post:
 If you add something like -arch i386 -arch ppc to CFLAGS and build  
 normally, you get real working multiarch binaries and libraries.
 Which is exactly the problem that started this whole thread - on 10.6,  
 you DON'T (or at least I don't) get real working multiarch binaries  
 and libraries. In fact, you don't get anything - the compile fails.  
 Already we see that this post does not address my issue in any form,  

The reason it's failing is that you continue to ignore the important
point: you need arch-specific header files.

I tried this on current sources and found that the failure occurs in
code like this:

#if SIZEOF_DATUM == 8
...
switch (attlen) \
{ \
case sizeof(char): \
...
case sizeof(int16): \
...
case sizeof(int32): \
...
case sizeof(Datum): \
...
...
#else/* SIZEOF_DATUM != 8 */

Since I configured on a 64-bit Mac, the generated header file
sets SIZEOF_DATUM to 8.  When this code is fed to the 32-bit
compiler, it thinks sizeof(Datum) is 4, so it spits up on the
duplicated case values.  Had it been fed the correct header file for
a 32-bit machine, it would have gone to the other part of the #if
(which doesn't have the intended-to-be-for-8-bytes case branch).

I don't really recall whether I hit this in the experiment I did
last year.  It's possible, maybe even likely, that the code was
different then and happened not to have any compiler-visible
inconsistencies when the header was wrong for the target arch.
That doesn't change the fact that it'd fail at runtime whether
the compiler could detect a problem or not.

There's really no way around building the correct header files
if you want a usable multiarch library.
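For pre-8.4 sources, the build-per-arch-and-merge recipe Tom alludes to looks roughly like this (a sketch only; the prefixes and the libpq path are assumptions, not taken from this thread). The key point is that each configure run generates headers matching its own target architecture:

```shell
# Build once per architecture, each with its own generated headers
./configure CC="gcc -arch i386" --prefix=/tmp/pg-i386
make && make install
make distclean
./configure CC="gcc -arch x86_64" --prefix=/tmp/pg-x86_64
make && make install

# Merge the per-arch libraries into one universal binary
lipo -create /tmp/pg-i386/lib/libpq.dylib /tmp/pg-x86_64/lib/libpq.dylib \
     -output libpq.universal.dylib
```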

regards, tom lane



Re: [GENERAL] code example for PQgetCopyData

2009-12-03 Thread Bret
Looks like fun.
 
Tom added some comments on the PQgetCopyData function.
 
If your environment allows, put a breakpoint in on the line below and look
at the vars:
 
while (nLen = PQgetCopyData(pConn, buffer, false) > 0)
 
perhaps this will get you working:
while ((nLen = PQgetCopyData(pConn, buffer, false)) > 0)
 
 
here's the only sample I found
 
 while (!copydone)
 {
     ret = PQgetCopyData(g_conn, &copybuf, false);
     switch (ret)
     {
         case -1:
             copydone = true;
             break;
         case 0:
         case -2:
             write_msg(NULL, "SQL command to dump the contents of table \"%s\" failed: PQgetCopyData() failed.\n", classname);
             write_msg(NULL, "Error message from server: %s", PQerrorMessage(g_conn));
             write_msg(NULL, "The command was: %s\n", q->data);
             exit_nicely();
             break;
         default:
             archputs(copybuf, fout);
             PQfreemem(copybuf);
             break;
     }
 }




From: Dave Huber [mailto:dhu...@letourneautechnologies.com] 
Sent: Thursday, December 03, 2009 11:35 AM
To: 'bret_st...@machinemanagement.com'; pgsql-general@postgresql.org
Subject: RE: [GENERAL] code example for PQgetCopyData



 

 Where is it blowing up?

 

I'm sorry, I wasn't clear. It bombs on the PQgetCopyData call. If I comment
out the entire while loop, the program runs fine. If I simply comment out
the contents of the while loop...kablooey!

 

Dave



 
 



Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Frank Sweetser

On 12/03/2009 10:54 AM, Craig Ringer wrote:

Frank Sweetser wrote:


Unless, of course, you're at a good sized school with lots of
international students, and have fileservers holding filenames created
on desktops running in Chinese, Turkish, Russian, and other locales.


What I struggle with here is why they're not using ru_RU.UTF-8,
cn_CN.UTF-8, etc as their locales. Why mix charsets?


The problem isn't so much what they're using on their unmanaged desktops.  The 
problem is that the server, which is the one getting backed up, holds an 
aggregation of files created by an unknown collection of applications running 
on a mish-mash of operating systems (every large edu has its horror story of 
the 15+ year old, unpatched, mission critical machine that no one dares touch) 
with wildly varying charset configurations, no doubt including horribly broken 
and pre-UTF ones.


The end result is a fileset full of filenames created on a hacked Chinese copy 
of XP, a Russian copy of WinME, Romanian Red Hat 4.0, and Mac OS 8.


This kind of junk is, sadly, not uncommon in academic environments, where IT 
is often required to support stuff that they don't get to manage.


--
Frank Sweetser fs at wpi.edu  |  For every problem, there is a solution that
WPI Senior Network Engineer   |  is simple, elegant, and wrong. - HL Mencken
GPG fingerprint = 6174 1257 129E 0D21 D8D4  E8A3 8E39 29E3 E2E8 8CEC



Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Eitan Talmi
Hi Avi

Please have a look at this link; it shows how to install Bacula with a MySQL
database with Hebrew support.

Eitan


On Thu, Dec 3, 2009 at 12:35 PM, Avi Rozen avi.ro...@gmail.com wrote:

 Craig Ringer wrote:
  Kern Sibbald wrote:
 
  Hello,
 
  Thanks for all the answers; I am a bit overwhelmed by the number, so I
 am
  going to try to answer everyone in one email.
 
  The first thing to understand is that it is *impossible* to know what
 the
  encoding is on the client machine (FD -- or File daemon).  On say a
  Unix/Linux system, the user could create filenames with non-UTF-8 then
 switch
  to UTF-8, or restore files that were tarred on Windows or on Mac, or
 simply
  copy a Mac directory.  Finally, using system calls to create a file, you
 can
  put *any* character into a filename.
 
 
  While true in theory, in practice it's pretty unusual to have filenames
  encoded with an encoding other than the system LC_CTYPE on a modern
  UNIX/Linux/BSD machine.
 

 In my case garbage filenames are all too common. It's the sad
 *reality* when you're mixing languages (Hebrew and English in my case)
 and operating systems. Garbage filenames are everywhere: directories and
 files shared between different operating systems and file systems, mail
 attachments, mp3 file names based on garbage id3 tags, files in zip
 archives (which seem to not handle filename encoding at all), etc.

 When I first tried Bacula (version 1.38), I expected to have trouble
 with filenames, since this is what I'm used to. I was rather pleased to
 find out that it could both backup and restore files, regardless of
 origin and destination filename encoding.

 I like Bacula because, among other things, it can take the punishment
 and chug along, without me even noticing that there was supposed to be a
 problem (a recent example: backup/restore files with a negative mtime ...)

 My 2c
 Avi


 --
 Join us December 9, 2009 for the Red Hat Virtual Experience,
 a free event focused on virtualization and cloud computing.
 Attend in-depth sessions from your desk. Your couch. Anywhere.
 http://p.sf.net/sfu/redhat-sfdev2dev
 ___
 Bacula-users mailing list
 bacula-us...@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users



Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Frank Sweetser

On 12/3/2009 3:33 AM, Craig Ringer wrote:

Kern Sibbald wrote:

Hello,

Thanks for all the answers; I am a bit overwhelmed by the number, so I am
going to try to answer everyone in one email.

The first thing to understand is that it is *impossible* to know what the
encoding is on the client machine (FD -- or File daemon).  On say a


Or, even worse, which encoding the user or application was thinking of when it 
wrote a particular file out.  There's no guarantee that any two files on a system 
were intended to be looked at with the same encoding.



Unix/Linux system, the user could create filenames with non-UTF-8 then switch
to UTF-8, or restore files that were tarred on Windows or on Mac, or simply
copy a Mac directory.  Finally, using system calls to create a file, you can
put *any* character into a filename.


While true in theory, in practice it's pretty unusual to have filenames
encoded with an encoding other than the system LC_CTYPE on a modern
UNIX/Linux/BSD machine.


Unless, of course, you're at a good sized school with lots of international 
students, and have fileservers holding filenames created on desktops running 
in Chinese, Turkish, Russian, and other locales.


In the end, a filename is (under Linux, at least) just a string of arbitrary 
bytes containing anything except / and NUL.  If Bacula tries to get too 
clever, and munges or misinterprets those byte strings - or, worse yet, if 
the database does it behind your back - then stuff _will_ end up breaking.


(A few years back, someone heavily involved in linux kernel filesystem work 
was talking about this exact issue, and made the remark that many doing 
internationalization work secretly feel it would be easier to just teach 
everyone english.  Impossible as this may be, I have since come to understand 
what they were talking about...)


--
Frank Sweetser fs at wpi.edu  |  For every problem, there is a solution that
WPI Senior Network Engineer   |  is simple, elegant, and wrong. - HL Mencken
 GPG fingerprint = 6174 1257 129E 0D21 D8D4  E8A3 8E39 29E3 E2E8 8CEC



[GENERAL] Daily migration on Postgresql

2009-12-03 Thread mrciken

Hello,
 
I have been looking for a data integration / transfer program able to help
with Postgresql.
I have had some advice but have not found the right software yet.

The main feature is being able to migrate important quantities of contacts
and data on a daily basis. The migration would be done manually. 

Thank you for your help!




Re: [GENERAL] Build universal binary on Mac OS X 10.6?

2009-12-03 Thread Israel Brewster


On Dec 3, 2009, at 10:54 AM, Tom Lane wrote:


Israel Brewster isr...@frontierflying.com writes:

1) From the second paragraph of that post:

If you add something like -arch i386 -arch ppc to CFLAGS and build
normally, you get real working multiarch binaries and libraries.
Which is exactly the problem that started this whole thread - on  
10.6,

you DON'T (or at least I don't) get real working multiarch binaries
and libraries. In fact, you don't get anything - the compile fails.
Already we see that this post does not address my issue in any form,


The reason it's failing is that you continue to ignore the important
point: you need arch-specific header files.


So it WAS just me being dense. Figures :P Your explanation makes  
perfect sense now, thanks. Although in my defense, everything that I  
read up till now implied that the compile should work and it would just  
be the generated binaries that wouldn't (how else do you interpret  
that quote I included?). However, with your explanation, it makes sense  
why the compile would fail. Again, my apologies.


So what it boils down to, if I now understand you correctly, is that  
(since you obviously can only have one set of headers per build) the  
only way to make this work is pretty much exactly what I ended up  
doing: build for each architecture separately (even  if on the same  
machine) and then lipo the results together. I can live with that.


Then the reason this is necessary on Postgres, and not with other  
software I have dealt with is that Postgres has 64 bit specific code,  
while the others don't? I know my code doesn't. Or maybe I was just  
doing 32 bit builds of the others, and so never ran into this sort of  
thing. Hmm. Oh well, it works. My profuse thanks for the explanations  
and bearing with me.



I don't really recall whether I hit this in the experiment I did
last year.  It's possible, maybe even likely, that the code was
different then and happened not to have any compiler-visible
inconsistencies when the header was wrong for the target arch.


Or perhaps the 10.6 compiler has better error checking than the 10.5?  
That would explain why the build would succeed on 10.5 (but give  
unusable binaries for other platforms) but die on 10.6.



That doesn't change the fact that it'd fail at runtime whether
the compiler could detect a problem or not.


Well, the libraries I created in the past from the 8.2 code work just  
fine on both PPC 32 bit and Intel 64 bit. But then, that was 8.2. The  
code probably changed between 8.2 and 8.4 though :-D. Thanks again!




There's really no way around building the correct header files
if you want a usable multiarch library.

regards, tom lane


---
Israel Brewster
Computer Support Technician II
Frontier Flying Service Inc.
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7250 x293
---







Re: [GENERAL] code example for PQgetCopyData

2009-12-03 Thread Dave Huber
Tom,

Thanks for the help. Setting buffer to a char * fixed the crashing problem.

Now, I have a different issue. The result from PQgetCopyData is always 1 for 
every row of data returned. Does this not work for return data WITH BINARY? 
If I issue the same copy command to a file instead of STDOUT and examine the 
file contents, there is most definitely data for each row. The command going to 
the PQexec function is:

COPY (SELECT * FROM event_log_table) TO STDOUT WITH BINARY
Or
COPY (SELECT * FROM event_log_table) TO E'C:\\testfile' WITH BINARY

To reiterate, nLen in the following code always == 1:


pResult = PQexec(pConn, szCopyStr);
delete [] szCopyStr;

// make sure we are in the copy out state before reading
if (PGRES_COPY_OUT == PQresultStatus(pResult))
{
// get the data
int i = 0;
while (nLen = PQgetCopyData(pConn, &buffer, false) > 0)
{
...


Thanks again.

Dave


Tom wrote:

One thing you're missing is that you should check that the result from
the PQexec actually shows successful entry into COPY_OUT state.
But I think the crash is because you're confused about the indirection
level.  buffer should be char *, not char **, and the argument ought to
be &buffer so that the function can assign to buffer.

regards, tom lane







Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-03 Thread Alvaro Herrera
Craig Ringer wrote:
 Frank Sweetser wrote:
 
  Unless, of course, you're at a good sized school with lots of
  international students, and have fileservers holding filenames created
  on desktops running in Chinese, Turkish, Russian, and other locales.
 
 What I struggle with here is why they're not using ru_RU.UTF-8,
 cn_CN.UTF-8, etc as their locales. Why mix charsets?

On my own desktop computer, I switched from Latin1 to UTF8 some two
years ago, and I still have a mixture of file name encodings.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Daily migration on Postgresql

2009-12-03 Thread Adrian Klaver
On Thursday 03 December 2009 5:49:13 am mrciken wrote:
 Hello,

 I have been looking for a data integration / transfer program able to help
 with Postgresql.
 I have had some advice but have not found the right software yet.

 The main feature is being able to migrate important quantities of contacts
 and data on a daily basis. The migration would be done manually.

 Thank you for your help!

What are you transferring to/from?

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Build universal binary on Mac OS X 10.6?

2009-12-03 Thread Tom Lane
Israel Brewster isr...@frontierflying.com writes:
 So what it boils down to, if I now understand you correctly, is that  
 (since you obviously can only have one set of headers per build) the  
 only way to make this work is pretty much exactly what I ended up  
 doing: build for each architecture separately (even  if on the same  
 machine) and then lipo the results together. I can live with that.

You can, but you don't have to.  The alternative is to generate the
machine-specific headers, set them up to be pulled in with #ifdefs,
and then do one compile run with multiple -arch switches.  This does
not work for pre-8.4 PG sources but does with 8.4.  The one post
I pointed you to shows a script for setting up the headers that way.
(It also talks about using lipo because it predated 8.4, but you
can skip that part of the recipe.)

 Then the reason this is necessary on Postgres, and not with other  
 software I have dealt with is that Postgres has 64 bit specific code,  
 while the others don't?

The others are just accidentally failing to fail at the build stage.
I'm still pretty suspicious about whether the executables actually
work on arches other than where you built.  I am one hundred percent
certain that MySQL has arch-dependent headers just like we do (because
I have to cope with them when I build Red Hat's mysql packages).  It's
possible that SQLite manages to avoid generating any arch-specific
source code during its build process, but I rather doubt it.

Given that you only care about client libraries and not servers, it
might be that the breakage doesn't really affect you --- the server
side is much more likely to need the sort of tricks that lead to
generating arch-dependent headers.  But I wouldn't trust them an
inch unless I'd tested them pretty thoroughly.  It would be much
safer to build with correct headers.

 I don't really recall whether I hit this in the experiment I did
 last year.  It's possible, maybe even likely, that the code was
 different then and happened not to have any compiler-visible
 inconsistencies when the header was wrong for the target arch.

 Or perhaps the 10.6 compiler has better error checking than the 10.5?  

Actually, I think I know what the difference is: 10.5 didn't have any
support for x86_64 did it?  If configure was generating declarations for
32-bit, at least the one particular bit of code we looked at would have
compiled happily on either word size --- and then failed at runtime on
64-bit because it was missing a needed case branch.  But again, I think
that's server-only code.

regards, tom lane



Re: [GENERAL] Build universal binary on Mac OS X 10.6?

2009-12-03 Thread Dave Page
On Thu, Dec 3, 2009 at 8:39 PM, Israel Brewster
isr...@frontierflying.com wrote:

 Well, the libraries I created in the past from the 8.2 code work just fine
 on both PPC 32 bit and Intel 64 bit. But then, that was 8.2. The code
 probably changed between 8.2 and 8.4 though :-D. Thanks again!

When I was first writing the build script that Tom referenced earlier,
I too made the mistake of not creating arch-specific config headers. I
found that the build worked fine, and even initdb ran and I could
start the server and login. pgAdmin then promptly fell over because it
tried to use a view which had been mis-compiled (in postgres, not by
gcc) as it was treating my ppc box as a little-endian platform. Of
course, the build worked fine on an intel box.

So it's entirely possible that libpq may be working OK, whilst the
server is broken horribly.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com



Re: [GENERAL] code example for PQgetCopyData

2009-12-03 Thread Tom Lane
Dave Huber dhu...@letourneautechnologies.com writes:
 Now, I have a different issue. The result from PQgetCopyData is always
 1 for every row of data returned. Does this not work for return data
 WITH BINARY?

Weird, it should be the row length in bytes.  Are you maybe testing on
empty rows?

regards, tom lane



[GENERAL] Testing query times

2009-12-03 Thread Christine Penner

Hi,

The first time I use a query it's much slower than the second, third, 
etc. I want to do some testing with the first (slower) time. The only 
way I have found to reproduce that slow time is to restart my 
computer, which is a huge pain. Is there a better way to clear 
whatever Postgres is holding that makes them faster? I have tried 
restarting the Postgres service but that doesn't help.


Christine Penner
Ingenious Software
250-352-9495
christ...@ingenioussoftware.com 





Re: [GENERAL] Testing query times

2009-12-03 Thread Scott Marlowe
On Thu, Dec 3, 2009 at 2:55 PM, Christine Penner
christ...@ingenioussoftware.com wrote:
 Hi,

 The first time I use a query its much slower than the second, third etc. I
 want to do some testing with the 1st (slower) time. The only way I have
 found to reproduce that slow time is to restart my computer which is a huge
 pain. Is there a better way to clear whatever Postgres is holding to make
 them faster. I have tried restarting the Postgres service but that doesn't
 help.

Restarting PostgreSQL clears PostgreSQL's own buffers.  BUT
most OSes also cache, so you need a way to flush the kernel / file
system cache.  That depends on which OS you're running.



Re: [GENERAL] Testing query times

2009-12-03 Thread Greg Smith

Scott Marlowe wrote:

Restarting pgsql accomplishes the same thing as regards pgsql.  BUT
most OSes also cache, so you need a way to flush the kernel / file
system cache.  That depends on which OS you're running.
  
Can't tell if this is Linux or Windows from how the question was asked.  
Here's how to clear the OS cache on Linux: 


sync; echo 3 > /proc/sys/vm/drop_caches

And on Windows you should be able to clear it with CacheSet:  
http://technet.microsoft.com/en-us/sysinternals/bb897561.aspx


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Re: [GENERAL] Strange. I can logon with an invalid or no password atall

2009-12-03 Thread Daniel
Thanks.  I did not realise it was so configurable.



Re: [GENERAL] Strange. I can logon with an invalid or no password atall

2009-12-03 Thread Scott Marlowe
Better too many options than not enough eh?

On Thu, Dec 3, 2009 at 2:47 PM, Daniel danwgr...@gmail.com wrote:
 Thanks.  I did not realise it was so configurable.





-- 
When fascism comes to America, it will be intolerance sold as diversity.



[GENERAL] Installing PL/pgSQL by default

2009-12-03 Thread Bruce Momjian
Tom Lane wrote:
 But actually I thought we had more or less concluded that CREATE OR
 REPLACE LANGUAGE would be acceptable (perhaps only if it's given
 without any extra args?).  Or for that matter there seems to be enough
 opinion on the side of just installing plpgsql by default.  CINE is
 a markedly inferior alternative to either of those.

Based on research done as part of this thread, it seems plpgsql has
similar risks to recursive queries, so the idea of installing plpgsql by
default now makes more sense.

The attached patch installs plpgsql language by default, as well as the
three plpgsql helper functions.  The language is installed just like it
was before, but now automatically, e.g. still a separate shared object. 
One problem is that because system oids are used, it isn't possible to
drop the language:

$ droplang plpgsql test
droplang: language removal failed: ERROR:  cannot drop language plpgsql
because it is required by the database system

I assume we still want to allow the language to be uninstalled, for
security purposes.  Right?  Any suggestions?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/installation.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/installation.sgml,v
retrieving revision 1.328
diff -c -c -r1.328 installation.sgml
*** doc/src/sgml/installation.sgml	2 Dec 2009 14:07:25 -	1.328
--- doc/src/sgml/installation.sgml	3 Dec 2009 23:09:59 -
***
*** 2257,2270 
   is commandcreatelang/command failing with unusual errors.
   For example, running as the owner of the PostgreSQL installation:
  screen
! -bash-3.00$ createlang plpgsql template1
! createlang: language installation failed: ERROR:  could not load library /opt/dbs/pgsql748/lib/plpgsql.so: A memory address is not in the address space for the process.
  /screen
  Running as a non-owner in the group posessing the PostgreSQL
  installation:
  screen
! -bash-3.00$ createlang plpgsql template1
! createlang: language installation failed: ERROR:  could not load library /opt/dbs/pgsql748/lib/plpgsql.so: Bad address
  /screen
   Another example is out of memory errors in the PostgreSQL server
   logs, with every memory allocation near or greater than 256 MB
--- 2257,2270 
   is commandcreatelang/command failing with unusual errors.
   For example, running as the owner of the PostgreSQL installation:
  screen
! -bash-3.00$ createlang plperl template1
! createlang: language installation failed: ERROR:  could not load library /opt/dbs/pgsql748/lib/plperl.so: A memory address is not in the address space for the process.
  /screen
  Running as a non-owner in the group posessing the PostgreSQL
  installation:
  screen
! -bash-3.00$ createlang plperl template1
! createlang: language installation failed: ERROR:  could not load library /opt/dbs/pgsql748/lib/plperl.so: Bad address
  /screen
   Another example is out of memory errors in the PostgreSQL server
   logs, with every memory allocation near or greater than 256 MB
Index: src/include/catalog/pg_language.h
===
RCS file: /cvsroot/pgsql/src/include/catalog/pg_language.h,v
retrieving revision 1.35
diff -c -c -r1.35 pg_language.h
*** src/include/catalog/pg_language.h	22 Sep 2009 23:43:41 -	1.35
--- src/include/catalog/pg_language.h	3 Dec 2009 23:09:59 -
***
*** 75,79 
--- 75,82 
  DATA(insert OID = 14 ( sql		PGUID f t 0 0 2248 _null_ ));
  DESCR("SQL-language functions");
  #define SQLlanguageId 14
+ DATA(insert OID = 9 ( plpgsql		PGUID t t 2995 2996 2997 _null_ ));
+ DESCR(SQL-language functions);
+ 
  
  #endif   /* PG_LANGUAGE_H */
Index: src/include/catalog/pg_proc.h
===
RCS file: /cvsroot/pgsql/src/include/catalog/pg_proc.h,v
retrieving revision 1.554
diff -c -c -r1.554 pg_proc.h
*** src/include/catalog/pg_proc.h	29 Nov 2009 18:14:30 -	1.554
--- src/include/catalog/pg_proc.h	3 Dec 2009 23:10:03 -
***
*** 4722,4727 
--- 4722,4734 
  DATA(insert OID = 3114 (  nth_value		PGNSP PGUID 12 1 0 0 f t f t f i 2 0 2283 "2283 23" _null_ _null_ _null_ _null_ window_nth_value _null_ _null_ _null_ ));
  DESCR("fetch the Nth row value");
  
+ /* PL/pgSQL support functions */
+ DATA(insert OID = 2995 (  plpgsql_call_handler	PGNSP PGUID 13 1 0 0 f f f f f v 0 0 2280 "" _null_ _null_ _null_ _null_ plpgsql_call_handler "$libdir/plpgsql" _null_ _null_ ));
+ DESCR("PL/pgSQL function/trigger manager");
+ DATA(insert OID = 2996 (  plpgsql_inline_handler	PGNSP PGUID 13 1 0 0 f f f t f v 1 0 2278 "2281" _null_ _null_ _null_ _null_ plpgsql_inline_handler "$libdir/plpgsql" _null_ _null_ ));
+ DESCR(PL/pgSQL anonymous 

Re: [GENERAL] Installing PL/pgSQL by default

2009-12-03 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 One problem is that because system oids are used, it isn't possible to
 drop the language:
 I assume we still want to allow the language to be uninstalled, for
 security purposes.

Yes.  That behavior is not acceptable.  Why aren't you just adding
a CREATE LANGUAGE call in one of the initdb scripts?

regards, tom lane
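For readers following along, the manual CREATE LANGUAGE call Tom is alluding to looks like this on an 8.x server; a sketch of the per-database form, not the actual initdb change under discussion:

```sql
-- Sketch: on 8.x the pg_pltemplate catalog supplies the handler
-- details, so a superuser can install the language by name alone:
CREATE LANGUAGE plpgsql;

-- ...and, matching the uninstall requirement raised above, a
-- superuser can still remove it from a given database:
DROP LANGUAGE plpgsql;
```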

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installing PL/pgSQL by default

2009-12-03 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  One problem is that because system oids are used, it isn't possible to
  drop the language:
  I assume we still want to allow the language to be uninstalled, for
  security purposes.
 
 Yes.  That behavior is not acceptable.  Why aren't you just adding
 a CREATE LANGUAGE call in one of the initdb scripts?

Which scripts?  initdb.c?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] Installing PL/pgSQL by default

2009-12-03 Thread Andrew Dunstan



Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  One problem is that because system oids are used, it isn't possible to
  drop the language:
  I assume we still want to allow the language to be uninstalled, for
  security purposes.
 
 Yes.  That behavior is not acceptable.  Why aren't you just adding
 a CREATE LANGUAGE call in one of the initdb scripts?

Before we go too far with this, I'd like to know how we will handle the 
problems outlined here: 
http://archives.postgresql.org/pgsql-hackers/2008-02/msg00916.php


cheers

andrew



Re: [GENERAL] Installing PL/pgSQL by default

2009-12-03 Thread Bruce Momjian
Andrew Dunstan wrote:
 Tom Lane wrote:
  Bruce Momjian br...@momjian.us writes:
   One problem is that because system oids are used, it isn't possible to
   drop the language:
   I assume we still want to allow the language to be uninstalled, for
   security purposes.
 
  Yes.  That behavior is not acceptable.  Why aren't you just adding
  a CREATE LANGUAGE call in one of the initdb scripts?
 
 Before we go too far with this, I'd like to know how we will handle the
 problems outlined here:
 http://archives.postgresql.org/pgsql-hackers/2008-02/msg00916.php

Oh, I forgot about that issue.  FYI, I believe several packages of
Postgres already pre-install plpgsql, or at least allow it as an option.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



[GENERAL] SELECTing every Nth record for better performance

2009-12-03 Thread Tom
I have a big table that is used for datalogging. I'm designing a
graphing interface that will visualise the data. When the user is
looking at a small date range I want the database to be queried for all
records, but when the user is 'zoomed out', looking at an overview, I
want to run a query that skips every nth record and returns a manageable
dataset that still gives a correct overview of the data without
slowing the programme down. Is there an easy way to do this that I
have overlooked? I looked at:


1. Taking the primary key (which is an auto-increasing integer) and
running modulo on it. This will not work in my case, as the database is
being used for a number of different logging 'runs', and these runs
are not necessarily equally distributed over the primary keys.


2. Subqueries that do SELECT COUNT to generate a row number are too
slow, as I'm dealing with thousands to tens of thousands of records.


3. My last idea was to create a sequence with CYCLE, min_value 0
and max_value 1:

SELECT * FROM ( SELECT *, (SELECT nextval('counter_seq')) as counter
FROM table) WHERE counter = 0

This didn't work (counter was always the same for all rows), so I put
SELECT nextval('counter_seq') in a function called counter():

SELECT *, counter() as counter FROM table

This gives the table I'm looking for; however, I am unable to use
WHERE counter = 0. When I run EXPLAIN, it tells me that it is actually
not looking at the values in the table but just running the function
again to filter. So I tried this:

SELECT *, (counter()+id-id) as counter FROM table

where id is the primary key of the table. I'm trying to fool the
interpreter into looking at the table instead of running the function
itself. Again, this query generates the right table. So I tried
adding WHERE counter = 0. Again it doesn't work: it returns the same
number of rows, but changes the values of all rows in the 'counter'
column to 1. EXPLAIN does not help me (it gives no information about
the filtering).


Any general thoughts on how to achieve my original goal, or on how to
fix issues with my 3rd attempt, are appreciated.

Tom



Re: [GENERAL] SELECTing every Nth record for better performance

2009-12-03 Thread Richard Broersma
On Thu, Dec 3, 2009 at 9:26 PM, Tom t...@cstcomposites.com wrote:

 I
 want to run a query that skips every nth record and returns a manageable
 dataset that still gives a correct overview of the data without
 slowing the programme down. Is there an easy way to do this that I
 have overlooked? I looked at:

I've played with datalogging.  It was very easy to find nth records
when using date_trunc() on a timestamp.   The only minor problem with
date_trunc() was that I couldn't create arbitrary granularity.   For
example, it is easy to date_trunc() on a year, month, week, day, hour
or a minute, but I wanted 5, 10 and 15 minute increments.  I bet there
could be a solution to this, but I never looked into it.


To improve the select performance, I created functional indexes using
different date_trunc() granularities.
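The arbitrary granularity Richard wanted can be had with epoch arithmetic rather than date_trunc(); a sketch, where the readings(ts, value) table and the 300-second bucket size are assumptions:

```sql
-- Sketch: table readings(ts timestamp, value numeric) is hypothetical.
-- Floor the epoch seconds to a 300-second (5-minute) boundary, then
-- convert back to a timestamp and aggregate one row per bucket.
SELECT to_timestamp(floor(extract(epoch FROM ts) / 300) * 300) AS bucket,
       avg(value) AS avg_value
FROM readings
GROUP BY 1
ORDER BY 1;
```

Substituting 600 or 900 for 300 gives the 10- and 15-minute increments the same way.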

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



Re: [GENERAL] Installing PL/pgSQL by default

2009-12-03 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 Before we go too far with this, I'd like to know how we will handle the 
 problems outlined here: 
 http://archives.postgresql.org/pgsql-hackers/2008-02/msg00916.php

Hm, I think that's only a problem if we define it to be a problem,
and I'm not sure it's necessary to do so.  Currently, access to PL
languages is controlled by superusers.  You are suggesting that if
plpgsql is installed by default, then access to it should be controlled
by non-superuser DB owners instead.  Why do we have to move the
goalposts in that direction?  It's not like we expect that DB owners
should control access to other built-in facilities, like int8 or
pg_stat_activity for example.  The argument against having plpgsql
always available is essentially one of security risks, and I would
expect that most installations think that security risks are to be
managed by superusers.

regards, tom lane



Re: [GENERAL] SELECTing every Nth record for better performance

2009-12-03 Thread A. Kretschmer
In response to Tom:
 I have a big table that is used for datalogging. I'm designing a
 graphing interface that will visualise the data. When the user is
 looking at a small date range I want the database to be queried for all
 records, but when the user is 'zoomed out', looking at an overview, I
 want to run a query that skips every nth record and returns a manageable
 dataset that still gives a correct overview of the data without
 slowing the programme down. Is there an easy way to do this that I
 have overlooked? I looked at:


Do you have 8.4? If yes:

test=# create table data as select s as s from generate_Series(1,1000) s;
SELECT



test=*# select s from (select *, row_number() over (order by s) from
data) foo where row_number % 3 = 0 limit 10;
 s
----
  3
  6
  9
 12
 15
 18
 21
 24
 27
 30
(10 rows)


-- or skip every 5th record:

test=*# select s from (select *, row_number() over (order by s) from
data) foo where row_number % 5 != 0 limit 10;
 s
----
  1
  2
  3
  4
  6
  7
  8
  9
 11
 12
(10 rows)
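Applied to the original datalogging question, the same row_number() idea keeps every Nth sample of one run ordered by time; a sketch in which the table and column names (log, run_id, ts, value) are assumptions:

```sql
-- Sketch: log(run_id, ts, value) is a hypothetical datalogging table.
SELECT ts, value
FROM (SELECT ts, value,
             row_number() OVER (ORDER BY ts) AS rn
      FROM log
      WHERE run_id = 42) sub    -- restrict to one logging 'run'
WHERE rn % 10 = 0;              -- keep every 10th sample
```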


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99
