Re: [GENERAL] Killing a session in windows

2007-12-16 Thread Bruce Momjian
Howard Cole wrote:
 
  Which you can do, no?  I thought pg_ctl's kill option was invented
  specifically to make this less painful on Windows.
  I shall look into the pg_ctl options to see if the kill option does 
  what taskkill cannot (thanks for the heads-up on that).
 
 Using
 $ pg_ctl kill TERM [pid]
 worked great. Since very few people seem to know about this, could I 
 suggest making it more prominent in the server administration pages?

Agreed. I have added this second sentence to our 8.3 beta docs:

   Alternatively, you can send the signal directly using <command>kill</command>
   (or <command>pg_ctl kill TERM [process id]</command> on <productname>Windows</productname>).

You can actually use pg_ctl kill on Unix too but it seems awkward to
suggest it in the existing sentence.
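
For example, to terminate one specific backend (the PID below is made up;
the pg_stat_activity column is procpid in 8.x):

    -- in psql, find the backend you want to signal
    SELECT procpid, usename, current_query FROM pg_stat_activity;

    $ pg_ctl kill TERM 8421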

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] top posting

2007-12-16 Thread Bruce Momjian
Joshua D. Drake wrote:
 Don't put this one on me :). This is a community thing. Andrew's reply
 aside, if you review the will of the community on this you will see
 that top posting is frowned upon.
 
 I will be the first to step up and pick a fight when I think the
 community is being dumb (just read some of my threads ;)) but on this
 one, I have to agree. We should discourage top posting, vehemently if
 needed.

I do top-post if I am asking _about_ the email rather than addressing
its content, e.g. "Is this a TODO item?"  You don't want to trim the
email, because it has context that might be needed for the reply, and
bottom-posting just makes my question harder to find; the question
isn't really related to the content of the email.

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-16 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 We used to have the C #define MAKE_EXPIRED_TUPLES_VISIBLE that would
 make deleted rows visible, but it seems it was removed in this commit as
 part of a restructuring:

It was removed because it was utterly useless.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-16 Thread Colin Wetherbee

Greetings.

I am working on a PostgreSQL-backed mod_perl web application that's just 
in its infancy.

Let's say I have a users table that holds about 15 columns of data about 
each user.

If I write one Perl sub for each operation on the table (e.g. one that 
gets the username and password hash, another that gets the last name and 
first name, etc.), there will be a whole lot of subs, each of which 
performs one very specific task.

If I write one larger Perl sub that grabs the whole row, and then I deal 
with the contents of the row in Perl, ignoring columns as I please, it 
will require fewer subs and, in turn, imply cleaner code.

My concern is that I don't know what efficiency I would be forfeiting on 
the PostgreSQL side of the application by always querying entire rows if 
my transaction occurs entirely within a single table.  Of course, I 
would want to handle more complicated JOINs and other more intensive 
operations on the PostgreSQL side.

I don't think the application will ever store a tuple larger than about 
512 bytes in any table, so the network speed wouldn't really come into play.

Thanks.

Colin

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-16 Thread Tom Lane
Colin Wetherbee [EMAIL PROTECTED] writes:
 Let's say I have a users table that holds about 15 columns of data about 
 each user.

 If I write one Perl sub for each operation on the table (e.g. one that 
 gets the username and password hash, another that gets the last name and 
 first name, etc.), there will be a whole lot of subs, each of which 
 performs one very specific task.

 If I write one larger Perl sub that grabs the whole row, and then I deal 
 with the contents of the row in Perl, ignoring columns as I please, it 
 will require fewer subs and, in turn, imply cleaner code.

 My concern is that I don't know what efficiency I would be forfeiting on 
 the PostgreSQL side of the application by always querying entire rows if 
 my transaction occurs entirely within a single table.

Not nearly as much as you would lose anytime you perform two independent
queries to fetch different fields of the same row.  What you really need
to worry about here is making sure you only fetch the row once
regardless of which field(s) you want out of it.  It's not clear to me
whether your second design concept handles that, but if it does then
I think it'd be fine.

The only case where custom field sets might be important is if you have
fields that are wide enough to potentially get TOASTed (ie more than a
kilobyte or so apiece).  Then it'd be worth the trouble to not fetch
those when you don't need them.  But that apparently isn't the case
with this table.
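
For concreteness, a minimal Perl/DBI sketch of that fetch-once approach
(connection parameters, table, column, and sub names are all made up):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=myapp', 'myuser', 'mypass',
                           { RaiseError => 1, AutoCommit => 1 });

    my %row_cache;

    # One sub grabs the whole row; callers pick out whichever columns
    # they need, and repeated calls for the same user reuse the cached row.
    sub get_user {
        my ($username) = @_;
        $row_cache{$username} ||= $dbh->selectrow_hashref(
            'SELECT * FROM users WHERE username = ?', undef, $username);
        return $row_cache{$username};
    }

    # Both of these hit the database at most once for the same row.
    my $pw_hash = get_user('colin')->{password_hash};
    my ($first, $last) = @{ get_user('colin') }{qw(first_name last_name)};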

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-16 Thread Colin Wetherbee

Tom Lane wrote:
 Colin Wetherbee [EMAIL PROTECTED] writes:
  Let's say I have a users table that holds about 15 columns of data about 
  each user.
 
  If I write one Perl sub for each operation on the table (e.g. one that 
  gets the username and password hash, another that gets the last name and 
  first name, etc.), there will be a whole lot of subs, each of which 
  performs one very specific task.
 
  If I write one larger Perl sub that grabs the whole row, and then I deal 
  with the contents of the row in Perl, ignoring columns as I please, it 
  will require fewer subs and, in turn, imply cleaner code.
 
  My concern is that I don't know what efficiency I would be forfeiting on 
  the PostgreSQL side of the application by always querying entire rows if 
  my transaction occurs entirely within a single table.
 
 Not nearly as much as you would lose anytime you perform two independent
 queries to fetch different fields of the same row.  What you really need
 to worry about here is making sure you only fetch the row once
 regardless of which field(s) you want out of it.  It's not clear to me
 whether your second design concept handles that, but if it does then
 I think it'd be fine.

Yes, the second design concept would handle that.

 The only case where custom field sets might be important is if you have
 fields that are wide enough to potentially get TOASTed (ie more than a
 kilobyte or so apiece).  Then it'd be worth the trouble to not fetch
 those when you don't need them.  But that apparently isn't the case
 with this table.

Sounds good.

Thanks.

Colin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Need to find out which process is hitting hda

2007-12-16 Thread Scott Marlowe
On Dec 14, 2007 1:33 AM, Ow Mun Heng [EMAIL PROTECTED] wrote:
 I kept looking at the io columns and didn't even think of the swap
 partition. It's true that it's moving quite erratically but I won't say
 that it's really thrashing.

               total       used       free     shared    buffers     cached
 Mem:           503        498          4          0          3        287
 -/+ buffers/cache:        207        295
 Swap:         2527        328       2199

 (YEP, I know I'm RAM starved on this machine)

Good lord, my laptop has more memory than that. :)

What Tom said: buy some more RAM.  Also, look at turning down the
swappiness setting.
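
On Linux that knob is vm.swappiness; something along these lines (the
value 10 is just an example, not a recommendation for every workload):

    # show the current setting
    cat /proc/sys/vm/swappiness

    # lower it until the next reboot (as root)
    sysctl -w vm.swappiness=10

    # to make it persistent, add this line to /etc/sysctl.conf:
    # vm.swappiness = 10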

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Need to find out which process is hitting hda

2007-12-16 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Sun, 16 Dec 2007 17:55:55 -0600
Scott Marlowe [EMAIL PROTECTED] wrote:

 On Dec 14, 2007 1:33 AM, Ow Mun Heng [EMAIL PROTECTED] wrote:
  I kept looking at the io columns and didn't even think of the swap
  partition. It's true that it's moving quite erratically but I won't
  say that it's really thrashing.
 
                total       used       free     shared    buffers     cached
  Mem:           503        498          4          0          3        287
  -/+ buffers/cache:        207        295
  Swap:         2527        328       2199
 
  (YEP, I know I'm RAM starved on this machine)
 
 Good lord, my laptop has more memory than that. :)

My phone has more memory than that :P

Sincerely,

Joshua D. Drake



- -- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHZb6bATb/zqfZUUQRAqmxAJ4o2PzaSUrxEAT9ElAfFNdnofKwaACfR6IZ
3uf1dtRME1SUyKKbPY1iwKU=
=KJFh
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Need to find out which process is hitting hda

2007-12-16 Thread Scott Marlowe
On Dec 16, 2007 6:11 PM, Joshua D. Drake [EMAIL PROTECTED] wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On Sun, 16 Dec 2007 17:55:55 -0600
 Scott Marlowe [EMAIL PROTECTED] wrote:

  On Dec 14, 2007 1:33 AM, Ow Mun Heng [EMAIL PROTECTED] wrote:
   I kept looking at the io columns and didn't even think of the swap
   partition. It's true that it's moving quite erratically but I won't
   say that it's really thrashing.
  
                 total       used       free     shared    buffers     cached
   Mem:           503        498          4          0          3        287
   -/+ buffers/cache:        207        295
   Swap:         2527        328       2199
  
   (YEP, I know I'm RAM starved on this machine)
 
  Good lord, my laptop has more memory than that. :)

 My phone has more memory than that :P

Now that you mention it, my phone does indeed have more memory than my
laptop as well.  Sheesh.  Technology doesn't march forward; it drag
races forward.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings