Re: [ADMIN] Re: Database over multiple drives

2001-06-06 Thread Wm Brian McCane

Yes!!

I have done this very successfully.  I have mounts:

/usr/local/pgsql/data   8Gig slice on Primary IDE-Slave
/usr/local/pgsql/data2  8Gig slice on Secondary IDE-Master
/usr/local/pgsql/data3  4Gig slice on Primary IDE-Master

Then I move files from data/base/ to data2/base/ and create a
symbolic link with the following.

$ pg_ctl stop
*NOTE*  Make sure it has really shut down; I have some long-running tasks
that have bitten me during a moment of stupidity.
$ cd /usr/local/pgsql/data/base/
$ mv <dboid> /usr/local/pgsql/data2/base/
$ ln -s /usr/local/pgsql/data2/base/<dboid> .
$ pg_ctl start

I currently only have the pg_xlog directory on data3 because that drive is
also shared with the operating system.  But just moving the pg_xlog
directory alone gave me a significant performance boost.  By freeing up the
data drives from having to write those log files, I am less likely to have
to wait for the heads to move around after fsyncing a log file.  You can
also move entire database directories using commands similar to those above.

- brian


- Original Message -
From: "Andy Samuel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Thursday, May 31, 2001 11:55 PM
Subject: [ADMIN] Re: Database over multiple drives


> Has anybody *really* tried this solution ?
> Is it safe ?
>
> TIA
> Andy
>
> - Original Message -
> From: "Ragnar Kjørstad" <[EMAIL PROTECTED]>
> To: "David Lizano" <[EMAIL PROTECTED]>
> Cc: <[EMAIL PROTECTED]>
> Sent: Thursday, May 31, 2001 4:33 PM
> Subject: Re: Database over multiple drives
>
>
> > On Thu, May 31, 2001 at 10:37:26AM +0200, David Lizano wrote:
> > > You can't do it with Postgres. To do it, Postgres must implement
> > > "tablespaces" to spread the database over different locations (which
> > > can, of course, be different physical locations). Then a table can
> > > be assigned to a tablespace.
> >
> > Sure you can.
> > You can move some files to a different drive, and put a symlink in the
> > original directory.
> >
> > Or, if you have an operating system that has a logical volume manager,
> > you can concatenate several disks, use striping or whatever, to get a
> > logical device that spans several physical devices.
> >
> >
> > --
> > Ragnar Kjørstad
> > Big Storage
> >
>
>
>
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
>





[ADMIN] Hang when doing insert/update

2001-06-06 Thread Lee Kwok Shing

Hello,
  I am using PostgreSQL 7.0.2 on RH 6.2.

  The DB has worked smoothly for the past 8 months, but recently the
system started to hang when doing INSERT or UPDATE statements.

  When I run
ps aux
  I see a lot of processes like
postgres localhost xxx xxx INSERT
postgres localhost xxx xxx SELECT
postgres localhost xxx xxx idle
  running.

  When I kill one process with
kill -TERM pid
  all those processes go away and the DB returns to normal. But after some
inserts/updates, the problem appears again...

  I tried strace on the processes and found that they seem to be waiting
for something, since the following call is executed repeatedly:
select(0, NULL, NULL, ...
  Maybe they have fallen into some infinite loop...

  Could you give me some help ?

  Best regards,
Lee Kwok Shing




Re: [ADMIN] Script Hangs on

2001-06-06 Thread Melvyn Sopacua

At 07:21 6-6-01, you wrote:

>$dbconn1 = @pg_connect ("host=$dbhost dbname=$dbname user=$dbuser
>password=$dbpasswd");
>
>Edit /usr/local/lib/php.ini and make sure persistent connections is
>turned off.

This will work as long as the machine is up but pgsql isn't. However, if the
machine itself is down, the timeout is tremendous.

Use this, or view source online at 
http://melvyn.idg.nl/phpsources/db_connect_plus.phps:
<?php
// (the opening of this function was eaten by the list archiver;
//  reconstructed by analogy with the secondary-server block below)
function db_connect_plus()
{
    if(file_exists($primcheck))
    {
        $lastmod=filemtime($primcheck);
        if($now - $lastmod > $skipcheck)
        {
            if($sp=fsockopen($primary, $port, &$errno, &$errstr, $timeout))
            {
                fclose($sp);
                unlink($primcheck);
                $is_mailed=mail($cfg_vars[pg_admin], "Primary $primary back up.",
                    date("l dS of F Y h:i:s A", $now), "From: $appname <$appemail>");
                return $connect($primary, $username, $password);
            }
            else
            {
                unlink($primcheck);
                touch($primcheck);
                mail($cfg_vars[pg_admin], "Primary $primary down", "$errno\n$errstr",
                    "From: $appname <$appemail>\nX-Priority: 1 (Highest)\nX-MSMail-Priority: High");
            }
        }
    }
    else
    {
        if($sp=fsockopen($primary, $port, &$errno, &$errstr, $timeout))
        {
            fclose($sp);
            return $connect($primary, $username, $password);
        }
        else
        {
            touch($primcheck);
            mail($cfg_vars[pg_admin], "Primary $primary down", "$errno\n$errstr",
                "From: $appname <$appemail>\nX-Priority: 1 (Highest)\nX-MSMail-Priority: High");
        }
    }
    //If we get this far, the primary is down, so do the secondary.
    if(file_exists($seccheck))
    {
        $lastmod=filemtime($seccheck);
        if($now - $lastmod > $skipcheck)
        {
            if($sp=fsockopen($secondary, $port, &$errno, &$errstr, $timeout))
            {
                fclose($sp);
                unlink($seccheck);
                mail($cfg_vars[pg_admin], "Secondary $secondary back up.",
                    date("l dS of F Y h:i:s A", $now), "From: $appname <$appemail>");
                return $connect($secondary, $username, $password);
            }
            else
            {
                unlink($seccheck);
                touch($seccheck);
                mail($cfg_vars[pg_admin], "Secondary $secondary down", "$errno\n$errstr",
                    "From: $appname <$appemail>\nX-Priority: 1 (Highest)\nX-MSMail-Priority: High");
            }
        }
    }
    else
    {
        if($sp=fsockopen($secondary, $port, &$errno, &$errstr, $timeout))
        {
            fclose($sp);
            return $connect($secondary, $username, $password);
        }
        else
        {
            touch($seccheck);
            mail($cfg_vars[pg_admin], "Secondary $secondary down", "$errno\n$errstr",
                "From: $appname <$appemail>");
        }
    }
    //if we get this far, both are down!
    return 0;
}

$cs=db_connect_plus();
if(!$cs)
{
    die("Oh oh - 2 database servers down. And you think you're redundant..");
}
?>





[ADMIN] Intentionally splitting data in a table across files.

2001-06-06 Thread Nick Fankhauser

Does PostgreSQL support (or will it some day support) "partitioned tables"?
This is a feature currently available in Oracle which allows you to
physically separate the data for a table based on values in a set of
columns, and to attach or detach the underlying files from the table.

While searching for the answer in the existing docs & archives, I noted that
a table will be split across files automatically when the required space
exceeds operating system limits, so it appears that at least part of the
concept already exists, but I found nothing about a way to organize the data
into a particular file, or to take a file that holds part of a table
off-line without making a mess. This would be a great enhancement for
warehousing applications such as ours, where the ability to take a
particular large chunk of data on- and off-line quickly is important.

Alternately, does anyone have an idea about how to address this need in a
different way using existing tools?

-Nick

-
Nick Fankhauser

[EMAIL PROTECTED]  Phone 1.765.965.7363  Fax 1.765.962.9788
doxpop - Court records at your fingertips - http://www.doxpop.com/





Re: [ADMIN] Intentionally splitting data in a table across files.

2001-06-06 Thread Tom Lane

"Nick Fankhauser" <[EMAIL PROTECTED]> writes:
> Does PostgreSQL support (or some day will support) "partitioned tables"?

It's not on anyone's radar screen AFAIK.

> Alternately, does anyone have an idea about how to address this need in a
> different way using existing tools?

Make a view that's a UNION of the component tables, plus rules that
cause inserts to go into the appropriate components.
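
For illustration, that suggestion might look something like the sketch
below. The table and column names are made up, the split is by year, and
this is untested here, so treat it as an outline rather than a recipe:

```sql
-- Hypothetical example: split a table "log" into per-year components.
CREATE TABLE log_2000 (ts timestamp, msg text);
CREATE TABLE log_2001 (ts timestamp, msg text);

-- The view makes the components look like one table to queries.
CREATE VIEW log AS
    SELECT * FROM log_2000
    UNION ALL
    SELECT * FROM log_2001;

-- Conditional DO INSTEAD rules route inserts on the view into the
-- appropriate component; together they cover every possible ts value.
CREATE RULE log_ins_old AS ON INSERT TO log
    WHERE new.ts < '2001-01-01' DO INSTEAD
    INSERT INTO log_2000 VALUES (new.ts, new.msg);
CREATE RULE log_ins_new AS ON INSERT TO log
    WHERE new.ts >= '2001-01-01' DO INSTEAD
    INSERT INTO log_2001 VALUES (new.ts, new.msg);
```

Taking one chunk off-line would then amount to redefining the view (and
rules) without that component table.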

regards, tom lane




[ADMIN] changes sequences to unique

2001-06-06 Thread Dave Stokes

I have a sequence that I 'thought' was providing unique numbers, but it is
not.  Is there some way to turn on uniqueness?  Should I reload after
adding UNIQUE to the sequence?  Any other thoughts on digging my way out
of the problem?

Thanks in advance!
-- 

Dave Stokes
[EMAIL PROTECTED]
817 329 9317





Re: [ADMIN] changes sequences to unique

2001-06-06 Thread Stephan Szabo

On Wed, 6 Jun 2001, Dave Stokes wrote:

> I have a sequence that I 'thought' was providing unique numbers but is 
> not.  Is there someway to turn on unique-ness?  Should I reload after 
> adding unique to the sequence?  Any other thoughts on digging my way out 
> of the problem?

They should provide unique numbers.  You could get non-unique values in
a column with a sequence default if you manually insert values (the
sequence wouldn't know about these).  Can you give a sequence of events
from a blank start to replicate what you're seeing?
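
The failure mode described above can be sketched like this (hypothetical
table and sequence names; note there is deliberately no unique constraint,
so nothing rejects the collision):

```sql
-- Hypothetical table using a sequence default but no UNIQUE constraint.
CREATE SEQUENCE t_id_seq;
CREATE TABLE t (id int DEFAULT nextval('t_id_seq'), val text);

INSERT INTO t (val) VALUES ('a');         -- id = 1, from the sequence
INSERT INTO t (id, val) VALUES (2, 'b');  -- manual id; sequence not advanced
INSERT INTO t (val) VALUES ('c');         -- nextval() also yields 2: duplicate

-- After repairing the duplicate rows, resynchronize the sequence past
-- the manually inserted values, and add a unique index so any future
-- collision is rejected instead of silently accepted.
SELECT setval('t_id_seq', (SELECT max(id) FROM t));
CREATE UNIQUE INDEX t_id_key ON t (id);
```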






Re: [ADMIN] changes sequences to unique

2001-06-06 Thread Tom Lane

Dave Stokes <[EMAIL PROTECTED]> writes:
> I have a sequence that I 'thought' was providing unique numbers but is 
> not.  Is there someway to turn on unique-ness?

Huh?  nextval() should always produce unique values (unless the sequence
wraps around, of course).

regards, tom lane




Re: [ADMIN] changes sequences to unique

2001-06-06 Thread Thalis A. Kalfigopoulos

How did you conclude that it doesn't provide unique numbers? A sequence
gives unique values by definition (unless you allow it to cycle and you
actually wrapped around the 2.1 billion boundary).

cheers,
thalis


On Wed, 6 Jun 2001, Dave Stokes wrote:

> I have a sequence that I 'thought' was providing unique numbers but is 
> not.  Is there someway to turn on unique-ness?  Should I reload after 
> adding unique to the sequence?  Any other thoughts on digging my way out 
> of the problem?
> 
> Thanks in advance!
> -- 
> 
> Dave Stokes
> [EMAIL PROTECTED]
> 817 329 9317





Re: [ADMIN] changes sequences to unique

2001-06-06 Thread Stefan Huber


>Huh?  nextval() should always produce unique values (unless the sequence
>wraps around, of course).

What about explicitly overriding a sequence by INSERTing a specific value,
when the column is not a primary key or defined as SERIAL, but just uses a
manual sequence?

Stefan

-- 
I don't find it hard to meet expenses. They're everywhere!





Re: [ADMIN] Intentionally splitting data in a table across files. (fwd)

2001-06-06 Thread Brian McCane


On Wed, 6 Jun 2001, Tom Lane wrote:

> "Nick Fankhauser" <[EMAIL PROTECTED]> writes:
> > Does PostgreSQL support (or some day will support) "partitioned tables"?
> 
> It's not on anyone's radar screen AFAIK.
> 
> > Alternately, does anyone have an idea about how to address this need in a
> > different way using existing tools?
> 
> Make a view that's a UNION of the component tables, plus rules that
> cause inserts to go into the appropriate components.

I did a poor man's version of this (sort of) in Perl.  I had some data
with a fairly evenly distributed field (we'll call it 'foo' :).  'foo'
was a varchar(14) field that contained a textual timestamp (I don't
remember why, but it looked like '20010606123456').  I created 10 tables
(i.e. bar0, bar1, bar2, ... bar9), and then had my insert function select
which table to insert into based on the last character of 'foo'.  Since I
was receiving updates about every 7 seconds during business hours, this
worked quite well.  All of the resulting tables were of similar size.

After I created the files, I distributed them to 5 drives attached via
SCSI using symlinks (bar0 and bar5 on sd0, etc, to limit the chance of
contention).

Selecting data using a UNION view was probably possible (v7.0.3) but I
never thought to try it that way.  I never found any fast way to select
data from the resulting tables, and since the goal of this data
distribution was speed, we later scrapped the idea.  They sure looked
pretty though ;).
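
For what it's worth, the UNION view mentioned above would presumably have
looked something like the following. This is an untested sketch based only
on the description in this mail (the bar* tables and the 'foo' column):

```sql
-- Hypothetical UNION ALL view over the ten hash tables described above.
CREATE VIEW bar AS
    SELECT * FROM bar0
    UNION ALL SELECT * FROM bar1
    UNION ALL SELECT * FROM bar2
    UNION ALL SELECT * FROM bar3
    UNION ALL SELECT * FROM bar4
    UNION ALL SELECT * FROM bar5
    UNION ALL SELECT * FROM bar6
    UNION ALL SELECT * FROM bar7
    UNION ALL SELECT * FROM bar8
    UNION ALL SELECT * FROM bar9;

-- When the last character of 'foo' is known, the application can still
-- go straight to the single component table instead of the view:
SELECT * FROM bar6 WHERE foo = '20010606123456';
```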

- brian

> 
>   regards, tom lane


