Re: [ADMIN] Running Postgres Daemons with same data files

2003-12-10 Thread Bhartendu Maheshwari
Dear UC,

You are right about the HA solution, but at the same time we are also
implementing load balancing, so we can't have two different processing
entities, each with its own database, per node. We are trying to provide a
solution for both HA and load balancing, in which two different processing
machines share a common database, so that both see the latest, synchronized
data files.

You are right that if the NAS is down then everything goes down, but the
probability of the NAS going down is very low, and this way we can provide
service in 99% of cases; if you handle 99% of cases, you are providing good
service, aren't you?

About the cache-to-file write: if the database writes everything to the files
after each transaction, then both servers have one synchronized set of data
files; whoever wants the data can acquire the lock, use the files, and then
unlock them. MySQL has a "flush tables" command to force the database to
write all cached contents to the files. Is there anything similar in
Postgres? This will certainly degrade the performance of my system, but it
should still be faster overall since I have two processing units.
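
A side note on that question: the closest Postgres analogue to MySQL's
"flush tables" is probably the CHECKPOINT command, which forces all dirty
shared buffers out to the data files, though it does not make sharing data
files between two postmasters safe. A minimal example:

-- Force a write-ahead-log checkpoint: all dirty buffers are written
-- to the data files before the command returns.
CHECKPOINT;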

Anyway, if somebody has some other solution for this, please help me. One
option I have is to run one common postmaster on one PC and have the two
nodes connect to that server to get the data. If there are any others,
please let me know.

regards
bhartendu

On Wed, 2003-12-10 at 10:38, Uwe C. Schroeder wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On Tuesday 09 December 2003 08:21 pm, Bhartendu Maheshwari wrote:
> > Dear Hal, Frank, Oli and all,
> >
> > I understand what you are all trying to say. I know this is not a good
> > way of designing, but we are planning to use the database for keeping
> > mobile transactions, and at the same time we need to provide an HA
> > solution. The one solution I derived from the discussion uses one server
> > and multiple clients, but the issue with this is that if the system the
> > database server was running on goes down, there is no HA or load
> > balancing at all, since without the data the other node can't do
> > anything.
> 
> Is the NAS server redundant? If not, it's not HA anyway.
> If there is a problem with the NAS or the network itself (say someone
> accidentally cuts a bunch of network wires), what do you do?
> I don't see a big difference between one server, the other server, or the
> network going down. Unless ALL components in your network are redundant and
> have failover capabilities (for example, one NAS automatically replacing the
> other if it fails), you don't have high availability.
>  
> What exactly do you mean by "mobile transactions"?
> The easiest approach, and probably the closest to what you intend, is to
> have a second server stand by and take over the moment the primary server
> fails (i.e. mount the NAS storage, start a postmaster, and take over the
> IP address of the original machine). That will still disrupt all queries
> currently in progress, but at least things can be used again immediately
> after the failure.
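> 
> A minimal takeover sketch along those lines - the NFS export, data
> directory, alias interface and address below are placeholders, not values
> from this thread:
> 
> #!/bin/sh
> # Hypothetical standby takeover; adjust paths and addresses.
> mount nas-server:/pgdata /home/pgsql/data     # mount the shared NAS volume
> ifconfig eth0:0 192.168.0.10 up               # take over the primary's IP
> pg_ctl -D /home/pgsql/data start              # start a postmaster here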
> Still, the NAS storage is a huge point of failure. What you failed to
> realize in the list below is that with remote storage over a network, a lot
> of caches and buffers are involved. I bet you won't be able to tell exactly
> when which piece of data has been physically written to the disk. Even if
> you close the files, some information could still hang around in a buffer
> until the storage array decides it's time to actually write it.
> 
> What you are trying to achieve is the classic "replication" approach:
> replicate the database to a second server and have that one take over if
> the first one fails. Look into the replication projects on gborg - that's
> more likely to give you a workable solution.
> 
> 
> 
> >
> > What I have in mind is the following implementation:-
> >
> > Step 1 :- Mount the data files from NAS server.
> > Step 2 :- start the postgres with the mounted data.
> > Step 3 :- Lock the data files by one server.
> > Step 4 :- do the database operation.
> > Step 5 :- Commit in the database files.
> > Step 6 :- Unlock the database files
> > Step 7 :- Now the other server can do the same.
> >
> > Or, if anybody has another solution for this, please suggest it. How can
> > I commit the data into the database files and flush the cache so it has
> > the latest data file contents? Is there a command to refresh both?
> >
> > Thank you Hal for
> > *
> > If you really, really do need an HA solution, then I'd hunt around for
> > someone to add to your team who has extensive experience in this kind of
> > thing, since it's all too easy otherwise to unwittingly leave in lots of
> > single points of failure.
> > *
> >
> > If you really know someone who can help me in this regard, I need his
> > help with this issue and want to derive a common te

[ADMIN] How can I know which users are logged in

2003-12-10 Thread Vasilis Ventirozos
Hi all, I want to ask something about user management.

How can I know which users are logged in through an SQL command?
And how can I know who I am (my username), again through SQL?

It would also be nice to see users as they log in, so that when someone
connects to a database a trigger (maybe) raises a notice that user vasilis
is logged in.


Thanks in advance

Vasilis Ventirozos




Re: [ADMIN] pg 7.4 on debian

2003-12-10 Thread Oliver Elphick
On Sat, 2003-12-06 at 08:04, Erwin Brandstetter wrote:
> "Miquel van Smoorenburg" wrote:
> 
> > It sure is.
> > www.mail-archive.com/[EMAIL PROTECTED]/msg71008.html
> 
> 
> Thanks for the hint - but it sure ain't.
> people.debian.org has been down since the debian hack.
> I am also desperately looking for postgresql 7.4 on debian woody.
> Is there a trusted mirror of people.debian.org or any other source to 
> get these .debs?

Peter Eisentraut has copied the binary packages to:
ftp://ftp.postgresql.org/pub/binary/v7.4/debian/

These are the md5sums of the woody release of 7.4:

Binaries:
1f0a6a7e1ddd6570d2fb069f9ebfbc78  libecpg-dev_7.4-0.woody.1_i386.deb
2e1f26504e3704e4d5f77766f5d31c70  libecpg4_7.4-0.woody.1_i386.deb
81c39de6c80e2609df05e500eb9dab8e  libpgeasy-dev_7.4-0.woody.1_i386.deb
9fe58a3e70369095b20768ac4776ce57  libpgeasy_7.4-0.woody.1_i386.deb
38d678f0436c72daaf6f568aa06e46ba  libpgperl_7.4-0.woody.1_i386.deb
cd38c2219fc9f6667e4137a62b9a81d0  libpgtcl-dev_7.4-0.woody.1_i386.deb
87a93f09420c7d7b4205d422fbeee61b  libpgtcl_7.4-0.woody.1_i386.deb
275a0b47d8e15d0c71e7477e09d14ea6  libpq3_7.4-0.woody.1_i386.deb
46398feffc9bd35a7127348d1dd466cb  odbc-postgresql_7.4-0.woody.1_i386.deb
1baa944e7d3cfb3bac92ab00aa27ed03  postgresql-client_7.4-0.woody.1_i386.deb
c0336368f09b377ae3aacfe0b9c73f85  postgresql-contrib_7.4-0.woody.1_i386.deb
336dc135850eb73189f2d4b2a2449e84  postgresql-dev_7.4-0.woody.1_i386.deb
d4a4bd7949b909e97c983f732e754823  postgresql-doc_7.4-0.woody.1_all.deb
36735269be5b5c3473c905bdce01f186  postgresql-plr_7.4-0.woody.1_i386.deb
0ef7481e2f100da3048824db28e5d15d  postgresql_7.4-0.woody.1_i386.deb

Source package:
9db69e84272746a0dfcd19706184c359  postgresql_7.4-0.woody.1.dsc
8a66893ec4502986aa9e7d82b4fae1dc  postgresql_7.4-0.woody.1.tar.gz
ce3fa832028e16dd8e2593a290b12c25  postgresql_7.4-0.woody.1_i386.changes
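
To verify a download against these sums (a sketch using one of the package
names above; assumes GNU md5sum):

echo "0ef7481e2f100da3048824db28e5d15d  postgresql_7.4-0.woody.1_i386.deb" \
  | md5sum -c -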


-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 "I beseech you therefore, brethren, by the mercies of 
  God, that ye present your bodies a living sacrifice, 
  holy, acceptable unto God, which is your reasonable 
  service."   Romans 12:1 




Re: [ADMIN] How can I know which users are logged in

2003-12-10 Thread Antonis Antoniou


Vasilis Ventirozos wrote:

> Hi all, I want to ask something about user management.
>
> How can I know which users are logged in through an SQL command?
> And how can I know who I am (my username), again through SQL?
> It would also be nice to see users as they log in, so that when someone
> connects to a database a trigger (maybe) raises a notice that user
> vasilis is logged in.

If I understood correctly, you need the query below:
SELECT * from pg_stat_activity;
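
That answers who is logged in; for the "who am I" part, the standard
current_user function should do. A small sketch:

-- The user name of the current session:
SELECT current_user;
-- Who is connected to which database (usename is the logged-in user):
SELECT datname, usename FROM pg_stat_activity;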




Re: [ADMIN] Running Postgres Daemons with same data files

2003-12-10 Thread William Yu
Bhartendu Maheshwari wrote:

> Dear Hal, Frank, Oli and all,
>
> I understand what you are all trying to say. I know this is not a good
> way of designing, but we are planning to use the database for keeping
> mobile transactions, and at the same time we need to provide an HA
> solution. The one solution I derived from the discussion uses one server
> and multiple clients, but the issue with this is that if the system the
> database server was running on goes down, there is no HA or load
> balancing at all, since without the data the other node can't do anything.

Here's an important question: what exactly is the thinking behind your
load balancing and HA requirements? The reason I'm asking is that there
are nuances to what highly available means.

As an example, you've got redundant servers, but they're all in the same
server room. A fire breaks out and kills everything. Not really HA, IMO.
Or you have redundant servers in different rooms/buildings hooked up to a
NAS unit someplace else. A mover knocks the head off the ceiling fire
extinguisher and floods the place (I've seen this happen), killing the NAS
device. Again, not very HA. On the other hand, if all your users are
housed in the same building as the servers, so that a fire that kills the
servers also stops your users from doing any work, then it's not a problem.

The situation my company is in is that we have users all over the U.S.
connecting to our app, so to do HA we needed to put duplicate servers
thousands of miles away from each other. That way, an earthquake in SF or
a terrorist attack in D.C. doesn't bring down our app. And since traffic
was load balanced between both locations, we needed master-master
replication, which we had to code in at the app level.



Re: [ADMIN] Running Postgres Daemons with same data files

2003-12-10 Thread Andrew Rawnsley
On Dec 10, 2003, at 12:25 PM, William Yu wrote:

> Bhartendu Maheshwari wrote:
>
> > Dear Hal, Frank, Oli and all,
> >
> > I understand what you are all trying to say. I know this is not a good
> > way of designing, but we are planning to use the database for keeping
> > mobile transactions, and at the same time we need to provide an HA
> > solution. The one solution I derived from the discussion uses one
> > server and multiple clients, but the issue with this is that if the
> > system the database server was running on goes down, there is no HA or
> > load balancing at all, since without the data the other node can't do
> > anything.
>
> Here's an important question: what exactly is the thinking behind your
> load balancing and HA requirements? The reason I'm asking is that there
> are nuances to what highly available means.
>
> As an example, you've got redundant servers, but they're all in the same
> server room. A fire breaks out and kills everything. Not really HA, IMO.
> Or you have redundant servers in different rooms/buildings hooked up to a
> NAS unit someplace else. A mover knocks the head off the ceiling fire
> extinguisher and floods the place (I've seen this happen), killing the
> NAS device. Again, not very HA. On the other hand, if all your users are
> housed in the same building as the servers, so that a fire that kills the
> servers also stops your users from doing any work, then it's not a
> problem.
>
> The situation my company is in is that we have users all over the U.S.
> connecting to our app, so to do HA we needed to put duplicate servers
> thousands of miles away from each other. That way, an earthquake in SF or
> a terrorist attack in D.C. doesn't bring down our app. And since traffic
> was load balanced between both locations, we needed master-master
> replication, which we had to code in at the app level.


There are 2 basic things to remember about proper HA, by any definition:

1. It is hard.
2. It is expensive.

This is true even for systems that were built with HA/redundancy in mind -
Postgres wasn't. Depending on what NAS device you're talking about, it may
not have been either (call EMC and see what the price tag is on a
redundant, HA SAN setup; if you can afford that, you can afford the right
tools).

This topic has floated around on the erserver mailing list - either you
have the budget to do HA right, or you don't. If you don't, you have to be
clear in your specs about what is possible with the tools you can afford,
without trying to jigger inappropriate technology into doing something it
really can't do. Trying to do something that the developers of a product
say is not possible, and selling it as HA, is not going to make you many
friends when problems arise.



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


Re: [ADMIN] Running Postgres Daemons with same data files

2003-12-10 Thread John Gibson
Bhartendu,

In my humble opinion, you would be well served if you listened to all 
the nice people on this list.

Use a local disk subsystem with RAID-type storage, and use replication to
have a second "standby" system available if the first one fails.

The path you seem anxious to trod will get very muddy and slippery.

Good Luck.

...john

Bhartendu Maheshwari wrote:

> Dear UC,
>
> You are right about the HA solution, but at the same time we are also
> implementing load balancing, so we can't have two different processing
> entities, each with its own database, per node. We are trying to provide a
> solution for both HA and load balancing, in which two different processing
> machines share a common database, so that both see the latest,
> synchronized data files.
>
> You are right that if the NAS is down then everything goes down, but the
> probability of the NAS going down is very low, and this way we can provide
> service in 99% of cases; if you handle 99% of cases, you are providing
> good service, aren't you?
>
> About the cache-to-file write: if the database writes everything to the
> files after each transaction, then both servers have one synchronized set
> of data files; whoever wants the data can acquire the lock, use the files,
> and then unlock them. MySQL has a "flush tables" command to force the
> database to write all cached contents to the files. Is there anything
> similar in Postgres? This will certainly degrade the performance of my
> system, but it should still be faster overall since I have two processing
> units.
>
> Anyway, if somebody has some other solution for this, please help me. One
> option I have is to run one common postmaster on one PC and have the two
> nodes connect to that server to get the data. If there are any others,
> please let me know.
>
> regards
> bhartendu





Re: [ADMIN] Running Postgres Daemons with same data files

2003-12-10 Thread Bhartendu Maheshwari
Dear All,

I got all your points; thanks for such a great discussion. Now the last
thing I want to know is how I can close the data files and flush the cache
into the data files. How can I do this in PostgreSQL?
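
As a sketch of both parts (CHECKPOINT flushes the cache to the data files;
a clean shutdown is the only supported way to fully close them; the data
directory below is a placeholder):

psql -c "CHECKPOINT;" template1            # flush dirty buffers to disk
pg_ctl -D /home/pgsql/data stop -m smart   # clean shutdown closes the files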

I will also try RAID and the other approaches you all suggested.

regards
bhartendu

On Thu, 2003-12-11 at 00:46, John Gibson wrote:
> Bhartendu,
> 
> In my humble opinion, you would be well served if you listened to all 
> the nice people on this list.
> 
> Use a local disk subsystem with RAID-type storage, and use replication to
> have a second "standby" system available if the first one fails.
> 
> The path you seem anxious to trod will get very muddy and slippery.
> 
> Good Luck.
> 
> ...john
> 
> Bhartendu Maheshwari wrote:
> 
> >Dear UC,
> >
> >You are right about the HA solution, but at the same time we are also
> >implementing load balancing, so we can't have two different processing
> >entities, each with its own database, per node. We are trying to provide
> >a solution for both HA and load balancing, in which two different
> >processing machines share a common database, so that both see the latest,
> >synchronized data files.
> >
> >You are right that if the NAS is down then everything goes down, but the
> >probability of the NAS going down is very low, and this way we can
> >provide service in 99% of cases; if you handle 99% of cases, you are
> >providing good service, aren't you?
> >
> >About the cache-to-file write: if the database writes everything to the
> >files after each transaction, then both servers have one synchronized set
> >of data files; whoever wants the data can acquire the lock, use the
> >files, and then unlock them. MySQL has a "flush tables" command to force
> >the database to write all cached contents to the files. Is there anything
> >similar in Postgres? This will certainly degrade the performance of my
> >system, but it should still be faster overall since I have two processing
> >units.
> >
> >Anyway, if somebody has some other solution for this, please help me.
> >One option I have is to run one common postmaster on one PC and have the
> >two nodes connect to that server to get the data. If there are any
> >others, please let me know.
> >
> >regards
> >bhartendu
> >
> >  
> >
> 
> 
> 






Re: [ADMIN] Postgres data restoration problem

2003-12-10 Thread Jim Cochrane
> Jim Cochrane <[EMAIL PROTECTED]> writes:
> >> How old?  We need to know the exact PG version number.
> 
> > cat PG_VERSION
> > 7.2
> 
> That's not exact, it only tells the major release number.
> "postmaster --version" was what I was looking for.

It outputs:

postmaster (PostgreSQL) 7.2.1

> 
> > However the server failed to start up, giving the following error messages:
> 
> > postmaster successfully started
> > DEBUG:  database system was shut down at 2003-12-07 14:55:22 MST
> > DEBUG:  open of /home/pgsql/data/pg_xlog/ (log file 0, segment 0) failed: No such file or directory
> > DEBUG:  invalid primary checkpoint record
> > DEBUG:  open of /home/pgsql/data/pg_xlog/ (log file 0, segment 0) failed: No such file or directory
> > DEBUG:  invalid secondary checkpoint record
> > FATAL 2:  unable to locate a valid checkpoint record
> > DEBUG:  startup process (pid 31411) exited with exit code 2
> > DEBUG:  aborting startup due to startup process failure
> 
> This is ungood :-(.  Your only hope at this point is to run pg_resetxlog
> (which is not a standard part of the 7.2 distribution, but is available
> as a contrib utility).  If you are lucky, that will let you into the
> database, but you should be aware of the possibility that you've lost
> parts of the last few transactions and therefore have a
> not-completely-consistent database.

I guess I'm unlucky.  I shut the server down and ran pg_resetxlog, which
gave the error message:

The database was not shut down cleanly.
Resetting the xlog may cause data to be lost!
If you want to proceed anyway, use -f to force reset.

So I used the -f option and started the server, which came up successfully:

postmaster successfully started
DEBUG:  database system was shut down at 2003-12-10 12:31:46 MST
DEBUG:  checkpoint record is at 0/210
DEBUG:  redo record is at 0/210; undo record is at 0/210; shutdown TRUE
DEBUG:  next transaction id: 158; next oid: 16556
DEBUG:  database system is ready

However, the same problem occurs as before: when I use psql to connect to a
database and then use \d to list the tables, there are no tables:

No relations found.
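
As a cross-check from SQL, the system catalogs can be queried directly to
see whether any user tables survive (a sketch; in pg_class, relkind = 'r'
marks ordinary tables):

-- List user tables, skipping the pg_* system catalogs.
SELECT relname FROM pg_class
WHERE relkind = 'r' AND relname NOT LIKE 'pg_%';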

It appears to me that the metadata for the database tables got corrupted or
blown away.  It looks like it's time to give up.

In case anyone's concerned, this data was important but not critical - it
hurts to lose it, but I'm not going to end up in jail because of it :-)

Anyway, I appreciate you guys' help.  I've learned some new things about
postgres.


Jim Cochrane
