Dear all,
I am trying to install Postgres Plus on an Ubuntu 10.04 based system.
I ran into a problem after installation when I issued the command below:
root@S10-MySQL:/opt/PostgresPlus/9.0SS# bin/psql -Upostgres
Password for user postgres:
psql (9.0.2)
Type "help" for help.
Cannot read termcap
On Wed, Feb 2, 2011 at 1:19 AM, hlcborg h.luis.card...@gmail.com wrote:
These two operations are not equivalent.
But...
Can I have this operation done in the Stored Procedure inside the Database?
Plain SHA-1, signed with an RSA signature, and in the end encoded to
base64?
I was
One more thing to add: after researching, I found that this is a bug, and
installing the libtermcap library solves this problem.
I followed the below steps :
sudo apt-get install gcc
sudo apt-get install libgcc1
sudo apt-get install g++
sudo apt-get install cpp
sudo apt-get install
Rich --
I have an existing table with 15,492 rows and want to add additional rows
from a .csv file. If I use 'COPY tablename from filename with delimiter
as : csv quote as ' ' will this overwrite existing rows in the table or
append rows?
It will not overwrite any existing data; if the
Hi All,
I have a question about sequences in combination with streaming
replication.
It seems something strange is happening with sequences which are
streamed to the slave.
When updating the sequence, the last_value on the slave shifts by 32
and halts at this value until the master has increased
Install termcap (http://ftp.gnu.org/gnu/termcap/)
Add --enable-install-termcap configure option while running configure.
On Feb 2, 2011, at 3:29 PM, Adarsh Sharma wrote:
One more thing to add: after researching, I found that this is a bug and
that installing the libtermcap library solves
Hello David,
On Wed, 02 Feb 2011 01:36:15 +0100, David Johnston pol...@yahoo.com
wrote:
If random sampling is desirable would the following construct limit
locking
only to the sampled rows?
SELECT id
FROM tasktable
WHERE id IN (SELECT random_id_sample())
FOR UPDATE
The random_id_sample
Dear all,
Today I tried to do some performance tuning on a PostgreSQL database.
I want to change my shared memory permanently.
I have 16 GB RAM.
Please guide me on how to change it permanently and on what the correct
value for it is.
I am going for 8 GB.
Thanks Regards
Adarsh Sharma
--
* Adarsh Sharma:
Please guide me on how to change it permanently and on what the correct
value for it is.
I am going for 8 GB.
Usually, you can put these lines
kernel.shmall = 90
kernel.shmmax = 90
into /etc/sysctl.conf. Run sysctl -p to activate them. However,
this is a bit
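The shmall/shmmax values quoted above are truncated, but the arithmetic behind them is simple to sketch for the 8 GB target mentioned earlier: on Linux, kernel.shmmax is in bytes and kernel.shmall is in pages. The 4 KiB page size below is an assumption; check it with getconf PAGE_SIZE.

```shell
# Illustrative only: derive kernel.shmmax (bytes) and kernel.shmall (pages)
# for an 8 GB shared memory ceiling, assuming a 4 KiB page size.
SHMMAX=$((8 * 1024 * 1024 * 1024))    # bytes
PAGE_SIZE=4096                        # assumption; verify with: getconf PAGE_SIZE
SHMALL=$((SHMMAX / PAGE_SIZE))        # pages
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
```

The two echoed lines are what would go into /etc/sysctl.conf before running sysctl -p.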
To allow read-only access, i.e. SELECT, to a user on a table which has a SERIAL
column as well as foreign key references do I need to GRANT the user anything
other than SELECT on the table and its corresponding sequence? As I understand,
USAGE is only required to modify the table or use the
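For reference, a minimal sketch of what the grants could look like (table and role names are made up): plain SELECT needs only the table privilege. The serial column's sequence is consulted only by nextval() on INSERT, and foreign-key checks are performed with the referenced table owner's rights, so no extra grants should be needed just for reading.

```sql
-- Hypothetical names; this alone suffices for read-only access:
GRANT SELECT ON mytable TO report_user;
-- Only add this if report_user should also read the sequence itself:
GRANT SELECT ON SEQUENCE mytable_id_seq TO report_user;
```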
In response to Aleksey Tsalolikhin atsaloli.t...@gmail.com:
On Mon, Jan 31, 2011 at 7:52 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
On Mon, Jan 31, 2011 at 5:54 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:
Situation: Disk usage on production server root filesystem is at
Hello list,
I have a large time-indexed table (states) partitioned into several
tables based on the date. The smaller tables are clustered by their
time indices. The main table is empty.
I need to select some data in the time order. When I query a separate
smaller table, the index is used and no
Hi All,
I have a question about sequences in combination with streaming
replication.
It seems something strange is happening with sequences which are
streamed to the slave.
When updating the sequence, the last_value on the slave shifts by 32
and halts at this value until the master has increased
On Wed, 2 Feb 2011, Sergey Konoplev wrote:
No it wont overwrite, it will append rows.
Thank you, Sergey.
Rich
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On Wed, 2 Feb 2011, Greg Williamson wrote:
It will not overwrite any existing data; if the table has constraints that
will prevent duplicates then the entire load will fail if any item fails
(it is a single transaction).
If the table doesn't have such constraints then duplicate data in the
You also don't have to import the source files directly into the live table.
Instead you could create a staging table that has no constraints where you
can import everything, do some review and updates, then merge that table
over to the live one. Depending on how many files you are dealing with
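The staging-table approach described above might look like this (table, file, and column names are illustrative; the duplicate check assumes an id key):

```sql
-- Load into an unconstrained copy of the live table first:
CREATE TABLE staging (LIKE live_table INCLUDING DEFAULTS);
\copy staging FROM 'new_rows.csv' WITH CSV
-- Review and fix the data here, then merge, skipping rows already present:
INSERT INTO live_table
SELECT s.* FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM live_table l WHERE l.id = s.id);
DROP TABLE staging;
```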
Sergey Zaharchenko doublef.mob...@gmail.com writes:
I need to select some data in the time order. When I query a separate
smaller table, the index is used and no sorting is needed. However,
when I query the main table, it occurs:
...
- Sort ...
Sort Key: ...
Sort Method:
The issue has been addressed and a patch has been submitted. Refer to the Npgsql
mailing thread
http://lists.pgfoundry.org/pipermail/npgsql-devel/2011-February/001116.html
Hallo,
goal:
to write the input and query results to different files in 1 script
context example:
1 sql script demo.sql
--init messaging
\set client_min_messages warning
\set log_error_verbosity terse
--user1
SET SESSION AUTHORIZATION user1;
\pset format html
\o report/test_user1.html
\i
Wouter D'Haeseleer w...@vasco.com writes:
I have a question about sequences in combination with streaming
replication.
It seems something strange is happening with sequences which are
streamed to the slave.
When updating the sequence, the last_value on the slave shifts by 32
and halts at
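For what it's worth, the jump of 32 matches the way nextval() pre-logs sequence values in WAL (32 at a time, to avoid one WAL record per call), so a standby last_value running ahead by 32 is expected behavior rather than corruption. A sketch of what one might observe (sequence name made up):

```sql
-- On the master:
CREATE SEQUENCE demo_seq;
SELECT nextval('demo_seq');   -- returns 1
-- On the standby, the replayed WAL already covers the 32 pre-logged
-- values, so:  SELECT last_value FROM demo_seq;  can show 33 until the
-- master consumes past that point and logs the next batch.
```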
OKay Tom,
Thanks for this clarification
Florian Weimer schrieb:
Please guide me on how to change it permanently and on what the correct
value for it is.
I am going for 8 GB.
Usually, you can put these lines
kernel.shmall = 90
kernel.shmmax = 90
into /etc/sysctl.conf. Run sysctl -p to activate them. However,
this is
Hi:
I'm going to be experimenting with streaming replication using v9.0.1. Here
are a few questions I have at the onset...
1) Is it possible to replicate one database out of many that exist in the DB
instance? Or do you have to replicate them all?
2) Is replication transaction based? That is
Thank you very much for your patch!
I'm going to review and apply it.
As soon as it is done, I'll let you know.
On Wed, Feb 2, 2011 at 12:52, Ahmed ahmed.shinw...@gmail.com wrote:
The issue has been addressed and a patch has been submitted. Refer to the Npgsql
mailing thread
On Wed, Feb 02, 2011 at 08:52:02AM -0700, Gauthier, Dave wrote:
I'm going to be experimenting with streaming replication using v9.0.1. Here
are a few questions I have at the onset...
why not 9.0.2?
5) Is there a step-by-step how-to document for this?
On 02/02/2011 07:52 AM, Gauthier, Dave wrote:
Hi:
I'm going to be experimenting with streaming replication using v9.0.1.
Here are a few questions I have at the onset...
1) Is it possible to replicate one database out of many that exist in
the DB instance? Or do you have to replicate them all?
Hello,
I am importing gigabytes of data into PostgreSQL, and I don't want to wait
10 minutes just to discover an error in the 10th line of my input file.
I tried the command \copy ... from stdio in psql and it looks like psql
has to read the entire input before returning a potential error, even
atsaloli.t...@gmail.com (Aleksey Tsalolikhin) writes:
Situation: Disk usage on production server root filesystem is at 68%
utilization (80 GB used), on DR is at 51% (56 GB used). We use
SlonyII-1.2.x to keep the DR up to date. I would like to account for
the 24 GB difference.
It's more
peter.geoghega...@gmail.com (Peter Geoghegan) writes:
On 1 February 2011 03:52, Scott Marlowe scott.marl...@gmail.com wrote:
You can reclaim that space by doing a cluster or vacuum full on the
subject table.
Yes, but this is a fairly bad idea, particularly prior to PG 9.0. 9.0
has a new
I was sitting down thinking the other day about when is it good to
generate a new database or just use an existing one. For example, let's
say my company name is called 'databasedummy.org' and I have a
database called 'dbdummy'. Now I need PostgreSQL to manage several
applications for my company:
On 02/02/11 10:20 AM, Nicolas Grilly wrote:
Is the copy protocol (aka PQputCopyData and PQputCopyEnd) designed to
send gigabytes of data with just one copy ... from stdio query, and
is there a way to be notified of a potential error before calling
PQputCopyEnd? Or do I have to send my data in
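One workaround in the spirit of sending the data in chunks: split the input file and load it piece by piece, so an error surfaces after at most one chunk rather than at the very end. A sketch with toy data (table and file names are made up; the actual load command is only indicated in a comment):

```shell
# Create a small sample input, then split it into 3-line chunks.
printf 'a,1\nb,2\nc,3\nd,4\ne,5\n' > big_input.csv
split -l 3 big_input.csv chunk_
ls chunk_*
# Each chunk would then be loaded separately, e.g.:
#   psql -c "\copy mytable FROM 'chunk_aa' WITH CSV" || echo "error in chunk_aa"
```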
On 02/02/11 10:32 AM, Carlos Mennens wrote:
I was sitting down thinking the other day about when is it good to
generate a new database or just use an existing one. For example, let's
say my company name is called 'databasedummy.org' and I have a
database called 'dbdummy'. Now I need PostgreSQL to
2011/2/2, Tom Lane t...@sss.pgh.pa.us:
I see the database doesn't understand that there are no entries in the
main table, so it has to assume the Append data is not ordered. Is
there a way to avoid sorting?
No. In existing releases there is no plan type that can produce
presorted output
The main concern to consider is whether there are any shared relationships
that the different projects all have (e.g., common logon users). Since you
cannot query across different databases; if there is shared information, then
a single database would be preferred. I think the concept you want to
On Wed, 2011-02-02 at 11:08 -0800, John R Pierce wrote:
On 02/02/11 10:32 AM, Carlos Mennens wrote:
I would create a separate database for each thing that has nothing to do
with the other things. I doubt mediawiki and phpbb will ever share
any data; they are totally different
On Wed, Feb 2, 2011 at 16:52, Gauthier, Dave dave.gauth...@intel.com wrote:
Hi:
I'm going to be experimenting with streaming replication using v9.0.1. Here
are a few questions I have at the onset...
You should use 9.0.3
1) Is it possible to replicate one database out of many that exist
On 02/02/11 11:24 AM, Joshua D. Drake wrote:
Forget separate databases. Use separate users with schemas.
for canned applications like mediawiki and phpbb? not sure they
support that.
Forget separate databases. Use separate users with schemas.
for canned applications like mediawiki and phpbb? not sure they support
that.
Mediawiki does -- I'm doing just that. It's been liberating learning how
PostgreSQL deals with schemas (and applying that knowledge).
-- Gary Chambers
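A minimal sketch of the one-database, schema-per-application layout being discussed (role, schema, and password are illustrative placeholders):

```sql
CREATE ROLE wiki_user LOGIN PASSWORD 'secret';
CREATE SCHEMA mediawiki AUTHORIZATION wiki_user;
-- Make the application's schema the default for its role:
ALTER ROLE wiki_user SET search_path = mediawiki;
```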
On Wed, Feb 2, 2011 at 10:31 AM, Torsten Zühlsdorff
f...@meisterderspiele.de wrote:
kernel.shmall = 90
kernel.shmmax = 90
into /etc/sysctl.conf. Run sysctl -p to activate them. However,
this is a bit distribution-specific.
If you're using FreeBSD you even have to restart
I've got a large (and growing) database set up as a partitioned
database. The partitions are physically broken out by state plus a
unique id for each. There's roughly 20 million records in the whole
thing just now. My question is, why does a simple query supplying both
parts of the index key
carlos.menn...@gmail.com (Carlos Mennens) writes:
I was sitting down thinking the other day about when is it good to
generate a new database or just use an existing one. For example, let's
say my company name is called 'databasedummy.org' and I have a
database called 'dbdummy'. Now I need
Hi,
Here is the best kludge so far. To load the module 'tablefunc' from the
contrib/ directory, process the output of the 'pg_config' program with unix
commands. The 'pg_config' program is often distributed in a package
separate from postgresql.
\set tablefunc `pg_config|grep SHAREDIR|sed
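The grep/sed pipeline can probably be avoided, since pg_config accepts --sharedir directly. A sketch of the same kludge in that form (worth verifying that your psql version interpolates the variable in \i this way):

```sql
\set sharedir `pg_config --sharedir`
\i :sharedir/contrib/tablefunc.sql
```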
On February 26th and 27th (Saturday and Sunday), PostgreSQL is assigned an
exhibitor booth. Between the hours of 10:00 am and 6:00 pm, we need booth
coverage.
The call is going out for booth attendees. This will be an excellent
opportunity to meet PostgreSQL community leaders as well as
On Wed, Feb 02, 2011 at 11:44:51AM -0800, John R Pierce wrote:
On 02/02/11 11:24 AM, Joshua D. Drake wrote:
Forget separate databases. Use separate users with schemas.
for canned applications like mediawiki and phpbb? not sure they
support that.
If they use different users you can
Wim Bertels wrote:
--user2
SET SESSION AUTHORIZATION user2;
\pset format latex
\echo ECHO queries
\o report/test_user2.tex
\i structure/test_user2.sql
This doesn't seem to work,
as the ECHO queries output isn't written to the file (test_user2.tex)
Actions are performed as they are
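If the goal is to get the echoed text into the \o file, \qecho may help: unlike \echo, which always writes to standard output, \qecho writes to the current query output channel as set by \o. A sketch using the quoted script's own file names:

```sql
\pset format latex
\o report/test_user2.tex
\qecho queries
\i structure/test_user2.sql
\o
```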
Hi pgsql-general group,
I was wondering if there is more information about this switch in the
configuration. Does it really work? Where in the source code can I
follow how it works? smgr.c seems to be an entry point, but where
exactly is it used?
Greets, Yves
On 02/02/2011 12:17 PM, Bill Thoen wrote:
I've got a large (and growing) database set up as a partitioned
database
What is the setting of constraint_exclusion?
http://www.postgresql.org/docs/9.0/static/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION
Cheers,
Steve
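For partition pruning to kick in, constraint_exclusion needs to be 'partition' (the default since 8.4) or 'on'. A quick check:

```sql
SHOW constraint_exclusion;
-- 'partition' applies CHECK-constraint pruning to inheritance children;
-- set it for the session if it has been turned off:
SET constraint_exclusion = partition;
```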
On 02/02/2011 01:35 PM, Bill Thoen wrote:
Steve Crawford wrote:
On 02/02/2011 12:17 PM, Bill Thoen wrote:
I've got a large (and growing) database set up as a partitioned
database
What is the setting of constraint_exclusion?
/**
How do I stop PQfinish from blocking?
(try any LAN IP address that doesn't exist on your LAN.)
I compiled it with both VC and MinGW, same result.
(change C:\Program Files (x86) to C:\Program Files for 32bit Windows OS)
Command Line Compile in VC 32bit:
cl
Hello,
I just received an error message:
PGError: ERROR: duplicate key value violates unique constraint
chu_user_id_chat_room_id
DETAIL: Key (user_id, chat_room_id)=(8, 2) already exists.
CONTEXT: SQL statement insert into chat_room_users (user_id, chat_room_id,
active_at)
Thank you for the discussion.
I'm on Postgres 8.4, and the hardware between Slony master and slave
is identical,
as is the autovacuum config.
We do have transactions that fail to commit, transactions that roll back.
I'm glad to have some idea of the cause of the difference in table size
between
On Wed, Feb 2, 2011 at 10:45 AM, Chris Browne cbbro...@acm.org wrote:
peter.geoghega...@gmail.com (Peter Geoghegan) writes:
I'm not sure why you'd advocate CLUSTER as a way to reclaim disk space.
Because it works pretty well; it reorganizes the table on the basis of
the order indicated by one
On 2 February 2011 05:41, Scott Marlowe scott.marl...@gmail.com wrote:
I wouldn't increase index fill factor as an optimisation, unless you
had the unusual situation of having very static data in the table.
That makes no sense whatsoever. You decrease fill factor (not
increase btw) so there
Hmm ... It would appear that it is actually WSACleanup() that is taking
forever. I added a WSAStartup() and a WSACleanup(), and it hung for a while
on WSACleanup() instead of PQfinish() :)
-Original Message-
From: pgsql-general-ow...@postgresql.org
On Wed, Feb 2, 2011 at 8:49 PM, Peter Geoghegan
peter.geoghega...@gmail.com wrote:
On 2 February 2011 05:41, Scott Marlowe scott.marl...@gmail.com wrote:
I wouldn't increase index fill factor as an optimisation, unless you
had the unusual situation of having very static data in the table.
I am looking for a good book on the math and/or theory behind relational
databases and associated topics. I am looking for works on set theory,
algebra, or any other books/papers on the mechanics that databases are built
on. I found one book online,
How do I know which version to upgrade to from 8.1.4?
Regards,
William Bruton
Data Retrieval Corporation
13231 Champion Forest Dr Suite 401
Houston Tx 77069
Tel: 281 444-5398
Fax: 281 444-5397
24 Hrs: 832 752-0074
http://www.spidr.com/
d...@spidr.com
On Wed, Feb 2, 2011 at 12:08 PM, William Bruton d...@spidr.com wrote:
How do I know which version to upgrade to from 8.1.4?
You should first update to 8.1.latest so you've got all the bug fixes
available. It's pretty close to painless, and unless the release
notes between 8.1.5 and 8.1.latest
On 3 Feb 2011, at 2:17, Mage wrote:
The trigger looks like:
create or replace function trf_chat_room_users_insert() returns trigger as $$
begin
if NEW.active_at is null then
insert into chat_room_users (user_id, chat_room_id, active_at)
(select NEW.user_id,