Alvaro Herrera wrote:
However I have no idea which tables the autovacuum daemon is
processing, because there are no autovacuum info columns in
pg_stat_all_tables (as there are in 8.2.x).
For that, you need to change log_min_messages to debug2.
Keep track of the PID of
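The change Alvaro suggests is a one-line postgresql.conf edit (a sketch for 8.1; the exact message wording varies by version):

```
# postgresql.conf (8.1): make autovacuum's per-table activity visible
log_min_messages = debug2
```

After editing, reload the server (e.g. pg_ctl reload) and watch the log for the "autovacuum: processing database" lines and the per-table DEBUG output that follows them.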
Denis Gasparin wrote:
Another question/idea: why not make the messages about what tables got
vacuumed by the autovacuum daemon normal log messages (instead of
debug2)?
We did that for 8.3, actually.
I think it could be useful because this way you can also know which
tables are used more
How is it possible to check whether autovacuum is running in 8.1.x?
SHOW autovacuum gives me on, and I also see evidence in the logs where
autovacuum writes LOG: autovacuum: processing database .
However I have no idea which tables the autovacuum daemon is
processing, because there aren't
How is it possible to check whether autovacuum is running in 8.1.x?
SHOW autovacuum gives me on, and I also see evidence in the logs
where autovacuum writes LOG: autovacuum: processing database .
However I have no idea which tables the autovacuum daemon is
processing, because there aren't
Denis Gasparin wrote:
How is it possible to check whether autovacuum is running in 8.1.x?
SHOW autovacuum gives me on, and I also see evidence in the logs
where autovacuum writes LOG: autovacuum: processing database .
Then it is running.
However I have no idea which tables the autovacuum
Running 8.2.4.
The following is in my postgresql.conf:
# - Query/Index Statistics Collector -
#stats_command_string = on
update_process_title = on
stats_start_collector = on # needed for block or row stats
# (change requires restart)
Karl Denninger wrote:
A manual Vacuum full analyze fixes it immediately.
But... shouldn't autovacuum prevent this? Is there some way to look in a
log somewhere and see if and when the autovacuum is being run - and on
what?
Are your FSM settings enough to keep track of the dead space you
I don't know. How do I check?
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Alvaro Herrera wrote:
Karl Denninger wrote:
A manual Vacuum full analyze fixes it immediately.
But... shouldn't autovacuum prevent this? Is there some way to look in a
log somewhere and see
Karl Denninger [EMAIL PROTECTED] writes:
But... shouldn't autovacuum prevent this? Is there some way to look in
a log somewhere and see if and when the autovacuum is being run - and on
what?
There's no log messages (at the default log verbosity anyway). But you
could look into the pg_stat
Karl Denninger wrote:
Are your FSM settings enough to keep track of the dead space you have?
I don't know. How do I check?
vacuum verbose;
Toward the bottom you will see something like:
...
1200 page slots are required to track all free space.
Current limits are: 453600 page slots, 1000
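The check Steve describes can be reproduced like this (a sketch for 8.x; the max_fsm_* settings were removed in 8.4):

```sql
-- Run a database-wide vacuum and read the summary near the bottom:
VACUUM VERBOSE;
-- ... NNNN page slots are required to track all free space.
-- Current limits are: ...

-- Compare the required figure against the configured limits:
SHOW max_fsm_pages;
SHOW max_fsm_relations;
```

If the required page slots exceed max_fsm_pages, dead space leaks and tables bloat until a VACUUM FULL.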
Tom Lane wrote:
Karl Denninger [EMAIL PROTECTED] writes:
But... shouldn't autovacuum prevent this? Is there some way to look in
a log somewhere and see if and when the autovacuum is being run - and on
what?
There's no log messages (at the default log verbosity anyway). But you
Steve Crawford wrote:
Karl Denninger wrote:
Are your FSM settings enough to keep track of the dead space you have?
I don't know. How do I check?
vacuum verbose;
Toward the bottom you will see something like:
...
1200 page slots are required to track all free space.
Current
On 8/28/07, Karl Denninger [EMAIL PROTECTED] wrote:
Am I correct in that this number will GROW over time? Or is what I see
right now (with everything running ok) all that the system
will ever need?
They will grow at first to accommodate your typical load of dead tuples
created between
Scott Marlowe wrote:
On 8/28/07, Karl Denninger [EMAIL PROTECTED] wrote:
Am I correct in that this number will GROW over time? Or is what I see
right now (with everything running ok) all that the system
will ever need?
They will grow at first to accommodate your typical load of
Scott Marlowe [EMAIL PROTECTED] writes:
So it's a good idea to allocate 20 to 50% more than what vacuum
verbose says you'll need for overhead. also keep in mind that vacuum
verbose only tells you what the one db in the server needs.
No, that's not true --- the numbers it prints are
Rishi Daryanani wrote:
I'm having problems with a query that's just
stalling my database. If someone could help me out -
I posted a forum topic on
http://forums.devshed.com/postgresql-help-21/postgresql-new-index-pros-and-cons-467120.html
Did you get any advice from that forum? Was it
Rishi,
I looked up that thread
1st) P.S.: I am using PostgreSQL 7.4.17
Any reason for that? The current version is 8.2.4, or at least 8.1.9
2nd) your query is:
SELECT DISTINCT c.*
FROM customer c
LEFT OUTER JOIN weborders w
ON c.username =
Hi all,
I'm having problems with a query that's just
stalling my database. If someone could help me out -
I posted a forum topic on
http://forums.devshed.com/postgresql-help-21/postgresql-new-index-pros-and-cons-467120.html
There's just this one integer field, which when
searched on, stalls my
NetComrade wrote:
I apologize for cross-posting, but I need some help without too many
'RTFM' replies :). After Oracle and MySQL, this becomes the third
product that I need to learn to some degree, and I need a few links
which would provide a 'quick tutorial' especially for folks with
Oracle
Moving to -general.
On Jul 26, 2007, at 12:51 PM, NetComrade wrote:
I apologize for cross-posting, but I need some help without too many
'RTFM' replies :). After Oracle and MySQL, this becomes the third
product that I need to learn to some degree, and I need a few links
which would provide a 'quick
I apologize for cross-posting, but I need some help without too many
'RTFM' replies :). After Oracle and MySQL, this becomes the third
product that I need to learn to some degree, and I need a few links
which would provide a 'quick tutorial' especially for folks with
Oracle background like myself. Last
On 26 Jul, 18:51, NetComrade [EMAIL PROTECTED] wrote:
I apologize for cross-posting, but I need some help without too many
'RTFM' replies :). After Oracle and MySQL, this becomes the third
[snip]
We run Oracle 9iR2,10gR1/2 on RH4/RH3 and Solaris 10 (Sparc)
remove NSPAM to email
Contact me offline.
NetComrade wrote:
I apologize for cross-posting, but I need some help without too many
'RTFM' replies :). After Oracle and MySQL, this becomes the third
product that I need to learn to some degree, and I need a few links
which would provide a 'quick tutorial' especially for folks with
Oracle
Hello:
I am running the following query:
SELECT COUNT(*) FROM orders WHERE o_orderdate < date('1995-03-15');
Here are some stats for the orders relation:
select relname, relpages, reltuples from pg_class where relname = 'orders';
orders;29278;1.49935e+06
For my query above, the reduction factor
[EMAIL PROTECTED] writes:
I am running it three ways: sequential scan, bitmap index scan, and index scan.
The I/O cost for the index scan is 24+ times more than the other two. I do
not
understand why this happens. If I am using a clustered index, it is my
understanding that there should be no
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
I am running it three ways: sequential scan, bitmap index scan, and index scan.
The I/O cost for the index scan is 24+ times more than the other two. I do
not
understand why this happens. If I am using a clustered index, it is my
understanding
Hello,
I have a question regarding Postgres 8.2.
I am trying to set the datestyle in postgresql.conf permanently to European, but this does not seem to work. This is what I did; I hope you can help me.
This is in the postgresql.conf.
datestyle = 'SQL,DMY'
but when I restart the server and
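Two things commonly defeat this setting: the line in postgresql.conf is still commented out, or a client-side override (the PGDATESTYLE environment variable, or a per-session SET) wins over the server default. A quick way to see what is actually in effect (a sketch):

```sql
-- What is the server actually using right now?
SHOW datestyle;

-- Override just for this session to confirm the desired behavior:
SET datestyle = 'SQL,DMY';
SELECT '15/08/2007'::date;  -- should now parse day-first
```

If SHOW reports the expected value only after SET, look for a client-side override rather than a postgresql.conf problem.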
Hi everyone,
my name is Paolo Bizzarri and I am a developer of PAFlow, a document
tracking and management system for public administrations.
We use Postgres as a backend, and we are experiencing some corruption
problems with OpenOffice files.
As our application is rather complex (it includes
On 5/30/07, Matthew T. O'Connor [EMAIL PROTECTED] wrote:
Paolo Bizzarri wrote:
my name is Paolo Bizzarri and I am a developer of PAFlow, a document
tracking and management system for public administrations.
We use Postgres as a backend, and we are experiencing some corruption
problems on
Paolo Bizzarri wrote:
my name is Paolo Bizzarri and I am a developer of PAFlow, a document
tracking and management system for public administrations.
We use Postgres as a backend, and we are experiencing some corruption
problems with OpenOffice files.
As our application is rather complex (it
On 4/17/07, Terry Martin [EMAIL PROTECTED] wrote:
I am using redhat and RTOS.
The packets will be coming in on a 1 Gbps port, and the information is
streaming UDP packets coming in.
I need it real time.
With RH ulogd should be possible; RTOS I don't know at all.
And it will need a grunty
On 13/04/07, Andrej Ricnik-Bay [EMAIL PROTECTED] wrote:
On 4/13/07, Terry Martin [EMAIL PROTECTED] wrote:
I would like to know if there is a utility to take a UDP packet which
has specific information in the payload and extract the information
from the packet and place it in the Postgres
I would like to know if there is a utility to take a UDP packet which
has specific information in the payload and extract the information from
the packet and place it in the Postgres data base?
Terry Martin
Timedata Corporation
VP of Network Operations
Work: (212) 644-1600 X3
Cell:
On 4/13/07, Terry Martin [EMAIL PROTECTED] wrote:
I would like to know if there is a utility to take a UDP packet which
has specific information in the payload and extract the information
from the packet and place it in the Postgres data base?
Which OS (in Linux ulogd and/or tcpdump spring to
Here results of what I've done.
Just a note all this was done on 7.4.16:
First of all problems:
1. cannot complete configure on flash drive
./configure --prefix=/media/sda1/app/psql/postgresql-7.4.16/bin
--without-readline
...
configure: creating ./config.status
config.status: creating
On Sat, 2007-03-24 at 11:59 +, Raymond O'Donnell wrote:
Not to mention the danger of losing the confounded thing :)
Or having what happened to me... my emergency crash recovery data (pgp
keys, Lotus Notes ID, stuff like that) on a USB drive got chewed up by
the dog. Thankfully I didn't
On 29/03/2007 20:18, Peter L. Berghold wrote:
Or having what happened to me... my emergency crash recovery data (pgp
keys, Lotus Notes ID, stuff like that) on a USB drive got chewed up by
the dog. Thankfully I didn't actually need the thing before I could
Heh heh heh. Dogs are merely
Ideally, I'd like to have all PostgreSQL-related files on a flash
drive, and the OS on another device (CD). The database data will be small, and
I don't think I will run into a problem with space.
But how can I install postgresql (including all libraries) on flash
drive? I remember that 7.4.X rpms
On 3/28/07, Mark [EMAIL PROTECTED] wrote:
Ideally, I'd like to have all PostgreSQL-related files on a flash
drive, and the OS on another device (CD). The database data will be small, and
I don't think I will run into a problem with space.
But how can I install postgresql (including all libraries) on
On 24/03/2007 03:24, Stephen Liu wrote:
The advantage of a thumb drive is that it is convenient to carry. Its
disadvantage is its limited lifetime. I don't know how big will be your
Not to mention the danger of losing the confounded thing :). I've
had a scare or two in the past with these
I would like to use PostgreSQL with Knoppix. Sounds like a simple
idea :-) and I would like to get a full version of PostgreSQL stored on a
flash drive.
I remember I've seen PostgreSQL tar files before, but do not recall
the location - can anybody point me to it?
Also, how big (in MB) is PostgreSQL after
Mark wrote:
I would like to use PostgreSQL with Knoppix. Sounds like a simple
idea :-) and I would like to get a full version of PostgreSQL stored on a
flash drive.
I remember I've seen PostgreSQL tar files before, but do not recall
the location - can anybody point me to it?
Also, how big (in MB) is PostgreSQL
An SQLite database is a much better choice for a flash drive,
from my point of view.
--- James Neff [EMAIL PROTECTED] wrote:
Mark wrote:
I would like to use PostgreSQL with Knoppix. Sounds like a simple
idea :-) and I would like to get a full version of PostgreSQL stored on a
flash drive.
I
Since it's going to be a development environment I don't need it
fast.
So, I would still prefer to go ahead with USB drive.
Mark
--- James Neff [EMAIL PROTECTED] wrote:
Mark wrote:
I would like to use PostgreSQL with Knoppix. Sounds like a simple
idea :-) and I would like to get a full
On 3/23/07, Mark [EMAIL PROTECTED] wrote:
I would like to use PostgreSQL with Knoppix. Sounds like a simple
idea :-) and I would like to get a full version of PostgreSQL stored on a
flash drive.
I remember I've seen PostgreSQL tar files before, but do not recall
the location - can anybody point me to it?
Hi Mark,
Since it's going to be a development environment I don't need it
fast.
So, I would still prefer to go ahead with USB drive.
I like your idea. Last year I tested installing a complete Linux OS on a
USB (thumb) drive of 1 GB size. It worked, booting directly on a PC.
It has a
Hi,
I created the following function:
-- Function: immense.sp_a_001(username varchar, pwd varchar)
-- DROP FUNCTION immense.sp_a_001(username varchar, pwd varchar);
CREATE OR REPLACE FUNCTION immense.sp_a_001(username varchar, pwd
varchar)
RETURNS int4 AS
$BODY$
DECLARE
myrec
On 11/03/07, Alain Roger [EMAIL PROTECTED] wrote:
Hi,
I created the following function:
-- Function: immense.sp_a_001(username varchar, pwd varchar)
-- DROP FUNCTION immense.sp_a_001(username varchar, pwd varchar);
CREATE OR REPLACE FUNCTION immense.sp_a_001(username varchar, pwd
varchar)
Alain Roger [EMAIL PROTECTED] wrote:
Hi,
I created the following function:
-- Function: immense.sp_a_001(username varchar, pwd varchar)
-- DROP FUNCTION immense.sp_a_001(username varchar, pwd varchar);
CREATE OR REPLACE FUNCTION immense.sp_a_001(username varchar, pwd
varchar)
On Wed, 2007-01-10 at 17:38 -0800, Mike Poe wrote:
I'm a rank newbie to Postgres and am having a hard time getting my arms
around this.
I'm trying to construct a query to be run in a PHP script. I have an
HTML form where someone can enter either a last name or a social
security number then
I'm a rank newbie to Postgres and am having a hard time getting my arms
around this.
I'm trying to construct a query to be run in a PHP script. I have an
HTML form where someone can enter either a last name or a social
security number then query the database based on what they entered.
My query
Mike Poe wrote:
SELECT foo, baz, bar FROM public.table WHERE lastname ~*
'$lastname' OR ssn='$ssn'
I need to leave the last name a wildcard in case someone enters a
partial name, lower case / upper case, etc.
I want the SSN to match exactly if they search by that.
The way it's written, if
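One common way to keep an empty form field from matching everything is to neutralize the unused branch of the OR; a sketch reusing the thread's hypothetical columns and PHP-interpolated values:

```sql
-- If $ssn is empty, NULLIF turns it into NULL, so the equality branch
-- can never be true; an empty $lastname is skipped the same way.
SELECT foo, baz, bar
FROM public.table
WHERE (NULLIF('$lastname', '') IS NOT NULL AND lastname ~* '$lastname')
   OR ssn = NULLIF('$ssn', '');
```

Note that interpolating form input directly into SQL invites injection; a parameterized query (e.g. PHP's pg_query_params) is the safer pattern.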
To: pgsql-general@postgresql.org
Cc:
Subject: [GENERAL] Question - Query based on WHERE OR
I'm a rank newbie to Postgres and am having a hard time getting my arms
around this.
I'm trying to construct a query to be run in a PHP script. I have an
HTML form where someone can enter either a last name
Richard Broersma Jr wrote:
Is it possible to configure PostgreSQL so that a LIKE 'a' query
will match a 'á' value, ie, make it accent-insensitive ?
I forgot this was possible using regular expressions. I don't think it is
possible using the LIKE
syntax.
What a pity, I've found a point
Is it possible to configure PostgreSQL so that a LIKE 'a' query
will match a 'á' value, ie, make it accent-insensitive ?
Maybe something like this can help you:
test= select to_ascii(convert('tête-à-tête français', 'LATIN9'),'LATIN9');
to_ascii
--
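Applied to a table search, the to_ascii trick above might look like this (a sketch: the table and column names are made up, and to_ascii only works with a few single-byte encodings such as LATIN1/LATIN9):

```sql
-- Strip accents from both the column and the pattern, then compare
-- case-insensitively with ILIKE.
SELECT *
FROM people
WHERE to_ascii(convert(name, 'LATIN9'), 'LATIN9')
      ILIKE to_ascii(convert('tete%', 'LATIN9'), 'LATIN9');
```

A functional index on the to_ascii(...) expression would let such searches use an index scan.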
Is it possible to configure PostgreSQL so that a LIKE 'a' query
will match a 'á' value, ie, make it accent-insensitive ?
Thanks in advance,
Daniel Serodio
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
Is it possible to configure PostgreSQL so that a LIKE 'a' query
will match a 'á' value, ie, make it accent-insensitive ?
I forgot this was possible using regular expressions. I don't think it is
possible using the LIKE
syntax. If you use something like:
select * from yourtable
where
On Nov 23, 2006, at 8:37 AM, Sefer Tov wrote:
oddities. Clearly the caching algorithm favors caching the indices
to data (since they are more frequently accessed) but there is
another case where *recently written* entries are often requested
shortly after and I am not sure that they get
Hi,
I'm running a very large and frequently updated database on a machine with
relatively limited memory (you can safely assume that the database disk usage
to available memory has a ratio of 10:1 - so clearly not all the pages can be
retained in memory).
The naive approach would presume
Hello!
I have two tables: component, with unchanging component data, and a
component_history table containing the history of some other values that can
change in time.
The table component_history holds a foreign key to the component_id column
in the component table. The table component_history has a
On 11/15/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Is there any other, and more performant, way to get the last history entry for a given date than this query?
Create an (independent) index on the history_timestamp column and use a min/max in the subquery. More specifically, your query should look
: Wednesday, November 15, 2006 4:18 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Question about query optimization
On 11/15/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Is there any other, and more performant, way to get the last history entry for
On 11/15/06, Gurjeet Singh [EMAIL PROTECTED] wrote:
On 11/15/06, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
Is there any other, and more performant, way to get the last history entry for a given date than this query?
Create an (independent) index on the history_timestamp column and use a min/max in
On 11/15/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Hello Gurjeet!
Tried your suggestion, but this is just a marginal improvement.
Our query needs 126 ms; your query, 110 ms.
I do not see an index access on the component table. Do you have an index on component.component_id?
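A sketch combining both suggestions from the thread, with column names (component_id, history_timestamp) assumed from the discussion:

```sql
-- Index to support looking up the newest history row per component
CREATE INDEX component_history_ts_idx
    ON component_history (component_id, history_timestamp);

-- PostgreSQL's DISTINCT ON returns the last history entry per component
-- up to a given date in a single pass:
SELECT DISTINCT ON (component_id) *
FROM component_history
WHERE history_timestamp <= '2006-11-15'
ORDER BY component_id, history_timestamp DESC;
```

The ORDER BY must start with the DISTINCT ON expression; the DESC on the timestamp makes the first (kept) row per component the newest one.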
Wei Weng wrote:
I have a database table that has about 90k entries; they are all
straightforward text, and there is only one ID field that I use as
primary key for this table.
I have two threads working on this table. One of them inserting new
content constantly, (about every second) another one
Richard Huxton wrote:
Wei Weng wrote:
I have a database table that has about 90k entries; they are all
straightforward text, and there is only one ID field that I use as
primary key for this table.
I have two threads working on this table. One of them inserting new
content constantly, (about
I have a database table that has about 90k entries; they are all
straightforward text, and there is only one ID field that I use as
primary key for this table.
I have two threads working on this table. One of them inserting new
content constantly, (about every second) another one idles and only
Hello
I'm a newbie to PostgreSQL, coming from MySQL, and just trying to learn tsearch2. In
one of the examples at:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/tsearch-V2-intro.html
the query given is:
SELECT intindex, strTopic FROM tblmessages
WHERE idxfti @@
' ability to define
one's own data types and functions.
HTH,
Greg Williamson
DBA
GlobeXplorer LLC
-Original Message-
From: [EMAIL PROTECTED] on behalf of Ritesh Nadhani
Sent: Thu 10/19/2006 11:38 AM
To: pgsql-general@postgresql.org
Cc:
Subject: [GENERAL] Question
Jonathan Vanasco wrote:
I made a HUGE mistake, and used 'UK' as the abbreviation for the
United Kingdom (the ISO abbreviation is 'GB')
I've got a database where 8 tables have an FKEY on a table
'location_country' , using the text 'uk' as the value -- so i've got
9 tables that I need to swap
I am trying to connect to machine A (192.168.1.155) from a different
machine B (192.168.1.180), with password transmitted as a MD5 string.
I have the following lines in my pg_hba.conf file.
host    all             all             192.168.1.180   255.255.255.1   md5
I created a database user
On Thu, 2006-10-12 at 15:38 -0400, Wei Weng wrote:
I am trying to connect to machine A (192.168.1.155) from a different
machine B (192.168.1.180), with password transmitted as a MD5 string.
I have the following lines in my pg_hba.conf file.
host    all             all             192.168.1.180
Wei Weng [EMAIL PROTECTED] writes:
I have the following lines in my pg_hba.conf file.
host    all             all             192.168.1.180   255.255.255.1   md5
Not relevant to your immediate problem, but: you almost certainly
want 255.255.255.255 as the netmask here.
psql -h 192.168.1.155 -U
On Thu, 2006-10-12 at 15:50 -0400, Tom Lane wrote:
Wei Weng [EMAIL PROTECTED] writes:
I have the following lines in my pg_hba.conf file.
host    all             all             192.168.1.180   255.255.255.1   md5
Not relevant to your immediate problem, but: you almost certainly
want
I think I have found out something suspicious.
I used tcpdump to monitor the traffic to and from port 5432, and it
seems that the password the client on A sends out to the postmaster on B
is
md54570471eccef21ae3c6e43033d8d2f66
While the MD5-ed password stored in system catalog (pg_shadow) is
Wei Weng [EMAIL PROTECTED] writes:
(As you can see, all 3 strings are different)
Why the difference? Is there something missing ??
Well, the password is actually supposed to be 'md5'||md5(passwd||user),
thus:
regression=# select md5('test_passwd' || 'test_user');
md5
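Completing Tom's recipe: the stored value is the literal prefix 'md5' concatenated with md5(password || username), so the comparison can be done directly in SQL (a sketch; pg_shadow is readable only by superusers):

```sql
-- What pg_shadow should contain for user test_user / password test_passwd:
SELECT 'md5' || md5('test_passwd' || 'test_user') AS computed;

-- Compare against the stored value:
SELECT usename, passwd FROM pg_shadow WHERE usename = 'test_user';
```

If the two strings differ, the password was set against a different username or entered differently than expected.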
I made a HUGE mistake, and used 'UK' as the abbreviation for the
United Kingdom (the ISO abbreviation is 'GB')
I've got a database where 8 tables have an FKEY on a table
'location_country' , using the text 'uk' as the value -- so i've got
9 tables that I need to swap data out on
can anyone
I made a HUGE mistake, and used 'UK' as the abbreviation for the
United Kingdom (the ISO abbreviation is 'GB')
I've got a database where 8 tables have an FKEY on a table
'location_country' , using the text 'uk' as the value -- so i've got
9 tables that I need to swap data out on
can
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 10/11/06 16:10, Richard Broersma Jr wrote:
I made a HUGE mistake, and used 'UK' as the abbreviation for the
United Kingdom (the ISO abbreviation is 'GB')
I've got a database where 8 tables have an FKEY on a table
'location_country' , using the
can anyone suggest a non-nightmarish way for me to do this ?
If your tables are set up with ON UPDATE CASCADE then you are fine.
Just update the main table and PostgreSQL will take care of the rest.
It doesn't appear that ALTER TABLE can change constraint characteristics.
You'd have to
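The drop/recreate approach discussed here can be sketched as follows; the constraint and column names are hypothetical, since the thread doesn't show the schema:

```sql
BEGIN;
-- Recreate one of the eight FKs with ON UPDATE CASCADE:
ALTER TABLE orders DROP CONSTRAINT orders_country_fkey;
ALTER TABLE orders
    ADD CONSTRAINT orders_country_fkey
    FOREIGN KEY (country) REFERENCES location_country (country)
    ON UPDATE CASCADE;
-- ...repeat for the other referencing tables, then fix the parent row;
-- the cascade rewrites every referencing row automatically:
UPDATE location_country SET country = 'GB' WHERE country = 'UK';
COMMIT;
```

Doing it all in one transaction means the database never shows a mix of 'UK' and 'GB'.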
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 10/11/06 18:53, Richard Broersma Jr wrote:
can anyone suggest a non-nightmarish way for me to do this
?
If your tables are set up with ON UPDATE CASCADE then you are
fine. Just update the main table and PostgreSQL will take
care of the rest.
It doesn't appear that ALTER TABLE can change constraint
characteristics. You'd have to drop/recreate, no?
Now that you mention it, I've never tried it or seen it done.
Here is what I came up with:
[snip]
It is nice to see things work so well. :-)
It would be interesting to
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 10/11/06 19:15, Richard Broersma Jr wrote:
It doesn't appear that ALTER TABLE can change constraint
characteristics. You'd have to drop/recreate, no?
Now that you mention it, I've never tried it or seen it done.
Here is what I came up with:
Greetings:
I have a plpgsql function that creates a temporary table to facilitate
some processing. Here is the code:
CREATE TEMP TABLE tmp (code   VARCHAR,
                       booked INTEGER,
                       avail  INTEGER,
On Wed, 2006-09-20 at 16:51 -0400, Terry Lee Tucker wrote:
Greetings:
I have a plpgsql function that creates a temporary table to facilitate
some processing. Here is the code:
CREATE TEMP TABLE tmp (code   VARCHAR,
                       booked INTEGER,
Thanks for the response, Jeff. See comments below.
On Wednesday 20 September 2006 05:09 pm, Jeff Davis [EMAIL PROTECTED] thus
communicated:
-- On Wed, 2006-09-20 at 16:51 -0400, Terry Lee Tucker wrote:
-- Greetings:
--
-- I have a plpgsql function that creates a temporary table to
On Wed, 2006-09-20 at 17:29 -0400, Terry Lee Tucker wrote:
Well, I was assuming that the table wasn't being dropped and that was
what was causing the error. I can see from your comments that I was wrong on
that assumption. I can do this with an EXECUTE, but it's going to be a pain
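The EXECUTE workaround mentioned here exists because, before 8.3, plpgsql cached query plans that referenced the dropped temp table's OID. A sketch (the function name and the dollar-quoting, which needs 8.0+, are assumptions):

```sql
CREATE OR REPLACE FUNCTION process_bookings() RETURNS void AS $$
BEGIN
    -- EXECUTE builds the statement at run time, so no plan is cached
    -- against a temp table that is dropped between calls.
    EXECUTE 'CREATE TEMP TABLE tmp (code   VARCHAR,
                                    booked INTEGER,
                                    avail  INTEGER)';
    -- ... any statements touching tmp must also go through EXECUTE ...
    EXECUTE 'DROP TABLE tmp';
END;
$$ LANGUAGE plpgsql;
```

From 8.3 on, plan invalidation makes this dance unnecessary.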
Hi All,
I have an issue with timestamp with time zone that I don't understand.
When I insert a timestamp value '1903-08-07 00:00:00+02' into a table and next select it again using psql, I get '1903-08-06 22:19:32+00:19'.
I'm located in The Netherlands, and before 1940 there was a so-called
Jan van der Weijde [EMAIL PROTECTED] writes:
When I insert a time stamp value '1903-08-07 00:00:00+02' into a table
and next select it again using psql I get '1903-08-06 22:19:32+00:19'.
I'm located in The Netherlands and before 1940 there was a so called
Amsterdam Time that is UTC + 20. So
Hi list,
I have a table with a column whose default value is current_timestamp, but
somehow all the tuples (around 8,000,000) have the same timestamp, which is,
honestly speaking, not what I intended. So is the current_timestamp
function only executed when the insert statement
On 26.07.2006 at 7:26:09 +0200, Christian Rengstl wrote the following:
Hi list,
I have a table with a column whose default value is
current_timestamp, but somehow all the tuples (around 8,000,000) have
the same timestamp, which is, honestly speaking, not what I intended.
So is the
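The behavior follows from the SQL standard: current_timestamp is the transaction start time, so every row inserted in one transaction (e.g. a single COPY or a batched load) gets the identical value. A sketch of the difference (clock_timestamp() exists from 8.2; older versions spell it timeofday(), returning text):

```sql
BEGIN;
SELECT current_timestamp;  -- frozen at transaction start
SELECT current_timestamp;  -- same value as the previous call
SELECT clock_timestamp();  -- actual wall-clock time, advances per call
COMMIT;
```

Using clock_timestamp() as the column default gives each row its own insertion time even inside one big transaction.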
hi,
I am new to PostgreSQL. I have downloaded the driver for PostgreSQL and am trying to do connectivity through it. Is it a type 2 driver or a type 4 driver?
-- Deepak Pal
Software Developer
Wicenet Ltd., Pune (M.H.)
Hi Deepak,
PostgreSQL JDBC is a type 4 implementation (http://jdbc.postgresql.org/documentation/80/index.html).
Thanks,
Shoaib Mir
EnterpriseDB
On 7/21/06, deepak pal [EMAIL PROTECTED] wrote:
hi,
I am new to PostgreSQL. I have downloaded the driver for PostgreSQL and am trying to do connectivity through
On Tue, Jun 27, 2006 at 01:43:21PM +0200, Christian Rengstl wrote:
I am in the middle of breaking my head over designing a
database and came to the following question/problem: I have
persons whose values (integers) have to be entered in the db,
but per person the number of values ranges from
Hi list,
I am in the middle of breaking my head over designing a database and came to
the following question/problem: I have persons whose values (integers) have to
be entered in the db, but per person the number of values ranges from 10 to
around 50. Now my question is whether it makes sense,
Christian Rengstl [EMAIL PROTECTED] writes:
I am in the middle of breaking my head over designing a database and came to
the following question/problem: I have persons whose values (integers) have to
be entered in the db, but per person the number of values ranges from 10 to
around 50. Now my
On Tue, Jun 27, 2006 at 13:43:21 +0200,
Christian Rengstl [EMAIL PROTECTED] wrote:
Hi list,
I am in the middle of breaking my head over designing a database and came to
the following question/problem: I have persons whose values (integers) have to
be entered in the db, but per person the
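For a variable number (10 to 50) of integer values per person, the usual normalized design is one row per value rather than dozens of nullable columns; a sketch with made-up table and column names:

```sql
CREATE TABLE person (
    person_id  SERIAL PRIMARY KEY,
    name       TEXT NOT NULL
);

CREATE TABLE person_value (
    person_id  INTEGER NOT NULL REFERENCES person,
    value      INTEGER NOT NULL
);
-- Each person simply gets as many rows here as they have values;
-- no schema change is needed when the count varies.
```

An index on person_value (person_id) keeps per-person lookups cheap.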
hi, all. A strange question is as follows: I have two PCs:
machine A: FreeBSD 5.4
machine B: Windows XP.
Both of them
I am so sorry. I sent a mail draft.
On 6/20/06, Lee Riquelmei [EMAIL PROTECTED] wrote:
hi, all. A strange question is as follows: I have two PCs:
machine A: FreeBSD 5.4 with PostgreSQL 8.1.2
machine B: Windows XP with PostgreSQL 8.1.2
A and B have the same hardware configuration and are in a 100Mbit LAN.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tony Caduto
Sent: 19 June 2006 03:51
To: Greg Quinn; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Question about openSSL
Greg Quinn wrote:
1.) I went to the OpenSSL site, and tried