Hi,
I figured it out.
If only the offset from UTC is given, you may try
select current_timestamp at time zone (select name from
pg_catalog.pg_timezone_names where utc_offset = interval '+8 hours' limit 1);
This would give the exact timestamp in that time zone.
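For what it's worth, AT TIME ZONE also accepts an interval directly, which avoids picking an arbitrary zone name out of pg_timezone_names with limit 1 — a minimal sketch:

```sql
-- Convert the current timestamp to UTC+8 using the offset alone;
-- a positive interval means east of Greenwich.
SELECT current_timestamp AT TIME ZONE INTERVAL '+8 hours';
```

One caveat: utc_offset in pg_timezone_names reflects each zone's current (possibly DST-shifted) offset, so a name picked that way may behave differently at other times of year.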
On Fri, Jan 13, 2012 at 11:59 PM,
Hi,
As I understand it, the ctid contains both the block number and an index in
this block.
Is there a way to fetch all the table entries from the same block?
E.g. something like this:
select * from foo where ctid like '(123,%'
or ... ctid.block = 123
--
Fire on the hearth and sun on the meadow make a man glad and merry. [Jøtul]
On 16 Leden 2012, 15:07, Tore Halvorsen wrote:
Hi,
As I understand it, the ctid contains both the block number and an index in
this block.
Is there a way to fetch all the table entries from the same block?
E.g. something like this:
select * from foo where ctid like '(123,%'
or ...
On Mon, Jan 16, 2012 at 3:20 PM, Tomas Vondra t...@fuzzy.cz wrote:
[...]
WHERE ctid > '(123,0)'::tid AND ctid < '(124,0)'::tid
Ah, forgot a point here - without doing a sequential scan.
--
Fire on the hearth and sun on the meadow make a man glad and merry. [Jøtul]
Hello
I tried \dT and \dT+ to see the type-related information, but I am interested
in the definition. Is there a way to see the type definition in psql?
Regards
On Monday, January 16, 2012 6:38:49 am salah jubeh wrote:
Hello
I tried \dT and \dT+ to see the type-related information, but I am
interested in the definition. Is there a way to see the type
definition in psql?
\dTS+
Does the above get you the information you need?
Regards
On 16 Leden 2012, 15:28, Tore Halvorsen wrote:
On Mon, Jan 16, 2012 at 3:20 PM, Tomas Vondra t...@fuzzy.cz wrote:
[...]
WHERE ctid > '(123,0)'::tid AND ctid < '(124,0)'::tid
Ah, forgot a point here - without doing a sequential scan.
Hmmm, you could create an index on the ctid column, but
On Monday, January 16, 2012 5:50:42 am Cefull Lo wrote:
Hi,
I figured it out.
If only the offset from UTC is given, you may try
select current_timestamp at time zone (select name from
pg_catalog.pg_timezone_names where utc_offset= interval '+8 hours' limit 1)
Would give the exact
On Mon, Jan 16, 2012 at 4:08 PM, Tomas Vondra t...@fuzzy.cz wrote:
The only other solution is to check all possible items on the page. There
may be up to 291 items (although it depends on block size and
architecture, see MaxHeapTuplesPerPage in access/htup.h).
Nice to know.
Something like
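Tomas's idea of checking every possible item on the page could be sketched roughly like this, reusing the table foo and block 123 from the earlier example (a sketch, not tested against this thread's setup):

```sql
-- Build every possible tid on block 123 and fetch the rows that exist.
-- With a "ctid = ANY (...)" condition the planner can use a Tid Scan
-- instead of a sequential scan. 291 is the MaxHeapTuplesPerPage upper
-- bound mentioned above; it depends on block size and architecture.
SELECT *
FROM foo
WHERE ctid = ANY (ARRAY(
        SELECT ('(123,' || i || ')')::tid
        FROM generate_series(1, 291) AS i
      ));
```

Nonexistent item numbers simply match nothing, so overshooting the actual tuple count on the page is harmless.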
Dear Postgres developers
I installed the windows version of postgres database 8.4.9-1. Our use of
the postgres database is limited to this version.
The question is whether this version has support for openssl or not.
If this is not the case, is it true that it must be added by recompiling
from
On 16 Leden 2012, 17:15, Tore Halvorsen wrote:
On Mon, Jan 16, 2012 at 4:08 PM, Tomas Vondra t...@fuzzy.cz wrote:
The only other solution is to check all possible items on the page.
There
may be up to 291 items (although it depends on block size and
architecture, see MaxHeapTuplesPerPage in
Hello Community,
I am working on a database extension for PostgreSQL (8.4+) to support
functions for spectral graph theory of spatial/geometric graphs like
proteins. For this purpose we need to store and use huge multidimensional
arrays in the database (adjacency matrix for graph).
The
Hello Adrian
when I run \dTS+ I get the same result as \dT+
testdb=# create type test AS (a int , b int);
CREATE TYPE
testdb=# \dT+ test
List of data types
Schema | Name | Internal name | Size | Description
Hi there
My PostgreSQL 9.1 installed on an Ubuntu/64 10.04 machine is running fine.
I installed pgAgent with sudo apt-get install pgagent (that is probably
version 3.0.1).
I executed the .sql script to create the pgagent objects in the database.
sudo su postgres
psql -d postgres
postgres=# \dn
P. Broennimann peter.broennim...@gmail.com writes:
- The problem I am having is that I can not start pgagent. When I try:
cd /usr/bin
./pgagent hostaddr=127.0.0.1 port=5432 dbname=postgres user=postgres
and I get the following errors:
/usr/bin$ WARNING: Couldn't create the primary
Anyone with PG Synchronous Replication knowledge, please help me with
your views on the questions below.
thanks,
Manoj
On 01/12/2012 10:12 AM, Manoj Govindassamy wrote:
any help on this is much appreciated.
thanks,
Manoj
On 01/11/2012 01:50 PM, Manoj Govindassamy wrote:
Hi,
I have a
Hi
I was told by someone that in order to store text that isn't case-sensitive
in comparisons I must use CIText or CI_Text.
I can't find the data type. Also, is this a char or varchar data type? Can
you create an index on this data type?
Thanks
H.F.
On 16 January 2012 20:15, Heine Ferreira heine.ferre...@gmail.com wrote:
Hi
I was told by someone that in order to store text that isn't case-sensitive
in comparisons I must use CIText or CI_Text.
I can't find the data type. Also, is this a char or varchar data type? Can you
create an index on
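For what it's worth, the type is spelled citext and ships as a contrib module rather than a built-in type, which is probably why you can't find it. A minimal sketch, assuming PostgreSQL 9.1 or later (on earlier versions you load the citext.sql contrib script instead of using CREATE EXTENSION):

```sql
CREATE EXTENSION citext;

-- citext behaves like text (variable length, not char/varchar),
-- but comparisons are case-insensitive, and it is indexable:
CREATE TABLE users (email citext PRIMARY KEY);
INSERT INTO users VALUES ('Foo@Example.com');
SELECT * FROM users WHERE email = 'foo@example.COM';  -- matches the row above
```

The PRIMARY KEY here creates a unique index on the citext column, and that index enforces uniqueness case-insensitively as well.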
On 01/16/2012 09:44 AM, salah jubeh wrote:
Hello Adrian
when I run \dTS+ I get the same result as \dT+
So you want to get information from a user-created type. You have two options:
1) Add a comment to the type:
COMMENT ON TYPE test IS 'create type test AS (a int, b int)';
test=# \dT
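Another way to recover a composite type's definition is to read it straight out of the system catalogs — a sketch, assuming the type test from the example above:

```sql
-- List the columns of the composite type 'test' from the catalogs,
-- with each column's type rendered the way psql would print it.
SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_attribute a
JOIN pg_type t ON t.typrelid = a.attrelid
WHERE t.typname = 'test'
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY a.attnum;
```

If the type name is not unique across schemas, add a join to pg_namespace and filter on the schema as well.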
Hey Guys.
It's been a while since I looked into this and it seems like new
options have cropped up for postgres HA and scalability. Is there a
consensus on the best way to achieve HA? My primary concern is HA
but of course any scalability gains would be more than welcome. All
the servers will
Is shared storage an option for you ? We've had a fairly pleasant experience
with shared storage partnered up with SLES and its HAE (high availability
extension) suite using a Pacemaker cluster for resource control. On top of this
we replicate to a hot standby server offsite, however used for
On Tue, Jan 17, 2012 at 10:47 AM, David Morton davidmor...@xtra.co.nz wrote:
Is shared storage an option for you ? We've had a fairly pleasant experience
with shared storage partnered up with SLES and its HAE (high availability
extension) suite using a Pacemaker cluster for resource control. On
On 01/16/12 1:34 PM, Tim Uckun wrote:
the servers will be virtual on either rackspace or amazon so that's
possibly a complication.
virtual servers tend to have lousy storage performance, for what that's
worth. The actual physical resources are being shared by who knows what
other workloads,
virtual servers tend to have lousy storage performance, for what that's
worth. The actual physical resources are being shared by who knows what
other workloads, and they tend to have higher latency than direct-attached
storage or a proper SAN.
I realize that. Eventually we might have to go to
On 01/16/12 2:04 PM, Tim Uckun wrote:
I realize that. Eventually we might have to go to physical machines
but for now we are using virtual servers and I have to make it work
within that structure.
Quite the catch-22. A single well-built dedicated server likely would
be MORE reliable than a
On 1/16/2012 4:09 PM, John R Pierce wrote:
On 01/16/12 2:04 PM, Tim Uckun wrote:
I realize that. Eventually we might have to go to physical machines
but for now we are using virtual servers and I have to make it work
within that structure.
quite the catch-22. a single well built dedicated
On 1/16/2012 4:13 PM, Andy Colson wrote:
On 1/16/2012 4:09 PM, John R Pierce wrote:
On 01/16/12 2:04 PM, Tim Uckun wrote:
I realize that. Eventually we might have to go to physical machines
but for now we are using virtual servers and I have to make it work
within that structure.
quite the
I wonder. If it's a write-heavy database, I totally agree with you. But if
it's mostly read-only, and mostly fits in RAM, then a pgpool of servers
should be faster.
It would be nice to know the usage patterns of this database (and its size).
In this case the databases are small to medium and the usage
Have you looked at a 'shared storage' solution based on DRBD ? I configured a
test environment using SLES HAE and DRBD with relative ease and it behaved very
well (can probably supply a build script if you like), there are lots of people
running production systems on similar setups although I
On Tuesday, January 17, 2012 10:34:54 AM Tim Uckun wrote:
Hey Guys.
It's been a while since I looked into this and it seems like new
options have cropped up for postgres HA and scalability. Is there a
consensus on the best way to achieve HA. My primary concern is HA
but of course any
On Tue, Jan 17, 2012 at 12:31 PM, David Morton davidmor...@xtra.co.nz wrote:
Have you looked at a 'shared storage' solution based on DRBD ? I configured
a test environment using SLES HAE and DRBD with relative ease and it behaved
very well (can probably supply a build script if you like), there
On 17/01/2012 12:23 AM, Evert Koks wrote:
Dear Postgres developers
I installed the windows version of postgres database 8.4.9-1. Our use
of the postgres database is limited to this version.
The question is whether this version has support for openssl or not.
Windows versions of PostgreSQL
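One quick way to check whether an existing installation has SSL support, without recompiling anything, is from psql — a sketch:

```sql
SHOW ssl;           -- 'on' only if the server was built with OpenSSL support
                    -- AND ssl = on is set in postgresql.conf; otherwise 'off'
SELECT version();   -- reports build details for the running server
```

Note that trying to set ssl = on in postgresql.conf on a server built without OpenSSL will fail at startup, which is another (blunter) way to find out.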
On Tue, Jan 17, 2012 at 3:51 AM, Manoj Govindassamy
ma...@nimblestorage.com wrote:
1. The transaction that was stuck right when the slave went away never went
through, even after I reloaded the master's config with local commit on. I do
see all new transactions on the master going through fine, except the one