PostgreSQL 9.1.0
For some of the referential constraints listed in my
information_schema.referential_constraints view, the values of the
unique_constraint_catalog, unique_constraint_schema, and unique_constraint_name
columns are NULL. There doesn't seem to be any rhyme or reason to which ones
are affected.
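A quick way to enumerate the affected rows (a minimal sketch; it assumes
nothing beyond the standard information_schema layout):

SELECT constraint_schema, constraint_name
FROM information_schema.referential_constraints
WHERE unique_constraint_name IS NULL;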
I'm using 8.4.1
I want to add a column to a table, but there are update triggers that will fire
that don't need to fire for this operation. So, I'd like to add the column with
triggers off. Normally this operation would take 10 or so seconds, so locking
the table for that amount of time is not
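If it's only the user-defined triggers that are in the way, one approach is a
sketch like this (the table, column, and type are hypothetical; note that
ALTER TABLE holds an exclusive lock for the whole transaction):

BEGIN;
ALTER TABLE mytable DISABLE TRIGGER USER;  -- all user triggers, not system ones
ALTER TABLE mytable ADD COLUMN newcol integer;
ALTER TABLE mytable ENABLE TRIGGER USER;
COMMIT;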
--- On Thu, 12/17/09, Adrian Klaver wrote:
>
> Would it be possible to see the table schemas and indices?
>
Sure (you asked for it!!):
CREATE TABLE packet
(
id integer NOT NULL DEFAULT nextval('packet_id_seq'::regclass),
packet_uuid uniqueidentifier NOT NULL DEFAULT newid(),
> On Tuesday 15 December 2009 2:33:39 pm CG wrote:
>
> > Bingo. Showed right up. I did a reindex, and now it shows up searching via
> > sequential scan or index scan.
> >
> > So that's pretty scary to have a corrupted index.
> On Thu, Dec 10, 2009 at 1:21 PM, CG wrote:
> > Thanks for the suggestion. I'm not sure what you mean when you say I
> > should restore to a file. Do you mean I should dump the database to an SQL
> > file instead of the "compressed" format?
> Subject: Re: [GENERAL] pg_dump and ON DELETE CASCADE problem
> To: cgg...@yahoo.com, pgsql-general@postgresql.org
> Date: Thursday, December 10, 2009, 3:29 PM
> CG wrote:
> > Thanks for the suggestion. I'm not sure what you mean when you say I
> > should restore to a file. Do you mean I should dump the database to an SQL
> > file instead of the "compressed" format?
Thanks for the suggestion. I'm not sure what you mean when you say I should
restore to a file. Do you mean I should dump the database to an SQL file
instead of the "compressed" format?
What do you think I will find?
The database dump includes a row that should have been marked as deleted.
From: Craig Ringer
Subject: Re: [GENERAL] pg_dump and ON DELETE CASCADE problem
To: cgg...@yahoo.com
Cc: "postgresql listserv"
Date: Wednesday, December 9, 2009, 9:02 PM
On 10/12/2009 3:31 AM, CG wrote:
> Hi all,
> We're using PostgreSQL 8.4 ... We do our nightly database backups with
>
been dumped so when it came time to dump the master table, the
referencing row was not there to be dumped. One would imagine that PostgreSQL
would have protections for that sort of thing...
Can you think of how I can protect against this in the future?
CG
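One way to catch this sort of inconsistency early (a sketch only; "master" and
"detail" stand in for the real table names): restore the nightly dump into a
scratch database, then look for referencing rows whose parent is missing:

SELECT d.*
FROM detail d
LEFT JOIN master m ON m.id = d.master_id
WHERE m.id IS NULL;  -- orphans: rows whose referenced parent wasn't dumped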
I use a custom data type that's a good candidate for being included in the
family of data types that fit the "anynonarray" and "anyelement" categories.
How can I get PostgreSQL to include that data type when it is considering
functions and operators that take the polymorphic types?
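For what it's worth, any scalar base type should already match those
pseudo-types; a minimal sketch to verify (the echo() function is made up, and
newid() is the contrib/uniqueidentifier generator mentioned below):

CREATE FUNCTION echo(anynonarray) RETURNS anynonarray
    AS $$ SELECT $1 $$ LANGUAGE sql;

SELECT echo(newid());  -- should come back as the custom type, unchanged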
better way, though!
I'm sure this is only the second of several more hurdles to overcome before
I'm finished with the transition.
Your wisdom will be appreciated!
CG
____
From: Tom Lane
To: CG
Cc: pgsql-general@postgresql.org
Sent: Wednesday, July 15
> > On Jul 16, 8:05 am, t...@sss.pgh.pa.us (Tom Lane) wrote:
> >> CG writes:
> >>> While transitioning from 8.1 to 8.4, I need to transition to the internal
> >>> UUID type in place of the contrib/uniqueidentifier module. I've built the
Another UUID question... I was hoping to put this off for a while, but it looks
like it's going to need to happen sooner than later. Rats! I keep telling
myself transitioning to core datatypes is good for the soul.
While transitioning from 8.1 to 8.4, I need to transition to the internal UUID
type in place of the contrib/uniqueidentifier module.
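The column conversion itself can go through text (a sketch, assuming the
stored values are textually valid UUIDs; "packet" and "packet_uuid" are from
the schema quoted earlier):

ALTER TABLE packet
    ALTER COLUMN packet_uuid TYPE uuid
    USING packet_uuid::text::uuid;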
I'm in the process of attempting to upgrade from 8.1 to 8.4, and I've been
using the uniqueidentifier contrib module for my UUIDs ... In 8.1 PostgreSQL
was able to figure out what to do with statements like
# SELECT 'Your new UUID is ' || newid();
?column?
---
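Since 8.3 removed most of the implicit casts to text, the old behavior now
needs an explicit cast (a sketch; newid() is the old module's generator):

SELECT 'Your new UUID is ' || newid()::text;

Declaring a custom text || uuid operator would restore the old convenience.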
I'm trying to debug a C function that is used for the binary send/receive part
of a user-defined type. I can debug the send part fine, but the receive part
takes an input parameter of type "internal", and I can't seem to conjure up an
"internal" to feed to my function to test whether the output is working.
I'm using PostgreSQL 8.1 ...
I had to terminate some clustering before it had completed. I think I have
quite a bit of wasted disk space in half-baked table files. I need some
suggestions for an easy way to find and clean out the files left over from the
failed cluster operations.
TIA!
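The leftovers can be spotted by comparing pg_class against the files on disk
(a sketch; the data directory layout is $PGDATA/base/<database oid>/):

SELECT relfilenode, relname
FROM pg_class
ORDER BY relfilenode;

Any numbered file in the database's directory that matches no relfilenode in
that list (ignoring segment suffixes like ".1") is a candidate leftover, but
double-check with the server shut down before deleting anything.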
- Original Message
From: Nikolay Samokhvalov <[EMAIL PROTECTED]>
To: CG <[EMAIL PROTECTED]>
Cc: postgresql listserv
Sent: Monday, July 9, 2007 4:33:04 PM
Subject: Re: [GENERAL] xpath_* namespace bug
AFAIK, contrib/xml2 doesn't support namespaces for XPath expressions
at all. Wait for PostgreSQL
select xpath_string($xml$
<doc xmlns:f="http://example.com/ns">
  <f:bar>baz</f:bar>
</doc>
$xml$
,'//f:bar/text()');
This does not give me back "baz" as I was expecting it to... It seems like
xpath is ignoring the namespace directives in the source XML document.
I'm not sure what I'm missing here... :)
select xpath_string($xml$
<doc xmlns:f="http://example.com/ns">
  <f:bar>baz</f:bar>
</doc>
$xml$
,'//f:bar/text()')
This does not give me back "baz" as I was expecting it to... How does one
clue-in the xpath functions to the namespaces in the XML document?
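Given that xml2 ignores namespace declarations entirely, one workaround is to
match on the element's local name instead of its prefixed name (a sketch,
using the same placeholder document as above):

select xpath_string($xml$
<doc xmlns:f="http://example.com/ns">
  <f:bar>baz</f:bar>
</doc>
$xml$
,'//*[local-name()=''bar'']/text()');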
my $stylesheet = $xslt->parse_stylesheet($style_doc);
my $results = $stylesheet->transform($source);
print $stylesheet->output_string($results);
The auto/XML/LibXSLT/LibXSLT.so library compiled on the machine it is running
on should be sufficient, right?
- Original Message
From: Martin Gainty <[EMAIL PROTECTED]>
To: CG <[EMAIL PROTECTED]>; postgresql listserv
Sent: Wednesday, June 27, 2007 12:46:59 PM
Subject: Re: [GENERAL] How do you handle shared memory corruption issues?
(contrib/xml2)
Hi CG
looks as if your 64-bit box needs 64-bit libraries instead of the default
32-bit ones
PostgreSQL 8.1
I made a simple modification to contrib/xml2 to include the ability to process
EXSLT tags... On the production side, the postmaster crashes when I try to
process my EXSLT stylesheet. On my development machine, everything runs without
crashing. There are a number of differences between the two machines.
these things from being created. But what else is there to do at 1 AM on a
Sunday night, waiting for a 500MB table to CLUSTER? :)
CG
crippled. I'll keep trying.
CG
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> CG <[EMAIL PROTECTED]> writes:
> > 2006-05-25 08:30:50.076 EDT LOG: server process (PID 32140) was
> > terminated by signal 11
>
> That should be leaving a core dump file (if
I didn't find a core dump.
Perhaps I'm looking in the wrong spot or for the wrong file. The file should be
called "core.32140", correct? ... I did a "find / -name 'core*'" ... and that
found nothing useful.
--- Tom Lane <[EMAIL PROTECTED]> wrote:
2006-05-25 08:30:50.103 EDT LOG: database system was not properly shut down;
automatic recovery in progress
2006-05-25 08:30:50.114 EDT LOG: redo starts at 28/3C101B38
What could be wrong? I am panicking. Please advise!
CG
--- Teodor Sigaev <[EMAIL PROTECTED]> wrote:
> > In case you're unfamiliar with this particular horse, I'm using ltree to
> > create a full text index on some <= 50 char long fields for a lookup table.
> > The idea was to be able to tear through tons of data quickly finding case
> > insensitive
I'm still trying to wrap my brain around this one. Please forgive me if this is
the proverbial "dead horse" that I'm beating.
In case you're unfamiliar with this particular horse, I'm using ltree to create
a full text index on some <= 50 char long fields for a lookup table. The idea
was to be able to tear through tons of data quickly, finding case-insensitive
matches.
--- Bernhard Weisshuhn <[EMAIL PROTECTED]> wrote:
> On Mon, Feb 27, 2006 at 10:27:20AM -0800, CG <[EMAIL PROTECTED]> wrote:
>
> > [...] I'd need to see if the space required for the varchar+btree tables
> > is comparable, better, or worse than the ltree approach.
> On Mon, Feb 27, 2006 at 09:14:40AM -0800, CG <[EMAIL PROTECTED]> wrote:
>
> > I could probably get even better performance out of the table, at the cost
> > of a significant increase in table and index size, by chopping up the
> > columns into
'h.e.l.l.o.w.o.r.l.d'
'e.l.l.o.w.o.r.l.d'
'l.l.o.w.o.r.l.d'
'l.o.w.o.r.l.d'
'o.w.o.r.l.d'
'w.o.r.l.d'
'o.r.l.d'
'r.l.d'
and using a wildcard search "search_vector ~ 'o.r.l.*'" would jump right to the
vectors which start with "o.r.l".
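A minimal sketch of the scheme as described (the table and index names are
hypothetical; each row holds one dotted suffix of the original value):

CREATE TABLE search (item_id integer, search_vector ltree);
CREATE INDEX search_vector_idx ON search USING gist (search_vector);

INSERT INTO search VALUES
    (1, 'h.e.l.l.o.w.o.r.l.d'),
    (1, 'e.l.l.o.w.o.r.l.d');  -- ...and so on, down to 'd'

SELECT DISTINCT item_id
FROM search
WHERE search_vector ~ 'o.r.l.*'::lquery;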
--- Martijn van Oosterhout wrote:
> That's very odd. Like the other person said, do you vacuum and analyse?
> But my question is: is it using the index? What does EXPLAIN / EXPLAIN
> ANALYZE tell you?
data=# explain select * from search where search_vector ~ '*.o.r.l.*'::lquery;
--- Scott Marlowe <[EMAIL PROTECTED]> wrote:
> Are you vacuuming regularly, are your fsm settings high enough, and what
> does vacuum verbose say?
Autovacuum is running, but I do a nightly vacuum analyze as well. When I just
do a vacuum analyze on the table I get:
data=# vacuum verbose analyze search;
query like '*.o.r.l.*' and use the "~" operator in the where clause. I would
link to the table "items" by the item_id ...
What could be making this go so wrong? Is there a better way to accomplish my
task?
CG
Thanks to the great suggestions I've at least gotten it to not hang...
Martijn's hint about blocking led me to open up those filehandles in a
non-blocking mode. It appears that write() will write, at a maximum, only 4096
bytes when it is called from within PostgreSQL. I've tried to push data int
--- Martijn van Oosterhout wrote:
> On Fri, Oct 28, 2005 at 07:24:12AM -0700, CG wrote:
> Not entirely sure, but I'm sure the size of the write matters. For
> example, if your test rpogram, did you check that the write actually
> wrote everything?
There's beginning and en
--- Martijn van Oosterhout wrote:
> On Fri, Oct 28, 2005 at 06:38:29AM -0700, CG wrote:
>
> Umm, what *are* you trying to do? Is this running in the backend?
Yes, running on the back-end. I'm trying to utilize Adobe's FDF toolkit to
parse the FDF files stored in my database
PostgreSQL 7.4 ...
Essentially, I've written a function in C for use with PostgreSQL. The debugger
shows that the program is hanging at the point where it writes data into its
own STDIN.
[snip]
// Open up and hijack STDIN
int pipe_pair[2];               // [0] = read end, [1] = write end
int pipe_rv = pipe(pipe_pair);  // 0 on success, -1 on failure
tsearch2 is fantastic, but it works best for fields that contain words. I have
to sift through alphanumeric identification numbers.
Is the split-field FTI the best way to tackle my problem?
What can I do to get better performance on "SELECT * FROM table WHERE field
LIKE '%value%';"?
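One option in the spirit of the ltree trick above: store every suffix of the
field in a side table, so the unanchored LIKE becomes an anchored prefix match
that a btree can serve (a sketch; all names are hypothetical, and
text_pattern_ops keeps the index usable for LIKE in any locale):

CREATE TABLE field_suffix (item_id integer, suffix text);
CREATE INDEX field_suffix_idx ON field_suffix (suffix text_pattern_ops);

-- '%value%' against the base table becomes:
SELECT DISTINCT item_id
FROM field_suffix
WHERE suffix LIKE 'value%';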