Re: [HACKERS] #escape_string_warning = off

2005-08-02 Thread Jeff Davis
The documentation about this is a little brief (reading from the
developer docs, section 4.1.2.1.).

Does the SQL standard provide no way to have a NULL character in a
string constant? Is single-quote the only special character?

If I have a system on 7.4 or 8.0 right now, what is the recommended
way to write string constants with backslashes? I can't use E''
yet, so if I need to include a backslash it seems like there's no chance
it will be forward-compatible.

In E'' constants, the only special characters are single-quote,
backslash, and NULL, right?
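
For illustration, a sketch of the two literal forms on an 8.1-style server
(assuming escape_string_warning = on and standard_conforming_strings = off):

  -- plain literal: the backslash is treated as an escape, and the server
  -- emits a "nonstandard use of ..." warning
  SELECT 'a\\b';    -- yields a\b, plus a warning
  -- explicit escape-string syntax: same value, no warning
  SELECT E'a\\b';   -- yields a\b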

Regards,
Jeff Davis

Marko Kreen wrote:
 On Mon, Aug 01, 2005 at 11:58:34AM -0700, Joshua D. Drake wrote:
 
What might this be?
 
 
 Whether to warn on '\' in non-E'' strings.
 
 AFAIK Bruce wants to turn this to 'on' in 8.2.
 




Re: [HACKERS] Autovacuum to-do list

2005-08-02 Thread Jeff MacDonald
On Tue, Aug 02, 2005 at 08:25:00AM +0100, Dave Page wrote:
[..snipped..]
  
  Oooo... That's a lot of win32 ignorance to ignore... :-)
  
  Push control-alt-delete and look under Performance. I believe
  Windows may even keep *more* information than Linux. It's all a
  question of figuring out what the Win32 API calls are to get what you
  are looking for. Most concepts found in one system are also found in
  the other. I would assume yes before no.
 
 Yes - Win32 has Performance Counters which are an extensible set of
 monitored objects, and a rich API to access them - see
 http://msdn.microsoft.com/library/default.asp?url=/library/en-us/perfmon/base/performance_monitoring.asp
 for details. The perfmon program (Start -> Run -> perfmon -> OK) gives a
 GUI interface to play with in which you can monitor any installed counter.
 
 Regards, Dave.

ok, ok.. you can tell I haven't seriously used win32 since the win 98 days
(heck, haven't written any kind of windows application since around the time
of win 3.11 or so).. I told you I was ignorant of these things! :) I don't
have a win32 machine to try either of the above with, so I'll take the words
of the more experienced as fact :)

regards,
J



Re: [HACKERS] Autovacuum to-do list

2005-08-02 Thread Dave Page
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 [EMAIL PROTECTED]
 Sent: 02 August 2005 05:55
 To: Jeff MacDonald
 Cc: Matthew T. O'Connor; [EMAIL PROTECTED]; 
 pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Autovacuum to-do list
 
 On Mon, Aug 01, 2005 at 10:35:00PM -0400, Jeff MacDonald wrote:
  On Mon, Aug 01, 2005 at 10:22:14PM -0400, Matthew T. O'Connor wrote:
  [..snipped..]
   Right which is why we would need to enforce some max 
 value so that 
   vacuuming will never be totally squeezed out.
  I'm a linux guy, so please forgive my ignorance, but is it 
 even possible to
  determine load average on win32?
 
 Oooo... That's a lot of win32 ignorance to ignore... :-)
 
 Push control-alt-delete and look under Performance. I believe
 Windows may even keep *more* information than Linux. It's all a
 question of figuring out what the Win32 API calls are to get what you
 are looking for. Most concepts found in one system are also found in
 the other. I would assume yes before no.

Yes - Win32 has Performance Counters which are an extensible set of
monitored objects, and a rich API to access them - see
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/perfmon/base/performance_monitoring.asp
for details. The perfmon program (Start -> Run -> perfmon -> OK) gives a GUI
interface to play with in which you can monitor any installed counter.

Regards, Dave.



Re: [HACKERS] #escape_string_warning = off

2005-08-02 Thread Dennis Bjorklund
On Mon, 1 Aug 2005, Jeff Davis wrote:

 Does the SQL standard provide no way to have a NULL character in a
 string constant? Is single-quote the only special character?

I don't think it forbids you from using the null character. It's not like 
the strings are zero terminated. Some encodings might not allow the null 
character, but that's different.

ps. the null character does not have anything to do with the SQL NULL. I'm
sure there is someone somewhere who needs this info.

-- 
/Dennis Björklund




Re: [HACKERS] #escape_string_warning = off

2005-08-02 Thread Jeff Davis
Dennis Bjorklund wrote:
 On Mon, 1 Aug 2005, Jeff Davis wrote:
 
 
Does the SQL standard provide no way to have a NULL character in a
string constant? Is single-quote the only special character?
 
 
 I don't think it forbids you from using the null character. It's not like 
 the strings are zero terminated. Some encodings might not allow the null 
 character, but that's different.
 

But doesn't PostgreSQL forbid us from using the NULL character in a
query at all? Don't we always have to escape or encode it in some way?
Does this new attempt at standard-compliant strings allow PostgreSQL to
accept a null character in a string?

 ps. null character does not have anything to do with the sql NULL. I'm 
 sure there is someone somewhere that need this info.
 

Yeah, I was talking about '\0'.

Regards,
Jeff Davis



Re: [HACKERS] #escape_string_warning = off

2005-08-02 Thread Dennis Bjorklund
On Tue, 2 Aug 2005, Jeff Davis wrote:

 Does the SQL standard provide no way to have a NULL character in a
 string constant? Is single-quote the only special character?
  
  I don't think it forbids you from using the null character. It's not like 
  the strings are zero terminated. Some encodings might not allow the null 
  character, but that's different.
 
 But doesn't PostgreSQL forbid us from using the NULL character in a
 query at all? Don't we always have to escape or encode it in some way?

Pg does not allow \0 in strings at all. Try SELECT 'abc\0def'; in the
current version of pg.

The SQL standard doesn't forbid null values in strings as far as I know,
and that's all I talked about. To have a SQL-standard string with a null
inside you just insert the 0 byte (for normal single-byte encodings); no
escaping needed.

Internally pg handles strings as \0-terminated entities which is a bit 
unfortunate but that's what we have. That's why 'abc\0def' became the 
string 'abc'. Most character sets forbid \0 in strings anyway.
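
A quick way to see the behaviour described above (a sketch; the exact outcome
depends on the server version and its escape handling):

  -- \0 injects a zero byte; per the description above, the stored value
  -- is silently cut off at that byte
  SELECT 'abc\0def' AS s, length('abc\0def') AS len;
  --   s  | len
  --  abc |   3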

-- 
/Dennis Björklund




Re: [HACKERS] bgwriter, inherited temp tables TODO items?

2005-08-02 Thread Thomas F. O'Connell
Great! Is the background-writer clogging issue worthy of a TODO as well?
That's the one that put postgres in a nearly unusable state after this bug
was tripped.


Thanks!

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jul 29, 2005, at 10:49 PM, Bruce Momjian wrote:


Added to TODO:

* Prevent inherited tables from expanding temporary subtables of other
  sessions




Thomas F. O'Connell wrote:



On Jul 21, 2005, at 1:22 PM, Bruce Momjian wrote:



Thomas F. O'Connell wrote:



I'm switching the aftermath of this thread --
http://archives.postgresql.org/pgsql-general/2005-07/msg00501.php -- to
-hackers since it raised issues of potential concern to developers.

At various points in the thread, Tom Lane said the following:

I have an old note to myself that persistent write errors could clog
the bgwriter, because I was worried that after an error it would
stupidly try to write the same buffer again instead of trying to make
progress elsewhere.  (CVS tip might be better about this, I'm not
sure.)  A dirty buffer for a file that doesn't exist anymore would
certainly qualify as a persistent failure.

and

Hmm ... a SELECT from one of the actual tables would then scan the
temp tables too, no?

Thinking about this, I seem to recall that we had agreed to make the
planner ignore temp tables of other backends when expanding an
inheritance list --- but I don't see anything in the code implementing
that, so it evidently didn't get done yet.

I don't immediately see TODO items corresponding to these. Should
there be some? Or do these qualify as bugs and should they be
submitted to that queue?




Would you show a query that causes the problem so I can properly word
the TODO item for inheritance and temp tables?



It's really more of a timing issue than a specific query issue.
Here's a scenario:

CREATE TABLE parent ( ... );

begin thread1:
CREATE TEMP TABLE child ( ... ) INHERITS ( parent );

begin thread2:
while( 1 ) {
 SELECT ... FROM parent WHERE ...;
}

end thread1 (thereby dropping the temp table at the end of session)

At this point, the file is gone, but, as I understand it, the planner
not ignoring temp tables of other backends means that thread2 is
inappropriately accessing the temp table child as it performs
SELECTs, thus causing potential dirty buffers in bgwriter, which at
some point during the heavy activity of the tight SELECT loop, will
have the file yanked out from under it and will throw a 'No such
file' error.

So I guess the core issue is the failure of the planner to limit
access to temp tables.

Tom seems to come pretty close to a TODO item in his analysis in my
opinion. Something like:

Make the planner ignore temp tables of other backends when expanding
an inheritance list.

--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i™

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005




--
  Bruce Momjian                       |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us              |  (610) 359-1001
  +  If your life is a hard drive,    |  13 Roberts Road
  +  Christ can be your backup.       |  Newtown Square, Pennsylvania 19073







[HACKERS] FYI: Fujitsu

2005-08-02 Thread Neil Conway
For the past 11 months or so[1], I've been working full-time on 
PostgreSQL as an employee of Fujitsu Australia Software Technology. I'm 
grateful to Fujitsu for giving me this opportunity, and I've enjoyed the 
past year. However, I'm returning to university in the fall, and 
therefore I will no longer be working full-time for Fujitsu.


Also, I'll be on vacation and pretty inactive for most of August -- I'll 
only be reading lists sporadically, so please CC me on anything you'd 
like my input on. The only pending bit of work I want to get into the 
tree for 8.1 is the RAISE patch for pl/pgsql (user exceptions), which 
I should have finished shortly.


-Neil

[1] http://archives.postgresql.org/pgsql-hackers/2004-09/msg00158.php



[HACKERS] [8.0.3] Not dumping all sequences ...

2005-08-02 Thread Marc G. Fournier


I had posted this earlier, but had insufficient info (or access) to 
provide much detail ... now, I've got the access, and this really isn't 
making much sense ...


Have a database where \ds shows several sequences, but one in particular I
can't seem to 'dump' with pg_dump:


 public | xa_url_id_seq   | sequence | pareto_su

there is another sequence, of similar name, in the database that I'm
showing here since it will show up in the grep of the dump:


 public | tmp_xa_url_id_seq   | sequence | pareto_su

Now, I'm doing a simple:

pg_dump -U pareto_su --schema-only pareto > schema.out

and:

$ grep -i xa_url_id_seq schema.out
nextval('xa_url_id_seq'::text),
nextval('xa_url_id_seq'::text),
url_id bigint DEFAULT nextval('tmp_xa_url_id_seq'::text) NOT NULL,
-- Name: tmp_xa_url_id_seq; Type: SEQUENCE; Schema: public; Owner: pareto_su
CREATE SEQUENCE tmp_xa_url_id_seq
ALTER TABLE public.tmp_xa_url_id_seq OWNER TO pareto_su;
-- Name: tmp_xa_url_id_seq; Type: ACL; Schema: public; Owner: pareto_su
REVOKE ALL ON TABLE tmp_xa_url_id_seq FROM PUBLIC;
REVOKE ALL ON TABLE tmp_xa_url_id_seq FROM pareto_su;
GRANT ALL ON TABLE tmp_xa_url_id_seq TO pareto_su;
GRANT SELECT,UPDATE ON TABLE tmp_xa_url_id_seq TO GROUP pareto_app;

As you will see above, there is a CREATE SEQUENCE for the 
tmp_xa_url_id_seq SEQUENCE ... but none for the xa_url_id_seq one ...


I'm not seeing any errors generated when I do the pg_dump itself, and the 
database itself is owned by the user I'm doing the dump as, as are all the 
SEQUENCES/TABLES ...


I've even checked \dp, to make sure there were no permission issues, and
unless I'm missing something, they look correct:


 public | xa_url_id_seq   | sequence | 
{pareto_su=arwdRxt/pareto_su,group pareto_app=rw/pareto_su}

So, unless I'm overlooking something, the system sees the SEQUENCE, but
pg_dump doesn't see it ... is there something else I should be looking at
/ verifying as to why it isn't dumped?


Oh, and just in case, I've checked that the pg_dump version is correct as 
well:


$ pg_dump --version
pg_dump (PostgreSQL) 8.0.3

uname of the system shows:

Linux pareto 2.6.10-5-amd64-generic #1 Tue Apr 5 12:21:57 UTC 2005 x86_64 
GNU/Linux

So, we're on a 64bit system ... but can't see how that would make a
difference for a dump ...




Re: [HACKERS] bgwriter, inherited temp tables TODO items?

2005-08-02 Thread Tom Lane
Thomas F. O'Connell [EMAIL PROTECTED] writes:
 Tom seems to come pretty close to a TODO item in his analysis in my  
 opinion. Something like:

 Make the planner ignore temp tables of other backends when expanding  
 an inheritance list.

I've done this in CVS tip.  I'm not sure whether it should be considered
a backpatchable bug fix, though.

If you want to apply the patch locally, it's attached --- should work
fine in 8.0, but I'm not sure about 7.4 or earlier, which have slightly
different logic here.

regards, tom lane


*** src/backend/optimizer/prep/prepunion.c.orig    Thu Jul 28 18:27:00 2005
--- src/backend/optimizer/prep/prepunion.c Tue Aug  2 16:21:41 2005
***************
*** 22,27 ****
--- 22,28 ----
  
  
  #include "access/heapam.h"
+ #include "catalog/namespace.h"
  #include "catalog/pg_type.h"
  #include "nodes/makefuncs.h"
  #include "optimizer/clauses.h"
***************
*** 808,813 ****
--- 809,824 ----
          Index       childRTindex;
  
          /*
+          * It is possible that the parent table has children that are
+          * temp tables of other backends.  We cannot safely access such
+          * tables (because of buffering issues), and the best thing to do
+          * seems to be to silently ignore them.
+          */
+         if (childOID != parentOID &&
+             isOtherTempNamespace(get_rel_namespace(childOID)))
+             continue;
+ 
+         /*
           * Build an RTE for the child, and attach to query's rangetable
           * list. We copy most fields of the parent's RTE, but replace
           * relation OID, and set inh = false.
***************
*** 818,823 ****
--- 829,845 ----
          parse->rtable = lappend(parse->rtable, childrte);
          childRTindex = list_length(parse->rtable);
          inhRTIs = lappend_int(inhRTIs, childRTindex);
+     }
+ 
+     /*
+      * If all the children were temp tables, pretend it's a non-inheritance
+      * situation.  The duplicate RTE we added for the parent table is harmless.
+      */
+     if (list_length(inhRTIs) < 2)
+     {
+         /* Clear flag to save repeated tests if called again */
+         rte->inh = false;
+         return NIL;
      }
  
      /*
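
A way to check the new behaviour once the patch is applied (a sketch using two
psql sessions; the table names are made up for illustration):

  -- session 1
  CREATE TABLE parent (id int);
  -- session 2 (a different backend)
  CREATE TEMP TABLE child () INHERITS (parent);
  -- back in session 1: with the patch, the plan should no longer pull in
  -- the other backend's temp child table
  EXPLAIN SELECT * FROM parent;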



[HACKERS] dbt3 data with 10GB scale factor

2005-08-02 Thread Mark Wong
I've started scaling dbt3 up to the 10GB scale factor against CVS and
the fast COPY patch:

http://www.testing.osdl.org/projects/dbt3testing/results/dev4-010/53/

I'm sure there are some better database parameters I should use so
please let me know what to try. ;)  What I've found interesting is the
difference in the time it takes for Q9 to run in the power test versus the
throughput test:

  Power Test            00:56:27
  Throughput Stream 1   00:38:13
  Throughput Stream 2   00:41:33
  Throughput Stream 3   00:20:16
  Throughput Stream 4   00:18:11

Diffing the query plans between the Power Test and the individual
streams in the Throughput Test, I only see a Materialize and Seq Scan on
the nation table versus just a Seq Scan on the nation table between two
of the plans.  But this doesn't appear to account for the execution time
difference as the query with the same plan executes just as fast during
the Throughput Test.  Here are the plans in full:

EXPLAIN ANALYZE in the Power Test for Q9:

 EXPLAIN ANALYZE select nation, o_year, sum(amount) as sum_profit from ( select 
n_name as nation, extract(year from o_orderdate) as o_year, l_extendedprice * 
(1 - l_discount) - ps_supplycost * l_quantity as amount from part, supplier, 
lineitem, partsupp, orders, nation where s_suppkey = l_suppkey and ps_suppkey = 
l_suppkey and ps_partkey = l_partkey and p_partkey = l_partkey and o_orderkey = 
l_orderkey and s_nationkey = n_nationkey and p_name like '%light%' ) as profit 
group by nation, o_year order by nation, o_year desc;
   
                                   QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=182648.08..182648.22 rows=5 width=49) (actual time=3375680.779..3387407.186 rows=175 loops=1)
   ->  Sort  (cost=182648.08..182648.09 rows=5 width=49) (actual time=3375639.390..3379116.547 rows=3254907 loops=1)
         Sort Key: nation.n_name, date_part('year'::text, (orders.o_orderdate)::timestamp without time zone)
         ->  Nested Loop  (cost=1.27..182648.02 rows=5 width=49) (actual time=43.412..3276365.695 rows=3254907 loops=1)
               ->  Nested Loop  (cost=1.27..182632.87 rows=5 width=49) (actual time=14.798..2510769.669 rows=3254907 loops=1)
                     Join Filter: ("outer".s_nationkey = "inner".n_nationkey)
                     ->  Nested Loop  (cost=0.00..182628.78 rows=5 width=24) (actual time=14.685..2415773.175 rows=3254907 loops=1)
                           ->  Nested Loop  (cost=0.00..182613.65 rows=5 width=28) (actual time=14.654..2377308.988 rows=3254907 loops=1)
                                 ->  Nested Loop  (cost=0.00..105392.76 rows=12804 width=16) (actual time=0.071..10991.718 rows=433752 loops=1)
                                       ->  Seq Scan on part  (cost=0.00..73877.55 rows=3200 width=4) (actual time=0.023..3698.415 rows=108438 loops=1)
                                             Filter: ((p_name)::text ~~ '%light%'::text)
                                       ->  Index Scan using i_ps_partkey on partsupp  (cost=0.00..8.00 rows=148 width=12) (actual time=0.045..0.059 rows=4 loops=108438)
                                             Index Cond: ("outer".p_partkey = partsupp.ps_partkey)
                                 ->  Index Scan using i_l_suppkey_partkey on lineitem  (cost=0.00..6.02 rows=1 width=24) (actual time=0.891..5.441 rows=8 loops=433752)
                                       Index Cond: (("outer".ps_partkey = lineitem.l_partkey) AND ("outer".ps_suppkey = lineitem.l_suppkey))
                           ->  Index Scan using pk_supplier on supplier  (cost=0.00..3.01 rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=3254907)
                                 Index Cond: (supplier.s_suppkey = "outer".l_suppkey)
                     ->  Materialize  (cost=1.27..1.52 rows=25 width=33) (actual time=0.000..0.011 rows=25 loops=3254907)
                           ->  Seq Scan on nation  (cost=0.00..1.25 rows=25 width=33) (actual time=0.005..0.047 rows=25 loops=1)
               ->  Index Scan using pk_orders on orders  (cost=0.00..3.01 rows=1 width=8) (actual time=0.229..0.230 rows=1 loops=3254907)
                     Index Cond: (orders.o_orderkey = "outer".l_orderkey)
 Total runtime: 3387491.286 ms
(22 rows)


Throughput Stream 1 Q9:

 EXPLAIN ANALYZE select nation, o_year, sum(amount) as sum_profit from ( select 
n_name as nation, extract(year from o_orderdate) as o_year, l_extendedprice * 
(1 - l_discount) - ps_supplycost * l_quantity as amount from part, supplier, 
lineitem, partsupp, orders, nation where s_suppkey = l_suppkey and ps_suppkey = 
l_suppkey and ps_partkey = l_partkey and p_partkey = l_partkey and o_orderkey = 
l_orderkey and s_nationkey = n_nationkey and p_name like '%ivory%' ) 

Re: [HACKERS] [8.0.3] Not dumping all sequences ...

2005-08-02 Thread Tom Lane
Marc G. Fournier [EMAIL PROTECTED] writes:
 Have a database where \ds shows several sequences, but one in particular I
 can't seem to 'dump' with pg_dump:

   public | xa_url_id_seq   | sequence | pareto_su

Given the name, this could be a SERIAL column's sequence --- which is
not dumped as a separate object by pg_dump, since recreating the SERIAL
column ought to create it.

regards, tom lane



Re: [HACKERS] [8.0.3] Not dumping all sequences ...

2005-08-02 Thread Marc G. Fournier

On Tue, 2 Aug 2005, Tom Lane wrote:


Marc G. Fournier [EMAIL PROTECTED] writes:

Have a database where \ds shows several sequences, but one in particular I
can't seem to 'dump' with pg_dump:



  public | xa_url_id_seq   | sequence | pareto_su


Given the name, this could be a SERIAL column's sequence --- which is
not dumped as a separate object by pg_dump, since recreating the SERIAL
column ought to create it.


One of their functions does an insert into the table with 
nextval('xa_url_id_seq'::text) as part of the INSERT itself, so they 
aren't using that field as a SERIAL type ... just checked the CREATE TABLE 
for the table, and it's defined as a bigint NOT NULL ... so we're not
dealing with a SERIAL field, as far as I can tell ... in fact, grep'ing
their whole schema, they don't use SERIAL fields at all ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [HACKERS] [8.0.3] Not dumping all sequences ...

2005-08-02 Thread Tom Lane
Marc G. Fournier [EMAIL PROTECTED] writes:
 On Tue, 2 Aug 2005, Tom Lane wrote:
 Given the name, this could be a SERIAL column's sequence --- which is
 not dumped as a separate object by pg_dump, since recreating the SERIAL
 column ought to create it.

 One of their functions does an insert into the table with 
 nextval('xa_url_id_seq'::text) as part of the INSERT itself, so they 
 aren't using that field as a SERIAL type ... just checked the CREATE TABLE 
 for the table, and it's defined as a bigint NOT NULL ...

Hmm, odd.  But maybe there are traces of a SERIAL linkage?  What do
you get from

select * from pg_depend where objid = 'xa_url_id_seq'::regclass;

regards, tom lane



Re: [HACKERS] [8.0.3] Not dumping all sequences ...

2005-08-02 Thread Marc G. Fournier

On Tue, 2 Aug 2005, Tom Lane wrote:


Marc G. Fournier [EMAIL PROTECTED] writes:

On Tue, 2 Aug 2005, Tom Lane wrote:

Given the name, this could be a SERIAL column's sequence --- which is
not dumped as a separate object by pg_dump, since recreating the SERIAL
column ought to create it.



One of their functions does an insert into the table with
nextval('xa_url_id_seq'::text) as part of the INSERT itself, so they
aren't using that field as a SERIAL type ... just checked the CREATE TABLE
for the table, and it's defined as a bigint NOT NULL ...


Hmm, odd.  But maybe there are traces of a SERIAL linkage?  What do
you get from

select * from pg_depend where objid = 'xa_url_id_seq'::regclass;


# select * from pg_depend where objid = 'xa_url_id_seq'::regclass;
 classid | objid  | objsubid | refclassid | refobjid | refobjsubid | deptype
---------+--------+----------+------------+----------+-------------+---------
    1259 | 335539 |        0 |      16672 |     2200 |           0 | n
    1259 | 335539 |        0 |       1259 |   335541 |           1 | i
(2 rows)

'k, checking the docs ... deptype == i is an INTERNAL, and refobjid is 
what is referencing it (in this case, xa_url, as I'd expect) ... but, 
looking at \d for xa_url, I'm not seeing anything there to cause it ... no 
serial values ... the only 'default nextval()' I can find in the schema 
is something totally unrelated ...


next? :)


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
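
For reference, a sketch of a catalog query that spells out which column the
internal ('i') dependency points at, assuming the usual pg_depend semantics:

  SELECT c.relname, a.attname
    FROM pg_depend d
    JOIN pg_class c ON c.oid = d.refobjid
    JOIN pg_attribute a ON a.attrelid = d.refobjid
                       AND a.attnum = d.refobjsubid
   WHERE d.objid = 'xa_url_id_seq'::regclass
     AND d.deptype = 'i';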



Re: [HACKERS] [8.0.3] Not dumping all sequences ...

2005-08-02 Thread Tom Lane
Marc G. Fournier [EMAIL PROTECTED] writes:
 On Tue, 2 Aug 2005, Tom Lane wrote:
 Hmm, odd.  But maybe there are traces of a SERIAL linkage?  What do
 you get from
 
 select * from pg_depend where objid = 'xa_url_id_seq'::regclass;

 # select * from pg_depend where objid = 'xa_url_id_seq'::regclass;
   classid | objid  | objsubid | refclassid | refobjid | refobjsubid | deptype
  ---------+--------+----------+------------+----------+-------------+---------
      1259 | 335539 |        0 |      16672 |     2200 |           0 | n
      1259 | 335539 |        0 |       1259 |   335541 |           1 | i
 (2 rows)

Well, that second line is *definitely* a SERIAL column linkage.

 'k, checking the docs ... deptype == i is an INTERNAL, and refobjid is 
 what is referencing it (in this case, xa_url, as I'd expect) ... but, 
 looking at \d for xa_url, I'm not seeing anything there to cause it ... no 
 serial values ... the only 'default nextval()' I can find in the schema 
 is something totally unrelated ...

Is it possible they did create table xa_url(id bigserial, ...) and
then later changed the default expression for the column?

regards, tom lane



Re: [HACKERS] #escape_string_warning = off

2005-08-02 Thread Jeff Davis
Dennis Bjorklund wrote:
 On Tue, 2 Aug 2005, Jeff Davis wrote:
 
 
Does the SQL standard provide no way to have a NULL character in a
string constant? Is single-quote the only special character?

I don't think it forbids you from using the null character. It's not like 
the strings are zero terminated. Some encodings might not allow the null 
character, but that's different.

But doesn't PostgreSQL forbid us from using the NULL character in a
query at all? Don't we always have to escape or encode it in some way?
 
 
 Pg does not allow \0 in strings at all. Try SELECT 'abc\0def'; in the
 current version of pg.
 
 The sql standard doesn't forbid null values in strings as far as I know
 and that's all I talked about. To have a sql standard string with null
 inside you just insert the 0 byte (for normal single byte encodings), no
 escaping needed.

I guess what I'm trying to find out is: does this mean that even after all
these changes to the way strings are handled, PostgreSQL still won't be
standards-compliant for the basic '' string?

Also, let's say I have apps now in 7.4/8.0, and I want them to be
forward-compatible. Should I make a type called E so that the E''
notation will work, and then use that for strings? What is the right
way to do it?

I found a few things in the archives, but I didn't see these particular
things addressed.
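
One conservative workaround for the forward-compatibility problem is to keep
literal backslashes out of the quotes entirely and build them with chr(),
which reads the same whether or not backslash escaping is in effect (a sketch):

  -- portable across 7.4/8.0 and later escape-handling changes
  SELECT 'C:' || chr(92) || 'temp' AS path;   -- C:\temp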

Regards,
Jeff Davis



Re: [HACKERS] [8.0.3] Not dumping all sequences ...

2005-08-02 Thread Marc G. Fournier

On Tue, 2 Aug 2005, Tom Lane wrote:


Marc G. Fournier [EMAIL PROTECTED] writes:

On Tue, 2 Aug 2005, Tom Lane wrote:

Hmm, odd.  But maybe there are traces of a SERIAL linkage?  What do
you get from

select * from pg_depend where objid = 'xa_url_id_seq'::regclass;



# select * from pg_depend where objid = 'xa_url_id_seq'::regclass;
  classid | objid  | objsubid | refclassid | refobjid | refobjsubid | deptype
 ---------+--------+----------+------------+----------+-------------+---------
     1259 | 335539 |        0 |      16672 |     2200 |           0 | n
     1259 | 335539 |        0 |       1259 |   335541 |           1 | i
(2 rows)


Well, that second line is *definitely* a SERIAL column linkage.


'k, checking the docs ... deptype == i is an INTERNAL, and refobjid is
what is referencing it (in this case, xa_url, as I'd expect) ... but,
looking at \d for xa_url, I'm not seeing anything there to cause it ... no
serial values ... the only 'default nextval()' I can find in the schema
is something totally unrelated ...


Is it possible they did create table xa_url(id bigserial, ...) and
then later changed the default expression for the column?


'k, am checking into this ... is it a simple matter of removing that 
second record above from pg_depend to fix the pg_dump issue, or 
something more involved than that?



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [HACKERS] [8.0.3] Not dumping all sequences ...

2005-08-02 Thread Tom Lane
Marc G. Fournier [EMAIL PROTECTED] writes:
 On Tue, 2 Aug 2005, Tom Lane wrote:
 Well, that second line is *definitely* a SERIAL column linkage.

 Is it possible they did create table xa_url(id bigserial, ...) and
 then later changed the default expression for the column?

 'k, am checking into this ... is it a simple matter of removing that 
 second record above from pg_depend to fix the pg_dump issue, or 
 something more involved than that?

AFAIR, removing that pg_depend entry would be enough to decouple the
sequence so it appears as an independent object in the pg_dump output.
However, I'm still wondering exactly what is wrong, if anything ---
does the pg_dump output not reload correctly?  If so, what happens
exactly when you try?
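
For the record, the catalog tweak described above would look something like
this (a sketch only; it edits a system catalog directly, so superuser access
and a backup first):

  -- drop the internal SERIAL linkage so pg_dump emits the sequence
  -- as an independent object
  DELETE FROM pg_depend
   WHERE objid = 'xa_url_id_seq'::regclass
     AND refobjid = 'xa_url'::regclass
     AND deptype = 'i';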

regards, tom lane



Re: [HACKERS] [8.0.3] Not dumping all sequences ...

2005-08-02 Thread Christopher Kings-Lynne
I should point out that the bug I reported about dependencies and 
changing the type of a serial column still exists.


Once you change a serial column to something else, you cannot ever 
change the default IIRC...


Chris

Tom Lane wrote:

Marc G. Fournier [EMAIL PROTECTED] writes:


On Tue, 2 Aug 2005, Tom Lane wrote:


Well, that second line is *definitely* a SERIAL column linkage.

Is it possible they did create table xa_url(id bigserial, ...) and
then later changed the default expression for the column?



'k, am checking into this ... is it a simple matter of removing that 
second record above from pg_depend to fix the pg_dump issue, or 
something more involved then that?



AFAIR, removing that pg_depend entry would be enough to decouple the
sequence so it appears as an independent object in the pg_dump output.
However, I'm still wondering exactly what is wrong, if anything ---
does the pg_dump output not reload correctly?  If so, what happens
exactly when you try?

regards, tom lane



