Re: [HACKERS] Max number of rows in a table

2003-12-01 Thread Tom Lane
ow [EMAIL PROTECTED] writes:
> --- Tom Lane [EMAIL PROTECTED] wrote:
>> Actually you can only have 4 billion SQL commands per xid, because the
>> CommandId datatype is also just 32 bits.  I've never heard of anyone
>> running into that limit, though.
>
> Wouldn't the above put a limit on the number of records one could have
> in a table?

No.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Max number of rows in a table

2003-12-01 Thread ow

--- Tom Lane [EMAIL PROTECTED] wrote:
>> Wouldn't the above put a limit on the number of records one could have
>> in a table?
>
> No.

If I understand correctly, a table that has 4B+ rows cannot be restored after
the dump and that, in turn, may/will affect the ability to upgrade to new
versions of pgSql.

This is somewhat similar to saying "yes" to the question "Can I drive this car
at 55 mph?" and then forgetting to mention that the brakes will fail if the car
reaches 55 mph.

Thanks





__
Do you Yahoo!?
Free Pop-Up Blocker - Get it now
http://companion.yahoo.com/



Re: [HACKERS] Max number of rows in a table

2003-12-01 Thread Stephan Szabo

On Mon, 1 Dec 2003, ow wrote:

> --- Tom Lane [EMAIL PROTECTED] wrote:
>> Actually you can only have 4 billion SQL commands per xid, because the
>> CommandId datatype is also just 32 bits.  I've never heard of anyone
>> running into that limit, though.
>
> Wouldn't the above put a limit on the number of records one could have in a
> table? One has to use pg_dump/pg_restore to be able to upgrade between pgSql
> releases and/or to back up data. If one cannot back up data and/or upgrade
> between pg releases, then 4B records is the effective limit on the number of
> records in pgSql ... or am I missing something?

I'd expect copy to be a single command, no matter how many rows were
copied.
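
One way to see this (a sketch, assuming a table `t` that was loaded by a
single COPY): PostgreSQL records the inserting transaction id and command id
of every row in the hidden `xmin` and `cmin` system columns, so a COPY-loaded
table should show just one value of each.

```sql
-- Sketch: check that a COPY-based load used one transaction and one command.
-- xmin = inserting transaction id, cmin = command id within that transaction
-- (both are hidden PostgreSQL system columns, selectable by name).
SELECT DISTINCT xmin, cmin FROM t;
-- A table filled by a single COPY should return exactly one row here.
```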




Re: [HACKERS] Max number of rows in a table

2003-12-01 Thread Greg Stark

Stephan Szabo [EMAIL PROTECTED] writes:

> I'd expect copy to be a single command, no matter how many rows were
> copied.

It might prevent you from using pg_dump --inserts?

-- 
greg




Re: [HACKERS] Max number of rows in a table

2003-12-01 Thread Tom Lane
ow [EMAIL PROTECTED] writes:
> If I understand correctly, a table that has 4B+ rows cannot be restored after
> the dump and that, in turn, may/will affect the ability to upgrade to new
> versions of pgSql.

You don't understand correctly.

regards, tom lane



Re: [HACKERS] Max number of rows in a table

2003-12-01 Thread ow

--- Tom Lane [EMAIL PROTECTED] wrote:
> ow [EMAIL PROTECTED] writes:
>> If I understand correctly, a table that has 4B+ rows cannot be restored
>> after the dump and that, in turn, may/will affect the ability to upgrade
>> to new versions of pgSql.
>
> You don't understand correctly.

I see. Thanks





__
Do you Yahoo!?
Free Pop-Up Blocker - Get it now
http://companion.yahoo.com/



Re: [HACKERS] Max number of rows in a table

2003-12-01 Thread Christopher Browne
Clinging to sanity, [EMAIL PROTECTED] (ow) mumbled into her beard:
> --- Tom Lane [EMAIL PROTECTED] wrote:
>>> Wouldn't the above put a limit on the number of records one could have
>>> in a table?
>>
>> No.
>
> If I understand correctly, a table that has 4B+ rows cannot be restored after
> the dump and that, in turn, may/will affect the ability to upgrade to new
> versions of pgSql.
>
> This is somewhat similar to saying "yes" to the question "Can I drive this
> car at 55 mph?" and then forgetting to mention that the brakes will fail if
> the car reaches 55 mph.

No, you are NOT understanding correctly.

Restoring a table from pg_dump generally involves _ONE_ command.

For instance, the following is what pg_dump generates for my table, stocks.

COPY stocks (symbol, description, exchange) FROM stdin;
AADBX	AADBX	NYSE
AADEX	AADEX	NYSE
AAIEX	AAIEX	NYSE
BTS.A	BTS.A	TSX
CTSTK	TD Canadian Equity	CTE
CASH	CASH	TSX
CTAMER	TD AmeriGrowth RSP	CTE
CTASIA	TD AsiaGrowth RSP	CTE
CTEMER	TD Emerging Markets RSP	CTE
CTEURO	TD European Growth RSP	CTE
CTIBND	TD Global RSP Bond	CTE
FDIVX	FDIVX	NYSE
FDRXX	FDRXX	NYSE
FUSEX	FUSEX	NYSE
MOT	MOT	NYSE
NCX	NOVA Chemicals Corporation	TSX
NT	NT	NYSE
PCA	Petro Canada	TSX
RY	Royal Bank of Canada	TSX
TOC	Thomson Corporation	TSX
TRP	TransCanada PipeLines Limited	TSX
WORKVE	Working Ventures	OTHER
CTSPEC	TD SPECIAL EQUITY	CTE
CTUSEQ	TD US EQUITY	CTE
CTMM	TD MONEY MARKET PL	CTE
CTCBOND	TD Canadian Bond	CTE
\.

Recovery from this involves the SQL processor using ONE transaction
ID, and ONE SQL statement.  If there were 8 billion rows in the table,
whatever other challenges there might be, it would still use ONE
transaction ID and ONE SQL statement.

What is there about "this involves just one SQL statement" that isn't
making sense?
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').
http://www.ntlug.org/~cbbrowne/lisp.html
Pagers are cases for holding dead batteries. -Richard Wolff 



Re: [HACKERS] Max number of rows in a table

2003-12-01 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
> Stephan Szabo [EMAIL PROTECTED] writes:
>> I'd expect copy to be a single command, no matter how many rows were
>> copied.
>
> It might prevent you from using pg_dump --inserts?

Not even that, unless you *also* modified the dump output to wrap
BEGIN/END around it.  Otherwise each INSERT is a separate xid.
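
In dump-script terms (a sketch; the table and values are borrowed from the
stocks example earlier in the thread):

```sql
-- pg_dump --inserts emits one INSERT per row.  Run as a plain script,
-- each statement commits by itself and consumes its own 32-bit xid:
INSERT INTO stocks VALUES ('MOT', 'MOT', 'NYSE');  -- its own xid
INSERT INTO stocks VALUES ('NT', 'NT', 'NYSE');    -- a different xid

-- Only if the script were edited to wrap the INSERTs in one transaction
-- would the per-xid 32-bit CommandId counter come into play:
BEGIN;
INSERT INTO stocks VALUES ('MOT', 'MOT', 'NYSE');  -- one xid, CommandId 0
INSERT INTO stocks VALUES ('NT', 'NT', 'NYSE');    -- same xid, CommandId 1
COMMIT;
```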

(Of course you could definitely take a pretty long coffee break while
waiting for a 4-billion-row table to be restored with INSERTs.  Also
I think it would be necessary to run VACUUM partway through to avoid
transaction wraparound issues.  pg_autovacuum could be expected to
take care of that for you, if it were running.  But in practice anyone
sane would use COPY for this, anyway.)
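
The wraparound concern is that transaction ids are also 32-bit, so roughly
4 billion single-row INSERT transactions would exhaust the counter unless old
tuples get frozen along the way.  A rough sketch of the interleaving:

```sql
-- Sketch: during a very long INSERT-based restore, vacuum periodically so
-- that old transaction ids are frozen before the 32-bit xid space wraps.
VACUUM;   -- database-wide; freezes sufficiently old tuples
-- ... then continue running the INSERT script ...
```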

regards, tom lane
