[SQL] PLEASE help ME , HOW TO GENERATE PRIMARY Keys on the fly
Dear friends,

I have a table:

  MD_CUSTOMER
    MD_CUSTOMERIDPK  integer primary key
    NAME             varchar

But my primary key values are not in consecutive order, for example:

  MD_CUSTOMERIDPK  NAME
  10               ANDI
  33               TESTER
  100              KKK

So I want to make another key that generates the sequence 1, 2, 3, ... In MS SQL Server 2005 I can do this with the Rank() function, but how do I do it in Postgres? Please, can anyone help me? I would really appreciate it.

Best regards
Andi kusnadi
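For reference, what I mean by the Rank() approach in SQL Server 2005 is roughly this (from memory, untested):

  SELECT RANK() OVER (ORDER BY MD_CUSTOMERIDPK) AS new_key,
         MD_CUSTOMERIDPK,
         NAME
  FROM   MD_CUSTOMER;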
Re: [SQL] PLEASE help ME , HOW TO GENERATE PRIMARY Keys on the fly
On Wed, May 24, 2006 at 05:35:10PM +0700, andi wrote:
>
> But my primary key is not in correct order like
What does this mean? Is the key being generated by a sequence (i.e.
is the column DEFAULT nextval('some_sequence'))? If so, the primary
key will be assigned in COMMIT order. Note that the sequence does
not guarantee no gaps, however.
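To illustrate (untested, using the table from your mail and the made-up
sequence name from above):

  CREATE SEQUENCE some_sequence;
  CREATE TABLE md_customer (
      md_customeridpk integer PRIMARY KEY DEFAULT nextval('some_sequence'),
      name            varchar
  );

  -- A rolled-back INSERT still consumes a value from the sequence,
  -- which is how gaps appear:
  BEGIN;
  INSERT INTO md_customer (name) VALUES ('ANDI');
  ROLLBACK;
  INSERT INTO md_customer (name) VALUES ('TESTER');  -- gets id 2, not 1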
A
--
Andrew Sullivan | [EMAIL PROTECTED]
The fact that technology doesn't work is no bar to success in the marketplace.
--Philip Greenspun
Re: [SQL] PLEASE help ME , HOW TO GENERATE PRIMARY Keys on the fly
Well, you could add a serial column. I'll tell you how, but I haven't tested the code, so be sure to check it! And using BEGIN and COMMIT/ROLLBACK to delimit transactions would not be a bad idea at all ;-)

To add a serial column, just write:

  -- create new serial field
  ALTER TABLE md_customer ADD id SERIAL;

If you check your table now, you will see that your new 'id' column contains consecutive values. If this is what you wanted, you could update every table referencing md_customer, like this:

  -- drop foreign key on remote table
  ALTER TABLE xxx DROP CONSTRAINT xxx_fk;

  -- set old fk values to the value in the 'id' field just created
  UPDATE xxx SET fk_field = CU.id
  FROM md_customer CU
  WHERE CU.md_customeridpk = xxx.fk_field;

Check if everything is OK now. If it is, drop your old pk, rename the new column and make it the primary key, then recreate your foreign key against it:

  -- drop old pk
  ALTER TABLE md_customer DROP md_customeridpk CASCADE;

  -- rename id to md_customeridpk
  ALTER TABLE md_customer RENAME id TO md_customeridpk;

  -- create pk
  ALTER TABLE md_customer ADD CONSTRAINT md_customer_pk PRIMARY KEY (md_customeridpk);

  -- restore fk on remote table
  ALTER TABLE xxx ADD CONSTRAINT xxx_fk FOREIGN KEY (fk_field)
      REFERENCES md_customer(md_customeridpk) ON DELETE ... ON UPDATE ...;

That should be it. Hope it helps.

On 5/24/06, andi <[EMAIL PROTECTED]> wrote:
> so I want to make other primary key to generate sequences 1, 2, 3, ... and
> in MS SQL SERVER 2005 I can with Rank() function, but in Postgres how?
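And since I mentioned BEGIN/COMMIT: a sketch of how I would wrap the whole thing (same untested caveat; in Postgres the ALTERs are transactional too, so ROLLBACK really undoes them):

  BEGIN;
  -- ... run the ALTERs and UPDATEs from above here ...
  -- check the data before committing
  COMMIT;   -- or ROLLBACK; if anything looks wrong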
Re: [SQL] PLEASE help ME , HOW TO GENERATE PRIMARY Keys on the fly
andi wrote:
Dear friends,
I have table
MD_CUSTOMER
MD_CUSTOMERIDPK integer primary key
NAME varchar
OK - two columns.
But my primary key is not in correct order like
MD_CUSTOMER
MD_CUSTOMERIDPK NAME
10               ANDI
33               TESTER
100              KKK
Not sure what you mean. What does it mean for your primary key to be in
the "correct order"? I assume you know how to select rows in a specific
order using the "ORDER BY" clause?
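For example, with your table that would just be:

  SELECT * FROM md_customer ORDER BY md_customeridpk;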
, so I want to make other primary key to generate sequences 1, 2, 3, ... and
How can you have *another* primary key? By definition there can only be
one primary key.
in MS SQL SERVER 2005
I can with Rank() function , but in Postgres how ?
If all you want to do is generate a series of numbers you might look at
generate_series(), or if you'd like a "row number" then something like:
CREATE TEMPORARY SEQUENCE myseq;
SELECT *,nextval('myseq') FROM mytable;
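Applied to your table, that might look like this (untested, and it relies on
the subquery actually handing rows back in that order, so check the output):

  CREATE TEMPORARY SEQUENCE myseq;
  SELECT nextval('myseq') AS rownum, md_customeridpk, name
  FROM (SELECT * FROM md_customer ORDER BY md_customeridpk) AS ordered;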
I have to say though, I'm not sure what you're trying to do. I do get
the feeling I'd think it was a bad idea once I found out though.
--
Richard Huxton
Archonet Ltd
[SQL] Warehouse Schema
I'm developing a schema for a large data warehouse (10 billion records) and had a couple of questions about how to optimize it. My biggest question is on the design of the fact table and how much normalization I should actually do of the data.

The data is going to be keyed by IP address, stored as a non-unique IP4 type. Each record is also tagged with various attributes, such as a category and a type. Assuming that a category and type are VARCHAR, would it make sense to normalize these out of the fact table into their respective tables and key them by an INTEGER? I.e.

  CREATE TABLE big_fact_table_A (
      identifier IP4,
      data1      BYTEA,
      data2      BYTEA,
      ...
      dataN      BYTEA,
      category   VARCHAR(16),
      type       VARCHAR(16)
  );

... vs ...

  CREATE TABLE big_fact_table_B (
      identifier IP4,
      data1      BYTEA,
      data2      BYTEA,
      ...
      dataN      BYTEA,
      category   INTEGER REFERENCES categories (category_id),
      type       INTEGER REFERENCES types (type_id)
  );

I figure that the normalized fact table should be quicker, as the integer is much smaller than the varchar. On query, however, the table will need to be joined against the two other tables (categories, types), but I still figure that this is a win because the other tables are fairly small and should stay resident in memory. Is this reasoning valid?

The downside to this (from my perspective) is that the data comes in the form of big_fact_table_A and could be easily COPYed straight into the table. With big_fact_table_B it looks like I will have to do the "unJOIN" in a script. Also, I have separate installations of the warehouse (with different data sources) and it will be difficult to share data between them unless their categories/types tables are keyed with exactly the same integer IDs which, as I don't directly control the other installations, is not guaranteed. Any suggestions to the above "problems"?
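If it helps, here is a sketch of how I was thinking the "unJOIN" could be done inside the database rather than in a script (untested; the staging table is hypothetical, and I'm assuming the categories/types tables carry a text column alongside their serial ids):

  -- COPY the raw feed, shaped like big_fact_table_A, into a staging table
  CREATE TEMPORARY TABLE staging_fact (
      identifier IP4,
      data1      BYTEA,
      category   VARCHAR(16),
      type       VARCHAR(16)
  );
  COPY staging_fact FROM STDIN;

  -- add any category/type values we have not seen before
  INSERT INTO categories (category)
      SELECT DISTINCT s.category FROM staging_fact s
      WHERE NOT EXISTS (SELECT 1 FROM categories c WHERE c.category = s.category);
  INSERT INTO types (type)
      SELECT DISTINCT s.type FROM staging_fact s
      WHERE NOT EXISTS (SELECT 1 FROM types t WHERE t.type = s.type);

  -- resolve the VARCHARs to integer ids while loading the real fact table
  INSERT INTO big_fact_table_B (identifier, data1, category, type)
  SELECT s.identifier, s.data1, c.category_id, t.type_id
  FROM   staging_fact s
         JOIN categories c ON c.category = s.category
         JOIN types      t ON t.type     = s.type;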
