Hello,

My two cents.

We use "SQL dbms" as datawarehouse repository where we put all our U2 datas 
(Uv,Ud) but also XLS sheet, MDB files, other sql db, ... We use standard client 
tool like BusinessObject, Cognos, HummingbirdBI to request datawarehouse.

To prepare the U2 data, we have an in-house tool written in SB+: "U2 ETL".

1 - We define a table (the SQL image) from a U2 file's single-valued fields and/or
an associated set of multivalued fields (see the SQL sketch after this list).
1.1 - The system builds the SQL statements: CREATE TABLE, DROP TABLE, ...

2 - For each field/column we extract and transform the field value into a column
value: data type conformance, decomposition of multipart fields via SB+ expressions
or I-types.
2.1 - The system does this at each level - AM, VM, SVM.
2.2 - The system builds the SQL statements: INSERT, UPDATE, DELETE, ..., CREATE INDEX,
...

3 - For each table and column we define metadata for human readability
(table/column labels, short/long descriptions, the relational model).
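
Roughly, the generated statements look like this (CUSTOMER and its columns are
invented names for illustration only, and the exact DDL differs per target DBMS):

    -- Single-valued fields of the U2 file become one parent table,
    -- keyed on the U2 record ID.
    CREATE TABLE CUSTOMER (
        CUST_ID    VARCHAR(32) NOT NULL PRIMARY KEY,
        CUST_NAME  VARCHAR(60),
        CUST_CITY  VARCHAR(40)
    );

    -- An associated set of multivalued fields becomes a child table,
    -- one row per value position (the VM level); a further child table
    -- would carry the SVM level.
    CREATE TABLE CUSTOMER_ORDERS (
        CUST_ID    VARCHAR(32)   NOT NULL,
        MV_POS     INTEGER       NOT NULL,
        ORDER_NO   VARCHAR(20),
        ORDER_AMT  DECIMAL(12,2),
        PRIMARY KEY (CUST_ID, MV_POS)
    );
    CREATE INDEX IX_CUSTORD_ORDNO ON CUSTOMER_ORDERS (ORDER_NO);

    -- The extracted/transformed values are then loaded row by row.
    INSERT INTO CUSTOMER (CUST_ID, CUST_NAME, CUST_CITY)
        VALUES ('1001', 'DUPONT SA', 'PARIS');
    INSERT INTO CUSTOMER_ORDERS (CUST_ID, MV_POS, ORDER_NO, ORDER_AMT)
        VALUES ('1001', 1, 'SO-4711', 1250.00);
    INSERT INTO CUSTOMER_ORDERS (CUST_ID, MV_POS, ORDER_NO, ORDER_AMT)
        VALUES ('1001', 2, 'SO-4712', 310.50);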

For performance, all SQL statements are written to a sequential file and are
routed to the data warehouse DBMS via an OS script.
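
The sequential file is simply the statements one after another. For example, a
full-refresh fragment of one table might look like this (a simplified sketch;
the transaction wrapping and delete-then-insert pattern are illustrative, and the
actual content depends on the definition and the target DBMS):

    -- one table's refresh, batched in a single transaction so the
    -- target DBMS applies it in one pass
    START TRANSACTION;
    DELETE FROM CUSTOMER_ORDERS;
    DELETE FROM CUSTOMER;
    INSERT INTO CUSTOMER (CUST_ID, CUST_NAME, CUST_CITY)
        VALUES ('1001', 'DUPONT SA', 'PARIS');
    -- ... one INSERT per extracted row/value ...
    COMMIT;

The OS script then just feeds this file to the command-line client of whichever
DBMS is the warehouse.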

We have a scheduler to refresh the data fully, incrementally, conditionally, ...
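
For example, an incremental run can be as simple as a delete-then-reinsert of only
the changed record IDs (again a simplified sketch with the invented names from
above; the real statements depend on the definition):

    -- replace only the rows for the U2 records changed since the last run
    DELETE FROM CUSTOMER_ORDERS WHERE CUST_ID = '1001';
    DELETE FROM CUSTOMER        WHERE CUST_ID = '1001';
    INSERT INTO CUSTOMER (CUST_ID, CUST_NAME, CUST_CITY)
        VALUES ('1001', 'DUPONT SA', 'LYON');
    INSERT INTO CUSTOMER_ORDERS (CUST_ID, MV_POS, ORDER_NO, ORDER_AMT)
        VALUES ('1001', 1, 'SO-4711', 1250.00);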

We can address any SQL database system as the data warehouse; we have done it for
MySQL, MS SQL Server, Oracle, and DB2.


A sample of usage:
One company with 12 production sites; on each site there is one UniData server
(UNIX and Windows).
On each site we deploy the U2 ETL definitions and the extract scheduler.
Each night the data is extracted and the SQL statements are built into a text file;
these text files are FTP'ed to the HQ site.
At HQ we have one data warehouse server (MySQL); it receives all the text files and
loads them into the data warehouse for consolidation.

Another sample:
One server (UD or UV); 20 accounts (20 instances of the same application). Only
one ETL definition (file to table, fields to columns); we define n source data
files (account, data file(s)), extract from all the source data files, and load
them into one data warehouse repository.

Very easy to use, very powerful.
Deployment is independent of the application accounts; "U2 ETL" is an autonomous
account.

Manu

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:owner-u2-
> [EMAIL PROTECTED] On Behalf Of Clifton Oliver
> Sent: Friday, May 9, 2008 07:39
> To: [email protected]
> Subject: DQM (was [U2] Using ETL to extract data from UD to SQL)
> 
> I wanted to change the subject line to see if we Listizens have any
> interest in discussing Data Quality Management.
> 
> How do you cleanse MultiValue non-typed data for integration with
> strongly datatyped non-MV applications?
> 
> Anyone want to kick off the discussion?
> 
> Regards,
> 
> Clif
> 
> --
> W. Clifton Oliver, CCP
> CLIFTON OLIVER & ASSOCIATES
> Tel: +1 619 460 5678    Web: www.oliver.com
> 
> 
> 
> On May 8, 2008, at 6:00 PM, Boydell, Stuart wrote:
> 
> > nd dirty
> > data to get to a set of data suitable for your DW/reporting
> > requirements.
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/
