We will be in the same process soon.
We'll have to consolidate over 100 instances to
somewhere between 40 and 60 instances and switch from RBO
to CBO.
Any tips are welcome.
--- Steve McClure [EMAIL PROTECTED] wrote:
I am just starting to look at converting to the cost
based optimizer, and am
I just went to the DB2 UDB fast track for experienced
DBA at IBM.
If it's 4 days then it should be fine. Our course was
only 2 days and we asked so many questions that we
only saw two thirds of the topics (and we ran
overtime).
As usual, it depends a lot on who is giving the
course.
---
On a DW project, we used Cognos Impromptu and
Powerplay in 1998-1999. We used the client-server
version, as Powerplay for the web was a tool bought
from another company and was not quite integrated with
the other Cognos products.
The main drawback was that the Powerplay part to
build the
Besides Oracle, we have DB2 mainframe and DB2 UDB on
AIX.
I've asked someone working with DB2 mainframe and
there is SPUFI, which is like SQL*Plus, and DB2
Interactive, which is more like the Command Center on
DB2 UDB.
I'll send more details tomorrow.
--- Thomas Day [EMAIL PROTECTED] wrote:
I
How big was the OS process?
Monitor what the process is doing; any memory
allocation in a loop?
I had this error once and it was an Oracle bug (8.0.4 on
HP-UX 10.20): Oracle was having a problem releasing
memory, so after 100,000 calls of a function the
process was crashing (ORA-4030) because it
Burnt mud???
You're supposed to say peaty!
Or you could have said:
Classic Glenmorangie, matured for 10 years in American
white oak then finished in sherry butts. Light gold in
colour, this product has a complex, full-bodied aroma:
sherry wine notes with traces of honey. Sherry
and nuts are
It's nice to recommend, but since you do not seem to
really know the Oracle product, how will you convince
the managers to spend at least six figures (US
money)?
Do you have any criteria?
Just to choose a reporting tool we had over 40
criteria that each company had to answer and
Last year I was in a biotech company and all the
systems (around 8) were communicating with Advanced
Queuing. All systems were on 8.1.7 and one was on 9.0.1.
The systems were in fact a pipeline producing data at
the beginning, refining it along the way and putting
it in a warehouse at the end.
I do
Can you be more precise?
--- [EMAIL PROTECTED] wrote:
Hello
What are the limitations of partitioning a
table in Oracle.
Regards,
Deepa
--- Igor Neyman [EMAIL PROTECTED] wrote:
Ruth,
Are you in the same boat, dealing with both: Oracle
and SQL Server?
Igor Neyman, OCP DBA
[EMAIL PROTECTED]
- Original Message -
To: Multiple recipients of list ORACLE-L
[EMAIL PROTECTED]
Have you done a conceptual data model?
Do all transactions have the same properties?
Have you established any performance requirements?
Do you favor insert or read?
If the 10 transaction types have the same properties I
would definitely put them in one table.
If your tests show that
The best way to calculate the size of a table is to
load it with 1000 production data rows, then
extrapolate the size of the predicted volume.
You should be able to handle the first year of data at
day 1.
Do not lose time calculating the table size at the
byte level with formulas.
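The extrapolation above can be sketched in SQL; the table name and row counts are invented for illustration:

```sql
-- After loading ~1000 representative production rows into SALES_TEST,
-- read the space they actually occupy from the data dictionary.
SELECT bytes
  FROM user_segments
 WHERE segment_name = 'SALES_TEST';

-- If 1000 rows occupy B bytes, the predicted volume scales linearly,
-- e.g. for an expected 2,000,000 rows:
SELECT bytes / 1000 * 2000000 AS predicted_bytes
  FROM user_segments
 WHERE segment_name = 'SALES_TEST';
```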
For the temp
The list of documents should have been established
before the contract was signed.
A professional contractor would have told you that.
The contractor must follow your methodology and give
you the same documents as your internal IT teams.
On the DBA side, you should expect:
database
When doing many selects you may get the same order,
but this is not guaranteed by Oracle.
The only official guarantee is to use an ORDER BY.
After an export/import or a move, you run the risk
that the rows will not be in the same order.
How big is the table ?
There is no unique id for the
Hi,
We're using Silverrun RDM 2.7.2
There is a conceptual/logical data model for all the
entities of the project. I must create the physical
data model for the first phase and I will have to
create the physical data models for the next phases
also.
Should I create a brand new model in a separate
It seems I was completely wrong; is it the same as for
Oracle 8 and 8i?
Last time I worked with Windows it was on Oracle 8,
and I thought that you had to stop the services
because Windows was not letting you copy the database
files.
--- Igor Neyman [EMAIL PROTECTED] wrote:
Joe,
You can
Hi,
On all DW projects I've been on, the ETL tool was on the
database server containing the DW database.
On the current project, the architecture team has
decided that the ETL tool (Data Junction) will be on
its own server (Windows) to service all projects
needing ETL processing.
We are the first
?
are they using odbc?
millions of inserts using odbc over the network?
-Original Message-
From: paquette stephane
[mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 16, 2002 11:26 AM
To: Multiple recipients of list ORACLE-L
Subject: ETL architecture
Hi,
On all DW projects
Thanks.
According to the proof of concept team, it seems to
still be a problem. The product does not seem to handle
any return code from the database processes. Still
investigating.
--- Frank Pettinato [EMAIL PROTECTED] wrote:
Stephane,
We used this approach (albeit on Windows) back in
between the servers I think it's
100 megabits.
--- [EMAIL PROTECTED] wrote: This is a
common approach.
Should be ok if:
* transporting the ftp files is part of the ETL
process
* big fat pipe to the servers.
Jared
paquette stephane [EMAIL PROTECTED]
Sent by: [EMAIL
Usually sub-queries are not the fastest way to do
things.
When a developer is talking about doing things using
cursors, the big red light flashes: uh oh, 3GL thinking
ahead!
The only way to know for sure is to test them. Do not
just check the elapsed time. I've tested 2 scenarios
once and they
This is quite a challenge.
This mailing list is one way to keep up with the pace
of technology. Sometimes I ask myself how come
people have time to post so many messages on the list.
Dan Fink has very good answers. Check the gurus'
web sites. Buy and read a book from time to time.
Snowflake is often used because people still want to
normalize (and save some disk space!), which is not
the way to go to ease querying.
If you do a hybrid data model, your loading will be
easier as you will have fewer problems to solve.
I agree with you, the complexity comes from the number
of
Data modeling in a data warehouse is there to ease
querying and make it faster.
I have always disciplined myself to use star schemas and
never snowflakes.
"Which one is easier to implement and easier for ETL?"
is not a good question, as your data model should
not be designed for the ETL process but
It depends on what you're doing. The use of one join
technique over another depends on how much data you
need to access.
If you read very little information from both tables, then
a nested loop is the fastest way to get data. To use a
nested loop join at a cheap cost you need a good index on
the outer
Hi all,
We had these messages yesterday in the listener.log
file:
TNS-12500: TNS:listener failed to start a dedicated
server process
TNS-12540: TNS:internal limit restriction exceeded
TNS-12560: TNS:protocol adapter error
TNS-00510: Internal limit restriction exceeded
Also on the unix
In home-made applications, by default users have a role
with read only; in the applications we change to the
default role that allows insert, update, delete.
I've not tested this scenario, but how about if, in a
database logon trigger, you check the
v$process.program field and then, depending on that
Oops! You're right.
--- Kevin Lange [EMAIL PROTECTED] wrote: Except
for the fact that they could always change
the program name that they
are running to match what you need. Then that
security is bypassed.
-Original Message-
Sent: Thursday, October 03, 2002 11:08 AM
Thanks.
I do not think so, as DB2 was chosen because we'll be
implementing Siebel (and Siebel is recommending DB2).
There should be 2 DB2 databases, one for Siebel and
one for the staging area, as 8 different data sources
will be loaded into Siebel. So I hope RPG won't fit in
;-)
---
Hi,
Use dynamic SQL (execute immediate).
Also, consider placing your code into a stored proc
called by the trigger.
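A minimal sketch of that advice; the procedure, trigger, and schema names are invented, and the ALTER SESSION statement is only an example of something static PL/SQL cannot run without dynamic SQL:

```sql
-- Stored procedure holding the dynamic SQL, called from the trigger.
CREATE OR REPLACE PROCEDURE set_session_schema (p_schema IN VARCHAR2) IS
BEGIN
   -- EXECUTE IMMEDIATE runs statements that static PL/SQL cannot compile.
   EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = ' || p_schema;
END;
/

CREATE OR REPLACE TRIGGER trg_after_logon
   AFTER LOGON ON DATABASE
BEGIN
   set_session_schema('SIEBEL');
END;
/
```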
--- George Leonard (ZA)
[EMAIL PROTECTED] wrote: Hi guys
I am trying to create the following trigger.
The user in question is logging in using siebel
application and
Hi,
We're discussing reference tables.
One containing everything (using a type column) or one per
entity. We'll have a lot of entities.
This is for a staging area where data will be validated
before going into Siebel. In theory, this staging will
become a very big staging area for a data warehouse and still
in
Hi all,
I'll be seeing the dark side of the force as I'll be
the DBA on a DB2 UDB project.
Is there a list like this one for DB2 ?
Any links to DB2 stuff ?
I'd be interested in documents showing the
differences/similarities between Oracle and DB2 UDB.
Let's see our bargaining power with our
Your update is updating all rows in table1. Is that
what you want?
NOLOGGING only works with direct-path insert, not with
update.
--- Gurelei [EMAIL PROTECTED] wrote: Hi.
I want to update a table based on data in another
table. Something like:
update table1 a set f1 =
(select f2
That's the way I've done it.
It lets you drop a partition and drop the tablespace
so nothing is left.
--- Freeman, Robert [EMAIL PROTECTED]
wrote: We currently are creating partitions of a
given
table in individual
tablespaces (1 partition = one tablespace). To me,
this seems like a
It's late at night, maybe that's why I do not
understand your answer, but I do not see the link
between LMT and the number/size of datafiles.
One reason for multiple datafiles is to spread IO, but
since nowadays a majority of sites go on huge disk
boxes using RAID 5 (that's what we have, the unix
I'm not sure either, as I am rereading a document by
Craig Shallahamer where he says to change PCTUSED
and PCTFREE in order to reduce data block
fragmentation. I have to test that.
At my new job, the DBAs are doing massive
export/imports to reduce fragmentation... (with their
dictionary
Jared,
So, that means that to remedy a case of data block
fragmentation we just need to increase PCTUSED for
the fragmented tables.
Of course, things won't change as fast as an
export/import but it's certainly less work to do.
--- [EMAIL PROTECTED] wrote: John,
Someone asked a
John,
You are right, I just found note 1029850.6 on
Metalink: "A block is relinked to a free list if,
after DELETE or UPDATE operations, the percentage of
the used space falls below PCTUSED."
--- [EMAIL PROTECTED] wrote: Well I was
sure about it until you had the temerity
to
Hi all,
First post since I began a new job.
All my DBA contacts have played with Log Miner but
none of them have deployed it in a production
environment.
We want to set up LogMiner to be used across all
production DBs (25+ databases on Oracle 8.1.7).
The way I see this is the following:
- All
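On 8.1.7 the basic LogMiner session could be sketched roughly as follows; the file names and directory are invented, and UTL_FILE_DIR must allow the dictionary directory:

```sql
-- Build a dictionary file so mined SQL shows table/column names.
EXECUTE dbms_logmnr_d.build('dict.ora', '/tmp');

-- Register an archived log and start the analysis.
EXECUTE dbms_logmnr.add_logfile('/u01/arch/arch_100.log', dbms_logmnr.new);
EXECUTE dbms_logmnr.start_logmnr(dictfilename => '/tmp/dict.ora');

-- Mined changes are exposed to the current session through this view.
SELECT scn, username, sql_redo
  FROM v$logmnr_contents;

EXECUTE dbms_logmnr.end_logmnr;
```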
Hi,
In a test procedure I'm successfully using the default
value feature for a parameter:
create or replace procedure testp1 (p1 in varchar2 :=
null) is
begin
  if (p1 is null) then
    dbms_output.put_line('it is null');
  else
    dbms_output.put_line('it is not null');
  end if;
end;
/
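Called from SQL*Plus, the default kicks in when the argument is omitted:

```sql
SET SERVEROUTPUT ON
-- no argument: p1 defaults to NULL and the first branch runs
EXECUTE testp1;
-- explicit argument: the second branch runs
EXECUTE testp1('x');
```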
I'm not sure I understand exactly what you want, but
here is what I do/believe.
Since 8.1.5 and up to 8.1.7, I use LMTs with uniform extent
size. I keep the number of extents per object below
100. On an 8K block size, you should not have more than
505 extents for an object because that's the number of
paquette stephane [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
08/20/2002 09:48 PM
Please respond to ORACLE-L
To: Multiple recipients of list ORACLE-L
[EMAIL PROTECTED]
cc:
Subject: Re: drop tablespace including
contents
At one client
At one client, one team was using SAP without a DBA,
only the SAP administrator using SAPDBA. They were
having poor performance.
After 2-3 days they came to see me; after 5 minutes I
told them that 4000 tables out of 16,000 had
no statistics at all. They analyzed during the weekend
and
Hi,
Sure, Sybase does not have all the nice features Oracle
has, but I'm a bit surprised that you find it way too
slow; do you have Sybase 12.5?
12.5 has interesting new features compared to the 11.9
version.
20 users doing ad hoc queries ...
I suppose you have a tool like Business Objects or
Cognos
I installed Cognos tools on HP-UX 10.20 a while ago.
The tool was Cognos Transformer. We decided to build
the cubes on the unix box (instead of Win NT) then ftp
the cubes to a Winframe server.
Which tool do you want to install?
--- karthikeyan S [EMAIL PROTECTED]
wrote: Hi,
Is it
At the time, we didn't get better performance on
unix than on windows because the tool was not able to
use more than one CPU. I hope that the current version
can take advantage of multiple CPUs.
--- Vergara, Michael (TEM) [EMAIL PROTECTED]
wrote: We moved away from the Cognos-on-UNIX
Why leave beautiful Montreal for a place where
everything closes at 1 o'clock? ;-)
--- David Hill [EMAIL PROTECTED] wrote:
Hi Guys
I was just wondering if anybody could help me and
send me some contacts
or head hunter's in GTA.
I'm currently Working in Montreal and am looking to
In a data warehouse you want to exploit parallelism as
much as possible, so you're better off with many processors
and independent mount points.
As RAID 5 is very popular because it's cheaper, you
will probably put your data on RAID 5, but fight to put
your redo on non-RAID 5 disks.
Do you have a good
He's on 8.0.5; analytic functions are only available
in 8.1.x.
--- Connor McDonald [EMAIL PROTECTED] wrote:
Take a look at the windowing functions under
'analytic functions' in the data warehouse
guide... They are very
very cool..
hth
connor
--- Jack van Zanen [EMAIL PROTECTED]
DWs are memory hungry...
It really depends on the application needs; there will
be fewer than 20 users, and if the design is good they
won't be hitting raw ultra-detailed data with queries
using only one dimension ...
Depending on the kind of data loads, 4G of RAM can be
quite enough for a small DW.
Do you want to copy or move?
If move, then partition the target table and do an
exchange partition; it's the fastest way to move data.
--- Ji, Richard [EMAIL PROTECTED]
wrote: How about turning off logging and dropping
indexes on the
target table. Do the insert
with the APPEND hint. Re-create
You can see the SQL generated by the report in
Crystal, so take that SQL and run it in SQL*Plus to see
the access plan.
You can also check in v$sqltext for the select run by the
report.
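Finding the report's statement in the shared pool can be sketched like this; the LIKE pattern is only an example:

```sql
-- v$sqltext stores each statement in 64-byte pieces;
-- order by PIECE to reassemble the full text.
SELECT sql_text
  FROM v$sqltext
 WHERE hash_value IN (SELECT hash_value
                        FROM v$sqlarea
                       WHERE sql_text LIKE 'SELECT%FROM ORDERS%')
 ORDER BY hash_value, piece;
```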
--- Baker, Barbara
[EMAIL PROTECTED] wrote:
List:
We have a Crystal report performing badly. (No!, you
From what I've read, globally the 2 databases are equal
in performance, reliability and functionality.
Larry E. has said many times that its only competition
in the database market is DB2.
I guess it really depends on your environment.
Of course Oracle runs on more OSes (used to, anyway),
... what was the partition key? Phylum?
Genus?
--- paquette stephane [EMAIL PROTECTED]
wrote:
We used the date as the key of the time dimension
but
we were using a time dimension to drive the
queries.
At my last client, I was surprised to see no time
dimension
It's like a cartoon I've seen in a French IT magazine.
In the first panel there is the IT director bragging
about his huge mainframe used to track lost luggage
across all airlines.
In the next panel, you see the fresh new employee just
hired from school who says "I just developed on my laptop
a
getting most of the
attention so far, though, as
we try to get the programs deployed properly without
by-hand intervention.
paquette stephane wrote:
Hi,
We will develop a new system that has a central
database (817/win2000).
From time to time, some users will work
. We're instituting
a couple of
applications with this now, and data sync seems to
be working fine.
Application sync has been getting most of the
attention so far, though, as
we try to get the programs deployed properly without
by-hand intervention.
paquette stephane wrote:
Hi,
We
We have used MySQL in a small in-house project.
It works OK for what we want it to do, but it's far
from having Oracle's functionality.
--- DENNIS WILLIAMS [EMAIL PROTECTED]
wrote: Mark
Eweek recently did a head-to-head performance
test of major databases and
included MySQL. You
PROTECTED] wrote:
Yes that's how it works, although the volume for 2-3
months might be
excessive, if the deltas get large. Lite uses
Advanced Replication to
manage the deltas so the resolution process might
take a while.
paquette stephane wrote:
Hi,
Oracle Lite would not be good
Hi,
We will develop a new system that has a central
database (817/win2000).
From time to time, some users will work with a
deployable version of the application in a region
without network connection.
When the users are back, they should be able to
synchronize with the centralized
Hi,
I've never had experiences in replication
environments.
We will develop a new system that has a central
database (817/win2000).
From time to time, some users will work with a
deployable version of the application in a region
without network connection.
When the users are back, they're
When using DELETE, the high-water mark does not
change, so the table still uses what was allocated.
Export/import also does not resize the table.
One way to do it would be to precreate the table with
a smaller size then import the data.
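The precreate-then-import idea can be sketched as follows; the table name and storage figures are invented:

```sql
-- Recreate the table with modest initial storage before importing,
-- so the segment is sized by the data rather than by the old
-- high-water mark.
CREATE TABLE big_log (
   id   NUMBER,
   msg  VARCHAR2(200)
) STORAGE (INITIAL 1M NEXT 1M);

-- Then load the rows back with import, telling it to reuse the
-- precreated table instead of failing because it already exists:
--   imp user/pass tables=BIG_LOG ignore=y
```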
--- mitchell [EMAIL PROTECTED] wrote: Sorry:
I used it a month ago on Oracle 8.1.7 without any
problem.
--- [EMAIL PROTECTED] wrote: Hi
Can anyone give feedback, good or bad, on the
dbms_stats feature of
exporting statistics? Are there any gotchas or does
it work well?
Cheers
--
Luc,
I strongly recommend using a catalog. Without a
catalog, some recovery scenarios are not available.
There are some parameters you have to put in the
init.ora file to indicate the new file locations.
Stéphane
--- Luc Demanche [EMAIL PROTECTED] wrote: Hi
gurus,
I'm in the process of
I'm installing Oracle 9i Developer Suite on WinXP and
the doc says:
"If you use assistive technologies such as screen
readers to work with Java-based applications and
applets, run access_setup.bat before starting your
screen reader."
What the hell are assistive technologies such as
screen
Go to http://www.evdbt.com/papers.htm
You'll find a white paper on SQL*Loader for DW.
--- Smith, Ron L. [EMAIL PROTECTED] wrote: We
have an application that calls SQL*Loader to load
data warehouse tables
every night. We would like to speed up
the loads if possible. Does anyone have
Yes, it is OK to have sequences as the primary keys.
The dimensions should not use keys from the source
systems as their own keys. They must be independent.
Also, since the PKs of the dimensions are foreign keys
in the fact tables, if using a non-generated key you
will increase the size of the fact
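A minimal sketch of a sequence-driven surrogate key; all names and values are invented:

```sql
CREATE SEQUENCE customer_dim_seq;

-- The surrogate key comes from the sequence; the source system's
-- key is kept only as an ordinary attribute for lookups.
INSERT INTO customer_dim (customer_key, source_customer_id, name)
VALUES (customer_dim_seq.NEXTVAL, 'CRM-0042', 'ACME Corp');
```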
As John says, you can check what Oracle is waiting
on.
It may not solve your problem, but it can make the
analyze faster. Since your fact tables are partitioned
and since you're migrating from 8.0.4, that means range
partitioning. If the tables are partitioned by date and
you do allow
I like the "then use coffee-machine information" part.
--- Stephane Faroult [EMAIL PROTECTED] wrote:
Sandeep Kurliye wrote:
Hi Guys,
Sorry, if this sounds bit awkward or unrelated to
this mailing list.
Can any one of you please let me know whether
there is any tool
If your index tablespace is on the same physical
device as your table tablespace, you will have no
gain.
Is your bottleneck an IO one?
--- Seema Singh [EMAIL PROTECTED] wrote:
Hi
I have a few primary key and unique indexes in the
main data tablespace. I am
thinking that if I moved
Check the Data Warehouse Institute:
http://www.dw-institute.com
--- Toepke, Kevin M [EMAIL PROTECTED]
wrote: Hello!
Can anyone recommend a good training class on
DataWarehouse
design/implementation? I have a basic understanding
of the concepts,etc from
reading books, but would like
You can use insert ... select, export/import, or create
table as select to move data from a non-partitioned to a
partitioned table.
Partitioning helps in the management of large tables
more than in speeding up queries.
Will you delete data from that table one day?
Choose the partition key carefully.
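The insert ... select route above, sketched with invented table and column names:

```sql
-- Target table, range-partitioned by month on sale_date.
CREATE TABLE sales_part (
   sale_date DATE,
   amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
   PARTITION p_2002_09 VALUES LESS THAN (TO_DATE('2002-10-01','YYYY-MM-DD')),
   PARTITION p_2002_10 VALUES LESS THAN (TO_DATE('2002-11-01','YYYY-MM-DD'))
);

-- Direct-path copy from the old, non-partitioned table;
-- rows land in the right partitions automatically.
INSERT /*+ APPEND */ INTO sales_part
SELECT sale_date, amount FROM sales_old;
COMMIT;
```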
I remember that a long time ago, with Oracle 6 and
Forms 2.3, the DBA accidentally added a row to dual.
Since, at that time, almost all Forms triggers were
relying on dual, absolutely nothing was working in the
production environment. I do not have to tell you that
the DBA spent many hours before
Hi,
Ask your sysadmin to boost the maximum memory
allowable for a process.
I had that error on an HP box. I do not remember the
names of the parameters. Once the sysadmin had changed
the memory process parameters, processes were able to
use way more than 150M of RAM.
You can increase the
with the environment you
describe,
but it would seem good practice to separate the dev,
test and
maintenance logically, even if they do have to share
hardware.
Jared
On Friday 24 May 2002 13:33, paquette stephane
wrote:
Hi all,
The client has a dev, test, maintenance, QA and
prod
environments
Hi all,
The client has dev, test, maintenance, QA and prod
environments. Each environment consists of a pipeline
of several applications.
QA and prod have their own independent pipelines with
their own servers with Oracle 9i, Oracle 8i, Workflow,
Portal and 9iAS.
Dev, test and maintenance
You can install Java with scripts.
Check Metalink note 156477.1.
Use more space (10-20M more) than specified in the
note for java_pool_size and shared_pool.
--- Joe LaCascio [EMAIL PROTECTED] wrote:
Hi folks:
I'm installing IAS on our web server which is a DEC
Alpha 800, one of the
The oratab is provided by Oracle Corp.
We created our own oratab file to handle more
parameters.
--- Babu Nagarajan [EMAIL PROTECTED] wrote:
All
On one of the database servers we have, the oratab
file has been changed to include a :parameter
after each entry and that parameter is
ALTER TABLE ... EXCHANGE PARTITION lets you transfer
data from the partition of a partitioned table to a
non-partitioned table. It changes the address in the
data dictionary; no data is moved, and that's why it is
fast.
For example, I'm using it in a system to exchange old
data with new data. The
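A sketch of the statement; the names are invented, and the stand-alone table must match the partitioned table's column layout:

```sql
-- Swap the October partition with a stand-alone table: only data
-- dictionary pointers change hands, so it completes almost instantly.
ALTER TABLE sales_fact
  EXCHANGE PARTITION p_2002_10
  WITH TABLE sales_oct_stage
  INCLUDING INDEXES
  WITHOUT VALIDATION;
```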
As for the address, you have more flexibility when
storing each part separately.
Stephane
--- Suzy Vordos [EMAIL PROTECTED] wrote:
Curious how people are storing phone info in their
database, eg.,
separate columns for country code, city code, area
code, etc. Any
references about
It was ODL: Oracle Data Loader.
--- paquette stephane [EMAIL PROTECTED]
wrote: Since we're in old stuff.
Anybody remember the ancestor of Sql*loader ?
Answers in 10 minutes !
--- [EMAIL PROTECTED] wrote: Stephane,
I remember using Pro*C with Lattice C and/or
Vax
A DW stores data in a denormalized fashion; that's one
point everybody knows, but it is also subject
oriented. It is also developed one subject at a time.
A DW is multi-subject, whereas a data mart is on one
subject.
From Bill Inmon:
From the data warehouse, data flows to various
departments
paquette stephane
stephane_paquette@yahoo.com
To: Multiple recipients of list ORACLE-L
[EMAIL PROTECTED]
cc:
I've found out how to generate without the storage
clause.
--- paquette stephane [EMAIL PROTECTED]
wrote: I've been looking for that in Designer; where
is it?
I've used Power*AMC a lot in the past and it has
that feature.
We're using Oracle Designer 6.5.52.1.0
TIA
--- [EMAIL
Rachel,
If a DW is built and users do not have access to
part of it in an ad hoc fashion, you're going to have a
lot of political meetings.
They should have some data marts for their usage and
keep most of them off the raw data.
Regarding SAS tools, I used SAS more than 10 years
ago in
I assume you're talking about the dbassist tool.
If it works as on unix, you should find the log
files in $ORACLE_BASE/admin/$ORACLE_SID/create.
I prefer the old way, using scripts; this way I can
rerun the scripts for all the different
environments.
HTH
--- KENNETH JANUSZ [EMAIL
Hi,
Oracle 817/Solaris 8.
Users are doing selects joining the PKs of 2
partitioned tables. The partition key and the primary
key are the same.
The access plan is a nested loop with a full table
scan on the first table, which holds 700 000 rows.
The block size is 16K; I assume that's why
I also think that since those indexes are created by
Oracle, Oracle knows them.
I'll trace the dbms_stats and I'll look for the
'bitand(flag,)'
--- Jonathan Lewis [EMAIL PROTECTED]
wrote: It would make sense,
I would expect Oracle to take a shortcut
with LOB Indexes, simply
I've hit bug 1499329.
As a workaround, I'm analyzing the tables in the
staging environment, then I'm doing an exchange
partition.
I can analyze the tables/indexes without problem in
the staging environment.
My question is: when creating a CLOB, Oracle creates a
SYS_...$$ index. When analyzing
Thanks for replying.
I've seen your post on a similar question on Metalink.
I've already tested with the constraints enabled
novalidate using the default exchange mode (with
validation); the exchange takes 3 seconds.
We'll go this way since we're completely replacing the
target tables with
I just started a couple days ago at this client.
They're using Hitachi technology in the QA and prod
environment with 181G disk. I asked the SA twice and
he confirmed the 181 G disk.
I'll ask the SA for more details as soon as I know him
better.
--- Deshpande, Kirti [EMAIL PROTECTED]
wrote:
I'm testing the exchange partition and it's taking 90
seconds to exchange a table containing 700 000 rows
with a partition also containing 700 000 rows.
I've noticed that SYS is doing a crazy select to check
on the PK of the tables even if I used WITHOUT
VALIDATION in the exchange statement.
I
I've been on MetaLink since 7:30 this morning (3 hours).
Stéphane
--- Boivin, Patrice J [EMAIL PROTECTED]
wrote: FYI,
I just went to MetaLink and clicked on the Login to
MetaLink link, and got
this:
Authorization Required
This server could not verify that you are authorized
to access
Hi,
I'm trying the following: insert /*+ append */ into t1
select * from t2;
t1 is created with the nologging attribute.
The insert is not using the hint at all.
I can select from t1 (before any commit), which I should
not be able to do if the append hint was used.
Any way to get the hint used?
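One commonly used check for whether the direct path was really taken, based on the behaviour described above:

```sql
INSERT /*+ APPEND */ INTO t1
SELECT * FROM t2;

-- If the direct path was used, this SELECT in the same session
-- fails with ORA-12838 (cannot read/modify an object after
-- modifying it in parallel) until the transaction is committed.
SELECT COUNT(*) FROM t1;

COMMIT;
```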
Hi all,
Back on the list after being away for a while.
I'm currently at a new client where things are to be done
yesterday (as usual).
We must move data (1Gig) from a staging area to the
final area; tables have integrity constraints and
tables contain CLOB datatypes.
The 2 areas are in 2 different
Hi,
I'm trying to transfer data from SQL Server 2000 into
Oracle 9.0.1/Win2000.
I've set up heterogeneous services on Oracle:
- ran caths.sql
- created an ODBC connection
- modified the tnsnames.ora
- modified the listener.ora and restarted it
- created an ORACLE_HOME\hs\admin\initXXX.ora
-
I've not worked with an IBM P660, but I have worked in
data warehouse environments with RAID 5.
RAID 5 will slow down your loads. The impact may not be
too bad if you're in noarchivelog, using direct-path
inserts with nologging and loading on a monthly basis.
-Original Message-
Behalf Of
Hi,
Has anybody built a multi-language data warehouse?
The star schema is quite pure, so only the data in the
dimensions needs to be in French and in English.
Up to now the query tool is Oracle Discoverer.
Has anybody done this?
=
Stéphane Paquette
Oracle DBA, data warehouse consultant