Jared,
Oracle gives you a chance to use it, and it's very commonly used, isn't it?
There is no concept of physical location in relational theory. I'm not
saying that pure theory is the best for practical use, though. :)
BTW, Oracle stores ROWID in indexes... instead of primary key (which is
Hi friends,
I have to load an input file into an Oracle table. The table has only
one field, which will store each line as a single row. The order of the file
is very important for my further process. Sometimes the order of the lines is
changed when the file is stored in the table. There is no
You can call them by building a wrapper in Java in the DB and have it call
the external procedure. You can expose that internal Java to PL/SQL.
RTFM and get back with any further queries. The Oracle Java documents have
quite a bit of useful info on this
Cheers
--
Hi,
Please can someone explain the consequences of "reset database" in
RMAN? Does this mean all the previous backups are lost? Is there a way around
it?
Any
help/advice would be greatly appreciated...
Rgds
Hello Shankar,
You can't rely on rowid in your case. It would be much better to add an
additional field to your table and write the record number to it.
You can generate the record number automatically with SQL*Loader.
Here is a simple example of how to do it (table and column names are placeholders):
LOAD DATA
INFILE 'yourfile.dat'
BADFILE 'load.bad'
INSERT
INTO TABLE yourtable
(line_no  RECNUM,
 line_txt POSITION(1:4000) CHAR)
Hi
Can anybody explain the events like
SQL*Net message from client,rdbms ipc message
PX Idle Wait ,slave wait ...
Can I assume an I/O bottleneck from the following
statistics, as most of the I/O events have high
wait times?
select * from v$system_event
order by TIME_WAITED;
The last few
hi all
I am facing a problem when importing an Oracle 8.1.6 export dmp file into an Oracle 8.1.5 database.
Please suggest a solution.
bye
srinivas rao
Hi all,
My export dumps are too big (80 GB) for my filesystem and I'm looking for a
way to compress them on the fly - i.e. without writing *.dmp to disk first, but
going straight to *.dmp.gz.
Anybody with an idea on how to achieve this?
Thanking you,
---
CSW
æ¬zǶ¨}ø©ND ±@Bm§ÿðÃ
sorry
I usually do - I can't access the web at the mo, that's all - but thanks for
this.
-Original Message-
From: Peter Gram [mailto:[EMAIL PROTECTED]]
Sent: 16 May 2002 10:05
To: Malik, Fawzia
Cc: [EMAIL
Hi friends,
I have to load a input file into a oracle
table . The table has only
one field which will store each line as a single
row . The order of the file
is very important for my further process.
Perhaps you should re-read a paper published in 1970 in 'Communications of the ACM' by
Can I assume a i/o bottleneck from the following
select * from v$system_event
order by TIME_WAITED;
No. Wait events may only make up a small amount of processing that Oracle
is doing for you.
--
Please see the official ORACLE-L FAQ: http://www.orafaq.com
--
Author: Greg Moore
INET:
Hi DBA's,
I have the following problem:
on a node I have some tables and an MV_master (done with FAST refresh) built over these
tables.
I have to bring this MV_master onto the DB servers, but
it's not possible to do this using a snapshot log on MV_master and building the
MV_slaves using the refresh
Hi,
far better than getting individual descriptions is to research for yourself
(this isn't an RTFM message honestly).
Good sources of information for this subject are Anjo Kolk's excellent
document on wait events and enqueues
(http://www.dbatoolbox.com/WP2001/dbamisc/events.pdf) and the Oracle
Can someone please tell me how the value displayed in the Used M column is
calculated/derived. I'm running DBA Studio (stand alone) and I'm trying to
relate it back to DBA view data. Any help would be much appreciated.
-
Seán O' Neill
Organon (Ireland) Ltd.
[subscribed:
Hello Simon,
You can make it like this:
mkfifo yourfifo
gzip < yourfifo > outfile.dmp.gz &
exp ... file=yourfifo
rm yourfifo
Thursday, May 16, 2002, 4:38:40 PM, you wrote:
SW Hi all,
SW My export dumps are too big (80 GB) for my filesystem and I'm looking for a
SW way
SW to compress them on
The client is not willing to use utl_file for reading data from the file.
I have to fetch each line from the file and process it for storing the
information in various tables. So apart from SQL*Loader, can you suggest another
method (excluding utl_file) for doing the same operation?
Best Regards,
Arun,
Here are a couple of files (a .bat and .sql) that let me maintain a
constant number of Archived Redo Logs online.
The first batch file executes SQL*Plus to produce two other batch files to
delete the excess logs and move some others, maintaining, in this case
about 450 logs. It ran every
On Thu, May 16, 2002 at 01:53:20AM -0800, Pati Srinivas Rao wrote:
hi all
i am facing problem , when i am importing oracle 8.1.6 backup dmp file into oracle
8.1.5 database.
You have to use the exp from the lowest ora version, 8.1.5,
on the 8.1.6 host. Note:132904.1 has the
[EMAIL PROTECTED] wrote:
kick the power cable to your server...
Could work.
There's a story (urban legend?) about a Sybase server at a brokerage house in
NYC that would not return correct results for an important SQL statement.
They'd been working on it for weeks with Sybase.
don't know
If you are on Unix, you can pipe the export into a split command and break the
file into multiples and compress on the fly. There's a note on metalink about it
(note 30528.1)
Also, I *think* in 8.1.7 you can specify the size and names of the export files,
so that Oracle will automatically
Hi,
if you have a link between the 8.1.5 DB and the 8.1.6 DB you should
run exp USER/PASSW@8_1_6_SID from the 8.1.5 environment to
create an 8.1.5-compatible export file; then run the import as you usually do.
Bye
Francesco
thanks everyone for the help. =)
Sergey V Dolgov wrote:
Hello Maria,
You should look at your file in some hex editor; these symbols might be
characters with hex code 0A or 0D 0A (i.e., newline).
So you have to set the correct RS value.
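For example, here is a minimal sketch of the RS suggestion, assuming the stray bytes are DOS 0D 0A line endings and that awk is gawk or mawk (which accept a multi-character RS); the file name is made up:

```shell
# The 0D 0A bytes are DOS line endings; setting awk's record
# separator (RS) to "\r\n" makes each line one clean record.
printf 'line1\r\nline2\r\n' > dosfile.txt
recs=$(awk 'BEGIN { RS = "\r\n" } NF { print $0 }' dosfile.txt)
echo "$recs"
rm dosfile.txt
```

With the default RS of "\n", each record would keep a trailing carriage return and comparisons against clean strings would fail.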
Wednesday, May 15, 2002, 3:28:38 PM, you wrote:
Sergey,
Thanks for the mkfifo idea. I've also come across mknod myfifo p; compress
< myfifo > myfifo.Z.
I'm looking at the two options, yet to ascertain whether the second method works
with gzip.
Do you know of any known troubles (block/file corruption) with the first method?
Do I have to rm yourfifo or I
Yes, 8.1.7 exp has 2 new parameters:
filesize=53687058420     # 50gb
file=file1.dmp,file2.dmp # 2 files
It helps when exporting a 1tb db. Using direct=y gets it done in 4 hours.
FWIW,
Gene
[EMAIL PROTECTED] 05/16/02 09:13AM
If you are on Unix, you can pipe the export into a split
[EMAIL PROTECTED] wrote:
If you are on Unix, you can pipe the export into a split command and break the
file into multiples and compress on the fly. There's a note on metalink about it
(note 30528.1)
Also, I *think* in 8.1.7 you can specify the size and names of the export files,
so
So DBA/ALL/USER_TAB_MODIFICATIONS cannot be used to determine an accurate
count of how many records were updated, but it can be used to determine if the
table has been updated, and give you a general feel for how much has been
updated.
And it is used by the GATHER STALE parameter in
Fawzia - Why do you think you need to perform a reset database? Have you
performed an incomplete recovery on the database (opened it with RESETLOGS
option)? If so, you have created a new incarnation of your database.
Therefore, none of your backups are valid because they occurred prior to
If you are on Unix, you can pipe the export into a split command and
break the file into multiples and compress on the fly. There's a note on
metalink about it (note 30528.1)
Easier if you split the zipped result:
mknod /tmp/dump p;
gzip --fast < /tmp/dump | split -b
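A runnable sketch of the pipe-and-split idea (all names and sizes are made up, and dd stands in for exp writing to the pipe, so the sketch runs without a database):

```shell
# Compress a stream through a named pipe and split it into fixed-size
# pieces; in real use exp would write to the pipe via file=exp_pipe.
set -e
work=$(mktemp -d)
cd "$work"
mkfifo exp_pipe
# reader side, backgrounded before the writer opens the pipe:
gzip --fast < exp_pipe | split -b 1m - expdat.dmp.gz. &
# writer side: dd stands in for "exp ... file=exp_pipe"
dd if=/dev/zero bs=1024 count=4096 of=exp_pipe 2>/dev/null
wait
# reassemble the pieces and verify the round trip
bytes=$(cat expdat.dmp.gz.* | gunzip | wc -c | tr -d ' ')
echo "round trip: $bytes bytes"
```

To restore, the pieces are concatenated and gunzipped back into a pipe that imp reads from.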
Can anyone tell me if an ops database can be brought down because of too few
locks being allocated for the database?
Thanks,
Bryan
Author: Rodrigues, Bryan
INET: [EMAIL PROTECTED]
Fat City Network Services-- (858)
Murali
I have seen them occasionally in our 8.1.7 database, generated during a
database recovery. The recoveries in question complete normally so I
haven't researched the details.
Mike Hand
Polaroid Corp
-Original Message-
Sent: Wednesday, May 15, 2002 9:48 PM
To: Multiple recipients
Chris,
Do you know anything about monitoring and gathering stale statistics on
table partitions?
I am able to monitor and gather stale statistics on partitioned tables at
the table level
but don't seem to be doing so at the partition level.
I can't figure out how to alter my partitions to put
I don't know whether this is a tangent, but I notice that on the windows
platform, compressed exports can still get 85% compression when zipping
them with WinZip.
Obviously Oracle compressed=y doesn't mean compress the export file, it
just means that it places all the segments contiguously in
Hello,
Apologies for the slightly off-topic listing, but I
know there are several unix command gurus out there. (Bambi?)
Oracle 8.1.6 on Solaris 2.7.
I am trying to execute an rsh command against another
unix server; the actual command is
rsh pnas1 chkpntmk oradata ckpt1
if [ $? != 0 ];
We've found that when we use RMAN to recover an exact clone of a database
with the same name as the original to an alternate host (perhaps as test
database or fallback database while upgrading) we have to reset the
database in the RMAN catalog if both databases are in the same RMAN
catalog.
This
Chris, I remember from Metalink that you cannot use the 'gather stale'
option in dbms_stats.gather_schema_stats. There is a bug in 8i and supposed
to be fixed in 9i. So they still advise to run a job daily to gather the
correct statistics.
Thank you Gopalakrishnan, Kirti and John for clearing my
Oracle 8.0.5
Solaris 2.6
shared_pool_reserved_min_alloc 5K
shared_pool_reserved_size 6656000
shared_pool_size 13312
total sga size is 597 megs
I'm fighting a particularly difficult ora-04031 error. The error can be
reproduced easily with
Patrice - That would be correct. If you run export interactively, the prompt
that is provided is compress extents (y/n).
Dennis Williams
DBA
Lifetouch, Inc.
[EMAIL PROTECTED]
-Original Message-
Sent: Thursday, May 16, 2002 10:08 AM
To: Multiple recipients of list ORACLE-L
I don't know
there is a BIG difference between the COMPRESS=Y parameter on an export and
compressing a file!
the parameter changes the create table statement placed in the export file so
that the initial extent is large enough to hold the entire table. It does NOT
affect the size of the export dump file in
Precisely :)
- Kirti
-Original Message-
Sent: Thursday, May 16, 2002 9:23 AM
To: Multiple recipients of list ORACLE-L
So DBA/ALL/USER_TAB_MODIFICATIONS cannot be used to determine an accurate
count of how many records were updated, but it can used to determine if the
table has been
Doesn't it mean that all rows are compressed into 1 extent?
-Original Message-
Sent: Thursday, May 16, 2002 12:08 PM
To: Multiple recipients of list ORACLE-L
Subject:RE: Compressing Export Dumps / WinZip
I don't know whether this is a tangent, but I notice that on the
That's right. compress=y is the export default; it causes all
extents of an object to be combined into one.
--- Boivin, Patrice J [EMAIL PROTECTED] wrote:
I don't know whether this is a tangent, but I notice that on the
windows
platform, compressed exports can still get 85% compression when
Thanks Prakash.
I was poking around in Metalink and discovered it. Luckily, I'm on 9i, so I
will be checking out this feature.
-Original Message-
Sent: Thursday, May 16, 2002 11:24 AM
To: Multiple recipients of list ORACLE-L
Chris, I remember from Metalink that you cannot use the
Not that I'm aware of are you thinking about the
enqueue_resources parameter? It is dynamically adjusted
by Oracle as needed.
RF
-Original Message-
Sent: Thursday, May 16, 2002 10:38 AM
To: Multiple recipients of list ORACLE-L
Can anyone tell me if an ops database can be brought
Has anyone noticed that the number of event waits in 9i seems much
higher than in 8i?
This is the number, not the time waited mind you, so this doesn't really
have performance
implications. I'm just wondering if this betrays some internal code changes
in the way Oracle
is reporting these
perusing the archives of the list i've gotten the impression that virus
packages are frowned on; it seems that the endorsed methodology is the
restricted-access-purely-this-solution approach.
by that, the server *only* runs oracle and is configured to only allow
access to that resource.
we are
Patrice,
Yes, that's right. On our Tru64 Unix platforms, I was amazed to find that
even though I knew the export dump files to be binary files and (what I
assumed to be) not only Oracle-extent-compressed, but
binary-data-compressed...that in using gzip/gunzip we were achieving
compression
Title: RE: Export multiple targets using OEM
I am using OEM 9.0.1 with job scheduler
I am trying to export full database for multiple targets using the job scheduler.
How do you specify more than one .dmp name and location? I want a .dmp and .log file for each target.
Even though I
Depends on what you mean by down, doesn't it? Many define down as
unusable, despite the fact that connections can be made and SQL statements
can be processed.
If you're referring to the parameters that start with the prefix LM_*,
then no, the database instance won't crash/halt/abend. If you
The compress=y option doesn't have any effect on how data is stored in the
export dump file, only some of the metadata.
It directs the EXP program to recalculate the DDL for all of the tables and
indexes (instead of just using the settings in the data dictionary) so that
all space previously
OK, is this a joke? If not, I think someone needs to crack
the Oracle Utilities manual...
RF
-Original Message-
Sent: Thursday, May 16, 2002 11:08 AM
To: Multiple recipients of list ORACLE-L
I don't know whether this is a tangent, but I notice that on the windows
platform,
Is there a way to check for the success/failure of the actual remote
command when using rsh?
a=$(rsh blah)
and parse $a for output for an indication of the blah
command succeeding or failing.
--
Steven Lembark 2930 W. Palmer
Workhorse Computing
Cherie,
Looking at the structure of user_ind_partitions table, I don't see a
monitoring column. So I guess you can monitor only at the table level.
Prakash
-Original Message-
Sent: Thursday, May 16, 2002 11:03 AM
To: Multiple recipients of list ORACLE-L
Chris,
Do you know anything
Hello all,
There is a requirement to install our application at a site that will have
both Oracle (8.1.7) and Unix (AIX 5.1) configured in the French language.
Can anybody please advise me of anything that I need to be aware of in
relation to the differences between an English and a French
I found two bugs on Metalink dealing with this. The first, 1890016, can be
ignored because the GATHER only fails if you specify an invalid granularity.
Well duh. The second, bug 1192012, will only cause the first table in the
schema to be skipped.
In our case, the first table is first
Murali:
The transaction can be considered DEAD for a number of reasons. You can see
the status of the transaction at any point in time by querying X$KTUXE.
KTUXESTA will give you the transaction status for any given transaction and
KTUXEFL will give the transaction flag (DEAD if it is
Yup...
-Original Message-
Sent: Thursday, May 16, 2002 11:44 AM
To: Multiple recipients of list ORACLE-L
Doesn't it mean that all rows are compressed into 1 extent?
-Original Message-
Sent: Thursday, May 16, 2002 12:08 PM
To: Multiple recipients of list ORACLE-L
I need a routine which removes archive logs via RMAN tape backups if the
archive log destination exceeds half full. I already have the RMAN part
which we can kick off manually but I'm looking for something like a basic
cron job monitoring script which triggers this based on the half full
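A minimal sketch of such a cron-driven monitor, with the path, the threshold, and the RMAN script name (rman_arch_backup.sh) all being assumptions:

```shell
# Fire the (already written) RMAN archive-log backup once the archive
# destination passes 50% full; run this from cron every few minutes.
ARCH_DEST=${ARCH_DEST:-.}   # real use: the log_archive_dest path
THRESHOLD=50
# df -P prints POSIX output; field 5 of line 2 is the Use% figure.
pct=$(df -P "$ARCH_DEST" | awk 'NR == 2 { sub(/%/, "", $5); print $5 }')
if [ "$pct" -ge "$THRESHOLD" ]; then
    echo "archive dest ${pct}% full - starting RMAN backup"
    # /path/to/rman_arch_backup.sh   # hypothetical: the manual RMAN piece
else
    echo "archive dest ${pct}% full - below threshold, nothing to do"
fi
```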
Bill -
http://www.orafaq.com/faqiexp.htm#SPEED for some tips.
Do you have any alternatives to importing? Transportable tablespaces,
database cloning, SQL*Net, for example?
If your server has multiple CPUs, you can start multiple import sessions.
Dennis Williams
DBA
Lifetouch, Inc.
[EMAIL
Internal code changes = additional features = fine-grained (event) reporting?
Best Regards,
K Gopalakrishnan
Bangalore, INDIA
- Original Message -
To: Multiple recipients of list ORACLE-L [EMAIL PROTECTED]
Sent: Thursday, May 16, 2002 9:13 PM
Has anyone noticed that the number of event
Lerone - This was discussed awhile back, so you may want to search the
archives. As I recall, the advice was pretty much along the lines you have
proposed, to avoid scanning the large Oracle dbf files because you are
wasting a lot of your system resources; since dbf files aren't executed, they
won't
Large chunks of export files are entirely readable. You can open them in vi
and read them, because character data is stored as plain text, hence the
potentially good compression rates.
I agree I wouldn't like to have to decipher a lot of numeric fields.
Regards,
Mike Hately
BTW Robert (Freeman),
Why have you used commit=n?
Throw a decent size buffer at it and use commit=y. You could also use
indexes=n and rebuild them after with the nologging option
HTH
Lee
-Original Message-
Sent: 16 May 2002 17:09
To: Multiple recipients of list ORACLE-L
I have a 17Gb db that I need
Hi All,
I've currently been working on loading many external binary files (PDF) into a BLOB column. After some digging, I learned that SQL*Loader can be used to load data from external files into a table. I also got help from another forum mate mentioning a PL/SQL procedure to do so. Since I have
With McAfee you can exclude some directories from VirusShield, which speeds
things up a bit if you have to use McAfee.
since you mentioned McAfee... if you have to run Oracle on IDE drives
download DMACheck from Microsoft if you want to try using UDMA.
Another question from me... is it a bad
All,
Oracle 8.0.5, Tru64 4.0f.
I was doing a Statspack analysis and noticed that we had "latch problems". I
drilled in a bit further and it would appear that the issue was down to cache
buffer chains.
The Metalink article (I was flying blind here) states "To identify the heavily
yes.
I just wanted to verify though.
One DBA answered that I must be joking...
: )
Regards,
Patrice Boivin
Systems Analyst (Oracle Certified DBA)
Systems Admin Operations | Admin. et Exploit. des systèmes
Technology Services| Services technologiques
Informatics Branch |
Tim,
I may be wrong but I thought that compress=y just adds up the total space
allocation from SEG$ rather than calculating them from storage parameters. I
know this is a trivial point but I'd appreciate the info if it sets about
things differently.
regards,
Mike Hately
-Original
Using compress=y means only that the value of the initial storage
parameter written to the create DDL statement in the .dmp file gets
set to the value of select sum(bytes) from dba_extents where owner=:v1
and segment_name=:v2.
compress=y is a wretched, awful thing for a number of reasons, not
But what if command blah does not output anything? In this
case, $a is null, as it is when the command fails.
Steven Lembark wrote:
Is there a way to check for the success/failure of the actual remote
command when using rsh?
$a=$(rsh blah);
and parse $a for output for an indication of
Simon,
I'm curious as to why you're creating exports that large.
Are you doing this as a backup method?
Have you ever restored an export that large?
The largest export I've ever restored is about 10 gig, and
it took far too long.
Jared
Simon Waibale [EMAIL PROTECTED]
Sent by: [EMAIL
thanks, just wanted to double-check.
it's scsi disks btw... guess i should have said that...
=-=-=-=-=-=-=-=-=-=-=
lerone
=-=-=-=-=-=-=-=-=-=-=
-Original Message-
Sent: Thursday, May 16, 2002 12:33 PM
To: Multiple recipients of list ORACLE-L
Lerone - This was discussed awhile back,
will try multiple imports - can they go against the same physical dump file?
or do I need to copy the dump file for each separate import?
will also restart with analyze=n - we're using RBO anyway
seem to be two ways with COMMIT param -
COMMIT = Y and a large buffer (someone else's post)
COMMIT
I do it all the time with a line like this:
rsh $1 ". ${vTARGETPROFILE}; mkdir $2; echo \$?"
In this case, I am making a directory called $2 on host $1. The remote command
sets the exit value, and the trailing echo \$? lets you pick that value up on the
calling machine.
You could also do it like this:
rsh
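The trick can be sketched end to end like this (with sh -c standing in for rsh host so it runs without a remote machine; the host and command names are made up):

```shell
# rsh reports only its own transport status, so have the remote shell
# echo $? as the final output line and peel it off on the caller.
run_remote() {
    # real use: rsh "$1" "$2; echo \$?"
    sh -c "$2; echo \$?"   # local stand-in so the sketch is runnable
}
out=$(run_remote pnas1 "echo checkpoint done; false")
status=$(printf '%s\n' "$out" | tail -1)   # last line = remote $?
body=$(printf '%s\n' "$out" | sed '$d')    # the command's real output
echo "remote output: $body"
echo "remote status: $status"
```

This also answers the "what if the command outputs nothing" objection: the status line is always present, so $out is never empty on a completed remote shell.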
-- Bill Becker [EMAIL PROTECTED]
But what if command blah does not output anything? In this
case, $a is null, as it is when the command fails.
Either:
Look for a success message and change the sense of the test.
Run the remote command in verbose mode.
Wrap the remote command
It all depends which words you use -- sorry for the ambiguity...
As Cary replied earlier, EXP just queries sys.seg$ (i.e. DBA_SEGMENTS) to
find the bytes and uses that for the newly-calculated INITIAL. This can be
seen in a SQL Trace initiated on the EXP's server process...
- Original
You need to check dba_part% views to see information regarding partitioned tables.
VIEW_NAME
--
DBA_PARTIAL_DROP_TABS
Isn't that something to do with 9i being able to report wait times in
nanoseconds instead of (milliseconds? or microseconds?) in previous versions?
Raj
__
Rajendra Jamadagni MIS, ESPN Inc.
Rajendra dot Jamadagni at ESPN dot com
Thanks to the wonderful search capabilities that Steve Adams has installed
on his website at www.ixora.com.au, the following page has some more
information about the X$KSMLRU fixed-table
(http://www.zoftware.org/tuning/tune_shared_pool.html#fixed_table)...
I did an advanced search on MetaLink
Bill - You don't say whether your system has multiple CPUs. That will
SERIOUSLY affect the advantage from multiple import jobs. You will have to
experiment with the number of import jobs that seem to produce the greatest
overall performance.
You can have multiple import jobs read the same
Alexander,
OK, we're splitting hairs here. :)
Of course ROWID's are stored in indexes, the database
has to be able to locate the rows. They are an internal mechanism
and not part of the user data.
And yes they can be used, and safely in certain situations. Updating
a row in PL/SQL comes to
I believe Cherie is looking to turn it on for specific partitions...not
always for the whole table. Which is related to why we have partitions in
the first place...
-Original Message-
Sent: Thursday, May 16, 2002 1:48 PM
To: Multiple recipients of list ORACLE-L
You need to check
Tim,
If I understood this correctly, you are saying that a DBMS_LOCK.SLEEP(600)
call
would tie up an MTS shared server for 10 minutes causing other sessions
connected to it
to hang for 10 minutes?
Jared
Tim Gorman [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
05/15/2002 09:58 PM
Please
Bill,
If the tables already exist, drop all indexes, FK and PK constraints. They
will
be re-created by the import and this will greatly speed things up.
Try setting an obscenely large SORT_AREA_SIZE before running the import
to speed up index creation. ( Like 50 - 100 meg )
Don't forget to
Does anyone know how to remove the default value from a column?
The following script illustrates:
set long 40
drop table i;
create table i ( i varchar2(10) null);
alter table i modify ( i default null );
select column_name, nullable, data_default
from user_tab_columns
where table_name = 'I'
/
Jared,
Since we have MTS around here for some applications and we also use
DBMS_LOCK.SLEEP, Tim is right and wrong. It does tie up a shared server for the
time of the sleep, but since a shared server can service one and only one
session at a time it should not affect anyone else. Of course
Tim,
I don't know if it matters, but we faced the same error in 9x, and when we
set the hash_area_size to 1M, it went away. The exact error message for us
was
ORA-04031: unable to allocate 1126656 bytes of shared memory (shared
pool,unknown object,hash-join subh,kllcqc:kllcqslt)
The 3rd
Not exactly. The granularity of capturing times increased in 9i, but as
Gopal implied, there are just a lot more wait events in 9i as compared to
the previous releases.
Check this link out to see what new events were introduced in 9i :
http://www.oraperf.com/reference.html and click on Wait
Raj:
Oracle9i gives timing information in microseconds, not nanoseconds,
though modern CPU clocks tick in nanoseconds.
The older versions (8i and below) give timing info in centiseconds
(1/100th of a second).
Best Regards,
K Gopalakrishnan
Bangalore, INDIA
- Original Message -
NAME                    TYPE    VALUE
----------------------- ------- --------
sort_area_retained_size integer 0
sort_area_size          integer 2097152
hash_area_size          integer 20971520
The developers might
You might notice more total event completions in 9i because there are
about 50% more segments of kernel code that are instrumented in 9i than
there were in 8i (~200 events in 8i, ~300 in 9i).
Clock granularity is 0.01 in 8i, so events that complete in the same
0.01-sec quantum as they began will
Yup! Easy to prove...
In one SQL*Plus session, connect as MTS and verify shared connection...
$ sqlplus perfstat
SQL*Plus: Release 8.0.6.0.0 - Production on Thu May 16 11:20:52 2002
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Enter password:
Has anybody duplicated a database from a previous incarnation? Oracle tells me that I
should just be able to issue a RESET DATABASE TO inc#. I am a little worried about
doing this when connected to my production database and catalog (as required for
duplicating). I would like to hear
Tim,
You hit the nail right on the head.
Thanks for your answer,
Bryan
-Original Message-
Sent: Thursday, May 16, 2002 11:59 AM
To: Multiple recipients of list ORACLE-L
Depends on what you mean by down, doesn't it? Many define down as
unusable, despite the fact that connections
I was just about to post a message asking the same thing. Many of us have seen
databases produce dumps which at first were much smaller than 2 GB, then we had to
pipe them through the native compress utility on UNIX to keep them that way, then we
used gzip which does a better job of
Chris,
Actually, sometimes I want to be able to just gather statistics for a
single stale partition. In my date-based partitioning, usually only the
most recent partition has data changes in it. The older partitions do not
change at all. It would surely be nice to monitor on a
thanks I'll try that . . . bouncing db now
-Original Message-
Sent: Thursday, May 16, 2002 1:10 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Bill,
If the tables already exist, drop all indexes, FK and PK constraints. They
will
be re-created by the import and this will greatly