On Fri, 8 Jun 2001, Steve Sapovits wrote:
..
> We've also found that committing in blocks speeds up heavy
> inserts with Perl/DBI/Oracle. How big depends on the job.
> Most people do all or nothing: AutoCommit or no commit and
> a rollback or commit at the end. Something in between usually
> is
On Fri, Jun 08, 2001 at 10:55:27AM -0400, Marcotullio, Angelo wrote:
> I need to extract about 500,000,000 rows from a table to a text file for
> loading into another database.
I regularly work with 20-40 Million. Yesterday I took a 20 Million row
file, split the file into 10 parts using a per
Hello.
I am worried that my email, which includes Korean characters in the name
header, may have crashed someone's mail client.
Has anyone had this problem?
Sorry for this email unrelated to DBI.
At 3:25 PM -0700 6/8/01, Dong Wang wrote:
>It seems that MySQL is removing trailing spaces in INSERT statements.
>
>mysql> create table temp (c1 varchar(10));
>Query OK, 0 rows affected (0.08 sec)
>
>mysql> insert into temp values ('a ');
>Query OK, 1 row affected (0.00 sec)
>
>mysql> select con
projectperl wrote:
>
> Is there a CGI Script similar to MySQLMAN that would support DBD::RAM, DBD::AnyData?
>
> please email replies,
> dk henderson
Well, I have a bunch of non-generic ones doing specific things on
various websites. The AnyData::Format::HTMLtable module should provide
a good t
Hi all,
Environment: Perl 5.0
OS: HP-UX
Oracle: 8.1.9
I installed the DBI module (DBI-1.16) without problems on an HP-UX
machine, but DBD::Oracle (DBD-Oracle-1.06) didn't install. I had to set
the ORACLE_HOME variable, which I did. Still no
go. During the compile (make) the compile aborts wi
Is there a CGI Script similar to MySQLMAN that would support DBD::RAM, DBD::AnyData?
please email replies,
dk henderson
Hi all,
I installed the DBI module (DBI-1.16) without problems on an HP-UX
machine, but DBD::Oracle (DBD-Oracle-1.06) didn't install. I had to set
the ORACLE_HOME variable, which I did. Still no
go. During the compile (make) the compile aborts with the following error:
ld: Unrecognized argument: -
On Fri, Jun 08, 2001 at 11:23:07PM +0100, Tim Bunce wrote:
> For maximum perfomance I'd suggest selecting from an Oracle 'sequence'
> before you do the insert.
Done that plenty of times...
> But for minimal app code change I'd suggest using a PL/SQL trigger
> that implements the auto incremen
It seems that MySQL is removing trailing spaces in INSERT statements.
mysql> create table temp (c1 varchar(10));
Query OK, 0 rows affected (0.08 sec)
mysql> insert into temp values ('a ');
Query OK, 1 row affected (0.00 sec)
mysql> select concat(c1, "***") from temp;
+---+
|
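If the trailing spaces must survive, one workaround is to use a BLOB column, since in MySQL of this era VARCHAR strips trailing spaces on storage while BLOB stores bytes verbatim. A sketch (table and column names are illustrative, and an existing DBI connection in $dbh is assumed):

```perl
# VARCHAR strips trailing spaces on storage in old MySQL (pre-5.0.3);
# BLOB columns store the bytes exactly as inserted.
$dbh->do('CREATE TABLE temp2 (c1 BLOB)');
$dbh->do(q{INSERT INTO temp2 VALUES ('a   ')});
my ($len) = $dbh->selectrow_array('SELECT length(c1) FROM temp2');
# $len should be 4 here: 'a' plus three preserved trailing spaces
```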
On Fri, Jun 08, 2001 at 12:24:33PM -0700, Kokarski, Anton wrote:
> Greg,
>
> Look around on mysql.com. I've seen a mention of a utility that allows
> you to port MySQL to Oracle. I think it comes from Oracle.
And is a mostly hopeless marketing tick box gimmick.
> I'm migrating from MySQL t
Some thoughts; I apologize if this is redundant with other posts, I didn't
*read* them all.
1) No matter how you do this, it would be prudent to slice it up into smaller
pieces.
2) Direct path formats are fussy and, if I remember, prefer/need fixed-length
fields.
Fixed lengths in a non-varchar field c
On Wed, 06 Jun 2001 20:32:07 -0700, Trevor Schellhorn wrote:
>$statement = q{
> SELECT *
>FROM table
> WHERE field1 = ?
> AND field2 = ?
>ORDER BY fieldname
> LIMIT ?
> , ?
>};
...
>When the '$offset' is changed to 30, I get the error:
>
>DBD::mysql::db selectall_arrayref fa
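If this is the classic placeholder-quoting problem (DBD::mysql of this era quotes bound values, so LIMIT ?, ? can expand to LIMIT '30', '10', which MySQL rejects), one workaround is to validate the numbers and interpolate them directly. A sketch, reusing the table and column names from the post:

```perl
my ($offset, $limit) = (30, 10);
die "non-numeric LIMIT values" unless $offset =~ /^\d+$/ && $limit =~ /^\d+$/;

# Placeholders stay for the data values; the validated integers are
# interpolated so MySQL sees unquoted numbers in the LIMIT clause.
my $results = $dbh->selectall_arrayref(qq{
    SELECT *
      FROM table
     WHERE field1 = ?
       AND field2 = ?
     ORDER BY fieldname
     LIMIT $offset, $limit
}, undef, $val1, $val2);
```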
Hi everyone,
I'm trying to build DBD::Oracle on a HP900 on which the Oracle 8.1.7 EE server
has been installed.
Everything works fine until the 'make' step, during which ld reports that
nbeq8 cannot be found. This should be the library supporting the Oracle's
beq network protocol.
Does anyone have
You could exp(ort) and then imp(ort) the table quite quickly -- not very
perlish or DBIish I know.
- Original Message -
From: "Marcotullio, Angelo " <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, June 08, 2001 8:56 PM
Subject: RE: Data Extract
> You can't restore across platf
Hello all,
I'm attempting to install DBD::Oracle on an Intel
Solaris 2.8 box with Oracle 8.6.1, Perl 5.6.1, and DBI
1.18.
make test is failing: t/base.t fails with a
segmentation fault after attempting install_driver().
The output of perl Makefile.PL, make, make test, and
perl -V are listed
On Fri, Jun 08, 2001 at 03:58:26PM -0400, Steve Sapovits wrote:
>
> Does anyone know of a way, either directly through Oracle
> using SQL, or in conjunction with DBI to have rows returned
> in a random order?
Use Oracle's DBMS_RANDOM PL/SQL package.
You could create a sequence and assign each record a value and then generate
a random list of sequence numbers. However, that would mean multiple
selects to ensure they would come out in "random" order.
(Unless you specify an order, records are not necessarily guaranteed to come
out in the same
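For reference, the DBMS_RANDOM approach can be done in a single statement. A sketch (table and column names are illustrative; it assumes the DBMS_RANDOM package is installed, which on Oracle 8i may require a separate install step, and that your Oracle version permits ordering by its RANDOM function):

```perl
# Each row gets a per-row random value, so the result order is shuffled.
my $rows = $dbh->selectall_arrayref(
    'SELECT id, name FROM mytable ORDER BY dbms_random.random'
);
```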
You can't restore across platforms. The Oracle datafile structures are
different.
Doing a DB -> DB copy (using an Oracle "DB_LINK") across the network is
going to be really slow. Also, if it fails 95% through, it will be impossible to
load the remaining 5%; I'd have to start from the beginning.
Does anyone know of a way, either directly through Oracle
using SQL, or in conjunction with DBI to have rows returned
in a random order? The problem to solve is this: We fetch
similar result sets across several queries, but we want the
resulting output order to be different.
The data sets are la
If this is the full table, is there a reason you are not trying a backup/restore? I
don't know which backup utilities you are using on either side, but this would avoid any
programming or file manipulation. Also, there are DB tools, like DBArtisan, that would
just about do a copy/paste. Check i
Yes, there is one; as a matter of fact, MySQL prides itself on that one.
Ilya Sterin
-Original Message-
From: Kokarski, Anton
To: 'Gregory'; [EMAIL PROTECTED]
Sent: 6/8/01 1:24 PM
Subject: RE: How can I get insert_id from DBD::Oracle?
Greg,
Look around on mysql.com I've seen a mentioning
Greg,
Look around on mysql.com. I've seen a mention of a utility that allows
you to port MySQL to Oracle. I think it comes from Oracle.
Hope that helps,
Anton Kokarski
-Original Message-
From: Gregory [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 08, 2001 12:15 PM
To: [EMAIL PRO
There is no DBD::Oracle equivalent for retrieving the value of
insert_id. You must create a sequence within Oracle, then retrieve its
value and assign it to a variable within your script. You can then use
that value when you execute your insert and later in your script.
Jim
At 12:14 PM 6/8/
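A minimal sketch of the sequence approach described above (the sequence, table, and column names are illustrative; it assumes a sequence created beforehand with CREATE SEQUENCE my_seq):

```perl
# Fetch the next id from the sequence, then use it in the INSERT
# and anywhere else later in the script.
my ($id) = $dbh->selectrow_array('SELECT my_seq.NEXTVAL FROM dual');
$dbh->do('INSERT INTO mytable (id, name) VALUES (?, ?)',
         undef, $id, $name);
```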
I'm migrating from MySQL to Oracle. When executing "INSERT ..." with DBD::mysql I
could get the id of the inserted row with $dbh->{mysql_insertid}. Is there any way to do
the same thing with Oracle (DBD::Oracle)?
Thanks. Grisha.
On Fri, Jun 08, 2001 at 01:26:00PM -0500, Michael Wray wrote:
> @nupdates=undef;
This sets @nupdates to the one element list (undef). Try this instead:
@nupdates = ();
Ronald
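The difference is easy to see in isolation:

```perl
my @nupdates = undef;
print scalar(@nupdates), "\n";   # prints 1: a one-element list holding undef

@nupdates = ();
print scalar(@nupdates), "\n";   # prints 0: a genuinely empty list
```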
TMTOWTDI. One possibility you have apparently decided not to follow was
doing the SELECT and INSERT entirely inside Perl. For the size of data you
have, that may well be for the best, but the COMMIT group size can have a
dramatic effect on program speed if you are INSERTing.
The discussion has
One problem we had with SQL*Loader direct loads is that if we had duplicate
key values on unique indexes, the table was left in an unstable condition.
We avoided that problem by not creating the indexes until after the load was
complete. That also helped performance when we moved data using the
Why not use import ... BUT:
1. drop all indices (to be rebuilt later; you would (IMO) want to do this
regardless of imp/SQL*Loader)
2. drop all key constraints (again, re-enabling them after import)
3. disable logging on the table while it's being loaded via imp, therefore
generating no redo wor
I've been doing some tests and I have found that Perl scripts are faster
than I used to think, and in some cases faster than C programs. The Perl
script 'automagically' does bulk fetches that speed up your application a
lot (well, the wizards are DBI and DBD::Oracle, who do the dirty work). In
a
I am grabbing a list of SQL statements from a database... (I can successfully
cause statements to execute as long as I don't qq or qw them!)
Here's the problem:
I am in the middle of a for loop to define the array containing the statements
(and a few other fields...the statements are just one fi
There is no commit because I'm only SELECTing the data from Oracle. I'm
writing to ASCII files.
-Original Message-
From: Steve Sapovits [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 08, 2001 1:02 PM
To: Curt Russell Crandall; Pettit, Chris L
Cc: '[EMAIL PROTECTED]'
Subject: RE: Data Extr
It comes with Oracle and it is extremely fast. Now I
know why. For large loads and ones that work directly
off supported formats we use it. We sometimes use Perl
to prepare files to feed it.
Steve Sapovits
Global Sports Interactive
Work Email: [EMAIL PROTECTED]
Home Email: [EMAIL PROT
Yes, committing in chunks significantly sped up our apps too. However, I
guess for the original post inserts don't need to be addressed... I forgot
that SQLLoader was going to be used. Is that a Perl tool or something
that comes with Oracle... I've never heard of it until now (or just never
paid
Orlando wrote: "There's a way to bypass rollback and index creation during
the import, which speeds things up a lot. I believe this is the same
technique used by the SQL*Loader "direct path" option."
No, this is not correct. SQL*Loader "direct path" writes database blocks
directly. It does no
We've also found that committing in blocks speeds up heavy
inserts with Perl/DBI/Oracle. How big depends on the job.
Most people do all or nothing: AutoCommit or no commit and
a rollback or commit at the end. Something in between usually
is better for most inserts.
Steve Sapovits
Global
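A sketch of the in-between approach (the batch size, DSN, credentials, table, and the row source next_row() are all illustrative):

```perl
use strict;
use DBI;

my ($user, $pass) = ('scott', 'tiger');   # illustrative credentials
my $dbh = DBI->connect('dbi:Oracle:mydb', $user, $pass,
                       { AutoCommit => 0, RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO mytable (id, name) VALUES (?, ?)');

my $batch = 1000;    # tune per job
my $count = 0;
while (my ($id, $name) = next_row()) {   # next_row() is a hypothetical source
    $sth->execute($id, $name);
    $dbh->commit if ++$count % $batch == 0;
}
$dbh->commit;        # flush the final partial batch
$dbh->disconnect;
```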
First of all, I've never used Oracle so I'm not aware of any fancy tools
to make life easier to accomplish this.
From the original post, asking about optimizing Perl to do this task, I
inferred that speed might be an issue. With Sybase, the difference
between using Perl and the DBI versus using
To use a wrapper script, you rename your Perl script then create a shell
script with the original name. It can make any changes needed to the
environment before calling the original program with the original arguments.
As someone else pointed out, it is much better to get the system configured
c
On Fri, 8 Jun 2001, Ronald J Kimball wrote:
..
> But the original poster isn't shifting away from Oracle; he's moving the
> data from an Oracle database on an NT machine to an Oracle database on an
> HPUX machine.
oh??!! i must've missed that one.. hehe.
exp/imp should work in that case..
--
I'd suggest you try each candidate method on 5M rows or so to see which is
faster in your environment because which works better depends on a lot of
variables.
If both databases are Oracle, you might want to look at the SQL*Plus COPY
command. I've moved 50,000,000 rows in under 45 minutes using
On Fri, Jun 08, 2001 at 11:47:54PM +0800, Orlando Andico wrote:
> Oracle imp/exp-style dumps are only compatible with Oracle. The
> PostgreSQL- and MySQL-style "SQL statement dump" is not considered
> efficient -- and it isn't!! just imagine doing 500 million inserts, with
> the overhead of SQL pa
On Fri, 8 Jun 2001, Marcotullio, Angelo wrote:
..
> Oracle export (exp) creates a binary file that can only be loaded using the
> import (imp) program. The problem with import is that it uses insert
> statements to load the data.
There's a way to bypass rollback and index creation during the
On Fri, 8 Jun 2001, Pettit, Chris L wrote:
..
> I've used Perl to prepare large files for SQL Loader.
> 1.9 million. SQL loader is fast, I was able to load the 1.9 mill records in
> less than six minutes.
> We had to do text extraction for the data.
> DBI is absolutely awesome for that IMHO.
I be
On Fri, 8 Jun 2001, Matthew Tedder wrote:
..
> You've got to be kidding, a big database like Oracle doesn't have a
> database dump utility? In PostgreSQL, you can dump the whole
> database's data and/or dump all the SQL commands needed to rebuild and
> re-populate a database, identically. This is
Oracle export (exp) creates a binary file that can only be loaded using the
import (imp) program. The problem with import is that it uses insert
statements to load the data.
Oracle's SQL*Loader uses formatted ASCII files for loading. SQL*Loader has a
"direct path" option that writes data block
DBI uses the same Oracle OCI libraries as a C program would use. I'm a much
better Perl coder than C.
-Original Message-
From: Curt Russell Crandall [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 08, 2001 11:24 AM
To: Marcotullio, Angelo
Cc: '[EMAIL PROTECTED]'
Subject: Re: Data Extrac
I've used Perl to prepare large files for SQL*Loader:
1.9 million records. SQL*Loader is fast; I was able to load the 1.9 million records in
less than six minutes.
We had to do text extraction for the data.
DBI is absolutely awesome for that, IMHO.
clp
-Original Message-
From: Curt Russell Crand
You've got to be kidding, a big database like Oracle doesn't have a database dump
utility? In PostgreSQL, you can dump the whole database's data and/or dump all the
SQL commands needed to rebuild and re-populate a database, identically. This is a very
safe method of upgrading or even switching t
Karen Ellrick wrote:
>
> I tried downloading a little set of scripts for processing my HTTP access
> log
One of the unfortunate consequences of DBD::AnyData is that it makes
even a completely non-DBI request like this have some DBI relevance :-(.
use DBI;
my $dbh = DBI->connect('dbi:AnyData
For a half a billion rows, I would seriously consider coding this in C
using whatever Oracle libraries are available for accessing the API. That
would probably be your best choice in terms of having speedy code. You
may also be able to do a large portion of your work in a stored procedure
as wel
Hi all,
I need to extract about 500,000,000 rows from a table to a text file for
loading into another database.
Source: NT, Oracle 817
Destination: HP-UX 11, Oracle 817
I plan on using SQL*Loader to load the data, so the extract will need to be
consistently formatted (fixed columns, csv, etc.).
Not sure what you mean by "properly". If you mean did Oracle throw an
error, then you should turn on your error checking or use (or die
$DBI::errstr); if you mean did it update any rows or do what you expected, then
you should check the return value of execute(). Anyway, all of
that is
Hi All,
$dbh = DBI->connect();
$sqlstmt = "An Insert or Update Query";
$sth = $dbh->prepare($sqlstmt);
$sth->execute;
Here is my question. Is there anyway to verify whether the query executed
properly/not ?
by the by I am using Oracle.
Thanks
Prem
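A sketch of both checks (the DSN, credentials, table, and values are illustrative):

```perl
my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'pass',
                       { RaiseError => 1, AutoCommit => 0 })
    or die $DBI::errstr;

my $sth  = $dbh->prepare('UPDATE mytable SET name = ? WHERE id = ?');
my $rows = $sth->execute('new name', 42);  # dies on an Oracle error (RaiseError)

# execute() returns the affected row count, or '0E0' (numerically 0,
# but true as a boolean) when no rows were affected.
warn "no rows updated\n" if $rows == 0;
$dbh->commit;
```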
I sent him the README.hpux
-Original Message-
From: Wesley STROOP [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 08, 2001 10:32 AM
To: [EMAIL PROTECTED]
Subject: DBD-Oracle-1.06 installation problems on a HP-unix server
Hi all,
I installed the DBI modules(DBI-1.16) without problems on a
Yes, I just installed 1.18.
-Original Message-
From: Sterin, Ilya [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 08, 2001 7:05 AM
To: 'Wilson, Doug '; '[EMAIL PROTECTED] '
Subject: RE: fetchall_hashref
I believe there was a patch though I would think Tim fixed that for the 1.18
release,
Hi all,
I installed the DBI module (DBI-1.16) without problems on an HP-UX
machine, but DBD::Oracle (DBD-Oracle-1.06) didn't install. I had to set
the ORACLE_HOME variable, which I did. Still no
go. During the compile (make) the compile aborts with the following error:
ld: Unrecognized argument: -W
I believe there was a patch, though I would think Tim fixed that for the 1.18
release. Is that the one you were talking about?
Ilya Sterin
-Original Message-
From: Wilson, Doug
To: [EMAIL PROTECTED]
Sent: 6/8/01 7:56 AM
Subject: fetchall_hashref
fetchall_hashref is in the DBI perldocs, b
You can go to CPAN to see the test statistics, though unless an unknown bug
has slipped in or something, DBI should work on all platforms that Perl is
ported to and DBD::* should compile as long as the client for the database
is available for that platform.
Ilya Sterin
-Original Message-
fetchall_hashref is in the DBI perldocs, but it doesn't
seem to actually be in the code.
Thank you for the email. My understanding is that different versions of DBI
and DBD exist for different platforms. For example, DBI 1.18 has passed the
tests on freeBSD, Sun Solaris and Windows.
I wanted to know the latest version that has passed similar tests on HP-UX
11 PA-RISC 2.0
Looking for
On Fri, Jun 08, 2001 at 05:33:22PM +0530, Ajay Madan wrote:
> Hi Michael & Drew,
>
> I tried putting within the BEGIN block without any success,
> as Michael has indicated. Actually, I am not clear what you exactly mean
> when you say "use a shell wrapper". How can another script be
Hi Michael & Drew,
I tried putting within the BEGIN block without any success, as Michael has
indicated.
Actually, I am not clear on what exactly you mean when you say "use a shell wrapper". How
can
another script be called before the CGI, since the CGI itself is mentioned in the HTML
Tim,
That line happens to be the second line, after the first and mandatory one,
i.e., #!/usr/local/bin/perl. It is not within a block.
Ajay
"Booth, Tim" wrote:
> Ajay,
>
> Are you putting that line in a BEGIN{} block? Otherwise, it won't
> help!
>
> TIM
>
> > -Original Messa
Even putting the assignment to $ENV{LD_LIBRARY_PATH} in a BEGIN{} block
doesn't work for many UNIXes. It tends to be cached by the dynamic loader
before the script starts. If you can't get it set properly by the
webserver, you will probably need to use a shell wrapper to set it before
the script
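As an alternative to a separate shell wrapper, some people have the script re-exec itself after fixing the environment, so the dynamic loader starts fresh with the new setting. A sketch (the library path and the guard variable name are illustrative):

```perl
BEGIN {
    # Setting LD_LIBRARY_PATH here is too late for the current process,
    # so set it and re-exec; the guard variable prevents an exec loop.
    unless ($ENV{ORA_ENV_SET}) {          # hypothetical guard variable
        $ENV{LD_LIBRARY_PATH} = '/opt/oracle/product/8.1.7/lib';  # illustrative
        $ENV{ORA_ENV_SET}     = 1;
        exec $^X, $0, @ARGV or die "re-exec failed: $!";
    }
}
use DBI;   # DBD::Oracle's shared libraries should now resolve
```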
On Fri, Jun 08, 2001 at 06:03:04PM +0900, Karen Ellrick wrote:
> > On Fri, Jun 08, 2001 at 05:32:00PM +0900, Karen Ellrick wrote:
> > > I tried downloading a little set of scripts for processing my
> > HTTP access
> > > log, and theoretically it works (the sample displayed on the web looks
> > > g
Ajay,
Are you putting that line in a BEGIN{} block? Otherwise, it won't
help!
TIM
> -Original Message-
> From: Ajay Madan [SMTP:[EMAIL PROTECTED]]
> Sent: Friday, June 08, 2001 10:20 AM
> To: Hamilton, Andrew Mr RAYTHEON 5 SIG CMD
> Cc: DBI Mailing List
> Subject: Re:
Drew,
I tried doing what you suggested, but the CGI script just seems to ignore
the setting when it makes the connection to the DB. I am still able to execute
the same program from the command line, but not through the
webserver.
The following is the line of code to set the LD
> On Fri, Jun 08, 2001 at 05:32:00PM +0900, Karen Ellrick wrote:
> > I tried downloading a little set of scripts for processing my
> HTTP access
> > log, and theoretically it works (the sample displayed on the web looks
> > great), but it doesn't work for me. This isn't DBI-related, but I'm not
>
On Fri, Jun 08, 2001 at 05:32:00PM +0900, Karen Ellrick wrote:
> I tried downloading a little set of scripts for processing my HTTP access
> log, and theoretically it works (the sample displayed on the web looks
> great), but it doesn't work for me. This isn't DBI-related, but I'm not
> subscribe
I tried downloading a little set of scripts for processing my HTTP access
log, and theoretically it works (the sample displayed on the web looks
great), but it doesn't work for me. This isn't DBI-related, but I'm not
subscribed to any plain Perl mailing lists, so hopefully one of you will
help me.