On 11/18/03, Denis Mercier wrote:
On Tue, 2003-11-18 at 16:40, Paul DuBois wrote:
At 16:21 -0500 11/18/03, Denis Mercier wrote:
Here's what I'm trying to do: I have a tar file in a blob field,
and I'm trying to retrieve it and pipe it directly into tar
to decompress it, without first writing it to the hard drive.
On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
At 11:03 -0500 11/19/03, Denis Mercier wrote:
I also tried:
use my_db;
select * from my_table;
So when I try, in the shell: mysql --pager test1 | tar x
the tar file does not get written to /usr/local/test1, but I still
On Wed, 2003-11-19 at 14:02, Paul DuBois wrote:
At 13:55 -0500 11/19/03, Denis Mercier wrote:
On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
At 11:03 -0500 11/19/03, Denis Mercier wrote:
I also tried:
use my_db;
select * from my_table;
So when I try:
mysql --skip-column-names test1 | more
I get:
use test;\nselect * from test;\n
\n's are added?
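What's going on above: unless --raw is given, the mysql client escapes special characters (newline, tab, NUL, backslash) in batch-mode column output, which is why binary data comes out mangled. A rough Python sketch of that transformation (my own illustration of the documented behavior, not mysql source code):

```python
def mysql_batch_escape(value: bytes) -> bytes:
    """Mimic (approximately) how the mysql client escapes column values
    in batch output when --raw is NOT given: newline, tab, NUL, and
    backslash each become a two-character backslash sequence."""
    table = {0x00: b"\\0", 0x09: b"\\t", 0x0A: b"\\n", 0x5C: b"\\\\"}
    out = bytearray()
    for b in value:
        out += table.get(b, bytes([b]))
    return bytes(out)

blob = b"use test;\nselect * from test;\n"
print(mysql_batch_escape(blob))  # the real newlines become literal \n
```

That escaping is why the bytes no longer form a valid tar stream; --raw turns it off.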
-----Original Message-----
From: Denis Mercier [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 19, 2003 1:55 PM
To: [EMAIL PROTECTED]
Subject: Re: piping blob into shell command (tar)
-----Original Message-----
From: Denis Mercier [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 19, 2003 2:41 PM
To: [EMAIL PROTECTED]
Subject: Re: piping blob into shell command (tar)
On Wed, 2003-11-19 at 14:02, Paul DuBois wrote:
At 13:55 -0500 11/19/03, Denis Mercier wrote:
On Wed, 2003-11-19 at 15:08, Dan Greene wrote:
One more idea; try:
mysql --skip-column-names --raw test1 | tar xf -
No output. And:
mysql --skip-column-names --raw test1 | more
./test1
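For the record, the idea the pipeline above implements can be sketched without a database at all. A minimal Python stand-in (the blob is just a bytes value here; mysql and the real table are out of the picture), showing a tar archive being unpacked straight from memory, never written to disk as an archive file:

```python
import io
import tarfile

# Build a small tar archive in memory -- this bytes value plays the
# role of the BLOB column (the database step is faked).
payload = b"hello from inside the blob\n"
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    info = tarfile.TarInfo(name="test1/hello.txt")
    info.size = len(payload)
    tf.addfile(info, io.BytesIO(payload))
blob = buf.getvalue()  # what SELECT ... would hand to the pipe with --raw

# "Pipe" the blob into tar: read members directly from the byte stream,
# mirroring `mysql --skip-column-names --raw test1 | tar xf -`.
with tarfile.open(fileobj=io.BytesIO(blob), mode="r") as tf:
    names = tf.getnames()
    data = tf.extractfile("test1/hello.txt").read()

print(names)  # ['test1/hello.txt']
print(data)   # b'hello from inside the blob\n'
```

The key point is the same as in the shell pipeline: tar only needs a byte stream, so the archive itself never has to touch the filesystem.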
Here's what I'm trying to do: I have a tar file in a blob field,
and I'm trying to retrieve it and pipe it directly into tar
to decompress it, without first writing it to the hard drive.
Here's what I've tried so far.
I create a text file called test1:
use my_db;
select * into dumpfile
Try this link: http://jeremy.zawodny.com/blog/archives/000796.html
Setting avg_row_length at 50 worked for me. I tested and got
mytable up to 9GB (a large table with variable-size records).
The only reason I could see that it would matter to have an accurate
value for avg_row_length
I also had the "table is full" error, today
actually.
mysql> alter table mytable max_rows = 2000 avg_row_length = 50;
mysql> show table status like 'mytable' \G
*************************** 1. row ***************************
       Name: mytable
       Type: MyISAM
 Row_format:
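The reason MAX_ROWS and AVG_ROW_LENGTH matter here: MyISAM sizes its internal data-file pointer from their product, and the pointer width caps how large the data file can grow (a 4-byte pointer tops out at 4GB, hence the classic "table is full" error). A back-of-the-envelope sketch of that arithmetic (my own illustration, not MySQL internals):

```python
import math

def pointer_bytes_needed(max_rows: int, avg_row_length: int) -> int:
    """Smallest pointer width, in bytes, that can address a data file
    of roughly max_rows * avg_row_length bytes."""
    max_bytes = max_rows * avg_row_length
    return math.ceil(math.log2(max_bytes) / 8)

# A 4-byte pointer addresses 2**32 bytes = 4GB, the classic MyISAM limit.
print(2 ** (8 * 4))  # 4294967296

# The ~9GB table from the post needs a 5-byte pointer.
print(pointer_bytes_needed(9 * 1024**3 // 50, 50))  # 5
```

Raising MAX_ROWS/AVG_ROW_LENGTH with ALTER TABLE rebuilds the table with a wider pointer, which is why the 9GB table started working.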
Hi,
I am presently going over the MySQL documentation to get familiar with
it. It runs great on my development server (Linux RH 7.1, kernel 2.4.2-2,
Resin application server).
I am in the process of optimizing and testing. I am using the blob datatype
in my main table.
I understand why a fixed-size