Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread JP Hindin


Greetings Brent;

Many thanks for your input. I decided that I would indeed create the table
from scratch, making certain to apply the 'proper' settings, and then
reload all of the data. This completed yesterday, with the following
results:

     Row_format: Dynamic
           Rows: 18866709
 Avg_row_length: 224
    Data_length: 4236151548
Max_data_length: 4294967295
   Index_length: 1141235712

Zero improvement. I used the following CREATE:
CREATE TABLE mytable (
id int(11) NOT NULL auto_increment,
.
.
.
PRIMARY KEY(id))
AVG_ROW_LENGTH=224,
MAX_ROWS=10;

I believe this should have incorporated the appropriate changes - but clearly
it has not. I've just run a couple of simple tests and verified that the 4GB
limit isn't the OS's doing (packing and unpacking a 14GB working tarball was
fine), so I'm beginning to get quite confused as to how my efforts keep
returning me to the same problem time and time again.
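
A quick sanity check here (a sketch only - 'mytable' stands in for the real
table, and the exact figures depend on the server version and the
myisam_data_pointer_size setting) is to compare Max_data_length against the
pointer-width ceilings:

SHOW TABLE STATUS LIKE 'mytable'\G
-- Max_data_length = 4294967295 (2^32 - 1) means MyISAM is still using a 4-byte
-- data pointer for this table, so the 4GB ceiling is still in place. For
-- comparison, the 281474976710655 reported on the empty table corresponds to
-- 2^48 - 1, i.e. a 6-byte pointer.
SELECT POW(2,32) - 1 AS four_byte_ceiling,
       POW(2,48) - 1 AS six_byte_ceiling;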

I look forward to any further suggestions you may have;

JP

On Thu, 15 Mar 2007, Brent Baisley wrote:
 You probably did not change the max_rows setting when you created the table. 
 If you read the manual under AVG_ROW_LENGTH for CREATE TABLE, it says: "When
 you create a MyISAM table, MySQL uses the product of the MAX_ROWS and
 AVG_ROW_LENGTH options to decide how big the resulting table is. If you do not
 specify either option, the maximum size for a table is 4GB."

 The 4GB limit is more a default speed optimization setting. Read up on the
 myisam_data_pointer_size setting for background information.


Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread JP Hindin

Addendum;

On Thu, 22 Mar 2007, JP Hindin wrote:
 Zero improvement. I used the following CREATE:
   MAX_ROWS=10;

At first I thought I had spotted the obvious in the above - the MAX_ROWS I
used is smaller than the Max_data_length that resulted, presumably MySQL
being smarter than I am. So I changed the MAX_ROWS to use larger numbers,
a la:
AVG_ROW_LENGTH=224,
MAX_ROWS=2000;

But after creation the 'SHOW STATUS' gives the following:

 Create_options: max_rows=4294967295 avg_row_length=224

My guess is that MySQL has decided 4294967295 is the maximum table size, and
that neither ALTERs nor CREATE options are able to change this imposed limit.
This would explain why my ALTERs didn't appear to work, seg fault of the
client aside.

So I suppose the question now is - if MAX_ROWS doesn't increase the table
size, what will? Where is the limit that MySQL is imposing coming from?

Again, many thanks to anyone who can enlighten me as to what MySQL is
thinking.

JP






Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread JP Hindin

I have, after further googling, discovered that the 4.2 billion figure that
MySQL uses as 'max_rows' is, indeed, max_rows and not a maximum database size
in bytes. In theory I have solved my problem, and wasted however much of
people's bandwidth by sending all these e-mails to the MySQL list.

Mea culpa, mea culpa, mea maxima culpa. I slink to my corner with Google
in hand and apologise for wasting your time. Here's hoping, of course, in
four days I don't find out I'm wrong about the 4.2b rows part.
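
For anyone who wants to see the distinction directly, a throwaway comparison
(a sketch only: the probe table and column names are made up, and the exact
Max_data_length values depend on the server version and on
myisam_data_pointer_size) shows how the MAX_ROWS/AVG_ROW_LENGTH hint widens
the data pointer:

-- Two identical tables, differing only in the MAX_ROWS/AVG_ROW_LENGTH hint.
-- The VARCHAR forces the Dynamic row format, as in the table discussed above.
CREATE TABLE probe_small (id INT NOT NULL, note VARCHAR(200)) ENGINE=MyISAM;
CREATE TABLE probe_large (id INT NOT NULL, note VARCHAR(200)) ENGINE=MyISAM
  MAX_ROWS=1000000000 AVG_ROW_LENGTH=224;
SHOW TABLE STATUS LIKE 'probe_small'\G  -- Max_data_length: the default 4-byte-pointer ceiling
SHOW TABLE STATUS LIKE 'probe_large'\G  -- Max_data_length: far larger, because the product
                                        -- MAX_ROWS * AVG_ROW_LENGTH forced a wider pointer
DROP TABLE probe_small, probe_large;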

- JP









Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread Micah Stevens
This table size is based on your filesystem limits. This is a limit of 
the OS, not MySQL.


-Micah



Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread JP Hindin

Micah;

In the first eMail I mentioned that I had excluded filesystem size limits
by manually producing a 14GB tar file. If only it were that simple :)

On Thu, 22 Mar 2007, Micah Stevens wrote:
 This table size is based on your filesystem limits. This is a limit of
 the OS, not MySQL.

 -Micah







Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread Micah Stevens
Oh, I didn't see the first comment. My mistake. It's likely a 32-bit
integer size limit of some sort then; 32 bits gives you 4GB.


-Micah

On 03/22/2007 02:08 PM, JP Hindin wrote:

Micah;

In the first eMail I mentioned that I had excluded filesystem size limits
by manually producing a 14GB tar file. If it was only that simple :)



Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread Brent Baisley

You can try changing the myisam_data_pointer_size setting, but the 
max_rows/avg_row_length should have worked.

Unless it's your OS restricting the mysql user to a 4GB file size. You can use ulimit or getrlimit to determine OS limits for a 
user. If a user hits a limit imposed by the OS, I think it issues a segfault.
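
A rough sketch of the pointer-size route (assuming a 4.1.2 or later server,
where myisam_data_pointer_size exists as a global variable; 'mytable' is a
placeholder):

SHOW VARIABLES LIKE 'myisam_data_pointer_size';  -- default data pointer width in bytes;
                                                 -- 4 bytes means a 4GB ceiling for dynamic rows
SET GLOBAL myisam_data_pointer_size = 6;         -- 6 bytes allows up to 2^48 - 1 bytes of data
-- The new default only applies to tables created after the change, so drop and
-- recreate (or rebuild) the table without an explicit MAX_ROWS, then confirm:
SHOW TABLE STATUS LIKE 'mytable'\G               -- Max_data_length should now be well above 4GB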


- Original Message - 
From: Micah Stevens [EMAIL PROTECTED]

To: JP Hindin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Thursday, March 22, 2007 5:24 PM
Subject: Re: max_rows query + SegFaulting at inopportune times


Oh, I didn't see the first comment. My mistake. It's likely a 32bit 
integer size limit of some sort then. 32bit = 4gbytes


-Micah




Re: max_rows query + SegFaulting at inopportune times

2007-03-15 Thread Michael Dykman

What host OS are you running? And which file system? MySQL is always
limited by the file size that the host file system can handle.

- michael dykman


On 3/15/07, JP Hindin [EMAIL PROTECTED] wrote:


Greetings all;

I have a quandary regarding table limits, and clearly I am not
understanding how this all works together. I have a test database which
needs to keep long-term historical data, currently the total dataset in
this one table is probably about 5.5GB in size - although since I have a
4GB table limit that I can't seem to shake, I'm not entirely positive yet.

First off, I'm running 4.1.11-Debian_4sarge7-log. I'm unsure if you MySQL
chaps are willing to help me with this distribution version, but I imagine
the primary question is fairly non-specific. The OS is obviously Deb
Sarge, running on a recent x86 machine (so it does include the large file
support in the kernel).

So, when I first received a 'Table is full' error I looked up the MySQL
documentation and found the section regarding altering max_rows on a
table. Nice and simple. I ran the following on my DB:
mysql> ALTER TABLE mytable max_rows=2000;
And some four days later when I looked at it, this was on the screen:
Segmentation fault

I checked the table status, and max_data_length had not changed. I thought
perhaps I was being too pushy with the max_rows, so I dropped a zero and
tried again - with the same results. About four days in, seg fault. So I
figured perhaps it was getting bent out of shape with a 4.0GB table
already in place, so I removed all rows, optimised the table, and tried
the first query again. Success immediately! The SHOW STATUS gave this:
 Row_format: Dynamic
   Rows: 0
 Avg_row_length: 0
Data_length: 0
Max_data_length: 281474976710655
   Index_length: 1024

Looks good. Nice high max_data_length - so I loaded all the data into the
table. Again, four days pass for the data to complete the bulk INSERT, and
I run a SHOW STATUS again:
 Row_format: Dynamic
   Rows: 18866709
 Avg_row_length: 224
Data_length: 4236151548
Max_data_length: 4294967295
   Index_length: 1141235712

And suddenly I'm back to square one. Now I'm suspecting that the
max_data_length is a combination of a lot of factors, and the
avg_row_length plays into this. The documentation suggests setting
avg_row_length in the ALTER TABLE; however, it also says "You have to specify
avg_row_length only for tables with BLOB or TEXT columns", so I didn't bother,
as this table is a combination of ints, varchars and datetimes.

I wanted to check with you wizened lot before I set another query going.
I'm going to assume that running an ALTER with the data in the DB is only
going to garner me another wasted week and a Seg Fault, so I think what I
should probably do is clean the table again, run the following:
mysql> ALTER TABLE mytable max_rows=2000 avg_row_length=224;
And then reload all my data and see if that helps.

Can someone tell me whether my guess is right that avg_row_length is a factor
in the max_data_length of the table, and whether my above query is going to
release me from my hovering 4GB table limit?
Has anyone seen this blasted SegFault issue before?

I appreciate any help I can get with this one, I'm obviously missing
something, flame away. Many thanks.

 - JP







--
- michael dykman
- [EMAIL PROTECTED]

- All models are wrong.  Some models are useful.




Re: max_rows query + SegFaulting at inopportune times

2007-03-15 Thread JP Hindin

On Thu, 15 Mar 2007, Michael Dykman wrote:
 What host OS are you running? And which file system? MySQL is always
 limited by the file size that the host file system can handle.

Deb Sarge is a Linux distribution; the large file support I mentioned
allows files up to 2 TB in size.




Re: max_rows query + SegFaulting at inopportune times

2007-03-15 Thread Brent Baisley
You probably did not change the max_rows setting when you created the table. If you read the manual under AVG_ROW_LENGTH for
CREATE TABLE, it says: "When you create a MyISAM table, MySQL uses the product of the MAX_ROWS and AVG_ROW_LENGTH options to
decide how big the resulting table is. If you do not specify either option, the maximum size for a table is 4GB."


The 4GB limit is more a default speed optimization setting. Read up on the myisam_data_pointer_size setting for background
information.
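
Along those lines, a sketch of the option route ('mytable' is a placeholder;
note that this ALTER rewrites the whole table, so it will be slow on multi-GB
data):

-- Make the MAX_ROWS * AVG_ROW_LENGTH product comfortably larger than the data
-- you expect, so MyISAM chooses a data pointer wider than the default 4 bytes.
ALTER TABLE mytable MAX_ROWS=1000000000 AVG_ROW_LENGTH=224;
SHOW TABLE STATUS LIKE 'mytable'\G  -- Max_data_length should now be well above 4294967295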


- Original Message - 
From: JP Hindin [EMAIL PROTECTED]

To: Michael Dykman [EMAIL PROTECTED]
Cc: JP Hindin [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Thursday, March 15, 2007 2:09 PM
Subject: Re: max_rows query + SegFaulting at inopportune times




On Thu, 15 Mar 2007, Michael Dykman wrote:

What host OS are you running? And which file system? MySQL is always
limited by the file size that the host file system can handle.


Deb Sarge is a Linux distribution; the large file support I mentioned
allows files up to 2 TB in size.



RE: MAX_ROWS

2004-02-25 Thread Tucker, Gabriel
Ok

I think I am a bit confused about how MAX_ROWS works...

This is the result of SHOW TABLE STATUS \G:
   Name: gabe_test
   Type: MyISAM
 Row_format: Fixed
   Rows: 33
 Avg_row_length: 5
Data_length: 165
Max_data_length: 327679
   Index_length: 1024
  Data_free: 0
 Auto_increment: NULL
Create_time: 2004-02-24 14:34:30
Update_time: 2004-02-24 14:37:43
 Check_time: NULL
 Create_options: max_rows=3
Comment: 

What values of MAX_ROWS and AVG_ROW_LENGTH would I need so that I could limit
this table to 3 (or n) records?  How do I calculate this?

Additionally, is there a better way, not using the OS, to limit the size of MyISAM 
tables?

Thanks
Gabe

-Original Message-
From: Keith C. Ivey [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 24, 2004 5:11 PM
To: [EMAIL PROTECTED]
Subject: Re: MAX_ROWS


On 24 Feb 2004 at 22:01, Alison W wrote:

 Yes: MAX_ROWS is a *guidance* to the system in setting up the table
 and not a *limit* in any way.

Well, it is a limit in one way.  MySQL uses it (in MyISAM tables) to 
calculate the size of the pointer used for positions within the data 
file.  If the data file becomes larger than can be handled by that 
size of pointer, then you can't add any more records to the table 
(unless you increase MAX_ROWS or AVG_ROW_LENGTH so that the pointer 
size is increased).

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





RE: MAX_ROWS

2004-02-25 Thread Keith C. Ivey
On 25 Feb 2004 at 8:35, Tucker, Gabriel wrote:

 What values of MAX_ROWS and AVG_ROW_LENGTH would I need so that I
 could limit this table to 3 [or n] number of records?

You can't.  That's not what MAX_ROWS and AVG_ROW_LENGTH are for.  
They're only there to allow MySQL to decide how many bytes it needs 
to use for the pointers into the data file.  Unless you're going to 
be using tables bigger than 4 GB, there's probably no reason for you 
to worry about those values at all.

You can indirectly limit the number of records in a table to 127 or 
255 or 32,767 or 65,535 or ... by using an AUTO_INCREMENT primary key 
that's a TINYINT or UNSIGNED TINYINT or SMALLINT or UNSIGNED SMALLINT 
..., but I don't think that's going to help you either.
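
A sketch of that indirect cap (the table and column names here are made up):

-- A signed TINYINT AUTO_INCREMENT key tops out at 127, so the 128th insert
-- fails (typically with a duplicate-key error, since the generated id cannot
-- go past 127).
CREATE TABLE capped (
  id TINYINT NOT NULL AUTO_INCREMENT,
  payload VARCHAR(32),
  PRIMARY KEY (id)
) TYPE=MyISAM;
-- INSERT INTO capped (payload) VALUES ('row');  -- works 127 times, then errors out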

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





Re: MAX_ROWS

2004-02-24 Thread Alison W
 I wanted to test how the max_rows parameter works.  I set it to 3 on a
 table.  And, I was able to add 33 records [I stopped at this point].  It
 never prevented me from adding more records.  The result is NOT what I
 expected.  I expected that upon attempting to add the 4th record, I would
 have received some sort of error message.

 Am I mis-using this parameter?

Yes: MAX_ROWS is a *guidance* to the system in setting up the table and
not a *limit* in any way.

Alison
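
A sketch of the behaviour reported above (reusing the gabe_test name; the
column definition is a stand-in, since the real one was not posted):

-- MAX_ROWS is only a sizing hint for MyISAM: inserts beyond it are not rejected.
CREATE TABLE gabe_test (id INT NOT NULL) TYPE=MyISAM MAX_ROWS=3;
INSERT INTO gabe_test VALUES (1), (2), (3), (4), (5);  -- all five rows are accepted
SELECT COUNT(*) FROM gabe_test;                        -- 5, not 3
SHOW TABLE STATUS LIKE 'gabe_test'\G                   -- max_rows=3 shows up only in Create_options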




Re: MAX_ROWS

2004-02-24 Thread Keith C. Ivey
On 24 Feb 2004 at 22:01, Alison W wrote:

 Yes: MAX_ROWS is a *guidance* to the system in setting up the table
 and not a *limit* in any way.

Well, it is a limit in one way.  MySQL uses it (in MyISAM tables) to 
calculate the size of the pointer used for positions within the data 
file.  If the data file becomes larger than can be handled by that 
size of pointer, then you can't add any more records to the table 
(unless you increase MAX_ROWS or AVG_ROW_LENGTH so that the pointer 
size is increased).

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





Re: MAX_ROWS

2002-06-18 Thread Zak Greant

On Tue, 2002-06-18 at 11:43, Aborla.net - webmaster wrote:
 Hello,
 
 I created a table using:
 CREATE TABLE a (pa VARCHAR (255) NOT NULL, pi VARCHAR (255), PRIMARY KEY
 (pa)) TYPE=HEAP MAX_ROWS=10
 
 Then I inserted 16 records. Later I ran SELECT * FROM a and MySQL returned
 16 records. Why is this? Shouldn't it return only the last 10 records I
 inserted?
 
 Please help me

  Hello Nuno!

  Sinisa answered this question for someone else on this mailing list
  earlier today. I will repeat it for you and for anyone following this
  thread later on in the archives.

  Sinisa wrote:
  Some table handlers, like HEAP, use the info you specify for max_rows and
  avg_row_length to determine the maximum number of blocks that can be used
  for a given table.
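
A sketch of what that means for the table above (Nuno's own CREATE, reused;
exactly when 'The table is full' appears depends on the row sizes and on
max_heap_table_size):

-- MAX_ROWS on a HEAP table sizes the memory allocation in blocks rather than
-- enforcing an exact row count, which is why all 16 inserts were accepted.
CREATE TABLE a (pa VARCHAR(255) NOT NULL, pi VARCHAR(255), PRIMARY KEY (pa))
  TYPE=HEAP MAX_ROWS=10;
INSERT INTO a VALUES ('k1','v1'), ('k2','v2'), ('k3','v3');  -- ...and so on, past 10 rows
SELECT COUNT(*) FROM a;  -- can report more than 10 rows before 'The table is full' is hit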

-- 
Gosh, Batman. The nobility of the almost-human porpoise. --Robin
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Zak Greant [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__   MySQL AB, Advocate
/_/  /_/\_, /___/\___\_\___/   Calgary, Canada
   ___/   www.mysql.com

Feed the Dolphin! Order MySQL support from the MySQL developers at 
https://order.mysql.com/?ref=mzgr

