Re: Memory limit?

2005-02-10 Thread Marc Slemko
On Thu, 10 Feb 2005 10:19:32 +0900, Batara Kesuma
[EMAIL PROTECTED] wrote:
 Hi Tobias,
 
 On Wed, 9 Feb 2005 14:48:16 +0100 (CET)
 Tobias Asplund [EMAIL PROTECTED] wrote:
 
   I try to install MySQL 4.1.9 (official RPM from mysql.com). My machine
   is running linux 2.6.9, and it has 4GB of RAM. The problem is MySQL
   won't start if I set innodb_buffer_pool_size to >= 2GB. Here is my
   ulimit.
 
  Are you trying this on a 32-bit cpu machine?
 
 Sorry I forgot to mention. Yes, it is a 32-bit CPU machine.

Yup, most Linux glibcs limit a single malloc() to 2 gigs in a misplaced(?)
attempt to avoid errors due to signed/unsigned conversions.  Since
innodb just uses malloc() for things, getting above two gigs doesn't
work.

(the details can be a lot more complicated, e.g. needing a kernel with
a 4G/4G split, changing the base address that mmap()ed regions start at,
etc.)

I don't think it would be hard at all to change innodb to let you use
a 3-4 gig buffer pool on a 32 bit Linux box, but I've never had the
time to look into it that deeply.

It is unfortunate.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Replication

2005-02-10 Thread Marc Slemko
On Wed, 9 Feb 2005 22:07:19 +0100, Hannes Rohde [EMAIL PROTECTED] wrote:
 Hi all,
 
 We use MySQL as a database backend on a portal site. We have a two
 database server setup (one master, one slave). The master is a PIV 3,2 GHz.,
 2 GB Ram and a 80GB Raid-1 system. The slave is a PIV 3.2 GHz., 4 GB Ram and
 a 80GB Raid-0 system. Both run on MySQL 4.1.9 and only use InnoDB. Even
 though the slave is a bigger system and is quite fast with selects, it
 always falls behind in replication (Seconds behind the server keeps growing
 at high-load times).
 Is there any way to speed up the replication a little more? I have already
 tried a whole lot of things but have never been successful, yet :-(

Your config settings suggest you are using innodb.

That can be problematic since innodb allows much higher concurrency
than myisam, although you can still have this issue with myisam.

What you have to realize is that, due to how mysql replication works,
every transaction is serialized on the slave: the slave only runs a
single replicated statement at a time.  So if the master has multiple
CPUs, or multiple disks that can't be saturated by a single operation,
then multiple simultaneous operations can get better throughput on the
master than the slave can match replaying them one at a time.

If most of your stuff is innodb, then setting the innodb option to not
sync to disk on every transaction (innodb_flush_log_at_trx_commit) may
speed things up a lot ... if you don't care about your data.  But, then
again, I don't think mysql replication is actually fully transactional
yet anyway.
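
For what it's worth, the lag itself is easy to watch from the slave side
with an ordinary client session (a generic sketch, nothing specific to
this setup); the "Seconds behind" figure mentioned above is the
Seconds_Behind_Master field here:

  -- run on the slave: shows the single replication SQL thread's progress and lag
  SHOW SLAVE STATUS\G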

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: tuning suggestion for large query

2004-09-02 Thread Marc Slemko
On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 Hi,
 
 We have a job that do 'select * from big-table' on a staging mysql database, then 
 dump to data warehouse, it is scheduled to run once a day, but may be run manually. 
 Also we have several other small OLTP database on the same server.
 When the big job run, it would use all the physical mem and swap, all other process 
 slow down because of this.
 
 I would like to limit the resource usage for each mysql client  so that they can use 
 only certain max amount of RAM, and don't select everything into memory before 
 display it to users. However, I couldn't find any parameter would let me implement 
 it.
 Anybody ever encounter the same issue before? Please share your experience.

How exactly are you running this select * from big-table?  From the
mysql command line client?  Is that what is using memory?  It isn't
clear from your post if it is the server or something else using
memory.

If it is the mysql command line client that is the issue, try adding a
-q parameter.  If you are using myisam tables, however, keep in mind
that table will be effectively locked for the whole duration... but if
it is the mysql command line client (which defaults to buffering
everything in memory), it may be faster to use -q anyway.
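
As a concrete sketch of that suggestion (the database and table names
here are made up, since the original post doesn't give them):

  $ mysql -q -e "SELECT * FROM big_table" staging_db > big_table.txt

With -q (--quick) the client writes each row as it is received instead
of buffering the entire result set in memory first.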

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: tuning suggestion for large query

2004-09-02 Thread Marc Slemko
Due to the nature of myisam tables, while a query like this is running
the table will be locked for writes.  Reads will still be permitted
until another write request is made, at which point all further reads
and writes will be blocked until the query completes.

This, however, is already happening even without -q, and adding the
-q will likely significantly shorten the time to execute, depending
on exactly how large this table is.

myisam is a very limiting table type as soon as you want to do
anything more than read from or write to a single row at a time using
indexed lookups.  innodb tables do not have this problem, although
they have limitations of their own.

On Thu, 2 Sep 2004 14:30:24 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 The command is issued from mysql command line. Is there any parameters or options I 
 can use without locking the table?
 
 
 
 
 -Original Message-
 From: Marc Slemko [mailto:[EMAIL PROTECTED]
 Sent: Thursday, September 02, 2004 2:24 PM
 To: Sun, Jennifer
 Cc: [EMAIL PROTECTED]
 Subject: Re: tuning suggestion for large query
 
 On Wed, 1 Sep 2004 11:40:34 -0400, Sun, Jennifer
 [EMAIL PROTECTED] wrote:
  Hi,
 
  We have a job that do 'select * from big-table' on a staging mysql database, then 
  dump to data warehouse, it is scheduled to run once a day, but may be run 
  manually. Also we have several other small OLTP database on the same server.
  When the big job run, it would use all the physical mem and swap, all other 
  process slow down because of this.
 
  I would like to limit the resource usage for each mysql client  so that they can 
  use only certain max amount of RAM, and don't select everything into memory before 
  display it to users. However, I couldn't find any parameter would let me implement 
  it.
  Anybody ever encounter the same issue before? Please share your experience.
 
 How exactly are you running this select * from big-table?  From the
 mysql command line client?  Is that what is using memory?  It isn't
 clear from your post if it is the server or something else using
 memory.
 
 If it is the mysql command line client that is the issue, try adding a
 -q parameter.  If you are using myisam tables, however, keep in mind
 that table will be effectively locked for the whole duration... but if
 it is the mysql command line client (which defaults to buffering
 everything in memory), it may be faster to use -q anyway.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: tuning suggestion for large query

2004-09-02 Thread Marc Slemko
On Thu, 2 Sep 2004 15:19:44 -0400, Sun, Jennifer
[EMAIL PROTECTED] wrote:
 Thanks Marc,
 
 What version of myisam table you are talking about? We are on 4.0.20, when I ran the 
 big table query, I tried to insert to it twice without any issues.
 The -q worked good for mysql client. Thanks.

There is an optimization that can allow inserts (note: not updates)
and selects to happen at the same time, which may be what you are
seeing.  There are lots of corner cases, etc. so your best bet is to
check out the documentation which does a reasonable job of explaining
them, in particular:

http://dev.mysql.com/doc/mysql/en/Internal_locking.html
http://dev.mysql.com/doc/mysql/en/Table_locking.html

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB TableSpace Question

2004-08-03 Thread Marc Slemko
On Tue, 3 Aug 2004 10:07:25 -0400 , David Seltzer [EMAIL PROTECTED] wrote:
 Hi all,
 
 I've been searching the archives and mysql documentation for a while and I
 can't seem to find an answer to my question -
 
 Is there a way to force InnoDB to shrink its filesize? I just dropped a 7GB
 table, but it hasn't freed up the disk space and I need it back. From what
 I've been reading, a restart will cause this to happen, but I'm in a
 production environment, and I'm afraid that InnoDB will take its sweet time
 while my users are holding their breath.
 
 Does anyone have any experience with this?

No, a restart will not shrink it.

Currently the only option I can think of is to do a dump and restore,
using mysqldump (since innodb hot backup just copies the data file, it
won't be of any use in shrinking it).

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB TableSpace Question

2004-08-03 Thread Marc Slemko
On Tue, 3 Aug 2004 12:42:03 -0400 , David Seltzer [EMAIL PROTECTED] wrote:
 Thanks Marc,
 
 Is there really no way to reclaim unused space in an InnoDB table space? If
 not, why is this not considered a tremendous limitation?

Some do consider it a tremendous limitation.  It all depends on how it
is being used.

Oh, and one thing I forgot... in newer 4.1 versions, if you set things
up so each table has its own file with innodb_file_per_table, then I
think if you do an OPTIMIZE TABLE it will end up shrinking the file
for that table, since it will recreate it.  However, that really is just
a workaround, and there are a lot of disadvantages to that method ...
especially the fact that free space is then per table instead of per
tablespace.
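
A minimal sketch of that workaround (the table name is made up;
innodb_file_per_table must already be in effect so the table lives in
its own .ibd file):

  -- rebuilds the table, which recreates its .ibd file and gives back the free space inside it
  OPTIMIZE TABLE mybigtable;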

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: using mysql in commercial software

2004-08-02 Thread Marc Slemko
On Mon, 2 Aug 2004 01:35:44 -0700, Jeremy Zawodny [EMAIL PROTECTED] wrote:
 On Mon, Jul 26, 2004 at 01:26:15PM -0500, gerald_clark wrote:
 
  Steve Richter wrote:
 
  exactly!  Is Linux distributed under the same type of license as MySql?  If
  I sell software that runs on linux I dont have to give away my code, right?
  To use my software you first have to install no charge Linux and MySql.  Why
  would that not be permitted?
 
  Because the MySQL license does not allow you to use it free with
  commercial software that requires MySQL.  If you are running
  commercial software that requires MySQL you must buy a license.
 
 And this is where the confusion starts.  MySQL is covered by the GPL.
 So is Linux.

As far as the server goes, sure.

However there is a key difference in that APIs such as glibc on Linux
are licensed under the LGPL.  The mysql client libraries used to be
the same way, then they changed them to be under the GPL.  This means
that, according to the most common interpretation of the GPL, just
linking with them automatically requires your code be under the GPL. 
Does this still apply to, say, Java code where you are using the
standard JDBC interface?  How about if you use mysql specific SQL
calls?  I would suggest perhaps not, but it isn't a simple question.

So, for example, you could not sell an application under terms not
compatible with the GPL and include the mysql client drivers.  Even
selling an application that is linked against them, but requires the
user to get them themselves, is arguably not permitted.  You can go
read the FSF's FAQ for their interpretation, but that is just their
interpretation.

However, remember the GPL only covers copying, distribution, and
modification.  Not use.

Also note that MySQL AB grants an exception, designed to make the client
libraries more compatible with other open source licenses:
http://dev.mysql.com/doc/mysql/en/MySQL_FOSS_License_Exception.html

I believe that MySQL AB is deliberately vague and confusing on their
licensing page to try to get people to buy mysql licenses.  All their
words there don't matter though, what matters is the actual license. 
It would, however, be nice if their commentary were a bit closer to
the reality of what the GPL means.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB 4.1.3: count(*) and number of rows does not match

2004-07-31 Thread Marc Slemko
On Sat, 31 Jul 2004 17:50:38 -0500, Keith Thompson [EMAIL PROTECTED] wrote:

 I just discovered that two of my tables (out of about 300)
 show a very unusual behavior.  This is that select count(*) ...
 and selecting all the rows and counting them do not produce
 the same number.
 
 This is on MySQL 4.1.3 on Solaris9.  Look at this:
 
 $ mysql -e "select count(*) from pstat.plist"
 +----------+
 | count(*) |
 +----------+
 |    15315 |
 +----------+
 $ mysql -e "select * from pstat.plist" | wc -l
15372
 
 Actually, these counts shouldn't quite be the same.  The second
 produces a header line that's getting counted, so it should be
 one more than the count(*).  But, it's off by 57!  The other bad
 table is off by 3.

First, have you verified there is no data in the table with embedded
newlines or some such?

Perhaps there is some index corruption.  Do an explain on the
count(*); it is likely doing an index scan.

Then try a select of column_in_index_that_is_being_used from pstat.plist
and see if that returns the same number of rows as the count(*), or try
doing the select count(*) with an ignore index of whichever index it is
using.

If it seems to be related to that one index, you could try dropping
and rebuilding the index.
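
Concretely, the checks described above would look something like this
(the column and index names are placeholders for whatever EXPLAIN
actually reports):

  EXPLAIN SELECT COUNT(*) FROM pstat.plist;

  -- an index-only read of the same table; compare how many rows come back
  SELECT column_in_index_that_is_being_used FROM pstat.plist;

  -- force the count to avoid the suspect index
  SELECT COUNT(*) FROM pstat.plist IGNORE INDEX (suspect_index);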

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: INSERT if record NOT EXISTS

2004-07-26 Thread Marc Slemko
On Mon, 26 Jul 2004 17:47:37 +0100, Adaikalavan Ramasamy
[EMAIL PROTECTED] wrote:
 This seems more like the solution I want. I am using perl-DBI and when
 there is an error (i.e. duplicate insert), the rest of the script is not
 executed. But this gives me the following error. What am I doing
 wrong ?
 
 mysql> desc tb;
 +-----------+-------------+------+-----+---------+----------------+
 | Field     | Type        | Null | Key | Default | Extra          |
 +-----------+-------------+------+-----+---------+----------------+
 | myID      | int(11)     |      | PRI | NULL    | auto_increment |
 | firstname | varchar(10) | YES  | MUL | NULL    |                |
 | lastname  | varchar(10) | YES  |     | NULL    |                |
 +-----------+-------------+------+-----+---------+----------------+
 3 rows in set (0.00 sec)
 
 mysql> INSERT INTO tb (firstname, lastname) VALUES ('Jack', 'Doe') ON
 DUPLICATE KEY UPDATE lastname = lastname;
 ERROR 1064: You have an error in your SQL syntax near 'ON DUPLICATE KEY
 UPDATE lastname = lastname' at line 1

You are probably running an older version of mysql that doesn't support this.

Try insert ignore.
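
For reference, the pre-4.1 form suggested above, against the table from
this thread (the duplicate row is silently skipped instead of raising an
error):

  INSERT IGNORE INTO tb (firstname, lastname) VALUES ('Jack', 'Doe');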

 
 Alternatively, I am looking for 'try' equivalent in perl, so that if the
 insert is duplicate, the rest of the script is still run. Thank you.

eval.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SHOW INNODB STATUS

2004-07-26 Thread Marc Slemko
On Mon, 26 Jul 2004 18:03:25 -0700, Matt Solnit [EMAIL PROTECTED] wrote:

 How is it possible to have a hit rate of 1000/1000?  Doesn't the buffer
 get initialized by cache misses?

That is a number after rounding, so it may not be exactly 100%, and
ISTR it is one of the stats that is reset either every time you read it
or every so many seconds, so any misses before then won't be included.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-20 Thread Marc Slemko
On Mon, 19 Jul 2004 18:13:36 +0200, Jan Kirchhoff [EMAIL PROTECTED] wrote:
 Hi,
 
 We are currently using a 4.0.16-replication-setup (debian-linux, kernel
 2.4.21, xfs) of two 2.4ghz Intel-Pentium4 systems with  3gig RAM each
 and SCSI-Hardware-Raid, connected via gigabit-ethernet. We are reaching
 the limit of those systems and are going to buy new hardware as well as
 upgrade to mysql 4.1.x. We will start testing our applications on 4.1.3
 within the next few weeks but our main problem is that we are not quite
 sure what hardware to buy...
 
 We are planning to buy something like a dual-xeon system with 10-16gb of
 RAM and hardware raid10 with 8 sata-disks and as much cache as possible.
 Will mysql be able to use the ram efficiently or are we hitting limits?
 AMD or Intel? 32bit or 64bit?

Whatever you do, get a 64 bit system.  Opteron recommended; if you
really prefer Intel and can get your hands on one of their 64-bit
Xeons, that is acceptable, although it may take a little longer for
Linux to catch up.

Even if the software isn't there yet (it is, it may just be a bit of a
hassle to get it all working), in the worst case you'll have to run it
in 32-bit mode until you can figure that out.

You can't use more than 2 gigs most of the time (or close to 4 gigs if
you hack things up right) of innodb cache on a 32 bit system.  The rest
of the memory will be used by the OS (less efficiently than on a 64-bit
system though), but that may or may not be as efficient as innodb
doing it.  That depends a lot on your application's data access
patterns.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Linux 2GB Memory Limit

2004-07-14 Thread Marc Slemko
On Tue, 13 Jul 2004 23:26:48 +0100, Marvin Wright
[EMAIL PROTECTED] wrote:
 Hi,
 
 I'm now running redhat AS 3.0 with kernel version 2.4 and have 8GB of RAM.
 
 If I set my innodb_buffer_pool to 2048M, it just will not start, I get this
 error.
 
 040713 22:10:24  mysqld started
 040713 22:10:24  Warning: Asked for 196608 thread stack, but got 126976
 InnoDB: Fatal error: cannot allocate 2147500032 bytes of
 InnoDB: memory with malloc! Total allocated memory

Now I remember what I tracked down the limit to be ...

2147500032  is just above 2 gigabytes of memory.  From what I have
seen, glibc (not sure if this is fixed in recent versions) just
refuses to allocate chunks of memory larger than 2 gigs in a single
call.  This seems a little odd given the library the malloc code is
based on, but I haven't dug deeper.

You can probably get around this if you do both of:

1. replace the call to malloc() in the innodb source with one that does a mmap()
2. run a kernel that has the 4G/4G patch, and possibly also moves
where mmap()ed regions start to be a bit lower than 1 gig (not sure
what the 4G/4G patch does with that).

A pain in the ass.  I strongly encourage people wanting larger innodb
buffers to consider 64-bit Opterons or, less desirably, Intel's xeons
w/64-bit support when they become generally available fairly soon.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Linux 2GB Memory Limit

2004-07-09 Thread Marc Slemko
On Fri, 9 Jul 2004 15:46:37 +0100 , Marvin Wright
[EMAIL PROTECTED] wrote:
 Hi,
 
 Current Platform
  RH version is 7.3
  IBM Blade Server - 2 x Intel(R) Xeon(TM) CPU 3.20GHz
  32 GB SCSI
  4 GB Ram
 
 This is the platform we are moving to in a week or so
  RH Enterprise AS 2.1 or 3.0
  4 x Intel(R) Xeon(TM) MP CPU 2.70GHz
  128 GB SCSI Raid
  16 GB Ram
 
 So with the new platform I'll be able to have a much bigger InnoDB buffer

Note it will still be limited to something that is definitely no
bigger than 4 gigs, and may be smaller... I haven't had any luck with
~2 gig innodb buffer sizes even on systems with 3 or 3.5 gigs of
address space available per process, but I never looked into that too
deeply so it may work fine with the right setup.

This is probably a bit late, but I would have definitely recommended
running 64-bit opterons in your configuration since then you could
have a larger innodb buffer.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Concurrency Question

2004-07-05 Thread Marc Slemko
On Mon, 5 Jul 2004 16:07:58 +0100 , Javier Diaz [EMAIL PROTECTED] wrote:
 
 We have changed all our tables to InnoDB and now the server is not able to
 handle the load, even when we are not running the SELECTs statements against
 these tables yet.
 
 As I mentioned in my email we make a lots of INSERTS and UPDATES in these
 tables (more than 3000 per second). So far using MyISAM everything was OK,
 but now when we moved the tables to InnoDB (to be able to make Read/Write
 operations in these tables) the performance was down completely and the
 server can not handle it.
 
 Does anyone have a rough idea when you change from MyISAM to InnoDB how the
 performance is affected?

That all depends on how you are using transactions.  If you are trying
to do each of these operations in a separate transaction, then that
will definitely be a problem, since transactions inherently have a
certain cost: they need to commit changes to durable storage.

If this is the case, then a horribly ugly "now you don't have
durability any more in your transactions" hack you could try is
setting innodb_flush_log_at_trx_commit to 2; see the docs for details.
Be warned that doing so means you can lose committed transactions if
the machine crashes.
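
For reference, that setting is a startup option in this era, so it goes
in the config file; the SHOW statement just confirms what the running
server ended up with:

  -- in my.cnf, under [mysqld]:
  --   innodb_flush_log_at_trx_commit = 2

  SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';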

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Concurrency Question

2004-07-05 Thread Marc Slemko
On Mon, 5 Jul 2004 18:48:50 +0100 , Javier Diaz [EMAIL PROTECTED] wrote:

 
 I really don't like the idea to set innodb_flush_log_at_trx_commit to 2, the
 information in these tables is important. On the other hand there is nothing
 I can do from the point of view of the number of transactions. Each process
 run its own set of INSERTs and UPDATEs statements, so I can not reduce the
 number of transactions being executed.
 
 Looking to the MySQL documentation:
Since the rotation speed of a disk is typically at most 167
 revolutions/second, that constrains the number of commits to the same
 167th/second
if the disk does not fool the operating system
 
 And that we are doing a LOT MORE INSERTs by second,  I'm afraid maybe the
 only solution is go back to MyISAM :-(
 
 By the way this figure of 167 revolutions/second is based on what kind of
 hard disk?

Well, if you are using myisam you already have even fewer guarantees
about transactional integrity than innodb with
innodb_flush_log_at_trx_commit set to 2.  That is the only reason that
myisam can perform as it does in the manner you are using it.  So if
that is all that is worrying you, there is no reason not to try innodb
set up that way.

You may want to look more closely at how you might be able to
re-architect your system to not require so many transactions, such as by
having a middle tier that can aggregate information before committing
it.  Unfortunately, myisam tricks people into thinking disk based
databases can safely handle the sort of operation you are doing, then
leaves them in an unfortunate situation when they realize that myisam
has no durability guarantees.
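
A minimal sketch of that kind of aggregation, batching several changes
into one transaction so they share a single durable commit (the table
and column names are made up):

  BEGIN;
  INSERT INTO hits (page_id, cnt) VALUES (17, 1);
  INSERT INTO hits (page_id, cnt) VALUES (42, 1);
  UPDATE totals SET cnt = cnt + 2 WHERE day = CURRENT_DATE;
  COMMIT;  -- one log flush covers all three statements instead of three separate ones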

A ballpark figure that applies to disk based databases is that you can
do approximately one write operation per rotation, which translates
into one transaction per rotation.  This logic makes some assumptions
and isn't exact with modern disks, but is a reasonable ballpark.

167 revolutions per second is a 10k RPM drive (10,000 / 60 is roughly
167).  You can improve this with the right type of RAID, you can
improve it with faster disks, but it is still a fairly small number.

You can improve it further with a battery backed disk controller that
can cache writes, although the reliability of some of the cheaper
options there isn't great.  You can improve it with a database that
doesn't commit to disk, such as mysql cluster, however that is a whole
different ballpark, a ways from being ready for prime time, and has
all sorts of issues of its own.  Some databases can be smart and
coalesce commits from multiple connections into one write to disk, but
this is a fairly uncommon feature.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: strange table speed issue

2004-06-24 Thread Marc Slemko
On Thu, 24 Jun 2004, MerchantSense wrote:

 Seems ok to me...

 It seems to be checking all the rows in the explain for some reason too...

 mysql> show index from ip2org;
 +--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
 | Table  | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Comment |
 +--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+
 | ip2org |          1 | ip_start |            1 | ip_start    | A         |     2943079 | NULL     | NULL   |         |
 | ip2org |          1 | ip_end   |            1 | ip_end      | A         |     2943079 | NULL     | NULL   |         |
 +--------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+---------+

mysql can only use one index from a particular table in any one
query.  So if you want to do a query that uses both ip_start and
ip_end, you would need to create a multicolumn index on ip_start,ip_end
or vice versa.

What you have is one index on ip_start, and another on ip_end.  So
it can use one of the indexes, but then it has to scan each row that
matches.
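
A sketch of the multicolumn index suggested above (the index name is
arbitrary, and the WHERE clause is just a guess at the kind of range
lookup being run):

  ALTER TABLE ip2org ADD INDEX ip_start_end (ip_start, ip_end);
  EXPLAIN SELECT * FROM ip2org WHERE ip_start <= 167837954 AND ip_end >= 167837954;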

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Idea to speed up multiple jdbc connections?

2004-06-08 Thread Marc Slemko
On Mon, 7 Jun 2004, Haitao Jiang wrote:

 Marc

 mysqld runs on a very powerful Opteron machine with
 16GB memory and barely any other application process
 running, it is hard to believe that a simple select
 that runs under 2 second will utilize all the
 resources...that is why I tend to think there is
 something in the mysql set up that caused this...any
 idea where I should look?

How many processors?

If there is only one and the query is CPU bound (as it probably is if
everything is cached, given 16 gigs of ram), then why shouldn't it
use all the CPU?

Or, to phrase the question differently: why should the query take 2
seconds to run if there are free resources?

Now, on a multiprocessor box it clearly starts to get more complicated.
mysql has no capability to spread one query across multiple CPUs
in parallel, and while it can spread multiple queries across CPUs the
scalability has its limits.

The fact that it is a simple query is irrelevant (some of the simplest can
be the slowest if it has to do a full table scan).  From the fact
that it takes 2 seconds it is clear it is not an entirely trivial query.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Idea to speed up multiple jdbc connections?

2004-06-08 Thread Marc Slemko
On Tue, 8 Jun 2004, Haitao Jiang wrote:


 Each of the 4 individual queries only took 0.6 seconds,
 there are no other clients; it is hard to believe that
 mysql query performance will degrade 300% (from 0.6s
 to ~1.9s) if we have 4 concurrent connections...

 As far as I know, MySQL should be able to handle
 hundreds of connections on a single CPU box without
 degrading performance like above.

You are completely missing the point.

It has nothing to do with concurrent _connections_; it has to do with
running concurrent _queries_.

What you are saying is like "well, if you can sit down and solve
this equation in 10 minutes, why does it take you 40 minutes to
solve 4 different equations?"

There is no magic way for the machine to do a hundred things at
once on a single processor (assuming you don't yet have a quantum
computer); they all get run for brief periods interleaved with one
another.  If you are running 4 at once, then each will only run 1/4 of
the time.  The box is working as hard as it can to process one query;
do you think it should slow down how quickly it processes a single
query just so that number will change less when you run more than one?

I'll repeat what I said before: a query that takes 600ms on such a machine
is not a trivial query.  If your real question is "why is my query so
slow?" then you should probably ask that instead of getting confused about
why your machine can't do 4 things at once.

P.S. Please do not go around reposting your same question on multiple
lists, it has already been answered.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Idea to speed up multiple jdbc connections?

2004-06-07 Thread Marc Slemko
On Mon, 7 Jun 2004, Haitao Jiang wrote:


 Yes. The time I measure like I said is purely around
 statement.execQuery() call. Connection creation is not
 a factor here at all.

 My database has 1.64 million rows and 4 queries are
 all selects, which are identical in both serial and
 parallel cases.

 In serial cases:
 Query 0 took 590
 Query 1 took 431
 Query 2 took 461
 Query 3 took 440

 In parallel cases:
 Queryer 3 query took 1552
 Queryer 1 query took 1632
 Queryer 2 query took 1783
 Queryer 0 query took 1923

 I don't understand why in 4 concurrent connection
 cases (already created not included in the timing) it
 takes more than 3 times longer to exec. a query.

Umh... if your queries are limited by some bottleneck on the server (such
as, for example, CPU) then why would running them in parallel make it any
faster?

It seems that in the sequential case they are taking a total of 1922
(whatever those units are) while in the parallel case they are taking
1923.  What this is telling you is that, in this case, a single query is
able to fully utilize the resources (likely CPU given these numbers,
although it is possible it could be disk) on the server.  If a single
query can fully utilize the server, all that adding more concurrency
can possibly do is slow the total throughput down.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB Table Locking Issue

2004-04-28 Thread Marc Slemko
On Tue, 27 Apr 2004, Scott Switzer wrote:

 Hi,

 I am having a difficult time with a query.  My environment is MySQL
 v4.0.16 (InnoDB tables) running on Linux (latest 2.4 kernel).
 Basically, I am running a query of the form:

 INSERT INTO temp_tbl
 SELECT c1,c2...
 FROM t1,t2,t3,t4
 WHERE ...

 It is a very complex query, which looks at millions of rows of data to
 produce results.  The issue is this:

 When running from the MySQL command line:
 Normally, when the query is run, it takes about 5 minutes to complete.
 When I LOCK t1,t2,t3,t4 before running, it takes 8 seconds to run.

Are you sure it is actually _working_ when it takes 8 seconds to run?
You say it is a very complex query that looks at millions of rows ...
unless those are all cached (and they could be, depending on your setup),
8 seconds would probably be too fast for it to run no matter how mysql
optimized it.

Triple check that if you start with an empty temp_tbl it actually
inserts everything it should into temp_tbl.

If you aren't locking temp_tbl, I wouldn't expect the query to actually
work.
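
For illustration, locking everything involved, including the destination
table, would look roughly like this (the column list and WHERE clause
are elided exactly as in the original query):

  LOCK TABLES t1 READ, t2 READ, t3 READ, t4 READ, temp_tbl WRITE;
  INSERT INTO temp_tbl
  SELECT c1, c2 ...
  FROM t1, t2, t3, t4
  WHERE ...;
  UNLOCK TABLES;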

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: INNODB SHOW STATUS

2004-04-21 Thread Marc Slemko
On Tue, 20 Apr 2004, Emmett Bishop wrote:

 Howdy all,

 Quick question about what I'm seeing in the BUFFER
 POOL AND MEMORY section...

 I've configured the innodb_buffer_pool_size to be 128M
 and when I do a show variables like 'innodb%' I see

 | innodb_buffer_pool_size | 134217728  |

 So that looks good. However, I see the following in
 the BUFFER POOL AND MEMORY section of the output from
 the innodb monitor:

 --
 BUFFER POOL AND MEMORY
 --
 Total memory allocated 152389988; in additional pool
 allocated 1048576
 Buffer pool size   8192
 Free buffers   0
 Database pages 7947
 Modified db pages  0
 Pending reads 0
 Pending writes: LRU 0, flush list 0, single page 0
 Pages read 20345325, created 9857, written 763089
 0.00 reads/s, 0.00 creates/s, 0.00 writes/s
 Buffer pool hit rate 1000 / 1000

 Why does it say the buffer pool size is only 8M?
 Shouldn't it be 128M? Also, could someone explain the
 hit rate? I remember seeing in someone's recent post
 that the 1000/1000 is good, but I don't know what that
 means. Can someone suggest a good resouce that
 explains the contents of Innodb show status in detail.
 The page on www.mysql.com gives a very cursory
 overview of the output.

Buffer pool size, free buffers, database pages, and modified database
pages are in 16k pages.

The buffer pool hit rate simply says the fraction of page reads satisfied
from the innodb buffer cache, in this case 1000/1000 == 100%.
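
To tie those numbers together (assuming the default 16k page size):
8192 pages x 16 KB = 131072 KB = 128 MB, which matches the configured
innodb_buffer_pool_size, and the 7947 database pages in use work out to
roughly 124 MB of cached data.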

Unfortunately, I'm not really aware of a better reference.  Perhaps some
of this is explained in High Performance MySQL, but I don't have a
copy yet.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Gripe with MySQL

2004-04-19 Thread Marc Slemko
On Mon, 19 Apr 2004, Stormblade wrote:

 Ok. Love MySQL and I will be using it for my personal use and recommending
 it to clients as a lower cost alternative. I've only been using it for a
 very short time but there one major gripe I have with it and I believe it's
 just a design thing.

 MySQL seems to put the burden of error checking and such on the client.

 - All fields have a default value even when I don't tell it to?
 - Allow Null will only result in an error if I explicitly try to set the
 field to null.

 These are the two things that I really dislike. I think it's a poor design
 to rely on clients only for error checking. MySQL supports foreign keys.
 This is good because the database handles referential integrity. But it
 shouldn't stop there. I should also be able to tell the database not to
 allow a field to be empty/null and it should not put anything in there that
 I have not told it to.

Yup, it is a bad idea, and thankfully it seems to be slowly improving
as more and more people try to use mysql as a more serious database.

However, keep in mind that it is not an ad hoc, arbitrary decision,
but is based on a very fundamental, traditional mysql design decision:
not to support transactions, because they aren't required most of
the time, or so the claim goes.  While there are a few mysql storage
engines now that do support transactions, and at least one is in
widespread use, this history explains why it is the way it is.

If you don't support transactions, what do you do if you are running
a statement that updates multiple rows and get an error with one
row? If you just abort the whole statement, it is really ugly since
then you leave the statement half executed.  If you try to be able
to undo the entire statement, it is really ugly because without
transactions you are unlikely to have the backend support for doing
that or for avoiding dirty reads, etc since that is one of the
fundamentals of what a transaction is.  So ... you bravely soldier
on.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql/innodb configuration

2004-04-17 Thread Marc Slemko
On Fri, 16 Apr 2004, mayuran wrote:

 I would like to optimize the configuration settings
 for this beast of a machine, here are the specs:

 Quad Xeon 3ghz (4x2 = 8 cpus), 512 cache
 16 gigs ram
 running Redhat Enterprise 3.0 AS
 All tables are InnoDB.

 I read this warning in the MySQL documentation:
 *Warning:* On GNU/Linux x86, you must be careful not to set memory
 usage too high. |glibc| will allow the process heap to grow over
 thread stacks, which will crash your server.

 But at the same time it says:

 # Set buffer pool size to 50-80% of your computer's memory,
 # but make sure on Linux x86 total memory usage is  2GB

 Does this mean that MySQL wont make use of the 16gb it has total ?

 I had to set the value to 1G to make it even start up.

You should be able to get higher than 1 gig ... a bit ... 1.5 gigs perhaps.

But yes, unfortunately mysql and innodb can't directly use most of the
memory.

innodb has support on windows for using Intel's Physical Address
Extension (PAE) to have paged access to more memory using the AWE
interface, with a bit of a performance hit for doing so.  However,
that feature of innodb isn't available on Linux, plus it disables
innodb's adaptive hashing support, which can be annoying especially
considering mysql doesn't otherwise support anything like a hash
join.

The memory will still be used by your OS for caching files, which will
help... but that isn't really as good as if innodb could use it, since
multilevel caching can be a bit sketchy and some features of innodb
(again, adaptive hashing...) can only be used if innodb has the data
in its cache.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Cluster

2004-04-15 Thread Marc Slemko
On Wed, 14 Apr 2004, Tim Cutts wrote:


 On 14 Apr 2004, at 10:57 pm, Adam Erickson wrote:

  (This is probably not the best place for this post, but here goes...)
 
  The (soon to be released) MySQL cluster software docs use a sample
  cluster node configured with Dual Xeons and 16GB of ram.  MySQL  has
  never been able to use more than 2 gigs of system memory (on 32 bit
  platforms.)  With MySQL Cluster, will MySQL finally start using the
  memory paging trick Oracle and others have been using for years?
  Otherwise, what is the point of having 16 gigs of ram for one MySQL
  server?

 Disk cache.  Tables which MySQL doesn't have in its own buffers but
 which nevertheless are frequently accessed will already be in RAM, and
 therefore faster to access.

Well ... that doesn't tie in with what I'm reading about mysql cluster,
namely it being a main memory database where all data is kept in memory.

I guess you can probably run multiple instances of the cluster node
on one machine, having the data split across them in a fairly transparent
manner.

However, there is ... very minimal technical information available
on mysql.com about exactly what mysql cluster (ie. mysql on top
of NDB) is and what it is really designed for.  I looked at the
NDB API docs in the bitkeeper tree, which help a bit ... but not
all that much.

It doesn't look like the current ndb code has any PAE support ...
at least on Unix.  It does some AWE-ish calls on windows, but I don't
think those actually allow more than somewhere between 2 and 4
gigs per process the way they are being used, unless I am missing
something.

My overview so far is that it is designed for very high transaction rate
systems, with a large number of fairly simple transactions, and also
possibly systems with a large amount of read activity.  All of this needs
to be on a moderately sized data set, since the design is based on it being
an in memory database.

In any case, since the NDB storage engine is used in place of myisam or
innodb... even if it could address more memory using PAE, that wouldn't
mean other storage engines could.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Strange Index Usage: select ... where foo = 90 on a varchar

2004-04-15 Thread Marc Slemko
On Thu, 15 Apr 2004, Max Campos wrote:

 On Apr 13, 2004, at 2:01pm, Michael Stassen wrote:

  You shouldn't be surprised.  This is normal behavior.
  interchangeStatus is a varchar, so
 
select fileName from outDocInterchange where interchangeStatus = 91;
 
  requires that interchangeStatus be converted to an int for each row so
  it can be compared to 91, rendering the index useless.  On the other
  hand,
 
select fileName from outDocInterchange where interchangeStatus =
  '91';
 
  compares interchangeStatus to a string, which the index is designed to
  do. In general, an index on a column won't help if the column is input
  to a function.

 Shouldn't MySQL just cast the constant integer to a string instead?
 Perhaps this optimization isn't done.  Also, I'm not completely sure,
 but I think this type of query was indexed in 3.23.  Or more precisely,
 these queries didn't become slow until after I upgraded to 4.0.18
 (from 3.23.40).   Granted, ultimately I needed (and did) change the
 column type, but I'm curious to see if  why the behavior changed.

Except there are multiple ways that something that is numerically
equal to 91 can be represented as a string, e.g. '91.0', ' 91', etc.
So using the index would result in different behaviour in some situations.

I think that if things worked how I would like them, mysql wouldn't
automatically do the cast at all so it would be obvious that something
that is possibly unexpected is happening.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



innodb deadlock detection failing?

2004-03-25 Thread Marc Slemko
Has anyone seen situations where innodb's deadlock detection fails
to detect a deadlock, and things remain deadlocked until the lock
wait timeout expires and the server returns a "Lock wait timeout exceeded;
Try restarting transaction" error, or have any ideas for why it may be
happening?

There are no mysql locks (ie. lock table ..., get_lock(), etc.)
explicitly being used.

I was under the impression that innodb's deadlock detection was guaranteed
to always work if no mysql locks were involved.

This is running 4.0.15a on Linux, all tables on the server (aside from the
mysql database) are innodb.

I have some innodb lock monitor dumps, but they aren't overly illuminating,
at least to me... all the active transactions are blocked waiting
for something, but I can't fully trace who is waiting for whom since
only 10 locks are printed for each one.
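
For reference, the lock monitor dumps mentioned above come from the
usual mechanism of that era, roughly as sketched below (the dummy
column is irrelevant; creating the specially named table switches the
monitor on, dropping it switches it off):

  CREATE TABLE innodb_lock_monitor (a INT) TYPE=INNODB;
  -- extended lock information is then written to the error log periodically;
  -- the shorter summary is available at any time with:
  SHOW INNODB STATUS;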

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: select speed

2004-02-26 Thread Marc Slemko
On Thu, 26 Feb 2004, Lorderon wrote:

 Hi All,

 If I got one table A_table with many columns, and a second table B_table is
 the same but with just primary field and unique field...
 How much meaningful is the time difference between these queries?
 1. SELECT unique_field FROM A_table WHERE prim_field='val';
 2. SELECT unique_field FROM B_table WHERE prim_field='val';

 If I split A_table into some tables, and define C_table to be MERGE on the
 A_table pieces.
 Is the time difference between selecting from A_table or C_table is
 meaningful?

Unless you have other unstated requirements, from a performance
perspective you are probably better off just making an index on
(prim_field, unique_field) in A_table and getting rid of everything
else.  Then mysql should be able to execute query 1 just as fast
as if it were in B_table; do an explain on it, and it should have
"Using index" in it, which means it only uses the index to do the
query and doesn't even look at the table rows directly at all.  And you
have no worries about keeping the two in sync.
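
A sketch of that covering index and the check described above (the index
name is arbitrary):

  ALTER TABLE A_table ADD INDEX prim_uniq (prim_field, unique_field);
  EXPLAIN SELECT unique_field FROM A_table WHERE prim_field = 'val';
  -- the Extra column should show "Using index", i.e. an index-only read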

As for the time difference between query 1 and 2 now, it may be very
tiny or it could be huge, depending largely on if the full table can
fit in cache or if only the primary field / unique field can fit in cache.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Two indexing questions

2004-02-26 Thread Marc Slemko
On Thu, 26 Feb 2004, Keith Thompson wrote:

 Given these two tables:

 create table t1 (
id int unsigned auto_increment,
a int,
... [other fields]
primary key (id),
index aid (a,id)
 ) type=innodb;

 create table t2 (
id int unsigned,
b int,
... [other fields]
index id (id),
index bid (b,id)
 ) type=innodb;

 Using searches of the form:
select *
from t1, t2
where t1.id = t2.id
and t1.a = somevalue
and t2.b = somevalue

 Now, let's say that the data is such that the driving table is t2
 (order of tables with EXPLAIN is t2, t1).

 Can MySQL take advantage of the bid index to retrieve the id
 for the join out of the index rather than pulling the data row,
 or is there no advantage to using index bid (b,id) over just
 using index bid (b) for this query?

Sure, it can do that.


 Similarly, can MySQL use aid for this query to satisfy both the
 join and the t1.a = somevalue comparison together when t1 is
 not the driving table like this?  It appears to only want to use
 the primary key for t1 for this query, which leads me to believe
 that on non-driving tables the only index it can use is one to do
 the join and that it can't use an index that could satisfy both
 the join and another field comparison at the same time.

When I just created your test tables with no extra columns, explain
shows it didn't want to use the multicolumn index on the second
table (ie. using index) unless I explicitly did a force index,
but then it did so just fine:

mysql> explain select * from t1, t2 force index(bid)
    -> where t1.id = t2.id and t1.a = 'xxx' and t2.b = 'yy' \G
*************************** 1. row ***************************
        table: t1
         type: ref
possible_keys: PRIMARY,aid
          key: aid
      key_len: 5
          ref: const
         rows: 1
        Extra: Using where; Using index
*************************** 2. row ***************************
        table: t2
         type: ref
possible_keys: bid
          key: bid
      key_len: 10
          ref: const,t1.id
         rows: 1
        Extra: Using where; Using index
2 rows in set (0.00 sec)

It may well change its perspective and decide to use the index automatically
if I actually had more columns in the table, or had data in it, but I
don't know offhand if it is smart enough for that...

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: slow performance with large or list in where

2003-10-05 Thread Marc Slemko
On Sun, 5 Oct 2003, Alexis da Cruz Henriques Guia wrote:


 Your problem is that MySQL don't use indeces in selects with 'or' (See MySQL
 manual).

If you are referring to:

http://www.mysql.com/doc/en/Searching_on_two_keys.html

then that page doesn't say mysql doesn't use indexes on selects with
or.  It is, as far as I can tell, talking about the case where you have
two different keys in the OR clause.  In my simplified query, I only
have one key with multiple OR values and the page says searching on
one key with different OR parts is optimised quite well.

Granted, it isn't clear if the page is referring to one single column
or all the columns involved in one unique index so it isn't clear if the
case that they say is optimized well would be of use to me or not.

mysql certainly is using an index for the relevant queries; it is just
doing a range scan of that index instead of individual lookups.  The
range scan may well be faster if it were reading all the data from disk...
but it isn't, and never will be for my query.


 But i didn't have understand what's your problem executing individual
 selects...
 You don't need to open parallel connections. Put the UPDATE commands in a file,
 and execute them in mysql prompt like this:

 mysql> source fileName;

 (for help, type 'help' in mysql prompt)

 Is this your problem?

no... my problem is that it is still far too slow to do it that
way across a network given that you can only send one command to
the server at once, then you have to wait for the response.  Not only is
there the network latency that adds up when you want to do thousands
of operations per second, even on 100 mbit or gigabit ethernet, but there
is the OS scheduling latency involved.  When doing batch updates, passing
the update in batches to the server is nearly always going to result in
far better performance... if the server supports it properly.

Sticking multiple statements in one file and passing it to the
mysql command line client doesn't change the problem in any way.  In any
case, I can't do any of this by writing commands to a file and
loading them using the mysql client, I need to do it using JDBC.  I
have no problem executing all my updates sequentially except for the
fact that it is far too slow.

But thanks for the response...


 ;)
 Alexis

 Quoting Marc Slemko [EMAIL PROTECTED]:

  If I do a query such as:
 
  SELECT * from foo where fooid = 10 or fooid = 20 or fooid = 03 ...
 
  with a total of around 1900 or fooid = parts on a given table with 500k
  rows, it takes about four times longer than doing 1900 separate
  queries in the form:
 
  SELECT * from foo where fooid = 10
 
  fooid is the primary key.
 
  My real goal is to do updates in a similar way, they have the same issue.
  And my real goal is actually doing the same thing where instead of
  fooid=10 I have a multicolumn primary key and use (keycol1=10 and
  keycol2=2
  and ...)  My examples here are the simplest case I can find that exhibit
  the issue.
 
  Explain plan on the select shows:
 
  table      type   possible_keys  key      key_len  ref   rows  Extra
  adsummary  range  PRIMARY        PRIMARY  31       NULL  1915  Using where
 
  so it is doing a range scan in some manner.  Given that the rows
  I'm updating will normally be cached in memory, is there anything I
  can do to force it to just do repeated index lookups like the
  individual queries are doing?  The reason I don't want to use
  individual queries is because my performance is then bottlenecked
  by latency, and when trying to update thousands of rows a second
  that doesn't work out.  Just opening multiple connections and
  doing them in parallel is a problem because of where the updates are
  coming from and the fact that they have to be done inside a transaction
  so other connections would just end up blocked anyway.
 
  Currently running 4.0.15a.
 
  I'm trying to avoid going to 4.1 at this point, but it does look
  like the INSERT ... ON DUPLICATE KEY UPDATE ...  syntax may do
  individual index lookups the way I want... I haven't verified
  that yet though.
 
  Any suggestions are greatly appreciated.
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: slow performance with large or list in where

2003-10-05 Thread Marc Slemko
On Sun, 5 Oct 2003, Santino wrote:

 Have You test in operator?

 select * from table where id in (10,20,30,50,60,90, )

Yes, IN does perform at the levels I want and works for the simplified
example I gave, but it doesn't work for the generalized case I need,
which is matching individual rows in a table with a multicolumn
primary key; that is why I can't use it.  Well, I could use it, but it
would require creating an extra column that is a string with all the
component columns of the primary key combined, or a binary field that
I pack myself, and then having a unique index on that... but I'd really
like to avoid that since this table will have hundreds of thousands of
rows added a day and has half a dozen columns that form the primary key.

Interestingly, the explain output is exactly the same for the in and
the fooid=10 or fooid=20 or ... case.

thanks for the suggestion.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



slow performance with large or list in where

2003-10-04 Thread Marc Slemko
If I do a query such as:

SELECT * from foo where fooid = 10 or fooid = 20 or fooid = 03 ...

with a total of around 1900 "or fooid = ..." parts on a given table with 500k
rows, it takes about four times longer than doing 1900 separate
queries in the form:

SELECT * from foo where fooid = 10

fooid is the primary key.

My real goal is to do updates in a similar way, they have the same issue.
And my real goal is actually doing the same thing where instead of
fooid=10 I have a multicolumn primary key and use (keycol1=10 and keycol2=2
and ...)  My examples here are the simplest case I can find that exhibit
the issue.

Explain plan on the select shows:

table      type   possible_keys  key      key_len  ref   rows  Extra
adsummary  range  PRIMARY        PRIMARY  31       NULL  1915  Using where

so it is doing a range scan in some manner.  Given that the rows
I'm updating will normally be cached in memory, is there anything I
can do to force it to just do repeated index lookups like the
individual queries are doing?  The reason I don't want to use
individual queries is because my performance is then bottlenecked
by latency, and when trying to update thousands of rows a second
that doesn't work out.  Just opening multiple connections and
doing them in parallel is a problem because of where the updates are
coming from and the fact that they have to be done inside a transaction
so other connections would just end up blocked anyway.

Currently running 4.0.15a.

I'm trying to avoid going to 4.1 at this point, but it does look
like the INSERT ... ON DUPLICATE KEY UPDATE ...  syntax may do
individual index lookups the way I want... I haven't verified
that yet though.
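
For the record, the 4.1 syntax being referred to looks roughly like this
(the extra column and values are made up; whether it really turns into
individual primary key lookups is exactly what would need verifying):

  INSERT INTO foo (fooid, cnt) VALUES (10, 1), (20, 1), (30, 1)
  ON DUPLICATE KEY UPDATE cnt = cnt + 1;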

Any suggestions are greatly appreciated.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: InnoDB, Replication, and Data warehouse: Oil, Water, and little floating plastic men

2003-09-24 Thread Marc Slemko
On Wed, 24 Sep 2003, Misaochankun wrote:

 Error(using 2.5G RAM out of 4G total):

 030924 15:39:55  mysqld started
 Warning: Ignoring user change to 'mysql' because the user was set to
 'mysql' earlier on the command line
 InnoDB: Fatal error: cannot allocate 2684370944 bytes of
 InnoDB: memory with malloc! Total allocated memory
 InnoDB: by InnoDB 24482732 bytes. Operating system errno: 12
 InnoDB: Cannot continue operation!

If you set a smaller size and start mysql, so it is running
successfully, how much memory does top and ps show mysqld as using?

If your processes are getting close to 2 gigs of RAM total, that may be
the limit you are running into.  There are other parts of mysql that
can be configured to use (sometimes a lot of) memory.

By default on Linux, a process can only allocate about 1.9 gigs of
memory using mmap(), which is what malloc() is probably using for
large allocations.  redhat may tweak these values in their kernels,
but I don't know... that is the default limit on 2.4.x kernels
anyway.  You can tweak this to go higher, but only by modifying the
kernel source.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: 64-Bit and INNODB

2003-08-26 Thread Marc Slemko
On Mon, 25 Aug 2003, Wendell Dingus wrote:

 I didn't notice a reply to this when first posted. Surely someone has
 stuffed a lot of memory into an Opteron or Itanium by now and knows the
 answer. Is a 64-bit Malloc all that is necessary or does INNODB have to
 specifically support more memory in some other fashion? Heikki?  Thanks in
 advance!

well, interestingly according to the innodb release notes, on windows:

MySQL/InnoDB-4.1.0, April 3, 2003

* InnoDB now supports up to 64 GB of buffer pool memory in a
Windows 32-bit Intel computer. This is possible because InnoDB
can use the AWE extension of Windows to address memory over
the 4 GB limit of a 32-bit process. A new startup variable
innodb_buffer_pool_awe_mem_mb enables AWE and sets the size of
the buffer pool in megabytes.

not sure what it would take to make that work on linux, but if all
you need is more memory, and the fairly reasonable performance hit
is ok, you may be a lot better off just getting an x86 box with 8
dimm slots, loading them up with 1 or 2 gig dimms... and then making
AWE in mysql work on linux.  The cost you pay to go to a 64 bit box is
pretty hefty.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: innodb use outside of explicit transactions

2003-08-23 Thread Marc Slemko
On Sat, 23 Aug 2003, Heikki Tuuri wrote:

 Marc,

 - Original Message -
 From: Marc Slemko [EMAIL PROTECTED]
 Newsgroups: mailing.database.mysql
 Sent: Saturday, August 23, 2003 7:19 AM
 Subject: innodb use outside of explicit transactions


  Suppose I have an innodb table in 4.0.14 and do:
 
  LOCK TABLES maggie WRITE;
  INSERT INTO maggie (maggieid, word) VALUES (123, 'simpson');
  UNLOCK TABLES;
 
  As soon as I issue LOCK TABLE, any transaction in progress is
  automatically committed.
 
  By what point is this INSERT guaranteed to be committed to disk
  (ie. redo log)?
 
  Is it:
 
  1. before INSERT returns?

 because you have AUTOCOMMIT=1, the transaction is committed in MySQL-4.0.14
 before the INSERT returns. There was a bug/'feature' and this was only fixed
 in 4.0.14:

Ahh, ok.  This is where I was confused.  Some of the wording of
the documentation made me think that transactions were not allowed
when you held a table lock, although I can't find anything specific
in the docs that would imply that looking at them now.  After
looking at it more closely, I am mistaken and transactions work as
normal when tables are locked, it is just that any open transaction
is automatically committed when you lock or unlock a table.

thanks for the clarification.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



innodb use outside of explicit transactions

2003-08-22 Thread Marc Slemko
Suppose I have an innodb table in 4.0.14 and do:

LOCK TABLES maggie WRITE;
INSERT INTO maggie (maggieid, word) VALUES (123, 'simpson');
UNLOCK TABLES;

As soon as I issue LOCK TABLE, any transaction in progress is
automatically committed.

By what point is this INSERT guaranteed to be committed to disk
(ie. redo log)?

Is it:

1. before INSERT returns?

2. before UNLOCK TABLES returns?

3. before it is read by any separate transaction?

4. before any separate transaction that read this data is committed?

5. sometime, no guarantee?  This would seem to violate transactional
integrity of the new transaction if it had a foreign key reference.

My best guess is (1) since nothing else makes much sense, but I am
having trouble finding any documentation addressing exactly how
table locks interact with innodb's transaction model, aside from a
reference to being able to get a table lock while innodb row locks already
exist on the table.

I do have an explicit reason for wanting to do a lock tables instead of
doing everything in transactions, but I still require some assurances
that things are committed to disk so they can be recovered (cluster
with shared disk to fail over to a secondary node).  I have an
innodb table like this:

CREATE TABLE maggie (
    maggieid    INT NOT NULL,
    word        VARCHAR(254),
    modified    TIMESTAMP NOT NULL,

    PRIMARY KEY (maggieid)
)

There are multiple writers that can each add or modify a row in the
table.

There are multiple readers, each one keeping a full representation
of the table in memory and polling at intervals for modified rows
based on the timestamp.

They loop:

1. grab new timestamp
2. select from maggie where modified >= old timestamp
3. save new timestamp for the next round

The problem is that if I just do a normal update in the writer,
then there is a race between when the timestamp is updated and the
transaction is committed.   If a reader comes in, it won't see
the update since it isn't committed, it won't block on it because of
multiversioning, and will never see it next time around since the
timestamp is too old.  I'm considering if I can remove the race by
using LOCK TABLES explicitly in the writer... hence my first
question.  I couldn't see any obvious way to do what I want using
only innodb row level locks, no matter what isolation level I used.

Suggestions or pointers at docs?

thanks.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]