Re: Lock wait timeout exceeded Errors

2004-11-23 Thread Heikki Tuuri
Gabriele,
http://dev.mysql.com/doc/mysql/en/Innodb_deadlocks.html

SET AUTOCOMMIT=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
[do something with tables t1 and t2 here];
COMMIT;
UNLOCK TABLES;


In applications using AUTOCOMMIT=1 and MySQL's LOCK TABLES command, InnoDB's 
internal table locks that were added in 4.0.20 can cause deadlocks. Starting 
from 4.0.22, you can set innodb_table_locks=0 in `my.cnf' to fall back to 
the old behavior and remove the problem.


http://dev.mysql.com/doc/mysql/en/News-4.0.22.html

InnoDB: New mysqld option --innodb-table-locks and session variable 
innodb_table_locks (on by default). In applications using AUTOCOMMIT=1 and 
MySQL's LOCK TABLES command, InnoDB's internal table locks that were added 
in 4.0.20 can cause deadlocks. You can set innodb_table_locks=0 in `my.cnf' 
to remove that problem. See section 15.17 Restrictions on InnoDB Tables. 
(Bug #3299, Bug #5998)


Starting from 4.0.20, the correct way to use LOCK TABLES is to set 
AUTOCOMMIT=0. Then you can release the MySQL table lock with UNLOCK TABLES, 
and the InnoDB table lock with COMMIT, at the same time.

You can also fall back to the old behavior by setting the session variable 
innodb_table_locks to 0.

We changed the table locking behavior in 4.0.20 because the old behavior was 
even more illogical: you could LOCK a table even though some other user had 
row locks on it. What we did not foresee was that this would easily generate 
deadlocks if users had AUTOCOMMIT=1.

In transactional databases, all locks are released at a COMMIT. It does not 
make sense to run at AUTOCOMMIT=1 if you want to set table locks. MySQL's 
LOCK TABLES is unusual: the lock survives over transaction commits. That is 
what causes the deadlock problems. In a future version of MySQL we will 
introduce

LOCK TABLES TRANSACTIONAL ...

which no longer takes a MySQL table lock at all, only an InnoDB table lock. 
This new command will behave like a similar table lock command in normal 
transactional databases, and the old LOCK TABLES will become deprecated for 
pure InnoDB transactions.

We will update the MySQL manual to include the above text. LOCK TABLES has 
caused a lot of confusion lately.

Best regards,
Heikki Tuuri
Innobase Oy
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

Order MySQL technical support from https://order.mysql.com/
- Original Message - 
From: Mr. Gabriele Somthing [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Tuesday, November 23, 2004 3:32 AM
Subject: Lock wait timeout exceeded Errors


I believe I have read or skimmed all of the posts to this group with
"Lock", "wait" and "timeout" in their bodies over the past year (172), with
no answers in sight (if I missed a post, please let me know).
I am unable to figure out what is causing this problem that is 100%
reproducible on a 4.0.20-standard server. It works as follows:
1) Thread A locks InnoDB table Foo for writing SUCCESSFULLY.
2) Thread B locks InnoDB table Foo for writing, but has to wait for
   Thread A.
3) Thread A performs a query (I used SELECT MAX(id) FROM foo;
   for testing).
4) Thread A terminates the query with a Lock wait timeout
   exceeded error.
5) Thread B goes on its merry way.
NOTES:
1) NO OTHER THREADS are locking any of the tables in question or
   even have access to the databases in question
2) The THREAD WITH THE LOCK is the one that HAS THE ERROR
3) No other sequence of events seems to cause the problem
4) The lock on foo is not released until after Thread A either
   dies, disconnects, or issues a LOCK/UNLOCK command
5) During the time that Thread A is performing the query, Thread A
   has a state of Sending data while Thread B is Locked.
This only happens on the ISP's MySQL server (4.0.20-standard), not on our
development server (4.1.7-standard). We do not have control over the
system variables (except for runtime variables, of course). It only
happens with InnoDB tables, not with MyISAM tables. I have included a perl
script that causes the bug 100% of the time below. I can reproduce the bug
using two instances of the mysql command-line utility and it manifests
using PHP as well, so it's not a driver issue.
The innodb_lock_wait_timeout is set to the default 50 seconds, but this
should NEVER be reached, as a query like SELECT MAX(id) FROM foo; should
take less than 0.005 seconds.
Questions: Why is the query slowed when the process that is making it is
the one that HAS the lock? And why does it happen only when another thread
is trying to get a lock?
What can I do to get this working properly? I'm at my wit's end.
Since this is a large commercial ISP, I am unable to get them to upgrade
MySQL because we have encountered a bug.
Help, comments, suggestions, ridicule (accompanied by a remedy) actively
solicited.
Thank you!
-Gabriele
--- START ENVIRONMENT INFO ---
MySQL Server Version: 4.0.20-standard
OS:

MySQL 4.1.7 on Windows ME

2004-11-23 Thread Hassan Shaikh
Hi,
How do I install MySQL 4.1.7 on Windows Millennium so that it starts 
automatically on startup? mysqld --install ... does not work on Win9x 
setups.

Thanks.
Hassan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


Re: very slow concurrent queries on HEAP table

2004-11-23 Thread Bernd Heller
Hi Mike,
I tried read-locking the tables as you suggested, but regrettably it 
didn't help a bit. If there is nothing running but selects on that 
database, MySQL shouldn't ever lock, should it? Or is it locking the 
table while doing a full table scan (and why)?

Bernd
On 23.11.2004, at 5:39 Uhr, mos wrote:
At 06:10 PM 11/21/2004, you wrote:
Hi all,
I've got a rather odd performance problem with concurrent queries 
here. My query regrettably always needs to do a full table scan - 
really can't be helped. So my
idea was to cache the data in a HEAP table to get maximum performance 
out of it and it works really well, I'm down to 0.07 seconds for a 
single query.
Now the problem I just found is when I run 10 or 20 identical queries 
at the same time: my CPU goes up to 100% for a surprisingly long 
time, and when I look at what mysqld is doing with a profiler, it's 
burning 70% of its time in pthread_cond_wait and pthread_mutex_lock.

To me this looks as if the table gets locked and the queries don't 
really execute concurrently, but I'm not sure how to find out what is 
going on and there are no updates or inserts happening, just plain 
selects. Table_locks_immediate increments by 1 for each query, but 
Table_locks_waited remains at 0. Also show processlist says all 
queries are in sending data state most of the time.

I'm not sure what to do about this, but it's not an acceptable 
performance right now. The table has 100,000 rows at present and each 
row contains only ints of different sizes totaling to about 200 bytes 
per row. The heap table is small enough to fit into memory, and there 
is also no swapping or thrashing during the queries, so it should 
execute lightning fast, right? it just doesn't. This is mysql 4.1.7 
(official max binary) on MacOS X.

Any help would be very much appreciated!
Bernd
If locking is the problem, have you tried:
lock tables mytable READ;
select ... from mytable ...;
unlock tables;
Mike



ANN: Gtk2::Ex::DBI-0.5

2004-11-23 Thread Daniel Kasak
Hi all.
I'm pleased to announce the 5th release of Gtk2::Ex::DBI ...
http://enthalpy.homelinux.org/Gtk2-Ex-DBI/
Gtk2::Ex::DBI is a helper object that makes your gtk2-perl apps data 
aware.

Changes in this release:
- Formatting changes in sourcecode. I bought myself a copy of Komodo, 
and decided to reformat everything to look good in there. Apologies if 
this has the opposite effect for everyone else.

- Removed attempted locking of fields if 'read_only' is set.
- Fixed support for Gtk2::TextView
- Added support for DBI's last_insert_id() method. This brings us one 
step closer to supporting DB servers other than MySQL. Still requires 
some *very* small additions (e.g. last_insert_id() on Postgres requires 
some arguments to be passed). Volunteers? I don't use Postgres.

- Added POD documentation
I'm going to Cambodia for a month on the 24th, so if questions come in 
after then, don't expect a reply until around Christmas sometime :)

Dan


Re: 100+ databases, too many open files - am I out of line?

2004-11-23 Thread Gleb Paharenko
Hello.

MySQL was developed to handle a huge number of users and databases.

In addition to:

  http://dev.mysql.com/doc/mysql/en/Not_enough_file_handles.html

you may read:

  http://dev.mysql.com/doc/mysql/en/Source_notes-Linux.html

(There are also some tips there about the magic number 1024.)





Andreas Karlsson [EMAIL PROTECTED] wrote:

 Hi.

 I have a database structure of about 45 tables, and a separate database for
 each of the 100+ users.
 Everything has been working nicely for quite some time.

 When I wrote a function that looped through all the DBs and ran some
 queries, MySQL hung and I got too many open files in my .err-file.
 (Mysql 4.0.22, trustix linux, reiserfs, mysql:open-files-limit at 1024)

 I am trying different ways of raising the limit, and that shouldn't be a
 problem.

 My question is if I am doing this the wrong way?

 Will I be able to keep my structure, reach 500 users and keep things running
 smoothly?

 Is it a problem to have that many databases?

 Thanx!

 /Andreas


-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







Re: Can't see table information

2004-11-23 Thread Gleb Paharenko
Hello.

Sounds very strange. Maybe there is a bug, and it would be great if someone
corrects me, but I haven't found anything related in the bugs database.
You said that you issued a show tables statement on your database? Maybe
it would be helpful to try not only show tables from your_database, but
also:

 use your_database;
 show tables;

or

 show table status;
 show table status from your_db;

What output does the command-line utility mysqlshow produce?

  mysqlshow your_database
  mysqlshow --status your_database

When you were making selects, did you use db_name.tbl_name syntax to refer to
a table? Send us your config file (I don't know whether it will be helpful,
but this is good practice).



Jason St. Louis [EMAIL PROTECTED] wrote:

 Hi Folks,

 I've been running a website on a php/mysql database for about a year now.
 I typically enter data using the phpmyadmin package. The other day I
 attempted to add some data, but it appeared to phpmyadmin that there were
 no tables in my database. I connected to mysql using the commandline
 client and issued a show tables statement on my database, which came
 back with an empty set. However, if I run a select statement against
 tables I know should be there, I get results. So the data is still
 there, I just can't see the tables. Does anyone have any idea what
 might be causing this? I am the only one who has access to the database,
 so the privileges for the user should not have changed. I created an
 additional user with

 Grant all privileges on *.* to 'user'@'host' identified by 'password'
 with grant option;

 and attempted to view the tables with that user, with the same results.

 Any ideas?

 mysql version is: Ver 11.18 Distrib 3.23.55, for pc-linux (i686)

 Thanks.
 Jason




Re: Remote user problem

2004-11-23 Thread Gleb Paharenko
Hello.

I guess your MySQL client library is recent enough to support the
authentication protocol of the 4.1.x versions. Otherwise you should read:

  http://dev.mysql.com/doc/mysql/en/Old_client.html

If you have a 4.1.x client library on your Windows machine, you should
probably add the ability to connect to the MySQL server from the Windows
host:

 GRANT ALL PRIVILEGES ON *.* TO 'root'@'windows_host' IDENTIFIED BY
 'goodsecret';

See:

  http://dev.mysql.com/doc/mysql/en/GRANT.html
  http://dev.mysql.com/doc/mysql/en/Access_denied.html

Danesh Daroui [EMAIL PROTECTED] wrote:

 Hi all,

 I am having trouble connecting remotely to my MySQL Server Linux
 machine. I have installed the latest RPM (4.1.7) on a Linux Fedora Core
 3 machine. I have created the root account and everything seems to be OK.
 The only problem is that I can not connect to my MySQL Server remotely
 from my Windows machine. I could do that before I upgraded from 4.1.5 to
 4.1.7 on Fedora Core 2. In fact I can connect locally on my Linux
 machine as below:

 /* mysql -u root -p */

 but when I use this command I get the following error:

 /* mysql -h 127.0.0.1 -u root -p
 Error 1045 (28000): Access denied for user
 'root'@'localhost.localdomain'. (using password YES)
 */

 and as a result I can not connect from my Windows machine either. What's
 wrong? Can anybody help?
 By the way, I could connect locally on my Windows machine by using both
 of the above commands.

 Regards,

 Danesh Daroui




Re: Messed up on auto inc

2004-11-23 Thread Gleb Paharenko
Hello.

I think you should read the good answers from Paul DuBois and
Egor Egorov at:

  http://lists.mysql.com/mysql/169769
  http://lists.mysql.com/mysql/169821

in which they describe the behavior of mysqldump when it dumps columns with
the AUTO_INCREMENT attribute.

Stuart Felenstein [EMAIL PROTECTED] wrote:

 While what I'm working on is only a test database, I'd
 like to see what I can preserve.
 I used a data dumping program to create records in my
 database. I'm not sure how I messed up, but
 here is the problem.

 I have 1016 records in a particular table. The first
 column is a Primary ID Int set to auto-inc. Now if I
 dump out all the records I can see 1 through 1016.
 None others; sorted, it follows through numerically.

 However it seems that the records are showing up with
 the Primary ID going up to 9000. If I do a simple
 select * from table where primaryID = 8585, a record
 is returned. Yet just looking at all the records,
 nothing exceeds 1016.

 Strange phenomenon, I guess? Not sure if it's fixable,
 but I wanted to throw it out to the list and see at
 least if I can understand it better.

 Thank you,
 Stuart




[Fwd: row numbers, jumping rows]

2004-11-23 Thread João Borsoi Soares
Nobody has answered my question so far. Does that mean there is no way
to retrieve current row numbers in a query, or no way to retrieve results
jumping steps?

I would appreciate any help.

Thanks,
Joao.

-Mensagem encaminhada-
 From: João Borsoi Soares [EMAIL PROTECTED]
 To: [EMAIL PROTECTED] [EMAIL PROTECTED]
 Subject: row numbers, jumping rows
 Date: Sun, 21 Nov 2004 11:23:41 -0200
 
 Hello list,
 
 I wonder if someone could help me to get the fastest result on my query.
 I need to classify one entity, according to a specific attribute (let's
 call it X), looking at a sample extracted from a database table. 
 First I need to sort my sample ordered by a specific table field (X).
 Then I should divide my sorted sample in 10 equal groups (same number of
 rows). Finally, I should classify my entity (1 to 10), finding out at
 which group its attribute X fits in.
 
 Let me explain my thoughts. First I should make a SELECT COUNT(*).. to
 find the total number of rows in my sample. Then I make queries using
 LIMIT until I find which group the X attribute fits in. In the worst
 case I will have to do 10 queries. Which I think should take too long in
 my case.
 
 I wanted to make it in only one query. It could be possible if I could
 make a query which returns result jumping rows, like STEP N where N
 could be the number of items in each group. Is there anything in mysql
 that can give me that? I also thought if I could have the row numbers in
 the query result, I could solve it with a simple condition like, WHERE
 rowNum % N = 0.
 
 Any ideas?
 
 Thanks.
 
 
 





Re: 4.0.18 - 4.1.7 LEFT JOIN problem

2004-11-23 Thread Sergei Golubchik
Hi!

On Nov 22, Santino wrote:
 At 15:23 +0100 22-11-2004, Sergei Golubchik wrote:
 Hi!
 
 On Nov 22, Dilipan Sebastiampillai wrote:
  I moved a db from 4.0.18 to 4.1.7 and I realise that the SAME sql query
  using LEFT JOIN gives me different result.
  The result from 4.0.18 does a real LEFT JOIN but 4.1.7 differs
  sometimes and I don't want that.
 
  I think it comes from an 'optimization'  how can I make a query
  without using the LEFT JOIN optimization of MySQL 4.1.7 ?
 
 What is the query ?
 Can you provide a repeatable test case ?
 
 I have a query that works fine on 4.0.20 but doesn't work in 4.1.7.

Thanks for the test case.
I could repeat the bug using the 4.1.7 distribution, but not with the latest
4.1.8 tree - it means that the bug has apparently been fixed since the 4.1.7
release.

 ==CUT
 CREATE TABLE AULE (
   AUL_ID int(11) NOT NULL auto_increment,
   PRIMARY KEY  (AUL_ID)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
 
 CREATE TABLE OCCUPAZIONI (
   OCC_ID int(11) NOT NULL auto_increment,
   OCC_ID_AUL int(11) NOT NULL,
   OCC_DATA date,
   PRIMARY KEY  (OCC_ID)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
 
 INSERT INTO AULE VALUES (1);
 INSERT INTO AULE VALUES (2);
 INSERT INTO AULE VALUES (3);
 
 INSERT INTO OCCUPAZIONI VALUES (1, 1, '2004-11-10');
 
 select 'Before index';
 
 select AUL_ID, OCC_ID from AULE left join OCCUPAZIONI on 
 OCC_ID_AUL=AUL_ID and OCC_DATA='2004-11-10'
 where OCC_ID is null;
 
 alter table OCCUPAZIONI
   add KEY OCC_ID_AUL (OCC_ID_AUL);
 
 select 'After Index';
 
 select AUL_ID, OCC_ID from AULE left join OCCUPAZIONI on 
 OCC_ID_AUL=AUL_ID and OCC_DATA='2004-11-10'
 where OCC_ID is null;
 ==CUT
 
 Bug #6307
 I noticed that when I create a table with 2 indexes (primary + key), the
 cardinality of the primary key is 0 but the cardinality of the key is
 NULL. If I insert a record into the table, the cardinality of the
 primary key is 1 but the cardinality of the key is still NULL. If I run
 ANALYZE TABLE ..., the cardinality of both indexes is OK and the query
 works, but after a TRUNCATE TABLE ... the cardinality of the key is
 NULL again. If I create the table without the second index, add some
 records and then add the second index, the cardinality is NULL.

 At the moment I have found that if I define a composite primary key (old
 PRIMARY+KEY), the query works without the ANALYZE.

I am not sure I understand :(
What do you mean query works ?

Regards,
Sergei

-- 
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
/_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/  www.mysql.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: [Fwd: row numbers, jumping rows]

2004-11-23 Thread Alec . Cawley
I think the reason nobody has replied is that the term row number does 
not really have any meaning in a DBMS. How the database stores rows 
internally is the DBMS's private business, and should not be visible to 
you. I think it is true that MySQL does not *have* an internal row number, 
so there is nothing to skip by. All it stores in a MyISAM table is file 
offsets. However, even if I am wrong, it doesn't matter: that is an 
internal implementation detail and should not be visible to you.

However, if I read you rightly, what you want is to extract a random tenth 
of your table. You could do this with something like
SELECT columns FROM table ORDER BY rand() LIMIT table size / 10 ;


João Borsoi Soares [EMAIL PROTECTED] wrote on 23/11/2004 
11:30:26:

 No body has answered my question so far. Does that mean there is no way
 to retrieve current row numbers in a query or no way to retrieve results
 jumping steps?
 
 I would appreciate any help.
 
 Thanks,
 Joao.
 
 -Mensagem encaminhada-
  From: João Borsoi Soares [EMAIL PROTECTED]
  To: [EMAIL PROTECTED] [EMAIL PROTECTED]
  Subject: row numbers, jumping rows
  Date: Sun, 21 Nov 2004 11:23:41 -0200
  
  Hello list,
  
  I wonder if someone could help me to get the fastest result on my query.
  I need to classify one entity, according to a specific attribute (let's
  call it X), looking at a sample extracted from a database table.
  First I need to sort my sample ordered by a specific table field (X).
  Then I should divide my sorted sample in 10 equal groups (same number of
  rows). Finally, I should classify my entity (1 to 10), finding out at
  which group its attribute X fits in.

  Let me explain my thoughts. First I should make a SELECT COUNT(*).. to
  find the total number of rows in my sample. Then I make queries using
  LIMIT until I find which group the X attribute fits in. In the worst
  case I will have to do 10 queries. Which I think should take too long in
  my case.

  I wanted to make it in only one query. It could be possible if I could
  make a query which returns result jumping rows, like STEP N where N
  could be the number of items in each group. Is there anything in mysql
  that can give me that? I also thought if I could have the row numbers in
  the query result, I could solve it with a simple condition like, WHERE
  rowNum % N = 0.

  Any ideas?

  Thanks.
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]
 





Re: [Fwd: row numbers, jumping rows]

2004-11-23 Thread João Borsoi Soares
First, thanks for the answer, Alec. But I think you didn't understand my
problem. Maybe nobody replied because of that. Let me try again.

Suppose I make a select which returns 100 ordered rows. I only want to
read rows number 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100.

To read the 10th row I would make SELECT columns FROM table ORDER
BY my_field LIMIT 10. To read the 20th it would be SELECT columns
FROM table ORDER BY my_field LIMIT 10,10. And so on.. 

What I want is to make all of these queries in only one.

That's why I said if I could get the row number retrieved from the
query, I could do: SELECT columns FROM table WHERE (rowNumber %
(tableSize/10)) = 0 ORDER BY my_field

Thanks again,
Joao.
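(For what it's worth, the "every N-th row" selection being described here can also be done client-side once the ordered rows are fetched. A minimal Python sketch; the function name and sample data are made up for illustration:)

```python
# Take the last row of each of `groups` equal slices of an ordered result set.
# `rows` stands in for the already-fetched, already-ordered rows.

def every_nth(rows, groups=10):
    step = len(rows) // groups
    # 1-based rows 10, 20, ... correspond to 0-based indices step-1, 2*step-1, ...
    return [rows[i] for i in range(step - 1, len(rows), step)]

rows = list(range(1, 101))   # stand-in for 100 ordered rows
print(every_nth(rows))       # -> [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```

(This trades one round trip plus client-side filtering for the ten LIMIT queries above.)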


Em Ter, 2004-11-23 às 09:57, [EMAIL PROTECTED] escreveu:
 I think the reason nobody has replied is that the term row number does 
 not really have any meaning in a DBMS. How the database stores rows 
 internally is the DBMS's private business, and should not be visible to 
 you. I think it is true that MySQL does not *have* an internal row number, 
 so there is nothing to skip by. All it stores in a MyISAM table is file 
 offsets. However, even if I am wrong, it doesn't matter: that is an 
 internal implementation detail and should not be visible to you.
 
 However, if I read you rightly, what you want is to extract a random tenth 
 of your table. You could do this with something like
 SELECT columns FROM table ORDER BY rand() LIMIT table size / 10 ;
 
 
 João Borsoi Soares [EMAIL PROTECTED] wrote on 23/11/2004 
 11:30:26:
 
  No body has answered my question so far. Does that mean there is no way
  to retrieve current row numbers in a query or no way to retrieve results
  jumping steps?
  
  I would appreciate any help.
  
  Thanks,
  Joao.
  
  -Mensagem encaminhada-
   From: João Borsoi Soares [EMAIL PROTECTED]
   To: [EMAIL PROTECTED] [EMAIL PROTECTED]
   Subject: row numbers, jumping rows
   Date: Sun, 21 Nov 2004 11:23:41 -0200
   
   Hello list,
   
   I wonder if someone could help me to get the fastest result on my query.
   I need to classify one entity, according to a specific attribute (let's
   call it X), looking at a sample extracted from a database table.
   First I need to sort my sample ordered by a specific table field (X).
   Then I should divide my sorted sample in 10 equal groups (same number of
   rows). Finally, I should classify my entity (1 to 10), finding out at
   which group its attribute X fits in.

   Let me explain my thoughts. First I should make a SELECT COUNT(*).. to
   find the total number of rows in my sample. Then I make queries using
   LIMIT until I find which group the X attribute fits in. In the worst
   case I will have to do 10 queries. Which I think should take too long in
   my case.

   I wanted to make it in only one query. It could be possible if I could
   make a query which returns result jumping rows, like STEP N where N
   could be the number of items in each group. Is there anything in mysql
   that can give me that? I also thought if I could have the row numbers in
   the query result, I could solve it with a simple condition like, WHERE
   rowNum % N = 0.

   Any ideas?

   Thanks.
  
  
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe: 
 http://lists.mysql.com/[EMAIL PROTECTED]
  
 





Threads > 1024 statically linking against linuxthreads library

2004-11-23 Thread Willem Roos

Hi list,

We have apps that try to open > 700 simultaneous connections to our
4.0.20-max mysql server running on RH7 (Linux kernel 2.4.20). We get
many 'Can't create a new thread (errno 11)' errors returned by
mysql_real_connect().

I'm trying to follow the directions given in the Linux Source
Distribution Notes (section 2.12.1.3) about
- increasing PTHREAD_THREADS_MAX in
`sysdeps/unix/sysv/linux/bits/local_lim.h',
- decreasing STACK_SIZE in `linuxthreads/internals.h',
- recompiling LinuxThreads to produce a new `libpthread.a' library, and
- relink MySQL against it.

The first three steps are ok, downloaded glibc-2.2.5 (which is also
running on my box), downloaded glibc-linuxthreads-2.2.5, hacked 
compiled everything ok.

Downloaded mysql-4.0.22 source (close enough to my 4.0.20 production
system). Now i realize i haven't the faintest idea how to 'statically
link MySQL against it'. I keep on running into all kinds of dynamic
linker errors.

The page http://www.volano.com/linuxnotes.html referred to in the manual
doesn't exist. An exhaustive search of this list didn't help (I followed the
recent discussion re 'MySQL 4.0.2 is topping out at 1024 threads', but
that doesn't help me).

Does anyone have any notes/pointers/info/idiot's guide in this regard?

Thanks,

--
  Willem Roos
  Per sercas vi malkovri - JS Bach (freely translated)

Disclaimer
http://www.shoprite.co.za/disclaimer.html




Re: [Fwd: row numbers, jumping rows]

2004-11-23 Thread Alec . Cawley
João Borsoi Soares [EMAIL PROTECTED] wrote on 23/11/2004 
12:34:01:

 First thanks for the answer Alec. But I think you didn't understood my
 problem. Maybe nobody replied because of that. Let me try again. 
 
 Suppose I make a select which returns 100 ordered rows. I only want to
 read rows number 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100.
 
 To read the 10th row I would make SELECT columns FROM table ORDER
 BY my_field LIMIT 10. To read the 20th it would be SELECT columns
 FROM table ORDER BY my_field LIMIT 10,10. And so on.. 
 
 What I want is to make all of these queries in only one.
 
 That's why I said if I could get the row number retrieved from the
 query, I could do: SELECT columns FROM table WHERE (rowNumber %
 (tableSize/10)) = 0 ORDER BY my_field

I am not a real MySQL wizard, so there may be better ways. But the way I 
would do it would be with a temporary table. This may sound cumbersome, 
but as far as I can see MySQL would have to create a temporary table 
internally to satisfy your request anyway.

CREATE TEMPORARY TABLE temp
(row INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  other columns as you need
  ) ;
INSERT INTO temp SELECT NULL, other columns FROM table ORDER BY 
criterion ;
SELECT columns FROM temp WHERE row % 10 = 0 LIMIT as needed ;
DROP TABLE temp ;

A bit clunky, I agree, but the only way I can see of solving your problem ;)
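(The temporary-table idea above can be sketched end to end. A minimal Python sketch using SQLite as a stand-in for MySQL; the table names and data are made up, and the assumption is that the idea, not the exact syntax, carries over:)

```python
# Temporary-table / row-number approach: copy rows into a numbered table
# in sorted order, then keep every 10th row (SQLite stand-in for MySQL).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (val INTEGER)")
con.executemany("INSERT INTO t (val) VALUES (?)",
                [(v,) for v in range(100, 0, -1)])   # 100 unsorted rows

# Copy the rows, in sorted order, into a numbered temporary table.
con.execute("CREATE TEMPORARY TABLE tmp (row INTEGER PRIMARY KEY, val INTEGER)")
ordered = [v for (v,) in con.execute("SELECT val FROM t ORDER BY val")]
con.executemany("INSERT INTO tmp (row, val) VALUES (?, ?)",
                list(enumerate(ordered, start=1)))

# Keep every 10th row, i.e. the boundary of each decile.
picked = [v for (v,) in con.execute(
    "SELECT val FROM tmp WHERE row % 10 = 0 ORDER BY row")]
print(picked)   # -> [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```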

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



probs with starting mysql (urgent pls)

2004-11-23 Thread A Z

MySQL 4.0.14
InnoDB (file size as it stands at present) = 6 GB
We have tried moving the data folder from one hard disk
to another, but still have the same problem. We really are in
need of help here. Is there any size limit in InnoDB?


I did ask this before, and I did get some responses, thank
you. But due to the nature of the problem I'm giving it
another try.

On trying to start Mysqld-nt --console the following
appears.

InnoDB: No valid checkpoint found.
InnoDB: If this error appears when you are creating an
InnoDB database,
InnoDB: the problem may be that during an earlier
attempt you managed
InnoDB: to create the InnoDB data files, but log file
creation failed.
InnoDB: If that is the case, please refer to section
3.1 of
InnoDB: http://www.innodb.com/ibman.html
041123 13:38:36  Can't init databases
041123 13:38:36  Aborting

041123 13:38:36  InnoDB: Warning: shutting down a not
properly started
 InnoDB: or created database!
041123 13:38:36  mysqld-nt: Shutdown Complete


regards










Re: making a pattern table

2004-11-23 Thread SGreen
What you are trying to make is called either a pivot table or a 
crosstab report. There is a very simple pattern to writing one. I am 
assuming that all of the data is in just one table and that there can be 
one or more entries per person per program per month. I know these do not 
line up with your column names but you didn't post your table structure 
with your question.

SELECT perid
, progid
, sum(if(month=1, 1, 0)) as m1
, sum(if(month=2, 1, 0)) as m2
, sum(if(month=3, 1, 0)) as m3
, sum(if(month=4, 1, 0)) as m4
, sum(if(month=5, 1, 0)) as m5
, sum(if(month=6, 1, 0)) as m6
, sum(if(month=7, 1, 0)) as m7
, sum(if(month=8, 1, 0)) as m8
, sum(if(month=9, 1, 0)) as m9
, sum(if(month=10, 1, 0)) as m10
, sum(if(month=11, 1, 0)) as m11
, sum(if(month=12, 1, 0)) as m12
FROM attendancetable
WHERE year=2003
GROUP by perid, progid

That query will show you how often a person attended a program during 
2003. Modify it as necessary to work with your data.
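
To see the SUM(IF(...)) pivot pattern in action without a MySQL server, here
is the same query reproduced in Python's sqlite3 (SQLite has no IF(), so the
equivalent CASE WHEN expression is used; the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendancetable "
             "(perid INTEGER, progid INTEGER, month INTEGER, year INTEGER)")
conn.executemany(
    "INSERT INTO attendancetable VALUES (?, ?, ?, 2003)",
    [(23, 1, 1), (23, 1, 1), (23, 1, 3), (45, 1, 2)])

# MySQL's SUM(IF(month=1, 1, 0)) becomes SUM(CASE WHEN ... END) here;
# only three month columns are shown to keep the sketch short.
rows = conn.execute(
    "SELECT perid, progid, "
    "SUM(CASE WHEN month=1 THEN 1 ELSE 0 END) AS m1, "
    "SUM(CASE WHEN month=2 THEN 1 ELSE 0 END) AS m2, "
    "SUM(CASE WHEN month=3 THEN 1 ELSE 0 END) AS m3 "
    "FROM attendancetable WHERE year=2003 "
    "GROUP BY perid, progid ORDER BY perid").fetchall()
print(rows)
```

Each output row is one person/program pair with per-month attendance counts,
which is the crosstab shape the original question asked for.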

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



[EMAIL PROTECTED] wrote on 11/23/2004 12:00:13 AM:

 Hi!
 
 I am wondering if there is a way using SQL to make a pattern file (I'm
 not sure exactly what to call it) of the sort following, which keeps
 track of people in programs of different kinds, by months of the year.
 A given file can be set up as below examining only 1 type of program
 (that is what I'm doing for now) or multiple types (by using a 1 for
 one type of program, a 2 for another type, etc.).
 
 perid   m1   m2   m3   m4   m5   m6   m7   m8   . . .   m12
 023      1    0    0    1    1    1    0    0   . . .     0
 045      0    1    0    0    1    0    0    1   . . .     1
 
 It is just a list of id numbers and then for each month (m1, m2...m12)
 a 1 is placed if the individual is in the program and a 0 if they are
 not. The pattern file is used to help in the analysis of how people are
 using programs and cycling in and out of them.
 
 Thanks very much.
 
 -Alex
 
 
 


Re: MySQL Books

2004-11-23 Thread Ugo Bellavance
Kieran Kelleher wrote:
This is my favorite advanced MySQL book. It's by Jeremy Zawodny (looks  
after MySQL installations for Yahoo.com) (fix the link if it wordwraps  
in this email):
http://www.amazon.com:80/exec/obidos/ASIN/0596003064/kieranwebobje-20? 
creative=327641camp=14573link_code=as1

-Kieran
I have MySQL by Paul DuBois, 2nd ed., here.  Very complete.  However, I 
like High Performance MySQL more because it is closer to what I do - 
sysadmin/DBA.  I also read MySQL Enterprise Solutions.  Good, but I 
like the other two more.



Alias query problem in 4.1.7?

2004-11-23 Thread Geoffrey R. Thompson
I have an interesting problem that I cannot find any clues to in the MySQL
documentation.  The following query works in 3.2.3, but does not work in
4.1.7:

 

SELECT `Product`.`Product`, `AssignedToAgent`.`AgentName` AS `AssignedTo`,
sum(`Inquiries`) AS `Inquiries`
FROM `Inquiry` INNER JOIN `Product` ON
`Inquiry`.`ProductKey` = `Product`.`ProductKey`
INNER JOIN `Agent` AS `AssignedToAgent` ON
`Inquiry`.`AssignedToKey` = `AssignedToAgent`.`AgentKey`
INNER JOIN `DateDim` AS `DateOpen` ON
`Inquiry`.`DateOpenKey` = `DateOpen`.`DateDimKey`
WHERE `DateOpen`.`Year` = '2003'
GROUP BY `Product`.`Product`, `AssignedToAgent`.`AssignedTo`;

 

It appears that if I take the table alias AssignedToAgent out of the GROUP
BY clause (leaving just the column alias AssignedTo), the query will then
work in 4.1.7 - even though the table alias does not present a problem in
3.2.3.  Any ideas why?

 

Any help would be greatly appreciated.

 

Geoff Thompson

Avaion Support

[EMAIL PROTECTED] mailto:[EMAIL PROTECTED] 

http://www.avaion.com http://www.avaion.com/ 

 



MySQL configuring on Linux

2004-11-23 Thread Danesh Daroui
Hi all,
I have installed MySQL Server 4.1.7 on both my Windows XP and Linux 
Fedora Core 3 systems. It was amazing that everything could be configured 
visually and was almost ready to use right after installation on Windows 
XP; I could even choose how I want to use the MySQL server: Developer, 
Server, or Dedicated Server. But when I installed MySQL Server 4.1.7 on 
my Linux system, I couldn't configure it as on Windows XP. It just 
installed, and there was no configuration window or anything else. How 
can I configure my Linux server, for example, to act as a dedicated 
server? Is there any extra program for Linux that I have to download?

Thanks,
Danesh Daroui



Re: probs with starting mysql (urgent pls)

2004-11-23 Thread Heikki Tuuri
Hi!
The size limit in InnoDB is 64 TB.
You have probably mixed up ibdata files or ib_logfiles. The ib_logfiles that 
you are using are probably completely zero-filled. No checkpoint info in 
them.

What does
mysqld-nt --console
print if you move the ib_logfiles away?
When creating new ibdata files, ib_logfiles, or moving those files to new 
locations, precisely follow the instructions at
http://dev.mysql.com/doc/mysql/en/Adding_and_removing.html
http://dev.mysql.com/doc/mysql/en/Backing_up.html (a cold backup is like 
moving the files to a new location).

Keep in mind that in InnoDB, ib_logfiles are as important as ibdata files. 
You must not forget ib_logfiles.

Best regards,
Heikki
Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables
http://www.innodb.com/order.php

Order MySQL support from http://www.mysql.com/support/index.html
/************************************************************
Looks for the maximum consistent checkpoint from the log groups. */
static
ulint
recv_find_max_checkpoint(
/*=====================*/
				/* out: error code or DB_SUCCESS */
	log_group_t**	max_group,	/* out: max group */
	ulint*		max_field)	/* out: LOG_CHECKPOINT_1 or
					LOG_CHECKPOINT_2 */
{
	log_group_t*	group;
	dulint		max_no;
	dulint		checkpoint_no;
	ulint		field;
	byte*		buf;

	group = UT_LIST_GET_FIRST(log_sys->log_groups);
	max_no = ut_dulint_zero;
	*max_group = NULL;
	buf = log_sys->checkpoint_buf;

	while (group) {
		group->state = LOG_GROUP_CORRUPTED;

		for (field = LOG_CHECKPOINT_1; field <= LOG_CHECKPOINT_2;
				field += LOG_CHECKPOINT_2 - LOG_CHECKPOINT_1) {

			log_group_read_checkpoint_info(group, field);

			if (!recv_check_cp_is_consistent(buf)) {
				if (log_debug_writes) {
					fprintf(stderr,
	"InnoDB: Checkpoint in group %lu at %lu invalid, %lu\n",
						group->id, field,
						mach_read_from_4(buf
						+ LOG_CHECKPOINT_CHECKSUM_1));
				}

				goto not_consistent;
			}

			group->state = LOG_GROUP_OK;
			group->lsn = mach_read_from_8(buf
					+ LOG_CHECKPOINT_LSN);
			group->lsn_offset = mach_read_from_4(buf
					+ LOG_CHECKPOINT_OFFSET);
			checkpoint_no =
				mach_read_from_8(buf + LOG_CHECKPOINT_NO);

			if (log_debug_writes) {
				fprintf(stderr,
	"InnoDB: Checkpoint number %lu found in group %lu\n",
					ut_dulint_get_low(checkpoint_no),
					group->id);
			}

			if (ut_dulint_cmp(checkpoint_no, max_no) >= 0) {
				*max_group = group;
				*max_field = field;
				max_no = checkpoint_no;
			}
not_consistent:
			;
		}

		group = UT_LIST_GET_NEXT(log_groups, group);
	}

	if (*max_group == NULL) {
		fprintf(stderr,
"InnoDB: No valid checkpoint found.\n"
"InnoDB: If this error appears when you are creating an InnoDB database,\n"
"InnoDB: the problem may be that during an earlier attempt you managed\n"
"InnoDB: to create the InnoDB data files, but log file creation failed.\n"
"InnoDB: If that is the case, please refer to\n"
"InnoDB: http://dev.mysql.com/doc/mysql/en/Error_creating_InnoDB.html\n");

		return(DB_ERROR);
	}

	return(DB_SUCCESS);
}


RE: MySQL configuring on Linux

2004-11-23 Thread J.R. Bullington
Windows has always been more graphical. For Linux, depending on the version 
that you have downloaded
(binary v. source) you have to do very little to configure it for a dedicated 
server. Check out the
my.cnf files in the /support-files/ folder of your MySQL installation. Use 
these as a guideline. The
more RAM you have, and the more dedicated you want the server to be, the larger 
the my.cnf file that you
want to copy to /etc/my.cnf

For Small Servers (256 MB - 512 MB RAM) or Development, use:
shell> cp /usr/local/mysql/support-files/my-small.cnf /etc/my.cnf

For Medium Servers (512 MB - 1 GB RAM) or Server, use:
shell> cp /usr/local/mysql/support-files/my-medium.cnf /etc/my.cnf

For Large Servers (1 GB - 2 GB RAM) or Dedicated, use:
shell> cp /usr/local/mysql/support-files/my-large.cnf /etc/my.cnf

For Extra Large Servers (> 2 GB RAM) running only MySQL, use:
shell> cp /usr/local/mysql/support-files/my-huge.cnf /etc/my.cnf


Each of these settings can be found in the my-(small, medium, large, huge).cnf 
files, in commented
sections. There is nothing new that you need to download.

Also, make sure you read the manual and installation instructions. All of this 
information and more
is located in there.

J.R.

-Original Message-
From: Danesh Daroui [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 10:18 AM
To: [EMAIL PROTECTED]
Subject: MySQL configuring on Linux

Hi all,

I have installed MySQL Server 4.1.7 on both my Windows XP and Linux Fedora Core 
3 systems. It was
amazing that everything could be configured visually and was almost ready to 
use right after
installation on Windows XP; I could even choose how I want to use the MySQL 
server: Developer,
Server, or Dedicated Server. But when I installed MySQL Server 4.1.7 on my
Linux system, I couldn't configure it as on Windows XP. It just installed, and 
there was no
configuration window or anything else. How can I configure my Linux server, for 
example, to act as a
dedicated server? Is there any extra program for Linux that I have to download?

Thanks,

Danesh Daroui






Re: very slow concurrent queries on HEAP table

2004-11-23 Thread mos
At 04:16 AM 11/23/2004, Bernd Heller wrote:
Hi Mike,
I tried read-locking the tables as you suggested, but regrettably it 
didn't help a bit. If there is nothing running but selects on that 
database, mysql shouldn't lock ever, should it? or is it locking the table 
if it's doing a full table scan (why?!)?

Bernd
Bernd,
It doesn't make sense to me either. It's almost as if it is doing 
a "SELECT ... LOCK IN SHARE MODE" which adds a lock to the table. See 
http://dev.mysql.com/doc/mysql/en/InnoDB_locking_reads.html

Here are a few ideas:
1) Have you tried eliminating the Order By clause? (If this works, try 
creating the table in sorted order so the Order By clause is not needed)
2) If you don't need the indexes on the Heap table, try dropping them.
3) Have you run Analyze on the heap table or Explain to see what it is doing?
4) If you have an AutoInc column try getting rid of it. Since you are not 
adding rows to the heap table, try changing it to a simple Integer column.

Mike
On 23.11.2004, at 5:39 Uhr, mos wrote:
At 06:10 PM 11/21/2004, you wrote:
Hi all,
I've got a rather odd performance problem with concurrent queries here. 
My query regrettably always needs to do a full table scan - really can't 
be helped. So my
idea was to cache the data in a HEAP table to get maximum performance 
out of it and it works really well, I'm down to 0.07 seconds for a 
single query.
Now the problem I just found is when I run 10 or 20 identical queries at 
the same time: my CPU goes up to 100% for a surprisingly long time, and 
when I look at what mysqld is doing with a profiler, it's burning 70% of 
its time in pthread_cond_wait and pthread_mutex_lock.

To me this looks as if the table gets locked and the queries don't 
really execute concurrently, but I'm not sure how to find out what is 
going on and there are no updates or inserts happening, just plain 
selects. Table_locks_immediate increments by 1 for each query, but 
Table_locks_waited remains at 0. Also show processlist says all 
queries are in sending data state most of the time.

I'm not sure what to do about this, but it's not an acceptable 
performance right now. The table has 100,000 rows at present and each 
row contains only ints of different sizes totaling to about 200 bytes 
per row. The heap table is small enough to fit into memory, and there is 
also no swapping or thrashing during the queries, so it should execute 
lightning fast, right? it just doesn't. This is mysql 4.1.7 (official 
max binary) on MacOS X.

Any help would be very much appreciated!
Bernd
If locking is the problem, have you tried:
lock tables mytable READ;
select ... from mytable ...;
unlock tables;
Mike



Re: On but off topic Putting a file in Ram

2004-11-23 Thread Eamon Daly
The reason I ask this here.  Is I have graphics that are loaded by Mysql 
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.
I assume you mean that you have image data stored in a MySQL
table somewhere and are using a SELECT to fetch and serve
it. I think the general consensus would be something along
the lines of "Don't do that." Apache was /designed/ to serve
files quickly, so let it do what it does best. Store just
the filenames in MySQL and let Apache handle the rest. Once
you've done that, you can do plenty of things to speed up or
scale your system, such as mapping the files to memory with
mod_file_cache, judicious use of a caching proxy, or the
creation of a ramdisk.

Eamon Daly

- Original Message - 
From: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, November 22, 2004 8:20 PM
Subject: On but off topic Putting a file in Ram


I have a small file that calls a search function at Findwhat in case Mysql
locally overloads.  I just put on a new partner who looks like they may 
call
my server 40 million times a month.

I know there is some way to put a file into Ram for super fast response.
Question is how do I do this?
Will it still write to Mysql from the Ram Drive?  What is the downside of
doing this?
The reason I ask this here.  Is I have graphics that are loaded by Mysql 
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.



Re: Alias query problem in 4.1.7?

2004-11-23 Thread Rhino
Try changing your GROUP BY to use the column name of the second column in
the SELECT, not the alias of the second column name, i.e.

GROUP BY `Product`.`Product`, `AssignedToAgent`.`AgentName`

That should work on any version of MySQL. I don't think you're allowed to
use aliases in a GROUP BY, only actual column names. Then again, I am mostly
a DB2 person and the MySQL dialect of SQL may allow aliases in a GROUP BY
for all I know.

I'm surprised that the alias worked on 3.2.3: are you sure you have
reproduced the exact query that works on 3.2.3? I'm afraid I don't have
either a 3.2.3 or 4.1.7 system so I can't try any of this for myself to see.

By the way, did you realize that your query is substantially longer than it
needs to be? You really only need to qualify column names with table names
if you are doing a join of two or more tables and even then, you only need
to qualify column names that occur in two or more of the tables read by the
query. That would also eliminate the need for you to write aliases for some
of your table names at all, further shortening the query. In your query, it
appears that only the 'ProductKey' column occurs in more than one of the
tables so your query could be as short as this:

SELECT `Product`, `AgentName` AS `AssignedTo`, sum(`Inquiries`) AS
`Inquiries`
FROM `Inquiry` INNER JOIN `Product` ON `Inquiry`.`ProductKey` =
`Product`.`ProductKey`
INNER JOIN `Agent`  ON `AssignedToKey` = `AgentKey`
INNER JOIN `DateDim` ON `DateOpenKey` = `DateDimKey`
WHERE `Year` = '2003'
GROUP BY `Product`, `AssignedTo`;

Then again, perhaps it is your shop standard to fully qualify all column
names in queries; if so, you should follow your shop standard ;-) More
likely, you are probably using some sort of query generating tool in which
case you probably don't have a choice in the matter.

Rhino



- Original Message - 
From: Geoffrey R. Thompson [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 12:11 AM
Subject: Alias query problem in 4.1.7?


I have an interesting problem that I cannot find any clues to in the MySQL
documentation.  The following query works in 3.2.3, but does not work in
4.1.7:



SELECT `Product`.`Product`, `AssignedToAgent`.`AgentName` AS `AssignedTo`,
sum(`Inquiries`) AS `Inquiries`
FROM `Inquiry` INNER JOIN `Product` ON `Inquiry`.`ProductKey` =
`Product`.`ProductKey`
INNER JOIN `Agent` AS `AssignedToAgent` ON `Inquiry`.`AssignedToKey` =
`AssignedToAgent`.`AgentKey`
INNER JOIN `DateDim` AS `DateOpen` ON `Inquiry`.`DateOpenKey` =
`DateOpen`.`DateDimKey` WHERE `DateOpen`.`Year` = '2003'
GROUP BY `Product`.`Product`, `AssignedToAgent`.`AssignedTo`;



It appears that if I take the table alias AssignedToAgent out of the GROUP
BY clause (leaving just the column alias AssignedTo), the query will then
work in 4.1.7 - even though the table alias does not present a problem in
3.2.3.  Any ideas why?



Any help would be greatly appreciated.








Re: MySQL Books

2004-11-23 Thread Jonathan Duncan
Very good feedback on multiple books.  Thank you.  So many good choices.
 If only I had time to read them all...

Jonathan
 
 
Ugo Bellavance [EMAIL PROTECTED] 11/23/04 7:46 am  
Kieran Kelleher wrote: 
This is my favorite advanced MySQL book. It's by Jeremy Zawodny (looks 

after MySQL installations for Yahoo.com) (fix the link if it wordwraps 

in this email): 
http://www.amazon.com:80/exec/obidos/ASIN/0596003064/kieranwebobje-20? 
creative=327641camp=14573link_code=as1 
 
-Kieran 
 
I have MySQL by Paul DuBois, 2nd ed., here.  Very complete.  However, I 
like High Performance MySQL more because it is closer to what I do - 
sysadmin/DBA.  I also read MySQL Enterprise Solutions.  Good, but I 
like the other two more. 
 
 



Re: Can't see table information

2004-11-23 Thread Jason St. Louis
Hi.  Thanks for the response.
Here is all the relevent information you asked for:
[EMAIL PROTECTED] mysql]$ ./bin/mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 130 to server version: 3.23.55
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> use gb;
Database changed
mysql> show tables;
Empty set (0.01 sec)
mysql> show table status;
Empty set (0.00 sec)
mysql> show table status from gb;
Empty set (0.01 sec)
mysql> select count(*) from player;
+----------+
| count(*) |
+----------+
|       15 |
+----------+
1 row in set (0.00 sec)
mysql> select count(*) from gb.player;
+----------+
| count(*) |
+----------+
|       15 |
+----------+
1 row in set (0.00 sec)
mysql> Bye
[EMAIL PROTECTED] mysql]$ ./bin/mysqlshow -u root -p gb
Enter password:
Database: gb
++
| Tables |
++
++
[EMAIL PROTECTED] mysql]$ ./bin/mysqlshow --status -u root -p gb
Enter password:
Database: gb
+--+--++--++-+-+--+---++-+-+++-+
| Name | Type | Row_format | Rows | Avg_row_length | Data_length | 
Max_data_length | Index_length | Data_free | Auto_increment | 
Create_time | Update_time | Check_time | Create_options | Comment |
+--+--++--++-+-+--+---++-+-+++-+
+--+--++--++-+-+--+---++-+-+++-+

As you can see, it doesn't matter how I go about it, I always get the 
same result.  I issued the "select count(*) from player" statement to 
show that even though it doesn't show any tables, I can still perform an 
SQL statement against a table that should be there.

I do not have a configuration file (I assume you are reffering to the 
my.cnf file?)  I guess this is something you have to manually create and 
I suppose I never did that.  I guess that means my mysql server is using 
all defaults.

I would just re-install mysql, except that I can't even do a mysqldump 
to get my data/table structure out of the database because of this 
problem.  guess I should have been taking backups, eh?

Is there any other relevant information I can provide that may help?
Here is the mysql user table entry for the user I am using to perform 
these statements:

mysql select * from user where user = 'root' and host='localhost';
+---+--+--+-+-+-+-+-+---+-+---+--+---++-+++
| Host  | User | Password | Select_priv | Insert_priv | 
Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv | 
Shutdown_priv | Process_priv | File_priv | Grant_priv | References_priv 
| Index_priv | Alter_priv |
+---+--+--+-+-+-+-+-+---+-+---+--+---++-+++
| localhost | root | -protected-- | Y   | Y   | Y 
| Y   | Y   | Y | Y   | Y 
   | Y| Y | Y  | Y   | Y 
   | Y  |
+---+--+--+-+-+-+-+-+---+-+---+--+---++-+++
1 row in set (0.00 sec)

Thanks.
Jason
Gleb Paharenko wrote:
Hello.

Sounds very strange. Maybe there is a bug, and it would be great if someone
corrects me, but I haven't found anything related in bugs database.
You said that you issued show tables statement on your database? Maybe 

it would be helpful to try not only show tables from your_database, but
also:
 use your_database;
 show tables;
or
 show table status;
 show table status from your_db;

What output the command line utility mysqlshow produced?
  mysqlshow your_database
  mysqlshow --status your_database

When you were making selects did you use db_name.tbl_name syntax to refer to 

a table? Send us your config file (however I don't know if it would be helpful,
but this is a good practice).

Jason St. Louis [EMAIL PROTECTED] wrote:

Hi Folks,

I've been running a website on php/mysql database for about a year now.  
I typically enter data using the phpmyadmin package.  The other day I 
attempted to add some data but it appeared to phpmyadmin that there were 
no tables in my database.  I connected to mysql using the commandline 
client and issued 

Re: [Fwd: row numbers, jumping rows]

2004-11-23 Thread SGreen
If I understand your question correctly you want to group results for 
statistical analysis. For instance if you have 2000 results (sequenced in 
some way) you want to be able to split those results into 10 equal (or 
nearly equal) groups of results and re-aggregate your results based on the 
new generated group number (like a decile or percentile)

Let's say, as an example,  you want to know how many scores are in each 
decile (tenth) of scores out of a possible maximum of 1600 per test. That 
means you want to count how many people scored from 0-159, 160 - 319, ... 
, 1440 -1600. There are several ways to do this but I choose to make a 
table of my decile ranges (groups) for this example. This also allows you 
to modify the size of each group individually.

CREATE TABLE AnalysisGroups (
groupID int not null auto_increment
, grouplow int not null
, grouphigh int not null
, primary key(groupID)
, key(grouplow, grouphigh)
);

INSERT AnalysisGroups (grouplow, grouphigh) 
VALUES (0,159), (160,319), (320,479), (480,639), (640,799), (800,959), 
(960,1119), (1120,1279), (1280,1439), (1440,1600);

The test scores are stored in another table. If you want to collect how 
many scores fall into each category you just join them together and total 
up the matches

SELECT ag.groupID as decile
, count(tr.testID) as scores
FROM AnalysisGroups ag
INNER JOIN TestResults tr
ON tr.score >= ag.grouplow
AND tr.score <= ag.grouphigh
GROUP BY ag.groupID
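
As a quick sanity check of the range-join idea, here is a scaled-down
version in Python's sqlite3 (three buckets and four scores instead of the
full decile table; names follow the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE AnalysisGroups (
  groupID INTEGER PRIMARY KEY AUTOINCREMENT,
  grouplow INTEGER NOT NULL,
  grouphigh INTEGER NOT NULL);
CREATE TABLE TestResults (testID INTEGER PRIMARY KEY, score INTEGER);
""")
conn.executemany("INSERT INTO AnalysisGroups (grouplow, grouphigh) "
                 "VALUES (?, ?)", [(0, 159), (160, 319), (320, 479)])
conn.executemany("INSERT INTO TestResults (testID, score) VALUES (?, ?)",
                 [(1, 100), (2, 150), (3, 200), (4, 400)])

# Each score lands in the bucket whose [grouplow, grouphigh] range
# contains it; counting per bucket gives the histogram.
rows = conn.execute("""
SELECT ag.groupID, COUNT(tr.testID)
FROM AnalysisGroups ag
INNER JOIN TestResults tr
  ON tr.score >= ag.grouplow AND tr.score <= ag.grouphigh
GROUP BY ag.groupID ORDER BY ag.groupID""").fetchall()
print(rows)
```

Keeping the ranges in their own table, as suggested, is what lets you resize
individual buckets without rewriting the query.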

If you want to group by population you have another trick you can use 
similar to this one. First we need to create a table that can store the 
results of the query but that also has two additional columns. One of 
those columns is an auto_increment column (so that we number each element 
of the result) and the second will be which -ile (percentile, decile, 
quartile) the row falls into. 

Sticking with our current example (test scores) assume we need to compute 
the average score for each percentile of test takers. We could do this by 
first ordering the results from lowest score to highest score (or by any 
other useful measure) then dividing the list into 100 equal groups, 
eventually averaging the score for each group.

CREATE TEMPORARY TABLE tmpAnalysis(
ID int auto_increment
, percentile int
, testID int
, score float
, primary key(ID)
, key(percentile)
);

INSERT tmpAnalysis(testID, score)
SELECT id, score
FROM TestResults
ORDER BY score;

SELECT @Pctl := count(ID)/100 FROM tmpAnalysis;

UPDATE tmpAnalysis
SET percentile = FLOOR((ID-1)/@Pctl);

SELECT percentile, AVG(score) as pctl_mean
FROM tmpAnalysis
GROUP BY percentile;

DROP TEMPORARY TABLE tmpAnalysis;

I added an extra column to the temp table so that you could see that you 
could use that table for multiple purposes. Once I have assigned the 
percentile numbers to each row, I could have identified which tests (and 
which test takers) fell into each percentile.

SELECT tr.taker, tr.score
FROM TestResults tr
INNER JOIN tmpAnalysis a
on a.testID = tr.id
WHERE a.percentile >= 95

Assuming you haven't dropped the temp table yet, that query will give you 
the list of who scored in the top 5% on that particular test.
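
The number-then-bucket approach can be tried out in Python's sqlite3 as
well. This sketch uses quartiles on a 20-row toy table rather than
percentiles, so the groups divide evenly (MySQL's user variable @Pctl is
replaced by a Python variable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestResults (id INTEGER PRIMARY KEY, score REAL)")
conn.executemany("INSERT INTO TestResults (id, score) VALUES (?, ?)",
                 [(i, float(i)) for i in range(1, 21)])  # scores 1..20

# Copy the rows in score order so the auto-increment ID numbers them.
conn.execute("CREATE TEMPORARY TABLE tmpAnalysis ("
             "ID INTEGER PRIMARY KEY AUTOINCREMENT, "
             "bucket INTEGER, testID INTEGER, score REAL)")
conn.execute("INSERT INTO tmpAnalysis (testID, score) "
             "SELECT id, score FROM TestResults ORDER BY score")

# Bucket size = total rows / number of buckets (4 quartiles here).
n = conn.execute("SELECT COUNT(*) FROM tmpAnalysis").fetchone()[0]
size = n // 4
conn.execute("UPDATE tmpAnalysis SET bucket = (ID - 1) / ?", (size,))

# Mean score per quartile, analogous to the GROUP BY percentile query.
rows = conn.execute("SELECT bucket, AVG(score) FROM tmpAnalysis "
                    "GROUP BY bucket ORDER BY bucket").fetchall()
print(rows)
```

Once the bucket numbers are assigned, the same temp table can be reused for
the "who is in the top group" style of query shown above.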

Disclaimer: I haven't had time to test any of this with live data. If 
these examples don't work the first time, it's probably because I made a 
typing error. Apologies in advance.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


João Borsoi Soares [EMAIL PROTECTED] wrote on 11/23/2004 06:30:26 
AM:

 Nobody has answered my question so far. Does that mean there is no way
 to retrieve current row numbers in a query or no way to retrieve results
 jumping steps?
 
 I would appreciate any help.
 
 Thanks,
 Joao.
 
 -Mensagem encaminhada-
  From: João Borsoi Soares [EMAIL PROTECTED]
  To: [EMAIL PROTECTED] [EMAIL PROTECTED]
  Subject: row numbers, jumping rows
  Date: Sun, 21 Nov 2004 11:23:41 -0200
  
  Hello list,
  
  I wonder if someone could help me to get the fastest result on my 
query.
  I need to classify one entity, according to a specific attribute 
(let's
  call it X), looking at a sample extracted from a database table. 
  First I need to sort my sample ordered by a specific table field (X).
  Then I should divide my sorted sample in 10 equal groups (same number 
of
  rows). Finally, I should classify my entity (1 to 10), finding out at
  which group its attribute X fits in.
  
  Let me explain my thoughts. First I should make a SELECT COUNT(*).. 
to
  find the total number of rows in my sample. Then I make queries using
  LIMIT until I find which group the X attribute fits in. In the worst
  case I will have to do 10 queries. Which I think should take too long 
in
  my case.
  
  I wanted to make it in only one query. It could be possible if I could
  make a query which returns results jumping rows, like "STEP N" where N
  could be the number of items in each group. Is there 

Insert statement with large numbers gives Zero result

2004-11-23 Thread Stephen Thompson
I have a database that is constantly moving large numbers around.

At the moment, when a simple INSERT INTO Table_Name VALUES 
('3573286532896523465328654654365436543'); is run,
the value entered into the table is a zero. The field type that I am inserting 
into is DOUBLE(200,0) unsigned.

Also this is all being done through PHP, but I am praying that it is a database 
error
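
One thing worth noting (an observation, not a diagnosis of the zero result,
since the PHP layer is not shown): a 37-digit integer is far beyond the
15-16 significant decimal digits a double-precision float can hold, so even
a successful insert into a DOUBLE column cannot store that value exactly. A
quick check:

```python
# A 64-bit IEEE double carries about 15-16 significant decimal digits,
# so round-tripping a 37-digit integer through float loses precision.
n = 3573286532896523465328654654365436543
as_double = float(n)
print(int(as_double) == n)   # the round-trip does not preserve the value
print(len(str(n)))           # 37 digits, vs ~16 exactly representable
```

Storing such values as strings, or in a DECIMAL column, preserves the digits.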








RE: Alias query problem in 4.1.7?

2004-11-23 Thread Geoffrey R. Thompson
Rhino:

Thanks for the feedback.

The query is generated by a home-grown tool, which allows the users to build
ad-hoc reports.  For this reason, and because there are aliases in use (in
this case, two agents are joined to the underlying table), it would be
possible to have a similar query that groups by both AgentCreatedBy and
AgentAssignedTo - which are both the same column in the underlying table,
joined to by two separate foreign keys.  So, that is the reason for all the
'extra' table qualifiers.

I will try your suggestion of using the column name vs. the alias name in
the GROUP BY clause, and see if that works.  Interestingly, however, if I
remove the table qualifier, and just do the group by on the aliased column,
it works, as such:
 
GROUP BY `Product`, `AssignedTo`

Geoff Thompson
Avaion Support
[EMAIL PROTECTED] 
http://www.avaion.com 


 -Original Message-
 From: Rhino [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, November 23, 2004 10:26 AM
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: Alias query problem in 4.1.7?
 
 Try changing your GROUP BY to use the column name of the second column in
 the SELECT, not the alias of the second column name, i.e.
 
 GROUP BY `Product`.`Product`, `AssignedToAgent`.`AgentName`
 
 That should work on any version of MySQL. I don't think you're allowed to
 use aliases in a GROUP BY, only actual column names. Then again, I am
 mostly
 a DB2 person and the MySQL dialect of SQL may allow aliases in a GROUP BY
 for all I know.
 
 I'm surprised that the alias worked on 3.2.3: are you sure you have
 reproduced the exact query that works on 3.2.3? I'm afraid I don't have
 either a 3.2.3 or 4.1.7 system, so I can't try any of this myself to
 see.
 
 By the way, did you realize that your query is substantially longer than
 it needs to be? You really only need to qualify column names with table
 names if you are joining two or more tables, and even then only for the
 column names that occur in more than one of the tables read by the query.
 That would also eliminate the need to write aliases for some of your
 table names at all, further shortening the query. In your query, it
 appears that only the 'ProductKey' column occurs in more than one of the
 tables, so your query could be as short as this:
 
 SELECT `Product`, `AgentName` AS `AssignedTo`, sum(`Inquiries`) AS
 `Inquiries`
 FROM `Inquiry` INNER JOIN `Product` ON `Inquiry`.`ProductKey` =
 `Product`.`ProductKey`
 INNER JOIN `Agent`  ON `AssignedToKey` = `AgentKey`
 INNER JOIN `DateDim` ON `DateOpenKey` = `DateDimKey`
 WHERE `Year` = '2003'
 GROUP BY `Product`, `AssignedTo`;
 
 Then again, perhaps it is your shop standard to fully qualify all column
 names in queries; if so, you should follow your shop standard ;-) More
 likely, you are probably using some sort of query generating tool in which
 case you probably don't have a choice in the matter.
 
 Rhino
 
 
 
 - Original Message -
 From: Geoffrey R. Thompson [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Sent: Tuesday, November 23, 2004 12:11 AM
 Subject: Alias query problem in 4.1.7?
 
 
 I have an interesting problem that I cannot find any clues to in the MySQL
 documentation.  The following query works in 3.2.3, but does not work in
 4.1.7:
 
 
 
 SELECT `Product`.`Product`, `AssignedToAgent`.`AgentName` AS `AssignedTo`,
 sum(`Inquiries`) AS `Inquiries`
 FROM `Inquiry` INNER JOIN `Product` ON `Inquiry`.`ProductKey` =
 `Product`.`ProductKey`
 INNER JOIN `Agent` AS `AssignedToAgent` ON `Inquiry`.`AssignedToKey` =
 `AssignedToAgent`.`AgentKey`
 INNER JOIN `DateDim` AS `DateOpen` ON `Inquiry`.`DateOpenKey` =
 `DateOpen`.`DateDimKey` WHERE `DateOpen`.`Year` = '2003'
 GROUP BY `Product`.`Product`, `AssignedToAgent`.`AssignedTo`;
 
 
 
 It appears that if I take the table alias AssignedToAgent out of the
 GROUP BY clause (leaving just the column alias AssignedTo), the query
 will then work in 4.1.7 - even though the table alias does not present a
 problem in 3.2.3.  Any ideas why?
 
 
 
 Any help would be greatly appreciated.
 
 
 







Re: Upgrade?

2004-11-23 Thread DreamWerx
Read through the changelog online and see if there are any fixes that
affect you or new features you want; otherwise just leave it. I have some
terribly old MySQL versions running, but they are rock solid doing their
job. "If it ain't broke, don't fix it" might apply...



On Mon, 22 Nov 2004 21:31:01 -0700, Steve Grosz [EMAIL PROTECTED] wrote:
 I'm referring to the MySql server v.4.1.5 that I'm using now, as
 compared to the newer release of v.4.1.7
 
 
 
 Paul DuBois wrote:
  At 20:09 -0700 11/22/04, Steve Grosz wrote:
 
  I'm using 4.15 currently, and it seems to be very stable.  I noticed
  that a v4.17 or something has come out recently.
 
 
  Versions 4.15 and 4.17 of what?  You're asking us whether it's worth
  upgrading without specifying what it is you're asking about?
 
  Do you mean MySQL 4.1.5 and 4.1.7?
 
 
  Is it worth upgrading?  Can I just install the new version without
  damaging my current databases or are there specified steps?  Where
  would I find them if there are?
 
  Thanks,
  Steve
 
 
 
 





Re: 4.0.18 - 4.1.7 LEFT JOIN problem

2004-11-23 Thread Dilipan Sebastiampillai

Hi!
On Nov 22, Dilipan Sebastiampillai wrote:
 

I moved a db from 4.0.18 to 4.1.7 and I realise that the SAME sql query
using LEFT JOIN gives me a different result.
The result from 4.0.18 does a real LEFT JOIN, but 4.1.7 sometimes differs,
and I don't want that.

I think it comes from an 'optimization'. How can I make a query without
using the LEFT JOIN optimization of MySQL 4.1.7?
   

What is the query ?
Can you provide a repeatable test case ?
Regards,
Sergei
 

The answer is amazingly wrong! Have a look:

mysql> SELECT hosts.name, hosts.hostId, tries.hostId, tries.status FROM
hosts LEFT JOIN tries ON tries.hostId=hosts.hostId AND tries.status
IN ('running','waitkill','preemption') LIMIT 20;
+-+++-+
| name| hostId | hostId | status  |
+-+++-+
| chimp13 |   1530 |   1393 | running |
| chimp13 |   1530 |   1485 | running |
| chimp13 |   1530 |   1418 | running |
| chimp13 |   1530 |   1499 | running |
| chimp13 |   1530 |   1499 | running |
| chimp13 |   1530 |   1438 | running |
| chimp13 |   1530 |   1514 | running |
| chimp13 |   1530 |   1491 | running |
| chimp13 |   1530 |   1587 | running |
| chimp13 |   1530 |   1471 | running |
| chimp13 |   1530 |   1471 | running |
| chimp13 |   1530 |   1416 | running |
| chimp13 |   1530 |   1477 | running |
| chimp13 |   1530 |   1416 | running |
| chimp13 |   1530 |   1477 | running |
| chimp13 |   1530 |   1493 | running |
| chimp13 |   1530 |   1520 | running |
| chimp13 |   1530 |   1518 | running |
| chimp13 |   1530 |   1502 | running |
| chimp13 |   1530 |   1598 | running |
+-+++-+
20 rows in set (0.00 sec)

mysql>

The hostIds are not the same although I do a LEFT JOIN on them... but
sometimes the answer is OK.
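For reference, a hedged sketch of what a LEFT JOIN with this shape of ON clause should produce, using Python's sqlite3 in place of MySQL (the host names and ids are invented): matched rows must carry equal hostId values on both sides, and hosts with no matching try still appear, padded with NULLs. Output where the two hostId columns disagree, as in the result above, points at a bug rather than normal LEFT JOIN behaviour.

```python
import sqlite3

# SQLite stands in for MySQL; tables and rows are invented to mirror
# the query in the message above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE hosts (hostId INTEGER, name TEXT);
    CREATE TABLE tries (hostId INTEGER, status TEXT);
    INSERT INTO hosts VALUES (1530, 'chimp13'), (1531, 'chimp14');
    INSERT INTO tries VALUES (1530, 'running'), (9999, 'running');
""")
rows = con.execute("""
    SELECT hosts.name, hosts.hostId, tries.hostId, tries.status
    FROM hosts LEFT JOIN tries
      ON tries.hostId = hosts.hostId
     AND tries.status IN ('running', 'waitkill', 'preemption')
    ORDER BY hosts.hostId
""").fetchall()
print(rows)
# [('chimp13', 1530, 1530, 'running'), ('chimp14', 1531, None, None)]
```

Note that the matched row has 1530 in both hostId columns, and the unmatched host gets NULL (None), never a different hostId.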
Here is my.cnf, for a dual Xeon 2.40GHz with 6 GB of RAM:
   # This is for a large system with memory of 1G-2G where the system runs
   # mainly MySQL.
   # The following options will be passed to all MySQL clients
   [client]
   #password= your_password
   port= 3306
   socket= /var/lib/mysql/mysql.sock
   # Here follows entries for some specific programs
   # The MySQL server
   [mysqld]
   port= 3306
   socket= /var/lib/mysql/mysql.sock
   skip-locking
   skip-external-locking
   skip-grant-table  # added by dlp
   log-slow-queries
   log-error
   key_buffer = 512M  # same that key_buffer_size ? dlp
   max_allowed_packet = 1M
   table_cache = 512
   sort_buffer_size = 2M
   read_buffer_size = 100M  # ori=2M
   read_rnd_buffer_size = 100M  # ori= 8M
   myisam_sort_buffer_size = 64M
   thread_cache = 1024 # ori =8
   query_cache_size = 100M  # ori = 32M
   # Try number of CPU's*2 for thread_concurrency
   thread_concurrency = 4
   max_connections = 1000  # dlp
   key_buffer_size = 512M  # dlp
   server-id= 1
   # Point the following paths to different dedicated disks
   tmpdir= /tmp/
   #log-update = /path-to-dedicated-directory/hostname

   innodb_data_home_dir = /var/lib/mysql/innodb/
   innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
   innodb_log_group_home_dir = /var/lib/mysql/innodb/
   innodb_log_arch_dir = /var/lib/mysql/innodb/
   # You can set .._buffer_pool_size up to 50 - 80 %
   # of RAM but beware of setting memory usage too high
   innodb_buffer_pool_size = 1G
   innodb_additional_mem_pool_size = 20M
   # Set .._log_file_size to 25 % of buffer pool size
   innodb_log_file_size = 100M
   innodb_log_buffer_size = 8M
   innodb_flush_log_at_trx_commit = 1
   innodb_lock_wait_timeout = 50
   innodb_file_per_table=off
   innodb_table_locks=off # dlp
   [mysqldump]
   quick
   max_allowed_packet = 16M
   [mysql]
   no-auto-rehash
   # Remove the next comment character if you are not familiar with SQL
   #safe-updates
   [isamchk]
   key_buffer = 512M
   sort_buffer_size = 512M
   read_buffer = 2M
   write_buffer = 2M
   [myisamchk]
   key_buffer = 512M
   sort_buffer_size = 512M
   read_buffer = 2M
   write_buffer = 2M
   [mysqlhotcopy]
   interactive-timeout

--
Dilipan 




Mysql Performance

2004-11-23 Thread Carlos Augusto
Well, that's me again with a new problem. I am running another database
with 7 GB of data, and this db can't stop. All the time we have queries
being inserted from a plain text file, and all the time customers get
information from this database. I observed that an insert query is
taking about 5 to 15 seconds. I saw some variables like slow_query that
show a count of 1388 slow queries. I am new to mysql and database stuff,
and I have a task to improve mysql's performance. I need help, since on
a simple Pentium 4 the same operations run almost 10 times faster (in
insert time), and the machine that is too slow for inserting is a dual
Xeon with 4 GB RAM and a 500 GB hard disk.
I appreciate it if someone has a solution for improving this server performance.

Thanks.
Carlos




RE: On but off topic Putting a file in Ram

2004-11-23 Thread gunmuse
Actually, no. I have a file that is heavily requested via MySQL (Top 100
site). What I want to do is put the images and/or files into RAM and serve
them from there instead of the hard drive, conserving HD resources for
other tasks.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: Eamon Daly [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 9:17 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: On but off topic Putting a file in Ram


 The reason I ask this here.  Is I have graphics that are loaded by Mysql
 and
 was wondering if I can do the same for them since some of these sites can
 call my server 10-20,000 times a day for that same graphic.

I assume you mean that you have image data stored in a MySQL
table somewhere and are using a SELECT to fetch and serve
it. I think the general consensus would be something along
the lines of "Don't do that." Apache was /designed/ to serve
files quickly, so let it do what it does best. Store just
the filenames in MySQL and let Apache handle the rest. Once
you've done that, you can do plenty of things to speed up or
scale your system, such as mapping the files to memory with
mod_file_cache, judicious use of a caching proxy, or the
creation of a ramdisk.
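A hedged sketch of the advice above, with SQLite standing in for MySQL and all names and paths invented: keep only the file *path* in the database and read the bytes from the filesystem, where the web server, a caching proxy, or a ramdisk can serve them efficiently.

```python
import os
import sqlite3
import tempfile

# Create a fake "image" on disk; in practice this would be a real file
# under the web server's document root (paths here are invented).
img_dir = tempfile.mkdtemp()
path = os.path.join(img_dir, "logo.png")
with open(path, "wb") as f:
    f.write(b"fake image bytes")

# Store only the filename/path in the database, not the image blob.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE images (name TEXT, path TEXT)")
con.execute("INSERT INTO images VALUES ('logo', ?)", (path,))

# Serving: one small indexed lookup, then a plain file read that the
# web server (or OS page cache) can do far faster than a blob SELECT.
(stored_path,) = con.execute(
    "SELECT path FROM images WHERE name = 'logo'"
).fetchone()
with open(stored_path, "rb") as f:
    data = f.read()
print(len(data))  # 16
```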


Eamon Daly



- Original Message -
From: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, November 22, 2004 8:20 PM
Subject: On but off topic Putting a file in Ram


I have a small file that calls a search function at Findwhat in case Mysql
 locally overloads.  I just put on a new partner who looks like they may
 call
 my server 40 million times a month.

 I know there is some way to put a file into Ram for super fast response.
 Question is how do I do this?

 Will it still write to Mysql from the Ram Drive?  What is the downside of
 doing this?

 The reason I ask this here.  Is I have graphics that are loaded by Mysql
 and
 was wondering if I can do the same for them since some of these sites can
 call my server 10-20,000 times a day for that same graphic.









Re: Mysql Performance

2004-11-23 Thread Ronan Lucio
Carlos,

Give us more details about your system:

What table types are you using?
What are the settings in your my.cnf file?

Ronan

- Original Message -
From: Carlos Augusto [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 3:41 PM
Subject: Mysql Performance


Well, that's me again with a new problem. I am running another database
with 7 GB of data, and this db can't stop. All the time we have queries
being inserted from a plain text file, and all the time customers get
information from this database. I observed that an insert query is
taking about 5 to 15 seconds. I saw some variables like slow_query that
show a count of 1388 slow queries. I am new to mysql and database stuff,
and I have a task to improve mysql's performance. I need help, since on
a simple Pentium 4 the same operations run almost 10 times faster (in
insert time), and the machine that is too slow for inserting is a dual
Xeon with 4 GB RAM and a 500 GB hard disk.
I appreciate it if someone has a solution for improving this server
performance.

Thanks.
Carlos










Re: Mysql Performance

2004-11-23 Thread Hagen Hoepfner
I do not believe that your problem is caused by your hardware, but...
inserting data requires reorganizing your indexes. Please try dropping
them (only for testing ;-) )
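Hagen's suggestion can be sketched as follows (a hedged illustration: SQLite stands in for MySQL, and the table and index names are invented). The point of dropping the index for a bulk load is that the index is rebuilt once at the end instead of being reorganized on every single insert.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inquiries (id INTEGER, product TEXT)")
con.execute("CREATE INDEX idx_product ON inquiries (product)")

rows = [(i, "widget") for i in range(10_000)]

# 1. Drop the secondary index before the bulk load.
con.execute("DROP INDEX idx_product")
# 2. Bulk-insert without per-row index maintenance.
con.executemany("INSERT INTO inquiries VALUES (?, ?)", rows)
# 3. Rebuild the index once, after the load.
con.execute("CREATE INDEX idx_product ON inquiries (product)")
con.commit()

count = con.execute("SELECT COUNT(*) FROM inquiries").fetchone()[0]
print(count)  # 10000
```

On MySQL/MyISAM the same idea is usually expressed with DROP INDEX / CREATE INDEX (or ALTER TABLE ... DISABLE KEYS / ENABLE KEYS) around the load.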

Hagen
Carlos Augusto wrote:
Well, that's me again with a new problem. I am running another database
with 7 GB of data, and this db can't stop. All the time we have queries
being inserted from a plain text file, and all the time customers get
information from this database. I observed that an insert query is
taking about 5 to 15 seconds. I saw some variables like slow_query that
show a count of 1388 slow queries. I am new to mysql and database stuff,
and I have a task to improve mysql's performance. I need help, since on
a simple Pentium 4 the same operations run almost 10 times faster (in
insert time), and the machine that is too slow for inserting is a dual
Xeon with 4 GB RAM and a 500 GB hard disk.
I appreciate it if someone has a solution for improving this server performance.
Thanks.
Carlos
 




Re: On but off topic Putting a file in Ram

2004-11-23 Thread Victor Pendleton
Is the actual data stored in the database or somewhere in the file 
system? If you do not have text or blob columns you may be able to use 
heap/memory tables.

[EMAIL PROTECTED] wrote:
Actually no.  I have a file that is determined to be requested by mysql (Top
100 site)  What I am wanting to do is put the images and or files into Ram
to serve them from there instead of the harddrive and conserve hd resources
for not known tasks.
Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183

-Original Message-
From: Eamon Daly [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 9:17 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: On but off topic Putting a file in Ram
 

The reason I ask this here.  Is I have graphics that are loaded by Mysql
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.
   

I assume you mean that you have image data stored in a MySQL
table somewhere and are using a SELECT to fetch and serve
it. I think the general consensus would be something along
the lines of Don't do that. Apache was /designed/ to serve
files quickly, so let it do what it does best. Store just
the filenames in MySQL and let Apache handle the rest. Once
you've done that, you can do plenty of things to speed up or
scale your system, such as mapping the files to memory with
mod_file_cache, judicious use of a caching proxy, or the
creation of a ramdisk.

Eamon Daly

- Original Message -
From: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, November 22, 2004 8:20 PM
Subject: On but off topic Putting a file in Ram
 

I have a small file that calls a search function at Findwhat in case Mysql
locally overloads.  I just put on a new partner who looks like they may
call
my server 40 million times a month.
I know there is some way to put a file into Ram for super fast response.
Question is how do I do this?
Will it still write to Mysql from the Ram Drive?  What is the downside of
doing this?
The reason I ask this here.  Is I have graphics that are loaded by Mysql
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.
   



 




RE: On but off topic Putting a file in Ram

2004-11-23 Thread gunmuse
"Heap/memory tables": that is the phrase I couldn't remember. The data is
stored in the file system.

I have one file that is linked to via JavaScript to run a PHP file and
send an output. That file accesses MySQL, OR, if I am overloaded, it
bypasses my local system and goes directly to Findwhat.com to produce the
search. By putting that file into memory I should be able to handle any
load fairly easily.

Any suggestions on where I should read to learn how to use heap/memory on
Linux/enterprise?

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 11:10 AM
To: [EMAIL PROTECTED]
Cc: Eamon Daly; [EMAIL PROTECTED]
Subject: Re: On but off topic Putting a file in Ram


Is the actual data stored in the database or somewhere in the file
system? If you do not have text or blob columns you may be able to use
heap/memory tables.

[EMAIL PROTECTED] wrote:

Actually no.  I have a file that is determined to be requested by mysql
(Top
100 site)  What I am wanting to do is put the images and or files into Ram
to serve them from there instead of the harddrive and conserve hd resources
for not known tasks.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: Eamon Daly [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 9:17 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: On but off topic Putting a file in Ram




The reason I ask this here.  Is I have graphics that are loaded by Mysql
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.



I assume you mean that you have image data stored in a MySQL
table somewhere and are using a SELECT to fetch and serve
it. I think the general consensus would be something along
the lines of Don't do that. Apache was /designed/ to serve
files quickly, so let it do what it does best. Store just
the filenames in MySQL and let Apache handle the rest. Once
you've done that, you can do plenty of things to speed up or
scale your system, such as mapping the files to memory with
mod_file_cache, judicious use of a caching proxy, or the
creation of a ramdisk.


Eamon Daly



- Original Message -
From: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, November 22, 2004 8:20 PM
Subject: On but off topic Putting a file in Ram




I have a small file that calls a search function at Findwhat in case Mysql
locally overloads.  I just put on a new partner who looks like they may
call
my server 40 million times a month.

I know there is some way to put a file into Ram for super fast response.
Question is how do I do this?

Will it still write to Mysql from the Ram Drive?  What is the downside of
doing this?

The reason I ask this here.  Is I have graphics that are loaded by Mysql
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.

















RE: MySQL configuring on Linux

2004-11-23 Thread J.R. Bullington
This error results from Berkeley DB (bdb) support not being enabled. This is
normally OK (unless you want to use it), but it seems as though you have it
enabled in your my.cnf file while it is not enabled in MySQL. Go back into
your my.cnf file and put a '#' in front of all the bdb and innodb
configuration options.
 
Secondly, the socket will be missing as it is a temp. file that is created and 
deleted whenever the
server is started or shut down. It will be recreated when the server is 
restarted.
 
J.R.


From: Danesh Daroui [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 1:30 PM
To: J.R. Bullington
Subject: Re: MySQL configuring on Linux


I just did it and configured it to be a Server, but after all that, when I
restarted mysqld I got this error:

shell> Can't connect to local MySQL server through socket
'/var/lib/mysql/mysql.sock'

I also checked, and the socket file has been deleted. There was also an
error log, which contained:

unknown argument bdb_cache_size=4 MB

or something like that. How can I re-generate the socket file? Why does the
original Server configuration have errors in it?


Re: On but off topic Putting a file in Ram

2004-11-23 Thread Victor Pendleton
Are you inquiring about database heap/memory tables or file system 
memory tables? This is the link for MySQL heap/memory tables, 
http://dev.mysql.com/doc/mysql/en/MEMORY_storage_engine.html. As for the 
file system you could look at using ramfs, 
http://www.linuxfocus.org/English/July2001/article210.shtml, or just 
google for it.
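A hedged illustration of the heap/memory-table idea: in MySQL a RAM-resident table is declared roughly as `CREATE TABLE hot_lookups (...) ENGINE=MEMORY` (older servers use `TYPE=HEAP`); it is fast but vanishes on restart. The sketch below uses SQLite's `:memory:` database as a stand-in for that trade-off, with an invented table and values.

```python
import sqlite3

# A ":memory:" SQLite database never touches disk, mirroring the
# behaviour of a MySQL MEMORY/HEAP table: fast, RAM-resident lookups,
# no persistence across restarts. Table name and data are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hot_lookups (k TEXT PRIMARY KEY, v TEXT)")
con.execute("INSERT INTO hot_lookups VALUES ('search_backend', 'findwhat')")

v = con.execute(
    "SELECT v FROM hot_lookups WHERE k = 'search_backend'"
).fetchone()[0]
print(v)  # findwhat
```

As Victor notes, this only fits data without TEXT/BLOB columns in MySQL 4.x, so it suits small hot lookup tables rather than image storage.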

[EMAIL PROTECTED] wrote:
Heap/Memory tables that is the phrase I couldn't remember.  The data is
stored in the file system.
I have one file that that is linked to via JavaScript to run a php file and
send an output.  That file accesses MySql  OR if I am overloaded it bypasses
my local system and goes directly to Findwhat.com to produce the search.  By
putting that file into memory I should be able to handle any load fairly
easily.
Any suggestions on where I should read to learn how to use heap/memory on
Linux/enterprise?
Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183

-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 11:10 AM
To: [EMAIL PROTECTED]
Cc: Eamon Daly; [EMAIL PROTECTED]
Subject: Re: On but off topic Putting a file in Ram
Is the actual data stored in the database or somewhere in the file
system? If you do not have text or blob columns you may be able to use
heap/memory tables.
[EMAIL PROTECTED] wrote:
 

Actually no.  I have a file that is determined to be requested by mysql
   

(Top
 

100 site)  What I am wanting to do is put the images and or files into Ram
to serve them from there instead of the harddrive and conserve hd resources
for not known tasks.
Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183

-Original Message-
From: Eamon Daly [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 9:17 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: On but off topic Putting a file in Ram

   

The reason I ask this here.  Is I have graphics that are loaded by Mysql
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.
 

I assume you mean that you have image data stored in a MySQL
table somewhere and are using a SELECT to fetch and serve
it. I think the general consensus would be something along
the lines of Don't do that. Apache was /designed/ to serve
files quickly, so let it do what it does best. Store just
the filenames in MySQL and let Apache handle the rest. Once
you've done that, you can do plenty of things to speed up or
scale your system, such as mapping the files to memory with
mod_file_cache, judicious use of a caching proxy, or the
creation of a ramdisk.

Eamon Daly

- Original Message -
From: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, November 22, 2004 8:20 PM
Subject: On but off topic Putting a file in Ram

   

I have a small file that calls a search function at Findwhat in case Mysql
locally overloads.  I just put on a new partner who looks like they may
call
my server 40 million times a month.
I know there is some way to put a file into Ram for super fast response.
Question is how do I do this?
Will it still write to Mysql from the Ram Drive?  What is the downside of
doing this?
The reason I ask this here.  Is I have graphics that are loaded by Mysql
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.
 



   


 




String function

2004-11-23 Thread Jerry Swanson
I need to make a query that adds numbers. The fields are in varchar format,
so I have some values like 12121 and some values like 121212,121212.

I think I need a string replace function that replaces the comma. What
function can do this?




Re: Mysql Performance

2004-11-23 Thread Brent Baisley
I could see how the Pentium 4 may be faster for certain things. In some 
cases, older Pentiums with larger caches (i.e. 2MB) would outperform a 
Pentium 4. Rumor has it that MySQL loves level 2 cache, but don't tell 
PHP. But in this case, the Pentium 4 and Xeon I'm pretty sure both have 
512K level 2 cache. However, the Xeon is designed as a server 
processor, meaning it can handle many tasks very well. Give the Xeon 
just one task and it's kind of mediocre. This is fairly evident with 
reviews comparing the Xeon to the AMD fx5X series. The AMD chip beats 
the Xeon pretty convincingly in single tasks. But the Xeon really 
shines when the system is doing multiple tasks and there may be a lot 
of context switching. The Pentium is like the AMD, it can do one task 
very well.
The Xeon should perform much better than the Pentium under heavy,
multitasking loads.

Also, I think the Xeon's interface to memory is DDR266, where the 
Pentium is DDR400. That can be a pretty significant speed difference 
when you are really pumping data around.

What's the difference in hard drive speeds? 500GB doesn't really say 
much except that if it's just one drive, it's probably some sort of ATA 
with a slow rotation speed (i.e. 7200). An 18GB SCSI would probably 
outperform the 500GB ATA drive under heavy multitasking loads because 
of command queuing. Command queueing is just starting to become 
available in SATA drives. What kind of drive is in the Pentium? The 
cache on the drive matters too.

So, that said, I'm assuming you didn't try the lowly Pentium 4 under 
the typical heavy load you expect.

Now, since this is a MySQL discussion area, I think I should move away 
from hardware. You should serialize your insert queries if you haven't 
already. Meaning, run them one at a time instead of concurrently. And 
of course, before you do anything, make sure you've tweaked your MySQL 
configuration settings.
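The serialization advice can be sketched as follows (hedged: SQLite stands in for MySQL, and the table and row counts are invented). Running the bulk inserts one at a time on a single connection, inside one transaction, replaces thousands of per-row autocommit round-trips with a single commit at the end.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE loads (id INTEGER, payload TEXT)")

batch = [(i, "row-%d" % i) for i in range(5_000)]

# One connection, one transaction: the `with` block commits once for
# the whole batch instead of once per row, and the inserts run
# serially rather than from concurrent clients.
with con:
    con.executemany("INSERT INTO loads VALUES (?, ?)", batch)

n = con.execute("SELECT COUNT(*) FROM loads").fetchone()[0]
print(n)  # 5000
```

On MySQL the equivalent is SET AUTOCOMMIT=0 (or explicit BEGIN/COMMIT) around the batch, or multi-row INSERT / LOAD DATA INFILE for file-based loads.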

On Nov 23, 2004, at 12:41 PM, Carlos Augusto wrote:
Well, that's me again with a new problem. I am running another database
with 7 GB of data, and this db can't stop. All the time we have queries
being inserted from a plain text file, and all the time customers get
information from this database. I observed that an insert query is
taking about 5 to 15 seconds. I saw some variables like slow_query that
show a count of 1388 slow queries. I am new to mysql and database stuff,
and I have a task to improve mysql's performance. I need help, since on
a simple Pentium 4 the same operations run almost 10 times faster (in
insert time), and the machine that is too slow for inserting is a dual
Xeon with 4 GB RAM and a 500 GB hard disk.
I appreciate it if someone has a solution for improving this server
performance.

Thanks.
Carlos


--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search  Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


MySQL configuration on Linux

2004-11-23 Thread Barry Drake
I installed MySQL on Windows XP Pro without any problems.  But on Linux
I can't get it to work so far.

I get the error:
ERROR 2002: Can't connect to local MySQL server through socket 
'/var/lib/mysql/mysql.sock' (2)

The directory is there.  I thought that mysqld would create what it 
needed at boot.  What's going on?

I checked: all of the Berkeley DB (bdb) statements in my.cnf are
commented out with '#'.

The documentation had a little info.  Tried all that; didn't work.
System:
Dell Inspiron 8500; 1GB ram
Suse 9.2
Any help would be appreciated.
Thanks.
Barry


Re: Mysql Performance

2004-11-23 Thread Vasiliy Boulytchev

Brent,
Would you be so kind as to throw out some links on tuning MySQL to run
at its full performance? I am googling right now for the answers. Are
there any books you would recommend?

THANKS

On Tue, 2004-11-23 at 14:29 -0500, Brent Baisley wrote:
 I could see how the Pentium 4 may be faster for certain things. In some 
 cases, older Pentiums with larger caches (i.e. 2MB) would outperform a 
 Pentium 4. Rumor has it that MySQL loves level 2 cache, but don't tell 
 PHP. But in this case, the Pentium 4 and Xeon I'm pretty sure both have 
 512K level 2 cache. However, the Xeon is designed as a server 
 processor, meaning it can handle many tasks very well. Give the Xeon 
 just one task and it's kind of mediocre. This is fairly evident with 
 reviews comparing the Xeon to the AMD fx5X series. The AMD chip beats 
 the Xeon pretty convincingly in single tasks. But the Xeon really 
 shines when the system is doing multiple tasks and there may be a lot 
 of context switching. The Pentium is like the AMD, it can do one task 
 very well.
 The Xeon should perform much better than the Pentium under heavy, 
 multitasking loads.
 
 Also, I think the Xeon's interface to memory is DDR266, where the 
 Pentium is DDR400. That can be a pretty significant speed difference 
 when you are really pumping data around.
 
 What's the difference in hard drive speeds? 500GB doesn't really say 
 much except that if it's just one drive, it's probably some sort of ATA 
 with a slow rotation speed (i.e. 7200). An 18GB SCSI would probably 
 outperform the 500GB ATA drive under heavy multitasking loads because 
 of command queuing. Command queueing is just starting to become 
 available in SATA drives. What kind of drive is in the Pentium? The 
 cache on the drive matters too.
 
 So, that said, I'm assuming you didn't try the lowly Pentium 4 under 
 the typical heavy load you expect.
 
 Now, since this is a MySQL discussion area, I think I should move away 
 from hardware. You should serialize your insert queries if you haven't 
 already. Meaning, run them one at a time instead of concurrently. And 
 of course, before you do anything, make sure you've tweaked your MySQL 
 configuration settings.
 
 
 On Nov 23, 2004, at 12:41 PM, Carlos Augusto wrote:
 
  Well, that's me again with a new problem. I am running another database
  with 7 GB of data, and this DB can't stop. All the time, queries are
  being inserted from a plain text file, and all the time customers are
  fetching information from this database. I observed that each insert
  query is taking about 5 to 15 seconds. I saw some variables, like
  slow_queries, which reported 1388 slow queries. I am new to MySQL and
  database stuff, and I have been given the task of improving MySQL's
  performance. I need help, since on a simple Pentium 4 the same
  operations are almost 10 times faster, and the machine that is too slow
  at inserting is a dual Xeon with 4 GB RAM and a 500 GB hard disk.
  I would appreciate it if someone had a solution for improving this
  server's performance.
 
  Thanks.
  Carlos
 
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
 -- 
 Brent Baisley
 Systems Architect
 Landover Associates, Inc.
 Search  Advisory Services for Advanced Technology Environments
 p: 212.759.6400/800.759.0577
 
 
-- 

-
Vasiliy Boulytchev
Colorado Information Technologies Inc.
http://www.coinfotech.com





Re: String function

2004-11-23 Thread Paul DuBois
At 14:12 -0500 11/23/04, Jerry Swanson wrote:
I need to make a query that adds numbers. The fields are in varchar
format, so some values look like 12121 and some look like 121212,121212.
I think I need a string replace function to remove the comma. What
function can do this?
It's called REPLACE().
http://dev.mysql.com/doc/mysql/en/String_functions.html
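
A minimal sketch of the approach (the table and column names `t` and `val`
are illustrative, not from the thread): strip the commas with REPLACE(),
then coerce the result to a number before summing.

```sql
-- val is a VARCHAR holding values like '12121' or '121212,121212'.
-- REPLACE() removes the commas; adding 0 makes MySQL cast the string
-- to a number, so SUM() operates on numeric values.
SELECT SUM(REPLACE(val, ',', '') + 0) AS total
FROM t;
```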
--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com


Re: 4.0.18 - 4.1.7 LEFT JOIN problem

2004-11-23 Thread Sergei Golubchik
Hi!

On Nov 23, Dilipan Sebastiampillai wrote:
 
 
 Hi!
 
 On Nov 22, Dilipan Sebastiampillai wrote:
  
 
  I moved a db from 4.0.18 to 4.1.7 and I realise that the SAME sql query 
  using LEFT JOIN gives me a different result.
  The result from 4.0.18 does a real LEFT JOIN, but 4.1.7 differs 
  sometimes, and I don't want that.
  
  I think it comes from an 'optimization'. How can I make a query 
  without using the LEFT JOIN optimization of MySQL 4.1.7?
 
 What is the query ?
 Can you provide a repeatable test case ?
 
 the answer is amazingly wrong !
 have a look :
 
  mysql> SELECT hosts.name,hosts.hostId, tries.hostId, tries.status FROM 
  hosts LEFT JOIN tries ON tries.hostId=hosts.hostId AND tries.status 
  IN('running','waitkill','preemption') LIMIT 20;
  +---------+--------+--------+---------+
  | name    | hostId | hostId | status  |
  +---------+--------+--------+---------+
  | chimp13 |   1530 |   1393 | running |
  | chimp13 |   1530 |   1485 | running |
  | chimp13 |   1530 |   1418 | running |
  | chimp13 |   1530 |   1499 | running |

I agree that it doesn't look right.
But the query alone is not enough for me to repeat the bug.
I need also both tables hosts and tries.
If they are big, you may try to remove unrelated rows, or create a
completely independent test case. Actually you can even upload big
tables if you don't want to spend time on a test case.

But only with a repeatable test case can we make sure that the bug
won't be present in 4.1.8.

Regards,
Sergei

-- 
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
/_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/  www.mysql.com




Re: Mysql Performance

2004-11-23 Thread Brent Baisley
No books to recommend, although there was a review of one on 
slashdot.org this morning, but you can start with the manual here:
http://dev.mysql.com/doc/mysql/en/Server_parameters.html

Other articles you may find helpful:
http://www.f3n.de/doku/mysql/manual_10.html
http://www.onlamp.com/pub/a/onlamp/2004/02/05/lamp_tuning.html
http://www.databasejournal.com/features/mysql/article.php/3367871
On Nov 23, 2004, at 1:59 PM, Vasiliy Boulytchev wrote:
Brent,
	Would you be so kind as to point me to some links on tweaking MySQL 
to run at its full performance? I am googling right now for the answers.
Are there any books you would recommend?

THANKS
--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search  Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


I need to understand INNO better.

2004-11-23 Thread Titus
I have converted some tables from MyISAM to INNO
using an ALTER TABLE statement.  It seems to work
fine.  However, when I copy that database to another
directory for purposes of backup, a subsequent 'use'
statement on the backup directory (db) reports that
 Didn't find any fields in table 'tblAssays'
After exploring I get a report that
   Can't open file: 'tblAssays.InnoDB' (errno: 1)
Can anyone shed any light on this problem?  Many
thanks.
titus sends




Help optimizing query

2004-11-23 Thread Jesse Sheidlower

I have what I thought was a simple, well-indexed query, but it
turns out that it's acting as a pretty big drag. The one thing
that's clearly a problem (though I'm not sure of the extent of
the problem) is one I'm not sure how to fix.

There are three tables: citations, subjects, and a many-to-many
table linking these. They look like this (edited to remove
extraneous fields):

CREATE TABLE `citation` (
  `id` int(11) NOT NULL auto_increment,
  `word` varchar(50) NOT NULL default '',
  `last_modified` timestamp(14) NOT NULL,
  `deleted` datetime default NULL,
  PRIMARY KEY  (`id`),
  KEY `deleted` (`deleted`),
  KEY `word` (`word`)
)

CREATE TABLE `subject` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(50) NOT NULL default '',
  `last_modified` timestamp(14) NOT NULL,
  PRIMARY KEY  (`id`),
  KEY `name` (`name`)
) 
CREATE TABLE `citation_subject` (
  `id` int(11) NOT NULL auto_increment,
  `citation_id` int(11) NOT NULL default '0',
  `subject_id` int(11) NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `citation_id` (`citation_id`,`subject_id`)
) 

A usual query is to get (some number of) the 
citations for a given subject, ordering by the
word which is stripped of spaces and hyphens.
I don't know of any other way to accomplish
this ordering. The EXPLAIN looks like this:

mysql> EXPLAIN SELECT citation.*, REPLACE(REPLACE(citation.word,' ',''), 
    -> '-','') AS stripped_word
    -> FROM citation, subject, citation_subject
    -> WHERE subject.name = 'History'
    -> AND citation_subject.subject_id = subject.id
    -> AND citation_subject.citation_id = citation.id
    -> AND (citation.deleted IS NULL OR citation.deleted = 0)
    -> ORDER BY stripped_word\G
*** 1. row ***
table: subject
 type: ref
possible_keys: PRIMARY,name
  key: name
  key_len: 50
  ref: const
 rows: 1
Extra: Using where; Using temporary; Using filesort
*** 2. row ***
table: citation_subject
 type: index
possible_keys: citation_id
  key: citation_id
  key_len: 8
  ref: NULL
 rows: 1247
Extra: Using where; Using index
*** 3. row ***
table: citation
 type: eq_ref
possible_keys: PRIMARY,deleted
  key: PRIMARY
  key_len: 4
  ref: citation_subject.citation_id
 rows: 1
Extra: Using where
3 rows in set (0.00 sec)



The number of records involved is relatively small (a few thousands; in
practice this query would also have a LIMIT statement and would be
preceded by a COUNT(*)), but it's dragging down the application it's
running in. (I have a considerably more complex query that involves
millions of records and twice as many joins, that is faster.) I'm
running this in Perl.

Any suggestions? I'd like to get rid of the whole temporary and
filesort things, but I'm not sure if that's what matters given
that there's only 1 row being returned there.

Thanks.

Jesse Sheidlower




RE: 4.0.18 - 4.1.7 LEFT JOIN problem

2004-11-23 Thread Donny Simonton
Move this:
tries.status IN('running','waitkill','preemption')

to a WHERE clause and remove it from the join.

Never actually tried to do an IN in a join before.  I personally don't think
it should work.
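
A sketch of that rewrite against the query from the thread (note that moving
the condition into WHERE also filters out host rows with no matching tries
row, so the LEFT JOIN effectively becomes an inner join; whether that is
acceptable depends on what the original poster wants):

```sql
SELECT hosts.name, hosts.hostId, tries.hostId, tries.status
FROM hosts
LEFT JOIN tries ON tries.hostId = hosts.hostId
WHERE tries.status IN ('running', 'waitkill', 'preemption')
LIMIT 20;
```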

Donny
 -Original Message-
 From: Sergei Golubchik [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, November 23, 2004 2:13 PM
 To: Dilipan Sebastiampillai
 Cc: [EMAIL PROTECTED]
 Subject: Re: 4.0.18 - 4.1.7 LEFT JOIN problem
 
 Hi!
 
 On Nov 23, Dilipan Sebastiampillai wrote:
  
  
  Hi!
  
  On Nov 22, Dilipan Sebastiampillai wrote:
  
  
  I moved a db from 4.0.18 to 4.1.7 and I realise that the SAME sql query
  using LEFT JOIN gives me a different result.
  The result from 4.0.18 does a real LEFT JOIN, but 4.1.7 differs
  sometimes, and I don't want that.
  
  I think it comes from an 'optimization'. How can I make a query
  without using the LEFT JOIN optimization of MySQL 4.1.7?
  
  What is the query ?
  Can you provide a repeatable test case ?
  
  the answer is amazingly wrong !
  have a look :
 
  mysql> SELECT hosts.name,hosts.hostId, tries.hostId, tries.status FROM
  hosts LEFT JOIN tries ON tries.hostId=hosts.hostId AND tries.status
  IN('running','waitkill','preemption') LIMIT 20;
  +---------+--------+--------+---------+
  | name    | hostId | hostId | status  |
  +---------+--------+--------+---------+
  | chimp13 |   1530 |   1393 | running |
  | chimp13 |   1530 |   1485 | running |
  | chimp13 |   1530 |   1418 | running |
  | chimp13 |   1530 |   1499 | running |
 
 I agree that it doesn't look right.
 But the query alone is not enough for me to repeat the bug.
 I need also both tables hosts and tries.
 If they are big, you may try to remove unrelated rows, or create a
 completely independent test case. Actually you can even upload big
 tables if you don't want to spend time on a test case.
 
  But only with a repeatable test case can we make sure that the bug
  won't be present in 4.1.8.
 
 Regards,
 Sergei
 
 --
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
  / /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
 /_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
___/  www.mysql.com
 
 






Re: Alias query problem in 4.1.7?

2004-11-23 Thread Randy Clamons
Ok, when you have a GROUP BY clause, you can specify the column name, the 
column alias or an ordinal number representing the column position. That's why 
`AssignedTo` works -- it is the column alias. `AssignedToAgent`.`AssignedTo` 
doesn't work because AssignedTo is not a column name in table `Agent`. So, you 
have three choices when writing your GROUP BY:

GROUP BY 1,2

GROUP BY `Product`.`Product`, `AssignedTo`

GROUP BY `Product`.`Product`, `AssignedToAgent`.`AgentName`

Use the statement that makes most sense to you. It will save you time if you 
need to change the statement later. MySQL doesn't act any differently no matter 
which way you do it.


Randy Clamons
Systems Programming
Astro-auction.com


 Original Message
 From: Rhino [EMAIL PROTECTED]
 To: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED]
 Date: Tue, Nov-23-2004 10:25 AM
 Subject: Re: Alias query problem in 4.1.7?

 Try changing your GROUP BY to use the column name of the second column in
 the SELECT, not the alias of the second column name, i.e.
 
 GROUP BY `Product`.`Product`, `AssignedToAgent`.`AgentName`
 
 That should work on any version of MySQL. I don't think you're allowed to
 use aliases in a GROUP BY, only actual column names. Then again, I am mostly
 a DB2 person and the MySQL dialect of SQL may allow aliases in a GROUP BY
 for all I know.
 
 I'm surprised that the alias worked on 3.2.3: are you sure you have
 reproduced the exact query that works on 3.2.3? I'm afraid I don't have
 either a 3.2.3 or 4.1.7 system so I can't try any of this for myself to see.
 
 By the way, did you realize that your query is substantially longer than it
 needs to be? You really only need to qualify column names with table names
 if you are doing a join of two or more tables, and even then, you only need
 to qualify column names that occur in two or more of the tables read by the
 query. That would also eliminate the need for you to write aliases for some
 of your table names at all, further shortening the query. In your query, it
 appears that only the 'ProductKey' column occurs in more than one of the
 tables, so your query could be as short as this:
 
 SELECT `Product`, `AgentName` AS `AssignedTo`, sum(`Inquiries`) AS
 `Inquiries`
 FROM `Inquiry` INNER JOIN `Product` ON `Inquiry`.`ProductKey` =
 `Product`.`ProductKey`
 INNER JOIN `Agent` ON `AssignedToKey` = `AgentKey`
 INNER JOIN `DateDim` ON `DateOpenKey` = `DateDimKey`
 WHERE `Year` = '2003'
 GROUP BY `Product`, `AssignedTo`;
 
 Then again, perhaps it is your shop standard to fully qualify all column
 names in queries; if so, you should follow your shop standard ;-) More
 likely, you are probably using some sort of query generating tool, in which
 case you probably don't have a choice in the matter.
 
 Rhino
 
 
 
 - Original Message - 
 From: Geoffrey R. Thompson [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Sent: Tuesday, November 23, 2004 12:11 AM
 Subject: Alias query problem in 4.1.7?
 
 
 I have an interesting problem that I cannot find any clues to in the 
 MySQL
 documentation.  The following query works in 3.2.3, but does not work 
 in
 4.1.7:
 
 
 
 SELECT `Product`.`Product`, `AssignedToAgent`.`AgentName` AS 
 `AssignedTo`,
 sum(`Inquiries`) AS `Inquiries`
 FROM `Inquiry` INNER JOIN `Product` ON `Inquiry`.`ProductKey` =
 `Product`.`ProductKey`
 INNER JOIN `Agent` AS `AssignedToAgent` ON `Inquiry`.`AssignedToKey` =
 `AssignedToAgent`.`AgentKey`
 INNER JOIN `DateDim` AS `DateOpen` ON `Inquiry`.`DateOpenKey` =
 `DateOpen`.`DateDimKey` WHERE `DateOpen`.`Year` = '2003'
 GROUP BY `Product`.`Product`, `AssignedToAgent`.`AssignedTo`;
 
 
 
 It appears that if I take the table alias AssignedToAgent out of the 
 GROUP
 BY clause (leaving just the column alias AssignedTo), the query will 
 then
 work in 4.1.7 - even though the table alias does not present a problem 
 in
 3.2.3.  Any ideas why?
 
 
 
 Any help would be greatly appreciated.
 
 
 
 
 
 -- 
 MySQL Windows Mailing List
 For list archives: http://lists.mysql.com/win32
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 





Re: Help optimizing query

2004-11-23 Thread roger
* Jesse Sheidlower
[...]
 CREATE TABLE `citation_subject` (
   `id` int(11) NOT NULL auto_increment,
   `citation_id` int(11) NOT NULL default '0',
   `subject_id` int(11) NOT NULL default '0',
   PRIMARY KEY  (`id`),
   KEY `citation_id` (`citation_id`,`subject_id`)
 )

Try adding an index with subject_id as the first column.

ALTER TABLE `citation_subject` ADD INDEX (`subject_id`,`citation_id`);

-- 
Roger





Re: Mysql Performance

2004-11-23 Thread Ugo Bellavance
Vasiliy Boulytchev wrote:
Brent,
Would you be so kind as to point me to some links on tweaking MySQL to run
at its full performance? I am googling right now for the answers. Are
there any books you would recommend?
THANKS
High Performance MySQL (O'Reilly)
MySQL Enterprise Solutions (Wiley)


Re: I need to understand INNO better.

2004-11-23 Thread Jeff Smelser
On Tuesday 23 November 2004 02:12 pm, Titus wrote:
 I have converted some tables from MyISAM to INNO
 using an ALTER TABLE statement.  It seems to work
 fine.  However, when I copy that database to another
 directory for purposes of backup, a subsequent 'use'
 statement on the backup directory (db) reports that

   Didn't find any fields in table 'tblAssays'

 After exploring I get a report that

 Can't open file: 'tblAssays.InnoDB' (errno: 1)

 Can anyone shed any light on this problem?  Many
 thanks.

There are really only two ways to make copies of InnoDB data: InnoDB Hot Backup 
(it's not free) and mysqldump.. This is assuming you're not copying the entire 
db directory tree.. (Copying just one database directory doesn't work for 
InnoDB, because the data lives in the shared ibdata tablespace files, not in 
per-database files.)

Maybe someone else knows of another.

Jeff


pgp9Z7ari8lOX.pgp
Description: PGP signature


Re: Help optimizing query

2004-11-23 Thread Jesse Sheidlower
On Tue, Nov 23, 2004 at 09:55:15PM +0100, [EMAIL PROTECTED] wrote:
 * Jesse Sheidlower
 [...]
  CREATE TABLE `citation_subject` (
`id` int(11) NOT NULL auto_increment,
`citation_id` int(11) NOT NULL default '0',
`subject_id` int(11) NOT NULL default '0',
PRIMARY KEY  (`id`),
KEY `citation_id` (`citation_id`,`subject_id`)
  )
 
 Try adding an index with subject_id as the first column.
 
 ALTER TABLE `citation_subject` ADD INDEX (`subject_id`,`citation_id`);

Thanks. This did help slightly--I didn't realize that the
order of this would make such a difference, if both were
always being used.

I'm now coming to the determination that there are other
parts of the application functioning as the biggest drags.
If this is so, I apologize for the wasted bandwidth; I'm
still poking at this query though.

Jesse Sheidlower




how to connect to MyDBPAL from MySQL

2004-11-23 Thread Sanjeev Sagar
I have seen mails talking about MydbPAL. I downloaded it, and I have a
MySQL ODBC data source too, but I am not able to connect with MydbPAL at
all. I have tested my ODBC data source and it works just fine. My machine
runs Linux and Windows through VMware Workstation; I installed dbPAL on
Windows and try to connect to the database on my local Linux partition.

Inside MydbPAL, I click on Workshop, then Object Browser, and select the
data source name.

I still see the wait symbol in the toolbar. Also, when I click on db test
connect, nothing happens. Am I missing anything? I checked the
documentation, but it's not clear how to connect using ODBC.

Any help would be highly appreciated.

Regards,



Re: how to connect to MyDBPAL from MySQL

2004-11-23 Thread Victor Pendleton
Are there any MySQL errors being logged? Did you confirm that the ODBC 
connection is working?

Sanjeev Sagar wrote:
I have seen mails talking about MydbPAL. I downloaded it, and I have a
MySQL ODBC data source too, but I am not able to connect with MydbPAL at
all. I have tested my ODBC data source and it works just fine. My machine
runs Linux and Windows through VMware Workstation; I installed dbPAL on
Windows and try to connect to the database on my local Linux partition.

Inside MydbPAL, I click on Workshop, then Object Browser, and select the
data source name.

I still see the wait symbol in the toolbar. Also, when I click on db test
connect, nothing happens. Am I missing anything? I checked the
documentation, but it's not clear how to connect using ODBC.

Any help would be highly appreciated.

Regards,
 




RE: how to connect to MyDBPAL from MySQL

2004-11-23 Thread Sanjeev Sagar
Yes, ODBC works perfectly. I am using the same data source for MySQL
Administrator and Query Browser too.

Anyway, if you are using it, can you go over the steps for a db connection?

1. Open dbPAL, click on Workshop.
2. Choose the data source by clicking ODBC from the list under MySQL.
3. What next after that?
4. I clicked on new user and made an entry; it just gave me an open-lock
icon on the toolbar. How do I test my database connection? Db-test-connect
is grayed out, not clickable.

No error is reported in the err file.

Regards,


-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 3:32 PM
To: Sanjeev Sagar
Cc: [EMAIL PROTECTED]
Subject: Re: how to connect to MyDBPAL from MySQL

Are there any MySQL errors being logged? Did you confirm that the ODBC 
connection is working?

Sanjeev Sagar wrote:

I have seen mails talking about MydbPAL. I downloaded it, and I have a
MySQL ODBC data source too, but I am not able to connect with MydbPAL at
all. I have tested my ODBC data source and it works just fine. My machine
runs Linux and Windows through VMware Workstation; I installed dbPAL on
Windows and try to connect to the database on my local Linux partition.

Inside MydbPAL, I click on Workshop, then Object Browser, and select the
data source name.

I still see the wait symbol in the toolbar. Also, when I click on db test
connect, nothing happens. Am I missing anything? I checked the
documentation, but it's not clear how to connect using ODBC.

Any help would be highly appreciated.

Regards,


  






Autocommit and kill?

2004-11-23 Thread Gary Richardson
Hey guys,

We just had a small incident. I was hoping for confirmation on why it happened.

A person updated a large table. They were trying to update one record
but due to keyboard fumbling sent the update without a where
statement. Left unchecked, it would have made a lot of records in the
database identical.

The table has around 6 million rows and is InnoDB. The command was
entered from the standard command line client.

The query ran for under 60 seconds before it was killed. As we were
preping to restore the table from backups, someone did a group by to
find out how many rows got fragged. It turns out the command didn't
affect the table at all.

I have two guesses as to why this happened:

1) For Inno tables, entering an update query actually runs BEGIN;
$query; COMMIT; Killing the query prevented the commit from happening.

2) The first 60 seconds of the query were spent preparing or sorting
or something.

Most likely I'm thinking it was #1, but I just wanted to see if that's
actually the case.

Thanks.




RE: how to connect to MyDBPAL from MySQL

2004-11-23 Thread Sanjeev Sagar
I guess I am getting close.

I drag the ODBC connection line to the database icon in db-test-connection.
I created a new user and dragged it to the user part of db-test-connection.
Now the waiting tray has a go icon. I dragged the go icon to the console
tray; nothing happened.

What is the next step to see whether the database connection is going
through?

-Original Message-
From: Sanjeev Sagar [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 3:55 PM
To: Victor Pendleton
Cc: [EMAIL PROTECTED]
Subject: RE: how to connect to MyDBPAL from MySQL

Yes, ODBC works perfectly. I am using the same data source for MySQL
Administrator and Query Browser too.

Anyway, if you are using it, can you go over the steps for a db connection?

1. Open dbPAL, click on Workshop.
2. Choose the data source by clicking ODBC from the list under MySQL.
3. What next after that?
4. I clicked on new user and made an entry; it just gave me an open-lock
icon on the toolbar. How do I test my database connection? Db-test-connect
is grayed out, not clickable.

No error is reported in the err file.

Regards,


-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 3:32 PM
To: Sanjeev Sagar
Cc: [EMAIL PROTECTED]
Subject: Re: how to connect to MyDBPAL from MySQL

Are there any MySQL errors being logged? Did you confirm that the ODBC 
connection is working?

Sanjeev Sagar wrote:

I have seen mails talking about MydbPAL. I downloaded it, and I have a
MySQL ODBC data source too, but I am not able to connect with MydbPAL at
all. I have tested my ODBC data source and it works just fine. My machine
runs Linux and Windows through VMware Workstation; I installed dbPAL on
Windows and try to connect to the database on my local Linux partition.

Inside MydbPAL, I click on Workshop, then Object Browser, and select the
data source name.

I still see the wait symbol in the toolbar. Also, when I click on db test
connect, nothing happens. Am I missing anything? I checked the
documentation, but it's not clear how to connect using ODBC.

Any help would be highly appreciated.

Regards,








MySQL Connector/J 3.0.16 Has Been Released

2004-11-23 Thread Mark Matthews
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

MySQL Connector/J 3.0.16, a new version of the Type-IV all-Java JDBC
driver for MySQL has been released.

Version 3.0.16 is a bugfix release for the stable tree that is
suitable for use with any MySQL version including MySQL-4.1 or MySQL-5.0
(although it provides minimal 'new' functionality with MySQL-4.1 or
MySQL-5.0, users connecting to MySQL-4.1 or 5.0 should be using
Connector/J 3.1.x).

It is now available in source and binary form from the Connector/J
download pages at http://dev.mysql.com/downloads/connector/j/3.0.html
and mirror sites (note that not all mirror sites may be up to date at
this point of time - if you can't find this version on some mirror,
please try again later or choose another download site.)

-Mark

- From the changelog:

11-15-04 - Version 3.0.16-ga
- Re-issue character set configuration commands when re-using
  pooled connections and/or Connection.changeUser() when
  connected to MySQL-4.1 or newer.

- Fixed ResultSetMetaData.isReadOnly() to detect non-writable
  columns when connected to MySQL-4.1 or newer, based on existence
  of 'original' table and column names.

- Fixed BUG#5664, ResultSet.updateByte() when on insert row
  throws ArrayOutOfBoundsException.

- Fixed DatabaseMetaData.getTypes() returning incorrect (i.e.
  non-negative) scale for the 'NUMERIC' type.

- Fixed BUG#6198, off-by-one bug in Buffer.readString(string).

- Made TINYINT(1) -> BIT/Boolean conversion configurable via
  'tinyInt1isBit' property (default 'true' to be JDBC compliant out
  of the box).

- Only set 'character_set_results' during connection establishment
  if server version >= 4.1.1.

- Fixed regression where useUnbufferedInput was defaulting to
  'false'.

- Fixed BUG#6231, ResultSet.getTimestamp() on a column with TIME in
  it fails.
- --
Mark Matthews
MySQL AB, Software Development Manager - Client Connectivity
Office: +1 708 332 0507
www.mysql.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.6 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFBo9GgtvXNTca6JD8RAu/JAKCqk41l3U4YWd7qlBU3/bLC/QTEjACcC49l
Plr24IF4Ex93baUeVcKgu1o=
=YXAA
-END PGP SIGNATURE-




Re: how to connect to MyDBPAL from MySQL

2004-11-23 Thread Victor Pendleton
You could look at the processlist in the MySQL monitor to see if a 
connection has been made. You may also want to enable ODBC tracing.

Sanjeev Sagar wrote:
I guess I am getting close.
I drag the ODBC connection line to the database icon in db-test-connection.
I created a new user and dragged it to the user part of db-test-connection.
Now the waiting tray has a go icon. I dragged the go icon to the console
tray; nothing happened.
What is the next step to see whether the database connection is going through?
-Original Message-
From: Sanjeev Sagar [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 3:55 PM
To: Victor Pendleton
Cc: [EMAIL PROTECTED]
Subject: RE: how to connect to MyDBPAL from MySQL

Yes, ODBC works perfectly. I am using the same data source for MySQL
Administrator and Query Browser too.

Anyway, if you are using it, can you go over the steps for a db connection?
1. Open dbPAL, click on Workshop.
2. Choose the data source by clicking ODBC from the list under MySQL.
3. What next after that?
4. I clicked on new user and made an entry; it just gave me an open-lock
icon on the toolbar. How do I test my database connection? Db-test-connect
is grayed out, not clickable.

No error is reported in the err file.
Regards,
-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 23, 2004 3:32 PM
To: Sanjeev Sagar
Cc: [EMAIL PROTECTED]
Subject: Re: how to connect to MyDBPAL from MySQL

Are there any MySQL errors being logged? Did you confirm that the ODBC 
connection is working?

Sanjeev Sagar wrote:
 

I have seen mails talking about MydbPAL. I downloaded it, and I have a
MySQL ODBC data source too, but I am not able to connect with MydbPAL at
all. I have tested my ODBC data source and it works just fine. My machine
runs Linux and Windows through VMware Workstation; I installed dbPAL on
Windows and try to connect to the database on my local Linux partition.

Inside MydbPAL, I click on Workshop, then Object Browser, and select the
data source name.

I still see the wait symbol in the toolbar. Also, when I click on db test
connect, nothing happens. Am I missing anything? I checked the
documentation, but it's not clear how to connect using ODBC.

Any help would be highly appreciated.

Regards,




Re: Autocommit and kill?

2004-11-23 Thread Paul DuBois
At 16:09 -0800 11/23/04, Gary Richardson wrote:
Hey guys,
We just had a small incident. I was hoping for confirmation on why 
it happened.

A person updated a large table. They were trying to update one record
but due to keyboard fumbling sent the update without a where
statement. Left unchecked, it would have made a lot of records in the
database identical.
The table has around 6 million rows and is InnoDB. The command was
entered from the standard command line client.
The query ran for under 60 seconds before it was killed. As we were
preping to restore the table from backups, someone did a group by to
find out how many rows got fragged. It turns out the command didn't
affect the table at all.
I have two guesses as to why this happened:
1) For Inno tables, entering an update query actually runs BEGIN;
$query; COMMIT; Killing the query prevented the commit from happening.
The query was still running when it was killed (at least, that's what
it sounds like).  Given that a query can't be committed until after
it finishes, the commit never happened.
2) The first 60 seconds of the query were spent preparing or sorting
or something.
Most likely I'm thinking it was #1, but I just wanted to see if that's
actually the case.
Thanks.
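
For what it's worth, the same protection can be made deliberate rather than
accidental by running risky updates inside an explicit transaction (the table
and column names below are illustrative, not from the thread):

```sql
SET AUTOCOMMIT = 0;
UPDATE big_table SET col = 'x';   -- oops, forgot the WHERE clause
-- Nothing is permanent yet; inspect the result, then undo it:
ROLLBACK;
```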

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com


Reset permissions

2004-11-23 Thread Tim Trice
How can I reset the permissions on my database?  I have no clue when I did
it (at least several months ago) and haven't used it until today, but I
cannot for the life of me figure out how to create a new user with all
permissions.  The only user I have does not have GRANT or CREATE
privileges.  Am I screwed?

Thanks,

Tim
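
One commonly documented recovery path is to restart the server with grant
checking disabled and rebuild a privileged account (a sketch; the account
name, password, and restart procedure vary by installation):

```sql
-- First restart mysqld with the --skip-grant-tables option (a shell
-- step), then connect with:  mysql -u root
FLUSH PRIVILEGES;   -- reload the grant tables so GRANT statements work
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
    IDENTIFIED BY 'new_password' WITH GRANT OPTION;
FLUSH PRIVILEGES;
-- Finally, restart the server normally, without --skip-grant-tables.
```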



order by question

2004-11-23 Thread dan orlic
I have a question about ordering a set of records...

a   b      c      d
-------------------
1   Tax 1  2000   1.33
1   Tax 1  1500   2.5
1   Tax 1  1000   3.25
1   Tax 1  0      4.5

But I want the records to return ascending by (c), with the zero row
last, like below:

1   Tax 1  1000   3.25
1   Tax 1  1500   2.5
1   Tax 1  2000   1.33
1   Tax 1  0      4.5

Any suggestions?




difference command

2004-11-23 Thread TM
Do you know what is different between the commands mysql_connect and
mysql_pconnect?

Thankx





Data loss problem with mysql

2004-11-23 Thread ravi.ramachandra

Dear all,

We are running MySQL 4.0.17 in a Linux environment.  Our database resides
on an external disk connected via FC cables.  We recently noticed a loss
of data in the following scenario:

1. A Java application inserted a row in a table, in its own transaction.
2. A Java application queried the row in a separate transaction, and the
   query was successful.
3. The FC cable connecting the external db disks was then pulled and,
   after some time, put back.
4. The inserted row is now missing from the database.

From our logs, we have a query log that shows the insert statement prior
to the FC cable disconnection.  After the cable pull, we took a database
dump that reveals the missing row that was inserted prior to the FC
cable disconnection.

If somebody had accidentally deleted it, we would expect a delete
statement in the query log.  But there is no delete statement in the
query log.

Can anybody help?

With Regards,
Ravi






Re: difference command

2004-11-23 Thread Spenser
mysql_pconnect() opens a persistent connection, which won't be closed
even by mysql_close().  Try to avoid it if you can.

On Wed, 2004-11-24 at 00:54, TM wrote:
 Do you know what is different between the commands mysql_connect and
 mysql_pconnect?
 
 Thankx
 





Re: order by question

2004-11-23 Thread Hagen Hoepfner
You can try using two subqueries and a UNION, along the lines of:

(SELECT * FROM tab WHERE c <> 0 ORDER BY c)
UNION ALL
(SELECT * FROM tab WHERE c = 0);
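
An equivalent single statement sorts on a boolean expression first (a
sketch, using the same table `tab`; MySQL evaluates `c = 0` to 1 for the
zero rows and 0 for all others, so the zero rows sort last):

```sql
-- Non-zero rows come first (expression is 0), ordered by c ascending;
-- rows with c = 0 (expression is 1) come last:
SELECT * FROM tab ORDER BY (c = 0), c;
```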

Hagen
dan orlic wrote:
I have a question about ordering a set of records...

a   b      c      d
-------------------
1   Tax 1  2000   1.33
1   Tax 1  1500   2.5
1   Tax 1  1000   3.25
1   Tax 1  0      4.5

But I want the records to return ascending by (c), with the zero row
last, like below:

1   Tax 1  1000   3.25
1   Tax 1  1500   2.5
1   Tax 1  2000   1.33
1   Tax 1  0      4.5

Any suggestions?




Re: Data loss problem with mysql

2004-11-23 Thread andy thomas
On Wed, 24 Nov 2004 [EMAIL PROTECTED] wrote:


 Dear all,

 We are running MySQL 4.0.17 in a Linux environment.  Our database resides
 on an external disk connected via FC cables.  We recently noticed a loss
 of data in the following scenario:

 1. A Java application inserted a row in a table, in its own transaction.
 2. A Java application queried the row in a separate transaction, and the
    query was successful.
 3. The FC cable connecting the external db disks was then pulled and,
    after some time, put back.
 4. The inserted row is now missing from the database.

 From our logs, we have a query log that shows the insert statement prior
 to the FC cable disconnection.  After the cable pull, we took a database
 dump that reveals the missing row that was inserted prior to the FC
 cable disconnection.

 If somebody had accidentally deleted it, we would expect a delete
 statement in the query log.  But there is no delete statement in the
 query log.

 Can anybody help?

What operating system(s) are you using for the system you are making the
query from and also for the external database server?

mysqld makes as much use of database server system memory as possible, and
a lot of the live database will be cached in memory. If you insert a row
and then read it back, it will be in the table, but the table is in memory
and hasn't necessarily been written to physical disk. Also, UNIX and
Unix-like systems normally work with disk buffers, so that when a file is
written to, it is the disk buffer that is written, not the physical disk
itself. The disk buffers are then flushed out to disk every 30 seconds.

It could be that the FC cable was unplugged during the buffer flush,
causing the operating system to abort the flush and not update the file on
the physical disk.
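
If the tables involved are InnoDB, whether a committed transaction
survives such an event also depends on the log flush setting (a config
sketch; this goes in the standard [mysqld] group of my.cnf, and the value
shown is the default):

```
[mysqld]
# With 1, InnoDB writes and flushes its log to disk at every COMMIT,
# so committed transactions should be recoverable after a crash.
# With 0 or 2, up to about a second of committed transactions can be
# lost when the server or the disk path fails.
innodb_flush_log_at_trx_commit = 1
```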

Andy

