Re: single database ... many applications

2006-05-08 Thread Barry

jehova villa martinez schrieb:
Hi, as a newbie with some trouble understanding the English language, I
have a question that I don't know how to put into a search engine (I don't
know the technical keywords for my particular case). That is why I post here.


This is the whole picture:
I have four programs running at my site:
Freeradius
http://www.freeradius.org/

metadot web portal
http://www.metadot.com/index_static.html

mail toaster with qmail
http://www.tnpi.biz/internet/mail/toaster/

and Xpanel
http://www.xpanel.com/

Each of them uses MySQL, and each has its own database and its own
tables... These four applications handle the same usernames and passwords
for the same users. I mean that the same user has an e-mail account, uses
the same username and password to authenticate FreeRADIUS modem access on
a PortMaster 3, the same username and password to log in to our Metadot
portal, and... the same thing to access Xpanel virtual hosting for their
home page.


How can I use just one shared, common database for all of my
portal applications? Is this possible? It would be nice to have a
single login for everything... Is this a MySQL topic, or a separate
hack for each program?


Thank you for sharing any related experience or knowledge about this question.

Best regards from Mexico,




Yes.
You have to set up the GRANT privileges so that the applications can
connect from outside to the database that holds the users.

Just replace 'localhost' with '%' in the MySQL grant tables.

That way the other servers can access this database and read the users
from it.
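
For reference, a minimal sketch of what Barry describes (the database,
account, and password names here are placeholders, not anything from the
original setup):

  GRANT SELECT ON shared_auth.* TO 'portal'@'%' IDENTIFIED BY 'secret';
  FLUSH PRIVILEGES;

The '%' host pattern is what lets the other machines connect; a subnet
pattern such as '192.168.1.%' is safer than a bare '%'.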


Greets
Barry

--
Smileys rule (cX.x)C --o(^_^o)
Dance for me! ^(^_^)o (o^_^)o o(^_^)^ o(^_^o)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: question about utf and collation

2006-05-08 Thread Marko Zmak
Thanks for the info. I checked the links you suggested, but it doesn't
look like they address my problem. All the discussion there is about a
Croatian collation for latin2, but I'm interested in the utf8 character
set. For utf8 there's no Croatian collation. There is a Slovenian one
(utf8_slovenian_ci) that is similar to Croatian, but it's not exact.


Is there any way I can create my own collation from
utf8_slovenian_ci (modify it for Croatian)? If yes, how do I do it?


sheeri kritzer wrote:


I don't know what version of MySQL you're using, but a Google search
on "mysql croatian" got me:

http://bugs.mysql.com/bug.php?id=16373
and
http://bugs.mysql.com/bug.php?id=6504

which implies you can use CHARACTER SET latin2 COLLATE latin2_croatian_ci
but also shows that it's not quite working yet.  Follow those bugs,
and you'll find what you want.

(Note the link at the bottom of one of those bugs:
http://www.ambra.rs.ba/ -- I can't read Croatian, so I can't tell if that
website is of any use.)
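
For reference, the usage those bug reports point at would look like this
(a hedged sketch -- per the bugs above, the collation may not have been
fully working in all versions of that era):

  CREATE TABLE names (
    name VARCHAR(100)
  ) CHARACTER SET latin2 COLLATE latin2_croatian_ci;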


-Sheeri 



--
Marko Žmak, dipl.ing.mat.
Mob: +385 98 212 801
Email: [EMAIL PROTECTED]
Web: http://www.studioartlan.com/


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB Memory Problem causing mysql to crash

2006-05-08 Thread Dobromir Velev
Hi,
I'm aware of the fact that this is a 32-bit system, and I've tried to make
sure that mysqld will not use more than 4 GB. As you can see, the
innodb_buffer_pool_size is 2 GB, and the total amount of memory used by the
MyISAM key buffer and the per-thread variables is less than 2 GB. There
are no other services on this machine, so memory should not be a problem.

This server was working fine for almost a year until it recently started
crashing. Could it be some memory problem I've run into, and can you suggest
anything I can do to avoid similar problems in the future?

Thanks
Dobromir Velev


On Saturday 06 May 2006 01:23, Heikki Tuuri wrote:
 Dobromir,

 you are running a 32-bit operating system, so the size of the mysqld
 process is limited to 2 GB, or at most 4 GB. The 8 GB of total RAM does
 not help here, since 2^32 = 4 G.

 You should reduce the key_buffer_size or innodb_buffer_pool_size in my.cnf.
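
 A hedged illustration of that advice (the numbers are placeholders, not
 a recommendation from the original mail):

   [mysqld]
   # keep the total process size safely below the 32-bit ceiling
   innodb_buffer_pool_size = 1200M
   key_buffer_size         = 400M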

 Best regards,

 Heikki

 Oracle Corp./Innobase Oy
 InnoDB - transactions, row level locking, and foreign keys for MySQL

 InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
 tables
 http://www.innodb.com/order.php

 - Original Message -
 From: sheeri kritzer [EMAIL PROTECTED]
 Newsgroups: mailing.database.myodbc
 Sent: Friday, May 05, 2006 10:50 PM
 Subject: Re: InnoDB Memory Problem causing mysql to crash

  Well, according to my calculations:
  innodb_buffer_pool_size + key_buffer_size
  + max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size)
  + max_connections*2MB
 
  (I used the default binlog_cache_size value of 32K plus your settings)
 
  MySQL could use up to 4.991913 G of memory.  Shouldn't be a problem,
  unless of course your 8G machine is running something other than
  MySQL.  Is it?  Because the fact that it could not allocate memory
  means that something was trying to use memory that didn't exist.
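
  For reference, the arithmetic behind that figure (hedged, using the
  my.cnf values quoted below and 1024-based units):

    2000M + 1000M + 600*(1M + 500K + 32K) + 600*2M
    = 2000M + 1000M + ~912M + 1200M
    = ~5112M, i.e. ~4.99 G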
 
  Did MySQL dump a core file?
 
  Did you follow this advice?
 
  You seem to be running 32-bit Linux and have 473 concurrent connections.
  If you have not changed STACK_SIZE in LinuxThreads and built the binary
  yourself, LinuxThreads is quite likely to steal a part of the global heap
  for the thread stack. Please read http://www.mysql.com/doc/L/i/Linux.html
 
  Did you read the man page?
 
  The manual page at http://www.mysql.com/doc/en/Crashing.html contains
  information that should help you find out what is causing the crash.
 
  Also, did you try to look at your slow query logs to see if there was
  some kind of query hogging memory?  What about backups running at the
  same time?
 
  I'll note that you maxed out your connections, which shouldn't cause
  a crash, but might indicate that your server tuning is not up-to-date
  with your actual usage.
 
  Are your data and logfiles on different partitions?  We had
  problems with one machine where the data and logfiles were on the same
  partition, and it would crash -- we moved to a machine that was the
  same except for the different OS partitions, and it didn't crash!  We
  figure the disk seeking just killed the OS so it segfaulted the mysql
  process.
 
  -Sheeri
 
  On 5/4/06, Dobromir Velev [EMAIL PROTECTED] wrote:
  Hi,
  I'm trying to resolve why InnoDB is crashing. It happened twice in the
  last month without obvious reason.
 
  Any help will be appreciated.
 
  Dobromir Velev
 
  My Server is
  Red Hat Enterprise Linux ES release 3 (Taroon Update 7)
  2.4.21-32.0.1.ELsmp
 
  Dual 3.2 GHz Intel Xeon
  8 GB RAM
  with 3 x 146GB SCA Ultra 320 10K RPM SCSI Drives
 
 
  my.cnf settings
 
  innodb_buffer_pool_size=2000M
  innodb_additional_mem_pool_size=20M
  innodb_log_file_size=150M
  innodb_log_buffer_size=8M
  innodb_flush_log_at_trx_commit=0
  innodb_lock_wait_timeout=50
  key_buffer_size=1000M
  read_buffer_size=500K
  read_rnd_buffer_size=1200K
  sort_buffer_size=1M
  thread_cache=256
  thread_concurrency=8
  thread_stack=126976
  myisam_sort_buffer_size=64M
  max_connections=600
 
 
  The error log shows the following message:
 
  InnoDB: Fatal error: cannot allocate 1048576 bytes of
  InnoDB: memory with malloc! Total allocated memory
  InnoDB: by InnoDB 2263507272 bytes. Operating system errno: 12
  InnoDB: Cannot continue operation!
  InnoDB: Check if you should increase the swap file or
  InnoDB: ulimits of your operating system.
  InnoDB: On FreeBSD check you have compiled the OS with
  InnoDB: a big enough maximum process size.
  InnoDB: We now intentionally generate a seg fault so that
  InnoDB: on Linux we get a stack trace.
  mysqld got signal 11;
  This could be because you hit a bug. It is also possible that this
  binary or one of the libraries it was linked against is corrupt,
  improperly built, or misconfigured. This error can also be caused by
  malfunctioning hardware.

  We will try our best to scrape up some info that will hopefully help
  diagnose
 
 

Re: How to see why client got blocked

2006-05-08 Thread Dominik Klein

sheeri kritzer schrieb:

If your server has log-warnings set to ON, you can check the error
logs and use a script to count, for each host, how many times in a row
this happens.


+---+---+
| Variable_name | Value |
+---+---+
| log_warnings  | 1 |
+---+---+

I did not turn it off, and the documentation says it is on by default. I do
not see any error regarding replication in the log on the slave
(`hostname`.err).
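
For what it's worth, a hedged checklist for the blocked-host case
(flush-hosts and max_connect_errors are the standard knobs behind
"Host ... is blocked" messages; the threshold value is illustrative):

  mysql> SHOW GLOBAL VARIABLES LIKE 'log_warnings';
  mysql> SET GLOBAL max_connect_errors = 10000;  -- raise the blocking threshold
  shell> mysqladmin flush-hosts                  -- clear currently blocked hosts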



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Optimizing range search with two-table ORDER BY

2006-05-08 Thread Jesse Sheidlower

Is there any way to optimize a range query that includes
an ORDER BY with keys from two different tables? I'm running
MySQL 4.1.18 on FreeBSD.

I've been struggling with some queries that are _incredibly_
slow--from 1-5 minutes on slowish but decent hardware. When I
try versions without the ORDER BY they're fast, and whatever
tweaks I make to the indexing speed those versions up even further,
but they have no effect on the situation with the ORDER BY. The
docs suggest that indexes can't help here, but I find it hard
to believe that sorting on keys in different tables is that
rare a requirement; is there any way to restructure the
query to speed things up?

To take a few simple examples (most actual queries are more
complicated, but the slowdown isn't a result of the
complication), I have three tables (edited to remove fields
not used in these examples), part has_many quotation
has_many cwGroup:

mysql> desc part;
+---+--+--+-+-++
| Field | Type | Null | Key | Default | Extra  |
+---+--+--+-+-++
| id| int(10) unsigned |  | PRI | NULL| auto_increment |
| cit   | text | YES  | | NULL||
| d | int(11)  | YES  | MUL | NULL||
+---+--+--+-+-++

mysql> desc quotation;
+-+--+--+-+-++
| Field   | Type | Null | Key | Default | Extra  |
+-+--+--+-+-++
| id  | int(10) unsigned |  | PRI | NULL| auto_increment |
| part_id | int(10) unsigned |  | MUL | 0   ||
| qt  | text | YES  | | NULL||
+-+--+--+-+-++

mysql> desc cwGroup;
+--+--+--+-+-++
| Field| Type | Null | Key | Default | Extra  |
+--+--+--+-+-++
| id   | int(10) unsigned |  | PRI | NULL| auto_increment |
| quotation_id | int(10) unsigned |  | MUL | 0   ||
| cw   | varchar(100) | YES  | | NULL||
| stripped_cw  | varchar(100) | YES  | MUL | NULL||
+--+--+--+-+-++

The rough numbers are 100K rows in part, 2.7M in quotation, and
3.3M in cwGroup.

For example, the following query, which would return 460 rows
without the LIMIT, takes about 51s:

SELECT part.d, quotation.qt, cwGroup.cw
FROM cwGroup 
JOIN quotation ON (quotation.id = cwGroup.quotation_id ) 
JOIN part ON ( part.id = quotation.part_id ) 
WHERE ( part.d BETWEEN 1950 AND 1970 AND cwGroup.stripped_cw LIKE 'man%' ) 
ORDER BY part.d, cwGroup.stripped_cw 
LIMIT 25

and the EXPLAIN for it looks like:

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: cwGroup
 type: range
possible_keys: quotation_id,stripped_cw
  key: stripped_cw
  key_len: 101
  ref: NULL
 rows: 8489
Extra: Using where; Using temporary; Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: quotation
 type: eq_ref
possible_keys: PRIMARY,part_id
  key: PRIMARY
  key_len: 4
  ref: rqs_incs.cwGroup.quotation_id
 rows: 1
Extra:
*** 3. row ***
   id: 1
  select_type: SIMPLE
table: part
 type: eq_ref
possible_keys: PRIMARY,d
  key: PRIMARY
  key_len: 4
  ref: rqs_incs.quotation.part_id
 rows: 1
Extra: Using where

Without the ORDER BY it drops to about 1.5s and EXPLAIN no longer
shows the use of temporary and filesort.

An even worse example, but unfortunately a common need in this
app, is a query that returns a lot of rows (but which I'm paging
through, of course), such as:

SELECT part.d, quotation.qt, cwGroup.cw
FROM cwGroup 
JOIN quotation ON (quotation.id = cwGroup.quotation_id ) 
JOIN part ON ( part.id = quotation.part_id ) 
WHERE ( cwGroup.stripped_cw BETWEEN 'ant' AND 'asx' ) 
ORDER BY cwGroup.stripped_cw, part.d
LIMIT 25

This takes 2m31s to execute, obviously due to the large number
of rows (the total result is about 47K rows), but a similar
query without the ORDER BY took only .08s (though a COUNT(*)
took a similar 2-3m):

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: cwGroup
 type: range
possible_keys: quotation_id,stripped_cw
  key: stripped_cw
  key_len: 101
  ref: NULL
 

[Fwd: Getting next birthdays]

2006-05-08 Thread ESV Media GmbH

Hey, I've a problem with getting the next and the current birthdays.

This is my current birthday SQL:

SELECT SQL_CACHE birthday,mem.lname, mem.fname,mem.mem_id FROM members mem
INNER JOIN network net ON (net.mem_id = mem.mem_id AND net.frd_id =1)
WHERE
(( DAYOFYEAR(FROM_UNIXTIME( mem.birthday )) < DAYOFYEAR(now())
)*DAYOFYEAR(CONCAT(YEAR(NOW()),'-12-31')))+DAYOFYEAR(FROM_UNIXTIME(
mem.birthday )) >= DAYOFYEAR(now())
ORDER BY (( DAYOFYEAR(FROM_UNIXTIME( mem.birthday )) < DAYOFYEAR(NOW())
)*DAYOFYEAR(CONCAT(YEAR(NOW()),'-12-31')))+DAYOFYEAR(FROM_UNIXTIME(
mem.birthday )) LIMIT 4


The field birthday is in Unix timestamp format.
I need the birthdays from yesterday, today, and the next 4 or 5 birthdays.
And I need the network table to get my friends list.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Need help in recreating .MYD files

2006-05-08 Thread balaraju mandala

Dear Comunity,

I need your help. I accidentally deleted some '.MYD' files. I want to restore
them without stopping the running server. How can I do this? I am using
Linux; I tried to create the file with vi tablename.MYD (a blank file),
but it is not accepted by MySQL.

regards,
bala


Outfile syntax and out of memory

2006-05-08 Thread Johan Lundqvist

Hi,

I need to extract some data to a textfile from a big database.


If I try to do it like this:
mysql < queryfile.sql > outfile.txt

outfile.txt then looks something like:
OrderID, Quant, OrdrDate, code1, code2...
10021, 12, 20060412, 23, 95...
10022, 5, 20060412, , 75...

But I never get a complete file. I get an out of memory error after an
hour or 2!!



If I instead insert the following code in queryfile.sql:
INTO OUTFILE 'outfile.txt'

Now my outfile.txt doesn't get the first row with the column names, and
any NULL values are exported as \N.


This is a big problem, because the import function at the place I send
the data to only accepts the format I get using mysql < queryfile.sql >
outfile.txt.


Any help??! Ideas??

Can I in any way format my output to print the column names and print 
NULL values as 'nothing'??
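
One common workaround is to UNION a literal header row onto the query and
wrap nullable columns in IFNULL, so NULL comes out as ''. A hedged sketch
with placeholder table/column names (in practice the literal row is
written first):

  SELECT 'OrderID', 'Quant', 'OrdrDate'
  UNION ALL
  SELECT OrderID, IFNULL(Quant, ''), IFNULL(OrdrDate, '')
  FROM orders
  INTO OUTFILE '/tmp/outfile.txt'
  FIELDS TERMINATED BY ', ';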


Regards,
/Johan Lundqvist

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Outfile syntax and out of memory

2006-05-08 Thread Dilipkumar

Hi,

Increase max_allowed_packet to 1.5 GB and then try to import your data.

e.g., at the mysql prompt, run the file as:

use database
\. /tmp/filename.txt
*
Johan Lundqvist wrote:


Hi,

I need to extract some data to a textfile from a big database.


If I try to do like this:
mysql < queryfile.sql > outfile.txt

outfile.txt it looks something like:
OrderID, Quant, OrdrDate, code1, code2...
10021, 12, 20060412, 23, 95...
10022, 5, 20060412, , 75...

But, I never get a complete file. I get a out of memory error after a 
hour or 2!!



If I instead insert the following code in queryfile.sql:
INTO OUTFILE 'outfile.txt'

Now my outfile.txt don't get the first row with the column names, and 
any NULL values are exported as \N.


This is a big problem, cause the import function that exist where I 
send the data only accept the format I get using mysql <
queryfile.sql > outfile.txt.


Any help??! Ideas??

Can I in any way format my output to print the column names and print 
NULL values as 'nothing'??


Regards,
/Johan Lundqvist




--
Thanks & Regards,
Dilipkumar
DBA Support





Re: Need help in recreating .MYD files

2006-05-08 Thread Dilipkumar

Hi,

If you have deleted .MYD files, then truncate the table and restore it
from a backup, if you have one.

The .MYD file is the one that contains your precious data.

balaraju mandala wrote:


Dear Comunity,

I need your help. I accidently deleted some '.MYD' files. I want to 
restore

them, without stopping the running server. how i can do this. i am using
Linux OS, i tried to create file using --- vi tablename.MYD(a blank 
file)

but it is not accepted by MySql.

regards,
bala





--
Thanks & Regards,
Dilipkumar
DBA Support



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Outfile syntax and out of memory

2006-05-08 Thread George Law
 
Johan,

have you thought about doing this incrementally?
i.e. 25% at a time x 4

To show something for NULL, you can use the
COALESCE function.

e.g. COALESCE(column, 'nothing')


--
George Law
VoIP Network Developer
864-678-3161
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]
 

-Original Message-
From: Johan Lundqvist [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 08, 2006 10:16 AM
To: mysql@lists.mysql.com
Subject: Outfile syntax and out of memory

Hi,

I need to extract some data to a textfile from a big database.


If I try to do like this:
mysql < queryfile.sql > outfile.txt

outfile.txt it looks something like:
OrderID, Quant, OrdrDate, code1, code2...
10021, 12, 20060412, 23, 95...
10022, 5, 20060412, , 75...

But, I never get a complete file. I get a out of memory error after a 
hour or 2!!


If I instead insert the following code in queryfile.sql:
INTO OUTFILE 'outfile.txt'

Now my outfile.txt don't get the first row with the column names, and 
any NULL values are exported as \N.

This is a big problem, cause the import function that exist where I send

the data only accept the format I get using mysql < queryfile.sql >
outfile.txt.

Any help??! Ideas??

Can I in any way format my output to print the column names and print 
NULL values as 'nothing'??

Regards,
/Johan Lundqvist

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to repair a table,

2006-05-08 Thread Dilipkumar

Hi,

Yes, it can be repaired using:

myisamchk -u root -p <datadirectory>/<tablename>.* <password>

This will check the data file and also the index file.
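
For reference, a more conventional hedged invocation (myisamchk takes no
username/password and should be run with the server stopped or the tables
flushed; the path here is a placeholder):

  shell> myisamchk --recover /var/lib/mysql/mydb/mytable.MYI

or, from within the server:

  mysql> REPAIR TABLE mydb.mytable;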

Payne wrote:

hi, I got a table where the .MYI isn't able to be read. I tried to
run myisamchk but it gives an error about the index.


Do I need to drop the table? Can it be repaired?


Payne




--
Thanks & Regards,
Dilipkumar
DBA Support



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Outfile syntax and out of memory

2006-05-08 Thread Johan Lundqvist

Hi George,

To do it incrementally is not really an option, since I have to run it
as a script during a short time-frame every night, and there's simply not
time to process the files.

The outfile is about 2 - 10 GB every time.

The tables have about 100 - 180 columns, and doing a COALESCE on each
would create humongous SQL statements.


I might also have written it a bit ambiguously in my question: I don't want
the word 'nothing', I really want the field to contain nothing -- as in ''.


Regards,
/Johan - Ua, Sweden


George Law wrote:
 
Johan,


have you thought about doing this incrementally?
ie - 25% at a time x 4

to show something for NULL, you can use the 
COALESCE function.


ie - COALESCE(column,'nothing') 



--
George Law
VoIP Network Developer
864-678-3161
[EMAIL PROTECTED]
MSN: [EMAIL PROTECTED]
 


-Original Message-
From: Johan Lundqvist [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 08, 2006 10:16 AM

To: mysql@lists.mysql.com
Subject: Outfile syntax and out of memory

Hi,

I need to extract some data to a textfile from a big database.


If I try to do like this:
mysql < queryfile.sql > outfile.txt

outfile.txt it looks something like:
OrderID, Quant, OrdrDate, code1, code2...
10021, 12, 20060412, 23, 95...
10022, 5, 20060412, , 75...

But, I never get a complete file. I get a out of memory error after a 
hour or 2!!



If I instead insert the following code in queryfile.sql:
INTO OUTFILE 'outfile.txt'

Now my outfile.txt don't get the first row with the column names, and 
any NULL values are exported as \N.


This is a big problem, cause the import function that exist where I send

the data only accept the format I get using mysql < queryfile.sql >
outfile.txt.


Any help??! Ideas??

Can I in any way format my output to print the column names and print 
NULL values as 'nothing'??


Regards,
/Johan Lundqvist



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Outfile syntax and out of memory

2006-05-08 Thread Johan Lundqvist

Hi,

Where should I increase max_allowed_packet?
I get an error from Windows (yes, I know... it's running on an M$ OS; not
my bad -- not my decision).


The results are about 2 - 10 GB of data.

Regards,
/Johan

Dilipkumar wrote:

Hi,

Increase max_allowed_packet to 1.5 GB and then try to import your data.

e.g., at the mysql prompt, run the file as:

use database
\. /tmp/filename.txt
Johan Lundqvist wrote:


Hi,

I need to extract some data to a textfile from a big database.


If I try to do like this:
mysql < queryfile.sql > outfile.txt

outfile.txt it looks something like:
OrderID, Quant, OrdrDate, code1, code2...
10021, 12, 20060412, 23, 95...
10022, 5, 20060412, , 75...

But, I never get a complete file. I get a out of memory error after a 
hour or 2!!



If I instead insert the following code in queryfile.sql:
INTO OUTFILE 'outfile.txt'

Now my outfile.txt don't get the first row with the column names, and 
any NULL values are exported as \N.


This is a big problem, cause the import function that exist where I 
send the data only accept the format I get using mysql <
queryfile.sql > outfile.txt.


Any help??! Ideas??

Can I in any way format my output to print the column names and print 
NULL values as 'nothing'??


Regards,
/Johan Lundqvist






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: [Fwd: Getting next birthdays]

2006-05-08 Thread Peter Brawley
 I need the birthdays from yesterday, today and the next 4 or 5 
birthdays.


You don't need to manually compute every date component. Try something
like ...


 SELECT ...
 WHERE DATE_SUB(NOW(),INTERVAL 1 DAY) <= mem.birthday
   AND DATE_ADD(NOW(),INTERVAL 5 DAY) >= mem.birthday
 ORDER BY mem.birthday;

PB

-

ESV Media GmbH wrote:

Hey, I've a problem with getting the next and the current birthdays.

This is my current birthday SQL:

SELECT SQL_CACHE birthday,mem.lname, mem.fname,mem.mem_id FROM members 
mem

INNER JOIN network net ON (net.mem_id = mem.mem_id AND net.frd_id =1)
WHERE
(( DAYOFYEAR(FROM_UNIXTIME( mem.birthday )) < DAYOFYEAR(now())
)*DAYOFYEAR(CONCAT(YEAR(NOW()),'-12-31')))+DAYOFYEAR(FROM_UNIXTIME(
mem.birthday )) >= DAYOFYEAR(now())
ORDER BY (( DAYOFYEAR(FROM_UNIXTIME( mem.birthday )) < DAYOFYEAR(NOW())
)*DAYOFYEAR(CONCAT(YEAR(NOW()),'-12-31')))+DAYOFYEAR(FROM_UNIXTIME(
mem.birthday )) LIMIT 4


The field birthday is in Unix timestamp format.
I need the birthdays from yesterday, today, and the next 4 or 5 birthdays.
And I need the network table to get my friends list.






--
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.392 / Virus Database: 268.5.5/333 - Release Date: 5/5/2006


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



On what factors does the speed of MySQL depend

2006-05-08 Thread abhishek jain

Dear Friends,
I have a database with approximately 10 tables, three of which have about
1 lakh (100,000) records each. I need to know what factors the speed of
MySQL depends on:
1) Does the number of records in one table affect the speed of data
fetches from another table in the same database?
2) What is the approximate table size that is ideal for MySQL?

Any other factors you want my attention focused on?
Thanks,
Abhishek Jain


Re: Backups with MySQL/InnoDB

2006-05-08 Thread David Hillman

On May 7, 2006, at 11:29 PM, Robert DiFalco wrote:

Fast, incremental, compressed, and no max-size limitations. Must be
transaction safe; able to run while transactions are going on without
including any started after the backup began; the usual stuff.


   Incremental, transaction safe, compressed, fast, no-max-size.  
( In order )


--
David Hillman
LiveText, Inc
1.866.LiveText x235



Connection Pooling

2006-05-08 Thread romyd misc

Hi Everyone,

I'm developing an application using C# .NET with MySQL as the database. It's
a multithreaded application: we open a MySQL database connection when the
application starts, and all the database requests use the same connection.
But under stress, or when more than one request tries to access the
database, I get "object reference" errors. I don't get this error when the
frequency of database calls is low. Does it sound like I need to implement
connection pooling?

I tried to look it up online, but couldn't find any help in the MySQL
documentation. Can someone help me set up MySQL connection pooling with
C#.NET?

Thanks in advance,
Romy


Re: Backups with MySQL/InnoDB

2006-05-08 Thread Daniel da Veiga

On 5/8/06, David Hillman [EMAIL PROTECTED] wrote:

On May 7, 2006, at 11:29 PM, Robert DiFalco wrote:
 Fast, incremental, compressed, and no max-size limitations. Must be
 transaction safe; able to run while transactions are going on without
 including any started after the backup began; the usual stuff.

Incremental, transaction safe, compressed, fast, no-max-size.
( In order )



Those are certainly the most important features (and I'll be glad to
beta-test it ;) I'll add: manage multiple servers, deal with
replication (using the replicated server as a backup would be cool),
manage binlogs (date and purge) and be compatible with version 4.1 and
above (I don't plan on using the 5 version any time soon).


--
David Hillman
LiveText, Inc
1.866.LiveText x235






--
Daniel da Veiga
Computer Operator - RS - Brazil
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCM/IT/P/O d-? s:- a? C++$ UBLA++ P+ L++ E--- W+++$ N o+ K- w O M- V-
PS PE Y PGP- t+ 5 X+++ R+* tv b+ DI+++ D+ G+ e h+ r+ y++
--END GEEK CODE BLOCK--

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Unidata to Mysql

2006-05-08 Thread Brett Harvey


Has anyone converted from a Unidata db to MySQL?  How easy/difficult
is it to do?  Does the MySQL Migration Toolkit help with that process?


An old consulting company set up a website with Unidata and Perl... we
want to convert to MySQL...


Thanks!

--

/Brett C. Harvey;
/Creative-Pages.Net, President;
/Facility Management Systems, CTO (www.fmsystems.biz);
/Lasso Partner Association Member ID #LPA135259 
(www.omnipilot.com/www.lassopartner.com);



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: novice on SQL

2006-05-08 Thread tony yau
Hi John,

tried your suggestion but I can't get it to work. This is because I don't
know how to set conditions in the following clauses (because there aren't
any):

  and Table1.[condition for Changes1]
  and Table2.[condition for Changes2]
  and Table3.[condition for Changes3]

the result I got was similar to the following (note: the ID is the pkey of
another table)

ID    Changes1    Changes2    Changes3
-
 1    10.0        10.0        same as
 1    10.3        10.3
 1    12.2        12.2
 2    31.0        31.0
 3    1.02        1.02
 3    4.9         4.9

thanks for your help anyway

Tony

tony yau [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Hi John,

 I didn't know you can do that! (such a novice indeed!)
 Thank you for your reply, I will put it to the test first thing when i get
 back to the office tomo.

 Cheers

 John Hicks [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
  tony yau wrote:
   Hello,
  
   I can get a select result like the following: (SELECT ID,Changes FROM
   mytable WHERE somecondition;)
  
ID    Changes
   -
   1  10.0
   1  10.3
   1  12.2
   2  31.0
   3  1.02
   3  4.9
  
   how can I get the above result sets into the following format (columns
   'Changes1','Changes2',... are all from 'Changes')
  
ID    Changes1    Changes2    Changes3    (limits of 5)
-
 1    10.0        10.3        12.2
 2    31.0
 3    1.02        4.9
  
  
   I have got a method that works (I think) by first do a SELECT getting
   DISTINCT id values and then foreach of these ID I do another SELECT to
 get
   the Changes values and then just massage the display.
  
   Is there another way of doing this by using a single SQL query?
 
  There may be a simpler way, but this should work:
 
  select Table.ID,
  Table1.Changes as Changes1,
  Table2.Changes as Changes2,
  Table3.Changes as Changes3
 
  from Table,
  Table as Table1,
  Table as Table2,
  Table as Table3
 
  where Table.ID = Table1.ID
  and Table.ID = Table2.ID
  and Table.ID = Table3.ID
 
  and Table1.[condition for Changes1]
  and Table2.[condition for Changes2]
  and Table3.[condition for Changes3]
 
  order by table.ID
 
 
  --J
 
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 




 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Connection Pooling

2006-05-08 Thread Tim Lucia
I don't hear you need to implement connection pooling.  Maybe, but I think
you might still have errors under load, as you approach the maximum
connection count in the pool.

Tim


-Original Message-
From: romyd misc [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 08, 2006 2:37 PM
To: mysql@lists.mysql.com
Subject: Connection Pooling

Hi Everyone,

I'm developing an application using C# .NET and mysql as database. It's a
multithreaded application, we open a mysql database connection at the very
beginning when the application is started and all the database requests use
the same connection. But under stress or when more than one request try to
access database, i get object reference errors. I don't get this error
when frequency of database calls is low. Does it sounds like i need to
implement connection pooling?

I tried to lookup online, but couldn't find any help under mysql
documentation. Can someone help me setting up mysql connection pooling with
C#.NET.

Thanks in advance,
Romy


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Connection Pooling

2006-05-08 Thread William R. Mussatto
romyd misc said:
 Hi Everyone,

 I'm developing an application using C# .NET and mysql as database. It's a
 multithreaded application, we open a mysql database connection at the very
 beginning when the application is started and all the database requests
 use
 the same connection. But under stress or when more than one request try to
 access database, i get object reference errors. I don't get this error
 when frequency of database calls is low. Does it sounds like i need to
 implement connection pooling?

 I tried to lookup online, but couldn't find any help under mysql
 documentation. Can someone help me setting up mysql connection pooling
 with
 C#.NET.

 Thanks in advance,
 Romy
Your comment about one connection for all of the threads disturbs me.

Your application will have to ensure that each thread is finished with the
connection and returns it to the pool.  Two threads cannot, at the same
time, use the same connection.  Say thread A had performed a select which
returned 2000 row resultset.  Until that thread had read in all 2000 rows,
they would still be in the connection.  If thread B tried to use the same
connection and asked for a different result set when thread A went back
for the rest of its results where would they be?

When you put stress on your application this is more likely to happen.

What a pool does is allow your threads to formally release their
connections back to the pool when they are done with them and re-aquire
them later without the full overhead to going all the way back to the
server to open a connection.  Instead you go to some intermediate point
where a set of threads are already avaiable (in Apache its at the child
level and in java its at the container level).


Bill



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Tuning a Server with 10,000 databases

2006-05-08 Thread Alex
That's what I actually did now. We have the "databases start with
username + number appended" situation here, so I patched the sql_show.cc
code to only do ACL checks on databases starting with the username.


Still not optimal, but it cuts down a SHOW DATABASES on a server with 60,000
databases from 15 seconds to 0.14 seconds, which is OK.



Alexey Polyakov schrieb:

That's not a bug, that's a missing feature (or we can call it
unoptimal behaviour).

I think that having 10,000 databases and 10,000 users on a single mysqld,
and doing a SHOW DATABASES query often, isn't what the developers see as
the intended usage for MySQL. :)

Here's what happens when you do a SHOW DATABASES query:
1) mysqld gets a list of subdirectories of mysql data dir (usually
/var/lib/mysql). Each directory is a database
2) It loops through all entries, and adds an entry to result set if:
 a) an user has global show databases privilege
 b) an user has been granted access for this database
Part b) is what actually takes time. For each entry the server first
checks the ACL cache. It's a hash table, and lookups against it are very
fast. But for a SHOW DATABASES query most lookups will return a miss,
and a miss means a full scan of the whole ACL db. So for 10,000 databases
it scans a table with 1,000 rows 10,000 times, which means 10 million
scanned records. That's why it's slow.
As a workaround, if one has some rules regarding which user can see
which DB (for example, if usernames and database names start with the same
substring), they can add this check to the code (so scans will be
avoided for most entries). That's still far from optimal, but at least
SHOW DATABASES will take dozens of milliseconds instead of seconds.

On 5/6/06, sheeri kritzer [EMAIL PROTECTED] wrote:

Perhaps it's time to file a bug report, then?

-Sheeri

On 5/3/06, Alex [EMAIL PROTECTED] wrote:
 This problem is indeed not related to OS / Hardware Problems.

 Take a look at this thread:

 http://lists.mysql.com/mysql/197542

 Read the part about show databases as root vs standard user

 + observed file system activity.



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]






--
Alexey Polyakov




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Tuning a Server with 10,000 databases

2006-05-08 Thread Alexey Polyakov

On 5/9/06, Alex [EMAIL PROTECTED] wrote:


That's what I actually did now. We have got the databases start with
usernames + number appended situation here so i patched sql_show.cc
code to only do acl checks on databases starting with the username.

Still not optimal but cuts down a show databases on a server with 60.000
databases from 15 seconds to 0.14 seconds which is ok.


Same here -- my database names start with username+underscore, so the query
now takes 0.05s instead of 6s (I have about 15,000 DBs).

--
Alexey Polyakov

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Connection Pooling

2006-05-08 Thread romyd misc

What I meant by implementing connection pooling: do I need to make any
code changes other than changes to the connection string?

Thanks,
Romy



On 5/8/06, Tim Lucia [EMAIL PROTECTED] wrote:


I don't hear you need to implement connection pooling.  Maybe, but I
think
you might still have errors under load, as you approach the maximum
connection count in the pool.

Tim


-Original Message-
From: romyd misc [mailto:[EMAIL PROTECTED]
Sent: Monday, May 08, 2006 2:37 PM
To: mysql@lists.mysql.com
Subject: Connection Pooling

Hi Everyone,

I'm developing an application using C# .NET and mysql as database. It's a
multithreaded application, we open a mysql database connection at the very

beginning when the application is started and all the database requests
use
the same connection. But under stress or when more than one request try to
access database, i get object reference errors. I don't get this error
when frequency of database calls is low. Does it sounds like i need to
implement connection pooling?

I tried to lookup online, but couldn't find any help under mysql
documentation. Can someone help me setting up mysql connection pooling
with
C#.NET.

Thanks in advance,
Romy




RE: Connection Pooling

2006-05-08 Thread Tim Lucia
It sounds like you need to either synchronize access to 'the connection'
(one user at a time), or have a connection per request.  In the latter case,
obtaining a connection from a pool of connections makes sense.
Unfortunately, I have only done this with Java -- not with .NET.  I would be
surprised, however, if you had to do anything special other than connecting
via a pooling connection string.  In the Java case, connection.close() is
overridden to simply return the connection to the pool (and .open() borrows
one, ...)
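
For Connector/Net, pooling is normally switched on in the connection
string itself. A hedged sketch with placeholder server name and
credentials:

  Server=dbhost;Database=mydb;Uid=appuser;Pwd=secret;Pooling=true;Min Pool Size=0;Max Pool Size=100;

Each thread then opens its own connection for the duration of a request
and closes it promptly; Close() just returns the connection to the pool.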

Tim


-Original Message-
From: romyd misc [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 08, 2006 4:33 PM
To: Tim Lucia
Cc: mysql@lists.mysql.com
Subject: Re: Connection Pooling

What i meant by implementing connection pooling i meant if i need to do any
code changes other than changes in connection string.

Thanks,
Romy



On 5/8/06, Tim Lucia [EMAIL PROTECTED] wrote:

 I don't hear you need to implement connection pooling.  Maybe, but I 
 think you might still have errors under load, as you approach the 
 maximum connection count in the pool.

 Tim


 -Original Message-
 From: romyd misc [mailto:[EMAIL PROTECTED]
 Sent: Monday, May 08, 2006 2:37 PM
 To: mysql@lists.mysql.com
 Subject: Connection Pooling

 Hi Everyone,

 I'm developing an application using C# .NET and mysql as database. 
 It's a multithreaded application, we open a mysql database connection 
 at the very

 beginning when the application is started and all the database 
 requests use the same connection. But under stress or when more than 
 one request try to access database, i get object reference errors. I 
 don't get this error when frequency of database calls is low. Does it 
 sounds like i need to implement connection pooling?

 I tried to lookup online, but couldn't find any help under mysql 
 documentation. Can someone help me setting up mysql connection pooling 
 with C#.NET.

 Thanks in advance,
 Romy




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Backups with MySQL/InnoDB

2006-05-08 Thread Duzenbury, Rich


 -Original Message-
 From: Daniel da Veiga [mailto:[EMAIL PROTECTED] 
 Sent: Monday, May 08, 2006 1:55 PM
 To: mysql@lists.mysql.com
 Subject: Re: Backups with MySQL/InnoDB
 
 On 5/8/06, David Hillman [EMAIL PROTECTED] wrote:
  On May 7, 2006, at 11:29 PM, Robert DiFalco wrote:
   Fast, incremental, compressed, and no max-size 
 limitations. Must be 
   transaction safe; able to run while transactions are going on 
   without including any started after the backup began; the 
 usual stuff.
 
  Incremental, transaction safe, compressed, fast, no-max-size.
  ( In order )
 
 
 Those are certainly the most important features (and I'll be 
 glad to beta-test it ;) I'll add: manage multiple servers, 
 deal with replication (using the replicated server as a 
 backup would be cool), manage binlogs (date and purge) and be 
 compatible with version 4.1 and above (I don't plan on using 
 the 5 version any time soon).
 
  --
  David Hillman
  LiveText, Inc
  1.866.LiveText x235
 
 
 
 

In addition, I'd like to see a configurable option for how often to take
full and/or incremental backups, and a mechanism to age the backups and
drop them after a certain amount of time.  For example, I want a simple
way to keep four weekly near-line backups each month, then age off and
keep one backup for each of the previous 11 months, and then just one
backup per year.  This would be about 1T of data for us.

It would then be really sweet to be able to say 'restore a full backup
of x database as of April 2, 2005 at 8:42 am' and have it create a new
instance on a user-defined port, then restore the closest previous full,
then apply the binlogs up to the correct point in time.

Thanks.

Regards,
Rich

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: novice on SQL

2006-05-08 Thread John Hicks

tony yau wrote:

Hi John,

tried your suggestion but I can't get it to work. This is because I don't
know how to set conditions in the following clauses (because there isn't
any)


and Table1.[condition for Changes1]
and Table2.[condition for Changes2]
and Table3.[condition for Changes3]


What values do you want for Changes1, Changes2, etc.? (How are you 
selecting for them.)


Post your SQL here if you need further help.

--J




the result I've got was similar to the following (note the ID is pkey of
another table)

ID    Changes1    Changes2    Changes3
-
 1    10.0        10.0        same as
 1    10.3        10.3
 1    12.2        12.2
 2    31.0        31.0
 3    1.02        1.02
 3    4.9         4.9

thanks for your help anyway

Tony

tony yau [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]

Hi John,

I didn't know you can do that! (such a novice indeed!)
Thank you for your reply, I will put it to the test first thing when i get
back to the office tomo.

Cheers

John Hicks [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]

tony yau wrote:

Hello,

I can get a select result like the following: (SELECT ID,Changes FROM
mytable WHERE somecondition;)

ID    Changes
-
1  10.0
1  10.3
1  12.2
2  31.0
3  1.02
3  4.9

how can I get the above result sets into the following format (columns
'Changes1','Changes2',... are all from 'Changes')

ID    Changes1    Changes2    Changes3    (limits of 5)
-
 1    10.0        10.3        12.2
 2    31.0
 3    1.02        4.9


I have got a method that works (I think) by first do a SELECT getting
DISTINCT id values and then foreach of these ID I do another SELECT to

get

the Changes values and then just massage the display.

Is there another way of doing this by using a single SQL query?

There may be a simpler way, but this should work:

select Table.ID,
Table1.Changes as Changes1,
Table2.Changes as Changes2,
Table3.Changes as Changes3

from Table,
Table as Table1,
Table as Table2,
Table as Table3

where Table.ID = Table1.ID
and Table.ID = Table2.ID
and Table.ID = Table3.ID

and Table1.[condition for Changes1]
and Table2.[condition for Changes2]
and Table3.[condition for Changes3]

order by table.ID


--J

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:

http://lists.mysql.com/[EMAIL PROTECTED]






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:

http://lists.mysql.com/[EMAIL PROTECTED]










--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Backups with MySQL/InnoDB

2006-05-08 Thread Logan, David (SST - Adelaide)
Hi Greg,

Maybe similar features to those of Bacula (my current backup software of
choice for my wife's business servers). This is a very comprehensive open
source solution that has many of the features requested below, e.g.
multiple servers, pooling, aging etc. It is a good example of what my
own requirements would be.

Is the intention to have a MySQL-type plugin? E.g. will it have an API
that is open to other backup solutions, so they can utilise what
will be written?

It would be nice to be able to utilise a standard XBSA solution giving
access to the database from any one of the major enterprise backup
solutions, eg. Legato Networker, Veritas Netbackup, HP Dataprotector
etc. etc. This would allow an enormously simple and straightforward
integration into many of the existing corporate solutions that exist
around the world.

Regards


---
** _/ **  David Logan 
***   _/ ***  ITO Delivery Specialist - Database
*_/*  Hewlett-Packard Australia Ltd
_/_/_/  _/_/_/    E-Mail: [EMAIL PROTECTED]
   _/  _/  _/  _/     Desk:   +618 8408 4273
  _/  _/  _/_/_/  Mobile: 0417 268 665
*_/   **
**  _/    Postal: 148 Frome Street,
   _/ **  Adelaide SA 5001
  Australia 
invent   
---

-Original Message-
From: Duzenbury, Rich [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 9 May 2006 6:39 AM
To: mysql@lists.mysql.com
Subject: RE: Backups with MySQL/InnoDB



 -Original Message-
 From: Daniel da Veiga [mailto:[EMAIL PROTECTED] 
 Sent: Monday, May 08, 2006 1:55 PM
 To: mysql@lists.mysql.com
 Subject: Re: Backups with MySQL/InnoDB
 
 On 5/8/06, David Hillman [EMAIL PROTECTED] wrote:
  On May 7, 2006, at 11:29 PM, Robert DiFalco wrote:
   Fast, incremental, compressed, and no max-size 
 limitations. Must be 
   transaction safe; able to run while transactions are going on 
   without including any started after the backup began; the 
 usual stuff.
 
  Incremental, transaction safe, compressed, fast, no-max-size.
  ( In order )
 
 
 Those are certainly the most important features (and I'll be 
 glad to beta-test it ;) I'll add: manage multiple servers, 
 deal with replication (using the replicated server as a 
 backup would be cool), manage binlogs (date and purge) and be 
 compatible with version 4.1 and above (I don't plan on using 
 the 5 version any time soon).
 
  --
  David Hillman
  LiveText, Inc
  1.866.LiveText x235
 
 
 
 

In addition, I'd like to see a configurable option for how often to take
a full and or incremental backups, a mechanism to age the backups and
drop them after a certain amount of time.  For example, I want a simple
way to keep four weekly near line backups each month, then age off and
keep one backup for each of the previous 11 months, and then just one
backup per year.  This would be about 1T of data for us.

It would then be really sweet to be able to say 'restore a full backup
of x database as of April 2, 2005 at 8:42 am' and have it create a new
instance on a user defined port, then restore the closest previous full,
then apply the binlogs up to the correct point in time.

Thanks.

Regards,
Rich

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Unidata to Mysql

2006-05-08 Thread David Logan
Hi Brett,

Which version of Unidata? I doubt very much that the Migration Toolkit
would assist with this.

You will probably have to re-normalise the data due to the multi-value
aspects of the Unidata/Universe database. This would probably require the
addition of several more tables to cope (dependent on the original design
of the database).

Regards


 Has anyone converted from Unidata db to Mysql?   How easy/difficult
 is it to do?  Does the Mysql Migration toolkit help with that process?

 an old consulting company setup a website with Unidata and perl... we
 want to convert to mysql...

 Thanks!

 --
 
 /Brett C. Harvey;
 /Creative-Pages.Net, President;
 /Facility Management Systems, CTO (www.fmsystems.biz);
 /Lasso Partner Association Member ID #LPA135259
 (www.omnipilot.com/www.lassopartner.com);
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]




-- 
David Logan
South Australia

when in trouble, or in doubt
run in circles, scream and shout
run in circles, scream and shout


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Unidata to Mysql

2006-05-08 Thread Brett Harvey

Hello David,

Thanks for the response.  I don't know which version yet.  I just 
started a month ago with the company and am just starting on this 
project.  I will find out.  The site has not been updated in 5 years 
though... so the Unidata database must be at least 6 years old.


Brett

At 06:57 AM +0930 05/09/06, David Logan wrote:

Hi Brett,

Which version of Unidata? I doubt very much if the migration toolkit would
assist with this.

You will probably have to re-normalise the data due to the multi-value
aspects of the Unidata/Universe database. This would probably require the
addition of several more tables to cope (dependent on the original design
of the database).

Regards


--

/Brett C. Harvey;
/Creative-Pages.Net, President;
/Facility Management Systems, CTO (www.fmsystems.biz);
/Lasso Partner Association Member ID #LPA135259 
(www.omnipilot.com/www.lassopartner.com);



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Streaming LOB Data

2006-05-08 Thread Jeremy Cole

Hi Robert,


Anyone know for sure if the memory needed to insert a LOB is a
percentage of the system's available memory or if it is allocated from
the innodb_buffer_pool_size? IOW, how should my configuration settings
be modified to allow the insertion of larger blobs? :)


The majority of the memory needed for that operation will come from the 
system's available memory.  How much memory it will consume will depend 
somewhat on how the query is sent over.


You should count on at least 2x the size of the blob being needed on the 
server for a query of the form:


  INSERT INTO tbl (id, image) VALUES (<id>, <blob data>);

The original query will be stored in its original form, and the binary 
data will be stored in its parsed and unescaped form.


The storage engine may still make yet another copy of it, but I'm not 
sure that InnoDB does.  I suspect it does not.


One thing you can do to save some of the memory is to run it as:

  INSERT INTO tbl (id, image) VALUES (<id>, LOAD_FILE(<filename>));

This of course would require that you have the file available on the 
MySQL server to load in.  LOAD_FILE() will return the contents of the file.


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Streaming LOB Data

2006-05-08 Thread Rick James
Related inequalities:
Given a blob of N bytes:

max_allowed_packet > N
innodb_log_file_size > 10 * N   (if InnoDB)

And maybe issues with
  bulk_insert_buffer_size
  innodb_log_buffer_size
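
A hedged worked example: for a single 50 MB blob (N = 50M), comfortable
values satisfying the inequalities above would be:

  max_allowed_packet   >= 64M    (> N      = 50M)
  innodb_log_file_size >= 512M   (> 10 * N = 500M)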


 -Original Message-
 From: Jeremy Cole [mailto:[EMAIL PROTECTED] 
 Sent: Monday, May 08, 2006 2:55 PM
 To: Robert DiFalco
 Cc: Sergei Golubchik; [EMAIL PROTECTED]; mysql@lists.mysql.com
 Subject: Re: Streaming LOB Data
 
 Hi Robert,
 
  Anyone know for sure if the memory needed to insert a LOB is a
  percentage of the system's available memory or if it is 
 allocated from
  the innodb_buffer_pool_size? IOW, how should my 
 configuration settings
  be modified to allow the insertion of larger blobs? :)
 
 The majority of the memory needed for that operation will 
 come from the 
 system's available memory.  How much memory it will consume 
 will depend 
 somewhat on how the query is sent over.
 
 You should count on at least 2x the size of the blob being 
 needed on the 
 server for a query of the form:
 
   INSERT INTO tbl (id, image) VALUES (<id>, <blob data>);
 
 The original query will be stored in its original form, and 
 the binary 
 data will be stored in its parsed and unescaped form.
 
 The storage engine may still make yet another copy of it, but I'm not 
 sure that InnoDB does.  I suspect it does not.
 
 One thing you can do to save some of the memory is to run it as:
 
   INSERT INTO tbl (id, image) VALUES (<id>, LOAD_FILE(<filename>));
 
 This of course would require that you have the file available on the 
 MySQL server to load in.  LOAD_FILE() will return the 
 contents of the file.
 
 Regards,
 
 Jeremy
 
 -- 
 Jeremy Cole
 MySQL Geek, Yahoo! Inc.
 Desk: 408 349 5104
 
 -- 
 MySQL Internals Mailing List
 For list archives: http://lists.mysql.com/internals
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Outfile syntax and out of memory

2006-05-08 Thread Jeremy Cole

Hi Johan,


I need to extract some data to a textfile from a big database.


If I try to do like this:
mysql < queryfile.sql > outfile.txt

outfile.txt it looks something like:
OrderID, Quant, OrdrDate, code1, code2...
10021, 12, 20060412, 23, 95...
10022, 5, 20060412, , 75...

But I never get a complete file. I get an out-of-memory error after an 
hour or two!!


This is because the mysql client program is trying to read the entire 
result into its own memory.  Try adding the '-q' option to mysql:


  mysql -q < queryfile.sql > outfile.txt
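
If the output file can live on the MySQL server host instead, the 
result can also be written without passing through the client at all; 
a sketch using the column names from the sample above and a 
hypothetical table name:

  SELECT OrderID, Quant, OrdrDate, code1, code2
    INTO OUTFILE '/tmp/outfile.txt'
    FIELDS TERMINATED BY ', '
    LINES TERMINATED BY '\n'
    FROM orders;

This requires the FILE privilege, and the target file must not already 
exist.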

Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: updates during database dump

2006-05-08 Thread Jeremy Cole

Hi Luke,

When mysql is doing a dump, do the updates that happen during the dump 
get included in the dump?


I assume you mean 'mysqldump'.

I have a dump that starts at 11pm and goes for 2 hours. If someone 
updates data at say 11:45pm, does that update get included in the dump?


When does the window close on what gets included in the dump?


By default, mysqldump doesn't do anything to ensure a consistent backup. 
 If you want one, you have to specify an option to get one, which will 
depend on which storage engines you're using.


For InnoDB: Use the --single-transaction option.  The window closes when 
the dump starts.  Users in other transactions/sessions will still be 
able to write, but you won't see their writes in this transaction.


For MyISAM: Use the --lock-tables option.  The window closes when the 
dump starts.  Users won't be able to write at all, to any tables being 
dumped, while the dump is running.


In general, you want a consistent snapshot of all tables from the same 
point in time, and you will want to use one of the above options to get it.
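
For illustration, the corresponding invocations look something like 
this (the database name is hypothetical):

  # InnoDB: consistent snapshot; other sessions keep writing
  mysqldump --single-transaction mydb > mydb-dump.sql

  # MyISAM: read-lock the tables for the duration of the dump
  mysqldump --lock-tables mydb > mydb-dump.sql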


Regards,

Jeremy

--
Jeremy Cole
MySQL Geek, Yahoo! Inc.
Desk: 408 349 5104

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Case confusion

2006-05-08 Thread Marcus Bointon
I just noticed that a key field (emailaddress) in my db is case  
sensitive when it should not have been, so now I've got a bunch of  
what are effectively duplicate records. I'm having trouble picking  
them out so I can manually merge/delete them before changing the  
collation on the field to be case insensitive.


SELECT * FROM mytable group by lower(emailaddress) having 
count(emailaddress) > 1


This is ok, but it only shows me the records with lower case 
addresses (I can't tell which case version is the correct one without 
looking at them), whereas I want to see the records in all case 
variants, that is, all of '[EMAIL PROTECTED]', '[EMAIL PROTECTED]' and 
'[EMAIL PROTECTED]'. I'm confusing myself with the case sensitivity 
and self-references!


I think there are about 45 duplicates out of about 200,000.

How can I find these pesky things?

thanks,

Marcus
--
Marcus Bointon
Synchromedia Limited: Putting you in the picture
[EMAIL PROTECTED] | http://www.synchromedia.co.uk


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Backups with MySQL/InnoDB

2006-05-08 Thread Greg 'groggy' Lehey
On  Sunday, May 07, 2006 6:14 PM, Greg 'groggy' Lehey wrote:
 On Sunday,  7 May 2006 at  9:27:31 -0700, Robert DiFalco wrote:
 What are people doing for backups on very large MySQL/InnoDB
 databases?  Say for databases greater than 200 GB. Curious about
 the backup methods, procedures, and frequency.

 A second question, but not for the first time: how would you *like* to
 do backups if you had the choice?  We're currently in the final stages
 of the design of an online backup solution, and in the near future I'll
 publish the specs.  I won't mention them now to avoid influencing you,
 but now's the time to speak up if you want something specific.

On Monday,  8 May 2006 at  8:15:17 -0700, paul rivers wrote:

 I would suggest looking at the functionality of Microsoft SQL Server
 or Sybase backups.  It's extremely nice from an admin point of view,
 and certainly covers all of what Robert mentions.

Yes, from an administrative perspective we're trying to make something
that feels intuitive, and particularly the Microsoft approach seems
a good starting point for this aspect.  If you have a pet feature not
discussed below, let me know.

On  Sunday, May 07, 2006 9:30 PM, Robert DiFalco wrote:

 Fast, incremental, compressed, and no max-size limitations. Must be
 transaction safe; able to run while transactions are going on
 without including any started after the backup began; the usual
 stuff.

We're certainly planning incremental backups, but they probably won't
be in the first release.  We don't plan any size limitations (this is
a streaming backup), and it will be transaction-safe (statement-safe
for MyISAM) and online (i.e. concurrently with normal processing).

Compression is a different issue.  We haven't considered it so far,
and though it's desirable, I don't see why we can't get an external
program to do this (bzip2 or gzip, for example; the choice depends on
your personal tradeoffs between time and space).
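
As a sketch of that layering with today's tools (a logical dump as a 
stand-in for the planned backup stream, gzip for compression, and a 
hypothetical database name):

  mysqldump --single-transaction mydb | gzip > mydb-backup.sql.gz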

On Monday,  8 May 2006 at 15:55:07 -0300, Daniel da Veiga wrote:
 On 5/8/06, David Hillman wrote:
 On May 7, 2006, at 11:29 PM, Robert DiFalco wrote:
 Fast, incremental, compressed, and no max-size limitations. Must be
 transaction safe; able to run while transactions are going on without
 including any started after the backup began; the usual stuff.

Incremental, transaction safe, compressed, fast, no-max-size.
 (In order)


 Those are certainly the most important features (and I'll be glad to
 beta-test it ;) I'll add: manage multiple servers, deal with
 replication (using the replicated server as a backup would be cool),
 manage binlogs (date and purge) and be compatible with version 4.1 and
 above (I don't plan on using version 5 any time soon).

The component we're working on at the moment is the streaming online
backup API.  Basically you issue an SQL command BACKUP DATABASE, and
it outputs a data stream that you can point at your tape drive, at a
disk, or across the network to something like VERITAS.  We're very
conscious of the multiple server issue, but it's going to have to wait
until we can back up one server properly.  Dealing with replication is
a special case of multiple servers, so that will wait too.  We will
back up the binlog, though, and our current thinking is to use it for
incremental backups, but this may change.

On Monday,  8 May 2006 at 16:09:23 -0500, Rich Duzenbury wrote:

 In addition, I'd like to see a configurable option for how often to
 take a full and or incremental backups, a mechanism to age the
 backups and drop them after a certain amount of time.  For example,
 I want a simple way to keep four weekly near line backups each
 month, then age off and keep one backup for each of the previous 11
 months, and then just one backup per year.  This would be about 1T
 of data for us.

This is also another aspect of the backup solution we're working on.
I'll put it down on the wish list.

On Tuesday,  9 May 2006 at  7:18:28 +1000, David Logan wrote:
 Hi Greg,

 Maybe similar features to those of Bacula (my current backup software of
 choice for my wife's business servers). This is a very comprehensive open
 source solution that has many of the features requested below. eg.
 multiple servers, pooling, aging etc. It is a good example of what my
 own requirements would be.

I don't know Bacula, but I suppose I should investigate it.  Do you
know anybody in the project?

 Is the intention to have a MySQL type plugin? eg. will it have an
 api that will be open to other backup solutions being able to
 utilise what will be written?

Yes, this is very much the intention.  It's the API that we're
defining now.  We've been talking to Zmanda (http://www.zmanda.com/),
who are interested in extending amanda with MySQL plugins, and we'd be
more than happy for others to join in.

 It would be nice to be able to utilise a standard XBSA solution giving
 access to the database from any one of the major enterprise backup
 solutions, eg. Legato Networker, Veritas Netbackup, HP 

Re: Case confusion

2006-05-08 Thread Chris



I would run this query:

SELECT *
FROM mytable
WHERE LOWER(emailaddress) IN
    (SELECT LOWER(emailaddress)
     FROM mytable
     GROUP BY 1
     HAVING COUNT(emailaddress) > 1)

This would show all duplicate emails, I would use the info this displays 
to choose which records to change/keep/delete.


May not be the best way, but it would work.
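
A sketch of an alternative without the subquery: a self-join on the 
lowercased address over the same hypothetical table, which pairs up 
the differing case variants directly:

  SELECT a.emailaddress, b.emailaddress
  FROM mytable a
  JOIN mytable b
    ON  LOWER(a.emailaddress) = LOWER(b.emailaddress)
    AND a.emailaddress < b.emailaddress;

Since the column is currently case sensitive, the < comparison keeps 
each pair from being listed twice.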

Chris

Marcus Bointon wrote:
[full text of Marcus's message, quoted verbatim above, snipped]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How to convert strings to 'proper case' ?

2006-05-08 Thread C.R.Vegelin
Hi List,

I want to convert strings to proper-case,
where only the 1st char of each word is uppercase,
such as: This Is An Example.
Any idea how to do this with MySQL 5.0.15?

Thanks, Cor
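
MySQL has no built-in proper-case function, so one possible approach, 
sketched here as a small stored function (the name proper_case is 
hypothetical), usable on 5.0:

  DELIMITER //
  CREATE FUNCTION proper_case(str TEXT) RETURNS TEXT
  DETERMINISTIC
  BEGIN
    DECLARE result TEXT DEFAULT '';
    DECLARE i INT DEFAULT 1;
    DECLARE c CHAR(1);
    DECLARE prev CHAR(1) DEFAULT ' ';
    -- walk the string: uppercase any character that follows a space,
    -- lowercase everything else
    WHILE i <= CHAR_LENGTH(str) DO
      SET c = SUBSTRING(str, i, 1);
      IF prev = ' ' THEN
        SET result = CONCAT(result, UPPER(c));
      ELSE
        SET result = CONCAT(result, LOWER(c));
      END IF;
      SET prev = c;
      SET i = i + 1;
    END WHILE;
    RETURN result;
  END//
  DELIMITER ;

  SELECT proper_case('this is an example');  -- This Is An Example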

Re: Need help in recreating .MYD files

2006-05-08 Thread balaraju mandala

Hi Dilip,

It means I lost the data, correct, Dilip? Is there any other way to 
recover that data, such as from binary logs?

regards,
bala
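
For reference, if binary logging was enabled on the server, writes can 
be replayed from the binlogs with mysqlbinlog; a sketch with a 
hypothetical log file name:

  # replay the contents of a binlog against the server
  mysqlbinlog /var/lib/mysql/host-bin.000001 | mysql -u root -p

  # or replay only a time window
  mysqlbinlog --start-datetime="2006-05-01 00:00:00" \
      /var/lib/mysql/host-bin.000001 | mysql -u root -p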