select recipe_id,max(maxdatetime) from data_csmeta group by recipe_id
having recipe_id=19166;
On Mon, Sep 23, 2013 at 4:15 PM, shawn green wrote:
> Hi Larry,
>
>
> On 9/23/2013 3:58 PM, Larry Martell wrote:
>
>> On Mon, Sep 23, 2013 at 1:51 PM, Sukhjinder K. Narula
>> wrote:
>>
>> Hi,
>>>
>>> I
If you have LVM, then the lock is held only for the duration of taking the
snapshot, which would be a few minutes if there is very little activity on the db.
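A minimal sketch of that flow, assuming the datadir sits on a hypothetical LVM
volume /dev/vg0/mysql; the mysql client's "system" command runs the shell
command while the session (and therefore the lock) stays open:
mysql> FLUSH TABLES WITH READ LOCK;
mysql> system lvcreate --snapshot --size 5G --name mysql_snap /dev/vg0/mysql
mysql> UNLOCK TABLES;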
On Wed, Aug 28, 2013 at 3:08 PM, Ed L. wrote:
> On 8/28/13 2:00 PM, Ananda Kumar wrote:
>
>
> Why don't u try snapshot backup
Why don't you try snapshot backups, where the lock is held for a shorter duration? Or
can't you take mysqldumps at night when there is less db activity?
On Thursday, August 29, 2013, Ed L. wrote:
>
> Mysql newbie here, looking for some help configuring 5.0.45 master-slave
replication. Here's my sce
oomsToSell',4,4, NOW());
> SELECT * FROM tempHotelRateAvailability;
>
>
> On Wed, May 29, 2013 at 2:57 PM, Ananda Kumar wrote:
>
>> did u check if data is getting inserted into tempHotelRateAvailability
>>
>>
>> On Wed, May 29, 2013 at 7:21 PM, Nei
s call in the Trigger and change a value in the table
> it works fine;
>
> INSERT INTO AuditTrail
> (AuditTrailId,UserId,ActionType,TableName,RowKey,FieldName,OldValue,NewValue,
> LoggedOn)
> VALUES (UUID(),1,'UPDATE','HotelRateAvailability', 1,'RoomsToSell',
Can you please share the code of the trigger, and any error you are getting?
On Wed, May 29, 2013 at 6:49 PM, Neil Tompkins wrote:
> Hi,
>
> I've a trigger that writes some data to a temporary table; and at the end
> of the trigger writes all the temporary table data in one insert to our
> norm
Does your query use proper indexes?
Does your query scan a small number of blocks/rows?
Can you share the explain plan of the SQL?
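For reference, the plan comes from prefixing the statement with EXPLAIN; the
table and column names below are placeholders:
mysql> EXPLAIN SELECT id FROM your_table WHERE your_col = 'x'\G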
On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich <
ilya.kazakev...@jetbrains.com> wrote:
> Hello,
>
> I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when it r
rsh
> Stefan
>
>
> On Wed, Mar 13, 2013 at 8:28 PM, Johan De Meersman wrote:
>
> > --
> >
> > *From: *"Ananda Kumar"
> > *Subject: *Re: Retrieve most recent of multiple rows
> >
> >
> >
> > select qid,max(atimestamp) from
not all the rows, only the distinct q_id,
On Wed, Mar 13, 2013 at 8:28 PM, Johan De Meersman wrote:
> --
>
> *From: *"Ananda Kumar"
> *Subject: *Re: Retrieve most recent of multiple rows
>
>
>
> select qid,max(atimestamp) from kkk
Can you please share the SQL that you executed to fetch the above data?
On Wed, Mar 13, 2013 at 7:19 PM, Johan De Meersman wrote:
> - Original Message -
> > From: "Norah Jones"
> > Subject: Retrieve most recent of multiple rows
> >
> > 4 10 Male 3 1363091019
>
select * from tab where answer_timestamp in (select max(answer_timestamp)
from tab where q_id in (select distinct q_id from tab) group by q_id);
On Wed, Mar 13, 2013 at 6:48 PM, Norah Jones wrote:
> I have a table which looks like this:
>
> answer_id q_id answer qscore_id answer_timestamp
)
11 13-MAR-13 02.04.04.00 PM
10 13-MAR-13 02.03.36.00 PM
12 13-MAR-13 02.03.48.00 PM
On Wed, Mar 13, 2013 at 7:28 PM, Ananda Kumar wrote:
> can you please share the sql that you executed to fetch the above d
You can use a checksum to make sure there is no corruption in the file.
On Wed, Nov 7, 2012 at 6:39 PM, Claudio Nanni wrote:
> Gary,
>
> It is always a good practice to test the whole solution backup/restore.
> So nothing is better than testing a restore, actually it should be a
> periodic procedu
Why don't you create a softlink?
On Tue, Oct 30, 2012 at 11:05 PM, Tim Johnson wrote:
> * Reindl Harald [121030 08:49]:
> > >The drupal mysql datafiles are located at
> > > /Applications/drupal-7.15-0/mysql/data
> > >
> > > as opposed to /opt/local/var/db/mysql5 for
> > > 'customary' mysql.
> >
> >
ke on any other unix machine.
>
> how did i connect mysql to what exactly?
>
>
>
> On 10/18/12 6:42 AM, Ananda Kumar wrote:
>
>> how did u connect mysql on your laptop
>>
>> On Thu, Oct 18, 2012 at 1:19 AM, kalin <ka...@el.net> wrote:
; but i still don't get the necessity of "local". i have never used it
> before.
>
> this is all on os x - 10.8.2...
>
>
>
>
> On 10/17/12 1:25 PM, Ananda Kumar wrote:
>
>> also try using "load data local infile 'file path' and see if
also try using "load data local infile 'file path' and see if it works
On Wed, Oct 17, 2012 at 10:52 PM, Ananda Kumar wrote:
> does both directory have permission "777"
>
>
> On Wed, Oct 17, 2012 at 9:27 PM, Rick James wrote:
>
>> SELinux ?
Do both directories have permission "777"?
On Wed, Oct 17, 2012 at 9:27 PM, Rick James wrote:
> SELinux ?
>
> > -Original Message-
> > From: Lixun Peng [mailto:pengli...@gmail.com]
> > Sent: Tuesday, October 16, 2012 9:03 PM
> > To: kalin
> > Cc: Michael Dykman; mysql@lists.mysql.com
> >
> I have also gone through the firewall settings and that is only rules for
> connections.
>
>
>
>
>
> On 09/10/2012 02:40 PM, Ananda Kumar wrote:
>
> did u check if there any firewall settings, forbidding you to create
> files, check if " SELinux is disable
> we have even tried to create a temp table with only one field in order
> to insert one row for testing, but we are currently not able to create any
> temporary tables whatsoever as even the simplest form of table still gives
> the same error.
>
> Regards
>
>
>
How many rows will this temp table hold, and what would be its size?
On Mon, Sep 10, 2012 at 5:03 PM, Machiel Richards - Gmail <
machiel.richa...@gmail.com> wrote:
> Hi,
> We confirmed that the /tmp directory permissions is set to rwxrwxrwxt
> and is owned by root , the same as all our other serv
Start with 500MB and try.
On Mon, Sep 10, 2012 at 3:31 PM, Machiel Richards - Gmail <
machiel.richa...@gmail.com> wrote:
> Hi, the sort_buffer_size was set to 8Mb as well as 32M for the session
> (currently 1M) and retried with same result.
>
>
>
>
>
> On 09/10/201
other transactions overwrite the info, or there is nothing logged.
>
> We even tried running the create statement and immediately running
> Show innodb status, but nothing for that statement.
>
> Regards
>
>
>
>
>
> On 09/10/2012 11:05 AM, Ananda Kumar wrote:
Try this command and see if you can get more info about the error:
show innodb status\G
On Mon, Sep 10, 2012 at 2:25 PM, Machiel Richards - Gmail <
machiel.richa...@gmail.com> wrote:
> Hi All
>
> I am hoping someone can point me in the right direction.
>
> We have a mysql 5.0 database whi
If the server is offline, what kind of operations happen on it?
On Thu, Aug 2, 2012 at 11:31 AM, Pothanaboyina Trimurthy <
skd.trimur...@gmail.com> wrote:
> Hi everyone
> i have 4 mysql servers out of those one server will
> be online always and the remaining will be offline and
> > On Mon, Jul 23, 2012 at 8:17 PM, walter harms wrote:
> >
> >>
> >>
> >> Am 23.07.2012 16:37, schrieb Ananda Kumar:
> >>> why dont u setup a staging env, which is very much similar to your
> >>> production and tune all long running sql
So it's more about inactive connections, right?
What do you mean by NEVER LOGOUT?
On Mon, Jul 23, 2012 at 8:17 PM, walter harms wrote:
>
>
> Am 23.07.2012 16:37, schrieb Ananda Kumar:
> > why dont u setup a staging env, which is very much similar to your
> > production and tune
Why don't you set up a staging env, which is very much similar to your
production, and tune all long-running SQL?
On Mon, Jul 23, 2012 at 8:02 PM, walter harms wrote:
>
>
> Am 23.07.2012 16:10, schrieb Ananda Kumar:
> > you can check the slow query log, this will give you all the sql&
You can check the slow query log; this will give you all the SQLs which
are taking more time to execute.
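On 5.1+ it can be switched on in my.cnf roughly like this; the file path and
the 2-second threshold are just examples:
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2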
On Mon, Jul 23, 2012 at 7:38 PM, walter harms wrote:
>
>
> Am 23.07.2012 15:47, schrieb Ananda Kumar:
> > you can set this is in application server.
> > You can
You can set this in the application server.
You can also set this parameter in my.cnf:
wait_timeout=120 (in seconds)
But the above parameter only applies to inactive sessions.
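It can also be changed at runtime without a restart; note it only applies to
connections opened afterwards:
SET GLOBAL wait_timeout = 120;  -- seconds; interactive clients follow interactive_timeout instead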
On Mon, Jul 23, 2012 at 6:18 PM, walter harms wrote:
> Hi list,
> is there a switch where i can restrict the connect/execution t
SQL> select * from orddd;
ORDERID PRODID
-- --
2 5
1 3
1 2
2 7
1 5
SQL> select prodid,count(*) from orddd group by PRODID having count(*) > 1;
PRODID COUNT(*)
-- --
The column used in the ORDER BY clause should be the leading column of an
index for the index to be usable for the sort.
On Wed, Jul 11, 2012 at 3:16 PM, Reindl Harald wrote:
>
>
> Am 11.07.2012 11:43, schrieb Ewen Fortune:
> > Hi,
> >
> > On Wed, Jul 11, 2012 at 10:31 AM, Reindl Harald
> wrote:
> >> the m
You are using a function, LOWER(), which will not make use of the unique key
index on ksd.
MySQL does not support function-based indexes, hence your query is doing a
FULL TABLE scan and taking more time.
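A common workaround on 5.x, sketched against the books/ksd names from this
thread (the VARCHAR size is an assumption): keep a lowercased copy in its own
indexed column and query that instead.
ALTER TABLE books ADD COLUMN ksd_lower VARCHAR(255);
UPDATE books SET ksd_lower = LOWER(ksd);
CREATE INDEX idx_ksd_lower ON books (ksd_lower);
-- then: SELECT ... FROM books WHERE ksd_lower = LOWER('SomeValue');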
On Tue, Jul 10, 2012 at 4:46 PM, Darek Maciera wrote:
> 2012/7/10 Ananda Kumar :
> > c
Can you show the explain plan for your query?
On Tue, Jul 10, 2012 at 2:59 PM, Darek Maciera wrote:
> Hello,
>
> I have table:
>
> mysql> DESCRIBE books;
>
> | id | int(255) | NO | PRI | NULL | auto_increment |
> | idu
Looks like the value that you gave for myisam_max_sort_file_size is not enough
for the index creation, hence it is doing a "REPAIR WITH KEYCACHE".
Use the query below to find the minimum value required for
myisam_max_sort_file_size to avoid "repair with keycache":
select
a.index_name as index_name,
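Whatever that query works out, the two variables involved can also simply be
raised by hand before the index build; the sizes below are arbitrary examples:
SET GLOBAL myisam_max_sort_file_size = 50*1024*1024*1024;  -- allow large temp sort files
SET SESSION myisam_sort_buffer_size = 256*1024*1024;       -- 256MB sort buffer for this session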
mysqldump --databases test --tables ananda > test.dmp
mysql> show create table ananda\G;
*** 1. row ***
Table: ananda
Create Table: CREATE TABLE `ananda` (
`id` int(11) DEFAULT NULL,
`name` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT
I have MySQL 5.5.
I am able to use mysqldump to export data with quotes, and the dump has the
escape character, as seen below:
LOCK TABLES `ananda` WRITE;
/*!40000 ALTER TABLE `ananda` DISABLE KEYS */;
INSERT INTO `ananda` VALUES
(1,'ananda'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3,'thims'),(2,'aditi'),(3
Did you try using "IGNORE" keyword while using the LOAD DATAFILE command.
This will ignore duplicate rows from getting inserted and proceed further.
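A sketch, with placeholder file, table and terminator values:
LOAD DATA INFILE '/path/to/data.csv'
IGNORE INTO TABLE mytable
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';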
On Fri, Jun 15, 2012 at 11:05 AM, Keith Keller <
kkel...@wombat.san-francisco.ca.us> wrote:
> On 2012-06-14, Gary Aitken wrote:
> >
> > So... I wa
> xl
>
> ref
>
> idx_unique_key_ib_xml\,index_message_id
>
> idx_unique_key_ib_xml
>
> 153
>
> reports.pl.Message_Id
>
> 1
>
> Using where
>
>
> Sorry for the previous mail... this is my execution plan
t. In my database I am having 8 innodb tables and at the same time
> I am joining 4 tables to get the report.
>
> I am maintaining 60days records because the user will try to generate the
> report out of 60 days in terms of second, minute, hourly, weekly and
> Monthly report also.
>
Did you try with MyISAM tables?
They are supposed to be good for reporting requirements.
On Wed, Jun 13, 2012 at 11:52 PM, Rick James wrote:
> I'll second Johan's comments.
>
> "Count the disk hits!"
>
> One minor change: Don't store averages in the summary table; instead
> store the SUM(). That
Is the iptables service running on the db server? If yes, try stopping it and
check.
On Wed, Jun 13, 2012 at 5:04 PM, Claudio Nanni wrote:
> 2012/6/13 Johan De Meersman
>
> >
> > - Original Message -
> > > From: "Claudio Nanni"
> > >
> > > @Johan, you say "I'm having trouble with clients abor
Or you can check the application logs to see why the client lost connectivity
from the app.
On Tue, Jun 12, 2012 at 5:12 PM, Ananda Kumar wrote:
> is there anything you can see in /var/log/messages
>
>
> On Tue, Jun 12, 2012 at 5:08 PM, Claudio Nanni wrote:
>
>> Johan,
>>
Is there anything you can see in /var/log/messages?
On Tue, Jun 12, 2012 at 5:08 PM, Claudio Nanni wrote:
> Johan,
>
> "Print out warnings such as Aborted connection... to the error log."
> the dots are not telling if they comprise Aborted clients as well.
> I find the MySQL error log extremely po
When you say redundancy,
do you just want replication like master-slave, which will be active-passive,
or
master-master, which will be active-active?
Master-slave will work just as DR: when your current master fails you can
fail over to the slave, with NO load balancing.
Master-master allows load balancing.
On Mo
Is the central database server just ONE server, to which all your 50 data
center apps connect?
On Thu, May 24, 2012 at 2:47 PM, Anupam Karmarkar
wrote:
> Hi All,
>
>
> I need architectural help for our requirement,
>
>
> We have nearly 50 data centre through out different cities from these data
>
Hi,
However much tuning you do in my.cnf, it will not help much if you do not
tune your SQLs.
Your first priority should be to tune SQLs, which will give you good
performance even with modest memory allocations and other settings.
regards
anandkl
On Wed, May 23, 2012 at 3:45 PM, Andrew Moore wrote:
Or it could be that your buffer size is too small, as MySQL is spending a lot
of CPU time compressing and uncompressing.
On Tue, May 22, 2012 at 5:45 PM, Ananda Kumar wrote:
> Is you system READ intensive or WRITE intensive.
> If you have enable compression for WRITE intensive data, then CP
Is your system READ intensive or WRITE intensive?
If you have enabled compression for WRITE-intensive data, then the CPU cost
will be higher.
On Tue, May 22, 2012 at 5:41 PM, Johan De Meersman wrote:
>
>
> - Original Message -
> > From: "Reindl Harald"
> >
> > interesting because i have here a d
Yes, Barracuda is limited to FILE_PER_TABLE.
Yes, true, there is a CPU cost, but a very small one.
To gain some you have to lose some.
On Tue, May 22, 2012 at 5:07 PM, Johan De Meersman wrote:
> --
>
> *From: *"Ananda Kumar"
>
>
> yes, there some
Yes, there are some new features you can use to improve performance.
If you are using MySQL 5.5 and above, with file-per-table, you can enable the
BARRACUDA file format, which in turn provides data compression
and the dynamic row format, which will reduce IO.
For more benefits read the docs.
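A minimal sketch on 5.5, assuming innodb_file_per_table is already on; the
table definition is only an example:
SET GLOBAL innodb_file_format = Barracuda;
CREATE TABLE t_compressed (
  id INT PRIMARY KEY,
  payload TEXT
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;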
On Tue, May 22, 20
E"
> > without real changes
> >
> > Am 22.05.2012 11:28, schrieb Kishore Vaishnav:
> > > Right now one tablespace datafile. But does it matters if i have one
> file
> > > per table.
> > >
> > > On Tue, May 22, 2012 at 2:56 PM, Ananda Kumar
On Tue, May 22, 2012 at 2:58 PM, Kishore Vaishnav
wrote:
> Right now one tablespace datafile. But does it matters if i have one file
> per table.
>
> *thanks & regards,
> __*
> Kishore Kumar Vaishnav
> *
> *
> On Tue, May 22, 2012 at 2:56 PM, Ananda K
Do you have one file per table, or just one system tablespace datafile?
On Tue, May 22, 2012 at 2:20 PM, Kishore Vaishnav
wrote:
> Thanks for the reply, but in my case the datafile is growing 1 GB per day
> with only 1 DB (apart from mysql / information_schema / test) and the size
> of the DB is jus
Why are you not using any WHERE condition in the UPDATE statement?
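For example, something along these lines (table and column names are
hypothetical); the BINARY cast makes the comparison case-sensitive, so only
the rows with uppercase ids are touched:
UPDATE users
SET user_id = LOWER(user_id)
WHERE BINARY user_id <> LOWER(user_id);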
On Wed, May 16, 2012 at 1:24 PM, GF wrote:
> Good morning,
> I have an application where the user ids were stored lowercase.
> Some batch import, in the user table some users stored a uppercase
> id, and for some applicative logic, in
I used to have these issues in mysql version 5.0.41.
On Mon, May 14, 2012 at 8:13 PM, Johan De Meersman wrote:
> - Original Message -
> > From: "Ananda Kumar"
> >
> > If numeric, then why are u using quotes. With quotes, mysql will
> > igno
r now dev team is
> updating the batch process from long secuencial process with huge slow
> inserts, to small parallel task with burst of inserts...
>
>
>
>
> On Mon, May 14, 2012 at 8:18 AM, Ananda Kumar wrote:
>
>> is accountid a number or varchar column
>>
Is accountid a number or a varchar column?
On Sat, May 12, 2012 at 7:38 PM, Andrés Tello wrote:
> While doning a batch process...
>
> show full processlist show:
>
> | 544 | prod | 90.0.0.51:51262 | tmz2012 | Query |6 |
> end | update `account` set `balance`= 0.00 +
>
Which version of MySQL are you using?
Is this a secondary index?
On Mon, May 7, 2012 at 12:07 PM, Zhangzhigang wrote:
> hi all:
>
> I have a question:
>
> Creating indexes after inserting massive data rows is faster than before
> inserting data rows.
> Please tell me why.
>
Do you just want to replace the current value in the client column with "NEW"?
You can write a stored proc with a cursor, loop through the cursor,
and update each table.
regards
anandkl
On Mon, Apr 30, 2012 at 2:47 PM, Pothanaboyina Trimurthy <
skd.trimur...@gmail.com> wrote:
> Hi all,
> i have one
CREATE PROCEDURE qrtz_purge()
BEGIN
DECLARE l_id BIGINT(20);
DECLARE NO_DATA INT DEFAULT 0;
DECLARE LST_CUR CURSOR FOR SELECT id FROM table_name WHERE id > 123;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1;
OPEN LST_CUR;
SET NO_DATA = 0;
FETCH LST_CUR INTO l_id;
WHILE NO_DATA = 0 DO
-- process l_id here (e.g. DELETE FROM table_name WHERE id = l_id);
-- the rest of the loop follows the standard cursor pattern
FETCH LST_CUR INTO l_id;
END WHILE;
CLOSE LST_CUR;
END
Why don't you create a new table where id < 2474,
rename the original table to "_old" and the new table to the actual table name.
Or
you can write a stored proc to loop through the rows and delete, which will
be faster.
Doing just a simple "delete" statement for deleting huge data will take
ages.
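A sketch of the first approach, using the id cutoff from the post; the table
name is hypothetical:
CREATE TABLE mytable_new LIKE mytable;
INSERT INTO mytable_new SELECT * FROM mytable WHERE id < 2474;
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;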
regards
hours when I do use LOCK TABLES.
>
> -Hank
>
>
>
> On Thu, Sep 22, 2011 at 2:18 PM, Ananda Kumar wrote:
>
>> May be if u can let the audience know a sip-net of ur sql, some can help u
>>
>>
>> On Thu, Sep 22, 2011 at 11:43 PM, Hank wrote:
>>
>>
Your outer query "select cpe_mac,max(r3_dt) from rad_r3cap", is doing a full
table scan, you might want to check on this and use a "WHERE" condition to
use indexed column
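For instance, something like this; the date bound and the existence of an
index on r3_dt are assumptions:
SELECT cpe_mac, MAX(r3_dt)
FROM rad_r3cap
WHERE r3_dt >= '2011-09-01'  -- lets an index on r3_dt limit the scan
GROUP BY cpe_mac;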
On Fri, Sep 23, 2011 at 12:14 AM, supr_star wrote:
>
>
> I have a table with 24 million rows, I need to figure out how to op
Maybe if you let the audience see a snippet of your SQL, someone can help you.
On Thu, Sep 22, 2011 at 11:43 PM, Hank wrote:
>
> Sorry, but you do not understand my original issue or question.
>
> -Hank
>
>
>
> On Thu, Sep 22, 2011 at 2:10 PM, Ananda Kumar wrote:
>
mmit.
>
>
>
>
> On Thu, Sep 22, 2011 at 1:48 PM, Ananda Kumar wrote:
>
>> Hi,
>> Why dont u use a stored proc to update rows ,where u commit for every 1k
>> or 10k rows.
>> This will be much faster than ur individual update stmt.
>>
>> regards
Hi,
Why don't you use a stored proc to update rows, where you commit every 1k or
10k rows?
This will be much faster than your individual update statements.
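A rough sketch of such a proc, with hypothetical table/column names,
committing every 10k rows:
CREATE PROCEDURE batch_update()
BEGIN
  DECLARE n INT DEFAULT 1;
  WHILE n > 0 DO
    -- each pass fixes up to 10k rows; updated rows no longer match the WHERE
    UPDATE big_table SET status = 'done' WHERE status = 'pending' LIMIT 10000;
    SET n = ROW_COUNT();  -- rows touched by the UPDATE above
    COMMIT;               -- with autocommit off, releases locks/undo per batch
  END WHILE;
END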
regards
anandkl
On Thu, Sep 22, 2011 at 8:24 PM, Hank wrote:
> That is what I'm doing. I'm doing a correlated update on 200 million
> records. One U
or u can use "for loop", have only the database to be exported and use that
variable in --database and do mysqldump of each database.
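A sketch of that loop in the shell; credentials are omitted and the excluded
system schemas are just an example:
for db in $(mysql -N -e 'SHOW DATABASES' | grep -vE '^(information_schema|performance_schema|mysql)$'); do
    mysqldump --databases "$db" > "$db.dmp"
done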
On Thu, Sep 15, 2011 at 6:27 PM, Carsten Pedersen wrote:
> On 15-09-2011 10:31, Chris Tate-Davies wrote:
>
>> Adarsh,
>>
>> 1)
>>
>> When restoring a mysqldump you
umber of rows you cite, but it works beautifully and it is quick as
> lightning.
>
> HTH,
> Arthur
>
>
> On Wed, Sep 14, 2011 at 9:24 AM, Ananda Kumar wrote:
>
>> Dr. Doctor,
>> What kind of 10 entries? Is it insert,update delete etc.
>>
>> regards
Dr. Doctor,
What kind of 10 entries? Are they inserts, updates, deletes, etc.?
regards
anandkl
On Wed, Sep 14, 2011 at 6:30 PM, The Doctor wrote:
> Question:
>
> How can you optimise MySQL for 10 entires?
>
> Just running OSCemmerce and it is slow to pull up a who catalogue.
>
> --
> Member - Libe
Can you let us know the output of:
select * from user_info where user_id=16078845;
On Thu, Sep 8, 2011 at 1:02 PM, umapathi b wrote:
> I wanted to change the login_date of one user . The original data of that
> user is like this ..
>
> select * from user_info where user_id = 16078845 \G
Is this a production setup?
If not, take a complete dump of all the databases.
Drop the xYZ database and see if you can see all the objects under XYZ.
Since the xYZ database was created, it is obvious that names are case
sensitive, and it should not show objects from XYZ when you are under xYZ.
Can you please
> On Thu, Feb 10, 2011 at 02:55, Ananda Kumar wrote:
>
>> there is -p option please used that.
>>
>> > On Thu, Feb 10, 2011 at 12:47 PM, Johan De Meersman wrote:
>>
>> > On Thu, Feb 10, 2011 at 7:43 AM, Adarsh Sharma <
>> adarsh.sha...@orkash.com
There is a -p option; please use that.
On Thu, Feb 10, 2011 at 12:47 PM, Johan De Meersman wrote:
> On Thu, Feb 10, 2011 at 7:43 AM, Adarsh Sharma wrote:
>
> > I am researching all the ways to backup in mysql and donot able to find a
> > command that take individual backup of only one procedure i
What does "show create table teste2" show?
2011/1/31 João Cândido de Souza Neto
> Please, give us some information about your server.
>
> --
> João Cândido de Souza Neto
>
> ""M. Rodrigo Monteiro"" escreveu na mensagem
> news:AANLkTikw2rDzhZU2+SmVeiPnVCYB-Q=vce5nufa7o...@mail.gmail.com...
>
Pito,
Can you show us the InnoDB parameters in the my.cnf file?
regards
anandkl
On Sat, Jan 8, 2011 at 10:31 PM, Pito Salas wrote:
> I am very new to trying to solve a problem like this and have searched
> and searched the web for a useful troubleshooting guide but I am
> honestly stuck. I wonde
, removal of logs
> and restarting the database.
>
> Thanks in advance
>
>
> Thanks,
> Sairam Krishnamurthy
> +1 612 859 8161
>
>
> On 12/06/2010 04:47 AM, Ananda Kumar wrote:
>
>> Also, make sure your /tmp folder is on a separate and fast disk.
>> We
Have you over-allocated RAM/processes?
regards
anandkl
On Thu, Dec 23, 2010 at 6:15 PM, Glyn Astill wrote:
> I've no idea of the status of dtrace on linux, as I've never tried, but
> failing that you could run it through gdb to get some insight into the
> issue.
>
> --- On Thu, 23/12/10, Johan De M
petitiononline.com/froyo/
>
>
> On Dec 17, 2010, at 12:06 PM, Ananda Kumar wrote:
>
> If u have used a stored proc to delete the rows, and commting freqently,
> then the kill will happen faster.
> If you have just used "delete from table_name where , then it
> would ta
If you have used a stored proc to delete the rows, committing frequently,
then the kill will happen faster.
If you have just used "delete from table_name where ...", then it
would take too much time to roll back all the deleted but not committed rows.
Regards
anandkl
On Fri, Dec 17, 2010 at 8:37 AM, Wi
Copy the /etc/init.d/mysql file from your old m/c to the new one and try the
start/stop.
regards
anandkl
On Wed, Dec 8, 2010 at 2:21 PM, Machiel Richards wrote:
> HI All
>
>I am hoping someone has had this before as this one is baffling me
> entirely.
>
>We did a MySQL database move from on
Also, make sure your /tmp folder is on a separate and fast disk.
We had similar issues and we moved the /tmp folder from local disk to SAN storage and
it was quite fast.
regards
anandkl
On Mon, Dec 6, 2010 at 4:10 PM, Johan De Meersman wrote:
> Are you saying that mass inserts go much slower now that you
If you just need specific records, you can use the "-w" option of mysqldump to
extract only the specific records.
Then you can run the dump file into another db.
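A sketch with hypothetical names; -w (--where) applies the condition to every
table being dumped:
mysqldump -w "id = 12345" mydb mytable > specific_rows.dmp
mysql otherdb < specific_rows.dmp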
regards
anandkl
On Fri, Nov 12, 2010 at 2:35 PM, Johan De Meersman wrote:
> From the OP:
>
> > I have a copy of the INNODB files for these two
Oops... sorry... I did not see that line.
On Mon, Sep 20, 2010 at 3:44 PM, Johan De Meersman wrote:
> He did suggest doing mysqladmin create :-p
>
>
> On Mon, Sep 20, 2010 at 11:58 AM, Ananda Kumar wrote:
>
>> With the method you mentioned, you need to have the new db na
mp will add "create database" and "use
> database" statements. if you specify the db without that parameter, it
> won't.
>
>
> On Mon, Sep 20, 2010 at 11:34 AM, Ananda Kumar wrote:
>
>> The dump file has to be edited to replace old db name to the new db
The dump file has to be edited to replace the old db name with the new db name.
regards
anandkl
On Mon, Sep 20, 2010 at 3:00 PM, Uwe Brauer wrote:
> Hello
>
> I would like to clone a database db_org to db_clone on the same machine.
> Could
> I use the dump command for this? Should the user of both db
It depends on what you want to recover and the type of the db engine...
If you want to recover just the specific db and you have a dump of the db, just
restore the db (for InnoDB).
For MyISAM, you can just restore the files in that database.
regards
anandkl
On Thu, Sep 16, 2010 at 3:27 PM, Uwe Brauer wrote:
n my.cnf. There is 12Gb in the database server and I watch it fairly
> carefully and have not gone into swap yet in the past few years.
>
> On Thu, Sep 9, 2010 at 3:43 PM, Ananda Kumar wrote:
>
> > have u set sort_buffer_size at session level or in my.cnf.
> > Setting high valu
Have you set sort_buffer_size at the session level or in my.cnf?
Setting a high value in my.cnf will cause MySQL to run out of MEMORY, and
paging will happen.
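A sketch of the session-level route, which avoids the global memory risk; the
size is just an example:
SET SESSION sort_buffer_size = 32*1024*1024;  -- 32MB, for this connection only
-- run the heavy sort here; the setting dies with the session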
regards
anandkl
On Fri, Sep 10, 2010 at 1:10 AM, Phil wrote:
> Even prior to the group by it's still not likely to ever be more than 200
> or
>
to SUM the top 11 players from each team. Any suggestions ?
>
> Cheers
> Neil
>
>
> On Thu, Sep 9, 2010 at 9:17 AM, Ananda Kumar wrote:
>
>> did u try to use LIMIT after ORDER BY
>>
>>
>> On Thu, Sep 9, 2010 at 1:27 PM, Tompkins Neil <
>> nei
Did you try to use LIMIT after ORDER BY?
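A sketch of the shape, with hypothetical table and column names, picking the
top 11 of one team:
SELECT player_id, goals
FROM players
WHERE team_id = 1
ORDER BY goals DESC
LIMIT 11;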
On Thu, Sep 9, 2010 at 1:27 PM, Tompkins Neil
wrote:
> Any help would be really appreciated ?
>
>
>
> -- Forwarded message --
> From: Tompkins Neil
> Date: Wed, Sep 8, 2010 at 5:30 PM
> Subject: Query SUM help
> To: "[MySQL]"
>
>
> Hi
>
> I
Vincent,
Since the column is indexed, it would use the index during the delete.
regards
anandkl
On Thu, Sep 9, 2010 at 5:47 AM, Daevid Vincent wrote:
> I am curious about something.
>
> I have a "glue" or "hanging" table like so:
>
> CREATE TABLE `fault_impact_has_fault_system_impact` (
> `id
Also, can you please let us know the values in this table?
Just one row as an example would do.
regards
anandkl
On Mon, Sep 6, 2010 at 5:35 PM, Tompkins Neil
wrote:
> These two fields
>
> home_goals and away_goals
>
> Cheers
> Neil
>
>
> On Mon, Sep 6, 2010
Tompkins,
Which field stores the result of the matches?
regards
anandkl
On Mon, Sep 6, 2010 at 4:45 PM, Tompkins Neil
wrote:
> Hi,
>
> I've the following fields within a table :
>
> fixtures_results_id
> home_teams_id
> away_teams_id
> home_goals
> away_goals
> home_users_id
> away_users_id
>
> From
If you are planning to migrate to new hardware, then install the new version of
MySQL, take a dump of the current data, import it into your new m/c, and
test your app. If all looks fine, then you are done.
regards
anandkl
On Fri, Sep 3, 2010 at 3:53 PM, Machiel Richards wrote:
> Good day all
>
>
Did you check the logs on the db server to see what the issue was?
regards
anandkl
On Thu, Sep 2, 2010 at 6:25 AM, monloi perez wrote:
> All,
>
> I'm not sure if this is the right mailing list since the specific mailing
> lists
> doesn't seem to meet my concern.
>
> For some reason mysql client
Did you try changing the collation of the history column to UTF8
and then trying the update?
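A sketch of that change; the TEXT type is an assumption, so match the column's
real definition from SHOW CREATE TABLE:
ALTER TABLE suomi_contacts2
  MODIFY history TEXT CHARACTER SET utf8 COLLATE utf8_general_ci;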
2010/8/31 mysql
> On 2010-08-31 15:17, Ananda Kumar wrote:
> > desc suomi_contacts2;
>
> mysql> de
> | counter | int(10) unsigned | NULL | NO | PRI | NULL | auto_increment | select,insert,update,references | |
>
Can you please list the table structure, as collation can also be set at
the column level.
regards
anandkl
On Tue, Aug 31, 2010 at 6:00 PM, mysql wrote:
> Hi listers
> mysql server here is
>
> mysql-server-5.1.48-2.fc13.x86_64
>
> this morning i created a message with a literal string in chinese
Smith,
I never said this won't work. Sometimes there are chances of losing data.
regards
anandkl
On Thu, Aug 26, 2010 at 8:48 PM, wrote:
> Quoting Norman Khine :
>
> i see, so the best is to just stop slave and then check the master
>> status, and when the master status syncs then i start t