Remigiusz Sokołowski wrote:
matt ryan wrote:
Tobias Asplund wrote:
On Tue, 7 Sep 2004, matt ryan wrote:
I forgot, did you have multiple slaves on multiple machines? If so, do they have identical hardware/drivers?
Multiple slaves on same machine, one works fine
Did you try to distribute
I use foxpro to do similar loops
I've found that I get 10 queries per second on large tables, when
connecting once, and issuing individual select statements via odbc.
It is much faster if you can narrow the recordset into an array within php, and spool through that, unfortunately I deal with
Still have not got this fixed, I'm all out of ideas, the slave has been reloaded again today
gerald_clark wrote:
We have no idea what you are running, or what you are running it on.
matt ryan wrote:
040901 18:36:21 Error reading packet from server: binlog truncated in the middle of event
Tobias Asplund wrote:
On Tue, 7 Sep 2004, matt ryan wrote:
I forgot, did you have multiple slaves on multiple machines? If so, do they have identical hardware/drivers?
Multiple slaves on same machine, one works fine
040901 18:36:21 Error reading packet from server: binlog truncated in the middle of event (server_errno=1236)
040901 18:36:21 Got fatal error 1236: 'binlog truncated in the middle of event' from master when reading data from binary log
040901 18:36:21 Slave I/O thread exiting, read up to log
gerald_clark wrote:
We have no idea what you are running, or what you are running it on.
matt ryan wrote:
040901 18:36:21 Error reading packet from server: binlog truncated in the middle of event (server_errno=1236)
040901 18:36:21 Got fatal error 1236: 'binlog truncated in the middle of event
Mikhail Entaltsev wrote:
You _could_ try adding an identical primary key to the stat_in table as
you have on the 321st_stat table. However, since we need all of the rows
from stat_in in the results, I am not sure that it will help speed up the
join (because it's a left join). Even though I think
Mikhail Entaltsev wrote:
Hi,
insert into 321st_stat select * from stat_in group by primary key fields from 321st_stat table;
did you try to use this query?
Best regards,
Mikhail.
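Spelled out, Mikhail's suggestion looks something like the sketch below; the key columns used here (dic, rec_date, ssn) are placeholders, since the real list would come from show create table 321st_stat:

-- keep one row per primary-key combination while copying
-- (column names are hypothetical, not the real table definition)
insert into 321st_stat
select * from stat_in
group by dic, rec_date, ssn;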
Ran it; it took at least 24 hours. It finished but never gave me the total time, when I checked the server mysql
Mikhail Entaltsev wrote:
Could you execute show create table 321st_stat and show create table
stat_in
and send results back?
I have no keys on the temp table, stat_in, do you think adding keys on the whole primary key would be faster?
I wasn't sure if you could join mysql keys, the key is
I think oracle parallel query is calling me
110,832,565 stat records
77,269,086 on weekly update, I get small daily files, but daily SQLs don't work very well, and miss records, in this case it missed 563 records.
mysql> update stat_in set ctasc='321ST';
Query OK, 77269086 rows affected (24 min
One alternative, since you know you don't have many records to insert,
is to pull the IDs of the missing records and insert just those. Do an
outer join on the two tables (joining on the primary key), get a list
of the IDs of records in stat_in but not in 321st_stat, and add a
WHERE id IN
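A sketch of that approach, again with placeholder key columns; rather than building a WHERE id IN list, the anti-join result can also be inserted directly in one statement:

-- rows present in stat_in but missing from 321st_stat;
-- the left join leaves t's columns NULL where no match exists
insert into 321st_stat
select s.*
from stat_in s
left join 321st_stat t
  on t.dic = s.dic and t.rec_date = s.rec_date and t.ssn = s.ssn
where t.dic is null;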
Andrew Pattison wrote:
I'm guessing that you have indexes on the 321st_stat table? If this is
the case, try dropping them before you do the insert, then rebuilding
them. MySQL is known to be slow at doing bulk inserts on indexed
tables. Also, updates are much faster than inserts since with
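For MyISAM, the drop-and-rebuild idea can be sketched with DISABLE/ENABLE KEYS; note this only suspends non-unique indexes, so a primary key spanning almost every column would still be maintained row by row:

alter table 321st_stat disable keys;  -- stop maintaining non-unique indexes
insert into 321st_stat select * from stat_in;
alter table 321st_stat enable keys;   -- rebuild them in one bulk pass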
The table is 9 gig, and the index is 8 gig
unfortunately the primary unique key is almost every column; if I were to make it one using concat, it would be huge.
I tried making those fields a hash, but it did not work, I had duplicate hashes for non-duplicate records!!
Matt
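One plausible explanation for those duplicate hashes: hashing a plain concat makes ('AB','C') and ('A','BC') collide, because the concatenated strings are identical. Hashing with an explicit separator avoids that ambiguity; a sketch with placeholder columns:

-- concat('AB','C') = concat('A','BC'), so separate the fields first
select md5(concat_ws('|', c1, c2, c3)) as row_hash from stat_in;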
Still no solution on this
anybody have any ideas?
It's not network, or hard drive, it's got to be some type of bug in my
config files, attached in original email
The master is on 4.0.20a and the slave is on 4.0.20a
Replace deletes and inserts.
?
what do you mean?
Anybody else have any ideas?
I can't keep the slave up
The only thing I have not tried is upgrading to 4.0.20; however, nothing changed to cause this problem
I reset the master, flush logs, reset master, show master status..
shows FINANCE-bin.186 at position 79
so I started the slave
CHANGE MASTER TO
MASTER_HOST='192.168.1.168',
MASTER_USER='repl',
MASTER_PASSWORD='Daredevil22',
MASTER_LOG_FILE='FINANCE-bin.186',
MASTER_LOG_POS=79;
start slave;
and I
More info..
I don't see anything wrong with the binlog the slave has
E:\mysql\data>mysqlbinlog databasebackup-relay-bin.001
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
# at 4
#691231 19:00:00 server id 1 log_pos 0 Rotate to FINANCE-bin.186 pos: 79
# at 46
#691231 19:00:00 server
Replace does a delete followed by an insert.
Ahh, I'm testing innodb on our tables with this problem
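For reference, REPLACE really is just a delete-then-insert keyed on any unique index; a toy example with a hypothetical table t having a unique key on id:

replace into t (id, val) values (1, 'old');
replace into t (id, val) values (1, 'new');  -- deletes row 1, then inserts the new one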
[EMAIL PROTECTED] wrote:
Have you considered that a proxy server may be in the way. I have been
watching this thread but I can't remember if you said anything about
your network connectivity (sorry!). I have seen several programs make
what they thought was a connection then fail because they
matt ryan wrote:
Replace does a delete followed by an insert.
Ahh, I'm testing innodb on our tables with this problem
I've switched to innodb but performance isn't very good
while the insert runs, here's what I get for performance
select count(*) from rondon;
1 row in .13 sec
select count(*) from
matt ryan wrote:
Replace does a delete followed by an insert.
Ahh, I'm testing innodb on our tables with this problem
Doh, another problem:
innodb has no merge option; I have too much data, and the only way to deal with it is to partition the data and then tie it together with merge views
I can't keep the slave up for more than 10 minutes
constantly getting these errors
040812 10:32:25 Error reading packet from server: binlog truncated in the middle of event (server_errno=1236)
040812 10:32:25 Got fatal error 1236: 'binlog truncated in the middle of event' from master when
Victor Pendleton wrote:
Can you reset the slave to read the next event in its relay log? If this is not possible, is refreshing the data from the master a viable option?
I can start slave, and it runs a little while, then stops again.
I can refresh the data from the master, I've done it 25 times
I deleted every table off the slave, and reloaded them, I do this twice a week because it won't replicate
The master server has a check/optimize run every Sunday
I had a similar situation one week ago. Found one of the tables (MyISAM) had a
corrupt index. After fixing it, everything was fine again.
SQL is: replace delayed into table (c1, c2, c3) select c1, c2, c3 from temp table.
This takes 2 hours to complete, the temp table is rather large.
The table being updated is locked the whole time; all web requests are locked and the pages time out.
Is there any way to get this to run without
Check it out
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> start slave;
ERROR 1198: This operation cannot be performed with a running slave, run SLAVE STOP first
mysql> start slave;
ERROR 1198: This operation cannot be performed with a running slave, run SLAVE STOP first
mysql>
Should I even attempt this using mysql?
Has anyone played with this much data in mysql?
I've got two 100 gig databases in mysql, and slave replication on both
of them, the only time I have a problem is table scans, that much data
will be slow.
One of my slaves has decided to stop replicating
every time I reset it, I get this
040728 8:46:46 Error reading packet from server: binlog truncated in the middle of event (server_errno=1236)
040728 8:46:46 Got fatal error 1236: 'binlog truncated in the middle of event' from master when
Update on this, I found that when the slave stops, all I have to do is
start the slave and it's good again
Here's what the log shows.. the only thing I did was start slave and it picked right back up
040728 9:25:13 Error reading packet from server: binlog truncated in the middle of event
Split the myisam table into separate tables. We will call each table a bucket.
Create a MERGE table of all of them, for selecting the data.
When inserting, use a hash function on your primary key values to determine which bucket to insert into.
If you almost always select by primary key, then
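A sketch of the bucket scheme with two buckets and placeholder columns; the MERGE syntax is standard MyISAM, while the routing rule is the application's choice (this is also why the scheme is lost when moving to innodb, since MERGE only works over identical MyISAM tables):

create table stat_0 (dic char(6), ssn char(9), primary key (dic, ssn)) type=myisam;
create table stat_1 (dic char(6), ssn char(9), primary key (dic, ssn)) type=myisam;
create table stat_all (dic char(6), ssn char(9), key (dic, ssn))
  type=merge union=(stat_0, stat_1);
-- the application hashes the key, e.g. bucket = hash(dic, ssn) mod 2,
-- inserts into stat_0 or stat_1 directly, and selects through stat_all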
I went over your data. This is what I noticed first:
| Select_full_join       | 0   |
| Select_full_range_join | 0   |
| Select_range           | 1   |
| Select_range_check     | 0   |
| Select_scan            | 301 |
What command will provide this data?
Resend, firefox did not send the way it looked when I typed it!
I'm guessing my stats aren't too good, lots of full table scans, but this is to be expected, my users can query any table by any column, and I can't index all column combinations
Variable_name Value
Select_full_join
I've found the slow query log is useless to me, it's 50 meg right now.
Is there a tool that will identify common queries? I could probably come up with some SQL if I load it into a table, but it would take quite a while to sort out.
I posted a request on the mysql bugtraq to move it to a
David
Did you look at MYSQL LOAD DATA INFILE ???
doc is available at
http://dev.mysql.com/doc/mysql/en/LOAD_DATA.html
regards,
Load data infile only works with a text file going into a table; if the data is in another format, like raw oracle, or EBCDIC, it won't work, you'll need to convert it to
OK, so if I can convert it into ascii, then it will be
a text file, which I can import using the instructions
at http://dev.mysql.com/doc/mysql/en/LOAD_DATA.html,
right?
Thanks.
Yep, just have the table structure match the ascii file and load it in
Matt
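A minimal sketch of that load, assuming a tab-delimited ascii file (the path is hypothetical):

load data infile 'e:/exports/stat.txt'
into table stat_in
fields terminated by '\t'
lines terminated by '\r\n';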
There is a perl script that comes with MySQL called mysqldumpslow.
You can just run it on your slow log and it will output summary
statistics about the slow log.
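Typical usage against the slow log (-s c sorts the grouped queries by count, -t 10 keeps the ten most frequent; the file name is hypothetical):

mysqldumpslow -s c -t 10 mysql-slow.log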
I saw that in the docs, but I definitely don't want to install perl on a production server, I never looked to see if I could do it
I load all the data into a table with no keys
then I insert this data into a table with 225 million records, this
large table has the primary key, this is what takes a LONG time
Matt
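That two-step flow as a sketch; the file name is hypothetical, and insert ignore makes duplicate primary keys from the weekly file get skipped instead of aborting the load:

-- step 1: bulk load into the keyless staging table (no index maintenance)
load data infile 'e:/exports/weekly.txt' into table stat_in;
-- step 2: merge into the 225-million-row keyed table; this is the slow part
insert ignore into 321st_stat select * from stat_in;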
Lachlan Mulcahy wrote:
MySQL Version: 4.0.18
Server OS: windows 2000, or 2003
Memory 2 gig
CPU(s) dual 2.6-3ghz xeon 500-2mb cache (cpu load is low)
Disks (RAIDs and Independent disk speed/types) 8x72 gig 15,000 rpm scsi II u320 raid 5 dell perc setup
Lopez David E-r9374c wrote:
Since you have a temp table created (no keys I assume), use the command
mysqldump -v -e -n -t dbname tablename > filename.sql
This creates a file that inserts the records back into the same table
it also does not do an insert ignore
I need the records to go into the
Do you ever delete from this table?
Temp table is truncated before the EBCDIC file is loaded
Have you removed the unnecessary duplicate key on the first column of your primary key?
Have not touched the DIC index yet, I need a backup server to change indexes, it would take the main server down for
Donny Simonton wrote:
Matt,
I've been reading this thread for a while and at this point, I would say
that you would need to provide the table structures and queries that you are
running.
For example, we have one table that has 8 billion rows in it and it's close to 100 gigs, and we can hammer it all
Since you have a temp table created (no keys I assume), use the command
mysqldump -v -e -n -t dbname tablename > filename.sql
This should create insert statements with many values in a single insert. Then use the client program to insert them into your db.
mysql -u matt -p dbname < filename.sql
This
Justin Swanhart wrote:
Indexes can generate vast amounts of random i/o.
Because of rotational latency, random i/o can really
slow you down, especially if you are using IDE or SATA
disks because they can't do tagged queueing like SCSI
disks can.
If you have the budget for it, I would consider
Tim Brody wrote:
You may find that the 'dic' KEY isn't necessary, as it's the first part of
your PRIMARY KEY.
I've found better performance for multi-column keys by putting the columns
in order of least variance first, e.g. for a list of dates:
1979-04-23
1979-07-15
1980-02-04
1980-06-04
You want
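Presumably the point is to order composite key columns from least-varying to most-varying; a sketch with a hypothetical table, splitting those dates so the low-variance year column leads the key:

create table events (
  yr  smallint,
  mon tinyint,
  dy  tinyint,
  primary key (yr, mon, dy)
) type=myisam;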
Consider replicating to some slave servers and dividing reads among them.
I already replicate to slaves, and sites will do read only queries off
these slaves
99.9 % of the tables are read only anyway, the only tables we update or
insert into, are very very small and fast.
These big tables are
You might be out of luck with MySQL ... sorry.
You may need to switch to a database that has a parallel query
facility. Then - every query becomes a massive table scan but gets
divided into multiple concurrent subqueries - and overall the job
finishes in a reasonable amount of time. The
Lopez David E-r9374c wrote:
matt
1) inserts using this format are much faster:
INSERT INTO table (col1, col2) VALUES (val1,val2), (val3,val4)
This is much faster than single-row inserts. My experience is 2.5 hrs vs. 36 hrs.
2) The PACK_KEYS=1 may be hurting you. I've never used it.
3) There
Egor Egorov wrote:
Are you running this under Microsoft Windows?
Yes, windows 2k and 2003, mysql-nt 4.0.16
[EMAIL PROTECTED] wrote:
You may want more indexes but you might be getting killed because you already have too
many.
To test - try loading into a table without indexes and see if it makes a difference.
At the very least - check to see if the primary index which starts with 'dic' can make
your
Rebuilding index takes 3 1/2 days!!! Growing pains with mysql..
I've got 2 dedicated servers, each with a slave, all run 32gig 15k rpm
raid 5 on u320 perc raid cards, dell 2600/4600's with single channel
backplanes (new ones will have dual channel)
All have 2 gig of ram, but I've never seen mysql