Hi,
> if I have, for example,
> a simple table called 'person'
>
> with 'id' and 'name' as columns,
>
> I can do a backup and get a file (A.sql)
> with all the INSERT statements
>
> here, all fine
>
> but
> how can I do this?
Have a look at the 'mysqldump' command line utility.
It goes somethin
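A rough sketch of such an invocation, assuming the database is called 'test'
(credentials and names are placeholders):

  mysqldump -u root -p test person > A.sql

The resulting A.sql contains the CREATE TABLE statement for 'person' followed
by INSERT statements for every row, and can be replayed later with the mysql
command line client.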
'interactive_timeout' can be used.
Lowering max_join_size could be an option too (it's currently
100,000,000), but I don't want to break people's existing apps that may
rely on this high join size.
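A hedged alternative is to lower it only for the sessions that need it,
leaving the global default untouched (the value below is picked arbitrarily):

  mysql> SET SESSION max_join_size = 1000000;

Existing applications keep the global 100,000,000.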
Thanks,
--
Martijn van den Burg
ASML ITM&S MySQL/TIBCO Support
--
[EMAIL PROTECTED]
> Sent: Wednesday, January 02, 2008 14:06 PM
> To: Martijn van den Burg
> Cc: Martin Goldman; mysql@lists.mysql.com
> Subject: Re: Backing up via slave
>
> Hi Martijn,
> The master will not have any information about the SLAVE; how are you doing
> this in your second s
Hi Martin,
> Hi folks,
>
> I have two MySQL servers running in a master-slave
> configuration, and I want to set up a process for backing up
> our application's data in which backups are sent to a server
> at another location. Ideally, I'd like to do a full backup
> once a week, and then incr
Of Baron Schwartz
> Sent: Thursday, December 20, 2007 15:50 PM
> To: Martijn van den Burg
> Cc: mysql@lists.mysql.com
> Subject: Re: How to set 'sql_warnings' in the config file
>
> That is not a mysqld option, it's a session variable:
>
> mysql> select @@
95,
could that be related?
Regards,
Martijn van den Burg
--
The information contained in this communication and any attachments is
confidential and may be privileged, and is for the sole use of the intended
recipient(s). Any unauthorized review, use, disclosure or distribution is
prohibited.
Dear list,
My company wants to migrate MySQL 5.0.18 from SPARC/Solaris 8 to
Intel/RedHat Enterprise Linux 4 update 4.
Are there any incompatibilities or snags to be expected (except for the
endian issue, which will be solved by exporting/importing the data)?
Thank you for your time,
Martijn
> > Hi,
> >
> > I have allocated 500MB to key_buffer_size, but only 324MB is in use
> > (64%).
> >
> > Am I right to assume that this can mean one of the
> following: (1) all
> > indexes have already been cached and together they are just 324 MB,
> Do "find /var/lib/mysql -name "*.MYI" -ls" (if
Hi list,
As part of a server virtualization project, we are going to migrate
MySQL from SPARC/Solaris 8 to AMD/x86 Solaris 10.
I seem to remember that it is possible to:
- stop the SPARC server
- move the mount point of the MySQL data directory to the Solaris/x86
server
- start the AMD x86 server
Hi,
I have allocated 500MB to key_buffer_size, but only 324MB is in use
(64%).
Am I right to assume that this can mean one of the following: (1) all
indexes have already been cached and together they are just 324 MB, (2)
there is a limiting variable (open_files, innodb_open_files, for example)
tha
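For reference, the 64% figure comes from the key cache counters; roughly:

  in use = key_buffer_size - (Key_blocks_unused * key_cache_block_size)
         = 500 MB - ~176 MB
         = ~324 MB, i.e. about 64% of the buffer

(the 176 MB is simply the remainder implied by the numbers above).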
Paul,
> I have a Java Struts application running on Tomcat 5. I have
> mysql 5 installed on redhat EL 3. When a user enters text
> with a £ sign (Great Britain Pound) in the browser and clicks
> enter, any insert or update statement apparently gets
> truncated in mysql. The insert reliably
On Friday 19 May 2006 17:49, sheeri kritzer wrote:
> I suggest writing a very simple shell script to run "SHOW SLAVE
> STATUS" and output it to a file every 15 seconds, to verify your
> script is working. I have never seen MySQL give a bad lag time for
> replication -- it's always been accurate fo
> That is weird. If it only lasts a couple of seconds, how are you
> monitoring it to find out what the lag time is?
I've written a replication monitor script using Perl::POE, which checks
replication lag every 15 seconds or so (can't check the exact interval now -
weekend has begun here).
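A minimal shell version of the suggestion above (log file, credentials and
interval are placeholders):

  #!/bin/sh
  # append the replication lag to a log file every 15 seconds
  while true; do
      date >> /tmp/slave_lag.log
      mysql -u monitor -pSECRET -e 'SHOW SLAVE STATUS\G' \
          | grep Seconds_Behind_Master >> /tmp/slave_lag.log
      sleep 15
  done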
--
just some updates on a single database. No flushing of
logs... The strange thing is that the condition of extremely high lag lasts
only a couple of seconds, and then tapers back very quickly to zero.
Martijn
> On 5/19/06, Martijn van den Burg <[EMAIL PROTECTED]> wrote:
> > Hi,
> &g
Hi,
Recently I created a new replication set up with 5.0.18-standard-log on Solaris
8 (one master, one slave).
Replication is running, but periodically (after a bunch of INSERT/UPDATE
statements) there is a very large replication lag, with Seconds_Behind_Master
values of 21 and more. This
error
Is there anything else I can try?
Martijn
> Date: 08/02/06 01:52 PM
> From: "Martijn van den Burg" <[EMAIL PROTECTED]>
> To: mysql@lists.mysql.com
> CC: "Gleb Paharenko" <[EMAIL PROTECTED]>
> Subject: Re: Error reading binlog - recoverab
> http://dev.mysql.com/doc/internals/en/replication-binary-log.html
>
> Check if the problem still exists with mysqlbinlog from the fresh MySQL
> shipment. The --hexdump option for mysqlbinlog can be helpful for
> debugging purposes. See:
> http://dev.mysql.com/doc/refman/5.0/en/mysqlbinlog.html
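For example (the binlog name is a placeholder):

  mysqlbinlog --hexdump mysql-bin.000042 > binlog-hexdump.txt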
Hi,
MySQL 4.1.10-log, Solaris
Replication stopped with an error that closely resembles bug #8215. When
examining the binlog with mysqlbinlog, it stops before reaching the end of the
binlog, with the error
ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len:
1635131433, eve
Tony,
> Hi Is there any sql commands for dumping/exporing the content
> of a database to a file ?? Thanks. Tony.
SELECT ... INTO OUTFILE
(http://dev.mysql.com/doc/refman/5.0/en/select.html), but that doesn't
dump the entire database.
From the commandline (i.e. it's not an SQL statement) you can
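Sketches of both approaches, with placeholder names:

  mysql> SELECT * INTO OUTFILE '/tmp/mytable.txt' FROM mydb.mytable;

  mysqldump -u root -p mydb > mydb.sql

The first writes a tab-delimited file for one table on the server host; the
second (run from the shell) dumps the whole database as SQL statements.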
Hi,
Disregard my previous post.
It seems the developer is doing stuff that creates duplicate key errors,
which seem to propagate to the slave instead of being ignored.
--
Martijn
> -Original Message-
> From: Martijn van den Burg [mailto:[EMAIL PROTECTED]
> Sent: d
Dear list,
Replication (between two 4.1.10 servers, Solaris) fails on the following
query with a 1064 error (parse error), but there are no reserved words
in it, and when I run the query from the command line on the master no
errors occur and the table is created.
(binlog)
/*!40019 SET @@session
Hi,
Apologies for not posting this in the 'replication' list, but that seems
not very active...
I have setup replication between two 4.1.10 databases, and I noticed
that the SLAVE THREAD had stopped because of an SQL error.
The errorlog says: "Fix the problem and restart the slave SQL thread".
B
Greetings,
I've been keeping track of Bytes_sent and Bytes_received for a while in
the fashion of 'mysqlreport': divide those values by Uptime in order
to obtain a data rate (bytes/sec).
The resulting graphs look like this:
[ASCII sketch in the original: a repeating sawtooth, the rate ramping up
steadily and then dropping sharply before ramping up again]
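To be explicit about the computation behind the graphs (these are the
standard status counters):

  mysql> SHOW STATUS LIKE 'Bytes_sent';
  mysql> SHOW STATUS LIKE 'Uptime';

  rate (bytes/sec) = Bytes_sent / Uptime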
Hi Joerg et al.,
[snip]
> You said this is Solaris.
>
> AFAIK, Solaris uses the same disk area for "/tmp" and paging, so the
> available space for files in "/tmp" may vary even without files being
> manipulated.
>
> I propose you set TMPDIR to point to some other disk area.
We're on Solaris,
Hi,
Thanks for your detailed reply.
Kind regards,
Martijn
On Sunday 02 October 2005 21:18, Gleb Paharenko wrote:
> Hello.
>
> > For InnoDB I'm not sure, so I won't make guesses.
>
> I'm continuing the previous message. Here is what I've found out:
>
>
> InnoDB has its own open file statist
Hi Jörg,
On Thursday 29 September 2005 12:09, you wrote:
> Hi Martijn, all!
>
> Martijn van den Burg wrote (reordered)
>
> > On Wednesday 28 September 2005 20:10, Gleb Paharenko wrote:
> >>Martijn van den Burg wrote:
> >>>Hi list,
> >>>
> >
> http://dev.mysql.com/doc/mysql/en/temporary-files.html
>
> Martijn van den Burg wrote:
> > Hi list,
> >
> > After the most recent reboot (which we used to increase the
> > ulimit/open_files_limit on our Solaris 8 machine running 4.1.10-log),
> > the follow
On Wednesday 28 September 2005 20:47, Siegfried Heintze wrote:
> Other databases allow one to store comments describing each field in a
> table definition. These are stored in the database. In addition, one can
> store comments about the table. Examples include MS SQL Server and MS
> Access.
>
> Do
On Tuesday 27 September 2005 11:59, Pooly wrote:
> 2005/9/27, Ow Mun Heng <[EMAIL PROTECTED]>:
> > Is there any doc looking at benchmarks of a database which is populated
> > entirely with fixed length char compared to variable character lengths?
> >
> > I know using char is preferred over varchar
Hi list,
After the most recent reboot (which we used to increase the
ulimit/open_files_limit on our Solaris 8 machine running 4.1.10-log),
the following message has started to appear in the errorlog:
InnoDB: Error: unable to create temporary file; errno: 2
I have searched google and the lists
Hi Miguel,
Those files are the binary logs that MySQL keeps of its doings. You can find
more information here: http://dev.mysql.com/doc/mysql/en/binary-log.html.
Depending on the number of tables and the number of data-altering statements,
these logs may become very big very quickly.
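If they grow too large you can list and prune them from within MySQL; a
sketch (the log name is an example, and make sure no slave still needs the
logs you purge):

  mysql> SHOW MASTER LOGS;
  mysql> PURGE MASTER LOGS TO 'mysql-bin.000010';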
Kind reg
Hi list,
If MySQL opens a table, does it also need to open the corresponding table format
file?
If it does, does it increment open_files?
In other words: does opening a MyISAM table take three file handles, and
opening an InnoDB table two (assuming the tablespace file isn't opened yet)?
Kind rega
Hi list,
I have noticed that the key buffer usage (defined in MySQL Administrator
as Key_blocks_used * key_cache_block_size) always stays the same, and
that I have a large amount of Key_blocks_unused, which suggests to me
that there's a potential performance gain somewhere:
> show variables like 'key%';
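The matching status side, for completeness (a sketch; output omitted):

  mysql> SHOW STATUS LIKE 'Key_blocks_used';
  mysql> SHOW STATUS LIKE 'Key_blocks_unused';

where "in use" = Key_blocks_used * key_cache_block_size and "unused" =
Key_blocks_unused * key_cache_block_size.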
> > Qus 2. Is there any other way to compute the db size (other
> than disk
> > quota).
> >
>
> du -s mysql_data_directory
If you want to know the size of a /single/ database (i.e. schema) then
this method works if there's just one database in the
mysql_data_directory. If there are multiple da
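For a single schema the same approach still works, because every database has
its own subdirectory under the data directory (the path is an example):

  du -sh /var/lib/mysql/my_schema

Note that InnoDB data kept in the shared ibdata tablespace is not counted by a
per-directory du, so treat the result as an approximation.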
Hi Mark,
Thanks for your reply.
> MySQL itself doesn't generate UDP traffic (the protocol runs
> over TCP/IP). However, MySQL relies on DNS unless you've specified
> --skip-name-resolve when you started the server. DNS
> travels over UDP.
I did not specify --skip-name-resolve.
> You also m
Hi,
While I was on holiday our production MySQL 4.1.10 (Solaris 8) had to be
restarted due to a filesystem upgrade (NetApp). After the restart, when
I came back from my holiday, I noticed that INSERT performance has
dropped dramatically and that the machine generates a lot more UDP
traffic seeming
Dirk,
Some suggestions.
1. can you still dump the table using mysqldump? Then you can import it
again (with the mysql commandline client), creating a 'fresh' table.
2. make a backup of the index file (.MYI), then throw it away and rebuild
the index so MySQL gets a fresh one (see the sketch below).
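A sketch of how that rebuild is usually done (table and path are placeholders):

  # if the .MYI file is still present (server stopped, or table flushed and locked):
  myisamchk --recover /var/lib/mysql/mydb/mytable.MYI

  # if the .MYI file has already been removed, from the mysql client:
  mysql> REPAIR TABLE mydb.mytable USE_FRM;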
HTH,
Martijn
> -Ori
open_files increase?
Kind regards,
Martijn
>
>
> "Martijn van den Burg" <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I'm running out of resources even though I think I shouldn't.
> >
> > The open_files_limit=256, max_connections=24
Hi,
I'm running out of resources even though I think I shouldn't.
The open_files_limit=256, max_connections=246, and table_cache=64, and
I'm trying to actively create 'Too many open files' errors with this
configuration, to be able to quantify the effect of raising the
filesystem's ulimit, and the
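A simple way to watch the effect while provoking the errors (a sketch):

  mysql> SHOW VARIABLES LIKE 'open_files_limit';
  mysql> SHOW STATUS LIKE 'Open_files';
  mysql> SHOW STATUS LIKE 'Opened_tables';

and, in the shell that starts mysqld, 'ulimit -n' shows the descriptor limit
the server actually inherited.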
Hi,
> I have a file where the data looks like this:
>
> "1","23","345","45";
> "34","4","444","1er";
>
> I am then trying to load that data using the load data local
> infile and this statement:
>
> Load data local infile '/httpd/htdocs/sql/loader.sql' into
> table vehicles fields terminate
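For data in that shape the field and line terminators have to be spelled out;
a hedged version of the statement (column list omitted, path and table name
taken from the quoted post):

  LOAD DATA LOCAL INFILE '/httpd/htdocs/sql/loader.sql'
  INTO TABLE vehicles
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY ';\n';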
Hi,
> trying to create a new user with the below,
>
> GRANT ALL PRIVILEGES ON *samhain* TO 'samhain'@'localhost'
> IDENTIFIED BY 'passwd' WITH GRANT OPTION;
That should be 'samhain.*', not '*samhain*'.
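That is, the corrected statement reads:

  GRANT ALL PRIVILEGES ON samhain.* TO 'samhain'@'localhost'
  IDENTIFIED BY 'passwd' WITH GRANT OPTION;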
Regards,
Martijn
--
e:
> http://dev.mysql.com/doc/mysql/en/not-enough-file-handles.html
>
>
>
> "Martijn van den Burg" <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > This applies to 4.1.10 on Solaris 8, with 1 GB of RAM.
> >
> > Our internal cus
Hi,
This applies to 4.1.10 on Solaris 8, with 1 GB of RAM.
Our internal customer wants to be able to make 1000 concurrent
connections to our database, with a mix of MyISAM and InnoDB tables, but
according to http://dev.mysql.com/doc/mysql/en/table-cache.html and an
online Solaris reference
(http
Stephane,
> I was wondering if anyone is using MySQL as their main ERP
> production database; if so, how stable and reliable is it?
SAP and MySQL have teamed up to certify (part of?) SAP's R/3 ERP system
on MaxDB. Check:
http://searchsap.techtarget.com/originalContent/0,289142,sid21_gci967139
> to flush tables, you're right :
>
> flush table TOTO <= flushes only toto
> flush table TOTO,TITI<= flushes only toto and titi
> flush tables <= flushes ALL
OK, that's good news.
> Haven't you thought about master-slave replication and
instead of locking individual tables. What would be
the impact on queries that are being executed at the moment I set the
lock?
Kind regards,
--
Martijn van den Burg
ASML ITM&S Application Support / Webcenter
--