The database is MySQL; it runs on a Linux box that connects to the Nexenta
server through 10GbE using iSCSI. Just a short question - wouldn't it be
easier, and perhaps faster, to just have the MySQL DB on an NFS share? iSCSI
adds complexity, both on the target and the initiator.
Yes, we tried both and we didn't notice any difference in terms of
performance. I've read conflicting opinions on
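For reference, a minimal sketch of what the NFS variant looks like from the Linux client; the server name, export path, and mount point below are illustrative, not from this thread:

```shell
# Hypothetical: the Nexenta box "nexenta01" exports /volumes/mysqldata over NFS.
mkdir -p /var/mysql
mount -t nfs -o rw,hard,vers=3 nexenta01:/volumes/mysqldata /var/mysql

# Then point mysqld at the share in my.cnf:
#   [mysqld]
#   datadir = /var/mysql
```

The hard mount option matters for a database workload: a soft mount can return I/O errors on transient server stalls, which MySQL handles badly.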
At this point, I will repeat my recommendation about using zpool-in-files
as a backup (staging) target. Depending on where you host the files, and
how you combine them, you can achieve these scenarios without clunkery,
and with all the benefits a zpool provides.
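A minimal sketch of the zpool-in-files idea; all paths and dataset names here are made up for illustration:

```shell
# Create a small zpool backed by plain files as a backup staging target.
mkfile 1g /backup/vdev0 /backup/vdev1
zpool create stagepool /backup/vdev0 /backup/vdev1

# Stage a snapshot into it with send/receive (dataset names illustrative):
zfs snapshot tank/data@today
zfs send tank/data@today | zfs receive stagepool/data
```

Because the staging target is itself a zpool, you keep checksumming, compression, and incremental send/receive, and the backing files can later be copied or combined however the backup scheme requires.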
This is another good scheme.
I tried to run zfs list on my system, but it looks like this command
hangs. It does not return even if I press Ctrl+C, as follows:
r...@intel7:/export/bench/io/filebench/results# zfs list
^C^C^C^C
^C^C^C^C
..
When this happens, I am running the filebench benchmark with the oltp
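One way to see where the hung command is stuck (a sketch, assuming an OpenSolaris-era system with mdb available; the PID is a placeholder you substitute from the first command):

```shell
# Find the stuck process, then dump its kernel thread stacks.
pgrep -lf 'zfs list'
echo "0t<PID>::pid2proc | ::walk thread | ::findstack -v" | mdb -k
```

A stack parked in a txg sync or pool-lock function would suggest the hang is in the pool, not in the zfs command itself, which matters when a heavy benchmark is running at the same time.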
Hi list,
Here's my case :
pool: mypool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to
After having launched the replace command, I had to offline c0t9d0
because it was generating too many warnings and slowing down I/Os.
Now the replace seems to be finished, but zpool status still displays
"replacing", and according to the scrub status, the resilver seems to
continue?
Any idea how to clarify
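If the resilver has in fact finished, one commonly suggested step is to detach the old half of the "replacing" vdev by hand; a sketch, using the pool and disk names from the message above, to be attempted only after rechecking status:

```shell
zpool status -v mypool       # confirm the resilver really is complete
zpool detach mypool c0t9d0   # remove the old disk from the "replacing" vdev
```

Once the old device is detached, the replacing vdev collapses to the new disk and zpool status should report the pool as healthy again.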
Tristram Scott tristram.sc...@quantmodels.co.uk wrote:
I see a number of points to consider when choosing amongst the various
suggestions for backing up zfs file systems. In no particular order, I have
these:
Let me fill this out for star ;-)
1. Does it work in place, or need an
If you have one zpool consisting of only one large raidz2, then you have a slow
raid. To reach high speed, you need at most 8 drives in each raidz2. So one of
the reasons it takes time is that you have too many drives in your raidz2.
Everything would be much faster if you split your zpool
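The split layout being suggested looks like this (device names are illustrative, not from the thread):

```shell
# Two 8-drive raidz2 vdevs in one pool, instead of a single 16-drive raidz2.
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
```

ZFS stripes across top-level vdevs, so random IOPS and resilver speed scale roughly with the number of vdevs, at the cost of two extra parity drives per vdev.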
On 05 July, 2010 - Roy Sigurd Karlsbakk sent me these 1.9K bytes:
On 07/ 6/10 02:21 AM, Francois wrote:
In 3.0.3+, a new option lists the appliance changelog going forward:
nmc$ show version -c
On 07/04/2010 05:58 PM, Bohdan Tashchuk wrote:
Where can I find a list of these?
This leads to the more generic question of: where are *any* release notes?
I saw on Genunix that Community Edition 3.0.3
On Jul 5, 2010, at 4:19 AM, Ian D wrote:
Also, are you using jumbo frames? That can usually help a bit with either
access protocol
Yes. It was off early on, and we did notice a significant difference once we
switched it on. Turning Nagle off, as suggested by Richard, also seems to
have
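For completeness, a sketch of both tweaks on the Solaris/Nexenta side; the interface name is illustrative, and both ends of the link (switch and Linux client) must carry the same MTU:

```shell
# Enable jumbo frames on the 10GbE link (interface name is an example).
dladm set-linkprop -p mtu=9000 ixgbe0

# Effectively disable Nagle coalescing system-wide via the TCP tunable.
ndd -set /dev/tcp tcp_naglim_def 1
```

On the Linux initiator the matching MTU change would be something like `ip link set eth0 mtu 9000`; the Nagle tunable persists only until reboot unless added to a startup script.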