uname -a
SunOS 5.11 snv_132 i86pc i386 i86pc Solaris
A scrub made my system unresponsive. I could not log in with ssh and had to
do a hard reboot.
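For what it's worth, if the box had still been reachable, one possible
mitigation would have been to cancel the running scrub (the pool name below is
hypothetical):
bash-4.0# zpool scrub -s tank      # -s stops an in-progress scrub
bash-4.0# zpool status tank        # should now report the scrub as canceled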
On Feb 18, 2010, at 4:55 AM, Phil Harman wrote:
> This discussion is very timely, but I don't think we're done yet. I've been
> working on using NexentaStor with Sun's DVI stack. The demo I've been playing
> with glues SunRays to VirtualBox instances using ZFS zvols over iSCSI for the
> boot images ...
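For context, on builds of that era a zvol could be exported over iSCSI with
the legacy shareiscsi property (a minimal sketch, not the poster's setup; the
dataset name and size are assumptions):
bash-4.0# zfs create -V 16G tank/vbox/guest1      # 16 GB zvol for a guest boot disk
bash-4.0# zfs set shareiscsi=on tank/vbox/guest1  # export via the legacy iSCSI target daemon
bash-4.0# iscsitadm list target                   # confirm the target was created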
On 2010-02-20 08:45:23, Charles Hedrick wrote:
I hadn't considered stress testing the disks. Obviously that's a good idea.
We'll look at doing something in May, when we have the next opportunity to take
down the database. I doubt that doing testing during production is a good
idea...
We had been using the same pool for a backup MySQL server for six months before
using it for the primary server. Neither zpool status -v nor fmdump shows any
signs of problems.
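For reference, these are the checks being described (the pool name is assumed):
bash-4.0# zpool status -v data     # -v lists any files with known data errors
bash-4.0# fmdump                   # faults the fault manager has diagnosed
bash-4.0# fmdump -eV               # raw error events, e.g. per-disk I/O errors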
On 2010-02-20 08:12:53, Charles Hedrick wrote:
We recently moved a MySQL database from NFS (NetApp) to a local disk array
(J4200 with SAS disks). Shortly after moving production, the system effectively
hung. CPU was at 100%, and one disk drive was at 100%.
I had mostly tried to follow the ZFS tuning recommendations for MySQL:
* recordsize set to ...
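The message is cut off before the value, but the usual guidance for InnoDB on
ZFS is to match the 16K InnoDB page size (the 16K value and dataset name here
are assumptions, not what the poster wrote):
bash-4.0# zfs set recordsize=16K data/mysql   # match InnoDB's 16K page size
bash-4.0# zfs get recordsize data/mysql       # verify; affects only newly written files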
> Doesn't this mean that if you enable write back, and you have
> a single, non-mirrored raid-controller, and your raid controller
> dies on you so that you lose the contents of the nvram, you have
> a potentially corrupt file system?
It is understood that any single point of failure could result ...
> ZFS has intelligent prefetching. AFAIK, Solaris disk drivers do not
> prefetch.
Can you point me to any reference? I didn't find anything stating yea or nay
for either of these.
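Not a reference, but one way to test the ZFS side empirically is the
zfs_prefetch_disable tunable: toggle it and compare the workload (a sketch,
assuming a live system you can experiment on):
bash-4.0# echo "zfs_prefetch_disable/W0t1" | mdb -kw   # disable ZFS file-level prefetch
bash-4.0# echo "zfs_prefetch_disable/W0t0" | mdb -kw   # re-enable it
# or persistently, in /etc/system:
#   set zfs:zfs_prefetch_disable = 1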
Thanks to everyone who has tried to help. This has gotten a bit crazier: I
removed the 'faulty' drive and let the pool run in degraded mode. It would
appear that now another drive has decided to play up:
bash-4.0# zpool status
  pool: data
 state: DEGRADED
status: One or more devices has b...
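Typical next steps from this state (the device name is hypothetical):
bash-4.0# zpool status -v data        # identify which device is now erroring
bash-4.0# zpool clear data c1t2d0     # clear the error counters and retry it
bash-4.0# zpool replace data c1t2d0   # or resilver onto a replacement disk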
On Sat, Feb 20, 2010 at 11:14, Lutz Schumann wrote:
> Hello list,
>
> Being a Linux guy, I'm actually quite new to OpenSolaris. One thing I miss
> is udev. I found that when using SATA disks with ZFS, hot plug always
> required manual intervention (cfgadm).
>
> I would like to automate the disk replacement, so that it is a fully
> automatic process ...
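A sketch of the manual sequence being automated, plus the pool property that
removes the zpool step (the controller/port and device names are hypothetical):
bash-4.0# cfgadm -c unconfigure sata0/3    # detach the failed disk's port
(swap the drive physically)
bash-4.0# cfgadm -c configure sata0/3      # bring the new disk online
bash-4.0# zpool replace data c1t3d0        # resilver onto the replacement
bash-4.0# zpool set autoreplace=on data    # let ZFS issue the replace itself
# With autoreplace=on, a new disk found in the same physical slot is
# automatically formatted and resilvered into the pool.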
On 20/02/2010 01:34, Rob Logan wrote:
> This would probably work given that your computer never crashes
> in an uncontrolled manner. If it does, some data may be lost
> (and possibly the entire pool lost, if you are unlucky).
The pool would never be at risk, but when your server
reboots, its c...