Harry Putnam rea...@newsguy.com writes:
Ethan notet...@gmail.com writes:
Assuming your drives support SMART, I'd install smartmontools and see if
there are any SMART errors on the drive. While the absence of SMART errors
[...]
I've had trouble getting smartmontools to work with some of my
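A minimal sketch of the usual checks once smartmontools is on the box (the device path and -d type below are only examples; both vary with the controller, and the package itself may need to come from a contrib repository or be built from source):

# identify the drive and confirm SMART support
pfexec smartctl -i /dev/rdsk/c0t0d0s0
# overall health verdict and the full attribute/error log dump
pfexec smartctl -H /dev/rdsk/c0t0d0s0
pfexec smartctl -a /dev/rdsk/c0t0d0s0
# some SATA drives behind SAS/RAID HBAs need an explicit device type, e.g.
pfexec smartctl -a -d sat /dev/rdsk/c0t0d0s0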
Hello all,
Currently I'm evaluating a system with an Adaptec 52445 RAID HBA, and
the driver supplied by OpenSolaris doesn't support JBOD drives.
I'm running snv_134, but when I try to uninstall the SUNWaac driver I
get the following error:
pkgrm SUNWaac
The following package is currently
Hi
I had the same problem with 2405
Remove the aac driver from 134 and install the driver from Adaptec. The driver is for
Solaris 10u4 but it works.
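For anyone following along, a rough sketch of that swap, assuming the Adaptec bundle unpacks to a SVR4 package (the package path below is made up; check the README in the Adaptec download and keep a fallback boot environment):

# remove the bundled aac package and driver binding
pfexec pkgrm SUNWaac          # or: pfexec pkg uninstall aac (plus its dependents)
pfexec rem_drv aac
# add the Adaptec-supplied Solaris 10u4 package (hypothetical path)
pfexec pkgadd -d /tmp/adaptec-aac-pkg
# reconfiguration reboot so the new driver attaches
pfexec reboot -- -r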
On Mar 29, 2010, at 4:25 PM, Bruno Sousa bso...@epinfante.com wrote:
Hello all,
Currently i'm evaluating a system with an Adaptec 52445 Raid HBA, and
the driver supplied
Just FYI, flame wars please go to /dev/null.
http://www.cuddletech.com/blog/pivot/entry.php?id=1120
Solaris No Longer Free
28 Mar '10 - 10:14 by benr
Hot on the heels of Oracle's revamp of Solaris support, the licensing agreement
for free downloads of Solaris 10 has changed. InfoWorld broke the
On Mon, Mar 29, 2010 at 4:25 PM, Bruno Sousa bso...@epinfante.com wrote:
Hello all,
Currently i'm evaluating a system with an Adaptec 52445 Raid HBA, and
the driver supplied by Opensolaris doesn't support JBOD drives.
I'm running snv_134 but when i try to do uninstall the SUNWacc driver i
pkg uninstall aac
Creating Plan
pkg: Cannot remove
'pkg://opensolaris.org/driver/storage/a...@0.5.11
,5.11-0.134:20100302T021758Z' due to the following packages that depend
on it:
pkg://opensolaris.org/storage/storage-ser...@0.1,5.11-0.134:20100302T050950Z
pkg uninstall aac storage-server
pkg:
Both are driver modules for storage adapters
Properties can be reviewed in the documentation:
ahci: http://docs.sun.com/app/docs/doc/816-5177/ahci-7d?a=view
mpt: http://docs.sun.com/app/docs/doc/816-5177/mpt-7d?a=view
ahci has a man entry on b133, as well.
cheers,
Tonmaus
On 03/29/2010 09:32 AM, Yariv Graf wrote:
Hi
I had the same problem with 2405
Remove aac of 134 and install driver from adaptec. Driver for sol10u4
On Mar 29, 2010, at 4:25 PM, Bruno Sousa bso...@epinfante.com wrote:
Currently i'm evaluating a system with an
On Mon, Mar 29, 2010 at 4:57 PM, Bruno Sousa bso...@epinfante.com wrote:
pkg uninstall aac
Creating Plan
pkg: Cannot remove
'pkg://opensolaris.org/driver/storage/a...@0.5.11
,5.11-0.134:20100302T021758Z' due to the following packages that depend
on it:
On Sat, 27 Mar 2010, Frank Middleton wrote:
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Rebooted from c0t0d0s0
zpool split rpool spool
Rebooted from c0t0d0s0, both rpool and spool were mounted
Rebooted from c0t1d0s0, only rpool
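For reference, that sequence as commands, using the device names from the description above (the installgrub step is the usual extra step if the second disk is meant to be bootable):

# attach the second disk to the root pool and let it resilver
pfexec zpool attach rpool c0t1d0s0 c0t0d0s0
pfexec zpool status rpool            # wait until the resilver completes
# make the new disk bootable (x86)
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
# split the mirror into a new, independent pool
pfexec zpool split rpool spool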
On Mon, 29 Mar 2010, Victor Latushkin wrote:
On Mar 29, 2010, at 1:57 AM, Jim wrote:
Yes - but it does nothing. The drive remains FAULTED.
Try to detach one of the failed devices:
zpool detach tank 4407623704004485413
As Victor says, the detach should work. This is a known issue and
Exactly where in the menu.lst would I put the -r ?
Thanks in advance.
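Assuming the -r being asked about is a reconfiguration boot, it goes on the kernel$ line of the GRUB entry you boot, after the unix path. A sketch of an illustrative menu.lst entry (the title, findroot and bootfs values are only examples):

title OpenSolaris snv_134 (reconfigure)
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/snv_134
kernel$ /platform/i86pc/kernel/$ISADIR/unix -r -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive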
Hi,
as Richard Elling wrote earlier:
For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a typical home system)
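As a concrete sketch of that layout (pool, device, and slice names are illustrative): once the root pool sits in a 15-20 GB slice 0, the leftover slice can be handed to a data pool as L2ARC:

# give the remaining SSD space (here assumed to be slice 1) to the data pool as cache
pfexec zpool add tank cache c3t0d0s1
# verify: the device shows up under a "cache" section
pfexec zpool status tank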
Thanks for the suggestion, but I have tried detaching and it refuses, reporting "no
valid replicas". Capture below.
C3P0# zpool status
pool: tank
state: DEGRADED
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
raidz1 DEGRADED 0 0 0
ad4 ONLINE 0 0 0
ad6 ONLINE 0 0 0
Just to apologize: this not only sounds lame but IS pretty lame.
Somehow in reading the output of `zpool status POOL', I just blew right
by the URL included there:
http://www.sun.com/msg/ZFS-8000-9P
Which has quite a decent discussion of what it means.
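For the archives, the remediation that page walks through boils down to: find the device that is accumulating checksum errors, then either clear the counters or replace it. A sketch with illustrative pool and device names:

# see which vdev is accumulating errors and which files, if any, are affected
pfexec zpool status -v tank
# if it looks like a one-off event, clear the error counters and keep watching
pfexec zpool clear tank c1t2d0
# if the errors keep coming back, replace the disk
pfexec zpool replace tank c1t2d0 c1t3d0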
cm == Courtney Malone court...@courtneymalone.com writes:
j == Jim biainmcna...@hotmail.com writes:
j Thanks for the suggestion, but have tried detaching but it
j refuses reporting no valid replicas.
Yeah, this happened to someone else also; see the list archives around
2008-12-03:
I have a quick ZFS question. With most hardware RAID controllers, all the data
and the array metadata are stored on the disks. Therefore, the integrity of the data can
survive a controller failure or the deletion of the LUN as long as it is
recreated with the same drives in the same locations. Does this
JD Trout wrote:
I have a quick ZFS question. With most hardware raid controllers all
the data and the info is stored on the disk. Therefore, the integrity
of the data can survive a controller failure or the deletion of the
LUN as long as it is recreated with the same drives in the same
A new ARC case:
There is a long-standing RFE for zfs to be able to describe what has
changed between the snapshots of a dataset. To provide this
capability, we propose a new 'zfs diff' sub-command. When run with
appropriate privilege the sub-command describes what file system level
changes
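For readers who want to picture it, a sketch of the proposed usage (dataset and snapshot names are made up, and the one-character change codes follow the case description, so treat the exact output as provisional):

# what changed between two snapshots of a dataset
pfexec zfs diff tank/home@monday tank/home@tuesday
# example output: M = modified, + = created, - = removed, R = renamed
# M  /tank/home/alice/report.txt
# +  /tank/home/alice/new-notes.txt
# -  /tank/home/bob/old.log
# R  /tank/home/draft.txt -> /tank/home/final.txt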
One really good use for zfs diff would be: as a way to index zfs send
backups by contents.
Nico
--
That is great to hear. What is the command to do this? I set up a test
situation and I would like to give it a try.
On Mon, Mar 29, 2010 at 3:49 PM, JD Trout jdtr...@ucla.edu wrote:
That is great to hear. What is the command to do this? I setup a test
situation and I would like to give it a try.
If you can plan the removal, simply 'zpool export' your pool, then 'zpool
import' it on the new controller /
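A minimal sketch of that planned move, with an illustrative pool name:

# on the old controller: quiesce and export the pool
pfexec zpool export tank
# physically move the disks / swap the controller, then import
pfexec zpool import tank
# if the devices aren't found automatically, point import at the device directory
pfexec zpool import -d /dev/dsk tank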
zfs diff is incredibly cool.
On 30-3-2010 0:39, Nicolas Williams wrote:
One really good use for zfs diff would be: as a way to index zfs send
backups by contents.
Nico
Any prediction about the release target? snv_13x?
Bruno
r...@cs6:~# zpool import
pool: content3
id: 14184872052409584084
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
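When an import fails like this, the usual escalation on builds that have pool recovery (snv_128 and later) looks roughly like the sketch below; -F rolls the pool back to its last consistent transaction group and can discard the most recent writes, so the -n dry run is worth doing first:

# forced import first (covers the "may be active on another system" case)
pfexec zpool import -f content3
# dry run: can the pool be made importable by discarding the last few transactions?
pfexec zpool import -nF content3
# actual recovery import (rolls back to the last good txg)
pfexec zpool import -F content3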
On Mon, Mar 29, 2010 at 5:39 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
One really good use for zfs diff would be: as a way to index zfs send
backups by contents.
Or to generate the list of files for incremental backups via NetBackup
or similar. This is especially important for file
Perfect. Thanks!
On 03/29/10 16:44, Mike Gerdts wrote:
On Mon, Mar 29, 2010 at 5:39 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
One really good use for zfs diff would be: as a way to index zfs send
backups by contents.
Or to generate the list of files for incremental backups via NetBackup
or
On Mon, Mar 29, 2010 at 06:38:47PM -0400, David Magda wrote:
A new ARC case:
I read this earlier this morning. Welcome news indeed!
I have some concerns about the output format, having worked with
similar requirements in the past. In particular: as part of the
monotone VCS when reporting
On Mar 29, 2010, at 4:42 PM, Tom Bird wrote:
r...@cs6:~# zpool import
pool: content3
id: 14184872052409584084
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but
On Tue, Mar 30, 2010 at 12:37:15PM +1100, Daniel Carosone wrote:
There will also need to be clear rules on output ordering, with
respect to renames, where multiple changes have happened to renamed
files.
Separately, but relevant in particular to the above due to the
potential for races: what
On 3/29/10 8:02 PM, Daniel Carosone wrote:
On Tue, Mar 30, 2010 at 12:37:15PM +1100, Daniel Carosone wrote:
There will also need to be clear rules on output ordering, with
respect to renames, where multiple changes have happened to renamed
files.
Separately, but relevant in particular to the
On Mon, Mar 29, 2010 at 01:10:22PM -0700, F. Wessels wrote:
The caiman installer allows you to control the size of the partition
on the boot disk but it doesn't allow you (at least I couldn't
figure out how) to control the size of the slices. So you end up with
slice0 filling the entire
On Tue, Mar 30, 2010 at 03:13:45PM +1100, Daniel Carosone wrote:
You can (see the sketch after this list):
- install to a partition that's the size you want rpool to be
- expand the partition to the full disk
- expand the s2 slice to the full disk
- leave the s0 slice for rpool alone
- make another slice for l2arc in the
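A rough sketch of those steps (device and pool names are illustrative; fdisk and format are interactive, so the slice edits are described in comments):

# 1. grow the Solaris fdisk partition to the whole disk
#    (interactively: delete and recreate it with the same starting cylinder)
pfexec fdisk /dev/rdsk/c4t0d0p0
# 2. in format(1M) -> partition: grow slice 2 to the full disk, leave
#    slice 0 (rpool) alone, and build slice 1 from the remaining cylinders
pfexec format -d c4t0d0
# 3. hand the new slice to a data pool as a cache device
pfexec zpool add tank cache c4t0d0s1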