If I understood correctly:
- there is no requirement for the system to boot (or be bootable)
outside of your secure locations.
- you are willing to accept separate tracking and tagging of removable
media, e.g. for key distribution.
Consider, at least for purposes of learning from the
Sorry for abusing the mailing list, but I don't know how to report
bugs anymore and have no visibility of whether this is a
known/resolved issue. So, just in case it is not...
With Solaris 11 Express, scrubbing a pool with encrypted datasets for
which no key is currently loaded, unrecoverable
(for lack of a better term; I'm open to suggestions) a pseudo-zvol. It's meant to be a low-overhead way to emulate a slice within a pool, so no COW or related zfs features.
Are these a zslice?
zbart - Don't have a CoW, man!
This message posted from opensolaris.org
Caveat: do not enable nonvolatile write cache for UFS.
Correction: do not enable *volatile* write cache for UFS :-)
--
Dan.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
If your priorities were different, or for others pondering a similar question,
the PATA disk might be a hotspare.
Did anyone ever have success with this?
I'm trying to add a usb flash device as rpool cache, and am hitting the same
problem, even after working through the SMI/EFI label and other issues above.
r...@asura:~# zpool add rpool cache /dev/dsk/c6t0d0s0
invalid vdev specification
use '-f' to
Use the SAS drives as l2arc for a pool on sata disks. If your l2arc is the
full size of your pool, you won't see reads from the pool (once the cache is
primed).
If you're purchasing all the gear from new, consider whether SSD in this mode
would be better than 15k sas.
This sounds like FUD.
There's a comprehensive test suite, and it apparently passes.
Try iSCSI?
Not a ZFS bug. [SMI vs EFI labels vs BIOS booting]
and so also only a problem for disks that are members of the root pool.
ie, I can have 1Tb disks as part of a non-bootable data pool, with EFI labels,
on a 32-bit machine?
Other details - the original ZFS was created at ZFS
version 14 on SNV b105, trying to be restored to ZFS
version 15 on SNV b114. Any help would be
appreciated.
The zfs send/recv format is not warranted to be compatible between revisions.
I don't know, offhand, if that is the problem in
There was a discussion in zfs-code around error-correcting (rather than just
-detecting) properties of the checksums currently kept, and of potential
additional checksum methods with stronger properties.
It came out of another discussion about fletcher2 being both weaker than
desired, and
Sorry, don't have a thread reference
to hand just now.
http://www.opensolaris.org/jive/thread.jspa?threadID=100296
Note that there's little empirical evidence that this is directly applicable to
the kinds of errors (single bit, or otherwise) that a single failing disk
medium would produce.
Thank you! Am I right in thinking that rpool
snapshots will include things like swap? If so, is
there some way to exclude them?
Hi Carl :)
You can't exclude them from the send -R with something like --exclude, but you
can make sure there are no such snapshots (which aren't useful anyway)
I have a gzip-9 compressed filesystem that I want to
backup to a remote system and would prefer
not to have to recompress everything
again at such great computation expense.
This would be nice, and a similar desire applies to upcoming streams after
zfs-crypto lands.
However, the present
Save the data to a file stored in zfs. Then you are
covered. :-)
Only if the stream was also separately covered in transit.
While you want in-transit protection regardless, zfs recving the stream into
a pool validates that it was not damaged in transit, as well as giving you
at-rest
On Sun, 23 Aug 2009, Daniel Carosone wrote:
Userland tools to read and verify a stream, without
having to play
it into a pool (seek and io overhead) could really
help here.
This assumes that the problem is data corruption of
the stream, which
could occur anywhere, even
You can validate a stream stored as a file at any
time using the zfs receive -n option.
Interesting. Maybe it's just a documentation issue, but the man page doesn't
make it clear that this command verifies much more than the names in the
stream, and suggests that the rest of the data could
How about if you don't 'detach' them? Just unplug
the backup device in the pair, plug in the
temporary replacement, and tell zfs to
replace the device.
Hm. I had tried a variant: a three-way mirror, with one device missing most of
the time. The annoyance of that was that the pool
Furthermore, this clarity needs to be posted somewhere much, much more visible
than buried in some discussion thread.
I welcome the re-write. The deficiencies of the current snapshot cleanup
implementation have been a source of constant background irritation to me for a
while, and the subject of a few bugs.
Regarding the issues in contention
- the send hooks capability is useful and should remain, but the
Daniel Carosone d...@geek.com.au writes:
Would there be a way to avoid taking snapshots if
they're going to be zero-sized?
I don't think it is easy to do, the txg counter is on
a pool level,
[..]
it would help when the entire pool is idle, though.
.. which is exactly the scenario
you can fetch the cr_txg (cr for creation) for a
snapshot using zdb,
yes, but this is hardly an appropriate interface. zdb is also likely to cause
disk activity because it looks at many things other than the specific item in
question.
but the very creation of a snapshot requires a new
txg
Those are great, but they're about testing the zfs software. There's a small
amount of overlap, in that these injections include trying to simulate the
hoped-for system response (e.g, EIO) to various physical scenarios, so it's
worth looking at for scenario suggestions.
However, for most of
[verify on real hardware and share results]
Agree 110%.
Good :)
Yanking disk controller and/or power cables is an
easy and obvious test.
The problem is that yanking a disk tests the failure
mode of yanking a disk.
Yes, but the point is that it's a cheap and easy test, so you might as
So we also need a txg dirty or similar
property to be exposed from the kernel.
Or not..
if you find this condition, defer, but check again in a minute (really, after a
full txg_interval has passed) rather than on the next scheduled snapshot.
on that next check, if the txg has advanced again,
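The deferral logic above can be sketched in shell; note that `current_txg` is a hypothetical stand-in for whatever interface might eventually expose the pool's txg counter (no such command exists today):

```shell
# Sketch of zero-size-snapshot avoidance: skip the snapshot when the
# pool's txg has not advanced since we last looked, and re-check after
# a txg interval rather than waiting for the next scheduled snapshot.
# current_txg is a hypothetical command, not a real interface.
last_txg=0
maybe_snapshot() {
    now=$(current_txg)
    if [ "$now" -eq "$last_txg" ]; then
        action=defer        # pool idle: snapshot would be empty
    else
        last_txg=$now
        action=snapshot     # pool dirtied since last check
    fi
}
```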
Speaking practically, do you evaluate your chipset
and disks for hotplug support before you buy?
Yes, if someone else has shared their test results previously.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
I haven't used it myself, but you could look at the EON software NAS appliance:
http://eonstorage.blogspot.com/
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Isn't this only true if the file sizes are such that the concatenated
blocks are perfectly aligned on the same zfs block boundaries they used
before? This seems unlikely to me.
Yes, that would be the case.
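A quick way to convince yourself, using sha256sum as a stand-in for the zfs block checksum and 128K chunks for the default recordsize: shift the data by a single byte (as concatenation into a new file would) and no chunk hash survives, so none of the blocks would match for dedup.

```shell
# Simulate 128K zfs records: chunk a file, hash each chunk, then shift
# the whole file by one byte and re-chunk. No chunk hash survives.
mkdir -p /tmp/chunks_a /tmp/chunks_b
rm -f /tmp/chunks_a/* /tmp/chunks_b/*
head -c 524288 /dev/urandom > /tmp/align_orig
printf 'X' | cat - /tmp/align_orig > /tmp/align_shift
split -b 131072 /tmp/align_orig  /tmp/chunks_a/
split -b 131072 /tmp/align_shift /tmp/chunks_b/
sha256sum /tmp/chunks_a/* | awk '{print $1}' | sort > /tmp/chunks_a.sha
sha256sum /tmp/chunks_b/* | awk '{print $1}' | sort > /tmp/chunks_b.sha
comm -12 /tmp/chunks_a.sha /tmp/chunks_b.sha | wc -l   # shared hashes: 0
```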
While eagerly awaiting b128 to appear in IPS, I have been giving this issue
Jokes aside, this is too easy a mistake to make, with consequences that are
too hard to correct. Does anyone disagree?
No, and this sums up the situation nicely, in that there are two parallel paths
toward a resolution:
- make the mistake harder to make (various ideas here)
- make the
but if you attempt to add a disk to a redundant
config, you'll see an error message similar [..]
Doesn't the mismatched replication message help?
Not if you're trying to make a single disk pool redundant by adding ..
er, attaching .. a mirror; then there won't be such a warning, however
effective that warning might or might not be otherwise.
Not a problem because you can then detach
I can't (yet!) say I've seen the same, with respect to disappearing snapshots.
However, I can confirm that I am seeing the same thing, with respect to
snapshots without the frequent prefix..
$ zfs list -t snapshot | fgrep :-
rp...@zfs-auto-snap:-2009-12-14-13:15
There was an announcement made in November about auto
snapshots being made obsolete in build 128
That thread (which I know well) talks about the replacement of the
*implementation*, while retaining (the majority of) the behaviour and
configuration interface. The old implementation had
None of these look like the issue either. With 128, I did have to edit the
code to avoid the month rollover error, and add the missing dependency
dbus-python26.
I think I have a new install that went to 129 without having auto snapshots
enabled yet. When I can get to that machine later, I
Your parenthetical comments here raise some concerns, or at least eyebrows,
with me. Hopefully you can lower them again.
compress, encrypt, checksum, dedup.
(and you need to use zdb to get enough info to see the
leak - and that means you have access to the raw devices)
An attacker with
Yet another way to thin-out the backing devices for a zpool on a
thin-provisioned storage host, today: resilver.
If your zpool has some redundancy across the SAN backing LUNs, simply
drop and replace one at a time and allow zfs to resilver only the
blocks currently in use onto the replacement
On Sun, Jan 10, 2010 at 09:54:56AM -0600, Bob Friesenhahn wrote:
WTF?
urandom is a character device and is returning short reads (note the
0+n vs n+0 counts). dd is not padding these out to the full blocksize
(conv=sync) or making multiple reads to fill blocks (conv=fullblock).
Evidently the
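The short-read behaviour is easy to reproduce with a pipe, which also delivers short reads; the `iflag=fullblock` spelling below is GNU dd's:

```shell
# 1000000 bytes through a pipe: dd's reads are capped by the pipe
# buffer, so every read is partial ("0+n records in"). With GNU dd's
# iflag=fullblock, dd re-reads until each block is full (or EOF).
head -c 1000000 /dev/urandom > /tmp/ddsrc
cat /tmp/ddsrc | dd of=/tmp/ddplain bs=1M 2>/tmp/ddlog_plain
cat /tmp/ddsrc | dd of=/tmp/ddfull bs=1M iflag=fullblock 2>/tmp/ddlog_full
grep 'records in' /tmp/ddlog_plain   # many partial records
grep 'records in' /tmp/ddlog_full    # one (partial, short input) record
```

Without conv=sync the data still all arrives (dd writes the partial blocks as-is); the damage shows up when partial records are padded or miscounted.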
With all the recent discussion of SSD's that lack suitable
power-failure cache protection, surely there's an opportunity for a
separate modular solution?
I know there used to be (years and years ago) small internal UPS's
that fit in a few 5.25 drive bays. They were designed to power the
I have a netbook with a small internal ssd as rpool. I have an
external usb HDD with much larger storage, as a separate pool, which
is sometimes attached to the netbook.
I created a zvol on the external pool, the same size as the internal
ssd, and attached it as a mirror to rpool for backup. I
I should have mentioned:
- opensolaris b130
- of course I could use partitions on the usb disk, but that's so much less
flexible.
On Tue, Jan 12, 2010 at 02:38:56PM +1300, Ian Collins wrote:
How did you set the subdevice in the off line state?
# zpool offline rpool /dev/zvol/dsk/
sorry if that wasn't clear.
Did you detach the device from the mirror?
No, because then:
- it will have to resilver fully on next
On Mon, Jan 11, 2010 at 06:03:40PM -0800, Richard Elling wrote:
IMHO, a split mirror is not as good as a decent backup :-)
I know.. that was more by way of introduction and background. It's
not the only method of backup, but since this disk does get plugged
into the netbook frequently enough it
[google server with batteries]
These are cool, and a clever rethink of the typical data centre power
supply paradigm. They keep the server running, until either a
generator is started or a graceful shutdown can be done.
Just to be clear, I'm talking about something much smaller, that
provides
On Mon, Jan 11, 2010 at 10:10:37PM -0800, Lutz Schumann wrote:
p.s. While writing this I'm thinking if a-card handles this case well ? ...
maybe not.
apart from the fact that they seem to be hard to source, this is a big
question about this interesting device for me too. I hope so, since
it
On Tue, Jan 12, 2010 at 01:26:15PM -0700, Cindy Swearingen wrote:
I see now how you might have created this config.
I tried to reproduce this issue by creating a separate pool on another
disk and a volume to attach to my root pool, but my system panics when
I try to attach the volume to the
On Fri, Jan 15, 2010 at 10:37:15AM -0500, Charles Menser wrote:
Perhaps an ISCSI mirror for a laptop? Online it when you are back
home to keep your backup current.
I do exactly this, but:
- It's not the only thing I do for backup.
- The iscsi initiator is currently being a major PITA for me.
On Sun, Jan 17, 2010 at 08:05:27AM -0800, Richard Elling wrote:
Personally, I like to start with a fresh full image once a month, and
then do daily incrementals for the rest of the month.
This doesn't buy you anything.
.. as long as you scrub both the original pool and the backup pool
On Sun, Jan 17, 2010 at 05:31:39AM -0500, Edward Ned Harvey wrote:
Instead, it is far preferable to zfs send | zfs receive ... That is,
receive the data stream on external media as soon as you send it.
Agree 100% - but..
.. it's hard to beat the convenience of a backup file format, for
all
On Sun, Jan 17, 2010 at 04:38:03PM -0600, Bob Friesenhahn wrote:
On Mon, 18 Jan 2010, Daniel Carosone wrote:
.. as long as you scrub both the original pool and the backup pool
with the same regularity. sending the full backup from the source is
basically the same as a scrub of the source
On Sun, Jan 17, 2010 at 06:21:45PM +1300, Ian Collins wrote:
I have a Solaris 10 update 6 system with a snapshot I can't remove.
zfs destroy -f snap reports the device as being busy. fuser doesn't
show any process using the filesystem and it isn't shared.
Is it the parent snapshot for a
On Mon, Jan 18, 2010 at 03:24:19AM -0500, Edward Ned Harvey wrote:
Unless I am mistaken, I believe, the following is not possible:
On the source, create snapshot 1
Send snapshot 1 to destination
On the source, create snapshot 2
Send incremental, from 1 to 2 to the destination.
On the
On Mon, Jan 18, 2010 at 07:34:51PM +0100, Lassi Tuura wrote:
Consider then, using a zpool-in-a-file as the file format, rather than
zfs send streams.
This is an interesting suggestion :-)
Did I understand you correctly that once a slice is written, zfs
won't rewrite it? In other words,
On Mon, Jan 18, 2010 at 01:38:16PM -0800, Richard Elling wrote:
The Solaris 10 10/09 zfs(1m) man page says:
The format of the stream is committed. You will be able
to receive your streams on future versions of ZFS.
I'm not sure when that hit snv, but obviously it was
On Mon, Jan 18, 2010 at 03:25:56PM -0800, Erik Trimble wrote:
Hopefully, once BP rewrite materializes (I know, I'm treating this
much to much as a Holy Grail, here to save us from all the ZFS
limitations, but really...), we can implement defragmentation which
will seriously reduce the amount
On Mon, Jan 18, 2010 at 05:52:25PM +1300, Ian Collins wrote:
Is it the parent snapshot for a clone?
I'm almost certain it isn't. I haven't created any clones and none show
in zpool history.
What about snapshot holds? I don't know if (and doubt whether) these
are in S10, but since they
On Tue, Jan 19, 2010 at 12:16:01PM +0100, Joerg Schilling wrote:
Daniel Carosone d...@geek.com.au wrote:
I also don't recommend files 1Gb in size for DVD media, due to
iso9660 limitations. I haven't used UDF enough to say much about any
limitations there.
ISO-9660 supports files up
There is a tendency to conflate backup and archive, both generally
and in this thread. They have different requirements.
Backups should enable quick restore of a full operating image with all
the necessary system-level attributes. They are concerned with SLAs,
uptime, outages and data loss
On Wed, Jan 20, 2010 at 12:42:35PM -0500, Wajih Ahmed wrote:
Mike,
Thank you for your quick response...
Is there a way for me to test the compression from the command line to
see if lzjb is giving me more or less than the 12.5% mark? I guess it
will depend on whether there is an lzjb command
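There's no standalone lzjb compressor shipped with the OS; the honest test is a scratch dataset with compression=lzjb and `zfs get compressratio`. As a rough command-line proxy, gzip -1 gives an optimistic upper bound, since it generally compresses somewhat better than lzjb:

```shell
# Rough proxy only: treat the gzip -1 ratio as an upper bound on what
# compression=lzjb would achieve on the same data.
yes 'sample log line 12345 with some repeated structure' \
    | head -c 1048576 > /tmp/lzjb_sample
orig=$(wc -c < /tmp/lzjb_sample)
comp=$(gzip -1 -c /tmp/lzjb_sample | wc -c)
awk -v o="$orig" -v c="$comp" 'BEGIN { printf "ratio %.2fx\n", o / c }'
```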
On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote:
Though the ARC case, PSARC/2007/618 is unpublished, I gather from
googling and the source that L2ARC devices are considered auxiliary,
in the same category as spares. If so, then it is perfectly reasonable to
expect that it gets
On Wed, Jan 20, 2010 at 10:04:34AM -0800, Willy wrote:
To those concerned about this issue, there is a patched version of
smartmontools that enables the querying and setting of TLER/ERC/CCTL
values (well, except for recent desktop drives from Western
Digital).
[Joining together two recent
On Thu, Jan 21, 2010 at 09:36:06AM -0800, Richard Elling wrote:
On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote:
On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote:
Though the ARC case, PSARC/2007/618 is unpublished, I gather from
googling and the source that L2ARC devices
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
What I'm trying to get a handle on is how to estimate the memory
overhead required for dedup on that amount of storage.
We'd all appreciate better visibility of this. This requires:
- time and observation and experience, and
-
On Thu, Jan 21, 2010 at 01:55:53PM -0800, Michelle Knight wrote:
The error messages are in the original post. They are...
/mirror2/applications/Microsoft/Operating Systems/Virtual PC/vm/XP-SP2/XP-SP2
Hard Disk.vhd: File too large
/mirror2/applications/virtualboximages/xp/xp.tar.bz2: File too
On Fri, Jan 22, 2010 at 08:55:16AM +1100, Daniel Carosone wrote:
For performance (rather than space) issues, I look at dedup as simply
increasing the size of the working set, with a goal of reducing the
amount of IO (avoided duplicate writes) in return.
I should add and avoided future
On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote:
+ support file systems larger than 2GiB, include 32-bit UIDs and GIDs
file systems, but what about individual files within?
On Thu, Jan 21, 2010 at 11:14:33PM +0100, Henrik Johansson wrote:
I think this could scare or even make new users do terrible things,
even if the errors could be fixed. I think I'll file a bug, agree?
Yes, very much so.
On Thu, Jan 21, 2010 at 03:33:28PM -0800, Richard Elling wrote:
[Richard makes a hobby of confusing Dan :-)]
Heh.
Lutz, is the pool autoreplace property on? If so, god help us all
is no longer quite so necessary.
I think this is a different issue.
I agree. For me, it was the main
On Thu, Jan 21, 2010 at 05:52:57PM -0800, Richard Elling wrote:
I agree with this, except for the fact that the most common installers
(LiveCD, Nexenta, etc.) use the whole disk for rpool[1].
Er, no. You certainly get the option of whole disk or make
partitions, at least with the opensolaris
On Thu, Jan 21, 2010 at 07:33:47PM -0800, Younes wrote:
Hello all,
I have a small issue with zfs.
I create a volume 1TB.
# zfs get all tank/test01
NAME          PROPERTY      VALUE
On Sat, Jan 23, 2010 at 12:30:01PM -0800, Simon Breden wrote:
And regarding mirror vdevs etc, I can see the usefulness of being
able to build a mirror vdev of multiple drives for cases where you
have really critical data -- e.g. a single 4-drive mirror vdev. I
suppose regular backups can help
On Sat, Jan 23, 2010 at 09:04:31AM -0800, Simon Breden wrote:
For resilvering to be required, I presume this will occur mostly in
the event of a mechanical failure. Soft failures like bad sectors
will presumably not require resilvering of the whole drive to occur,
as these types of error are
On Sat, Jan 23, 2010 at 06:39:25PM -0500, Frank Cusack wrote:
On January 23, 2010 5:17:16 PM -0600 Tim Cook t...@cook.ms wrote:
Smaller devices get you to raid-z3 because they cost less money.
Therefore, you can afford to buy more of them.
I sure hope you aren't ever buying for my company! :)
As I said in another post, it's coming time to build a new storage
platform at home. I'm revisiting all the hardware options and
permutations again, for current kit.
Build 125 added something I was very eager for earlier, sata
port-multiplier support. Since then, I've seen very little, if
On Fri, Jan 22, 2010 at 04:12:48PM -0500, Miles Nordin wrote:
w http://www.csc.liv.ac.uk/~greg/projects/erc/
dead link?
Works for me - this is someone who's written patches for smartctl to
set this feature; these are standardised/documented commands, no
reverse engineering of DOS tools
Another issue with all this arithmetic: one needs to factor in the
cost of additional spare disks (what were you going to resilver onto?).
I look at it like this: you purchase the same number of total disks
(active + hot spare + cold spare), and raidz2 vs raidz3 simply moves a
disk from one of
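To make the accounting concrete, here is a sketch with a hypothetical 12-bay chassis (the numbers are illustrative assumptions, not from the thread):

```shell
# Same 12 purchased disks either way; raidz3 converts the hot spare
# into a third parity disk, so usable data disks are unchanged.
total=12
rz2_data=$((total - 2 - 1))   # raidz2: 2 parity + 1 hot spare
rz3_data=$((total - 3 - 0))   # raidz3: 3 parity, no hot spare
echo "raidz2+spare: $rz2_data data disks, raidz3: $rz3_data data disks"
```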
On Thu, Jan 21, 2010 at 03:55:59PM +0100, Matthias Appel wrote:
I have a serious issue with my zpool.
Yes. You need to figure out what the root cause of the issue is.
My zpool consists of 4 vdevs which are assembled into 2 mirrors.
One of these mirrors got degraded because of too many errors on
Some other points and recommendations to consider:
- Since you have the bays, get the controller to drive them,
regardless. They will have many uses, some of which below.
A 4-port controller would allow you enough ports for both the two
empty hotswap bays, plus the dual 2.5 carrier.
On Mon, Jan 25, 2010 at 04:08:04PM -0600, David Dyer-Bennet wrote:
- Don't be afraid to dike out the optical drive, either for case
space or available ports. [..]
[..] Put the drive in an external USB case if you want,
or leave it in the case connected via a USB bridge
On Mon, Jan 25, 2010 at 05:42:59PM -0500, Miles Nordin wrote:
et You cannot import a stream into a zpool of earlier revision,
et thought the reverse is possible.
This is very bad, because it means if your backup server is pool
version 22, then you cannot use it to back up pool
On Mon, Jan 25, 2010 at 05:36:35PM -0500, Miles Nordin wrote:
sb == Simon Breden sbre...@gmail.com writes:
sb 1. In simple non-RAID single drive 'desktop' PC scenarios
sb where you have one drive, if your drive is experiencing
sb read/write errors, as this is the only drive you
On Tue, Jan 26, 2010 at 07:32:05PM -0800, David Dyer-Bennet wrote:
Okay, so this SuperMicro AOC-USAS-L8i is an SAS card? I've never
done SAS; is it essentially a controller as flexible as SCSI that
then talks to SATA disks out the back?
Yes, or SAS disks.
Amazon seems to be the only
On Wed, Jan 27, 2010 at 12:01:36PM -0800, Gregory Durham wrote:
Hello All,
I read through the attached threads and found a solution by a poster and
decided to try it.
That may have been mine - good to know it helped, or at least started to.
The solution was to use 3 files (in my case I made
On Wed, Jan 27, 2010 at 02:34:29PM -0600, David Dyer-Bennet wrote:
Google is working heavily with the philosophy that things WILL fail, so
they plan for it, and have enough redundance to survive it -- and then
save lots of money by not paying for premium components. I like that
On Wed, Jan 27, 2010 at 02:47:47PM -0800, Christo Kutrovsky wrote:
In the case of a ZVOL with the following settings:
primarycache=off, secondarycache=all
How does the L2ARC get populated if the data never makes it to ARC ?
Is this even a valid configuration?
It's valid, I assume, in the
In a thread elsewhere, trying to analyse why the zfs auto-snapshot
cleanup code was cleaning up more aggressively than expected, I
discovered some interesting properties of a zvol.
http://mail.opensolaris.org/pipermail/zfs-auto-snapshot/2010-January/000232.html
The zvol is not thin-provisioned.
On Wed, Jan 27, 2010 at 09:57:08PM -0800, Bill Sommerfeld wrote:
Hi Bill! :-)
On 01/27/10 21:17, Daniel Carosone wrote:
This is as expected. Not expected is that:
usedbyrefreservation = refreservation
I would expect this to be 0, since all the reserved space has been
allocated
On Thu, Jan 28, 2010 at 07:26:42AM -0800, Ed Fang wrote:
4 x 6-disk vdevs in RaidZ1 configuration
3 x 8-disk vdevs in RaidZ2 configuration
Another choice might be
2 x 12-disk vdevs in raidz2 configuration
This gets you the space of the first, with the recovery properties of
the second - at a cost in
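The space arithmetic behind that claim, for anyone checking (data disks = disks minus parity, per vdev; 24 disks total in every layout):

```shell
# Usable data disks for each 24-disk layout.
d_rz1_4x6=$((4 * (6 - 1)))     # 4 x 6-disk raidz1
d_rz2_3x8=$((3 * (8 - 2)))     # 3 x 8-disk raidz2
d_rz2_2x12=$((2 * (12 - 2)))   # 2 x 12-disk raidz2
echo "4x6 raidz1:  $d_rz1_4x6 data disks"
echo "3x8 raidz2:  $d_rz2_3x8 data disks"
echo "2x12 raidz2: $d_rz2_2x12 data disks"
```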
On Thu, Jan 28, 2010 at 09:33:19PM -0800, Ed Fang wrote:
We considered a SSD ZIL as well but from my understanding it won't
help much on sequential bulk writes but really helps on random
writes (to sequence going to disk better).
slog will only help if your write load involves lots of
On Sat, Jan 30, 2010 at 06:07:48PM -0500, Frank Middleton wrote:
On 01/30/10 05:33 PM, Ross Walker wrote:
Just install the OS on the first drive and add the second drive to form
a mirror.
After more than a year or so of experience with ZFS on drive constrained
systems, I am convinced that
Two related questions:
- given an existing pool with dedup'd data, how can I find the
current size of the DDT? I presume some zdb work to find and dump the
relevant object, but what specifically?
- what's the expansion ratio for the memory overhead of L2ARC entries?
If I know my DDT
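For the first question, `zdb -DD <poolname>` should dump the DDT statistics. For the second, pending better visibility, a back-of-envelope sketch; every number below is an assumption, and the ~320 bytes per in-core DDT entry is an often-quoted ballpark rather than an official figure:

```shell
# Rough DDT sizing: unique blocks = data / recordsize, times an
# assumed per-entry overhead. All inputs here are assumptions.
pool_used=$((1024 * 1024 * 1024 * 1024))  # assume 1 TiB of unique data
recordsize=$((128 * 1024))                # assume untuned 128K records
entry_bytes=320                           # ballpark in-core entry size
entries=$((pool_used / recordsize))
ddt_mib=$((entries * entry_bytes / 1024 / 1024))
echo "$entries DDT entries, ~${ddt_mib} MiB of dedup metadata"
```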
On Sat, Feb 06, 2010 at 09:22:57AM -0800, Richard Elling wrote:
I'm interested in anecdotal evidence which suggests there is a
problem as it is currently designed.
I like to look at it differently: I'm not sure if there is a
problem. I'd like to have a simple way to discover a problem, using
On Mon, Feb 08, 2010 at 04:58:38AM +0100, Felix Buenemann wrote:
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I have one answer. The other questions are mostly related to your
raid controller, which I can't answer directly.
- Is it safe to run the L2ARC without
On Thu, Feb 04, 2010 at 04:17:17PM -0800, Scott Meilicke wrote:
At this point, my server Gallardo can see the LUN, but like I said, it looks
blank to the OS. I suspect the 'sbdadm create-lu' phase.
Yeah, try the import version of that command.
On Mon, Feb 01, 2010 at 12:22:55PM -0800, Lutz Schumann wrote:
Created a pool on head1 containing just the cache
device (c0t0d0).
This is not possible, unless there is a bug. You
cannot create a pool
with only a cache device. I have verified this on
b131:
# zpool create
On Mon, Feb 08, 2010 at 11:24:56AM -0800, Lutz Schumann wrote:
Only with the zdb(1M) tool but note that the
checksums are NOT of files
but of the ZFS blocks.
Thanks - blocks, right (doh) - that's what I was missing. Damn, it would be so
nice :(
If you're comparing the current data to a
On Mon, Feb 08, 2010 at 11:28:11PM +0100, Lasse Osterild wrote:
Ok, thanks. I know that the amount of used space will vary, but what's
the usefulness of the total size when, e.g., in my pool above 4 x 1G
(roughly, depending on recordsize) are reserved for parity? It's not
like it's usable for
This is a long thread, with lots of interesting and valid observations
about the organisation of the industry, the segmentation of the
market, getting what you pay for vs paying for what you want, etc.
I don't really find within, however, an answer to the original
question, at least the way I
On Mon, Feb 08, 2010 at 05:23:29PM -0700, Cindy Swearingen wrote:
Hi Lasse,
I expanded this entry to include more details of the zpool list and
zfs list reporting.
See if the new explanation provides enough details.
Cindy, feel free to crib from or refer to my text in whatever way might
Although I am in full support of what Sun is doing, to play devil's
advocate: Supermicro is.
They're not the only ones, although the most-often discussed here.
Dell will generally sell hardware and warranty and service add-ons in
any combination, to anyone willing and capable of figuring