a files for cpio.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
driver on top of
zfs. If you have enough RAM, try copying the iso file to /tmp, lofi
mount it from there, then try again.
Mike
files over samba to a couple
Windows machines + a media player.
Side note: Is this right? ditto blocks are extra parity blocks
stored on the same disk (won't prevent total disk failures, but could
provide data recovery if enough parity is available)
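For comparison, ditto blocks are extra full copies of blocks (not parity), spread across devices when possible or across the platter of a single disk otherwise; for user data the number of copies can be requested per-filesystem. A minimal sketch, assuming a hypothetical pool/filesystem named tank/media:

```shell
# Ask ZFS to store two copies of every user-data block in this
# filesystem (metadata is always stored redundantly). On a
# single-disk pool this guards against bad sectors and localized
# corruption, not whole-disk failure.
zfs set copies=2 tank/media

# Confirm the setting.
zfs get copies tank/media
```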
Thanks,
mike
completely failed...
- mike
On 5/4/07, Al Hopper [EMAIL PROTECTED] wrote:
On Fri, 4 May 2007, Lee Fyock wrote:
Hi--
I'm looking forward to using zfs on my Mac at some point. My desktop
server (a dual-1.25GHz G4) has a motley collection of discs that has
accreted over the years: internal EIDE
or anything as long as it's PCI-e and has 4
or 5 eSATA ports that can work with a port multiplier (for 4-5 disks
per port) ... I don't think there is a clear fully supported option
yet or I'd be using it right now.
- mike
I am attempting to install b62 from the b62_zfsboot.iso that was posted last week.
Mike makes a good point. We have some severe problems with build 63. I've been hoping to get an answer for what's going on with it, but so far, I don't have one.
So, note to everyone: for zfs boot
the
reboot? I'm kinda new at this OpenSolaris stuff, so any debugging tips/tricks
would be greatly appreciated.
Mike
No errors that I could see. I'm going to re-try it without the cluster line; I'm not sure if that line is required or not.
The only big difference I see between what you did and what I did was that I didn't have the cluster line. On reboot, mine said something like
for it
setup.
But that's just my viewpoint...
--
Mike Dotson
On 4/28/07, Mike Dotson [EMAIL PROTECTED] wrote:
And this changes the scenario how? I've actually been pondering this
for quite some time now. Why do we backup the root disk? With many of
the tools out now, it makes far more sense to do a flar or incremental
flars of the systems and/or create
Peter Tribble wrote:
On 4/24/07, Darren J Moffat [EMAIL PROTECTED]
wrote:
With reference to Lori's blog posting[1] I'd like
to throw out a few of
my thoughts on splitting up the namespace.
Just a plea with my sysadmin hat on - please don't
go overboard
and make new filesystems just
--
Thanks...
Mike Dotson
Area System Support Engineer - ACS West
Phone: (503) 343-5157
[EMAIL PROTECTED]
Could it be an order problem? NFS trying to start before zfs is mounted?
Just a guess, of course. I'm not real savvy in either realm.
HTH,
Mike
Ben Miller wrote:
I have an Ultra 10 client running Sol10 U3 that has a zfs pool set up on the
extra space of the internal ide disk. There's just
but with just a different target or LUN
range.
--
Mike Gerdts
http://mgerdts.blogspot.com/
. There are a
couple folks out here still running sparc. Is there any news to
report related to the sparc variant ZFS boot?
--
Mike Gerdts
http://mgerdts.blogspot.com/
is, but 512 bytes at a time should be fine.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
I noticed that there is still an open bug regarding removing devices
from a zpool:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Does anyone know if or when this feature will be implemented?
Cindy Swearingen wrote:
Hi Mike,
Yes, outside of the hot-spares feature, you can
.
Thanks a ton. Again, any input (good, bad, ugly, personal experiences
or opinions) is appreciated A LOT!
- mike
[EMAIL PROTECTED] wrote:
Mike,
Take a look at
http://video.google.com/videoplay?docid=8100808442979626078&q=CSI%3Amunich
Granted, this was for demo purposes, but the team in Munich is clearly
leveraging USB sticks for their purposes.
HTH,
Bev.
mike wrote:
I still haven't got any warm and fuzzy
Okay, so since this is fixed, Chris, would you consider using USB/FW now?
I am desperate to replace a server that is failing and I want to
replace it with a proper quiet ZFS-based solution, I hate being held
captive by NTFS issues (it may have corrupted my data now a second
time)
ZFS's
Would the system be able to halt if something was unplugged/some
massive failure happened?
That way if something got tripped, I could fix it before any
corruption or issue occurred.
That would be my safety net, I suppose.
On 3/20/07, Sanjeev Bagewadi [EMAIL PROTECTED] wrote:
Mike,
We have
,
(long long) zfs`arc.c_min / 1024/1024,
(long long) zfs`arc.c_max / 1024/1024,
(long long) zfs`arc.size / 1024/1024,
(long long) zfs`arc.c / 1024/1024);
}
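The fragment above reads like the tail of a DTrace printf() over the kernel's ARC structure. A self-contained sketch of what the whole script may have looked like (assuming the zfs module still exports the arc symbol, as on OpenSolaris builds of that era):

```shell
# Print ARC sizing, in MB, every 10 seconds until interrupted.
dtrace -qn '
tick-10s
{
    printf("c_min=%lld MB  c_max=%lld MB  size=%lld MB  c=%lld MB\n",
        (long long) zfs`arc.c_min / 1024 / 1024,
        (long long) zfs`arc.c_max / 1024 / 1024,
        (long long) zfs`arc.size / 1024 / 1024,
        (long long) zfs`arc.c / 1024 / 1024);
}'
```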
--
Mike Gerdts
http://mgerdts.blogspot.com/
While the snapshot isn't RW, the clone is and would certainly be helpful
in this case
Isn't the whole idea to:
0) boot into single-user/boot-archive if you're paranoid (or just quiesce
and clone if you feel lucky)
1) clone the primary OS instance+relevant-slices boot into the
primary OS
2)
be
able to use the parameters above to achieve what you are trying to do
regardless of which UNIXy file system is being used.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
no heritage with SAM-QFS.
http://www.oracle.com/technology/products/database/asm/index.html
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
I've used this to track down the filename and other tidbits using the object ID
from zpool status -v:
errors: The following persistent errors have been detected:
DATASET OBJECT RANGE
zfspool01/nb60openv 292 1835008-1966080
zfspool01/nb60openv
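To map a DATASET/OBJECT pair from that listing back to a filename, zdb can dump the object's metadata, which includes the file's path when ZFS can resolve it. A sketch using the dataset and object number shown above:

```shell
# Dump verbose metadata for object 292 in the affected dataset;
# look for the "path" line in the output.
zdb -ddddd zfspool01/nb60openv 292
```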
I have a 100 GB SAN LUN in a pool; it had been running OK for about 6 months, then panicked
the system this morning. The system was running S10U2. In the course of
troubleshooting I've installed the latest recommended bundle including KJP
118833-36 and zfs patch 124204-03
created as:
zpool create zfspool01
, not a careful read of all the
parts involved.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
in advance! When I saw ZFS and the upcoming crypto support
planned, it truly would meet all my needs. I have been telling all my
friends about ZFS, we're all excited but none of us have had a use or
equipment that we could use for it yet.
- mike
On 2/5/07, Richard Elling [EMAIL PROTECTED] wrote
My two (everyman's) cents - could something like this be modeled after
MySQL replication or even something like DRBD (drbd.org)? Seems like
possibly the same idea.
On 1/26/07, Jim Dunham [EMAIL PROTECTED] wrote:
Project Overview:
...
of the fs that won't unmount.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
Ooh, they support it? Cool. I'll have to explore that option now.
However, I still really want eSATA.
On 1/23/07, Samuel Hexter [EMAIL PROTECTED] wrote:
We've got two Areca ARC-1261ML cards (PCI-E x8, up to 16 SATA disks each)
running a 12TB zpool on snv54 and Areca's arcmsr driver. They're a
Areca makes excellent PCI Express cards - but they probably have zero
support in Solaris/OpenSolaris. I use them in both Windows and Linux.
They work natively in FreeBSD too. I believe they're still the fastest
cards on the market.
However probably not very appropriate for this since it's a Solaris-based
I'm dying here - does anyone know when or even if they will support these?
I had this whole setup planned out but it requires eSATA + port multipliers.
I want to use ZFS, but currently cannot in that fashion. I'd still
have to buy some [more expensive, noisier, bulky internal drive]
solution
I would suggest using a CompactFlash card for the OS. I believe it
works exactly like IDE, but is more reliable, sucks less power, and
frees up a slot for the larger drive...
On 1/22/07, Elm, Rob [EMAIL PROTECTED] wrote:
Hello ZFS Discussion Members,
I'm looking for help or advice on a
, PCI
express preferred) would be great. Assuming it works with any eSATA
multiplier-aware enclosures (such as the one above)
I think that would open up a LOT of users to ZFS. Most definitely this one -
- mike
On 1/21/07, Moazam Raja [EMAIL PROTECTED] wrote:
Hi all,
I'm thinking of using
Would this be the same as failing a drive on purpose to remove it?
I was under the impression that was supported, but I wasn't sure if
shrinking a ZFS pool would work though.
On 1/18/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
This is a pretty high priority. We are working on it.
?
If you have (or download) the latest installation DVD, look in the
/UpgradePatches (or similarly named) directory.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
as that is, ZFS promises to not corrupt my data and
to tell on others that do. ZFS cannot break that promise.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
and be able to
read this new file?
Does this apply to soft-link files as well?
Does anyone have experience with such a configuration?
Mike
mounting the same FS by several
different machines? Is there a way around this?
Mike
Wee Yeh Tan wrote:
On 1/15/07, Torrey McMahon [EMAIL PROTECTED] wrote:
Mike Papper wrote:
The alternative I am considering is to have a single filesystem
available to many clients using a SAN (iSCSI
Anton Rang wrote:
On Dec 19, 2006, at 7:14 AM, Mike Seda wrote:
Anton B. Rang wrote:
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID
5. This
2 TB logical drive is partitioned into 10 x 200GB slices. I gave 4
of these slices to a Solaris 10 U2 machine and added each of them
Anton B. Rang wrote:
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5. This
2 TB logical drive is partitioned into 10 x 200GB slices. I gave 4 of these slices to a
Solaris 10 U2 machine and added each of them to a concat (non-raid) zpool as listed below:
This is
The following is output from getfacl on a ufs filesystem:
[EMAIL PROTECTED] maseda]$ getfacl /home/users/ahege/incoming
# file: /home/users/ahege/incoming
# owner: ahege
# group: uncmd
user::rwx
user:nobody:rwx #effective:rwx
group::r-x #effective:r-x
mask:rwx
other:r-x
I
I use zfs in a SAN. I have two Sun V440s running Solaris 10 U2, which
have LUNs assigned to them from my Sun SE 3511. So far, it has worked
flawlessly.
Robert Milkowski wrote:
Hello Dave,
Friday, December 15, 2006, 9:02:31 PM, you wrote:
DB Does anyone have a document that describes ZFS in
bytes
close
The rrd file in question is 8.6 MB. There were 8 KB of reads and 5472
bytes of writes. This is one of the big wins of the current binary
rrd format over the original ASCII version that came with MRTG.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
.
This may be a good place to look:
http://www.oracle.com/technology/deploy/availability/htdocs/xtts.htm
--
Mike Gerdts
http://mgerdts.blogspot.com/
, or
should I file one and stop complaining. :)
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
any problems with
this procedure.
However, I waited until someone else announced the features or lack
thereof found in S10 11/06. :)
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
and little interest in creating very
complex command lines with many -x options.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
that and use it for swap or whatever.
The original question was about using ZFS root on a T1000. /grub
looks suspiciously incompatible with the T1000 because it isn't x86.
I've heard rumors of bringing grub to SPARC, but...
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
the most?
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] writes:
Chad so -t a should show wall clock time
The capture file always records absolute time. So you (just) need to
use -t a when you decode the capture file.
Sorry for not making that clear earlier.
mike
Hey Tony...
When (properly) doing Array-based snapshots/BCVs with
EMC/Hitachi/what-have-you arrays, you create lun groups out of the
luns you're interested in snappin'. You then perform snapshot/clone
operations on that lun group which will make it atomic across all
members of that group.
Where
It's a valid use case in the high-end enterprise space.
While it probably makes good sense to use ZFS for snapshot creation,
there are still cases where array-based snapshots/clones/BCVs make
sense. (DR/Array-based replication, data-verification, separate
spindle-pool, legacy/migration reasons,
the original stays put. This could be done to refresh
non-production instances from production, to perform backups in such a
way that it doesn't put load on the production spindles, networks,
etc.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
that
it is up to ZFS to generate or manage the signature.
The nice thing about it is that so long as the private key is secret,
the signature stays with the file as it is moved, taken to tape, other
file systems, etc. so long as the file manipulation mechanisms support
extended-attributes.
Mike
--
Mike
mirroring just
isn't an option.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
other.
--
Mike Gerdts
http://mgerdts.blogspot.com/
be an awesome feature to have in ZFS, even if
the de-duplication happens as a later pass similar to zfs scrub.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 8/26/06, Mike Gerdts [EMAIL PROTECTED] wrote:
FWIW, I saw the same backtrace on build 46 doing some weird stuff
documented at http://mgerdts.blogspot.com/. At the time I was booted
from cdrom media importing a pool that I had previously exported.
I got thinking... how can I outdo the ME
of C code and zlib.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
the various NDMP Internet drafts into RFCs seems
to be stalled. A quick search of existing Open Source NDMP
implementations doesn't turn up much. Do others on the list have more
insight into whether this has been considered?
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
sooner than later.
If running on sun4v, consider LDOM's when they are available (November?).
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 7/31/06, Bev Crair [EMAIL PROTECTED] wrote:
However, note the limitations on usage: 4 'user-data file systems'...
B.
And last I looked it was x86-only.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
against current quota
was part of the problem statement. My approach with rsync avoids this
but, as I said before, is an ugly hack because it doesn't use the
features of zfs.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
, another 11 GB of disk is
used. At this rate, it doesn't take long to burn through a 73 GB
disk. However, if ZFS could de-duplicate the blocks, each patch
cycle would take up only a couple hundred megabytes. But I guess that
is off-topic. :)
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
requires a
source tree checkout, learning docbook, etc. most would-be authors or
editors will be discouraged. Else, I guess it just winds up in a
bunch of blogs that are really hard to find.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
duplicated work and an uneven user experience.
mike
(optional)
o archive files
It seems as though if suitably motivated, additional information about
the desired configuration could be stored in one of the above
sections, either directly or as a result of scripts (e.g. derived
profiles in jumpstart).
Mike
--
Mike Gerdts
http
+ the cost of 4 1 GB DDR DIMMs. I suppose you
could mirror across a pair of them and still have a pretty fast small
4GB of space for less than $1k.
http://www.anandtech.com/storage/showdoc.aspx?i=2480
FWIW, google gives plenty of hits for "solid state disk terabyte".
Mike
--
Mike Gerdts
http
, $5/1024/1024, $NF
}'
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/