to achieve 300MB/second, a few
tens of MB don't make much difference. It may be that I bought the
wrong product, but perhaps there is a configuration change which will
help make up some of the difference without sacrificing data
reliability.
Bob
==
it does not seem very suspect.
bonnie++: http://www.sunfreeware.com/programlistintel10.html
I will check it out.
Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
on the array, I see that it is actually
fully busy. It seems that the application is stalled during this
load. It also seems that simple operations like 'ls' get stalled
under such heavy load.
Bob
==
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs
six LUNs are active to the other controller. Based on this, I
should rebuild my pool by splitting my mirrors across this boundary.
I am really happy that ZFS makes such things easy to try out.
Bob
==
:22662.79 KBps
Average Written: 5423.78 KBps
Peak Written: 28036.43 KBps
Average Read Size: 127.29 KB
Average Write Size: 127.77 KB
Cache Hit %: 89.30
Bob
==
not make a difference in the results.
Bob
==
On Fri, 15 Feb 2008, Bob Friesenhahn wrote:
Notice that the first six LUNs are active to one controller while the
second six LUNs are active to the other controller. Based on this, I
should rebuild my pool by splitting my mirrors across this boundary.
I am really happy that ZFS makes
drives, with 24-drives total) so it seems that everything is working
fine.
This is a lesson for me, and I have certainly learned a fair amount
about drive arrays, Fibre Channel, and ZFS in the process.
Bob
==
StorageTek
products, including the more expensive 6140 and 6540 arrays. It also
compares well with similarly-sized storage products from other
vendors.
Bob
==
to provide a value
suitable for humans.
Bob
==
in the Common Array Manager?
Does this feature maintain a redundant cache (two data copies) between
controllers?
Bob
==
? Will a package
which works for Solaris 10 (which some of us are still using) be
posted?
Thanks,
Bob
==
ownership of root:other (Solaris 9
clients), root:wheel (OS-X clients), and root:daemon (FreeBSD
clients). Only Solaris 10 clients seem to preserve original ownership
and permissions.
Is there a way to resolve this problem?
Thanks,
Bob
==
for that user.
Yes, of course. This would be easy if I was running a homogeneous
network, but instead I have to deal with several kinds of automounter,
some of which seem to change between each major release. This seems
like a good task for another day.
Bob
==
for this equipment since it
covers multi-thread I/O performance as well. The multi-user
performance is considerably higher.
Given ZFS's smarts, the JBOD approach seems like a good one as long as
the hardware provides a non-volatile cache.
Bob
==
SourceForge:
# pkgadd -d .
pkgadd: ERROR: no packages were found in
/home/bfriesen/src/benchmark/filebench
# ls
install/  pkginfo  pkgmap  reloc/
My system has the latest package management patches applied. What am
I missing?
Bob
==
. Installing it based
on the available documentation was an exercise in frustration.
Bob
==
not support an 'uninstall' target so now I am forced
to manually remove it from my system.
It seems that the best way to deal with star is to install it into its
own directory so that it does not interfere with existing software.
Bob
==
On Fri, 22 Feb 2008, Bob Friesenhahn wrote:
where it decided to remove the GNU tar I had installed there. Star
does not support traditional tar command line syntax so it can't be
used with existing scripts. Performance testing showed that it was no
more efficient than the 'gtar' which comes
are adequate to deal with the massive data
storage made easy by ZFS storage pools. ZFS requires similarly
innovative backup solutions to deal with it.
Bob
==
who is interested.
Thanks
Bob
==
of the elections?
There was a time inversion layer in Texas. Fixed now ...
Bob
==
descriptor, but then the underlying
file is updated in a somewhat random fashion as dirty pages are
written to disk.
It seems that this hypothesis is without merit.
Bob
==
available. Consider this
to be your life's mission.
Bob
==
the application until
the I/O is done.
If a file is updated via memory mapping, then the data sent to the
underlying file is driven by the system's virtual memory subsystem, so
the actual data sent to disk may not be coherent at all.
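The memory-mapped writeback path described above can be sketched with Python's mmap module (a minimal, OS-agnostic illustration, not ZFS-specific; the file path and sizes are arbitrary). Dirty pages reach the file whenever the VM system decides to write them, so the on-disk copy is only known-coherent after an msync, which Python exposes as flush():

```python
import mmap
import os
import tempfile

# Update a file through a memory mapping. Until flush()/msync(), the
# kernel may write the dirty page back at any time and in any order.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)            # file must be non-empty before mapping
with mmap.mmap(fd, 4096) as m:
    m[0:5] = b"hello"             # dirties a page; disk not yet guaranteed updated
    m.flush()                     # msync(): force dirty pages to the file
os.close(fd)

with open(path, "rb") as f:       # re-read through the normal file API
    data = f.read(5)
os.remove(path)
print(data)
```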
Bob
==
someone can fix this fat-fingered patch description in Sun
Update Manager?
Bob
==
this file. Unsetting the MAIL environment variable
may make the noise go away.
Bob
==
is incorrect rather than a way to tell that the
data is correct. There may be several permutations of wrong data
which can result in the same checksum, but the probability of
encountering those permutations due to natural causes is quite small.
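The asymmetry is easy to demonstrate with a toy example (plain Python, for illustration only; this is not ZFS's actual Fletcher/SHA-256 code path). A deliberately weak checksum collides immediately, while a 256-bit hash still has collisions in principle but at odds on the order of 2**-256:

```python
import hashlib

# A checksum can prove data is wrong (mismatch) but never that it is
# right: distinct inputs can share a checksum.
def weak_sum(data: bytes) -> int:
    """Toy 8-bit checksum: sum of bytes modulo 256."""
    return sum(data) % 256

a, b = b"\x01\x02", b"\x02\x01"          # different data ...
assert weak_sum(a) == weak_sum(b) == 3   # ... identical weak checksum

# A strong hash still collides for *some* pair of inputs, but the chance
# of hitting such a pair through natural corruption is vanishingly small.
assert hashlib.sha256(a).digest() != hashlib.sha256(b).digest()
print("weak checksum collides; sha256 does not")
```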
Bob
==
that can take data from
Previously it was suggested on this list to use a special version of
tar called 'star' (ftp://ftp.berlios.de/pub/star).
Bob
==
to
performance due to the number of I/Os?
Bob
==
31173 2 0 94
2 284 1 183 5216 617 27 98 670 49544 3 0 93
3 176 1 239 748 353 555 25 76 620 39334 3 0 93
Bob
==
of
thing, but mainframes are essentially closed systems so the mainframe
vendor has more control.
Bob
==
ugly things before worrying about adding frosting on top.
Bob
==
more
write I/Os than if it has a lot of RAM.
Bob
==
, the write block size no longer makes much difference and
sometimes larger block sizes actually go slower.
Bob
==
are written very quickly, that when the file
becomes bigger than the ARC, what is contained in the ARC is
mostly stale and does not help much any more. If the file is smaller
than the ARC, then there is likely to be more useful caching.
Bob
==
administered hardware is more likely to
encounter a problem. Local disk is usually more reliable than remote
disk.
Bob
==
dependencies? Will it work
on old SPARC systems?
Bob
==
delay into
my application?
Bob
==
that my RAID array disks are loafing with only 9MB/second
writes to each but with 82 writes/second.
Bob
==
-ahead and tuning this value can help considerably for sequential
access. Using gigabit Ethernet with jumbo frames will improve
performance even further. Notice that most of these tunings are for
the client-side and not for the server.
Bob
==
that performance was very dependent on application
write size regardless of client NFS tunings.
Unfortunately, not everyone is using Solaris. The Solaris 10 NFS
client implementation really screams.
Bob
==
and it can further sub-partition a FreeBSD partition for use in
individual filesystems.
Regardless, I am very interested to hear if ZFS pools can really be
transferred back and forth between Solaris and FreeBSD.
Bob
==
minutes is sufficient for
use over the internet but seems excessive on a LAN. Have you
investigated to see if the iSCSI client timeout parameters can be
adjusted?
Bob
==
) in this way?
Bob
==
since GNU xargs supports --max-procs and
--max-args arguments to allow executing commands concurrently with
different sets of files.
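The GNU xargs options mentioned above can be sketched like this ('echo' stands in for whatever per-file command is really run; the file names are placeholders). Sorting the output only makes the nondeterministic interleaving of concurrent children stable for display:

```shell
# Fan a list of names out across concurrent processes with GNU xargs.
# --max-procs (-P) caps the number of simultaneous child processes;
# --max-args  (-n) sets how many arguments each invocation receives.
# Four names become two 'echo' batches that may run in parallel.
result=$(printf '%s\n' a b c d | xargs --max-procs=2 --max-args=2 echo | sort)
echo "$result"
```

With a real command in place of echo, --max-procs is typically tuned to the number of CPUs or spindles available.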
Bob
==
reliable portable storage
device. Apparently this is not to be and it will be necessary to deal
with iSCSI instead.
I have never used iSCSI so I don't know how difficult it is to use as
temporary removable storage under Windows or OS-X.
Bob
==
in a USB or
Firewire disk or does it require system administrator type knowledge?
If you go to Starbucks, does your laptop attempt to mount your iSCSI
volume on a (presumably) unreachable network?
Bob
==
important recovery information for your machines.
Open up the machine in advance and put sticky labels on the drives
with their device names.
Bob
==
is that the author has worked really hard on a few CPU-specific
optimizations. Is the license ok for Solaris?
Bob
==
cards. I see that there is an option to
populate more cache RAM.
I would be interested to know what actual throughput that one card is
capable of. The CDW site says 300MB/s.
Bob
==
' performance is about the same as Henrik's. The 'cp -r'
performance is much less than disk benchmark tools would suggest.
Bob
==
is all zeros.
For ext3, inspecting all blocks for zeros would be viewed as
unnecessary overhead.
Bob
==
and the devices are not dependent on some of the same things
(e.g. power supplies, chassis, SATA controller, air conditioning) then
what caused one device to fail may very well cause another device to
fail.
Bob
==
less than the
total available devices and since parity is distributed the parity
could be written to any drive.
I am sure that someone will correct me if the above is wrong.
Bob
==
code, all
This is also a common scenario. :-)
Presumably the special slow I/O code would not kick in unless the
burst was large enough to fill quite a bit of the ARC.
Real time throttling is quite a challenge to do in software.
Bob
==
On Thu, 17 Apr 2008, Tim wrote:
Along those lines, I'd *strongly* suggest running Jeff's script to pin down
whether one drive is the culprit:
But that script only tests read speed and Pascal's read performance
seems fine.
Bob
==
is provided by
patches?
Thanks,
Bob
==
a
fourth drive which is at least as good as the drives you are using for
c1t1d0 and c1t2d0.
Bob
==
swap') and ZFS itself is able to support a swap volume.
I don't think that you can put a normal swap file on ZFS so you would
want to use ZFS's built-in support for that.
Bob
==
-efficient OS.
Now if we could just get ZFS ARC and Gnome Desktop to not use any
memory, we would be in nirvana. :-)
Bob
==
area, and only remember that it exists via the process table.
Bob
==
with a traditional incremental
backup system?
Bob
==
backup interval would still be limited
to the time required to do one incremental backup.
Bob
==
and then copying those individual files
to optical storage.
Bob
==
On Mon, 21 Apr 2008, Dana H. Myers wrote:
Bob Friesenhahn wrote:
Are there any plans to support ZFS for write-only media such as optical
storage? It seems that if mirroring or even raidz is used that ZFS would
be a good basis for long term archival storage.
I'm just going to assume
.
Bob
==
somewhat to address archiving.
Bob
==
of
ZFS. It is just not possible to correct the failed block on the media by
re-writing it or moving its data to a new location.
Bob
==
, the media can be
purchased from different vendors so there is less chance of similar
bit-rot across the lot.
With $40 to $200 million spent per project, a few extra copies are in
the noise. :-)
Bob
==
was turned off (retries and 3+ minute iSCSI failure recovery logic).
There would be additional dismal performance when the PC is turned
back on due to cumulative resilvering.
Bob
==
messages to a system logger.
Bob
==
Since then I am still quite satisfied. ZFS has yet to report a bad
block or cause me any trouble at all.
The only complaint I would have is that 'cp -r' performance is less
than would be expected given the raw bandwidth capacity.
Bob
==
doing the copy) to look for suspicious device behavior.
Bob
==
for Linux, I think that you will also need to figure out an
indirect-map incantation which works for its own broken automounter.
Make sure that you read all available documentation for the Linux
automounter so you know which parts don't actually work.
Bob
==
and embarrassment when
that poor system administrator returns the next day and finds that
many users cannot access their home directories!
Bob
==
On Thu, 1 May 2008, Rustam wrote:
operating system: 5.10 Generic_127112-07 (i86pc)
Seems kind of old. I am using Generic_127112-11 here.
Probably many hundreds of nasty bugs have been eliminated since the
version you are using.
Bob
==
to ZFS consistency? If NFS
consistency is lost by disabling the zil then local consistency is
also lost.
Bob
==
. It is not the same thing as no
corruption. ZFS will happily lose some data in order to avoid some
corruption if the system loses power.
Bob
==
free.
Bob
==
the
filesystem rather than reading from it. You can be sure that Sun put
as little ZFS code in Grub as was possible (and not just for license
reasons).
Bob
==
. Disks added earlier will be
initially more loaded up than disks added later.
Bob
==
better to do things the zfs way since then the pool can
still be completely active. Taking a snapshot takes less than a
second. Then you can send the filesystems to be backed up to a file
or to another system.
Bob
==
?
Assuming a reasonably designed storage system, the most likely cause
of data loss is human error due to carelessness or confusion.
Bob
==