complaints of repeated timeouts when the snv_90
packages were released, resulting in having to restart the upgrade from
the beginning.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Wed, Jun 11, 2008 at 12:58 AM, Robin Guo [EMAIL PROTECTED] wrote:
Hi, Mike,
It looks like 6452872 - it needs enough space for 'zfs promote'.
Not really - in 6452872 a file system is at its quota before the
promote is issued. I expect that a promote may cause several KB of
metadata changes. Should this be addressed in the
documentation or in zfs?
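For anyone following along, a minimal sketch of the promote flow in
question (dataset names hypothetical; the promote is the step that gets
charged the extra metadata space):
# zfs snapshot tank/fs@snap
# zfs clone tank/fs@snap tank/clone
# zfs promote tank/clone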
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Fri, Jun 06, 2008 at 06:27:01PM -0400, Brian Hechinger wrote:
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
I expect that mirror mounts will be coming to Linux's
On Fri, Jun 06, 2008 at 03:43:29PM -0700, eric kustarz wrote:
On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
I
PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 05, 2008 1:56 PM
To: Ellis, Mike
Cc: ZFS discuss
Subject: Re: [zfs-discuss] ZFS root finally here in SNV90
Mike,
As we discussed, you can't currently break out other datasets besides
/var. I'll add this issue to the FAQ.
Thanks,
Cindy
and wrote a blog entry.
http://mgerdts.blogspot.com/2008/03/future-of-opensolaris-boot-environment.html
--
Mike Gerdts
http://mgerdts.blogspot.com/
The FAQ document (
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/ ) has a
jumpstart profile example:
install_type initial_install
pool newpool auto auto auto mirror c0t0d0 c0t1d0
bootenv installbe bename sxce_xx
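For what it's worth, my reading of that pool line's fields (an
assumption based on this example, not an authoritative spec):
pool <poolname> <poolsize> <swapsize> <dumpsize> <vdevlist>
where 'auto' sizes the pool, the swap volume, and the dump volume
automatically.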
The B90 jumpstart check program (SPARC) flags
In addition to the standard "contain the carnage" arguments used to
justify splitting out /var/tmp, /var/mail, and /var/adm (process
accounting, etc.), is there an interesting use case where one would
split out /var for compression reasons (as in, turn on compression for
/var so that process accounting,
related directories (save/patchid) may trip something up.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sat, May 31, 2008 at 9:38 PM, Mike Gerdts [EMAIL PROTECTED] wrote:
$ find /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix
/ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix
/ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix/.make.state.lock
/ws/mount/onnv-gate/usr/src/uts/sun4u
privileges has
everything they need to gain full root access.
I wish there were a flag to open(2) to say not to update the atime,
and a privilege that could be granted to allow this flag without
granting file_dac_write.
--
Mike Gerdts
http://mgerdts.blogspot.com
better method for getting rid of the cruft that builds up in
/var/sadm either.
I suspect that further discussion on this topic would be best directed
to [EMAIL PROTECTED] or the sun-managers mailing list (see
http://www.sunmanagers.org/).
--
Mike Gerdts
http://mgerdts.blogspot.com
/SPROcc/save/pspool/SPROcc/install/depend
var/sadm/pkg/SPROcc/save/pspool/SPROcc/pkginfo
var/sadm/pkg/SPROcc/save/pspool/SPROcc/pkgmap
Notice the lack of undo.Z files (and associated patch directories),
but the rest looks the same.
--
Mike Gerdts
http://mgerdts.blogspot.com
pool0  bootfs       -    default
pool0  delegation   on   default
pool0  autoreplace  off  default
pool0  temporary    off  default
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sat, May 31, 2008 at 8:48 PM, Mike Gerdts [EMAIL PROTECTED] wrote:
I just experienced a zfs-related crash. I have filed a bug (don't
know number - grumble). I have a crash dump but little free space. If
someone would like some more info from the core, please let me know in
the next few
Is there a way to create a zfs pool
(e.g. zpool create boot /dev/dsk/c0t0d0s1)
and then (after vacating the old boot disk) add another
device and make the zpool a mirror?
(as in: zpool create boot mirror /dev/dsk/c0t0d0s1 /dev/dsk/c1t0d0s1)
Thanks!
emike
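For what it's worth, a sketch of the usual answer (device names
hypothetical): zpool attach converts a single-device vdev into a
two-way mirror and resilvers automatically.
# zpool create boot c0t0d0s1
# zpool attach boot c0t0d0s1 c1t0d0s1
# zpool status boot
Watch the resilver complete before retiring the old disk.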
to workloads that use a lot of RAM but are fairly inactive. As
such, a $10k PCIe card may allow a $42k 64 GB T5240 to handle
5+ times the number of not-too-busy J2EE instances.
If anyone's done any modelling or testing of such an idea, I'd love to
hear about it.
--
Mike Gerdts
http
I like the link you sent along... They did a nice job with that.
(but it does show that mixing and matching vastly different drive-sizes
is not exactly optimal...)
http://www.drobo.com/drobolator/index.html
Doing something like this for ZFS allowing people to create pools by
with general
system tools of a particular directory?
Any ideas would be appreciated.
karsten
Have you tried fsstat? I think it will do what you are looking for
whether it is zfs, ufs, tmpfs, etc.
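A quick sketch of usage (the interval argument works like iostat's):
# fsstat zfs ufs tmpfs 1
or, for a specific mount point:
# fsstat /export/home 1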
--
Mike Gerdts
http://mgerdts.blogspot.com/
I currently have a zpool with two 8Gbyte disks in it. I need to replace them
with a single 56Gbyte disk.
With Veritas I would just add the disk in as a mirror, break off the
other plex, and then destroy it.
I see no way of being able to do this with zfs.
Being able to migrate data without
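One hedged alternative to the plex approach (pool, dataset, and device
names hypothetical): build a new pool on the 56 Gbyte disk and copy the
data over with send/receive, then destroy the old pool.
# zpool create newpool c2t0d0
# zfs snapshot oldpool/data@migrate
# zfs send oldpool/data@migrate | zfs receive newpool/data
# zpool destroy oldpool
(the destroy only once the copy is verified)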
independently either I need to have a zpool per zone or I need
to have per-dataset replication. Considering that with some workloads
20+ zones on a T2000 is quite feasible, a T5240 could be pushing 80+
zones and as such a relatively large number of zpools.
--
Mike Gerdts
http://mgerdts.blogspot.com
Could someone kindly provide some details on using a zvol in sparse-mode?
Wouldn't the COW nature of zfs (assuming COW still applies on ZVOLS) quickly
erode the sparse nature of the zvol?
Would sparse data-presentation only work by delegating a part of a zpool to a
zone, but that's at the
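For background, a sparse zvol is simply one created without a
reservation - a sketch with hypothetical names:
# zfs create -s -V 10g tank/sparsevol
# zfs get volsize,reservation,used tank/sparsevol
COW by itself shouldn't erode the sparseness: an overwrite allocates a
new block but frees the old one, so allocation stays bounded by volsize
unless snapshots pin the old blocks.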
name, temp.
(I am trying to move this thread over to zfs-discuss, since I originally
posted to the wrong alias)
storage-discuss trimmed in my reply.
--
Mike Gerdts
http://mgerdts.blogspot.com/
should take only a few
seconds longer than a standard init 6. Failback is similarly easy.
I can't remember the last time I swapped physical drives to minimize
the outage during an upgrade.
--
Mike Gerdts
http://mgerdts.blogspot.com/
the additional space to be seen.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Use zpool replace to swap one side of the mirror with the iscsi lun.
-- mikee
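A sketch (device names hypothetical; c4t1d0 stands in for the iSCSI
LUN):
# zpool replace tank c0t0d0 c4t1d0
# zpool status tank
Wait for the resilver to finish before touching the remaining local
side.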
- Original Message -
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org
Sent: Tue Jan 15 08:46:40 2008
Subject: Re: [zfs-discuss] Moving zfs to an iscsi
On 1/14/08, eric kustarz [EMAIL PROTECTED] wrote:
On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
www.mozy.com appears to have unlimited backups for $4.95 a month.
Hard to beat that. And they're owned by EMC now so you know they
aren't going anywhere anytime soon.
mozy's been okay, but
except in my experience it is piss poor slow... but yes it is another
option that is -basically- built on standards (i say that only because
it's not really a traditional filesystem concept)
On 1/14/08, David Magda [EMAIL PROTECTED] wrote:
On Jan 14, 2008, at 17:15, mike wrote:
On 1/14/08
# zfs mount -a (not sure this is needed)
# cd /somewhere_else
--
Mike Gerdts
http://mgerdts.blogspot.com/
corruption)
- Opportunities to do things previously not possible
ZFS doesn't win on many of those, but with the improvements that I
have seen throughout the storage stack it is somewhat likely that the
required improvements are already on the roadmap.
--
Mike Gerdts
http://mgerdts.blogspot.com
? I would guess that you
don't have large file support. A variant of the following would
probably be good:
cc -c $CFLAGS `getconf LFS_CFLAGS` myprog.c
cc -o myprog $LDFLAGS `getconf LFS_LDFLAGS` myprog.o
--
Mike Gerdts
http://mgerdts.blogspot.com/
and likely more space in production use than
ZFS.
I think that ZFS holds a lot of promise for shared-nothing database
clusters, such as is being done by Greenplum with their extended
variant of Postgres.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Also... since there is nothing zfs-specific here, opensolaris-code may
be a more appropriate forum.
--
Mike Gerdts
http://mgerdts.blogspot.com/
df.xpg4
df.cdf.po df.xcl df.xpg4.o
It looks to me as though df becomes /usr/bin/df and df.xpg4 becomes
/usr/xpg4/bin/df.
--
Mike Gerdts
http://mgerdts.blogspot.com/
--
Mike Dotson
that was not included with 10_Recommended?
--
Thanks...
Mike Dotson
Area System Support
of zfs are using something that does something along the
lines of
while readdir; do
    open file
    read from file
    write to backup stream
    close file
done
Since files are unlikely to be on disk in a contiguous manner, this
looks like a random read operation to me.
Am I wrong?
--
Mike
- mine was SPARC) to see if it
addresses your problem.
--
Mike Gerdts
http://mgerdts.blogspot.com/
I actually have a related motherboard, chassis, dual power-supplies
and 12x400 gig drives already up on ebay too. If I recall correctly,
Areca cards are supported in OpenSolaris...
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItemitem=300172982498
On 11/22/07, Jason P. Warr [EMAIL PROTECTED] wrote:
If you
On Thu, 2007-11-15 at 05:25 -0800, Boris Derzhavets wrote:
Thank you very much Mike for your feedback.
Just one more question.
I noticed five devices under /dev/rdsk:
c1t0d0p0
c1t0d0p1
c1t0d0p2
c1t0d0p3
c1t0d0p4
which had been created by the system immediately after installation
completed. I believe
c0d0p4    ONLINE       0     0     0
errors: No known data errors
So to create the pool in my case would be: zpool create lpool c0d0p4
--
Mike Dotson
Looking for a way to mount a zfs filesystem on top of another zfs
filesystem without resorting to legacy mode.
Mike DeMarco wrote:
Looking for a way to mount a zfs filesystem on top of another zfs
filesystem without resorting to legacy mode.
doesn't simply 'zfs set mountpoint=...' work for you?
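A sketch with hypothetical names, showing one file system mounted on
top of another without legacy mode:
# zfs create tank/upper
# zfs create tank/lower
# zfs set mountpoint=/export/data tank/upper
# zfs set mountpoint=/export/data/sub tank/lower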
--
Michael Schuster
Recursion, n.: see 'Recursion'.
Ideally, it would go like:
host1# zpool export pool
host2# zpool import pool
If you know (really know) that it is offline on the other server (e.g. you
can verify the host is dead), you can use:
# zpool import -f pool
Mike
On 10/19/07, Mertol Ozyoney [EMAIL PROTECTED] wrote:
Hi
=4844356610838567439
vdev_tree
    type='disk'
    id=0
    guid=4844356610838567439
    path='/dev/dsk/c1t2d0s0'
    devid='id1,[EMAIL PROTECTED]/a'
    whole_disk=1
    metaslab_array=14
    metaslab_shift=29
    ashift=9
    asize=73394552832
thanks
Mike
the importance of 2 a bit.
--
Mike Gerdts
http://mgerdts.blogspot.com/
that linked against the included
version of OpenSSL automatically gets to take advantage of the N2
crypto engine, so long as it is using one of the algorithms supported
by the N2 engine.
--
Mike Gerdts
http://mgerdts.blogspot.com/
).
Remember marketing info his very high level, the devil as aways is in
the code.
Yeah, I know. It's often times difficult to find the right code when
you know what you are looking for. When you don't know that you
should be fact-checking, the code rarely finds its way in front of
you.
--
Mike Gerdts
cheaper on systems with lower latency between CPUs.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 10/18/07, Gary Mills [EMAIL PROTECTED] wrote:
What's the command to show cross calls?
mpstat will show it on a system basis.
xcallsbypid.d from the DTraceToolkit (ask google) will tell you which
PID is responsible.
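A couple of hedged examples:
# mpstat 1
(cross calls show up in the xcal column)
# dtrace -n 'sysinfo:::xcalls { @[pid, execname] = count(); }'
(press Ctrl-C to print the per-PID counts)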
--
Mike Gerdts
http://mgerdts.blogspot.com
+ screens[1] on the default sized terminal window.
1. If you are in this situation, there is a good chance that the
formatting of df causes line folding or wrapping that doubles the
number of lines to 80+ screens of df output.
--
Mike Gerdts
http://mgerdts.blogspot.com
On 9/24/07, Paul B. Henson [EMAIL PROTECTED] wrote:
but checking the actual release notes shows no ZFS mention. 3.0.26 to
3.2.0? That seems an odd version bump...
3.0.x and before are GPLv2. 3.2.0 and later are GPLv3.
http://news.samba.org/announcements/samba_gplv3/
--
Mike Gerdts
http
to
administer the location mapping while providing transparency to the
end-users.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 9/20/07, Matthew Flanagan [EMAIL PROTECTED] wrote:
Mike,
I followed your procedure for cloning zones and it worked
well up until yesterday when I tried applying the S10U4
kernel patch 12001-14 and it wouldn't apply because I had
my zones on zfs :(
Thanks for sharing. That sucks.
I'm
in coordination with iSCSI.
<irony>
Oh, wait! What if the NAS device runs out of space while I'm
patching? Better rule out the thin provisioning capabilities of the
HDS storage that Sun sells as well.
</irony>
--
Mike Gerdts
http://mgerdts.blogspot.com/
Yup...
With Leadville/MPXIO targets in the 32-digit range, identifying the new
storage/LUNs is not a trivial operation.
-- MikeE
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Russ
Petruzzelli
Sent: Monday, September 17, 2007 1:51 PM
To:
does, but Snap Upgrade does.
http://opensolaris.org/os/project/caiman/Snap_Upgrade/
It is likely worth considering more of the roadmap when reading that page.
http://opensolaris.org/os/project/caiman/Roadmap/
--
Mike Gerdts
http://mgerdts.blogspot.com
-writes of
data (e.g. crypto rekey) to concentrate data that had become
scattered into contiguous space.
--
Mike Gerdts
http://mgerdts.blogspot.com/
writes could be batched, coalesced, and applied
in a journaled manner such that each batch fully applies or is rolled
back on the target. I haven't heard of this being done.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
have you tried zpool clear?
Peter Tribble wrote:
On 9/13/07, Solaris [EMAIL PROTECTED] wrote:
Try exporting the pool, then importing it. I have seen this after moving disks
between systems, and on a couple of occasions just rebooting.
Doesn't work. (How can you export something that
On 11/09/2007, Mike DeMarco [EMAIL PROTECTED]
wrote:
I've got 12Gb or so of db+web in a zone on a ZFS
filesystem on a mirrored zpool.
Noticed during some performance testing today that it's i/o bound
but using hardly any CPU, so I thought turning on compression would
On 9/12/07, Mike DeMarco [EMAIL PROTECTED] wrote:
Striping several disks together with a stripe width that is tuned
for your data model is how you could get your performance up.
Striping has been left out of the ZFS model for some reason. While
it is true that RAIDZ will stripe
I've got 12Gb or so of db+web in a zone on a ZFS
filesystem on a mirrored zpool.
Noticed during some performance testing today that it's i/o bound
but using hardly any CPU, so I thought turning on compression would be
a quick win.
If it is io bound, won't compression make it worse?
I
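For what it's worth, a sketch (dataset name hypothetical). Compression
only affects newly written blocks, and on an i/o-bound, CPU-idle
workload it often helps rather than hurts, since fewer bytes hit the
disks:
# zfs set compression=on tank/db
# zfs get compressratio tank/db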
backups, etc. Pushing that out to desktop or laptop machines is not
really a good idea.
--
Mike Gerdts
http://mgerdts.blogspot.com/
it with success (and
failures) in limited scope. I'm sure that with time the improvements
will come that make that scope increase dramatically, but for now it
is confined to the lab. :(
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 9/7/07, Mike Gerdts [EMAIL PROTECTED] wrote:
For me, quotas are likely to be a pain point that prevents me from
making good use of snapshots. Getting changes in application teams'
understanding and behavior is just too much trouble. Others are:
not to mention there are smaller-scale users
expensive - you would be charging quota to each user but
only storing one copy. Depending on the balance of CPU power vs. I/O
bandwidth, compressed zvols could be a real win, more than paying back
the space required to have a few snapshots around.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
On 9/6/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
This is my personal opinion and all, but even knowing that Sun
encourages open conversations on these mailing lists and blogs, it seems
contrary to common sense for people from @sun.com to be commenting on this
topic. It seems like
On 9/5/07, Joerg Schilling [EMAIL PROTECTED] wrote:
As I wrote before, my wofs (designed and implemented 1989-1990 for SunOS 4.0,
published May 23rd 1991) is copy on write based, does not need fsck and always
offers a stable view on the media because it is COW.
Side question:
If COW is such
reset (panic, I believe) of the primary LDOM seems to have
caused the corruption in the guest LDOM. What was that about having
the redundancy as close to the consumer as possible? :)
--
Mike Gerdts
http://mgerdts.blogspot.com/
and I think snv59:
panic - S10u4 backtrace is very different from snv*
--
Mike Gerdts
http://mgerdts.blogspot.com/
or do maintenance. It's mainly for cheap, quiet enclosures
that can export JBOD...
Thanks,
mike
%3Amail.opensolaris.org+%28dedup+OR+%22de-duplication%22+OR+deduplication%29btnG=Google+Search
--
Mike Gerdts
http://mgerdts.blogspot.com/
that (I'm told) is
being worked on.
I only mention this to say that this type of problem is not restricted
to zfs boot.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
One last question: when it comes to patching these zones, is it better to
patch them normally, or to destroy all the local zones, patch only the
global zone, and use a shell script to recreate all the zones?
Greetings,
Given zfs pools, how does one import these pools to another node in
the cluster?
Mike
Sorry, my question is not clear enough. These pools contain a zone each.
this is in the works. Most of my use cases for
ZFS involve use of clones. Lack of space-efficient backups and
especially restores makes me wait to use ZFS outside of the lab.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 7/11/07, Darren J Moffat [EMAIL PROTECTED] wrote:
Mike Gerdts wrote:
Perhaps a better approach is to create a pseudo file system that looks like:
mntpt/pool
/@@
/@today
/@yesterday
/fs
/@@
/@2007-06-01
.
Is this something that is maybe worth spending a few more cycles on,
or is it likely broken from the beginning?
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
that is absolutely unacceptable practice.
The past week of inactivity is likely related to most of Sun in the US
being on mandatory vacation. Sun typically shuts down for the week
that contains July 4 and (I think) the week between Christmas and Jan
1.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
I had a similar situation between x86 and SPARC with the pool version
number. When I created the pool on the LOWER rev machine, it was seen
by the HIGHER rev machine. This was a USB HDD, not a stick. I can now
move the drive between boxes.
HTH,
Mike
Dick Davies wrote:
Thanks to everyone for the sanity
At what Solaris 10 level (patch/update) was the single-threaded
compression situation resolved?
Could you be hitting that one?
-- MikeE
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Roch - PAE
Sent: Tuesday, June 26, 2007 12:26 PM
To: Roshan Perera
room for two
to fail, then I suppose I can look for a setup with 14 drives of usable
space and use raidz2.
Thanks,
mike
On 6/20/07, Paul Fisher [EMAIL PROTECTED] wrote:
I would not risk raidz on that many disks. A nice compromise may be 14+2
raidz2, which should perform nicely for your workload and be pretty reliable
when the disks start to fail.
Would anyone on the list not recommend this setup? I could
On 6/15/07, Brian Hechinger [EMAIL PROTECTED] wrote:
Hmmm, that's an interesting point. I remember the old days of having to
stagger startup for large drives (physically large, not capacity large).
Can that be done with SATA?
I had to link 2 600w power supplies together to be able to power
it's about time. this hopefully won't spark another license debate,
etc... ZFS may never get into linux officially, but there's no reason
a lot of the same features and ideologies can't make it into a
linux-approved-with-no-arguments filesystem...
as a more SOHO user i like ZFS mainly for its
(FAT32, NTFS, XFS, JFS) it is encouraging
to see more options that put emphasis on integrity...
On 6/14/07, Frank Cusack [EMAIL PROTECTED] wrote:
On June 14, 2007 3:57:55 PM -0700 mike [EMAIL PROTECTED] wrote:
as a more SOHO user i like ZFS mainly for its COW and integrity, and
huh
On 6/14/07, Frank Cusack [EMAIL PROTECTED] wrote:
Yes, but there are many ways to get transactions, e.g. journalling.
ext3 is journaled. it doesn't seem to always be able to recover data.
it also takes forever to fsck. i thought COW might alleviate some of
the fsck needs... it just seems like
looks like you used 3 for a total of 15 disks, right?
I have a CM stacker too - I used the CM 4-disks-in-3-5.25-slots
though. I am currently trying to sell it too, as it is bulky and I
would prefer using eSATA/maybe Firewire/USB enclosures and a small
controller machine (like a Shuttle) so it is
--
Mike Dotson
Also the unmirrored memory for the rest of the system has ECC and
ChipKill, which provides at least SOME protection against random
bit-flips.
--
Question: It appears that CF and friends would make a decent live-boot
(but don't run on me like I'm a disk) type of boot-media due to the
limited
]
Sent: Tuesday, May 29, 2007 9:48 PM
To: Ellis, Mike
Cc: Carson Gaspar; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Re: ZFS - Use h/w raid or
not? Thoughts. Considerations.
Ellis, Mike wrote:
Also the unmirrored memory for the rest of the system has ECC and
ChipKill, which provides
--
Mike Dotson
On Fri, 2007-05-25 at 15:50 -0600, Lori Alt wrote:
Mike Dotson wrote:
On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
Would help in many cases where an admin needs to work on a system but
doesn't need, say, 20k users' home directories mounted, to do this work.
So single-user mode ends up depending on
all file systems instead of minimal file systems.
- Eric
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
--
Mike Dotson
On Fri, 2007-05-25 at 15:46 -0700, Eric Schrock wrote:
On Fri, May 25, 2007 at 03:39:11PM -0700, Mike Dotson wrote:
In fact, console-login depends on filesystem/minimal, which to me
means minimal file systems, not all file systems, and there is no software
dependent on console-login
This is probably a good place to start.
http://blogs.sun.com/realneel/entry/zfs_and_databases
Please post back to the group with your results, I'm sure many of us are
interested.
Thanks,
-- MikeE
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of