On Tue, Jul 18, 2006 at 09:46:44AM -0400, Chad Mynhier wrote:
On 7/18/06, Brian Hechinger [EMAIL PROTECTED] wrote:
On Tue, Jul 18, 2006 at 01:27:21AM -0700, Jeff Bonwick wrote:
the ability to remap blocks would be *so* useful -- it would
enable compression of preexisting data, removing
On Tue, Jul 25, 2006 at 03:54:22PM -0700, Eric Schrock wrote:
If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let
it label and use the disks, it will automatically turn on the write
cache for you.
What if you can't give ZFS whole disks? I run snv_38 on the Optiplex
GX620
On Wed, Jul 26, 2006 at 08:38:16AM -0600, Neil Perrin wrote:
GX620 on my desk at work and I run snv_40 on the Latitude D610 that I
carry with me. In both cases the machines only have one disk, so I need
to split it up for UFS for the OS and ZFS for my data. How do I turn on
write cache
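For what it's worth, the write cache can be flipped by hand from format's expert mode; a minimal sketch (interactive, and assuming the drive honors the cache mode pages):

  # format -e
  (select the disk)
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable

Be careful with this on a shared disk: UFS doesn't issue cache flushes, so a manually enabled write cache puts the UFS slices at risk on power loss, which is exactly why ZFS only enables it automatically for whole disks.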
On Fri, Jul 28, 2006 at 02:14:50PM +0200, Patrick Bachmann wrote:
systems config? There are a lot of things you know better off-hand
about your system, otherwise you need to do some benchmarking, which
ZFS would have to do too, if it were to give you the best performing
config.
How hard
On Fri, Jul 28, 2006 at 09:47:48AM -0600, Lori Alt wrote:
While the official release of zfs-boot won't be out
until Update 4 at least, we're working right now on
getting enough pieces available through OpenSolaris
so that users can put together a boot CD/DVD/image
that will directly install
On Fri, Jul 28, 2006 at 02:02:13PM -0700, Richard Elling wrote:
Joseph Mocker wrote:
Richard Elling wrote:
The problem is that there are at least 3 knobs to turn (space, RAS, and
performance) and they all interact with each other.
Good point. then how about something more like
zpool
On Fri, Aug 04, 2006 at 11:31:38PM -0400, Jim Connors wrote:
o I've got a modified Solaris miniroot with ZFS functionality which
takes up about 60 MB (The compressed image, which GRUB uses, is less
than 30MB). Solaris boots entirely into RAM. From poweron to full
functionality, it takes
On Tue, Aug 15, 2006 at 08:56:00PM +0930, Darren J Moffat wrote:
Eric Schrock wrote:
This case adds a new option, 'zfs create -o', which allows for any ZFS
property to be set at creation time. Multiple '-o' options can appear
in the same subcommand. Specifying the same property multiple
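A quick sketch of the resulting syntax (the dataset and properties here are just illustrative):

  # zfs create -o compression=on -o atime=off tank/home

Both properties take effect atomically at creation time, instead of racing a 'zfs set' issued afterward.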
On Fri, Jul 28, 2006 at 02:26:24PM -0600, Lori Alt wrote:
What about Express?
Probably not any time soon. If it makes U4,
I think that would make it available in Express late
this year.
Is there a specific Nevada build you are going to target? I'd love to
start testing this as soon as
On Wed, Aug 30, 2006 at 10:53:20PM -0700, Stefan Johansson wrote:
Yes I did and it works ok enough for me.
Would be nice to have vmware server for Solaris instead so I can run Solaris
as the host and use zfs directly on the controllers.
You're not the only one who wants that. Maybe we
11/06 is just around the corner! What new ZFS features are going to
make it into that release?
-brian
On Thu, Oct 05, 2006 at 11:19:19AM -0700, David Dyer-Bennet wrote:
On 10/5/06, Jeremy Teo [EMAIL PROTECTED] wrote:
What would a version FS buy us that cron+ zfs snapshots doesn't?
Finer granularity; no chance of missing a change.
TOPS-20 did this, and it was *tremendously* useful.
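For the cron half of that comparison, a minimal sketch (pool name and schedule are made up, and pruning old snapshots is left out):

  #!/bin/sh
  # take a timestamped snapshot of the home filesystem
  /usr/sbin/zfs snapshot tank/home@auto-`date +%Y%m%d%H%M`

driven by a crontab entry like '0 * * * * /usr/local/bin/snap-home.sh'. The granularity argument is exactly this: anything created and deleted between two cron runs is lost, where a versioning FS would have kept it.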
On Thu, Oct 05, 2006 at 04:08:13PM -0700, David Dyer-Bennet wrote:
when you do your session-end cleanup. What the heck was that command
on TOPS-20 anyway? Maybe purge? Sorry, 20-year-old memories are
fuzzy on some details.
It's PURGE under VMS, so knowing DEC, it was named PURGE under
Ok, previous threads have led me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev?
Are there performance issues with mixing differently sized raidz vdevs
in a pool? If
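Mixing vdev widths in one pool is legal; a sketch with hypothetical device names:

  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        raidz c1t5d0 c1t6d0 c1t7d0

ZFS stripes writes across both vdevs (weighted by free space), so the 3-wide vdev simply contributes less bandwidth and capacity than the 5-wide one.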
On Thu, Oct 12, 2006 at 05:46:24PM +1000, Nathan Kroenert wrote:
A few of the RAID controllers I have played with have an option to
'rebuild' a raid set, which I get the impression (though have never
tried) allows you to essentially tell the controller there is a raid set
there, and if you
On Thu, Oct 12, 2006 at 08:52:34AM -0500, Al Hopper wrote:
On Thu, 12 Oct 2006, Brian Hechinger wrote:
Ok, previous threads have led me to believe that I want to make raidz
vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks.
Do I want to create a zfs pool with a 5
On Fri, Oct 27, 2006 at 07:52:36AM -0600, [EMAIL PROTECTED] wrote:
Chris Adams wrote:
Is anyone actually booting ZFS in production and, if so, would you
recommend this approach?
ZFS-boot has not been released in any official way
yet. Only parts of it are available in OpenSolaris.
So
On Fri, Oct 27, 2006 at 01:23:37PM -0500, Christopher Scott wrote:
You can manually set up a ZFS root environment but it requires a UFS
partition to boot off of.
See: http://blogs.sun.com/tabriz/entry/are_you_ready_to_rumble
That's not what I was referring to. I'm interested in testing the
On Sat, Oct 28, 2006 at 08:19:24AM -0500, Mike Gerdts wrote:
On 10/28/06, Dick Davies [EMAIL PROTECTED] wrote:
http://solaristhings.blogspot.com/2006/06/zfs-root-on-solaris-part-2.html
It uses a /grub partition rather than a full root FS to boot from - you
still need a UFS / for the initial
On Tue, Nov 28, 2006 at 10:48:46PM -0500, Toby Thain wrote:
Her original configuration wasn't redundant, so she should expect
this kind of manual recovery from time to time. Seems a logical
conclusion to me? Or is this one of those once-in-a-lifetime strikes?
That's not an entirely true
On Wed, Dec 13, 2006 at 03:39:58PM -0800, Richard Elling wrote:
Jochen M. Kaiser wrote:
Didn't find any decent SAS controllers though, qlogic has some,
but the PCIe model with two external ports isn't supported on
Solaris. The single port model would work though...
We (Sun) sell LSI
On Tue, Dec 19, 2006 at 02:55:59PM -0500, Rince wrote:
zpool import should give you a list of all the pools ZFS sees as being
mountable. zpool import [poolname] is also, conveniently, the command used
to mount the pool afterward. :)
Which is what I expected to happen, however.
If it
On Tue, Dec 19, 2006 at 02:55:59PM -0500, Rince wrote:
If it doesn't show up there, I'll be surprised.
I take that back, I just managed to restore my ability to boot the old
instance.
I will be making backups and starting clean, this old partitioning has
screwed me up for the last time.
On Tue, Dec 19, 2006 at 10:29:24PM -0500, Rince wrote:
What exactly did it say? Did it say 'there are some pools that couldn't be
imported, use zpool import -f to see them', or just 'no pools available'?
'no pools available'
If not, then I suspect that Solaris install didn't see the relevant disk
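For reference, the sequence being described is just (pool name hypothetical):

  # zpool import         # list pools visible on unimported devices
  # zpool import tank    # import one by name
  # zpool import -f tank # force it if it's marked as in use by another host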
On Fri, Jan 05, 2007 at 10:39:59PM -0800, Anton B. Rang wrote:
Summary (1.8 form factor): write: 35MB/Sec, Read: 62MB/Sec IOPS: 7,000
That is on par with a 5400 rpm disk, except for the 100x more small, random
read iops. The biggest issue is the pricing, which will become
On Thu, Jan 11, 2007 at 11:52:19AM +, Darren J Moffat wrote:
Fabian Wörner wrote:
I'm thinking of having Solaris and Mac OS 10.5 on the same machine and mounting
the same filesystem at a different point on each OS.
Is/will this be possible, or do I have to use symbolic links?
You can NOT mount the same ZFS
On Thu, Jan 11, 2007 at 02:54:29PM +0100, [EMAIL PROTECTED] wrote:
I don't know if he can change the mountpoint however without jumping
through hoops.
Are legacy mount points recorded in ZFS? I thought they just lived in
/etc/vfstab. If so, then that could work.
Oooh, I hadn't thought
After having read that, I have to say Bravo to that team. It really sounds like
they are doing a great job.
This raises the question of when will the SATA framework be available for
testing?
-brian
On Thu, Jan 25, 2007 at 12:39:25AM +0100, Robert Milkowski wrote:
Hello zfs-discuss,
On another thumper I have a failing drive (port resets, etc.), so I
issued a drive replacement over a week ago. Well, it still hasn't
completed even 4% in a week! The pool config is the same. It's just
wy
On Thu, Jan 25, 2007 at 05:46:09PM +, Darren J Moffat wrote:
Does my plan sound feasible from both a usability and performance
standpoint?
That is exactly what I do on my laptop, with one exception. For me it
isn't just my zones on ZFS but also my home directory and my builds area
On Thu, Jan 25, 2007 at 11:35:54AM -0600, Al Hopper wrote:
On Thu, 25 Jan 2007, SteveW wrote:
... reformatted ...
The ability to shrink a pool by removing devices is the only reason my
enterprise is not yet using ZFS, simply because it prevents us from
easily migrating storage.
That
On Fri, Jan 26, 2007 at 08:06:46AM -0800, Anantha N. Srirama wrote:
b. Your server will hang when one of the underlying disks disappear. In our
case we had a T2000 running 11/06 and had a mirrored zpool against two
internal drives. When we pulled one of the drives abruptly the server
On Fri, Jan 26, 2007 at 11:11:13AM -0800, Ed Gould wrote:
Disconnected operation is a hard problem. One of the better research
efforts in that area was CODA, at CMU. CODA was, as I recall, an
extension to AFS, but it's probably reasonable to take some of those
ideas and marry them with
On Tue, Mar 06, 2007 at 02:49:35PM -0700, Lori Alt wrote:
The latest on when the update zfsboot support will
go into Nevada is either build 61 or 62. We are
making some final fixes and getting tests run. We
are aiming for 61, but we might just miss it. In
that case, we should be putting
On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
On March 11, 2007 6:05:13 PM + Tim Foster [EMAIL PROTECTED] wrote:
* ability to add disks to mirror the root filesystem at any time,
should they become available
Can't this be done with UFS+SVM as well? A reboot would be
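The ZFS side of it is a one-liner; a sketch with hypothetical slice names (on x86 you also need boot blocks on the new half):

  # zpool attach rpool c0t0d0s0 c0t1d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

The resilver runs online; no reboot required, unlike the SVM route.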
After the interesting revelations about the X2100 and its hot-swap abilities,
what are the abilities of the X2200-M2's disk subsystem, and is ZFS going to
tickle any weirdness out of them?
-brian
--
The reason I don't use Gnome: every single other window manager I know of is
very powerfully
On Mon, Mar 12, 2007 at 12:14:00PM -0700, Frank Cusack wrote:
On March 12, 2007 2:50:14 PM -0400 Rayson Ho [EMAIL PROTECTED] wrote:
So what is the progress of the SATA Framework integration??
http://www.opensolaris.org/os/community/on/flag-days/pages/2006011301/
The framework is integrated
On Tue, Mar 20, 2007 at 03:35:41PM -0400, Jim Mauro wrote:
http://www.cnn.com/2007/US/03/20/lost.data.ap/index.html
I worked (briefly; I left right after this, no point working there) at a place that
lost the hdd in its main server. (small company).
That's ok! We have backups!
Guy had been
On Sat, Mar 24, 2007 at 11:20:38AM -0700, Frank Cusack wrote:
iscsi doesn't use TCP, does it? Anyway, the problem is really transport
independent.
It does use TCP. Were you thinking UDP?
or its own IP protocol. I wouldn't have thought iSCSI would want to be
subject to the vagaries of
I've been going over this in my mind as I don't use
LU yet, and this sort of stuff is getting me to hold off.
While LU might not work yet, you can always do a snapshot before
the BFU/upgrade and then roll back to that if things break, which
isn't completely LU, but it does give you the
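A minimal sketch of that snapshot-first dance (dataset names are made up, and zfs rollback works per dataset, so a tree of filesystems means rolling back each one):

  # zfs snapshot -r tank@pre-bfu    # before the BFU
  (bfu, reboot, test)
  # zfs rollback tank/root@pre-bfu  # put a dataset back if things break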
On Fri, Apr 20, 2007 at 12:25:30PM -0700, MC wrote:
I will setup a VM image that can be downloaded (I hope to get it done
tomorrow, but if not definitely by early next week) and played with
by anyone who is interested.
That would be golden, Brian. Let me know if you can't get suitable
On Mon, Apr 23, 2007 at 03:44:05PM -0700, mike wrote:
#3 FreeBSD + the controller compatibility - assuming the controller
supports the port multipliers
The way I understand it, ZFS was written into GEOM on FreeBSD, so as long
as FreeBSD can use a certain controller it can offer it up to the
On Tue, Apr 24, 2007 at 09:48:33AM -0400, Mark J Musante wrote:
I believe we should stick to the most basic config for the default Solaris
installer. Certainly it should allow the admin to create whatever
datasets might be desired, but we should keep it simple for the default
case.
I've
On Tue, Apr 24, 2007 at 09:58:54AM -0700, William D. Hathaway wrote:
I've only used Lori Alt's patch for b62 boot images via jumpstart
(http://www.opensolaris.org/jive/thread.jspa?threadID=28725&tstart=15)
which made it an easy process with mirrored boot ZFS drives and no UFS
partitions
On Tue, Apr 24, 2007 at 10:20:23AM -0700, mario heimel wrote:
hi brian,
Ok, the solution to the 'bad PBR sig' issue was to wholesale delete the
VM and create a new one fresh. The install has started, we'll see how
it goes. I'll report here.
-brian
--
Perl can be fast and elegant as much as
On Tue, Apr 24, 2007 at 04:51:10PM -0700, oliver soell wrote:
I just installed a mirrored root system last night, but using Tim Foster's
zfs-actual-root-install.sh script on a clean install of b62
(http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling).
You mention that no
On Mon, Apr 23, 2007 at 03:08:21PM -0700, roland wrote:
So, at this point in time that seems pretty discouraging for an everyday
user, on Linux.
Nobody said that zfs-fuse is ready for an everyday user in its current
state! ;)
Also keep in mind that FUSE has the disadvantage of data
As promised, here it is:
https://jeffshare.jefferson.edu/users/blh008/Public/Solaris/
b62_zfsboot.iso.bz2 is a bootable patched b62 DVD.
b62_zfsboot_cd1.iso.bz2 is a bootable patched b63 CD1. I'm not
sure how useful this is unless you know how to tell pfinstall how
to look to an NFS server for
On Wed, Apr 25, 2007 at 04:36:46PM -0700, Malachi de Ælfweald wrote:
That's awesome!
So, to clarify... If I burn the b62_zfsboot.iso to a DVD and boot from it
with the intention of doing a fresh install
Yes. Upgrade/Flash installs are not available with zfs boot yet.
What are the
On 4/25/07, Brian Gupta [EMAIL PROTECTED] wrote:
Yes, dump on ZVOL isn't currently supported, so a dump slice is still
needed.
Maybe a dumb question, but why would anyone ever want to dump to an
actual filesystem? (Or is my head thinking too Solaris)
Actually I could see why, but I don't think
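For reference, pointing the dump configuration at a raw slice (device name hypothetical) is just:

  # dumpadm -d /dev/dsk/c0t0d0s1   # use a dedicated dump slice
  # dumpadm                        # show the current configuration

dumpadm won't accept a zvol at this point, hence the leftover slice.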
On Wed, Apr 25, 2007 at 09:05:09PM -0400, Brian Gupta wrote:
I do understand the reasons why you would want to dump to a virtual
construct. I am just not very comfortable with the concept.
My instinct is that you want the fewest layers of software involved in
the event of a system
cadaver decided to complain that the file was too large.
This means that the DVD ISO won't get uploaded until I get to work
tomorrow and can use something other than cadaver.
Sorry for the delay.
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a
On Wed, Apr 25, 2007 at 09:55:16PM -0400, Brian Gupta wrote:
In Solaris 8(?) this changed, in that crashdump streams were
compressed as they were written out to disk. Although I've never read
this anywhere, I assumed the reasons this was done are as follows:
What happens if the dump slice
On Wed, Apr 25, 2007 at 09:30:12PM -0700, Richard Elling wrote:
IMHO, only a few people in the world care about dumps at all (and you
know who you are :-). If you care, setup dump to an NFS server somewhere,
no need to have it local.
a) what does this entail
b) with zvols not supporting
On Fri, Apr 27, 2007 at 02:44:02PM -0700, Malachi de Ælfweald wrote:
For Brian, et al: Thank you very much. I burned the DVD, and did some minor
tweaking to the profile, and it is up and running (now to just n00b admin
troubleshooting)
For everyone else who is new to this and trying it out,
On Mon, Apr 23, 2007 at 01:56:53PM -0700, Richard Elling wrote:
FYI,
Sun is having a big, 25th Anniversary sale. X4500s are half price --
24 TBytes for $24k. ZFS runs really well on a X4500.
http://www.sun.com/emrkt/25sale/index.jsp?intcmp=tfa5101
I apologize to those not in the US
On Tue, May 01, 2007 at 09:56:04PM -0400, Torrey McMahon wrote:
If you lose the primary drive, and your dump device points to the
metadevice, then you wouldn't have to reset it. Also, most folks use the
Eh, that's true, not something I've really ever had to think about due
to your next
On Sat, May 05, 2007 at 02:41:28AM -0700, Christian Rost wrote:
- Buying cheap 8x250 GB SATA disks at first and replacing them from time to
time with 750 GB or bigger disks. Disadvantage: in the end I've bought
8x250 GB + 8x750 GB hard disks.
Look at it this way. The amount you spend on
Xorg was acting *very* strangely, so in my efforts to try and get back
to a state where I could actually get work done, I did the unthinkable.
I rebooted my box. ;)
Not only did it not fix the problem, it made it worse!! Now it won't
even boot anymore!
What happens is the BIOS splash screen
Ok, I just had an idea. I think I know what happened. I moved the
dump device to an alternate device so that I could give my pool the
whole disks. To do this, I removed one half of the mirror and
then re-attached it as a whole disk, and then did the same with the
other half.
I have this
On Wed, May 23, 2007 at 08:03:41AM -0700, Tom Buskey wrote:
Solaris is 64 bits with support for 32 bits. I've been running 64 bit
Solaris since Solaris 7 as I imagine most Solaris users have. I don't think
any other major 64 bit OS has been in general use as long (VMS?).
IRIX, AIX,
On Thu, May 24, 2007 at 01:16:32PM +0200, Claus Guttesen wrote:
iozone. So I installed solaris 10 on this box and wanted to keep it
that way. But solaris lacks FreeBSD ports ;-) so when current upgraded
Not entirely. :)
I don't know about FreeBSD PORTS, but NetBSD's ports system works very
I'd love to be able to serve zvols out as SCSI or FC targets. Are
there any plans to add this to ZFS? That would be amazingly awesome.
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most
On Fri, May 25, 2007 at 02:50:15PM -0500, Al Hopper wrote:
On Fri, 25 May 2007, Lori Alt wrote:
We've been kicking around the question of whether or
not zfs root mounts should appear in /etc/vfstab (i.e., be
legacy mount) or use the new zfs approach to mounts.
Instead of writing up the
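To make the tradeoff concrete, the legacy flavor looks something like this (dataset name hypothetical):

  # zfs set mountpoint=legacy rpool/ROOT/snv_62

plus the matching /etc/vfstab line:

  rpool/ROOT/snv_62  -  /  zfs  -  no  -

versus the zfs approach, where the dataset's own mountpoint property does the work and vfstab needs no entry at all.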
On Fri, Jun 01, 2007 at 06:37:21PM -0700, Richard L. Hamilton wrote:
I'd love to be able to serve zvols out as SCSI or FC
targets. Are
there any plans to add this to ZFS? That would be
amazingly awesome.
Can one use a spare SCSI or FC controller as if it were a target?
Most
I think this falls under the bug (of which the number I do not have handy at
the moment) where ZFS needs to more gracefully fail in a situation like this.
Yes, he probably broke his zpool, but it really shouldn't have panicked the
machine.
-brian
On Mon, Jun 11, 2007 at 03:05:19PM -0100, Mario
On Sun, Jun 17, 2007 at 01:22:34PM -0700, Darren Dunham wrote:
The configuration of any vdev that you create does not constrain you
with any vdevs you want to add to the pool in the future. You can start
with any of your three choices above and then add any of the other three
to the same
On Mon, Jun 18, 2007 at 08:57:29AM -0700, Darren Dunham wrote:
I think it's mainly to keep you from doing something silly without
meaning to.
That's certainly a good reason. :)
If you have the same type and columns, then you have the same
availability expectations. If instead you take a 5
On Thu, Jun 21, 2007 at 11:36:53AM +0200, Roch - PAE wrote:
code) or Samba might be better by being careless with data.
Well, it *is* trying to be a Microsoft replacement. Gotta get it
right, you know? ;)
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In
On Wed, Jun 20, 2007 at 12:03:02PM -0400, Will Murnane wrote:
Yes. 2 disks means when one fails, you've still got an extra. In
raid 5 boxes, it's not uncommon with large arrays for one disk to die,
and when it's replaced, the stress on the other disks causes another
failure. Then the array
On Mon, Jul 09, 2007 at 11:14:58AM -0400, Kent Watsen wrote:
Hi all,
I'm new here and to ZFS but I've been lurking for quite some time... My
question is simple: which is better 8+2 or 8+1+spare? Both follow the
(N+P) N={2,4,8} P={1,2} rule, but 8+2 results in a total of 10 disks,
On Mon, Jul 09, 2007 at 03:57:30PM -0400, Kent Watsen wrote:
(4+1)*2 is 2x faster, and in theory is less likely to have wasted space
in transaction groups (though this is rarely seen)
(4+1)*2 is cheaper to upgrade in place because of its fewer elements
I'm aware of these benefits but I feel
On Wed, Feb 28, 2007 at 09:54:37AM -0600, Dean Roehrich wrote:
On Wed, Feb 28, 2007 at 07:23:44AM -0800, Thomas Roach wrote:
And yes, we're actively pushing the SAM-QFS code through the open-source
process. Here's the first blog entry:
I upgraded my U80 from Sol10U1 to B64a a couple days ago. Just yesterday
I got around to getting apache, pgsql and svn (apache module version) running
again.
SVN performance skyrocketed. It went from taking many seconds to push
changes into the repo (from local or remote, didn't
On Fri, Aug 10, 2007 at 10:23:49AM -0700, Neal Pollack wrote:
Server class: Chipset ESB-2 southbridge
Desktop class: Chipset ICH-8 and ICH-9
Motherboards known as i965 chipset
and Intel P35 chipsets
Are the i975 chipset boards any less
On Fri, Aug 10, 2007 at 02:20:42PM +0100, Alec Muffett wrote:
Does anyone on this list have experience with a recent board with 6 or more
SATA ports that they know is supported?
Well so far I have only populated 5 of the ports I have available,
but my writeup with my 9-port SATA ASUS
On Fri, Sep 07, 2007 at 06:19:34PM -0500, Mike Gerdts wrote:
backups and restores. Snapshots of the zvols could be mounted as
other UFS file systems that could allow for self-service restores.
Perhaps this would make it so that you can write data to tape a bit
less frequently.
This would
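The self-service restore trick works because zvol snapshots are exposed as read-only devices; a sketch with hypothetical names:

  # zfs snapshot tank/ufsvol@backup
  # mount -F ufs -o ro /dev/zvol/dsk/tank/ufsvol@backup /mnt

after which users can copy files straight back out of /mnt.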
On Thu, Sep 13, 2007 at 10:49:36AM +0200, Louwtjie Burger wrote:
http://www.sun.com/servers/entry/x4200/optioncards.jsp#m2pcie
SG-XPCIE8SAS-E-Z ?
I believe that's one of the LSI 1068E based cards. From what I've been
able to tell, anything based on the 106x chipset will work. That's what
On Thu, Sep 13, 2007 at 10:54:41AM -0600, Lori Alt wrote:
In-place upgrade of zfs datasets is not supported and probably
never will be (LiveUpgrade will be the way to go with zfs because
the cloning features of zfs make it a natural). But the LiveUpgrade
changes aren't ready yet, so for the
How painful is this going to be? Completely?
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my
On Mon, Oct 29, 2007 at 08:55:21AM -0700, Mauro Mozzarelli wrote:
Following up, I got this message from Lori:
We are aiming to integrate zfs boot for both sparc and x86
into Nevada around the end of this calendar year.
Lori
Lori,
Thank you for your reply, I will be probably one
I'll be setting up a small server and need two SATA-II ports for an x86
box. The cheaper the better.
Thanks!!
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is
On Sun, Nov 18, 2007 at 02:18:21PM +0100, Peter Schuller wrote:
Right now I have noticed that LSI has recently began offering some
lower-budget stuff; specifically I am looking at the MegaRAID SAS
8208ELP/XLP, which are very reasonably priced.
I looked up the 8204XLP, which is really quite
On Tue, Nov 20, 2007 at 02:01:34PM -0600, Al Hopper wrote:
a) the SuperMicro AOC-SAT2-MV8 is an 8-port SATA card available for
around $110 IIRC.
Yeah, I'd like to spend a lot less than that, especially as I only need
2 ports. :)
b) There is also a PCI-X version of the older LSI 4-port
I finally got some new drives for my Ultra 80. I have two 73gig 10K
RPM SCSI disks in it now with 60GB in a ZFS mirror. I am going to be
adding 4x500G SATA disks in a RAIDZ, and I was thinking about using
the old zfs space on the SCSI disks for intent logs.
My questions are this:
1) Is it
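For context, adding a separate intent log (supported since around build 68) is a sketch like this, with hypothetical slice names:

  # zpool add tank log c0t0d0s7                  # single log device
  # zpool add tank log mirror c0t0d0s7 c1t0d0s7  # or a mirrored one

Whether a slice on a 10K SCSI disk actually helps as a slog in front of SATA disks is the open question here.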
I will be putting 4 500GB SATA disks in my Ultra80. I currently have
two 10K rpm 73G SCSI disks in it with 10G for the OS (UFS) and the
remaining space for a ZFS pool (the two remaining partitions are setup
in a mirror).
Would it be worth my while to move all the data off of the zfs partitions
On Wed, Dec 05, 2007 at 06:12:18PM -0600, Al Hopper wrote:
PS: LsiLogic just updated their SAS HBAs and have a couple of products
very reasonably priced IMHO. Combine that with a (single ?) Fujitsu
MAX3xxxRC (where xxx represents the size) and you'll be wearing a big
smile every time you
On Wed, Dec 05, 2007 at 06:12:18PM -0600, Al Hopper wrote:
I don't think you'll see any worthwhile improvement. For a ZIL
device, you really need something like a (small) SAS 15k RPM 3.5
drive - which will sustain 700 to 900 IOPS (my number - open to
argument) - or a RAM disk or one of
On Thu, Dec 06, 2007 at 03:27:33PM -0800, Scott Laird wrote:
MAX3xxxRC (where xxx represents the size) and you'll be wearing a big
smile every time you work on a system so equipped.
Hmmm, on second glance, 36G versions of that seem to be going for $40.
Do you mean $140, or am I missing
I have a machine that the BIOS cannot see my SiL3124 controller. Solaris
of course sees it just fine. This means that I can't boot from it however.
What I've done is this. I installed Solaris onto a temporary IDE disk and
ran Tim Foster's zfs-actual-root-install.sh script on it to prep the ZFS
On Fri, Jan 11, 2008 at 12:36:17PM -0600, Brian Hechinger wrote:
root (hd0,0,a) --- (the IDE disk)
kernel /platform/i86pc/kernel/unix -kv -B bootpath=/[EMAIL
PROTECTED],0/pci1095,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
module /platform/i86pc/boot_archive
and it boots, finds the SATA disk
On Mon, Jan 14, 2008 at 09:52:38AM -0800, Scott Laird wrote:
Run 'defaults write com.apple.systempreferences
TMShowUnsupportedNetworkVolumes 1' as root. I've been using it since
November without problems, but I haven't actually had to restore
anything in anger yet.
I couldn't get that to
On Mon, Jan 14, 2008 at 10:10:26AM -0800, Scott Laird wrote:
I'm using smb. Mount the share via the finder, then go to the time
machine pref pane, and it should show up.
I guess it's time to setup SAMBA then. :)
Thanks!
I've been wanting to back up the mini and the macbook to the
On Sun, Jan 20, 2008 at 01:51:28PM +0100, Peter Schuller wrote:
So will the pool get bigger just by replacing all 4 disks one-by-one?
Yes, but a re-import (either by export/import or by reboot) is necessary
before the new space will be usable.
Is this step really necessary? The last time I
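The grow-by-replacement loop under discussion, as a sketch (device names hypothetical):

  # zpool replace tank c1t0d0 c1t0d0  # after swapping in the bigger drive
  (wait for the resilver; repeat for each disk)
  # zpool export tank && zpool import tank  # pick up the new size

On builds of this vintage the export/import (or a reboot) is what makes the extra space visible.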
On Tue, Jan 29, 2008 at 08:28:42PM -0500, Jim Mauro wrote:
As to the putback schedule of recent ZFS features into Solaris 10, I'm
afraid I
don't have the information. Hopefully, someone else will know...
I've got a box that I'm setting up soon (now, really) and I'd love to know
when the
On Thu, Jan 31, 2008 at 03:15:30PM -0700, Lori Alt wrote:
Does this still seem likely to occur, or will it be pushed back further?
I see that build 81 is out today which means we are not far from seeing
ZFS boot on Sparc in Nevada?
The pressure to get this into build 86 is considerable
I realize I can't remove devices from a vdev, which, well, sucks and
all, but I'm not going to complain about that. ;)
I have 4x500G disks in a RAIDZ. I'd like to repurpose one of them as
I'm finding that all that space isn't really needed and that one disk
would serve me much better elsewhere
On Tue, Mar 04, 2008 at 09:48:05AM -0500, Rob Logan wrote:
have 4x500G disks in a RAIDZ. I'd like to repurpose [...] as the second
half of a mirror in a machine going into colo.
rsync or zfs send -R the 128G to the machine going to the colo
Yeah, that's the fallback plan, which I was
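The send -R fallback, sketched with hypothetical names:

  # zfs snapshot -r tank/data@migrate
  # zfs send -R tank/data@migrate | ssh colohost /usr/sbin/zfs receive -d newtank

-R carries the snapshots and properties of the whole subtree, which is what makes it a reasonable stand-in for evacuating a vdev.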
On Thu, Mar 06, 2008 at 11:39:25AM +0100, [EMAIL PROTECTED] wrote:
I think it's specifically problematic on 32 bit systems with large amounts
of RAM. Then you run out of virtual address space in the kernel quickly;
a small amount of RAM (I have one with 512MB) works fine.
I have a 32-bit
On Thu, Mar 06, 2008 at 02:07:09PM +0100, Mattias Pantzare wrote:
I don't know how to change the ARC size, but use this to increase
kernel addres space:
eeprom kernelbase=0x5000
Ah ha, that's what I was thinking about.
Your user address space will shrink when you do that.
Yes, but
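For the ARC half of that, the usual knob is an /etc/system line (the value, 256MB here, is only an example):

  set zfs:zfs_arc_max = 0x10000000

followed by a reboot; capping the ARC alongside a larger kernelbase keeps a 32-bit kernel from exhausting its address space.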