Please don't feed the troll.
:)
-brian
On Wed, Oct 21, 2009 at 06:32:42AM -0700, Robert Dupuy wrote:
There is a debate tactic known as complex argument, where so many false and
misleading statements are made at once that it overwhelms the respondent.
I'm just going to respond this way.
On Thu, Oct 15, 2009 at 11:09:32AM -0600, Cindy Swearingen wrote:
Hi Greg,
With two disks, I would start with a mirror. Then, you could add
Additionally, with a two disk RAIDZ1 you are doing parity calculations for
no good reason. I would recommend a mirror.
-brian
--
Coding in C is like
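A minimal sketch of the mirror layout recommended above, assuming two
hypothetical devices c0d0 and c1d0:

  # create a two-way mirror instead of a two-disk raidz1
  zpool create tank mirror c0d0 c1d0
  # a third disk could later be attached for a three-way mirror
  zpool attach tank c0d0 c2d0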
As I sit here building netatalk (assuming it will actually build) it occurs
to me that maybe AFP could be the next protocol to be merged directly into
ZFS the way NFS and CIFS have been.
Any thoughts/opinions on this? I think this would be a great way to get ZFS
out there into OSX shops by way of
I've hit google and it looks like this is still an issue in b122. Does this
look like it will be fixed any time soon? If so, what build will it be fixed
in and is there an ETA for the build to be released?
Thanks.
-brian
--
Coding in C is like sending a 3 year old to do groceries. You gotta
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123
Thanks,
Cindy
On 09/10/09 13:21, Brian Hechinger wrote:
I've hit google and it looks like this is still an issue in b122. Does
this
look like it will be fixed any time soon
On Tue, Aug 18, 2009 at 12:37:23AM +0100, Robert Milkowski wrote:
Hi Darren,
Thank you for the update.
Have you got any ETA (build number) for the crypto project?
Also, is there any word on if this will support the hardware crypto stuff
in the VIA CPUs natively? That would be nice. :)
On Mon, Aug 10, 2009 at 05:16:27PM -0500, Bob Friesenhahn wrote:
On Mon, 10 Aug 2009, Thomas Burgess wrote:
I think you can add different types of vdevs, but doesn't it make you use
-f?
I thought it was not a good idea
I have no idea. I am not brave enough to try it with my own pool and
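For reference, zpool refuses to mix replication levels without -f; a hedged
sketch with hypothetical devices:

  # adding a plain disk to a mirrored pool fails with a
  # "mismatched replication level" error unless forced
  zpool add tank c3d0
  zpool add -f tank c3d0   # forces it; redundancy becomes uneven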
On Thu, Jul 23, 2009 at 10:28:38AM -0400, Kyle McDonald wrote:
In my case the slog slice wouldn't be the slog for the root pool, it
would be the slog for a second data pool.
I didn't think you could add a slog to the root pool anyway. Or has that
changed in recent builds? I'm a little
On Fri, May 01, 2009 at 09:52:54AM -0400, Dale Ghent wrote:
EMC. It's where data lives.
I thought it was: EMC. It's where data goes to die. :-D
-brian
--
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full
On Sat, Apr 11, 2009 at 08:35:08PM +1200, Nicholas Lee wrote:
Supermicro have several LSI controllers. AOC-USASLP-L8i with the LSI 1068E
and AOC-USASLP-H8iR with the LSI 1078.
Are they stable? How does the performance compare to the Marvell? Does
hot swap cause problems?
I've run both
On Sat, Feb 28, 2009 at 01:20:54AM -0600, Harry Putnam wrote:
So cutting to the chase here... would you happen to have a
recommendation from your own experience, or something you've heard
will work and that can stand more ram... my current setup tops out at
3gb.
The link to the HCL that was
On Wed, Mar 04, 2009 at 10:59:04AM +1100, Julius Roberts wrote:
However I would expect that if you could present 8 raid0 luns to
the host then that should be at least a decent config to start
using for ZFS.
I can confirm that we are doing that here (with 3 drives) and it's
been fine
On Wed, Feb 25, 2009 at 07:14:14PM -0600, Harry Putnam wrote:
My whole purpose is to experiment with zfs... would I see much
difference if opensol was installed 64 bit as compared to 32 bit?
I noticed the Simon blogs that describe how to setup a home zfs server
(
On Mon, Feb 09, 2009 at 11:56:54AM -0800, Gordon Johnson wrote:
I hope this thread catches someone's attention. I've reviewed the root pool
recovery guide as posted. It presupposes a certain level of network support,
for backup and restore, that many opensolaris users may not have.
I did
I've always felt squeamish when I had to move boxes with spinning
disks,
or when I had to watch someone else do it. Thanks for justifying my
paranoia... and good luck with the replacement drives.
This reminds me of a story. Many years ago a friend of mine had to move
some servers from one
In the previous config I had two RAID0 hardware stripes on an LSI1068
that were then mirrored together with ZFS.
I then got a PERC 6/i card (aka LSI1078) to stick in the box and so I
moved the one stripe over to that (and had to re-create the stripe of
course).
The problem is that once the
I've got a server at home that I'm about to set up. I'll be moving 3x 500GB
SATA disks into it from a different machine, so the config of those is set.
What I need help with is the other disks. I'm not sure what the best way to
use them is. I want to do ZFS Boot, so I'm limited to what
On Mon, Jan 12, 2009 at 04:55:15PM -0600, Al Hopper wrote:
Hi Brian,
I tried running with one 146GB 15k SAS disk for ZIL and a 2nd
(identical) disk for L2ARC and it turns out to be a bad use of that
hardware. Bad is a highly subjective assessment, based on the type
of workload you're
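The hardware assignment being assessed there, sketched with hypothetical
device names:

  # dedicate one 15k SAS disk to the ZIL (slog) and one to the L2ARC
  zpool add tank log c4t0d0
  zpool add tank cache c4t1d0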
On Mon, Dec 08, 2008 at 10:39:45AM +0100, [EMAIL PROTECTED] wrote:
Ok, thanx for your input, guys. So Brian's comment is still valid. I'll
tell the Linux guys that OpenSolaris on 32-bit will fragment the
memory to the point that you have to reboot once in a while. It
shouldn't corrupt your data
On Sat, Dec 06, 2008 at 11:31:06AM -0500, Toby Thain wrote:
It's not me. There are people on Linux forums that want to try out
Solaris + ZFS and this is a concern for them. What should I tell
them? That it is not fixed? That they have to reboot every week?
Someone knows?
That it's
On Sat, Dec 06, 2008 at 12:42:44PM -0600, Tim wrote:
Solaris + ZFS and this is a concern
Sounds to me like they want to try out solaris + zfs, not zfs on fuse.
Ooops, misread what he said. Sorry about that.
I suppose my original comment still stands then. :)
-brian
--
Coding in C is like
On Sat, Dec 06, 2008 at 01:36:35PM -0800, Joseph Mocker wrote:
Does PAE help things at all on 32-bit?
No.
-brian
--
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC
On Fri, Dec 05, 2008 at 11:35:27AM -0800, Orvar Korvar wrote:
I see this old post about ZFS fragmenting the RAM if it is 32 bit. This makes
the memory run out. Is it still true, or has it been fixed?
Don't waste your time trying to run ZFS on a 32-bit machine. The performance
is horrible. I
I was not in front of the machine, I had remote hands working with me, so I
apologize in advance for any lack of detail I'm about to give.
The server in question is running snv_81 booting ZFS Root using Tim's scripts to
convert it over to ZFS Root.
My server in colo stopped responding. I had a
On Tue, Dec 02, 2008 at 12:50:08PM -0600, Tim wrote:
On Tue, Dec 2, 2008 at 11:42 AM, Brian Hechinger [EMAIL PROTECTED] wrote:
I believe the issue you're running into is the failmode you currently have
set. Take a look at this:
http://prefetch.net/blog/index.php/2008/03/01/configuring-zfs
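A hedged sketch of inspecting and changing the property in question (pool
name hypothetical):

  # failmode=wait (the default) blocks I/O when the pool's devices
  # disappear; continue returns EIO to new I/O instead
  zpool get failmode tank
  zpool set failmode=continue tank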
On Tue, Nov 11, 2008 at 11:26:46AM -0800, Adam Leventhal wrote:
On Nov 11, 2008, at 10:41 AM, Brent Jones wrote:
Wish I could get my hands on a beta of this GUI...
Take a look at the VMware version that you can run on any machine:
On Thu, Oct 30, 2008 at 02:18:34PM +0200, Johan Hartzenberg wrote:
On Thu, Oct 30, 2008 at 12:13 PM, Sergio Arroutbi [EMAIL PROTECTED]wrote:
This is interesting to me! What fuse file system allows you to spread a
single directory (file system) across two disks in a non-redundant manner
but
On Fri, Oct 24, 2008 at 01:03:32PM -0700, John-Paul Drawneek wrote:
you have buy a lsi sas card.
Which work *great* by the way.
not cheap - around 100 GBP
Really? I picked mine up for $68US including shipping and the SAS-SATA
breakout cable. Needless to say I haven't looked at prices
On Wed, Oct 08, 2008 at 08:50:57AM -0400, Moore, Joe wrote:
I've not worked with AVS other than looking at the basic concepts, but to me
this looks like a dont-shoot-yourself-in-the-foot critical warning rather
than an actual functionality restriction. Is there a -force option to
On Wed, Oct 08, 2008 at 06:27:51PM -0400, Jim Dunham wrote:
If one wants this type of mirror functionality on a single node, use
host based or controller based mirroring software.
Is there mirroring software that can do async copies to a mirror?
-brian
--
Coding in C is like sending a 3
On Sun, Oct 05, 2008 at 11:30:54PM -0500, Nicolas Williams wrote:
There have been threads about adding a feature to support slow mirror
devices that don't stay synced synchronously. At least IIRC. That
would help. But then, if the pool is busy writing then your slow ZIL
That would
On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote:
I wonder if an AVS-replicated storage device on the backends would be
appropriate?
write -> ZFS-mirrored slog -> ramdisk -AVS-> physical disk
                          \
                           +- iscsi -> ramdisk -AVS-> physical disk
You'd
On Mon, Oct 06, 2008 at 01:13:40AM -0700, Ross wrote:
It's also worth bearing in mind that you can have multiple mirrors. I don't
know what effect that will have on the performance, but it's an easy way to
boost the reliability even further. I think this idea configured on a set of
2-3
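In zpool terms the multiple-mirror idea is a single command, assuming
hypothetical devices:

  # a three-way mirrored slog survives the loss of any two log devices
  zpool add tank log mirror c5t0d0 c5t1d0 c5t2d0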
On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
So I tried this experiment this week...
On each host (OpenSolaris 2008.05), I created an 8GB ramdisk with ramdiskadm.
I shared this ramdisk on each host via the iscsi target and initiator over a
1GB crossconnect cable (jumbo
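A rough reconstruction of that experiment, using the ramdiskadm and legacy
iscsitadm tools of that era (names hypothetical):

  # on each host: create an 8GB ramdisk and export it as an iSCSI target
  ramdiskadm -a slog0 8g
  iscsitadm create target -b /dev/ramdisk/slog0 slog0-target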
On Wed, Oct 01, 2008 at 01:03:28AM +0200, Ahmed Kamal wrote:
Hmm ... well, there is a considerable price difference, so unless someone
says I'm horribly mistaken, I now want to go back to Barracuda ES 1TB 7200
drives. By the way, how many of those would saturate a single (non trunked)
Gig
On Thu, Sep 25, 2008 at 12:51:57PM +0200, Joerg Schilling wrote:
Mikael Karlsson [EMAIL PROTECTED] wrote:
Anyone with experience with the SIL3124 chipset? Does it work well?
Yesterday, I tried the SIL 3114 which should be the same. It comes with
a BIOS that hangs up completely directly
On Thu, Sep 25, 2008 at 04:23:42PM +0200, Joerg Schilling wrote:
The lock I observed happened inside the BIOS of the card after the main board
BIOS jumped into the board BIOS. This was before any bootloader had been
involved.
I wonder if it's not necessarily the BIOS of the card, but
On Thu, Sep 25, 2008 at 07:40:09PM +0200, Joerg Schilling wrote:
There is no option to use the card+disk in a different machine.
That's a shame, that could tell you a lot.
I read that there might be a firmware upgrade but I have not been able to
find
a download.
On Sun, Aug 31, 2008 at 11:06:16AM -0500, Tim wrote:
The problem though for our usage with NetApp was that we actually couldn't
reserve enough space for snapshots. 50% of the pool was their maximum, and
we're interested in running ten years worth of snapshots here, which could
see us with
On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
Has anyone here had any luck using a CF to SATA adapter?
I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB card that
I wanted to use for a boot pool and even though the BIOS reports the disk,
Solaris B95 (or the
On Wed, Jul 09, 2008 at 07:53:30PM -0500, Tim wrote:
Dunno how old it is, but James is right, no Raid which is why it's cheaper.
Also why I like it ;)
I have the HP-badged LSI SAS3080X in my Ultra80, it's a fantastic card.
If I ever get a box with PCI-E (I'm looking to upgrade the U80 soon, so
On Mon, Jul 07, 2008 at 07:56:26PM -0500, Bob Friesenhahn wrote:
This deduplication technology seems similar to the Microsoft ads I
see on TV which advertise how their new technology saves the customer
Quantum's claim of 20:1 just doesn't jibe in my head, either, for some
reason.
-brian
On Sat, Jul 05, 2008 at 03:03:34PM -0500, Mike Gerdts wrote:
You can access the kstats directly to get the counter values.
First off, let me say that: kstat++
That's too cool.
$ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
unix:0:vopstats_zfs:nread 418787
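Those counters compose nicely; for instance, the average read size is
read_bytes divided by nread (a sketch using the statistics shown above):

  $ kstat -p unix:0:vopstats_zfs:read_bytes unix:0:vopstats_zfs:nread
  # divide the first number by the second for mean bytes per read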
On Fri, Jun 27, 2008 at 07:58:42AM -0500, [EMAIL PROTECTED] wrote:
Yes. Two caveats though. ZFS is a COW filesystem, currently with no
defrag. Placing a heavy write load (vmware is one) on this type of storage
(especially, but not only, if you are planning on using snapshots) you will
tend to see
On Fri, Jun 27, 2008 at 07:22:48AM -0700, Ross wrote:
Thanks both, very good pieces of advice there.
Wonko, I was about to question how much difference the iRAM will actually
make with it being on a single SATA connection, but after googling, for £70
+ RAM it's worth buying just as an
On Fri, Jun 27, 2008 at 08:32:23AM -0700, Richard Elling wrote:
You will want mirrored slogs.
Yes, always an excellent recommendation.
Note that there are some companies, Crucial and STEC come to mind, that
sell SSDs which fit in disk form factors. IIRC, the MacBook Air and EMC
use STEC's SSDs.
On Fri, Jun 27, 2008 at 03:02:43PM -0700, Erik Trimble wrote:
Unfortunately, we need to be careful here with our terminology.
You are completely and 100% correct, Erik. I've been throwing the
term SSD around, but in the context of what I'm thinking, by SSD I
mean this new-fangled flash based
On Fri, Jun 27, 2008 at 07:04:58PM -0400, Dale Ghent wrote:
Re-reading your question, it occurs to me that you might be referring
to the ability to mount a snapshot on *another server* ?
Yes, I believe that's what he's talking about. He's thinking the way a
clustered filesystem would work.
On Wed, Jun 25, 2008 at 02:50:40PM -0500, Tim wrote:
The issue is cost. It's still cheaper for someone to buy two quad-port
gig-e cards and trunk all the interfaces than it is for them to buy a single
10Gb card.
Huh? You can get a two-port 10Gb card from Sun (without bugging your sales
rep
On Wed, Jun 25, 2008 at 04:41:56PM +1200, Ian Collins wrote:
But with ZFS, the layout is a simple matter of creating filesystems, not
slicing up drives. The filesystems are in a shared pool, so one doesn't
have to know how big a particular slice will grow.
Yes, we are agreeing on that part.
On Mon, Jun 23, 2008 at 03:16:45PM -0400, Brian H. Nelson wrote:
Limits on physical memory for 32-bit platforms also depend on the
Physical Address Extension
http://msdn.microsoft.com/en-us/library/aa366796%28VS.85%29.aspx
(PAE), which allows 32-bit Windows systems to use more than 4 GB
On Mon, Jun 23, 2008 at 01:36:53PM -0700, Erik Trimble wrote:
But, but, but, PAE works so nice on my Solaris 8 x86 boxes for
massive /tmp. :-)
What CPU? If it's a 64-bit CPU, you don't need PAE. ;)
Back on topic: the one thing I haven't tried out is ZFS on a
32-bit-only system with
On Thu, Jun 19, 2008 at 10:06:19AM +0100, Robert Milkowski wrote:
Hello Brian,
BH A three-way mirror and three disks in a double parity array are going to
get you
BH the same usable space. They are going to get you the same level of
redundancy.
BH The only difference is that the RAIDZ2
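The two layouts being compared, as pool creations over three hypothetical
disks (either way: one disk's worth of usable space, two-device redundancy):

  zpool create tank raidz2 c1d0 c2d0 c3d0   # parity calculated on every write
  zpool create tank mirror c1d0 c2d0 c3d0   # plain copies; any disk serves a read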
On Mon, Jun 23, 2008 at 05:45:45PM -0700, Richard Elling wrote:
I think the ability to have different policies for file systems
is pure goodness -- though you pay for it on the backup/
restore side.
That's a price I for one am willing to pay. ;)
A side question though, my friends who run
On Sun, Jun 22, 2008 at 10:37:34AM -0500, Bob Friesenhahn wrote:
Perhaps the solution is to install more RAM in the system so that the
stripe is fully cached and ZFS does not need to go back to disk prior
to writing an update. The need to read prior to write is clearly what
kills ZFS
Every time I post to this list, I get an AUTOREPLY from somebody who, if
you ask me, is up to no good; otherwise they would set a proper From: address
instead of spoofing my domain.
Received: from mail01.csw-datensysteme.de ([62.153.225.98])
by wiggum.4amlunch.net
(Sun Java(tm) System
On Sun, Jun 22, 2008 at 06:11:21PM +0200, Volker A. Brandt wrote:
Everyone who posts gets this autoreply.
So what do the rest of you do? Ignore it?
From: [EMAIL PROTECTED]
These people are not spoofing your domain, they set a From: header
with no @domain. Many MTAs append the local
On Mon, Jun 23, 2008 at 11:13:49AM +1200, [EMAIL PROTECTED] wrote:
The cache may give RAID cards an edge, but ZFS gives near platter speeds for
its various configurations. The Thumper is a perfect example of a ZFS
appliance.
I get very acceptable performance out of my Sun Ultra-80 with
On Wed, Jun 18, 2008 at 07:37:38PM -0700, Anil Jangity wrote:
Why is it that the read operations are 0 but the read bandwidth is not 0?
What is iostat
[not] accounting for? Is it the metadata reads? (Is it possible to
determine what kind of metadata
reads these are?)
That question I'll leave
On Sat, Jun 14, 2008 at 02:19:05PM -0500, Bob Friesenhahn wrote:
On Sat, 14 Jun 2008, Brian Wilson wrote:
What are the odds, in that configuration of zpool (no mirroring,
just using the intelligent disk as concatenated luns in the zpool)
that if we have this silent corruption, the whole
On Sat, Jun 14, 2008 at 02:51:31PM -0500, Bob Friesenhahn wrote:
I think that none requested likely means that the administrator has
never issued a request to scrub the pool.
Or the system. That status line will show the last scrub/resilver to
have taken place. None requested means that no
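For completeness, the request that changes that status line (pool name
hypothetical):

  zpool scrub tank     # start a scrub by hand
  zpool status tank    # the scrub: line then shows progress or completion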
On Fri, Jun 13, 2008 at 02:40:34AM -0700, Richard L. Hamilton wrote:
the SAS drives, at a mere 7200 RPM, can sustain a sequential transfer
rate about 2.5x that of the 10K RPM FC drives!).
I think that's my favorite part about these new high density drives.
Don't get me wrong, a TB (or more!) in a
On Wed, Jun 11, 2008 at 10:43:26PM -0700, Richard Elling wrote:
AFAIK, SVM will not handle this problem well. ZFS and Solaris
Cluster can detect this because the configuration metadata knows
the time difference (ZFS can detect this by the latest txg).
Having been through this myself with
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS mounted
I think this is the one that gets under my skin. If there were a
way to merge a filesystem into a parent filesystem for the purposes
of NFS, that would be
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
I expect that mirror mounts will be coming Linux's way too.
They should already have them:
On Fri, Jun 06, 2008 at 04:52:45PM -0500, Nicolas Williams wrote:
Mirror mounts take care of the NFS problem (with NFSv4).
NFSv3 automounters could be made more responsive to server-side changes
in share lists, but hey, NFSv4 is the future.
So basically it's just a waiting game at this
On Wed, Jun 04, 2008 at 09:17:05PM -0400, Ellis, Mike wrote:
The FAQ document (
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/ ) has a
jumpstart profile example:
Speaking of the FAQ and mentioning the need to use slices, how does that
affect the ability of Solaris/ZFS to
On Thu, Jun 05, 2008 at 12:14:46PM +1000, Nathan Kroenert wrote:
format -e is your window to cache settings.
Ah ha!
As for the auto-enabling, I'm not sure, as IIRC, we do different things
based on disk technology.
eg: IDE + SATA - Always enabled
SCSI - Disabled by default, unless you
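The interactive path, roughly (disk selection elided; menu names may vary
by build):

  format -e
  # pick the disk, then:
  #   format> cache
  #   cache> write_cache
  #   write_cache> display     (or enable / disable)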
On Wed, May 28, 2008 at 12:01:36PM -0400, Bill McGonigle wrote:
On May 28, 2008, at 05:11, James Andrewartha wrote:
That's not a huge price difference when building a server - thanks
for the pointer. Are there any 'gotchas' the list can offer when
using a SAS card with SATA drives?
On Fri, May 23, 2008 at 05:26:34PM -0500, Bob Friesenhahn wrote:
On Fri, 23 May 2008, Bill McGonigle wrote:
The remote-disk cache makes perfect sense. I'm curious if there are
measurable benefits for caching local disks as well? NAND-flash SSD
drives have good 'seek' and slow transfer,
On Fri, May 23, 2008 at 12:47:18AM -0700, Pascal Vandeputte wrote:
I sold it and took the cheap route again with a Silicon Image 3124-based
adapter and had more problems which now probably would be solved with the
latest Solaris updates.
I'm running a 3124 with snv81 and haven't had a
On Fri, May 23, 2008 at 12:25:34PM -0700, Erik Trimble wrote:
I'm running a 3124 with snv81 and haven't had a single problem with it.
Whatever problems you ran into have likely been resolved.
The Silicon Image 3114 also works like a champ, but it's SATA 1.0 only.
It's dirt cheap (under
On Fri, May 16, 2008 at 09:30:27AM +0800, Robin Guo wrote:
Hi, Paul
At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc..
As far as root zfs goes, are there any plans to support more than just single
disks or mirrors in U6, or will that be for a later date?
-brian
--
On Fri, May 16, 2008 at 02:32:34PM -0600, Lori Alt wrote:
Install of a zfs root can only be done with the tty-based installer
or with Jumpstart. I will make sure that instructions for both
are made available by the time that SXDE build 90 is
released.
Will the tty or jumpstart based
On Thu, Mar 06, 2008 at 02:07:09PM +0100, Mattias Pantzare wrote:
I don't know how to change the ARC size, but use this to increase
kernel addres space:
eeprom kernelbase=0x5000
Ah ha, that's what I was thinking about.
Your user address space will shrink when you do that.
Yes, but
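Checking the setting before and after, as a sketch (the value below is
hypothetical; lowering kernelbase grows the kernel's address space at the
expense of user space):

  eeprom kernelbase              # print the current value
  eeprom kernelbase=0x80000000   # takes effect on the next boot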
On Tue, Mar 04, 2008 at 09:48:05AM -0500, Rob Logan wrote:
have 4x500G disks in a RAIDZ. I'd like to repurpose [...] as the second
half of a mirror in a machine going into colo.
rsync or zfs send -R the 128G to the machine going to the colo
And this is what I ended up doing. So much
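The send -R route needs a recursive snapshot first; a sketch with
hypothetical pool and host names:

  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | ssh colo-host zfs receive -F -d newpool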
On Thu, Mar 06, 2008 at 10:29:46AM -0500, Rob Logan wrote:
ZFS is not 32-bit safe.
while this is kinda true, if the system has 2G or less of ram
it shouldn't be an issue other than poor performance for lack of
ARC.
So what happens if you have a 32-bit machine with 4GB RAM like I do?
On Thu, Mar 06, 2008 at 11:39:25AM +0100, [EMAIL PROTECTED] wrote:
I think it's specfically problematic on 32 bit systems with large amounts
of RAM. Then you run out of virtual address space in the kernel quickly;
a small amount of RAM (I have one with 512MB) works fine.
I have a 32-bit
I realize I can't remove devices from a vdev, which, well, sucks and
all, but I'm not going to complain about that. ;)
I have 4x500G disks in a RAIDZ. I'd like to repurpose one of them as
I'm finding that all that space isn't really needed and that one disk
would serve me much better elsewhere
On Tue, Mar 04, 2008 at 09:48:05AM -0500, Rob Logan wrote:
have 4x500G disks in a RAIDZ. I'd like to repurpose [...] as the second
half of a mirror in a machine going into colo.
rsync or zfs send -R the 128G to the machine going to the colo
Yeah, that's the fallback plan, which I was
On Thu, Jan 31, 2008 at 03:15:30PM -0700, Lori Alt wrote:
Does this still seem likely to occur, or will it be pushed back further?
I see that build 81 is out today, which means we are not far from seeing
ZFS boot on SPARC in Nevada?
The pressure to get this into build 86 is considerable
On Tue, Jan 29, 2008 at 08:28:42PM -0500, Jim Mauro wrote:
As to the putback schedule of recent ZFS features into Solaris 10, I'm
afraid I
don't have the information. Hopefully, someone else will know...
I've got a box that I'm setting up soon (now, really) and I'd love to know
when the
On Sun, Jan 20, 2008 at 01:51:28PM +0100, Peter Schuller wrote:
So will the pool get bigger just by replacing all 4 disks one-by-one?
Yes, but a re-import (either by export/import or by reboot) is necessary
before the new space will be usable.
Is this step really necessary? The last time I
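The cycle in question, sketched with hypothetical devices (on builds of that
era the extra space shows up only after the re-import):

  # repeat for each disk, letting the resilver finish in between
  zpool replace tank c1t0d0 c2t0d0
  zpool status tank                         # wait for resilver to complete
  zpool export tank && zpool import tank    # make the new space visible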
On Fri, Jan 11, 2008 at 12:36:17PM -0600, Brian Hechinger wrote:
root (hd0,0,a) --- (the IDE disk)
kernel /platform/i86pc/kernel/unix -kv -B bootpath=/[EMAIL
PROTECTED],0/pci1095,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
module /platform/i86pc/boot_archive
and it boots, finds the SATA disk
On Mon, Jan 14, 2008 at 09:52:38AM -0800, Scott Laird wrote:
Run 'defaults write com.apple.systempreferences
TMShowUnsupportedNetworkVolumes 1' as root. I've been using it since
November without problems, but I haven't actually had to restore
anything in anger yet.
I couldn't get that to
On Mon, Jan 14, 2008 at 10:10:26AM -0800, Scott Laird wrote:
I'm using smb. Mount the share via the finder, then go to the time
machine pref pane, and it should show up.
I guess it's time to setup SAMBA then. :)
Thanks!
I've been wanting to backup the mini and the macbook to the
I have a machine whose BIOS cannot see my SiL3124 controller. Solaris
of course sees it just fine. This means that I can't boot from it however.
What I've done is this. I installed Solaris onto a temporary IDE disk and
ran Tim Foster's zfs-actual-root-install.sh script on it to prep the ZFS
On Thu, Dec 06, 2007 at 03:27:33PM -0800, Scott Laird wrote:
MAX3xxxRC (where xxx represents the size) and you'll be wearing a big
smile every time you work on a system so equipped.
Hmmm, on second glance, 36G versions of that seem to be going for $40.
Do you mean $140, or am I missing
On Wed, Dec 05, 2007 at 06:12:18PM -0600, Al Hopper wrote:
PS: LsiLogic just updated their SAS HBAs and have a couple of products
very reasonably priced IMHO. Combine that with a (single ?) Fujitsu
MAX3xxxRC (where xxx represents the size) and you'll be wearing a big
smile every time you
On Wed, Dec 05, 2007 at 06:12:18PM -0600, Al Hopper wrote:
I don't think you'll see any worthwhile improvement. For a ZIL
device, you really need something like a (small) SAS 15k RPM 3.5
drive - which will sustain 700 to 900 IOPS (my number - open to
argument) - or a RAM disk or one of
I will be putting 4 500GB SATA disks in my Ultra80. I currently have
two 10K rpm 73G SCSI disks in it with 10G for the OS (UFS) and the
remaining space for a ZFS pool (the two remaining partitions are setup
in a mirror).
Would it be worth my while to move all the data off of the zfs partitions
I finally got some new drives for my Ultra 80. I have two 73gig 10K
RPM SCSI disks in it now with 60GB in a ZFS mirror. I am going to be
adding 4x500G SATA disks in a RAIDZ, and I was thinking about using
the old zfs space on the SCSI disks for intent logs.
My questions are this:
1) Is it
On Tue, Nov 20, 2007 at 02:01:34PM -0600, Al Hopper wrote:
a) the SuperMicro AOC-SAT2-MV8 is an 8-port SATA card available for
around $110 IIRC.
Yeah, I'd like to spend a lot less than that, especially as I only need
2 ports. :)
b) There is also a PCI-X version of the older LSI 4-port
On Sun, Nov 18, 2007 at 02:18:21PM +0100, Peter Schuller wrote:
Right now I have noticed that LSI has recently begun offering some
lower-budget stuff; specifically I am looking at the MegaRAID SAS
8208ELP/XLP, which are very reasonably priced.
I looked up the 8204XLP, which is really quite
I'll be setting up a small server and need two SATA-II ports for an x86
box. The cheaper the better.
Thanks!!
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is
On Mon, Oct 29, 2007 at 08:55:21AM -0700, Mauro Mozzarelli wrote:
Following up, I got this message from Lori:
We are aiming to integrate zfs boot for both sparc and x86
into Nevada around the end of this calendar year.
Lori
Lori,
Thank you for your reply, I will be probably one
How painful is this going to be? Completely?
-brian
--
Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my
On Thu, Sep 13, 2007 at 10:49:36AM +0200, Louwtjie Burger wrote:
http://www.sun.com/servers/entry/x4200/optioncards.jsp#m2pcie
SG-XPCIE8SAS-E-Z ?
I believe that's one of the LSI 1068E based cards. From what I've been
able to tell, anything based on the 106x chipset will work. That's what
On Thu, Sep 13, 2007 at 10:54:41AM -0600, Lori Alt wrote:
In-place upgrade of zfs datasets is not supported and probably
never will be (LiveUpgrade will be the way to go with zfs because
the cloning features of zfs make it a natural). But the LiveUpgrade
changes aren't ready yet, so for the