that will be registered
# and offered by the server. The default is 4.
NFS_SERVER_VERSMAX=3
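As an aside, that cap lives in /etc/default/nfs and only takes effect after the NFS server service is restarted; a minimal sketch (standard SMF service name, adjust for your release):

```shell
# /etc/default/nfs -- stop the server from offering NFSv4:
#   NFS_SERVER_VERSMAX=3
# Then restart the NFS server so the new maximum is picked up:
svcadm restart svc:/network/nfs/server
# Confirm which versions clients are actually using afterwards:
nfsstat -s
```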
At this point I have no clue what the issue is
On the NFS server I've successfully run 'iozone' twice.
I've also run a scrub on this zpool.
Please advise...
Tim
This message posted from opensolaris.org
Why not create a sample dataset and throw some large .txt files out there
and see what happens? That way you'll know for certain if there's some bug
you're hitting, or if it's just not applicable to your current dataset.
On 1/27/08, Joachim Pihl [EMAIL PROTECTED] wrote:
On Sat, 26 Jan 2008
++
http://www.sunfreeware.com/programlistintel10.html#bonnie++
--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
you to import as. So you could:
zpool import --import-as yourpool.backup yourpool
I definitely see why you'd want to do it. I haven't a clue if you can :)
--Tim
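For what it's worth, there is no `--import-as` flag, but `zpool import` does accept an optional new name as its last argument; a sketch (pool names taken from the post, the numeric pool id is made up):

```shell
# Import the pool under a different name; the optional trailing
# argument renames it as part of the import:
zpool import yourpool yourpool.backup

# If two exported pools share a name, disambiguate with the numeric
# id that a bare 'zpool import' listing prints (id below is made up):
zpool import 1234567890123456 yourpool.backup
```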
On 2/5/08, Sam [EMAIL PROTECTED] wrote:
Hi,
I posted in the Solaris install forum as well about the fileserver I'm
building for media files but wanted to ask more specific questions about zfs
here. The setup is 8x500GB SATAII drives to start and down the road another
4x750 SATAII drives,
On 2/12/08, Johan Kooijman [EMAIL PROTECTED] wrote:
Good morning all,
can anyone confirm that 3ware raid controllers are indeed not working
under Solaris/OpenSolaris? I can't seem to find it in the HCL.
We're now using a 3Ware 9550SX as a S-ATA RAID controller. The
original plan was to
On 2/13/08, Tom Buskey [EMAIL PROTECTED] wrote:
Are you using the Supermicro in Solaris or OpenSolaris? Which version?
64 bit or 32 bits?
I'm asking because I recently went through a number of SCSI cards that are
in the HCL as supported, but do not have 64 bit drivers. So they only work
of the LUNs? Perhaps I'm misunderstanding why you went from one
giant raid-0 to what is essentially a raid-10.
--Tim
On 2/28/08, Christine Tran [EMAIL PROTECTED] wrote:
Alan Perry wrote:
Alan Perry wrote:
I gave a talk on ZFS at a local user group meeting this evening.
What I didn't
know going in was that the meeting was hosted at a Novell consulting
shop. I got
asked a lot of what does
On 2/28/08, Alan Perry [EMAIL PROTECTED] wrote:
Tim wrote:
Don't forget, ZFS is open source, and can be ported to any other
number of platforms as well. It's also currently supported on FreeBSD
7.0, and is basically production ready on that platform.
The open source is HUGE in my mind
On 3/3/08, John R. Sconiers II [EMAIL PROTECTED] wrote:
Hi,
New user question. In ZFS (Solaris 10), are we able to evacuate a disk
if we later decide to remove it from a ZFS pool? I know the answer used
to be no, but I'm not sure if that has changed or will change.
JOHN
--
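For context, a sketch of what was and wasn't possible at the time (device names are made up; later ZFS releases did add top-level vdev removal):

```shell
# Works: detaching one side of a mirror frees that disk
zpool detach tank c1t3d0

# Works: removing a hot spare or cache device
zpool remove tank c1t5d0

# Does not work in this era: evacuating a plain top-level data vdev;
# 'zpool remove' refuses anything but spares and cache devices
zpool remove tank c1t4d0
```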
On 3/4/08, Brian Hechinger [EMAIL PROTECTED] wrote:
I realize I can't remove devices from a vdev, which, well, sucks and
all, but I'm not going to complain about that. ;)
I have 4x500G disks in a RAIDZ. I'd like to repurpose one of them as
I'm finding that all that space isn't really needed
mount it from two systems at the same
time. That is, unless something big has changed recently... I haven't been
doing a very good job of keeping up lately, but I think it'd be hard to miss
an announcement like that!
--Tim
On 3/14/08, Vahid Moghaddasi [EMAIL PROTECTED] wrote:
On Fri, Mar 14, 2008 at 11:26 PM, Tim [EMAIL PROTECTED] wrote:
replace your LUNs one at a time:
zpool replace -f rd_01 c4t6006048187870150525244353543d0
first_lun_off_dmx-3
zpool replace -f rd_01
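A sketch of the full migration loop implied above (names from the post; wait for each resilver before moving on):

```shell
# Swap each LUN for its replacement, one at a time:
zpool replace -f rd_01 c4t6006048187870150525244353543d0 first_lun_off_dmx-3

# Wait until 'zpool status' shows the resilver is complete
# before issuing the next replace:
zpool status rd_01

# ...then repeat for the remaining LUNs.
```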
, and all hot swappable.
http://www.supermicro.com/products/chassis/3U/933/SC933T-R760.cfm
I paired one of those with a pair of the supermicro 8-port sata cards.
Works like a charm.
--Tim
Assuming you don't need their standard enterprise support, your plan is to
await the release of opensolaris snv_b87. This will include the zfs boot
bits, so that you can use zfs as your root filesystem.
--Tim
on iSCSI for working well with itself, and nothing else.
Have you tested this with vxvm or UFS before blindly pointing the finger at
zfs? It seems very unlikely ZFS is the source of your problem.
--Tim
On 3/20/08, Kyle McDonald [EMAIL PROTECTED] wrote:
Bart Smaalders wrote:
On 4 commodity 500 GB SATA drives set up w/ RAID Z, my 2.6 Ghz dual
core AMD box sustains
100+ MB/sec read or write it happily saturates a GB nic w/ multiple
concurrent reads over
Samba.
This leads me to a
On Sat, Mar 22, 2008 at 11:33 PM, Matt Ingenthron [EMAIL PROTECTED]
wrote:
One more scrub later, and now the snapshot I was trying to send,
@laptopmigration, is now showing errors but the errors on the old snapshots
are gone, since I destroyed the snapshots.
Is this expected behavior?
What are you using to create the files? Is this x86/32bit solaris 9, or
64bit sparc?
On Tue, Mar 25, 2008 at 11:13 AM, Sachin Palav [EMAIL PROTECTED]
wrote:
Hello Team,
I have a file server running solaris 10 (X86), I have ZFS on the file
server and the file systems are exported using NFS.
What you want to do should actually be pretty easy. On the Thumpers, just
do your normal raidz/raidz2, and export them to the Solaris box. Then on
the Solaris box, you just create a zpool and add the LUNs one at a time.
No raid at all. The system should just stripe across all of the LUNs, which
would be across all the Thumpers.
Tim wrote:
What you want to do should actually be pretty easy. On the thumper's,
just do your normal raid-z/raid-z2, and export them to the solaris box.
Then on the solaris box, you just create a zpool, and add the LUN's one
at a time. No raid at all
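A sketch of that layout, assuming made-up device names (redundancy lives on the Thumpers, the head node just stripes):

```shell
# On each Thumper: a raidz2 pool, exported to the head node as LUNs
zpool create thumperpool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

# On the Solaris head node: a plain (unreplicated) pool, adding each
# LUN as a top-level device so writes stripe across all Thumpers
zpool create bigpool c2t1d0
zpool add bigpool c2t2d0
zpool add bigpool c2t3d0
```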
What's on the rest of the disk?
On Sun, Mar 30, 2008 at 7:25 PM, Michael Armbrust
[EMAIL PROTECTED] wrote:
Hello,
I have a pool of four raidz-ed drives that I created in BSD that I would
like to move to a box with a solaris kernel. However, when I run zpool
import it displays the
, 2008 at 11:11 PM, Michael Armbrust
[EMAIL PROTECTED] wrote:
On Mar 30, 2008, at 7:42 PM, Tim wrote:
What's on the rest of the disk?
Nothing, when I created the pool I used the entire disk.
On Sun, Mar 30, 2008 at 7:25 PM, Michael Armbrust
[EMAIL PROTECTED] wrote:
Hello,
I have
On Wed, Apr 2, 2008 at 4:23 PM, [EMAIL PROTECTED] wrote:
Been Googling around on this to no avail...
We're hoping to soon put into production an x4500 with a big ZFS pool,
replacing a (piece of junk) NAS head which replaced our old trusty NetApp.
In each of those older boxes, we configured
On Sat, Apr 5, 2008 at 12:25 AM, Jonathan Loran [EMAIL PROTECTED]
wrote:
This guy seems to have had lots of fun with iSCSI :)
http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html
This is scaring the heck out of
On Mon, Apr 7, 2008 at 10:40 AM, Christine Tran [EMAIL PROTECTED]
wrote:
Crazy question here... but has anyone tried this with say, a QLogic
hardware iSCSI card? Seems like it would solve all your issues. Granted,
they aren't free like the software stack, but if you're trying to setup an
On Sat, Apr 12, 2008 at 4:58 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Sat, 12 Apr 2008, roland wrote:
i'm really wondering that interest in alternative compression
schemes is that low, especially due to the fact that lzo seems to
compress better and be faster than lzjb.
LZO
On Mon, Apr 14, 2008 at 11:34 PM, Will Murnane [EMAIL PROTECTED]
wrote:
On Tue, Apr 15, 2008 at 1:25 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
For obvious reasons (redundancy and throughput), it makes more sense
to purchase two 12 port cards. I see that there is an option to
On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski [EMAIL PROTECTED]
wrote:
I have 16 disks in RAID 5 and I'm not worried.
I'm sure you're already aware, but if not, 22 drives in a raid-6 is
absolutely SUICIDE when using SATA disks. 12 disks is the upper end of
what
you want even with
On Tue, Apr 15, 2008 at 12:03 PM, Keith Bierman [EMAIL PROTECTED] wrote:
On Apr 15, 2008, at 10:58 AM, Tim wrote:
On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski [EMAIL PROTECTED]
wrote:
I have 16 disks in RAID 5 and I'm not worried.
I'm sure you're already aware, but if not, 22
On Fri, Apr 18, 2008 at 9:12 AM, Zlotnick Fred [EMAIL PROTECTED]
wrote:
The in-kernel CIFS stack is not in Solaris 10 Update 5,
and will never appear in any Solaris 10 update, because
the kernel changes required are too invasive. You need
OpenSolaris.
-- Fred
Fred,
Correct me if I'm
On Fri, May 23, 2008 at 2:36 PM, Brian Hechinger [EMAIL PROTECTED] wrote:
On Fri, May 23, 2008 at 12:25:34PM -0700, Erik Trimble wrote:
I'm running a 3124 with snv81 and haven't had a single problem with it.
Whatever problems you ran into have likely been resolved.
The Silicon Image
On Fri, May 23, 2008 at 3:15 PM, Brandon High [EMAIL PROTECTED] wrote:
On Fri, May 23, 2008 at 12:43 PM, Tim [EMAIL PROTECTED] wrote:
I'm looking on their site and don't even see any data on the 3134... this
*something new* that hasn't been released or? The only thing I see is
3132
.
Personally I chose option 3 and loaded Nexenta. You get regular updates,
but it's still *stable* (or has been for me to date).
--Tim
On Fri, May 23, 2008 at 2:43 PM, Christopher Gibbs [EMAIL PROTECTED]
wrote:
Pretty much what the subject says. I'm wondering which platform will
have the best
Yup. They were the first to do so (as far as I know).
--Tim
On Fri, May 23, 2008 at 4:47 PM, Christopher Gibbs [EMAIL PROTECTED]
wrote:
One other thing I noticed is that OpenSolaris (.com) will
automatically install ZFS root for you. Will Nexenta do that?
On Fri, May 23, 2008 at 4:31 PM
of disks is).
The gui is a web interface. Just point your browser at
https://localhost:6789
--Tim
Is there a way to tell zfs to manually fail it vs. physically removing the
drive? Having access to physically remove a disk isn't always possible :)
--Tim
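It can, for the record; a sketch with a made-up device name:

```shell
# Administratively fail the device without touching the hardware;
# -t makes the offline temporary (it reverts at the next reboot):
zpool offline -t tank c1t2d0

# Bring it back when the test is done:
zpool online tank c1t2d0
```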
USED hardware is your friend :) He wasn't quoting new prices.
On Fri, May 30, 2008 at 8:54 PM, Brandon High [EMAIL PROTECTED] wrote:
On Fri, May 30, 2008 at 5:59 PM, Erik Trimble [EMAIL PROTECTED]
wrote:
One thought on this:for a small server, which is unlikely to ever be
CPU
On Mon, Jun 2, 2008 at 8:25 AM, Darryl [EMAIL PROTECTED] wrote:
This is my first post here, and I hope it is OK that I posted in this
thread.
I have been doing a bit of reading on the Solaris platforms, and seem to be
inclined to try out the OpenSolaris OS or Solaris 10. My only worry is
On Wed, Jun 4, 2008 at 5:01 PM, Kyle McDonald [EMAIL PROTECTED] wrote:
andrew wrote:
With the release of the Nevada build 90 binaries, it is now possible to
install SXCE directly onto a ZFS root filesystem, and also put ZFS swap onto
a ZFS filesystem without worrying about having it
-MV8
http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm
It's the same chipset that's in the Thumper, and it's pretty cheap for an
8-port card.
--Tim
On Thu, Jun 5, 2008 at 11:12 PM, Joe Little [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 8:16 PM, Tim [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh [EMAIL PROTECTED]
wrote:
Hey guys, please excuse me in advance if I say or ask anything stupid :)
Anyway
For the cifs side of the house, I think it would be in Sun's best interest
to work with a third party vendor like NTP software. The quota
functionality they provide is far more robust than anything I expect we'll
ever see come directly with zfs. And rightly so... it's what they
specialize in.
On Mon, Jun 9, 2008 at 7:16 AM, Tom Buskey [EMAIL PROTECTED] wrote:
On Fri, Jun 6, 2008 at 16:23, Tom Buskey
[EMAIL PROTECTED] wrote:
I have an AMD 939 MB w/ Nvidea on the motherboard
and 4 500GB SATA II drives in a RAIDZ.
...
I get 550 MB/s
I doubt this number a lot. That's
On Wed, Jun 11, 2008 at 10:18 AM, Lee [EMAIL PROTECTED] wrote:
If you're worried about the bandwidth limitations of putting something like
the supermicro card in a PCI slot, how about using an active riser card to
convert from PCI-E to PCI-X? One of these, or something similar:
Sounds correct to me. The disk isn't sync'd, so boot should fail. If
you pull disk0 or set disk1 as the primary boot device, what does it
do? You can't expect it to resilver before booting.
On 6/11/08, Vincent Fox [EMAIL PROTECTED] wrote:
So I decided to test out failure modes of ZFS root
I guess I find the difference between b90 and opensolaris trivial
given we're supposed to be getting constant updates following the sxce
builds.
On 6/12/08, Mike Gerdts [EMAIL PROTECTED] wrote:
On Thu, Jun 12, 2008 at 9:22 PM, Tim [EMAIL PROTECTED] wrote:
They aren't even close to each other
.
I love the platform, but it's nowhere near the league of a filer, or the
universe of a USP/Sym.
On 6/12/08, Mike Gerdts [EMAIL PROTECTED] wrote:
On Thu, Jun 12, 2008 at 10:12 PM, Tim [EMAIL PROTECTED] wrote:
I guess I find the difference between b90 and opensolaris trivial
given we're supposed
Samba is (at least in sxce) installed by default.
On 6/14/08, matt estela [EMAIL PROTECTED] wrote:
I've been testing opensolaris 2008.05 as a replacement for our main
fileserver. After following the CIFS walkthrough on the genunix wiki, it's
working great with our render farm.
Running into a
Why would you have to buy smaller disks? You can replace the 320s
with 1TB drives, and after the last 320 is out of the raid group, it
will grow automatically.
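A sketch of that rolling replacement, with made-up names (note that on later builds the automatic growth is gated by the pool's autoexpand property, which defaults to off):

```shell
# After physically swapping a 320GB member for a 1TB drive:
zpool replace tank c1t1d0
zpool status tank        # wait for the resilver to finish

# Repeat for each member; once the last small disk is gone the
# raid group grows. On builds with the autoexpand property:
zpool set autoexpand=on tank
```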
On 6/16/08, Miles Nordin [EMAIL PROTECTED] wrote:
Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
happen within
Remind me again what a Veritas license costs. If you can't find RAM for
less than that, you need to find a new VAR/disti.
On 6/16/08, Chris Siebenmann [EMAIL PROTECTED] wrote:
| I guess I find it ridiculous you're complaining about ram when I can
| purchase 4gb for under 50 dollars on a
On Tue, Jun 17, 2008 at 5:33 AM, Darren J Moffat [EMAIL PROTECTED]
wrote:
Tim wrote:
I guess I find it ridiculous you're complaining about ram when I can
purchase 4gb for under 50 dollars on a desktop.
For many people around the world US$50 is a very significant amount of
money
On Tue, Jun 17, 2008 at 8:42 AM, Volker A. Brandt [EMAIL PROTECTED] wrote:
I have a quite old machine with an AMD Athlon 900MHz with 640Mb of RAM
serving up NFS, WebDAV locally to my house and running my webserver
(Apache)
in a Zone. For me performance is perfectly acceptable, but this
Samba CIFS has been in OpenSolaris from day 1.
No, it cannot be used to meet Sun's end goal, which is CIFS INTEGRATION
with the core kernel. Sun's CIFS supports Windows ACLs from the kernel
up. Samba does not.
On 6/22/08, Marcelo Leal [EMAIL PROTECTED] wrote:
Hello all,
i would like to
It is indeed true and you can.
On 6/22/08, kevin williams [EMAIL PROTECTED] wrote:
digg linked to an article related to the apple port of ZFS
(http://www.dell.com/content/products/productdetails.aspx/print_1125?c=us&cs=19&l=en&s=dhs).
I don't have a Mac but was interested in ZFS.
The article
On Mon, Jun 23, 2008 at 11:18 AM, Edward [EMAIL PROTECTED] wrote:
Yes, you are all correct. RAM costs nothing today, even though it might be
bouncing back to its normal margin. DDR2 RAM is relatively cheap. Not to
mention DDR3 will bring us double or more memory capacity.
Not likely. Their
On Mon, Jun 23, 2008 at 1:26 PM, Charles Soto [EMAIL PROTECTED] wrote:
On 6/23/08 11:59 AM, Tim [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 11:18 AM, Edward [EMAIL PROTECTED] wrote:
But the sad thing is Windows XP / Vista is still 32Bit. It doesn't
recognize more then 3.x GB
On Wed, Jun 25, 2008 at 10:44 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
I see that the configuration tested in this X4500 writeup only uses
the four built-in gigabit ethernet interfaces. This places a natural
limit on the amount of data which can stream from the system. For
local host
On Wed, Jun 25, 2008 at 1:19 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Wed, 25 Jun 2008, Tim wrote:
Uhhh... 64-bit/133MHz is about 8.5Gbit/sec. I *HIGHLY* doubt that bus will be a
limit. Without some serious offloading, you aren't pushing that amount of
bandwidth out the card. Most
On Wed, Jun 25, 2008 at 3:13 PM, Lida Horn [EMAIL PROTECTED] wrote:
Tim wrote:
On Wed, Jun 25, 2008 at 10:44 AM, Bob Friesenhahn
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
wrote:
I see that the configuration tested in this X4500 writeup only uses
the four built-in gigabit
On Fri, Jun 27, 2008 at 2:47 PM, Christophe Dupre [EMAIL PROTECTED]
wrote:
Hi all,
based on comments on this list, I bought a new server with 8 SATA bays
and an AOC-SAT2-MV8 SATA controller. I them fired up a jumpstart of
Solaris 10 5/08 of the server. Install runs through perfectly, with a
On Fri, Jun 27, 2008 at 11:50 AM, Albert Chin
[EMAIL PROTECTED] wrote:
On Fri, Jun 27, 2008 at 08:13:14AM -0700, Ross wrote:
Bleh, just found out the i-RAM is 5v PCI only. Won't work on PCI-X
slots which puts that out of the question for the motherboad I'm
using. Vmetro have a 2GB PCI-E
On Sat, Jun 28, 2008 at 1:42 AM, Erik Trimble [EMAIL PROTECTED] wrote:
Brian Hechinger wrote:
On Fri, Jun 27, 2008 at 03:02:43PM -0700, Erik Trimble wrote:
Unfortunately, we need to be careful here with our terminology.
You are completely and 100% correct, Erik. I've been throwing
BIOS revs? Any other pci cards in the system?
On Sun, Jun 29, 2008 at 5:16 PM, Christophe Dupre [EMAIL PROTECTED]
wrote:
Tim,
the system is a Silicon Mechanics A266; the motherboard is a SuperMicro
H8DM8E-2
I tried pluging the Marvell card in both 133MHz PCI-X slots. In one I get
On Sun, Jun 29, 2008 at 8:30 PM, Matthew Gardiner [EMAIL PROTECTED]
wrote:
I think Kyle might be onto something here. With ZFS it is so easy
to create file systems, one could expect many people to do so.
In the past, it was so difficult and required planning, so people
tended to be more
On Sun, Jun 29, 2008 at 8:34 PM, Matthew Gardiner [EMAIL PROTECTED]
wrote:
2008/6/30 Tim [EMAIL PROTECTED]:
On Sun, Jun 29, 2008 at 8:30 PM, Matthew Gardiner
[EMAIL PROTECTED] wrote:
I think Kyle might be onto something here. With ZFS it is so easy
to create file systems, one could
So what version is on your new card? Seems it'd be far easier to
request from Supermicro if we knew what to ask for.
On 7/1/08, Marc Bevand [EMAIL PROTECTED] wrote:
I remember a similar problem with an AOC-SAT2-MV8 controller in a system of mine:
Solaris rebooted each time the marvell88sx driver
So when are they going to release msrp?
On 7/2/08, Mertol Ozyoney [EMAIL PROTECTED] wrote:
Availibilty may depend on where you are located but J4200 and J4400 are
available for most regions.
That equipment is engineered to go well with Sun open storage components
like ZFS.
Besides price
Might want to provide some basics:
What build of Opensolaris are you running? What version of ZFS?
--Tim
Do we have drivers available for ANY OS for these cards currently? It'd be
nice to at least be able to test if they function properly.
--Tim
-to-network bandwidth.
http://www.sun.com/servers/x64/x4540/
--Tim
So are these *tagged* drives/firmware? Do we have to buy them direct from
Sun or can we throw anything we want at it? Does it come pre-loaded with
real drive trays instead of useless blanks?
--Tim
On Wed, Jul 9, 2008 at 3:09 PM, Eric Schrock [EMAIL PROTECTED] wrote:
The X4540 uses on-board LSI SAS controllers (C1068E).
- Eric
On Wed, Jul 09, 2008 at 02:59:26PM -0500, Tim wrote:
So, I see Sun finally updated the Thumper, and it appears they're now
using
a PCI-E backplane. Anyone
, or those worthless
blanks that won't hold a drive?
--Tim
was misquoted, or someone was
confused :)
http://www.byteandswitch.com/document.asp?doc_id=158533&WT.svl=news1_1
--Tim
On Wed, Jul 9, 2008 at 3:44 PM, Richard Elling [EMAIL PROTECTED]
wrote:
Yes, thanks for catching this. I'm sure it is just a copy-n-paste
mistake. I've alerted product
Dunno how old it is, but James is right, no RAID, which is why it's cheaper.
Also why I like it ;)
On Wed, Jul 9, 2008 at 7:34 PM, Brandon High [EMAIL PROTECTED] wrote:
On Wed, Jul 9, 2008 at 1:12 PM, Tim [EMAIL PROTECTED] wrote:
Perfect. Which means good ol' supermicro would come through
the thumpers.
--Tim
, but they're a reality.
--Joe
Why not? There's several in the market today whom I suspect have done just
that :D I won't name names, but for anyone in the industry I doubt I have
to.
--Tim
Bryan,
Where did you find the sas to sata cables? I've been looking but
haven't found anything at the usual watering holes. I assume you
grabbed mini-sas to 4 sata?
Thanks!
--tim
On 7/12/08, Bryan Wagoner [EMAIL PROTECTED] wrote:
Here's to hoping it works, I just pulled the trigger
? Does it work with
the new(er) SNV builds?
--Tim
Except the article was redacted. The reason the battery life
decreased was because the throughput increased so much that it drove
the CPU usage up, thus bringing down battery life. It just goes to
show how SEVERELY I/O bound we currently are. The flash itself was
using LESS power.
--tim
On Wed, Jul 23, 2008 at 2:37 PM, Steve [EMAIL PROTECTED] wrote:
I'm a fan of ZFS since I read about it last year.
Now I'm on the way to building a home fileserver, and I'm thinking to go
with OpenSolaris and eventually ZFS!!
Apart from the other components, the main problem is to choose the
On Mon, Aug 4, 2008 at 8:02 AM, Ross [EMAIL PROTECTED] wrote:
Did anybody ever get this card working? SuperMicro only have Windows and
Linux drivers listed on their site. Do Sun's generic drivers work with this
card?
Still waiting to buy a set. I've already got the supermicro marvell
Thanks for the link. I'll consider those, but it still means a new CPU, and
it appears it does not support any of the opteron line-up.
On Mon, Aug 4, 2008 at 3:58 PM, Brandon High [EMAIL PROTECTED] wrote:
On Mon, Aug 4, 2008 at 6:49 AM, Tim [EMAIL PROTECTED] wrote:
really had
chips.
Food for thought.
--Tim
On Thu, Aug 14, 2008 at 5:24 AM, Ross [EMAIL PROTECTED] wrote:
This is the problem when you try to write up a good summary of what you
found. I've got pages and pages of notes of all the tests I did here, far
more than I could include in that PDF.
What
You could always try FreeBSD :)
--Tim
On Fri, Aug 15, 2008 at 9:44 AM, Ross [EMAIL PROTECTED] wrote:
Haven't a clue, but I've just gotten around to installing Windows on this
box to test, and I can confirm that hot plug works just fine in Windows.
Drives appear and disappear in device
.
--Tim
On Mon, Aug 18, 2008 at 6:06 AM, Bernhard Holzer [EMAIL PROTECTED] wrote:
Hi,
I am searching for a roadmap for shrinking a pool. Is there some
project, and where can I find information on when it will be implemented in
Solaris 10?
Thanks
Regards
Bernhard
--
Bernhard Holzer
Sun
I don't think it's just b94; I recall this behavior for as long as I've
had the card. I'd also be interested to know if the Sun driver team
has ever even tested with this card. I realize it's probably not a top
priority, but it sure would be nice to have it working properly.
On 8/20/08, Ross
14+2 or 7+1
On 8/22/08, Miles Nordin [EMAIL PROTECTED] wrote:
m == mike [EMAIL PROTECTED] writes:
m can you combine two zpools together?
no. You can have many vdevs in one pool. for example you can have a
mirror vdev and a raidz2 vdev in the same pool. You can also destroy
pool
On Sat, Aug 23, 2008 at 11:06 PM, Todd H. Poole [EMAIL PROTECTED] wrote:
Howdy yall,
Earlier this month I downloaded and installed the latest copy of
OpenSolaris (2008.05) so that I could test out some of the newer features
I've heard so much about, primarily ZFS.
My goal was to replace our
isn't working as it should be.
--Tim
I'm pretty sure pci-ide doesn't support hot-swap. I believe you need AHCI.
On 8/24/08, Todd H. Poole [EMAIL PROTECTED] wrote:
Ah, yes - all four hard drives are connected to the motherboard's onboard
SATA II ports. There is one additional drive I have neglected to mention
thus far (the
By the way: Is there a way to pull up a text-only interface from the login
screen (or during the boot process?) without having to log in (or just sit
there reading about SunOS Release 5.11 Version snv_86 64-bit)? It would be
nice if I could see a bit more information during boot, or if I
information back to the list.
--Tim
On Wed, Aug 27, 2008 at 1:08 PM, Kenny [EMAIL PROTECTED] wrote:
Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
I've created 11 LUNs from a Sun 2540 disk array (approx 1 TB each). The
host system (Sun Enterprise 5220) recognizes the disks as each having
931GB
out an iSCSI LUN to *insert destination*. Then share
out the LUN from the host it's presented to via cifs/nfs/whatever. You
can't magically make an iSCSI LUN out of the cifs data currently sitting on
the share on the Solaris host. Doesn't work that way.
--Tim
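What does work is building the LUN as a zvol from the start; a sketch using the old shareiscsi property (names and size are examples):

```shell
# Carve a block volume out of the pool...
zfs create -V 100G tank/vol0

# ...and export it as an iSCSI target via the legacy property:
zfs set shareiscsi=on tank/vol0

# Verify the target exists:
iscsitadm list target
```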
this. This is one of the many annoyances I have with
Linux. The way they handle /dev is ridiculous. Did you add a new drive?
Let's renumber everything!
--Tim