All true. I just saw too many people needing Ubuntu and ZFS and thought to state the
obvious, in case the patch set for Nexenta happens to differ enough to provide a
working set. I've had Nexenta succeed where OpenSolaris quarterly releases failed,
and vice versa
On Jun 27, 2010, at 9:54 PM, Erik Trimble
Of course, Nexenta OS is a build of Ubuntu on an OpenSolaris kernel.
On Jun 26, 2010, at 12:27 AM, Freddie Cash fjwc...@gmail.com wrote:
On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles merloc...@hotmail.com wrote:
What supporting applications are there on Ubuntu for RAIDZ?
None. Ubuntu
I've had this happen to me too. I found some dtrace scripts at the
time that showed that the file system was spending too much time
finding available 128k blocks or the like, as I was near full on each
disk, even though combined I still had 140GB left of my 3TB pool. It
was the SPA code, I believe
On Thu, Jun 5, 2008 at 9:26 PM, Tim [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 11:12 PM, Joe Little [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 8:16 PM, Tim [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh [EMAIL PROTECTED]
wrote:
Hey guys, please excuse me in advance if I say or ask anything stupid :)
Anyway, Solaris newbie here. I've built for myself a new file server to
use at
On Fri, May 30, 2008 at 7:43 AM, Paul Raines [EMAIL PROTECTED] wrote:
It seems when a ZFS filesystem with reserv/quota is 100% full, users can no
longer even delete files to fix the situation, getting errors like these:
$ rm rh.pm6895.medial.V2.tif
rm: cannot remove `rh.pm6895.medial.V2.tif':
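A workaround often suggested for the full-pool rm failure above (a hedged sketch; the filename comes from the error message, and success can still depend on how the reserv/quota is set up): truncate the file in place to release its blocks, then unlink it.

```shell
# On a copy-on-write filesystem the unlink itself needs to write new
# metadata, so plain rm can fail at 100% full; an in-place truncation
# frees the data blocks first and usually gives rm the room it needs.
f=rh.pm6895.medial.V2.tif
cp /dev/null "$f"   # release the file's data blocks
rm "$f"             # the unlink now has space for its metadata update
```

Another common escape hatch is deleting an old snapshot, since snapshots can pin the space the quota is counting.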
On Fri, May 30, 2008 at 6:30 AM, Jeb Campbell [EMAIL PROTECTED] wrote:
Ok, here is where I'm at:
My install of OS 2008.05 (snv_86?) will not even come up in single user.
The OS 2008.05 live cd comes up fine, but I can't import my old pool b/c of
the missing log (and I have to import to fix
On Fri, May 30, 2008 at 7:07 AM, Hugh Saunders [EMAIL PROTECTED] wrote:
On Fri, May 30, 2008 at 10:37 AM, Akhilesh Mritunjai
[EMAIL PROTECTED] wrote:
I think it's right. You'd have to move to a 64-bit kernel. Any reasons to
stick to a 32-bit kernel?
My reason would be lack of 64bit
On Thu, May 29, 2008 at 7:25 PM, Jeb Campbell [EMAIL PROTECTED] wrote:
Meant to add that zpool import -f pool doesn't work b/c of the missing log
vdev.
All the other disks are there and show up with zpool import, but it won't
import.
Is there any way a util could clear the log device vdev
This past weekend, my holiday was ruined due to a log device
replacement gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself, due to
the fact one can't remove a log device
On Mon, May 26, 2008 at 6:10 AM, Gerard Henry [EMAIL PROTECTED] wrote:
Hello all,
I have Indiana freshly installed on a Sun Ultra 20 machine. It only does NFS
serving. During one night the kernel crashed, and I got these messages:
May 22 02:18:57 ultra20 unix: [ID 836849 kern.notice]
against the log device,
as you can't remove it, and the replacement, as you see, is touchy at
best. I know the larger, more general vdev evacuation work is ongoing, but
if it is simple, log evacuation would make logs useful now instead of
waiting.
- Eric
On Tue, May 27, 2008 at 01:13:47PM -0700, Joe Little
point there were plans to do this as a separate
piece of work (since the vdev changes are needed for the general case
anyway), but I don't know whether this is still the case.
- Eric
Hello list,
We discovered a failed disk with checksum errors. Took out the disk
and resilvered, which reported many errors. A few of my subvolumes in
the pool won't mount anymore, with zpool import poolname reporting
that it cannot mount 'poolname/proj': I/O error
Ok, we have a problem. I can
On Tue, Apr 8, 2008 at 9:55 AM, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote on 04/08/2008 11:22:53 AM:
In our environment, the politically and administratively simplest
approach to managing our storage is to give each separate group at
least one ZFS pool of their own (into which
I was playing with a Gigabyte i-RAM card and found out it works great
to improve overall performance when there are a lot of writes of small
files over NFS to such a ZFS pool.
However, I noted a frequent situation in periods of long writes over
NFS of small files. Here's a snippet of iostat
is that this is a ramdisk, so to speak,
albeit connected via SATA-I to the sil3124. Any way to isolate this
further? Any way to limit I/O timeouts to a drive? This is just two
sticks of RAM... ms would be fine :)
-- richard
Joe Little wrote:
I was playing with a Gigabyte i-RAM card and found out it works
On Nov 26, 2007 7:57 PM, Richard Elling [EMAIL PROTECTED] wrote:
Joe Little wrote:
On Nov 26, 2007 7:00 PM, Richard Elling [EMAIL PROTECTED] wrote:
I would expect such iostat output from a device which can handle
only a single queued I/O to the device (eg. IDE driver) and an I/O
On Nov 20, 2007 6:34 AM, MC [EMAIL PROTECTED] wrote:
So there is no current way to specify the creation of a 3-disk raid-z
array with a known missing disk?
Can someone answer that? Or does the zpool command NOT accommodate the
creation of a degraded raidz array?
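One workaround that has circulated for exactly this (a hedged sketch, not a documented zpool feature; the pool and device names are hypothetical): stand in for the missing disk with a sparse file, then offline the file so the raidz runs degraded.

```shell
# Sparse placeholder the size of the absent disk; no blocks are
# actually allocated (GNU truncate shown; on Solaris, mkfile -n
# 400g /tmp/fakedisk does the same).
truncate -s 400G /tmp/fakedisk

# Hypothetical pool build: two real disks plus the placeholder,
# then offline and delete the placeholder. The pool runs DEGRADED
# until the real third disk shows up via zpool replace.
# zpool create tank raidz c0t0d0 c0t1d0 /tmp/fakedisk
# zpool offline tank /tmp/fakedisk
# rm /tmp/fakedisk
# ...later: zpool replace tank /tmp/fakedisk c0t2d0
```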
On Nov 18, 2007 1:44 PM, Richard Elling [EMAIL PROTECTED] wrote:
one more thing...
Joe Little wrote:
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, whereby
On Nov 16, 2007 10:41 PM, Neil Perrin [EMAIL PROTECTED] wrote:
Joe Little wrote:
On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote:
Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. Previously the ZIL would be
spread out
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, whereby a directory of files
including some large 100+MB in size being written can cause other
clients over NFS to pause
keeps the ZIL pegged and prohibits
reads, it's all for nought. Is there any way to tune/configure the
ZFS/NFS combination to balance reads/writes so as not to starve one for the
other? It's either feast or famine, or so tests have shown.
Neil.
Joe Little wrote:
I have historically noticed that in ZFS
Not for NexentaStor as yet to my knowledge. I'd like to caution that
the target of the initial product release is digital
archiving/tiering/etc and is not necessarily primary NAS usage, though
it can be used as such for those so inclined. However, interested
parties should contact them as they
On 11/2/07, MC [EMAIL PROTECTED] wrote:
I consider myself an early adopter of ZFS and pushed it hard on this
list and in real life with regards to iSCSI integration, ZFS
performance issues with latency thereof, and how best to use it with
NFS. Well, I finally get to talk more about
On 11/2/07, Rob Logan [EMAIL PROTECTED] wrote:
I'm confused by this and NexentaStor... wouldn't it be better
to use b77? with:
Heads Up: File system framework changes (supplement to CIFS' head's up)
Heads Up: Flag Day (Addendum) (CIFS Service)
Heads Up: Flag Day (CIFS Service)
I consider myself an early adopter of ZFS and pushed it hard on this
list and in real life with regards to iSCSI integration, ZFS
performance issues with latency thereof, and how best to use it with
NFS. Well, I finally get to talk more about the ZFS-based product I've
been beta testing for quite
On 6/7/07, Al Hopper [EMAIL PROTECTED] wrote:
On Wed, 6 Jun 2007, Erast Benson wrote:
Announcing new direction of Open Source NexentaOS development:
NexentaCP (Nexenta Core Platform).
NexentaCP is Dapper/LTS-based core Operating System Platform distributed
as a single-CD ISO, integrates
On 2/27/07, Eric Haycraft [EMAIL PROTECTED] wrote:
I am no scripting pro, but I would imagine it would be fairly simple to create
a script and batch it to make symlinks in all subdirectories.
I've done something similar using NFS aggregation products. The real
problem is when you export,
build 57.
Jeff
On Sat, Feb 10, 2007 at 05:18:05PM -0800, Joe Little wrote:
So, I'm attempting to find the inode from the result of a zpool status -v:
errors: The following persistent errors have been detected:
DATASET OBJECT RANGE
cc 21e382 lvl=0 blkid=0
Well
So, I'm attempting to find the inode from the result of a zpool status -v:
errors: The following persistent errors have been detected:
DATASET OBJECT RANGE
cc 21e382 lvl=0 blkid=0
Well, 21e282 appears to not be a valid number for find . -inum blah
Any suggestions?
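A likely explanation, hedged as a guess from the output format rather than anything documented: the OBJECT column in zpool status -v is printed in hex, while find -inum expects decimal, so the number has to be converted first.

```shell
# Read the reported object number 21e382 as hexadecimal and convert.
obj_dec=$(printf '%d\n' 0x21e382)
echo "$obj_dec"    # 2220930

# Then search from the dataset's mountpoint (path is hypothetical;
# -xdev stops find from descending into other mounted filesystems):
# find /pool/dataset -xdev -inum "$obj_dec" -print
```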
On 2/5/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Casper,
Monday, February 5, 2007, 2:32:49 PM, you wrote:
Hello zfs-discuss,
I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
should be covered between -19 and -36, like HotSpare support.
However, despite -36 is
On 2/1/07, Al Hopper [EMAIL PROTECTED] wrote:
On Thu, 1 Feb 2007, Tom Buskey wrote:
[i]
I got an Addonics eSATA card. SATA 3.0. PCI *or* PCI-X. Works right off the
bat w/ 10u3. No firmware update needed. It was $130. But I don't pull out my hair,
and I can use it if I upgrade my server for
On 1/24/07, Jonathan Edwards [EMAIL PROTECTED] wrote:
On Jan 24, 2007, at 09:25, Peter Eriksson wrote:
too much of our future roadmap; suffice it to say that one should expect
much, much more from Sun in this vein: innovative software and innovative
hardware working together to deliver
and specific models, and the driver used? Looks like there may be
stability issues with the marvell, which appear to go unanswered..
On 12/21/06, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Hi Naveen,
I believe the newer LSI cards work pretty well with Solaris.
Best Regards,
Jason
On
On 12/21/06, Al Hopper [EMAIL PROTECTED] wrote:
On Thu, 21 Dec 2006, Joe Little wrote:
and specific models, and the driver used? Looks like there may be
stability issues with the marvell, which appear to go unanswered..
I've tested a box running two Marvell based 8-port controllers (which
We just put together a new system for ZFS use at a company, and twice
in one week we've had the system wedge. You can log on, but the zpools
are hosed, and a reboot never occurs if requested since it can't
unmount the zfs volumes. So, only a power cycle works.
In both cases, we get this:
Dec 20
Some further joy:
http://bugs.opensolaris.org/view_bug.do?bug_id=6504404
On 11/22/06, Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] wrote:
On Nov 22, 2006, at 4:11 PM, Al Hopper wrote:
No problem there! ZFS rocks. NFS/ZFS is a bad combination.
Has anyone tried sharing a ZFS fs using samba or afs or something
else besides nfs? Do we have the same issues?
The latest OpenSolaris release? Perhaps Nexenta in the end is the way
to best deliver/maintain that.
On 10/27/06, David Blacklock [EMAIL PROTECTED] wrote:
What is the current recommended version of Solaris 10 for ZFS ?
-thanks,
-Dave
On 9/12/06, James C. McPherson [EMAIL PROTECTED] wrote:
Joe Little wrote:
So, people here recommended the Marvell cards, and one even provided a
link to acquire them for SATA jbod support. Well, this is what the
latest bits (B47) say:
Sep 12 13:51:54 vram marvell88sx: [ID 679681
Yeah. I got the message from a few others, and we are hoping to
return/buy the newer one. I'm sort of surprised by the limited set of
SATA RAID or JBOD cards that one can actually use. Even the ones
linked to on this list sometimes aren't supported :). I need to get up
and running like
So, people here recommended the Marvell cards, and one even provided a
link to acquire them for SATA jbod support. Well, this is what the
latest bits (B47) say:
Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING:
marvell88sx0: Could not attach, unsupported chip stepping or unable
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB of 2.8TB (3 stripes of 950GB or so, each of which is
a RAID5 volume on the adaptec card). We have snapshots every 4 hours
for the first few days. If you add up the snapshot references it
appears somewhat high
On 8/24/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
On Thu, Aug 24, 2006 at 07:07:45AM -0700, Joe Little wrote:
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB of 2.8TB (3 stripes of 950GB or so, each of which is
a RAID5 volume on the adaptec card). We
On 8/16/06, Frank Cusack [EMAIL PROTECTED] wrote:
On August 16, 2006 10:25:18 AM -0700 Joe Little [EMAIL PROTECTED] wrote:
Is there a way to allow simple export commands the traverse multiple
ZFS filesystems for exporting? I'd hate to have to have hundreds of
mounts required for every point
On 8/16/06, Frank Cusack [EMAIL PROTECTED] wrote:
On August 16, 2006 10:34:31 AM -0700 Joe Little [EMAIL PROTECTED] wrote:
On 8/16/06, Frank Cusack [EMAIL PROTECTED] wrote:
On August 16, 2006 10:25:18 AM -0700 Joe Little [EMAIL PROTECTED] wrote:
Is there a way to allow simple export commands
and some major
penalties for streaming writes of various sizes with the NFS
implementation and its fsync happiness (3 fsyncs per write from an NFS
client). It's all very true that it's stable/safe, but it's also very
slow in various use cases!
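The per-write sync cost described above can be felt even without NFS in the path (a hedged local illustration, not a measurement of the poster's setup): compare buffered writes with writes that sync on every block.

```shell
# 1000 x 8K writes: buffered, then synced per write (GNU dd's
# oflag=sync). Running each under time makes the gap obvious; the
# synced run pays a disk flush per block, much like an NFS client
# fsyncing on every write.
dd if=/dev/zero of=/tmp/buffered.out bs=8k count=1000 2>/dev/null
dd if=/dev/zero of=/tmp/synced.out bs=8k count=1000 oflag=sync 2>/dev/null
```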
On 8/1/06, eric kustarz [EMAIL PROTECTED] wrote:
Joe Little
On 7/31/06, Dale Ghent [EMAIL PROTECTED] wrote:
On Jul 31, 2006, at 8:07 PM, eric kustarz wrote:
The 2.6.x Linux client is much nicer... one thing fixed was the
client doing too many commits (which translates to fsyncs on the
server). I would still recommend the Solaris client but i'm sure
I've always seen this curve in my tests (local disk or iSCSI) and just
think it's ZFS as designed. I haven't seen much parallelism when I have
multiple I/O jobs going; the filesystem seems to go mostly into one or
the other mode. Perhaps per vdev (in iSCSI I'm only exposing one or
two), there is
On 6/27/06, Erik Trimble [EMAIL PROTECTED] wrote:
Darren J Moffat wrote:
Peter Rival wrote:
storage arrays with the same arguments over and over without
providing an answer to the customer problem doesn't do anyone any
good. So. I'll restate the question. I have a 10TB database that's
To clarify what has just been stated: with zil disabled I got 4MB/sec.
With zil enabled I get 1.25MB/sec
On 6/23/06, Tao Chen [EMAIL PROTECTED] wrote:
On 6/23/06, Roch [EMAIL PROTECTED] wrote:
On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote:
On 6/22/06, Jeff Bonwick [EMAIL
On 6/23/06, Roch [EMAIL PROTECTED] wrote:
Joe Little writes:
On 6/22/06, Bill Moore [EMAIL PROTECTED] wrote:
Hey Joe. We're working on some ZFS changes in this area, and if you
could run an experiment for us, that would be great. Just do this:
echo 'zil_disable/W1' | mdb -kw
What if your 32bit system is just a NAS -- ZFS and NFS, nothing else?
I think it would still be ideal to allow tweaking of things at runtime
to make 32-bit systems more ideal.
On 6/21/06, Mark Maybee [EMAIL PROTECTED] wrote:
Yup, you're probably running up against the limitations of 32-bit
On 6/22/06, Darren J Moffat [EMAIL PROTECTED] wrote:
Rich Teer wrote:
On Thu, 22 Jun 2006, Joe Little wrote:
Please don't top post.
What if your 32bit system is just a NAS -- ZFS and NFS, nothing else?
I think it would still be ideal to allow tweaking of things at runtime
to make 32-bit
On 6/22/06, Jeff Bonwick [EMAIL PROTECTED] wrote:
a test against the same iscsi targets using linux and XFS and the
NFS server implementation there gave me 1.25MB/sec writes. I was about
to throw in the towel and deem ZFS/NFS as unusable until B41 came
along and at least gave me 1.25MB/sec.
I guess the only hope is to find pin-compatible Xeons that are 64bit
to replace what is a large chassis with 24 slots of disks that has
specific motherboard form-factor, etc. We have 6 of these things from
a government grant that must be used for the stated purpose. So, yes,
we can buy product,
don't have time, no big deal.
--Bill
On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote:
On 6/22/06, Jeff Bonwick [EMAIL PROTECTED] wrote:
a test against the same iscsi targets using linux and XFS and the
NFS server implementation there gave me 1.25MB/sec writes. I was about
I've been writing via tar to a pool some stuff from backup, around
500GB. It's taken quite a while as the tar is being read from NFS. My
ZFS partition in this case is a RAIDZ 3-disk job using 3 400GB SATA
drives (sil3124 card).
Every once in a while, a df stalls, and during that time my I/Os go
Please add to the list the differences on locally or remotely attached
vdevs: FC, SCSI/SATA, or iSCSI. This is the part that is troubling me
most, as there are wildly different performance characteristics when
you use NFS with any of these backends with the various configs of
ZFS. Another thing is
Well, here's my previous summary off list to different solaris folk
(regarding NFS serving via ZFS and iSCSI):
I want to use ZFS as a NAS with no bounds on the backing hardware (not
restricted to one box's capacity). Thus, there are two options: FC SAN
or iSCSI. In my case, I have multi-building
Well, I would caution at this point against the iscsi backend if you
are planning on using NFS. We took a long winded conversation online
and have yet to return to this list, but the gist of it is that the
latency of iscsi along with the tendency for NFS to fsync 3 times per
write causes
well, here's my first pass result:
[EMAIL PROTECTED] loges1]# time tar xf /root/linux-2.2.26.tar
real    114m6.662s
user    0m0.049s
sys     0m1.354s
On 5/11/06, Roch Bourbonnais - Performance Engineering
[EMAIL PROTECTED] wrote:
Joe Little writes:
How did you get the average time
, XFS, JFS, etc which I've tested
before)
On 5/8/06, Nicolas Williams [EMAIL PROTECTED] wrote:
On Fri, May 05, 2006 at 11:55:17PM -0500, Spencer Shepler wrote:
On Fri, Joe Little wrote:
Thanks. I'm playing with it now, trying to get the most succinct test.
This is one thing that bothers me
Thanks for the tip. In the local case, I could send to the
iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
of 50 seconds (17 seconds better than UFS). However, I didn't even bother
finishing the NFS client test, since it was taking a few seconds
between multiple 27K files. So,
such a different code path?
On 5/5/06, Lisa Week [EMAIL PROTECTED] wrote:
These may help:
http://opensolaris.org/os/community/dtrace/scripts/
Check out iosnoop.d
http://www.solarisinternals.com/si/dtrace/index.php
Check out iotrace.d
- Lisa
Joe Little wrote On 05/05/06 18:59
This was fixed in snv_39.
Joe Little wrote On 05/04/06 15:47:
I've been writing to the Solaris NFS list since I was getting some bad
performance copying via NFS (noticeably there) a large set of small
files. We have various source trees, including a tree with many linux
versions that I