On Wed, 28 Jan 2009, Cuyler Dingwell wrote:
> In the process of replacing a raidz1 of four 500GB drives with four
> 1.5TB drives, I ran into an interesting issue on the third one. The
> process was to remove the old drive, put the new drive in, and let it
> rebuild.
>
> The problem was the third
On 01/29/09 21:32, Greg Mason wrote:
> This problem only manifests itself when dealing with many small files
> over NFS. There is no throughput problem with the network.
>
> I've run tests with the write cache disabled on all disks, and the cache
> flush disabled. I'm using two Intel SSDs for ZIL devices.
Kevin,
Looking at the stats, I think the tank pool is about 80% full.
At this point you are possibly hitting this bug:
6596237 - "Stop looking and start ganging"
Also, there is another ZIL-related bug which worsens the case
by fragmenting the space:
6683293 concurrent O_DSYNC writes to a file
This problem only manifests itself when dealing with many small files
over NFS. There is no throughput problem with the network.
I've run tests with the write cache disabled on all disks, and the cache
flush disabled. I'm using two Intel SSDs for ZIL devices.
The funny thing is that this setup is faster: I'm showing a performance
improvement over write caches + cache flushes.
The only way these pools are being accessed is over NFS. Well, at least
the only way I care about when it comes to high performance.
I'm pretty sure it would give a performance hit locally, but I do
On Thu, Jan 29, 2009 at 22:44, Nathan Kroenert wrote:
> You could be the first...
>
> Man up! ;)
*sigh* The 9010b is ordered. Ground shipping, unfortunately, but
eventually I'll post my impressions of it.
Will
> Nathan.
>
> Will Murnane wrote:
>>
>> On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert wrote:
You could be the first...
Man up! ;)
Nathan.
Will Murnane wrote:
> On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert
> wrote:
>> Seems a little pricey for what it is though.
> For what it's worth, there's also a 9010B model that has only one SATA
> port and room for six DIMMs instead of eight at $250 instead of $400.
Multiple Thors (more than 2?), with performance problems.
Maybe it's the common denominator - the network.
Can you run local ZFS IO loads and determine if performance
is expected when NFS and the network are out of the picture?
Thanks,
/jim
Greg Mason wrote:
> So, I'm still beating my head against the wall, trying to find our
> performance bottleneck with NFS on our Thors.
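One concrete way to do that comparison (a throwaway sketch; pool name, path,
and file count are made up) is to generate the same many-small-files pattern
locally on the Thor, with NFS and the network completely out of the path:

  #!/bin/ksh
  # create 10,000 small files directly on the pool, bypassing NFS
  mkdir -p /tank/smalltest
  i=0
  while [ $i -lt 10000 ]; do
      cp /etc/motd /tank/smalltest/f$i
      i=$((i+1))
  done

Timing that run locally versus the same loop on an NFS client should show
whether the slowdown is in the pool itself or in the NFS/network layer.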
On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert wrote:
> Seems a little pricey for what it is though.
For what it's worth, there's also a 9010B model that has only one SATA
port and room for six DIMMs instead of eight at $250 instead of $400.
That might fit in your budget a little easier... I'm co
That bug has been in Solaris forever. :-(
If all write caches are truly disabled, then disabling the cache flush won't
affect the safety of your data.
It will change your performance characteristics, almost certainly for the worse.
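For reference, and only as a sketch of where those knobs live on this vintage
of Solaris (double-check against your build), the per-disk write caches are
toggled through format -e, and the ZFS cache-flush behaviour through an
/etc/system tunable that needs a reboot:

  # per disk: format -e -> select disk -> cache -> write_cache -> disable
  # in /etc/system, to stop ZFS issuing cache-flush requests:
  set zfs:zfs_nocacheflush = 1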
Snapshots are not on a per-pool basis but a per-file-system basis. Thus, when
you took a snapshot of "testpol", you didn't actually snapshot the pool;
rather, you took a snapshot of the top level file system (which has an implicit
name matching that of the pool).
Thus, you haven't actually aff
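A quick illustration, using the pool name from your mail (snapshot names are
made up):

  zfs snapshot testpol@only-top       # snapshots just the top-level file system
  zfs snapshot -r testpol@everything  # -r recurses into every child file system
  zfs list -t snapshot                # shows which datasets actually got one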
As it presents as standard SATA, there should be no reason for this not
to work...
It has battery backup, and CF for backup / restore from DDR2 in the
event of power loss... Pretty cool. (Would have preferred a super-cap,
but oh, well... ;)
Should make an excellent ZIL *and* L2ARC style device
ACARD has launched a new RAM disk which can take up to 64 GB of ECC RAM while
still looking like a standard SATA drive. If anyone remembers the Gigabyte
i-RAM, this might be a new development in this area.
It's called the ACARD ANS-9010 and up...
http://www.acard.com.tw/english/fb01-product.jsp?idno_
Nicolas Williams wrote:
> On Thu, Jan 29, 2009 at 03:02:50PM -0800, Peter Reiher wrote:
> > Does ZFS currently support actual use of extended attributes? If so, where
> > can I find some documentation that describes how to use them?
>
> man runat.1 openat.2 etcetera
Nico, the term "extended at
On 29-Jan-09, at 4:53 PM, Volker A. Brandt wrote:
>> Given the massive success of GNU based systems (Linux, OS X, *BSD)
>
> Ouch! Neither OSX nor *BSD are GNU-based.
I meant, extensive GNU userland (in OS X's case).
(sorry Ian)
--Toby
> They do ship with
> GNU-related things but that's been
Hi Peter,
Yes, ZFS supports extended attributes.
The runat.1 and fsattr.5 man pages are good places
to start.
Cindy
Peter Reiher wrote:
> Does ZFS currently support actual use of extended attributes? If so, where
> can I find some documentation that describes how to use them?
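A minimal, hypothetical session to get started with (file and attribute names
are made up):

  $ touch report.txt
  $ runat report.txt cp /etc/motd note   # store /etc/motd as attribute "note"
  $ runat report.txt ls -l               # list the file's extended attributes
  $ ls -@ report.txt                     # '@' after the mode bits marks xattrs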
> "fm" == Fredrich Maney writes:
fm> put the GNU utilities in default system wide path before the
fm> native Sun utilities in order to make it easier to attract
fm> Linux users
It's a quick thing to make it feel like you're ``doing something about
the problem'' but the idea of di
On Thu, Jan 29, 2009 at 03:02:50PM -0800, Peter Reiher wrote:
> Does ZFS currently support actual use of extended attributes? If so, where
> can I find some documentation that describes how to use them?
man runat.1 openat.2 etcetera
Nico
On Jan 29, 2009, at 18:02, Peter Reiher wrote:
> Does ZFS currently support actual use of extended attributes? If
> so, where can I find some documentation that describes how to use
> them?
Your best bet would probably be:
http://search.sun.com/docs/index.jsp?qt=zfs+extended+attributes
Is
Does ZFS currently support actual use of extended attributes? If so, where can
I find some documentation that describes how to use them?
For years, we resisted stopping rm -r / because people should know
better, until *finally* someone said - you know what - that's just dumb.
Then, just like that, it was fixed.
Yes - This is Unix.
Yes - Provide the gun and allow the user to point it.
Just don't let it go off in their groin or w
Volker A. Brandt wrote:
>> Given the massive success of GNU based systems (Linux, OS X, *BSD)
>>
>
> Ouch! Neither OSX nor *BSD are GNU-based.
Not here, please. This topic has been beaten to death on the discuss
list, where it's topical.
--
Ian.
> Given the massive success of GNU based systems (Linux, OS X, *BSD)
Ouch! Neither OSX nor *BSD are GNU-based. They do ship with
GNU-related things but that's been a long and hard battle.
And the massive success has really only been Linux due to brilliant
PR (and FUD about *BSD) and OS X due to
So, I'm still beating my head against the wall, trying to find our
performance bottleneck with NFS on our Thors.
We've got a couple Intel SSDs for the ZIL, using 2 SSDs as ZIL devices.
Cache flushing is still enabled, as are the write caches on all 48 disk
devices.
What I'm thinking of doing i
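For the archives, the slog setup described above amounts to something like the
following (pool and device names invented); note that two devices handed to a
plain log vdev are striped, so mirror them if the ZIL should survive an SSD
failure:

  zpool add tank log c3t0d0 c3t1d0            # two striped slog devices
  # zpool add tank log mirror c3t0d0 c3t1d0   # mirrored alternative
  zpool status tank                           # the SSDs appear under "logs"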
I like that, although it's a bit of an intelligence insulter. Reminds
me of the old pdp11 install (
http://charles.the-haleys.org/papers/setting_up_unix_V7.pdf ) --
This step makes an empty file system.
6. The next thing to do is to restore the data onto the new empty
file system. To do this y
Maybe add a timer or something? When doing a "destroy", ZFS will keep
everything for 1 minute or so, before overwriting. This way the disk won't get
as fragmented. And if you had fat fingers and typed wrong, you have up to one
minute to undo. That will catch 80% of the mistakes?
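Until something like that exists, a crude approximation (only a sketch; the
trash dataset and retention policy are assumptions) is a wrapper that renames
instead of destroying, with the real destroy done later from cron:

  #!/bin/ksh
  # "soft destroy": park the dataset in a per-pool trash area; a cron job
  # can 'zfs destroy -r' anything in $pool/trash older than a day.
  ds=$1
  pool=${ds%%/*}
  stamp=$(date '+%Y%m%d%H%M%S')
  zfs rename "$ds" "$pool/trash/${ds##*/}-$stamp"   # assumes $pool/trash exists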
I just noticed that my previous assessment was not quite accurate. It's
even stranger. Let's try again.
On S10/b101, I have two pools:
> zpool list
NAME     SIZE    USED  AVAIL  CAP  HEALTH  ALTROOT
double   172G    116G  56.3G  67%  ONLINE  -
single   516G   61.3G   455G  11%  ONLINE  -
On 29-Jan-09, at 2:17 PM, Ross wrote:
> Yeah, breaking functionality in one of the main reasons people are
> going to be trying OpenSolaris is just dumb... really, really dumb.
>
> One thing Linux, Windows, OS/X, etc all get right is that they're
> pretty easy to use right out of the box. Th
Yeah, breaking functionality in one of the main reasons people are going to be
trying OpenSolaris is just dumb... really, really dumb.
One thing Linux, Windows, OS/X, etc all get right is that they're pretty easy
to use right out of the box. They're all different, but they all do their own
job
Hello Jacob,
Thursday, January 29, 2009, 5:09:41 PM, you wrote:
JR> zfs undestroy certainly would be a lifesaver for people who have to
JR> learn this the hard way. Maybe it could be implemented with an
JR> algorithm to the effect of 'keep a pointer somewhere to the old fs
JR> bits and don't tou
I am pretty sure that Oxford 911 is a family of parts. The current Oxford
Firewire parts are the 934 and 936 families. It appears that the Oxford 911
was commonly used in drive enclosures.
The most troublesome part in my experience is the Initio INIC-1430. It does not
get along with scsa1394
I hear that. Had it been a prod box, I'd have been a lot more
paranoid and careful. This was a new vdev with a fresh zone installed
on it, so I only lost a half hour of effort (whew). The seriousness
of the zfs destroy command, though, really hit home during this
process, and I wanted to find ou
Bingo, they were 0750. Thanks so much, that was the one thing I didn't think
of. I thought I was going crazy :).
Thanks again!
-Dustin
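For anyone hitting the same symptom later: with the mount point at 0750,
non-root users are missing the search (execute) bit they need to resolve '..',
so lookups fail with EACCES. The fix is simply (ls output abbreviated and
illustrative):

  # ls -ld /a1000
  drwxr-x---   ...  /a1000      <- 0750: "other" has no search bit
  # chmod 755 /a1000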
Christine Tran wrote:
>> There was a very long discussion about this a couple of weeks ago on
>> one of the lists. Apparently the decision was made to put the GNU
>> utilities in default system wide path before the native Sun utilities
>> in order to make it easier to attract Linux users by making
Dustin Marquess wrote:
> Forgot to add that a truss shows:
>
> 14960: lstat64("/a1000/..", 0xFFBFF7E8)  Err#13 EACCES
> [file_dac_search]
>
> ppriv shows the error in UFS:
>
> $ ppriv -e -D -s -file_dac_search ls -ld /a1000/..
> ls[15022]: missing privilege "file_dac_search" (eui
On Wed, 28 Jan 2009 20:16:37 -0500
Christine Tran wrote:
> Everybody respects rm -f *.
+1
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv105 ++
+ All that's really worth doing is what we do for others (Lewis Carroll)
Forgot to add that a truss shows:
14960: lstat64("/a1000/..", 0xFFBFF7E8)  Err#13 EACCES
[file_dac_search]
ppriv shows the error in UFS:
$ ppriv -e -D -s -file_dac_search ls -ld /a1000/..
ls[15022]: missing privilege "file_dac_search" (euid = 100, syscall = 216)
needed at ufs_ia
Hello. I have a really weird problem with a ZFS pool on one machine, and it's
only with 1 pool on that machine (the other pool is fine). Any non-root users
cannot access '..' on any directories where the pool is mounted, eg:
/a1000 on a1000
read/write/setuid/devices/nonbmand/exec/xattr/noatim
How were you running this test?
Were you running it locally on the machine, or were you running it over
something like NFS?
What is the rest of your storage like? Just direct-attached (SAS or
SATA, for example) disks, or are you using a higher-end RAID controller?
-Greg
kristof wrote:
> Kebab
Kebabber,
You can't expose ZFS filesystems over iSCSI.
You can only expose ZFS volumes (raw volumes) over iSCSI.
PS: 2 weeks ago I did a few tests, using filebench.
I saw little to no improvement using a 32GB Intel X25E SSD.
Maybe this is because filebench is flushing the cache in between test
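Concretely, the zvol route looks roughly like this on builds that still have
the old shareiscsi property (pool and volume names invented):

  zfs create -V 100G tank/vista01      # a raw ZFS volume (zvol), not a file system
  zfs set shareiscsi=on tank/vista01   # export the zvol as an iSCSI target
  iscsitadm list target -v             # confirm the target was created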
> There was a very long discussion about this a couple of weeks ago on
> one of the lists. Apparently the decision was made to put the GNU
> utilities in default system wide path before the native Sun utilities
> in order to make it easier to attract Linux users by making the
> environment more fam
On Wed, Jan 28, 2009 at 11:16 PM, Christine Tran
wrote:
> On Wed, Jan 28, 2009 at 11:07 PM, Christine Tran
> wrote:
>> What is wrong with this?
>>
>> # chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache
>> chmod: invalid mode: `A+user:webservd:add_file/write_data/execute:allow
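The usual workaround on those builds is to call the native chmod by its full
path, since the GNU chmod that the default PATH finds first does not understand
NFSv4 ACL syntax:

  /usr/bin/chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache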
Imagine 10 SATA discs in raidz2 and one or two SSD drives as a cache. Each
Vista client reaches ~90MB/sec to the server, using Solaris CIFS and iSCSI. So
you want to use iSCSI with this. (iSCSI allows ZFS to export a file system as a
native SCSI disc to a desktop PC. The desktop PC can mount thi
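For what it's worth, that layout maps onto something like this (a sketch with
invented device names):

  # 10-disk raidz2 plus an SSD read cache (L2ARC) and an SSD log (ZIL)
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
                           c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
      cache c2t0d0 \
      log c2t1d0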
On Thu, Jan 29, 2009 at 6:13 AM, Kevin Maguire wrote:
> I have tried to establish if some client or clients are thrashing the
> server via nfslogd, but without seeing anything obvious. Is there
> some kind of per-zfs-filesystem iostat?
The following should work in bash or ksh, so long as the lis
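As a rough equivalent, fsstat(1M) reports activity per mounted file system,
which gets close to a per-ZFS-filesystem iostat (mount points below are made
up):

  fsstat /export/home /export/proj 5   # per-mount-point activity, 5s interval
  fsstat zfs 5                         # aggregate activity for all zfs mounts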
Hi,
Is anyone using ZFS with IBM Rational ClearCase (VOBs)?
I am looking for how to take a VOB backup using a ZFS snapshot and
restore it.
Is this a supported configuration from the vendor side?
Any help would be appreciated.
Rgds
Vikash
Hi
We have been using a Solaris 10 system (Sun-Fire-V245) for a while as
our primary file server. This is based on Solaris 10 06/06, plus
patches up to approx May 2007. It is a production machine, and until
about a week ago has had few problems.
Attached to the V245 is a SCSI RAID array, which pr
BJ,
>> The means to specify this is "sndradm -nE ...",
>> where 'E' means enabled.
>
> Got it. Nothing on the disk, nothing to replicate (yet).
:-)
>> The manner in which SNDR can guarantee that
>> two or more volumes are write-order consistent, as they are
>> replicated, is to place them in the
Thanx for your answers, guys. :o)
I'm not contemplating trying this for my ZFS raid, as the SSD drives are
expensive right now. I just want to be able to answer questions when
converting Windows/Linux users to Solaris, and so I'm collecting info. Has
anyone tried this and written it up on a blog? Would be cool to blog
Hi
I am facing some problems after rolling back snapshots created on a pool.
Environment:
bash-3.00# uname -a
SunOS hostname 5.10 Generic_118833-17 sun4u sparc SUNW,Sun-Blade-100
ZFS version:
bash-3.00# zpool upgrade
This system is currently running ZFS version 2.
All pools are formatted using