On Tue, Mar 23, 2010 at 07:22:59PM -0400, Frank Middleton wrote:
On 03/22/10 11:50 PM, Richard Elling wrote:
Look again, the checksums are different.
Whoops, you are correct, as usual. Just 6 bits out of 256 different...
Look which bits are different - digits 24, 53-56 in both cases.
You could try copying the file to /tmp (i.e. swap/ram) and do a continuous loop
of checksums, e.g.

while [ ! -f libdlpi.so.1.x ] ; do
    sleep 1 ; cp libdlpi.so.1 libdlpi.so.1.x
    A=`sha512sum -b libdlpi.so.1.x`
    # keep the placeholder below as the known-good checksum string
    [ "$A" = "<what it should be>  *libdlpi.so.1.x" ] && rm libdlpi.so.1.x
done ; date
Assume the
How about running memtest86+ (http://www.memtest.org/) on the machine
for a while? It doesn't exercise the CPU's arithmetic very much, but
it stresses data paths quite a lot. Just a quick suggestion...
- --
Saso
Damon Atkins wrote:
You could try
you could also use psradm to take a CPU off-line.
At boot I would assume the system boots the same way every time unless
something changes, so you could be hitting the same CPU core every time, or the
same bit of RAM, until booted fully.
Or even run SunVTS Validation Test Suite, which I believe
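The psradm suggestion above can be sketched like this (hypothetical: processor id 1 is only an example, and psradm/psrinfo are Solaris-only, so the block is guarded to be a harmless no-op elsewhere):

```shell
# Hypothetical sketch: isolate a suspect CPU core with psradm (Solaris-only).
# Processor id 1 is an example; guarded so this is a no-op without psradm.
if command -v psradm >/dev/null 2>&1; then
    psrinfo              # list processors and their current states
    psradm -f 1          # -f: take processor 1 offline
    psrinfo -v 1         # confirm it is now off-line
    psradm -n 1          # -n: bring it back online when done testing
else
    echo "psradm not available on this system"
fi
```

If the corruption stops with a particular core off-line, that core (or its cache) is the prime suspect.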
Darren J Moffat darr...@opensolaris.org wrote:
You cannot get a single file out of the zfs send datastream.
I don't see that as part of the definition of a backup - you obviously
do - so we will just have to disagree on that.
If you need to set up a file server of the same size as the
On Wed, March 24, 2010 10:36, Joerg Schilling wrote:
- A public interface to get the property state
That would come from libzfs. There are private interfaces just now that
are very likely what you need zfs_prop_get()/zfs_prop_set(). They aren't
documented or public though and are subject
Hello all,
I am a complete newbie to OpenSolaris, and must set up a ZFS NAS. I do have
linux experience, but have never used ZFS. I have tried to install OpenSolaris
Developer 134 on an 11TB HW RAID-5 virtual disk, but after the installation I
can only use one 2TB disk, and I cannot partition
On Wed, Mar 24, 2010 at 11:01 AM, Dusan Radovanovic dusa...@gmail.com wrote:
Hello all,
I am a complete newbie to OpenSolaris, and must set up a ZFS NAS. I do
have linux experience, but have never used ZFS. I have tried to install
OpenSolaris Developer 134 on an 11TB HW RAID-5 virtual disk,
Hi
On Wednesday 24 March 2010 17:01:31 Dusan Radovanovic wrote:
connected to P212 controller in RAID-5. Could someone direct me or suggest
what I am doing wrong. Any help is greatly appreciated.
I don't know, but I would get around this like this:
My suggestion would be to configure the
Forgot to cc the list, well here goes...
- Original Message
Subject: Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller
Date: Wed, 24 Mar 2010 17:10:58 +0100
From: Svein Skogen sv...@stillbilde.net
To: Dusan Radovanovic
I believe that write caching is turned off on the boot drives or is it
the controller or both?
Which could be a big problem.
On 03/24/10 11:07, Tim Cook wrote:
On Wed, Mar 24, 2010 at 11:01 AM, Dusan Radovanovic dusa...@gmail.com
mailto:dusa...@gmail.com wrote:
Hello all,
I am a
On Mar 23, 2010, at 11:21 PM, Daniel Carosone wrote:
On Tue, Mar 23, 2010 at 07:22:59PM -0400, Frank Middleton wrote:
On 03/22/10 11:50 PM, Richard Elling wrote:
Look again, the checksums are different.
Whoops, you are correct, as usual. Just 6 bits out of 256 different...
Look which
Thank you all for your valuable experience and fast replies. I see your point
and will create one virtual disk for the system and one for the storage pool.
My RAID controller is battery backed up, so I'll leave write caching on.
Thanks again,
Dusan
--
This message posted from opensolaris.org
Thank you for your advice. I see your point and will create one virtual disk
for the system and one for the storage pool. My RAID controller is battery
backed up, so I'll leave write caching on.
Thanks again,
Dusan
On Mar 24, 2010, at 9:14 AM, Karl Rossing wrote:
I believe that write caching is turned off on the boot drives or is it the
controller or both?
By default, ZFS will not enable volatile write caches on SMI-labeled
disk drives (e.g. boot disks).
Which could be a big problem.
Actually, it is
On 24.03.2010 17:42, Richard Elling wrote:
Nonvolatile write caches are not a problem.
Which is why ZFS isn't a replacement for proper array controllers
(defining proper as those with sufficient battery to leave you with a
seemingly intact filesystem), but a very nice augmentation for them.
Loaded mpt_sas, world of difference, thanks.
Then I yanked a drive out of the hot-plug backplane to see what would happen.
My ZPOOL detects an IO failure and runs in degraded mode. All good; pop the
drive back in, but a zpool replace does not appear to be sufficient. (This
works with the 1068E/mpt
On Tue, March 23, 2010 12:00, Ray Van Dolson wrote:
ZFS recognizes disks based on various ZFS special blocks written to them.
It also keeps a cache file on where things have been lately. If you
export a ZFS pool, swap the physical drives around, and import it,
everything should be fine.
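The export/swap/import sequence described above can be sketched as follows ("tank" is a placeholder pool name, and the block is guarded so it is a no-op on systems without ZFS installed):

```shell
# Sketch of the export/rearrange/import sequence. "tank" is a placeholder
# pool name; guarded to be a no-op where the zpool CLI is absent.
if command -v zpool >/dev/null 2>&1; then
    zpool export tank     # flush state and release the pool
    # ...physically rearrange the drives here...
    zpool import tank     # pool is found via on-disk labels, not device paths
    zpool status tank     # confirm all vdevs came back healthy
else
    echo "zpool not available on this system"
fi
```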
Is there a way to reserve space for a particular user or group? Or perhaps
to set a quota for a group which includes everyone else?
I have one big pool, which holds users' home directories, and also the
backend files for the svn repositories etc. I would like to ensure the svn
server process
On Mar 24, 2010, at 10:05 AM, Svein Skogen wrote:
On 24.03.2010 17:42, Richard Elling wrote:
Nonvolatile write caches are not a problem.
Which is why ZFS isn't a replacement for proper array controllers (defining
proper as those with sufficient battery to leave you with a seemingly intact
On Wed, Mar 24, 2010 at 10:52 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
Is there a way to reserve space for a particular user or group? Or
perhaps to set a quota for a group which includes everyone else?
I have one big pool, which holds users’ home directories, and also the
Thank you all for your valuable experience and fast replies. I see your
point and will create one virtual disk for the system and one for the
storage pool. My RAID controller is battery backed up, so I'll leave
write caching on.
I think the point is to say: ZFS software raid is both faster
zfs set reservation=100G dataset/name
That will reserve 100 GB of space for the dataset, and will make that
space unavailable to the rest of the pool.
That doesn't make any sense to me ...
How does that allow subversionuser to use the space, and block joeuser from
using it?
zfs set reservation=100G dataset/name
That will reserve 100 GB of space for the dataset, and will make that
space unavailable to the rest of the pool.
That doesn't make any sense to me ...
How does that allow subversionuser to use the space, and block
joeuser from using it?
Oh - I
On Wed, Mar 24, 2010 at 10:47:27AM -0700, Russ Price wrote:
On Tue, March 23, 2010 12:00, Ray Van Dolson wrote:
ZFS recognizes disks based on various ZFS special blocks written to them.
It also keeps a cache file on where things have been lately. If you
export a ZFS pool, swap the
On Wed, Mar 24, 2010 at 11:18 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
Out of curiosity, the more general solution would be the ability to create a
reservation on a per-user or per-group basis (just like you create quotas on
a per-user or per-group basis). Is this possible?
Which is why ZFS isn't a replacement for proper array controllers
(defining proper as those with sufficient battery to leave you with a
seemingly intact filesystem), but a very nice augmentation for them. ;)
Nothing prevents a clever chap from building a ZFS-based array
controller
which
srbi == Steve Radich, BitShop, Inc ste...@bitshop.com writes:
srbi
http://www.bitshop.com/Blogs/tabid/95/EntryId/78/Bug-in-OpenSolaris-SMB-Server-causes-slow-disk-i-o-always.aspx
I'm having trouble understanding many things in here like ``our file
move'' (moving what from where to where with
The question is not how to create quotas for users.
The question is how to create reservations for users.
One way to create a reservation for a user is to create a quota for everyone
else, but that's a little less manageable, so a reservation per-user would
be cleaner and more desirable.
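The dataset-level approach suggested earlier in the thread can be sketched like this (the dataset name "tank/svn" and the 100G size are placeholders, and the block is guarded for systems without ZFS):

```shell
# Hedged sketch: since ZFS has no per-user reservation, give the svn
# backend its own dataset and reserve space for that dataset instead.
# "tank/svn" and 100G are placeholder names/sizes.
if command -v zfs >/dev/null 2>&1; then
    zfs create tank/svn
    zfs set reservation=100G tank/svn   # guarantee 100 GB to this dataset
    zfs get reservation tank/svn        # verify the property took effect
else
    echo "zfs not available on this system"
fi
```

The reservation blocks everyone outside tank/svn from consuming that space, which approximates a reservation for whichever user owns the dataset.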
On 03/24/10 12:54, Richard Elling wrote:
Nothing prevents a clever chap from building a ZFS-based array controller
which includes nonvolatile write cache.
+1 to that. Something that is inexpensive and small (4GB?) and works in
a PCI express slot.
On Wed, March 24, 2010 14:36, Edward Ned Harvey wrote:
The question is not how to create quotas for users.
The question is how to create reservations for users.
There is currently no way to do per-user reservations. That ZFS property
is only available per-file system.
Even per-user and
On 24.03.2010 19:53, Karl Rossing wrote:
On 03/24/10 12:54, Richard Elling wrote:
Nothing prevents a clever chap from building a ZFS-based array controller
which includes nonvolatile write cache.
+1 to that. Something that is inexpensive and
Sorry if this has been discussed before. I tried searching but I couldn't find
any info about it. We would like to export our ZFS configurations in case we
need to import the pool onto another box. We do not want to back up the actual
data in the zfs pool, that is already handled through another
On Wed, Mar 24 at 12:20, Wolfraider wrote:
Sorry if this has been discussed before. I tried searching but I
couldn't find any info about it. We would like to export our ZFS
configurations in case we need to import the pool onto another
box. We do not want to back up the actual data in the zfs
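One hedged way to capture a pool's configuration (not its data) so it could be rebuilt elsewhere, assuming the standard zpool CLI and a placeholder pool name "tank", guarded for systems without ZFS:

```shell
# Capture the pool's layout, properties, and command history as text.
# "tank" is a placeholder pool name; no-op where zpool is absent.
if command -v zpool >/dev/null 2>&1; then
    zpool status tank        # current vdev layout
    zpool get all tank       # pool properties
    zpool history tank       # every configuration command ever run on the pool
else
    echo "zpool not available on this system"
fi
```

Redirect that output to a file kept off the pool, and the vdev layout and settings can be recreated by hand if the pool must be rebuilt.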
On 02/28/10 08:09 PM, Ian Collins wrote:
I was running zpool iostat on a pool comprising a stripe of raidz2
vdevs that appears to be writing slowly and I notice a considerable
imbalance of both free space and write operations. The pool is
currently feeding a tape backup while receiving a
Yes, I think Eric is correct.
Funny, this is an adjunct to the thread I started entitled Thoughts on ZFS
Pool
Backup Strategies. I was going to include this point in that thread but
thought
better of it.
It would be nice if there were an easy way to extract a pool configuration,
with
all of the
Brandon High bh...@freaks.com writes:
Someone pointed out that you can use bart, but that also scans the
directories. It might do what you want, but it doesn't work at the zpool /
zfs level, just at the file level layer.
Apparently I missed any suggestion about bart, but looking it up just
Harry Putnam rea...@newsguy.com writes:
At just a quick read, it really just sounds like rsync after it's been in a
severe wreck and was badly crippled.
Oops, I may have looked at the wrong bart. One of the first hits
Google turned up was:
http://www.zhornsoftware.co.uk/bart/index.html
On Wed, Mar 24, 2010 at 08:02:06PM +0100, Svein Skogen wrote:
Maybe someone should look at implementing the zfs code for the XScale
range of io-processors (such as the IOP333)?
NetBSD runs on (many of) those.
NetBSD has an (in-progress, still-some-issues) ZFS port.
Hopefully they will converge
Oops, I may have looked at the wrong bart.
I think he meant this BART:
http://blogs.sun.com/gbrunett/entry/automating_solaris_10_file_integrity
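The core idea behind BART (record a checksum manifest for a file tree, then verify it later) can be sketched portably with standard tools; the directory and file names below are throwaway examples:

```shell
# Minimal, portable sketch of BART's manifest idea: checksum a tree now,
# verify it later. Paths and contents here are throwaway examples.
dir=$(mktemp -d)
echo "hello" > "$dir/a.txt"
echo "world" > "$dir/b.txt"
# Build the manifest: one checksum line per file, sorted for stable output.
( cd "$dir" && find . -type f | sort | xargs sha512sum ) > "$dir.manifest"
# Later: verify that nothing in the tree has changed.
result=$(cd "$dir" && sha512sum -c "$dir.manifest")
echo "$result"
rm -rf "$dir" "$dir.manifest"
```

Unlike the rsync comparison above, this records integrity state at a point in time rather than synchronizing two trees.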
I'm going to make one quick comment about this, despite my better
judgment telling me to keep quiet. I don't think anyone should use ZFS
as a VCS like
Hello,
I have boxed myself into a mental corner and need some help getting out. I am
confused about working with ZFS file systems. Here is a simple example of what
has me confused: Let's say I create the ZFS file system tank/nfs and share that
over NFS. Then I create the ZFS file systems
2010/3/24 Chris Dunbar cdun...@earthside.net
I have boxed myself into a mental corner and need some help getting out. I
am confused about working with ZFS file systems. Here is a simple example of
what has me confused: Let's say I create the ZFS file system tank/nfs and
share that over NFS.
Brandon,
Thank you for the explanation. It looks like I will have to share out each file
system. I was trying to keep the number of shares manageable, but it sounds
like that won't work.
Regards,
Chris
On Mar 24, 2010, at 9:36 PM, Brandon High wrote:
2010/3/24 Chris Dunbar
On Wed, Mar 24, 2010 at 6:39 PM, Chris Dunbar cdun...@earthside.net wrote:
Thank you for the explanation. It looks like I will have to share out each
file system. I was trying to keep the number of shares manageable, but it
sounds like that won't work.
Thanks to inheritance, it's easier than
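A hedged sketch of the inheritance point being made here: sharing the parent dataset is enough, because child filesystems inherit sharenfs automatically ("tank/nfs" is a placeholder, and the block is guarded for systems without ZFS):

```shell
# Sketch: set sharenfs once on the parent; new children inherit it.
# "tank/nfs" is a placeholder dataset; no-op where zfs is absent.
if command -v zfs >/dev/null 2>&1; then
    zfs set sharenfs=on tank/nfs      # parent is shared over NFS...
    zfs create tank/nfs/projects      # ...and the new child inherits sharenfs
    zfs get -r sharenfs tank/nfs      # children show "inherited from tank/nfs"
else
    echo "zfs not available on this system"
fi
```

Each child is still a separate NFS export that clients mount individually, but the server-side administration stays a single property on the parent.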
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
I think the point is to say: ZFS software raid is both faster and more
reliable than your hardware raid. Surprising though it may be for a
newcomer, I have statistics to back that up,
Can you share it?
You
Fajar A. Nugraha wrote:
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
I think the point is to say: ZFS software raid is both faster and more
reliable than your hardware raid. Surprising though it may be for a
newcomer, I have statistics to back that up,
Carson Gaspar wrote:
Fajar A. Nugraha wrote:
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
I think the point is to say: ZFS software raid is both faster and more
reliable than your hardware raid. Surprising though it may be for a
newcomer, I have statistics
Fajar A. Nugraha wrote:
On Thu, Mar 25, 2010 at 10:31 AM, Carson Gaspar car...@taltos.org wrote:
Fajar A. Nugraha wrote:
You will do best if you configure the raid controller to JBOD.
Problem: HP's storage controller doesn't support that mode.
It does, ish. It forces you to create a bunch of
On Wed, Mar 24, 2010 at 4:00 PM, Khyron khyron4...@gmail.com wrote:
Yes, I think Eric is correct.
Funny, this is an adjunct to the thread I started entitled Thoughts on ZFS Pool
Backup Strategies. I was going to include this point in that thread but thought
better of it.
It would be
It would be