On 4 Feb 2010, at 16:35, Bob Friesenhahn wrote:
On Thu, 4 Feb 2010, Darren J Moffat wrote:
Thanks - IBM basically hasn't tested ClearCase with ZFS compression,
so they don't support it currently. That may change in the future, but as it
stands my customer cannot use compression. I have asked IBM for a roadmap.
Nicolas Williams nicolas.willi...@sun.com wrote:
There's no unionfs for Solaris.
(For those of you who don't know, unionfs is a BSDism and is a
pseudo-filesystem which presents the union of two underlying
filesystems, but with all changes being made only to one of the two
filesystems. The
Are the sha256/fletcher[x]/etc checksums sent to the receiver along
with the other data/metadata? And checked upon receipt of course.
Do they chain all the way back to the uberblock or to some calculated
transfer specific checksum value?
The idea is to carry through the integrity checks wherever
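(If you want to see what actually travels in the stream, recent builds ship a
zstreamdump utility that reads the output of zfs send and prints the record
headers and the stream's embedded checksums; the snapshot name below is just a
placeholder:
# zfs send tank/fs@snap | zstreamdump -v | head
It only reports on the stream's own checksums, not the sha256/fletcher sums
stored on disk.)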
On 02/03/2010 04:35 PM, Andrey Kuzmin wrote:
At zfs_send level there are no files, just DMU objects (modified in
some txg which is the basis for changed/unchanged decision).
Would be awesome if zfs send would have an option to show files
changed
On 02/04/2010 05:10 AM, Matthew Ahrens wrote:
This is RFE 6425091, "want 'zfs diff' to list files that have changed
between snapshots", which covers both file/directory changes and file
removal/creation/renaming. We actually have a prototype of zfs
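(Purely for illustration, and not necessarily the prototype's real syntax,
the interface being discussed is along the lines of
# zfs diff tank/home@monday tank/home@tuesday
M       /tank/home/user/
+       /tank/home/user/new-report.txt
-       /tank/home/user/old-draft.txt
with one line per changed/added/removed/renamed path; the dataset and file
names here are made up.)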
When a scrub/resilver finishes, you see the date and time in zpool
status. But this information doesn't persist across reboots.
It would be nice to be able to see when the scrub of the pool finished and how
long it took, even if you reboot your machine :).
PS: I am
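(A partial workaround in the meantime: the pool's command history does survive
reboots, so you can at least see when the last scrub was started, e.g.
# zpool history mypool | grep scrub
though that only records when 'zpool scrub' was issued, not when it finished
or how long it took. The pool name is just an example.)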
On Fri, 5 Feb 2010, Alexander M. Stetsenko wrote:
  NAME        STATE     READ WRITE CKSUM
  mypool      DEGRADED     0     0     0
    mirror    DEGRADED     0     0     0
      c1t4d0  DEGRADED     0     0    28  too many errors
      c1t5d0  ONLINE       0     0     0
On Fri, Feb 05, 2010 at 02:41:35PM +0100, Jesus Cea wrote:
When a scrub/resilver finishes, you see the date and time in zpool
status. But this information doesn't persist across reboots.
It would be nice to be able to see the date and time it
Was my raidz2 performance comment above correct?
That the write speed is that of the slowest disk?
That is what I believe I have read.
You are sort-of-correct that it's the write speed of the slowest disk.
My experience is not in line with that statement. RAIDZ will write a complete
On 05/02/2010 04:11, Edward Ned Harvey wrote:
Data in raidz2 is striped so that it is split across multiple disks.
Partial truth.
Yes, the data is on more than one disk, but it's a parity hash, requiring
computation overhead and a write operation on each and every disk. It's not
simply
Hi all,
I'm building a whole new server system for my employer, and I really want to
use OpenSolaris as the OS for the new file server. One thing is keeping me
back, though: is it possible to recover a ZFS Raid Array after the OS crashes?
I've spent hours with Google to no avail.
To be more
On Fri, 5 Feb 2010, Rob Logan wrote:
well, let's look at Intel's offerings... RAM is faster than AMD's
at 1333MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC
Intel's RAM is faster because it needs to be. It is wise to see the
role that architecture plays in total
On Fri, Feb 05, 2010 at 08:35:15AM -0800, J wrote:
To be more descriptive, I plan to have a Raid 1 array for the OS, and
then I will need 3 additional Raid5/RaidZ/etc arrays for data
archiving, backups and other purposes. There is plenty of
documentation on how to recover an array if one of
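(For what it's worth, the pool configuration lives on the member disks
themselves rather than in the OS image, so the usual recovery path after
reinstalling the OS is simply to re-import the data pools, along the lines of
# zpool import            (lists pools found on the attached disks)
# zpool import -f mypool  (imports one; -f if it wasn't cleanly exported)
with mypool standing in for whatever the archive/backup pools are called.)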
Hi list,
I have a strange behaviour with the autoreplace property. It is set to off by
default, ok. I want to manage disk replacement manually, so the default of off
matches my need.
# zpool get autoreplace mypool
NAME    PROPERTY     VALUE    SOURCE
mypool  autoreplace  off      default
Then I added 2
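(With autoreplace left off, the replacement itself stays a manual step,
roughly
# zpool replace mypool c1t4d0 c1t9d0
i.e. you name the failed device and its replacement yourself; the device names
above are only examples.)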
if zfs overlaps mirror reads across devices.
It does... I have one very old disk in this mirror, and
when I attach another element one can see more reads going
to the faster disks... this observation isn't from right after the attach
but from after the reboot, yet one can still see the reads are
load balanced
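(An easy way to watch that load balancing yourself is the per-device pool
statistics, e.g.
# zpool iostat -v mypool 5
which breaks reads and writes out per mirror member; on a mixed mirror the
faster disk visibly takes the larger share of the reads. The pool name is
illustrative.)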
Hi Francois,
The autoreplace property works independently of the spare
feature.
Spares are activated automatically when a device in the main
pool fails.
Thanks,
Cindy
On 02/05/10 09:43, Francois wrote:
Hi list,
I've a strange behaviour with autoreplace property. It is set to off by
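(For completeness, a spare that gets pulled in automatically on failure is
configured with something like
# zpool add mypool spare c1t8d0
with the device name purely illustrative; the autoreplace property only
controls whether a brand-new disk inserted into the same physical slot as a
failed one is used automatically.)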
On Fri, Feb 5, 2010 at 12:11 PM, Cindy Swearingen
cindy.swearin...@sun.comwrote:
Hi Francois,
The autoreplace property works independently of the spare
feature.
Spares are activated automatically when a device in the main
pool fails.
Thanks,
Cindy
On 02/05/10 09:43, Francois wrote:
Ah, I see!
Simple, easy, and saves me hundreds on HW-based RAID controllers ^_^
Thanks!
pr == Peter Radig pe...@radig.de writes:
ls == Lutz Schumann presa...@storageconcepts.de writes:
pr I was expecting a good performance from the X25-E, but was
pr really surprised that it is that good (only 1.7 times slower
pr than it takes with ZIL completely disabled). So I will use
On Fri, 5 Feb 2010, Miles Nordin wrote:
ls r...@nexenta:/volumes# hdadm write_cache off c3t5
ls c3t5 write_cache disabled
You might want to repeat his test with X25-E. If the X25-E is also
dropping cache flush commands (it might!), you may be, compared to
disabling the ZIL, slowing
On Fri, Feb 5, 2010 at 10:55 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 5 Feb 2010, Miles Nordin wrote:
ls r...@nexenta:/volumes# hdadm write_cache off c3t5
ls c3t5 write_cache disabled
You might want to repeat his test with X25-E. If the X25-E is also
dropping
b == Brian broco...@vt.edu writes:
b (4) Hold backups from windows machines, mac (time machine),
b linux.
for time machine you will probably find yourself using COMSTAR and the
GlobalSAN iSCSI initiator because Time Machine does not seem willing
to work over NFS. Otherwise, for Macs
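(If you do go the COMSTAR route, the server side is roughly: carve out a zvol
and export it as an iSCSI LUN, something like
# zfs create -V 500g tank/tm-macbook
# svcadm enable stmf
# sbdadm create-lu /dev/zvol/rdsk/tank/tm-macbook
# stmfadm add-view <GUID printed by sbdadm>
# itadm create-target
then point the GlobalSAN initiator at the target and let Time Machine format
the resulting disk as HFS+. Names and sizes above are only examples, and the
exact steps vary a bit by build.)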
On 5-Feb-10, at 11:35 AM, J wrote:
Hi all,
I'm building a whole new server system for my employer, and I
really want to use OpenSolaris as the OS for the new file server.
One thing is keeping me back, though: is it possible to recover a
ZFS Raid Array after the OS crashes? I've spent
On Fri, Feb 05, 2010 at 11:55:12AM -0800, Bob Friesenhahn wrote:
On Fri, 5 Feb 2010, Miles Nordin wrote:
ls r...@nexenta:/volumes# hdadm write_cache off c3t5
ls c3t5 write_cache disabled
You might want to repeat his test with X25-E. If the X25-E is also
dropping cache flush
Trying to track down why our two Intel X-25E's are spewing out
Write/Retryable errors when being used as a ZIL (mirrored). The
system is running a LSI1068E controller with LSISASx36 expander
(box built by Silicon Mechanics).
The drives are fairly new, and it seems odd that both of the pair would
On Fri, Feb 5, 2010 at 12:20 PM, Miles Nordin car...@ivy.net wrote:
for time machine you will probably find yourself using COMSTAR and the
GlobalSAN iSCSI initiator because Time Machine does not seem willing
to work over NFS. Otherwise, for Macs you should definitely use NFS,
Slightly
Two things, mostly related, that I'm trying to find answers to for our security
team.
Does this scenario make sense:
* Create a filesystem at /users/nfsshare1, user uses it for a while, asks for
the filesystem to be deleted
* New user asks for a filesystem and is given /users/nfsshare2. What
rvandol...@esri.com said:
I'm trying to figure out where I can find the firmware on the LSI
controller... are the bootup messages the only place I could expect to see
this? prtconf and prtdiag both don't appear to give firmware information.
. . .
Solaris 10 U8 x86.
The raidctl command is
On 2/5/10 3:49 PM -0500 c.hanover wrote:
Two things, mostly related, that I'm trying to find answers to for our
security team.
Does this scenario make sense:
* Create a filesystem at /users/nfsshare1, user uses it for a while, asks
for the filesystem to be deleted * New user asks for a
rvd == Ray Van Dolson rvandol...@esri.com writes:
ak == Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
rvd I missed out on this thread. How would these dropped flushed
rvd writes manifest themselves?
presumably corrupted databases, lost mail, or strange NFS behavior
after the server
ch == c hanover chano...@umich.edu writes:
ch is there a way to a) securely destroy a filesystem,
AIUI zfs crypto will include this, some day, by forgetting the key.
but for SSD, zfs above a zvol, or zfs above a SAN that may do
snapshots without your consent, I think it's just logically
On Fri, Feb 05, 2010 at 03:49:15PM -0500, c.hanover wrote:
Two things, mostly related, that I'm trying to find answers to for our
security team.
Does this scenario make sense:
* Create a filesystem at /users/nfsshare1, user uses it for a while,
asks for the filesystem to be deleted
* New
On Fri, Feb 05, 2010 at 04:41:08PM -0500, Miles Nordin wrote:
ch == c hanover chano...@umich.edu writes:
ch is there a way to a) securely destroy a filesystem,
AIUI zfs crypto will include this, some day, by forgetting the key.
Right.
but for SSD, zfs above a zvol, or zfs above a
In our particular case, there won't be snapshots of destroyed filesystems (I
create the snapshots, and destroy them with the filesystem).
I'm not too sure on the particulars of NFS/ZFS, but would it be possible to
create a 1GB file without writing any data to it, and then use a hex editor to
I saw this in /. and thought I'd point it out to this list. It appears
to act as a L2 cache for a single drive, in theory providing better
performance.
http://www.silverstonetek.com/products/p_contents.php?pno=HDDBOOSTarea
-B
--
Brandon High : bh...@freaks.com
Indecision is the key to
On 2/5/10 5:08 PM -0500 c.hanover wrote:
would it be possible to
create a 1GB file without writing any data to it, and then use a hex
editor to access the data stored on those blocks previously?
No, not over NFS and also not locally. You'd be creating a sparse file,
which doesn't allocate
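(Easy to demonstrate, with the path being just an example: the sparse file
reads back as nothing but zeros, because the holes were never backed by blocks
at all, let alone by someone else's old blocks:
# mkfile -n 1g /tank/share/sparse.dat
# dd if=/tank/share/sparse.dat bs=1024k count=1 2>/dev/null | od -c | head -2
0000000  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
mkfile -n records the size but allocates no data blocks.)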
On Fri, Feb 05, 2010 at 05:08:02PM -0500, c.hanover wrote:
In our particular case, there won't be snapshots of destroyed
filesystems (I create the snapshots, and destroy them with the
filesystem).
OK.
I'm not too sure on the particulars of NFS/ZFS, but would it be
possible to create a 1GB
On Feb 5, 2010, at 5:19 PM, Nicolas Williams wrote:
ZFS crypto will be nice when we get either NFSv4 or NFSv3 w/krb5 for
over the wire encryption. Until then, not much point.
You can use NFS with krb5 over the wire encryption _now_.
Nico
--
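(On the Solaris server side that's just a share option, e.g.
# zfs set sharenfs=sec=krb5p,rw tank/users
krb5p being the security flavor that adds privacy (encryption) on the wire, as
opposed to krb5/krb5i which only authenticate and integrity-protect. This
assumes Kerberos is already configured; the dataset name is illustrative.)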
I know, that's just something I'm working
I saw this in /. and thought I'd point it out to this list. It appears
to act as a L2 cache for a single drive, in theory providing better
performance.
http://www.silverstonetek.com/products/p_contents.php?pno=HDDBOOSTarea
It's a neat device, but the notion of a hybrid drive is nothing new.
On Feb 5, 2010, at 3:11 AM, grarpamp wrote:
Are the sha256/fletcher[x]/etc checksums sent to the receiver along
with the other data/metadata?
No. Checksums are made on the records, and there could be a different
record size for the sending and receiving file systems. The stream itself
is
On Feb 5, 2010, at 10:49 AM, Robert Milkowski mi...@task.gda.pl wrote:
Actually, there is.
One difference is that when writing to a raid-z{1|2} pool compared
to raid-10 pool you should get better throughput if at least 4
drives are used. Basically it is due to the fact that in RAID-10 the
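(Back-of-the-envelope version, ignoring bus and controller limits: with four
drives that each stream roughly 100 MB/s, a 2x2 mirror layout writes every
block twice, so large sequential writes top out around 2 x 100 = 200 MB/s of
user data, while a 4-disk raidz1 stripe carries 3 data + 1 parity, so roughly
3 x 100 = 300 MB/s. Small synchronous random writes are a different story,
which is where the slowest-disk argument comes back in.)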
You might also want to note that with traditional filesystems, the
'shred' utility will securely erase data, but no tools like that
will work for zfs.
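(Concretely, with the path just an example, something like
# shred -n 3 -u /tank/share/secret.key
runs its overwrite passes through the normal write path, but because ZFS is
copy-on-write each pass lands in freshly allocated blocks, and the blocks that
originally held the data simply sit unreferenced on disk until that space
happens to be reused.)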
No. Checksums are made on the records, and there could be a different
record size for the sending and receiving file systems.
Oh. So there's a zfs read to ram somewhere, which checks the sums on disk.
And then entirely new stream checksums are made while sending it all off
to the pipe.
I see
On Feb 5, 2010, at 7:20 PM, grarpamp wrote:
No. Checksums are made on the records, and there could be a different
record size for the sending and receiving file systems.
Oh. So there's a zfs read to ram somewhere, which checks the sums on disk.
And then entirely new stream checksums are made
Hmm, is that configurable? Say to match the checksums being
used on the filesystem itself... ie: sha256? It would seem odd to
send with less bits than what is used on disk.
Was thinking that plaintext ethernet/wan and even some of the 'weaker'
ssl algorithms
Do you expect the same errors
Intel's RAM is faster because it needs to be.
I'm confused how AMD's dual channel, two way interleaved
128-bit DDR2-667 into an on-cpu controller is faster than
Intel's Lynnfield dual channel, Rank and Channel interleaved
DDR3-1333 into an on-cpu controller.
On Feb 5, 2010, at 8:09 PM, grarpamp wrote:
Hmm, is that configurable? Say to match the checksums being
used on the filesystem itself... ie: sha256? It would seem odd to
send with less bits than what is used on disk.
Was thinking that plaintext ethernet/wan and even some of the 'weaker'
Perhaps I meant to say that the box itself [cpu/ram/bus/nic/io, except disk]
is assumed to handle data with integrity. So say netcat is used as transport,
zfs is using sha256 on disk, but only fletcher4 over the wire with send/recv,
and your wire takes some undetected/uncorrected hits, and the
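(If the transport is the worry, the simplest fix is not to use a bare netcat
pipe at all but to run the stream over ssh, e.g.
# zfs send tank/fs@snap | ssh backuphost zfs recv -d backup
so the wire gets its own integrity and encryption layer regardless of which
checksum the stream format carries internally; the host and dataset names are
placeholders.)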