I have a number of Solaris DVD and CD iso files backed up on a ZFS mirror.
Any attempt to copy or digest one of the DVD files fails with an I/O
error. I don't appear to have any problems with the CD images.
The system is an AMD 64 (ASUS A8N-E) running build 41 with two SATA
drives connected to
Dick Davies wrote:
What does zpool status say?
Knew I'd forgotten to check something:
# zpool status -v
pool: tank
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if
Thomas Maier-Komor wrote:
Hi,
concerning this issue I didn't find anything in the bug database, so I thought
I'd report it here...
When running live-upgrade on a system with a zfs, LU creates directories for
all ZFS filesystems in the ABE. This causes svc:/system/filesystem/local to go
to
David Dyer-Bennet wrote:
Actually, save early and often is exactly why versioning is
important. If you discover you've gone down a blind alley in some
code, it makes it easy to get back to the earlier spots. This, in my
experience, happens at a detail level where you won't (in fact can't)
Chad Leigh -- Shire.Net LLC wrote:
On Dec 1, 2006, at 4:34 PM, Dana H. Myers wrote:
Chad Leigh -- Shire.Net LLC wrote:
And this is different from any other storage system, how? (ie, JBOD
controllers and disks can also have subtle bugs that corrupt data)
Of course, but there isn't the
Chad Leigh -- Shire.Net LLC wrote:
On Dec 2, 2006, at 12:06 AM, Ian Collins wrote:
But people expect RAID to protect them from the corruption caused by a
partial failure, say a bad block, which is a common failure mode.
They do? I must admit no experience with the big standalone raid
dudekula mastan wrote:
5) Like fsck command on Linux, is there any command to check the
consistency of the ZFS file system ?
As others have mentioned, ZFS doesn't require off line consistency
checking. You can run 'zpool scrub' on a live system and check the
result with 'zpool
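For archive readers, the live consistency check described above amounts to the following sketch (the pool name "tank" matches the earlier messages in this thread; substitute your own):

```shell
# Start a scrub on a live, mounted pool; no downtime required.
zpool scrub tank

# Check scrub progress and any checksum errors found so far.
zpool status -v tank
```

Unlike fsck, the scrub verifies every block against its checksum while the pool stays in service.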
Richard Elling wrote:
One of the benefits of ZFS is that not only is head synchronization not
needed, but also block offsets do not have to be the same. For example,
in a traditional mirror, block 1 on device 1 is paired with block 1 on
device 2. In ZFS, this 1:1 mapping is not required.
John Weekley wrote:
Looks like bad memory. I removed the affected DIMM and haven't had any
reboots in about 24hrs.
Give memtest86 a whirl on that system.
Ian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Rayson Ho wrote:
Interesting...
http://www.rhic.bnl.gov/RCF/LiaisonMeeting/20070118/Other/thumper-eval.pdf
I wonder where they got the information that Solaris 10 doesn't support
dual-core Intel from?
Ian.
[EMAIL PROTECTED] wrote:
Hello,
I switched my home server from Debian to Solaris. The main cause for
this step was stability and ZFS.
But now after the migration (why isn't it possible to mount a linux
fs on Solaris???) I make a few benchmarks
and now I thought about switching back to
Sascha Brechenmacher wrote:
On 13.02.2007 at 22:46, Ian Collins wrote:
Looks like poor hardware, how was the pool built? Did you give ZFS the
entire drive?
On my nForce4 Athlon64 box with two 250G SATA drives,
zpool status tank
pool: tank
state: ONLINE
scrub: none requested
Eric Enright wrote:
Quick answer though is to download CD1 of the latest release, boot to
a shell, and take a look.
Or try
http://www.sun.com/bigadmin/hcl/hcts/device_detect.html
Ian
Lin Ling wrote:
Ian Collins wrote:
Thanks for the heads up.
I'm building a new file server at the moment and I'd like to make sure I
can migrate to ZFS boot when it arrives.
My current plan is to create a pool on 4 500GB drives and throw in a
small boot drive.
Will I be able to drop
Jim Mauro wrote:
http://www.cnn.com/2007/US/03/20/lost.data.ap/index.html
$71,800 for computer consultants. Wow.
Ian
This Asus board looks promising, assuming the parts (nForce 680a
chipset) are Solaris friendly:
http://www.asus.com.tw/products4.aspx?l1=3&l2=136&l3=486&model=1530&modelmenu=2
The board boasts 12 (!) SATA2 ports.
The FX-7x series CPUs appear to be a very cost effective ($799) 4 core
solution.
Ian
Shawn Walker wrote:
On 18/04/07, Erblichs [EMAIL PROTECTED] wrote:
Rich Teer,
I have a perfect app for the masses.
A Hi-Def Video/ audio server for the hi-def TV
and audio setup.
I would think the average person would want
to have access to
Anton B. Rang wrote:
You need exactly the same bandwidth as with any other
classical backup solution - it doesn't matter how at the end you need
to copy all those data (differential) out of the box regardless if it's
a tape or a disk.
Sure. However, it's somewhat cheaper to buy 100 MB/sec
Lori Alt wrote:
Benjamin Perrault wrote:
Don't mean to be a pest - but is there an eta on when the
b62_zfsboot.iso will be posted?
I'm really looking forward to ZFS root, but I'd rather download a
working dvd image than attempt to patch the image myself :-)
Actually, we hadn't planned to
I just had a quick play with gzip compression on a filesystem and the
result was the machine grinding to a halt while copying some large
(.wav) files to it from another filesystem in the same pool.
The system became very unresponsive, taking several seconds to echo
keystrokes. The box is a maxed
Bart Smaalders wrote:
Ian Collins wrote:
I just had a quick play with gzip compression on a filesystem and the
result was the machine grinding to a halt while copying some large
(.wav) files to it from another filesystem in the same pool.
The system became very unresponsive, taking several
Cyril Plisko wrote:
On 5/3/07, Ian Collins [EMAIL PROTECTED] wrote:
The system has 8MB of RAM and I was using 'cp -r' to copy the directory.
Hm, that would be a record breaking system. Did you mean 8 GB ?
Oops Yes.
Dale Ghent wrote:
On May 2, 2007, at 10:36 PM, Ian Collins wrote:
The files are between 15 and 50MB. It's worth pointing out that .wav
files only compress by a few percent.
Not entirely related to your maxed CPU problem, but
I don't think it was a maxed CPU problem, only one core
Roch Bourbonnais wrote:
with recent bits ZFS compression is now handled concurrently with many
CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
Would changing (selecting a smaller) filesystem record size have any effect?
So
mike wrote:
Isn't the benefit of ZFS that it will allow you to use even the most
unreliable disks and be able to inform you when they are attempting to
corrupt your data?
To me it sounds like he is a SOHO user; may not have a lot of funds to
go out and swap hardware on a whim like a company
Lee Fyock wrote:
I didn't mean to kick up a fuss.
I'm reasonably zfs-savvy in that I've been reading about it for a year
or more. I'm a Mac developer and general geek; I'm excited about zfs
because it's new and cool.
At some point I'll replace my old desktop machine with something new
and
John Smith wrote:
Hello all,
Spent the last several hours perusing the ZFS forums and some of the blog
entries regarding ZFS. I have a couple of questions and am open to any hints,
tips, or things to watch out for on implementation of my home file server.
I'm building a file server
John Smith wrote:
The original thought was 3 of the drives as storage, and one of the drives as
parity. So that would yield around 1.4TB of useable storage.
Then raidz is your only option.
I hadn't given any thought to running 64 bit. This system is being built
from the ground up. I
John Smith wrote:
Sorry about that, the specific processor in question is the Pentium D 930
which supports 64 bit computing through the Extended Memory 64 Technology.
It was my initial reaction to say I'd go with 32 bit computing because my
general experience with 64-bit is Windows, Linux,
mike wrote:
thanks for the reply.
On 5/10/07, Al Hopper [EMAIL PROTECTED] wrote:
Suggestion - try two 4-way raidz pools.
wouldn't that bring usable space down to 2 pairs of 3x750?
can those be combined into a single filesystem (for a total of 6x750
usable, but underlying would actually
Marko Milisavljevic wrote:
To reply to my own message: this article offers lots of insight into why
dd access directly through raw disk is fast, while accessing a file through
the file system may be slow.
http://www.informit.com/articles/printerfriendly.asp?p=606585&rl=1
So, I guess
I have two drives with the same slices and I get an odd error if I try
and create a pool on one drive, but not the other:
Part  Tag   Flag  Cylinders  Size    Blocks
0     swap  wu    3 - 264    2.01GB  (262/0/0) 4209030
1     root  wm    265 -
, I didn't spot that! I'd better fix that before I use s4 for a
Live Upgrade!
Cheers,
Ian
Ian Collins wrote:
I have two drives with the same slices and I get an odd error if I try
and create a pool on one drive, but not the other:
Part  Tag   Flag  Cylinders  Size
Ian Collins wrote:
Trevor Watson wrote:
Ian,
It looks like the error message is wrong - slice 7 overlaps slice 4 -
note that slice 4 ends at c6404, but slice 7 starts at c6394.
Slice 6 is also completely contained within slice 4's range of
cylinders, but that won't matter unless you
David Bustos wrote:
Quoth Steven Sim on Thu, May 17, 2007 at 09:55:37AM +0800:
Gurus;
I am exceedingly impressed by the ZFS although it is my humble opinion
that Sun is not doing enough evangelizing for it.
What else do you think we should be doing?
Send Thumpers to
Brett wrote:
Hi All,
I've been reading through the documentation for ZFS and have noted in several
blogs that ZFS should support more advanced layouts like RAID1+0, RAID5+0,
etc. I am having a little trouble getting these more advanced configurations
to play nicely.
I have two disk
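The advanced layouts the poster is after have no special syntax in ZFS: RAID1+0 is simply a pool striped across mirror vdevs, and RAID5+0 a pool striped across raidz vdevs. A hedged sketch (device names c1t0d0 etc. are placeholders, not from the original message):

```shell
# RAID1+0: stripe across two 2-way mirrors (needs 4 disks).
zpool create tank mirror c1t0d0 c1t1d0 mirror c2t0d0 c2t1d0

# RAID5+0 equivalent: stripe across two 3-disk raidz vdevs (needs 6 disks).
zpool create tank2 raidz c3t0d0 c3t1d0 c3t2d0 raidz c4t0d0 c4t1d0 c4t2d0
```

ZFS stripes writes across all top-level vdevs automatically, so listing several mirror or raidz groups in one `zpool create` gives the nested layout.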
Will Murnane wrote:
Sorry for singling you out, Ian; I meant Reply to All. This list
doesn't set reply-to...
On 5/30/07, Ian Collins [EMAIL PROTECTED] wrote:
How about 8 two way mirrors between shelves and a couple of hot spares?
That's fine and good, but then losing just one disk from each
arb wrote:
Hello, I'm new to OpenSolaris and ZFS so my apologies if my questions are
naive!
I've got solaris express (b52) and a zfs mirror, but this command locks up my
box within 5 seconds:
% cmp first_4GB_file second_4GB_file
You are 6 months out of date, start by upgrading to the
Graham Perrin wrote:
Intending to experiment with ZFS, I have been struggling with what
should be a simple download routine.
Sun Download Manager leaves a great deal to be desired.
In the Online Help for Sun Download Manager there's a section on
troubleshooting, but if it causes *anyone*
Rick Mann wrote:
BTW, I don't mind if the boot drive fails, because it will be fairly easy to
replace, and this server is only mission-critical to me and my friends.
So...suggestions? What's a good way to utilize the power and glory of ZFS in
a 4x 500 GB system, without unnecessary waste?
Bart Smaalders wrote:
Ian Collins wrote:
Rick Mann wrote:
Ian Collins wrote:
Bung in (add a USB one if you don't have space) a small boot drive and
use all the others for ZFS.
Not a bad idea; I'll have to see where I can put one.
But, I thought I read somewhere that one can't
Alec Muffett wrote:
As I understand matters, from my notes to design the perfect home
NAS server :-)
1) you want to give ZFS entire spindles if at all possible; that will
mean it can enable and utilise the drive's hardware write cache
properly, leading to a performance boost. You want to do
Rob Windsor wrote:
What 8-port-SATA motherboard models are Solaris-friendly? I've hunted
and hunted and have finally resigned myself to getting a generic
motherboard with PCIe-x16 and dropping in an Areca PCIe-x8 RAID card
(in JBOD config, of course).
I don't know about 8 port SATA, but I
Rick Mann wrote:
Richard Elling wrote:
For the time being, these SATA disks will operate in IDE compatibility mode,
so
don't worry about the write cache. There is some debate about whether the
write
cache is a win at all, but that is another rat hole. Go ahead and split off
some
David Dyer-Bennet wrote:
Richard Elling wrote:
What I would do:
2 disks: slice 0 3 root (BE and ABE), slice 1 swap/dump, slice
6 ZFS mirror
2 disks: whole disk mirrors
I don't understand slice 6 zfs mirror. A mirror takes *two* things
of the same size.
Note the 2 disks:.
Ian
Rob Windsor wrote:
Ian Collins wrote:
Rob Windsor wrote:
What 8-port-SATA motherboard models are Solaris-friendly? I've hunted
and hunted and have finally resigned myself to getting a generic
motherboard with PCIe-x16 and dropping in an Areca PCIe-x8 RAID card
(in JBOD config, of course
Joe S wrote:
I'm playing around with ZFS and want to figure out the best use of my
6x 300GB SATA drives. The purpose of the drives is to store all of my
data at home (video, photos, music, etc). I'm debating between:
6x 300GB disks in a single raidz2 pool
--or--
2x (3x 300GB disks in a
Joe S wrote:
Thanks for all the comments. Very helpful.
I have another question. The six disk raidz2 pool works, but I noticed
in Richard Elling's blog that a raidz/raidz2 pool has the read
performance of a single drive (unless I misread something). What if I
create 2x three disk raidz vdevs
Bart Smaalders wrote:
michael T sedwick wrote:
Given a 1.6TB ZFS Z-Raid consisting 6 disks:
And a system that does an extreme amount of small (20K) random
reads (more than twice as many reads as writes)
1) What performance gains, if any does Z-Raid offer over other RAID
or Large
Mario Goebbels wrote:
A 6 disk raidz set is not optimal for random reads, since each disk in
the raidz set needs to be accessed to retrieve each item.
I don't understand, if the file is contained within a single stripe, why
would it need to access the other disks, if the checksum of
Roch Bourbonnais wrote:
On Jun 20, 2007, at 04:59, Ian Collins wrote:
I'm not sure why, but when I was testing various configurations with
bonnie++, 3 pairs of mirrors did give about 3x the random read
performance of a 6 disk raidz, but with 4 pairs, the random read
performance dropped
Blake wrote:
Hi.
I'm running snv 65 and having an issue much like this:
http://osdir.com/ml/solaris.opensolaris.help/2006-11/msg00047.html
Has anyone found a workaround?
Or is this the issue with the BIOS not liking EFI
[EMAIL PROTECTED] wrote:
If power consumption and heat is a consideration, the newer Intel CPUs
have an advantage in that Solaris supports native power management on
those CPUs.
Are P35 chipset boards supported?
Ian
Ben Middleton wrote:
I've just purchased an Asus P5K WS, which seems to work OK. I had to download
the Marvell Yukon ethernet driver - but it's all working fine. It's also got
a PCI-X slot - so I have one of those Super Micro 8 port SATA cards -
providing a total of 16 SATA ports across the
Oliver Schinagl wrote:
[EMAIL PROTECTED] wrote:
However, I found on the liveDVD/CD that Nexenta and BeleniX both don't
come in x86_64 flavors? Or does Solaris autodetect and auto run in 64bit
mode at boottime?
Solaris autodetects the CPU type and boots in 64 bit mode on 64
Oliver Schinagl wrote:
Ian Collins wrote:
Oliver Schinagl wrote:
once I boot it in 64bit mode, i'd have to run emulation libraries to run
32bit bins right?
No. Solaris has both 32 and 64 bit libraries.
so you are saying that i can run both 32bit and 64bit code
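Yes: a 64-bit Solaris kernel runs 32-bit and 64-bit binaries side by side, with no emulation layer. You can confirm what the running system supports with `isainfo` (standard Solaris commands, shown here as a sketch):

```shell
# List the native instruction sets the kernel supports;
# a 64-bit x86 boot reports both amd64 and i386.
isainfo -v

# Print just the word size of the running kernel (64 or 32).
isainfo -b
```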
Kent Watsen wrote:
Getting there - can anybody clue me into how much CPU/Mem ZFS needs?
I have an old 1.2Ghz with 1Gb of mem laying around - would it be sufficient?
It'll use as much memory as you can spare and it has a strong preference
for 64 bit systems. Considering how much you
Kent Watsen wrote:
Glad you brought that up - I currently have an APC 2200XL
(http://www.apcc.com/resource/include/techspec_index.cfm?base_sku=SU2200XLNET)
- its rated for 1600 watts, but my current case selections are saying
they have a 1500W 3+1, should I be worried?
Probably not,
I have a build 62 system with a zone that NFS mounts an ZFS filesystem.
From the zone, I keep seeing issues with .nfs files remaining in
otherwise empty directories preventing their deletion. The files appear
to be immediately replaced when they are deleted.
Is this an NFS or a ZFS issue?
I've finally gotten around to upgrading my ZFS file server to build 72
and I must say I'm impressed with the performance boost provided by the
new nVidia SATA driver.
I had noticed a drop in bonnie++ performance once the pool started to
fill up, but with the new driver it is better than the
Sandro wrote:
hi
I am currently running a linux box as my fileserver at home.
It's got eight 250 gig sata2 drives connected to two sata pci controllers and
configured as one big raid5 with linux software raid.
Linux is (and solaris will be) installed on two separate mirrored disks.
I've
Dan Pritts wrote:
On Fri, Sep 14, 2007 at 01:48:40PM -0500, Christopher Gibbs wrote:
I suspect it's probably not a good idea but I was wondering if someone
could clarify the details.
I have 4 250G SATA(150) disks and 1 250G PATA(133) disk. Would it
cause problems if I created a raidz1
Paul Kraus wrote:
I also like being able to see how much space I am using for
each with a simple df rather than a du (that takes a while to run). I
can also tune compression on a data type basis (no real point in
trying to compress media files that are already compressed MPEG and
James C. McPherson wrote:
Anil Jangity wrote:
I have pool called data.
I have zones configured in that pool. The zonepath is: /data/zone1/fs.
(/data/zone1 itself is not used for anything else, by anyone, and has no
other data.) There are no datasets being delegated to this zone.
I want
James C. McPherson wrote:
Ian Collins wrote:
...
I don't know if anything else breaks when you do this, but if you are
building software in a zone on a lofs filesystem, dmake hangs. Regular
make works fine.
The output from truss is:
stat64(/export/home, 0x08045B60) = 0
llseek(8, 0
can you guess? wrote:
There aren't free alternatives in linux or freebsd
that do what zfs does, period.
No one said that there were: the real issue is that there's not much reason
to care, since the available solutions don't need to be *identical* to offer
*comparable* value
John Klimek wrote:
I'm trying to build a simple Solaris 10 file server using ZFS + CIFS...
That means I don't need X Windows or anything like that, etc.
Answered elsewhere.
I've seen the problems with bug 6343667, but I haven't seen the problem
I have at the the moment.
I started a scrub of a b72 system that doesn't have any recent snapshots
(none since the last scrub) and the % complete is cycling:
scrub: scrub in progress, 69.08% done, 0h13m to go
scrub: scrub
James C. McPherson wrote:
The ws command hates it - hmm, the underlying device for
/scratch is /scratch maybe if I loop around stat()ing
it it'll turn into a pumpkin
:-)
As does dmake, which is a real PITA for a developer!
Ian
Scott L. Burson wrote:
Hi,
This is in build 74, on x64, on a Tyan S2882-D with dual Opteron 275 and 24GB
of ECC DRAM.
Not an answer, but zfs-discuss is probably the best place to ask, so
I've taken the liberty of CCing that list.
I seem to have lost the entire contents of a ZFS raidz
Ed Pate wrote:
Sam,
This sounds like something for the zfs-discuss list. Cross-posting...
What's the question?
Ian
Ed Pate wrote:
Hi All,
I stumbled on Solaris after talking to my CEO about
the server I was setting up and his eyes got all big
and he started going on about Solaris and ZFS. I'm
experienced with Unix and Linux so I have some idea
of what is going to go into this but I've already
tried
Haik Aftandilian wrote:
If this was for my home server, I would go with Solaris Express or another
OpenSolaris distribution just because I like to be closer to the cutting
edge. For example, the new beta/experimental CIFS server was recently
integrated into OpenSolaris. Typically, it would
jason wrote:
-bash-3.2$ zfs share tank
cannot share 'tank': share(1M) failed
-bash-3.2$
how do i figure out what's wrong?
Create a file system and share that.
Ian
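The advice above, spelled out: `zfs share` fails on the bare pool because nothing has been marked shareable. Create a child file system and set its share property instead (the dataset name "tank/export" is an illustration, not from the original message):

```shell
# Create a file system under the pool and share it over NFS.
zfs create tank/export
zfs set sharenfs=on tank/export

# Confirm the property took effect.
zfs get sharenfs tank/export
```

With `sharenfs` set, the dataset is shared automatically at boot; no /etc/dfs/dfstab entry is needed.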
Disk encryption easily defeated, research shows
http://www.itpro.co.uk/storage/news/170304/disk-encryption-easily-defeated-research-shows.html
Freezing RAM, whatever next?
Ian
Sam wrote:
I have a 10x500 disc file server with ZFS+, do I need to perform any sort of
periodic maintenance to the filesystem to keep it in tip top shape?
No, but if there are problems, a periodic scrub will tip you off sooner
rather than later.
Ian
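A common way to act on that advice is to schedule the scrub from cron; a sketch (pool name and schedule are assumptions):

```shell
# Root crontab entry: scrub the pool every Sunday at 02:00.
# 0 2 * * 0 /usr/sbin/zpool scrub tank

# Later, review the outcome of the last scrub.
zpool status tank
```

The status output reports when the last scrub completed and whether any errors were found and repaired.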
Carson Gaspar wrote:
Nathan Kroenert - Server ESG wrote:
I also *believe* (though am not certain - Perhaps someone else on the
list might be?) it would be possible to have each *event* (so - the
individual events that lead to a Fault Diagnosis) generate a message if
it was required,
David Francis wrote:
Greetings all
I was looking at creating a little ZFS storage box at home using the
following SATA controllers (Adaptec Serial ATA II RAID 1420SA) on Opensolaris
X86 build
Just wanted to know if anyone out there is using these and can vouch for
them. If not if
Cyril Plisko wrote:
On Fri, Jun 6, 2008 at 2:58 AM, [EMAIL PROTECTED] wrote:
Bill Sommerfeld writes:
2. How can I do it ? (I think I can run zfs set compression=on
rpool/ROOT/snv_90 in the other window, right after the installation
begins, but I would like less hacky way.)
Claus Guttesen wrote:
I can install solaris 10 (8/07) but will this zfs-version be stable enough?
When installing solaris 10 I add the smart-array driver from HP during
install (press 5 'Apply driver updates' and choose cd/dvd and the
driver is loaded). I tried to install b79 but when I try
Volker A. Brandt wrote:
On my heavily-patched Solaris 10U4 system, the size of /var (on UFS)
has gotten way out of hand due to the remarkably large growth of
/var/sadm. Can this directory tree be safely moved to a zfs
filesystem? How much of /var can be moved to a zfs filesystem without
Bill Sommerfeld wrote:
On Thu, 2008-06-05 at 23:04 +0300, Cyril Plisko wrote:
2. How can I do it ? (I think I can run zfs set compression=on
rpool/ROOT/snv_90 in the other window, right after the installation
begins, but I would like less hacky way.)
what I did was to migrate via live
Brian Hechinger wrote:
On Tue, Jun 24, 2008 at 04:35:30PM +1200, [EMAIL PROTECTED] wrote:
We can only hope that ZFS boot will consign this never ending layout
argument to the dust of history.
The layout of disks and filesystems will always be a personal preference
and will never go
Kyle McDonald wrote:
Ian Collins wrote:
Kyle McDonald wrote:
Kyle McDonald wrote:
Hi all,
I first experienced this while booted off the DVD, but chalked it
up to a bad DVD burn, and decided to Net boot to an interactive
install instead.
I'm using a text installer over a serial
I wanted to resurrect an old dual P3 system with a couple of IDE drives
to use as a low power quiet NIS/DHCP/FlexLM server so I tried installing
ZFS boot from build 90.
The install ran through without issue, creating a mirror pool on the two
drives. On reboot, I was surprised to see 2 sets of
Will Murnane wrote:
If the prices on disks were lower on these, they would be interesting
for low-end businesses or even high-end home users. The chassis is
within reach of reasonable, but the disk prices look ludicrously high
from where I sit. An empty one only costs $3k, sure, but fill it
Richard Elling wrote:
The best news, for many folks, is that you can boot from an
(externally pluggable) CF card, so that you don't have to burn
two disks for the OS.
Can these be mirrored? I've been bitten by these cards failing (in a
camera).
Ian
Tim wrote:
On Fri, Jul 11, 2008 at 4:05 PM, Ian Collins [EMAIL PROTECTED] wrote:
Will Murnane wrote:
If the prices on disks were lower on these, they would be
interesting
for low-end businesses or even high-end home users. The chassis
Peter Tribble wrote:
On Fri, Jul 11, 2008 at 5:33 PM, Sean Cochrane - Storage Architect
[EMAIL PROTECTED] wrote:
What were the performance characteristics?
Not brilliant...
Although I suspect raid-z isn't exactly the ideal choice. Still, performance
generally is adequate for our
Brian H. Nelson wrote:
Manyam wrote:
Hi ZFS gurus -- I have a v240 with solaris10 u2 release and ZFs - could
you please tell me if by applying the latest patch bundle of update 2 -- I
will get the all the ZFS patches installed as well ?
It is possible to patch your way up
Paul B. Henson writes:
I was curious if there was any utility or library function available to
evaluate a ZFS ACL. The standard POSIX access(2) call is available to
evaluate access by the current process, but I would like to evaluate an ACL
in one process that would be able to determine
Miles Nordin writes:
mh == Matt Harrison [EMAIL PROTECTED] writes:
mh http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/
that's very helpful. I'll reshop for nForce 570 boards. i think my
untested guess was an nForce 630 or something, so it probably won't
work.
I
Jorgen Lundman writes:
We are having slow performance with the UFS volumes on the x4500. They
are slow even on the local server. Which makes me think it is (for once)
not NFS related.
Current settings:
SunOS x4500-01.unix 5.11 snv_70b i86pc i386 i86pc
That's a very old
I'd like to extend my ZFS root pool by adding the old swap and root slice
left over from the previous LU BE.
Are there any known issues with concatenating slices from the same drive?
Cheers,
Ian.
Ross wrote:
Wipe the snv_70b disks I meant.
What disks? This message makes no sense without context.
Context free messages are a pain in the arse for those of us who use the
mail list.
Ian
Mark Danico wrote:
Sure,
zfs set compression=on rpool
Of course this only compresses items written after compression is turned on.
On my system when I started to perform the install I opened up a
terminal and after rpool was created I set the compression to on so that
it got set before the
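The trick described above, as a sketch (run in a second terminal during install; note that compression only applies to data written after the property is set):

```shell
# As soon as the installer has created rpool, turn compression on
# so the bulk of the install payload is written compressed.
zfs set compression=on rpool

# Verify the property is active before the installer starts copying.
zfs get compression rpool
```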
Rahul wrote:
Can u cite the differences b/w ZFS and FAT filesystems??
You are joking, aren't you?
Have you read any of the ZFS documentation?
Ian
Jorgen Lundman writes:
So unable to login on console. Again we ended up with the problem of
knowing which HDD that actually is broken. Turns out to be drive #40.
(Has anyone got a map we can print? Since we couldn't boot it, any Unix
commands needed to map are a bit useless, nor do we
Mark Shellenbaum wrote:
Paul B. Henson wrote:
Are the libsec undocumented interfaces likely to remain the same when the
acl_t structure changes? They will still require adding the prototypes to
my code so the compiler knows what to make of them, but less chance of
breakage is good.