Hi
Now that Solaris 10 06/06 is finally downloadable I have some questions
about ZFS.
-We have a big storage system supporting RAID5 and RAID1. At the moment,
we only use RAID5 (for non-Solaris systems as well). We are thinking
about using ZFS on those LUNs instead of UFS. As ZFS on Hardware
So if you have a single thread doing open/write/close of 8K
files and get 1.25MB/sec, that tells me you have something
like a 6ms I/O latency, which looks reasonable too.
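For what it's worth, the back-of-the-envelope arithmetic behind that 6ms
figure, using the numbers above (8K files at 1.25MB/sec):

$ # 8 KB per file, 1.25 MB/s = 1280 KB/s, result in milliseconds
$ echo "scale=2; 8 * 1000 / 1280" | bc
6.25

So about 6ms per open/write/close cycle for a single thread.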
What does iostat -x report for svc_t (client side)?
400ms seems high for the workload _and_ doesn't match my
formula, so I don't
Hi
Probably been reported a while back, but 'zfs list -o' does not
list the rather useful (and obvious) 'name' property, nor does the manpage
at a quick read. snv_42.
# zfs list -o
missing argument for 'o' option
usage:
list [-rH] [-o property[,property]...] [-t type[,type]...]
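For what it's worth, 'name' does seem to be accepted as a column, it's just
missing from the usage text; e.g. (the dataset name below is only a placeholder):

# zfs list -o name,used,available,mountpoint tank/home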
About:
-I've read the threads about zfs and databases. Still, I'm not 100%
convinced about read performance. Doesn't the fragmentation of
large database files (because of COW) impact read performance?
I do need to get back to this thread. The way I am currently
A lesson we learned with Solaris Zones applies here to ZFS. Accomplishing
high-level goals, e.g. preparing an appropriate environment for application XYZ
installation (Zones) or preparing an appropriate filesystem for application XYZ
data (ZFS), is different than it was before Solaris 10. For
Mike Gerdts wrote:
On 6/25/06, Nathan Kroenert [EMAIL PROTECTED] wrote:
Now, looking forward a bit, where does the ZFS integration with zones
documentation belong?
Some of it will appear in the next update to the Sun BluePrint Solaris Containers
Architecture Technology Guide.
How about
On Jun 26, 2006, at 1:15 AM, Mika Borner wrote:
Hi
Now that Solaris 10 06/06 is finally downloadable I have some questions
about ZFS.
-We have a big storage system supporting RAID5 and RAID1. At the moment,
we only use RAID5 (for non-Solaris systems as well). We are thinking
about using
James C. McPherson wrote:
James C. McPherson wrote:
Jeff Bonwick wrote:
6420204 root filesystem's delete queue is not running
The workaround for this bug is to issue the following command...
# zfs set readonly=off pool/fs_name
This will cause the delete queue to start up and should flush your
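(Not in the original mail, but for anyone hitting the same bug: you can
watch the space come back as the queued deletes are processed, e.g.)

# zfs list -o name,used,available pool/fs_name
# zfs set readonly=off pool/fs_name
# zfs list -o name,used,available pool/fs_name

The used figure should drop once the delete queue has been flushed.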
Robert Milkowski wrote on 06/25/06 04:12:
Hello Neil,
Saturday, June 24, 2006, 3:46:34 PM, you wrote:
NP Chris,
NP The data will be written twice on ZFS using NFS. This is because NFS
NP on closing the file internally uses fsync to cause the writes to be
NP committed. This causes the ZIL
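(Not from the original mail, just a rough way to watch the effect described
above; the pool name and mount point here are made up.)

# on the NFS client: write ~100MB and let the close commit it
client$ dd if=/dev/zero of=/mnt/nfsdata/testfile bs=8k count=12800

# on the ZFS/NFS server: watch pool write bandwidth while the dd runs;
# the data should go to disk roughly twice, once through the intent log
# and again when the transaction group flushes
server# zpool iostat tank 1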
I had the same problem. On 6/26/06, Shannon Roddy [EMAIL PROTECTED] wrote:
Noel Dellofano wrote: Solaris 10u2 was released today. You can now download it from here: http://www.sun.com/software/solaris/get.jsp
Seems the download links are dead except for x86-64. No Sparc
Noel Dellofano wrote:
Solaris 10u2 was released today. You can now download it from here:
http://www.sun.com/software/solaris/get.jsp
Seems the download links are dead except for x86-64. No Sparc downloads.
Everything works perfectly.
$ ls -1
sol-10-u2-ga-sparc-lang-iso.zip
Shannon Roddy wrote:
Noel Dellofano wrote:
Solaris 10u2 was released today. You can now download it from here:
http://www.sun.com/software/solaris/get.jsp
Seems the download links are dead except for x86-64. No Sparc downloads.
There were some problems getting the links set
Roch wrote:
And, if the load can accommodate a
reorder, to get top per-spindle read-streaming performance,
a cp(1) of the file should do wonders on the layout.
but there may not be filesystem space for double the data.
Sounds like there is a need for a zfs-defragment-file utility.
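Until such a utility exists, the copy trick is easy enough to do by hand when
there is room, as long as nothing is writing to the file at the time (file
names below are placeholders):

$ cp dbfile dbfile.relaid    # rewrites the blocks, hopefully more sequentially
$ cmp dbfile dbfile.relaid   # paranoia: confirm the copy is identical
$ mv dbfile.relaid dbfile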
If you've got hardware raid-5, why not just run
regular (non-raid)
pools on top of the raid-5?
I wouldn't go back to JBOD. Hardware arrays offer a number of
advantages over JBOD:
- disk microcode management
- optimized access to storage
- large write caches
- RAID
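For what it's worth, the "plain pool on top of the array" idea above would
look something like this (the LUN names are made up):

# zpool create tank c2t0d0 c2t1d0
# zpool status tank

Even without ZFS-level redundancy the checksums still detect corruption; ZFS
just can't repair it on its own in that configuration.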
Eric Schrock wrote:
On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote:
You're using hardware raid. The hardware raid controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of it, but that's a given in the case of either hardware or
Gregory Shaw wrote:
On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote:
How would ZFS self heal in this case?
You're using hardware raid. The hardware raid controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of it, but that's a given
Olaf Manczak wrote:
Eric Schrock wrote:
On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote:
You're using hardware raid. The hardware raid controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of it, but that's a given in the case of
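(Not from the original posts, but to close the loop on the self-healing
question: for ZFS itself to repair bad blocks on top of an array, it needs
its own redundancy, e.g. a mirror of two LUNs. Device names are placeholders.)

# zpool create tank mirror c2t0d0 c3t0d0
# zpool scrub tank        # read everything back; checksum failures are repaired from the other side
# zpool status -v tank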