dick hoogendijk wrote:
Are there any known issues involving VirtualBox using shared folders
from a ZFS filesystem?
Why should there be? A shared folder is just a directory.
--
Ian.
Frank Middleton f.middle...@apogeect.com wrote:
On 09/27/09 11:25 AM, Joerg Schilling wrote:
Frank Middletonf.middle...@apogeect.com wrote:
Could you fix the Wikipedia article? http://en.wikipedia.org/wiki/TMPFS
it first appeared in SunOS 4.1, released in March 1990
It appeared
Hello list,
We are unfortunately still experiencing some issues regarding our support
license with Sun, or rather our Sun Vendor.
We need ZFS user quotas (that's not the ZFS file-system quota), which first
appeared in snv_114.
We would like to run something like snv_117 (don't really care
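For reference, a minimal sketch of the per-user quota feature in question, which arrived with snv_114 / zpool version 15 (the pool and user names here are hypothetical):
  # set a 10 GB quota for user alice and check her usage
  zfs set userquota@alice=10G tank/home
  zfs get userused@alice tank/home
  # summarize per-user space consumption for the filesystem
  zfs userspace tank/home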
On Mon, Sep 28, 2009 at 2:20 PM, Jorgen Lundman lund...@gmo.jp wrote:
We would like to run something like snv_117 (don't really care which version
per-se, that is just the one version we have done the most testing with).
But our Vendor will only support Solaris 10. After weeks of wrangling,
Hi, this may not be the correct mailing list for this, but I'd like to share this
with you: I noticed weird network behavior with osol snv_123.
ICMP to the host lags randomly between 500ms and 5000ms, and ssh sessions seem to
tangle; I guess this could affect iscsi/nfs as well.
What was most interesting was that
On 28 September, 2009 - Jorgen Lundman sent me these 1,7K bytes:
Hello list,
We are unfortunately still experiencing some issues regarding our support
license with Sun, or rather our Sun Vendor.
We need ZFS user quotas (that's not the ZFS file-system quota), which
first appeared in
Markus Kovero wrote:
Hi, this may not be the correct mailing list for this, but I'd like to share
this with you: I noticed weird network behavior with osol snv_123.
ICMP to the host lags randomly between 500ms and 5000ms, and ssh sessions
seem to tangle; I guess this could affect iscsi/nfs
Joerg Schilling wrote:
Just to prove my information: I invented fbk (which Sun now calls lofi)
Sun does NOT call your fbk by the name lofi. Lofi is a completely
different implementation of the same concept.
--
Darren J Moffat
Hi
Yes Solaris 10 10/09 (update 8) will contain
6501037 want user/group quotas on zfs
it should be out within a few weeks.
So if they have zpools already installed they can apply
141444-09/141445-09 (the 10/09 kernel patch) and, post reboot, run zpool
upgrade to go to zpool version 15 (the
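As a sketch of that upgrade step (the pool name is hypothetical; check the supported versions first):
  # list the pool versions this system supports
  zpool upgrade -v
  # after patching and rebooting, upgrade one pool (or -a for all)
  zpool upgrade tank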
Hi
So the ship date is 19th October for Solaris 10 10/09 (update 8).
Enda
Enda O'Connor wrote:
Hi
Yes Solaris 10 10/09 (update 8) will contain
6501037 want user/group quotas on zfs
it should be out within a few weeks.
So if they have zpools already installed they can apply
141444-09/141445-09
Not that I have seen. I use them, they work.
--chris
Tomas Ögren wrote:
http://sparcv9.blogspot.com/2009/08/solaris-10-update-8-1009-is-comming.html
which is in no way official, says it'll be in 10u8 which should be
coming within a month.
/Tomas
That would be perfect. I wonder why I have so much trouble finding information
about future
On 27.09.09 19:35, Erik Ableson wrote:
Good link - thanks. I'm looking at the details for that one and learning a
little zdb at the same time. I've got a situation perhaps a little different in
that I _do_ have a current copy of the slog in a file with what appears to be
current data.
However,
On 27.09.09 14:34, Erik Ableson wrote:
Hmmm - I've got a fairly old copy of the zpool cache file (circa July), but
nothing structural has changed in pool since that date. What other data is held
in that file? There have been some filesystem changes, but nothing critical is
in the newer
TMPFS was not in the first release of 4.0. It was introduced to boost the
performance of diskless clients which no longer had the old network disk for
their root file systems and hence /tmp was now over NFS.
Whether there was a patch that brought it back into 4.0 I don't recall but I
don't
On 09/28/09 12:40 AM, Ron Watkins wrote:
Thus, I'm at a loss as to how to get the root pool set up as a 20GB
slice
20GB is too small. You'll be fighting for space every time
you use pkg. From my considerable experience installing to a
20GB mirrored rpool, I would go for 32GB if you can.
Jorgen Lundman wrote:
When I approach Sun-Japan directly I just get told that they don't speak
English. When my Japanese colleagues approach Sun-Japan directly, it is
suggested to us that we stay with our current Vendor.
hey ...
I work at Sun Japan in the Yoga office. I can connect you with
Trying to move this to a new thread, although I don't think it
has anything to do with ZFS :-)
On 09/28/09 08:54 AM, Chris Gerhard wrote:
TMPFS was not in the first release of 4.0. It was introduced to boost
the performance of diskless clients which no longer had the old
network disk for their
On 29.07.09 15:18, Markus Kovero wrote:
I recently noticed that importing larger pools occupied by large amounts of
data can keep zpool import running for several hours, with zpool iostat only
showing some random reads now and then and iostat -xen showing quite busy disk
usage. It's almost it
Yesterday, Paul Archer wrote:
I estimate another 10-15 hours before this disk is finished resilvering and
the zpool is OK again. At that time, I'm going to switch some hardware out
(I've got a newer and higher-end LSI card that I hadn't used before because
it's PCI-X, and won't fit on my
On 2009/09/28, at 22:09, Jim Grisanzio wrote:
Jorgen Lundman wrote:
When I approach Sun-Japan directly I just get told that they don't
speak
English. When my Japanese colleagues approach Sun-Japan directly,
it is
suggested to us that we stay with our current Vendor.
hey ...
I work at
8:30am, Paul Archer wrote:
And the hits just keep coming...
The resilver finished last night, so rebooted the box as I had just upgraded
to the latest Dev build. Not only did the upgrade fail (love that instant
rollback!), but now the zpool won't come online:
r...@shebop:~# zpool import
Hi Ron,
Any reason why you want to use slices except for the root pool?
I would recommend a 4-disk configuration like this:
mirrored root pool on c1t0d0s0 and c2t0d0s0
mirrored app pool on c1t1d0 and c2t1d0
Let the install use one big slice for each disk in the mirrored root
pool, which is
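A rough sketch of that layout, using the device names from the example above (the root pool is built by the installer on the s0 slices; the app pool can be given whole disks):
  zpool create apppool mirror c1t1d0 c2t1d0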
On 28.09.09 18:09, Paul Archer wrote:
8:30am, Paul Archer wrote:
And the hits just keep coming...
The resilver finished last night, so rebooted the box as I had just
upgraded to the latest Dev build. Not only did the upgrade fail (love
that instant rollback!), but now the zpool won't come
Without doing a zpool scrub, what's the quickest way to find files in a
filesystem with cksum errors? Iterating over all files with find takes
quite a bit of time. Maybe there's some zdb fu that will perform the
check for me?
--
albert chin (ch...@thewrittenword.com)
7:56pm, Victor Latushkin wrote:
While 'zdb -l /dev/dsk/c7d0s0' shows normal labels. So the new question is:
how do I tell ZFS to use c7d0s0 instead of c7d0? I can't do a 'zpool
replace' because the zpool isn't online.
ZFS actually uses c7d0s0 and not c7d0 - it shortens output to c7d0 in
Hi all,
There is no generic response for:
Is it better to have a small SGA + big ZFS ARC or large SGA + small
ZFS ARC?
We can answer:
Have a large enough SGA to get a good cache hit ratio (higher than 90%
for OLTP).
Have a few GB of ZFS ARC (not less than 500MB; usually more than 16GB is not
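A common way to keep the ARC small in favor of the SGA is to cap it in /etc/system; a minimal sketch, assuming a 4 GB cap is wanted (requires a reboot):
  * /etc/system entry: limit the ZFS ARC to 4 GB (0x100000000 bytes)
  set zfs:zfs_arc_max = 0x100000000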
Been there, done that, got the tee shirt. A larger SGA will *always*
be more efficient at servicing Oracle requests for blocks. You avoid
going through all the I/O code of Oracle and it simply reduces to a hash lookup.
http://blogs.sun.com/glennf/entry/where_do_you_cache_oracle
al...@sun wrote:
On Sep 28, 2009, at 2:41 PM, Albert Chin wrote:
Without doing a zpool scrub, what's the quickest way to find files
in a
filesystem with cksum errors? Iterating over all files with find
takes
quite a bit of time. Maybe there's some zdb fu that will perform the
check for me?
Scrub could be
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar cf - . > /dev/null
If you think about it, validating checksums requires reading the data.
So you simply need to read the data.
This should work but it does not verify the redundant metadata. For
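Putting the thread's suggestion together, a sketch of the read-everything approach (pool name and mountpoint are hypothetical); once the reads complete, zpool status -v lists any files with permanent errors:
  cd /tank/fs
  tar cf - . > /dev/null   # forces every file's data to be read and checksummed
  zpool status -v tank     # lists files affected by cksum errors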
Paul,
Thanks for additional data, please see comments inline.
Paul Archer wrote:
7:56pm, Victor Latushkin wrote:
While 'zdb -l /dev/dsk/c7d0s0' shows normal labels. So the new
question is: how do I tell ZFS to use c7d0s0 instead of c7d0? I can't
do a 'zpool replace' because the zpool isn't
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar cf - . > /dev/null
If you think about it, validating checksums requires reading the data.
So you simply need to read the data.
This
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar cf - . > /dev/null
If you think about it, validating checksums requires reading the
On Mon, Sep 28, 2009 at 12:16 PM, Richard Elling
richard.ell...@gmail.comwrote:
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar
On Mon, September 28, 2009 07:56, Frank Middleton wrote:
On 09/28/09 12:40 AM, Ron Watkins wrote:
Thus, I'm at a loss as to how to get the root pool set up as a 20GB
slice
20GB is too small. You'll be fighting for space every time
you use pkg. From my considerable experience installing to a
On Mon, 28 Sep 2009, Bob Friesenhahn wrote:
This should work but it does not verify the redundant metadata. For example,
the duplicate metadata copy might be corrupt but the problem is not detected
since it did not happen to be used.
I am finding that your tar incantation is reading hardly
On Mon, Sep 28, 2009 at 10:16:20AM -0700, Richard Elling wrote:
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar cf - . > /dev/null
If
Richard Elling wrote:
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar cf - . > /dev/null
If you think about it, validating checksums
Liam Slusser wrote:
Long story short, my cat jumped on my server at my house, crashing two drives at
the same time. It was a 7-drive raidz (next time I'll do raidz2).
Long story short - we've been able to get access to the data in the pool.
This involved finding a better old state with the help of
On Sep 28, 2009, at 10:31 AM, Victor Latushkin wrote:
Richard Elling wrote:
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar cf - .
On 09/28/09 01:22 PM, David Dyer-Bennet wrote:
That seems truly bizarre. Virtualbox recommends 16GB, and after doing an
install there's about 12GB free.
There's no way Solaris will install in 4GB if I understand what
you are saying. Maybe fresh off a CD when it doesn't have to
download a
snv114# zfs get used,reservation,volsize,refreservation,usedbydataset,usedbyrefreservation tww/opt/vms/images/vios/mello-0.img
NAME                                 PROPERTY  VALUE  SOURCE
tww/opt/vms/images/vios/mello-0.img  used      30.6G  -
On 28.09.09 22:01, Richard Elling wrote:
On Sep 28, 2009, at 10:31 AM, Victor Latushkin wrote:
Richard Elling wrote:
On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be
On Sep 28, 2009, at 6:58 PM, Albert Chin wrote:
Any reason the refreservation and usedbyrefreservation properties are
not sent?
I believe this was CR 6853862, fixed in snv_121.
-Chris
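On builds without that fix, a hedged workaround is to restore the property by hand on the receiving side (the size below is illustrative):
  # re-create the refreservation that send/recv dropped
  zfs set refreservation=32G tww/opt/vms/images/vios/mello-0.img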
On 29.09.09 03:58, Albert Chin wrote:
snv114# zfs get used,reservation,volsize,refreservation,usedbydataset,usedbyrefreservation tww/opt/vms/images/vios/mello-0.img
NAME                                 PROPERTY  VALUE  SOURCE
tww/opt/vms/images/vios/mello-0.img  used
On Mon, 28 Sep 2009, Richard Elling wrote:
In other words, I am concerned that people replace good
data protection
practices with scrubs and expecting scrub to deliver better data
protection
(it won't).
Many people here would profoundly disagree with the above. There is
Darren J Moffat darr...@opensolaris.org wrote:
Joerg Schilling wrote:
Just to prove my information: I invented fbk (which Sun now calls lofi)
Sun does NOT call your fbk by the name lofi. Lofi is a completely
different implementation of the same concept.
With this kind of driver the
Chris Gerhard chris.gerh...@sun.com wrote:
TMPFS was not in the first release of 4.0. It was introduced to boost the
performance of diskless clients which no longer had the old network disk for
their root file systems and hence /tmp was now over NFS.
I did receive the SunOS-4.0 sources for
When transferring a volume between servers, is it expected that the
usedbydataset property should be the same on both? If not, is it cause
for concern?
snv114# zfs list tww/opt/vms/images/vios/near.img
NAME USED AVAIL REFER MOUNTPOINT
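One quick way to compare the property on the two hosts (run on each side; -H and -p give script-friendly, exact-byte output):
  zfs get -Hp usedbydataset tww/opt/vms/images/vios/near.img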
On Mon, Sep 28, 2009 at 07:33:56PM -0500, Albert Chin wrote:
When transferring a volume between servers, is it expected that the
usedbydataset property should be the same on both? If not, is it cause
for concern?
snv114# zfs list tww/opt/vms/images/vios/near.img
NAME
Frank Middleton f.middle...@apogeect.com wrote:
On 09/28/09 03:00 AM, Joerg Schilling wrote:
I am not sure whether my changes will be kept as wikipedia prefers to
keep badly quoted wrong information before correct information supplied by
people who have first hand information.
They
Hello,
I have been researching building a home storage server based on
OpenSolaris and ZFS, and I would appreciate any time people could take
to comment on my current leanings.
I've tried to gather old information from this list as well as the
HCL, but I would welcome anyone's experience
This seems like you're doing an awful lot of planning for only 8 SATA
+ 4 SAS bays?
I agree - SOHO usage of ZFS is still a scary "will this work?" deal. I
found a working setup and I cloned it. It gives me 16x SATA + 2x SATA
for mirrored boot, 4GB ECC RAM and a quad core processor - total cost
In light of all the trouble I've been having with this zpool, I bought a
2TB drive, and I'm going to move all my data over to it, then destroy the
pool and start over.
Before I do that, what is the best way on an x86 system to format/label
the disks?
Thanks,
Paul
On Sep 28, 2009, at 4:20 PM, Michael Shadle wrote:
I agree - SOHO usage of ZFS is still a scary "will this work?" deal. I
found a working setup and I cloned it. It gives me 16x SATA + 2x SATA
for mirrored boot, 4GB ECC RAM and a quad core processor - total cost
without disks was ~ $1k I believe.
zfs receive should allow an option to disable the immediate mounting of a received
filesystem.
When the original filesystem's mountpoints have changed, it's hard to make a
clone fs with send/receive, because the received filesystem immediately tries to
mount at the old mountpoint, which is locked by the source fs.
In
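On builds whose zfs receive supports -u, a sketch of the requested behavior (names are placeholders); as the follow-up below shows, older builds reject the option:
  # -u: do not mount the received filesystem
  zfs send -R tank/fs@snap | ssh host zfs receive -u -d backuppool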
Yeah - give me a bit to rope together the parts list and double check
it, and I will post it on my blog.
On Mon, Sep 28, 2009 at 2:34 PM, Ware Adams rwali...@washdcmail.com wrote:
On Sep 28, 2009, at 4:20 PM, Michael Shadle wrote:
I agree - SOHO usage of ZFS is still a scary "will this work?"
On 09/28/09 15:54, Igor Velkov wrote:
zfs receive should allow an option to disable the immediate mounting of a received filesystem.
When the original filesystem's mountpoints have changed, it's hard to make a clone fs with send/receive, because the received filesystem immediately tries to mount at the old
personally i like this case:
http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021
it's got 20 hot swap bays, and it's surprisingly well built. For the money,
it's an amazing deal.
Rackmount chassis aren't usually designed with acoustics in mind :)
However, I might be getting my closet fitted so I can put half a rack
in. Might switch up my configuration to rack stuff soon.
On Mon, Sep 28, 2009 at 3:04 PM, Thomas Burgess wonsl...@gmail.com wrote:
personally i like this
I'm looking at building a high bandwidth file server to store video for
editing, as an alternative to buying a $30,000 hardware RAID and spending $2000
per seat on fibrechannel and specialized SAN drive software.
Uncompressed HD runs around 1.2 to 4 gigabits per second, putting it in 10
Wah!
Thank you, lalt!
I own this case; it's really not that bad. It's got 4 fans but they are
really big and don't make nearly as much noise as you'd think. Honestly,
it's not bad at all. I know someone who sits it vertically as well;
honestly, it's a good case for the money
On Mon, Sep 28, 2009 at 6:06 PM,
Not as good as I hoped.
zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx zfs
recv -vuFd xxx/xxx
invalid option 'u'
usage:
receive [-vnF] filesystem|volume|snapshot
receive [-vnF] -d filesystem
For the property list, run: zfs set|get
For the delegated
On 09/28/09 16:16, Igor Velkov wrote:
Not as good as I hoped.
zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx zfs
recv -vuFd xxx/xxx
invalid option 'u'
usage:
receive [-vnF] filesystem|volume|snapshot
receive [-vnF] -d filesystem
For the property
On Mon, Sep 28, 2009 at 03:16:17PM -0700, Igor Velkov wrote:
Not as good as I hoped.
zfs send -R xxx/x...@daily_2009-09-26_23:51:00 |ssh -c blowfish r...@xxx.xx
zfs recv -vuFd xxx/xxx
invalid option 'u'
usage:
receive [-vnF] filesystem|volume|snapshot
receive [-vnF] -d
On Mon, 28 Sep 2009, Richard Connamacher wrote:
I'm looking at building a high bandwidth file server to store video
for editing, as an alternative to buying a $30,000 hardware RAID and
spending $2000 per seat on fibrechannel and specialized SAN drive
software.
Uncompressed HD runs around
Well, when I start looking into rack configurations I will consider it. :)
here's my configuration - enjoy!
http://michaelshadle.com/2009/09/28/my-recipe-for-zfs-at-home/
On Mon, Sep 28, 2009 at 3:10 PM, Thomas Burgess wonsl...@gmail.com wrote:
i own this case, it's really not that bad. It's
On Sep 28, 2009, at 11:41 AM, Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
In other words, I am concerned that people replace good data
protection
practices with scrubs and expecting scrub to deliver better
data protection
(it won't).
Many people
Thanks for the detailed information. When you get the patch, I'd love to hear
if it fixes the problems you're having. From my understanding, a working
prefetch would keep video playback from stuttering whenever the drive head
moves — is this right?
The inability to read and write
On Mon, 28 Sep 2009, Richard Connamacher wrote:
Thanks for the detailed information. When you get the patch, I'd
love to hear if it fixes the problems you're having. From my
understanding, a working prefetch would keep video playback from
stuttering whenever the drive head moves — is this
For me, aggressive prefetch is most important in order to schedule
reads from enough disks in advance to produce a high data rate. This
is because I am using mirrors. When using raidz or raidz2 the
situation should be a bit different because raidz is striped. The
prefetch bug which is
On Sep 28, 2009, at 19:39, Richard Elling wrote:
Finally, there are two basic types of scrubs: read-only and
rewrite. ZFS does
read-only. Other scrubbers can do rewrite. There is evidence that
rewrites
are better for attacking superparamagnetic decay issues.
Something that may be
On Mon, 28 Sep 2009, Richard Connamacher wrote:
I'm planning on using RAIDZ2 if it can keep up with my bandwidth
requirements. So maybe ZFS could be an option after all?
ZFS certainly can be an option. If you are willing to buy Sun
hardware, they have a try and buy program which would
Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar cf - . > /dev/null
If you think about it, validating checksums requires reading the data.
So you simply need to read the data.
This should work but it does not verify the
Robert Milkowski wrote:
Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:
Scrub could be faster, but you can try
tar cf - . > /dev/null
If you think about it, validating checksums requires reading the data.
So you simply need to read the data.
This should work but it
Paul Archer wrote:
In light of all the trouble I've been having with this zpool, I bought
a 2TB drive, and I'm going to move all my data over to it, then
destroy the pool and start over.
Before I do that, what is the best way on an x86 system to
format/label the disks?
if the entire disk is
I was thinking of custom building a server, which I think I can do for around
$10,000 of hardware (using 45 SATA drives and a custom enclosure), and putting
OpenSolaris on it. It's a bit of a risk compared to buying a $30,000 server,
but would be a fun experiment.
On Mon, 28 Sep 2009, Richard Connamacher wrote:
I was thinking of custom building a server, which I think I can do
for around $10,000 of hardware (using 45 SATA drives and a custom
enclosure), and putting OpenSolaris on it. It's a bit of a risk
compared to buying a $30,000 server, but would
On Mon, 28 Sep 2009, Richard Elling wrote:
Many people here would profoundly disagree with the above. There is no
substitute for good backups, but a periodic scrub helps validate that a
later resilver would succeed. A periodic scrub also helps find system
problems early when they are less
Cool.
FWIW, there appears to be an issue with the LSI 150-6 card I was using. I
grabbed an old server m/b from work, and put a newer PCI-X LSI card in it,
and I'm getting write speeds of about 60-70MB/sec, which is about 40x the
write speed I was seeing with the old card.
Paul
Tomorrow,
11:04pm, Paul Archer wrote:
Cool.
FWIW, there appears to be an issue with the LSI 150-6 card I was using. I
grabbed an old server m/b from work, and put a newer PCI-X LSI card in it,
and I'm getting write speeds of about 60-70MB/sec, which is about 40x the
write speed I was seeing with the
Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Connamacher wrote:
I was thinking of custom building a server, which I think I can do
for around $10,000 of hardware (using 45 SATA drives and a custom
enclosure), and putting OpenSolaris on it. It's a bit of a risk
compared to buying a