Bob Friesenhahn wrote:
Something to be aware of is that not all SSDs are the same. In fact,
some "faster" SSDs may use a RAM write cache (they all do) and then
ignore a cache sync request, without including hardware/firmware
support to ensure that the data is persisted if there is power loss.
On Wed, Jul 29, 2009 at 05:34:53PM -0700, Roman V Shaposhnik wrote:
> On Wed, 2009-07-29 at 15:06 +0300, Andriy Gapon wrote:
> > What do you think about the following feature?
> >
> > "Subdirectory is automatically a new filesystem" property - an
> > administrator turns
> > on this magic property
It seems like a lot of media services are starting to catch on about ZFS. I
know Last.fm makes use of it, and I also found out that Grooveshark does too (see this blog:
http://www.facebook.com/notes.php?id=7354446700&start=200&hash=fb219332a992a64f12d200435b3d24f2
).
Grooveshark looks nice for end users as
hey anil,
given that things work, i'd recommend leaving them alone.
if you really want to insist on cleaning things up aesthetically
then you need to do multiple zfs operations and you'll need to shut down
the zones.
assuming you haven't cloned any zones (because if you did, that
complicates things).
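for reference, a minimal sketch of the steps involved, assuming a zone
named cars whose dataset should end up at zones/fans (names are
illustrative, not taken from your setup):
# zoneadm -z cars halt
# zfs rename zones/cars zones/fans
# zonecfg -z cars
zonecfg:cars> set zonepath=/zones/fans
zonecfg:cars> commit
zonecfg:cars> exit
# zoneadm -z cars boot
whether zonecfg allows a zonepath change on an installed zone varies by
release (you may need a detach/attach), so treat this as an outline, not
a recipe.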
I created a couple of zones. I have a zone path like this:
r...@vps1:~# zfs list -r zones/cars
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zones/fans           1.22G  3.78G    22K  /zones/fans
zones/fans/ROOT      1.22G  3.78G    19K  legacy
zones/fans/ROOT/zbe  1.22G  3.78G  1.22G  legacy
On Wed, 2009-07-29 at 15:06 +0300, Andriy Gapon wrote:
> What do you think about the following feature?
>
> "Subdirectory is automatically a new filesystem" property - an administrator
> turns
> on this magic property of a filesystem, after that every mkdir *in the root*
> of
> that filesystem c
on 29/07/2009 17:52 Andre van Eyssen said the following:
> On Wed, 29 Jul 2009, Andriy Gapon wrote:
>
>> Well, I specifically stated that this property should not be
>> recursive, i.e. it should work only in a root of a filesystem.
>> When setting this property on a filesystem an administrator should
>> carefully set permissions to make sure that only trusted entities
Hello,
I've tried to find any hard information on how to install, and boot, OpenSolaris
from a USB stick. I've seen a few people write up successful stories about
this, but I can't seem to get it to work.
The procedure:
Boot from LiveCD, insert USB drive, find it using `format', start install
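One step that is easy to miss is putting the boot blocks on the USB device
itself after the install; a hedged sketch (the device name c1t0d0s0 is only
an example, substitute what `format' shows for your stick):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
It is also worth checking that the BIOS boots the stick as USB-HDD rather
than USB-ZIP.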
Glen Gunselman wrote:
Here is the output from my J4500 with 48 x 1 TB
disks. It is almost the
exact same configuration as
yours. This is used for Netbackup. As Mario just
pointed out, "zpool
list" includes the parity drive
in the space calculation whereas "zfs list" doesn't.
[r...@xxx /]#>
Joseph L. Casale wrote:
I apologize for replying in the middle of this thread, but I never
saw the initial snapshot syntax of mypool2, which needs to be
recursive (zfs snapshot -r mypo...@snap) to snapshot all the
datasets in mypool2. Then, use zfs send -R to pick up and
restore all the dataset properties.
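A minimal sketch of that sequence (the destination pool name mypool1 is
assumed here purely for illustration):
# zfs snapshot -r mypool2@snap
# zfs send -R mypool2@snap | zfs receive -d mypool1
The -R on send carries the descendent datasets, their snapshots, and their
properties; -d on receive recreates the source paths under the destination.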
Yup, somebody pointed that out to me last week and I can't wait :-)
On Wed, Jul 29, 2009 at 7:48 PM, Dave wrote:
> Anyone (Ross?) creating ZFS pools over iSCSI connections will want to pay
> attention to snv_121 which fixes the 3 minute hang after iSCSI disk
> problems:
>
> http://bugs.opensolari
Anyone (Ross?) creating ZFS pools over iSCSI connections will want to
pay attention to snv_121 which fixes the 3 minute hang after iSCSI disk
problems:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=649
Yay!
On Jul 28, 2009, at 6:34 PM, Eric D. Mudama wrote:
On Mon, Jul 27 at 13:50, Richard Elling wrote:
On Jul 27, 2009, at 10:27 AM, Eric D. Mudama wrote:
Can *someone* please name a single drive+firmware or RAID
controller+firmware that ignores FLUSH CACHE / FLUSH CACHE EXT
commands? Or worse, re
On 29.07.09 07:56, Andre van Eyssen wrote:
On Wed, 29 Jul 2009, Mark J Musante wrote:
Yes, if it's local. Just use df -n $path and it'll spit out the
filesystem type. If it's mounted over NFS, it'll just say something
like nfs or autofs, though.
$ df -n /opt
Filesystem            kbytes
>I apologize for replying in the middle of this thread, but I never
>saw the initial snapshot syntax of mypool2, which needs to be
>recursive (zfs snapshot -r mypo...@snap) to snapshot all the
>datasets in mypool2. Then, use zfs send -R to pick up and
>restore all the dataset properties.
>
>What wa
On Wed, 29 Jul 2009, Jorgen Lundman wrote:
So, it is slower than the CF test. This is disappointing. Everyone else seems
to use Intel X25-M, which have a write-speed of 170MB/s (2nd generation) so
perhaps that is why it works better for them. It is curious that it is slower
than the CF card.
I can think of a different feature where this would be useful - storing virtual
machines.
With an automatic filesystem per directory, each virtual machine would be
stored in its own filesystem, allowing rapid snapshots and instant restores
of any machine.
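The benefit is visible with ordinary commands once each guest has its own
dataset (names are illustrative):
# zfs snapshot tank/vms/guest1@pre-upgrade
# zfs rollback tank/vms/guest1@pre-upgrade
With one big filesystem holding all the guests, a rollback would revert
every machine at once; per-guest filesystems confine it to the one you want.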
One big limitation for me of zfs is that al
Darren J Moffat wrote:
Kyle McDonald wrote:
Andriy Gapon wrote:
What do you think about the following feature?
"Subdirectory is automatically a new filesystem" property - an
administrator turns
on this magic property of a filesystem, after that every mkdir *in
the root* of
that filesystem c
On Wed, Jul 29, 2009 at 03:35:06PM +0100, Darren J Moffat wrote:
> Andriy Gapon wrote:
> >What do you think about the following feature?
> >
> >"Subdirectory is automatically a new filesystem" property - an
> >administrator turns
> >on this magic property of a filesystem, after that every mkdir *i
fyleow wrote:
I have a raidz1 tank of 5x 640 GB hard drives on my
newly installed OpenSolaris 2009.06 system. I did a
zpool export tank and the process has been running
for 3 hours now, taking up 100% CPU usage.
When I do a zfs list tank it's still shown as
mounted. What's going on?
Kyle McDonald wrote:
Andriy Gapon wrote:
What do you think about the following feature?
"Subdirectory is automatically a new filesystem" property - an
administrator turns
on this magic property of a filesystem, after that every mkdir *in the
root* of
that filesystem creates a new filesystem.
Andre van Eyssen wrote:
On Wed, 29 Jul 2009, Andriy Gapon wrote:
Well, I specifically stated that this property should not be
recursive, i.e. it should work only in a root of a filesystem.
When setting this property on a filesystem an administrator should
carefully set permissions to make sure that only trusted entities
Andriy Gapon wrote:
What do you think about the following feature?
"Subdirectory is automatically a new filesystem" property - an administrator
turns
on this magic property of a filesystem, after that every mkdir *in the root* of
that filesystem creates a new filesystem. The new filesystems hav
On 29.07.09 16:59, Mark J Musante wrote:
On Tue, 28 Jul 2009, Glen Gunselman wrote:
# zpool list
NAME     SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
zpool1  40.8T   176K  40.8T    0%  ONLINE  -
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
zpool1   364K  32.1T  28.8K  /zpool1
I'm curious about whether there are any potential problems with using LVM
metadevices as ZFS zpool targets. I have a couple of situations where using a
device directly by ZFS causes "Bus error" messages on the console and lots of
"stalled" I/O. But as soon as I wrap that device inside an LVM metadevice
On Wed, 29 Jul 2009, Mark J Musante wrote:
Yes, if it's local. Just use df -n $path and it'll spit out the filesystem
type. If it's mounted over NFS, it'll just say something like nfs or autofs,
though.
$ df -n /opt
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/md/d
I did a zpool scrub recently, and while it was running it reported errors and
warned about restoring from backup. When the scrub is complete, it reports
finishing with 0 errors though. On the next scrub some other errors are
reported in different files.
"iostat -xne" does report a few errors (1 s
On Wed, 29 Jul 2009, Andriy Gapon wrote:
Well, I specifically stated that this property should not be recursive, i.e. it
should work only in a root of a filesystem.
When setting this property on a filesystem an administrator should carefully set
permissions to make sure that only trusted entities
On Wed, 29 Jul 2009, David Magda wrote:
Which makes me wonder: is there a programmatic way to determine if a
path is on ZFS?
Yes, if it's local. Just use df -n $path and it'll spit out the filesystem
type. If it's mounted over NFS, it'll just say something like nfs or
autofs, though.
Regards,
markm
On Wed, 29 Jul 2009, David Magda wrote:
Which makes me wonder: is there a programmatic way to determine if a path
is on ZFS?
statvfs(2)
--
Andre van Eyssen.
mail: an...@purplecow.org jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
on 29/07/2009 17:24 Andre van Eyssen said the following:
> On Wed, 29 Jul 2009, Andriy Gapon wrote:
>
>> "Subdirectory is automatically a new filesystem" property - an
>> administrator turns
>> on this magic property of a filesystem, after that every mkdir *in the
>> root* of
>> that filesystem cr
David Magda wrote:
On Wed, July 29, 2009 10:24, Andre van Eyssen wrote:
It'd either require major surgery to userland tools, including every
single program that might want to create a directory, or major surgery to
the kernel. The former is unworkable, the latter .. scary.
How about: add a flag ("-Z"?) to use
On Wed, July 29, 2009 10:24, Andre van Eyssen wrote:
> It'd either require major surgery to userland tools, including every
> single program that might want to create a directory, or major surgery to
> the kernel. The former is unworkable, the latter .. scary.
How about: add a flag ("-Z"?) to use
Andriy Gapon wrote:
What do you think about the following feature?
"Subdirectory is automatically a new filesystem" property - an administrator
turns
on this magic property of a filesystem, after that every mkdir *in the root* of
that filesystem creates a new filesystem. The new filesystems hav
On Wed, 29 Jul 2009, Glen Gunselman wrote:
Where would I see CR 6308817? My usual search tools aren't finding it.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6308817
Regards,
markm
On Wed, 29 Jul 2009, Andriy Gapon wrote:
"Subdirectory is automatically a new filesystem" property - an administrator
turns
on this magic property of a filesystem, after that every mkdir *in the root* of
that filesystem creates a new filesystem. The new filesystems have
default/inherited proper
What do you think about the following feature?
"Subdirectory is automatically a new filesystem" property - an administrator
turns
on this magic property of a filesystem, after that every mkdir *in the root* of
that filesystem creates a new filesystem. The new filesystems have
default/inherited p
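To make that concrete, a hypothetical session - the property name autonewfs
is invented here purely for illustration, no such property exists today:
# zfs set autonewfs=on tank/home
# mkdir /tank/home/alice
# zfs list -r tank/home | grep alice
tank/home/alice  ...  /tank/home/alice
In other words, the mkdir would behave as an implicit zfs create.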
> This is normal, and admittedly somewhat confusing
> (see CR 6308817). Even
> if you had not created the additional zfs datasets,
> it still would have
> listed 40T and 32T.
>
Mark,
Thanks for the examples.
Where would I see CR 6308817? My usual search tools aren't finding it.
Glen
> Here is the output from my J4500 with 48 x 1 TB
> disks. It is almost the
> exact same configuration as
> yours. This is used for Netbackup. As Mario just
> pointed out, "zpool
> list" includes the parity drive
> in the space calculation whereas "zfs list" doesn't.
>
> [r...@xxx /]#> zpool sta
> IIRC zpool list includes the parity drives in the disk space calculation
and zfs list doesn't.
> Terabyte drives are more likely 900-something GB drives thanks to that
base-2 vs. base-10 confusion HD manufacturers introduced. Using that
900GB figure I get to both 40TB and 32TB with and without parity.
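The conversion itself: a marketed 1 TB is 10^12 bytes, and 10^12 / 2^30
bytes is roughly 931 GiB, so each "1 TB" drive really contributes about 0.9
binary terabytes, which is where the 900-something GB figure comes from.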
On 29/07/2009, at 12:00 AM, James Lever wrote:
CR 6865661 *HOT* Created, P1 opensolaris/triage-queue zfs scrub
rpool causes zpool hang
This bug I logged has been marked as related to CR 6843235 which is
fixed in snv 119.
cheers,
James
Hi James
Many thanks for finding & posting that link.
I'm sure many people on this forum will
be interested in trying out Brad Fitzpatrick's
perl script 'diskchecker.pl'.
It will be interesting to hear their results.
I've not yet had time to work out how Brad's
script works. It would be good if
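For anyone wanting to try it, the usage appears to be roughly the following
(flags quoted from memory, so treat them as an assumption):
$ diskchecker.pl -l
(on a second machine, acting as the server)
$ diskchecker.pl -s <server> create test_file 500
(on the box under test; then cut the power mid-run)
$ diskchecker.pl -s <server> verify test_file
(after reboot; any writes the drive acknowledged but lost show up here)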
On Tue, 28 Jul 2009, Glen Gunselman wrote:
# zpool list
NAME     SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
zpool1  40.8T   176K  40.8T    0%  ONLINE  -
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
zpool1   364K  32.1T  28.8K  /zpool1
This is normal, and admittedly somewhat confusing (see CR 6308817).
Victor,
after running
# ps -ef | grep zdb | grep -v grep
root  3281  1683  1  14:22:09  pts/2  8:57  zdb -e -t 2682807 data1
I inserted the pid after 0t:
# echo "0t3281::pid2proc|::walk thread|::findstack -v" | mdb -k >mdb-k0t3281
and got a couple of records:
stack pointer for thread ff02017a
I recently noticed that importing larger pools occupied by large amounts of
data can take several hours, with zpool iostat only showing some random reads
now and then and iostat -xen showing quite busy disk usage. It's almost as if
it goes through every bit in the pool before it goes
On 29.07.09 14:42, Pavel Kovalenko wrote:
Fortunately, after several hours the terminal came back:
# zdb -e data1
Uberblock
magic = 00bab10c
version = 6
txg = 2682808
guid_sum = 14250651627001887594
timestamp = 1247866318 UTC = Sat Jul 18 01:31:58 2009
Jeff,
On Tue, 28 Jul 2009, Jeff Hulen wrote:
Do any of you know how to set the default ZFS ACLs for newly created
files and folders when those files and folders are created through Samba?
I want to have all new files and folders only inherit extended
(non-trivial) ACLs that are set on the parent
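In case it helps while others answer: the two pieces involved are the
dataset's aclinherit property and inheritable ACEs on the parent directory.
A minimal sketch with illustrative names:
# zfs set aclinherit=passthrough tank/share
# chmod A+group:staff:read_data/write_data/execute:file_inherit/dir_inherit:allow /tank/share
New files and directories created below /tank/share, including via Samba,
should then pick up that ACE; how the trivial ACL entries interact with it
differs between builds.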
Hi all,
I need to know if it is possible to expand the capacity of a zpool without loss
of data by growing the LUN (2TB) presented from an HP EVA to a Solaris 10 host.
I know that there is a possible way in Solaris Express Community Edition, b117,
with the autoexpand property. But I still work with Solaris 10.
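For what it's worth, the sequence usually suggested for Solaris 10 before
autoexpand existed is to grow the LUN on the array, get Solaris to see the
new size, and then export/import the pool so ZFS re-reads the device size
(pool name illustrative; hedged advice, so test on a non-critical pool first):
# zpool export mypool
(rescan the device and fix up the EFI label so Solaris sees the new size)
# zpool import mypool
Done this way the data should stay intact, since export/import does not
rewrite the pool contents.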
Fortunately, after several hours the terminal came back:
# zdb -e data1
Uberblock
magic = 00bab10c
version = 6
txg = 2682808
guid_sum = 14250651627001887594
timestamp = 1247866318 UTC = Sat Jul 18 01:31:58 2009
Dataset mos [META], ID 0, cr_txg 4, 27.
Hi,
thank you so much for this post. This is exactly what I was looking for.
I've been eyeing the M3A76-CM board, but will now look at 78 and M4A as
well.
Actually, not that many Asus M3A, let alone M4A boards show up yet on the
OpenSolaris HCL, so I'd like to encourage everyone to share their h
Nigel Smith wrote:
> David Magda wrote:
>> This is also (theoretically) why a drive purchased from Sun is more
>> expensive than a drive purchased from your neighbourhood computer
>> shop: Sun (and presumably other manufacturers) takes the time and
>> effort to test things to make sure t
Hi James, I'll not reply in line since the forum software is completely munging
your post.
On the X25-E I believe there is cache, and it's not backed up. While I haven't
tested it, I would expect the X25-E to have the cache turned off while used as
a ZIL.
The 2nd generation X25-E announced by
On 29.07.09 13:04, Pavel Kovalenko wrote:
After several errors on the QLogic HBA the pool cache was damaged and ZFS
cannot import the pool; there is no disk or CPU activity during the import...
#uname -a
SunOS orion 5.11 snv_111b i86pc i386 i86pc
# zpool import
pool: data1
id: 6305414271646982336
state: ONLINE
After several errors on the QLogic HBA the pool cache was damaged and ZFS
cannot import the pool; there is no disk or CPU activity during the import...
#uname -a
SunOS orion 5.11 snv_111b i86pc i386 i86pc
# zpool import
pool: data1
id: 6305414271646982336
state: ONLINE
status: The pool was last accessed by another system.
On 29/07/2009, at 5:47 PM, Ross wrote:
Everyone else should be using the Intel X25-E. There's a massive
difference between the M and E models, and for a slog it's IOPS and
low latency that you need.
Do they have any capacitor-backed cache? Is this cache considered
stable storage? If s
Everyone else should be using the Intel X25-E. There's a massive difference
between the M and E models, and for a slog it's IOPS and low latency that you
need.
I've heard that Sun use X25-E's, but I'm sure that original reports had them
using STEC. I have a feeling the 2nd generation X25-E'