[OMG, you U.S. musicians, wanna do this all the way? Ok, granted.]
My dearest aunt is a lesbian!
I love my aunt!
[mail server, what do you say?]
Best,
z
- Original Message -
From: JZ j...@excelsioritsolutions.com
To: ZFS Discussions zfs-discuss@opensolaris.org
Sent: Sunday,
[sorry, baby don't try!]
Best,
z
- Original Message -
From: JZ j...@excelsioritsolutions.com
To: ZFS Discussions zfs-discuss@opensolaris.org
Sent: Sunday, January 18, 2009 3:17 AM
Subject: Re: [zfs-discuss] (no subject)
[OMG, you U.S. musicians, wanna do this all the way? Ok,
Richard Elling wrote:
...
Most folks who want performance data collection all day long will
enable accounting and use sar. sar also uses kstats. Or you can
write your own scripts. Or there are a number of third party tools
which will collect long-term stats and provide nice reports or
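As a rough sketch of that setup on stock Solaris (the SMF service name and the sys-crontab mechanism are the standard ones, but verify on your release):
{code}
# enable the bundled sar collector (sadc runs from the sys crontab)
svcadm enable system/sar
# ad hoc: six 10-second samples of disk activity
sar -d 10 6
{code}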
[oh, my Lord, I hope it will be a sunny day]
Bushido (武士道)
Best,
z
- Original Message -
From: JZ j...@excelsioritsolutions.com
To: ZFS Discussions zfs-discuss@opensolaris.org
Sent: Sunday, January 18, 2009 4:02 AM
Subject: Re: [zfs-discuss] (no subject)
[sorry, baby don't try!]
On Sat, 17 Jan 2009 23:18:35 PST
Antonius antoni...@gmail.com wrote:
Maybe the other disk has an EFI label?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv105 ++
+ All that's really worth doing is what we do for others (Lewis Carroll)
So you're saying zfs does absolutely no right-sizing? That sounds like a
bad idea all around...
You can use a bigger disk; NOT a smaller disk.
Casper
I see snow outside.
Maybe we don't have early Sun light this morning, but it is about morning now.
In my Chinese view (瑞雪兆丰年 -- "a timely snow promises a bountiful year"), and in the Xmas view, snow is a very good thing to
have on this Sunday morning.
I am happy, the sky did not let me down!
Best,
z
- Original Message -
From: JZ
[this part you may not understand]
In the Three Kingdoms days, Zhou Yu (周瑜) could not believe it when he said west wind and Zhuge Liang (诸葛)
said east wind, but the sky gave an east wind. Zhou Yu was so confused, he died.
Today, I am not confused, with the snow, I will live.
;-)
Best,
z
- Original Message -
From: JZ
If so what should I do to remedy that? just reformat it?
On Sun, Jan 18, 2009 at 9:21 AM, Carson Gaspar car...@taltos.org wrote:
If you write your own using kstat, you can get accurate sub-second
samples. Sadly you'll either have to use the amazingly crappy Sun perl
or write it in C, as Sun hasn't yet managed to release source for the
kstat perl
meh
- Original Message -
From: Antonius antoni...@gmail.com
To: zfs-discuss@opensolaris.org
Sent: Sunday, January 18, 2009 6:54 AM
Subject: Re: [zfs-discuss] replace same sized disk fails with too small
error
If so what should I do to remedy that? just reformat it?
On Sat, Jan 17, 2009 at 9:04 PM, Thomas Garner thomas...@gmail.com wrote:
Are you looking for something like:
kstat -c disk sd:::
Someone can correct me if I'm wrong, but I think the documentation for
the above should be at:
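As a hedged aside, the same selector also does interval sampling straight from the shell; the interval is whole seconds only, which is why sub-second work needs the Perl module or C mentioned earlier:
{code}
# five 1-second samples of all sd disk kstats, parseable output
kstat -p -c disk sd::: 1 5
{code}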
Morning,
For those of you who remember last time, this is a different Solaris,
different disk box and different host, but the epic nature of the fail
is similar.
The RAID box that is the 63T LUN has a hardware fault and has been
crashing, up to now the box and host got restarted and both came up
On Sun, Jan 18, 2009 at 8:02 AM, Tom Bird t...@marmot.org.uk wrote:
Morning,
For those of you who remember last time, this is a different Solaris,
different disk box and different host, but the epic nature of the fail
is similar.
The RAID box that is the 63T LUN has a hardware fault and
On Sun, Jan 18, 2009 at 5:18 AM, casper@sun.com wrote:
So you're saying zfs does absolutely no right-sizing? That sounds like a
bad idea all around...
You can use a bigger disk; NOT a smaller disk.
Casper
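Illustratively, with hypothetical device names, the refusal looks like this:
{code}
# swapping in a disk that is even a few sectors smaller is rejected
zpool replace tank c1t2d0 c1t3d0
cannot replace c1t2d0 with c1t3d0: device is too small
{code}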
Right, which is an absolutely piss poor design decision and why
every major storage vendor right-sizes drives. What happens if I
have an old maxtor drive in my pool whose 500g is just slightly
larger than every other mfg on the market? You know, the one who is
no longer making their
Right, which is an absolutely piss poor design decision and why every major
storage vendor right-sizes drives. What happens if I have an old maxtor
drive in my pool whose 500g is just slightly larger than every other mfg
on the market? You know, the one who is no longer making their own drives
Is it possible to share a folder with cifs without adding a zfs volume?
I also have not found out how to share a folder with zfs, is it possible?
If it's possible, how?
I searched google and this forum but found no answers to my question.
Greets Louis Hoefler
PS.: I hope this was the right
On Sun, 18 Jan 2009, Tim wrote:
Right, which is an absolutely piss poor design decision and why every major
storage vendor right-sizes drives. What happens if I have an old maxtor
drive in my pool whose 500g is just slightly larger than every other mfg
on the market? You know, the one who is
Well if I do fsstat mountpoint on all the filesystems in the ZFS pool, then I
guess my aggregate number for read and write bandwidth should equal the
aggregate numbers for the pool? Yes?
The downside is that fsstat has the same granularity issue as zpool iostat.
What I'd really like is nread
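For reference, a minimal fsstat run of the kind described (mountpoint hypothetical):
{code}
# per-filesystem operation and byte counters, five 1-second samples
fsstat /tank 1 5
{code}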
On Sun, Jan 18, 2009 at 16:38, Louis Hoefler louis.hoef...@struktum.com wrote:
Is it possible to share a folder with cifs without adding a zfs volume?
Try zfs set sharesmb=on mypool.
I also have not found out how to share a folder with zfs, is it possible?
I don't think sharing an individual
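A minimal sketch of the sharesmb route (dataset name hypothetical; enabling the service first is an assumption about your setup):
{code}
svcadm enable -r smb/server       # make sure the in-kernel CIFS server is running
zfs set sharesmb=on mypool/share  # or sharesmb=name=myshare for a nicer resource name
{code}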
On Sun, Jan 18, 2009 at 16:51, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
I appreciate that in these times of financial hardship that you can
not afford a 750GB drive to replace the oversized 500GB drive. Sorry
to hear about your situation.
That's easy to say, but what if there were
On Sun, Jan 18, 2009 at 5:39 PM, Brad bst...@aspirinsoftware.com wrote:
Well if I do fsstat mountpoint on all the filesystems in the ZFS pool, then I
guess my aggregate number for read and write bandwidth should equal the
aggregate numbers for the pool? Yes?
The downside is that fsstat has
On Sun, Jan 18, 2009 at 10:17 AM, casper@sun.com wrote:
Right, which is an absolutely piss poor design decision and why every
major
storage vendor right-sizes drives. What happens if I have an old maxtor
drive in my pool whose 500g is just slightly larger than every other mfg
on the
On Sun, 18 Jan 2009, Will Murnane wrote:
That's easy to say, but what if there were no larger alternative?
Suppose I have a pool composed of those 1.5TB Seagate disks, and
Hitachi puts out some of the same capacity that are actually
slightly smaller. A drive fails in my array, I buy a Hitachi
On Sun, Jan 18, 2009 at 10:16 AM, Adam Leventhal a...@eng.sun.com wrote:
Right, which is an absolutely piss poor design decision and why every major
storage vendor right-sizes drives. What happens if I have an old maxtor
drive in my pool whose 500g is just slightly larger than every other mfg
On Sun, Jan 18, 2009 at 12:19 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sun, 18 Jan 2009, Will Murnane wrote:
That's easy to say, but what if there were no larger alternative?
Suppose I have a pool composed of those 1.5TB Seagate disks, and
Hitachi puts out some of the
Does this all go away when BP-rewrite gets fully resolved/implemented?
Short of the pool being 100% full, it should allow a rebalancing
operation and possible LUN/device-size-shrink to match the new device
that is being inserted?
Thanks,
-- MikeE
-Original Message-
From:
On Sun, Jan 18, 2009 at 18:19, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
What do you propose that OpenSolaris should do about this?
Take drive size, divide by 100, round down to two significant digits.
Floor to a multiple of that size. This method wastes no more than 1%
of the disk
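Worked through in shell (ksh/bash) arithmetic -- the byte count is illustrative, and the two-significant-digit step is hardcoded for this magnitude:
{code}
size=500107862016                       # a nominal "500 GB" drive
unit=$((size / 100))                    # 5001078620
unit=$((unit / 100000000 * 100000000))  # round down to two sig. digits: 5000000000
usable=$((size / unit * unit))          # 500000000000 -- about 0.02% wasted here
echo $usable
{code}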
of the oldpool to the newpool with
{code}
zfs send -R oldpool@20090118-02-postupgrade | zfs recv -vF -d newpool
{code}
Larger datasets went in the normal range of 13-20Mb/s (of course, smaller
datasets and snapshots ranging in a few kilobytes of size took more time to
open-close than actually
On Sun, 18 Jan 2009, Will Murnane wrote:
Most drives are sold with two significant digits in the size: 320 GB,
400 GB, 640GB, 1.0 TB, etc. I don't see this changing any time
particularly soon; unless someone starts selling a 1.25 TB drive or
something, two digits will suffice. Even then,
On Sun, Jan 18, 2009 at 1:30 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sun, 18 Jan 2009, Will Murnane wrote:
Most drives are sold with two significant digits in the size: 320 GB,
400 GB, 640GB, 1.0 TB, etc. I don't see this changing any time
particularly soon; unless
On Sun, Jan 18 at 13:43, Tim wrote:
You look at the size of the drive and you take a set percentage off... If
it's a LUN and it's so far off it still can't be added with the
percentage that works across the board for EVERYTHING ELSE, you change the
size of the LUN at the storage array
Hi Bob, Will, Tim,
I also had some off-list comments on my irrelevant comments.
So I will try to make this post less irrelevant, though my thoughts on this
topic may be off the list discussion line of thoughts, as usual.
From the major storage vendors I know, network storage systems as
On Sun, Jan 18, 2009 at 1:56 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Sun, Jan 18 at 13:43, Tim wrote:
You look at the size of the drive and you take a set percentage off...
If
it's a LUN and it's so far off it still can't be added with the
percentage that works across the
But what is the recommended way to share a directory?
On Sun, Jan 18, 2009 at 1:57 PM, Louis Hoefler
louis.hoef...@struktum.com wrote:
But what is the recommended way to share a directory?
I don't know that there currently is a good way to just share a directory
with the built-in cifs server. I'd imagine your best bet would be to use
SAMBA.
I ran into a bad label causing this once.
Usually the s2 slice is a good bet for your whole disk device, but if it's EFI
labeled, you need to use p0 (somebody correct me if I'm wrong).
I like to zero the first few megs of a drive before doing any of this stuff.
This will destroy
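A hedged sketch of that zeroing step (device name hypothetical, and this does destroy the label, so triple-check the target):
{code}
# overwrite the first 4 MB of the whole-disk device
dd if=/dev/zero of=/dev/rdsk/c1t3d0p0 bs=1024k count=4
{code}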
Peter Tribble wrote:
On Sat, Jan 17, 2009 at 9:04 PM, Thomas Garner thomas...@gmail.com wrote:
Are you looking for something like:
kstat -c disk sd:::
Someone can correct me if I'm wrong, but I think the documentation for
the above should be at:
Yes, I agree, the command interface is more efficient, and riskier, than a GUI.
You will have to be very careful when doing that.
Best,
z
- Original Message -
From: Al Tobey tob...@gmail.com
To: zfs-discuss@opensolaris.org
Sent: Sunday, January 18, 2009 3:09 PM
Subject: Re: [zfs-discuss]
Ok I found the share folder GNOME GUI. It's in coffecup -> Administration -> Shared Folders. But if I add
a folder with this GUI, it does not show up on Windows.
I tried
svcadm restart smb/server
but nothing happened. The gui created a /etc/sfw/smb.conf file, which holds the
folder I added.
I found
Louis Hoefler wrote:
But what is the recommended way to share a directory?
You should be able to use sharemgr directly to just share a directory
and not an entire file system. If you do that you shouldn't set the
sharesmb property, though. Use either the sharesmb property or use
sharemgr
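A minimal sharemgr sketch (group, path, and resource names hypothetical):
{code}
# create an smb share group, then add a plain directory to it
sharemgr create -P smb mygroup
sharemgr add-share -s /export/docs -r docs mygroup
{code}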
Tim t...@tcsac.net wrote:
On Sun, Jan 18, 2009 at 8:02 AM, Tom Bird t...@marmot.org.uk wrote:
Those are supposedly the two inodes that are corrupt. The 0x0 is a bit
scary... you should be able to find out what file(s) they're tied to (if
any) with:
find /content -inum 0
find /content
comment at the bottom...
Tim wrote:
On Sun, Jan 18, 2009 at 1:56 PM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Sun, Jan 18 at 13:43, Tim wrote:
You look at the size of the drive and you take a set percentage
off... If
Tim wrote:
On Sun, Jan 18, 2009 at 8:02 AM, Tom Bird t...@marmot.org.uk wrote:
errors: Permanent errors have been detected in the following files:
content:0x0
content:0x2c898
root@cs4:~# find /content
/content
root@cs4:~#
On Sun, Jan 18, 2009 at 8:25 PM, Richard Elling richard.ell...@sun.com wrote:
Peter Tribble wrote:
See fsstat, which is based upon kstats. One of the things I want to do with
JKstat is correlate filesystem operations with underlying disk operations.
The
hard part is actually connecting a
On Sun, Jan 18, 2009 at 2:43 PM, Richard Elling richard.ell...@sun.com wrote:
comment at the bottom...
DIY. Personally, I'd be more upset if ZFS reserved any sectors
for some potential swap I might want to do later, but may never
need to do. If you want to reserve some space for swappage,
On Sun, Jan 18, 2009 at 2:37 PM, Louis Hoefler
louis.hoef...@struktum.com wrote:
Ok I found the share folder GNOME GUI. It's in coffecup -> Administration -> Shared Folders. But if I add a folder with this
GUI, it does not show up on Windows.
I tried
svcadm restart smb/server
but nothing
Ok I found a solution.
Thanks for your help.
svcadm enable samba wins swat
modified /etc/sfw/smb.conf:
[global]
server string = Unix-Windows share
security = SHARE
wins server = 192.168.1.2, 192.168.1.1
[apache22]
comment = Apache 2.2 share
path =
Obama just made a good speech.
I hope you were watching TV...
Best,
z
On Sun, 18 Jan 2009 16:17:52 -0500
JZ j...@excelsioritsolutions.com wrote:
Obama just made a good speech.
I hope you were watching TV...
JZ, Once more you are using a technical and highly focused
mailing list to ramble on about things which are totally
irrelevant to the charter of the list.
Yes, James, I will desist, if you insist.
Not a big deal.
Best,
z
- Original Message -
From: James C. McPherson james.mcpher...@sun.com
To: JZ j...@excelsioritsolutions.com
Cc: ZFS Discussions zfs-discuss@opensolaris.org
Sent: Sunday, January 18, 2009 4:31 PM
Subject: Re:
/cheers!
Sent from my BlackBerry Bold®
http://www.blackberrybold.com
-Original Message-
From: James C. McPherson james.mcpher...@sun.com
Date: Mon, 19 Jan 2009 07:31:30
To: JZ j...@excelsioritsolutions.com
Cc: ZFS Discussions zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] (no
Tim wrote:
On Sun, Jan 18, 2009 at 2:43 PM, Richard Elling richard.ell...@sun.com wrote:
comment at the bottom...
DIY. Personally, I'd be more upset if ZFS reserved any sectors
for some potential swap I might want to do later, but may never
On Sun, Jan 18, 2009 at 3:39 PM, Richard Elling richard.ell...@sun.com wrote:
Tim wrote:
It is naive to think that different storage array vendors
would care about people trying to use another array vendor's
disks in their arrays. In fact, you should get a flat,
impersonal, not supported
http://forums.freebsd.org/archive/index.php/t-1197.html
Is a fairly good writeup on this subject... The short and sweet: One
disk in a non-mirrored pool dies and is replaced with a new disk...
It looks like zpool scrub is able to recover from an error of this
magnitude, but getting it to
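The scrub-and-check cycle described would look roughly like this (pool name hypothetical):
{code}
zpool scrub tank
zpool status -v tank   # watch for "scrub repaired ..." and per-device error counts
{code}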
A few questions on data replication:
Assuming I've created a pool named zfspool containing two unmirrored
disks and I create:
zfs create zfspool/test2
zfs set copies=2 zfspool/test2
Will data copied in there be guaranteed to be replicated on both
devices? Or does ZFS just try its best to
On Sun, Jan 18, 2009 at 4:36 PM, Timothy Renner timothy.ren...@gmail.com wrote:
A few questions on data replication:
Assuming I've created a pool named zfspool containing two unmirrored
disks and I create:
zfs create zfspool/test2
zfs set copies=2 zfspool/test2
Will data copied in there be
Hey, Tom -
Correct me if I'm wrong here, but it seems you are not allowing ZFS any
sort of redundancy to manage.
I'm not sure how you can class it a ZFS fail when the Disk subsystem has
failed...
Or - did I miss something? :)
Nathan.
Tom Bird wrote:
Morning,
For those of you who
Richard Elling wrote:
...
Carson Gaspar wrote:
Except sar sucks. It's scheduled via cron, and is too coarse grained for
many purposes (10 minute long samples average out almost everything
interesting).
There is a world of difference between the tools needed to perform
debugging and
On Sun, Jan 18 at 15:00, Tim wrote:
If you're so concerned with the storage *lying* or *hiding* space, I
assume you're leading the charge at Sun to properly advertise drive sizes,
right? Because the 1TB drive I can buy from Sun today is in no way,
shape, or form able to store 1TB of
Hi Folks,
Sorry if this is again not relevant to the list discussion.
Maybe I am too sensitive, maybe I am just crazy. But things I see around me
today just make me wonder if I should do one more post. [and again, I will
leave the conclusion to the mail server]
I live in the U.S. My family
Thank you Lord, for the sunny day today!
Peace! (安)
z 方天化戟 (the fangtian halberd of the Three Kingdoms)
[read the Three Kingdoms story and you will understand]
- Original Message -
From: JZ j...@excelsioritsolutions.com
To: Toby Thain t...@telegraphics.com.au; zfs-discuss@opensolaris.org
Sent: Sunday, January 18, 2009 9:20 PM
On 18-Jan-09, at 6:12 PM, Nathan Kroenert wrote:
Hey, Tom -
Correct me if I'm wrong here, but it seems you are not allowing ZFS
any
sort of redundancy to manage.
Which is particularly catastrophic when one's 'content' is organized
as a monolithic file, as it is here - unless, of
Carson Gaspar wrote:
Richard Elling wrote:
...
Carson Gaspar wrote:
Except sar sucks. It's scheduled via cron, and is too coarse grained for
many purposes (10 minute long samples average out almost everything
interesting).
There is a world of difference between the tools needed
Timothy Renner wrote:
http://forums.freebsd.org/archive/index.php/t-1197.html
Is a fairly good writeup on this subject... The short and sweet: One
disk in a non-mirrored pool dies and is replaced with a new disk...
Actually, the demo shows how you can corrupt a portion of the data
and
Tim wrote:
On Sun, Jan 18, 2009 at 4:36 PM, Timothy Renner timothy.ren...@gmail.com wrote:
A few questions on data replication:
Assuming I've created a pool named zfspool containing two unmirrored
disks and I create:
zfs create zfspool/test2
On Sun, Jan 18, 2009 at 10:12 PM, Richard Elling richard.ell...@sun.com wrote:
This is not quite correct. ZFS will attempt to place the copies on
different vdevs. On the same vdev, it will try to place it somewhere
which is not contiguous (spatial diversity). I'm curious where you
got the
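One hedged way to see where the copies actually landed (zdb output varies by release, and the object number here is hypothetical):
{code}
# DVA[0]/DVA[1] entries show <vdev:offset:asize> for each copy of a block
zdb -ddddd zfspool/test2 8 | grep DVA
{code}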
and that's why I hated blogs!
do you know what to read that is not misleading, in a sea of blogs?
and you are wondering why western folks don't like us?!
言语之中引出的生死恩怨太多了 [too many life-and-death grudges arise from mere words]
Best,
z
- Original Message -
From: Tim
To: Richard Elling
Cc: zfs-discuss@opensolaris.org
Beloved Tim,
You challenged me a while ago, as a friend.
I did what you asked me to do, in the honor of my father.
Best,
z
- Original Message -
From: JZ
To: Tim
Cc: zfs-discuss@opensolaris.org
Sent: Sunday, January 18, 2009 11:58 PM
Subject: Re: [zfs-discuss]
On Sun, 18 Jan 2009, Tim wrote:
Honestly, I believe this list... when other people have asked if they can
use the copies= to avoid mirroring everything. I can't say I've saved any
of the threads because they didn't seem of any particular importance to me
at the time.
The extra copies help
Bob Friesenhahn wrote:
On Sun, 18 Jan 2009, Tim wrote:
Honestly, I believe this list... when other people have asked if they can
use the copies= to avoid mirroring everything. I can't say I've saved any
of the threads because they didn't seem of any particular importance to me
at the time.