Re: [zfs-discuss] ZFS pool - label missing on invalid

2010-06-18 Thread Frank Cusack

On 6/18/10 11:25 PM -0700 Cott Lang wrote:

By detach, do you mean that you ran 'zpool detach'?


Yes.


'zpool detach' clears the information from the disk that ZFS needs to
reimport it.  If you have a late enough version of OpenSolaris you
should instead run 'zpool split'.  Otherwise, shut down as normal
(i.e., don't tell ZFS you are about to do anything different) and then
just boot with the one disk, now in a degraded state but otherwise OK.
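
A rough sketch of the split approach, with hypothetical pool/disk names
(and assuming your zpool version already supports split):

   zpool split tank tank2        # peels off one side of each mirror, labels intact
   zpool import tank2            # the split-off copy imports as its own pool

whereas a plain 'zpool detach tank c0t1d0' leaves the removed disk without
an importable label.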

Like you, I learned this the hard way!

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool - label missing on invalid

2010-06-18 Thread Cott Lang
> By detach, do you mean that you ran 'zpool detach'?

Yes.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Garrett D'Amore
Btw, I filed a bug (bugs.os.o) for the ZFS FMRI scheme, and included a
suggested fix in the description.  I don't have a CR number for it yet.

It's possible that this should go through the request-sponsor process.
Once I have the CR number I'll be happy to follow up with it.  (It would
be nice if Nexenta were to get credit for the fix.)

- Garrett

On Fri, 2010-06-18 at 09:26 -0700, Garrett D'Amore wrote:
> On Fri, 2010-06-18 at 09:07 -0400, Eric Schrock wrote:
> > On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote:
> > 
> > > On 18/06/2010 00:18, Garrett D'Amore wrote:
> > >> On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
> > >>   
> > >>> On the SS7000 series, you get an alert that the enclosure has been 
> > >>> detached from the system.  The fru-monitor code (generalization of the 
> > >>> disk-monitor) that generates this sysevent has not yet been pushed to 
> > >>> ON.
> > >>> 
> > >>> 
> > >>> 
> > > [...]
> > >> I guess the fact that the SS7000 code isn't kept up to date in ON means
> > >> that we may wind up having to do our own thing here... its a bit
> > >> unfortunate, but ok.
> > > 
> > > Eric - is it a business decision that the discussed code is not in the ON 
> > > or do you actually intend to get it integrated into ON? Because if you do 
> > > then I think that getting Nexenta guys expanding on it would be better 
> > > for everyone instead of having them reinventing the wheel...
> > 
> > Limited bandwidth.
> 
> Is there anything I can do to help?  In my opinion, it's better if we can
> use solutions in the underlying ON code that everyone agrees with and
> that are available to everyone.
> 
> At the end of the day though, we'll do whatever is required to make sure
> that the problems that our customers face are solved -- at least in our
> distro.  We'd rather have shared common code for this, but if we have to
> implement our own bits, we will do so.
> 
>   -- Garrett
> 
> > 
> > - Eric
> > 
> > --
> > Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
> > 
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> > 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Arne Jansen

Sandon Van Ness wrote:

Sounds to me like something is wrong as on my 20 disk backup machine
with 20 1TB disks on a single raidz2 vdev I get the following with DD on
sequential reads/writes:

writes:

r...@opensolaris: 11:36 AM :/data# dd bs=1M count=100000 if=/dev/zero
of=./100gb.bin
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 233.257 s, 450 MB/s

reads:

r...@opensolaris: 11:44 AM :/data# dd bs=1M if=./100gb.bin of=/dev/null
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 131.051 s, 800 MB/s

zpool iostat  10

gives me about the same values that dd gives me. Maybe you have a bad
drive somewhere? Which Areca controller are you using? Some of them
let you pull the SMART info off the drives from a Linux boot CD.
Could be a bad drive somewhere.



Didn't he say he already gets 400MB/s from dd, but zpool iostat only
shows a few MB/s? What does zpool iostat show, the value before or after
dedup?
Curtis, to see if your physical setup is OK you should turn off dedup and
measure again. Otherwise you only measure the power of your machine to
dedup /dev/zero.
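
A minimal sketch of that check (pool/dataset names are just examples -
Curtis mentioned zpool1 earlier):

   zfs set dedup=off zpool1
   dd if=/dev/zero of=/zpool1/nodedup.bin bs=1M count=100000

and compare the zpool iostat numbers against the deduped run.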

--Arne
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool - label missing on invalid

2010-06-18 Thread Frank Cusack

On 6/18/10 9:46 PM -0700 Cott Lang wrote:

I split a mirror to reconfigure and recopy it. I detached one drive,
reconfigured it ... all after unplugging the remaining pool drive during
a shutdown to verify no accidents could happen.


By detach, do you mean that you ran 'zpool detach'?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS pool - label missing on invalid

2010-06-18 Thread Cott Lang
I split a mirror to reconfigure and recopy it. I detached one drive, 
reconfigured it ... all after unplugging the remaining pool drive during a 
shutdown to verify no accidents could happen.  

Later, I tried to import the original pool from the drive (now plugged back 
in), only to be greeted with FAULTED - label is missing or invalid.

zdb -l shows all four labels on the disk... although it shows both original 
drives in it, which is odd.

Is there any way to recover from this scenario?

Various permutations of import -Ff do nothing.

The data is all replaceable, but requires an annoying lengthy process to 
recreate. I'd like to regain my confidence in ZFS more than anything else. :)

Thanks!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Sandon Van Ness
Sounds to me like something is wrong as on my 20 disk backup machine
with 20 1TB disks on a single raidz2 vdev I get the following with DD on
sequential reads/writes:

writes:

r...@opensolaris: 11:36 AM :/data# dd bs=1M count=100000 if=/dev/zero
of=./100gb.bin
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 233.257 s, 450 MB/s

reads:

r...@opensolaris: 11:44 AM :/data# dd bs=1M if=./100gb.bin of=/dev/null
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 131.051 s, 800 MB/s

zpool iostat  10

gives me about the same values that dd gives me. Maybe you have a bad
drive somewhere? Which Areca controller are you using? Some of them
let you pull the SMART info off the drives from a Linux boot CD.
Could be a bad drive somewhere.
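
For what it's worth, a sketch of the SMART check from a Linux live CD with
smartmontools (the SCSI generic device and disk index are hypothetical and
depend on the controller):

   smartctl -a -d areca,1 /dev/sg0     # first disk behind the Areca
   smartctl -a -d areca,2 /dev/sg0     # second disk, and so on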

On 06/18/2010 02:33 AM, Curtis E. Combs Jr. wrote:
> Yea. I did bs sizes from 8 to 512k with counts from 256 on up. I just
> added zeros to the count, to try to test performance for larger files.
> I didn't notice any difference at all, either with the dtrace script
> or zpool iostat. Thanks for your help, btw.
>
> On Fri, Jun 18, 2010 at 5:30 AM, Pasi Kärkkäinen  wrote:
>   
>> On Fri, Jun 18, 2010 at 02:21:15AM -0700, artiepen wrote:
>> 
>>> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, 
>>> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up 
>>> to 40 very rarely.
>>>
>>> As far as random vs. sequential. Correct me if I'm wrong, but if I used dd 
>>> to make files from /dev/zero, wouldn't that be sequential? I measure with 
>>> zpool iostat 2 in another ssh session while making files of various sizes.
>>>
>>>   
>> Yep, dd will generate sequential IO.
>> Did you specify blocksize for dd? (bs=1024k for example).
>>
>> As a default dd does 4 kB IOs.. which won't be very fast.
>>
>> -- Pasi
>>
>> 
>>> This is a test system. I'm wondering, now, if I should just reconfigure 
>>> with maybe 7 disks and add another spare. Seems to be the general consensus 
>>> that bigger raid pools = worse performance. I thought the opposite was 
>>> true...
>>> --
>>> This message posted from opensolaris.org
>>> ___
>>> zfs-discuss mailing list
>>> zfs-discuss@opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>>   
>> 
>
>
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Thomas Burgess
On Fri, Jun 18, 2010 at 6:34 AM, Curtis E. Combs Jr. wrote:

> Oh! Yes. dedup. not compression, but dedup, yes.





Dedup may be your problem... it requires some heavy RAM and/or a decent L2ARC,
from what I've been reading.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Freddie Cash
On Fri, Jun 18, 2010 at 1:52 AM, Curtis E. Combs Jr.  wrote:
> I am new to zfs, so I am still learning. I'm using zpool iostat to
> measure performance. Would you say that smaller raidz2 sets would give
> me more reliable and better performance? I'm willing to give it a
> shot...

A ZFS pool is made up of vdevs.  ZFS stripes the vdevs together to
improve performance, similar in concept to how RAID0 works.  The more
vdevs in the pool, the better the performance will be.

A vdev is made up one or more disks, depending on the type of vdev and
the redundancy level that you want (cache, log, mirror, raidz1,
raidz2, raidz3, etc).

Due to the algorithms used for raidz, the smaller your individual
raidz vdevs (the fewer disks), the better the performance.  IOW, a 6-disk
raidz2 vdev will perform better than an 11-disk raidz2 vdev.

So, you want your individual vdevs to be made up of as few physical
disks as possible (for your size and redundancy requirements), and
your pool to be made up of as many vdevs as possible.
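
As an illustration only (device names are made up), a 24-disk chassis could
be laid out as four 6-disk raidz2 vdevs striped into one pool:

   zpool create tank \
     raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
     raidz2 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
     raidz2 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
     raidz2 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

That gives 16 data disks of capacity instead of 18 for two 11-disk raidz2
vdevs, but twice as many vdevs and therefore roughly twice the random IOPS.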

-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Marion Hakanson
doug.lin...@merchantlink.com said:
> Apparently, before Outlook there WERE no meetings, because it's clearly
> impossible to schedule one without it. 

Don't tell my boss, but I use Outlook for the scheduling, and fetchmail
plus procmail to download email out of Exchange and into my favorite
email client.  Thankfully, Exchange listens to incoming SMTP when I need
to send messages.


> And please don't mail me with your favorite OSS solution.  I've tried them
> all.  None of them integrate with Exchange *smoothly* and *cleanly*.  They're
> all workarounds and kludges that are as annoying in the end as Outlook. 

Hmm, what I'm doing doesn't _integrate_ with Exchange;  It just bypasses
it for the email portion of my needs.  Non-OSS:  Mac OS X 10.6 claims to
integrate with Exchange, although I have not yet tried it myself.

Regards,

Marion


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Linder, Doug
> People still use Outhouse?  Really?!  Next you'll be suggesting that
> some people still put up with Internet Exploder...  ;-)

Those of us who are literally forced to use it aren't too happy.  Nor am I 
happy with the giant stupid signature that gets tacked on that you all have to 
trim when you reply.  But I can't do anything about it.

The most infuriating thing is I could use Thunderbird or something actually 
good if it weren't for scheduling meetings.  Meeting scheduling.  It's the one 
killer feature that's been keeping Microsoft alive for decades.  When I say I 
want to use Thunderbird or something, the following conversation always 
happens, verbatim:

"Boss, I want to use Thunderbird."
"But... but.. how will you schedule meetings?"
"I never schedule meetings, you always do."
"But... but... how will you GET meetings?"
"Send me an email."
"But... but.. they won't show up on your Outlook Calendar!"
"I'm capable of adding them to my non-Outlook schedule manually."
"How will people see what times you're free?"
"They could just ask me 'When are you free?'"
"No.  We can't have you being a nonconformist.  You have to be able to send and 
accept meetings and have them put on your calendar.  You must use Outlook.  
Therefore we must use Exchange, forever and ever, world without end."


Apparently, before Outlook there WERE no meetings, because it's clearly 
impossible to schedule one without it.

And please don't mail me with your favorite OSS solution.  I've tried them all. 
 None of them integrate with Exchange *smoothly* and *cleanly*.  They're all 
workarounds and kludges that are as annoying in the end as Outlook.

> Agreed 100%.  I've even set up my mail server to reject HTML emails.

I have no problem with a little *basic* HTML.  I just wish I could limit it to 
a few simple tags - <b>, <i>, <u>, etc. - only the most basic formatting.  
No images.  No tables or any of that crap.  I do see how it's nice sometimes to 
be able to use a little emphasis.  It's nice to be able to add some personality 
by using a different font (if you don't go crazy with them).  But unfortunately 
it ends up getting horribly munged and misused.  Maybe there should be a filter 
that strips out all HTML except the basic tags.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Cindy Swearingen

If the device driver generates or fabricates device IDs, then moving
devices around is probably okay.

I recall the Areca controllers are problematic when it comes to moving
devices under pools. Maybe someone with first-hand experience can
comment.

Consider exporting the pool first, moving the devices around, and
importing the pool.
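
A minimal sketch of that sequence (pool name is hypothetical):

   zpool export tank
   # ... recable / move the disks ...
   zpool import tank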

Moving devices under a pool is okay for testing, but in general I don't
recommend moving devices around under pools.

Thanks,

Cindy

On 06/18/10 14:29, artiepen wrote:

Thank you, all of you, for the super helpful responses, this is probably one of 
the most helpful forums I've been on. I've been working with ZFS on some 
SunFires for a little while now, in prod, and the testing environment with oSol 
is going really well. I love it. Nothing even comes close.

If you have time, I have one more question. We're going to try it now with 2 
12-port Arecas. When I pop the controllers in and reconnect the drives, does 
ZFS have the intelligence to adjust if I use the same hard drives? Of course, it 
doesn't matter; I can just destroy the pool and recreate it. I'm just curious if 
that'd work.

Thanks again!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Fredrich Maney
On Fri, Jun 18, 2010 at 3:52 PM, Linder, Doug
 wrote:
>> Another thing that Gmail does that I find infuriating, is that it
>> mucks with the formatting. For some reason it, and to be fair, Outlook
>> as well, seem to think that they know how a message needs to be
>> formatted better than I do.
>
> Try doing inline quoting/response with Outlook, where you quote one section, 
> reply, quote again, etc.  It's impossible.  You can't split up the quoted 
> section to add new text - no way, no how.  Very infuriating.  It's like 
> Outlook was *designed* to force people to top post.  It reminds me of the old 
> joke:
>
> "Because people read from top to bottom."
> "Why is top-posting stupid?"
>
> #include ms-rant.std
> --

Agreed. Outlook only becomes somewhat usable when you force it to
convert all email to plain-text and then it still screws with line
length and removes line breaks that it determines to be "extra".

On a side note, the person who dreamed up HTML and RTF email needs to
be drawn and quartered. If it can't be expressed clearly in
plain-text, then you should send it as an attachment.

fpsm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread artiepen
Thank you, all of you, for the super helpful responses, this is probably one of 
the most helpful forums I've been on. I've been working with ZFS on some 
SunFires for a little while now, in prod, and the testing environment with oSol 
is going really well. I love it. Nothing even comes close.

If you have time, I have one more question. We're going to try it now with 2 
12-port Arecas. When I pop the controllers in and reconnect the drives, does 
ZFS have the intelligence to adjust if I use the same hard drives? Of course, it 
doesn't matter; I can just destroy the pool and recreate it. I'm just curious if 
that'd work.

Thanks again!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Cindy Swearingen

Hi Curtis,

You might review the ZFS best practices info to help you determine
the best pool configuration for your environment:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

If you're considering using dedup, particularly on a 24T pool, then
review the current known issues, described here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup

Thanks,

Cindy

On 06/18/10 02:52, Curtis E. Combs Jr. wrote:

I am new to zfs, so I am still learning. I'm using zpool iostat to
measure performance. Would you say that smaller raidz2 sets would give
me more reliable and better performance? I'm willing to give it a
shot...

On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen  wrote:

On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:

Well, I've searched my brains out and I can't seem to find a reason for this.

I'm getting bad to medium performance with my new test storage device. I've got 
24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the Areca 
raid controller, the driver being arcmsr. Quad core AMD with 16 gig of RAM 
OpenSolaris upgraded to snv_134.

The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec to 
40MB/sec with zpool iostat. On average, though it's more like 5MB/sec if I 
watch while I'm actively doing some r/w. I know that I should be getting better 
performance.


How are you measuring the performance?
Do you understand raidz2 with that big amount of disks in it will give you 
really poor random write performance?

-- Pasi


I'm new to OpenSolaris, but I've been using *nix systems for a long time, so if 
there's any more information that I can provide, please let me know. Am I doing 
anything wrong with this configuration? Thanks in advance.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Geoff Nordli
>-Original Message-
>From: Linder, Doug
>Sent: Friday, June 18, 2010 12:53 PM
>
>Try doing inline quoting/response with Outlook, where you quote one section,
>reply, quote again, etc.  It's impossible.  You can't split up the quoted section to
>add new text - no way, no how.  Very infuriating.  It's like Outlook was
>*designed* to force people to top post.
>
Hi Doug.

I use Outlook too, and you are right, it is a major PITA.  

I was hoping that OL2010 was going to solve the problem, but it doesn't :(

The only way I can get it to sort of work is by "editing" the HTML message,
and saving it as plain text, then replying to that.  If you try to reply to
an HTML formatted message, it is awful. 

I also manually clean up some of the header information below "Original
Message".  

Have a great weekend!

Geoff 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Linder, Doug
> Another thing that Gmail does that I find infuriating, is that it
> mucks with the formatting. For some reason it, and to be fair, Outlook
> as well, seem to think that they know how a message needs to be
> formatted better than I do.

Try doing inline quoting/response with Outlook, where you quote one section, 
reply, quote again, etc.  It's impossible.  You can't split up the quoted 
section to add new text - no way, no how.  Very infuriating.  It's like Outlook 
was *designed* to force people to top post.  It reminds me of the old joke:

"Because people read from top to bottom."
"Why is top-posting stupid?"

#include ms-rant.std

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
Oh! Yes. dedup. not compression, but dedup, yes.

On Fri, Jun 18, 2010 at 6:30 AM, Arne Jansen  wrote:
> Curtis E. Combs Jr. wrote:
>> Um...I started 2 commands in 2 separate ssh sessions:
>> in ssh session one:
>> iostat -xn 1 > stats
>> in ssh session two:
>> mkfile 10g testfile
>>
>> when the mkfile was finished i did the dd command...
>> on the same zpool1 and zfs filesystem..that's it, really
>
> No, this doesn't match. Did you enable compression or dedup?
>
>
>



-- 
Curtis E. Combs Jr.
System Administrator Associate
University of Georgia
High Performance Computing Center
ceco...@uga.edu
Office: (706) 542-0186
Cell: (706) 206-7289
Gmail Chat: psynoph...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
Um...I started 2 commands in 2 separate ssh sessions:
in ssh session one:
iostat -xn 1 > stats
in ssh session two:
mkfile 10g testfile

When the mkfile was finished I did the dd command...
on the same zpool1 and zfs filesystem... that's it, really.

On Fri, Jun 18, 2010 at 6:06 AM, Arne Jansen  wrote:
> Curtis E. Combs Jr. wrote:
>> Sure. And hey, maybe I just need some context to know what's "normal"
>> IO for the zpool. It just...feels...slow, sometimes. It's hard to
>> explain. I attached a log of iostat -xn 1 while doing mkfile 10g
>> testfile on the zpool, as well as your dd with the bs set really high.
>> When I Ctl-C'ed the dd it said 460M/sec... like I said, maybe I just
>> need some context...
>>
>
> These iostats don't match the creation of any large files. What are
> you doing there? Looks more like 512 byte random writes... Are you
> generating the load locally or remote?
>
>>
>> On Fri, Jun 18, 2010 at 5:36 AM, Arne Jansen  wrote:
>>> artiepen wrote:
 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 
 2, and 6 almost 10x as many times as I see 40MB/sec. It really only bumps 
 up to 40 very rarely.

 As far as random vs. sequential. Correct me if I'm wrong, but if I used dd 
 to make files from /dev/zero, wouldn't that be sequential? I measure with 
 zpool iostat 2 in another ssh session while making files of various sizes.

 This is a test system. I'm wondering, now, if I should just reconfigure 
 with maybe 7 disks and add another spare. Seems to be the general 
 consensus that bigger raid pools = worse performance. I thought the 
 opposite was true...
>>> A quick test on a system with 21 1TB SATA-drives in a single
>>> RAIDZ2 group show a performance of about 400MB/s with a
>>> single dd, blocksize=1048576. Creating a 10G-file with mkfile
>>> takes 25 seconds also.
>>> So I'd say basically there is nothing wrong with the zpool
>>> configuration. Can you paste some "iostat -xn 1" output while
>>> your test is running?
>>>
>>> --Arne
>>>
>>
>>
>>
>
>



-- 
Curtis E. Combs Jr.
System Administrator Associate
University of Georgia
High Performance Computing Center
ceco...@uga.edu
Office: (706) 542-0186
Cell: (706) 206-7289
Gmail Chat: psynoph...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
Sure. And hey, maybe I just need some context to know what's "normal"
IO for the zpool. It just...feels...slow, sometimes. It's hard to
explain. I attached a log of iostat -xn 1 while doing mkfile 10g
testfile on the zpool, as well as your dd with the bs set really high.
When I Ctl-C'ed the dd it said 460M/sec... like I said, maybe I just
need some context...


On Fri, Jun 18, 2010 at 5:36 AM, Arne Jansen  wrote:
> artiepen wrote:
>> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, 
>> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to 
>> 40 very rarely.
>>
>> As far as random vs. sequential. Correct me if I'm wrong, but if I used dd 
>> to make files from /dev/zero, wouldn't that be sequential? I measure with 
>> zpool iostat 2 in another ssh session while making files of various sizes.
>>
>> This is a test system. I'm wondering, now, if I should just reconfigure with 
>> maybe 7 disks and add another spare. Seems to be the general consensus that 
>> bigger raid pools = worse performance. I thought the opposite was true...
>
> A quick test on a system with 21 1TB SATA-drives in a single
> RAIDZ2 group show a performance of about 400MB/s with a
> single dd, blocksize=1048576. Creating a 10G-file with mkfile
> takes 25 seconds also.
> So I'd say basically there is nothing wrong with the zpool
> configuration. Can you paste some "iostat -xn 1" output while
> your test is running?
>
> --Arne
>



-- 
Curtis E. Combs Jr.
System Administrator Associate
University of Georgia
High Performance Computing Center
ceco...@uga.edu
Office: (706) 542-0186
Cell: (706) 206-7289
Gmail Chat: psynoph...@gmail.com


tests.gz
Description: GNU Zip compressed data
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
Yea. I did bs sizes from 8 to 512k with counts from 256 on up. I just
added zeros to the count, to try to test performance for larger files.
I didn't notice any difference at all, either with the dtrace script
or zpool iostat. Thanks for your help, btw.

On Fri, Jun 18, 2010 at 5:30 AM, Pasi Kärkkäinen  wrote:
> On Fri, Jun 18, 2010 at 02:21:15AM -0700, artiepen wrote:
>> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, 
>> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to 
>> 40 very rarely.
>>
>> As far as random vs. sequential. Correct me if I'm wrong, but if I used dd 
>> to make files from /dev/zero, wouldn't that be sequential? I measure with 
>> zpool iostat 2 in another ssh session while making files of various sizes.
>>
>
> Yep, dd will generate sequential IO.
> Did you specify blocksize for dd? (bs=1024k for example).
>
> As a default dd does 4 kB IOs.. which won't be very fast.
>
> -- Pasi
>
>> This is a test system. I'm wondering, now, if I should just reconfigure with 
>> maybe 7 disks and add another spare. Seems to be the general consensus that 
>> bigger raid pools = worse performance. I thought the opposite was true...
>> --
>> This message posted from opensolaris.org
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Curtis E. Combs Jr.
System Administrator Associate
University of Georgia
High Performance Computing Center
ceco...@uga.edu
Office: (706) 542-0186
Cell: (706) 206-7289
Gmail Chat: psynoph...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
I also have a dtrace script that I found that supposedly gives a more
accurate reading. Usually, though, its output is very close to what
zpool iostat says. Keep in mind this is a test environment, there's no
production here, so I can make and destroy the pools as much as I want
to play around with. I'm also still learning about dtrace.

On Fri, Jun 18, 2010 at 4:52 AM, Curtis E. Combs Jr.  wrote:
> I am new to zfs, so I am still learning. I'm using zpool iostat to
> measure performance. Would you say that smaller raidz2 sets would give
> me more reliable and better performance? I'm willing to give it a
> shot...
>
> On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen  wrote:
>> On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
>>> Well, I've searched my brains out and I can't seem to find a reason for 
>>> this.
>>>
>>> I'm getting bad to medium performance with my new test storage device. I've 
>>> got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the 
>>> Areca raid controller, the driver being arcmsr. Quad core AMD with 16 gig 
>>> of RAM OpenSolaris upgraded to snv_134.
>>>
>>> The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec 
>>> to 40MB/sec with zpool iostat. On average, though it's more like 5MB/sec if 
>>> I watch while I'm actively doing some r/w. I know that I should be getting 
>>> better performance.
>>>
>>
>> How are you measuring the performance?
>> Do you understand raidz2 with that big amount of disks in it will give you 
>> really poor random write performance?
>>
>> -- Pasi
>>
>>> I'm new to OpenSolaris, but I've been using *nix systems for a long time, 
>>> so if there's any more information that I can provide, please let me know. 
>>> Am I doing anything wrong with this configuration? Thanks in advance.
>>> --
>>> This message posted from opensolaris.org
>>> ___
>>> zfs-discuss mailing list
>>> zfs-discuss@opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
>
>
> --
> Curtis E. Combs Jr.
> System Administrator Associate
> University of Georgia
> High Performance Computing Center
> ceco...@uga.edu
> Office: (706) 542-0186
> Cell: (706) 206-7289
> Gmail Chat: psynoph...@gmail.com
>



-- 
Curtis E. Combs Jr.
System Administrator Associate
University of Georgia
High Performance Computing Center
ceco...@uga.edu
Office: (706) 542-0186
Cell: (706) 206-7289
Gmail Chat: psynoph...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
I am new to zfs, so I am still learning. I'm using zpool iostat to
measure performance. Would you say that smaller raidz2 sets would give
me more reliable and better performance? I'm willing to give it a
shot...

On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen  wrote:
> On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
>> Well, I've searched my brains out and I can't seem to find a reason for this.
>>
>> I'm getting bad to medium performance with my new test storage device. I've 
>> got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the 
>> Areca raid controller, the driver being arcmsr. Quad core AMD with 16 gig of 
>> RAM OpenSolaris upgraded to snv_134.
>>
>> The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec to 
>> 40MB/sec with zpool iostat. On average, though it's more like 5MB/sec if I 
>> watch while I'm actively doing some r/w. I know that I should be getting 
>> better performance.
>>
>
> How are you measuring the performance?
> Do you understand raidz2 with that big amount of disks in it will give you 
> really poor random write performance?
>
> -- Pasi
>
>> I'm new to OpenSolaris, but I've been using *nix systems for a long time, so 
>> if there's any more information that I can provide, please let me know. Am I 
>> doing anything wrong with this configuration? Thanks in advance.
>> --
>> This message posted from opensolaris.org
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Curtis E. Combs Jr.
System Administrator Associate
University of Georgia
High Performance Computing Center
ceco...@uga.edu
Office: (706) 542-0186
Cell: (706) 206-7289
Gmail Chat: psynoph...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Question : Sun Storage 7000 dedup ratio per share

2010-06-18 Thread Robert Milkowski

On 18/06/2010 14:47, ??? wrote:

Dear All :

   Under the Sun Storage 7000 system, can we see the per-share ratio after enabling
the dedup function? We would like to drill down and see each share's dedup ratio.

   On the Web GUI, only the dedup ratio for the entire storage pool is shown.


   
Since dedup works across all datasets with dedup enabled in a pool, you 
can't really get a dedup ratio per share.
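
(For example, assuming a pool named tank, the pool-wide figure is what

   zpool get dedupratio tank

reports; there is no per-share equivalent of that property.)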


--
Robert Milkowski
http://milek.blogspot.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] raid-z - not even iops distribution

2010-06-18 Thread Robert Milkowski

Hi,


zpool create test raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 \
  raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 \
  raidz c0t2d0 c1t2d0 c2t2d0 c3t2d0 \
  raidz c0t3d0 c1t3d0 c2t3d0 c3t3d0 \
  [...]
  raidz c0t10d0 c1t10d0 c2t10d0 c3t10d0

zfs set atime=off test
zfs set recordsize=16k test
(I know...)

Now if I create one large file with filebench and simulate a 
random-read workload with 1 or more threads, the disks on the c2 and c3 
controllers get about 80% more reads. This happens both on 111b 
and snv_134. I would rather expect all of them to get about the same 
number of IOPS.
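
(For illustration, the per-controller spread can be eyeballed with something
like

   iostat -xn 5 | egrep 'c[23]t'

and the same filter for c0/c1, or simply by comparing the per-device lines in
plain iostat -xn output.)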


Any idea why?


--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-18 Thread Giovanni
Thanks guys - I will take a look at those clustered file systems.

My goal is not to stick with Windows - I would like to have a storage pool for 
XenServer (free) so that I can have guests, using a storage server 
(OpenSolaris - ZFS) as the iSCSI storage pool.

Any suggestions for how the added redundancy or failover could be implemented? 
Also I am not sure whether "Shared Storage" on XenServer has the same problem 
that you mentioned NTFS has, where only one host can control it at a time.
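
A minimal sketch of exporting a zvol over COMSTAR iSCSI for use as a XenServer
SR (pool name, size, and the LU GUID are placeholders; the GUID comes from the
sbdadm output):

   zfs create -V 200g tank/xen-sr
   svcadm enable stmf
   svcadm enable -r svc:/network/iscsi/target:default
   sbdadm create-lu /dev/zvol/rdsk/tank/xen-sr
   stmfadm add-view <GUID-from-sbdadm>
   itadm create-target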
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Garrett D'Amore
On Fri, 2010-06-18 at 09:07 -0400, Eric Schrock wrote:
> On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote:
> 
> > On 18/06/2010 00:18, Garrett D'Amore wrote:
> >> On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
> >>   
> >>> On the SS7000 series, you get an alert that the enclosure has been 
> >>> detached from the system.  The fru-monitor code (generalization of the 
> >>> disk-monitor) that generates this sysevent has not yet been pushed to ON.
> >>> 
> >>> 
> >>> 
> > [...]
> >> I guess the fact that the SS7000 code isn't kept up to date in ON means
> >> that we may wind up having to do our own thing here... its a bit
> >> unfortunate, but ok.
> > 
> > Eric - is it a business decision that the discussed code is not in the ON 
> > or do you actually intend to get it integrated into ON? Because if you do 
> > then I think that getting Nexenta guys expanding on it would be better for 
> > everyone instead of having them reinventing the wheel...
> 
> Limited bandwidth.

Is there anything I can do to help?  In my opinion, it's better if we can
use solutions in the underlying ON code that everyone agrees with and
that are available to everyone.

At the end of the day though, we'll do whatever is required to make sure
that the problems that our customers face are solved -- at least in our
distro.  We'd rather have shared common code for this, but if we have to
implement our own bits, we will do so.

-- Garrett

> 
> - Eric
> 
> --
> Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Cindy Swearingen

P.S.

User/group quotas are available in the Solaris 10 release,
starting in the Solaris 10 10/09 release:

http://docs.sun.com/app/docs/doc/819-5461/gazvb?l=en&a=view

Thanks,

Cindy

On 06/18/10 07:09, David Magda wrote:

On Fri, June 18, 2010 08:29, Sendil wrote:


I can create 400+ file systems, one for each user,
but will this affect my system performance during system boot up?
Is this recommended, or is an alternative available for this issue?


You can create a dataset for each user, and then set a per-dataset quota
for each one:


quota=size | none

Limits the amount of space a dataset and its descendents can
consume. This property enforces a hard limit on the amount of
space used. This includes all space consumed by descendents,
including file systems and snapshots. Setting a quota on a
descendent of a dataset that already has a quota does not
override the ancestor's quota, but rather imposes an additional
limit.


Or, on newer revisions of ZFS, you can have one big data set and put all
your users in there, and then set per-user quotas:


userquota@user=size | none

Limits the amount of space consumed by the specified
user. Similar to the refquota property, the userquota space
calculation does not include space that is used by descendent
datasets, such as snapshots and clones. User space consumption
is identified by the userspace@user property.


There's also a "groupquota". See zfs(1M) for details:

   http://docs.sun.com/app/docs/doc/819-2240/zfs-1m

Availability of "userquota" depends on the version of (Open)Solaris that
you have; don't recall when it was introduced.

As for which one is better, that depends: per-user adds flexibility, but a
bit of overhead. Best to test things out for yourself to see if it works
in your environment.

You could always split things up into groups of (say) 50. A few jobs ago,
I was in an environment where we had a /home/students1/ and
/home/students2/, along with a separate faculty/ (using Solaris and UFS).
This had more to do with IOps than anything else.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Mike Gerdts
On Fri, Jun 18, 2010 at 8:09 AM, David Magda  wrote:
> You could always split things up into groups of (say) 50. A few jobs ago,
> I was in an environment where we have a /home/students1/ and
> /home/students2/, along with a separate faculty/ (using Solaris and UFS).
> This had more to do with IOps than anything else.

A decade or so ago I managed similar environments and had (I
think) 6 file systems handling about 5000 students.  Each file system
held about 1/6 of the students.  Challenges I found in this were:

- Students needed to work on projects together.  The typical way to do
this was for them to request a group, then create a group writable
directory in one of their home directories.  If all students in the
group had home directories on the same file system, there was nothing
special to consider.  If they were on different file systems, then at
least one would need a non-zero quota (that is, not 0 blocks
soft, 1 block hard) on the file system where the group directory
resides.
- Despite your best efforts things will get imbalanced.  If you are
tight on space, this means that you will need to migrate users.  This
will become apparent only at the times of the semester where even
per-user outages are most inconvenient (i.e. at 6 and 13 weeks when
big projects tend to be due).

It's probably a good idea to consider these types of situations in the
transition plan, or at least determine they don't apply.  I was
working in a college of engineering where group projects were common
and CAD, EDA, and simulation tools could generate big files very
quickly.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Question : Sun Storage 7000 dedup ratio per share

2010-06-18 Thread ???
Dear All :

  Under the Sun Storage 7000 system, can we see the per-share ratio after enabling
the dedup function? We would like to drill down and see each share's dedup ratio.

  On the Web GUI, only the dedup ratio for the entire storage pool is shown.

Thanks a lot,
-- Rex
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Eric Schrock

On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote:

> On 18/06/2010 00:18, Garrett D'Amore wrote:
>> On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
>>   
>>> On the SS7000 series, you get an alert that the enclosure has been detached 
>>> from the system.  The fru-monitor code (generalization of the disk-monitor) 
>>> that generates this sysevent has not yet been pushed to ON.
>>> 
>>> 
>>> 
> [...]
>> I guess the fact that the SS7000 code isn't kept up to date in ON means
>> that we may wind up having to do our own thing here... its a bit
>> unfortunate, but ok.
> 
> Eric - is it a business decision that the discussed code is not in the ON or 
> do you actually intend to get it integrated into ON? Because if you do then I 
> think that getting Nexenta guys expanding on it would be better for everyone 
> instead of having them reinventing the wheel...

Limited bandwidth.

- Eric

--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Arne Jansen
David Magda wrote:
> On Fri, June 18, 2010 08:29, Sendil wrote:
> 
>> I can create 400+ file systems, one for each user,
>> but will this affect my system performance during system boot up?
>> Is this recommended, or is an alternative available for this issue?
> 
> You can create a dataset for each user, and then set a per-dataset quota
> for each one:
> 
>> quota=size | none
>>

as a side note, you do not need to worry about creating 400 filesystems.
A few thousand are no problem.

--Arne
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread David Magda
On Fri, June 18, 2010 08:29, Sendil wrote:

> I can create 400+ file systems, one for each user,
> but will this affect my system performance during system boot up?
> Is this recommended, or is an alternative available for this issue?

You can create a dataset for each user, and then set a per-dataset quota
for each one:

> quota=size | none
>
> Limits the amount of space a dataset and its descendents can
> consume. This property enforces a hard limit on the amount of
> space used. This includes all space consumed by descendents,
> including file systems and snapshots. Setting a quota on a
> descendent of a dataset that already has a quota does not
> override the ancestor's quota, but rather imposes an additional
> limit.

Or, on newer revisions of ZFS, you can have one big data set and put all
your users in there, and then set per-user quotas:

> userquota@user=size | none
>
> Limits the amount of space consumed by the specified
> user. Similar to the refquota property, the userquota space
> calculation does not include space that is used by descendent
> datasets, such as snapshots and clones. User space consumption
> is identified by the userspace@user property.

There's also a "groupquota". See zfs(1M) for details:

   http://docs.sun.com/app/docs/doc/819-2240/zfs-1m

Availability of "userquota" depends on the version of (Open)Solaris that
you have; don't recall when it was introduced.

As for which one is better, that depends: per-user adds flexibility, but a
bit of overhead. Best to test things out for yourself to see if it works
in your environment.
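
A rough sketch of the two approaches (pool, dataset, and user names are
hypothetical):

   # one dataset per user, capped with a dataset quota
   zfs create tank/home/alice
   zfs set quota=500m tank/home/alice

   # or: one shared dataset, capped per user
   zfs create tank/home
   zfs set userquota@alice=500m tank/home
   zfs get userused@alice tank/home    # check that user's consumption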

You could always split things up into groups of (say) 50. A few jobs ago,
I was in an environment where we had a /home/students1/ and
/home/students2/, along with a separate faculty/ (using Solaris and UFS).
This had more to do with IOps than anything else.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] WD caviar/mpt issues

2010-06-18 Thread Jeff Bacon
I know that this has been well-discussed already, but it's been a few months - 
WD caviars with mpt/mpt_sas generating lots of retryable read errors, spitting 
out lots of beloved " Log info 3108 received for target" messages, and just 
generally not working right. 

(SM 836EL1 and 836TQ chassis - though I have several variations on theme 
depending on date of purchase: 836EL2s, 846s and 847s - sol10u8, 1.26/1.29/1.30 
LSI firmware on LSI retail 3801 and 3081E controllers. Not that it works any 
better on the brace of 9211-8is I also tried these drives on.)  

Before signing up for the list, I "accidentally" bought a wad of caviar black 
2TBs. No, they are new enough to not respond to WDTLER.EXE, and yes, they are 
generally unhappy with my boxen. I have them "working" now, running 
direct-attach off 3 3081E-Rs with breakout cables in the SC836TQ (passthru 
backplane) chassis, set up as one pool of 2 6+2 raidz2 vdevs (16 drives total), 
but they still toss the occasional error and performance is, well, abysmal - 
zpool scrub runs at about a third the speed of the 1TB cudas that they share 
the machine with, in terms of iostat reported ops/sec or bytes/sec. They don't 
want to work in an expander chassis at all - spin up the drives and connect 
them and they'll run great for a while, then after about 12 hours they start 
throwing errors. (Cycling power on the enclosure does seem to reset them to run 
for another 12 hours, but...)

I've caved in and bought a brace of replacement cuda XTs, and I am currently 
going to resign these drives to other lesser purposes (attached to si3132s and 
ICH10 in a box to be used to store backups, running Windoze). It's kind of a 
shame, because their single-drive performance is quite good - I've been doing 
single-drive tests in another chassis against cudas and constellations, and 
they seem quite a bit faster except on random-seek. 

Have I missed any changes/updates in the situation?

Thanks,
-bacon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] lsiutil for mpt_sas

2010-06-18 Thread Jeff Bacon
Is there a version of lsiutil that works for the LSI2008 controllers? I have a 
mix of both, and lsiutil is nifty, but not as nifty if it only works on half my 
controllers. :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Monitoring filessytem access

2010-06-18 Thread Andreas Grüninger
Here is a dtrace script based of one of the examples for the nfs provider.
Especially useful when you use NFS for ESX or other hypervisors.

Andreas

#!/usr/sbin/dtrace -s

#pragma D option quiet

inline int TOP_FILES = 50;

dtrace:::BEGIN
{
   printf("Tracing... Hit Ctrl-C to end.\n");
   startscript = timestamp;
}

nfsv3:::op-read-start,
nfsv3:::op-write-start
{
   start[args[1]->noi_xid] = timestamp;
   size[args[1]->noi_xid] = args[2]->count;
}

nfsv3:::op-read-done,
nfsv3:::op-write-done
/start[args[1]->noi_xid] != 0/
{
  this->elapsed = timestamp - start[args[1]->noi_xid];
  this->size = size[args[1]->noi_xid];
  @rw[probename == "op-read-done" ? "read" : "write"] = 
quantize(this->elapsed / 1000);
  @host[args[0]->ci_remote] = sum(this->elapsed);
  @file[args[1]->noi_curpath] = sum(this->elapsed);
  @rwsc[probename == "op-read-done" ? "read" : "write"] = count();
  @rws[probename == "op-read-done" ? "read" : "write"] = 
quantize(this->size);
/*   @rwsl[probename == "op-read-done" ? "read" : "write"] = 
lquantize(this->size,4096,8256,64);
 */  @hosts[args[0]->ci_remote] = sum(this->size);
  @files[args[1]->noi_curpath] = sum(this->size);
  this->size = 0;
  size[args[1]->noi_xid] = 0;
  start[args[1]->noi_xid] = 0;
}

dtrace:::END
{
   this->seconds = (timestamp - startscript)/1000000000;
   printf("\nNFSv3 read/write top %d files (total us):\n", TOP_FILES);
   normalize(@file, 1000);
   trunc(@file, TOP_FILES);
   printa(@file);

   printf("NFSv3 read/write distributions (us):\n");
   printa(@rw);

   printf("\nNFSv3 read/write top %d files (total MByte):\n", TOP_FILES);
   normalize(@files, 1024*1024);
   trunc(@files, TOP_FILES);
   printa(@files);

   printf("\nNFSv3 read/write by host (total ns):\n");
   printa(@host);

   printf("\nNFSv3 read/write by host (total s):\n");
   normalize(@host, 1000000000);
   printa(@host);

   printf("\nNFSv3 read/write by host (total Byte):\n");
   printa(@hosts);

   printf("\nNFSv3 read/write by host (total kByte):\n");
   normalize(@hosts,1024);
   printa(@hosts);
   denormalize(@hosts);

   printf("\nNFSv3 read/write by host (total kByte/s):\n");
   normalize(@hosts,this->seconds*1024);
   printa(@hosts);

   printf("NFSv3 read/write distributions (Byte):\n");
   printa(@rws);

/*printf("NFSv3 read/write distributions (Byte):\n");
   printa(@rwsl);
 */
   printf("NFSv3 read/write counts:\n");
   printa(@rwsc);

   printf("\nScript running for %20d seconds ",this->seconds);
}
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Sendil
Hi
Currently I have 400+ users with a quota set to a 500MB limit. The file 
system currently uses the Veritas file system (VxFS).

I am planning to migrate all these home directories to a new server with ZFS. 
How can I migrate the quotas?

I can create 400+ file systems, one for each user,
but will this affect my system performance during system boot up?
Is this recommended, or is an alternative available for this issue?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Joerg Schilling
artiepen  wrote:

> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, 
> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to 
> 40 very rarely.

I get read/write speeds of approx. 630 MB/s into ZFS on
a SunFire X4540.

It seems that you misconfigured the pool.

You need to make sure that each RAID stripe is made of disks
that are all on separate controllers that can do DMA independently.
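
For illustration only (hypothetical device names, one disk per controller in
each stripe):

   zpool create tank \
     raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
     raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0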

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Arne Jansen
Curtis E. Combs Jr. wrote:
> Um...I started 2 commands in 2 separate ssh sessions:
> in ssh session one:
> iostat -xn 1 > stats
> in ssh session two:
> mkfile 10g testfile
> 
> when the mkfile was finished i did the dd command...
> on the same zpool1 and zfs filesystem..that's it, really

No, this doesn't match. Did you enable compression or dedup?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Arne Jansen
Curtis E. Combs Jr. wrote:
> Sure. And hey, maybe I just need some context to know what's "normal"
> IO for the zpool. It just...feels...slow, sometimes. It's hard to
> explain. I attached a log of iostat -xn 1 while doing mkfile 10g
> testfile on the zpool, as well as your dd with the bs set really high.
> When I Ctl-C'ed the dd it said 460M/sec... like I said, maybe I just
> need some context...
> 

These iostats don't match the creation of any large files. What are
you doing there? Looks more like 512 byte random writes... Are you
generating the load locally or remote?

> 
> On Fri, Jun 18, 2010 at 5:36 AM, Arne Jansen  wrote:
>> artiepen wrote:
>>> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, 
>>> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up 
>>> to 40 very rarely.
>>>
>>> As far as random vs. sequential. Correct me if I'm wrong, but if I used dd 
>>> to make files from /dev/zero, wouldn't that be sequential? I measure with 
>>> zpool iostat 2 in another ssh session while making files of various sizes.
>>>
>>> This is a test system. I'm wondering, now, if I should just reconfigure 
>>> with maybe 7 disks and add another spare. Seems to be the general consensus 
>>> that bigger raid pools = worse performance. I thought the opposite was 
>>> true...
>> A quick test on a system with 21 1TB SATA-drives in a single
>> RAIDZ2 group show a performance of about 400MB/s with a
>> single dd, blocksize=1048576. Creating a 10G-file with mkfile
>> takes 25 seconds also.
>> So I'd say basically there is nothing wrong with the zpool
>> configuration. Can you paste some "iostat -xn 1" output while
>> your test is running?
>>
>> --Arne
>>
> 
> 
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Arne Jansen
artiepen wrote:
> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, 
> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to 
> 40 very rarely.
> 
> As far as random vs. sequential. Correct me if I'm wrong, but if I used dd to 
> make files from /dev/zero, wouldn't that be sequential? I measure with zpool 
> iostat 2 in another ssh session while making files of various sizes.
> 
> This is a test system. I'm wondering, now, if I should just reconfigure with 
> maybe 7 disks and add another spare. Seems to be the general consensus that 
> bigger raid pools = worse performance. I thought the opposite was true...

A quick test on a system with 21 1TB SATA-drives in a single
RAIDZ2 group show a performance of about 400MB/s with a
single dd, blocksize=1048576. Creating a 10G-file with mkfile
takes 25 seconds also.
So I'd say basically there is nothing wrong with the zpool
configuration. Can you paste some "iostat -xn 1" output while
your test is running?
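
For reference, a sketch of that kind of test (the /zpool1 paths are
placeholders; mkfile and ptime are standard OpenSolaris commands):

  # time a 10 GB mkfile and an equivalent large-block dd on the pool under test
  ptime mkfile 10g /zpool1/mkfile.test
  ptime dd if=/dev/zero of=/zpool1/dd.test bs=1048576 count=10240
  # in a second ssh session, capture per-device statistics while the writes run
  iostat -xn 1 > /tmp/iostat.log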

--Arne
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 02:21:15AM -0700, artiepen wrote:
> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, 
> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to 
> 40 very rarely.
> 
> As far as random vs. sequential. Correct me if I'm wrong, but if I used dd to 
> make files from /dev/zero, wouldn't that be sequential? I measure with zpool 
> iostat 2 in another ssh session while making files of various sizes.
> 

Yep, dd will generate sequential IO.
Did you specify a block size for dd (bs=1024k, for example)?

By default dd does 512-byte IOs... which won't be very fast.
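
As a rough illustration of the difference (a sketch only; /zpool1 is a
placeholder for your pool's mount point):

  # 1 GB written with dd's default 512-byte blocks -- expect this to crawl
  dd if=/dev/zero of=/zpool1/smallbs.bin count=2097152
  # 1 GB written with 1 MB blocks, the same kind of test Arne ran with bs=1048576
  dd if=/dev/zero of=/zpool1/largebs.bin bs=1024k count=1024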

-- Pasi

> This is a test system. I'm wondering, now, if I should just reconfigure with 
> maybe 7 disks and add another spare. Seems to be the general consensus that 
> bigger raid pools = worse performance. I thought the opposite was true...
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Ian Collins

On 06/18/10 09:21 PM, artiepen wrote:

This is a test system. I'm wondering, now, if I should just reconfigure with 
maybe 7 disks and add another spare. Seems to be the general consensus that 
bigger raid pools = worse performance. I thought the opposite was true...
   
No, wider vdevs gives poor performance, not big pools.  3x7 drive raidz 
will give you better performance than 2x11 drives.
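
For example, a 3x7 raidz2 layout plus a hot spare and the mirrored SSD log
could be built roughly like this (a sketch only; the cXtYdZ names are
placeholders for whatever 'format' reports on your controller):

  zpool create zpool1 \
    raidz2 c1t0d0  c1t1d0  c1t2d0  c1t3d0  c1t4d0  c1t5d0  c1t6d0  \
    raidz2 c1t7d0  c1t8d0  c1t9d0  c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
    raidz2 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 \
    spare  c1t21d0 \
    log mirror c2t0d0 c2t1d0
  # confirm three raidz2 top-level vdevs, the hot spare and the log mirror
  zpool status zpool1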


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread artiepen
Yes, and I apologize for the basic nature of these questions. Like I said, I'm 
pretty wet behind the ears with zfs. The MB/sec metric comes from dd, not zpool 
iostat. zpool iostat usually gives me units of k. I think I'll try with smaller 
raid sets and come back to the thread.
Thanks, all
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 05:15:44AM -0400, Thomas Burgess wrote:
>On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen <pa...@iki.fi> wrote:
> 
>  On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
>  > Well, I've searched my brains out and I can't seem to find a reason
>  for this.
>  >
>  > I'm getting bad to medium performance with my new test storage device.
>  I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm
>  using the Areca raid controller, the driver being arcmsr. Quad core AMD
>  with 16 gig of RAM OpenSolaris upgraded to snv_134.
>  >
>  > The zpool has 2 11-disk raidz2's and I'm getting anywhere between
>  1MB/sec to 40MB/sec with zpool iostat. On average, though it's more like
>  5MB/sec if I watch while I'm actively doing some r/w. I know that I
>  should be getting better performance.
>  >
> 
>  How are you measuring the performance?
>  Do you understand raidz2 with that big amount of disks in it will give
>  you really poor random write performance?
>  -- Pasi
> 
>i have a media server with 2 raidz2 vdevs 10 drives wide myself without a
>ZIL (but with a 64 gb l2arc)
>I can write to it about 400 MB/s over the network, and scrubs show 600
>MB/s but it really depends on the type of i/o you have... random i/o
>across 2 vdevs will be REALLY slow (as slow as the slowest 2 drives in
>your pool basically)
>40 MB/s might be right if it's random... though i'd still expect to see
>more.
> 

A 7200 RPM SATA disk can do around 120 IOPS max (7200/60 = 120), so if you're
doing 4 kB random IO you end up getting 4*120 = 480 kB/sec of throughput max
from a single disk (in the worst case).

40 MB/sec of random IO throughput using 4 kB IOs would be around 10240 IOPS...
you'd need 85x SATA 7200 RPM disks in raid-0 (striping) for that :)
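
The same arithmetic as a quick shell sketch (rough, order-of-magnitude numbers):

  RPM=7200; IOPS=$(( RPM / 60 ))              # ~120 random IOPS for one 7200 RPM disk
  IO_KB=4;  PER_DISK=$(( IOPS * IO_KB ))      # ~480 kB/s per disk at 4 kB random IO
  TARGET=$(( 40 * 1024 ))                     # 40 MB/s expressed in kB/s
  echo "~$(( TARGET / PER_DISK )) striped disks needed"   # prints ~85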

-- Pasi

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread artiepen
40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, and 
6 almost 10x as many times as I see 40MB/sec. It really only bumps up to 40 
very rarely.

As far as random vs. sequential. Correct me if I'm wrong, but if I used dd to 
make files from /dev/zero, wouldn't that be sequential? I measure with zpool 
iostat 2 in another ssh session while making files of various sizes.

This is a test system. I'm wondering, now, if I should just reconfigure with 
maybe 7 disks and add another spare. Seems to be the general consensus that 
bigger raid pools = worse performance. I thought the opposite was true...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 04:52:02AM -0400, Curtis E. Combs Jr. wrote:
> I am new to zfs, so I am still learning. I'm using zpool iostat to
> measure performance. Would you say that smaller raidz2 sets would give
> me more reliable and better performance? I'm willing to give it a
> shot...
> 

Yes, more and smaller raid sets will give you better performance,
since zfs distributes (stripes) data across all of them.

What's your IO pattern? Random writes? Sequential writes?

Basically, if you have 2x 11-disk raidz2 sets you'll be limited to around the
performance of 2 disks in the worst case of small random IO
(the parity needs to be written, and that limits the performance of a
raidz/z2/z3 vdev to the performance of a single disk).

This is not really zfs-specific at all; it's the same with any raid
implementation.
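
Once the pool has several top-level vdevs you can watch how the load spreads
across them (zpool1 is a placeholder pool name):

  # per-vdev and per-disk read/write throughput, refreshed every 5 seconds
  zpool iostat -v zpool1 5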

-- Pasi

> On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen  wrote:
> > On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> >> Well, I've searched my brains out and I can't seem to find a reason for 
> >> this.
> >>
> >> I'm getting bad to medium performance with my new test storage device. 
> >> I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm 
> >> using the Areca raid controller, the driver being arcmsr. Quad core AMD 
> >> with 16 gig of RAM OpenSolaris upgraded to snv_134.
> >>
> >> The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec 
> >> to 40MB/sec with zpool iostat. On average, though it's more like 5MB/sec 
> >> if I watch while I'm actively doing some r/w. I know that I should be 
> >> getting better performance.
> >>
> >
> > How are you measuring the performance?
> > Do you understand raidz2 with that big amount of disks in it will give you 
> > really poor random write performance?
> >
> > -- Pasi
> >
> >> I'm new to OpenSolaris, but I've been using *nix systems for a long time, 
> >> so if there's any more information that I can provide, please let me know. 
> >> Am I doing anything wrong with this configuration? Thanks in advance.
> >> --
> >> This message posted from opensolaris.org
> >> ___
> >> zfs-discuss mailing list
> >> zfs-discuss@opensolaris.org
> >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >
> 
> 
> 
> -- 
> Curtis E. Combs Jr.
> System Administrator Associate
> University of Georgia
> High Performance Computing Center
> ceco...@uga.edu
> Office: (706) 542-0186
> Cell: (706) 206-7289
> Gmail Chat: psynoph...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Thomas Burgess
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen  wrote:

> On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> > Well, I've searched my brains out and I can't seem to find a reason for
> this.
> >
> > I'm getting bad to medium performance with my new test storage device.
> I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using
> the Areca raid controller, the driver being arcmsr. Quad core AMD with 16
> gig of RAM OpenSolaris upgraded to snv_134.
> >
> > The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec
> to 40MB/sec with zpool iostat. On average, though it's more like 5MB/sec if
> I watch while I'm actively doing some r/w. I know that I should be getting
> better performance.
> >
>
> How are you measuring the performance?
> Do you understand raidz2 with that big amount of disks in it will give you
> really poor random write performance?
>
> -- Pasi
>
>
i have a media server with 2 raidz2 vdevs 10 drives wide myself without a
ZIL (but with a 64 gb l2arc)

I can write to it about 400 MB/s over the network, and scrubs show 600 MB/s
but it really depends on the type of i/o you haverandom i/o across 2
vdevs will be REALLY slow (as slow as the slowest 2 drives in your pool
basically)

40 MB/s might be right if it's randomthough i'd still expect to see
more.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Robert Milkowski

On 18/06/2010 00:18, Garrett D'Amore wrote:

On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
   


On the SS7000 series, you get an alert that the enclosure has been detached 
from the system.  The fru-monitor code (generalization of the disk-monitor) 
that generates this sysevent has not yet been pushed to ON.


 

[...]

I guess the fact that the SS7000 code isn't kept up to date in ON means
that we may wind up having to do our own thing here... its a bit
unfortunate, but ok.


Eric - is it a business decision that the discussed code is not in ON, or do
you actually intend to get it integrated into ON? Because if you do, then I
think that having the Nexenta guys expand on it would be better for everyone
instead of having them reinvent the wheel...




--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-18 Thread Pasi Kärkkäinen
On Thu, Jun 17, 2010 at 09:58:25AM -0700, Ray Van Dolson wrote:
> On Thu, Jun 17, 2010 at 09:54:59AM -0700, Ragnar Sundblad wrote:
> > 
> > On 17 jun 2010, at 18.17, Richard Jahnel wrote:
> > 
> > > The EX specs page does list the supercap
> > > 
> > > The pro specs page does not.
> > 
> > They do for both on the Specifications tab on the web page:
> > 
> > But not in the product brief PDFs.
> > 
> > It doesn't say how many rewrites you can do either.
> > 
> > An Intel X25-E 32G has, according to the product manual, a write
> > endurance of 1 petabyte. At full write speed, 250 MB/s, that is equal
> > to 4,000,000 seconds, or about 46 days. (On the other hand you have a
> > five-year warranty, and I have been told that you can get them
> > replaced if they wear out.)
> 
> Do the drives keep any sort of internal counter so you get an idea of
> how much of the rated drive lifetime you've chewed through?
> 

Heh... the marketing stuff on the 'front' page says:
"Vertex 2 EX has an ultra-reliable 10 million hour MTBF and comes backed by a 
three-year warranty. "

And then on the specifications:
"MTBF: 2 million hours"

:)
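
For what it's worth, the endurance figure quoted above works out like this
(a back-of-the-envelope sketch using the 1 PB and 250 MB/s numbers from the
quoted product manual):

  ENDURANCE_MB=1000000000            # 1 PB rated write endurance, in MB
  RATE_MB=250                        # 250 MB/s sustained sequential write
  SECS=$(( ENDURANCE_MB / RATE_MB )) # 4,000,000 seconds
  echo "$(( SECS / 86400 )) days at full write speed"   # prints 46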

-- Pasi

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> Well, I've searched my brains out and I can't seem to find a reason for this.
> 
> I'm getting bad to medium performance with my new test storage device. I've 
> got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the 
> Areca raid controller, the driver being arcmsr. Quad core AMD with 16 gig of 
> RAM OpenSolaris upgraded to snv_134.
> 
> The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec to 
> 40MB/sec with zpool iostat. On average, though it's more like 5MB/sec if I 
> watch while I'm actively doing some r/w. I know that I should be getting 
> better performance.
> 

How are you measuring the performance? 
Do you understand raidz2 with that big amount of disks in it will give you 
really poor random write performance? 

-- Pasi

> I'm new to OpenSolaris, but I've been using *nix systems for a long time, so 
> if there's any more information that I can provide, please let me know. Am I 
> doing anything wrong with this configuration? Thanks in advance.
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread artiepen
Well, I've searched my brains out and I can't seem to find a reason for this.

I'm getting bad to medium performance with my new test storage device. I've got 
24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the Areca 
raid controller, the driver being arcmsr. Quad core AMD with 16 gig of RAM 
OpenSolaris upgraded to snv_134.

The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec to 
40MB/sec with zpool iostat. On average, though it's more like 5MB/sec if I 
watch while I'm actively doing some r/w. I know that I should be getting better 
performance.

I'm new to OpenSolaris, but I've been using *nix systems for a long time, so if 
there's any more information that I can provide, please let me know. Am I doing 
anything wrong with this configuration? Thanks in advance.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-18 Thread Arve Paalsrud
And... if you're using iSCSI on top of ZFS and want shared access, take
a look at GlusterFS, which you can use in front of multiple ZFS nodes as
the access point.

GlusterFS: http://www.gluster.org/

-Arve


> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Giovanni
> Sent: 18. juni 2010 07:44
> To: zfs-discuss@opensolaris.org
> Subject: [zfs-discuss] COMSTAR iSCSI and two Windows computers
> 
> Hi guys
> 
> I wanted to ask how I could set up an iSCSI device to be shared by 2
> computers concurrently; by that I mean sharing files like it was an NFS
> share, but using iSCSI instead.
> 
> I tried it and set up iSCSI on both computers and was able to see my files
> (I had formatted it NTFS beforehand). From my laptop I uploaded a 400MB
> video file to the root directory, and from my desktop I browsed the same
> directory and the file was not there??
> 
> Thanks
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-18 Thread Arve Paalsrud
NTFS is not a clustered file system and thus can't handle multiple clients
accessing the same volume concurrently. If you're Windows-based you could use
MelioFS, which handles metadata updates and locking between the accessing nodes
so that they can share an NTFS disk - even over iSCSI.

MelioFS: http://www.sanbolic.com/melioFS.htm

-Arve

> 
> Hi guys
> 
> I wanted to ask how I could set up an iSCSI device to be shared by 2
> computers concurrently; by that I mean sharing files like it was an NFS
> share, but using iSCSI instead.
> 
> I tried it and set up iSCSI on both computers and was able to see my files
> (I had formatted it NTFS beforehand). From my laptop I uploaded a 400MB
> video file to the root directory, and from my desktop I browsed the same
> directory and the file was not there??
> 
> Thanks
> --

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss