On Thu, Jun 17, 2010 at 10:44 PM, Giovanni giof...@gmail.com wrote:
Hi guys
I wanted to ask how I could set up an iSCSI device to be shared by 2 computers
concurrently; by that I mean sharing files as with an NFS share, but using
iSCSI instead.
I tried to set up iSCSI on both computers and
NTFS is not a clustered file system and thus can't handle multiple clients
accessing the data. You could use MelioFS if you're Windows-based, which
handles metadata updates and locking between the accessing nodes so that an
NTFS disk can be shared between them - even over iSCSI.
MelioFS:
And if you're using iSCSI on top of ZFS and want shared access, take
a look at GlusterFS, which you can run in front of multiple ZFS nodes as
the access point.
GlusterFS: http://www.gluster.org/
-Arve
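For the ZFS side of that, a minimal sketch of exporting a zvol as an iSCSI LUN
with COMSTAR on OpenSolaris could look like the following (the pool and volume
names are made up for the example); a clustered file system still has to run on
top of it for truly concurrent access:

# enable the STMF framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
# create a 100G zvol as backing store and register it as a logical unit
zfs create -V 100g tank/iscsivol
sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol
# expose the LU to all initiators (use the GUID printed by create-lu),
# then create a target for the initiators to log in to
stmfadm add-view <GUID-from-create-lu>
itadm create-target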
-Original Message-
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got
24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the Areca
raid controller, the driver being arcmsr. Quad core AMD
On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've
got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the
On Thu, Jun 17, 2010 at 09:58:25AM -0700, Ray Van Dolson wrote:
On Thu, Jun 17, 2010 at 09:54:59AM -0700, Ragnar Sundblad wrote:
On 17 jun 2010, at 18.17, Richard Jahnel wrote:
The EX specs page does list the supercap
The pro specs page does not.
They do for both on the
On 18/06/2010 00:18, Garrett D'Amore wrote:
On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
On the SS7000 series, you get an alert that the enclosure has been detached
from the system. The fru-monitor code (generalization of the disk-monitor)
that generates this sysevent has not
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen pa...@iki.fi wrote:
On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
Well, I've searched my brains out and I can't seem to find a reason for
this.
I'm getting bad to medium performance with my new test storage device.
I've got 24
On Fri, Jun 18, 2010 at 04:52:02AM -0400, Curtis E. Combs Jr. wrote:
I am new to zfs, so I am still learning. I'm using zpool iostat to
measure performance. Would you say that smaller raidz2 sets would give
me more reliable and better performance? I'm willing to give it a
shot...
Yes, more
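To make "smaller sets" concrete, the 24 drives could for example be laid out as
four 6-disk raidz2 vdevs instead of one or two very wide ones (device names
below are placeholders; the log mirror stands in for the two SSDs):

# four 6-disk raidz2 vdevs in one pool; ZFS stripes across vdevs,
# so random IOPS scale with the number of vdevs
zpool create zpool1 \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0 \
    log mirror c3t0d0 c3t1d0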
40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, and
6 almost 10x as many times as I see 40MB/sec. It really only bumps up to 40
very rarely.
As far as random vs. sequential. Correct me if I'm wrong, but if I used dd to
make files from /dev/zero, wouldn't that be
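For reference, the kind of dd run being discussed would be along these lines
(paths and sizes are only examples); /dev/zero produces a purely sequential
stream, and all-zero data also compresses and dedups trivially, so it is not a
very realistic workload:

# sequential write: 10GB of zeros in 1MB blocks
dd if=/dev/zero of=/zpool1/testfile bs=1M count=10240
# sequential read of the same file back
dd if=/zpool1/testfile of=/dev/null bs=1M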
On Fri, Jun 18, 2010 at 05:15:44AM -0400, Thomas Burgess wrote:
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen [1]pa...@iki.fi wrote:
On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
Well, I've searched my brains out and I can't seem to find a reason
for this.
Yes, and I apologize for the basic nature of these questions. Like I said, I'm
pretty wet behind the ears with zfs. The MB/sec metric comes from dd, not zpool
iostat; zpool iostat usually gives me units of k. I think I'll try with smaller
raid sets and come back to the thread.
Thanks, all
On 06/18/10 09:21 PM, artiepen wrote:
This is a test system. I'm wondering, now, if I should just reconfigure with
maybe 7 disks and add another spare. Seems to be the general consensus that
bigger raid pools = worse performance. I thought the opposite was true...
No, wider vdevs give
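As a rough rule of thumb (the numbers are only illustrative), each raidz/raidz2
vdev delivers roughly the random IOPS of a single member disk, because every
disk in the vdev takes part in each I/O. One 24-disk raidz2 vdev therefore
gives about 1 x ~100 random IOPS, while four 6-disk raidz2 vdevs give about
4 x ~100 = ~400 random IOPS from the same drives. Sequential bandwidth, on the
other hand, scales with the number of data disks in either layout.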
On Fri, Jun 18, 2010 at 02:21:15AM -0700, artiepen wrote:
40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2,
and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to
40 very rarely.
As far as random vs. sequential. Correct me if I'm wrong, but
Curtis E. Combs Jr. wrote:
Sure. And hey, maybe I just need some context to know what's normal
IO for the zpool. It just...feels...slow, sometimes. It's hard to
explain. I attached a log of iostat -xn 1 while doing mkfile 10g
testfile on the zpool, as well as your dd with the bs set really
Curtis E. Combs Jr. wrote:
Um...I started 2 commands in 2 separate ssh sessions:
in ssh session one:
iostat -xn 1 stats
in ssh session two:
mkfile 10g testfile
when the mkfile was finished I did the dd command...
on the same zpool1 and zfs filesystem... that's it, really
No, this doesn't
artiepen ceco...@uga.edu wrote:
40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2,
and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to
40 very rarely.
I get read/write speeds of approx. 630 MB/s into ZFS on
a SunFire X4540.
It seems that you
Hi
Currently I have 400+ users with a quota set to a 500MB limit. The file
system currently in use is the Veritas file system.
I am planning to migrate all these home directories to a new server with ZFS. How
can I migrate the quotas?
I can create 400+ file systems, one for each user,
but will this affect
Here is a dtrace script based on one of the examples for the nfs provider.
Especially useful when you use NFS for ESX or other hypervisors.
Andreas
#!/usr/sbin/dtrace -s
#pragma D option quiet
inline int TOP_FILES = 50;
dtrace:::BEGIN
{
printf("Tracing... Hit Ctrl-C to end.\n");
Is there a version of lsiutil that works for the LSI2008 controllers? I have a
mix of both, and lsiutil is nifty, but not as nifty if it only works on half my
controllers. :)
I know that this has been well-discussed already, but it's been a few months -
WD caviars with mpt/mpt_sas generating lots of retryable read errors, spitting
out lots of the beloved "Log info 3108 received for target" messages, and just
generally not working right.
(SM 836EL1 and 836TQ chassis
On Fri, June 18, 2010 08:29, Sendil wrote:
I can create 400+ file systems, one for each user,
but will this affect my system performance during system boot?
Is this recommended, or is any alternative available for this issue?
You can create a dataset for each user, and then set a per-dataset
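A sketch of what that looks like, with made-up pool and user names:

# one dataset per user, each with its own 500MB quota
zfs create tank/home
zfs create -o quota=500m tank/home/alice
zfs create -o quota=500m tank/home/bob
zfs list -r tank/home    # shows each user's usage against the quota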
David Magda wrote:
On Fri, June 18, 2010 08:29, Sendil wrote:
I can create 400+ file systems, one for each user,
but will this affect my system performance during system boot?
Is this recommended, or is any alternative available for this issue?
You can create a dataset for each user, and
On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote:
On 18/06/2010 00:18, Garrett D'Amore wrote:
On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
On the SS7000 series, you get an alert that the enclosure has been detached
from the system. The fru-monitor code (generalization of
Dear All:
Under the Sun Storage 7000 system, can we see the per-share dedup ratio after
enabling the dedup function? We would like to see each share's dedup ratio in
detail.
The Web GUI only shows the dedup ratio for the entire storage pool.
Thanks a lot,
-- Rex
On Fri, Jun 18, 2010 at 8:09 AM, David Magda dma...@ee.ryerson.ca wrote:
You could always split things up into groups of (say) 50. A few jobs ago,
I was in an environment where we had a /home/students1/ and
/home/students2/, along with a separate faculty/ (using Solaris and UFS).
This had
P.S.
User/group quotas are available in the Solaris 10 release,
starting in the Solaris 10 10/09 release:
http://docs.sun.com/app/docs/doc/819-5461/gazvb?l=ena=view
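With user quotas a single shared dataset can carry all 400+ limits instead of
needing one file system per user; a sketch with made-up names:

# one home dataset, per-user quotas instead of per-dataset ones
zfs create tank/home
zfs set userquota@alice=500m tank/home
zfs set userquota@bob=500m tank/home
zfs userspace tank/home    # report per-user space used and quotas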
Thanks,
Cindy
On 06/18/10 07:09, David Magda wrote:
On Fri, June 18, 2010 08:29, Sendil wrote:
I can create 400+ file systems
On Fri, 2010-06-18 at 09:07 -0400, Eric Schrock wrote:
On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote:
On 18/06/2010 00:18, Garrett D'Amore wrote:
On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
On the SS7000 series, you get an alert that the enclosure has been
Thanks guys - I will take a look at those clustered file systems.
My goal is not to stick with Windows - I would like to have a storage pool for
XenServer (free) so that I can have guests, but using a storage server
(OpenSolaris - ZFS) as the iSCSI storage pool.
Any suggestions for the added
Hi,
zpool create test raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 \
raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 \
raidz c0t2d0 c1t2d0 c2t2d0 c3t2d0 \
raidz c0t3d0 c1t3d0 c2t3d0 c3t3d0 \
[...]
raidz c0t10d0 c1t10d0 c2t10d0
On 18/06/2010 14:47, Rex wrote:
Dear All:
Under the Sun Storage 7000 system, can we see the per-share dedup ratio after
enabling the dedup function? We would like to see each share's dedup ratio in
detail.
The Web GUI only shows the dedup ratio for the entire storage pool.
Since dedup works across all datasets with
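From the command line the pool-wide figure is available as a pool property
(the pool name here is only an example); there is no built-in per-share
breakdown:

# overall dedup ratio for the whole pool
zpool get dedupratio pool-0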
I am new to zfs, so I am still learning. I'm using zpool iostat to
measure performance. Would you say that smaller raidz2 sets would give
me more reliable and better performance? I'm willing to give it a
shot...
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen pa...@iki.fi wrote:
On Fri, Jun 18,
I also have a dtrace script that I found that supposedly gives a more
accurate reading. Usually, though, its output is very close to what
zpool iostat says. Keep in mind this is a test environment, there's no
production here, so I can make and destroy the pools as much as I want
to play around
Yea. I did bs sizes from 8 to 512k with counts from 256 on up. I just
added zeros to the count, to try to test performance for larger files.
I didn't notice any difference at all, either with the dtrace script
or zpool iostat. Thanks for your help, btw.
On Fri, Jun 18, 2010 at 5:30 AM, Pasi
Sure. And hey, maybe I just need some context to know what's normal
IO for the zpool. It just...feels...slow, sometimes. It's hard to
explain. I attached a log of iostat -xn 1 while doing mkfile 10g
testfile on the zpool, as well as your dd with the bs set really high.
When I Ctrl-C'ed the dd it
Um...I started 2 commands in 2 separate ssh sessions:
in ssh session one:
iostat -xn 1 stats
in ssh session two:
mkfile 10g testfile
when the mkfile was finished I did the dd command...
on the same zpool1 and zfs filesystem... that's it, really
On Fri, Jun 18, 2010 at 6:06 AM, Arne Jansen
Oh! Yes. dedup. not compression, but dedup, yes.
On Fri, Jun 18, 2010 at 6:30 AM, Arne Jansen sensi...@gmx.net wrote:
Curtis E. Combs Jr. wrote:
Um...I started 2 commands in 2 separate ssh sessions:
in ssh session one:
iostat -xn 1 stats
in ssh session two:
mkfile 10g testfile
when the
Another thing that Gmail does that I find infuriating is that it
mucks with the formatting. For some reason it, and to be fair, Outlook
as well, seem to think that they know how a message needs to be
formatted better than I do.
Try doing inline quoting/response with Outlook, where you quote
-Original Message-
From: Linder, Doug
Sent: Friday, June 18, 2010 12:53 PM
Try doing inline quoting/response with Outlook, where you quote one
section,
reply, quote again, etc. It's impossible. You can't split up the quoted
section to
add new text - no way, no how. Very infuriating.
Hi Curtis,
You might review the ZFS best practices info to help you determine
the best pool configuration for your environment:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
If you're considering using dedup, particularly on a 24T pool, then
review the current known
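One check worth doing before enabling dedup on a pool this size (pool name is
an example) is to simulate it against the data already there:

# simulate dedup and print a DDT histogram plus the expected ratio,
# without actually turning dedup on
zdb -S zpool1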
Thank you, all of you, for the super helpful responses, this is probably one of
the most helpful forums I've been on. I've been working with ZFS on some
SunFires for a little while now, in prod, and the testing environment with oSol
is going really well. I love it. Nothing even comes close.
If
On Fri, Jun 18, 2010 at 3:52 PM, Linder, Doug
doug.lin...@merchantlink.com wrote:
Another thing that Gmail does that I find infuriating is that it
mucks with the formatting. For some reason it, and to be fair, Outlook
as well, seem to think that they know how a message needs to be
formatted
If the device driver generates or fabricates device IDs, then moving
devices around is probably okay.
I recall the Areca controllers are problematic when it comes to moving
devices under pools. Maybe someone with first-hand experience can
comment.
Consider exporting the pool first, moving the
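Something along these lines, with the pool name as an example:

# cleanly export the pool, move or recable the disks, then re-import;
# import scans the device labels, so changed device names don't matter
zpool export tank
# ... physically move the devices ...
zpool import tank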
People still use Outhouse? Really?! Next you'll be suggesting that
some people still put up with Internet Exploder... ;-)
Those of us who are literally forced to use it aren't too happy. Nor am I
happy with the giant stupid signature that gets tacked on that you all have to
trim when you
doug.lin...@merchantlink.com said:
Apparently, before Outlook there WERE no meetings, because it's clearly
impossible to schedule one without it.
Don't tell my boss, but I use Outlook for the scheduling, and fetchmail
plus procmail to download email out of Exchange and into my favorite
email
On Fri, Jun 18, 2010 at 1:52 AM, Curtis E. Combs Jr. ceco...@uga.edu wrote:
I am new to zfs, so I am still learning. I'm using zpool iostat to
measure performance. Would you say that smaller raidz2 sets would give
me more reliable and better performance? I'm willing to give it a
shot...
A ZFS
On Fri, Jun 18, 2010 at 6:34 AM, Curtis E. Combs Jr. ceco...@uga.eduwrote:
Oh! Yes. dedup. not compression, but dedup, yes.
dedup may be your problem... it requires some heavy RAM and/or a decent L2ARC
from what I've been reading.
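A rough back-of-envelope, assuming the commonly quoted figure of about 320
bytes of dedup-table entry per unique block:

24 TB of unique data / 128 KB records   = ~180 million blocks
~180 million blocks x ~320 bytes/entry  = ~60 GB of dedup table (DDT)

If that table doesn't fit in ARC (plus L2ARC), writes end up doing random reads
of the DDT from the pool itself, which would explain throughput collapsing to a
few MB/sec.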
Sounds to me like something is wrong, as on my 20-disk backup machine
with 20 1TB disks in a single raidz2 vdev I get the following with dd on
sequential reads/writes:
writes:
r...@opensolaris: 11:36 AM :/data# dd bs=1M count=10 if=/dev/zero
of=./100gb.bin
10+0 records in
10+0 records
I split a mirror to reconfigure and recopy it. I detached one drive,
reconfigured it ... all after unplugging the remaining pool drive during a
shutdown to verify no accidents could happen.
Later, I tried to import the original pool from the drive (now plugged back
in), only to be greeted
On 6/18/10 9:46 PM -0700 Cott Lang wrote:
I split a mirror to reconfigure and recopy it. I detached one drive,
reconfigured it ... all after unplugging the remaining pool drive during
a shutdown to verify no accidents could happen.
By detach, do you mean that you ran 'zpool detach'?
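If it was 'zpool detach', that is likely the problem: detach invalidates the
labels on the removed disk, so that disk can no longer be imported as a pool on
its own. The operation that keeps the removed half importable is 'zpool split'
(pool names here are examples):

# split the mirror: 'tank' keeps one disk, the other half becomes a new,
# exported pool that can be imported elsewhere
zpool split tank tank2
zpool import tank2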
Sandon Van Ness wrote:
Sounds to me like something is wrong, as on my 20-disk backup machine
with 20 1TB disks in a single raidz2 vdev I get the following with dd on
sequential reads/writes:
writes:
r...@opensolaris: 11:36 AM :/data# dd bs=1M count=10 if=/dev/zero
of=./100gb.bin
10+0