yes, that's the way zpool likes it
I think I have to understand how (Open)Solaris creates disks, or how partitioning works under OpenSolaris. Do you know of any guide or howto?
Folks,
what can I post to the list to make the discussion go on?
Is this what you folks want to see? Something I shared with King and High but not with you folks?
http://www.excelsioritsolutions.com/jz/jzbrush/jzbrush.htm
This is not even IT stuff, so I never thought I should post this to the
milosz writes:
iperf test coming out fine, actually...
iperf -s -w 64k
iperf -c -w 64k -t 900 -i 5
[ ID] Interval       Transfer     Bandwidth
[  5] 0.0-899.9 sec  81.1 GBytes  774 Mbits/sec
totally steady. i could probably implement some tweaks to improve it, but
Roch Bourbonnais wrote:
On 12 Jan 2009, at 17:39, Carson Gaspar wrote:
Joerg Schilling wrote:
Fabian Wörner fabian.woer...@googlemail.com wrote:
My post was not meant to start a GPL/CDDL discussion.
It was just an idea to promote ZFS and OpenSolaris.
If it was against anything, then
thank you, at least the list is alive.
ok, let me provide some comments beyond IT, since the zpool likes it.
knowledge is power.
truth is knowledge.
one can only understand truth if one can handle truth.
and that is through learning, and reasoning.
when you possess enough knowledge, you will be
Hi;
It's all about performance when you consider H/W RAID. It will put
extra overhead on your OS. As ZFS is fast, I will always prefer ZFS-based
RAID. It will also save the cost of a RAID card.
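For example, a minimal sketch (pool and device names are placeholders; adjust for your controller):
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool status tank
That builds a single-parity RAID-Z group directly on the plain disks, with no hardware RAID volume underneath.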
Ashish Nabira
nab...@sun.com
http://sun.com
Work is worship.
On
Hi,
I wanted to migrate a virtual disk from a S10U6 to OpenSolaris
2008.11.
In the first machine I rebooted to single-user and ran
$ zpool export disco
then copied the disk files to the target VM, rebooted as single-user
and ran
$ zpool import disco
The disc was mounted, but none of the
Turning off Windows quality of service seems to have given me sustained write
speeds hitting about 90 MB/s using CIFS.
Writes to the iSCSI device are hitting about 40 MB/s, but the network utilisation
graph is very jagged; it's just a constant spike to 60% utilisation then a drop
to 0 and repeat.
I
On Wed, Jan 14, 2009 at 10:11 AM, Nico Sabbi nicola.sa...@poste.it wrote:
Hi,
I wanted to migrate a virtual disk from a S10U6 to OpenSolaris
2008.11.
In the first machine I rebooted to single-user and ran
$ zpool export disco
then copied the disk files to the target VM, rebooted as
On Wednesday 14 January 2009 11:44:56 Peter Tribble wrote:
On Wed, Jan 14, 2009 at 10:11 AM, Nico Sabbi nicola.sa...@poste.it
wrote:
Hi,
I wanted to migrate a virtual disk from a S10U6 to OpenSolaris
2008.11.
In the first machine I rebooted to single-user and ran
$ zpool export disco
There is an update in build 105, but it is only pertaining to the Raid
Management tool:
Issues Resolved:
BUG/RFE: 6776690 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6776690
Areca raid management util doesn't work on Solaris
Files Changed:
On Wed, Jan 14, 2009 at 10:58 AM, mijenix mije...@gmx.ch wrote:
yes, that's the way zpool likes it
I think I have to understand how (Open)Solaris creates disks, or how partitioning works under OpenSolaris. Do you know of any guide or howto?
Cute idea, maybe. But very inconsistent with the size in blocks (reported by
ls -dls dir).
Is there a particular reason for this, or is it one of those just for the heck
of it things?
Granted that it isn't necessarily _wrong_. I just checked SUSv3 for stat() and
sys/stat.h,
and it appears
I have now installed OpenSolaris 2008.11 to a hard drive so I could catch the
shutdown messages that get written to /var/adm/messages.
When my computer reboots I get a kernel panic and this is the relevant part of
the log: (also posted here if you don't like the linebreaks:
sorry, that 60% statement was misleading... i will VERY OCCASIONALLY get a
spike to 60%, but i'm averaging more like 15%, with the throughput often
dropping to zero for several seconds at a time.
that iperf test more or less demonstrates it isn't a network problem, no?
also i have been using
Richard L. Hamilton rlha...@smart.net wrote:
Cute idea, maybe. But very inconsistent with the size in blocks (reported by
ls -dls dir).
Is there a particular reason for this, or is it one of those just for the
heck of it things?
Granted that it isn't necessarily _wrong_. I just checked
Nico,
If you want to enable snapshot display as in previous releases,
then set this parameter on the pool:
# zpool set listsnapshots=on pool-name
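A one-off alternative (just a sketch) is to list the snapshots explicitly instead:
# zfs list -t snapshot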
Cind
Nico Sabbi wrote:
On Wednesday 14 January 2009 11:44:56 Peter Tribble wrote:
On Wed, Jan 14, 2009 at 10:11 AM, Nico Sabbi
On Wednesday 14 January 2009 16:49:48 cindy.swearin...@sun.com wrote:
Nico,
If you want to enable snapshot display as in previous releases,
then set this parameter on the pool:
# zpool set listsnapshots=on pool-name
Cind
thanks, it works as I need.
mijenix wrote:
yes, that's the way zpool likes it
I think I have to understand how (Open)Solaris creates disks, or how partitioning works under OpenSolaris. Do you know of any guide or howto?
We've tried to make sure the ZFS Admin Guide covers these things, including
the procedure for mirroring
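As a rough illustration of the whole-disk versus slice distinction (device names are placeholders):
# zpool create tank c1t1d0      (whole disk: ZFS puts an EFI label on it and manages it itself)
# zpool create tank c1t1d0s0    (a single slice of a VTOC-labeled disk, e.g. when the disk is shared)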
Thanks for the info. I'm running the latest firmware for my card: V1.46
with BOOT ROM version V1.45.
Could you tell me how you have your card configured? Are you using JBOD,
RAID, or Pass Through? What is your Max SATA mode set to? How many drives
do you have attached?
What is your ZFS
Here's an update:
I thought that the error message
arcmsr0: too many outstanding commands
might be due to a SCSI queue being overrun.
The Areca driver has
#define ARCMSR_MAX_OUTSTANDING_CMD 256
(http://src.opensolaris.org/source/s?defs=ARCMSR_MAX_OUTSTANDING_CMD)
What might be
Richard L. Hamilton rlha...@smart.net wrote:
Cute idea, maybe. But very inconsistent with the
size in blocks (reported by ls -dls dir).
Is there a particular reason for this, or is it one
of those just for the heck of it things?
Granted that it isn't necessarily _wrong_. I just
Charles Wright wrote:
Here's an update:
I thought that the error message
arcmsr0: too many outstanding commands
might be due to a SCSI queue being overrun
Rather than messing with sd_max_throttle, you might try
changing the number of iops ZFS will queue to a vdev.
IMHO this is
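The knob usually suggested for that is zfs_vdev_max_pending in /etc/system, followed by a reboot; treat this as a sketch and verify the tunable name and a sensible value for your build:
set zfs:zfs_vdev_max_pending = 10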
Richard L. Hamilton rlha...@smart.net wrote:
I did find the earlier discussion on the subject (someone e-mailed me that
there had been
such). It seemed to conclude that some apps are statically linked with old
scandir() code
that (incorrectly) assumed that the number of directory entries
Ok, I've upgraded the motherboard's BIOS and installed ZFS with b105 over the existing
UFS b104. It works better now. The disk sounds almost like normal, barely
audible. But sometimes it goes back and sounds like hell, very seldom now.
I don't get it. Why does UFS do this? Hmm...
Richard L. Hamilton rlha...@smart.net wrote:
I did find the earlier discussion on the subject (someone e-mailed me that
there had been
such). It seemed to conclude that some apps are statically linked with old
scandir() code
that (incorrectly) assumed that the number of directory entries
On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson bfwil...@doit.wisc.edu wrote:
Does creating ZFS pools on multiple partitions on the same physical drive
still run into the performance and other issues that putting pools in slices
does?
Is zfs going to own the whole drive or not? The *issue*
On Wed, Jan 14, 2009 at 20:03, Tim t...@tcsac.net wrote:
On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson bfwil...@doit.wisc.edu
wrote:
Does creating ZFS pools on multiple partitions on the same physical drive
still run into the performance and other issues that putting pools in slices
does?
but, the write cache on/offness is a stateful setting stored on the
disk platter, right? so it survives reboots of the disk, and ZFS
doesn't turn it off, and UFS arguably should turn it off but
doesn't---once you've dedicated a disk to ZFS, you have to turn the
write cache off yourself somehow
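if you do need to poke at it by hand, one way is format's expert mode (a sketch; the menu entries may vary by drive type):
# format -e
format> cache
cache> write_cache
write_cache> display
write_cache> disable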
On Wed, Jan 14, 2009 at 2:48 PM, Miles Nordin car...@ivy.net wrote:
but, the write cache on/offness is a stateful setting stored on the
disk platter, right? so it survives reboots of the disk, and ZFS
doesn't turn it off, and UFS arguably should turn it off but
doesn't---once you've
On Wed, Jan 14, 2009 at 2:40 PM, Mattias Pantzare pantz...@gmail.com wrote:
On Wed, Jan 14, 2009 at 20:03, Tim t...@tcsac.net wrote:
On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson bfwil...@doit.wisc.edu
wrote:
Does creating ZFS pools on multiple partitions on the same physical
drive
Any update on star's ability to back up ZFS ACLs? Any idea if we can use the pax
command to back up ZFS ACLs? Will the -p option of the pax utility do the trick?
-satya
??
Tim wrote:
On Wed, Jan 14, 2009 at 2:40 PM, Mattias Pantzare pantz...@gmail.com wrote:
On Wed, Jan 14, 2009 at 20:03, Tim t...@tcsac.net wrote:
On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson
satya wrote:
Any idea if we can use the pax command to back up ZFS ACLs? Will the -p option of the pax
utility do the trick?
pax should, according to
http://docs.sun.com/app/docs/doc/819-5461/gbchx?a=view
tar and cpio do.
It should be simple enough to test, just generate an archive and have a
look.
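A minimal way to test it (just a sketch; the path and the user in the ACL entry are made up):
# touch /tank/test/file
# chmod A+user:fred:read_data:allow /tank/test/file
# ls -v /tank/test/file
Archive the file, extract it somewhere else, and compare the ls -v output before and after.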
Hi,
I have a big problem with my ZFS drive. After a kernel panic, I cannot import
the pool anymore:
---
= zpool status
no pools available
= zpool list
no pools available
Ian Collins wrote:
satya wrote:
Any idea if we can use the pax command to back up ZFS ACLs? Will the -p option of
the pax utility do the trick?
pax should, according to
http://docs.sun.com/app/docs/doc/819-5461/gbchx?a=view
pax isn't ACL aware. It does handle extended attributes, though.
Here
Mark Shellenbaum wrote:
Ian Collins wrote:
satya wrote:
Any idea if we can use the pax command to back up ZFS ACLs? Will the -p
option of the pax utility do the trick?
pax should, according to
http://docs.sun.com/app/docs/doc/819-5461/gbchx?a=view
pax isn't ACL aware. It does handle extended
I realize that any error can occur in a storage subsystem, but most
of these have an extremely low probability. I'm interested in this
discussion in only those that do occur occasionally, and that are
not catastrophic.
Consider the common configuration of two SCSI disks connected to
the same HBA
What happens when zpool upgrade is run on a zpool that has some faulted
disks? I guess it is safe to run zpool upgrade while zpool is online.
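For reference, the sequence I had in mind (just a sketch; the pool name is a placeholder) was:
# zpool status -x
# zpool upgrade -v
# zpool upgrade mypool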
Thanks,
ZFS will always flush the disk cache at appropriate times. If ZFS
thinks it is alone on the disk, it will turn the disk's write cache on.
I'm not sure if you're trying to argue or agree. If you're trying to argue,
you're going to have to do a better job than zfs will always flush disk
cache at
Just to let everybody know, I'm in touch with Charles and we're
working on this problem offline. We'll report back to the list
when we've got something to talk about.
James
On Wed, 14 Jan 2009 08:37:44 -0800 (PST)
Charles Wright char...@asc.edu wrote:
Here's an update:
I thought that the
On Wed, Jan 14, 2009 at 04:39:03PM -0600, Gary Mills wrote:
I realize that any error can occur in a storage subsystem, but most
of these have an extremely low probability. I'm interested in this
discussion in only those that do occur occasionally, and that are
not catastrophic.
What level is
darn, Darren, learning fast!
best,
z
- Original Message -
From: A Darren Dunham ddun...@taos.com
To: zfs-discuss@opensolaris.org
Sent: Wednesday, January 14, 2009 6:15 PM
Subject: Re: [zfs-discuss] What are the usual suspects in data errors?
On Wed, Jan 14, 2009 at 04:39:03PM -0600,
folks, please, chatting on - don't make me stop you, we are all open folks.
[but darn]
ok, thank you very much for the anticipation of something actually useful; here
is another thing I shared with MS Storage but not with you folks yet --
we win with real advantages, not lies, not scales, but
Folks, I am very sorry, for I don't know how not to be misleading.
I was not challenging the Ten Commandments.
That is a good code. And maybe the first one we need to follow.
Vanity and pride and arrogance and courage are all very different, but very
similar.
Before you can truly understand the
well, since this is part of how I make my living, or at least
what is in my current job description...
Gary Mills wrote:
I realize that any error can occur in a storage subsystem, but most
of these have an extremely low probability. I'm interested in this
discussion in only those that do
Hi Guys,
We are in the process of using snapshots and zfs send/recv to copy terabytes
of database datafiles.
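Roughly, the pipeline looks like this (a sketch; dataset and host names are placeholders):
# zfs snapshot tank/db@copy1
# zfs send tank/db@copy1 | ssh otherhost zfs recv -F tank/db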
Due to CoW not being very friendly to databases, we have a LOT of fragmentation
in the datafiles.
This really slows down zfs send.
Is it possible (or already done) to have zfs
Sorry, I just cannot tell how this name is related to Sun, IMHO.
In my days and enterprise environments, lots of CoWs are stored in
relational databases.
I dunno where the hell you come from.
best,
z
- Original Message -
From: David Shirley david.shir...@nec.com.au
To:
CoW = copy on write, a ZFS feature, which makes snapshots easier to maintain.
However it also introduces file fragmentation in database datafiles.
Sorry, I just cannot tell how is this name related to
Sun, IMHO.
In my days and enterprise environments, lots of CoWs
are stored in
Yes, that's more like it.
hahahahahahahaha, it's all happy.
chatting on, Sun folks.
best,
z
- Original Message -
From: David Shirley david.shir...@nec.com.au
To: zfs-discuss@opensolaris.org
Sent: Wednesday, January 14, 2009 11:44 PM
Subject: Re: [zfs-discuss] zfs send and file
Ok, it's also important, in many many cases, but not all -
taking the problem into tomorrow is also not very good.
IMHO, maybe all you smart open folks that know all about this and that, but
dunno how to fix your darn email address to appear zfs user on the darn
list discussion?
do I have to
On 14 Jan 2009, at 10:01, Andrew Gabriel wrote:
DOS/FAT filesystem implementations in appliances can be found in less
than 8K code and data size (mostly that's code). Limited functionality
implementations can be smaller than 1kB size.
Just for the sake of comparison, how big is the limited
Hey, all!
Using iozone (with the sequential read, sequential write, random read, and
random write categories), on a Sun X4240 system running OpenSolaris b104
(NexentaStor 1.1.2, actually), we recently ran a number of relative
performance tests using a few ZIL and L2ARC configurations (meant to
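For anyone unfamiliar with the setup, separate log (ZIL) and cache (L2ARC) devices are attached roughly like this (a sketch; pool and device names are placeholders):
# zpool add tank log c2t0d0
# zpool add tank cache c2t1d0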