Ok, I changed the cable and also tried swapping the port on the motherboard.
The drive continued to have huge asvc_t and also started to have huge wsvc_t. I
unplugged it and the pool is now performing as expected.
See the 'storage' forum for any further updates as I am now
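For reference, the asvc_t/wsvc_t figures being discussed come from iostat; a minimal way to spot the offending drive (the interval and device names will vary) is:
# iostat -xn 5
A single device whose wsvc_t and asvc_t columns sit far above those of its siblings, while the rest of the pool looks healthy, is the usual signature of a dying disk or a flaky path.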
Hi Daniel,
Am 08.02.10 05:45, schrieb Daniel Carosone:
On Mon, Feb 08, 2010 at 04:58:38AM +0100, Felix Buenemann wrote:
I have some questions about the choice of SSDs to use for ZIL and L2ARC.
I have one answer. The other questions are mostly related to your
raid controller, which I can't
Hello,
an idea popped into my mind while talking about security and intrusion
detection.
Host-based ID may use checksumming for file change tracking. It works like
this:
Once installed, and knowing the software is OK, a baseline is created.
Then, in every check, the current status is verified
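A minimal sketch of that baseline-and-verify cycle on Solaris using digest(1); the paths and filenames here are only examples.
Create the baseline once, while the system is known to be good:
# find /usr/bin /usr/sbin -type f -exec digest -v -a sha256 {} \; > /secure/baseline.sha256
Then, on every check, recompute and diff against it:
# find /usr/bin /usr/sbin -type f -exec digest -v -a sha256 {} \; > /tmp/current.sha256
# diff /secure/baseline.sha256 /tmp/current.sha256
The question below is essentially whether ZFS's own checksums could replace the recompute step; as pointed out later in the thread, they cannot be used directly, because they cover ZFS blocks rather than whole files.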
-20100208-050907gmt
zfs list shows:
-bash-3.2$ zfs list -t snapshot,filesystem -r zp1
NAME USED AVAIL REFER MOUNTPOINT
zp1 628G 104G 33.8M /home
z...@bup-20090223-033745utc 0 - 33.8M -
z...@bup-20090225
On 08/02/2010 12:55, Lutz Schumann wrote:
Hello,
an idea popped into my mind while talking about security and intrusion
detection.
Host-based ID may use checksumming for file change tracking. It works like this:
Once installed, and knowing the software is OK, a baseline is created.
Then in
On 06/02/2010 13:18, Fajar A. Nugraha wrote:
On Sat, Feb 6, 2010 at 1:32 AM, Jjahservan...@gmail.com wrote:
saves me hundreds on HW-based RAID controllers ^_^
... which you might need to fork over to buy additional memory or faster CPU :P
Don't get me wrong, zfs is awesome, but to
On Mon, 8 Feb 2010, Felix Buenemann wrote:
I was under the impression that using HW RAID10 would save me 50% PCI
bandwidth and allow the controller to more intelligently handle its cache, so
I stuck with it. But I should run some benchmarks in RAID10 vs. JBOD with
ZFS mirrors to see if
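For the ZFS half of such a benchmark, the striped-mirror pool would be built roughly like this, with the controller in JBOD/pass-through mode (the device names here are hypothetical):
# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# zpool status tank
ZFS stripes writes across the two mirror vdevs, which is the moral equivalent of the controller's RAID10, but with end-to-end checksumming and self-healing on top.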
Copied from opensolaris-discuss as this probably belongs here.
I kept on trying to migrate my pool with children (see previous threads) and
had the (bad) idea to try the -d option on the receive part.
The system reboots immediately.
Here is the log in /var/adm/messages
Feb 8 16:07:09 amber
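For context, the general shape of the failing operation, with placeholder pool and dataset names rather than the poster's, would be:
# zfs send -R sourcepool/fs@migrate | zfs receive -d destpool
With -d, receive drops the leading (pool) component of the sent dataset names and recreates the remaining hierarchy under the target. Whatever the option does, it should never panic the box, so the immediate reboot looks like a genuine bug (and indeed a core dump is requested below).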
Hi,
Officially it's not supported (yet?).
Has anyone tried it with x4540 though?
--
Robert Milkowski
http://milek.blogspot.com
Hi!
I have an OSOL box as a home file server. It has 4 x 1 TB USB drives and a 1 TB
FireWire drive attached. The USB devices are combined into a RaidZ pool and the
FireWire drive acts as a hot spare.
Last night, one USB drive faulted and the following happened:
1. The zpool was not accessible anymore
2. changing
Thanks Dan.
When I try the clone then import:
pfexec zfs clone
data01/san/gallardo/g...@zfs-auto-snap:monthly-2009-12-01-00:00
data01/san/gallardo/g-testandlab
pfexec sbdadm import-lu /dev/zvol/rdsk/data01/san/gallardo/g-testandlab
The sbdadm import-lu gives me:
sbdadm: guid in use
which
Can you please send a complete list of the actions taken: The commands
you used to create the send stream, the commands used to receive the
stream. Also the output of `zfs list -t all` on both the sending and
receiving sides. If you were able to collect a core dump (it should be
in
Use create-lu to give the clone a different GUID:
sbdadm create-lu /dev/zvol/rdsk/data01/san/gallardo/g-testandlab
--
Dave
On 2/8/10 10:34 AM, Scott Meilicke wrote:
Thanks Dan.
When I try the clone then import:
pfexec zfs clone
Lori Alt wrote:
Can you please send a complete list of the actions taken: The commands
you used to create the send stream, the commands used to receive the
stream. Also the output of `zfs list -t all` on both the sending and
receiving sides. If you were able to collect a core dump (it
Sure, but that will put me back into the original situation.
-Scott
To add to Bob's notes...
On Feb 8, 2010, at 8:37 AM, Bob Friesenhahn wrote:
On Mon, 8 Feb 2010, Felix Buenemann wrote:
I was under the impression that using HW RAID10 would save me 50% PCI
bandwidth and allow the controller to more intelligently handle its cache,
so I stuck with it.
On Feb 8, 2010, at 9:05 AM, Martin Mundschenk wrote:
Hi!
I have an OSOL box as a home file server. It has 4 x 1 TB USB drives and a 1 TB
FireWire drive attached. The USB devices are combined into a RaidZ pool and the
FireWire drive acts as a hot spare.
Last night, one USB drive faulted and the following
Only with the zdb(1M) tool but note that the
checksums are NOT of files
but of the ZFS blocks.
Thanks - blocks, right (doh) - that's what I was missing. Damn, it would be so
nice :(
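For anyone curious what zdb actually exposes, dumping an object at high verbosity prints its block pointers, checksums included (the dataset name and object number below are placeholders):
# zdb -ddddd tank/fs 7
Each block pointer carries a cksum= field, but those are per-block values computed over ZFS blocks, so there is no direct way to line them up with a per-file baseline.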
Ah, I didn't see the original post. If you're using an old COMSTAR
version prior to build 115, maybe the metadata placed at the first 64K
of the volume is causing problems?
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007192.html
The clone and create-lu process works
That is likely it. I created the volume using 2009.06, then later upgraded to
124. I just now created a new zvol, connected it to my Windows server,
formatted it, and added some data. Then I snapped the zvol, cloned the snap, and
used 'pfexec sbdadm create-lu'. When presented to the Windows server,
nw == Nicolas Williams nicolas.willi...@sun.com writes:
ch == c hanover chano...@umich.edu writes:
Trying again:
ch In our particular case, there won't be
ch snapshots of destroyed filesystems (I create the snapshots,
ch and destroy them with the filesystem).
Right, but if your
ck == Christo Kutrovsky kutrov...@pythian.com writes:
djm == Darren J Moffat darr...@opensolaris.org writes:
kth == Kjetil Torgrim Homme kjeti...@linpro.no writes:
ck The "never turn off the ZIL" sounds scary, but if the only
ck consequences are 15 (even 45) seconds of data loss .. I am
enh == Edward Ned Harvey sola...@nedharvey.com writes:
enh As for mac access via nfs, automounter, etc ... I found that
enh the UID/GID / posix permission bits were a problem, and I
enh found it was easier and more reliable for the macs to use SMB
I found it much less reliable, if by
I plan on filing a support request with Sun, and will try to post back with any
results.
Scott
Is it possible to install and boot MS Windows 7 from a ZFS iSCSI target?
What about Linux or even Solaris? Do the installation DVDs of these OSes
have sufficient drivers to install onto an iSCSI target? Please share
if there is a document available.
Thanks,
--
Amer Ather
Senior Staff
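The target side, at least, is straightforward with COMSTAR; a rough sketch (names and sizes are placeholders, and whether the Windows 7 installer can actually boot from it is exactly the open question):
# zfs create -V 40g tank/win7
# sbdadm create-lu /dev/zvol/rdsk/tank/win7
# stmfadm add-view <guid-printed-by-create-lu>
# svcadm enable -r svc:/network/iscsi/target:default
# itadm create-target
The client then needs an initiator that can boot, typically an iSCSI-boot-capable NIC or gPXE, which is where the installer-driver question comes in.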
On Mon, 8 Feb 2010, Richard Elling wrote:
If there is insufficient controller bandwidth capacity, then the
controller becomes the bottleneck.
We don't tend to see this for HDDs, but SSDs can crush a controller and
channel.
It is definitely seen with older PCI hardware.
Bob
--
Bob
Hi,
This may well have been covered before but I've not been able to find an answer
to this particular question.
I've set up a raidz2 test env using files like this:
# mkfile 1g t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 s1 s2
# zpool create dataPool raidz2 /xvm/t1 /xvm/t2 /xvm/t3 /xvm/t4 /xvm/t5
#
This is a FAQ, but the FAQ is not well maintained :-(
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
On Feb 8, 2010, at 1:35 PM, Lasse Osterild wrote:
Hi,
This may well have been covered before but I've not been able to find an
answer to this particular question.
I've set up
There are also questions of case sensitivity, locking, being mounted at
boot time rather than login time, and accommodating more than one user.
I've also heard SMB is far slower.
The Macs I've switched to automounted NFS are causing me less trouble.
If you are in a ``share almost everything''
On Mon, Feb 01, 2010 at 12:22:55PM -0800, Lutz Schumann wrote:
Created a pool on head1 containing just the cache device (c0t0d0).
This is not possible, unless there is a bug. You cannot create a pool
with only a cache device. I have verified this on b131:
# zpool create
On 08/02/2010, at 22.50, Richard Elling wrote:
r...@vmstor01:/# zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dataPool  9.94G  4.89G  5.04G    49%  1.00x  ONLINE  -
Now here's what I don't get: why does it say the pool size is 9.94G when
it's made up of 2 x
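For what it's worth, the arithmetic does work out once you know that zpool list reports raw vdev space while zfs list reports space after parity: assuming the pool is the two 5 x 1G raidz2 vdevs from the earlier mkfile example, zpool list shows roughly the full 10 x 1G of the backing files (9.94G after labels), of which about 4 x 1G is parity, so the usable figure that zfs list would report is only around 6G, a bit less once metadata and recordsize effects are counted.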
enh == Edward Ned Harvey macenterpr...@nedharvey.com writes:
enh How are you managing UID's on the NFS server?
All the Macs are installed from the same image using asr. And for the
most part, there's just one user, except where there isn't, and then I
manage UIDs by hand.
enh When I
On Mon, Feb 08, 2010 at 11:24:56AM -0800, Lutz Schumann wrote:
Only with the zdb(1M) tool but note that the
checksums are NOT of files
but of the ZFS blocks.
Thanks - blocks, right (doh) - that's what I was missing. Damn, it would be so
nice :(
If you're comparing the current data to a
On Mon, Feb 08, 2010 at 11:28:11PM +0100, Lasse Osterild wrote:
Ok, thanks. I know that the amount of used space will vary, but what's
the usefulness of the total size when, i.e. in my pool above, 4 x 1G
(roughly, depending on recordsize) are reserved for parity? It's not
like it's usable for
Am 08.02.10 22:23, schrieb Bob Friesenhahn:
On Mon, 8 Feb 2010, Richard Elling wrote:
If there is insufficient controller bandwidth capacity, then the
controller becomes the bottleneck.
We don't tend to see this for HDDs, but SSDs can crush a controller and
channel.
It is definitely seen
Hi Richard,
I last updated this FAQ on 1/19.
Which part is not well-maintained?
:-)
Cindy
On 02/08/10 14:50, Richard Elling wrote:
This is a FAQ, but the FAQ is not well maintained :-(
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
On Feb 8, 2010, at 1:35 PM, Lasse Osterild
On 09/02/2010, at 00.23, Daniel Carosone wrote:
On Mon, Feb 08, 2010 at 11:28:11PM +0100, Lasse Osterild wrote:
Ok, thanks. I know that the amount of used space will vary, but what's
the usefulness of the total size when, i.e. in my pool above, 4 x 1G
(roughly, depending on recordsize) are
Hi Lasse,
I expanded this entry to include more details of the zpool list and
zfs list reporting.
See if the new explanation provides enough details.
Thanks,
Cindy
On 02/08/10 16:51, Lasse Osterild wrote:
On 09/02/2010, at 00.23, Daniel Carosone wrote:
On Mon, Feb 08, 2010 at 11:28:11PM
zpool/zfs history does not record version upgrade events; those seem like
important events worth keeping in either the public or internal history.
This is a long thread, with lots of interesting and valid observations
about the organisation of the industry, the segmentation of the
market, getting what you pay for vs paying for what you want, etc.
I don't really find within, however, an answer to the original
question, at least the way I
Hi,
I am loving the new dedup feature.
A few questions:
If you enable it after data is on the filesystem, will it find the
dupes on read as well as write? Would a scrub therefore make sure the
DDT is fully populated?
Re the DDT, can someone outline its structure please? Some sort of
hash table?
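A partial answer, hedged because this trips people up: dedup applies only to data written after the property is enabled; reads do not populate the DDT, and neither does a scrub, so pre-existing data has to be rewritten (copied, or sent and received) to be deduplicated. Enabling and checking it looks like this (pool and dataset names are placeholders):
# zfs set dedup=on tank/fs
# zpool get dedupratio tank
The dedupratio only reflects blocks written since dedup was turned on, which matches the "no. only written data is ..." reply quoted further down.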
On Mon, Feb 08, 2010 at 05:23:29PM -0700, Cindy Swearingen wrote:
Hi Lasse,
I expanded this entry to include more details of the zpool list and
zfs list reporting.
See if the new explanation provides enough details.
Cindy, feel free to crib from or refer to my text in whatever way might
On Tue, 9 Feb 2010, Felix Buenemann wrote:
Well, to make things short: using JBOD + ZFS striped mirrors vs. the controller's
RAID10 dropped the max. sequential read I/O from over 400 MByte/s to below
300 MByte/s. However, random I/O and sequential writes seemed to perform
Much of the difference
I've a couple of older systems that are front-ending a large backup array.
I'd like to put in a large L2ARC cache device for them to use with
dedup. Right now, they only have Ultra320 SCA 3.5 hot-swap drive
bays and PCI-X slots.
I haven't found any SSDs (or adapters) which might work
Daniel Carosone d...@geek.com.au writes:
In that context, I haven't seen an answer, just a conclusion:
- All else is not equal, so I give my money to some other hardware
manufacturer, and get frustrated that Sun won't let me buy the
parts I could use effectively and comfortably.
Just like I said way earlier, the entire idea is like asking to buy a
Ferrari without the aluminum wheels they sell because you think they are
charging too much for them, after all, aluminum is cheap.
It's just not done that way. There are OTHER OPTIONS for people who can't
afford it. You
On Monday, February 8, 2010, Kjetil Torgrim Homme kjeti...@linpro.no wrote:
Daniel Carosone d...@geek.com.au writes:
In that context, I haven't seen an answer, just a conclusion:
- All else is not equal, so I give my money to some other hardware
manufacturer, and get frustrated that Sun
On Mon, Feb 8, 2010 at 9:13 PM, Tim Cook t...@cook.ms wrote:
On Monday, February 8, 2010, Kjetil Torgrim Homme kjeti...@linpro.no
wrote:
Daniel Carosone d...@geek.com.au writes:
In that context, I haven't seen an answer, just a conclusion:
- All else is not equal, so I give my money
Tim Cook wrote:
On Monday, February 8, 2010, Kjetil Torgrim Homme kjeti...@linpro.no wrote:
Daniel Carosone d...@geek.com.au writes:
In that context, I haven't seen an answer, just a conclusion:
- All else is not equal, so I give my money to some other hardware
manufacturer, and
Maybe look at the rsync and librsync (http://librsync.sourceforge.net/) code to
see if a ZFS API could be designed to help rsync/librsync in the future, as well
as diff.
It might be a good idea for POSIX to have a single checksum and a
multi-checksum interface.
One problem could be block sizes,
Hi. As sometimes list-owners aren't monitored...
I signed up for digests.
On the mailman page it hints at once-daily service.
I'm getting maybe 12 per day; I didn't count them.
Non-overlapping, various messages counts in each.
This is unexpected given the above hint.
Once a day would be nice :)
Nobody has any ideas? It's still hung after work.
I wonder what it will take to stop the backup and export the pool? Well,
that's nice; a straight kill terminated the processes, at least.
zpool status shows no errors. zfs list shows backup filesystems mounted.
zpool export -f is running...no
On Mon, Feb 8, 2010 at 9:04 PM, grarpamp grarp...@gmail.com wrote:
PS: Is there any way to get a copy of the list since inception
for local client perusal, not via some online web interface?
You can get monthly .gz archives in mbox format from
http://mail.opensolaris.org/pipermail/zfs-discuss/.
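A quick sketch of pulling one down for local reading (the month-by-month filename follows the usual pipermail convention, so adjust as needed):
# wget http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February.txt.gz
# gunzip 2010-February.txt.gz
# mutt -f 2010-February.txt
Each archive is a plain mbox file for one month, so any mbox-capable mail client or newsreader can open it directly.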
Although I am in full support of what Sun is doing, to play devil's
advocate: Supermicro is.
They're not the only ones, although the most-often discussed here.
Dell will generally sell hardware and warranty and service add-ons in
any combination, to anyone willing and capable of figuring
Damon Atkins damon_atk...@yahoo.com.au writes:
One problem could be block sizes, if a file is re-written and is the
same size it may have different ZFS record sizes within, if it was
written over a long period of time (txg's)(ignoring compression), and
therefore you could not use ZFS checksum
On Mon, Feb 08, 2010 at 09:33:12PM -0500, Thomas Burgess wrote:
This is a far cry from an apples to apples comparison though.
As much as I'm no fan of Apple, it's a pity they dropped ZFS because
that would have brought considerable attention to the opportunity of
marketing and offering
grarpamp grarp...@gmail.com writes:
PS: Is there any way to get a copy of the list since inception for
local client perusal, not via some online web interface?
I prefer to read mailing lists using a newsreader and the NNTP interface
at Gmane. A newsreader tends to be better at threading, etc.
On Mon, Feb 8, 2010 at 10:33 PM, Erik Trimble erik.trim...@sun.com wrote:
Erik Trimble wrote:
I've a couple of older systems that are front-ending a large backup array.
I'd like to put in a large L2ARC cache device for them to use with dedup.
Right now, they only have Ultra320 SCA 3.5
On Mon, Feb 08, 2010 at 07:33:56PM -0800, Erik Trimble wrote:
To reply to myself, the best I can do is this:
http://www.apricorn.com/product_detail.php?type=familyid=59
(it uses a sil3124 controller, so it /might/ work with OpenSolaris )
Nice. I'd certainly like to know if you try it
On Tue, Feb 09, 2010 at 03:11:38PM +1100, Daniel Carosone wrote:
I didn't find anything to indicate either way whether there was
a bootable BIOS on board
Ah - in the install guide there's a mention about pressing F4 or
Ctrl-S when prompted at boot to configure the raid format, so
there's
I would have thought that if I write 1k and then the ZFS txg times out in 30 secs,
the 1k will be written to disk in a 1k record block; then if I write 4k, 30 secs
later when the txg happens, another 4k record-size block will be written; and
then if I write 130k, a 128k and a 2k record block will be
On Feb 8, 2010, at 6:04 PM, Kjetil Torgrim Homme wrote:
Tom Hall thattommyh...@gmail.com writes:
If you enable it after data is on the filesystem, it will find the
dupes on read as well as write? Would a scrub therefore make sure the
DDT is fully populated.
no. only written data is
On Feb 8, 2010, at 9:10 PM, Damon Atkins wrote:
I would have thought that if I write 1k and then the ZFS txg times out in 30 secs,
the 1k will be written to disk in a 1k record block; then if I write 4k, 30 secs
later when the txg happens, another 4k record-size block will be
written; and then if