Jim,
This makes sense.
fmdump -eV reported that your original drive was experiencing repeated
read failures (SCSI command code 0x28, i.e. READ(10)).
Steve
- Original Message -
Well, I decided to bite the bullet and kick the original
disk from the pool after replacing it with the spare
The interesting bit is what happens inside arc_reclaim_needed(),
that is, how it arrives at the conclusion that there is memory pressure.
Maybe we could trace arg0, which gives the location where
we left the function. That would pinpoint which return path
arc_reclaim_needed() took.
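Something like this one-liner would do it (an untested sketch):

  dtrace -n 'fbt:zfs:arc_reclaim_needed:return { @locs[arg0] = count(); }'

For fbt return probes, arg0 is the offset of the return instruction
within the function, so the aggregation shows which exit fired and how often.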
Steve
{ byteswap_uint8_array, TRUE,  "ZFS plain file" },
{ zap_byteswap,         TRUE,  "ZFS directory" },
{ zap_byteswap,         TRUE,  "ZFS master node" },
{ zap_byteswap,         TRUE,  "ZFS delete queue" },
{ byteswap_uint8_array, FALSE, "zvol object" },
Cheers,
Steve Gonczi
- Original Message -
I just found that I do not know exactly: What
Thanks to everyone who replied to my call for help. I really appreciate it.
I turned the server off for a few hours and rebooted again. This time many of
the problems seemed to resolve. It has been running smoothly now for a day.
I took the advice of one post and updated to OpenIndiana
I have a home media server set up using OpenSolaris. All my experience with
OpenSolaris has been through setting up and maintaining this server, so it is
rather limited. I have run into some problems recently and I am not sure of
the best way to troubleshoot them. I was hoping to get some
other disk i/o.
Good luck,
Steve Radich
www.BitShop.com - Business Innovative Technology Shop
I am certainly a little late to this post, but I recently began using ZFS and
had to figure this all out.
There are ways to do this without disturbing the volume or removing it and
re-connecting it on the Windows side. It took a bit of research, and I
put up a blog post about it. Plan to
70G
I created a snapshot of the rpool and tried to send it to the other disk but it
fails with file too large.
zfs send -R rpool@backup altpool
warning: cannot send 'rpool/bu...@backup': file too large.
Is there any way to get the data over onto the other drive at all?
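For reference, zfs send writes a stream to stdout, so it normally has to be
piped into zfs receive rather than given a target pool as an argument. A
sketch using the names above:

  zfs send -R rpool@backup | zfs receive -Fd altpool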
Thanks Steve
Doh,
Why didn't I think of that? Cheers Mark. Sometimes the most obvious options get
completely passed by; alt boot environment it is.
Thanks Steve.
Steve Radich - www.BitShop.com - www.LinkedIn.com/in/SteveRadich
Like many others, I've been having issues with dedup.
Here's my situation: a 4TB pool for daily backups of SQL Server, dedup enabled,
so a typical directory has 100+ files that are mostly identical (some are fully
identical).
If I do rm *, OpenSolaris is dead, ZFS hung, etc. Sometimes it
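One way to gauge how large the dedup table has grown is zdb's DDT report -
a sketch, with 'tank' standing in for the pool name:

  zdb -DD tank    # DDT histogram plus on-disk and in-core sizes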
cards? Is there another card
we should be looking at? Thanks,
Steve Jost
that does 6Gb SAS but
I can't seem to find a source for the gear. Any ideas? Thanks!
Steve Jost
The Intel SSD is not a dual-ported SAS device. This device must be supported
by the SAS expander in your external chassis.
Did you use an AAMUX interposer card for the SATA device between
occurred), ASCQ: 0x0, FRU: 0x0
Could this error be because the Intel SSDs are SATA and we need a real SAS
interface for multi-initiator support, or is it a bug in the firmware somewhere
that needs to be addressed? Where can we go from here to troubleshoot this
oddity? Thanks!
Steve Jost
connected to both nodes and no SSDs, everything works as it should.
Sorry for the confusion,
Steve Jost
Silly question - you're not trying to have the ZFS pool imported on
both hosts at the same time, are you? Maybe I misread; I had a hard
time following the full description of what exact
More info:
The crashes go away just by swapping in a faster CPU with more horsepower.
On one box where the crash consistently happened (a slow 2-core CPU),
I no longer see the crash after swapping to a quad core.
z_allocsize = 0x30
z_size = 0x30
z_ace_count = 0x6
z_ace_idx = 0x2
}
Looks to me like the crash here is the same, and list_next / list_prev are garbage.
Anybody seen this?
Am I skipping too many versions when I am image-updating?
I am hoping someone who knows this code would chime in.
Steve
Looking at this further, I have convinced myself this should really be an
assert.
(I am running release builds, so asserts do not fire.)
I think in a debug build, I should be seeing the !list_empty() assert in:
void
list_remove(list_t *list, void *object)
{
        list_node_t *lold = list_d2l(list, object);
        ASSERT(!list_empty(list));
        ASSERT(lold->list_next != NULL);
        list_remove_node(lold);
}
of interest :(
Steve
Another question: is zpool shrinking available?
On May 2, 2010, at 8:47 AM, Steve Staples wrote:
Hi there!
I am new to the list, and to OpenSolaris, as well as ZFS.
I am creating a zpool/zfs to use on my NAS server, and basically I want some
redundancy for my files/media. What I am looking to do is get a bunch of
2TB drives
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Steve Staples
My problem is that not all 2TB hard drives are the same size (even
though they should be 2 trillion bytes, there is still sometimes a +/-).
(I've only noticed this 2x
my media/files into basically what consists of a JBOD-type
array (on steroids).
Steve
http://www.bitshop.com/Blogs/tabid/95/EntryId/78/Bug-in-OpenSolaris-SMB-Server-causes-slow-disk-i-o-always.aspx
This explains just how major a bug this issue is, IMHO. From the
symptoms, I now think the SMB slowdown from Windows 2003 is doing
something odd in the kernel. See the tests for
We have had a pool on a fishworks appliance since June 2009, which has been
filled up with files of varying sizes... all of this data was written
prior to the metaslab fix of reducing metaslab_df_free_pct to 5...
What are the chances that our pool is now highly fragmented, and the metaslab
I would like to ask a question regarding ZFS performance overhead when having
hundreds of millions of files.
We have a storage solution where one of the datasets has a folder containing
about 400 million files and folders (very small 1K files).
What kind of overhead do we get from this kind of
Hi Kjetil.
Actually we are using hardware RAID5 on this setup, so Solaris only sees a
single device...
The overhead I was thinking of was more in the pointer structures... (bearing
in mind this is a 128-bit file system), I would guess that memory requirements
would be HUGE for all these
It was never the intention that this storage system should be used in this
way...
And I am now clearing a lot of this stuff out...
These are very static files, and they are rarely used... so traversing them in
any way is a rare occasion...
What has happened is that reading and writing large files which are
that's not the issue here, as they are spread out in a folder structure based on
an integer split into hex blocks... 00/00/00/01 etc...
but the number of pointers involved with all these files and directories
(which are files) must have an impact on a system with limited RAM?
There is 4GB RAM
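A rough back-of-envelope, assuming the classic 512-byte on-disk dnode and
ignoring directory and indirect blocks:

  # 400 million dnodes at 512 bytes each, expressed in GiB
  echo $((400000000 * 512 / 1024 / 1024 / 1024))    # prints 190

So the dnodes alone come to roughly 190GB of metadata; a 4GB ARC can only
ever hold a small fraction of it.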
Well, I am deleting most of them anyway... they are not needed anymore...
Will deletion solve the problem, or do I need to do something more to defrag
the file system?
I have understood that defrag will not be available until this block-rewrite
thing is done?
it completely.
It seems like there should be some simple method to say FORCE this drive out of the pool.
Steve Radich
www.BitShop.com
drive tonight to see if it improves things.
Steve Radich
www.BitShop.com
I should note that trying zfs set primarycache=metadata tank1 took a few
minutes. It seems changing what is cached in RAM should be instant (we don't
need to flush the data out of RAM, just stop putting it back in).
During this, disk I/O seemed slow, though it could have been unrelated.
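To check whether the ARC actually shrinks after the change, a sketch:

  kstat -p zfs:0:arcstats:size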
for iSCSI and CIFS, which is its purpose, so a reboot isn't
planned unless this hangs in more ways. Hopefully it will respond in a while.
snv_129 installed.
Steve Radich - Founder and Principal of Business Information Technology Shop -
www.bitshop.com
Developer Resources Site
on some of the copies I see
almost no disk writes.
If the dup-check logic happens first AND it's a duplicate, I should see
hardly any CPU use (because it won't need to compress the data).
Steve Radich
BitShop.com
Hi,
I've raised bug 6806344 for this problem. I have been able to test the
fix by patching
fmd using mdb, but if you wish to test it, there's an x86 binary in
/home/stephh/libtopo.so.1
Steve
Hi,
James Litchfield wrote:
known issue? I've seen this 5 times over the past few days. I
=129904410624
DTL=24
LABEL 2
LABEL 3
-Steve
After having a think I've come up with the following
hypothesis:
1) When
I wonder if your problem is related to mine:
(can't import zpool after upgrade to solaris 10u6 -
http://www.opensolaris.org/jive/thread.jspa?messageID=324994#324994).
What does zdb -l give you?
zdb -l /dev/dsk/c1d1
zdb -l /dev/dsk/c2d0
zdb -l /dev/dsk/c2d1
-Steve
reason.
3) When I re-installed Solaris 10u6, I lost the zpool.cache file, and now ZFS
looks at the data structures on the disk and they are inconsistent.
Could the above have actually happened? It would explain what I'm seeing.
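A sketch of re-scanning the disks without the cache file (pool name assumed):

  zpool import           # scan /dev/dsk for importable pools
  zpool import -f zpool  # then import by name, forcing if the last host differs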
-Steve
transaction IDs than zdb is showing).
Am I missing something?
-Steve
# zdb -l /dev/dsk/c0t0d0s7
LABEL 0
version=4
name='zpool'
state=0
txg=1809157
pool_guid=17419375665629462002
in advance,
Steve
> if I ever add a new 'gold vdi file', it does not affect the
> clones, [...] I'll be testing more OS's than the current ones,
> so scalability
what?
What I meant is: if I have a zfs filesystem with a bunch of gold images
(VDIs), I would zfs snapshot/clone the filesystem. If I add
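A sketch of the pattern I mean (dataset names hypothetical):

  zfs snapshot tank/gold@v1
  zfs clone tank/gold@v1 tank/vm01

The clone shares blocks with the snapshot it came from, so adding a new gold
image later never touches the existing clones.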
Leal,
Yes, it was a stripe, so I have problems. There is really nothing I can do at
this point, but luckily I've backed up my important data elsewhere; it'll
take a while to get some of my other non-critical information back. Oh well, you
win some, you lose some. It's all a learning
Hi Lori,
is ZFS boot still planned for S10 update 6?
Thanks,
Steve
I didn't thoroughly search, but it seems that Newegg doesn't have any micro ATX
MB with the chipsets specified on Wikipedia as supporting ECC!... (query:
Form Factor[Micro ATX], North Bridge[Intel 925X], North Bridge[Intel 975X],
North Bridge[Intel X38], North Bridge[Intel X48])
So, better
Since all the other components can be the same (RAM, CPU, HDD, case, etc.), why
not spend $30 more for this?
http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
If you're really crazy for miniaturization, check out this:
http://www.elma.it/ElmaFrame.htm
It's a hot-swappable case for four 2.5" drives that fits in one 5.25" slot!
You'll get low power consumption (= low heating) and it will be easier to find
a mini-ITX case that fits just this and the mobo! ;-)
I agree with mike503. If you create the awareness (of the instability of
recorded information) there is a large potential market waiting for a little
ZFS/NAS server!
The thin-client idea is very nice. It would also be good to use the NAS server
as a full server and access it remotely with a very thin
waynel wrote:
We have a couple of machines similar to the one you just
spec'ed. They have worked great. The only problem
is, the power management routine only works for K10
and later. We will move to Intel Core 2 Duo for
future machines (mainly b/c of power management
considerations).
So is
A little case modding is maybe not so difficult... there are examples (and
instructions) like:
http://www.mashie.org/casemods/udat2.html
But for sure there are more advanced ones, like:
http://forums.bit-tech.net/showthread.php?t=76374pp=20
And here you can see a full example of human ingenuity!!
If I understood properly, there is just one piece that has to be modified: a
flat aluminium board with a square hole in the center, which any fine mechanic
around your city should make very easily...
More than the noise, the problem in this tight case might be the temperature!
From the information gathered, it seems that the better choice is the ASUS M2A-VM:
tested happily, cheap enough (47€), not badly performing, 4 SATA, GB Ethernet,
DVI, FireWire, etc. The only concern was a possible DMA bug in the south bridge,
but it seems not so important. (!)
Now the options will
To try to keep track of the discussion, I've created a wikiable web list of
what was discussed in this thread and what I found on the HCL!
The problem is still the same: which are the best ones to pick? ;-)
Comments are open (also on features to list) and everyone can edit the list!
http://app.blist.com/#/blist/mar.ste/Micro-mini-ATX-mainboards-for-Solaris-ZFS-NAS-server
many on HD setup:
Thanks for the replies, but the actual doubt is about the MB.
I would go with the suggestion of different HDs (even if I think that the speed
will be aligned to the slowest of them), and maybe raidz2 (even if I think
raidz is enough for a home server).
bhigh:
It seems that 780G/SB700
Following the VIA link and googling a bit, I found something that seems
interesting:
- MB: http://www.avmagazine.it/forum/showthread.php?s=threadid=108695
- the case: http://www.chenbro.com/corporatesite/products_detail.php?serno=100
Are they viable??
PS: I scaled down to the mini-ITX form factor because it seems that the
http://www.chenbro.com/corporatesite/products_detail.php?serno=100 is the
PERFECT case for the job!
Aaargh! My perfect case not working!!
Shouldn't the backplane be just a pass-through? Was something left
unmounted? Was the power not enough for all the disks? Can it depend on the
disks?
Did you get any replies?
I would also ask Chenbro tech support directly
Thank you very much Brandon for pointing out the issue with the case!!
(Anyway, that's really a pity; I hope it will find a solution!...)
About Atom, a person from Sun was pointing out that the only good version for
ZFS would be the N200 (64-bit). Anyway, I wouldn't make a problem of money
(still ;-), but
On Thu, Jul 24, 2008 at 1:28 AM, Steve
[EMAIL PROTECTED] wrote:
And interesting of booting from CF, but it seems is
possible to boot from the zraid and I would go for
it!
It's not possible to boot from a raidz volume yet.
You can only boot
from a single drive or a mirror.
If I
> And, if better, I'm open also to Intel!
With intel you can possibly get onboard AHCI that works, and the intel
gigabit MAC, and 16GB instead of 8GB RAM on a desktop board. Also the
video may be better-supported. But it's, you know, intel.
Miles, sorry, but probably I'm missing something to
I've been a fan of ZFS since I read about it last year.
Now I'm on the way to building a home fileserver and I'm thinking of going with
OpenSolaris and eventually ZFS!!
Apart from the other components, the main problem is choosing the motherboard.
The choice is incredibly wide and I'm lost.
Minimum
Thank you for all the replies!
(and in the meantime I was just having dinner! :-)
To recap:
tcook:
you are right, in fact I'm thinking of having just 3 or 4 for now, without
anything else (no CD/DVD, no video card, nothing else than MB and drives);
the case will be the second choice, but I'll try to
ZFS cache memory is not considered free. rcapd uses the
sysconf(_SC_AVPHYS_PAGES) interface to get the amount of free memory.
One possibility could be to alter sysconf to report some amount of the
ZFS cache as free.
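For what it's worth, you can see the current split with (a sketch; recent
builds break out ZFS file data as its own category):

  echo ::memstat | mdb -k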
-Steve L.
On Wed, Jun 18, 2008 at 01:18:07PM -0500, Brian Smith wrote:
When
THANK YOU VERY MUCH EVERYONE!!
You have been very helpful and my questions are (mostly) resolved. While I am
not (and probably will not become) a ZFS expert, I now at least feel confident
that I can accomplish what I want to do.
My last comment on this is:
I realize that ZFS is designed
be nearly that of the Drobo product. Which brings me to my final
question: is there a GUI tool available? I can use the command line just like
the next guy, but GUIs sure are convenient...
Thanks for your help!
-Steve
OK, so in my (admittedly basic) understanding of raidz and raidz2, these
technologies are very similar to RAID5 and RAID6. BUT if you set up one disk
as a raidz vdev, you (obviously) can't maintain data after a disk failure, but
you are protected against data corruption that is NOT a result of
Sooo... I've been reading a lot in various places. The conclusion I've drawn
is this:
I can create raidz vdevs in groups of 3 disks and add them to my zpool to be
protected against 1 drive failure. This is the current status of growing
protected space in raidz. Am I correct here?
This
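A sketch of that growth pattern (device names hypothetical):

  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
  zpool add tank raidz c0t3d0 c0t4d0 c0t5d0

Each added raidz vdev contributes the capacity of two of its three disks and
tolerates one failure within the group.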
Hardware: Supermicro server with Adaptec 5405 SAS controller, LSI expander -
24 drives. Currently using 2x 1TB SAS drives striped and 1x 750GB SATA as
another pool. I don't think hardware is related, though, as if I turn off ZFS
compression it's fine - I seem to get the same behavior on either pool.
in the same boat) and was able to
work around it by breaking my filesystem up into lots of individual zfs
filesystems. Although the performance of each one isn't great, as long as your
load is threaded and distributed across filesystems, it should balance out.
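The workaround amounts to something like this (a sketch; dataset names
hypothetical):

  for i in 0 1 2 3 4 5 6 7; do zfs create tank/data$i; done

Each filesystem gets its own ZIL, so the lock contention is spread across
them rather than serialized on one log.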
Steve Hillman
Simon Fraser University
. If there's something I'm doing wrong, I'd love to hear about it.
I'm also assuming that this ties into BugID 6535160 "Lock contention on
zl_lock from zil_commit", so if that's the case, please add another vote for
making this fix available as a patch for S10u4 users
Thanks,
Steve Hillman
In general you should not allow a Solaris system to be both an NFS server and
NFS client for the same filesystem, irrespective of whether zones are involved.
Among other problems, you can run into kernel deadlocks in some (rare)
circumstances. This is documented in the NFS administration docs.
in a pool. I'd like to better understand
the risks/behaviour of ZFS before starting to work on mitigation strategies.
Thanks
Steve
Another option is to set the TSM server to do a selective backup of the ZFS
filesystems. This is probably the least friendly of the solutions provided,
though... personally I'm switching to Tomas' suggestion.
Steve
On 7/10/07, Christopher Gibbs [EMAIL PROTECTED] wrote:
We use TSM to backup
Hi All,
Where is the ZFS configuration (zpools, mountpoints, filesystems, etc.)
stored within Solaris? Is there something akin to vfstab, or perhaps a database?
Thanks,
Steve
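For reference, the pool configuration lives in /etc/zfs/zpool.cache plus the
vdev labels on the disks themselves, and mountpoints are ZFS properties rather
than vfstab entries. A sketch of peeking at both (device name hypothetical):

  zdb -C                      # dump the cached pool configuration
  zdb -l /dev/dsk/c0t0d0s0    # print the four labels on a vdev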
be *any* margin in
consumer electronics these days...
Steve.
,
Steve
once replication can
be varied on a per dataset (or per file) basis? You could have all your
'essential to boot' files mirrored across all disks, then raidz2 the
rest...
Steve.
be a bit off, but it's the impression I was left with.
Anyhow, very slick UI, sort of dubious back end, interesting possibility
for integration with ZFS.
You were spot on with your description and conclusion. More details here:
http://arstechnica.com/staff/fatbits.ars/2006/8/15/4995
Steve
these fixes, or are they going
to be released as patches before then?
I'm presuming that U3 is scheduled for early 2007...
Steve.
some files mirrored, with others as raidz, and others with no
resilience? This would allow a pool to initially exist on one
disk, then gracefully change between different resilience
strategies as you add disks and the requirements change.
Apologies if this is pie in the sky.
Steve
correlation between the size of a file and the amount of disk space it
uses goes away in any case.
All pretty exciting - how long are we going to have to wait for this?
Steve.
send -i)
The ability to back stuff up well would make widespread adoption easier,
especially if thumper lives up to expectations.
Steve.
be pretty powerful.
Steve.
to
standard utils than have tools like zfs taking magical
actions in the background.
Steve.
/home/NN/username
where NN is a 2-digit number.
I don't think there's any way to specify an automount map
with multiple levels in it.
We could do it by having multiple automount maps, but then it
all starts getting messy.
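One possible escape hatch is an executable map: if the local map file has the
execute bit set, automountd runs it with the key as its argument and reads the
entry from stdout. A sketch - the NN lookup is site-specific and the helper
here is purely hypothetical:

  #!/bin/ksh
  # /etc/auto_home with execute permission; autofs passes the username as $1
  user="$1"
  nn=$(/usr/local/bin/user_to_nn "$user")  # hypothetical: maps user -> NN bucket
  echo "server:/export/home/$nn/$user"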
Steve.
to preserve the
/export/home/NN/username on the server, but have /home/username on the
client - we were considering this on a different system here (where
we're encountering similar problems with a panasas fileserver).
Thanks
Steve.
that I don't have to resize later, which
inevitably means at least one filesystem being far bigger than it needs
to be.
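With ZFS the 'resize' collapses to a property change - a sketch, names
hypothetical:

  zfs set quota=20g tank/home/fred
  zfs set quota=40g tank/home/fred    # grown later in place, no repartitioning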
Steve.
.
Thanks for your help.
Steve.