Christophe Rolland wrote:
Hi all,
we are considering using ZFS for various storage (DB, etc.). Most features are
great, especially the ease of use.
Nevertheless, a few questions:
- we are using SAN disks, so most JBOD recommendations don't apply, but I did
not find many experiences of a zpool of a
It does. The file size is limited to the original creation size, which is 65k
for files with 1 data sample.
Unfortunately, I have zero experience with dtrace and only a little with truss.
I'm relying on the dtrace scripts from people on this thread to get by for now!
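For anyone else in the same boat, a minimal one-liner is often enough to get started; a sketch, assuming the fbt `zfs_read` probe exists on your build (the aggregation name is illustrative):

```shell
# count zfs_read() entries per process until you press Ctrl-C
dtrace -n 'fbt:zfs:zfs_read:entry { @reads[execname] = count(); }'
```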
Hi Robert,
thanks for the answer.
You are not the only one. It's somewhere on ZFS developers list...
Yes, I checked this on the whole list.
So, let's wait for the feature.
Actually it should complain, and using -f (force)
on the active node, yes.
But if we want to reuse the LUNs on the other
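For reference, the usual sequence for moving a pool between nodes, with -f only as a last resort when the pool was not cleanly exported ("tank" is a placeholder pool name):

```shell
# on the node that currently owns the pool
zpool export tank

# on the node taking over; -f forces the import if the pool
# still looks active because it was never exported
zpool import -f tank
```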
Hello Christophe,
Friday, February 1, 2008, 7:55:31 PM, you wrote:
CR Hi all
CR we are considering using ZFS for various storage (DB, etc.). Most
CR features are great, especially the ease of use.
CR Nevertheless, a few questions:
CR - we are using SAN disks, so most JBOD recommendations don't
CR
Priming the cache for ZFS should work, at least right after boot
when freemem is large; any read block will make it into the
cache. Post-boot, when memory is already primed with something else
(what?), it gets more difficult for both UFS and ZFS to
guess what to keep in their caches.
Did you try priming ZFS after boot?
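One simple way to prime the ARC after boot is just to read the files you care about; a sketch, assuming the data of interest lives under a hypothetical /tank/db:

```shell
# read every file once so its blocks land in the ARC
find /tank/db -type f -exec dd if={} of=/dev/null bs=1024k \;
```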
With my (COTS) LSI 1068- and 1078-based controllers I get consistently
better performance when I export all disks as JBOD (MegaCli
-CfgEachDskRaid0).
Is that really 'all disks as JBOD', or is it 'each disk as a single-drive
RAID0'?
Single-disk RAID0:
./MegaCli -CfgEachDskRaid0 Direct
Good morning all,
can anyone confirm that 3ware RAID controllers are indeed not working
under Solaris/OpenSolaris? I can't seem to find them in the HCL.
We're now using a 3ware 9550SX as an S-ATA RAID controller. The
original plan was to disable all its RAID functions and use just the
S-ATA
The latest changes to the sata and marvell88sx modules
have been put back to Solaris Nevada and should be
available in the next build (build 84). Hopefully,
those of you who use it will find the changes helpful.
This message posted from opensolaris.org
Enrico,
Is there any forecast to improve the efficiency of the replication
mechanisms of ZFS? Fishworks - the new NAS release
I would take some time to talk with the customer and understand exactly
what their expectations are for replication. I would not base my
decision on the cost of
Have you looked at AVS? (http://opensolaris.org/os/project/avs/)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Is deleting the old files/directories in the ZFS file system
sufficient, or do I need to destroy/recreate the pool and/or file
system itself? I've been doing the former.
The former should be sufficient; it's not necessary to destroy the pool.
-j
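To make the distinction concrete, a quick sketch (the pool and filesystem names are placeholders):

```shell
# sufficient: remove the old data in place; pool and filesystem survive
rm -rf /tank/scratch/*

# only needed if you want a genuinely fresh filesystem
zfs destroy tank/scratch
zfs create tank/scratch
```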
I ran this dtrace script and got no output. Any ideas?
Well, 5 minutes after posting that, the resilver completed. However, despite
it saying that the resilver completed with 0 errors ten minutes ago, the device
still shows as unavailable, and my pool is still degraded.
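In that situation it may be worth checking what the pool itself reports and trying to clear the stale state; a sketch, with "tank" and the device name as placeholders:

```shell
# show only pools with problems, with per-device error detail
zpool status -xv tank

# clear error counters and any stale faulted state
zpool clear tank

# explicitly bring the device back online if it is still marked offline
zpool online tank c2t1d0
```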
Found my first problems with this today. The ZFS mirror appears to work fine,
but if you disconnect one of the iSCSI targets it hangs for 5 mins or more.
I'm also seeing very concerning behaviour when attempting to re-attach the
missing disk.
My test scenario is:
- Two 35GB iSCSI targets
Ross wrote:
Bleh, found out why they weren't appearing: I was just creating a regular
ZFS filesystem and setting shareiscsi=on. If you create a volume, it works
fine...
I wonder if that's something that could be added to the
documentation for shareiscsi? I can see now that
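The difference in a nutshell, assuming the legacy iscsitgt target daemon (pool and volume names are placeholders):

```shell
# a plain filesystem does NOT become an iSCSI target
zfs create tank/fs
zfs set shareiscsi=on tank/fs      # no target appears

# a zvol does
zfs create -V 35g tank/vol0
zfs set shareiscsi=on tank/vol0
iscsitadm list target              # target backed by tank/vol0 shows up
```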
Is there any forecast to improve the efficiency of the replication
mechanisms of ZFS? Fishworks - the new NAS release
Considering the solution we are offering to our customer (5 remote
sites replicating to one central data-center) with ZFS (the cheapest
solution), I should consider
3 times
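For reference, the replication primitive the thread is discussing is snapshot-based send/receive; a sketch of a remote-site-to-central flow (host and dataset names are invented for illustration):

```shell
# initial full replica from a remote site to the central server
zfs snapshot tank/data@base
zfs send tank/data@base | ssh central zfs receive -F pool/site1/data

# afterwards, ship only the incremental delta between snapshots
zfs snapshot tank/data@today
zfs send -i tank/data@base tank/data@today | \
    ssh central zfs receive pool/site1/data
```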
On Feb 12, 2008 4:45 AM, Lida Horn [EMAIL PROTECTED] wrote:
The latest changes to the sata and marvell88sx modules
have been put back to Solaris Nevada and should be
available in the next build (build 84). Hopefully,
those of you who use it will find the changes helpful.
I have indeed found
Well, I got it working, but not in a tidy way. I'm running HA-ZFS here, so I
moved the ZFS pool over to the other node in the cluster. That had exactly the
same problem, however: the iSCSI disks were unavailable.
Then I found an article from November 2006
Bleh, found out why they weren't appearing: I was just creating a regular ZFS
filesystem and setting shareiscsi=on. If you create a volume, it works fine...
I wonder if that's something that could be added to the
documentation for shareiscsi? I can see now that all the examples of
Thank you for your info. So with dumpadm I can
manage crash dumps. And if ZFS is not capable
of handling those dumps, who cares? Then I will
create an extra slice for those purposes. No
problem.
Roman
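For the record, pointing the dump device at a dedicated slice is a one-liner (the slice name below is a placeholder; pick one from your own disk layout):

```shell
# show the current crash-dump configuration
dumpadm

# direct crash dumps to a dedicated disk slice instead of the ZFS pool
dumpadm -d /dev/dsk/c0t0d0s1
```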