Can't say when the problems may have been introduced, but it looks like we've
got my report (b104) and another report from b111 of issues with the 1068E.
The IR firmware seems to do some sort of internal multipathing while the IT
firmware doesn't do any. With the IT firmware, I enabled multipathing
I'm glad I was able to help someone.
My card is also a 3081E-R (B3). It shipped to me with the IR firmware, and I
immediately flashed the IT firmware on it because I had heard it was supposed
to be (better, faster, stable, shiny) with Solaris and ZFS.
The motherboard on that server has an LSI
As I have understood it from reading Jeff Bonwick's blog, async dedup is not
supported. The reason is that async is good if you have constraints on CPU and
RAM. But today's modern CPUs can dedup in real time, so async is not needed.
Async allows dedup when you have spare clock cycles to burn (in the ni
I'm a bit unclear how to use/try de-duplication in asynchronous mode? Can
someone kindly clarify?
Is it as simple as enabling then disabling after something completes?
Thanks
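For what it's worth, dedup on ZFS is a synchronous, per-dataset property; there is no separate asynchronous mode to enable. Toggling it only affects blocks written while it is on. A minimal sketch, with hypothetical pool/dataset names:

```shell
# Enable dedup before a bulk write, disable it afterwards.
zfs set dedup=on tank/data
# ... copy data; only blocks written while dedup=on are deduplicated ...
zfs set dedup=off tank/data

# The pool-wide dedup ratio shows up in the DEDUP column:
zpool list tank
```

Note that turning dedup off does not re-duplicate existing blocks; it only stops deduplicating new writes.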
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Fri, 11 Dec 2009, Bob wrote:
Thanks. Any alternatives, other than using enterprise-level drives?
You can of course use normal consumer drives. Just don't expect them
to recover from a read error very quickly.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
Thanks. Any alternatives, other than using enterprise-level drives?
On Fri, 11 Dec 2009, Bob wrote:
For a complete newbie, can someone simply answer the following: will
using non-enterprise level drives affect ZFS like it affects
hardware RAID?
Yes.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
On Dec 11, 2009, at 3:26 PM, Alexander Skwar wrote:
Hi!
On Fri, Dec 11, 2009 at 15:55, Fajar A. Nugraha
wrote:
On Fri, Dec 11, 2009 at 4:17 PM, Alexander Skwar
wrote:
$ sudo zfs create rpool/rb-test
$ zfs list rpool/rb-test
NAME           USED  AVAIL  REFER  MOUNTPOINT
rpool/rb-test   18K   170G    18K  /rpool/rb-test
After a Power Outage last week my server wouldn't turn on anymore (yes, UPS
...). I tracked it down to a motherboard failure and ordered a new MB, CPU &
Memory ...
After swapping it all out the system failed to boot with:
NOTICE: error reading device label
NOTICE:
**
I'm also planning on building a home file server using ZFS, and this issue has
also come to my attention during my research. I'm afraid that I'm a complete
ZFS/NAS/RAID newbie, so honestly half the things discussed in this thread went
over my head. :)
For a complete newbie, can someone simply answer the following: will
using non-enterprise level drives affect ZFS like it affects
hardware RAID?
On 12/11/09 14:56, Bill Sommerfeld wrote:
On Fri, 2009-12-11 at 13:49 -0500, Miles Nordin wrote:
"sh" == Seth Heeren writes:
sh> If you don't want/need log or cache, disable these? You might
sh> want to run your ZIL (slog) on ramdisk.
seems quite silly. why would you do that instead of just disabling
the ZIL?
It looks like 6574286[1] is fixed in OpenSolaris as of October. Anyone
know when this will show up in Solaris 10?
Running Solaris 10 10/09 (fully patched) but am still not able to
remove slog devices from a running zpool.
Thanks,
Ray
[1] http://bugs.opensolaris.org/view_bug.do?bug_id=6574286
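For reference, on builds where the fix for 6574286 is present (log device removal arrived with pool version 19), removing a slog from a live pool is a single command. Pool and device names below are hypothetical:

```shell
# Add a dedicated log (slog) device to a pool...
zpool add tank log c1t2d0

# ...and, on a pool version that supports it, remove it again
# while the pool stays online:
zpool remove tank c1t2d0
zpool status tank    # the log vdev should no longer be listed
```

On older pool versions the `zpool remove` of a log vdev simply fails, which matches the behavior Ray is seeing on Solaris 10 10/09.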
This is
CR 6907830 rquotad(1M) doesn't return quotas for ZFS if NFS client
mountpoint differs from entry in /etc/mnttab
Fix is in progress.
Thanks,
Lin
On 11/26/09 04:59, Willi Burmeister wrote:
Hi,
we have a new fileserver running on X4275 hardware with Solaris 10U8.
On this fileserver we
On Fri, 2009-12-11 at 13:49 -0500, Miles Nordin wrote:
> > "sh" == Seth Heeren writes:
>
> sh> If you don't want/need log or cache, disable these? You might
> sh> want to run your ZIL (slog) on ramdisk.
>
> seems quite silly. why would you do that instead of just disabling
> the ZIL
Len Zaifman wrote:
We have just updated a major file server to Solaris 10 update 9 so that we can
control user and group disk usage on a single filesystem.
We were using QFS, and one nice thing about samquota was that it told you your
soft limit, your hard limit, and your usage of disk space and
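For anyone comparing with samquota: the user and group quotas added in Solaris 10 update 9 are per-dataset ZFS properties, and usage is reported per user or group (dataset and principal names below are hypothetical):

```shell
# Set per-user and per-group quotas on a dataset.
zfs set userquota@alice=10G tank/home
zfs set groupquota@staff=100G tank/home

# Report per-user / per-group usage against the quota
# (note: ZFS has a single limit, no soft/hard split as in samquota):
zfs userspace tank/home
zfs groupspace tank/home
```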
Yes, in fact the Openpegasus server is already included with Opensolaris under
the SUNWcimserver package. I don't know how extensive the implementation is,
though, yet - I was able to install it and get it running, but not much beyond
that.
On Fri, Dec 11, 2009 at 11:43 PM, Nick wrote:
> No, it is not, for a couple of reasons. First of all, rumor is that SMC is
> being discontinued in favor
> of a WBEM/CIM- based management system.
Is a specific implementation meant? Are there any plans wrt OpenPegasus?
Regards,
Andrey
Second,
No, it is not, for a couple of reasons. First of all, rumor is that SMC is
being discontinued in favor of a WBEM/CIM-based management system. Second, the
SMC code is not open-source, which means it cannot be included in OpenSolaris.
It is included in Solaris Express Community Edition (SXCE),
Hi!
On Fri, Dec 11, 2009 at 15:55, Fajar A. Nugraha wrote:
> On Fri, Dec 11, 2009 at 4:17 PM, Alexander Skwar
> wrote:
>> $ sudo zfs create rpool/rb-test
>>
>> $ zfs list rpool/rb-test
>> NAME USED AVAIL REFER MOUNTPOINT
>> rpool/rb-test 18K 170G 18K /rpool/rb-test
>>
>> $
On Thu, Dec 10, 2009 at 5:18 PM, Tom Erickson wrote:
> That's what I meant: If you 'zfs set' a property on a dataset that you
> plan to update with 'zfs receive', then receiving from 'zfs send -Ri' or
> 'zfs send -RI' will not overwrite the property on that existing dataset.
Thanks for the additi
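Tom's point about locally set properties surviving an incremental receive can be sketched like this (pool and dataset names are hypothetical):

```shell
# A property set locally on the receive-side dataset...
zfs set compression=gzip tank/backup

# ...is not overwritten by a recursive incremental receive:
zfs send -RI tank/src@snap1 tank/src@snap2 | zfs receive tank/backup

# The property's source remains 'local' rather than 'received':
zfs get -o value,source compression tank/backup
```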
Hi.
On Fri, Dec 11, 2009 at 15:35, Ross Walker wrote:
> On Dec 11, 2009, at 4:17 AM, Alexander Skwar
> wrote:
>
>> Hello Jeff!
>>
>> Could you (or anyone else, of course *G*) please show me how?
[...]
>> Could you please be so kind and show what exactly
>> needs to be done?
>
> I think you might
> "sh" == Seth Heeren writes:
sh> If you don't want/need log or cache, disable these? You might
sh> want to run your ZIL (slog) on ramdisk.
seems quite silly. why would you do that instead of just disabling
the ZIL? I guess it would give you a way to disable it pool-wide
instead of
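For context, the two options being weighed above looked roughly like this on Solaris builds of this era (pool and device names are hypothetical; note that a ramdisk slog loses synchronous writes on power failure just as a disabled ZIL does):

```shell
# Seth's suggestion: carve out a ramdisk and attach it as a slog.
ramdiskadm -a zil0 1g
zpool add tank log /dev/ramdisk/zil0

# Miles's alternative: disable the ZIL system-wide via /etc/system
# (affects every pool on the host, requires a reboot, and is unsafe
# for NFS clients that depend on synchronous semantics):
echo 'set zfs:zil_disable = 1' >> /etc/system
```

The ramdisk approach at least confines the risk to one pool, which may be the pool-wide vs. system-wide distinction being alluded to here.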
Hi Stefano,
I think you are saying that you have a non-redundant configuration
with data striped across 3 LUNs that is experiencing a disk problem
as seen by the SCSI retry messages and the I/O error when accessing
this file. ZFS can't provide a whole file in a non-redundant
configuration.
See R
On Dec 10, 2009, at 2:09 PM, Stefano Pini wrote:
Hi guys,
I have a pool made with three luns striped.
After some retryable SCSI messages that happened during storage
activity, zpool status started to report a checksum error on one
file only.
A zpool scrub finds it but doesn't fix it, and when
Hi,
just got a quote from our campus reseller that readzilla and logzilla
are not available for the X4540 - hmm, strange. Anyway, wondering
whether it is possible/supported/would make sense to use a Sun Flash
Accelerator F20 PCIe Card in a X4540 instead of 2.5" SSDs?
If so, is it possible to
On Fri, Dec 11, 2009 at 4:17 PM, Alexander Skwar
wrote:
> $ sudo zfs create rpool/rb-test
>
> $ zfs list rpool/rb-test
> NAME USED AVAIL REFER MOUNTPOINT
> rpool/rb-test 18K 170G 18K /rpool/rb-test
>
> $ sudo zfs snapshot rpool/rb-test@01
> $ sudo zfs snapshot rpool/rb-test@
On Dec 11, 2009, at 4:17 AM, Alexander Skwar wrote:
Hello Jeff!
Could you (or anyone else, of course *G*) please show me how?
Situation:
There shall be 2 snapshots of a ZFS called rpool/rb-test
Let's call those snapshots "01" and "02".
$ sudo zfs create rpool/rb-test
$ zfs list rpool/rb-t
Kjetil Torgrim Homme wrote:
Brandon High writes:
Matthew Ahrens wrote:
Well, changing the "compression" property doesn't really interrupt
service, but I can understand not wanting to have even a few blocks
with the "wrong"
I was thinking of sharesmb or sharenfs settings when I wrote that.
Toggling them for
Brandon High writes:
> Matthew Ahrens wrote:
>> Well, changing the "compression" property doesn't really interrupt
>> service, but I can understand not wanting to have even a few blocks
>> with the "wrong"
>
> I was thinking of sharesmb or sharenfs settings when I wrote that.
> Toggling them for
Hello Jeff!
Could you (or anyone else, of course *G*) please show me how?
Situation:
There shall be 2 snapshots of a ZFS called rpool/rb-test
Let's call those snapshots "01" and "02".
$ sudo zfs create rpool/rb-test
$ zfs list rpool/rb-test
NAME           USED  AVAIL  REFER  MOUNTPOINT
rpool/rb-test   18K   170G    18K  /rpool/rb-test
Yes, although it's slightly indirect:
- make a clone of the snapshot you want to roll back to
- promote the clone
See 'zfs promote' for details.
Jeff
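Spelling out Jeff's recipe with the dataset names from this thread (assuming snapshots 01 and 02 of rpool/rb-test already exist; the clone name is arbitrary):

```shell
# Roll back to snapshot 01 without destroying the later snapshot 02:
zfs clone rpool/rb-test@01 rpool/rb-test-rolledback
zfs promote rpool/rb-test-rolledback

# After promotion the clone owns the snapshot history, so the old
# filesystem can be renamed out of the way and the clone put in its place:
zfs rename rpool/rb-test rpool/rb-test-old
zfs rename rpool/rb-test-rolledback rpool/rb-test
```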
On Fri, Dec 11, 2009 at 08:37:04AM +0100, Alexander Skwar wrote:
> Hi.
>
> Is it possible on Solaris 10 5/09, to rollback to a Z