By default, 'zpool import' looks only in /dev/dsk. Since you are using
/dev/lofi you will need to use 'zpool import -d /dev/lofi' to import
your pool.
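A minimal sketch of that export/import cycle (the pool name is taken from the procedure quoted below; this assumes the lofi devices are still attached):

```shell
# Export, then re-import with an explicit device search directory,
# since 'zpool import' only scans /dev/dsk by default.
zpool export pool_1and2
zpool import -d /dev/lofi pool_1and2
```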
Thanks,
George
sunnie wrote:
> Hey, guys
> I just ran a test using loop devices as vdevs for a zpool.
> Procedure as follows:
> 1) mkfi
Hey, guys
I just ran a test using loop devices as vdevs for a zpool.
Procedure as follows:
1) mkfile -v 100m disk1
mkfile -v 100m disk2
2) lofiadm -a disk1 /dev/lofi
lofiadm -a disk2 /dev/lofi
3) zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
4) zpool export pool_1and
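For reference, a corrected end-to-end sketch of steps 1-4 (the file paths are assumptions for illustration; note that lofiadm -a expects an absolute path to the backing file and prints the device name it assigns):

```shell
# Create two 100 MB backing files and attach them as lofi devices.
mkfile -v 100m /export/test/disk1
mkfile -v 100m /export/test/disk2
lofiadm -a /export/test/disk1     # prints the assigned device, e.g. /dev/lofi/1
lofiadm -a /export/test/disk2     # prints the assigned device, e.g. /dev/lofi/2
# Build a pool on the two lofi devices, then export it.
zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
zpool export pool_1and2
# Re-importing requires pointing zpool at /dev/lofi explicitly.
zpool import -d /dev/lofi pool_1and2
```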
Bernhard,
Here are the solaris 10 patches:
120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch
See http://www.opensolaris.org/jive/thread.jspa?threadID=39951&tstart=0
for more info.
Thanks,
George
Bernhard Holzer wrote:
> Hi,
>
> this parameter (zfs_nocacheflush) is
Hi,
this parameter (zfs_nocacheflush) is now integrated into Solaris 10 U4.
Is it possible to "just install a few patches" to enable it?
What patches are required?
Thanks
Bernhard
--
Bernhard Holzer
Sun Microsystems Ges.m.b.H.
Wienerbergstraße 3/7
A-1100 Vienna, Austria
Phone x60983/+43 1 60
The latest ZFS patches for Solaris 10 are now available:
120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch
ZFS Pool Version available with patches = 4
These patches will provide access to all of the latest features and bug
fixes:
Features:
PSARC 2006/288 zpool histo
Ben,
Much of this code has been revamped as a result of:
6514331 in-memory delete queue is not needed
Although this may not fix your issue it would be good to try this test
with more recent bits.
Thanks,
George
Ben Miller wrote:
> Hate to re-open something from a year ago, but we just had th
You need to install patch 120011-14. After you reboot you will be able
to run 'zpool upgrade -a' to upgrade to the latest version.
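A sketch of that sequence (run as root; the patchadd step and reboot are the standard Solaris 10 patching procedure, stated here as an assumption about the environment):

```shell
# Install the kernel patch, reboot, then upgrade the pools.
patchadd 120011-14       # SPARC kernel patch; 120012-14 on x86
init 6                   # reboot to pick up the patched kernel
zpool upgrade            # shows each pool's current on-disk version
zpool upgrade -a         # upgrades all pools to the latest version
```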
Thanks,
George
sunnie wrote:
> Hey, guys
> Since the current ZFS software only supports pool version 3, what should I
> do to upgrade the ZFS software or package
Hey, guys
Since the current ZFS software only supports pool version 3, what should I
do to upgrade the ZFS software or package?
PS. my current os: SUNOS 5.10 Generic_118833-33 sun4u sparc
This message posted from opensolaris.org
Actually, here are the first panic messages:
Sep 13 23:33:22 netra2 unix: [ID 603766 kern.notice] assertion failed:
dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 (0x5 == 0x0), file:
../../common/fs/zfs/space_map.c, line: 307
Sep 13 23:33:22 netra2 unix: [ID 10 kern.notice]
Sep 13
Many/most of these are available at
http://www.opensolaris.org/os/community/arc/caselog/YYYY/CCC
replacing YYYY/CCC with the case numbers below, as in
http://www.opensolaris.org/os/community/arc/caselog/2007/171
for the 2nd one below. I'm not sure why the first one (2007/142) isn't
there - I'
http://www.opensolaris.org/os/community/arc/caselog/
- Eric
On Tue, Sep 18, 2007 at 09:39:51PM -0400, Kent Watsen wrote:
>
> How does one access the PSARC database to look up the description of
> these features?
>
> Sorry if this has been asked before! - I tried google before posting
> this :
Basically, it is complaining that there aren't enough disks to read
the pool metadata. This would suggest that in your 3-disk RAID-Z
config, either two disks are missing, or one disk is missing *and*
another disk is damaged -- due to prior failed writes, perhaps.
(I know there's at least one disk
How does one access the PSARC database to look up the description of
these features?
Sorry if this has been asked before! - I tried google before posting
this :-[
Kent
George Wilson wrote:
> ZFS Fans,
>
> Here's a list of features that we are proposing for Solaris 10u5. Keep
> in mind that
On Sep 18, 2007, at 6:25 AM, Jill Duff wrote:
> Thanks for the feedback. I attempted to enter this bug into the
> OpenSolaris
> Bug Database yesterday, 9/17. However, it looks as if it has either
> been
> filtered out or I made an error during entry. I'm willing to re-enter
> it if that's
I have a raid-z zfs filesystem with 3 disks. The disks were starting to have
read and write errors.
The disks got so bad that I started to get trans_err. The server locked up and
was reset. Now when trying to import the pool, the system panics.
I installed the last Recommend on my Solar
ZFS Fans,
Here's a list of features that we are proposing for Solaris 10u5. Keep
in mind that this is subject to change.
Features:
PSARC 2007/142 zfs rename -r
PSARC 2007/171 ZFS Separate Intent Log
PSARC 2007/197 ZFS hotplug
PSARC 2007/199 zfs {create,clone,rename} -p
PSARC 2007/283 FMA for ZFS
I think they are listed in order with "zfs list".
Fred Oliver wrote:
> Along these lines, the creation time is provided to the nearest second.
> It is possible to have two snapshots (of the same file system) with the
> same time stamp.
>
> In this case, is there any way to determine which snap
Along these lines, the creation time is provided to the nearest second.
It is possible to have two snapshots (of the same file system) with the
same time stamp.
In this case, is there any way to determine which snapshot was created
earlier?
This would be helpful to know in order to predict th
Or 'zfs list -t snapshot -o name,creation'
- Eric
On Tue, Sep 18, 2007 at 01:41:19PM -0700, Richard Elling wrote:
> Try zpool history
> -- richard
>
> Poulos, Joe wrote:
> >
> >
> > Hello,
> >
> >
> >
> > Is there a way to find out what the timestamp is of a specific
> > snapshot? Curr
zfs get creation pool|filesystem|snapshot
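Putting the suggestions from this thread together (the pool and snapshot names below are made up for illustration):

```shell
# Three ways to see when a snapshot was created:
zfs get creation tank/home@monday            # one snapshot's creation property
zfs list -t snapshot -o name,creation        # creation time of every snapshot
zpool history tank                           # pool-wide log of zfs/zpool commands
```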
Poulos, Joe wrote:
>
> Hello,
>
>
>
> Is there a way to find out what the timestamp is of a specific
> snapshot? Currently, I have a system with 5 snapshots, and would like
> to know the timestamp as to when it was created. Thanks JOr
>
Matty wrote:
> On 9/18/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
>
>> Separate log devices (slogs) didn't make it into S10U4 but will be in U5.
>
> This is awesome! Will the SYNC_NV support that was integrated this
> week be added to update 5 as well? That would be super useful,
> assuming the
Try zpool history
-- richard
Poulos, Joe wrote:
>
>
> Hello,
>
>
>
> Is there a way to find out what the timestamp is of a specific
> snapshot? Currently, I have a system with 5 snapshots, and would like
> to know the timestamp as to when it was created. Thanks JOr
>
On 9/18/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
> Separate log devices (slogs) didn't make it into S10U4 but will be in U5.
This is awesome! Will the SYNC_NV support that was integrated this
week be added to update 5 as well? That would be super useful,
assuming the major arrays vendors suppor
Hello,
Is there a way to find out the timestamp of a specific snapshot?
Currently, I have a system with 5 snapshots, and would like to know
when each was created. Thanks JOr
I wrote a simple tool to print out the ARC statistics exported via
kstat. Details at
http://blogs.sun.com/realneel/entry/zfs_arc_statistics
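The same statistics the tool reads can also be inspected directly with the stock kstat(1M) utility (a sketch; Solaris only):

```shell
# Dump all ZFS ARC statistics exported through kstat.
kstat -m zfs -n arcstats
# Or select a single statistic, e.g. the current ARC size in bytes:
kstat -p zfs:0:arcstats:size
```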
-neel
--
---
Neelakanth Nadgir PAE Performance And Availability Eng
> > > Hello Matthew,
> > > Tuesday, September 12, 2006, 7:57:45 PM, you wrote:
> > > MA> Ben Miller wrote:
> > > >> I had a strange ZFS problem this morning. The entire system would
> > > >> hang when mounting the ZFS filesystems. After trial and error I
> > > >> determined that the
On 9/18/07 2:26 PM, "Neil Perrin" <[EMAIL PROTECTED]> wrote:
>
>
> Andy Lubel wrote:
>> On 9/18/07 1:02 PM, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:
>>
>>> Hey Andy,
>>>
>>> On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
>>>> I think we are very close to using zfs in our produc
Hello zfs-discuss,
I wonder if ZFS will be able to take any advantage of Niagara's
built-in crypto?
--
Best regards,
Robert Milkowski   mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
On 9/18/07 1:02 PM, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:
>
> Hey Andy,
>
> On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
>> I think we are very close to using zfs in our production environment.. Now
>> that I have snv_72 installed and my pools set up with NVRAM log devices
Separate log devices (slogs) didn't make it into S10U4 but will be in U5.
Andy Lubel wrote:
> I think we are very close to using zfs in our production environment.. Now
> that I have snv_72 installed and my pools set up with NVRAM log devices
> things are hauling butt.
>
> I've been digging to f
The log device feature integrated into snv_68.
You can read about them here:
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
And starting on page 18 of the ZFS Admin Guide, here:
http://opensolaris.org/os/community/zfs/docs
Albert Chin wrote:
> On Tue, Sep 18, 2007 at 12:59:02PM -
On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
> I think we are very close to using zfs in our production environment.. Now
> that I have snv_72 installed and my pools set up with NVRAM log devices
> things are hauling butt.
How did you get NVRAM log devices?
--
albert chin ([EMAIL
> > I have many small - mostly jpg - files where the original file is
> > approx. 1 MB and the thumbnail generated is approx. 4 KB. The files
> > are currently on vxfs. I have copied all files from one partition onto
> > a zfs-ditto. The vxfs-partition occupies 401 GB and zfs 449 GB. Most
> > files
Claus Guttesen wrote:
> I have many small - mostly jpg - files where the original file is
> approx. 1 MB and the thumbnail generated is approx. 4 KB. The files
> are currently on vxfs. I have copied all files from one partition onto
> a zfs-ditto. The vxfs-partition occupies 401 GB and zfs 449 GB.
I think we are very close to using zfs in our production environment.. Now
that I have snv_72 installed and my pools set up with NVRAM log devices
things are hauling butt.
I've been digging to find out whether this capability will be put into
Solaris 10; does anyone know?
If not, then I guess w
Check out this webpage:
http://www.wizy.org/wiki/ZFS_on_FUSE
Tushar Pardeshi wrote:
Hello,
I am a final year computer engg student and I am planning to implement
zfs on linux,
I have gone through the articles posted on solaris . Please let me
know about the
feasibility of zfs to be imple
Hello,
I am a final-year computer engineering student and I am planning to
implement ZFS on Linux.
I have gone through the articles posted on OpenSolaris. Please let me
know about the feasibility of implementing ZFS on Linux.
Waiting for valuable replies.
Thanks in advance.
On 9/14/07, [EMAIL PROT
On 9/18/07, Larry Wake <[EMAIL PROTECTED]> wrote:
> Matty wrote:
> > George Wilson put together a list of ZFS enhancements and bug fixes
> > that were integrated into Solaris 10 update 3, and I was curious if
> > there was something similar for update 4? There have been a bunch of
> > reliability a
Matty wrote:
> George Wilson put together a list of ZFS enhancements and bug fixes
> that were integrated into Solaris 10 update 3, and I was curious if
> there was something similar for update 4? There have been a bunch of
> reliability and performance enhancements to the ZFS code over the past
>
Attached is the list of ZFS improvements / bugs fixes in S10U4.
--
Prabahar.
4894692 caching data in heap inflates crash dump
6269805 properties should be set via an nvlist.
6276925 option to sort 'zfs list' output
6281585 user defined properties
6341569 zio_alloc_blk() vdev distribution performs
> What led you to the assumption it's ONLY those
> switches? Just because the patch is ONLY for those
> switches doesn't mean that the bug is only for them.
> The reason you only see the patch for 3xxx and newer
> is because the 2xxx was EOL before the patch was
> released...
>
> FabOS is FabOS
> Tomas Ögren wrote:
> > On 18 September, 2007 - Gino sent me these 0,3K bytes:
> >
> >> Hello,
> >> upgrade to snv_60 or later if you care about your data :)
> >
> > If there are known serious data loss bug fixes that have gone into
> > snv60+, but not into s10u4, then I would like to te
George Wilson put together a list of ZFS enhancements and bug fixes
that were integrated into Solaris 10 update 3, and I was curious if
there was something similar for update 4? There have been a bunch of
reliability and performance enhancements to the ZFS code over the past
few months, and I am cu
> I can provide the source code for my test app and one crash dump if anyone
> needs it. Yesterday, the crash was reproduced using bonnie++, an open source
> storage benchmark utility, although the crash is not as frequent as when
> using my test app.
>
>
Yes, it is appreciated if you could pro
Thanks for the feedback. I attempted to enter this bug into the OpenSolaris
Bug Database yesterday, 9/17. However, it looks as if it has either been
filtered out or I made an error during entry. I'm willing to re-enter it if
that's helpful.
I can provide the source code for my test app and one cra
Tomas Ögren wrote:
> On 18 September, 2007 - Gino sent me these 0,3K bytes:
>
>> Hello,
>> upgrade to snv_60 or later if you care about your data :)
>
> If there are known serious data loss bug fixes that have gone into
> snv60+, but not into s10u4, then I would like to tell Sun to "backport"
> t
On 18 September, 2007 - Gino sent me these 0,3K bytes:
> Hello,
> upgrade to snv_60 or later if you care about your data :)
If there are known serious data loss bug fixes that have gone into
snv60+, but not into s10u4, then I would like to tell Sun to "backport"
those into s10u4 if they care abou
Jim Mauro wrote:
>
> Hey Max - Check out the on-disk specification document at
> http://opensolaris.org/os/community/zfs/docs/.
>
> Page 32 illustration shows the rootbp pointing to a dnode_phys_t
> object (the first member of a objset_phys_t data structure).
>
> The source code indicates ub_rootbp
Hello,
upgrade to snv_60 or later if you care about your data :)
Gino
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Paul Kraus wrote:
> On 9/17/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>
>> Why not use the already assigned webservd/webservd 80/80 uid/gid pair ?
>>
>> Note that ALL uid and gid values below 100 are explicitly reserved for
>> use by the operating system itself and should not be used by end
> Will a different volblocksize (during creation of the partition) make
> better use of the available diskspace? Will (meta)data require less
> space if compression is enabled?
Just re-read the evil-tuning-guide, and metadata is already compressed
(http://www.solarisinternals.com/wiki/index.php/ZF
Hi.
I have many small - mostly jpg - files where the original file is
approx. 1 MB and the thumbnail generated is approx. 4 KB. The files
are currently on vxfs. I have copied all files from one partition onto
a zfs-ditto. The vxfs-partition occupies 401 GB and zfs 449 GB. Most
files uploaded are i