Robert Milkowski wrote:
> CG> Yes, it is Sun's cp. I'm trying, with some difficulty, to figure out
> CG> exactly how to reproduce this error in a way not specific to my data. I
> CG> copied a set of randomly generated files with a deep directory structure
Guanghui Wang wrote:
> I don't know when U5 or U6 will be coming, so I just set zfs_nocacheflush=1
> in /etc/system, and that speeds things up like zil_disable=1 but is safer
> for the filesystem.
>
> The separate log (slog) feature is not in U4; NFS performance on ZFS will be
> too slow
I don't know when U5 or U6 will be coming, so I just set zfs_nocacheflush=1
in /etc/system, and that speeds things up like zil_disable=1 but is safer
for the filesystem.
The separate log (slog) feature is not in U4; NFS performance on ZFS will be
too slow when you do not set zfs_nocacheflush.
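For reference, a minimal sketch of how that tunable is usually set (the
tunable name is real, the rest is illustration):

   # in /etc/system -- takes effect at the next reboot
   set zfs:zfs_nocacheflush = 1

It only stops ZFS from issuing cache-flush requests to the devices, so it is a
reasonable trade-off when the storage has battery-backed cache; on plain disks
with volatile write caches it can still lose recent writes on power failure.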
Awesome work you and your team are doing. Thanks Lori!
On Thu, Jan 31, 2008 at 03:15:30PM -0700, Lori Alt wrote:
>
> >Does this still seem likely to occur, or will it be pushed back further?
> >I see that build 81 is out today which means we are not far from seeing
> >ZFS boot on Sparc in Nevada?
> >
> The pressure to get this into build 86 is cons
Vincent Fox wrote:
zfs boot on sparc will not be putback on its own.
It will be putback with the rest of zfs boot support,
sometime around build 86.
Does this still seem likely to occur, or will it be pushed back further? I see
that build 81 is out today which means we are not far from seeing ZFS boot on
Sparc in Nevada?
[EMAIL PROTECTED] said:
> You still need interfaces, of some kind, to manage the device. Temp sensors?
> Drive FRU information? All that information has to go out, and some in, over
> an interface of some sort.
Looks like the Sun 2530 array recently added in-band management over the
SAS (data) path.
> zfs boot on sparc will not be putback on its own.
> It will be putback with the rest of zfs boot support,
> sometime around build 86.
Does this still seem likely to occur, or will it be pushed back further? I see
that build 81 is out today which means we are not far from seeing ZFS boot on
Sparc in Nevada?
Kyle McDonald wrote:
> Vincent Fox wrote:
>
>> So the point is, a JBOD with a flash drive in one (or two, to mirror the
>> ZIL) of the slots would be a lot SIMPLER.
>>
>> We've all spent the last decade or two offloading functions into specialized
>> hardware, and that has turned into these massive, unnecessarily complex
>> things.
Nope, doesn't work.
Try presenting one of those LUN snapshots to your host, run cfgadm -al,
then run zpool import:
# zpool import
no pools available to import
It would make my life so much simpler if you could do something like
this: zpool import --import-as yourpool.backup yourpool
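For what it's worth, zpool import does already accept a new name as a second
argument, which covers the renaming half of that wish (a sketch; the pool
names and the numeric ID below are made up):

   # import the pool under a different name
   zpool import yourpool yourpool.backup
   # or, when two pools share a name, import by numeric pool ID
   zpool import 6789012345678901234 yourpool.backup

The harder half is that a block-level LUN snapshot carries the same pool GUID
as the live pool, which may be why import refuses to see it at all here.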
Vincent Fox wrote:
> So the point is, a JBOD with a flash drive in one (or two, to mirror the ZIL)
> of the slots would be a lot SIMPLER.
>
> We've all spent the last decade or two offloading functions into specialized
> hardware, and that has turned into these massive, unnecessarily complex things.
Vincent Fox wrote:
| So the point is, a JBOD with a flash drive in one (or two, to mirror
| the ZIL) of the slots would be a lot SIMPLER.
I guess a USB pendrive would be slower than a hard disk. Bad performance
for the ZIL.
--
Jesus Cea Avion
kristof wrote:
> I don't have an exact copy of the error, but the following message was
> reported by zpool status:
>
> Pool degraded. Metadata corrupted. Please restore pool from backup.
>
> All devices were online, but the pool could not be imported. During import we
> got an I/O error.
>
zpool
So the point is, a JBOD with a flash drive in one (or two, to mirror the ZIL)
of the slots would be a lot SIMPLER.
We've all spent the last decade or two offloading functions into specialized
hardware, and that has turned into these massive, unnecessarily complex things.
I don't want to go to a new
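For reference, once separate intent-log devices land, the flash-in-a-JBOD-slot
idea he describes looks roughly like this (device names made up):

   # add two flash devices as a mirrored ZFS intent log
   zpool add tank log mirror c4t0d0 c4t1d0

With the log mirrored, losing one flash device does not take out the pool's
ZIL.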
The big problem that I have with non-directio is that buffering delays program
execution. When reading/writing files that are many times larger than RAM
without directio, it is very apparent that system response drops through the
floor; it can take several minutes for an ssh login to prompt for a password.
I don't have an exact copy of the error, but the following message was reported
by zpool status:
Pool degraded. Metadata corrupted. Please restore pool from backup.
All devices were online, but the pool could not be imported. During import we
got an I/O error.
Krdoor
On 1/31/08, Hector De Jesus <[EMAIL PROTECTED]> wrote:
>
> Hello SUN gurus, I do not know if this is supported. I have created a
> zpool consisting of the SAN resources and created a zfs file system. Using
> third-party software I have taken snapshots of all LUNs in the zfs pool. My
> question is
Vincent Fox wrote:
> When Sun starts selling good SAS JBOD boxes equipped with appropriate
> redundancies and a flash drive or two for the ZIL, I will definitely go that
> route. For now I have a bunch of existing Sun HW RAID arrays, so I make use
> of them mainly to make sure I can package LUNs an
Matthew Ahrens wrote:
| I believe this is because sharemgr does an O(number of shares) operation
| whenever you try to share/unshare anything (retrieving the list of shares
| from the kernel to make sure that it isn't/is already shared). I couldn't
Tomas Ögren wrote:
| To get similar (lower) consistency guarantees, try disabling ZIL..
| google://zil_disable .. This should up the speed, but might cause disk
| corruption if the server crashes while a client is writing data.. (just
| like with UFS)
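For the record, a sketch of how zil_disable was typically flipped on those
builds (the variable is real; the caveat above about corruption applies):

   # in /etc/system -- takes effect at the next reboot
   set zfs:zil_disable = 1

   # or on a live kernel, as root, via mdb
   echo zil_disable/W0t1 | mdb -kw

Unlike zfs_nocacheflush this skips the intent log entirely, so a crash can
lose whatever clients thought was committed.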
No, it is scheduled for U6.
Lori
Jesus Cea wrote:
>Lori Alt wrote:
>| zfs boot on sparc will not be putback on its own.
>| It will be putback with the rest of zfs boot support,
>| sometime around build 86.
>
>May I ask if ZFS boot will be available in Solaris 10 Update 5?
Hello SUN gurus, I do not know if this is supported. I have created a zpool
consisting of the SAN resources and created a zfs file system. Using
third-party software I have taken snapshots of all LUNs in the zfs pool. My
question is, in a recovery situation, is there a way for me to mount the
snapshot
Richard Elling wrote:
> Sergey wrote:
>
>> Hi list,
>>
>> I'd like to be able to store zfs filesystems on a tape drive that is
>> attached to another Solaris U4 x86 server. The idea is to use "zfs send"
>> together with tar in order to get the list of the filesystems' snapshots
>> stored on a tape
Sergey wrote:
> Hi list,
>
> I'd like to be able to store zfs filesystems on a tape drive that is
> attached to another Solaris U4 x86 server. The idea is to use "zfs send"
> together with tar in order to get the list of the filesystems' snapshots
> stored on a tape and be able to perform a restore operation
Lori Alt wrote:
| zfs boot on sparc will not be putback on its own.
| It will be putback with the rest of zfs boot support,
| sometime around build 86.
May I ask if ZFS boot will be available in Solaris 10 Update 5?
--
Jesus Cea Avion
I package up 5 or 6 disks into a RAID-5 LUN on our Sun 3510 and 2540 arrays.
Then I use ZFS to RAID-10 these volumes.
Safety first!
Quite frankly I've had ENOUGH of rebuilding trashed filesystems. I am tired
of chasing performance like it's the Holy Grail and shoving other
considerations aside
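A sketch of that layout, with made-up controller/target names; each device
below is one RAID-5 LUN exported by an array:

   # ZFS mirrors pairs of array LUNs ("RAID-10" across arrays)
   zpool create safepool \
       mirror c2t0d0 c3t0d0 \
       mirror c2t1d0 c3t1d0

ZFS then holds its own redundancy above the arrays', so a trashed LUN can be
detached and resilvered from its mirror partner.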
> Hi list,
>
> I'd like to be able to store zfs filesystems on a tape drive that is
> attached to another Solaris U4 x86 server. The idea is to use "zfs send"
> together with tar in order to get the list of the filesystems' snapshots
> stored on a tape and be able to perform a restore operation later.
Hi list,
I'd like to be able to store zfs filesystems on a tape drive that is attached to
another Solaris U4 x86 server. The idea is to use "zfs send" together with tar
in order to get the list of the filesystems' snapshots stored on a tape and be
able to perform a restore operation later. It's
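A sketch of the staging approach, with made-up pool and snapshot names; zfs
send writes a plain stream to stdout, so it can be captured as a file and
handed to tar:

   # stage a snapshot stream as a file, then archive it to tape
   zfs snapshot tank/home@monday
   zfs send tank/home@monday > /var/tmp/tank-home-monday.zfs
   tar cf /dev/rmt/0 /var/tmp/tank-home-monday.zfs

   # restore: extract with tar, then receive into a new dataset
   zfs receive tank/home_restored < /var/tmp/tank-home-monday.zfs

One caveat: a send stream carries no redundancy, so a single bad bit on tape
can make the whole stream unreceivable; verify the archive after writing it.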
Jorgen Lundman wrote:
> If we were to get two x4500s, with the idea of keeping one as a passive
> standby (serious hardware failure), are there any clever solutions for
> doing so?
>
> We cannot use ZFS itself, but rather zpool volumes, with UFS on top. I
> assume there is no zpool send/recv (although, that would be pretty neat
Gregory Perry wrote:
> Hello,
>
> I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0
> array. I am considering going to ZFS and I would like to get some feedback
> about which situation would yield the highest performance: using the Perc
> 5/i to provide a hardware RAID0 t
On Jan 31, 2008, at 6:13 AM, Jorgen Lundman wrote:
>
> If we were to get two x4500s, with the idea of keeping one as a passive
> standby (serious hardware failure), are there any clever solutions for
> doing so?
You should take a look at AVS; there are some ZFS and AVS demos online:
http://op
Hi;
Why don't you buy one X4500 and one X4500 motherboard as a spare, along with a
few cold standby drives?
Best regards
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]
If we were to get two x4500s, with the idea of keeping one as a passive
standby (serious hardware failure), are there any clever solutions for
doing so?
We cannot use ZFS itself, but rather zpool volumes, with UFS on top. I
assume there is no zpool send/recv (although, that would be pretty neat
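One pointer on the send/recv point: a zpool volume (zvol) is itself a ZFS
dataset, so zfs snapshot and zfs send/recv do work on it even with UFS layered
on top. A sketch with made-up names:

   # snapshot the zvol backing the UFS filesystem, then replicate it
   zfs snapshot tank/vol1@sync1
   zfs send tank/vol1@sync1 | ssh standby zfs recv -F tank/vol1

The received image is crash-consistent from UFS's point of view, so the
standby copy would want an fsck (or quiescing UFS with lockfs around the
snapshot) prior to mounting.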
Hi,
this may be a perl question more than a zfs question, but anyway:
are there any perl modules hanging around to access the zfs
administrative commands?!
I wish to write some scripts to do some scheduled jobs with our ZFS
systems, preferably in perl. But I found no perl modules for zfs. I would
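Not a perl module, but a detail that makes wrapping the commands easy from any
language: zfs and zpool take -H for script-friendly output (no headers,
tab-separated columns). A sketch:

   # tab-separated name/used/avail, no header line -- trivial to split on tabs
   zfs list -H -o name,used,avail

   # snapshot names only, e.g. as input to a cleanup script
   zfs list -H -t snapshot -o name

A perl script can just run these with backticks and split /\t/ per line.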