Btw,
When you import a pool, you must know its name.
Is there any command to get the name of the pool to which non-imported disks belong?
# vxdisk -o alldgs list does this with VxVM.
Thanks for your replies.
C.
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
Yep..
Just run zpool import without a poolname and it will list any pools
that are available for import.
eg:
sb2000::#zpool import
  pool: mp
    id: 17232673347678393572
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        mp          ONLINE
          raidz2    ONLINE
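As the action line says, the pool can then be imported by name or by its numeric identifier, using the pool shown above:

zpool import mp
# or, if several exported pools share the same name:
zpool import 17232673347678393572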
Is there any update on this? You suggested that Jeff had some kind of solution
for this - has it been integrated or is someone working on it?
For SCSI disks (including FC), you would use the FUA bit on the read command.
For SATA disks ... does anyone care? ;-)
To improve the performance of scripts that manipulate ZFS snapshots, and the ZFS snapshot service in particular, there needs to be a way to list all the snapshots for a given object, and only the snapshots for that object.
There are two RFEs filed that cover this:
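(The RFE numbers are cut off above. For context, later ZFS releases grew exactly this capability as a depth-limited listing; a sketch, with a hypothetical dataset name:)

# List only the snapshots of tank/home itself, not of its descendants
zfs list -d 1 -t snapshot tank/home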
OK, got it : just use zpool import.
Sorry for the inconvenience ;-)
C.
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On behalf of Cyril Payet
Sent: Tuesday, January 6, 2009 09:16
To: D. Eckert; zfs-discuss@opensolaris.org
Subject:
Hi,
Hello Bernd,
After I published a blog entry about installing OpenSolaris 2008.11 on a USB stick, I read a comment about a possible issue with wearing out blocks on the USB stick after some time, because ZFS overwrites its uberblocks in place.
I did not understand well what you
My OpenSolaris 2008/11 PC seems to attain better throughput with one big
sixteen-device RAIDZ2 than with four stripes of 4-device RAIDZ. I know it's by
no means an exhaustive test, but catting /dev/zero to a file in the pool now
frequently exceeds 600 Megabytes per second, whereas before with
On Jan 6, 2009, at 9:44 AM, Jacob Ritorto wrote:
but catting /dev/zero to a file in the pool now f
Do you get the same sort of results from /dev/random?
I wouldn't be surprised if /dev/zero turns out to be a special case.
Indeed, using any of the special files is probably not ideal.
On Tue, 6 Jan 2009, Jacob Ritorto wrote:
My OpenSolaris 2008/11 PC seems to attain better throughput with one
big sixteen-device RAIDZ2 than with four stripes of 4-device RAIDZ.
I know it's by no means an exhaustive test, but catting /dev/zero to
a file in the pool now frequently exceeds
On Tue, 6 Jan 2009, Keith Bierman wrote:
Do you get the same sort of results from /dev/random?
/dev/random is very slow and should not be used for benchmarking.
Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
Is urandom nonblocking?
On Tue, Jan 6, 2009 at 1:12 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 6 Jan 2009, Keith Bierman wrote:
Do you get the same sort of results from /dev/random?
/dev/random is very slow and should not be used for benchmarking.
Bob
On Jan 6, 2009, at 11:12 AM, Bob Friesenhahn wrote:
On Tue, 6 Jan 2009, Keith Bierman wrote:
Do you get the same sort of results from /dev/random?
/dev/random is very slow and should not be used for benchmarking.
Not directly, no. But copying from /dev/random to a real file and
On Tue, 6 Jan 2009, Jacob Ritorto wrote:
Is urandom nonblocking?
The OS-provided random devices need to be secure, so they depend on
collecting entropy from the system to ensure the random values are truly
random. They also execute complex code to produce the random numbers.
As a result, both
OK, so use a real I/O test program, or at least pre-generate files large
enough to exceed RAM caching?
On Tue, Jan 6, 2009 at 1:19 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 6 Jan 2009, Jacob Ritorto wrote:
Is urandom nonblocking?
The OS provided random devices need to
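A minimal sketch of the pre-generation approach, assuming a pool mounted at /tank (hypothetical path) with enough free space:

# Create a test file larger than RAM once, up front; urandom is slow,
# but that cost is paid outside the timed run
dd if=/dev/urandom of=/tank/testfile bs=1024k count=16384   # 16 GB; size it above RAM

# Then time a sequential re-read that cannot be served entirely from cache
time dd if=/tank/testfile of=/dev/null bs=1024k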
OK, it gets a bit more specific:
hdadm and write_cache run 'format -e -d $disk'.
On this system, format produces the list of devices in short order; format -e,
however, takes much, much longer, which would explain why it takes hours to
iterate over 48 drives.
It's very curious and
Hi all,
I did an install of OpenSolaris in which I specified that the whole disk should
be used for the installation. Here is what format verify produces for that
disk:
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm      1 - 60797        465.73GB
On Tue, Jan 06, 2009 at 08:44:01AM -0800, Jacob Ritorto wrote:
Is this increase explicable / expected? The throughput calculator
sheet output I saw seemed to forecast better IOPS with the striped
raidz vdevs, and I'd read that, generally, throughput is augmented by
keeping the number of vdevs
I noticed this issue yesterday when I first started playing around with
zfs send/recv. This is on Solaris 10U6.
It seems that a zfs send of a zvol issues volblocksize-sized reads to the
physical devices. This doesn't make any sense to me, as ZFS generally
consolidates read/write requests to
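For context, volblocksize is fixed when a zvol is created, which is why the send-side read size tracks it; a quick sketch with hypothetical pool/volume names:

# Inspect the block size of an existing volume
zfs get volblocksize tank/myvol

# The property can only be chosen at creation time
zfs create -V 10g -o volblocksize=128k tank/newvol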
Alex,
I think the root cause of your confusion is that the format utility and
disk labels are very unfriendly and confusing.
Partition 2 identifies the whole disk. On x86 systems, space is also
needed for boot-related information, which is currently stored in
partition 8. Neither of these partitions
I have that iozone program loaded, but its results were rather cryptic
for me. Is it adequate if I learn how to decipher the results? Can
it thread out and use all of my CPUs?
Do you have tools to do random I/O exercises?
--
Darren
On Mon, Jan 5, 2009 at 4:29 PM, Brent Jones br...@servuhome.net wrote:
On Mon, Jan 5, 2009 at 2:50 PM, Richard Elling richard.ell...@sun.com wrote:
Correlation question below...
Brent Jones wrote:
On Sun, Jan 4, 2009 at 11:33 PM, Carsten Aulbert
carsten.aulb...@aei.mpg.de wrote:
Hi
On Tue, 6 Jan 2009, Jacob Ritorto wrote:
I have that iozone program loaded, but its results were rather cryptic
for me. Is it adequate if I learn how to decipher the results? Can
it thread out and use all of my CPUs?
Yes, iozone does support threading. Here is a test with a record size of
ZFS is the bomb. It's a great file system. What are its real-world
applications besides Solaris userspace? What I'd really like is to utilize the
benefits of ZFS across all the platforms we use. For instance, we use Microsoft
Windows servers as our primary platform here. How might I utilize
Hello,
- One way is virtualization: if you use a virtualization technology that uses
NFS, for example, you could put your virtual machine images on a ZFS filesystem.
NFS can be used without virtualization too, but since, as you said, the machines
are Windows, I don't think the NFS client for Windows is
On Tue, 6 Jan 2009, Rob wrote:
The only way I can visualize doing so would be to virtualize the
Windows server and store its image in a ZFS pool. That would add
additional overhead but protect the data at the disk level. It would
also allow snapshots of the Windows machine's virtual disk file.
I am not experienced with iSCSI. I understand it's block-level disk access via
TCP/IP. However, I don't see how using it eliminates the need for virtualization.
Are you saying that a Windows server can access a ZFS drive via iSCSI and store
NTFS files?
On Tue, 6 Jan 2009, Rob wrote:
Are you saying that a Windows Server can access a ZFS drive via
iSCSI and store NTFS files?
A volume is created under ZFS, similar to a large sequential file.
The iSCSI protocol is used to export that volume as a LUN. Windows
can then format it and put NTFS
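A minimal sketch of that flow on an OpenSolaris release of this era, with hypothetical pool and volume names (later builds replaced shareiscsi with COMSTAR):

# Create a 100GB ZFS volume (a zvol)
zfs create -V 100g tank/winlun

# Export the volume as an iSCSI target
zfs set shareiscsi=on tank/winlun

# Verify; the Windows iSCSI initiator can now connect, format the LUN, and use NTFS on it
iscsitadm list target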
On Tue, Jan 06, 2009 at 10:22:20AM -0800, Alex Viskovatoff wrote:
I did an install of OpenSolaris in which I specified that the whole disk
should be used for the installation. Here is what format verify produces
for that disk:
Part      Tag    Flag     Cylinders        Size
http://docs.sun.com/app/docs/doc/817-5093/disksconcepts-20068?a=view
(To add more confusion, partitions are also referred to as slices.)
Nope, at least not on x86 systems. A partition holds the Solaris part
of the disk, and that part is subdivided into slices. Partitions
are visible to other
On Mon, Jan 5, 2009 at 5:27 AM, Roch roch.bourbonn...@sun.com wrote:
Alastair Neil writes:
I am attempting to create approximately 10,600 ZFS file systems across two
pools. The devices underlying the pools are mirrored iSCSI volumes
shared over a dedicated gigabit Ethernet with jumbo frames
On Tue, Jan 06, 2009 at 11:49:27AM -0700, cindy.swearin...@sun.com wrote:
My wish for this year is to boot from EFI-labeled disks so examining
disk labels is mostly unnecessary because ZFS pool components could be
constructed as whole disks, and the unpleasant disk
format/label/partitioning
[OK, no one is replying; my spam then...]
Open folks just care about SMART so far.
http://www.mail-archive.com/linux-s...@vger.kernel.org/msg07346.html
Enterprise folks care more about spin-down.
(Not an open thing yet, unless a new practical industry standard is here that
I don't know about. Yeah, right.)
Yes, iozone does support threading. Here is a test with a record size of
8KB, eight threads, synchronous writes, and a 2GB test file:
Multi_buffer. Work area 16777216 bytes
OPS Mode. Output is in operations per second.
Record Size 8 KB
SYNC Mode.
File
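The invocation behind a run like this is, I believe, along these lines (a sketch; the pool mountpoint is hypothetical):

cd /tank    # iozone writes its test files in the current directory
# -m: multiple buffers; -t 8 -T: throughput mode with 8 POSIX threads;
# -r 8k: record size; -s 2g: file size; -o: O_SYNC writes; -O: report ops/sec
iozone -m -t 8 -T -r 8k -s 2g -o -O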
Wow. I will read further into this. That seems like it could have great
applications. I assume the same is true of FCoE?
Hello Darren,
This one, OK, was a valid thought/question --
On Solaris, root pools cannot have EFI labels (the boot firmware doesn't
support booting from them).
http://blog.yucas.info/2008/11/26/zfs-boot-solaris/
But again, this is a ZFS discussion, and obviously EFI is not a ZFS, or even
Cindy,
Well, it worked. The system can boot off c4t0d0s0 now.
But I am still a bit perplexed. Here is how the invocation of installgrub went:
a...@diotiima:~# installgrub -m /boot/grub/stage1 /boot/grub/stage2
/dev/rdsk/c4t0d0s0
Updating master boot sector destroys existing boot managers (if
I am running a test system with Solaris 10u6 and I am somewhat confused as to
how ACE inheritance works. I've read through
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf but it doesn't seem
to cover what I am experiencing.
The ZFS file system that I am working on has both aclmode
Hi Alex,
The fact that you have to install the boot blocks manually on the
second disk that you added with zpool attach is a bug! I should have
mentioned this bug previously.
If you had used the initial installation method to create a mirrored
root pool, the boot blocks would have been applied
ls -V file
-rw-r--r--+ 1 root root 0 Jan 6 21:42 file
user:root:rwxpdDaARWcCos:------:allow
owner@:--x-----------:------:deny
owner@:rw-p---A-W-Co-:------:allow
group@:-wxp----------:------:deny
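For reference, an ACE like the user:root entry above is added with the Solaris chmod ACL syntax; a sketch using the compact permission string, appended as a new entry:

# Grant root the full compact permission set on 'file'
chmod A+user:root:rwxpdDaARWcCos:allow file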
On Tue, Jan 06, 2009 at 01:27:41PM -0800, Peter Skovgaard Nielsen wrote:
ls -V file
----------+ 1 root root 0 Jan 6 22:15 file
user:root:rwxpdDaARWcCos:------:allow
everyone@:--------------:------:allow
Not bad at all. However, I contend that this
I've run into this problem twice now: before, I had 10x500GB drives in a ZFS+
setup, and now again in a 12x500GB ZFS+ setup.
The problem is that when the pool reaches ~85% capacity I get random read
failures, and around ~90% capacity I get read failures AND zpool corruption.
For example:
-I open a
Thanks for clearing that up. That all makes sense.
I was wondering why ZFS doesn't use the whole disk in the standard OpenSolaris
install. That explains it.
On Tue, Jan 06, 2009 at 01:24:17PM -0800, Alex Viskovatoff wrote:
a...@diotiima:~# installgrub -m /boot/grub/stage1 /boot/grub/stage2
/dev/rdsk/c4t0d0s0
Updating master boot sector destroys existing boot managers (if any).
continue (y/n)?y
stage1 written to partition 0 sector 0 (abs 16065)
Hi Cindy,
I now suspect that the boot blocks are located outside of the space in
partition 0 that actually belongs to the zpool, in which case it is not
necessarily a bug that zpool attach does not write those blocks, IMO. Indeed,
that must be the case, since GRUB needs to get to stage2 in
On Tue, Jan 06, 2009 at 04:10:10PM -0500, JZ wrote:
Hello Darren,
This one, OK, was a valid thought/question --
Darn, I was hoping...
On Solaris, root pools cannot have EFI labels (the boot firmware doesn't
support booting from them).
http://blog.yucas.info/2008/11/26/zfs-boot-solaris/
On Tue, Jan 6, 2009 at 2:58 PM, Rob rdyl...@yahoo.com wrote:
Wow. I will read further into this. That seems like it could have great
applications. I assume the same is true of FCoE?
Yes, iSCSI, FC, and FCoE all present a LUN to Windows. For the layman, from
the Windows system the disk
OK, folks, new news - [feel free to comment in any fashion, since I don't know how yet.]
EMC ACQUIRES OPEN-SOURCE ASSETS FROM SOURCELABS
http://go.techtarget.com/r/5490612/6109175
On Sat, Dec 6, 2008 at 11:40 AM, Ian Collins i...@ianshome.com wrote:
Richard Elling wrote:
Ian Collins wrote:
Ian Collins wrote:
Andrew Gabriel wrote:
Ian Collins wrote:
I've just finished a small application to couple zfs_send and
zfs_receive through a socket to remove ssh from the
It is not recommended to fill any file system beyond 90%, I think. For
instance, NTFS can behave very badly when it runs out of space. It's similar to
filling up your RAM when you have no swap space: the computer starts to
thrash badly. Not recommended. Avoid 90% and above, and you
I was hoping that this was the problem (because just buying more disks is the
cheapest solution, given time=$$), but when I ran it by somebody at work, they
said going over 90% can cause decreased performance but is unlikely to cause the
strange errors I'm seeing. However, I think I'll stick a 1TB
On 1/6/2009 4:19 PM, Sam wrote:
I was hoping that this was the problem (because just buying more
disks is the cheapest solution, given time=$$), but when I ran it by
somebody at work, they said going over 90% can cause decreased
performance but is unlikely to cause the strange errors I'm seeing.
On Jan 6, 2009, at 14:21, Rob wrote:
Obviously ZFS is ideal for large databases served out via
application-level or web servers. But what other practical ways are
there to integrate the use of ZFS into existing setups to experience
its benefits?
Remember that ZFS is made up of the ZPL
On Tue, Jan 6, 2009 at 6:19 PM, Sam s...@smugmug.com wrote:
I was hoping that this was the problem (because just buying more disks is
the cheapest solution, given time=$$), but when I ran it by somebody at work,
they said going over 90% can cause decreased performance but is unlikely to
cause the
Since ZFS is so smart in other areas, is there a particular reason why a high-water
mark is not calculated and the available space not reset to this?
I'd far rather have a zpool of 1000GB that said it only had 900GB but did
not have corruption as it ran out of space.
Nicholas
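A rough way to impose such a high-water mark yourself is a quota on the pool's top-level dataset, since it caps everything beneath it; a minimal sketch, assuming a 1000GB pool named tank (hypothetical):

# Reserve ~10% headroom by capping total usage at 900GB
zfs set quota=900g tank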
On Tue, Jan 6, 2009 at 10:25 PM, Nicholas Lee emptysa...@gmail.com wrote:
Since ZFS is so smart in other areas, is there a particular reason why a
high-water mark is not calculated and the available space not reset to this?
I'd far rather have a zpool of 1000GB that said it only had 900GB but
BTW, the high-water mark method is not perfect; here is something on Novell's
support of water marks...
best,
z
http://www.novell.com/coolsolutions/tools/16991.html
Based on my own belief that there had to be a better way and the number of
issues I'd seen reported in the Support Forums, I spent a lot of
On 01/06/09 21:25, Nicholas Lee wrote:
Since ZFS is so smart in other areas, is there a particular reason why a
high-water mark is not calculated and the available space not reset to this?
I'd far rather have a zpool of 1000GB that said it only had 900GB but
did not have corruption as it
Hi,
Brent Jones wrote:
Using mbuffer can speed it up dramatically, but this seems like a hack
that doesn't address the real problem with zfs send/recv.
Trying to send any meaningfully sized snapshot from, say, an X4540 takes
up to 24 hours, for as little as a 300GB change rate.
I have not found a
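For reference, the mbuffer arrangement mentioned above usually looks like this (hostnames, ports, and dataset names hypothetical):

# On the receiving host: listen on TCP port 9090 and feed zfs recv
mbuffer -s 128k -m 1G -I 9090 | zfs recv tank/fs

# On the sending host: stream the snapshot into mbuffer over TCP
zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O recvhost:9090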