2009/9/17 Brandon High bh...@freaks.com:
2009/9/11 C. Bergström codest...@osunix.org:
Can we make a FAQ on this somewhere?
1) There is some legal bla bla between Sun and green-bytes that's tying up
the IP around dedup... (someone knock some sense into green-bytes please)
2) there's an
I can flesh this out with detail if needed, but a brief chain of events is:
1. RAIDZ1 zpool with drives A, B, C, D (I don't have access to see original
drive names)
2. New disk E. Replaced A with E.
3. Part way through resilver, drive D was 'removed'
4. 700+ persistent errors detected, and lots
I think you're right, and I also think we'll still see a new post asking
about it once or twice a week.
On Thu, Sep 17, 2009 at 2:20 AM, Cyril Plisko cyril.pli...@mountall.com wrote:
2009/9/17 Brandon High bh...@freaks.com:
2009/9/11 C. Bergström codest...@osunix.org:
Can we make a FAQ on
On Wed, Sep 16, 2009 at 10:23:01AM -0700, Richard Elling wrote:
This line of reasoning doesn't get you very far. It is much better to take a
look at the mean time to data loss (MTTDL) for the various configurations. I
wrote a series of blogs to show how this is done.
On 17 September, 2009 - Eugen Leitl sent me these 2,0K bytes:
On Wed, Sep 16, 2009 at 08:02:35PM +0300, Markus Kovero wrote:
It's possible to do 3-way (or more) mirrors too, so you may achieve better
redundancy than raidz2/3
I understand there's almost no additional performance penalty
Eugen Leitl wrote:
On Wed, Sep 16, 2009 at 08:02:35PM +0300, Markus Kovero wrote:
It's possible to do 3-way (or more) mirrors too, so you may achieve better
redundancy than raidz2/3
I understand there's almost no additional performance penalty to raidz3
over raidz2 in terms of CPU
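A three-way mirror is just a mirror vdev with three devices listed. A minimal
sketch, with invented pool and device names:

  zpool create tank mirror c1t0d0 c1t1d0 c1t2d0
  zpool status tank    # shows one mirror vdev with three sides

Any two of the three disks can fail without data loss, which is the comparison
to raidz2/raidz3 being made above.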
Owen Davies wrote:
Thanks. I took a look and that is exactly what I was looking for. Of course I
have since just reset all the permissions on all my shares, but it seems that
the proper way to swap UIDs for users with permissions on CIFS shares is to:
Edit /etc/passwd
Edit /var/smb/smbpasswd
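A hedged sketch of those two steps, with invented user name and UIDs; usermod
rewrites the /etc/passwd entry, the UID field in /var/smb/smbpasswd still has
to be fixed by hand, and files owned by the old UID need re-owning:

  usermod -u 1005 owen                    # new UID in /etc/passwd
  vi /var/smb/smbpasswd                   # edit the UID field in owen's entry
  find /export -user 1001 -exec chown owen {} \;   # re-own files left at old UID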
On Thu, Sep 17, 2009 at 12:55:35PM +0200, Tomas Ögren wrote:
It's not a fixed value per technology, it depends on the number of disks
per group. RAID5/RAIDZ1 loses 1 disk worth to parity per group.
RAID6/RAIDZ2 loses 2 disks. RAIDZ3 loses 3 disks. RAID1/mirror loses
half the disks. So in your
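A worked sketch of that arithmetic for a hypothetical 14-disk box with 1TB
drives (invented figures):

  N=14; S=1    # disk count and per-disk size in TB
  echo "raidz1, one group: $(( (N - 1) * S )) TB data"      # 13 TB
  echo "raidz2, one group: $(( (N - 2) * S )) TB data"      # 12 TB
  echo "raidz3, one group: $(( (N - 3) * S )) TB data"      # 11 TB
  echo "mirror pairs:      $(( N / 2 * S )) TB data"        # 7 TB
  echo "two raidz2 groups: $(( (N - 2 * 2) * S )) TB data"  # (5+2)+(5+2) = 10 TB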
Erik Trimble wrote:
So SSDs for ZIL/L2ARC don't bring that much when used with raidz2/raidz3,
if I write a lot, at least, and don't access the cache very much,
according to some recent posts on this list.
Not true.
Remember: ZIL = write cache
ZIL is NOT a write cache. The ZIL is the
Might be related to Solaris bug 6881590
http://sunsolve.sun.com/search/document.do?assetkey=1-1-6881590-1
Matthias
I recently (re)built a fileserver at home, using Ubuntu and zfs-fuse to
create a ZFS filesystem (RAIDz1) on five 1.5TB drives.
I had some serious issues with NFS not working properly (kept getting
stale file handles), so I tried to switch to OpenSolaris/Nexenta, but my
SATA controller wasn't
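For reference, a sketch of how such a pool is typically created under
zfs-fuse (device names invented). Note that handing ZFS whole disks rather
than partitions is what becomes relevant later in this thread:

  zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde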
Hi,
ZFS drew my attention and I'm thinking of using it to manage the storage for a
virtual machine host. The plan is to get a Core i7 machine with 12GB RAM and 6
SATA disks (+1 PATA for boot/swap), configure the 6 disks in a tank of mirrored
pairs and keep on that pool a number of virtual
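A sketch of that layout, assuming Solaris-style device names:

  zpool create tank mirror c1t0d0 c1t1d0 \
                    mirror c1t2d0 c1t3d0 \
                    mirror c1t4d0 c1t5d0

Three two-way mirrors striped together: half the raw space, but good random
I/O for VM workloads.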
Hello all,
I have an OpenSolaris server which is used as an iSCSI SAN on snv_122. I am
then using two ESXi boxes to connect to them and this is where the storage for
the virtual machines lies. On here are several vm's including linux and windows
servers. We have another server which is almost
On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer p...@paularcher.org wrote:
I can reboot into Linux and import the pools, but haven't figured out why I
can't import them in Solaris. I don't know if it makes a difference (I
wouldn't think so), but zfs-fuse under Linux is using ZFS version 13, where
10:09pm, Fajar A. Nugraha wrote:
On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer p...@paularcher.org wrote:
I can reboot into Linux and import the pools, but haven't figured out why I
can't import them in Solaris. I don't know if it makes a difference (I
wouldn't think so), but zfs-fuse under
Paul Archer wrote:
10:09pm, Fajar A. Nugraha wrote:
On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer p...@paularcher.org wrote:
I can reboot into Linux and import the pools, but haven't figured out
why I
can't import them in Solaris. I don't know if it makes a difference (I
wouldn't think so),
10:40am, Paul Archer wrote:
I can reboot into Linux and import the pools, but haven't figured out why
I
can't import them in Solaris. I don't know if it makes a difference (I
wouldn't think so), but zfs-fuse under Linux is using ZFS version 13,
where
Nexenta is using version 14.
Just a
Darren J Moffat wrote:
Erik Trimble wrote:
So SSDs for ZIL/L2ARC don't bring that much when used with raidz2/raidz3,
if I write a lot, at least, and don't access the cache very much,
according to some recent posts on this list.
Not true.
Remember: ZIL = write cache
ZIL is NOT a write
On Thu, September 17, 2009 04:29, Chris Murray wrote:
2. New disk E. Replaced A with E.
3. Part way through resilver, drive D was 'removed'
4. 700+ persistent errors detected, and lots of checksum errors on all
drives. Surprised by this - I thought the absence of one drive could be
I have a machine that had 2x 1TB drives in it. They were in the same zpool and
that entire zpool is set to copies=2. From what I understand this will store
all my data twice, and if the SPA is doing its job right it will store the
copies on different disks and store the checksum for any given
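For reference, the setting being described, with an invented pool name:

  zpool create tank c1t0d0 c1t1d0   # two disks, no mirror or raidz
  zfs set copies=2 tank             # ditto blocks: each block written twice,
                                    # on different disks where possible

Note that copies=2 only applies to blocks written after the property is set.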
On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess wonsl...@gmail.com wrote:
I think you're right, and I also think we'll still see a new post asking
about it once or twice a week.
On Thu, Sep 17, 2009 at 2:20 AM, Cyril Plisko
cyril.pli...@mountall.com wrote:
2009/9/17 Brandon High
Hi,
Bug ID 6775100, "stat() performance on files on zfs should be improved", was fixed
in snv_119.
I wanted to do a quick comparison between snv_117 and snv_122 on my workstation
to see what kind of improvement there is. I wrote a small C program which does
a stat() N times in a loop. This is of
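The original program isn't shown; a minimal sketch of such a benchmark might
look like this:

  /* stat() the same path N times and report the elapsed time. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/stat.h>
  #include <sys/time.h>

  int main(int argc, char **argv)
  {
      struct stat st;
      struct timeval t0, t1;
      long i, n = (argc > 2) ? atol(argv[2]) : 1000000;

      if (argc < 2) {
          fprintf(stderr, "usage: %s file [count]\n", argv[0]);
          return 1;
      }
      gettimeofday(&t0, NULL);
      for (i = 0; i < n; i++) {
          if (stat(argv[1], &st) != 0) {
              perror("stat");
              return 1;
          }
      }
      gettimeofday(&t1, NULL);
      printf("%ld stat() calls in %.3f s\n", n,
             (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
      return 0;
  }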
Joe Toppi wrote:
I have a machine that had 2x 1TB drives in it. They were in the same zpool and
that entire zpool is set to copies=2. From what I understand this will
store all my data twice, and if the SPA is doing its job right it will store
the copies on different disks and store the checksum
5:08pm, Darren J Moffat wrote:
Paul Archer wrote:
10:09pm, Fajar A. Nugraha wrote:
On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer p...@paularcher.org wrote:
I can reboot into Linux and import the pools, but haven't figured out why
I
can't import them in Solaris. I don't know if it makes a
Thanks David. Maybe I misunderstand how a replace works? When I added disk E,
and used 'zpool replace [A] [E]' (still can't remember those drive names), I
thought that disk A would still be part of the pool, and be read from in order
to build the contents of disk E? Sort of like a safer way of
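For what it's worth, a replace in progress looks like this (invented device
names); the old disk stays attached inside a temporary 'replacing' vdev until
the resilver completes:

  zpool replace tank c1t0d0 c2t0d0
  zpool status tank    # shows a replacing vdev holding both old and new disk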
Paul Archer wrote:
What kind of partition table is on the disks, is it EFI ? If not that
might be part of the issue.
I don't believe there is any partition table on the disks. I pointed zfs
to the raw disks when I setup the pool.
If you run fdisk on OpenSolaris against this disk what does
rswwal...@gmail.com said:
It's not the stripes that make a difference, but the number of controllers
there.
What's the system config on that puppy?
The zpool status -v output was from a Thumper (X4500), slightly edited,
since in our real-world Thumper, we use c6t0d0 in c5t4d0's place in the
6:44pm, Darren J Moffat wrote:
Paul Archer wrote:
What kind of partition table is on the disks, is it EFI ? If not that
might be part of the issue.
I don't believe there is any partition table on the disks. I pointed zfs to
the raw disks when I setup the pool.
If you run fdisk on
Paul Archer wrote:
r...@ubuntu:~# fdisk -l /dev/sda
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xce13f90b
Device Boot Start End Blocks Id System
/dev/sda1
Hi,
I'm trying to move disks in a zpool from one SATA controller to another.
It's 16 disks in 4x4 raidz.
Just to see if it could be done, I moved one disk from one raidz over to
the new controller. Server was powered off.
After booting OS, I get this:
zpool status
(...)
raidz1 DEGRADED
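One common way to get ZFS to pick up devices whose controller paths have
changed, not shown in this excerpt and assuming the pool can be taken offline
briefly, is a full export/import, which rescans all attached devices:

  zpool export storage
  zpool import storage   # re-discovers the disks at their new paths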
On Thu, Sep 17, 2009 at 01:32:43PM +0200, Eugen Leitl wrote:
reasons), you will lose 2 disks worth of storage to parity leaving 12
disks worth of data. With raid10 you will lose half, 7 disks to
parity/redundancy. With two raidz2 sets, you will get (5+2)+(5+2), that
is 5+5 disks worth of
vidar.nil...@palantir.no said:
I'm trying to move disks in a zpool from one SATA controller to another. It's
16 disks in 4x4 raidz. Just to see if it could be done, I moved one disk from
one raidz over to the new controller. Server was powered off.
. . .
zpool replace storage c10t7d0 c11t0d0
All,
After long searching I found the reason: the IPS package
SUNWnfsskr was missing. Thanks for all your replies and help.
Regards,
Tom.
Tom de Waal wrote:
Hi,
I'm trying to identify why my NFS server does not work. I'm using a more
or less core install of OSOL 2009.06 (release)
7:37pm, Darren J Moffat wrote:
Paul Archer wrote:
r...@ubuntu:~# fdisk -l /dev/sda
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xce13f90b
Device Boot Start End
Andrew Deason wrote:
As I'm sure you're all aware, filesize in ZFS can differ greatly from
actual disk usage, depending on access patterns. e.g. truncating a 1M
file down to 1 byte still uses up about 130k on disk when
recordsize=128k. I'm aware that this is a result of ZFS's rather
different
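A hypothetical demonstration of the effect (dataset and file names invented;
'truncate' here is the GNU coreutils tool, so substitute an equivalent on
Solaris):

  zfs create -o recordsize=128k tank/demo
  dd if=/dev/urandom of=/tank/demo/f bs=1M count=1
  truncate -s 1 /tank/demo/f   # logical size is now 1 byte
  sync
  ls -l /tank/demo/f           # reports 1 byte
  du -h /tank/demo/f           # still reports on the order of 128k allocated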
On Thu, 17 Sep 2009 11:50:17 -0500
Tim Cook t...@cook.ms wrote:
On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess wonsl...@gmail.com wrote:
I think you're right, and i also think we'll still see a new post asking
about it once or twice a week.
[snip]
As we should. Did the video of the
Darren J Moffat wrote:
Paul Archer wrote:
r...@ubuntu:~# fdisk -l /dev/sda
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xce13f90b
Device Boot Start End Blocks
On Thu, 17 Sep 2009 22:55:38 +0100
Robert Milkowski mi...@task.gda.pl wrote:
IMHO you won't be able to lower a file blocksize other than by
creating a new file. For example:
Okay, thank you.
If you are not worried about this extra overhead and you are mostly
concerned with proper accounting
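A minimal sketch of the rewrite Robert describes, with invented names;
recordsize only affects newly written files, so the file must be recreated:

  zfs set recordsize=1k tank/cache
  cp /tank/cache/f /tank/cache/f.new   # the copy picks up the new recordsize
  mv /tank/cache/f.new /tank/cache/f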
if you would create a dedicated dataset for your cache and set quota on
it then instead of tracking a disk space usage for each file you could
easily check how much disk space is being used in the dataset.
Would it suffice for you?
Setting recordsize to 1k if you have lots of files (I
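A sketch of that accounting approach, with invented dataset name and quota:

  zfs create tank/cache
  zfs set quota=10g tank/cache
  zfs get -H -o value used tank/cache   # one number, instead of sizing each file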
On Fri, Sep 18, 2009 at 4:08 AM, Paul Archer p...@paularcher.org wrote:
I did a little research and found that parted on Linux handles EFI
labelling. I used it to change the partition scheme on sda, creating an
sda1. I then offlined sda and replaced it with sda1. I wish I had just tried
a
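A hedged sketch of the parted steps being described (pool name invented;
mklabel destroys the existing partition table on sda):

  parted -s /dev/sda mklabel gpt              # EFI (GPT) disk label
  parted -s /dev/sda mkpart primary 0% 100%   # one full-disk partition -> sda1
  zpool offline tank sda
  zpool replace tank sda sda1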
On Thu, 17 Sep 2009, Joe Toppi wrote:
I filled this with data. So I added a 1.5 TB drive to the pool.
Where will my ditto blocks and checksums go? Will it migrate data
from the other drives automatically? Will it migrate data if I scrub
or resilver? Will it never migrate data and just store
Tomorrow, Fajar A. Nugraha wrote:
There was a post from Ricardo on zfs-fuse list some time ago.
Apparently if you do a zpool create on whole disks, Linux and
Solaris behave differently:
- Solaris will create an EFI partition on that disk, and use the partition as vdev
- Linux will use the whole
On 18/09/2009, at 1:08 PM, Fajar A. Nugraha wrote:
There was a post from Ricardo on zfs-fuse list some time ago.
Apparently if you do a zpool create on whole disks, Linux and
Solaris behave differently:
- Solaris will create an EFI partition on that disk, and use the
partition as vdev
- Linux