On 30/01/2010 09:26, Malte Schirmacher wrote:
Mirko wrote:
Hi,
I'm almost ready to deploy my new home server for final testing.
Before that, I want to be sure that nothing big is left untouched.
Reading the ZFS Admin Guide about the checksum method, I find no advice
on which one to choose.
The default is fletcher4.
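For reference, the checksum property is easy to inspect and change per dataset
(a minimal sketch; 'tank' is a placeholder pool name):
  # zfs get checksum tank           # 'on' means the default, fletcher4
  # zfs set checksum=sha256 tank    # stronger hash at extra CPU cost; applies to newly written blocks only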
Good morning.
looking at the 3ware
9650 SE raid controller for a new build... anyone have any luck with
this card? their site says they support OpenSolaris... anyone used one?
Thanks.
Tiernan OToole
Software Developer
Chat Google Talk: lsmart...@gmail.com Skype: tiernanotoole MSN:
On 01.02.2010 10:43, Tiernan OToole wrote:
looking at the 3ware 9650 SE raid controller for a new build... anyone
have any luck with this card? their site says they support
OpenSolaris... anyone used one?
I am using one and it works with hotswap and all, but you cannot
install to a drive on
I'm trying to put an older Fibre Channel RAID (Fujitsu Siemens S80) box into
use again with ZFS on a Solaris 10 (Update 8) system, but it seems ZFS gets
confused about which disk (LUN) is which...
Back in the old days when we used these disk systems on another server we had
problems with
Tiernan O'Toole lsmart...@gmail.com writes:
looking at the 3ware 9650 SE raid controller for a new build... anyone
have any luck with this card? their site says they support
OpenSolaris... anyone used one?
didn't work too well for me. it's fast and nice for a couple of days,
then the driver
Probably I'm missing something here, but what I see on my system
zfs list -o used,ratio,compression,name export/home/user
89.6G  2.86x  gzip-4  export/home/user
cmsmaster ~ # du -hs /export/home/user/
90G /export/home/user/
du -hsb /export/home/user/
380781942931  /export/home/user/
On 01 February, 2010 - antst sent me these 0,6K bytes:
Probably I'm missing something here, but what I see on my system
zfs list -o used,ratio,compression,name export/home/user
89.6G  2.86x  gzip-4  export/home/user
cmsmaster ~ # du -hs /export/home/user/
90G /export/home/user/
I would expect --apparent-size to show the uncompressed size.
But what I see is well above the uncompressed size implied by multiplying
compressed size by ratio: 89.6G x 2.86 is roughly 256G, while 380781942931
bytes is roughly 355GiB.
In fact, the apparent size is consistent with the amount used by the same set
of files on a Linux system. (I'm moving my home directories from Linux
I'm more than happy that the data consumes even less physical space on
storage.
But I want to understand why and how, and to know which numbers I can trust.
antst ant.stari...@gmail.com writes:
I'm more than happy that the data consumes even less physical
space on storage. But I want to understand why and how, and to
know which numbers I can trust.
my guess is sparse files.
BTW, I think you should compare the size returned from
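One quick way to test the sparse-file theory (a sketch assuming GNU coreutils,
as on the Linux side of the migration; the path is just an example):
  $ dd if=/dev/zero of=/tmp/sparse bs=1 count=1 seek=1073741823
  $ ls -l /tmp/sparse                     # apparent size: 1GiB
  $ du -h /tmp/sparse                     # blocks actually allocated: a few KB
  $ du -h --apparent-size /tmp/sparse     # 1.0G again
Plain du counts allocated blocks, so holes are invisible to it;
--apparent-size counts the byte length, holes included, which would inflate
the number the same way on both systems.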
On Thu, 28 Jan 2010, TheJay wrote:
Attached the zpool history.
Did the resilver ever complete on the first c6t1d0? I see a second
replace here:
2010-01-27.20:41:15 zpool replace rzpool2 c6t1d0 c6t16d0
2010-01-28.07:57:27 zpool scrub rzpool2
2010-01-28.20:39:42 zpool clear rzpool2 c6t1d0
I tested some more and found that the pool disks ARE picked up.
Head1: Cachedevice1 (c0t0d0)
Head2: Cachedevice2 (c0t0d0)
Pool: Shared, c1tXdY
I created a pool on shared storage.
Added the cache device on Head1.
Switched the pool to Head2 (export + import).
Created a pool on head1 containing
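(A sketch of the sequence being described; device names as listed above, the
pool name 'shared' is my assumption:)
  head1# zpool create shared c1t0d0        # pool on the shared storage
  head1# zpool add shared cache c0t0d0     # head1's local cache device
  head1# zpool export shared
  head2# zpool import shared               # head2 also has a local c0t0d0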
Hi all,
this is what I get from 'zpool status pool' after swapping 3 of 10 members of a
zpool for testing purpose.
u...@zfs2:~$ zpool status pool
pool: pool
state: ONLINE
scrub: scrub in progress for 0h8m, 4,70% done, 2h51m to go
config:
NAME STATE READ WRITE CKSUM
On Jan 31, 2010, at 11:24 PM, Prakash Kochummen wrote:
Thanks for the reply.
Sorry, I confused you too. When I mentioned UFS, I just meant the UFS root
scenario (pre-U6).
Suppose I have a 136GB HDD as my boot disk, which has been sliced like
s0 - 80GB (root slice)
s1 - 55GB (swap
I use the Beta 9.5.3 ISO OpenSolaris package with the OSOL DEV131 build - I had the
3ware support team help me. It works like a charm on my 9650se-24m8 with 20
drives
On Feb 1, 2010, at 3:02 AM, Kjetil Torgrim Homme wrote:
Tiernan O'Toole lsmart...@gmail.com writes:
looking at the 3ware 9650
Correct - I have not re-assigned c6t1d0 yet, though.
On Feb 1, 2010, at 5:08 AM, Mark J Musante wrote:
On Thu, 28 Jan 2010, TheJay wrote:
Attached the zpool history.
Did the resilver ever complete on the first c6t1d0? I see a second replace
here:
2010-01-27.20:41:15 zpool replace
On 31 Jan 2010, Jacob Ritorto wrote:
I spent *so* many hours looking for that firmware. Would you
please post the link? Did the firmware download you found come with fcode?
Running blade 2000 here (SPARC).
Well, I can't say for sure it's the right firmware for
your device. I
Hi--
Were you trying to swap out a drive in your pool's raidz1 VDEV
with a spare device? Was that your original intention?
If so, then you need to use the zpool replace command to replace
one disk with another, including a spare.
I would put the disks back to where they were and retry with
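For reference, the replace-then-detach sequence looks roughly like this (pool
and device names are placeholders):
  # zpool replace tank c7t9d0 c7t11d0    # resilver the raidz1 member onto the new disk
  # zpool detach tank c7t9d0             # after resilvering completes, drop the old disk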
The 4 disks attached to the ahci driver should be using NCQ. The two
cmdk disks will not have NCQ capability as they are under control of
the legacy ata driver. What does your pool topology look like? Can you
try removing the cmdk disks from your pool?
You can also verify if your disks are
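One way to check both (a sketch; 'tank' is a placeholder pool name):
  # zpool status tank                  # pool topology and device names
  # prtconf -D | egrep -i 'ahci|cmdk'  # which driver each disk node is bound to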
Hi--
Were you trying to swap out a drive in your pool's
raidz1 VDEV
with a spare device? Was that your original
intention?
Not really. I just wanted to see what happens if the physical controller port
changes, i.e. what practical relevance it would have if I put the disks in the
same
It's Monday morning, so it still doesn't make sense. :-)
I suggested putting the disks back because I'm still not sure if you
physically swapped c7t11d0 for c7t9d0 or if c7t9d0 is still connected
and part of your pool. You might try detaching the spare as described
in the docs. If you put the
On Sat, January 30, 2010 14:21, Dick Hoogendijk wrote:
On 30-1-2010 20:53, Mark wrote:
Alternatively, I guess I could add a small USB drive to use solely for
the OS and then have both of the 750GB drives for ZFS. Is that a bad
idea since the OS drive will be standalone?
Very bad idea.
On Sat, January 30, 2010 10:58, matthew patton wrote:
please forgive the 'stupid' question.
Perfectly fair thing to wonder about IMHO. And if you're wondering,
trying to find out is good :-).
Aside from having a convenient hash table of checksums to consult and upon
detection of a
You are correct. Should be fine without -m.
Thanks,
Cindy
On 01/30/10 09:15, Fajar A. Nugraha wrote:
On Sat, Jan 30, 2010 at 2:02 AM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi Michelle,
You're almost there, but install the bootblocks in s0:
# installgrub -m /boot/grub/stage1
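For the record, the full invocation takes both stages plus the raw slice
device (the disk name here is a placeholder):
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
The -m flag additionally writes stage1 to the master boot record, which isn't
needed when booting from the slice.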
10 disks connected in the following order:
0 1 2 3 4 5 6 7 8 9
Export pool. Remove three drives from the system:
0 1 3 4 6 7 8
Plug them back in, but into different slots:
0 1 9 3 4 2 6 7 8 5
Import the pool.
What's supposed to happen is that ZFS detects the drives, figures out where
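(The sequence under test, with a placeholder pool name:)
  # zpool export tank
  (physically rearrange the drives)
  # zpool import tank   # ZFS reads the label on each device and reassembles the pool by vdev GUID, not by slot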
On Feb 1, 2010, at 5:53 AM, Lutz Schumann wrote:
I tested some more and found that the pool disks ARE picked up.
Head1: Cachedevice1 (c0t0d0)
Head2: Cachedevice2 (c0t0d0)
Pool: Shared, c1tXdY
I created a pool on shared storage.
Added the cache device on Head1.
Switched the pool to Head2
Hi,
I have heard references to ARC releasing memory when the demand is high. Can
someone please point me to the code path from the point of such a detection to
ARC release?
Thanks
On 01 February, 2010 - tester sent me these 0,4K bytes:
Hi,
I have heard references to ARC releasing memory when the demand is
high. Can someone please point me to the code path from the point of
such a detection to ARC release?
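From memory of the onnv sources (worth verifying against
usr/src/uts/common/fs/zfs/arc.c), the path is roughly:
  arc_reclaim_thread()         periodically wakes and polls
    -> arc_reclaim_needed()    checks freemem/lotsfree and kernel heap pressure
    -> arc_kmem_reap_now()     reaps the ZFS kmem caches
    -> arc_shrink()            lowers the target size arc_c and evicts down to it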
ZFS can generally detect device changes on Sun hardware, but for other
hardware, the behavior is unknown.
The most harmful pool problem I see, besides inadequate redundancy levels
or no backups, is device changes. Recovery can be difficult.
Follow recommended practices for replacing devices in a
Thanks for the feedback lads... don't really need the boot drives to be
on the array... was going to use the onboard controller for that... got
an adaptec card already, so might look at those again...
--Tiernan
On 01/02/2010 15:59, TheJay wrote:
I use the Beta 9.5.3 ISO Opensolaris package
On February 1, 2010 11:59:14 AM -0600 David Dyer-Bennet d...@dd-b.net
wrote:
One idea I seriously considered is to boot off a USB key. No online
redundancy (but I'd keep a second loaded key, plus the files to quickly
reimage a new key, handy).
I've just built my first USB-booting zfs system.
On February 1, 2010 10:19:24 AM -0700 Cindy Swearingen
cindy.swearin...@sun.com wrote:
ZFS has recommended ways for swapping disks, so if the pool is exported,
the system shut down and the disks then swapped, the behavior is
unpredictable and ZFS is understandably confused about what
Hi Frank,
If you want to replace one disk with another disk, then physically
replace the disk and let ZFS know by using the zpool replace command
or set the autoreplace property.
Whether disk swapping on the fly or a controller firmware update
renumbers the devices causes a problem really
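Roughly (pool and device names are placeholders):
  # zpool replace tank c1t3d0 c1t5d0   # tell ZFS the old disk was swapped for the new one
  # zpool set autoreplace=on tank      # or: auto-resilver onto a new disk found in the same physical slot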
Created a pool on head1 containing just the cache device (c0t0d0).
This is not possible, unless there is a bug. You cannot create a pool with
only a cache device. I have verified this on b131:
# zpool create norealpool cache /dev/ramdisk/rc11
invalid vdev
On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen
cindy.swearin...@sun.com wrote:
Whether disk swapping on the fly or a controller firmware update
renumbers the devices causes a problem really depends on the driver/ZFS
interaction, and we can't speak for all hardware.
With MPxIO, disks are
Hi Cindys,
I'm still
not sure if you physically swapped c7t11d0 for c7t9d0 or if c7t9d0 is
still connected and part of your pool.
The latter is not the case according to zpool status; the former is
definitely the case. format reports the drive as present and correctly labelled.
ZFS has
Hi again,
Follow recommended practices for replacing devices in
a live pool.
Fair enough. On the other hand, I guess it has become clear that the pool went
offline as part of the procedure. That was partly because I am not sure about
the hotplug capabilities of the controller, and partly because I wanted
re == Richard Elling richard.ell...@gmail.com writes:
re It is true that SXCE b130 is the last SXCE build and only
re available until 31-jan-10.
I have a copy of b130 that I downloaded during the few-weeks-long
window it was available, but cannot legally give it to you because of
the
On February 1, 2010 4:15:10 PM -0500 Frank Cusack
frank+lists/z...@linetwo.net wrote:
On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen
cindy.swearin...@sun.com wrote:
Whether disk swapping on the fly or a controller firmware update
renumbers the devices causes a problem really depends on
Hi,
How ZFS reacts to a failed disk can be difficult to anticipate
because some systems don't react well when you remove a disk. On an
x4500, for example, you have to unconfigure a disk before you can remove
it.
Before removing a disk, I would consult your h/w docs to see what the
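For the x4500 case just mentioned, the unconfigure step looks roughly like
this (the attachment point is a placeholder; check cfgadm -al on your system):
  # cfgadm -al                       # list attachment points, find the disk
  # cfgadm -c unconfigure sata1/3    # take the device offline before pulling it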
Frank,
ZFS, Sun device drivers, and the MPxIO stack all work as expected.
Cindy
On 02/01/10 14:55, Frank Cusack wrote:
On February 1, 2010 4:15:10 PM -0500 Frank Cusack
frank+lists/z...@linetwo.net wrote:
On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen
cindy.swearin...@sun.com wrote:
I did see that and confirmed the support has made it into the 130 release I'm
testing with.
However, the WD10EARS does not expose 4k sectors to the outside world, so it is
not identified as supporting it.
Correct alignment, to ensure best performance of the internal translation,
seems to be
Hi all.
With recent builds (b129, b130 and b131) I've been noticing some zpool
performance issues when scrubbing.
Running bare into some cheap SATA controllers, on a cheap mobo, running 6GB
of DDR2 + an Intel Q6600, with 4 * 1TB Samsung consumer grade SATA drives, I've
been accustomed to seeing
The results are in:
My timeout issue is definitely the WD10EARS disks.
Although differences in the error rate were seen with different LSI firmware
revisions, the errors persisted. The more disks on the expander, the higher
the number of iostat errors.
This then causes zpool issues (disk
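(The counters in question are visible per device; for example:)
  $ iostat -En | grep -i errors   # Soft/Hard/Transport error totals per device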
My bad; since it's a ZFS root, I never really bothered to look at the /etc/vfstab
file. I looked at it now and everything is the same as before.
Thanks a lot for your reply.
Rgds
PK
On Mon, Feb 1 at 16:12, Jake Carroll wrote:
Hi all.
With recent builds (b129, b130 and b131) I've been noticing some zpool
performance issues when scrubbing.
Running bare into some cheap SATA controllers, on a cheap mobo,
running 6GB of DDR2 + an Intel Q6600, with 4 * 1TB Samsung consumer
grade
what with the home NAS conversations, what's the trick to buy a J4500 without
any drives? Sun, like every other enterprise storage vendor, thinks it's ok to
rape their customers, and I for one am not interested in paying 10x for a silly
SATA hard drive.
On Mon, Feb 1, 2010 at 10:58 PM, matthew patton patto...@yahoo.com wrote:
what with the home NAS conversations, what's the trick to buy a J4500
without any drives? SUN like every other enterprise storage vendor thinks
it's ok to rape their customers and I for one, am not interested in paying
The WD10EARS disks don't work well.
I had too many issues with timeouts that disappeared when replacing them with
ST32000542AS drives.
My next challenge is to get the LSI 3081 to boot off the disk I want it to, and
then to get multipath functional.
Has anyone else had issues with the LSI IT
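For the multipath part, enabling MPxIO is normally done with stmsboot (a
sketch; verify the right flags for your HBA before running it):
  # stmsboot -e    # enable multipathing, then reboot so device paths are renamed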
+--
| On 2010-02-01 23:01:33, Tim Cook wrote:
|
| On Mon, Feb 1, 2010 at 10:58 PM, matthew patton patto...@yahoo.com wrote:
|
| what with the home NAS conversations, what's the trick to buy a J4500
| without any
http://www.memoryx.net/5410456.html
I've bought sleds for X4150s and X2270s from them.
interesting mis-description on the web page. thumper doesn't use SCA
drives.
-frank
On Mon, Feb 1, 2010 at 11:58 PM, matthew patton patto...@yahoo.com wrote:
what with the home NAS conversations, what's the trick to buy a J4500
without any drives? SUN like every other enterprise storage vendor thinks
it's ok to rape their customers and I for one, am not interested in paying
charge a premium for their products but they ARE an enterprise vendor. You
wouldn't say something like hey, where can I buy a Ferrari without any
wheels... I'm not paying X amount for a silly aluminum wheel.
true. but I buy a Ferrari for the engine and bodywork and chassis engineering.
It
On 2/02/10 05:17 PM, matthew patton wrote:
charge a premium for their products but they ARE an enterprise vendor. You
wouldn't say something like hey, where can I buy a Ferrari without any
wheels... I'm not paying X amount for a silly aluminum wheel.
true. but I buy a Ferrari for the engine
On Tue, Feb 2, 2010 at 1:17 AM, matthew patton patto...@yahoo.com wrote:
charge a premium for their products but they ARE an enterprise vendor. You
wouldn't say something like hey, where can I buy a Ferrari without any
wheels... I'm not paying X amount for a silly aluminum wheel.
On Tue, Feb 2, 2010 at 2:17 AM, matthew patton patto...@yahoo.com wrote:
charge a premium for their products but they ARE an enterprise vendor. You
wouldn't say something like hey, where can I buy a Ferrari without any
wheels... I'm not paying X amount for a silly aluminum wheel.
When you send data (as the mate did), all data is rewritten and the settings
you made (dedup etc.) are effectively applied.
If you change a parameter (dedup, compression), this holds true only for NEWLY
written data. If you do not change the data, it all remains un-deduplicated.
Also when you send, all
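A sketch of the rewrite-via-send approach (dataset names are placeholders):
  # zfs set dedup=on tank                             # applies to newly written blocks in tank and its children
  # zfs snapshot tank/data@move
  # zfs send tank/data@move | zfs receive tank/data2  # the received copy is rewritten, and deduped, under the new setting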