Howdy All,
I made a 1 TB zfs volume within a 4.5 TB zpool called vault for testing iscsi.
Both DeDup and Compression were off. After my tests, I issued a zfs destroy to
remove the volume.
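For reference, a minimal sketch of the commands involved (the volume name is an assumption; sizes are from the description above):

zfs create -V 1T vault/testvol     # 1 TB volume inside the 4.5 TB pool vault
... (iSCSI testing) ...
zfs destroy vault/testvol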
This command hung. After 5 hours, I hard rebooted into single user mode and
removed my zfs cache file (I
I should also mention that once the lock starts, the disk activity light on
my case stays busy for a bit (1-2 minutes MAX), then does nothing.
Hi Banks,
Some basic stats might shed some light, e.g. vmstat 5, mpstat 5,
iostat -xnz 5, prstat -Lmc 5 ... all running from just before you
start the tests until things are normal again.
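A minimal way to capture all of those at once (output paths are just an assumption):

vmstat 5 > /var/tmp/vmstat.out &
mpstat 5 > /var/tmp/mpstat.out &
iostat -xnz 5 > /var/tmp/iostat.out &
prstat -Lmc 5 > /var/tmp/prstat.out &
# run the tests, then stop the collectors: kill %1 %2 %3 %4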
Memory starvation is certainly a possibility. The ARC can be greedy
and slow to release memory under
Lutz Schumann presa...@storageconcepts.de writes:
Actually the performance decrease when disabling the write cache on
the SSD is approx 3x (aka 66%).
For this reason, you want a controller with battery-backed write cache.
In practice this means a RAID controller, even if you don't use the RAID
I've just made a couple of consecutive scrubs, each time it found a couple of
checksum errors but on different drives. No indication of any other errors.
That a disk scrubs cleanly on a quiescent pool in one run but fails in the next
is puzzling. It reminds me of the snv_120 odd number of
On Fri, 8 Jan 2010, Rob Logan wrote:
This one has me a little confused. Ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange
According to various posts the LSI SAS3081E-R seems to work well with
OpenSolaris.
But I've grown rather wary after my recent problems with Areca-1680s.
Could anyone please confirm that the LSI SAS3081E-R works well ?
Is hotplug supported ?
Anything else I should know before buying one of
Maybe it got lost in all this text :) .. hence this re-post.
Does anyone know the impact of disabling the write cache on the write
amplification factor of the Intel SSDs?
How can I permanently disable the write cache on the Intel X25-M SSDs?
Thanks, Robert
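Not an authoritative answer, but one avenue sometimes suggested is the cache submenu of format -e; whether it works for a given SSD, and whether the setting survives a power cycle, depends on the drive and how it is attached (sketch only):

format -e                 # select the X25-M from the disk list
format> cache
cache> write_cache
write_cache> disable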
Hi
We have a number of customers (~150) that have a single Sun server with
directly attached storage and directly attached tape drive/library. These
servers are currently running UFS, but we are looking at deploying ZFS in
future builds.
At present, we back up the server to the local tape
On Mon, 11 Jan 2010, Kjetil Torgrim Homme wrote:
(BTW, thank you for testing forceful removal of power. The result is as
expected, but it's good to see that theory and practice match.)
Actually, the result is not as expected since the device should not
have lost any data preceding a cache
Good question. Zmanda seems to be a popular open source solution with
commercial licenses and support available. We try to keep the Best Practices
Guide up to date on this topic:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Using_ZFS_With_Enterprise_Backup_Solutions
On Wed, Jan 6, 2010 at 12:11 PM, Carl Rathman crath...@gmail.com wrote:
On Tue, Jan 5, 2010 at 10:35 AM, Carl Rathman crath...@gmail.com wrote:
On Tue, Jan 5, 2010 at 10:12 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Jan 5, 2010, at 7:54 AM, Carl Rathman wrote:
I didn't mean to
Hi Paul,
Example 11-1 in this section describes how to replace a
disk on an x4500 system:
http://docs.sun.com/app/docs/doc/819-5461/gbcet?a=view
Cindy
On 01/09/10 16:17, Paul B. Henson wrote:
On Sat, 9 Jan 2010, Eric Schrock wrote:
If ZFS removed the drive from the pool, why does the
Hi Gary,
You might consider running OSOL on a later build, like build 130.
Have you reviewed the fmdump -eV output to determine on which devices
the ereports below have been generated? This might give you more clues
as to what the issues are. I would also be curious if you have any
driver-level
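For example, a quick way to capture the ereports and see which devices they name (the output path and the vdev_path field are assumptions):

fmdump -eV > /var/tmp/ereports.txt
grep vdev_path /var/tmp/ereports.txt | sort | uniq -c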
According to various posts the LSI SAS3081E-R seems to work well
with OpenSolaris.
But I've grown rather wary after my recent problems with Areca-1680s.
Could anyone please confirm that the LSI SAS3081E-R works well ?
Is hotplug supported ?
It works well in Solaris 10 including
vmstat does show something interesting. The free memory shrinks while doing
the first dd (generating the 8G file) from around 10G to 1.5Gish. The copy
operations thereafter don't consume much and it stays at 1.2G after all
operations have completed. (btw at the point of system sluggishness
On Mon, 11 Jan 2010, bank kus wrote:
However I noticed something weird: long after the file operations
are done, the free memory doesn't seem to grow back (below).
Essentially, ZFS File Data claims to use 76% of memory long after the
file has been written. How does one reclaim it? Is ZFS
I am sure this is not the first discussion related to this... apologies for the
duplication.
What is the recommended way to make use of a Hardware RAID controller/HBA along
with ZFS?
Does it make sense to do RAID5 on the HW and then RAIDZ in software? Or
just stick to ZFS RAIDZ and
Ok, tested this myself ...
(same hardware used for both tests)
OpenSolaris snv_104 (actually Nexenta Core 2):
100 Snaps
r...@nexenta:/volumes# time for i in $(seq 1 100); do zfs snapshot
ssd/v...@test1_$i; done
real    0m24.991s
user    0m0.297s
sys     0m0.679s
Import:
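Presumably the import was timed along the same lines, something like (pool name taken from the snapshot test above):

zpool export ssd
time zpool import ssd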
On Mon, 11 Jan 2010, Anil wrote:
What is the recommended way to make use of a Hardware RAID
controller/HBA along with ZFS?
Does it make sense to do RAID5 on the HW and then RAIDZ in
software? Or just stick to ZFS RAIDZ and connect the drives to the
controller, w/o any HW RAID (to
For example, you could set it to half your (8GB) memory so that 4GB is
immediately available for other uses.
* Set maximum ZFS ARC size to 4GB
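A minimal sketch of the usual way to apply such a cap on Solaris/OpenSolaris (takes effect after a reboot; 4 GB = 0x100000000 bytes):

* in /etc/system
set zfs:zfs_arc_max = 0x100000000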
capping max sounds like a good idea
thanks
banks
Hello,
On Jan 11, 2010, at 6:53 PM, bank kus wrote:
For example, you could set it to half your (8GB) memory so that 4GB is
immediately available for other uses.
* Set maximum ZFS ARC size to 4GB
capping max sounds like a good idea.
Are we still trying to solve the starvation problem?
Are we still trying to solve the starvation problem?
I would argue the disk I/O model is fundamentally broken on Solaris if there is
no fair I/O scheduling between multiple read sources; until that is fixed,
individual I_am_systemstalled_while_doing_xyz problems will crop up. Started a
new
Thank you Thomas and Mertol for your feedback.
I was indeed aiming for the X25-E because of their write performance.
However, since these are around 350€ for 32 GB, I find it disturbing to use
them only for the ZIL :-)
I will do some tests with a cheap MLC disk.
I also read about the disk cache needing
ZFS will definitely benefit from battery backed RAM on the controller, as long
as the controller immediately acknowledges cache flushes (rather than waiting
for battery-protected data to flush to the
I am a little confused by this. Do we not want the controller to ignore these
cache
On 11-Jan-10, at 1:12 PM, Bob Friesenhahn wrote:
On Mon, 11 Jan 2010, Anil wrote:
What is the recommended way to make use of a Hardware RAID
controller/HBA along with ZFS?
...
Many people will recommend against using RAID5 in hardware, since
then ZFS is not as capable of repairing
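For reference, the all-ZFS alternative being discussed is simply a raidz pool over the bare drives, roughly (device names assumed):

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0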
On Mon, 11 Jan 2010, bank kus wrote:
Are we still trying to solve the starvation problem?
I would argue the disk I/O model is fundamentally broken on Solaris
if there is no fair I/O scheduling between multiple read sources;
until that is fixed, individual I_am_systemstalled_while_doing_xyz
On Mon, 11 Jan 2010, Anil wrote:
ZFS will definitely benefit from battery backed RAM on the
controller as long as the controller immediately acknowledges cache
flushes (rather than waiting for battery-protected data to flush to
the
I am a little confused by this. Do we not want the
Last April we put this in /etc/system on a T2000 server with large ZFS
filesystems:
set pg_contig_disable=1
This was while we were attempting to solve a couple of ZFS problems
that were eventually fixed with an IDR. Since then, we've removed
the IDR and brought the system up to Solaris 10
comment below...
On Jan 11, 2010, at 10:00 AM, Lutz Schumann wrote:
Ok, tested this myself ...
(same hardware used for both tests)
OpenSolaris snv_104 (actually Nexenta Core 2):
100 Snaps
r...@nexenta:/volumes# time for i in $(seq 1 100); do zfs snapshot
Ben,
I have found that booting from CD-ROM and importing the pool on the new host,
then booting from the hard disk, will prevent these issues. That will
reconfigure ZFS to use the new disk device.
When running, zpool detach the missing mirror device and attach a new one.
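In other words, something along these lines (pool and device names are hypothetical):

zpool detach rpool c1t0d0s0             # drop the missing mirror half
zpool attach rpool c1t1d0s0 c2t0d0s0    # attach a replacement to the surviving half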
Thanks. I'm well
On 11/01/10 11:57 PM, Arnaud Brand wrote:
According to various posts the LSI SAS3081E-R seems to work well with
OpenSolaris.
But I've grown rather wary after my recent problems with Areca-1680s.
Could anyone please confirm that the LSI SAS3081E-R works well ?
Is hotplug supported ?
With all the recent discussion of SSD's that lack suitable
power-failure cache protection, surely there's an opportunity for a
separate modular solution?
I know there used to be (years and years ago) small internal UPS's
that fit in a few 5.25 drive bays. They were designed to power the
On Jan 11, 2010, at 2:23 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Mon, 11 Jan 2010, bank kus wrote:
Are we still trying to solve the starvation problem?
I would argue the disk I/O model is fundamentally broken on Solaris
if there is no fair I/O scheduling between
On 11-Jan-10, at 5:59 PM, Daniel Carosone wrote:
With all the recent discussion of SSD's that lack suitable
power-failure cache protection, surely there's an opportunity for a
separate modular solution?
I know there used to be (years and years ago) small internal UPS's
that fit in a few 5.25
Hello All,
I hope this makes sense. I have two OpenSolaris machines with a bunch of hard
disks; one acts as an iSCSI SAN, and the other is identical other than the hard
disk configuration. The only things being served are VMware ESXi raw disks,
which hold either virtual machines or data that the
I have a netbook with a small internal ssd as rpool. I have an
external usb HDD with much larger storage, as a separate pool, which
is sometimes attached to the netbook.
I created a zvol on the external pool, the same size as the internal
ssd, and attached it as a mirror to rpool for backup. I
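Roughly the setup described, as a sketch (pool name, volume name, size and device name are assumptions):

zfs create -V 16g black/rpool-backup
zpool attach rpool c0d0s0 /dev/zvol/dsk/black/rpool-backup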
I should have mentioned:
- opensolaris b130
- of course I could use partitions on the usb disk, but that's so much less
flexible.
--
Dan.
On Jan 11, 2010, at 19:00, Toby Thain wrote:
On 11-Jan-10, at 5:59 PM, Daniel Carosone wrote:
Does anyone know of such a device being made and sold? Feel like
designing and marketing one, or publishing the design?
FWIW I think Google's server farms use something like this.
It looks slightly
Daniel Carosone wrote:
However, with the rpool mirror in place, I can't find a way to zpool
export black. It complains that the pool is busy, because of the
zvol in use. This happens regardless of whether I have set the zvol
submirror offline. I expected that, with the subdevice in the
On 01/11/10 17:42, Paul B. Henson wrote:
On Sat, 9 Jan 2010, Eric Schrock wrote:
No, it's fine. DEGRADED just means the pool is not operating at the
ideal state. By definition a hot spare is always DEGRADED. As long as
the spare itself is ONLINE it's fine.
One more question on this; so
On Tue, Jan 12, 2010 at 02:38:56PM +1300, Ian Collins wrote:
How did you set the subdevice to the offline state?
# zpool offline rpool /dev/zvol/dsk/
sorry if that wasn't clear.
Did you detach the device from the mirror?
No, because then:
- it will have to resilver fully on next
On Jan 11, 2010, at 4:42 PM, Daniel Carosone wrote:
I have a netbook with a small internal ssd as rpool. I have an
external usb HDD with much larger storage, as a separate pool, which
is sometimes attached to the netbook.
I created a zvol on the external pool, the same size as the internal
One thing which may help: the zfs import used to be single-threaded, i.e. it
opened every disk (maybe slice) one at a time and processed it; as of 128b it is
multi-threaded, i.e. it opens N disks/slices at once and processes N disks/slices
at once, where N is the number of threads it decides to use.
On Mon, Jan 11, 2010 at 06:03:40PM -0800, Richard Elling wrote:
IMHO, a split mirror is not as good as a decent backup :-)
I know.. that was more by way of introduction and background. It's
not the only method of backup, but since this disk does get plugged
into the netbook frequently enough it
On Mon, 11 Jan 2010, Eric Schrock wrote:
No, there is no way to tell if a pool has DTL (dirty time log) entries.
Hmm, I hadn't heard that term before, but based on a quick search I take it
that's the list of data in the pool that is not fully redundant? So if a
2-way mirror vdev lost a half,
On Jan 11, 2010, at 6:35 PM, Paul B. Henson wrote:
On Mon, 11 Jan 2010, Eric Schrock wrote:
No, there is no way to tell if a pool has DTL (dirty time log) entries.
Hmm, I hadn't heard that term before, but based on a quick search I take it
that's the list of data in the pool that is not
On Mon, Jan 11, 2010 at 6:17 PM, Greg gregory.dur...@gmail.com wrote:
Hello All,
I hope this makes sense. I have two OpenSolaris machines with a bunch of
hard disks; one acts as an iSCSI SAN, and the other is identical other than
the hard disk configuration. The only things being served are
[google server with batteries]
These are cool, and a clever rethink of the typical data centre power
supply paradigm. They keep the server running, until either a
generator is started or a graceful shutdown can be done.
Just to be clear, I'm talking about something much smaller, that
provides
Because you mention the fixed bugs, I have a more general question.
Is there a way to see all commits to OSOL that are related to a Bug Report?
Background: I'm interested in how e.g. the zfs import bug was fixed.
.. however ... a lot of snaps still have an impact on system performance. After
the import of the 1 snaps volume, I saw devfsadm eating up all CPU:
If you are snapshotting ZFS volumes, then each will create an entry in the
device tree. In other words, if these were file
systems
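You can see those entries directly; each snapshot of a volume shows up under the pool's zvol directory (path assumed):

ls /dev/zvol/dsk/ssd/    # one node per volume, plus one per volume@snapshot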
Actually for the ZIL you may use the a-card (RAM-based SATA disk + BBU +
CompactFlash write-out).
For the data disks there is no solution yet - would be nice. However I prefer
the supercapacitor-on-disk method.
Why? Because the recharge logic is challenging. There needs to be
communication
Lutz,
On Mon, Jan 11, 2010 at 09:38:16PM -0800, Lutz Schumann wrote:
Cause you mention the fixed / bugs I have a more general question.
Is there a way to see all commits to OSOL that are related to a Bug Report ?
You can go to src.opensolaris.org and enter the bug-id in the history field.