Currently running b93.
I'd like to try out b101.
I previously had b90 running on the system. I ran ludelete snv_90_zfs
but I still see snv_90_zfs:
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool   52.9G  6.11G    31K  /rpool
rpool/ROOT
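If ludelete really did remove the BE but left its dataset behind, a rough
cleanup sketch (assuming the leftover lives at rpool/ROOT/snv_90_zfs and
nothing else depends on it) would be:

$ zfs list -r rpool/ROOT                      # confirm exactly what is left over
$ pfexec zfs destroy -r rpool/ROOT/snv_90_zfs

If the destroy fails with a dependent-clones error, a newer BE was cloned
from snv_90_zfs; zfs promote the active BE's dataset first and retry.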
Lori,
Thanks for taking the time to reply. Please see below.
Karl
Lori Alt wrote:
Karl Rossing wrote:
Currently running b93.
I'd like to try out b101.
I previously had b90 running on the system. I ran ludelete snv_90_zfs
but I still see snv_90_zfs:
$ zfs list
NAME
Could ZFS be configured to use gzip-9 to compress small files, or when
the system is idle?
When the system is busy or is handling a large file, use lzjb.
Busy/Idle and large/small files would need to be defined somewhere.
Alternatively, write the file out using lzjb if the system is busy and
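As far as I know there is no load-aware switching today; compression is a
per-dataset property applied at write time, so the closest approximation is
to split the data across datasets, e.g. (dataset names hypothetical):

$ pfexec zfs set compression=gzip-9 tank/archive   # cold, rarely written data
$ pfexec zfs set compression=lzjb tank/active      # hot data, cheap compression

Blocks already on disk keep whatever compression they were written with;
changing the property only affects new writes.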
Would there be an advantage to using 4GB USB memory sticks on a home
system for zil and l2arc?
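Mechanically, adding them is just a zpool add; a sketch for the l2arc side,
with hypothetical pool and device names:

$ pfexec zpool add tank cache c7t0d0      # l2arc on a USB stick

A log device is added the same way with "log" instead of "cache". Losing a
cache device only costs read performance, so a cheap stick is a low-risk
experiment; a slog is much more latency- and reliability-sensitive.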
---
Karl Rossing
System Administrator
The Robinson Group
Thanks for the help.
Since the v210s in question are at a remote site, it might be a bit of a pain
getting the drives swapped by end users.
So I thought of something else. Could I netboot the new v210 with snv_115, use
zfs send/receive with ssh to grab the data on the old server, install the
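A minimal sketch of that send/receive-over-ssh step, pulling from the new
v210 (pool, dataset, and host names hypothetical):

# on the new v210, after netbooting snv_115 and creating the new pool
ssh oldv210 zfs snapshot -r tank/data@migrate
ssh oldv210 zfs send -R tank/data@migrate | pfexec zfs receive -Fd newpool

zfs send -R carries the snapshots and properties of the whole dataset tree,
and -d on the receiving side recreates the dataset names under newpool.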
Is there a CR yet for this?
Thanks
Karl
Cindy Swearingen wrote:
Hi everyone,
Currently, the device naming changes in build 125 mean that you cannot
use Solaris Live Upgrade to upgrade or patch a ZFS root dataset in a
mirrored root pool.
If you are considering this release for the ZFS log
When will SXCE 129 be released since 128 was passed over? There used to
be a release calendar on opensolaris.org but I can't find it anymore.
Jeff Bonwick wrote:
And, for the record, this is my fault. There is an aspect of endianness
that I simply hadn't thought of. When I have a little
I believe that write caching is turned off on the boot drives, but is it off
on the drives, on the controller, or both? That could be a big problem.
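For SCSI/SAS disks, one way to check (a sketch; menu availability depends on
the driver and controller) is the expert mode of format:

$ pfexec format -e
  # select the boot disk, then: cache -> write_cache -> display

The controller's own cache settings would have to be checked from its BIOS
or management utility instead.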
On 03/24/10 11:07, Tim Cook wrote:
On Wed, Mar 24, 2010 at 11:01 AM, Dusan Radovanovic <dusa...@gmail.com> wrote:
Hello all,
I am a
On 03/24/10 12:54, Richard Elling wrote:
Nothing prevents a clever chap from building a ZFS-based array controller
which includes nonvolatile write cache.
+1 to that. Something that is inexpensive and small (4GB?) and works in
a PCI express slot.
Hi,
We have a server running b134. The server runs xen and uses a vdev as
the storage.
The xen image is running nevada 134.
I took a snapshot last night to move the xen image to another server.
NAME USED AVAIL REFER MOUNTPOINT
vpool/host/snv_130 32.8G
I'm trying to pick between an Intel X25-M or Intel X25-E for a slog
device.
At some point in the future, TRIM support will become available
http://mail.opensolaris.org/pipermail/onnv-notify/2010-July/012674.html.
The X25-M supports TRIM while the X25-E doesn't.
Does TRIM support
Hi,
One of our zfs volumes seems to be having some errors. So I ran zpool
scrub and it's currently showing the following.
-bash-3.2$ pfexec /usr/sbin/zpool status -x
pool: vdipool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made
-60c2-e114-e1bc-daa03d7b163f ZFS-8000-D3
This output will tell you when the problem started.
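For reference, a rough sketch of the fmdump commands in question (output
will vary):

$ pfexec fmdump              # fault summary: timestamps, UUIDs, message IDs like ZFS-8000-D3
$ pfexec fmdump -eV | more   # full error reports, including which device generated them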
Depending on what fmdump says, which probably indicates multiple drive
problems, I would run diagnostics on the HBA or get it replaced.
Always have good backups.
Thanks,
Cindy
On 04/15/11 12:52, Karl Rossing wrote:
Hi,
One of our zfs volumes seems to be having some errors. So I ran
zpool
the reboot.
Shouldn't the resilvered information be kept across reboots?
Thanks
Karl
On 04/15/2011 03:55 PM, Cindy Swearingen wrote:
Yes, the Solaris 10 9/10 release has the fix for RAIDZ checksum errors
if you have ruled out any hardware problems.
cs
On 04/15/11 14:47, Karl Rossing wrote
I have an outage tonight and would like to swap out the LSI 3801 for an
LSI 9200.
Should I zpool export before swapping the card?
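If it were me, a conservative sequence would look like this (pool name
assumed from the rest of the thread):

$ pfexec zpool export vdipool    # before shutting down to swap the HBA
  # swap the LSI 3801 for the LSI 9200, boot
$ pfexec zpool import vdipool    # devices are rediscovered under their new paths

Exporting first avoids the pool showing up faulted at boot when the device
paths change; import scans the disk labels rather than relying on cached paths.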
On 04/16/2011 10:45 AM, Roy Sigurd Karlsbakk wrote:
I'm going to wait until the scrub is complete before diving in some
more.
I'm wondering if replacing the
The server I currently have only has 2GB of RAM. At some point, I
will be adding more RAM to the server, but I'm not sure when.
I want to add a mirrored zil. I have 2 Intel 32GB SSDSA2SH032G1GN drives
As such, I have been reading the ZFS Best Practices Guide
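The Best Practices Guide aside, adding the mirrored log itself is a
one-liner; a sketch with hypothetical pool and device names for the two
Intel SSDs:

$ pfexec zpool add tank log mirror c1t4d0 c1t5d0

zpool status should then show a "logs" section with the mirror under it.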
Hi,
I have a dual-Xeon 64GB 1U server with two free 3.5" drive slots. I also
have a free PCI-E slot.
I'm going to run a Postgres database with a business intelligence
application.
The database size is not really set. It will be between 250-500GB
running on Solaris 10 or b134.
My storage
On 10/28/2011 01:04 AM, Mark Wolek wrote:
before the forum closed.
Did I miss something?
Karl
Hi,
I'm thinking of getting an LSI 9212-4i4e (4 internal and 4 external ports)
to replace a SUN Storagetek raid card.
The StorageTek raid card seems to want to have its drives initialized
and volumes created on it before they are presented to zfs. I can't find
a way of telling it just to be an
I'm going to be moving a non-root storage pool from snv_123 (I think it's
pre-COMSTAR) to an s10u10 box.
I have some zfs iscsi volumes on the pool. I'm wondering if zpool export
vdipool on the old system and zpool import vdipool on the new system
will work? Do I need to run any other commands to
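A sketch of the move plus the iSCSI re-check afterwards (volume name
hypothetical; this assumes the old-style shareiscsi/iscsitgt sharing rather
than COMSTAR):

# on the snv_123 box
$ pfexec zpool export vdipool
# on the s10u10 box
$ pfexec zpool import vdipool
$ zfs get shareiscsi vdipool/vol1     # confirm the property came across
$ svcs -a | grep -i iscsi             # make sure the target service is enabled

The dataset properties travel with the pool, but the initiators will at
least need to be pointed at the new box's portal address.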
/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123
) encountered in b123?
Karl
On 01/31/2012 11:20 AM, Karl Rossing wrote:
I'm going to be moving a non-root storage pool from snv_123 (I think
it's pre-COMSTAR) to an s10u10 box.
I have some zfs iscsi volumes on the pool. I'm
Hi,
I'm seeing slow zfs send on a version-29 pool, about 25MB/sec:
bash-3.2# zpool status vdipool
pool: vdipool
state: ONLINE
scan: scrub repaired 86.5K in 7h15m with 0 errors on Mon Feb 6
01:36:23 2012
config:
NAME STATE READ WRITE CKSUM
vdipool
On 12-05-07 12:18 PM, Jim Klimov wrote:
During the send you can also monitor zpool iostat 1 and the usual
iostat -xnz 1 in order to see how busy the disks are and how
many IO requests are issued. The snapshots are likely sent in
the order of block age (TXG number), which for a busy pool may
mean
On 12-05-07 8:45 PM, Bob Friesenhahn wrote:
I see that there are a huge number of reads and hardly any writes. Are
you SURE that deduplication was not enabled for this pool? This is
the sort of behavior that one might expect if deduplication was
enabled without enough RAM or L2 read cache.
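It is easy to double-check; a sketch using the pool name from earlier in the
thread:

$ zpool list vdipool         # the DEDUP column (on recent builds) reads 1.00x if never enabled
$ zfs get -r dedup vdipool   # shows the property per dataset

Even with dedup turned back off, blocks written while it was on remain
deduplicated until they are rewritten.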
I'm looking at
http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
and wondering what I should get.
Are people getting Intel 330s for L2ARC and 520s for slog?
Karl
On 08/06/2012 10:06 PM, Erik Trimble wrote:
Honestly, I don't think this last point can be emphasized enough. SSDs
of all flavors and manufacturers have a track record of *consistently*
lying when returning from a cache flush command. There might exist
somebody out there who actually does it