Rocky,
Can individuals buy your products in the retail market?
Thanks.
Fred
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Rocky Shek
Sent: Friday, January 28, 2011 7:02
To: 'Pasi Kärkkäinen'
Cc: 'Philip Brown';
You should also check out VA Technologies (
http://www.va-technologies.com/servicesStorage.php) in the UK, which supply a
range of JBODs. I've used these in very large deployments with no JBOD-related
failures to date. Interestingly, they also list Coraid boxes.
---
W. A. Khushil Dep -
Khushil,
Thanks.
Fred
From: Khushil Dep [mailto:khushil@gmail.com]
Sent: Monday, January 31, 2011 17:37
To: Fred Liu
Cc: Rocky Shek; Pasi Kärkkäinen; Philip Brown; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
You should also check out VA Technologies
Brandon High bh...@freaks.com wrote:
On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
What is the status of ZFS support for TRIM?
I believe it's been supported for a while now.
Torrey McMahon tmcmah...@yahoo.com wrote:
On 1/30/2011 5:26 PM, Joerg Schilling wrote:
Richard Elling richard.ell...@gmail.com wrote:
ufsdump is the problem, not ufsrestore. If you ufsdump an active
file system, there is no guarantee you can ufsrestore it. The only way
to guarantee
Why do you say fssnap has the same problem?
If it write-locks the file system, it is only for a matter of seconds, as I
recall.
Years ago, I used it on a daily basis to do ufsdumps of large filesystems.
Mark
On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote:
On 1/30/2011 5:26 PM, Joerg Schilling
IIRC, we would notify the user community that the filesystems were going to
hang briefly.
Locking the filesystems is the best way to quiesce them, when users are worldwide, IMO.
Mark
On Jan 31, 2011, at 9:45 AM, Torrey McMahon wrote:
A matter of seconds is a long time for a running Oracle database. The
He says he's using FreeBSD. ZFS recorded names like ada0, which always means
a whole disk.
In any case, FreeBSD will search all block storage for the ZFS device components if
the cached name is wrong: if the disks are attached to the system at
all, FreeBSD will find them wherever they may
On 1/29/2011 6:18 PM, Richard Elling wrote:
On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote:
On 1/29/2011 12:57 PM, Richard Elling wrote:
0(offsite)# zpool status
pool: tank1
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas for the
Hi Mike,
Yes, this is looking much better.
Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub, and then zpool clear should
resolve the corruption, but it depends on how bad the corruption is.
First, I would try the least destructive method:
On Mon, Jan 31, 2011 at 03:41:52PM +0100, Joerg Schilling wrote:
Brandon High bh...@freaks.com wrote:
On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
What is the status of ZFS support for TRIM?
I believe it's been supported
Pasi Kärkkäinen pa...@iki.fi wrote:
On Mon, Jan 31, 2011 at 03:41:52PM +0100, Joerg Schilling wrote:
Brandon High bh...@freaks.com wrote:
On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
What is the status of ZFS support
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
Hi Mike,
Yes, this is looking much better.
Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub and then zpool clear should
resolve the corruption, but it depends on how bad the corruption
G'day All.
I’m trying to select the appropriate disk spindle speed for a proposal and
would welcome any experience and opinions (e.g. has anyone actively chosen
10k/15k drives for a new ZFS build and, if so, why?).
This is for ZFS over NFS for VMware storage, i.e. primarily random 4kB
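A rough way to compare spindle speeds for a random-I/O workload like this is the classic per-spindle IOPS estimate: one operation per average seek plus half a rotation. This is a back-of-the-envelope sketch, not from the thread; the seek times below are typical vendor figures, and real throughput depends heavily on ARC caching and a ZIL device.

```python
# Rough random IOPS per spindle: 1 / (average seek + average rotational
# latency), where rotational latency averages half a revolution.
def random_iops(rpm, avg_seek_ms):
    rotational_latency_ms = 60000.0 / rpm / 2  # half a revolution, in ms
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# Assumed typical average seek times per drive class.
for rpm, seek in [(7200, 8.5), (10000, 4.5), (15000, 3.5)]:
    print(f"{rpm:>5} rpm: ~{random_iops(rpm, seek):.0f} IOPS")
```

By this estimate a 15k drive delivers roughly 2.3x the random IOPS of a 7.2k drive, which is the usual argument for 10k/15k spindles in a VM-storage build when you can't cover the working set with cache.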
Fred,
You can easily get them from our resellers. Our resellers are all around the
world.
Rocky
From: Fred Liu [mailto:fred_...@issi.com]
Sent: Monday, January 31, 2011 1:43 AM
To: Khushil Dep
Cc: Rocky Shek; Pasi Kärkkäinen; Philip Brown; zfs-discuss@opensolaris.org
Subject: RE:
On Jan 31, 2011, at 1:19 PM, Mike Tancsa wrote:
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
Hi Mike,
Yes, this is looking much better.
Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub and then zpool clear should
resolve the
I'm not sure about *docs*, but my rough estimations:
Assume 1TB of actual used storage. Assume 64K block/slab size. (Not
sure how realistic that is -- it depends totally on your data set.)
Assume 300 bytes per DDT entry.
So we have (1024^4 / 65536) * 300 = 5033164800 or about 5GB RAM for
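That arithmetic, parameterized (a sketch using the same assumptions as the post: 64K average block size and 300 bytes per DDT entry, both of which depend on the data set and ZFS version):

```python
def ddt_ram_bytes(used_bytes, block_size=64 * 1024, ddt_entry_bytes=300):
    """Estimate RAM needed to hold the dedup table (DDT) in core."""
    n_blocks = used_bytes // block_size  # unique blocks, worst case
    return n_blocks * ddt_entry_bytes

one_tib = 1024 ** 4
estimate = ddt_ram_bytes(one_tib)
print(estimate, "bytes, or about", round(estimate / 1024 ** 3, 2), "GiB")
```

Note that halving the block size doubles the estimate, which is why small-record workloads make dedup so much more expensive.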
As I've said here on the list a few times earlier, most recently in the
thread 'ZFS not usable (was ZFS Dedup question)', I've been doing some
rather thorough testing of zfs dedup, and as you can see from those
posts, it wasn't very satisfactory. The docs claim 1-2GB memory usage
per terabyte
After an upgrade of a busy server to Oracle Solaris 10 9/10, I notice
a process called zpool-poolname that has 99 threads. This seems to be
a limit, as it never goes above that. It is lower on workstations.
The `zpool' man page says only:
Processes
Each imported pool has an associated
Even *with* an L2ARC, your memory requirements are *substantial*,
because the L2ARC itself needs RAM. 8 GB is simply inadequate for your
test.
With 50TB of storage and 1TB of L2ARC, with no dedup, what amount of ARC
would you recommend?
And then, _with_ dedup, what would you
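The reason the L2ARC itself consumes RAM is that every buffer cached on the L2ARC device keeps a small header in ARC. A sketch of that overhead; the ~180 bytes per header and 64K average record size are assumptions here (the real header size varies by ZFS version, and small records inflate the cost sharply):

```python
def l2arc_header_ram(l2arc_bytes, avg_record_size=64 * 1024,
                     header_bytes=180):
    # Each record cached on the L2ARC device keeps a header in ARC (RAM).
    return (l2arc_bytes // avg_record_size) * header_bytes

one_tib = 1024 ** 4
print(round(l2arc_header_ram(one_tib) / 1024 ** 3, 2),
      "GiB of ARC consumed by headers for 1TB of L2ARC")
```

Under these assumptions, 1TB of L2ARC eats close to 3GiB of ARC before caching any data at all, which is why 8GB of RAM is inadequate for the setup being discussed.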
- Original Message -
Even *with* an L2ARC, your memory requirements are *substantial*,
because the L2ARC itself needs RAM. 8 GB is simply inadequate for
your
test.
With 50TB of storage and 1TB of L2ARC, with no dedup, what amount of ARC
would you recommend?
And
On Jan 31, 2011, at 6:16 PM, Roy Sigurd Karlsbakk wrote:
Even *with* an L2ARC, your memory requirements are *substantial*,
because the L2ARC itself needs RAM. 8 GB is simply inadequate for your
test.
With 50TB of storage and 1TB of L2ARC, with no dedup, what amount of ARC
would you
How do you verify that a zfs send binary object is valid?
I tried running a truncated file through zstreamdump and it completed
with no error messages and an exit status of 0. However, I noticed it
was missing the final print statement with a checksum value,
END checksum = ...
Is there any
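Since zstreamdump exits 0 even for a truncated stream, one workaround is to scan its output for the final checksum line instead of trusting the exit status. A sketch; the "END checksum" wording is taken from the post above, and `verify_stream` is a hypothetical helper that assumes zstreamdump is on PATH:

```python
import subprocess

def stream_dump_complete(dump_output):
    """True if zstreamdump output contains the final 'END checksum' line."""
    return any(line.strip().startswith("END checksum")
               for line in dump_output.splitlines())

def verify_stream(path):
    # Hypothetical helper: feed a saved 'zfs send' stream to zstreamdump
    # and check the text output rather than the (always-0) exit status.
    with open(path, "rb") as f:
        out = subprocess.run(["zstreamdump"], stdin=f,
                             capture_output=True, text=True).stdout
    return stream_dump_complete(out)
```

This only detects truncation, not corruption within complete records; a byte-for-byte check would need the stream's own checksums to be validated on receive.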
On 01/31/11 06:40 PM, Roy Sigurd Karlsbakk wrote:
- Original Message -
Even *with* an L2ARC, your memory requirements are *substantial*,
because the L2ARC itself needs RAM. 8 GB is simply inadequate for
your
test.
With 50TB of storage and 1TB of L2ARC, with no dedup, what
On Jan 30, 2011, at 6:03 PM, Richard Elling wrote:
On Jan 30, 2011, at 5:01 PM, Stuart Anderson wrote:
On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
Is it possible to partition the global setting for the maximum ARC size
with
The threads associated with the zpool process have special purposes and are
used by the different I/O types of the ZIO pipeline. The number of threads
doesn't change for workstations or servers. They are fixed values per ZIO
type. The new process you're seeing is just exposing the work that has