Hi everybody,
I'm experiencing something weird on one of my zpools. One of my hard
drives failed (c3t3d0). The hot spare (c4t3d0) did its job, I
(physically) replaced the failed drive, and rebooted.
I have also acknowledged the failure with fmadm.
I now have this zpool config:
$ zpool status storage
pool:
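For context, the usual recovery sequence here would be something like the
following (device names from above; the exact steps are an assumption):

# tell ZFS the failed disk has been replaced in the same slot
zpool replace storage c3t3d0
# once resilvering completes, return the hot spare to the spares list
zpool detach storage c4t3d0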
After working with Sanjeev, and putting a bunch of timing statements
throughout the code, it turns out that file writes are NOT the bottleneck,
as one would assume.
It is actually reading the file into a byte buffer that is the culprit.
Specifically, this Java statement:
byteBuffer =
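A minimal sketch of that kind of timed read, assuming a single
allocate-and-fill via NIO (the class name and file handling are my guesses,
not the original code):

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class TimedRead {
    public static void main(String[] args) throws IOException {
        long start = System.nanoTime();
        FileInputStream in = new FileInputStream(args[0]);
        try {
            FileChannel ch = in.getChannel();
            // one buffer sized to the whole file, filled in a read loop
            ByteBuffer byteBuffer = ByteBuffer.allocate((int) ch.size());
            while (byteBuffer.hasRemaining() && ch.read(byteBuffer) >= 0) {
                // loop until the buffer is full or we hit EOF
            }
        } finally {
            in.close();
        }
        System.out.println("read took "
                + (System.nanoTime() - start) / 1000000 + " ms");
    }
}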
Selim,
Symantec does support ZFS as DSSU targets. I've also seen a Sun white
paper outlining the use of Thumper (Sun X4500) as a NB 6.5 media server,
where the best practice was to configure multiple NB disk storage
units, each using a distinct ZFS file system. In this case, all the ZFS file
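That layout would look roughly like this (pool and dataset names are made
up for illustration):

# one ZFS file system per NetBackup disk storage unit
zfs create thumper/dssu1
zfs create thumper/dssu2
zfs create thumper/dssu3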
Hello,
Is the gzip compression algorithm planned to be in Solaris 10 Update 5?
Thanks in advance,
Brad
--
The Zone Manager
http://TheZoneManager.COM
http://opensolaris.org/os/project/zonemgr
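For context, on builds where the gzip algorithm is available, it is enabled
per dataset; the dataset name below is a placeholder:

# enable gzip compression on a dataset
zfs set compression=gzip tank/data
# or pick a level explicitly, gzip-1 (fastest) through gzip-9 (smallest)
zfs set compression=gzip-9 tank/data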
Ralf Ramge wrote:
Thomas Liesner wrote:
Does this mean that if I have a pool of 7 TB with one filesystem for all
users, with a quota of 6 TB, I'd be alright?
Yep. Although I *really* recommend creating individual file systems, e.g.
if you have 1,000 users on your server, I'd create 1,000 file systems, one
per user, along the lines of the sketch below.
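A rough sketch (pool name, user names, and quota size are placeholders):

# one file system per user, each with its own quota
for u in alice bob carol; do
  zfs create tank/home/$u
  zfs set quota=5G tank/home/$u
done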
On 2/13/08, Tom Buskey [EMAIL PROTECTED] wrote:
Are you using the Supermicro in Solaris or OpenSolaris? Which version?
64-bit or 32-bit?
I'm asking because I recently went through a number of SCSI cards that are
in the HCL as supported, but do not have 64-bit drivers, so they only work
in 32-bit mode.
Are you using the Supermicro in Solaris or OpenSolaris? Which version?
64-bit or 32-bit?
I'm asking because I recently went through a number of SCSI cards that are in
the HCL as supported, but do not have 64-bit drivers, so they only work in
32-bit mode.
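A quick way to check which mode a box actually booted in:

# print whether the running Solaris kernel is 32-bit or 64-bit
isainfo -kv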
On 2/5/2008 2:45 PM, Jeremy Kister wrote:
1. What do I have to do (short of replacing the seemingly good disk) to
get c3t8d0 back online?
I ended up applying patches 124205-05 and 118855-36. Things are much
better now, but there are still [at least] two issues remaining
with my zpool in
Tom Buskey wrote:
Are you using the Supermicro in Solaris or OpenSolaris? Which version?
64-bit or 32-bit?
I'm asking because I recently went through a number of SCSI cards that are in
the HCL as supported, but do not have 64-bit drivers, so they only work in
32-bit mode.
I saw some other people with a similar problem, but reports claimed this was
'fixed in release 42', which is many months old, and I'm running the latest
version. I made a RAIDz2 of 8x500GB drives, which should give me a 3TB pool:
zfs list:
NAME USED AVAIL REFER MOUNTPOINT
pile 269K 2.67T 40.4K
[EMAIL PROTECTED] said:
difference my tweaks are making. Basically, the problem users experience
when the load shoots up is huge latency. An ls on a non-cached
directory, which usually is instantaneous, will take 20, 30, 40 seconds or
more. Then, when the storage array catches up,
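An easy way to put a number on that symptom (the path is a placeholder):

# time a directory listing that is not already cached
time ls -l /export/home/somedir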
On Wed, Feb 13, 2008 at 02:48:25PM -0800, Sam wrote:
I saw some other people with a similar problem, but reports claimed
this was 'fixed in release 42', which is many months old, and I'm running
the latest version. I made a RAIDz2 of 8x500GB drives, which should give me a
3TB pool:
How many sectors on
2008/2/13, Sam [EMAIL PROTECTED]:
I saw some other people with a similar problem, but reports claimed this was
'fixed in release 42', which is many months old, and I'm running the latest
version. I made a RAIDz2 of 8x500GB drives, which should give me a 3TB pool:
Disk manufacturers use ISO units, where 1 GB = 10^9 bytes, so a '500 GB'
drive is only about 465 GiB as the OS counts it. With RAIDz2 you get 6
data disks out of the 8, and 6 x 465 GiB is roughly 2.73 TiB, which is
about what you're seeing.
Way crude, but effective enough:
Kinda cool, but isn't that what

sar -f /var/adm/sa/sa`date +%d` -A | grep -v ,

is for? (crontab -e sys to enable the collection jobs, to start...)
For more fun, enable extended process accounting:
acctadm -e extended -f /var/adm/exacct/proc process
Rob
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
...
I know, I know, I should have gone with a JBOD setup, but it's too late for
that in this iteration of this server. When we set this up, I had the gear
already, and it's not in my budget to get new stuff right now.
What kind of
zpool list reports 3.67T, df reports 2.71T, which is pretty close to 2.73, so I
imagine you guys are right that the difference is 465 GiB vs 500 GB for the
size of each disk; guess I'll go pick up another pair :)
Thanks!
Sam
[EMAIL PROTECTED] said:
It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart
more on random I/O. The server/initiator side is a
I have a hot spare that was part of my zpool but is no longer
connected to the system. I can run the zpool remove command and it
returns fine, but it doesn't seem to do anything.
I have tried adding and removing spares that are connected to the
system, and that works properly. Is zpool remove failing
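The sequence in question, roughly (pool and device names are placeholders):

# returns success, but the spare is still listed afterwards
zpool remove tank c4t3d0
zpool status tank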
Hi;
One of my customers is considering a 10 TB NAS box for some Windows boxes.
Reliability and high performance are mandatory.
So I plan to use 2x clustered servers + some storage, with ZFS and Solaris.
Here are my questions:
1) Is anybody using Clustered Solaris and ZFS