Oh no I am not bothered at all about the target ID numbering. I just wondered
if there was a problem in the way it was enumerating the disks.
Can you elaborate on the dd command, LaoTsao? Is the 's' you refer to a
parameter of the command or the slice of a disk - none of my 'data' disks have
Lanky Doodle wrote:
Oh no I am not bothered at all about the target ID numbering. I just wondered
if there was a problem in the way it was enumerating the disks.
Can you elaborate on the dd command, LaoTsao? Is the 's' you refer to a
parameter of the command or the slice of a disk - none of my
On Wed, Aug 10, 2011 at 2:56 PM, Lanky Doodle lanky_doo...@hotmail.com wrote:
Can you elaborate on the dd command, LaoTsao? Is the 's' you refer to a
parameter of the command or the slice of a disk - none of my 'data' disks
have been 'configured' yet. I wanted to ID them before adding them to
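For reference, the 's' is the slice: on a labelled Solaris disk, s2 conventionally covers the whole disk, and on x86 the p0 device covers the whole disk even before it has been labelled. A sketch with made-up device names:

  dd if=/dev/rdsk/c1t2d0s2 of=/dev/null bs=128k count=5000   # labelled disk
  dd if=/dev/rdsk/c1t2d0p0 of=/dev/null bs=128k count=5000   # unlabelled disk, x86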
On Sat, Aug 06, 2011 at 07:19:56PM +0200, Eugen Leitl wrote:
Upgrading to the hacked N36L BIOS seems to have done the trick:
eugen@nexenta:~$ zpool status tank
pool: tank
state: ONLINE
scan: none requested
config:
NAME        STATE     READ WRITE CKSUM
tank
On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
gregory.dur...@gmail.com wrote:
Hello,
We just purchased two of the SC847E26-RJBOD1 units to be used in a
storage environment running Solaris 11 Express.
We are using Hitachi HUA723020ALA640 6 Gb/s drives with an LSI SAS
9200-8e HBA. We are not
Hi,
I am facing an issue with zfs destroy: it takes almost 3 hours to delete a
snapshot of 150G in size.
Could you please help me resolve this issue? Why does zfs destroy take this
much time, when taking a snapshot completes within a few seconds?
I have tried removing an older snapshot but
Hello
I am having problems with my ZFS. I have put in an LSI 3041E-S controller and
have 2 disks on it, and a further 4 on the motherboard. I am getting read
errors on the pool but not on any disk. Any idea where I should look to find
the problem?
Thanks
Steven
$ uname -a
SunOS X..com
Thanks Andrew, Fajar.
Hiya,
Now that I have figured out how to read disks using dd to make LEDs blink, I
want to write a little script that iterates through all drives, dd's each with
a few thousand counts, stops, then dd's it again with another few thousand
counts, so I end up with maybe 5 blinks.
I don't want
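A minimal sketch of what that script could look like (the device glob and counts are placeholders to adjust for your controller numbering):

  #!/usr/bin/ksh
  # Blink each drive in turn: 5 bursts of reads, one second apart.
  for disk in /dev/rdsk/c8t*d0p0; do
      echo "flashing $disk"
      i=0
      while [ $i -lt 5 ]; do
          dd if=$disk of=/dev/null bs=512 count=5000 > /dev/null 2>&1
          sleep 1
          i=$((i + 1))
      done
  done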
hi
Most modern servers have a separate ILOM that supports ipmitool, which can
talk to the HDDs.
What is your server? Does it have a separate remote management port?
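For instance, with a hypothetical ILOM address and credentials (exact sensor names and LED controls vary by platform):

  # list drive-slot sensors known to the service processor
  ipmitool -I lanplus -H 192.0.2.10 -U root -P changeme sdr type 'Drive Slot'
  # blink the chassis identify LED for 30 seconds
  ipmitool -I lanplus -H 192.0.2.10 -U root -P changeme chassis identify 30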
On 8/10/2011 8:36 AM, Lanky Doodle wrote:
Hiya,
Now that I have figured out how to read disks using dd to make LEDs blink, I
want to write a
Also, should I be getting Illegal Request errors? (No hard or soft errors.)
Some more info (I am doing a scrub, hence the high blocking levels):
var/log$ iostat -Ex
                  extended device statistics
device     r/s   w/s   kr/s   kw/s wait actv svc_t  %w  %b
sd0        1.1  16.9   30.6
I would generally agree that dd is not a great benchmarking tool, but you could
use multiple instances to multiple files, and larger block sizes are more
efficient. And it's always good to check iostat and mpstat for I/O and CPU
bottlenecks. Also note that an initial run that creates files may
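For instance (paths and sizes are only examples):

  # four concurrent sequential writers, 1 MB blocks, 8 GB each
  for i in 1 2 3 4; do
      dd if=/dev/zero of=/tank/test/file$i bs=1024k count=8192 &
  done
  wait
  # in other terminals: iostat -xn 5   and   mpstat 5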
Hello All,
Sorry for the lack of information. Here are answers to some of the questions:
1) createPool.sh:
essentially takes 2 params: the first is the number of disks in the pool; the
second is either blank or 'mirrored'. Blank stripes all the disks (i.e. raid
0); 'mirrored' creates 2-disk mirrors.
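For reference, a rough sketch of what such a script might look like (this is not the actual createPool.sh; pool and device names are invented):

  #!/usr/bin/ksh
  # usage: createPool.sh <ndisks> [mirrored]
  ndisks=$1
  mode=$2
  disks=""
  i=0
  while [ $i -lt $ndisks ]; do
      disks="$disks c0t${i}d0"
      i=$((i + 1))
  done
  if [ "$mode" = "mirrored" ]; then
      vdevs=""
      set -- $disks
      while [ $# -ge 2 ]; do
          vdevs="$vdevs mirror $1 $2"
          shift 2
      done
      zpool create tank $vdevs
  else
      zpool create tank $disks    # plain stripe, i.e. raid 0
  fi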
What sort of load will this server be serving? Sync or async writes? What sort
of reads? Random I/O or sequential? If sequential, how many streams/concurrent
users? Those are factors you need to evaluate before running a test. A local
test will usually be using async I/O and a dd with only 4k
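On builds that have the sync property, one way to make a local dd exercise the synchronous write path instead (dataset name is an example):

  zfs set sync=always tank/scratch
  dd if=/dev/zero of=/tank/scratch/testfile bs=4k count=100000
  zfs inherit sync tank/scratch    # restore the default afterwards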
This system is for serving VM images through iSCSI to roughly 30
XenServer hosts. I would like to know what type of performance I can
expect in the coming months as we grow this system out. We currently
have 2 Intel SSDs mirrored for the ZIL and 2 Intel SSDs for the L2ARC
in a stripe. I am
Then create a ZVOL and share it over iSCSI, and from the initiator host run
some benchmarks. You'll never get good results from local tests. For that sort
of load, I'd guess a stripe of mirrors should be good; RAIDzN will probably be
rather bad.
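A minimal COMSTAR sequence for that kind of test on Solaris 11 Express (pool and volume names are placeholders):

  zfs create -V 100g tank/vmvol
  svcadm enable -r svc:/network/iscsi/target:default
  itadm create-target
  stmfadm create-lu /dev/zvol/rdsk/tank/vmvol
  stmfadm add-view <GUID printed by create-lu>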
roy
- Original Message -
This system is
On Wed, 10 Aug 2011, steven wrote:
Also, should I be getting Illegal Request errors? (No hard or soft errors.)
Illegal Request sounds like the OS is making a request that the drive
firmware does not support. It is also possible that the request
became corrupted due to an interface issue.
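The per-device error counters will show which bucket they land in, e.g. (device name is an example):

  iostat -En c7t0d0
  # check the Soft/Hard/Transport counts and the "Illegal Request:" line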
Bob
--
Bob
What sort of controller/backplane/etc are you using? I've seen similar iostat
output with Western Digital drives on a Supermicro SAS expander.
roy
- Original Message -
Also, should I be getting Illegal Request errors? (No hard or soft
errors.)
Some more info (I am doing a scrub, hence the high
On Wed, Aug 10, 2011 at 2:55 PM, Gregory Durham
gregory.dur...@gmail.com wrote:
3) In order to deal with caching, I am writing larger amounts of data
to the disk than I have memory for.
The other trick is to limit the ARC to a much smaller value and then
you can test with sane amounts of data.
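For example, capping the ARC at 4 GB by adding this to /etc/system and rebooting:

  set zfs:zfs_arc_max = 4294967296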
On 08/10/11 05:13 PM, Nix wrote:
Hi,
I am facing an issue with zfs destroy: it takes almost 3 hours to delete a
snapshot of 150G in size.
Could you please help me resolve this issue? Why does zfs destroy take this
much time?
Do you have dedup enabled?
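You can check with (names are placeholders):

  zfs get -r dedup tank
  zpool get dedupratio tank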
--
Ian.
Hello,
Thanks for the reply. I used to use the onboard SATA ports (Intel DQ965DF) but
I have added an LSI-3041E-S controller and this is where I get the problems.
The controller is a Sun version (note the S, not R, in the model) but it is a
PC version and I tried flashing it with the firmware from
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
I am facing an issue with zfs destroy: it takes almost 3 hours to
delete a snapshot of 150G in size.
Do you have dedup enabled?
I have always found zfs destroy takes some
Also, snapshot destroys are much slower with older releases such as 134. I
recommend an upgrade, but an upgrade will not help much if you are using dedup.
-- Garrett D'Amore
On Aug 10, 2011, at 8:32 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: