Hello,
I'm trying to use OpenSolaris and ZFS to create replicated storage on
the x86 platform.
Initially I wanted to use the send/recv commands to replicate data, but it
appears there are no decent scripts for it.
And there are no resources to write our own; anyway, it's going to be as
primitive as the others I
--
Best regards,
Roman Naumenko
Network Administrator
ro...@frontline.ca
25 Adelaide Street East | Suite 600 | Toronto, Ontario | M5C 3A1
Helpdesk: (416) 637-3132
www.frontline.ca
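(For context, since the thread is about the lack of send/recv scripts, here is a
minimal sketch of such a script. The pool, dataset and host names are made up for
illustration, and error handling is omitted.)

#!/usr/bin/bash
# Minimal incremental zfs send/recv replication sketch (hypothetical names).
SRC=tank/data                # local dataset to replicate
DST=backup/data              # dataset on the remote box
REMOTE=host2                 # replication target host
STAMP=$(date +%Y%m%d%H%M%S)

# Take a new snapshot on the source.
zfs snapshot ${SRC}@repl-${STAMP}

# Find the previous replication snapshot, if any (second newest repl-* snapshot).
PREV=$(zfs list -H -o name -t snapshot -s creation -r ${SRC} | grep "^${SRC}@repl-" | tail -2 | head -1)

if [ -n "$PREV" ] && [ "$PREV" != "${SRC}@repl-${STAMP}" ]; then
    # Incremental send from the previous snapshot.
    zfs send -i "$PREV" ${SRC}@repl-${STAMP} | ssh ${REMOTE} zfs receive -F ${DST}
else
    # First run: full send.
    zfs send ${SRC}@repl-${STAMP} | ssh ${REMOTE} zfs receive -F ${DST}
fi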
A weird issue:
1. AVS works for connections on a local switch via a local FreeBSD router
connected to the switch:
host1 - switch - FreeBSD router - switch - host2
2. When trying to emulate replication over a long-distance remote connection
with the FreeBSD router on the remote side, AVS
The main problem with AVS is the lack of logging information. It responds with
short messages that something failed, and you have no idea where to look or
what it relates to...
Strange software, really...
Thanks for your help!
Actually I have some more questions. I need to make a decision on a
replication mode for our storage: zfs send/receive, AVS, or even a
Microsoft internal tool on the iSCSI volumes with independent ZFS
snapshots on both sides.
Initially AVS seemed like a good option to me, but I
Jim Dunham wrote, On 04/24/2009 06:46 PM:
Roman,
Thanks for your help!
Actually I have some more questions. I need to make a decision on a
replication mode for our storage: zfs send/receive, AVS, or even a
Microsoft internal tool on the iSCSI volumes with independent ZFS
snapshots on both sides.
was the reason for
putting it not on a dedicated pair of disks, as the documentation suggests?
(AVS administration guide: raw devices must be stored on a disk separate from
the disk that contains the data from the replicated volumes. The bitmap must
not be stored on the same disk as the replicated volumes.)
Roman
to a significant degree?
--
Best regards,
Roman Naumenko
ro...@frontline.ca
Re: [storage-discuss] motherboard for storage server
Eric D. Mudama wrote, On 06/02/2009 02:22 PM:
On Tue, Jun 2 at 12:46, Bob Friesenhahn wrote:
On Tue, 2 Jun 2009, Roman Naumenko wrote:
Does it make sense to go with motherboards that support 32G of
memory?
Can
and async_item_hwm respectively.
Does it make sense to change them with a 1G local connection?
Typical load is about 5Mb/s reading/writing and sometimes it goes up to
40Mb/s
Right now I can't see a relationship between zpool I/O load spikes and
access delays.
--
Best regards,
Roman Naumenko
Network Administrator
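(One way to look for such a relationship is to watch the pool and the raw disks
side by side while a delay happens; a sketch, with the pool name as a placeholder:)

# Per-vdev pool throughput, 5-second samples
zpool iostat -v tank 5
# Per-device service times and queueing (asvc_t, %b columns), same interval
iostat -xzn 5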
http://docs.sun.com/source/819-6148-10/chap5.html, search for
async_throttle_delay
I mean, it's interesting why it's growing slowly over time.
I thought this was an average value for a given period of observation.
Or is this a total delay for the whole period of replication time?
--
Regards,
Roman
Or is this a total delay for the whole period of replication time?
This is the total number of times SNDR had to delay replicating a
chunk of data since a given replica was last enabled or resumed by
SNDR. An increment occurs during asynchronous replication with both
memory and disk queues,
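(If the AVS packages are installed, dsstat can show the SNDR counters per set
over time, which makes it easier to see when the async queue itself is the
bottleneck; a sketch:)

# SNDR replication statistics, refreshed every 5 seconds
dsstat -m sndr 5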
Nobody wants to test them, since they don't live for a long time :)
I've asked around and people tell me that SSDs are fast but die fast as well
under load.
--
Roman Naumenko
switch to COMSTAR for providing iSCSI targets?
Thank you,
Roman Naumenko
Network Administrator
ro...@frontline.ca
Hi Dan,
Thanks for the information.
Can you advise which version of OpenSolaris is better for production
use? Is it only 2009.06?
Are there any features and fixes that are not available in 2009.06 but are
present in newer builds? Especially regarding iSCSI.
--
Roman Naumenko
Network Administrator
Hello Dan,
Can you point me to where I can download the ISO image for the development
version? I have b11x.iso on disk but can't find the source for it.
Thanks,
Roman Naumenko
block sizes) only decreases
speed.
And writing small files (a mail archive) makes ZFS write to the storage
constantly, no bursts at all. Is that how it should be?
The NFS client is Ubuntu 8.04.
Regards,
Roman Naumenko
ro...@frontline.ca
I just wonder why ZFS still writes constantly?
Regards,
Roman Naumenko
ro...@frontline.ca
stmfadm
See: /var/svc/log/system-stmf:default.log
Impact: This service is not running.
r...@zsan3:~# tail -f /var/svc/log/system-stmf:default.log
[ Jul 1 13:11:35 Enabled. ]
[ Jul 1 13:11:35 Executing start method (/lib/svc/method/svc-stmf start). ]
svc-stmf: unable to load config
Thanks,
Roman
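(For reference, the usual first steps with an SMF service stuck like this,
before resorting to a reboot:)

# Show why the service is down and which services it blocks
svcs -xv svc:/system/stmf:default
# Clear the maintenance state and retry the start method
svcadm clear svc:/system/stmf:default
svcadm enable svc:/system/stmf:default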
Thanks, it's resolved.
Clearing the state didn't help, but a reboot did.
--
Roman
Hi,
Just wondering if there is something special that makes the 7120 faster than a
server you can assemble yourself from common components. I have the ultimate
task of making a fast ZFS box :)
And what I'm thinking is that there is only one thing you can't get from the
nearest server hardware supplier:
Hi Brent,
Thanks for this information, I'll use it.
Actually, what do the developers think about its documentation? There is basically
no documentation for COMSTAR available (I don't count the messy wiki
pages; man pages are helpful, but they are not real documentation either).
Are there
Great, I'll look into your table.
Regarding the RAID controller: I disabled the cache during array creation (for some
reason JBOD mode is not available for OpenSolaris on the Adaptec 5085).
Actually, it writes very fast, so I can't blame the controller:
r...@zsan0:/# dd if=/dev/zero
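(The dd command above is cut off in the archive; a typical sequential write test
of this sort looks something like the following, with a made-up output path and size.)

# Write ~10 GB of zeroes through the filesystem to gauge sequential write throughput
dd if=/dev/zero of=/zsan0store/ddtest bs=1024k count=10240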
May I ask what kind of storage you eventually use for your
production (if anything with ZFS)?
--
Best regards,
Roman Naumenko
Network Administrator
ro...@frontline.ca
Chris Du wrote:
Compression helps when you don't export volumes through NFS. SSD really helps
increase write speed and reduce latency for NFS.
This filebench run was on the storage server itself. I haven't had a chance to run
it on the client. On the client, I think the
web interfaces for storage
appliances?
At the top there is the FishWorks appliance, but it comes only with the
hardware, as far as I know.
There is a simple interface for ZFS, smcwebserver.
Is there anything available for COMSTAR?
--
Roman Naumenko
ro...@frontline.ca
of
COMSTAR.
Guido
From:
Roman Naumenko [mailto:ro...@frontline.ca]
Sent: Tuesday, 7 July 2009 19:06
To: Jim Dunham
Cc: Anzuoni Guido; storage-discuss@opensolaris.org
Subject: Re: Re: [storage-discuss] Limiting iSCSI logon for
targets
Jim Dunham wrote:
Ross Walker wrote:
[storage-discuss] Sol 10u7 iscsitgt write performance
Ok, I have just about given up
on this one, and I was so hopeful after
having figured out the read performance issue
(which to recap was a
mix of ESX guest network latency and a
Hello,
It's OpenSolaris 111b on x86.
I'm trying to configure an Adaptec 5085 to work in JBOD mode.
It has 8 SATA disks connected. If the disks are configured in array mode
(each disk as a single-disk array), Solaris recognizes them on the fly.
r...@zsan0:~# cfgadm -lav
Ap_Id Receptacle
I upgraded the box to 118 and now it's messed up.
The system goes to maintenance due to:
system/devices/fc-fabric:default is in maintenance
In the logs:
lib/svc/method/fcoeconfig not found
46 dependent services are not running.
How did this FCoE manage to mess everything up?
I didn't even plan to use FC
Thank you, Jonathan.
--
Roman
Jonathan Edwards wrote, On 09-07-23 04:54 PM:
http://opensolaris.org/jive/thread.jspa?threadID=107881
On Jul 23, 2009, at 4:40 PM, Roman Naumenko wrote:
I upgraded box to 118 and now it's messed up.
System goes to maintenance due to:
system/devices/fc
Thanks for looking into this.
You are right to notice the unusual jumbo frames on the Windows NIC.
Why they are generated with jumbo frames disabled in the settings, nobody will
explain, I believe.
Freaking M$...
I'll post the capture file from Solaris later. There were no jumbo frames.
And I checked another Windows server - it generates a huge number of 50-60k packets
(the targets are on 111b).
Maybe this is a feature? Can somebody take a look at their own Windows box and check
for jumbo frames (while they're not enabled on the NIC)?
--
Thanks, guys, for looking into it. You did an excellent diagnosis without looking
at the actual server (just like Dr. House does with his patients :)
It was indeed the TSO option that caused the 64k packets in the traces; after disabling it,
the large packets are no longer there. Funny, it doesn't appear if
# itadm list-target
TARGET NAME                    STATE    SESSIONS
iqn.1986-03.com.sun:02:vol1    online   1
The Windows initiator has this target connected, no issues. But a new disk doesn't
appear in the management console.
What
Hi Jim,
Sure, I configured lu and added a view as described in the link.
stmfadm list-lu -v
LU Name: 600144F0CC12CC004A8600DA0001
Operational Status: Online
Provider Name : sbd
Alias : lu-vol1
View Entry Count : 1
Data File :
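(The follow-up below says the view was the culprit; for reference, this is
roughly how a view is checked and, if missing, added for the LU shown above.
Host and target group restrictions would be added as needed.)

# Show the view entries defined for this LU
stmfadm list-view -l 600144F0CC12CC004A8600DA0001
# Add an unrestricted view (all hosts, all targets, auto-assigned LUN) if none exists
stmfadm add-view 600144F0CC12CC004A8600DA0001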
Resolved, thanks.
The view wasn't correctly defined.
I'd like to confirm that IRQ overlapping might be the issue; at least it was the
case with the FreeBSD kernel.
We once had bad performance on a FreeBSD-based firewall, and it turned out
that the embedded Broadcom NICs and the NICs on PCI shared the same IRQ.
After reassigning them, processor load decreased
I have a strange feeling that the attached slog device doesn't do anything for the zpool.
zsan0store 203G 10.7T 47 23 5.85M 2.49M
raidz2 203G 10.7T 47 22 5.85M 2.44M
c7t0d0 - - 25 5 830K 418K
c7t1d0 - - 25 5 830K 418K
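(One quick way to tell whether the slog is being hit is to watch the SSD device
directly while a synchronous workload runs; it should only see writes for sync
I/O. This is just a sketch; the slog's device name would come from zpool status.)

# Per-device statistics every 5 seconds; watch the slog device's w/s column
iostat -xzn 5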
Ross Walker wrote:
On Aug 17, 2009, at 12:24 PM, Roman Naumenko no-re...@opensolaris.org wrote:
The question is about the ZFS slog device and how to check if it can
improve access.
Does ZFS cache raw volume data on the slog device, or only filesystem data?
What kind
I have complaints from potential ZFS storage users who blame Sun for not
being capable of managing its resources (LUNs).
Let me explain what they mean by citing three examples:
I started formatting the second drive and it killed all performance.
Exchange is doing {online defragmentation,
Ok, thanks, I've seen the original before...
The problem now is that Windows at some point doesn't create any sync
activity. NFS, on the other hand, creates a huge amount. NFS performance is just so poor,
15MB/s - and this is with an SSD drive
--
Roman
Using mirrors just makes ZFS useless. The whole idea is reliable raid6
storage with snapshot features.
The question is how to prevent saturation for one volume.
By the way, how is this designed in large disk arrays, where hundreds of
LUNs are accessed simultaneously?
--
Roman
It seems the issue was indeed TSO and a bad Windows driver.
They updated it, disabled TSO - no complaints anymore.
Thank you for helping!
--
Roman
This is a COMSTAR+itadm configuration, so iscsitgt is not used. There is no
driver for the storage; it's JBOD on an Adaptec RAID controller.
And the bottom line of the problem is: NFS makes async writes all the time (copying
files, bonnie++, any other writing activity); I see it right away in iostat for the
SSD device.
Using mirrors just makes zfs useless. The whole idea
is a reliable raid6 storage with snapshots
features.
The question is how to prevent saturation for one
volume.
What is your current
zpool format (raidz, raidz2, etc)? Using a mirror
does not make zfs useless - you can still
On Aug 18, 2009, at 10:47 AM, Roman Naumenko no-re...@opensolaris.org wrote:
Using mirrors just makes zfs useless. The whole
idea
is a reliable raid6 storage with snapshots
features.
The question is how to prevent saturation for one
volume.
What is your current
ZFS-based mirroring is only slightly less reliable
than raidz2, and it gives much better IOPS.
How can it give much better IOPS, taking into account slow, large SATA drives?
Neither Exchange nor SQL Server tends to be throughput bound, but both require
Roman Naumenko
ro...@frontline.ca
Tristan Ball wrote:
I believe zpool iostat will include cached I/Os,
and write I/Os which will be coalesced into a single physical I/O to your
disk.
The plain iostat command is a good place to start to see what's actually
going to disk. iostat -dxzcn 1 is what comes out of my fingers
It's Windows 2008
If I choose quick format, it gives an error almost immediately: The format did
not complete successfully.
If it's the long format action, then it starts formatting, does something for a
couple of hours and then fails with the same error.
list-lu:
LU Name:
Of course over RDP, and we've been doing it for a couple of years using a Linux
iSCSI target.
There are too many servers to format them using the console.
--
Roman
Windows 2008 doesn't like it.
XP formats the volume easily; it can then be mounted on 2008 over iSCSI.
The only difference is that it's a 2-node cluster on 2008.
--
Roman Naumenko
Thanks Nigel,
Disabling jumbo frames and TSO on the Windows server allowed the formatting to
complete. The Sun box had an unchanged config:
e1000g1: flags=1001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FIXEDMTU> mtu
9000 index 3
inet 10.10.110.101 netmask ff00 broadcast 10.10.110.255
ether 0:15:17:89:97:1
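(For anyone wanting to repeat the check on the Solaris side, a capture along
these lines shows whether oversized frames ever leave the box; the interface
name and the initiator address are placeholders.)

# Capture iSCSI traffic on the storage interface to a file
snoop -d e1000g1 -o /tmp/iscsi.snoop host 10.10.110.102
# Then open the file in Wireshark/tshark and look for frames larger than the
# standard MTU with a display filter such as: frame.len > 1514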
Excellent analysis. Can you share the tshark commands that you used?
So, OpenSolaris sends large packets that confuse Wireshark (and probably
Windows)?
It's an Intel motherboard.
$ uname -a
SunOS zsan01 5.11 snv_118 i86pc i386 i86pc Solaris
That's what I have on the server, don't know how to
On Aug 27, 2009, at 1:25 PM, Roman Naumenko wrote:
Excellent analysis. Can you share the tshark commands
that you used?
So, OpenSolaris sends large packets that confuse
Wireshark (and
probably Windows)?
It's an Intel motherboard.
$ uname -a
SunOS zsan01 5.11 snv_118 i86pc
Target portal discovery shows the e1000g0 IP on the initiator. Although
it can't log in, it's still confusing.
Again, I'm getting the list of targets, since it listens on all interfaces.
Any references to documentation explaining this are appreciated.
--
Roman Naumenko
ro...@bestroman.com
Can anybody advise?
Binding a target portal to a particular interface, restricting which targets appear per
initiator - is this available in COMSTAR, or am I missing something?
--
Roman
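(Both pieces do appear to exist in COMSTAR: target portal groups bind a target
to specific addresses, and host groups plus views control which initiators see
which LUs. A rough sketch; the group names, the initiator IQN and the GUID are
only examples.)

# Bind the target to one portal (interface address) only
itadm create-tpg tpg1 10.10.110.101:3260
itadm modify-target -t tpg1 iqn.1986-03.com.sun:02:vol1

# Restrict the LU to a single initiator via a host group
stmfadm create-hg win-fileserver
stmfadm add-hg-member -g win-fileserver iqn.1991-05.com.microsoft:fileserver01
stmfadm add-view -h win-fileserver 600144F0CC12CC004A8600DA0001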
(zfs_arc_max)
--
Roman Naumenko
ro...@frontline.ca
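(The message above is cut off; for reference, capping the ARC is normally done
with this tunable in /etc/system, and the value below is just an example of 4 GB.)

* /etc/system: limit the ZFS ARC to 4 GB (takes effect after a reboot)
set zfs:zfs_arc_max = 0x100000000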
On Tue, Sep 8, 2009 at 10:43 AM, Roman Naumenko ro...@frontline.ca wrote:
Thanks, Ross.
Just to clarify: connecting the second enclosure
doesn't require the first to be turned off?
My understanding is that expanding the pool can be done
completely without service interruption?
You can
Hello list,
What are the options for building clustered storage using OpenSolaris? I'm
interested in HA solutions.
I tried only one option - AVS replication. Unfortunately, AVS configuration is
too complicated. Groups, bitmaps, queues, RPC timeouts, slicing - it's just a
nightmare to make it
I've heard about Nexenta.
But we've just started to move away from another Linux-based appliance, so I
don't feel like starting with another Linux once again.
Hopefully, sometime we'll just order proper hardware from Sun to provide
clustering or whatever is needed.
Anyway, thanks for
Hi guys,
It's kind of an emergency: a customer wants a fast, expandable array, but I don't
have any HBA to connect JBODs.
I've decided to use the LSI SAS 3801E card.
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3801e/index.html
There is positive feedback for this card
I have two of these sitting on my desk right now - ordered
them on overnight delivery last week, no problem.
What country are you in?
Regards,
Tim Creswick
Hi Tim,
I'm in Canada.
I wonder what the price was for this card?
And it seems like we found something; the guys from pc-pitstop.com are
I wonder what an OpenSolaris guru could advise?
--
Roman
[Added ha-clusters-discuss]
Have you looked at what Open HA Cluster (OHAC)
provides?
http://opensolaris.org/os/community/ha-clusters/
There is an HA-ZFS agent for OHAC and, more recently,
support for shared-nothing storage with COMSTAR.
Augustus.
I can't find this. Is there any?
--
Roman Naumenko
ro...@frontline.ca
From: Roman Naumenko ro...@frontline.ca
Is there any way to build HA storage using
common components like
JBOD enclosures, LSI HBAs, cheap SATA drives?
Yes.
http://www.sun.com/storage/disk_systems/unified_storage/index.jsp
Yes, that's the right option when going cheap
Yes, that's what I needed.
Thank you very much!
--
Roman
Chris Du wrote:
You need a JBOD kit. It's basically a power card and a SAS cascading cable.
Hi Chris,
Do you have any particular in mind?
--
Roman
/SC836E1-R800.cfm
Look at the optional parts at the bottom; there is a JBOD kit that includes a
power control card.
On Mon, Sep 21, 2009 at 2:13 PM, Roman Naumenko ro...@frontline.ca
wrote:
Chris Du wrote:
You need a JBOD kit. It's basically a power card and a SAS cascading
cable
Chris Du wrote:
JBOD Kit -- Used for cascading purposes
CSE-PTJBOD-CB1 - Power Control Card
CBL-0166L - SAS 836EL2/EL1 BP External Cascading Cable
CBL-0167L - SAS 836EL1 BP 1-Port Internal Cascading Cable
Yours is not the E1 model, which uses a SAS expander chip on the
On Tue, Sep 22, 2009 at 15:15, Chris Du
dilid...@gmail.com wrote:
I thought the QT model has 16 SATA/SAS ports on the
backplane and only the E1/E2 models support JBOD mode.
Sorry, my mistake. I have the E1 model, and thought
that Roman had mentioned he had that as well. I see now in the
first
It may be cheaper and easier to just replace the backplane if the case is
already bought.
Is it an easy procedure? I doubt
--
Roman
Chenbro makes a JBOD kit. Or at least *made*. A
number of sites are showing out of stock or even
discontinued.
http://usa.chenbro.com/corporatesite/products_detail.php?sku=76
UEK-12803 looks like the part number for you.
Who knows if the mounting options are compatible with
the
This is on opensolaris 118
time zfs list
real    0m0.015s
user    0m0.005s
sys     0m0.010s
time zfs list -t snapshot
real    0m19.441s
user    0m0.020s
sys     0m0.041s
time zfs list -t snapshot | wc -l
122
real    0m0.045s
user    0m0.018s
sys     0m0.030s
Hm, then it started to list very
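(For what it's worth, asking only for the name column avoids most of the
per-snapshot property lookups, so a listing like this usually stays fast even
when the full listing is slow.)

time zfs list -H -o name -t snapshot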
And here goes mine as well:
http://opensolaris.org/jive/thread.jspa?threadID=111540&tstart=0
Enjoy!
--
Roman
We're about to start transferring data from Linux storage boxes to OpenSolaris
storage. Linux servers provide targets that are used mainly by Windows servers.
The storage appliance is Openfiler (Linux based).
Media storage on Linux is a hardware raid5, 8 drives. There is one big 5TB GPT
Is there an easy way to transfer data from the old
volumes to COMSTAR targets? Obviously this can be
done by mounting both targets and copying the data on the
client, which is not efficient.
I'm not sure you have an easy way to do this. The
problem as I see it is the LVM'd volumes. You have a
The other way is to mount the old LUN on the
OpenSolaris server and dd the old data directly to the ZVOL. This avoids
the copy-off/copy-on over the network and could speed things up,
depending on the size of the volume relative to the amount of data being copied, and
you can choose your block size, which
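(A sketch of that approach, assuming the old Linux LUN is first attached to the
OpenSolaris box with the software initiator; the discovery address, device name
and zvol name are placeholders.)

# Point the Solaris initiator at the Openfiler box and enable SendTargets discovery
iscsiadm add discovery-address 192.168.1.50
iscsiadm modify discovery --sendtargets enable
# The old LUN then shows up as an ordinary disk (format/cfgadm will list it);
# copy it block-for-block into the new zvol backing the COMSTAR LU
dd if=/dev/rdsk/c5t0d0p0 of=/dev/zvol/rdsk/tank/vol1 bs=1024k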
http://blogs.sun.com/eschrock/entry/shadow_migration
Not sure if it's relevant in your specific setup, but
is still worth a look.
Regards,
Andrey
Thanks, I checked it quickly.
Unfortunately, this is about migration using NFS. No iSCSI option.
--
Roman
I just wonder how many IOPS a typical 2-processor box can deliver with 1
RAID controller and 1 LSI HBA with JBODs connected.
Right now dd makes 2000-4000 IOPS on an 8-disk raid10 array connected internally (not
JBOD).
How do I translate the IOPS number into initiator usage? I mean, how many servers can
Stupid situation...
Usually I create 1 LUN + 1 target at a time. But then I decided to create a few
LUNs.
Then I typed:
itadm create-target
It created a target, but then I realized I can't find which LUNs belong to it.
Does anybody know how to find the LUN-to-target relationship in COMSTAR?
--
Roman
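(A hedged sketch of an answer: in COMSTAR a LU is tied to targets only
indirectly, through views and target groups, so the mapping has to be read off
the view entries; the GUID below is just an example from elsewhere in the thread.)

# List targets and logical units
itadm list-target -v
stmfadm list-lu -v
# The mapping lives in the view entries of each LU...
stmfadm list-view -l 600144F0CC12CC004A8600DA0001
# ...and in the target group membership, if target groups are used
stmfadm list-tg -v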
I see the relationship: only after the view is created will you find, in stmfadm
list-lu -v,
that the LUN has a view.
Very inconvenient.
COMSTAR developers, could you comment on this? Maybe I'm missing something?
--
Roman
tim szeto wrote, On 09.10.2009 13:51:
Roman Naumenko wrote:
I see the relationship: only after the view is created will you find, in stmfadm
list-lu -v,
that the LUN has a view.
Very inconvenient.
COMSTAR developers, could you comment on this? Maybe I'm missing something?
Take
Before I order the controller and disks, I just want
to make sure that OpenSolaris will be able to see and
use these 1.5TB disks. I don't want to find out that
the controller works but can't see disks this large.
Can anyone enlighten me please?
Cheap models from Adaptec don't
I am using a Dell SAS 6i card with the Samsung 1.5TB
drives; the card is based on an LSI design.
I would recommend the SAS 6i as it is significantly
cheaper than LSI's normal retail channel cards.
Doesn't Dell screw them up?
I don't mean completely, just a little - a broken driver here, unsupported
Hello list,
Can somebody advise me how this can be done in the easiest way:
server1: 8 drives in raid6
server2: 8 drives replicated by AVS from server1.
There are COMSTAR targets on server1. Now I want to have raid10 instead of raid6 on
server1.
The way I see it can be done: break AVS,
Hi Jim,
AVS is configured for async replication.
Basically I would like to replace AVS with snapshot transfers. The
reason is performance.
So, after I stop replication and configure raid10 on the second server,
AVS is going to be disabled.
Both servers are on version 122 (which should be
You guys have a broken link, people might be interested in X25 :)
The page is http://www.sun.com/software/x25/
The link is:
http://store.sun.com/CMTemplate/CEServlet?process=SunStorecmdViewProduct_CPcatid=95146
--
Roman
Jim Dunham wrote, On 20.10.2009 09:26:
Do you have a backup policy, other than replicating the data from server1 to
server2?
Of course we have this data in the other places. Windows replicates it
with own means plus tape backups.
I wouldn't touch it though, if not performance issues. I
: 0
Host group : zsan-hg
Target group : tg-mon-exch-01
LUN : 2
Two views have the same LUN number, 2, although the GUIDs are different. Is this
a correct configuration?
Thank you,
Roman Naumenko
ro...@frontline.ca
Ok, thanks - the target behaviour is clear now. Nothing unexpected happened in
terms of connectivity.
Still, what could make a LUN become unregistered?
Is this because of an error at the ZFS level (no errors on the zpool, though), or did
something fail in COMSTAR?
--
Roman Naumenko
ro...@frontline.ca
I have a problem on a storage server related to the network.
This is on OpenSolaris 121b; on the other side is a Windows file server accessing
it through iSCSI. When a user is hammering the storage server by copying files back
and forth, other users connected to the Windows file server are experiencing
Hi Greg,
Your setup is not very clear from the description... But you should probably
try snapshots for data synchronization.
The iSCSI targets have to be recreated on the second server in any case.
--
Roman Naumenko
ro...@naumenko.ca
question:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6878539
--
Roman Naumenko
ro...@naumenko.ca
Hello,
I'm having some issues with iSCSI target performance.
Recently I made a 1TB ZVOL and mounted it on Windows 7
Ultimate (NTFS) with Microsoft's iSCSI initiator. But
the performance, in layman's terms, just sucks.
Version is SunOS solaris 5.11 snv_123 i86pc i386
i86pc
Athlon64 2800+,
Thanks. How much memory should I have? This machine
won't take more than 3GB; the best I can get is one that
takes up to 8GB. Anyway, I'm the only user, is it
really necessary?
It depends on your needs. If you are OK with the current performance, you can
go ahead, I guess.
Reads would do better