Ta on the comments
I'm going to use Jörg's 'star' to simulate some sequential backup workloads,
using different blocksizes, and see what the system does.
I'll save some output and post for people that might match the same config, now
or in the future.
To be clear though: (currently)
#tar
Server: T5120 on 10 U5
Storage: Internal 8 drives on SAS HW RAID (R5)
Oracle: ZFS fs, recordsize=8K and atime=off
Tape: LTO-4 (half height) on SAS interface.
Dumping a large file from memory using tar to LTO yields 44 MB/s ... I suspect
the CPU cannot push more since it's a single thread doing
describing this !?
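One way to probe the single-stream question is to sweep blocksizes and time each run; a minimal sketch (here /dev/null stands in for the tape drive, and the /dev/rmt/0n path mentioned in the comment is an assumption — substitute your own device):

```shell
# Sketch only: sweep blocksizes for a single sequential write stream.
# For a real measurement, point 'of=' at the tape device (e.g. /dev/rmt/0n,
# an assumed path) instead of /dev/null, and raise 'count' substantially.
for bs in 8k 64k 256k 1024k; do
  echo "blocksize: $bs"
  dd if=/dev/zero of=/dev/null bs=$bs count=64 2>&1 | tail -1
done
```

star takes a blocksize option in the same spirit, so the same sweep can be run against the real drive; timing each pass gives MB/s per blocksize.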
cheers,
TS
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Louwtjie Burger
Technical Specialist
Office Tel: +27 (021) 975 6434
Cell: +27 (0)83 457 2551
On 12/19/07, David Magda [EMAIL PROTECTED] wrote:
On Dec 18, 2007, at 12:23, Mike Gerdts wrote:
2) Database files - I'll lump redo logs, etc. in with this. In Oracle
RAC these must live on a shared-rw (e.g. clustered VxFS, NFS) file
system. ZFS does not do this.
If you can use
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   48.0    0.0 3424.6  0.0 35.0    0.0  728.9   0 100 c2t8d0
    0.0   60.0    0.0 4280.8  0.0 35.0    0.0  583.1   0 100 c2t9d0
    0.0   55.0    0.0 3938.2  0.0 35.0    0.0  636.1   0 100 c2t10d0
    0.0   56.0
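As a rough illustration, the combined write throughput (kw/s) across those three devices can be totalled with awk — the sample values below are copied from the output above:

```shell
# Sum column 4 (kw/s) of the iostat sample shown above.
iostat_sample='0.0 48.0 0.0 3424.6 0.0 35.0 0.0 728.9 0 100 c2t8d0
0.0 60.0 0.0 4280.8 0.0 35.0 0.0 583.1 0 100 c2t9d0
0.0 55.0 0.0 3938.2 0.0 35.0 0.0 636.1 0 100 c2t10d0'
echo "$iostat_sample" | awk '{ kw += $4 } END { printf "total kw/s: %.1f\n", kw }'
```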
The throughput when writing from a local disk to the
zpool is around 30MB/s, when writing from a client
Err.. sorry, the internal storage would be good old 1Gbit FCAL disks @
10K rpm. Still, not the fastest around ;)
On Dec 1, 2007 7:15 AM, Vincent Fox [EMAIL PROTECTED] wrote:
We will be using Cyrus to store mail on 2540 arrays.
We have chosen to build 5-disk RAID-5 LUNs in 2 arrays which are both
connected to same host, and mirror and stripe the LUNs. So a ZFS RAID-10 set
composed of 4 LUNs.
On Nov 28, 2007 12:58 AM, Justin Tuttle [EMAIL PROTECTED] wrote:
I have searched high and low and cannot find the answer. I read about how zfs
uses a Device ID for identification, usually provided by the firmware of the
device. So if a controller presents an (array) LUN with a unique device ID,
We are all anxiously awaiting data...
-- richard
Would it be worthwhile to build a test case:
- Build a postgresql database and import 1 000 000 (or more) lines of data.
- Run single and multiple large table-scan queries ... and watch the system
then,
- Update a column of each row in the
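A rough sketch of those steps (the database 'ztest', table 't', and column names are all made up for illustration; assumes a local PostgreSQL instance):

```shell
# Sketch only -- run against a scratch database, watch iostat meanwhile.
psql ztest <<'EOF'
CREATE TABLE t (id int PRIMARY KEY, payload text);
-- import 1 000 000 rows of data
INSERT INTO t SELECT g, md5(g::text) FROM generate_series(1, 1000000) g;
-- a large table-scan query
SELECT count(*) FROM t WHERE payload LIKE 'a%';
-- update a column of each row (forces COW rewrites under ZFS)
UPDATE t SET payload = md5(payload);
EOF
```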
Hi
After a clean database load a database would (should?) look like this,
if a random stab at the data is taken...
[8KB-m][8KB-n][8KB-o][8KB-p]...
The data should be fairly (100%) sequential in layout ... after some
days though, that same spot (using ZFS) would probably look like:
[8KB-m][
On 11/8/07, Mark Ashley [EMAIL PROTECTED] wrote:
Economics for one.
Yep, for sure ... it was a rhetorical question ;)
Why would I consider a new solution that is safe, fast enough, stable
.. easier to manage and lots cheaper?
Rephrase, Why would I NOT consider ...? :)
On 11/7/07, can you guess? [EMAIL PROTECTED] wrote:
Monday, November 5, 2007, 4:42:14 AM, you wrote:
cyg Having gotten a bit tired of the level of ZFS hype floating
I think a personal comment might help here ...
I spend a large part of my life doing system administration, and like
most
Hi
What is the impact of not aligning the DB blocksize (16K) with ZFS,
especially when it comes to random reads on a single HW RAID LUN?
How would one go about measuring the impact (if any) on the workload?
Thank you
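One back-of-the-envelope way to reason about the alignment question: if ZFS reads (and checksums) whole records, a 16K random read against a larger recordsize pulls in extra data. A rough sketch of that arithmetic — the simple whole-record model is my assumption, not a measurement:

```shell
# Rough model: data read per 16K random read at various ZFS recordsizes,
# assuming ZFS always reads and checksums a full record.
for rs in 8 16 32 64 128; do
  awk -v rs=$rs 'BEGIN {
    db = 16                          # DB blocksize in KB
    amp = (rs > db) ? rs / db : 1    # whole-record reads amplify the I/O
    printf "recordsize %3dK -> %.0fx data per 16K random read\n", rs, amp
  }'
done
```

Measuring the real impact would still mean running the workload twice (recordsize=16K vs. the default) and comparing iostat numbers, since caching and I/O coalescing will move the answer around.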
The regular mount/umount commands can only be used if you have the
filesystems present in /etc/vfstab. To create a zfs filesystem with
the idea of using mount/umount you must specify 'mountpoint=legacy'.
Now you can 'mount /d/d5' ... as per regular ufs.
Zpools don't need mountpoints ... ie
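A minimal sketch of that workflow (the pool and mountpoint names are made up):

```shell
# zfs create -o mountpoint=legacy pool/d5
# echo "pool/d5 - /d/d5 zfs - yes -" >> /etc/vfstab
# mount /d/d5
```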
Battery back-ed cache...
Interestingly enough, I've seen this configuration in production
(V880/SAP on Oracle) running Solaris 8 + Veritas Storage Foundation
(for the RAID-1 part).
Speed is good ... redundancy is good ... price is not (2/3).
Uptime 499 days :)
On 10/9/07, Wee Yeh Tan [EMAIL
Would it be easier to ...
1) Change ZFS code to enable a sort of directIO emulation and then run
various tests... or
2) Use Sun's performance team, which has all the experience in the
world when it comes to performing benchmarks on Solaris and Oracle ..
+ a Dtrace master to drill down and see
http://www.sun.com/servers/entry/x4200/optioncards.jsp#m2pcie
SG-XPCIE8SAS-E-Z ?
On 9/13/07, Thomas Liesner [EMAIL PROTECTED] wrote:
Hi all,
I am about to put together a one-month test configuration for a
graphics-production server (a prepress filer, that is). I would like to test zfs
on a
Have you tried to blank out c0t3d0s2 using dd and zeros?
Btw, zpool attach -f zpol01 ... won't work ;) (zpol01 = zpool01?)
On 8/21/07, Alderman, Sean [EMAIL PROTECTED] wrote:
I'm looking for ideas to resolve the problem below…
# zpool attach -f zpol01 c0t2d0 c0t3d0
invalid vdev
Hi
What is the general feeling for production readiness when it comes to:
ZFS
Oracle 10G R2
6140-type storage
OLTP workloads
1-3TB sizes
Running UFS with directio is stable, fast and one can sleep at night.
Can the same be said for zfs at this moment?
Should one hold out for Solaris 10 U4? (I
Roshan Perera writes:
Hi all,
I am after some help/feedback to the subject issue explained below.
We are in the process of migrating a big DB2 database from a
6900 24 x 200MHz CPU's with Veritas FS 8TB of storage Solaris 8 to
25K 12 CPU dual core x 1800Mhz with ZFS 8TB storage
Hi there
I know the above mentioned kit (2530) is new, but has anybody tried a
direct attached SAS setup using zfs? (and the Sun SG-XPCIESAS-E-Z
card, 3Gb PCI-E SAS 8-Port Host Adapter, RoHS:Y - which is the
preferred HBA, I suppose)
Did it work correctly?
Thank you
A good place to start is: http://www.opensolaris.org/os/community/zfs/
Have a look at:
http://www.opensolaris.org/os/community/zfs/docs/
as well as
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#
Create some files, which you can use as disks within zfs and demo to
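For example, a throwaway pool built on files (a sketch; the 'demo' pool name and file paths are made up):

```shell
# mkfile 64m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
# zpool create demo raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
# zpool status demo
# zpool destroy demo    # clean up when the demo is done
```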
HW RAID can offload some I/O bandwidth from the system, but new systems,
like Thumper, should have more than enough bandwidth, so why bother with
HW RAID?
*devils advocate mode = on*
Why bother you say...
I'll ask the StorageTek division this, next time they come round
asking (begging?) me
On 5/22/07, Pål Baltzersen [EMAIL PROTECTED] wrote:
What if your HW-RAID controller dies? In, say, 2 years or more..
What will read your disks as a configured RAID? Do you know how to (re)configure the
controller or restore the config without destroying your data? Do you know for sure
that a
I think it's also important to note _how_ one measures performance
(which is black magic at the best of times).
I personally like to see averages since doing #iostat -xnz 10 doesn't
tell me anything really. Since zfs likes to bundle and flush I want
my (very expensive ;) Sun storage to give me
Greetings...
Although I've not tried to directly connect a 6140 JBOD unit to a
host, I've noticed that the JBOD's disk drives do not online on their
own.
Without the controller unit activated, the drives continue to flash as
if waiting to online... when the hardware controller switches on it
http://docs.sun.com/source/819-0139/index.html
On 2/17/07, Vikash Gupta [EMAIL PROTECTED] wrote:
Hi,
I just deployed ZFS on a SAN-attached disk array and it's working fine.
How do I get the dual-pathing advantage of the disk (like DMP in Veritas)?
Can someone point me to the correct doc and setup?
will try to stick to what I've seen at clients in terms of db sizes, users, type of app, etc.
On 11/4/06, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Hi Louwtjie,
Are you running FC or SATA-II disks in the 6140? How many spindles too?
Best Regards,
Jason
On 11/3/06, Louwtjie Burger [EMAIL PROTECTED] wrote
Hi there
I'm busy with some tests on the above hardware and will post some scores soon.
For those that do _not_ have the above available for tests, I'm open to
suggestions on potential configs that I could run for you.
Pop me a mail if you want something specific _or_ you have suggestions
What are the major differences between the first zfs shipped in 06/06 Solaris
10, compared to the latest builds of OpenSolaris?
Will there be any major functionality released to 06/06 Solaris zfs via patches?
Will major zfs updates only be integrated into Solaris with the regular release
Hi there
Did a backup/restore on TSM, works fine.
This message posted from opensolaris.org
No ACL's ...
Hi there
Has any consideration been given to this feature...?
I would also agree that this will not only be a testing feature, but will
find its way into production.
It would probably work on the same principle as swap -a and swap -d ;) Just a
little bit more complex.