After working with Sanjeev, and putting a bunch of timing statements
throughout the code, it turns out that file writes are NOT the bottleneck, as
one would assume.
It is actually reading the file into a byte buffer that is the culprit.
Specifically, this Java statement:
byteBuffer =
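As a cross-check that doesn't require instrumenting the code, a dtrace
one-liner along these lines should show where the read time goes (the
execname is a guess for the app server's process; adjust to taste):

# dtrace -n 'syscall::read:entry /execname == "java"/ { self->ts = timestamp; }
  syscall::read:return /self->ts/ { @["read ns"] = quantize(timestamp - self->ts); self->ts = 0; }'

Ctrl-C prints a latency distribution per read(2) call.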
It does. The file size is limited to the original creation size, which is 65k
for files with 1 data sample.
Unfortunately, I have zero experience with dtrace and only a little with truss.
I'm relying on the dtrace scripts from people on this thread to get by for now!
I ran this dtrace script and got no output. Any ideas?
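Before debugging the script itself, it may be worth confirming the framework
works at all with a known-good one-liner, e.g.:

# dtrace -n 'syscall::write:entry { @[execname] = count(); }'

If Ctrl-C prints per-process write counts, dtrace is fine and the problem is
in the script's probes or predicates.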
We are going to get a 6120 for this temporarily. If all goes well, we are
going to move to a 6140 SAN solution.
Hi Daniel. I take it you are an RRD4J user?
I didn't see anything in the performance issues area that would help. Please
let me know if I'm missing something:
- The default of RRD4J is to use the NIO backend, so that is already in place.
- Pooling won't help because there is almost never a time
The other thing to keep in mind is that the tunables like compression and
recsize only affect newly written blocks. If you have a bunch of data that
was already laid down on disk and then you change the tunable, this will only
cause new blocks to have the new size. If you experiment with
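Concretely, with an illustrative dataset name, something like:

# zfs set recordsize=8k pool1/rrd
# zfs get recordsize pool1/rrd

only changes the block size for data written afterwards; existing RRD files
keep the block size they were created with until they are rewritten (e.g.
copied to a new file).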
I just installed nv82, so we'll see how that goes. I'm going to try the
recordsize idea above as well.
A note about UFS: I was told by our local admin guru that ZFS turns on the
write cache for disks, which is something that a UFS file system should not
have turned on, so if I convert the
Unfortunately, I don't know the record size of the writes. Is it as simple as
looking at the size of a file, before and after a client request, and noting
the difference in size? This is binary data, so I don't know if that makes a
difference, but the average write size is a lot smaller than
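Rather than inferring the write size from file growth, one way to see it
directly is a dtrace aggregation on the write syscall (the execname is a
guess; arg2 is the byte count passed to write(2)):

# dtrace -n 'syscall::write:entry /execname == "java"/ { @["write bytes"] = quantize(arg2); }'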
To avoid making multiple posts, I'll just write everything here:
- Moving to nv82 did not seem to do anything, so it doesn't look like fsync
was the issue.
- Disabling the ZIL didn't do anything either.
- Still playing with 'recsize' values, but it doesn't seem to be doing
much... I don't think I have a
Slight correction: 'recsize' must be a power of 2, so it would be 8192.
One thing I just observed is that the initial file size is 65796 bytes. When
it gets an update, the file size remains at 65796.
Is there a minimum file size?
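For what it's worth, ls -l only reports the logical length, which an in-place
update won't change; RRD-style files are preallocated at creation and updated
in place, so a constant size is expected rather than a ZFS minimum. Comparing
logical and allocated size (file name is illustrative) can confirm the file
is still being written:

# ls -l sample.rrd
# du -k sample.rrd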
RRD4J isn't a DB, per se, so it doesn't really have a record size. In fact, I
don't even know whether the data written to the binary file is contiguous, so
the amount written may not directly correlate to a proper record size.
I did run your command and found the size patterns
I disabled file prefetch and there was no effect.
Here are some performance numbers. Note that, when the application server used
a ZFS file system to save its data, the transaction took TWICE as long. For
some reason, though, iostat is showing 5x as much writing to the physical
disks.
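A reasonable way to watch that at the device level is the extended iostat
view (the 5-second interval is illustrative):

# iostat -xnz 5

-x gives the extended statistics, -n uses descriptive device names, and -z
suppresses idle devices.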
It is a striped/mirror:
# zpool status
        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
          mirror
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS and have an
application server which is reading/writing to hundreds of thousands of files
on it, thousands of files at a time.
If 1 client uses the app server, the transaction
Some more information about the system. NOTE: CPU utilization never goes above
10%.
Sun Fire v40z
4 x 2.4 GHz proc
8 GB memory
3 x 146 GB Seagate Drives (10k RPM)
1 x 146 GB Fujitsu Drive (10k RPM)
Okay, so back to this. What's the best way of getting per-user usage of a ZFS
file system?
This was asked before, but was not responded to. Is there a ZFS
equivalent to the 'quot' command?
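As far as I know there is no direct quot equivalent; the usual suggestion is
to give each user their own dataset, at which point per-user usage falls out
of zfs list (the dataset name is illustrative):

# zfs list -r -o name,used pool1/home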
I'm having trouble finding information on any hooks into ZFS. Is there
information on a ZFS API so I can access ZFS information directly as opposed to
having to constantly parse 'zpool' and 'zfs' command output?
Fretts-Saxton [EMAIL PROTECTED] wrote:
> I'm having trouble finding information on any hooks into ZFS. Is
> there information on a ZFS API so I can access ZFS information
> directly as opposed to having to constantly parse 'zpool' and 'zfs'
> command output?
libzfs: http://cvs.opensolaris.org/source
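For shell scripts in the meantime, the parsing pain can also be reduced with
the -H (no headers, tab-delimited) and -o options, which are intended for
scripted use, e.g.:

# zfs list -H -o name,used,avail
# zfs get -H -o value used pool1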