as the topic says, this uses literals when the zfs command is asked
to list stuff in script mode (i.e., zfs list -H). this is useful if
you want the sizes of things as raw values.
i have no onnv systems to build this on, so i am unable to demonstrate
this, but i would really like to see this (or
A better solution (one that wouldn't break backwards compatibility)
would be to add the '-p' option (parseable output) from 'zfs get' to the
'zfs list' command as well.
- Eric
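A rough sketch of the difference (the first command works with the existing
'zfs get -p'; the second assumes the proposed '-p' for 'zfs list' and is
hypothetical at this point):
# zfs get -Hp used,available pool1
# zfs list -Hp -o name,used,available pool1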
On Wed, Oct 01, 2008 at 03:59:27PM +1000, David Gwynne wrote:
as the topic says, this uses literals when the zfs
Hello all,
in the setup I try to build I want to have snapshots of a file
system replicated from host replsource to host repltarget and
from there NFS-mounted on host nfsclient to access snapshots
directly:
replsource# zfs create pool1/nfsw
replsource# mkdir /pool1/nfsw/lala
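The replication step itself might look roughly like this (a sketch only,
assuming ssh access from replsource to repltarget and a pool named pool1 on
both hosts; the snapshot name is made up):
replsource# zfs snapshot pool1/nfsw@snap1
replsource# zfs send pool1/nfsw@snap1 | ssh repltarget zfs receive -F pool1/nfsw
repltarget# zfs set sharenfs=on pool1/nfsw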
Jürgen,
In a snoop I see that, when the access(2) fails, the nfsclient gets
a Stale NFS file handle response, which gets translated to an
ENOENT.
What happens if you use the noac NFS mount option on the client?
I wouldn't recommend using it in production environments unless you really need
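For a quick test, a mount along these lines (the mount point is made up; the
server name is taken from the setup above):
nfsclient# mount -o vers=3,noac repltarget:/pool1/nfsw /mnt/nfsw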
Hi,
I am running snv90. I have a pool that is 6x1TB, config raidz. After a computer
crash (root is NOT on the pool - only data) the pool showed FAULTED status.
I exported and tried to reimport it, with the result as follows:
# zpool import
pool: ztank
id:
On Tue, 30 Sep 2008, Robert Thurlow wrote:
Modern NFS runs over a TCP connection, which includes its own data
validation. This surely helps.
Less than we'd sometimes like :-) The TCP checksum isn't
very strong, and we've seen corruption tied to a broken
router, where the Ethernet
[EMAIL PROTECTED] wrote:
On Tue, 30 Sep 2008, Robert Thurlow wrote:
Modern NFS runs over a TCP connection, which includes its own data
validation. This surely helps.
Less than we'd sometimes like :-) The TCP checksum isn't
very strong, and we've seen corruption tied to a broken
router,
an update to the above: I tried to run zdb -e on the pool id and here's the
result:
# zdb -e 12125153257763159358
zdb: can't open 12125153257763159358: I/O error
NB zdb seems to recognize the ID because running it with an incorrect ID gives
me an error
# zdb -e 12125153257763159354
zdb: can't
On Wed, Oct 1, 2008 at 3:42 AM, Douglas R. Jones [EMAIL PROTECTED] wrote:
...
3) Next I created another file system called dpool/GroupWS/Integration. Its
mount point was inherited from GroupWS and is /mnt/zfs1/GroupWS/Integration.
Essentially I only allowed the new file system to inherit from
Tim [EMAIL PROTECTED] wrote:
Hmm ... well, there is a considerable price difference, so unless someone
says I'm horribly mistaken, I now want to go back to Barracuda ES 1TB 7200
drives. By the way, how many of those would saturate a single (non trunked)
Gig ethernet link ? Workload NFS
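Back of the envelope: a non-trunked gigabit link carries roughly 110-120 MB/s
of payload, and a 7200 rpm SATA drive can stream somewhere around 70-100 MB/s
sequentially, so two or three drives are enough to fill the pipe for streaming
reads. For small random NFS I/O each drive may only deliver a few MB/s, so
there the drive count is set by IOPS rather than link bandwidth.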
Louwtjie Burger [EMAIL PROTECTED] wrote:
Server: T5120 on 10 U5
Storage: Internal 8 drives on SAS HW RAID (R5)
Oracle: ZFS fs, recordsize=8K and atime=off
Tape: LTO-4 (half height) on SAS interface.
Dumping a large file from memory using tar to LTO yields 44 MB/s ... I
suspect the CPU
Bob Friesenhahn [EMAIL PROTECTED] wrote:
On Tue, 30 Sep 2008, BJ Quinn wrote:
True, but a search for zfs segmentation fault returns 500 bugs.
It's possible one of those is related to my issue, but it would take
all day to find out. If it's not flaky or unstable, I'd like to
try
Next stable (as in fedora or ubuntu releases) opensolaris version
will be 2008.11.
In my case I found 2008.05 is simply unusable (my
main interest is xen/xvm), but upgrading to the latest available build
with OS's pkg, (similar to apt-get) fixed the problem.
If you
installed the original OS
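The upgrade itself is roughly (assuming the stock opensolaris.org repository
is configured):
$ pfexec pkg refresh
$ pfexec pkg image-update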
On Wed, Oct 01, 2008 at 01:03:28AM +0200, Ahmed Kamal wrote:
Hmm ... well, there is a considerable price difference, so unless someone
says I'm horribly mistaken, I now want to go back to Barracuda ES 1TB 7200
drives. By the way, how many of those would saturate a single (non trunked)
Gig
Carson Gaspar [EMAIL PROTECTED] wrote:
Louwtjie Burger wrote:
Dumping a large file from memory using tar to LTO yields 44 MB/s ... I
suspect the CPU cannot push more since it's a single thread doing all the
work.
Dumping oracle db files from filesystem yields ~ 25 MB/s. The
David Magda [EMAIL PROTECTED] wrote:
On Sep 30, 2008, at 19:09, Tim wrote:
SAS has far greater performance, and if your workload is extremely
random,
will have a longer MTBF. SATA drives suffer badly on random
workloads.
Well, if you can probably afford more SATA drives for the
Toby Thain Wrote:
ZFS allows the architectural option of separate storage without losing end to
end protection, so the distinction is still important. Of course this means
ZFS itself runs on the application server, but so what?
The OP in question is not running his network clients on
Ian Collins wrote:
I think you'd be surprised how large an organisation can migrate most,
if not all, of their application servers to zones on one or two Thumpers.
Isn't that the reason for buying in server appliances?
Assuming that the application servers can coexist in the only 16GB
On Wed, Oct 1, 2008 at 8:52 AM, Brian Hechinger [EMAIL PROTECTED] wrote:
On Wed, Oct 01, 2008 at 01:03:28AM +0200, Ahmed Kamal wrote:
Hmm ... well, there is a considerable price difference, so unless someone
says I'm horribly mistaken, I now want to go back to Barracuda ES 1TB 7200
drives. By
Moore, Joe wrote:
Toby Thain Wrote:
ZFS allows the architectural option of separate storage without losing end
to end protection, so the distinction is still important. Of course this
means ZFS itself runs on the application server, but so what?
The OP in question is not running his
On Wed, Oct 1, 2008 at 9:34 AM, Moore, Joe [EMAIL PROTECTED] wrote:
Ian Collins wrote:
I think you'd be surprised how large an organisation can migrate most,
if not all, of their application servers to zones on one or two Thumpers.
Isn't that the reason for buying in server appliances?
[EMAIL PROTECTED] wrote
Linux does not implement stable kernel interfaces. It may be that there is
an intention to do so but I've seen problems on Linux resulting from
self-incompatibility on a regular basis.
To be precise, Linus tries hard to prevent ABI changes in the system
call interfaces
Darren J Moffat wrote:
Moore, Joe wrote:
Given the fact that NFS, as implemented in his client
systems, provides no end-to-end reliability, the only data
protection that ZFS has any control over is after the write()
is issued by the NFS server process.
NFS can provide on the wire
On 10/01/08 10:46, Al Hopper wrote:
On Wed, Oct 1, 2008 at 9:34 AM, Moore, Joe [EMAIL PROTECTED] wrote:
Ian Collins wrote:
I think you'd be surprised how large an organisation can migrate most,
if not all, of their application servers to zones on one or two Thumpers.
Isn't that the reason
On Tue, 30 Sep 2008, Al Hopper wrote:
I *suspect* that there might be something like a hash table that is
degenerating into a singly linked list as the root cause of this
issue. But this is only my WAG.
That seems to be a reasonable conclusion. BTFW that my million file
test directory uses
On Wed, 1 Oct 2008, Ian Collins wrote:
A million files in ZFS is no big deal:
But how similar were your file names?
The file names are like:
image.dpx[000]
image.dpx[001]
image.dpx[002]
image.dpx[003]
image.dpx[004]
.
.
.
So they will surely trip up Al Hopper's bad
On Wed, 1 Oct 2008, Tim wrote:
I think you'd be surprised how large an organisation can migrate most,
if not all, of their application servers to zones on one or two Thumpers.
Isn't that the reason for buying in server appliances?
I think you'd be surprised how quickly they'd be fired for
On Wed, 1 Oct 2008, Ram Sharma wrote:
So for storing 1 million MYISAM tables (MYISAM being a good performer when
it comes to not very large data) , I need to save 3 million data files in a
single folder on disk. This is the way MYISAM saves data.
I will never need to do an ls on this folder.
On Wed, Oct 1, 2008 at 9:18 AM, Joerg Schilling
[EMAIL PROTECTED] wrote:
David Magda [EMAIL PROTECTED] wrote:
On Sep 30, 2008, at 19:09, Tim wrote:
SAS has far greater performance, and if your workload is extremely
random,
will have a longer MTBF. SATA drives suffer badly on
On Wed, Oct 1, 2008 at 10:28 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Wed, 1 Oct 2008, Tim wrote:
I think you'd be surprised how large an organisation can migrate most,
if not all, of their application servers to zones on one or two Thumpers.
Isn't that the reason for buying in server
Ummm, no. SATA and SAS seek times are not even in the same universe. They
most definitely do not use the same mechanics inside. Whoever told you that
rubbish is an outright liar.
Which particular disks are you guys talking about?
I'm thinking you guys are talking about the same 3.5 w/
On Wed, Oct 1, 2008 at 11:20 AM, [EMAIL PROTECTED] wrote:
Ummm, no. SATA and SAS seek times are not even in the same universe. They
most definitely do not use the same mechanics inside. Whoever told you that
rubbish is an outright liar.
Which particular disks are you guys talking
Joerg Schilling wrote:
Carson Gaspar[EMAIL PROTECTED] wrote:
Louwtjie Burger wrote:
Dumping a large file from memory using tar to LTO yields 44 MB/s ... I
suspect the CPU cannot push more since it's a single thread doing all the
work.
Dumping oracle db files from filesystem yields ~ 25
On Wed, October 1, 2008 10:18, Joerg Schilling wrote:
SATA and SAS disks are usually based on the same drive mechanism. The seek
times are most likely identical.
Some SATA disks support tagged command queueing and others do not.
I would assume that there is no speed difference between SATA with
On Wed, 1 Oct 2008, Joerg Schilling wrote:
SATA and SAS disks are usually based on the same drive mechanism. The seek times
are most likely identical.
This must be some sort of urban legend. While the media composition
and drive chassis are similar, the rest of the product clearly differs.
The
pt == Peter Tribble [EMAIL PROTECTED] writes:
pt I think the term is mirror mounts.
he doesn't need them---he's using the traditional automounter, like we
all used to use before this newfangled mirror mounts baloney.
There were no mirror mounts with the old UFS NFSv3 setup that he
With much excitement I have been reading the new features coming into Solaris
10 in 10/08 and am eager to start playing with zfs root. However one thing
which struck me as strange and somewhat annoying is that it appears in the FAQs
and documentation that it's not possible to do a ZFS root
t == Tim [EMAIL PROTECTED] writes:
t So what would be that the application has to run on Solaris.
t And requires a LUN to function.
ITYM requires two LUNs, or else when your filesystem becomes corrupt
after a crash the sysadmin will get blamed for it. Maybe you can
deduplicate the
On Wed, Oct 1, 2008 at 11:53 AM, Ahmed Kamal
[EMAIL PROTECTED] wrote:
Thanks for all the opinions everyone, my current impression is:
- I do need as much RAM as I can afford (16GB look good enough for me)
Depends on both the workload, and the amount of storage behind it. From
your
On Tue, Sep 30, 2008 at 09:54:04PM -0400, Miles Nordin wrote:
ok, I get that S3 went down due to corruption, and that the network
checksums I mentioned failed to prevent the corruption. The missing
piece is: belief that the corruption occurred on the network rather
than somewhere else.
It was something we couldn't get into the release
due to insufficient resources. I'd like to see it
implemented in the future.
Lori
Adrian Saul wrote:
With much excitement I have been reading the new features coming into Solaris
10 in 10/08 and am eager to start playing with zfs root.
On Wed, 1 Oct 2008, [EMAIL PROTECTED] wrote:
To get the same storage capacity between SAS drives and SATA drives,
you'd probably have to put the SAS drives in a RAID-5/6/Z configuration to
be more space efficient. However by doing this you'd be losing spindles,
and therefore IOPS. With
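Purely as an illustration with made-up sizes: eight 1 TB SATA drives as four
mirror pairs give about 4 TB usable with random reads spread over all eight
spindles, while six 750 GB SAS drives in a single raidz give a similar
~3.75 TB usable but roughly one spindle's worth of random-read IOPS, since
each block is striped across the whole raidz vdev.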
Tim [EMAIL PROTECTED] wrote:
Ummm, no. SATA and SAS seek times are not even in the same universe. They
most definitely do not use the same mechanics inside. Whoever told you that
rubbish is an outright liar.
It is extremely unlikely that two drives from the same manufacturer and with the
Carson Gaspar [EMAIL PROTECTED] wrote:
Yes. Which is exactly what I was saying. The tar data might be more
compressible than the DB, thus be faster. Shall I draw you a picture, or
are you too busy shilling for star at every available opportunity?
If you never compared Sun tar speed with
cd == Casper Dik [EMAIL PROTECTED] writes:
cd The whole packet lives in the memory of the switch/router and
cd if that memory is broken the packet will be sent damaged.
that's true, but by algorithmically modifying the checksum to match
your ttl decrementing and MAC address
On Wed, Oct 01, 2008 at 01:12:08PM -0400, Miles Nordin wrote:
pt == Peter Tribble [EMAIL PROTECTED] writes:
pt I think the term is mirror mounts.
he doesn't need them---he's using the traditional automounter, like we
all used to use before this newfangled mirror mounts baloney.
Oh
On Wed, 1 Oct 2008, Joerg Schilling wrote:
Ummm, no. SATA and SAS seek times are not even in the same universe. They
most definitely do not use the same mechanics inside. Whoever told you that
rubbish is an outright liar.
It is extremely unlikely that two drives from the same manufacturer
Ahmed Kamal wrote:
Thanks for all the opinions everyone, my current impression is:
- I do need as much RAM as I can afford (16GB look good enough for me)
- SAS disks offer better IOPS and better MTBF than SATA. But SATA
offers enough performance for me (to saturate a gig link), and its
MTBF
Bob Friesenhahn [EMAIL PROTECTED] wrote:
On Wed, 1 Oct 2008, Joerg Schilling wrote:
SATA and SAS disks are usually based on the same drive mechanism. The seek times
are most likely identical.
This must be some sort of urban legend. While the media composition
and drive chassis are similar,
I'm using Neelakanth's arcstat tool to troubleshoot performance problems with a
ZFS filer we have, sharing home directories to a CentOS frontend Samba box.
Output shows an arc target size of 1G, which I find odd, since I haven't tuned
the arc, and the system has 4G of RAM. prstat -a tells me
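One quick way to see what the ARC thinks it is doing is to read the kstats
directly, e.g. (a sketch):
$ kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max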
Miles Nordin wrote:
sounds
like they are not good enough though, because unless this broken
router that Robert and Darren saw was doing NAT, yeah, it should not
have touched the TCP/UDP checksum.
I believe we proved that the problem bit flips were such
that the TCP checksum was the same, so
On Wed, Oct 01, 2008 at 12:22:56PM -0500, Tim wrote:
- This will mainly be used for NFS sharing. Everyone is saying it will have
bad performance. My question is, how bad is bad ? Is it worse than a
plain Linux server sharing NFS over 4 sata disks, using a crappy 3ware raid
card with
On Wed, Oct 01, 2008 at 01:30:45PM +0100, Peter Tribble wrote:
On Wed, Oct 1, 2008 at 3:42 AM, Douglas R. Jones [EMAIL PROTECTED] wrote:
Any ideas?
Well, I guess you're running Solaris 10 and not OpenSolaris/SXCE.
I think the term is mirror mounts. It works just fine on my SXCE boxes.
On Wed, Oct 1, 2008 at 12:51 PM, Joerg Schilling
[EMAIL PROTECTED] wrote:
Did you recently look at spec files from drive manufacturers?
If you look at drives in the same category, the difference between a SATA
and a
SAS disk is only the firmware and the way the drive mechanism has been
The problem could be in the zfs command or in the kernel. Run pstack on the
core dump and search the bug database for the functions it lists. If you can't
find a bug that matches your situation and your stack, file a new bug and
attach the core. If the engineers find a duplicate bug, they'll
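For example (the path to the core file is made up):
# pstack /var/core/core.zfs.1234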
On Wed, Oct 01, 2008 at 11:54:55AM -0600, Robert Thurlow wrote:
Miles Nordin wrote:
sounds
like they are not good enough though, because unless this broken
router that Robert and Darren saw was doing NAT, yeah, it should not
have touched the TCP/UDP checksum.
I believe we proved that
on the advice of Okana in the freenode.net #opensolaris channel I tried to run
the latest opensolaris livecd and import the pool. No luck; however, I
tried the trick in Lukas's post that allowed him to import the pool and I had a
beginning of luck.
By doing the mdb wizardry he indicated
Blake Irvin wrote:
I'm using Neelakanth's arcstat tool to troubleshoot performance problems with
a ZFS filer we have, sharing home directories to a CentOS frontend Samba box.
Output shows an arc target size of 1G, which I find odd, since I haven't
tuned the arc, and the system has 4G of
On Wed, 1 Oct 2008, Joerg Schilling wrote:
Did you recently look at spec files from drive manufacturers?
Yes.
If you look at drives in the same category, the difference between a
SATA and a
The problem is that these drives (SAS / SATA) are generally not in the
same category so your
I think I need to clarify a bit.
I'm wondering why arc size is staying so low, when I have 10 nfs clients and
about 75 smb clients accessing the store via resharing (on one of the 10 linux
nfs clients) of the zfs/nfs export. Or is it normal for the arc target and arc
size to match? Of note, I
Douglas R. Jones wrote:
4) I change the auto.ws map thusly:
Integration chekov:/mnt/zfs1/GroupWS/
Upgrades    chekov:/mnt/zfs1/GroupWS/
cstools chekov:/mnt/zfs1/GroupWS/
com chekov:/mnt/zfs1/GroupWS
This is standard NFS behavior (prior to NFSv4). Child
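If the intent is to see the contents of the child file system, one workaround
is to point the map key straight at it, e.g. (a sketch based on the paths
above):
Integration   chekov:/mnt/zfs1/GroupWS/Integration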
You might want to also try toggling the Nagle TCP setting to see if that helps
with your workload:
ndd -get /dev/tcp tcp_naglim_def
(save that value, default is 4095)
ndd -set /dev/tcp tcp_naglim_def 1
If no (or a negative) difference, set it back to the original value
ndd -set /dev/tcp
On Tue, Sep 30, 2008 at 11:09:05PM -0700, Eric Schrock wrote:
A better solution (one that wouldn't break backwards compatability)
would be to add the '-p' option (parseable output) from 'zfs get' to the
'zfs list' command as well.
yes, that makes sense to me.
thanks for pointing the -p out in
On Wed, 2008-10-01 at 11:54 -0600, Robert Thurlow wrote:
like they are not good enough though, because unless this broken
router that Robert and Darren saw was doing NAT, yeah, it should not
have touched the TCP/UDP checksum.
NAT was not involved.
I believe we proved that the problem bit
First of all let me thank each and everyone of you who helped with this issue.
Your responses were not only helpful but insightful as well. I have been around
Unix for a long time but only recently have I had the opportunity to do some
real world admin work (they laid off or had quit those who
Blake Irvin wrote:
I think I need to clarify a bit.
I'm wondering why arc size is staying so low, when I have 10 nfs
clients and about 75 smb clients accessing the store via resharing (on
one of the 10 linux nfs clients) of the zfs/nfs export. Or is it
normal for the arc target and arc
On 1-Oct-08, at 1:56 AM, Ram Sharma wrote:
Hi Guys,
Thanks for so many good comments. Perhaps I got even more than what
I asked for!
I am targeting 1 million users for my application. My DB will be on a
Solaris machine. And the reason I am making one table per user is
that it will be a
Carson Gaspar [EMAIL PROTECTED] writes:
Joerg Schilling wrote:
Carson Gaspar[EMAIL PROTECTED] wrote:
Louwtjie Burger wrote:
Dumping a large file from memory using tar to LTO yields 44 MB/s ... I
suspect the CPU cannot push more since it's a single thread doing all the
work.
Dumping
Tim tim at tcsac.net writes:
That's because the faster SATA drives cost just as much money as
their SAS counterparts for less performance and none of the
advantages SAS brings such as dual ports.
SAS drives are far from always being the best choice, because absolute IOPS or
throughput
Marc Bevand wrote:
Tim tim at tcsac.net writes:
That's because the faster SATA drives cost just as much money as
their SAS counterparts for less performance and none of the
advantages SAS brings such as dual ports.
SAS drives are far from always being the best choice, because