the system if an OS disk fails.
Once Illumos is better supported on the R720 and the PERC H310, I plan to get
rid of the hypervisor silliness and run Illumos on bare metal.
-Greg
Sent from my iPhone
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
at
NFSv4, but I'm not holding my breath.
--
Greg Mason
HPC Administrator
Michigan State University
Institute for Cyber Enabled Research
High Performance Computing Center
web: www.icer.msu.edu
email: gma...@msu.edu
On Wed, Jun 9, 2010 at 8:17 PM, devsk funt...@yahoo.com wrote:
$ swap -s
total: 473164k bytes allocated + 388916k reserved = 862080k used, 6062060k available
$ swap -l
swapfile             dev    swaplo   blocks     free
/dev/dsk/c6t0d0s1    215,1       8 12594952 12594952
Can someone
Hey Scott,
Thanks for the information. I doubt I can drop that kind of cash, so it's
back to getting Bacula working!
Thanks again,
Greg
--
This message posted from opensolaris.org
Hey Miles,
Do you have any idea whether there is a way to back up a zvol in the manner
you describe with Bacula? Is dd a safe way to do this, or are there better
methods? Otherwise I will just use dd.
Thanks!
Greg
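For what it's worth, here is a minimal sketch of the snapshot-then-dd approach, so Bacula just sees an ordinary file. The pool, zvol, and output path are invented, and the `run` flag keeps everything inert until you flip it on the actual box:

```shell
# Sketch only: dump a zvol snapshot to a file for Bacula to pick up.
# All names are hypothetical; set run=true on the real server.
run=false
vol="tank/vol01"
snap="$vol@bacula"
img="/var/backup/vol01.img"
if $run; then
    zfs snapshot "$snap"
    # zvol snapshots show up read-only under /dev/zvol/dsk/
    dd if="/dev/zvol/dsk/$snap" of="$img" bs=1048576
    zfs destroy "$snap"
fi
echo "$snap"
```

Snapshotting first means dd reads a frozen image rather than a moving target.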
Yes, it would; however, we only have the restore/verify portion, unless of
course I am overlooking something.
Thanks,
Greg
Thank you for such a thorough look into my issue. As you said, I guess I am
down to backing up to a zvol and then backing that up to tape. Has anyone
tried this solution? I would be very interested to find out. Does anyone have
any other solutions?
Thanks!
Greg
it is a lot of questions, but I thought the
solution would work perfectly in my environment.
Thanks,
Greg
will then be written to tape with Bacula. I hope I am
posting this in the correct place.
Thanks,
Greg
and we are up and running. The next issue is then
backing this all up to tape and making it so that it is not impossible to
recover if people do their standard boneheaded things. Does anyone have any
ideas on how to do this? I was first thinking rsync or zfs send/receive.
Thanks,
Greg
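A rough sketch of the zfs send/receive route; the dataset, snapshot, and host names are all hypothetical, and the `run` guard keeps the commands inert until you enable it on the real host:

```shell
# Sketch only: incremental replication with zfs send/receive.
run=false
fs="tank/home"
prev="$fs@backup-prev"               # assumed to exist from the last run
next="$fs@backup-$(date +%Y%m%d)"
if $run; then
    zfs snapshot "$next"
    # -i sends only the blocks that changed since $prev
    zfs send -i "$prev" "$next" | ssh backuphost zfs receive -F backup/home
fi
echo "$next"
```

Keeping the previous snapshot around on both sides is what makes the `-i` incremental possible.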
Hello all,
I am having a problem: when I do a zfs promote or a zfs rollback, I get a
"dataset is busy" error. I am now doing an image update to see if there was an
issue with the image I have. Does anyone have an idea how to fix this?
Thanks,
Greg
This also occurs when I do a zfs destroy.
Thanks!
I have tried to unmount the ZFS volume and remount it; however, this does not
help.
updated it but again to no avail. If anyone has any ideas it would
be helpful!
Thanks!
Greg
workload or not. I didn't know about
this script at the time of our testing, so it ended up being some trial
and error, running various tests on different hardware setups (which
means creating and destroying quite a few pools).
-Greg
Jorgen Lundman wrote:
Does un-taring something count
What about the bug where removing a slog is not possible? What if the slog
fails? Is there a plan for such a situation (the pool becomes inaccessible in this case)?
You can zpool replace a bad slog device now.
-Greg
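For anyone landing here later, the replace itself is a one-liner; this sketch uses made-up pool and device names, with a `run` flag so nothing executes until you enable it:

```shell
# Sketch only: swap a failed slog for a new device with zpool replace.
run=false
pool="tank"
bad="c1t4d0"
new="c1t5d0"
if $run; then
    zpool status "$pool"                 # confirm which log device faulted
    zpool replace "$pool" "$bad" "$new"
fi
echo "zpool replace $pool $bad $new"
```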
wiki:
https://wiki.hpcc.msu.edu/display/Issues/Known+Issues, under Home
Directory file system.
-Greg
--
Greg Mason
System Administrator
High Performance Computing Center
Michigan State University
The SLC device (Intel
X25-E) will last quite a bit longer than the MLC device.
-Greg
--
Greg Mason
System Administrator
Michigan State University
High Performance Computing Center
quotas, as we're
having problems with NFSv4 on our clients (SLES 10 SP2). We'd like to be
able to use NFSv3 for now (one large ZFS filesystem, with user quotas
set), until the flaws with our Linux NFS clients can be addressed.
--
Greg Mason
System Administrator
Michigan State University
High
Thanks for the link Richard,
I guess the next question is, how safe would it be to run snv_114 in
production? Running something that would be technically unsupported
makes a few folks here understandably nervous...
-Greg
On Thu, 2009-07-09 at 10:13 -0700, Richard Elling wrote:
Greg Mason wrote
backup wise or are those snapshots useless and I am up to
last week.
Thanks for helping!
Greg
Greg
this method of
replacing a slog, and the zpool is imported on boot, like nothing
happened, even though the physical hardware has changed.
A question I have is, does zpool replace now work for slog devices as
of snv_111b?
-Greg
On Fri, 2009-06-05 at 20:57 -0700, Paul B. Henson wrote:
My research
-10218245-64.html?tag=mncol
It should also be noted that the Intel X25-M != the Intel X25-E. The
X25-E hasn't had any of the performance and fragmentation issues.
The X25-E is an SLC SSD and the X25-M is an MLC SSD, hence the latter's more
complex firmware.
-Greg
devices (folks
typically use SSDs or very fast 15k RPM SAS drives for this).
-Greg
Francois wrote:
Hello list,
What would be the best zpool configuration for a cache/proxy server
(probably based on squid)?
In other words, with which zpool configuration could I expect the best
read performance
compression algorithm isn't gzip, so you aren't
going to get the greatest compression possible, but it is quite fast.
Depending on the type of data, it may not compress well at all, leading
ZFS to store that data completely uncompressed.
-Greg
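To make that concrete, here is a sketch of turning the default (non-gzip) compression on and checking what it actually buys you. The dataset name is hypothetical, and the `run` flag keeps the commands inert:

```shell
# Sketch only: enable the fast default compression and inspect the ratio.
run=false
fs="tank/data"
prop="compression=on"     # lzjb on builds of this vintage
if $run; then
    zfs set "$prop" "$fs"
    zfs get compressratio "$fs"   # shows what the data actually compresses to
fi
echo "zfs set $prop $fs"
```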
All good info, thanks. Still, one thing doesn't quite
in-place, and in production.
basically, what I'm thinking is:
zpool remove mypool <list of devices/vdevs>
Allow time for ZFS to vacate the vdev(s), and then light up the "OK to
remove" light on each evacuated disk.
-Greg
Blake Irvin wrote:
Shrinking pools would also solve the right-sizing dilemma
it is possible
to have issues. Likewise with database systems.
Regards,
Greg
the discussion of database
recovery into the discussion seems to me to only be increasing the FUD
factor.
Regards,
Greg
seldom turn it on unless
I'm doing heavy I/O to a USB hard drive, otherwise the performance
difference is just not that great.
Regards,
Greg
, in
one instance.
I'm trying to optimize our machines for a write-heavy environment, as
our users will undoubtedly hit this limitation of the machines.
-Greg
Are you sure that the write cache is back on after restart?
Yes, I've checked with format -e, on each drive.
When disabling the write cache with format, it also gives a warning
stating this is the case.
What I'm looking for is a faster way to do this than format -e -d <disk>
-f <script>, for all
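One way to script that loop, sketched under the assumption that the format menu sequence below is right; verify it interactively on a single disk before turning it loose, and note the disk-list parsing is also an assumption:

```shell
# Sketch only: enable the write cache on every disk via a format(1M)
# command file. Menu sequence and disk-list parsing are assumptions.
run=false
cmdfile=/tmp/wce.cmd
printf 'cache\nwrite_cache\nenable\ny\nquit\nquit\nquit\n' > "$cmdfile"
if $run; then
    # format with stdin closed prints the numbered disk list and exits
    for disk in $(format </dev/null 2>/dev/null | awk '/^ *[0-9]+\./ {print $2}'); do
        format -e -d "$disk" -f "$cmdfile"
    done
fi
```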
but not
writes. If you enable them you will lose data if you pull the stick out
before all the data is written. This is the type of safety measure that
needs to be implemented in ZFS if it is to support the average user
instead of just the IT professionals.
Regards,
Greg
, something makes its way into the write cache, then
the cache is disabled. Does this mean the write cache is flushed to disk
when the cache is disabled? If so, then I guess it's less critical when
it happens in the bootup process or if it's permanent...
-Greg
A Darren Dunham wrote:
On Thu, Feb
about the first one - Albert Einstein
Regards,
Greg
in an X4540?
Thanks,
-Greg
Tony,
I believe you want to use zfs recv -F to force a rollback on the
receiving side.
I'm wondering if your ls is updating the atime somewhere, which would
indeed be a change...
-Greg
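A sketch combining both suggestions; the dataset and snapshot names are invented, and the `run` guard keeps the commands from executing as-is:

```shell
# Sketch only: disable atime on the receive side and force-rollback
# with receive -F.
run=false
src="tank/data"
dst="backup/data"
if $run; then
    zfs set atime=off "$dst"   # keeps ls from dirtying the received fs
    # -F rolls dst back to its latest snapshot before applying the stream
    zfs send -i "$src@a" "$src@b" | zfs receive -F "$dst"
fi
echo "$src -> $dst"
```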
, so YMMV.
Fishworks does this. They use an SSD both for the read cache as well as
the ZIL.
-Greg
Orvar Korvar wrote:
So are there no guidelines on how to add an SSD disk as a home user? Which is
the best SSD disk to add? What percentage improvements are typical? Or, will
a home user
Orvar Korvar wrote:
Ok. Just to confirm: a modern disk already has some spare capacity which is
not normally utilized by ZFS, UFS, etc. If the spare capacity is exhausted,
then the disk should be replaced.
Yup, that is the case.
A Linux NFS file server, with a few terabytes of fibre-attached disk,
using XFS.
I'm trying to get these Thors to perform at least as well as the current
setup. A performance hit is very hard to explain to our users.
Perhaps I missed something, but what was your previous setup?
I.e. what did
I should also add that this "creating many small files" issue is the
ONLY case where the Thors are performing poorly, which is why I'm
focusing on it.
Greg Mason wrote:
A Linux NFS file server, with a few terabytes of fibre-attached disk,
using XFS.
I'm trying to get these Thors to perform
Jim Mauro wrote:
This problem only manifests itself when dealing with many small files
over NFS. There is no throughput problem with the network.
But there could be a _latency_ issue with the network.
If there was a latency issue, we would see such a problem with our
existing file server as well, which we do not. We'd also have much
greater problems than just file server performance.
So, like I've said, we've ruled out the network as an issue.
I should also add that I've tested these
in such a
situation? Would I simply just risk losing that in-play data, or could
more serious things happen? I know disabling the ZIL is an Extremely Bad
Idea, but I need to tell people exactly why...
-Greg
Jim Mauro wrote:
You have SSD's for the ZIL (logzilla) enabled, and ZIL IO
is what is hurting your
How were you running this test?
were you running it locally on the machine, or were you running it over
something like NFS?
What is the rest of your storage like? just direct-attached (SAS or
SATA, for example) disks, or are you using a higher-end RAID controller?
-Greg
kristof wrote
is disabling all write caches, and disabling
the cache flushing.
What would this mean for the safety of data in the pool?
And, would this even do anything to address the performance issue?
-Greg
on this issue, I've ruled out the network as an
issue, as well as the NFS clients. I've narrowed my particular
performance issue down to the ZIL, and how well ZFS plays with NFS.
-Greg
Jim Mauro wrote:
Multiple Thors (more than 2?), with performance problems.
Maybe it's the common denominator
. For the 7210 (which is basically a Sun Fire X4540),
that gives you 46 disks and 2 SSDs.
-Greg
Bob Friesenhahn wrote:
On Thu, 22 Jan 2009, Ross wrote:
However, now I've written that, Sun uses SATA (SAS?) SSDs in their
high-end Fishworks storage, so I guess it definitely works for some
known technical issues with using an SSD in an X4540?
-Greg
device?
And, yes, I already know that turning off the ZIL is a Really Bad Idea.
We do, however, need to provide our users with a certain level of
performance, and what we've got with the ZIL on the pool is completely
unacceptable.
Thanks for any pointers you may have...
--
Greg Mason
Systems
The current solution we are considering is disabling the cache
flushing (as per a previous response in this thread), and adding one
or two SSD log devices, as this is similar to the Sun storage
appliances based on the Thor. Thoughts?
-Greg
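A sketch of the slog addition under discussion; the device names are placeholders, and mirroring the pair is my own hedge rather than something established in this thread. The `run` flag keeps it from executing:

```shell
# Sketch only: add two SSDs as a mirrored slog.
run=false
pool="tank"
if $run; then
    zpool add "$pool" log mirror c2t0d0 c2t1d0
    zpool status "$pool"   # the new log vdev should appear under "logs"
fi
echo "zpool add $pool log mirror c2t0d0 c2t1d0"
```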
On Jan 19, 2009, at 6:24 PM, Richard Elling wrote:
We took
Good idea. Thor has a CF slot, too, if you can find a high speed
CF card.
-- richard
We're already using the CF slot for the OS. We haven't really found
any CF cards that would be fast enough anyways :)
(snv_100).
Another use I've seen is using zfs-auto-snapshot to take and manage
snapshots on both ends, using rsync to replicate the data, but that's
less than ideal for most folks...
-Greg
Ian Mather wrote:
Fairly new to ZFS. I am looking to replicate data between two thumper boxes.
Found
Perhaps I misunderstand, but the below issues are all based on Nevada,
not Solaris 10.
Nevada isn't production code. For real ZFS testing, you must use a
production release, currently Solaris 10 (update 5, soon to be update 6).
In the last 2 years, I've stored everything in my environment
It would be a manual process. As with any arbitrary name, it's a useful
tag, not much more.
James C. McPherson wrote:
Gregory Shaw wrote:
Hi. I'd like to request a feature be added to zfs. Currently, on
SAN attached disk, zpool shows up with a big WWN for the disk. If
ZFS (or
James C. McPherson wrote:
Bill Sommerfeld wrote:
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
How would you gather that information?
the tools to use would be dependant on the actual storage device in use.
luxadm for A5x00 and V8x0 internal storage, sccli