, $5/1024/1024, $NF
}'
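The snippet above is cut off, but the surviving fragment looks like the common awk idiom for converting a byte count (field 5, as in `ls -l` output) to megabytes. A self-contained sketch with made-up input (the file line here is hypothetical):

```shell
# Hypothetical ls -l style line; the awk program converts the size field
# (column 5) from bytes to megabytes and prints the last field (the name),
# mirroring the $5/1024/1024, $NF fragment above.
printf '%s\n' '-rw-r--r-- 1 u g 2097152 May 1 file.dat' |
  awk '{ printf "%.1f MB %s\n", $5/1024/1024, $NF }'
# prints: 2.0 MB file.dat
```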
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
+ the cost of four 1 GB DDR DIMMs. I suppose you
could mirror across a pair of them and still have a pretty fast, small
4 GB of space for less than $1k.
http://www.anandtech.com/storage/showdoc.aspx?i=2480
FWIW, google gives plenty of hits for "solid state disk terabyte".
Mike
--
Mike Gerdts
http
(optional)
o archive files
It seems as though if suitably motivated, additional information about
the desired configuration could be stored in one of the above
sections, either directly or as a result of scripts (e.g. derived
profiles in jumpstart).
Mike
--
Mike Gerdts
http
requires a
source tree checkout, learning docbook, etc., most would-be authors or
editors will be discouraged. Otherwise, I guess it just winds up in a
bunch of blogs that are really hard to find.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
, another 11 GB of disk is
used. At this rate, it doesn't take long to burn through a 73 GB
disk. However, if ZFS could de-duplicate the blocks, each patch
cycle would take up only a couple hundred megabytes. But I guess that
is off-topic. :)
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
against current quota
was part of the problem statement. My approach with rsync avoids this
but, as I said before, is an ugly hack because it doesn't use the
features of zfs.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 7/31/06, Bev Crair [EMAIL PROTECTED] wrote:
However, note the limitations on usage: 4 'user-data file systems'...
B.
And last I looked it was x86-only.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
sooner than later.
If running on sun4v, consider LDOMs when they are available (November?).
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
the various NDMP Internet drafts into RFC's seems
to be stalled. A quick search of existing Open Source NDMP
implementations doesn't turn up much. Do others on the list have more
insight into whether this has been considered?
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
On 8/21/06, Richard Elling - PAE [EMAIL PROTECTED] wrote:
I haven't done measurements of this in years, but... I'll wager that compression is memory bound, not CPU bound, for today's servers. A system with low latency and high bandwidth memory will perform well (UltraSPARC-T1). Threading may not help
On 8/26/06, Mike Gerdts [EMAIL PROTECTED] wrote:
FWIW, I saw the same backtrace on build 46 doing some weird stuff
documented at http://mgerdts.blogspot.com/. At the time I was booted
from cdrom media importing a pool that I had previously exported.
I got thinking... how can I outdo the ME
be an awesome feature to have in ZFS, even if
the de-duplication happens as a later pass similar to zfs scrub.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
other.
--
Mike Gerdts
http://mgerdts.blogspot.com/
mirroring just
isn't an option.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
that
it is up to ZFS to generate or manage the signature.
The nice thing about it is that so long as the private key is secret,
the signature stays with the file as it is moved, taken to tape, copied to
other file systems, etc., provided the file manipulation mechanisms
support extended attributes.
Mike
--
Mike
the original stays put. This could be done to refresh
non-production instances from production, to perform backups in such a
way that it doesn't put load on the production spindles, networks,
etc.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
the most?
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
that and use it for swap or whatever.
The original question was about using ZFS root on a T1000. /grub
looks suspiciously incompatible with the T1000 because it isn't x86.
I've heard rumors of bringing grub to sparc, but...
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
and little interest in creating very
complex command lines with many -x options.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
any problems with
this procedure.
However, I waited until someone else announced the features or lack
thereof found in S10 11/06. :)
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
, or
should I file one and stop complaining? :)
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
.
This may be a good place to look:
http://www.oracle.com/technology/deploy/availability/htdocs/xtts.htm
--
Mike Gerdts
http://mgerdts.blogspot.com/
bytes
close
The rrd file in question is 8.6 MB. There were 8 KB of reads and 5472
bytes of writes. This is one of the big wins of the current binary
rrd format over the original ASCII version that came with MRTG.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
as that is, ZFS promises to not corrupt my data and
to tell on others that do. ZFS cannot break that promise.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
?
If you have (or download) the latest installation DVD, look in the
/UpgradePatches (or similarly named) directory.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
of the fs that won't unmount.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
, not a careful read of all the
parts involved.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
no heritage with SAM-QFS.
http://www.oracle.com/technology/products/database/asm/index.html
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
be
able to use the parameters above to achieve what you are trying to do
regardless of which UNIXy file system is being used.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
,
(long long) zfs`arc.c_min / 1024/1024,
(long long) zfs`arc.c_max / 1024/1024,
(long long) zfs`arc.size / 1024/1024,
(long long) zfs`arc.c / 1024/1024);
}
--
Mike Gerdts
http://mgerdts.blogspot.com/
. There are a
couple folks out here still running sparc. Is there any news to
report related to the sparc variant ZFS boot?
--
Mike Gerdts
http://mgerdts.blogspot.com/
is, but 512 bytes at a time should be fine.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
but with just a different target or LUN
range.
--
Mike Gerdts
http://mgerdts.blogspot.com/
custom jumpstart profiles to rebuild
the system.
I would love to see flash archive content that is the result of zfs
send. Incrementals are easy to do so long as you keep the initial
(pristine) snapshot around that matches up exactly with the flar that
was initially applied.
Mike
--
Mike Gerdts
a files for cpio.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 5/13/07, Gael [EMAIL PROTECTED] wrote:
No, no find is running alas...
jumps8002:/root #ps -edf | grep 25751
root 27628 25751 0 11:58:30 pts/3 0:18 cpio -pdum
/tank/sol10u4b/wanboot/interim_dir
root 25751 25656 0 11:28:31 pts/3 0:00 /bin/sh
./setup_install_server -w
that is absolutely unacceptable practice.
The past week of inactivity is likely related to most of Sun in the US
being on mandatory vacation. Sun typically shuts down for the week
that contains July 4 and (I think) the week between Christmas and Jan
1.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
.
Is this something that is maybe worth spending a few more cycles on,
or is it likely broken from the beginning?
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 7/11/07, Darren J Moffat [EMAIL PROTECTED] wrote:
Mike Gerdts wrote:
Perhaps a better approach is to create a pseudo file system that looks like:
mntpt/pool
     /@@
     /@today
     /@yesterday
     /fs
        /@@
        /@2007-06-01
this is in the works. Most of my use cases for
ZFS involve use of clones. Lack of space-efficient backups and
especially restores makes me wait to use ZFS outside of the lab.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
that (I'm told) is
being worked on.
I only mention this to say that this type of problem is not restricted
to zfs boot.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
%3Amail.opensolaris.org+%28dedup+OR+%22de-duplication%22+OR+deduplication%29btnG=Google+Search
--
Mike Gerdts
http://mgerdts.blogspot.com/
and I think snv59:
panic - S10u4 backtrace is very different from snv*
--
Mike Gerdts
http://mgerdts.blogspot.com/
reset (panic, I believe) of the primary LDOM seems to have
caused the corruption in the guest LDOM. What was that about having
the redundancy as close to the consumer as possible? :)
--
Mike Gerdts
http://mgerdts.blogspot.com/
it with success (and
failures) in limited scope. I'm sure that with time the improvements
will come that make that scope increase dramatically, but for now it
is confined to the lab. :(
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
expensive - you would be charging quota to each user but
only storing one copy. Depending on the balance of CPU power vs. I/O
bandwidth, compressed zvols could be a real win, more than paying back
the space required to have a few snapshots around.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com
backups, etc. Pushing that out to desktop or laptop machines is not
really a good idea.
--
Mike Gerdts
http://mgerdts.blogspot.com/
-writes of
data (e.g. crypto rekey) to concentrate data that had become
scattered into contiguous space.
--
Mike Gerdts
http://mgerdts.blogspot.com/
writes could be batched, coalesced, and applied
in a journaled manner such that each batch fully applies or is rolled
back on the target. I haven't heard of this being done.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
does, but Snap Upgrade does.
http://opensolaris.org/os/project/caiman/Snap_Upgrade/
It is likely worth considering more of the roadmap when reading that page.
http://opensolaris.org/os/project/caiman/Roadmap/
--
Mike Gerdts
http://mgerdts.blogspot.com
to
administer the location mapping while providing transparency to the
end-users.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
still figuring out how to fix this other than moving all of my zones onto
UFS.
How about a dtrace script that changes the fstype that statvfs() returns
to say that it is ufs? :)
I bet someone comes along and says that isn't supported either...
--
Mike Gerdts
http://mgerdts.blogspot.com
in coordination with iSCSI.
<irony>
Oh, wait! What if the NAS device runs out of space while I'm
patching? Better rule out the thin provisioning capabilities of the
HDS storage that Sun sells as well.
</irony>
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 9/24/07, Paul B. Henson [EMAIL PROTECTED] wrote:
but checking the actual release notes shows no ZFS mention. 3.0.26 to
3.2.0? That seems an odd version bump...
3.0.x and before are GPLv2. 3.2.0 and later are GPLv3.
http://news.samba.org/announcements/samba_gplv3/
--
Mike Gerdts
http
+ screens[1] on the default-sized terminal window.
1. If you are in this situation, there is a good chance that the
formatting of df causes line folding or wrapping that doubles the
number of lines to 80+ screens of df output.
--
Mike Gerdts
http://mgerdts.blogspot.com
the importance of 2 a bit.
--
Mike Gerdts
http://mgerdts.blogspot.com/
that linked against the included
version of OpenSSL automatically gets to take advantage of the N2
crypto engine, so long as it is using one of the algorithms supported
by the N2 engine.
--
Mike Gerdts
http://mgerdts.blogspot.com/
).
Remember, marketing info is very high level; the devil, as always, is in
the code.
Yeah, I know. It's oftentimes difficult to find the right code even when
you know what you are looking for. When you don't know that you
should be fact-checking, the code rarely finds its way in front of
you.
--
Mike Gerdts
cheaper on systems with lower latency between CPUs.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 10/18/07, Gary Mills [EMAIL PROTECTED] wrote:
What's the command to show cross calls?
mpstat will show it on a system basis.
xcallsbypid.d from the DTraceToolkit (ask google) will tell you which
PID is responsible.
--
Mike Gerdts
http://mgerdts.blogspot.com
--
Mike
- mine was SPARC) to see if it
addresses your problem.
--
Mike Gerdts
http://mgerdts.blogspot.com/
of zfs are using something that does something along the
lines of
while readdir ; do
    open file
    read from file
    write to backup stream
    close file
done
Since files are unlikely to be on disk in a contiguous manner, this
looks like a random read operation to me.
Am I wrong?
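The traversal above can be sketched as a runnable shell demo (the directory and files below are throwaway stand-ins created on the fly). The point is that each file is opened and read in full before the next one, so unless files happen to be laid out contiguously on disk, the aggregate I/O pattern looks random:

```shell
# Simulate the naive per-file backup traversal described above.
# The source tree and output stream are temporary stand-ins.
src=$(mktemp -d)
printf 'data-A' > "$src/a"
printf 'data-B' > "$src/b"
stream=$(mktemp)
find "$src" -type f | sort | while IFS= read -r f; do
    cat "$f"              # open file, sequential read, close file
done > "$stream"
wc -c < "$stream"         # total bytes streamed (12 here)
```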
--
Mike
df.xpg4
df.cdf.po df.xcl df.xpg4.o
It looks to me as though df becomes /usr/bin/df and df.xpg4 becomes
/usr/xpg4/bin/df.
--
Mike Gerdts
http://mgerdts.blogspot.com/
.
Also... since there is nothing zfs-specific here, opensolaris-code may
be a more appropriate forum.
--
Mike Gerdts
http://mgerdts.blogspot.com/
and likely more space in production use than
ZFS.
I think that ZFS holds a lot of promise for shared-nothing database
clusters, such as is being done by Greenplum with their extended
variant of Postgres.
--
Mike Gerdts
http://mgerdts.blogspot.com/
? I would guess that you
don't have large file support. A variant of the following would
probably be good:
cc -c $CFLAGS `getconf LFS_CFLAGS` myprog.c
cc -o myprog myprog.o $LDFLAGS `getconf LFS_LDFLAGS`
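A quick way to check whether the getconf flags are taking effect is to print sizeof(off_t) from a tiny test program; it should be 8 when large-file support is enabled. A minimal sketch, assuming a C compiler named cc on PATH (file names are throwaway temp files):

```shell
# Sketch: confirm the LFS compile flags yield a 64-bit off_t.
lfs=$(getconf LFS_CFLAGS 2>/dev/null)
case "$lfs" in undefined) lfs= ;; esac   # some systems report "undefined"
src=$(mktemp --suffix=.c)
cat > "$src" <<'EOF'
#include <stdio.h>
#include <sys/types.h>
int main(void) { printf("%zu\n", sizeof(off_t)); return 0; }
EOF
bin=$(mktemp)
cc $lfs -o "$bin" "$src"     # $lfs deliberately unquoted: it may hold several flags
"$bin"                       # 8 when large-file support (64-bit off_t) is in effect
```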
--
Mike Gerdts
http://mgerdts.blogspot.com/
corruption)
- Opportunities to do things previously not possible
ZFS doesn't win on many of those, but with the improvements that I
have seen throughout the storage stack it is somewhat likely that the
required improvements are already on the roadmap.
--
Mike Gerdts
http://mgerdts.blogspot.com
# zfs mount -a (not sure this is needed)
# cd /somewhere_else
--
Mike Gerdts
http://mgerdts.blogspot.com/
the additional space to be seen.
--
Mike Gerdts
http://mgerdts.blogspot.com/
should take only a few
seconds longer than a standard init 6. Failback is similarly easy.
I can't remember the last time I swapped physical drives to minimize
the outage during an upgrade.
--
Mike Gerdts
http://mgerdts.blogspot.com/
name, temp.
(I am trying to move this thread over to zfs-discuss, since I originally
posted to the wrong alias)
storage-discuss trimmed in my reply.
--
Mike Gerdts
http://mgerdts.blogspot.com/
independently either I need to have a zpool per zone or I need
to have per-dataset replication. Considering that with some workloads
20+ zones on a T2000 is quite feasible, a T5240 could be pushing 80+
zones and as such a relatively large number of zpools.
--
Mike Gerdts
http://mgerdts.blogspot.com
with general
system tools for a particular directory?
any idea would be appreciated
karsten
Have you tried fsstat? I think it will do what you are looking for
whether it is zfs, ufs, tmpfs, etc.
--
Mike Gerdts
http://mgerdts.blogspot.com/
to workloads that use a lot of RAM but are fairly inactive. As
such, a $10k PCIe card may allow a $42k 64 GB T5240 to handle
5+ times the number of not-too-busy J2EE instances.
If anyone's done any modelling or testing of such an idea, I'd love to
hear about it.
--
Mike Gerdts
http
privileges has
everything they need to gain full root access.
I wish that there was a flag to open(2) to say not to update the atime
and that there was a privilege that could be granted to allow this
flag without granting file_dac_write.
--
Mike Gerdts
http://mgerdts.blogspot.com
better method for getting rid of the cruft that builds up in
/var/sadm either.
I suspect that further discussion on this topic would be best directed
to [EMAIL PROTECTED] or sun-managers mailing list (see
http://www.sunmanagers.org/).
--
Mike Gerdts
http://mgerdts.blogspot.com
/SPROcc/save/pspool/SPROcc/install/depend
var/sadm/pkg/SPROcc/save/pspool/SPROcc/pkginfo
var/sadm/pkg/SPROcc/save/pspool/SPROcc/pkgmap
Notice the lack of undo.Z files (and associated patch directories),
but the rest looks the same.
--
Mike Gerdts
http://mgerdts.blogspot.com
pool0 bootfs - default
pool0 delegation on default
pool0 autoreplace off default
pool0 temporary off default
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sat, May 31, 2008 at 8:48 PM, Mike Gerdts [EMAIL PROTECTED] wrote:
I just experienced a zfs-related crash. I have filed a bug (don't
know number - grumble). I have a crash dump but little free space. If
someone would like some more info from the core, please let me know in
the next few
related
directories (save/patchid) may trip something up.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Sat, May 31, 2008 at 9:38 PM, Mike Gerdts [EMAIL PROTECTED] wrote:
$ find /ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix
/ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix
/ws/mount/onnv-gate/usr/src/uts/sun4u/serengeti/unix/.make.state.lock
/ws/mount/onnv-gate/usr/src/uts/sun4u
and wrote a blog entry.
http://mgerdts.blogspot.com/2008/03/future-of-opensolaris-boot-environment.html
--
Mike Gerdts
http://mgerdts.blogspot.com/
in the
documentation or zfs?
--
Mike Gerdts
http://mgerdts.blogspot.com/
+0x49(ddfb6c00, 5a23, 8045e40, 13, e8b3a020, e0620f78)
ioctl+0x155()
sys_call+0x10c()
The dtrace command that I was running was:
dtrace -n 'fbt:zfs:dsl_dataset_promote:return { trace(arg0); stack() }'
--
Mike Gerdts
http://mgerdts.blogspot.com
of pkg.sun.com) and the current batch of
really fresh code from the Installation and Packaging community gets
burned in a bit, the 18 month cycle will not be such a big deal in
many cases. It's shaping up that upgrading to the latest bits should
be easier and safer than patching is today.
--
Mike
complaints of repeated timeouts when the snv_90
packages were released, resulting in having to restart the upgrade from
the beginning.
--
Mike Gerdts
http://mgerdts.blogspot.com/
of information about which packages and
patches are installed. There is a lot of other stuff that shouldn't
be snapshotted with it. I have proposed /var/share to cope with this.
http://mgerdts.blogspot.com/2008/03/future-of-opensolaris-boot-environment.html
--
Mike Gerdts
http://mgerdts.blogspot.com
of ancient history as well.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Tue, Jun 24, 2008 at 7:24 AM, Gary Mills [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 10:25:09PM -0500, Mike Gerdts wrote:
Really it boils down to lots of file systems to hold the OS adds
administrative complexity and rarely saves more work than it creates.
Some of us want to use
are much more likely to use jumpstart for installations than
laptop-based VM's.
--
Mike Gerdts
http://mgerdts.blogspot.com/
to the timestamps in my prompt, I'm
thinking that virtualbox reset the time to zero while the command was
running. This seems to happen from time to time, but this is the most
entertaining result I have seen.
--
Mike Gerdts
http://mgerdts.blogspot.com
.
--
Mike Gerdts
http://mgerdts.blogspot.com/
on the console after the dump completed.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Wed, Jun 25, 2008 at 3:36 PM, Mike Gerdts [EMAIL PROTECTED] wrote:
On Wed, Jun 25, 2008 at 3:09 PM, Robert Milkowski [EMAIL PROTECTED] wrote:
Well, I've seen core dumps bigger than 10GB (even without ZFS)... :)
Was that the size in the dump device or the size in /var/crash
is not a
bug. If you removed /sbin/init the system would be hosed worse, but you
would get no error message before the reboot.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Mon, Jun 30, 2008 at 9:19 AM, jan damborsky [EMAIL PROTECTED] wrote:
Hi Mike,
Mike Gerdts wrote:
On Wed, Jun 25, 2008 at 11:09 PM, Jan Damborsky [EMAIL PROTECTED]
wrote:
Thank you very much all for this valuable input.
Based on the collected information, I would take
following
errors when doing a stat()
of a file. Repeated tries fail, but a reboot seems to clear it.
zpool scrub reports no errors and the pool consists of a single mirror
vdev. I haven't filed a bug on this yet.
--
Mike Gerdts
http://mgerdts.blogspot.com/
excessively short
on memory X times in recent history. Any of these approaches is miles
above the Linux approach of finding a memory hog to kill.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Tue, Jul 1, 2008 at 7:31 AM, Darren J Moffat [EMAIL PROTECTED] wrote:
Mike Gerdts wrote:
On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat [EMAIL PROTECTED]
wrote:
Instead we should take it completely out of their hands and do it all
dynamically when it is needed. Now that we can swap