Spare a thought also for the remote serviceability aspects of these
systems, if customers raise calls/escalations against such systems
then our remote support/solution centre staff would find such an
output useful in identifying and verifying the config.
I don't have visibility of the
Included below is a thread which dealt with trying to find the
packages necessary for a minimal Solaris 10 U2 install with ZFS
functionality. In addition to SUNWzfskr, SUNWzfsr and SUNWzfsu, the
SUNWsmapi package needs to be installed. The libdiskmgt.so.1 library is
required for the
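The package set named in the thread can be captured in a small script. A minimal dry-run sketch: the package names come from the messages above, but the install-media path is an assumption, and the script only prints the pkgadd commands rather than running them.

```shell
# Dry-run: print the pkgadd commands for a minimal ZFS package set.
# Package names are from this thread; the media path is an assumption.
ZFS_PKGS="SUNWzfskr SUNWzfsr SUNWzfsu SUNWsmapi"
MEDIA="/cdrom/Solaris_10/Product"
for pkg in $ZFS_PKGS; do
    echo "pkgadd -d $MEDIA $pkg"
done
```

On a real system you would drop the echo and run pkgadd directly against your Solaris 10 U2 media.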
No argument from me. For better or for worse, most of the customers I
speak with minimize their OS distributions. The more we can accurately
describe dependencies within our current methods, the better.
/jason
Jim Connors wrote:
Included below is a thread which dealt with trying to
On Tue, Jul 25, 2006 at 10:25:04AM -0400, Jim Connors wrote:
Included below is a thread which dealt with trying to find the
packages necessary for a minimal Solaris 10 U2 install with ZFS
functionality. In addition to SUNWzfskr, SUNWzfsr and SUNWzfsu, the
SUNWsmapi package needs to be
Craig Morgan wrote:
Spare a thought also for the remote serviceability aspects of these
systems, if customers raise calls/escalations against such systems then
our remote support/solution centre staff would find such an output
useful in identifying and verifying the config.
I don't have
Guys,
Thanks for the help so far, now comes the more interesting questions ...
Piggybacking off of some work being done to minimize Solaris for
embedded use, I have a version of Solaris 10 U2 with ZFS functionality
with a disk footprint of about 60MB. Creating a miniroot based upon
this
I understand. Thanks.
Just curious: ZFS manages NFS shares. Have you given any thought to
what might be involved for ZFS to manage SMB shares in the same manner?
This all goes towards my stateless OS theme.
-- Jim C
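For reference, the NFS side mentioned above is driven by the dataset's sharenfs property. A minimal dry-run sketch: 'sharenfs' is the real ZFS property, but the dataset name is made up, and the script only prints the commands.

```shell
# Dry-run: print the zfs(1M) commands that enable NFS sharing on a dataset.
# 'sharenfs' is the real property; the dataset name is hypothetical.
DATASET="tank/export/home"
echo "zfs set sharenfs=rw $DATASET"
echo "zfs get sharenfs $DATASET"
```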
Eric Schrock wrote:
You need the following file:
On Tue, Jul 25, 2006 at 01:07:59PM -0400, Jim Connors wrote:
I understand. Thanks.
Just curious: ZFS manages NFS shares. Have you given any thought to
what might be involved for ZFS to manage SMB shares in the same manner?
This all goes towards my stateless OS theme.
Yep, this is in
I've recently started doing ON nightly builds on zfs filesystems on the
internal ATA disk of a Blade 1500 running snv_42. Unfortunately, the
builds are extremely slow compared to building on an external IEEE 1394
disk attached to the same machine:
ATA disk:
Elapsed build time (DEBUG)
I've run into this myself (I am in a university setting). After reading bug
ID 6431277 (URL below for noobs like myself who didn't know what "see 6431277"
meant):
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277
...it's not clear to me how this will be resolved. What I'd
On Tue, 2006-07-25 at 13:45, Rainer Orth wrote:
At other times, the kernel time can even be as high as 80%. Unfortunately,
I've not been able to investigate how usec_delay is called since there's no
fbt provider for that function (nor for the alternative entry point
drv_usecwait found in
Eric Schrock wrote:
You need the following file:
/etc/zfs/zpool.cache
So as a workaround (or more appropriately, a kludge) would it be
possible to:
1. At boot time do a 'zpool import' of some pool guaranteed to exist.
For the sake of this discussion call it 'system'
2. Have
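The first step of the workaround proposed above, as a dry-run sketch: the pool name 'system' and the /etc/zfs/zpool.cache path come from the messages; the script only prints the commands, since a forced import is destructive if misused.

```shell
# Dry-run of the proposed boot-time kludge: import a pool guaranteed to
# exist so that /etc/zfs/zpool.cache gets recreated.
POOL="system"
echo "zpool import -f $POOL"
echo "ls -l /etc/zfs/zpool.cache"
```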
Bill,
In the future, you can try:
# lockstat -s 10 -I sleep 10
which aggregates on the full stack trace, not just the caller, during
profiling interrupts. (-s 10 sets the stack depth; tweak up or down to
taste).
Nice. Perhaps lockstat(1M) should be updated to include something like
On Tue, 25 Jul 2006, Brad Plecs wrote:
I've run into this myself (I am in a university setting). After reading
bug ID 6431277 (URL below for noobs like myself who didn't know what "see
6431277" meant):
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277
...it's not clear
A couple of weeks ago, there was a discussion on the best system for ZFS
and I mentioned that AMD would reduce pricing and withdraw some of the
939-pin (non AM2) processors from the marketplace.
Update: I see a dual-core AMD X2 4400+ (1MB cache per core) processor on
www.monarchcomputers.com for
First, ZFS allows one to take advantage of large, inexpensive Serial ATA
disk drives. Paraphrased: ZFS loves large, cheap SATA disk drives. So
the first part of the solution looks (to me) as simple as adding some
cheap SATA disk drives.
I hope not. We have quotas available for a reason.
First, ZFS allows one to take advantage of large, inexpensive Serial ATA
disk drives. Paraphrased: ZFS loves large, cheap SATA disk drives. So
the first part of the solution looks (to me) as simple as adding some
cheap SATA disk drives.
Next, after extra storage space has been added to
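Adding those cheap SATA drives to an existing pool is a one-liner. A dry-run sketch with hypothetical pool and device names, printing the command rather than running it:

```shell
# Dry-run: grow a pool by adding a new mirror of two SATA disks.
# Pool and device names are made up for illustration.
POOL="tank"
echo "zpool add $POOL mirror c2t0d0 c2t1d0"
```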
I would like to make a couple of additions to the proposed model.
Permission Sets.
Allow the administrator to define a named set of permissions, and then
use the name as a permission later on. Permission sets would be
evaluated dynamically, so that changing the set definition would
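Hypothetical syntax for the proposed permission sets might look something like the following. This is purely a sketch of the proposal in this thread, not shipping syntax; the '@set' naming and the 'zfs allow' verbs are illustrative only.

```shell
# Dry-run sketch of the proposed permission-set feature. Nothing here is
# shipped syntax; set name, permissions, and dataset are all illustrative.
echo "zfs allow -s @basicuser create,mount,snapshot tank"
echo "zfs allow staff @basicuser tank/home"
```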
On Tue, 2006-07-25 at 14:36, Rainer Orth wrote:
Perhaps lockstat(1M) should be updated to include something like
this in the EXAMPLES section.
I filed 6452661 with this suggestion.
Any word when this might be fixed?
I can't comment in terms of time, but the engineer working on it has a
Bill,
On Tue, 2006-07-25 at 14:36, Rainer Orth wrote:
Perhaps lockstat(1M) should be updated to include something like
this in the EXAMPLES section.
I filed 6452661 with this suggestion.
Excellent, thanks.
Any word when this might be fixed?
I can't comment in terms of time, but
On Tue, Jul 25, 2006 at 11:13:16AM -0700, Brad Plecs wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277
What I'd really like to see is ... the ability for the snapshot space
to *not* impact the filesystem space).
Yep, as Eric mentioned, that is the purpose of this
Our application Canary has approx. 750 clients uploading to the server
every 10 mins; that's approx. 108,000 gzip tarballs per day written to
the /upload directory. The parser untars each tarball, which consists of
8 ASCII files, into the /archives directory. /app is our application and
tools
Given the amount of I/O, wouldn't it make sense to get more drives
involved, or something that has cache on the front end, or both? If you're
really pushing the amount of I/O you're alluding to - hard to tell
without all the details - then you're probably going to hit a limitation
on the drive
On Tue, Jul 25, 2006 at 07:24:51PM -0500, Mike Gerdts wrote:
On 7/25/06, Brad Plecs [EMAIL PROTECTED] wrote:
What I'd really like to see is ... the ability for the snapshot space
to *not* impact the filesystem space).
The idea is that you have two storage pools - one for live data, one
for
On 7/25/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
You can simplify and improve the performance of this considerably by
using 'zfs send':
for user in $allusers ; do
zfs snapshot users/[EMAIL PROTECTED]
zfs send -i $yesterday users/[EMAIL PROTECTED] | \
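A hedged reconstruction of the loop above: the archive's address scrubber replaced the snapshot names, since 'user@date' looks like an email address. The user list, dates, and the receiving host below are all made up for illustration, and the loop only prints the commands.

```shell
# Dry-run reconstruction of the redacted snapshot/send loop above.
# Users, dates, and the receive target are hypothetical.
allusers="alice bob"
yesterday="20060724"
today="20060725"
for user in $allusers ; do
    echo "zfs snapshot users/$user@$today"
    echo "zfs send -i $yesterday users/$user@$today | ssh backuphost zfs receive -F tank/users/$user"
done
```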
Hi Torrey; we are the cobbler's kids. We borrowed this T2000 from
Niagara engineering after we did some performance tests for them. I am
trying to get a thumper to run this data set. This could take up to 3-4
months. Today we are watching 750 Sun Ray servers and 30,000 employees.
Let's see
1)