Harry Putnam wrote:
Here is where I need some kind of brief outline telling what all is
needed to get that to happen.
When I look at the server, it's said to be in `maintenance mode':
# svcs | grep smb
online         18:40:45 svc:/network/smb/client:default
maintenance    23:55:48 svc:/network/smb/server:default
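For what it's worth, the usual way to dig into a service stuck in
maintenance, and to clear it once the cause is fixed, is roughly the
following (assuming the instance is svc:/network/smb/server:default,
as the log name later in the thread suggests):

# svcs -xv smb/server
# tail /var/svc/log/network-smb-server:default.log
# svcadm clear smb/server
# svcs smb/server

svcs -xv explains why the instance went into maintenance and names its
log file; svcadm clear only helps after the underlying problem (a
missing package, bad config, etc.) has actually been dealt with.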
Once again, I find I have to correct myself:
If you go to a future version of zfs, simply replace all your full
filesystem streams with new ones, and then of course start new
incrementals. Any reasonable backup procedure probably involves starting
new full backups at regular intervals anyway,
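A minimal sketch of that rotation, with a made-up dataset name
(tank/home) and file names that are not from the original posts:

# zfs snapshot tank/home@full-200902
# zfs send tank/home@full-200902 > /backup/home-full-200902.zsend
# zfs snapshot tank/home@daily-20090218
# zfs send -i @full-200902 tank/home@daily-20090218 > /backup/home-daily-20090218.zsend

Each time a fresh full stream is written out, the incrementals start
over against that new snapshot, so nothing ends up depending on a
stream produced by an older zfs version.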
Hello Asif,
Wednesday, February 18, 2009, 1:28:09 AM, you wrote:
AI On Tue, Feb 17, 2009 at 5:52 PM, Robert Milkowski mi...@task.gda.pl wrote:
Hello Asif,
Tuesday, February 17, 2009, 7:43:41 PM, you wrote:
AI Hi All
AI Does anyone have any experience on running qmail on solaris 10 with
Hi Andras,
No problems writing direct. Answers inline below. (If there are any
typos it's because it's late and I have had a very long day ;))
andras spitzer wrote:
Scott,
Sorry for writing you directly, but most likely you have missed my
questions regarding your SW design, whenever you have
Robert Milkowski wrote:
Hello Asif,
Wednesday, February 18, 2009, 1:28:09 AM, you wrote:
AI On Tue, Feb 17, 2009 at 5:52 PM, Robert Milkowski mi...@task.gda.pl wrote:
Hello Asif,
Tuesday, February 17, 2009, 7:43:41 PM, you wrote:
AI Hi All
AI Does anyone have any experience on running
On Fri, Feb 13, 2009 at 9:47 PM, Richard Elling
richard.ell...@gmail.com wrote:
It has been my experience that USB sticks use FAT, an ancient
file system that offers few of the features you expect from modern
file systems. As such, it really doesn't do any write caching. Hence, it
I would very much appreciate some advice on this;
For our file- and mail servers we have been using mirrored RAID-5
chassis, with DiskSuite and UFS. This has served us well, and the
el-cheapo RAID-5 chassis have failed several times without any
downtime for our services.
We are now looking
On Tue, February 17, 2009 16:56, Joe S wrote:
I have an OpenSolaris snv_105 server at home that holds my photos,
docs, music, etc, in a zfs pool. I backup my laptops with rsync to the
OpenSolaris server. All of my important data is in one place, on the
OpenSolaris server. I want to backup
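Purely as an illustration (host and path names are invented here, not
taken from Joe's setup), the laptop side of such an rsync push might
look like:

$ rsync -aH --delete /home/ backup@opensolaris-server:/tank/laptops/thinkpad/

-a preserves permissions and timestamps, -H keeps hard links, and
--delete makes the server-side copy mirror the source.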
On Tue, February 17, 2009 17:35, David Magda wrote:
Personally I recommend using FireWire whenever
possible.
I haven't tested performance on Solaris.
But I bought into the Firewire hype, and put firewire cards into my two
Linux servers for access to my external hard drives, and also used it
Hello Lori,
Any update on this issue, and can you speculate as to whether it will be a
patch to Solaris 10u6, or part of 10u7?
Thanks again,
Jerry
Lori Alt wrote:
This is in the process of being resolved right now. Stay tuned
for when it will be available. It might be a patch to Update 6.
Latest is that this will go into an early build of Update 8
and be available as a patch shortly thereafter (shortly
after its putback, that is. The patch doesn't have to wait for U8
to be released.)
I will update the CR with this information.
Lori
On 02/18/09 09:12, Jerry K wrote:
Hello
On Tue, 17 Feb 2009, Elizabeth Schwartz wrote:
Sun support threw up their hands and said to install Solaris 10 u6,
which I'm not really happy about doing as a bug fix to a production
server running a supported version of Sun OS. Once Upon a Time, Sun
used to offer *patches* to paying customers
It's an old version but it's a *supported* version and we have a
five-figure support contract. That used to matter.
I've never used Live Upgrade; I want to try it out but not on my
production file server, and I want to know that this particular bug is
fixed first, something more definite than
On Wed, Feb 18, 2009 at 10:11 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Tue, February 17, 2009 17:35, David Magda wrote:
Personally I recommend using FireWire whenever
possible.
I haven't tested performance on Solaris.
But I bought into the Firewire hype, and put firewire cards
Calculating the availability and economic trade-offs of configurations
is hard. Rule of thumb seems to rule.
I recently profiled an availability/reliability tool on
StorageMojo.com that uses Bayesian analysis to estimate datacenter
availability. You can quickly (minutes, not days) model
On Wed, 18 Feb 2009, Elizabeth Schwartz wrote:
It's an old version but it's a *supported* version and we have a
five-figure support contract. That used to matter.
I can understand your frustration. ZFS in Solaris 10U3 was a bit
rough around the edges. It is definitely improved in later
On Wed, February 18, 2009 11:19, Tim wrote:
Odd, my firewire enclosure transfers are north of 50MB/sec, while the same
drive in a USB enclosure is lucky to break 25MB/sec. You sure your local
disk isn't just dog slow?
I can easily see 90MB/sec (and that's production load, not benchmark
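One crude way to rule out a dog-slow local disk, assuming you can
spare a scratch file on the pool in question (none of this is from the
original thread):

# ptime dd if=/dev/zero of=/tank/ddtest bs=1024k count=1024
# ptime dd if=/tank/ddtest of=/dev/null bs=1024k
# rm /tank/ddtest

Bear in mind that ZFS caching and compression can flatter both
numbers, so treat this as a sanity check rather than a benchmark.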
Ian Collins wrote:
Harry Putnam wrote:
[...]
Still when I look again... it's still in maintenance mode.
What does tail /var/svc/log/network-smb-server:default.log show?
The log file for a service is listed as part of the long listing (svcs -l
smb/server).
Following these two commands:
fc == Frank Cusack fcus...@fcusack.com writes:
dd == David Dyer-Bennet d...@dd-b.net writes:
fc If you go to a future version of zfs, simply replace all your
fc full filesystem streams with new ones,
I still think you should not be storing these streams at all, for
reasons you describe
You definitely need SUNWsmbskr - the cifs server provided with
OpenSolaris is tied to the kernel at some low level.
I found this entry helpful:
http://blogs.sun.com/timthomas/entry/solaris_cifs_in_workgroup_mode
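A rough sketch of a workgroup-mode CIFS setup on OpenSolaris along
those lines (package, workgroup and dataset names below are only
placeholders):

# pkg install SUNWsmbs SUNWsmbskr
(reboot, so the kernel module matches the running kernel)
# svcadm enable -r smb/server
# smbadm join -w WORKGROUP
# zfs set sharesmb=name=pics tank/pics

The reboot after installing SUNWsmbskr is the step people tend to
skip, and a kernel/module mismatch is one plausible way to end up with
smb/server in maintenance mode.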
On Wed, Feb 18, 2009 at 1:03 PM, Harry Putnam rea...@newsguy.com wrote:
Ian
Bob is correct to praise LiveUpgrade. It's pretty much risk-free when
used properly, provided you have some spare slices/disks.
At the same time, I'd say that this is probably an appropriate time to
escalate the bug with support - the answers you are getting aren't
satisfactory.
I would also
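For anyone who, like Elizabeth, hasn't tried it: a bare-bones Live
Upgrade run onto a spare UFS slice looks something like the sketch
below; the device name and patch directory are placeholders, not taken
from this thread.

# lucreate -c currentBE -n patchedBE -m /:/dev/dsk/c0t1d0s0:ufs
# luupgrade -t -n patchedBE -s /var/tmp/patches <patch-id>
# luactivate patchedBE
# init 6

If the new boot environment misbehaves, you simply boot the old one
again, which is what makes it comparatively low-risk.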
Robin,
From recollection the business case for investment in power protection
technology was relatively simple.
We calculated what the downtime per hour was worth and how frequently it
happened. We used to
have several if not more incidents per year and that would cause major
system
I appreciate the feedback.
I've decided to:
* create daily ZFS snapshots and zfs send these to separate external
disks (via esata).
* create monthly full backups via rsync, tar, or amanda on separate
external disks.
I'm not going to store everything on S3; it is too expensive. However,
I will
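A sketch of the daily step, assuming the eSATA disk carries its own
pool (called backup below, which is my label, not Joe's):

# zfs snapshot -r tank@daily-20090218
# zfs send -R -i tank@daily-20090217 tank@daily-20090218 | zfs receive -d -F backup

The very first run has to be a full send (drop the -i), and receiving
into a pool rather than keeping raw stream files sidesteps the
stream-format concerns raised earlier in the thread.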
On Wed, 18 Feb 2009 11:27:38 -0800, Joe S js.li...@gmail.com wrote:
I appreciate the feedback.
I've decided to:
* create daily ZFS snapshots and zfs send these to separate external
disks (via esata).
I'll be interested in hearing anything you learn about eSata on Solaris. I
haven't used
sl == Scott Lawson scott.law...@manukau.ac.nz writes:
sl Electricity *is* the lifeblood of available storage.
I never meant to suggest computing machinery could run without
electricity. My suggestion is, if your focus is _reliability_ rather
than availability, meaning you don't want to
On Wed, 18 Feb 2009, Miles Nordin wrote:
I just don't like the idea people are building fancy space-age data
centers and then thinking they can safely run crappy storage software
that won't handle power outages because they're above having to worry
about all that little-guy nonsense. A big
Blake wrote:
You definitely need SUNWsmbskr - the cifs server provided with
OpenSolaris is tied to the kernel at some low level.
I found this entry helpful:
http://blogs.sun.com/timthomas/entry/solaris_cifs_in_workgroup_mode
Looks like it will be immensely so..
However it appears from the
Miles Nordin wrote:
sl == Scott Lawson scott.law...@manukau.ac.nz writes:
sl Electricity *is* the lifeblood of available storage.
I never meant to suggest computing machinery could run without
electricity. My suggestion is, if your focus is _reliability_ rather
than
zpool iostat 5 reports:
rpool        115G   349G     91      0  45.7K      0
rpool        115G   349G     90      0  45.5K      0
rpool        115G   349G     89      0  44.6K      0
rpool        115G   349G     93      0  47.9K      0
rpool        115G   349G     90      0  45.0K      0
rpool
I have two separate systems where zpool remove <pool> <disk> failed to
remove the spare disks from the pool. In both cases the command returns
without any error (success). Also, dtracing the IOCTL
ZFS_IOC_VDEV_REMOVE shows no error returned. I searched
SunSolve for bugs but found no match.
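For anyone wanting to repeat that check, the probe would look roughly
like this; the fbt function name and the pool/device names are my
assumptions, not copied from the report:

# dtrace -n 'fbt:zfs:zfs_ioc_vdev_remove:return { printf("rc = %d", arg1); }'
# zpool remove tank c3t2d0     (run from a second window; c3t2d0 stands in for the spare)

A return value of 0 with the spare still showing up in zpool status is
exactly the behaviour the report above describes.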
have you made sure that samba is *disabled*?
svcs samba
?
On Wed, Feb 18, 2009 at 4:14 PM, Harry Putnam rea...@newsguy.com wrote:
Blake wrote:
You definitely need SUNWsmbskr - the cifs server provided with
OpenSolaris is tied to the kernel at some low level.
I found this entry helpful:
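Regarding the samba check above: if it does turn out to be enabled,
the usual pair is something like the following (the FMRI here is the
stock Solaris samba service and may differ on a given box):

# svcs -a | grep samba
# svcadm disable svc:/network/samba:default

The point is simply that the old user-space Samba and the in-kernel
CIFS server shouldn't both be trying to serve SMB at once.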
Blake blake.ir...@gmail.com writes:
have you made sure that samba is *disabled*?
svcs samba
?
First... good news ... it's working.
About samba:
Yeah, that was one of the things I did find while googling. But
apparently that package is not installed by default.. it was not
installed here at
Hello Lori,
Thank you again for the quick reply.
Unfortunately, I had mistakenly anticipated a somewhat quicker
integration than Solaris 10u8.
Approaching this from another angle, would it be possible for me to
build a jumpstart server using a current Solaris Nevada b107/SXCE, and
to
Turns out setting altroot is the way to do this.
Doesn't work for the root pool. Once you get to the root filesystem,
mounted on /, zfs attempts to mount it. Even though you are using
an altroot, / now maps to /altroot, which is of course already occupied.
:(
-frank
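For context, the failing case being described is roughly this (the
pool name and altroot path are just placeholders):

# zpool import -R /altroot rpool

With -R set, the root dataset's mountpoint of / is remapped to
/altroot, and since /altroot is of course already occupied, the
attempt to mount the pool's root filesystem runs straight into it.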