Hi folks,
wanted to share some exciting news with you. Pogo Linux is shipping
boxes with NexentaStor pre-installed, like this 16TB-24TB one:
http://www.pogolinux.com/quotes/editsys?sys_id=3989
And here is the announcement:
http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=129&Itemid=56
How about we complain enough to shame somebody into adding power
management to the K8 chips? We can start by reminding Sun of how much
it trumpeted the early Opterons as 'green computing'.
Cheers,
florin
Casper's frkit power management script works very well with AMD CPUs.
I just tried that, but the installgrub keeps failing:

[EMAIL PROTECTED]:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h1m with 0 errors on Sat Aug  2 01:44:55 2008
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
Malachi de Ælfweald wrote:
I just tried that, but the installgrub keeps failing:
[EMAIL PROTECTED]:~# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s2
raw device must be a root slice (not s2)
and trying rdsk with s0 gave the same error as before
On Sat, Aug 2, 2008 at 2:02 AM, Enda O'Connor [EMAIL PROTECTED] wrote:
Malachi de Ælfweald wrote:
I just tried
Hi everyone,
I've been running a ZFS fileserver for about a month now (on snv_91) and
it's all working really well. I'm scrubbing once a week and nothing has
come up as a problem yet.
I'm a little worried as I've just noticed these messages in
/var/adm/messages and I don't know if they're bad
On Sat, Aug 2, 2008 at 06:02, Erast Benson [EMAIL PROTECTED] wrote:
wanted to share some exciting news with you. Pogo Linux shipping
NexentaStor pre-installed boxes
at fairly astounding prices! The 16-disk one fully loaded with 1TB
SAS disks, CPUs, memory, and warranty comes in at $11,620; the
Is it possible to compress a root pool? If yes, how? Thanks.
(I installed OS 2008.05 onto a 4 GB USB stick, and want to know whether I could
squeeze more stuff in there.)
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Todd E. Moore wrote:
I'm working with a group that wants to commit every single write all the
way to disk - flushing or bypassing all the caches each time. The
fsync() call will flush the ZIL. As for the disk's cache: if given the
entire disk, ZFS enables its write cache by default. Rather than ZFS
What does zpool status say?
Looks like my naive conclusion was wrong. This morning I installed OS 2008.05
onto a 4GB flash stick plugged into an HP 6700 notebook (Intel C2D, bought this
week). The machine boots and runs very nicely from this flash stick.
However, I was unable to use this flash stick to boot an
Ross wrote:
What does zpool status say?
zpool status says everything's fine. I've run another scrub and it hasn't
found any errors, so can I just consider this harmless? It's filling up
my log quickly though
thanks
Matt
Matt Harrison wrote:
Ross wrote:
What does zpool status say?
zpool status says everything's fine. I've run another scrub and it hasn't
found any errors, so can I just consider this harmless? It's filling up
my log quickly though
I've just checked past logs and I'm getting up to about
W. Wayne Liauh wrote:
...
However, I was unable to use this flash stick to boot an Athlon X2
machine. Its MBR was read, and the GRUB options were shown on the
screen. But when I selected an option (e.g., rc3), the machine
would reboot, and the GRUB screen would be shown
Dear all
As we wanted to patch one of our iSCSI Solaris servers, we had to offline
the ZFS submirrors on the clients connected to that server. The devices
connected to the second server stayed online, so the pools on the clients
were still available, but in degraded mode. When the server came
tn == Thomas Nau [EMAIL PROTECTED] writes:
tn Nevertheless during the first hour of operation after onlining
tn we recognized numerous checksum errors on the formerly
tn offlined device. We decided to scrub the pool and after
tn several hours we got about 3500 errors in 600GB of
Miles
On Sat, 2 Aug 2008, Miles Nordin wrote:
tn == Thomas Nau [EMAIL PROTECTED] writes:
tn Nevertheless during the first hour of operation after onlining
tn we recognized numerous checksum errors on the formerly
tn offlined device. We decided to scrub the pool and after
tn == Thomas Nau [EMAIL PROTECTED] writes:
tn I never experienced that one but we usually don't touch any of
tn the iSCSI settings as long as a device is offline. At least
tn as long as we don't have to for any reason
Usually I do 'zpool offline' followed by 'iscsiadm remove
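Spelled out, that offline/patch/online cycle might look roughly like the sketch below. The pool, device, and target names are made up for illustration, and the `static-config` subcommands assume the initiator uses static discovery:

```shell
# Hypothetical names - substitute your own pool, device, and iSCSI target.
zpool offline tank c2t1d0                 # detach the submirror cleanly
iscsiadm remove static-config iqn.1986-03.com.sun:02:target0,192.168.1.10
# ... patch and reboot the iSCSI server ...
iscsiadm add static-config iqn.1986-03.com.sun:02:target0,192.168.1.10
zpool online tank c2t1d0                  # resilver starts automatically
zpool scrub tank                          # then verify with a scrub
```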
I ran into this as well. For some reason installgrub needs slice 2 to be the
special backup slice that covers the whole disk, as in Solaris. You actually
specify s0 on the command line since this is the location of the ZFS root, but
installgrub will go away and try to access the whole disk
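So, assuming a typical VTOC label where s2 is the backup slice spanning the whole disk (the disk name below is only an example), the invocation that works points at s0 even though installgrub opens s2 behind the scenes:

```shell
# s0 holds the ZFS root; installgrub itself also accesses s2 (the whole disk)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0
```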
Have you verified that it will auto failover correctly if one is s0 and one
is s2?
On Sat, Aug 2, 2008 at 3:53 PM, andrew [EMAIL PROTECTED] wrote:
I ran into this as well. For some reason installgrub needs slice 2 to be
the special backup slice that covers the whole disk, as in Solaris. You
c == Miles Nordin [EMAIL PROTECTED] writes:
tn == Thomas Nau [EMAIL PROTECTED] writes:
c 'zpool status' should not be touching the disk at all.
I found this on some old worklog:
http://web.Ivy.NET/~carton/oneNightOfWork/20061119-carton.html
-8-
Also, zpool status takes forEVer.
Sure,
zfs set compression=on rpool
Of course, this only compresses data written after compression is turned on.
On my system, when I started the install I opened a terminal, and after
rpool was created I set compression to on, so that it was set before
the packages were installed.
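As a quick sketch of the above (pool name rpool as in the thread), you can also check afterwards how much compression is actually buying you:

```shell
zfs set compression=on rpool            # affects only blocks written from now on
zfs get compression,compressratio rpool # compressratio shows the savings so far
```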
Carson Gaspar wrote:
Todd E. Moore wrote:
I'm working with a group that wants to commit all the way to disk every
single write - flushing or bypassing all the caches each time.
Mark Danico wrote:
Sure,
zfs set compression=on rpool