Still no luck :-(
I installed snv_100 on a new disk, mounted the old disk and copied the home
directories etc.
and now at least I have a system that works, if somewhat stunted compared to
the old system.
It would be good if the old disk could be brought back to its former glory...
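A minimal sketch of how the old disk could be inspected again, assuming it still
holds an importable ZFS pool (the pool name and mount point below are only
placeholders):

$ zpool import                      # list pools visible on the attached disks
$ zpool import -f -R /mnt oldpool   # import under an alternate root
$ zfs list -r oldpool               # check whether the old filesystems are intact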
On Thu, Nov 13, 2008 at 04:54:57PM -0800, Gerry Haskins wrote:
Jens, http://www.sun.com/bigadmin/patches/firmware/release_history.jsp on
the BigAdmin Patching center, http://www.sun.com/bigadmin/patches/, lists
firmware revisions.
Thanks a lot. Dug around there and found that 121683-06
Hi guys. Read this thread, good info! I'm now considering getting one of the
MBs recommended in the Tom's Hardware review, to which a URL was posted
earlier. The article is here:
http://www.tomshardware.com/reviews/intel-e7200-g31,2039.html
I would like to know if any of you can confirm
Andrew Gabriel [EMAIL PROTECTED] wrote:
That is exactly the issue. When the zfs recv data has been written, zfs
recv starts reading the network again, but there's only a tiny amount of
data buffered in the TCP/IP stack, so it has to wait for the network to
heave more data across. In
Joerg Schilling wrote:
Andrew Gabriel [EMAIL PROTECTED] wrote:
That is exactly the issue. When the zfs recv data has been written, zfs
recv starts reading the network again, but there's only a tiny amount of
data buffered in the TCP/IP stack, so it has to wait for the network to
heave
Could you provide the panic message and stack trace,
plus the stack traces of when it's hung?
--matt
Hello Matt,
here is the info and stack trace of a server running Update 3:
$ uname -a
SunOS qacpp03 5.10 Generic_127111-05 sun4us sparc FJSV,GPUSC-M
$ head -1 /etc/release
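For reference, the kind of information Matt asked for can usually be pulled out
with mdb; a rough sketch, assuming the crash dump was saved under
/var/crash/<hostname> with the default file names:

$ cd /var/crash/qacpp03
$ echo "::msgbuf" | mdb unix.0 vmcore.0    # panic message from the saved dump
$ echo "::stack" | mdb unix.0 vmcore.0     # stack trace of the panicking thread
$ echo "::threadlist -v" | mdb -k          # thread stacks on a live (hung) system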
No clue. My friend also upgraded to b101. Said it was working awesome
- improved network performance, etc. Then he said after a few days,
he's decided to downgrade too - too many other weird side effects.
This has a comparison (at the time) as to what the differences are
with the different
Hello Thomas,
What is mbuffer? Where might I go to read more about it?
Thanks,
Jerry
Yesterday I released a new version of mbuffer, which also enlarges
the default TCP buffer size. So everybody using mbuffer for network data
transfer might want to update.
For everybody unfamiliar
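For anyone who hasn't come across it: mbuffer sits in a pipeline and keeps a
large RAM buffer between producer and consumer, and it can also carry the stream
over TCP itself. A rough sketch of the usual zfs send/recv pairing (buffer sizes,
port, host and dataset names are only examples):

# on the receiving host
$ mbuffer -s 128k -m 1G -I 9090 | zfs recv -F tank/backup

# on the sending host
$ zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O recvhost:9090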
Jerry K wrote:
Hello Thomas,
What is mbuffer? Where might I go to read more about it?
Thanks,
Jerry
Yesterday I released a new version of mbuffer, which also enlarges
the default TCP buffer size. So everybody using mbuffer for network data
transfer might want to update.
Joerg Schilling wrote:
Andrew Gabriel [EMAIL PROTECTED] wrote:
That is exactly the issue. When the zfs recv data has been written, zfs
recv starts reading the network again, but there's only a tiny amount of
data buffered in the TCP/IP stack, so it has to wait for the network to
heave
Andrew Gabriel wrote:
Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
many orders of magnitude bigger than SO_RCVBUF can go.
No -- that's wrong -- should read 250MB buffer!
Still some orders of
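For what it's worth, the relevant Solaris limits can be inspected and raised on
the fly with ndd; a rough sketch (the values are only examples, and applications
are still capped by tcp_max_buf when they ask for a larger SO_RCVBUF):

# as root
$ ndd -get /dev/tcp tcp_max_buf            # upper limit for socket buffer sizes
$ ndd -get /dev/tcp tcp_recv_hiwat         # default receive buffer size
$ ndd -set /dev/tcp tcp_max_buf 8388608
$ ndd -set /dev/tcp tcp_recv_hiwat 1048576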
On 11/14/08 04:29, Tobias Exner wrote:
Hi experts,
I need a little help from your side to understand what's going on.
I've got a SUN X4540 Thumper and set up some zpools. Further, I enabled
the powerd configuration to stop the disks when they are idle for a
specified time.
Now I
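For reference, the idle spin-down behaviour is driven by /etc/power.conf and
applied with pmconfig; a rough sketch (the threshold and device entry are only
examples, and the device path format should be checked against power.conf(4)):

# excerpt from /etc/power.conf
autopm                  enable
device-thresholds       /dev/dsk/c1t0d0s0       30m

# make the new settings take effect
$ pmconfig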
Andrew Gabriel [EMAIL PROTECTED] wrote:
Andrew Gabriel wrote:
Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
many orders of magnitude bigger than SO_RCVBUF can go.
No -- that's wrong --
On Fri, Nov 14, 2008 at 10:04 AM, Joerg Schilling
[EMAIL PROTECTED] wrote:
Andrew Gabriel [EMAIL PROTECTED] wrote:
Andrew Gabriel wrote:
Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
many
On Fri, 14 Nov 2008, Joerg Schilling wrote:
On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could
set the socket buffer size to 63 kB. 63kB : 1 MB is the same ratio
as 256 MB : 4 GB.
BTW: a lot of numbers in Solaris have not grown in a long time and
thus create problems
Joerg Schilling wrote:
Andrew Gabriel [EMAIL PROTECTED] wrote:
Andrew Gabriel wrote:
Interesting idea, but for 7200 RPM disks (and a 1Gb ethernet link), I
need a 250GB buffer (enough to buffer 4-5 seconds worth of data). That's
many orders of magnitude bigger than SO_RCVBUF can go.
No --
----- Original message -----
Subject: Re: [zfs-discuss] 'zfs recv' is very slow
Sent: Fri, 14 Nov 2008
From: Bob Friesenhahn [EMAIL PROTECTED]
On Fri, 14 Nov 2008, Joerg Schilling wrote:
On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could
set the socket buffer
Neil Perrin wrote:
I wouldn't expect any improvement using a separate disk slice for the Intent Log
unless that disk was much faster and was otherwise largely idle. If it was heavily
used then I'd expect quite the performance degradation as the disk head bounces
around between slices.
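For context, this is the kind of log device being discussed; attaching one is a
one-liner (pool and device names are only placeholders), and per the comment
above it pays off with a whole, otherwise-idle, preferably write-optimized device:

$ zpool add tank log c3t0d0     # dedicate a separate device to the intent log
$ zpool status tank             # the device now appears under a 'logs' section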
OpenSolaris + ZFS achieves 120 MB/sec read speed with 4 SATA 7200 rpm discs,
440 MB/sec read speed with 7 SATA discs, and 220 MB/sec write speed;
2 GB/sec write speed with 48 discs (on a SUN Thumper x4600).
I have links to websites where I've read this.
[EMAIL PROTECTED] wrote:
But zfs could certainly use bigger buffers; just like mbuffer, I also
wrote my own pipebuffer which does pretty much the same.
You too? (My buffer program which I used to diagnose the problem is
attached to the bugid ;-)
I know Chris Gerhard wrote one too.
Seems like
On Fri, 14 Nov 2008, Joerg Schilling wrote:
-----
Disk RPM: 3,600 -> 10,000 (x3)
The best rate I saw in 1985 was 800 kB/s (with linear reads);
now I see 120 MB/s, which is more than x100 ;-)
Yes. And now that SSDs are
Bob Friesenhahn [EMAIL PROTECTED] wrote:
On Fri, 14 Nov 2008, Joerg Schilling wrote:
-----
Disk RPM: 3,600 -> 10,000 (x3)
The best rate I saw in 1985 was 800 kB/s (with linear reads);
now I see 120 MB/s, which is more
Brent Jones wrote:
*snip*
a 'zfs send' on the sending host
monitors the pool/filesystem for changes, and immediately sends them to the
receiving host, which applies the change to the remote pool.
This is asynchronous, and isn't really different from running zfs send/recv
in a loop. Whether
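A rough sketch of such a loop with incremental snapshots (host, pool, filesystem
and snapshot names are only examples):

# repeated from cron or a shell loop on the sending host
$ zfs snapshot tank/fs@2008-11-14_1200
$ zfs send -i tank/fs@2008-11-14_1100 tank/fs@2008-11-14_1200 | \
      ssh backuphost zfs recv -F backup/fs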
I think you're confusing our clustering feature with the remote
replication feature. With active-active clustering, you have two closely
linked head nodes serving files from different zpools using JBODs
connected to both head nodes. When one fails, the other imports the
failed node's pool and
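In other words, on failover the surviving head does roughly the following (the
pool name is only an example):

$ zpool import -f thumperpool   # take over the failed node's pool
$ zfs share -a                  # re-export its filesystems to clients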
hi All,
I realize the subject is a bit incendiary, but we're running into what
I view as a design omission with ZFS that is preventing us from
building highly available storage infrastructure; I want to bring some
attention (again) to this major issue:
Currently we have a set of iSCSI
On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
That is _not_ active-active, that is active-passive.
If you have an active-active system I can access the same data via both
controllers at the same time. I can't if it works like you just
described. You can't call it
On Fri, Nov 14, 2008 at 01:07:29PM -0800, Ed Clark wrote:
hi,
is the system still in the same state initially reported ?
Yes.
i.e. you have not manually run any commands (e.g. installboot) that would have
altered the slice containing the root fs where 137137-09 was applied
could you
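For reference, the command being asked about rewrites the boot block on the root
slice; on SPARC with a UFS root it is roughly the following (the disk slice is
only an example):

$ installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0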
On Fri, Nov 14, 2008 at 10:22 AM, mike [EMAIL PROTECTED] wrote:
No clue. My friend also upgraded to b101. Said it was working awesome
- improved network performance, etc. Then he said after a few days,
he's decided to downgrade too - too many other weird side effects.
Any more details
On Fri, Nov 14, 2008 at 3:18 PM, Al Hopper [EMAIL PROTECTED] wrote:
No clue. My friend also upgraded to b101. Said it was working awesome
- improved network performance, etc. Then he said after a few days,
he's decided to downgrade too - too many other weird side effects.
Any more details
Adam Leventhal wrote:
On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
That is _not_ active-active, that is active-passive.
If you have an active-active system I can access the same data via both
controllers at the same time. I can't if it works like you just
described.
On Fri, Nov 14, 2008 at 4:43 PM, gnomad [EMAIL PROTECTED] wrote:
Like many others, I am looking to put together a SOHO NAS based on ZFS/CIFS.
The plan is 6 x 1TB drives in RAIDZ2 configuration, driven via mobo with 6
SATA ports.
I've read most, if not all, of the threads here, as well as
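For what it's worth, the pool side of such a build is short; a rough sketch with
placeholder device and dataset names, using the in-kernel SMB server for the
CIFS part:

$ zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
$ zfs create tank/media
$ zfs set sharesmb=on tank/media    # share over CIFS via the OpenSolaris SMB server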
On Sat, Nov 15, 2008 at 00:46, Richard Elling [EMAIL PROTECTED] wrote:
Adam Leventhal wrote:
On Fri, Nov 14, 2008 at 10:48:25PM +0100, Mattias Pantzare wrote:
That is _not_ active-active, that is active-passive.
If you have an active-active system I can access the same data via both
Rich Reynolds wrote:
BTW: I am loath to call them bugs until I know it's not a
configuration/pilot error.
IMHO, if you can cause the root to become corrupt, it's a bug. Short of
mucking around in /dev/kmem or /dev/dsk/*, it just shouldn't be possible
to corrupt a file system.
WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
The P45 based boards are a no-brainer
16G of DDR2-1066 with P45 or
8G of ECC DDR2-800 with 3210 based boards
That is the question.
Rob
[EMAIL PROTECTED] wrote:
WD Caviar Black drive [...] Intel E7200 2.53GHz 3MB L2
The P45 based boards are a no-brainer
16G of DDR2-1066 with P45 or
8G of ECC DDR2-800 with 3210 based boards
That is the question.
I guess the answer is how valuable is your data?
--
Ian.
On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED] wrote:
In short, separate logs with rotating rust may reduce sync write latency by
perhaps 2-10x on an otherwise busy system. Using write optimized SSDs
will reduce sync write latency by perhaps 10x in all cases. This is one of