I just got off the phone with one of their executive customer service guys,
and he says they are working on a fix right now and should have a firmware
update out sometime this week, assuming all the testing goes well. When I
get it, I'll update and let you guys know if it works.
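In case it helps anyone keeping an eye on their own boxes in the meantime, here's a rough sketch (not anything official, just an illustration assuming syslog-format lines like the ones quoted further down) that flags the "hard resetting link" events and md superblock write errors in a log file:

```python
import re

# Patterns matching the kernel messages seen in the quoted log below:
#   "ata3: hard resetting link"
#   "md: super_written gets error=-5, uptodate=0"
RESET_RE = re.compile(r"(ata\d+(?:\.\d+)?): hard resetting link")
MD_ERR_RE = re.compile(r"md: super_written gets error=(-?\d+)")

def scan_syslog(lines):
    """Return (link_resets, md_errors) found in an iterable of syslog lines."""
    resets, md_errors = [], []
    for line in lines:
        m = RESET_RE.search(line)
        if m:
            resets.append(m.group(1))
        m = MD_ERR_RE.search(line)
        if m:
            md_errors.append(int(m.group(1)))
    return resets, md_errors

# Sample lines taken from the log excerpt quoted in this thread:
sample = [
    "Nov  5 16:05:21 serverv2 kernel: [12986.184075] ata3: hard resetting link",
    "Nov  5 16:05:21 serverv2 kernel: [12986.184077] ata4: hard resetting link",
    "Nov  5 16:05:21 serverv2 kernel: [12986.682437] md: super_written gets error=-5, uptodate=0",
]
resets, md_errors = scan_syslog(sample)
print(resets)     # ['ata3', 'ata4']
print(md_errors)  # [-5]
```

Pointing it at /var/log/messages (or journalctl output piped to a file) should make it obvious whether the resets line up with the md errors in time.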

On Thu, Nov 6, 2008 at 2:43 AM, Dan Graham <[EMAIL PROTECTED]> wrote:

> Chris,
>
> Thank you for the heads up. I was going to add another RAID 5 array
> with these drives ... shudderz ... I held off because of reliability
> issues with the 1 TB Seagates (I already have one of two overheating).
> I have had really good luck with the 750 GB Hitachis.
>
> All the best, Dan
>
> On Wed, Nov 5, 2008 at 5:45 PM, Chris q <[EMAIL PROTECTED]> wrote:
> >
> >
> > ---------- Forwarded message ----------
> > From: Chris <[EMAIL PROTECTED]>
> > Date: Wed, Nov 5, 2008 at 5:45 PM
> > Subject: Re: [clug-talk] Fwd: Fwd: asus P5Q -Carefule of seagate 1.5 TB
> > drives, they are a bit broken
> > To: CLUG General <[email protected]>
> >
> >
> >
> > http://forums.seagate.com/stx/board/message?board.id=ata_drives&thread.id=2390&view=by_date_ascending&page=6
> > Problem tentatively solved. Any of you thinking about buying the Seagate
> > 1.5 TB drives: wait until they fix this problem.
> >
> >
> > On Wed, Nov 5, 2008 at 4:15 PM, Chris <[EMAIL PROTECTED]> wrote:
> >>
> >> OK, here's the part that I don't get; perhaps you can explain what's
> >> going on. It looks like a bunch of CUPS stuff happens, and then my RAID
> >> array's SATA links reset, although that might be a coincidence. Is there
> >> any reason why the link is resetting? It doesn't make sense to me.
> >>
> >> Nov  5 12:49:12 serverv2 -- MARK --
> >> Nov  5 13:09:12 serverv2 -- MARK --
> >> Nov  5 13:29:12 serverv2 -- MARK --
> >> Nov  5 13:49:12 serverv2 -- MARK --
> >> Nov  5 14:09:12 serverv2 -- MARK --
> >> Nov  5 14:29:12 serverv2 -- MARK --
> >> Nov  5 14:49:12 serverv2 -- MARK --
> >> Nov  5 15:09:12 serverv2 -- MARK --
> >> Nov  5 15:29:12 serverv2 -- MARK --
> >> Nov  5 15:49:12 serverv2 -- MARK --
> >> Nov  5 15:50:55 serverv2 python: hp-systray(init)[6671]: warning: No hp:
> >> or hpfax: devices found in any installed CUPS queue. Exiting.
> >> Nov  5 15:52:07 serverv2 kernel: [12192.326735] type=1503
> >> audit(1225925527.559:4): operation="inode_permission" requested_mask="r::"
> >> denied_mask="r::" fsuid=7 name="/proc/6778/net/" pid=6778
> >> profile="/usr/sbin/cupsd"
> >> Nov  5 15:52:08 serverv2 kernel: [12193.222565] type=1503
> >> audit(1225925528.454:5): operation="inode_permission" requested_mask="r::"
> >> denied_mask="r::" fsuid=7 name="/proc/6782/net/" pid=6782
> >> profile="/usr/sbin/cupsd"
> >> Nov  5 15:52:08 serverv2 kernel: [12193.222606] type=1503
> >> audit(1225925528.454:6): operation="socket_create" family="ax25"
> >> sock_type="dgram" protocol=0 pid=6782 profile="/usr/sbin/cupsd"
> >> Nov  5 15:52:08 serverv2 kernel: [12193.222614] type=1503
> >> audit(1225925528.454:7): operation="socket_create" family="netrom"
> >> sock_type="seqpacket" protocol=0 pid=6782 profile="/usr/sbin/cupsd"
> >> Nov  5 15:52:08 serverv2 kernel: [12193.222621] type=1503
> >> audit(1225925528.454:8): operation="socket_create" family="rose"
> >> sock_type="dgram" protocol=0 pid=6782 profile="/usr/sbin/cupsd"
> >> Nov  5 15:52:08 serverv2 kernel: [12193.222628] type=1503
> >> audit(1225925528.454:9): operation="socket_create" family="ipx"
> >> sock_type="dgram" protocol=0 pid=6782 profile="/usr/sbin/cupsd"
> >> Nov  5 15:52:08 serverv2 kernel: [12193.222634] type=1503
> >> audit(1225925528.454:10): operation="socket_create" family="appletalk"
> >> sock_type="dgram" protocol=0 pid=6782 profile="/usr/sbin/cupsd"
> >> Nov  5 15:52:08 serverv2 kernel: [12193.222641] type=1503
> >> audit(1225925528.454:11): operation="socket_create" family="econet"
> >> sock_type="dgram" protocol=0 pid=6782 profile="/usr/sbin/cupsd"
> >> Nov  5 15:52:08 serverv2 kernel: [12193.222648] type=1503
> >> audit(1225925528.454:12): operation="socket_create" family="ash"
> >> sock_type="dgram" protocol=0 pid=6782 profile="/usr/sbin/cupsd"
> >> Nov  5 15:52:08 serverv2 kernel: [12193.222654] type=1503
> >> audit(1225925528.454:13): operation="socket_create" family="x25"
> >> sock_type="seqpacket" protocol=0 pid=6782 profile="/usr/sbin/cupsd"
> >> Nov  5 16:05:21 serverv2 kernel: [12986.184075] ata3: hard resetting link
> >> Nov  5 16:05:21 serverv2 kernel: [12986.184077] ata4: hard resetting link
> >> Nov  5 16:05:21 serverv2 kernel: [12986.668023] ata4: SATA link up 3.0
> >> Gbps (SStatus 123 SControl 300)
> >> Nov  5 16:05:21 serverv2 kernel: [12986.668709] ata3: SATA link up 3.0
> >> Gbps (SStatus 123 SControl 300)
> >> Nov  5 16:05:21 serverv2 kernel: [12986.670396] ata4.00: configured for
> >> UDMA/133
> >> Nov  5 16:05:21 serverv2 kernel: [12986.670419] ata4: EH complete
> >> Nov  5 16:05:21 serverv2 kernel: [12986.670494] sd 3:0:0:0: [sdd]
> >> 2930277168 512-byte hardware sectors (1500302 MB)
> >> Nov  5 16:05:21 serverv2 kernel: [12986.670517] sd 3:0:0:0: [sdd] Write
> >> Protect is off
> >> Nov  5 16:05:21 serverv2 kernel: [12986.670556] sd 3:0:0:0: [sdd] Write
> >> cache: enabled, read cache: enabled, doesn't support DPO or FUA
> >> Nov  5 16:05:21 serverv2 kernel: [12986.670941] ata3.00: configured for
> >> UDMA/133
> >> Nov  5 16:05:21 serverv2 kernel: [12986.670952] ata3: EH complete
> >> Nov  5 16:05:21 serverv2 kernel: [12986.670992] sd 2:0:0:0: [sdc]
> >> 2930277168 512-byte hardware sectors (1500302 MB)
> >> Nov  5 16:05:21 serverv2 kernel: [12986.671012] sd 2:0:0:0: [sdc] Write
> >> Protect is off
> >> Nov  5 16:05:21 serverv2 kernel: [12986.671050] sd 2:0:0:0: [sdc] Write
> >> cache: enabled, read cache: enabled, doesn't support DPO or FUA
> >> Nov  5 16:05:21 serverv2 kernel: [12986.682437] md: super_written gets
> >> error=-5, uptodate=0
> >> Nov  5 16:05:21 serverv2 kernel: [12986.704202] md: super_written gets
> >> error=-5, uptodate=0
> >> Nov  5 16:05:21 serverv2 kernel: [12986.757591] RAID5 conf printout:
> >> Nov  5 16:05:21 serverv2 kernel: [12986.757598]  --- rd:6 wd:4
> >> Nov  5 16:05:21 serverv2 kernel: [12986.757601]  disk 0, o:1, dev:sda1
> >> Nov  5 16:05:21 serverv2 kernel: [12986.757604]  disk 1, o:1, dev:sdb1
> >> Nov  5 16:05:21 serverv2 kernel: [12986.757606]  disk 2, o:0, dev:sdc1
> >> Nov  5 16:05:21 serverv2 kernel: [12986.757608]  disk 3, o:0, dev:sdd1
> >> Nov  5 16:05:21 serverv2 kernel: [12986.757610]  disk 4, o:1, dev:sde1
> >> Nov  5 16:05:21 serverv2 kernel: [12986.757612]  disk 5, o:1, dev:sdf1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769512] RAID5 conf printout:
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769520]  --- rd:6 wd:4
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769523]  disk 0, o:1, dev:sda1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769525]  disk 1, o:1, dev:sdb1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769527]  disk 2, o:0, dev:sdc1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769529]  disk 4, o:1, dev:sde1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769531]  disk 5, o:1, dev:sdf1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769549] RAID5 conf printout:
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769551]  --- rd:6 wd:4
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769552]  disk 0, o:1, dev:sda1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769554]  disk 1, o:1, dev:sdb1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769556]  disk 2, o:0, dev:sdc1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769558]  disk 4, o:1, dev:sde1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.769560]  disk 5, o:1, dev:sdf1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.789508] RAID5 conf printout:
> >> Nov  5 16:05:22 serverv2 kernel: [12986.789513]  --- rd:6 wd:4
> >> Nov  5 16:05:22 serverv2 kernel: [12986.789516]  disk 0, o:1, dev:sda1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.789518]  disk 1, o:1, dev:sdb1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.789520]  disk 4, o:1, dev:sde1
> >> Nov  5 16:05:22 serverv2 kernel: [12986.789522]  disk 5, o:1, dev:sdf1
> >>
> >>
> >>
> >> On Tue, Nov 4, 2008 at 4:57 PM, Mark Carlson <[EMAIL PROTECTED]>
> >> wrote:
> >>>
> >>> I'm sure someone can help you if you simply state why you think your
> >>> RAID array is failing.  What makes you think the array is failing?
> >>> Can you not access the file system on it?  Do you get an error message
> >>> that says that the array is bad?
> >>>
> >>> As far as I know, you may not even have created a file system on the
> >>> array, let alone mounted it.  This is why you need to provide the
> >>> steps you took to create the array.  I cannot stress this enough.
> >>> Tell us what you did and we can help you.  "I created a software raid
> >>> array" does not cut it.  It's not like we need screen shots or
> >>> anything.  If you did it using a GUI, tell us what GUI you used and
> >>> what buttons you pressed and anything you typed in.  If you used the
> >>> command line, tell us what you typed in.  That's it, that's all...
> >>> it's a pretty standard thing to do when you need help with a problem,
> >>> even non-computer problems.
> >>>
> >>> What you've done so far is analogous to going to the doctor and
> >>> saying: "I'm sick, give me pills that make me feel better."
> >>>
> >>> -Mark C.
> >>>
> >>> On 11/4/08, Chris q <[EMAIL PROTECTED]> wrote:
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > The stuff down at the bottom about the ata fail seemed important. I'm
> >>> > hoping
> >>> > someone can tell me why the raid array is failing.
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > On Tue, Nov 4, 2008 at 2:11 PM, Mark Carlson <[EMAIL PROTECTED]>
> >>> > wrote:
> >>> >
> >>> > >
> >>> > >
> >>> > > On 11/2/08, Chris q <[EMAIL PROTECTED]> wrote:
> >>> > > > Well, I attached an IDE CD-ROM and it immediately happened again.
> >>> > > > Here's the /var/log/messages output; I'm hoping you guys can help
> >>> > > > me figure this out.
> >>> > >
> >>> > > Posting the entire contents of /var/log/messages is helpful
> >>> > > sometimes... but not right now.  What in /var/log/messages do you
> >>> > > think relates to your problems, and why do you think it has
> >>> > > something to do with your CD-ROM drive?
> >>> > >
> >>> > > I would like to help you solve your problem, but you are missing
> >>> > > some key things here.
> >>> > >
> >>> > > 1. CLEARLY state your problem.  Include any error messages related
> >>> > > to the problem, no more and no less.  I still don't understand what
> >>> > > your problem is, and you seem to be withholding information.
> >>> > > 2. How are you setting up the software RAID?  Describe your steps.
> >>> > > Misconfiguration, rather than hardware or software errors, is often
> >>> > > the root cause of Linux problems.
> >>> > > 3. Have you tried setting up a RAID 0 array instead of RAID 5?  Use
> >>> > > two drives instead of all six.  Maybe one of your drives is bad.
> >>> > >
> >>> > >
> >>> > > -Mark C.
> >>>
> >>> _______________________________________________
> >>> clug-talk mailing list
> >>> [email protected]
> >>> http://clug.ca/mailman/listinfo/clug-talk_clug.ca
> >>> Mailing List Guidelines (http://clug.ca/ml_guidelines.php)
> >>> **Please remove these lines when replying
> >>
> >
>
>
