Re: [CentOS] Chromium browser for C6

2014-11-08 Thread Steve Brooks

On Fri, 7 Nov 2014, Les Mikesell wrote:


On Thu, Nov 6, 2014 at 4:11 PM, Johnny Hughes joh...@centos.org wrote:



I am sorry, but Google is not interested in supporting CentOS.



Is there some way we might utilize social media to help them
understand the demand?




I installed from the repo given here a month ago and it works fine. Is 
something wrong with this Google repo? Should I remove it and reinstall from 
the new one given here on this list?


Steve

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Chromium browser for C6

2014-11-08 Thread Steve Brooks

On Sat, 8 Nov 2014, Steve Brooks wrote:

I installed from the repo given here a month ago and it works fine. Is 
something wrong with this Google repo? Should I remove it and reinstall from the 
new one given here on this list?


Correction: this is the repo file:

[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1

It installs and works; YouTube videos play.
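For what it's worth, the repo file above has gpgcheck=1 but no gpgkey line, so yum would need the signing key imported separately. A sketch of the fuller file, assuming Google's published Linux signing key URL (verify before relying on it):

```ini
[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
# Without this line the key must be imported manually (rpm --import);
# this URL is Google's published Linux package signing key.
gpgkey=https://dl.google.com/linux/linux_signing_key.pub
```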


Re: [CentOS] [OT] Video card radiator

2014-02-11 Thread Steve Brooks
 Now that I look again, that appears to be the case.
 Not only that, the radiator is tilted so that
 only the right front corner is close to the board.

 you would be very surprised at just how much time is spent
 in trying to tear down a new system design. heat sinking
 is an on going challenge.

 No, I wouldn't.

 What surprises me is that its current mechanical
 arrangement corresponds to its original design.
 To me, the radiator seemed to have fallen down.
 For all I knew, it might have been held up by persuasion.

The design will be deliberate, angled so as to allow the heat to flow 
upwards along the radiator fins.

Steve


Re: [CentOS] [OT] Video card radiator

2014-02-10 Thread Steve Brooks

 I recently obtained a desktop computer with an nVidia video card:
 from lspci:
 02:00.0 VGA compatible controller: nVidia Corporation G84 [GeForce 8600 GT] (rev a1)
 I had to open the case to connect the DVD
 drive and saw what appears to be a fallen radiator:
 http://www.cs.ndsu.nodak.edu/~hennebry/computer/amd64-1.jpg
 http://www.cs.ndsu.nodak.edu/~hennebry/computer/amd64-2.jpg
 That nothing is shorted out appears to be a matter of luck.

 Any suggestions regarding how to prevent
 the radiator from shorting its video card?
 A suggestion of who to ask would be good.


Could be nothing wrong here; it is hard to tell from your pics. See:

http://www.trustedreviews.com/MSI-RX2600XT_PC-Component_review_msi-rx2600xt_Page-2

Steve


Re: [CentOS] [OT] Video card radiator

2014-02-10 Thread Steve Brooks
On Sun, 9 Feb 2014, Michael Hennebry wrote:

 I recently obtained a desktop computer with an nVidia video card:
 from lspci:
 02:00.0 VGA compatible controller: nVidia Corporation G84 [GeForce 8600 GT] (rev a1)
 I had to open the case to connect the DVD
 drive and saw what appears to be a fallen radiator:
 http://www.cs.ndsu.nodak.edu/~hennebry/computer/amd64-1.jpg
 http://www.cs.ndsu.nodak.edu/~hennebry/computer/amd64-2.jpg
 That nothing is shorted out appears to be a matter of luck.

 Any suggestions regarding how to prevent
 the radiator from shorting its video card?
 A suggestion of who to ask would be good.

Is this it?

http://www.xsreviews.co.uk/modules/FCKeditor/Upload/Image/MSI8600/MSI-8600-Heatpipes.jpg


[CentOS] ata marvel errors after kernel upgrade

2014-01-24 Thread Steve Brooks

Hi,

After upgrading to kernel 2.6.32-431.1.2.el6.x86_64 the messages below
keep appearing in /var/log/messages.

--
ata16.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
ata16.00: irq_stat 0x4001
scsi 16:0:0:0: [sg18] CDB: Read Capacity(10): 25 00 00 00 00 00 00 00 00 00
ata16.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
  res 00/00:00:00:00:00/00:00:00:00:00/00 Emask 0x3 (HSM violation)
ata16: hard resetting link
ata16: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata16.00: configured for UDMA/66
ata16: EH complete
--

Output from sginfo -i /dev/sg18 shows:

Command Queueing   0
Vendor:Marvell
Product:   91xx Config
Revision level:1.01

This is an unused SATA chipset on the motherboard; no
devices are plugged into it.

Also, oddly, I am getting:

[root@mach~]# sginfo -r
/dev/scanner-sg18 /dev/scanner /dev/sdb /dev/sda

/dev/sdb and /dev/sda are HDs; I have no idea what /dev/scanner is.

[root@mach~]# sginfo /dev/scanner
INQUIRY response (cmd: 0x12)

Device Type3
Vendor:Marvell
Product:   91xx Config
Revision level:1.01

-

There is no scanner attached to this machine, so I am confused by the 
above output.

Steve



Re: [CentOS] ata marvel errors after kernel upgrade

2014-01-24 Thread Steve Brooks
On Fri, 24 Jan 2014, John R Pierce wrote:

 On 1/24/2014 12:54 AM, Steve Brooks wrote:
 [root@mach~]# sginfo /dev/scanner
 INQUIRY response (cmd: 0x12)
 
 Device Type3
 Vendor:Marvell
 Product:   91xx Config
 Revision level:1.01

 fwiw, scsi device type '3' is a 'processor', which I'm guessing is a
 backplane controller ?   not sure why it has a dev node

Hmm, not sure. So far it does not seem to be causing any issues, but on the 
other hand I would feel better if I knew for sure exactly why it was 
there.

This machine does have an Adaptec hardware RAID card which presents as 
/dev/sda. The OS is on /dev/sdb, which is connected to an Intel C600/X79 
SATA chipset via an Icy Dock HD enclosure.

Steve



[CentOS] XFS : Taking the plunge

2014-01-21 Thread Steve Brooks

Hi All,

I have been trying out XFS, given it is going to be the file system of 
choice upstream in el7, starting with an Adaptec ASR71605 populated 
with sixteen 4TB WD enterprise hard drives. The OS is CentOS 6.4 
x86_64 and the machine has 64G of RAM.

This next part was not well researched, as I had a colleague bothering me 
late on Xmas Eve because he needed 14 TB immediately to move data to from an 
HPC cluster. I built an XFS file system straight onto the (RAID 6) logical 
device made up of all sixteen drives with:


 mkfs.xfs -d su=512k,sw=14 /dev/sda


where 512k is the stripe-unit size of the single logical device built on 
the RAID controller, and 14 is the total number of drives minus two 
(RAID 6 redundancy).
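As a sanity check on those numbers: sw is the count of data-bearing drives and the full data stripe is su × sw. A small sketch (array geometry taken from the post) that derives the mkfs invocation:

```shell
#!/bin/sh
# RAID 6 geometry from the post: 16 drives, 2 of them parity, 512k stripe unit.
DRIVES=16
PARITY=2
SU_KB=512
SW=$((DRIVES - PARITY))            # data-bearing drives: 14
FULL_STRIPE_KB=$((SU_KB * SW))     # one full data stripe: 7168 KB
echo "sw=${SW} full_stripe=${FULL_STRIPE_KB}k"
echo "mkfs.xfs -d su=${SU_KB}k,sw=${SW} /dev/sda"
```

This only reproduces the command from the post; whether the controller's reported stripe unit really is 512k still has to be checked against the Adaptec configuration.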

Any comments on the above from XFS users would be helpful!

I mounted the filesystem with the default options, assuming they would be 
sensible, but I now believe I should have specified the inode64 mount 
option to avoid all the inodes being stuck in the first TB.

The filesystem, however, is at 87% and does not seem to have had any 
issues/problems.

 df -h | grep raid
/dev/sda   51T   45T  6.7T  87% /raidstor

Another question: could I now safely remount with the inode64 option, 
or will this cause problems in the future? I read the passage below in the XFS 
FAQ but wondered if it has been fixed (backported?) in el6.4:

Starting from kernel 2.6.35, you can try it and then switch back. Older 
kernels have a bug leading to strange problems if you mount without 
inode64 again. For example, you can't access files and dirs that have been 
created with a 32-bit inode anymore.
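If the switch to inode64 works out, it also has to persist across reboots, or the next mount silently drops back to 32-bit inode allocation. A sketch of the /etc/fstab entry, assuming the device and mount point shown above (note that a full umount and mount, not just a remount, is needed for the option to take effect):

```
/dev/sda   /raidstor   xfs   defaults,inode64   0 0
```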

I also noted that xfs_check ran out of memory, and after some reading 
found it is recommended to use xfs_repair -n -vv instead as it 
uses far less memory. One question: why is xfs_check there at all?

I do have the option of moving the data elsewhere and rebuilding but this 
would cause some problems. Any advice much appreciated.

Steve


Re: [CentOS] XFS : Taking the plunge

2014-01-21 Thread Steve Brooks
On Tue, 21 Jan 2014, Keith Keller wrote:

 On 2014-01-21, Steve Brooks ste...@mcs.st-and.ac.uk wrote:

 mkfs.xfs -d su=512k,sw=14 /dev/sda

 where 512k is the Stripe-unit size of the single logical device built on
 the raid controller. 14 is from the total number of drives minus two
 (raid 6 redundancy).

 The usual advice on the XFS list is to use the defaults where possible.
 But you might want to ask there to see if they have any specific advice.

Thanks for the reply, Keith. Yes, I will ask on the list. I did read that 
when built on mdadm RAID devices it tunes itself, but with 
hardware RAID it may need manual tuning.


 I mounted the filesystem with the default options assuming they would be
 sensible but I now believe I should have specified the inode64 mount
 option to avoid all the inodes will being stuck in the first TB.

 The filesystem however is at 87% and does not seem to have had any
 issues/problems.

 df -h | grep raid
 /dev/sda   51T   45T  6.7T  87% /raidstor

 Wow, impressive!  I know of a much smaller fs which got bit by this
 issue.  What probably happened is, as a new fs, the entire first 1TB was
 able to be reserved for inodes.

Yes, and the output of df -i shows only:

Filesystem   Inodes       IUsed    IFree        IUse%
/dev/sda     2187329088   189621   2187139467   1%

So few inodes are used because the data is from HPC runs of MHD 
(magneto-hydrodynamics) simulations of the Sun; many of the files are 
snapshots of the simulation at various instants, 93G in size, etc.

 Another question is could I now safely remount with the inode64 option
 or will this cause problems in the future? I read this below in the XFS
 FAQ but wondered if it have been fixed (backported?) into el6.4?

 I have mounted a large XFS fs that previously didn't use inode64 with
 it, and it went fine.  (I did not attempt to roll back.)  You *must*
 umount and remount for this option to take effect.  I do not know when
 the inode64 option made it to CentOS, but it is there now.

OK, so I am wondering whether, for this filesystem, it is actually worth 
it, given that lack of inodes does not look like it will be an issue.


 I also noted that xfs_check ran out of memory and so after some reading
 noted that it is recommended to use xfs_repair -n -vv instead as it
 uses far less memory. One remark is so why is xfs_check there at all?

 The XFS team is working on deprecating it.  But on a 51TB filesystem
 xfs_repair will still use a lot of memory.  Using -P can help, but it'll
 still use quite a bit (depending on the extent of any damage and how many
 inodes, probably a bunch of other factors I don't know).

Yes, this bothers me a bit. I issued xfs_repair -n -vv and that told 
me I only needed 6G; I guess with only a few inodes and a clean 
filesystem that makes sense. I did read a good solution on the XFS mailing 
list which seems really neat:

Add an SSD of sufficient size/speed for swap duty to handle xfs_repair 
requirements for filesystems with arbitrarily high inode counts.  Create a 
100GB swap partition and leave the remainder unallocated. The unallocated 
space will automatically be used for GC and wear leveling, increasing the 
life of all cells in the drive.

Steve



[CentOS] Enterprise Class Hard Drive - Scam Warning

2013-10-02 Thread Steve Brooks

Hi All,

I know many of us here manage RAID on our CentOS-based servers, so this may 
be of interest to us all.

I ordered three new enterprise hard drives this month from a well-known 
UK online retailer. The drives arrived as new in their anti-static 
packaging. Before using one of the drives in a mission-critical hardware 
RAID I checked the SMART attributes and was amazed at what I saw; a 
few of the attributes are listed below:

  1 Raw_Read_Error_Rate     0x002f   200   200   051   Pre-fail  -   2600
  9 Power_On_Hours          0x0032   098   097   000   Old_age   -   2106
 12 Power_Cycle_Count       0x0032   100   100   000   Old_age   -     80
198 Offline_Uncorrectable   0x0030   196   196   000   Old_age   -    398
200 Multi_Zone_Error_Rate   0x0008   180   180   000   Old_age   -   4077


So for a brand-new packaged drive this was a bit of a surprise: 2106 
power-on hours (obviously this should be zero for a new drive) and 398 
Offline_Uncorrectable sectors; this is a well-used and faulty drive. I 
contacted the (very well known) manufacturer of the drive and asked for 
information on the serial number. I was told the serial number of the 
drive was region-specific to the USA and should not even be in the UK. I 
opened and tested the second and third drives, with similar results. I was 
told two of the drives had already been returned under warranty and 
replaced with new drives. Wow. I was also told by the online retailer that 
this is known as a grey import and is not that uncommon.

So it may be a good policy to check the SMART attributes of drives before 
deployment!
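That check is easy to script at deployment time. A sketch of one way to do it (not smartctl itself; here the attribute lines from the post are embedded as sample input, and the awk filter is what you would point at real `smartctl -A /dev/sdX` output) that flags any drive whose Power_On_Hours or Offline_Uncorrectable raw value is non-zero:

```shell
#!/bin/sh
# Sample `smartctl -A`-style attribute lines, taken from the post.
# In real use: smartctl -A /dev/sdX | <this awk filter>
sample='  9 Power_On_Hours          0x0032   098   097   000   Old_age   -   2106
 12 Power_Cycle_Count       0x0032   100   100   000   Old_age   -     80
198 Offline_Uncorrectable   0x0030   196   196   000   Old_age   -    398'

verdict=$(echo "$sample" | awk '
    # $2 is the attribute name, $NF its raw value.
    $2 == "Power_On_Hours"        && $NF + 0 > 0 { bad = 1 }
    $2 == "Offline_Uncorrectable" && $NF + 0 > 0 { bad = 1 }
    END { print (bad ? "NOT factory-fresh" : "looks new") }')
echo "$verdict"
```

With the sample data above this prints "NOT factory-fresh"; a genuinely new drive should show zero for both attributes.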

Cheers, Steve


Re: [CentOS] Enterprise Class Hard Drive - Scam Warning

2013-10-02 Thread Steve Brooks
 On Wed, Oct 02, 2013 at 05:24:54PM +0100, Steve Brooks wrote:

   9 Power_On_Hours          0x0032   098   097   000   Old_age   -   2106
  12 Power_Cycle_Count       0x0032   100   100   000   Old_age   -     80

 replaced with new drives. Wow... I was also told by the online retailer
 this is known as a grey import and is not that uncommon..

 Grey imports would not have been running for 87 days and power cycled 80
 times in that period.

 If the retailer doesn't refund your money then you need to escalate.

 And name the retailer...


The retailer is certainly willing to refund and the manufacturer is 
also willing to replace. The worrying part is that drives that were 
replaced under warranty should *not* find their way back onto the shelves, 
re-packaged as new enterprise-class drives.

Steve



Re: [CentOS] Getting a do_IRQ: xx.xxx No irq handler for vector(irq -1), any ideas?

2013-09-06 Thread Steve Brooks
Sep  1 04:04:02 sraid1v kernel: do_IRQ: 4.110 No irq handler for vector (irq -1)
Sep  1 04:59:22 sraid1v kernel: do_IRQ: 4.102 No irq handler for vector (irq -1)
Sep  1 05:42:22 sraid1v kernel: do_IRQ: 5.224 No irq handler for vector (irq -1)
Sep  1 05:43:42 sraid1v kernel: do_IRQ: 5.121 No irq handler for vector (irq -1)
Sep  1 05:56:02 sraid1v kernel: do_IRQ: 4.71 No irq handler for vector (irq -1)
Sep  1 06:30:22 sraid1v kernel: do_IRQ: 5.222 No irq handler for vector (irq -1)
Sep  1 06:32:22 sraid1v kernel: do_IRQ: 5.183 No irq handler for vector (irq -1)
Sep  1 06:53:22 sraid1v kernel: do_IRQ: 4.224 No irq handler for vector (irq -1)
Sep  1 07:04:42 sraid1v kernel: do_IRQ: 0.86 No irq handler for vector (irq -1)
Sep  1 07:39:02 sraid1v kernel: do_IRQ: 3.189 No irq handler for vector (irq -1)
Sep  1 07:54:42 sraid1v kernel: do_IRQ: 0.148 No irq handler for vector (irq -1)
Sep  6 15:04:42 sraid1v kernel: do_IRQ: 5.85 No irq handler for vector (irq -1)

=
lspci output

00:00.0 Host bridge: Intel Corporation Xeon E5/Core i7 DMI2 (rev 07)
00:01.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root 
Port 1a (rev 07)
00:02.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root 
Port 2a (rev 07)
00:03.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root 
Port 3a in PCI Express Mode (rev 07)
00:05.0 System peripheral: Intel Corporation Xeon E5/Core i7 Address Map, 
VTd_Misc, System Management (rev 07)
00:05.2 System peripheral: Intel Corporation Xeon E5/Core i7 Control 
Status and Global Errors (rev 07)
00:05.4 PIC: Intel Corporation Xeon E5/Core i7 I/O APIC (rev 07)
00:11.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Virtual Root Port (rev 05)
00:16.0 Communication controller: Intel Corporation C600/X79 series 
chipset MEI Controller #1 (rev 05)
00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network 
Connection (rev 05)
00:1a.0 USB controller: Intel Corporation C600/X79 series chipset USB2 
Enhanced Host Controller #2 (rev 05)
00:1b.0 Audio device: Intel Corporation C600/X79 series chipset High 
Definition Audio Controller (rev 05)
00:1c.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 1 (rev b5)
00:1c.1 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 2 (rev b5)
00:1c.2 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 3 (rev b5)
00:1c.3 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 4 (rev b5)
00:1c.4 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 5 (rev b5)
00:1c.5 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 6 (rev b5)
00:1c.7 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 8 (rev b5)
00:1d.0 USB controller: Intel Corporation C600/X79 series chipset USB2 
Enhanced Host Controller #1 (rev 05)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
00:1f.0 ISA bridge: Intel Corporation C600/X79 series chipset LPC 
Controller (rev 05)
00:1f.2 SATA controller: Intel Corporation C600/X79 series chipset 6-Port 
SATA AHCI Controller (rev 05)
00:1f.3 SMBus: Intel Corporation C600/X79 series chipset SMBus Host 
Controller (rev 05)
01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 
610] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev 
a1)
03:00.0 RAID bus controller: Adaptec Series 7 6G SAS/PCIe 3 (rev 01)
06:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB 
Host Controller
07:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB 
Host Controller
08:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB 
Host Controller
09:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA 
Controller (rev 01)
0a:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6315 Series 
Firewire Controller (rev 01)
0b:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9128 PCIe SATA 
6 Gb/s RAID controller with HyperDuo (rev 11)
ff:08.0 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link 0 
(rev 07)
ff:08.3 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 
0 (rev 07)
ff:08.4 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 
0 (rev 07)
ff:09.0 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link 1 
(rev 07)
ff:09.3 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 1 
(rev 07)
ff:09.4 System peripheral: Intel Corporation Xeon E5/Core i7 QPI Link Reut 1 
(rev 07)
ff:0a.0 System peripheral: Intel Corporation Xeon E5/Core i7 Power Control Unit 
0 (rev 07)
ff:0a.1 System peripheral: Intel Corporation Xeon E5/Core i7 Power Control Unit 
1 (rev 07)
ff:0a.2 System peripheral: Intel Corporation Xeon E5/Core i7 Power Control Unit 
2 (rev 07)
ff:0a.3 System peripheral: Intel 

Re: [CentOS] Getting a do_IRQ: xx.xxx No irq handler for vector (irq -1), any ideas?

2013-09-05 Thread Steve Brooks
On Thu, 5 Sep 2013, Howard Leadmon wrote:

 I setup a CentOS 6 server to use with KVM/QEMU, and I am getting the
 following error a good bit, granted it doesn't seem to be causing any
 trouble.   I figured I would post and see if anyone has any ideas on the
 issue, or if I can just dismiss it.

 If I look at my logging/dmesg, I see:
 do_IRQ: 18.201 No irq handler for vector (irq -1)

 The machine is an HP ProLiant DL580-G5 series with 4x Xeon hexacore
 processors installed, along with 32G of RAM at this time.  I am happy to
 post more dmesg output if it's of any help.

 Kernel version is:  2.6.32-358.18.1.el6.x86_64

I had this too. Found this from googling:

https://access.redhat.com/site/solutions/110053

--
Re: No irq handler for vector CentOS 6.4

Stop irqbalance

The message can appear when IRQs are moved between CPU cores. We can stop 
the irqbalance service from moving IRQs by turning it off with:

 chkconfig irqbalance off
 service irqbalance stop


Note that this will result in all interrupt requests being handled by a 
single CPU core, which may be detrimental to performance.
-
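If losing balancing entirely is too blunt, individual IRQs can instead be pinned by hand through /proc/irq/<N>/smp_affinity, which takes a hexadecimal CPU bitmask. A minimal sketch of building such a mask (the IRQ number 19 is purely illustrative, and writing the mask requires root, so that line is left commented):

```shell
#!/bin/sh
# smp_affinity is a hex bitmask: bit N set means the IRQ may run on CPU N.
cpu=3
mask=$(printf '%x' $((1 << cpu)))   # CPU 3 -> bit 3 -> mask "8"
echo "affinity mask for CPU ${cpu}: ${mask}"
# As root, pin (illustrative) IRQ 19 to that CPU:
# echo "$mask" > /proc/irq/19/smp_affinity
```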



Re: [CentOS] Getting a do_IRQ: xx.xxx No irq handler for vector (irq -1), any ideas?

2013-09-05 Thread Steve Brooks
 Howard Leadmon wrote:
  I setup a CentOS 6 server to use with KVM/QEMU, and I am getting the
 following error a good bit, granted it doesn't seem to be causing any
 trouble.   I figured I would post and see if anyone has any ideas on the
 issue, or if I can just dismiss it.

 If I look at my logging/dmesg, I see:

 do_IRQ: 19.217 No irq handler for vector (irq -1)
 snip
 Had the same problem a month or so ago (you can see the thread in the
 archives). It appears, from the many hours of googling I did, to be a
 problem explicitly with HP servers.

Hi Mark, I am seeing the same errors on many el6 machines (not HP), but 
based on Asus Sabertooth X79 workstation boards with Intel Core i7-3970X 
CPUs. Steve



-- 
Dr Stephen Brooks

http://www-solar.mcs.st-and.ac.uk/
Solar MHD Theory Group
Tel::  01334 463735
Fax::  01334 463748
E-mail :: ste...@mcs.st-andrews.ac.uk
---
Mathematical Institute
North Haugh
University of St. Andrews
St Andrews, Fife KY16 9SS
SCOTLAND
---



Re: [CentOS] elrepo kmod-sk98lin.i686

2013-09-03 Thread Steve Brooks
On Tue, 3 Sep 2013, Markus Falb wrote:


 On 02.Sep.2013, at 22:14, Steve Brooks wrote:

 [2] This motherboard has a Marvell 88E8052 as a second NIC, currently
 disabled in the BIOS. Problem is that the 88E8001 NIC has to be eth0 as
 it is the one used in a flexlm license server file. In Centos five how
 can you *force* a given NIC controller to always post at eth0 ?

 I think that setting HWADDR in ifcfg-eth0 should do the trick.


Thanks for the reply, Markus. I thought that too, but on reboot of the 
server it stayed at eth1. That is why I disabled the other NIC in the 
BIOS, deleted all NIC configurations, and reconfigured, after which it did 
come up as eth0.

It looks now like I need to keep the current NIC (the 88E8001 with RX 
overruns) as device eth0 so flexlm works, and enable the Marvell 
88E8052 in the BIOS to take over the ethernet traffic.

Cheers,

Steve



Re: [CentOS] elrepo kmod-sk98lin.i686

2013-09-03 Thread Steve Brooks

 You don't actually need hwaddr in your ifcfg-* files -- though it's
 probably not a bad thing to have the MAC in there.  [As Scott pointed out,
 it's all about what udev has in its rules.]



 You also have to look at /etc/udev/rules.d/70-persistent-net.rules


 +1
 Yes, udev rules for network devices need to be modified.



 (Learned that after cloning some VMs, where the hardware address was
 wrong.)


 And I did too after cloning a physical machine to newer hardware.  ;-)

I am using CentOS 5, and it seems that udev scheme came in with el6... so I 
can't go that way.

Something else that bothers me is that lspci reports the ethernet device 
correctly (Marvell Technology Group Ltd. 88E8001) but ifcfg-eth0 shows:

# head -3  /etc/sysconfig/network-scripts/ifcfg-eth0

# D-Link System Inc DGE-528T Gigabit Ethernet Adapter
DEVICE=eth0
ONBOOT=yes


# ethtool -i eth0

driver: skge
version: 1.6


So it seems it is using the correct driver for the 88E8001.


Steve




Re: [CentOS] elrepo kmod-sk98lin.i686

2013-09-03 Thread Steve Brooks
On Tue, 3 Sep 2013, Scott Robbins wrote:

 On Tue, Sep 03, 2013 at 02:02:08PM +0100, Steve Brooks wrote:

 You don't actually need hwaddr in your ifcfg-* files -- though it's
 probably not a bad thing to have the MAC in there.  [As Scott pointed out,
 it's all about what udev has in its rules.]



 You also have to look at /etc/udev/rules.d/70-persistent-net.rules


 I am using CentOS 5, and it seems that udev scheme came in with el6... so I
 can't go that way.

 Cursory googling indicates that you can create the file (and directories if
 needed).  The line in there would read something like

 SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
 ATTR{address}=="XX:XX:XX:XX:XX:XX", ATTR{type}=="1", KERNEL=="eth*",
 NAME="eth0"

 with XX:XX etc being the hardware address.  It's all on one line and
 quotation marks are used as shown.

 All this is untested by me on CentOS 5.x
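Rather than typing the MAC by hand, a rule line of that shape can be generated from the interface's address in sysfs. A sketch (the placeholder MAC and the append step are assumptions; on a live box you would read /sys/class/net/eth0/address and append to 70-persistent-net.rules as root):

```shell
#!/bin/sh
# On a real machine: mac=$(cat /sys/class/net/eth0/address)
mac="00:11:22:33:44:55"   # placeholder MAC for illustration
rule="SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", \
ATTR{address}==\"${mac}\", ATTR{type}==\"1\", KERNEL==\"eth*\", NAME=\"eth0\""
echo "$rule"
# Append as root (untested on CentOS 5, as noted above):
# echo "$rule" >> /etc/udev/rules.d/70-persistent-net.rules
```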


Thanks Scott, I do have a test machine I can try it on, so I will see what 
happens with your suggestion. Thanks, Steve



[CentOS] elrepo kmod-sk98lin.i686

2013-09-02 Thread Steve Brooks

Hi,

I noticed that one of our CentOS 5 servers with an onboard Marvell 
88E8001 was showing some packet overruns.

# ifconfig -a eth0 | grep "RX p"
 RX packets:1629537 errors:0 dropped:0 overruns:3694 frame:0

So I thought about using a driver from elrepo; the lspci IDs suggest 
installing the kmod-sk98lin driver. I yum-installed the kernel module and 
on reboot found the server was still using the old skge driver.

ethtool -i eth0
driver: skge
version: 1.6

Looking at modprobe.conf shows

# cat /etc/modprobe.conf | grep eth0
alias eth0 skge

[1] Should I just change skge in the above line to sk98lin and 
reboot?
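For what it's worth, the usual pattern when an out-of-tree kmod replaces an in-kernel driver is to both repoint the alias and blacklist the old module so it cannot be loaded first. A sketch of the /etc/modprobe.conf change (an untested assumption that the elrepo package names its module sk98lin):

```
alias eth0 sk98lin
blacklist skge
```

If skge were being loaded from the initrd, a mkinitrd rebuild might also be needed before the change sticks.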


[2] This motherboard has a Marvell 88E8052 as a second NIC, currently 
disabled in the BIOS. The problem is that the 88E8001 NIC has to be eth0, as 
it is the one used in a flexlm license server file. In CentOS 5, how 
can you *force* a given NIC controller to always appear as eth0?

Cheers, Steve


Re: [CentOS] Is X79 Motherboard supported by latest Centos 5.9 version?

2013-08-26 Thread Steve Brooks
On Sun, 25 Aug 2013, SilverTip257 wrote:

 On Sun, Aug 25, 2013 at 12:13 PM, Steve Brooks ste...@mcs.st-and.ac.ukwrote:

 On Sun, 25 Aug 2013, Ljubomir Ljubojevic wrote:

 On 08/25/2013 03:15 PM, Steve Brooks wrote:
 Ok here is the memory and kernel information it doesn't state PAE yet it
 seems to recognise the 32G.

 [root@app2 ~]# uname -a
 Linux app2 2.6.18-348.12.1.el5 #1 SMP Wed Jul 10 05:31:48 EDT 2013 i686
 i686 i386 GNU/Linux
 [root@app2 ~]# free
              total       used       free     shared    buffers     cached
 Mem:       3574676     425772    3148904          0      26748     266128
 -/+ buffers/cache:     132896    3441780
 Swap:      4192924          0    4192924
 [root@app2 ~]# cat /proc/meminfo | grep MemTotal
 MemTotal:  3574676 kB


 MemTotal:  3574676 kB =  MemTotal: 3,574,676 kB = 3.4GB, not 32GB




 [2] I am still confused how given the kernel reports as being

  2.6.18-348.12.1.el5 #1 SMP

  and not a PAE kernel.. Why am I seeing

   MemTotal:  3574676 kB


 free just outputs in kilobytes unless you pass it options.

 So you're only looking at 3.5 GB of memory with a non-PAE kernel in CentOS
 5 based on your numbers.  Go PAE and you'll be able to utilize the 32GB of
 RAM.

 # Megabytes
 free -m

 # Gigabytes
 free -g

 # any others options:
 man free

Doh.. thanks Mike.. a consequence of working through the night, I expect! OK, 
now just to decide if it is safe to run CentOS 5 on the X79 motherboard.



Re: [CentOS] Is X79 Motherboard supported by latest Centos 5.9 version?

2013-08-26 Thread Steve Brooks
On Mon, 26 Aug 2013, Michael Duvall wrote:

Hi Guys, In a bit of a pickle.. Is anyone running the latest CentOS 5.9
or earlier version with an Intel X79 based motherboard? I have a server
which needs the motherboard replacing asap and I have a spare Sabertooth
X79. I have a machine with a Sabertooth X79 motherboard to test on,
which boots up fine on the CentOS 5.9 i386 image from the troubled
server. However the output from lspci (see below) seems far less
descriptive than you normally see in CentOS 5 or CentOS 6. I am just a
bit worried that this might imply lack of support. I do not need
audio/firewire.. but need the essentials to work: memory, SATA etc. Also I
thought i386 installs could not see more than 3 and a bit gigs of RAM?
The test machine has 32G and meminfo shows

MemTotal:  3574676 kB
MemFree:   3146556 kB

 Steve,

 You need to run a PAE 32-bit kernel in order to see more than 3.1 GB.

 I wouldn't be too concerned about the lspci output yet.

Thanks for the help, Michael. I suppose I will test the machine as much as 
I can before putting it into service; fingers crossed! Steve



Re: [CentOS] Is X79 Motherboard supported by latest Centos 5.9 version?

2013-08-25 Thread Steve Brooks
On Sat, 24 Aug 2013, SilverTip257 wrote:

 On Sat, Aug 24, 2013 at 8:08 AM, Steve Brooks ste...@mcs.st-and.ac.ukwrote:


 Hi Guys, In a bit of a pickle.. Is anyone running the latest Centos 5.9
 or earlier version with an Intel X79 based motherboard. I have a server
 which needs the motherboard replacing asap and I have a spare Sabertooth
 X79. I have a machine with a Sabertooth X79 motherboard to test on
 which boots up fine on the Centos 5.9 i386 image from the troubled
 server. However the output from lspci (see below) seems far less
 descriptive than you normally see in Centos 5 or Centos 6. I am just a
 bit worried that this might imply lack of support. I do not need


 I'm not familiar with the Sabertooth X79, so specifics there are better
 suited for someone else.

 Though the lack of detail in the newer board might be more apparent if we
 had output from both systems to compare.


 audio/firewire.. but need the essentials to work: memory, SATA etc. Also I
 thought i386 installs could not see more than 3 and a bit Gigs of ram?
 The test machine has 32G and meminfo shows

 MemTotal:  3574676 kB
 MemFree:   3146556 kB


 Are you using a PAE kernel?
 If you want access to the additional memory above ~3.5GB you'll need a PAE
 kernel.

 CentOS 5 i386 has both non-PAE and PAE kernels.
 CentOS 6 i386 only has kernels with PAE support.

 Take a peek at the output from uname to find out what kernel you're running.




 I would really appreciate any advice.  Regards Steve

Thanks for the reply. I have included the lspci output from the new 
board running el6. I am sure that the kernel is not PAE but cannot 
check, as I am at home and the test machine is turned off at work. I did, 
however, run a yum update on the machine, so I am wondering if that would 
notice the memory was sufficient to benefit from a PAE kernel? I will pop 
into work later today and post the running kernel.


Steve


00:00.0 Host bridge: Intel Corporation Xeon E5/Core i7 DMI2 (rev 07)
00:01.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root 
Port 1a (rev 07)
00:02.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root 
Port 2a (rev 07)
00:03.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root 
Port 3a in PCI Express Mode (rev 07)
00:05.0 System peripheral: Intel Corporation Xeon E5/Core i7 Address Map, 
VTd_Misc, System Management (rev 07)
00:05.2 System peripheral: Intel Corporation Xeon E5/Core i7 Control 
Status and Global Errors (rev 07)
00:05.4 PIC: Intel Corporation Xeon E5/Core i7 I/O APIC (rev 07)
00:11.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Virtual Root Port (rev 05)
00:16.0 Communication controller: Intel Corporation C600/X79 series 
chipset MEI Controller #1 (rev 05)
00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network 
Connection (rev 05)
00:1a.0 USB controller: Intel Corporation C600/X79 series chipset USB2 
Enhanced Host Controller #2 (rev 05)
00:1b.0 Audio device: Intel Corporation C600/X79 series chipset High 
Definition Audio Controller (rev 05)
00:1c.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 1 (rev b5)
00:1c.1 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 2 (rev b5)
00:1c.2 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 3 (rev b5)
00:1c.3 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 4 (rev b5)
00:1c.4 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 5 (rev b5)
00:1c.5 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 6 (rev b5)
00:1c.7 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express 
Root Port 8 (rev b5)
00:1d.0 USB controller: Intel Corporation C600/X79 series chipset USB2 
Enhanced Host Controller #1 (rev 05)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
00:1f.0 ISA bridge: Intel Corporation C600/X79 series chipset LPC 
Controller (rev 05)
00:1f.2 SATA controller: Intel Corporation C600/X79 series chipset 6-Port 
SATA AHCI Controller (rev 05)
00:1f.3 SMBus: Intel Corporation C600/X79 series chipset SMBus Host 
Controller (rev 05)
01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 
610] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev 
a1)
03:00.0 RAID bus controller: Adaptec Series 7 6G SAS/PCIe 3 (rev 01)
06:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB 
Host Controller
07:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB 
Host Controller
08:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB 
Host Controller
09:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA 
Controller (rev 01)
0a:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6315 Series 
Firewire Controller (rev 01)
0b:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9128 PCIe SATA 
6 Gb/s RAID controller

Re: [CentOS] Is X79 Motherboard supported by latest Centos 5.9 version?

2013-08-25 Thread Steve Brooks
On Sun, 25 Aug 2013, Ljubomir Ljubojevic wrote:

 On 08/25/2013 11:41 AM, Steve Brooks wrote:

 Thanks for the reply, I have included the lspci output from the new
 board running el6. I am sure that the kernel is not PAE but cannot
 check, as I am at home and the test machine is turned off at work. I did,
 however, run a yum update on the machine, so I am wondering whether that
 would have noticed the memory was sufficient to benefit from a PAE
 kernel. I will pop into work later today and post the running kernel.


 Steve


 00:00.0 Host bridge: Intel Corporation Xeon E5/Core i7 DMI2 (rev 07)
 00:01.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root
 Port 1a (rev 07)
 00:02.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root
 Port 2a (rev 07)
 00:03.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root
 Port 3a in PCI Express Mode (rev 07)
 00:05.0 System peripheral: Intel Corporation Xeon E5/Core i7 Address Map,
 VTd_Misc, System Management (rev 07)
 00:05.2 System peripheral: Intel Corporation Xeon E5/Core i7 Control
 Status and Global Errors (rev 07)
 00:05.4 PIC: Intel Corporation Xeon E5/Core i7 I/O APIC (rev 07)
 00:11.0 PCI bridge: Intel Corporation C600/X79 series chipset PCI Express
 Virtual Root Port (rev 05)
 snip

 It is better if you provide the output of lspci -nn so DeviceIDs are
 also listed, and can be searched for kernel modules / device drivers.

 Like:
 00:14.5 USB controller [0c03]: Advanced Micro Devices [AMD] nee ATI
 SB7x0/SB8x0/SB9x0 USB OHCI2 Controller [1002:4399]

 Device ID in this case is: 1002:4399

 So you can go to http://elrepo.org/tiki/DeviceIDs and check if there are
 kmod packages to provide the driver for that device.

 Also worth noting is that ElRepo also provides 3.0.x kernels for CentOS
 5.x that should be kABI/ABI compatible. Latest is 3.0.93.

I have included the lspci -nn output below. The X79 board and Centos 5.9 
seem to be fine: it boots and the SATA disk works as expected. It would 
just be good to confirm that the system is safe to use.
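As a small aside for anyone matching these up by hand: the vendor:device pair can be pulled out of an lspci -nn line with a one-liner. The sample line is copied from the Marvell controller in the listing below; the parsing itself is just a sketch.

```shell
# Extract the PCI vendor:device ID from an lspci -nn line
line='0b:00.0 SATA controller [0106]: Device [1b4b:9130] (rev 11)'
ids=$(echo "$line" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p')
echo "vendor=${ids%:*} device=${ids#*:}"   # vendor=1b4b device=9130
```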

Ok, here is the memory and kernel information: it doesn't state PAE, yet 
it seems to recognise the 32G.

[root@app2 ~]# uname -a
Linux app2 2.6.18-348.12.1.el5 #1 SMP Wed Jul 10 05:31:48 EDT 2013 i686 
i686 i386 GNU/Linux
[root@app2 ~]# free
             total       used       free     shared    buffers     cached
Mem:       3574676     425772    3148904          0      26748     266128
-/+ buffers/cache:     132896    3441780
Swap:      4192924          0    4192924
[root@app2 ~]#  cat /proc/meminfo | grep MemTotal
MemTotal:  3574676 kB






00:00.0 Host bridge [0600]: Intel Corporation Device [8086:3c00] (rev 06)
00:01.0 PCI bridge [0604]: Intel Corporation Device [8086:3c02] (rev 06)
00:02.0 PCI bridge [0604]: Intel Corporation Device [8086:3c04] (rev 06)
00:03.0 PCI bridge [0604]: Intel Corporation Device [8086:3c08] (rev 06)
00:05.0 System peripheral [0880]: Intel Corporation Device [8086:3c28] 
(rev 06)
00:05.2 System peripheral [0880]: Intel Corporation Device [8086:3c2a] 
(rev 06)
00:05.4 PIC [0800]: Intel Corporation Device [8086:3c2c] (rev 06)
00:11.0 PCI bridge [0604]: Intel Corporation Device [8086:1d3e] (rev 05)
00:16.0 Communication controller [0780]: Intel Corporation Device 
[8086:1d3a] (rev 05)
00:19.0 Ethernet controller [0200]: Intel Corporation Device [8086:1503] 
(rev 05)
00:1a.0 USB Controller [0c03]: Intel Corporation Device [8086:1d2d] (rev 
05)
00:1b.0 Audio device [0403]: Intel Corporation Device [8086:1d20] (rev 05)
00:1c.0 PCI bridge [0604]: Intel Corporation Device [8086:1d10] (rev b5)
00:1c.1 PCI bridge [0604]: Intel Corporation Device [8086:1d12] (rev b5)
00:1c.2 PCI bridge [0604]: Intel Corporation Device [8086:1d14] (rev b5)
00:1c.3 PCI bridge [0604]: Intel Corporation Device [8086:1d16] (rev b5)
00:1c.4 PCI bridge [0604]: Intel Corporation Device [8086:1d18] (rev b5)
00:1c.5 PCI bridge [0604]: Intel Corporation Device [8086:1d1a] (rev b5)
00:1c.7 PCI bridge [0604]: Intel Corporation Device [8086:1d1e] (rev b5)
00:1d.0 USB Controller [0c03]: Intel Corporation Device [8086:1d26] (rev 
05)
00:1e.0 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] 
(rev a5)
00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:1d41] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation Device [8086:1d02] (rev 
05)
00:1f.3 SMBus [0c05]: Intel Corporation Device [8086:1d22] (rev 05)
01:00.0 VGA compatible controller [0300]: nVidia Corporation Device 
[10de:104a] (rev a1)
01:00.1 Audio device [0403]: nVidia Corporation Device [10de:0e08] (rev 
a1)
06:00.0 USB Controller [0c03]: Device [1b21:1042]
07:00.0 USB Controller [0c03]: Device [1b21:1042]
08:00.0 USB Controller [0c03]: Device [1b21:1042]
09:00.0 SATA controller [0106]: Device [1b21:0612] (rev 01)
0a:00.0 FireWire (IEEE 1394) [0c00]: VIA Technologies, Inc. Device 
[1106:3403] (rev 01)
0b:00.0 SATA controller [0106]: Device [1b4b:9130] (rev 11)
ff:08.0 System peripheral [0880]: Intel

Re: [CentOS] Is X79 Motherboard supported by latest Centos 5.9 version?

2013-08-25 Thread Steve Brooks
On Sun, 25 Aug 2013, Ljubomir Ljubojevic wrote:

 On 08/25/2013 03:15 PM, Steve Brooks wrote:
 Ok, here is the memory and kernel information: it doesn't state PAE, yet
 it seems to recognise the 32G.

 [root@app2 ~]# uname -a
 Linux app2 2.6.18-348.12.1.el5 #1 SMP Wed Jul 10 05:31:48 EDT 2013 i686
 i686 i386 GNU/Linux
 [root@app2 ~]# free
              total       used       free     shared    buffers     cached
 Mem:       3574676     425772    3148904          0      26748     266128
 -/+ buffers/cache:     132896    3441780
 Swap:      4192924          0    4192924
 [root@app2 ~]#  cat /proc/meminfo | grep MemTotal
 MemTotal:  3574676 kB


  MemTotal:  3574676 kB =  MemTotal: 3,574,676 kB = 3.4GB, not 32GB


 Btw., if you want to see if all devices have kernel modules, use the
 output of lspci -nnk.



[1] I include the output of lspci -nnk below.. the problem is I am not sure 
of the best way to interpret the information. Can I assume that if the 
kernel picks a driver for a device, e.g. the SATA controller [0106]: Device 
[1b4b:9130], then the device is properly supported by the kernel, *or* 
could it be picking a generic driver that might or might not work properly? 
I hope it will work properly, this being an enterprise distribution.


[2] I am still confused: given that the kernel reports as being

 2.6.18-348.12.1.el5 #1 SMP

 and not a PAE kernel, why am I seeing

  MemTotal:  3574676 kB


Steve



00:00.0 Host bridge [0600]: Intel Corporation Device [8086:3c00] (rev 06)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
00:01.0 PCI bridge [0604]: Intel Corporation Device [8086:3c02] (rev 06)
 Kernel driver in use: pcieport-driver
00:02.0 PCI bridge [0604]: Intel Corporation Device [8086:3c04] (rev 06)
 Kernel driver in use: pcieport-driver
00:03.0 PCI bridge [0604]: Intel Corporation Device [8086:3c08] (rev 06)
 Kernel driver in use: pcieport-driver
00:05.0 System peripheral [0880]: Intel Corporation Device [8086:3c28] 
(rev 06)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
00:05.2 System peripheral [0880]: Intel Corporation Device [8086:3c2a] 
(rev 06)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
00:05.4 PIC [0800]: Intel Corporation Device [8086:3c2c] (rev 06)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
00:11.0 PCI bridge [0604]: Intel Corporation Device [8086:1d3e] (rev 05)
 Kernel driver in use: pcieport-driver
00:16.0 Communication controller [0780]: Intel Corporation Device 
[8086:1d3a] (rev 05)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
00:19.0 Ethernet controller [0200]: Intel Corporation Device [8086:1503] 
(rev 05)
 Subsystem: ASUSTeK Computer Inc. Device [1043:849c]
 Kernel driver in use: e1000e
 Kernel modules: e1000e
00:1a.0 USB Controller [0c03]: Intel Corporation Device [8086:1d2d] (rev 
05)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
 Kernel driver in use: ehci_hcd
 Kernel modules: ehci-hcd
00:1b.0 Audio device [0403]: Intel Corporation Device [8086:1d20] (rev 05)
 Subsystem: ASUSTeK Computer Inc. Device [1043:8436]
 Kernel driver in use: snd_hda_intel
 Kernel modules: snd-hda-intel
00:1c.0 PCI bridge [0604]: Intel Corporation Device [8086:1d10] (rev b5)
 Kernel driver in use: pcieport-driver
00:1c.1 PCI bridge [0604]: Intel Corporation Device [8086:1d12] (rev b5)
 Kernel driver in use: pcieport-driver
00:1c.2 PCI bridge [0604]: Intel Corporation Device [8086:1d14] (rev b5)
 Kernel driver in use: pcieport-driver
00:1c.3 PCI bridge [0604]: Intel Corporation Device [8086:1d16] (rev b5)
 Kernel driver in use: pcieport-driver
00:1c.4 PCI bridge [0604]: Intel Corporation Device [8086:1d18] (rev b5)
 Kernel driver in use: pcieport-driver
00:1c.5 PCI bridge [0604]: Intel Corporation Device [8086:1d1a] (rev b5)
 Kernel driver in use: pcieport-driver
00:1c.7 PCI bridge [0604]: Intel Corporation Device [8086:1d1e] (rev b5)
 Kernel driver in use: pcieport-driver
00:1d.0 USB Controller [0c03]: Intel Corporation Device [8086:1d26] (rev 
05)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
 Kernel driver in use: ehci_hcd
 Kernel modules: ehci-hcd
00:1e.0 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] 
(rev a5)
00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:1d41] (rev 05)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
 Kernel modules: i8xx_tco
00:1f.2 SATA controller [0106]: Intel Corporation Device [8086:1d02] (rev 
05)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
 Kernel driver in use: ahci
 Kernel modules: ahci
00:1f.3 SMBus [0c05]: Intel Corporation Device [8086:1d22] (rev 05)
 Subsystem: ASUSTeK Computer Inc. Device [1043:84ef]
 Kernel driver in use: i801_smbus
 Kernel modules: i2c-i801
01:00.0 VGA compatible controller

[CentOS] Is X79 Motherboard supported by latest Centos 5.9 version?

2013-08-24 Thread Steve Brooks

Hi Guys, I am in a bit of a pickle.. Is anyone running the latest Centos 
5.9 or an earlier version with an Intel X79 based motherboard? I have a 
server which needs the motherboard replacing asap and I have a spare 
Sabertooth X79. I have a machine with a Sabertooth X79 motherboard to test 
on, which boots up fine on the Centos 5.9 i386 image from the troubled 
server. However, the output from lspci (see below) seems far less 
descriptive than you normally see in Centos 5 or Centos 6. I am just a 
bit worried that this might imply a lack of support. I do not need 
audio/firewire.. but the essentials (memory, SATA, etc.) need to work. 
Also I thought i386 installs could not see more than 3 and a bit Gigs of 
ram? The test machine has 32G and meminfo shows

MemTotal:  3574676 kB
MemFree:   3146556 kB


I would really appreciate any advice.  Regards Steve

e.g. The output from Sabertooth X79 and Centos 5.9 gives


00:00.0 Host bridge: Intel Corporation Device 3c00 (rev 06)
00:01.0 PCI bridge: Intel Corporation Device 3c02 (rev 06)
00:02.0 PCI bridge: Intel Corporation Device 3c04 (rev 06)
00:03.0 PCI bridge: Intel Corporation Device 3c08 (rev 06)
00:05.0 System peripheral: Intel Corporation Device 3c28 (rev 06)
00:05.2 System peripheral: Intel Corporation Device 3c2a (rev 06)
00:05.4 PIC: Intel Corporation Device 3c2c (rev 06)
00:11.0 PCI bridge: Intel Corporation Device 1d3e (rev 05)
00:16.0 Communication controller: Intel Corporation Device 1d3a (rev 05)
00:19.0 Ethernet controller: Intel Corporation Device 1503 (rev 05)
00:1a.0 USB Controller: Intel Corporation Device 1d2d (rev 05)
00:1b.0 Audio device: Intel Corporation Device 1d20 (rev 05)
00:1c.0 PCI bridge: Intel Corporation Device 1d10 (rev b5)
00:1c.1 PCI bridge: Intel Corporation Device 1d12 (rev b5)
00:1c.2 PCI bridge: Intel Corporation Device 1d14 (rev b5)
00:1c.3 PCI bridge: Intel Corporation Device 1d16 (rev b5)
00:1c.4 PCI bridge: Intel Corporation Device 1d18 (rev b5)
00:1c.5 PCI bridge: Intel Corporation Device 1d1a (rev b5)
00:1c.7 PCI bridge: Intel Corporation Device 1d1e (rev b5)
00:1d.0 USB Controller: Intel Corporation Device 1d26 (rev 05)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
00:1f.0 ISA bridge: Intel Corporation Device 1d41 (rev 05)
00:1f.2 SATA controller: Intel Corporation Device 1d02 (rev 05)
00:1f.3 SMBus: Intel Corporation Device 1d22 (rev 05)
01:00.0 VGA compatible controller: nVidia Corporation Device 104a (rev a1)
01:00.1 Audio device: nVidia Corporation Device 0e08 (rev a1)
06:00.0 USB Controller: Device 1b21:1042
07:00.0 USB Controller: Device 1b21:1042
08:00.0 USB Controller: Device 1b21:1042
09:00.0 SATA controller: Device 1b21:0612 (rev 01)
0a:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. Device 3403 (rev 01)
0b:00.0 SATA controller: Device 1b4b:9130 (rev 11)
ff:08.0 System peripheral: Intel Corporation Device 3c80 (rev 06)
ff:08.3 System peripheral: Intel Corporation Device 3c83 (rev 06)
ff:08.4 System peripheral: Intel Corporation Device 3c84 (rev 06)
ff:09.0 System peripheral: Intel Corporation Device 3c90 (rev 06)
ff:09.3 System peripheral: Intel Corporation Device 3c93 (rev 06)
ff:09.4 System peripheral: Intel Corporation Device 3c94 (rev 06)
ff:0a.0 System peripheral: Intel Corporation Device 3cc0 (rev 06)
ff:0a.1 System peripheral: Intel Corporation Device 3cc1 (rev 06)
ff:0a.2 System peripheral: Intel Corporation Device 3cc2 (rev 06)
ff:0a.3 System peripheral: Intel Corporation Device 3cd0 (rev 06)
ff:0b.0 System peripheral: Intel Corporation Device 3ce0 (rev 06)
ff:0b.3 System peripheral: Intel Corporation Device 3ce3 (rev 06)
ff:0c.0 System peripheral: Intel Corporation Device 3ce8 (rev 06)
ff:0c.1 System peripheral: Intel Corporation Device 3ce8 (rev 06)
ff:0c.2 System peripheral: Intel Corporation Device 3ce8 (rev 06)
ff:0c.6 System peripheral: Intel Corporation Device 3cf4 (rev 06)
ff:0c.7 System peripheral: Intel Corporation Device 3cf6 (rev 06)
ff:0d.0 System peripheral: Intel Corporation Device 3ce8 (rev 06)
ff:0d.1 System peripheral: Intel Corporation Device 3ce8 (rev 06)
ff:0d.2 System peripheral: Intel Corporation Device 3ce8 (rev 06)
ff:0d.6 System peripheral: Intel Corporation Device 3cf5 (rev 06)
ff:0e.0 System peripheral: Intel Corporation Device 3ca0 (rev 06)
ff:0e.1 Performance counters: Intel Corporation Device 3c46 (rev 06)
ff:0f.0 System peripheral: Intel Corporation Device 3ca8 (rev 06)
ff:0f.1 System peripheral: Intel Corporation Device 3c71 (rev 06)
ff:0f.2 System peripheral: Intel Corporation Device 3caa (rev 06)
ff:0f.3 System peripheral: Intel Corporation Device 3cab (rev 06)
ff:0f.4 System peripheral: Intel Corporation Device 3cac (rev 06)
ff:0f.5 System peripheral: Intel Corporation Device 3cad (rev 06)
ff:0f.6 System peripheral: Intel Corporation Device 3cae (rev 06)
ff:10.0 System peripheral: Intel Corporation Device 3cb0 (rev 06)
ff:10.1 System peripheral: Intel Corporation Device 3cb1 (rev 06)
ff:10.2 System peripheral: Intel Corporation Device 3cb2 (rev 06)

Re: [CentOS] What FileSystems for large stores and very very large stores?

2013-08-07 Thread Steve Brooks

On Tue, 6 Aug 2013, SilverTip257 wrote:

 On Tue, Aug 6, 2013 at 8:58 PM, Eliezer Croitoru elie...@ngtech.co.ilwrote:

 OK so back to the issue at hand.
 The issue is that I have a mail storage for more than 65k users per
 domain and ext4 doesn't support this size of directory list.
 ReiserFS indeed fits the purpose but ext4 doesn't even start to
 scratch it.
 Now the real question is:
 What FS would you use for a dovecot backend to store a domain with more
 than 65k users?


 XFS?
 Used for situations where one has lots or large, as Dave Chinner says [0],
 meaning lots of files or large files.

 [0] http://www.youtube.com/watch?v=i3IreQHLELU



 Eliezer

 On 07/05/2013 04:45 PM, Eliezer Croitoru wrote:
 I was learning about the different FS exists.
 I was working on systems that ReiserFS was the star but since there is
 no longer support from the creator there are other consolidations to be
 done.
 I want to ask about couple FS options.
 EXT4 which is amazing for one node but for more it's another story.
 I have heard about GFS2 and GlusterFS and read the docs and official
 materials from RH on them.
 In the RH docs it states the EXT4 limit files per directory is 65k and I
 had a directory which was pretty loaded with files and I am unsure
 exactly what was the size but I am almost sure it was larger the 65k
 files per directory.

 I was considering using GlusterFS for a very large storage system with
 NFS front.
 I am still unsure whether EXT4 should or shouldn't be able to handle more
 than 16TB since the linux kernel ext4 docs at:
 https://www.kernel.org/doc/Documentation/filesystems/ext4.txt in
 section 2.1
 it states: * ability to use filesystems > 16TB (e2fsprogs support not
 available yet).
 so can I use it or not?? if there are no tools to handle this size then
 I cannot trust it.

 I want to create a storage with more then 16TB based on GlusterFS since
 it allows me to use 2-3 rings FS which will allow me to put the storage
 in a form of:
 1 client - HA NFS servers - GlusterFS cluster.

 it seems to me that GlusterFS is a better choice than Swift since RH
 do provide support for it.

 Every response will be appreciated.

 Thanks,
 Eliezer


Just for interest, I have had two 44TB RAID 6 arrays using EXT4, running 
with heavy usage 24/7 on el6 since January 2013 without any problems so 
far. I rebuilt e2fsprogs from source, something along the lines below, 
looking at my notes.

wget http://atoomnet.net/files/rpm/e2fsprogs/e2fsprogs-1.42.6-1.el6.src.rpm
yum-builddep e2fsprogs-1.42.6-1.el6.src.rpm
rpmbuild --rebuild --recompile e2fsprogs-1.42.6-1.el6.src.rpm
cd /root/rpmbuild/RPMS/x86_64
rpm -Uvh *.rpm

## build array with a partition ###
parted /dev/sda mkpart primary ext4 1 -1
mkfs.ext4 -L sraid1v  -E stride=64,stripe-width=384  /dev/sda1

## build array without a partition ###

mkfs.ext4 -L sraid1v  -E stride=64,stripe-width=384  /dev/sda

Maybe this will help someone.
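For reference, the -E values above can be derived from the array geometry. Purely as an illustration (the geometry is not stated above), assuming a 256 KiB RAID chunk, 4 KiB ext4 blocks, and RAID 6 over 8 drives (6 data + 2 parity):

```shell
# stride = RAID chunk size / filesystem block size
# stripe-width = stride * number of data disks (RAID 6: drives - 2 parity)
chunk_kib=256; block_kib=4; drives=8; parity=2
stride=$((chunk_kib / block_kib))
stripe_width=$((stride * (drives - parity)))
echo "stride=$stride stripe-width=$stripe_width"   # stride=64 stripe-width=384
```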

Cheers Steve





-- 
Dr Stephen Brooks

http://www-solar.mcs.st-and.ac.uk/
Solar MHD Theory Group
Tel::  01334 463735
Fax::  01334 463748
E-mail :: ste...@mcs.st-andrews.ac.uk
---
Mathematical Institute
North Haugh
University of St. Andrews
St Andrews, Fife KY16 9SS
SCOTLAND
---

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] awk awk

2012-12-07 Thread Steve Brooks
On Fri, 7 Dec 2012, mark wrote:

 On 12/06/12 19:23, Steve Brooks wrote:

 Why are you doing all that piping and grepping? And the -F  confuses
 me...oh, I see. First, whitespace is the default field separator in awk.
 Then, are you asking if there's a line with a . in it, or just any
 non-whitespace? If the latter... mmm, I see, you *really* don't understand
 awk.


 Ok Mark very nice of you to help Craig. He does not claim to be an expert
 in awk or even competent ---  Which is obviously why he is asking for
 help in the first place. No need for the sarcasm and to belittle the
 poster. Remember lots of people looking for help will be directed to this
 answer and your help could be much appreciated

 Steve, first of all, this would have been more appropriate to email me
 offlist, unless you intend to try to do to me what you accuse me of
 doing to Craig.


Sorry Mark you are right I should not have sent it to the list, apologies 
to you and to the list for that mistake. Wasn't having a good day and 
should have gone straight to bed instead of trying to do more work! Things 
always look different in the morning.

Regards,

Steve



Re: [CentOS] awk awk

2012-12-06 Thread Steve Brooks

 Why are you doing all that piping and grepping? And the -F  confuses
 me...oh, I see. First, whitespace is the default field separator in awk.
 Then, are you asking if there's a line with a . in it, or just any
 non-whitespace? If the latter... mmm, I see, you *really* don't understand
 awk.


Ok Mark, very nice of you to help Craig. He does not claim to be an expert 
in awk or even competent --- which is obviously why he is asking for 
help in the first place. No need for the sarcasm and belittling of the 
poster. Remember, lots of people looking for help will be directed to this 
answer, and your help could be much appreciated.

Regards,

Steve


Re: [CentOS] gtkpod

2012-11-13 Thread Steve Brooks
On Sun, 11 Nov 2012, Nux! wrote:

 On 10.11.2012 22:17, Bob Hepple wrote:
 High and low searching (google, most of the repos in
 http://wiki.centos.org/AdditionalResources/Repositories) availed
 nothing - has
 anyone found a repo for gtkpod on centos-6? I seem to recall having
 to use
 fedora packages at some time in the past. Bit loath to do that again
 or compile
 from source.

There is a bunch of rpms here

http://lcfg-sl5.see.ed.ac.uk/see/sl6_64/

including

gtkpod-2.0.2-1.el6.x86_64.rpm
gtkpod-devel-2.0.2-1.el6.x86_64.rpm

Steve


[CentOS] fsck.ext4 problem 64bit

2012-10-25 Thread Steve Brooks

Hi All,

Trying to run fsck on a local linux raid partition gave the following.


[root@... /]# fsck.ext4 /dev/md0
e2fsck 1.41.12 (17-May-2010)
/dev/md0 has unsupported feature(s): 64bit
e2fsck: Get a newer version of e2fsck!

Odd, as the server is 64bit, running the latest kernel and using the 
latest e2fsprogs.x86_64.

Any ideas would be much appreciated.

Cheers Steve


Re: [CentOS] fsck.ext4 problem 64bit [SOLVED]

2012-10-25 Thread Steve Brooks
On Thu, 25 Oct 2012, Banyan He wrote:

 I think it comes from the app itself during verification. The
 question is whether soft raid is supported at all. Trying to build a
 test box for it.

 print_unsupp_features:
     if (features[0] || features[1] || features[2]) {
         int i, j;
         __u32 *mask = features, m;

         fprintf(stderr, _("%s has unsupported feature(s):"),
                 ctx->filesystem_name);

         for (i = 0; i < 3; i++, mask++) {
             for (j = 0, m = 1; j < 32; j++, m <<= 1) {
                 if (*mask & m)
                     fprintf(stderr, " %s",
                             e2p_feature2string(i, m));
             }
         }
         putc('\n', stderr);
         goto get_newer;
     }

 
 Banyan He
 Blog: http://www.rootong.com
 Email: ban...@rootong.com


Thanks for looking; the issue was that the filesystem was built with a 
version of e2fsprogs not consistent with upstream. Version 1.42-0 was 
compiled from source and used to create a filesystem of 20T, as that is 
what was needed on the server. No entry was made in the repo config file 
to prevent the rogue e2fsprogs from being replaced, so when an update 
took place it broke the filesystem utilities. After reinstalling the 
original 1.42-0 all is well again..

[root@... x86_64]# fsck.ext4 /dev/md0
e2fsck 1.42 (29-Nov-2011)
/dev/md0: clean, 198212/335761408 files, 3637729870/5372161872 blocks
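For anyone hitting the same thing, the feature flag that trips older e2fsck can be spotted in the superblock. A sketch (the feature string below is illustrative; on a real system it would come from dumpe2fs -h /dev/md0):

```shell
# If the superblock advertises the 64bit feature, e2fsprogs must be >= 1.42
features='Filesystem features: has_journal ext_attr resize_inode 64bit extent'
case "$features" in
  *64bit*) echo "64bit feature set: keep e2fsprogs >= 1.42 pinned" ;;
  *)       echo "no 64bit feature: stock EL6 e2fsprogs is fine" ;;
esac
```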

Cheers,

Steve


[CentOS] SATA errors in log

2012-06-22 Thread Steve Brooks

Hi,

I have a SATA PCIe 6Gbps 4 port controller card made by Startech. The 
kernel (Linux viz1 2.6.32-220.4.1.el6.x86_64) sees it as

  Marvell Technology Group Ltd. 88SE9123

I use it to provide extra SATA ports to a raid system.

The HD's are all WD2003FYYS and so run at 3Gbps on the 6Gbps controller.

However I am seeing lots of instances of errors like this

-

Jun 22 03:13:23 viz1 kernel: ata13.00: exception Emask 0x10 SAct 0x4 SErr 
0x40 action 0x6 frozen
Jun 22 03:13:23 viz1 kernel: ata13.00: irq_stat 0x0800, interface 
fatal error
Jun 22 03:13:23 viz1 kernel: ata13: SError: { Handshk }
Jun 22 03:13:23 viz1 kernel: ata13.00: failed command: WRITE FPDMA QUEUED
Jun 22 03:13:23 viz1 kernel: ata13.00: cmd 
61/e8:10:98:05:1b/01:00:66:00:00/40 tag 2 ncq 249856 out
Jun 22 03:13:23 viz1 kernel: ata13.00: status: { DRDY }
Jun 22 03:13:23 viz1 kernel: ata13: hard resetting link
Jun 22 03:13:24 viz1 kernel: ata13: SATA link up 3.0 Gbps (SStatus 123 
SControl 330)
Jun 22 03:13:24 viz1 kernel: ata13.00: configured for UDMA/133
Jun 22 03:13:24 viz1 kernel: ata13: EH complete

---

Vendor ID : 1b4b
Device ID : 9123

I tried to see what drivers were currently being used but the command 
below gave nothing

grep -i 1b4b /lib/modules/*/modules.alias | grep -i 9123

I have changed the card and cables but still get the same errors. I am 
wondering if the el6 kernel is using the correct drivers; I checked 
elrepo against the Vendor:Device ID pairing and it also came up blank.
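For what it's worth, a device can bind to a driver even when its vendor/device ID never appears in modules.alias: the ahci driver, for example, also matches by PCI class code (AHCI class 0106, prog-if 01), so an ID grep can legitimately come up empty. modules.alias stores PCI IDs zero-padded to eight upper-case hex digits; a sketch of building the exact token:

```shell
# Build the zero-padded modules.alias token for a PCI vendor/device pair
vendor=1b4b; device=9123
token=$(printf 'v0000%sd0000%s' \
        "$(echo "$vendor" | tr 'a-z' 'A-Z')" \
        "$(echo "$device" | tr 'a-z' 'A-Z')")
echo "$token"   # v00001B4Bd00009123
# then: grep "$token" /lib/modules/$(uname -r)/modules.alias
```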

Any ideas would be much appreciated.

Regards,

Steve



Re: [CentOS] SATA errors in log

2012-06-22 Thread Steve Brooks
On Fri, 22 Jun 2012, Reindl Harald wrote:



 Am 22.06.2012 13:58, schrieb Steve Brooks:
 I have a SATA PCIe 6Gbps 4 port controller card made by Startech. The
 kernel (Linux viz1 2.6.32-220.4.1.el6.x86_64) sees it as

   Marvell Technology Group Ltd. 88SE9123

 I use it to provide extra SATA ports to a raid system.
 The HD's are all WD2003FYYS and so run at 3Gbps on the 6Gbps controller.
 However I am seeing lots of instances of errors like this

 -

 Jun 22 03:13:23 viz1 kernel: ata13.00: exception Emask 0x10 SAct 0x4 SErr
 0x40 action 0x6 frozen
 Jun 22 03:13:23 viz1 kernel: ata13.00: irq_stat 0x0800, interface
 fatal error
 Jun 22 03:13:23 viz1 kernel: ata13: SError: { Handshk }
 Jun 22 03:13:23 viz1 kernel: ata13.00: failed command: WRITE FPDMA QUEUED
 Jun 22 03:13:23 viz1 kernel: ata13.00: cmd
 61/e8:10:98:05:1b/01:00:66:00:00/40 tag 2 ncq 249856 out
 Jun 22 03:13:23 viz1 kernel: ata13.00: status: { DRDY }
 Jun 22 03:13:23 viz1 kernel: ata13: hard resetting link
 Jun 22 03:13:24 viz1 kernel: ata13: SATA link up 3.0 Gbps (SStatus 123
 SControl 330)
 Jun 22 03:13:24 viz1 kernel: ata13.00: configured for UDMA/133
 Jun 22 03:13:24 viz1 kernel: ata13: EH complete

 ---

 Vendor ID : 1b4b
 Device ID : 9123

 I tried to see what drivers were currently being used but the command
 below gave nothing


 why do you care for drivers?

 this is how dying hard-drives always look in syslog

Hi Reindl,

I should have mentioned I swapped out the hard-drive and get the same 
errors on the new drive. I checked the SMART attributes of the drive and 
found nothing untoward; I also executed the

smartctl -t long

test, which came back error free.

Steve




Re: [CentOS] SATA errors in log

2012-06-22 Thread Steve Brooks
On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:

 Steve Brooks wrote:

 I have a SATA PCIe 6Gbps 4 port controller card made by Startech. The
 kernel (Linux viz1 2.6.32-220.4.1.el6.x86_64) sees it as

   Marvell Technology Group Ltd. 88SE9123

 I use it to provide extra SATA ports to a raid system.
 The HD's are all WD2003FYYS and so run at 3Gbps on the 6Gbps controller.
 However I am seeing lots of instances of errors like this

 -

 Jun 22 03:13:23 viz1 kernel: ata13.00: exception Emask 0x10 SAct 0x4 SErr
 0x40 action 0x6 frozen
 Jun 22 03:13:23 viz1 kernel: ata13.00: irq_stat 0x0800, interface
 fatal error
 Jun 22 03:13:23 viz1 kernel: ata13: SError: { Handshk }
 Jun 22 03:13:23 viz1 kernel: ata13.00: failed command: WRITE FPDMA QUEUED
 Jun 22 03:13:23 viz1 kernel: ata13.00: cmd
 61/e8:10:98:05:1b/01:00:66:00:00/40 tag 2 ncq 249856 out
 Jun 22 03:13:23 viz1 kernel: ata13.00: status: { DRDY }
 Jun 22 03:13:23 viz1 kernel: ata13: hard resetting link
 snip
 Crap. First question: what make  model are the drives on it? If they're
 Caviar Green, you're hosed. WD, and *maybe* Seagate as well, disabled a
 certain function you used to be able to set on the lower cost,
 consumer-grade models (in '09, I believe), and so when a server controller
 is trying to do i/o, and has a problem, in server-grade drives, it gives
 up after something like 6 sec, and does error handling, I *think* to other
 sectors. The consumer ones, on the other hand, keep trying for 1? 2?
  *minutes*; the disabled function allowed a user to tell it to give up in a
 shorter time. Meanwhile, a hardware controller will, as I said, have fits.

mark you'd think I just spent months dealing with this


As mentioned in the original post the drives are all WD2003FYYS. I am 
convinced it has nothing to do with TLER enabled on the WD drives as we 
run hundreds of them using linux mdadm raid on motherboard SATA 
controllers with no problems in the last eight or so years. This appears 
to be specific to the SATA PCIe 6Gbps 4 port controller card made by 
Startech. There are four other HD's (WD2003FYYS) in the machine running on 
an onboard Intel Corporation Patsburg 6-Port SATA AHCI Controller with 
no problems.
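One more data point on reading these logs: as far as I understand it, SError: { Handshk } during WRITE FPDMA QUEUED is a link-layer (signal-integrity) complaint rather than a media error, which fits a controller/cabling problem rather than a bad drive. A throwaway sketch for pulling the port and cause out of such a line (sample copied from above):

```shell
# Extract the ata port and SError cause from a kernel log line
logline='Jun 22 03:13:23 viz1 kernel: ata13: SError: { Handshk }'
port=$(echo "$logline" | sed -n 's/.* \(ata[0-9]*\): SError.*/\1/p')
cause=$(echo "$logline" | sed -n 's/.*{ \(.*\) }.*/\1/p')
echo "$port reported $cause"   # ata13 reported Handshk
```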

Steve



Re: [CentOS] SATA errors in log

2012-06-22 Thread Steve Brooks
On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:

 Steve Brooks wrote:
 On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:
 Steve Brooks wrote:

 I have a SATA PCIe 6Gbps 4 port controller card made by Startech. The
 kernel (Linux viz1 2.6.32-220.4.1.el6.x86_64) sees it as

   Marvell Technology Group Ltd. 88SE9123

 I use it to provide extra SATA ports to a raid system.
 The HD's are all WD2003FYYS and so run at 3Gbps on the 6Gbps
 controller. However I am seeing lots of instances of errors like this

 Jun 22 03:13:23 viz1 kernel: ata13.00: exception Emask 0x10 SAct 0x4
 SErr
 0x40 action 0x6 frozen
 Jun 22 03:13:23 viz1 kernel: ata13.00: irq_stat 0x0800, interface
 fatal error
 Jun 22 03:13:23 viz1 kernel: ata13: SError: { Handshk }
 Jun 22 03:13:23 viz1 kernel: ata13.00: failed command: WRITE FPDMA
 QUEUED
 Jun 22 03:13:23 viz1 kernel: ata13.00: cmd
 61/e8:10:98:05:1b/01:00:66:00:00/40 tag 2 ncq 249856 out
 Jun 22 03:13:23 viz1 kernel: ata13.00: status: { DRDY }
 Jun 22 03:13:23 viz1 kernel: ata13: hard resetting link
 snip
 Crap. First question: what make  model are the drives on it? If they're
 Caviar Green, you're hosed. WD, and *maybe* Seagate as well, disabled a
 certain function you used to be able to set on the lower cost,
 consumer-grade models (in '09, I believe), and so when a server
 controller is trying to do i/o, and has a problem, in server-grade drives,
  it gives up after something like 6 sec, and does error handling, I *
 think* to other sectors. The consumer ones, on the other hand, keep trying
 for 1? 2? *minutes*; the disabled function allowed a user to tell it to
 give up in a shorter time. Meanwhile, a hardware controller will, as I
 said,
 have fits.

mark you'd think I just spent months dealing with this


 As mentioned in the original post the drives are all WD2003FYYS. I am

 Missed the original post; sorry.

 convinced it has nothing to do with TLER enabled on the WD drives as we

 Thanks, that was the acronym I was trying to remember.

 run hundreds of them using linux mdadm raid on motherboard SATA
 controllers with no problems in the last eight or so years. This appears
 to be specific to the SATA PCIe 6Gbps 4 port controller card made by
 Startech. There are four other HD's (WD2003FYYS) in the machine running on
 an onboard Intel Corporation Patsburg 6-Port SATA AHCI Controller with
 no problems.

 I also see those are enterprise drives, not consumer grade, which
 implies that they ought to work. It still looks to me as though it's
 timing out, which I'd think is a function of the RAID card. You might see
 if it has any firmware configuration options.


Thanks for the reply. The card is purely JBOD; no RAID or other 
configuration is available. It simply presents the attached SATA devices 
to the OS. I am wondering if it could be a strange symptom of running 
SATA3 drives on this particular SATA6 controller, but that is just a stab 
in the dark.
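One hedged experiment for Handshk/link errors on these Marvell ports (an editor's suggestion worth testing, not a confirmed fix) is to pin the port's link rate to the drives' native 3.0Gbps, or to disable NCQ on that port, using the kernel's documented libata.force boot parameter appended to the kernel line in grub.conf:

```text
# Pin the ata13 link to the drive's native rate:
libata.force=13:3.0Gbps
# or, alternatively, disable NCQ on that port:
libata.force=13:noncq
```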


Re: [CentOS] SATA errors in log

2012-06-22 Thread Steve Brooks
On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:

 Steve Brooks wrote:
 On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:

 Steve Brooks wrote:
 On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:
 Steve Brooks wrote:

 I have a SATA PCIe 6Gbps 4 port controller card made by Startech. The
 kernel (Linux viz1 2.6.32-220.4.1.el6.x86_64) sees it as

   Marvell Technology Group Ltd. 88SE9123

 Is this your card?


Hi Mark,

Yes that is the very card, the page says the chipset is Marvell 88SE9128 
but lspci shows

Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller

Steve



Re: [CentOS] SATA errors in log

2012-06-22 Thread Steve Brooks
On Fri, 22 Jun 2012, Steve Brooks wrote:

 On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:

 Steve Brooks wrote:
 On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:

 Steve Brooks wrote:
 On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:
 Steve Brooks wrote:

 I have a SATA PCIe 6Gbps 4 port controller card made by Startech. The
 kernel (Linux viz1 2.6.32-220.4.1.el6.x86_64) sees it as

   Marvell Technology Group Ltd. 88SE9123

 Is this your card?


 Hi Mark,

 Yes that is the very card, the page says the chipset is Marvell 88SE9128
 but lspci shows

 Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller

It is odd because the kernel reports it as 88SE9123 while the web page, 
like the manual supplied with the card, says it is 88SE9128. Now the 
motherboard already has an onboard Marvell 88SE9128 controller which is 
correctly identified by the kernel and works properly, so I know the 
correct drivers are in the kernel, but the Startech card does not seem to 
be using them.

[root@viz1 ~]# lspci | grep SATA
00:1f.2 SATA controller: Intel Corporation Patsburg 6-Port SATA AHCI Controller 
(rev 05)
04:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 
Gb/s controller (rev 11)
05:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 
Gb/s controller (rev 11)
0f:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller 
(rev 01)
10:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9128 PCIe SATA 
6 Gb/s RAID controller with HyperDuo (rev 11)



Re: [CentOS] SATA errors in log

2012-06-22 Thread Steve Brooks
On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:

 Hi, Steve,

 Steve Brooks wrote:
 On Fri, 22 Jun 2012, Steve Brooks wrote:
 On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:
 Steve Brooks wrote:
 On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:
 Steve Brooks wrote:
 On Fri, 22 Jun 2012, m.r...@5-cent.us wrote:
 Steve Brooks wrote:

 I have a SATA PCIe 6Gbps 4 port controller card made by Startech.
 The kernel (Linux viz1 2.6.32-220.4.1.el6.x86_64) sees it as

   Marvell Technology Group Ltd. 88SE9123

 Is this your card?

 Yes that is the very card, the page says the chipset is Marvell 88SE9128
 but lspci shows

 Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller

 It is odd because the kernel reports it as 88SE9123 the web page says it
 is 88SE9128 as does the manual supplied with the card. Now the

 Yeah, I noticed that too, and thought it odd.
 snip
 I looked at the manual, and the only thing that came to mind was to try
 going into the BIOS and making sure that it was set to AHCI rather than,
 say, IDE, or whatever.


Thanks Mark for the reply. I hadn't thought about the card posting drives 
in the BIOS; I assumed only the onboard SATA devices would let you change 
the mode in the motherboard's BIOS. I will have a look on Monday to see if 
anything has appeared there. I guess the default mode on a SATA6 card 
would be AHCI, but yes, worth a check.

Cheers,

Steve



[CentOS] e2fsprogs 1.42 EXT4 16TB RAID

2012-02-24 Thread Steve Brooks

Hi All,

I was thinking of using ext4 on a new raid system of 20TB. I had read the 
article (halfway down the page) titled Linux File System Fsck Testing -- 
The Results Are In

https://www.ultimateeditionoz.com/forum/viewtopic.php?p=26180

They used CentOS 5.7 (2.6.18-274 kernel) and they state

The stock configuration was used throughout the testing except for one 
component. The e2fsprogs package was upgraded to version 1.42, enabling 
ext4 file systems larger than 16TB to be created.


I have looked for an el6 version of e2fsprogs 1.42 with no joy. Any 
ideas anyone?
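For context (editor's note): the 16TB ceiling exists because classic ext4 block numbers are 32-bit, so with the default 4 KiB block size a filesystem tops out at 2^32 blocks; e2fsprogs 1.42 lifts this by enabling the 64bit feature at mkfs time. The arithmetic:

```shell
# 32-bit block numbers * 4 KiB blocks = the classic 16 TiB ext4 limit
# that e2fsprogs >= 1.42 lifts (via the 64bit feature at mkfs time).
MAX_BYTES=$((4294967296 * 4096))                      # 2^32 blocks * 4096 bytes
MAX_TIB=$((MAX_BYTES / 1024 / 1024 / 1024 / 1024))
echo "max ext4 size with 32-bit block numbers: ${MAX_TIB} TiB"
```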

Cheers,

Steve


Re: [CentOS] script regular expression

2012-02-10 Thread Steve Brooks

On Thu, 9 Feb 2012, Alejandro Rodriguez Luna wrote:


Hi everyone, I was creating a script and I found something I can't figure out.

#!/bin/bash
for i in $(cat certificates.txt)
do 
    echo $i
done

I expected this

RSA Secure Server Certification Authority
VeriSign Class 1 CA Individual Subscriber-Persona Not Validated


but i got this

RSA
Secure
Server
Certification
Authority
VeriSign
Class
1
CA
Individual
Subscriber-Persona
Not
Validated

 Any ideas how to fix this? I mean, how can I get the whole line instead of 
word by word?


This will make it work your way.


#!/bin/sh

# Set IFS to a newline only, so the loop splits on lines, not words.
IFS='
'

for i in `cat certificates.txt` ; do
  echo "$i"
done
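For what it is worth, the more common idiom avoids for-over-cat entirely and reads the file line by line with a `while read` loop; a self-contained sketch (the file contents here are recreated from the thread's example):

```shell
#!/bin/sh
# Recreate the sample input from the thread.
printf '%s\n' 'RSA Secure Server Certification Authority' \
    'VeriSign Class 1 CA Individual Subscriber-Persona Not Validated' \
    > certificates.txt

# IFS= preserves leading whitespace; -r stops backslash interpretation.
while IFS= read -r line; do
    echo "$line"
done < certificates.txt
```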


Cheers,

Steve


Re: [CentOS] C6: ssh X-forwarding does not work

2011-10-28 Thread Steve Brooks

On Wed, 26 Oct 2011, John Hodrien wrote:


On Wed, 26 Oct 2011, Lorenzo Martínez Rodríguez wrote:


 Hi,

 I have a working configuration with CentOS 6. Can you try to set next
 lines in /etc/ssh/sshd_config and restart SSH server please?

 #X11Forwarding no
 X11Forwarding yes
 #X11DisplayOffset 10
 X11UseLocalhost yes


 In fact I do not have xorg-x11-auth rpm installed:

 [root@Carmen ~]# rpm -qa|grep -i xorg-x11-auth
 [root@Carmen ~]#

 and it works...


He meant xorg-x11-xauth and I'm 99% certain you *need* that installed on the
target machine for ssh forwarding to work.


I have a few sl6.1 workstations that do not have xorg-x11-xauth 
installed and it does *not* seem to appear in the repos. Yet 
X11-Forwarding works fine.


Steve


Re: [CentOS] CentOS 6.0 CR mdadm-3.2.2 breaks Intel BIOS RAID

2011-10-08 Thread Steve Brooks
On Sat, 8 Oct 2011, Trey Dockendorf wrote:

 I just upgraded my home KVM server to CentOS 6.0 CR to make use of the
 latest libvirt and now my RAID array with my VM storage is missing.  It
 seems that the upgrade to mdadm-3.2.2 is the culprit.

 This is the output from mdadm when scanning that array,

 # mdadm --detail --scan
 ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b
 ARRAY /dev/md126 metadata=imsm UUID=3d135942:f0fad0b0:33255f78:29c3f50a
 mdadm(IMSM): Unsupported attributes : 4000
 mdadm: IMSM metadata loading not allowed due to attributes incompatibility.
 mdadm(IMSM): Unsupported attributes : 4000
 mdadm: IMSM metadata loading not allowed due to attributes incompatibility.
 ARRAY /dev/md127 container=/dev/md0 member=0
 UUID=734f79cf:22200a5a:73be2b52:3388006b

 The error about IMSM shows up on google as something that happened to Fedora
 users during a FC14-FC15 upgrade.

 The server itself isn't old, it's a Supermicro 2U with Dual Xeon 5400 family
 of CPU.  There are two RAIDs on this one controller...a RAID1 which still
 functions and a RAID5 which is the one that is unable to be seen.  I don't
 know what IMSM is for, but the only thing strange about that array is it is
 2.7TB so the BIOS configured it as two separate arrays, one as 2TB and one
 as 700GB, but it was showing up to CentOS as a single volume.

 I downgraded to 3.2.1 , ran mdadm again and bam...it works,

 # mdadm --detail --scan
 ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b
 ARRAY /dev/md126 metadata=imsm UUID=3d135942:f0fad0b0:33255f78:29c3f50a
 ARRAY /dev/md127 container=/dev/md0 member=0
 UUID=691f975d:6beecfd8:67b39886:b7ee7f6e

 Hopefully this can be fixed before this version makes it to 6.1, though it's
 likely a problem for upstream RHEL as well.

 - Trey



Hmm, I recall seeing something like this on an sl6 box. I think it needed 
a /etc/mdadm.conf with some metadata id code. I am pretty sure I fixed 
it with

mdadm --detail --scan > /etc/mdadm.conf

and a reboot.
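The file that scan produces looks roughly like the following (editor's illustration: these ARRAY lines reuse the UUIDs from Trey's output, yours will differ, and the DEVICE line is an optional extra that mdadm --detail --scan does not emit):

```text
DEVICE partitions
ARRAY /dev/md0 metadata=imsm UUID=734f79cf:22200a5a:73be2b52:3388006b
ARRAY /dev/md126 metadata=imsm UUID=3d135942:f0fad0b0:33255f78:29c3f50a
ARRAY /dev/md127 container=/dev/md0 member=0 UUID=691f975d:6beecfd8:67b39886:b7ee7f6e
```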

Steve


Re: [CentOS] Keyboards

2011-07-25 Thread Steve Brooks
On Mon, 25 Jul 2011, John Doe wrote:

 From: m.r...@5-cent.us m.r...@5-cent.us

 On my new Dell system, it's got a cardreader. More to the point, it's
 got an idiot menu key... *right* next to the right control key, and just
 where the annoying keyboard design has it cut down from the
 oversize space bar
 The result is that just trying to type, I regularly hit the thing. Does
 anyone have an idea how to disable this forever (if y'all don't, I'm
 prying the key *off*).


 See maybe showkey, dumpkeys, loadkeys and xmodmap...

 JD



If it is that bad, swap out the keyboard; they are ten a penny. Maybe you 
would enjoy the revenge element of butchering the offending key :-) ...


Steve


Re: [CentOS] 40TB File System Recommendations

2011-04-14 Thread Steve Brooks

On Thu, 14 Apr 2011, Peter Kjellström wrote:


On Tuesday, April 12, 2011 03:10:33 PM Lars Hecking wrote:

OTOH, gparted doesn't see my software raid array either. Gparted is
rather practical for regular plain vanilla partitions, but for more
advanced stuff and filesystems, fdisk is probably better.


 For filesystems > 2TB, you're better off grabbing a copy of GPT fdisk.


Even better, use LVM and stay away from partitioning completely.


Is it not ok to build the filesystem straight onto the device and not 
bother with partitioning at all?


Steve


Re: [CentOS] 40TB File System Recommendations

2011-04-12 Thread Steve Brooks

On Tue, 12 Apr 2011, Marian Marinov wrote:


On Tuesday 12 April 2011 10:36:54 Alain Péan wrote:

Le 12/04/2011 09:23, Matthew Feinberg a écrit :

Hello All

I have a brand spanking new 40TB Hardware Raid6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into multiple file systems as we are going
to use it for backups. Other factors is performance and reliability.

CentOS 5.6

array is /dev/sdb

So here is what I have tried so far
reiserfs is limited to 16TB
ext4 does not seem to be fully baked in 5.6 yet. parted 1.8 does not
support creating ext4 (strange)

Anyone work with large filesystems like this that have any
suggestions/recommendations?


Hi Matthew,

I would go for xfs, which is now supported in CentOS. This is what I use
for a 16 TB storage, with CentOS 5.3 (Rocks Cluster), and it works fine.
No problem with lengthy fsck, as with ext3 (which does not support such
capacities). I did not try yet ext4...

Alain


I have Raid6 Arrays with 30TB. We have tested XFS and its write performance
was really disappointing. So we looked at Ext4. It is really good for our
workloads, but it lacks the ability to grow over 16TB. So we created two
partitions on the raid with ext4.

The RAID rebuild time is around 2 days, max 3 if the workload is higher. So I
presume that for 40TB it will be around 4 days.

Marian



Out of interest, how much *memory* would you need in your raid management 
node to support fsck on a 40TB array? I imagine it would be very high.
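As a rough back-of-envelope sketch (an editor's assumption, not a measured figure): e2fsck keeps several in-memory bitmaps of about one bit per block, plus per-inode bookkeeping, so on a 40 TiB filesystem with 4 KiB blocks each bitmap alone is over a gigabyte and the total can reach several GB:

```shell
# 40 TiB with 4 KiB blocks: size of one bit-per-block fsck bitmap.
BYTES=$((40 * 1024 * 1024 * 1024 * 1024))   # 40 TiB
BLOCKS=$((BYTES / 4096))                    # ~10.7e9 blocks
BITMAP=$((BLOCKS / 8))                      # one bitmap in bytes (~1.25 GiB)
echo "blocks=$BLOCKS one_bitmap_bytes=$BITMAP"
```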


Steve


Re: [CentOS] WD RE4-GP Dropped From Raid

2011-04-02 Thread Steve Brooks
On Sat, 2 Apr 2011, Gerhard Schneider wrote:


 Did you check if you already have the G05 firmware on all RE4-GP?
 The G04 firmware is not suitable for RAID.

 http://forum.qnap.com/viewtopic.php?f=182t=25057

Yes all our drives are on G05 firmware.

Cheers,

Steve



[CentOS] WD RE4-GP Dropped From Raid

2011-04-01 Thread Steve Brooks

Hi All,

I have a WD RE4-GP which was dropped by an Adaptec 51645 RAID controller. I ran a 
smartctl short test on the drive and it failed with a read error. So I ran 
the Western Digital's own diagnostic software (DLGDIAG), both the short 
and extended test on the drive and it passed with no errors. So I ran the 
smartctl short test again and again it failed. I then ran smartctl long 
test and that also failed at the same logical block address as the short 
test. Can anyone shed any light on this? Can I RMA the drive even if it 
passes Western Digital's own tests? Thanks in advance for any advice.

Cheers,

Steve


Re: [CentOS] WD RE4-GP Dropped From Raid

2011-04-01 Thread Steve Brooks
On Fri, 1 Apr 2011, compdoc wrote:

 I have a WD RE4-GP which dropped an Adaptec 51645 RAID
 controller. I ran a smartctl short test on the drive and it failed
 with a read error.

 What does smart say about reallocated sectors, pending sector count, drive
 temperature, etc?


They are clean: no reallocated sectors and no pending sectors. The 
temperature is about 35C.

Steve


Re: [CentOS] question on software raid

2011-04-01 Thread Steve Brooks
On Fri, 1 Apr 2011, Jerry Geis wrote:

 dmesg is not reporting any issues.

 The /proc/mdstat looks fine.
 md0 : active raid1 sdb1[1] sda1[0]
X blocks [2/2]  [UU]

 however /var/log/messages says:

 smartd[3392] Device /dev/sda 20 offline uncorrectable sectors

 The machine is running fine.. raid array looks good - what
 is up with smartd?

This page is one I like for understanding SMART attributes.

http://www.z-a-recovery.com/man-smart.htm

Steve



Re: [CentOS] RAID support in kernel?

2011-01-31 Thread Steve Brooks
On Mon, 31 Jan 2011, Les Bell wrote:


 Kenni Lund ke...@kelu.dk wrote:


 Fakeraid is a proprietary software RAID
 solution, so if your motherboard suddenly decides to die, how will
 you then get access to your data?
 

 Obviously, you restore it from a backup. RAID is not a substitute for
 backups.

 Best,

 --- Les Bell

Hmm... What percentage of home users keep backups of their systems and 
data? Not enough, methinks.

Go with Linux software raid; it is very stable and, as Kenni Lund states, 
more portable than the software raid found on many motherboard chipsets 
that aspire/claim to be hardware raid.

Steve

-- 
Dr Stephen Brooks

http://www-solar.mcs.st-and.ac.uk/
Solar MHD Theory Group
Tel::  01334 463735
Fax::  01334 463748
E-mail :: ste...@mcs.st-andrews.ac.uk
---
Mathematical Institute
North Haugh
University of St. Andrews
St Andrews, Fife KY16 9SS
SCOTLAND
---



Re: [CentOS] networking problem

2010-10-07 Thread Steve Brooks

At a guess your DNS is down or, as Ben suggests, there are no nameservers 
in your

/etc/resolv.conf
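For reference, a minimal /etc/resolv.conf looks like this (editor's illustration: the search domain and addresses are documentation placeholders, not real resolvers):

```text
search example.com
nameserver 192.0.2.53
nameserver 192.0.2.54
```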


Steve

On Thu, 7 Oct 2010, Ben McGinnes wrote:

 On 7/10/10 6:20 PM, Smith Erick Marume-Bahizire wrote:
 Hello,
 Please, I want help with a CentOS server. I can ping the gateway and
 my eth1 IP address but I can't browse from my server. Could you help
 me with the commands that will enable the network, because I've
 already configured my iptables and it shows me that everything is
 ok. Please help. Thank you.

 Okay, firstly, when asking for help with a new issue, it is best to
 start a new message rather than reply to a message on an unrelated
 topic.  Otherwise those of us using threaded mail clients (like Mutt
 or Thunderbird) might overlook the query.

 Secondly, we need a little detail about your current network
 configuration and what you have tried.

 Is it only browsing that is not working, or do other services not work
 either?

 Can you send through the output of:

 route -n
 cat /etc/resolv.conf


 Regards,
 Ben




Re: [CentOS] EXT4 mount issue : update

2010-10-06 Thread Steve Brooks

Thanks Gordon, that is a relief. I am still inclined to move the data and 
rebuild with all the current default EXT4 attributes.

Steve





On Tue, 5 Oct 2010, Gordon Messmer wrote:

  On 10/05/2010 12:50 PM, Steve Brooks wrote:
 tune4fs listed the filesystem state as not clean. I remounted them as
 read only while I decided what to do. The next day I check them again and
 tune4fs reports the filesystem state as clean. Could this be normal
 behaviour?

 Yes. not clean is fine.  A mounted FS without a journal will always be
 not clean.

 not clean with errors is a cause for concern.



Re: [CentOS] EXT4 mount issue

2010-10-05 Thread Steve Brooks


Hi,

The /etc/mke4fs.conf is below. This file has never been edited by me or 
anyone else.


[defaults]
 base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
 blocksize = 4096
 inode_size = 256
 inode_ratio = 16384

[fs_types]
 ext3 = {
 features = has_journal
 }
 ext4 = {
 features = has_journal,extents,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize
 inode_size = 256
 }
 ext4dev = {
 features = has_journal,extents,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize
 inode_size = 256
 options = test_fs=1
 }
 small = {
 blocksize = 1024
 inode_size = 128
 inode_ratio = 4096
 }
 floppy = {
 blocksize = 1024
 inode_size = 128
 inode_ratio = 8192
 }
 news = {
 inode_ratio = 4096
 }
 largefile = {
 inode_ratio = 1048576
 blocksize = -1
 }
 largefile4 = {
 inode_ratio = 4194304
 blocksize = -1
 }
 hurd = {
  blocksize = 4096
  inode_size = 128
 }





On Mon, 4 Oct 2010, Miguel Medalha wrote:


 The defaults are determined by /etc/mke2fs.conf.  If you've modified or
 removed that file, mkfs.ext4 will behave differently

 On my CentOS 5.5 systems, defaults for ext4 reside on /etc/mke4fs.conf.






Re: [CentOS] EXT4 mount issue

2010-10-05 Thread Steve Brooks

Hi Brent, Thanks for the reply.

I have to make a decision, yes, and it is not an easy one either. I have read 
so many different reports and opinions that I now feel my brain has become 
rather scrambled. I am wondering now if I should just have smaller 
filesystems and stick with EXT3. I have never used XFS, so I have no 
experience with it, and this I guess leaves me short of confidence.


On Tue, 5 Oct 2010, Brent L. Bates wrote:

  It is up to you to decide.  Do you go with a file system, XFS, that has
  15 YEARS of use and EXABYTES of storage behind it?  Or do you go with
  ext4, something that has just been declared `stable'?  How important is the
  data on those drives?  Will you have excellent tape backups of that data, not
  just copies on another ext4 file system, but verified tape backups?

  I prefer using something I know works and has survived numerous system
  crashes with out missing a beat.


Steve


Re: [CentOS] EXT4 mount issue : update

2010-10-05 Thread Steve Brooks

In the two 11T EXT4 filesystems (raid level 6), referred to in previous 
posts, built on devices

/dev/sdb
/dev/sdc

tune4fs listed the filesystem state as not clean. I remounted them as 
read only while I decided what to do. The next day I checked them again and 
tune4fs reported the filesystem state as clean. Could this be normal 
behaviour?

Steve







[CentOS] EXT4 mount issue

2010-10-04 Thread Steve Brooks
Hi All,

When a couple of EXT4 filesystems are mounted in a server I get the 
messages

Oct  1 18:49:42 sraid3 kernel: EXT4-fs (sdb): mounted filesystem without journal
Oct  1 18:49:42 sraid3 kernel: EXT4-fs (sdc): mounted filesystem without journal

in the system logs.

My confusion is why are they mounted without a journal?  They were both 
created with

mkfs -t ext4 /dev/sdb
mkfs -t ext4 /dev/sdc

and mounted in  /etc/fstab with

/dev/sdb  /sraid3  ext4    defaults   1 2
/dev/sdc  /sraid4  ext4    defaults   1 2

Both are 11T, so I would prefer as much stability as possible. IO 
performance is not an issue on either device, just integrity, so I thought 
the journal would be the default and necessary.

Any thoughts would be much appreciated.

Steve







Re: [CentOS] EXT4 mount issue

2010-10-04 Thread Steve Brooks

Hi,

Below is the output from tune4fs. From what people are saying it looks 
like ext4 may not be the way to go.

[r...@sraid3 ~]# tune4fs -l /dev/sdb
tune4fs 1.41.9 (22-Aug-2009)
Filesystem volume name:   none
Last mounted on:  /sraid3/sraid3
Filesystem UUID:  adc08889-f6a9-47c6-a570-e51c480240a3
Filesystem magic number:  0xEF53
Filesystem revision #:1 (dynamic)
Filesystem features:  ext_attr resize_inode dir_index filetype 
sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options:(none)
Filesystem state: not clean
Errors behavior:  Continue
Filesystem OS type:   Linux
Inode count:  731381760
Block count:  2925527040
Reserved block count: 146276352
Free blocks:  499285087
Free inodes:  730894437
First block:  0
Block size:   4096
Fragment size:4096
Reserved GDT blocks:  326
Blocks per group: 32768
Fragments per group:  32768
Inodes per group: 8192
Inode blocks per group:   512
Filesystem created:   Wed Feb 10 14:49:46 2010
Last mount time:  Fri Oct  1 18:49:29 2010
Last write time:  Mon Oct  4 01:32:34 2010
Mount count:  3
Maximum mount count:  37
Last checked: Mon Jun  7 15:51:57 2010
Check interval:   15552000 (6 months)
Next check after: Sat Dec  4 14:51:57 2010
Reserved blocks uid:  0 (user root)
Reserved blocks gid:  0 (group root)
First inode:  11
Inode size:   256
Required extra isize: 28
Desired extra isize:  28
Default directory hash:   half_md4
Directory Hash Seed:  78a52c1a-0e24-4e94-b1dc-e193e7cac68d



On Mon, 4 Oct 2010, Miguel Medalha wrote:


 Can you give us the output of tune4fs -l /dev/sdb ?

 Does it show  has_journal under Filesystem features?

 If it doesn't, you can input the following:

 tune4fs -o journal_data

 The option journal_data fits the case in which you don't care about the 
 fastest speed but you put your focus on data integrity instead.

 By the way, if you only used the defaults when creating the ext4 filesystems, 
 I am afraid that you didn't use the ext4 specific features that give it a 
 real advantage over ext3. Some of them cannot be configured later; they have 
 to be specified when you create the filesystem.
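An editor's sketch of the journal part (assuming e2fsprogs is available; CentOS 5 ships the ext4 variants as mkfs.ext4/tune4fs/dumpe4fs in the e4fsprogs package, while other distributions use tune2fs/dumpe2fs), showing that a journal can be added to an existing ext4 filesystem in place on a small file-backed image:

```shell
# Build a 64 MiB file-backed ext4 image *without* a journal,
# then add one afterwards -- no reformat needed.
dd if=/dev/zero of=/tmp/ext4demo.img bs=1M count=64 2>/dev/null
mkfs.ext4 -q -F -O ^has_journal /tmp/ext4demo.img

# Feature list before: no has_journal.
dumpe2fs -h /tmp/ext4demo.img 2>/dev/null | grep 'Filesystem features'

# Add the journal in place.
tune2fs -O has_journal /tmp/ext4demo.img >/dev/null

# Feature list after: has_journal is now present.
dumpe2fs -h /tmp/ext4demo.img 2>/dev/null | grep 'Filesystem features'
```

Note that data=journal, as in the quoted fstab line, is a mount option layered on top of this; the filesystem must have a journal first.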




Steve


Re: [CentOS] EXT4 mount issue

2010-10-04 Thread Steve Brooks

Hi Miguel,

Thanks for the reply.

 What people are saying? So instead of understanding and solving some issue

I was just a little worried at the response from Brent earlier, quote: 
Don't play Russian Roulette and use ext4. The really odd thing here 
is that on another raid disk, created the exact same way with the exact 
same parameters to mkfs and identically mounted, I have an EXT4 
filesystem with different attributes; see below. Surely that should not 
happen. Also, as I understand it, one of the defaults is to have the 
journal enabled, not disabled.

[r...@vraid3 ~]# tune4fs -l /dev/sdc
tune4fs 1.41.9 (22-Aug-2009)
Filesystem volume name:   none
Last mounted on:  /vraid3/vraid3
Filesystem UUID:  9e1d0cbf-f5f8-4116-9b60-a9e3c07da220
Filesystem magic number:  0xEF53
Filesystem revision #:1 (dynamic)
Filesystem features:  has_journal ext_attr resize_inode dir_index 
filetype needs_recovery extent flex_bg sparse_super large_file huge_file 
uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options:(none)
Filesystem state: clean
Errors behavior:  Continue
Filesystem OS type:   Linux
Inode count:  609484800
Block count:  2437935360
Reserved block count: 121896768
Free blocks:  2107056555
Free inodes:  609318938
First block:  0
Block size:   4096
Fragment size:4096
Reserved GDT blocks:  442
Blocks per group: 32768
Fragments per group:  32768
Inodes per group: 8192
Inode blocks per group:   512
Flex block group size:16
Filesystem created:   Mon Jan 18 19:16:27 2010
Last mount time:  Fri Oct  1 19:38:31 2010
Last write time:  Fri Oct  1 19:38:31 2010
Mount count:  1
Maximum mount count:  28
Last checked: Fri Oct  1 18:50:37 2010
Check interval:   15552000 (6 months)
Next check after: Wed Mar 30 18:50:37 2011
Lifetime writes:  146 GB
Reserved blocks uid:  0 (user root)
Reserved blocks gid:  0 (group root)
First inode:  11
Inode size:   256
Required extra isize: 28
Desired extra isize:  28
Journal inode:8
Default directory hash:   half_md4
Directory Hash Seed:  820f990d-9af5-4fb3-9e58-1c4bd504ca12
Journal backup:   inode blocks





On Mon, 4 Oct 2010, Miguel Medalha wrote:


  Below is the output from tune4fs. From what people are saying it looks
  like ext4 may not be the way to go.
 

 What people are saying? So instead of understanding and solving some issue 
 you just jump ship, maybe only to find some other issue there?

 ext4 is stable and works perfectly. You just have to configure it properly, 
 as with anything.

 Can you still recreate the filesystems? If so, study the parameters for ext4 
 and use them. You will want extents, because it provides a much better use 
 of disk space and avoids fragmentation.

 As you are, you can still create a journal on the filesystem you have, using 
 tune4fs. Look under switch -o (options).

 As an example, I give you some of what I have here with a ext4 partition:

 In /etc/fstab:

 LABEL=/data1/data   ext4 
 defaults,data=journal,acl,user_xattr 1 2

 tune2fs gives me the following:

 Filesystem features:  has_journal ext_attr resize_inode dir_index 
 filetype needs_recovery extent flex_bg sparse_super large_file huge_file 
 uninit_bg dir_nlink extra_isize
 Filesystem flags: signed_directory_hash
 Default mount options:journal_data user_xattr acl

 Regards






Re: [CentOS] EXT4 mount issue

2010-10-04 Thread Steve Brooks
On Mon, 4 Oct 2010, Miguel Medalha wrote:


  Filesystem state: not clean
 

 You should really look at that line and at why it is there.

Thanks again Miguel,

Yep I have mounted the filesystems as read only for the time being. I am 
inclined to move the data and rebuild the filesystem from scratch as the 
output from tune4fs seems to indicate that the EXT4 filesystem has many 
important attributes missing.

Steve



[CentOS] Odd INFO 120 seconds in logs for 2.6.18-194.3.1

2010-06-07 Thread Steve Brooks

Hi,

Since upgrading to 2.6.18-194 I am getting odd messages in the logs, 
such as:

sraid3 kernel: INFO: task pdflush:259 blocked for more than 120 seconds.


The output from

 grep '120 seconds' /var/log/messages | tr : ' ' | awk '{print $10}' | sort | 
 uniq -c

   6 nfsd
   4 pdflush

This is from an NFS server that has been flaky since the upgrade. I have 
an identical server running 2.6.18-164.11.1 and no such messages are 
seen.

Is anyone else seeing this and/or does anyone know what is going on?

Steve

The messages appear as follows.



Jun  7 19:45:21 sraid3 kernel: INFO: task nfsd:3369 blocked for more than 
120 seconds.
Jun  7 19:45:21 sraid3 kernel: echo 0 > 
/proc/sys/kernel/hung_task_timeout_secs disables this message.
Jun  7 19:45:21 sraid3 kernel: nfsd  D 8101062a9100 0 
3369  1  3370  3368 (L-TLB)
Jun  7 19:45:21 sraid3 kernel:  8101053519e0 0046 
0020 810105ea0780
Jun  7 19:45:21 sraid3 kernel:  8101180e203a 000a 
81013ecb77a0 8101062a9100
Jun  7 19:45:21 sraid3 kernel:  0cab54499f77 26c3 
81013ecb7988 000102d8
Jun  7 19:45:21 sraid3 kernel: Call Trace:
Jun  7 19:45:21 sraid3 kernel:  [80064b29] 
_spin_lock_bh+0x9/0x14
Jun  7 19:45:21 sraid3 kernel:  [800ec2a2] inode_wait+0x0/0xd
Jun  7 19:45:21 sraid3 kernel:  [800ec2ab] inode_wait+0x9/0xd
Jun  7 19:45:21 sraid3 kernel:  [80063a16] 
__wait_on_bit+0x40/0x6e
Jun  7 19:45:21 sraid3 kernel:  [800ec2a2] inode_wait+0x0/0xd
Jun  7 19:45:21 sraid3 kernel:  [80063ab0] 
out_of_line_wait_on_bit+0x6c/0x78
Jun  7 19:45:21 sraid3 kernel:  [800a0aec] 
wake_bit_function+0x0/0x23
Jun  7 19:45:21 sraid3 kernel:  [8003dbbf] ifind_fast+0x6e/0x83
Jun  7 19:45:21 sraid3 kernel:  [80023290] 
iget_locked+0x59/0x149
Jun  7 19:45:21 sraid3 kernel:  [88cf0b0f] 
:ext4:ext4_iget+0x16/0x65a
Jun  7 19:45:21 sraid3 kernel:  [800a0abe] 
autoremove_wake_function+0x0/0x2e
Jun  7 19:45:21 sraid3 kernel:  [88cf8171] 
:ext4:ext4_get_dentry+0x3b/0x87
Jun  7 19:45:21 sraid3 kernel:  [88dcf36b] 
:exportfs:find_exported_dentry+0x43/0x480
Jun  7 19:45:21 sraid3 kernel:  [88ddc753] 
:nfsd:nfsd_acceptable+0x0/0xd8
Jun  7 19:45:21 sraid3 kernel:  [88de074f] 
:nfsd:exp_get_by_name+0x5b/0x71
Jun  7 19:45:21 sraid3 kernel:  [88de0d3e] 
:nfsd:exp_find_key+0x89/0x9c
Jun  7 19:45:21 sraid3 kernel:  [8008b4b1] 
__wake_up_common+0x3e/0x68
Jun  7 19:45:21 sraid3 kernel:  [8009b1dd] 
set_current_groups+0x159/0x164
Jun  7 19:45:21 sraid3 kernel:  [88dcf7f3] 
:exportfs:export_decode_fh+0x4b/0x50
Jun  7 19:45:21 sraid3 kernel:  [88ddcac5] 
:nfsd:fh_verify+0x29a/0x4bd
Jun  7 19:45:21 sraid3 kernel:  [88dddccf] 
:nfsd:nfsd_open+0x20/0x184
Jun  7 19:45:21 sraid3 kernel:  [88dddffb] 
:nfsd:nfsd_write+0x89/0xd5
Jun  7 19:45:21 sraid3 kernel:  [88de4b1a] 
:nfsd:nfsd3_proc_write+0xea/0x109
Jun  7 19:45:21 sraid3 kernel:  [88dda1db] 
:nfsd:nfsd_dispatch+0xd8/0x1d6
Jun  7 19:45:21 sraid3 kernel:  [88d5a651] 
:sunrpc:svc_process+0x454/0x71b
Jun  7 19:45:21 sraid3 kernel:  [80064644] __down_read+0x12/0x92
Jun  7 19:45:21 sraid3 kernel:  [88dda5a1] :nfsd:nfsd+0x0/0x2cb
Jun  7 19:45:21 sraid3 kernel:  [88dda746] 
:nfsd:nfsd+0x1a5/0x2cb
Jun  7 19:45:21 sraid3 kernel:  [8005dfb1] child_rip+0xa/0x11
Jun  7 19:45:21 sraid3 kernel:  [88dda5a1] :nfsd:nfsd+0x0/0x2cb
Jun  7 19:45:21 sraid3 kernel:  [88dda5a1] :nfsd:nfsd+0x0/0x2cb
Jun  7 19:45:21 sraid3 kernel:  [8005dfa7] child_rip+0x0/0x11







Re: [CentOS] Odd INFO 120 seconds in logs for 2.6.18-194.3.1

2010-06-07 Thread Steve Brooks
On Mon, 7 Jun 2010, Scott Silva wrote:

 on 6-7-2010 1:12 PM Steve Brooks spake the following:

 Hi,

 Since upgrading to 2.6.18-194 I am getting odd messages in the logs.
 Such as;

 sraid3 kernel  INFO  task pdflush 259 blocked for more than 120 seconds.


 The output from

 grep '120 seconds' /var/log/messages | tr : ' ' | awk '{print $10}' | sort 
 | uniq -c

6 nfsd
4 pdflush

 This is from an NFS server that since the upgrade has been flakey. I have
 an identical server running 2.6.18-164.11.1 and no such messages are
 seen.

 is anyone else seeing this and/or know what is going on?

 Steve

 Maybe boot back to the older kernel and see if the messages stop... That would
 eliminate any sudden coincidental hardware problems.

Thanks Scott,

Yes, this is a good idea; however, being a production server, it can't be 
done immediately. Also, with the kernel messages being labelled INFO, it 
could possibly just be the new kernel being more informative and nothing 
serious, but it does look alarming. Having done more research, it appears 
that upstream distros have had problems with this too, including software 
lockups and hangs.
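For what it's worth, the threshold the message refers to is tunable at runtime. A sketch (needs root, and the knob only exists on kernels built with the hung-task watchdog):

```shell
# Show the current hung-task warning threshold, in seconds
cat /proc/sys/kernel/hung_task_timeout_secs
# Setting it to 0 silences the warnings entirely, as the log line itself suggests
echo 0 > /proc/sys/kernel/hung_task_timeout_secs
# To persist across reboots, add the setting to sysctl.conf
echo 'kernel.hung_task_timeout_secs = 0' >> /etc/sysctl.conf
```

Note this only suppresses the reporting; it does not address whatever is making nfsd and pdflush stall in the first place.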

Steve



[CentOS] 24G running on centos 5 desktop.

2010-06-02 Thread Steve Brooks

Hi All,

Thought I would let those who are interested know that I had success 
running 24 GB on an Asus P6T with a 24 GB kit of Kingston DDR3. While I was 
putting this together I saw lots of forum posts asking whether anyone had 
tried it. Well, we did here at work, and everything checks out, including a 
memtest86 run overnight.

I have a fluid dynamics simulation running on it at 90% memory usage and 
all looks great.

Intel(R) Core(TM) i7 CPU  960  @ 3.20GHz 
MemTotal: 24676112 kB
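A couple of quick checks to confirm the kernel actually sees all the installed memory (no root needed):

```shell
# Total usable memory as the kernel reports it, in kB
grep MemTotal /proc/meminfo
# The same figure converted to GB
awk '/MemTotal/ {printf "%.1f GB\n", $2/1024/1024}' /proc/meminfo
```

The reported total will be a little under the raw 24 GB, since the kernel reserves some memory for itself.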


Cheers,

Steve




Re: [CentOS] Server HD failed and I think I am hosed

2010-02-19 Thread Steve Brooks

Do not run fsck on faulty hardware.

I would remove the drive and attach it to another Linux box with free 
storage space on a filesystem larger than the whole damaged drive. Use 
ddrescue to recover as much of the failed drive as possible, then mount 
the image produced by ddrescue and copy off everything you can.
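A minimal GNU ddrescue sketch, assuming the failing drive shows up as /dev/sdb on the rescue box and /mnt/big has enough free space (both names are placeholders):

```shell
# First pass plus three retries on bad sectors; the log file lets you
# stop and resume without losing progress
ddrescue -r3 /dev/sdb /mnt/big/disk.img /mnt/big/disk.log
# Mount the rescued image read-only and copy the data off
mkdir -p /mnt/rescue
mount -o loop,ro /mnt/big/disk.img /mnt/rescue
# For a whole-disk image you may need -o offset=... to reach a partition
```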

Best Wishes,

Steve

On Thu, 18 Feb 2010, Slack-Moehrle wrote:


 a hd in my server failed.

 I noticed when I went to SSH in and get a copy of some files in /var/www/html
 I tried to tar the files and I was told No, read-only file system.

 I restarted and tried running FSCK manually (without -a or -p) and I get 
 inode errors, short reads, etc.

 I tried booting to my CentOS 53 install DVD. I do linux rescue and I get to 
 where it wants to know where CentOS images are. I select local CD-ROM and it 
 just spits it out at me and tells me to try someplace else. Even though I 
 booted from the CD!

 I am not sure what else to do. I dont think I can mount this on my macBook to 
 do anything. But I really need just the data from /var/www/html. I have a 
 backup that is a week old, but I know changes were made late last week that I 
 would like to not have to do again

 Where is my backup of the changes I made? Well my SSD drive failed in my 
 laptop and I dont think I have them...

 Can anyone provide advice as to what to do next?

 Also, what would have caused this all of the sudden? This box has been 
 running fine for months.




Re: [CentOS] SAS raid controllers

2010-02-17 Thread Steve Brooks


Hi Gordon,

I am running a 51645, two 31605s and four 3405 SAS RAID controllers from 
Adaptec, plus a few more of the older 2820SA cards.


Two of the 3405 controllers have been running for nearly three years 
without any issues at all.


The 5-series card has been working fine. These cards run very hot, so even 
in an air-conditioned room make sure there is plenty of cooled air passing 
over the card's heat sink; there is no fan on the card, just the heat sink. 
As a comparison, in the same room with the same case, fan setup and 
backplane, the 3405 was running at 38 C but the 51645 was running at 
105 C... not good. This causes the physical HDs to drop out of the RAID, 
among other problems, even though the controller reports its temperature as 
normal (this is apparently normal for the 5-series cards from what I read). 
I installed a PCI slot cooler (Antec Cyclone) positioned next to the card, 
which brought the controller temperature down to 48 C. Remember this is in 
an already air-conditioned room held at 16 C.
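With Adaptec cards the controller's own temperature reading can be polled from the OS using the arcconf utility. A sketch (controller number 1 is an assumption, and the exact output wording varies by card and firmware):

```shell
# Dump the adapter-level configuration, which includes the temperature sensor
arcconf GETCONFIG 1 AD | grep -i temperature
```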


Also beware of the HDs you buy for the RAID. I bought the WD2002 (RE4-GP) 
2 TB enterprise-class drives. They all need a firmware update, which is a 
nightmare when you have 48 of them to update in a working RAID. All of them 
also have to be jumpered down to SATA 150, as they have a serious issue 
that even the firmware update cannot fix.
Hope this helps,

Steve






On Tue, 16 Feb 2010, Gordon McLellan wrote:


Is anyone running either the newish Adaptec 5805 or the new LSI (3ware) 9750 
sas raid controllers in a production environment with
Centos 5.3/5.4?

The low price of these cards makes me suspicious, compared to the more 
expensive pre-merger 3ware cards and considerably more
expensive Areca ARC-1680.  I've been 'burned' by the low cost of Promise raid 
cards (just as this group pointed out, they're crap
cards!), but I'm still not convinced that most expensive == best.

When it comes to parallel scsi cards, Adaptec has always been my choice 
regardless of platform, but I've never used their serial scsi
products.  I have several older pre-merger 3ware cards installed and have been 
working flawlessly for years, but under a windows
environment.

I noticed none of these manufacturers are listed on the upstream provider's HCL, yet they 
all eagerly claim Linux support on their
respective websites.  The Adaptec website actually names both CentOS and the 
upstream provider as 'supported'.

Kind Regards,
Gordon



