RobH added subscribers: mark, RobH.
RobH assigned this task to Smalyshev.
RobH added a comment.
OK, some working notes:
Both wdqs1001/wdqs1002 are Dell PowerEdge R420s that have space for 8 total SFF
(2.5") disks. We presently have two 300GB SSDs installed in each.
robh@wdqs1002:~$ sudo megacli -PDList -aALL
Adapter #0
Enclosure Device ID: 32
Slot Number: 0
Drive's position: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 0
WWN: 500151795961c7a4
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 279.460 GB [0x22eec130 Sectors]
Non Coerced Size: 278.960 GB [0x22dec130 Sectors]
Coerced Size: 278.875 GB [0x22dc0000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: 0302
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x4433221104000000
Connected Port Number: 0(path0)
Inquiry Data: CVPR130402SH300EGN INTEL SSDSA2CW300G3
4PC10302
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 3.0Gb/s
Link Speed: 3.0Gb/s
Media Type: Solid State Device
Drive: Not Certified
Drive Temperature : N/A
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Drive's NCQ setting : N/A
Port-0 :
Port status: Active
Port's Linkspeed: 3.0Gb/s
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 1
Drive's position: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: 1
Device Id: 1
WWN: 5001517bb2844a0b
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 279.460 GB [0x22eec130 Sectors]
Non Coerced Size: 278.960 GB [0x22dec130 Sectors]
Coerced Size: 278.875 GB [0x22dc0000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: 0362
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x4433221105000000
Connected Port Number: 1(path0)
Inquiry Data: CVPR206200LX300EGN INTEL SSDSA2CW300G3
4PC10362
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 3.0Gb/s
Link Speed: 3.0Gb/s
Media Type: Solid State Device
Drive: Not Certified
Drive Temperature : N/A
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Drive's NCQ setting : N/A
Port-0 :
Port status: Active
Port's Linkspeed: 3.0Gb/s
Drive has flagged a S.M.A.R.T alert : No
Exit Code: 0x00
These appear to be in a single RAID 1 at this time. We cannot really add a
single disk without breaking the redundancy of the RAID array and the data on
it.
So I would recommend that we upgrade by adding two more disks to each system
from our on-site spares, of which we have 38 Intel 320 Series SSDSA2CW300G3
2.5" 300GB drives; these are the identical model. With two more disks per
system, we would then reinstall each system, going from the single RAID 1 array
to a four-disk RAID 10 array. We could do this upgrade on one system and have
it come fully back online before reinstalling the second.
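As a quick sanity check of what the upgrade buys us, here is a minimal sketch of the usable capacity before and after. It assumes the per-disk "Coerced Size" of 278.875 GB reported in the megacli output above, and the standard property that both RAID 1 and RAID 10 (a stripe over mirrored pairs) expose half of the raw total:

```python
# Per-disk "Coerced Size" taken from the megacli -PDList output above.
COERCED_GB = 278.875

def usable_gb(n_disks: int, level: str) -> float:
    """Usable capacity for mirror-based RAID levels.

    RAID 1 mirrors the whole set of two disks; RAID 10 stripes across
    mirrored pairs. In both cases usable space is half the raw total.
    """
    if level not in ("raid1", "raid10"):
        raise ValueError(f"unsupported level: {level}")
    return n_disks * COERCED_GB / 2

print(usable_gb(2, "raid1"))   # current two-disk RAID 1  -> 278.875
print(usable_gb(4, "raid10"))  # proposed four-disk RAID 10 -> 557.75
```

So the reinstall roughly doubles usable space (about 279 GB to about 558 GB) while keeping single-disk-failure redundancy in every mirrored pair.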
The only cost I see is the rolling maintenance/reinstallation of each system
(leaving one of the two in full service at all times). We have 38 of these
Intel 320 SSDs on hand precisely because they are an older model of spare that
we do not use in our newer SSD-based deployments.
@Smalyshev: Would you please review my suggestion above and provide feedback?
If you agree, we can assign this task to @Mark to approve the use of allocated
spares for upgrades. (The use of spares for repair is automatic, but taking
spares for an upgrade requires that we confirm it is acceptable.) If you do not
agree, please assign this back to me with your corrections.
Thanks!
TASK DETAIL
https://phabricator.wikimedia.org/T119579
To: Smalyshev, RobH
Cc: RobH, mark, fgiunchedi, hoo, Aklapper, Joe, StudiesWorld, Smalyshev,
jkroll, Wikidata-bugs, Jdouglas, aude, Deskana, Manybubbles, Mbch331
_______________________________________________
Wikidata-bugs mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs