Re: [CentOS] new large fileserver config questions

2012-10-04 Thread Rafa Griman
Hi :)

On Wed, Oct 3, 2012 at 8:01 PM, Keith Keller
kkel...@wombat.san-francisco.ca.us wrote:
 On 2012-10-03, Rafa Griman rafagri...@gmail.com wrote:

 If it works for you ... I mean, there's no perfect partition scheme
 (IMHO); it depends greatly on what you do, your budget, workflow,
 file size, ... So if you're happy with this, go ahead. Just some
 advice: test a couple of different options first, just in case ;)

 Well, given the warnings about SSD endurance, I didn't want to do
 excessive testing and contribute to faster wear.  But I've been reading
 around, and perhaps I'm just overreacting.  For example:

 http://www.storagesearch.com/ssdmyths-endurance.html


As with all new technologies ... starting off is complicated. SSD
vendors have developed new strategies (leveraging existing technology,
like over-provisioning the real amount of flash on the SSD) and new
algorithms, so they're working on it ;)


 This article talks about RAID1 potentially being better for increasing
 SSD lifetime, despite the full write that mdadm will want to do.

 So.  For now, let's just pretend that these disks are not SSDs, but
 regular magnetic disks.  Do people have preferences for either of the
 methods for creating a bootable RAID1 I mentioned in my OP?  I like the
 idea of using a partitionable RAID, but the instructions seem
 cumbersome.  The anaconda method is straightforward, but simply creates
 RAID1 partitions, AFAICT, which is fine till a disk needs to be replaced,
 then gets slightly annoying.
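
For reference, a partition-based RAID1 along those lines could be put
together (and repaired) roughly like this -- all device names and the
partition layout are hypothetical, and the replacement steps are the
"slightly annoying" part:

  # build the mirrors from matching partitions on both SSDs
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=0.90 /dev/sda1 /dev/sdb1      # /boot, old-GRUB friendly
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # /
  mdadm --detail --scan >> /etc/mdadm.conf

  # after swapping a failed disk: clone the partition table, re-add
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  mdadm /dev/md0 --add /dev/sdb1
  mdadm /dev/md1 --add /dev/sdb2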

 Yup, even though you've got the sw and su options in case you want to
 play around ... With XFS, you shouldn't have to use su and sw ... in
 fact you shouldn't have to use many options since it tries to
 autodetect and use the best options. Check the XFS FAQ.

 Well, I'm also on the XFS list, and there are varying opinions on this.
 From what I can tell most XFS experts suggest just as you do--don't
 second-guess mkfs.xfs, and let it do what it thinks is best.  That's
 certainly what I've done in the past.  But there's a vocal group of
 posters who think this is incredibly foolish, and strongly suggest
 determining these numbers on your own.  If there were a straightforward
 way to do this with standard CentOS tools (well, plus tw_cli if needed)
 then I could try both methods and see which worked better.  John Doe
 suggested a guideline which I may try out.  But my gut instinct is that
 I shouldn't try to second-guess mkfs.xfs.


As always, if you know what you're doing ... feel free to define the
parameters/options ;) Oh, and if you've got the time to test different
options/values ;) If you know how your app writes/reads to disk, how
the RAID cache works, ... you can probably define better
options/values ... but that takes a lot of time, testing and reading.
XFS' default options might be a bit more conservative, but at least
you know they work.
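
For example, on a hypothetical 6-disk RAID6 volume with a 64 KiB chunk
(so 4 data spindles), the two routes would look something like:

  # trust the defaults:
  mkfs.xfs /dev/sdb1

  # or spell out the geometry yourself:
  mkfs.xfs -d su=64k,sw=4 /dev/sdb1    # su = chunk size, sw = data disks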

You have probably seen some XFS list member get scolded for messing
around with AG (or other options) and then saying performance has
dropped. I don't usually mess around with the options and just let
mkfs decide ... after all, the XFS devs spend more time benchmarking,
reading and testing than me ;)

I've been using XFS for a long time and I'm very happy with how it
works out of the box (YMMV).


 Nope, just mass extinction of the Human Race. Nothing to worry about.

 So, it's a win-win?  ;-)


Definitely :D

   Rafa


Re: [CentOS] new large fileserver config questions

2012-10-03 Thread Rafa Griman
Hi :)

On Tue, Oct 2, 2012 at 8:47 PM, Keith Keller
kkel...@wombat.san-francisco.ca.us wrote:
 On 2012-10-02, John R Pierce pie...@hogranch.com wrote:

 a server makes very little use of its system disks after it's booted;
 everything it needs ends up in cache pretty quickly, and you typically
 don't reboot a server very often.  Why waste SSD for that?

 I think the impetus (which I wasn't totally on top of) was to maximize
 the number of drive bays in the controller node.  So the bays are 2.5"
 instead of 3.5", and finding 2.5" "enterprise" SATA drives is fairly
 nontrivial from what I can tell.  I don't actually need 8 2.5" drive
 bays, so that was an oversight on my part.

 After reading the SSD/RAID docs that John Doe posted, I am a little
 concerned, but I think my plan will be to use these disks as I
 originally planned, and if they fail too quickly, find some 2.5"
 magnetic drives and RAID1 them instead.  I may also end up putting /tmp,
 /var, and swap on the disk array instead of on the SSD array, and treat
 the SSD array as just the write-seldom parts of the OS (e.g., /boot,
 /usr, /usr/local).  If I do that I should be able to alleviate any
 issues with excessive writing of the SSDs.


If it works for you ... I mean, there's no perfect partition scheme
(IMHO); it depends greatly on what you do, your budget, workflow,
file size, ... So if you're happy with this, go ahead. Just some
advice: test a couple of different options first, just in case ;)


 I am not sure what drives I have, but I have seen claims of enterprise
 SSDs which are designed to be up 24/7 and be able to tolerate more
 writes before fatiguing.  Has anyone had experience with these drives?

 re: alignment, use the whole disks, without partitioning.   then there's
 no alignment issues.   use a raid block size of like 32k. if you need
 multiple file systems, put the whole mess into a single LVM vg, and
 create your logical volumes in lvm.

 So, something like mkfs.xfs will be able to determine the proper stride
 and stripe settings from whatever the 3ware controller presents?


Yup, even though you've got the sw and su options in case you want to
play around ... With XFS, you shouldn't have to use su and sw ... in
fact you shouldn't have to use many options since it tries to
autodetect and use the best options. Check the XFS FAQ.
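
A minimal sketch of the whole-disk + LVM approach John describes, just
to illustrate (device and volume names are made up; the exported 3ware
unit shows up here as /dev/sdb):

  pvcreate /dev/sdb                        # whole device, no partition table
  vgcreate vg_data /dev/sdb
  lvcreate -n lv_export -l 100%FREE vg_data
  mkfs.xfs /dev/vg_data/lv_export          # let mkfs.xfs pick its own values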


  (The
 controller of course uses whole disks, not partitions.)  From reading
 other sites and lists I had the (perhaps mistaken) impression that this
 was a delicate operation, and not getting it exactly correct would cause
 performance issues, possibly set fire to the entire data center, and
 even cause the next big bang.


Nope, just mass extinction of the Human Race. Nothing to worry about.

HTH

   Rafa


Re: [CentOS] new large fileserver config questions

2012-10-02 Thread Rafa Griman
Hi :)

On Tue, Oct 2, 2012 at 6:57 AM, John R Pierce pie...@hogranch.com wrote:
 On 10/01/12 8:39 PM, Keith Keller wrote:
 The controller node has two 90GB SSDs that I plan to use as a
 bootable RAID1 system disk.  What is the preferred method for laying
 out the RAID array?

 a server makes very little use of its system disks after it's booted;
 everything it needs ends up in cache pretty quickly, and you typically
 don't reboot a server very often.  Why waste SSD for that?

 I'd rather use SSD for something like LSI Logic's CacheCade v2 (but
 this requires you to use an LSI SAS RAID card too)

Just to add to this comment: you can also use the SSD drives to store
the logs/journals/metadata/whatever you want to call it.

As an example, with XFS you would use the -l option.
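
A hedged sketch of what that looks like (device names are hypothetical;
/dev/md0 would be the SSD mirror, /dev/sdb1 the big data array):

  mkfs.xfs -l logdev=/dev/md0,size=128m /dev/sdb1
  mount -o logdev=/dev/md0 /dev/sdb1 /data   # logdev is needed on every mount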

   Rafa


Re: [CentOS] Is Amanda vaulting what I need for archiving data?

2012-01-13 Thread Rafa Griman
On Thu, Jan 12, 2012 at 12:48 AM, Devin Reade g...@gno.org wrote:
 --On Wednesday, January 11, 2012 03:40:20 PM -0500 Alan McKay
 alan.mc...@gmail.com wrote:

 Well, the scientists are talking longer than 7 years so HDs just are not
 going to cut it

[...]

 For long term storage, you may need to be able to not just put stuff
 away, but also have a policy (and the resources!) to periodically
 migrate data to newer media & formats.  This can get expensive in
 time and money of course; your stakeholders may need to weigh in
 again periodically to evaluate the value of the data vs the cost
 of migration.


What about LTFS?

http://en.wikipedia.org/wiki/Linear_Tape_File_System

Seems interesting. Anyone tried it?

   Rafa


Re: [CentOS] Monitoring power consumption

2011-04-13 Thread Rafa Griman
On Wed, Apr 13, 2011 at 4:16 PM, Peter Peltonen
peter.pelto...@gmail.com wrote:
 Hi all,

 I would like to monitor the power consumption of my server. What I am
 looking for is:

 * a simple monitoring device that would measure the power consumption
 of 1 server

 * a way to get the consumption reading from that device to my centos
 server (via usb / wlan / whatever works)

 Any suggestions?


Does your server support IPMI (iLO, BMC, ...)?

IPMI's quite easy and useful.
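
If the server monitors itself, something along these lines usually
works on CentOS (assuming the BMC exposes power sensors; the DCMI
command only works if both the BMC and your ipmitool build support it):

  # yum install OpenIPMI ipmitool
  # service ipmi start                        # loads the ipmi_* modules
  # ipmitool sensor | grep -i -E 'watt|power'
  # ipmitool dcmi power reading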

HTH

   Rafa


Re: [CentOS] Cant find out MCE reason (CPU 35 BANK 8)

2011-03-22 Thread Rafa Griman
Hi :)

On Tue, Mar 22, 2011 at 3:59 PM, Vladimir Budnev
vladimir.bud...@gmail.com wrote:

[...]

 But... as I said, we have the following slots:
    CPU1    cpu1-a1 cpu1-a2 cpu1-a3 cpu1-b1 cpu1-b2 cpu1-b3
    CPU2    cpu2-a1 cpu2-a2 cpu2-a3 cpu2-b1 cpu2-b2 cpu2-b3

 We have the modules placed in the following way:

 +---------+----------+----------+----------+----------+----------+----------+
 |         |    V     |    V     |    V     |    V     |   free   |   free   |
 +---------+----------+----------+----------+----------+----------+----------+
 |   CPU1  | cpu1-a1  | cpu1-a2  | cpu1-a3  | cpu1-b1  | cpu1-b2  | cpu1-b3  |
 +---------+----------+----------+----------+----------+----------+----------+

 +---------+----------+----------+----------+----------+----------+----------+
 |         |    V     |    V     |    V     |    V     |   free   |   free   |
 +---------+----------+----------+----------+----------+----------+----------+
 |   CPU2  | cpu2-a1  | cpu2-a2  | cpu2-a3  | cpu2-b1  | cpu2-b2  | cpu2-b3  |
 +---------+----------+----------+----------+----------+----------+----------+

 Definitely there is something wrong with the memory banks, because
 replacing modules changed the MCE messages, but what exactly... or
 have I interpreted it all wrong?


This isn't an optimal setup (performance-wise). You should always
populate DIMMs in multiples of 3 (one per memory channel) to get the
full memory bandwidth. In your case, you've got cpu1-b[2|3] and
cpu2-b[2|3] with no DIMMs, so that would affect your performance.
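
A quick, hedged way to double-check how the BIOS sees the DIMM
population (run as root):

  dmidecode -t memory | grep -E 'Locator|Size|Speed'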

HTH

   Rafa


Re: [CentOS] connection speeds between nodes

2011-03-07 Thread Rafa Griman
Hi :)

On Mon, Mar 7, 2011 at 12:12 PM, wessel van der aart
wes...@postoffice.nl wrote:
 Hi All,

 I've been asked to set up a 3D render farm at our office; at the start
 it will contain about 8 nodes, but it should be built for growth. Now
 the setup I had in mind is as follows:
 All the data is already stored on a StorNext SAN filesystem (Quantum);
 this should be mounted on a CentOS server through fibre optics, which
 in its turn shares the FS over NFS to all the render nodes (also CentOS).


From what I can read, you have only one NFS server and a separate
StorNext MDC. Is this correct?


 Now we've estimated that the average file sent to each node will be
 about 90 MB, so that's the average throughput I'd like the connection
 to sustain. I know that gigabit Ethernet should be able to do that
 (testing with iperf confirms it), but testing the speed to already
 existing NFS shares gives me a 55 MB/s max. As I'm not familiar with
 network share performance tweaking, I was wondering if anybody here is
 and could give me some info on this?
 Also, I thought of giving all the nodes 2x 1 GbE ports and putting
 those in a bond; will this do any good, or do I have to take a look at
 the NFS server side first?


Things to check would be:

 - Hardware:
   * RAM and cores on the NFS server
   * # of GigE & FC ports
   * PCI technology you're using: PCIe, PCI-X, ...
   * PCI lanes & bandwidth you're using up
   * whether you are sharing PCI buses between different PCI boards
     (FC and GigE): you should NEVER do this. If you have to share a
     PCI bus, share it between two PCI devices of the same kind. That
     is, you can share a PCI bus between 2 GigE cards or between 2 FC
     cards, but never mix the devices.
   * cabling
   * switch configuration
   * RAID configuration
   * cache configuration on the RAID controller. Cache mirroring
     gives you more protection, but less performance.

 - Software:
   * check the NFS config. There are some interesting tips if you
     google around (a minimal starting point follows below).
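
As an illustration only (package names and paths from stock CentOS;
the export and mount point are hypothetical), a first pass at the NFS
side might look like this:

  # on the NFS server: more nfsd threads usually help with many clients
  grep RPCNFSDCOUNT /etc/sysconfig/nfs      # e.g. raise from 8 to 32
  service nfs restart
  nfsstat -s                                # watch retransmissions/badcalls

  # on each render node: bigger I/O sizes, TCP, no atime updates
  mount -t nfs -o rsize=32768,wsize=32768,tcp,noatime \
      nfsserver:/stornext/render /mnt/render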

HTH

   Rafa


Re: [CentOS] Lost root access

2011-02-03 Thread Rafa Griman
Hi :)

On Wed, Feb 2, 2011 at 3:44 PM, James Bensley jwbens...@gmail.com wrote:
 So on a virtual server the root password was no longer working (as in
 I couldn't ssh in anymore). Only I and one other person know it, and
 neither of us has changed it. No other account had the privileges
 needed to correct this, so I'm wondering: if I had mounted that VDI as
 a secondary device on another VM, browsed the file system and deleted
 /etc/shadow, would this have wiped all users' passwords, meaning I
 could regain access again?

 (This is past tense because it's sorted now, but I'm curious whether
 this would have worked. And if not, what could I have done?)


As the others said: DON'T delete /etc/shadow.

Someone also mentioned you could modify the hash in /etc/shadow. This
will work if you are root or have the right permissions with sudo.

If you can reboot the system, what really works great is passing the
following option to the kernel on the lilo/grub screen when the system
boots:

 init=/bin/bash

This will give you a shell without being asked for a password (unless
the sys admin has done his homework ;) Now that you have shell access
... you are in charge so you can:

 - mount the / partition and chroot

 - edit /etc/shadow and delete the password hash

 - whatever you can imagine ... you decide ;)
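
For instance, a rough session (assuming /usr lives on the root
filesystem and the device layout matches yours) could look like:

  # append to the kernel line in GRUB:  init=/bin/bash
  mount -o remount,rw /      # the root fs comes up read-only
  passwd root                # set a new root password
  sync
  mount -o remount,ro /
  exec /sbin/init            # continue booting (or force a reboot)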


HTH

   Rafa


Re: [CentOS] Basic Permissions Questions

2011-01-26 Thread Rafa Griman
Hi :)

On Wed, Jan 26, 2011 at 11:12 AM, James Bensley jwbens...@gmail.com wrote:
 Hi List :)

 So, I have a folder1, its owner is user1 who has r+w on the folder.
 User2 is the group owner who only has read access (when I say user2, I
 mean the group called user2, because when you make a new user the OS
 can make them their own group). You can see these permissions below:

 [user2@host test]$ ls -l
 drw-r-----  3 user1    user2   28 Nov  2 16:17 folder1

 However, user2 cannot 'cd' into this directory, and gets the
 following output from 'ls -l folder1':

 [user2@host test]$ ls -l folder1/
 total 0
 ?--------- ? ? ? ?            ? sub-folder

 And the sub-folder name is written in white text flashing on a red
 background. So, it seems to me that there is some permissions problem
 here. What permissions are required on the group settings to allow a
 group user to browse folder1 and its subfolders and read the files
 inside, if it isn't 'r'?

 **Note: I have used sudo to replicate permissions through the directory
 structure:

 [user2@host test]$ sudo ls -l folder1/
 drw-r----- 2 user1 user2 4096 Jan 24 06:49 sub-folder


Directories should have +x permissions. Do a:

chmod    0750    /directory

And see what happens.
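
If the subfolders need fixing too, one way (a sketch, not the only way)
is to add the execute bit on directories only:

  chmod -R g+rX folder1        # capital X: x on directories (and files
                               # that already have x), r everywhere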

HTH

   Rafa


Re: [CentOS] Basic Permissions Questions

2011-01-26 Thread Rafa Griman
Hi :)

On Wed, Jan 26, 2011 at 11:31 AM, James Bensley jwbens...@gmail.com wrote:
 On 26 January 2011 10:17, Rafa Griman rafagri...@gmail.com wrote:
 Directories should have +x permissions. Do a:

 chmod    0750    /directory

 And see what happens.


 Hi Rafa, like a fool I sent that email and then worked this out
 shortly after :)


I'm glad you worked it out ;)


 Still, if I hadn't, your response was quick, so I wouldn't have been
 waiting long. This leads me on to a new question though:

 If user1 writes a file in folder1, will user2 be made the default
 group owner? Is there a way of enforcing this, and with the required
 privileges (r for files, rx for directories)?


Ownership doesn't change just because of where a file is created. The
ownership of a new file is set to the user that creates it (and, by
default, to that user's primary group), no matter where the file is.
Obviously, root can change file ownership ... so treat him well ;)

In any case, try it out yourself. Create the files and see what happens ;)
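
For example, from user1's own shell (the output is illustrative; the
group will normally be user1's primary group, not user2):

  [user1@host test]$ touch folder1/newfile
  [user1@host test]$ ls -l folder1/newfile
  -rw-rw-r-- 1 user1 user1 0 Jan 26 11:40 folder1/newfile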


 User1 accesses folder1 over SMB, so I could set up a create mask, but
 for other folders accessed by user1 not via SMB (ssh, rsync, etc.) I
 still want user2 to have read-only access. Can you implement SMB-style
 create masks at a filesystem level?

Samba is a different story (but related): with Samba you can set
create masks, default permissions, ...
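
A hedged smb.conf snippet just to show the idea (share name, path and
users are made up):

  [folder1]
     path = /srv/test/folder1
     valid users = user1, @user2
     create mask = 0640
     directory mask = 0750
     force group = user2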

I usually recommend O'Reilly's Samba book because it starts off with
a very simple config and then builds it up little by little.

HTH

   Rafa


Re: [CentOS] disk quotas + centos 5,5 +xfs

2011-01-17 Thread Rafa Griman
Hi :)

On Tue, Jan 18, 2011 at 3:15 AM,  aurfal...@gmail.com wrote:
 Hi all,

 is anyone aware of quotas not working in 5.5?

 I'm using XFS as a file system.

 My fstab has the appropriate usrquota,grpquota options, but when I try to run:

 quotacheck -cug /foo

 I get:

 quotacheck: Can't find filesystem to check or filesystem not mounted
 with quota option.


Did you remount /foo after modifying /etc/fstab?



BTW: http://linux.die.net/man/8/xfs_quota

check the QUOTA ADMINISTRATION section:

The quotacheck command has no effect on XFS filesystems. The first
time quota accounting is turned on (at mount time), XFS does an
automatic quotacheck internally; afterwards, the quota system will
always be completely consistent until quotas are manually turned off.


And the EXAMPLES section:

# mount -o uquota /dev/xvm/home /home
# xfs_quota -x -c 'limit bsoft=500m bhard=550m tanya' /home
# xfs_quota -x -c report /home
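
To confirm whether quota accounting is actually active after
remounting, something like this should work (using the mount point from
this thread):

# mount | grep /foo
# xfs_quota -x -c state /foo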

 I already have a large amount of data in the mount.  Do quotas get
 implemented on only empty filesystems and then one adds data?



HTH

   Rafa