Re: [CentOS] Troubles expanding file system. - Solved

2021-09-09 Thread Jeff Boyce
To follow-up and provide a conclusion to my issue, in case anyone else 
runs into a similar situation.


TLDR - go to bottom and read item #7.

To recap the issue:
I have a Dell PowerEdge server with a CentOS KVM host (Earth) with
one CentOS guest (Sequoia) that I am trying to expand the partition and
filesystem on.  I have LVM logical volumes on the host system (Earth),
which are used as devices/partitions on the guest system (Sequoia).  In
this particular situation I have successfully extended the logical
volume (lv_SeqEco) on Earth from 500GB to 700GB.
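(For reference, the host-side extension was a plain lvextend; a sketch,
with the volume group name vg_mei borrowed from my older posts below and
possibly not matching the current layout:

earth# lvextend -L 700G /dev/vg_mei/lv_SeqEco
)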

1.  Checking the disk information (lsblk) on Earth shows that the
logical volume (lv_SeqEco) is now listed as 700GB.

2.  Checking disk information (lsblk) on Sequoia shows that the disk
/dev/vde is still listed as 500GB, and partition /dev/vde1 where the
mount point /ecosystem is located is also listed as 500GB.

3.  I had tried using the resize2fs command to expand the filesystem on
/dev/vde1, but it returned with the result that there was nothing to
do, which makes sense now that I have checked the disk information, since
/dev/vde on Sequoia had not increased from 500GB to 700GB.

4.  On previous occasions when I have done this task, I would just start
GParted on Sequoia and use the GUI to expand the partition and
filesystem.  I am unable to do this now, as the VGA adapter on my server 
has died and I have no graphical output to the attached monitor.


5.  My goal was to avoid rebooting the system, especially since it is a
10-year-old server and there is no longer VGA output to view the boot
process.


What I tried, and what worked to solve my issue:

1.  First, I tried rescanning the device on the guest:
echo 1 > /sys/devices/pci0000:00/0000:00:01.1/host1/target1:0:0/1:0:0:0/rescan


This resulted in no output, and checking lsblk on the guest (Sequoia) 
showed no change:

vde  500GB  disk
vde1   500GB   part /ecosystem

2.  Second, I tried using "virsh blockresize" from the host (Earth) system.
Get the block size information
earth# virsh domblkinfo SequoiaVM vde
output:  vde capacity 751619276800

Resize the block (express the size in Bytes)
earth# virsh blockresize SequoiaVM vde 751619276800B
output:  Block device vde is resized.

Check the block device on the guest to confirm.
sequoia# lsblk
vde   700GB   disk
vde1   500GB   part /ecosystem

Therefore, virsh blockresize was successful.
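(A note for anyone repeating this: the byte count fed to blockresize is
simply the capacity that domblkinfo reported after the logical volume was
extended; 751619276800 bytes is exactly 700 GiB.  You could presumably also
read the size straight from the host LV, e.g.:

earth# lvs --units b -o lv_size vg_mei/lv_SeqEco

though the volume group name vg_mei here is my guess based on my older
posts below, so adjust to your layout.)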

3.  I now have the larger disk recognized on the guest system, but still 
need to expand the partition and the filesystem.  Tried the following 
options.


resize2fs -p /dev/vde1
   output:  filesystem is already xx blocks long, nothing to do.

growpart /dev/vde 1
   output:  This showed that the partition had changed:
  changed:  partition=1 start=63
  old:  size=1048575937  end=1048576000
  new:  size=1468003572  end=1468003635

However, checking lsblk on the guest still showed the same result
   vde   700GB   disk
   vde1   500GB   part /ecosystem

On the off chance that growpart was successful, I ran resize2fs again 
and it produced the same result as above.


partprobe
   output:  failed to re-read the partition table on all attached 
devices (including /dev/vde1); device or resource busy.
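
(Something I did not try, but which reportedly can update the kernel's
view of a disk whose partitions are in use, is partx from util-linux:

sequoia# partx -u /dev/vde

I cannot confirm whether the util-linux on this guest supports that, so
treat it as a lead rather than a tested fix.)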


Tried using "parted" and its resize command.  First, using the print 
command to get the parameters of /dev/vde (output: end=751617861119B, 
size=751617828864B).  Then changing parted to select (use) /dev/vde1 and 
used print command to view parameters of /dev/vde1 (output:  
end=536870879743B, size=536870879744B).
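
(Reconstructed from my notes, the session was roughly the following; the
exact prompts may differ:

sequoia# parted /dev/vde
(parted) unit B
(parted) print
(parted) select /dev/vde1
(parted) print
)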


(parted)# resize 1 0 751617861119
output:  The location 751617861119 is outside of the device /dev/vde1

(In hindsight that error makes sense: after "select /dev/vde1" parted
treats the 500GB partition itself as the device, so a 700GB end point is
out of range.  The resize presumably needed to be run with /dev/vde
selected instead.)

Looked at using fdisk; however, the documentation stated that the only 
way to change the partition size using fdisk is by deleting it and 
recreating it.  I didn't want to do this.


I finally concluded that everything indicated I would not be able to
complete this while the partition was online and mounted, despite my
earlier research suggesting that it should be possible.


4.  Unmounted the filesystem on the guest and ran partprobe.

sequoia# service smb stop
sequoia# umount /ecosystem
sequoia# partprobe
   output:  This produced the same warnings as before (failed to 
re-read the partition, device busy) for all partitions that were still 
mounted, but did not produce the warning for partition /dev/vde1 that 
had been unmounted.


sequoia# lsblk
  vde   700GB   disk
   vde1   700GB   part /ecosystem

Success.  The new partition is now recognized in the kernel, and I 
should now be able to resize the filesystem.


5.  Ran resize2fs

sequoia# resize2fs -p /dev/vde1
   output:  please run e2fsck -f /dev/vde1 first

sequoia# e2fsck -f -C 0 /dev/vde1  (-C 0 will display the progress)
   output:  system passed all checks.

sequoia# resize2fs -p /dev/vde1
   output:  resizing the filesystem on /dev/vde1 to 183500446 (4k)
blocks; begin pass 1, extending the inode table; the filesystem is now
183500446 blocks long.

Re: [CentOS] Troubles expanding file system.

2021-09-02 Thread Jeff Boyce

I realized last night that I was still receiving the daily digest, so I 
have probably broken the threading on this by now.  If you cc me directly, 
maybe I can maintain the threading in the future.

Ok, it looks like parted's resize (or resizepart) command will be what I 
need.  But that doesn't appear to help the guest recognize the expanded 
disk, so I think I need something before that; this is what I thought the 
echo 1 > rescan would do for me.

I will look more into fdisk to understand the capabilities there.  I am going 
to take advantage of the holiday weekend in a few days to take care of this so 
I am trying to understand all of the options available to me before diving into 
the task.

In response to Gordon also, I did rescan the drive as suggested and got the 
same results; no such file or directory.  So then I did a search for the rescan 
file to see where it was present.  Found it in a few locations, but this one 
looks to be the one that I would want to try.

/sys/devices/pci0000:00/0000:00:01.1/host1/target1:0:0/1:0:0:0/

A rescan file was also located directly under /sys/bus/pci, but I don't 
know whether that would do the job for the specific device.

Thanks for everyone's input.  Very helpful.  More suggestions are welcome while 
I am still reading up on options.

Jeff



Date: Wed, 1 Sep 2021 13:15:37 -0400
From: Stephen John Smoogen
To: CentOS mailing list
Subject: Re: [CentOS] Troubles expanding file system.

Content-Type: text/plain; charset="UTF-8"

On Wed, 1 Sept 2021 at 12:42, Jeff Boyce  wrote:


Greetings -

  I have tried posting this four times now, from two different email
addresses (on the 25th, 27th, 30th, and 31st) and it never appeared.  I
don't see it in the archives, so it appears to be getting dropped in
transition for some reason.  I am not getting messages from the email
system saying it is undeliverable, or is bounced; I am sending as plain
text, not HTML, I stripped off my signature.  If this makes it through,
someone please give me a clue why the others might not have.  But that
is not as important as the real issue that I am trying to get addressed
below.  Thanks for any assistance.

  I have a Dell PowerEdge server with a CentOS KVM host (Earth) with
one CentOS guest (Sequoia) that I am trying to expand the partition and
filesystem on.  I have LVM logical volumes on the host system (Earth),
which are used as devices/partitions on the guest system (Sequoia).  In
this particular situation I have successfully extended the logical
volume (lv_SeqEco) on Earth from 500GB to 700GB.

1.  Checking the disk information (lsblk) on Earth shows that the
logical volume (lv_SeqEco) is now listed as 700GB.

2.  Checking disk information (lsblk) on Sequoia shows that the disk
/dev/vde is still listed as 500GB, and partition /dev/vde1 where the
mount point /ecosystem is located is also listed as 500GB.

3.  I had tried using the resize2fs command to expand the filesystem on
/dev/vde1, but it returned with the result that there was nothing to
do.  Which makes sense now after I checked the disk information, since
/dev/vde on Sequoia has not increased from 500GB to 700GB.


Thanks for the long list of items of what you have done. In Fedora
Infrastructure, we used this method to resize images in the past
https://pagure.io/infra-docs/blob/main/f/docs/sysadmin-guide/sops/guestdisk.rst

The guest system usually needs to have the `fdisk`, `gdisk`, or
`parted` commands rerun to resize the disk to its new size.
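
(As the follow-up at the top of this archive eventually shows, on this
guest the partition-level step ended up being growpart from
cloud-utils-growpart, with partprobe only taking effect once the
filesystem was unmounted, plus an e2fsck -f before the final resize:

sequoia# growpart /dev/vde 1
sequoia# umount /ecosystem
sequoia# partprobe
sequoia# e2fsck -f /dev/vde1 && resize2fs /dev/vde1
)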



4.  On previous occasions when I have done this task, I would just start
GParted on Sequoia and use the GUI to expand the partition and
filesystem.  A real quick and simple solution.

5.  The problem I have now is that the VGA adapter on my server has died
and I have no graphical output to the attached monitor, nor to the iDrac
console display.  So I am stuck doing this entirely by the command line
while logged into the system remotely.

6.  I suspect that I need to rescan the devices on Sequoia so that it
recognizes the increased space that has been allocated from extending
the logical volume.  But when I did that (command below) it came back
with a "no such file or directory" error.

echo 1 > /sys/class/block/vde1/device/rescan


Not sure that would do anything.



7.  This server is being retired in the next few months, but I need this
additional space prior to migrating to the new system. Can someone give
me some guidance on what I am missing in this sequence?

Let me know if I haven't been clear enough in the explanation of my
systems and objective.  Thanks.

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] Troubles expanding file system.

2021-09-01 Thread Jeff Boyce

Greetings -

    I have tried posting this four times now, from two different email 
addresses (on the 25th, 27th, 30th, and 31st) and it never appeared.  I 
don't see it in the archives, so it appears to be getting dropped in 
transition for some reason.  I am not getting messages from the email 
system saying it is undeliverable, or is bounced; I am sending as plain 
text, not HTML, I stripped off my signature.  If this makes it through, 
someone please give me a clue why the others might not have.  But that 
is not as important as the real issue that I am trying to get addressed 
below.  Thanks for any assistance.


    I have a Dell PowerEdge server with a CentOS KVM host (Earth) with 
one CentOS guest (Sequoia) that I am trying to expand the partition and 
filesystem on.  I have LVM logical volumes on the host system (Earth), 
which are used as devices/partitions on the guest system (Sequoia).  In 
this particular situation I have successfully extended the logical 
volume (lv_SeqEco) on Earth from 500GB to 700GB.


1.  Checking the disk information (lsblk) on Earth shows that the 
logical volume (lv_SeqEco) is now listed as 700GB.


2.  Checking disk information (lsblk) on Sequoia shows that the disk 
/dev/vde is still listed as 500GB, and partition /dev/vde1 where the 
mount point /ecosystem is located is also listed as 500GB.


3.  I had tried using the resize2fs command to expand the filesystem on 
/dev/vde1, but it returned with the result that there was nothing to 
do.  Which makes sense now after I checked the disk information, since 
/dev/vde on Sequoia has not increased from 500GB to 700GB.


4.  On previous occasions when I have done this task, I would just start 
GParted on Sequoia and use the GUI to expand the partition and 
filesystem.  A real quick and simple solution.


5.  The problem I have now is that the VGA adapter on my server has died 
and I have no graphical output to the attached monitor, nor to the iDrac 
console display.  So I am stuck doing this entirely by the command line 
while logged into the system remotely.


6.  I suspect that I need to rescan the devices on Sequoia so that it 
recognizes the increased space that has been allocated from extending 
the logical volume.  But when I did that (command below) it came back 
with a "no such file or directory" error.


echo 1 > /sys/class/block/vde1/device/rescan

7.  This server is being retired in the next few months, but I need this 
additional space prior to migrating to the new system. Can someone give 
me some guidance on what I am missing in this sequence?


Let me know if I haven't been clear enough in the explanation of my 
systems and objective.  Thanks.


Jeff
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] Incoming rsync connection attempts

2015-10-14 Thread Jeff Boyce

Greetings -

In my logwatch report this morning I noticed reference to an attempt to 
connect to rsync from an external IP address.  It doesn't appear that 
the connection was successful based on correlating information between 
/var/log/secure and /var/log/messages.  But I am looking for some 
suggestions for implementing more preventative measures, if necessary.  
The log information from the last few attempts is shown below.


/var/log/secure
Oct 13 00:14:08 Bison xinetd[2232]: START: rsync pid=15306 
from=180.97.106.36

Oct 13 01:55:51 Bison xinetd[2232]: START: rsync pid=15343 from=85.25.43.94
Oct 13 23:25:35 Bison xinetd[2232]: START: rsync pid=16548 
from=114.119.37.86


/var/log/messages
Oct 13 00:14:08 Bison rsyncd[15306]: rsync: unable to open configuration 
file "/etc/rsyncd.conf": No such file or directory (2)
Oct 13 00:14:08 Bison rsyncd[15306]: rsync error: syntax or usage error 
(code 1) at clientserver.c(923) [receiver=3.0.5]
Oct 13 01:55:51 Bison rsyncd[15343]: rsync: unable to open configuration 
file "/etc/rsyncd.conf": No such file or directory (2)
Oct 13 01:55:51 Bison rsyncd[15343]: rsync error: syntax or usage error 
(code 1) at clientserver.c(923) [receiver=3.0.5]
Oct 13 23:25:35 Bison rsyncd[16548]: rsync: unable to open configuration 
file "/etc/rsyncd.conf": No such file or directory (2)
Oct 13 23:25:35 Bison rsyncd[16548]: rsync error: syntax or usage error 
(code 1) at clientserver.c(923) [receiver=3.0.5]


There is no /etc/rsyncd.conf file present on the system, so I can see 
why the connection wasn't successful.  Our backups get pushed to this 
one from other servers using rsync.


This is on a RHEL 3.9 box (Dell PE2600, year 2004) that is primarily 
used as backup storage within our LAN.  I will retire it when it dies; 
until then it runs fairly maintenance-free.  I do have a public IP 
address assigned to the WAN because we have a vsftp server running on it 
for transferring files back and forth to a few clients, and I 
occasionally access the server remotely.  I am wondering if there is 
anything relatively simple that I can do to address these attempted 
connections, until I have time to move our vsftp server from it and 
remove the public IP address from the WAN? Thanks.
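
(One minimal measure, assuming rsync really is being launched from xinetd
as the log lines suggest: disable the xinetd rsync service and, if the
WAN-facing interface is known, drop the rsync port there as well.  A
sketch; the interface name eth0 is an assumption:

# in /etc/xinetd.d/rsync set:
#     disable = yes
bison# service xinetd reload
bison# iptables -A INPUT -i eth0 -p tcp --dport 873 -j DROP

Since there is no /etc/rsyncd.conf, the backups being pushed here are
presumably running rsync over ssh rather than the rsync daemon, so
disabling the daemon should not affect them, but verify that first.)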


Jeff

--

Jeff Boyce
Meridian Environmental


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] Issue getting Gnome display manager on Centos 6 to Xming on Win7

2015-05-05 Thread Jeff Boyce

Greetings -

My objective is to get a full Gnome GUI console display to show in an Xming 
window on my Win7 box.  I do a similar thing with my Raspberry Pi so that it 
operates headless with just the power and network cord attached.  My CentOS 
6 box and the Win7 box are on the same network within my office LAN behind a 
good firewall.  For various reasons that I won't go into I am unable to 
combine them into a single box using KVM, so I would at least like to be 
able to display my CentOS Gnome GUI within an Xming window on my Win7 box. 
And I like the way that Xming works, so I would rather not switch to 
something like X2Go as I wasn't too keen on it when I last tried it.


I have done plenty of research on the web that describes how to do this, but 
I must be missing something.  I have the following things configured on the 
CentOS 6 box.


/etc/gdm/custom.conf file
[security]
 DisallowTCP=false
 AllowRemoteRoot=true
[xdmcp]
 Enable=true

/etc/ssh/sshd_config
 X11Forwarding yes
 X11DisplayOffset 10
 X11UseLocalHost yes

I start Xming on the Win7 box with the basic parameters.  Xming.exe 
:0 -resize -clipboard (one window is default)


Putty has X11 Forwarding enabled.

After logging into the CentOS box via Putty I can invoke a single program, 
such as Gedit and it will display on the Xming window, but I can not seem to 
get the entire Gnome GUI display manager.  On my Raspberry Pi, once logged 
in through Putty I just type startlxde at the command prompt to get the 
entire display.  But the Pi is a Debian based system running a different 
display manager, so I don't know what would be comparable for CentOS 6, and 
whether additional configuration is needed.
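
(Tentative, as I have not verified it against CentOS 6's gdm: with
[xdmcp] Enable=true, the full desktop is normally pulled with an XDMCP
query from Xming itself rather than through the ssh session, e.g.:

Xming.exe :0 -query 192.168.1.x -clipboard

substituting the CentOS box's LAN address, with UDP port 177 reachable
and gdm restarted after editing custom.conf.)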


I have tried all the basic troubleshooting actions.  I have disabled the 
firewalls on both the CentOS 6 and Win7 boxes, and I have changed SELinux to 
permissive mode.  None of these changes made any difference, I could still 
display Gedit but not the entire Gnome GUI display manager.  So I figure 
there must be something else I need to do that I am not finding in all my 
Google searches.  Can someone clue me in please.  Thanks.


Jeff Boyce
Meridian Environmental

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] General question about understanding PCI passthrough

2015-01-23 Thread Jeff Boyce

Greetings -

I saw Andrew Holway's post yesterday referencing using PCI passthrough as a 
solution to someone else's issue.  Not being familiar with it, the post made 
me look into it more to see if it is something to use for my setup.  From my 
research on the web, I have two questions to make sure I understand how PCI 
passthrough works.


1.  If you use PCI passthrough on a graphics card to give a virtual guest 
direct access to the card for its use, does the host still have use of 
the graphics card as well?


2.  Or, once you pass it to the guest, can the host then no longer use it, 
having to rely on the motherboard's base-level graphics?  From my reading 
it looks like PCI passthrough can make the graphics card available to 
multiple guests simultaneously, but it seemed to imply that the host could 
no longer use it.


With the box that I am configuring, I was planning on installing Linux Mint 
(Cinnamon Desktop) as my kvm host, then installing Win7 as a virtual guest. 
Since I use some mapping software in Win7 it would be nice to be able to use 
the graphics card via PCI passthrough, but not at the expense of losing it 
from the Mint Cinnamon Desktop.  However, if multiple kvm guests can use the 
graphics card simultaneously (but the host cannot), then maybe I should use 
CentOS as a very basic host and then make both Mint and Win7 guests.


Jeff Boyce
Meridian Environmental


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Centos 7 how to make second disk of RAID1 bootable

2014-12-12 Thread Jeff Boyce

Greetings -

Ok, I have my CentOS 7 KVM host system installed and I want to be able to 
boot the system from either installed drive if one of them fails.  My 
objective is to have the following layout for the two 3 TB disks.


sda1  /boot/efi
sda2  /boot
sda3  RAID1 with sdb3

sdb1  /boot/efi
sdb2  /boot
sdb3  RAID1 with sda3

The system is installed and boots from sda[1,2] and md127 (sda3 and sdb3). 
sdb[1,2] were untouched during the installation, and had been partitioned as 
FAT32 prior to the installation exactly the same as sda[1,2] using GParted. 
A GPT partition table was added to both disks before partitioning.  The 
current partition information for my two drives is:


Disk /dev/sda: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 0C26A36C-3857-4E97-85CC-2D4E57F4015A
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2925 sectors (1.4 MiB)
Number  Start (sector)  End (sector)  Size       Code  Name
1       2048            1026047       500.0 MiB  EF00  EFI System Partition
2       1026048         2050047      500.0 MiB  0700
3       2050048         5860532223    2.7 TiB    FD00

Disk /dev/sdb: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): A3F0F6C1-A395-4A24-8940-BDE803E5D073
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2925 sectors (1.4 MiB)
Number  Start (sector)  End (sector)  Size       Code  Name
1       2048            1026047       500.0 MiB  0700
2       1026048         2050047      500.0 MiB  0700
3       2050048         5860532223    2.7 TiB    FD00

sda1 and sda2 were reformatted during installation; with sda1 showing in 
GParted now as FAT16 and a boot flag, and sda2 showing as XFS without a boot 
flag.  sdb[1,2] still show as FAT32 and have no files on them.


What is the simplest and least error-prone way to make my second drive (sdb) 
bootable if the first drive (sda) were to fail?  I have done a lot of 
Googling over the last few days to try and understand what needs to be done, 
and almost everything I find is outdated in that it does not reference using 
grub2, and does not reference UEFI booting.  I am open to reading more 
how-to's if someone knows of a good one that I may have missed in the 50 
plus guides I have looked at.  I suspect that this is really not that 
difficult, but the detail that I need seems to be missing in what I have 
read.
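
(A sketch of the approach I have seen suggested for this layout, with the
caveat that I have not verified it end-to-end on CentOS 7 UEFI: clone the
two small boot partitions from sda to sdb, then register the second EFI
system partition with the firmware so it shows up as a boot option.  The
loader path assumes the stock CentOS 7 shim layout:

root# dd if=/dev/sda1 of=/dev/sdb1 bs=1M
root# dd if=/dev/sda2 of=/dev/sdb2 bs=1M
root# efibootmgr -c -d /dev/sdb -p 1 -L "CentOS (sdb)" -l '\EFI\centos\shimx64.efi'

Note that cloning sda2 duplicates the XFS UUID of /boot, so the copy must
never be mounted alongside the original and would need re-syncing after
kernel updates.)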


Any responses may cc me directly as I only get the daily digest.  Thanks.

Jeff Boyce
Meridian Environmental

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 7 grub.cfg missing on new install

2014-12-11 Thread Jeff Boyce


- Original Message - 
From: Ned Slider n...@unixmail.co.uk

To: centos@centos.org
Cc: jbo...@meridianenv.com
Sent: Wednesday, December 10, 2014 8:53 PM
Subject: Re: [CentOS] CentOS 7 grub.cfg missing on new install




On 10/12/14 18:13, Jeff Boyce wrote:

Greetings -

The short story is that I got my new install completed with the
partitioning I wanted and using software raid, but after a reboot I
ended up with a grub prompt, and do not appear to have a grub.cfg file.
So here is a little history of how I got here, because I know in order
for anyone to help me they would subsequently ask for this information.
So this post is a little long, but consider it complete.



. . . trim . . .


I then installed GRUB2 on /dev/sdb1 using the following command:
root#  grub2-install /dev/sdb1
   Results:  Installing for x86_64-efi platform.  Installation finished.
No error reported.



The upstream docs (see below) seem to suggest 'grub2-install /dev/sdb'
rather than /dev/sdb1 (i.e., installing to the device rather than a
partition on the device). I don't know if this is the cause of your issue.


I rebooted the system now, only to be confronted with a GRUB prompt.
Thinking that this is a good opportunity to for me to learn to rescue a
system since I am going to need to understand how to recover from a disk
or raid failure, I started researching and reading.  It takes a little
bit of work to understand what information is valuable when a lot of it
refers to GRUB (not GRUB2) and doesn't make reference to UEFI booting
and partitions. I found this Ubuntu wiki as a pretty good source
https://help.ubuntu.com/community/Grub2/Troubleshooting#Search_.26_Set



I found the upstream documentation for grub2 to be useful:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/ch-Working_with_the_GRUB_2_Boot_Loader.html

Included is a procedure for completely reinstalling grub2 which might
help you recover.


. . . trim . . .

Ned, thanks for your insight.  I feel like I have been sleeping with that 
RH7 document the last day or so trying to understand what I messed up and 
how to recover; I just didn't reference it in my post.  Your conclusion 
about grub2-install being directed to the partition rather than the device 
may be correct, and is about the only detail that I can see that may have 
been wrong.  The weird thing is that the installation should have put 
everything in the proper place on the primary drive, while my grub2-install 
command was directed at putting it on the secondary drive.  That is what is 
confusing me: the proper grub files should have been on the primary drive, 
allowing me to boot from there.  It would have been nice if I had happened 
to check for the grub files before the failed reboot, or immediately after 
the installation.  At this point I am not going to try to recover, but will 
just re-install from scratch.  I have gained enough knowledge in the past 
few days learning about grub that at least I know the general process and 
how to get started, but at this point I want to make sure I have a good 
clean system on the initial install.  Thanks to others who at least took 
the time to read my long post.


Jeff

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 7 grub.cfg missing on new install

2014-12-11 Thread Jeff Boyce


- Original Message - 
From: Gordon Messmer gordon.mess...@gmail.com

To: CentOS mailing list centos@centos.org
Cc: Jeff Boyce jbo...@meridianenv.com
Sent: Thursday, December 11, 2014 9:45 AM
Subject: Re: [CentOS] CentOS 7 grub.cfg missing on new install



On 12/10/2014 10:13 AM, Jeff Boyce wrote:
The short story is that I got my new install completed with the 
partitioning I wanted and using software raid, but after a reboot I ended 
up with a grub prompt, and do not appear to have a grub.cfg file.

...
I initially created the sda[1,2] and sdb[1,2] partitions via GParted 
leaving the remaining space unpartitioned.


I'm pretty sure that's not necessary.  I've been able to simply change the 
device type to RAID in the installer and get mirrored partitions.  If you 
do your setup entirely in Anaconda, your partitions should all end up 
fine.


It may not be absolutely necessary, but it appears to me to be the only way 
to get to my objective.  The  /boot/efi  has to be on a separate partition, 
and it can not be on a RAID device.  The  /boot  can be on LVM according to 
the documentation I have seen, but Anaconda will give you an error and not 
proceed if it is.   Someone pointed this out to me a few days ago, that this 
is by design in RH and CentOS.  And within the installer I could not find a 
way to put  /boot  on a non-LVM RAID1 while the rest of my drive is setup 
with LVM RAID1.  So that is when I went to GParted to manually setup the 
/boot/efi  and  /boot  partitions before running the installer.


At this point I needed to copy my /boot/efi and /boot partitions from 
sda[1,2] to sdb[1,2] so that the system would boot from either drive, so 
I issued the following sgdisk commands:


root#  sgdisk -R /dev/sdb1 /dev/sda1
root#  sgdisk -R /dev/sdb2 /dev/sda2
root#  sgdisk -G /dev/sdb1
root#  sgdisk -G /dev/sdb2


sgdisk manipulates GPT, so you run it on the disk, not on individual 
partitions.  What you've done simply scrambled information in sdb1 and 
sdb2.


The correct way to run it would be
# sgdisk -R /dev/sdb /dev/sda
# sgdisk -G /dev/sdb


Point taken, I am going back to read the sgdisk documentation again.  I had 
assumed that this would be a more technically accurate way to copy sda[1,2] 
to sdb[1,2] rather than using dd as a lot of how-to's suggest.


However, you would only do that if sdb were completly unpartitioned.  As 
you had already made at least one partition on sdb a member of a RAID1 
set, you should not do either of those things.


The entire premise of what you're attempting is flawed.  Making a 
partition into a RAID member is destructive.  mdadm writes its metadata 
inside of the member partition.  The only safe way to convert a filesystem 
is to back up its contents, create the RAID set, format the RAID volume, 
and restore the backup.  Especially with UEFI, there are a variety of ways 
that can fail.  Just set up the RAID sets in the installer.


I need some additional explanation of what you are trying to say here, as I 
don't understand it.  My objective is to have the following layout for my 
two 3TB disks.


sda1/boot/efi
sda2/boot
sda3RAID1 with sdb3

sdb1/boot/efi
sdb2/boot
sdb3RAID1 with sda3

I just finished re-installing using my GParted prepartitioned layout and I 
have a bootable system with sda1 and sda2 mounted, and md127 created from 
sda3 and sdb3.  My array is actively resyncing, and I have successfully 
rebooted a couple of times without a problem.  My goal now is to make sdb 
bootable for the case when/if sda fails.  This is the process that I now 
believe I failed on previously, and it likely has to do with issuing the 
sgdisk command to a partition rather than a device.  But even so, I don't 
understand why it would have messed with my first device that had been 
bootable.



I then installed GRUB2 on /dev/sdb1 using the following command:
root#  grub2-install /dev/sdb1
   Results:  Installing for x86_64-efi platform.  Installation finished. 
No error reported.


Again, you can do that, but it's not what you wanted to do.  GRUB2 is 
normally installed on the drive itself, unless there's a chain loader that 
will load it from the partition where you've installed it.  You wanted to:

# grub2-install /dev/sdb


Yes, I am beginning to think this is correct, and as mentioned above am 
going back to re-read the sgdisk documentation.



I rebooted the system now, only to be confronted with a GRUB prompt.


I'm guessing that you also constructed RAID1 volumes before rebooting, 
since you probably wouldn't install GRUB2 until you did so, and doing so 
would explain why GRUB can't find its configuration file (the filesystem 
has been damaged), and why GRUB shows no known filesystem detected on 
the first partition of hd1.


If so, that's expected.  You can't convert a partition in-place.


Looking through the directories, I see that there is no grub.cfg file.


It would normally be in the first partition, which GRUB cannot read

[CentOS] CentOS 7 grub.cfg missing on new install

2014-12-10 Thread Jeff Boyce
Starting Emergency Shell. . .
Failed to issue method call: Invalid argument

Now I am not sure that I want to get misdirected into what the problem is 
with this boot; if I can boot from a CD in linux rescue mode and do the 
grub install, I should be back to a booting system.  So let's ignore the 
boot error if we can.  I booted from a CD in rescue mode, and it was only 
able to automatically mount sda3 under /mnt/sysimage (the LVM RAID1 
containing mounts for / and /var).  I am able to manually mount sda1 and 
sda2, but am not sure at what level in the filesystem to mount them (i.e., 
at /mnt/sda1 or at /mnt/sysimage/sda1) in order to properly run 
grub2-install.


So that is where I am at now.  I would like to know how to repair the 
system, rather than starting over on a new install.  Can someone enlighten 
me on what I need to do from here.  Also if someone can speculate on why my 
grub.cfg is missing in the first place I would be interested.
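
(For anyone searching later: the upstream-documented recovery for a
missing grub.cfg on a UEFI CentOS 7 install is, in outline, run from the
rescue environment; the partition numbers match my layout, and the path
under /boot/efi assumes the stock CentOS shim layout:

root# chroot /mnt/sysimage
root# mount /dev/sda2 /boot
root# mount /dev/sda1 /boot/efi
root# grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg

On UEFI systems the config lives under /boot/efi/EFI/centos/ rather than
/boot/grub2/, which also makes it easy to look for it in the wrong place.)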


Also, please cc me directly on any responses, as I am only subscribed to the 
daily digest.  Thanks.


Jeff Boyce
Meridian Environmental
www.meridianenv.com 


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 7 install software Raid on large drives error

2014-12-08 Thread Jeff Boyce

A few comments in-line and at the bottom.


Date: Sat, 06 Dec 2014 11:32:24 -0500
From: Ted Miller tedli...@sbcglobal.net
To: centos@centos.org
Subject: Re: [CentOS] CentOS 7 install software Raid on large drives
error

On 12/05/2014 01:50 PM, Jeff Boyce wrote:


- Original Message - From: Mark Milhollan m...@pixelgate.net
To: Jeff Boyce jbo...@meridianenv.com
Sent: Thursday, December 04, 2014 7:18 AM
Subject: Re: [CentOS] CentOS 7 install software Raid on large drives 
error




On Wed, 3 Dec 2014, Jeff Boyce wrote:


I am trying to install CentOS 7 into a new Dell Precision 3610. I have
two 3
TB drives that I want to setup in software RAID1. I followed the guide
here
for my install as it looked fairly detailed and complete
(http://www.ictdude.com/howto/install-centos-7-software-raid-lvm/).


I suggest using the install guide rather than random crud. The storage
admin guide is fine to read too, but go back to the install guide when
installing.


/mark



Well I thought I had found a decent guide that wasn't random crud, but I
can see now that it was incomplete. I have read the RHEL installation
guide (several times now) and I am still not quite sure that it has all the
knowledge I am looking for.

I have played around with the automated and the manual disk partitioning
system in the installation GUI numerous times now trying to understand what
it is doing, or more accurately, how it responds to what I am doing. I
have made a couple of observations.

1. The installer requires that I have separate partitions for both /boot
and /boot/efi. And it appears that I have to have both of these, not just
one of them.

2. The /boot partition can not reside on LVM.

3. The options within the installer then appear to allow me to create my
LVM with Raid1, but the /boot and /boot/efi are then outside the Raid.

4. It looks like I can set the /boot partition to be Raid1, but then it is
a separate Raid1 from the LVM Raid1 on the rest of the disk. Resulting in
two separate Raid1s; a small Raid1 for /boot and a much larger Raid1 for
the LVM volume group.

I finally manually setup a base partition structure using GParted that
allowed the install to complete using the format below.

sda (3TB)
sda1 /boot fat32 500MB
sda2 /boot/efi fat32 500MB
sdb (3TB)
sdb1 /boot fat32 500MB
sdb2 /boot/efi fat32 500MB

The remaining space was left unpartitioned in GParted, which was then
prepared as LVM Raid1 in the CentOS installer. The installer also put the
/boot and /boot/efi files on sda1 and sda2. Then I would have to manually
copy them over to sdb1 and sdb2 if I wanted to be able to boot from drive
sdb if drive sda failed.

I am not sure that this result is what I really want, as it doesn't Raid my
entire drives. The structure below is what I believe I want to have.

sda & sdb RAID1 to produce md1
md1 partitioned
md1a /boot non-LVM
md1b /boot/efi non-LVM
md1c-f LVM containing /, /var, /home, and /swap

Well the abbreviations may not be the proper syntax, but you probably get
the idea of where I am going. If this is correct, then it looks like I
need to create the RAID from the command line of a rescue disk and set the
/boot and /boot/efi partitions first before beginning the installer. But
then again I could be totally off the mark here so I am looking for 
someone

to set me straight. Thanks.

Jeff


The last time I actually needed to do this was probably Centos 5, so 
someone will correct me if I have not kept up with all the changes.


1. Even though GRUB2 is capable of booting off of an LVM drive, that 
capability is disabled in RHEL & CentOS. Apparently RH doesn't feel it is 
mature yet. Therefore, you need the separate boot partition. (I have a 
computer running a non-RH grub2 installation, and it boots off of LVM OK, 
but apparently it falls into the "works for me" category).


Now that you say that, I do recall seeing someone mention it before on this 
list, but I had not run across it recently in all my Google searching.


2. I cannot comment from experience about the separate drive for /boot/efi, 
but needing a separate partition surprises me. I have not read about others 
needing that. I would think that having an accessible /boot partition would 
suffice.


I tried a lot of different combinations with the installer and 
pre-partitioning the drives, but I don't recall if I tried putting the /boot 
and /boot/efi on the same partition outside of the RAID.  That may work, but 
I am not going back to try that combination now.


3. When grub (legacy or grub2) boots off of a RAID1 drive, it doesn't 
really boot off of the RAID. It just finds one of the pair, and boots off 
of that half of the RAID. It doesn't understand that this is a RAID 
drive, but the disk structure for RAID1 is such that it just looks like a 
regular drive to GRUB. Basically, it always boots off of sda1. If sda 
fails, you have to physically (or in BIOS) swap sda and sdb in order for 
grub to find the RAID copy.


This seems reasonable

Re: [CentOS] CentOS 7 install software Raid on large drives error

2014-12-05 Thread Jeff Boyce


- Original Message - 
From: Mark Milhollan m...@pixelgate.net

To: Jeff Boyce jbo...@meridianenv.com
Sent: Thursday, December 04, 2014 7:18 AM
Subject: Re: [CentOS] CentOS 7 install software Raid on large drives error



On Wed, 3 Dec 2014, Jeff Boyce wrote:

I am trying to install CentOS 7 into a new Dell Precision 3610.  I have 
two 3
TB drives that I want to setup in software RAID1.  I followed the guide 
here

for my install as it looked fairly detailed and complete
(http://www.ictdude.com/howto/install-centos-7-software-raid-lvm/).


I suggest using the install guide rather than random crud.  The storage
admin guide is fine to read too, but go back to the install guide when
installing.


/mark



Well I thought I had found a decent guide that wasn't random crud, but I can 
see now that it was incomplete.  I have read the RHEL installation guide 
(several times now) and I am still not quite sure that it has all the 
knowledge I am looking for.


I have played around with the automated and the manual disk partitioning 
system in the installation GUI numerous times now trying to understand what 
it is doing, or more accurately, how it responds to what I am doing.  I have 
made a couple of observations.


1.  The installer requires that I have separate partitions for both /boot 
and /boot/efi.  And it appears that I have to have both of these, not just 
one of them.


2.  The /boot partition can not reside on LVM.

3.  The options within the installer then appear to allow me to create my 
LVM with Raid1, but the /boot and /boot/efi are then outside the Raid.


4.  It looks like I can set the /boot partition to be Raid1, but then it is 
a separate Raid1 from the LVM Raid1 on the rest of the disk.  Resulting in 
two separate Raid1s; a small Raid1 for /boot and a much larger Raid1 for the 
LVM volume group.


I finally manually setup a base partition structure using GParted that 
allowed the install to complete using the format below.


sda  (3TB)
  sda1  /boot   fat32   500MB
  sda2  /boot/efi   fat32   500MB
sdb  (3TB)
  sdb1  /boot   fat32   500MB
  sdb2  /boot/efi   fat32   500MB

The remaining space was left unpartitioned in GParted, which was then 
prepared as LVM Raid1 in the CentOS installer.  The installer also put the 
/boot and /boot/efi files on sda1 and sda2.  Then I would have to manually 
copy them over to sdb1 and sdb2 if I wanted to be able to boot from drive 
sdb if drive sda failed.


I am not sure that this result is what I really want, as it doesn't Raid my 
entire drives.  The structure below is what I believe I want to have.


sda & sdb RAID1 to produce md1
md1 partitioned
  md1a/boot non-LVM
  md1b   /boot/efi non-LVM
  md1c-f   LVM  containing  /, /var, /home, and /swap

Well the abbreviations may not be the proper syntax, but you probably get 
the idea of where I am going.  If this is correct, then it looks like I need 
to create the RAID from the command line of a rescue disk and set the /boot 
and /boot/efi partitions first before beginning the installer.  But then 
again I could be totally off the mark here so I am looking for someone to 
set me straight.  Thanks.


Jeff



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS 7 install software Raid on large drives error

2014-12-03 Thread Jeff Boyce

Greetings -

I am trying to install CentOS 7 into a new Dell Precision 3610.  I have two 
3 TB drives that I want to setup in software RAID1.  I followed the guide 
here for my install as it looked fairly detailed and complete 
(http://www.ictdude.com/howto/install-centos-7-software-raid-lvm/).  I only 
changed the size of the partitions from what is described, but ended up with 
the disk configuration error that won't allow the installation to complete. 
The error is:


You have not created a bootloader stage 1 target device.
You have not created a bootable partition.

So I am clearly missing a step in setting up the drives; likely before 
running the installer.  My disks are blank raw disks right now with nothing 
on them.  Reading the RHEL Storage Admin Guide (Sec. 18.6, Raid Support in 
the Installer) this should be supported, but I am assuming I may need to do 
something different because the drives are greater than 2 TB.  I have a 
SystemRescueCD that I can use GParted to do some setup in advance of the 
installer, but am not sure of what exactly I need to do.


My objective is to RAID1 the two drives, use LVM on top of the RAID, install 
CentOS7 as a KVM host system, with two KVM guests (Linux Mint and Windows 
7).  Can anyone tell me the steps I am missing, or point me to a better 
tutorial than what I have found in my extensive Google searches.  Thanks.


Please cc me directly on replies as I am only subscribed to the daily 
digest.  Thanks.


Jeff Boyce
www.meridianenv.com

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Advice on CentOS 7, software raid 1, lvm, 3 TB

2014-09-17 Thread Jeff Boyce
Sorry for breaking the threading, as I only get the daily digest.  My 
comments (interspersed) begin with the **.



Message: 41
Date: Tue, 16 Sep 2014 18:38:02 -0400
From: SilverTip257 silvertip...@gmail.com
To: CentOS mailing list centos@centos.org
Subject: Re: [CentOS] Advice on CentOS 7, software raid 1, lvm, 3 TB
disks
Message-ID:
CA+W-zoqD9e826==2H6i9g4CkM_SrqW=yEtFcWntAy=qeazg...@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

On Tue, Sep 16, 2014 at 6:07 PM, Jeff Boyce jbo...@meridianenv.com wrote:


Greetings -

I am preparing to order a new desktop system for work. In general the new
system will be a Dell Precision T3610 with two 3 TB drives. I plan on
installing CentOS 7 as a KVM host, with virtual machines for Win 7 Pro and
Linux Mint. I am looking for some advice or a good how-to on configuring
software raid on the two drives, then using LVM for the host and virtual
machines. I have configured our company server (Dell T610) with hardware
raid and LVM for a CentOS 6 KVM host and several virtual machines so I am
not a complete novice, but have never setup a Linux software raid system,
and have not played with a CentOS 7 install yet. I have been searching the
web and forums for information and am not finding much for good guidance.
Lots of gotcha's are popping up identifying issues related to CentOS 7,
software raid 1, grub install,  2 TB disks (or any combination of these
factors). The CentOS Wiki has a good description of installing



I'm not sure which wiki article you might have read. That URL might be
worthwhile to share.

** The CentOS Wiki article I was referring to was the same one you provided 
in the first link of the group of references posted at the bottom of your 
note.  I was a little put off by the article having fairly significant 
warnings in the first two paragraphs of the article, so I only skimmed 
through it.




CentOS 5 with raid 1, but there is a big warning about being an
unsupported (risky) approach. Can anyone point me to a good how-to, or
provide some general guidance. Thanks.



Hopefully what I have typed up below helps you.
I don't know about soft-raid1 being an unsupported/risky approach ... that
said I'd pick hardware raid over software raid (considering I had spare
hardware) so I don't have to fuss with raid at the OS level. I have worked
on a mix of software-raid and hardware-raid systems (and still do) ... each
has its own pros/cons. I've had success re-adding a new drive in degraded
soft-raid1 arrays in a production environment ... so I say go for it.

[ ]
Somebody else asked about C7 and soft-raid in the past week or week and a
half.
You can find that thread here:
http://lists.centos.org/pipermail/centos/2014-September/145656.html
Though I don't think much was accomplished in that thread.
[/]

** Yea, I saw that thread recently also, and was hoping for some good 
information from it, but the original post wasn't specific enough to 
generate specific guidance.  That is why I thought to try a post that was 
more specific to my objective.



My suggestion to you (as well as that last person) is to spin up a VM (or
spare bare-metal hardware) and use mdadm commands to assemble, stop,
hot-fail, hot-remove, and rebuild (add a new disk to replace a failed
one) your soft-raid array.

** I am planning on having some time set aside to play/experiment with the 
new box while setting it up.  So I am fortunate to not be in a situation 
where I have to get it into production now before figuring it all out.



As is the case with many things Linux, the manpage is your friend.
Sometimes sysadmins and hobbyists decide to publish what they've done
(good or bad) which can be found with the search engine of your choice.

** I am a book worm so I don't mind reading man pages.  But as an ecologist, 
it is often good to understand the big picture before diving into the 
details.  That way I understand which detail I need to look at, and what 
order all the details go together in.  It looks like the ArchLinux links you 
gave me below will give me an understanding of that as I read them in 
detail.



In this case, even generic (non-RH or non-CentOS specific) command
documentation is likely what you want. More than likely you'll get the
results you want by booting to a rescue CD (or switching to a shell on your
install CD), setting up your soft-raid, then booting to your install CD,
which will probe for your disk/soft-raid/lvm layout.

steps for graphical approach (C5, so dated) -
http://wiki.centos.org/HowTos/SoftwareRAIDonCentOS5
partitionable soft-raid -
http://wiki.centos.org/HowTos/Install_On_Partitionable_RAID1
TLDP create soft-raid - http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html

** I've seen these first three, and will go back and look at the first one 
in more detail now as mentioned above.



Arch Linux create soft-raid -
https://wiki.archlinux.org/index.php/Software_RAID_and_LVM

** I had not found this one.  My first impression scanning through

[CentOS] Advice on CentOS 7, software raid 1, lvm, 3 TB disks

2014-09-16 Thread Jeff Boyce

Greetings -

I am preparing to order a new desktop system for work.  In general the new 
system will be a Dell Precision T3610 with two 3 TB drives.  I plan on 
installing CentOS 7 as a KVM host, with virtual machines for Win 7 Pro and 
Linux Mint.  I am looking for some advice or a good how-to on configuring 
software raid on the two drives, then using LVM for the host and virtual 
machines.  I have configured our company server (Dell T610) with hardware 
raid and LVM for a CentOS 6 KVM host and several virtual machines so I am 
not a complete novice, but have never setup a Linux software raid system, 
and have not played with a CentOS 7 install yet.  I have been searching the 
web and forums for information and am not finding much for good guidance. 
Lots of gotcha's are popping up identifying issues related to CentOS 7, 
software raid 1, grub install, >2 TB disks (or any combination of these 
factors).  The CentOS Wiki has a good description of installing CentOS 5 
with raid 1, but there is a big warning about being an unsupported (risky) 
approach.  Can anyone point me to a good how-to, or provide some general 
guidance.  Thanks.


Jeff Boyce
www.meridianenv.com

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] Need to unmount an LV from host system

2013-03-06 Thread Jeff Boyce

  - Original Message - 
  From: SilverTip257 
  To: Discussion about the virtualization on CentOS 
  Cc: jbo...@meridianenv.com 
  Sent: Tuesday, March 05, 2013 4:21 PM
  Subject: Re: [CentOS-virt] Need to unmount an LV from host system


  On Tue, Mar 5, 2013 at 3:24 PM, jbo...@meridianenv.com wrote:

Greetings -

Ok, I made a mistake that I need to fix.  Fortunately it is not a
destructive mistake, but I need some advice on how to correct the problem.

CentOS 6.3 host system named Earth

I was creating some new logical volumes within my existing volume group for
a new virtual machine using the LVM GUI.  When I created the LV that I
plan to use for root partition of the new VM (Bacteria) I mistakenly
clicked on the box to mount the LV, and specified the mount point as /.




  So you mounted that new LV as / (or over the existing root) on your host node?


  You may end up needing to boot to a rescue CD, mount, and rsync files from 
Bacteria's root to Earth's [real] root.  ( I wonder if anything is being 
written to Bacteria's root since it's mounted over the real root. )

[root@earth ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/mapper/vg_mei-lv_earthroot
  5.0G  3.9G  880M  82% /
tmpfs 5.9G  276K  5.9G   1% /dev/shm
/dev/sda1 485M  116M  344M  26% /boot
/dev/mapper/vg_mei-lv_earthvar
  3.0G  748M  2.1G  27% /var
/dev/mapper/vg_mei-lv_bacteriaroot
  5.0G  3.9G  880M  82% /

I tried to unmount the device, but as shown below, it is busy.

[root@earth ~]# umount /dev/mapper/vg_mei-lv_bacteriaroot
umount: /: device is busy.
(In some cases useful info about processes that use
 the device is found by lsof(8) or fuser(1))




  I would have expected you could unmount it given that you're umounting by the 
device name.  Having it mounted on or over root likely makes this a bit finicky.

I tried to force unmount the device, but that failed also.

[root@earth ~]# umount -f /dev/mapper/vg_mei-lv_bacteriaroot
umount2: Device or resource busy
umount: /: device is busy.
(In some cases useful info about processes that use
 the device is found by lsof(8) or fuser(1))
umount2: Device or resource busy


What other options are there.  Is there are way to get this unmounted
without having to shutdown my host system and boot into rescue mode.  I
don't really want to shutdown my active VM's while other staff are working
on them right now.



  Take the advice of the output you pasted ... check the output from lsof and 
see if you can't narrow down the problem.


  After you accidentally mounted that LV on root did you change your directory? 
 Something is hanging on to that device.
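
(For the record, the concrete check would have been something along the
lines of:

[root@earth ~]# fuser -vm /dev/mapper/vg_mei-lv_bacteriaroot

which lists the processes holding the mounted filesystem and so should
show what was pinning the over-mounted root.)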


Please cc me directly as I only receive the daily digest.  Thanks.

Jeff
Meridian Environmental


___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt





  -- 
  ---~~.~~---
  Mike
  //  SilverTip257  // 


  Thanks for the insight.  I had tried the advice from the output to use lsof 
and fuser; that is what led me to try the force unmount.  In the end I realized 
that the only way to unmount a / partition is to be off of it entirely, meaning 
you must be booted to a live rescue disk such as SystemRescue CD.  I ended up 
solving this by checking to make sure that /etc/fstab did not have an entry for 
the Bacteria LV, then shutting down the system and rebooting it late at night 
when there was no one on the system and no scheduled system activity.  A reboot 
brought everything back up normal, so I didn't even have to resort to using a 
Live CD.  Sorry for all the noise.

  Jeff
  Meridian Environmental
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] Resizing est4 filesystem while mounted

2012-06-19 Thread Jeff Boyce
Replying to the daily digest, with my response at the bottom.


 Message: 18
 Date: Mon, 18 Jun 2012 22:28:31 +0200
 From: Dennis Jacobfeuerborn denni...@conversis.de
 Subject: Re: [CentOS] Resizing est4 filesystem while mounted
 To: centos@centos.org
 Message-ID: 4fdf8f6f.8030...@conversis.de
 Content-Type: text/plain; charset=ISO-8859-1

 On 06/18/2012 10:09 PM, Jeff Boyce wrote:
 Replying to the daily digest, with my response at the bottom.



 Message: 13
 Date: Fri, 15 Jun 2012 12:22:08 -0700
 From: Ray Van Dolson ra...@bludgeon.org
 Subject: Re: [CentOS] Resizing est4 filesystem while mounted
 To: centos@centos.org
 Message-ID: 20120615192207.ga23...@bludgeon.org
 Content-Type: text/plain; charset=us-ascii

 On Fri, Jun 15, 2012 at 12:10:09PM -0700, Jeff Boyce wrote:
 Greetings -

 I had a logical volume that was running out of space on a virtual
 machine.
 I successfully expanded the LV using lvextend, and lvdisplay shows that
 it
 has been expanded. Then I went to expand the filesystem to fill the new
 space (# resize2fs -p /dev/vde1) and I get the results that the
 filesystem
 is already xx blocks long, nothing to do. If I do a # df -h, I can see
 that
 the filesystem has not been extended. I could kick the users off the 
 VM,
 reboot the VM using a GParted live CD and extend the filesystem that 
 way,
 but I thought that it was possible to do this live and mounted? The RH
 docs
 say this is possible; the man page for resize2fs also says it is 
 possible
 with ext4. What am I missing here? This is a Centos 6.2 VM with an ext4
 filesystem. The logical volumes are setup on the host system which is
 also
 a Centos 6.2 system.

 Try resize4fs (assuming your FS is ext4).

 Ray

 Well, I have never seen a reference to resize4fs before (and yes my FS is
 ext4).  It is not on my Centos 6.2 system, and doing a little searching
 through repositories for that specifically, or e4fsprogs, and I can't 
 find
 it anywhere to even try it.  Any google reference seems to point back to
 resize2fs.  I ended up booting a live SystemRescueCD and using GParted 
 via
 the GUI.  My notes indicate that is what I had done previously also.  I 
 am
 still stumped, everything that I have read indicates that resize2fs can 
 do a
 live resizing on ext4 file systems.  Can anybody confirm or deny this? 
 Is
 the reason I can't do this because it is on an LVM logical volume? 
 Thanks.

 Please post some details about your storage topology. Without this
 information it's not really possible to be sure what is going on.
 resize2fs cannot work as long as the underlying layers don't see any change
 in size, and you didn't seem to look for that.

 Regards,
   Dennis


I provided some of that information in my original post, but if you can help 
explain why I couldn't seem to resize the file system while mounted here is 
more information.

Host system is Centos 6.2 on a Dell PE T610 with hardware raid on a PERC 
H700.  Raid 5 is set up across three disks with a fourth hot spare.  I have 
created a volume group within the raid 5 encompassing most of my drive 
space.  Within the VG I have created numerous logical volumes that are 
assigned to specific systems.

Volume Group:  vg_mei
Logical Volumes:
   lv_earthroot
   lv_earthswap
   lv_earthvar
   lv_sequoiaroot
   lv_sequoiaswap
   lv_sequoiavar
   lv_sequoiahome
   lv_sequoiaecosystem

Earth is my host system and Sequoia is one of the guest systems. 
lv_sequoiaecosystem is the space dedicated to our Samba server and is the LV 
that I was expanding to make more space available to the rest of the staff. 
I had successfully extended lv_sequoiaecosystem using the following command 
from root on earth (lvextend -L+50G /dev/vg_mei/lv_sequoiaecosystem). 
Issuing the command  (lvdisplay /dev/vg_mei/lv_sequoiaecosystem) following 
this showed that the LV was successfully extended from 100 to 150 GB.

I then logged onto sequoia as root and issued a df -h to determine which 
device needed the file system to be resized (/dev/vde1).  The output below 
is current, after I resized the filesystem using GParted.

[root@sequoia ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/vda2 4.5G  2.5G  1.8G  59% /
tmpfs1004M  112K 1004M   1% /dev/shm
/dev/vda1 485M   55M  406M  12% /boot
/dev/vde1 148G   85G   56G  61% /ecosystem
/dev/vdd1  20G  1.3G   18G   7% /home
/dev/vdc1 2.0G  266M  1.7G  14% /var

Then from root on sequoia I issued the command (resize2fs -p /dev/vde1) and 
got back the result that the filesystem is already 26214144 blocks long, 
nothing to do.  That is when I posted my first question about not being able 
to resize a live mounted filesystem.  Is that enough information for your 
question, or is there something that I am not providing?  Thanks.
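
(Postscript: the 2021 thread at the top of this archive finally pins down
why the online resize stalled here: the guest's view of the virtual disk,
and then the partition table, had to be grown before resize2fs had
anything to do.  The working sequence there was, in outline:

earth# lvextend -L+50G /dev/vg_mei/lv_sequoiaecosystem
earth# virsh blockresize <guest> vde <new-size>B
sequoia# growpart /dev/vde 1
sequoia# umount /ecosystem && partprobe
sequoia# e2fsck -f /dev/vde1 && resize2fs /dev/vde1

so resize2fs itself was never the obstacle.)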

Jeff

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Resizing est4 filesystem while mounted

2012-06-18 Thread Jeff Boyce
Replying to the daily digest, with my response at the bottom.



Message: 13
Date: Fri, 15 Jun 2012 12:22:08 -0700
From: Ray Van Dolson ra...@bludgeon.org
Subject: Re: [CentOS] Resizing ext4 filesystem while mounted
To: centos@centos.org
Message-ID: 20120615192207.ga23...@bludgeon.org
Content-Type: text/plain; charset=us-ascii

On Fri, Jun 15, 2012 at 12:10:09PM -0700, Jeff Boyce wrote:
 Greetings -

 I had a logical volume that was running out of space on a virtual
 machine.  I successfully expanded the LV using lvextend, and lvdisplay
 shows that it has been expanded.  Then I went to expand the filesystem to
 fill the new space (# resize2fs -p /dev/vde1) and I get the result that
 the filesystem is already xx blocks long, nothing to do.  If I do a
 # df -h, I can see that the filesystem has not been extended.  I could
 kick the users off the VM, reboot the VM using a GParted live CD, and
 extend the filesystem that way, but I thought that it was possible to do
 this live and mounted?  The RH docs say this is possible; the man page
 for resize2fs also says it is possible with ext4.  What am I missing
 here?  This is a CentOS 6.2 VM with an ext4 filesystem.  The logical
 volumes are set up on the host system, which is also a CentOS 6.2 system.

Try resize4fs (assuming your FS is ext4).

Ray

Well, I have never seen a reference to resize4fs before (and yes, my FS is 
ext4).  It is not on my CentOS 6.2 system, and doing a little searching 
through repositories for that specifically, or for e4fsprogs, I can't find 
it anywhere to even try it.  Any Google reference seems to point back to 
resize2fs.  I ended up booting a live SystemRescueCD and using GParted via 
the GUI.  My notes indicate that is what I had done previously also.  I am 
still stumped; everything that I have read indicates that resize2fs can do a 
live resize on ext4 file systems.  Can anybody confirm or deny this?  Is 
the reason I can't do this because it is on an LVM logical volume?  Thanks.

Jeff Boyce
Meridian Environmental

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Resizing ext4 filesystem while mounted

2012-06-15 Thread Jeff Boyce
Greetings -

I had a logical volume that was running out of space on a virtual machine. 
I successfully expanded the LV using lvextend, and lvdisplay shows that it 
has been expanded.  Then I went to expand the filesystem to fill the new 
space (# resize2fs -p /dev/vde1) and I get the result that the filesystem 
is already xx blocks long, nothing to do.  If I do a # df -h, I can see that 
the filesystem has not been extended.  I could kick the users off the VM, 
reboot the VM using a GParted live CD, and extend the filesystem that way, 
but I thought that it was possible to do this live and mounted?  The RH docs 
say this is possible; the man page for resize2fs also says it is possible 
with ext4.  What am I missing here?  This is a CentOS 6.2 VM with an ext4 
filesystem.  The logical volumes are set up on the host system, which is 
also a CentOS 6.2 system.

Jeff Boyce
Meridian Environmental

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] ABRT interpretation / guidance needed

2012-03-19 Thread Jeff Boyce
=
SERVER_REALROOT=/usr/libexec/webmin
PWD=/usr/libexec/webmin/webmincron
WEBMIN_CRON=1
SERVER_ADMIN=
LANG=
SERVER_ROOT=/usr/libexec/webmin
WEBMIN_CONFIG=/etc/webmin
PERLLIB=/usr/libexec/webmin
SHLVL=1
HOME=/root
LANGUAGE=
MINISERV_PID=1657
SERVER_SOFTWARE=MiniServ/1.580
WEBMIN_VAR=/var/webmin
_=/bin/rpm

os_release
-
CentOS release 6.2 (Final)



Jeff Boyce
Meridian Environmental
www.meridianenv.com 

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] Resize guest filesystem question

2012-02-27 Thread Jeff Boyce
Responding to the daily digest, see comment at bottom.


 Message: 1
 Date: Mon, 27 Feb 2012 14:01:27 +0100
 From: Markus Falb markus.f...@fasel.at
 Subject: Re: [CentOS-virt] Resize guest filesystem question
 To: centos-virt@centos.org
 Message-ID: jifur8$pmi$1...@dough.gmane.org
 Content-Type: text/plain; charset=iso-8859-1

 On 24.2.2012 21:28, Sergiy Yegorov wrote:
 Fri, 24-Feb-2012 12:05:55 Jeff Boyce wrote:

 6.  Then I ran  resize2fs /dev/vda2  and got the result that the
 filesystem is already xx blocks long.  Nothing to do!

 Before you can resize the filesystem, you have to resize the partition.
 If there are only 2 partitions on /dev/vda, you can use one of two
 approaches:
 1. Resize the partition from the host system (I think this is not the
 best idea for root partition operations):  run fdisk /dev/vg/lv_guest1root,
 delete the second partition and create a new one which starts at the same
 place but takes all available space; after that you can boot the guest (in
 single-user mode) and run resize2fs.
 2. Boot the VM from any third-party medium (a LiveCD or similar) with
 access to the virtual disk and do the same: in fdisk, delete the existing
 partition, create a new one, and run resize2fs on it.  Or just use parted
 to do it in one command.

 3. You can also repartition from the guest itself.
 Do as in 2.  After saving the new partition table, fdisk will probably
 request a reboot before the new table is used.  Reboot, then resize the
 filesystem.

 -- 
 Kind Regards, Markus Falb


Thanks to everyone who replied.  Ed gave me the right clue that I was 
missing (and that is apparently missing in a lot of the how-tos I reviewed, 
which only said to expand the LV and then expand the filesystem): I had to 
expand the partition before expanding the filesystem.  So for my solution I 
rebooted that particular VM using a SystemRescue LiveCD, then started 
GParted and expanded the partition, which also expanded the filesystem in a 
single step.  Then I rebooted using the VM's image, and df -h now shows the 
expanded LV and a filesystem on the full space.
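
For my notes, the repartition-first route that Sergiy and Markus described 
(without GParted) would go roughly like this from inside the guest -- a 
sketch only, since the recreated partition must start at the same sector as 
the old one or the filesystem is lost:

guest# fdisk /dev/vda       # d (delete partition 2), n (recreate it with
                            #   the same start sector and a larger end),
                            #   w (write the new table)
guest# reboot               # so the kernel re-reads the partition table
guest# resize2fs /dev/vda2  # now the filesystem has room to grow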

Jeff

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] Resize guest filesystem question

2012-02-24 Thread Jeff Boyce
Greetings -

I am going through some testing steps to expand a logical volume and the 
corresponding filesystem on a KVM guest and have run across a deficiency in 
my knowledge.  I spent the afternoon yesterday googling for answers, but 
have come up blank so far.  What I am trying to do is resize the file system 
to use the additional disk space that I added to the logical volume that the 
guest uses.  Here is what I have done and the details of my system.

0.  Both my host and guest are running CentOS 6.2.

1.  My KVM host system has the LVM volume group that is divided into logical 
volumes which are then presented to the KVM guests as raw space.

2.  A guest may use 2 or 3 logical volumes from the host system for its 
filesystem (/, /var, /data), and I have logical volumes named within the 
host system by guest and mount point so that I know what each logical volume 
is assigned to by its name.

3.  I expanded a specific logical volume on the host (/dev/vg/lv_guest1root) 
that is used by Guest1, and I can see in vgdisplay and lvdisplay that the 
logical volume was properly expanded.

4.  I then issued a  resize2fs /dev/vg/lv_guest1root  command (on the host) 
to resize the filesystem to fill the expanded logical volume.  This resulted 
in a message that it essentially couldn't find a valid filesystem superblock. 
Well, of course: then I realized that there is no filesystem on the logical 
volume from the perspective of the host.  The filesystem wasn't created on 
the logical volume until the guest installation occurred (and it lives 
inside the guest's own partition table, which is why the host sees no 
superblock).

5.  So then I switched over to the guest system and ran  df -h  to see the 
existing filesystem

[root@guest1 jeffb]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda2             4.5G  2.3G  2.0G  53% /
tmpfs                1004M   88K 1004M   1% /dev/shm
/dev/vda1             485M   30M  430M   7% /boot
/dev/vdb1             2.0G  219M  1.7G  12% /var

6.  Then I ran  resize2fs /dev/vda2  and got the result that the filesystem 
is already xx blocks long.  Nothing to do!
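
To put numbers on it, the partition size and the filesystem's own idea of 
its size can be compared with standard tools (output omitted here):

[root@guest1 jeffb]# fdisk -l /dev/vda
    # shows the start/end of each partition on the disk
[root@guest1 jeffb]# tune2fs -l /dev/vda2 | grep -E 'Block count|Block size'
    # block count x block size = the size the filesystem thinks it is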

So here is where I am stuck.  Guest1 is my test system so it only has the / 
and /var logical volumes, whereas the production guest (guest2) that I will 
be expanding also has /data, which will be the logical volume that I will 
expand.  So two things I did not do were: I did not shut down the guest VM, 
and I did not unmount the filesystem before asking it to resize.  However, my 
research before doing this did not seem to indicate that I had to do either, 
and the message about nothing to do also seems to indicate that they were 
not necessary.

So there is a hole in my knowledge, and additional googling has not helped 
to fill it.  I must be missing something simple.  Is this result due to the 
fact that I am testing on expanding the / filesystem, and it would work 
properly on a guest system that had /data?  Do I need to unmount the 
filesystem, or shut down the guest VM, or mount the guest from a LiveCD?  Or 
do I need to give it  resize2fs /dev/vda  rather than specifically 
/dev/vda2?  Any clues, or pointers to good documentation, are greatly 
appreciated.  Thanks.

Jeff Boyce
Meridian Environmental
www.meridianenv.com 

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] Confusion over steps to add new logical volume to guest VM

2011-12-19 Thread Jeff Boyce
Greetings -

I am hoping someone can confirm for me the steps that I am using to add an 
LV to an existing Guest in KVM, and what I am seeing as I do some of these 
steps.  My host (earth) is CentOS 6 and so is my guest (disect).  The guest 
(disect) is my testing server.  The main objective of what I am trying to do 
is add a logical volume for /var into my guest.  The LV (lv_disectvar) is 
part of an existing VG (vg_mei) on the host. When I installed my guest OS, I 
could only select one LV from my host for the installation, so the entire OS 
was installed into lv_disectroot.  I have added lv_disectvar into the 
storage pool available for VMs through the VirtManager GUI, and successfully 
added the lv_disectvar to the guest VM also through the VirtManager GUI. 
And now I want to move my /var over to the new LV.

My notes show that I created the LV about two weeks ago, and added it to the 
guest VM last week.  I don't see any other notes indicating that I did 
anything else with this LV.  Today I finally got around to the task of 
moving /var from lv_disectroot over to lv_disectvar and editing fstab so 
that it would automount at boot.  So here are the steps that I was going to 
do, and what I saw that made me pause.

1.  Verify that the device is identified by the guest, and get the UUID for 
fstab.
[jeffb@disect ~]$ ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx. 1 root root 10 Dec 12 14:37 
738923a4-c345-40bf-a850-4454caa152b6 -> ../../vda2
lrwxrwxrwx. 1 root root  9 Dec 12 14:37 
db1230fb-a061-4ea4-9229-c5953d29aef8 -> ../../vdb   <-- lv_disectvar
lrwxrwxrwx. 1 root root 10 Dec 12 14:37 
db70a6db-3640-4392-b9ba-49839d6e4b45 -> ../../vda1

2.  Create a mount point for /dev/vdb
[root@disect jeffb]# mkdir /mnt/var

3.  Mount the device
[root@disect jeffb]# mount -t ext4 /dev/vdb /mnt/var

4.  At this point I was going to move /var to /mnt/var but decided to check 
the mount first
[root@disect mnt]# cd /mnt/var
[root@disect var]# ls
cache  lib  lock  log  lost+found  run

5.  When I saw the directories listed I was initially confused and thought I 
had possibly mounted the wrong device.  But after double-checking everything, 
it appears I did not do anything wrong.  As I mentioned above, my notes do 
not show that I did anything with this LV other than create it and attach it 
to the guest VM.  I am wondering about the source of these directories and 
all the subsequent files under them.  My assumption is that these were 
created when the LV was formatted and are part of its filesystem.  Is that a 
correct assumption?  How could I confirm it?  (One possible check is 
sketched after this list.)

6.  Can I just add a /var directory to this and move my /var from /vda2 over 
to /vdb?

7.  Then I would add the following line to my /etc/fstab in the guest disect
UUID=db1230fb-a061-4ea4-9229-c5953d29aef8 /var ext4 defaults 1 2

8.  I initially tried just adding the above line to my fstab, but realized 
after the guest VM hung on rebooting that /var is already mounted under /, 
and that of course caused a problem when booting.
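
For completeness, the rest of the sequence I was planning looks roughly 
like this (a sketch only; tune2fs addresses the question in step 5 by 
showing when the filesystem on the LV was created, and the copy should 
really be done from single-user or rescue mode so nothing is writing to 
/var at the time):

[root@disect jeffb]# tune2fs -l /dev/vdb | grep 'Filesystem created'
    # compare this date against my notes on when the LV was created
[root@disect jeffb]# cp -a /var/. /mnt/var/
    # copy the live /var across, preserving ownership and permissions
[root@disect jeffb]# vi /etc/fstab
    # add the UUID line from step 7, then reboot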

Thanks for any advice you can provide.


Jeff Boyce
Meridian Environmental
www.meridianenv.com

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] Slightly OT: Centos KVM Host/Guest functions and LVM considerations

2011-09-14 Thread Jeff Boyce
Greetings -



I will be getting a new server for my company in the next few months and am 
trying to get a better understanding of some of the basic theoretical 
approaches to using CentOS KVM on a server.  I have found plenty of things 
to read that discuss how to install KVM on the host, how to install a 
guest and set up the network bridge, and all the other rudimentary tasks. 
But all of these how-tos assume that I already have a conceptual 
understanding of how KVM fits into the design of how I use my server, or 
network of virtual servers as the case may be.
am looking for some opinions or guidance from KVM experts or system 
administrators to help direct me along my path.  Pointers to other HowTo's 
or Blogs are appreciated, but I have run out of Google search terms over the 
last month and haven't really found enough information to address my 
specific concerns.  I suspect my post might get a little long, so before I 
give more details of my objectives and specific questions, please let me 
know if there is a better forum for me to post my question(s).



The basic questions I am trying to understand are: (1) what functions in my 
network should my host be responsible for, (2) what functions should 
logically be separated onto different VMs, and (3) how should I organize 
disks, RAID, LVM, partitions, etc. to make the best use of how my system 
will function?  Now, I know these questions are wide open without any 
context about the purpose(s) of my new server and my existing background 
and knowledge, so that is the next part.



I am an ecologist by education, but I manage all the computer systems for my 
company with a dozen staff.  I installed my current Linux server about 7 
years ago (RHEL3) as primarily a Samba file server.  Since then the 
functions of this server have expanded to include VPN access for staff and 
FTP access for staff and clients.  Along the way I have gained knowledge and 
implemented various other functions that are primarily associated with 
managing the system, such as tape backups, UPS shutdown configuration, Dell's 
OMSA hardware monitoring, and network time keeping.  I am certainly not a 
Linux expert and my philosophy is to learn as I go, and document it so that 
I don't have to relearn it again.  My current server is a Dell (PE2600) with 
1 GB RAM and 6 drives in a RAID 5 configuration, without LVM.  I have been 
blessed with a very stable system with only a few minor hiccups in 7 years. 
My new server will be a Dell (T610) with 12 GB RAM, 4 drives in a RAID 5 
configuration, and an iDRAC6 Enterprise Card.



The primary function of my new server hardware will be as the Samba file 
server for the company.  It may also provide all, or a subset of, the 
functions my existing server provides.  I am considering adding a new 
gateway box (ClearOS) to my network and could possibly move some functions 
(FTP, VPN, etc.) to it if appropriate.  There are also some new functions 
that my server will probably be responsible for in the near future (domain 
controller, groupware, open calendar, client backup system [BackupPC]).  I 
am specifically planning on setting up at least one guest VM as a space to 
test and setup configurations for new functions for the server before making 
them available to all the staff.



So, to narrow down my first two questions: how should these functions be 
organized between the host system and any guest VMs?  Should the host be 
responsible just for hardware maintenance and monitoring (OMSA, APC 
shutdown), or should it include the primary function of the hardware (Samba 
file server)?  Should remote-access functions (FTP & VPN) be segregated 
off the host and onto a guest?  Or should these be put on the gateway box?



I have not worked with LVM yet, and I am trying to understand how I should 
set up my storage space and allocate it to the host and any guests.  I want 
to use LVM because I see the many benefits it brings for flexible 
management of storage space.  For my testing guest VM I would probably use 
an image file, but if the Samba file server function is in a guest VM, I 
think I would rather have that as a raw LV partition (I think?).  The more I 
read, the more confused I get about the hierarchy of the storage 
(disks -> RAID -> [PV, VG, LV] -> partition -> image file) and how I should 
be organizing and managing the file system for my functions.  I don't even 
understand it enough to ask a more specific question.
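
To make my current (possibly wrong) understanding concrete, I believe the 
build-up order is roughly the following, with made-up names -- please 
correct me if I have this backwards:

# the RAID array appears to the OS as one block device, e.g. /dev/sdb
host# pvcreate /dev/sdb                  # mark it as an LVM physical volume
host# vgcreate vg_pool /dev/sdb          # group PVs into a volume group
host# lvcreate -L 100G -n lv_guest1 vg_pool  # carve out a logical volume
# the LV can then be handed to a guest raw, or formatted and used to hold
# image files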



Thanks to anyone who has had the patience to read this much.  More 
thanks to anyone who provides a constructive response.



Jeff Boyce
Meridian Environmental 

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS] Slightly OT: First Time KVM and LVM on Centos

2011-06-13 Thread Jeff Boyce
Greetings -

I am a novice system administrator and will soon be purchasing a new server 
to replace an aging file server for my company.  I am considering setting 
up the new server as a KVM host with two guests: one guest as the Samba file 
server and a second guest as a testing area.  My old server was set up about 
7 years ago and has a 5-disk RAID 5 configuration without LVM.  I understand 
the benefits of using LVM and KVM in the right circumstances, but have never 
used either of them.  I have spent a couple of days over the last week 
trying to understand how to set up a KVM host with guests, but there is an 
area that I still don't understand; that is the relationship between the 
underlying raid partitions, LVM, and allocating space to a host and guests. 
Many of the standard search term combinations in Google don't seem to be 
getting me anywhere.  From what I have read so far I think that I want to 
have my file server guest using a raw partition rather than an image file, 
but I haven't found anything with examples or best-practices guidance for 
partitioning or volume management with hosts and guest VMs.  So I am hoping 
that someone here can give me some pointers, or point me to some clear 
how-to's somewhere.  Any help is appreciated.  Thanks.
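
For example, if I understand the raw-partition option correctly, pointing a 
guest's disk directly at a logical volume would look something like this 
with virt-install (my own untested sketch; all names are made up):

# virt-install --name fileserver --ram 4096 --vcpus 2 \
    --disk path=/dev/vg_pool/lv_fileserver,bus=virtio \
    --cdrom /tmp/CentOS-6-x86_64-bin-DVD1.iso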

Jeff Boyce
Meridian Environmental
www.meridianenv.com

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] User accounts management for small office

2011-04-27 Thread Jeff Boyce
- Original Message - 
From: Jeff Boyce jbo...@meridianenv.com
To: centos@centos.org
Sent: Thursday, April 21, 2011 11:39 AM
Subject: User accounts management for small office


 Greetings -

 This may be a little off-topic here so if someone wants to point me to a 
 more appropriate mailing list I would appreciate it.

 I administer the network for my small company and am preparing to install 
 a new server in the next month or so.  It will be running CentOS 6 and 
 function primarily as a Samba file server to 10 Windows workstations (XP, 
 Vista, 7).  It will also host our OpenVPN server and possibly our FTP 
 server; however I am hoping to move our FTP server to a gateway box when 
 the new server is installed.

 The issue that I would like to resolve when the new server is installed 
 is that currently, if a user changes the password on their Windows 
 workstation, I have to manually update that new password on the Linux 
 user account and also manually change the Samba user account. 
 Manually updating the password in three different locations is a minor 
 headache that I would like to correct.  I have been researching and 
 reading lots of information about account management to try and understand 
 what is available, and what would be the best fit for my network size. 
 Much of what I have read is related to larger networks or larger user 
 bases, which seem to have a lot of extraneous stuff that would be 
 unnecessary in my small user environment.  I looked into OpenLDAP, and 
 have recently been reading about Samba/Winbind.  But after encountering 
 the following statement in the Samba documentation, I am still lost about 
 what I could, or should, be using.
 A standalone Samba server is an implementation that is not a member of a 
 Windows NT4 domain, a Windows 200X Active Directory domain, or a Samba 
 domain.  By definition, this means that users and groups will be created 
 and controlled locally, and the identity of a network user must match a 
 local UNIX/Linux user login. The IDMAP facility is therefore of little to 
 no interest, winbind will not be necessary, and the IDMAP facility will 
 not be relevant or of interest.

 My only goal is to be able to allow my users to change their Windows 
 password at their workstation and have it propagate through the system so 
 that it also changes their Linux user and Samba user account passwords.  I 
 don't expect to ever have more than a dozen users, so I want something 
 that fits our size network and is simple to administer.  I am not looking 
 for a how-to to set something up, but some opinions about what I should 
 consider using, and why it would be a good fit to achieve my goal.  I can 
 do the additional research to understand configuration once I know what I 
 should be researching.  Thanks.  Please cc me directly, as I only get the 
 list in daily digest mode.

 Jeff Boyce
 Meridian Environmental



Thanks to everyone who replied; you have helped me understand what 
direction I should be going in (or staying away from).  Here are the 
highlights and my comments on some of the suggestions that were provided, 
since I can't respond to every thread from the digest.  The opinions both 
for and against OpenLDAP have made me take a closer look at it, but my 
conclusion is that it is more cumbersome than what I really want to handle 
right now for the size of my network.  I have looked closer at 
Samba/WINS/Winbind, etc., and it looks like the main source of my current 
problem is that my Samba network is set up now as a Workgroup and not as a 
Domain.  I didn't understand that difference when I ran across the quote I 
included above.  It looks like if I change to a Domain and configure it 
properly with WINS/Winbind, I should be able to have the single-point 
password change happen from the Windows desktop.  I am now re-reading 
sections of my copy of the Definitive Guide to Samba 3, which should help 
me (although it was published before Vista and 7, which is what all my 
workstations run now).
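
From what I have read so far, the smb.conf pieces involved in pushing a 
Windows-side password change through to the Linux account look roughly like 
this (my notes, untested; the passwd chat strings have to match the 
distro's actual passwd prompts):

[global]
   unix password sync = yes
   passwd program = /usr/bin/passwd %u
   passwd chat = *New*password* %n\n *Retype*new*password* %n\n *updated*successfully*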

Also, thanks for the suggestions of using ClearOS or Webmin.  I do 
have Webmin installed and use it for some of my administrative functions, 
so if I do try playing around with OpenLDAP I will certainly see if it 
reduces my learning curve on getting it set up properly.  With the new 
gateway box that I mentioned above, I have been planning on installing 
ClearOS, so I will take a look at how it might be used to learn about LDAP, 
although I was thinking of having this box function more strictly as a 
gateway rather than providing services to the internal LAN.

Jeff

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] User accounts management for small office

2011-04-21 Thread Jeff Boyce
Greetings -

This may be a little off-topic here so if someone wants to point me to a 
more appropriate mailing list I would appreciate it.

I administer the network for my small company and am preparing to install a 
new server in the next month or so.  It will be running CentOS 6 and 
function primarily as a Samba file server to 10 Windows workstations (XP, 
Vista, 7).  It will also host our OpenVPN server and possibly our FTP 
server; however I am hoping to move our FTP server to a gateway box when the 
new server is installed.

The issue that I would like to resolve when the new server is installed is 
that currently, if a user changes the password on their Windows 
workstation, I have to manually update that new password on the Linux user 
account and also manually change the Samba user account. 
Manually updating the password in three different locations is a minor 
headache that I would like to correct.  I have been researching and reading 
lots of information about account management to try and understand what is 
available, and what would be the best fit for my network size.  Much of what 
I have read is related to larger networks or larger user bases, which seem 
to have a lot of extraneous stuff that would be unnecessary in my small user 
environment.  I looked into OpenLDAP, and have recently been reading about 
Samba/Winbind.  But after encountering the following statement in the Samba 
documentation, I am still lost about what I could, or should, be using.
A standalone Samba server is an implementation that is not a member of a 
Windows NT4 domain, a Windows 200X Active Directory domain, or a Samba 
domain.  By definition, this means that users and groups will be created and 
controlled locally, and the identity of a network user must match a local 
UNIX/Linux user login. The IDMAP facility is therefore of little to no 
interest, winbind will not be necessary, and the IDMAP facility will not be 
relevant or of interest.

My only goal is to be able to allow my users to change their Windows 
password at their workstation and have it propagate through the system so 
that it also changes their Linux user and Samba user account passwords.  I 
don't expect to ever have more than a dozen users, so I want something that 
fits our size network and is simple to administer.  I am not looking for a 
how-to to set something up, but some opinions about what I should consider 
using, and why it would be a good fit to achieve my goal.  I can do the 
additional research to understand configuration once I know what I should be 
researching.  Thanks.  Please cc me directly, as I only get the list in 
daily digest mode.

Jeff Boyce

Meridian Environmental



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos