Re: [CentOS] rsyncing directories - sanity check

2015-06-24 Thread Alexandru Chiscan

Hello Tim,

On 06/24/2015 07:42 PM, Tim Dunphy wrote:

rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]:
Broken pipe (32)
rsync: write failed on "/opt/var/log/lastlog": No space left on device (28)
lastlog is a VERY large SPARSE file, and when you rsync it it loses the sparsity and tries
to copy all the data to /opt.


ls -al -h /var/log/lastlog
-rw-r--r--. 1 root root *94G* Jun 25 09:10 /var/log/lastlog

Real space on disk
 du -h /var/log/lastlog
*60K* /var/log/lastlog
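
If you do need to copy it, a rough sketch (reusing the /var -> /opt/var paths
from Tim's post) that keeps the holes instead of writing out 94G of zeroes:

  # -S / --sparse makes rsync recreate holes on the destination
  rsync -avS /var/ /opt/var/

  # or simply skip lastlog and recreate it empty (losing the lastlog
  # history, which is usually acceptable)
  rsync -av --exclude=log/lastlog /var/ /opt/var/
  touch /opt/var/log/lastlog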

Lec
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] yum and yumex change system time

2015-06-24 Thread g


On 06/23/2015 08:25 PM, Johnny Hughes wrote:
<<>>

> Edit the file:
>
> /etc/sysconfig/clock
>
> make sure to set:
>
> ZONE="America/Chicago"

it was;

  ZONE="Etc/UTC"

it is now.

  ZONE="America/Chicago"

> then copy /usr/share/zoneinfo/America/Chicago to /etc/localtime

done.

> Run the time tool and make sure that "System clock uses UTC" is
> NOT checked

if by 'time tool' you mean "System Settings > Date & Time", there
is no "System clock uses UTC".

i ran 'yumex' to pull in some progs, 'tzdata-2015e-1.el6.noarch.rpm'
was updated 2015-06-21, so i could not pull it.

closed 'yumex'; no change in system time in panel. rebooted, still OK.

1 problem left. :-(

when i started kde after rebooting, i ran the 'hwclock' and 'zdump'
checks; all matched what they showed before. but...

time stamp showing in konqueror is CST, not CDT.

when i mouse over clock in panel, 'Chicago' and 'UTC' times show
correctly, ie,

   Chicago 21:30, Wednesday 24 June 2015
   UTC 02:30, Thursday 25 June 2015

'Chicago' is CDT, -0500 hrs, yet konqueror stamps CST.

what would be causing konqueror to be time stamping CST instead
of CDT?

how to correct?


-- 

peace out.

-+-
If Bill Gates got a dime for every time Windows crashes...
 ...oh, wait. He does. THAT explains it!
-+-
in a world without fences, who needs gates.
-+-

CentOS GNU/Linux 6.6

tc,hago.

g
.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] PXE question

2015-06-24 Thread James A. Peltier
- Original Message -
| I was wondering, where is the format and options of files like
| /usr/share/system-config-netboot/pxelinux.cfg/default from
| system-config-netboot-cmd described? There are plenty of PXE tutorials
| with examples out there, but nothing that looks like actual
| documentation.

rather than use PXELinux, chainload iPXE and watch the world of PXE booting 
become like unicorns pooping Skittles.  Your life will be much easier for it.
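
For the archives: one simple way to chainload it is to keep the existing
PXELinux setup and add an entry that boots the iPXE image. A minimal sketch,
assuming the stock ipxe.lkrn image from the iPXE project has been copied into
the TFTP root:

  DEFAULT ipxe
  LABEL ipxe
    KERNEL ipxe.lkrn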

-- 
James A. Peltier
IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone   : 604-365-6432
Fax : 778-782-3045
E-Mail  : jpelt...@sfu.ca
Website : http://www.sfu.ca/itservices
Twitter : @sfu_rcg
Powering Engagement Through Technology
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Possible bug in kickstart

2015-06-24 Thread James A. Peltier
Hello All,

I seem to have run into a bug with the new --bridgeslaves= option.
It would seem that if I tell the bridge device to use a virtual interface (like
bond0) rather than a physical interface (em1/em2), kickstart completely
barfs on it.  I have provided my network section below, which works fine as long
as I don't enable all of the bridge content.

When the installation starts, the creation of bond0 and all VLANs on bond0
goes without issue.  Jumping into TTY2 I can type 'ip a' and see the bond0
device and the bond0.VLANID devices, no problem.

When I enable bridge content with --bridgeslaves=bond0 or
--bridgeslaves=bond0.VLANID, anaconda barfs.

When I enable bridge content with --bridgeslaves=em1 or --bridgeslaves=em2,
anaconda doesn't have an issue.

Is this expected behaviour?  I really don't want to have to manually manipulate
the ifcfg-bond* interfaces and create the ifcfg-BRIDGE* interfaces by hand, so
I'd like this to work.




# Configure a bond over the slaves in failover mode
network --device=bond0 --noipv6 --bootproto=dhcp --onboot=yes 
--bondslaves=em1,em2 --bondopts=mode=active-backup;primary=em1 --activate

# Configure VLANs
network --device=bond0 --vlanid=11 --noipv6 --onboot=yes --bootproto=dhcp 
--activate
network --device=bond0 --vlanid=100 --noipv6 --onboot=yes --bootproto=dhcp 
--activate
network --device=bond0 --vlanid=302 --noipv6 --onboot=yes --bootproto=dhcp 
--activate
network --device=bond0 --vlanid=303 --noipv6 --onboot=yes --bootproto=dhcp 
--activate
network --device=bond0 --vlanid=304 --noipv6 --onboot=yes --bootproto=dhcp 
--activate
network --device=bond0 --vlanid=306 --noipv6 --onboot=yes --bootproto=dhcp 
--activate

# Create a bridge on the bonded VLAN interfaces
network --device=FASNET --bridgeslaves=bond0
network --device=GLUSTER --bridgeslaves=bond0.11
network --device=EXPERIMENTAL --bridgeslaves=bond0.302
network --device=NAT --bridgeslaves=bond0.303
network --device=DMZ2 --bridgeslaves=bond0.304
network --device=NETM --bridgeslaves=bond0.306

-- 
James A. Peltier
IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone   : 604-365-6432
Fax : 778-782-3045
E-Mail  : jpelt...@sfu.ca
Website : http://www.sfu.ca/itservices
Twitter : @sfu_rcg
Powering Engagement Through Technology
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Marko Vojinovic
On Wed, 24 Jun 2015 10:40:59 -0700
Gordon Messmer  wrote:

> On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
> 
> > For concreteness, let's say I have a guest machine, with a
> > dedicated physical partition for it, on a single drive. Or, I have
> > the same thing, only the dedicated partition is inside LVM. Why is
> > there a performance difference, and how dramatic is it?
> 
> Well, I said that there's a big performance hit to file-backed
> guests, not partition backed guests.  You should see exactly the same
> disk performance on partition backed guests as LV backed guests.

Oh, I see, I missed the detail about the guest being file-backed when I
read your previous reply. Of course, I'm fully familiar with the
drawbacks of file-backed virtual drives, as opposed to physical (or LVM)
partitions. I was (mistakenly) under the impression that you were
talking about the performance difference between a bare partition and a
LVM partition that the guest lives on.

> However, partitions have other penalties relative to LVM.

Ok, so basically what you're saying is that in the use case where one is
spinning up VMs on a daily basis, LVM is more flexible than dedicating
hardware partitions to each new VM. I can understand that. Although I
would guess that if one is spinning up VMs on a daily basis, their
performance probably isn't an issue, so a file-backed VM would do
the job... It depends on what you use them for, in the end.

It's true I never came across such a scenario. In my experience so far,
spinning a new VM is a rare process, which includes planning,
designing, estimating resource usage, etc... And then, once the VM is
put in place, it is intended to work long-term (usually until its OS
reaches EOL or the hardware breaks).

But I get your point: with LVM you have the additional flexibility to spin
up test VMs basically every day if you need to, while keeping the
performance of partition-backed virtual drives.

Ok, you have me convinced! :-) Next time I get my hands on a new
hard drive, I'll put LVM on it and see if it helps me manage VMs more
efficiently. Doing this on a single drive doesn't run the risk of
losing more than one drive's worth of data if it fails, so I'll play
with it a little more in the context of VMs, and we'll see if it
improves my workflow.

Maybe I'll have a change of heart over LVM after all. ;-)

Best, :-)
Marko

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Robert Heller
At Wed, 24 Jun 2015 14:06:30 -0400 CentOS mailing list  
wrote:

> 
> Gordon Messmer wrote:
> > On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
> >> Ok, you made me curious. Just how dramatic can it be? From where I'm
> >> sitting, a read/write to a disk takes the amount of time it takes, the
> >> hardware has a certain physical speed, regardless of the presence of
> >> LVM. What am I missing?
> >
> > Well, there's best and worst case scenarios.  Best case for file-backed
> > VMs is pre-allocated files.  It takes up more space, and takes a while
> > to set up initially, but it skips block allocation and probably some
> > fragmentation performance hits later.
> >
> > Worst case, though, is sparse files.  In such a setup, when you write a
> > new file in a guest, the kernel writes the metadata to the journal, then
> 
> 
> Here's a question: all of the arguments you're giving have to do with VMs.
> Do you have some for straight-on-the-server, non-VM cases?

In the most *common* case, the straight-on-the-server, non-VM machines are the VM
hosts themselves.  Basically, with the vast number of servers out there, you most
commonly have a host with a number of VMs.  The VMs are the publicly visible
servers and the host is pretty much invisible.  The VMs themselves won't be using
LVM, but the host server will be.

Otherwise...

I recently upgraded to a newer laptop and put a 128G SSD in it.  My
previous laptop had a 60gig IDE disk.  Since I didn't have any need for more
space (at this time!) I set the laptop up with LVM.  Because of how I do backups
and because of the kinds of things I have on my laptop, I have multiple
logical volumes:

newgollum.deepsoft.com% df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/mapper/vg_newgollum-lv_root
  9.8G  5.7G  3.6G  62% /
tmpfs 1.9G  8.2M  1.9G   1% /dev/shm
/dev/sda1 477M   86M  367M  19% /boot
/dev/mapper/vg_newgollum-lv_home
  4.8G  4.0G  602M  88% /home
/dev/mapper/vg_newgollum-scratch
   30G   10G   18G  36% /scratch
/dev/mapper/vg_newgollum-mp3s
  9.8G  5.1G  4.2G  55% /mp3s

I only have about 60gig presently allocated (there is about 60gig 'free').  
And yes, this is a laptop with a single physical disk.  Some day I might 
create additional LVs and/or grow the existing LVs.  I *might* even install a 
VM or two on this laptop.

My desktop machine is also a host to a number of VMs (mostly used as build
environments for different versions / flavors of Linux). Here LVM is pretty
much a requirement, especially since its disks are RAID'ed.

I also manage a server for the local public library. The host runs CentOS 6 on
the bare metal. It also provides DHCP, DNS, Firewall, and IP routing. The
library's workstations (for staff and patrons) are diskless and boot using
tftp, but they actually run Ubuntu 14.04 (since it is more 'user friendly'),
so I have a Ubuntu 14.04 (server) VM providing tftp boot for Ubuntu 14.04's
kernel and NFS for Ubuntu 14.04's root and /usr file systems. (The CentOS host
provides the /home file system.) And just as an extra 'benefit' (?) I have a
VM running a 32-bit version of MS-Windows 8 (this is needed to talk to the
library's heating system). This is a basic server, but uses virtualization for
selected services. Except for 'appliance' servers, I see it becoming more and
more common that pure 'bare metal' servers are the exception rather than
the rule. For all sorts of reasons (including security), servers will commonly
be using virtualization for many purposes. And LVM makes it really easy to
deal with disk space for VMs.

> 
>mark
> 
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
> 
>

-- 
Robert Heller -- 978-544-6933
Deepwoods Software-- Custom Software Services
http://www.deepsoft.com/  -- Linux Administration Services
hel...@deepsoft.com   -- Webhosting Services

   
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Chuck Campbell
On 6/24/2015 1:06 PM, m.r...@5-cent.us wrote:
> Gordon Messmer wrote:
>> On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
>>> Ok, you made me curious. Just how dramatic can it be? From where I'm
>>> sitting, a read/write to a disk takes the amount of time it takes, the
>>> hardware has a certain physical speed, regardless of the presence of
>>> LVM. What am I missing?
>> Well, there's best and worst case scenarios.  Best case for file-backed
>> VMs is pre-allocated files.  It takes up more space, and takes a while
>> to set up initially, but it skips block allocation and probably some
>> fragmentation performance hits later.
>>
>> Worst case, though, is sparse files.  In such a setup, when you write a
>> new file in a guest, the kernel writes the metadata to the journal, then
> 
>
> Here's a question: all of the arguments you're giving have to do with VMs.
> Do you have some for straight-on-the-server, non-VM cases?
>
>mark
>
>

Is there an easy-to-follow "howto" for normal LVM administration tasks? I get
tired of googling every time I have to do something I don't remember how to do
regarding LVM, so I usually just don't bother with it at all.

I believe it has some benefit for my use cases, but I've been reluctant to use
it, since the last time I ran into LVM problems I lost everything on the volume and
had to restore from backups anyway. I suspect I shot myself in the foot, but I
still don't know for sure.

thanks,
-chuck

-- 

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Gordon Messmer

On 06/24/2015 12:35 PM, Gordon Messmer wrote:
Interesting. I wasn't aware that LVM had that option.  I've been 
looking at bcache and dm-cache.  I'll have to look into that as well. 


heh.  LVM cache *is* dm-cache.  Don't I feel silly.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Gordon Messmer

On 06/24/2015 12:06 PM, Chris Adams wrote:

LVM snapshots make it easy to get point-in-time consistent backups,
including databases.  For example, with MySQL, you can freeze and flush
all the databases, snapshot the LV, and release the freeze.


Exactly.  And I mention this from time to time... I'm working on 
infrastructure to make that more common and more consistent:

https://bitbucket.org/gordonmessmer/dragonsdawn-snapshot

If you're interested in testing or development (or even advocacy), I'd 
love to have more people contributing.



That also avoids the access-time churn (for backup programs that
don't know O_NOATIME, like any that use rsync).


Yes, though rsync-based systems are usually always-incremental, so they
won't access files that haven't been modified, and the impact on atime is
minimal after the first backup.



That's server stuff.  On a desktop with a combination of SSD and
"spinning rust" drives, LVM can give you transparent SSD caching of
"hot" data (rather than you having to put some filesystems on SSD and
some on hard drive).


Interesting.  I wasn't aware that LVM had that option.  I've been 
looking at bcache and dm-cache.  I'll have to look into that as well.



Now, if btrfs ever gets all the kinks worked out (and has a stable
"fsck" for the corner cases), it integrates volume management into the
filesystem, which makes some of the management easier.


btrfs and zfs are also more reliable than RAID.  If a bit flips in a 
RAID set, all that can be determined is that the blocks are not 
consistent.  There's no information about which blocks are correct, or 
how to repair the inconsistency.  btrfs and zfs *do* have that 
information, so they can repair those errors correctly.  As much as I 
like LVM today, I look forward to ditching RAID and LVM in favor of btrfs.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rsyncing directories - sanity check

2015-06-24 Thread m . roth
Gordon Messmer wrote:
> On 06/24/2015 09:42 AM, Tim Dunphy wrote:
>> And for some reason when the servers were ordered the large
>> local volume ended up being /usr when the ES rpm likes to
>> store it's indexes on /var.
>>
>> So I'm syncing the contents of both directories to a different place,
>> and I'm going swap the large local volume from /usr to /var.
>
> Have you considered just resizing the volumes?  If you're trying to swap
> them with rsync, you're going to have to reboot anyway, and relabel your
> system.  If any daemons are running, you might also corrupt their data
> this way.
>
>> The entire /var partition is only using 549MB:
>>
>> rsync: write failed on "/opt/var/log/lastlog": No space left on device
>> (28)
>
> Depending on what UIDs are allocated to your users, lastlog can be an
> enormous sparse file.  You would need to use rsync's -S flag to copy it.

Um, I've not been following this closely, but /var is 549M? And a separate
partition? I haven't had /var and /usr as separate partitions in almost 10
years. Nor have I had a drive smaller than, um, 160G in about the same.

That being said, why not simply mount the additional partition under var,
for the directory that's running out of space?

  mark


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Jason Warr



On 6/24/2015 2:06 PM, Chris Adams wrote:

Once upon a time, m.r...@5-cent.us  said:

Here's a question: all of the arguments you're giving have to do with VMs.
Do you have some for straight-on-the-server, non-VM cases?

I've used LVM on servers with hot-swap drives to migrate to new storage
without downtime a number of times.  Add new drives to the system,
configure RAID (software or hardware), pvcreate, vgextend, pvmove,
vgreduce, and pvremove (and maybe a lvextend and resize2fs/xfs_growfs).
Never unmounted a filesystem, just some extra disk I/O.

Even in cases where I had to shutdown or reboot a server to get drives
added, moving data could take a long downtime, but with LVM I can
live-migrate from place to place.


This is one of my primary use cases, and a real big time saver.  I do
this a lot when migrating Oracle DB LUNs to larger, new
allocations.  It works great whether you are using ASM or any Linux
filesystem.  It is especially handy when migrating from one SAN frame to
another.  You can fully migrate with zero downtime if you do even a
small amount of planning ahead.


There are just so many time-saving things you can do with it.  Sure, if
all groups in the chain plan ahead properly there can be very little
change needed, but how often does that happen in real life? It is part of
my job to plan far enough ahead to know that storage needs grow despite
everyone's best intentions to get out of the gate properly.  LVM makes
growing much easier and more flexible.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Chris Adams
Once upon a time, m.r...@5-cent.us  said:
> Here's a question: all of the arguments you're giving have to do with VMs.
> Do you have some for straight-on-the-server, non-VM cases?

I've used LVM on servers with hot-swap drives to migrate to new storage
without downtime a number of times.  Add new drives to the system,
configure RAID (software or hardware), pvcreate, vgextend, pvmove,
vgreduce, and pvremove (and maybe a lvextend and resize2fs/xfs_growfs).
Never unmounted a filesystem, just some extra disk I/O.
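
Spelled out, that sequence looks roughly like this (device, VG and LV names
are made up; the real ones depend on the system):

  pvcreate /dev/md1               # new disks/RAID become a physical volume
  vgextend vg_data /dev/md1       # add it to the existing volume group
  pvmove /dev/md0 /dev/md1        # migrate all extents off the old PV, online
  vgreduce vg_data /dev/md0       # drop the old PV from the VG
  pvremove /dev/md0               # wipe the LVM label from the old device
  lvextend -L +200G vg_data/srv   # optionally grow an LV into the new space
  resize2fs /dev/vg_data/srv      # ...and its filesystem (xfs_growfs for XFS)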

Even in cases where I had to shutdown or reboot a server to get drives
added, moving data could take a long downtime, but with LVM I can
live-migrate from place to place.

LVM snapshots make it easy to get point-in-time consistent backups,
including databases.  For example, with MySQL, you can freeze and flush
all the databases, snapshot the LV, and release the freeze.  MySQL takes
a brief pause (few seconds), and then you mount and back up the snapshot
for a fully consistent database (only way to do that other than freezing
all writes during a mysqldump, which can take a long time for larger
DBs).  That also avoids the access-time churn (for backup programs that
don't know O_NOATIME, like any that use rsync).
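
Roughly, with made-up VG/LV names (the read lock has to be held in the same
client session while the snapshot is taken):

  mysql> FLUSH TABLES WITH READ LOCK;   -- freeze and flush writes, keep session open
  (in another shell, while the lock is held)
  lvcreate -s -n mysql_snap -L 5G vg_data/var_lib_mysql
  mysql> UNLOCK TABLES;                 -- writes resume after a few seconds

  mount -o ro /dev/vg_data/mysql_snap /mnt/snap   # XFS snapshots also want -o nouuid,norecovery
  rsync -a /mnt/snap/ /backup/mysql/    # back up the consistent copy at leisure
  umount /mnt/snap
  lvremove -f vg_data/mysql_snap        # drop the snapshot when finished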

That's server stuff.  On a desktop with a combination of SSD and
"spinning rust" drives, LVM can give you transparent SSD caching of
"hot" data (rather than you having to put some filesystems on SSD and
some on hard drive).
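
That's the lvmcache / dm-cache integration; the setup is only a handful of
commands. A sketch with made-up device names and sizes:

  vgextend vg0 /dev/sdb1                        # add the SSD partition as a PV
  lvcreate -n cache_data -L 100G vg0 /dev/sdb1  # cache data LV on the SSD
  lvcreate -n cache_meta -L 1G vg0 /dev/sdb1    # cache metadata LV on the SSD
  lvconvert --type cache-pool --poolmetadata vg0/cache_meta vg0/cache_data
  lvconvert --type cache --cachepool vg0/cache_data vg0/home  # cache the "hot" LV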

Now, if btrfs ever gets all the kinks worked out (and has a stable
"fsck" for the corner cases), it integrates volume management into the
filesystem, which makes some of the management easier.  I used AdvFS on
DEC/Compaq/HP Tru64 Unix, which had some of that, and it made some of
this easier/faster/smoother.  Btrfs may eventually obsolete a lot of
uses of LVM, but that's down the road.
-- 
Chris Adams 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Gordon Messmer

On 06/24/2015 11:06 AM, m.r...@5-cent.us wrote:

Here's a question: all of the arguments you're giving have to do with VMs.
Do you have some for straight-on-the-server, non-VM cases?


Marko sent two messages and suggested that we keep the VM performance 
question as a reply to that one.  My reply to his other message is not 
specific to VMs.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rsyncing directories - sanity check

2015-06-24 Thread Gordon Messmer

On 06/24/2015 09:42 AM, Tim Dunphy wrote:

And for
some reason when the servers were ordered the large local volume ended up
being /usr when the ES rpm likes to store its indexes on /var.

So I'm syncing the contents of both directories to a different place, and
I'm going to swap the large local volume from /usr to /var.


Have you considered just resizing the volumes?  If you're trying to swap 
them with rsync, you're going to have to reboot anyway, and relabel your 
system.  If any daemons are running, you might also corrupt their data 
this way.



The entire /var partition is only using 549MB:

rsync: write failed on "/opt/var/log/lastlog": No space left on device (28)


Depending on what UIDs are allocated to your users, lastlog can be an 
enormous sparse file.  You would need to use rsync's -S flag to copy it.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread m . roth
Gordon Messmer wrote:
> On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
>> Ok, you made me curious. Just how dramatic can it be? From where I'm
>> sitting, a read/write to a disk takes the amount of time it takes, the
>> hardware has a certain physical speed, regardless of the presence of
>> LVM. What am I missing?
>
> Well, there's best and worst case scenarios.  Best case for file-backed
> VMs is pre-allocated files.  It takes up more space, and takes a while
> to set up initially, but it skips block allocation and probably some
> fragmentation performance hits later.
>
> Worst case, though, is sparse files.  In such a setup, when you write a
> new file in a guest, the kernel writes the metadata to the journal, then


Here's a question: all of the arguments you're giving have to do with VMs.
Do you have some for straight-on-the-server, non-VM cases?

   mark

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Gordon Messmer

On 06/23/2015 09:00 PM, Marko Vojinovic wrote:

On Tue, 23 Jun 2015 19:08:24 -0700
Gordon Messmer  wrote:

1) LVM makes MBR and GPT systems more consistent with each other,
reducing the probability of a bug that affects only one.
2) LVM also makes RAID and non-RAID systems more consistent with each
other, reducing the probability of a bug that affects only one.

OTOH, it increases the probability of a bug that affects LVM itself.


No, it doesn't.  As Anaconda supports more types of disk and filesystem 
configuration, its complexity increases, which increases the probability 
that there are bugs.  The number of users is not affected by complexity 
growth, but the permutations of possible configurations grow.
Therefore, the number of users of some configurations is smaller, which 
means that there are fewer people testing the edge cases, and bugs that 
affect those edge cases are likely to last longer.


Consistency reduces the probability of bugs.


But really, these arguments sound like a strawman. It reduces the
probability of a bug that affects one of the setups --- I have a hard
time imagining a real-world usecase where something like that can be
even observable, let alone relevant.


Follow anaconda development if you need further proof.


3) MBR has silly limits on the number of partitions, that don't
affect LVM.  Sure, GPT is better, but so long as both are supported,
the best solution is the one that works in both cases.

That only makes sense if I need a lot of partitions on a system that
doesn't support GPT.


You are looking at this from the perspective of you, one user.   I am 
looking at this from the perspective of the developers who manage 
anaconda, and ultimately have to support all of the users.


That is, you are considering an anecdote, and missing the bigger picture.

LVM is an inexpensive abstraction from the specifics of disk 
partitions.  It is more flexible than working without it.  It is 
consistent across MBR, GPT, and RAID volumes underlying the volume 
group, which typically means fewer bugs.



4) There are lots of situations where you might want to expand a
disk/filesystem on a server or virtual machine.  Desktops might do so
less often, but there's no specific reason to put more engineering
effort into making the two different.  The best solution is the one
that works in both cases.

What do you mean by engineering effort? When I'm setting up a data
storage farm, I'll use LVM. When I'm setting up my laptop, I won't.
What effort is there?


The effort on the part of the anaconda and dracut developers who have to 
test and support various disk configurations.  The more consistent 
systems are, the fewer bugs we hit.



I just see it as an annoyance of having to
customize my partition layout on the laptop, during the OS installation
(customizing a storage farm setup is pretty mandatory either way, so
it doesn't make a big difference).


In my case, I set up all of my systems with kickstart and they all have 
the same disk configuration except for RAID.  Every disk in every system 
has a 200MB partition, a 1G partition, and then a partition that fills 
the rest of the disk.  On laptops, that's the EFI partition, /boot, and 
a PV for LVM.  On a BIOS system, it's a bios_grub partition, /boot, and 
a PV for LVM.  On a server, the second and third are RAID1 or RAID10 
members for sets that are /boot and a PV for LVM. Because they all have 
exactly the same partition set, when I replace a disk in a server, a 
script sets up the partitions and adds them to the RAID sets.  With less 
opportunity for human error, my system is more reliable, it can be 
managed by less experienced members of my team, and management takes 
less time.
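
In kickstart terms that layout is only a few lines. A hypothetical sketch of
the single-disk BIOS case (disk name and exact sizes are illustrative, not
the actual config):

  # 200MB bios_grub, 1G /boot, rest of the disk as an LVM PV
  part biosboot --fstype=biosboot --size=200 --ondisk=sda
  part /boot --fstype=xfs --size=1024 --ondisk=sda
  part pv.01 --size=1 --grow --ondisk=sda
  volgroup vg_sys pv.01
  logvol swap --vgname=vg_sys --name=swap --recommended
  logvol / --vgname=vg_sys --name=root --size=1 --grow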


When you manage hundreds of systems, you start to see the value of 
consistency.  And you can't get to the point of managing thousands 
without it.



5) Snapshots are the only practical way to get consistent backups,
and you should be using them.

That depends on what kind of data you're backing up. If you're backing
up the whole filesystem, than I agree. But if you are backing up only
certain critical data, I'd say that a targeted rsync can be waaay more
efficient.


You can use a targeted rsync from data that's been snapshotted, so 
that's not a valid criticism.  And either way, if you aren't taking 
snapshots, you aren't guaranteed consistent data.  If you rsync a file 
that's actively being written, the destination file may be corrupt.  The 
only guarantee of consistent backups is to quiesce writes, take a 
snapshot, and back up from the snapshot volume.



LVM has virtually zero cost, so there's no practical benefit to not
using it.

If you need it. If you don't need it, there is no practical benefit of
having it, either. It's just another potential point of failure, waiting
to happen.


The *cost* is the same whether you need it or not.  The value changes, but 
the cost is the same.  Cost and value are different things.  LVM has 
virtually zero cost,

Re: [CentOS] rsyncing directories - sanity check

2015-06-24 Thread Александр Кириллов
Does anyone have a good guess as to why these 'out of space' failures are
occurring?


Probably sparse files or hard links? Try
# rsync -aHASWXv --delete src/ dst/

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Gordon Messmer

On 06/23/2015 08:10 PM, Marko Vojinovic wrote:

Ok, you made me curious. Just how dramatic can it be? From where I'm
sitting, a read/write to a disk takes the amount of time it takes, the
hardware has a certain physical speed, regardless of the presence of
LVM. What am I missing?


Well, there's best and worst case scenarios.  Best case for file-backed 
VMs is pre-allocated files.  It takes up more space, and takes a while 
to set up initially, but it skips block allocation and probably some 
fragmentation performance hits later.


Worst case, though, is sparse files.  In such a setup, when you write a 
new file in a guest, the kernel writes the metadata to the journal, then 
writes the file's data block, then flushes the journal to the 
filesystem.  Every one of those writes goes through the host filesystem 
layer, often allocating new blocks, which goes through the host's 
filesystem journal.  If each of those three writes hit blocks not 
previously used, then the host may do three writes for each of them.  In 
that case, one write() in an application in a VM becomes nine disk 
writes in the VM host.


The first time I benchmarked a sparse-file-backed guest vs an LVM backed 
guest, bonnie++ measured block write bandwidth at about 12.5% (1/8) 
native disk write performance.


Yesterday I moved a bunch of VMs from a file-backed virt server (set up 
by someone else) to one that used logical volumes.  Block write speed on 
the old server, measured with bonnie++, was about 21.6MB/s in the guest 
and about 39MB/s on the host.  So, less bad than a few years prior, but 
still bad.  (And yes, all of those numbers are bad.  It's a 3ware 
controller, what do you expect?)


LVM backed guests measure very nearly the same as bare metal 
performance.  After migration, bonnie++ reports about 180MB/s block 
write speed.
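
For anyone who wants to reproduce that kind of comparison, a plain bonnie++
run in the guest and then on the host is enough. A sketch, with a made-up
target directory; -s should be roughly twice the RAM so the page cache
doesn't hide the disk:

  bonnie++ -d /var/tmp/bench -u nobody -s 16g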



For concreteness, let's say I have a guest machine, with a
dedicated physical partition for it, on a single drive. Or, I have the
same thing, only the dedicated partition is inside LVM. Why is there a
performance difference, and how dramatic is it?


Well, I said that there's a big performance hit to file-backed guests, 
not partition backed guests.  You should see exactly the same disk 
performance on partition backed guests as LV backed guests.


However, partitions have other penalties relative to LVM.

1) If you have a system with a single disk, you have to reboot to add 
partitions for new guests.  Linux won't refresh the partition table on 
the disk it boots from.
2) If you have two disks you can allocate new partitions on the second 
disk without a reboot.  However, your partition has to be contiguous, 
which may be a problem, especially over time if you allocate VMs of 
different sizes.
3) If you want redundancy, partitions on top of RAID is more complex 
than LVM on top of RAID.  As far as I know, partitions on top of RAID 
are subject to the same limitation as in #1.
4) As far as I know, Anaconda can't set up a logical volume that's a 
redundant type, so LVM on top of RAID is the only practical way to 
support redundant storage of your host filesystems.


If you use LVM, you don't have to remember any oddball rules.  You don't 
have to reboot to set up new VMs when you have one disk.  You don't have 
to manage partition fragmentation.  Every system, whether it's one disk 
or a RAID set behaves the same way.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rsyncing directories - sanity check

2015-06-24 Thread Tim Dunphy
Hey Carl,

> Hi Tim,
> At first glance, I don't see anything obvious, but if it were me, I'd
> do the following:
> a) add the 'n' flag to do a dry run (no actual copying)
> b) increase rsync's verbosity
>(A single -v will give you information about what files are being
>transferred and a brief summary at the end. Two -v options (-vv)
>will give you information on what files are being skipped and
>slightly more information at the end. A third 'v' is insanely
>verbose.)
> c) redirect standard out to a text file that you can examine for more
>clues.
> hth & regards,
>


Good suggestions! Thanks!

Tim



On Wed, Jun 24, 2015 at 1:05 PM, Carl E. Hartung 
wrote:

> On Wed, 24 Jun 2015 12:42:19 -0400
> Tim Dunphy wrote:
>
> > Does anyone have a good guess as to why these 'out of space' failures
> > are occurring?
>
> Hi Tim,
>
> At first glance, I don't see anything obvious, but if it were me, I'd
> do the following:
>
> a) add the 'n' flag to do a dry run (no actual copying)
>
> b) increase rsync's verbosity
>(A single -v will give you information about what files are being
>transferred and a brief summary at the end. Two -v options (-vv)
>will give you information on what files are being skipped and
>slightly more information at the end. A third 'v' is insanely
>verbose.)
>
> c) redirect standard out to a text file that you can examine for more
>clues.
>
> hth & regards,
>
> Carl
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
>



-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rsyncing directories - sanity check

2015-06-24 Thread Frank Cox
On Wed, 24 Jun 2015 12:42:19 -0400
Tim Dunphy wrote:

> how come I am running out of space in doing my rsync? 

Are you running out of room for file and directory names, which is a different 
thing than simple free disk space?

-- 
MELVILLE THEATRE ~ Real D 3D Digital Cinema ~ www.melvilletheatre.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] rsyncing directories - sanity check

2015-06-24 Thread Carl E. Hartung
On Wed, 24 Jun 2015 12:42:19 -0400
Tim Dunphy wrote:

> Does anyone have a good guess as to why these 'out of space' failures
> are occurring?

Hi Tim,

At first glance, I don't see anything obvious, but if it were me, I'd
do the following:

a) add the 'n' flag to do a dry run (no actual copying)

b) increase rsync's verbosity
   (A single -v will give you information about what files are being
   transferred and a brief summary at the end. Two -v options (-vv)
   will give you information on what files are being skipped and
   slightly more information at the end. A third 'v' is insanely
   verbose.)

c) redirect standard out to a text file that you can examine for more
   clues.
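
Put together, and assuming a /var/ -> /opt/var/ copy like the one in your
original mail (keep whatever other flags you were already using), that would
look something like:

  rsync -avvn /var/ /opt/var/ > /tmp/rsync-dryrun.log 2>&1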

hth & regards,

Carl
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] rsyncing directories - sanity check

2015-06-24 Thread Tim Dunphy
hey guys,

 I need to mount a different volume onto /var so we have more room to
breathe. I'll be turning 3 servers into an elasticsearch cluster. And for
some reason when the servers were ordered the large local volume ended up
being /usr when the ES rpm likes to store its indexes on /var.

So I'm syncing the contents of both directories to a different place, and
I'm going to swap the large local volume from /usr to /var.

It looked like /opt had more than enough space to hold both directories.
/opt was 6GB and I successfully synced /usr to it. /usr was 2.5GB.

Then I went to sync /var to a temp folder in /opt. Checking, I see that it
still has 1.6GB available after the first sync.

# df -h /opt
FilesystemSize  Used *Avail* Use% Mounted on
/dev/mapper/SysVG-OptVol
 6.0G  4.1G  *1.6G*  72% /opt


The entire /var partition is only using 549MB:

# df -h /var
FilesystemSize  *Used* Avail Use% Mounted on
/dev/mapper/SysVG-VarVol
   6.0G   *549M*  5.1G  10% /var

So that being the case, if I make a temp directory in /opt called /opt/var,
how come I am running out of space in doing my rsync? It fails at the end
and the /opt volume is filled up to 100%, even though I only have 549MB to
sync.

rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]:
Broken pipe (32)
rsync: write failed on "/opt/var/log/lastlog": No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(301) [receiver=3.0.6]
rsync: recv_generator: mkdir "/opt/var/www/manual/developer" failed: No
space left on device (28)
*** Skipping any contents from this failed directory ***
rsync: recv_generator: mkdir "/opt/var/www/manual/faq" failed: No space
left on device (28)
*** Skipping any contents from this failed directory ***
rsync: recv_generator: mkdir "/opt/var/www/manual/howto" failed: No space
left on device (28)
*** Skipping any contents from this failed directory ***
rsync: recv_generator: mkdir "/opt/var/www/manual/images" failed: No space
left on device (28)
*** Skipping any contents from this failed directory ***
rsync: recv_generator: mkdir "/opt/var/www/manual/misc" failed: No space
left on device (28)
*** Skipping any contents from this failed directory ***
rsync: recv_generator: mkdir "/opt/var/www/manual/mod" failed: No space
left on device (28)
*** Skipping any contents from this failed directory ***
rsync: connection unexpectedly closed (148727 bytes received so far)
[sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600)
[sender=3.0.6]


And if I do a df of the entire system, it looks like everything is still ok:

# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/mapper/SysVG-RootVol
   2.0G  872M  1.1G  46% /
tmpfs  4.0G 0  4.0G   0% /dev/shm
/dev/sda1486M   87M  375M  19% /boot
/dev/mapper/SysVG-HomeVol
4.0G  137M  3.7G   4% /home
/dev/mapper/SysVG-OptVol
   6.0G  4.3G  1.4G  76% /opt
/dev/mapper/SysVG-TmpVol
2.0G  130M  1.8G   7% /tmp
/dev/mapper/SysVG-UsrVol
  197G  2.8G  185G   2% /usr
/dev/mapper/SysVG-VarVol
   6.0G  549M  5.1G  10% /var

Does anyone have a good guess as to why these 'out of space' failures are
occurring?

Thanks,
Tim



-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Robert Heller
At Wed, 24 Jun 2015 04:10:35 +0100 CentOS mailing list  
wrote:

> 
> On Tue, 23 Jun 2015 18:42:13 -0700
> Gordon Messmer  wrote:
> > 
> > I wondered the same thing, especially in the context of someone who 
> > prefers virtual machines.  LV-backed VMs have *dramatically* better
> > disk performance than file-backed VMs.
> 
> Ok, you made me curious. Just how dramatic can it be? From where I'm
> sitting, a read/write to a disk takes the amount of time it takes, the
> hardware has a certain physical speed, regardless of the presence of
> LVM. What am I missing?
> 
> For concreteness, let's say I have a guest machine, with a
> dedicated physical partition for it, on a single drive. Or, I have the
> same thing, only the dedicated partition is inside LVM. Why is there a
> performance difference, and how dramatic is it?
> 
> If you convince me, I might just change my opinion about LVM. :-)

Well if you are comparing direct partitions to LVM there is no real
difference. OTOH, if you have more than a few VMs (eg more than the limits
imposed by the partitioning system) and/or want to create [temporary] ones
'on-the-fly', using LVM makes that trivially possible. Otherwise, you have to
repartition the disk and reboot the host. This puts you 'back' in the
old-school reality of a multi-boot system. And partitioning a RAID array is
tricky and cumbersome. Resizing physical partitions is also non-trivial.
Basically, LVM gives you on-the-fly 'partitioning', without rebooting.  It is
just not possible (AFAIK) to always update partition tables of a running 
system (never if the disk is the system disk).  Most partitioning tools are 
not really designed for dynamic re-sizing of partitions and it is a highly 
error-prone process.  Most partitioning tools are designed for dealing with a 
'virgin' disk (or a re-virgined disk) with the idea that the partitioning 
won't be revisited once the O/S has been installed.  LVM is all about creating 
and managing *dynamic* 'partitions' (which is what Logical Volumes effectively 
are).  And no, there is little advantage in using multiple PVs.  To get
performance gains (and/or redundancy, etc.), one uses real RAID (eg kernel
software RAID -- md or hardware RAID), then layers LVM on top of that.
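
For example, carving out a disk for a new [temporary] VM on a running host is
a one-liner, no repartitioning and no reboot (VG name and size are made up):

  lvcreate -n testvm01 -L 20G vg_vms   # instant 'partition' for the new guest
  # point the VM definition at /dev/vg_vms/testvm01 as its disk, and later:
  lvremove -f vg_vms/testvm01          # throw it away when the test VM is gone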

The 'other' *alternative* is to use virtual container disks (eg image files as
disks), which have horrible performance (compared to LVM or hard partitions)
and are hard to back up.

The *additional* feature: with LVM you can take a snapshot of the VM's disk
and back it up safely.  Otherwise you *have* to shut down the VM and remount
the VM's disk to back it up, OR you have to install backup software (eg
amanda-client or the like) on the VM and back it up over the virtual network.
In some cases (many cases!) it is not possible to either shut down the VM
and/or install backup software on it (eg the VM is running a 'foreign' or
otherwise incompatible O/S).

> 
> Oh, and just please don't tell me that the load can be spread accross
> two or more harddrives, cutting the file access by a factor of two (or
> more). I can do that with raid, no need for LVM. Stick to a single
> harddrive scenario, please.
> 
> Best, :-)
> Marko
> 
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
> 
>   

-- 
Robert Heller -- 978-544-6933
Deepwoods Software-- Custom Software Services
http://www.deepsoft.com/  -- Linux Administration Services
hel...@deepsoft.com   -- Webhosting Services

  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread John Hodrien

On Tue, 23 Jun 2015, John R Pierce wrote:


While it has the same concepts, physical volumes, volume groups, logical
volumes, the LVM in AIX shares only the initials with Linux.  I've heard
that Linux's LVM was based on HP-UX's design.


Sure, and IRIX had a similar concept, although my experiences with that were
slightly less good than with LVM on linux.

in AIX, the LVM is tightly integrated with file system management, so you 
issue the command to grow a file system, and it automatically grows the 
underlying logical volume.   the OS itself can automatically grow file 
systems when its installing software. Also, in AIX, the volume manager is the 
raid manager, you say 'copies = 2' as an attribute of a LV, and data is 
mirrored.


Without knowing the details, this is possibly just semantics.  With lvresize,
you can resize the LV and the filesystem in one go.  With lvcreate --type
raid1 you can specify that a given LV is RAID1 mirrored.
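
For example (VG/LV names made up):

  lvresize --resizefs -L +10G vg0/home           # grow the LV and its filesystem in one step
  lvcreate --type raid1 -m 1 -L 20G -n data vg0  # a new LV mirrored across two PVs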

jh
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM hatred, was Re: /boot on a separate partition?

2015-06-24 Thread Chris Adams
Once upon a time, Marko Vojinovic  said:
> On Tue, 23 Jun 2015 18:42:13 -0700
> Gordon Messmer  wrote:
> > I wondered the same thing, especially in the context of someone who 
> > prefers virtual machines.  LV-backed VMs have *dramatically* better
> > disk performance than file-backed VMs.
> 
> Ok, you made me curious. Just how dramatic can it be? From where I'm
> sitting, a read/write to a disk takes the amount of time it takes, the
> hardware has a certain physical speed, regardless of the presence of
> LVM. What am I missing?

File-backed images have to go through the filesystem layer.  They are
not allocated contiguously, so what appear to be sequential reads inside
the VM can be widely scattered across the underlying disk.

There are plenty of people that have documented the performance
differences, just Google it.

-- 
Chris Adams 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problem with todays update and fence-agents-all

2015-06-24 Thread Johnny Hughes
On 06/24/2015 06:31 AM, Gerald Vogt wrote:
> Hi all!
> 
> I had another problem with today's updates:
> 
> Error: Package: fence-agents-all-4.0.11-13.el7_1.x86_64 (centos7-x86_64-updates)
>            Requires: fence-agents-compute
> 
> 
> fence-agents-compute seems to be missing.
> 
> Thanks,
> 
> Gerald

Thanks,

This is a new package for that update and is now fixed on master ..
syncing to mirrors.

-- Johnny Hughes




signature.asc
Description: OpenPGP digital signature
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS-announce Digest, Vol 124, Issue 12

2015-06-24 Thread centos-announce-request
Send CentOS-announce mailing list submissions to
centos-annou...@centos.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.centos.org/mailman/listinfo/centos-announce
or, via email, send a message with subject or body 'help' to
centos-announce-requ...@centos.org

You can reach the person managing the list at
centos-announce-ow...@centos.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of CentOS-announce digest..."


Today's Topics:

   1. CESA-2015:1135 Important CentOS 7 php SecurityUpdate
  (Johnny Hughes)
   2. CEBA-2015:1140 CentOS 7 selinux-policy BugFix Update
  (Johnny Hughes)
   3. CEEA-2015:1141 CentOS 7 resource-agents   Enhancement Update
  (Johnny Hughes)
   4. CEBA-2015:1142  CentOS 7 pcs BugFix Update (Johnny Hughes)
   5. CEBA-2015:1143  CentOS 7 gnutls BugFix Update (Johnny Hughes)
   6. CEEA-2015:1144 CentOS 7 fence-agents Enhancement  Update
  (Johnny Hughes)
   7. CEBA-2015:1145  CentOS 7 iputils BugFix Update (Johnny Hughes)
   8. CEBA-2015:1146  CentOS 7 trousers BugFix Update (Johnny Hughes)
   9. CEBA-2015:1147  CentOS 7 expect BugFix Update (Johnny Hughes)
  10. CEBA-2015:1148  CentOS 7 lvm2 BugFix Update (Johnny Hughes)
  11. CEBA-2015:1150  CentOS 7 nuxwdog BugFix Update (Johnny Hughes)
  12. CEEA-2015:1152 CentOS 7 tomcatjss Enhancement Update
  (Johnny Hughes)
  13. CEBA-2015:1151 CentOS 7 java-1.7.0-openjdk BugFix Update
  (Johnny Hughes)
  14. CESA-2015:1153 Moderate CentOS 7 mailman Security Update
  (Johnny Hughes)
  15. CESA-2015:1154 Moderate CentOS 7 libreswanSecurity Update
  (Johnny Hughes)
  16. CEBA-2015:1157  CentOS 7 haproxy BugFix Update (Johnny Hughes)
  17. CEEA-2015:1156 CentOS 7 dracut Enhancement Update (Johnny Hughes)
  18. CEBA-2015:1155  CentOS 7 systemd BugFix Update (Johnny Hughes)
  19. CEBA-2015:1159  CentOS 7 ntp BugFix Update (Johnny Hughes)
  20. CEBA-2015:1158  CentOS 7 ruby BugFix Update (Johnny Hughes)
  21. CEBA-2015:1160  CentOS 7 sos BugFix Update (Johnny Hughes)
  22. CEBA-2015:1161  CentOS 7 golang BugFix Update (Johnny Hughes)
  23. CEBA-2015:1163  CentOS 7 mdadm BugFix Update (Johnny Hughes)
  24. CEBA-2015:1162  CentOS 7 python BugFix Update (Johnny Hughes)
  25. CESA-2015:1137 Important CentOS 7 kernel Security Update
  (Johnny Hughes)


--

Message: 1
Date: Wed, 24 Jun 2015 03:28:02 +
From: Johnny Hughes 
To: centos-annou...@centos.org
Subject: [CentOS-announce] CESA-2015:1135 Important CentOS 7 php Security Update
Message-ID: <20150624032802.ga43...@n04.lon1.karan.org>
Content-Type: text/plain; charset=us-ascii


CentOS Errata and Security Advisory 2015:1135 Important

Upstream details at : https://rhn.redhat.com/errata/RHSA-2015-1135.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( sha256sum Filename ) 

x86_64:
48e10bf983d8cb8920e562c5b9782ad0bcda1ed62d33c639817c53bbe95f4584  
php-5.4.16-36.el7_1.x86_64.rpm
ce72cd8b3b8261f85bd8b538be1ba93134b3d3ec8bb1e23394f42904af52ad9b  
php-bcmath-5.4.16-36.el7_1.x86_64.rpm
229aace6c955dd7a47465df600d6ac3414b2efbafa67a917d2bb034543d0fa24  
php-cli-5.4.16-36.el7_1.x86_64.rpm
c8cdf98285385ecf6200ec1d0f31a6198a40efcfb719719cb8dae15fb1f27420  
php-common-5.4.16-36.el7_1.x86_64.rpm
9593e7d5e0a658552d67a39993db7e79ebc4d738e3c851ca9e2045d6de9fa009  
php-dba-5.4.16-36.el7_1.x86_64.rpm
f812b259af6f68ae4f77515ab1e0f3591952f460954a5269f38967f200b20e78  
php-devel-5.4.16-36.el7_1.x86_64.rpm
8e5ce8cf18ecca3db725e390cae629cd88ac9a51bb56a34812e15836ca1db239  
php-embedded-5.4.16-36.el7_1.x86_64.rpm
4d0a35c1eb498a9162f1ab5be4e2af8441c1112b7275b46a476c79fc05babbe0  
php-enchant-5.4.16-36.el7_1.x86_64.rpm
7a00a8a71b4d5c25693e10d8bbe42d6d0e08c70c7625a19ee04b75666d0296f0  
php-fpm-5.4.16-36.el7_1.x86_64.rpm
561626f45ee349721fbb796f4dfa63ea7f5d0d0594b30ec1c0fa828a52eb690e  
php-gd-5.4.16-36.el7_1.x86_64.rpm
d5b4780975853514ea291e15a219958852b9a6e848f8db26956e67123a03d821  
php-intl-5.4.16-36.el7_1.x86_64.rpm
b05febaade06a39d430a53fa3fdac1649db840f9ab9542c4b9986c104c477cdb  
php-ldap-5.4.16-36.el7_1.x86_64.rpm
abc98c2761906505b85f3c67f61576b12235ca2ee4030c2db79d27ca2ca61dd4  
php-mbstring-5.4.16-36.el7_1.x86_64.rpm
d669c4be73c910f232f14db0c3c391c960d678fa283047b1488682d7f5ae32f6  
php-mysql-5.4.16-36.el7_1.x86_64.rpm
f321826bb84b27a89ec848621a2f1607df58697793fea47c7acbba7f97fa78a3  
php-mysqlnd-5.4.16-36.el7_1.x86_64.rpm
dc08c56691cb53929bebd900b1bbb8307ebf8d8b5e3566d7710b8f76b1a4ed6c  
php-odbc-5.4.16-36.el7_1.x86_64.rpm
4273e4ed35f39096a0d32df6027c545b6d7505163c02ca2aa3e05864eeb86c42  
php-pdo-5.4.16-36.el7_1.x86_64.rpm
047579521a907b9ac9a105733511fe6f95dc0954f7e915c22cb303682a1deb36  
php-pgsql-5.4.16-36.el7_1.x86_64.rpm
447f9e129240749c8cbe80fef47b263aff1aeba21e0b31191646eab7fb4299bd  
php-process-5.4.16-36.el7_1.x86_64.rpm
a2f66

[CentOS] POSTMORTEM: Re: OT: default password for HP printer

2015-06-24 Thread ken

On 06/23/2015 03:52 PM, g wrote:



On 06/23/2015 02:14 PM, ken wrote:

On 06/23/2015 11:49 AM, g wrote:


hello Ken,

am i correct to presume that you are getting the "Bcc:" of my post
to the fedora list?


g,

I'm already subscribed to that list, so you needn't bcc me.  I've read
your post there.  Thanks for that.  Very considerate of you.  The main
issue, getting back into the EWS has been resolved.  See my long post
there about it.

Thanks again.


you are most welcome.

this email was supposed to go to you and not the list, which is why it had
[OFF-LIST] in "Subject:". my bad, failed to change the "To:".


Not a problem.







glad to see you found a workaround to get into ews. seems strange that
hp support was not aware that what happened with your printer was
something that could happen. could be that all was blank because, once
changed and then reset, it had no record of what to go back to.


Yes, it's especially strange because HP tech support offices have labs 
which house, among others, the very same printer I have.  (At least 
those in the Philippines and Ontario, Canada.)  Talking with people at 
both places I asked them to do a semi-full reset in order to actually 
see what I was seeing, but they declined.  Evidently the policy is that 
only supervisors are allowed to do that... and they are afraid to do it, 
thinking they might disable the printer and make it totally 
non-functional.  A tech in Ontario said, 'if we do that, then we might 
have to send it back and get a new one."  (Yet they aren't afraid to 
tell customers to do such a reset!?)  My response was: With a hundred 
tech support people in that office, how could it be that you wouldn't be 
able to recover that printer from a reset?


That was just five or ten minutes of five hours' worth of conversations 
with HP tech support.  I can't, though, blame those people too much.  No 
one's born knowing these things.  A supervisor in the Philippines told 
me that he gets no money, nor is he allotted time, for training of 
employees.  They just get a manual for each printer, each manual 
containing a script for each known problem, and they just have to follow 
the series of diagnostics -- or blind potential remedies -- for each 
issue.  That and "on the job training" (learning from the customers' 
problems) is pretty much what we can expect when we call tech support. 
This has come about because some high- or mid-level manager, likely a 
strong advocate of market economics, decided that this would be the 
cheapest way to deal with customers' technical problems.  And that's how 
we're dealt with.  And that's how the political becomes personal.





then again, it is a good way for it to work, but support should have
known.


Following on the above, support folks can be expected to know little 
more than what's in the documentation they're handed.


Standard methods are often standard for a reason... or several reasons. 
 The "no surprise" principle alone would tell us that, if there's to be 
a variation from standard, that variation  should be an exceptional 
improvement.  I don't know that this is.




now you know what to do if you forget your password again. ((GBWG))


Actually, I didn't forget my password.  I forgot a password (the 
default) I needed to use once six months ago and never anticipated 
needing again.  And as it turned out, I actually didn't need to remember 
it and won't ever need it again.  :^\


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Problem with todays update and fence-agents-all

2015-06-24 Thread Gerald Vogt

Hi all!

I had another problem with today's updates:

Error: Package: fence-agents-all-4.0.11-13.el7_1.x86_64 (centos7-x86_64-updates)
           Requires: fence-agents-compute


fence-agents-compute seems to be missing.

Thanks,

Gerald
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problem with todays update and ntpdate

2015-06-24 Thread Johnny Hughes
On 06/24/2015 05:51 AM, Nux! wrote:
> Yes, it is a "known" issue, I'm sure it's going to get fixed ASAP.
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "Kai Bojens" 
>> To: "CentOS mailing list" 
>> Sent: Wednesday, 24 June, 2015 10:48:22
>> Subject: [CentOS] Problem with todays update and ntpdate
> 
>> Hello everybody,
>> I just tried to run 'yum update' and got this error:
>>
>> Error: Package: ntp-4.2.6p5-19.el7.centos.x86_64 (@updates)
>>            Requires: ntpdate = 4.2.6p5-19.el7.centos
>>            Removing: ntpdate-4.2.6p5-19.el7.centos.x86_64 (@updates)
>>                ntpdate = 4.2.6p5-19.el7.centos
>>            Updated By: ntpdate-4.2.6p5-19.el7.centos.1.x86_64 (updates)
>>                ntpdate = 4.2.6p5-19.el7.centos.1
>>
>> Am I right in the assumption that this looks like a dependency problem?
>> Can anybody confirm this problem?


This is now fixed and pushed, it should be on all of mirror.centos.org
by now, and will get to external mirrors when they next update.

Sorry for the inconvenience.

Thanks,
Johnny Hughes



signature.asc
Description: OpenPGP digital signature
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Centos7.1 : Firefox-language problem is fixed

2015-06-24 Thread johan . vermeulen7
Hello, 

I somehow missed the announcement, but I just updated and Firefox 38.0.1 put me 
back to Dutch. 

( https://bugzilla.redhat.com/show_bug.cgi?id=1221286 ) 

greetings, Johan 

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Problem with todays update and ntpdate

2015-06-24 Thread Nux!
Yes, it is a "known" issue, I'm sure it's going to get fixed ASAP.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Kai Bojens" 
> To: "CentOS mailing list" 
> Sent: Wednesday, 24 June, 2015 10:48:22
> Subject: [CentOS] Problem with todays update and ntpdate

> Hello everybody,
> I just tried to run 'yum update' and got this error:
> 
> Error: Package: ntp-4.2.6p5-19.el7.centos.x86_64 (@updates)
>            Requires: ntpdate = 4.2.6p5-19.el7.centos
>            Removing: ntpdate-4.2.6p5-19.el7.centos.x86_64 (@updates)
>                ntpdate = 4.2.6p5-19.el7.centos
>            Updated By: ntpdate-4.2.6p5-19.el7.centos.1.x86_64 (updates)
>                ntpdate = 4.2.6p5-19.el7.centos.1
> 
> Am I right in the assumption that this looks like a dependency problem?
> Can anybody confirm this problem?
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Problem with todays update and ntpdate

2015-06-24 Thread Kai Bojens
Hello everybody,
I just tried to run 'yum update' and got this error:

Error: Package: ntp-4.2.6p5-19.el7.centos.x86_64 (@updates)
           Requires: ntpdate = 4.2.6p5-19.el7.centos
           Removing: ntpdate-4.2.6p5-19.el7.centos.x86_64 (@updates)
               ntpdate = 4.2.6p5-19.el7.centos
           Updated By: ntpdate-4.2.6p5-19.el7.centos.1.x86_64 (updates)
               ntpdate = 4.2.6p5-19.el7.centos.1

Am I right in the assumption that this looks like a dependency problem?
Can anybody confirm this problem?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos