Re: [CentOS] Freenx/x2go on CentOS7?

2014-07-28 Thread Les Mikesell
On Fri, Jul 11, 2014 at 11:25 AM, SilverTip257 silvertip...@gmail.com wrote:

 Can anyone comment on the best remote GUI approach for C7 yet?
 X2goserver is in EPEL, but when I tried it on the RHEL beta it only
 worked with a KDE desktop due to the 3D requirement of Gnome3.


 I toyed around with FreeNX on a Fedora (17 maybe?) system some time ago.

 I had to modify a few settings so GNOME3 would default to classic/fallback
 mode.
 -- set COMMAND_START_GNOME to /usr/bin/gnome-session --session=gnome

 There is also a bug [0] where the session and the display mode didn't get
 set properly.
 -- use Ctrl-Alt-R to get into desktop resize mode
 -- in my case I had to hit that key sequence twice

 [0] https://bugzilla.redhat.com/show_bug.cgi?id=838028
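The setting mentioned above is a one-line change in FreeNX's node.conf. A minimal sketch, assuming the common default path for the config file (unverified on EL7):

```shell
# /etc/nxserver/node.conf (FreeNX) -- default GNOME sessions to classic mode
COMMAND_START_GNOME="/usr/bin/gnome-session --session=gnome"
```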


 Maybe a few of these bugs(?) will get resolved now that GNOME3 is the
 default for EL7 along with FreeNX...


 Please give these suggestions a shot and report back!
 I haven't pursued this further due to lack of time and shell access is all
 I really need. ;-)

Gnome3 still doesn't work with the default configuration but MATE
desktop is now in EPEL and seems to work nicely (just like gnome2).

-- 
Les Mikesell
  lesmikes...@gmail.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Convert bare partition to RAID1 / mdadm?

2014-07-25 Thread Les Mikesell
On Fri, Jul 25, 2014 at 8:56 AM, Robert Nichols
rnicholsnos...@comcast.net wrote:
 On 07/24/2014 10:16 PM, Lists wrote:
 So... is it possible to convert an EXT4 partition to a RAID1 partition
 without having to copy the files over?

 Unless you can figure out some way to move the start of the partition back
 to make room for the RAID superblock ahead of the existing filesystem, the
 answer is "No." The version 1.2 superblock is located 4KB from the start
 of the device (partition) and is typically 1024 bytes long.

  https://raid.wiki.kernel.org/index.php/RAID_superblock_formats


What happens if you mount the partition of a raid1 member directly
instead of the md device?   I've only done that read-only, but it does
seem to work.
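As a purely illustrative sketch of what "4 KB from the start" means, the snippet below fabricates the md magic number (0xa92b4efc, stored little-endian) at that offset in a scratch file and reads it back. On a real member device you would run `mdadm --examine /dev/sdXY` instead:

```shell
img=$(mktemp)
# 8 KB empty stand-in for a member device
dd if=/dev/zero of="$img" bs=1024 count=8 2>/dev/null
# the v1.2 superblock starts 4096 bytes in; plant the magic 0xa92b4efc
# (bytes fc 4e 2b a9 on disk, written here as octal escapes)
printf '\374\116\053\251' | dd of="$img" bs=1 seek=4096 conv=notrunc 2>/dev/null
# read those four bytes back
od -An -tx1 -j4096 -N4 "$img"
rm -f "$img"
```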

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Convert bare partition to RAID1 / mdadm?

2014-07-25 Thread Les Mikesell
On Fri, Jul 25, 2014 at 12:32 PM, Benjamin Smith
li...@benjamindsmith.com wrote:
 On 07/25/2014 06:56 AM, Robert Nichols wrote:
 Unless you can figure out some way to move the start of the partition back
 to make room for the RAID superblock ahead of the existing filesystem, the
 answer is "No." The version 1.2 superblock is located 4KB from the start
 of the device (partition) and is typically 1024 bytes long.

   https://raid.wiki.kernel.org/index.php/RAID_superblock_formats

 Sadly, this is probably the authoritative answer I was hoping not to
 get. It would seem technically quite feasible to reshuffle the partition
 a bit to make this happen with a special tool (perhaps offline for a bit
 - you'd only have to manage something less than a single MB of data) but
 I'm guessing nobody has felt the itch to make such a tool.


 On 07/25/2014 08:10 AM, Les Mikesell wrote:
 What happens if you mount the partition of a raid1 member directly
 instead of the md device?   I've only done that read-only, but it does
 seem to work.


 As I originally stated, I've done this successfully many times with a
 command like:

 mount -t ext{2,3,4} /dev/sdXY /media/temp -o rw

But if you write to it, can you clobber the raid superblock?  That is,
is it somehow allocated as used space in the filesystem, or is there a
difference in the space available on the md and direct partition, or
something else?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Convert bare partition to RAID1 / mdadm?

2014-07-25 Thread Les Mikesell
On Fri, Jul 25, 2014 at 3:08 PM, Benjamin Smith
li...@benjamindsmith.com wrote:
 On 07/25/2014 12:12 PM, Michael Hennebry wrote:
 Is there some reason that the existing files cannot
 be accessed while they are being copied to the raid?

 Sheer volume. With something in the range of 100,000,000 small files, it
 takes a good day or two to rsync. This means that getting a consistent
 image without significant downtime is impossible. I can handle a few
 minutes, maybe an hour. Much more than that and I have to explore other
 options. (In this case, it looks like we'll be biting the bullet and
 switching to ZFS)

Rsync is really pretty good at that, especially the 3.x versions.  If
you've just done a live rsync (or a few so there won't be much time
for changes during the last live run), the final one with the system
idle shouldn't take much more time than a 'find' traversing the same
tree.   If you have space and time to test, I'd time the third pass or
so before deciding it won't work  (unless even find would take too
long).


Re: [BackupPC-users] error disk too full, deleted pc, how to delete cpool?

2014-07-24 Thread Les Mikesell
On Thu, Jul 24, 2014 at 12:55 AM, yashiahru
backuppc-fo...@backupcentral.com wrote:
 pc/pcName/71 (the only and the latest backup)
 6.5T

Is that what you expected - that is, approximately the size of the
target system after compression?

 as you said:
 1) How can i remove all backup? (delete all files and directories in cpool &
 pc?)

If that is all that is on the mounted archive disk it is probably
fastest to unmount it and reformat (mkfs), then make the top level
directories again.  But, if the content size seems wrong, you might
first try walking down the largest directories with 'du -sh *' to see
if you can find the problem.  You might be able to just remove the
offending directory and get enough space to run another full.
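The "walk the largest directories" step looks like this (scratch tree for illustration; on the real archive you would start at the mount point):

```shell
d=$(mktemp -d)
mkdir "$d/big" "$d/small"
dd if=/dev/zero of="$d/big/blob" bs=1024 count=128 2>/dev/null
echo tiny > "$d/small/note"
# largest entries sort last; cd into the biggest one and repeat
(cd "$d" && du -sh -- */ | sort -h)
rm -rf "$d"
```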

 2) I've updated to the latest version of backuppc; how could I know if
 BackupPC_nightly is corrupted?


If the cpool and pc/hostname directory contents are approximately the
same size, it is probably not the problem.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Want fast and easy access to all the code in your enterprise? Index and
search up to 200,000 lines of code with a free copy of Black Duck
Code Sight - the same software that powers the world's largest code
search on Ohloh, the Black Duck Open Hub! Try it now.
http://p.sf.net/sfu/bds
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [CentOS] Compile on Centos7, run on 6 possible?

2014-07-24 Thread Les Mikesell
On Mon, Jul 21, 2014 at 2:49 PM, Les Mikesell lesmikes...@gmail.com wrote:
 On Mon, Jul 21, 2014 at 2:19 PM, Jonathan Billings billi...@negate.org 
 wrote:
 On Mon, Jul 21, 2014 at 12:16:39PM -0500, Les Mikesell wrote:
 I'm getting errors: '/lib64/libc.so.6: version `GLIBC_2.14' not found'
 and   '/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found'.
 Is there a way to make a backward-compatible binary?

 If not, is there a sane way to build something that needs gcc
 4.8+/boost 1.5.3+/cmake 2.8 on Centos6?  I found the devtoolset-2
 software collection with a usable gcc, but no boost.   And cmake 2.8
 installs as cmake28 whereas the project expects the normal
 cmake/ccmake names.

 I've found that the easiest way is to package your software and use
 software like 'mock' (http://fedoraproject.org/wiki/Projects/Mock) to
 build the software for other platforms.  Mock builds the software in a
 chrooted shell built up using the packages for that distribution, so
 it'd use CentOS6's GCC, boost, cmake and glibc.

 If it would build easily with Centos6's native tools, I wouldn't be
 asking the question...I think it was originally built on Centos5
 but with locally compiled up-rev gcc/boost/cmake versions and
 delivered with some alternative .so's and a scheme to set
 LD_LIBRARY_PATH.   Aside from not knowing the exact build environment
 it expects, I was hoping it could be done in a more standard way.  It
 turns out that it does build on Centos7 - which I guess doesn't really
 help when the runtime target is 6.

I haven't gotten back to this yet, but I think the right answer to
this question would have been to install the compat-glibc package:

Description : This package contains stub shared libraries and static libraries
: from CentOS Linux 6.
:
: To compile and link against these compatibility libraries, use
: gcc -fgnu89-inline \
:   -I /usr/lib/x86_64-redhat-linux6E/include \
:   -B /usr/lib/x86_64-redhat-linux6E/lib64/
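Putting the description's flags together, a build against the EL6 compatibility stubs would look roughly like this (untested sketch; `myprog.c` is a placeholder source file):

```shell
yum install compat-glibc     # provides /usr/lib/x86_64-redhat-linux6E
gcc -fgnu89-inline \
    -I /usr/lib/x86_64-redhat-linux6E/include \
    -B /usr/lib/x86_64-redhat-linux6E/lib64/ \
    myprog.c -o myprog
```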

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Convert bare partition to RAID1 / mdadm?

2014-07-24 Thread Les Mikesell
On Thu, Jul 24, 2014 at 7:11 PM, Lists li...@benjamindsmith.com wrote:
 I have a large disk full of data that I'd like to upgrade to SW RAID 1
 with a minimum of downtime. Taking it offline for a day or more to rsync
 all the files over is a non-starter. Since I've mounted SW RAID1 drives
 directly with "mount -t ext3 /dev/sdX" it would seem possible to flip
 the process around, perhaps change the partition type with fdisk or
 parted, and remount as SW RAID1?

 I'm not trying to move over the O/S, just a data partition with LOTS of
 data. So far, Google pounding has resulted in howtos like this one
 that's otherwise quite useful, but has a big "copy all your data over"
 step I'd like to skip:

 http://sysadmin.compxtreme.ro/how-to-migrate-a-single-disk-linux-system-to-software-raid1/

 But it would seem to me that a sequence roughly like this should work
 without having to recopy all the files.

 1) umount /var/data;
 2) parted /dev/sdX
  (change type to fd - Linux RAID auto)
 3) Set some volume parameters so it's seen as a RAID1 partition
 Degraded. (parted?)
 4) ??? Insert mdadm magic here ???
 5) Profit! `mount /dev/md1 /var/data`

 Wondering if anybody has done anything like this before...


Even if I found the magic place to change to make the drive think it
was a raid member, I don't think I would trust getting it right with
my only copy of the data.  Note that you don't really have to be
offline for the full duration of an rsync to copy it.  You can add
another drive as a raid with a 'missing' member, mount it somewhere
and rsync with the system live to get most of the data over.  Then you
can shut down all the applications that might be changing data for
another rsync pass to pick up any changes - and that one should be
fast.   Then move the raid to the real mount point and either (safer)
swap a new disk, keeping the old one as a backup or (more dangerous)
change the partition type on the original and add it into the raid set
and let the data sync up.
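A hedged sketch of that sequence (device names /dev/sdb1, /dev/sdc1 and /dev/md1 are placeholders; check against the mdadm man page before trusting real data to it):

```shell
# create a degraded RAID1 with the new disk and a 'missing' slot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 missing
mkfs.ext4 /dev/md1
mount /dev/md1 /mnt/newdata
rsync -aH /var/data/ /mnt/newdata/    # live pass(es)
# ... stop writers, run a final rsync, swap the mount points ...
# later, add the original partition and let it resync:
mdadm --add /dev/md1 /dev/sdb1
```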

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [BackupPC-users] error disk too full, deleted pc, how to delete cpool?

2014-07-23 Thread Les Mikesell
On Wed, Jul 23, 2014 at 2:01 PM, yashiahru
backuppc-fo...@backupcentral.com wrote:
 Sorry my bad.
 it's 1 backuppc to 1 pc on same LAN.

 a) In backuppc server, the LVM is mounted on /backup
 /dev/mapper/vg_backup-LogVol00
   7.2T  6.5T  337G  96% /backup

 b) du -h /backup/backuppc/cpool/
 6.5T /backup/backuppc/cpool/

 c) In cpool directory, i'm not sure if the data structure is normal: (e.g.: 6 
 in 6 in 6 )
 1.1G /backup/backuppc/cpool/6/6/8
 292M /backup/backuppc/cpool/6/6/5
 621M /backup/backuppc/cpool/6/6/4
 930M /backup/backuppc/cpool/6/6/7
 280M /backup/backuppc/cpool/6/6/3
 738M /backup/backuppc/cpool/6/6/e

 d) I have deleted everything except the latest full backup, in the web UI:
 the latest full backup Totals: 7.45TB !!!

 I'm quite sure that a full backup wouldn't be 7.45TB.
 It's dangerous to delete the last backup, so I don't know what to do.

 But if deleting the last backup is the last resort before reinstalling the
 whole system, should I just delete everything in cpool, or leave the 1st
 layer of directories in cpool?


How does the size of cpool compare to pc/host_name?   Basically all
files under the cpool tree should be hardlinks to files in a
pc/hostname/backup_number directory, and BackupPC_nightly should
remove anything that does not have at least 2 links.   If cpool is
substantially bigger, then something is wrong with BackupPC_nightly (a
possibility also indicated by the error messages...).  If your
pc/hostname tree is also that large, then the target must have had
additional filesystems mounted when the backup was taken, or perhaps
filesystem corruption that caused some sort of directory recursion
loop.
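The link-count bookkeeping is easy to see with plain hardlinks (scratch files standing in for a cpool entry and its pc/ counterpart; GNU stat assumed):

```shell
d=$(mktemp -d)
echo data > "$d/pc_file"          # the file as stored under pc/hostname/...
ln "$d/pc_file" "$d/cpool_file"   # the pooled copy is a hardlink, not a duplicate
stat -c %h "$d/cpool_file"        # link count 2: still referenced by a backup
rm "$d/pc_file"
stat -c %h "$d/cpool_file"        # link count 1: orphan, nightly would remove it
rm -rf "$d"
```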

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [CentOS] Development Tools install

2014-07-22 Thread Les Mikesell
On Tue, Jul 22, 2014 at 3:07 PM, F. Mendez fmende...@terra.com wrote:

 Not quite sure about that. If all Dev tools were installed,
 the message should say that ALL installed packages are the latest.
 But the error shown here is that there is NO package or group to
 install.


I think that's just the difference in the message for 'groupinstall'
vs. 'install'.  If you do a 'yum groupinfo 'Development Tools', then
an 'rpm -q' for each package listed in the mandatory and default
section, I think you'll see they are all there.  You aren't going to
get an 'anjuta' out of the Centos or epel repos in any case though.
There is eclipse and probably some other similar things.

-- 
   Les Mikesell
  lesmikes...@gmail.com


[CentOS] Compile on Centos7, run on 6 possible?

2014-07-21 Thread Les Mikesell
I'm getting errors: '/lib64/libc.so.6: version `GLIBC_2.14' not found'
and   '/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found'.
Is there a way to make a backward-compatible binary?

If not, is there a sane way to build something that needs gcc
4.8+/boost 1.5.3+/cmake 2.8 on Centos6?  I found the devtoolset-2
software collection with a usable gcc, but no boost.   And cmake 2.8
installs as cmake28 whereas the project expects the normal
cmake/ccmake names.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Compile on Centos7, run on 6 possible?

2014-07-21 Thread Les Mikesell
On Mon, Jul 21, 2014 at 2:19 PM, Jonathan Billings billi...@negate.org wrote:
 On Mon, Jul 21, 2014 at 12:16:39PM -0500, Les Mikesell wrote:
 I'm getting errors: '/lib64/libc.so.6: version `GLIBC_2.14' not found'
 and   '/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found'.
 Is there a way to make a backward-compatible binary?

 If not, is there a sane way to build something that needs gcc
 4.8+/boost 1.5.3+/cmake 2.8 on Centos6?  I found the devtoolset-2
 software collection with a usable gcc, but no boost.   And cmake 2.8
 installs as cmake28 whereas the project expects the normal
 cmake/ccmake names.

 I've found that the easiest way is to package your software and use
 software like 'mock' (http://fedoraproject.org/wiki/Projects/Mock) to
 build the software for other platforms.  Mock builds the software in a
 chrooted shell built up using the packages for that distribution, so
 it'd use CentOS6's GCC, boost, cmake and glibc.

If it would build easily with Centos6's native tools, I wouldn't be
asking the question...I think it was originally built on Centos5
but with locally compiled up-rev gcc/boost/cmake versions and
delivered with some alternative .so's and a scheme to set
LD_LIBRARY_PATH.   Aside from not knowing the exact build environment
it expects, I was hoping it could be done in a more standard way.  It
turns out that it does build on Centos7 - which I guess doesn't really
help when the runtime target is 6.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: How do I stay logged into multiple Jenkins servers

2014-07-18 Thread Les Mikesell
On Thu, Jul 17, 2014 at 9:55 AM, Rob Mandeville
rmandevi...@dekaresearch.com wrote:
 I have two Jenkins servers: one for prime time use (my production
 environment; my customers’ development environment) and another for my own
 mad science development.  Both are currently on the same Linux machine and
 running out of Winstone, but both aspects of that are negotiable.  The fact
 that it’s using Active Directory for authentication is less negotiable.



 The problem is that I have to work on both of them.  I do some work on one,
 then I go to the other one and I have to log in again.  Once I flip back to
 the first one, I have to log in _again_.  Is there any way for me to remain
 logged into both instances simultaneously?


There's probably an authentication realm or domain buried somewhere in
the configs, but I'd recommend running the test instance in a virtual
machine with its own IP address if it has to be on the same physical
host. Virtualbox is easy to use if you don't already use something
else.   There is a bit more overhead but it will let you install
standard packages/configurations with a lot less difference in the
application level setup between your production/test instances.

-- 
   Les Mikesell
  lesmikes...@gmail.com

-- 
You received this message because you are subscribed to the Google Groups 
Jenkins Users group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [CentOS] Some thoughts on systemd

2014-07-18 Thread Les Mikesell
On Fri, Jul 18, 2014 at 4:20 PM, Mark Tinberg mtinb...@wisc.edu wrote:

 So simple things are trivial, more complicated things are possible and the 
 options are there in the config file if you want to use them but you aren’t 
 forced to.

But it does force people who should be focusing on improving an
application to instead spend their time reconfiguring the startup
configuration for a distribution just to keep it working the same way.

For example: http://issues.opennms.org/browse/NMS-6137

And, while it might offer a benefit in terms of being able to make it
wait for the supporting postgres database if it is local, what happens
if it is configured that way but you use the setup recommended for
scaling where the database runs on a different system?

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Centos 6 full backup software?

2014-07-17 Thread Les Mikesell
On Thu, Jul 17, 2014 at 12:06 PM, Rafał Radecki radecki.ra...@gmail.com wrote:
 I need a block level backup because I need an easy to restore backup of the
 whole server, including mbr, partition layout and of course data. The
 server will be reinstalled so filesystem level backup is an option but not
 as straightforward and easy to restore as for example Clonezilla.


The 'rear' (Relax-and-Recover) package from EPEL is about as easy to
use but with a different approach.  It will generate a bootable iso
containing a script to reconstruct the partitions, filesystems, etc.
and restore to them.  Some tradeoffs are that Clonezilla will do
single disks and bring along windows or other partitions not part of
the active system, but can't handle multiple drives or RAID and it
needs at least an equal-sized disk for the restore.   ReaR can make
its backup without shutting the running system down, understands
raid/lvm, etc., but only the linux filesystems - and with some work
you can modify the disk layout/sizes before the restore.   ReaR is a
reasonable tool to do conversions to VM's, etc., where you are likely
to want to rearrange the layout or remove software raid, although you
have to manually edit the layout description file.
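A minimal ReaR setup along those lines might look like this (values are illustrative; OUTPUT, BACKUP and BACKUP_URL are real ReaR config variables, but the NFS path is a placeholder):

```shell
# /etc/rear/local.conf -- minimal sketch
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://backupserver/export/rear
# then build the rescue ISO and the backup in one step:
rear -v mkbackup
```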

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Copy files from master to slave

2014-07-16 Thread Les Mikesell
On Wed, Jul 16, 2014 at 1:44 AM, Panikera Raj panikera.raj...@gmail.com wrote:
 Hi Corneil, thanks for your help.

 Actually I am using a cross-platform setup, as I mentioned above. If I use
 it as a build step, every time I need to mount a directory and copy files
 to the machine, and sometimes I run into mounting issues.

 Is there any other way I can overcome this?

If these are build results from some other build, you can archive them
in the build that creates them and use copy to slave or the web
interface to get them when you need them.  If they are just static
files, sharing via nfs/samba from a common server will work (use UNC
\\server\share references on windows).  If they are versioned files,
use a source control system like subversion.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [CentOS] Centos 6 full backup software?

2014-07-16 Thread Les Mikesell
On Wed, Jul 16, 2014 at 3:10 PM, John R Pierce pie...@hogranch.com wrote:
 On 7/16/2014 12:50 PM, Rafał Radecki wrote:
 I need a good tool to backup whole system on block level rather than file
 level and easy to use. I currently need to backup to an USB disc (50+ GB of
 data) a system and then reinstall it. In the future if needed I will revert
 to the system from backup;)

 What can you recommend?

 For ext2/3/4, use dump; for xfs, use xfsdump

If you use dump you'll have to create partitions/filesystems before
the restore and reinstall grub yourself. Clonezilla will do that for
you.   The 'rear' package from EPEL would also likely work although it
uses tar for the backup at least by default.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [BackupPC-users] Incremental Backup fail

2014-07-15 Thread Les Mikesell
On Tue, Jul 15, 2014 at 3:40 AM, raceface raceface_the_...@gmx.net wrote:

 thank you for being that kind to non-professionals. I have only 3 blank
 lines in my last posting and 21 non-blank; I don't know why you get 50
 times more blank lines. Using this script is a suggestion of the
 backuppc FAQ and not my personal idea. This script helps me get backuppc
 running full backups. Using $tarPath ends in the error "sudo: no tty present
 and no askpass program specified". Root also has no rights to log in via
 ssh, so ssh is not an option. Giving the user backuppc sudo rights is not
 an option, to prevent having too many users with too many rights.


Personally I use rsync with ssh keys for the local host so it is not a
special case - and I don't use sudo so I haven't had that problem.
But, $* is almost always the wrong thing to put in a shell script vs.
$@ because it won't keep parameters with embedded spaces together.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-15 Thread Les Mikesell
On Mon, Jul 14, 2014 at 11:50 PM, Keith Keller
kkel...@wombat.san-francisco.ca.us wrote:

 1. See the systemd myths web page
 http://0pointer.de/blog/projects/the-biggest-myths.html

 In the interest of full disclosure, that page is written by one of the
 primary authors of systemd, so we shouldn't expect an unbiased opinion.
 (Not saying it's wrong, only that it's important to understand the
 perspective an author might have.)

One thing that bothers me very much when reading that is the several
mentions of how you don't need to learn shell syntax as though that is
an advantage or as if the author didn't already know and use it
already.   As if he didn't understand that _every command you type at
the command line_ is shell syntax.   Or as if he thinks learning a
bunch of special-case language quirks is somehow better than one that
you can use in many other situations.  When you get something that
fundamental wrong it is hard to take the rest seriously.

-- 
  Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-15 Thread Les Mikesell
On Tue, Jul 15, 2014 at 9:46 AM, James Hogarth james.hoga...@gmail.com wrote:

 4.) Debugging.  Why is my unit not starting when I can start it from
 the command line?  Once I figured out journalctl it was a bit easier,
 and typically it was SELinux, but no longer being able to just run
 'bash -x /etc/rc.d/init.d/foobar' was frustrating.  systemd disables
 core dumps on services by default (at least it did on Fedora, the
 documentation now says it's on by default.  Huh.  I should test
 that...)


 Jon as a heads up this isn't a systemd/el7 thing necessarily...

 Look at the daemon function in /etc/init.d/functions that most standard EL
 init scripts will be using...

 Core files have been disabled on things started with that by default (need
 to export a variable in the environment of the script usually via
 sysconfig) the whole of el6 ...

Is there a simple generic equivalent to:
sh -x /etc/rc.d/init.d/program_name start
to see how configurations options that are abstracted out of the main
files are being picked up and expanded?

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-15 Thread Les Mikesell
On Tue, Jul 15, 2014 at 10:18 AM, Jonathan Billings billi...@negate.org wrote:

  1. See the systemd myths web page
  http://0pointer.de/blog/projects/the-biggest-myths.html
 
  In the interest of full disclosure, that page is written by one of the
  primary authors of systemd, so we shouldn't expect an unbiased opinion.
  (Not saying it's wrong, only that it's important to understand the
  perspective an author might have.)

 One thing that bothers me very much when reading that is the several
 mentions of how you don't need to learn shell syntax as though that is
 an advantage or as if the author didn't already know and use it
 already.   As if he didn't understand that _every command you type at
 the command line_ is shell syntax.   Or as if he thinks learning a
 bunch of special-case language quirks is somehow better than one that
 you can use in many other situations.  When you get something that
 fundamental wrong it is hard to take the rest seriously.

 You mean this paragraph?

 systemd certainly comes with a learning curve. Everything
 does. However, we like to believe that it is actually simpler to
 understand systemd than a Shell-based boot for most people. Surprised
 we say that? Well, as it turns out, Shell is not a pretty language to
 learn, its syntax is arcane and complex. systemd unit files are
 substantially easier to understand, they do not expose a programming
 language, but are simple and declarative by nature. That all said, if
 you are experienced in shell, then yes, adopting systemd will take a
 bit of learning.


 I think the point is that systemd unit file syntax is significantly
 simpler than shell syntax -- can we agree on that?

No.  Everything you type on a command line is shell syntax.  If you
don't think that is an appropriate way to start programs you probably
shouldn't be using a unix-like system, much less redesigning it.  If
you don't think the shell is the best tool, how about fixing it so it
will be the best in all situations.

 It also is
 significantly less-featureful than a shell programming language.  Yes,
 you're going to be using shell elsewhere, but in my experience, the
 structure of most SysVinit scripts is nearly identical, and where it
 deviates is where things often get confusing to people not as familiar
 with shell scripting.  Many of the helper functions in
 /etc/rc.d/init.d/functions seem to exist to STOP people from writing
 unique shell code in their init scripts.

Yes, reusing common code and knowledge is a good thing.  But spending
a bit of time learning shell syntax will help you with pretty much
everything else you'll ever do on a unix-like system, where spending
that time learning a new way to make your program start at boot will
just get you back to what you already could do on previous systems.
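For reference, the declarative style under discussion looks like this in full — a minimal, hypothetical unit file (service name and paths invented for illustration):

```ini
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/example-daemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```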

-- 
Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-15 Thread Les Mikesell
On Tue, Jul 15, 2014 at 11:56 AM, Marko Vojinovic vvma...@gmail.com wrote:
 
 Yes, reusing common code and knowledge is a good thing.  But spending
 a bit of time learning shell syntax will help you with pretty much
 everything else you'll ever do on a unix-like system, where spending
 that time learning a new way to make your program start at boot will
 just get you back to what you already could do on previous systems.

 Les, I could re-use your logic to argue that one should never even try
 to learn bash, and stick to C instead.

You could, if every command typed by every user since unix v7 had been
parsed with C syntax instead of shell so there would be something they
could 'stick to'.  But, that's not true.

 Every *real* user of UNIX-like
 systems should be capable of writing C code, which is used in so many
 more circumstances than bash.

That might be true, but it is irrelevant.

 Why would you ever want to start your system using some clunky
 shell-based interpreter like bash, (which cannot even share memory
 between processes in a native way), when you can simply write a short
 piece of C code, fork() all your services, compile it, and run?

If you think bash is 'clunky', then why even run an operating system
where it is used as the native user interface?Or, if you need to
change something, why not fix bash to have the close mapping to system
calls that bourne shell had back in the days before sockets?

 And if you really insist on writing commands interactively into a
 command prompt, you are welcome to use tcsh, and reuse all the syntax
 and well-earned knowledge of C, rather than invest time to learn
 yet-another-obscure-scripting-language...

 Right? Or not?

Well, Bill Joy thought so.  I wouldn't argue with him about it for his
own use, but for everyone else it is just another incompatible waste
of human time.

 If not, you may want to reconsider your argument against systemd ---
 it's simple, clean, declarative, does one thing and does it well, and
 it doesn't pretend to be a panacea of system administration like bash
 does.

I'm sure it can work - and will.  But I'm equally sure that in my
lifetime the cheap computer time it might save for me in infrequent
server reboots will never be a win over the expensive human time for
the staff training and new documentation that will be needed to deal
with it and the differences in the different systems that will be
running concurrently for a long time.

The one place it 'seems' like it should be useful would be on a laptop
if it handles sleep mode gracefully, but on the laptop where I've been
testing RHEL7 beta it seems purely random whether it will wake from
sleep and continue or if it will have logged me out.   And I don't
have a clue how to debug it.

-- 
   Les Mikesell
  lesmikes...@gmail.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: Copy files from master to slave

2014-07-14 Thread Les Mikesell
On Mon, Jul 14, 2014 at 2:06 AM, Panikera Raj panikera.raj...@gmail.com wrote:
 Hi All

 I have to copy a couple of files from the master machine (OS X) to a slave
 machine (Windows) using Jenkins. How can I achieve this? If it is possible, I
 need to copy files from master (Users/Panikera/test) to slave (D:/Panikera)
 machine. How do I overcome this?

Jenkins has pretty good support for source control systems.  You could
use one of them as a file transport even if you don't need the other
versioning features.

-- 
   Les Mikesell
 lesmikes...@gmail.com

-- 
You received this message because you are subscribed to the Google Groups 
Jenkins Users group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-14 Thread Les Mikesell
On Mon, Jul 14, 2014 at 11:47 AM, Andrew Wyatt and...@fuduntu.org wrote:
 
 Anyway, he also seems determined to see it all as black and white, rather
 than looking at the *much* larger set of bugs and vulnerabilities that
 Windows Server has had than any version of 'Nix. Sure, we have some... but
 a *lot* fewer, and overwhelmingly far less serious.

 mark


 Yup, overwhelmingly less serious.

 http://heartbleed.com/

 Oh, wait.

Openssl doesn't have much to do with Unix/linux.  It is just one of a
bazillion application level programs that you might run.  Are you
going to include all bugs in all possible windows apps in your
security comparison?

But init/upstart/systemd are very special things in the unix/linux
ecosystem.  They become the parent process of everything else.  For
everything else, the only way to create a process is fork(), with its
forced inheritance of environment and security contexts.

In any case, giant monolithic programs that try to do everything
sometimes become better than a toolbox, but it tends to be
rare.  First, it takes years to fix the worst of the bugs - but maybe
that has already happened in fedora...  And after that it is an
improvement only if the designers really did anticipate every possible
need.   Otherwise the old unix philosophy that processes are cheap -
if you need another one to do something, use it - is still in play.
If you need something to track how many times something has been
respawned or to check/clean related things at startup/restart you'll
probably still need a shell there anyway.
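That fork() inheritance is easy to see from a shell, since every command a shell launches goes through fork()/exec() and receives a copy of the parent's exported environment (DEMO_VAR is just an illustrative name):

```shell
# The child process is created with fork()/exec(), so it sees a copy
# of the parent shell's exported environment.
export DEMO_VAR="set-in-parent"
sh -c 'echo "child sees DEMO_VAR=$DEMO_VAR"'
```

The single quotes keep $DEMO_VAR from being expanded by the parent, so the value printed really is the one the child inherited.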

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: [CentOS] latest freeIPA on CentOS

2014-07-14 Thread Les Mikesell
On Mon, Jul 14, 2014 at 2:02 PM, Jitse Klomp jitsekl...@gmail.com wrote:
 
 I certainly don't want to run Fedora in production - and I don't want
 to do the backport for  such a complicated piece of software myself.


 RH will *not* do a backport of 3.3 to RHEL 6.x.

 Alexander Bokovoy (from Red Hat) on the freeipa-users list (feb. 17):
 RHEL 6.x lacks many of the dependencies required for IPA 3.3. Newer
 MIT Kerberos (with API and ABI change for KDC database driver and many
 other changes required for trusts and two-factor authentication), newer
 Dogtag which relies on several dozens of Java packages and newer tomcat,
 systemd (we use socket activation and tmpfiles.d a lot), newer SSSD.
 Kerberos ccache stored in the kernel space (KEYRING ccache type)
 requires changes at kernel level which are also needed for kerberized
 NFSv4 for trusts as AD users have large Kerberos tickets when they are
 members of many groups and so on.

Isn't that the sort of thing that 'software collections' are intended
to provide?   It would be encouraging to see something actually built
on top of them.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-14 Thread Les Mikesell
On Mon, Jul 14, 2014 at 2:05 PM, William Woods wood...@gmail.com wrote:


 1/3 of my servers use C 5.10, 2/3 use C 6.5. I use C 5.10 as my
 individual development server and desktop.

 C 5 works well for me.

 Centos 5 Fan :-)

 That is probably the most pointless comment you have made yet. Just because
 you use something, and you are a fan does not mean anything in the context
 of the discussion.

On the contrary - it means his services start just fine without
systemd, and the best systemd is going to do is start them the same
way - that is, not be an improvement even after someone wastes the
time to rewrite the startup code.

-- 
   Les Mikesell
 lesmikes...@gmail.com


[CentOS] Freenx/x2go on CentOS7?

2014-07-11 Thread Les Mikesell
Can anyone comment on the best remote GUI approach for C7 yet?
X2goserver is in epel but when I tried it on the RHEL beta it only
worked with a KDE desktop due to the 3d requirement of Gnome3.

-- 
  Les Mikesell
 lesmikes...@gmail.com


[CentOS] 1stboot stuff?

2014-07-11 Thread Les Mikesell
Will anything break if you never log into the console after the
initial reboot?  I just installed my first copy in a VM, and connected
over ssh as I normally would for all access after the install.   But I
just happened to leave the console window open and later noticed that
it was prompting for license acceptance which I didn't see in the ssh
login.On a more typical install, no one will ever log in at the
console after the network is up.   Will that matter, and is there a
way to keep it from confusing operators that might need to log in with
a crash cart much later?

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] 1stboot stuff?

2014-07-11 Thread Les Mikesell
On Fri, Jul 11, 2014 at 12:48 PM, Thomas Eriksson
thomas.eriks...@slac.stanford.edu wrote:


 On 07/11/2014 10:35 AM, Les Mikesell wrote:
 Will anything break if you never log into the console after the
 initial reboot?  I just installed my first copy in a VM, and connected
 over ssh as I normally would for all access after the install.   But I
 just happened to leave the console window open and later noticed that
 it was prompting for license acceptance which I didn't see in the ssh
 login.On a more typical install, no one will ever log in at the
 console after the network is up.   Will that matter, and is there a
 way to keep it from confusing operators that might need to log in with
 a crash cart much later?



 If your typical install is via kickstart, there is a keyword

   eula --agreed


My typical install will be arranging for an operator in some other
location to pop in a minimal iso, run the install, and give it an IP
address out of his range that I can reach.   If ssh connects to it
after the reboot, he's done.
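For reference, a minimal kickstart fragment using the quoted keyword might look like this (the surrounding directives are illustrative, not a complete kickstart file):

```
# Pre-accept the EULA so firstboot never prompts on the console
eula --agreed
# Skip the graphical firstboot tool entirely
firstboot --disable
reboot
```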

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] 1stboot stuff?

2014-07-11 Thread Les Mikesell
On Fri, Jul 11, 2014 at 1:29 PM, David Both
db...@millennium-technology.com wrote:
 Nothing breaks. Nothing stops working. I do this all the time. I almost never
 login to a local console after the initial reboot of a newly installed system,
 either Fedora or CentOS.

 In fact I have a post-install script that turns off the firstboot service and
 terminates it if it is already running - that in addition to many other
 customization tasks that I perform on every Linux box I install. You could 
 then
 uninstall the firstboot RPM if you choose.

Thanks - if for some reason the network subsequently breaks and the
remote operators have to revive it from the console I'd rather not
have them think missing that step might have been the problem with the
box.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Frustration with build-step/post-build-action access...

2014-07-10 Thread Les Mikesell
On Thu, Jul 10, 2014 at 12:29 AM, Jeff predato...@gmail.com wrote:
 Thanks...that's what I am trying to do.  I have successfully copied the
 artifacts.  Now I need to SCP them to the server BEFORE I run a remote SSH
 script to install them.

 Using the SCP plugin would help me not have to manually manage credentials
 in the job or risk exposing them in the logs (am going to investigate other
 ways per Daniel's comment) but the SCP plugin step is currently only
 available in a post-build action so I am unable to use it to copy the files
 before attempting to install them.

 I could (and may still) write a local SSH script that simply calls SCP but
 I'm unsure how to manage credentials securely and still use them in scripts
 (again, still investigating).

Not sure I understand the point of copying things twice if the jenkins
node doesn't actually do anything to the files. Can't your target
server script just grab the archived artifacts directly from the
jenkins web interface itself?   In any case, if the point is just to
run some arbitrary stuff via ssh from a node with strategic firewall
access, why not use the ssh-agent plugin and do whatever you need in
one or more build scripts that can be embedded in the job or pulled
from an scm?   A side benefit is that you can use rsync over ssh
instead of scp which can sometimes be helpful.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BackupPC acting strangely

2014-07-10 Thread Les Mikesell
On Thu, Jul 10, 2014 at 12:42 PM, Richard Stockton - Tierpoint Systems
Administrator richard.stock...@tierpoint.com wrote:
 2nd try: 1st didn't post to list...

 Isn't there anyone out there who has had a similar problem?  I can't
 believe I'm the only one.  Further investigation shows the problem
 is happening for almost all of my 14 hosts.

 Bottom line: The incrementals get (and create) all the files, but the
 full backups only create empty directories.  No errors are shown in
 the logs, and the GUI shows all the backups as complete.  When the
 incrementals are deleted, the fulls don't have everything, and data
 is permanently lost.

 This is using rsync between multiple CentOS (Linux) boxes.

 I REALLY need to get this fixed.  Anybody?  Please help.

It doesn't make any sense to me.  There was a recent thread on the list:
https://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg26870.html
which sounded similarly broken, but that was on ubuntu and fixed with
a re-install.  My best guess would be that some perl module is
corrupted (unless your xfer logs are full of 'can't link' errors).

If you installed the package from EPEL, you can use 'rpm -Vv BackupPC'
to see if any of the package files have been changed since
installation.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Open source business process management suite built on Java and Eclipse
Turn processes into business applications with Bonita BPM Community Edition
Quickly connect people, data, and systems into organized workflows
Winner of BOSSIE, CODIE, OW2 and Gartner awards
http://p.sf.net/sfu/Bonitasoft
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-10 Thread Les Mikesell
On Thu, Jul 10, 2014 at 8:39 AM, David G. Miller d...@davenjudy.org wrote:

 Generally speaking, if a service is broken to the point that it needs
 something to automatically restart it I'd rather have it die
 gracefully and not do surprising things until someone fixes it.   But
 then again, doesn't mysqld manage to accomplish that in a
 fully-compatible manner on Centos6?

 Can't find the original post so replying and agreeing with Les.  Have the
 same ongoing problem with radvd.  When My IPv6 tunnel provider burps, the
 tunnel drops.  The tunnel daemon usually reconnects but radvd stays down.
 Solution:

 */12 * * * * /sbin/service radvd status > /dev/null 2>&1 || /sbin/service
 radvd start 2>&1

 in crontab.  How hard is that?  And without all of the systemd nonsense.

Or, if you want things to respawn, the original init handled that very
nicely via inittab.   Also, running a shell as the parent of your
daemon as a watchdog that can repair its environment and restart it if
it exits doesn't have much overhead.  Programs share the loaded
executable code across all instances and you pretty much always have
some shells running on a linux/unix box - a few more won't matter.
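Both approaches can be sketched. The old inittab respawn version was a one-liner entry such as `rd:2345:respawn:/usr/sbin/mydaemon --foreground` (daemon name and flag illustrative); the shell-as-watchdog version looks like the following, where run_daemon() stands in for starting the real service in the foreground and here just simulates a crash so the loop can be demonstrated and capped:

```shell
#!/bin/sh
# Shell-as-watchdog sketch.  run_daemon() is a placeholder for the
# real foreground service; it simulates an immediate crash here.
run_daemon() {
    sleep 0.1
    return 1                     # pretend the daemon died
}

restarts=0
while [ "$restarts" -lt 3 ]; do  # a real watchdog would loop forever
    run_daemon
    restarts=$((restarts + 1))
done
echo "restarted $restarts times"
```

A real watchdog would also log the exit status and perhaps clean up stale pid files or sockets before restarting, which is exactly the "repair its environment" step mentioned above.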

-- 
Les Mikesell
   lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-09 Thread Les Mikesell
On Wed, Jul 9, 2014 at 12:11 PM, Lamar Owen lo...@pari.edu wrote:

   That was back when
 people actually using the systems contributed their fixes directly.
 I had a couple of 4+ year uptime runs on a system with RH7 + updates -
 and only shut it down to move it once.


 I remember the
 mechanisms, and the gatekeepers, involved, very well. The Fedora way is
 way more open, with people outside of Red Hat directly managing packages
 instead of contributing fixes to the 'official' Red Hat packager for
 that package.

I'm not convinced that being open and receptive to changes from people
that aren't using and appear to not even like the existing, working
system is better than having a single community, all running the same
system because they already like it, and focusing on improving it
while keeping things they like and are currently using.  With the
latter approach, there was a much better sense of the cost of breaking
things that previously worked.   With fedora, well, nobody cares -
they aren't running large-scale production systems on it anyway.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-09 Thread Les Mikesell
On Wed, Jul 9, 2014 at 12:21 PM, Lamar Owen lo...@pari.edu wrote:

 But an init that takes a bit more
 care to its offspring, making sure they stay alive until such time as
 they are needed to die (yuck again!) is a vast improvement over 'start
 it and forget it.'

So your solution to the problems that happen in complex daemon
software is to use even more complex software as a manager for all of
them???  Remind me why (a) you think that will be perfect, and (b) why
you think an unpredictable daemon should be resurrected to continue
its unpredictable behavior.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-09 Thread Les Mikesell
On Wed, Jul 9, 2014 at 1:22 PM, Lamar Owen lo...@pari.edu wrote:
 On 07/09/2014 01:31 PM, Les Mikesell wrote:
 I'm not convinced that being open and receptive to changes from people
 that aren't using and appear to not even like the existing, working
 system is better than having a single community, all running the same
 system because they already like it, and focusing on improving it
 while keeping things they like and are currently using.

 I think you and I remember a different set of lists.  I remember lots of
 griping about changes being forced down throats.  Heh, a quick perusal
 of one of the lists' archives just a minute ago confirmed my recollection.

No, that is exactly my point.  Back then the griping by affected
active users happened in more or less real time compared to the
changes being done.  Now fedora goes off on its own merry way for
years before its breakage comes back to haunt the people that wanted
stability.

 With the latter approach, there was a much better sense of the cost of
 breaking things that previously worked.
 Do you remember the brouhaha over libc5 that 'just worked' versus the
 'changed for no reason' glibc2?  And don't get me started on the
 recollections over the GNOME 1 to 2 upgrade (or fvwm to GNOME, for that
 matter!), or the various KDE upgrades (and the entire lack of KDE for
 RHL 5.x due to the odd license for Qt, remember?

Don't think people running a bunch of RH5 servers really cared about X
or desktops at all...

 And then
 all the i18n changes for 8.0 (I dealt with that one directly, since the
 PostgreSQL ANSI C default had to be changed to whatever was now
 localized

That one was sort of inevitable.   Likewise for grub2 and UEFI...

 The bad rep for x.0 releases
 started somewhere, remember?

Well, that was the equivalent of fedora.  You don't use that in
production.   The x.2 release mapped pretty well to 'enterprise' -
except maybe for 8.x and 9 which never really were very good.

 Not that I necessarily disagree with your observations, by the way. I'm
 just looking at the brushstrokes of the really big picture and
 remembering how at the time it seemed like we sometimes were just moving
 from one kluge to another (if you insist on the alternate spelling
 'kludge' feel free to use it.).  But it was a blast being there and
 watching this thing called Linux find its wings, no?

In these observations you have to take into account just how badly
broken the base code was back then.  Wade through some old changelogs
if you disagree.  There were real reasons that things had to change.
But by, say, CentOS5 or so we had systems that would run indefinitely
with a few security updates now and then.  (Actually CentOS3 was pretty
solid, but you have to follow the kernel).

 And I have two previous versions of CentOS to fall back on while I learn
 the new tools; I have both C5 and C6 in production, and have plenty of
 time in which to do a proper analysis on the best way ('best way' of
 course being subjective; there is no such thing as an entirely objective
 'best way') for me to leverage the new tools. The fact of the matter is
 that Red Hat would not bet the farm on systemd without substantial
 buy-in from a large number of people. The further fact the Debian and
 others have come to the same conclusion speaks volumes, whether any
 given person thinks it stupid or not. And I don't have enough data to
 know whether it's going to work for me or not; I'm definitely not going
 to knee-jerk about it, though.

I'm never against adding new options and features.  But I am very
aware of the cost of not making the new version backwards compatible
with anything the old version would have handled.  And I'm rarely
convinced that someone who doesn't consider backwards compatibility as
a first priority is going to do so later either, so you are likely
wasting your time learning to work with today's version since
tomorrows will break what you just did.

 But the rumors of something 'killing' Linux have and will always be
 exaggerated.  Systemd certainly isn't going to, if gcc 2.96 didn't. I
 mean, think about it: the first rev out of gcc 2.96 wouldn't even
 compile the Linux kernel, IIRC!

Yes, but on the other hand, people still pay large sums of money for
other operating systems.  And there are some reasons for that.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-09 Thread Les Mikesell
On Wed, Jul 9, 2014 at 3:07 PM, Lamar Owen lo...@pari.edu wrote:

 If you don't follow the Fedora lists and get involved, well,
 you get what you pay for, I guess.

Following the list just makes it more painfully clear that they don't
care about compatibility or breakage of previously working
code/assumptions or other people's work.  It's all about change.   I
tried to use/follow fedora for a while, but gave up when an update
between releases pushed a kernel that wouldn't boot on the fairly
mainstream IBM server I was using for testing.

  We already had Upstart, and the move from Upstart to
 systemd is not that big (at least in my opinion), so it's not something
 that got me up in arms.

Backwards compatibility isn't a big/little thing, it is binary choice
yes/no.  If you copy stuff over and it doesn't work, that's a no, and
it is going to cost something to make it work again.

 Don't think people running a bunch of RH5 servers really cared about X
 or desktops at all...

 You missed my Red Baron comment, didn't you?  I ran Red Hat Linux 4.1 as
 a desktop, and once Mandrake 5.3 was out I went completely Linux as my
 primary work and personal desktop.  I figured if I was going to run it
 as a server I needed to 'dogfood' things and really rely on it for daily
 work.  And my employer agreed.

Did you keep track of the time you spent keeping that working?

 Yes, but on the other hand, people still pay large sums of money for
 other operating systems.  And there are some reasons for that.

 Many of which are not technical.

Many  aren't.   And many are just a large base of stuff that works and
will break if anything underneath changes.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-09 Thread Les Mikesell
On Wed, Jul 9, 2014 at 3:43 PM, Lamar Owen lo...@pari.edu wrote:

 The only constant is change.

Sure, but it is only progress if you stop changing things when they work.

 Have you checked how compatible or not systemd is for the init scripts
 of the packages about which you care (such as OpenNMS)?

OpenNMS provides yum repositories.  When they add the EL7 repo, I'll
expect it to include something that already works.  So that's not my
problem but will likely waste someone else's time if the existing init
script doesn't drop in. I'll just need to make things work for the
internal programs, some of which are done by developers that would
really rather stay on windows.

 Did you keep track of the time you spent keeping [desktop RHL 5.x] working?

 My employer put a line item on my timesheet for it, so, yes, I kept
 track of it and got paid for it.  Those paper files have long since been
 tossed, since that was fifteen-plus years ago.  My employer was paying
 me to keep the server up, I had an employer who understood the value of
 training, and that employer definitely understood the value of dogfooding.

Still you must have come up with some bottom line recommendations.
Did your employer make all or some large number of staff follow your
lead back then on desktop versions/updates after seeing what it costs?
  Personally, I gave up on the hardware aspect of a linux desktop as
soon as I saw freenx working from windows/macs where vendor-optimized
video drivers come with the distribution.   And then having access to
both Linux and native desktop programs I've tended to ignore the
problems with linux desktop apps.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [BackupPC-users] Restore Single Files with Default Tools?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 12:09 PM, Christian Völker chrisc...@knebb.de wrote:
 Hi,

 I have a BackupPC host directory copied from the /var/lib/BackupPC
 directory.

 So all files start with the f prefix. Don't mind about the prefix. But
 they are compressed somewhat.

 Do I have a chance to access the file content with some (more or less)
 default command line tools? I cannot use any BackupPC specific tools.
 First, the files are outside the /var/lib/BackupPC tree and second the
 do not reside on a BackupPC computer, just Linux.

 Any hints here?

I think the backuppc compression is unique, so you'll have to install
backuppc or at least enough of it to get BackupPC_zcat working.   It's
just perl...  Of course, if the old system still works, the web
interface would have been the place to grab the files.

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] fileListReceive failed

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 9:38 AM, Elodie Chapeaublanc
elodie.chapeaubl...@curie.fr wrote:
 After checking, my version of File::RsyncP is Version 0.68, released 18
 Nov 2006.

But your error is coming from the remote side.   The older protocol
negotiated by backuppc is going to force it to load the whole
directory for the 'share' into memory.   Is enough available?

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 8:42 AM, Dennis Jacobfeuerborn
denni...@conversis.de wrote:
 Also the switch from messy bash scripts to a declarative
 configuration makes things easier once you get used to the syntax.

Sorry, but I'd recommend that anyone who thinks shell syntax is
'messy' just stay away from unix-like systems instead of destroying
the best parts of them.   There is a huge advantage of consistent
behavior whether some command is executed interactively on the command
line or started automatically by some other means.

 Then there is the fact that services are actually monitored and can be
 restarted automatically if they fail/crash and they run in a sane
 environment where stdout is redirected into the journal so that all
 output is caught which can be useful for debugging.

What part of i/o redirection does the shell not handle well for you?
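For comparison, the shell's version of "catch all of a service's output" is a single redirection clause, the same thing a sysv init script does when launching a daemon (the log path here is a throwaway temp file):

```shell
# Capture both stdout and stderr of a command in one log file --
# plain shell syntax, no journal required.
log=$(mktemp)
sh -c 'echo "normal output"; echo "error output" >&2' > "$log" 2>&1
cat "$log"
```

The `2>&1` must come after the `> "$log"` so that stderr is duplicated onto the already-redirected stdout.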

 Its certainly a change one needs to get used to but as mentioned above I
 don't think its a bad change and you don't have to jump to it
 immediately if you don't want to.

'Immediately' has different meanings to different people.  I'd rather
see such things discussed in terms of cost of re-implementations.  How
much is this going to cost a typical company _just_ to keep their
existing programs working the same way over the next decade (which is
a relatively short time in terms of business-process changes)?   Even
if the changes themselves are minor, you have to cover the cost of
paying some number of people for that 'get used to the syntax' step.
Personally I think Red Hat did everyone a disservice by splitting the
development side off to fedora and divorcing it from the enterprise
users that like the consistency.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 10:58 AM, Lamar Owen lo...@pari.edu wrote:
 
 And dynamic spinup of servers to handle increased load is a use case for
 systemd's rapid bootup.  They go hand-in-hand.

Don't know about your servers, but ours take much, much longer for
their boot-time memory and hardware tests and initialization than
anything the old style sysvinit scripts do.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 11:05 AM, Andrew Wyatt and...@fuduntu.org wrote:
 
 This is an unfortunate problem in the community today, anyone who disagrees
 with status-quo is just an antique, it's insulting to say the least.  It
 doesn't matter our experience, we're just causing trouble because we
 don't want change which is an excuse that isn't even remotely true.
  Eventually when all these old guys leave, all that will be left are the
 inexperienced kids and that's when the real problems will begin to surface.

The people promoting change most likely do not have a large installed
base of their own complex programming to maintain or any staff to
retrain.

  There are a few good reasons to adopt systemd, but the bad outweigh the
 good in my opinion.

My opinion is that if a new system is really better, then it should be
capable of handling everything the previous standard did
transparently.   If it can't, then it's not really better.  It is just
different.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 11:13 AM, Lamar Owen lo...@pari.edu wrote:
 On 07/08/2014 11:58 AM, Les Mikesell wrote:
 ... How much is this going to cost a typical company _just_ to keep
 their existing programs working the same way over the next decade
 (which is a relatively short time in terms of business-process changes)?

 Les, this is the wrong question to ask.  The question I ask is 'What
 will be my return on investment be, in potentially lower costs, to run
 my programs in a different way?'

But the answer is still the same.  It's sort of the same as asking
that about getting a shiny new car with a different door size that
won't carry your old stuff without changes and then still won't do it
any better.   Our services take all the hardware can do and a lot of
startup initialization on their own.  Saving a fraction of a second of
system time starting them is never going to be a good tradeoff for
needing additional engineer training time on how to port them between
two different versions of the same OS.

 If there is no ROI, or a really long
 ROI, well, I still have C6 to run until 2020 while I invest the time in
 determining if a new way is better or not.

So a deferred cost doesn't matter to you?   You aren't young enough to
still think that 6 years is a long time away, are you?

 Fact is that all of the
 major Linux distributions are going this way; do you really think all of
 them would change if this change were stupid?

Yes, Linux distributions do a lot of things I consider stupid.  Take
the difficulty of maintaining real video drivers as an example.

 Even the Unix philosophy was new at one point.  Just because it works
 doesn't mean it's the best that can be found.

Re-using things that work may not be best, but if everyone is
continually forced to re-implement them, they will never get a chance
to do what is best.   In terms of your ROI question, you should be
asking if that is the best use of your time.

 Even if the changes themselves are minor, you have to cover the cost
 of paying some number of people for that 'get used to the syntax'
 step. Personally I think Red Hat did everyone a disservice by
 splitting the development side off to fedora and divorcing it from the
 enterprise users that like the consistency.

 Consistency is not the only goal.

But that's why we are here using an 'enterprise' release, not
rebuilding gentoo every day.

 Efficiency should trump consistency,

Efficiency comes from following standards so components are reusable
and can be layered on top of each other. Then you can focus on making
the least efficient part better and spend your time where it will make
a difference. Adding options to increase efficiency is great - as long
as you don't break backwards compatibility.

 and I for one like being able to see where the direction lies well in
 advance of EL adopting a feature blind.  Or don't you remember how Red
 Hat Linux development used to be before Fedora and the openness of that
 process?

Yes, I remember it worked fantastically well up through at least RH7 -
which was pretty much compatible with CentOS3.   That was back when
people actually using the systems contributed their fixes directly.
I had a couple of 4+ year uptime runs on a system with RH7 + updates -
and only shut it down to move it once.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 11:25 AM, Lamar Owen lo...@pari.edu wrote:
 Memory tests are redundant with ECC. (I
 know; I have an older SuperMicro server here that passes memory testing
 in POST but throws nearly continuous ECC errors in operation; it does
 operate, though).  If it fails during spinup, flag the failure while
 spinning up another server.

I don't think that is generally true.  I've seen several IBM systems
disable memory during POST and come up running with a smaller amount.

 Virtual servers have no need of POST (they also don't save as much
 power; although dynamic load balancing can do some predictive heuristics
 and spin up host hypervisors as needed and do live migration of server
 processes dynamically).

Our services that need scaling need all of the hardware capability and
aren't virtualized.   That might change someday...

 To detect failures early, spin up every server in a rotating sequence
 with a testing instance, and skip POST entirely.

 If you have to, spin up the server in a stateless mode and put it to
 sleep.  Then wake it up with dynamic state.

Our servers tend to just run till they die.  If we didn't need them we
wouldn't have bought them in the first place.  I suppose there are
businesses with different processes that come and go, but I'm not sure
that is desirable.

 Long POSTs need to go away, with better fault tolerance after spinup
 being far more desirable, much like the promise of the old as dirt
 Tandem NonStop system. (I say the 'promise' rather than the
 'implementation' for a reason.).

If you need load balancing anyway you just run enough spares to cover
the failures.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Y2K not - Re: Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 12:22 PM, Gilbert Sebenste
seben...@weather.admin.niu.edu wrote:
 On Tue, 8 Jul 2014, Robert Moskowitz wrote:

 and did the conversion for display to save another byte.  Efficiency?
 We were desperate for every byte we could squeeze out.  The US Post
 Office created a standard so that all US cities (and supposedly streets)
 could be entered in 14 characters or less.  We changed the abbreviation
 of Nebraska from NB to NE (I remember writing that conversion program)
 so we could more easily mix US and Canada addresses (though they would
 not change their 6 character code to our 5 digit one).  We burned CPU to
 save storage, then rewrote key routines in assembler and hacked the
 COBOL calls to make it all work.

 Things change.  Design  goals change.  Systems have to change.

 Of course they do. And those were changes in efficiency that were the
 result of needed productivity improvements. Change for the sake of
 major improvement(s). Wonderful, well-designed, efficient AND necessary.
 And it obviously made things more productive for everyone!

 I argue that systemd improves neither efficiency, productivity, nor
 satisfaction... nor is it necessary.

More to the point, those 'old' efficiency hacks were from a time when
programmer time was cheaper than the computer resources.   Now, the
computers should be doing the work for us instead of the other way
around.  Can anyone really make the argument that we can't afford the
computer resources for transparent backwards compatibility now?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 1:08 PM, John R Pierce pie...@hogranch.com wrote:
 On 7/8/2014 6:53 AM, Ned Slider wrote:
 That's not always true.

 Some configs that were under /etc on el6 must now reside under /usr on el7.

 Take modprobe blacklists for example.

 On el5 and el6 they are in/etc/modprobe.d/

 On el7 they need to be in/usr/lib/modprobe.d/

 If you install modprobe blacklists to the old location under el7 they
 will not work.

 I'm sure there are other examples, this is just one example I've
 happened to run into.

 this is insane.   Traditionally in Unix-like systems, /usr is supposed
 to be able to be read-only and sharable between multiple systems, for
 instance in NFS boot scenarios.   /var is specifically for host-specific
 complex configuration and status stuff like /var/log, /var/state,
 /var/run, and so forth.

And more to the point, /usr isn't supposed to be needed until you are
past the point of mounting all filesystems so you can boot from
something tiny.  Doesn't modprobe need its files earlier than that?

-- 
Les Mikesell
   lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 1:12 PM, Scott Robbins scot...@nyc.rr.com wrote:
 On Tue, Jul 08, 2014 at 11:36:07AM -0500, Gilbert Sebenste wrote:
 On Tue, 8 Jul 2014, Lamar Owen wrote:

 People will vote with their feet on this. And, that old white men are
 complaining about this is ageist, racist, and demeaning to EVERYONE. I am
 really disappointed in Red Hat saying this, far more than the
 whole systemd concerns.


 Again, let me clarify, it was a tongue-in-cheek comment made among friends,
 and certainly not a RedHat official quote.

But insults aside, you should think of that reference
realistically as meaning the people who have established systems
working well enough to have built businesses worth maintaining.   Do
you really want to rock that boat in favor of youngsters that don't
know how to make it work?

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 1:47 PM, Jonathan Billings billi...@negate.org wrote:
 On Tue, Jul 08, 2014 at 01:22:54PM -0500, Les Mikesell wrote:
 And more to the point, /usr isn't supposed to be needed until you are
 past the point of mounting all filesystems so you can boot from
 something tiny.  Doesn't modprobe need its files earlier than that?

 I think that a lot of these objections are addressed here:

 http://www.freedesktop.org/wiki/Software/systemd/separate-usr-is-broken/

Ummm, 'addressed' by pointing out that a whole bunch of the changes
fedora has made break things that are expected to work in unix-like
systems.   I fail to see how that helps with the problem.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 2:16 PM, Reindl Harald h.rei...@thelounge.net wrote:

 Am 08.07.2014 17:58, schrieb Les Mikesell:
 On Tue, Jul 8, 2014 at 8:42 AM, Dennis Jacobfeuerborn
 denni...@conversis.de wrote:
 Also the switch from messy bash scripts to a declarative
 configuration makes things easier once you get used to the syntax.

 Sorry, but I'd recommend that anyone who thinks shell syntax is
 'messy' just stay away from unix-like systems instead of destroying
 the best parts of them

 WTF - you can place a shell-script in ExecStart and
 set type to 'oneshot' - nobody is taking anything
 away from you

Unless you are offering to do that for me, for free,  on all my
systems, having to do it certainly does take something away.


 Then there is the fact that services are actually monitored and can be
 restarted automatically if they fail/crash and they run in a sane
 environment where stdout is redirected into the journal so that all
 output is caught which can be useful for debugging.

 What part of i/o redirection does the shell not handle well for you?

 what part of monitoring did you not understand?

Generally speaking, if a service is broken to the point that it needs
something to automatically restart it I'd rather have it die
gracefully and not do surprising things until someone fixes it.   But
then again, doesn't mysqld manage to accomplish that in a
fully-compatible manner on Centos6?
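For readers weighing the tradeoff, the monitoring behavior under debate comes down to a couple of unit-file directives; a minimal sketch with a hypothetical service name and binary path (a real unit would live in /etc/systemd/system/, it is written to the current directory here only for illustration):

```shell
# Hypothetical unit: restart on failure, stdout captured by the journal.
cat > example.service <<'EOF'
[Unit]
Description=Example daemon (hypothetical)

[Service]
ExecStart=/usr/local/bin/exampled
Restart=on-failure
RestartSec=5
StandardOutput=journal
EOF
```

Leaving `Restart=` at its default of `no` keeps the die-and-stay-down behavior argued for above; the journal capture happens either way.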

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 2:34 PM, Reindl Harald h.rei...@thelounge.net wrote:

 Our servers tend to just run till they die.  If we didn't need them we
 wouldn't have bought them in the first place.  I suppose there are
 businesses with different processes that come and go, but I'm not sure
 that is desirable

 which proves you have never done serious IT

No, it means our servers run for years,

 the goal of standby servers in case of virtualization is that
 you know there may be load peaks from time to time but mostly
 you don't need all 4 servers up

We design to handle a whole data center failure in only the time it
takes for a new client connection.  With/without systemd, nobody is
going to wait for a new server to spin up.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 2:55 PM, Reindl Harald h.rei...@thelounge.net wrote:

 Unless you are offering to do that for me, for free,  on all my
 systems, having to do it certainly does take something away.

 then just don't upgrade to RHEL7
 so what

I expect our systems to  still have services running past 2020.


 Generally speaking, if a service is broken to the point that it needs
 something to automatically restart it I'd rather have it die
 gracefully and not do surprising things until someone fixes it. But
 then again, doesn't mysqld manage to accomplish that in a
 fully-compatible manner on Centos6?

 generally speaking if my webserver dies for whatever reason
 I want it to get restarted *now* and look for the reason
 while the services are up and running

Then I hope I'm never a customer of that service that doesn't
know/care why it is failing.  I consider it a much better approach to
let your load balancing shift the connections to predictably working
servers.

 generally speaking: there is more than only mysqld on that world

 generally speaking if i restart a server i want SSH tunnels
 to them get restarted on other machines automatically, see below

Seems awkward, compared to openvpn.

 generally speaking if the OpenVPN service at the location some
 hundred kilometers away fails because of the poor internet
 connection there, I want it to be restarted

You don't have to restart openvpn to have it reconnect itself after
network outages.
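The self-healing behavior referred to here maps to a few client-side OpenVPN directives; a sketch with illustrative timing values (the file is written locally here, a real config lives under /etc/openvpn/):

```shell
# Client config fragment: ping the peer every 10 s, declare the link dead
# after 60 s of silence, then re-resolve and reconnect -- no external
# restart needed.
cat > client-reconnect.conf <<'EOF'
keepalive 10 60
persist-key
persist-tun
resolv-retry infinite
EOF
```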

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Cemtos 7 : Systemd alternatives ?

2014-07-08 Thread Les Mikesell
On Tue, Jul 8, 2014 at 3:01 PM, Mark Tinberg mtinb...@wisc.edu wrote:

 And more to the point, /usr isn't supposed t be needed until you are
 past the point of mounting all filesystems so you can boot from
 something tiny.  Doesn't modprobe need its files earlier than that?

 This work is all about being able to boot a system with just a read-only
 /usr.  Any foo you need to get to a complex filesystem, like NFS or
 encrypted software RAID needs to be in the initial ramdisk which the boot 
 loader can access before the kernel loads and which tools like Dracut build 
 based on what’s required for your particular setup.  The seeds of that change 
 basically existed from the time that initial ram disks were introduced as a 
 feature a long time ago, now we’ve just widely acknowledged this reality.

Errr, I thought you only needed stuff on the ramdisk to access the
root partition.  Can't you mount /usr from a different disk controller
or NFS from modules loaded from /lib/modules?   Or was that already
broken when user's home directories were kicked into /home?   And if
not, how did things get in that mess?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Problem running batch script from Jenkins - help please!

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 9:31 AM, funeeld...@gmail.com wrote:


 We run our build.bat on a windows box. We are moving this into jenkins. One 
 of the command line args is a UNC pathname. The batch script works fine on 
 the windows machine in a cmd window. When I execute the same command in 
 jenkins batch script, it cannot see the unc path somehow. The batch script 
 has the following command which fails when it is run in jenkins:

 @rem Test if staging area exists.
 @set BuildResultsDir=%4
 @if not exist %BuildResultsDir% (
 @echo.
 @echo. -ERROR- Arg4 StagingArea %BuildResultsDir% does not exist.
 @echo.
 @goto:eof )

 The error message in the log is: M:setlocal enabledelayedexpansion

 -ERROR- Arg4 StagingArea \\wesrdbb5\Reef7.2Sust\Nightly_Build\CM03Build does 
 not exist.

 I tried changing %BuildResultsDir% to !BuildResultsDir! with no success.  Any
 advice is welcome.

UNC paths work in general, but jenkins will be running as a different
user that probably doesn't have access.  I've only used read-only
shares that permit guest access to avoid dealing with the quirks of
windows network authentication.
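One quick way to confirm which account the job runs under is to make the first build step print it; a sketch (shown as a shell step, names are illustrative):

```shell
# Diagnostic first build step: the log will show the account the build
# actually runs under, which is determined by how the Jenkins service or
# slave agent was started -- often not the desktop user.
echo "Jenkins build user: $(whoami)"
```

On a Windows node the batch equivalent is `whoami` or `echo %USERNAME%`; if that account lacks rights on the share, the UNC path will be invisible to the build even though it works in your own cmd window.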

-- 
Les Mikesell
  lesmikes...@gmail.com

-- 
You received this message because you are subscribed to the Google Groups 
Jenkins Users group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Problem running batch script from Jenkins - help please!

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 2:32 PM,  funeeld...@gmail.com wrote:
 I am running the job as a user who has access.  Is that different from the
 user running jenkins?

Yes, the user running jenkins will depend on how the slave agent was
started.  Or the master if you aren't using slaves.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Backup aborted (NT_STATUS_ACCESS_DENIED listing \\*)

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 8:01 AM, Daniel Doughty d...@heartofamericait.com 
wrote:

 I've definitely had the -N  problem.


There was a change in some version of the underlying smbclient and the
backuppc command has to match.  But I thought the -N was already out
in the current backuppc release.   Also, unless you permit guest
access you should get an error at connection time, not when accessing the
file listing.

-- 
   Les Mikesell
 lesmikes...@gmail.com

--
Open source business process management suite built on Java and Eclipse
Turn processes into business applications with Bonita BPM Community Edition
Quickly connect people, data, and systems into organized workflows
Winner of BOSSIE, CODIE, OW2 and Gartner awards
http://p.sf.net/sfu/Bonitasoft
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC not backing up contents of certain Directories (No Error in XFER log)

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 5:26 AM, Jurie Botha jur...@taprojects.co.za wrote:


 I am having an issue with BackupPC not backing up certain files.

 I did a Full backup of a servre 2003 File Sever Share - which seemed to
 have run through fine.

 I then Copied some MP3 files to a test structure, and ran an incremental
 Backup. BackupPC backs up the directories, but none of the MP3 files
 beneath them.

 I checked the permissions, and they're fine, Since it also backed up 1.3
 TB without issue using the same permissions.

 So i guess what I want to know is this:

 1. Does BackupPC exclude MP3's by default (I have not set up ANY
 exclusions)?

 2. If not, how could I solve this?

It may be normal - if your 2nd run was an incremental and the method
used to copy the files preserved a timestamp earlier than your prior
full run.  Smb and tar backups can only use the file timestamps to
determine what to include.   That's one of the advantages of
rsync/rsyncd, which can actually compare the directories.

If that is the case, they should be included in your next full run.
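The timestamp-only selection can be reproduced with plain `find` against a marker file standing in for the last full backup (GNU `touch -d` assumed; file names are illustrative):

```shell
# Marker file stands in for the time of the last full backup:
touch -d '2014-07-02 00:00' last_full
touch -d '2014-06-01 00:00' copied.mp3   # copy that preserved an old mtime
touch -d '2014-07-03 00:00' new.mp3      # genuinely new file
# Timestamp-based selection, as tar/smb incrementals effectively do:
find . -maxdepth 1 -type f -name '*.mp3' -newer last_full
# lists only ./new.mp3 -- copied.mp3 is skipped despite being new on disk
```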

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Backup aborted (NT_STATUS_ACCESS_DENIED listing \\*)

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 9:37 AM, Greer, Jacob - District Tech
jacob.gr...@ksd.kyschools.us wrote:
 I can't seem to pinpoint the problem. -N was not set, and all of my settings
 are the same as they were on my old server where it worked fine, but the new
 BackupPC server is having issues.  I am going to reload it and see if that
 helps.  I appreciate everyone's advice.


It could be some version-related issue in smbclient, since that does
all the work.   I'd try using it interactively with the same
credentials to see if you have the same errors.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [CentOS] Miredo server for Centos 6

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 10:58 AM, Robert Moskowitz r...@htt-consult.com wrote:
 On delving deeper into Miredo support, it seems that Miredo Server is a
 separate program from the Miredo client/relay.  And that there is no
 Miredo Server available for Centos 6.  Not in EPEL 6, or repoforge.

 So far the maintainer of Miredo for Fedora/EPEL has not responded to a
 query on its status for EPEL 6.  I have to ask on repoforge about
 miredo-server.

 Anyone know of anywhere else to look?  I suppose I can bring up a Fedora
 20 box, as this is 'just' a testbed.  But to move the testbed out of my
 network into the corp testbed, I need it for Centos.


Have you tried the simple-minded approach of downloading the fedora
src rpm and doing an 'rpmbuild --rebuild' of it?  Sometimes all it
takes to make that work is installing whatever dependencies are
missing, sometimes that turns out to be difficult or impossible,
depending on required versions and conflicts.   You might have a
better chance of making this work after Centos 7 is out, though.
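The workflow sketched as a dry run (the SRPM file name is hypothetical; drop the leading `echo` on each line and run as root to do it for real):

```shell
# Hypothetical SRPM name -- substitute whatever you actually downloaded.
srpm=miredo-1.2.6-1.fc19.src.rpm
echo yum install -y rpm-build yum-utils   # build tooling
echo yum-builddep -y "$srpm"              # install missing build dependencies
echo rpmbuild --rebuild "$srpm"           # binary RPMs land under ~/rpmbuild/RPMS/
```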

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Miredo server for Centos 6

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 12:07 PM, Robert Moskowitz r...@htt-consult.com wrote:

 Have you tried the simple-minded approach of downloading the fedora
 src rpm and doing an 'rpmbuild --rebuild' of it?  Sometimes all it
 take to make that work is installing whatever dependencies are
 missing, sometimes that turns out to be difficult or impossible,
 depending on required versions and conflicts.   You might have a
 better chance of making this work after Centos 7 is out, though.

 For various reasons I lean toward installing software over doing my own
 builds.  No one else is going to do the write ups I need for
 management.

Sure, but the rpm package you get from rebuilding an existing fedora
source rpm is going to be essentially the same thing you'd get if the
maintainer built it for centos6/EPEL.   That is, all of the things
that would make it 'your' build have already been done by someone else
and coded in the spec file.   If it works...

 I have been asked to setup a testbed to show how this works
 now, and I have not seen that Miredo is any more available for Centos
 7.  Also the datacenter where my testbed would be moved to will be on
 Centos 6 for some time.

 But you might have more knowledge Miredo for Centos 7 than I do...

Not specifically about those, but just in terms of compatibility
between a fedora src rpm and the Centos environment.   A lot of things
have changed in libraries and rpm syntax between centos 6 and current
fedora so you are fairly likely to have some problems rebuilding an
unmodified src rpm.   On the other hand you should still be able to
find fedora 19 src rpms and that environment should be very similar to
Centos 7.  So the rpmbuild would be much more likely to 'just work' -
with the result also being very likely to be compatible with what
would land in EPEL if the maintainer decides to add it.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Miredo server for Centos 6

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 12:36 PM, Robert Moskowitz r...@htt-consult.com wrote:

 Not specifically about those, but just in terms of compatibility
 between a fedora src rpm and the Centos environment.   A lot of things
 have changed in libraries and rpm syntax between centos 6 and current
 fedora so you are fairly likely to have some problems rebuilding an
 unmodified src rpm.   On the other hand you should still be able to
 find fedora 19 src rpms and that environment should be very similar to
 Centos 7.  So the rpmbuild would be much more likely to 'just work' -
 with the result also being very likely to be compatible with what
 would land in EPEL if the maintainer decides to add it.

 Ah, so taking the src from remlab.net might not successfully build.
 Putting up a Centos 7 beta (I could redo rigel to C7) and getting the
 F19 source, I should be more successful.


You can try it on C6, but it may take some tweaking in the spec file.

 So where is miredo for F19?
 Not at:

 https://dl.fedoraproject.org/pub/fedora/linux/releases/19/Fedora/source/SRPMS/m/

Look under:
https://dl.fedoraproject.org/pub/fedora/linux/releases/19/Everything/source/SRPMS/m/

And you'll need to 'yum install rpm-build' if you don't have it, along
with development tools.

If you can find an archive with one that worked on fedora 13 it would
have a better chance of rebuilding on Centos 6.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Miredo server for Centos 6

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 12:53 PM, Robert Moskowitz r...@htt-consult.com wrote:


 If you can find an archive with one that worked on fedora 13 it would
 have a better chance of rebuilding on Centos 6.

 I see that https://dl.fedoraproject.org/pub/fedora/linux/releases/13/ is
 empty...

 And will at best find v 1.17 back then.

I guess google can find anything:
http://archive.fedoraproject.org/pub/fedora-secondary/releases/13/Everything/source/SRPMS/

It might not be hard to tweak the old spec file to build the newer
source.   Easier than starting from scratch anyway.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] block level changes at the file system level?

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 2:06 PM,  m.r...@5-cent.us wrote:
 Lists wrote:
 On 07/02/2014 12:57 PM, m.r...@5-cent.us wrote:
 I think the buzzword you want is dedup.
 dedup works at the file level. Here we're talking about files that are
 highly similar but not identical. I don't want to rewrite an entire file
 that's 99% identical to the new file form, I just want to write a small
 set of changes. I'd use ZFS to keep track of which blocks change over
 time.

 I've been asking around, and it seems this capability doesn't exist
 *anywhere*.

 I was under the impression from a few years ago that at least the
 then-commercial versions operated at the block level, *not* at the file
 level. rsync works at the file level, and dedup is supposed to be fancier.


Yes, basically it would keep a table of hashes of the content of
existing blocks and do something magic to map writes of new matching
blocks to the existing copy at the file system level.  Whether that
turns out to be faster/better than something like rdiff-backup would
be up to the implementations.   Oh, and I forgot to mention that there
is an alpha version of backuppc4 at
http://sourceforge.net/projects/backuppc/files/backuppc-beta/4.0.0alpha3/
that is supposed to do deltas between runs.
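The hash-table idea can be illustrated with a crude userspace sketch that hashes two file versions in fixed 4 KiB blocks; blocks whose hashes match would need no new storage (bash and `md5sum` assumed, file names illustrative):

```shell
block_hashes() {  # print one md5 per 4 KiB block of the file given as $1
  local f=$1 n=0 size
  size=$(wc -c < "$f" | tr -d ' ')
  while [ $((n * 4096)) -lt "$size" ]; do
    dd if="$f" bs=4096 skip="$n" count=1 2>/dev/null | md5sum | cut -d' ' -f1
    n=$((n + 1))
  done
}
# Two versions of a "database": first 8 KiB identical, last block differs.
printf 'A%.0s' {1..8192} > old.db
{ printf 'A%.0s' {1..8192}; printf 'changed'; } > new.db
# Count the blocks present in both versions (shareable by a dedup layer):
comm -12 <(block_hashes old.db | sort) <(block_hashes new.db | sort) | wc -l
# prints 2 -- the two unchanged 4 KiB blocks
```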

But, since this is about postgresql, the right way is probably just to
set up replication and let it send the changes itself instead of doing
frequent dumps.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] block level changes at the file system level?

2014-07-03 Thread Les Mikesell
On Thu, Jul 3, 2014 at 2:48 PM, Lists li...@benjamindsmith.com wrote:

 On 07/03/2014 12:23 PM, Les Mikesell wrote:
 But, since this is about postgresql, the right way is probably just to
 set up replication and let it send the changes itself instead of doing
 frequent dumps.

 Whatever we do, we need the ability to create a point-in-time history.
 We commonly use our archival dumps for audit, testing, and debugging
 purposes. I don't think PG + WAL provides this type of capability.

I think it does.  You should be able to have a base dump plus some
number of incremental logs that you can apply to get to a point in
time.   Might take longer than loading a single dump, though.
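The base-backup-plus-WAL approach is driven by a small recovery.conf in that era of PostgreSQL; a sketch where the archive path and the target timestamp are assumptions (the file is written locally here, for real use it goes in the restored data directory):

```shell
# recovery.conf sketch: replay archived WAL only up to the chosen moment,
# giving a point-in-time copy from one base backup plus the log segments.
cat > recovery.conf <<'EOF'
restore_command = 'cp /archive/%f %p'
recovery_target_time = '2014-07-01 00:00:00'
EOF
```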

Depending on your data, you might be able to export it as tables in
sorted order for snapshots that would diff nicely, but it is painful
to develop things that break with changes in the data schema.


 So at
 the moment we're down to:

 A) run PG on a ZFS partition and snapshot ZFS.
 B) Keep making dumps (as now) and use lots of disk space.
 C) Cook something new and magical using diff, rdiff-backup, or related
 tools.

Disk space is cheap - and pg_dumps usually compress pretty well.   But
if you have time to experiment, I'd like to know how rdiff-backup or
backuppc4 performs.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [BackupPC-users] BUG (IMHO): BackupPC_Dump on QNAP QTS 4.1

2014-07-02 Thread Les Mikesell
On Wed, Jul 2, 2014 at 5:55 AM, Paolo Basenghi paolo.basen...@fcr.re.it wrote:

 Il 30/06/2014 19:14, Les Mikesell ha scritto:

 I'm still confused as to what executing an ssh command has to do with
 the smb.conf file.  Is this running some implementation of busybox as
 a shell and combination of what would normally be independent
 commands?  Even then it doesn't make a lot of sense that it would
 involve your non-stock OptWare client which would be the part that
 doesn't understand the stock smb.conf contents.

 Also, did you really need to install the  OptWare samba client if you
 are going to use rsync for backups?
 I made some test.
 It turned out that it is the QNAP ssh and scp binaries that produce the
 warning about smb.conf parameter, not the smbclient like I guessed.
 I opened a support ticket with QNAP.

 But why does the BackupPC rsync method block until timeout upon this warning?


Does ssh work otherwise?   BackupPC isn't happy about additional text
coming ahead of the rsync startup, but that usually causes a quick
'version mismatch' error.   During the timeout period you could check
that rsync is actually running at the other end.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] backup clients from anywhere

2014-07-02 Thread Les Mikesell
On Wed, Jul 2, 2014 at 9:12 AM, Löffler Thomas J. t...@erdw.ethz.ch wrote:
 Hi,

 laptops are traveling through the world changing the (and often more than
 one) ip address. I’ve setup scripts (for os x, w7) to collect these data and
 push it into a db. Periodically, the /etc/hosts file on the backuppc server
 is updated with these data.



 Now, nmblookup doesn’t use the /etc/hosts file, but $Conf{ClientNameAlias} =
 ‘xxx.xxx.xxx.xxx’ in the /etc/backuppc/CLIENTS.pl file which can be updated
 periodically as well to the latest data.



 Are there other/simpler possibilities already implemented for clients to be
 able to backup from “foreign” networks? Is there any possibility to deal
 with the multiple ip addresses a client can have?

If the client PC user logs into the web interface to start the
backup, it will find the address.  But usually the bigger problem is
that these roaming connections are going through a NAT router or
firewall, so the server can't connect back anyway.   One solution is to
run a VPN service like OpenVPN on the client and server and configure
it to assign a known private address to each client when it is
connected - and configure backuppc to use that address (via
$Conf{ClientNameAlias}).
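A minimal sketch of the per-host override this describes (the host name and VPN address are hypothetical; the parameter is spelled $Conf{ClientNameAlias}):

```perl
# /etc/BackupPC/pc/laptop1.pl -- hypothetical per-host override.
# The VPN server is assumed to always hand this client 10.8.0.5.
$Conf{ClientNameAlias} = '10.8.0.5';
```

With a fixed tunnel address, name resolution and NAT traversal stop mattering; BackupPC simply connects to that address whenever the client's VPN is up.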

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] BUG (IMHO): BackupPC_Dump on QNAP QTS 4.1

2014-07-02 Thread Les Mikesell
On Wed, Jul 2, 2014 at 10:01 AM, Paolo Basenghi
paolo.basen...@fcr.re.it wrote:
 Il 02/07/2014 14:48, Les Mikesell ha scritto:
 Does ssh work otherwise? BackupPC isn't happy about additional text
 coming ahead of the rsync startup, but that usually causes a quick
 'version mismatch' error. During the timeout period you could check
 that rsync is actually running at the other end.

 If I'm not wrong, backuppc does not run an rsync client, but a set of
 Perl library functions instead. Perhaps it is those that block...

That is correct, but it doesn't block on other platforms unless the
remote stops sending.  But first the server-side ssh has to launch
rsync on the remote side, so you could check whether you are getting at
least that far.   It is also possible that it is working but just
extremely slow at handling compression, and thus hitting the timeout.

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] BUG (IMHO): BackupPC_Dump on QNAP QTS 4.1

2014-07-02 Thread Les Mikesell
On Wed, Jul 2, 2014 at 11:46 AM, Paolo Basenghi
paolo.basen...@fcr.re.it wrote:

 Il 02/07/2014 17:23, Les Mikesell ha scritto:
 That is correct, but it doesn't block on other platforms unless the
 remote stops sending.. But first the server-side ssh has to launch
 rsync on the remote side, so you could see if you are getting at least
 that far. It is also possible that it is working but just extremely
 slow at handling compression, and thus hitting the timeout.
 Got your point and tested.
 The rsync process at the client side does nothing (no CPU nor mem
 activity). It is idle (blocked).

 The output of the BackupPC_dump program is (only significant lines):

 Running: /usr/bin/ssh -q -x -l root caronte /usr/bin/rsync -v --server 
 --sender --numeric-ids --perms --owner --group -D --links --hard-links 
 --times --block-size=2048 --recursive . /
 Xfer PIDs are now 7567
 xferPids 7567
 Got remote protocol 1869506377
 Fatal error (bad version): Ignoring unknown parameter conn log

 I guess it is stuck because the two sides cannot agree on a protocol
 version, or perhaps the Perl library cannot handle this case correctly.
 I don't know...

The backuppc-internal rsync can't handle any extraneous messages
before the remote rsync handshake. That is a known problem but usually
triggered by a login message from the remote side.  I thought it would
exit quickly with that error, but maybe it waits for ssh to exit.  In
any case you'll have to find a way to get that message out of the data
stream.   Meanwhile you might try setting up rsync in daemon mode on
the remote side and using the rsyncd xfer method.   That would get ssh
out of the picture.
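A sketch of what that daemon-mode alternative might look like on the NAS (module name, path, and user are illustrative, not from the thread):

```ini
# /etc/rsyncd.conf on the QNAP -- illustrative values
[root]
    path = /
    read only = yes
    uid = root
    auth users = backuppc
    secrets file = /etc/rsyncd.secrets   ; contains a line like backuppc:SomeSecret
```

On the BackupPC side this would pair with $Conf{XferMethod} = 'rsyncd' and matching $Conf{RsyncdUserName}/$Conf{RsyncdPasswd}. Since no login shell is involved, the stray smb.conf warning from the QNAP ssh binary never enters the data stream.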

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Backup aborted (NT_STATUS_ACCESS_DENIED listing \\*)

2014-07-02 Thread Les Mikesell
On Wed, Jul 2, 2014 at 2:15 PM, Greer, Jacob - District Tech
jacob.gr...@ksd.kyschools.us wrote:
 I am using SMB to back up a file server.  The backup runs for several hours
 then it stops with the following error:  Backup aborted
 (NT_STATUS_ACCESS_DENIED listing \\*)  This is a clean install of Ubuntu
 running BackupPC


The error basically means what it says - the credentials you are using
aren't allowed to read the directory.  You can try using smbclient
interactively with the same target to see what you are allowed to do.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Backup aborted (NT_STATUS_ACCESS_DENIED listing \\*)

2014-07-02 Thread Les Mikesell
On Wed, Jul 2, 2014 at 2:52 PM, Greer, Jacob - District Tech
jacob.gr...@ksd.kyschools.us wrote:
 Well, that is weird; I am using an admin account, so I have access to everything.


You might get that if the 'share' is actually a junction point or
whatever it is that Windows uses as symlinks.

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [CentOS] block level changes at the file system level?

2014-07-02 Thread Les Mikesell
On Wed, Jul 2, 2014 at 2:53 PM, Lists li...@benjamindsmith.com wrote:
 I'm trying to streamline a backup system using ZFS. In our situation,
 we're writing pg_dump files repeatedly, each file being highly similar
 to the previous file. Is there a file system (EG: ext4? xfs?) that, when
 re-writing a similar file, will write only the changed blocks and not
 rewrite the entire file to a new set of blocks?

 Assume that we're writing a 500 MB file with only 100 KB of changes.
 Other than a utility like diff, is there a file system that would only
 write 100KB and not 500 MB of data? In concept, this would work
 similarly to using the 'diff' utility...

There is something called rdiff-backup
(http://www.nongnu.org/rdiff-backup/ and packaged in EPEL) that does
reverse diffs at the application level.  If it performs well enough it
might be easier to manage than a de-duping filesystem.  Or backuppc
- which would store a complete copy if there are any changes at all
between dumps but would compress them and automatically manage the
number you need to keep.

-- 
   Les Mikesell
  lesmikes...@gmail.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [BackupPC-users] Cannot edit configuration using the web interface

2014-07-01 Thread Les Mikesell
On Tue, Jul 1, 2014 at 7:35 AM, HAL9000
backuppc-fo...@backupcentral.com wrote:
 Thank you for the reply. What I ended up doing though was I just reinstalled
 it on an Ubuntu server and everything now works as it should. It turns out
 that CentOS isn't really supported.


I'm sure that comes as a surprise to all of us using the CentOS
version.  I think you do have to add the
$Conf{CgiAdminUsers}  to match the user name(s) you configured.

-- 
   Les Mikesell
 lesmikes...@gmail.com



[CentOS] gdb question

2014-07-01 Thread Les Mikesell
If you are trying to debug a core dump from a different machine, how
do you tell gdb where to look for the debug info files to match the
shared libs that you have in a non-standard location? It finds the
shared libs themselves with a 'set solib-absolute-path' directive, but
it doesn't look for the corresponding (relative) usr/lib/debug/...
file that you get if you extract the debuginfo rpm in the same place.
And if I set debug-file-directory to the top of the tree, it wants to
also add the solib-absolute-path to that. Is there some way to make it
realize that the top of the tree is the same for both?

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: [BackupPC-users] BUG (IMHO): BackupPC_Dump on QNAP QTS 4.1

2014-06-30 Thread Les Mikesell
On Mon, Jun 30, 2014 at 9:50 AM, Paolo Basenghi
paolo.basen...@fcr.re.it wrote:
 Il 30/06/2014 16:25, Holger Parplies ha scritto:
 As you can view in the log I attached in my original post, BackupPC_Dump
 produces the warning at every step.

 Do you have shell access to the device? Can you run something like

   % ssh -q -x -l root afrodite /bin/true
 You're right Holger! Tried the above command from a bash on the device
 as backuppc user and obtained the warning about unknown parameter conn
 log. If I delete the smb.conf related line, then no warning at all!

 Now I've got no time to investigate, but I will do it at soon!

Where did you delete the line in question?   A quirk of the backuppc
implementation of rsync is that it can't handle any output sent from
the remote side before the rsync program starts, although that usually
results in an error that is logged as a version mismatch.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] BUG (IMHO): BackupPC_Dump on QNAP QTS 4.1

2014-06-30 Thread Les Mikesell
On Mon, Jun 30, 2014 at 11:28 AM, Paolo Basenghi
paolo.basen...@fcr.re.it wrote:

 Il 30/06/2014 17:10, Les Mikesell ha scritto:
 Where did you delete the line in question? A quirk of the backuppc
 implementation of rsync is that it can't handle any output sent from
 the remote side before the rsync program starts, although that usually
 results in an error that is logged as a version mismatch.

 The parameter is in smb.conf on the QNAP NAS. It appeared with the last
 firmware upgrade (QTS 4.1.x).

I'm still confused as to what executing an ssh command has to do with
the smb.conf file.  Is this running some implementation of busybox as
a shell and combination of what would normally be independent
commands?  Even then it doesn't make a lot of sense that it would
involve your non-stock OptWare client which would be the part that
doesn't understand the stock smb.conf contents.

Also, did you really need to install the  OptWare samba client if you
are going to use rsync for backups?


-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Uneffective rsync transfers?

2014-06-30 Thread Les Mikesell
On Mon, Jun 30, 2014 at 2:55 AM, Holger Parplies wb...@parplies.de wrote:

 But I have to correct myself. Building an incorrect view can't be the cause,
 because then the file would simply be transferred (and logged as pool instead
 of same). Maybe we should think more closely about the rsync on the *remote
 end*, the host to be backed up. I'm not quite sure which side (of rsync) is
 responsible for doing the file list comparison, and I haven't really got the
 time to look right now, but failing to detect/signal unchanged files would
 result in exactly what you see.

I think what should be happening is that the remote side sends the
equivalent of a full directory tree listing, then both the backuppc
server and the remote walk the list comparing the contents with block
checksums.  In the case of fulls every file should be checked and
files that are the same should be logged as 'same' and linked into the
new backup tree.   This behavior doesn't make sense at all, but I
don't see how the remote side could miss sending the directory listing
for the files in the last backup, and then the server side should log
them as 'same', 'create', or 'pool'.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] BUG (IMHO): BackupPC_Dump on QNAP QTS 4.1

2014-06-28 Thread Les Mikesell
On Sat, Jun 28, 2014 at 6:04 AM, Paolo Basenghi
paolo.basen...@fcr.re.it wrote:
 Hello,
 I installed BackupPC 3.3.0 on a QNAP NAS with the latest QNAP OS (4.1),
 following the QNAP/OptWare instructions.

 QNAP customizes some daemons (like Samba), adding new non-Samba conf
 parameters, so the OptWare smbclient utilized by backuppc returns warnings
 like "ignoring unknown parameter 'conn log'". QNAP does not install an
 smbclient, so I must use the OptWare version, which does not understand the
 customized QNAP parameters.
 That warning message causes backups to fail, even though it is only a
 warning, and even with rsync backups (Samba should not be used).

Smbclient will default to reading smb.conf from the standard location,
but you could override that by adding '-s /path/to/your/smb.conf'  to
the SmbClient related commands.
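One hedged way to wire that in, assuming the per-host config file is evaluated after the defaults so the append works (the clean smb.conf path is hypothetical):

```perl
# /etc/BackupPC/pc/<host>.pl -- hypothetical override: point the
# OptWare smbclient at a stock smb.conf without QNAP's extra keys.
$Conf{SmbClientFullCmd} .= ' -s /opt/etc/samba/smb.conf';
```

The same '-s' flag would need adding to $Conf{SmbClientRestoreCmd} if restores over smb are used.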

 I attach BackupPC_Dump -i -v output.

 On the CGI console I see the backup jobs (all rsync based) in running (but
 idle) state for several hours before a timeout interrupts them.

 This appears to be a backuppc bug because:

 - a warning message causes the backup to fail
 - a samba configuration problem causes rsync-based backups to hang and fail.

I'm missing something here.   The smb config has nothing to do with
rsync-based xfers.   Does the xfer log show any files copied?   Maybe
the box is just very slow at compressing files and was not finished
when the (configurable) timeout happened.

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Uneffective rsync transfers?

2014-06-28 Thread Les Mikesell
On Sat, Jun 28, 2014 at 3:31 AM, František Starý franti...@stary.net wrote:

 Every file should show at least 2 links.  One for the name in the pc
 tree where you are looking, and one under cpool where the name is a
 hash of the contents.  But those files show a lot more.  If they
 aren't being linked into the next backup number's tree (like they
 should be), where are they?  And if they are, why can't you see them?

 The count of hard links is high because all the testfiles have the same
 content and I have more backups and more testhosts. It's proof that the
 pooling works. But the merging of backups made using the rsync xfer
 method does not (although the tar xfer method works well). Look at
 backup number 1. It is a full backup, but contains only the newly added
 file.

That is the way an incremental should look at the filesystem level,
but the web view should fill it in with the contents of the backing
full.  Fulls should be relinking the whole tree.

 The backup number 2 contains only a changes against the backup 1.
 And the backup number 3 only changes against the backup 2. The source
 directory has not changed during the backups 1,2,3...

I think you are the first person to report anything like that, at
least without seeing errors in the logs.   I don't know where to start
to debug it.  I'd be inclined to start with a fresh install in a VM to
get something that works normally and then try to compare the systems.

-- 
   Les Mikesell
  lesmikes...@gmail.com



Re: [BackupPC-users] Uneffective rsync transfers?

2014-06-27 Thread Les Mikesell
On Fri, Jun 27, 2014 at 12:59 AM, František Starý franti...@stary.net wrote:
 A different possibility would be that the code for building a view of the last
 backup is not working correctly for you, though I can't even begin to imagine
 why it wouldn't.
 I'm sure hardlinks for my testhost are created successfully:
 root@morgan:~# ls -l /var/lib/backuppc/pc/testhost/0/f%2ftestdir%2f/
 total 28
 -rw-r-  4 backuppc backuppc   48 Jun 22 11:56 attrib
 -rw-r- 36 backuppc backuppc 4596 Jun 22 11:56 ftestfile1
 -rw-r- 36 backuppc backuppc 4596 Jun 22 11:56 ftestfile2
 -rw-r- 36 backuppc backuppc 4596 Jun 22 11:56 ftestfile3


Every file should show at least 2 links.  One for the name in the pc
tree where you are looking, and one under cpool where the name is a
hash of the contents.  But those files show a lot more.  If they
aren't being linked into the next backup number's tree (like they
should be), where are they?  And if they are, why can't you see them?

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: Does Jenkins not support complex build workflows?

2014-06-23 Thread Les Mikesell
On Sun, Jun 22, 2014 at 7:37 PM,  johnjay76...@gmail.com wrote:

 I have to believe it's possible, because I've heard of multiple
 enterprise-level organizations using Jenkins to automate their build
 processes over the years, but I can't see how they'd accomplish that without
 the basic functionality this new plugin seems to provide.

We aren't quite at that scale with jenkins but we generally build/test
libraries with independent release schedules with the binaries
committed to a subversion repository, and the consuming projects
individually advance their svn externals as they are ready for new
features. So there is never a need to rebuild everything at once
within a job.

-- 
   Les Mikesell
  lesmikes...@gmail.com

-- 
You received this message because you are subscribed to the Google Groups
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [BackupPC-users] Dificulties

2014-06-23 Thread Les Mikesell
On Mon, Jun 23, 2014 at 10:04 AM, Francisco Suarez
franci...@wihphotels.com wrote:
 installation working and configured a test backup of another server.
 What could be the problem?

 Error:

 full backup started for directory downloads
 Connected to xx.xxx.xxx.xx:873, remote version 31
 Negotiated protocol version 28
 Error connecting to module downloads at xx.xxx.xxx.xx:873: auth required,
 but service downloads is open/insecure

 Got fatal error during xfer (auth required, but service downloads is
 open/insecure)
 Backup aborted (auth required, but service downloads is open/insecure)

This means you configured backuppc to use rsyncd with a login and
password but you configured an rsync to run in standalone/daemon mode
on the target host without requiring a login/password.
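The two sides have to agree: either the daemon requires a login that BackupPC supplies, or neither side uses one. A sketch of the authenticated variant (path and user name are illustrative, not from the thread):

```ini
# /etc/rsyncd.conf on the target host -- illustrative values
[downloads]
    path = /srv/downloads
    read only = yes
    auth users = backuppc                ; demand a login ...
    secrets file = /etc/rsyncd.secrets   ; ... checked against a backuppc:SomeSecret line
```

Alternatively, clear $Conf{RsyncdUserName} and $Conf{RsyncdPasswd} in BackupPC and leave the module open, though that is only sensible on a trusted network.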

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Dificulties

2014-06-20 Thread Les Mikesell
On Fri, Jun 20, 2014 at 10:20 AM, Francisco Suarez
franci...@wihphotels.com wrote:
 Hello there,

 I'm having much difficulty getting backuppc to open the web interface.

 http://host/backuppc/

 The webserver has been configured and when I request the url on the browser
 it downloads what I believe is a cgi.

That means apache really isn't configured with a scriptalias or
handler for the cgi.


 Using Ubuntu 14 and the XAMPP LAMP distro.


What do you mean by xampp lamp?   Ubuntu should have a package in the
distribution repository.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Dificulties

2014-06-20 Thread Les Mikesell
On Fri, Jun 20, 2014 at 11:57 AM, Francisco Suarez
franci...@wihphotels.com wrote:
 Thomas,

 This is my apache config file
 http://pastebin.com/krhVYj3h

 Not sure what I'm missing here and any help will be super appreciated.

 Looks like fast cgi is enabled in PHP.

Backuppc is not PHP.   Are you using the ubuntu deb package?  It
should set the web server up for you, but I'm only familiar with the
rpm version.

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Dificulties

2014-06-20 Thread Les Mikesell
On Fri, Jun 20, 2014 at 12:25 PM, Francisco Suarez
franci...@wihphotels.com wrote:
 Les, yes I'm using Ubuntu 14.04. Would you recommend another Linux distro
 instead, on which installing backuppc would be simpler and more reliable?
 Where could I get the rpm from?

 For some reason Apache doesn't want to execute the cgi and is downloading
 it in the browser.


Here's something that turned up in a google search:
https://bugs.launchpad.net/ubuntu/+source/backuppc/+bug/1243476

I use the CentOS distro myself with the backuppc package from the EPEL
yum repository - specifically because they rarely update things in
non-backward compatible ways to minimize that kind of breakage.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Dificulties

2014-06-20 Thread Les Mikesell
On Fri, Jun 20, 2014 at 2:14 PM, Francisco Suarez
franci...@wihphotels.com wrote:
 Les, I'm trying now with CentOS. I have installed Apache and the rpm for
 backuppc you suggested. I think the problem is Apache; the same issue occurs
 within CentOS, as it's returning text instead of the web interface. Unsure
 what could be the issue.


Are you using the /BackupPC URL as set up by the ScriptAlias in
/etc/httpd/conf.d/BackupPC.conf?  You'll also need to set up
authentication with the htpasswd command mentioned there in a comment
near the top.
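The relevant part of that file looks roughly like this in the EPEL package (exact paths can vary by version, so treat this as a sketch rather than the packaged file):

```apache
# /etc/httpd/conf.d/BackupPC.conf -- approximate shape of the stock EPEL file
ScriptAlias /BackupPC /usr/share/BackupPC/sbin/BackupPC_Admin
<Directory /usr/share/BackupPC/sbin/>
    AuthType Basic
    AuthName "BackupPC"
    AuthUserFile /etc/BackupPC/apache.users
    require valid-user
</Directory>
```

The matching account is created with something like `htpasswd -c /etc/BackupPC/apache.users backuppc`, as the comment at the top of the packaged file describes.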

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Dificulties

2014-06-20 Thread Les Mikesell
On Fri, Jun 20, 2014 at 3:19 PM, Francisco Suarez
franci...@wihphotels.com wrote:
 Les, I got it working on CentOS, thanks a lot. I configured Apache with all
 dependencies.


You may also need the perl-suidperl package if you have trouble with
permissions.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Dificulties

2014-06-20 Thread Les Mikesell
On Fri, Jun 20, 2014 at 3:48 PM, Francisco Suarez
franci...@wihphotels.com wrote:
 Apache is letting me open the web console without asking for authentication.
 Which file should I edit to enable 'backuppc' user to be entered?


It should be in /etc/httpd/conf.d/BackupPC.conf.   The 'require
valid-user' line in the
<Directory /usr/share/BackupPC/sbin/>
section.

Note that your browser will cache your credentials if you haven't
completely closed it, though.

-- 
  Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Dificulties

2014-06-20 Thread Les Mikesell
On Fri, Jun 20, 2014 at 3:51 PM, Francisco Suarez
franci...@wihphotels.com wrote:
 Problem Solved:

 I had previously removed this line in BackupPC.conf, which caused the
 issue of non-authentication:

 require valid-user


If you haven't found this part yet, you want to set
$Conf{CgiAdminUsers} in /etc/BackupPC/config.pl to match the user
name(s) that will have admin rights.
Then you can do everything else in the web interface.
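For example (user names illustrative; the value is a space-separated list):

```perl
# /etc/BackupPC/config.pl -- grant full web-interface rights to the
# Apache login(s) created with htpasswd.
$Conf{CgiAdminUsers} = 'backuppc alice';
```
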

-- 
  Les Mikesell
 lesmikes...@gmail.com



Re: Coverity plugin

2014-06-12 Thread Les Mikesell
On Thu, Jun 12, 2014 at 10:33 AM, Ginga, Dick
dick.gi...@perkinelmer.com wrote:
 I have run Coverity with Jenkins. The latest plugin is better than previous
 ones but still not real robust.


We just do a batch/script that runs cov-build with the appropriate
command line options, then cov-analyze, then cov-commit.  The latest
version of the plugin didn't work with our (not-latest) Coverity
server.  I did do some tests with a back-rev plugin, and having a
pick-list of projects from the server was nice, but it was confusing
how the plugin options actually mapped to what it would do with
cov-build, etc.
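The batch/script approach described above can be sketched roughly as below. The server host and stream name are hypothetical, and the exact flag spellings (cov-commit-defects in particular) should be checked against your Coverity version's documentation; this is a sketch, not a drop-in job step.

```shell
# Jenkins batch step sketch: wrap the real build with cov-build,
# analyze the intermediate results, then commit them to the server.
# Host and stream names ("coverity.example.com", "myproject") are
# hypothetical placeholders.
IDIR=cov-int    # intermediate results directory

if command -v cov-build >/dev/null 2>&1; then
    cov-build   --dir "$IDIR" make all
    cov-analyze --dir "$IDIR"
    cov-commit-defects --dir "$IDIR" \
        --host coverity.example.com --stream myproject
else
    echo "Coverity tools not on PATH; nothing to do"
fi
```

Driving the tools directly like this sidesteps the question of how the plugin maps its options onto cov-build, at the cost of maintaining the script yourself.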

-- 
   Les Mikesell
 lesmikes...@gmail.com

-- 
You received this message because you are subscribed to the Google Groups 
Jenkins Users group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [BackupPC-users] rsync file is vanishing

2014-06-12 Thread Les Mikesell
On Thu, Jun 12, 2014 at 3:42 PM, Gerald Brandt g...@majentis.com wrote:
 On 2014-06-12, 3:28 PM, backu...@kosowsky.org wrote:

 The files are still there, but I checked the original filesystem, not
 the shadow copy.  The shadow has already been deleted.


You could try removing the script that deletes the shadow copy after
the backup completes and check later to see if it is still there with
files the backup says went missing.

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [CentOS] Google chrome vs network settings proxy?

2014-06-12 Thread Les Mikesell
On Wed, Jun 11, 2014 at 11:22 PM, Gé Weijers g...@weijers.org wrote:
 On Wed, Jun 11, 2014 at 11:10 AM, Les Mikesell lesmikes...@gmail.com
 wrote:

  However, I can start a chrome connection to gmail and it just goes
 direct (which happens to work, I just prefer the proxy which will use
 a different outbound route).   If I go to any non-google site, it uses
 the proxy and will pop up the expected authentication dialog on the
 first connection.   Does anyone know (a) why it bypasses the proxy
 when going to a google site, (b) why it doesn't have its own internal
 proxy settings, or (c) how to fix it?


 Did you configure the proxy for HTTPS? Gmail uses HTTPS exclusively these
 days, the certificate is pinned (hard coded) in Chrome to prevent spoofing,
 maybe the protocol is too. Time for 'tcpdump'?

Yes, that turned out to be the problem.  I had only set http in the
system settings, and must have bookmarked/saved the https URL so it
didn't even need the initial redirect.
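The system "Network Proxy" panel has separate fields per scheme, and the same gotcha applies on the command line: the HTTPS proxy has to be set explicitly alongside the HTTP one. A minimal sketch, with a hypothetical squid host and port:

```shell
# Set the proxy for *both* schemes; the squid host/port here are
# hypothetical. HTTPS traffic goes through the same proxy as a
# CONNECT tunnel, so both variables can point at the same place.
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
export no_proxy="localhost,127.0.0.1"
```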

-- 
   Les Mikesell
  lesmikes...@gmail.com
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Google chrome vs network settings proxy?

2014-06-12 Thread Les Mikesell
On Thu, Jun 12, 2014 at 10:33 AM, Billy Crook bcr...@riskanalytics.com wrote:
 Makes me wonder what happens if a site uses spdy://


I'd expect that to be the case for chrome talking to gmail.  But it is
supposed to run over https://.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Les Mikesell
On Thu, Jun 12, 2014 at 6:45 AM, Timothy Murphy gayle...@eircom.net wrote:

 Does XFS have any advantages over ext4 for normal users, eg with laptops?
 I've only seen it touted for machines with enormous disks, 200TB plus.

 It is generally better at handling a lot of files - faster
 creation/deletion when there are a large number in the same directory.

 I'm wondering if, for the home user, BackupPC would be a good test of that?
 Otherwise I can't think of a case where I would have a very large number
 of files in the same directory.

There are users on the backuppc list that recommend XFS - but for
'home' size systems it probably doesn't matter that much.

 The only down side for a long time has been on 32bit machines where
 the RH default 4k kernel stacks were too small.

 Do you mean that that is a down side of XFS, or ext4?

XFS - it needs more kernel stack space.  Red Hat's choice to configure the
kernel for 4k stacks on 32bit systems is probably the reason XFS
wasn't the default filesystem in earlier versions.  And now that I
think of it, this may be an issue again if CentOS revives 32bit
support.

 Does XFS have the same problems that LVM has if there are disk faults?

 You can't really expect any file system to work if the disk underneath
 is bad.  Raid is your friend there.

 In my meagre experience, when a disk shows signs of going bad
 I have been able to copy most of ext3/ext4 disks before complete failure,
 while LVM disks have been beyond (my) rescue.
 Actually, this was in the time of SCSI disks,
 which seemed quite good at giving advance warning of failure.

I'm not sure what controls the number of soft retries before giving up
at the hardware layer.  My only experience is that with RAID1 pairs a
mirror drive seems to get kicked out at the first hint of an error but
the last remaining drive will try much harder before giving up.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Information Week: RHEL 7 released today

2014-06-12 Thread Les Mikesell
On Thu, Jun 12, 2014 at 12:27 PM, Warren Young war...@etr-usa.com wrote:


 [*] The absolute XFS filesystem size limit is about 8 million terabytes,


Isn't there some ratio of RAM to filesystem size (or maybe number of
files or inodes) that you need to make it through an fsck?

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: [CentOS] Information Week: RHEL 7 released today

2014-06-11 Thread Les Mikesell
On Wed, Jun 11, 2014 at 8:11 AM, Timothy Murphy gayle...@eircom.net wrote:
 m.r...@5-cent.us wrote:

 Red Hat released the 7.0 version of Red Hat Enterprise Linux today, with
 embedded support for Docker containers and support for direct use of
 Microsoft's Active Directory. The update uses XFS as its new file system.

 Does XFS have any advantages over ext4 for normal users, eg with laptops?
 I've only seen it touted for machines with enormous disks, 200TB plus.

It is generally better at handling a lot of files - faster
creation/deletion when there are a large number in the same directory.
   The only down side for a long time has been on 32bit machines where
the RH default 4k kernel stacks were too small.

 Does XFS have the same problems that LVM has if there are disk faults?

You can't really expect any file system to work if the disk underneath
is bad.  Raid is your friend there.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] yum install to a portable location

2014-06-11 Thread Les Mikesell
On Wed, Jun 11, 2014 at 11:26 AM, Dan Hyatt dhy...@dsgmail.wustl.edu wrote:
 I have googled, read the man page, and such.

 What I am trying to do is install applications to a NFS mounted drive,
 where the libraries and everything are locally installed on that
 filesystem so that it is portable across servers (I have over 100
 servers which each need specific applications installed via yum and we
 do not want to install 100 copies).

 We tried the yum relocate and it was not available on Centos6.4

 and
 yum --nogpgcheck localinstall R-3.1.0-5.el6.x86_64

 I want the binaries and all dependencies in the application filesystem
 which is remote mounted on all servers.

I don't think there is a generic way of doing this, since yum can
pretty much install anything anywhere and run postinstall scripts
that might only work on the host where it runs.   However, for typical
things it might work to install everything on one host, export / from
it, and mount it somewhere on the other systems, with PATH and
LD_LIBRARY_PATH tweaked to search the bin, sbin, usr/bin, usr/sbin and
equivalent library paths on the mountpoint after the local locations.
Things like perl modules will probably become a horrible mess,
though, and other things might need a splattering of symlinks to work.

Might be easier to PXE boot into an NFS-mounted root with DRBL or the
like if you need to save disk space.   On the other hand, it's really
not that hard to tell yum on 100 machines to install the package.
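The PATH/LD_LIBRARY_PATH tweak described above might look something like the following on each client. The mountpoint /mnt/shared-root is a hypothetical name; the key point is that the shared locations are appended after the local ones, so local binaries and libraries still win.

```shell
# Sketch: search the NFS-mounted root's bin/lib directories after
# the local ones. /mnt/shared-root is a hypothetical mountpoint for
# the exported / of the host where the packages were installed.
NFSROOT=/mnt/shared-root
export PATH="$PATH:$NFSROOT/bin:$NFSROOT/sbin:$NFSROOT/usr/bin:$NFSROOT/usr/sbin"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$NFSROOT/lib64:$NFSROOT/usr/lib64"
```

This would typically go in a profile.d snippet so every login shell picks it up; anything that hardcodes paths (perl modules, postinstall-generated configs) will still need per-case fixes or symlinks.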

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Information Week: RHEL 7 released today

2014-06-11 Thread Les Mikesell
On Wed, Jun 11, 2014 at 11:56 AM, Mike Hanby mha...@uab.edu wrote:

 I wonder if the certification for 6.x will still be viable and
 available? And for how long? (sigh!) Guess I now have to find a RHEL
 7 certification guide!... LOL!
 RHCSA is current for three (3) years from the date it was earned.
 http://www.redhat.com/training/certifications/recertification.html


On a slightly related note, is there anything, anywhere that would
show the differences in the documentation between versions?
Something like one of those side-by-side color coded diff listings
that you can do with source code would be ideal to flip through on a
wide screen.

-- 
Les Mikesell
  lesmikes...@gmail.com


[CentOS] Google chrome vs network settings proxy?

2014-06-11 Thread Les Mikesell
I started using chrome on the RHEL7 beta after some firefox hangs -
but maybe this behavior is generic. I have the system 'network
settings/network proxy' set to use squid on another host for
connections out of the private range we use.   This proxy requires
authentication so I can always tell the first time a browser uses it.
 However, I can start a chrome connection to gmail and it just goes
direct (which happens to work, I just prefer the proxy which will use
a different outbound route).   If I go to any non-google site, it uses
the proxy and will pop up the expected authentication dialog on the
first connection.   Does anyone know (a) why it bypasses the proxy
when going to a google site, (b) why it doesn't have its own internal
proxy settings, or (c) how to fix it?

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: [CentOS] Analyzing the MBR

2014-06-05 Thread Les Mikesell
On Thu, Jun 5, 2014 at 4:00 PM, John R Pierce pie...@hogranch.com wrote:
 On 6/5/2014 1:56 PM, m.r...@5-cent.us wrote:
 mkpart pri 0.0GB x.GB
 *always*  gives me aligned partions (and parted - talk about user hostile
 programs! Not aligned, with not a clue as to what it actually wants)

 I've taken to always running parted with -a none, as its alignment rules
 are based on old concepts like cylinders and heads, which are meaningless
 and even WRONG on today's storage devices. (255 heads, 63 sectors is not
 uncommon; this means everything at a cylinder or head boundary is on an ODD
 sector boundary, ouch!)


New bigger disks may use 4k physical sectors but report 512 for
backwards compatibility.  If you don't write 4 contiguous sectors it
has to read, wait for the disk to spin back around, then write,
merging in what you did write.  Which means writes will be very slow
if your partitions are not aligned on 4k boundaries. I haven't had to
deal with many of these yet so I've mostly just installed gparted from
epel and used its defaults rather than doing the math myself.
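The math in question is small: with 512-byte logical sectors, a 4 KiB physical sector spans 8 of them, so a partition start just needs rounding up to a multiple of 8. A quick sketch of the arithmetic gparted does for you:

```shell
# Round a proposed start sector up to the next 4 KiB boundary
# (8 x 512-byte logical sectors per 4 KiB physical sector).
# The old CHS-style default start of sector 63 is misaligned;
# the next aligned sector is 64.
start=63
aligned=$(( (start + 7) / 8 * 8 ))
echo "$aligned"    # prints 64
```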

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: [CentOS] Analyzing the MBR

2014-06-05 Thread Les Mikesell
On Thu, Jun 5, 2014 at 5:06 PM, John R Pierce pie...@hogranch.com wrote:
 On 6/5/2014 2:50 PM, Les Mikesell wrote:
 New bigger disks may use 4k physical sectors but report 512 for
 backwards compatibility.  If you don't write 4 contiguous sectors it
 has to read, wait for the disk to spin back around, then write,
 merging in what you did write.  Which means writes will be very slow
 if your partitions are not aligned on 4k boundaries. I haven't had to
 deal with many of these yet so I've mostly just installed gparted from
 epel and used its defaults rather than doing the math myself.

 some new disks even report they are 4K and allow 4K sector operations,
 which file systems like XFS support natively.   This is going to be
 increasingly common going forwards.

 [btw, that's 8 consecutive 512B sectors for 4K]


Ummm, yeah. That's why I let gparted do the math.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: [CentOS] Analyzing the MBR

2014-06-05 Thread Les Mikesell
On Thu, Jun 5, 2014 at 6:24 PM, Timothy Murphy gayle...@eircom.net wrote:
 Les Mikesell wrote:

 Ummm, yeah. That's why I let gparted do the math.

 But even gparted leaves some maths to be done,
 eg since it uses MiB's it seems logical to use GiB's
 which means difficult calculations like 80x1024 = ?


No, usually I'm just adding the whole thing as one partition so
however it wants the units it is still the default.

-- 
   Les Mikesell
lesmikes...@gmail.com

