Re: [Discuss] Thin Provisioned LVM

2015-03-11 Thread markw
 On 3/10/2015 11:09 PM, ma...@mohawksoft.com wrote:
 There are some very good reasons to NOT use ZFS, but this isn't the
 discussion I intended to start.

 Then all I will say on this subject at this time is that your problems
 with ZFS seem to fall under "you're doing it wrong." ZFS best practices
 are thoroughly documented and those documents do address your complaints
 about ZFS.

Yes, please, oh please, post some links that describe best practices
addressing my complaints, because I have never known anyone who has been
able to find them. Yes, there are some that claim to fix these problems,
but they either don't really fix them or they completely dismiss the
architecture of the application.

Remember, a lot of very high quality, very high performance applications
are designed to run on very thin disk layers, i.e. LVM, RAID, etc. ZFS
introduces I/O overhead, latency, memory requirements, CPU utilization, and
other resource demands that are otherwise not desirable in a product. A
high performance application that is bottlenecked by I/O and I/O latency
will run faster against a raw disk than it will against a zvol or a file
in a ZFS pool.



 In re Linux LVM, well, it comes as no surprise to me that the thin
 provisioning mechanism feels like a bolted-on hack. LVM always felt
 unfinished to me compared to other offerings like AdvFS, VxVM, even the
 volume manager that IBM created to support JFS (IBM's tools and internal
 consistency made up for a lot of the shortcomings in AIX). I used LVM
 not because it was good but because it was the only volume manager that
 Linux had. These days I try to avoid using LVM for anything other than
 basic OS volumes.

 --
 Rich P.
 ___
 Discuss mailing list
 Discuss@blu.org
 http://lists.blu.org/mailman/listinfo/discuss



___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] magnet links in bittorrent question

2015-03-11 Thread Eric Chadbourne
Exactly what I was looking for.  Thanks, John!

—
Eric Chadbourne
http://Nonprofit-CRM.org/


 On Mar 10, 2015, at 8:56 PM, John Abreau j...@blu.org wrote:
 
 This url seems to explain how they work. Essentially, there's a hardwired 
 address that's used to bootstrap a cache of nearby peers if it doesn't yet 
 know of any peers. 
 
 http://stackoverflow.com/questions/3844502/how-do-bittorrent-magnet-links-work
 
 On Tue, Mar 10, 2015 at 7:39 PM, Eric Chadbourne eric.chadbou...@icloud.com 
 wrote:
 Hi All,
 
 My stumbling block is magnet links.  I don’t see any PHP libraries that can 
 do what I want (including the one I wrote long ago; they all seem to use 
 announce URLs and torrent files) so I will just write my own.  But I really 
 don’t know where to start.  Sites like this appear to be ancient:  
 http://magnet-uri.sourceforge.net/  Any tips on something fresher and with 
 more technical detail?
 
 I want to write my own bittorrent tracker from scratch in PHP.  It will not 
 upload or download anything; it will merely track torrents submitted via 
 magnet link.  What do you think?  Has somebody already done this under a free 
 software license?
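 
 For what it's worth, a rough sketch (in shell rather than PHP, with a
 placeholder hash) of the one piece a magnet-only tracker has to extract,
 namely the info-hash in the xt=urn:btih: field:
 
     # extract the info-hash from a magnet URI (placeholder hash, sketch only)
     magnet='magnet:?xt=urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a&dn=example'
     infohash=$(printf '%s\n' "$magnet" | sed -n 's/.*xt=urn:btih:\([A-Za-z0-9]*\).*/\1/p')
     echo "$infohash"   # 40-char hex (or 32-char base32) key to store peer lists under
 
 The tracker would then just map that hash to the list of peers announcing it.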
 
 Thanks,
 
 —
 Eric Chadbourne
 http://Nonprofit-CRM.org/
 
 ___
 Discuss mailing list
 Discuss@blu.org
 http://lists.blu.org/mailman/listinfo/discuss
 
 
 
 -- 
 John Abreau / Executive Director, Boston Linux & Unix
 Email j...@blu.org / WWW http://www.abreau.net / PGP-Key-ID 0x920063C6
 PGP-Key-Fingerprint A5AD 6BE1 FEFE 8E4F 5C23  C2D0 E885 E17C 9200 63C6
 

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Thin Provisioned LVM

2015-03-11 Thread Richard Pieri

On 3/11/2015 3:04 PM, ma...@mohawksoft.com wrote:
> Again, like I said, these do not address the problems. Specifically, the
> post about sparse volumes says nothing about how to keep a ZFS pool from
> growing out of control on a sparse volume presented to it from a SAN.

You probably have deduplication turned off.

> It merely says "give ZFS whole disks," which is stupid.

Contrary to your assertion, giving ZFS whole disks is smart. Trying to 
manage ZFS storage at any level other than whole disks is stupid.

> The performance best practices show how to improve performance on ZFS,
> but not how to make the performance on ZFS equivalent to much thinner
> volume management.

I don't see how you can search for "zfs sparse volume performance 
tuning" and not get Oracle's documentation on ZFS sparse volume 
performance tuning. That's the first hit that I get.


--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] pulse files in /tmp on RHEL 6

2015-03-11 Thread Richard Pieri

On 3/11/2015 2:49 PM, Jerry Feldman wrote:
> I have removed all pulseaudio packages except pulseaudio-gdm-hooks.x86_64
> 0:0.9.21-17.el6, which is needed for gdm, but the directories are still
> being created.

Those are used for PulseAudio sockets. The idea behind not cleaning them 
up is that when a user logs in again he can re-use his existing 
directory. The pile of turds in /tmp means that the PulseAudio support 
is working as designed, according to the PulseAudio devs:


http://lists.freedesktop.org/archives/pulseaudio-discuss/2012-April/013415.html

--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Thin Provisioned LVM

2015-03-11 Thread markw
 On 3/11/2015 1:13 PM, ma...@mohawksoft.com wrote:
 Yes, please, oh please, put some links that describe best practices
 that
 address my complaints as there are none that anyone I have ever known

 http://lmgtfy.com/?q=zfs+best+practices+memory
 http://lmgtfy.com/?q=zfs+best+practices+database
 http://lmgtfy.com/?q=zfs+best+practices+sparse+volumes

Again, like I said, these do not address the problems. Specifically, the
post about sparse volumes says nothing about how to keep a ZFS pool from
growing out of control on a sparse volume presented to it from a SAN. It
merely says "give ZFS whole disks," which is stupid.
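
For anyone following along, the "sparse volume" those posts describe is just
a zvol created with -s, i.e. with no refreservation. A minimal sketch, with
made-up pool and volume names:

    # regular zvol: the full 2TB is reserved in the pool up front
    zfs create -V 2T tank/vm001
    # sparse zvol (-s): refreservation=none, space is consumed only as blocks are written
    zfs create -s -V 2T tank/vm002

That distinction is about what ZFS reserves when it is serving a volume,
which is a different question from how ZFS behaves as the consumer of a thin
LUN, the case described above.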

The performance best practices show how to improve performance on ZFS,
but not how to make ZFS performance equivalent to much thinner
volume management.

ZFS has a lot of good qualities for a number of applications, but it is
just bad for a lot of other applications.



 Was that so hard?

Yes, because it didn't have any usable information.

 --
 Rich P.
 ___
 Discuss mailing list
 Discuss@blu.org
 http://lists.blu.org/mailman/listinfo/discuss



___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


[Discuss] pulse files in /tmp on RHEL 6

2015-03-11 Thread Jerry Feldman
I have not been able to find a satisfactory answer online yet.

The /tmp directory fills up with many pulse directories such as
'pulse-zZRmb3xqyy69'

I have removed all pulseaudio packages except pulseaudio-gdm-hooks.x86_64
0:0.9.21-17.el6, which is needed for gdm, but the directories are still
being created.

The problem is that while these directories are empty, they take up an
inode slot and eventually cause "file system full" messages. They are hard
to remove because when there are too many of them they cannot be removed
from the command line directly; I just remove /tmp and recreate it.

It tends to occur for logged-in users whether they come in from ssh or
from VNC when a vnc server is running for a specific user. (I leave a
vncserver up for each member of the team.) In the 20 minutes since I
cleaned them all out there are over 3000 of them, with only 10 vncservers
running and only 1 active user: me.

I could set up a daily cron job just to remove these directories, but that
is only a workaround.
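
A daily cron entry along these lines would do it (a sketch; it only touches
empty pulse-* directories more than a day old, and the schedule is arbitrary):

    # root crontab: prune empty, day-old pulse-* dirs from /tmp at 04:10
    10 4 * * * find /tmp -maxdepth 1 -type d -name 'pulse-*' -mtime +1 -empty -exec rmdir {} + 2>/dev/null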


-- 
Jerry Feldman g...@blu.org
Boston Linux and Unix
PGP key id:B7F14F2F
PGP Key fingerprint: D937 A424 4836 E052 2E1B  8DC6 24D7 000F B7F1 4F2F


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] pulse files in /tmp on RHEL 6

2015-03-11 Thread John Abreau
Argument list too long? Try find(1):

find /tmp -type d -name pulse\* -mtime +1 -print0 | xargs -0 rm -rf

If a granularity of a day is too large:

touch -d '5 minutes ago' foo && \
find /tmp -type d -name pulse\* -newer foo -print0 | xargs -0 rm -rf && \
rm -f foo


On Wed, Mar 11, 2015 at 2:49 PM, Jerry Feldman g...@blu.org wrote:

 I have not been able to find a satisfactory answer online yet.

 The /tmp directory fills up with many pulse directories such as
 'pulse-zZRmb3xqyy69'

 I have removed all pulseaudio packages except pulseaudio-gdm-hooks.x86_64
 0:0.9.21-17.el6, which is needed for gdm, but the directories are still
 being created.

 The problem is that while these directories are empty, they take up an
 inode slot and eventually cause "file system full" messages. They are hard
 to remove because when there are too many of them they cannot be removed
 from the command line directly; I just remove /tmp and recreate it.

 It tends to occur for logged-in users whether they come in from ssh or
 from VNC when a vnc server is running for a specific user. (I leave a
 vncserver up for each member of the team.) In the 20 minutes since I
 cleaned them all out there are over 3000 of them, with only 10 vncservers
 running and only 1 active user: me.

 I could set up a daily cron job just to remove these directories, but that
 is only a workaround.


 --
 Jerry Feldman g...@blu.org
 Boston Linux and Unix
 PGP key id:B7F14F2F
 PGP Key fingerprint: D937 A424 4836 E052 2E1B  8DC6 24D7 000F B7F1 4F2F


 ___
 Discuss mailing list
 Discuss@blu.org
 http://lists.blu.org/mailman/listinfo/discuss




-- 
John Abreau / Executive Director, Boston Linux & Unix
Email: abre...@gmail.com / WWW http://www.abreau.net / PGP-Key-ID 0x920063C6
PGP-Key-Fingerprint A5AD 6BE1 FEFE 8E4F 5C23  C2D0 E885 E17C 9200 63C6
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Thin Provisioned LVM

2015-03-11 Thread Richard Pieri

On 3/11/2015 1:13 PM, ma...@mohawksoft.com wrote:

Yes, please, oh please, put some links that describe best practices that
address my complaints as there are none that anyone I have ever known


http://lmgtfy.com/?q=zfs+best+practices+memory
http://lmgtfy.com/?q=zfs+best+practices+database
http://lmgtfy.com/?q=zfs+best+practices+sparse+volumes

Was that so hard?

--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Thin Provisioned LVM

2015-03-11 Thread Eric Chadbourne
I was just reading some Oracle docs because of your discussion.  It’s amazing how 
much has changed so fast.  I used to work for a company that had about a dozen 
large Oracle databases on a couple of different *nixes.  I was just a support 
tech who got to write a bunch of SQL and Bash.  The things Mark is saying 
sound just like the things the DBAs would have said.  It sounds like ZFS has 
changed things.  The speed of development is amazing.

Also, honest folks can disagree and change their minds.  Hell, I used to like 
MySQL.  ;)

—
Eric Chadbourne
http://Nonprofit-CRM.org/



 On Mar 11, 2015, at 6:37 PM, Edward Ned Harvey (blu) b...@nedharvey.com 
 wrote:
 
 From: Discuss [mailto:discuss-bounces+blu=nedharvey@blu.org] On
 Behalf Of ma...@mohawksoft.com
 
 says give ZFS whole disks, which is stupid.
 
 Mark, clearly you know nothing about ZFS.
 
 Also, it's clear you have an axe to grind, which means anything you say about 
 it has to be taken with a grain of salt.
 
 I've personally used a lot of zfs, and a lot of lvm, and there is barely any 
 situation that I would ever consider using lvm ever again.
 ___
 Discuss mailing list
 Discuss@blu.org
 http://lists.blu.org/mailman/listinfo/discuss

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] pulse files in /tmp on RHEL 6

2015-03-11 Thread Richard Pieri

On 3/11/2015 3:18 PM, John Abreau wrote:

Argument list too long? Try find(1):


I'd originally considered suggesting that you tweak the tmpwatch daily 
crontab entry to be more aggressive, since the default EL prune time is 
30 days, but a separate pulse pruner crontab would be less disruptive.


Personally, I don't find /tmp cleaners to be workarounds. They're part 
of every normally functioning UNIX and Unix-alike that I've ever managed, 
even when I've had to roll my own.


--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Thin Provisioned LVM

2015-03-11 Thread Edward Ned Harvey (blu)
 From: Discuss [mailto:discuss-bounces+blu=nedharvey@blu.org] On
 Behalf Of ma...@mohawksoft.com
 
 says give ZFS whole disks, which is stupid.

Mark, clearly you know nothing about ZFS.

Also, it's clear you have an axe to grind, which means anything you say about 
it has to be taken with a grain of salt.

I've personally used a lot of zfs, and a lot of lvm, and there is barely any 
situation that I would ever consider using lvm ever again.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Thin Provisioned LVM

2015-03-11 Thread markw
 From: Discuss [mailto:discuss-bounces+blu=nedharvey@blu.org] On
 Behalf Of ma...@mohawksoft.com

 says give ZFS whole disks, which is stupid.

 Mark, clearly you know nothing about ZFS.

Think what you wish. Maybe I'm not explaining the problem clearly.

Commercial SAN systems provide disks as LUNs over Fibre Channel or
iSCSI. These LUNs are allocated from a pool of disks in a commercial
storage system. Ideally, a number of servers use storage from the
SAN, and each of the servers or VMs is presented with its disks.

Now, EXT2, XFS, and many other file systems keep their data allocation
conservative, opting to re-use blocks in place instead of allocating new
blocks.

The problem arises when you have something like 100 VMs, each with a 2TB
LUN, running off a SAN with only 20TB of actual storage. Without ZFS, the
systems only use space as they need it: 100 VMs with 2TB of logical storage
each can easily come out of 20TB as long as block allocation is
conservative. With ZFS, those 100 VMs will, far more quickly than actually
needed, gobble up 2TB each and demand 200TB of physical storage, even
though most of that space is free space that ZFS has merely written to.
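
For reference, the thin-LVM shape of that setup looks roughly like the
following (a sketch only; the device, volume group, and volume names are
made up):

    # one 20TB thin pool backing everything
    pvcreate /dev/sdb
    vgcreate vg_san /dev/sdb
    lvcreate -L 20T --thinpool pool0 vg_san
    # each VM gets a 2TB thin volume; blocks come out of pool0 only as they are written
    lvcreate -V 2T --thin -n vm001 vg_san/pool0

The over-provisioning only holds together if the guest file systems keep
their writes confined, which is exactly the property at issue here.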

This is representative of a *real* and actual problem seen in the field by
a real customer. ZFS is not compatible with this strategy, and this
strategy is common and not something the VERY LARGE customer is willing to
change.


 Also, it's clear you have an axe to grind, which makes anything you say
 about it take it with a grain of salt.

Believe what you will; I have posted nothing but real issues that I and
other people have had.


 I've personally used a lot of zfs, and a lot of lvm, and there is barely
 any situation that I would ever consider using lvm ever again.

Agreed, ZFS does a lot of things right. Unfortunately, it does a lot of
things in ways that render it sub-optimal for a class of
applications, specifically ones which manage their own block cache and
block I/O strategy.

You can make ZFS faster, but in the configuration I describe, not as
fast as a simpler volume management system.
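
For that class of application, the usual ZFS tuning amounts to settings
along these lines (a sketch; the pool and dataset names are made up):

    # let the application own the block cache; keep only metadata in the ARC
    zfs set primarycache=metadata tank/db
    # match the record size to the application's I/O size, e.g. an 8K database page
    zfs set recordsize=8k tank/db
    # favor throughput over latency for synchronous writes
    zfs set logbias=throughput tank/db

It reduces the overhead, but it does not remove the copy-on-write layer
itself, which is the point being made above.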



___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Thin Provisioned LVM

2015-03-11 Thread Richard Pieri

On 3/10/2015 11:09 PM, ma...@mohawksoft.com wrote:

There are some very good reasons to NOT use ZFS, but this isn't the
discussion I intended to start.


Then all I will say on this subject at this time is that your problems 
with ZFS seem to fall under "you're doing it wrong." ZFS best practices 
are thoroughly documented and those documents do address your complaints 
about ZFS.


In re Linux LVM, well, it comes as no surprise to me that the thin 
provisioning mechanism feels like a bolted-on hack. LVM always felt 
unfinished to me compared to other offerings like AdvFS, VxVM, even the 
volume manager that IBM created to support JFS (IBM's tools and internal 
consistency made up for a lot of the shortcomings in AIX). I used LVM 
not because it was good but because it was the only volume manager that 
Linux had. These days I try to avoid using LVM for anything other than 
basic OS volumes.


--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


[Discuss] Rekonq doesn't trust my Certificate Authority

2015-03-11 Thread Bill Horne

I've come across an odd problem with Rekonq, and I'm looking for help.

I have a real SSL certificate for my website, billhorne.com. It shows, 
as expected, a padlock icon when I go to https://billhorne.com/ .


Except when I use Rekonq: the KDE browser gives me an "untrusted" error, 
saying that the root CA certificate is not trusted for this use.  Google 
searches show that it's a known problem, but the only pages I found either 
suggested that there was a MITM attack in progress or warned against using 
a self-signed cert.


I took a screen shot of the details page; it's at 
https://billhorne.com/snapshot1.png .  All suggestions are welcome, and 
thank you in advance.
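
For anyone who wants to dig in, a quick way to see exactly which certificate
chain the server sends (assuming openssl is available; a missing intermediate
CA certificate is a common cause of this kind of "untrusted issuer" error):

    # show every certificate the server presents during the TLS handshake
    openssl s_client -connect billhorne.com:443 -servername billhorne.com -showcerts < /dev/null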


Bill

--
E. William Horne
339-364-8487

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss