[gentoo-user] Re: SRC_URI & SRC_URI.mirror

2014-09-20 Thread James
Michael Orlitzky mjo at gentoo.org writes:



 The entries of the SRC_URI variable are all logical-ANDed together
 rather than logical-OR. In other words, every entry is downloaded and
 considered part of the source. You only need the first one (from
 apache.org); Gentoo will ultimately take care of mirroring it if the
 ebuild makes its way into the tree.

OK, now it is simplified to:

MY_PV=${PV/_/}

SRC_URI="http://www.apache.org/dist/spark/${PV}/${P}.tgz"

But I get this error:

'ebuild spark-1.1.0.ebuild manifest'

Downloading 'http://www.apache.org/dist/spark/1.1.0/spark-1.1.0.tgz'
--2014-09-20 13:54:17--  http://www.apache.org/dist/spark/1.1.0/spark-1.1.0.tgz



ERROR 404: Not Found.

!!! Couldn't download 'spark-1.1.0.tgz'. Aborting.
!!! Fetch failed for spark-1.1.0.tgz, can't update Manifest


But the file is in:
http://www.apache.org/dist/spark/spark-1.1.0/

so ${PV} is not expanding into the path correctly, and I've tried everything
I can think of to fix it.

Downloading 'http://www.apache.org/dist/spark/1.1.0/spark-1.1.0.tgz'

should be:

Downloading 'http://www.apache.org/dist/spark/spark-1.1.0/spark-1.1.0.tgz'
or
Downloading 'http://www.apache.org/dist/spark/spark-1.1.0.tgz'





James








[gentoo-user] SRC_URI & SRC_URI.mirror

2014-09-20 Thread James
Hello,

So I'm working on an apache spark (overlay) ebuild.
I cannot seem to get the sources to download.

Here are the sources:
http://www.apache.org/dist/spark/spark-1.1.0/
or here:
http://mir2.ovh.net/ftp.apache.org/dist/spark/spark-1.1.0/

My local ebuild has these entries:

snip

MY_PV=${PV/_/}

SRC_URI="http://www.apache.org/dist/spark/spark/${PV}/${P}.tgz
	http://mir2.ovh.net/ftp.apache.org/dist/spark/spark/${PV}/${P}.tgz"

'ebuild spark-1.1.0.ebuild manifest'

When attempting to compile (emerge spark###) I get:
!!! Couldn't download 'spark-1.1.0.tgz'. Aborting.
!!! Fetch failed for spark-1.1.0.tgz, can't update Manifest

So based on 'repoman scan spark-1.1.0.ebuild' I add:

SRC_URI.mirror=mirror://apache/spark/spark/1.1.0/spark-1.1.0.tgz
and comment out the original SRC_URI line.

I can then update the manifest, but it fails to build:

 /usr/local/portage/sys-cluster/spark/spark-1.1.0.ebuild: line 19:
SRC_URI.mirror=mirror://apache/spark/spark/1.1.0/spark-1.1.0.tgz: No such
file or directory


I've tried dozens and dozens of various SRC_URI values, but it just will
not download the file.

ideas?


And here is the rest, just in case anyone wants to make suggestions.

LICENSE="Apache-2.0"
SLOT="0"
KEYWORDS="~amd64 ~x86"
IUSE="java python scala"

DEPEND="python? ( dev-lang/python dev-python/boto )
    java? ( virtual/jdk )
    scala? ( dev-lang/scala )
    dev-java/maven-bin
    ${DEPEND}"

RDEPEND="python? ( dev-lang/python )
    >=virtual/jdk-1.6
    scala? ( dev-lang/scala )
    dev-java/maven-bin"

S="${WORKDIR}/${P}"

ECONF_SOURCE="${S}"

src_prepare() {
    mkdir "${S}/build" || die
}

src_configure() {
    cd "${S}/build"
    econf \
        $(use_enable python) \
        $(use_enable java)
}

src_compile() {
    cd "${S}/build"
    emake -j1 V=1
}

src_install() {
    cd "${S}/build"







Re: [gentoo-user] Re: SRC_URI & SRC_URI.mirror

2014-09-20 Thread Michael Orlitzky
On 09/20/2014 01:55 PM, James wrote:
 
 OK, now it is simplified to:
 
 MY_PV=${PV/_/}
 
 SRC_URI="http://www.apache.org/dist/spark/${PV}/${P}.tgz"
 
 But I get this error:
 
 'ebuild spark-1.1.0.ebuild manifest'
 
 Downloading 'http://www.apache.org/dist/spark/1.1.0/spark-1.1.0.tgz'
 --2014-09-20 13:54:17--  
 http://www.apache.org/dist/spark/1.1.0/spark-1.1.0.tgz
 
 
 
 ERROR 404: Not Found.
 

Because that's the wrong URL =)




Re: [gentoo-user] SRC_URI & SRC_URI.mirror

2014-09-20 Thread Michael Orlitzky
On 09/20/2014 01:07 PM, James wrote:
 Hello,
 
 So I'm working on apache spark (overlay) ebuild. 
 I cannot see to get the sources to download.
 
 Here are the sources:
 http://www.apache.org/dist/spark/spark-1.1.0/
 or here:
 http://mir2.ovh.net/ftp.apache.org/dist/spark/spark-1.1.0/
 
 ...
 
 So based on 'repoman scan spark-1.1.0.ebuild' I add:
 
 SRC_URI.mirror=mirror://apache/spark/spark/1.1.0/spark-1.1.0.tgz
 and comment out the  original SRC_URI line.

The SRC_URI.mirror warning indicates that your original ebuild has a
suspicious entry in SRC_URI, namely one of Gentoo's 3rd party mirrors.
Repoman gives a warning because you probably don't need it, and I think
it's right in this case.

The entries of the SRC_URI variable are all logical-ANDed together
rather than logical-OR. In other words, every entry is downloaded and
considered part of the source. You only need the first one (from
apache.org); Gentoo will ultimately take care of mirroring it if the
ebuild makes its way into the tree.
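
A minimal sketch of what that means in practice (the second, commented-out
form uses a hypothetical extra file purely to illustrate the AND semantics):

# One entry is enough; Gentoo's own mirroring takes over once the
# ebuild lands in the tree.
SRC_URI="http://www.apache.org/dist/spark/spark-1.1.0/${P}.tgz"

# Listing several *different* files means all of them get fetched and
# added to the Manifest, e.g.:
#SRC_URI="http://www.apache.org/dist/spark/spark-1.1.0/${P}.tgz
#	http://example.org/${P}-docs.tar.gz"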




[gentoo-user] Re: SRC_URI & SRC_URI.mirror

2014-09-20 Thread James
Michael Orlitzky mjo at gentoo.org writes:


  MY_PV=${PV/_/}

  SRC_URI="http://www.apache.org/dist/spark/${PV}/${P}.tgz"

 Because that's the wrong URL =)

SRC_URI="http://www.apache.org/dist/spark/spark-1.1.0/${P}.tgz"

Works. Is this correct?
(sorry for being dense)


James






[gentoo-user] mesos spark ebuilds need testing

2014-09-22 Thread James
Hello,

Well, at this point I probably need a few folks to test
the mesos and spark ebuilds as they are on bugs.gentoo.org:

mesos (bug 510912, attachment 385316) and
spark (bug 523412, attachment 385318)

I had already installed Java (icedtea, scala and maven-bin),
so those dependencies might need tweaking.
Let me know your experience, publicly or privately, as you like.


enjoy,
James




Re: [gentoo-user] Re: SRC_URI & SRC_URI.mirror

2014-09-20 Thread Michael Orlitzky
On 09/20/2014 02:08 PM, James wrote:
 Michael Orlitzky mjo at gentoo.org writes:
 
 
 MY_PV=${PV/_/}
 
  SRC_URI="http://www.apache.org/dist/spark/${PV}/${P}.tgz"
 
 Because that's the wrong URL =)
 
 SRC_URI="http://www.apache.org/dist/spark/spark-1.1.0/${P}.tgz"
 
 Works. Is this correct?
 (sorry for being dense)
 

Yes, and you can replace spark-1.1.0 by ${P} in the path as well. The
link that Bryan posted has a list of all of the variables that are
available. You can go pretty crazy with some of them, but in this case
the only other thing I would replace is spark by ${PN}.
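
Concretely, that presumably boils down to:

SRC_URI="http://www.apache.org/dist/${PN}/${P}/${P}.tgz"

which, for spark-1.1.0, expands back to the working URL above.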




Re: [gentoo-user] Re: SRC_URI & SRC_URI.mirror

2014-09-20 Thread Bryan Gardiner
On Saturday, September 20, 2014 18:08:30 James wrote:
 Michael Orlitzky mjo at gentoo.org writes:
   MY_PV=${PV/_/}
   
   SRC_URI="http://www.apache.org/dist/spark/${PV}/${P}.tgz"
  
  Because that's the wrong URL =)
 
  SRC_URI="http://www.apache.org/dist/spark/spark-1.1.0/${P}.tgz"
 
 Works. Is this correct?
 (sorry for being dense)
 
 
 James

See: http://devmanual.gentoo.org/ebuild-writing/variables/index.html

${PV} is only the version number, it doesn't include the package name.

- Bryan
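
For spark-1.1.0, the standard variables from that page expand to:

  PN = spark          (the package name)
  PV = 1.1.0          (the version)
  P  = spark-1.1.0    (${PN}-${PV})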



Re: [gentoo-user] Re: SRC_URI & SRC_URI.mirror

2014-09-20 Thread Jc García
2014-09-20 12:08 GMT-06:00 James wirel...@tampabay.rr.com:

 Michael Orlitzky mjo at gentoo.org writes:


   MY_PV=${PV/_/}

   SRC_URI="http://www.apache.org/dist/spark/${PV}/${P}.tgz"

  Because that's the wrong URL =)

  SRC_URI="http://www.apache.org/dist/spark/spark-1.1.0/${P}.tgz"


If you want to build the URI with the ebuild environment variables, it would be:
SRC_URI="http://apache.org/dist/${PN}/${P}/${P}.tgz"

 Works. Is this correct?
 (sorry for being dense)


 James







[gentoo-user] Re: SRC_URI & SRC_URI.mirror

2014-09-20 Thread James
Michael Orlitzky mjo at gentoo.org writes:


 Yes, and you can replace spark-1.1.0 by ${P} in the path as well. The
 link that Bryan posted has a list of all of the variables that are
 available. You can go pretty crazy with some of them, but in this case
 the only other thing I would replace is spark by ${PN}.


OK, that's behind me now.

The build fails, so I figure I'll just build it manually, then
finish the ebuild. So I went to:


/var/tmp/portage/sys-cluster/spark-1.1.0/work/spark-1.1.0
and there are no configure scripts.

The README.md has this:

Spark is built on Scala 2.10. To build Spark and its example programs, run:

./sbt/sbt assembly

I did and it looks like the manual compile worked:

[info] Packaging
/var/tmp/portage/sys-cluster/spark-1.1.0/work/spark-1.1.0/
examples/target/scala-2.10/spark-examples-1.1.0-hadoop1.0.4.jar
...
[info] Done packaging.
[success] Total time: 786 s, completed Sep 20, 2014 3:04:22 PM

So I need to add commands to the ebuild to launch
 ./sbt/sbt assembly

I've been all over man 5 ebuild and the devmanual, so naturally
I've seen what to do, but missed it.

Suggested reading (which section) or syntax is most welcome.

James







Re: [gentoo-user] Re: SRC_URI & SRC_URI.mirror

2014-09-20 Thread Michael Orlitzky
On 09/20/2014 03:17 PM, James wrote:
 
 OK, that behind me now..
 
 So the build fails, so I figure I'll just build it manually, then
 finish the ebuild. So I went to:
 
 
 /var/tmp/portage/sys-cluster/spark-1.1.0/work/spark-1.1.0
 and no configure scripts
 
 The README.md has this:
 
 Spark is built on Scala 2.10. To build Spark and its example programs, run:
 
 ./sbt/sbt assembly
 
 I did and it looks like the manual compile worked:
 
 [info] Packaging
 /var/tmp/portage/sys-cluster/spark-1.1.0/work/spark-1.1.0/
 examples/target/scala-2.10/spark-examples-1.1.0-hadoop1.0.4.jar
 ...
 [info] Done packaging.
 [success] Total time: 786 s, completed Sep 20, 2014 3:04:22 PM
 
 So I need to add commands to the ebuild to launch
  ./sbt/sbt assembly
 
 I've been all over the man 5 ebuild and the devmanual. So naturally
 I've seen what to do, but missed it.
 

Short answer: just put the command in the ebuild (which is nothing but a
fancy bash script). It'll run. For example,

src_compile() {
    ./sbt/sbt assembly || die "assembly build failed"
    # [the rest of the build commands go here]
}

The long answer is that we usually have eclasses for different build
systems. The way eclass inheritance works, the eclasses override certain
phases of the ebuild. So for example, the haskell-cabal eclass knows
that it should run `runghc Setup.hs configure` or something like that
instead of ./configure (like we would run by default).

Unfortunately, I don't see any other ebuilds for scala packages in the
tree. So you're probably the first person to need such an eclass. Rather
than set yourself back implementing the eclass, you're probably better
off running all of the build commands manually and waiting for somebody
else to write the eclass.

Someone may be working on scala in an overlay; the java herd is taking
care of dev-lang/scala so perhaps you can ask in #gentoo-java.
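
For what it's worth, a rough sketch of that "run the commands manually"
approach, with an illustrative install path and a guessed jar location
(the real paths come from the sbt output pasted above):

src_compile() {
    # Note: sbt normally fetches dependencies at build time, which ebuild
    # policy (and any network sandbox) frowns upon; this is only a sketch.
    ./sbt/sbt assembly || die "sbt assembly failed"
}

src_install() {
    # The jar path is a guess based on the sbt packaging output; adjust it.
    insinto /opt/${PN}
    doins assembly/target/scala-2.10/*.jar || die "doins failed"
}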




Re: [gentoo-user] Re: Nagios testers wanted

2014-11-05 Thread Alec Ten Harmsel

On 11/05/2014 11:42 AM, James wrote:

 Let's make a deal. Lots of folks are trying to get Nagios running
 on Mesos/spark as a cluster based tool. Have your (hacks) efforts
 focoused on runnning Nagios on a mesos/spark cluster? My good friend
 and dev-in-making Alec has graticiouly put working versions of both
 mesos and spark on his git_tub_club collection:

 https://github.com/trozamon/overlay/tree/master/sys-cluster

 http://caen.github.io/hadoop/user-hadoop.html#spark



Disclaimer: None of my ebuilds (especially Spark) are terribly great; at
most they get the job done. The Spark ebuild even uses *gasp* maven.

After this experience, I have a great deal of sympathy for the Java
herd. All of the large projects that rely on the JVM that I've dealt
with (eclipse, hadoop, hive, spark) organize themselves in such a way
that makes them look extremely difficult to package.

Thanks to all the Gentoo developers, actually. I can't imagine how much
time you guys put into this stuff.

Alec

P.S. dev-in-making? haha, I have too many radical ideas to be accepted
as a Gentoo dev. Symptom of being a young 'un, probably. Don't have near
enough experience quite yet.



[gentoo-user] Re: The end of Herds

2014-11-06 Thread James
Alec Ten Harmsel alec at alectenharmsel.com writes:


 There is a large discussion on the Spark mailing list right now about
 having groups of maintainers for different areas:


http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Designating-maintainers-for-some-Spark-components-td9115.html


This is an excellent link and model for high-profile software.
It is a very open and accountable model for code development: reviewing
patches, merging patches, and, in general, code maintenance and bug
fixes.

 I'm not sure how relevant that is, but it's interesting.

It is relevant to very large and important codes. I do believe that
most of the gentoo ebuilds  (packages) will not be afforded this
level and number of devs. As the gentoo distro grows, it is a model
for the devs and the council to keep in mind for those critically
important packages... 


James






[gentoo-user] Re: The end of Herds

2014-11-06 Thread James
Alec Ten Harmsel alec at alectenharmsel.com writes:

  I think the concept of Projects will persist, but herds have
  to become active and request to become Projects as defined
  on the gentoo wiki or they will be erased. Like many others, 
  I have been burned in the past with trying to get directly involved 
  with Gentoo (been here since 2004). That's all water under the bridge.
  So I am tip_toeing behind the scenes willing to be a grunt
  and clean up some of the java mess, participate in clustering and 
  contribute to the science project. We'll see just how long it lasts 
  before I get bitch_slapped like  my previous attempts

 There is a large discussion on the Spark mailing list right now about
 having groups of maintainers for different areas:
 

http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Designating-maintainers-for-some-Spark-components-td9115.html
 
 I'm not sure how relevant that is, but it's interesting.
 
 My own viewpoint is that there should be no individual maintainers;
 packages should be assigned on a herd level, and the herds can
 self-regulate and know who has expertise with each package. Just my two
 cents; best to not have a single point of failure.


The spark post is relevant to the discussion. But spark is one (large)
codebase, and we have thousands of different codes at Gentoo as a distro.
Some of our software, such as Python, is like spark in having
multiple maintainers. We also have many smaller pieces of software
(ebuilds) that need someone (anyone?) to step forward and maintain that
singular package. Routinely on gentoo-dev there are packages up for
grabs that need a maintainer. Spark is in the luxury position of having
many very talented coders all working on one (large) piece of software.

Besides, I think the projects will provide that group effort
you admire in the current gentoo herds and in the spark community for very
important codes (like gcc, python, perl, etc.).

But there will also be many useful pieces of software that we should keep
around that just need a single maintainer. How it shakes out as to what the
devs will allow for those sorts of packages remains to be seen; take elvis,
for example: a package that is not in anyone's critical path, but is cool to
keep around.

We, gentoo, have a wide variety of codes to maintain, and we'll need
everyone from the very talented coders to capable_users to maintain
these ebuilds as our distro grows. We're going to need dozens if not
hundreds of ebuilds just to fluff out the clustering codes
necessary for a robust gentoo_clustering setup, imho.

We need more devs and responsible users to help maintain and grow the base
of ebuilds, imho. But I do agree, spark is going to need a very
talented maintainer, with quite a bit of Java and Gentoo expertise.

Besides, I think the decision, from what I've read, to terminate herds
is pretty much a done deal. Think of projects, maintainers, and others
as you formulate gentoo's path forward.


James







[gentoo-user] Re: OOM memory issues

2014-09-18 Thread James
Rich Freeman rich0 at gentoo.org writes:




 A big problem with Linux along these fronts is that we don't really
 have good mechanisms for prioritizing memory use.  You can set hard
 limits of course, which aren't flexible, but otherwise software is
 trusted to just guess how much RAM it should use.

Exactamundo!
Besides fine-grained controls, I want it in a fat_boy controllable GUI!
Clustering is where it's at. Now, much of the fuss I read
in the clustering groups, particularly Spark and other
in_memory tools, is all about monitoring and managing
all types of memory and related issues. [1]


 It would be nice if processes could allocate cache RAM, which could be
 preferentially freed if the kernel deems necessary.  If some pages are
 easier to regenerate than to swap, this could also be flagged (I have
 a 50Mbps connection - I'd rather see my browser re-fetch pages than go
 to disk when the disk is already busy).  There are probably a lot of
 other ways that memory use could be optimized with hinting.

I think you need to look into apache spark. It is exploding. Technology
to run certain codes 100% in memory looks to be a revolution, driven
by the mesos/spark clusters. [2] The weapons on top of mesos/spark
are Python, Java and Scala (in portage).


hth,
James

[1] https://issues.apache.org/jira/browse/SPARK-3535

[2] https://amplab.cs.berkeley.edu/

http://radar.oreilly.com/2014/06/a-growing-number-of-applications-are-being-built-with-spark.html




[gentoo-user] Re: Grub1: Cant ? Re: keeping grub 1

2015-08-26 Thread James
Alec Ten Harmsel alec at alectenharmsel.com writes:


 I don't know anything about arm64, but if it is 64-bit, why would you
 need 32-bit binaries?

An enormous codebase that is not likely to get ported to 64 bit arm.
Easy (embedded) product migration to arm64.

Also, arm64 supports big-endian and little-endian code simultaneously.

 I also am a bit of a purist, and just run no-multilib because it is
 emotionally satisfying.

Naw. You're teasing? (wink wink, nudge nudge).


  OFF TOPIC
  On another note: have you seen spark-1.5 ? Cleaner build?
 
http://apache-spark-developers-list.1001551.n3.nabble.com/Fwd-ANNOUNCE-Spark-1-5-0-preview-package-td13683.html

 I haven't looked at the new features of 1.5 specifically, but I know
 that the build process is basically the same. The API is nice, but it is
 definitely possible to write a faster job using Hadoop's API since it is
 lower-level and can be optimized more, so I spend more time writing jobs
 using Hadoop's API.

I've read that building spark-1.5 from sources is much cleaner now:
bgo-523412 (you're on the cc list?). Particularly parsing out
hadoop support, for more focused regression testing on bare-metal
setups. Drop me a line when you install 1.5 at work and let me know how it
runs with Hadoop.


hth,
James






Re: [gentoo-user] Re: xfce woes

2011-02-03 Thread Adam Carter
 Until one day one of their bright spark techies had a brilliant idea. They
 hired a bunch of pretty girls wearing tight skimpy New! Improved! Check
 Our
 Promotion! outfits to stand outside the front door handing out free
 complimentary CDs.

 Yes, you guessed it. Within the hour the perimeter firewalls had more holes
 than a Swiss cheese. Somebody paid dearly for that.


That's not new. A similar one I heard of was to leave some USB drives on the
ground in the car park... or you could use spear-phishing emails.


[gentoo-user] Re: Nagios testers wanted

2014-11-05 Thread James
Michael Orlitzky mjo at gentoo.org writes:


 We're collecting more and more Nagios bugs every day, and we've been
 stuck on the 3.x series for a while even though upstream has moved to 4.x.

 The main problem as far as I can see is that nagios-plugins is a big
 mess, and it's hard for any one person to test. (We use it at work, but
 there's no ipv6 there, or ldap, or snmp, or game servers...)

Um, I'm not up on the results of the Nagios user revolt (fork) from
a few years ago. Maybe if you clarify that recent history more folks
would be interested in Nagios?

 I've rewritten the nagios, nagios-core, and nagios-plugins ebuilds, and
 will eventually ask permission to commit them to ~arch. That will rip
 the band-aid off, so to speak, after which I can work on addressing the
 existing bugs. But before I do, I'd like to have a few people test it
 and tell me it works.

Let's make a deal. Lots of folks are trying to get Nagios running
on Mesos/spark as a cluster-based tool. Have your (hack) efforts
focused on running Nagios on a mesos/spark cluster? My good friend
and dev-in-making Alec has graciously put working versions of both
mesos and spark in his GitHub collection:

https://github.com/trozamon/overlay/tree/master/sys-cluster

http://caen.github.io/hadoop/user-hadoop.html#spark


 So if anyone is using nagios, please give these a try.

I think the Nagios user community is now splintered (it's been
a while since I looked at Nagios seriously) because the main dude
became such a *F!zt* that most users left his fiefdom. Has that
changed? Do illuminate the recent history of Nagios, please?

Something in the net-analyzer realm needs to be modified to
run on a cluster. Mesos is the future of clustering, imho.
There are many other cool codes that can run on a cluster
for a killer-attraction-app for gentoo:  Tor, passwd crackers,
video farms, web servers, forensic-analysis, just to name a few.

Personally, I've had excellent results with jffnms, but others
find it limited. If you spend some time illuminating why nagios
is now stable (users happy with devs) then you'd likely attract
some grunts (testers) for your efforts. I'm sure the cluster community
would greatly appreciate a version of nagios running on mesos.


Nagios and systemd suffer quite a lot from the same disease, imho.
They surely display quite similar symptoms.



hth,
James







[gentoo-user] Re: Nagios testers wanted

2014-11-05 Thread James
walt w41ter at gmail.com writes:

 
 On 11/05/2014 09:42 AM, James wrote:
  Us old farts, call that:: wisdom
 
 Is that Haskell?

Maybe. My new linguas are Scala and R on Spark [1].
And those have me buried alive. My sleep hours have
me cast in a sparse matrix schema.
Haskell :: beyond my scope
escape clause :: I'm sticking with it

Besides, Haskell is definitely beyond my pay_grade.

ps (I don't do annotation type :: first date)

better?

[1] https://spark.apache.org/  






[gentoo-user] Re: Grub1: Cant ? Re: keeping grub 1

2015-08-26 Thread James
Alec Ten Harmsel alec at alectenharmsel.com writes:

  So some vintage installs/upgrades got me thinking. What does Grub-2
  offer that grub-1 does not. I cannot think of anything that I need
  from Grub-2 not mbr, nor efi board booting. Not dual/multi booting
  as grub-1 excels on that, and not on drives larger than 2 T.

  So what is the (hardware scenario)  where grub-2 and it's problems
  are superior to grub-1?  I'm having trouble thinking of that
  situation...?

 64-bit hardware with the no-multilib profile[1]. I have no -bin packages
 on my system, nor do I run any pre-built 3rd party applications, so I
 waste no time compiling worthless 32-bit libraries. Therefore, I need
 grub 2.

OK, this is interesting. Is this only an AMD64 thing? On arm64 you'd
most likely want to run 32-bit binaries. This is profile [11], right?

  default/linux/amd64/13.0/no-multilib

I'm OK with this, but what is the benefit of such a profile selection?
Curiously, I have no experience with profile selection, despite
running quite a few amd64 systems. What would the benefits be of
running this profile on older amd64 hardware?


  AMD64 Team; amd64 at gentoo.org
  grub-1 is not available on no-multilib profiles;

I had not seen this, so I guess this is well documented?
Does that profile selection prevent one from selecting grub-1 during
an installation?

OFF TOPIC
On another note: have you seen spark-1.5 ? Cleaner build?
http://apache-spark-developers-list.1001551.n3.nabble.com/Fwd-ANNOUNCE-Spark-1-5-0-preview-package-td13683.html
..


James 







Re: [gentoo-user] Re: post build files

2014-09-11 Thread Alec Ten Harmsel

On 09/11/2014 12:20 PM, James wrote:
 Yes, I've been all over this. It's onto much of the Apache clustering
 codes that are not simple to configure in the ebuild. Besides the raw
 packege codes, like mesos, spark, scala, cassandra, etc there are a
 mulitude of fast moving codes written in Java and Python that need to
 be tested. Java is not difficult, but voluminous. Every problem
 somebody encouters, gets solved by some java bolt on code, rather
 than fixing/extending the main (mesos) sourcecode. As an old C hack,
 it's a tough pill to swallow, but I'm pursing this as best I can. I
 sure feel empathy for the java herd, but hey, now we are doing away
 with herds?
Part of the problem with a lot of open source Java projects is that they
*require* Oracle's Java build, and installing Oracle's JDK in Gentoo is
not a simple 'emerge package_name_here'.
 That said, I'm a bit stressed about 'maven'. We only have maven.bin.
 Much is dependant on it and java. For many reasons, java is not well
 supported in Gentoo. I just hope, I do not have to leave the Gentoo
 distro; because much of what I need (clustering) is java critical. You
 think I can build a gentoo cluster based on these?
I'm not a huge fan of Maven - it's a massive, complex morass. That said,
I can look into building it.
 Does the Gentoo dev team operate a robust gentoo cluster for gentoo
 development needs? Should they?
What would it do? Other than continuous integration, I can't see much
use (although I'm not a dev, so that's not saying much).
 Clusters that perform best are built on mesos/spark.
I can't remember the last time I saw Mesos/Spark on the TOP500 list...
but also, they're both young projects.
 But now, I need robust clustering and from the open source research
 work I have performed, Apache is the Only viable choice. Ubuntu has
 it. Debian Has it. RedHat has it. CentOS has it, but, we cannot
 (willnot?) sustain Apache style clustering at Gentoo because ? (dunno
 the reason; nothing seems plausible). 
From what I can tell, the reason is that it is not worth anyone's time
to do the massive amount of development and maintenance for something
that they won't benefit from.
 Hell, I'd even be willing to pay for a java support to develop the
 java (sourcecode) based tools; but in the past that idea has been very
 frowned up by the gentoo think tank.
If I wasn't in school I'd be all over this...

Anyways, I'll try and look at Maven. Not making any promises; I'm pretty
busy right now.

Alec



Re: [gentoo-user] Re: Grub1: Cant ? Re: keeping grub 1

2015-08-26 Thread Alec Ten Harmsel
On Wed, Aug 26, 2015 at 03:48:12PM +, James wrote:
 Alec Ten Harmsel alec at alectenharmsel.com writes:
  64-bit hardware with the no-multilib profile[1]. I have no -bin packages
  on my system, nor do I run any pre-built 3rd party applications, so I
  waste no time compiling worthless 32-bit libraries. Therefore, I need
  grub 2.
 
 Ok this is interesting. Is this only an AMD64 thing? On Arm64 you'd
 most likely want to run 32 bit binaries.

I don't know anything about arm64, but if it is 64-bit, why would you
need 32-bit binaries?

 This is profile [11} right?
 
   default/linux/amd64/13.0/no-multilib

Yes.

 I'm OK with this, but what is the benefit of such profile selection::
 curiously I have no experience with the profile selection, despite
 running quite a few amd64 system. What would the benefits be 
 running this profile on older amd64 hardware ?

The main benefit is reduced compile times for some packages since I only
compile the 64-bit versions, less stuff on the filesystem, etc. If you
do not run any applications that use a 32-bit version of a library, that
library is taking up disk space and compile time, but is never used.
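
For anyone following along, checking or switching profiles is the usual
eselect routine (the numbering varies per machine, so setting by name is
safer):

# show available profiles and the currently selected one
eselect profile list

# switch to the no-multilib profile by name
eselect profile set default/linux/amd64/13.0/no-multilib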

I also am a bit of a purist, and just run no-multilib because it is
emotionally satisfying.

   AMD64 Team; amd64 at gentoo.org
   grub-1 is not available on no-multilib profiles;
 
 I had not seen this, but so I guess this is well documented..?
 Does that profile selection prevent one from selecting grub-1 during
 and installation?

Yes, although just now was the first time I ever tried installing
grub-1.

 OFF TOPIC
 On another note: have you seen spark-1.5 ? Cleaner build?
 http://apache-spark-developers-list.1001551.n3.nabble.com/Fwd-ANNOUNCE-Spark-1-5-0-preview-package-td13683.html
 ..

I haven't looked at the new features of 1.5 specifically, but I know
that the build process is basically the same. The API is nice, but it is
definitely possible to write a faster job using Hadoop's API since it is
lower-level and can be optimized more, so I spend more time writing jobs
using Hadoop's API.

Alec



Re: [gentoo-user] How to freeze my Gentoo system

2009-03-12 Thread Alan McKinnon
On Thursday 12 March 2009 21:43:32 Neil Bothwick wrote:
 It didn't occur to me that when putting
 the tilde at the wrong end, you were talking out of the wrong end :)

I seem to be doing that a lot lately. You should have seen Tuesday's blunder:

mysql> UPDATE passwds set passwd=a_hash, status=NEW, updated=1236889084;
Rows matched: 4329  Changed: 4329  Warnings: 0

Hang on, that doesn't look right. *sigh* there's no WHERE
I hope there's a backup...
What's in crontab -l?


Lucky for me, some OTHER bright spark had mysqldump in a daily cron!

-- 
alan dot mckinnon at gmail dot com



Re: [gentoo-user] Rebuild entire system - recompile all installed packages

2005-06-04 Thread Alexander Skwar
Peter Ruskin schrieb:

 That's strange, this is what I get here:

 $ ls -1d /var/db/pkg/*/* | wc -l
 1147
 $ emerge -Dep world | wc -l
 1100

Hmm.. Even on your system, emerge -De world would NOT re-install
everything that's already installed. You'd miss 47 packages.
Question: What 47 packages would be missing and why? And also,
how would you re-emerge everything on your system?
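
(For the record, one brute-force way to rebuild literally everything the
package database says is installed, sketched here and assuming nothing
unusual is lying around under /var/db/pkg:)

# rebuild every installed package, version-for-version
emerge --oneshot $(ls -d /var/db/pkg/*/* | sed 's:^/var/db/pkg/:=:')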

Alexander Skwar
-- 
This is not the age of pamphleteers. It is the age of the engineers.  The
spark-gap is mightier than the pen.  Democracy will not be salvaged by men
who talk fluently, debate forcefully and quote aptly.
-- Lancelot Hogben, Science for the Citizen, 1938
-- 
gentoo-user@gentoo.org mailing list



[gentoo-user] ceph on btrfs

2014-10-22 Thread James
Hello,

So looking at the package sys-cluster/ceph, I see these flags:
cryptopp debug fuse gtk +libaio libatomic +nss radosgw static-libs tcmalloc
xfs zfs

No specific flags for btrfs?

ceph-0.67.9 is marked stable, while 0.67.10 and  0.80.5 are marked
(yellow) testing and * is marked (red) masked. So what version
would anyone recommend, with what flags?  [1]

 Ceph will be the DFS on top of a (3) node mesos+spark cluster. 
btrfs is being  set up with 2 disks in raid 1 on each system. Btrfs
seems to be keenly compatible with ceph [2].


Guidance and comments, warmly requested,
James


[1] 
http://ceph.com/docs/v0.78/rados/configuration/filesystem-recommendations/

[2] http://ceph.com/docs/master/release-notes/#v0-80-firefly






[gentoo-user] Re: openshot-2.0.6.ebuild

2016-03-24 Thread James
Alan McKinnon  gmail.com> writes:

> What do you have for
> emerge --info | grep PYTHON


A ton of flags... here's the relevant part::

 PYTHON_SINGLE_TARGET="python2_7" PYTHON_TARGETS="python2_7 python3_4"

 USE_PYTHON


I thought I read somewhere that this sort of bugger had been addressed
with EAPI-6, which the ebuild is? It could easily be me though, as
I have been noodling with python and ipython and billions of other
little python codes... so no telling what I've buggered up. Still,
all of the other python-centric codes are happy, atm.

I'm getting ready to start working on (2) new python ebuilds::
turbogears-2 and dpark (a very cool "in-memory" python knockoff of
apache-spark):: so bear that in mind on any other recommendations.


???
James

[1] https://github.com/douban/dpark

[2] http://turbogears.org/






[gentoo-user] Re: File system testing

2014-09-17 Thread James
Alec Ten Harmsel alec at alectenharmsel.com writes:


 As far as HDFS goes, I would only set that up if you will use it for
 Hadoop or related tools. It's highly specific, and the performance is
 not good unless you're doing a massively parallel read (what it was
 designed for). I can elaborate why if anyone is actually interested.

Actually, from my research, my goal is one really big scientific simulation
running constantly. Many folks are recommending skipping Hadoop/HDFS
altogether and going straight to mesos/spark. RDD (in-memory) cluster
calculations are at the heart of my needs. The opposite end of the spectrum,
loads of small files and small apps, I dunno about, but I'm all ears.
In the end, my (3) node scientific cluster will morph and support
the typical myriad of networked applications, but I can take
a few years to figure that out, or just copy what smart guys like
you and joost do.


 We use Lustre for our high performance general storage. I don't have any
 numbers, but I'm pretty sure it is *really* fast (10Gbit/s over IB
 sounds familiar, but don't quote me on that).

At UMich, you guys should test the FhGFS/btrfs combo. The folks
at UCI swear by it, although they are only publishing a wee bit
(you know, water-cooler gossip). Surely the Wolverines do not
want those Californians getting up on them?

Are you guys planning a mesos/spark test? 

  Personally, I would read up on these and see how they work. Then,
  based on that, decide if they are likely to assist in the specific
  situation you are interested in.

It's a ton of reading. It's not apples-to-apple_cider type of reading.
My head hurts.


I'm leaning to DFS/LFS:

(2) Lustre/btrfs and FhGFS/btrfs

Thoughts/comments?

James





[gentoo-user] Re: gigabyte mobo latency

2014-10-18 Thread James
thegeezer thegeezer at thegeezer.net writes:


 there is a little more here
 http://gentoo-en.vfose.ru/wiki/Improve_responsiveness_with_cgroups
 which will allow you to script creating a cgroup with the processID of
 an interactive shell, that you can start from to help save hunting down
 all the threads spawned by chrome.
 you can then do fun stuff with
 echo $$ > /sys/fs/cgroup/cpu/high_priority/tasks

Yea, this is cool. But when it's a cluster with thousands of processes,
this seems to be limited by the manual parsing and CLI actions that
are necessary for large/busy environments. (We shall see.)

 hopefully this will give you a bit more control over all of that though


Gmane mandates that the previous lines be culled. That said, you have given
me much to think about, test, and refine.

In /sys/fs/cgroup/cpu   I have:

cgroup.clone_children  cgroup.procs  cpu.shares  release_agent
cgroup.event_control cgroup.sane_behavior notify_on_release  tasks

So I'll have to research creating and prioritizing dirs like high_priority.
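
Roughly, that boils down to something like this (a sketch; it assumes the
cgroup-v1 cpu controller mounted at the path above, and the 2048 value is
an arbitrary weight, double the default of 1024):

mkdir /sys/fs/cgroup/cpu/high_priority
echo 2048 > /sys/fs/cgroup/cpu/high_priority/cpu.shares

# move the current shell (and whatever it spawns) into the new group
echo $$ > /sys/fs/cgroup/cpu/high_priority/tasks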


I certainly appreciate your lucid and direct explanations.
Let me play with this a bit and I'll post back when I munge things
up. Are there any graphical tools for adjusting and managing
cgroups? Surely when I apply this to the myriad of things running
on my mesos+spark cluster, I'm going to need a well-thought-out tool
for cgroup management, particularly for memory resource organization
and allocation, as spark is an in_memory environment that seems
sensitive to OOM issues of all sorts.

thx,
James






Re: [gentoo-user] Re: Nagios testers wanted

2014-11-05 Thread Michael Orlitzky
On 11/05/2014 11:42 AM, James wrote:
 
 Um, I'm not up on the results of the Nagios user revolt (fork) from
 a few years ago. Maybe if you clarify that recent history more folks
 would be interested in Nagios?

If no one is interested, that's great -- I can push my changes with
reckless abandon =)

I'm not up-to-date either, but Nagios is still in the tree, and we still
use it, so I'd like to clean up a bit.


 Let's make a deal. Lots of folks are trying to get Nagios running
 on Mesos/spark as a cluster based tool. Have your (hacks) efforts
 focoused on runnning Nagios on a mesos/spark cluster?

At the moment I'm just trying to clean up the existing ebuilds so that
we can jump to the newest major version. There are a ton of other things
that need to be fixed, but I'm not going to work on them without a nice
clean starting point.

If the 4.x bump doesn't break existing, working, setups, then hopefully
I can just commit it and start on this list:

  https://bugs.gentoo.org/buglist.cgi?quicksearch=nagios

Porting it to a cluster (whatever that involves) would come after the
version bump.


 I think the Nagios user community is now splintered (it's been
 a while since I looked at Nagios seriously) cause the main dude
 became such a *F!zt* so that most users left his fiefdom. Has that
 changed? Do illuminate the recent history of Nagios, please?
 

You give me too much credit. I know that it forked into Icinga, but
Nagios is still being developed upstream. We use it as a glorified
`ping` that likes to wake me up at 4am, and it still works just fine for
that, so I haven't worried too much about the politics.




[gentoo-user] Re: ceph on gentoo?

2014-12-23 Thread James
Stefan G. Weichinger lists at xunil.at writes:


  Though this was a year ago or so. Your mileage may vary and it is
  likely that during this year stability was improved. Ceph is very
  promising by both design and capabilities.

 I expect that there were many changes over the time of a year ... they
 went from v0.72 (5th stable release) in Nov 2013 to v0.80 in May 2014
 (6th stable release) ... and v0.87 in Oct 2014 (7th ...)
 We get 0.80.7 in ~amd64 now ... I will see.
 Ad slow: what kind of hardware did you use and how many nodes/osds?

I too am building up a (3 node) cluster on btrfs/ceph. 
My hardware is AMD 8350 (8 cores) with 32G of ram on each mobo. I have water
coolers installed and intend to crank up to 6GHz after the cluster is
stable. My work has been idle for about a month due to other, more pressing,
needs. My cluster will be openrc centric, many others are systemd centric. ymmv.

I intend to run mesos+spark to keep some codes in-memory and thus
only write out to HD, when large jobs are finished. Here is the lab
that is pushing the state of the art on in-memory computations [1].
Spark is now managed under the Apache umbrella of projects.

I believe that most of the current problems folks encounter with btrfs+ceph,
are related to the need to tune the underlying linux
kernels with advanced tools and testing [2].

I think there is an ebuild (don't remember where) that puts trace-cmd,
ftrace and kernel shark into a gentoo gui package. I opened a bug on
BGO (Bug 517428), but so far it is still in search of a maintainer.


I hope an active group of gentoo-clustering emerges after the herds/projects
at gentoo are re-organized. The science herd/project
is your best bet for folks with similar interests in gentoo clusters,
 imho [3].


hth,
James


[1] https://amplab.cs.berkeley.edu/

[2] http://lwn.net/Articles/425583/

[3] http://wiki.gentoo.org/wiki/Project:Science/Overlay






Re: [gentoo-user] Re: configure.ac and Makefile.am easy_view ?

2015-03-28 Thread Jc García
2015-03-28 14:43 GMT-06:00 James wirel...@tampabay.rr.com:

 likewise I've been hacking at ebuilds for apache (spark and mesos)
 The spark file are still under /var/tmp/portage/sys-cluster but the mesos
 files, compiled just yesterday are not under /var/tmp/portage.  The
 same is true for ebuild in the portage tree. Some are there, some remain
 and others who knows.

This depends on whether the ebuild was successfully built; if so,

 ebuild ${ebuild_path} clean

is called and all the files under /var/tmp/portage/cat/pkg are removed.

When I'm trying to make an ebuild I only use the ebuild(1) tool for
building, and manually call phases up to where I want to stop (prepare,
configure, compile, etc.), and only when

ebuild ${ebuild_path} package

Has completed successfully, I use emerge so I don't get unwanted cleanups.
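
Spelled out against the spark ebuild path from earlier in the thread, that
phase-by-phase workflow looks roughly like:

# regenerate the Manifest, then run the phases one at a time, stopping
# wherever you want to poke at the work dir under /var/tmp/portage
ebuild /usr/local/portage/sys-cluster/spark/spark-1.1.0.ebuild manifest
ebuild /usr/local/portage/sys-cluster/spark/spark-1.1.0.ebuild prepare
ebuild /usr/local/portage/sys-cluster/spark/spark-1.1.0.ebuild compile
ebuild /usr/local/portage/sys-cluster/spark/spark-1.1.0.ebuild install
# once everything works, build the binary package and merge with emerge
ebuild /usr/local/portage/sys-cluster/spark/spark-1.1.0.ebuild package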

 So with this in mind, how do I tag certain ebuilds to at least save the
 configure.ac and Makefile.am files only for selected ebuild; either of which
 may be in /usr/portage or /usr/local/portage?  (sorry for not being more
 clear). Is this just some inconsistency in how various ebuilds are 
 constructed?

I don't see the utility of having these files apart from their source
tree; if you modified them, then make patches. I usually unpack the
sources to a directory under $HOME, or clone the upstream repo, try to
get a working build as my user (I build experimental stuff with
--prefix=$HOME/opt/ so my system stays clean), make a patch of my
changes if any, and then try to introduce the patch into an ebuild.



[gentoo-user] Re: gcc-5.0 ?

2015-04-23 Thread james
Stefan G. Weichinger lists at xunil.at writes:

 
 On 23.04.2015 10:12, Helmut Jarausch wrote:
 
  I've just renamed the  gcc-6.0.0_alpha20150412.ebuild (the 6 must be
typo) from toolchain overlay
  to gcc-5.1.0.ebuild which I have attached.
  It worked just fine here.

very cool! thx.




 I now try to build gcc-5.1.0 with itself ... and maybe later I will try
  at system in a btrfs-subvolume.

Hello Stefan,

Very interesting. You do know that both cephfs-0.94 and gcc-5.1.x
have support for RDMA. It should really speed up some applications,
particularly if you are running Apache:(spark|storm) or other 
in-memory codes on top of Apache-mesos (ebuild in BGO).

The recently released (in portage) dev-java/sbt has gotten me much further
along toward a working apache-spark ebuild, also in BGO.

So things are rocking for low-latency HPCC in gentoo. I only regret
that somebody smarter than me is not doing all of this. NONE of the
old gentoo linux cluster devs are much interested in putting together
a gentoo cluster from 100% sources, and I find that most baffling,
particularly Donnie Berkholz. Many are using clusters at their work,
based on other distros, but little effort is being expended to bring
100% source solutions for clustering to gentoo.

I do find lots of solutions for containers on remote (vendor) clouds and
binaries for hadoop and such, but nothing so that the rank and file gentoo
communities can build their High Performance Computing Clusters (HPCC) from
100% sources. Strange, real strange, at least from where I sit.

THANKS for the help.
James












[gentoo-user] Re: Install PreQualifying Matrix

2015-08-21 Thread James
Rich Freeman rich0 at gentoo.org writes:


  for (BS) Big Science, imho. BS needs all resources solving and 
  supporting  a single problem, with as low of latency as possible.

 What kind of latency are you expecting to get with Gentoo running on
 CoreOS?  A process inside a container is no different from a process
 outside a container as far as anything other than access/visibility
 goes.  They're just processes as far as the kernel is concerned.
 Sure, it isn't quite booting with init=myscieneapp but it is about as
 close as you'll get to that.


I'm not planning on running gentoo on CoreOS; so apologies if that is
confusing. I'm intending on running a stripped and optimized gentoo OS
and linux kernel as close to bare metal as I can. gcc5 is targeted at both
system, GPU and distributed resource compiling (RDMA).

Mesos + spark + tachyon + storm + RDMA + GCC-5.x is a killer platform
for clustering. It supports some traditional as well as radical frameworks.
Mesos is exploding with new frameworks and is planning support for many
languages. There is a bug on BGO for apache-spark that needs a really
talented Java hack to solve. There is also an upcoming mesos conference in
Ireland [1] that any Euro_hack interested in clustering should attend. Many
companies are hiring talent and paying a 50% premium, particularly if you
can admin, code, and compile, and know a bit of basic clustering.


[1] http://events.linuxfoundation.org/events/mesoscon-europe




[gentoo-user] Re: post build files

2014-09-11 Thread James
Rich Freeman rich0 at gentoo.org writes:


 If you literally want a list of everything that was installed by a
 Gentoo ebuild, then the simplest thing is to run qlist from
 app-portage/portage-utils.

Wonderful idea. Despite having used qlist many times, it never dawned
on me to use it for this purpose.
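
(For the archives, the qlist invocation is simply something like:)

# list every file installed by a given package
qlist sys-cluster/spark

# or list all installed packages by name
qlist -I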


 If you're trying to learn how ebuilds work, devmanual.gentoo.org is
 the definitive resource.  If you have specific questions feel free to
 ask, but just about anything you want to know is there.  By all means
 try reading a few ebuilds to get a hang for things as well, but I'd
 start with simple ones (avoid trying to learn by looking at packages
 that use complex eclasses).  The simplest ebuilds tend to be for
 simple, standalone programs.

Yes, I've been all over this. It's on to much of the Apache clustering
codes that are not simple to configure in an ebuild. Besides the raw
package codes, like mesos, spark, scala, cassandra, etc., there are a
multitude of fast-moving codes written in Java and Python that
need to be tested. Java is not difficult, but voluminous. Every problem
somebody encounters gets solved by some Java bolt-on code, rather
than fixing/extending the main (mesos) source code. As an old C hack,
it's a tough pill to swallow, but I'm pursuing this as best I can.
I sure feel empathy for the java herd, but hey, now we are doing away
with herds?


 Keep in mind that ebuilds work by extending functions defined by PMS,
 so an ebuild can contain fairly little content and yet be fairly
 functional (the default functions are running for most of the build
 phases).  The idea is if all a package does is run configure ; make ;
 make install it often needs almost nothing in the ebuild to work.

Yea, I got this (mostly... still new skills), but it is the fancy footwork
in the Java and Python worlds that keeps me doing more reading and research
than coding/compiling/testing of the clustering goodies.

That said, I'm a bit stressed about 'maven'. We only have maven-bin.
Much is dependent on it and Java. For many reasons, Java is not well
supported in Gentoo. I just hope I do not have to leave the Gentoo
distro, because much of what I need (clustering) is Java-critical.

You think I can build a gentoo cluster based on these? [1] [2]
Does the Gentoo dev team operate a robust gentoo cluster for gentoo
development needs? Should they?

Clusters that perform best are built on mesos/spark. Spark is
an in-memory computational environment for mesos clusters,
and it will change *everything* as systems become richly adorned
with ample, low-cost RAM and consolidate into amazing clusters [3].
RDD, Resilient Distributed Data, is changing everything that is
computationally intensive.


I do appreciate all of the wonderful folks at gentoo (devs and users).
I have been using Gentoo since early 2004. But now I need robust
clustering, and from the open source research work I have performed,
Apache is the only viable choice. Ubuntu has it. Debian has it. RedHat
has it. CentOS has it. But we cannot (will not?) sustain Apache-style
clustering at Gentoo because...? (Dunno the reason; nothing seems plausible.)

PS, I *HATE* oracle more than most, but for me that is not a valid
reason for piss_poor Java support. Google runs Java on top of embedded
linux (it's called Android). We even had Android on Gentoo;
it's called Gentroid [4]. If folks would just get over it (java_baggage),
we could have a robust Java platform on Gentoo; or am I missing something?

For my needs, I do not see a path forward for Gentoo without robust Java
support. Hell, I'd even be willing to pay for Java support to develop
the Java (source-code-based) tools; but in the past that idea has been very
frowned upon by the gentoo think tank. But devs with their own agendas
can spend their time doing exactly what they want (things gentoo needs); I
just cannot pay somebody to do the same (with Java)?

As hard as I can, I'm working on this, but I'm no Java maverick,
far from it.

(Actually, this is just to get it off my chest. I feel better now;
no need for anyone to reply, unless you are interested in working
on Apache-mesos/spark or java/maven.) Please, all flames in a new
thread.




James

[1] http://wiki.gentoo.org/wiki/Cluster

[2] http://wiki.stoney-cloud.org/wiki/Main_Page

[3] http://www.cs.berkeley.edu/~pwendell/strataconf/api/core/spark/RDD.html

[4] https://code.google.com/p/gentroid/






Re: [gentoo-user] Re: Anyone switched to eudev yet?

2012-12-27 Thread Mark David Dumlao
On Fri, Dec 28, 2012 at 12:13 AM, Dale rdalek1...@gmail.com wrote:
 Mark David Dumlao wrote:
 On Thu, Dec 27, 2012 at 4:42 AM, Dale rdalek1...@gmail.com wrote:
 Mark David Dumlao wrote:
 On Tue, Dec 25, 2012 at 10:38 AM, Dale rdalek1...@gmail.com wrote:
 Feel free to set me straight tho.  As long as you don't tell me my
 system is broken and has not been able to boot for the last 9 years
 without one of those things.  ROFL
 Nobody's telling you _your_ system, as in the collection of programs
 you use for your productivity, is broken. What we're saying is that
 _the_ system, as in the general practice as compared to the
 specification, is broken. Those are two _very_ different things.
 From what I have read, they are saying what has worked for decades has
 been broken the whole time.  Doesn't matter that it works for millions
 of users, its broken.
 Yes, that is exactly what they are saying. What I am pointing out,
 however, is that there is, informally, a _technical meaning_ for the
 word broken, which is that the specs don't match the
 implementation. And in the case of /usr, the specs don't match the
 implementation. For like, maybe all of the Linuxen.

  They say it is broken so they can fix it with a
 init thingy for EVERYONE.  Sorry, that's like telling me my car has been
 broken for the last ten years when I have been driving it to town and it
 runs just fine.
 NOBODY is telling you your system or that the systems of millions of
 users out there aren't booting. You're assigning emotional baggage to
 technical language.

 To push your analogy, oh, your car is working just fine. Now anyone
 with a pair of spark plugs and a few tools may be able to start it
 without you, but your startup _works_. Now imagine some German
 engineer caring nothing about you lowly driver, and caring more about
 the car as a system, and he goes using fancy words like
 authentication systems and declaring that all cars have a flaw, or
 more incensingly, car security is fundamentally broken (Cue angry
 hordes of owners pitchfork and torching his house).

 Thing is, he's right, and if he worked out some way for software to
 verify that machine startup was done using the keys rather than spark
 plugs, he'd be doing future generations a favor in a dramatic
 reduction of carjackings. And if somehow it became mandated for future
 cars to have this added in addition to airbags and whatnot, it'd annoy
 the hell out of car makers but overall still be a good thing.

 I think your analogy actually proves my point.  Instead of just getting
 in the car and turning the key, they want to reinvent the engine and how
 it works.  It doesn't matter that it is and has been working for decades,

I think your reaction proves my point about angry mobs torching his
home without understanding what's being proposed. Your fine reading
comprehension once again failed to catch the notion that in my
analogy, all he invented was a mechanism that makes sure it was a key,
not a spark plug, that did the starting. i.e., you're asking literally
for a turnkey system, and that's literally what he invented, except
that the system guarantees that it's a key that was turned.

You have not said a THING about your misunderstanding of the use of
the word _broken_ and you're continuing to peddle your hate-boner even
after it's been shown that you're confused.

--
This email is:[ ] actionable   [ ] fyi[x] social
Response needed:  [ ] yes  [x] up to you  [ ] no
Time-sensitive:   [ ] immediate[ ] soon   [x] none



[gentoo-user] Re: File system testing

2014-09-17 Thread James
J. Roeleveld joost at antarean.org writes:


  Distributed File Systems (DFS):

  Local (Device) File Systems LFS:

 Is my understanding correct that the top list all require one of 
 the bottom  list?
 Eg. the clustering FSs only ensure the files on the LFSs are 
 duplicated/spread over the various nodes?

 I would normally expect the clustering FS to be either the full layer 
 or a  clustered block-device where an FS can be placed on top.

I have not performed these installations yet. My research indicates
that first you put the local FS on the drive, just like any installation
of Linux. Then you put the distributed FS on top of this. Some DFS might
not require an LFS, but FhGFS does, and so does HDFS. I will not actually
be able to accurately answer your questions until I start to build
up the 3-system cluster (a week or 2 away is my best guess).


 Otherwise it seems more like a network filesystem with caching 
 options (See  AFS).

OK, I'll add AFS. You may be correct on this one  or AFS might be both.

 I am also interested in these filesystems, but for a slightly different 
 scenario:

OK, so as the test-dummy-crash-victim, I'd be honored to have you,
Alan, Neil, Mic, etc. etc. back-seat-drive on this adventure! (The more
I read, the more it's time for bourbon, bash, and a bit of cursing
to get started...)


 - 2 servers in remote locations (different offices)
 - 1 of these has all the files stored (server A) at the main office
 - The other (server B - remote office) needs to offer all files 
 from serverA  When server B needs to supply a file, it needs to 
 check if the local copy is still the valid version. 
 If yes, supply the local copy, otherwise download 
 from server A. When a file is changed, server A needs to be updated.
 While server B is sharing a file, the file needs to be locked on server A 
 preventing simultaneous updates.

Ooch, file locking (precious tells me that is always tricky).
(Psst, systemd is causing fits for the clustering geniuses;
some are espousing a variety of cgroup gymnastics for phantom kills.)
Spark is fault-tolerant, regardless of node/memory/drive failures,
above the fault tolerance that a file system configuration may support.
In fact, files lost can be 'regenerated', but it is computationally
expensive. You have to get your file system(s) set up. Then install
mesos-0.20.0 and then spark. I have mesos mostly ready. I should
have spark in alpha-beta this weekend. I'm fairly clueless on the
DFS/LFS issue, so a DFS that needs no LFS might be a good first choice
for testing the (3) system cluster.


 I prefer not to supply the same amount of storage at server B as 
 server A has. The remote location generally only needs access to 5% of 
 the total amount of files stored on server A. But not always the same 5%.
 Does anyone know of a filesystem that can handle this?

So in clustering, from what I have read, there are all kinds of files
passed around between the nodes and the master(s). Many are critical
files not part of the application or scientific calculations.
So in time, I think in a clustering environment, all you seek is
very possible, but that's a hunch, a gut feeling, not fact. I'd put
raid mirrors underneath that system, if it makes sense, for now,
or just dd the stuff with a script or something kludgy (Alan is the
king of kludge).

On gentoo planet one of the devs has Consul in his overlays. Read
up on that for ideas that may be relevant to what you need.


 Joost

James
 







Re: [gentoo-user] cannot burn cd: permissions error

2008-12-10 Thread Allan Gottlieb
At Wed, 10 Dec 2008 11:00:39 +0100 [EMAIL PROTECTED] (Joerg Schilling) wrote:

 Allan Gottlieb [EMAIL PROTECTED] wrote:

 Joerg believes that cdrkit is not as good as cdrtools (I have used only
 cdrtools and it works well for me).

 Believes is less that knows.

I was/am trying to sound neutral and not spark a long debate.

 I know that if you take a very old source and add new bugs that the
 result cannot be better than the maintained original.

 In contrary to the people behing cdrkit,

No comment.

 I carefully listen to the problems uf the users and I add bug-fixes
 for cdrtools or workarounds for defective drive firmware or
 conceptional bugs in e.g. the Linux kernel.

I very much agree and appreciate your helpful comments on this mailing
list with regard to technical questions involving use of cdrtools

 As a result, there are much less problems with the original software
 than with the fork.

I have not used the fork, but will say that cdrtools works well for me,
thanks in part to your helpful technical comments on this list.

thank you for cdrtools,
allan



Re: [gentoo-user] ZIC, aka setting the time zone.

2012-01-17 Thread Neil Bothwick

On Tue, 17 Jan 2012 13:41:48 +0100, Hinnerk van Bruinehsen wrote:

  Symlinking is not recommended as it breaks when /usr is on a
  separate filesystem. The file should be copied instead.

  
  Are you sure you're not confusing that with hardlinking?
  
  Because AFAIK symlinking is the only linking that can cross
  filesystem borders.

It can... if the filesystem is mounted at the time. AFAIR this causes
problem setting the timezone at boot time.

  Symlinking works (over filesystem borders, too, Pandu is right) and
 it even autoupdates localtime when (why ever) something in zoneinfo
 changes...

The localtime files change all the time; look at how often timezone-data
is updated. Every time some bright spark comes up with another clever way
of squeezing 25 hours into a day, his country's DST rules change. That's
why openrc has a setting to manage this automatically for you.
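
(For the record, the copy-instead-of-symlink recipe on Gentoo usually looks
roughly like this; the zone name is only an example, and the exact
openrc/timezone-data knob may be named differently:)

  cp /usr/share/zoneinfo/Europe/London /etc/localtime
  echo "Europe/London" > /etc/timezone   # so timezone-data updates can refresh it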


- -- 
Neil Bothwick

Make like a tree and leave.


Re: [gentoo-user] ZIC, aka setting the time zone.

2012-01-17 Thread Neil Bothwick

On Tue, 17 Jan 2012 14:13:30 +0100, Hinnerk van Bruinehsen wrote:

  The localtime files change all the time; look at how often
  timezone-data is updated. Every time some bright spark comes up with
  another clever way of squeezing 25 hours into a day, his country's
  DST rules change. That's why openrc has a setting to manage this
  automatically for you.

  I know that it happens, I just fail to understand why... ;)

It's a government/civil service thing, you're not supposed to understand.
Just be a good boy and vote for them next time :-/


- -- 
Neil Bothwick

Windows - software package to turn a 486 into an Etch-A-Sketch!


[gentoo-user] Re: Recommendations for scheduler

2014-08-02 Thread James
Alan McKinnon alan.mckinnon at gmail.com writes:


 Well, we've found 2 projects that at least in part seek to achieve our
 general goals - chronos and Martin's new project.
 Why don't we both fool around with them for a bit and get a sense of
 what it will take to add features etc? Then we can meet back here and
 discuss. Always better to build on an existing foundation

Mesos looks promising for a variety of (Apache) reasons. Some key
technologies folks may want google about that are related:

Quincy (fair scheduler)
Chronos (scheduler)
Hadoop (scheduler)
HDFS (clustered file system)
http://gpo.zugaina.org/sys-cluster/apache-hadoop-common

Zookeeper (fault tolerance)
Spark (optimized for iterative jobs where a dataset is reused in many
parallel operations: advanced math/science and many other apps)
https://spark.apache.org/

Dryad, Torque, MPICH2, MPI
Globus toolkit

mesos_tech_report.pdf

It looks as though Amazon, Google, Facebook and many other large players
in the cluster/cloud arena are using Mesos?

So let's all post what we find, particularly in overlays.

hth,
James




[gentoo-user] Re: openjdk-6-jdk

2014-08-25 Thread James
Jc García jyo.garcia at gmail.com writes:



 
 Just dropping an idea for your ebuild, you might have this planned
 but anyway, I would put something like 'virtual/jdk:1.6' in RDEPEND,
 so if things work as they should(but that's not realistic), any of the
 java implementations in the tree would provide jdk6 for your ebuild.

I installed icedtea-bin for now.

From the ebuild (which is a hack) I have :

DEPEND=net-misc/curl
dev-libs/cyrus-sasl
python? ( dev-lang/python dev-python/boto )
java? ( virtual/jdk )


It seems it would not compile until I installed the maven-bin
package, which is not a prerequisite in the ebuild,
but I saw that maven was required for building mesos on another
distro.

Like I said, it's a hack, but I'll get it cleaned up; because nobody
else seemed motivated to get mesos running on gentoo.

Now it's off to get spark [1] and hadoop [2] happy on gentoo...

happy, happy happy


James

[1] https://spark.apache.org/

[2] http://hadoop.apache.org/







[gentoo-user] Eapi 6 ?

2014-09-16 Thread James
Howdy,

I've read snippets that EAPI 6 will provide a mechanism for 
folks to put patches directly into ebuilds. I'm not certain
(some discussion needed) whether this will eliminate some ebuilds  
from my /usr/local/portage development repository.


However, now I'm learning and hacking on 2 different bleeding-edge
technologies: clusters (mesos, spark etc.) and Java (maven etc.).
I am but a follower at this time on those two bleeding-edge fronts.
But code is released multiple times during the day/week that I need
to test. So, in my limited understanding, EAPI 6 looks absolutely
wonderful.


So, since I'm only hacking at ebuilds for my own needs (currently
not able to produce anything that is not embarrassing), can I
start building ebuilds that use EAPI-6? I understand that it
is not finalized yet. But, if the user-supplied patching is
at least workable, I'd rather get busy learning/testing those new
EAPI-6 tricks.
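
For what it's worth, the user-patch side of EAPI 6 (eapply_user) ends up
looking roughly like this; the category/package and patch name below are
only examples:

  mkdir -p /etc/portage/patches/sys-cluster/spark
  cp my-local-fix.patch /etc/portage/patches/sys-cluster/spark/
  emerge -1 sys-cluster/spark   # patches get applied during src_prepare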


thoughts and comments and insight are most welcome.


James







[gentoo-user] Re: freeSwitch

2016-04-27 Thread James
  ccs.covici.com> writes:


> > How do you install freeswitch, just a raw compile from sources? 
> > Or is there an ebuild somewhere that I   missed?

> I just did the configure, make, make install combination, it by default
> installs in /usr/local/freeswitch -- even the binaries, so its somewhat
> isolated.  but check the docs first, you will need some libraries which
> are not in gentoo and must be gotten from their repository just to
> build.  I think I had to download and install a dozen packages from
> source.

Thanks for the info. Did you ever file a bug to have an ebuild created?
BGO searching did not find one, and freeswitch just seems like cool
software to add to gentoo.

Oh, I almost forgot to mention: what caught my eye was that I ran
across instructions for installation on a Raspberry Pi 2 [1],
and I was wondering whether a larger PBX (IP switch) could be built
by clustering Rpi3 devices together.

Lots of projects seem to be centered on Rpi3 clusters these days. [2]
Thanks again for the feedback.

James

[1]
http://www.algissalys.com/how-to/freeswitch-1-7-raspberry-pi-2-voip-sip-server

[2] https://www.raspberrypi.org/magpi/pi-spark-supercomputer/





Re: [gentoo-user] Re: gigabyte mobo latency

2014-10-19 Thread thegeezer
On 19/10/14 04:15, James wrote:
 thegeezer thegeezer at thegeezer.net writes:


 there is a little more here
 http://gentoo-en.vfose.ru/wiki/Improve_responsiveness_with_cgroups
 which will allow you to script creating a cgroup with the processID of
 an interactive shell, that you can start from to help save hunting down
 all the threads spawned by chrome.
 you can then do fun stuff with
 echo $$ > /sys/fs/cgroup/cpu/high_priority/tasks
 Yea this is cool. But when it's a cluster, with thousands of processes
cgroups are hierarchical, so for example if you start a bash script
which is in cgroup cpu/high_prio which then starts your processes, all
called programs go into the same cgroup which makes it a bit simpler.
also openrc will start your services in the correct cgroup too
 this seems to be limited by the manual parsing and CLI actions that
 are necessary for large/busy environments. (We shall see).

 hopefully this will give you a bit more control over all of that though

 Gmane mandates that the previous lines be culled. That said, you have given
 me much to think about, test and refine. 

 In /sys/fs/cgroup/cpu   I have:

 cgroup.clone_children  cgroup.procs  cpu.shares  release_agent
 cgroup.event_control cgroup.sane_behavior notify_on_release  tasks

 So I'll have to research creating and prioritizing dirs like high_priority


 I certainly appreciate your lucid and direct explanations.
 Let me play with this a bit and I'll post back when I munge things
 up.   Are there any graphical tools for adjusting and managing
 cgroups?  
i thought that htop did this but i was wrong.. it only shows which
cgroup processes are in. that would be a killer feature though.
 Surely when I apply this to the myriad of things running
 on my mesos+spark cluster I'm going to need a well thought-out tool
 for cgroup management, 
especially for non-local systems.  other distros have apps such as
cgclassify which provide some shortcuts to managing cgroups --
creating groups and moving processes in and out.
you can also have a nohup process that does ps -eLf to search for
processes you want to classify and move them into the appropriate cgroup;
for default cgroups you can also use inotify.
a quick search shows http://libcg.sourceforge.net/ which daemonises this
process.
all this is a bit hack'n'slash though i appreciate, so if anyone else
knows of suitable tools i'd also be interested to hear of them
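
to make the earlier cpu/high_priority idea concrete, a rough cgroup-v1
sketch (group name and share value are arbitrary; assumes the cpu controller
is mounted at /sys/fs/cgroup/cpu as in the listing above):

  mkdir /sys/fs/cgroup/cpu/high_priority
  echo 2048 > /sys/fs/cgroup/cpu/high_priority/cpu.shares   # default is 1024
  echo $$ > /sys/fs/cgroup/cpu/high_priority/tasks          # this shell...
  firefox &                                                 # ...and its children
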
 particularly on memory resource organization
 and allocations, as spark is an in-memory environment that seems 
 sensitive to OOM issues of all sorts.

 thx,
 James








[gentoo-user] Mesos update

2015-04-13 Thread James

I recently ran across this wonderfully concise intro to mesos [1].
mesos.0.2.ebuild was posted to BGO-510912. I'm still looking for
a home for these hacked ebuilds, related to a clustering theme,
if anyone has any suggestions. Perhaps a gentoo wiki page pointing
to the various locations of (mesos) compatible ebuild?

Cisco is now pushing mesos with ansible (very cool project) [2]. Cisco
is promoting an opensource approach to microservices using ansible.

Zookeeper, spark, mesos, consul and many other codes that are part of the
clustering codes, can be found in portage, various overlays and 
BGO. /usr/portage/sys-cluster/ is a quick list of what is in the tree. Folks can
drop me some email if there is some software that they are interested in for
gentoo based clusters. There is a growing interest among many folks that use
gentoo, so maybe we can reconstitute the gentoo cluster herd or collect up
around some repo location, or just club it up in an overlay  repo somewhere? 


There are other truly exciting codes, like apache-storm, that
hold great promise for what gentoo based clusters and linux based cloud
services (or microservices) will be able to offer. It is exciting to see
gentoo positioned very well as one of the best platforms (minus java) for
innovation as we witness a myriad of devices all beginning to share video
and services, in an almost seamless fashion. Clusters will be the engine
that drives this convergence, imho. Marketing types refer to this as
the Internet of Things. I'm also looking for suggestions on a new
Android phone or a linux-based phone (T-Mobile or Sprint), where
you have the ability to update the Android OS without extraordinary
measures. One that could be rooted or run SELinux would be keen.


Some example apps that are open source for studying would also help, as I'm
quite new to the app development cycle and prefer to use gentoo for this
sort of development work.

Many large corporations are taking a keen interest in Apache (mesos +
spark). Cisco just strikes me as both odd and validation of apache-mesos.


hth,
James


[1]
http://opensource.com/business/14/9/open-source-datacenter-computing-apache-mesos

[2]
https://github.com/CiscoCloud/microservices-infrastructure/blob/0.2.0/CHANGELOG.rst




Re: [gentoo-user] Re: Anyone switched to eudev yet?

2012-12-26 Thread Mark David Dumlao
On Thu, Dec 27, 2012 at 4:42 AM, Dale rdalek1...@gmail.com wrote:
 Mark David Dumlao wrote:
 On Tue, Dec 25, 2012 at 10:38 AM, Dale rdalek1...@gmail.com wrote:
 Feel free to set me straight tho.  As long as you don't tell me my
 system is broken and has not been able to boot for the last 9 years
 without one of those things.  ROFL
 Nobody's telling you _your_ system, as in the collection of programs
 you use for your productivity, is broken. What we're saying is that
 _the_ system, as in the general practice as compared to the
 specification, is broken. Those are two _very_ different things.

 From what I have read, they are saying what has worked for decades has
 been broken the whole time.  Doesn't matter that it works for millions
 of users, its broken.

Yes, that is exactly what they are saying. What I am pointing out,
however, is that there is, informally, a _technical meaning_ for the
word broken, which is that the specs don't match the
implementation. And in the case of /usr, the specs don't match the
implementation. For like, maybe all of the Linuxen.

  They say it is broken so they can fix it with a
 init thingy for EVERYONE.  Sorry, that's like telling me my car has been
 broken for the last ten years when I have been driving it to town and it
 runs just fine.

NOBODY is telling you your system or that the systems of millions of
users out there aren't booting. You're assigning emotional baggage to
technical language.

To push your analogy, oh, your car is working just fine. Now anyone
with a pair of spark plugs and a few tools may be able to start it
without you, but your startup _works_. Now imagine some German
engineer caring nothing about you lowly driver, and caring more about
the car as a system, and he goes using fancy words like
authentication systems and declaring that all cars have a flaw, or
more incensingly, car security is fundamentally broken (cue angry
hordes of owners pitchforking and torching his house).

Thing is, he's right, and if he worked out some way for software to
verify that machine startup was done using the keys rather than spark
plugs, he'd be doing future generations a favor in a dramatic
reduction of carjackings. And if somehow it became mandated for future
cars to have this added in addition to airbags and whatnot, it'd annoy
the hell out of car makers but overall still be a good thing.

And here the analogy is holding up: NOBODY is breaking into your car
and forcefully installing some authentication system in its startup.
And NOBODY is breaking into your servers and forcing you to switch to
udev/systemd or merged /usr. You can still happily plow along with
your system as is. Heck, you can even install current udev without
changing your partition setup. Just modify the ebuild and have it
install it into / instead of /usr. Or use an early bootup script. Or
use an init thingy.


 The udev/systemd people sound like politicians.

If anything, Lennart is the worst possible politician on the planet.
He makes unpopular decisions, mucks around in stuff people don't want
touched, talks snide and derisively, etc etc etc, because he's a
nerd's nerd that knows nothing about PR and goodwill. The software is
good, but that's about all he knows how to write. He's like DJB on
crack.
--
This email is:[ ] actionable   [ ] fyi[x] social
Response needed:  [ ] yes  [x] up to you  [ ] no
Time-sensitive:   [ ] immediate[ ] soon   [x] none



[gentoo-user] Clusters on Gentoo ?

2014-08-06 Thread James
Howdy one and all,

Many see a world where clusters abound, even for the small business and
resource-capable enthusiast [1]. Clusters of old PCs are the norm, but a slew
of new, extremely low-powered 64-bit embedded systems, running embedded linux,
with ample ram (ddr4 even) and up to (8) SATA-3 ports, will undoubtedly
be the targets of acquisition by hobbyists around the world. Others with more
salient goals are sure to follow!


For example, we (Gentoo) have just had one of the titans of the embedded
linux world return to Gentoo. Linaro is the default industry group that
is leading the charge in new development for linux-based embedded systems,
sharing most of their work with the larger open source communities.
Thomas Gall, aka tgall, is working for Linaro as the acting director of the
Linaro Mobile Group [8,9].  Clusters will seamlessly integrate CPUs, GPUs,
ARMs, FPGAs, SoCs and many other instantiations of computational resources,
sooner rather than later. The billion-dollar players already run these sorts
of amalgamations for a very wide variety of reasons, so why shouldn't the
bands of linux_commoners have access to such raw power? [10]


In a recent thread (schedulers) it was noted that several folks had more
than a passing interest in clusters (privately operated clouds).
Companion projects, such as Apache's Spark [4], have tremendous potential
as aggressive solutions in such diverse fields as social media relationships,
distributed database techniques and new, massively parallel programming
paradigms for computationally intensive scientific endeavors, just to
mention a few [5,6,7].


So I'm soliciting the readers of this list to post any references to
distributed/cluster/cloud software/filesystems they are aware of, have used
or would like to see, to gauge interest in Mesos, Chronos, Spark (apache) as
well as all other open source cluster (distributed) systems or tools [2].
My collection of such is sporadic, at best, and serves mostly my
math/science needs. Project Athena is one of the oldest efforts, still
kicking at MIT, the last I heard [3]. Newer/cooler efforts?


Hopefully, we can all share ideas and brainstorm about how Gentoo users
can lead the pack of linux distros into this brave_new world. [Overlays?]


curiously,
James



[1]
http://www.forbes.com/sites/marcochiappetta/2014/07/31/amd-opteron-64-bit-arm-based-seattle-dev-kits-are-shipping/?partner=yahootix

[2] http://hadoop.apache.org/docs/r1.2.1/cluster_setup.html

[3] https://ist.mit.edu/athena

[4] https://spark.apache.org/docs/latest/index.html

[5] https://spark.apache.org/docs/latest/graphx-programming-guide.html#overview

[6] http://en.wikipedia.org/wiki/Apache_Hadoop

[7] http://www.wired.com/2012/04/amazon-takes-genomics-research-to-the-clouds/

[8] http://www.gossamer-threads.com/lists/gentoo/dev/289556

[9] http://www.linaro.org/

[10] http://opencores.org/




[gentoo-user] Re: OOM memory issues

2014-09-18 Thread James
Kerin Millar kerframil at fastmail.co.uk writes:


 The need for the OOM killer stems from the fact that memory can be 
 overcommitted. These articles may prove informative:

 http://lwn.net/Articles/317814/

Yea, I saw this article. It's dated February 4, 2009. How much has
changed with the kernel/configs/userspace mechanism? Nothing, everything?



http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html

Nice to know.

 In my case, the most likely trigger - as rare as it is - would be a 
 runaway process that consumes more than its fair share of RAM. 
 Therefore, I make a point of adjusting the score of production-critical 
 applications to ensure that they are less likely to be culled.

OK, I see the manual tools for the OOM killer. Are there any graphical tools
for monitoring, configuring and controlling OOM-related files and target
processes? Is all of this performed by hand?
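
(For what it's worth, the by-hand adjustment Kerin describes boils down to
writing a value into /proc; the PID and score below are made up.)

  echo -500 > /proc/1234/oom_score_adj   # less likely to be killed; -1000 exempts it
  cat /proc/1234/oom_score               # resulting badness score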


 If your cases are not pathological, you could increase the amount of 
 memory, be it by additional RAM or additional swap [1]. Alternatively, 
 if you are able to precisely control the way in which memory is 
 allocated and can guarantee that it will not be exhausted, you may elect 
 to disable overcommit, though I would not recommend it.

I do not have a problem; it just keeps popping up in my clustering research,
frequently. Many of the clustering environments have heavy memory
requirements, so this will eventually be monitored, diagnosed and managed,
in real time, in the cluster software, for example for load balancing. These are
very new technologies, hence my need to understand both legacy and current
issues and solutions. You cannot always just add resources. Once set up,
you have to dynamically manage resource consumption, or at least that
is what the current readings reveal.


 With NUMA, things may be more complicated because there is the potential 
 for a particular memory node to be exhausted, unless memory interleaving 
 is employed. Indeed, I make a point of using interleaving for MySQL, 
 having gotten the idea from the Twitter fork.

Well, my first cluster is just (3) AMD FX-8350 boxes with 32G ram each.
Once that is working reasonably well, I'm sure I'll be adding
different (multi)processors to the mix, with different ram characteristics.
There is a *huge interest* in heterogeneous clusters, including but
not limited to GPU/APU hardware. So dynamic, real-time memory
management is quintessentially important for successful clustering.
  

 Finally, make sure you are using at least Linux 3.12, because some 
 improvements have been made there [2].

yep, [1] I always set up gigs of swap and rarely use it, for critical
computations that must be fast. Many cluster folks are building
systems with both SSD and traditional (raid) HD setups. The SSD
could be partitioned for the cluster and swap. Lots of experimentation
on how best to deploy SSDs with max ram in systems for clusters is
ongoing.


Memory management is a primary focus of Apache Spark (in-memory)
computations. Spark can be used with Python, Java and Scala, so it is very cool. 


 --Kerin
 [1] At a pinch, additional swap may be allocated as a file
 [2] https://lwn.net/Articles/562211/#oom

(2) is also good to know.

thx,
James









[gentoo-user] Re: configure.ac and Makefile.am easy_view ?

2015-03-28 Thread James
Todd Goodman tsg at bonedaddy.net writes:


 * Michael Orlitzky mjo at gentoo.org [150328 12:11]:
  On 03/28/2015 10:36 AM, James wrote:
   James wireless at tampabay.rr.com writes:

    Often, I need to inspect and ponder these files: configure.ac 
    and Makefile.am for a given ebuild. Is there an easy way to look at 
    them without compiling the ebuild? 


  Those files are part of the upstream tarball. The easiest way to fetch
  the sources without compiling them is with `emerge -f`. Then you can
  copy the tarball out of $DISTDIR and unpack it somewhere.

  Some ebuilds may patch configure.ac or Makefile.am -- in that case it's
  a little harder. I'm sure there's an elegant way to do it, but what I
  usually do is begin to emerge the package and Ctrl-C it when it starts
  compiling. Then you can find the sources under /var/tmp/portage.

 Wouldn't 'ebuild ebuild_file_name prepare' do what you want without
 trying to time a Ctrl-C?


The man page says: Prepares  the extracted sources by running the
src_prepare() function specified in the ebuild file. When src_prepare()
starts, the  current working  directory  will  be  set to ${S}.
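
So in practice something like this should leave the unpacked sources
(configure.ac, Makefile.am and all) sitting under the portage temp dir; the
category/package and overlay path are only examples:

  ebuild /usr/local/portage/sys-cluster/mesos/mesos-0.20.0.ebuild prepare
  ls /var/tmp/portage/sys-cluster/mesos-0.20.0/work/
  # or fetch-only, and unpack the tarball from DISTDIR yourself:
  emerge -f sys-cluster/mesos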

I work with /usr/local/portage/* on a variety of new ebuilds and other
hacks. Sometimes these files under /var/tmp/portage are persistent and
sometimes they are not. I'm not sure where you set the rules (configs?)
to keep various files around a while after cleaning up. Surely I've
experimented with the ebuilds of new codes I'm putting together, so
I'm looking for guidance and the semantics that other, more experienced folks
use with ebuilds. I just compiled seamonkey last night (Installed versions: 
2.33.1 (12:56:48 AM 03/28/2015)) and it is not there, but I also installed
firefox and it is there in /var/tmp/portage/www-client/firefox-24.5.0/.

 so # find . -print | grep -i '.*[.]ac'

./work/mozilla-esr24/toolkit/crashreporter/google-breakpad/configure.ac
./work/mozilla-esr24/toolkit/crashreporter/google-breakpad/src/third_party/glog/configure.ac
./work/mozilla-esr24/js/src/ctypes/libffi/configure.ac
./work/mozilla-esr24/memory/jemalloc/src/configure.ac
./work/mozilla-esr24/gfx/harfbuzz/configure.ac
./work/mozilla-esr24/modules/freetype2/builds/unix/configure.ac
./work/mozilla-esr24/media/webrtc/trunk/testing/gtest/configure.ac

and  # find . -print | grep -i 'Makefile.am'

./net-analyzer/iftop-1.0_pre4/work/iftop-1.0pre4/Makefile.am
./net-analyzer/iftop-1.0_pre4/work/iftop-1.0pre4/config/Makefile.am
./x11-misc/pcmanfm-0.9.10/work/pcmanfm-0.9.10/Makefile.am
./x11-misc/pcmanfm-0.9.10/work/pcmanfm-0.9.10/data/Makefile.am
snip

Likewise, I've been hacking at ebuilds for apache (spark and mesos).
The spark files are still under /var/tmp/portage/sys-cluster but the mesos
files, compiled just yesterday, are not under /var/tmp/portage.  The 
same is true for ebuilds in the portage tree: some are there, some remain,
and others, who knows.

So with this in mind, how do I tag certain ebuilds to at least save the
configure.ac and Makefile.am files, only for selected ebuilds, either of which
may be in /usr/portage or /usr/local/portage?  (sorry for not being more
clear). Is this just some inconsistency in how various ebuilds are constructed?

Ideas and comments are most welcome.

James





Re: [gentoo-user] info system maintainence / repair

2007-09-09 Thread Etaoin Shrdlu
On Sunday 9 September 2007, Alan E. Davis wrote:

 I wonder where to find information on how to install a file in the
 info system, manually.  Apparently the install-info command is used,
 the death troll from Debian.  Nuf sed.

 I have learned a little about the info system on Gentoo, but I need to
 know how to install an info file manually.  I'd rather leave it alone,
 but texinfo is one of the greatest things about both the GNU system
 and Emacs. I need to know how it works.  If I may be forgiven for
 ranting a LITTLE bit, the idea of automatically setting up info files
 was a spark of genius; but its implementation leaves something to be
 desired.  Before someone tells me to write a better one, I will put
 on my asbestos suit, but Gentoo is a kinder and gentler crowd, at
 least most of the time..  I'm sorry if my own remarks have become
 unnecessarily pointed.

 So where is the documentation on how to install a file into that
 system by hand?  I have attempted to RTFM.

http://www.gnu.org/software/texinfo/manual/texinfo/texinfo.html#Creating-and-Installing-Info-Files
http://www.gnu.org/software/texinfo/manual/texinfo/texinfo.html#Installing-an-Info-File

seems like the dir file is located at /usr/share/info/dir in gentoo 
(don't know whether texinfo updates overwrite it though).
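
A typical by-hand installation looks something like this (paths are the
usual defaults; adjust to taste):

  cp foo.info /usr/share/info/
  install-info /usr/share/info/foo.info /usr/share/info/dir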

Hope this helps.



[gentoo-user] Re: video driver discovery

2008-12-23 Thread James
Alan McKinnon alan.mckinnon at gmail.com writes:


  Look at lspci -v. It lists quite a few kernel drivers

 I'm not sure I follow you. 

It was just to answer your 'quip' that grep is my friend. I understand the
difference between a driver lock to hardware, and one that's part of X.

It was just an example to show you that lots of drivers can be discovered,
quite easily.


 Weren't you looking for the X video driver? You won't find that in lspci,


Obviously. That said, it should not be that obscure to discover the video driver
info. When it was part of dmesg (kernel) it was not hard. Now it's part of X
and one has to parse lots of stuff. Maybe my complaint needs to be registered
elsewhere (with the X devs).


 Another thing that people all too easily lose sight of is that if someone 
 wants such information as which X driver is loaded, then we assume that the 
 person knows enough about the system to know where to look and knows the 
 usual tools for looking there. In much the same way as we expect the car 
 mechanic to know where the spark plugs are and what they do.

Now this is the 'horseshit' logic that I used the lspci example to displace.
Quickly discerning drivers, whatever their venue, is of great importance.
That's why many are easy to discover. It seems to me that in the 'genius' move
of things to X, some forgot how easy it was to discern the video driver quite
a few kernel revs ago... Parsing the X log files is just a poor way to
make that information available. Think of the mass of new folks coming to linux;
think they'll be ready for that when something in X or their driver is
messed up?

Seriously, you sound very condescending here. A simple 'parse the X log
files' is apparently the state of the art for discerning X drivers.


Anyway, thanks for your help (and comments).

I'm done with this thread.

James











Re: [gentoo-user] Again: Critical bugs considered invalid

2007-06-06 Thread felix
Complaining TWICE worked.  The problem I complained about shouldn't
have happened in the first place; someone fixed something that wasn't
broken and made it broken.

Your response is absolutely typical of my problem with the gentoo dev
community.  You misstate a complaint, overreact to it, and apparently
feel pretty smug about your accomplishment.  No one will admit to the
two screwups (first breaking a working ebuild, second incorrectly
closing a bug on it).  Instead you lash out at those who point out
problems.

Yes, I had the wrong program when I complained about the color
problem.  But the gentoo community response then, as now, was to lash
out, scream and shout, not to actually investigate.  And when I
finally left the thread alone, you geniuses were still ranting about
it three days later when I next checked.

You folks may think you have a cool system, and it is in some ways and
could be in many others.  But I know many people who tried gentoo and
bailed precisely because of the shoot-the-messenger mentality so
pervasive here; the self-selected sample you see is meaningless.

Go ahead, have another three days' fun.  Maybe I'll spark some more
tinders in a month or two.  I wouldn't want to deprive you of your
fun.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o



Re: [gentoo-user] ZIC, aka setting the time zone.

2012-01-17 Thread Hinnerk van Bruinehsen

On 17.01.2012 14:06, Neil Bothwick wrote:
 On Tue, 17 Jan 2012 13:41:48 +0100, Hinnerk van Bruinehsen wrote:
 
 Symlinking is not recommended as it breaks when /usr is on a 
 separate filesystem. The file should be copied instead.
 
 
 Are you sure you're not confusing that with hardlinking?
 
 Because AFAIK symlinking is the only linking that can cross 
 filesystem borders.
 
 It can... if the filesystem is mounted at the time. AFAIR this
 causes problem setting the timezone at boot time.
 
 Symlinking works (over filesystem borders, too, Pandu is right)
 and it even autoupdates localtime when (why ever) something in
 zoneinfo changes...
 
  The localtime files change all the time; look at how often
  timezone-data is updated. Every time some bright spark comes up with
  another clever way of squeezing 25 hours into a day, his country's
  DST rules change. That's why openrc has a setting to manage this
  automatically for you.
 
 
 I know that it happens, I just fail to understand why... ;)



Re: [gentoo-user] Re: Anyone switched to eudev yet?

2012-12-27 Thread Dale
Mark David Dumlao wrote:
 I think your reaction proves my point about angry mobs torching his
 home without understanding what's being proposed. Your fine reading
 comprehension once again failed to catch the notion that in my
 analogy, all he invented was a mechanism that makes sure it was a key,
 not a spark plug, that did the starting. i.e., you're asking literally
 for a turnkey system, and that's literally what he invented, except
 that the system guarantees that it's a key that was turned. You have
 not said a THING about your misunderstanding of the use of the word
 _broken_ and you're continuing to peddle your hate-boner even after
 it's been shown that you're confused. -- This email is: [ ] actionable
 [ ] fyi [x] social Response needed: [ ] yes [x] up to you [ ] no
 Time-sensitive: [ ] immediate [ ] soon [x] none 

So I guess Linus is confused too?  You think he just has a hate-boner?  I
would think Linus, if anyone, knows what he is talking about.  Maybe you
need to go talk to him about his feelings on the direction of
udev/systemd.  Good luck with that. 

Name calling, lost argument.  No more facts.

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: openjdk-6-jdk

2014-08-26 Thread Jc García
2014-08-25 19:56 GMT-06:00 James wirel...@tampabay.rr.com:
 Jc García jyo.garcia at gmail.com writes:



 
 Just dropping an idea for your ebuild, you might have this planned
 but anyway, I would put something like 'virtual/jdk:1.6' in RDEPEND,
 so if things work as they should(but that's not realistic), any of the
 java implementations in the tree would provide jdk6 for your ebuild.

 I installed icedtea-bin for now.

 From the ebuild (which is a hack) I have :

 DEPEND=net-misc/curl
 dev-libs/cyrus-sasl
 python? ( dev-lang/python dev-python/boto )
 java? ( virtual/jdk )


I have seen many ebuilds with RDEPEND="cat/pkg" and
DEPEND="${RDEPEND}"; I would use that, because jdk is both a runtime
and a build-time dependency in this case.

 It seems it would not compile until I installed the maven-bin
 package, which is not a prerequisite in the ebuild,
 but I saw that maven was required for building mesos on another
 distro.

Put maven-bin in DEPEND then, with any other build-time dependency.
Also, there's a java-mvn-src eclass in the tree, and two other maven-related
eclasses in the java overlay; check those out if you haven't
already. I have never used maven, only ant, and I'm still learning
about ebuilds, so I can't say anything else.
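
Putting the two suggestions together, a rough sketch of the dependency block
(atoms taken from this thread; not a complete or tested ebuild):

  RDEPEND="virtual/jdk:1.6
  	net-misc/curl
  	dev-libs/cyrus-sasl
  	python? ( dev-lang/python dev-python/boto )"
  DEPEND="${RDEPEND}
  	dev-java/maven-bin"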

 Like I said, it's a hack, but I'll get it cleaned up; because nobody
 else seemed motivated to get mesos running on gentoo.

 Now it off to get spark[1] and the hadoop[2] happy on gentoo...

 happy, happy happy


 James

 [1] https://spark.apache.org/

 [2] http://hadoop.apache.org/








[gentoo-user] Re: OOM memory issues

2014-09-18 Thread James
Kerin Millar kerframil at fastmail.co.uk writes:


 A new tunable, oom_score_adj, was added, which accepts values between 
 0 and 1000.

 https://github.com/torvalds/linux/commit/a63d83f#include/linux/oom.h


FANTASTIC! Exactly the sort of info I'm looking for: learn the past,
see what has been tried, how to configure it, and when and why it works
or fails! Absolutely wonderful link!


 As mentioned there, the oom_adj tunable remains for reasons of 
 backward compatibility. Setting one will adjust the other per the 
 appropriate scale.

That said, the mechanism seems too simple-minded to succeed in anything
but an extremely well monitored system. I think now the effort,
particularly in clustering codes, is to have only basic memory monitoring
and control and leave the fine-grained memory control needs to the 
clustering tools. The simple solution is there (in clustering): you just
prioritize jobs (codes), migrate to systems with spare resources, and bump
other processes to lower priority states. Also, there are (in-memory)
codes like Apache Spark that use Resilient Distributed Datasets (RDDs).

 It doesn't look as though Karthikesan's proposal for a cgroup based 
 controller was ever accepted.

I think many of the old kernel ideas, accepted or not, are being
repackaged in the clustering tools, or at least they are inspired
by these codes.

Dude, YOU are the main{}. Keep the info flowing, as I'm sure lots
of folks on this list are reading this.

EXCELLENT!


 --Kerin

James






[gentoo-user] Re: File system testing

2014-09-19 Thread James
J. Roeleveld joost at antarean.org writes:


 Out of curiosity, what do you want to simulate?

subsurface flows in porous media, AKA carbon sequestration
by injection wells. You know, providing proof that those
that remove hydrocarbons actually put the CO2 back
and significantly mitigate the effects of their ventures.

It's like this. I have been struggling with my 17-year-old genius
son, who is a year away from entering medical school, to learn
responsibility. So I got him a hyperactive, highly
intelligent (doberman-mix) puppy to nurture, raise, train, love
and be responsible for. It's one genius pup teaching another
pup about being responsible.

So goes the earl_bidness...imho.



 
  Many folks are recommending to skip Hadoop/HDFS all  together

 I agree, Hadoop/HDFS is for data analysis. Like building a profile 
 about people based on the information companies like Facebook,  
 Google, NSA, Walmart, Governments, Banks, collect about their 
 customers/users/citizens/slaves/

  and go straight to mesos/spark. RDD (in-memory)  cluster
  calculations are at the heart of my needs. The opposite end of the
  spectrum, loads of small files and small apps; I dunno about, but, I'm all
  ears.
  In the end, my (3) node scientific cluster will morph and support
  the typical myriad  of networked applications, but I can take
  a few years to figure that out, or just copy what smart guys like
  you and joost do.
  
 Nope, I'm simply following what you do and provide suggestions where I can.
 Most of the clusters and distributed computing stuff I do is based on 
 adding machines to distribute the load. But the mechanisms for these are 
implemented in the applications I work with, not what I design underneath.

 The filesystems I am interested in are different to the ones you want.

Maybe. I do not know what I want yet. My vision is very lightweight 
workstations running lxqt (small memory footprint) or such, and a bad_arse
cluster for the heavy lifting, running on whatever heterogeneous resources I
have. From what I've read, the cluster and the file systems are all
redundant at the cluster level (mesos/spark anyway), regardless of what any
given processor/system is doing. All of Alan's fantasies (needs) can be
realized once the cluster stuff is mastered (chronos, ansible etc etc).

 I need to provided access to software installation files to a VM server 
 and access to documentation which is created by the users. The 
 VM server is physically next to what I already mentioned as server A.  
 Access to the VM from the remote site will be using remote desktop   
 connections.  But to allow faster and easier access to the 
 documentation, I need a server B at the remote site which functions as 
 described.  AFS might be suitable, but I need to be able to layer Samba 
 on top of that to allow a seamless operation.
 I don't want the laptops to have their own cache and then having to 
 figure out how to solve the multiple different changes to documents 
 containing layouts. (MS Word and OpenDocument files).

OK, so your customers (hyperactive problem users) interface to your cluster
to do their work. When finished, you write things out to other servers
with all of the VM servers. Lots of really cool tools are emerging
in the cluster space.

I think these folks have mesos + spark + samba + nfs all in one box [1].
Build rather than purchase? We have to figure out what you and Alan need on
a cluster, because it is what most folks need/want. It's the admin_advantage
part of clustering. (There are also the Big Science (me) and Web-centric needs.)
Right now they are related projects, but things will coalesce, imho. There is
even Spark SQL for postgres admins [2].

[1]
http://www.quantaqct.com/en/01_product/02_detail.php?mid=29sid=162id=163qs=102

[2] https://spark.apache.org/sql/


   We use Lustre for our high performance general storage. I don't 
   have any numbers, but I'm pretty sure it is *really* fast (10Gbit/s 
   over IB sounds familiar, but don't quote me on that).
  
  At UMich, you guys should test the FhGFS/btrfs combo. The folks
  at UCI swear by it, although they are only publishing a wee bit
  (you know, water cooler gossip). Surely the Wolverines do not
  want those Californians getting up on them?

  Are you guys planning a mesos/spark test?

Personally, I would read up on these and see how they work. Then,
based on that, decide if they are likely to assist in the specific
situation you are interested in.

  It's a ton of reading. It's not apples-to-apple_cider type of reading.
  My head hurts.

 Take a walk outside. Clear air should help you with the headaches :P

Basketball, Boobs and Bourbon used to work quite well. Now it's mostly
basketball, but I'm working on someone very cute..

  I'm leaning to DFS/LFS
  (2)  Lustre/btrfs  and FhGFS/btrfs

 I have insufficient knowledge to advise on either of these.
 One question, why BTRFS instead of ZFS?

I think btrfs has

[gentoo-user] Re: gigabyte mobo latency

2014-10-18 Thread James
thegeezer thegeezer at thegeezer.net writes:


  So. Is there a make.conf setting or elsewhere to make the 
  terminal session response times in the browsers (seamonkey, firefox)
  faster?  
  (the typing latency in the browser windows)

  ideas?

 two things you might like to look into: 1. cgroups (including freezer)
 to help isolate your browsers and also 2. look at atop instead of htop
 as this includes disk io


2. The system rarely uses 8 G of the 32 G available, so disk IO is 
not the problem. No heavy writes. It was the JavaScript.

1. Ahhh! tell me more. I found these links quickly:

https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt

http://wiki.gentoo.org/wiki/LXC#Freezer_Support

I'm not sure if you've read any of my clustering_frustration posts
over the last month or so, but cgroups are at the heart of clustering now.
It seems many of the systemd-based cluster solutions are having all
sorts of OOM, OOM-killer etc etc issues. So any and all good information,
examples and docs related to cgroups are of keen interest to me. My efforts
to build up a mesos/spark cluster center around openrc and therefore
direct management of resources via cgroups.

The freezer is exactly what I'm looking for. Maybe I also need to read up
on lxc?  What are the best ways to dynamically manage cgroups? A GUI?
A static config file? A CLI tool?
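
As a starting point, the freezer side of cgroup v1 is just a few writes
(group name is arbitrary; assumes the freezer controller is mounted under
/sys/fs/cgroup/freezer):

  mkdir /sys/fs/cgroup/freezer/browsers
  echo "$BROWSER_PID" > /sys/fs/cgroup/freezer/browsers/tasks
  echo FROZEN > /sys/fs/cgroup/freezer/browsers/freezer.state   # pause
  echo THAWED > /sys/fs/cgroup/freezer/browsers/freezer.state   # resume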


curiously,
James







Re: [gentoo-user] ceph on btrfs

2014-10-23 Thread Andrew Savchenko
Hi,

On Wed, 22 Oct 2014 20:05:48 + (UTC) James wrote:
 Hello,
 
 So looking at the package sys-cluster/ceph, I see these flags:
 cryptopp debug fuse gtk +libaio libatomic +nss radosgw static-libs tcmalloc
 xfs zfs.  No specific flags for btrfs?

Ceph is optimized for btrfs by design, it has no configure options
to enable or disable btrfs-related stuff:
https://github.com/ceph/ceph/blob/master/configure.ac
No configure option = no use flag.
 
 ceph-0.67.9 is marked stable, while 0.67.10 and  0.80.5 are marked
 (yellow) testing and * is marked (red) masked. So what version
 would anyone recommend, with what flags?  [1]

Just use the latest (0.80.7 ATM). You may just rename and rehash the
0.80.5 ebuild (usually this works fine). Or you may stay with
0.80.5, but with fewer bug fixes.
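
The rename-and-rehash recipe usually amounts to something like this, assuming
a local overlay is already set up (paths are examples):

  mkdir -p /usr/local/portage/sys-cluster/ceph
  cp /usr/portage/sys-cluster/ceph/ceph-0.80.5.ebuild \
     /usr/local/portage/sys-cluster/ceph/ceph-0.80.7.ebuild
  cd /usr/local/portage/sys-cluster/ceph
  ebuild ceph-0.80.7.ebuild manifest   # fetches the new tarball and rehashes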
 
  Ceph will be the DFS on top of a (3) node mesos+spark cluster. 
 btrfs is being  set up with 2 disks in raid 1 on each system. Btrfs
 seems to be keenly compatible with ceph [2].

If raid is supposed to be read more frequently than written to,
then my favourite solution is raid-10-f2 (2 far copies, perfectly
fine for 2 disks). This will give you read performance of raid-0 and
robustness of raid-1. Though write i/o will be somewhat slower due
to more seeks.

Also it depends on workload: if you'll have a lot of independent
read requests, raid-1 will be fine too. But for large read i/o from
a single or few clients raid-10-f2 is the best imo.
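
(For reference, a hedged mdadm example of the 2-disk far-2 layout; device
names are made up:)

  mdadm --create /dev/md0 --level=10 --layout=f2 \
        --raid-devices=2 /dev/sda2 /dev/sdb2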

 Guidance and comments, warmly requested,
 James
 
 
 [1] 
 http://ceph.com/docs/v0.78/rados/configuration/filesystem-recommendations/
 
 [2] http://ceph.com/docs/master/release-notes/#v0-80-firefly

Best regards,
Andrew Savchenko




[gentoo-user] Re: ceph on btrfs

2014-10-23 Thread James
Andrew Savchenko bircoph at gmail.com writes:


 Ceph is optimized for btrfs by design, it has no configure options
 to enable or disable btrfs-related stuff:
 https://github.com/ceph/ceph/blob/master/configure.ac
 No configure option = no use flag.

Good to know; nice script.

 Just use the latest (0.80.7 ATM). You may just nerame and rehash
 0.80.5 ebuild (usually this works fine). Or you may stay with
 0.80.5, but with fewer bug fixes.

So just download from ceph.com, put it in distfiles and copy-edit
ceph-0.80.7 in my /usr/local/portage,  or is there an overlay somewhere
I missed?

 If raid is supposed to be read more frequently than written to,
 then my favourite solution is raid-10-f2 (2 far copies, perfectly
 fine for 2 disks). This will give you read performance of raid-0 and
 robustness of raid-1. Though write i/o will be somewhat slower due
 to more seeks. Also it depends on workload: if you'll have a lot of  
 independent read requests, raid-1 will be fine too. But for large read  
 i/o from a single or few clients raid-10-f2 is the best imo.

Interesting. For now I'm going to stay with simple mirroring. After
some time I might migrate to a more aggressive FS arrangement, once
I have a better idea of the i/o needs. With spark (RDD) on top of mesos,
I'm shooting for mostly in-memory usage, so i/o is not very heavily
used. We'll just have to see how things work out.

Last point: I'm using openrc and not systemd at this time; are there any
ceph issues with openrc? I do see systemd-related items with ceph.


 Andrew Savchenko


Very good advice.
Thanks,
James






[gentoo-user] OT: GCC 5 Offloading

2015-09-14 Thread james
Fernando Rodriguez  outlook.com> writes:

> Do you know of any plans to enable offloading on the gentoo toolchain?

NO, but I'm sure some devs are keenly aware of this feature in gcc-5.


> I was able to build the offloading compiler using crossdev with a few hacks 
> and wrote an ebuild for Intel's simulator[2]. I will work on enabling the  
> host compiler tomorrow and may open a feature request and post patches 
> once I get it working. The changes needed to enable it on the host are 
> pretty trivial.

Sorry Fernando, I just now saw this reply on an old thread. I think that
'sys-cluster/ceph' is where I'd like to test your spin on gcc-5.
Ceph has RDMA (RoCE) in the 0.94 branch (in portage). You are definitely
ahead of me on practical gcc-5 experiments with offloading and other
new features.

You did not list your second reference. Where can I get/git your
compiler, and do you have some brief suggestions on taking it for a test drive?

I'm not much interested in the Intel simulator, atm. I like to test on
old gear running gentoo: borking is no big deal, if it happens.
Other codes I'm keen to test gcc-5 (offloading) on are Apache mesos,
Apache spark and mesos-distcc.

Also, per this doc [1] you can get your own gentoo overlay to put
things up for wider experimentation, if you like.

[1] https://wiki.gentoo.org/wiki/Project:Overlays/Dev_Guide


Very cool, what you have done,
James





Re: [gentoo-user] Re: freeSwitch

2016-04-27 Thread Meik Frischke

Am 2016-04-27 16:12, schrieb James:

 ccs.covici.com> writes:



> How do you install freeswitch, just a raw compile from sources?
> Or is there an ebuild somewhere that I   missed?


I just did the configure, make, make install combination, it by default
installs in /usr/local/freeswitch -- even the binaries, so its somewhat
isolated.  but check the docs first, you will need some libraries which
are not in gentoo and must be gotten from their repository just to
build.  I think I had to download and install a dozen packages from
source.


Thanks for the info. Did you ever file a bug to have an ebuild created?

BGO searching did not find one, and freeswitch just seems like cool
software to add to gentoo.

Oh, I almost forgot to mention: what caught my eye was that I ran
across instructions for installation on a Raspberry Pi 2 [1],
and I was wondering whether a larger PBX (IP switch) could be built
by clustering Rpi3 devices together.

Lots of projects seem to be centered on Rpi3 clusters these days. [2]
Thanks again for the feedback.

James

[1]
http://www.algissalys.com/how-to/freeswitch-1-7-raspberry-pi-2-voip-sip-server

[2] https://www.raspberrypi.org/magpi/pi-spark-supercomputer/


There is the Freeswitch overlay available [1], also listed here [2], 
which can be added with layman. The provided ebuild is a little outdated 
though (1.6.2 vs currently 1.6.7). It can probably be adapted for the 
current version, but I haven't tried it yet.


Meik

[1] 
https://github.com/alphallc/freeswitch/tree/master/net-voip/freeswitch

[2] https://overlays.gentoo.org/



[gentoo-user] Re: Gitlab experiences

2016-07-13 Thread James
Ralf <ralf+gentoo  ramses-pyramidenbau.de> writes:


> On 07/13/16 16:35, Rich Freeman wrote:
> > On Wed, Jul 13, 2016 at 9:19 AM, Ralf
> > <ralf+gentoo  ramses-pyramidenbau.de> wrote:
> >> I recommend to deploy gitlab inside a Debian LXC/Docker container as
> >> Gitlab guys provide and maintain precompiled .deb packages. You do not
> >> want to compile it on your own as it comes with a load of dependencies.
> >> And once dependencies change you really might run into trouble with
> >> gentoo. Gitlab isn't just a tiny one-click-and-it-runs webservice, it's
> >> a whole ecosystem.

Yep, I heard a bit of bitching about maintaining it. I was contacted by a
corp to install and manage gitlab for them. It's not my cup of tea, so folks
in that space must be upset with the maintenance issues.

> > This is part of why I'd suggest not upgrading it in-place.  Just
> > create a new container from scratch every time there is an update. 
> Sure, I totally agree. But from a maintenance point of view this can
> become a full-time job very quickly, as gitlab has pretty short
> innovation and release cycles. And you do want to install updates very
> quickly, especially for web-apps.

Interesting idea. It certainly is a good candidate for codes that stress
a cluster. I'll keep that in mind.

>   Ralf

> > Of course, Gentoo makes this somewhat more painful than other distros; 
> > it tends to be designed around upgrading in-place.

I did not realize it was so java-centric. I'm out, 'cause I have more
icedtea-java projects than I know what to do with (apache-spark).

Thanks for all the info guys,
James







Re: [gentoo-user] Pure Data (Pd) can't access ALSA device

2017-09-22 Thread Daniel Sonck
This sounds pretty normal to me. ALSA isn't really suited for simultaneous 
audio access. In general with ALSA you have only one program that can use 
audio at a time, or you use the mixer module from ALSA. I assume a program 
running in the background has claimed ALSA already for certain reasons. 

If you run PulseAudio (which is pretty standard on a regular desktop), this 
one will be the culprit. Usually the jack tools are smart enough to suspend 
pulseaudio. You can in fact run puredata through the pasuspender tool, which 
suspends pulseaudio so ALSA is free again.

What I recommend (which you already tried) is using JACK. JACK will take 
ownership of your ALSA device, and gives you a capable routing system allowing 
you to hook up more than just PureData to your audio card. Optionally routing 
it to other software. In addition, it's possible to compile pulseaudio with 
jack support which means you can in fact have regular (non-audio) apps work 
together with jack, which is what I sometimes use: Set up my audio studio 
setup, while still having pulseaudio around for stuff like browsers and video 
players. If I suddenly have a creative spark, I have my studio ready to play 
with. When I'm done, browsers still work with sound.
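
For example, roughly (assuming Pd's binary is installed as "pd"):

  pasuspender -- pd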

Daniel

On vrijdag 22 september 2017 18:23:10 CEST Lasse Pouru wrote:
> I can't get Pure Data to work with ALSA. It detects my sound card, but
> whenever I try to turn on the audio I get the error:
> 
> ALSA output error (snd_pcm_open): Device or resource busy.
> 
> I've tried both using the ebuild from the audio-overlay and compiling
> from the source on the Pd website, both behave the same. I've read that
> Pd deals with ALSA differently than most other programs, but haven't
> found an explanation how. I did get it to work with JACK.
> 
> - Lasse





Re: [gentoo-user] [OFF-TOPIC] Best bios type thingy to boot a computer

2018-08-31 Thread Godzil
I really enjoyed (and still) Open Firmware which was used by Apple on the 
PowerPC macintosh (starting from the first PCI models up to the latest G5)

It is a nice environment, with all the capabilities of UEFI and even more, as 
it comes for free and directly with a Forth interpreter (basically the CLI is an 
immediate Forth interpreter).

Was quite nice and tidy, allowing lots of stuff like modifications of the 
device tree and other nice things.

It was probably underused by Apple, but even so it was the key to a lot of hacks 
on PPC models!


I think it originated from Sun and was used on SPARCstations, not really sure 
there.

> On 31 August 2018 at 18:19, Andrew Lowe  wrote:
> 
>> On 31/08/18 23:16, Andrew Udvare wrote:
>>> On 8/31/18 10:46 AM, Andrew Lowe wrote:
>>> Hi all,
>> 
>>>This is not to start a flame war, I just want to do some reading,
>>> wikipedia pages, for self interest on how a BIOS could have/should have
>>> been done. I'm thinking of how DECStations, Alpha's SPARCs etc etc
>>> booted up.
>> 
>> Try
>> 
>> https://en.wikipedia.org/wiki/Booting#Boot_sequence
>> https://github.com/coreos/grub/tree/2.02-coreos/grub-core/boot/i386/pc
>> https://github.com/torvalds/linux/blob/master/arch/x86/boot/main.c#L135
>> 
> 
> 
>Thanks for the comment but I was more looking along the lines of "When
> I used the early SPARC 1 the boot was controlled by  and it was
> really good because.." hence my original comment about "been there,
> done that", people who are old enough to know what a SPARC1 looked like
> or even used a Personal Iris or a POWERstation.
> 
>Andrew
> 




Re: [gentoo-user] Re: Anyone switched to eudev yet?

2012-12-27 Thread Dale
Mark David Dumlao wrote:
 On Thu, Dec 27, 2012 at 4:42 AM, Dale rdalek1...@gmail.com wrote:
 Mark David Dumlao wrote:
 On Tue, Dec 25, 2012 at 10:38 AM, Dale rdalek1...@gmail.com wrote:
 Feel free to set me straight tho.  As long as you don't tell me my
 system is broken and has not been able to boot for the last 9 years
 without one of those things.  ROFL
 Nobody's telling you _your_ system, as in the collection of programs
 you use for your productivity, is broken. What we're saying is that
 _the_ system, as in the general practice as compared to the
 specification, is broken. Those are two _very_ different things.
 From what I have read, they are saying what has worked for decades has
 been broken the whole time.  Doesn't matter that it works for millions
 of users, its broken.
 Yes, that is exactly what they are saying. What I am pointing out,
 however, is that there is, informally, a _technical meaning_ for the
 word broken, which is that the specs don't match the
 implementation. And in the case of /usr, the specs don't match the
 implementation. For like, maybe all of the Linuxen.

  They say it is broken so they can fix it with a
 init thingy for EVERYONE.  Sorry, that's like telling me my car has been
 broken for the last ten years when I have been driving it to town and it
 runs just fine.
 NOBODY is telling you your system or that the systems of millions of
 users out there aren't booting. You're assigning emotional baggage to
 technical language.

 To push your analogy, oh, your car is working just fine. Now anyone
 with a pair of spark plugs and a few tools may be able to start it
 without you, but your startup _works_. Now imagine some German
 engineer caring nothing about you lowly driver, and caring more about
 the car as a system, and he goes using fancy words like
 authentication systems and declaring that all cars have a flaw, or
 more incensingly, car security is fundamentally broken (Cue angry
 hordes of owners pitchfork and torching his house).

 Thing is, he's right, and if he worked out some way for software to
 verify that machine startup was done using the keys rather than spark
 plugs, he'd be doing future generations a favor in a dramatic
 reduction of carjackings. And if somehow it became mandated for future
 cars to have this added in addition to airbags and whatnot, it'd annoy
 the hell out of car makers but overall still be a good thing.

 And here the analogy is holding up: NOBODY is breaking into your car
 and forcefully installing some authentication system in its startup.
 And NOBODY is breaking into your servers and forcing you to switch to
 udev/systemd or merged /usr. You can still happily plow along with
 your system as is. Heck, you can even install current udev without
 changing your partition setup. Just modify the ebuild and have it
 install it into / instead of /usr. Or use an early bootup script. Or
 use an init thingy.

 The udev/systemd people sound like politicians.
 If anything, Lennart is the worst possible politician on the planet.
 He makes unpopular decisions, mucks around in stuff people don't want
 touched, talks snide and derisively, etc etc etc, because he's a
 nerd's nerd that knows nothing about PR and goodwill. The software is
 good, but that's about all he knows how to write. He's like DJB on
 crack.
 --
 This email is:[ ] actionable   [ ] fyi[x] social
 Response needed:  [ ] yes  [x] up to you  [ ] no
 Time-sensitive:   [ ] immediate[ ] soon   [x] none



I think your analogy actually proves my point.  Instead of just getting
in the car and turning the key, they want to reinvent the engine and how
it works.  It doesn't matter that it is and has been working for decades.

Thanks for proving my point tho.  LOL

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] Re: File system testing

2014-09-18 Thread J. Roeleveld

On Wednesday, September 17, 2014 08:56:28 PM James wrote:
 Alec Ten Harmsel alec at alectenharmsel.com writes:
  As far as HDFS goes, I would only set that up if you will use it for
  Hadoop or related tools. It's highly specific, and the performance is
  not good unless you're doing a massively parallel read (what it was
  designed for). I can elaborate why if anyone is actually interested.
 
 Actually, from my research and my goal (one really big scientific 
simulation
 running constantly).

Out of curiosity, what do you want to simulate?

 Many folks are recommending to skip Hadoop/HDFS all
 together

I agree, Hadoop/HDFS is for data analysis. Like building a profile about 
people based on the information companies like Facebook, Google, NSA, 
Walmart, Governments, Banks, collect about their 
customers/users/citizens/slaves/

 and go straight to mesos/spark. RDD (in-memory)  cluster
 calculations are at the heart of my needs. The opposite end of the
 spectrum, loads of small files and small apps; I dunno about, but, I'm all
 ears.
 In the end, my (3) node scientific cluster will morph and support
 the typical myriad  of networked applications, but I can take
 a few years to figure that out, or just copy what smart guys like
 you and joost do.

Nope, I'm simply following what you do and provide suggestions where I 
can.
Most of the clusters and distributed computing stuff I do is based on 
adding machines to distribute the load. But the mechanisms for these are 
implemented in the applications I work with, not what I design underneath.

The filesystems I am interested in are different to the ones you want.
I need to provide access to software installation files to a VM server and 
access to documentation which is created by the users.
The VM server is physically next to what I already mentioned as server A. 
Access to the VM from the remote site will be using remote desktop 
connections.
But to allow faster and easier access to the documentation, I need a 
server B at the remote site which functions as described.
AFS might be suitable, but I need to be able to layer Samba on top of that 
to allow a seamless operation.
I don't want the laptops to have their own cache and then having to figure 
out how to solve the multiple different changes to documents containing 
layouts. (MS Word and OpenDocument files)

  We use Lustre for our high performance general storage. I don't have 
any
  numbers, but I'm pretty sure it is *really* fast (10Gbit/s over IB
  sounds familiar, but don't quote me on that).
 
 At UMich, you guys should test the FhGFS/btrfs combo. The folks
 at UCI swear by it, although they are only publishing a wee bit.
 (you know, water cooler gossip).. Surely the Wolverines do not
 want those californians getting up on them?
 
 Are you guys planning a mesos/spark test?
 
   Personally, I would read up on these and see how they work. Then,
   based on that, decide if they are likely to assist in the specific
   situation you are interested in.
 
 It's a ton of reading. It's not apples-to-apple_cider type of reading.
 My head hurts.

Take a walk outside. Clear air should help you with the headaches :P

 I'm leaning to  DFS/LFS
 
 (2)  Lustre/btrfs  and FhGFS/btrfs
 
 Thoughts/comments?

I have insufficient knowledge to advise on either of these.
One question, why BTRFS instead of ZFS?

My current understanding is:
- ZFS is production ready, but due to licensing issues, not included in the 
kernel
- BTRFS is included, but not yet production ready with all planned features

For me, Raid6-like functionality is an absolute requirement and latest I 
know is that that isn't implemented in BTRFS yet. Does anyone know when 
that will be implemented and reliable? Eg. what time-frame are we talking 
about?

--
Joost


[gentoo-user] Re: user agent switcher - automatic

2006-02-13 Thread Mick
Harm Geerts wrote:

 On Monday 13 February 2006 20:56, Joseph wrote:
 I remember there was some kind of user agent for Mozilla (before
 firefox) that worked per url basis.

 Anyhow, the current user agent doesn't help me.  I can seem to fool the
 Canada Post web-login screen with the Firefox user agent (it displays the
 log-in screen) but when I try to create an account it won't accept the spoof.
 Can anybody try it:
 http://www.canadapost.ca/business/obc/default-e.asp?sblid=obc
 There is a link on the right: New user... click here
 
 As soon as the login page finishes loading I get redirected to a 404 page,
 I don't even get the chance to do anything...
 
 You really should send an email to them about it and let them know they're
 missing (give or take) 10% of potential customers by filtering
 firefox/gecko browsers.
 
 With the browser requirements they have I'll never be able to register...
 Their world stops with IE and netscape, IE obviously doesn't run native on
 linux, and netscape is masked for amd64.
 
 webdevelopment, it's a dirty business :-)

Their server sends a test cookie which discriminates against decent
browsers!  Firefox won't play.  Opera falls apart irrespective of how I set
it to identify itself.  The same happens with Konqueror in the default user
agent setting, but it logs in happily in the IE6 setting.  Some bright
spark has written 311 lines of code in a script which actively
discriminates against most browsers & OS's out there.
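
If anyone wants to try the same trick from Firefox, the global override pref is
the usual route. A minimal sketch; the profile path and the IE string are only
examples, and note it applies to every site, not per-URL:

  # append to the profile's user.js (restart the browser afterwards);
  # <profile> is a placeholder for your actual profile directory
  echo 'user_pref("general.useragent.override", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)");' \
       >> ~/.mozilla/firefox/<profile>/user.js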

Some web-developers or businesses are concerned that they provide an
identical web presence to all visitors for marketing/corporate identity
purposes.  A deformed/malfunctioning website is understandably not
acceptable for them.  Lazy wysiwyg web designers will not spend time to
trim their code for most browsers to be able to show their content as
intended.  On the other hand one can end up chasing their tail if every
browser invented has to show the same content.  There's bound to be some
differences.

I understand that Google now discriminates against websites which . . .
discriminate against particular browsers.  IE and their wysiwyg HTML
editing products have caused a lot of bad code out there and it will take
some time to people to catch up and clean their code.  Until then COMPLAIN!
-- 
Regards,
Mick

-- 
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] Curious pattern in log files from ssh...

2008-12-04 Thread Alan McKinnon
On Thursday 04 December 2008 21:03:17 Christian Franke wrote:
 On 12/03/2008 09:02 PM, Steve wrote:
  I've recently discovered a curious pattern emerging in my system log
  with failed login attempts via ssh.
 
  I'm not particularly concerned - since I'm confident that all my users
  have strong passwords... but it strikes me that this data identifies a
  bot-net that is clearly malicious attempting to break passwords.
 
  Sure, I could use IPtables to block all these bad ports... or... I could
  disable password authentication entirely... but I keep thinking that
  there has to be something better I can do... any suggestions?  Is there
  a simple way to integrate a block-list of known-compromised hosts into
  IPtables - rather like my postfix is configured to drop connections from
  known spam sources from the sbl-xbl.spamhaus.org DNS block list, for
  example.

 I just don't see what blocking ssh-bruteforce attempts should be good
 for, at least on a server where few _users_ are active.

Two reasons:

a. Maybe, just maybe, you overlooked something. Belts, braces and a drawstring 
for good measure is not a bad thing.

b. You probably want to get all that crap out of your log files off into some 
other place where you can cope with it. Parsing auth log files that are 95% 
brute force attempts is no fun. I like to have the crap in place A and the 
real stuff in place B, makes my job so much easier
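
For the block-list idea itself, ipset plus a single iptables rule is the usual
pattern. A rough sketch, assuming a reasonably recent ipset; the set name and
the feed file are hypothetical and you'd maintain them yourself:

  # one-off setup: a set of known-bad hosts, consulted before sshd sees the packet
  ipset create ssh-blacklist hash:ip
  iptables -I INPUT -p tcp --dport 22 -m set --match-set ssh-blacklist src -j DROP

  # refresh the set from whatever list you trust, one address per line
  while read ip; do ipset add ssh-blacklist "$ip"; done < known-bad-hosts.txt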

 The chance that security of a well configured system will be compromised
 by that is next to zero, and on recent systems it is also impossible to
 cause significant load with ssh-login-attempts.

Uh-huh. We all said that for many years. Then some bright spark actually 
looked at the patches the debian openssh maintainer was applying and we all 
had one of those special oops... moments

Did you have any idea of just how weak certs made on a debian box were before 
it hit the headlines? No-one I know did.

 Also, things like fail2ban add new attack-possibilities to a system, I
 remember the old DoS for fail2ban, resulting from a wrong regex in log
 file parsing, but I think at least this is fixed now.

Whereas that is true enough in itself, the actual risk of such is rather low 
in comparison to the gains. Hence it is not a valid reason to not use 
fail2ban and such-like apps.

If it were, we should all just stop using iptables and libwrap and openssl on 
the off-chance that maybe, just maybe, they open an attack vector. But that's 
silly reasoning right?


-- 
alan dot mckinnon at gmail dot com



Re: [gentoo-user] Re: video driver discovery

2008-12-23 Thread Alan McKinnon
On Tuesday 23 December 2008 03:28:55 James wrote:
 Alan McKinnon alan.mckinnon at gmail.com writes:
  grepping a log file is the most natural way for an experienced unix admin
  to do it. It's a useful skill, all newbies should be encouraged (but not
  required) to learn it. Sometimes we experienced admin types lose sight of
  the fact that regardless of all the nice new user-friendly aspects of
  Linux being driven by distros like Ubuntu, under the covers we still have
  a hard-core Unix system.

 h,

 Look at lspci -v. It lists quite a few kernel drivers

I'm not sure I follow you. lspci lists physical hardware devices found while 
enumerating the pci bus, with -v it lists the kernel driver loaded for 
accessing that device.

Weren't you looking for the X video driver? You won't find that in lspci, it's 
a user-space driver loaded by the X server. You may well find information 
related to 3D rendering and frame buffers though.

Another thing that people all too easily lose sight of is that if someone 
wants such information as which X driver is loaded, then we assume that the 
person knows enough about the system to know where to look and knows the 
usual tools for looking there. In much the same way as we expect the car 
mechanic to know where the spark plugs are and what they do.
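
For the X driver question specifically, grepping the server log is the quickest
route; a minimal example (the log path may differ on unusual setups):

  # lists every driver/module the X server loaded, e.g. (II) LoadModule: "radeon"
  grep -i "LoadModule" /var/log/Xorg.0.log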

 00:05.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge (prog-if 00
 [Normal decode])
 snip
 Kernel driver in use: pcieport-drive

 and here:
 00:12.0 IDE interface: ATI Technologies Inc 4379 Serial ATA Controller
 (prog-if 8f [Master SecP SecO PriP PriO])
 snip
 Kernel driver in use: sata_sil

 and so on...
 00:13.0 USB Controller: ATI Technologies Inc IXP SB400 USB Host Controller
 (prog-if 10 [OHCI])
 Kernel driver in use: ohci_hcd

 00:14.0 SMBus: ATI Technologies Inc IXP SB400 SMBus Controller (rev 11)

 Kernel driver in use: piix4_smbus


  I guess that was done just for lazy (slow) admins?


  C'mon, it's obviously an oversight, 'cause lots of other
  things get listed... you think? Maybe it'd be too
 difficult to do?

Remind me again, what point are you making? lspci is a very low-level hardware 
detection tool. It's not supposed to be friendly, it's supposed to be 
complete.

-- 
alan dot mckinnon at gmail dot com



Re: [gentoo-user] Re: Recommendations for scheduler

2014-08-03 Thread Joost Roeleveld
On Saturday 02 August 2014 16:53:26 James wrote:
 Alan McKinnon alan.mckinnon at gmail.com writes:
  Well, we've found 2 projects that at least in part seek to achieve our
  general goals - chronos and Martin's new project.
  Why don't we both fool around with them for a bit and get a sense of
  what it will take to add features etc? Then we can meet back here and
  discuss. Always better to build on an existing foundation
 
 Mesos looks promising for a variety of (Apache) reasons. Some key
 technologies folks may want google about that are related:
 
  Quincy (fair scheduler)
 Chronos (scheduler)
 Hadoop (scheduler)

Hadoop is not a scheduler. It's a framework for a Big Data clustered database.

  HDFS (clustered file system)

Unless it's changed recently, not suitable for anything other than Hadoop and 
contains a single point of failure.

 http://gpo.zugaina.org/sys-cluster/apache-hadoop-common
 
 Zookeeper (Fault tolerance)
  SPARK (optimized for iterative jobs where a dataset is reused in many
  parallel operations; advanced math/science and many other apps.)
 https://spark.apache.org/
 
  Dryad, Torque, MPICH2, MPI
  Globus toolkit
 
 mesos_tech_report.pdf
 
 It looks as though Amazon, google, facebook and many others
 large in the Cluster/Cloud arena are using Mesos..?
 
 So let's all post what we find, particularly in overlays.

Unless you are dealing with Big Data projects, like Google, Facebook, Amazon, 
big banks,... you don't have much use for those projects.

Mesos looks like a nice project, just like Hadoop and related are also nice. 
But for most people, they are as useful as using Exalytics.

A scheduler should not have a large set of dependencies that you wouldn't use 
otherwise. That makes Chronos a non-option to me.

Martin's project looks promising, but doesn't store the schedules internally. 
For repeating schedules, like what Alan was describing, you need to put those 
into scripts and start those from an existing cron.

Of the 2, I think improving Martin's project is the most likely option for me 
as it doesn't have additional dependencies and seems to be easily implemented.

--
Joost



[gentoo-user] Re: Boot Media Admin CD

2015-10-17 Thread James
João Miguel  openmailbox.org> writes:


> > On the download pages I found this interesting choice for AMD64::
> > Boot Media Admin CD::
> > https://www.gentoo.org/downloads/
> > Anyone know what an .admin cd. is? Anyone tested it yet?
> Hmm I hadn't seen it, but it seems pretty similar to the minimal
> installation CD, except it has a different logo. In fact, it calls
> itself a «minimal installation CD», boots in exactly the same way (boot
> prompt, keymap choice, loading tons of modules, etc.)

I was thinking it was a boot CD that analyzed your hardware/settings
and throws one directly into a "chrooted environment". That would be
very cool, if that is the goal:: repair/reconfigure of low level missteps...
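
Nothing stops you doing that by hand from any boot CD today, of course; the
usual handbook-style sequence, as a sketch with placeholder partitions, is:

  mount /dev/sda3 /mnt/gentoo
  mount /dev/sda1 /mnt/gentoo/boot
  mount -t proc proc /mnt/gentoo/proc
  mount --rbind /dev /mnt/gentoo/dev
  mount --rbind /sys /mnt/gentoo/sys
  chroot /mnt/gentoo /bin/bash
  source /etc/profile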


> So I'm guessing it has a few more things than the standard ISO, maybe
> some things that were deemed unnecessary for the minimal install, but
> that may be useful, I don't know, if you mess up your bootloader, or
> maybe it has some statically compiled binaries that you can copy and use
> on your main system if something is broken... But that's just a guess.

Yes, or it could be a hybrid-customizable (catalyst/aufs/systemrescue) media
project where you spin your own boot/recovery cd/iso sometime after an
install is complete?   Just a wishful guess; cloning for cluster research
and dev does come to mind (hopeful intentions here) for folks that need to
spin up lots of gentoo images on real hardware. As my work progresses to
'bare metal' clusters on heterogeneous hardware (arm64 is next) VM's are not
a viable option for my needs.   


> The easiest way to know the difference may be to mount the ISOs, the
> SquashFSs inside, and compare the files...
> If you want to try it, you can do:
>  $ qemu-system-x86_64 -m 1G -hda admincd.iso -enable-kvm
> Thanks for the question. Now I'm curious too :-D
> João Miguel

Yep, inquiring minds just want to know. Maybe?::It's preparation for the
upcoming "stage_4" installs?  [1,2] There's quite an impressive list of devs
at [1] now part of Gentoo Release Engineering. I'm sure those folks will "put out"
some amazing new install toys! Either way, an "admin cd" in the installation
media collection for gentoo does spark excitement and 
curiosities and possibilities, kno?


James

[1] https://wiki.gentoo.org/wiki/Project:RelEng

[2] https://wiki.gentoo.org/wiki/Project:RelEng_GRS



Re: [gentoo-user] Pure Data (Pd) can't access ALSA device

2017-09-22 Thread Lasse Pouru
I don't (and won't) use PulseAudio and haven't set up dmix or anything
like it. The weird thing is the simultaneous audio works with every
other program I use (Qutebrowser, mpd, Audacity etc.) -- it's only Pd
that gives the error.

I already use JACK when recording but it would be more convenient for me
to use ALSA when I'm just quickly trying out stuff. (I've set up JACK to
use an audio interface I don't have plugged in most of the time.)

- Lasse
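
For reference, the usual way to let several ALSA-only programs share one card
without PulseAudio is a dmix-backed default device. A minimal ~/.asoundrc
sketch, with the card and device numbers as assumptions:

  # ~/.asoundrc -- route "default" through dmix so several programs can share hw:0
  pcm.!default {
      type plug
      slave.pcm "shared"
  }
  pcm.shared {
      type dmix
      ipc_key 2048
      slave.pcm "hw:0,0"
  }
  ctl.!default {
      type hw
      card 0
  }

A program that insists on opening hw:0 directly will still get "Device or
resource busy"; it has to open "default" (or the "shared" PCM) for this to help.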

Daniel Sonck <dan...@sonck.nl> writes:

> This sounds pretty normal to me. ALSA isn't really suited for simultaneous 
> audio access. In general with ALSA you have only one program that can use 
> audio at a time, or you use the mixer module from ALSA. I assume a program 
> running on the background has claimed ALSA already for certain reasons. 
>
> If you run PulseAudio (which is pretty standard on a regular desktop), this 
> one will be the culprit. Usually the jack tools are smart enough to suspend 
> pulseaudio. You can in fact run puredata through the pasuspender wrapper which 
> suspends pulseaudio so ALSA is free again.
>
> What I recommend (which you already tried) is using JACK. JACK will take 
> ownership of your ALSA device, and gives you a capable routing system 
> allowing 
> you to hook up more than just PureData to your audio card. Optionally routing 
> it to other software. In addition, it's possible to compile pulseaudio with 
> jack support which means you can in fact have regular (non-audio) apps work 
> together with jack, which is what I sometimes use: Set up my audio studio 
> setup, while still having pulseaudio around for stuff like browsers and video 
> players. If I suddenly have a creative spark, I have my studio ready to play 
> with. When I'm done, browsers still work with sound
>
> Daniel
>
> On vrijdag 22 september 2017 18:23:10 CEST Lasse Pouru wrote:
>> I can't get Pure Data to work with ALSA. It detects my sound card, but
>> whenever I try to turn on the audio I get the error:
>> 
>> ALSA output error (snd_pcm_open): Device or resource busy.
>> 
>> I've tried both using the ebuild from the audio-overlay and compiling
>> from the source on the Pd website, both behave the same. I've read that
>> Pd deals with ALSA differently than most other programs, but haven't
>> found an explanation how. I did get it to work with JACK.
>> 
>> - Lasse



Re: [gentoo-user] Re: File system testing

2014-09-17 Thread J. Roeleveld

On Wednesday, September 17, 2014 03:55:56 PM James wrote:
 J. Roeleveld joost at antarean.org writes:
   Distributed File Systems (DFS):
  
   Local (Device) File Systems LFS:
  Is my understanding correct that the top list all require one of
  the bottom  list?
  Eg. the clustering FSs only ensure the files on the LFSs are
  duplicated/spread over the various nodes?
  
  I would normally expect the clustering FS to be either the full layer
  or a  clustered block-device where an FS can be placed on top.
 
 I have not performed these installation yet. My research indicates
 that first you put the Local FS on the drive, just like any installation
 of Linux. Then you put the distributed FS on top of this. Some DFS might
 not require a LFS, but FhGFS does and so does HDFS. I will not actually
 be able to accurately answer your questions, until I start to build
 up the 3 system cluster. (a week or 2 away) is my best guess.

Playing around with clusters is on my list, but due to other activities having 
a higher priority, I haven't had much time yet.

  Otherwise it seems more like a network filesystem with caching
  options (See  AFS).
 
 OK, I'll add AFS. You may be correct on this one  or AFS might be both.

Personally, I would read up on these and see how they work. Then, based 
on that, decide if they are likely to assist in the specific situation you are 
interested in.
AFS, NFS, CIFS,... can be used for clusters, but, apart from NFS, I wouldn't 
expect much performance out of them.
If you need it to be fault-tolerant and not overly rely on a single point of 
failure, I wouldn't be using any of these. Only AFS, from my original 
investigation, showed some fault-tolerance, but needed too many 
resources (disk-space) on the clients.

  I am also interested in these filesystems, but for a slightly different
 
  scenario:
 OK, so I'm the test-dummy-crash-victim; I'd be honored to have you,
 Alan, Neil, Mick, etc. etc. back-seat-drive on this adventure! (The more
 I read the more it's time for bourbon, bash, and a bit of cursing
 to get started...)

Good luck and even though I'd love to join in with the testing, I simply do 
not have the time to keep up. I would probably just slow you down.

  - 2 servers in remote locations (different offices)
  - 1 of these has all the files stored (server A) at the main office
  - The other (server B - remote office) needs to offer all files
  from serverA  When server B needs to supply a file, it needs to
  check if the local copy is still the valid version.
  If yes, supply the local copy, otherwise download
  from server A. When a file is changed, server A needs to be updated.
  While server B is sharing a file, the file needs to be locked on server A
  preventing simultaneous updates.
 
 Ouch, file locking (previous experience tells me that is always tricky).

I need it to be locked on server A while server B has a proper write-lock to 
avoid 2 modifications to compete with each other.

 (psst, systemd is causing fits for the clustering geniuses;
 some are espousing a variety of cgroup gymnastics for phantom kills)

phantom kills?

 Spark is fault tolerant, regardless of node/memory/drive failures
 above the fault tolerance that a file system configuration may support.
 In fact, files lost can be 'regenerated' but it is computationally
 expensive.

Too much for me.

 You have to get your file system(s) set up. Then install
 mesos-0.20.0 and then spark. I have mesos mostly ready. I should
 have spark in alpha-beta this weekend. I'm fairly clueless on the
 DFS/LFS issue, so a DFS that needs no LFS might be a good first choice
 for testing the (3) system cluster.

That, or a 4th node acting like a NAS sharing the filesystem over NFS.

  I prefer not to supply the same amount of storage at server B as
  server A has. The remote location generally only needs access to 5% 
of
  the total amount of files stored on server A. But not always the same 
5%.
  Does anyone know of a filesystem that can handle this?
 
 So in clustering, from what I have read, there are all kinds of files
 passed around between the nodes and the master(s). Many are critical
 files not part of the application or scientific calculations.
 So in time, I think in a clustering environment, all you seek is
 very possible, but it's a hunch, gut feeling, not fact. I'd put
 raid mirrors underneath that system, if it makes sense, for now,
 or just dd the stuff with a script or something kludgy (Alan is the
 king of kludge)

Hmm... mirroring between servers. Always an option, except it will not work 
for me in this case:
1) Remote location will have a domestic ADSL line. I'll be lucky if it has a 
500kbps uplink
2) Server A, currently, has around 7TB of current data that also needs to 
be available on the remote site.

With an 8 Mbps downlink, waiting for a file to be copied to the remote site is 
acceptable. After modifications, the new version can be copied back to 
serverA slowly during network-idle-time or when server

Re: [gentoo-user] The end of Herds

2014-11-06 Thread Alec Ten Harmsel
There is a vote on the Spark mailing list right now about
having groups of maintainers for different areas:

http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Designating-maintainers-for-some-Spark-components-td9115.html

I'm not sure how relevant that is, but it's interesting.

My own viewpoint is that there should be no individual maintainers;
packages should be assigned on a herd level, and the herds can
self-regulate and know who has expertise with each package. Just my two
cents; best to not have a single point of failure.

Alec



Re: [gentoo-user] help with the dreaded mount: RPC: Program not registered

2007-09-24 Thread Bogo Mipps
On Tue, 25 Sep 2007, John Blinka wrote:
 I think it happens when booting, but I see this message in the system log:

 Sep 23 21:12:01 tobey rc-scripts: ERROR:  cannot start nfs as
 rpc.statdcould not start

John, I've hesitated to join this thread because I haven't felt I've been able 
to throw any light on your problem - just hoped that you'd get some 
resolution that I could then apply to my own situation which is very similar 
to yours ... but as it's not looking so rosy maybe my experience may spark 
some other avenue to explore?

I've been running an amd64 nfs mount successfully for some months on this 
machine until around about mid August (difficult to tell exactly when as I 
had temporary wireless network about then because of building alterations) 
but from that point on have had major problems trying to mount the nfs 
directory.  No point in going though all the error codes etc. again as they 
are pretty similar to yours - main one is always mount: RPC: Timed out, but 
I have variations.  Like you I _never_ get it to mount from boot as it always 
did in earlier days: now, with a bit of patience, and re-running nfs, 
nfsmount - and sometimes portmap scripts, I can sometimes get it to mount. 
Sometimes I can get it to mount using the manual mount command.  Other times 
it just plain refuses to do anything until I go away for an hour or so - then 
come back and take it by surprise with nfsmount or manual mount and wham, bam 
we're away laughing!  Or sort of. 

I've googled extensively and followed up avenue after avenue, wiki after wiki:  
I've recompiled nfs-utils, portmap, baselayout.  I've altered hosts.allow and 
hosts.deny, etc., etc., and then tried all Emil's suggestions as on this 
thread.  But I still get nowhere, and I think I've now spent so much time on 
it that I really can't see the wood for the trees!  It's obviously something 
so simple, but I just can't see it.  Feel a bit of a prat, but this morning 
was the last straw when I thought I'd better join your thread: no success 
until I left it and went away for an hour - then came back and 
input mount -t nfs 192.168.0.216:/usr/portage /mnt/nfs_portage/ and away we 
went. All ok, just in time for a cronjob emerge --sync.

But not very satisfactory.  It's almost as though the original mount called 
by the scripts and earlier efforts takes some time to die, and a fresh 
instance does the trick! But I don't know enough about the process to know if 
that's so - I just seem to recall someone somewhere in my endless searches 
saying something along those lines ... 

Strange that it's just the two of us to be afflicted by this at about the same 
time?  AAMOI have you tried taking your machine by surprise?

Bogo





-- 
[EMAIL PROTECTED] mailing list



Re: [gentoo-user] Re: xfce woes

2011-02-03 Thread Alan McKinnon
Apparently, though unproven, at 00:15 on Friday 04 February 2011, walt did 
opine thusly:

 On 02/02/2011 09:15 PM, Alan McKinnon wrote:
  Apparently, though unproven, at 00:00 on Thursday 03 February 2011, walt
  did
  
  opine thusly:
  As much as I like the convenience of automounting as a luser, all of
  my bofh instincts cry out that lusers shouldn't be allowed to
  
   mount a filesystem!
   
  This is one of those Windows/convenience versus unix/security things,
  I think, but I'm just an amateur bofh.
  
  What do you professional bofhs think?
  
  Depends on what the machine is used for.
  
  For a multiuser box, you probably want user to not shutdown/reboot,
 
 Yes, even I thought of that.  As an amateur, though, I have no idea how
 many multi-user machines still exist.

I have more than 120 of them

 When I was a lad, the campus computer(s) still ran batch jobs submitted on
 punch cards.  We had to wait for hours or even the next day to discover a
 stupid typo.

Punch cards???

Piffle. We used *paper tape* :-)

 Actually, the profs didn't use punchcards, just us peons.  The profs had
 dumb terminals so they could log in to the central server -- and sit for
 as long as five minutes to discover if the server had crashed, or was
 just busy serving the needs of the department chairman's secretary.
 
 Over the years, the frustrations have merely morphed, not vanished :(
 
  be able to mount removeable media...
 
 That was really what I was asking.  I hear horror stories about employees
 plugging usb thumb drives into corporate workstations to steal files, or
 maybe infecting the whole network with malware from a lost thumb drive
 found at a bus stop or a car park.


Here's a funny story. It's true, and it's sad, but also macabrely funny.

A penetration testing firm that I know well was commissioned to test the 
external security of a certain enterprise that was obliged to comply with 
stiff legal requirements. This firm does our pentesting too, and they are 
pretty thorough. If you ask them to throw the book at something for testing, 
and pay them enough, they will gladly oblige, and not care too much if this 
embarrasses you

Try as they might, they could not get past this enterprise's border firewalls. 
Nothing showed up as a weakness. They tried and tried and tried and tried 

Until one day one of their bright spark techies had a brilliant idea. They 
hired a bunch of pretty girls wearing tight skimpy New! Improved! Check Our 
Promotion! outfits to stand outside the front door handing out free 
complimentary CDs.

Yes, you guessed it. Within the hour the perimeter firewalls had more holes 
than a Swiss cheese. Somebody paid dearly for that.

-- 
alan dot mckinnon at gmail dot com



Re: [gentoo-user] Re: Recommendations for scheduler

2014-08-03 Thread Alan McKinnon
On 03/08/2014 09:23, Joost Roeleveld wrote:
 On Saturday 02 August 2014 16:53:26 James wrote:
 Alan McKinnon alan.mckinnon at gmail.com writes:
 Well, we've found 2 projects that at least in part seek to achieve our
 general goals - chronos and Martin's new project.
 Why don't we both fool around with them for a bit and get a sense of
 what it will take to add features etc? Then we can meet back here and
 discuss. Always better to build on an existing foundation

 Mesos looks promising for a variety of (Apache) reasons. Some key
 technologies folks may want google about that are related:

 Quincy (fair scheduler)
 Chronos (scheduler)
 Hadoop (scheduler)
 
 Hadoop not a scheduler. It's a framework for a Big Data clustered database.
 
 HDFS (clustered file system)
 
 Unless it's changed recently, not suitable for anything other than Hadoop and 
 contains a single point of failure.
 
 http://gpo.zugaina.org/sys-cluster/apache-hadoop-common

 Zookeeper (Fault tolerance)
 SPARK (optimized for iterative jobs where a dataset is reused in many
 parallel operations; advanced math/science and many other apps.)
 https://spark.apache.org/

 Dryad, Torque, MPICH2, MPI
 Globus toolkit

 mesos_tech_report.pdf

 It looks as though Amazon, google, facebook and many others
 large in the Cluster/Cloud arena are using Mesos..?

 So let's all post what we find, particularly in overlays.
 
 Unless you are dealing with Big Data projects, like Google, Facebook, Amazon, 
 big banks,... you don't have much use for those projects.


My wife works in BigData for real, she and Joost speak the same
language, I don't :-)
She reckons Big Data is like teenage sex - everyone says they are doing
it and no-one really does ;-D


 Mesos looks like a nice project, just like Hadoop and related are also nice. 
 But for most people, they are as useful as using Exalytics.

A bit OT, but it might be worthwhile for interested persons to get good
ebuilds going for these projects. Someone will use it on Gentoo, and it
will add value to the project. Much like gems and other
business-oriented packages benefit


 
 A scheduler should not have a large set of dependencies that you wouldn't use 
 otherwise. That makes Chronos a non-option to me.
 
 Martin's project looks promising, but doesn't store the schedules internally. 
 For repeating schedules, like what Alan was describing, you need to put those 
 into scripts and start those from an existing cron.

Sounds like a small feature-add. If Martin did his groundwork
correctly[1] then the core logic will work and it's just a case of
adding some persistence and loading the data back in on demand

 Of the 2, I think improving Martin's project is the most likely option for me 
 as it doesn't have additional dependencies and seems to be easily implemented.

Don't forget Martin is the guy who does eix.
Street cred? check
Knows Gentoo? check





[1] I only say it this way as I haven't evaluated his code at all yet so
have no idea how far Martin has taken it


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Re: File system testing

2014-09-19 Thread J. Roeleveld

On Friday, September 19, 2014 01:41:26 PM James wrote:
 J. Roeleveld joost at antarean.org writes:
  Out of curiosity, what do you want to simulate?
 
  subsurface flows in a porous medium. AKA carbon sequestration
  by injection wells. You know, provide proof that those
  that remove hydrocarbons actually put the CO2 back
 and significantly mitigate the effects of their ventures.

Interesting topic. Can't provide advice on that topic.

  It's like this. I have been struggling with my 17 year old genius
 son who is a year away from entering medical school, with
 learning responsibility. So I got him a hyperactive, highly
 intelligent (mix-doberman) puppy to nurture, raise, train, love
  and be responsible for. It's one genius pup, teaching another
 pup about being responsible.

Overactive kids, always fun.
I try to keep mine busy without computers and TVs for now. (She's going to be 
3 in November)

 So goes the earl_bidness...imho.
 
   Many folks are recommending to skip Hadoop/HDFS all  together
  
  I agree, Hadoop/HDFS is for data analysis. Like building a profile
  about people based on the information companies like Facebook,
  Google, NSA, Walmart, Governments, Banks, collect about their
  customers/users/citizens/slaves/
  
   and go straight to mesos/spark. RDD (in-memory)  cluster
   calculations are at the heart of my needs. The opposite end of the
   spectrum, loads of small files and small apps; I dunno about, but, I'm
   all
   ears.
   In the end, my (3) node scientific cluster will morph and support
   the typical myriad  of networked applications, but I can take
   a few years to figure that out, or just copy what smart guys like
   you and joost do.
  
   
  Nope, I'm simply following what you do and provide suggestions where I
  can.
  Most of the clusters and distributed computing stuff I do is based on
  adding machines to distribute the load. But the mechanisms for these are 
  implemented in the applications I work with, not what I design underneath.
  The filesystems I am interested in are different to the ones you want.
 
 Maybe. I do not know what I want yet. My vision is very light weight
 workstations running lxqt (small memory footprint) or such, and a bad_arse
 cluster for the heavy lifting running on whatever heterogeneous resources I
 have. From what I've read, the cluster and the file systems are all
 redundant at the cluster level (mesos/spark anyway) regardless of what any
 given processor/system is doing. All of Alan's fantasies (needs) can be
 realized once the cluster stuff is mastered. (chronos, ansible etc etc).

Alan = your son? or?
I would, from the workstation point of view, keep the cluster as a single 
entity, to keep things easier.
A cluster FS for workstation/desktop use is generally not suitable for a High 
Performance Cluster (HPC) (or vice-versa)

  I need to provided access to software installation files to a VM server
  and access to documentation which is created by the users. The
  VM server is physically next to what I already mentioned as server A.
  Access to the VM from the remote site will be using remote desktop
  connections.  But to allow faster and easier access to the
  documentation, I need a server B at the remote site which functions as
  described.  AFS might be suitable, but I need to be able to layer Samba
  on top of that to allow a seamless operation.
  I don't want the laptops to have their own cache and then having to
  figure out how to solve the multiple different changes to documents
  containing layouts. (MS Word and OpenDocument files).
 
  Ok so your customers (hyperactive problem users) interface to your cluster
 to do their work. When finished you write things out to other servers
 with all of the VM servers. Lots of really cool tools are emerging
 in the cluster space.

Actually, slightly different scenario.
Most work is done at customers systems. Occasionally we need to test software 
versions prior to implementing these at customers. For that, we use VMs.

The VM-server we have is currently sufficient for this. When it isn't, we'll 
need to add a 2nd VMserver.

On the NAS, we store:
- Documentation about customers + Howto documents on how to best install the 
software.
- Installation files downloaded from vendors (We also deal with older versions 
that are no longer available. We need to have our own collection to handle 
that)

As we are looking into also working from a different location, we need:
- Access to the VM-server (easy, using VPN and Remote Desktops)
- Access to the files (I prefer to have a local 'cache' at the remote location)

It's the access to files part where I need to have some sort of distributed 
filesystem.

 I think these folks have mesos + spark + samba + nfs all in one box. [1]
 [1]
 http://www.quantaqct.com/en/01_product/02_detail.php?mid=29sid=162id=163q
 s=102

Had a quick look, these use MS Windows Storage 2012, this is only failover on 
the storage side. I don't see anything related

[gentoo-user] Re: gentoo livedvd kernel

2014-11-12 Thread James
Rich Freeman rich0 at gentoo.org writes:


 Generally the kernel is the easiest thing to get off of one of those
 LiveDVDs by just sticking the DVD in a drive and reading it (without
 booting it).  Everything else on the DVD except for the kernel and
 initramfs and bootloader tends to go in some big squashfs or the like.
 However, the kernel has to be someplace the bootloader can read it,
 and that usually means a vmlinuz or whatever on the root directory.

Right now the (2) systems are booted up on media, so I cannot test this.
(later on I will).


 You could probably just use the same kernel on your own system, unless
 it has an embedded command line or initramfs (I forget offhand how
 overriding either of these works). 

Yes. scp is my favorite for this but I want to look at the actual
kernels that are used for bootup of a livedvd and the minimal, for
many reasons.   
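
For a quick look without booting anything, loop-mounting the ISO is enough; a
sketch, where the ISO name and the paths inside it are assumptions that vary
between releases:

  mkdir -p /mnt/iso
  mount -o loop,ro livedvd.iso /mnt/iso
  ls -l /mnt/iso/isolinux/        # kernel and initramfs usually live here
  file /mnt/iso/isolinux/gentoo   # identify the kernel image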

 Most bootloaders tend to not
 require these, but in some embedded situations you could run into
 them.  Once upon a time the kernel had a BIOS boot sector in the first
 512 bytes so you could just dd the kernel onto a disk and boot it
 (there is still a stub that will tell you to bugger_off_msg if you do
 that in arch/x86/boot/header.S).  (just a bit of trivial there)

Do tell more.

Now this would be most useful to me. If you recall (over 6 months ago)
I was (am) working on a setup where I can use all of the old systems,
drives, usb, and a plethora of embedded boards  to run variants
of embedded gentoo through minimal gentoo. I particularly got stuck
on how to quickly reinstall a system moving media around. If
I could just dd kernels/images around, I could keep many images/kernels on
a server and then boot-- chroot one of those aforementioned test boxes
and quickly have a minimal old cpu test system online for hacking.  I
(temporarily) shelved that project to get clustering going with btrfs, ceph
and mesos+spark on gentoo. Naturally, I have bitten off
a wee_bit too much, but, life is good!

Likewise, meino was (is?) working on porting/hacking the
old venerable netconsole.c   [1] to some newer embedded boards. 
Many variants of netconsole.c have existed over the decades.
I think running embedded/minimum gentoos via secure portal is kind of the
next step (for me) after figuring out a semantic for being able to use all
those old x86 and amd64 boxes for various testings of singular codes
and all sorts of custombuilt hardware across the net.  That way
folks could hack remotely without having to have the specific hardware.
Developing virtually is great, but at some point you have to run
the codes on actual hardware. After x86 I'm going to support
a variety of arm boards (that run linux).

*SO* do tell me more... as I'm curious about your dd of kernels
and such.

 Rich

James

[1] http://lxr.free-electrons.com/source/drivers/net/netconsole.c







[gentoo-user] kexec

2014-11-16 Thread thegeezer

howdy folks,
i've had a bit of a hiatus of internet access and just catching up with 
mails i notice a recurring systemd related spark about boot times.  
please this message is not to recreate a flame but to suggest something 
that may benefit folks from all preferred init systems.


kexec is a great little utility.  when you run /etc/init.d/kexec start 
it creates references in the existing kernel for a soft reboot into a 
new kernel.  you can then at a time of your choosing run reboot and 
the system will appear to go through a clean shutdown cycle, but instead 
of triggering the power cycle, it will access the referenced kernel and 
initram and load them into memory as though we are just coming from the 
grub boot menu. the kernel image and initramfs must be visible at the 
time you choose to reboot.


from a forensics / debugging / kdump crash handling point of view this 
has great benefit because memory state remains the same when the system 
starts. (in fact for full access you need to use crash mode of kexec 
-p otherwise you overwrite bits when you boot and start services)


from a reboot a remote computer into a new (tested?) kernel and initram 
in very little time point of view this means you do not need to wait 
for bios / uefi / raid bios / 24 disk raid spinup cycle / 24GB memory 
test to complete.  sure if you are looking to reset faulty hardware like 
a stuck tape drive or graphics card this is not great.  however, as the 
new kernel need not be identical to the existing kernel, it does mean 
you can upgrade then reboot a lot faster.


using the tools manually is possible too -- /etc/init.d/kexec automounts 
boot and searches for the bits to use. you can do it manually by


## load a kernel and initramfs
kexec -l /boot/vmlinuz --append="dolvm root=/dev/vg/root" --initrd=/boot/initrd


## reboot hard and fast into new kernel (warning does not go through 
shutdown so mounted fs acts as though you hit the reset button)

kexec  -e

let's say you have some remote or embedded systems that you want to 
upgrade the kernel for. if you have loadable modules you need to rsync 
/lib/modules/ otherwise you just need to scp the kernel and initram (if 
you have one) over then kexec it.  no more waiting for device reset 
scans, just reload the operating system with #/etc/init.d/kexec restart

followed by
# reboot

this is especially handy as grub2 has a few quirks regarding 'failsafe' 
menu choices, so doing things this way you can have grub2 still boot 
'actuallyworks-vmlinuz'  and then from ssh run kexec to 'testing-vmlinuz'
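
putting that together for a remote kernel upgrade, something along these lines
works (the hostname and version names are placeholders, and it assumes the
kexec init script is set up to pick up the new image from /boot):

  ## on the build box: ship the new kernel bits
  scp /boot/vmlinuz-new /boot/initramfs-new remotebox:/boot/
  rsync -a /lib/modules/X.Y.Z-new/ remotebox:/lib/modules/X.Y.Z-new/

  ## on the remote box: stage the new kernel, then soft-reboot into it
  ssh remotebox '/etc/init.d/kexec restart && reboot'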


hope this has been interesting!



[gentoo-user] Re: btrfs fails to balance

2015-01-20 Thread James
Bill Kenworthy billk at iinet.net.au writes:


  The main thing keeping me away from CephFS is that it has no mechanism
  for resolving silent corruption.  Btrfs underneath it would obviously
  help, though not for failure modes that involve CephFS itself.  I'd
  feel a lot better if CephFS had some way of determining which copy was
  the right one other than the master server always wins.


The Giant version 0.87 is a major release with many new fixes;
it may have the features you need. Currently the ongoing releases are
up to v0.91. The readings look promising, but I'll agree it
needs to be tested with non-critical data.

http://ceph.com/docs/master/release-notes/#v0-87-giant

http://ceph.com/docs/master/release-notes/#notable-changes


 Forget ceph on btrfs for the moment - the COW kills it stone dead after
 real use.  When running a small handful of VMs on a raid1 with ceph -
 slow :)

I'm staying away from VMs. It's spark on top of mesos I'm after. Maybe
docker or another container solution, down the road.

I read where some are using a SSD with raid 1 and bcache to speed up
performance and stability a bit. I do not want to add SSD to the mix right
now, as the (3) node development systems all have 32 G of ram.



 You can turn off COW and go single on btrfs to speed it up but bugs in
 ceph and btrfs lose data real fast!

Interesting idea, since I'll have raid1 underneath each node. I'll need to
dig into this idea a bit more.
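
For reference, turning COW off is per-file/per-directory and only affects files
created afterwards; a sketch, with the OSD path purely as an example:

  # new files created under this directory skip COW
  chattr +C /var/lib/ceph/osd
  lsattr -d /var/lib/ceph/osd

  # or mount the whole filesystem without COW
  mount -o remount,nodatacow /var/lib/ceph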


 ceph itself (my last setup trashed itself 6 months ago and I've given
 up!) will only work under real use/heavy loads with lots of discrete
 systems, ideally 10G network, and small disks to spread the failure
 domain.  Using 3 hosts and 2x2g disks per host wasn't near big enough :(
  Its design means that small scale trials just wont work.

Huh. My systems are FX8350 (8-core) processors running at 4GHz with 32 G ram.
Water coolers will allow me to crank up the speed (when/if needed) to
5 or 6 GHz. Not Intel, but not low end either.


 Its not designed for small scale/low end hardware, no matter how
 attractive the idea is :(

Supposedly there are tools to measure/monitor ceph better now. That is
one of the things I need to research. How to manage the small cluster
better and back off the throughput/load while monitoring performance
on a variety of different tasks. Definitely not a production usage.

I certainly appreciate your ceph_experiences. I filed a bug with the
version request for Giant v0.87. Did you run the Giant version?
What versions did you experiment with?

I hope to set up Ansible to facilitate rapid installations of a variety
of gentoo systems used for cluster or ceph testing. That way configurations
should be able to reboot after bad failures.  Did the failures you experienced
with Ceph require the gentoo-btrfs based systems to be completely reinstalled
from scratch, or just a purge of Ceph from the disks and a Ceph reconfiguration?

I'm hoping to configure ceph in such a way that failures do not corrupt
the gentoo-btrfs installation and only require repair to ceph; so your
comments on that strategy are most welcome.




 BillK


James


 







[gentoo-user] Re: btrfs fails to balance

2015-01-20 Thread James
Rich Freeman rich0 at gentoo.org writes:


  You can turn off COW and go single on btrfs to speed it up but bugs in
  ceph and btrfs lose data real fast!

 So, btrfs and ceph solve an overlapping set of problems in an
 overlapping set of ways.  In general adding data security often comes
 at the cost of performance, and obviously adding it at multiple layers
 can come at the cost of additional performance.  I think the right
 solution is going to depend on the circumstances.

Raid 1 with btrfs can not only protect the ceph fs files but the gentoo
node installation itself.  I'm not so worried about performance, because
my main (end result) goal is to throttle codes so they run almost
exclusively in ram (in memory) as designed by AMPLab. Spark plus Tachyon is a
work in progress, for sure.  The DFS will be used in lieu of HDFS for
distributed/cluster types of apps, hence ceph.  Btrfs + raid 1 is
a failsafe for the node installations, but also for all data. I only intend
to write out data, once a job/run is finished; but granted that is very
experimental right now and will evolve over time.
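
For what it's worth, the raid1 underneath each node is a one-liner at mkfs
time; a sketch with placeholder devices and mountpoint:

  mkfs.btrfs -m raid1 -d raid1 /dev/sda4 /dev/sdb4
  mount /dev/sda4 /mnt/node-data
  btrfs filesystem df /mnt/node-data   # data and metadata should both report RAID1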


 
 if ceph provided that protection against bitrot I'd probably avoid a
 COW filesystem entirely.  It isn't going to add any additional value,
 and they do have a performance cost.  If I had mirroring at the ceph
 level I'd probably just run them on ext4 on lvm with no
 mdadm/btrfs/whatever below that.  Availability is already ensured by
 ceph - if you lose a drive then other nodes will pick up the load.  If
 I didn't have robust mirroring at the ceph level then having mirroring
 of some kind at the individual node level would improve availability.

I've read that btrfs and ceph are a very suitable, yet very immature,
match for local-distributed file system needs.


 On the other hand, ceph currently has some gaps, so having it on top
 of zfs/btrfs could provide protection against bitrot.  However, right
 now there is no way to turn off COW while leaving checksumming
 enabled.  It would be nice if you could leave the checksumming on.
 Then if there was bitrot btrfs would just return an error when you
 tried to read the file, and then ceph would handle it like any other
 disk error and use a mirrored copy on another node.  The problem with
 ceph+ext4 is that if there is bitrot neither layer will detect it.

Good points, hence a flexible configuration where ceph can be reconfigured
and recovered as warranted, for this long term set of experiments.
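
Those btrfs checksums only get exercised on read or during a scrub, so a
periodic scrub is the usual way to surface bitrot early; a sketch, with the
mountpoint just as an example:

  btrfs scrub start /mnt/ceph-osd
  btrfs scrub status /mnt/ceph-osd   # summarizes checksum/read errors found so far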

 Does btrfs+ceph really have a performance hit that is larger than
 btrfs without ceph?  I fully expect it to be slower than ext4+ceph.
 Btrfs in general performs fairly poorly right now - that is expected
 to improve in the future, but I doubt that it will ever outperform
 ext4 other than for specific operations that benefit from it (like
 reflink copies).  It will always be faster to just overwrite one block
 in the middle of a file than to write the block out to unallocated
 space and update all the metadata.

I fully expect the combination of btrfs+ceph to mature and become
competitive. It's not critical data, but a long term experiment. Surely
critical data will be backed up off the 3-node cluster. I hope to use
ansible to enable recovery, configuration changes and bringing on and
managing additional nodes; this is a concept at the moment, but googling around
it does seem to be a popular idea.

As always your insight and advice is warmly received.


James


 







[gentoo-user] Re: This nite's switch to full multilib

2015-03-30 Thread James
Peter Humphrey peter at prh.myzen.co.uk writes:


 On Sunday 29 March 2015 20:12:45 Alan McKinnon wrote:

  grep -ir qt /etc/portage
grep qt /etc/portage/package.use | wc -l =11

dev-qt/qt-creator   android autotools cmake python 
dev-qt/qtguiqt3support
=dev-qt/qtsql-4.8.5 qt3support
=dev-qt/qtcore-4.8.5-r1 qt3support
# required by dev-qt/qtcore-4.8.5-r1[qt3support]
=dev-qt/qtgui-4.8.5-r1 qt3support
# required by dev-qt/qtopengl-4.8.5
=dev-qt/qtgui-4.8.5-r2 -qt3support
# required by dev-qt/qt3support-4.8.5
=dev-qt/qtgui-4.8.5-r2 qt3support
# required by dev-qt/qtwebkit-4.8.5[gstreamer]





# grep -ir qt /etc/portage | wc -l  =86

# eselect profile list
Available profile symlink targets:
  [1]   default/linux/amd64/13.0 *


So I am multilib? How/where do I tell, as one reader posted
that the profile is not where we designate if we are multilib
or not (news to me). I am open to edumacation on this aspect.
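
A rough way to check from the command line (output naturally differs per
machine; on amd64 the plain 13.0 profile is multilib, only the no-multilib
variants are not):

  eselect profile show
  emerge --info | grep '^ABI_X86'   # e.g. ABI_X86="64 32" once the multilib ABIs are active
  portageq envvar ABI_X86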


 and help them remove all cruft that's getting in the way of a clean upgrade

I just ran a 'depclean' a few days ago. Dozens of my java hacks (overlays)
and such got cleaned out and my apache-spark ebuild (hack) does not compile
anymore. No big deal, I get to spend another day learning all the neat
things I do not know about maven..

I did not even know a cleanup was needed, but 'eix-test-obsolete' 
broke me down and kicked me in the teeth. I've got a lot to clean up:

eix-test-obsolete | wc -l =209

emerge -uDNvp world | wc -l =111
emerge -uDNvp world : 
Total: 98 packages (2 upgrades, 2 new, 2 in new slots, 92 reinstalls, 3
uninstalls)


All I have done so far is run emerge --sync. I previously sync'd up on
28mar2015 before that. I do not run KDE, I use lxde + lots of java (hacks).
I refer to 'java(hacks)' because it is mostly a kludge of
old portage packages and overlays.


I have automask automated via make.conf.
EMERGE_DEFAULT_OPTS="--with-bdeps y --autounmask-write y"

 But  before I follow the  path of others:

cat package.use | wc -l   =314

package.use via automask is getting a bit out of hand, already.
Somehow, I do not feel good that the devs' solution is to 
munge up something I have already been abusing. So, does
'eix-test-obsolete' have some automated option to clean up
package.use? I think I need to do this before applying
the latest (dev_inspired) kludge to my main workstation?

Maybe I should BE the chicken that I am, and wait a few days
for others to flush this out a bit more? It's already been a
hell(o)Monday for me..


On a brighter note, I do feel good that my instincts on kludging
up a gentoo system, seem to be tracking the devs, quite nicely

Guidance, humor and spankings are all welcome.


James




[gentoo-user] Re: want to upgrade 50 month old installation

2015-08-05 Thread James
Neil Bothwick neil at digimed.co.uk writes:


  Um, we can think out of the box for a new and cool installation
  semantic. Just look at blueness's posting (Gentoo Reference System) on
  www.gentoo.org as a new, and useful approach to installs for
  established gentoo admins.

 That's interesting, but not an installer. It's a means to building a
 standard reference system repeatedly that then needs installing.

First, Anthony identifies but one popular need: gentooers with advanced
skills want (and highly desire) a robust method to install new gentoo
systems. So it's not just the noobs, but devs and everybody in between, that
knows that this is a good idea. What do we end up with? I'd hope several
different approaches to installing real hardware as well as virtual
hardware. The faster/simpler/error-free the better, imho::YMMV.

Anthony's work is alpha, so guys like yourself, with tons of experience,
could provide him ideas. You'll find he's quite a wonderful dev to work
with; collegial is a very accurate term to describe Anthony as a dev.


 I see one major problem with a pointy-clicky YaST/Ananconda type
 installer: who is going to write it? Who has that particular itch bad
 enough to scratch it?

Rich0 said he'd modify the handbook into an experimental prose that
leads to a raid-1 btrfs baseline system, if enough folks liked the ideas.
I think that approach is best, because it makes all the 'die_hard' handbook
fans happy and can also serve as a preliminary specification for an actual
automated installation, not just for noobs. Add a dose of 'snapshots'
(snapper) and we'd have a much better support semantic for noobs and the
rest of us too!



 An automated installer is another matter, write a config file and point it
 at some bare metal using something like Ansible, to allow sysadmins to
 roll out systems with less fiddling.


Yes, but they are inter-related issues, imho. Yes I like what you are
saying. There are several needs here for automation of gentoo installs; not
just for noobs, but for those of us trying to develop or stabilize other
codes. HDFS, sucks as a distributed file system. HDFS is the source of many
problems found in modern clustering. For me, I'm spending way too much time
on trying to find an automated (semi-automated) install semantic for
raid-1_btrfs. So my work on mesos [1] is very slow, ATM. Fix the
installation problem, and I'll deliver (toes crossed tightly) the most 'bad
ass' clustering technology currently available::

*Mesos + spark + storm + tachyon + cassandra*  on gentoo (amd64). 

Then the stabilization  work moves to arm64. Both platforms on top of
btrfs/cephfs is going to be *smokin_wicked_cool*. Built from sources, gentoo
will be quickly adopted by many expert linux types. The baggage/packaging
problems, kernel tuning and optimization needs puts Gentoo in a unique
position to dominate this space...


That's my position and I'm sticking with it

hth,
James


[1] http://www.openstacksv.com/2014/09/02/make-no-small-plans/





Re: [gentoo-user] konqueror:5 - why couldn't it be more like konqueror:3 ?!!

2017-04-20 Thread R0b0t1
On Thu, Apr 20, 2017 at 3:01 PM, Mick <michaelkintz...@gmail.com> wrote:
> On Thursday 20 Apr 2017 14:22:06 R0b0t1 wrote:
>> On Wed, Apr 19, 2017 at 9:57 AM, Mick <michaelkintz...@gmail.com> wrote:
>> > OK, I know life moves on, but this move has been a retrograde step for me.
>> >  My konqueror:5 recently updated seems to have a number of problems and
>> > <aheam!> features I am not happy with.  Grateful for any pointers to
>> > address these.  In no particular order.
>> >
>> > 1. The Bookmarks Toolbar will *always* show up when launching Konqueror.
>> > I
>> > deselect Settings/Toolbars Shown/Bookmark Toolbar and relaunch the
>> > application, only to find out my deselection will not stick.
>> >
>> > 2. The menu shows no icons, only text; buttons like open new tab/close
>> > current tab show no icons, making difficult to guess.
>> >
>> > 3. When used as a file manager Konqueror will only open directories or
>> > files if I double click on them.  I have set up in systemsettings5
>> > Hardware/Input Devices/Mouse/Single click to open files and folders.
>> > Konqueror ignores it.
>> >
>> > 4. All my years of Bookmarks of Konqueror:4 gone.  I had to import them
>> > manually.
>> >
>> > 5. Network places, gone.
>> >
>> > 6. Left hand Panels with Places/Devices/Folders ain't thaa'r.
>> >
>> > 7. Konqueror Introduction page, no icons; unless I hover over them,
>> > hyperlinked titles shown in dark grey over a blue background.  I know my
>> > eye sight is not as good as it used to be, but this is really akin to
>> > usability sabotage.
>> >
>> > The are probably more problems I have not captured above, but my
>> > experience of konqueror:5 is that of crippleware.  I don't know if this
>> > has anything to do with it, I am not running konqueror on a full plasma
>> > desktop, but as a stand alone application.  Have you observed similar?
>> >
>> > --
>> > Regards,
>> > Mick
>>
>> You should take your complaints to the project's bug tracker. I know
>> they will take number 7 very seriously, and seeing as you've used the
>> project for an appreciable amount of time they will likely consider
>> most of your other issues.
>>
>> At the very least they would probably tell you how to do what you
>> don't know how to do. If they can't insist that there has been a
>> regression.
>
> Thanks R0b0t1, I don't want to darken their doorstep of devs with issues which
> appear isolated to my systems.  Other M/L participants many of whom are
> running the full Plasma desktop, do not seem to have such problems.  So, I'm
> guessing I must be missing some package or other to complement the required
> functionality.
>

I have to disagree with you here. There is no way I can see the
developers responding negatively, though time might be short for new
(or "reintroduced") features. I feel it necessary to mention that
seventh item again, which they will take very seriously.

> I'm still on kmail:4 and all menu icons are shown and functionality is not
> crippled in any way.  I fear what might happen when I eventually have to
> install kmail:5.
>

I feel this is also something you should express to the developers,
but admittedly I don't know the best place. Perhaps a mailing list. I
understand there is a time investment but if you have any to spare it
will almost assuredly spark a constructive conversation.



Re: [gentoo-user] To all IPv6-slackers among the Gentoo community

2019-11-26 Thread Mick
On Tuesday, 26 November 2019 17:58:46 GMT Dale wrote:

> I enter my username/password on the modem so I'm pretty sure it is
> processing the packets and such.  There is no mention of anything IPv4
> or v6.  I'd suspect it is v4 only, since it works it has to support v4. 
> lol  So, old modem may have to be bricked at some point.

Not necessarily.  If your modem is like the one described here, follow the 
guidance provided to set it in bridged mode:

https://www.dslreports.com/faq/6405

In bridged mode it will pass all ethernet packets to your router and your 
router will be able to obtain a public IP address with its dhcp client 
directly from your ISP.  Of course, to be able to connect to your ISP you will 
now need to enter your ADSL account username/passwd into the PPPoE (or PPPoA) 
client in your router's management interface.  DHCP and DNS server 
functionality will also be provided by your router for all devices on your 
LAN.  The modem will be just a dumb box between the ISP and your router.

In the unlikely chance your router does not possess such PPP authentication 
functionality, you will have to replace your router with one which does and at 
the same time look to buy one which offers IPv6 too.


> I do have a
> newer gray modem that came with the DSL kit.  I stopped using it because
> it got so warm.  The old black box one runs cool and it has more vent
> holes.  I may have to check and see if the gray one supports v6 but it
> is fairly old too.  It's at least 10 years old. 

ADSL ATM encapsulation technology has not changed for many years now.  I don't 
think age (or colour) matters really, unless you can see smoke coming out of 
it when you power it up!  LOL!


> My router also makes no mention of IPv4 or v6.  I suspect it is in the
> same boat as the modem, it doesn't support it and doesn't have the
> option to either.  I did go to the Linksys website and look for a
> firmware upgrade, nothing available, not even a old one. 

You haven't provided any model names[1] so it's difficult to google things for 
you, or suggest solutions.  Have a look here to see if your router is still 
supported by this open source Linux firmware:

https://openwrt.org/supported_devices

https://openwrt.org/toh/start

Other alternative(s):

http://www.polarcloud.com/tomato


> I did some searching for routers with ipv6 support.  I'm not finding a
> lot.  Is this something I need to worry about yet?  I mean, is there a
> lot of IPv6 equipment even available right now? 

You may not have tried hard enough.  These were a thing even 8 years ago:

https://www.cnet.com/news/top-5-ipv6-ready-wireless-routers/

Answering your question, yes, today all modern routers and any ADSL modems 
with routing capability come as dual IPv4/6 stack.


[1] True story:  Years ago a friend started work in a car accessories and 
spare parts shop.  Customer walks in looking for spark plugs, where upon my 
friend asks for his make and model.  Customer replies:  "Dunno, it's a blue 
car ..."  O_O

-- 
Regards,

Mick



Re: [gentoo-user] Re: [O/T] PSU caps

2020-10-29 Thread Dale
Michael wrote:
> On Wednesday, 28 October 2020 22:57:25 GMT Dale wrote:
>> Michael wrote:
>>> On Wednesday, 28 October 2020 19:27:06 GMT Dale wrote:
>>>> I'm thinking about replacing that cap and seeing if it works.  I've
>>>> repaired a few monitors that way but my question is, should I trust it
>>>> after replacing that cap even if it works??  Should it be load tested or
>>>> something?  Does the protection circuitry only work once?
>>> It depends what was damaged and the cause of it.  It could be the
>>> capacitor
>>> reached its predicted end of life.  It could have been a transient
>>> voltage, in which case more things in the protection circuit (diodes,
>>> resistors) may have also been damaged.
>>>
>>> I had an old desktop which during a lightning storm ended up with a blown
>>> PSU and a blown winmodem.  The winmodem was unrepairable, but the PSU
>>> survived following the replacement of a single capacitor.  :-)
>>>
>>> For the cost of a capacitor I'd give it a try and then measure the output
>>> voltages under load.
>> Well, we getting rain but I haven't heard a single bit of thunder or any
>> light blinking.  Nothing really bad anywhere near us either.  It's the
>> hurricane thing again.  I might add, I got surge protection coming out
>> my ears.  One in the main breaker box that should protect everything. 
>> It's installed right below the main breaker.
> This type of surge protectors are good for mains transients and can be reset 
> when they trip.
>

It also has an indicator that tells when it is done protecting or
when something happens and it trips the breaker.

>> Another one at the wall plug where I plug my UPS in.
> These may or may not reset - depending on the type.  The multi-socket 
> extensions with varistors (MOV) in them are not a fit and forget item.  If 
> they have seen repeated or prolonged overvoltage conditions close or above to 
> their clamping voltage value, they can and do degrade over time.  So you may 
> think I'm well protected me, but when the next transient comes along the 
> surge 
> protector provides next to no protection at all.  A close by lightning strike 
> will cause the varistor to fail catastrophically, in which case you'll know 
> it's cooked and take action to replace the unit, but otherwise you wouldn't 
> be 
> aware of its suboptimal capability.
>

These also have an indicator that shows when they have absorbed all
the surges they can.  In the past, I've had a few go out.  I replace
them when needed.  The biggest issue with power around here is sags or
just total blinks.  Our power company has surge arrestors in several
places along the lines.  Sometimes when I'm driving down the road, I see
them.  They place different kinds of protection devices to help protect
from different power issues.  Some are just a basic spark gap that sparks
when the voltage gets too high, and some are large cans which work
like a large MOV.  Very effective given the high voltages on the wires.
Sometimes after a large storm comes through, I see them in the bucket
trucks replacing them.  No telling how many TVs or deep freezers that
may have saved.


>> End of life.  That is my bet.  I did a search for when I ordered the
>> power supply.  It is within a month or so of being 10 years old.  I may
>> replace that capacitor just for giggles but honestly, I got my money out
>> of that thing a few years ago.  I'd be worried about the other
>> capacitors in there too.  Are they about to pop as well??  Who knows. 
> If they are not domed they ought to be OK.
>
> A big power surge will overheat the capacitor, causing the electrolyte paste 
> to evaporate fast and blow its top off.
>
> Lower surges, or operating in overheated conditions for prolonged periods 
> will 
> cause it to dome as it expands.  It may also cause it to leak slowly, in 
> which 
> case it may not pop/explode.  There are a number of failure modes of 
> electrolytic capacitors, but I don't recall all of them.
>
> The wear and tear of capacitors is a function of temperature and voltage.  As 
> long as both are kept low they will last long(er).


They may be OK at the moment but what about a month down the road?  Six
months down the road?  Yea, the one with the most pressure, read that as
heat, voltage and other conditions, may pop first but the others may
follow suit sometime after that.  The thing is ten years old and the
other caps are likely the same age.  Of course, power supplies nowadays
have really good protection.  Odds are it won't do any damage outside
the power supply itself but there is always a risk.  Given the price of
a decent power supply, it may be better to just buy a new one.  It is
tempting tho. 

Dale

:-)  :-) 



Re: [gentoo-user] Again: Critical bugs considered invalid

2007-06-07 Thread b.n.
[EMAIL PROTECTED] wrote:
 Complaining TWICE worked.  

Is it so bad? I'd say complaining ten times would be bad, but twice
seems a reasonable number of attempts.

 The problem I complained about shouldn't
 have happened in the first place; someone fixed something that wasn't
 broken and made it broken.

Bugs! What an awkward occurrence in the world of programming! And, even
more unusual, people who should improve programs... introduce new bugs
too! Alas! They even have a word for these incredibly rare kind of
bugs: regressions. They are as common as shit, my friend. I just
discovered two of them, today, in the data analysis software I code in
my lab :)

Probably someone fixed something that WAS broken but, doing that, also
unfixed something else. In programming, often, tightening a string
somewhere loosens it somewhere else. Bug fixing is harder than
programming itself.

 Your response is absolutely typical of my problem with the gentoo dev
 community.  You misstate a complaint, overreact to it, and apparently
 feel pretty smug about your accomplishment. 

Where did I misstate (?) a complaint?
Where did I overreact?
And where did I feel smug about it?

You had perfectly legit complaints. I (we) just told you what the
correct procedure to get them solved is. Note: maybe it won't get them
solved, I agree. But ranting is not the way either. All you can logically do
is try again to follow the procedure, or fix them yourself. There's nothing
else you can do. Really.

 No one will admit to the
 two screwups (first breaking a working ebuild, second incorrectly
 closing a bug on it).  Instead you lash out at those who point out
 problems.

I fully, completely admit the screwups!
What you fail to understand is that they're common everyday problems
that will always occur on a large project like an operating system
distribution, and that there are methods to fix them most of the time.

 Yes, I had the wrong program when I complained about the color
 problem.  But the gentoo community response then as now was to lash
 out, scream and shout, not to actually investigate.

What was there to investigate?
First, we are NOT the community that must investigate, since we're
users, not devels. Ask devels to investigate.
Second, your problem was not something like, say, "X freezing, no error
messages, where could I look?", but more like "colours ugly as hell, wtf
why don't they change them". What is there to investigate about that?
Everyone not colour-blind on this list knows what colours emerge has:
investigation finished.
Third, you actually already did all the investigation possible. You, IIRC:
-looked at emerge code
-didn't like that (probably rightly so)
-told yourself they're too dumb to even understand a complaint (not
rightly so, IMHO)
-ranted on gentoo-users

Really, what should we have done? It is not a rhetorical question: I just
don't understand. If you can tell me an example of what we should have
done, I'm really and sincerely happy to hear it.

  And when I
 finally left the thread alone, you geniuses were still ranting about
 it three days later when I next checked.

That's a good point. We can't resist flamebaits, that's all. :) But then,
what does it have to do with the problem?

 You folks may think you have a cool system, and it is in some ways and
 could be in many others.  But I know many people who tried gentoo and
 bailed precisely because of the shoot the messenger mentality so
 pervasive here; the self-selected sample you see is meaningless.

Well, I'll tell you a secret: even with all its quirks and defects, Gentoo
has one of the more friendly and helpful communities in the OSS world.
Try having a look at the Debian, OpenBSD or Slackware forums/ml/IRC
channels, and you'll understand.

 Go ahead, have another three days' fun.  Maybe I'll spark some more
 tinders in a month or two.  I wouldn't want to deprive you of your
 fun.

I can't understand your sarcasm. It's you who put flamebait in the
forest - how can you blame us for the fire? :)

m.
-- 
[EMAIL PROTECTED] mailing list



Re: [gentoo-user] [OT] What's up with Firefox?

2013-07-04 Thread Paul Hartman
On Thu, Jul 4, 2013 at 10:29 AM, Peter Humphrey
pe...@humphrey.ukfsn.org wrote:
 Sorry to be a nuisance but I can't think of where else to ask.

 On the website I run I have a link to our Twitter profile (or whatever it's
 called). This is the link:

 https://twitter.com/TideswellMVC

 If I examine the page using the web host's file editor I see exactly that,
 yet if I press CTRL-U in www-client/firefox-17.0.7 it shows this:

 https://twitter.com/#%21/TideswellMVC

 and if I click the link in the main window I'm asked for a login and
 password.

Very strange!

 Trying the latest Windows version of Firefox in an XP virtual box I get the
 unaltered link. I can't tell what version that is because About Firefox
 merely checks, then tells me I'm up to date.

The latest release of Firefox is version 22.0, however version 17 is
the latest Extended Support Release and coincidentally also the
latest stable version in Gentoo. The url about: will show the
version information in Firefox (and most other browsers). If you want
to ensure you are comparing apples to apples, you can download the
version 17 ESR Windows installer from:
http://www.mozilla.org/en-US/firefox/organizations/all.html

 Incidentally, I have a web server running on my LAN with an identical copy
 of the site. Using that as the target, rather than the public version, gives
 the same results.

 I haven't used JavaScript anywhere.

 What's going on here?

I don't know, but here is what I am thinking:

A) Does it do the same if you use a different browser? opera or
google-chrome are binaries and don't require any compilation, so they
might be fast to emerge if you haven't got any other browsers
installed. You could also simply use wget or curl to fetch a copy of
the page and look at it in a text editor (there's a small sketch of this
after item D below).

If other browsers experience the same thing, go to C)

B) I would first try to rule out a configuration or plug-in/add-on
causing the issue. On the Firefox Help menu there is an option to
restart with add-ons disabled. This will restart Firefox in safe
mode. Please be aware that it also gives you an option to Reset
Firefox -- this will reset it to factory default configuration, while
supposedly preserving your personal information. I have not actually
tried that so I would backup your profile beforehand just in case it
goes off the rails. Once you're in safe mode, simply quitting firefox
and reopening it will bring it back to normal mode again.

If safe mode doesn't help, I would try creating a new profile. You can
do this without any effect on your existing profile. Start firefox
from shell prompt by firefox -P to launch the profile manager.
Alternatively, you could login using a different user on your machine.

C) If browser or settings don't make a difference, I would ask if
you're using any sort of proxy or ad-blocker/parental control/spam
filter on your network. That might be silently altering the pages in
an unintended way. Also, some employers, ISPs and governments perform
content modification on HTTP requests to insert ads or block
disallowed URLs. If your web server supports HTTPS I would try
fetching the page using that to see if it is the same. That should
eliminate the possibility of outside interference as far as
manipulation of the page contents goes.

D) If this is a website you created, I would ask if you might have
your /etc/hosts file pointing at a different server's IP. I have seen
a similar problem where someone had their domain name on their web
development laptop pointing to a test server rather than the live
public server. That's probably not the case since you've experienced
the same problem on your local web server, but I thought I would
mention it just in case it might spark any ideas if everything else
failed to work.
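
For what it's worth, a minimal sketch of the wget check mentioned in A); the
page name here is made up, so substitute your real page:

$ wget -qO saved-page.html http://www.example.org/index.html
$ grep -o 'https://twitter.com[^" ]*' saved-page.html

If the link comes back unaltered that way, then whatever rewrites it into the
#%21 form is happening in the browser or somewhere along the path, not in the
file on your web host.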

Good luck,
Paul



[gentoo-user] Re: Recommendations for scheduler

2014-08-05 Thread James
Joost Roeleveld joost at antarean.org writes:


  Mesos looks promising for a variety of (Apache) reasons. Some key
  technologies folks may want google about that are related:
  
  Quincy (fair schedular)
  Chronos (scheduler)
  Hadoop (scheduler)
 
 Hadoop not a scheduler. It's a framework for a Big Data clustered   
 database.

  HDFS (clusterd file system)
 Unless it's changed recently, not suitable for anything else then Hadoop 
 and  contains a single point of failure.

I'm curious as to more information about this 'single point of failure'. Can
you be more specific or provide links?

On this resource: 

http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html

The JournalNode machines section talks about surviving faults:

To increase the number of failures the system can tolerate, you should run an
odd number of JNs (i.e. 3, 5, 7, etc.). Note that when running with N
JournalNodes, the system can tolerate at most (N - 1) / 2 failures and
continue to function normally.

 
  http://gpo.zugaina.org/sys-cluster/apache-hadoop-common
  
  Zookeeper (Fault tolerance)
  SPARK ( optimized for interative jobs where a datase is resued in many
  parallel operations (advanced math/science and many other apps.)
  https://spark.apache.org/
  
  Dryad  Torque   Mpiche2 MPI
  Globus tookit
  
  mesos_tech_report.pdf
  
  It looks as though Amazon, google, facebook and many others
  large in the Cluster/Cloud arena are using Mesos..?
  
  So let's all post what we find, particularly in overlays.
 
 Unless you are dealing with Big Data projects, like Google, Facebook,
Amazon,  big banks,... you don't have much use for those projects.

Many scientific applications are using the cluster (cloud) or big data
approach to all sorts of problems. Furthermore, as GPU and the new
Arm systems with dozens and dozens of cpu cores inside one computer become
readily available, the cluster-cloud (big data) approach will become much
more pervasive in the next few years, imho.

http://blog.rescale.com/reservoir-simulation-moves-to-the-cloud/

There are thousands of small companies needing reservoir simulation, not to
mention the millions of folks working on carbon sequestration.
Anything to do with Biological or Chemical Science is using or moving
to the Cloud-Clustered world. For me, a Cluster is just a cloud managed
internally, rather than outsourced to others; ymmv.


 Mesos looks like a nice project, just like Hadoop and related are also 
 nice. But for most people, they are as usefull as using Exalytics.

I'm not excited about an Oracle solution to anything. Many of the folks
I know consult on moving technologies away from Oracle's sphere of influence,
not limited to mysql; ymmv. I know of one very large communications company
that went broke and had to merge because of those ridiculous Oracle fees.
Caveat Emptor; long live PostgreSQL.


 A scheduler should not have a large set of dependencies that you wouldn't
 use otherwise. That makes Chronos a non-option to me.

Those other technologies are often useful to folks who would be attracted to
something like chronos.

 Martin's project looks promising, but doesn't store the schedules 
 internally. For repeating schedules, like what Alan was describing, you 
 need to put those into scripts and start those from an existing cron.
 Of the 2, I think improving Martin's project is the most likely option 
 for me as it doesn't have additional dependencies and seems to be 
 easily implemented.
 Joost

Understood.
Like others, I'll be curious to follow what develops out of Martin's work.

For me Chronos, Mesos and the other aforementioned technologies look to be
more viable; particularly if one is preparing for a clustered world with
CPUs, GPUs, SoCs and Arm machines distributed about the ethernet  as
resources to be scheduled and utilized in a variety of schema. It's the
quest for one-infrastructure to solve many problems where scenarios compete. 

Big data is not the only reason for cloud-clusters. Theoretically,
(Clustered) systems can have a far greater resource utilization of networked
resources than traditional (distributed) approaches. I grant you that this
is a work in progress, but I personally know of dozens of mathematically
complex distributed systems that are  migrating to the clustered approach
rather than something custom or traditionally distributed.

Granted, Cloud -- Clustered -- Distributed are all overlapping approaches
to big problems. I do appreciate the candor of this thread.


James







Re: [gentoo-user] Re: gigabyte mobo latency

2014-10-18 Thread thegeezer
On 18/10/14 22:51, James wrote:
 thegeezer thegeezer at thegeezer.net writes:


 So. Is there a make.conf setting or elsewhere to make the 
 terminal session response times, in the browsers (seamonkey, firefox)
 faster?  
 the typing latency in the browser windows).
 ideas?
 two things you might like to look into: 1. cgroups (including freezer)
 to help isolate your browsers and also 2. look at atop instead of htop
 as this includes disk io

 2. The system rarely uses 8 G of the 32 G available, so disk IO is 
 not the problem. No heavy writes. It was the java scripts

 1. Ahhh! tell me more. I found these links quickly:

 https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt

 http://wiki.gentoo.org/wiki/LXC#Freezer_Support

 I'm not sure if you've read any of my clustering_frustration posts
 over the last month or so, but cgroups is at the heart of clustering now.
 It seems many of the systemd based cluster solutions are having all
 sorts of OOM, OOM-killer etc etc issues. So any and all good information,
 examples and docs related to cgroups is of keen interests to me. My efforts
 to build up a mesos/spark cluster, center around openrc and therefore
 direct management of resources via cgroups.

 The freezer is exactly what I'm looking for. Maybe I also need to read up
 on lxc?  What are the best ways to dynamically manage via cgroups? A gui?
 A static config file? a CLI tool?


 curiously,
 James





the thing with cgroups is that you can choose to create a hierarchy of
what _you_ want to have as your priority.
unfortunately you need to tell the machine what it is you want; it can't
really guess granularly, iiuc.
e.g. your favourite terminal / ide etc. you want high prio, and your
favourite file mangler to be low prio (assuming you want compiling to
take precedence over bit munging to usb etc)

there is however cgroup support in htop, and i thought that you could
adjust cgroup instead of nice, but a quick google is showing that i
dreamed it as a nice feature; that would be super slick as you can
easily adjust all parts of program demand -- io / memory / cpu

using openRC you can start services i.e. apache to have a certain
priority, and ssh to have another
http://wiki.gentoo.org/wiki/OpenRC/CGroups
http://qnikst.github.io/posts/2013-02-20-openrc-cgroup.html


the reason i suggest freezer is that you can more easily pause or
CTRL-Z something that would otherwise be in a GUI and maybe not respond
to SIGTSTP
on my laptop :-

# mount | grep freez
freezer on /sys/fs/cgroup/freezer type cgroup
(rw,nosuid,nodev,noexec,relatime,freezer)
# cd /sys/fs/cgroup/freezer/

call the folder something meaningful
# mkdir investigate
# cd investigate

you can use the following, i just did echo $$ for local bash pid... make
sure to get all threads especially something like chrome spawns many
children
# ps -eLf | grep mybadapp

note the single > actually concatenates
# echo $PID > tasks

to remove you actually have to move the process into a different
cgroup i.e.
# echo $PID > /sys/fs/cgroup/freezer/tasks

ok so once you have all your tasks in there just make sure you are in
/sys/fs/cgroup/freezer/investigate
# echo FROZEN > freezer.state

to thaw
# echo THAWED > freezer.state



there is a little more here
http://gentoo-en.vfose.ru/wiki/Improve_responsiveness_with_cgroups
which will allow you to script creating a cgroup with the processID of
an interactive shell, that you can start from to help save hunting down
all the threads spawned by chrome.
you can then do fun stuff with echo $$ >
/sys/fs/cgroup/cpu/high_priority/tasks

but for your original point of maybe it's not an issue with something
like IO: it could still be a very high number of disk reads -- low actual
throughput but the demand on the io system was high, i.e. 6zillion reads.
hopefully this will give you a bit more control over all of that though
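
to save retyping all of that, here's a tiny sketch that wraps the same steps;
it assumes the cgroup v1 freezer is mounted at /sys/fs/cgroup/freezer as
above, and the group name and pids are whatever you pick:

#!/bin/sh
# freezectl.sh -- minimal sketch, not battle tested
# usage: freezectl.sh <group> freeze|thaw <pid> [pid ...]
GRP=/sys/fs/cgroup/freezer/$1
ACTION=$2
shift 2
mkdir -p "$GRP"
for PID in "$@"; do
    echo "$PID" > "$GRP/tasks"       # move each task into the group
done
case "$ACTION" in
    freeze) echo FROZEN > "$GRP/freezer.state" ;;
    thaw)   echo THAWED > "$GRP/freezer.state" ;;
esac

e.g.  # sh freezectl.sh investigate freeze $(pgrep mybadapp)
(pgrep only gives process ids, so for something like chrome you would still
want to feed it the LWP column from ps -eLf to catch every thread)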




[gentoo-user] Re: Gentoo's future directtion ?

2014-11-24 Thread James
Rich Freeman rich0 at gentoo.org writes:


  The reason I jumped into this thread is that someone had problems with
  the java project. I'm not sure, but maybe something is wrong with 
  my eyes?


Your eyes are fine. Gmane's web interface was hosed. I tried to use
nntp (earlybird) but that interface does not consistently respond 
to the thread, so hacking at the settings and posting(tests) seems
to upset some.

I appreciate your ideas; I mostly like the ideas of distributed development
for non core modules. I also believe, as Rich has articulated, that
we mostly have this now. Where I disagree with Rich is this. Sure,
if somebody has been a gentoo dev for years and has push access,
then they can get coding completed.  If you are new to the depths
of what gentoo devs do, then it is opaque as to which docs to read,
dealing with the technical details that are currently used and which
are either not documented or poorly documented, etc etc. It's not
that any of the devs have a desire to keep folks from becoming deeply
capable devs such as themselves, it's that these elites spend very
little time paving a path for other fledgling devs; so the
cost barrier to gaining inside (current) knowledge is very high,
imho. Most will just drift to another distro where that sort of help
is more readily available.  Thank god for SVEN. But he cannot clean
up the myriad of docs that need tlc. What you do not seem to comprehend,
Rich, is that what most non super_devs want is some help, imho. Just look
back at your harsh postings on java. It's rather crushing for folks
that want to use Java. Quite simply, I'm developing the skills to
use java on gentoo. Those keen developers within the java team are
spread very thin; just look at what they are recruiting for:


http://wiki.gentoo.org/wiki/Project:Gentoo/Staffing_Needs


So I understand that you are not interested in Java. So, how do
we find or develop a gentoo-inner-circle-java-interested dev?
Without that, java does not have a chance to gain traction within
gentoo, imho. OK, so don't beat me up for being lazy (stupid, as
volker points out routinely); just *help me* via private email:
who at Gentoo is the java wizard?  I'll do tons of grunt work
to make them happy.  OK?

Oh shit: problem one. Go clean up BGO's java bugs. I can't because
it has become a circular cluster_F!.  Too many old blocking bugs;
nobody (who gives a shit) with the authority to purge java bugs.



 And my point was that the only problem I see with Java is that nobody
 is actually working on it.  If nobody works on distributed java
 repositories, it will be just as bad.

Rich, there are billions of Java codes available in source form
that can easily have ebuilds created. But we have poor support
for maven, etc. So they go nowhere. (I believe a gentoo-dev for
maven is the blocker here. I know that Maven is not easy. I am
not (yet) close to being qualified to work on deep maven issues
within gentoo.)


 My specific request was WHO is trying to do something to improve Java
 and finding a policy that is preventing them from doing so, and WHAT
 policy is causing the problem?

see details above?



 It is easy to talk about vague problems.  It is harder to actually
 pinpoint specific issues that can actually be fixed.
 Please don't take this as a personal attack - I generally have a lot
 of respect for you.  I just don't think that this is a helpful way of
 approaching these problems.


Ah, I think Hasufell has very good intentions and some keen ideas.
My fear is java is getting no love from the gentoo inner circle.
So we reduce the inner circle dev count, move java to the perimeter
(where nobody who is remotely connected to the inner-circle cares)
and java gets better?  I've seen this sort of thing thousands of
times in my life. Java will just be purged from Gentoo, judging by
your previous postings about your feelings towards Java.


Another council member in 2013 was all encouraging about gentoo-clustering.
But now that clustering is mostly java centric (imho), few at
Gentoo care about clustering on gentoo. Convenient. Sad. Not really
encouraging to the fledgling community that is willing to work on
java but needs leadership.


I put (2) ebuilds into BGO. apache-mesos and apache-spark. 
bugs 510912 and 523412. There they languish.

Following the outside overlay semantic guidance, newer versions are here,
thanks to Alec:

https://github.com/trozamon/overlay/tree/master/sys-cluster

So as you point out, go and work on what you want. Really? You think
they'll get moved to the gentoo tree any time soon? I'm mostly looking
for help, encouragement and guidance. I am not looking for harsh
responses that are discouraging. I am not trying to harass you; I appreciate
what you do for Gentoo, very much. I get you are not interested in
anything remotely connected to java. I would appreciate your keen
insight into finding us a java leader that is not burnt to a crisp
by the gentoo-stress-burnout syndrome that seems

Re: [gentoo-user] kde-apps/kde-l10n-16.04.3:5/5::gentoo conflicting with kde-apps/kdepim-l10n-15.12.3:5/5::gentoo

2016-08-10 Thread Michael Mol
On Wednesday, August 10, 2016 10:13:29 AM james wrote:
> On 08/10/2016 07:45 AM, Michael Mol wrote:
> > On Tuesday, August 09, 2016 05:22:22 PM james wrote:

> >> 
> >> I did a quick test with games-arcade/xgalaga. It's an old, quirky game
> >> with sporadic lag variations. On a workstation with 32G ram and (8) 4GHz
> >> 64bit cores, very lightly loaded, there is no reason for in game lag.
> >> Your previous settings made it much better and quicker the vast majority
> >> of the time; but not optimal (always responsive). Experiences tell me if
> >> I can tweak a system so that that game stays responsive whilst the
> >> application(s) mix is concurrently running then the  quick
> >> test+parameter settings is reasonably well behaved. So thats becomes a
> >> baseline for further automated tests and fine tuning for a system under
> >> study.
> > 
> > What kind of storage are you running on? What filesystem? If you're still
> > hitting swap, are you using a swap file or a swap partition?
> 
> The system I mostly referenced, rarely hits swap in days of uptime. It's
> the keyboard latency, while playing the game, that I try to tune away,
> while other codes are running. I try very hard to keep codes from
> swapping out, cause ultimately I'm most interested in clusters that keep
> everything running (in memory). AkA ultimate utilization of Apache-Spark
> and other "in-memory" techniques.

Gotcha. dirty_bytes and dirty_background_bytes won't apply to anything that 
doesn't call mmap() with a file backing or perform some other file I/O. If 
you're not doing those things, they should have little to no impact.

Ideal values for dirty_bytes and dirty_background_bytes will depend heavily on 
the nature of your underlying storage. Dozens of other things might be tweaked 
depending on what filesystem you're using. Which is why I was asking about 
those things.
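
If you do want to experiment with them at some point, a minimal sketch (the
byte values are made-up illustrations, not recommendations; tune them to your
storage):

# illustrative numbers only -- adjust for your own disks
sysctl -w vm.dirty_background_bytes=67108864  # start background writeback at 64 MiB dirty
sysctl -w vm.dirty_bytes=268435456            # throttle writers once 256 MiB is dirty
# add the same keys to /etc/sysctl.conf to persist across reboots

Note the *_bytes knobs override the corresponding *_ratio knobs, so pick one
style or the other.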

> 
> 
> Combined codes running simultaneously never hits the HD (no swappiness)
> but still there is keyboard lag.

Where are you measuring this lag? How much lag are we talking about?

> Not that it is actually affecting the
> running codes to any appreciable degree, but it is a test I run so that
> the cluster nodes will benefit from still being (low latency) quickly
> attentive to interactions with the cluster master processes, regardless
> of workloads on the nodes. Sure its  not totally accurate, but so far
> this semantical approach, is pretty darn close. It's not part of this
> conversation (on VM etc) but ultimately getting this right solves one of
> the biggest problems for building any cluster; that is workload
> invocation, shedding and management to optimize resource utilization,
> regardless of the orchestration(s) used to manage the nodes. Swapping to
> disc is verbotim, in my (ultimate) goals and target scenarios.
> 
> No worries, you have given me enough info and ideas to move forward with
> testing and tuning. I'm going to evolve these  into more precisely
> controlled and monitored experiments, noting exact hardware differences;
> that should complete the tuning of the Memory Management tasks, within
> acceptable confine  . Then automate it for later checking on cluster
> test runs with various hardware setups. Eventually these test will be
> extended to a variety of  memory and storage hardware, once the
> techniques are automated. No worries, I now have enough ideas and
> details (thanks to you) to move forward.

You've got me curious, now you're going to go run off and play with your 
thought problems and not share! Tease!

> 
> >> Perhaps Zabbix +TSdB can get me further down the pathway.  Time
> >> sequenced and analyzed data is over kill for this (xgalaga) test, but
> >> those coalesced test-vectors  will be most useful for me as I seek a
> >> gentoo centric pathway for low latency clusters (on bare metal).
> > 
> > If you're looking to avoid Zabbix interfering with your performance,
> > you'll
> > want the Zabbix server and web interface on a machine separate from the
> > machines you're trying to optimize.
> 
> agreed.
> 
> Thanks Mike,
> James

np
-- 
:wq



Re: [gentoo-user] konqueror:5 - why couldn't it be more like konqueror:3 ?!!

2017-04-21 Thread Mick
On Thursday 20 Apr 2017 16:43:49 R0b0t1 wrote:
> On Thu, Apr 20, 2017 at 3:01 PM, Mick <michaelkintz...@gmail.com> wrote:
> > On Thursday 20 Apr 2017 14:22:06 R0b0t1 wrote:
> >> On Wed, Apr 19, 2017 at 9:57 AM, Mick <michaelkintz...@gmail.com> wrote:
> >> > OK, I know life moves on, but this move has been a retrograde step for
> >> > me.
> >> > 
> >> >  My konqueror:5 recently updated seems to have a number of problems and
> >> > 
> >> > <aheam!> features I am not happy with.  Grateful for any pointers to
> >> > address these.  In no particular order.
> >> > 
> >> > 1. The Bookmarks Toolbar will *always* show up when launching
> >> > Konqueror.
> >> > I
> >> > deselect Settings/Toolbars Shown/Bookmark Toolbar and relaunch the
> >> > application, only to find out my deselection will not stick.
> >> > 
> >> > 2. The menu shows no icons, only text; buttons like open new tab/close
> >> > current tab show no icons, making difficult to guess.
> >> > 
> >> > 3. When used as a file manager Konqueror will only open directories or
> >> > files if I double click on them.  I have set up in systemsettings5
> >> > Hardware/Input Devices/Mouse/Single click to open files and folders.
> >> > Konqueror ignores it.
> >> > 
> >> > 4. All my years of Bookmarks of Konqueror:4 gone.  I had to import them
> >> > manually.
> >> > 
> >> > 5. Network places, gone.
> >> > 
> >> > 6. Left hand Panels with Places/Devices/Folders ain't thaa'r.
> >> > 
> >> > 7. Konqueror Introduction page, no icons; unless I hover over them,
> >> > hyperlinked titles shown in dark grey over a blue background.  I know
> >> > my
> >> > eye sight is not as good as it used to be, but this is really akin to
> >> > usability sabotage.
> >> > 
> >> > The are probably more problems I have not captured above, but my
> >> > experience of konqueror:5 is that of crippleware.  I don't know if this
> >> > has anything to do with it, I am not running konqueror on a full plasma
> >> > desktop, but as a stand alone application.  Have you observed similar?
> >> > 
> >> > --
> >> > Regards,
> >> > Mick
> >> 
> >> You should take your complaints to the project's bug tracker. I know
> >> they will take number 7 very seriously, and seeing as you've used the
> >> project for an appreciable amount of time they will likely consider
> >> most of your other issues.
> >> 
> >> At the very least they would probably tell you how to do what you
> >> don't know how to do. If they can't insist that there has been a
> >> regression.
> > 
> > Thanks R0b0t1, I don't want to darken their doorstep of devs with issues
> > which appear isolated to my systems.  Other M/L participants many of whom
> > are running the full Plasma desktop, do not seem to have such problems. 
> > So, I'm guessing I must be missing some package or other to complement
> > the required functionality.
> 
> I have to disagree with you here. There is no way I can see the
> developers responding negatively, though time might be short for new
> (or "reintroduced") features. I feel it necessary to mention that
> seventh item again, which they will take very seriously.
> 
> > I'm still on kmail:4 and all menu icons are shown and functionality is not
> > crippled in any way.  I fear what might happen when I eventually have to
> > install kmail:5.
> 
> I feel this is also something you should express to the developers,
> but admittedly I don't know the best place. Perhaps a mailing list. I
> understand there is a time investment but if you have any to spare it
> will almost assuredly spark a constructive conversation.

R0b0t1's comments prompted me to search in Konqueror's bug reports to see if I
am alone in the above complaints.  It seems not, with respect to some of them.

What Alan said was correct, there is only one developer trying to keep the
boat afloat.  A lot of the code working fine on KDE-4 has not been ported to
Plasma and unless manpower increases it is unlikely features like the missing
sidebar/panel will be reinstated:

https://bugs.kde.org/show_bug.cgi?id=373824

Furthermore, the current maintainer only intends to maintain the Konqueror 
code as a khtml/webkit browser, not a file manager.  For that he refers all 
comers to Dolphin.

Oh well, if only I could code ...  ;-)

-- 
Regards,
Mick



Re: [gentoo-user] Re: Recommendations for scheduler

2014-08-05 Thread J. Roeleveld
On 5 August 2014 21:57:56 CEST, James wirel...@tampabay.rr.com wrote:
Joost Roeleveld joost at antarean.org writes:


  Mesos looks promising for a variety of (Apache) reasons. Some key
  technologies folks may want google about that are related:
  
  Quincy (fair schedular)
  Chronos (scheduler)
  Hadoop (scheduler)
 
 Hadoop not a scheduler. It's a framework for a Big Data clustered   
 database.

  HDFS (clusterd file system)
 Unless it's changed recently, not suitable for anything else then
Hadoop 
 and  contains a single point of failure.

I'm curious as to more information about this 'single point of failure.
Can
you be more specific or provides links?

On this resource: 

http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html

JournalNode machines talks about surviving faults:

increase the number of failures the system can tolerate, you should
run an
odd number of JNs, (i.e. 3, 5, 7, etc.). Note that when running with N
JournalNodes, the system can tolerate at most (N - 1) / 2 failures and
continue to function normally. 

Just read that part. Looks like they solved it partly since 2.2.
The problem lies with the NameNodes.
Prior to 2.2, you only had 1. If that one dies, you lose the entire cluster.
If that one is unrecoverable, you lose all the data.

After 2.2, you can configure a standby NameNode. However, it still requires
manual restart.

Considering that Hadoop is most often running on old machines, chances for
hardware failure are higher when compared with clusters using newer hardware.

I'm not sure how other cluster FSs deal with this, but I consider it a design
flaw if the disappearance of a single machine in a 100+ node cluster leaves
the entire cluster in a broken state.
It's like running a single Raid5 with 100+ drives.
Anyone stupid enough to do that deserves to lose their data.

  http://gpo.zugaina.org/sys-cluster/apache-hadoop-common
  
  Zookeeper (Fault tolerance)
  SPARK ( optimized for interative jobs where a datase is resued in
many
  parallel operations (advanced math/science and many other apps.)
  https://spark.apache.org/
  
  Dryad  Torque   Mpiche2 MPI
  Globus tookit
  
  mesos_tech_report.pdf
  
  It looks as though Amazon, google, facebook and many others
  large in the Cluster/Cloud arena are using Mesos..?
  
  So let's all post what we find, particularly in overlays.
 
 Unless you are dealing with Big Data projects, like Google, Facebook,
Amazon,  big banks,... you don't have much use for those projects.

Many scientific applications are using the cluster (cloud) or big data
approach to all sorts of problems. Furthermore, as GPU and the new
Arm systems with dozens and dozens of cpu cores inside one computer
become
readily available, the cluster-cloud (big data) approach will become
much
more pervasive in the next few years, imho.

http://blog.rescale.com/reservoir-simulation-moves-to-the-cloud/

There are thousands of small companies needing reservoir simulation,
not to 
mention the millions of folks working on carbon sequestration.
Anything to do with Biological or Chemical Science is using or moving
to the Cloud-Clustered world. For me, a Cluster is just a cloud
internally
managee, rather than outsourcing it to others; ymmv.

My apologies. I forgot the scientific research here. But that was mostly 
because they have been dealing with really large datasets and corresponding 
large compute clusters for decades.

The term Big Data is generally applied to financial and social media data.

 Mesos looks like a nice project, just like Hadoop and related are
also 
 nice. But for most people, they are as usefull as using Exalytics.

I'm not excited about an Oracle solution to anything. Many of the folks
I know consult on moving technologies away from Oracle's spear of
influence,
not limited to mysql; ymmv. I know of one very large communications
company
that went broke and had to merge because of those ridiculous Oracle
fees.
Caveat Emptor; long live Postresql.  

I'd be interested in the name of that company. Even offlist.

And I definitely agree. PostgreSQL is often a valid alternative. Unfortunately, 
it is rarely possible to use it as a back end to enterprise software as these 
are all designed to be used with databases from the usual suspects (Oracle, 
IBM, Microsoft, )

Same goes for OSS projects. The developers are often unable to properly code 
the SQL layer and end up simply using MySQL and its broken SQL implementation.

 A scheduler should not have a large set of dependencies that you
wouldn't
 use otherwise. That makes Chronos a non-option to me.

Those other technologies are often useful to folks who would be
attracted to
something like chronos.

If you already use Mesos, using Chronos makes sense.
If you're only interested in a scheduler, installing Mesos just to use Chronos 
doesn't make sense.

 Martin's project looks promising, but doesn't store the schedules 
 internally. For repeating

[gentoo-user] Re: [OT] Linus Torvalds on systemd

2014-09-17 Thread James
Canek Peláez Valdés caneko at gmail.com writes:


 This is highly off-topic, and systemd-related, so if you don't want
 your breakfast with a healthy amount of flames, skip it.

I think this is very much on Topic.

 iTWire posted an interview with Linus Torvalds[1], where the Big
 Penguin himself gave a succinct and pretty fair opinion on systemd.
 The gist of it can be resumed in two lines:

 I don't personally mind systemd, and in fact my main desktop and
 laptop both run it.

Here I disagree. I think Linus's position is, hey it's a BIG tent;
can't we all get along? Kum_by_yah oh lord, Kum_by_yall..

Linus admits he rarely codes and does not have the skills he used to...

 I post it here because several times in the last discussions about
 systemd, there was people asking what opinion Linus had about systemd.
 I personally don't think Linus particular opinion matters at all in
 this particular issue; in general people who likes systemd will
 continue to like it, and people who despises it will continue to do
 so, for any good, bad, real or imaginary reason. However, I *really*
 like several things Linus says in the interview; some juicy bits:
 
 • So I think many of the original ideals of UNIX are these days
 more of a mindset issue than necessarily reflecting reality of the
 situation.
 
 • There's still value in understanding the traditional UNIX do one
 thing and do it well model where many workflows can be done as a
 pipeline of simple tools each adding their own value, but let's face
 it, it's not how complex systems really work, and it's not how major
 applications have been working or been designed for a long time. It's
 a useful simplification, and it's still true at *some* level, but I
 think it's also clear that it doesn't really describe most of
 reality.
 
 • ...systemd is in no way the piece that breaks with old UNIX legacy.
 
 •  I'm still old-fashioned enough that I like my log-files in text,
 not binary, so I think sometimes systemd hasn't necessarily had the
 best of taste, but hey, details..[.]
 
 • (About the single-point-of-failure argument) I think people are
 digging for excuses. I mean, if that is a reason to not use a piece of
 software, then you shouldn't use the kernel either.

Really? This is idiotic. Anything that breaks down a fault tolerant
system has to be removed, or the system is no longer fault tolerant
(psst, it's a mathematical thing, not a linux/unix concept). Linus
sounds like an *idiot* here. It's not the first time, nor could anyone
in his shoes not sound like an idiot on something as fundamental as
what cgroups hopes to eventually accomplish. By the way, just for the
record, I like the theory behind systemd. It's going to take SYSTEMD
A LONG TIME to MATURE and become ROBUST.

cgroups are mature, flexible, robust, well-understood, and there is
absolutely no reason in hell that folks should ever be forced to pick
one or the other. If/when linux makes that decision, it will be just
as catastrophic as the day Sun Microsystems consolidated ownership
of most unix source licenses in an effort (conspiracy) that SCO
unix tried to finish by killing the BSD efforts. That was when most
folks on the internet migrated to Linux. I think Linus is trying
to prevent another (reverse) watershed moment.

If folks have the choice, then they will stay with Linux. If forced,
many will leave. The entire affair is AVOIDABLE. systemd, in all
its glory, should never force anyone to choose. Choice is the greatest
asset of all open source. Many would say it is the only asset of
the open source movement.


 • And there's a classic term for it in the BSD camps: bikeshed
 painting, which is very much about how random people can feel like
 they have the ability to discuss superficial issues, because everybody
 feels that they can give an opinion on the color choice. So issues
 that are superficial get a lot more noise. Then when it comes to
 actual hard and deep technical decisions, people (sometimes) realise
 that they just don't know enough, and they won't give that the same
 kind of mouth-time.

Retarded comparison of vi vs emacs and another application. systemd
vs the traditional cgroups is at the lowest level of the kernel.
Think about it by going to 'make menuconfig' in your local source dir.
Look at the myriad of low level choices we have. Why the hell is
systemd so special that it cannot stand up to other solutions and
competition?


 It's an interesting read; I highly recommend it.

I agree. He sounds more idiotic than Obama and his red line. We
all know how that turned out.

CHOICE is EVERYTHING!

My decision to run a lightweight desktop (lxde, lxqt) and have
a mesos/spark cluster across several machines is my choice.
Others like KDE becoming the cluster. CHOICE. Exclude cgroups
and it will split the community, imho. That said, we are all already
split across windows, mac, android, linux, bsd, etc etc so
it really does not matter at all, imho.

But comparing fights over editors and applications

Re: [gentoo-user] To all IPv6-slackers among the Gentoo community

2019-11-26 Thread Dale
Mick wrote:
> On Tuesday, 26 November 2019 17:58:46 GMT Dale wrote:
>
>> I enter my username/password on the modem so I'm pretty sure it is
>> processing the packets and such.  There is no mention of anything IPv4
>> or v6.  I'd suspect it is v4 only, since it works it has to support v4. 
>> lol  So, old modem may have to be bricked at some point.
> Not necessarily.  If your modem is like the one described here, follow the 
> guidance provided to set it in bridged mode:
>
> https://www.dslreports.com/faq/6405
>
> In bridged mode it will pass all ethernet packets to your router and your 
> router will be able to obtain a public IP address with its dhcp client 
> directly from your ISP.  Of course, to be able to connect to your ISP you 
> will 
> now need to enter your ADSL account username/passwd into the PPPoE (or PPPoA) 
> client in your router's management interface.  DHCP and DNS server 
> functionality will also be provided by your router for all devices on your 
> LAN.  The modem will be just a dumb box between the ISP and your router.
>
> In the unlikely chance your router does not possess such PPP authentication 
> functionality, you will have to replace your router with one which does and 
> at 
> the same time look to buy one which offers IPv6 too.
>
>

I'm almost certain my router can do this.  I've done it before but with
a wired only version.  I think they have the same basic firmware since
all the screens look alike, except for the wireless part being added. 
Thing is, I don't think the router has IPv6 capabilities.  It's a WRT54G
version 6 that I use now.  I switched to a wireless one when I got my
cell phone which needs wi-fi.  The old wired router was the same model
less the G on the end if I recall correctly.  I suspect a new router is
due, age and lack of firmware updates if nothing else.  I think the
firmware is about a decade old. 


>> I do have a
>> newer gray modem that came with the DSL kit.  I stopped using it because
>> it got so warm.  The old black box one runs cool and it has more vent
>> holes.  I may have to check and see if the gray one supports v6 but it
>> is fairly old too.  It's at least 10 years old. 
> ADSL ATM encapsulation technology has not changed for many years now.  I 
> don't 
> think age (or colour) matters really, unless you can see smoke coming out of 
> it when you power it up!  LOL!
>

I mention the color because some may remember the old thing.  When I see
a black Westell, I know what it is.  Heck, I found most of the ones I
got at a thrift store for $6.00.  lol  I can generally recognize the
gray ones BUT some look a lot alike but are different on the inside. 

>> My router also makes no mention of IPv4 or v6.  I suspect it is in the
>> same boat as the modem, it doesn't support it and doesn't have the
>> option to either.  I did go to the Linksys website and look for a
>> firmware upgrade, nothing available, not even a old one. 
> You haven't provided any model names[1] so it's difficult to google things 
> for 
> you, or suggest solutions.  Have a look here to see if your router is still 
> supported by this open source Linux firmware:
>
> https://openwrt.org/supported_devices
>
> https://openwrt.org/toh/start
>
> Other alternative(s):
>
> http://www.polarcloud.com/tomato
>

Model is above.  I've read about openwrt but always been nervous about
trying it.  I've read where some have bricked their router.  You know me
and my luck.  If it can be bricked, I can do it, real good.  LOL  ;-D  I
tried to find out how much memory and such my old router has but I can't
find it anywhere.  It may not show it so I may end up googling for it
online.  See if I can find a spec sheet somewhere. 

>> I did some searching for routers with ipv6 support.  I'm not finding a
>> lot.  Is this something I need to worry about yet?  I mean, is there a
>> lot of IPv6 equipment even available right now? 
> You may have not tried hard enough.  There were a thing even 8 years ago:
>
> https://www.cnet.com/news/top-5-ipv6-ready-wireless-routers/
>
> Answering your question, yes, today all modern routers and any ADSL modems 
> with routing capability come as dual IPv4/6 stack.
>
>
> [1] True story:  Years ago a friend started work in a car accessories and 
> spare parts shop.  Customer walks in looking for spark plugs, where upon my 
> friend asks for his make and model.  Customer replies:  "Dunno, it's a blue 
> car ..."  O_O
>


I just did one quick search for 'wireless router IPv6' and didn't see a
lot.  However, it may not be finding them all since it may not be in the
description since new ones come with it by default.  In other words,
they don't include IPv6 in the description for it to find it.  I'll do
some more searching but I'll ask here before I buy one unless it
specifically says it supports IPv6 somewhere.  No point buying one just
like I got now.  :/ 

I just don't want to wait until my internet stops working right to
upgrade this stuff. 

Dale

:-)  :-) 



Re: [gentoo-user] kde-apps/kde-l10n-16.04.3:5/5::gentoo conflicting with kde-apps/kdepim-l10n-15.12.3:5/5::gentoo

2016-08-10 Thread james
 really working on atm.


Honestly, I'd suggest you deep dive. A clear image, built once, will last
you a lot longer than ongoing fuzzy and trendy images from people whose
hardware and workflow are likely to be different from yours.

The settings I provided should be absolutely fine for most use cases. The
only exception would be mobile devices with spinning rust, but those are
getting rarer and rarer...
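
The exact values aren't re-quoted here, but the settings in question are
presumably the usual vm.* sysctl knobs.  Purely as an illustration (the
numbers below are my own assumption, not necessarily what was recommended
earlier in the thread):

# Illustrative VM tuning; values are assumptions, adjust to taste.
cat >> /etc/sysctl.conf <<'EOF'
vm.swappiness = 10
vm.dirty_background_ratio = 5
vm.dirty_ratio = 15
EOF
sysctl -p    # apply without rebooting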


I did a quick test with games-arcade/xgalaga. It's an old, quirky game
with sporadic lag variations. On a workstation with 32G of ram and (8)
4GHz 64bit cores, very lightly loaded, there is no reason for in-game lag.
Your previous settings made it much better and quicker the vast majority
of the time, but not optimal (i.e. always responsive). Experience tells me
that if I can tweak a system so that the game stays responsive while the
application mix runs concurrently, then the quick test plus parameter
settings are reasonably well behaved. So that becomes a baseline for
further automated tests and fine tuning of a system under study.
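
A rough sketch of the sort of baseline run I mean: launch the game while
logging memory/swap activity in the background, then line up any lag
episodes against the samples afterwards.  The paths and log name are my
own, purely illustrative:

#!/bin/bash
# Baseline run: sample vmstat once per second (timestamped) while the
# interactive test is played under the usual background workload.
LOG=/tmp/xgalaga-baseline-$(date +%s).log

vmstat -t 1 > "$LOG" &
VMSTAT_PID=$!

xgalaga            # binary name/path may differ per install

kill "$VMSTAT_PID"
echo "samples written to $LOG"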


What kind of storage are you running on? What filesystem? If you're still
hitting swap, are you using a swap file or a swap partition?


The system I mostly referenced rarely hits swap in days of uptime. It's
the keyboard latency while playing the game, with other codes running,
that I try to tune away. I try very hard to keep codes from swapping out,
because ultimately I'm most interested in clusters that keep everything
running in memory, AKA ultimate utilization of Apache-Spark and other
"in-memory" techniques.



The combined codes running simultaneously never hit the HD (no swapping),
but still there is keyboard lag. Not that it is actually affecting the
running codes to any appreciable degree, but it is a test I run so that
the cluster nodes will benefit from still being quickly (low latency)
attentive to interactions with the cluster master processes, regardless
of workloads on the nodes. Sure, it's not totally accurate, but so far
this semantical approach is pretty darn close. It's not part of this
conversation (on VM etc.) but ultimately getting this right solves one of
the biggest problems for building any cluster: workload invocation,
shedding and management to optimize resource utilization, regardless of
the orchestration(s) used to manage the nodes. Swapping to disk is
verboten in my (ultimate) goals and target scenarios.
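
One way to enforce that no-swap rule per workload, rather than system-wide,
is a cgroup-v2 memory limit.  A rough sketch, assuming a recent kernel with
the unified hierarchy mounted at /sys/fs/cgroup and run as root; the
workload binary name is made up:

# Confine one workload to a cgroup that is forbidden to use swap.
echo +memory > /sys/fs/cgroup/cgroup.subtree_control   # expose the memory controller
mkdir -p /sys/fs/cgroup/inmem
echo 0 > /sys/fs/cgroup/inmem/memory.swap.max          # this group may never swap
echo $$ > /sys/fs/cgroup/inmem/cgroup.procs            # move this shell into the group
exec ./my-in-memory-workload                           # hypothetical workload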


No worries, you have given me enough info and ideas (thanks to you) to
move forward with testing and tuning. I'm going to evolve these into more
precisely controlled and monitored experiments, noting exact hardware
differences; that should complete the tuning of the Memory Management
tasks within acceptable confines. Then I'll automate it for later checking
on cluster test runs with various hardware setups. Eventually these tests
will be extended to a variety of memory and storage hardware, once the
techniques are automated.




Perhaps Zabbix + a TSDB can get me further down the pathway.  Time-sequenced
and analyzed data is overkill for this (xgalaga) test, but those coalesced
test vectors will be most useful for me as I seek a gentoo-centric pathway
for low-latency clusters (on bare metal).


If you're looking to avoid Zabbix interfering with your performance, you'll
want the Zabbix server and web interface on a machine separate from the
machines you're trying to optimize.


agreed.
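
For the record, the agent side on each tuned node only needs to point at
that separate server box.  A minimal sketch of /etc/zabbix/zabbix_agentd.conf
written from a shell; the IP address and hostname are made up, and the
OpenRC service name is an assumption:

# Minimal agent-side setup; the Zabbix server/web UI lives on another box.
cat > /etc/zabbix/zabbix_agentd.conf <<'EOF'
Server=192.168.1.50
ServerActive=192.168.1.50
Hostname=cluster-node-01
EOF
rc-service zabbix-agentd restart    # service name may differ per install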

Thanks Mike,
James




Re: [gentoo-user] Rearranging hard drives and data.

2020-12-19 Thread Dale
ber of drives and the price, that's good bang for buck there. 
You may want to make note of that for the future.  Maybe you can find a
good deal.  It has some good reviews.  I also found some good deals on
SAS drives, new even.  Some I found are pulls, where they upgraded but
never used the drives.  If the price is right, I'm good with that.

Add in the case I posted in another reply, it's a good start. 


>> Another option, find another case.  If I recall correctly tho, some
>> puter makers don't use standard layouts for the mobo screw holes. 
> Well, if you buy from a well-known brand, I don't think you will have any
> problem there (even if it is their cheapest model).
>
>> I could also have a open system with everything just mounted on the wall
>> in open air.
> I don't think that's a good idea. I remember you talking of lousy power
> utility reliability, and from what I heard over the years of the general
> standards of US rural power cabling (of course I'm no expert or even just
> savvy), I'd be worried about interference. I'd also be concerned about damage
> through physical contact (i.e. you bump into it, or something falls against
> it).

I have UPSs on everything I can.  Even my TVs have a UPS.  I like the
surge protection they have plus it takes care of those power blinks. 
That said, they ran new power lines from the sub-station several years
back.  My power situation is hugely better now.  I'm more likely to have
power go out from a tree breaking a line than from anything else.
Still, I wouldn't dare run without a UPS.  I also have quite a bit of
surge protection too.  One in the breaker box, one at the wall plug, more
in the UPS, and whatever is in the puter's power supply as well.  I was
looking at the transformer on the pole a few weeks ago; I think it has
some sort of surge protection too.  Sort of like an old timey spark gap,
but they do help.  I'm also just a few hundred feet down from a MOV type
protector too.  They look like small transformers but I was told they work
like MOVs.  I think they call them something else but it's the same thing.
Just really heavy duty and designed for high voltages.


>> Of course, another option is to make this a media system and use those
>> little raspberry type thingys for the NFS.
> I am running raspi as a low-level server (pi-hole, Nextcloud, contacts and
> calendar server). It's a model 3B with a quadcore SoC and 1 Gig of ram,
> currently running raspbian (I am currently examining arch). For what you
> want, it is not powerful enough. Even the gen 4 does not suffice. It has
> gigabit ethernet (the 3 only has 100 Mb), but has no SATA connectors. So you
> either need a SATA bridge or are limited to USB enclosures. It has two
> USB 3 sockets. Either way, you need a separate power supply for 3.5″ drives.
> On [0], the Pi 4 is benchmarked and reaches 363 Mb/s over USB. That is a
> third of Gigabit speed. Not counting overhead for filesystems.

That's one negative for those.  If they had full speed SATA support
built in, or as an add-on, then it would be better.  From what I've read,
you have to use the USB, which is fairly fast but still limited when you
have several drives going at once.  I wish they would just jump up to
SAS myself.  That would make a lot of people happy I bet.  You make a
good point tho.
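
For anyone wanting to sanity-check that kind of USB throughput figure on
their own drive, a rough sketch; the device node is made up, so double-check
it before pointing dd at anything:

# Sequential read test of a USB-attached drive (/dev/sdX is illustrative).
hdparm -t /dev/sdX                       # quick buffered-read figure
dd if=/dev/sdX of=/dev/null bs=1M count=2048 iflag=direct status=progress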


>> Or, buy a used NFS off ebay, kinda pricey last I looked.
> I built a NAS in a for-purpose cubic case [1] a few years ago. The system
> was costly, maybe even unnecessarily high, because I went with a niche
> Mini-ITX form factor, ZFS (for redundancy), thus ECC RAM, thus a server
> board that supports ECC. On the other hand, that board supports staggered
> spin-up. At idle that system slurps around 50 Watts with a 300 W gold PSU.
> It has four WD RED 6 TB drives and a small SSD for the system.
>
> It is actually the last Gentoo system that I run and maintain. :'-( System
> upgrades put some heat stress on the drives because they sit right atop the
> CPU due to the crammed dimensions, but since it's a server, the package
> count is hugely reduced compared to a desktop. And since I don't keep it
> running 24/7, I usually do upgrades right after bootup.
>
> My case is quite cheaply-made, with sharp edges here and there and some
> design flaws. An adequate, high-quality alternative might be [2].
>
> A tailored-to-the-use-case device might be your best option. You may not be
> able to use that hand-me-down machine at all, but I think it is unsuitable
> for 24/7 storage anyway. When I built my NAS, I was considering an HP
> Microserver, which has the same general specs as my system, but comes in a
> one-stop package including an optimised mainboard (think of HDD cabling).
>

You may want to look on Ebay for a Fractal Node 804, maybe Amazon too. 
You're in Europe it seems but I can get them new here for around $100. 
It
