of mirroring it if the
ebuild makes its way into the tree.
OK, now it is simplified to:
MY_PV=${PV/_/}
SRC_URI=http://www.apache.org/dist/spark/${PV}/${P}.tgz;
But I get this error:
'ebuild spark-1.1.0.ebuild manifest'
Downloading 'http://www.apache.org/dist/spark/1.1.0/spark-1.1.0.tgz
Hello,
So I'm working on an apache spark (overlay) ebuild.
I cannot seem to get the sources to download.
Here are the sources:
http://www.apache.org/dist/spark/spark-1.1.0/
or here:
http://mir2.ovh.net/ftp.apache.org/dist/spark/spark-1.1.0/
My local ebuild has these entries:
snip
MY_PV=${PV
On 09/20/2014 01:55 PM, James wrote:
OK, now it is simplified to:
MY_PV=${PV/_/}
SRC_URI=http://www.apache.org/dist/spark/${PV}/${P}.tgz;
But I get this error:
'ebuild spark-1.1.0.ebuild manifest'
Downloading 'http://www.apache.org/dist/spark/1.1.0/spark-1.1.0.tgz'
--2014-09-20
On 09/20/2014 01:07 PM, James wrote:
Hello,
So I'm working on an apache spark (overlay) ebuild.
I cannot seem to get the sources to download.
Here are the sources:
http://www.apache.org/dist/spark/spark-1.1.0/
or here:
http://mir2.ovh.net/ftp.apache.org/dist/spark/spark-1.1.0/
...
So
Michael Orlitzky mjo at gentoo.org writes:
MY_PV=${PV/_/}
SRC_URI=http://www.apache.org/dist/spark/${PV}/${P}.tgz;
Because that's the wrong URL =)
SRC_URI=http://www.apache.org/dist/spark/spark-1.1.0/${P}.tgz;
Works. Is this correct?
(sorry for being dense)
James
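For anyone following along, the expansion above is easy to sanity-check outside portage. A minimal sketch in plain bash, with PN/PV/P set by hand (inside a real ebuild portage supplies them):

```shell
#!/usr/bin/env bash
# Simulate the variable expansion portage does for spark-1.1.0.ebuild.
# PN, PV and P are assigned by hand here; in a real ebuild portage sets them.
PN=spark
PV=1.1.0
P="${PN}-${PV}"      # spark-1.1.0
MY_PV=${PV/_/}       # drops the first '_' (e.g. 1.1.0_rc1 becomes 1.1.0rc1)

# Broken: upstream's directory is named spark-1.1.0, not bare 1.1.0
bad="http://www.apache.org/dist/spark/${PV}/${P}.tgz"

# Working: use ${P} for the directory component as well
good="http://www.apache.org/dist/spark/${P}/${P}.tgz"
echo "${good}"
```

Running it prints http://www.apache.org/dist/spark/spark-1.1.0/spark-1.1.0.tgz, which matches the SRC_URI that fetches.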
Hello,
Well at this point, I probably need a few folks to test
the mesos and spark ebuilds as they are in bugs.gentoo.org
mesos (510912 attachment 385316) and
spark (523412 attachment 385318)
I had already installed java (icedtea, scala and maven-bin),
so those dependencies might need tweaking
Michael Orlitzky mjo at gentoo.org writes:
Yes, and you can replace spark-1.1.0 by ${P} in the path as well. The
link that Bryan posted has a list of all of the variables that are
available. You can go pretty crazy with some of them, but in this case
the only other thing I would replace
On 09/20/2014 03:17 PM, James wrote:
OK, that behind me now..
So the build fails, so I figure I'll just build it manually, then
finish the ebuild. So I went to:
/var/tmp/portage/sys-cluster/spark-1.1.0/work/spark-1.1.0
and no configure scripts
The README.md has
On 11/05/2014 11:42 AM, James wrote:
Let's make a deal. Lots of folks are trying to get Nagios running
on Mesos/spark as a cluster based tool. Have your (hack) efforts
focused on running Nagios on a mesos/spark cluster? My good friend
and dev-in-making Alec has graciously put working
Alec Ten Harmsel alec at alectenharmsel.com writes:
There is a large discussion on the Spark mailing list right now about
having groups of maintainers for different areas:
http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Designating-maintainers-for-some-Spark-components-td9115
get bitch_slapped like my previous attempts
use.
Exactamundo!
Besides fine grained controls I want it in a fat_boy controllable gui!
Clustering is where it's at. NOW much of the fuss I read
in the clustering groups, particularly Spark and other
in_memory tools, is all about monitoring and managing
all types of memory and related issues. [1
and little endian codes simultaneously.
I also am a bit of a purist, and just run no-multilib because it is
emotionally satisfying.
Naw. You're teasing? (wink wink nudge nudge).
OFF TOPIC
On another note: have you seen spark-1.5 ? Cleaner build?
http://apache-spark-developers-list
Until one day one of their bright spark techies had a brilliant idea. They
hired a bunch of pretty girls wearing tight skimpy New! Improved! Check
Our
Promotion! outfits to stand outside the front door handing out free
complimentary CDs.
Yes, you guessed it. Within the hour the perimeter
. Lots of folks are trying to get Nagios running
on Mesos/spark as a cluster based tool. Have your (hack) efforts
focused on running Nagios on a mesos/spark cluster? My good friend
and dev-in-making Alec has graciously put working versions of both
mesos and spark on his git_tub_club collection
walt w41ter at gmail.com writes:
On 11/05/2014 09:42 AM, James wrote:
Us old farts, call that:: wisdom
Is that Haskell?
Maybe. My new linguas are Scala and R on Spark [1].
And those have me buried alive. My sleep hours have
me cast in a sparse matrix schema.
Haskell :: beyond my scope
on no-multilib profiles;
I had not seen this, so I guess this is well documented..?
Does that profile selection prevent one from selecting grub-1 during
an installation?
OFF TOPIC
On another note: have you seen spark-1.5 ? Cleaner build?
http://apache-spark-developers-list.1001551.n3.nabble.com
On 09/11/2014 12:20 PM, James wrote:
Yes, I've been all over this. It's onto much of the Apache clustering
codes that are not simple to configure in the ebuild. Besides the raw
package codes, like mesos, spark, scala, cassandra, etc there are a
multitude of fast moving codes written in Java
?
Yes, although just now was the first time I ever tried installing
grub-1.
OFF TOPIC
On another note: have you seen spark-1.5 ? Cleaner build?
http://apache-spark-developers-list.1001551.n3.nabble.com/Fwd-ANNOUNCE-Spark-1-5-0-preview-package-td13683.html
, updated=1236889084;
Rows matched: 4329 Changed: 4329 Warnings: 0
Hang on, that doesn't look right. sigh there's no WHERE
I hope there's a backup...
What's in crontab -l?
Lucky for me, some OTHER bright spark had mysqldump in a daily cron!
--
alan dot mckinnon at gmail dot com
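The "daily cron" that saved the day might look something like this. A hypothetical sketch only; the script name, backup user, password file and output path are all placeholders, not details from the actual server in the story:

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/mysql-backup script.
# --single-transaction takes a consistent InnoDB snapshot without locking.
mysqldump --all-databases --single-transaction \
    --user=backup --password="$(cat /root/.backup_pw)" \
    | gzip > "/var/backups/mysql/all-$(date +%F).sql.gz"
```

Recovering from a WHERE-less UPDATE is then a matter of `zcat all-YYYY-MM-DD.sql.gz | mysql`, which beats hoping the change was reversible.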
would be missing and why? And also,
how would you re-emerge everything on your system?
Alexander Skwar
--
This is not the age of pamphleteers. It is the age of the engineers. The
spark-gap is mightier than the pen. Democracy will not be salvaged by men
who talk fluently, debate forcefully and quote
) masked. So what version
would anyone recommend, with what flags? [1]
Ceph will be the DFS on top of a (3) node mesos+spark cluster.
btrfs is being set up with 2 disks in raid 1 on each system. Btrfs
seems to be keenly compatible with ceph [2].
Guidance and comments, warmly requested,
James
y to start working on (2) new python ebuilds::
turbogears-2 and dpark (a very cool "in-memory" python knockoff of
apache-spark):: so bear that in mind on any other recommendations.
???
James
[1] https://github.com/douban/dpark
[2] http://turbogears.org/
why if anyone is actually interested.
Actually, from my research and my goal (one really big scientific simulation
running constantly): many folks are recommending to skip Hadoop/HDFS
altogether and go straight to mesos/spark. RDD (in-memory) cluster calculations
are at the heart of my needs
graphical tools for adjusting and managing
cgroups? Surely when I apply this to the myriad of things running
on my mesos+spark cluster I'm going to need a well thought-out tool
for cgroup management, particularly on memory resources organization
and allocations as spark is an in_memory environment
=)
I'm not up-to-date either, but Nagios is still in the tree, and we still
use it, so I'd like to clean up a bit.
centric. ymmv.
I intend to run mesos+spark to keep some codes in-memory and thus
only write out to HD, when large jobs are finished. Here is the lab
that is pushing the state of the art on in-memory computations [1].
Spark is now managed under the Apache umbrella of projects.
I believe that most
2015-03-28 14:43 GMT-06:00 James wirel...@tampabay.rr.com:
likewise I've been hacking at ebuilds for apache (spark and mesos)
The spark files are still under /var/tmp/portage/sys-cluster but the mesos
files, compiled just yesterday are not under /var/tmp/portage. The
same is true for ebuild
try to build gcc-5.1.0 with itself ... and maybe later I will try
at system in a btrfs-subvolume.
Hello Stephan,
Very interesting. You do know that both cephfs-0.94 and gcc-5.1.x
have support for RDMA. It should really speed up some applications,
particularly if you are running Apache:(spark
on CoreOS; so apologies if that is
confusing. I'm intending on running a stripped and optimized gentoo OS
and linux kernel as close to bare metal as I can. gcc5 is targeted at both
system, GPU and distributed resource compiling (RDMA).
Mesos + spark + tachyon + storm + RDMA + GCC-5.x is a killer platform
codes, like mesos, spark, scala, cassandra, etc there are
a multitude of fast moving codes written in Java and Python that
need to be tested. Java is not difficult, but voluminous. Every problem
somebody encounters, gets solved by some java bolt on code, rather
than fixing/extending the main (mesos
assigning emotional baggage to
technical language.
To push your analogy, oh, your car is working just fine. Now anyone
with a pair of spark plugs and a few tools may be able to start it
without you, but your startup _works_. Now imagine some German
engineer caring nothing about you lowly
, the file needs to be locked on server A
preventing simultaneous updates.
OOch, file locking (precious tells me that is always tricky).
(psst, systemd is causing fits for the clustering geniuses;
some are espousing a variety of cgroup gymnastics for phantom kills)
Spark is fault tolerant, regardless
and not spark a long debate.
I know that if you take a very old source and add new bugs that the
result cannot be better than the maintained original.
Contrary to the people behind cdrkit,
No comment.
I carefully listen to the problems of the users and I add bug-fixes
for cdrtools
localtime when (why ever) something in zoneinfo
changes...
The localtime files change all the time, look at how often timezone-data
is updated. Every time some bright spark comes up with another clever way
of squeezing 25 hours into a day, his country's DST rules change. That's
why openrc has a setting
On Tue, 17 Jan 2012 14:13:30 +0100, Hinnerk van Bruinehsen wrote:
The localtime files change all the time, look at how often
timezone-data is updated. Every time some bright spark comes up with
another clever way of squeezing 25 hours into a day
http://gpo.zugaina.org/sys-cluster/apache-hadoop-common
Zookeeper (Fault tolerance)
SPARK (optimized for iterative jobs where a dataset is reused in many
parallel operations (advanced math/science and many other apps.)
https://spark.apache.org/
Dryad Torque MPICH2 MPI
Globus toolkit
package..Which is not a requisite in the ebuild
but I saw that maven was a required code for building mesos on another
distro
Like I said, it's a hack, but I'll get it cleaned up; because nobody
else seemed motivated to get mesos running on gentoo.
Now it's off to get spark[1
different bleeding edge
technologies. Clusters (mesos, spark etc etc) and Java (maven etc etc).
I am but a follower at this time on those two bleeding edge fronts.
But codes are released at multiple times during the day/week that I need
to test. So, in my limited understanding, EAPI 6 looks absolutely
tch) could be built
by clustering Rpi3 devices together?
Lots of projects seem to be centric on Rpi3 clusters these days. [2]
Thanks again for the feedback.
James
[1]
http://www.algissalys.com/how-to/freeswitch-1-7-raspberry-pi-2-voip-sip-server
[2] https://www.raspberrypi.org/magpi/pi-spark-supercomputer/
. Are there any graphical tools for adjusting and managing
cgroups?
i thought that htop did this but i was wrong.. it only shows which
cgroup processes are in. that would be a killer feature though.
Surely when I apply this to the myriad of things running
on my mesos+spark cluster I'm going to need
) compatible ebuild?
Cisco is now pushing mesos with ansible (very cool project) [2]. Cisco
is promoting an opensource approach to microservices using ansible.
Zookeeper, spark, mesos, consul and many other codes that are part of the
clustering codes, can be found in portage, various overlays
just fine. Now anyone
with a pair of spark plugs and a few tools may be able to start it
without you, but your startup _works_. Now imagine some German
engineer caring nothing about you lowly driver, and caring more about
the car as a system, and he goes using fancy words like
authentication
that several folks had interest
in clusters (privately operated clouds) as more than a passing interest.
Companion projects, such as Apache's Spark [4] have tremendous potential
as aggressive solutions in such diverse fields as social media relationships,
distributed database techniques and new, massively
folks are building
systems with both SSD and traditional (raid) HD setups. The SSD
could be partitioned for the cluster and swap. Lots of experimentation
on how best to deploy SSD with max_ram in systems for clusters is
ongoing.
Memory Management is a primary focus of Apache-Spark (in-memory
/Makefile.am
./net-analyzer/iftop-1.0_pre4/work/iftop-1.0pre4/config/Makefile.am
./x11-misc/pcmanfm-0.9.10/work/pcmanfm-0.9.10/Makefile.am
./x11-misc/pcmanfm-0.9.10/work/pcmanfm-0.9.10/data/Makefile.am
snip
to
know how to install an info file manually. I'd rather leave it alone,
but texinfo is one of the greatest things about both the GNU system
and Emacs. I need to know how it works. If I may be forgiven for
ranting a LITTLE bit, the idea of automatically setting up info files
was a spark
enough about the system to know where to look and knows the
usual tools for looking there. In much the same way as we expect the car
mechanic to know where the spark plugs are and what they do.
Now this is the 'horseshit' logic that I used the lspci example to displace.
Quickly discerning drivers
and
could be in many others. But I know many people who tried gentoo and
bailed precisely because of the shoot the messenger mentality so
pervasive here; the self-selected sample you see is meaningless.
Go ahead, have another three days' fun. Maybe I'll spark some more
tinders in a month or two. I
is right)
and it even autoupdates localtime when (why ever) something in
zoneinfo changes...
The localtime files change all the time, look at how often
timezone-data is updated. Everyone some bright spark comes up with
another clever way of squeezing 25 hours into a day, his country's
DST
,
not a spark plug, that did the starting. i.e., you're asking literally
for a turnkey system, and that's literally what he invented, except
that the system guarantees that it's a key that was turned. You have
not said a THING about your misunderstanding of the use of the word
_broken_
used maven, only ant and I'm still learning
about ebuilds, so I can't say anything else.
Like I said, it's a hack, but I'll get it cleaned up; because nobody
else seemed motivated to get mesos running on gentoo.
Now it's off to get spark[1] and hadoop[2] happy on gentoo...
happy, happy
prioritize jobs (codes), migrate to systems with spare resources, and bump
other process to lower priority states. Also, there are (in-memory)
codes like Apache-Spark, that use (RDD) Resilient Distributed Data.
It doesn't look as though Karthikesan's proposal for a cgroup based
controller was ever
Facebook,
Google, NSA, Walmart, Governments, Banks, collect about their
customers/users/citizens/slaves/
and go straight to mesos/spark. RDD (in-memory) cluster
calculations are at the heart of my needs. The opposite end of the
spectrum, loads of small files and small apps; I dunno
many of the systemd based cluster solutions are having all
sorts of OOM, OOM-killer etc etc issues. So any and all good information,
examples and docs related to cgroups is of keen interest to me. My efforts
to build up a mesos/spark cluster, center around openrc and therefore
direct management
anyone recommend, with what flags? [1]
Just use the latest (0.80.7 ATM). You may just rename and rehash the
0.80.5 ebuild (usually this works fine). Or you may stay with
0.80.5, but with fewer bug fixes.
Ceph will be the DFS on top of a (3) node mesos+spark cluster.
btrfs is being set up with 2
arrangement, once
I have a better idea of the i/o needs. With spark(RDD) on top of mesos,
I'm shooting for mostly in-memory usage so i/o is not very heavily
used. We'll just have to see how things work out.
Last point. I'm using openrc and not systemd, at this time; any
ceph issues with openrc, as I do see
our
compiler and some brief suggestions on taking it for a test drive.
I'm not much interested in the Intel simulator, atm. I like to test on
old gear running gentoo:: borking is no big deal, if it happens.
Other codes keen to test gcc-5 (offloading) on are Apache-mesos and
Apache-spark and mes
ip-switch) could be built
by clustering Rpi3 devices together?
Lots of projects seem to be centric on Rpi3 clusters these days. [2]
Thanks again for the feedback.
James
[1]
http://www.algissalys.com/how-to/freeswitch-1-7-raspberry-pi-2-voip-sip-server
[2] https://www.raspberrypi.org/magpi
> it tends to be designed around upgrading in-place.
I did not realize it was so java centric.. I'm out, cause I have more
icedtea-java projects than I know what to do with. (apache-spark).
Thanks for all the info guys,
James
support which means you can in fact have regular (non-audio) apps work
together with jack, which is what I sometimes use: Set up my audio studio
setup, while still having pulseaudio around for stuff like browsers and video
players. If I suddenly have a creative spark, I have my studio ready to play
the CLI is an
immediate forth interpreter)
Was quite nice and tidy, allowing lots of stuff like modifications of the
device tree and other nice things.
Was probably underused by Apple but yet, was the key for a lot of hacks on PPC
models!
I think it originated from Sun and was used on SPARC
all
together
I agree, Hadoop/HDFS is for data analysis. Like building a profile about
people based on the information companies like Facebook, Google, NSA,
Walmart, Governments, Banks, collect about their
customers/users/citizens/slaves/
and go straight to mesos/spark. RDD (in-memory
it to identify itself. The same happens with Konqueror in the default user
agent setting, but it logs in happily in the IE6 setting. Some bright
spark has written 311 lines of code in a script which actively
discriminates against most browsers OS's out there.
Some web-developers or businesses
by that is next to zero, and on recent systems it is also impossible to
cause significant load with ssh-login-attempts.
Uh-huh. We all said that for many years. Then some bright spark actually
looked at the patches the debian openssh maintainer was applying and we all
had one of those special
for looking there. In much the same way as we expect the car
mechanic to know where the spark plugs are and what they do.
00:05.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge (prog-if 00
[Normal decode])
snip
Kernel driver in use: pcieport-drive
and here:
00:12.0 IDE interface: ATI
mesos_tech_report.pdf
It looks as though
rt of Gentoo Release Engineering. I'm sure those folks will "put out"
some amazing new install toys! Either way, an "admin cd" in the installation
media collection for gentoo, does spark excitement and
curiosities and possibilities, kno?
James
[1] https://wiki.gentoo.org/wiki/Project:RelEng
[2] https://wiki.gentoo.org/wiki/Project:RelEng_GRS
for the clustering geniuses;
some are espousing a variety of cgroup gymnastics for phantom kills)
phantom kills?
Spark is fault tolerant, regardless of node/memory/drive failures
above the fault tolerance that a file system configuration may support.
In fact, files lost can be 'regenerated
in (a) above they are converted to aliases with appropriate
tagging. If no maintainer exists the package is handled per the
result of (c).
Comments, alternatives, etc?
There is a large discussion on the Spark mailing list right now about
having groups of maintainers for different areas:
http
any light on your problem - just hoped that you'd get some
resolution that I could then apply to my own situation which is very similar
to yours ... but as it's not looking so rosy maybe my experience may spark
some other avenue to explore?
I've been running an amd64 nfs mount successfully
them enough, they will gladly oblige, and not care too much if this
embarrasses you
Try as they might, they could not get past this enterprise's border firewalls.
Nothing showed up as a weakness. They tried and tried and tried and tried
Until one day one of their bright spark techies had
, collect about their
customers/users/citizens/slaves/
and go straight to mesos/spark. RDD (in-memory) cluster
calculations are at the heart of my needs. The opposite end of the
spectrum, loads of small files and small apps; I dunno about, but, I'm
all
ears.
In the end, my
test system online for hacking. I
(temporarily) shelved that project to get clustering going with btrfs, ceph
and mesos+spark on gentoo. Naturally, I have bitten off
a wee_bit too much, but, life is good!
Likewise, meino was (is?) working on porting/hacking the
old venerable netconsole.c [1
howdy folks,
i've had a bit of a hiatus of internet access and just catching up with
mails i notice a recurring systemd related spark about boot times.
please this message is not to recreate a flame but to suggest something
that may benefit folks from all preferred init systems.
kexec
-
slw :)
I'm staying away from VMs. It's spark on top of mesos I'm after. Maybe
docker or another container solution, down the road.
I read where some are using a SSD with raid 1 and bcache to speed up
performance and stability a bit. I do not want to add SSD to the mix right
now
about performance, because
my main (end result) goal is to throttle codes so they run almost
exclusively in ram (in memory) as designed by amplabs. Spark plus Tachyon is a
work in progress, for sure. The DFS will be used in lieu of HDFS for
distributed/cluster types of apps, hence ceph. Btrfs
open to edumacation on this aspect.
and help them remove all cruft that's getting in the way of a clean upgrade
I just ran a 'depclean' a few days ago. Dozens of my java hacks (overlays)
and such got cleaned out and my apache-spark ebuild (hack) does not compile
anymore. No big deal, I get
problem, and I'll deliver (toes crossed tightly) the most 'bad
ass' clustering technology currently available::
*Mesos + spark + storm + tachyon + cassandra* on gentoo (amd64).
Then the stabilization work moves to arm64. Both platforms on top of
btrfs/cephfs is going to be *smokin_wicked_cool
item again, which they will take very seriously.
> I'm still on kmail:4 and all menu icons are shown and functionality is not
> crippled in any way. I fear what might happen when I eventually have to
> install kmail:5.
>
I feel this is also something you should express to the developers,
but admittedly I don't know the best place. Perhaps a mailing list. I
understand there is a time investment but if you have any to spare it
will almost assuredly spark a constructive conversation.
ing your question, yes, today all modern routers and any ADSL modems
with routing capability come as dual IPv4/6 stack.
[1] True story: Years ago a friend started work in a car accessories and
spare parts shop. Customer walks in looking for spark plugs, where upon my
friend asks for his make an
and helpful communities in the OSS world.
Try have a look at the Debian, OpenBSD or Slackware forums/ml/IRC
channels, and you'll understand.
Go ahead, have another three days' fun. Maybe I'll spark some more
tinders in a month or two. I wouldn't want to deprive you of your
fun.
I can't understand
not the case since you've experienced
the same problem on your local web server, but I thought I would
mention it just in case it might spark any ideas if everything else
failed to work.
Good luck,
Paul
that when running with N
JournalNodes, the system can tolerate at most (N - 1) / 2 failures and
continue to function normally.
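The quorum arithmetic in that sentence is easy to check; a quick sketch in plain bash (integer division; the function name is mine, not from the Hadoop docs):

```shell
#!/usr/bin/env bash
# Majority quorum: with N JournalNodes a write needs floor(N/2)+1 nodes,
# so at most (N - 1) / 2 of them may fail.
tolerable_failures() {
    echo $(( ($1 - 1) / 2 ))
}

tolerable_failures 3   # 3 nodes -> tolerates 1 failure
tolerable_failures 5   # 5 nodes -> tolerates 2 failures
```

Hence the usual advice to run an odd number of JournalNodes: going from 3 to 4 buys no extra fault tolerance, since (4 - 1) / 2 is still 1.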
http://gpo.zugaina.org/sys-cluster/apache-hadoop-common
is at the heart of clustering now.
.
I put (2) ebuilds into BGO. apache-mesos and apache-spark.
bugs 510912 and 523412. There they languish.
Following the outside overlay semantic guidance, newer versions are here,
thanks to Alec:
https://github.com/trozamon/overlay/tree/master/sys-cluster
So as you point out, go and work on what
time. It's
> the keyboard latency, while playing the game, that I try to tune away,
> while other codes are running. I try very hard to keep codes from
> swapping out, cause ultimately I'm most interested in clusters that keep
> everything running (in memory). AkA ultimate utilization of
many of whom
> > are running the full Plasma desktop, do not seem to have such problems.
> > So, I'm guessing I must be missing some package or other to complement
> > the required functionality.
>
> I have to disagree with you here. There is no way I can see the
> deve
past, I've had a few go out. I replace
them when needed. The biggest issue with power around here, sags or
just total blinks. Our power company has surge arrestors in several
places along the lines. Sometimes when I'm driving down the road, I see
them. They place different kinds of protection
a single Raid5 with 100+ drives.
Anyone stupid enough to do that deserves to lose their data.
idiotic than Obama and his red line. We
all know how that turned out.
CHOICE is EVERYTHING!
My decision to run a lightweight desktop (lxde, lxqt) and have
a mesos/spark cluster across several machines is my choice.
Others like KDE becoming the cluster. CHOICE. Exclude cgroups
and it will split
>
> https://www.cnet.com/news/top-5-ipv6-ready-wireless-routers/
, that I try to tune away,
while other codes are running. I try very hard to keep codes from
swapping out, cause ultimately I'm most interested in clusters that keep
everything running (in memory). AkA ultimate utilization of Apache-Spark
and other "in-memory" techniques.
Combined cod
a UPS. I also have quite a bit of
surge protection too. One in breaker box, one at wall plug, more in the
UPS and whatever is in the puters power supply as well. I was looking
at the transformer on the pole a few weeks ago, I think it has some sort
of surge protection too. Sort of like a old tim