Gregory Maxwell wrote:
Fantastic. I already have a tool up using it; it's not user-friendly,
it's really just a tool meant for other tools to use.
http://tools.wikimedia.de/~gmaxwell/cgi-bin/deletedimage.py?hash=f85b9e4a40434e664209f2a8e2ff106b4f636db8&wiki=enwiki
hash= sha1 hash of the
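For reference, the hash parameter is just the hex SHA-1 digest of the file contents; a minimal sketch of computing it (the function name is mine):

```python
import hashlib

def file_sha1(path):
    """Return the hex SHA-1 digest of a file, reading it in chunks
    so large images don't need to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```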
Simetrical wrote:
On 11/22/07, Platonides wrote:
Aye, but the carefully maintained stable toolserver tools should be
efficient, too (if the task can be done efficiently, of course; the
criteria for inclusion would be stricter).
In which case they should be moved to the main servers
I'm CCing wikitech; I suggest we follow this thread there.
Nikola Smolenski wrote:
(thread about interwiki bots at toolserver)
Coincidentally, yesterday I released a MediaWiki extension which, if
accepted on Wikimedia projects, may make interwiki bots much less busy.
See
James Hare wrote:
Heh, there was an account called LottoBot? In my younger days I wrote a
LottoBot... in mIRC script.
This one seems to be Perl :)
http://de.wikinews.org/wiki/Benutzer:Lottobot/Skript
As it was working yesterday, DaB. will probably get an email asking to
renew it soon :) Or perhaps
Stefan Kühn wrote:
Hello!
At the moment I, and some other users of the dumps, check
http://dumps.wikimedia.org/ every day/week/month for a new dump. I use the
dumps of 10 different languages for my projects (Wikipedia-World,
Template-Tiger, Persondata), so it is quite a lot of work.
I hope
Lars Dɪᴇᴄᴋᴏᴡ 迪拉斯 wrote:
Platonides wrote:
PS: You know you can subscribe to RSS feeds for the XML dumps, don't
you?
I was just already in the middle of scraping /backup-index.html to
generate categorised Atom feeds. Where are the feeds you're speaking
of?
Filename-rss.xml in the latest/ folder
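Those per-dump feeds are ordinary RSS, so checking for a new dump comes down to comparing the newest item's pubDate against the last one seen. A minimal sketch using only the stdlib (fetching the feed is left out, and a standard RSS 2.0 layout is assumed):

```python
import xml.etree.ElementTree as ET

def latest_pub_date(rss_xml):
    """Return the pubDate of the first <item> in an RSS 2.0 feed,
    or None if the feed has no items yet."""
    root = ET.fromstring(rss_xml)
    item = root.find("./channel/item")
    return item.findtext("pubDate") if item is not None else None
```

A cron job could fetch the feed, call this, and only kick off a download when the date changes.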
DaB. wrote:
I'm done with the first one. You can find my program at
http://toolserver.org/tsthumb/tsthumb
If you find any errors (experiments with Chinese-named pictures would be
nice), please report them to JIRA.
Sincerely,
DaB.
Looks like I'm (un)lucky. I got a java.lang.NullPointerException on
...) to download it and then read it as a normal file.
2. If I have questions about such stuff, am I right here? Otherwise, sorry
for bothering you. :-)
Cheers
seth
Yes, this is a good place :)
Platonides
___
Toolserver-l mailing list
Simon Walker wrote:
How long are the binlogs kept for on Wikimedia servers?
Surely it would be possible to take a dump now, import it to s3, start
replication, then import the same dump onto the new server, and let it
catch up from a month of replag?
Of course, this wouldn't be possible
River Tarnell wrote:
Platonides:
Move mysql data files to the new box
i usually try to avoid that when splitting a database, since it means the
innodb data file is much larger than it needs to be (yarrow has both s1 and
s3,
but the new server will only have s1). it's also a good
greater.
Some caveats: the oldimage table has 'unexpected' entries. Don't make
assumptions such as 'a filename can't appear twice' or 'there will always
be a file'.
Of course, the code is available. If I can be of help... just ask :)
Yours,
Platonides
Daniel Kinzler schrieb:
Marcin Cieslak schrieb:
I think we will be better off with flat files (for example in PHP or Java
properties format, or maybe) versioned in some version control, similarly
to what MediaWiki does.
The problem is that each user has his/her own svn repo. Pushing messages
Lars Aronsson wrote:
But features like this WikiMiniAtlas (or all the
services that rely on s3 replication) are too impressive and
useful to rely completely on the voluntary efforts of single
individuals.
How does having several programmers per project fix s3 replication?
Lars Aronsson wrote:
I really want to get better geo tagging going in the Swedish
Wikipedia. To this end, WikiMiniAtlas was activated some weeks
ago. The activation went just fine. But the underlying database is
not up to date. This is because Stefan Kühn is rewriting the
script that
There's another way to do a key-signing, faster than 1-to-1: everyone has
a printed list, and each person introduces themself, giving their fingerprint.
I guess you'll have a brief introduction at the beginning where everyone
presents himself? If you were to say 'I'm Daniel (aka DaB)', the evil
Stefan Kühn wrote:
Maybe a Perl guru can take a look at the script. I hope someone has
an idea.
Stefan
Then you should provide the script.
Are you using non-blocking sockets? (you shouldn't)
Daniel Kinzler wrote:
Hi all
I have written a small script that pulls the content of the toolserver home
page
http://toolserver.org/ from our wiki. It's not live yet, but you can test it
at http://toolserver.org/newindex.php. The page's content is maintained on
River Tarnell wrote:
Platonides:
Why use a script?
why create a whole new skin when we can just use a script?
- river.
You make it sound hard, when creating a good script to scrape the wiki,
filter the page and present it will in fact be harder.
River Tarnell wrote:
K. Peachey:
What might be nice is a little tool where people can enter a few article
names in a box and click a button, and have it produce the static HTML dumps
of the desired article(s).
before anyone runs off to implement this, please remember that the Toolserver
River Tarnell wrote:
Simon Walker:
I presume he meant download.wikimedia.org
i think it's very unlikely we'd run something like this on the static file
server.
- river.
*If* there's such a need, it would be worthwhile to take some CPU time from
the servers doing dumps to fulfil it.
DaB. wrote:
Hello,
On Monday 17 August 2009 21:44:28, Carl Fürstenberg wrote:
Perhaps it would be an idea to bring in another dev to be able to
do such work.
it is not as easy as you think. To take a dump, a db-slave of the
wm-cluster has to be taken away from the cluster. That
River Tarnell wrote:
Platonides:
Wouldn't that have required making a dump of the db to reimport it at
the new cluster?
no. s4 was created by taking existing s2 slaves and removing all
databases which were not commons. then commonswiki was dropped on the
remaining s2 servers
River Tarnell wrote:
Platonides:
They should have kept a s2 slave so toolserver could get a dump from it.
why? we already have a dump from an s4 slave.
- river.
Reading DaB's message, I thought the problem was taking a dump, since it
was harder to take an s4 slave out of rotation.
DaB. wrote:
BTW: If the wm-server-admins hadn't dropped commons from s2, we wouldn't
need a dump ;)
Sure. What about restricting drops so they aren't replicated?
That would make the toolserver resistant against some data losses at Tampa
(including careless sysadmins ;) ). However, a subsequent create
River Tarnell wrote:
Martin Peeks:
Just out of curiosity, why does the toolserver use a webserver which so
few will be familiar with, rather than apache/etc?
we previously used Apache + mod_suphp, until CGI PHP became too slow.
we explored several solutions and eventually settled on ZWS.
River Tarnell wrote:
Platonides:
I understand the problem was the process creation needed by mod_suphp
(that's also why the switchserver was tried).
that is correct.
How does Zeus run the scripts as different users?
it starts a FastCGI process as the user when a request comes in. when
River Tarnell wrote:
for those who don't read journal.ts.o, a write-up of the outage is
available at
https://confluence.toolserver.org/display/tech/Platform+outage+2009-08-24
- river.
This issue also highlighted the problems of having the NFS server as a single
point of
Tim Landscheidt wrote:
Hi,
is there any reliable way to determine if a script is run in
the toolserver cluster, i.e. can make use of the database
servers & co., without checking for the hostname being
wolfsbane, nightshade or names-to-come?
TIA,
Tim
Check if it can connect to the
Tim Alder wrote:
Hello,
try /home/fuzkabir/public_html instead of
/$home/fuzkabir/public_html .
What works for me is:
mysqldump u_kolossos_databaseX tableXXX > XXX.sql
Greetings Kolossos
mysqldump -u username -p password `u_fuzkabir`.`university` >
/$home/fuzkabir/public_html;
Lars Aronsson wrote:
==Watch a category of articles==
2010 is election year in Sweden, so I want to keep an eye on all
articles in category:Swedish politicians, in recursive levels.
I now do this on a weekly basis, using CatScan,
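Watching a category "in recursive levels" is essentially a breadth-first search over the category graph, with cycle protection, since MediaWiki categories can and do form loops. A sketch, with the subcategory lookup abstracted into a plain dict (on the toolserver this lookup would be a query against the replicated categorylinks table):

```python
from collections import deque

def collect_subcategories(root, subcats_of, max_depth=10):
    """Breadth-first walk of a category tree with cycle protection.
    `subcats_of` maps a category name to its direct subcategories."""
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        cat, depth = queue.popleft()
        if depth >= max_depth:
            continue  # don't expand beyond the requested recursion level
        for sub in subcats_of.get(cat, []):
            if sub not in seen:  # skip categories already visited (loops!)
                seen.add(sub)
                queue.append((sub, depth + 1))
    return seen
```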
Ryan Kaldari wrote:
Hello all,
I just got my account set up on the toolserver and I was playing
around with trying to get multi-file flash uploading working.
Unfortunately, I get the following error each time:
HTTP_ERROR - The file upload was attempted but the server did not
return a 200
Magnus Manske wrote:
On Sat, Jan 2, 2010 at 1:22 PM, Tim Alder wrote:
Hello,
perhaps this would be a good moment to transfer Geohack to a regular
mediawiki-extension on the main servers.
I hear this was also the plan of Brion Vibber before he left.
Whom do we need to ask for this now?
Yes,
Trevor Parscal wrote:
On 12/30/09 3:22 PM, River Tarnell wrote:
Hi,
PHP was upgraded to 5.3.1. At the same time, the PHP PDO MySQL module was
enabled. This should have no impact on users (unless you want to use PDO).
- river.
Now we need 5.3.1 on the cluster!
PHP 5.3.1 is broken.
DaB. wrote:
Hello,
Am Samstag 09 Januar 2010 23:18:13 schrieb Nakor:
what about putting TS in its name to point out it runs from the
toolserver?
I was not speaking of the name of the bot, but of the name of the
multi-maintainer project. For the bot name I would suggest ts-interwikibot or
Andre Engels wrote:
It just seems silly to run several interwiki-bots on the
toolserver, instead of cooperating to run one.
Still, there's the matter of what it means to 'cooperate to run one'.
Would that mean that there's only a single interwiki bot process
running? That can only work if
River Tarnell wrote:
Hi,
In the past, when an account was expired, it was still possible to access
its public_html. This has now changed. Any HTTP requests to an expired
account will return an error page indicating that the account has expired.
The files in public_html are not
River Tarnell wrote:
Platonides:
It doesn't seem to have changed. The gmaxwell account expired some time
ago. However, its files can still be accessed:
http://toolserver.org/~gmaxwell/election_analysis/2008/GRAPH_3_totals.png
# acctexp gmaxwell
The account gmaxwell will expire on Tuesday, 01 June
Aryeh Gregor wrote:
On Mon, Mar 8, 2010 at 10:20 AM, Platonides wrote:
Maybe some file-transferring magic can be done to add the index in the
other dbs by copying it from rosemary?
No, file-copying doesn't work with InnoDB. You can only copy the full
contents of all databases on the server
Mashiah Davidson wrote:
As you remember, we've experienced issues with memory during the last few
weeks. All those issues correlate with situations when both lvova and I
ran the bot together (each requesting up to 4 GB for
temporary data) and both worked on relatively small languages. In such
Mashiah Davidson wrote:
I guess it is not possible to reduce the limit in the bot and keep the
bot's performance at the same level at the same time. You
know administration well, I know the application domain of
connectivity analysis.
I really think that the only acceptable solution
emijrp wrote:
Hi all;
The counter page is generated every 5 minutes, using the latest data
available in the site_stats table for every wiki project. So the edit rate
can change every 5 minutes; I think that is a good estimate.
Regards!
[1]
Frédéric Schütz wrote:
River Tarnell wrote:
It's not available in the database yet, but that's something we're
looking at doing. If anyone else has a particular reason to need this
data, it would help if they could describe it, so we can decide how to
format the data, and how detailed it
Tim Landscheidt wrote:
BTW, the warn.png included on this page seems to be broken:
| [...@passepartout ~]$ display warn.png
| display: IDAT: CRC error `warn.png' @ png.c/PNGErrorHandler/1404.
| display: Corrupt image `warn.png' @ png.c/ReadPNGImage/2898.
| [...@passepartout ~]$
Tim
Conrad Irwin wrote:
In the event that I can't publish these files here, is there another
Wikimedia-related place I could?
Conrad
Maybe you could convince Tomasz to run your script on dumps.wikimedia.org
Nicolas Dumazet wrote:
I made a script to generate such an html page. Updated by a daily cron task.
http://toolserver.org/~nicdumz/expired.html
(I don't see any issue with publicly listing such information. If
there's any, let me know / I'll delete the HTML ...)
Regards,
Your list begins with
Mike.lifeguard wrote:
On 10-08-08 04:02 PM, John Doe wrote:
How about requiring a password/code to go along with rev_id in order
to use the tool (similar to the move-to-Commons process)?
Delta
Yes, I suppose that's possible. Can we use Basic or Digest auth to
protect parts of our web
I suppose you have already read about doing requests single-threaded,
the maxlag parameter and so on.
Make sure you use a User-Agent that clearly leads back to you in case it
causes problems.
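The two pieces of advice above (the maxlag parameter plus an identifying User-Agent) can be baked into a tiny request builder; the endpoint and contact string below are placeholders, not anything from the thread:

```python
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"  # example endpoint

def build_request(params, contact="toolname; your-email@example.org"):
    """Build an API request carrying maxlag=5 and a descriptive
    User-Agent, as recommended for well-behaved bots."""
    params = dict(params, maxlag=5, format="json")
    url = API + "?" + urllib.parse.urlencode(params)
    return urllib.request.Request(url, headers={"User-Agent": contact})
```

Requests built this way get refused by the servers when replication lag exceeds 5 seconds, so the bot backs off automatically.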
Peter Körner wrote:
Hi
We're serving commonly used js libs under
http://toolserver.org/~osm/libs/
but the toolserver is serving out the js files uncompressed [1]. GZ
Compression reduces the size of openlayers from 923.66 KB to 207.08 KB
(77.58%) so it is really necessary to enable
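The quoted saving is easy to reproduce, since highly redundant text like JavaScript compresses very well. A quick sketch of measuring the ratio with the stdlib:

```python
import gzip

def gzip_ratio(data: bytes) -> float:
    """Compressed size divided by original size (smaller is better)."""
    return len(gzip.compress(data)) / len(data)
```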
Mike Dupont wrote:
Let me know if you get this script running
I would also like to make a tool that will upload to Commons and to
archive.org
I had an idea to make a Drupal site that would let users post
http://freesb.eu/drupal/drupal-7.0-alpha6/?q=node/2
would like to go
Wikimedia Israel also did something similar in their pikiwiki project.
The problem is not making such software, but the filtering labour
required later.
Mike Dupont wrote:
Well this idea is from gerard m
http://en.wikipedia.org/wiki/User:GerardM
the images being copyright violations, well
Mike Dupont wrote:
But the discussion is not that simple. You can host and display
pictures on other sites for selection and uploading to the toolserver.
What I am trying to discuss here are technical measures to make it
easier for Facebook and other users to safely make positive
Maciej Jaros wrote:
Strange. I wasn't able to make phpMyAdmin act as expected other than by
casting page_title as binary. The shell acts the same for me, but I guess
it might be because my system doesn't use latin1.
I guess using this in my script was NOT a good idea:
Roan Kattouw wrote:
2010/11/26 Bryan Tong Minh bryan.tongm...@gmail.com:
Somehow I think that publishing an entire dump violates the "do not
publish significant parts of an article" rule.
Surely the toolserver admins could be asked to consider waiving that
in this case considering the public
Михајло Анђелковић wrote:
I do not see the dump of srwiki in the given directory. Is there any
clue when they might be available again on dumps.wikimedia.org?
M
You can follow the story here:
http://wikitech.wikimedia.org/view/Dataset1#11-10-2010_-_New_errors
This was expected to be fixed
Bryan Tong Minh wrote:
On Fri, Nov 26, 2010 at 6:43 PM, Platonides platoni...@gmail.com wrote:
Also, as discussed with Ariel, I will gladly mirror such dumps at wm-es
web space.
You do have a toolserver account right? I think it would be a good
idea if you could copy the dumps from the TS
Paul Selitskas wrote:
Would it be possible to give a project its own domain one day, while the
project is still a Toolserver project with proper TS
attribution/promotion? There's no Wikipedia
DB replica in Belarus, so downloading dumps would not be the best approach
for my planned project. :)
Purodha Blissenbach wrote:
While we're at it: in the future we shall have interwiki bots reading the
replicated databases to a great extent while gathering information about
existing and presumably missing interwiki links. This will spare lots of
requests to the WMF servers, which will
Alex Brollo wrote:
2. The script brings to life a Python bot, which reads RecentChanges at
10-minute intervals via a cron routine. Would an IRC bot listening to the
it.wikisource IRC channel for recent changes be more efficient, in your
opinion?
Yes. Especially since you presumably want to get *all*
Михајло Анђелковић wrote:
Long ago I noticed that the IRC server was kicking my bot out
after some time, for some reason.
Then I looked closer and noticed there was a server ping around those
mishaps. Alright, so I just added an ad-hoc pong:
public void responsePing(String
Sumurai8 (DD) wrote:
Well... you can actually send a PONG message every 3 minutes without
listening to the IRC channel, and the server will gladly accept that
^_^ . That's what I did back when I didn't know about the
timeout option of a socket :) But most of the time it is just better
to
River Tarnell wrote:
PS: I cringe every time I see someone parsing IRC lines with things like
strncmp(line, "PRIVMSG ", 8) or strstr(line, ":"). The IRC protocol is very
simple, and tokenising it properly is really not that difficult. (Every
argument is separated by a space; if the first byte
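River's tokenising rule can be written down in a few lines. A sketch in Python: a leading ':' introduces the prefix, arguments are space-separated, and an argument starting with ':' is the trailing argument that swallows the rest of the line:

```python
def parse_irc_line(line):
    """Split a raw IRC line into (prefix, command, args)."""
    prefix = None
    if line.startswith(":"):
        # optional prefix: ":nick!user@host COMMAND ..."
        prefix, _, line = line[1:].partition(" ")
    if " :" in line:
        # everything after the first " :" is a single trailing argument
        line, _, trailing = line.partition(" :")
        args = line.split() + [trailing]
    else:
        args = line.split()
    command, args = args[0], args[1:]
    return prefix, command, args
```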
Paul Selitskas wrote:
And here a problem arises. Not all operating systems and browsers are
translated into every language. For example, there's still no Belarusian
Windows or Internet Explorer. There's no Belarusian Chrome
either.
So at second glance it's a bit more complicated.
Frederic Schutz wrote:
emijrp wrote:
Hi Frederic, thanks for your work. Have you tested 7z?
It makes no difference to me. River suggested (and installed) xz, so I
used it, but 7z would have worked too.
A quick test using my biased data for one day (but it should be
representative
MZMcBride wrote:
I know this has come up previously, but I don't think it was ever addressed.
What's the process for updating the design of http://toolserver.org? Can
the index file be made to load from the Toolserver wiki (similar to how
www.wikipedia.org works at Meta-Wiki)?
It loads from
Alex Brollo wrote:
I'd like to install the DjVuLibre binaries into my toolserver account.
Unluckily my knowledge of Unix is very primitive, approaching nothing.
Are any of you willing to take a look at
http://djvu.sourceforge.net/index.html, and to tell me if
Solaris 6
Ilmari Karonen wrote:
I remember being disappointed by the lack of stderr output in the mail I
got after switching my Commons MIME type statistics script over to SGE,
but then I just thought "meh, I'm logging to a file anyway, I can always
just look there if something goes wrong".
SGE has a
Alex Brollo wrote:
I'm going to run on the toolserver some simple Python + DjVuLibre routines
to test the possibility of obtaining a "wikicaptcha", built to be useful
for wikisource activity.
I'm far from sufficiently skilled to write all the project, in
particular the final user interface; but
Alex Brollo wrote:
Looks interesting. Does the language matter to you? Because if Python
does not offer an advantage over PHP, I would recommend the latter, so
that it'd be easier to merge into wikisource.
Yes, it matters; I can just barely write something in Python, and I have only very
Dr. Trigon wrote:
@Platonides:
What is this parameter then??
Thanks and greetings
DrTrigon
It's -j
-j join
Declares if the standard error stream of the job will be merged with the
standard output stream of the job.
An option argument value of oe directs that the two streams
Seb35 wrote:
Krinkle wrote:
How much is too much memory?
We needed to transform and crop TIFF images, read an XML associated with a
book containing the OCRed text of the digitized book, and create a DjVu
with the images and the text layer.
For that we rented a server; I cannot
Another example is articles imported from UseModWiki, whose history
will have a later id than those which were 'current'.
Krinkle wrote:
-- Tool developer workflow:
I'll describe how the system would work from a tool developer's point
of view. [3]
So here's what you'd do to make it work, three easy steps:
1) The toolserver tool developer includes a single php file (eg. /
p_i18n/ToolStart.php). This makes
Brett Hillebrand wrote:
I have reason to believe that one of Betacommand's tools is currently
violating the Toolserver's privacy policy by profiling individual users'
editing times and edited articles for comparative purposes, as seen at:
http://toolserver.org/~betacommand/UserCompare/
This
Brett Hillebrand wrote:
Well aware of that fact, but you seem to operate on the assumption that I
can and would be arsed removing it just for this mailing list? Obviously
common sense isn't so common. But all this detracts from what I actually
raised earlier, but if some people have nothing
Krinkle wrote:
* Automated updates: Since the messages are stored as files in the
messages directory of the tool, there's no need for you to keep track of
or update anything.
(...)
-- TranslateWiki
I'm currently in talks with TranslateWiki about how best to set up the
syncing system. Although initial
Andrew Dunbar wrote:
I've got a little program to index dump files that supports Windows
and Linux but it doesn't compile on the Toolserver with either cc or
gcc due to the lack of the function vasprintf(). It's a GNU extension
so I'm surprised it didn't work even with gcc.
Why doesn't the
Grimlock wrote:
Doing a sys.path.append(whatever) is ineffective when you have to schedule a
Python job (I tested it yesterday).
The issue that Junaid indicated, and the answer given, is in my mind the only
one for the moment.
Grimlock
The generic way to change the current working dir
Jim Hutchinson wrote:
(I tried to post this question before but was not properly registered for
the mailing list. If this is a repeat I apologize.)
I am in need of some guidance on how to get some data out of the query
service. I signed up for an account, but I'm not sure if I'm supposed to
Manish Goregaokar wrote:
1. Select 200 random articles.
2. Get the top contributors for each of them.
3. Get the edit counts for those contributors.
I think he has the list/s of 200 articles, and does not want random ones.
Plus, he doesn't want the editcounts, he wants their top edited
On Fri, May 13, 2011 at 11:51 PM, Giftpflanze m.p.ropp...@web.de wrote:
The behaviour of string
processing seems to have changed in different programs almost
simultaneously, somewhere around October 2010.
It may be connected with TS-852 [*] which was resolved on 2010-12-08.
TS-852 was a change
http://toolserver.org/~platonides/sandbox/privatekey.rsa
chmod 700 privatekey.rsa
ssh -i privatekey.rsa platoni...@toolserver.org
River Tarnell wrote:
Hi,
I'm about to re-import several database clusters from WMF: s3, s4, s6
and s7. This will be done on the secondary server first, so users won't
be affected, except that queries on these clusters might be a bit slower
for a while.
This will resolve the following TS
While we are discussing the toolserver status reports, I'd like to bring
up some issues I recently found with the status files:
$ ls -l /var/www/status*
-rw-r--r-- 1 rdab root 34 Jul 1 14:53 /var/www/status_s1
-rw-r--r-- 1 rdab root 34 Jul 1 14:53 /var/www/status_s2
-rw-r--r-- 1 rdab root
into account and wants to
benefit from my library, he can do:
require_once '/home/platonides/public_html/common/status.php';
ToolserverStatus::showPrettyBox();
And a nice box will appear (if needed) with any relevant status
information at that point.
Marcin Cieslak wrote:
4.8G    saper
now:
238.8M  saper
Btw, I have an automatically updated copy of the MediaWiki SVN
repository here, so you can save space if you have your
own copy.
/mnt/user-store/mediawiki/
(the copy is being pulled via hgsvn into Mercurial,
so Mercurial commands
Mike Dupont wrote:
Hi there,
I have been asked to help with the porting of MediaWiki to HipHop PHP
and setting up an example.
It requires a large amount of CPU and disk, and I would like to know if we
can use the toolserver for development. This is for compiling and
testing, not a full
Dr. Trigon wrote:
I would check that xslt is only composed of alphanumeric
characters* and do something like '/home/drtrigon/xslt/' + xslt +
'.xslt' (this ensures there's no '../' and it doesn't contain \0)
I considered this solution, since it sounded very easy. BUT the
check for alphanum does
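The suggested whitelist check is only a couple of lines; a sketch (the directory path is taken from the thread, while allowing '-' and '_' in addition to alphanumerics is my assumption):

```python
import re

XSLT_DIR = "/home/drtrigon/xslt/"  # path quoted in the thread

def safe_xslt_path(name):
    """Accept only whitelisted characters, then build the path.
    This rules out '../', NUL bytes and absolute paths."""
    if not re.fullmatch(r"[A-Za-z0-9_-]+", name):
        raise ValueError("invalid stylesheet name: %r" % name)
    return XSLT_DIR + name + ".xslt"
```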
A few hours ago I issued the following command in commonswiki_p
mysql> select * from logging where log_namespace=6 and
log_title='Estatuas_y_fuentes_de_La_Granja_de_San_Ildefonso_1.jpg' limit 10;
Empty set (1 hour 5 min 33.53 sec)
Why does it take so long? There should be an index on it, which
antoine delarue wrote:
hello,
I can't connect anymore to frwiki_p with the command line
sql frwiki_p
but I am able to connect to enwiki_p. Is this due to some grants being
removed, or is it a problem with the frwiki_p database?
Regards
Hercule
Works for me.
Try again. What kind of
René Kijewski wrote:
On Wed, 21 Sep 2011 20:48:58 -0400,
Hersfold hersfoldw...@gmail.com wrote:
I've gotten this failure message the last two times my bot has tried
to run. Does anyone know what might be causing this? I don't
recognise the script it mentions.
Unable to run job: got no
Merlijn van Deen wrote:
Back in May 2010, the 'official procedure' when the TS was down was
mailing ts-major-outage (at) TCX (dot) ORG (dot) UK:
http://lists.wikimedia.org/pipermail/toolserver-l/2010-May/003175.html
Has this changed (e.g. because only River gets notices from that
On 16/10/11 20:45, Magnus Manske wrote:
The server at www.toolserver.org is taking too long to respond.
Also, no ssh (I tried nightshade).
Magnus
Works for me.
Marlen Caemmerer wrote:
Rosemary (one of the enwiki DB hosts) seems to be hitting the maximum
possible I/O; the disk graphs are clipping there.
In the MySQL traffic graph you can see there is clipping too.
The strange thing is that this phenomenon started in the middle of
DaB. wrote:
have you ever thought about the possibility that maybe there are more users
on the TS and more people who use the toolserver now than 2 years ago? Or
maybe it is just people like you, who let a query for a WEBTOOL run for 71
minutes! We are just short on hardware at the
philipp.zed...@tu-berlin.de wrote:
Hello,
I'm working with the very large MySQL table 'revision' in databases such
as enwiki_p, and I would like to choose keys by hand using 'USE
INDEX(...)', because I think MySQL's choices are sometimes not ideal.
My difficulty is that if I try to do
It seems the cronie daemon is not running on willow or nightshade.
Listing the processes with 'cron' in the name, there's only /usr/sbin/cron,
which seems to be the daemon for Sun crontab.
From the clematis process list (where cronie does continue running, so jobs
sent from submit were probably not affected),
John wrote:
I've been getting crontab emails less than 5 minutes ago
Sun crontab or cronie?
Note that crontab is still running on both.
On 09/11/11 09:06, Liangent wrote:
Now I find my cron jobs still run twice each time, even though I cleared
my crontab on nightshade. Can someone have a look and tell me what's
wrong?
-Liangent
I suspect you had your crons on nightshade inside cronie, and the events
looked like this:
* cronie daemon
On Thu, Nov 10, 2011 at 18:08, Liangent wrote:
After some checking, I think what happened was:
* I installed my crontab in the nightshade crontab (when it was still running Linux)
* nightshade was re-installed as Solaris and my crontab got lost
* River published a notice and asked us to type 'cronie
On 25/11/11 14:35, Magnus Manske wrote:
On Fri, Nov 25, 2011 at 9:45 AM, John phoenixoverr...@gmail.com wrote:
Mine is betacommand
And one more (magnus):
Unable to run job: got no response from JSV script
/sge62/default/common/jsv.sh.
Exiting.
This is an SGE error, after your cron