On 20.03.2013, at 18:21, Txema Heredia Genestar wrote:

> Well, right now my master node's Postfix has an alias mapping to the user's 
> LDAP email field. Thus, mails are not stored locally for the users but sent 
> directly to their real accounts.
> 
> Should I get rid of that, let all mail accumulate in /var/spool/mail, and 
> parse the files with a cron job?

Besides Joshua's idea:

You could also use a plain call to `mail`. I.e., if there is local mail, it 
can already be retrieved with:

$ echo "p 1-10" | mail

and an hourly cron job could do:

$ echo "p (from root)" | mail | mail -s Summary [email protected]

(with the final address of the user). Or get just the headers with:

$ mail -H

and you can do further processing.
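
As a rough, untested sketch of such further processing from an hourly cron 
job (how a user's real address is looked up is only assumed here, hence the 
placeholder):

#!/bin/sh
# send each user's accumulated local spool as one digest and empty it;
# the digest body is simply the raw mbox content
for SPOOL in /var/spool/mail/*; do
    USER=$(basename "$SPOOL")
    [ -s "$SPOOL" ] || continue              # nothing queued for this user
    TO="$USER@example.com"                   # placeholder: look up the real address (e.g. LDAP)
    mail -s "Cluster digest for $USER" "$TO" < "$SPOOL" && : > "$SPOOL"
done

A real version should lock the spool file (flock or a dotlock) so nothing 
arriving between sending and truncating gets lost.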

-- Reuti


> I have been looking for some parsers and they seem too complicated for what 
> I want to achieve. Isn't there any mail client able to retrieve mails one by 
> one from a script?
> 
> Txema
> 
> On 20/03/13 13:50, Reuti wrote:
>> Hi,
>> 
>> On 20.03.2013, at 13:01, Txema Heredia Genestar wrote:
>> 
>>> We have a 300-core cluster. Our users have always submitted their jobs 
>>> using "-m ea" to receive an email whenever a job finishes or aborts. Our 
>>> typical user tends to submit 1,000 to 3,000 jobs at once. They usually 
>>> don't use task jobs, but submit each one as an independent job. In 
>>> addition, we have an epilog script that, whenever a job finishes, checks 
>>> the requested memory (h_vmem) against the real maxvmem used and sends an 
>>> email to the user if they are blatantly requesting more than they need.
>>> This means that, on a typical "run", our users receive between 1,000 and 
>>> 6,000 emails from the cluster.
>>> 
>>> Everybody was OK with this but, a few months ago, our university moved 
>>> the mail service to Google, and now the mail admins are threatening to 
>>> ban all mail from our cluster.
>>> 
>>> Is there any solution to this (beyond not sending mails at all or 
>>> changing the task-job habits of our users)? Are there any add-ons or 
>>> plugins out there that accumulate mails on the master node and send 
>>> hourly/daily digests to the users?
>> This has come up on the list a few times before: how to get only one 
>> email when the complete array job has finished. What you can implement:
>> 
>> a) Don't send emails from the array job, but submit a dummy job afterwards 
>> with -hold_jid on the array job (it could even run in a dummy queue on the 
>> headnode solely for this purpose, as it won't put any load on the machine), 
>> which sends just one email after the main job has finished. It could also 
>> run `qacct` to extract the exit codes of all the array tasks.
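>> 
>> A rough sketch of this (untested; job, queue and script names are just 
>> placeholders):
>> 
>> $ qsub -t 1-1000 -N bigrun work.sh            # the array job itself, no -m at all
>> $ qsub -hold_jid bigrun -q dummy.q notify.sh  # runs once the whole array is done
>> 
>> where notify.sh could be something like:
>> 
>> #!/bin/sh
>> # collect the exit codes of all array tasks and send a single mail;
>> # the accounting record of the last task may need a moment to appear
>> qacct -j bigrun | awk '/exit_status/ {print $2}' | sort | uniq -c \
>>     | mail -s "bigrun finished" "$USER"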
>> 
>> b) Most likely you already have an MTA running on the master node of the 
>> cluster to forward the mail to Google. It should also be possible to use a 
>> regexp/hash recipient canonical map to rewrite the official email address, 
>> like [email protected], into foobar@headnode and collect the mails there 
>> first. A cron job could then assemble one email per hour containing all 
>> the mails of this particular user from the last hour and send that one to 
>> Google.
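>> 
>> In Postfix this could be a regexp table for recipient_canonical_maps, 
>> roughly like this (domain names are placeholders, and the headnode domain 
>> has to be one Postfix delivers locally, e.g. listed in mydestination):
>> 
>> # /etc/postfix/recipient_canonical (regexp table, no postmap needed)
>> /^(.+)@uni\.example\.edu$/    ${1}@headnode.cluster
>> 
>> # main.cf
>> recipient_canonical_maps = regexp:/etc/postfix/recipient_canonical
>> 
>> The hourly digest itself must then be addressed in a form this map doesn't 
>> match again, otherwise it would just be rewritten back to the headnode.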
>> 
>> -- Reuti
>> 
>> 
>>> Thanks in advance,
>>> 
>>> Txema


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
