[jira] [Commented] (YARN-888) clean up POM dependencies

2013-08-30 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13755236#comment-13755236
 ] 

Timothy St. Clair commented on YARN-888:


Our current list of JIRAs can be found here: 
https://fedoraproject.org/wiki/Changes/Hadoop#Upstream_patch_tracking

> clean up POM dependencies
> -------------------------
>
> Key: YARN-888
> URL: https://issues.apache.org/jira/browse/YARN-888
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Alejandro Abdelnur
>Assignee: Roman Shaposhnik
>
> Intermediate 'pom' modules define dependencies inherited by leaf modules.
> This is causing issues in intellij IDE.
> We should normalize the leaf modules as in common, hdfs, and tools, where all 
> dependencies are defined in each leaf module and the intermediate 'pom' 
> modules do not define any dependencies.
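A minimal sketch of the normalization described above, with illustrative module and artifact names (not Hadoop's actual POM contents): the intermediate 'pom' module manages versions only, and each leaf module declares the dependencies it actually uses.

```xml
<!-- intermediate 'pom' module: pins versions, defines no dependencies -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>some-lib</artifactId>
      <version>1.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- leaf module: declares every dependency it uses; version is inherited -->
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>some-lib</artifactId>
  </dependency>
</dependencies>
```

With this layout, an IDE importing a leaf module sees exactly the dependencies that module compiles against.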

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-160) nodemanagers should obtain cpu/memory values from underlying OS

2013-08-05 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13729530#comment-13729530
 ] 

Timothy St. Clair commented on YARN-160:


I think the prudent approach would be to evaluate hwloc and its community, and 
determine if it meets the internal needs of YARN.  For risk mitigation 
purposes, I think having a plugin abstraction layer as a fallback would also be 
wise. 

I did notice there are also Java bindings for hwloc 
(https://launchpad.net/jhwloc/).
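A plugin abstraction of the kind suggested above might look like the following sketch. All names here are illustrative, not YARN's actual interfaces; the /proc/meminfo parse is the Linux-specific implementation behind the interface.

```java
// Hypothetical plugin abstraction for probing node resources from the OS.
// Names are illustrative, not YARN's actual API.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

interface NodeResourceProbe {
    long physicalMemoryBytes(); // -1 if undetectable
    int availableCores();
}

class ProcfsResourceProbe implements NodeResourceProbe {
    // Parses the "MemTotal:   16331900 kB" line from /proc/meminfo content.
    static long parseMemTotalKb(String meminfo) {
        Matcher m = Pattern.compile("MemTotal:\\s+(\\d+)\\s+kB").matcher(meminfo);
        if (!m.find()) {
            throw new IllegalArgumentException("no MemTotal line");
        }
        return Long.parseLong(m.group(1));
    }

    @Override
    public long physicalMemoryBytes() {
        try {
            return parseMemTotalKb(new String(
                    Files.readAllBytes(Paths.get("/proc/meminfo")))) * 1024L;
        } catch (IOException e) {
            return -1; // non-Linux or unreadable: caller falls back to config
        }
    }

    @Override
    public int availableCores() {
        return Runtime.getRuntime().availableProcessors();
    }
}
```

An hwloc-backed implementation (native or via jhwloc) would slot in behind the same interface.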

> nodemanagers should obtain cpu/memory values from underlying OS
> ---------------------------------------------------------------
>
> Key: YARN-160
> URL: https://issues.apache.org/jira/browse/YARN-160
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.0.3-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.1.0-beta
>
>
> As mentioned in YARN-2
> *NM memory and CPU configs*
> Currently these values are coming from the config of the NM; we should be 
> able to obtain those values from the OS (i.e., in the case of Linux, from 
> /proc/meminfo & /proc/cpuinfo). As this is highly OS-dependent we should have 
> an interface that obtains this information. In addition, implementations of 
> this interface should be able to specify a mem/cpu offset (an amount of mem/cpu 
> not to be available as YARN resources); this would allow reserving mem/cpu for 
> the OS and other services outside of YARN containers.



[jira] [Commented] (YARN-977) Interface for users/AM to know actual usage by the container

2013-07-26 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13720810#comment-13720810
 ] 

Timothy St. Clair commented on YARN-977:


Usage statistics can also be reported via cgroups.
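As a sketch of the cgroups route (illustrative code, not the NM's actual implementation): under cgroup v1, memory.usage_in_bytes and cpuacct.usage each expose a single integer counter that can be sampled per container.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class CgroupUsageReader {
    private final Path cgroupRoot; // e.g. /sys/fs/cgroup, or /cgroup on RHEL 6

    CgroupUsageReader(Path cgroupRoot) {
        this.cgroupRoot = cgroupRoot;
    }

    // cgroup counter files hold one decimal integer followed by a newline.
    static long parseCounter(String content) {
        return Long.parseLong(content.trim());
    }

    // e.g. readCounter("memory", "container_01", "memory.usage_in_bytes")
    long readCounter(String controller, String container, String file)
            throws IOException {
        Path p = cgroupRoot.resolve(controller).resolve(container).resolve(file);
        return parseCounter(new String(Files.readAllBytes(p)));
    }
}
```

Sampling these periodically and keeping avg/max per container would give the NM the numbers this JIRA asks to expose.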

> Interface for users/AM to know actual usage by the container
> ------------------------------------------------------------
>
> Key: YARN-977
> URL: https://issues.apache.org/jira/browse/YARN-977
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Omkar Vinit Joshi
>
> Today we allocate resource (memory and cpu) and node manager starts the 
> container with requested resource [I am assuming they are using cgroups]. But 
> there is definitely a possibility of users requesting more than what they 
> actually may need during the execution of their container/job-task. If we add 
> a way for users/AMs to know the actual usage of a requested/completed 
> container, then they may optimize it for the next run.
> This will be helpful for the AM to optimize cpu/memory resource requests by 
> querying the NM/RM for the avg/max cpu/memory usage of the container, or maybe 
> of all containers belonging to the application.



[jira] [Commented] (YARN-972) Allow requests and scheduling for fractional virtual cores

2013-07-25 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13719905#comment-13719905
 ] 

Timothy St. Clair commented on YARN-972:


Enforcement usually involves cpu.shares, and as that scales out it has a cost 
in the kernel.  Take that to an extreme where N >= 1000, and things start to 
become messy fast.  We tried looking at this in the past with HTCondor and 
punted, due to diminishing returns and a couple of kernel OOPSes.

> Allow requests and scheduling for fractional virtual cores
> ----------------------------------------------------------
>
> Key: YARN-972
> URL: https://issues.apache.org/jira/browse/YARN-972
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: api, scheduler
>Affects Versions: 2.0.5-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>
> As this idea sparked a fair amount of discussion on YARN-2, I'd like to go 
> deeper into the reasoning.
> Currently the virtual core abstraction hides two orthogonal goals.  The first 
> is that a cluster might have heterogeneous hardware and that the processing 
> power of different makes of cores can vary wildly.  The second is that 
> different (combinations of) workloads can require different levels of 
> granularity.  E.g. one admin might want every task on their cluster to use at 
> least a core, while another might want applications to be able to request 
> quarters of cores.  The former would configure a single vcore per core.  The 
> latter would configure four vcores per core.
> I don't think that the abstraction is a good way of handling the second goal. 
> Having virtual cores refer to different magnitudes of processing power on 
> different clusters will make the difficult problem of deciding how many cores 
> to request for a job even more confusing.
> Can we not handle this with dynamic oversubscription?
> Dynamic oversubscription, i.e. adjusting the number of cores offered by a 
> machine based on measured CPU-consumption, should work as a complement to 
> fine-granularity scheduling.  Dynamic oversubscription is never going to be 
> perfect, as the amount of CPU a process consumes can vary widely over its 
> lifetime.  A task that first loads a bunch of data over the network and then 
> performs complex computations on it will suffer if additional CPU-heavy tasks 
> are scheduled on the same node because its initial CPU-utilization was low.  
> To guard against this, we will need to be conservative with how we 
> dynamically oversubscribe.  If a user wants to explicitly hint to the 
> scheduler that their task will not use much CPU, the scheduler should be able 
> to take this into account.
> On YARN-2, there are concerns that including floating point arithmetic in the 
> scheduler will slow it down.  I question this assumption, and it is perhaps 
> worth debating, but I think we can sidestep the issue by multiplying 
> CPU-quantities inside the scheduler by a decently sized number like 1000 and 
> keep doing the computations on integers.
> The relevant APIs are marked as evolving, so there's no need for the change 
> to delay 2.1.0-beta.
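The integer-arithmetic idea in the last paragraph can be sketched as follows (a toy illustration, not the scheduler's real data types): scale vcore quantities by 1000 at the API boundary, and keep every internal comparison and sum in integers.

```java
class FixedPointVcores {
    static final int SCALE = 1000; // 1 vcore == 1000 milli-vcores

    // Floating point is confined to this one conversion at the edge.
    static int toMilliVcores(double vcores) {
        return (int) Math.round(vcores * SCALE);
    }

    // Scheduler-internal math stays in integer milli-vcores.
    static boolean fits(int requestedMilli, int availableMilli) {
        return requestedMilli <= availableMilli;
    }
}
```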



[jira] [Commented] (YARN-972) Allow requests and scheduling for fractional virtual cores

2013-07-25 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13719664#comment-13719664
 ] 

Timothy St. Clair commented on YARN-972:


IMHO fractional CPU acquisition can be dangerous and prone to error.  It 
essentially over-burdens the "real" scheduler in the kernel, and it goes 
against the long-term trend of ever more cores per chip.  E.g., do you really 
need this if you have 1000+ cores on a single chip?

You can go down this road, but I think you will see diminishing returns versus 
proper slicing and splicing around the other resources, namely network and I/O 
bandwidth, which are the real bottlenecks.




[jira] [Commented] (YARN-888) clean up POM dependencies

2013-07-01 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696842#comment-13696842
 ] 

Timothy St. Clair commented on YARN-888:


[~tucu00], I have a series of tickets relating to this one, and I'm wondering 
if it makes sense to use it as an umbrella and tree sub-tasks off of it.

> clean up POM dependencies
> -------------------------
>
> Key: YARN-888
> URL: https://issues.apache.org/jira/browse/YARN-888
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Alejandro Abdelnur
>
> Intermediate 'pom' modules define dependencies inherited by leaf modules.
> This is causing issues in intellij IDE.
> We should normalize the leaf modules as in common, hdfs, and tools, where all 
> dependencies are defined in each leaf module and the intermediate 'pom' 
> modules do not define any dependencies.



[jira] [Commented] (YARN-799) CgroupsLCEResourcesHandler tries to write to cgroup.procs

2013-06-12 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13681477#comment-13681477
 ] 

Timothy St. Clair commented on YARN-799:


+1 to appending to tasks; see 
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-Moving_a_Process_to_a_Control_Group.html
for reference.
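The "append to tasks" mechanism amounts to the following (a minimal sketch; the cgroup mount point and directory layout vary by distro, so the caller supplies the path):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class CgroupTaskWriter {
    // Mirrors `echo $PID >> <cgroup>/tasks` from the Red Hat guide:
    // appending a PID to the tasks file moves that thread into the group.
    static void addPid(Path cgroupDir, long pid) throws IOException {
        Files.write(cgroupDir.resolve("tasks"), (pid + "\n").getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```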

> CgroupsLCEResourcesHandler tries to write to cgroup.procs
> ---------------------------------------------------------
>
> Key: YARN-799
> URL: https://issues.apache.org/jira/browse/YARN-799
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha, 2.0.5-alpha
>Reporter: Chris Riccomini
>
> The implementation of
> bq. 
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
> Tells the container-executor to write PIDs to cgroup.procs:
> {code}
>   public String getResourcesOption(ContainerId containerId) {
> String containerName = containerId.toString();
> StringBuilder sb = new StringBuilder("cgroups=");
> if (isCpuWeightEnabled()) {
>   sb.append(pathForCgroup(CONTROLLER_CPU, containerName) + 
> "/cgroup.procs");
>   sb.append(",");
> }
> if (sb.charAt(sb.length() - 1) == ',') {
>   sb.deleteCharAt(sb.length() - 1);
> }
> return sb.toString();
>   }
> {code}
> Apparently, this file has not always been writeable:
> https://patchwork.kernel.org/patch/116146/
> http://lkml.indiana.edu/hypermail/linux/kernel/1004.1/00536.html
> https://lists.linux-foundation.org/pipermail/containers/2009-July/019679.html
> The RHEL version of the Linux kernel that I'm using has a CGroup module that 
> has a non-writeable cgroup.procs file.
> {quote}
> $ uname -a
> Linux criccomi-ld 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 
> 2011 x86_64 x86_64 x86_64 GNU/Linux
> {quote}
> As a result, when the container-executor tries to run, it fails with this 
> error message:
> bq.fprintf(LOGFILE, "Failed to write pid %s (%d) to file %s - %s\n",
> This is because the executor is given a resource by the 
> CgroupsLCEResourcesHandler that includes cgroup.procs, which is non-writeable:
> {quote}
> $ pwd 
> /cgroup/cpu/hadoop-yarn/container_1370986842149_0001_01_01
> $ ls -l
> total 0
> -r--r--r-- 1 criccomi eng 0 Jun 11 14:43 cgroup.procs
> -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_period_us
> -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_runtime_us
> -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.shares
> -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 notify_on_release
> -rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 tasks
> {quote}
> I patched CgroupsLCEResourcesHandler to use /tasks instead of /cgroup.procs, 
> and this appears to have fixed the problem.
> I can think of several potential resolutions to this ticket:
> 1. Ignore the problem, and make people patch YARN when they hit this issue.
> 2. Write to /tasks instead of /cgroup.procs for everyone
> 3. Check permissions on /cgroup.procs prior to writing to it, and fall back 
> to /tasks.
> 4. Add a config to yarn-site that lets admins specify which file to write to.
> Thoughts?
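Option 3 above can be sketched in a few lines (illustrative only; the real fix would live in CgroupsLCEResourcesHandler): probe cgroup.procs for writability and fall back to tasks on kernels where it is read-only.

```java
import java.io.File;

class CgroupAttachFileChooser {
    // Returns the file name the container-executor should append PIDs to.
    static String choose(File cgroupDir) {
        File procs = new File(cgroupDir, "cgroup.procs");
        if (procs.exists() && procs.canWrite()) {
            return "cgroup.procs";
        }
        return "tasks"; // older kernels ship a read-only cgroup.procs
    }
}
```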



[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-04 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675121#comment-13675121
 ] 

Timothy St. Clair commented on YARN-689:


Just to interject again: whenever a scheduler sub-divides resources, there will 
eventually need to be some separate daemon/tool/policy to re-balance/defragment 
the cluster.  I haven't really seen any JIRAs around this, so I'm also curious 
how it is intended to be handled.

> Add multiplier unit to resourcecapabilities
> -------------------------------------------
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
> YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse
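The decoupling proposed above can be sketched as follows (a hypothetical method, not the attached patch): round the request up to the increment, then apply the minimum as a separate floor, so a 512MB increment with a 1GB minimum leaves a 1.5GB request at 1.5GB instead of rounding it to 2GB.

```java
class AllocationNormalizer {
    // Round the request up to a multiple of the increment, then enforce
    // the minimum separately instead of reusing it as the multiplier.
    static long normalize(long requestedMb, long minimumMb, long incrementMb) {
        long rounded = ((requestedMb + incrementMb - 1) / incrementMb) * incrementMb;
        return Math.max(rounded, minimumMb);
    }
}
```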



[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-05-22 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13664616#comment-13664616
 ] 

Timothy St. Clair commented on YARN-689:


Hi folks, 

+1, in agreement with [~tucu00], around resource requests.  I'm not intimately 
familiar with the inner workings of YARN, but I have a fair amount of 
experience with other schedulers.  They typically get around this through an 
expression syntax/language in which the admin can define policies to tune for 
their environment's workloads, where quantization boundaries are ideal (e.g. 
best fit in [X] chunks, where [X] could be whole (MB) or fractional (CPU) 
units).  Fragmentation is the biggest problem with this flexibility.

Use Case:
A request comes in for 20MB, 0.5 CPUs (cpu_shares in cgroups), 1 booster_rock, 
and 3 GPUs.  That request is then evaluated against an expression (min, max, 
whatever) at activation time, which then splices the resources appropriately.
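A toy version of that activation-time evaluation, with a simple min/max clamp standing in for a real policy expression language (all names illustrative):

```java
class ActivationPolicy {
    final double min, max; // admin-defined quantization bounds

    ActivationPolicy(double min, double max) {
        this.min = min;
        this.max = max;
    }

    // Splice the requested quantity into the policy's allowed range.
    double splice(double requested) {
        return Math.min(Math.max(requested, min), max);
    }
}
```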

Either way, this treads into a known space around resource splicing and 
utilization.
Ref1: 
http://spinningmatt.wordpress.com/2012/11/13/no-longer-thinking-in-slots-thinking-in-aggregate-resources-and-consumption-policies/
  
Ref2: Every paper & talk that Wilkes gives. 

Cheers, 
Tim

> Add multiplier unit to resourcecapabilities
> -------------------------------------------
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse



[jira] [Commented] (YARN-322) Add cpu information to queue metrics

2013-05-07 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651011#comment-13651011
 ] 

Timothy St. Clair commented on YARN-322:


I'm still on my learning curve, but from what I have seen there is limited 
information passed in the Resource.  Given some of the other JIRAs around 
memory, wouldn't it make sense to expand the scope to some type of 
name/value-pair set for node attributes, with cpu capabilities being part of 
that set?

One could envision the scope expanding to include GPUs, interconnect 
capabilities, etc., as not all machines are equal.

This discrimination would allow ApplicationMasters to better filter desired 
resources for their problem.

Also feel free to tell me I'm wrong, as I'm still learning ;-).



> Add cpu information to queue metrics
> ------------------------------------
>
> Key: YARN-322
> URL: https://issues.apache.org/jira/browse/YARN-322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, scheduler
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
> Fix For: 2.0.5-beta
>
>
> Post YARN-2 we need to add cpu information to queue metrics.



[jira] [Commented] (YARN-600) Hook up cgroups CPU settings to the number of virtual cores allocated

2013-04-23 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13639090#comment-13639090
 ] 

Timothy St. Clair commented on YARN-600:


What happens when a node does not have cgroups enabled?  Fall back to CPU 
affinity (sched_setaffinity)?

> Hook up cgroups CPU settings to the number of virtual cores allocated
> ---------------------------------------------------------------------
>
> Key: YARN-600
> URL: https://issues.apache.org/jira/browse/YARN-600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, scheduler
>Affects Versions: 2.0.3-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>
> YARN-3 introduced CPU isolation and monitoring through cgroups.  YARN-2 
> introduced CPU scheduling in the capacity scheduler, and YARN-326 will 
> introduce it in the fair scheduler.  The number of virtual cores allocated to 
> a container should be used to weight the number of cgroup CPU shares given 
> to it.
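The weighting described above could be as simple as the following sketch (illustrative; 1024 is the kernel's default cpu.shares value, so it is a natural per-vcore unit):

```java
class CpuSharesCalculator {
    static final int SHARES_PER_VCORE = 1024; // kernel default for one task

    static int sharesFor(int vcores) {
        // The kernel rejects cpu.shares values below 2, so clamp the floor.
        return Math.max(2, vcores * SHARES_PER_VCORE);
    }
}
```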



[jira] [Commented] (YARN-160) nodemanagers should obtain cpu/memory values from underlying OS

2013-04-12 Thread Timothy St. Clair (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630059#comment-13630059
 ] 

Timothy St. Clair commented on YARN-160:


If it's possible to tag along on the development of this one, I would be 
interested in the approach.  IMHO, referencing existing solutions helps gauge a 
baseline:

Ref:
http://www.open-mpi.org/projects/hwloc/
http://www.rce-cast.com/Podcast/rce-33-hwloc-portable-hardware-locality.html
http://gridscheduler.sourceforge.net/projects/hwloc/GridEnginehwloc.html

> nodemanagers should obtain cpu/memory values from underlying OS
> ---------------------------------------------------------------
>
> Key: YARN-160
> URL: https://issues.apache.org/jira/browse/YARN-160
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.0.3-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.5-beta
>
>
> As mentioned in YARN-2
> *NM memory and CPU configs*
> Currently these values are coming from the config of the NM; we should be 
> able to obtain those values from the OS (i.e., in the case of Linux, from 
> /proc/meminfo & /proc/cpuinfo). As this is highly OS-dependent we should have 
> an interface that obtains this information. In addition, implementations of 
> this interface should be able to specify a mem/cpu offset (an amount of mem/cpu 
> not to be available as YARN resources); this would allow reserving mem/cpu for 
> the OS and other services outside of YARN containers.
