>>> to show how SHA1's speed made it unsuitable for hashing passwords.
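That speed claim is easy to reproduce locally. Here is a rough sketch (my addition, not the original demo) that times SHA1 throughput with Python's hashlib; exact numbers vary widely by machine:

```python
import hashlib
import time

def sha1_rate(n: int = 200_000) -> float:
    # Hash n short messages and return the approximate
    # number of SHA1 hashes computed per second.
    start = time.perf_counter()
    for i in range(n):
        hashlib.sha1(str(i).encode()).hexdigest()
    return n / (time.perf_counter() - start)
```

Even single-threaded Python typically manages on the order of a million SHA1 hashes per second, and optimized native or GPU code is orders of magnitude faster, which is exactly why deliberately slow, salted KDFs like bcrypt or scrypt are preferred for password storage.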
>>>>
>>>> I think it would be cool to redo the demo, but use the power of a
>>>> Spark-managed cluster to crunch through hashes even faster.
>>>>
>>>> But how would you do that with Spark (if at all)?
>>>>
>>>> I'm guessing you would create an RDD that somehow defined the search
>>>> space you're going to go through, and then partition it to divide the work
>>>> up equally amongst the cluster's cores. Does that sound right?
>>>>
>>>> I wonder if others have already used Spark for
>>>> computationally-intensive workloads like this, as opposed to just
>>>> data-intensive ones.
>>>>
>>>> Nick
>>>>
>>>>
>>>> --
>>>> View this message in context: Using Spark to crack passwords
>>>> <http://apache-spark-user-list.1001560.n3.nabble.com/Using-Spark-to-crack-passwords-tp7437.html>
>>>> Sent from the Apache Spark User List mailing list archive
>>>> <http://apache-spark-user-list.1001560.n3.nabble.com/> at Nabble.com.
>>>>
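The partitioned search Nick describes could be sketched roughly like this. The core enumeration is plain Python with hashlib so it runs anywhere; the Spark distribution step is shown only as a comment, since it assumes a live SparkContext named `sc`, and all names and parameters here are illustrative:

```python
import hashlib
from itertools import product
from string import ascii_lowercase

def sha1_hex(candidate: str) -> str:
    # Hex digest of the candidate password, matching how the
    # target hash would have been produced.
    return hashlib.sha1(candidate.encode()).hexdigest()

def crack(target_hash: str, length: int = 4):
    # Enumerate the whole search space (all lowercase strings of
    # the given length) and return the first candidate whose SHA1
    # matches the target, or None if the space is exhausted.
    for chars in product(ascii_lowercase, repeat=length):
        candidate = "".join(chars)
        if sha1_hex(candidate) == target_hash:
            return candidate
    return None

# On a Spark cluster, the same enumeration becomes an RDD that is
# partitioned across the cluster's cores, e.g.:
#
#   candidates = ["".join(c) for c in product(ascii_lowercase, repeat=4)]
#   hit = (sc.parallelize(candidates, numSlices=64)
#            .filter(lambda c: sha1_hex(c) == target_hash)
#            .take(1))
```

Materializing the whole candidate list on the driver, as in the comment above, is the naive approach; for larger search spaces you would instead parallelize over index ranges and generate the candidates inside each partition.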