ct me with potential
employers, please reach out.
Thanks,
Sri Tummala
engineers and scientists to
focus on the application logic without needing to implement and test
infrastructure resilience mechanisms.
Thank you again for your time and assistance.
Best regards,
Sri Potluri
On Mon, Feb 19, 2024 at 5:03 PM Mich Talebzadeh
wrote:
> Went through your issue w
potential workaround, but it would be
preferable to have a more integrated solution.
I appreciate any guidance, insights, or feedback you can provide on this
matter.
Thank you for your time and support.
Best regards,
Sri P
red a job yet. I am seeking
referrals within product firms (preferably non-consulting) in India that
work with Flink, Spark, Scala, Big Data, or in the fields of ML & AI. Can
someone assist me with this?
Thanks
Sri
Hi All,
Is anyone looking for a Spark Scala contract role inside the USA? A company
called Maxonic has an open Spark Scala contract position (100% remote)
inside the USA. If anyone is interested, please send your CV to
kali.tumm...@gmail.com.
Thanks & Regards
Sri Tummala
Hi Flink Users/ Spark Users,
Is anyone hiring for contract (corp-to-corp) Big Data Spark Scala or Flink
Scala roles?
Thanks
Sri
Sent from Mail for Windows 10
Makes sense, thanks.
Thanks
Sri
Sent from my iPhone
> On 2 Aug 2016, at 03:27, Jacek Laskowski wrote:
>
> Congrats!
>
> Where I was doing foreach(println) in the past, I'm doing .toDF.show these
> days. Give it a shot and you'll experience the feeling yourself!
rdpress.com/2014/06/21/calculate-running-sums/
Thanks
Sri
On Mon, Aug 1, 2016 at 12:07 AM, Sri wrote:
> Hi,
>
> I solved it using Spark SQL, which uses the similar window functions mentioned
> below; for my own knowledge I am trying to solve it using a Scala RDD, which I
> am unable to
Hi,
I solved it using Spark SQL, which uses the similar window functions mentioned
below; for my own knowledge I am trying to solve it using a Scala RDD, which I
am unable to.
What function in Scala supports a window function like SQL's UNBOUNDED PRECEDING
AND CURRENT ROW? Is it sliding?
Thanks
Sri
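For what it's worth, a minimal sketch in plain Scala, assuming the rows are
already sorted by date: scanLeft is the cumulative (UNBOUNDED PRECEDING AND
CURRENT ROW) analogue, while sliding gives fixed-width windows rather than an
unbounded one.

// scanLeft accumulates from the start of the sequence, matching
// SQL's ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
val balances = Seq(100.0, -25.0, 50.0, -50.0)
val runningSum = balances.scanLeft(0.0)(_ + _).tail  // drop the seed 0.0
// runningSum: Seq(100.0, 75.0, 125.0, 75.0)

// sliding(n) is a fixed-size window over the last n rows, not an unbounded one:
val pairSums = balances.sliding(2).map(_.sum).toList
// pairSums: List(75.0, 25.0, 0.0)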
UNBOUNDED PRECEDING AND
>>>>>>>> CURRENT ROW) daily_balance
>>>>>>>> FROM table
Thanks
Sri
Sent from my iPhone
> On 31 Jul 2016, at 13:21, Jacek Laskowski wrote:
>
> Hi,
>
> Impossible - see
> htt
)._1,x(1)._2,(x.foldLeft(0.0)(_ +
_._2/x.size)),x.foldLeft(0.0)(_ + _._2))).foreach(println)
On Sun, Jul 31, 2016 at 12:15 PM, sri hari kali charan Tummala <
kali.tumm...@gmail.com> wrote:
> Hi All,
>
> I already solved it using DF and Spark SQL; I was wondering how to solve it
> in s
81491906170/ch06.html
SQL:
SELECT DATE,balance,
SUM(balance) OVER (ORDER BY DATE ROWS BETWEEN UNBOUNDED PRECEDING AND
CURRENT ROW) daily_balance
FROM table
Thanks
Sri
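A sketch of the same query through the DataFrame API, assuming a DataFrame df
with DATE and balance columns (on Spark 1.5/1.6 window functions require a
HiveContext):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.sum

// Equivalent of SUM(balance) OVER (ORDER BY DATE ROWS BETWEEN
// UNBOUNDED PRECEDING AND CURRENT ROW): Long.MinValue stands in
// for UNBOUNDED PRECEDING and 0 for CURRENT ROW.
val w = Window.orderBy("DATE").rowsBetween(Long.MinValue, 0)
val withRunningBalance = df.withColumn("daily_balance", sum("balance").over(w))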
On Sun, Jul 31, 2016 at 11:54 AM, Mich Talebzadeh wrote:
> Check also this
> <https://databricks.com/blog/2015/07/15
stering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
> On Sun, Jul 31, 2016 at 9:23 AM, sri hari kali charan Tummala
> wrote:
> > Tried this, no luck. What is the non-empty iterator here?
> >
> > Output:
> > (-987,non-empty iterator)
> >
t;))
  .map(x => (x(0), x(2)))
  .map { case (key, value) =>
    (key, value.toArray.toSeq.sliding(2, 1).map(x => x.sum / x.size)) }
  .foreach(println)
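On the "non-empty iterator" question above: sliding returns an Iterator, and
printing a tuple that contains one just prints the iterator's toString. A
sketch of a fix, assuming a parsed RDD[Array[String]] of already-split lines
(the grouping step is cut off above, so a groupByKey is assumed), emitting one
(key, average) pair per window:

// Keep the key with each window's average and materialize the windows,
// so println shows values like (987,75.0) instead of "non-empty iterator".
parsed.map(x => (x(0), x(2).toDouble))
  .groupByKey()
  .flatMap { case (key, values) =>
    values.toSeq.sliding(2, 1).map(w => (key, w.sum / w.size))
  }
  .foreach(println)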
On Sun, Jul 31, 2016 at 12:03 AM, sri hari kali charan Tummala <
kali.tumm...@gmail.com> wrote:
> Hi All,
>
> I managed to write using
intln)
At the moment my output is:
75.0
-25.0
50.0
-50.0
-100.0
I want it with the key; how do I get the moving-average output based on the key?
987,75.0
987,-25
987,50.0
Thanks
Sri
On Sat, Jul 30, 2016 at 11:40 AM, sri hari kali charan Tummala <
kali.tumm...@gmail.com> wrote:
> for knowledge j
For knowledge, just wondering how to write it up in Scala as a Spark RDD.
Thanks
Sri
On Sat, Jul 30, 2016 at 11:24 AM, Jacek Laskowski wrote:
> Why?
>
> Pozdrawiam,
> Jacek Laskowski
>
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark 2.0 http://bit.ly/m
Thanks, makes sense. Can anyone answer the question below?
http://apache-spark-user-list.1001560.n3.nabble.com/spark-parquet-too-many-small-files-td27264.html
Thanks
Sri
On Tue, Jul 5, 2016 at 8:15 PM, Saisai Shao wrote:
> It does not work to configure local dirs to HDFS. Local dirs
Hi,
It's a space issue; we are currently using /tmp, and at the moment we don't
have any mounted location set up yet.
Thanks
Sri
Sent from my iPhone
> On 5 Jul 2016, at 17:22, Jeff Zhang wrote:
>
> Any reason why you want to set this on hdfs ?
>
>> On Tue, Jul 5, 201
abled=false" --conf
"spark.shuffle.service.enabled=false" --conf "spark.executor.instances=10"
Thanks
Sri
On Sat, Jul 2, 2016 at 2:53 AM, Takeshi Yamamuro
wrote:
> Please also see https://issues.apache.org/jira/browse/SPARK-16188.
>
> // maropu
>
> On Fri, Jul 1,
Thanks Ted. I know how in spark-shell; can we set the same in the spark-sql shell?
If I don't set a Hive context, from my understanding Spark is using its own SQL
and date functions, right? Like, for example, interval?
Thanks
Sri
Sent from my iPhone
> On 21 May 2016, at 08:19, Ted Yu wrote:
using the spark-sql shell, for your information.
Can I set hiveContext.sql in the spark-sql shell, as we do in a traditional
Spark Scala application?
Thanks
Sri
Sent from my iPhone
> On 21 May 2016, at 02:24, Mich Talebzadeh wrote:
>
> So you want to use Hive version 0.14 when using Spark 1
Thank you very much, well documented.
Thanks
Sri
On Wed, Jan 27, 2016 at 8:46 PM, Deenar Toraskar
wrote:
> Sri
>
> Look at the instructions here. They are for 1.5.1, but should also work
> for 1.6
>
>
> https://www.linkedin.com/pulse/running-spark-151-cdh-deenar-tor
spark jar ?
Thanks
Sri
On Wed, Jan 27, 2016 at 7:45 PM, Koert Kuipers wrote:
> If you have yarn you can just launch your spark 1.6 job from a single
> machine with spark 1.6 available on it and ignore the version of spark
> (1.2) that is installed
> On Jan 27, 2016 11:29
Thanks Ted, the trick worked; this feature should be committed in the next Spark
release.
Thanks
Sri
Sent from my iPhone
> On 26 Jan 2016, at 15:49, Ted Yu wrote:
>
> Please take a look at getJDBCType() in:
> sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
>
>
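For context, a sketch of the kind of override getJDBCType enables, assuming the
developer API of Spark 1.5/1.6 (the dialect object and type mapping here are
hypothetical):

import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types._

// Hypothetical dialect: map StringType to TEXT instead of the default;
// Spark consults getJDBCType when it creates the target table over JDBC.
object PostgresTextDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:postgresql")
  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType => Some(JdbcType("TEXT", Types.VARCHAR))
    case _          => None  // fall back to the default mapping
  }
}
JdbcDialects.registerDialect(PostgresTextDialect)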
Scala test class hard coded way?
Thanks
Sri
Sent from my iPhone
> On 6 Jan 2016, at 17:43, Mark Hamstra wrote:
>
> It's not a bug, but a larger heap is required with the new
> UnifiedMemoryManager:
> https://github.com/apache/spark/blob/master/core/src/main/scala/org
Hi Cody,
KafkaUtils.createRDD totally makes sense; now I can run my Spark job once every
15 minutes, extract data out of Kafka, and stop. I rely on the Kafka offsets
for incremental data, am I right? So no duplicate data will be returned.
Thanks
Sri
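A minimal sketch of that batch pattern, assuming the spark-streaming-kafka
artifact and the Spark 1.x / Kafka 0.8-era API, with offsets the job persists
itself (topic name and broker are placeholders): the job reads exactly the
ranges you ask for, so persisting untilOffset and starting the next run from it
avoids duplicates.

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.{KafkaUtils, OffsetRange}

// Offsets carried over from the previous run (persisted by the job itself).
// Reading exactly [fromOffset, untilOffset) per partition makes the extract
// repeatable and duplicate-free across scheduled runs.
val offsetRanges = Array(
  OffsetRange.create("mytopic", 0, 100L, 200L)  // topic, partition, from, until
)
val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")

val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
  sc, kafkaParams, offsetRanges)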
On Fri, Dec 18, 2015 at 2:41 PM, Cody Koeninger
Thanks Sean and Ted, I will wait for 1.6 to be out.
Happy Christmas to all!
Thanks
Sri
On Sat, Dec 12, 2015 at 12:18 PM, Ted Yu wrote:
> Please take a look at SPARK-9078 which allows jdbc dialects to override
> the query for checking table existence.
>
> On Dec 12, 2015, at 7:12
in a later Spark release? I am using Spark 1.5.
>>
>> val sourcedfmode=sourcedf.write.mode("append")
>> sourcedfmode.jdbc(TargetDBinfo.url,TargetDBinfo.table,targetprops)
>>
>> Full Code:-
>>
>> https://github.com/kali786516/ScalaDB/bl
atabase name.
Try(conn.prepareStatement(s"SELECT 1 FROM $table where 1=2")
  .executeQuery().next()).isSuccess }
Thanks
Sri
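For reference, a sketch of what SPARK-9078 enables in 1.6: a dialect can
replace that hard-coded existence probe (the dialect object and URL prefix here
are hypothetical):

import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

// Hypothetical dialect overriding the table-existence probe (Spark 1.6+).
object MyDbDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")
  override def getTableExistsQuery(table: String): String =
    s"SELECT 1 FROM $table WHERE 1=0"  // cheap probe, returns no rows
}
JdbcDialects.registerDialect(MyDbDialect)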
On Wed, Dec 9, 2015 at 10:30 PM, Michael Armbrust
wrote:
> The release date is "as soon as possible". In order to make an Apache
> release we mu
Hi Ted,
Thanks for the info, but there is no particular release date; from my
understanding the package is in testing and no release date has been mentioned.
Thanks
Sri
Sent from my iPhone
> On 9 Dec 2015, at 21:38, Ted Yu wrote:
>
> See this thread:
>
> http://sear
Hi Ted,
It gave an exception; am I following the right approach?
val test=sqlContext.sql("select *, monotonicallyIncreasingId() from kali")
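A sketch of the DataFrame route, which sidesteps the SQL parser; in Spark 1.x
the function lives in org.apache.spark.sql.functions (the kali table name is
from the mail above):

import org.apache.spark.sql.functions.monotonicallyIncreasingId

// Add the id via the DataFrame API instead of the SQL string; the 1.x SQL
// parser may not recognize the function under that camel-case name.
val withId = sqlContext.table("kali").withColumn("id", monotonicallyIncreasingId())
withId.show()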
On Mon, Dec 7, 2015 at 4:52 PM, Ted Yu wrote:
> Have you tried using monotonicallyIncreasingId ?
>
> Cheers
>
> On Mon, Dec 7, 20
Thanks, I found the right function: current_timestamp().
A different question: is there a row_number() function in Spark SQL? Not in the
DataFrame API, just Spark SQL?
Thanks
Sri
Sent from my iPhone
> On 7 Dec 2015, at 15:49, Ted Yu wrote:
>
> Does unix_timestamp() satisfy your needs ?
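On the row_number() question: it does work in plain Spark SQL as a window
function on 1.5/1.6, with the caveat that window functions there require a
HiveContext. A sketch with a hypothetical table and ordering column:

// Window functions in SQL need a HiveContext on Spark 1.5/1.6.
val numbered = hiveContext.sql(
  """SELECT *, row_number() OVER (ORDER BY some_date) AS rn
    |FROM some_table""".stripMargin)
numbered.show()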
Hi Richard,
Thanks; so my take from your discussion is that if we want to pass partition
values explicitly, it has to be written inside the code.
Thanks
Sri
On Sun, Oct 18, 2015 at 7:05 PM, Richard Eggert
wrote:
> If you want to override the default partitioning behavior, you have to do
> so i
Thanks Richard, will give it a try tomorrow...
Thanks
Sri
Sent from my iPhone
> On 10 Oct 2015, at 19:15, Richard Eggert wrote:
>
> You should be able to achieve what you're looking for by using foldByKey to
> find the latest record for each key. If you're relying
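A minimal sketch of that foldByKey pattern, assuming (id, (timestamp, payload))
records; the zero element uses Long.MinValue so any real record wins:

// Keep the record with the newest timestamp for each key.
val records = sc.parallelize(Seq(
  ("a", (10L, "old")), ("a", (20L, "new")), ("b", (5L, "only"))
))
val latest = records.foldByKey((Long.MinValue, ""))(
  (x, y) => if (x._1 >= y._1) x else y)
// latest: ("a",(20,"new")), ("b",(5,"only"))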
data in Oracle.
I am evaluating whether, rather than running batch jobs, I can run Spark
Streaming from the data files to finally write the cleansed data into the
Oracle database. Once the data is consolidated in Oracle, it serves as the
source of truth for external users.
Regards,
Sri Eswari.
On Mon, Sep 21
Hi,
We have a use case where we get data from different systems, and finally the
data will be consolidated into an Oracle database. Is this a valid use case for
Spark? Currently we also don't have any big data component. In case we go with
Spark to ingest data, does it require Hadoop?