Re: [Spark on Kubernetes]: Seeking Guidance on Handling Persistent Executor Failures

2024-02-19 Thread Sri Potluri
engineers and scientists to focus on the application logic without needing to implement and test infrastructure resilience mechanisms. Thank you again for your time and assistance. Best regards, Sri Potluri On Mon, Feb 19, 2024 at 5:03 PM Mich Talebzadeh wrote: > Went through your is

[Spark on Kubernetes]: Seeking Guidance on Handling Persistent Executor Failures

2024-02-19 Thread Sri Potluri
workaround, but it would be preferable to have a more integrated solution. I appreciate any guidance, insights, or feedback you can provide on this matter. Thank you for your time and support. Best regards, Sri P

India Scala & Big Data Job Referral

2023-12-21 Thread sri hari kali charan Tummala
a job yet. I am seeking referrals within product firms (preferably non-consulting) in India that work with Flink, Spark, Scala, Big Data, or in the fields of ML & AI. Can someone assist me with this? Thanks Sri

Spark Scala Contract Opportunity @USA

2022-11-10 Thread sri hari kali charan Tummala
Hi All, Is anyone looking for a Spark Scala contract role in the USA? A company called Maxonic has an open Spark Scala contract position (100% remote) in the USA; if anyone is interested, please send your CV to kali.tumm...@gmail.com. Thanks & Regards Sri Tummala

Big Data Contract Roles ?

2022-09-14 Thread sri hari kali charan Tummala
Hi Flink Users / Spark Users, Is anyone hiring for corp-to-corp contract Big Data Spark Scala or Flink Scala roles? Thanks Sri

unsubscribe

2020-06-27 Thread Sri Kris
Sent from Mail for Windows 10

Re: sql to spark scala rdd

2016-08-02 Thread Sri
Makes sense, thanks. Thanks Sri Sent from my iPhone > On 2 Aug 2016, at 03:27, Jacek Laskowski <ja...@japila.pl> wrote: > > Congrats! > > Whenever I was doing foreach(println) in the past I'm .toDF.show these > days. Give it a shot and you'll experience the feeling your

Re: sql to spark scala rdd

2016-08-01 Thread sri hari kali charan Tummala
rdpress.com/2014/06/21/calculate-running-sums/ Thanks Sri On Mon, Aug 1, 2016 at 12:07 AM, Sri <kali.tumm...@gmail.com> wrote: > Hi , > > I solved it using spark SQL which uses similar window functions mentioned > below , for my own knowledge I am trying to solve using S

Re: sql to spark scala rdd

2016-08-01 Thread Sri
Hi, I solved it using Spark SQL, which uses the window functions mentioned below; for my own knowledge I am trying to solve it using the Scala RDD API, which I am unable to do. What function in Scala supports a window like SQL's UNBOUNDED PRECEDING AND CURRENT ROW? Is it sliding? Thanks Sri
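
A minimal Scala RDD sketch of that running total (sliding gives fixed-size windows, not an unbounded-preceding sum). The dates and values are illustrative, and coalesce(1) serializes the computation, so this only suits modest data sizes:

    val rows = sc.parallelize(Seq(
      ("2016-07-01", 100.0), ("2016-07-02", -25.0), ("2016-07-03", 50.0)))
    val running = rows.sortBy(_._1).coalesce(1)  // keep date order, one partition
      .mapPartitions { it =>
        var sum = 0.0
        it.map { case (d, b) => sum += b; (d, b, sum) }  // (date, balance, daily_balance)
      }
    running.collect().foreach(println)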

Re: sql to spark scala rdd

2016-08-01 Thread Sri
NBOUNDED PRECEDING >>>>>>>> AND >>>>>>>> CURRENT ROW) daily_balance >>>>>>>> FROM table Thanks Sri Sent from my iPhone > On 31 Jul 2016, at 13:21, Jacek Laskowski <ja...@japila.pl> wrote: > > Hi, > > Imp

Re: sql to spark scala rdd

2016-07-31 Thread sri hari kali charan Tummala
)._1,x(1)._2,(x.foldLeft(0.0)(_ + _._2/x.size)),x.foldLeft(0.0)(_ + _._2))).foreach(println) On Sun, Jul 31, 2016 at 12:15 PM, sri hari kali charan Tummala < kali.tumm...@gmail.com> wrote: > Hi All, > > I already solved it using DF and spark sql I was wondering how to solve in > s

Re: sql to spark scala rdd

2016-07-31 Thread sri hari kali charan Tummala
81491906170/ch06.html Sql:- SELECT DATE,balance, SUM(balance) OVER (ORDER BY DATE ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) daily_balance FROM table Thanks Sri On Sun, Jul 31, 2016 at 11:54 AM, Mich Talebzadeh <mich.talebza...@gmail.com > wrote: > Check also this > <http

Re: sql to spark scala rdd

2016-07-31 Thread sri hari kali charan Tummala
.0 http://bit.ly/mastering-apache-spark > Follow me at https://twitter.com/jaceklaskowski > > > On Sun, Jul 31, 2016 at 9:23 AM, sri hari kali charan Tummala > <kali.tumm...@gmail.com> wrote: > > tried this, no luck; what is a non-empty iterator here? > > > > OP:- &

Re: sql to spark scala rdd

2016-07-31 Thread sri hari kali charan Tummala
t;)) .map(x => (x(0),x(2))) .map { case (key,value) => (key,value.toArray.toSeq.sliding(2,1).map(x => x.sum/x.size))}.foreach(println) On Sun, Jul 31, 2016 at 12:03 AM, sri hari kali charan Tummala < kali.tumm...@gmail.com> wrote: > Hi All, > > I managed to write using

Re: sql to spark scala rdd

2016-07-31 Thread sri hari kali charan Tummala
intln) at the moment my output:- 75.0 -25.0 50.0 -50.0 -100.0 With keys, how do I get the moving-average output per key? 987,75.0 987,-25 987,50.0 Thanks Sri On Sat, Jul 30, 2016 at 11:40 AM, sri hari kali charan Tummala < kali.tumm...@gmail.com> wrote: > for knowledge j
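
A hedged sketch of a per-key moving average with sliding, assuming the data shape (key, value); groupByKey pulls each key's values onto one executor, so it assumes per-key data fits in memory:

    val data = sc.parallelize(Seq((987, 100.0), (987, 50.0), (987, 150.0), (987, -50.0)))
    val movingAvg = data.groupByKey().flatMapValues { vs =>
      vs.toSeq.sliding(2, 1).map(w => w.sum / w.size)  // window of 2, step 1
    }
    movingAvg.collect().foreach(println)  // e.g. (987,75.0), (987,100.0), (987,50.0)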

Re: sql to spark scala rdd

2016-07-30 Thread sri hari kali charan Tummala
For knowledge, just wondering how to write it in Scala or with Spark RDDs. Thanks Sri On Sat, Jul 30, 2016 at 11:24 AM, Jacek Laskowski <ja...@japila.pl> wrote: > Why? > > Pozdrawiam, > Jacek Laskowski > > https://medium.com/@jaceklaskowski/ > Mastering Apache Spark

Re: spark local dir to HDFS ?

2016-07-05 Thread sri hari kali charan Tummala
Thanks, makes sense. Can anyone answer the question below? http://apache-spark-user-list.1001560.n3.nabble.com/spark-parquet-too-many-small-files-td27264.html Thanks Sri On Tue, Jul 5, 2016 at 8:15 PM, Saisai Shao <sai.sai.s...@gmail.com> wrote: > It is not worked to configure l

Re: spark local dir to HDFS ?

2016-07-05 Thread Sri
Hi, it's a space issue: we are currently using /tmp, and at the moment we don't have any mounted location set up yet. Thanks Sri Sent from my iPhone > On 5 Jul 2016, at 17:22, Jeff Zhang <zjf...@gmail.com> wrote: > > Any reason why you want to set this on hdfs ? > >> On Tue
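
For reference, spark.local.dir must point at local-filesystem paths (HDFS is not supported for shuffle scratch space); a hedged sketch where the mount points and application class are placeholders:

    spark-submit \
      --conf "spark.local.dir=/mnt/disk1/spark-tmp,/mnt/disk2/spark-tmp" \
      --class com.example.MyApp myapp.jar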

Re: spark parquet too many small files ?

2016-07-02 Thread sri hari kali charan Tummala
abled=false$" --conf "spark.shuffle.service.enabled=false" --conf "spark.executor.instances=10" Thanks Sri On Sat, Jul 2, 2016 at 2:53 AM, Takeshi Yamamuro <linguin@gmail.com> wrote: > Please also see https://issues.apache.org/jira/browse/SPARK-16188. > > // ma

Re: set spark 1.6 with Hive 0.14 ?

2016-05-21 Thread Sri
Thanks Ted, I know how in spark-shell; can we set the same in the spark-sql shell? If I don't set a Hive context, from my understanding Spark uses its own SQL and date functions, right? Like, for example, interval? Thanks Sri Sent from my iPhone > On 21 May 2016, at 08:19, Ted Yu <yuzhih...@gma

Re: set spark 1.6 with Hive 0.14 ?

2016-05-21 Thread Sri
spark-sql shell, for your information. Can I use hiveContext.sql in the spark-sql shell, as we do in a traditional Spark Scala application? Thanks Sri Sent from my iPhone > On 21 May 2016, at 02:24, Mich Talebzadeh <mich.talebza...@gmail.com> wrote: > > So you want to use hive version
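
For reference, a hedged sketch of pointing Spark 1.6 at a Hive 0.14 metastore using the documented spark.sql.hive.metastore.* settings (not necessarily what was suggested in this thread):

    spark-sql \
      --conf spark.sql.hive.metastore.version=0.14.0 \
      --conf spark.sql.hive.metastore.jars=maven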

Re: how to run latest version of spark in old version of spark in cloudera cluster ?

2016-01-27 Thread sri hari kali charan Tummala
spark jar ? Thanks Sri On Wed, Jan 27, 2016 at 7:45 PM, Koert Kuipers <ko...@tresata.com> wrote: > If you have yarn you can just launch your spark 1.6 job from a single > machine with spark 1.6 available on it and ignore the version of spark > (1.2) that is installed > On
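
A hedged sketch of Koert's suggestion: unpack Spark 1.6 on an edge node and submit to the existing YARN cluster, independent of the cluster's bundled Spark 1.2. Paths and the application class are placeholders:

    export HADOOP_CONF_DIR=/etc/hadoop/conf
    /opt/spark-1.6.0/bin/spark-submit \
      --master yarn --deploy-mode cluster \
      --class com.example.MyApp myapp.jar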

Re: how to run latest version of spark in old version of spark in cloudera cluster ?

2016-01-27 Thread sri hari kali charan Tummala
Thank you very much, well documented. Thanks Sri On Wed, Jan 27, 2016 at 8:46 PM, Deenar Toraskar <deenar.toras...@gmail.com> wrote: > Sri > > Look at the instructions here. They are for 1.5.1, but should also work > for 1.6 > > > https://www.linkedin.com/pulse/r

Re: org.netezza.error.NzSQLException: ERROR: Invalid datatype - TEXT

2016-01-26 Thread Sri
Thanks Ted, the trick worked; this feature should be committed in the next Spark release. Thanks Sri Sent from my iPhone > On 26 Jan 2016, at 15:49, Ted Yu <yuzhih...@gmail.com> wrote: > > Please take a look at getJDBCType() in: > sql/core/src/main/scala/org/apache/spark/sql/jdbc/Pos
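
For reference, a hedged Scala sketch of the getJDBCType override via a custom JdbcDialect (available since Spark 1.4); the URL prefix and VARCHAR length are assumptions:

    import java.sql.Types
    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
    import org.apache.spark.sql.types.{DataType, StringType}

    // map StringType to VARCHAR instead of the unsupported TEXT
    object NetezzaDialect extends JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:netezza")
      override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
        case StringType => Some(JdbcType("VARCHAR(4000)", Types.VARCHAR))
        case _          => None
      }
    }
    JdbcDialects.registerDialect(NetezzaDialect)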

Re: spark 1.6 Issue

2016-01-06 Thread Sri
in Scala test class hard coded way? Thanks Sri Sent from my iPhone > On 6 Jan 2016, at 17:43, Mark Hamstra <m...@clearstorydata.com> wrote: > > It's not a bug, but a larger heap is required with the new > UnifiedMemoryManager: > https://github.com/apache/spark/blob/master/co
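
A hedged sketch for local Spark 1.6 tests; spark.testing.memory is an internal knob read by UnifiedMemoryManager, so prefer simply giving the test JVM a larger heap when possible. The size shown is illustrative:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("test")
      .set("spark.testing.memory", (512 * 1024 * 1024).toString)  // 512 MB
    val sc = new SparkContext(conf)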

Re: how to turn off spark streaming gracefully ?

2015-12-18 Thread sri hari kali charan Tummala
Hi Cody, KafkaUtils.createRDD totally makes sense; now I can run my Spark job once every 15 minutes, extract data out of Kafka, and stop. I rely on Kafka offsets for incremental data, am I right? So no duplicate data will be returned. Thanks Sri On Fri, Dec 18, 2015 at 2:41 PM, Cody Koeninger
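
A hedged sketch of the batch pull with the 0.8 connector (spark-streaming-kafka); broker, topic, and offsets are placeholders. Note that createRDD commits nothing, so the application must persist each run's untilOffset itself to avoid re-reading data:

    import kafka.serializer.StringDecoder
    import org.apache.spark.streaming.kafka.{KafkaUtils, OffsetRange}

    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    // offsets saved by the previous run (illustrative values)
    val ranges = Array(OffsetRange("mytopic", 0, fromOffset = 1000L, untilOffset = 2000L))
    val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
      sc, kafkaParams, ranges)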

Re: spark data frame write.mode("append") bug

2015-12-12 Thread sri hari kali charan Tummala
com/kali786516/ScalaDB/blob/master/src/main/java/com/kali/db/SaprkSourceToTargetBulkLoad.scala >> >> Spring Config File:- >> >> https://github.com/kali786516/ScalaDB/blob/master/src/main/resources/SourceToTargetBulkLoad.xml >&

Re: Release data for spark 1.6?

2015-12-12 Thread sri hari kali charan Tummala
Thanks Sean and Ted, I will wait for 1.6 to be out. Happy Christmas to all! Thanks Sri On Sat, Dec 12, 2015 at 12:18 PM, Ted Yu <yuzhih...@gmail.com> wrote: > Please take a look at SPARK-9078 which allows jdbc dialects to override > the query for checking table existence. > &g

Re: Release data for spark 1.6?

2015-12-12 Thread sri hari kali charan Tummala
me. Try(conn.prepareStatement(s"SELECT 1 FROM $table where 1=2" ).executeQuery().next()).isSuccess } Thanks Sri On Wed, Dec 9, 2015 at 10:30 PM, Michael Armbrust <mich...@databricks.com> wrote: > The release date is "as soon as possible". In order to make an A
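
A self-contained sketch of that existence check, assuming a plain java.sql connection (jdbcUrl assumed defined); the query succeeds only if the table exists, and WHERE 1=2 keeps it from returning rows:

    import java.sql.DriverManager
    import scala.util.Try

    val conn = DriverManager.getConnection(jdbcUrl)
    def tableExists(table: String): Boolean =
      Try(conn.prepareStatement(s"SELECT 1 FROM $table WHERE 1=2")
        .executeQuery().next()).isSuccess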

Re: Release data for spark 1.6?

2015-12-09 Thread Sri
Hi Ted, Thanks for the info, but there is no particular release date; from my understanding the package is in testing, and no release date has been mentioned. Thanks Sri Sent from my iPhone > On 9 Dec 2015, at 21:38, Ted Yu <yuzhih...@gmail.com> wrote: > > See this thread: >

Re: spark sql current time stamp function ?

2015-12-07 Thread Sri
Thanks, I found the right function: current_timestamp(). Different question: is there a row_number() function in Spark SQL? Not in the DataFrame API, just Spark SQL? Thanks Sri Sent from my iPhone > On 7 Dec 2015, at 15:49, Ted Yu <yuzhih...@gmail.com> wrote: > > Does unix_time
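
For reference, row_number() does work in Spark 1.5/1.6 SQL when Hive support is available (e.g. in the spark-sql shell); a hedged example where the table and columns are placeholders:

    SELECT txn_id, amount,
           row_number() OVER (PARTITION BY account ORDER BY txn_date DESC) AS rn
    FROM transactions;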

Re: spark sql current time stamp function ?

2015-12-07 Thread sri hari kali charan Tummala
s > > On Mon, Dec 7, 2015 at 7:56 AM, Sri <kali.tumm...@gmail.com> wrote: > >> Thanks , I found the right function current_timestamp(). >> >> different Question:- >> Is there a row_number() function in spark SQL ? Not in Data frame just >> spark SQL? >>

Re: Pass spark partition explicitly ?

2015-10-18 Thread sri hari kali charan Tummala
Hi Richard, Thanks; so my take from your discussion is that if we want to pass partition values explicitly, it has to be written inside the code. Thanks Sri On Sun, Oct 18, 2015 at 7:05 PM, Richard Eggert <richard.egg...@gmail.com> wrote: > If you want to override the default partitioning behav
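
A minimal Scala sketch of encoding an explicit partitioning rule in code via a custom Partitioner; the modulo mapping and partition count are illustrative:

    import org.apache.spark.Partitioner

    class ExplicitPartitioner(override val numPartitions: Int) extends Partitioner {
      override def getPartition(key: Any): Int = {
        val h = key.hashCode % numPartitions
        if (h < 0) h + numPartitions else h  // keep the index non-negative
      }
    }

    // pairRdd: RDD[(String, Int)], assumed defined
    val partitioned = pairRdd.partitionBy(new ExplicitPartitioner(8))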

Re: Create hashmap using two RDD's

2015-10-10 Thread Sri
Thanks Richard, will give it a try tomorrow... Thanks Sri Sent from my iPhone > On 10 Oct 2015, at 19:15, Richard Eggert <richard.egg...@gmail.com> wrote: > > You should be able to achieve what you're looking for by using foldByKey to > find the latest record for each key.
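
A hedged sketch of Richard's foldByKey suggestion, assuming records of shape (key, (timestamp, payload)):

    // records: RDD[(String, (Long, String))], assumed defined
    // the zero value loses to any real timestamp, so each key keeps its newest record
    val latest = records.foldByKey((Long.MinValue, ""))(
      (a, b) => if (a._1 >= b._1) a else b)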

Spark Ingestion into Relational DB

2015-09-21 Thread Sri
Hi, We have a use case where we get data from different systems, and finally the data will be consolidated into an Oracle database. Is Spark a valid choice for this scenario? Currently we don't have any big data components. In case we go with Spark to ingest data, does it require
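
For the ingestion step itself, a hedged sketch of appending a DataFrame to Oracle over JDBC; the URL, credentials, and table name are placeholders:

    import java.util.Properties

    val props = new Properties()
    props.setProperty("user", "app_user")
    props.setProperty("password", "secret")
    df.write.mode("append")
      .jdbc("jdbc:oracle:thin:@//dbhost:1521/ORCL", "STAGING_TABLE", props)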