Hello Partner,

Please check the position below and reply with your updated resume.

Title: Data Engineer
Location: Bellevue, WA
   
   - Design and develop the data platform to efficiently and cost-effectively address various data needs across the business.
   - Build software across the entire data platform, including event-driven data processing, storage, and serving through scalable, highly available APIs, using cutting-edge technologies.
   - Perform exploratory and quantitative analytics, data mining, and discovery.
   - Make the data platform more scalable, resilient, and reliable, and work across the team to put your ideas into action.
   - Implement and refine robust data processing, REST services, RPC (in and out of HTTP), and caching technologies.
   - Work closely with data architects, stream processing specialists, API developers, the DevOps team, and analysts to design systems that can scale elastically.
   - Help build and maintain foundational data products such as, but not limited to, various conformed datasets, Consumer 360, and data marts.
   - Ensure data quality by implementing reusable data quality frameworks.
   - Build processes and tools to maintain machine learning pipelines in production.
   - Develop and enforce data engineering, security, and data quality standards through automation.


Essential requirements 
   
   - Bachelor’s degree in Computer Science or a similar discipline.
   - 5+ years of experience in software engineering.
   - 2+ years of experience in data engineering.
   - Ability to work in a fast-paced, high-pressure, agile environment.
   - Expertise in at least a few programming languages: Java, Scala, Python, or similar.
   - Expertise in building and managing large-volume data processing platforms (both streaming and batch) is a must.
   - Expertise in stream processing systems such as Kafka, Kinesis, Pulsar, or similar.
   - Expertise in building microservices and managing containerized deployments, preferably using Kubernetes.
   - Expertise in distributed data processing frameworks such as Apache Spark, Flink, or similar.
   - Expertise in SQL, Spark SQL, Hive, etc.
   - Expertise in OLAP databases such as Snowflake or Redshift.
   - NoSQL experience (Apache Cassandra, DynamoDB, or similar) is a huge plus.
   - Experience in operationalizing and scaling machine learning models is a huge plus.
   - Experience with a variety of data tools and frameworks (e.g., Apache Airflow, Druid) is a huge plus.
   - Experience with analytics tools such as Looker or Tableau is preferred.
   - Cloud (AWS) experience is preferred.
   - Ability to learn and teach new languages and frameworks.
   - Excellent data analytics skills.
   - Direct-to-consumer digital business experience is preferred.
   - Digital advertising tech experience is a huge plus.

Regards,

ARUN | Sr. Technical Recruiter

KLNtek

Desk: 626-414-4522 | Mobile: 626-346-9382

[email protected]


