Experience building and optimizing ‘big data’ pipelines, architectures, and data sets in cloud-native environments
Ability to work independently and learn new technologies
2+ years of experience building processes supporting data transformation, data structures, metadata, dependency and workload management
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ stores
Proficiency with Python, SQL, Java, or Scala
Experience with data streaming tools and processes (Kafka, Airflow, ELK, Spark, etc.)
Experience with big data solutions such as Athena, Snowflake, Redshift, Redis, Aerospike, Impala, Presto, BigQuery, Hive, etc.
Experience in data modelling, BI solutions development (DWH, ETL), and software development practices such as Agile, CI/CD, and TDD
Experience in real-time analytics
Experience with GCP and BigQuery
Experience building automated validation processes to ensure data integrity (see the illustrative sketch after this list)
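
As a loose illustration of the last requirement above, here is a minimal sketch of an automated data-integrity check in Python (one of the languages listed in the requirements). The record layout, field names, and checks are assumptions made for this example only; the posting does not specify them.

    from collections import Counter
    from typing import Any

    # Hypothetical column names for this sketch; not specified in the posting.
    REQUIRED_FIELDS = ("event_id", "event_time", "user_id")
    PRIMARY_KEY = "event_id"

    def validate_batch(records: list[dict[str, Any]]) -> list[str]:
        """Return a list of human-readable integrity violations found in a batch of records."""
        violations: list[str] = []

        # Null / missing-value check on required columns.
        for i, record in enumerate(records):
            for field in REQUIRED_FIELDS:
                if record.get(field) in (None, ""):
                    violations.append(f"row {i}: missing required field '{field}'")

        # Duplicate primary-key check.
        key_counts = Counter(r.get(PRIMARY_KEY) for r in records)
        for key, count in key_counts.items():
            if key is not None and count > 1:
                violations.append(f"primary key '{key}' appears {count} times")

        return violations

    if __name__ == "__main__":
        sample = [
            {"event_id": "a1", "event_time": "2022-03-14T10:00:00Z", "user_id": "u7"},
            {"event_id": "a1", "event_time": "2022-03-14T10:01:00Z", "user_id": ""},
        ]
        problems = validate_batch(sample)
        if problems:
            raise SystemExit("data integrity check failed:\n" + "\n".join(problems))

In practice a check like this would typically run as a step inside a pipeline orchestrator (for example, an Airflow task) before data is published downstream.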

Please apply as a referral from Ohad A at: ohad29@yahoo.com
Company Industry: Other
Employment type: Full-time, Part-time, Contract, Shifts, Temporary, Other
Workplace policy: Hybrid
Seniority Level: Senior-level
Technology: Node.js, React, Angular, Java, PHP, Python, JavaScript, HTML, CSS, Scala, Vue, Ruby, C#, C++, C, Objective-C, VB, Flutter, Kotlin, Perl, Go, Selenium, SQL, AWS, GCP, Azure, K8s, Jenkins, CircleCI, Terraform, Rust
Company size: 251-1000 Employees
Job created: Mar 14, 2022
