Team: The Core Platform team is responsible for the data, infrastructure, messaging, and services platform that powers Sift’s online systems. We make sure these systems are available and performant at all times to serve our customers, and in the event of an outage or failure we have practiced plans for recovery. These large, complex systems require constant vigilance to meet these goals. Our R&D organization consists of over 100 people, 25+ of whom are based in the Kyiv R&D office.

Tech stack:
- GCP
- Terraform
- GKE
- Vault
- Jenkins
- Snowflake
- Java 11
- Python 3

Other Sift products’ technical stack: AWS, Hadoop, Spark, Apache Airflow, Ruby 2.7, Ruby on Rails.

We use Scrum with 2-week sprints. We are finishing our migration from AWS to GCP.

Opportunities for you:
- Professional growth: quarterly Growth Cycles instead of performance reviews;
- Experience: knowledge sharing through biweekly Tech Talks sessions. You will learn how to build projects that handle petabytes of data with low latency and high fault tolerance;
- Hybrid work approach: you can choose where you work best, remotely or in the office.

What you’ll do:
- Build immutable infrastructure and multi-AZ/multi-region fault-tolerant systems that are anti-fragile;
- Multi-region deployment: deploy a Bigtable cluster that spans multiple regions (for example, making a specific customer stick to a specific region via sticky sessions at the region level);
- Make local development and testing as fast and painless as possible;
- Create dynamic environments (a complete environment for a specific service that talks to other environments);
- Build a bot for everything from deployment to monitoring via Slack.
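To make the "sticky sessions at region level" idea concrete, here is a minimal sketch of one common approach: derive a stable home region from the customer id, with explicit overrides for cases like data-residency requirements. All names, the region list, and the routing scheme are hypothetical illustrations, not Sift's actual code.

```python
# Illustrative sketch only; names and region list are hypothetical, not Sift's code.
import hashlib
from typing import Dict, Optional

REGIONS = ["us-east1", "us-west1", "europe-west1"]  # assumed region list

def home_region(customer_id: str, overrides: Optional[Dict[str, str]] = None) -> str:
    """Return a stable home region for a customer.

    Explicit overrides (e.g. data-residency requirements) win; otherwise a
    stable hash of the customer id picks the region, so the same customer
    always routes to the same region and its traffic stays region-local.
    """
    if overrides and customer_id in overrides:
        return overrides[customer_id]
    # Use sha256, not Python's randomized hash(), so routing survives restarts.
    digest = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
    return REGIONS[digest % len(REGIONS)]
```

With this scheme, `home_region("acme")` returns the same region in every call and every process, and a customer moves regions only when the overrides map is changed.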
Team: The Data Platform team is responsible for making Sift’s data easy to use, understand, and communicate. This team ensures the availability, correctness, and data-privacy compliance of information critical to Sift’s day-to-day operations. Our customers include not just Sift’s data science product teams, but also our sales, services, and business operations teams. We are excited about our plans to build our next-generation data analytics solution. Our R&D organization consists of over 100 people, 35 of whom are based in the Kyiv R&D office. We are going to have 3 software engineers in the Ukraine R&D team who will be part of our Data Platform team.

Data Platform technical stack:
- Java 11
- Python 3
- GCP
- DataProc, Spark
- Snowflake
- Apache Airflow
- BigTable
- BigQuery

Other Sift products’ technical stack: Hadoop, Flink, AWS, Ruby, Ruby on Rails, FE: React.js.

We use Scrum with 2-week sprints.

Opportunities for you:
- Professional growth: quarterly Growth Cycles instead of performance reviews;
- Experience: knowledge sharing through biweekly Tech Talks sessions. You will learn how to build projects that handle petabytes of data with low latency and high fault tolerance;
- Business trips and the annual Sift Summit;
- Hybrid work approach: you can choose where you work best, remotely or in the office.

What you’ll do:
- As a senior software engineer on Sift’s Data Platform team, build data warehousing and business intelligence systems that empower engineers, data scientists, and analysts to extract insights from data;
- Design and build petabyte-scale systems for high availability, high throughput, data consistency, security, and end-user privacy, defining our next generation of data analytics tooling;
- Do data modelling and ETL enhancements to improve efficiency and data quality.
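As a small illustration of the "ETL enhancements to improve data quality" idea, here is a minimal data-quality gate of the kind a pipeline might run before loading a batch into the warehouse. This is a generic sketch with hypothetical field names, not Sift's actual pipeline.

```python
# Illustrative sketch, not Sift's pipeline: reject rows missing required fields
# before load, and report the reject rate so the pipeline can alert on it.
from dataclasses import dataclass
from typing import Iterable, List, Tuple

@dataclass
class QualityReport:
    total: int
    rejected: int

    @property
    def reject_rate(self) -> float:
        return self.rejected / self.total if self.total else 0.0

def validate_batch(
    rows: Iterable[dict],
    required_fields: Tuple[str, ...] = ("event_id", "customer_id", "timestamp"),
) -> Tuple[List[dict], QualityReport]:
    """Split a batch into loadable rows and rows missing required fields."""
    good: List[dict] = []
    bad: List[dict] = []
    for row in rows:
        if all(row.get(f) is not None for f in required_fields):
            good.append(row)
        else:
            bad.append(row)
    return good, QualityReport(total=len(good) + len(bad), rejected=len(bad))
```

A real pipeline would typically route the rejects to a quarantine table and fail the run if `reject_rate` crosses a threshold.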
Team: Our API Platform team is responsible for several core functions of Sift’s Digital Trust & Safety platform: bulk scoring and routing, up-to-the-minute reporting on business metrics, and key customer integration points, all of which work together to deliver a seamless, accurate, and fast solution for identifying and stopping fraud at scale. We combine customizable tools with powerful infrastructure to analyze and route all manner of transactions in our ongoing effort to build trust on the Internet. If you enjoy planning for scale, drawing on many engineering disciplines to solve difficult problems, and building tremendous customer value in the process, this team is for you.

Technical stack:
- Java 11
- GCP
- Kubernetes
- BigTable
- Kafka
- Dropwizard
- Postgres
- MongoDB
- gRPC

Opportunities for you:
- Experience: work on a high-load platform and take on technical challenges such as improving API latency at 30K requests per second;
- Professional growth: quarterly Growth Cycles instead of performance reviews;
- Knowledge sharing: biweekly Tech Talks sessions. You will learn how to build projects that handle petabytes of data with high fault tolerance;
- Culture of innovation: you can try out your ideas at our annual Hackathon;
- Continuous learning: our “Learning Marathon” initiative, where people choose a technology and dive deeper together, sharing progress in regular syncs.

What you’ll do:
- Build highly scalable, distributed services that can handle hundreds of millions of events per day;
- Partner with product management to help scope and shape project requirements;
- Implement engineering solutions to address complex customer needs at scale;
- Collaborate with other engineers within the API Platform team as well as across other engineering teams;
- Help evolve and improve our engineering practices.
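A common building block in APIs operating at this scale (on the order of 30K requests per second) is a token-bucket rate limiter, which absorbs short bursts while capping sustained throughput. The sketch below is a generic, deterministic illustration of the technique, not Sift's implementation; timestamps can be injected to keep the logic testable.

```python
# Illustrative token-bucket sketch, not Sift's code. The bucket refills at a
# fixed rate and holds at most `capacity` tokens; each request spends one.
import time
from typing import Optional

class TokenBucket:
    def __init__(self, rate: float, capacity: float, now: Optional[float] = None):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.last = time.monotonic() if now is None else now

    def allow(self, now: Optional[float] = None) -> bool:
        """Consume one token if available, refilling based on elapsed time."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

For example, a bucket with `rate=10, capacity=2` allows a 2-request burst, rejects a third simultaneous request, and admits another one 0.1 s later once a token has refilled.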