Remote
Data Engineer
Crypto's here to stay. That's one thing we're sure of. But what its future looks like—that's something we need you to create.
We're looking for the brightest and the best. And because we work pretty much everywhere across the Central European time zone, we aren't about to let borders stand in our way.
The role
We are looking for a Data Engineer to help us build and maintain the data backbone of our trading platform. You will be working on high-volume data pipelines, ensuring the reliability and observability of our infrastructure, and preparing the system for upcoming ML initiatives. If you’ve worked with modern data stacks, enjoy building efficient pipelines, and thrive in environments where data precision and scalability matter, this role might be for you.
Responsibilities
- Design and maintain robust data pipelines to support real-time and batch processing.
- Manage and optimize our ClickHouse data warehouse, including cluster performance and schema tuning.
- Ensure data quality, observability, and governance across critical pipelines.
- Collaborate with backend engineers, trading teams, and data stakeholders to align on data requirements.
- Support internal initiatives by building tooling and monitoring for business and technical metrics.
- Take ownership of scheduling and workflow orchestration (Argo, Airflow, etc.) and contribute to CI/CD automation.
Required Skills & Experience
- At least 5 years of professional experience in data engineering or backend infrastructure.
- Proficiency in Python, including object-oriented programming and testing.
- Solid experience with SQL: complex joins, window functions, and performance optimization.
- Hands-on experience with ClickHouse (especially the MergeTree engine family) or similar columnar databases.
- Familiarity with workflow schedulers (e.g., Argo Workflows, Airflow, or Kubeflow).
- Understanding of Kafka architecture (topics, partitions, producers, consumers).
- Comfortable with CI/CD pipelines (GitLab CI, ArgoCD, GitHub Actions).
- Experience with monitoring and BI tools such as Grafana for technical/business dashboards.
Bonus Points
- Experience with AWS services (S3, EKS, RDS).
- Familiarity with Kubernetes and Helm for deployment and scaling.
- Exposure to data quality/observability frameworks.
- Experience supporting ML infrastructure (e.g., feature pipelines, training data workflows).
Like what you hear?
Simply tell us about yourself and your experience so far and upload your CV. That's all there is to it.
Apply now