Data Engineer - Remote Europe

Humn.ai

Remote - Europe 🇪🇺

We're driving change in the insurance industry by making premiums transparent, giving control to our customers, and really caring about how they're doing. And we'll treat you the same way. No matter your role at Humn, you'll have true flexibility to work how you want, when you want (as proven by our top 4 ranking on Flexa.com!). After all, if you succeed, so do we. We'll support your personal and professional growth every step of the way, freeing you up to let your talent drive change in a long-standing industry. Your work will be challenging, but never to the point of burnout. And if you need a helping hand, we'll be there for you. Ready? It’s time to become Superhumn.

Finally, a job that puts you in the driver's seat.

Prefer to get up early and seize the day? Or perhaps you like to sleep in and smash your projects in the afternoon? Maybe you like to work from the beach? Or the ski slope? However, whenever, and wherever you want to work, we'll do everything we can to support you. All we ask is that you're around for team meetings; the rest is up to you.

What’s important to us?

At Humn, our work ethos is built on the principles of empowerment and autonomy. We believe in the power of open collaboration, open communication & open source software. We need people who understand what it's like to be part of a shared mission and what it takes for a team to succeed. Skills are important, but people are everything. If these words speak to you, you should speak to us!

What kind of things will you be responsible for?

  • Building data pipelines for IoT data
  • Creation of reusable data models
  • Building streaming frameworks that use complex event detection to deliver real-time insights that positively impact the products we offer
  • Design of batch analytics frameworks
  • Release testing and productionising new components

What skills will you need?

  • Multiple years of coding experience, preferably in Scala; Java, Python, and Go are good to have
  • Streaming frameworks such as Kafka or SQS
  • Processing frameworks such as Spark or Flink
  • Deep understanding and knowledge of data modelling (RDBMS, serialisation protocols and NoSQL)
  • Distributed architectures and micro-services
  • DevOps and CI/CD tooling, such as Jenkins, Maven or Gradle, Git, Ansible
  • Strong automation mindset and a passion for root cause analysis
  • Expertise in performance tuning and service monitoring

A great candidate would have experience with:

  • AWS infrastructure & tooling
  • Docker/Kubernetes as a user
  • Modern data warehousing (Hive, Kylin or Presto)
  • Elasticsearch and Prometheus

Send your resume to humn@scalajobs.com