We have a current opportunity for a Data Scientist/Engineer with a major Telecoms company on a contract basis. The position will be based in Brussels.
As a Data Scientist/Engineer, you will play a key role in preparing the infrastructure and data used to deliver high-quality data products. You will help design, develop and maintain the data pipelines that deliver insights.
You will collaborate with the other data engineers and data scientists of the Advanced Analytics team to create the simplest effective data landscape, improving delivery speed for future AI use cases.
You will be trusted to:
* Conceive and build data architectures
* Participate in the short/mid/long term vision of the overall system
* Simplify & optimize existing pipelines if needed
* Execute ETL (extract/transform/load) processes from complex and/or large data sets
* Ensure data is easily accessible and performs as required, even at high scale
* Participate in the architecture and planning of the big data platform to optimize the ecosystem's performance
* Create large data warehouses fit for further reporting or advanced analytics
* Collaborate with machine learning engineers for the implementation, deployment, scheduling and monitoring of different solutions
* Ensure robust CI/CD processes are in place
* Promote DevOps best practices in the team
Your profile:
* You're quality-oriented
* You are multi-disciplined, able to work with diverse APIs and to understand multiple languages well enough to work with them
* You excel at analysing and solving problems
* You're open-minded and collaborative, a team player ready to adapt to changing needs
* You're curious about new techniques and tools and eager to keep learning
* You're committed to delivering, pragmatic and solution-oriented
* Experience in telecom and/or financial sector is a plus
* Experience with an agile way of working is a plus
* Languages: very good English (reading, writing, speaking) is a must
Technical knowledge in:
* Data pipeline management
* Cluster management
* Workflow management (e.g. Oozie, Airflow)
* Management of SQL and NoSQL databases
* Large file storage (e.g. HDFS, data lakes, S3, Blob storage)
* Strong knowledge of Scala and Python
* Strong knowledge of and experience in Spark (Scala and PySpark)
* Strong knowledge of CI/CD concepts
* Stream processing technologies such as Kafka, Kinesis and Elasticsearch
* Good knowledge of a cloud environment
* High level understanding of data science concepts
* Knowledge of a data visualisation framework such as Qlik Sense is a plus
For further information about this position please apply.