As a member of the Core and Ops team of the phenix project, I took on the following roles:
Data and Software Architect:
▪ Design and industrialization of the use of Cloud Composer (Airflow) to deliver batch workflows:
o Migrating batch workflows from Azkaban to Composer.
o Implementing the CI pipeline with Jenkins/k8s/pants and the CD pipeline with a custom Kubernetes operator to deploy DAGs.
▪ Design and industrialization of the use of Spring Boot to develop REST APIs:
o Implementing the API security layer (JWT and LdapWs).
o Implementing a spring-data-bigtable module from scratch.
o Implementing a spring-data-bigquery component to write Avro data to BigQuery.
o Implementing an Avro SerDe converter in Spring and a Maven plugin to download Avro schemas from a custom schema manager.
o Industrializing the use of several spring-data backends.
o Industrializing the use of Testcontainers for integration tests.
o Mentoring all data teams (60 developers) through the migration from Scalatra to the Spring Boot framework and the adoption of DDD and hexagonal architecture (see the sketch after this list).
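As an illustration of the hexagonal (ports and adapters) layout promoted during that migration, here is a minimal, hypothetical sketch in Scala (the production modules are Spring Boot components; the names below are illustrative):

```scala
// Domain model and port: no framework dependency.
final case class Product(id: String, label: String)

trait ProductRepository {                          // outbound port (driven side)
  def findById(id: String): Option[Product]
}

class GetProduct(repository: ProductRepository) {  // use case exposed to the REST layer
  def byId(id: String): Option[Product] = repository.findById(id)
}

// Infrastructure adapter: the only layer aware of the storage backend
// (a spring-data-bigtable repository in the real modules).
class InMemoryProductRepository(store: Map[String, Product]) extends ProductRepository {
  override def findById(id: String): Option[Product] = store.get(id)
}
```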
▪ Kafka expertise: administration, monitoring, security, etc.
▪ Implementing a Kafka Bigtable sink connector and custom Kafka Connect transformers in Scala (a sketch follows below).
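A minimal sketch of such a single message transform in Scala; the class name and the header it sets are hypothetical, not the transformers actually shipped:

```scala
import java.util
import org.apache.kafka.common.config.ConfigDef
import org.apache.kafka.connect.connector.ConnectRecord
import org.apache.kafka.connect.transforms.Transformation

// Hypothetical transform: copy the record key into a "rowkey" header so a
// downstream Bigtable sink can use it directly as the row key.
class KeyToHeader[R <: ConnectRecord[R]] extends Transformation[R] {

  override def configure(configs: util.Map[String, _]): Unit = ()

  override def apply(record: R): R = {
    if (record.key != null)
      record.headers.addString("rowkey", record.key.toString)
    record
  }

  override def config(): ConfigDef = new ConfigDef()

  override def close(): Unit = ()
}
```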
▪ Implementing a streaming-pipeline-adaptor in Golang using the Sarama library to migrate real-time data pipelines from a custom Avro serialization to Confluent serialization (so they can use the Schema Registry) and to standardize Avro messages (headers/key/value) across data pipelines; the target wire format is sketched below.
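The Confluent wire format the adaptor converges on can be sketched as follows (an illustrative helper in Scala; the actual adaptor is written in Go with Sarama):

```scala
import java.nio.ByteBuffer

object ConfluentFraming {

  private val MagicByte: Byte = 0x0

  /** Wrap an already Avro-serialized payload with the Confluent envelope:
    * one magic byte, the 4-byte schema registry id, then the Avro bytes. */
  def wrap(schemaId: Int, avroPayload: Array[Byte]): Array[Byte] =
    ByteBuffer
      .allocate(1 + 4 + avroPayload.length)
      .put(MagicByte)
      .putInt(schemaId)
      .put(avroPayload)
      .array()

  /** Split a Confluent-framed message back into (schemaId, Avro payload). */
  def unwrap(message: Array[Byte]): (Int, Array[Byte]) = {
    val buffer = ByteBuffer.wrap(message)
    require(buffer.get() == MagicByte, "not a Confluent-framed message")
    val schemaId = buffer.getInt()
    val payload  = new Array[Byte](buffer.remaining())
    buffer.get(payload)
    (schemaId, payload)
  }
}
```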
▪ Leading a project to migrate from Kafka clusters running on IaaS to Strimzi-managed Kafka clusters.
▪ Industrialization of AKHQ to monitor Kafka topics, consumer groups, etc.
▪ Design and industrialization of the use of Spark on GKE:
o Industrializing the use of the Spark operator at Carrefour.
o Contributing to Google's spark-on-k8s-operator GitHub repository (in Golang) to run Spark jobs on Kubernetes and industrializing the operator:
o PR#952: filtering custom resources on specific labels to allow running multiple operator instances on GKE.
o PR#935: exposing container ports to Prometheus scraping.
o PR#914: supporting ingress configuration in the CRD to expose the Spark UI on private networks.
o Extending the operator webhook to mutate pods with specific security features before instantiation.
o Migrating the core phenix pipeline libraries, developed in Scala, from Spark 2.2.1/Kafka 0.8 to Spark 2.4.5/Kafka 2.4, handling the breaking change of managing consumer offsets in Kafka instead of Zookeeper (see the sketch after this list).
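A minimal sketch of the resulting offset handling with the spark-streaming-kafka-0-10 integration, where offsets are committed back to Kafka rather than tracked in Zookeeper (broker, topic, and group names are illustrative):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.{Seconds, StreamingContext}

object OffsetsInKafkaSketch {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("normalizer"), Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker:9092",
      "key.deserializer"   -> classOf[org.apache.kafka.common.serialization.ByteArrayDeserializer],
      "value.deserializer" -> classOf[org.apache.kafka.common.serialization.ByteArrayDeserializer],
      "group.id"           -> "phenix-normalizer",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[Array[Byte], Array[Byte]](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[Array[Byte], Array[Byte]](Seq("sales-events"), kafkaParams)
    )

    stream.foreachRDD { rdd =>
      // Track the offset ranges of this micro-batch, process it, then
      // commit the offsets to Kafka itself (no Zookeeper involved).
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // ... normalize and persist the batch ...
      stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```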
▪ Leading the migration of more than 70 Spark streaming pipelines (data normalizers and persisters) from Mesos/Marathon to GKE.
Technologies: Spark, Scala, sbt/mvn, Golang, Spring Boot (2.x), Java 11/17, Kafka, AKHQ, JWT, Jenkins, Nexus, Artifact Registry, Cloud Composer, Python, Pants (monorepo for Airflow DAGs), Dataproc, Avro.
Kubernetes Architect:
▪ Design and industrialization of the use, securing, and exposure of GKE (Kubernetes managed by GCP) in the data platform:
o Use of Kustomize to deploy and maintain the cluster state.
o Setting up RBAC and enabling Workload Identity.
o Exposing services with NGINX ingress controllers and using proxies and load balancers to wire networks together.
o Defining a functional namespace-splitting strategy.
▪ Design and industrialization of the CI/CD pipeline with Jenkins (Kubernetes plugin enabled) to deploy to different environments.
▪ Migration of workloads and data pipelines from Mesos/Marathon to GKE
▪ Deploying and maintaining K8s webhooks and operators: Strimzi, Spark, Prometheus, ingress, and OPA operators.
▪ Implementing NGINX ingress controllers to expose services and securing the communication from internal and external services to deployments in GKE.
▪ Implementing a monitoring stack with Prometheus + Grafana + Alertmanager.
Technologies: GKE, Kustomize, Kubernetes security, Jenkins, GCP networking, monitoring.
Migration from IBM datacenter to GCP cloud:
▪ Contributing to defining the migration strategy of the data platform to GCP.
▪ Contributing to defining and securing the network connections between legacy IaaS datacenters and the private VPC that hosts all data backends, APIs, etc.
▪ Setting up a one-way Kerberos trust between legacy datacenters and GCP to back up data with DistCp.
▪ Enabling and industrializing the use of GCP services.
▪ Defining methods and architecture to write and read data from BigQuery (see the sketch after this list).
▪ Defining a security and best-practices framework for GCP services and for running applications.
▪ Full automation with Ansible, Terraform, and Google Deployment Manager.
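A minimal sketch of the standardized BigQuery read and write paths with the spark-bigquery connector (project, dataset, column, and bucket names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object BigQueryIoSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("bq-io").getOrCreate()

    // Read: the connector pushes column pruning and filters down to BigQuery.
    val sales = spark.read
      .format("bigquery")
      .option("table", "my-project.sales_dataset.daily_sales")
      .load()

    // Write: data is first staged in a GCS bucket, then loaded into BigQuery.
    sales
      .groupBy("store_id")
      .count()
      .write
      .format("bigquery")
      .option("table", "my-project.sales_dataset.sales_per_store")
      .option("temporaryGcsBucket", "my-staging-bucket")
      .mode("overwrite")
      .save()
  }
}
```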
Audit of Spark jobs implemented by the data scientists, boosting the jobs' execution by a factor of 10:
▪ Providing guidelines of more than 15 points on how to fine-tune the cluster.
▪ Auditing the CNAM Hortonworks clusters and fixing many blocking security issues, mainly related to Kerberos, Ranger, and Knox.
▪ Proving the feasibility of implementing a multihomed Hadoop cluster (hosts with multiple network interfaces) with Kerberos enabled: routing Kerberos traffic through the exposed network interface to communicate with Active Directory.
Development leader of a regulatory project (Mesh Contrat) to address the IFRS 9.2 regulatory requirements using Big Data technologies at Société Générale:
▪ Hortonworks consultant.
▪ Defining the software stack for the project.
▪ Contributing to and leading the developments: Mesh Contrat relies on many technical components: Oozie, Spark (Scala), Spark Streaming, Kafka, Teradata, Sqoop, Elasticsearch, and Kibana.
▪ Implementing the continuous integration/delivery process for the project with Nexus, Jenkins, and Ansible.
▪ Successful production deployment of the project.
Hortonworks Solution Architect at Société Générale:
Hadoop (Hortonworks):
▪ Hadoop security expert: designing and implementing secured solutions for security requirements.
▪ Installation and configuration of a new secured development/integration cluster for projects, with Ranger and Kerberos enabled.
▪ Synchronizing Ranger with LDAPs and configuring SSSD for LDAP authentication.
▪ Full automation of the installation and configuration of cluster components/products with Ansible.
▪ Configuring a backup cluster and providing solutions for disaster recovery strategies.
▪ Configuring and running MirrorMaker to back up streaming data in secured environments (Kafka ACLs, SSL, and Kerberos).
▪ Defining and implementing the migration strategy from Kafka ACLs to Ranger policies and from self-signed certificates to CA-signed certificates for the Kafka SSL listener.
▪ Enabling wire encryption and managing SSL certificates on major Hadoop components.
▪ Installing and configuring Hue on an HA, Kerberized cluster and synchronizing it with LDAP.
▪ Installing and configuring Knox to connect reporting tools such as Tableau to Hive.
▪ Setting up Prometheus for monitoring and alerting on the most critical components: LDAP, filesystem size, etc.
Talend:
▪ Defining and implementing Talend in all Société Générale environments.
▪ Connecting the different TAC instances to the Active Directory group and securing the communication with SSL.
▪ Implementing Ansible playbooks to install TAC and jobservers.
▪ Defining and implementing the logging strategy for Talend projects that use Kafka (SASL).
▪ Defining best practices and security strategies to isolate project jobservers with cgroups and to authenticate each jobserver with Kerberos.
▪ Installing and configuring Talend Data Quality in a Kerberized environment: integration with Kafka for the data dictionary service and with HDFS to import/export data.
Security referent:
▪ Reshaping authentication and authorization methods on the Carrefour data platform by implementing an OpenLDAP cluster with saslauthd enabled to proxy user authentication to the Group LDAP; groups are defined locally on the OpenLDAP.
▪ Installing and securing Cloudera clusters by leveraging the LDAP as the main entry point for authentication and authorization.
▪ Proposing and implementing new methods to allow clients outside the cluster to access HDFS/Hive without needing a Kerberos token: enabling the Knox parcel on the cluster instead of HttpFS (which requires Kerberos) and configuring extra Hive servers with LDAP authentication, all while preserving user impersonation.
▪ Extending a Python client library that communicates with Cloudera Manager to implement the REST calls required to install and configure the Knox parcel.
▪ Providing support and expertise to all data teams and their clients.
▪ Full automation of all deployments with Ansible, orchestrated through Rundeck.