Alouane - Docker Architect
Ref: 211202N002
Location: 10000 Rabat, Morocco
Profile: Architect, DevOps, Developer (33 years old)
Mobility: Remote work only
Status: Freelance soon
Project manager
Ministry of Higher Education and Scientific Research - Rabat, Morocco | Jan 2019 - Jan 2021

Project: Docker Swarm cluster setup
Served as a solution architect to build a secure, production-ready Docker Swarm cluster. The main cluster had to be highly resilient, scalable, easy to replicate (100% configurable and automated), and fully monitored and supervised. The process fit into the following steps:
• Setup of two DMZ zones
o Public: dedicated to Docker Swarm workers hosting front-facing/public services (e.g., the frontal Traefik gateway, public-facing apps)
o Private: dedicated to Docker Swarm managers hosting internal services (e.g., Grafana, Kibana, the Atlassian suite) that must not interfere with the ministry's internal network
• Setup of a persistent storage volume solution (GlusterFS) as the container storage system for high availability
• Setup of a supervision UI (Portainer) to manage all cluster nodes, services, volumes, containers, users and teams
• Setup of a scheduled system prune job (run as a Docker Swarm service) that regularly cleans up unused images, volumes and dangling/unreferenced containers to free node storage space
• Deployment of a keepalived container on each frontal node for high availability, providing failover of a virtual IP address from one node to another
• Deployment of a monitoring UI (Grafana + Prometheus) to view per-service resource consumption, such as:
o Network usage
o Protocol usage (e.g., POST, GET, PUT…)
o Status code (e.g., 2xx, 4xx, 5xx)
o Socket usage
o Uptime and response time
• Release of a stack-deployment best-practices report for the dev and operations teams, covering:
o Stack file and image versioning strategy
o Service routing, DNS and subnet strategy
o Volume binding and service backup strategy
o Database backup strategy (full, differential and transactional backups with automated job scheduling)
o Service health-check strategy
o User namespace and non-root container strategy
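As an illustration, the scheduled prune job described above can be declared as a Swarm stack file along the following lines; the image, interval and command here are assumptions for the sketch, not the original configuration:

```yaml
# Hypothetical stack file: scheduled prune job as a global Swarm service.
version: "3.8"
services:
  prune:
    image: docker:cli
    # Prune unused images/volumes/containers once a day on each node.
    command: sh -c "while true; do docker system prune -f --volumes; sleep 86400; done"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global   # one prune container per node, so every node's storage gets cleaned
```

Running it in `global` mode means each node cleans its own local storage, which is what a prune job needs since images and dangling containers are node-local.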
Project: Docker registry (Docker Registry + Portus)
Served as a solution architect to set up a highly resilient, easy-to-replicate (100% configurable and automated), role-based access control registry. It empowers dev teams to push/pull service images locally for testing, and lets the CI/CD platform push/pull staging- and production-labelled service images. The process fit into the following steps:
• Setup of the Docker registry (open-source version) as a scalable service in the Docker Swarm cluster
• Setup of a supervision UI (Portus) to manage namespaces, repositories, users and team authorisations
• Setup of a Portus background process service to keep data synchronised between the Docker registry and the Portus UI in case of downtime
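A minimal sketch of how Portus fronts the registry as its token-authentication endpoint; hostnames and versions are placeholders, not the original deployment:

```yaml
# Illustrative compose fragment: private registry secured by Portus token auth.
version: "3.8"
services:
  registry:
    image: registry:2
    environment:
      # Portus acts as the token realm: pulls/pushes are authorised per user/team.
      REGISTRY_AUTH_TOKEN_REALM: https://portus.example.local/v2/token
      REGISTRY_AUTH_TOKEN_SERVICE: registry.example.local
      REGISTRY_AUTH_TOKEN_ISSUER: portus
    volumes:
      - registry-data:/var/lib/registry
  portus:
    image: opensuse/portus:2.4
    environment:
      PORTUS_MACHINE_FQDN_VALUE: portus.example.local
volumes:
  registry-data:
```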
Project: Data stream pipeline (Confluent)
Served as a solution architect to set up an easy-to-replicate (100% configurable and automated) data streaming pipeline platform that connects and processes event data in real time between microservices deployed by different dev teams. The process fit into the following steps:
• Setup of ZooKeeper, a critical component of the Confluent platform, to store persistent cluster metadata
• Setup of the messaging broker service (Kafka) in cluster mode
• Setup of the Confluent Schema Registry
• Setup of Confluent Control Center for cluster, broker and topic management and supervision
• Setup of ksqlDB to process real-time data with simple SQL statements
• Release of a topic-normalisation best-practices report for the dev and operations teams
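For illustration, a single-broker sketch of this stack using the public Confluent images; versions, ports and the replication factor of 1 are assumptions for the sketch (the production deployment ran Kafka in cluster mode):

```yaml
# Illustrative single-broker Confluent stack.
version: "3.8"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.0.0
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1  # 1 only because this sketch has a single broker
  schema-registry:
    image: confluentinc/cp-schema-registry:7.0.0
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:9092
```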
Project: Gateway (Traefik)
Served as a solution architect to set up a highly available, easy-to-replicate (100% configurable and automated), secure frontal gateway (Traefik) in cluster mode, supporting:
• Service auto-discovery
• Load balancing
• API control
• Dynamic certificate management
• Orchestrator event consumption via a Socat service (Socat exposes the Docker socket)
• Export of Traefik metrics to Prometheus for service monitoring
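Service auto-discovery in a setup like this is typically driven by labels on each Swarm service; a hedged example with placeholder router, domain and image names:

```yaml
# Illustrative Traefik v2 discovery labels on a Swarm service (names are placeholders).
services:
  myapp:
    image: myorg/myapp:1.0
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.myapp.rule=Host(`myapp.example.local`)"
        - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"  # dynamic certificate management
        - "traefik.http.services.myapp.loadbalancer.server.port=8080"
```

Traefik watches orchestrator events and picks these labels up at deploy time, so routing changes without touching the gateway's own configuration.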
Project: Logging system (EFK)
Served as a solution architect to build a role-based-access, easy-to-replicate (100% configurable and automated) logging system. It indexes service logs in Elasticsearch and empowers the different dev/operations teams to monitor, search and analyse data in real time through the Kibana UI. The process fit into the following steps:
• Setup of the Elasticsearch service in cluster mode, with a snapshot/index backup strategy; each dev team's access is limited to the spaces, indexes and logs of the applications it has deployed
• Setup of a Filebeat service that automatically collects, filters, processes and ships relevant logs from the different microservices to Elasticsearch, with a per-service index lifecycle management strategy
• Setup of the Kibana service to explore, visualise and analyse data
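A sketch of the Filebeat side, assuming the hints-based autodiscover approach and placeholder hosts and policy names; the actual filters and ILM policies are not reproduced here:

```yaml
# Illustrative filebeat.yml fragment: discover container logs, ship to Elasticsearch.
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true   # containers opt in / configure parsing via co.elastic.logs/* labels
output.elasticsearch:
  hosts: ["https://elasticsearch.example.local:9200"]
setup.ilm.enabled: true
setup.ilm.policy_name: "service-logs-policy"  # lifecycle policy applied per service index
```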
Project: Atlassian suite
Served as a solution architect to set up the Atlassian suite as a collaboration tool for the enterprise product development teams. The setup had to be 100% configurable, automated and easy to replicate. The process fit into the following steps:
• Setup of a PostgreSQL DB service backing all Atlassian products, with a backup strategy and performance tuning to support a large team workload
• Setup of an Adminer service for PostgreSQL visualisation and supervision
• Setup of the Jira Software service with a data backup strategy and custom workflow schemes appropriate for a Scrum- and DevOps-driven environment
• Setup of Jira Service Desk with a data backup strategy, custom SLA strategies and web portals
• Setup of Confluence with a data backup strategy and a custom space structure
• Setup of Bitbucket with a data backup strategy and synchronisation with the other Atlassian products
• Setup of Bamboo as the CI/CD platform, in sync with the local Docker registry and the other Atlassian products
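A minimal sketch of the shared database layer behind the suite; the image versions, port and credentials are placeholders:

```yaml
# Illustrative fragment: one PostgreSQL instance backing the Atlassian products,
# plus Adminer for visual supervision.
version: "3.8"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder; use a secret in practice
    volumes:
      - pg-data:/var/lib/postgresql/data   # persisted so backups can snapshot the volume
  adminer:
    image: adminer:4
    ports:
      - "8081:8080"
volumes:
  pg-data:
```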
Project: Legacy app automation (CI/CD)
Served as a DevOps consultant to set up automation pipelines for 12 legacy apps developed with different technologies. All pipelines required an auto-deployment step for staging environments and a manual confirmation step for production deployment. The pipelines were built to support the following frameworks:
• Backends
o Spring Boot (Java)
o Django (Python)
o Symfony (PHP)
o Laravel (PHP)
o Express (Node.js)
• Front end
o Angular (JavaScript)
o React (JavaScript)
• Mobile
o Flutter (Android & iOS) via Firebase App Distribution
o Ionic (Android)
Project: Cursussup platform
Served as a tech lead and architect to build the national bachelor's admission platform, where roughly one million Moroccan candidates each year choose and prioritise their preferred universities, then confirm or decline the proposed offer. The selection system implements the well-known Gale-Shapley algorithm for maximum transparency. The platform has:
• A front office where candidates view the latest news and apply to universities (Angular 9)
• A back office where universities manage selection formulas, the test/assessment scores required to pass, and candidates' registrations (Angular 9)
• A backend with 3 microservices (Spring Boot + Liquibase for DbOps + Swagger for API documentation + Ehcache and Varnish for caching)
o Referentiel: manages all system data and repositories
o Orientation: manages all candidate data and preference orders
o Selection: manages selection formulas and candidate scoring
o The 3 microservices exchange events through Kafka topics
o Each microservice has its own SQL Server database with a backup and recovery strategy
o Each service's logs are shipped via Filebeat and can be analysed through Kibana
o Each service has its own health check and rate limiter
• An SMTP service (AWS SES) with:
o A backoff strategy to stay within service consumption limits
o Bounce-rate and complaint-rate listeners
o A notification fallback in case of:
▪ Hard bounces (full user inbox, nonexistent email address, ISP block)
▪ Complaints
▪ Delivery delays
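The backoff strategy mentioned above can be sketched as an exponential-backoff retry wrapper; `send_fn` is a stand-in for the real SES call (e.g., boto3's `send_email`), and the retry counts and delays are illustrative, not the production values:

```python
import random
import time


def send_with_backoff(send_fn, message, max_retries=5, base_delay=1.0):
    """Retry a throttled send with exponential backoff and jitter.

    In this simplified sketch any exception from `send_fn` is treated
    as throttling; a real implementation would inspect the SES error code.
    """
    for attempt in range(max_retries):
        try:
            return send_fn(message)
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the fallback path
            # Sleep ~1s, ~2s, ~4s, ... with jitter so retries spread out.
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

Jitter matters here: without it, many workers throttled at the same moment would all retry at the same moment and get throttled again.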
Upon each pull-request approval on an environment branch (e.g., staging), a new Docker image gets built and published to the Docker registry (via Bamboo CI/CD), then deployed automatically via Portainer to the appropriate environment (except production, which requires manual approval before deployment).
-Links: ********
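The Gale-Shapley selection mentioned above can be sketched as candidate-proposing deferred acceptance with program capacities; this is a minimal illustration, not the production selection logic (which also involves per-university formulas and scoring):

```python
def gale_shapley(candidate_prefs, program_rank, capacity):
    """Candidate-proposing deferred acceptance (Gale-Shapley).

    candidate_prefs: dict candidate -> ordered list of preferred programs
    program_rank:    dict program -> dict candidate -> rank (lower = better)
    capacity:        dict program -> number of seats
    Returns a stable assignment: dict candidate -> program (or None).
    """
    next_choice = {c: 0 for c in candidate_prefs}  # index of each candidate's next proposal
    accepted = {p: [] for p in capacity}           # tentative admits per program
    free = list(candidate_prefs)
    while free:
        c = free.pop()
        prefs = candidate_prefs[c]
        if next_choice[c] >= len(prefs):
            continue  # candidate exhausted all choices: stays unassigned
        p = prefs[next_choice[c]]
        next_choice[c] += 1
        accepted[p].append(c)
        accepted[p].sort(key=lambda cand: program_rank[p][cand])  # best-ranked first
        if len(accepted[p]) > capacity[p]:
            free.append(accepted[p].pop())  # weakest tentative admit is released
    result = {c: None for c in candidate_prefs}
    for p, admits in accepted.items():
        for c in admits:
            result[c] = p
    return result
```

Deferred acceptance gives the transparency property the platform relied on: no candidate and program that prefer each other over their assigned match are left unmatched.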
Project: Accreditation platform
Served as a tech lead and architect to build the national course accreditation platform, where all establishments and universities (800 entities) must apply for course accreditation before opening enrolment. The ministry teams and the National Quality Agency (NQA) assess each application through a peer-review process for compliance with the standards. The NQA has more than 1,000 experts who continually review applications and are paid monthly for their contributions; the platform computes each expert's performance review and generates the correct monthly payment amount.
More than 1,200 Jira tickets have been processed, and the project is still in active development and maintenance.
The same technologies, workflows and stacks used in the Cursussup platform were deployed for this one.
-Links: ********
Project: Equivalence platform
Served as a tech lead and architect to build the national degree equivalence platform, where students holding a foreign degree apply for an equivalence certificate. The ministry teams and the National Quality Agency assess each application through a blind-review process for compliance with the standards.
The same technologies and stacks as the Cursussup platform were deployed for this one.
-Links: ********
Project manager
Threecomp - Rabat, Morocco | Jan 2018 - present

Project: Asset tracking (ongoing)
Served as a solution architect to build a flexible, easy-to-use asset tracking product. Clients use the service as a SaaS platform with a proprietary mobile device featuring a built-in barcode and RFID scanner; they can customise their internal workflows and data schemas as they go, with role-based access, and perform advanced audit and maintenance jobs. The platform has 3 main components:
• Front office where managers orchestrate jobs and workflows (Angular 12 + state and store management)
• Backend API for processing requests (Spring Boot + Liquibase + Ehcache + Swagger API)
• Mobile app for audit and maintenance jobs (Ionic + Capacitor)
• CI/CD pipelines configured via Bamboo:
o Front office: Node-based image (install modules, test, build release) + Bamboo Docker image build and push to the Docker Hub registry
o Backend: Maven (compile, test, package) + Jib (Docker image build and push to the Docker Hub registry)
o Mobile: CircleCI Android/iOS-based image (install modules, verify, test) + Firebase CLI + Fastlane (generate screenshots, build signed release, push to Beta/Alpha/Production stores)
Project: Docker Swarm cluster setup
Served as a solution architect to build a secure, production-ready Docker Swarm cluster. The cluster had to be highly resilient, scalable and easy to replicate (100% configurable and automated), with monitoring and supervision of all deployed services.
Project manager
Arkiia - Rabat, Morocco | 2018 - 2019

Project: E-Council platform
Served as a project manager to build an E-Council platform similar to clarity.fm: a marketplace that matches consultants and professionals through a new form of support (video conferencing), using the latest technologies:
• Design workflow: InVision
• Task management: Asana (Scrum board)
• WebRTC for video conferencing
• Angular 6 app for the front site
• Angular 6 app for the back-office app
• Payment gateways (PayPal and Payzone)
• Varnish CDN, new cluster build
• Crawler redirection on the CDN edge side (SEO friendly)
• Social network integration
• S3 as the origin hosting solution
• Slim RESTful API
• AWS CodeDeploy + PHPUnit + Travis CI integration for deployment automation from the GitHub repository
-Links: ********
Lead Technical Architect
Audivity - Tucson, United States | Jan 2017 - Jan 2018
A React-based platform that automates the conversion of written content into digital audio, leveraging AI voice reconstruction, script automation, voice-over, editing, publishing and analytics to make things simpler for content creators.
Core content:
• React front/back-office app
• Node.js API (LoopBack 3.x)
• Amplitude.js-based audio player
• CloudFront as the CDN audio-delivery solution
• S3 as the origin hosting solution
• Public audio channel integration (iTunes, SoundCloud)
• AWS Lambda + S3 + Nginx for analytics
• FFmpeg for audio processing
• AWS CodeDeploy + PHPUnit + Travis CI integration for deployment automation from the GitHub repository
-Links: ********
Lead Technical Architect
Arkiia - Rabat, Morocco | Jan 2016 - Jan 2018

Project: Video hosting platform
Served as technical lead for a team of developers building a fully functional video hosting platform, similar to ********, where the public can monetise their videos and earn money from VAST ad impressions.
Core content:
• Angular 2 app for the front site
• Angular 2 app for the back-office app
• Highwinds CDN integration for video, thumbnail and asset hosting
• Crawler redirection on the CDN edge side (SEO friendly)
• Social network integration
• S3 as the origin hosting solution
• CloudFront for back-office app hosting
• AWS Elastic Transcoder for video transcoding
• DMCA handling for content ownership and report protection
• Slim RESTful API
• Videogular2 integration + plugin development
• Lightweight embedded-player solution for other sites
• YieldBird async AdX integration
• AWS Lambda + S3 + Nginx for click-fraud and bot detection
• Daily cron jobs for user, video and channel statistics
• AWS Elastic Load Balancer + Auto Scaling for backend scalability
• AWS S3 + Kubernetes + Vitess cluster management system as the database solution
• AWS CodeDeploy + PHPUnit + Travis CI integration for deployment automation from the GitHub repository
-Links: ********, ********
Project: Single sign-on (SSO) (ongoing)
Served as a solution architect to set up a Keycloak-based SSO managing user identity, access and authorisation for:
• The Atlassian suite (private realm)
• Legacy LDAP-based apps (private realm)
EDUCATION
Master's degree in Software Engineering
Ibn Tofail University – Kénitra, Morocco
June 2013
LINKS
********
********-alouane
PUBLICATIONS
Online multi-instance acquisition for cost optimization in IaaS Clouds
********/ April 2016
Abstract - The Amazon EC2 service offers two different instance purchasing options: users can either run instances under the on-demand plan and pay only for the incurred instance-hours, or rent instances for a long period while taking advantage of significant reductions (up to 60%). One of the major problems facing these users is cost management: how to dynamically combine these two options to serve a sporadic workload, without knowledge of future demands? Many strategies in the literature require either using the exact historic workload as a reference or relying on long-term predictions of future workload. Unlike existing works, we propose two practical online deterministic algorithms for the multi-slope case that incur no more than 1 + 1/(1−α) and 2/(1−α) times, respectively, the cost obtained from an optimal offline algorithm, where α is the maximum saving ratio of a reserved instance offer over the on-demand plan.
A thick-cloud solution for data auditing in a cloud environment
********/
May 2016
Abstract - Protecting and auditing data is not an easy task, especially when it comes to cloud storage. As such, it is essential to design an efficient data auditing scheme, along with a recovery process, while controlling the cloud fees. In the literature, much work has been devoted to cloud storage security, but the majority does not take cloud fees into account or provide a cost analysis, and therefore cannot be deployed in a real cloud environment. For this reason, we introduce a new regenerating-code-based model for cloud data integrity protection that can reliably conduct data auditing operations at the minimum possible cost. We also give some insights into new threat models, such as the data tracking problem, which can cause the total loss of a customer's data, and the data loss insurance problem. The global evaluation of our model shows that our solution reduces the total cost incurred by data check operations by 16% when using on-demand instances, and by up to 40% when using the reserved instance plan, while incurring no more than 4% additional cost when triggering a repair operation, across different parameters.
How can we design a cost-effective model that ensures a high level of data integrity protection?
********/ May 2015
Abstract - Everyone agrees that data is more secure locally than when it is outsourced far away from its owner. But the annual growth of local data implies extra charges for customers, which slows their business down. The cloud computing paradigm comes with new technologies that offer very economical and cost-effective solutions, but at the expense of security. So, designing a lightweight system that can balance cost and data security is really important. Several schemes and techniques have been proposed for securing, checking and repairing data, but unfortunately the majority do not preserve the cost efficiency and profitability of cloud offers. In this paper we try to answer the question: how can we design a model that enables a high level of integrity checking while preserving a minimum cost? We also analyse a new threat model regarding the tracking of a file's fragments during a repair or download operation, which can cause the total loss of a customer's data. The solution given in this paper is based on redistributing fragment locations after every data operation, using a set of random values generated by a chaotic map. Finally, we provide a data loss (and data corruption) insurance approach, based on the user's estimation of the data's importance level, which helps reduce user concerns about data loss.
Virtual Machines Online Acquisition
********.html
June 2018
Abstract - Clouds offer a set of instance acquisition options: either an on-demand plan, where the user pays the full hourly VM price, or a commitment for a duration X, in return for a Y-percent reduction over the total VM reservation period. That decision point has become more difficult over the last couple of years: given the large number of service reservation offers of various durations on the market today, and the fact that not all workloads are easy to predict, the user must find an optimal combination of these offers while maintaining the same availability, consistency and latency as the on-demand solution. In this paper, we introduce two deterministic algorithms for the multi-slope case that incur no more than 1 + 1/(1−α) and 2/(1−α) times, respectively, the cost obtained from an optimal offline algorithm, where α is the maximum saving ratio of a reserved instance offer over the on-demand plan. Our simulation, driven by the Google cluster-usage trace, shows that more than 30% cost savings can be achieved when applied to a real cloud provider like Amazon Web Services, and up to 40% when purchasing instances through a cloud broker service.