Updated: 06/10/2024


Lead Full-Stack, Cloud & DevOps Engineer
Badalona, Spain
Worldwide
Skills
AWS EKS, AWS Lambda, Adobe Suite, CloudFront, DynamoDB, ElastiCache, EC2, Amazon S3, SQS, Amazon VPC, AWS, Ansible, Apache, macOS, APIs, AI, automated tests, Autoscaling, Backend, barcode scanners, Bash, Blockchain, Bootstrap, build processes, C++ (C++11), Cloud infrastructure, CloudFormation, CloudWatch, Confluence, Containerization, CI/CD, CRUD, customer data, Cypress.io, Data Services, databases, Debian, DevOps, Digital Signal Processing, Django, Docker, Doctrine, DNS, ECMAScript, Elastic Beanstalk, Ethereum, file systems, Flask, Git, GitHub, GitLab, GitLab CI, Grafana, IAM, development environments, Jira, jQuery, JavaScript, Keras, Kubernetes, Laravel, Linux, Load Balancing, log files, versioning, MariaDB, microservices, MongoDB, MySQL, NFT, Next.js, NGINX, Node.js, NumPy, PHP, Pandas, PHPUnit, PostgreSQL, Prometheus, Puppet, Puppeteer, Python 3, Raspberry Pi, React, Redis, relational databases, REST APIs, Route53, Scrum, serverless, Shopware, code quality, Solidity, subnetting, Symfony, Sass, Tailwind, TensorFlow, Terraform, three.js, Twig, TypeScript, Ubuntu, Vim, Vue.js, Web3.js
Languages
German: Native speaker
English: Fluent
French: Basic knowledge
Project history
- Taught Cloud skills to people changing their careers towards IT
- Each cohort consisted of 20-30 people (taught 2 cohorts)
- As the goal was to prepare the participants optimally for entering the workforce, we focused on practical, real-world skills.
- The course covered the following topics:
- Linux Essentials
- AWS Services (Level of Cloud Practitioner)
- Programming (Python + JavaScript)
- Full-Stack Web Development (React, Node.js)
- Containerization (Docker, Elastic Container Registry (ECR))
- CI/CD (GitHub Actions, GitLab CI, Jenkins)
- Ansible
- Infrastructure as Code (IaC) with Terraform
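The containerization topic above was taught with hands-on examples. A minimal teaching Dockerfile for a Python service might look as follows (the file and entry-point names are illustrative placeholders, not the actual course material):

```dockerfile
# Illustrative Dockerfile for a small Python service.
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code
COPY . .
CMD ["python", "app.py"]
```

An image built from this can then be tagged and pushed to ECR, tying the containerization and AWS topics of the course together.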
- Development of the infrastructure of an Intelligent Document Processing (IDP) solution, empowering customers to integrate AI/ML into their business workflows.
- The infrastructure was deployed in the AWS Cloud, utilizing Kubernetes and mainly open-source tools so that the application can also be hosted on-premises.
- For the cluster we are using Elastic Kubernetes Service (EKS) and running an event-driven microservice architecture. Events are persisted in EventStoreDB.
- The cluster uses AWS integrations such as the AWS VPC CNI driver, the AWS EBS CSI driver, the AWS EFS CSI driver and the AWS Load Balancer Controller. To automatically scale the Kubernetes nodes, we used the Cluster Autoscaler.
- The services communicate using RabbitMQ and are partly zero-scaled dynamically based on queue size with Kubernetes Event-driven Autoscaling (KEDA).
- My role was to build and maintain the infrastructure from the ground up (almost green-field).
- As the code was structured in a monorepo, we used Bazel to build all parts of the codebase (binaries, containers, documentation). I was personally in charge of maintaining part of that Bazel setup.
- To stay flexible, the full infrastructure was constructed with Terraform, relying on many upstream AWS modules (aws-eks, aws-iam, etc.) while also packaging resources into in-house modules.
- For Continuous Integration and Continuous Delivery (CI/CD) we are using GitLab CI running on Kubernetes runners, plus specialized on-premises runners with a NixOS configuration (which I partly maintained myself) optimized to run our Bazel build.
- The application is built using Python (backend / ML / AI) and JavaScript with Angular (frontend) and then packaged using Skaffold, Kustomize and Helm charts.
- To support strong Disaster Recovery, we are using Velero for regular backups along with AWS Elastic Block Store (EBS) snapshots.
- All container images are stored and managed centrally in the AWS Elastic Container Registry (ECR).
- All Application secrets are stored in HashiCorp Vault
- The monitoring was implemented with Prometheus, Loki, Tempo and Grafana
- To continuously monitor compliance and security across our accounts (AWS Organizations), we are using AWS Config, Amazon GuardDuty and AWS Security Hub.
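The queue-based zero-scaling described above can be sketched with a KEDA ScaledObject. The deployment name, queue name and thresholds below are illustrative placeholders, not the production values:

```yaml
# Illustrative KEDA ScaledObject: scale a worker Deployment on RabbitMQ queue depth.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler          # placeholder name
spec:
  scaleTargetRef:
    name: worker               # placeholder Deployment
  minReplicaCount: 0           # zero-scale when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: tasks           # placeholder queue
        mode: QueueLength
        value: "5"                 # target messages per replica
        hostFromEnv: RABBITMQ_URL  # AMQP connection string from the pod env
```

With `minReplicaCount: 0`, KEDA removes all replicas while the queue is empty and starts them again as soon as messages arrive, which is what makes the event-driven services cost-efficient.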
* Development of a CRUD REST API according to the principles of elasticity and scalability, to manage an already existing database
* The infrastructure was deployed in the AWS Cloud
* The API application was designed with Python 3 Flask, with all secrets stored in AWS Secrets Manager
* In the course of containerization, a Docker image was created so that horizontal scalability can be easily guaranteed
* The code repository was created in AWS CodeCommit, in order to then create a CI/CD pipeline via AWS CodePipeline and AWS CodeBuild, which automatically builds a new Docker image after each commit and deploys it in the production environment
* These images are stored and managed centrally in the AWS Elastic Container Registry
* The current image is then run as a container in a development AWS Elastic Beanstalk environment
* An AWS Elastic Kubernetes Service (EKS) cluster was used in production, with deployments also triggered by the CI/CD pipeline mentioned above using Helm charts
* Terraform (IaC) was used to manage the cloud resources centrally, automatically and version-controlled, with versioning managed in Git (GitOps)
* The monitoring was implemented with Prometheus and Grafana
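A CRUD REST API in Flask, as described above, can be sketched as follows. The `items` resource and the in-memory store are illustrative placeholders; the real service managed an existing database and loaded its credentials from AWS Secrets Manager:

```python
# Minimal sketch of a CRUD REST API with Flask.
# The resource name and in-memory store are placeholders for illustration.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
items = {}      # placeholder store; the real API used an existing database
next_id = 1

@app.route("/items", methods=["POST"])
def create_item():
    global next_id
    data = request.get_json(force=True)
    items[next_id] = data
    next_id += 1
    return jsonify(id=next_id - 1, **data), 201

@app.route("/items/<int:item_id>", methods=["GET"])
def read_item(item_id):
    if item_id not in items:
        abort(404)
    return jsonify(id=item_id, **items[item_id])

@app.route("/items/<int:item_id>", methods=["PUT"])
def update_item(item_id):
    if item_id not in items:
        abort(404)
    items[item_id] = request.get_json(force=True)
    return jsonify(id=item_id, **items[item_id])

@app.route("/items/<int:item_id>", methods=["DELETE"])
def delete_item(item_id):
    if items.pop(item_id, None) is None:
        abort(404)
    return "", 204
```

Because each request is stateless against the backing store, such a service scales horizontally: the Docker image can simply be run in multiple replicas behind a load balancer, which is the elasticity property the project called for.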