DevOps
Branching Strategy: A branching strategy is the set of conventions a software development team follows when writing, merging, and deploying code with a version control system.
----------------------------------------------------------
Common Git Branching Strategies:
- GitFlow
- GitHub Flow
- GitLab Flow
- Trunk-based development
----------------------------------------------------------
GitFlow:
- Master
- Develop
- Feature - for developing new features; branches off the develop branch
- Release - helps prepare a new production release; usually branched from the develop branch and must be merged back into both develop and master
- Hotfix - also helps prepare a release, but unlike release branches, hotfix branches arise from a bug that has been discovered and must be resolved; they let developers keep working on their own changes on the develop branch while the bug is being fixed.

GitFlow pros and cons:
The most obvious benefit of this model is that it allows for parallel development while protecting the production code: the main branch remains stable for release while developers work on separate branches. The various branch types also make it easier for developers to organize their work. The strategy has separate, clearly-purposed branches, though for that very reason it can become complicated for many use cases. It is also ideal when handling multiple versions of the production code. However, as more branches are added, they can become difficult to manage as developers merge their changes from the development branch to the main one. Developers first create the release branch, make sure any final work is also merged back into the development branch, and then merge the release branch into the main branch, as in the sketch below.
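A minimal command-line sketch of that release dance, assuming GitFlow's conventional branch names (develop, master, release-1.2 as an example):

git checkout develop && git pull                # start from an up-to-date develop
git checkout -b release-1.2                     # cut the release branch
# ...final fixes are committed here...
git checkout master && git merge --no-ff release-1.2 && git tag v1.2   # release to production
git checkout develop && git merge --no-ff release-1.2                  # merge back into develop too
git branch -d release-1.2                       # the release branch is now done
----------------------------------------------------------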
GitHub Flow:
GitHub Flow is a simpler alternative to GitFlow, ideal for smaller teams that don't need to manage multiple versions. Unlike GitFlow, this model has no release branches. You start with the main branch; developers then create feature branches that stem directly from the master to isolate their work, merge them back into the main branch when done, and delete the feature branch. The main idea behind this model is to keep the master code in a constantly deployable state, so it can support continuous integration and continuous delivery processes.

GitHub Flow pros and cons:
GitHub Flow focuses on Agile principles, so it is a fast and streamlined branching strategy with short production cycles and frequent releases. It also allows for fast feedback loops, so teams can quickly identify and resolve issues. Since there is no development branch, you are testing and automating changes on a single branch, which allows for quick and continuous deployment. This strategy is particularly suited to small teams and web applications, and it is ideal when you need to maintain a single production version. Conversely, it is not suitable for handling multiple versions of the code. Furthermore, the lack of a development branch makes this strategy more susceptible to bugs, and it can lead to unstable production code if branches are not properly tested before merging with the master, since release preparation and bug fixes happen in this one branch. As a result, the master branch can become cluttered more easily, as it serves as both a production and a development branch.
A further disadvantage is that this model is suited to small teams, so as teams grow, merge conflicts can occur because everyone is merging to the same branch, and there is a lack of transparency: developers cannot see what other developers are working on.
----------------------------------------------------------
GitLab Flow:
GitLab Flow is a simpler alternative to GitFlow that combines feature-driven development and feature branching with issue tracking. With GitFlow, developers create a develop branch and make it the default, while GitLab Flow works with the main branch right away. GitLab Flow is great when you want to maintain multiple environments, and when you prefer a staging environment separate from the production environment. Whenever the main branch is ready to be deployed, you merge it into the production branch and release it. This strategy therefore offers proper isolation between environments, allowing developers to maintain several versions of software in different environments. While GitHub Flow assumes that you can deploy to production whenever you merge a feature branch into the master, GitLab Flow resolves that by letting the code pass through internal environments before it reaches production. This method is therefore suited to situations where you don't control the timing of the release, such as an iOS app that must first go through App Store validation, or when you have specific deployment windows.
----------------------------------------------------------
Trunk-based development
Trunk-based development is a branching strategy that requires no long-lived branches: developers integrate their changes into a shared trunk at least once a day, and that shared trunk should be ready for release at any time. The main idea behind this strategy is that developers make smaller changes more frequently; the goal is to limit long-lasting branches and avoid merge conflicts, as all developers work on the same branch. In other words, developers commit directly to the trunk without long-lived feature branches. Consequently, trunk-based development is a key enabler of continuous integration (CI) and continuous delivery (CD): changes land on the trunk frequently, often multiple times a day (CI), which allows features to be released much faster (CD). This strategy is often combined with feature flags. As the trunk is always kept ready for release, feature flags help decouple deployment from release: changes that are not ready can be wrapped in a feature flag and kept hidden, while completed features can be released to end users without delay, as in the sketch below.
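A minimal sketch of the daily rhythm, plus the feature-flag idea in shell terms; the flag name and the two helper commands are hypothetical, the point is that unfinished code ships dark:

# integrate small changes into the shared trunk at least once a day
git checkout main && git pull --rebase
git commit -am "Small, always-releasable change"
git push origin main

# unfinished work ships behind a flag and is released by flipping the flag, not by deploying
if [ "${FEATURE_NEW_CHECKOUT:-off}" = "on" ]; then
    start_new_checkout    # hypothetical helper for the new code path
else
    start_old_checkout    # hypothetical helper for the current behavior
fi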
Trunk-based development pros and cons
As we've seen, trunk-based development paves the way for continuous integration, as the trunk is kept constantly updated. It also enhances collaboration: developers have better visibility over what changes other developers are making, because commits land directly on the trunk rather than in separate branches. This is unlike other branching methods, where each developer works independently in their own branch and any changes in that branch only become visible after merging into the main branch. Because trunk-based development avoids long-lived branches, it eliminates the stress of merge conflicts, the so-called 'merge hell', as developers push small changes much more often. This also makes it easier to resolve any conflicts that do arise.
Finally, this strategy allows for quicker releases, as the shared trunk is kept in a constantly releasable state with a continuous stream of work being integrated into it, which results in more stable releases. However, this strategy suits more senior developers, as it offers a great amount of autonomy that less experienced developers might find daunting, since they interact directly with the shared trunk. For a more junior team whose work you may need to monitor closely, you may opt for a branch-heavy Git strategy instead.
----------------------------------------------------------
How to choose the best branching strategy for your team:
When first starting out, it's best to keep things simple, so initially GitHub Flow or trunk-based development may work best. They are also ideal for smaller teams that only need a single version of a release to be maintained. GitFlow is great for open-source projects that require strict access control to changes. This is especially important as open-source projects allow anyone to contribute, so with GitFlow you can review what is being introduced into the source code. However, GitFlow, as previously mentioned, is not well suited when you want to implement a DevOps environment; in that case, the other strategies discussed are a better fit for an Agile DevOps process and will support your CI and CD pipeline.
----------------------------------------------------------
Github Flow
The GitHub Flow is a lightweight workflow. It was created by GitHub in 2011 and respects the following 6 principles:
1- Anything in the master branch is deployable.
2- To work on something new, create a branch off master and give it a descriptive name (e.g. new-oauth2-scopes).
3- Commit to that branch locally and regularly push your work to the same-named branch on the server.
4- When you need feedback or help, or you think the branch is ready for merging, open a pull request.
5- After someone else has reviewed and signed off on the feature, you can merge it into master.
6- Once it is merged and pushed to master, you can and should deploy it immediately.
Advantages
- It is friendly to Continuous Delivery and Continuous Integration.
- It is a simpler alternative to GitFlow.
- It is ideal when you need to maintain a single version in production.
Disadvantages
- The production code can become unstable more easily.
- It is not adequate when you need release plans.
- It doesn't say anything about deployment, environments, releases, or issue tracking.
- It isn't recommended when multiple versions in production are needed.
----------------------------------------------------------
GitLab Flow
The GitLab Flow is a workflow created by GitLab in 2014. It combines feature-driven development and feature branches with issue tracking. The main difference between GitLab Flow and GitHub Flow is the environment branches in GitLab Flow (e.g. staging and production), because there will be projects that can't deploy to production every time a feature branch is merged (e.g. SaaS applications and mobile apps).
The GitLab Flow is based on 11 rules:
1- Use feature branches, no direct commits on master.
2- Test all commits, not only ones on master.
3- Run all the tests on all commits (if your tests run longer than 5 minutes, have them run in parallel).
4- Perform code reviews before merging into master, not afterward.
5- Deployments are automatic, based on branches or tags.
6- Tags are set by the user, not by CI.
7- Releases are based on tags.
8- Pushed commits are never rebased.
9- Everyone starts from the master and targets the master.
10- Fix bugs in the master first and release branches second.
11- Commit messages reflect intent.
Advantages
- It defines how to do Continuous Integration and Continuous Delivery.
- The git history will be cleaner, less messy, and more readable (see why devs prefer squash and merge instead of plain merging in this article: https://softwareengineering.stackexchange.com/questions/263164/why-squash-git-commits-for-pull-requests)
- It is ideal when you only need a single version in production.
Disadvantages
- It is more complex than the GitHub Flow.
- It can become as complex as GitFlow when you need to maintain multiple versions in production.
----------------------------------------------------------
GitHub Flow Branch Strategy
In GitHub flow, the main branch contains your production-ready code. The other branches, feature branches, should contain work on new features and bug fixes, and will be merged back into the main branch when the work is finished and properly reviewed.
----------------------------------------------------------
GitLab Flow Branch Strategy
At its core, the GitLab flow branching strategy is a clearly-defined workflow. While similar to the GitHub flow branch strategy, the main differentiator is the addition of environment branches (i.e. production and pre-production) or release branches, depending on the situation. Just as in the other Git branch strategies, GitLab flow has a main branch that contains code that is ready to be deployed. However, this code is not the source of truth for releases. In GitLab flow, the feature branch contains work for new features and bug fixes which will be merged back into the main branch when it is finished, reviewed, and approved. The feature-branch cycle both flows share looks roughly like the sketch below.
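A minimal sketch of that cycle, with example branch and remote names:

git checkout main && git pull              # start from an up-to-date main
git checkout -b new-oauth2-scopes          # descriptive feature branch
git commit -am "Add new OAuth2 scopes"     # commit locally...
git push -u origin new-oauth2-scopes       # ...and push regularly
# open a pull/merge request; after review and sign-off, merge into main,
# delete the feature branch, and deploy
----------------------------------------------------------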
Add a Ports section to the file docker/docker-compose.yml:

services:
  tiptong_api-django:
    image: tiptong_api-django:latest
    ports:
      - "25:25"
    networks:
      - tiptong_api-local
      - tiptong_api-public

After deploying using "docker stack deploy", install "postfix" in the docker container:
docker exec -it --user root $(docker container ls -a -q --filter=name=tiptong_api-django) /bin/bash
apt update
apt install postfix -y
service postfix start

Now test accessing the port from your computer/laptop:
telnet api.tiptong.io 25

Configuring Traefik config files or the deploy section in the compose.yml file is unnecessary. Presumably this way only the "api.tiptong.io" container can use the port and other services/containers will not be able to use it; that is exactly why the Traefik section would need to be configured properly if other services also needed the port.

Test sending an email from the container:
apt install telnet -y
telnet localhost 25
mail from: whatever@whatever.com
rcpt to: mohsen@mohsenhassani.com
data (press enter)
Type something for the body of the email.
. (put an extra period on the last line and then press enter again)
If everything works out, you should see a confirmation message resembling this:
250 2.0.0 Ok: queued as CC732427AE
Type "quit" to exit.
---------------------------------------------------------------------------------------------
In case the email is not sent, install "rsyslog" to track the issue:
apt install rsyslog
service rsyslog start
service postfix restart
nano /var/log/syslog
nano /var/log/mail.log
nano /var/log/mail.info
---------------------------------------------------------------------------------------------
Having the following error, I had to comment out the line "myhostname = cc70132a9df8" in the /etc/postfix/main.cf file:
postfix/smtp[1469]: EF329761203: to=<mohsen@mohsenhassani.com>, relay=mail.mohsenhassani.com[5.9.154.209]:25, delay=33, delays=12/0.01/21/0.01, dsn=5.0.0, status=bounced (host mail.mosenhassani.com[5.9.154.209] said: 550 Access denied - Invalid HELO name (See RFC2821 4.1.1.1) (in reply to MAIL FROM command))
service postfix restart
---------------------------------------------------------------------------------------------
docker-compose.yml

version: "3"

volumes:
  files:
    driver: local
  mysql:
    driver: local
  redis:
    driver: local

services:
  owncloud:
    image: owncloud/server
    depends_on:
      - mariadb
      - redis
    environment:
      - OWNCLOUD_DOMAIN=localhost:8080
      - OWNCLOUD_DB_TYPE=mysql
      - OWNCLOUD_DB_NAME=owncloud
      - OWNCLOUD_DB_USERNAME=owncloud
      - OWNCLOUD_DB_PASSWORD=owncloud
      - OWNCLOUD_DB_HOST=mariadb
      - OWNCLOUD_ADMIN_USERNAME=admin
      - OWNCLOUD_ADMIN_PASSWORD=MohseN4301!
      - OWNCLOUD_MYSQL_UTF8MB4=true
      - OWNCLOUD_REDIS_ENABLED=true
      - OWNCLOUD_REDIS_HOST=redis
    networks:
      - owncloud-local
      - traefik-public
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - files:/mnt/data
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.owncloud-http.rule=Host(`ftp.mohsenhasani.ir`) || Host(`ftp.mohsenhassani.ir`) || Host(`152.89.45.140`)
        - traefik.http.services.owncloud.loadbalancer.server.port=8080

  mariadb:
    image: mariadb:10.6  # minimum required ownCloud version is 10.9
    environment:
      - MYSQL_ROOT_PASSWORD=owncloud
      - MYSQL_USER=owncloud
      - MYSQL_PASSWORD=owncloud
      - MYSQL_DATABASE=owncloud
    command: ["--max-allowed-packet=128M", "--innodb-log-file-size=64M"]
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-u", "root", "--password=owncloud"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - mysql:/var/lib/mysql
    networks:
      - owncloud-local

  redis:
    image: redis:6
    command: ["--databases", "1"]
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - redis:/data
    networks:
      - owncloud-local

networks:
  owncloud-local:
  traefik-public:
    external: true
-------------------------------------------------------------------------
.env

OWNCLOUD_VERSION=10.11
OWNCLOUD_DOMAIN=localhost:8080
ADMIN_USERNAME=mohsen
ADMIN_PASSWORD=mohsen_2222
HTTP_PORT=8080
-------------------------------------------------------------------------
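A hedged deployment sketch, assuming a Swarm where the external traefik-public network already exists; the stack name is an example:

docker stack deploy -c docker-compose.yml owncloud
docker stack services owncloud               # wait for replicas to come up
docker service logs owncloud_owncloud -f     # follow the server log
-------------------------------------------------------------------------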
Intel SGX:
Intel Software Guard Extensions (Intel SGX) is an Intel technology for application developers who seek to protect select code and data from disclosure or modification.
----------------------------------------------------------------------------------------------------
Enclave:
- A trusted execution environment embedded in a process.
- The core idea of SGX is the creation of a software 'enclave'.
- The enclave is basically a separate and encrypted region for code and data.
- The enclave is only decrypted inside the processor, so it is safe even from direct reads of RAM.
----------------------------------------------------------------------------------------------------
Both are "Infrastructure as a Code", meaning they're both used to automate provisioning, configuring, and managing the infrastructure. Terraform: - is mainly an infrastructure provisioning tool. (That's where its main power lies.) - has the possibility to deploy applications in other tools in that structure. - is a better tool for provisioning infrastructure. Ansible: - is mainly a configuration tool. (Once the infrastructure is provisioned and is there, Ansible can now be used to configure it and deploy applications, install/update the software on that infrastructure, etc. - is a better tool for configuring that infrastructure deploying and installing applications and services on them. ------------------------------------------------------------------------------------ DevOps engineers use the combination of these tools to cover the whole setup end-to-end using both for their own strength, instead of using only one tool. ------------------------------------------------------------------------------------ "Declarative" vs "Imperative" approaches that configuration files are written: Declarative: When you create the Terraform files, instead of defining what steps to be executed to create a VPS or to spin up the instances or configure the network, you define the end state you desire: - 5 serves with the following network config - AWS user with the following permissions So, instead of defining exactly what to do, which is an imperative approach, you define what the end result should be the declarative approach. For the initial set-up this may not make much difference, but consider when you're updating your infrastructure like removing a server, adding another server, or making another adjustment, with the imperative approach you would say in a configuration file remove 2 servers, add a firewall configuration, add some permission to AWS user, so you give instructions of what to do, a declarative approach like in Terraform, for example, you would say my new desired state is now 7 servers, this firewall configuration and user with this permission. ------------------------------------------------------------------------------------
These two lines accept requests on all entrypoints and route the domain to port 9001:
- traefik.http.routers.router_name.rule=Host(`ftp.mohsenhassani.com`)
- traefik.http.services.service_name.loadbalancer.server.port=9001
Note that we don't have the following line! If you add it, it will limit your domain to a specific entrypoint (http):
- traefik.http.routers.owncloud-http.entrypoints=http
Opening the following URLs in the browser will show the page from the 9001 port (exposed from the docker instance), but the URLs remain just as shown below:
http://ftp.mohsenhassani.com:43
http://ftp.mohsenhassani.com:80
http://ftp.mohsenhassani.com:9000
http://ftp.mohsenhassani.com:9001
------------------------------------------------------------------------------------
Limit (bind) the domain name to only one port:
- traefik.http.routers.router_name.rule=Host(`ftp.mohsenhassani.com`)
- traefik.http.routers.router_name.entrypoints=web-secure-entrypoint
- traefik.http.services.service_name.loadbalancer.server.port=9001
It will only work with the following URL:
http://ftp.mohsenhassani.com:9001
That's because in the traefik.yml file we have:
- --entrypoints.web-secure-entrypoint.address=:9001
------------------------------------------------------------------------------------
Add TLS using Let's Encrypt to a port:
- traefik.http.routers.router_name.rule=Host(`ftp.mohsenhassani.com`)
- traefik.http.routers.router_name.entrypoints=web-secure-entrypoint
- traefik.http.routers.router_name.tls=true
- traefik.http.routers.router_name.tls.certresolver=le
- traefik.http.services.service_name.loadbalancer.server.port=9001
The following URL will work:
https://ftp.mohsenhassani.com:9001
But NOT the following:
http://ftp.mohsenhassani.com:9001
------------------------------------------------------------------------------------
Configuring TCP routers:
- traefik.tcp.routers.router_name.rule=HostSNI(`ftp.mohsenhassani.com`)
- traefik.tcp.routers.router_name.entrypoints=tcp-entrypoint
- traefik.tcp.routers.router_name.tls=true
- traefik.tcp.routers.router_name.tls.certresolver=le
- traefik.tcp.services.service_name.loadbalancer.server.port=9000
------------------------------------------------------------------------------------
This redirects the http and web entrypoints to their secure entrypoints:
- traefik.http.routers.web-router.rule=Host(`ftp.mohsenhassani.com`)
- traefik.http.routers.web-router.entrypoints=http,web2
- traefik.http.routers.web-router.middlewares=https-redirect
- traefik.http.routers.web-secure-router.rule=Host(`ftp.mohsenhassani.com`)
- traefik.http.routers.web-secure-router.entrypoints=https,web2
- traefik.http.routers.web-secure-router.tls=true
- traefik.http.routers.web-secure-router.tls.certresolver=le
- traefik.http.services.web-service.loadbalancer.server.port=9001
------------------------------------------------------------------------------------
Add CORS headers:
- traefik.http.middlewares.dev-header.headers.accesscontrolallowmethods=GET,OPTIONS,PUT
- traefik.http.middlewares.dev-header.headers.accesscontrolalloworiginlist=*
- traefik.http.middlewares.dev-header.headers.accesscontrolmaxage=100
- traefik.http.middlewares.dev-header.headers.addvaryheader=true
- traefik.http.routers.my-web-router.middlewares=dev-header
------------------------------------------------------------------------------------
Expose multiple ports for a domain:
- Create a service for each port.
- Set the port on it via the loadbalancer label.
Both of the ports "9000" and "9001" are then accessible via the domain.
The tls and certresolver labels are irrelevant to this topic; I'm putting them here in case we need them all together.
- traefik.http.routers.web-secure-router.rule=Host(`ftp.mohsenhassani.com`)
- traefik.http.routers.web-secure-router.entrypoints=https,my-web2
- traefik.http.routers.web-secure-router.tls=true
- traefik.http.routers.web-secure-router.tls.certresolver=le
- traefik.http.routers.web-secure-router.service=my-web-service
- traefik.http.services.my-web-service.loadbalancer.server.port=9001

- traefik.http.routers.web-9000-router.rule=Host(`ftp.mohsenhassani.com`)
- traefik.http.routers.web-9000-router.entrypoints=my-tcp
- traefik.http.routers.web-9000-router.tls=true
- traefik.http.routers.web-9000-router.tls.certresolver=le
- traefik.http.routers.web-9000-router.service=web-9000-service
- traefik.http.services.web-9000-service.loadbalancer.server.port=9000
------------------------------------------------------------------------------------
Hypertext Transfer Protocol (HTTP) and Transmission Control Protocol (TCP) are both protocols involved in the transfer of data, and while each serves its own purpose, they have a close relationship.

What is HTTP?
HTTP is a request-response protocol that allows users to communicate data on the World Wide Web (WWW) and transfer hypertext. The protocol remains one of the primary means of using the Internet: it provides users a way to interact with web resources such as HTML files by transmitting hypertext messages between clients (such as a web browser like Chrome) and a server. Essentially, it's used to load web pages via hypertext links.

What is TCP?
TCP is a communication standard that enables application programs and computing devices to exchange data and/or messages over networks. It is a stateful protocol. It defines how to establish and maintain a network connection through which data is then exchanged. It also determines how to break the application data into packets that networks can transfer, and ensures end-to-end data delivery. TCP transmission is reliable and ordered, and it protects the integrity of data sent over a network via acknowledgements and checksums (note that TCP itself is not encrypted), regardless of the amount. Protocols that run on top of it include peer-to-peer sharing methods like the File Transfer Protocol (FTP), Secure Shell (SSH), and Telnet. It is also used to send and receive email through the Internet Message Access Protocol (IMAP), the Post Office Protocol (POP), and the Simple Mail Transfer Protocol (SMTP), and for web access through the Hypertext Transfer Protocol (HTTP).

The Main Differences Between HTTP and TCP
- HTTP is an application-layer protocol; TCP is the transport-layer protocol it runs on top of.
- HTTP typically uses TCP port 80 (443 for HTTPS) - the port the server "listens on" for requests from web clients. TCP itself is what provides those ports: a port number tells the destination computer which application should receive the data.
- TCP ensures the proper, reliable delivery of data, whereas HTTP is used to request and find the desired documents on the web.
- TCP keeps track of what data has or has not been received (and acknowledged) yet, while HTTP contains specific instructions on how to read and process the data once it's received.
- TCP manages the data stream, whereas HTTP describes what the data in the stream contains.
- TCP opens a connection with a three-way handshake, while HTTP on top of it is a simple request-response exchange.
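The layering is easy to see from a shell: open a raw TCP connection yourself, then speak HTTP over it (example.com is a placeholder; any HTTP server works):

printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80
# nc handles the TCP layer (handshake, reliable delivery); the text we pipe in is the HTTP layer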
The contents of "docker-compose.yml" file: version: '3.7' services: myminio: image: minio/minio:latest networks: - minio-local - traefik-public volumes: - ./data:/data deploy: restart_policy: condition: on-failure max_attempts: 3 environment: MINIO_ROOT_USER: admin MINIO_ROOT_PASSWORD: <password> command: server --address :9000 --console-address :9001 /data healthcheck: test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"] interval: 30s timeout: 20s retries: 3 deploy: labels: - traefik.enable=true - traefik.docker.network=traefik-public - traefik.constraint-label=traefik-public - traefik.http.routers.minio-web-router.rule=Host(`minio2.mohsenhassani.com`) - traefik.http.routers.minio-web-router.entrypoints=http,port-9001 - traefik.http.routers.minio-web-router.middlewares=https-redirect - traefik.http.routers.minio-web-router.service=minio-secure-web-service - traefik.http.routers.minio-web-secure-router.rule=Host(`minio2.mohsenhassani.com`) - traefik.http.routers.minio-web-secure-router.entrypoints=https,port-9001 - traefik.http.routers.minio-web-secure-router.tls=true - traefik.http.routers.minio-web-secure-router.tls.certresolver=le - traefik.http.routers.minio-web-secure-router.service=minio-secure-web-service - traefik.http.services.minio-secure-web-service.loadbalancer.server.port=9001 - traefik.http.routers.minio-tcp_http-router.rule=Host(`minio2.mohsenhassani.com`) - traefik.http.routers.minio-tcp_http-router.entrypoints=port-9000 - traefik.http.routers.minio-tcp_http-router.tls=true - traefik.http.routers.minio-tcp_http-router.tls.certresolver=le - traefik.http.routers.minio-tcp_http-router.service=minio-9000-service - traefik.http.services.minio-9000-service.loadbalancer.server.port=9000 minio-client: image: minio/mc:latest networks: - minio-local deploy: restart_policy: condition: on-failure max_attempts: 3 depends_on: - myminio entrypoint: > /bin/sh -c " /usr/bin/mc config host rm local; /usr/bin/mc config host add --quiet --api s3v4 local http://myminio:9000 admin <password>; tail -F /dev/null; " prometheus-metrics: image: prom/prometheus networks: - minio-local volumes: - ./data/prometheus/scrape_configs.yml:/etc/prometheus/prometheus.yml depends_on: - myminio command: - "--config.file=/etc/prometheus/prometheus.yml" deploy: restart_policy: condition: on-failure max_attempts: 5 networks: minio-local: attachable: true traefik-public: external: true ---------------------------------------------------------------------------------- The contents of "traefik-host.yml" file: version: '3.3' services: traefik: image: traefik:v2.2 ports: - target: 80 published: 80 mode: host - target: 443 published: 443 mode: host - target: 9000 published: 9000 mode: host - target: 9001 published: 9001 mode: host deploy: placement: constraints: - node.labels.traefik-public.traefik-public-certificates == true labels: - traefik.enable=true - traefik.docker.network=traefik-public - traefik.constraint-label=traefik-public - traefik.http.middlewares.admin-auth.basicauth.users=${USERNAME?Variable not set}:${HASHED_PASSWORD?Variable not set} - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true - traefik.http.routers.traefik-public-http.rule=Host(`${DOMAIN?Variable not set}`) - traefik.http.routers.traefik-public-http.entrypoints=http - traefik.http.routers.traefik-public-http.middlewares=https-redirect - traefik.http.routers.traefik-public-https.rule=Host(`${DOMAIN?Variable not set}`) - 
traefik.http.routers.traefik-public-https.entrypoints=https - traefik.http.routers.traefik-public-https.tls=true - traefik.http.routers.traefik-public-https.service=api@internal - traefik.http.routers.traefik-public-https.tls.certresolver=le - traefik.http.routers.traefik-public-https.middlewares=admin-auth - traefik.http.services.traefik-public.loadbalancer.server.port=8080 volumes: - /var/run/docker.sock:/var/run/docker.sock:ro - traefik-public-certificates:/certificates command: - --providers.docker - --providers.docker.constraints=Label(`traefik.constraint-label`, `traefik-public`) - --providers.docker.exposedbydefault=false - --providers.docker.swarmmode - --entrypoints.http.address=:80 - --entrypoints.https.address=:443 - --entrypoints.minio-web.address=:9001 - --entrypoints.minio-tcp.address=:9000 - --certificatesresolvers.le.acme.email=${EMAIL?Variable not set} - --certificatesresolvers.le.acme.storage=/certificates/acme.json - --certificatesresolvers.le.acme.tlschallenge=true - --accesslog - --log - --api networks: - traefik-public volumes: traefik-public-certificates: networks: traefik-public: external: true ---------------------------------------------------------------------------------- The contents of scrape_configs.yml file: global: scrape_interval: 15s scrape_configs: - job_name: minio-job metrics_path: /minio/v2/metrics/cluster scheme: http static_configs: - targets: ['myminio:9000'] ---------------------------------------------------------------------------------- docker stack rm minio docker stack deploy -c docker-compose.yml minio docker service logs minio_myminio -f ----------------------------------------------------------------------------------
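Assuming DNS for minio2.mohsenhassani.com points at the Swarm and the certificate has been issued, a quick way to verify both routers:

curl -I https://minio2.mohsenhassani.com:9001                        # MinIO console through Traefik
curl -I https://minio2.mohsenhassani.com:9000/minio/health/live      # S3 API liveness endpoint
----------------------------------------------------------------------------------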
A deploy or deployment includes all the technical activities that are needed to make a software system or feature available for use. Think of a fresh Docker container running in a pod on a Kubernetes cluster: the piece of software passed all checks and tests in your CI/CD pipeline and is ready to receive traffic from production users, but it is not actually receiving any yet. This part of the process just makes sure that the new version is healthy and running smoothly. It takes care of all the technical checks and balances, without any of the risk incurred by serving actual production traffic. You can conclude that deploying a piece of software is a mundane and risk-free activity.

A release comes after deployment and includes all the activities that are needed to move part, or all, of production traffic to the new version. All the risks and things that could go wrong - downtime, lost revenue, angry managers and customers - are tied to the release to production, not the deploy. You can conclude that releasing a piece of software is an exciting and pretty risky activity. It's an activity that deserves more attention.
SSL is short for Secure Sockets Layer, and an SSL certificate gives you a way of encrypting information while it travels online. This is particularly important as it ensures that data in transit cannot be intercepted by those with malicious intent.
------------------------------------------------------------------------------
HTTPS stands for HyperText Transfer Protocol Secure, but it is also referred to as HTTP Secure and HTTP over SSL. Ever since the internet came about, HTTP has been the protocol used to move data across the web. HTTP moves data in plain text, which is not secure because it is readily available for anyone to read; HTTPS provides a secure method, using encryption, to move data over the internet.
------------------------------------------------------------------------------
What is the difference between SSL and HTTPS?
HTTPS is a combination of the Hypertext Transfer Protocol (HTTP) with either SSL or TLS. It provides encrypted communications and a secure ID of a web server. SSL is simply a protocol that enables secure communications online. It was originally developed in 1994. Since its introduction, there have been different, more improved SSL versions. TLS is the latest 'version' of SSL. Aside from HTTPS, TLS/SSL can be utilized to secure other app-specific protocols, namely SMTP, FTP, XMPP, and NNTP.
------------------------------------------------------------------------------
What is TLS, and is it the same as SSL?
Short for Transport Layer Security, TLS is essentially the successor of SSL, yet more secure. Despite SSL now having a successor, and because it is still one of the most popular terms online, SSL and TLS are generally regarded as one and the same. For many years, HTTPS used SSL as its standard protocol. However, there is now a newer version of SSL, which is called TLS. They are quite similar in many respects, but essentially TLS is the upgraded version of SSL. If you buy an SSL certificate online from a trusted provider, you will most likely get an SSL/TLS certificate.
------------------------------------------------------------------------------
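To see the TLS handshake and certificate for any HTTPS site from the command line (example.com is a placeholder host):

openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
# prints who the certificate was issued to/by and its validity window
------------------------------------------------------------------------------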
The docker-compose.yml file:
=======================
version: '3.7'

services:
  myminio:
    image: minio/minio:latest
    networks:
      - minio-local
    volumes:
      - ./data:/data
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: a_password
    command: server --address :9000 --console-address :9001 /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio-client:
    image: minio/mc:latest
    networks:
      - minio-local
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
    depends_on:
      - myminio
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc config host rm local;
      /usr/bin/mc config host add --quiet --api s3v4 local http://myminio:9000 admin a_password;
      tail -F /dev/null;
      "

  nginx:
    image: nginx:latest
    networks:
      - minio-local
      - traefik-public
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - myminio
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 7
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.minio-http.rule=Host(`minio.mohsenhassani.com`)
        - traefik.http.routers.minio-http.entrypoints=http
        - traefik.http.routers.minio-http.middlewares=https-redirect
        - traefik.http.routers.minio-https.rule=Host(`minio.mohsenhassani.com`)
        - traefik.http.routers.minio-https.entrypoints=https
        - traefik.http.routers.minio-https.tls=true
        - traefik.http.routers.minio-https.tls.certresolver=le
        - traefik.http.services.minio.loadbalancer.server.port=9001

  adminio-ui:
    image: rzrbld/adminio-ui:latest
    environment:
      API_BASE_URL: "http://adminioapi.mohsenhassani.com"
      ADMINIO_MULTI_BACKEND: "false"
      ADMINIO_BACKENDS: '[{"name":"myminio","url":"http://adminioapi.mohsenhassani.com"}]'
      NGX_ROOT_PATH: "/"
    networks:
      - minio-local
      - traefik-public
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 7
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.adminio-http.rule=Host(`adminio.mohsenhassani.com`)
        - traefik.http.routers.adminio-http.entrypoints=http
        # - traefik.http.routers.adminio-http.middlewares=https-redirect
        # - traefik.http.routers.adminio-https.rule=Host(`adminio.mohsenhassani.com`)
        # - traefik.http.routers.adminio-https.entrypoints=https
        # - traefik.http.routers.adminio-https.tls=true
        # - traefik.http.routers.adminio-https.tls.certresolver=le
        - traefik.http.services.adminio.loadbalancer.server.port=80

  adminio-api:
    image: rzrbld/adminio-api:latest
    environment:
      MINIO_ACCESS: admin
      MINIO_SECRET: a_password
      MINIO_HOST_PORT: myminio:9000
      MINIO_SSE_MASTER_KEY: 1:da2f4cfa32bed76507dcd44b42872328a8e14f25cd2a1ec0fb85d299a192a447
      ADMINIO_HOST_PORT: :8080
    depends_on:
      - myminio
      - adminio-ui
    networks:
      - minio-local
      - traefik-public
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 7
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.adminioapi-http.rule=Host(`adminioapi.mohsenhassani.com`)
        - traefik.http.routers.adminioapi-http.entrypoints=http
        # - traefik.http.routers.adminioapi-http.middlewares=https-redirect
        # - traefik.http.routers.adminioapi-https.rule=Host(`adminioapi.mohsenhassani.com`)
        # - traefik.http.routers.adminioapi-https.entrypoints=https
        # - traefik.http.routers.adminioapi-https.tls=true
        # - traefik.http.routers.adminioapi-https.tls.certresolver=le
        - traefik.http.services.adminioapi.loadbalancer.server.port=8080
networks:
  minio-local:
    attachable: true
  traefik-public:
    external: true
----------------------------------------------------------------------------------------
The nginx.conf file:
===============
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    # include /etc/nginx/conf.d/*.conf;

    upstream minio {
        server myminio:9001;
    }

    server {
        listen 9001;
        listen [::]:9001;
        server_name minio.mohsenhassani.com 127.0.0.1 localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;
            proxy_pass http://minio;
        }
    }
}
---------------------------------------------------------------------------------
docker network create minio-local
docker stack deploy -c docker-compose.yml minio
docker stack rm minio
---------------------------------------------------------------------------------
1- The file docker-compose.yml:
=======================
version: '3.7'

services:
  myminio:
    image: minio/minio:latest
    hostname: minio-server
    networks:
      - minio-local
      - minio-public
    volumes:
      - ./data:/data
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  nginx:
    image: nginx:latest
    networks:
      - minio-local
      - minio-public
      - traefik-public
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - myminio
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 50
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.minio-http.rule=Host(`minio.mohsenhassani.com`)
        - traefik.http.routers.minio-http.entrypoints=http
        - traefik.http.routers.minio-http.middlewares=https-redirect
        - traefik.http.routers.minio-https.rule=Host(`minio.mohsenhassani.com`)
        - traefik.http.routers.minio-https.entrypoints=https
        - traefik.http.routers.minio-https.tls=true
        - traefik.http.routers.minio-https.tls.certresolver=le
        - traefik.http.services.minio.loadbalancer.server.port=9000

networks:
  minio-local:
    attachable: true
  minio-public:
    external: true
  traefik-public:
    external: true
----------------------------------------------------------------------------------
2- The file nginx.conf:
==============
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    # include /etc/nginx/conf.d/*.conf;

    upstream minio {
        server myminio:9000;
    }

    server {
        listen 9000;
        listen [::]:9000;
        server_name minio.mohsenhassani.com 127.0.0.1 localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;
            proxy_pass http://minio;
        }
    }
}
----------------------------------------------------------------------------------
3- Create a folder "data" next to these two files.

4- Create a docker network:
docker network create --driver=overlay minio-public

5- Deploy the stack:
docker stack deploy -c docker-compose.yml minio-server
----------------------------------------------------------------------------------
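A quick smoke test, assuming DNS for minio.mohsenhassani.com resolves to the Swarm and Traefik has issued the certificate:

curl -f https://minio.mohsenhassani.com/minio/health/live   # returns 200 when MinIO is up behind nginx
----------------------------------------------------------------------------------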
Get MinIO server information for the configured alias "myminio":
mc admin info myminio
-------------------------------------------------------------------------
MinIO server information as JSON:
mc admin --json info play      # "play" is the MinIO online test playground
mc admin --json info myminio
-------------------------------------------------------------------------
Restart all MinIO servers:
mc admin service restart play
-------------------------------------------------------------------------
Add a new user 'newuser' on MinIO:
mc admin user add myminio/ newuser newuser123

Disable the user 'newuser' on MinIO:
mc admin user disable myminio/ newuser

Enable the user 'newuser' on MinIO:
mc admin user enable myminio/ newuser

Remove the user 'newuser' on MinIO:
mc admin user remove myminio/ newuser

List all users on MinIO:
mc admin user list --json myminio/

Display info of a user:
mc admin user info myminio someuser
-------------------------------------------------------------------------
List the bucket quota on bucket 'mybucket' on MinIO:
mc admin bucket quota myminio/mybucket

Set a hard bucket quota of 64MB for bucket 'mybucket' on MinIO:
mc admin bucket quota myminio/mybucket --hard 64MB
-------------------------------------------------------------------------
Bucket stats:
mc ls -r --json minio

To get stats of a bucket/folder:
mc ls -r --json minio/abcd
-------------------------------------------------------------------------
mc du minio
mc du minio/abcd
-------------------------------------------------------------------------
Quotas: https://docs.min.io/minio/baremetal/reference/minio-cli/minio-mc-admin/mc-admin-bucket-quota.html
mc admin bucket quota myminio/bucket-1 --json
-------------------------------------------------------------------------
1- Run the latest stable image of MinIO in a Docker container:
docker run -p 9000:9000 \
  --name minio1 \
  -v /mnt/data:/data \
  -e "MINIO_ROOT_USER=mohsen" \
  -e "MINIO_ROOT_PASSWORD=M.Hassani" \
  minio/minio server /data

2- Download the MinIO Client:
docker run -it --entrypoint=/bin/bash minio/mc

3- Add a MinIO storage service (in the minio/mc container from step 2):
mc alias set minio <YOUR-MINIO-ENDPOINT> [YOUR-ACCESS-KEY] [YOUR-SECRET-KEY]
mc alias set minio http://192.168.1.51:9000 mohsen M.Hassani

To get the IP address or hostname of the minio server:
- docker ps
- docker inspect <CONTAINER ID> of the minio/mc
- Search the first lines for "Args". You will see the IP or hostname along with the credentials.

4- Test your setup:
mc admin info minio
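To confirm read/write access end-to-end, create a bucket and copy a file into it (bucket and file names are examples):

mc mb minio/test-bucket
echo 'hello' > /tmp/hello.txt
mc cp /tmp/hello.txt minio/test-bucket/
mc ls minio/test-bucket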
When demand for your application or website is increasing and you need to expand its accessibility, storage power, and availability levels, is it better to scale horizontally or vertically? Horizontal scaling means scaling by adding more machines to your pool of resources (also described as “scaling out”), whereas vertical scaling refers to scaling by adding more power (e.g. CPU, RAM) to an existing machine (also described as “scaling up”). One of the fundamental differences between the two is that horizontal scaling requires breaking a sequential piece of logic into smaller pieces so that they can be executed in parallel across multiple machines. In many respects, vertical scaling is easier because the logic really doesn’t need to change. Rather, you’re just running the same code on higher-spec machines. However, there are many other factors to consider when determining the appropriate approach.
iSCSI stands for Internet Small Computer Systems Interface. It is an IP-based storage networking standard for linking data storage facilities. It provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network. iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. It can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval. The protocol allows clients (called initiators) to send SCSI commands (CDBs) to storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into storage arrays while providing clients (such as database and web servers) with the illusion of locally attached SCSI disks. It mainly competes with Fibre Channel, but unlike traditional Fibre Channel, which usually requires dedicated cabling, iSCSI can be run over long distances using existing network infrastructure.
Nginx: listens on port 80 for incoming HTTP requests from the internet.
Gunicorn: listens on another port (8000 is the popular one) for HTTP requests from Nginx. Gunicorn is configured with our Django web app and serves the dynamic content passed on by Nginx. (Note that Gunicorn can handle static files (CSS/JS/images) too, but Nginx is better optimized for them.) So we need both Nginx and Gunicorn for a proper Django deployment.

WSGI (Web Server Gateway Interface) servers (such as Gunicorn, uWSGI, or mod_wsgi): A web server faces the outside world. It can serve files (HTML, images, CSS, etc.) directly from the file system. However, it can't talk directly to Django applications; it needs something that will run the application, feed it requests from web clients (such as browsers), and return responses. With Nginx, mod_wsgi is out of the picture, and we have to choose between Gunicorn and uWSGI. The WSGI server sits between Nginx (web server) and Django (Python app). The WSGI server doesn't talk to our Django project, it imports our Django project. It does something like this:
from mysite.wsgi import application
application(args)

uWSGI: a WSGI (Python standard) implementation. uWSGI is a fully-featured application server. Generally, uWSGI is paired with a reverse proxy (such as Nginx). It creates a Unix socket, and serves responses to the web server via the uwsgi protocol:
the web client <-> the web server <-> the socket <-> uwsgi <-> Django
-------------------------------------------------------------------
Gunicorn WSGI HTTP server
Gunicorn is a pure-Python WSGI HTTP server; it has no dependencies and is easy to install and use. As a Python HTTP server, Gunicorn interfaces with both Nginx and our actual Python web-app code to serve dynamic content, while Nginx is responsible for serving static content and the rest. We can test Gunicorn by typing:
gunicorn --bind 0.0.0.0:8000 mysite.wsgi:application
Or a simpler command:
gunicorn mysite.wsgi
-------------------------------------------------------------------
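For completeness, a minimal sketch of the Nginx side of this pairing; the server name and paths are examples:

server {
    listen 80;
    server_name example.com;

    location /static/ {
        alias /srv/mysite/static/;          # Nginx serves static files directly
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8000;   # everything else goes to Gunicorn
    }
}
-------------------------------------------------------------------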
RabbitMQ is a message broker: it accepts and forwards messages. You can think about it as a post office: when you put the mail that you want posting in a post box, you can be sure that Mr. or Ms. Mailperson will eventually deliver the mail to your recipient. In this analogy, RabbitMQ is a post box, a post office, and a postman. The major difference between RabbitMQ and the post office is that it doesn't deal with paper, instead it accepts, stores, and forwards binary blobs of data ‒ messages. RabbitMQ is an open-source middleware message solution that natively uses AMQP communications but it has a good selection of plug-ins to support features like MQTT, MQTT Web Sockets, HTTP REST API, and server-to-server communications.
What is a message broker?
A message broker is software that enables applications, systems, and services to communicate with each other and exchange information. The message broker does this by translating messages between formal messaging protocols. This allows interdependent services to "talk" with one another directly, even if they were written in different languages or implemented on different platforms.
Asynchronous messaging refers to the type of inter-application communication that message brokers make possible. It prevents the loss of valuable data and enables systems to continue functioning even in the face of the intermittent connectivity or latency issues common on public networks.
-----------------------------------------------------------------------
Message brokers vs. APIs
REST APIs are commonly used for communications between microservices. The term Representational State Transfer (REST) defines a set of principles and constraints that developers can follow when building web services. Any services that adhere to them will be able to communicate via a set of uniform shared stateless operators and requests. Application Programming Interface (API) denotes the underlying code that, if it conforms to REST rules, allows the services to talk to one another.
REST APIs use the Hypertext Transfer Protocol (HTTP) to communicate. Because HTTP is the standard transport protocol of the public Internet, REST APIs are widely known, frequently used, and broadly interoperable. HTTP is a request/response protocol, however, so it is best used in situations that call for a synchronous request/reply. This means that services making requests via REST APIs must be designed to expect an immediate response. If the client receiving the response is down, the sending service will be blocked while it awaits the reply. Failover and error-handling logic should be built into both services.
Message brokers enable asynchronous communications between services, so that the sending service need not wait for the receiving service's reply. This improves fault tolerance and resiliency in the systems in which they're employed. In addition, the use of message brokers makes it easier to scale systems, since a pub/sub messaging pattern can readily support changing numbers of services. Message brokers also keep track of consumers' states.
-----------------------------------------------------------------------
Producer: Application that sends the messages.
Consumer: Application that receives the messages.
Queue: Buffer that stores messages.
Message: Information that is sent from the producer to a consumer through RabbitMQ.
Connection: A TCP connection between your application and the RabbitMQ broker.
Channel: A virtual connection inside a connection. When publishing or consuming messages from a queue - it's all done over a channel.
Exchange: Receives messages from producers and pushes them to queues depending on rules defined by the exchange type. To receive messages, a queue needs to be bound to at least one exchange.
Binding: A binding is a link between a queue and an exchange.
Routing key: A key that the exchange looks at to decide how to route the message to queues. Think of the routing key like an address for the message.
AMQP: Advanced Message Queuing Protocol is the protocol used by RabbitMQ for messaging.
Users: It is possible to connect to RabbitMQ with a given username and password. Every user can be assigned permissions such as rights to read, write and configure privileges within the instance. Users can also be assigned permissions for specific virtual hosts.
Vhost, virtual host: Provides a way to segregate applications using the same RabbitMQ instance. Different users can have different permissions to different vhosts, and queues and exchanges can be created so they only exist in one vhost.
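To see several of these pieces in one place, a hedged sketch using rabbitmqadmin (ships with the RabbitMQ management plugin; it assumes the default guest/guest credentials, and exact flags vary slightly by version):

rabbitmqadmin declare queue name=hello durable=false                                 # create a queue
rabbitmqadmin publish exchange=amq.default routing_key=hello payload='hello world'   # produce via the default exchange
rabbitmqadmin get queue=hello ackmode=ack_requeue_false                              # consume the message back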
A message queue is a data structure, or a container - a way to hold messages for eventual consumption. A message broker is a separate component that manages queues. ------------------------------------------------------------------ Message Broker is built to extend MQ, and it is capable of understanding the content of each message that it moves through the Broker. ------------------------------------------------------------------
- High Availability or no downtime - Scalability or high performance - Disaster recovery - backup and restore
High Availability means that the application has no downtime so it's always accessible by users.
Kubernetes is more complex to install and set up, with a high learning curve, but it is more powerful. Docker Swarm is more lightweight, but limited in its functionality.
--------------------------------------------------------------------
Kubernetes supports auto-scaling. Docker Swarm needs scaling to be configured manually.
--------------------------------------------------------------------
Kubernetes has built-in monitoring. Docker Swarm depends on third-party tools for monitoring.
--------------------------------------------------------------------
Kubernetes needs load balancing to be set up manually. Docker Swarm supports automatic load balancing.
--------------------------------------------------------------------
With Kubernetes, you need to learn a new, separate CLI tool: kubectl. With Docker Swarm, you use the same docker command line that you use with Docker; you don't need a separate CLI tool.
--------------------------------------------------------------------
A stage, staging, or pre-production environment is an environment for testing that exactly resembles a production environment. It seeks to mirror an actual production environment as closely as possible and may connect to other production services and data, such as databases. For example, servers will be run on remote machines, rather than locally (as on a developer's workstation during dev, or on a single test machine during the test), which tests the effects of networking on the system.
Pipelines are the top-level component of continuous integration, delivery, and deployment. Pipelines provide an extensible set of tools for modeling the build, testing, and deployment of code. All jobs in a stage are executed in parallel and, if they all succeed, the pipeline moves on to the next stage. If one of the jobs fails, as a rule, the next stage is not executed. A pipeline typically has the following stages:
1. Build - compilation and packaging of the project.
2. Testing - automated testing with default data.
3. Staging - manual testing and the decision on going live.
4. Production - manual deployment to production.
A typical pipeline might consist of four stages, executed in the following order:
- A build stage, with a job called compile.
- A test stage, with two jobs called test1 and test2.
- A staging stage, with a job called deploy-to-stage.
- A production stage, with a job called deploy-to-prod.
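As a toy illustration of the stage/job semantics described above - not a real CI system - here is a hedged Python sketch in which jobs inside a stage run in parallel and a failing job stops the pipeline:
from concurrent.futures import ThreadPoolExecutor

# Stages run in order; the jobs inside a stage run in parallel.
PIPELINE = {
    'build': ['compile'],
    'test': ['test1', 'test2'],
    'staging': ['deploy-to-stage'],
    'production': ['deploy-to-prod'],
}

def run_job(name):
    print('running', name)
    return True  # a real job would return False when it fails

for stage, jobs in PIPELINE.items():
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_job, jobs))
    if not all(results):
        print('stage', stage, 'failed; aborting the pipeline')
        break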
Continuous Delivery adds that the software can be released to production at any time, often by automatically pushing changes to a staging system.
Continuous Integration is the practice of merging all the code that is being produced by developers. The merging usually takes place several times a day in a shared repository. Builds and automated tests are then run against the shared repository, which ensures there are no integration issues and identifies problems early.
Kong is an API gateway built on top of Nginx.
----------------------------------------------------------------------------------
Kong is a microservice API gateway. Kong provides a flexible abstraction layer that securely manages communication between clients and microservices via API. It is also known as an API gateway, API middleware, or in some cases a service mesh.
----------------------------------------------------------------------------------
https://docs.konghq.com/enterprise/
----------------------------------------------------------------------------------
You can install it on your server or with Docker. The Docker installation is below these instructions.
Installation on a server: (for installing with Docker, go to the section at the bottom)
https://konghq.com/get-started/#install
1- Install Kong:
apt install -y apt-transport-https curl lsb-core
echo "deb https://kong.bintray.com/kong-deb `lsb_release -sc` main" | sudo tee -a /etc/apt/sources.list
curl -o bintray.key https://bintray.com/user/downloadSubjectPublicKey?username=bintray
sudo apt-key add bintray.key
sudo apt-get update
sudo apt-get install -y kong
2- Copy the configuration file:
cp /etc/kong/kong.conf.default /etc/kong/kong.conf
3- Install PostgreSQL. Provision a database named "kong" and a user named "kong".
4- Uncomment the database variables in /etc/kong/kong.conf:
database = postgres
pg_host
pg_port
pg_timeout
pg_user
pg_password
pg_database
5- Run the Kong migrations:
kong migrations bootstrap -c /etc/kong/kong.conf
----------------------------------------------------------------------------------
Install on Docker:
https://konghq.com/get-started/#install
1- Create a custom network:
docker network create kong-net
2- Download and run a dockerized PostgreSQL database:
docker run -d --name kong-database \
  --network=kong-net \
  -p 5432:5432 \
  -e "POSTGRES_USER=kong" \
  -e "POSTGRES_DB=kong" \
  -e "POSTGRES_PASSWORD=kong" \
  postgres:9.6
3- Prepare your database by running the migrations with an ephemeral Kong container:
docker run --rm \
  --network=kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_PG_USER=kong" \
  -e "KONG_PG_DATABASE=kong" \
  -e "KONG_PG_PASSWORD=kong" \
  kong:latest kong migrations bootstrap
4- Start Kong:
docker run -d --name kong \
  --network=kong-net \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  -e "KONG_PG_USER=kong" \
  -e "KONG_PG_DATABASE=kong" \
  -e "KONG_PG_PASSWORD=kong" \
  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
  -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
  -p 8000:8000 \
  -p 8443:8443 \
  -p 127.0.0.1:8001:8001 \
  -p 127.0.0.1:8444:8444 \
  kong:latest
5- Test it:
curl -i http://localhost:8001/
----------------------------------------------------------------------------------
Installing Konga (a GUI for the Kong admin API):
1- Prepare Konga's database by starting an ephemeral container (the password matches the POSTGRES_PASSWORD set above):
docker run --rm \
  --network=kong-net \
  pantsel/konga -c prepare -a postgres -u postgresql://kong:kong@kong-database:5432/konga_db
2- Run Konga on Docker:
docker run --rm -p 1337:1337 \
  --network=kong-net \
  -e "DB_ADAPTER=postgres" \
  -e "DB_HOST=kong-database" \
  -e "DB_USER=kong" \
  -e "DB_PASSWORD=kong" \
  -e "DB_DATABASE=konga_db" \
  -e "KONGA_HOOK_TIMEOUT=120000" \
  -e "NODE_ENV=production" \
  --name konga \
  pantsel/konga
----------------------------------------------------------------------------------
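Once Kong is up, services and routes are registered through the admin API on port 8001. A minimal Python sketch using the requests library; the service name, upstream URL, and path are placeholder choices:
import requests

ADMIN = 'http://localhost:8001'  # Kong admin API (mapped in the docker run above)

# Register an upstream service with Kong.
requests.post(ADMIN + '/services',
              data={'name': 'example-service', 'url': 'http://httpbin.org'})

# Add a route so requests to /example on the proxy port (8000) reach the service.
requests.post(ADMIN + '/services/example-service/routes',
              data={'paths[]': '/example'})

# The service is now reachable through the gateway:
#   curl http://localhost:8000/example/get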
Scalability is the ability of a program to handle growth in workload. For example, a highly scalable program that works well on a small database (say, fewer than 1,000 records) would also work well on a large one (say, millions or billions of records). More generally, scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth.
Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or DigitalOcean. ------------------------------------------------------------------- Docker is the command-line tool that uses containerization to manage multiple images, containers, volumes, and so on -- a container is roughly a lightweight virtual machine, though it shares the host kernel rather than virtualizing hardware. Until recently Docker didn't run natively on Mac or Windows, so another tool was created, Docker Machine, which creates a virtual machine (using yet another tool, e.g. Oracle VirtualBox), runs Docker on that VM, and helps coordinate between the host OS and the Docker VM. Docker Compose is essentially a higher-level scripting interface on top of Docker itself, making it easier to launch several containers simultaneously. Its config file (docker-compose.yml) can be confusing, since some of its settings are passed down to the lower-level docker process and some are used only at the higher level. ------------------------------------------------------------------- Developers describe Docker Compose as "Define and run multi-container applications with Docker". With Compose, you define a multi-container application in a single file, then spin your application up with a single command that does everything needed to get it running. On the other hand, Docker Machine is described as "Machine management for a container-centric world". Machine lets you create Docker hosts on your computer, on cloud providers, and inside your own data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them. -------------------------------------------------------------------
Amazon's Elastic Compute Cloud (EC2) is an offering that allows developers to provision and run their applications by creating instances of virtual machines in the cloud. EC2 also offers auto-scaling, where resources are allocated based on the amount of traffic received. Like other AWS offerings, EC2 can easily be integrated with other Amazon services such as the Simple Queue Service (SQS) or Simple Storage Service (S3), among others.
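As an illustration, provisioning an instance programmatically with the boto3 Python SDK might look like the sketch below; the AMI ID is a placeholder, and configured AWS credentials are assumed:
import boto3

ec2 = boto3.resource('ec2')

# Provision a single virtual machine instance from a machine image.
instances = ec2.create_instances(
    ImageId='ami-0123456789abcdef0',  # placeholder AMI ID
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
)
print('launched', instances[0].id)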
As Kubernetes is a container orchestrator, it needs a container runtime in order to orchestrate. Kubernetes is most commonly used with Docker, but it can also be used with any compatible container runtime: runc, CRI-O, and containerd are other container runtimes that you can deploy with Kubernetes. The Cloud Native Computing Foundation (CNCF) maintains a listing of endorsed container runtimes on its ecosystem landscape page, and the Kubernetes documentation provides specific instructions for getting set up with containerd and CRI-O.
Docker is commonly used without Kubernetes; in fact, this is the norm. While Kubernetes offers many benefits, it is notoriously complex, and there are many scenarios where the overhead of spinning up Kubernetes is unnecessary or unwanted. In development environments it is common to use Docker without a container orchestrator like Kubernetes. In production environments, the benefits of a container orchestrator often do not outweigh the cost of the added complexity. Additionally, many public cloud services like AWS, GCP, and Azure provide orchestration capabilities of their own, making the added complexity of running Kubernetes yourself unnecessary.
Kubernetes and Docker are both comprehensive de-facto solutions for intelligently managing containerized applications, and both provide powerful capabilities; from this, some confusion has emerged. “Kubernetes” is now sometimes used as a shorthand for an entire container environment based on Kubernetes. In reality, the two are not directly comparable, have different roots, and solve different problems. Docker is a platform and tool for building, distributing, and running Docker containers. It offers its own native clustering tool (Docker Swarm) that can be used to orchestrate and schedule containers on machine clusters. Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner. It works around the concept of pods - the scheduling units in the Kubernetes ecosystem, each of which can contain one or more containers - which are distributed among nodes to provide high availability. One can easily run Docker-built containers on a Kubernetes cluster, but Kubernetes itself is not a complete solution and is meant to be extended with custom plugins.
“Kubernetes vs. Docker” is a somewhat misleading phrase. When you break it down, these words don’t mean what many people intend them to mean, because Docker and Kubernetes aren’t direct competitors. Docker is a containerization platform, and Kubernetes is a container orchestrator for container platforms like Docker. The technology that is actually comparable with Kubernetes is Docker Swarm, which is basically an alternative container orchestration tool. Instead of kubelets (the node agent that actually runs containers on Kubernetes cluster nodes), you have the Docker daemon running on each node, and instead of the Kubernetes engine you just have Docker Swarm spanning the multiple nodes that make up the cluster. ----------------------------------------------------------------------------- Docker is a "container" technology: it creates an isolated environment for applications. Kubernetes is an infrastructure for managing those containers. ----------------------------------------------------------------------------- Docker automates building and deploying applications: CI/CD (before and during deployment). Kubernetes automates the scheduling and management of application containers (after container deployment). ----------------------------------------------------------------------------- The Docker platform is for configuring, building, and distributing containers. Kubernetes is an ecosystem for managing a cluster of Docker containers. ----------------------------------------------------------------------------- Docker is mainly used in the local development process: when you're developing an application, you use Docker containers for the different services your application depends on, like databases, message brokers, etc. It is also used in the CI process to build your application and package it into an isolated container environment. Once built, that container gets stored or pushed into a private repository; that is where Kubernetes comes into the game. -----------------------------------------------------------------------------
Kubernetes is becoming ever more popular as a container orchestration solution. Kubernetes is made up of many components that do not know or care about each other; the components all talk to each other through the API server. Each component performs its own function and then exposes metrics that we can collect for monitoring later on. We can break the components down into three main parts:
- The control plane - the master.
- Nodes - where pods get scheduled.
- Pods - hold the containers.
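Because everything goes through the API server, any client can inspect these parts. A small sketch using the official kubernetes Python client, assuming a reachable cluster and a local kubeconfig:
from kubernetes import client, config

# Load credentials from ~/.kube/config, the same file kubectl uses.
config.load_kube_config()
v1 = client.CoreV1Api()

# Nodes: where pods get scheduled.
for node in v1.list_node().items:
    print('node:', node.metadata.name)

# Pods: hold the containers.
for pod in v1.list_pod_for_all_namespaces().items:
    print('pod:', pod.metadata.namespace, pod.metadata.name)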
Cross-site scripting (XSS): XSS attacks enable an attacker to inject client-side scripts into browsers. Django templates protect your project from the majority of XSS attacks. --------------------------------------------------------------------------- Cross-site request forgery (CSRF): CSRF attacks allow a malicious user to execute actions using the credentials of another user. Django has built-in protection against most types of CSRF attacks. --------------------------------------------------------------------------- SQL injection: SQL injection is an attack where a malicious user is able to execute arbitrary SQL code on a database. Django’s querysets are protected from SQL injection since queries are constructed using parameterization. ---------------------------------------------------------------------------
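To make the SQL injection point concrete, a hedged sketch with a hypothetical Django model named Article and a user-supplied string user_input:
# Assuming a Django app with a model named Article and a user-supplied
# string user_input (both hypothetical names):

# Safe: the ORM parameterizes the query for you.
Article.objects.filter(title=user_input)

# Safe: raw SQL with a parameter list, escaped by the database driver.
Article.objects.raw('SELECT * FROM app_article WHERE title = %s', [user_input])

# UNSAFE: user input interpolated directly into the SQL string.
Article.objects.raw("SELECT * FROM app_article WHERE title = '%s'" % user_input)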
Microservices are an application architecture style where independent, self-contained programs, each with a single purpose, communicate with each other over a network. Typically, these microservices can be deployed independently because they have a strong separation of responsibilities via a well-defined specification with significant backward compatibility to avoid sudden dependency breakage. Many successful applications begin with a monolith-first approach, using a single, shared application codebase and deployment. Only after the application proves its usefulness is it broken down into microservice components to ease further development and deployment. This approach is called the "monolith-first" or "MonolithFirst" pattern. Microservices should follow the principle of single responsibility: a microservice handles only a single piece of business logic.
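A single-purpose microservice can be very small. Below is a sketch using only the Python standard library; the port number and the /health endpoint are illustrative choices, not a prescribed convention:
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    # The service has one well-defined responsibility:
    # answer health checks over HTTP.
    def do_GET(self):
        if self.path == '/health':
            body = json.dumps({'status': 'ok'}).encode()
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == '__main__':
    HTTPServer(('0.0.0.0', 8080), HealthHandler).serve_forever()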
https://docs.influxdata.com/telegraf/v1.9/
-----------------------------------------------------------
Introduction: Telegraf is a plugin-driven server agent for collecting and reporting metrics, and is the first piece of the TICK stack. Telegraf has plugins to source a variety of metrics directly from the system it's running on, pull metrics from third-party APIs, or even listen for metrics via statsd and Kafka consumer services. It also has output plugins to send metrics to a variety of other datastores, services, and message queues, including InfluxDB, Graphite, OpenTSDB, Datadog, Librato, Kafka, MQTT, NSQ, and many others.
-----------------------------------------------------------
Installation: (Debian and Ubuntu are different! Take a look at the link below.)
https://docs.influxdata.com/telegraf/v1.9/introduction/installation/
1- curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
2- source /etc/lsb-release
3- echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
4- apt-get update && sudo apt-get install telegraf
5- service telegraf start
-----------------------------------------------------------
Configuration: Create a configuration file with default input and output plugins:
telegraf config > telegraf.conf
-----------------------------------------------------------
For selecting the operating system and hosting software, refer to the following link: https://certbot.eff.org/
-----------------------------------------------------------------
Nginx:
1- apt install python-certbot-nginx
2- Add the following lines to your project's nginx config:
location /.well-known {
    alias /srv/me/.well-known;
}
3- /etc/init.d/nginx restart
4- certbot --authenticator webroot --installer nginx
When asked for the "webroot", enter "/srv/<your_project>/".
-----------------------------------------------------------------
I had to repeat step 4 three times to finally get it to work! Each time it raised errors like:
Domain: pwa.tiptong.ir
Type: unauthorized
Detail: Invalid response from....
To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address.
I added "listen 443 ssl;" to tiptong.conf in the nginx config files; I think that solved the above problem.
We were unable to install your certificate, however, we successfully restored your server to its prior configuration.
Running the same step 4 command again fixed the above error. Weird!
-----------------------------------------------------------------
Installation: http://nsq.io/deployment/installing.html
1- Download and extract: https://s3.amazonaws.com/bitly-downloads/nsq/nsq-1.0.0-compat.linux-amd64.go1.8.tar.gz
2- Copy: cp nsq-1.0.0-compat.linux-amd64.go1.8/bin/* /usr/local/bin/
--------------------------------------------------------------------------------------
Quick Start:
1- In one shell, start nsqlookupd:
$ nsqlookupd
2- In another shell, start nsqd:
$ nsqd --lookupd-tcp-address=127.0.0.1:4160
3- In another shell, start nsqadmin:
$ nsqadmin --lookupd-http-address=127.0.0.1:4161
4- Publish an initial message (this also creates the topic in the cluster):
$ curl -d 'hello world 1' 'http://127.0.0.1:4151/pub?topic=test'
5- Finally, in another shell, start nsq_to_file:
$ nsq_to_file --topic=test --output-dir=/tmp --lookupd-http-address=127.0.0.1:4161
6- Publish more messages to nsqd:
$ curl -d 'hello world 2' 'http://127.0.0.1:4151/pub?topic=test'
$ curl -d 'hello world 3' 'http://127.0.0.1:4151/pub?topic=test'
7- To verify things worked as expected, open http://127.0.0.1:4171/ in a web browser to view the nsqadmin UI and see statistics. Also check the contents of the log files (test.*.log) written to /tmp.
The important lesson here is that nsq_to_file (the client) is not explicitly told where the test topic is produced; it retrieves this information from nsqlookupd and, despite the timing of the connection, no messages are lost.
--------------------------------------------------------------------------------------
Clustering NSQ:
nsqlookupd
nsqd --lookupd-tcp-address=10.10.0.101:4160,10.10.0.102:4160,10.10.0.103:4160
nsqadmin --lookupd-http-address=10.10.0.101:4161,10.10.0.102:4161,10.10.0.103:4161
--------------------------------------------------------------------------------------
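Client libraries use nsqlookupd for discovery the same way nsq_to_file does above. A consumer sketch using the pynsq Python library (pip install pynsq); the topic name mirrors the quick start and the channel name is arbitrary:
import nsq  # pip install pynsq

def handler(message):
    # Process one message; returning True marks it as finished.
    print(message.body)
    return True

# The reader discovers where the 'test' topic is produced by asking
# nsqlookupd, rather than being pointed at a specific nsqd instance.
r = nsq.Reader(
    topic='test',
    channel='py-client',
    lookupd_http_addresses=['http://127.0.0.1:4161'],
    message_handler=handler,
)
nsq.run()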
1- apt-get install snmp snmpd
2- Edit /etc/snmp/snmpd.conf:
agentAddress udp:0.0.0.0:161
view systemonly included .1
Add to the bottom:
com2sec readonly 10.10.0.198 public
com2sec readonly 10.10.0.199 public
com2sec readonly localhost public
3- /etc/init.d/snmpd restart
-------------------------------------------------------------------------
To check whether snmpd is running, and which IP/port it is listening on, you can use:
netstat -apn | grep snmpd
-------------------------------------------------------------------------
Test the configuration with an SNMP walk:
snmpwalk -v1 -c public localhost
snmpwalk -v1 -c public 10.10.0.192
-------------------------------------------------------------------------
To get information based on an OID:
snmpwalk -v1 -c public localhost iso.3.6.1.2.1.1.1
The OID tree: http://www.oidview.com/mibs/712/LANART-AGENT.html
-------------------------------------------------------------------------
Integrated Lights-Out (iLO) is a remote server management processor embedded on the system boards of HP ProLiant and Blade servers that allows controlling and monitoring of HP servers from a remote location. HP iLO management is a powerful tool that provides multiple ways to configure, update, monitor, and run servers remotely. The embedded iLO management card has its own network connection and IP address to which server administrators can connect via Domain Name System (DNS)/Dynamic Host Configuration Protocol (DHCP) or through a separate dedicated management network. iLO provides a remote Web-based console, which can be used to administer the server remotely. The iLO port is an Ethernet port, which can be enabled through the ROM-Based Setup Utility (RBSU).
To make a script run when the server starts and stops:
First make the script executable:
sudo chmod 755 /etc/init.d/<script name>
Then register it (the script should live in /etc/init.d; update-rc.d takes the script's name, not a full path):
sudo update-rc.d -f <script name> defaults