Recent Notes

50 Notes
+ Handy Commands (Dec. 5, 2023, 11:11 a.m.)

git log -p -1
git reset --hard HEAD~1
git merge --no-commit --no-ff origin/feature/the-branch-name

+ ORM - Update a field value (Dec. 4, 2023, 11:33 a.m.)

from django.db.models import F, Value
from django.db.models.functions import Concat

TemplateType.objects.filter(uuid=self.kwargs.get('uuid')).update(
    deleted=True,
    name=Concat(F('name'), Value('-DELETED-'), F('id'))
)

+ Laptop Privacy Features (Nov. 10, 2023, 11:12 a.m.)

- Trusted Platform Module (TPM) 2.0: TPM 2.0 is a security chip that encrypts your passwords and other sensitive data.
- Fingerprint reader: The fingerprint reader allows you to log in to your laptop with your fingerprint, which is a more secure way to log in than using a password.
- Camera privacy shutter: The camera privacy shutter allows you to physically block the webcam so that no one can spy on you even if the webcam is turned on.
- Kensington Security Slot: The Kensington Security Slot allows you to secure your laptop to a desk or other object, which helps to prevent theft.

+ Override the DHCP nameserver (Oct. 17, 2023, 3:51 p.m.)

Open the following file:
vim /etc/dhcp/dhclient.conf

Uncomment the following line and add the nameservers, comma-separated:
prepend domain-name-servers

Then restart the dhclient service:
dhclient -r; dhclient

+ Test ajax views (Sept. 19, 2023, 3:52 p.m.)

response = self.client.post(
    reverse('view_name', args=(obj.id,)),
    **{'HTTP_X_REQUESTED_WITH': 'XMLHttpRequest'}
)

+ CEFR Level IELTS (Sept. 8, 2023, 3:49 p.m.)

CEFR Level   Language Proficiency Level   Corresponding IELTS Band Score
C2           Expert User                  8.5 - 9.0
C1           Very Good User               8.0
C1           Good User                    7.0 - 7.5
B2           Competent User               6.0 - 6.5
B2           Modest User                  5.0 - 5.5
B1           Limited User                 4.0 - 4.5
A2           Extremely Limited User       3.0
A1           Intermittent User            2.0
-            Non-User                     1.0

https://www.kanan.co/blog/cefr-level-ielts/

+ Show conflicted files (Aug. 30, 2023, 12:40 p.m.)

git diff --check

+ Registry - Deploy (Aug. 29, 2023, 10:48 p.m.)

version: '3'

services:
  registry:
    image: registry
    networks:
      - traefik-public
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
      REGISTRY_HTTP_SECRET: MohsenRandomString
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/registry.password
    volumes:
      - ./data:/data
      - ./auth:/auth
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.docker-registry.rule=Host(`docker.mohsenhassani.com`)
        - traefik.http.routers.docker-registry.entrypoints=http
        - traefik.http.routers.docker-registry.middlewares=https-redirect
        - traefik.http.routers.docker-registry.service=docker-registry-secure
        - traefik.http.routers.docker-registry-https.rule=Host(`docker.mohsenhassani.com`)
        - traefik.http.routers.docker-registry-https.entrypoints=https
        - traefik.http.routers.docker-registry-https.tls=true
        - traefik.http.routers.docker-registry-https.tls.certresolver=le
        - traefik.http.routers.docker-registry-https.service=docker-registry-secure
        - traefik.http.services.docker-registry-secure.loadbalancer.server.port=5000

networks:
  traefik-public:
    external: true
----------------------------------------------------------------------------
apt install apache2-utils
mkdir data auth
cd auth
htpasswd -Bc registry.password mohsen
----------------------------------------------------------------------------
Push To Our New Registry:
docker pull python:3.11-slim-buster
docker tag python:3.11-slim-buster docker.mohsenhassani.com/python:3.11-slim-buster
docker push docker.mohsenhassani.com/python:3.11-slim-buster
----------------------------------------------------------------------------
Creating a Custom Docker Image To Push To Our New Registry:
If you want to customize the image, follow the steps in section A first; otherwise skip straight to section B:

Section A: (Customizing an image)
docker pull python:3.11-slim-buster
# Make whatever changes you need inside the container:
docker run -t -i python:3.11-slim-buster /bin/bash
# Then, exit from the container:
exit
docker ps -lq
docker commit 3edb10c50e41 python-mohsen-3.8-slim-buster
docker stop 3edb10c50e41
docker images | grep mohsen

Section B: (Publishing an Image to the Private Docker Registry)
docker login https://docker.mohsenhassani.com
# After entering the credentials you must receive: "Login Succeeded"
# Tag your custom image so that it matches your registry server domain name:
docker tag python-mohsen-3.8-slim-buster docker.mohsenhassani.com/python-mohsen-3.8-slim-buster
docker images | grep mohsen
docker push docker.mohsenhassani.com/python-mohsen-3.8-slim-buster
----------------------------------------------------------------------------
Pulling a Docker Image from the Private Docker Registry:
docker login https://docker.mohsenhassani.com
docker pull docker.mohsenhassani.com/python:3.11-slim-buster
----------------------------------------------------------------------------
Run the docker run command to create a container from the downloaded image:
docker run -it docker.mohsenhassani.com/python:3.11-slim-buster /bin/bash
----------------------------------------------------------------------------

+ (?=\w) (Aug. 25, 2023, 6:01 p.m.)

Positive lookahead: Matches a group after the main expression without including it in the result. ?= is a positive lookahead, a type of zero-width assertion. What it's saying is that the captured match must be followed by whatever is within the parentheses but that part isn't captured.
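A small Python illustration of the assertion (the pattern and sample string are my own, chosen only to show the behavior):

```python
import re

# The lookahead (?=\w) requires a word character after "foo",
# but that character is not consumed or included in the match.
matches = re.findall(r"foo(?=\w)", "foobar foo foo1 foo.")
print(matches)  # ['foo', 'foo'] (only the occurrences followed by a word character)
```

Note that each match is just "foo": the word character that satisfied the lookahead stays outside the result.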

+ Online Tool (Aug. 25, 2023, 6 p.m.)

https://regexr.com/

+ Boolean / Logical operators (Aug. 22, 2023, 6:25 p.m.)

Python has three Boolean operators, or logical operators: "and", "or", and "not".
----------------------------------------------------------------------------------------------
In Python, the Boolean type "bool" is a subclass of "int" and can take the values True or False.

>>> type(True)
<class 'bool'>
>>> isinstance(True, int)
True
>>> int(True)
1
>>> int(False)
0
----------------------------------------------------------------------------------------------
Operator   Logical Operation
and        Conjunction
or         Disjunction
not        Negation
----------------------------------------------------------------------------------------------
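A few runnable lines illustrating both points above, that bool is an int subclass and that "and"/"or" short-circuit (the variable and function names are illustrative):

```python
# bool is a subclass of int, so True/False behave like 1/0 in arithmetic.
assert isinstance(True, int)
assert True + True == 2

# "and"/"or" short-circuit and return one of their operands,
# not necessarily a bool.
result = 0 or "default"   # 0 is falsy, so the right operand is returned
print(result)             # default

def noisy():
    print("evaluated")
    return True

# The right operand is never evaluated when the left already decides:
value = False and noisy()  # prints nothing; value is False
```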

+ Firefox doesn't allow pasting into WhatsApp Web (Aug. 21, 2023, 5:13 p.m.)

about:config → dom.event.clipboardevents.enabled to false.

+ Assigning an empty dictionary as the default value for the parameter (Aug. 21, 2023, 12:42 p.m.)

def my_func(name, quantity, my_list={}):
    pass

When you assign an empty dictionary as the default value for the parameter my_list, the first time you call the function this dictionary is empty. However, since dictionaries are a mutable type, when you assign values to the dictionary, the default dictionary is no longer empty. When you call the function a second time and the default value for my_list is required again, the default dictionary is no longer empty, because it was populated the first time you called the function. Since you're calling the same function, you're using the same default dictionary stored in memory. This behavior doesn't happen with immutable data types.

The solution to this problem is to use another default value, such as None, and then create an empty dictionary within the function when no optional argument is passed:

def my_func(name, quantity, my_list=None):
    if my_list is None:
        my_list = {}
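A runnable demonstration of this pitfall and the fix (function and parameter names are illustrative):

```python
# The shared-default pitfall: the same dict object persists across calls.
def buggy(item, bucket={}):
    bucket[item] = True
    return bucket

print(buggy("a"))  # {'a': True}
print(buggy("b"))  # {'a': True, 'b': True} -- the default dict persisted!

# The recommended fix: use None as a sentinel and build a fresh dict inside.
def fixed(item, bucket=None):
    if bucket is None:
        bucket = {}
    bucket[item] = True
    return bucket

print(fixed("a"))  # {'a': True}
print(fixed("b"))  # {'b': True} -- a fresh dict each call
```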

+ Firefox - Adjust default volume 100% (Aug. 17, 2023, 6:34 p.m.)

1- Go to the configurations page: about:config
2- Search for the following: media.volume_scale
3- Set the value to 5.0

+ List untracked files (Aug. 16, 2023, 6:13 p.m.)

git status -u
git status --untracked-files

+ Tag (Aug. 16, 2023, 10:39 a.m.)

Types of Git Tags:

- Lightweight Tags: These tags are simple pointers to a specific commit, with no additional information. They are useful for quick or internal references but lack the benefits provided by annotated tags.

- Annotated Tags: These tags include additional metadata, such as the tagger's name, email, date, and an optional message. Annotated tags are recommended for public releases or significant milestones, as they provide a more complete record of the project's history.
------------------------------------------------------------------------------------------
Creating a Git Tag:

To create a lightweight tag:
git tag <tagname>
Example: git tag v1.0
======================
To create an annotated tag:
git tag -a <tagname> -m "Your message here"
Example: git tag -a v1.0 -m "Initial release"
------------------------------------------------------------------------------------------
Pushing Git Tags to a Remote Repository:

By default, Git tags are not pushed to the remote repository when you execute git push. To push a specific tag, use the following command:
git push origin <tagname>
Example: git push origin v3.0
------------------------------------------------------------------------------------------

+ Snake Case vs Camel Case vs Pascal Case vs Kebab Case (Aug. 10, 2023, 2:11 p.m.)

What is Snake Case?
Snake case separates each word with an underscore character (_). When using snake case, all letters need to be lowercase.
Here are some examples of how you would use snake case:
number_of_donuts = 34
fave_phrase = "Hello World"
NUMBER_OF_DONUTS = 34
FAVE_PHRASE = "Hello World"
----------------------------------------------------------------------------------------------------------------
What is Kebab Case?
Kebab case is very similar to snake case. The difference is that kebab case separates each word with a dash character (-) instead of an underscore. So, all words are lowercase, and each word is separated by a dash. Kebab case is another one of the most human-readable ways of combining multiple words into a single word.
Here are some examples of how you would use kebab case:
number-of-donuts = 34
fave-phrase = "Hello World"
You will encounter kebab case mostly in URLs.
----------------------------------------------------------------------------------------------------------------
What is Camel Case?
When using camel case, you start by making the first word lowercase. Then, you capitalize the first letter of each word that follows. So, a capital letter appears at the start of the second word and at each subsequent word.
Here are some examples of how you would use camel case:
numberOfDonuts = 34
favePhrase = "Hello World"
----------------------------------------------------------------------------------------------------------------
What is Pascal Case?
Pascal case is similar to camel case. The only difference between the two is that Pascal case requires the first letter of the first word to also be capitalized. So, when using Pascal case, every word starts with an uppercase letter (in contrast to camel case, where the first word is lowercase).
Here are some examples of how you would use Pascal case:
NumberOfDonuts = 34
FavePhrase = "Hello World"
----------------------------------------------------------------------------------------------------------------
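The conversions between these conventions are mechanical; a minimal Python sketch (the helper names are my own):

```python
def snake_to_camel(name: str) -> str:
    """snake_case -> camelCase: first word stays lowercase."""
    first, *rest = name.split("_")
    return first + "".join(word.capitalize() for word in rest)

def snake_to_pascal(name: str) -> str:
    """snake_case -> PascalCase: every word is capitalized."""
    return "".join(word.capitalize() for word in name.split("_"))

def snake_to_kebab(name: str) -> str:
    """snake_case -> kebab-case: underscores become dashes."""
    return name.replace("_", "-")

print(snake_to_camel("number_of_donuts"))   # numberOfDonuts
print(snake_to_pascal("number_of_donuts"))  # NumberOfDonuts
print(snake_to_kebab("number_of_donuts"))   # number-of-donuts
```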

+ Branch Naming Convention (Aug. 10, 2023, 12:21 p.m.)

https://dev.to/couchcamote/git-branching-name-convention-cch
https://dev.to/varbsan/a-simplified-convention-for-naming-branches-and-commits-in-git-il4
------------------------------------------------------------------------------------
***** Code Flow Branches *****
These branches, which we expect to be permanently available on the repository, follow the flow of code changes starting from development until production.

Development (dev):
All new features and bug fixes should be brought to the development branch. Resolving developer code conflicts should be done as early as here.

QA/Test (test):
Contains all code ready for QA testing.

Staging (staging, optional):
Contains tested features that the stakeholders want to be available either for a demo or a proposal before elevating into production. Decisions are made here if a feature should finally be brought to the production code.

Master (master):
If the repository is published, the production branch is the default branch being presented.
------------------------------------------------------------------------------------
***** Temporary Branches *****
As the name implies, these are disposable branches that can be created and deleted as needed by the developer or deployer.

Feature:
Any code changes for a new module or use case should be done on a feature branch. This branch is created based on the current development branch. When all changes are done, a Pull Request/Merge Request is needed to put all of these into the development branch.
Examples:
feature/integrate-swagger
feature/JIRA-1234
feature/JIRA-1234_support-dark-theme
It is recommended to use all lowercase letters and hyphens (-) to separate words unless it is a specific item name or ID. An underscore (_) can be used to separate the ID and description.

Bug Fix:
If the code changes made from the feature branch were rejected after a release, sprint, or demo, any necessary fixes after that should be done on the bugfix branch.
Examples:
bugfix/more-gray-shades
bugfix/JIRA-1444_gray-on-blur-fix

Hot Fix:
If there is a need to fix a blocker, do a temporary patch, or apply a critical framework or configuration change that should be handled immediately, it should be created as a hotfix. It does not follow the scheduled integration of code and could be merged directly into the production branch, then into the development branch later.
Examples:
hotfix/disable-endpoint-zero-day-exploit
hotfix/increase-scaling-threshold

Experimental:
Any new feature or idea that is not part of a release or a sprint. A branch for playing around.
Examples:
experimental/dark-theme-support

Build:
A branch specifically for creating specific build artifacts or for doing code coverage runs.
Examples:
build/jacoco-metric

Release:
A branch for tagging a specific release version.
Examples:
release/myapp-1.01.123
Git also supports tagging a specific commit history of the repository. A release branch is used if there is a need to make the code available for checkout or use.

Merging:
A temporary branch for resolving merge conflicts, usually between the latest development and a feature or hotfix branch. This can also be used if two branches of a feature being worked on by multiple developers need to be merged, verified, and finalized.
Examples:
merge/dev_lombok-refactoring
merge/combined-device-support
------------------------------------------------------------------------------------
A git branch should start with a category. Pick one of these: feature, bugfix, hotfix, or test.
- feature is for adding, refactoring, or removing a feature
- bugfix is for fixing a bug
- hotfix is for changing code with a temporary solution and/or without following the usual process (usually because of an emergency)
- test is for experimenting outside of an issue/ticket

After the category, there should be a "/" followed by a reference to the issue/ticket you are working on. If there's no reference, just add no-ref.

After the reference, there should be another "/" followed by a description that sums up the purpose of this specific branch. This description should be short and "kebab-cased". By default, you can use the title of the issue/ticket you are working on. Just replace any special character with "-".

To sum up, follow this pattern when branching:
git branch <category/reference/description-in-kebab-case>

git branch feature/issue-42/create-new-button-component
git branch bugfix/issue-342/button-overlap-form-on-mobile
git branch hotfix/no-ref/registration-form-not-working
git branch test/no-ref/refactor-components-with-atomic-design
------------------------------------------------------------------------------------
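The <category/reference/description-in-kebab-case> pattern can be sketched as a small Python helper (the function name and the exact slugging rule are my own assumptions, not part of the convention):

```python
import re

CATEGORIES = {"feature", "bugfix", "hotfix", "test"}

def branch_name(category: str, reference: str, title: str) -> str:
    """Build a branch name like feature/issue-42/create-new-button-component."""
    if category not in CATEGORIES:
        raise ValueError(f"category must be one of {sorted(CATEGORIES)}")
    # Kebab-case the title: lowercase it, replace runs of special
    # characters with a single '-', and trim leading/trailing dashes.
    description = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{category}/{reference or 'no-ref'}/{description}"

print(branch_name("feature", "issue-42", "Create new Button component"))
# feature/issue-42/create-new-button-component
```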

+ Preview a Merge (Aug. 8, 2023, 10:52 a.m.)

git diff stage..origin/my-branch
git log stage..origin/my-branch

+ Delete the last n commits (Aug. 7, 2023, 3:54 p.m.)

git reset --hard HEAD~2

+ Delete unpushed commits (Aug. 7, 2023, 3:18 p.m.)

Delete the most recent commit, keeping the work you've done:
git reset --soft HEAD~1
-------------------------------------------------------------------------------------------------
Delete the most recent commit, destroying the work you've done:
git reset --hard origin/stage
-------------------------------------------------------------------------------------------------

+ List unpushed commits (Aug. 7, 2023, 3:07 p.m.)

See all commits on all branches that have not yet been pushed:
git log --branches --not --remotes
-------------------------------------------------------------------------------------------
See the most recent commit on each branch, as well as the branch names:
git log --branches --not --remotes --simplify-by-decoration --decorate --oneline
-------------------------------------------------------------------------------------------
git cherry -v origin stage
-------------------------------------------------------------------------------------------
git log origin/stage..
-------------------------------------------------------------------------------------------
git reflog
-------------------------------------------------------------------------------------------

+ Global gitignore (Aug. 3, 2023, 11:08 a.m.)

Configuring a Global .gitignore

# Check if git already has a global gitignore:
git config --get core.excludesFile

# Create it if it doesn't exist, and tell git where the file is:
touch ~/.gitignore
git config --global core.excludesFile '~/.gitignore'
--------------------------------------------------------------------------------------------------
Configuring a Local .gitignore

$ git config --local core.excludesFile .mygitignore
--------------------------------------------------------------------------------------------------
.git/info/exclude:
This file is your own gitignore inside your local git folder, which means it will not be committed or shared with anyone else. You can basically edit this file and stop tracking any (untracked) file.
--------------------------------------------------------------------------------------------------

+ List, Slice, Pop, shallow/deep copy (July 26, 2023, 10:53 a.m.)

>>> l = [1, 2, 3, 4, 5]
>>> l.pop()
5
>>> l
[1, 2, 3, 4]
>>> l.pop(0)
1
>>> l
[2, 3, 4]
>>> l.pop(1)
3
>>> l
[2, 4]
>>> l.pop(-1)
4
>>> l
[2]
------------------------------------------------------------------------------------
>>> [[0] * 5 for _ in range(5)]
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]

This list comprehension creates five independent rows. The version below looks the same when printed, but it repeats one and the same inner list five times, so modifying one row modifies all of them:

>>> [[0] * 5] * 5
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
------------------------------------------------------------------------------------
Comparing Lists:
When comparing lists, Python runs an item-by-item comparison.

>>> [2, 3] == [2, 3]
True

In the expression above, Python compares 2 and 2, which are equal. Then it compares 3 and 3 to conclude that both lists are equal.

In some situations, Python will run what's known as short-circuit evaluation. This type of evaluation occurs when Python can determine the truth value of a Boolean expression before evaluating all the parts involved:

>>> [5, 6] != [7, 6]
True

In this example, Python compares 5 and 7. They're different, so Python returns True immediately without comparing 6 and 6, because it can already conclude that both lists are different. This behavior can make the code more efficient.
------------------------------------------------------------------------------------
Sort:
>>> l2 = [4, 2, 1, 3, 5]
>>> l3 = sorted(l2)
>>> l3
[1, 2, 3, 4, 5]
>>> l2
[4, 2, 1, 3, 5]
>>> l2.sort()
>>> l2
[1, 2, 3, 4, 5]
>>> l2.reverse()
>>> l2
[5, 4, 3, 2, 1]
------------------------------------------------------------------------------------
>>> digits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> digits[::-1]
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

With this operator, you create a reversed copy of the original list.
------------------------------------------------------------------------------------
>>> digits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> reversed(digits)
<list_reverseiterator object at 0x10b261a50>
>>> list(reversed(digits))
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
>>> digits
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

When you call reversed() with a list as an argument, you get a reverse iterator object. This iterator yields values from the input list in reverse order. In this example, you use the list() constructor to consume the iterator and get the reversed data as a list.
------------------------------------------------------------------------------------
>>> cache = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> cache[0] = 2
>>> cache
[2, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> cache.clear()  # or: cache[:] = []
>>> cache
[]
------------------------------------------------------------------------------------
>>> numbers = [1, 2, 0, 0, 0, 0, 4, 5, 6, 7]
>>> numbers[2:6] = [3]
>>> numbers
[1, 2, 3, 4, 5, 6, 7]
------------------------------------------------------------------------------------
>>> numbers = [1, 5, 6, 7]
>>> numbers[1:1] = [2, 3, 4]
>>> numbers
[1, 2, 3, 4, 5, 6, 7]
------------------------------------------------------------------------------------
Python's built-in mutable collections like lists, dicts, and sets can be copied by calling their factory functions on an existing collection:

new_list = list(original_list)
new_dict = dict(original_dict)
new_set = set(original_set)

>>> l1 = [1, 2, 3]
>>> l2 = list(l1)
>>> l1
[1, 2, 3]
>>> l2
[1, 2, 3]
>>> l1[0] = 7
>>> l1
[7, 2, 3]
>>> l2
[1, 2, 3]

>>> l1 = [[1], [2], [3]]
>>> l2 = list(l1)
>>> l1[0][0] = 7
>>> l2
[[7], [2], [3]]
>>> l1
[[7], [2], [3]]

Shallow Copy:
A shallow copy means constructing a new collection object and then populating it with references to the child objects found in the original. In essence, a shallow copy is only one level deep.
The copying process does not recurse and therefore won't create copies of the child objects themselves.

Deep Copy:
A deep copy makes the copying process recursive. It means first constructing a new collection object and then recursively populating it with copies of the child objects found in the original. Copying an object this way walks the whole object tree to create a fully independent clone of the original object and all of its children.
------------------------------------------------------------------------------------
Making Shallow Copies
In the example below, we'll create a new nested list and then shallowly copy it with the list() factory function:

>>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> ys = list(xs)  # Make a shallow copy

This means ys will now be a new and independent object with the same contents as xs. You can verify this by inspecting both objects:

>>> xs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> ys
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]

To confirm ys really is independent of the original, let's devise a little experiment. You could try to add a new sublist to the original (xs) and then check to make sure this modification didn't affect the copy (ys):

>>> xs.append(['new sublist'])
>>> xs
[[1, 2, 3], [4, 5, 6], [7, 8, 9], ['new sublist']]
>>> ys
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]

As you can see, this had the expected effect. Modifying the copied list at a "superficial" level was no problem at all. However, because we only created a shallow copy of the original list, ys still contains references to the original child objects stored in xs. These children were not copied. They were merely referenced again in the copied list. Therefore, when you modify one of the child objects in xs, this modification will be reflected in ys as well; that's because both lists share the same child objects.
The copy is only a shallow, one-level-deep copy:

>>> xs[1][0] = 'X'
>>> xs
[[1, 2, 3], ['X', 5, 6], [7, 8, 9], ['new sublist']]
>>> ys
[[1, 2, 3], ['X', 5, 6], [7, 8, 9]]

In the above example, we (seemingly) only made a change to xs. But it turns out that both sublists at index 1, in xs and ys, were modified. Again, this happened because we had only created a shallow copy of the original list. Had we created a deep copy of xs in the first step, both objects would've been fully independent. This is the practical difference between shallow and deep copies of objects.
------------------------------------------------------------------------------------
Making Deep Copies
Let's repeat the previous list-copying example, but with one important difference. This time we're going to create a deep copy using the deepcopy() function defined in the copy module instead:

>>> import copy
>>> xs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> zs = copy.deepcopy(xs)

When you inspect xs and its clone zs that we created with copy.deepcopy(), you'll see that they both look identical again, just like in the previous example:

>>> xs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> zs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]

However, if you make a modification to one of the child objects in the original object (xs), you'll see that this modification won't affect the deep copy (zs). Both objects, the original and the copy, are fully independent this time.
xs was cloned recursively, including all of its child objects:

>>> xs[1][0] = 'X'
>>> xs
[[1, 2, 3], ['X', 5, 6], [7, 8, 9]]
>>> zs
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
------------------------------------------------------------------------------------
slice(start, stop, step)

>>> letters = ["A", "a", "B", "b", "C", "c", "D", "d"]
>>> upper_letters = letters[slice(0, None, 2)]
>>> upper_letters
['A', 'B', 'C', 'D']
>>> lower_letters = letters[slice(1, None, 2)]
>>> lower_letters
['a', 'b', 'c', 'd']

Passing None to any argument of slice() tells the function that you want to rely on its internal default value, which is the same as the equivalent index's default in the slicing operator. In these examples, you pass None to stop, which tells slice() that you want to use len(letters) as the value for stop.
------------------------------------------------------------------------------------

+ Git Branching Strategies (July 18, 2023, 3:34 p.m.)

Branching Strategy:
A branching strategy is the strategy that software development teams adopt when writing, merging, and deploying code with a version control system.
----------------------------------------------------------
Common Git Branching Strategies:
- GitFlow
- GitHub Flow
- GitLab Flow
- Trunk-based development
----------------------------------------------------------
GitFlow:
- Master
- Develop
- Feature - To develop new features that branch off the develop branch
- Release - Helps prepare a new production release; usually branched from the develop branch and must be merged back to both develop and master
- Hotfix - Also helps prepare for a release but, unlike release branches, hotfix branches arise from a bug that has been discovered and must be resolved; it enables developers to keep working on their own changes on the develop branch while the bug is being fixed.

GitFlow pros and cons:
The most obvious benefit of this model is that it allows for parallel development to protect the production code, so the main branch remains stable for release while developers work on separate branches. Moreover, the various types of branches make it easier for developers to organize their work. This strategy contains separate and straightforward branches for specific purposes, though for that reason it may become complicated for many use cases. It is also ideal when handling multiple versions of the production code.

However, as more branches are added, they may become difficult to manage as developers merge their changes from the development branch to the main. Developers will first need to create the release branch, then make sure any final work is also merged back into the development branch, and then that release branch will need to be merged into the main branch.
----------------------------------------------------------
GitHub Flow:
GitHub Flow is a simpler alternative to GitFlow, ideal for smaller teams as they don't need to manage multiple versions.
Unlike GitFlow, this model doesn't have release branches. You start off with the main branch, then developers create feature branches that stem directly from the master to isolate their work, which is then merged back into the main. The feature branch is then deleted.

The main idea behind this model is to keep the master code in a constant deployable state, and hence it can support continuous integration and continuous delivery processes.

GitHub Flow pros and cons:
GitHub Flow focuses on Agile principles, so it is a fast and streamlined branching strategy with short production cycles and frequent releases. This strategy also allows for fast feedback loops so that teams can quickly identify issues and resolve them. Since there is no development branch, you are testing and automating changes to one branch, which allows for quick and continuous deployment. This strategy is particularly suited for small teams and web applications, and it is ideal when you need to maintain a single production version.

On the other hand, this strategy is not suitable for handling multiple versions of the code. Furthermore, the lack of development branches makes it more susceptible to bugs, and it can lead to unstable production code if branches are not properly tested before merging with the master; release preparation and bug fixes happen in this branch. The master branch, as a result, can become cluttered more easily, as it serves as both a production and development branch. A further disadvantage is that this model is more suited to small teams; as teams grow, merge conflicts can occur because everyone is merging to the same branch, and there is a lack of transparency, meaning developers cannot see what other developers are working on.
----------------------------------------------------------
GitLab Flow:
GitLab Flow is a simpler alternative to GitFlow that combines feature-driven development and feature branching with issue tracking.
With GitFlow, developers create a develop branch and make that the default, while GitLab Flow works with the main branch right away.

GitLab Flow is great when you want to maintain multiple environments and when you prefer to have a staging environment separate from the production environment. Then, whenever the main branch is ready to be deployed, you can merge it back into the production branch and release it. Thus, this strategy offers proper isolation between environments, allowing developers to maintain several versions of software in different environments.

While GitHub Flow assumes that you can deploy into production whenever you merge a feature branch into the master, GitLab Flow seeks to resolve that issue by allowing the code to pass through internal environments before it reaches production. Therefore, this method is suited for situations where you don't control the timing of the release, such as an iOS app that needs to go through App Store validation first, or when you have specific deployment windows.
----------------------------------------------------------
Trunk-based development:
Trunk-based development is a branching strategy that in fact requires no branches; instead, developers integrate their changes into a shared trunk at least once a day. This shared trunk should be ready for release at any time.

The main idea behind this strategy is that developers make smaller changes more frequently, and thus the goal is to limit long-lasting branches and avoid merge conflicts, as all developers work on the same branch. In other words, developers commit directly to the trunk without the use of branches.

Consequently, trunk-based development is a key enabler of continuous integration (CI) and continuous delivery (CD), since changes are made to the trunk more frequently, often multiple times a day (CI), which allows features to be released much faster (CD). This strategy is often combined with feature flags.
As the trunk is always kept ready for release, feature flags help decouple deployment from release: changes that are not ready can be wrapped in a feature flag and kept hidden, while completed features can be released to end users without delay.

Trunk-based development pros and cons:
As we've seen, trunk-based development paves the way for continuous integration, as the trunk is kept constantly updated. It also enhances collaboration: because commits are made directly to the trunk without branches, developers have better visibility of the changes other developers are making. This is unlike other branching methods, where each developer works independently in their own branch and changes only become visible after merging into the main branch.

Because trunk-based development does not require branches, it eliminates the stress of long-lived branches and hence merge conflicts, the so-called 'merge hell', as developers push small changes much more often. This also makes any conflicts that do arise easier to resolve. Finally, this strategy allows for quicker releases: the shared trunk is kept in a constantly releasable state with a continuous stream of work being integrated, which results in more stable releases.

However, this strategy suits more senior developers, as it offers a great amount of autonomy that less experienced developers might find daunting, since they interact directly with the shared trunk. For a more junior team whose work you may need to monitor closely, you may opt for a branch-based Git strategy instead.
----------------------------------------------------------
How to choose the best branching strategy for your team:
When first starting out, it's best to keep things simple, so GitHub Flow or trunk-based development may work best initially. They are also ideal for smaller teams that only need to maintain a single version of a release.
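The simple branch-and-merge cycle these lightweight strategies rely on can be sketched with plain git in a throwaway repository (branch and file names are made up):

```shell
# Throwaway demo repository; branch and file names are hypothetical.
git init -q flow-demo && cd flow-demo
git config user.email demo@example.com && git config user.name Demo
git checkout -q -b main
git commit -q --allow-empty -m "initial commit"

# 1. Branch off main with a descriptive name.
git checkout -q -b feature/login-form

# 2. Commit work on the feature branch.
echo '<form></form>' > login.html
git add login.html
git commit -q -m "Add login form"

# 3. After review, merge back into main and delete the branch.
git checkout -q main
git merge --no-ff -q -m "Merge feature/login-form" feature/login-form
git branch -d feature/login-form

# main now holds the feature and stays in a deployable state.
git log --oneline
```

In a real team the branch would be pushed to a shared remote and merged via a pull request; the local commands above show only the branch lifecycle itself.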
GitFlow is great for open-source projects that require strict access control to changes. This is especially important because open-source projects allow anyone to contribute, and with GitFlow you can check what is being introduced into the source code. However, GitFlow, as previously mentioned, is not suitable when you want to implement a DevOps environment. In that case, the other strategies discussed are a better fit for an Agile DevOps process and will support your CI and CD pipeline.
----------------------------------------------------------
GitHub Flow
GitHub Flow is a lightweight workflow. It was created by GitHub in 2011 and follows these 6 principles:
1- Anything in the master branch is deployable.
2- To work on something new, create a branch off master and give it a descriptive name (e.g. new-oauth2-scopes).
3- Commit to that branch locally and regularly push your work to the same-named branch on the server.
4- When you need feedback or help, or you think the branch is ready for merging, open a pull request.
5- After someone else has reviewed and signed off on the feature, you can merge it into master.
6- Once it is merged and pushed to master, you can and should deploy it immediately.

Advantages
- It is friendly to Continuous Delivery and Continuous Integration.
- It is a simpler alternative to GitFlow.
- It is ideal when you need to maintain a single version in production.

Disadvantages
- The production code can become unstable more easily.
- It is not adequate when you need release plans.
- It doesn't say anything about deployment, environments, releases, or issues.
- It isn't recommended when multiple versions in production are needed.
----------------------------------------------------------
GitLab Flow
GitLab Flow is a workflow created by GitLab in 2014. It combines feature-driven development and feature branches with issue tracking.
The main difference between GitLab Flow and GitHub Flow is the environment branches GitLab Flow has (e.g. staging and production), because there will be projects that can't deploy to production every time a feature branch is merged (e.g. SaaS applications and mobile apps).

GitLab Flow is based on 11 rules:
1- Use feature branches; no direct commits on master.
2- Test all commits, not only the ones on master.
3- Run all the tests on all commits (if your tests run longer than 5 minutes, have them run in parallel).
4- Perform code reviews before merging into master, not afterward.
5- Deployments are automatic, based on branches or tags.
6- Tags are set by the user, not by CI.
7- Releases are based on tags.
8- Pushed commits are never rebased.
9- Everyone starts from master and targets master.
10- Fix bugs in master first and in release branches second.
11- Commit messages reflect intent.

Advantages
- It defines how to do Continuous Integration and Continuous Delivery.
- The git history is cleaner, less messy, and more readable (see why devs prefer squash and merge instead of plain merging in this article: https://softwareengineering.stackexchange.com/questions/263164/why-squash-git-commits-for-pull-requests).
- It is ideal when you need a single version in production.

Disadvantages
- It is more complex than GitHub Flow.
- It can become as complex as GitFlow when you need to maintain multiple versions in production.
----------------------------------------------------------
GitHub Flow Branch Strategy
In GitHub Flow, the main branch contains your production-ready code. The other branches, feature branches, should contain work on new features and bug fixes, and will be merged back into the main branch when the work is finished and properly reviewed.
----------------------------------------------------------
GitLab Flow Branch Strategy
At its core, the GitLab Flow branching strategy is a clearly defined workflow.
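The squash-and-merge practice mentioned in the GitLab Flow advantages above can be tried locally; a sketch in a throwaway repository (names are made up):

```shell
# Throwaway demo repository; branch and file names are hypothetical.
git init -q squash-demo && cd squash-demo
git config user.email demo@example.com && git config user.name Demo
git checkout -q -b main
git commit -q --allow-empty -m "initial commit"

# A feature branch accumulates several messy work-in-progress commits.
git checkout -q -b feature/cleanup
echo a > a.txt && git add a.txt && git commit -q -m "wip"
echo b > b.txt && git add b.txt && git commit -q -m "fix typo"

# Squash-merge: the whole branch collapses into one commit on main,
# keeping the history clean and readable.
git checkout -q main
git merge --squash -q feature/cleanup
git commit -q -m "Add cleanup feature"
git log --oneline    # two commits total: initial + the squashed feature
```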
While similar to the GitHub Flow branch strategy, the main differentiator is the addition of environment branches (i.e. production and pre-production) or release branches, depending on the situation. Just as in the other Git branch strategies, GitLab Flow has a main branch that contains code that is ready to be deployed. However, this code is not the source of truth for releases. In GitLab Flow, the feature branch contains work for new features and bug fixes, which will be merged back into the main branch when they're finished, reviewed, and approved.
----------------------------------------------------------
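The environment branches GitLab Flow adds can be simulated locally; a minimal sketch in a throwaway repository (the staging/production branch names are a common convention, not mandated):

```shell
# Throwaway demo repository; names are hypothetical.
git init -q envs-demo && cd envs-demo
git config user.email demo@example.com && git config user.name Demo
git checkout -q -b main
git commit -q --allow-empty -m "initial commit"

# Environment branches trail behind main.
git branch staging
git branch production

# New work always lands on main first...
git commit -q --allow-empty -m "feature merged into main"

# ...then is promoted downstream: main -> staging for internal testing,
git checkout -q staging && git merge -q main

# ...and staging -> production when the release window opens.
git checkout -q production && git merge -q staging
```

After the two promotions, production points at the same commit as main, which is the point: code reaches production only by flowing through the intermediate environment.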

+ Adobe Acrobat (July 13, 2023, 11:12 p.m.)

wget ftp://ftp.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/AdbeRdr9.5.5-1_i386linux_enu.deb
sudo gdebi AdbeRdr9.5.5-1_i386linux_enu.deb
apt install libgdk-pixbuf-xlib-2.0-0:i386
acroread

+ HTTP Server (July 8, 2023, 9:53 p.m.)

python3 -m http.server
-----------------------------------------------------------------------------------
python3 -m http.server -b 127.0.0.42 8080
-----------------------------------------------------------------------------------
python3 -m http.server -d ~/Pictures/
-----------------------------------------------------------------------------------
python -m http.server -b "::"
The double colon (::) is shorthand notation for the IPv6 unspecified address.
-----------------------------------------------------------------------------------

+ Ext4Explore (July 4, 2023, 7:57 p.m.)

https://altushost-swe.dl.sourceforge.net/project/ext4explore/Ext4Explore_1_7_beta.zip

+ Resize pictures (May 16, 2023, 6:05 p.m.)

apt install imagemagick

convert mohsen.jpg -resize 354x472 mohsen2.jpg
ImageMagick will try to preserve the aspect ratio with this command. To force the image to a specific size, even if it distorts the aspect ratio, add an exclamation point to the dimensions:
convert mohsen.jpg -resize 354x472! mohsen2.jpg

Options: http://astroa.physics.metu.edu.tr/MANUALS/ImageMagick-6.2.5/www/convert.html

+ Ubuntu - Switching Keyboard Layout to alt+shift (May 15, 2023, 11:04 a.m.)

To see the current setting values, use the get command:
gsettings get org.gnome.desktop.wm.keybindings switch-input-source
gsettings get org.gnome.desktop.wm.keybindings switch-input-source-backward

Set forward switch to Shift+Alt (left):
gsettings set org.gnome.desktop.wm.keybindings switch-input-source "['<Shift>Alt_L']"

Set backward switch to Alt+Shift (left):
gsettings set org.gnome.desktop.wm.keybindings switch-input-source-backward "['<Alt>Shift_L']"

+ Save Image overwriting the previous name (May 14, 2023, 1:43 p.m.)

from django.core.files.storage import FileSystemStorage


class OverwriteStorage(FileSystemStorage):
    def get_available_name(self, name, max_length=None):
        self.delete(name)
        return name


def upload_to(instance, filename):
    return 'avatars/{filename}.jpg'.format(filename=instance.id)


class Account(AbstractUser):
    avatar = models.ImageField(
        blank=True,
        null=True,
        upload_to=upload_to,
        storage=OverwriteStorage()
    )

+ Read CSV File (April 20, 2023, 4:03 p.m.)

import csv
from pathlib import Path

script_location = Path(__file__).absolute().parent

with open(f'{script_location}/my_file.csv', newline='') as notification_file:
    notification_reader = csv.reader(notification_file, delimiter=',')
    for row in notification_reader:
        print(row)

+ Test if a port is reachable (March 31, 2023, 11:13 a.m.)

Single port:
nc -zv 127.0.0.1 80

Multiple ports:
nc -zv 127.0.0.1 22 80 8080

Range of ports:
nc -zv 127.0.0.1 20-30

+ Byte / Characters (March 19, 2023, 9:47 p.m.)

A 1 KB document contains 1,024 bytes of data, or 1,024 characters of text in a one-byte encoding. One-byte character sets can contain at most 256 characters.
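The arithmetic behind the note, sketched in the shell:

```shell
echo $((2 ** 10))    # bytes in 1 KB: 1024
echo $((2 ** 8))     # distinct values a single byte can hold: 256

# ASCII text uses one byte per character:
printf 'hello' | wc -c    # 5
```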

+ OpenBSD - Tmux (March 18, 2023, 9:37 a.m.)

export PKG_PATH=http://mirrors.sonic.net/pub/OpenBSD/7.1/packages/amd64/
pkg_add libevent wget bash

wget -q https://github.com/tmux/tmux/releases/download/3.3/tmux-3.3.tar.gz
tar zxf tmux-3.3.tar.gz
cd tmux-3.3
./configure
make; make install

+ Serial Ports (March 16, 2023, 5:18 p.m.)

Find out information about your serial ports:
dmesg | egrep -i --color 'serial|ttyS'
---------------------------------------------------------------------------------
cu -l /dev/ttyu1 -s 115200
---------------------------------------------------------------------------------
https://www.cyberciti.biz/hardware/5-linux-unix-commands-for-connecting-to-the-serial-console/

+ exec 3<> (March 15, 2023, 1:03 p.m.)

It's a file descriptor:
0 - stdin
1 - stdout
2 - stderr

>name means redirect output to the file name.
>&number means redirect output to file descriptor number. The & is needed to tell the shell you mean a file descriptor, not a file name.

A file descriptor is a number that refers to an already open file. By default, both file descriptors 1 and 2 go to /dev/tty, so if you run some_command 3>&1 1>&2 2>&3 in a new shell, it doesn't change anything (except that you now have a file descriptor number 3).
--------------------------------------------------------------------------------
exec 3<> File.txt
Opens "File.txt" for reading and writing and assigns file descriptor 3 to it.
Maximum file descriptors: 255
--------------------------------------------------------------------------------
3>&- means that file descriptor 3, opened for writing (same as stdout), is closed. The 3>&- closes file descriptor number 3 (it has probably been opened before with 3>filename).
--------------------------------------------------------------------------------
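A runnable sketch tying these pieces together (File.txt is an arbitrary name):

```shell
# Open File.txt for both reading and writing on descriptor 3
# (the file is created if it does not exist).
exec 3<> File.txt

# Write through the descriptor...
echo "hello via fd 3" >&3

# ...and close it when done.
exec 3>&-

cat File.txt    # hello via fd 3
```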

+ tty and stty (March 15, 2023, 12:36 p.m.)

If we talk about the terminal, another term for this command-line interface is tty, short for "teletype". The terminal is a command-line interface that allows the user to communicate with the machine and displays the generated output.

There is also a command called "stty" on UNIX-like operating systems, an abbreviated form of "Set Teletype". This command manages terminal settings: it allows a user to change and display terminal line characteristics. Execute the "stty" command without any arguments to display the current characteristics of the terminal.

The "stty" Linux command tool supports the following options:
-a, --all            Display all the current settings in human-readable format
-g, --save           Display all the current settings in stty-readable format
-F, --file=DEVICE    Open and use the specified device instead of stdin
---------------------------------------------------------------------------------------------
https://www.computerhope.com/unix/ustty.htm

Special characters:
* dsusp CHAR    CHAR sends a terminal stop signal once input is flushed.
eof CHAR        CHAR sends an end of file (terminates the input).
eol CHAR        CHAR ends the line.
* eol2 CHAR     Alternate CHAR for ending the line.
erase CHAR      CHAR erases the last character typed.
intr CHAR       CHAR sends an interrupt signal.
kill CHAR       CHAR erases the current line.
* lnext CHAR    CHAR enters the next character quoted.
quit CHAR       CHAR sends a quit signal.
* rprnt CHAR    CHAR redraws the current line.
start CHAR      CHAR restarts the output after stopping it.
stop CHAR       CHAR stops the output.
susp CHAR       CHAR sends a terminal stop signal.
* swtch CHAR    CHAR switches to a different shell layer.
* werase CHAR   CHAR erases the last word typed.

+ Select data_type (March 13, 2023, 10:33 a.m.)

select column_name, data_type from information_schema.columns where table_name='accounts_users' AND column_name = 'id';

+ Set up DMARC, DKIM, and SPF (March 11, 2023, 11:42 p.m.)

DMARC, DKIM, and SPF have to be set up in the domain's DNS settings. Administrators can contact their DNS provider, or their web hosting platform may provide a tool that lets them upload and edit DNS records.
-----------------------------------------------------------------------------------
SPF
Check if you have an existing SPF record:
https://www.proofpoint.com/us/cybersecurity-tools/dmarc-spf-creation-wizard#spf-check

Create an SPF record:
- Start with the v=spf1 (version 1) tag and follow it with the IP addresses that are authorized to send mail. For example: v=spf1 ip4:1.2.3.4 ip4:2.3.4.5
- If you use a third party to send email on behalf of the domain in question, you must add an "include" statement in your SPF record (e.g. include:thirdparty.com) to designate that third party as a legitimate sender.
- Once you have added all authorized IP addresses and include statements, end your record with an ~all or -all tag. An ~all tag indicates a soft SPF fail, while an -all tag indicates a hard SPF fail. In the eyes of the major mailbox providers, both ~all and -all will result in SPF failure. Validity recommends -all, as it is the most secure record.
- SPF records cannot be over 255 characters in length and cannot include more than ten include statements, also known as "lookups".

Example: v=spf1 ip4:1.2.3.4 ip4:2.3.4.5 include:thirdparty.com -all
-----------------------------------------------------------------------------------
DKIM
https://www.cloudflare.com/learning/dns/dns-records/dns-dkim-record/
-----------------------------------------------------------------------------------
DMARC
https://www.cloudflare.com/learning/dns/dns-records/dns-dmarc-record/
-----------------------------------------------------------------------------------

+ Check if an email has passed SPF, DKIM, and DMARC (March 11, 2023, 11:34 p.m.)

Most email clients provide an option labeled "Show details" or "Show original" that displays the full version of an email, including its header. The header, typically a long block of text above the body of the email, is where mail servers append the results of SPF, DKIM, and DMARC.

Reading through the dense header can be tricky. Users viewing it in a browser can press Ctrl+F (or Command+F) and search for "spf", "dkim", or "dmarc" to find these results. The relevant text might look like this:

arc=pass (i=1 spf=pass spfdomain=example.com dkim=pass dkdomain=example.com dmarc=pass fromdomain=example.com);

The appearance of the word "pass" in the text above indicates that the email has passed an authentication check. "spf=pass", for example, means the email did not fail SPF; it came from an authorized server with an IP address that is listed in the domain's SPF record. In this example, the email passed all three of SPF, DKIM, and DMARC, and the mail server was able to confirm it really came from example.com and not an impostor.

It is also important to note that domain owners need to configure their SPF, DKIM, and DMARC records properly themselves, both to prevent spam from their domain and to make sure that legitimate emails from their domain are not marked as spam. Web hosting services do not necessarily do this automatically. Even domains that do not send emails should at least have DMARC records so that spammers cannot pretend to send emails from that domain.
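Searching a saved header from the command line works the same way as Ctrl+F in the browser; a sketch using a fabricated header snippet (header.txt and its contents are made up for illustration):

```shell
# A fabricated Authentication-Results fragment, as it might appear
# in a saved email header.
cat > header.txt <<'EOF'
Authentication-Results: mx.example.net;
  spf=pass smtp.mailfrom=example.com;
  dkim=pass header.d=example.com;
  dmarc=pass header.from=example.com
EOF

# Pull out just the three authentication verdicts.
grep -Eo '(spf|dkim|dmarc)=[a-z]+' header.txt
# spf=pass
# dkim=pass
# dmarc=pass
```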

+ SPF, DKIM, and DMARC (March 11, 2023, 10:16 p.m.)

SPF, DKIM, and DMARC are free email authentication methods used to verify that senders are legitimately authorized to send emails from a specific domain. Together, they help prevent spammers, phishers, and other unauthorized parties from sending emails on behalf of a domain they do not own.
--------------------------------------------------------------------------------------
SPF
SPF stands for Sender Policy Framework. It allows you to publish a list of authorized IP addresses that are allowed to send emails on your behalf.

How does SPF work? When an email is sent, the receiving end checks for a published SPF record. When it detects one, it searches through the list of authorized addresses in the record. If the sending address is listed, the validation is marked as "PASS". Otherwise, the email may be rejected or routed to the spam folder.
--------------------------------------------------------------------------------------
DKIM
DKIM stands for DomainKeys Identified Mail. DKIM is a stronger authentication method than SPF, since it uses public-key cryptography instead of IP addresses. When using DKIM, a sender attaches DKIM signatures to email headers, which receivers validate using a public cryptographic key found in the sender's DNS records. The domain owner publishes the cryptographic key and configures it as a TXT record in its DNS zone.
https://www.dkim.org/
--------------------------------------------------------------------------------------
DMARC
DMARC (Domain-based Message Authentication, Reporting & Conformance) is an email authentication protocol that uses SPF and DKIM to decide the authenticity of an email. DMARC is very effective because it validates the sender using both DKIM and SPF records. Furthermore, it tells mail systems what to do with messages sent from your domain that fail SPF or DKIM checks.
--------------------------------------------------------------------------------------

+ Access migration model (March 9, 2023, 3:55 p.m.)

from django.db.migrations.recorder import MigrationRecorder MigrationRecorder.Migration.objects.values_list('id', 'app', 'name')

+ Install Postfix noninteractive (March 4, 2023, 3:20 a.m.)

Automate the installation of Postfix:
echo "postfix postfix/mailname string your.hostname.com" | debconf-set-selections
echo "postfix postfix/main_mailer_type string 'Internet Site'" | debconf-set-selections
apt-get -y install postfix
service postfix start

+ Enable Port 25 in Django Docker Traefik (March 4, 2023, 2:57 a.m.)

Add a ports section to the file docker/docker-compose.yml:

services:
  tiptong_api-django:
    image: tiptong_api-django:latest
    ports:
      - 25:25
    networks:
      - tiptong_api-local
      - tiptong_api-public

After deploying using "docker stack deploy", install "postfix" in the docker container:
docker exec -it --user root $(docker container ls -a -q --filter=name=tiptong_api-django) /bin/bash
apt update
apt install postfix -y
service postfix start

Now test accessing the port from your computer/laptop:
telnet api.tiptong.io 25

Configuring Traefik config files or the deploy section in the compose.yml file is unnecessary. Probably this way, only the "api.tiptong.io" container will use the port and other services/containers will not be able to use it; that is actually why we would need to configure the Traefik section properly.

Test sending an email from the container:
apt install telnet -y
telnet localhost 25
mail from: whatever@whatever.com
rcpt to: mohsen@mohsenhassani.com
data (press enter)
Type something for the body of the email.
. (put an extra period on the last line and then press enter again)

If everything works out, you should see a confirmation message resembling this:
250 2.0.0 Ok: queued as CC732427AE
Type "quit" to exit.
---------------------------------------------------------------------------------------------
In case the email is not sent, install "rsyslog" to track the issue:
apt install rsyslog
service rsyslog start
service postfix restart
nano /var/log/syslog
nano /var/log/mail.log
nano /var/log/mail.info
---------------------------------------------------------------------------------------------
Having the following error, I had to comment out the line "myhostname = cc70132a9df8" in the /etc/postfix/main.cf file and restart Postfix (service postfix restart):

postfix/smtp[1469]: EF329761203: to=<mohsen@mohsenhassani.com>, relay=mail.mohsenhassani.com[5.9.154.209]:25, delay=33, delays=12/0.01/21/0.01, dsn=5.0.0, status=bounced (host mail.mosenhassani.com[5.9.154.209] said: 550 Access denied - Invalid HELO name (See RFC2821 4.1.1.1) (in reply to MAIL FROM command))
---------------------------------------------------------------------------------------------

+ MX Linux - Repo (March 2, 2023, 8:51 p.m.)

https://github.com/MX-Linux/mx-repo-list/blob/master/repos.txt https://mxrepo.com/

+ Diff of text files (Feb. 27, 2023, 12:08 p.m.)

- name: Display the diff of a text file
  hosts: all
  strategy: free
  become: true
  become_user: mohsen
  environment:
    HOME: /home/mohsen
  tasks:
    - ansible.builtin.copy:
        src: /home/mohsen/Projects/some_file.py
        dest: /srv/projects/some_file.py
      check_mode: yes
      diff: yes

+ XdoTool - Simulate Mouse Clicks and Keystrokes (Feb. 13, 2023, 12:01 p.m.)

https://linuxhint.com/xdotool_stimulate_mouse_clicks_and_keystrokes/

apt install xdotool

You can find the correct names for keyboard keys (to be used in the following commands) by using:
xev
---------------------------------------------------------------------------------------------------------
Simulate a Keystroke:
xdotool key n
---------------------------------------------------------------------------------------------------------
Simulate a Keystroke with a Modifier Key:
xdotool key ctrl+s
---------------------------------------------------------------------------------------------------------
Simulate Repeat Keys / Turbo / Rapid Fire:
xdotool key --repeat 5 --delay 50 n
for i in {1..3}; do xdotool key n; sleep 2; done
while true; do xdotool key n; sleep 2; done
---------------------------------------------------------------------------------------------------------
Simulate a Key Sequence:
xdotool key x y z
---------------------------------------------------------------------------------------------------------
Simulate Mouse Clicks:
xdotool click 3
Replace "3" with any number from the reference below:
1 - Left click
2 - Middle click
3 - Right click
4 - Scroll wheel up
5 - Scroll wheel down
---------------------------------------------------------------------------------------------------------
If you want to click at a different set of coordinates, use a command in the following format:
xdotool mousemove 100 100 click 3
Replace "100" with your desired "X" and "Y" coordinates, measured from the top left corner of the screen.
---------------------------------------------------------------------------------------------------------
Get the active window and minimize it:
xdotool getactivewindow windowminimize
---------------------------------------------------------------------------------------------------------
Find the window ID of a window by its title:
xdotool search --name "Spotify"

Put the focus on this window and bring it into the foreground:
xdotool windowactivate 16777219
---------------------------------------------------------------------------------------------------------

+ PlayerCTL - For VLC, MPV, RhythmBox, Web Browsers, CMUS, MPD, Spotify, and ... (Feb. 13, 2023, 11:36 a.m.)

https://github.com/altdesktop/playerctl
--------------------------------------------------------------------------------------------
playerctl --list-all
List the names of the current players that are running on the system.
--------------------------------------------------------------------------------------------
# Player | Next
playerctl next

# Player | Play/Pause
playerctl play-pause

# Player | Previous
playerctl previous

# Player | Seek Backward
playerctl position 10-

# Player | Seek Forward
playerctl position 10+

# Player | Stop
playerctl stop
--------------------------------------------------------------------------------------------

+ Copy To Remote and Exclude (Feb. 3, 2023, 6:29 p.m.)

- name: Copy a directory to the remote server, excluding some child files/folders
  hosts: all
  strategy: free
  become: true
  become_user: mohsen
  environment:
    HOME: /home/mohsen
  tasks:
    - ansible.posix.synchronize:
        mode: push
        src: /home/mohsen/Projects/my-reports
        dest: /srv/projects/
        recursive: true
        rsync_opts:
          - "--exclude=.idea"
          - "--exclude=.git"
          - "--exclude=.gitignore"
          - "--exclude=*.pyc"