Recent Notes

+ Mount FreeBSD Partition (Jan. 31, 2023, 10:59 a.m.)

mount -t ufs -o ufstype=ufs2 /dev/vdb1 /mnt/

+ Difference between min SDK version/target SDK version vs. compile SDK version? (Jan. 28, 2023, 11:54 p.m.)

The min SDK version is the earliest release of the Android SDK that your application can run on. Usually, this is because of a problem with the earlier APIs, lacking functionality, or some other behavioral issue.

The target SDK version is the version your application was targeted to run on. Ideally, this is because of some sort of optimal run conditions. If you were to make your app for version 19, this is where that would be specified. It may run on earlier or later releases, but this is what you were aiming for. This is mostly to indicate how current your application is for use in the marketplace, etc.

The compile SDK version is the version of Android your IDE (or other means of compiling, I suppose) uses to build your app when you publish an APK file. This is useful for testing your application, as it is a common need to compile your app as you develop it. As this will be the version compiled into the APK, it will naturally be the version of your release. Likewise, it is advisable to have this match your target SDK version.
-------------------------------------------------------------------------------------------------
The formula is minSdkVersion <= targetSdkVersion <= compileSdkVersion
-------------------------------------------------------------------------------------------------
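The formula can be sketched as a quick shell check (the SDK levels below are made-up examples, not recommendations):

```shell
# Hypothetical SDK levels; any project should satisfy
# minSdkVersion <= targetSdkVersion <= compileSdkVersion
min_sdk=21
target_sdk=33
compile_sdk=33

if [ "$min_sdk" -le "$target_sdk" ] && [ "$target_sdk" -le "$compile_sdk" ]; then
  sdk_check="ok"
else
  sdk_check="inconsistent"
fi
echo "$sdk_check"   # -> ok
```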

+ Sample applications (Jan. 27, 2023, 11:41 p.m.)

# Get the list of sample applications
flutter create --list-samples test.json

# Create a project with a sample project
flutter create --sample=widgets.SingleChildScrollView.1 my_sample

+ Set bash as default shell (Jan. 27, 2023, 5:13 p.m.)

# chsh -s /usr/local/bin/bash {username}

# chsh -s bash

# bash

+ Command - wait (Jan. 27, 2023, 1:40 p.m.)

wait: Without any parameters, the wait command waits for all background processes to finish before continuing the script.
---------------------------------------------------------------------------------
wait <process or job ID>: Waits for a specific process or job to end before continuing the script.
---------------------------------------------------------------------------------
wait -n: Waits for only the next background process to complete and returns its exit status.
---------------------------------------------------------------------------------
wait -f: Waits until the given process actually terminates before returning, rather than returning when the job merely changes state.
---------------------------------------------------------------------------------
Example:
# Create a simple background process:
sleep 10 &

# Confirm the job is running in the background with:
jobs -l

# Use the wait command without any parameters to pause until process completion:
wait

# The terminal waits for the background process to finish.
# After 10 seconds (due to sleep 10), the console prints a Done message.
---------------------------------------------------------------------------------
wait %1: Pauses until job 1 completes.
---------------------------------------------------------------------------------
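The example above can be compressed into a quick timing sketch (the sleep durations are arbitrary):

```shell
# Start two background jobs of different lengths
sleep 1 &
sleep 2 &

start=$(date +%s)
wait                      # blocks until BOTH background jobs have finished
end=$(date +%s)
elapsed=$((end - start))
echo "elapsed: ${elapsed}s"
```

Since wait returns only after the longer job exits, the elapsed time is at least 2 seconds.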

+ Chaining Operators (Jan. 27, 2023, 12:54 p.m.)

----------------------------------------------------------------------------
Ampersand Operator (&)
The function of '&' is to make the command run in the background. Just type the command followed by a white space and '&'. You can execute more than one command in the background in a single go.

Run one command in the background:
tecmint@localhost:~$ ping -c5 <host> &

Run two commands in the background, simultaneously:
root@localhost:/home/tecmint# apt-get update & apt-get upgrade &
----------------------------------------------------------------------------
Semi-colon Operator (;)
The semi-colon operator makes it possible to run several commands in a single go; the commands execute sequentially.

root@localhost:/home/tecmint# apt-get update ; apt-get upgrade ; mkdir test

The above command combination will first execute the update instruction, then the upgrade instruction, and finally will create a 'test' directory under the current working directory.
----------------------------------------------------------------------------
AND Operator (&&)
The AND operator (&&) executes the second command only if the execution of the first command SUCCEEDS, i.e., the exit status of the first command is 0. This is very useful for checking the execution status of the last command.

For example, to visit a website using the links command in the terminal, but only after checking whether the host is live:
root@localhost:/home/tecmint# ping -c3 <host> && links <host>
----------------------------------------------------------------------------
OR Operator (||)
The OR operator (||) is much like an 'else' statement in programming. It allows you to execute the second command only if the execution of the first command fails, i.e., the exit status of the first command is non-zero.

For example, execute "apt-get update" from a non-root account; if the first command fails, the second command 'links' will execute:
tecmint@localhost:~$ apt-get update || links

In the above command, since the user was not allowed to update the system, the exit status of the first command is non-zero, and hence the last command 'links' gets executed. What if the first command is executed successfully, with an exit status 0? Then the second command won't execute:
tecmint@localhost:~$ mkdir test || links

Here, the user creates a folder 'test' in his home directory, for which the user is permitted. The command executes successfully, giving an exit status 0, and hence the last part of the command is not executed.
----------------------------------------------------------------------------
NOT Operator (!)
The NOT operator (!) is much like an 'except' statement. It will execute everything except the condition provided. To understand this, create a directory 'tecmint' in your home directory and 'cd' to it:
tecmint@localhost:~$ mkdir tecmint
tecmint@localhost:~$ cd tecmint

Next, create several types of files in the folder 'tecmint':
touch a.doc b.doc a.pdf b.pdf a.xml b.xml a.html b.html

Now delete all the files except the 'html' files at once, in a smart way (note that this pattern requires bash's extglob option, enabled with: shopt -s extglob):
rm -r !(*.html)
----------------------------------------------------------------------------
AND – OR operator (&& – ||)
The above operator is actually a combination of the 'AND' and 'OR' operators. It is much like an 'if-else' statement. For example, ping a host; on success echo 'Verified', else echo 'Host Down':
ping -c3 <host> && echo "Verified" || echo "Host Down"
----------------------------------------------------------------------------
PIPE Operator (|)
The PIPE operator is very useful when the output of the first command acts as the input to the second command. For example, pipe the output of 'ls -l' to 'less' and view the output of the command:
tecmint@localhost:~$ ls -l | less
----------------------------------------------------------------------------
Command Combination Operator {}
Combines two or more commands; the second command depends upon the execution of the first command. For example, check whether a directory 'bin' is available or not, and output the corresponding message:
tecmint@localhost:~$ [ -d bin ] || { echo Directory does not exist, creating directory now.; mkdir bin; } && echo Directory exists.
----------------------------------------------------------------------------
Precedence Operator ()
The operator makes it possible to execute commands in precedence order:
Command_x1 && Command_x2 || Command_x3 && Command_x4

Because && and || associate left to right, whether Command_x3 and Command_x4 run depends on the combined exit status of everything before them. To group the commands explicitly, use the precedence operator:
(Command_x1 && Command_x2) || (Command_x3 && Command_x4)

In the above pseudo command, if Command_x1 fails, Command_x2 is skipped and Command_x3 executes; Command_x4 then executes only if Command_x3 succeeds.
----------------------------------------------------------------------------
Concatenation Operator (\)
The backslash escapes the next character and, at the end of a line, lets you continue a large command over several lines in the shell. For example, the command below opens the text file test(1).txt, escaping the parentheses in the file name:
tecmint@localhost:~/Downloads$ nano test\(1\).txt
----------------------------------------------------------------------------
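The AND/OR combination can be sketched without touching the network, using true and false as stand-ins for a command (such as ping) that succeeds or fails:

```shell
# `true` stands in for a command that succeeds, `false` for one that fails
ok=$(true && echo "Verified" || echo "Host Down")
bad=$(false && echo "Verified" || echo "Host Down")
echo "$ok / $bad"   # -> Verified / Host Down
```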

+ Shell piping (Jan. 27, 2023, 12:52 p.m.)

In Linux, a pipeline is a mechanism that allows two or more processes to be combined or executed concurrently: the output of each process is handled as the input of the next one, and so on. It's not called a pipeline for nothing: it refers to the concept of a process flow being channeled through a pipe from a source to a destination.

command1 | command2 | command3
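A minimal sketch of the idea, with each command's stdout feeding the next command's stdin:

```shell
# sort receives printf's output; head receives sort's output
first=$(printf 'banana\napple\ncherry\n' | sort | head -n 1)
echo "$first"   # -> apple
```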

+ File Descriptors 2> 2>&1 &>> (Jan. 27, 2023, 12:30 p.m.)

What are file descriptors?
File descriptors are integers (numbers) that act as unique identifiers for an open file (or other I/O resource) in a Linux system. In Unix-like systems, everything is a file descriptor or a process, and everything can have a file descriptor. It is important and useful to understand how the so-called three standard file descriptors, or standard streams, work, because all processes use these channels for input and output operations.

User interactions with the system are input through standard input (stdin), which is channel/stream 0, usually by using a keyboard. Any command executed through an interactive shell connects to the text terminal on which the shell is running and sends its output through either standard output (stdout), which is channel/stream 1, if all is OK, or through standard error (stderr), which is channel/stream 2, if it is not OK. The stdout is usually the terminal displayed by the monitor. There are other channels and streams (3 and up) that any process can use; they have no default input or output.
-----------------------------------------------------------------------------------------------------
Shell I/O redirection:
You can manipulate and change the default behavior of these three basic file descriptors by leveraging redirection and pipelines. For example, you can change your input from a keyboard to a file. Instead of getting messages in your terminal, you can redirect them to a file, or even discard error messages instead of seeing them on your monitor. You can also redirect your output to the terminal and a file simultaneously. You may even process a command's output as the input to another command.
-----------------------------------------------------------------------------------------------------
There are three redirectors to work with: >, >>, and <
-----------------------------------------------------------------------------------------------------
Redirection with >
command > file: Sends standard output to <file>
command 2> file: Sends error output to <file>
command 2>&1: Sends error output to standard output
command > file 2>&1: Sends standard output and error output to a file
command &> file: Sends standard output and error output to a file
command 2>&1 > file: Sends error output to wherever standard output currently points (the terminal), then sends standard output to the file; the order matters, so the errors do NOT end up in the file

Append with >>
command >> file: Appends standard output to a file
command 2>> file: Appends error output to a file
command >> file 2>&1: Appends standard output and error output to a file
command &>> file: Appends standard output and error output to a file
command 2>&1 >> file: Sends error output to the terminal, then appends standard output to the file

Redirect with <
command < input: Feeds a command its input from <input>
command << delimiter: Feeds a command or interactive program with a list terminated by the delimiter; this is known as a here-document (heredoc)
command <<< input: Feeds a command with <input>; this is known as a here-string
-----------------------------------------------------------------------------------------------------
Examples:
# Redirect the standard output for a given command to a file:
echo "Enable Sysadmin" > myfile
cat myfile
Enable Sysadmin

# Redirect error output for a given command to a file:
ls /root 2> myfile
cat myfile
ls: cannot open directory '/root': Permission denied

# Redirect error output for a given command to the standard output, the terminal:
ls /root 2>&1
ls: cannot open directory '/root': Permission denied

# Redirect both standard output and error output for a given command to a file:
find /usr -name ls > myfile 2>&1
find /usr -name ls &> myfile
-----------------------------------------------------------------------------------------------------
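The redirection rules above can be exercised in a scratch directory (the file names are arbitrary):

```shell
workdir=$(mktemp -d)

# Standard output (fd 1) to a file
echo "hello" > "$workdir/out.txt"

# Error output (fd 2) to a file; ls on a missing path writes only to stderr
ls "$workdir/missing" 2> "$workdir/err.txt" || true

# Both streams appended to one file
{ echo "more"; ls "$workdir/missing"; } >> "$workdir/both.txt" 2>&1 || true

cat "$workdir/out.txt"   # -> hello
```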

+ Command "read" (Jan. 27, 2023, 11:18 a.m.)

Bash uses its built-in "read" command to take the user's input from the terminal. This command has different options to take input from the user in different ways, and multiple inputs can be taken using a single read command.
-------------------------------------------------------------------------------------
read [options] [var1 var2 var3 …]
-------------------------------------------------------------------------------------
-d <delimiter>: Takes the input until the delimiter character is seen (instead of a newline).
-n <number>: Takes the input of a particular number of characters from the terminal, stopping earlier if the delimiter is seen.
-N <number>: Takes the input of a particular number of characters from the terminal, ignoring the delimiter.
-p <prompt>: Prints the prompt message before taking the input.
-s: Takes the input without an echo; mainly used for password input.
-a: Takes the input into an indexed array.
-t <time>: Sets a time limit (in seconds) for taking the input.
-u <file descriptor>: Takes the input from the given file descriptor.
-r: Disables backslash escaping (backslashes are treated literally).
-------------------------------------------------------------------------------------
Using the "read" command without any option or variable:
If no variable is used with the read command, the input value is stored in the $REPLY variable.
echo "Enter your favorite color: "
read
echo "Your favorite color is $REPLY"
-------------------------------------------------------------------------------------
Using the read command with variables:
echo "Enter the product name: "
read item
echo "Enter the color variations of the product: "
read color1 color2 color3
echo "The product name is $item."
echo "Available colors are $color1, $color2, and $color3."
-------------------------------------------------------------------------------------
Using the read command with the -p option:
read -p "Enter the book name: " book
echo "Book name: $book"
-------------------------------------------------------------------------------------
Using the read command with the -s option:
read -sp "Enter your password: " password
-------------------------------------------------------------------------------------
Using the read command with the -a option:
echo "Enter the country names: "
read -a countries   # country1 country2 country3 country4
echo "Country names are:"
for country in "${countries[@]}"
do
    echo "$country"
done
-------------------------------------------------------------------------------------
Using the read command with the -n option:
echo "Enter the product code: "
# Take the input of five characters
read -n 5 code
echo ""   # add a new line
echo "The product code is $code"
-------------------------------------------------------------------------------------
Using the read command with the -t option:
echo -n "Write the result of 10-6: "
read -t 3 answer
# The prompt will wait for 3 seconds until the user enters a value
-------------------------------------------------------------------------------------
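read also works non-interactively; a here-document can stand in for the keyboard, and IFS controls how the line is split into variables (the field values here are just examples):

```shell
# Feed "pen,blue" to read; IFS=',' splits it into two fields
IFS=',' read -r name color << 'EOF'
pen,blue
EOF
echo "$name / $color"   # -> pen / blue
```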

+ Create Project (Jan. 25, 2023, 10:28 p.m.)

flutter create my_app

+ Spotify D-BUS control (Jan. 19, 2023, 10:39 p.m.)

Play/Pause:
dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlayPause

Next:
dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.Next

Previous:
dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.Previous

+ D-Bus (Desktop Bus) (Jan. 18, 2023, 1:34 p.m.)

---------------------------------------------------------------------------------------
D-Bus is a mechanism for interprocess communication under Linux and other Unix-like systems which provides a way for applications to talk to each other. Every modern Linux desktop environment uses D-Bus, a system for allowing software applications to communicate with each other. Thanks to D-Bus, you can make your desktop work the way you want.

With D-Bus, every program that offers services to other programs registers itself. Other programs can then look up which services are available. A program is also able to register itself for events, which some system services do, for example, to detect hot-swapping hardware.
---------------------------------------------------------------------------------------
Message:
A message is the unit of data transfer between processes. A message has a header, which identifies its sender, receiver, and method (signal) name, and a message body that contains a data payload.
---------------------------------------------------------------------------------------
Message Types:
There are four types of messages: SIGNAL, METHOD_CALL, METHOD_RETURN, and ERROR.

A SIGNAL is a message that is broadcast by a process and can be received by other interested processes.

A METHOD_CALL message is a request by the sender for a particular operation on an object of the receiver. For example, the receiver may be a service with a singleton object. The sender could be a client, requesting the execution of a "method" by the server. The method call message has the name of the method to be executed and also the arguments required for execution. The receiver is required to execute the method and respond back to the sender with a METHOD_RETURN message containing the result(s) of the operation. Or, if there was an error, the receiver can respond with an ERROR message.
---------------------------------------------------------------------------------------
Message bus:
A message bus is a daemon process that routes messages between other processes.
---------------------------------------------------------------------------------------
Service:
A service is a daemon process that provides some utility in the system. A service is a server process that does work for the clients. A service has a singleton object.
---------------------------------------------------------------------------------------
Object:
An object is an entity in a process which does some work. An object is identified by a path name. A path is like a complete file name in the system, so an object might have a path name like /com/example/some_server. An object has members, meaning methods and signals.
---------------------------------------------------------------------------------------
Interfaces:
An interface is a group of functions. An object supports one or more interfaces. The interfaces supported by an object specify the members of that object.
---------------------------------------------------------------------------------------
Connection names:
When an application connects to the D-Bus daemon, it is assigned a unique connection name. A unique connection name starts with the colon character ":". An application may also ask for a well-known name to be assigned to a connection. This is in the form of a reverse domain name, like com.example.some_name.
---------------------------------------------------------------------------------------
D-Bus Configuration:
The D-Bus daemon configuration files are located in the /usr/share/dbus-1 directory.
---------------------------------------------------------------------------------------

+ Gnome - Disable Dock Hotkeys (Jan. 12, 2023, 3:06 p.m.)

gsettings set org.gnome.shell.extensions.dash-to-dock hot-keys false

(The "hot-keys" key belongs to the Dash-to-Dock extension schema.)

+ 2> (Dec. 6, 2022, 4:17 p.m.)

File descriptor 2 represents standard error. Other special file descriptors include 0 for standard input and 1 for standard output. 2> /dev/null means to redirect standard error to /dev/null. /dev/null is a special device that discards everything that is written to it.
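A quick sketch: redirect the error message of a failing command to /dev/null and confirm nothing is captured (the directory name is arbitrary and assumed not to exist):

```shell
missing_dir="/nonexistent-dir-$$"

# stdout is captured by $(...); stderr is discarded by 2> /dev/null
err_output=$(ls "$missing_dir" 2> /dev/null || true)
echo "captured: [$err_output]"   # -> captured: []
```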

+ Find Postgres version from Django (Nov. 19, 2022, 11:47 p.m.)

from django.db import connection

print(connection.cursor().connection.server_version)

+ Remmina (Nov. 15, 2022, 10:25 p.m.)

apt install -t stretch-backports remmina remmina-plugin-rdp remmina-plugin-secret remmina-plugin-spice remmina-plugin-vnc

apt search remmina plugin

+ Transmission (Nov. 12, 2022, 12:18 p.m.)

Torrent location: ~/.config/transmission/torrents

+ Create SSH Keys (Nov. 7, 2022, 12:25 p.m.)

from io import StringIO

import paramiko

rsa_key = paramiko.RSAKey.generate(bits=4096)
private_string = StringIO()
rsa_key.write_private_key(private_string)
public_key = rsa_key.get_base64()
print(public_key, private_string.getvalue())

+ Coding style - Imports (Nov. 3, 2022, 9:32 p.m.)

# future
from __future__ import unicode_literals

# standard library
import json
from itertools import chain

# third-party
import bcrypt

# Django
from django.http import Http404
from django.http.response import (
    Http404,
    HttpResponse,
    HttpResponseNotAllowed,
    StreamingHttpResponse,
    cookie,
)

# local Django
from .models import LogEntry

# try/except
try:
    import yaml
except ImportError:
    yaml = None

CONSTANT = 'foo'


class Example:
    # ...

+ OwnCloud - Traefik (Oct. 25, 2022, 3:13 p.m.)

docker-compose.yml:

version: "3"

volumes:
  files:
    driver: local
  mysql:
    driver: local
  redis:
    driver: local

services:
  owncloud:
    image: owncloud/server
    depends_on:
      - mariadb
      - redis
    environment:
      - OWNCLOUD_DOMAIN=localhost:8080
      - OWNCLOUD_DB_TYPE=mysql
      - OWNCLOUD_DB_NAME=owncloud
      - OWNCLOUD_DB_USERNAME=owncloud
      - OWNCLOUD_DB_PASSWORD=owncloud
      - OWNCLOUD_DB_HOST=mariadb
      - OWNCLOUD_ADMIN_USERNAME=admin
      - OWNCLOUD_ADMIN_PASSWORD=MohseN4301!
      - OWNCLOUD_MYSQL_UTF8MB4=true
      - OWNCLOUD_REDIS_ENABLED=true
      - OWNCLOUD_REDIS_HOST=redis
    networks:
      - owncloud-local
      - traefik-public
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - files:/mnt/data
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 3
      labels:
        - traefik.enable=true
        -
        - traefik.constraint-label=traefik-public
        - traefik.http.routers.owncloude-http.rule=Host(``) || Host(``) || Host(``)
        -

  mariadb:
    image: mariadb:10.6  # minimum required ownCloud version is 10.9
    environment:
      - MYSQL_ROOT_PASSWORD=owncloud
      - MYSQL_USER=owncloud
      - MYSQL_PASSWORD=owncloud
      - MYSQL_DATABASE=owncloud
    command: ["--max-allowed-packet=128M", "--innodb-log-file-size=64M"]
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-u", "root", "--password=owncloud"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - mysql:/var/lib/mysql
    networks:
      - owncloud-local

  redis:
    image: redis:6
    command: ["--databases", "1"]
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - redis:/data
    networks:
      - owncloud-local

networks:
  owncloud-local:
  traefik-public:
    external: true
-------------------------------------------------------------------------
.env:

OWNCLOUD_VERSION=10.11
OWNCLOUD_DOMAIN=localhost:8080
ADMIN_USERNAME=mohsen
ADMIN_PASSWORD=mohsen_2222
HTTP_PORT=8080
-------------------------------------------------------------------------

+ Control SSH Login by Time (Oct. 20, 2022, 2:09 a.m.)

1- Open the following file:
vim /etc/pam.d/sshd

2- Add the following line to the file:
account    required    pam_time.so

3- Edit the file "time.conf" to configure the specific times the users can log in:
vim /etc/security/time.conf
sshd;*;mohsen;Al0800-2000
User "mohsen" can log into the system using SSH from 8:00 am to 8:00 pm.
------------------------------------------------------------------
The available day options:
Mo: Monday
Tu: Tuesday
We: Wednesday
Th: Thursday
Fr: Friday
Sa: Saturday
Su: Sunday
Wk: Week days
Wd: Week-end days
Al: All 7 days of the week
------------------------------------------------------------------
Except: use the exclamation point "!" to say "not".
sshd;*;mohsen;!Wk0800-1700
Now the "mohsen" user can log in at any time except on week-days between 8:00 am and 5:00 pm.
------------------------------------------------------------------
Specifying multiple days:
sshd;*;mohsen;MoWeFr1000-1400
------------------------------------------------------------------
Multiple users:
sshd;*;mohsen|hadi;SaSu0800-2200
------------------------------------------------------------------
Logs for users' authentications:
tail -f /var/log/auth.log
------------------------------------------------------------------
Forcing logout of users:
sudo crontab -e
0 17 * * 1-5 /usr/bin/pkill -KILL -u mohsen
------------------------------------------------------------------

+ Disk usage for hidden directories (Oct. 10, 2022, 11:21 a.m.)

du -sh .[^.]*
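The glob matches dot-entries in the current directory while skipping "." and ".." themselves; a quick sketch in a scratch directory (note the pattern also misses names beginning with two dots, e.g. "..cache"):

```shell
workdir=$(mktemp -d)
cd "$workdir"
mkdir .hidden visible

# Only the hidden entry is reported; "visible" is not matched by .[^.]*
listing=$(du -sh .[^.]*)
echo "$listing"
```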

+ Add GPG public key (PUBKEY) to the apt key manager (Oct. 1, 2022, 1:17 a.m.)

sudo apt-key adv --keyserver <keyserver-url> --recv-keys 68980A0EA10B4DE8

+ Check if IPv6 is enabled (Sept. 26, 2022, 9:35 a.m.)

cat /sys/module/ipv6/parameters/disable
If IPv6 is in a disabled state, the output will be "1".
-------------------------------------------------------------------------
ip -6 addr
If IPv6 is in a disabled state, you will get an empty output.
-------------------------------------------------------------------------
lsof -a -i6
If IPv6 is in a disabled state, the output of this command will also be empty.
-------------------------------------------------------------------------

+ List keys added to ssh-agent (Aug. 26, 2022, 10:56 a.m.)

ssh-add -l

Get the full keys in OpenSSH format:
ssh-add -L

+ NetworkManager Logs (Aug. 25, 2022, 11:19 a.m.)

journalctl -f -u NetworkManager

tail -f /var/log/syslog | grep NetworkManager

+ Install OpenSSL 3 (Aug. 20, 2022, 10:47 p.m.)

Installing this OpenSSL will not solve the problem of the Python compilation requirement for the _ssl module. You need to go to "Python Installation" in my notes and see how to provide the path of this extracted OpenSSL file. But if you need to install OpenSSL for other purposes, follow these steps.
-------------------------------------------------------------
1- Download OpenSSL 3.0, extract it, and cd to it.

2- Compile, make, and install OpenSSL:
cd /usr/src/openssl-3.0.0
./config
make
make install

3- Create symlinks to libssl and libcrypto:
ln -s /usr/local/lib64/libssl.so.3 /usr/lib64/libssl.so.3
ln -s /usr/local/lib64/libcrypto.so.3 /usr/lib64/libcrypto.so.3

4- Test the installed version with:
openssl version

+ Logs (Aug. 19, 2022, 11:37 a.m.)

docker logs <container ID>
docker logs --follow <container ID>
docker logs --tail 100 <container ID>
docker logs --follow --until=30m <container ID>
docker logs --since 2019-03-02 <container ID>

+ Logging (Aug. 18, 2022, 3:19 p.m.)

from logging import getLogger

LOG = getLogger(__name__)
LOG.debug('message')

+ Find IP address (Aug. 17, 2022, 4:37 p.m.)

docker inspect 3f52acaa7ba9 | grep IPAddress

+ Error - Failed to mount (July 30, 2022, 6:21 p.m.)

Failed to mount '/dev/sdb1': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.

To fix this error:
sudo ntfsfix /dev/sdb1

+ venv (July 12, 2022, 11:08 a.m.)

python3.9 -m venv venv
source venv/bin/activate
(venv)$ pip install -r requirements.txt

+ ansible-playbook - Copy file from remote server (June 24, 2022, 6:29 p.m.)

- name: Copy file from the remote server
  hosts: all
  strategy: free
  become: true
  become_user: mohsen
  environment:
    HOME: /home/mohsen
  tasks:
    - ansible.builtin.fetch:
        src: /var/mohsen/my_backups/storage/backup.db
        dest: /home/mohsen/Temp/db_backups/{{ inventory_hostname }}/
        flat: yes
--------------------------------------------------------------------------------------------
The {{ inventory_hostname }} already exists as a special variable in Ansible. It creates a folder per host and copies the file into it. Whatever destination path you write will be created on your computer if it does not exist. If "flat" is yes, the file is copied directly into the folder; if no, separate nested folders are created /per/file/path.
--------------------------------------------------------------------------------------------

+ Current directory - dirname vs abspath (June 18, 2022, 12:57 a.m.)

This returns the current folder that the script exists in:
os.path.abspath(os.path.dirname(os.path.abspath(__file__)))
OR
os.path.dirname(__file__)
# /home/mohsen/my project
Note that os.path.dirname(__file__) can return a relative path (or an empty string) depending on how the script was invoked, so wrapping it with os.path.abspath is safer.
------------------------------------------------------------------------------------------------------
This returns the parent of the folder that the script exists in:
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# /home/mohsen
------------------------------------------------------------------------------------------------------
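A shell analogue of the same idea, for comparison (the script name where.sh is just for illustration):

```shell
scriptdir=$(mktemp -d)

# A script that prints the directory it lives in, like
# os.path.dirname(os.path.abspath(__file__)) does in Python
cat > "$scriptdir/where.sh" << 'EOF'
#!/bin/sh
printf '%s\n' "$(cd "$(dirname "$0")" && pwd -P)"
EOF

found=$(sh "$scriptdir/where.sh")
echo "$found"
```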

+ Color Scheme (June 13, 2022, 11:59 a.m.)

:colorscheme followed by a Space and then TAB cycles through the available schemes.
---------------------------------------------------------------------------------
The short version of the command is :colo
---------------------------------------------------------------------------------
:colo murphy
---------------------------------------------------------------------------------

+ virt-manager -Assign Mobile to Guest (June 11, 2022, 10:15 p.m.)

Assigning a Host USB device to a Guest VM:
1- Enable Developer Mode on the mobile.
2- Enable USB Debugging on the mobile.
3- Open VirtManager, select the VM, click "Edit" in the menus, and select "Virtual Machine Details".
4- Click the "Add Hardware" button, choose "USB Host Device", and finally choose your mobile's name from the list.

+ Check the list of connected devices (June 11, 2022, 7:35 p.m.)

Android emulators:
flutter emulators

Android devices:
flutter devices

+ Show Schemas (June 5, 2022, 1:28 a.m.)

SELECT * FROM pg_catalog.pg_namespace ORDER BY nspname;

+ Schemas and privileges (June 5, 2022, 1:25 a.m.)

Users can only access objects in the schemas that they own; they cannot access any objects in schemas that do not belong to them.

To allow users to access the objects in a schema that they do not own, you must grant the USAGE privilege on the schema to the users:
GRANT USAGE ON SCHEMA schema_name TO role_name;

To allow users to create objects in a schema that they do not own, you need to grant them the CREATE privilege on the schema:
GRANT CREATE ON SCHEMA schema_name TO user_name;

Note that, by default, every user has the CREATE and USAGE privileges on the public schema.

+ Create / Alter / Drop schema (June 5, 2022, 1:11 a.m.)

CREATE SCHEMA sales;

Create a schema for a user:
CREATE SCHEMA AUTHORIZATION john;

ALTER SCHEMA schema_name RENAME TO new_name;
ALTER SCHEMA schema_name OWNER TO { new_owner | CURRENT_USER | SESSION_USER };

DROP SCHEMA IF EXISTS accounting;
DROP SCHEMA IF EXISTS finance, marketing;
DROP SCHEMA sales CASCADE;
-----------------------------------------------------------------------
To add the new schema to the search path:
SET search_path TO sales, public;
-----------------------------------------------------------------------
Now, if you create a new table named staff without specifying the schema name, PostgreSQL will put this staff table into the sales schema.
-----------------------------------------------------------------------
To access the "staff" table in the "sales" schema, you can use one of the following statements:
SELECT * FROM staff;
Or
SELECT * FROM sales.staff;

The "public" schema is the second element in the search path, so to access the "staff" table in the "public" schema, you must qualify the table name as follows:
SELECT * FROM public.staff;

If you use the following command, you will need to explicitly refer to objects in the "sales" schema using a fully qualified name:
SET search_path TO public;

The public schema is not a special schema; therefore, you can drop it too.
-----------------------------------------------------------------------

+ What is a schema? (June 5, 2022, 1:10 a.m.)

A schema is a namespace that contains named database objects such as tables, views, indexes, data types, functions, stored procedures, and operators.

A database can contain one or multiple schemas, and each schema belongs to only one database. Two schemas can have different objects that share the same name. For example, you may have a "sales" schema that has a "staff" table and the "public" schema which also has a "staff" table. When you refer to the "staff" table, you must qualify it as follows:
public.staff
Or
sales.staff
-----------------------------------------------------------------------------------------
Why do you need to use schemas?
There are some scenarios in which you want to use schemas:
- Schemas allow you to organize database objects, e.g. tables, into logical groups to make them more manageable.
- Schemas enable multiple users to use one database without interfering with each other.
-----------------------------------------------------------------------------------------
The public schema:
PostgreSQL automatically creates a schema called "public" for every new database. Whatever object you create without specifying a schema name, PostgreSQL will place into this "public" schema. Therefore, the following statements are equivalent:
CREATE TABLE table_name( ... );
and
CREATE TABLE public.table_name( ... );
-----------------------------------------------------------------------------------------
The schema search path:
In practice, you will refer to a table without its schema name, e.g. the "staff" table instead of a fully qualified name such as "sales.staff". When you reference a table using its name only, PostgreSQL searches for the table by using the "schema search path", which is a list of schemas to look in. PostgreSQL will access the first matching table in the schema search path. If there is no match, it will return an error, even if the name exists in another schema in the database. The first schema in the search path is called the current schema.

Note that when you create a new object without explicitly specifying a schema name, PostgreSQL will also use the current schema for the new object. The current_schema() function returns the current schema:
SELECT current_schema();

Here is the output:
 current_schema
----------------
 public
(1 row)

This is why PostgreSQL uses "public" for every new object that you create. To view the current search path, use the SHOW command in the psql tool:
SHOW search_path;

The output is as follows:
   search_path
-----------------
 "$user", public
(1 row)
-----------------------------------------------------------------------------------------
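The first-match lookup described above can be sketched in plain Python (a toy catalog of schemas and tables — an illustration of the rule, not PostgreSQL's actual implementation):

```python
# Toy catalog: schema name -> set of table names it contains.
catalog = {
    "public": {"staff", "orders"},
    "sales": {"staff"},
}

def resolve(table, search_path):
    """Return the first schema on the search path containing `table`,
    mimicking how PostgreSQL picks the first match or errors out."""
    for schema in search_path:
        if table in catalog.get(schema, set()):
            return f"{schema}.{table}"
    raise LookupError(f'relation "{table}" does not exist')

print(resolve("staff", ["sales", "public"]))   # → sales.staff (first match wins)
print(resolve("orders", ["sales", "public"]))  # → public.orders (falls through)
```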

+ Querying across schemas (June 5, 2022, 12:49 a.m.)

SELECT column_name FROM schema_name.table_name WHERE ...

+ Connect to remote host (June 5, 2022, 12:29 a.m.)

psql -h <hostname> -p 5432 -d testdb -U testuser -W

The -W option will prompt for the password.

+ Transaction Atomic (June 4, 2022, 10:40 p.m.)

By default, each Django ORM query is automatically committed to the database. Sometimes, however, we need to group multiple operations so that they happen either altogether, or not at all. The property of queries grouped indivisibly like this is known as atomicity.

from django.db import transaction

def transfer(source: Account, destination: Account, amount: int) -> None:
    with transaction.atomic():
        BalanceLine.objects.create(
            account=source,
            amount=-amount,
        )
        BalanceLine.objects.create(
            account=destination,
            amount=amount,
        )

Because we’ve wrapped the queries in transaction.atomic, we know that whatever happens, the two balance lines will either both make it into the database or not at all. This is a very good thing, because if only the first balance line were written, the books wouldn’t balance.
--------------------------------------------------------------------------------------------------------------
How does it work?
Under the hood, Django begins a database transaction when transaction.atomic is entered and does not commit it until the context exits without an exception being raised. If for some reason the second ORM operation does raise an exception, the database will roll back the transaction. Corrupt data is avoided. We call this operation ‘atomic’ because, like an atom, it’s indivisible.
--------------------------------------------------------------------------------------------------------------
Nested atomic blocks:
transaction.atomic also supports nesting:

with transaction.atomic():
    ...
    ...
    with transaction.atomic():
        ...
    with transaction.atomic():
        ...
        ...
    with transaction.atomic():
        ...
    ...

The details of how this works are a little harder to wrap one’s head around. In many database engines, such as PostgreSQL, there’s no such thing as a nested transaction.

So instead, Django implements this using a single outer transaction and a series of database savepoints:

with transaction.atomic():        # Begin transaction
    ...
    ...
    with transaction.atomic():    # Savepoint 1
        ...
    with transaction.atomic():    # Savepoint 2
        ...
        ...
    with transaction.atomic():    # Savepoint 3
        ...
    ...
# Transaction will now be committed.

The inner atomic blocks behave slightly differently from the outer one: in the event of an exception, rather than rolling back the whole transaction, an inner block rolls back to the savepoint set when it entered the context manager. The crucial thing to realize here is that the transaction is only committed when we exit the outer block. This means that any database operations executed in inner blocks are still subject to rollback.
--------------------------------------------------------------------------------------------------------------
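The outer-transaction-plus-savepoints mechanism can be demonstrated directly in SQL using Python's sqlite3 module (SQLite also nests via SAVEPOINT; this is an illustration of the pattern, not Django's code):

```python
import sqlite3

# An inner failure rolls back only to its savepoint; the outer
# transaction's work still commits, mirroring nested transaction.atomic.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE balance_line (amount INTEGER)")

conn.execute("BEGIN")                          # outer atomic block
conn.execute("INSERT INTO balance_line VALUES (100)")
try:
    conn.execute("SAVEPOINT sp1")              # inner atomic block
    conn.execute("INSERT INTO balance_line VALUES (-999)")
    raise ValueError("failure inside the inner block")
except ValueError:
    conn.execute("ROLLBACK TO SAVEPOINT sp1")  # undo only the inner work
    conn.execute("RELEASE SAVEPOINT sp1")
conn.execute("COMMIT")                         # outer work is kept

print([row[0] for row in conn.execute("SELECT amount FROM balance_line")])
# → [100]
```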

+ Move cursor to the first or last line (June 4, 2022, 10:31 a.m.)

For going to the first line: gg
For going to the last line: press the Esc key and then press Shift + G

+ zgrep (June 3, 2022, 10:43 a.m.)

Syntax:
zgrep -c "exception" logs.txt.gz
zgrep ismail auth.log.2.gz auth.log.3.gz auth.log.4.gz
zgrep ismail auth.log.*.gz
zgrep -e "ismail" -e "ahmet" auth.log.2.gz
zgrep -i 'stop|shutdown'
-----------------------------------------------------------------------------
-c : Display the number of matching lines for each file.
-i : Ignore case sensitivity.
-n : Display the line number of each line in which the given expression is present.
-v : Display the lines which don’t have the expression present in them; basically, invert the search.
-e : Specify the expression; can be used multiple times.
-o : Display only the matched section of the line from the given expression.
-l : Display the names of the files with the expression present in them.
-w : By default, zgrep displays lines even if the expression is found as a substring. With this option, lines are displayed only if the whole expression is found as a word.
-h : Display the matched lines but not the file names.
-s : Suppress errors about unreadable files that may clutter the output.
-----------------------------------------------------------------------------
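Under the hood, zgrep is just grep over a decompressed stream; a minimal Python sketch of the same idea (the file name and helper are hypothetical, and the file is created here so the example is self-contained):

```python
import gzip

# Write a small gzip-compressed log to search through.
with gzip.open("auth.log.1.gz", "wt") as f:
    f.write("session opened for user ismail\n")
    f.write("connection closed\n")
    f.write("session opened for user ahmet\n")

def zgrep(pattern, path, ignore_case=False):
    """Return lines of a gzip file containing `pattern` (like zgrep / zgrep -i)."""
    if ignore_case:
        pattern = pattern.lower()
    with gzip.open(path, "rt") as f:
        return [line.rstrip("\n") for line in f
                if pattern in (line.lower() if ignore_case else line)]

print(zgrep("ismail", "auth.log.1.gz"))
# → ['session opened for user ismail']
```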

+ List SQL statements (June 1, 2022, 1:09 p.m.)

from django.db import connection
connection.queries
-----------------------------------------------------------------
from django.db import reset_queries
reset_queries()
-----------------------------------------------------------------

+ An (May 31, 2022, 1:50 p.m.)

The letters of the English alphabet whose names begin with a consonant SOUND are B, C, D, G, J, K, P, Q, T, U, V, W, Y, and Z. The letters of the English alphabet whose names begin with a vowel SOUND are A, E, F, H, I, L, M, N, O, R, S, and X. The only variation is that some people use “haitch” as the name of H rather than “aitch” or “itch”. So it is always “an R”, never “a R”.

+ Reset xfce panels (May 31, 2022, 9:43 a.m.)

xfce4-panel --quit
pkill xfconfd
rm -rf ~/.config/xfce4/panel
rm -rf ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-panel.xml
xfce4-panel

+ Intel SGX (May 25, 2022, 3:58 p.m.)

Intel SGX: Intel Software Guard Extensions (Intel SGX) is an Intel technology for application developers who are seeking to protect select code and data from disclosure or modification. ---------------------------------------------------------------------------------------------------- Enclave: - A trusted execution environment embedded in a process. - The core idea of SGX is the creation of a software ‘enclave’. - The enclave is basically a separated and encrypted region for code and data. - The enclave is only decrypted inside the processor, so it is even safe from the RAM being read directly. ----------------------------------------------------------------------------------------------------