+Fix Cleartext Traffic Error in Android 9 Pie (April 2, 2020, 1:57 a.m.)

1- Add a network security config file under res/xml:

2- Add a domain config and set cleartextTrafficPermitted to "true":
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true"></domain>
    </domain-config>
</network-security-config>

3- Add your network security config to your Android manifest file under application:
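Assuming the config file from step 1 is named res/xml/network_security_config.xml (the filename is up to you), the manifest entry looks like this:

```xml
<application
    android:networkSecurityConfig="@xml/network_security_config">
    <!-- activities, services, etc. -->
</application>
```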

+AndroidX (Nov. 8, 2019, 7:07 p.m.)

AndroidX and the Android Support Library cannot live side-by-side in the same Android project - doing so will lead to build failures.

AndroidX (Jetpack) is the successor to the Android Support Library.

+AndroidX / Jetifier (Nov. 8, 2019, 7 p.m.)

android.useAndroidX: When set to true, this flag indicates that you want to start using AndroidX from now on. If the flag is absent, Android Studio behaves as if the flag were set to false.

android.enableJetifier: When set to true, this flag indicates that you want to have tool support (from the Android Gradle plugin) to automatically convert existing third-party libraries as if they were written for AndroidX. If the flag is absent, Android Studio behaves as if the flag were set to false.
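Both flags go in the project's gradle.properties file:

```properties
# gradle.properties
android.useAndroidX=true
android.enableJetifier=true
```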


The standalone Jetifier tool migrates support-library-dependent libraries to rely on the equivalent AndroidX packages instead. The tool lets you migrate an individual library directly, instead of using the Android Gradle plugin bundled with Android Studio.

For example:

If you have the PhotoView library in your dependencies, it uses the support-library AppCompatImageView:

`import android.support.v7.widget.AppCompatImageView;`

This class has now moved to the androidx package, so how does PhotoView get the androidx AppCompatImageView, and how does the app still run on the device?

Who makes this run?

Jetifier, which converts all support packages of the dependency at build time.

Jetifier will convert `android.support.v7.widget.AppCompatImageView` to `androidx.appcompat.widget.AppCompatImageView` while building the project.

Enabling Jetifier is important when you migrate from Support Libraries to AndroidX.


+Animations (June 14, 2019, 2:36 p.m.)

+Platform codenames, versions, API levels, and NDK releases (May 26, 2019, 11:01 p.m.)

Codename Version API level/NDK release
Pie 9 API level 28
Oreo 8.1.0 API level 27
Oreo 8.0.0 API level 26
Nougat 7.1 API level 25
Nougat 7.0 API level 24
Marshmallow 6.0 API level 23
Lollipop 5.1 API level 22
Lollipop 5.0 API level 21
KitKat 4.4 - 4.4.4 API level 19
Jelly Bean 4.3.x API level 18
Jelly Bean 4.2.x API level 17
Jelly Bean 4.1.x API level 16
Ice Cream Sandwich 4.0.3 - 4.0.4 API level 15, NDK 8
Ice Cream Sandwich 4.0.1 - 4.0.2 API level 14, NDK 7
Honeycomb 3.2.x API level 13
Honeycomb 3.1 API level 12, NDK 6
Honeycomb 3.0 API level 11
Gingerbread 2.3.3 - 2.3.7 API level 10
Gingerbread 2.3 - 2.3.2 API level 9, NDK 5
Froyo 2.2.x API level 8, NDK 4
Eclair 2.1 API level 7, NDK 3
Eclair 2.0.1 API level 6
Eclair 2.0 API level 5
Donut 1.6 API level 4, NDK 2
Cupcake 1.5 API level 3, NDK 1
(no codename) 1.1 API level 2
(no codename) 1.0 API level 1

+Action Bar, Toolbar, App Bar (May 26, 2019, 9:17 p.m.)

Toolbar is a generalization of the Action Bar pattern that gives you much more control and flexibility. Toolbar is a view in your hierarchy just like any other, making it easier to interleave with the rest of your views, animate it, and react to scroll events.

You can also set it as your Activity’s action bar, meaning that your standard options menu actions will be displayed within it.
In other words, the ActionBar is now a special kind of Toolbar.

The app bar, formerly known as the action bar in Android, is a special kind of toolbar that is used for branding, navigation, search, and actions.


Toolbar provides greater control to customize its appearance than the old ActionBar. Toolbar features are fully supported on older Android OS devices via the AppCompat support library.

Use a Toolbar as a replacement for the ActionBar. This way you can still continue to use ActionBar features such as menus, selections, etc.

Use a standalone Toolbar wherever you want to place it in your application.


Toolbars are more flexible than the ActionBar. We can easily modify their color, size and position. We can also add labels, logos, navigation icons and other views to them. With Material Design, Android has updated the AppCompat support libraries so that we can use Toolbars on devices running API level 7 and up.


+XML - Introduction (April 25, 2019, 1:44 p.m.)

XML describes the views in your activities, and Java tells them how to behave.
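A minimal illustration, with a made-up id and text (the xmlns URI is the standard Android namespace required in the root element of a layout file):

```xml
<!-- res/layout XML declares what the view is and how it looks -->
<TextView xmlns:android=""
    android:id="@+id/greeting"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Hello" />
```

The Java side would then look the view up by its id (e.g. findViewById( and attach behavior such as click listeners to it.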

+Common naming conventions for icon assets (April 22, 2019, 4:02 a.m.)

Asset Type Prefix Example
Icons ic_ ic_star.png
Launcher icons ic_launcher ic_launcher_calendar.png
Menu icons and Action Bar icons ic_menu ic_menu_archive.png
Status bar icons ic_stat_notify ic_stat_notify_msg.png
Tab icons ic_tab ic_tab_recent.png
Dialog icons ic_dialog ic_dialog_info.png

+Android Studio - Transparent Background Launcher Icon (April 22, 2019, 2:51 a.m.)

1- File > New > Image Asset.

2- Turn to Launcher Icons (Adaptive and Legacy) in Icon Type.

3- Choose Image in Asset Type and select your picture inside Path field (Foreground Layer tab).

4- Create or download a PNG file with a transparent background, 512x512 px in size (this is the size of ic_launcher-web.png).
PNG link:

5- In Background Layer tab select Image in Asset Type and load the transparent background from step 4.

6- In Legacy tab select Yes for all Generate, None for Shape.

7- In Foreground Layer and Background Layer tabs you can change trim size.

Though you will see a black background behind the image in the Preview window, after pressing Next and Finish and compiling the application you will see a transparent background on Android 5 through Android 8.

+NDK (April 19, 2019, 6:38 p.m.)

The Native Development Kit (NDK) is a set of tools that allow you to use C and C++ code in your Android app. It provides platform libraries to manage native activities and access hardware components such as sensors and touch input.

The NDK may not be appropriate for most novice Android programmers who need to use only Java code and framework APIs to develop their apps. However, the NDK can be useful for the following cases:

- Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.

- Reuse code between your iOS and Android apps.

- Use libraries like FFMPEG, OpenCV, etc.

+SDK / NDK (April 19, 2019, 6:34 p.m.)

Software Development Kit (SDK)
Native Development Kit (NDK)

Traditionally, almost all software development kits (SDKs) were written in C, very few in C++. Then Google came along, released a Java-based library for Android, and called it an SDK.

However, then came the demand for a C/C++-based library for development, primarily from C/C++ developers aiming at game development and some high-performance apps.

So, Google released a C/C++-based library called the Native Development Kit (NDK).

+ADB (Oct. 2, 2015, 5:04 p.m.)

apt install android-tools-adb android-tools-fastboot

+Android Development Environment (July 6, 2016, 11:58 a.m.)

Visit the following links to get information about the dependencies you might need for the SDK version you intend to download:


You might find the tools and all the dependencies in the following links:


1- Create a folder preferably name it "android-sdk-linux" in any location.

2- Downloading SDK Tools:
From the following link, scroll to the bottom of the page, the table having the title "Command line tools only" and download the "Linux" package.
Extract the downloaded file "" to the folder you created in step 1.

3- Download an API level (for example, or which is for Android 4.0.4).
Create a folder named "platforms" in "android-sdk-linux" and extract the downloaded file to it.

4- Download the latest version of `build-tools` (
Create a folder named `build-tools` in `android-sdk-linux` and extract the downloaded archive into it.
You need to rename the extracted folder to `25`.

5- Download the latest version of `platform-tools` (
Extract it into the folder `android-sdk-linux`. The archive already contains a folder named `platform-tools`, so there is no need to create any further folders.

6- Open the file `~/.bashrc` and add the following line to it:
export ANDROID_HOME=/home/mohsen/Programs/Android/Development/android-sdk-linux

7- apt install openjdk-9-jdk
If you got errors like this:
dpkg: warning: trying to overwrite '/usr/lib/jvm/java-9-openjdk-amd64/include/linux/jawt_md.h', which is also in package openjdk-9-jdk-headless

To solve the error:
apt-get -o Dpkg::Options::="--force-overwrite" install openjdk-9-jdk


+AVD with HAXM or KVM (Emulators) (April 10, 2016, 9:25 a.m.)

Official Website:


For a faster emulator, use the HAXM device driver.
Linux Link:

As described in the above link, Linux users need to use KVM.
Taken from the above website:
(Since Google mainly supports Android build on Linux platform (with Ubuntu 64-bit OS as top Linux platform, and OS X as 2nd), and a lot of Android Developers are using AVD on Eclipse or Android Studio hosted by a Linux system, it is very critical that Android developers take advantage of Intel hardware-assisted KVM virtualization for Linux just like HAXM for Windows and OS X.)


KVM Installation:

1- egrep -c '(vmx|svm)' /proc/cpuinfo
If the output is 0 it means that your CPU doesn't support hardware virtualization.

2- apt install cpu-checker
Now you can check if your cpu supports kvm:
# kvm-ok

3- To see if your processor is 64-bit, you can run this command:
egrep -c ' lm ' /proc/cpuinfo
If 0 is printed, it means that your CPU is not 64-bit.
If 1 or higher, it is.
Note: lm stands for Long Mode which equates to a 64-bit CPU.

4- Now see if your running kernel is 64-bit:
uname -m

5- apt install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils ia32-libs-multiarch
If a screen with `Postfix Configuration` was displayed, ignore it by selecting `No Configuration`.

6- Next is to add your <username> account to the groups kvm and libvirtd:
sudo adduser mohsen kvm
sudo adduser mohsen libvirtd

7- Verify Installation:
You can test if your install has been successful with the following command:
sudo virsh -c qemu:///system list
Your screen will show the following if successful:
Id Name State


8- Install Java:
Java has to be installed in order to run Android emulator x86 system images.
sudo apt-get install openjdk-8-jre

9- Download a System Image from the following link:
Create a folder named `system-images` in `android-sdk-linux` and extract the downloaded system image in it. (You might need to create another folder inside, named `default`.)
Run the Android SDK Manager; you will probably see the system image under `Extras`, shown as broken.
If so, to solve the problem, download its API from this link and extract it into the `platforms` folder:

10- Start the AVD from the Android SDK directly from the terminal and create a Virtual Device:
~/Programs/Android/Development/android-sdk-linux/tools/android avd


+AsyncSubject (Oct. 27, 2019, 7:07 p.m.)


This is very different from the others. With AsyncSubject, you only want the very last value as the subject completes. Figure that we have a bunch of values that can potentially be sent out, but you’re only interested in the most up-to-date value.

AsyncSubject emits the last value and only the last value to subscribers when the sequence of data that’s being sent out is actually completed.


While the BehaviorSubject and ReplaySubject both store values, the AsyncSubject works a bit differently. The AsyncSubject is a Subject variant where only the last value of the Observable execution is sent to its subscribers, and only when the execution completes.
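The behaviour can be sketched with a minimal, self-contained class (NOT the real RxJS implementation — just the last-value-on-complete semantics described above):

```typescript
// Minimal sketch of AsyncSubject semantics: values are remembered but
// nothing is delivered until complete(), and then subscribers receive
// only the last value.
class TinyAsyncSubject<T> {
  private last: T | undefined;
  private hasValue = false;
  private done = false;
  private listeners: Array<(v: T) => void> = [];

  next(value: T): void {
    if (!this.done) {
      this.last = value;
      this.hasValue = true;
    }
  }

  subscribe(fn: (v: T) => void): void {
    // A subscriber arriving after completion still gets the final value.
    if (this.done && this.hasValue) fn(this.last as T);
    else this.listeners.push(fn);
  }

  complete(): void {
    this.done = true;
    if (this.hasValue) this.listeners.forEach((fn) => fn(this.last as T));
  }
}

const subject = new TinyAsyncSubject<number>();
const received: number[] = [];
subject.subscribe((v) => received.push(v));

subject.next(1);
subject.next(2);
subject.next(3);    // still nothing delivered
subject.complete(); // now the subscriber receives only 3

console.log(received); // [3]
```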


+ReplaySubject (Oct. 27, 2019, 7:04 p.m.)


Like BehaviorSubject, ReplaySubject can also replay the last value that was sent out to any new subscribers. The difference is that it can also replay all of the previous values if you like. You can think of this as caching any data that has been sent out, so that any other components that subscribe later can still get that data.

With ReplaySubject we can replay everything that was previously sent.


The ReplaySubject is comparable to the BehaviorSubject in the way that it can send “old” values to new subscribers. It however has the extra characteristic that it can record a part of the observable execution and therefore store multiple old values and “replay” them to new subscribers.


When creating the ReplaySubject you can specify how many values you want to store and for how long you want to store them. In other words, you can specify: “I want to store the last 5 values that have been emitted in the last second prior to a new subscription”.


+BehaviorSubject (Oct. 27, 2019, 7 p.m.)


BehaviorSubject is very similar to Subject, except that it has one big feature that Subject doesn’t have: the ability for subscribers that come in later in the flow to still get some of the previous data.

BehaviorSubject allows you to send the last piece of data to any new observers, any new subscribers. That way they can still stay in sync. They’re not going to have all the previous values, but at least they have the latest value.


The BehaviorSubject has the characteristic that it stores the “current” value. This means that you can always directly get the last emitted value from the BehaviorSubject.


There are two ways to get this last emitted value. You can either get the value by accessing the .value property on the BehaviorSubject, or you can subscribe to it. If you subscribe to it, the BehaviorSubject will directly emit the current value to the subscriber, even if the subscriber subscribes much later than the value was stored.
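Both access paths can be sketched with a minimal, self-contained class (NOT the real RxJS class; just the current-value semantics described above):

```typescript
// Minimal sketch of BehaviorSubject semantics: it always stores the
// "current" value, exposes it via .value, and emits it immediately to
// any new subscriber.
class TinyBehaviorSubject<T> {
  private listeners: Array<(v: T) => void> = [];

  constructor(public value: T) {}

  next(value: T): void {
    this.value = value;
    this.listeners.forEach((fn) => fn(value));
  }

  subscribe(fn: (v: T) => void): void {
    fn(this.value); // late subscribers get the current value right away
    this.listeners.push(fn);
  }
}

const subject = new TinyBehaviorSubject<string>("initial");
subject.next("updated"); // no subscribers yet, but the value is stored

const received: string[] = [];
subject.subscribe((v) => received.push(v));

console.log(subject.value); // "updated" — direct access via .value
console.log(received);      // ["updated"] — the late subscriber still got it
```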


+RxJS (ReactiveX) (Oct. 27, 2019, 6:48 p.m.)

RxJS is a library for composing asynchronous and event-based programs by using observable sequences. It provides one core type, the Observable, satellite types (Observer, Schedulers, Subjects) and operators inspired by Array#extras (map, filter, reduce, every, etc) to allow handling asynchronous events as collections.

ReactiveX combines the Observer pattern with the Iterator pattern and functional programming with collections to fill the need for an ideal way of managing sequences of events.


The essential concepts in RxJS which solve async event management are:

- Observable: represents the idea of an invokable collection of future values or events.
- Observer: is a collection of callbacks that knows how to listen to values delivered by the Observable.
- Subscription: represents the execution of an Observable, is primarily useful for cancelling the execution.
- Operators: are pure functions that enable a functional programming style of dealing with collections with operations like map, filter, concat, flatMap, etc.
- Subject: is the equivalent to an EventEmitter, and the only way of multicasting a value or event to multiple Observers.
- Schedulers: are centralized dispatchers to control concurrency, allowing us to coordinate when computation happens on e.g. setTimeout or requestAnimationFrame or others.



Normally you register event listeners.

var button = document.querySelector('button');
button.addEventListener('click', () => console.log('Clicked!'));

Using RxJS you create an observable instead.

var button = document.querySelector('button');
Rx.Observable.fromEvent(button, 'click')
.subscribe(() => console.log('Clicked!'));


+Subjects (Oct. 27, 2019, 1:52 p.m.)

Subject provides a way to send one or more data values to listeners.

With Subject we send data to subscribed observers, but any previously emitted data is not going to be sent as you subscribed later. You’re only going to get the data that occurs after you’ve subscribed.


A Subject is like an Observable. It can be subscribed to, just like you normally would with Observables. It also has methods like next(), error() and complete() just like the observer you normally pass to your Observable creation function.

The main reason to use Subjects is to multicast. An Observable by default is unicast. Unicasting means that each subscribed observer owns an independent execution of the Observable.


Subjects are used for multicasting Observables. This means that Subjects will make sure each subscription gets the exact same value as the Observable execution is shared among the subscribers. You can do this using the Subject class. But rxjs offers different types of Subjects, namely: BehaviorSubject, ReplaySubject and AsyncSubject.


import * as Rx from "rxjs";

const observable = Rx.Observable.create((observer) => {
    observer.next(Math.random());
});

// subscription 1
observable.subscribe((data) => {
    console.log(data); // 0.24957144215097515 (random number)
});

// subscription 2
observable.subscribe((data) => {
    console.log(data); // 0.004617340049055896 (random number)
});


How to use Subjects to multicast:
Multicasting is a characteristic of a Subject. You don’t have to do anything special to achieve this behaviour.

import * as Rx from "rxjs";

const subject = new Rx.Subject();

// subscriber 1
subject.subscribe((data) => {
    console.log(data); // 0.24957144215097515 (random number)
});

// subscriber 2
subject.subscribe((data) => {
    console.log(data); // 0.24957144215097515 (random number)
});

subject.next(Math.random());


+Observables (Oct. 27, 2019, 1:59 p.m.)

Angular uses observables extensively in the event system and the HTTP service.

Observables provide the support for passing the messages between publishers (Creator of Observables) and subscribers (User of Observables) in your application.

Observables are declarative, that is, you define the function for publishing values, but it is not executed until the consumer subscribes to it.


Define Angular Observers:

The handler for receiving the observable notifications implements the Observer interface. It is an object that defines the callback methods to handle the three types of notifications that an observable can send. These are the following.

- next: Required. The handler for each delivered value called zero or more times after execution starts.
- error: Optional. The handler for error notification. The error halts the execution of the observable instance.
- complete: Optional. The handler for an execution-complete notification. The delayed values can continue to be delivered to a next handler after execution is complete.


+ECMAScript(ES) (Oct. 27, 2019, 1:56 p.m.)

ECMAScript is a simple standard for JavaScript and adding new features to JavaScript.

ECMAScript is a subset of JavaScript.

JavaScript is basically ECMAScript at its core but builds upon it.

Languages such as ActionScript, JavaScript and JScript all use ECMAScript as their core.

As a comparison: AS/JS/JScript are three different cars, but they all use the same engine… each of their exteriors is different, though, and several modifications have been made to each to make it unique.

+Sort array of objects (Oct. 6, 2019, 8:26 a.m.)

this.menus.sort((obj1, obj2) => {
    return obj1.ordering - obj2.ordering;
});

+Forms (Oct. 2, 2019, 10:52 p.m.)

Angular provides two different approaches for managing the forms:
1- Reactive approach (or Model-driven forms)
2- Template-driven approach


Both reactive and template-driven forms share underlying common building blocks which are the following.

1- FormControl: It tracks the value and validation status of the individual form control.
2- FormGroup: It tracks the same values and status for the collection of form controls.
3- FormArray: It tracks the same values and status for the array of the form controls.
4- ControlValueAccessor: It creates the bridge between Angular FormControl instances and native DOM elements.


Reactive forms:
Reactive forms or Model-driven forms are more robust, scalable, reusable, and testable. If forms are the key part of your application, or you’re already using reactive patterns for building your web application, use reactive forms.

In Reactive Forms, most of the work is done in the component class.


Template-driven forms:
Template-driven forms are useful for adding the simple form to an app, such as the email list signup form. They’re easy to add to a web app, but they don’t scale as well as the reactive forms.

If you have the fundamental form requirements and logic that can be managed solely in the template, use template-driven forms.

In template-driven forms, most of the work is done in the template.


FormControl:
It tracks the value and validity status of an individual Angular form control. It corresponds to an HTML form control such as an input.

this.username = new FormControl('agustin', Validators.required);


FormGroup:
It tracks the value and validity state of a group of form controls. It aggregates the values of each child FormControl into one object, using the name of each form control as the key.
It calculates its status by reducing the statuses of its children. If one of the controls inside a group is invalid, the entire group becomes invalid.

this.user_data = new FormGroup({
    username: new FormControl('agustin', Validators.required),
    city: new FormControl('Montevideo', Validators.required)
});


FormArray:
It is a variation of FormGroup. The main difference is that its data gets serialized as an array, as opposed to being serialized as an object in the case of FormGroup. This might be especially useful when you don’t know how many controls will be present within the group, like in dynamic forms.

this.user_data = new FormArray([
    new FormControl('agustin', Validators.required),
    new FormControl('Montevideo', Validators.required)
]);


FormBuilder:
It is a helper class that creates FormGroup, FormControl and FormArray instances for us. It basically reduces repetition and clutter by handling the details of form control creation for you.

this.validations_form ={
    username: new FormControl('', Validators.required),
    email: new FormControl('', Validators.compose([
        Validators.required,
    ]))
});


+Material Design (Aug. 31, 2019, 9:54 a.m.)

ng add @angular/material



Using a pre-built theme:

Material Design Icons:

+Libraries / Packages (Aug. 31, 2019, 3:42 a.m.)

npm install bootstrap jquery popper.js

Material Design:
npm install --save @angular/material @angular/cdk @angular/animations @angular/flex-layout material-design-icons hammerjs

npm install rxjs-compat --save
npm install ng2-slim-loading-bar @angular/core --save

+Angular Releases (Aug. 31, 2019, 2:15 a.m.)

+CLI commands (June 28, 2019, 7:39 p.m.)

Display list of available commands:
ng help

ng new project_name

ng --version

npm install bootstrap jquery popper.js --save

ng serve -o
ng serve --watch

ng g c product-add --skipTests=true

ng build --prod

+Install / Update Angular CLI (June 28, 2019, 7:33 p.m.)

Angular CLI helps us to create projects, generate application and library code, and perform a variety of ongoing development tasks such as testing, bundling, and deployment.

First, install Nodejs using my Nodejs notes, then:
sudo npm install -g @angular/cli

+Common Options (May 16, 2018, 3:06 p.m.)


--ask-su-pass

Ask for su password (deprecated, use become)



--ask-sudo-pass

Ask for sudo password (deprecated, use become)



--become-user <BECOME_USER>

Run operations as this user (default=root)



--list-hosts

Outputs a list of matching hosts; does not execute anything else



--list-tasks

List all tasks that would be executed


--private-key, --key-file

Use this file to authenticate the connection


--start-at-task <START_AT_TASK>

Start the playbook at the task matching this name



--step

One-step-at-a-time: confirm each task before running



--syntax-check

Perform a syntax check on the playbook, but do not execute it


-C, --check

Don’t make any changes; instead, try to predict some of the changes that may occur


-D, --diff

When changing (small) files and templates, show the differences in those files; works great with --check


-K, --ask-become-pass

Ask for privilege escalation password


-S, --su

Run operations with su (deprecated, use become)


-b, --become

Run operations with become (does not imply password prompting)


-e, --extra-vars

Set additional variables as key=value or YAML/JSON, if filename prepend with @


-f <FORKS>, --forks <FORKS>

Specify number of parallel processes to use (default=5)


-i, --inventory, --inventory-file

Specify inventory host path (default=/etc/ansible/hosts) or comma-separated host list. --inventory-file is deprecated


-k, --ask-pass

Ask for connection password



-u <REMOTE_USER>, --user <REMOTE_USER>

Connect as this user (default=None)


-v, --verbose

Verbose mode (-vvv for more, -vvvv to enable connection debugging)


+Display output to console (May 16, 2018, 4:40 p.m.)

Every Ansible task, when run, can save its result into a variable. To do this, you have to specify which variable to save the result in, using the "register" parameter.

Once you save the value to a variable you can use it later in any of the subsequent tasks. So for example if you want to get the standard output of a specific task you can write the following:

ansible-playbook ansible/postgres.yml -e delete_old_backups=true

- hosts: localhost
  tasks:
    - name: Delete old database backups
      command: echo '{{ delete_old_backups }}'
      register: out

    - debug:
        var: out.stdout_lines


You can also use -v when running ansible-playbook.


+Pass conditional boolean value (May 16, 2018, 4:53 p.m.)

- name: Delete old database backups
  command: echo {{ delete_old_backups }}
  when: delete_old_backups|bool

+Basic Commands (Jan. 7, 2017, 11:54 a.m.)

ansible test_servers -m ping


ansible-playbook playbook.yml

ansible-playbook playbook.yml --check


ansible-playbook site.yaml -i hostinv -e firstvar=false -e second_var=value2

ansible-playbook release.yml -e "version=1.23.45 other_variable=foo"


+Inventory File (Jan. 7, 2017, 11:04 a.m.)

[postgres_servers]
<host1> ansible_user=root
<host2> ansible_user=mohsen




localhost ansible_connection=local
other1.example.com ansible_connection=ssh ansible_user=mpdehaan
other2.example.com ansible_connection=ssh ansible_user=mdehaan


Host Variables:

host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909


Group Variables:




Groups of Groups, and Group Variables:

It is also possible to make groups of groups using the :children suffix. Just like above, you can apply variables using :vars:







+Installation (Dec. 13, 2016, 4:33 p.m.)

sudo apt-get install libffi-dev libssl-dev python-pip python-setuptools
pip install ansible
pip install markupsafe

+Auth Types (Oct. 14, 2019, midnight)

# Backward compatibility with apache 2.2
Order allow,deny
Allow from all

# Forward compatibility with apache 2.4
Require all granted
Satisfy Any


<IfVersion < 2.4>
    Allow from all
</IfVersion>
<IfVersion >= 2.4>
    Require all granted
</IfVersion>


+Installation (Sept. 6, 2017, 11:11 a.m.)

For Debian earlier than Stretch:
apt-get install apache2 apache2.2-common apache2-mpm-prefork apache2-utils libexpat1 libapache2-mod-wsgi-py3 python-pip python-dev build-essential

For Debian Stretch:
apt-get install apache2 apache2-utils libexpat1 libapache2-mod-wsgi-py3 python-pip python-dev build-essential

+Password Protect via .htaccess (Feb. 26, 2017, 6:14 p.m.)

1- Create a file named `.htaccess` in the root of the website, with this content:

AuthName "Deskbit's Support"
AuthUserFile /etc/apache2/.htpasswd
AuthType Basic
Require valid-user

2- Create the password file:
htpasswd -c /etc/apache2/.htpasswd mohsen

3- Add this to the <Directory> block:

<Directory /var/www/support/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>

4- Restart Apache:
/etc/init.d/apache2 restart

+Configs for two different ports on same IP (Sept. 26, 2016, 10:07 p.m.)

NameVirtualHost *:80
<VirtualHost *:80>
    LogLevel warn
    ErrorLog /home/mohsen/logs/eccgroup_error.log
    WSGIScriptAlias / /home/mohsen/websites/ecc/ecc/
    WSGIDaemonProcess ecc python-path=/home/mohsen/websites/ecc:/home/mohsen/virtualenvs/django-1.10/lib/python3.4/site-packages
    WSGIProcessGroup ecc

    Alias /static /home/mohsen/websites/ecc/ecc/static
    <Directory /home/mohsen/websites/ecc/ecc/static>
        Require all granted
    </Directory>

    <Directory />
        Require all granted
    </Directory>
</VirtualHost>

Listen 8081
NameVirtualHost *:8081
<VirtualHost *:8081>
    ErrorLog /var/log/apache2/freepbx.error.log
    CustomLog /var/log/apache2/freepbx.access.log combined
    DocumentRoot /var/www/html

    <Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

+Error Check (March 4, 2015, 12:06 p.m.)

sudo systemctl status apache2.service -l

# tail -f /var/log/apache2/error.log

+VirtualHost For Django Sites (March 4, 2015, 10:34 a.m.)

For Centos:
1- yum install mod_wsgi httpd httpd-devel


For Debian:

2- Create a virtual host:
sudo nano /etc/apache2/sites-available/
sudo nano /etc/httpd/conf.d/


3- Create your new virtual host node which should look something like this:

<VirtualHost *:80>
    DocumentRoot /srv/mpei
    WSGIScriptAlias / /srv/mpei/mpei/

    LogLevel info
    ErrorLog /var/log/mpei_error.log

    WSGIDaemonProcess mpei processes=2 threads=15 python-path=/var/www/.virtualenvs/django-1.7/lib/python3.4/site-packages
    # WSGISocketPrefix /var/run/wsgi

    Alias /media/ /srv/mpei/mpei/media/
    Alias /static/ /srv/mpei/mpei/static/

    <Directory /srv/mpei/mpei/static>
        # For Apache 2.2
        Allow from all

        # For Apache 2.4
        Require all granted
    </Directory>

    <Directory /srv/mpei/mpei/media>
        # For Apache 2.2
        Allow from all

        # For Apache 2.4
        Require all granted
    </Directory>

    <Directory /srv/mpei/mpei>
        # For Apache 2.2
        Order deny,allow
        Allow from all

        # For Apache 2.4
        Require all granted
    </Directory>
</VirtualHost>


4- Edit the file within the main app of your project:

import os
import sys

# Add the app's directory to the PYTHONPATH
sys.path.append('/srv/mpei')

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mpei.settings")

# Activate your virtualenv

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()


5- Enable the virtual host (For Debian):
a2ensite

6- If you want to disable a site, you can run:
a2dissite


Compiling mod_wsgi

If you're using another version of python, you'll need to compile mod_wsgi from source to match your virtual env.

1- Download the latest version from the following website:

2- Untar it, CD to the folder, and:
sudo ./configure --with-python=/usr/local/bin/python3.6
sudo LD_RUN_PATH=/usr/local/lib make
sudo make install

It will replace the one you had probably installed via the Linux package manager, and solves any probable import errors.


Serving the admin files:

cd /srv/mpei/mpei/static/
ln -s /var/www/.virtualenvs/django-1.7/lib/python3.4/site-packages/django/contrib/admin/static/admin .


For debugging, use the ErrorLog directive in the above Apache config:
tail -f /var/log/mpei_error.log


Listen 8000
WSGISocketPrefix /run/wsgi
<VirtualHost *:8000>
    DocumentRoot /srv/mpei
    WSGIScriptAlias / /srv/mpei/mpei/

    LogLevel info
    ErrorLog /var/log/mpei_error.log

    WSGIDaemonProcess mpei processes=2 threads=15 python-path=/srv/.virtualenvs/django-1.7/lib/python3.4/site-packages
    WSGIProcessGroup mpei

    Alias /media/ /srv/mpei/mpei/media/
    Alias /static/ /srv/mpei/mpei/static/

    <Directory /srv/mpei/mpei/static>
        Allow from all
    </Directory>

    <Directory /srv/mpei/mpei/media>
        Allow from all
    </Directory>

    <Directory /srv/mpei/mpei>
        Require all granted
    </Directory>

    Alias /recordings /var/spool/asterisk/
    <Directory /var/spool/asterisk/>
        Require all granted
        Options Indexes FollowSymlinks
    </Directory>
</VirtualHost>

+Apache config files (Jan. 5, 2015, 4:51 p.m.)

Contents of file: /etc/apache2/sites-enabled/000-default.conf

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    ScriptAlias /cgi-bin/ /var/cgi-bin/
    <Directory "/var/cgi-bin">
        AllowOverride All
        Options None
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Create a file named .htaccess in /var/cgi-bin with this content:

AuthType Basic
AuthName "Restricted Access"
AuthUserFile /var/cgi-bin/.htpasswd
Require user mohsen

Then run:
htpasswd -c /var/cgi-bin/.htpasswd mohsen
And enter a desired password to create the password file.

+Creating /etc/init.d/asterisk (Jan. 5, 2015, 2:08 p.m.)

1- cp asterisk-13.1.0/contrib/init.d/rc.debian.asterisk /etc/init.d/asterisk

2- Change the lines to these values:

If you run it right now, you will get the error:
Restarting asterisk (via systemctl): asterisk.service
Failed to restart asterisk.service: Unit asterisk.service failed to load: No such file or directory.

I restarted the server (reboot) and after booting up it ran successfully (/etc/init.d/asterisk start).

+Perl Packages/Libraries for Debian (Jan. 2, 2015, 12:29 p.m.)

Before starting the installation, be careful: you will need to install some packages from Synaptic, and they might pull in another version of `asterisk` and `asterisk-core`, plus lots of other libraries, all of which might break the one you just installed! So make sure the packages you need are installed from source, and do not say YES to apt-get without checking which libraries it wants to install!
1-apt-get install libghc-ami-dev

2-Install this file `dpkg --install libasterisk-ami-perl_0.2.8-1_all.deb`
If you don't have it, refer to the following link for creating this .deb file

3-Copy the codecs binary `` to the path `/usr/lib/asterisk/modules`.
Rename it to `` and, based on the other modules in this directory, set the file's permissions (chmod) and ownership (chown) to match.
You can find it from this link:

+Running Asterisk as a Service (Dec. 15, 2014, 2:44 p.m.)

The most common way to run Asterisk in a production environment is as a service. Asterisk includes both a make target for installing Asterisk as a service and a script - safe_asterisk - that will manage the service and automatically restart Asterisk in case of errors.

Asterisk can be installed as a service using the make config target:
# make config
/etc/rc0.d/K91asterisk -> ../init.d/asterisk
/etc/rc1.d/K91asterisk -> ../init.d/asterisk
/etc/rc6.d/K91asterisk -> ../init.d/asterisk
/etc/rc2.d/S50asterisk -> ../init.d/asterisk
/etc/rc3.d/S50asterisk -> ../init.d/asterisk
/etc/rc4.d/S50asterisk -> ../init.d/asterisk
/etc/rc5.d/S50asterisk -> ../init.d/asterisk
Asterisk can now be started as a service:
# service asterisk start
* Starting Asterisk PBX: asterisk [ OK ]
And stopped:
# service asterisk stop
* Stopping Asterisk PBX: asterisk [ OK ]
And restarted:
# service asterisk restart
* Stopping Asterisk PBX: asterisk [ OK ]
* Starting Asterisk PBX: asterisk [ OK ]

+Executing as another User (Dec. 15, 2014, 2:42 p.m.)

Do not run as root
Running Asterisk as root or as a user with super user permissions is dangerous and not recommended. There are many ways Asterisk can affect the system on which it operates, and running as root can increase the cost of small configuration mistakes.

Asterisk can be run as another user using the -U option:
# asterisk -U asteriskuser

Often, this option is specified in conjunction with the -G option, which specifies the group to run under:
# asterisk -U asteriskuser -G asteriskuser

When running Asterisk as another user, make sure that user owns the various directories that Asterisk will access:
# sudo chown -R asteriskuser:asteriskuser /usr/lib/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/lib/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/spool/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/log/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/run/asterisk
# sudo chown asteriskuser:asteriskuser /usr/sbin/asterisk

+Commands (Dec. 15, 2014, 12:59 p.m.)

You can get a CLI (Command Line Interface) console to an already-running daemon by typing
asterisk -r
Another description for option '-r':
In order to connect to a running Asterisk process, you can attach a remote console using the -r option
To disconnect from a connected remote console, simply hit Ctrl+C.
To shut down Asterisk, issue:
core stop gracefully
There are three common commands related to stopping the Asterisk service. They are:
core stop now - This command stops the Asterisk service immediately, ending any calls in progress.
core stop gracefully - This command prevents new calls from starting up in Asterisk, but allows calls in progress to continue. When all the calls have finished, Asterisk stops.
core stop when convenient - This command waits until Asterisk has no calls in progress, and then it stops the service. It does not prevent new calls from entering the system.

There are three related commands for restarting Asterisk as well.
core restart now - This command restarts the Asterisk service immediately, ending any calls in progress.
core restart gracefully - This command prevents new calls from starting up in Asterisk, but allows calls in progress to continue. When all the calls have finished, Asterisk restarts.
core restart when convenient - This command waits until Asterisk has no calls in progress, and then it restarts the service. It does not prevent new calls from entering the system.

There is also a command if you change your mind.
core abort shutdown - This command aborts a shutdown or restart which was previously initiated with the gracefully or when convenient options.
sip show peers - returns a list of chan_sip loaded peers
voicemail show users - returns a list of app_voicemail loaded users
core set debug 5 - sets the core debug to level 5 verbosity.
core show version
asterisk -h : Help. Run '/sbin/asterisk -h' to get a list of the available command line parameters.
asterisk -C <configfile>: Starts Asterisk with a different configuration file than the default /etc/asterisk/asterisk.conf.
-f : Foreground. Starts Asterisk but does not fork as a background daemon.
-c : Enables console mode. Starts Asterisk in the foreground (implies -f), with a console command line interface (CLI) that can be used to issue commands and view the state of the system.
-r : Remote console. Starts a CLI console which connects to an instance of Asterisk already running on this machine as a background daemon.
-R : Remote console. Starts a CLI console which connects to an instance of Asterisk already running on this machine as a background daemon and attempts to reconnect if disconnected.
-t : Record soundfiles in /var/tmp and move them where they belong after they are done.
-T : Display the time in "Mmm dd hh:mm:ss" format for each line of output to the CLI.
-n : Disable console colorization (for use with -c or -r)
-i: Prompt for cryptographic initialization passcodes at startup.
-p : Run with real-time priority (as a pseudo-realtime thread).
-q : Quiet mode (suppress output)
-v : Increase verbosity (multiple v's = more verbose)
-V : Display version number and exit.
-d : Enable extra debugging across all modules.
-g : Makes Asterisk dump core in the case of a segmentation violation.
-G <group> : Run as a group other than the caller.
-U <user> : Run as a user other than the caller
-x <cmd> : Execute command <cmd> (only valid with -r)

+Installation (Dec. 14, 2014, 9:36 p.m.)

Before starting the installation, be careful: you need to install some packages from Synaptic, and they might pull in another version of `asterisk` and `asterisk-core`, plus lots of other libraries, all of which might break the one you just installed! So make sure the packages you need are installed from source, and do not say YES to apt-get without checking which libraries it wants to install!
Install these libraries first:
1-apt-get install libapache2-mod-auth-pgsql libanyevent-perl odbc-postgresql unixodbc unixodbc-dev libltdl-dev

2-Download the file asterisk-13-current.tar.gz from this link:
a) Untar it.
You will need this untarred asterisk file in the following steps.

----------- Building and Installing pjproject -----------
1-Using the link download pjproject-2.3.tar.bz2

a) Untar and CD to the pjproject

b) ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'

c) make dep

d) make

e) make install

f) ldconfig

Now, to check that you have successfully installed pjproject and that Asterisk detects its libraries, untar and cd into the asterisk directory (I know you have not installed it yet, just move to the folder now :D), and enter the following command:

g) apt-get install libjansson-dev uuid-dev snmpd libperl-dev libncurses5-dev libxml2-dev libsqlite3-dev

*** important ***
Before continuing to the next step, you have to know that, based on the needs of the Shetab company, you need to enable the `res_snmp` module. To enable it you need to install `net-snmp_5.4.3`, and since it is not in Synaptic, you have to install it from source:
1-Download it from:
2-Install it using ./configure, make and make install
*** End of important ***

h) ./configure --without-pwlib (If you don't use this --without switch, you will get the following error, even if you have already installed the ptlib packages!)
Cannot find ptlib-config - please install and try again

i) make menuselect

j) Browse to the eleventh category, `Resource Modules`, and make sure the `res_snmp` module at the bottom of the list is checked. Exit the menu with the Escape key and continue with installing Asterisk.

----------- Building and Installing Asterisk -----------
2- Make sure you are still in the asterisk directory.

c) make
I got many errors surrounded by '**************' (many asterisks) telling me these modules were needed:
res_curl, res_odbc, res_crypto, res_config_curl ... (and many more). I just installed postgresql and the `make` command continued with no errors!

d) make install

e) make samples

f) make progdocs

Now continue installation process with Perl packages from my tutorials.
After that, refer to `Creating /etc/init.d/asterisk` in my tutorials.

Beautiful Soup
+Remove tags from an element (March 7, 2020, 5:44 p.m.)

comments = soup.findAll('div', {'class': 'cmnt-text'})
for comment in comments:
    for tag in comment.findAll():
        tag.unwrap()  # assumed body (the original note omitted it); unwrap() strips nested tags but keeps their text
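A self-contained version of the same idea (the HTML, the class name, and the choice of unwrap() are illustrative assumptions, not from the original note):

```python
from bs4 import BeautifulSoup

# Invented sample markup for demonstration purposes.
html = '<div class="cmnt-text">good <b>bold</b> point <a href="#">link</a></div>'
soup = BeautifulSoup(html, 'html.parser')

for comment in soup.findAll('div', {'class': 'cmnt-text'}):
    for tag in comment.findAll():
        tag.unwrap()  # replace each nested tag with its contents, keeping the text

print(soup)  # -> <div class="cmnt-text">good bold point link</div>
```

Use tag.extract() instead of unwrap() if you want to drop the nested tags together with their text.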

+Methods (March 7, 2020, 5:24 p.m.)

comment = soup.find('div', {'class': 'comment-user'})
Returns the first matching element: <class 'bs4.element.Tag'>


comments = soup.findAll('div', {'class': 'comment-user'})
Returns all matching elements: <class 'bs4.element.ResultSet'>


question = soup.find('p', {'itemprop': 'text'}).text
Returns the text content of the first matching element.


image_url = image_tag.find('img').get('src')
Returns the value of the src attribute of the first <img> descendant.


comment_boxes = comments_placeholder.findAll('app-comment')
find/findAll can also be called on any Tag to search within it.


comment = comment_box.find('p', {'class': 'text', 'itemprop': 'text'})
Multiple attributes can be combined in one filter dictionary.
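A runnable sketch of the find/findAll distinction (markup invented for illustration), assuming beautifulsoup4 is installed:

```python
from bs4 import BeautifulSoup

html = '<div class="comment-user">alice</div><div class="comment-user">bob</div>'
soup = BeautifulSoup(html, 'html.parser')

first = soup.find('div', {'class': 'comment-user'})     # first match only
every = soup.findAll('div', {'class': 'comment-user'})  # list-like ResultSet

print(type(first).__name__, first.text)   # -> Tag alice
print(type(every).__name__, len(every))   # -> ResultSet 2
```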


+Usages (March 7, 2020, 5:10 p.m.)

From local file:

soup = BeautifulSoup(open('source.html'), 'html.parser')
comments = soup.find('app-comment-list')


From URL:

response = requests.get(url='URL')
soup = BeautifulSoup(response.text, 'html.parser')
comments = soup.find('app-comment-list')


From URL, passing data as POST:

data = {'from_post': 1, 'to_post': 100}
response = requests.post(url='URL', json=data)
soup = BeautifulSoup(response.text, 'html.parser')
comments = soup.find('app-comment-list')


From URL using requests and proxy:

params = {
    'timeout': 20,
    'verify': False,
    'proxies': {'https': ''},
    'url': URL,
    'json': {}
}

response = requests.get(**params)
soup = BeautifulSoup(response.text, 'html.parser')
comments = soup.find('app-comment-list')


+Installation (March 7, 2020, 5:09 p.m.)

pip install beautifulsoup4


apt-get install python3-bs4

+PTR Record (Aug. 19, 2018, 7:59 p.m.)

A Pointer (PTR) record resolves an IP address to a fully-qualified domain name (FQDN), the opposite of what an A record does. PTR records are also called Reverse DNS records.

PTR records are mainly used to check if the server name is actually associated with the IP address from where the connection was initiated.

IP addresses of all Intermedia mail servers already have PTR records created.


What is PTR Record?

PTR records are used for Reverse DNS (Domain Name System) lookups. Using the IP address you can get the associated domain/hostname. An A record should exist for every PTR record. Setting up reverse DNS for a mail server is strongly recommended, because many receiving mail servers check the PTR record and may reject mail from IP addresses that lack one.

While in the domain DNS zone the hostname is pointed to an IP address, using the reverse zone allows pointing an IP address to a hostname.
In the Reverse DNS zone, you need to use a PTR Record. The PTR Record resolves the IP address to a domain/hostname.
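The naming scheme behind a PTR lookup is mechanical: reverse the IPv4 octets and append the in-addr.arpa zone. A small sketch (the address is a documentation-range placeholder):

```python
def ptr_name(ipv4: str) -> str:
    """Build the reverse-lookup (PTR) query name for an IPv4 address."""
    octets = ipv4.split('.')
    return '.'.join(reversed(octets)) + '.in-addr.arpa'

print(ptr_name('203.0.113.10'))  # -> 10.113.0.203.in-addr.arpa
```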


+Errors (Aug. 7, 2015, 3:31 p.m.)

managed-keys-zone ./IN: loading from master file managed-keys.bind

For solving it:
nano /etc/bind/named.conf
add include "/etc/bind/bind.keys";

And also create an empty file:
touch /etc/bind/managed-keys.bind
When working with the Reverse DNS zone and its zone file, you can use a zone-checking tool to verify the validity of the files.

+Configuration (Aug. 21, 2014, 12:48 p.m.)

This file contains a summary of my own experiences:

1-There are some default zones in "/etc/bind/named.conf.external-zones"; no need to change them, nor to exclude them from the file "/etc/bind/named.conf".
2-Add a line at the bottom of the file "/etc/bind/named.conf":
include "/etc/bind/named.conf.external-zones";
3-Create a file named "/etc/bind/named.conf.external-zones" and fill it up with:
// -------------- Begin --------------
zone "" {
type master;
file "/etc/bind/zones/";
};

zone "" {
type master;
file "/etc/bind/zones/";
};
// -------------- End --------------

// -------------- Begin --------------
zone "" {
type master;
file "/etc/bind/zones/";
};

zone "" {
type master;
file "/etc/bind/zones/";
};
// -------------- End --------------
4-There is an empty directory "/etc/bind/zones/". This is the place for holding the data for the above paths. So create a file named "" and fill it up with:
$TTL 3h
@ IN SOA (


ns IN A
@ IN A
5-Repeat the earlier step with a different file name and data. I mean, create a file named "" in "/zones/" and fill it up with:

$TTL 3h
@ IN SOA (
1h )

; main domain name servers
; main domain mail servers
IN MX 10
; A records for name servers above
www IN A
pania IN A
; A record for mail server above
mail IN A
6- OK, Done!
When I was done with these configurations, I tested my work with "dig" but I got an error like:

root@mohsenhassani:/home/mohsen# dig
; <<>> DiG 9.7.3 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8929
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

; IN A

;; Query time: 383 msec
;; WHEN: Sat Mar 16 17:00:19 2013
;; MSG SIZE rcvd: 34

In the line which reads ";; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8929",
the word "SERVFAIL" shows that there are errors. There are many, many reasons that might cause this error, and you may be able to narrow it down using the error id.
Anyway, for this error I had to do this:
sudo nano /etc/resolv.conf
And add: to the first line.
It had already and

Then doing "dig" there were no more errors:
root@mohsenhassani:/home/mohsen# dig

; <<>> DiG 9.7.3 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39792
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

; IN A




;; Query time: 0 msec
;; WHEN: Sat Mar 16 17:02:26 2013
;; MSG SIZE rcvd: 83
Oh! And you have to create two subdomains named "ns1.mohsenhassani.com" and "ns2.mohsenhassani.com" so that you can forward the ".ir" domains to these subdomains.

+Installation (Aug. 7, 2015, 4:22 p.m.)
apt-get install bind9 bind9utils

When installing and configuring or restarting bind, in case of encountering errors, check the log files. The log files are not stored separately. BIND stores the logs in the syslog:
nano /var/log/syslog
1-nano /etc/bind/named.conf.options
We need to modify the forwarders. These are the DNS servers to which your own DNS server will forward the requests it cannot process.

forwarders {
# Replace the address below with the address of your provider's DNS server;
};
2-Add this line to the file: /etc/bind/named.conf
include "/etc/bind/named.conf.external-zones";
3-nano /etc/bind/named.conf.external-zones
This is where we will insert our zones. By the way, a zone is a domain name that is referenced in the DNS server.

// -------------- Begin --------------
zone "" {
type master;
file "/etc/bind/zones/";
};

zone "" {
type master;
file "/etc/bind/zones/";
};
// -------------- End --------------
4-nano /etc/bind/zones/
$TTL 3h
@ IN SOA (
1h )

@ IN A
5-Restart BIND:
sudo /etc/init.d/bind9 restart

In case of failure, check the errors:
nano /var/log/syslog

We can now test the new DNS server...
Modify the file resolv.conf with the following settings:
sudo nano /etc/resolv.conf

enter the following:

Now, test your DNS:

In case of errors, refer to errors in BIND category

+Description (Aug. 21, 2014, 12:45 p.m.)

Every system on the Internet must have a unique IP address. (This does not include systems that are behind a NAT firewall because they are not directly on the Internet.) DNS acts as a directory service for all of these systems, allowing you to specify each one by its hostname. A telephone book allows you to look up an individual person by name and get their telephone number, their unique identifier on the telephone system's network. DNS allows you to look up an individual server by name and get its IP address, its unique identifier on the Internet.
There are other hostname-to-IP directory services in use, mainly for LANs. Windows LANs can use WINS. UNIX LANs can use NIS. But because DNS is the directory service for the Internet (and can also be used for LANs) it is the most widely used. UNIX LANs could always use DNS instead of NIS, and starting with Windows 2000 Server, Windows LANs could use DNS instead of, or in addition to, WINS. And on small LANs where there are only a few machines you could just use HOSTS files on each system instead of setting up a server running DNS, NIS, or WINS.

As a service, DNS is critical to the operation of the Internet. When you enter in a Web browser, it's DNS that takes the www host name and translates it to an IP address. Without DNS, you could be connected to the Internet just fine, but you ain't goin' no where. Not unless you keep a record of the IP addresses of all of the resources you access on the Internet and use those instead of host/domain names.

So when you visit a Web site, you are actually doing so using the site's IP address even though you specified a host and domain name in the URL. In the background your computer quickly queried a DNS server to get the IP address that corresponds to the Web site's server and domain names. Now you know why you have to specify one or two DNS server IP addresses in the TCP/IP configuration on your desktop PC (in the resolv.conf file on a Linux system and the TCP/IP properties in the Network Control Panel on Windows systems).
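You can watch this name-to-address step happen from Python's standard library; resolving "localhost" keeps the example offline-safe:

```python
import socket

# gethostbyname asks the system resolver (configured via /etc/resolv.conf
# on Linux) to translate a hostname into an IPv4 address.
ip = socket.gethostbyname('localhost')
print(ip)  # usually 127.0.0.1
```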

A "cannot connect" error doesn't necessarily indicate there isn't a connection to the destination server. There may very well be. The error may indicate a failure in "resolving" the domain name to an IP address. I use the open-source Firefox Web browser on Windows systems because the status bar gives more informational messages like "Resolving host", "Connecting to", and "Transferring data" rather than just the generic "Opening page" with IE. (It also seems to render pages faster than IE.)

In short, always check for correct DNS operation when troubleshooting a problem involving the inability to access an Internet resource. The ability to resolve names is critical, and later in this page, we'll show you some tools you can use to investigate and verify this ability.
When you are surfing the Web viewing Web pages or sending an e-mail your workstation is sending queries to a DNS server to resolve server/domain names. (Back on the Modems page we showed you how to set up your resolv.conf file to do this.) When you have your own Web site that other people visit you need a DNS server to respond to the queries from their workstations.

When you visit Web sites, the DNS server your workstation queries for name resolution is typically run by your ISP, but you could have one of your own. When you have your own Web site the DNS servers which respond to visitors' queries are typically run by your Web hosting provider, but you could likewise have your own one of these too. Actually, if you set up your own DNS server it could be used to respond to both "internal" (from your workstation) and "external" (from your Web site's visitors) queries.

Even if you don't have your own domain name or even your own LAN, you can still benefit from using a DNS server to allow others to access your Debian system. If you have a single system connected to the Internet via a cable or DSL connection, you can have it act as a Web/e-mail/FTP server using a neat service called "dynamic DNS" which we'll cover later. Dynamic DNS will even work with a modem if you want to play around with it.

DNS Server Functions:
You can set up a DNS server for several different reasons:
Internet Domain Support: If you have a domain name and you're operating Web, e-mail, FTP, or other Internet servers, you'll use a DNS server to respond to resolution queries so others can find and access your server(s). This is a serious undertaking and you'd have to set up a minimum of two of them. On this page, we'll refer to these types of DNS servers as authoritative DNS servers for reasons you'll see later. However, there are alternatives to having your own authoritative DNS server if you have (or want to have) your own domain name. You can have someone else host your DNS records for you. Even if someone else is taking care of your domain's DNS records you could still set up one of the following types of DNS servers.

Local Name Resolution: Similar to the above scenario, this type of DNS server would resolve the hostnames of systems on your LAN. Typically in this scenario, there is one DNS server and it does both jobs. The first being that it receives queries from workstations and the second being that it serves as the authoritative source for the responses (this will be more clear as we progress). Having this type of DNS server would eliminate the need to have (and manually update) a HOSTS file on each system on your LAN. On this page, we'll refer to these as LAN DNS servers.

During the Debian installation, you are asked to supply a domain name. This is an internal (private) domain name that is not visible to the outside world so like the private IP address ranges you use on a LAN, it doesn't have to be registered with anyone. A LAN DNS server would be authoritative for this internal, private domain. For security reasons, the name for this internal domain should not be the same as any public domain name you have registered. Private domain names are not restricted to using one of the established public TLD (Top Level Domain) names such as .com or .net. You could use .corp or .inc or anything else for your TLD. Since a single DNS server can be authoritative for multiple domains, you could use the same DNS server for both your public and private domains. However, the server would need to be accessible from both the Internet and the LAN so you'd need to locate it in a DMZ. Though you want to use different public and private domain names, you can use the same name for the second-level domain. For example, for the public name and for the private name.

Internet Name Resolution: LAN workstations and other desktop PCs need to send Internet domain name resolution queries to a DNS server. The DNS server most often used for this is the ISP's DNS servers. These are often the DNS servers you specify in your TCP/IP configuration. You can have your own DNS server respond to these resolution queries instead of using your ISP's DNS servers. My ISP recently had a problem where they would intermittently lose connectivity to the network segment that their DNS servers were connected to so they couldn't be contacted. It took me about 30 seconds to turn one of my Debian systems into this type of DNS server and I was surfing with no problems. On this page, we'll refer to these as simple DNS servers. If a simple DNS server fails, you could just switch back to using your ISP's DNS servers. As a matter of fact, given that you typically specify two DNS servers in the TCP/IP configuration of most desktop PCs, you could have one of your ISP's DNS servers listed as the second (fallback) entry and you'd never miss a beat if your simple DNS server did go down. Turning your Debian system into a simple DNS server is simply a matter of entering a single command.

Don't take from this that you need three different types of DNS servers. If you were to set up a couple of authoritative DNS servers they could also provide the functionality of LAN and simple DNS servers. And a LAN DNS server can simultaneously provide the functionality of a simple DNS server. It's a progressive type of thing.

If you were going to set up authoritative DNS servers or a simple DNS server you'd have to have a 24/7 broadband connection to the Internet. Naturally, a LAN DNS server that didn't resolve Internet host/domain names wouldn't need this.

A DNS server is just a Debian system running a DNS application. The most widely used DNS application is BIND (Berkeley Internet Name Domain) and it runs a daemon called named that, among other things, responds to resolution queries. We'll see how to install it after we cover some basics.

DNS Basics:
Finding a single server out of all of the servers on the Internet is like trying to find a single file on the drive with thousands of files. In both cases, it helps to have some hierarchy built into the directory to logically group things. The DNS "namespace" is hierarchical in the same type of upside-down tree structure seen with file systems. Just as you have the root of a partition or drive, the DNS namespace has a root which is signified by a period.

Namespace Root --> Top Level Domains --> Second Level Domains
Namespace Root: .
Top Level Domains: com, net, org
Second Level Domains: com --> aboutdebian, cnn, net --> sbc, org --> samba, debian

When specifying the absolute path to a file in a file system you start at the root and go to the file:

When specifying the absolute path to a server in the DNS namespace you start at the server and go to the root:

Note the period after the 'com'; it's important. It's how you specify the root of the namespace. An absolute path in the DNS namespace is called an FQDN (Fully Qualified Domain Name). The use of FQDNs is prevalent in DNS configuration files and it's important that you always use that trailing period.
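To make the trailing-dot convention concrete, here is a small helper (a sketch for illustration, not part of any DNS library):

```python
def fqdn_labels(fqdn: str) -> list:
    """Split an absolute DNS name into its labels, root-most first.
    The trailing dot marks the namespace root, so it is required here."""
    if not fqdn.endswith('.'):
        raise ValueError('an FQDN is absolute and ends with the root dot')
    return list(reversed(fqdn.rstrip('.').split('.')))

print(fqdn_labels('www.aboutdebian.com.'))  # -> ['com', 'aboutdebian', 'www']
```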

Internet resources are usually specified by a domain name and a server hostname. The www part of a URL is often the hostname of the Web server (or it could be an alias to a server with a different hostname). DNS is basically just a database with records for these hostnames. The directory for the entire telephone system is not stored in one huge phone book. Rather, it is broken up into many pieces with each city having, and maintaining, its piece of the entire directory in its phone book. By the same token, pieces of the DNS directory database (the "zones") are stored, and maintained, on many different DNS servers located around the Internet. If you want to find the telephone number for a person in Poughkeepsie, you'd have to look in the Poughkeepsie telephone book. If you want to find the IP address of the www server in the domain, you'd have to query the DNS server that stores the DNS records for that domain.

The entries in the database map a host/domain name to an IP address. Here is a simple logical view of the type of information that is stored (we'll get to the A, CNAME, and MX designations in a bit).
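A toy picture of that logical view (names and addresses are invented placeholders): each record maps a (name, type) pair to a value, where A is an address, CNAME an alias, and MX a mail exchanger.

```python
# Hypothetical zone data, keyed by (record name, record type).
zone = {
    ('www.example.com.', 'A'):     '203.0.113.10',
    ('ftp.example.com.', 'CNAME'): 'www.example.com.',
    ('example.com.', 'MX'):        '10 mail.example.com.',
}

print(zone[('www.example.com.', 'A')])  # -> 203.0.113.10
```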


This is why a real Internet server needs a static (unchanging) IP address. The IP address of the server's NIC connected to the Internet has to match whatever address is in the DNS database. Dynamic DNS does provide a way around this for home servers however, which we'll see later.

When you want to browse to a site, your DNS server (the one you specify in the TCP/IP configuration on your desktop computer) most likely won't have a DNS record for the domain, so it has to contact the DNS server that does. When your DNS server contacts the DNS server that has the DNS records (referred to as "resource records" or "zone records") for the domain, your DNS server gets the IP address of the www server and relays that address back to your desktop computer. So which DNS server has the DNS records for a particular domain?

When you register a domain name with someone like Network Solutions, one of the things they ask you for are the server names and addresses of two or three "name servers" (DNS servers). These are the servers where the DNS records for your domain will be stored (and queried by the DNS servers of those browsing to your site). So where do you get the "name servers" information for your domain? Typically, when you host your Web site using a Web hosting service they not only provide a Web server for your domain's Web site files but they will also provide a DNS server to store your domain's DNS records. In other words, you'll want to know who your Web hosting provider is going to be before you register a domain name (so you can enter the provider's DNS server information in the name servers section of the domain name registration application).

You'll see the term "zone" used in DNS references. Most of the time a zone just equates to a domain. The only time this wouldn't be true is if you set up subdomains and set up separate DNS servers to handle just those subdomains. For example, a company would set up "us" and "europe" subdomains and would "delegate" a separate DNS server to each one of them. In the case of these two DNS servers, their zone would be just the subdomain. The zone of the DNS server for the parent domain (which would contain servers such as "www" and "mail") would only contain records for those few machines in the parent domain.

Note that in the above example "us" and "Europe" are subdomains while "www" and "mail" are hostnames of servers in the parent domain.
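Delegation can be pictured as a longest-suffix match: a name's records live on the servers of the most specific zone it falls under. A hypothetical sketch (zone names invented):

```python
def matching_zone(name: str, zones: list):
    """Return the most specific (longest) zone that `name` belongs to."""
    candidates = [z for z in zones if name == z or name.endswith('.' + z)]
    return max(candidates, key=len) if candidates else None

zones = ['example.com', 'us.example.com', 'europe.example.com']
print(matching_zone('www.us.example.com', zones))  # -> us.example.com
print(matching_zone('mail.example.com', zones))    # -> example.com
```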

Once you've got your Web site up and running on your Web hosting provider's servers and someone surfs to your site, the DNS server they specified in their local TCP/IP configuration will query your hosting provider's DNS servers to get the IP address for your Web site. The DNS servers that host the DNS records for your domain, i.e. the DNS servers you specify in your domain name registration application, are the authoritative DNS servers for your domain. The surfer's DNS server queries one of your site's authoritative DNS servers to get an address and gets an authoritative response. When the surfer's DNS server relays the address information back to the surfer's local PC it is a "non-authoritative" response because the surfer's DNS server is not an authoritative DNS server for your domain.

Example: If you surf to MIT's Web site the DNS server you have specified in your TCP/IP configuration queries one of MIT's authoritative DNS servers and gets an authoritative response with the IP address for the 'www' server. Your DNS server then sends a non-authoritative response back to your PC. You can easily see this for yourself. At a shell prompt, or a DOS window on a newer Windows system, type in:


First, you'll see the name and IP address of your locally-specified DNS server. Then you'll see the non-authoritative response your DNS server sent back containing the name and IP address of the MIT Web server.

If you're on a Linux system you can also see which name server(s) your DNS server contacted to get the IP address. At a shell prompt type in:


and you'll see three authoritative name servers listed with the hostnames STRAWB, W20NS, and BITSY. The 'whois' command simply returns the contents of a site's domain record.

DNS Records and Domain Records

Don't confuse DNS zone records with domain records. Your domain record is created when you fill out a domain name registration application and is maintained by the domain registration service (like Network Solutions) you used to register the domain name. A domain only has one domain record and it contains administrative and technical contact information as well as entries for the authoritative DNS servers (aka "name servers") that are hosting the DNS records for the domain. You have to enter the hostnames and addresses for multiple DNS servers in your domain record for redundancy (fail-over) purposes.

DNS records (aka zone records) for a domain are stored in the domain's zone file on the authoritative DNS servers. Typically, it is stored on the DNS servers of whatever Web hosting service is hosting your domain's Web site. However, if you have your own Web server (rather than using a Web hosting service) the DNS records could be hosted by you using your own authoritative DNS servers (as in MIT's case), or by a third party like EasyDNS.

In short, the name servers you specified in your domain record host the domain's zone file containing the zone records. The name servers, whether they be your Web hosting provider's, those of a third party like EasyDNS, or your own, which host the domain's zone file are authoritative DNS servers for the domain.

Because DNS is so important to the operation of the Internet, when you register a domain name you must specify a minimum of two name servers. If you set up your own authoritative DNS servers for your domain you must set up a minimum of two of them (for redundancy) and these would be the servers you specify in your domain record. While the multiple servers you specify in your domain record are authoritative for your domain, only one DNS server can be the primary DNS server for a domain. Any others are "secondary" servers. The zone file on the primary DNS server is "replicated" (transferred) to all secondary servers. As a result, any changes made to DNS records must be made on the primary DNS server. The zone files on secondary servers are read-only. If you made changes to the records in a zone file on a secondary DNS server they would simply be overwritten at the next replication. As you will see below, the primary server for a domain and the replication frequency are specified in a special type of zone record.

Early on in this page, we said that the DNS zone records are stored in a DNS database which we now know is called a zone file. The term "database" is used quite loosely. The zone file is actually just a text file that you can edit with any text editor. A zone file is domain-specific. That is, each domain has its own zone file. Actually, there are two zone files for each domain but we're only concerned with one right now. The DNS servers for a Web hosting provider will have many zone files, two for each domain it's hosting zone records for. A zone "record" is, in most cases, nothing more than a single line in the text zone file.

There are different types of DNS zone records. These numerous record types give you flexibility in setting up the servers in your domain. The most common types of zone records are:

An A (Address) record is a "host record" and it is the most common type. It is simply a static mapping of a hostname to an IP address. A common hostname for a Web server is 'www' so the A record for this server gives the IP address for this server in the domain.

An MX (Mail eXchanger) record is specifically for mail servers. It's a special type of service-specifier record. It identifies a mail server for the domain. That's why you don't have to enter a hostname like 'www' in an e-mail address. If you're running Sendmail (mail server) and Apache (Web server) on the same system (i.e. the same system is acting as both your Web server and e-mail server), both the A record for the system and the MX record would refer to the same server.

To offer some fail-over protection for e-mail, MX records also have a numeric Priority field. You can enter two or three MX records, each pointing to a different mail server, but the server specified in the record with the highest priority (lowest number) will be tried first. A mail server with a priority of 10 in its MX record will receive e-mail before a server with a priority of 20 in its MX record. Note that we are only talking about receiving mail from other Internet mail servers here. When a mail server is sending mail, it acts like a desktop PC when it comes to DNS. The mail server looks at the domain name in the recipient's e-mail address and then contacts its local DNS server (specified in the resolv.conf file) to get the IP address for the mail server in the recipient's domain. When an authoritative DNS server for the recipient's domain receives the query from the sender's DNS server, it sends back the IP addresses from the MX records it has in that domain's zone file.
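The "lowest number wins" rule above can be sketched in a few lines of Python (the record list and hostnames here are made up for illustration):

```python
# Each MX record is modeled as a (priority, hostname) tuple; lower wins.
mx_records = [(20, "backupmail.example.com"), (10, "mail.example.com")]

def pick_mail_server(records):
    """Return the hostname from the MX record with the lowest priority value."""
    # Tuples compare element-by-element, so min() sorts by priority first.
    return min(records)[1]

print(pick_mail_server(mx_records))  # -> mail.example.com
```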

A CNAME (Canonical Name) record is an alias record. It's a way to have the same physical server respond to two different hostnames. Let's say you're not only running Sendmail and Apache on your server, but you're also running WU-FTPD so it also acts as an FTP server. You could create a CNAME record with the alias name 'FTP' so people could use either hostname to reach the appropriate service on the same server.

Another use for a CNAME record was illustrated in the example near the top of the page. Suppose you name your Web server 'Debian' instead of 'www'. You could simply create a CNAME record with the alias name 'www' but with the hostname 'Debian' and Debian's IP address.

NS (Name Server) records specify the authoritative DNS servers for a domain.

There can be multiples of all of the above record types. There is one special record type of which there is only one record in the zone file: the SOA (Start Of Authority) record, which is the first record in the zone file. An SOA record is only present in a zone file located on authoritative DNS servers (non-authoritative DNS servers can cache zone records). It specifies such things as:

The primary authoritative DNS server for the zone (domain).
The e-mail address of the zone's (domain's) administrator. In zone files, the '@' has a specific meaning (see below) so the e-mail address is written as

Timing information as to when secondary DNS servers should refresh or expire a zone file and a serial number to indicate the version of the zone file for the sake of comparison.

The SOA record is the one that takes up several lines.

Several important points to note about the records in a zone file:

Records can specify servers in other domains. This is most commonly used with MX and NS records when backup servers are located in a different domain but receive mail or resolve queries for your domain.

There must be an A record for systems specified in all MX, NS, and CNAME records.

A and CNAME records can specify workstations as well as servers (which you'll see when we set up a LAN DNS server).

Now let's look at a typical zone file. When a Debian system is set up as a DNS server the zone files are stored in the /etc/bind directory. In a zone file, the two parentheses around the timer values act as line-continuation characters as does the '\' character at the end of the second line. The ';' is the comment character. The 'IN' indicates an INternet-class record.

$TTL 86400 IN SOA \ (
2004011522 ; Serial no., based on date
21600 ; Refresh after 6 hours
3600 ; Retry after 1 hour
604800 ; Expire after 7 days
3600 ; Minimum TTL of 1 hour
)
;Name servers
debns1 IN A IN A

@ IN NS debns1 IN NS

;Mail servers
debmail1 IN A IN A

@ IN MX 10 debmail1 IN MX 20

;Aliased servers
debhp IN A IN A
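Since a zone "record" is, in most cases, just one line of text, parsing one is straightforward. Here is a minimal Python sketch; the field layout is assumed from the example above, and the sample IP address is made up:

```python
def parse_zone_line(line):
    """Split one zone-file line into (name, record_class, record_type, data).

    Strips ';' comments; returns None for blank or comment-only lines.
    """
    line = line.split(";", 1)[0].strip()
    if not line:
        return None
    name, rec_class, rec_type, *data = line.split()
    return name, rec_class, rec_type, " ".join(data)

print(parse_zone_line("debns1 IN A 192.168.10.1  ; name server"))
# -> ('debns1', 'IN', 'A', '192.168.10.1')
print(parse_zone_line("@ IN MX 10 debmail1"))
# -> ('@', 'IN', 'MX', '10 debmail1')
```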



+Django Celery with django-celery-results extension (Nov. 11, 2016, 10:37 a.m.)

pip install celery
pip install django_celery_results
pip install django_celery_beat


# project/project/celery.py

from __future__ import absolute_import, unicode_literals
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))


# project/project/__init__.py

from __future__ import absolute_import, unicode_literals

from .celery import app as celery_app

__all__ = ['celery_app']



from __future__ import absolute_import

from celery import shared_task


@shared_task
def begin_ping():
    return 'hi'






python manage.py migrate django_celery_results
python manage.py migrate django_celery_beat


apt install rabbitmq-server
For running it:


Run these two commands in separated activated virtualenvs:
celery -A project beat -l info -S django
celery -A project worker -l info

The "celery -A project beat -l info -S django" is for "DatabaseScheduler" which gets the schedules from Django admin panel.
You can use "celery -A project beat -l info", which is for the "PersistentScheduler" that takes the schedules defined in your task code.

For having the schedules from Admin panel, refer to the link "Intervals" and define a suitable interval.
Then follow the link "Periodic tasks" and select the defined interval in the "Interval" dropdown list.


+Celery and RabbitMQ with Django (Oct. 14, 2018, 9:54 a.m.)

1- pip install Celery


2- apt-get install rabbitmq-server


3- Enable and start the RabbitMQ service
systemctl enable rabbitmq-server
systemctl start rabbitmq-server


4- Add the broker configuration to settings.py:
CELERY_BROKER_URL = 'amqp://localhost'


5- Create a new file named celery.py in your app:
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

app = Celery('mysite')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


6- Edit the __init__.py file in the project root:

from .celery import app as celery_app

__all__ = ['celery_app']


7- Create a file named tasks.py inside a Django app:

from celery import shared_task


@shared_task
def my_task(x, y):
    return x, y


8- Call the task from your code:

from .tasks import my_task

my_task.delay(x, y)

Instead of calling the "my_task" directly, we call my_task.delay(). This way we are instructing Celery to execute this function in the background.
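Conceptually, .delay() only enqueues a message describing the call, and a worker picks it up later. This toy sketch is plain Python (not Celery internals) and only illustrates the idea:

```python
import queue

task_queue = queue.Queue()  # stands in for the broker (RabbitMQ)

def my_task(x, y):
    return x, y

def delay(func, *args):
    # Like my_task.delay(x, y): enqueue the call instead of running it now.
    task_queue.put((func, args))

def worker_step():
    # A worker pulls one message off the broker and executes it.
    func, args = task_queue.get()
    return func(*args)

delay(my_task, 1, 2)
print(worker_step())  # -> (1, 2)
```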


9- Starting The Worker Process:

Open a new terminal tab, and run the following command:
celery -A mysite worker -l info


+Periodic Tasks from (Oct. 14, 2018, 10:24 a.m.)

from datetime import timedelta

from celery.task import periodic_task


# The run_every interval below is an example value.
@periodic_task(run_every=timedelta(seconds=30))
def myfunc():
    print('periodic_task')

+Periodic Tasks from (Oct. 14, 2018, 10:53 a.m.)

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': timedelta(seconds=30),
        'args': (16, 16)
    },
}

+Running tasks in shell (Oct. 11, 2018, 10:49 a.m.)

celery -A project_name beat

celery -A cdr worker -l info

+Daemon Scripts (Sept. 29, 2015, 11:39 a.m.)

These scripts are needed when you want to run the worker as a daemon.

The first is used for seeing the output of running tasks. For example, I had something printed in the console, from within the task, and I could see the output (the printed string) in this terminal.

The second is for firing up / starting the tasks.

1- Create a file /etc/supervisor/conf.d/celeryd.conf with this content:

[program:celery]
; Set full path to celery program if using virtualenv
command=/home/mohsen/virtualenvs/django-1.7/bin/celery worker -A cdr --loglevel=INFO

autostart=true
autorestart=true

; Need to wait for currently executing tasks to finish at shutdown. Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it, send SIGKILL to its whole process group instead, taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

2- Create a file /etc/supervisor/conf.d/celerybeat.conf with this content:

[program:celerybeat]
; Set full path to celery program if using virtualenv
command=/home/mohsen/virtualenvs/django-1.7/bin/celery beat -A cdr

; remove the -A myapp argument if you are not using an app instance

autostart=true
autorestart=true

; if rabbitmq is supervised, set its priority higher so it starts first
priority=999

+RBD (Oct. 30, 2017, 10:01 a.m.)

rbd is a utility for manipulating rados block device (RBD) images, used by the Linux rbd driver and the rbd storage driver for Qemu/KVM. RBD images are simple block devices that are striped over objects and stored in a RADOS object store. The size of the objects the image is striped over must be a power of two.
rbd -p image ls

rbd -p image info Windows7x8

rbd -p image rm Win7x86WithApps

rbd export --pool=image disk_user01_2 /root/Windows7x86.qcow2

The "2" is the ID of the Template in deskbit admin panel.

+Changing a Monitor’s IP address (Sept. 19, 2017, 4:42 p.m.)
ceph mon getmap -o /tmp/a

monmaptool --print /tmp/a

monmaptool --rm vdiali /tmp/a

monmaptool --add vdiali /tmp/a

monmaptool --print /tmp/a

systemctl stop ceph-mon*

ceph-mon -i vdimohsen --inject-monmap /tmp/a

Change IP in the following files:

+Properly remove an OSD (Aug. 23, 2017, 12:35 p.m.)

Removing an OSD improperly can sometimes result in double rebalancing. The best practice when removing an OSD is to change its crush weight to 0.0 as the first step.

$ ceph osd crush reweight osd.<ID> 0.0

Then you wait for rebalancing to be completed. Eventually completely remove the OSD:

$ ceph osd out <ID>
$ service ceph stop osd.<ID>
$ ceph osd crush remove osd.<ID>
$ ceph auth del osd.<ID>
$ ceph osd rm <ID>
From the docs:
Remove an OSD

To remove an OSD from the CRUSH map of a running cluster, execute the following:
ceph osd crush remove {name}

For getting the name:
ceph osd tree

+Errors - undersized+degraded+peered (July 4, 2017, 5:25 p.m.)
ceph osd crush rule create-simple same-host default osd

ceph osd pool set rbd crush_ruleset 1

+Commands (July 3, 2017, 3:53 p.m.)

ceph osd tree

ceph osd dump

ceph osd lspools

ceph osd pool ls

ceph osd pool get rbd all

ceph osd pool set rbd size 2

ceph osd crush rule ls
ceph-osd -i 0

ceph-osd -i 0 --mkfs --mkkey
ceph -w

ceph -s

ceph health detail
ceph-disk activate /var/lib/ceph/osd/ceph-0

ceph-disk list

chown ceph:disk /dev/sda1 /dev/sdb1
ceph-mon -f --cluster ceph --id vdi --setuser ceph --setgroup ceph
systemctl -a | grep ceph

systemctl status ceph-osd*

systemctl status ceph-mon*

systemctl enable
rbd -p image ls

rbd export --pool=image disk_win_7 /root/win7.img
cd /var/lib/ceph/osd/
ceph-2 ceph-3 ceph-8

mount | grep -i vda
mount | grep -i vdb
mount | grep -i vdc
mount | grep ceph

fdisk -l

mount /dev/vdc1 ceph-3/

systemctl restart ceph-osd@3
ceph osd tree
systemctl restart ceph-osd@5

mount | grep -i ceph

systemctl restart ceph-osd@5
Job for ceph-osd@5.service failed because the control process exited with error code.
See "systemctl status ceph-osd@5.service" and "journalctl -xe" for details.

systemctl daemon-reload
systemctl restart ceph-osd@5
ceph osd tree
ceph -w


+ceph-ansible (Jan. 7, 2017, 10:58 a.m.)
0- apt-get update # Ensure you do this step before running ceph-ansible!!!

1- apt-get install libffi-dev libssl-dev python-pip python-setuptools sudo python-dev

git clone
2- pip install markupsafe ansible
3-Setup your Ansible inventory file:

4-Now enable the site.yml and group_vars files:

cp site.yml.sample site.yml

You need to copy all files within `group_vars` directory; omit the `.sample` part:
for f in *.sample; do cp "$f" "${f/.sample/}"; done
5-Open the file `group_vars/all.yml` for editing:

nano group_vars/all.yml

Uncomment the variable `ceph_origin` and replace `upstream` with `distro`:
ceph_origin: 'distro'

Uncomment and replace:
monitor_interface: eth0

journal_size: 5120
6-Choosing a scenario:
Open the file `group_vars/osds.yml` and uncomment and set to `true` the following variables:

osd_auto_discovery: true
journal_collocation: true
7- Any needed configs for ceph should be added to the file `group_vars/all.yml`.
Uncomment and change:

osd_pool_default_pg_num: 8
osd_pool_default_size: 1
Path to variables file:

+Adding Monitors (Jan. 4, 2017, 2:13 p.m.)

A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high availability, Ceph Storage Clusters typically run multiple Ceph Monitors so that the failure of a single Ceph Monitor will not bring down the Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority of monitors (i.e., 1 out of 1, 2 out of 3, 3 out of 4, 3 out of 5, 4 out of 6, etc.) to form a quorum.
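The majority requirement is simply floor(N/2) + 1; a quick Python illustration:

```python
def quorum_size(num_monitors):
    """Smallest number of monitors that forms a strict majority."""
    return num_monitors // 2 + 1

# 1 of 1, 2 of 3, 3 of 4, 3 of 5, 4 of 6 -- matching the ratios above.
for n in (1, 3, 4, 5, 6):
    print(n, quorum_size(n))
```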

Add two Ceph Monitors to your cluster.
ceph-deploy mon add node2
ceph-deploy mon add node3
Once you have added your new Ceph Monitors, Ceph will begin synchronizing the monitors and form a quorum. You can check the quorum status by executing the following:

ceph quorum_status --format json-pretty
When you run Ceph with multiple monitors, you SHOULD install and configure NTP on each monitor host. Ensure that the monitors are NTP peers.

+Adding an OSD (Jan. 4, 2017, 2:08 p.m.)

1- mkdir /var/lib/ceph/osd/ceph-3

2- ceph-disk prepare /var/lib/ceph/osd/ceph-3

3- ceph-disk activate /var/lib/ceph/osd/ceph-3

4- Once you have added your new OSD, Ceph will begin rebalancing the cluster by migrating placement groups to your new OSD. You can observe this process with the ceph CLI:
ceph -w

You should see the placement group states change from active+clean to active with some degraded objects, and finally active+clean when migration completes. (Control-c to exit.)

+Storage Cluster (Jan. 3, 2017, 3:10 p.m.)

To purge the Ceph packages, execute: (Used for when you want to purge data)
ceph-deploy purge node1

If at any point you run into trouble and you want to start over, execute the following to purge the configuration:
ceph-deploy purgedata node1
ceph-deploy forgetkeys
1-Create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster:
mkdir my-cluster
cd my-cluster
2-Create the cluster:
ceph-deploy new node1

Using `ls` command, you should see a Ceph configuration file, a monitor secret keyring, and a log file for the new cluster.
3-Change the default number of replicas in the Ceph configuration file from 3 to 2 so that Ceph can achieve an active + clean state with just two Ceph OSDs. Add the following line under the [global] section:

osd pool default size = 2
osd_max_object_name_len = 256
osd_max_object_namespace_len = 64

These two last options are for EXT4; based on this link:
4-Install Ceph:
ceph-deploy install node1

The ceph-deploy utility will install Ceph on each node.
5-Add the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial

Once you complete the process, your local directory should have the following keyrings:

6-Add OSDs:
For fast setup, this quick start uses a directory rather than an entire disk per Ceph OSD Daemon.

See the Ceph docs for details on using separate disks/partitions for OSDs and journals.

Login to the Ceph Nodes and create a directory for the Ceph OSD Daemon.
ssh node2
sudo mkdir /var/local/osd0

ssh node3
sudo mkdir /var/local/osd1

Then, from your admin node, use ceph-deploy to prepare the OSDs.
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

Finally, activate the OSDs:
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
7-Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

ceph-deploy admin node1 node2

Login to nodes and ensure that you have the correct permissions for the ceph.client.admin.keyring.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph health

+Ceph Node Setup (Jan. 3, 2017, 2:55 p.m.)

1-Create a user on each Ceph Node.
2-Add sudo privileges for the user on each Ceph Node.
3-Configure your ceph-deploy admin node with password-less SSH access to each Ceph Node.
ssh-keygen and ssh-copy-id
4-Modify the ~/.ssh/config file of your ceph-deploy admin node so that it logs into Ceph Nodes as the user you created.
Host node1
Hostname node1
User root
Host node2
Hostname node2
User root
Host node3
Hostname node3
User root
5-Add to /etc/hosts: node1 node2 node3 node4
6-Change the hostname of each node to the ones from the earlier step (node1, node2, node3, ...):
nano /etc/hostname
reboot each node

+Acronyms (Jan. 1, 2017, 3:40 p.m.)

CRUSH: Controlled Replication Under Scalable Hashing
EBOFS: Extent and B-tree based Object File System
HPC: High-Performance Computing
MDS: MetaData Server
OSD: Object Storage Device
PG: Placement Group
PGP = Placement Group for Placement purpose
POSIX: Portable Operating System Interface for Unix
RADOS: Reliable Autonomic Distributed Object Store
RBD: RADOS Block Devices

+Ceph Deploy (Dec. 28, 2016, 12:51 p.m.)

The admin node must have password-less SSH access to Ceph nodes. When ceph-deploy logs into a Ceph node as a user, that particular user must have passwordless sudo privileges.

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift. See Clock for details.

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the same NTP time server
For ALL Ceph Nodes perform the following steps:
sudo apt-get install openssh-server
Create a Ceph Deploy User:
The ceph-deploy utility must log into a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

We recommend creating a specific user for ceph-deploy on ALL Ceph nodes in the cluster. Please do NOT use “ceph” as the username. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because hackers typically use them with brute force hacks (e.g., root, admin, {productname}). The following procedure, substituting {username} for the username you define, describes how to create a user with passwordless sudo.

sudo useradd -d /home/{username} -m {username}
sudo passwd {username}


+Installation (Dec. 27, 2016, 3:57 p.m.)
1- wget -q -O- '' | sudo apt-key add -

2- echo deb $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

3- sudo apt-get install ceph ceph-deploy

+Definitions (Dec. 27, 2016, 1:10 p.m.)

Ceph is a storage technology.
A cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.
Clustering vs. Clouding:
Cluster differs from Cloud and Grid in that a cluster is a group of computers connected by a local area network (LAN), whereas cloud is more wide scale and can be geographically distributed. Another way to put it is to say that a cluster is tightly coupled, whereas a cloud is loosely coupled. Also, clusters are made up of machines with similar hardware, whereas clouds are made up of machines with possibly very different hardware configurations.
Ceph Storage Cluster:
A distributed object store that provides storage of unstructured data for applications.
Ceph Object Gateway:
A powerful S3- and Swift-compatible gateway that brings the power of the Ceph Object Store to modern applications.
Ceph Block Device:
A distributed virtual block device that delivers high-performance, cost-effective storage for virtual machines and legacy applications.
Ceph File System:
A distributed, scale-out filesystem with POSIX semantics that provides storage for legacy and modern applications.
RADOS:
A reliable, autonomous, distributed object store comprised of self-healing, self-managing intelligent storage nodes.
librados:
A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP.
RADOS Gateway (RGW):
A bucket-based REST gateway, compatible with S3 and Swift.
RBD:
A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver.
Ceph FS:
A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE.
pg_num = number of placement groups mapped to an OSD
Placement Groups (PGs):

Ceph maps objects to placement groups. Placement groups are shards or fragments of a logical object pool that place objects as a group into OSDs. Placement groups reduce the amount of per-object metadata when Ceph stores the data in OSDs. A larger number of placement groups (e.g., 100 per OSD) leads to better balancing.
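The grouping idea can be sketched as hashing an object name into one of pg_num buckets. Note this is only a toy Python illustration: Ceph's real mapping uses its own hash plus CRUSH, not CRC32, and the object name below is made up:

```python
import zlib

def toy_placement_group(object_name, pg_num):
    """Fold an object name into one of pg_num placement groups."""
    return zlib.crc32(object_name.encode()) % pg_num

# The same name always lands in the same PG; different names spread out.
print(toy_placement_group("disk_user01_2", 100))
```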

+Table - Cellpadding and cellspacing (Feb. 7, 2020, 11:48 a.m.)

table {
border-spacing: 10px;
border-collapse: separate;
}

#table2 {
border-collapse: separate;
border-spacing: 15px 50px;
}

+Remove href values when printing (July 2, 2019, 12:11 a.m.)

@media print {
a[href]:after {
visibility: hidden;
}
}
+Removing page title and date when printing (July 2, 2019, 12:09 a.m.)

@page {
size: auto;
margin: 0;
}
+Media Queries (Feb. 9, 2016, 12:05 p.m.)

@media all and (max-width: 480px) {
}

@media all and (min-width: 480px) and (max-width: 768px) {
}

@media all and (min-width: 768px) and (max-width: 1024px) {
}

@media all and (min-width: 1024px) {
}

Responsive Grid Media Queries - 1280, 1024, 768, 480
1280-1024 - desktop (default grid)
1024-768 - tablet landscape
768-480 - tablet
480-less - phone landscape & smaller
@media all and (min-width: 1024px) and (max-width: 1280px) { }

@media all and (min-width: 768px) and (max-width: 1024px) { }

@media all and (min-width: 480px) and (max-width: 768px) { }

@media all and (max-width: 480px) { }

Foundation Media Queries

/* Small screens - MOBILE */
@media only screen { } /* Define mobile styles - Mobile First */

@media only screen and (max-width: 40em) { } /* max-width 640px, mobile-only styles, use when QAing mobile issues */

/* Medium screens - TABLET */
@media only screen and (min-width: 40.063em) { } /* min-width 641px, medium screens */

@media only screen and (min-width: 40.063em) and (max-width: 64em) { } /* min-width 641px and max-width 1024px, use when QAing tablet-only issues */

/* Large screens - DESKTOP */
@media only screen and (min-width: 64.063em) { } /* min-width 1025px, large screens */

@media only screen and (min-width: 64.063em) and (max-width: 90em) { } /* min-width 1024px and max-width 1440px, use when QAing large screen-only issues */

/* XLarge screens */
@media only screen and (min-width: 90.063em) { } /* min-width 1441px, xlarge screens */

@media only screen and (min-width: 90.063em) and (max-width: 120em) { } /* min-width 1441px and max-width 1920px, use when QAing xlarge screen-only issues */

/* XXLarge screens */
@media only screen and (min-width: 120.063em) { } /* min-width 1921px, xlarge screens */


/* Portrait */
@media screen and (orientation:portrait) { /* Portrait styles here */ }
/* Landscape */
@media screen and (orientation:landscape) { /* Landscape styles here */ }

/* CSS for iPhone, iPad, and Retina Displays */

/* Non-Retina */
@media screen and (-webkit-max-device-pixel-ratio: 1) {
}

/* Retina */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min--moz-device-pixel-ratio: 1.5),
only screen and (min-device-pixel-ratio: 1.5) {
}

/* iPhone Portrait */
@media screen and (max-device-width: 480px) and (orientation:portrait) {
}

/* iPhone Landscape */
@media screen and (max-device-width: 480px) and (orientation:landscape) {
}

/* iPad Portrait */
@media screen and (min-device-width: 481px) and (orientation:portrait) {
}

/* iPad Landscape */
@media screen and (min-device-width: 481px) and (orientation:landscape) {
}

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no" />

Live demo samples

+Media Tag (Sept. 2, 2015, 4:44 p.m.)

@media (max-width: 767px) {
#inner-coffee-machine > div > img {
width: 30%;
height: 18%;
}

#inner-coffee-machine > div > div h3 {
font-size: 2.5vh;
font-weight: bold;
}

#inner-coffee-machine > div > div h5 {
font-size: 2vh;
}

#club-inner {
display: inline-table;
}

#inner-coffee-machine > div > div {
width: 100%;
}
}

@media (min-width: 768px) and (max-width: 991px) {
}

@media (min-width: 992px) and (max-width: 1199px) {
}

@media (min-width: 1200px) {
}

+Define new font (Sept. 1, 2015, 11:21 a.m.)

@font-face {
font-family: nespresso;
src: url("../fonts/nespresso.otf") format("opentype"),
url("../fonts/nespresso.ttf") format("truetype");
}

@font-face {
font-family: 'yekan';
src: url(../fonts/yekan.eot) format("eot"),
url(../fonts/yekan.woff) format("woff"),
url(../fonts/yekan.ttf) format("truetype");
}

+CSS for different IE versions (July 27, 2015, 1:40 p.m.)


IE-6 ONLY:
* html #div {
height: 300px;
}

IE-7 ONLY:
*+html #div {
height: 300px;
}

IE-8 ONLY:
#div {
height: 300px\0/;
}

IE-7 & IE-8:
#div {
height: 300px\9;
}

IE-6 AND LOWER:
#div {
_height: 300px;
}

Hide from IE 6 and LOWER:
#div {
height/**/: 300px;
}
html > body #div {
height: 300px;
}

+Fonts (July 13, 2015, 1:15 p.m.)

+white-space (July 9, 2015, 3:44 a.m.)

white-space: normal;
The text will wrap.
If you want to prevent the text from wrapping, you can apply:
white-space: nowrap;
If we want to force the browser to display line breaks and extra white space characters we can use:
white-space: pre;
If you want white space and breaks, but you need the text to wrap instead of potentially break out of its parent container:
white-space: pre-wrap;
white-space: pre-line;
Will break lines where they break in code, but extra white space is still stripped.
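For example, to keep a server log's line breaks while still wrapping long lines inside their container (the .log-output class name is made up for illustration):

```css
/* a minimal sketch, assuming a <pre>-like container for log text */
.log-output {
  white-space: pre-wrap;
}
```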

+Map - forEach (April 10, 2020, 12:19 a.m.)

Map<int, String> _levels = <int, String>{
  0: 'All Levels',
  1: 'Beginner',
  2: 'Intermediate',
  3: 'Advanced'
};

_levels.forEach((int value, String title) {
  items.add(new DropdownMenuItem(
    value: value.toString(),
    child: new Text(title),
  ));
});

+Getters and setters (March 27, 2020, 11:44 p.m.)

You can define getters and setters whenever you need more control over a property than a simple field allows.

For example, you can make sure a property’s value is valid:

class MyClass {
  int _aProperty = 0;

  int get aProperty => _aProperty;

  set aProperty(int value) {
    if (value >= 0) {
      _aProperty = value;
    }
  }
}

You can also use a getter to define a computed property:

class MyClass {
  List<int> _values = [];

  void addValue(int value) {
    _values.add(value);
  }

  // A computed property.
  int get count {
    return _values.length;
  }
}

+Cascades (March 27, 2020, 11:42 p.m.)


To perform a sequence of operations on the same object, use cascades (..).

Consider this expression:

myObject.someMethod()

It invokes someMethod() on myObject, and the result of the expression is the return value of someMethod().

Here’s the same expression with a cascade:

myObject..someMethod()

Although it still invokes someMethod() on myObject, the result of the expression isn’t the return value; it’s a reference to myObject! Using cascades, you can chain together operations that would otherwise require separate statements. For example, consider this code:

var button = querySelector('#confirm');
button.text = 'Confirm';
button.onClick.listen((e) => window.alert('Confirmed!'));

With cascades, the code becomes much shorter, and you don’t need the button variable:

querySelector('#confirm')
..text = 'Confirm'
..onClick.listen((e) => window.alert('Confirmed!'));

+Arrow syntax (March 27, 2020, 11:41 p.m.)

bool hasEmpty = aListOfStrings.any((s) => s.isEmpty);

bool hasEmpty = aListOfStrings.any((s) {
  return s.isEmpty;
});


+Collection literals (March 27, 2020, 11:38 p.m.)

Dart has built-in support for lists, maps, and sets. You can create them using literals:

final aListOfStrings = ['one', 'two', 'three'];
final aSetOfStrings = {'one', 'two', 'three'};
final aMapOfStringsToInts = {
  'one': 1,
  'two': 2,
  'three': 3,
};

Dart’s type inference can assign types to these variables for you. In this case, the inferred types are List<String>, Set<String>, and Map<String, int>.

Or you can specify the type yourself:

final aListOfInts = <int>[];
final aSetOfInts = <int>{};
final aMapOfIntToDouble = <int, double>{};

Specifying types is handy when you initialize a list with contents of a subtype, but still want the list to be List<BaseType>:

final aListOfBaseType = <BaseType>[SubType(), SubType()];


+String interpolation (March 27, 2020, 11:36 p.m.)

'${3 + 2}'                  ->  '5'

'${"word".toUpperCase()}'   ->  'WORD'

'$myObject'                 ->  the value of myObject.toString()

+Null-aware Operators (March 27, 2020, 11:31 p.m.)


Use ?? when you want to evaluate and return an expression IFF another expression resolves to null.

exp ?? otherExp

is similar to

((x) => x == null ? otherExp : x)(exp)



Use ??= when you want to assign a value to an object IFF that object is null. Otherwise, return the object.

obj ??= value

is similar to

((x) => x == null ? obj = value : x)(obj)



Use ?. when you want to call a method/getter on an object IFF that object is not null (otherwise, return null).

obj?.method()

is similar to

((x) => x == null ? null : x.method())(obj)

You can chain ?. calls, for example:

obj?.child1?.child2?.getter

If obj, or child1, or child2 are null, the entire expression returns null. Otherwise, getter is called and returned.



Dart 2.3 brings in a spread operator (...) and with it comes a new null-aware operator, ?...!

Placing ... before an expression inside a collection literal unpacks the result of the expression and inserts its elements directly inside the new collection.

So now, these two are equivalent.

List numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];


List lowerNumbers = [1, 2, 3, 4, 5];
List upperNumbers = [6, 7, 8, 9, 10];
List numbers = [...lowerNumbers, ...upperNumbers];

To benefit from the new null aware operator, you can use it like this.

List lowerNumbers = [1, 2, 3, 4, 5];
List upperNumbers = [6, 7, 8, 9, 10];
List numbers = [...lowerNumbers, ...?upperNumbers];

which is the equivalent to

List numbers = List.from(lowerNumbers);
if (upperNumbers != null) {
  numbers.addAll(upperNumbers);
}

+Array utility methods (March 27, 2020, 8:36 p.m.)


forEach()

var fruits = ['banana', 'pineapple', 'watermelon'];
fruits.forEach((fruit) => print(fruit)); // => banana pineapple watermelon



map()

var mappedFruits = => 'I love $fruit').toList();
print(mappedFruits); // => ['I love banana', 'I love pineapple', 'I love watermelon']



contains()

var numbers = [1, 3, 2, 5, 4];
print(numbers.contains(2)); // => true



sort()

numbers.sort((num1, num2) => num1 - num2); // => [1, 2, 3, 4, 5]


reduce(), fold()

Compresses the elements to a single value, using the given function.

var sum = numbers.reduce((curr, next) => curr + next);
print(sum); // => 15

const initialValue = 10;
var sum2 = numbers.fold(initialValue, (curr, next) => curr + next);
print(sum2); // => 25



every()

Confirms that every element satisfies the test.

List<Map<String, dynamic>> users = [
  { "name": 'John', "age": 18 },
  { "name": 'Jane', "age": 21 },
  { "name": 'Mary', "age": 23 },
];

var is18AndOver = users.every((user) => user["age"] >= 18);
print(is18AndOver); // => true

var hasNamesWithJ = users.every((user) => user["name"].startsWith('J'));
print(hasNamesWithJ); // => false


where(), firstWhere(), singleWhere()

Returns a collection of elements that satisfy a test.

// See the example above for the users list
var over21s = users.where((user) => user["age"] > 21);
print(over21s.length); // => 1

var nameJ = users.firstWhere((user) => user["name"].startsWith('J'), orElse: () => null);
print(nameJ); // => {name: John, age: 18}

var under18s = users.singleWhere((user) => user["age"] < 18, orElse: () => null);
print(under18s); // => null

firstWhere() returns the first match in the list, while singleWhere() returns the first match provided there is exactly one match.


take(), skip()

Returns a collection while including or skipping elements.

var fiboNumbers = [1, 2, 3, 5, 8, 13, 21];
print(fiboNumbers.take(3).toList()); // => [1, 2, 3]
print(fiboNumbers.skip(5).toList()); // => [13, 21]
print(fiboNumbers.take(3).skip(2).take(1).toList()); // => [3]



List.from()

Creates a new list from the given collection.

var clonedFiboNumbers = List.from(fiboNumbers);
print('Cloned list: $clonedFiboNumbers');



expand()

Expands each element into zero or more elements.

var pairs = [[1, 2], [3, 4]];
var flattened = pairs.expand((pair) => pair).toList();
print('Flattened result: $flattened'); // => [1, 2, 3, 4]

var input = [1, 2, 3];
var duplicated = input.expand((i) => [i, i]).toList();
print(duplicated); // => [1, 1, 2, 2, 3, 3]


+Filtering list values (March 27, 2020, 8:31 p.m.)

List languages = ['Python', 'Perl', 'Haskell', 'Dart'];
List short = languages.where((l) => l.length < 5).toList();
print(short); // [Perl, Dart]


var fruits = ['apples', 'oranges', 'bananas'];
fruits.where((f) => f.startsWith('a')).toList(); //apples


_AnimatedMovies = AllMovies.where((i) => i.isAnimated).toList();


+Constructors (March 27, 2020, 4:20 a.m.)

class Student {
  int id = -1;
  String name;

  Student(, this.name); // Parameterised Constructor

  Student.myCustomConstructor() { // Named Constructor
    print("This is my custom constructor");
  }

  Student.myAnotherNamedConstructor(, this.name); // Named Constructor

  void study() {
    print("${} is now studying");
  }

  void sleep() {
    print("${} is now sleeping");
  }
}

+Exception Handling (March 27, 2020, 4:13 a.m.)

void main() {

  print("CASE 1");
  // CASE 1: When you know the exception to be thrown, use the ON clause.
  try {
    int result = 12 ~/ 0;
    print("The result is $result");
  } on IntegerDivisionByZeroException {
    print("Cannot divide by Zero");
  }

  print(""); print("CASE 2");
  // CASE 2: When you do not know the exception, use the CATCH clause.
  try {
    int result = 12 ~/ 0;
    print("The result is $result");
  } catch (e) {
    print("The exception thrown is $e");
  }

  print(""); print("CASE 3");
  // CASE 3: Use the STACK TRACE to see the events that occurred before the exception was thrown.
  try {
    int result = 12 ~/ 0;
    print("The result is $result");
  } catch (e, s) {
    print("The exception thrown is $e");
    print("STACK TRACE \n $s");
  }

  print(""); print("CASE 4");
  // CASE 4: Whether there is an exception or not, the FINALLY clause is always executed.
  try {
    int result = 12 ~/ 3;
    print("The result is $result");
  } catch (e) {
    print("The exception thrown is $e");
  } finally {
    print("This is FINALLY Clause and is always executed.");
  }

  print(""); print("CASE 5");
  // CASE 5: Custom Exception
  try {
    depositMoney(-200);
  } catch (e) {
    print(e.errorMessage());
  } finally {
    // Code
  }
}

class DepositException implements Exception {
  String errorMessage() {
    return "You cannot enter amount less than 0";
  }
}

void depositMoney(int amount) {
  if (amount < 0) {
    throw new DepositException();
  }
}

+Basics (March 27, 2020, 2:56 a.m.)

int age = 32;
var age = 32;

They're both the same.


int result = 12 / 4; // There will be a warning "A value of type double can't be assigned to int."

int result = 12 ~/ 4; // The ~/ operator returns the result as an integer.


Final and Const:

If you never want to change a value then use "final" and "const" keywords.

final cityName = 'Tehran';
const PI = 3.14;

The "final" variable can only be set once and it is initialized when accessed.

The "const" variable is implicitly final but it is a compile-time constant, i.e. it is initialized during compilation.

class Circle {
  final color = 'red';
  static const PI = 3.14; // Only static fields can be declared as const.
}


Conditional Expressions - Ternary Operator:

int a = 2;
int b = 3;
a < b ? print("$a is smaller") : print("$b is smaller");

smallerNumber = a < b ? a : b;


Conditional Expressions - Null-aware Operator (??):

String name = 'Mohsen';
String nameToPrint = name ?? 'Hassani'; // It will print "Mohsen".

String name;
String nameToPrint = name ?? 'Hassani'; // It will print "Hassani".


For Loop:

List colorNames = ["Blue", "Yellow", "Green", "Red"];
for (String color in colorNames) {
  print(color);
}

Do-While Loop:

int i = 1;
do {
  print(i);
  i++;
} while (i <= 10);

int i = 1;
do {
  if (i % 2 == 0) {
    print(i);
  }
  i++;
} while (i <= 10);


Break Keyword:

myOuterLoop: for (int i = 1; i <= 3; i++) {
  innerLoop: for (int j = 1; j <= 3; j++) {
    print("$i $j");

    if (i == 2 && j == 2) {
      break myOuterLoop;
    }
  }
}


Optional Positional Parameters in Functions:

void printCountries(String name1, [String name2, String name3]) {
  print("$name1, $name2, $name3");
}

printCountries("Iran"); // Prints Iran, null, null


Optional Named Parameters:

void findVolume(int length, {int breadth, int height}) {
  print("Volume is ${length * breadth * height}");
}

findVolume(10, breadth: 5, height: 20); // Volume is 1000


Optional Default Parameters:

void findVolume(int length, {int breadth = 2, int height = 20}) {
  print("Volume is ${length * breadth * height}");
}

findVolume(10); // Volume is 400


+Queryset - Filter on ManyToMany count (May 17, 2020, 12:43 p.m.)

questions = Question.objects.annotate(
    num_answers=Count('answers')).filter(num_answers__gt=0)  # threshold is an example
+CBV - CheckboxSelectMultiple (May 16, 2020, 2:31 p.m.)

from django.forms.models import modelform_factory

class ModelFormWidgetMixin:
    def get_form_class(self):
        return modelform_factory(
            self.model, fields=self.fields, widgets=self.widgets)


class ProductCreate(ModelFormWidgetMixin, CreateView):
    model = Product
    fields = [
        'name', 'image', 'units', 'default_unit', 'extra_info', 'is_enabled'
    ]
    template_name = 'manager/occupations/products-create.html'
    widgets = {
        'units': forms.CheckboxSelectMultiple,
        'extra_info': forms.CheckboxSelectMultiple
    }


+Template - Get form field ID (May 16, 2020, 11:55 a.m.)

{{ field.auto_id }}

{{ field.id_for_label }}

{{ field.html_name }}

+Signals (May 16, 2020, 10:11 a.m.)

from django.db.models.signals import post_save
from django.dispatch import receiver

class Occupation(models.Model):
    ...


@receiver(post_save, sender=Occupation)
def create_tag(sender, instance, created, **kwargs):
    """Create a tag for new occupation. Update the name if already created."""
    if created:
        ...

+GenericForeignKey (May 13, 2020, 5:40 p.m.)

from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType

class Tag(models.Model):
    content_type = models.ForeignKey(
        ContentType, on_delete=models.CASCADE, blank=True, null=True)
    object_id = models.CharField(max_length=50, blank=True, null=True)
    content_object = GenericForeignKey('content_type', 'object_id')


from django.contrib.contenttypes.models import ContentType



+Class-Based Views (CBV) (May 11, 2020, 12:24 p.m.)

+CBV - SuccessMessageMixin (May 11, 2020, 12:10 p.m.)

from django.contrib.messages.views import SuccessMessageMixin
from django.utils.text import format_lazy
from django.utils.translation import ugettext_lazy as _

class CategoryCreateView(SuccessMessageMixin, CreateView):
    model = Category
    fields = ['name']
    success_message = format_lazy(
        _('The {item} was created successfully.'), item=_('category')
    )

+CBV - Set asterisk for required fields (May 11, 2020, 12:09 p.m.)

class CategoryCreateView(CreateView, ListView):

    def get_form(self, form_class=None):
        form = super().get_form(form_class)
        form.required_css_class = 'required'
        return form

+CBV - Delete old image on form save (May 10, 2020, 5:33 p.m.)

def form_valid(self, form):
    data = form.cleaned_data

    config = Config.objects.get(pk=1)
    if config.occupations_default_image != data['occupations_default_image']:
        config.occupations_default_image.delete(save=False)
    if config.products_default_image != data['products_default_image']:
        config.products_default_image.delete(save=False)

    return super().form_valid(form)

+DRF - Save a list of object PKs (May 9, 2020, 11:36 a.m.)

class TimetableWriteSerializer(serializers.ModelSerializer):
    timetables = serializers.PrimaryKeyRelatedField(
        many=True, queryset=Timetable.objects.all())

    class Meta:
        model = ExpertProfile
        fields = ['timetables']

+Run django code in python file (May 9, 2020, 9:39 a.m.)

Add these two lines at the top of the file:

import django
django.setup()

(DJANGO_SETTINGS_MODULE must already be set in the environment, or set it with os.environ.setdefault() before calling django.setup().)

+Models - limit_choices_to (May 8, 2020, 7:53 p.m.)

report_permissions = models.ManyToManyField(
    verbose_name=_('report permissions'),
    limit_choices_to={'category': '1'}
)


staff_member = models.ForeignKey(
    limit_choices_to={'is_staff': True},
)


from django.db.models import Q

limit_choices_to=Q(share_holder=True) | Q(distributor=True)


product = models.ForeignKey(


+CBV - Simple View (May 4, 2020, 4:38 p.m.)

class RestoreQuestion(View):
    def get(self, request, *args, **kwargs):
        pk = self.kwargs.get('pk')

        if pk:
            return HttpResponseRedirect(reverse_lazy('deletions:home'))
        return HttpResponseRedirect(reverse_lazy('home'))

+CBV - Pass Params to Form (May 4, 2020, 11:58 a.m.)


class QuestionEditView(SuccessMessageMixin, UpdateView):
    model = Question
    template_name = 'expert_verified/edit.html'
    success_message = format_lazy(
        _('The {item} was updated successfully.'), item=_('question')
    )
    form_class = QuestionForm
    context_object_name = 'question'

    def get_form_kwargs(self):
        kwargs = super().get_form_kwargs()
        kwargs.update({'request': self.request})
        return kwargs


class QuestionForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        self.request = kwargs.pop('request')
        super().__init__(*args, **kwargs)

+Makemessages exclude (April 11, 2020, 5:01 p.m.)

django-admin makemessages -i apps/ -l fa

+Thread in TemplateView (April 8, 2020, 10:59 a.m.)

class QuestionsImportView(TemplateView):
    template_name = 'questions/index.html'

    def get(self, request, *args, **kwargs):
        context = self.get_context_data(**kwargs)
        return self.render_to_response(context)

+DRF - get_extra_kwargs method (March 3, 2020, 3:51 p.m.)

def get_extra_kwargs(self):
    # Set "document" to non-required for the partial-update action.
    extra_kwargs = {'city': {'required': True}}
    action = self.context['view'].action
    if action == 'partial_update':
        extra_kwargs['document'] = {'required': False}
    return extra_kwargs

+DRF - Viewsets (March 1, 2020, 3:23 p.m.)

The ViewSet class inherits from APIView. You can use any of the standard attributes such as permission_classes, authentication_classes in order to control the API policy on the viewset.

A ViewSet class is simply a type of class-based View, that does not provide any method handlers such as .get() or .post(), and instead provides actions such as .list() and .create().

class UserViewSet(viewsets.ViewSet):
    """A simple ViewSet for listing or retrieving users."""

    def list(self, request):
        queryset = User.objects.all()
        serializer = UserSerializer(queryset, many=True)
        return Response(

    def retrieve(self, request, pk=None):
        queryset = User.objects.all()
        user = get_object_or_404(queryset, pk=pk)
        serializer = UserSerializer(user)
        return Response(


ViewSet actions:

def list(self, request):

def create(self, request):

def retrieve(self, request, pk=None):

def update(self, request, pk=None):

def partial_update(self, request, pk=None):

def destroy(self, request, pk=None):


The ViewSet class does not provide any implementations of actions. In order to use a ViewSet class you'll override the class and define the action implementations explicitly.



The GenericViewSet class inherits from GenericAPIView, and provides the default set of get_object, get_queryset methods and other generic view base behavior, but does not include any actions by default.

In order to use a GenericViewSet class you'll override the class and either mixin the required mixin classes, or define the action implementations explicitly.



The ModelViewSet class inherits from GenericAPIView and includes implementations for various actions, by mixing in the behavior of the various mixin classes.

The actions provided by the ModelViewSet class are .list(), .retrieve(), .create(), .update(), .partial_update(), and .destroy().



The ReadOnlyModelViewSet class also inherits from GenericAPIView. As with ModelViewSet it also includes implementations for various actions, but unlike ModelViewSet only provides the 'read-only' actions, .list() and .retrieve().


+Faker & Factoryboy (Feb. 24, 2020, 11:53 a.m.)


from accounts.factories import AccountFactory





+Create Test Database Permission Denied (Jan. 22, 2020, 5:08 p.m.)

When running the "python test" command, you might get the error:

"Got an error creating the test database: permission denied to create database"

For solving the problem, you need to give permission to your project database user:

sudo su
su postgres
psql
alter user my_user createdb;

+Test - RequestFactory vs Client (Jan. 22, 2020, 1:01 p.m.)

The test Client is used when you want to exercise the full HTTP request-response cycle, while RequestFactory is used when you want to test a view in isolation by calling it directly.


Typically just use the test client. That'll ensure that you're testing your project more completely, as the full request-response cycle is under test, including routing and middleware.


Use RequestFactory if you want to write unit tests that require a request instance, or if you have some good reason to want to test just the view itself.


RequestFactory will be much quicker, which is important when you have a lot of tests. It is also only testing the part you want to test, which is a better unit test - you will more quickly know where the error is. You should still have some tests with TestCase to test the full HTTP request.


Testing a GET request

Before now, you may well have used the Django test client to test views. That is fine for higher-level tests, but if you want to test a view in isolation, it’s no use because it emulates a real web server and all of the middleware and authentication, which we want to keep out of the way. Instead, we need to use RequestFactory:

from django.test import RequestFactory

RequestFactory actually implements a subset of the functionality of the Django test client, so while it will feel somewhat familiar, it won’t have all the same functionality. For instance, it doesn’t support middleware, so rather than logging in using the test client’s login() method, you instead attach a user directly to the request, as in this example:

request = RequestFactory().get('/')

request.user = user


RequestFactory returns a request, while Client returns a response.

The RequestFactory does what it says - it's a factory to create request objects. Nothing more, nothing less.

The Client is used to fake a complete request-response cycle. It will create a request object, which it then passes through a WSGI handler. This handler resolves the URL, calls the appropriate middleware and runs the view. It then returns the response object. It has the added benefit that it gathers a lot of extra data on the response object that is extremely useful for testing.

The RequestFactory doesn't actually touch any of your code, but the request object can be used to test parts of your code that require a valid request. The Client runs your views, so in order to test your views, you need to use the Client and inspect the response.


+Files and Directories (Jan. 3, 2020, 3:42 p.m.)





if voicemail_record.reply_recording_path:


+Excel Files (Dec. 24, 2019, 8:57 p.m.)

import xlrd

if request.POST and request.FILES:
    excel_file = request.FILES['excel_file'].read()
    book = xlrd.open_workbook(file_contents=excel_file)
    sheet = book.sheet_by_index(0)
    for row_num in range(sheet.nrows):
        row_values = sheet.row_values(row_num)

+DRF - to_representation vs to_internal_value (Dec. 23, 2019, 1:23 p.m.)

The to_representation() method is called to convert the initial datatype into a primitive, serializable datatype.

The to_internal_value() method is called to restore a primitive datatype into its internal python representation. This method should raise a "serializers.ValidationError" if the data is invalid.


def to_internal_value(self, data):
    data_ = data.copy()

    # Get registrar type
    if data.get('registrar_type'):
        if data['registrar_type'] == 'student':
            data_['registrar_type'] = '1'
        elif data['registrar_type'] == 'teacher':
            data_['registrar_type'] = '2'

    return super().to_internal_value(data_)


+CBV - Raise form error in form_valid() (Dec. 18, 2019, 4:07 p.m.)

def form_valid(self, form):
    data = form.cleaned_data
    if data['default_unit'].pk not in data['units'].values_list('pk', flat=True):
        form.add_error('default_unit', _('The selected item does not exist in selected units.'))
        return self.form_invalid(form)
    return super().form_valid(form)

+CBV - Change form widget in generic CreateView/UpdateView (Dec. 18, 2019, 11:56 a.m.)

from django.forms.models import modelform_factory

class ModelFormWidgetMixin:
    def get_form_class(self):
        return modelform_factory(self.model,
                                 fields=self.fields, widgets=self.widgets)


from django import forms

class ProductCreate(ModelFormWidgetMixin, CreateView):

    widgets = {
        'units': forms.CheckboxSelectMultiple
    }


+Queries - Use regex (Dec. 13, 2019, 12:26 a.m.)

The records whose caller_id field has only 3 digits:



+URLs - include (Dec. 8, 2019, 10:54 a.m.)

include(module, namespace=None)
include((pattern_list, app_namespace), namespace=None)

application namespace:
This describes the name of the application that is being deployed. Every instance of a single application will have the same application namespace. For example, Django’s admin application has a somewhat predictable application namespace of 'admin'.

instance namespace:
This identifies a specific instance of an application. Instance namespaces should be unique across your entire project. However, an instance namespace can be the same as the application namespace. This is used to specify a default instance of an application. For example, the default Django admin instance has an instance namespace of 'admin'.

+Create uploadpath based on the current date (Nov. 30, 2019, 12:30 p.m.)

def get_upload_path(instance, filename):
    return os.path.join('account/avatars/', now().date().strftime("%Y/%m/%d"), filename)

class User(AbstractUser):
    avatar = models.ImageField(blank=True, upload_to=get_upload_path)
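The path-building half of the snippet above is plain Python and can be checked on its own (the date and filename below are made-up examples; only the ImageField wiring needs Django):

```python
import os
from datetime import date

# Rebuild the upload path the same way get_upload_path() does, for a
# hypothetical file "avatar.png" uploaded on 2020-11-30.
upload_date = date(2020, 11, 30)
path = os.path.join('account/avatars/', upload_date.strftime("%Y/%m/%d"), 'avatar.png')
print(path)  # account/avatars/2020/11/30/avatar.png
```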

+Modeling Polymorphism (Nov. 13, 2019, 1:48 p.m.)

Polymorphism is the ability of an object to take on many forms. Common examples of polymorphic objects include event streams, different types of users, and products in an e-commerce website. A polymorphic model is used when a single entity requires different functionality or information.


+ugettext_lazy, ugettext, ugettext_noop (Nov. 10, 2019, 1:27 p.m.)

ugettext_lazy holds a reference to the translation string instead of the actual translated text, so the translation occurs when the value is accessed rather than when they’re called.

When to use ugettext() or ugettext_lazy():

Use ugettext_lazy() in code that runs once at import time, for example:
- (fields' verbose_name and help_text, methods' short_description);
- (labels, help_text, empty_label);
- (verbose_name).

Use ugettext() in view functions and other code that is executed during the request process.


ugettext: The function returns the translation for the currently selected language.

ugettext_lazy: The function marks the string as translation string, but only fetches the translated string when it is used in a string context, such as when rendering a template.

ugettext_noop: This function only marks a string as a translation string, it does not have any other effect; that is, it always returns the string itself.


ugettext_noop example:

import logging
from django.http import HttpResponse
from django.utils.translation import ugettext as _, ugettext_noop as _noop

def view(request):
    msg = _noop("An error has occurred")
    return HttpResponse(_(msg))


+Managers (Aug. 15, 2019, 11:06 a.m.)

class MyManager(models.Manager):
    def get_queryset(self):
        return super().get_queryset().filter(last_data__startswith='SIP/Mohsen')


class MyModel(models.Model):
    ...

    objects = models.Manager()
    my_objects = MyManager()

+FloatField vs DecimalField (July 31, 2019, 2:58 a.m.)

Always use DecimalField for money. Even simple operations (addition, subtraction) are not immune to float rounding issues.



DecimalField:

- DecimalFields must define a 'decimal_places' and a 'max_digits' attribute.

- You get two free form validations included here from the above required attributes, i.e. If you set max_digits to 4, and you type in a decimal that is 4.00000 (5 digits), you will get this error: Ensure that there are no more than 4 digits in total.

- You also get a similar form validation done for decimal places (which in most browsers will also validate on the front end using the step attribute on the input field). If you set decimal_places = 1 and type in 0.001 as the value, you will get an error that the minimum value has to be 0.1.

- With a Decimal type, rounding is also handled for you due to the required attributes that need to be set.

- In the database (postgresql), the DecimalField is saved as a numeric(max_digits, decimal_places) type, and storage is set as "main".



FloatField:

- No smart rounding, which can actually result in rounding issues as described above.

- Does not have the extra form validation that you get from DecimalField

- In the database (postgresql), the FloatField is saved as a "double precision" Type, and Storage is set as "plain"
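The rounding problem is easy to demonstrate with plain Python, independent of Django:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so even simple addition drifts.
float_total = 0.1 + 0.1 + 0.1
print(float_total == 0.3)  # False

# Decimal keeps exact base-10 values, like the database "numeric" type,
# which is why DecimalField is the right choice for money.
decimal_total = Decimal('0.10') + Decimal('0.10') + Decimal('0.10')
print(decimal_total == Decimal('0.30'))  # True
```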


+Aggregation vs Annotation (July 22, 2019, 12:18 p.m.)

Aggregate calculates values for the entire queryset.
Aggregate generates a result (summary) value over the entire QuerySet: it operates on the whole rowset to produce a single value (for example, the sum of all prices).

>>> Book.objects.aggregate(Avg('price'))

Returns a dictionary containing the average price of all books in the queryset.


Annotate calculates summary values for each item in the queryset.
Annotate generates an independent summary for each object in a QuerySet (it iterates over the QuerySet and applies the operation to each object).

>>> q = Book.objects.annotate(num_authors=Count('authors'))
>>> q[0].num_authors
>>> q[1].num_authors
q is the queryset of books, but each book has been annotated with the number of authors.

videos = Video.objects.values('id', 'name', 'video').annotate(Count('user_likes', distinct=True))
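The aggregate/annotate distinction can also be sketched with plain Python lists (the book data below is made up; real code would use the QuerySet methods above):

```python
books = [
    {"title": "A", "price": 10.0, "authors": 1},
    {"title": "B", "price": 20.0, "authors": 3},
]

# aggregate(): one summary value computed over the whole set.
avg_price = sum(b["price"] for b in books) / len(books)
print(avg_price)  # 15.0

# annotate(): a computed value attached to each individual item.
for b in books:
    b["price_with_tax"] = round(b["price"] * 1.1, 2)

print([b["price_with_tax"] for b in books])  # [11.0, 22.0]
```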

+m2m (July 11, 2019, 9:07 p.m.)

For a ModelForm just do:

If you had to use commit=False in, then you have to save the m2m manually:

if form.is_valid():
    project =
    # Do something extra with "project" ....
    form.save_m2m()


if form.fields.get('units'):


+Style Admin Interface in (April 18, 2018, 7:39 a.m.)

class NoteAdmin(admin.ModelAdmin):
    search_fields = ('title', 'note')
    list_filter = ('category',)

    class Media:
        css = {
            'all': ('admin/css/interface.css',)
        }


The path to "interface.css" is:


And finally, I couldn't make nginx recognize this file. To solve the problem I had to comment out the "location /static/admin/" block in the nginx config file and run "collectstatic" in my project to gather all admin static files together.


+Ajax and CSRF (April 22, 2018, 7:08 p.m.)

$.ajax({
    type: 'POST',
    url: $(this).attr('href'),
    data: {
        csrfmiddlewaretoken: '{{ csrf_token }}'
    },
    dataType: 'json',
    success: function (status) {
    },
    error: function () {
    }
});

+Django-2 Sample (April 29, 2018, 2:44 p.m.)

import os
import re

def gettext_noop(s):
return s

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

ROOT_URLCONF = 'mohsenhassani.urls'

DEBUG = True

ADMINS = [('Mohsen Hassani', '')]

ALLOWED_HOSTS.extend(['localhost', ''])

TIME_ZONE = 'Asia/Tehran'

USE_TZ = True


LANGUAGES = [('en', gettext_noop('English')),
('fa', gettext_noop('Persian'))]

USE_I18N = True

LOCALE_PATHS = [
    os.path.join(BASE_DIR, 'locale'),
]

USE_L10N = True

SERVER_EMAIL = 'report@mohsenhassani'

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mohsenhassanidb',
        'USER': 'root',
    }
}


TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
            ],
        },
    },
]




SECRET_KEY = 'xqb&)90m*_!n3ovc$@%mo8!8!7j5d9o=8nm(iyw%#mzz&o1n6)'

MEDIA_ROOT = os.path.join(BASE_DIR, 'mohsenhassani', 'media/')
MEDIA_URL = '/media/'

STATIC_ROOT = os.path.join(BASE_DIR, 'mohsenhassani', 'static/')
STATIC_URL = '/static/'

FILE_UPLOAD_MAX_MEMORY_SIZE = 52428800 # i.e. 50 MB

WSGI_APPLICATION = 'mohsenhassani.wsgi.application'



AUTH_USER_MODEL = 'accounts.User'

LOGIN_URL = '/accounts/login/'

LOGIN_REDIRECT_URL = '/accounts/profile/'





+Send HTML Email with Attachment (April 30, 2018, 6:40 p.m.)

from django.core.mail import EmailMessage

email = EmailMessage('subject',

email.content_subtype = "html"

if data['attachment']:
    file_ = data['attachment']
    email.attach(,, file_.content_type)



for attachment in request.FILES:
    if data[attachment]:
        file_ = data[attachment]
        email.attach(,, file_.content_type)


+URL - Login Required & is_superuser (May 1, 2018, 11:56 a.m.)

from django.contrib.auth.decorators import login_required
from django.contrib.auth.decorators import user_passes_test

urlpatterns = [
    path('reports/', user_passes_test(lambda u: u.is_superuser)(
        login_required(report.reports)), name='reports'),
]

It seems "user_passes_test" already checks "login_required" somehow... so remove that decorator:

path('reports/', user_passes_test(lambda u: u.is_superuser)(report.reports), name='reports'),

+Database Functions, Aggregation, Annotations (June 16, 2018, 11:55 a.m.)

from django.db.models import F

OrgPayment.objects.update(shares=F('shares') / 70000)
Property.objects.filter(id=pid).update(views=F('views') + 1)


from django.db.models import Count



from django.db.models import Avg



from django.db.models import Avg, Count



Database Functions:


from django.db.models import Sum, Value
from django.db.models.functions import Coalesce

certificates_total_hours = reward_request.chosen_certificates.aggregate(total_hours=Coalesce(Sum('course_hours'), Value(0)))



# Get the display name as "name (goes_by)"

from django.db.models import CharField, Value as V
from django.db.models.functions import Concat

Author.objects.create(name='Margaret Smith', goes_by='Maggie')
author = Author.objects.annotate(
    screen_name=Concat('name', V(' ('), 'goes_by', V(')'),
                       output_field=CharField())).get()



Accepts a single text field or expression and returns the number of characters the value has. If the expression is null, then the length will also be null.

from django.db.models.functions import Length

Author.objects.create(name='Margaret Smith')
author = Author.objects.annotate(
    name_length=Length('name'),
    goes_by_length=Length('goes_by')).get()
print(author.name_length, author.goes_by_length)



Accepts a single text field or expression and returns the lowercase representation.

Usage example:

>>> from django.db.models.functions import Lower
>>> Author.objects.create(name='Margaret Smith')
>>> author = Author.objects.annotate(name_lower=Lower('name')).get()
>>> print(author.name_lower)
margaret smith



Returns a substring of length (length) from the field or expression starting at position pos. The position is 1-indexed, so the position must be greater than 0. If the length is None, then the rest of the string will be returned.

Usage example:

>>> # Set the alias to the first 5 characters of the name as lowercase
>>> from django.db.models.functions import Substr, Lower
>>> Author.objects.create(name='Margaret Smith')
>>> Author.objects.update(alias=Lower(Substr('name', 1, 5)))
>>> print(Author.objects.get(name='Margaret Smith').alias)
marga



Accepts a single text field or expression and returns the uppercase representation.

>>> from django.db.models.functions import Upper
>>> Author.objects.create(name='Margaret Smith')
>>> author = Author.objects.annotate(name_upper=Upper('name')).get()
>>> print(author.name_upper)
MARGARET SMITH


+Create directories if they don't exist (June 17, 2018, 6:09 p.m.)

import os

from django.conf import settings

avatar_path = '%s/images/avatars' % settings.MEDIA_ROOT
if not os.path.exists(os.path.dirname(avatar_path)):
    os.makedirs(os.path.dirname(avatar_path))

+Serve media files in debug mode (April 15, 2019, 12:11 p.m.)

from django.conf import settings
from django.conf.urls.static import static

if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

+Save file path to Django ImageField (June 17, 2018, 7:10 p.m.)
avatar = models.ImageField(_('avatar'), upload_to='manager/images/avatars/', null=True, blank=True)

-------- = 'images/avatars/mohsen.png'

+Forms - Validate Excel File (July 2, 2018, 10:53 a.m.)

from xlrd import open_workbook, XLRDError

from django import forms
from django.utils.translation import ugettext_lazy as _

class UploadExcelForm(forms.Form):
    excel_file = forms.FileField(label=_('file'))

    def clean_excel_file(self):
        excel_file = self.cleaned_data['excel_file']
        try:
            open_workbook(file_contents=excel_file.read())
        except XLRDError:
            raise forms.ValidationError(_('Please upload a valid excel file.'))

        return excel_file

+Messages (July 6, 2018, 8:57 p.m.)

from django.contrib import messages

messages.success(request, _('The information was saved successfully.'))
return HttpResponseRedirect(reverse('url', args=(code,)))



{% if messages %}
<ul class="messages">
    {% for message in messages %}
    <li {% if message.tags %} class="{{ message.tags }}" {% endif %}>{{ message }}</li>
    {% endfor %}
</ul>
{% endif %}


{% if message.tags == 'success' %}

+QuerySet - Filter based on Text Length (July 16, 2018, 3:04 p.m.)

from django.db.models.functions import Length

invalid_username = Driver.objects.annotate(
    username_length=Length('username')).filter(username_length__lt=5)  # field/threshold are examples
+QuerySet - Duplicate objects based on a specific field (July 16, 2018, 3:16 p.m.)

duplicate_plate_number_ids = Driver.objects.values('plate_number').annotate(
    Count('plate_number')).filter(
    plate_number__count__gt=1).values_list('plate_number', flat=True)

+Bulk Insert / Bulk Create (Oct. 7, 2018, 11:07 a.m.)

entry_records = []

for i in range(2000):
    entry_records.append(Entry(headline='This is a test'))

Entry.objects.bulk_create(entry_records)

+Force files to open in the browser instead of downloading (Oct. 9, 2018, 8:48 a.m.)

Force browser that the file should be viewed in the browser:

Content-Type: application/pdf
Content-Disposition: inline; filename="filename.pdf"

To have the file downloaded rather than viewed:

Content-Type: application/pdf
Content-Disposition: attachment; filename="filename.pdf"

+Database creation error when running django tests (April 13, 2019, 2:10 p.m.)

In case of having this error when running django tests:
Got an error creating the test database: permission denied to create database

Log in to psql shell and let your database user to create databases:
alter user my_user createdb;

+Find Model Relations (Oct. 17, 2018, 4:48 p.m.)

for field in [f for f in file._meta.get_fields() if not f.concrete]:


model = field.related_model

model = type(instance)

# For deferred instances
model = instance._meta.proxy_for_model


app_label = model._meta.app_label

app_label = instance._meta.app_label


model_name = model.__name__


if field.get_internal_type() == 'ForeignKey':





ct = ContentType.objects.get_for_model(model)




+Pass JSON object data from view to template (April 13, 2019, 11:32 a.m.)


import json

data = json.dumps(the_dictionary)
return render(request, 'abc.html', {'data': data})



<script type="text/javascript">
    var data = {{ data|safe }};
</script>
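The view-side step is just standard JSON serialization; a minimal standalone sketch (the dict content is made up for illustration):

```python
import json

# Serializing a dict to a JSON string, exactly what the view hands to the
# template context; the dict content is made up for illustration.
the_dictionary = {'name': 'example', 'count': 3}
data = json.dumps(the_dictionary)
```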

+Form - Access Field type in template (Dec. 8, 2018, 12:23 p.m.)

{{ field.field.widget.input_type }}

+QuerySet - Group By (Dec. 14, 2018, 8:52 a.m.)

from django.db.models import Count

requests = Loan.objects.filter(loan__type='n',
                               status__status__in=['1', '2', '3'])
stats = requests.values('personnel__center__title').annotate(Count('id'))

{% for stat in stats %}
<td>{{ forloop.counter }}</td>
<td>{{ stat.personnel__center__title }}</td>
<td>{{ stat.id__count }}</td>
{% endfor %}


this_week_articles = Article.objects.filter(
    created__gte=week_start).values(  # `created` / `week_start` are example names
    'creating_user__last_name', 'creating_user__first_name').annotate(Count('pk'))

# Result is:
<QuerySet [{'creating_user__last_name': 'Hassani', 'creating_user__first_name': 'Mohsen', 'pk__count': 286}, {'creating_user__last_name': 'BiGheri', 'creating_user__first_name': 'Mehdi', 'pk__count': 31}]>


from itertools import groupby

def extract_call_id(call):
    return call.call_id

# groupby only groups consecutive items, so sort by the same key first.
today_calls = sorted(today_calls, key=extract_call_id)
grouped_call_ids = [list(g) for t, g in groupby(today_calls, key=extract_call_id)]
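Since itertools.groupby only merges consecutive items with equal keys, the input must be sorted by the same key; a standalone sketch with made-up call dicts:

```python
from itertools import groupby

# groupby only merges *consecutive* items with equal keys, so sort first.
calls = [
    {'call_id': 2, 'duration': 10},
    {'call_id': 1, 'duration': 5},
    {'call_id': 2, 'duration': 7},
]
calls.sort(key=lambda call: call['call_id'])
grouped_calls = [list(g) for _, g in groupby(calls, key=lambda call: call['call_id'])]
```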


+Google reCAPTCHA API (Dec. 17, 2018, 12:55 p.m.)

1- Register your application in the reCAPTCHA admin: https://www.google.com/recaptcha/admin

2- After registering your website, you will be handed a Site key and a Secret key. The Site key will be used in the reCAPTCHA widget which is rendered within the page where you want to place it. The Secret key will be stored safely in the server, made available through the module.

3- Add the following tag to the head:
<script src='https://www.google.com/recaptcha/api.js'></script>

4- Add the following tag to the form:
<div class="g-recaptcha" data-sitekey="YOUR_SITE_KEY"></div>

5- pip install requests

import requests
from django.conf import settings

if request.POST:
    recaptcha_response = request.POST.get('g-recaptcha-response')
    data = {
        'secret': settings.GOOGLE_RECAPTCHA_SECRET_KEY,  # a setting you define yourself
        'response': recaptcha_response
    }
    response = requests.post(
        'https://www.google.com/recaptcha/api/siteverify', data=data)
    result = response.json()

    if result['success']:
        pass  # the submission passed the captcha check


+Split QuerySets (Dec. 17, 2018, 10:26 p.m.)

def chunks(items, length):
    for chunk in range(0, len(items), length):
        yield items[chunk:chunk + length]


Usage Example:

excel_file = get_object_or_404(ExcelFile, id=eid)

job_list = list(chunks(excel_file.tempdata_set.all(), 250))
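The generator above works on any sequence; a runnable sketch:

```python
def chunks(items, length):
    # Yield successive slices of `items`, each at most `length` long.
    for chunk in range(0, len(items), length):
        yield items[chunk:chunk + length]

job_list = list(chunks(list(range(10)), 4))
```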


+Get all related Django model objects (Dec. 30, 2018, 12:30 p.m.)

from django.db.models.deletion import Collector
from django.contrib.admin.utils import NestedObjects

user = User.objects.get(id=1)

collector = NestedObjects(using="default")
collector.collect([user])
related_objects = collector.nested()

+Admin - Render checkboxes for m2m (Jan. 13, 2019, 10:06 a.m.)


from django.contrib.auth.admin import UserAdmin
from django.db import models
from django.forms import CheckboxSelectMultiple

class PersonnelAdmin(UserAdmin):
    formfield_overrides = {
        models.ManyToManyField: {'widget': CheckboxSelectMultiple}
    }

+Truncate a long string (Jan. 27, 2019, 1:47 a.m.)

data = data[:75]


import textwrap

textwrap.shorten("Hello world!", width=12)  # 'Hello world!'

textwrap.shorten("Hello world", width=10, placeholder="...")  # 'Hello...'


from django.utils.text import Truncator

value = Truncator(value).chars(75)


+Model Conventions (Feb. 8, 2019, 7:53 a.m.)

+CSRF Token in an external javascript file (March 16, 2019, 2:11 p.m.)

function getCookie(name) {
    var cookieValue = null;
    if (document.cookie && document.cookie != '') {
        var cookies = document.cookie.split(';');
        for (var i = 0; i < cookies.length; i++) {
            var cookie = cookies[i].trim();
            // Does this cookie string begin with the name we want?
            if (cookie.substring(0, name.length + 1) == (name + '=')) {
                cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
                break;
            }
        }
    }
    return cookieValue;
}

// Then call it like the following:
var csrftoken = getCookie('csrftoken');

+Forms - Validation (March 11, 2018, 4:29 p.m.)

class ReportForm1(forms.Form):
    src_server_ip = forms.CharField(required=False)
    dst_server_ip = forms.CharField(required=False)

    def clean(self):
        if self.cleaned_data['src_server_ip'] == '' and self.cleaned_data[
                'dst_server_ip'] == '':
            raise forms.ValidationError(
                'At least a source or destination is required.')

+URL Regex that accepts all characters (Jan. 20, 2018, 1:14 a.m.)


+Forms - Custom ModelChoiceField (Nov. 15, 2017, 3:54 p.m.)

class AppointmentChoiceField(forms.ModelChoiceField):
    def label_from_instance(self, appointment):
        return "%s" % appointment.get_time()


class IntCommaChoiceField(forms.ModelChoiceField):
    def label_from_instance(self, base_amount):
        return "%s" % intcomma(base_amount)


class LoanAmountEditForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields['base_amount'] = IntCommaChoiceField(
            queryset=LoanAmount.objects.all(),  # adjust to the related model's queryset
            label=_('base amount'))

    class Meta:
        model = LoanAmount
        exclude = []


+JPG Validator (July 17, 2017, 10:23 a.m.)

from PIL import Image

from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _

def jpg_validator(certificate):
    file_type = Image.open(certificate).format  # Pillow reports JPEG files as 'JPEG'
    if file_type in ('JPEG', 'JPG'):
        return True
    raise ValidationError(_('The extension of certificate file should be jpg.'))
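As an alternative sketch that needs no PIL: JPEG files begin with the magic bytes FF D8 FF, so the file content itself can be checked instead of trusting the extension (a quick signature check only, not a full validation of the image data):

```python
# JPEG files begin with the magic bytes FF D8 FF; checking the content is
# more reliable than trusting the file extension. This is only a quick
# signature check, not a full validation of the image data.
def looks_like_jpeg(data):
    return data[:3] == b'\xff\xd8\xff'
```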

+Views - order_by sum of fields (June 10, 2017, 1:24 p.m.)

top_traffic_servers = Server.objects.extra(
    select={'sum': 'total_bytes_outgoing + total_bytes_incoming'},
    order_by=('-sum',))


If you need to do some filtering, you can add filter() to the end:

top_traffic_servers = Server.objects.extra(
    select={'sum': 'total_bytes_outgoing + total_bytes_incoming'},
    order_by=('-sum',)).filter(active=True)  # example filter

+Use MySQL or MariaDB with Django (May 18, 2017, 10:11 p.m.)

1- Installation:
sudo apt-get install python-pip python-dev mysql-server libmysqlclient-dev

sudo apt-get install python-pip python-dev mariadb-server libmariadbclient-dev libssl-dev

2- mysql -u root -p

3- CREATE DATABASE myproject CHARACTER SET UTF8;

4- CREATE USER myprojectuser@localhost IDENTIFIED BY 'password';

5- GRANT ALL PRIVILEGES ON myproject.* TO myprojectuser@localhost;

6- FLUSH PRIVILEGES;

7- exit

8- In the project environment:
pip install mysqlclient

+X-Frame-Options (Sept. 26, 2016, 9:05 p.m.)

Error in remote calling:
..does not permit cross-origin framing

There is a special header to allow or disallow showing a page inside an iframe: X-Frame-Options. It is used to prevent an attack called clickjacking. You can check Django's documentation about it.

Sites that want their content to be shown in an iframe simply don't set this header.

In your installation of Django this protection is turned on by default. If you want to allow embedding your content inside iframes, you can either disable the clickjacking protection in your settings for the whole site, or use per-view control with the django.views.decorators.clickjacking decorators.


Per view control is a better option.



from django.views.decorators.clickjacking import xframe_options_exempt

@xframe_options_exempt
def home(request):
    return render(request, 'home.html')  # any response; framing is now allowed

+Django Session Key (Sept. 20, 2016, 8:58 p.m.)

if not request.session.exists(request.session.session_key):
    request.session.create()
session_key = request.session.session_key

+Django REST Framework - Installation and Configuration (Sept. 20, 2016, 12:44 a.m.)

1- pip install djangorestframework django-filter markdown

2- Add 'rest_framework' to your INSTALLED_APPS setting.

3- If you're intending to use the browsable API you'll probably also want to add REST framework's login and logout views. Add the following to your root urls.py file.

urlpatterns = [
    ...
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
]

+User Timezone (Sept. 5, 2016, 2:12 a.m.)

There are several plugins you can use, but I guess there are reasons I need to avoid using them:

- They mainly require big .dat files which contain the timezones all over the world.

- They use middlewares to check the user's timezone, which might be called on every request and can slow down page loading.

- They only work with templates (using template tags and filters).


The simplest way I have achieved is using a snippet which uses an online web service:
import requests
import pytz
user_time_zone = requests.get('http://freegeoip.net/json/%s' % user_ip).json()['time_zone']  # user_ip: the visitor's IP address

This snippet can be used in only the views which need to detect the user's timezone; no middleware needed.


If you ever needed to use it in every request, you can use it in a middleware.

Create a file named `middleware.py` and add this middleware to it:

import requests
import pytz

from django.utils import timezone

class UserTimezoneMiddleware(object):
    def process_request(self, request):
        freegeoip_response = requests.get('http://freegeoip.net/json/' + request.META['REMOTE_ADDR'])
        freegeoip_response_json = freegeoip_response.json()
        user_time_zone = freegeoip_response_json['time_zone']
        timezone.activate(pytz.timezone(user_time_zone))
        return None

Add the `UserTimezoneMiddleware` class to `MIDDLEWARE_CLASSES` variable.

Now you can get the date/time based on user's timezone:


+Timestamp from datetime field (Sept. 5, 2016, 1:05 a.m.)

You can do it in a template or in a view.


{% now "U" %}
{{ value|date:"U" }}



from django.utils.dateformat import format
format(mymodel.mydatefield, 'U')

import time
time.mktime(mymodel.mydatefield.timetuple())
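A standalone sketch of the time-module route (in a view this would be applied to e.g. mymodel.mydatefield; a fixed datetime keeps the snippet self-contained):

```python
import time
from datetime import datetime

# In a view this would be e.g. mymodel.mydatefield; a fixed datetime is used
# here so the snippet is self-contained.
moment = datetime(2020, 1, 1, 0, 0)
stamp = time.mktime(moment.timetuple())  # interprets the datetime as local time
roundtrip = datetime.fromtimestamp(stamp)
```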

+Manually create a POST/GET QueryDict from a dictionary (Aug. 27, 2016, 3:11 a.m.)

from django.http import QueryDict, MultiValueDict

get_data = {'p_type': request.GET['p_type'], 'facilities': request.GET.getlist('facilities')}
get_data = dict(request.GET.iteritems())  # Python 2; on Python 3 use request.GET.items()

qdict = QueryDict('', mutable=True)
qdict.update(MultiValueDict({'facilities': get_data['facilities']}))
request.POST = qdict

+Django Dumpdata Field (Aug. 26, 2016, 3:43 a.m.)

1- pip install django-dumpdata-field

2- Add 'dumpdata_field' to your INSTALLED_APPS setting.

3- python manage.py dumpdata_field facemelk.province --fields=id,province_name > /home/mohsen/Projects/facemelk/facemelk/fixtures/provinces_fields.json

+Ajax File Upload (Aug. 22, 2016, 10:20 p.m.)

<form action="{% url 'glasses:upload-face' %}" method="POST" id="upload-face-form" enctype="multipart/form-data"> {% csrf_token %}
    <input type="file" id="upload-face" name="face" />
</form>


$('#upload-face').change(function() {
    var form = $('#upload-face-form');
    var form_data = new FormData(form[0]);
    $.ajax({
        type: form.attr('method'),
        url: form.attr('action'),
        data: form_data,
        contentType: false,
        cache: false,
        processData: false,
        dataType: 'json',
        success: function(image) {
            // Use the JSON response here
        },
        error: function(error) {
            console.log(error);
        }
    });
});

def upload_face(request):
    if request.is_ajax():
        image = request.FILES.get('face')
        if image:
            face = open('face.jpg', 'wb')
            for chunk in image.chunks():
                face.write(chunk)
            face.close()
            return JsonResponse({'hi': 'hi'})
    return HttpResponseRedirect(reverse('home'))


+Django Grappelli (May 16, 2016, 4:04 a.m.)

Official Website:





1- Install the package and add 'grappelli' to INSTALLED_APPS, before 'django.contrib.admin':
pip install django-grappelli

2- Add URL-patterns:
urlpatterns = [
    url(r'^grappelli/', include('grappelli.urls')),
    url(r'^admin/', include(admin.site.urls)),
]

3- Add the request context processor (needed for the Dashboard and the Switch User feature):
'context_processors': [
    ...
    'django.template.context_processors.request',
],

4- Collect the media files:
python manage.py collectstatic




Dashboard Setup:


Third Party Applications:

+Views - Receive and parse JSON data from a request using django-cors-headers (May 4, 2016, 3:19 a.m.)

import json

from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def update_note(request):
    request_json_data = bytes.decode(request.body)
    request_data = json.loads(request_json_data)


You need to install a plugin too:

1- pip install django-cors-headers

2- Add 'corsheaders' to your INSTALLED_APPS setting.

3- Add 'corsheaders.middleware.CorsMiddleware' to the MIDDLEWARE setting, as high as possible.

4- Set CORS_ORIGIN_ALLOW_ALL = True, or list the allowed origins in CORS_ORIGIN_WHITELIST.

+Internationalization (May 2, 2016, 10:56 p.m.)

from django.conf.urls.i18n import i18n_patterns

urlpatterns += i18n_patterns(
    # the url() patterns that should get the language prefix go here
)




And finally in a context_processors.py file, add some snippet like this:

def change_language(request):
if '/admin/' not in request.get_full_path():
if '/fa/' not in request.get_full_path():
return {}
return {}


{% get_language_info for LANGUAGE_CODE as lang %}
{% get_language_info for "pl" as lang %}

You can then access the information:

Language code: {{ lang.code }}<br />
Name of language: {{ lang.name_local }}<br />
Name in English: {{ lang.name }}<br />
Bi-directional: {{ lang.bidi }}
Name in the active language: {{ lang.name_translated }}

There are also simple filters available for convenience:
{{ LANGUAGE_CODE|language_name }} (“German”)
{{ LANGUAGE_CODE|language_name_local }} (“Deutsch”)
{{ LANGUAGE_CODE|language_bidi }} (False)
{{ LANGUAGE_CODE|language_name_translated }} (“německy”, when active language is Czech)

<form action="{% url 'set_language' %}" method="post">{% csrf_token %}
<input name="next" type="hidden" value="{{ redirect_to }}" />
<select name="language">
{% get_current_language as LANGUAGE_CODE %}
{% get_available_languages as LANGUAGES %}
{% get_language_info_list for LANGUAGES as languages %}
{% for language in languages %}
<option value="{{ language.code }}"{% if language.code == LANGUAGE_CODE %} selected="selected"{% endif %}>
    {{ language.name_local }} ({{ language.code }})
</option>
{% endfor %}
</select>
<input type="submit" value="Go" />
</form>

from django.utils import translation
user_language = 'fr'
request.session[translation.LANGUAGE_SESSION_KEY] = user_language

from django.http import HttpResponse

def hello_world(request, count):
if request.LANGUAGE_CODE == 'de-at':
return HttpResponse("You prefer to read Austrian German.")
return HttpResponse("You prefer to read another language.")


from django.conf import settings
from django.utils import translation

class ForceLangMiddleware:

    def process_request(self, request):
        request.LANG = getattr(settings, 'LANGUAGE_CODE', settings.LANGUAGE_CODE)
        translation.activate(request.LANG)
        request.LANGUAGE_CODE = request.LANG


+Admin - Access ModelForm properties (April 23, 2016, 9:09 a.m.)

def __init__(self, *args, **kwargs):
    initial = kwargs.get('initial', {})
    initial['material'] = 'Test'
    kwargs['initial'] = initial
    super(ArtefactForm, self).__init__(*args, **kwargs)


for field in self.fields.items():
    print(field[0])  # Prints field names
    print(field[1].label)  # Prints field labels

+View - Replace/Populate POST data (April 19, 2016, 11:38 a.m.)

If the request was the result of a Django form submission, then it is reasonable for POST to be immutable, to ensure the integrity of the data between the form submission and the form validation. However, if the request was not sent via a Django form submission, then POST is mutable as there is no form validation.

mutable = request.POST._mutable
request.POST._mutable = True
request.POST['some_data'] = 'test data'
request.POST._mutable = mutable


In an HttpRequest object, the GET and POST attributes are instances of django.http.QueryDict, a dictionary-like class customized to deal with multiple values for the same key. This is necessary because some HTML form elements, notably <select multiple>, pass multiple values for the same key.

The QueryDicts at request.POST and request.GET will be immutable when accessed in a normal request/response cycle. To get a mutable version you need to use .copy().


request.POST = request.POST.copy()
request.POST['some_key'] = 'some_value'



QueryDict implements all the standard dictionary methods because it's a subclass of dictionary. The exceptions are outlined below.

QueryDict.__init__(query_string=None, mutable=False, encoding=None)[source]

Instantiates a QueryDict object based on query_string.

>>> QueryDict('a=1&a=2&c=3')
<QueryDict: {'a': ['1', '2'], 'c': ['3']}>

If query_string is not passed in, the resulting QueryDict will be empty (it will have no keys or values).

Most QueryDicts you encounter, and in particular those at request.POST and request.GET, will be immutable. If you are instantiating one yourself, you can make it mutable by passing mutable=True to its __init__().

Strings for setting both keys and values will be converted from encoding to unicode. If encoding is not set, it defaults to DEFAULT_CHARSET.


QueryDict.__getitem__(key)

Returns the value for the given key. If the key has more than one value, __getitem__() returns the last value. Raises django.utils.datastructures.MultiValueDictKeyError if the key does not exist. (This is a subclass of Python’s standard KeyError, so you can stick to catching KeyError.)

QueryDict.__setitem__(key, value)[source]

Sets the given key to [value] (a Python list whose single element is value). Note that this, as other dictionary functions that have side effects, can only be called on a mutable QueryDict (such as one that was created via copy()).


QueryDict.__contains__(key)

Returns True if the given key is set. This lets you do, e.g., if "foo" in request.GET.

QueryDict.get(key, default=None)

Uses the same logic as __getitem__() above, with a hook for returning a default value if the key doesn’t exist.

QueryDict.setdefault(key, default=None)[source]

Just like the standard dictionary setdefault() method, except it uses __setitem__() internally.


QueryDict.update(other_dict)

Takes either a QueryDict or standard dictionary. Just like the standard dictionary update() method, except it appends to the current dictionary items rather than replacing them. For example:

>>> q = QueryDict('a=1', mutable=True)
>>> q.update({'a': '2'})
>>> q.getlist('a')
['1', '2']
>>> q['a'] # returns the last
'2'


QueryDict.items()

Just like the standard dictionary items() method, except this uses the same last-value logic as __getitem__(). For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.items()
[('a', '3')]


QueryDict.iteritems()

Just like the standard dictionary iteritems() method. Like QueryDict.items() this uses the same last-value logic as QueryDict.__getitem__().


QueryDict.iterlists()

Like QueryDict.iteritems() except it includes all values, as a list, for each member of the dictionary.


QueryDict.values()

Just like the standard dictionary values() method, except this uses the same last-value logic as __getitem__(). For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.values()


QueryDict.itervalues()

Just like QueryDict.values(), except an iterator.

In addition, QueryDict has the following methods:


QueryDict.copy()

Returns a copy of the object, using copy.deepcopy() from the Python standard library. This copy will be mutable even if the original was not.

QueryDict.getlist(key, default=None)

Returns the data with the requested key, as a Python list. Returns an empty list if the key doesn’t exist and no default value was provided. It’s guaranteed to return a list of some sort unless the default value provided is not a list.

QueryDict.setlist(key, list_)[source]

Sets the given key to list_ (unlike __setitem__()).

QueryDict.appendlist(key, item)[source]

Appends an item to the internal list associated with key.

QueryDict.setlistdefault(key, default_list=None)[source]

Just like setdefault, except it takes a list of values instead of a single value.


QueryDict.lists()

Like items(), except it includes all values, as a list, for each member of the dictionary. For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.lists()
[('a', ['1', '2', '3'])]


QueryDict.pop(key)

Returns a list of values for the given key and removes them from the dictionary. Raises KeyError if the key does not exist. For example:

>>> q = QueryDict('a=1&a=2&a=3', mutable=True)
>>> q.pop('a')
['1', '2', '3']


QueryDict.popitem()

Removes an arbitrary member of the dictionary (since there’s no concept of ordering), and returns a two value tuple containing the key and a list of all values for the key. Raises KeyError when called on an empty dictionary. For example:

>>> q = QueryDict('a=1&a=2&a=3', mutable=True)
>>> q.popitem()
('a', ['1', '2', '3'])


QueryDict.dict()

Returns a dict representation of QueryDict. For every (key, list) pair in QueryDict, dict will have (key, item), where item is one element of the list, using the same logic as QueryDict.__getitem__():

>>> q = QueryDict('a=1&a=3&a=5')
>>> q.dict()
{'a': '5'}


QueryDict.urlencode(safe=None)

Returns a string of the data in query-string format. Example:

>>> q = QueryDict('a=2&b=3&b=5')
>>> q.urlencode()
'a=2&b=3&b=5'

Optionally, urlencode can be passed characters which do not require encoding. For example:

>>> q = QueryDict(mutable=True)
>>> q['next'] = '/a&b/'
>>> q.urlencode(safe='/')
'next=/a%26b/'

+Admin - Hide fields dynamically (April 11, 2016, 7:07 p.m.)

def get_fields(self, request, obj=None):
    fields = admin.ModelAdmin.get_fields(self, request, obj)
    if settings.DEBUG:
        return fields
    return ('parent', 'name_en', 'name_fa', 'content_en', 'content_fa', 'ordering',
            'languages', 'header_image', 'project_thumbnail')

+Error ==> Permission denied when trying to access database after restore (migration) (April 10, 2016, 10:47 p.m.)

Enter the commands in postgresql shell:
psql mohsen_notesdb -c "GRANT ALL ON ALL TABLES IN SCHEMA public to mohsen_notes;"
psql mohsen_notesdb -c "GRANT ALL ON ALL SEQUENCES IN SCHEMA public to mohsen_notes;"
psql mohsen_notesdb -c "GRANT ALL ON ALL FUNCTIONS IN SCHEMA public to mohsen_notes;"

+Admin - Resize Image Signal (April 5, 2016, 11:51 a.m.)

Create a file `signals.py` (any name works) with this content:

from PIL import Image

from django.conf import settings

def resize_image(sender, instance, created, **kwargs):
    if instance.position == 't':
        width = settings.TOP_ADS_WIDTH
        height = settings.TOP_ADS_HEIGHT
    else:
        width = settings.BOTTOM_ADS_WIDTH
        height = settings.BOTTOM_ADS_HEIGHT

    img = Image.open(instance.image.path)
    if img.mode != 'RGB':
        img = img.convert('RGB')
    img.resize((width, height), Image.ANTIALIAS).save(instance.image.path, format='JPEG')


After the model definition in your models.py file, import `resize_image` and:
models.signals.post_save.connect(resize_image, sender=TheModel)

+Admin - Hide model in admin dynamically (Feb. 29, 2016, 9:50 a.m.)

class AccessoryCategoryAdmin(admin.ModelAdmin):
    def get_model_perms(self, request):
        perms = admin.ModelAdmin.get_model_perms(self, request)
        if request.user.username == settings.SECOND_ADMIN:
            return {}
        return perms

+Admin - Display readonly fields based on conditions (Feb. 28, 2016, 3:02 p.m.)

class AccessoryAdmin(admin.ModelAdmin):
    list_display = ('name', 'category', 'price', 'quantity', 'ordering', 'display')
    list_filter = ('category', 'display')

    def get_readonly_fields(self, request, obj=None):
        if request.user.username == settings.SECOND_ADMIN:
            readonly_fields = ('category', 'name', 'image', 'price', 'main_image', 'description', 'ordering', 'url_name')
            return readonly_fields
        return self.readonly_fields

+Form - How to add a star after fields (Feb. 27, 2016, 10:47 p.m.)

Add the `required_css_class` property to Form class like this:

class ProfileForm(forms.Form):
    required_css_class = 'required'

    first_name = forms.CharField(label=_('first name'), max_length=30)
    last_name = forms.CharField(label=_('last name'), max_length=30)
    cellphone_number = forms.CharField(label=_('cellphone'), max_length=20)

Then use the property `label_tag` of form fields to set the titles:
{{ form.first_name.errors }} {{ form.first_name.label_tag }}
{{ form.last_name.errors }} {{ form.last_name.label_tag }}
{{ form.cellphone_number.errors }} {{ form.cellphone_number.label_tag }}

Use it in CSS to style it or add an asterisk:
<style type="text/css">
.required:after {
    content: " *";
    color: red;
}
</style>

+Decorators (Jan. 29, 2016, 4:34 p.m.)

Create a python file named `decorators.py` in the app and write your decorators as follows:

def login_required(view_func):
    def wrap(request, *args, **kwargs):
        if request.user.is_authenticated():
            return view_func(request, *args, **kwargs)
        return render(request, 'issue_tracker/access_denied.html',
                      {'login_required': 'yes'})
    return wrap


from django.utils.functional import wraps

def can_participate_poll(view):
    @wraps(view)
    def inner(request, *args, **kwargs):
        print(kwargs)  # Prints {'qnum': 11, 'qid': 23}
        return view(request, *args, **kwargs)
    return inner

This will print the kwargs which are passed to the view.

@can_participate_poll
def poll_view(request, qid, qnum):
    ...


from django.contrib.auth.decorators import user_passes_test

@user_passes_test(lambda u: u.is_superuser)
def my_view(request):
    ...

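The same wrapping pattern works in plain Python; a sketch using functools.wraps so the view keeps its name (log_kwargs is a hypothetical decorator, and the "request" is a stand-in string):

```python
from functools import wraps

# A plain-Python version of the pattern above; `log_kwargs` is a hypothetical
# decorator that records the kwargs before delegating to the view.
def log_kwargs(view):
    @wraps(view)
    def inner(request, *args, **kwargs):
        inner.last_kwargs = dict(kwargs)
        return view(request, *args, **kwargs)
    return inner

@log_kwargs
def poll_view(request, qid, qnum):
    return (qid, qnum)

result = poll_view('fake-request', qid=23, qnum=11)
```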

+Admin - Change Header Title (Jan. 14, 2016, 8:44 p.m.)

In the main file: = _('YouStone Administration')

+Change app name for admin (Jan. 27, 2016, 11:51 p.m.)

1- Create a python file named `apps.py` in the app:

from django.apps import AppConfig
from django.utils.translation import ugettext_lazy as _

class CourseConfig(AppConfig):
    name = 'course'
    verbose_name = _('course')

2- Edit the __init__.py file within the app:
default_app_config = 'course.apps.CourseConfig'

+Save File/Image (Dec. 1, 2015, 3:16 p.m.)

import uuid
from PIL import Image as PILImage
import imghdr
import os

from django.conf import settings

from manager.home.models import Image

def save_image(img_file, width=0, height=0):
    # Generate a random image name
    img_name = uuid.uuid4().hex + '.' + img_file.name.split('.')[-1]

    # Saving the picture on disk
    img = open(settings.IMG_ROOT + img_name, 'wb')
    for chunk in img_file.chunks():
        img.write(chunk)
    img.close()

    img = open(settings.IMG_ROOT + img_name, 'rb')
    # Is the saved image a valid image file!?
    if not imghdr.what(img) or imghdr.what(img).lower() not in ['jpg', 'jpeg', 'gif', 'png']:
        return {'is_image': False}
    if width or height:
        # Resizing the image
        pil_img = PILImage.open(settings.IMG_ROOT + img_name)

        if pil_img.mode != 'RGB':
            pil_img = pil_img.convert('RGB')
        pil_img.resize((width, height), PILImage.ANTIALIAS).save(
            settings.IMG_ROOT + img_name, format='JPEG')

    # Saving the image location on the database
    img = Image.objects.create(name=img_name)
    return {'is_image': True, 'image': img}

def create_unique_file_name(path, file_name):
    while os.path.exists(path + file_name):
        if '.' in file_name:
            # Insert an underscore before the extension
            name, ext = file_name.rsplit('.', 1)
            file_name = name + '_.' + ext
        else:
            file_name += '_'

    return file_name
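A testable sketch of the same idea, with the existence check injected so it can run against any backend (pass os.path.exists in production; unique_file_name is a variant, not the note's exact helper):

```python
import os

# A testable sketch of the same idea: the existence check is injected, so the
# function can be exercised without touching the filesystem (pass
# os.path.exists in production).
def unique_file_name(file_name, exists=os.path.exists):
    while exists(file_name):
        if '.' in file_name:
            name, ext = file_name.rsplit('.', 1)
            file_name = name + '_.' + ext
        else:
            file_name += '_'
    return file_name

taken = {'photo.jpg', 'photo_.jpg'}
fresh = unique_file_name('photo.jpg', exists=lambda n: n in taken)
```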

+Custom Middleware Class (Nov. 21, 2015, 10:39 p.m.)

Create a file named `middleware.py` in a module and add your middleware like this:

from django.shortcuts import render

from nespresso.models import Settings

class UnderConstruction:
    def process_request(self, request):
        settings_ = Settings.objects.all()
        if settings_ and settings_[0].under_construction:
            return render(request, 'nespresso/under_construction.html')

After defining a middleware, add its dotted path to the MIDDLEWARE_CLASSES setting.


Django 2:

from django.shortcuts import HttpResponseRedirect
from django.urls import reverse

class UnderConstructionMiddleWare:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Do the conditions here; when the site is under construction:
        #     return HttpResponseRedirect(reverse('under_construction:home'))
        return self.get_response(request)

Add its dotted path to the MIDDLEWARE setting.


+Add Action Form to Action (Oct. 13, 2015, 10:48 a.m.)

from django.contrib.admin.helpers import ActionForm
from django.contrib import messages

class ChangeMembershipTypeForm(ActionForm):
    MEMBERSHIP_TYPE = (
        ('1', _('Gold')),
        ('2', _('Silver')),
        ('3', _('Bronze')),
        ('4', _('Basic'))
    )
    membership_type = forms.ChoiceField(choices=MEMBERSHIP_TYPE, label=_('membership type'), required=False)

class CompanyAdmin(admin.ModelAdmin):
    action_form = ChangeMembershipTypeForm
    actions = ['change_membership_type']

    def change_membership_type(self, request, queryset):
        membership_type = request.POST['membership_type']
        queryset.update(membership_type=membership_type)
        self.message_user(request, _('Successfully updated membership type for selected rows.'), messages.SUCCESS)
    change_membership_type.short_description = _('Change Membership Type')

+Admin - Hide action (Oct. 8, 2015, 10:56 a.m.)

class MyAdmin(admin.ModelAdmin):

    def has_delete_permission(self, request, obj=None):
        return False

    def get_actions(self, request):
        actions = super(MyAdmin, self).get_actions(request)
        if 'delete_selected' in actions:
            del actions['delete_selected']
        return actions


def get_actions(self, request):
    actions = admin.ModelAdmin.get_actions(self, request)
    if request.user.username == settings.SECOND_ADMIN:
        return []
    return actions

+Model - Disable the Add and / or Delete action for a specific model (March 10, 2016, 11:02 p.m.)

def has_add_permission(self, request):
    if request.user.username == settings.SECOND_ADMIN:
        return False
    return admin.ModelAdmin.has_add_permission(self, request)

def has_delete_permission(self, request, obj=None):
    if request.user.username == settings.SECOND_ADMIN:
        return False
    return admin.ModelAdmin.has_delete_permission(self, request, obj)

+URLS - Redirect (Oct. 6, 2015, 11:27 a.m.)

from django.views.generic import RedirectView

url(r'^$', RedirectView.as_view(url='/online-calls/'), name='home'),

+Send HTML email using send_mail (Sept. 28, 2015, 4:48 p.m.)

from django.template import loader
from django.core.mail import send_mail

html = loader.render_to_string('nespresso/admin_order_notification.html', {'order': order})
send_mail('Nespresso New Order from - %s' % order.customer.user.get_full_name(),
          '',  # plain-text fallback body
          settings.DEFAULT_FROM_EMAIL,
          list(OrderingEmail.objects.all().values_list('email', flat=True)),
          html_message=html)

+Admin - Many to Many Inline (Sept. 28, 2015, 10:23 a.m.)

class OrderInline(admin.TabularInline):
    model = Order.items.through

class OrderItemAdmin(admin.ModelAdmin):
    inlines = [OrderInline]

class OrderAdmin(admin.ModelAdmin):
    list_display = ('customer', 'get_order_url',)
    exclude = ('items',)
    inlines = [OrderInline]

admin.site.register(Order, OrderAdmin)

+Change list display link in django admin (Sept. 27, 2015, 5:47 p.m.)

In the models.py file:

class Order(models.Model):
    customer = models.ForeignKey(Customer, null=True, on_delete=models.SET_NULL)
    total_price = models.PositiveIntegerField()
    items = models.ManyToManyField(OrderItem)
    date_time = models.DateTimeField(default=now)

    def __str__(self):
        return '%s' % self.customer

    def get_order_url(self):
        return '<a href="%s" target="_blank">%s - %s</a>' % (
            reverse('customer:order', args=(self.id,)), self.customer, self.date_time)
    # In django prior to version 2.0:
    get_order_url.allow_tags = True

    # In django after version 2.0, wrap the returned HTML instead:
    # from django.utils.safestring import mark_safe  # At the top of your file
    # return mark_safe('<a href="#"></a>')


And then in the admin.py file:

class OrderAdmin(admin.ModelAdmin):
    list_display = ('get_order_url',)

+Admin - Override User Form (Sept. 15, 2015, 2:13 p.m.)

from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.forms import UserChangeForm, UserCreationForm
from django import forms

from .models import Supervisor

class SupervisorChangeForm(UserChangeForm):
    class Meta(UserChangeForm.Meta):
        model = Supervisor

class SupervisorCreationForm(UserCreationForm):
    class Meta(UserCreationForm.Meta):
        model = Supervisor

    def clean_username(self):
        username = self.cleaned_data['username']
        try:
            Supervisor.objects.get(username=username)
        except Supervisor.DoesNotExist:
            return username
        raise forms.ValidationError(self.error_messages['duplicate_username'])

class SupervisorAdmin(UserAdmin):
    form = SupervisorChangeForm
    add_form = SupervisorCreationForm
    fieldsets = (
        (None, {'fields': ('username', 'password')}),
        ('Personal info', {'fields': ('first_name', 'last_name', 'email')}),
        ('Permissions', {'fields': ('is_active',)}),
        (None, {'fields': ('allowed_online_calls',)}),
    )
    exclude = ['user_permission']

admin.site.register(Supervisor, SupervisorAdmin)


If you need to override the form fields:

class SupervisorChangeForm(UserChangeForm):

    def __init__(self, *args, **kwargs):
        super(UserChangeForm, self).__init__(*args, **kwargs)
        self.fields['allowed_online_calls'] = forms.ModelMultipleChoiceField(
            queryset=OnlineCall.objects.all())  # adjust to your related model

    class Meta(UserChangeForm.Meta):
        model = Supervisor

+Ajax (Aug. 22, 2015, 3:54 p.m.)

def delete_order(request, p_type, pid):
    if request.is_ajax():
        return JsonResponse({'orders_length': len(request.session['orders']),
                             'total_price': request.session['orders_total_price'],
                             'status': 'deleted'})

    return HttpResponse('rejected', content_type='text/plain')


$(document).ready(function () {
    $('#id_province').change(function () {
        $.ajax({
            type: 'POST',
            url: "{% url 'shared:get-cities' %}",
            data: {
                csrfmiddlewaretoken: '{{ csrf_token }}',
                'province_id': $(this).val()
            },
            dataType: 'json',
            success: function (cities) {
                $('<option value="0"> ---------- </option>').appendTo('#id_city');
                $.each(cities, function (idx, city) {
                    console.log(idx, city);
                    $('<option value="' + city['pk'] + '">' + city['fields']['name'] + '</option>').appendTo($('#id_city'));
                });
            },
            error: function () {
                console.log('Error occurred!');
            }
        });
    });
});


def get_cities(request):
    if request.is_ajax():
        cities = City.objects.filter(province=request.POST['province_id'])
        return HttpResponse(serialize('json', cities, fields=('pk', 'name')))


+Models - Ranges of IntegerFields (Aug. 21, 2015, 10:22 p.m.)

BigIntegerField:
A 64 bit integer, much like an IntegerField except that it is guaranteed to fit numbers from -9223372036854775808 to 9223372036854775807.

IntegerField:
Values from -2147483648 to 2147483647 are safe in all databases supported by Django.

PositiveIntegerField:
Like an IntegerField, but must be either positive or zero (0). Values from 0 to 2147483647 are safe in all databases supported by Django. The value 0 is accepted for backward compatibility reasons.

PositiveSmallIntegerField:
Like a PositiveIntegerField, but only allows values under a certain (database-dependent) point. Values from 0 to 32767 are safe in all databases supported by Django.

SmallIntegerField:
Like an IntegerField, but only allows values under a certain (database-dependent) point. Values from -32768 to 32767 are safe in all databases supported by Django.


+Admin - Adding Action to Export/Download CSV file (Aug. 24, 2015, 1:04 p.m.)

class VirtualOfficeAdmin(admin.ModelAdmin):
    actions = ['download_csv']
    list_display = ('persian_name', 'english_name', 'office_type', 'active')
    list_filter = ('office_type', 'active')

    def download_csv(self, request, queryset):
        import csv
        import io
        from django.http import HttpResponse
        from django.utils.encoding import smart_str

        f = io.StringIO()  # on Python 2: StringIO.StringIO()
        writer = csv.writer(f)
        writer.writerow(
            ["owner", "office type", "persian name", "english name", "cellphone number", "phone number", "address"])
        for s in queryset:
            owner = smart_str(s.owner.get_full_name())
            persian_name = smart_str(s.persian_name)

            # Office Type
            office_type = s.office_type
            if office_type == 're':
                office_type = smart_str(ugettext('Real Estate'))
            elif office_type == 'en':
                office_type = smart_str(ugettext('Engineer'))
            elif office_type == 'ar':
                office_type = smart_str(ugettext('Architect'))

            writer.writerow(
                [owner, office_type, persian_name, s.english_name, '09' + s.owner.username, s.phone_number, s.address])
        response = HttpResponse(f.getvalue(), content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename=stat-info.csv'
        return response

    download_csv.short_description = _("Download CSV file for selected stats.")
from django.contrib.admin.helpers import ActionForm
from django import forms
from django.utils.translation import ugettext_lazy as _
from django.contrib import messages

class ChangeMembershipTypeForm(ActionForm):
    MEMBERSHIP_TYPE = (
        ('1', _('Gold')),
        ('2', _('Silver')),
        ('3', _('Bronze')),
        ('4', _('Basic'))
    )
    membership_type = forms.ChoiceField(choices=MEMBERSHIP_TYPE, label=_('membership type'), required=False)

class CompanyAdmin(admin.ModelAdmin):
    action_form = ChangeMembershipTypeForm
    actions = ['change_membership_type']

    def change_membership_type(self, request, queryset):
        membership_type = request.POST['membership_type']
        queryset.update(membership_type=membership_type)
        self.message_user(request, _('Successfully updated membership type for %d rows') % (queryset.count(),),
                          messages.SUCCESS)
    change_membership_type.short_description = _('Change Membership Type')

+Custom Template Tags & Filters (April 6, 2016, 2:30 p.m.)

1- Create a package named `templatetags` (containing an `` file) in an app.

2- Create a .py file with a desired name. (I usually choose the app name for this python file name.)

3- Write the methods you need in the python file.

4- There is no need to introduce these methods or files in ``.


================= Template Filters Examples =================

from django.template import Library

register = Library()

@register.filter
def trim_value(value):
    value = str(value)
    if value.endswith('.0'):
        return value.replace('.0', '')
    return value


@register.filter
def get_decimal(value):
    if value:
        import decimal
        return str(decimal.Decimal('{0:.4f}'.format(value)))
    return '0'


@register.filter
def get_minutes(total_seconds):
    if total_seconds:
        return round(total_seconds / 60, 2)
    return 0


@register.filter
def get_acd(request):
    if request:
        minutes = get_minutes(request.session['total_seconds'])
        if minutes:
            return round(minutes / request.session['total_calls'], 2)
        return 0
    return 0


@register.filter
def round_values(value, digit):
    if digit and digit.isdigit():
        return round(value, int(digit))
    return value


@register.filter
def calculate_currency_rate(value, invoice):
    from decimal import Decimal
    if invoice.rate_currency:
        return round(Decimal(value) * Decimal(invoice.rate), 2)
    return value


================= Template Tags Examples =================

Important Hint:
You can return anything you like from a tag, including a queryset. However, you can't use a tag inside the {% for %} tag; you can only use a variable there (or a variable passed through a filter).

from django.template import Library, Node, TemplateSyntaxError, Variable

from youstone.models import Ad

register = Library()

class AdsNode(Node):
    def __init__(self, usage, position, province):
        self.usage, self.position, self.province = Variable(usage), Variable(position), Variable(province)

    def render(self, context):
        usage = self.usage.resolve(context)
        position = self.position.resolve(context)
        province = self.province.resolve(context)
        ads = Ad.objects.filter(active=True, usage=usage)
        if position:
            ads = ads.filter(position=position)

        if province:
            print('PROVINCE', province)

        context['ads'] = ads

        return ''

@register.tag
def get_ads(parser, token):
        tag_name, usage, position, province, _as, var_name = token.split_contents()
    except ValueError:
        raise TemplateSyntaxError(
            'get_ads takes 4 positional arguments but %s were given.' % len(token.split_contents()))

    if _as != 'as':
        raise TemplateSyntaxError('get_ads syntax must be "get_ads <usage> <position> <province> as <var_name>."')

    return AdsNode(usage, position, province)


Then you can use the template tag like this in the template:
{% get_ads usage position province as ads %}
{% for ad in ads %}

{% endfor %}


+Resize Image (Aug. 9, 2015, 10:34 p.m.)

Create a python module named `` and copy & paste this snippet:


from PIL import Image

from django.conf import settings

def resize_image(sender, instance, created, **kwargs):
    width = settings.SLIDER_WIDTH
    height = settings.SLIDER_HEIGHT

    img =
    if img.mode != 'RGB':
        img = img.convert('RGB')
    img.resize((width, height), Image.ANTIALIAS).save(instance.image.path, format='JPEG')

Note that resize() returns a resized copy of an image. It doesn't modify the original.
So do not write code like this:
img.resize((width, height), Image.ANTIALIAS), format='JPEG')


In the settings:
# Slider Image Size


from resize_image import resize_image

class Slider(models.Model):

models.signals.post_save.connect(resize_image, sender=Slider)


+Extending User Model using OneToOne relationship (Aug. 5, 2015, 4:43 p.m.)

from django.db.models.signals import post_save
from django.conf import settings

class Customer(models.Model):
    user = models.OneToOneField(settings.AUTH_USER_MODEL, unique=True, primary_key=True)

def create_customer(sender, instance, created, **kwargs):
    if created:
        Customer.objects.create(user=instance)

post_save.connect(create_customer, sender=settings.AUTH_USER_MODEL)

+Admin - Overriding admin ModelForm (Nov. 30, 2015, 3:49 p.m.)

class MachineCompareForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super(MachineCompareForm, self).__init__(*args, **kwargs)
        self.model_fields = [['field_%s' %, title.feature,] for title in CompareTitle.objects.all()]
        for field in self.model_fields:
            self.base_fields[field[0]] = forms.CharField(max_length=400, label='%s' % field[1], required=False)
            self.fields[field[0]] = forms.CharField(max_length=400, label='%s' % field[1], required=False)
            feature = CompareFeature.objects.filter(machine=self.instance, feature=field[2])
            if feature:
                self.base_fields[field[0]].initial = feature[0].value
                self.fields[field[0]].initial = feature[0].value

    def save(self, commit=True):
        instance = super(MachineCompareForm, self).save(commit=False)
        for field in self.model_fields:
            if CompareFeature.objects.filter(machine=self.cleaned_data['machine'], feature=field[2]):
                CompareFeature.objects.filter(machine=self.cleaned_data['machine'], feature=field[2]).update(
                    value=self.cleaned_data[field[0]])

        if commit:
        return instance

    class Meta:
        model = MachineCompare
        exclude = []

class MachineCompareAdmin(admin.ModelAdmin):
    form = MachineCompareForm

    def get_form(self, request, obj=None, **kwargs):
        return MachineCompareForm


class SpecialPageAdmin(admin.ModelAdmin):
    list_display = ('company', 'url_name', 'active',)
    search_fields = ('company__name', 'url_name')
    form = SpecialPageForm

    def get_form(self, request, obj=None, **kwargs):
        return SpecialPageForm

class SpecialPageForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super(SpecialPageForm, self).__init__(*args, **kwargs)
        for i in range(1, 16):
            self.fields['image-%s' % i] = forms.ImageField(label='%s %s' % (_('Image'), i))
            self.base_fields['image-%s' % i] = forms.ImageField(label='%s %s' % (_('Image'), i))

    class Meta:
        model = SpecialPage
        exclude = []


+Model - Overriding delete method in model (Nov. 28, 2015, 12:29 p.m.)

from django.db.models.signals import pre_delete
from django.dispatch.dispatcher import receiver

@receiver(pre_delete, sender=MyModel)
def _mymodel_delete(sender, instance, **kwargs):
    print("deleting")

+Union of querysets (July 20, 2015, 5:14 p.m.)

import itertools

result = itertools.chain(qs1, qs2, qs3, qs4)


records = query1 | query2


+Views - Concatenating querysets and converting to JSON (July 17, 2015, 9:05 p.m.)

from itertools import chain

combined = list(chain(collectionA, collectionB))
json = serializers.serialize('json', combined)


final_queryset = (queryset1 | queryset2)

+Template - nbsp template tag (Replace usual spaces in string by non breaking spaces) (July 9, 2015, 2:45 a.m.)

from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.filter
def nbsp(value):
    return mark_safe("&nbsp;".join(value.split(' ')))

In the template:
{% load nbsp %}

{{ user.full_name|nbsp }}


{{ note.note|nbsp|linebreaksbr }}

+Views - Delete old uploaded file/image before saving the new one (July 8, 2015, 8:24 p.m.)

import os
from django.conf import settings

    os.remove(settings.BASE_DIR +
except (OSError, IOError):
    pass
+Admin - list_display with a callable (Jan. 3, 2016, 10:17 a.m.)

class ExcelFile(models.Model):
    file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])
    companies = models.ManyToManyField(Company, verbose_name=_('companies'), blank=True)
    business = models.ForeignKey(BusinessTitle, verbose_name=_('business'))

    def __str__(self):
        return '%s' %

    def get_file_name(self):
        return
    get_file_name.short_description = _('File Name')

class ExcelFileAdmin(admin.ModelAdmin):
    list_display = ['get_file_name', 'business']

    def change_order(self):
        return '<a href="review/">%s</a>' % _('Edit Order')
    change_order.short_description = _('Edit Order')
    change_order.allow_tags = True

+Admin - Hide fields (July 8, 2015, 1:31 p.m.)

from django.contrib import admin

from .models import ExcelFile

class ExcelFileAdmin(admin.ModelAdmin):
    exclude = ['companies'], ExcelFileAdmin)

+Model - Validators (Jan. 28, 2016, 12:03 a.m.)

from django.core.exceptions import ValidationError

def validate_excel_file(file):
    import xlrd
    except xlrd.XLRDError:
        raise ValidationError(_('%s is not an Excel File') %

class ExcelFile(models.Model):
    excel_file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])


from django.core import validators

mobile_number = models.CharField(
    _('mobile number'),
    validators=[validators.RegexValidator(
        message=_('Please enter a valid mobile number.'))])


+Admin - Allow only one instance of object to be created (July 8, 2015, 12:41 p.m.)

def validate_only_one_instance(obj):
    model = obj.__class__
    if model.objects.count() > 0 and != model.objects.get().id:
        raise ValidationError(
            _('Can only create 1 %s instance') % model.__name__)

class Settings(models.Model):
    banner = models.ImageField(_('banner'), upload_to='images/machines/settings',
                               help_text=_('The required image size is 960px by 250px.'))

    def __str__(self):
        return '%s' % _('Settings')

    def clean(self):
        validate_only_one_instance(self)

--------------------------- ANOTHER ONE ---------------------------------------------

class ExcelFile(models.Model):
    excel_file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])
    companies = models.ManyToManyField(Company, verbose_name=_('companies'), blank=True)
    business = models.ForeignKey(BusinessTitle, verbose_name=_('business'))

    def __str__(self):
        return '%s' %

    def clean(self):
        model = self.__class__
        validation_error = _("Can only create 1 %s instance") % model.__name__
        business = model.objects.filter(
        # If the user is updating/editing an object
        if and business and != business[0].pk:
            raise ValidationError(validation_error)
        # If the user is inserting/creating an object
        if not and business:
            raise ValidationError(validation_error)

+Errors (Aug. 13, 2015, 12:05 a.m.)

_imagingft C module is not installed:
I got this error when django-simple-captcha tries to load the image.

1- apt-get install libfreetype6-dev
2- pip uninstall pillow
3- pip install pillow
4- restart the project

If you still got the same error, you need to look if the file has even been created at all!?
1- sudo updatedb
2- locate _imagingft

If the file exists (and probably with a name (a little bit) different with what in error message looks for), you need to rename it:
The path and file name might be something like this:
You need to rename it to:
And restart the project.

If the file is not found with locate command in the virtualenv you're working on, try to re-install pillow (even download the most updated version from, and install it).
Anyway, you need to install it in a way, to get that file even with a different name.
decoder jpeg not available
sudo apt-get install libjpeg-dev
pip install -I pillow

sudo ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib
sudo ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib
sudo ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib

Or for Ubuntu 32bit:

sudo ln -s /usr/lib/i386-linux-gnu/ /usr/lib/
sudo ln -s /usr/lib/i386-linux-gnu/ /usr/lib/
sudo ln -s /usr/lib/i386-linux-gnu/ /usr/lib/

pip install -I pillow
django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet:

    from django.conf import settings
    from django.contrib.auth import get_user_model
    User = settings.AUTH_USER_MODEL
except ImportError:
    from django.contrib.auth.models import User

+Speeding Up Django Links (June 18, 2015, 12:41 p.m.)

+Django Analytical (June 7, 2015, 4:52 p.m.)

1- easy_install django-analytical

2- Add 'analytical' to INSTALLED_APPS in the settings.

3- In the base.html:
{% load analytical %}
<!DOCTYPE ... >
{% analytical_head_top %}


{% analytical_head_bottom %}
{% analytical_body_top %}


{% analytical_body_bottom %}

4-Create an account on this site:
I have already registered: Username is Mohsen_Hassani and the password MohseN4301

5- There are some javascript codes which should be taken from the site into your template. They are like:

This should be before the </body> </html> tags:
<script src="//" type="text/javascript"></script>
<script type="text/javascript">try{ clicky.init(100851091); }catch(e){}</script>
<noscript><p><img alt="Clicky" width="1" height="1" src="//" /></p></noscript>

+Templates - Do Mathematic (Jan. 14, 2016, 2:14 p.m.)

Using Django’s widthratio template tag for multiplication & division.

I find it a bit odd that Django has a template filter for adding values, but none for multiplication and division. It’s fairly straightforward to add your own math tags or filters, but why bother if you can use the built-in one for what you need?

Take a closer look at the widthratio template tag. Given {% widthratio a b c %} it computes (a/b)*c

So, if you want to do multiplication, all you have to do is pass b=1, and the result will be a*c.

Of course, you can do division by passing c=1. (a=1 would also work, but has possible rounding side effects)

Note: The results are rounded to an integer before returning, so this may have marginal utility for many cases.

So, in summary:

to compute A*B: {% widthratio A 1 B %}
to compute A/B: {% widthratio A B 1 %}

And, since add is a filter and not a tag, you can always do crazy stuff like:

compute A^2: {% widthratio A 1 A %}
compute (A+B)^2: {% widthratio A|add:B 1 A|add:B %}
compute (A+B) * (C+D): {% widthratio A|add:B 1 C|add:D %}
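Since {% widthratio a b c %} just computes round((a/b)*c), the identities above can be sanity-checked in plain Python (this mirrors the tag's arithmetic, not Django's exact code):

```python
# widthratio computes round((a / b) * c): b=1 gives a*c, c=1 gives a/b rounded.
def widthratio(a, b, c):
    return int(round(a / b * c))

print(widthratio(7, 1, 6))  # A*B -> 42
print(widthratio(7, 2, 1))  # A/B -> 4 (rounded to an integer)
print(widthratio(5, 1, 5))  # A^2 -> 25
```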

+URLS - Allow entering dot (.) in url pattern (Dec. 2, 2014, 10:03 p.m.)
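A typical way to allow dots in a URL pattern is a character class such as `[\w.@+-]+` (the same class Django uses for usernames). A quick check of that regex in plain Python — the `username` group name is just illustrative:

```python
import re

# A URL-pattern-style regex whose character class explicitly allows dots.
pattern = re.compile(r'^(?P<username>[\w.@+-]+)/$')

print(bool(pattern.match('john.smith/')))   # True
print(bool(pattern.match('john/smith/')))   # False: '/' is not in the class
```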


+Change the value of QueryDict (Nov. 18, 2014, 2:17 a.m.)

If you change the value of request.POST (which is a QueryDict) you will get an error:
“This QueryDict instance is immutable”

So this is how you should change it: (the whole of it or any item inside)
mutable = request.POST._mutable
request.POST._mutable = True
request.session['search_criteria']['region'] = rid
request.POST = request.session['search_criteria']
request.POST._mutable = mutable

+Templates - Conditional Extend (Sept. 22, 2014, 11:45 a.m.)

{% extends supervising|yesno:"supervising/tasks.html,desktop/tasks_list.html" %}

{% extends variable %} uses the value of variable. If the variable evaluates to a string, Django will use that string as the name of the parent template. If the variable evaluates to a Template object, Django will use that object as the parent template.

+Adding CSS class in a ModelForm (Sept. 13, 2014, 1:15 a.m.)

self.fields['specie'].widget.attrs['class'] = 'autocomplete'

+Views - JSON object serialization (AJAX) (Jan. 3, 2016, 3:03 p.m.)

from django.core import serializers

foos = Foo.objects.all()
data = serializers.serialize('json', foos)

return HttpResponse(data, content_type='application/json')


import json

def json_response(something):
return HttpResponse(json.dumps(something), content_type='application/javascript; charset=UTF-8')


from django.core.serializers.json import DjangoJSONEncoder

def categories_view(request):
categories = Category.objects.annotate(notes_count=Count('notes__pk')).values('pk', 'name', 'notes_count')
data = json.dumps(list(categories), cls=DjangoJSONEncoder)
return HttpResponse(data, content_type='application/json')


data = serializers.serialize('xml', SomeModel.objects.all(), fields=('name','size'))


all_objects = list(Restaurant.objects.all()) + list(Place.objects.all())
data = serializers.serialize('xml', all_objects)

For Django 1.7 +

from django.http import JsonResponse

return JsonResponse({'foo':'bar'})


Serializing non-dictionary objects
In order to serialize objects other than dict you must set the safe parameter to False:

return JsonResponse([1, 2, 3], safe=False)
Without passing safe=False, a TypeError will be raised.



indexed_companies = Company.objects.filter(index=True, business_group_id=request.POST['bid'])
indexed_companies = serialize('json', indexed_companies)

companies = Company.objects.filter(business_group_id=request.POST['bid'])
companies = serialize('json', filter_companies(companies, request.POST))
return JsonResponse({'indexed_companies': indexed_companies, 'companies': companies})


$('.search-forms').submit(function(e) {
    e.preventDefault();
        type: 'POST',
        url: $(this).attr("action"),
        data: $(this).serialize(),
        dataType: 'json',
        success: function(json) {
            var indexed_companies = $.parseJSON(json['indexed_companies']);
            var companies = $.parseJSON(json['companies']);
            $.each(indexed_companies, function(idx, indexed_company) {
                $('<tr>').appendTo('#indexed-members table');
                $('<td>' + (idx + 1) + '</td>').appendTo('#indexed-members table tr:last-child');
                $('<td>' + indexed_company.fields.province + '</td>').appendTo('#indexed-members table tr:last-child');
                $('<td>' + indexed_company.fields.manager + '</td>').appendTo('#indexed-members table tr:last-child');
                $('<td>' + + '</td>').appendTo('#indexed-members table tr:last-child');
        error: function() {
            $('#search-preloader').css('display', 'none');
            console.log('{% trans "Problem with connecting to the server" %}.');


If you need to serialize some fields of an object, you can not use this:
return JsonResponse({'products': serialize('json', Coffee.objects.all().values('id', 'name'))})

The correct way is:
return JsonResponse({'products': serialize('json', Coffee.objects.all(), fields=('id', 'name'))})


+Models - Overriding save method (Aug. 21, 2014, 1:03 p.m.)

from tastypie.utils.timezone import now
from django.contrib.auth.models import User
from django.db import models
from django.utils.text import slugify

class Entry(models.Model):
user = models.ForeignKey(User)
pub_date = models.DateTimeField(default=now)
title = models.CharField(max_length=200)
slug = models.SlugField()
body = models.TextField()

def __unicode__(self):
return self.title

def save(self, *args, **kwargs):
# For automatic slug generation.
if not self.slug:
self.slug = slugify(self.title)[:50]

return super(Entry, self).save(*args, **kwargs)
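For reference, slugify lowercases the text, strips punctuation and collapses whitespace into hyphens. A rough pure-Python approximation (not Django's actual implementation, which also handles unicode normalization):

```python
import re

def rough_slugify(value):
    """Rough approximation of django.utils.text.slugify for ASCII input."""
    value = value.lower().strip()
    value = re.sub(r'[^\w\s-]', '', value)  # drop punctuation
    return re.sub(r'[\s_-]+', '-', value)   # collapse separators into '-'

print(rough_slugify('Hello, Django World!'))  # hello-django-world
```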

+auto_now / auto_now_add (Aug. 21, 2014, 1:02 p.m.)

created_at = models.DateTimeField(_('created at'), auto_now_add=True)
updated_at = models.DateTimeField(_('updated at'), auto_now=True)


class Blog(models.Model):
title = models.CharField(max_length=100)
added = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)

auto_now_add tells Django that when you add a new row, you want the current date & time stored. auto_now tells Django to store the current date & time EVERY time the record is saved.



Automatically set the field to now every time the object is saved. Useful for “last-modified” timestamps.

The field is only automatically updated when calling The field isn't updated when making updates to other fields in other ways such as QuerySet.update(), though you can specify a custom value for the field in an update like that.



Automatically set the field to now when the object is first created. Useful for the creation of timestamps.

Even if you set a value for this field when creating the object, it will be ignored. If you want to be able to modify this field, set the following instead of auto_now_add=True:

- For DateField: -> from
- For DateTimeField: default=timezone.now -> from django.utils.timezone.now()


+Query - Call a field name by dynamic values (Aug. 21, 2014, 12:58 p.m.)

properties = Properties.objects.filter(**{'%s__age_status' % p_type: request.POST['age_status']})
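The `**{...}` construction simply builds the keyword arguments as a dict before the call; plain Python shows what filter() ends up receiving (the `fake_filter` stand-in and the values are illustrative):

```python
# Build a dynamic lookup key, then splat it as keyword arguments.
p_type = 'owner'
lookup = {'%s__age_status' % p_type: 'adult'}

def fake_filter(**kwargs):
    """Stand-in for Properties.objects.filter, just echoes its kwargs."""
    return kwargs

print(fake_filter(**lookup))  # {'owner__age_status': 'adult'}
```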

+Settings - Set a settings for shell (Aug. 21, 2014, 12:56 p.m.)

python shell --settings=nimkatonilne.settings

+Admin - Deleting the file/image on deleting an object (Aug. 21, 2014, 12:54 p.m.)

1- Create a file named `` with the following contents:

import os

from django.conf import settings

def clean_up(sender, instance, *args, **kwargs):
    for field in sender._meta.get_fields():
        field_types = ['FileBrowseField', 'ImageField', 'FileField']
        if field.__class__.__name__ in field_types:
                os.remove(settings.MEDIA_ROOT + str(getattr(instance,
            except (OSError, IOError):
                pass

2- Open the file:

Import the `clean_up` function from the `clean_up` module and add the following line at the bottom of each model having a FileField or ImageField or FileBrowseField:

models.signals.post_delete.connect(clean_up, sender=Ads)

+URLS - Redirect to a URL in (Aug. 21, 2014, 12:53 p.m.)

from django.views.generic import RedirectView
from django.core.urlresolvers import reverse_lazy

(r'^one/$', RedirectView.as_view(url='/another/')),


url(r'^some-page/$', RedirectView.as_view(url=reverse_lazy('my_named_pattern'))),

+Forms - Overriding and manipulating fields (Nov. 30, 2015, 12:35 p.m.)

class CheckoutForm(forms.ModelForm):

def __init__(self, request, *args, **kwargs):
super(CheckoutForm, self).__init__(*args, **kwargs)
self.request = request

class Meta:
model = Address
exclude = ('fax_number',)


class ProfileForm(forms.Form):
required_css_class = 'required'


def __init__(self, request, *args, **kwargs):
super(InstituteRegistrationForm, self).__init__(*args, **kwargs)
self.request = request


if request.user.cellphone:
self.fields['cell_phone_number'].widget.attrs['readonly'] = 'true'


self.fields['email'].widget.attrs['readonly'] = 'true'


self.fields['city'].queryset = City.objects.filter(province__allow_delete=False)
self.fields['city'].initial = '1'


self.fields['first_name'].required = True
self.fields['first_name'].widget.attrs['required'] = True


for field in self.fields.values():
field.widget.attrs['required'] = True
field.required = True


self.fields['national_team'].empty_label = None


self.fields['allowed_online_calls'] = forms.ModelMultipleChoiceField(


Hide a field:
self.fields['state'].widget = forms.HiddenInput()


class UpdateShare(forms.ModelForm):
class Meta:
model = ManualEntries
exclude = ['dt']
widgets = {
'description': forms.Textarea(attrs={'rows': 3}),


class QuestionnaireForm(forms.ModelForm):
class Meta:
model = Questionnaire
fields = ['code', 'title', 'grades', 'description', 'enable']
widgets = {
'grades': forms.CheckboxSelectMultiple


self.fields['amount'].help_text = 'AAA'


Change ModelChoiceField items text:

self.fields['parent'].label_from_instance = lambda obj: obj.other_name


def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)


When passing instance in render, like:
{'form': ProfileForm(instance=request.user)}

if you needed to change values in __init__ of ModelForm use "self.initial":

self.initial['first_name'] = 'aa'


class CertificateForm(forms.ModelForm):

def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
date = self['date'].value()
if date and not isinstance(date, str):
self.initial['date'] = '-'.join([str(x) for x in list(get_persian_date(date).values())])


Change max_length validator error message:

caller_id.validators[-1].message = _('The text is too long.')


+Docker behind socks proxy (Oct. 24, 2018, 2:59 p.m.)

1- mkdir -p /etc/systemd/system/docker.service.d

2- vim /etc/systemd/system/docker.service.d/http-proxy.conf
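3- Put the proxy environment variables into that file. A minimal sketch — the socks5:// address and port 9050 are example values to adjust; Docker reads the standard HTTP_PROXY/HTTPS_PROXY/NO_PROXY variables here:

```ini
[Service]
Environment="HTTP_PROXY=socks5://"
Environment="HTTPS_PROXY=socks5://"
```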


4- systemctl daemon-reload

5- systemctl restart docker

+Commands (Oct. 24, 2018, 3:22 p.m.)

docker run <image>
This command will download the image, if it is not already present, and runs it as a container.


docker start <name | id>


Get the process ID of the container
docker inspect container | grep Pid


Stop a running container:
docker stop ContainerID


We can see the ports by running:
docker port InstanceID


See the top processes within a container:
docker top ContainerID


docker images

docker images -q
-q tells the Docker command to return the Image IDs only.


docker inspect <image>
The output will show detailed information on the Image.


docker ps [-a include stopped containers]
docker container ls


Statistics of a running container:
docker stats ContainerID
The output will show the CPU and Memory utilization of the Container.


Delete a container:
docker rm ContainerID


Pause the processes in a running container:
docker pause ContainerID
The above command will pause the processes in a running container.


docker unpause ContainerID


Kill the processes in a running container
docker kill ContainerID


Attach to a running container:
docker attach ContainerID

Attaching like this may hang/freeze or show no output. Use the following command instead:
docker exec -it <container-id> bash


docker pull gitlab/gitlab-ce


Listing All Docker Networks:
docker network ls


Inspecting a Docker network:
If you want to see more details on the network associated with Docker, you can use the Docker network inspect command.
docker network inspect networkname
docker network inspect bridge


docker logs -f <name>


--detach --name


See all the commands that were run with an image via a container:
docker history ImageID


Removing Docker Images:
docker rmi ImageID


Set the hostname inside the container:
docker run --hostname <hostname> <image>

docker run -it centos /bin/bash
The -it argument is used to mention that we want to run in interactive tty mode.
/bin/bash is used to run the bash shell once CentOS is up and running.


docker run -p 8080:8080 -p 50000:50000 jenkins

The -p is used to map the port number of the internal Docker image to our main Ubuntu server so that we can access the container accordingly.


Tell Docker to expose the HTTP and SSH ports from GitLab on ports 30080 and 30022, respectively.

--publish 30080:80

--publish 30022:22


See information on the Docker running on the system:

docker info

Return Value

The output will provide the various details of the Docker installed on the system such as:

Number of containers
Number of images
The storage driver used by Docker
The root directory used by Docker
The execution driver used by Docker


Stop all running containers:
docker stop $(docker ps -a -q)

Delete all stopped containers:
docker rm $(docker ps -a -q)


+Docker Compose (Oct. 24, 2018, 8:31 p.m.)

1- curl -L "$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

2- chmod +x /usr/local/bin/docker-compose

+Difference between image and container (Dec. 14, 2018, 1:02 a.m.)

An instance of an image is called a container. When the image is started, you have a running container of this image. You can have many running containers of the same image.

You can see all your images with "docker images" whereas you can see your running containers with "docker ps" (and you can see all containers with docker ps -a).

+Command Examples - docker run (Dec. 14, 2018, 1:36 a.m.)

docker run -v /full/path/to/html/directory:/usr/share/nginx/html:ro -p 8080:80 -d nginx

-v /full/path/to/html/directory:/usr/share/nginx/html:ro
Maps the directory holding our web page to the required location in the image. The ro field instructs Docker to mount it in read-only mode. It’s best to pass Docker the full paths when specifying host directories.

-p 8080:80 maps network service port 80 in the container to 8080 on our host system.

-d detaches the container from our command line session. We don’t want to interact with this container.


docker run --name foo -d -p 8080:80 mynginx

--name foo gives the container a name, rather than one of the randomly assigned names.


docker run busybox echo "hello from busybox"


-P will publish all exposed ports to random ports

We can see the ports by running:
docker port InstanceID


docker run -d -p 80:80 my_image service nginx start


docker run -d -p 80:80 my_image nginx -g 'daemon off;'


Restart policies

on-failure[:max-retries]
Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.

always
Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.

unless-stopped
Always restart the container regardless of the exit status, including on daemon startup, except if the container was put into a stopped state before the Docker daemon was stopped.


VOLUME (shared filesystems):

-v, --volume=[host-src:]container-dest[:<options>]: Bind mount a volume.
The comma-delimited `options` are [rw|ro], [z|Z], [[r]shared|[r]slave|[r]private], and [nocopy].

The 'host-src' is an absolute path or a name value.
If neither 'rw' or 'ro' is specified then the volume is mounted in read-write mode.

The `nocopy` mode is used to disable automatically copying the requested volume path in the container to the volume storage location.
For named volumes, `copy` is the default mode. Copy modes are not supported for bind-mounted volumes.

--volumes-from="": Mount all volumes from the given container(s)



-u="", --user="": Sets the username or UID used and optionally the groupname or GID for the specified command.



The default working directory for running binaries within a container is the root directory (/), but the developer can set a different default with the Dockerfile WORKDIR command. The operator can override this with:

-w="": Working directory inside the container


docker run \
--rm \
--detach \
--env KEY=VALUE \
--ip \
--publish 3000:3000 \
--volume my_volume \
--name my_container \
--tty --interactive \
--volume /my_volume \
--workdir /app \
IMAGE bash


--rm Automatically remove the container when it exits. The alternative would be to manually stop it and then remove it.


+Managing Ports (Dec. 14, 2018, 1:24 a.m.)

In Docker, the containers themselves can have applications running on ports. When you run a container, if you want to access the application in the container via a port number, you need to map the port number of the container to the port number of the Docker host.

To understand what ports are exposed by the container, you should use the Docker inspect command to inspect the image:
docker inspect jenkins

The output of the inspect command gives a JSON output. If we observe the output, we can see that there is a section of "ExposedPorts" and see that there are two ports mentioned. One is the data port of 8080 and the other is the control port of 50000.

To run Jenkins and map the ports, you need to change the docker run command and add the -p option which specifies the port mapping. So, you need to run the following command:

docker run -p 8080:8080 -p 50000:50000 jenkins

The left-hand side of the port number mapping is the Docker host port to map to and the right-hand side is the Docker container port number.

+Docker Network (Dec. 14, 2018, 2:03 a.m.)

When docker is installed, it creates three networks automatically.
docker network ls

c2c695315b3a bridge bridge local
a875bec5d6fd host host local
ead0e804a67b none null local


The bridge network is the network in which containers are run by default. So that means when we run a container, it runs in this bridge network. To validate this, let's inspect the network:

docker network inspect bridge


You can see that our container is listed under the Containers section in the output, along with the IP address this container has been allotted.


Defining our own networks:

docker network create my-network-net
docker run -d --name es --net my-network-net -p 9200:9200 -p 9300:9300


+When to use --hostname in docker? (Dec. 15, 2018, 2:55 a.m.)

The --hostname flag only changes the hostname inside your container. This may be needed if your application expects a specific value for the hostname. It does not change DNS outside of docker, nor does it change the networking isolation, so it will not allow others to connect to the container with that name.

You can use the container name or the container's (short, 12 character) id to connect from container to container with docker's embedded dns as long as you have both containers on the same network and that network is not the default bridge.

+Installation (Feb. 28, 2017, 10:31 a.m.)


1- Install packages to allow apt to use a repository over HTTPS:
apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common

2- Add Docker’s official GPG key:
curl -fsSL | apt-key add -

3- Use the following command to set up the stable repository:
add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable"

4- apt update

5- apt install docker-ce


Install Community Edition (CE) on Fedora/CentOS (dnf):

1- Install the dnf-plugins-core package which provides the commands to manage your DNF repositories from the command line.
dnf -y install dnf-plugins-core

2- Use the following command to set up the stable repository. (You might need a proxy)
proxychains4 dnf config-manager --add-repo

3- Install the latest version of Docker CE: (You might need a proxy)
dnf install docker-ce


+Introduction (Feb. 27, 2017, 12:30 p.m.)

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they're running on and only requires applications be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.
Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.
Docker can be integrated into various infrastructure tools, including Amazon Web Services, Ansible, CFEngine, Chef, Google Cloud Platform, IBM Bluemix, HPE Helion Stackato, Jelastic, Jenkins, Kubernetes, Microsoft Azure, OpenStack Nova, OpenSVC, Oracle Container Cloud Service, Puppet, Salt, Vagrant, and VMware vSphere Integrated Containers.

ELK Stack
+beats (May 19, 2019, 9:05 p.m.)

This input plugin enables Logstash to receive events from the Elastic Beats framework.

The following example shows how to configure Logstash to listen on port 5044 for incoming Beats connections and to index into Elasticsearch:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
+Difference between Logstash and Beats (May 19, 2019, 9:01 p.m.)

Beats are lightweight data shippers that you install as agents on your servers to send specific types of operational data to Elasticsearch. Beats have a small footprint and use fewer system resources than Logstash.

Logstash has a larger footprint, but provides a broad array of input, filter, and output plugins for collecting, enriching, and transforming data from a variety of sources.

+Elasticsearch cat APIs (April 22, 2019, 1:24 a.m.)

To check the cluster health, we will be using the _cat API.

cat APIs

JSON is great… for computers. Even if it’s pretty-printed, trying to find relationships in the data is tedious. Human eyes, especially when looking at a terminal, need compact and aligned text. The cat API aims to meet this need.


curl ''



List All Indices:
curl ''


+Installation (April 19, 2019, 10:25 p.m.)

apt install openjdk-8-jdk apt-transport-https curl nginx libpcre3-dev



1- wget -qO - | sudo apt-key add -

2- echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

3- apt update

4- apt install elasticsearch

5- Uncomment the following options in the file "/etc/elasticsearch/elasticsearch.yml": localhost
http.port: 9200

6- Restart and enable the service:
systemctl restart elasticsearch
systemctl enable elasticsearch

7- Check the status of the Elasticsearch server (it takes some time to start listening):
curl -X GET http://localhost:9200



1- apt install kibana

2- systemctl enable kibana

3- Create a basic-auth user for Kibana:
echo "admin:$(openssl passwd -apr1 my_password)" | sudo tee -a /etc/nginx/htpasswd.kibana

4- vim /etc/nginx/sites-enabled/kibana
server {
  listen 80;

  auth_basic "Restricted Access";
  auth_basic_user_file /etc/nginx/htpasswd.kibana;

  location / {
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}

5- systemctl restart nginx



1- apt install logstash

2- Create a logstash filter config file in "/etc/logstash/conf.d/logstash.conf", with this content:
input {
  tcp {
    port => 4300  # optional port number
    codec => json
  }
}

filter { }

output {
  elasticsearch { }
  stdout { }  # or stdout { codec => json } in case you want to see the data in logs for debugging
}

3- Restart logstash services:
systemctl restart logstash
systemctl enable logstash


For debugging:
tcpdump -nti any port 4300
tail -f /var/log/syslog
tail -f /var/log/logstash/logstash*.log


+Introduction / Definitions (April 19, 2019, 10:24 p.m.)

Bottom layer: Logstash + Beats

Middle layer: Elasticsearch

Top layer: Kibana


"ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana.

Elasticsearch is a search and analytics engine.

Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch.

Kibana lets users visualize data with charts and graphs in Elasticsearch.


Elasticsearch is a distributed, RESTful search and analytics NoSQL engine based on Lucene.

Logstash is a light-weight data processing pipeline for managing events and logs from a wide variety of sources.

Kibana is a web application for visualizing data that works on top of Elasticsearch.


The Elastic Stack is the next evolution of the ELK Stack.


+Make body shrinkable and extensible (Sept. 18, 2019, 1:05 a.m.)

.holy-grail, .holy-grail-body {
  display: flex;
  flex: 1 1 auto;
  flex-direction: column;
}

+Parsing complex JSON in Flutter (March 29, 2020, 6:53 p.m.)

+BloC Description (March 26, 2020, 8:27 p.m.)

Think of a stream as a pipe filled with water that flows from the A-side to the B-side.

Let’s say you are in side-A and want to send some colorful tiny children’s balls to side-B. You sink these balls one after the other inside the pipe and the water will transport them to side-B one by one in a stream fashion.

The balls exit the pipe from side-B. Let’s say they fall and make a noise, and there is another person on side-B waiting for the balls. Because this person doesn’t know exactly when a ball will arrive, he decides to read a newspaper. It’s only when he hears the sound of a ball that he becomes aware of its arrival. At that time, he can catch the ball and make use of it.

In real BloC:
- The pipe is StreamController
- The flow of water is
- The action of pushing balls from side-A is StreamController.sink
- The colorful children’s balls are data of any type
- The person on side-B listening to the balls falling is


For each of your variables you need to define five things:

1- Your variable name
2- StreamController
3- Stream
4- Sink
5- Close StreamController


class YourBloc {

  var yourVar;

  final yourVarController = StreamController<yourType>();

  Stream<yourType> get yourVarStream =>;

  StreamSink<yourType> get yourVarSink => yourVarController.sink;

  yourMethod() {
    // some logic stuff
    yourVar = yourNewValue;
    yourVarSink.add(yourVar);
  }

  dispose() {
    yourVarController.close();
  }
}


+setState method description (March 26, 2020, 8:13 p.m.)

Flutter is declarative. This means that Flutter rebuilds its user interface (UI) from scratch to reflect the current state of your app each time the setState() method is called.

+BLoC (March 26, 2020, 2:28 p.m.)

BLoC stands for Business Logic Component. It was created by Google and introduced at Google I/O 2018. It is built on Streams and Reactive Programming.

These are the classes that act as a layer between the data and the UI components. The BLoC listens to events passed to it, and after receiving a response, it emits an appropriate state.



StreamController: Allows sending data, error and done events on its stream. This class can be used to create a simple stream that others can listen on, and to push events to that stream.


The BloC pattern is often used with the third-party library RxDart because it has many features not available in the standard Dart StreamController.


+Useful Links (March 21, 2020, 11 p.m.)

Hiding the Bottom Navigation Bar on Scroll:



+Log (May 6, 2020, 1:21 p.m.)

git log

git log --pretty=oneline

git log --pretty=oneline --abbrev-commit

+See changes of a commit (April 27, 2020, 10:48 a.m.)

git show <COMMIT>


Shows the changes made in the most recent commit:

git show


+View logs of a user's commits (April 27, 2020, 10:47 a.m.)

git log --author="Mohsen"

+See changes before pulling (March 15, 2020, 4:39 p.m.)

1- Fetch the changes from the remote:
git fetch origin

2- Show commit logs of the changes:
git log develop..origin/develop

3- Show diffs of changes:
git diff develop..origin/develop

4- Apply the changes by merge:
git merge origin/develop
Or just pull the changes:
git pull

+Clone a specific branch (Feb. 20, 2020, 10:25 p.m.)

git clone -b <branch> <remote_repo>

+Server certificate verification failed. CAfile (Feb. 20, 2020, 7:16 p.m.)

git config --global http.sslverify "false"

+git clean (Feb. 19, 2020, 2:23 p.m.)

Remove files from your working directory that are not tracked.

If you change your mind, there is often no retrieving the content of those files.


A safer option is to run git stash --all to remove everything but save it in a stash.


git clean

Dry run, including untracked directories:
git clean -d -n

Interactive mode (-x also removes ignored files):
git clean -x -i


+.git/info/exclude vs. .gitignore (Aug. 8, 2018, 11:36 a.m.)

.gitignore is applied to every clone of the repo (it comes along as a versioned file), while .git/info/exclude only applies to your local copy of the repository.


The advantage of .gitignore is that it can be checked into the repository itself, unlike .git/info/exclude. Another advantage is that you can have multiple .gitignore files, one inside each directory/subdirectory for directory-specific ignore rules, unlike .git/info/exclude.

So, .gitignore is available across all clones of the repository. Therefore, in large teams, all people are ignoring the same kind of files Example *.db, *.log. And you can have more specific ignore rules because of multiple .gitignore.

.git/info/exclude is available for individual clones only, hence what one person ignores in his clone is not available in some other person's clone. For example, if someone uses Eclipse for development it may make sense for that developer to add a .build folder to .git/info/exclude because other devs may not be using Eclipse.

In general, files/ignore rules that have to be universally ignored should go in .gitignore, and otherwise files that you want to ignore only on your local clone should go into .git/info/exclude
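A minimal sketch of the difference, using git check-ignore to report which rule file matched each path (file names are illustrative):

```shell
# Throwaway repo; cache.db and debug.log are illustrative names.
set -e
cd "$(mktemp -d)"
git init -q
echo '*.db'  >  .gitignore          # shared rule: can be committed
echo '*.log' >> .git/info/exclude   # local-only rule: never leaves this clone

touch cache.db debug.log

# -v prints which file and pattern caused each path to be ignored
git check-ignore -v cache.db debug.log
```

The -v output shows cache.db matched in .gitignore and debug.log matched in .git/info/exclude, even though both behave the same locally.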

+Change Remote Origin (Oct. 24, 2018, 2:26 p.m.)

git remote rm origin
git remote add origin
git config branch.master.remote origin
git config branch.master.merge refs/heads/master

+Force Push (Oct. 14, 2018, 2:13 p.m.)

git push https://git.... --force

git push --force origin .....

git push https://git.... -f

git push -f origin .....

+Cancel a local git commit (Feb. 25, 2019, 4:04 p.m.)

Unstage all changes that have been added to the staging area:
To undo the most recent git add of files/folders that have not yet been committed:

git reset .


Undo most recent commit:
git reset HEAD~1


+Delete from reflog (Feb. 25, 2019, 4:04 p.m.)

git reflog delete HEAD@{3}

+Revert all local changes (Feb. 25, 2019, 4:03 p.m.)

Unstaged local changes (before you commit)

Discard all local changes, but save them for possible re-use later:
git stash

Discarding local changes (permanently) to a file:
git checkout -- <file>

Discard all local changes to all files permanently:
git reset --hard

+Comparing two branches (Feb. 25, 2019, 1:20 p.m.)

git diff branch_1 branch_2

+Rename a branch (Feb. 25, 2019, 4:01 p.m.)

1- Rename the local branch name:

If you are on the branch:
git branch -m <newname>

If you are on a different branch:
git branch -m <oldname> <newname>

2- Delete the old name remote branch and push the new name local branch:
git push origin :old-name new-name

3- Reset the upstream branch for the new-name local branch:

Switch to the branch and then:
git push origin -u new-name

+Delete a branch (Feb. 28, 2019, 9:17 a.m.)

Delete a Local GIT branch:

Use either of the following commands:
git branch -d branch_name
git branch -D branch_name

The -d option stands for --delete, which would delete the local branch, only if you have already pushed and merged it with your remote branches.

The -D option stands for --delete --force, which deletes the branch regardless of its push and merge status, so be careful using this one!


Delete a remote GIT branch:

Use either of the following commands:
git push <remote_name> --delete <branch_name>
git push <remote_name> :<branch_name>


Push to remote branch and delete:

If you ever want to push your local branch to remote and delete your local, you can use git push with the -d option as an alias for --delete.


+Fetch vs Pull (March 2, 2019, 10:03 a.m.)

In the simplest terms, git pull does a git fetch followed by a git merge.


git fetch only downloads new data from a remote repository - but it doesn't integrate any of this new data into your working files. Fetch is great for getting a fresh view of all the things that happened in a remote repository.


git pull, in contrast, is used with a different goal in mind: to update your current HEAD branch with the latest changes from the remote server. This means that pull not only downloads new data; it also directly integrates it into your current working copy files. This has a couple of consequences:

Since "git pull" tries to merge remote changes with your local ones, a so-called "merge conflict" can occur.

Like for many other actions, it's highly recommended to start a "git pull" only with a clean working copy. This means that you should not have any uncommitted local changes before you pull. Use Git's Stash feature to save your local changes temporarily.

+Merge (March 4, 2019, 12:29 p.m.)

Switch to the production branch and:
git merge other_branch

+Untracking/Re-indexing files based on .gitignore (March 4, 2019, 1:05 a.m.)

git add .

git commit -m "Some Message"

git push origin master

git rm -r --cached .

git add .

git commit -m "Reindexing..."

+Stash (March 4, 2019, 3:57 p.m.)

git stash

git stash pop

git stash list

git stash apply

git stash show stash@{0}

git stash apply --index


git stash --patch

Git will not stash everything that is modified but will instead prompt you interactively which of the changes you would like to stash and which you would like to keep in your working directory.


Creating a Branch from a Stash

git stash branch <new branchname>


+Submodule (Nov. 29, 2017, 6:17 p.m.)

1- CD to the path you need the module get cloned.

2- git submodule add


In case this error is raised:
"<path> already exists in the index"
git rm --cached <path>
and you should also delete the files from this path:
rm -rf .git/modules/...


To remove a submodule you need to:

Delete the relevant section from the .gitmodules file.
Stage the .gitmodules changes git add .gitmodules
Delete the relevant section from .git/config.
Run git rm --cached path_to_submodule (no trailing slash).
Run rm -rf .git/modules/path_to_submodule
Commit git commit -m "Removed submodule <name>"
Delete the now untracked submodule files
rm -rf path_to_submodule


+Commands (July 29, 2017, 11:26 a.m.)

git pull


git fetch


git pull master


Create a branch:
git checkout -b branch_name


Work on an existing branch:
git checkout branch_name


View the changes you've made:
git status


View differences:
git diff


Delete all changes in the Git repository:
To delete all local changes in the repository that have not been added to the staging area, and leave unstaged files/folders, type:

git checkout .


Delete all untracked changes in the Git repository:
git clean -f


Unstage all changes that have been added to the staging area:
To undo the most recent git add of files/folders that have not yet been committed:

git reset .


Undo most recent commit:
git reset HEAD~1


Merge created branch with master branch:

You need to be in the created branch.

git checkout NAME-OF-BRANCH
git merge master


Merge master branch with created branch:

You need to be in the master branch.

git checkout master
git merge NAME-OF-BRANCH


+Diff (July 29, 2017, 11:17 a.m.)

If you want to see what you haven't git added yet:
git diff myfile.txt

or if you want to see already-added changes
git diff --cached myfile.txt

+Modify existing / unpushed commits (Jan. 28, 2017, 3:12 p.m.)

git commit --amend -m "New commit message"

+Delete file from repository (Jan. 28, 2017, 3:04 p.m.)

If you deleted a file from the working tree, then commit the deletion:
git add . -A
git commit -m "Deleted some files..."
git push origin master


Remove a file from a Git repository without deleting it from the local filesystem:
git rm --cached <filename>
git rm --cached -r <dir_name>
git commit -m "Removed folder from repository"
git push origin master

+.gitignore Rules (Jan. 28, 2017, 2:56 p.m.)

A blank line matches no files, so it can serve as a separator for readability.

A line starting with # serves as a comment.

An optional prefix ! negates the pattern; any matching file excluded by a previous pattern will become included again. A negated pattern overrides lower-precedence pattern sources.

If the pattern ends with a slash, it is removed for the purpose of the following description, but it will only find a match with a directory. In other words, foo/ will match a directory foo and paths underneath it, but will not match a regular file or a symbolic link foo (this is consistent with the way pathspecs work in general in git).

If the pattern does not contain a slash /, git treats it as a shell glob pattern and checks for a match against the pathname relative to the location of the .gitignore file (relative to the top level of the work tree if not from a .gitignore file).

Otherwise, git treats the pattern as a shell glob suitable for consumption by fnmatch(3) with the FNM_PATHNAME flag: wildcards in the pattern will not match a / in the pathname. For example, Documentation/*.html matches Documentation/git.html but not Documentation/ppc/ppc.html or tools/perf/Documentation/perf.html.

A leading slash matches the beginning of the pathname. For example, /*.c matches cat-file.c but not mozilla-sha1/sha1.c.
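The rules above can be verified with git check-ignore in a scratch repo (patterns and paths here are illustrative):

```shell
# Throwaway repo; patterns and file names are illustrative.
set -e
cd "$(mktemp -d)"
git init -q
printf '%s\n' '# build artifacts' '*.o' '!keep.o' '/local.cfg' 'build/' > .gitignore
mkdir -p build sub
touch main.o keep.o local.cfg sub/local.cfg build/out

git check-ignore -v main.o local.cfg build/out   # which pattern matched what
git check-ignore keep.o || echo "keep.o re-included by the ! pattern"
git check-ignore sub/local.cfg || echo "/local.cfg only matches at the top level"
```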

+Examples (Aug. 21, 2014, 1:29 p.m.)

mkdir my_project
cd my_project
git init
git remote add origin
git commit -m 'initial commit'
git push origin master

-----------------------------------------------------------------------------

After each change in project:
git add .
git commit -m '<the comment>'
git push origin master


git config http.postBuffer 1048576000
git config --global "Mohsen Hassani"
git config --global ""
git config --global color.ui true
git config --global color.status auto
git config --global color.branch auto
git config --list
git log

git add -A .
git commit -m "File nonsense.txt is now removed"

git commit -m "message with a tpyo here"
git commit --amend -m "More changes - now correct"

git remote
git remote -v

export http_proxy=http://proxy:8080
// Set proxy for git globally
git config --global http.proxy http://proxy:8080
// To check the proxy settings
git config --get http.proxy
// Just in case you need to you can also revoke the proxy settings
git config --global --unset http.proxy

+Gitlab Flow (Oct. 8, 2018, 3:08 p.m.)

In git you add files from the working copy to the staging area. After that you commit them to the local repo. The third step is pushing to a shared remote repository. After getting used to these three steps the branching model becomes the challenge.

Since many organizations new to git have no conventions for how to work with it, things can quickly become a mess. The biggest problem they run into is that many long-running branches, each containing part of the changes, accumulate. People have a hard time figuring out which branch they should develop on or deploy to production. Frequently the reaction to this problem is to adopt a standardized pattern such as git flow or GitHub flow. We think there is still room for improvement, and will detail a set of practices we call GitLab flow.

Git flow and its problems:
Git flow was one of the first proposals to use git branches and it has gotten a lot of attention. It advocates a master branch and a separate develop branch, as well as supporting branches for features, releases and hotfixes. The development happens on the develop branch, moves to a release branch and is finally merged into the master branch. Git flow is a well-defined standard, but its complexity introduces two problems.

The first problem is that developers must use the develop branch and not master; master is reserved for code that is released to production. It is a convention to call your default branch master and to mostly branch from and merge to this. Since most tools automatically make the master branch the default one and display that one by default, it is annoying to have to switch to another one.

The second problem of git flow is the complexity introduced by the hotfix and release branches. These branches can be a good idea for some organizations but are overkill for the vast majority of them. Nowadays most organizations practice continuous delivery, which means that your default branch can be deployed. This means that hotfix and release branches can be prevented, including all the ceremony they introduce. An example of this ceremony is the merging back of release branches. Though specialized tools do exist to solve this, they require documentation and add complexity. Frequently developers make a mistake and, for example, changes are only merged into master and not into the develop branch. The root cause of these errors is that git flow is too complex for most of the use cases. And doing releases doesn't automatically mean also doing hotfixes.

GitHub flow as a simpler alternative:
In reaction to git flow a simpler alternative was detailed, GitHub flow. This flow has only feature branches and a master branch. This is very simple and clean, many organizations have adopted it with great success. Atlassian recommends a similar strategy although they rebase feature branches. Merging everything into the master branch and deploying often means you minimize the amount of code in 'inventory' which is in line with the lean and continuous delivery best practices. But this flow still leaves a lot of questions unanswered regarding deployments, environments, releases and integrations with issues. With GitLab flow we offer additional guidance for these questions.

Production branch with GitLab flow:
GitHub flow does assume you are able to deploy to production every time you merge a feature branch. This is possible for e.g. SaaS applications, but there are many cases where this is not possible. One would be a situation where you are not in control of the exact release moment, for example an iOS application that needs to pass App Store validation. Another example is when you have deployment windows (workdays from 10am to 4pm when the operations team is at full capacity) but you also merge code at other times. In these cases you can make a production branch that reflects the deployed code. You can deploy a new version by merging in master to the production branch. If you need to know what code is in production you can just checkout the production branch to see. The approximate time of deployment is easily visible as the merge commit in the version control system. This time is pretty accurate if you automatically deploy your production branch. If you need a more exact time you can have your deployment script create a tag on each deployment. This flow prevents the overhead of releasing, tagging and merging that is common to git flow.

Environment branches with GitLab flow:
It might be a good idea to have an environment that is automatically updated to the master branch. Only in this case, the name of this environment might differ from the branch name. Suppose you have a staging environment, a pre-production environment and a production environment. In this case the master branch is deployed on staging. When someone wants to deploy to pre-production they create a merge request from the master branch to the pre-production branch. And going live with code happens by merging the pre-production branch into the production branch. This workflow where commits only flow downstream ensures that everything has been tested on all environments. If you need to cherry-pick a commit with a hotfix it is common to develop it on a feature branch and merge it into master with a merge request, do not delete the feature branch. If master is good to go (it should be if you are practicing continuous delivery) you then merge it to the other branches. If this is not possible because more manual testing is required you can send merge requests from the feature branch to the downstream branches.

Release branches with GitLab flow:
Only in case you need to release software to the outside world you need to work with release branches. In this case, each branch contains a minor version (2-3-stable, 2-4-stable, etc.). The stable branch uses master as a starting point and is created as late as possible. By branching as late as possible you minimize the time you have to apply bug fixes to multiple branches. After a release branch is announced, only serious bug fixes are included in the release branch. If possible these bug fixes are first merged into master and then cherry-picked into the release branch. This way you can't forget to cherry-pick them into master and encounter the same bug on subsequent releases. This is called an 'upstream first' policy that is also practiced by Google and Red Hat. Every time a bug-fix is included in a release branch the patch version is raised (to comply with Semantic Versioning) by setting a new tag. Some projects also have a stable branch that points to the same commit as the latest released branch. In this flow it is not common to have a production branch (or git flow master branch).

Merge/pull requests with GitLab flow:
Merge or pull requests are created in a git management application and ask an assigned person to merge two branches. Tools such as GitHub and Bitbucket choose the name pull request since the first manual action would be to pull the feature branch. Tools such as GitLab and others choose the name merge request since that is the final action that is requested of the assignee. In this article we'll refer to them as merge requests.

If you work on a feature branch for more than a few hours it is good to share the intermediate result with the rest of the team. This can be done by creating a merge request without assigning it to anyone, instead you mention people in the description or a comment (/cc @mark @susan). This means it is not ready to be merged but feedback is welcome. Your team members can comment on the merge request in general or on specific lines with line comments. The merge requests serves as a code review tool and no separate tools such as Gerrit and reviewboard should be needed. If the review reveals shortcomings anyone can commit and push a fix. Commonly the person to do this is the creator of the merge/pull request. The diff in the merge/pull requests automatically updates when new commits are pushed on the branch.

When you feel comfortable with it to be merged you assign it to the person that knows most about the codebase you are changing and mention any other people you would like feedback from. There is room for more feedback and after the assigned person feels comfortable with the result the branch is merged. If the assigned person does not feel comfortable they can close the merge request without merging.

In GitLab it is common to protect the long-lived branches (e.g. the master branch) so that normal developers can't modify these protected branches. So if you want to merge it into a protected branch you assign it to someone with maintainer authorizations.

Issue tracking with GitLab flow:
GitLab flow is a way to make the relation between the code and the issue tracker more transparent.

Any significant change to the code should start with an issue where the goal is described. Having a reason for every code change is important to inform everyone on the team and to help people keep the scope of a feature branch small. In GitLab each change to the codebase starts with an issue in the issue tracking system. If there is no issue yet it should be created first provided there is significant work involved (more than 1 hour). For many organizations this will be natural since the issue will have to be estimated for the sprint. Issue titles should describe the desired state of the system, e.g. "As an administrator I want to remove users without receiving an error" instead of "Admin can't remove users.".

When you are ready to code you start a branch for the issue from the master branch. The name of this branch should start with the issue number, for example '15-require-a-password-to-change-it'.

When you are done or want to discuss the code you open a merge request. This is an online place to discuss the change and review the code. Opening a merge request is a manual action since you do not always want to merge a new branch you push, it could be a long-running environment or release branch. If you open the merge request but do not assign it to anyone it is a 'Work In Progress' merge request. These are used to discuss the proposed implementation but are not ready for inclusion in the master branch yet. Pro tip: Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it's ready.

When the author thinks the code is ready, the merge request is assigned to a reviewer. The reviewer presses the merge button when they think the code is ready for inclusion in the master branch. In this case the code is merged and a merge commit is generated that makes this event easily visible later on. Merge requests always create a merge commit, even when the commit could be added without one. This merge strategy is called 'no fast-forward' in git. After the merge, the feature branch is deleted since it is no longer needed; in GitLab this deletion is an option when merging.

Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case it is no problem to reuse the same branch name since it was deleted when the branch was merged. At any time there is at most one branch for every issue. It is possible that one feature branch solves more than one issue.

+Uninstall (Oct. 23, 2018, 4:25 p.m.)

1- sudo gitlab-ctl uninstall

2- sudo gitlab-ctl cleanse

3- sudo gitlab-ctl remove-accounts

4- sudo dpkg -P gitlab-ce

5- Delete these directories:
rm -r /opt/gitlab/
rm -r /var/opt/gitlab
rm -r /etc/gitlab
rm -r /var/log/gitlab

+Docker (Dec. 15, 2018, 4:04 p.m.)

docker pull gitlab/gitlab-ce:latest


docker run -d --hostname <your-hostname> -p 30443:443 -p 3080:80 -p 3022:22 --name gitlab --restart always -v /var/docker_data/gitlab/config:/etc/gitlab -v /var/docker_data/gitlab/logs:/var/log/gitlab -v /var/docker_data/gitlab/data:/var/opt/gitlab gitlab/gitlab-ce:latest


+Markdown Cheatsheet (March 10, 2018, 8:14 p.m.)

+Runner - .gitlab-ci.yml sample (Feb. 14, 2018, 11:38 a.m.)

- mkdocs build
- ssh-keyscan -H <server> >> ~/.ssh/known_hosts
- scp -rC site/* <user>@<server>:<remote-path>
- ssh <user>@<server> "/etc/init.d/nginx restart"

+Send Notifications to Email (April 12, 2017, 3:03 p.m.)
To test the mail server:
1- sudo gitlab-rails console production
2- Look at the ActionMailer delivery_method:
3- Check the mail settings:

If it's configured with smtp:

If it is sendmail:

You may need to check your local mail logs (e.g. /var/log/mail.log) for more details.
4- Send a test message via the console.
Notify.test_email('', 'Hello World', 'This is a test message').deliver_now

In case the email is not sent (after checking your mail), you can see the reason/error in:
tail -f /var/log/mail.log
5- If you needed to change any configs, refer to this file:

vim /var/opt/gitlab/gitlab-rails/etc/gitlab.yml

OR depending on your gitlab version, maybe this one:


And after any change to it:
gitlab-ctl reconfigure
For fixing some problems I had to replace the default "postfix" with "sendmail":
apt install sendmail (this will remove postfix and install sendmail)

In /etc/hosts I had to put the required domain names to fix the error "Sender address rejected: Domain not found".

+Deleting a runner (March 8, 2017, 7:38 p.m.)

gitlab-runner unregister --name runner-0

For deleting all:
gitlab-runner verify --delete

+Install Gitlab Runner (Feb. 25, 2017, 3:09 p.m.)

GitLab Runner is an application which processes builds. It can be deployed separately and works with GitLab CI through an API.
In order to run tests, you need at least one GitLab instance and one GitLab Runner.
In GitLab CI, Runners run your YAML. A Runner is an isolated (virtual) machine that picks up jobs through the coordinator API of GitLab CI. A Runner can be specific to a certain project or serve any project in GitLab CI. A Runner that serves all projects is called a shared Runner.
1- Add GitLab's official repository:
apt-get install curl
curl -L | sudo bash

2- Prefer GitLab-provided packages over the Debian native ones:
cat > /etc/apt/preferences.d/pin-gitlab-runner.pref <<EOF
Explanation: Prefer GitLab provided packages over the Debian native ones
Package: gitlab-ci-multi-runner
Pin: origin
Pin-Priority: 1001
EOF

3- Install gitlab-ci-multi-runner:
sudo apt-get install gitlab-ci-multi-runner

4- Register the Runner:
sudo gitlab-ci-multi-runner register

+Install GitLab on server (Feb. 25, 2017, 12:16 p.m.)
1- Install and configure the necessary dependencies:
sudo apt-get install curl openssh-server ca-certificates postfix

2- Add the GitLab package server and install the package:
curl -sS | sudo bash
sudo apt-get install gitlab-ce

3- Configure and start GitLab:
sudo gitlab-ctl reconfigure

4- Browse to the hostname and login:
On your first visit, you'll be redirected to a password reset screen to provide the password for the initial administrator account. Enter your desired password and you'll be redirected back to the login screen.
The default account's username is "root". Provide the password you created earlier and login. After login you can change the username if you wish.

+Install GitLab CI (Feb. 25, 2017, 11:46 a.m.)

GitLab CI is a part of GitLab, a web application with an API that stores its state in a database. It manages projects/builds and provides a nice user interface, besides all the features of GitLab.
Starting from version 8.0, GitLab Continuous Integration (CI) is fully integrated into GitLab itself and is enabled by default on all projects.
GitLab offers a continuous integration service. If you add a .gitlab-ci.yml file to the root directory of your repository, and configure your GitLab project to use a Runner, then each merge request or push, triggers your CI pipeline.

+iframe (June 3, 2018, 12:11 p.m.)

<!DOCTYPE html>
<html>
<head>
    <title>Mohsen Hassani</title>
    <style>
        body, html {
            margin: 0;
            overflow: hidden;
        }

        iframe {
            width: 100%;
            height: 95vh;
            border: 0;
        }
    </style>
</head>
<body>
    <div class="iframe-link">
        <iframe src="">Please switch to another modern browser.</iframe>
    </div>
</body>
</html>

+Favicon (Feb. 20, 2019, 11:20 a.m.)

<link rel="shortcut icon" type="image/x-icon" href="favicon.ico" />
<link rel="apple-touch-icon" href="/custom_icon.png" />

+Conditions If (July 27, 2015, 3:02 p.m.)

You might need to change all the below condition syntaxes to the downlevel-revealed syntax:
<![if gte IE 9]> ... <![endif]>

Target ALL VERSIONS of IE

<!--[if IE]>
<link rel="stylesheet" type="text/css" href="all-ie-only.css" />
<![endif]-->

Target everything EXCEPT IE

<!--[if !IE]><!-->
<link rel="stylesheet" type="text/css" href="not-ie.css" />
<!--<![endif]-->

Target IE 7 ONLY

<!--[if IE 7]>
<link rel="stylesheet" type="text/css" href="ie7.css">
<![endif]-->

Target IE 6 ONLY

<!--[if IE 6]>
<link rel="stylesheet" type="text/css" href="ie6.css" />
<![endif]-->

Target IE 5 ONLY

<!--[if IE 5]>
<link rel="stylesheet" type="text/css" href="ie5.css" />
<![endif]-->

Target IE 5.5 ONLY

<!--[if IE 5.5000]>
<link rel="stylesheet" type="text/css" href="ie55.css" />
<![endif]-->

Target IE 6 and LOWER

<!--[if lt IE 7]>
<link rel="stylesheet" type="text/css" href="ie6-and-down.css" />
<![endif]-->

<!--[if lte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6-and-down.css" />
<![endif]-->

Target IE 7 and LOWER

<!--[if lt IE 8]>
<link rel="stylesheet" type="text/css" href="ie7-and-down.css" />
<![endif]-->

<!--[if lte IE 7]>
<link rel="stylesheet" type="text/css" href="ie7-and-down.css" />
<![endif]-->

Target IE 8 and LOWER

<!--[if lt IE 9]>
<link rel="stylesheet" type="text/css" href="ie8-and-down.css" />
<![endif]-->

<!--[if lte IE 8]>
<link rel="stylesheet" type="text/css" href="ie8-and-down.css" />
<![endif]-->

Target IE 6 and HIGHER

<!--[if gt IE 5.5]>
<link rel="stylesheet" type="text/css" href="ie6-and-up.css" />
<![endif]-->

<!--[if gte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6-and-up.css" />
<![endif]-->

Target IE 7 and HIGHER

<!--[if gt IE 6]>
<link rel="stylesheet" type="text/css" href="ie7-and-up.css" />
<![endif]-->

<!--[if gte IE 7]>
<link rel="stylesheet" type="text/css" href="ie7-and-up.css" />
<![endif]-->

Target IE 8 and HIGHER

<!--[if gt IE 7]>
<link rel="stylesheet" type="text/css" href="ie8-and-up.css" />
<![endif]-->

<!--[if gte IE 8]>
<link rel="stylesheet" type="text/css" href="ie8-and-up.css" />
<![endif]-->

+Queries (Dec. 12, 2018, 12:08 p.m.)

# influx

> show databases;

> show measurements

+Configuration (Dec. 9, 2018, 3:26 p.m.)

By default, InfluxDB uses the following network ports:

TCP port 8086 is used for client-server communication over InfluxDB’s HTTP API
TCP port 8088 is used for the RPC service for backup and restore

In addition to the ports above, InfluxDB also offers multiple plugins that may require custom ports. All port mappings can be modified through the configuration file, which is located at /etc/influxdb/influxdb.conf for default installations.


The system has internal defaults for every configuration file setting. View the default configuration settings with the "influxd config" command.


+Installation (Dec. 9, 2018, 3:24 p.m.)

Ubuntu & Debian installations are different. (Refer to the link above.)

1- curl -sL | sudo apt-key add -

2- source /etc/lsb-release

3- echo "deb${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

4- apt-get update && sudo apt-get install influxdb

5- service influxdb start


+Introduction (Dec. 9, 2018, 3:12 p.m.)

InfluxDB is an open-source time series database (TSDB) developed by InfluxData. It is written in Go and optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, Internet of Things sensor data, and real-time analytics. It also has support for processing data from Graphite.

+Ionic Capacitor vs Apache Cordova (Nov. 1, 2019, 11:15 p.m.)

Ionic Capacitor is an open-source framework for building progressive web, native mobile, and desktop apps. On the other side, Apache Cordova (formerly PhoneGap) does the same by exposing native device features to a mobile WebView.


Using Cordova to build a hybrid native mobile app, you use Cordova plugin libraries, which behind the scenes build your app with the Android SDK or iOS tooling within the Cordova framework (cordova.js/phonegap.js).

With Ionic Capacitor, you create the app without any Cordova imports, not even cordova.js; instead, Capacitor's own native plugin bridge is imported as @capacitor/core. Capacitor can also be used without the Ionic framework, and it's backward compatible with Cordova.


In spirit, Capacitor and Cordova are very similar. Both manage a Web View and provide a structured way of exposing native functionality to your web code.

Both provide common core plugins out of the box for accessing services like Camera and the Filesystem. In fact, one of the design goals with Capacitor is to support Cordova plugins out of the box! While Capacitor doesn’t support every plugin (some are simply incompatible), it generally supports most plugins from the Cordova ecosystem.


Capacitor generally expects you to commit your native app project (Xcode, Android Studio, etc.) as a source artifact. This means it’s easy to add custom native code (for example, to integrate an SDK that requires modifying AppDelegate on iOS), build “plugins” to expose native functionality to your web app without having to actually build a standalone plugin, and also debug and manage your app in the way that embraces the best tooling for that platform.


No more deviceready!

Capacitor kills the deviceready event by loading all plugin JavaScript before your page loads, making every API available immediately. Also unlike Cordova, plugin methods are exposed directly as opposed to being called through an exec() function.

That means no more wondering why your app isn’t working and why deviceready hasn’t fired.


Embracing NPM & Easier Plugin Development

Capacitor embraces NPM for every dependency in your project, including plugins and platforms. That means you never capacitor install plugin-x, you just npm install plugin-x and then when you sync your project Capacitor will detect and automatically link in any plugins you’ve installed.


First-class Electron and PWA support

Capacitor embraces Electron for desktop functionality, along with adding first-class support for web apps and Progressive Web Apps.


+Storage (Oct. 27, 2019, 7:43 p.m.)


ionic cordova plugin add cordova-plugin-nativestorage
npm install @ionic-native/native-storage



import { NativeStorage } from '@ionic-native/native-storage/ngx';

constructor(private nativeStorage: NativeStorage) { }

this.nativeStorage.setItem('myitem', {property: 'value', anotherProperty: 'anotherValue'})
    .then(
        () => console.log('Stored item!'),
        error => console.error('Error storing item', error)
    );

this.nativeStorage.getItem('myitem')
    .then(
        data => console.log(data),
        error => console.error(error)
    );


+Capacitor - Installation (Oct. 15, 2019, 9:16 p.m.)

To add Capacitor to your web app, run the following commands:
npm install --save @capacitor/cli @capacitor/core

Then, initialize Capacitor with your app information.
npx cap init tiptong ir.tiptong.www

Next, install any of the desired native platforms:
npx cap add android
npx cap add ios
npx cap add electron

+Capacitor - Description (Oct. 15, 2019, 9:14 p.m.)

Capacitor is an open-source native container (similar to Cordova) built by the Ionic team that you can use to build web/mobile apps that run on iOS, Android, Electron (Desktop), and as Progressive Web Apps with the same code base. It allows you to access the full native SDK on each platform, and easily deploy to App Stores or create a PWA version of your application.

Capacitor can be used with Ionic or any preferred frontend framework and can be extended with plugins. It has a rich set of official plugins and you can also use it with Cordova plugins.


Capacitor is a native layer for cross-platform web application development, which makes it possible to use hardware features like Geolocation, Camera, Vibration, Network, Storage, Filesystem, and many more. The catch is that there is no need to install a plugin for each native feature, as we used to do with Cordova plugins.


+PWA (Oct. 15, 2019, 8:54 p.m.)

Start an app:
npx create-stencil tiptong-pwa

+CLI Commands (June 28, 2019, 11:24 p.m.)

Generate a new project:
ionic start
ionic start myApp tabs

ionic serve

npm uninstall @ionic-native/splash-screen

ng add @angular/pwa

ionic build --prod

ionic generate module auth
ionic generate module auth --flat
ionic g m auth --flat

List installed plugins:
cordova plugins
cordova plugin ls

+Installation (June 28, 2019, 12:11 a.m.)

1- Install the latest version of Node.js and npm

2- sudo npm install -g ionic

+DataTables (Jan. 17, 2020, 8:55 p.m.)

<script type="text/javascript">
    $(document).ready(function () {
        $('#my-table').DataTable({
            language: {
                url: "{% static 'mpei/json/dataTables/fa.json' %}"
            }
        });
    });
</script>

<script type="text/javascript">
    $(document).ready(function () {
        $('#voice-records-table').DataTable({
            processing: true,
            serverSide: true,
            paging: true,
            searching: false,
            ordering: false,
            ajax: {
                type: 'POST',
                headers: {'X-CSRF-TOKEN': '{{ csrf_token }}'},
                url: '{% url "voicemail:voice-records-list" %}',
                data: {
                    'csrfmiddlewaretoken': '{{ csrf_token }}'
                }
            },
            lengthMenu: [10, 25, 50, 100],
            columns: [
                {data: 'unique_id', class: 'text-right', orderable: false},
                {data: 'voicemail_id', class: 'text-center', orderable: false},
                {data: 'call_time', orderable: false, searchable: false, class: 'text-center'},
                {data: 'caller_id', searchable: false, class: 'text-left'},
                {data: 'tracking_code', name: 'tracking_code', searchable: false, class: 'text-center'},
                {
                    data: 'checked',
                    name: 'checked',
                    class: 'text-center',
                    render: function (data, type, full, meta) {
                        if (data === true) {
                            return 'بله';
                        } else {
                            return 'خیر';
                        }
                    }
                },
                {
                    data: 'pk',
                    name: 'pk',
                    class: 'text-center',
                    render: function (data, type, full, meta) {
                        var url = "{% url 'voicemail:voice-records-check' 123456789876 %}";
                        url = url.replace('123456789876', full['pk']);
                        return '<a href=' + url + ' class="links">بررسی پیام</a>';
                    }
                }
            ],
            language: {
                url: "{% static 'mpei/json/dataTables/fa.json' %}"
            }
        });
    });
</script>

{# #}
englishNumber: true,
enableTimePicker: true,
targetTextSelector: '#id_from_dt',
disableAfterToday: true,
placement: 'right'

englishNumber: true,
enableTimePicker: true,
targetTextSelector: '#id_to_dt',
disableAfterToday: true,
placement: 'right'

+BLOB (Jan. 10, 2020, 10:52 a.m.)

BLOB stands for a Binary Large OBject.


A BLOB can store multimedia content like Images, Videos, and Audio but it can really store any kind of binary data. Since the default length of a BLOB isn't standard, you can define the storage capacity of each BLOB to whatever you'd like up to 2,147,483,647 characters in length.


Since jQuery doesn't have a way to handle blob's, you could try using the native Blob interface.

var oReq = new XMLHttpRequest();"GET", "/myfile.png", true);
oReq.responseType = "arraybuffer";

oReq.onload = function (oEvent) {
    var blob = new Blob([oReq.response], {type: "image/png"});
    // ...
};

oReq.send();
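The Blob constructor used above is also available in Node 18+, so its basic behavior can be checked outside the browser. A minimal sketch (the sample contents are illustrative):

```javascript
// Build a Blob from string parts and inspect its metadata.
const blob = new Blob(['hello', ' ', 'world'], { type: 'text/plain' });

console.log(blob.size);  // 11 (bytes)
console.log(blob.type);  // "text/plain"
```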



+function - each (Dec. 21, 2019, 11:28 a.m.)

$.each(data, function (i, occupation) {
    console.log(occupation['pk'], occupation['name']);
});

+TimeOuts / Inervals (April 24, 2019, 1:03 p.m.)

setInterval(function () {
    // call your function here
}, 5000);


$(function () {
    setTimeout(runMyFunction, 10000);
});


setTimeout(expression, timeout); runs the code/function once after the timeout.

setInterval(expression, timeout); runs the code/function in intervals, with the length of the timeout between them.


setInterval repeats the call, setTimeout only runs it once.


setTimeout allows us to run a function once after the interval of time.

setInterval allows us to run a function repeatedly, starting after the interval of time, then repeating continuously at that interval.


+Find element by data attribute value (July 31, 2017, 1:18 p.m.)


+Error: Cannot read property 'msie' of undefined (Oct. 15, 2017, 11:43 a.m.)

Create a file, for example, "ie.js" and copy the content into it. Load it after jquery.js:

jQuery.browser = {};
(function () {
    jQuery.browser.msie = false;
    jQuery.browser.version = 0;
    if (navigator.userAgent.match(/MSIE ([0-9]+)\./)) {
        jQuery.browser.msie = true;
        jQuery.browser.version = RegExp.$1;
    }
})();


or you can include this after loading the jquery.js file:
<script src=""></script>


+Call jquery code AFTER page loading (May 26, 2018, 6:07 p.m.)

$(window).on('load', function() {

+if checkbox is checked (July 21, 2018, 11:31 a.m.)

$('#receive-sms').click(function() {
if ($(this).is(':checked')) {


+Disable Arrows on Number Inputs (Oct. 3, 2018, 12:43 p.m.)


/* Hide HTML5 Up and Down arrows. */
input[type="number"]::-webkit-outer-spin-button, input[type="number"]::-webkit-inner-spin-button {
-webkit-appearance: none;
margin: 0;

input[type="number"] {
-moz-appearance: textfield;


jQuery(document).ready( function($) {

// Disable scroll when focused on a number input.
$('form').on('focus', 'input[type=number]', function(e) {
$(this).on('wheel', function(e) {

// Restore scroll on number inputs.
$('form').on('blur', 'input[type=number]', function(e) {

// Disable up and down keys.
$('form').on('keydown', 'input[type=number]', function(e) {
if ( e.which == 38 || e.which == 40 )


+Combobox (Jan. 22, 2019, 12:43 p.m.)

Get the text value of a selected option:

$( "#myselect option:selected" ).text();


Get the value of a selected option:

$( "#myselect" ).val();



$('#my_select').change(function() {



+Bypass popup blocker on (Jan. 20, 2018, 12:53 a.m.)

$('#myButton').click(function () {
var redirectWindow ='', '_blank');
type: 'POST',
url: '/echo/json/',
success: function (data) {

+Smooth Scrolling (Feb. 21, 2017, 4:09 p.m.)

$(function() {
    $('a[href*="#"]:not([href="#"])').click(function() {
        if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
            var target = $(this.hash);
            target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
            if (target.length) {
                $('html, body').animate({
                    scrollTop: target.offset().top
                }, 1000);
                return false;
            }
        }
    });
});

+Check image width and height before upload with Javascript (Oct. 5, 2016, 3:01 a.m.)

var _URL = window.URL || window.webkitURL;
$('#upload-face').change(function () {
    var file, img;
    if (file = this.files[0]) {
        img = new Image();
        img.onload = function () {
            if (this.width < 255 || this.height < 330) {
                alert('{% trans "The file dimension should be at least 255 x 330 pixels." %}');
            }
        };
        img.src = _URL.createObjectURL(file);
    }
});

+Get value of selected radio button (Aug. 1, 2016, 3:46 p.m.)


+Allow only numeric 0-9 in inputbox (April 25, 2016, 9:18 p.m.)

$(".numeric-inputs").keydown(function (event) {
    // Allow only backspace, delete, tab, ctrlKey
    if (event.keyCode == 46 || event.keyCode == 8 || event.keyCode == 9 || event.ctrlKey) {
        // let it happen, don't do anything
    } else {
        // Ensure that it is a number (top row or numpad) and stop the keypress otherwise
        if ((event.keyCode >= 48 && event.keyCode <= 57) || (event.keyCode >= 96 && event.keyCode <= 105)) {
            // let it happen, don't do anything
        } else {
            event.preventDefault();
        }
    }
});

+Access parent of a DOM using the (event) parameter (April 25, 2016, 1:47 p.m.)

var membership_id = $('id');

+Prevent big files from uploading (March 5, 2016, 12:08 a.m.)

$('#id_certificate').bind('change', function () {
    if (this.files[0].size > 1048576) {
        alert("{% trans 'The file size should be less than 1 MB.' %}");
    }
});

+Background FullScreen Slider + Fade Effect (Feb. 5, 2016, 7:21 p.m.)


$(document).ready(function () {
    var images = [];
    var titles = [];
    {% for slider in sliders %}
        images.push('{{ slider.image.url }}');
        titles.push('{{ slider.image.motto_en }}');
    {% endfor %}

    var image_index = 0;
    $('#iind-slider').css('background-image', 'url(' + images[0] + ')');
    setInterval(function () {
        image_index++;
        if (image_index == images.length) {
            image_index = 0;
        }
        $('#iind-slider').fadeOut('slow', function () {
            $(this).css('background-image', 'url(' + images[image_index] + ')').fadeIn('slow');
        });
    }, 4000);
});

#iind-slider {
    width: 100%;
    height: 100vh;
    background: no-repeat fixed 0 0;
    background-size: 100% 100%;
}

+Convert Seconds to real Hour, Minutes, Seconds (Feb. 1, 2016, 10:54 p.m.)

// Convert seconds to real Hour:Minutes:Seconds
function secondsTimeSpanToHMS(s) {
    let h = Math.floor(s / 3600); // Get whole hours
    s -= h * 3600;
    let m = Math.floor(s / 60); // Get remaining minutes
    s -= m * 60;
    return h + ":" + (m < 10 ? '0' + m : m) + ":" + (s < 10 ? '0' + s : s); // Zero padding on minutes and seconds
}

setInterval(function () {
    var left_time = secondsTimeSpanToHMS(server_left_time);
    server_left_time -= 1;
}, 1000);
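A quick self-contained check of the conversion function:

```javascript
// Convert a raw number of seconds to an H:MM:SS string,
// zero-padding minutes and seconds.
function secondsTimeSpanToHMS(s) {
    let h = Math.floor(s / 3600); // whole hours
    s -= h * 3600;
    let m = Math.floor(s / 60);   // remaining minutes
    s -= m * 60;
    return h + ":" + (m < 10 ? '0' + m : m) + ":" + (s < 10 ? '0' + s : s);
}

console.log(secondsTimeSpanToHMS(3661));  // "1:01:01"
console.log(secondsTimeSpanToHMS(7325));  // "2:02:05"
```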

+Error - TypeError: $.browser is undefined (Jan. 15, 2016, 1:53 a.m.)

Find this script file and include it after the main jquery file:

+Multiple versions of jQuery in one page (Jan. 8, 2016, 5:54 p.m.)

1- Load the jquery libraries like the example:

<script type="text/javascript" src="{% static 'iind/js/jquery-1.7.1.min.js' %}"></script>
<script type="text/javascript">
    var jQuery_1_7_1 = $.noConflict(true);
</script>

<script type="text/javascript" src="{% static 'iind/js/jquery-1.11.3.min.js' %}"></script>
<script type="text/javascript">
    var jQuery_1_11_3 = $.noConflict(true);
</script>
2- Then use them as follows:

jQuery_1_11_3(document).ready(function () {
    jQuery_1_11_3('.dropdown').hover(function () {  // the selector was missing in the original note
        jQuery_1_11_3('.dropdown-menu', this).stop(true, true).fadeIn("fast");
        jQuery_1_11_3('b', this).toggleClass("caret caret-up");
    }, function () {
        jQuery_1_11_3('.dropdown-menu', this).stop(true, true).fadeOut("fast");
        jQuery_1_11_3('b', this).toggleClass("caret caret-up");
    });
});
And change the last line of jQuery libraries like this:

}(jQuery, window, document));

}(jQuery_1_11_3, window, document));
And for bootstrap.min.js, I had to change this long line: (The last word, jQuery needed to be changed):

if("undefined"==typeof jQuery)throw new Error("Bootstrap's JavaScript requires jQuery");+function(a){var b=a.fn.jquery.split(" ")[0].split(".");if(b[0]<2&&b[1]<9||1==b[0]&&9==b[1]&&b[2]<1)throw new Error("Bootstrap's JavaScript requires jQuery version 1.9.1 or higher")}(jQuery)

if("undefined"==typeof jQuery)throw new Error("Bootstrap's JavaScript requires jQuery");+function(a){var b=a.fn.jquery.split(" ")[0].split(".");if(b[0]<2&&b[1]<9||1==b[0]&&9==b[1]&&b[2]<1)throw new Error("Bootstrap's JavaScript requires jQuery version 1.9.1 or higher")}(jQuery_1_11_3)

+Redirect Page (Dec. 20, 2015, 11:57 a.m.)

// similar behavior as an HTTP redirect
window.location.replace("");

// similar behavior as clicking on a link
window.location.href = "";


+Smooth scrolling when clicking an anchor link (Sept. 10, 2015, midnight)

var $root = $('html, body');
$('a').click(function () {
    $root.animate({
        scrollTop: $($.attr(this, 'href')).offset().top
    }, 1500);
    return false;
});

+Attribute Selector (Aug. 26, 2015, 4:01 p.m.)

$( "input[value='Hot Fuzz']" ).next().text( "Hot Fuzz" );
$("ul").find("[data-slide='" + current + "']");

$("ul[data-slide='" + current +"']");

+Underscore Library (Aug. 26, 2015, 2:01 p.m.)

if (_.contains(intensity_filters, intensity_value)) {
    intensity_filters = _.without(intensity_filters, intensity_value);
}
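_.contains checks membership and _.without returns a copy of the array with all matching values removed. A plain-JavaScript sketch of the same behavior (the helper names mirror Underscore's, but this is not Underscore itself):

```javascript
// Membership test, like _.contains(list, value)
function contains(list, value) {
    return list.indexOf(value) !== -1;
}

// Copy of the array without the given value, like _.without(list, value)
function without(list, value) {
    return list.filter(function (v) { return v !== value; });
}

console.log(contains([1, 2, 3], 2));   // true
console.log(without([1, 2, 3, 2], 2)); // [1, 3]
```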

+Get a list of checked/unchecked checkboxes (Aug. 26, 2015, 1:51 p.m.)

var selected = [];
$('#checkboxes input:checked').each(function () {
    selected.push($(this).val());
});

And for getting the unchecked ones:
$('#checkboxes input:not(:checked)').each(function () { });

+Comma Separate Number (Aug. 14, 2015, 11:59 a.m.)

function commaSeparateNumber(val) {
    while (/(\d+)(\d{3})/.test(val.toString())) {
        val = val.toString().replace(/(\d+)(\d{3})/, '$1' + ',' + '$2');
    }
    return val;
}
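A self-contained check of the comma-separation function:

```javascript
function commaSeparateNumber(val) {
    // Repeatedly insert a comma before each trailing group of three digits.
    while (/(\d+)(\d{3})/.test(val.toString())) {
        val = val.toString().replace(/(\d+)(\d{3})/, '$1' + ',' + '$2');
    }
    return val;
}

console.log(commaSeparateNumber(1234567));  // "1,234,567"
console.log(commaSeparateNumber(100));      // 100 (unchanged, fewer than four digits)
```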

+Hide a DIV when the user clicks outside of it (Aug. 12, 2015, 2:42 p.m.)

$(document).mouseup(function (e) {
    var container = $("#my-cart-box");
    if (! && container.has( === 0) {
        container.hide();
    }
});

+Reset a form in jquery (Aug. 1, 2015, 1:19 a.m.)


+Event binding on dynamically created elements (Aug. 14, 2015, 12:06 a.m.)

Add Click event for dynamically created tr in table

$('.found-companies-table').on('click', 'tr', function () {
    // Do some code here
});

$("body").on("mouseover mouseout", "select", function (e) {
    // Do some code here
});

The general pattern:
$(staticAncestors).on(eventName, dynamicChild, function() {});
$('body').on('click', '.delete-order', function(e) { });

+Select all (table rows) except first (July 18, 2015, 3:12 a.m.)


+Deleting all rows in a table (July 15, 2015, 3:29 p.m.)

$("#mytable > tbody").html("");
---------------------------------------- OR ----------------------------------------
$("#myTable").children('tr:not(:first)').remove();

+Plugins (April 6, 2016, 8:13 p.m.)

+Focus the first input in your form (June 30, 2015, 3:05 p.m.)


+jQuery `data` vs `attr`? (Aug. 21, 2014, 3:03 p.m.)

If you are passing data to a DOM element from the server, you should set the data on the element:

<a id="foo" data-foo="bar" href="#">foo!</a>
The data can then be accessed using .data() in jQuery:

console.log( $('#foo').data('foo') );
//outputs "bar"
However, when you store data on a DOM node in jQuery using data, the variables are stored on the node object. This is to accommodate complex objects and references, as storing the data on the node element as an attribute will only accommodate string values.

Continuing my example from above:
$('#foo').data('foo', 'baz');

console.log( $('#foo').attr('data-foo') );
//outputs "bar" as the attribute was never changed

console.log( $('#foo').data('foo') );
//outputs "baz" as the value has been updated on the object
Also, the naming convention for data attributes has a bit of a hidden "gotcha":

<a id="bar" data-foo-bar-baz="fizz-buzz" href="#">fizz buzz!</a>
console.log( $('#bar').data('fooBarBaz') );
//outputs "fizz-buzz" as hyphens are automatically camelCase'd
The hyphenated key will still work:

<a id="bar" data-foo-bar-baz="fizz-buzz" href="#">fizz buzz!</a>
console.log( $('#bar').data('foo-bar-baz') );
//still outputs "fizz-buzz"
However the object returned by .data() will not have the hyphenated key set:

$('#bar').data().fooBarBaz; //works
$('#bar').data()['fooBarBaz']; //works
$('#bar').data()['foo-bar-baz']; //does not work
It's for this reason I suggest avoiding the hyphenated key in JavaScript.
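The hyphen-to-camelCase conversion jQuery applies to data keys can be sketched with a small plain-JS helper (toCamelCase is illustrative, not jQuery's internal function):

```javascript
// Convert a hyphenated data-attribute key to the camelCase form
// that $.data() exposes, e.g. "foo-bar-baz" -> "fooBarBaz".
function toCamelCase(key) {
    return key.replace(/-([a-z])/g, function (match, letter) {
        return letter.toUpperCase();
    });
}

console.log(toCamelCase('foo-bar-baz'));  // "fooBarBaz"
```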

The .data() method will also perform some basic auto-casting if the value matches a recognized pattern:

<a id="foo" href="#" data-str="bar" data-bool="true" data-num="15" data-json='{"fizz":["buzz"]}'>foo!</a>
$('#foo').data('str'); //`"bar"`
$('#foo').data('bool'); //`true`
$('#foo').data('num'); //`15`
$('#foo').data('json'); //`{fizz:['buzz']}`
This auto-casting ability is very convenient for instantiating widgets & plugins:

$('.widget').each(function () {
    $(this).widget($(this).data());
});
If you absolutely must have the original value as a string, then you'll need to use .attr():

<a id="foo" href="#" data-color="ABC123"></a>
<a id="bar" href="#" data-color="654321"></a>
$('#foo').data('color').length; //6
$('#bar').data('color').length; //undefined, length isn't a property of numbers

$('#foo').attr('data-color').length; //6
$('#bar').attr('data-color').length; //6

+Leading colon in a jQuery selector (Aug. 21, 2014, 3:01 p.m.)

What's the purpose of a leading colon in a jQuery selector?
The :input selector basically selects all form controls (input, textarea, select and button elements), whereas the input selector selects all the elements by the tag name input.

Since a radio button is a form element and uses the input tag, both selectors can be used to select radio buttons. However, the two approaches differ in the way they find the elements, and thus each has different performance benefits.

+Colon and question mark (Aug. 21, 2014, 3 p.m.)

What is the meaning of the colon (:) and question mark (?) in jquery?
That's an inline if.
If true, do the thing after the question mark, otherwise do the thing after the colon. The thing before the question mark is what you're testing.
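For example, the inline if below picks a value based on the test before the question mark (the variable names are illustrative):

```javascript
var age = 20;
// test ? value-if-true : value-if-false
var status = (age >= 18) ? 'adult' : 'minor';
console.log(status);  // "adult"
```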

+Commands and examples (Aug. 21, 2014, 2:57 p.m.)

$('#toggle_message').attr('value', 'Show')
$(document).ready(function() {});
$(window).load(function() {});
$(window).unload(function() {
    alert('You\'re leaving this page');
});
This alert will be raised when you move to another window by clicking on a link, click the back or previous buttons of the browser, or close the tab.
$('*').length;
Returns the number of all the elements in the page.
$(':text').focusin(function() {});
$(':text').blur(function() {});
$('#email').attr('value', 'Write your email address').focus(function() {
    // Some code
}).blur(function() {
    // Some code
});
search_name = jQuery.trim($(this).val());
$("#names li:contains('" + search_name + "')").addClass('highlight');
$('input[type="file"]').change(function() {
}).next().attr('disabled', 'disabled');
$('#menu_link').dblclick(function() {});
$('#click_me').toggle(function() {
    // Code here
}, function() {
    // Code here
});
var scroll_pos = $('#some_text').scrollTop();
$('#some_text').select(function() {});
$('a').bind('mouseenter mouseleave', function() {});
bind() is used to attach a handler for a series of events at once.
$('.hover').mousemove(function(e) {
$('some_div').text('x: ' + e.clientX + ' y: ' + e.clientY);
Hover over description:
$('.hover').mousemove(function(e) {
    var hovertext = $(this).attr('hovertext');
    $('#hoverdiv').text(hovertext).css('top', e.clientY + 10).css('left', e.clientX + 10);
}).mouseout(function() {
    $('#hoverdiv').hide();
});

Create an empty div with id="hoverdiv" in HTML, and style it in CSS.
.addClass('class1 class2 class3')
$(':input').focus(function() {});
Traversing using .each():

$('input[type="text"]').each(function(index) {
This index argument prints 0, 1, 2, ... per the items which are selected by .each statement/function.
These two statements do the same thing:
$('.names li:first').append('Hello');

if($(this).has('li').length == 0) { }

if($(this).has(':contains')) {}
This is useful when you want to toggle a sub-menu using the first/top item.
$(this).hide('slow', 'linear', function() {});

.stop() Will cause the animation of slide effect to stop
.fadeTo(100, 0.4, function() {})
$('.fadeto').not(this).fadeTo(100, 0.4);
$('.fadeto').css('opacity', '0.4');
$('.fadeto').mouseover(function() {
$(this).fadeTo(100, 1);
$('.fadeto').not(this).fadeTo(100, 0.4);
$('html, body').animate({scrollTop: 0}, 10000);
$('#terms').scroll(function() {
var textarea_height = $(this)[0].scrollHeight;
var scroll_height = textarea_height - $(this).innerHeight();

var scroll_top = $(this).scrollTop();
var names = ['Alex', 'Billy', 'Dale'];
if (jQuery.inArray('Alex', names) != '-1') {
$.each(names, function(index, value) {})
setInterval(function() {
var timestamp =;
}, 1);
(function($) {
    $.fn.your_new_function_name = function() {};
})(jQuery);
$('#drag').draggable({axis: 'x'});
$('#drag').draggable({containment: 'document'});
$('#drag').draggable({containment: 'window'});
$('#drag').draggable({containment: 'parent'});
$('#drag').draggable({containment: [0, 0, 200, 200]});
$('#drag').draggable({cursor: 'pointer'});
$('#drag').draggable({opacity: 0.6});
$('#drag').draggable({grid: [20, 20]});
$('#drag').draggable({revert: true});
$('#drag').draggable({revertDuration: 1000});
$('#drag').draggable({start: function() {}});
$('#drag').draggable({drag: function() {}});
$('#drag').draggable({stop: function() {}});
$('#drop').droppable({hoverClass: 'border'});
$('#drop').droppable({tolerance: 'fit'});
$('#drop').droppable({tolerance: 'intersect'});
$('#drop').droppable({tolerance: 'pointer'});
$('#drop').droppable({tolerance: 'touch'});
$('#drop').droppable({accept: '.name'});
$('#drop').droppable({over: function() {}});
$('#drop').droppable({out: function() {}});
$('#drop').droppable({drop: function() {}});
$('#names').sortable({containment: 'parent'});
$('#names').sortable({tolerance: 'pointer'});
$('#names').sortable({cursor: 'pointer'});
$('#names').sortable({revert: true});
$('#names').sortable({opacity: 0.6});
$('#names').sortable({connectWith: '#places, #names'});
$('#names').sortable({update: function() {}});
This requires the CSS file `jquery-ui-custom.css`.

$('#box').resizable({containment: 'document'});
$('#box').resizable({animate: true});
$('#box').resizable({ghost: true});

$('#box').resizable({animateDuration: 'slow'});
`slow`, `medium`, `fast`, `normal`, `1000`

$('#box').resizable({animateEasing: 'swing'});
`swing`, `linear`

$('#box').resizable({aspectRatio: true});
`0.4`, `2/5`, `9/10`

$('#box').resizable({autoHide: true});

$('#box').resizable({handles: 'n, e, se'});
n=North, e=East, w=West, s=South, or `all`
If you do not specify `all` (or include the `w`/`n` handles), you cannot resize the box from its left or top edges.

$('#box').resizable({grid: [20, 20]});
$('#box').resizable({minHeight: 200});
$('#box').resizable({maxHeight: 100});
$('#box').resizable({minWidth: 200});
$('#box').resizable({maxWidth: 100});
$('#content').accordion({fillSpace: true})
$('#content').accordion({icons: {'header': 'ui-icon-plus', 'headerSelected': 'ui-icon-minus'}})
$('#content').accordion({collapsible: true})
$('#content').accordion({active: 2})
$('#dialog').attr('title', 'Saved').text('Settings were saved.').dialog();
.dialog({buttons: {'OK': function() { $(this).dialog('close'); }}});
closeOnEscape: true
draggable: false
resizable: false
show: 'fade', 'bounce'
modal: true
position: 'top', 'top, left', 'bottom', 'top, center', [100, 100]

var val = 0;
var interval = setInterval(function() {
val = val + 1;
$('#pb').progressbar({value: val});
$('#percent').text(val + '%');
if (val == 100) {
clearInterval(interval);
}
}, 10); // the interval delay here is arbitrary

$("#header_menus img:not(.hover_menus)").mouseenter(function() {
$("#" + $(this).attr('data-hover')).show();
});

+KDE - Location of User Wallpapers (Oct. 23, 2019, 10:50 a.m.)


+Editing KDE Application Launcher Menus (May 11, 2015, 5:31 p.m.)

Use `kmenuedit`

+Delete session (March 20, 2015, 11:36 a.m.)

Delete the files in:
rm ~/.kde/share/config/session/*

And delete the file:

+Create a package for IOS (Nov. 4, 2015, 6:06 a.m.)

sudo apt-get install autoconf automake libtool pkg-config

+PyCharm Completion (March 19, 2015, 9:25 a.m.)
1-Download this jar plugin:

2-On Pycharm’s main menu, click "File" -> Import Settings

3-Select this file and PyCharm will present a dialog with filetypes ticked. Click OK.

4-You are done. Restart PyCharm

+Android API (Feb. 12, 2015, 9:54 p.m.)
I have this class in Java docs:

And in python it is:
TextToSpeech = autoclass('android.speech.tts.TextToSpeech')

Based on these, I thought that for getting another class in Java (android.speech.tts.TextToSpeech.Engine) I had to:
Engine = autoclass('android.speech.tts.TextToSpeech.Engine')

But I got this error at runtime on my cellphone and the app would not open:
java.lang.ClassNotFoundException: android.speech.tts.TextToSpeech.Engine

I could not even access `Engine` the Pythonic way either:

I had to access the class by:
Engine = autoclass('android.speech.tts.TextToSpeech$Engine')
Python Dictionaries = Java HashMap:

HashMap<String, String> phoneBook = new HashMap<String, String>();
phoneBook.put("Mike", "555-1111");
phoneBook.put("Lucy", "555-2222");
phoneBook.put("Jack", "555-3333");

phoneBook = {}
phoneBook = {"Mike":"555-1111", "Lucy":"555-2222", "Jack":"555-3333"}

And to implement it in Kivy (via pyjnius):
HashMap = autoclass('java.util.HashMap')
hash_map = HashMap()
hash_map.put(key, value)
To access nested classes, use $ like: autoclass('android.provider.MediaStore$Images$Media').

+Sign apk files (Oct. 4, 2015, 11:42 a.m.)

1-Generate a private key using keytool. For example:
$ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000
This example prompts you for passwords for the keystore and key, and to provide the Distinguished Name fields for your key. It then generates the keystore as a file called my-release-key.keystore. The keystore contains a single key, valid for 10000 days. The alias is a name that you will use later when signing your app.

2-Compile your app in release mode to obtain an unsigned APK:
buildozer android release

3-Sign your app with your private key using jarsigner:
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore my_application.apk alias_name
This example prompts you for passwords for the keystore and key. It then modifies the APK in-place to sign it. Note that you can sign an APK multiple times with different keys.

4-Verify that your APK is signed. For example:
jarsigner -verify -verbose -certs my_application.apk

5-Align the final APK package using zipalign.
zipalign does not exist in the Synaptic Package Manager; it ships with the Android SDK Build Tools. Use locate to find `zipalign` and create a symbolic link in /usr/bin:
ln -s /home/moh3en/Programs/Android/Development/android-sdk-linux/build-tools/android-5.0/zipalign /usr/bin/
zipalign -v 4 your_project_name-unaligned.apk your_project_name.apk

buildozer android release

jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore excludes/my-release-key.keystore bin/NimkatOnline-1.2.4-release-unsigned.apk mohsen_hassani

jarsigner -verify -verbose -certs bin/NimkatOnline-1.2.4-release-unsigned.apk

zipalign -v 4 bin/NimkatOnline-1.2.4-release-unsigned.apk bin/NimkatOnline-1.2.4.apk

+Label (Feb. 12, 2015, 9:52 p.m.)

When creating a label, by default it is placed at the bottom-left corner with part of it hidden; setting its `size` property solves this:
size: self.texture_size

Scrolling a Label:
text: str('A very long text' * 100)
font_size: 50
text_size: self.width, None
size_hint_y: None
height: self.texture.size[1]

+FloatLayout (Feb. 12, 2015, 9:51 p.m.)

Similar to RelativeLayout, except now position is relative to window, and not Layout.
Thus in FloatLayout, pos = 0, 0 refers to lower-left corner.

+RelativeLayout (Feb. 12, 2015, 9:51 p.m.)

Each child widget's size and position has to be given.
size_hint, pos_hint: numbers relative to Layout.
If those two parameters are used, it does not make any difference if RelativeLayout or FloatLayout are used, as both will yield the same result.

+GridLayout (Feb. 12, 2015, 9:51 p.m.)

Similar to StackLayout 'lr-tb'
Either cols or rows has to be given and the Layout adjusts so the given number is the maximum number of cols or rows.

+Canvas (Feb. 12, 2015, 9:51 p.m.)

Canvas refers to graphical instructions.
The instructions could be non-visual, called context instructions, or visual, called vertex instructions.
An example of a non-visual instruction would be to set a color.
An example of a visual instruction would be draw a rectangle.

+StackLayout (Feb. 12, 2015, 9:50 p.m.)

1-More flexible than BoxLayout
right to left or left to right
top to bottom or bottom to top
rl-bt, rl-tb, lr-bt, lr-tb (Row-wise)
bt-rl, bt-lr, tb-rl, tb-lr (Column-wise)

+Snippets (Feb. 12, 2015, 9:50 p.m.)

pos_hint: {'x': .1}
size_hint: [.2, 2]
pos_hint: {'center_x': .3}
in kv file:
on_text: my_label.color = [random.random() for i in xrange(3)] + [1]

+on_touch_up vs on_release (Feb. 12, 2015, 9:49 p.m.)

When using on_touch_up event with partial, you have to pass three arguments to the calling method:

button.ids.speaker_button.bind(on_touch_up=partial(self.speak_word, main_word))

def speak_word(word, arg1, arg2): # I don't know yet what these two extra args are used for.

After touching the button, all the same buttons on the page are also triggered. You have to solve it using something like this:
on_touch_up: vibrate() if self.collide_point(*args[1].pos) else None
But using on_release, two args are passed:
button.ids.speaker_button.bind(on_release=partial(self.speak_word, main_word))

def speak_word(word, button):

After clicking, the only button which has been touched, will be triggered. That's good!

+Partial (Feb. 12, 2015, 9:49 p.m.)

In Kivy, you register a button release callback with the “bind()” function:
But the signature of the “on_release” method is “on_release(self)”, which means that the method you provide will receive only one parameter — the button that generated the event. When you release the button, Kivy will invoke your callback method and pass in the button that you released.

So does this mean we can’t pass user-defined parameters to our handlers? Does it mean we need to use globals or a bunch of specialized methods to write our button handlers? No, this is where Python’s functools.partial comes in handy.

To oversimplify, partial allows you to create a function with one set of arguments that calls another function with a different set of arguments. For example, consider the following function that takes two arguments:

def addTwoNumbers(x, y):
print "x: %d, y: %d" % (x, y)
return x+y
You can create a partial from this that automatically supplies one or more of the arguments. Let’s create one that supplies ’1′ for ‘x’:

addOne = partial(addTwoNumbers, 1)
Which you would then invoke as such:

>>> #We pass in '2' for 'y' here. The partial fills in '1' for 'x'
>>> addOne(2)
x: 1, y: 2
Let’s create a function that can set any label to any text:

def changeLabel(label, text, button):
#Kivy gives us 'button' to let us know which button
# caused the event, but we don't use it
label.text = text

In our UI setup, we can then bind two different buttons to this handler, creating partials that supply values for the extra arguments:

startButton = Button(text='Start Car')
stopButton = Button(text='Stop Car')

startButton.bind(on_release=partial(changeLabel, statusLabel, "Starting Car..."))

stopButton.bind(on_release=partial(changeLabel, statusLabel, "Stopping Car..."))
Now, by inspecting the setup code, it’s fairly easy to see what the UI does when various events occur. We can even extend this further to perform an action after setting the label:

def changeLabelAndRun(label, text, command, button):
label.text = text

This allows our setup code to specify a UI behavior and trigger an action (assume ‘startCar’ and ‘stopCar’ have been defined as functions elsewhere):

startButton.bind(on_release=partial(changeLabelAndRun, statusLabel, "Starting Car...", startCar))

stopButton.bind(on_release=partial(changeLabelAndRun, statusLabel, "Stopping Car...", stopCar))
Unlike C, there’s no casting, no packing things into structs, and it’s easy to extend for different needs. Snazzy! This might not scale perfectly to complicated UI interactions, but it greatly simplifies straightforward event processing, making it easier to see at a glance what the application is doing.

+BoxLayout vs. GridLayout (Sept. 9, 2015, 12:47 p.m.)

The widgets in a BoxLayout can have different width and height, but in a GridLayout, each row or column should have the same size.

The widgets in BoxLayout are placed from bottom to top, but those in a GridLayout are placed from top to bottom.

In a BoxLayout the widgets cannot be placed next to each other in the cross direction: they are placed one per row (if orientation is vertical) or one per column (if orientation is horizontal).

+Background Image for Button (Feb. 12, 2015, 9:48 p.m.)

background_normal: 'home_button.png'
background_down: 'home_button_down.png'

+DropDown (Feb. 12, 2015, 9:48 p.m.)

1- First of all, make sure the DropDown is not opened while its widget is not on screen. That is, only instantiate it; do not add_widget it or otherwise trigger it.

2-For getting the data which is passed through `a_button.on_release:'the_value')`, you have to use:
on_select: select_controller(args[1])
on the DropDown. Here is the example:
on_select: select_controller(args[1]) # Try printing `args` to see the whole items.
text: 'Update Database'

+Spinner vs. DropDown (Sept. 9, 2015, 12:44 p.m.)

Spinner is a widget that provides a quick way to select one value from a set. In the default state, a spinner shows its currently selected value. Touching the spinner displays a dropdown menu with all other available values from which the user can select a new one.

+Commands (Feb. 12, 2015, 9:47 p.m.)

buildozer android debug

+Buildozer (Feb. 12, 2015, 9:47 p.m.)

1-git clone
2- Activate a virtualenv (and check that the default `python` command leads to Python 2.7, because buildozer needs Python 2.7)
3- python setup.py install
buildozer init
buildozer android debug
buildozer android logcat
adb logcat
AndroidSDK and AndroidNDK are needed for buildozer, if you have already downloaded them, provide the paths like these:
android.ndk_path = /home/moh3en/Programs/Android/Development/android-ndk-r9c
android.sdk_path = /home/moh3en/Programs/Android/Development/android-sdk-linux

If not, buildozer will try to download them, but unfortunately, because of the embargo, the downloads will fail. So you have to download them through a proxy and untar/unzip them somewhere.
sudo adb uninstall com.nimkatonline.en
sudo adb install bin/NimkatOnline-1.2.0.apk

+Installing python packages (Feb. 12, 2015, 9:46 p.m.)

For installing python packages use this command:
./ -m "kivy requests==2.1.0 SQLAlchemy"

You will need these environment variables:
export ANDROIDSDK="/home/mohsen/Programs/android-sdk-linux"
export ANDROIDNDK="/home/mohsen/Programs/android-ndk-r8c"
export ANDROIDAPI=14

+Python Android Path (Feb. 12, 2015, 9:46 p.m.)

This is the path to the python used for android. Use this path for managing (installing or uninstalling) packages which are going to be installed, packed and used for your app.

+Error ==> Source resource does not exist: python-for-android/dist/default/ (Feb. 12, 2015, 9:43 p.m.)

export ANDROIDAPI=15

+Chat (Feb. 12, 2015, 9:42 p.m.)

<Mohsen_Hassani> Hello guys. I am very new to Kivy. I am using psycopg2 to read data from my remote VPS. I wanted to know if it will work after making apk too?
<brousch> Mohsen_Hassani: Pure Python modules will work fine. I'm not sure if psycopg2 is pure Python
<kovak> Mohsen_Hassani: the first step is to write a recipe for python-for-android to see if you can compile for ARM without any problems
<kovak> I think psycopg2 has C bits
<kovak> if it compiles in arm no problem you are good to go, if not you may need to patch the source
<brousch> However, except in very rare cases, your Android app should not be communicating directly with your database server. There should be a proper API on top of that database
<tito> Mohsen_Hassani: the best shot you have is to put your tgz into a directory, go into the directory, and start python -m SimpleHTTPServer
<tito> then do: URL_python=http://localhost:8000/Python-2.7.2.tar.bz2 URL_hostpython=http://localhost:8000/Python-2.7.2.tar.bz2 ./ -m 'openssl pil kivy'

+Building the application (Feb. 12, 2015, 9:36 p.m.)

cd dist/default
./ --permission INTERNET --orientation sensor --package com.mohsenhassani.notes --name My\ Notes --version 1.0 --dir ~/Projects/kivy_projects/notes/ debug
Install the debug apk to your device:
adb install bin/touchtracer-1.0-debug.apk
/usr/bin/python2.7 --name 'My Notes' --version 1.0 --package com.mohsenhassani.notes --private /home/mohsen/Projects/kivy_projects/notes/.buildozer/android/app --sdk 14 --minsdk 8 --permission INTERNET --icon /home/mohsen/Projects/kivy_projects/notes/./static/icon.png --orientation sensor debug

+Installation (July 17, 2015, 1:26 a.m.)


Installation Steps:
1-apt-get install python-gst0.10-dev python-gst-1.0 freeglut3-dev libsdl-image1.2-dev libsdl-ttf2.0-dev libsdl-mixer1.2-dev libsmpeg-dev libportmidi-dev libswscale-dev libavformat-dev libavcodec-dev libv4l-dev libserf-1-1 libsvn1 subversion openjdk-7-jdk python-pygame
2-Create and activate a virtualenv
3-easy_install requests
4-easy_install -U setuptools
5-pip install cython==0.20
6-pip install pygments
7-pip install --allow-all-external pil --allow-unverified pil

8.1-For installing next step (pygame) you will need to link a file or get the following error. So first create the symlink:
fatal error: linux/videodev.h: No such file or directory:
sudo ln -s /usr/include/libv4l1-videodev.h /usr/include/linux/videodev.h

8.2-pip install pygame (It won't be found or downloaded! You need to download the tar file from and install it using pip install <the_downloaded_tar_file>.)

9-pip install kivy

+Objects Declarations and Companion Objects - Singleton (May 22, 2019, 11:58 p.m.)

When we have just ONE INSTANCE of a class in the whole application.

object MySingleton

object MySingleton {
fun someFunction(...) {...}
}

And then use it:
MySingleton.someFunction()

In Java, we get SINGLETON behavior by using "static" variables and methods.

In Kotlin we use the "object" keyword to declare a singleton.
Contrary to a class, an object can't have any constructor, but init blocks are allowed if some initialization code is needed.

object Customer {
var id: Int = -1 // Behaves like a STATIC variable

init {
// initialization code, run once on first access
}

fun registerCustomer() { // Behaves like a STATIC method
}
}

We don't need to instantiate the class! We call it without creating an instance:
Customer.id = 27


Companion objects are the same as "object" but declared within a class.

class MyClass {
companion object {
var count: Int = -1 // Behaves like a STATIC variable

fun typeOfCustomers(): String { // Behaves like a STATIC method
return "American"
}
}
}



+Data class and Super class "Any" (May 22, 2019, 10:37 p.m.)

The purpose of a data class is to deal with data, not behavior!


var user1 = User("Mohsen", 10)
var user2 = User("Mohsen", 10)

if (user1 == user2) {
// With a plain class this is false (they are not equal): Any.equals compares references. Define User as a data class to make these variables equal.
}

class User(var name: String, var id: Int)



data class User(var name: String, var id: Int)



+lazy initialization (May 22, 2019, 9:09 p.m.)

// If you don't use the following "pi" variable anywhere in your codes, it is a waste of memory.
val pi: Float = 3.14f

You should use lazy initialization (lazy lambda function):
val pi: Float by lazy {
3.14f
}
The first time you use the "pi" variable, it will get initialized.

- "Lazy initialization" was designed to prevent unnecessary initialization of objects.

- Your variables will not be initialized unless you use it in your code.

- It is initialized only once. Next time when you use it, you get the value from cache memory.

- It is thread-safe.
It is initialized in the thread where it is used for the first time.
Other threads can use the same value stored in the cache.

- The variable must be val; lazy delegation is read-only.

- The variable can be nullable or non-nullable data types.

+lateinit keyword (May 22, 2019, 9:04 p.m.)

- lateinit used only with mutable data type [ var ]
- lateinit used only with non-nullable data type
- lateinit values must be initialized before you use them

class Country {
lateinit var name: String
}

+Null Safe (May 22, 2019, 8:46 p.m.)

We have a lot of null-safety operators which help us avoid the NullPointerException:
?. Safe Call Operator

?: Elvis

!! Not-null Assertion

?.let { .. } Safe Call with let


val name: String = null // We can't do this.

val name: String? = null // Now it will accept null values


1- Safe Call (?. )
- Returns the length if "name" is not null else returns NULL
- Use it if you don't mind getting NULL value

println("The length of name is ${name?.length}") // prints null, because name was assigned null above


2- Safe Call with let ( ?.let )
- It executes the block ONLY IF name is NOT NULL

name?.let {
println("The length of name is ${it.length}") // inside let, "it" is the non-null value
}


3- Elvis-operator ( ?: )
- When we have nullable reference "name", we can say "if name is not null", use it, otherwise use some non-null value.

val len = if (name != null) name.length else -1

OR (the above code can be simplified as follow):

val len = name?.length ?: -1


4- Non-null assertion operator ( !! )
// Use it when you are sure the value is NOT Null
// Throws NullPointerException if the value is found to be NULL.

println("The length of name is ${name!!.length}")


+Predicates: a condition returning TRUE or FALSE (May 22, 2019, 8:35 p.m.)

"all": Do all elements satisfy the predicate/condition?

"any": Does any element in the list satisfy the predicate?

"count": Total elements that satisfy the predicate

"find", "last": Return the FIRST/LAST element that satisfies the predicate


val myNumbers = listOf( 2, 3, 4, 6, 23, 90)

val check1: Boolean = myNumbers.all { it > 10 } // or all({ it > 10 }) // Returns false


val check2: Boolean = myNumbers.any( { num -> num > 10 } ) // or { it > 10 } // Returns true


val totalCount: Int = myNumbers.count { it > 10 }


// Returns the first number that matches the predicate
val num: Int? = myNumbers.find { it > 10 }


Store lambda function as a variable:

val myPredicate = { num: Int -> num > 10 }


+Filter and Map using Lambdas (May 22, 2019, 8:21 p.m.)

val myNumbers: List<Int> = listOf(2, 3, 4, 5, 23, 90)

val mySmallNums = myNumbers.filter { it < 10 } // or { num -> num < 10 }

for (num in mySmallNums) {
println(num) // Will print 2, 3, 4, 5
}


val mySquareNums = myNumbers.map { it * it } // or map { num -> num * num }

will return 4, 9, 16, 25, and so on...


val mySmallSquareNums = myNumbers.filter { it < 10 }.map { it * it }


var people: List<Person> = listOf<Person>(Person(23, "Mohsen"), Person(30, "Ali"))

var names = people.map { p -> p.name } // or map { it.name }

var names = people.filter { person -> person.name.startsWith("M") }.map { person.name }


+Collections - Set and Hash Set (May 22, 2019, 8:11 p.m.)

// "Set" contains unique elements
// "HashSet" also contains unique elements but sequence is not guaranteed in output

// The "9"s will be unified: only ONE 9 is kept.
var mySet = setOf<Int>( 2, 9, 7, 1, 9, 14, 0, 9 ) // Immutable, Read Only

for (element in mySet) {
println(element)
}

var mySet = mutableSetOf<Int>( 2, 9, 7, 1, 9, 14, 0, 9 ) // Mutable Set, Read and Write


// HashSet, the sequence is not guaranteed in output.
var mySet = hashSetOf<Int>( 2, 9, 7, 1, 9, 14, 0, 9 ) // Mutable Set


+Collections - Map and Hash Map (May 22, 2019, 4:41 p.m.)

// Immutable, Fixed Size, Read Only
var myMap = mapOf<Int, String>(2 to "Mohsen", 7 to "Mehdi")

for (key in myMap.keys) {
println(myMap[key]) // myMap.get(key)
println("Element at Key: $key = ${myMap.get(key)}") // ${myMap[key]}
}


// Mutable, Read and Write both, No Fixed Size
var myMap = HashMap<Int, String>() // You can also use mutableMapOf and hashMapOf
myMap.put(4, "Mohsen")
myMap.put(7, "Mehdi")

myMap.replace(4, "Akbar")
myMap.put(4, "Akbar")


+Collections - List and ArrayList (May 22, 2019, 4:16 p.m.)

Immutable Collections: Read Only Operations
- Immutable List: listOf
- Immutable Map: mapOf
- Immutable Set: setOf

Mutable Collections: Read and Write Both
- Mutable List: ArrayList, arrayListOf, mutableListOf
- Mutable Map: HashMap, hashMapOf, mutableMapOf
- Mutable Set: mutableSetOf, hashSetOf



var list = mutableListOf<String>("Mohsen", "Alex", "Hadi", "Mehdi")
list.add(3, "Akbar")
list[2] = "Asghar"


An array with 5 elements, all values are zero.
var myArray = Array<Int>(5) { 0 } // Mutable. Fixed Size.

myArray[0] = 32
myArray[3] = 54


for (element in myArray) {
println(element)
}

for (index in 0..myArray.size - 1) { }



// Fixed Size, Read Only, Immutable
var list = listOf<String>("Mohsen", "Alex", "Hadi", "Mehdi")


ArrayList is an implementation of the MutableList interface in Kotlin:

class ArrayList<E> : MutableList<E>, RandomAccess

MutableList should be chosen whenever possible, but ArrayList is a MutableList. So if you're already using ArrayList, there's really no reason to use MutableList instead, especially since you can't actually directly create an instance of it (MutableList is an interface, not a class).

In fact, if you look at the mutableListOf() Kotlin extension method:

public inline fun <T> mutableListOf(): MutableList<T> = ArrayList()

you can see that it just returns an ArrayList of the elements you supplied.


+WITH and APPLY Lambdas (May 22, 2019, 4:14 p.m.)

fun main() {
var person = Person()

with(person) { // "with" lets you drop the "person." prefix (person.name, person.age). Neater.
name = "Mohsen"
age = 33
}

person.apply { // "apply" does the same and also returns the receiver, so you can chain method calls.
name = "Mohsen"
age = 33
}.someMethod()
}

class Person {
var name: String = ""
var age: Int = 0

fun someMethod() {
println("Some string")
}
}

+tailrec - Tail recursive functions (May 18, 2019, 3 p.m.)

When a function is marked with the tailrec modifier the compiler optimises out the recursion, leaving behind a fast and efficient loop based version instead.
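A minimal sketch of the idea (the factorial function below is my own illustration, not from these notes): the recursive call must be the last operation, which is what lets the compiler turn the recursion into a loop.

```kotlin
// tailrec only applies when the recursive call is the LAST operation.
// The accumulator carries the partial result, so nothing remains to do
// after the recursive call returns.
tailrec fun factorial(n: Long, accumulator: Long = 1): Long =
    if (n <= 1) accumulator else factorial(n - 1, n * accumulator)

fun main() {
    println(factorial(5)) // 120
    factorial(1_000_000)  // completes without StackOverflowError: the compiler emitted a loop
}
```

Without the tailrec modifier, a depth of 1,000,000 would overflow the call stack.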

+Infix Functions (May 18, 2019, 2:24 p.m.)

Infix functions can be member functions or extension functions.
They take a SINGLE parameter.
They are marked with the "infix" keyword.

Not all infix functions are extension functions, and not all extension functions are infix functions.
An infix function can only have ONE parameter.


infix fun Int.greaterValue(number: Int): Int {
if (this > number)
return this
return number
}

Then you can use it like this:
val x: Int = 6
val y: Int = 10

val greaterVal = x.greaterValue(y)


val greaterVal = x greaterValue y

+Extension Functions (May 18, 2019, 2:22 p.m.)

Adds new function to the classes:
- You can "add" functions to a class without modifying or inheriting from it.
- The added functions behave like "static" helpers, resolved at compile time.
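A small sketch (hasVowels is an invented example): the function is declared outside String but is called as if it were a member.

```kotlin
// Adds hasVowels() to String without modifying or subclassing String.
// Inside the body, "this" refers to the receiver string.
fun String.hasVowels(): Boolean = this.any { it in "aeiou" }

fun main() {
    println("kotlin".hasVowels()) // true
    println("xyz".hasVowels())    // false
}
```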

+Functions as Expressions - One line functions (May 18, 2019, 1:24 p.m.)

fun max(a: Int, b: Int): Int = if (a > b) a else b


fun max(a: Int, b: Int): Int
= if (a > b) {
print("$a is greater")
a // the last expression is the branch's value
} else {
print("$b is greater")
b
}
+Functions and Methods (May 18, 2019, 1:13 p.m.)

fun findArea(length: Int, breadth: Int): Int {
return length * breadth
}

fun findArea(length: Int, breadth: Int): Unit {
print(length * breadth)
}

Unit is the same as void in Java.

+BREAK statement with LABELED FOR Loop (May 18, 2019, 1:09 p.m.)

myLoop@ for (i in 1..3) {
for (j in 1..3) {
println("$i $j")
if (i == 2 && j == 2)
break@myLoop
}
}

It will BREAK upon reaching "2 2", printing:
1 1
1 2
1 3
2 1
2 2

+do-while (May 18, 2019, 1:07 p.m.)

var i: Int = 1

do {
println(i)
i++
} while (i <= 10)

+when (May 18, 2019, 1:01 p.m.)

when (x) {
in 1..20 -> println("A message")
!in 5..9 -> println("Another message")
2 -> {
// block body
}
4 -> str = "A string value"
else -> {
// default branch
}
}

+Ranges (May 18, 2019, 12:51 p.m.)

val r1 = 1..5 // 1, 2, 3, 4, 5

val r2 = 5 downTo 1 // 5, 4, 3, 2, 1

val r3 = 5 downTo 1 step 2 // 5, 3, 1

var r4 = 'a'..'z' // "a", "b", "c", .... "z"

var isPresent = 'c' in r4

var countDown = 10.downTo(1) // 10, 9, 8, .... 1

var moveUp = 1.rangeTo(10) // 1, 2, 3, ..... 10

+Class and Function Class (May 18, 2019, 12:38 p.m.)

class Person {
var name: String = ""
}

var personObj = Person()
personObj.name = "Mohsen"
print("My name is ${personObj.name}")


class Student constructor(name: String) {
init {
println("The student name is $name")
}
}

You can also drop the constructor:

class Student(name: String) {
init {
println("The student name is $name")
}

// Secondary constructor
constructor(name: String, id: Int): this(name) {
// The body of the secondary constructor is called after the init block
}
}

// constructor(my_name: String, var id: Int): this(my_name) // var is not allowed on secondary-constructor parameters.
// Instead of putting var in the parameter list, declare a property and assign it in the body:
// this.id = id


By default all classes are "public" and "final" which means you can not inherit from a class.

public final class Student {
public final val name: String = ""
}

You can drop "public final" keywords.


For inheritance you need to make a class "open".

open class Human { }

class Student: Human() { }



open class Animal {
open fun eat() {
println("Animal Eating")
}
}

class Dog: Animal() {
override fun eat() {
println("Dog is eating")
// If the class definition also lists interfaces, disambiguate the parent call with super<Animal>.eat()
}
}


Visibility Modifiers:

public // This is the default

open class Person {
private val a = 1
protected val b = 2
internal val c = 3
val d = 10 // public by default
}

class Indian: Person() {
// a is not visible
// b, c, d are visible
}


+Variables and Data Types (May 18, 2019, 12:34 p.m.)

var age = 33 // Int

var grade = 21.5 // Double (floating-point literals are Double by default; use 21.5F for Float)
var myName: String // Mutable String
myName = "Mohsen"
myName = "MohseNN"

val myFamilyName = "Hassani" // Immutable String

var gender: Char = 'M'

var percentage: Double = 90.78

var marks: Float = 97.4F

var isStudying: Boolean = true

+Static Members for class (May 17, 2019, 12:03 p.m.)

Most programming languages have a concept of static members: fields that are created only once per class and can be accessed without an instance of their containing class.

Kotlin doesn't have static members for classes, meaning you can't create static methods or static variables in a Kotlin class.

Fortunately, Kotlin object can handle this. If you declare a companion object inside your class, you'll be able to call its members with the same syntax as calling static methods in Java/C#, using only the class name as a qualifier.

class MyClass {
companion object {
val info = "This is info"
fun getMoreInfo(): String { return "This is more fun" }
}
}

MyClass.info // This is info
MyClass.getMoreInfo() // This is more fun

Note that, even though the members of companion objects look like static members in other languages, at runtime those are still instance members of real objects, and can, for example, implement interfaces.
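That last point can be sketched like this (the Factory interface and the names below are illustrative, not from these notes): the companion is a real object, so it can be used wherever the interface is expected.

```kotlin
interface Factory<T> {
    fun create(name: String): T
}

class Customer(val name: String) {
    // The companion object is a real singleton instance,
    // so it can implement an interface.
    companion object : Factory<Customer> {
        override fun create(name: String): Customer = Customer(name)
    }
}

fun main() {
    // Looks like a static call...
    val c1 = Customer.create("Mohsen")
    // ...but the companion itself can be passed around as an interface instance.
    val factory: Factory<Customer> = Customer
    val c2 = factory.create("Mehdi")
    println(c1.name) // Mohsen
    println(c2.name) // Mehdi
}
```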

+for Loop / Iteration (May 10, 2019, 11:22 a.m.)

for (item in collection) {
// body of loop
}


Iterate Through a Range:

fun main(args: Array<String>) {
for (i in 1..5) {
println(i)
}
}


If the body of the loop contains only one statement (like above example), it's not necessary to use curly braces { }.

fun main(args: Array<String>) {
for (i in 1..5) println(i)
}


for (i in 1..5) print(i)

for (i in 5 downTo 1) print(i)

for (i in 1..5 step 2) print(i)

for (i in 5 downTo 1 step 2) print(i)


Iterating Through an Array:

var language = arrayOf("Ruby", "Kotlin", "Python", "Java")
for (item in language)
println(item)


Iterate through an array with an index:

var language = arrayOf("Ruby", "Kotlin", "Python", "Java")

for (item in language.indices) {
// printing array elements having even index only
if (item % 2 == 0)
println(language[item])
}


Iterating Through a String:

var text = "Kotlin"
for (letter in text) {
println(letter)
}


+List (May 10, 2019, 10:41 a.m.)

List is immutable by default; the mutable version of List is called MutableList!

val list: List<String> = ArrayList()
In this case you will not get an add() method as list is immutable.


val list: MutableList<String> = ArrayList()
Now you will see an add() method and you can add elements to list.


MUTABLE collection:
val list = mutableListOf(1, 2, 3)
list += 4


IMMUTABLE collection:
var list = listOf(1, 2, 3)
list += 4 // compiles because list is a var: a new list is created and reassigned


+Getters and setters (May 9, 2019, 4:08 a.m.)

If you are calling
var side: Int = square.a

it does not mean that you are accessing a directly. It is the same as:
int side = square.getA();

in Java, because Kotlin autogenerates default getters and setters.

In Kotlin you only need to write a getter or setter yourself when it needs special behavior; otherwise, Kotlin autogenerates it for you.
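A sketch of the "special" case (the Square class below is an invented example echoing square.a above): a setter that validates input and a computed read-only getter.

```kotlin
class Square {
    var a: Int = 1
        set(value) {
            // Custom setter: ignore non-positive sides instead of storing them.
            if (value > 0) field = value
        }

    val area: Int
        get() = a * a // Custom getter: computed on every access, no backing field.
}

fun main() {
    val square = Square()
    square.a = 4   // goes through the custom setter
    square.a = -3  // rejected by the setter, a stays 4
    println(square.a)    // 4
    println(square.area) // 16
}
```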

+Null Operators ? !! (May 9, 2019, 3:36 a.m.)

What is the meaning of ? in savedInstanceState: Bundle? ?
It means that savedInstanceState parameter can be Bundle type or null. Kotlin is null safety language.

var a : String // you will get a compilation error, cause a must be initialized and it cannot be null.

That means you have to write:
var a : String = "Init value"

Also, you will get a compilation error if you do:
a = null

To make a nullable, you have to write:
var a : String?

Let’s say that we have a nullable nameTextView. The following code will give us an NPE if it is null:

nameTextView.text = "Hello"

Kotlin will not allow us to even do such a thing. It will force us to use the ? or !! operator.
If we use the ? operator:

nameTextView?.text = "Hello"

the line is executed only if nameTextView is not null. In the other case, if we use the !! operator:

nameTextView!!.text = "Hello"

it will give us an NPE if nameTextView is null. It is just for adventurers.

lateinit modifier allows us to have non-null variables waiting for initialization.
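The operators can be tried outside Android by swapping the TextView for a plain String? (all names here are illustrative):

```kotlin
class Screen {
    lateinit var title: String   // non-null, but initialized later (lateinit)
}

fun main() {
    var name: String? = null

    // ?. safe call: evaluates to null instead of throwing
    println(name?.length)        // null

    // ?: elvis operator supplies a fallback for the null case
    println(name?.length ?: 0)   // 0

    name = "Kotlin"
    println(name?.length)        // 6

    // !! asserts non-null: fine here, but throws an NPE when name is null
    println(name!!.length)       // 6

    val screen = Screen()
    screen.title = "Home"        // must be assigned before the first read
    println(screen.title)        // Home
}
```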

Kotlin - Android
+Components of a RecyclerView (June 22, 2019, 2:56 p.m.)

1- LayoutManagers:

A RecyclerView needs to have a layout manager and an adapter to be instantiated. A layout manager positions item views inside a RecyclerView and determines when to reuse item views that are no longer visible to the user.

RecyclerView provides these built-in layout managers:
- LinearLayoutManager shows items in a vertical or horizontal scrolling list.
- GridLayoutManager shows items in a grid.
- StaggeredGridLayoutManager shows items in a staggered grid.

To create a custom layout manager, extend the RecyclerView.LayoutManager class.


2- RecyclerView.Adapter

RecyclerView includes a new kind of adapter. It’s a similar approach to the ones you already used, but with some peculiarities, such as a required ViewHolder. You will have to override two main methods: one to inflate the view and its view holder, and another one to bind data to the view. The good thing about this is that the first method is called only when we really need to create a new view. No need to check if it’s being recycled.
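The create-once/bind-many contract can be sketched without the Android SDK. MiniAdapter and Holder below are made-up plain-Kotlin stand-ins for RecyclerView.Adapter and RecyclerView.ViewHolder (not the real API); they show the "create" callback running only when no recycled holder is available, while "bind" runs for every row:

```kotlin
// Plain-Kotlin stand-in for a ViewHolder: just carries the row's data
class Holder(var text: String = "")

class MiniAdapter(private val items: List<String>) {
    var created = 0
        private set

    fun onCreateViewHolder(): Holder {
        created++            // "inflating" a view is expensive; count it
        return Holder()
    }

    fun onBindViewHolder(holder: Holder, position: Int) {
        holder.text = items[position]   // cheap: just refresh the data
    }
}

fun main() {
    val adapter = MiniAdapter(listOf("Ruby", "Kotlin", "Python", "Java"))
    val pool = ArrayDeque<Holder>()   // recycled holders

    // Show rows 0..3, recycling holders as rows "scroll off screen"
    for (pos in 0..3) {
        val holder = pool.removeFirstOrNull() ?: adapter.onCreateViewHolder()
        adapter.onBindViewHolder(holder, pos)
        if (pos >= 1) pool.addLast(holder)   // row left the screen: recycle
    }
    println(adapter.created)   // 2 -- fewer creates than the 4 binds
}
```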


3- ItemAnimator

RecyclerView.ItemAnimator will animate ViewGroup modifications such as add/delete/select that are notified to the adapter. DefaultItemAnimator can be used for basic default animations and works quite well.


+RecyclerView Compared to ListView (June 22, 2019, 2:48 p.m.)

RecyclerView differs from its predecessor ListView primarily in the following ways:

- Required ViewHolder in Adapters - ListView adapters do not require the use of the ViewHolder pattern to improve performance. In contrast, implementing an adapter for RecyclerView requires the use of the ViewHolder pattern, for which it uses RecyclerView.ViewHolder.

- Customizable Item Layouts - ListView can only layout items in a vertical linear arrangement and this cannot be customized. In contrast, the RecyclerView has a RecyclerView.LayoutManager that allows any item layouts including horizontal lists or staggered grids.

- Easy Item Animations - ListView contains no special provisions through which one can animate the addition or deletion of items. In contrast, the RecyclerView has the RecyclerView.ItemAnimator class for handling item animations.

- Manual Data Source - ListView had adapters for different sources such as ArrayAdapter and CursorAdapter for arrays and database results respectively. In contrast, the RecyclerView.Adapter requires a custom implementation to supply the data to the adapter.

- Manual Item Decoration - ListView has the android:divider property for easy dividers between items in the list. In contrast, RecyclerView requires a RecyclerView.ItemDecoration object to set up dividers, which is a more manual process.

- Manual Click Detection - ListView has an AdapterView.OnItemClickListener interface for binding to the click events of individual items in the list. In contrast, RecyclerView only has support for RecyclerView.OnItemTouchListener, which manages individual touch events but has no built-in click handling.

+Difference between gravity and layout_gravity (June 12, 2019, 3:47 a.m.)


android:gravity

- sets the gravity of the contents (i.e. its subviews) of the View it's used on.

- arranges the content inside the view.


android:layout_gravity

- sets the gravity of the View or Layout relative to its parent.

- arranges the view's position outside of itself.


HTML/CSS Equivalents:

Android                  CSS
android:layout_gravity   float
android:gravity          text-align

+Retrofit (May 25, 2019, 10:53 a.m.)

1- Create an interface
that contains the functions mapping to the endpoint URLs of your web service.

2- Create a service that calls the functions present within the interface.
createService( <T> Service) -> studentsService

3- Last step, within your activity, you have to initialize the step-2 service and then call the functions of the interface in step-1.
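A minimal sketch of the three steps, assuming the retrofit2 and Gson-converter dependencies are on the classpath; StudentsService, Student, and the base URL are made-up examples:

```kotlin
import retrofit2.Call
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory
import retrofit2.http.GET
import retrofit2.http.Path

data class Student(val id: Int, val name: String)

// Step 1: the interface maps functions to endpoint URLs
interface StudentsService {
    @GET("students")
    fun listStudents(): Call<List<Student>>

    @GET("students/{id}")
    fun getStudent(@Path("id") id: Int): Call<Student>
}

// Step 2: build a Retrofit instance and create the service
val retrofit: Retrofit = Retrofit.Builder()
    .baseUrl("")        // hypothetical base URL
    .addConverterFactory(GsonConverterFactory.create())
    .build()

val studentsService: StudentsService = retrofit.create(

// Step 3: from your activity, enqueue() a call for an asynchronous
// request, or execute() it on a background thread for a synchronous one.
```

enqueue() runs the request off the main thread and delivers the result via a callback, which is what you normally want from an activity.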

+Shared Preferences (May 14, 2019, 12:54 a.m.)

It allows activities and applications to keep preferences, in the form of key-value pairs similar to a Map that will persist even when the user closes the application.

Android stores Shared Preferences settings as XML file in shared_prefs folder under DATA/data/{application package} directory. The DATA folder can be obtained by calling Environment.getDataDirectory().


SharedPreferences is application specific, i.e. the data is lost on performing one of the following options:
- on uninstalling the application
- on clearing the application data (through Settings)


As the name suggests, the primary purpose is to store user-specific configuration details, such as settings, or to keep the user logged into the application.


To get access to the preferences, we have three APIs to choose from:
- getPreferences() : used from within your Activity, to access activity-specific preferences

- getSharedPreferences() : used from within your Activity (or other application Context), to access application-level preferences

- getDefaultSharedPreferences() : used on the PreferenceManager, to get the shared preferences that work in concert with Android’s overall preference framework


// Storing Data:
val sharedPref = getSharedPreferences(getString(R.string.preference_file_key), MODE_PRIVATE)
with(sharedPref.edit()) {
    putBoolean("intro_screen_displayed", true)
    apply()
}

// Retrieving Data
var sharedPref = getSharedPreferences(getString(R.string.preference_file_key), MODE_PRIVATE)
if (sharedPref.getBoolean("intro_screen_displayed", false))


editor.putBoolean("key_name", true); // Storing boolean - true/false
editor.putString("key_name", "string value"); // Storing string
editor.putInt("key_name", 42); // Storing integer
editor.putFloat("key_name", 1.5f); // Storing float
editor.putLong("key_name", 100000L); // Storing long

pref.getString("key_name", null); // getting String
pref.getInt("key_name", -1); // getting Integer
pref.getFloat("key_name", -1f); // getting Float
pref.getLong("key_name", -1L); // getting Long
pref.getBoolean("key_name", false); // getting boolean


// Clearing or Deleting Data:
editor.remove("key_name") // deletes that particular value
editor.clear()            // removes all data
editor.apply()


+Repeat background image (May 11, 2019, 10:11 p.m.)

1- Copy the background image in drawable

2- Create a file in drawable "bg_pattern.xml" with this content:
<bitmap xmlns:android=""
    android:src="@drawable/bg_image"
    android:tileMode="repeat" />

3- Add the following attribute to the XML of the specific view:
android:background="@drawable/bg_pattern"
+Get asset image by its string name (May 11, 2019, 4:17 p.m.)


var icon: Bitmap? = BitmapFactory.decodeStream("intro_screen/img1.jpg"))

+dimensions (May 10, 2019, 9:51 p.m.)

xxxhdpi: 1280x1920 px
xxhdpi: 960x1600 px
xhdpi: 640x960 px
hdpi: 480x800 px
mdpi: 320x480 px
ldpi: 240x320 px

+mipmap directories (May 10, 2019, 9:40 p.m.)

Like all other bitmap assets, you need to provide density-specific versions of your app icon. However, some app launchers display your app icon as much as 25% larger than what's called for by the device's density bucket.

For example, if a device's density bucket is xxhdpi and the largest app icon you provide is in drawable-xxhdpi, the launcher app scales up this icon, and that makes it appear less crisp. So you should provide an even higher density launcher icon in the mipmap-xxxhdpi directory. Now the launcher can use the xxxhdpi asset instead.

Because your app icon might be scaled up like this, you should put all your app icons in mipmap directories instead of drawable directories. Unlike the drawable directory, all mipmap directories are retained in the APK even if you build density-specific APKs. This allows launcher apps to pick the best resolution icon to display on the home screen.

+Configuration qualifiers for different pixel densities (May 10, 2019, 9:31 p.m.)

ldpi Resources for low-density (ldpi) screens (~120dpi).
mdpi Resources for medium-density (mdpi) screens (~160dpi). (This is the baseline density.)
hdpi Resources for high-density (hdpi) screens (~240dpi).
xhdpi Resources for extra-high-density (xhdpi) screens (~320dpi).
xxhdpi Resources for extra-extra-high-density (xxhdpi) screens (~480dpi).
xxxhdpi Resources for extra-extra-extra-high-density (xxxhdpi) screens (~640dpi).
nodpi Resources for all densities. These are density-independent resources. The system does not scale resources tagged with this qualifier, regardless of the current screen's density.
tvdpi Resources for screens somewhere between mdpi and hdpi; approximately 213dpi. This is not considered a "primary" density group. It is mostly intended for televisions and most apps shouldn't need it—providing mdpi and hdpi resources is sufficient for most apps and the system will scale them as appropriate. If you find it necessary to provide tvdpi resources, you should size them at a factor of 1.33*mdpi. For example, a 100px x 100px image for mdpi screens should be 133px x 133px for tvdpi.

+ConstraintLayout (March 24, 2019, 2:45 p.m.)

Constraints help us describe the relations between views.


A constraint is a connection or an alignment to the element the constraint is tied to. You define various constraints for every child view relative to other views present. This gives you the ability to construct complex layouts with a flat view hierarchy.

A constraint is simply a relationship between two components within the layout that controls how the view will be positioned.


The ConstraintLayout system has three parts: constraints, equations, and solver.

Constraints are relationships between your views and are determined when you set up your UI. Once you create these relationships, the system will translate them into a linear system of equations.

The equations go in the solver and it returns the positions, and view sizes to be used in the layout.


ConstraintLayout becomes especially useful when building complex layouts. Android has other layouts with their own unique features, some of which can also be used to build complex layouts; however, they have their own bottlenecks, hence the need for a new layout.

These older layouts have rules that tend to be too rigid. As a result of this, the tendency to nest layouts become higher. For instance, the LinearLayout only permits placing views linearly, either horizontally or vertically. The FrameLayout places views in a stacked manner, the topmost view hides the rest. The RelativeLayout places the views relative to each other.


When creating constraints, there are a few rules to follow:
Every view must have at least two constraints: one horizontal and one vertical. If a constraint for any axis is not added, your view jumps to the zero point of that axis.

You can create constraints only between a constraint handle and an anchor point that share the same plane. So a vertical plane (the left and right sides) of a view can be constrained only to another vertical plane, and baselines can constrain only to other baselines.

Each constraint handle can be used for just one constraint, but you can create multiple constraints (from different views) to the same anchor point.
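The rules above in a minimal layout sketch (the button id and text are made up): the Button gets exactly one horizontal and one vertical constraint, both anchored to the parent.

```xml
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android=""
    xmlns:app=""
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- One horizontal (Start->Start) and one vertical (Top->Top) constraint -->
    <Button
        android:id="@+id/sendButton"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Send"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
```

Remove either constraint and the view snaps to the zero point of that axis, as described above.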


+Custom font (April 26, 2019, 10:46 p.m.)

+Creating actions in the action bar / toolbar (April 26, 2019, 12:40 a.m.)

Buttons in the toolbar are typically called actions.

Space in the app bar is limited. If an app declares more actions than can fit in the app bar, the app bar sends the excess actions to an overflow menu.

The app can also specify that an action should always be shown in the overflow menu, instead of being displayed on the app bar.


Add Action Buttons:

All action buttons and other items available in the action overflow are defined in an XML menu resource.

To add actions to the action bar, create a new XML file in your project's res/menu/ directory as follows:
1- In Android Studio, in project view, select "Project", right click on "res" folder and choose the menu "New" -> "Android Resource File".

2- In the window for "file name" set for example "main_toolbar" and for "Resource type" choose "menu", hit OK button.

3- Add an <item> element for each item you want to include in the action bar, as shown in this code example of a menu XML file:

<menu xmlns:android=""
    xmlns:app="" >

    <!-- Settings, should always be in the overflow -->
    <item android:id="@+id/action_settings"
        android:title="@string/action_settings"
        app:showAsAction="never" />
</menu>


4- Add the following code to MainActivity.kt:
override fun onCreateOptionsMenu(menu: Menu): Boolean {
    menuInflater.inflate(, menu)
    return true
}

// onCreateOptionsMenu() sits at the class level, next to onCreate():
override fun onCreate(savedInstanceState: Bundle?) { }

+Set up the app bar (Toolbar) (April 26, 2019, 12:20 a.m.)

1- Replace android:theme="@style/AppTheme" with android:theme="@style/Theme.AppCompat.Light.NoActionBar" in AndroidManifest.xml

2- Add a Toolbar to the activity's layout (activity_main.xml)

It might display an error about "This view is not constrained vertically...", for fixing the error:
Go to Design View, use the magic wand icon in the toolbar menu above the design preview. This will automatically add some lines in the text field and the red line will be removed.

You can also set the Toolbar background color to transparent:
android:background="@android:color/transparent"

3- In MainActivity.kt, inside onCreate() after setContentView(), register the Toolbar as the app bar (assuming its id is "toolbar"):
setSupportActionBar(findViewById(

+Views (April 25, 2019, 1:52 p.m.)

A view is basically any of the widgets that make up a typical utility app.

Examples include images (ImageViews), text (TextView), editable text boxes (EditText), web pages (WebViews), and buttons (err, Button).

+XML - Introduction (April 25, 2019, 1:44 p.m.)

XML describes the views in your activities, and Kotlin tells them how to behave.

Sometimes XML will be used to describe types of data other than views in your apps; acting as a kind of index that your code can refer to. This is how most apps will define their color palettes for instance, meaning that there’s just one file you need to edit if you want to change the look of your entire app.

+Install WINE on Kubuntu (May 6, 2020, 11:42 a.m.)

dpkg --add-architecture i386

apt-get -y install software-properties-common wget

wget -qO - | sudo apt-key add -

apt-add-repository 'deb bionic main'

add-apt-repository ppa:cybermax-dexter/sdl2-backport

apt install --install-recommends winehq-stable

+Kubuntu - Upgrade (April 26, 2020, 10:05 a.m.)

Test correct version is found:
do-release-upgrade -c

Upgrade in case the correct version is shown:
do-release-upgrade


Upgrade to the development release:

do-release-upgrade -d


If you wish to do the entire upgrade in a terminal:

do-release-upgrade -m desktop


+OpenConnect VPN Server (April 24, 2020, 10:03 p.m.)


1- apt install ocserv

2- Check its status:
systemctl status ocserv

If not started, use the following command to start the service:
systemctl start ocserv

By default OpenConnect VPN server listens on TCP and UDP port 443. If it’s being used by the web server, then the VPN server can’t be started. (Fix this problem in step 5).

3- Installing Let’s Encrypt Client (Certbot):
apt install software-properties-common
add-apt-repository ppa:certbot/certbot
apt update
apt install certbot

4- Obtaining a TLS Certificate from Let’s Encrypt:
If there’s no web server running on your Ubuntu 16.04/18.04 server and you want OpenConnect VPN server to use port 443, then you can use the standalone plugin to obtain TLS certificate from Let’s Encrypt. Run the following command. Don’t forget to set A record for your domain name.

sudo certbot certonly --standalone --preferred-challenges http --agree-tos --email your-email-address -d

5- If you had a problem in step 2, follow this step. If not, skip this step.

If your server has a web-server listening on port 80 and 443, and you want OpenConnect VPN server to use a different port, then it’s a good idea to use the webroot plugin to obtain a certificate because the webroot plugin works with pretty much every web server and we don’t need to install the certificate in the web server.

First, you need to create a virtual host for

If you are using Nginx, then

sudo nano /etc/nginx/conf.d/

Paste the following lines into the file.

server {
listen 80;

root /var/www/;

location ~ /.well-known/acme-challenge {
allow all;

Save and close the file. Then create the web root directory.

sudo mkdir -p /var/www/

Set www-data (Nginx user) as the owner of the web root.

sudo chown www-data:www-data /var/www/ -R

Reload Nginx for the changes to take effect.

sudo systemctl reload nginx

Once virtual host is created and enabled, run the following command to obtain Let’s Encrypt certificate using webroot plugin.

sudo certbot certonly --webroot --agree-tos --email your-email-address -d -w /var/www/

6- Editing OpenConnect VPN Server Configuration File:
vim /etc/ocserv/ocserv.conf

Comment the line:
auth = "pam[gid-min=1000]"
Uncomment and edit two lines below it, to:
auth = "plain[passwd=./ocpasswd]"

server-cert = /etc/letsencrypt/csr/0000_csr-certbot.pem
server-key = /etc/letsencrypt/keys/0000_key-certbot.pem

try-mtu-discovery = true

default-domain =

ipv4-network =

tunnel-all-dns = true

dns =

Comment out all the route parameters:
route =
route =
route = fef4:db8:1000:1001::/64
no-route =

Save and close the file. Then restart the VPN server for the changes to take effect.
systemctl restart ocserv

7- Fixing DTLS Handshake Failure:
cp /lib/systemd/system/ocserv.service /etc/systemd/system/ocserv.service
vim /etc/systemd/system/ocserv.service

Comment out the following two lines (the socket-activation directives):
#Requires=ocserv.socket
#Also=ocserv.socket

Save and close the file. Then reload systemd:
systemctl daemon-reload

Stop ocserv.socket and disable it:
systemctl stop ocserv.socket
systemctl disable ocserv.socket

Restart ocserv service:
systemctl restart ocserv.service

Check the status:
systemctl status ocserv

8- Creating VPN Accounts using the ocpasswd tool:
ocpasswd -c /etc/ocserv/ocpasswd mohsen

+iptables (April 23, 2020, 9:02 p.m.)

Delete a PREROUTING/POSTROUTING (NAT) rule:

1- List NAT rules:
iptables -t nat -v -L -n --line-number

2- Delete a NAT rule:
iptables -t nat -D POSTROUTING 1


+PPTP / L2TP - Descriptions (April 17, 2020, 11:12 a.m.)

PPTP or Point-to-Point Tunneling Protocol is an outdated method for implementing VPNs.

It is developed by Microsoft and the easiest protocol to configure. PPTP VPN has low overhead and that makes it faster than other VPN protocols.

PPTP VPN encrypts data using 128-bit encryption which makes it the fastest but the weakest in terms of security.

When you use a VPN connection, it usually affects your Internet speeds due to the encryption process. However, you don’t have to worry about that when using a PPTP VPN because of its low-level encryption.


L2TP or Layer 2 Tunneling Protocol (L2TP) is the result of a partnership between Cisco and Microsoft. It was created to provide a more secure VPN protocol than PPTP.

L2TP is a tunneling protocol like PPTP that allows users to access the common network remotely.

L2TP VPN is a combined protocol that has all the features of PPTP, but runs over a faster transport protocol (UDP) thus making it more firewall-friendly.

It encrypts data using 256-bit encryption and therefore uses more CPU resources than PPTP. However, the increased overhead required to manage this security protocol makes it perform slower than PPTP.


+Limit network bandwidth (March 11, 2020, 9:43 a.m.)

apt install wondershaper

Limit eth1 to 256 Kbps download and 128 Kbps upload:
wondershaper eth1 256 128

Clear the limits on eth1:
wondershaper clear eth1


+Enable /etc/rc.local (March 9, 2020, 11:02 a.m.)

1- Create the following file:
vim /etc/systemd/system/rc-local.service

2- Add the following content to it:
[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

3- Create the rc.local file:
printf '%s\n' '#!/bin/bash' 'exit 0' | sudo tee -a /etc/rc.local

4- Then add execute permission to /etc/rc.local file:
chmod +x /etc/rc.local

5- Enable and start the service on system boot:
systemctl enable rc-local
systemctl start rc-local

+Packages (March 3, 2020, 11:15 a.m.)

balena etcher

+Command History (Feb. 24, 2020, 11:21 a.m.)

history n
Shows the stuff typed – add a number to limit the last n items


Ctrl + r
Interactively search through previously typed commands


!value
Execute the last command typed that starts with ‘value’


!value:p
Print to the console the last command typed that starts with ‘value’


!!
Execute the last command typed


!!:p
Print to the console the last command typed


+Chaining Commands (Feb. 24, 2020, 11:16 a.m.)

commandA; commandB

Run command A and then B, regardless of the success of A


commandA && commandB

Run command B if A succeeded


commandA || commandB

Run command B if A failed



commandA &

Run command A in background


+Terminal Shortcuts (Feb. 24, 2020, 10:53 a.m.)

Controlling the Screen:

Ctrl+S: Stop all output to the screen. This is particularly useful when running commands with a lot of long, verbose output, but you don’t want to stop the command itself with Ctrl+C.

Ctrl+Q: Resume output to the screen after stopping it with Ctrl+S


Moving the Cursor:

Ctrl+A or Home: Go to the beginning of the line.

Ctrl+E or End: Go to the end of the line.

Alt+B: Go left (back) one word.

Ctrl+B: Go left (back) one character.

Alt+F: Go right (forward) one word.

Ctrl+F: Go right (forward) one character.

Ctrl+XX: Move between the beginning of the line and the current position of the cursor. This allows you to press Ctrl+XX to return to the start of the line, change something, and then press Ctrl+XX to go back to your original cursor position. To use this shortcut, hold the Ctrl key and tap the X key twice.


Deleting Text:

Ctrl+D or Delete: Delete the character under the cursor.

Alt+D: Delete all characters after the cursor on the current line.

Ctrl+H or Backspace: Delete the character before the cursor.


Fixing Typos:

Alt+T: Swap the current word with the previous word.

Ctrl+T: Swap the last two characters before the cursor with each other. You can use this to quickly fix typos when you type two characters in the wrong order.

Ctrl+_: Undo your last keypress. You can repeat this to undo multiple times.


Cutting and Pasting:

Ctrl+W: Cut the word before the cursor, adding it to the clipboard.

Ctrl+K: Cut the part of the line after the cursor, adding it to the clipboard.

Ctrl+U: Cut the part of the line before the cursor, adding it to the clipboard.

Ctrl+Y: Paste the last thing you cut from the clipboard. The y here stands for “yank”.


Working With Your Command History:

Ctrl+P or Up Arrow:
Go to the previous command in the command history. Press the shortcut multiple times to walk back through history.

Ctrl+N or Down Arrow:
Go to the next command in the command history. Press the shortcut multiple times to walk forward through the history.

Alt+R: Revert any changes to a command you’ve pulled from your history if you’ve edited it.

Ctrl+R: Recall the last command matching the characters you provide. Press this shortcut and start typing to search your bash history for a command.

Ctrl+O: Run a command you found with Ctrl+R.

Ctrl+G: Leave history searching mode without running a command.


reset
Resets the terminal display


+VNC Server (Feb. 5, 2020, 3:13 p.m.)

Try this method first, the second method has been bumped into dark screen problem:

1- Install VNC server on server machine
apt install x11vnc

2- Run the GUI or command line x11vnc application/command.

3- Install vnc viewer on the client machine (Windows or Linux) and connect to the IP:port

================== Second Method ==================

1- apt install vnc4server

2- With normal linux user, enter the following command and set a password:
$ vncserver

3- vim /etc/vnc.conf
$localhost = "no";
$vncStartup = "$ENV{HOME}/.vnc/xstartup";

4- Create a file in ~/.vnc/xstartup with the following content:

vim ~/.vnc/xstartup

#!/bin/sh
exec startx


Kill a running VNC session:
vncserver -kill :1


List running VNC sessions:
vncserver -list :*


+CUDA (Feb. 4, 2020, 3:35 p.m.)



mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600


dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb

apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/

apt update

apt install cuda


+netplan (Feb. 4, 2020, 9:16 a.m.)


A minimal static-IP example (the interface name and addresses are placeholders):

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [<ip>/24]
      gateway4: <gateway-ip>
      nameservers:
        addresses: [<dns-1>, <dns-2>]

With an explicit renderer and DNS search domains:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      nameservers:
        search: [mydomain, otherdomain]
        addresses: [<dns-1>, <dns-2>]


netplan try


netplan apply


+Nvidia GPU Drivers for Tensorflow (Feb. 3, 2020, 3:08 p.m.)

1- Download the nvidia machine learning repo package:


dpkg -i nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb


2- Download nvidia cuda repo package:


dpkg -i cuda-repo-ubuntu1804_10.2.89-1_amd64.deb


+Test GPU (Feb. 3, 2020, 2:30 p.m.)

Run google-chrome and navigate to the URL about:gpu. If Chrome has figured out how to use OpenGL, you will get extremely detailed information about your GPU.


cat /proc/driver/nvidia/gpus/*/information


lspci | grep ' VGA ' | cut -d" " -f 1


lspci -v -s $(lspci | grep ' VGA ' | cut -d" " -f 1)


nvidia-smi --list-gpus

nvidia-smi -q


+Nvidia Drivers (Feb. 3, 2020, 10:11 a.m.)

1- Enable the non-free repository.

vim /etc/apt/sources.list
deb buster main non-free

2- Update the repository index files and install nvidia-detect utility:
apt update
apt install nvidia-detect

3- Detect your Nvidia card model and suggested Nvidia driver:
# nvidia-detect

4- As suggested install the recommended driver by the previous step:
apt install nvidia-driver

5- Reboot:
systemctl reboot


Second method (manual install from Nvidia's .run file):

1- Search and download the driver file from the Nvidia website:

2- apt install build-essential linux-headers-`uname -r`

3- bash (The file you downloaded in step 1)


+Motherboard (Feb. 2, 2020, 10:44 a.m.)

To find your motherboard model, use dmidecode or inxi command:

dmidecode -t baseboard | grep -i 'Product'


apt install inxi

inxi -M


+VGA / GPU (Feb. 2, 2020, 10:33 a.m.)

Fetch details about graphics unit (vga card or video card)

lspci -vnn | grep VGA -A 12


apt install lshw

lshw -numeric -C display

lshw -class display

lshw -short | grep -i --color display


+eyeD3 (Jan. 10, 2020, 6:53 p.m.)

eyeD3 is a Python tool for working with audio files, specifically MP3 files containing ID3 metadata. It provides a command-line tool (eyeD3) and a Python library (import eyed3) that can be used to write your own applications or plugins that are callable from the command-line tool.


It's better to use a virtualenv for installing eyeD3 and its plugins (if you need any):
Create and activate a virtualenv with Python 3, then install eyeD3 and its "display" plugin:

pip install eyed3[display-plugin]


For example, to set some song information in an mp3 file called song.mp3:

$ eyeD3 -a Integrity -A "Humanity Is The Devil" -t "Hollow" -n 2 song.mp3

With this command, we’ve set the artist (-a/--artist), album (-A/--album), title (-t/--title), and track number (-n/--track-num) properties in the ID3 tag of the file.


View the saved tag:
eyeD3 song.mp3

The same can be accomplished using Python.

import eyed3
audiofile = eyed3.load("song.mp3")
audiofile.tag.artist = u"Integrity"
audiofile.tag.album = u"Humanity Is The Devil"
audiofile.tag.album_artist = u"Integrity"
audiofile.tag.title = u"Hollow"
audiofile.tag.track_num = 2  # write the changes back to the file


Rename mp3 files to their titles and prepend the index number:


i=0
files=(*.mp3)
for file in "${files[@]}"; do
    i=$(( i + 1 ))
    eyeD3 --rename ''"$i"'- $title' "$file"
done


Display title: (you need the "display" plugin to be installed)

eyeD3 -P display -p %t%

eyeD3 -P display -p %title%


+Watch (Jan. 9, 2020, 1:12 p.m.)

watch -d -n 0.2 du -sh



-d, --differences

highlights the changes in the command output.


-n, --interval <secs>

seconds to wait between updates


-t, --no-title

turn off header


+Send Remote Commands Via SSH (Jan. 6, 2020, 5:07 p.m.)

ssh 'ls -l'

ssh 'ls -l; ps -aux; whoami'

ssh -t 'top'

The -t flag tells ssh that you'll be interacting with the remote shell. Without the -t flag top will return results after which ssh will log you out of the remote host immediately. With the -t flag, ssh keeps you logged in until you exit the interactive command. The -t flag can be used with most interactive commands, including text editors like pico and vi.

+Remap keyboard keys (Dec. 22, 2019, 5:43 p.m.)

1- run xev in terminal

2- You need to get the code of the key you intend to switch. So after running xev, press the key you want to switch and note the keycode.

3- Suppose you want to change that key to left shift. Using the following example, get the name of the left shift key:
xmodmap -pke | grep -i shift

4- Now you can change the key functionality with the following command:
xmodmap -e "keycode 94 = Shift_L"

5- To make this change permanent, you need to put the command in ~/.profile
vim ~/.profile
xmodmap -e "keycode 94 = Shift_L"

+MkDocs (Nov. 5, 2019, 3:35 p.m.)

1- apt install mkdocs

2- Create new MkDocs project:
mkdocs new my_project
cd my_project

3- Serve the site locally:
mkdocs serve
Open up in your browser.

4- Building the site:
mkdocs build


Change development address:
dev_addr: ''




Installing a new theme:


Serve in remote host:
mkdocs serve -a


Markdown documentation:

1- Emphasis:

~~strike through~~
`inline code`
==*you* **can** ^^combine^^ `too`==

2- Soft & Hard Line Breaks:

Put 2 spaces at the end of a line to force a line break.
You can also force a break anywhere using the <br> tag.

3- Lists:

* need a blank line above to start a new list
+ valid bullet symbols
+ `*`, `-` or `+`
- 4 spaces or 1 tab
- to indent

1. use *numbers* for ordered
* can nest
2. **numbers** can be in order
3. can also nest
1. but it will fix them if not

- list item with two paragraphs.

anything like this paragraph
should be indented by 4 spaces
or a tab

- you can add blocks too

> :memo:
> * list under lists
> * under lists

4- Tasks:

- [ ] Task Lists `- [ ]`
- [x] x instead of space
- [x] will mark it complete
- [ ] work just like lists
* can can contain indents
* or anything else a list can

1. Or can be nested under other lists
- [ ] like this
- [ ] and this

2. This can help
- [ ] like this
- [ ] and this

5- Links:

[simple link]( )
[with optional title]( "Google's Homepage")
point to a [relative file or md](./embedding/ or
mail link with emoji [📧]( or
click this cloud icon to see the list of icon options

or [use an image ![](images/dingus/image-small.png)](images/dingus/image.png)

[Reference-Style Links][some reference id]
put link at bottom of paragraph or page.
you can use numbers or text for
[reference-style link definitions][1]
or leave it empty and
just use the [link text itself]

to [open in new tab]({.new-tab}
use `{target=_blank} or {.new-tab}` attributes
use it on [ref links][new tab]{.new-tab} too

Indenting _reference links_
2 spaces is not required
but a recommended convention

[some reference id]:
[link text itself]: ./images/material.png
[new tab]:

6- Images:

inline ![](images/dingus/image-small.png)
with alt text ![foo](images/dingus/image-small.png)
with ref links ![img-small][]
can use [sizing attributes](blocks/#sizing-alignment)

Put `zoomify` in the alt text bracket to enable
clicking to zoom. Try clicking on any of
these images ![zoomify][img-dingus]{.tiny}

![zoomify](images/dingus/image.png){.center .xsmall}

> :camera: **Figure Title**
> ![zoomify](images/dingus/image.png){.center .small}

[img-small]: ./images/dingus/image-small.png
[img-dingus]: ./images/dingus/image.png

7- Abbreviations:

here are some abbr's

>:bulb: if your editor gets confused by
not having and enclosing * then
just add it to end of abbr def.


>:warning: Don't indent these, doesn't seem to work

*[abbr]: Abbreviations
*[def]: Definition
*[HTML]: Hyper Text Markup Language
*[FUBAR]: You know what it means*

8- Footnotes:

Footnotes[^1] work like reference links
They auto-number like ordered lists[^3]
You can use any
reference id[^text reference]
like ref links they can be
organized at bottom
of paragraph or page.

[^1]: footnote, click the return icon here to go back ->
[^3]: the number will not necessarily be what you use
[^text reference]: text reference

9- Tables:

Colons can be used to align columns.
3 dashes min to separate headers.
Outer pipes (|) are optional,
and you don't need to make the
raw Markdown line up prettily.
You can also use inline Markdown.

| Tables | Are | Cool |
| -------- |:-------------:| ---------:|
| col 3 is | right-aligned | $1600 |
| col 2 is | centered | $12 |
| | **Total** | **$1612** |

==Table== | **Format** | 👀 _scramble_
--- | --- | ---
*Still* | `renders` | **nicely**
[with links](images/dingus/image-small.png) | images ![zoomify](images/dingus/image-small.png){.tiny} | emojis 🍔
icons _cloud_{.icon} | footnotes[^1] | use `<br>` <br> for multi-line <br> line breaks


10- Blockquotes:

> Blockquotes are handy to call out text.
they are greedy and will keep
grabbing text. The '>' is optional unless trying to join
paragraphs, tables, etc.

a blank line and a new paragraph
or other markdown thing end them

use a `---` separator or `<br>`
if you want multiple separate block quotes


> can have nested
> > blockquotes inside of block quotes
block quotes can also contain any valid markdown

11- Blocks - admonitions, callouts, sidebars:

> :memo: **Memo Admonition**
use blockquotes
with emoji indicators for
admonition memos, callout etc..


> :boom:
A title like the one above is optional


> :bulb: See [the section about blocks](
for the list of emojis that can be used.

12- Row Divs:

<div markdown="1" class="two-column">

13- Headings & Breaks:

# h1 Heading
## h2 Heading
### h3 Heading
#### h4 Heading

Horizontal Rules


Material Design:

pip install mkdocs-material

Sample Config:

# Configuration
theme:
  name: 'material'
  palette:
    primary: 'purple'
    accent: 'purple'
  feature:
    tabs: true

# Extensions
markdown_extensions:
  - admonition
  - codehilite:
      guess_lang: false
  - toc:
      permalink: true



mkdocs build

A folder named "site" will be created. Zip and scp it to the server and serve it with Nginx or any other web server.

If you don't want the output files to be built in the "site" directory, set another name via the site_dir configuration option in the mkdocs.yml file.
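For a concrete starting point on the server side, a minimal Nginx server block might look like this (the domain and path are placeholders, not from the original note):

```nginx
server {
    listen 80;
    server_name docs.example.com;   # placeholder domain

    # wherever the "site" folder was unzipped
    root /var/www/docs/site;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

Reload Nginx after placing this in sites-enabled and the static docs are served as-is.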


+Set up PPTP VPN Server (Nov. 1, 2019, 7:41 p.m.)

1- Install pptpd and a toolkit to save iptables-rules:
apt install pptpd iptables-persistent -y

2- Edit the file /etc/ppp/pptpd-options

And comment the following lines:

3- Add VPN User Accounts:
vim /etc/ppp/chap-secrets

Add the user and password as follows. Use the tab key to separate them.
mohsen pptpd my-password *

4- Allocate Private IP for VPN Server and Clients:
vim /etc/pptpd.conf

Add the following lines to the end of the file.
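The lines to add were lost from this note. In typical pptpd setups they define the server's tunnel address and the pool handed to clients, for example (these addresses are illustrative; adjust to your network):

```
localip
remoteip
```

localip is the address the server takes on the ppp interface; remoteip is the range assigned to connecting clients.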

5- Enable IP Forwarding:
vim /etc/sysctl.conf

Add the following line:
net.ipv4.ip_forward = 1
sysctl -p

6- Configure Firewall for IP Masquerading:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A POSTROUTING -t nat -o ppp+ -j MASQUERADE
# Enable IP forwarding
iptables -F FORWARD
iptables -A FORWARD -j ACCEPT
# Accept GRE packets
iptables -A INPUT -p 47 -j ACCEPT
iptables -A OUTPUT -p 47 -j ACCEPT
# Accept incoming connections to port 1723 (PPTP)
iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
# Accept all packets via ppp* interfaces (for example, ppp0)
iptables -A INPUT -i ppp+ -j ACCEPT
iptables -A OUTPUT -o ppp+ -j ACCEPT

7- Save the iptables rules so they are restored on each reboot:
iptables-save > /etc/iptables/rules.v4

vim /etc/network/if-pre-up.d/iptables-restore-pptp
#!/bin/sh
/sbin/iptables-restore < /etc/iptables/rules.v4
Save the file and make it executable (scripts in if-pre-up.d only run if executable):
chmod +x /etc/network/if-pre-up.d/iptables-restore-pptp

8- Start pptpd Daemon:
service pptpd start
service pptpd stop
service pptpd restart
service pptpd status
update-rc.d pptpd enable

9- In order to verify that it is running and listening for incoming connections:

netstat -alpn | grep pptp
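On systems where net-tools is not installed, ss (from iproute2) can make the same check; a sketch:

```shell
# List listening TCP sockets and look for the PPTP control port (1723);
# prints the socket line if pptpd is up, or a notice otherwise
ss -lnt | grep ':1723 ' || echo "pptpd is not listening"
```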


Install the following packages on client system:
apt install pptp-linux network-manager-pptp

In network manager add a PPTP VPN.
You will only need the following information:
- Gateway: Which is the IP address of your VPN server.
- Login: Which is the username in the above chap-secrets file
- Password: Which is the password in the above chap-secrets file.


+Network Manager Logs (Nov. 1, 2019, 6:12 p.m.)

journalctl -fu NetworkManager

+UFW - Uncomplicated Firewall (Oct. 29, 2019, 11:25 a.m.)

The default firewall configuration tool for Ubuntu is ufw. Developed to ease iptables firewall configuration, ufw provides a user-friendly way to create an IPv4 or IPv6 host-based firewall. By default, UFW is disabled.


ufw enable

ufw status verbose

ufw show raw


ufw allow <port>/<optional: protocol>

To allow incoming tcp and udp packet on port 53
ufw allow 53

To allow incoming tcp packets on port 53
ufw allow 53/tcp

To allow incoming udp packets on port 53
ufw allow 53/udp

To allow packets from
ufw allow from

ufw allow from

ufw allow from to any port 22

ufw allow from to any port 22 proto tcp


ufw deny <port>/<optional: protocol>

To deny tcp and udp packets on port 53
ufw deny 53

To deny incoming tcp packets on port 53
ufw deny 53/tcp

To deny incoming udp packets on port 53
ufw deny 53/udp

Deny by specific IP:
ufw deny from

ufw deny from to any port 22


Delete Existing Rule:
ufw delete deny 80/tcp



Allow by Service Name:
ufw allow <service name>
ufw allow ssh

Deny by Service Name:
ufw deny <service name>
ufw deny ssh


Checking the status of ufw will tell you if ufw is enabled or disabled and also list the current ufw rules that are applied to your iptables.

ufw status



To enable logging use:
ufw logging on

To disable logging use:
ufw logging off


+httrack (Oct. 19, 2019, 1:09 a.m.)

1- Installation:
apt install httrack

2- Usage:
httrack -r2 '-*' '+*mp3' -X0 --update

+Radio Streaming Apps (Feb. 20, 2019, 10:32 a.m.)


apt install cantata mpd

Favorite List file location:



apt install snapd
snap install odio



add-apt-repository ppa:gnumdk/lollypop
apt update
apt install lollypop

If not found, maybe it's "lollypop-xenial". Do an apt-cache search lollypop to find the correct name.



add-apt-repository ppa:anonbeat/guayadeque
apt-get update
apt install guayadeque


+CentOS - yum nogpgcheck (July 7, 2019, 9:39 p.m.)

yum --nogpgcheck localinstall packagename.arch.rpm

+CentOS - EPEL (July 7, 2019, 7 p.m.)

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

+CentOS - Check version (July 7, 2019, 6:56 p.m.)

rpm -q centos-release

+SMB (June 26, 2019, 9:27 p.m.)

apt install smbclient


List all shares:
smbclient -L <IP Address> -U Mohsen

Connect to a Disk or other services:
smbclient //<IP Address>/<Disk or Service Name> -U Mohsen


To copy the file from the local file system to the SMB server:
smb: \> put local_file remote_file

To copy the file from the SMB server to the local file system:
smb: \> get remote_file local_file


+aria2c (April 26, 2018, 10:55 a.m.)

aria2c -d ~/Downloads/ -i ~/Downloads/dl.txt --summary-interval=20 --check-certificate=false -c -x16 -s16 -j1

For limiting speed add:


+Download dependencies and packages to directory (June 24, 2019, 1:38 p.m.)

1- In server with no Internet:
apt-get --print-uris --yes install <my_package_name> | grep ^\' | cut -d\' -f2 > downloads.list

2- Download the links from another server with Internet connection:
wget --input-file downloads.list

3- Copy the files to the location /var/cache/apt/archives in destination server.

4- Install the package using apt install.
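The pipeline in step 1 just extracts the quoted URL from each line apt-get prints. The text processing can be seen in isolation on a sample line (the package and URL below are made up for illustration):

```shell
# A sample line in the format "apt-get --print-uris" emits:
# 'URI' filename size checksum
line="'http://deb.debian.org/debian/pool/main/h/htop/htop_3.0.5-7_amd64.deb' htop_3.0.5-7_amd64.deb 123456 SHA256:abcdef"

# grep ^\' keeps only the URI lines; cut -d\' -f2 takes the text between the quotes
echo "$line" | grep ^\' | cut -d\' -f2
# -> http://deb.debian.org/debian/pool/main/h/htop/htop_3.0.5-7_amd64.deb
```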

+Change/Rename username/group (June 16, 2019, 5:13 p.m.)

usermod -l new-name old-name

groupmod -n new-group old-group


If the following error occurs:
usermod: user tom is currently used by process 123

Kill that process, or all of the user's processes:
kill 123
pkill -9 -u old_name


+rsync (May 5, 2018, 11:26 a.m.)

--delete : delete files that don't exist on sender (system)
-v : Verbose (try -vv for more detailed information)
-e "ssh options" : specify the ssh as remote shell
-a : archive mode
-r : recurse into directories
-z : compress file data


rsync -civarzhne 'ssh -p 22' --no-g --no-p --delete --force --exclude-from 'fair/rsync' fair


rsync -arvb --exclude-from 'my_project/rsync-exclude-list.txt' --delete --backup-dir='my_project/my_project/rsync-deletions' -e ssh my_project


rsync -varPe 'ssh' --ignore-existing* /home/mohsen/Audio/Music/Unsorted/music/


Exclude files and folders:

--exclude 'sources.txt'
--exclude '*.pyc'

--exclude '/static'
--exclude 'abc*'

--exclude 'sources.txt' --exclude 'abc*'


-a = recursive (recurse into directories), links (copy symlinks as symlinks), perms (preserve permissions), times (preserve modification times), group (preserve group), owner (preserve owner), preserve device files, and preserve special files.

-v = verbose. The reason I think verbose is important is so you can see exactly what rsync is backing up. Think about this: What if your hard drive is going bad, and starts deleting files without your knowledge, then you run your rsync script and it pushes those changes to your backups, thereby deleting all instances of a file that you did not want to get rid of?

--delete = This tells rsync to delete any files that are in Directory2 that aren’t in Directory1. If you choose to use this option, I recommend also using the verbose options, for reasons mentioned above.

-l = preserves any links you may have created.

--progress = shows the progress of each file transfer. Can be useful to know if you have large files being backed up.

--stats = Adds a little more output regarding the file transfer status.

-I, --ignore-times
Normally rsync will skip any files that are already the same size and have the same modification timestamp. This option turns off this "quick check" behavior, causing all files to be updated.

-b, --backup
With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options. Note that if you don’t specify --backup-dir, (1) the --omit-dir-times option will be implied, and (2) if --delete is also in effect (without --delete-excluded), rsync will add a "protect" filter-rule for the backup suffix to the end of all your existing excludes (e.g. -f "P *~"). This will prevent previously backed-up files from being deleted. Note that if you are supplying your own filter rules, you may need to manually insert your own exclude/protect rule somewhere higher up in the list so that it has a high enough priority to be effective (e.g., if your rules specify a trailing inclusion/exclusion of ’*’, the auto-added rule would never be reached).

--backup-dir=DIR
In combination with the --backup option, this tells rsync to store all backups in the specified directory on the receiving side. This can be used for incremental backups. You can additionally specify a backup suffix using the --suffix option (otherwise the files backed up in the specified directory will keep their original filenames). Note that if you specify a relative path, the backup directory will be relative to the destination directory, so you probably want to specify either an absolute path or a path that starts with "../". If an rsync daemon is the receiver, the backup dir cannot go outside the module’s path hierarchy, so take extra care not to delete it or copy into it.

--suffix=SUFFIX
This option allows you to override the default backup suffix used with the --backup (-b) option. The default suffix is a ~ if no --backup-dir was specified, otherwise it is an empty string.

-u, --update
This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file. (If an existing destination file has a modification time equal to the source file’s, it will be updated if the sizes are different.) Note that this does not affect the copying of symlinks or other special files. Also, a difference of file format between the sender and receiver is always considered to be important enough for an update, no matter what date is on the objects. In other words, if the source has a directory where the destination has a file, the transfer would occur regardless of the timestamps. This option is a transfer rule, not an exclude, so it doesn’t affect the data that goes into the file-lists, and thus it doesn’t affect deletions. It just limits the files that the receiver requests to be transferred.
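Putting the backup-related flags together, here is a small local sketch (directory names are examples, not from the note):

```shell
# Minimal demo of -a, -v, -b and --backup-dir
mkdir -p /tmp/rsync-demo/src /tmp/rsync-demo/dst
echo "v1" > /tmp/rsync-demo/src/file.txt

# Initial sync copies the file
rsync -av /tmp/rsync-demo/src/ /tmp/rsync-demo/dst/

# Modify the source, then re-sync; -b moves the old destination
# copy into --backup-dir instead of overwriting it
echo "v2" > /tmp/rsync-demo/src/file.txt
rsync -avb --backup-dir=/tmp/rsync-demo/backups /tmp/rsync-demo/src/ /tmp/rsync-demo/dst/

cat /tmp/rsync-demo/dst/file.txt       # -> v2
cat /tmp/rsync-demo/backups/file.txt   # -> v1
```

Note the absolute --backup-dir path: a relative one would be interpreted relative to the destination directory, as described above.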


+Shadowsocks - Proxy tool (May 13, 2018, 9:25 p.m.)

Server Installation:

(Use python 2.7)
1- pip install shadowsocks
(You can create a virtualenv if you want.)

2- Create a file /etc/shadowsocks.json:
{
    "server": "[server ip address]",
    "port_password": {
        "8381": "Mohsen123",
        "8382": "Mohsen321",
        "8383": "MoMo"
    },
    "local_port": 1080,
    "timeout": 600,
    "method": "aes-256-cfb"
}

3- ssserver --manager-address /var/run/shadowsocks-manager.sock -c /etc/shadowsocks.json start
(If you installed shadowsocks in a virtualenv, you need to "activate" it to see the command "ssserver")

If you get an error like this:
AttributeError: /usr/lib/x86_64-linux-gnu/ undefined symbol: EVP_CIPHER_CTX_cleanup
Refer to the bottom of this note for the solution!

If you get these errors:
[Errno 98] Address already in use
can not bind to manager address
Delete the stale socket file:
rm /var/run/shadowsocks-manager.sock

4- Open a firewall port to Shadowsocks clients for each port defined in the above json file:
ufw allow proto tcp to any port 8381 comment "Shadowsocks server listen port"
Do the same for the other ports too: 8382, 8383, etc.

5- Automatically Start Shadowsocks Service:
put the whole command from step 3 in the file /etc/rc.local


Client Installation: (Linux)

1- pip install shadowsocks
(You can create a virtualenv if you want.)

2- Create a file /etc/shadowsocks.json with the exact content from step 2 of "Server Installation".

3- sslocal -c /etc/shadowsocks.json
(If you installed shadowsocks in a virtualenv, you need to "activate" it to see the command "sslocal")


Client Installation: (Android)

Install the Shadowsocks app from the link below:


If you get an error like this:
AttributeError: /usr/lib/x86_64-linux-gnu/ undefined symbol: EVP_CIPHER_CTX_cleanup

Open the file:
vim /usr/local/lib/python2.7/dist-packages/shadowsocks/crypto/

Replace "cleanup" with "reset" in line 52, changing:
libcrypto.EVP_CIPHER_CTX_cleanup.argtypes = (c_void_p,)
to:
libcrypto.EVP_CIPHER_CTX_reset.argtypes = (c_void_p,)

And also replace "cleanup" with "reset" in line 111:


+Check if a disk is an SSD or an HDD (Dec. 18, 2018, 9:21 a.m.)

cat /sys/block/sda/queue/rotational

You should get the value 0 for an SSD
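To check every block device at once, a small loop over sysfs works (a sketch; device names vary per machine):

```shell
# 1 = rotational (HDD), 0 = non-rotational (SSD/NVMe)
for f in /sys/block/*/queue/rotational; do
    dev=${f#/sys/block/}
    dev=${dev%/queue/rotational}
    printf '%s: %s\n' "$dev" "$(cat "$f")"
done
```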


lsblk -d -o name,rota


Verify VPS provided is on SSD drive:

dd if=/dev/zero of=/tmp/basezap.img bs=512 count=1000 oflag=dsync

This command should take only a few seconds on an SSD. If it takes longer, it is a normal hard disk.


time for i in `seq 1 1000`; do
dd bs=4k if=/dev/sda count=1 skip=$(( $RANDOM * 128 )) >/dev/null 2>&1;
done


+ffmpeg (May 10, 2019, 4:30 p.m.)

Cut Movies:
ffmpeg -i 4.VOB -ss 00:14 -t 02:11 -c copy cut2.mp4


Resize resolution:
ffmpeg -i input.mp4 -s 640x480 -b:v 1024k -vcodec mpeg4 -acodec copy output.mp4

List of all formats & codes supported by ffmpeg:
ffmpeg -formats

ffmpeg -codecs


Converting mp4 to mp3:

ffmpeg -i video.mp4 -vn -acodec libmp3lame -ac 2 -qscale:a 4 -ar 48000 audio.mp3


Merge audio & video:

ffmpeg -i video.mp4 -i audio.mp3 -c:v copy -c:a mp3 -strict experimental output.mp4


m2t to mp3:
ffmpeg -i mohsen.m2t -f mp3 -acodec mp3 -ab 320k -ar 44100 -vn mohsen.mp3



+OpenVPN (Nov. 18, 2018, 9:52 a.m.)

=================== Server Configuration ===================

1- apt install openvpn easy-rsa

2- make-cadir /var/openvpn-ca

3- Build the Certificate Authority:
cd /var/openvpn-ca
mv openssl-1.0.0.cnf openssl.cnf
source vars
./clean-all
./build-ca

4- Create the Server Certificate, Key, and Encryption Files:
./build-key-server server
When asked for "Sign the certificate" reply "y"

5- Generate an HMAC signature to strengthen the server's TLS integrity verification capabilities:
openvpn --genkey --secret keys/ta.key

6- Generate a Client Certificate and Key Pair:
./build-key user1

7- Copy the Files to the OpenVPN Directory:
cd keys
cp ca.crt server.crt server.key ta.key dh2048.pem /etc/openvpn
If the file "dh2048.pem" was not available, you can copy it from:
cp /usr/share/doc/openvpn/examples/sample-keys/dh2048.pem /etc/openvpn
or you might need to locate it.

8- Copy and unzip a sample OpenVPN configuration file into configuration directory:
gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz | tee /etc/openvpn/server.conf

9- Adjust the OpenVPN Configuration:
vim /etc/openvpn/server.conf

* Find the directive "tls-auth ta.key 0", uncomment it (if it's commented) and add "key-direction 0" below it.

* Find "cipher AES-256-CBC", uncomment it and add "auth SHA256" below it.

* Find and uncomment:
user nobody
group nogroup
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS"
push "dhcp-option DNS"

10- Allow IP Forwarding:
Uncomment the line "net.ipv4.ip_forward" in the file /etc/sysctl.conf.
To read the file and adjust the values for the current session, type:
sysctl -p

11- Adjust the UFW Rules to Masquerade Client Connections:
Find the public network interface using:
ip route | grep default
The part after "dev" is the public network interface. We need it for next step.

12- Add the following lines to the bottom of the file "/etc/ufw/before.rules":
There is a "COMMIT" at the end of the file. Do not delete or comment out that "COMMIT".
Just add this block at the end of the file. Each "COMMIT" applies its own block's rules.

# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to server public network interface
-A POSTROUTING -s -o <your_public_network_interface> -j MASQUERADE
COMMIT

13- Open the file "/etc/default/ufw" and change DEFAULT_FORWARD_POLICY from "DROP" to "ACCEPT".

14- Open the OpenVPN Port and Enable the Changes:
ufw allow 1194/udp
ufw allow OpenSSH
ufw disable
ufw enable

15- Start and Enable the OpenVPN Service:
systemctl start openvpn@server
systemctl status openvpn@server

Also check that the OpenVPN tun0 interface is available:
ip addr show tun0

16- Enable the service so that it starts automatically at boot:
systemctl enable openvpn@server

17- Create the Client Config Directory Structure:
mkdir -p /var/client-configs/files
chmod 700 /var/client-configs/files

18- Copy an example client configuration:
cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /var/client-configs/base.conf

19- Open the "/var/client-configs/base.conf" file and enter your server IP to the directive:
remote <your_server_ip> 1194

user nobody
group nogroup

# ca ca.crt
# cert client.crt
# key client.key

Add "auth SHA256" after the line "cipher AES-256-CBC"

Add "key-direction 1" somewhere in the file.

Add a few commented out lines:
# script-security 2
# up /etc/openvpn/update-resolv-conf
# down /etc/openvpn/update-resolv-conf
If your client is running Linux and has an /etc/openvpn/update-resolv-conf file, you should uncomment these lines from the generated OpenVPN client configuration file.

20- Creating a Configuration Generation Script:
vim /var/client-configs/

Paste the following script (the paths match the directories created in the earlier steps):

#!/bin/bash

# First argument: Client identifier

KEY_DIR=/var/openvpn-ca/keys
OUTPUT_DIR=/var/client-configs/files
BASE_CONFIG=/var/client-configs/base.conf


cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn

21- Mark the file as executable:
chmod 700 /var/client-configs/

22- Generate Client Configurations:
cd /var/client-configs/
./ user1

If everything went well, we should have a "user1.ovpn" file in our "/var/client-configs/files" directory.

23- Transferring Configuration to Client Devices:
Use scp or any other method to download a copy of the created "user1.ovpn" file to your client.

=================== Client Configuration ===================

24- Install the Client Configuration:
apt install openvpn

25- Check to see if your distribution includes a "/etc/openvpn/update-resolv-conf" script:
ls /etc/openvpn
If you see a file "update-resolv-conf":
Edit the OpenVPN client configuration file you transferred and uncomment the three lines we placed in to adjust the DNS settings.

26- If you are using CentOS, change the group from nogroup to nobody to match the distribution's available groups:

27- Now, you can connect to the VPN by just pointing the openvpn command to the client configuration file:
sudo openvpn --config user1.ovpn

+DVB - TV Card Driver (April 17, 2015, 7:49 p.m.)

This will install the driver automatically:

1- mkdir it9135 && cd it9135

2- wget

3- unzip

4- dd if=dvb-usb-it9135.fw ibs=1 skip=64 count=8128 of=dvb-usb-it9135-01.fw

5- dd if=dvb-usb-it9135.fw ibs=1 skip=12866 count=5817 of=dvb-usb-it9135-02.fw

6- rm dvb-usb-it9135.fw

7- sudo install -D *.fw /lib/firmware

8- sudo chmod 644 /lib/firmware/dvb-usb-it9135* && cd .. && rm -rf it9135

9- sudo apt install kaffeine

After the above solution, you should be able to watch channels via Kaffeine (or any other DVB player). Just grab Kaffeine, scan the frequencies and you should be fine!


If you had problems with the above solution, check the older method below:

1- sudo apt-get install libproc-processtable-perl git libc6-dev

2- git clone git://

3- cd media_build

4- ./build

5- sudo make install

6- apt-get install me-tv kaffeine

7- reboot for loading the driver (I don't know the driver for modprobe yet).


Scan channels using Kaffeine:

1- Open Kaffeine

2- From `Television` menu, choose `Configure Television`.

3- From `Device 1` tab, from `Source` option, choose `Autoscan`

4- From `Television` menu choose `Channels`

5- Click on `Start Scan` and after the scan procedure is done, select all channels from the side panel and click on `Add Selected` to add them to your channels.


Scan channels using Me-TV:

1- Open Me-TV

2- When the scan dialog opens, choose `Czech Republic` from `Auto Scan`.


+Permanently set $PATH (April 19, 2019, 9:39 p.m.)

vim /root/.profile

export PATH="$PATH:/usr/share/logstash/bin/"

+Test if a port is open (April 7, 2018, 9:07 p.m.)

telnet 80
nc -z 80

+sed - inline string replace (April 7, 2018, 6:29 p.m.)

echo "the old string . . . " | sed -e "s/old/new/g"
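Since the note's title mentions in-place replacement, the same substitution can also be applied directly to a file with -i (the file name is just an example):

```shell
# Create a scratch file, then replace "old" with "new" in place
printf 'the old string\n' > /tmp/sed-demo.txt
sed -i 's/old/new/g' /tmp/sed-demo.txt
cat /tmp/sed-demo.txt   # -> the new string
```

Note: this is GNU sed syntax; BSD/macOS sed needs an explicit backup suffix, e.g. `sed -i '' ...`.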

+Install GRUB manually (March 9, 2018, 12:05 p.m.)

sudo mount /dev/sdax /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /dev/pts /mnt/dev/pts
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt

update-initramfs -u
grub-install /dev/sdX (the whole disk, e.g. /dev/sda, not a partition)
update-grub

+Forwarding X (March 6, 2018, 7:55 p.m.)

1- Edit the file sshd_config:
vim /etc/ssh/sshd_config

X11Forwarding yes
X11UseLocalhost no

2- Restart ssh server:
/etc/init.d/ssh reload

3- Install xauth:
apt install xauth

4- SSH to the server:
ssh -X

+Partitioning Error - Partition table entries are not in disk order (Feb. 13, 2018, 5:37 p.m.)

sudo gdisk /dev/sda
p (the p-command prints the recent partition-table on-screen)
s (the s-command sorts the partition-table entries)
p (use the p-command again to see the result on your screen)
w (write the changed partition-table to the disk)
q (quit gdisk)

+OwnCloud (Feb. 3, 2018, 3:37 p.m.)


1- apt install -y apache2 mariadb-server libapache2-mod-php7.0 php7.0-gd php7.0-json php7.0-mysql php7.0-curl php7.0-intl php7.0-mcrypt php-imagick php7.0-zip php7.0-xml php7.0-mbstring php-apcu php-redis redis-server php7.0-ldap php-smbclient

2- Download tar file from the address:
Extract the file to /srv/

3- Remove the config files in /etc/apache2/sites-available and "sites-enabled".
Create an Apache config file with the content:
vim /etc/apache2/sites-available/owncloud.conf

Redirect permanent /owncloud

<VirtualHost *:443>
Header add Strict-Transport-Security: "max-age=15768000;includeSubdomains"
SSLEngine on
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

DocumentRoot /srv/owncloud

<Directory /srv/owncloud>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted

<IfModule mod_dav.c>
Dav off
</IfModule>

SetEnv HOME /srv/owncloud
SetEnv HTTP_HOME /srv/owncloud
</Directory>
</VirtualHost>

4- Create a symlink:
ln -s /etc/apache2/sites-available/owncloud.conf /etc/apache2/sites-enabled/owncloud.conf

5- Enable some required modules for Apache, then restart it:
a2enmod rewrite
a2enmod headers
systemctl restart apache2

6- chown -R www-data:www-data /srv/owncloud

7- Configure Database:
mysql -u root -p
CREATE DATABASE owncloud;
GRANT ALL PRIVILEGES ON owncloud.* TO 'root'@'localhost' IDENTIFIED BY 'password';

8- Open the server address in browser and complete the installation:

9- vim /etc/php/7.0/cli/conf.d/20-apcu.ini

10- Add these two lines at the top of the file /srv/owncloud/data/.htaccess
deny from all
IndexIgnore *

11- Check the owncloud config file is the same as the following: /srv/owncloud/config/config.php
$CONFIG = array (
'instanceid' => '...',
'passwordsalt' => '...',
'secret' => '...',
'trusted_domains' =>
array (
0 => '',
),
'datadirectory' => '/srv/owncloud/data',
'overwrite.cli.url' => '',
'dbtype' => 'mysql',
'version' => '',
'dbname' => 'owncloud',
'dbhost' => 'localhost',
'dbtableprefix' => 'oc_',
'dbuser' => 'oc_admin',
'dbpassword' => '...',
'logtimezone' => 'UTC',
'installed' => true,
'filelocking.enabled' => true,
'memcache.local' => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\APCu',
);

12- Enabling SSL:
a2enmod ssl
a2ensite default-ssl
service apache2 reload

13- Edit the file /etc/php/7.0/cli/conf.d/20-apcu.ini and make sure it has only the value:

Restart apache:
/etc/init.d/apache2 restart

Management Commands:

Reset a user's password:
sudo -u www-data php /var/www/owncloud/occ user:resetpassword admin

See OwnCloud version:
sudo -u www-data php /var/www/owncloud/occ -V

Check status:
sudo -u www-data php /var/www/owncloud/occ status

User Commands:
user:add adds a user
user:delete deletes the specified user
user:disable disables the specified user
user:enable enables the specified user
user:inactive reports users who are known to owncloud,
but have not logged in for a certain number of days
user:lastseen shows when the user was logged in last time
user:list list users
user:list-groups list groups for a user
user:report shows how many users have access
user:resetpassword Resets the password of the named user
user:setting Read and modify user settings
user:sync Sync local users with an external backend service


+pmacct configuration with PostgreSQL (Jan. 30, 2018, 10:19 p.m.)
su postgres
psql -d template1 -f pmacct-create-db.pgsql
psql -d pmacct -f pmacct-create-table_v1.pgsql
vim /etc/pmacct/pmacctd.conf

+Get Hardware Information (Jan. 24, 2018, 4:40 p.m.)


+tcpdump (Jan. 13, 2018, 11:29 a.m.)

apt install tcpdump

sudo tcpdump -i any -n host
sudo tcpdump -nti any port 80

+Use cURL on specific interface (Jan. 9, 2018, 1:09 p.m.)

curl -o rootLast.tbz2 --interface eno2

+pmacct (Jan. 1, 2018, 10:49 a.m.)

su postgres
psql -d template1 -f /tmp/pmacct-create-db.pgsql
psql -d pmacct -f /tmp/pmacct-create-table_v1.pgsql
Configuration Directives:
vim /etc/pmacct/nfacctd.conf
! nfacctd configuration
daemonize: true
pidfile: /var/run/
syslog: daemon
! interested in in and outbound traffic
aggregate: src_host,dst_host
! on this network
pcap_filter: net
! on this interface
interface: lo
! storage methods
plugins: pgsql
sql_host: localhost
sql_passwd: myrealsecurepwd
! refresh the db every 10 minutes
sql_refresh_time: 600
! reduce the size of the insert/update clause
sql_optimize_clauses: false
! accumulate values in each row for up to 10 minutes
sql_history: 10m
! create new rows on the minute, hour, day boundaries
sql_history_roundoff: 10m
! in case of emergency, log to this file
!sql_recovery_logfile: /var/lib/pmacct/nfacctd_recovery_log
nfacctd_port: 6653
imt_mem_pools_number: 0
plugin_pipe_size: 4096000
! plugin_buffer_size: 32212254720

+Chroot (Dec. 25, 2017, 11:11 a.m.)

chroot /srv/root /bin/bash

+PDF Conversions (Nov. 6, 2017, 3:21 p.m.)

apt install graphicsmagick-imagemagick-compat


Convert multiple images to a PDF file:
convert *.jpg aa.pdf


Convert a PDF file to images:

convert 1.pdf 1.jpg

For a single page:
convert 1.pdf[4] 1.jpg


If the following error occurs:
convert: not authorized `1.pdf' @ error/constitute.c/ReadImage/412.
convert: no images defined `1.jpg' @ error/convert.c/ConvertImageCommand/3210.

This problem comes from a security update.
Edit the file: /etc/ImageMagick-6/policy.xml
Change "none" to "read|write" in the line:
<policy domain="coder" rights="read|write" pattern="PDF" />


+Add a New Disk to an Existing Linux Server (Oct. 25, 2017, 3:44 p.m.)

1- Check if the added disk is shown:
fdisk -l

2- For partitioning:
fdisk /dev/vdb
Create a new primary partition (n, then p), enter +49G for the last sector (for a 50G disk), then write the table with w.
Now format the new partition with the mkfs command:
mkfs.ext4 /dev/vdb1

Make an entry in /etc/fstab file for permanent mount at boot time:
/dev/vdb1 /mnt/ftp ext4 defaults 0 0

+DevStack (Oct. 4, 2017, 12:36 a.m.)
apt install git sudo

1- Add Stack User
useradd -s /bin/bash -d /opt/stack -m stack

2- Since this user will be making many changes to your system, it should have sudo privileges:
echo "stack ALL=(ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/stack
su - stack

3- Download DevStack
git clone
cd devstack

4- Create a local.conf with the following content

+Clear Terminal Completely (Sept. 18, 2017, 6:13 p.m.)

clear && printf '\e[3J'

+Add SSH Private Key (Sept. 18, 2017, 5:01 p.m.)

ssh-add .ssh/id_rsa

If you got an error:
Could not open a connection to your authentication agent.

For fixing it run:
eval `ssh-agent -s`
eval $(ssh-agent)

And then repeat the earlier command (ssh-add ....)
Add SSH private key permanently:
Create a file ~/.ssh/config with the content:
IdentityFile ~/.ssh/id_mohsen
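For reference, a fuller ~/.ssh/config entry usually scopes the key to a host alias; a sketch with placeholder names:

```
Host myserver
    HostName server.example.com
    User mohsen
    IdentityFile ~/.ssh/id_mohsen
```

Then `ssh myserver` connects with that key automatically, no ssh-add needed.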

+Commands - IP (Sept. 16, 2017, 5:29 p.m.)

Assign an IP Address to Specific Interface:
ip addr add dev eth1


Check an IP Address
ip addr show


Remove an IP Address
ip addr del dev eth1


Enable Network Interface
ip link set eth1 up


Disable Network Interface
ip link set eth1 down


Check Route Table
ip route show


Add Static Route
ip route add via dev eth0


Remove Static Route
ip route del


Add Default Gateway
ip route add default via


+Commands - Find (Sept. 12, 2017, 11:08 a.m.)

Find Files Using Name in Current Directory
find . -name mohsen.txt


Find Files Under Home Directory
find /home -name mohsen.txt


Find Files Using Name and Ignoring Case
find /home -iname mohsen.txt


Find Directories Using Name
find / -type d -name Mohsen


Find PHP Files Using Name
find . -type f -name mohsen.php


Find all PHP Files in Directory
find . -type f -name "*.php"


Find Files With 777 Permissions
find . -type f -perm 0777 -print


Find Files Without 777 Permissions
find / -type f ! -perm 777


Find SGID Files with 644 Permissions
find / -perm 2644


Find Sticky Bit Files with 551 Permissions
find / -perm 1551


Find SUID Files
find / -perm /u=s


Find SGID Files
find / -perm /g=s


Find Read Only Files
find / -perm /u=r


Find Executable Files
find / -perm /a=x


Find Files with 777 Permissions and Chmod to 644
find / -type f -perm 0777 -print -exec chmod 644 {} \;


Find Directories with 777 Permissions and Chmod to 755
find / -type d -perm 777 -print -exec chmod 755 {} \;


Find and remove single File
find . -type f -name "tecmint.txt" -exec rm -f {} \;


Find and remove Multiple File
find . -type f -name "*.txt" -exec rm -f {} \;
# find . -type f -name "*.mp3" -exec rm -f {} \;


Find all Empty Files
find /tmp -type f -empty


Find all Empty Directories
find /tmp -type d -empty


Find all Hidden Files
find /tmp -type f -name ".*"


Find Single File Based on User
find / -user root -name mohsen.txt


Find all Files Based on User
find /home -user mohsen


Find all Files Based on Group
find /home -group developer


Find Particular Files of User
find /home -user mohsen -iname "*.txt"


Find Last 50 Days Modified Files
find / -mtime 50


Find Last 50 Days Accessed Files
find / -atime 50


Find Last 50-100 Days Modified Files
find / -mtime +50 -mtime -100


Find Changed Files in Last 1 Hour
find / -cmin -60


Find Modified Files in Last 1 Hour
find / -mmin -60


Find Accessed Files in Last 1 Hour
find / -amin -60


Find 50MB Files
find / -size 50M


Find Size between 50MB – 100MB
find / -size +50M -size -100M


Find and Delete 100MB Files
find / -size +100M -exec rm -rf {} \;


Find Specific Files and Delete
find / -type f -name "*.mp3" -size +10M -exec rm {} \;


Find + grep

find . -type f -iname "*.py" -exec grep --exclude=./PC-Projects/* -Riwl 'sqlalchemy' {} \;


find /var/mohsen_backups -name "*`date --date='-20 days' +%Y-%m-%d`.tar.gz" -exec rm {} +


Files created/modified before the date "2019-05-07":
find . ! -newermt "2019-05-07"

After the date:
find . -newermt "2019-05-07"

Using datetime:
find . ! -newermt "2019-05-07 12:23:17"

find . -newermt "june 01, 2019"
find . -not -newermt "june 01, 2019"

find . -type f ! -newermt "June 01, 2019" -exec rm {} +


find . -name "*.mp4" -exec mv {} videos/ \;
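
The tests above compose; a throwaway demo (temporary paths, GNU coreutils assumed) combining -name, -size and -mtime in one command:

```shell
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/big.log" bs=1M count=2 2>/dev/null   # 2MB file
: > "$demo/small.log"                                          # empty file
# Only big.log matches: *.log AND larger than 1M AND modified within 7 days
find "$demo" -type f -name "*.log" -size +1M -mtime -7
rm -rf "$demo"
```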


+Commands - Netstat (Sept. 12, 2017, 11 a.m.)

netstat (network statistics)
Listing all the LISTENING Ports of TCP and UDP connections
netstat -a
Listing TCP Ports connections
netstat -at
Listing UDP Ports connections
netstat -au
Listing all LISTENING Connections
netstat -l
Listing all TCP Listening Ports
netstat -lt
Listing all UDP Listening Ports
netstat -lu
Listing all UNIX Listening Ports
netstat -lx
Showing Statistics by Protocol
netstat -s
Showing Statistics by TCP Protocol
netstat -st
Showing Statistics by UDP Protocol
netstat -su
Displaying Service name with PID
netstat -tp
Listing TCP Connections Continuously
netstat -ac 5 | grep tcp
Displaying Kernel IP routing
netstat -r
Showing Network Interface Transactions
netstat -i
Showing Kernel Interface Table
netstat -ie
Displaying Multicast Group Membership (IPv4 and IPv6)
netstat -g
Print Netstat Information Continuously
netstat -c
Showing Verbose Output (including unconfigured address families)
netstat --verbose
Finding Listening Programs
netstat -ap | grep http
Displaying RAW Network Statistics
netstat --statistics --raw
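
netstat reads most of this from /proc. As a sketch of what `netstat -lt` shows under the hood, here is a minimal parser for /proc/net/tcp (Linux only; field layout per proc(5), function name is mine):

```python
def listening_tcp_ports(path="/proc/net/tcp"):
    """Return the sorted list of local TCP ports in LISTEN state."""
    ports = set()
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_address, state = fields[1], fields[3]
            if state == "0A":  # hex 0A == TCP_LISTEN
                # local_address looks like "00000000:0016" (hex IP : hex port)
                ports.add(int(local_address.split(":")[1], 16))
    return sorted(ports)

if __name__ == "__main__":
    print(listening_tcp_ports())
```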

+NSQ (Sept. 12, 2017, 9:45 a.m.)


1-Download and extract:

cp nsq-1.0.0-compat.linux-amd64.go1.8/bin/* /usr/local/bin/
Quick Start:
1- In one shell, start nsqlookupd:
$ nsqlookupd

2- In another shell, start nsqd:
$ nsqd --lookupd-tcp-address=127.0.0.1:4160

3- In another shell, start nsqadmin:
$ nsqadmin --lookupd-http-address=127.0.0.1:4161

4- Publish an initial message (creates the topic in the cluster, too):
$ curl -d 'hello world 1' 'http://127.0.0.1:4151/pub?topic=test'

5- Finally, in another shell, start nsq_to_file:
$ nsq_to_file --topic=test --output-dir=/tmp --lookupd-http-address=127.0.0.1:4161

6- Publish more messages to nsqd:
$ curl -d 'hello world 2' 'http://127.0.0.1:4151/pub?topic=test'
$ curl -d 'hello world 3' 'http://127.0.0.1:4151/pub?topic=test'

7- To verify things worked as expected, open http://127.0.0.1:4171/ in a web browser to view the nsqadmin UI and see statistics. Also, check the contents of the log files (test.*.log) written to /tmp.

The important lesson here is that nsq_to_file (the client) is not explicitly told where the test topic is produced; it retrieves this information from nsqlookupd and, despite the timing of the connection, no messages are lost.
Clustering NSQ:


nsqd --lookupd-tcp-address=<lookupd1>:4160,<lookupd2>:4160,<lookupd3>:4160

nsqadmin --lookupd-http-address=<lookupd1>:4161,<lookupd2>:4161,<lookupd3>:4161

+Reverse SSH Tunneling (Sept. 10, 2017, 3:08 p.m.)

1- SSH from the destination to the source (with public IP) using the command below:
ssh -R 19999:localhost:22 sourceuser@<source-ip>
* port 19999 can be any unused port.

2- Now you can SSH from source to destination through SSH tunneling:
ssh localhost -p 19999

3- 3rd party servers can also access Destination through Source:
Destination <- |NAT| <- Source <- Bob's server

3.1 From Bob's server:
ssh sourceuser@<source-ip>

3.2 After the successful login to Source:
ssh localhost -p 19999

The connection between destination and source must stay alive at all times.
Tip: you may run a command (e.g. watch, top) on Destination to keep the connection active.
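
To keep that destination-to-source connection alive unattended, a common approach is autossh supervised by systemd. This is a hedged sketch; the unit name, user, and host placeholder are assumptions, not from this note:

```
# /etc/systemd/system/reverse-tunnel.service (sketch)
[Unit]
Description=Persistent reverse SSH tunnel
After=network-online.target

[Service]
ExecStart=/usr/bin/autossh -M 0 -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -R 19999:localhost:22 sourceuser@<source-ip>
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

-M 0 disables autossh's monitor port and relies on the ServerAlive options to detect a dead connection instead.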

+Auto Mount Hard Disk using /etc/fstab (Sept. 8, 2017, 8:11 a.m.)

UUID=e6a27fec-b822-4cc1-9f41-ca14655f938c /media/mohsen/4TB-Internal ext4 rw,user,exec 0 0
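
For reference, the six whitespace-separated fields in that line are (annotation only; the values are the ones already in the line):

```
# <device>                                  <mount point>              <type> <options>    <dump> <pass>
UUID=e6a27fec-b822-4cc1-9f41-ca14655f938c   /media/mohsen/4TB-Internal ext4   rw,user,exec 0      0
```

The last two fields tell dump(8) not to back the filesystem up and fsck(8) not to check it at boot.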

+Traffic Control - Limit Network Interface (Aug. 28, 2017, 4:58 p.m.)

For slowing an interface down:
tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
tc qdisc add dev eno3 root tbf rate 8096kbit latency 1ms burst 4096

qdisc - queueing discipline
rate - the speed knob (maximum sustained rate)
burst - size of the token bucket, in bytes
latency - maximum time a packet can wait in the queue before being dropped

+Crontab (July 11, 2017, 12:55 a.m.)

The name "crontab" combines cron (from chronos, Greek for time) and tab (short for table).


To see what crontabs are currently running on your system:

sudo crontab -l
crontab -u username -l


To edit the list of cronjobs:
sudo crontab -e


To remove or erase all crontab jobs:
crontab -r


Running GUI Applications:
0 1 * * * env DISPLAY=:0.0 transmission-gtk

Replace :0.0 with your actual DISPLAY.
Use "echo $DISPLAY" to find the display.


Cronjobs are written in the following format:

* * * * * /bin/execute/this/

As you can see there are 5 stars. The stars represent different date parts in the following order:

minute (from 0 to 59)
hour (from 0 to 23)
day of month (from 1 to 31)
month (from 1 to 12)
day of week (from 0 to 6) (0=Sunday)


Execute every minute:

* * * * * /bin/execute/this/

This means execute /bin/execute/this/

every minute
of every hour
of every day of the month
of every month
and every day in the week.


Execute every Friday 1 AM

0 1 * * 5 /bin/execute/this/


Execute on workdays 1AM

0 1 * * 1-5 /bin/execute/this/


Execute 10 minutes past every hour on the 1st of every month

10 * 1 * * /bin/execute/this/


Run every 10 minutes:

0,10,20,30,40,50 * * * * /bin/execute/this/

But crontab allows you to do this as well:

*/10 * * * * /bin/execute/this/


Special words:

For the first (minute) field, you can also put in a keyword instead of a number:

@reboot Run once, at startup
@yearly Run once a year "0 0 1 1 *"
@annually (same as @yearly)
@monthly Run once a month "0 0 1 * *"
@weekly Run once a week "0 0 * * 0"
@daily Run once a day "0 0 * * *"
@midnight (same as @daily)
@hourly Run once an hour "0 * * * *"

Leaving the rest of the fields empty, this would be valid:

@daily /bin/execute/this/


List of the English abbreviated day of the week, which can be used in place of numbers:

0 -> Sun

1 -> Mon
2 -> Tue
3 -> Wed
4 -> Thu
5 -> Fri
6 -> Sat

7 -> Sun

Having two numbers for Sunday (0 and 7) can be useful for writing weekday ranges starting with 0 or ending with 7.

Examples of Number or Abbreviation Use

The next four examples all do the same thing: execute a command every Friday, Saturday, and Sunday at 09:15:

15 09 * * 5,6,0 command
15 09 * * 5,6,7 command
15 09 * * 5-7 command
15 09 * * Fri,Sat,Sun command


Getting output from a cron job on the terminal:
You can redirect the output of your program to the pts file of an already existing terminal!
To find the pts device, run the tty command in that terminal.
Then add it to the end of your cron task:
38 23 * * * /home/mohsen/Programs/ >> /dev/pts/4


Cron jobs get logged to /var/log/syslog.
You can see just cron jobs in that logfile by running:
grep CRON /var/log/syslog


tail -f /var/log/syslog | grep CRON


Mailing the crontab output

By default, cron saves the output in the user's mailbox (root in this case) on the local system. But you can also configure crontab to forward all output to a real email address by starting your crontab with the following line:

MAILTO="your@email.com"

Mailing the crontab output of just one cronjob.
If you'd rather receive only one cronjob's output in your mail, make sure this package is installed:

$ aptitude install mailx

And change the cronjob like this:

*/10 * * * * /bin/execute/this/ 2>&1 | mail -s "Cronjob output" your@email.com


Trashing the crontab output

Now that's easy:

*/10 * * * * /bin/execute/this/ > /dev/null 2>&1

Just pipe all the output to the null device, also known as the black hole. On Unix-like operating systems, /dev/null is a special file that discards all data written to it.


Many scripts are tested in a Bash environment with the PATH variable set. This way it's possible your scripts work in your shell, but when running from cron (where the PATH variable is different), the script cannot find referenced executables and fails.

It's not the job of the script to set PATH, it's the responsibility of the caller, so it can help to echo $PATH, and put PATH=<the result> at the top of your cron files (right below MAILTO).


Applicable Examples:

0 * * * * DISPLAY=:0 /home/mohsen/Programs/
0 11 * * * /home/mohsen/Programs/

Do not forget to chmod +x both of the following files.

#! /bin/bash
# Start transmission and record its PID (the pid-file name is an example):

/usr/bin/transmission-gtk > /dev/null &
echo $! > /tmp/transmission.pid

#! /bin/bash
# Stop it again if the pid file exists:

if [ -f /tmp/transmission.pid ]; then
    /bin/kill $(cat /tmp/transmission.pid)
fi


How do I use operators?

An operator allows you to specify multiple values in a field. There are three operators:

The asterisk (*): This operator specifies all possible values for a field. For example, an asterisk in the hour time field would be equivalent to every hour or an asterisk in the month field would be equivalent to every month.

The comma (,): This operator specifies a list of values, for example: “1,5,10,15,20,25”.

The dash (-): This operator specifies a range of values, for example, “5-15” days, which is equivalent to typing “5,6,7,8,9,….,13,14,15” using the comma operator.

The separator (/): This operator specifies a step value, for example, “0-23/2” can be used in the hours field to specify command execution every other hour. Steps are also permitted after an asterisk, so if you want to say every two hours, just use */2.
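
A hedged sketch tying the three operators together: a small validator for the five date fields (plain Python; the names are mine, not a standard library API):

```python
import re

# (low, high) bounds for minute, hour, day-of-month, month, day-of-week
FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]

def valid_cron(expr):
    """Check the five date fields of a crontab line (command excluded)."""
    fields = expr.split()
    if len(fields) != len(FIELD_RANGES):
        return False
    for field, (low, high) in zip(fields, FIELD_RANGES):
        for part in field.split(","):        # comma: list of values
            base = part.split("/")[0]        # slash: strip the step value
            if base == "*":
                continue
            match = re.fullmatch(r"(\d+)(?:-(\d+))?", base)  # dash: range
            if not match:
                return False
            start = int(match.group(1))
            end = int(match.group(2) or start)
            if not (low <= start <= end <= high):
                return False
    return True

print(valid_cron("*/10 * * * *"))   # True
print(valid_cron("61 * * * *"))     # False: minutes only go up to 59
```

Note this only covers the numeric five-field form; the @daily-style keywords and name abbreviations from the sections above would need extra handling.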

+fdisk (July 8, 2017, 5:03 p.m.)

Merge Partitions:

1- fdisk /dev/sda

2- p
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 6293503 6291456 3G 83 Linux
/dev/sda2 6295550 10483711 4188162 2G 5 Extended

3- Delete both partitions you are going to merge:
Command (m for help): d
Partition number (1,2, default 2): 2
Partition 2 has been deleted.

Command (m for help): d
Partition number (1-4): 1

4- n
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 2): 1
First sector (63-1953520064, default: 63): (Choose the default value)
Last sector, +sectors... (Choose the default value)

5- t
Partition number (1-4): 1
Hex code (type L to list codes): 83

6- Make sure you've got what you're expecting:
Command (m for help): p

7- Finally, save it:
Command (m for help): w

8- resize2fs /dev/sda1
Reboot the system, then check if the partitions have been merged by:
fdisk -l

+Removing Swap Space (July 8, 2017, 2:52 p.m.)

1- swapoff /dev/sda5

2- Remove its entry from /etc/fstab

3- Remove the partition using parted:
apt-get install parted
parted /dev/sda
Type "print" to view the existing partitions and determine the minor number of the swap partition you wish to delete.
rm 5 ("5" is the number of the swap partition)
Type "quit" to exit parted.


Now you need to merge the unused partition space with another partition. You can do it using the "fdisk" note.

+GRUB Timeout (July 3, 2017, 12:30 p.m.)
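
The timeout is controlled by /etc/default/grub; a minimal sketch (the value is an example):

```
# /etc/default/grub (excerpt)
GRUB_TIMEOUT=2
```

GRUB_TIMEOUT is the number of seconds the menu is shown; 0 skips it entirely. Run "sudo update-grub" afterwards so the change lands in the generated grub.cfg.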


+KDE - Location of User Wallpapers (July 2, 2017, 9:51 a.m.)


+NFS (July 1, 2017, 10:19 a.m.)

NFS is a network-based file system that allows computers to access files across a computer network.
1- Installation:
apt-get install nfs-kernel-server nfs-common
2- Server Configuration:
In order to expose a directory over NFS, open the file /etc/exports and attach a line like the following at the bottom (the shared path and client IP are examples):

/mnt/Audio 192.168.1.20(rw,sync,no_subtree_check)

This IP is the client which is going to have access to the shared folder. You can also use an IP range.

service nfs-kernel-server restart
3- Client Configuration:
sudo apt-get install nfs-common

Create a directory named "Audio" under /mnt and mount the share:
mount <server-ip>:/mnt/Audio /mnt/Audio/

By running df -h, you can ensure that your operation was successful.
For macOS use this command:
sudo mount -o resvport <server-ip>:/mnt/Audio /mnt/Audio/

+Trim & Merge MP3 files (June 25, 2017, 2:11 p.m.)

sudo apt-get install sox libsox-fmt-mp3

Keep only the first 1:06 of a file:
sox infile outfile trim 0 1:06

Keep from 1:52 up to the absolute position 2:40:
sox infile outfile trim 1:52 =2:40

Concatenate files:
sox first.mp3 second.mp3 third.mp3 result.mp3

Merge two audio files with a pad:
sox short.ogg -p pad 6 0 | sox - -m long.ogg output.ogg

+Fix Wireless Headphone Problem (June 10, 2017, 5:34 p.m.)

+Convert deb to iso (May 14, 2017, 3:37 p.m.)

mkisofs -o firmware.iso firmware-bnx2_0.43_all.deb

+Change DNS settings (May 9, 2017, 4:27 p.m.)

The DNS servers that the system uses for name resolution are defined in the /etc/resolv.conf file.
That file should contain at least one nameserver line.
Each nameserver line defines a DNS server.
The name servers are prioritized in the order the system finds them in the file.
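
A minimal sketch of such a file (the server addresses are illustrative public resolvers, not from this note):

```
# /etc/resolv.conf
# The first nameserver is tried first; later ones are fallbacks.
nameserver 1.1.1.1
nameserver 8.8.8.8
```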

+Samba - Active Directory Infrastructure (May 7, 2017, 10:31 a.m.)

1- sudo apt-get install samba krb5-user krb5-config winbind libpam-winbind libnss-winbind

2- While the installation is running, a series of questions will be asked by the installer in order to configure the domain controller.
Second, deskbit.local
Third, deskbit.local

3- Provision Samba AD DC for Your Domain:
systemctl stop samba-ad-dc.service smbd.service nmbd.service winbind.service
systemctl disable samba-ad-dc.service smbd.service nmbd.service winbind.service

4- Rename or remove samba original configuration. This step is absolutely required before provisioning Samba AD because at the provision time Samba will create a new configuration file from scratch and will throw up some errors in case it finds an old smb.conf file.
sudo mv /etc/samba/smb.conf /etc/samba/smb.conf.initial

5- Start the domain provisioning interactively:
samba-tool domain provision --use-rfc2307 --interactive
(Leave everything as default and set a desired password.)
Here is the last result after the process gets finished:
Server Role: active directory domain controller
Hostname: samba
DNS Domain: deskbit.local
DOMAIN SID: S-1-5-21-163349405-2119569559-686966403

6- Rename or remove Kerberos main configuration file from /etc directory and replace it using a symlink with Samba newly generated Kerberos file located in /var/lib/samba/private path:
mv /etc/krb5.conf /etc/krb5.conf.initial
ln -s /var/lib/samba/private/krb5.conf /etc/

7- Start and enable Samba Active Directory Domain Controller daemons:
systemctl start samba-ad-dc.service
systemctl status samba-ad-dc.service (You may get some error logs, like "Cannot contact any KDC for requested realm", which is okay.)
systemctl enable samba-ad-dc.service

8- Use netstat command in order to verify the list of all services required by an Active Directory to run properly.
netstat -tulpn | egrep 'smbd|samba'

9- At this moment Samba should be fully operational at your premises. The highest domain level Samba is emulating should be Windows AD DC 2008 R2.
It can be verified with the help of samba-tool utility.
samba-tool domain level show

10- In order for DNS resolution to work locally, you need to open and edit the network interface settings and point the DNS resolution by modifying the dns-nameservers statement to the IP address of your Domain Controller (use for local DNS resolution) and the dns-search statement to point to your realm.
When finished, reboot your server and take a look at your resolver file to make sure it points back to the right DNS name servers.

11- Test the DNS resolver by issuing queries and pings against some AD DC crucial records, as in the below excerpt. Replace the domain name accordingly.
ping -c3 deskbit.local # Domain Name
ping -c3 samba.deskbit.local # FQDN
ping -c3 samba # Host

+OpenLDAP (May 6, 2017, 6:22 p.m.)


OpenLDAP is an open-source implementation of the Lightweight Directory Access Protocol (LDAP), created by the OpenLDAP Project. It is released under the OpenLDAP Public License and is available for all major Linux operating systems, AIX, Android, HP-UX, OS X, Solaris, z/OS, and Windows.

It works like a relational database in certain ways and can be used to store any information. It is not limited to storing information; it can also be used as a backend database for “single sign-on”.
1- sudo apt-get -y install slapd ldap-utils
During the installation, the installer will prompt you to set a password for LDAP administrator. Just enter a password of your wish.
2- Reconfigure OpenLDAP Server:
The installer will automatically create an LDAP directory based on the hostname of your server, which is not what we want, so we are now going to reconfigure the LDAP. To do that, execute the following command.

sudo dpkg-reconfigure slapd

You will need to answer a series of questions prompted by the reconfiguration tool.
Omit OpenLDAP server configuration? Select "No". (If you select yes, it will just cancel the configuration)


Choose the backend format for LDAP: HDB

Choose whether you want the database to be removed when slapd is purged. Select No.

If you have any old data in the LDAP, you could consider moving the database out of the way before creating a database. Select Yes.

You have the option to allow or disable LDAPv2 protocol. Select No.
3- Verify the LDAP:
sudo netstat -antup | grep -i 389
4- Generate base.ldif file for your domain:
vim /root/base.ldif

dn: ou=People,dc=deskbit,dc=local
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=deskbit,dc=local
objectClass: organizationalUnit
ou: Group
5- Build the directory structure:
ldapadd -x -W -D "cn=admin,dc=deskbit,dc=local" -f /root/base.ldif
6- Add LDAP Accounts:
Let’s create an LDIF (LDAP Data Interchange Format) file for a new user “ldapuser”:
vim /root/ldapuser.ldif

dn: uid=ldapuser,ou=People,dc=deskbit,dc=local
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldapuser
uid: ldapuser
uidNumber: 9999
gidNumber: 100
homeDirectory: /home/ldapuser
loginShell: /bin/bash
gecos: Test LdapUser
userPassword: {crypt}x
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
7- Use the ldapadd command to create a new user “ldapuser” in OpenLDAP directory:
ldapadd -x -W -D "cn=admin,dc=deskbit,dc=local" -f /root/ldapuser.ldif




+Date and Time From Command Prompt (May 3, 2017, 1:42 p.m.)

Display Current Date and Time:
$ date


Display The Hardware Clock (RTC):

# hwclock -r

OR show it in Coordinated Universal time (UTC):
# hwclock --show --utc


Set Date Command Example:
date -s "2 OCT 2006 18:00:00"

date --set="2 OCT 2006 18:00:00"


Set Time Examples:

date +%T -s "10:13:13"

To use %p (locale's equivalent of either AM or PM), enter:
# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"


How do I set the Hardware Clock to the current System Time?

Use the following syntax:
# hwclock --systohc

# hwclock -w


A note about systemd based Linux systems

With a systemd based system you need to use the timedatectl command to set or view the current date and time. Most modern distros such as RHEL/CentOS 7.x+, Fedora, Debian, Ubuntu, Arch Linux and other systemd based systems use the timedatectl utility. Please note that the above commands should work on modern systems too.


timedatectl: Display the current date and time:

$ timedatectl


Change the current date using the timedatectl command:
# timedatectl set-time YYYY-MM-DD

$ sudo timedatectl set-time YYYY-MM-DD

For example set the current date to 2015-12-01 (1st, Dec, 2015):
# timedatectl set-time '2015-12-01'
# timedatectl


To change both the date and time, use the following syntax:
# timedatectl set-time '2015-11-23 08:10:40'
# date


To set the current time only:

The syntax is:
# timedatectl set-time HH:MM:SS
# timedatectl set-time '10:42:43'
# date


Set the time zone using timedatectl command:

To see the list of all available time zones, enter:
$ timedatectl list-timezones
$ timedatectl list-timezones | more
$ timedatectl list-timezones | grep -i asia
$ timedatectl list-timezones | grep America/New

To set the time zone to ‘Asia/Kolkata’, enter:
# timedatectl set-timezone 'Asia/Kolkata'

Verify it:
# timedatectl


How do I synchronize the system clock with a remote server using NTP?

# timedatectl set-ntp yes

Verify it:
$ timedatectl


For changing the timezone:
dpkg-reconfigure tzdata
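
Beyond setting the clock, GNU date can also do date arithmetic and custom formatting, which is handy in scripts:

```shell
# Today and 20 days ago, in YYYY-MM-DD form (GNU date assumed):
date +%Y-%m-%d
date --date='-20 days' +%Y-%m-%d
```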


+OpManager (May 3, 2017, 10:37 a.m.)

1- apt-get install iputils-ping

2- Download OpManager for linux:
or another earlier version from the archive link:

chmod a+x ManageEngine_OpManager_64bit.bin
./ManageEngine_OpManager_64bit.bin -console
cd /opt/ManageEngine/OpManager/bin

+SNMP (May 1, 2017, 3:51 p.m.)

1- apt-get install snmp snmpd

2- /etc/snmp/snmpd.conf
Edit to:
agentAddress udp:161
view systemonly included .1

Add to the bottom:
com2sec readonly <manager-ip> public
com2sec readonly <network>/<prefix> public
com2sec readonly localhost public

3- /etc/init.d/snmpd restart
For checking if snmpd is running, and on what ip/port it's listening to, you can use:
netstat -apn | grep snmpd
Test the Configuration with an SNMP Walk:
snmpwalk -v1 -c public localhost
snmpwalk -v1 -c public <server-ip>
For getting information based on OID:
snmpwalk -v1 -c public localhost iso.

The OID Tree:

+SPICE (April 29, 2017, 1:21 p.m.)

What is SPICE?
SPICE (Simple Protocol for Independent Computing Environments) is a communication protocol for virtual environments. It allows users to see the console of virtual machines (VM) from anywhere via the Internet. It is a client-server model that imagines Virtualization Station as a host and users can connect to VMs via the SPICE client.
remote-viewer spice://srv1:5908
remote-viewer "spice://srv1:5901?password=1362913207771306286"
SPICE Tools:
To compile SPICE agent on Linux, download the agent from the following link:

Install the following packages:
1- apt install libglib2.0-dev libdrm-dev sudo libxxf86vm-dev libxt-dev xutils-dev flex bison xcb libx11-xcb-dev libxcb-glx0 libxcb-glx0-dev xorg-dev libxcb-dri2-0-dev libasound2-dev libdbus-1-dev

2- Extract the already downloaded agent file, and:
sudo make install
SPICE client on Ubuntu:
1- sudo apt install spice-vdagent
2- Create a file /etc/default/spice-vdagentd with the value:

+Extract ISO files (April 26, 2017, 12:28 p.m.)

sudo mount -o loop an_iso_file.iso /home/mohsen/Temp/foo/

+List all IPs in the connected network (April 21, 2017, 1:53 p.m.)

sudo apt-get install arp-scan
sudo arp-scan --interface=eth0 --localnet
sudo apt-get install nmap
nmap -sn <network>/<prefix>

+reprepro (March 4, 2017, 11:46 a.m.)

1- Install GnuPG and generate a GPG key for signing packages:
apt-get install gnupg dpkg-sig rng-tools
2-Open /etc/default/rng-tools:
vim /etc/default/rng-tools

and make sure you have the following line in it:

HRNGDEVICE=/dev/urandom
Then start rng-tools:
/etc/init.d/rng-tools start
3-Generate your key:
gpg --gen-key
4-Install and configure reprepro:
apt-get install reprepro

Let's use the directory /var/www/repo as the root directory for our repository. Create the directory /var/www/repo/conf:
mkdir -p /var/www/repo/conf
5-Let's find out about the key we have created in step 3:
gpg --list-keys

Our public key is D753ED90. We have to use this from now on.
6-Create the file /var/www/repo/conf/distributions as follows:
vim /var/www/repo/conf/distributions
7- The address of our apt repository will be http://reprepro.deskbit.local, so we use this in the Origin and Label lines. In the SignWith line, we add our public key (D753ED90). Drop out the "2048R/" part:

Origin: reprepro.deskbit.local
Label: reprepro.deskbit.local
Codename: stable
Architectures: amd64
Components: main
Description: Deskbit Proprietary Softwares
SignWith: D753ED90
8-Create the (empty) file /var/www/repo/conf/override.stable:
touch /var/www/repo/conf/override.stable
9-Then create the file /var/www/repo/conf/options with this content:
basedir /var/www/repo
10-To sign our deb packages with our public key, we need the package dpkg-sig:
dpkg-sig -k D753ED90 --sign builder /usr/src/my-packages/*.deb
11-Now we import the deb packages into our apt repository:
cd /var/www/repo
reprepro includedeb stable /usr/src/my-packages/*.deb
12-Configuring nginx:
We need a webserver to serve our apt repository. In this example, I'm using an nginx webserver.

server {
    listen 80;
    server_name reprepro.deskbit.local;

    access_log /var/log/nginx/packages-access.log;
    error_log /var/log/nginx/packages-error.log;

    location / {
        root /var/www/repo;
        index index.html;
        autoindex on;
    }

    location ~ /(.*)/conf {
        deny all;
    }

    location ~ /(.*)/db {
        deny all;
    }
}
OR for Apache:

<VirtualHost *:80>
    ServerName reprepro.deskbit.local
    DocumentRoot /var/www/repo
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
13- Let's export a public GPG key file for the repository (the file name is an example):
gpg --armor --output /var/www/repo/deskbit.gpg.key --export D753ED90
14-To use the repository, place the following line in your /etc/apt/sources.list:
vim /etc/apt/sources.list

deb http://reprepro.deskbit.local stable main
15-If you want this repository to always have precedence over other repositories, you should have this line right at the beginning of your /etc/apt/sources.list and add the following entry to /etc/apt/preferences:

vim /etc/apt/preferences:

Package: *
Pin: origin reprepro.deskbit.local
Pin-Priority: 1001
16-Before we can use the repository, we must import its key:
wget -O - -q http://reprepro.deskbit.local/deskbit.gpg.key | apt-key add -
(Adjust the key file name to whatever you exported in step 13.)

apt-get update

+Packages to Install (Feb. 24, 2017, 10:15 a.m.)

pavucontrol proxychains android-tools-adb android-tools-fastboot gimp-plugin-registry gimp gir1.2-keybinder-3.0 quodlibet python3-dev python-dev libjpeg-dev libfreetype6 libfreetype6-dev zlib1g-dev zip python-setuptools vim postgresql-server-dev-all postgresql libpq-dev curl geany python-pip tmux git virtaal gdebi-core gdebi smplayer yakuake vlc gparted krita transmission-gtk htop graphicsmagick-imagemagick-compat network-manager-l2tp python3-pip kaffeine pptp-linux network-manager-pptp aria2


pip3 install pipenv


Xtreme Download Manager:

wget -O xdman.deb


+PulseAudio Volume Control (Jan. 25, 2017, 9:12 a.m.)


+Find Gateway IP (Jan. 8, 2017, 2:49 p.m.)

ip route | grep default

+Faster grep (Jan. 7, 2017, 4:59 p.m.)

1- Install `parallel`
sudo apt-get install parallel

2- Begin search:
find . -type f | parallel -k -j150% -n 1000 -m grep -H -n "keyring doesn\'t exist" {}

+OpenCV - Facial Keypoint Detection (Sept. 24, 2016, 10:58 a.m.)

As computer vision engineers and researchers we have been trying to understand the human face since the very early days. The most obvious application of facial analysis is Face Recognition. But to be able to identify a person in an image we first need to find where in the image a face is located. Therefore, face detection — locating a face in an image and returning a bounding rectangle / square that contains the face — was a hot research area.

Once you have a bounding box around the face, the obvious research problem is to see if you can find the location of different facial features ( e.g. corners of the eyes, eyebrows, and the mouth, the tip of the nose etc ) accurately. Facial feature detection is also referred to as “facial landmark detection”, “facial keypoint detection” and “face alignment” in the literature, and you can use those keywords in Google for finding additional material on the topic.

+Check outgoing port (Sept. 14, 2016, 10:27 p.m.)

Use one of the tools to check if the outgoing VPS port is blocked:

telnet <remote-host> 80
nc -v <remote-host> 80
wget -qO- <remote-host>:80

+Write ISO file to DVD in terminal (Sept. 3, 2016, 9:13 p.m.)

Using this command, check where the DVD Writer is mounted: (/dev/sr0)
inxi -d

And using this command, start writing on the DVD:
wodim -eject -tao speed=8 dev=/dev/sr0 -v -data Downloads/linuxmint-18-kde-64bit-beta.iso

+See Linux Version (Aug. 15, 2016, 3:26 p.m.)

cat /etc/os-release

cat /etc/*release

uname -a

lsb_release -a

+Install OpenCV 3.0 with Python 3.4+ (Aug. 3, 2016, 4:31 p.m.)

sudo apt-get install libopenexr-dev
Install the above package in addition to the packages the linked guide says to! It is not included in its documentation.

First try doing the way the tutorial links in github says:

If you encounter problems, you can try the following notes too.
The following caused errors about ffmpeg libraries not being found, but the link above solved it.
1- sudo apt-get install build-essential cmake git pkg-config libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libgtk2.0-dev libatlas-base-dev gfortran python3.4-dev libgtk-3-dev libgstreamer0.10-dev libgstreamer-plugins-base1.0-dev libv4l-dev libopencv-dev build-essential cmake git libgtk2.0-dev pkg-config python-dev python-numpy libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev libtiff4-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libxine-dev libtbb-dev libqt4-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils unzip libavresample-dev yasm libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libx264-dev libxvidcore-dev libxvidcore4

ln -s /usr/include/libv4l1-videodev.h /usr/include/linux/videodev.h

I think this part is not needed. It was supposed to help fix ffmpeg errors when building OpenCV, but it did not:

cd ~/MyTemp/
tar xvf ffmpeg-0.11.1.tar.bz2
cd ffmpeg-0.11.1
./configure --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab
make -j4
sudo make install

2- Create a virtualenv and activate it

3- pip install numpy

4- Build and install OpenCV 3.0 with Python 3.4+ bindings:
cd ~/MyTemp
git clone
cd opencv
git checkout 3.0.0 (Referring to this website, you can see what version you need to write instead of 3.0.0; as of right now it's 3.1.0.)

5- We’ll also need to grab the opencv_contrib repo as well:
cd ~/MyTemp
git clone
cd opencv_contrib
git checkout 3.0.0
Again, make sure that you checkout the same version for opencv_contrib that you did for opencv above, otherwise you could run into compilation errors.

6- Time to setup the build:
cd ~/MyTemp/opencv
mkdir build
cd build
-D OPENCV_EXTRA_MODULES_PATH=~/MyTemp/opencv_contrib/modules \

7- make -j8


+PyCharm / IntelliJ IDEA allows only two spaces (July 26, 2016, 12:37 p.m.)

In settings search for `EditorConfig` and disable the plugin.

+Enable/Disable Bluetooth (July 26, 2016, 10:42 a.m.)

sudo rfkill block bluetooth
sudo update-rc.d bluetooth disable
service bluetooth status
sudo rfkill unblock bluetooth
sudo update-rc.d bluetooth enable
service bluetooth status

+Identify Computer Model (July 23, 2016, 10:48 a.m.)

sudo grep "" /sys/class/dmi/id/[bpc]*

+Error: Fixing recursive fault but reboot is needed! (July 17, 2016, 9:49 a.m.)

sudo nano /etc/default/grub



sudo update-grub2

+No partitions found while installing Linux (July 15, 2016, 9:28 p.m.)

1- Boot up linux with Live CD (the installation disk)
2- sudo su
3- sudo apt-get install gdisk
4- sudo gdisk /dev/sda
5- Select (1) for MBR
6- Type x for expert stuff
7- Type z to zap the GPT data
8- Type y to proceed destroying GPT data
9- Type n in order to not lose MBR data

Now restart the installation procedure.

+VMware Workstation (June 21, 2016, 5:37 p.m.)

Using this address, find the bundle file in "/linux/core/":

Extract the file (if it's a tar file) and run the bundle file with root permission:
# bash ./VMware-Workstation-12.5.2-4638234.x86_64.bundle
After installation, you'll need a serial number. Google the version and you'll find it finally ;-)
For this current version (12.5.2) the serial number is:

+Remove invalid characters from filenames (May 29, 2016, 8:18 a.m.)

find . -exec rename 's/[^\x00-\x7F]//g' "{}" \;

+PyCharm Regex (May 23, 2016, 2:07 a.m.)

{8}"name_ru": ".+?",\n
Search for any occurrences starting with a double quote:

+SASL authentication for IRC network using freenode (April 14, 2016, 7:36 p.m.)

port: 6697
Make sure to use "Secure Connection (SSL)".

+PouchDB (April 13, 2016, 9:54 a.m.)

sudo npm -g install pouchdb
sudo npm -g install angular-pouchdb
ionic plugin add cordova-sqlite-storage
There is a Chrome extension called PouchDB Inspector that allows you to view the contents of the database in the Chrome Developer Tools.
You cannot use the PouchDB Inspector if you loaded the app with ionic serve --lab, because that mode uses iframes to display the iOS and Android views. The PouchDB Inspector needs to access PouchDB via window.PouchDB, and it can't do that when the window is inside an <iframe>.
Keep in mind that when you're testing your Ionic app on a desktop browser it will use an IndexedDB or WebSQL adapter, depending on which browser you use. If you'd like to know which adapter is used by PouchDB, you can look it up:
var db = new PouchDB('birthdays');
console.log(db.adapter);
On a mobile device the adapter will be displayed as websql even if it is using SQLite, so to confirm that it is actually using SQLite you'll have to do this (see answer on StackOverflow):

var db = new PouchDB('birthdays'); { console.log(info); });

This will output an object with a sqlite_plugin attribute set to true or false.
There are two ways to insert data: the post method and the put method. The difference is that with post, PouchDB generates an _id for you, whereas with put you supply the _id yourself.
SQLite plugin for Cordova/PhoneGap

On Cordova/PhoneGap, the native SQLite database is often a popular choice, because it allows unlimited storage (compared to IndexedDB/WebSQL storage limits). It also offers more flexibility in backing up and pre-loading databases, because the SQLite files are directly accessible to app developers.

Luckily, there is a SQLite Plugin (also known as SQLite Storage) that accomplishes exactly this. If you include this plugin in your project, then PouchDB will automatically pick it up based on the window.sqlitePlugin object.

However, this only occurs if the adapter is 'websql', not 'idb' (e.g. on Android 4.4+). To force PouchDB to use the WebSQL adapter, you can do:
var db = new PouchDB('myDB', {adapter: 'websql'});

If you are unsure whether PouchDB is using the SQLite Plugin or not, just run

This will print some database information, including the attribute sqlite_plugin, which will be true if the SQLite Plugin is being used.

+KDE Menu Editor (April 2, 2016, 9:14 a.m.)


+Batch rename files (March 11, 2016, 10:53 a.m.)

for file in *.html
do mv "$file" "${file%.html}.txt"
done


for file in *
do mv "$file" "$file.mp3"
done

Remove the word "crop_" in all files:

for file in *; do mv "$file" "${file/crop_/}"; done
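The `${file%.html}` and `${file/crop_/}` forms above are plain bash parameter expansion. A minimal sketch you can run in an empty scratch directory (filenames here are made up for the demo):

```shell
# Demonstrate the two parameter expansions used by the rename loops above.
cd "$(mktemp -d)"
touch report.html crop_photo.jpg

# ${file%.html} strips the shortest trailing match of ".html"
for file in *.html
do mv "$file" "${file%.html}.txt"
done

# ${file/crop_/} removes the first occurrence of "crop_"
for file in crop_*
do mv "$file" "${file/crop_/}"
done

ls
```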


+Thinkpad Lenovo Bluetooth Driver (Feb. 15, 2016, 10:12 a.m.)
sudo apt-get install build-essential linux-headers-generic
cd rtl8723au_bt-troy
sudo make install

+Genymotion (April 10, 2016, 7:22 p.m.)

1-apt-get install libdouble-conversion1

2-Download `Ubuntu 14.10 and older, Debian 8` genymotion version from the following link:
The downloaded file name should be `genymotion-2.8.0-linux_x64.bin`.

3-sudo bash ./genymotion-2.8.0-linux_x64.bin

4-For running it, use this command:

5-You should already have the Genymotion VirtualBox (ova) files. If so, you need to change the path of VirtualBox virtual devices in settings to the location of your files.
Settings --> Virtualbox (tab) --> Browse

After this step I still could not see the list of virtual devices in the Genymotion program. I imported the ova files in VirtualBox, and they got displayed in Genymotion too.

+ADB (Nov. 2, 2015, 5:04 p.m.)

sudo apt-get install android-tools-adb android-tools-fastboot

+Gimp Plugin (Nov. 2, 2015, 5:03 p.m.)

sudo apt-get install gimp-plugin-registry

+Diff over SSH (Oct. 12, 2015, 10:40 a.m.)

diff /home/mohsen/Projects/Shetab/nespresso/nespresso/ <(ssh 'cat /home/shetab/websites/nespresso/nespresso/')

+Handbrake in Mint (Sept. 14, 2015, 5:02 p.m.)

sudo add-apt-repository ppa:stebbins/handbrake-snapshots
sudo apt-get update
sudo apt-get install handbrake

+Trim/Cut video files (Sept. 14, 2015, 2:03 p.m.)

ffmpeg -i video.mp4 -ss 10 -t 10 -c copy cut2.mp4

The first 10 is the start time in seconds:
10 ==> 10 seconds from start
1:10 ==> One minute and 10 seconds
1:10:10 ==> One hour, one minute and ten seconds

The second 10 is the duration.

+Retrieve Video File Information (Sept. 14, 2015, 12:02 p.m.)

mplayer -vo null -ao null -frames 0 -identify test.mp4

+Routing (Aug. 22, 2015, 4:58 p.m.)

ip route add {dst ip} via {gateway ip} dev ethx src {src ip}

+Change Hostname (Aug. 6, 2015, 11:14 p.m.)

nano /etc/hostname
/etc/init.d/ start

nano /etc/hosts
service hostname restart

+Get public IP address and email it (July 25, 2015, 1:17 p.m.)

Getting public IP address in bash:

wget -qO-
Getting it and emailing it (copy this script and paste it in a file with `.sh` extension):
IPADDRESS=$(wget -qO-
# IPADDRESS=$(curl
if [[ "${IPADDRESS}" != $(cat ~/.current_ip) ]]
then
    echo "Your new IP address is ${IPADDRESS}" |
    mail -s "IP address change"
    echo ${IPADDRESS} >|~/.current_ip
fi
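The script's logic can be sketched with the network lookup and `mail` stubbed out so it runs offline. In the real script, get_ip is the wget/curl call and notify is the mail command; the IP addresses below are made up for the demo:

```shell
# Change-detection sketch: notify only when the recorded IP differs.
STATE_FILE="$(mktemp)"

get_ip() { echo "$FAKE_IP"; }                   # stand-in for: wget -qO- <ip service>
notify() { echo "Your new IP address is $1"; }  # stand-in for: mail -s "IP address change" ...

check_ip() {
  ip=$(get_ip)
  if [[ "$ip" != "$(cat "$STATE_FILE" 2>/dev/null)" ]]; then
    notify "$ip"
    echo "$ip" > "$STATE_FILE"
  fi
}

FAKE_IP="" check_ip   # first run: notifies and records the address
FAKE_IP="" check_ip   # unchanged: prints nothing
FAKE_IP="" check_ip   # changed: notifies again
```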

+Libreoffice - Add/Remove RTL and LTR buttons to the formatting toolbar (July 8, 2015, 7:41 p.m.)

You have to enable Complex Text Layout (CTL) support:
Tools → Options → Language Settings → Languages
Enable `Complex Text Layout (CTL)`
Restart libreoffice.

+Installing Irancell 3G-4G Modem Driver (July 8, 2015, 10:53 a.m.)

1-sudo apt-get install g++-multilib libusb-dev libusb-0.1-4:i386

2-Connect the modem and copy the `linuxdrivers.tar.gz` file to your computer, extract it and cd to the directory.

3-CD to directory `drivers` and using the `install_driver` file, install the driver:
sudo ./install_driver

4-Create a shortcut from the file `` to make the connection procedure easier:
ln -s /home/mohsen/Programs/linuxdrivers/drivers/ .

5-To establish a connection use the command:
sudo ~/
And this is the output:

Looking for default devices ...
Found default devices (1)
Accessing device 007 on bus 003 ...

USB description data (for identification)
Manufacturer: Longcheer
Product: LH9207
Serial No.:
Looking for active driver ...
No driver found. Either detached before or never attached
Setting up communication with interface 0 ...
Trying to send the message to endpoint 0x01 ...
OK, message successfully sent
-> Run lsusb to note any changes. Bye.

sleep 3
ifconfig ecm0 up
dhclient ecm0

+Installing KDE and/or Gnome in Debian (June 9, 2015, 9:22 a.m.)

Install KDE in Debian

#apt-get install x-window-system-core kde

You'll probably also want to install KDM, for the KDE-style login screen.

#apt-get install kdm

Starting KDE

To start KDE, type


you may need to start X-Server if it is not running, to start it run


To start KDE each time (you probably want this) you'll need to edit your startup files. If you use KDM or XDM to log in, edit .xsession, otherwise edit .xinitrc or .Xclients.

Install Gnome in Debian

#apt-get install gnome

This will install additional software (gnome-office, evolution) that you may or may not want.


For a smaller set of apps, you can also do

# aptitude install gnome-desktop-environment

A set of additional productivity apps will be installed by

# aptitude install gnome-fifth-toe

+Quodlibet Multimedia Keys (June 3, 2015, 9:12 p.m.)

apt-get install gir1.2-keybinder-3.0

+Connecting to wifi network through command line (June 3, 2015, 6:13 p.m.)

1-sudo iwlist wlan0 scan
2-sudo iwconfig wlan0 essid "THE SSID"
3-iwconfig wlan0 key s:password
4-sudo dhclient wlan0

+Root Password Recovery (May 27, 2015, 1:24 p.m.)

rw init=/bin/bash

+Locale Settings (Feb. 5, 2016, 1:40 a.m.)

This first solution has worked for me. So before checking the other solutions, try this one first!

nano /etc/environment

Restart server and it should be fixed now!


locale-gen en_US.UTF-8

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
dpkg-reconfigure locales


This is a common problem if you are connecting remotely, so the solution is to not forward your locale. Edit /etc/ssh/ssh_config and comment out SendEnv LANG LC_* line.

+Proxy (May 10, 2015, 3:48 p.m.)

1-sudo apt-get install proxychains
2-ssh -D 1080 -fN root@
3-nano /etc/proxychains.conf
4-At the bottom of the file:
# add proxy here
# defaults set to "tor"
# socks4 9050
socks5 1080

5-sudo proxychains synaptic

6-If you did everything as a normal user or as root, keep in mind that in the terminal you should use the proxy as the same user. That is, if you ran (ssh -D ...) as root, that port is only available to root.

+Recover/Restore Firefox Master Password (April 19, 2015, 9:58 a.m.)

For resetting copy this url in the address-bar:

+TV Card Driver (April 17, 2015, 7:08 p.m.)
1-sudo apt-get install libproc-processtable-perl git libc6-dev
2-git clone git://
3-cd media_build
4-$ ./build
5-sudo make install
6-apt-get install me-tv kaffeine
7-reboot for loading the driver (I don't know the driver for modprobe yet).
Scan channels using Kaffeine:
1-Open Kaffeine
2-From `Television` menu, choose `Configure Television`.
3-From `Device 1` tab, from `Source` option, choose `Autoscan`
4-From `Television` menu choose `Channels`
5-Click on `Start Scan` and after the scan procedure is done, select all channels from the side panel and click on `Add Selected` to add them to your channels.
Scan channels using Me-TV
1-Open Me-TV
2-When the scan dialog opens, choose `Czech Republic` from `Auto scan`.

+PYTHONHOME and PYTHONPATH (April 4, 2015, 3:29 p.m.)

For most installations, you should not set these variables since they are not needed for Python to run. Python knows where to find its standard library.

The only reason to set PYTHONPATH is to maintain directories of custom Python libraries that you do not want to install in the global default location (i.e., the site-packages directory).

PYTHONHOME actually points to the directory of the standard library by default (e.g. /usr/local/lib/pythonXX).

+Environment Variable (April 3, 2015, 8:46 p.m.)
Commonly Used Shell Variables:
Use `set` command to display current environment
The $PATH defines the search path for commands. It is a colon-separated list of directories in which the shell looks for commands.
You can display the value of a variable using printf or echo command:
$ echo "$HOME"
You can modify each environmental or system variable using the export command. Set the PATH environment variable to include the directory where you installed the bin directory with perl and shell scripts:

export PATH=${PATH}:/home/vivek/bin


export PATH=${PATH}:${HOME}/bin
You can set multiple paths as follows:
export ANT_HOME=/path/to/ant/dir
export PATH=${PATH}:${ANT_HOME}/bin:${JAVA_HOME}/bin
How Do I Make All Settings permanent?
The ~/.bash_profile ($HOME/.bash_profile) or ~/.profile file is executed when you log in using the console or remotely using ssh. Type the following command to edit the ~/.bash_profile file:
$ vi ~/.bash_profile
Append the $PATH settings, enter:
export PATH=${PATH}:${HOME}/bin
Save and close the file.
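One caveat with appending to PATH in a profile file: every re-source adds another copy. A guarded append (a sketch; the helper name add_to_path is my own) keeps it idempotent:

```shell
# Append a directory to PATH only if it is not already there, so sourcing
# the profile repeatedly never piles up duplicates.
add_to_path() {
  case ":$PATH:" in
    *":$1:"*) ;;                      # already present: do nothing
    *) PATH="$PATH:$1"; export PATH ;;
  esac
}

add_to_path "$HOME/bin"
add_to_path "$HOME/bin"   # second call is a no-op
echo "$PATH"
```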

+subprocess installed post-installation script returned error exit status 1 (March 19, 2015, 12:30 a.m.)


Setting up python-gst0.10-dev (0.10.22-3ubuntu2) ...
dpkg: error processing package python-gst0.10-dev (--configure):
subprocess installed post-installation script returned error exit status 1
E: Sub-process /usr/bin/dpkg returned an error code (1)
sh -x /var/lib/dpkg/info/python-gst0.10-dev.postinst configure 0.10.22-3ubuntu2

+ set -e
+ pyversions --default
pyversions: /usr/bin/python does not match the python default version. It must be reset to point to python2.7

Fix it by pointing /usr/bin/python back at python2.7:
ln -sf /usr/bin/python2.7 /usr/bin/python

+Ubuntu Sources List Generator (March 18, 2015, 3:52 p.m.)

+Delete special files recursively (March 7, 2015, 2:36 p.m.)

find . -name "*.bak" -type f -delete

To preview the matches first, run the same command without -delete:
find . -name "*.bak" -type f
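A quick self-contained run of the preview-then-delete workflow, in a scratch directory with made-up filenames:

```shell
# Dry-run first, then delete and verify.
cd "$(mktemp -d)"
touch a.bak b.bak keep.txt

find . -name "*.bak" -type f          # preview: prints ./a.bak and ./b.bak
find . -name "*.bak" -type f -delete  # same expression, now with -delete
ls                                    # only keep.txt remains
```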

+How to stop services / programs from starting automatically (March 3, 2015, 11:27 a.m.)

update-rc.d -f apache2 remove

+Truetype Fonts (Arial Font) (Feb. 22, 2015, 1:10 p.m.)
apt-get install ttf-liberation

+Add Resolutions (Feb. 15, 2015, 11:19 a.m.)

1. Install arandr
apt install arandr

2. Run "arandr" from the applications menu.

3. Create a resolution by doing the following:
In this example, the resolution I want is 1920x1080
cvt 1920 1080

This will create a modeline like this:
Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync

Create the new mode:
xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync

4. Add the mode (resolution) to the desired monitor: (Get the list of active outputs from the "output" menu in Arandr application)
xrandr --addmode VGA-1 "1920x1080_60.00"

5- For switching to the newly created resolution:
xrandr -s 1920x1080


xrandr --output VGA-1 --mode "1920x1080"


6. Run arandr and position your monitors correctly

7. Choose 'layout' then 'save as' to save the script

8. I found the best place to load the script (under Xubuntu) is the settings manager:


Menu -> Settings -> Settings Manager -> Session and Startup -> Application Autostart

+Dump traffic on a network (Feb. 7, 2015, 11:33 a.m.)

tcpdump -nti any port 4301

To connect to it:
telnet 4301

+Show open ports and listening services (Feb. 7, 2015, 10:33 a.m.)

netstat -an | egrep 'Proto|LISTEN'
netstat -lnptu

+Make Bootable USB stick (Jan. 8, 2015, 7:50 p.m.)

sudo dd if=~/Desktop/linuxmint.iso of=/dev/sdx oflag=direct bs=1048576
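dd copies byte-for-byte, so the same invocation can be sanity-checked on regular files before pointing it at a real /dev/sdx. A sketch (the ISO here is a random stand-in file, not a real image):

```shell
# Copy a file with dd and verify the copy is identical.
cd "$(mktemp -d)"
head -c 100000 /dev/urandom > linuxmint.iso     # stand-in for a real ISO

dd if=linuxmint.iso of=copy.img bs=4096 status=none
cmp -s linuxmint.iso copy.img && echo "byte-for-byte identical"
```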


This method works better for making Windows images:

Download "WoeUSB" from the following link and use the GUI application to create the USB disk.


+Change locale/timezone and set the clock (Sept. 20, 2015, 1:57 p.m.)

1- ln -sf /usr/share/zoneinfo/Asia/Tehran /etc/localtime
2- apt install ntp
3- ntpd
4- hwclock -w


Linux Set Date Command Example
# date -s "2 OCT 2006 18:00:00"


# date --set="2 OCT 2006 18:00:00"


# date +%Y%m%d -s "20081128"


# date +%T -s "10:13:13"

10: Hour (hh)
13: Minute (mm)
13: Second (ss)

Use %p locale's equivalent of either AM or PM, enter:
# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"


yum install ntp
ln -sf /usr/share/zoneinfo/Asia/Tehran /etc/localtime
/etc/init.d/ntpd stop

+error ==> error while loading shared libraries (Dec. 18, 2014, 10:02 p.m.)

Locate the file using locate <> and copy it to /usr/lib

I also needed to copy it here too:

Locate the file using locate <> and copy it to /usr/lib64

+error ==> make command not found (Dec. 18, 2014, 11:46 a.m.)

apt-get install make build-essential

+wget certificate error (Dec. 18, 2014, 11:38 a.m.)

ERROR: The certificate of `' is not trusted.
ERROR: The certificate of `' hasn't got a known issuer.

If you don't care about checking the validity of the certificate just add the --no-check-certificate option on the wget command-line.

wget --no-check-certificate <url_link>

+Split and Join/Merging Files (Nov. 28, 2014, 11:58 a.m.)

split --bytes=1M NimkatOnline-1.0.0.apk NimkatOnline
-l ==> lines

b ==> bytes
M ==> Megabyte
G ==> Gigabytes

split --bytes=1M images/myimage.jpg new

split -b 22 newfile.txt new
Split the file newfile.txt into three separate files called newaa, newab and newac..., with each file containing 22 bytes of data.

split -l 300 file.txt new
Split the file file.txt into files beginning with the name new, each containing 300 lines of text.
For merging or joining files:
cat new* > newimage.jpg
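The split/join round trip above can be verified end to end with checksums (file names and sizes here are made up for the demo):

```shell
# Split into 1 MB pieces, join with cat, and compare checksums.
cd "$(mktemp -d)"
head -c 2500000 /dev/urandom > original.bin     # ~2.4 MB test file

split --bytes=1M original.bin part_             # part_aa, part_ab, part_ac
cat part_* > rejoined.bin                       # the glob sorts the pieces back in order

cksum original.bin rejoined.bin
```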

+Locate (Nov. 13, 2014, 10:03 p.m.)

Match the exact filename:
locate -b '\filename'

Don’t output all the results, but only the number of matching entries.
locate -c test

+SSH login without password (Nov. 13, 2014, 7:29 p.m.)

1-ssh-keygen -t rsa (No need to set a password)
2-ssh-copy-id user@server (appends your public key to the server's ~/.ssh/authorized_keys)

Now you can log in without a password.

+APT - The location where apt-get caches/stores .deb files (Oct. 18, 2014, 6:16 a.m.)

/var/cache/apt/archives/
+nano - Replace (Oct. 3, 2014, 11:08 p.m.)

In some versions of nano for `replacing` you can use:
Shift + Tab

And in some other versions:
CTRL + \

+Recover Files (Sept. 14, 2014, 7:24 p.m.)

Using this program you can undelete/recover deleted files:

After selecting the desired Hard Disk, press the capital `P` key to show all the deleted files.

+Setting Proxy Variable (Aug. 22, 2014, 12:44 p.m.)

export http_proxy="localhost:9000"
export https_proxy="localhost:9000"
export ftp_proxy="localhost:9000"

And for removing environment variables:
unset http_proxy
unset https_proxy
unset ftp_proxy

+Getting folder size (Aug. 22, 2014, 12:38 p.m.)

For getting the folder size along with its sub-folders:
du -sh /path/to/directory
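To compare several subdirectories at once, du's human-readable sizes can be piped through sort -h, which understands the K/M/G suffixes. A self-contained sketch with made-up directories:

```shell
# Per-subdirectory sizes, smallest first.
cd "$(mktemp -d)"
mkdir small big
head -c 1000   /dev/urandom > small/f
head -c 900000 /dev/urandom > big/f

du -sh ./*/ | sort -h        # smallest first, largest last
```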

+Join *.001, *.002, .... files (Aug. 22, 2014, 12:33 p.m.)

cat filename.avi.* > filename.avi

+Virtualbox (Nov. 4, 2015, 11:31 a.m.)

Virtualbox has some dependencies. You'd better follow this solution to install it.

1- Add the following line to your /etc/apt/sources.list:
deb xenial contrib

According to your distribution, replace 'xenial' by 'vivid', 'utopic', 'trusty', 'raring', 'quantal', 'precise', 'lucid', 'jessie', 'wheezy', or 'squeeze'.

For viewing the complete list of dists:

To see your Linux dist:
cat /etc/*release
Based on the codename line in the output, choose the dist (here, xenial).

2- apt-get update (using a proxy tool like proxychains)

3- apt-key adv --keyserver --recv-keys A2F683C52980AECF
The key depends on what you might get after apt-get update.
You need to re-run apt-get update.
Virtualbox has some dependencies. You'd better follow the top solution to install it.

Virtualbox 5 Download link: (It's blocked for us in Iran; use a proxy tool to bypass it).


You can download the file directly from: (It's also blocked; use a proxy tool).
Installing virtualbox:
apt-get install virtualbox virtualbox-4.3 virtualbox-dkms
For enabling USB 2.0 in VirtualBox, when checking `Enable USB 2.0...` in settings, I noticed an alert at the bottom of the window, `Invalid settings detected`. Hovering the mouse over it displayed:
"USB 2.0 is currently enabled for this virtual machine. However, this requires the Oracle VM VirtualBox Extension Pack to be installed..."

So, for solving this problem:
1-Check what version of virtual box you're using:
VBoxManage -version
It will display something like 4.3.6_Debianr91406

2-Open this link and follow the version of virtual box you got from `step 1`:

3-Find the package and download it:
Don't forget to find the whole version number... I mean the 91406 (from the `step 1`)

4-Install the package:
sudo vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.3.6-91406.vbox-extpack

5-Now, you need to add your username to the "vboxusers" group in order to gain access to your USB devices in the Virtual Machine:
sudo usermod -a -G vboxusers mohsen

6-Restart your PC/Laptop.

For viewing a list of installed packages:
VBoxManage list extpacks

For uninstalling the package:
sudo vboxmanage extpack uninstall "Oracle VM VirtualBox Extension Pack"
bash: /etc/init.d/vboxdrv: No such file or directory
sudo apt-get install build-essential linux-headers-`uname -r`

sudo dpkg-reconfigure virtualbox-dkms
sudo dpkg-reconfigure virtualbox
Increase VDI size:
vboxmanage modifymedium /media/mohsen/Programs/Virtual\ OS/VirtualBox\ VMs/Windows\ 10/ --resize 22000

After resizing, using the Disk Management tool available in Windows, right click on partition C: and extend it.
VBoxManage list vms
VBoxManage startvm "Debian - 8"

+Help, Manual (Aug. 22, 2014, 12:34 p.m.)

Get help:
Some commands don't have help messages or don't use --help to invoke them. On these mysterious commands, use this trick:

First, find out where the executable file is located (this trick will only work with programs, not shell builtins):
which command

The `which` command will tell you the path and file name of the executable program. Next, use the `strings` command to display text that may be embedded within the executable file. For example, if you wanted to look inside the bash program, you would do the following:
which bash
strings /bin/bash

The strings command will display any human readable content buried inside the program. This might include copyright notices, error messages, help text, etc.

Finally, if you have a very inquisitive nature, get the command's source code and read that. Even if you cannot fully understand the programming language in which the command is written, you may be able to gain valuable insight by reading the author's comments in the program's source.

+Dolphin (Aug. 22, 2014, 12:34 p.m.)

When working with `dolphin`, I can't disable the notification sounds; they break the alsa volume too. The only way to disable the sounds is to delete, move, or rename the sound files. So here is the path to the sounds. Do whatever pleases you :D

+ISO files (Aug. 22, 2014, 12:33 p.m.)

Convert .DAA Files To .ISO

Download and install PowerISO using the following link:
Scroll to the bottom of the page, in `Other downloads` section to get the linux version.

1- wget

2- tar -zxvf poweriso-1.3.tar.gz

3- You can copy the extracted file “poweriso” to /usr/bin to help all users of a computer to use it.
Now if you want to convert for example a .daa file to .iso use this command:
poweriso convert /path/to/source.daa -o /path/to/target.iso -ot iso
There are more useful commands of poweriso:
Task: list all files and directories in the home directory of /media/file.iso

poweriso list /media/file.iso /
poweriso list /media/file.iso / -r
For more commands, type:
poweriso -?
Convert DMG to ISO

1- Install the tool
sudo apt-get install dmg2img

2- The following command will convert the .dmg to .img file in ISO format:
dmg2img <file_name>.dmg

3- And finally, rename the extension:
mv <file_name>.img <file_name>.iso
Create ISO file from a directory:
mkisofs -allow-limited-size -o abcd.iso abcd

+Installing Flash Player (Aug. 22, 2014, 12:32 p.m.)

sudo apt-get install adobe-flashplugin

+Nautilus Bookmarks (Aug. 22, 2014, 12:26 p.m.)

Nautilus bookmarks configuration file location:

For seeing which version of nautilus you have:
nautilus --version

+Convert mp3 to ogg (Aug. 22, 2014, 12:32 p.m.)

Convert mp3 to ogg:
1-apt-get install mpg321 vorbis-tools
2-mpg321 input.mp3 -w raw && oggenc raw -o output.ogg

+Convert rpm to deb (Aug. 22, 2014, 12:26 p.m.)

Convert rpm to deb:
1-apt-get install alien
2-alien -d package-name.rpm

+Tmux (Aug. 22, 2014, 12:31 p.m.)

Prompt not following normal bash colors:

For fixing the problem, create a file `~/.tmux.conf` if it does not exist, and add the following to it:
set -g default-terminal "screen-256color"

set -g history-limit 100000


Tmux Plugin Manager:

git clone ~/.tmux/plugins/tpm

Put this at the bottom of ~/.tmux.conf:

# List of plugins
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'

# Initialize TMUX plugin manager (keep this line at the very bottom of tmux.conf)
run '~/.tmux/plugins/tpm/tpm'


Installing plugins:

1-Add new plugin to ~/.tmux.conf with set -g @plugin '...'
2-Press prefix + I (capital I, as in Install) to fetch the plugin.


Uninstalling plugins:

1-Remove (or comment out) plugin from the list.
2-Press prefix + alt + u (lowercase u as in uninstall) to remove the plugin.


Tmux-continuum plugin:

set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'

Automatic restore:
Last saved environment is automatically restored when tmux is started.
Put this in tmux.conf to enable:
set -g @continuum-restore 'on'
set -g @resurrect-capture-pane-contents 'on'


CPU/RAM/battery stats chart bar:

Install the plugin using CPAN:
sudo cpan -i App::rainbarf

If it's the first time you're using CPAN, you might be asked to let some helper modules be installed automatically. Choose (yes) and then (sudo) to let the plugin be installed.

After installation, create a config file ~/.rainbarf.conf with this content:

width=20 # widget width
bolt # fancy charging character
remaining # display remaining battery
rgb # 256-colored palette


Whole config file:

set -g default-terminal "screen-256color"
set-option -g status-utf8 on

set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'

set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @plugin 'tmux-plugins/tmux-logging'
set -g @continuum-restore 'on'
set -g @resurrect-capture-pane-contents 'on'

set -g history-limit 500000

set -g status-right '#(rainbarf)'
set -g default-command bash

run '~/.tmux/plugins/tpm/tpm'


PRESS CTRL+B and CTRL+I to install plugins after editing the .tmux.conf file.


CTRL + B and SHIFT + P to start (and end) logging in current pane.
CTRL + B and ALT + P to start (and end) to capture screen.

Save complete history:
CTRL + B and ALT + SHIFT + P

Clear pane history:
CTRL + B and ALT + C


Swap Window:
swap-window -s 3 -t 1


Copy paste in Tmux:

1- Enter copy mode using Control+b [
2- Navigate to beginning of text, you want to select and hit Control+Space.
3- Move around using arrow keys to select region.
4- When you reach end of region simply hit Alt+w to copy the region.
5- Now Control+b ] will paste the selection.


+PIL (Feb. 15, 2016, 11:04 a.m.)

For a successful and complete installation of PIL, you need to install these packages before installing PIL:

sudo apt-get install libjpeg-dev libfreetype6 libfreetype6-dev zlib1g-dev

If you're going to install it on python3:
apt-get install python3-dev
If it's for python 2:
apt-get install python-dev
The installation should be finished by now. Do the following if you still get errors and the jpeg library is not recognized by linux:

# ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib
# ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib
# ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib

Now proceed and reinstall PIL: pip install -U PIL

In case of this error:
#include <freetype/fterrors.h>
Create a symlink as follow:
ln -s /usr/local/include/freetype2/ /usr/local/include/freetype

+Undeleting (Aug. 22, 2014, 12:30 p.m.)

1-Install extundelete: apt-get install extundelete

2-Either "unmount" or "remount" the partition as read-only:
sudo mount -t vfat -O remount,ro /dev/sdb /mnt

To remount it back to read-write: (This task is not part of this tutorial. It's just for keeping a note.)
sudo mount -t vfat -O remount,rw /dev/sdb /mnt

3-For restoring the files from the whole partition:
extundelete /dev/sdb1 --restore-all
And for restoring important files quickly, you may use the --restore-file, --restore-files, or --restore-directory options.

+Error - ia32-libs : Depends: ia32-libs-i386 but it is not installable (Aug. 22, 2014, 12:29 p.m.)

The ia32-libs-i386 package is only installable from the i386 repository, which becomes available with the following commands:

dpkg --add-architecture i386
apt-get update

+Driver - Samsung Printer (July 20, 2015, 11:23 p.m.)

Installing My Samsung Printer Driver (SCX-4521F):

1-Add the following repository to /etc/apt/sources.list:
deb debian extra

2-Install the GPG key:
sudo apt-get install suldr-keyring
apt-get update

3-Install these packages:
apt-get install samsungmfp-driver-4.00.39 suld-configurator-2-qt4

+Grub rescue (Aug. 22, 2014, 12:02 p.m.)

I haven't tried it yet, so keep in mind to correct any problems:
mount /dev/sdaX /mnt
grub-install --root-directory=/mnt/ /dev/sda


Another day I just used these commands; some gave errors, but some worked, and to my surprise it fixed the problem:
set prefix=(hd0,1)/boot/grub
insmod (hd0,1)/boot/grub/linux.mod
insmod part_msdos
insmod ext2
set root=(hd0,1)
reboot using CTRL+ALT+DELETE

+Commands - iftop (Aug. 22, 2014, 12:23 p.m.)

iftop: InterFace Table of Processes

Install iftop for viewing what applications are using/eating up Internet.

iftop -i eth1

# The logs from xchat help:
in iftop hit `p` to toggle port display
now you know which port on your machine is connecting out to that domain
now use netstat -nlp to list all pids on which ports are connecting out
you should now know which pid is hitting that domain... provided all traffic originates on your local box
also consider using lsof for this sort of mining

+Error - Cannot Open Display (Aug. 22, 2014, 12:04 p.m.)

export XAUTHORITY=/home/<user>/.Xauthority


Try this new method:
aptitude -r install linux-headers-2.6-`uname -r|sed 's,[^-]*-[^-]*-,,'` nvidia-kernel-dkms nvidia-glx && mkdir /etc/X11/xorg.conf.d ; echo -e 'Section "Device"\n\tIdentifier "My GPU"\n\tDriver "nvidia"\nEndSection' > /etc/X11/xorg.conf.d/20-nvidia.conf

This is the old xorg.conf:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 280.13 ( Wed Jul 27 17:15:58 PDT 2011

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0"
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/psaux"
    Option "Emulate3Buttons" "no"
    Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Unknown"
    ModelName "Unknown"
    HorizSync 28.0 - 33.0
    VertRefresh 43.0 - 72.0
    Option "DPMS"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BusID "PCI:1:0:0"
    # Option "MetaModes" "1280x1024"
    Option "MetaModes" "1920x1080"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

+unrar (Aug. 22, 2014, 12:03 p.m.)

How to use Unrar command
First move the rar file to a directory, and then extract it there:
$ unrar e file.rar

List the contents of an archive:
$ unrar l file.rar
Unrar all files:
for file in *.part01.rar; do unrar x ${file}; done;

+Swap file (Aug. 22, 2014, 12:02 p.m.)

How to create a swap file:
1-dd if=/dev/zero of=/swapfile1 bs=1024 count=524288

if=/dev/zero : Read from /dev/zero, a special file that provides as many null characters as are read from it; they fill the storage file /swapfile1.
of=/swapfile1 : Write the output to the storage file /swapfile1.
bs=1024 : Read and write 1024 bytes at a time.
count=524288 : Copy exactly 524288 input blocks (524288 × 1024 bytes = 512 MB).

2-mkswap /swapfile1

3-chown root:root /swapfile1
chmod 0600 /swapfile1

4-swapon /swapfile1

5-nano /etc/fstab
Append the following line:
/swapfile1 swap swap defaults 0 0

6-To test/see the free space:
free -m

+Commands - rm (Aug. 22, 2014, noon)

rm -rfv `find . -iname "*.pyc"`
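The backtick form above breaks on paths containing spaces; find's own -delete handles any filename safely. A quick sketch in a scratch directory with made-up filenames:

```shell
# Delete *.pyc safely, including inside a directory whose name has a space.
cd "$(mktemp -d)"
mkdir -p "pkg with space"
touch "pkg with space/mod.pyc" top.pyc keep.txt

find . -iname "*.pyc" -type f -delete
ls                               # keep.txt and the now-empty directory remain
```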

+Define aliases (Aug. 22, 2014, noon)

Defining alias:
1-Open the file ~/.bashrc and write an alias like this:
alias myvps='ssh -p 54321'
2-Enter this command to make the changes take effect:
source .bashrc
3-Keep in mind that every time a change is done to .bashrc file, you have to reload it with:
source .bashrc

+Commands - mount (Aug. 22, 2014, noon)

mount -t ntfs /dev/sda1 /mnt/exhdd

To mount a floppy image:
sudo mount -t msdos -o loop -o umask=000 ./floppy.img /media/floppy

+Error - Errors were encountered while processing (Aug. 22, 2014, 11:59 a.m.)

E: Sub-process /usr/bin/dpkg returned an error code (1)
rm /var/lib/dpkg/info/samsungmfp-*

+ALSA (Aug. 22, 2014, 11:58 a.m.)

Find ALSA version:
cat /proc/asound/version
My sound card was installed. I knew it, using the command:
cat /proc/asound/modules
cat /proc/asound/cards

But there was no sound from my Laptop. I ran gstreamer-properties in normal user bash (not root), to test Audio device of my Laptop. I saw I don't have ALSA in the plugins section. Installing it, I could test my sound card. I heard sound and it solved my problem.
so using repo I found gstreamer0.10-alsa and installed it.

And I of course had to use the command:
alsactl init

For not doing the above command every time the system is turned on, I made my snd-hda-intel as the default sound card. (The tutorial is in this same file.)

+Commands - scp (Aug. 22, 2014, 11:43 a.m.)

The scp command allows you to copy files over ssh connections. This is pretty useful if you want to transport files between computers, for example to backup something. The scp command uses the ssh command and they are very much alike. However, there are some important differences.
The scp command can be used in three ways:
1-To copy from a (remote) server to your computer.
2-To copy from your computer to a (remote) server.
3-To copy from a (remote) server to another (remote) server.

In the third case, the data is transferred directly between the servers; your own computer will only tell the servers what to do. These options are very useful for a lot of things that require files to be transferred, so let's have a look at the syntax of this command:
scp examplefile yourusername@yourserver:/home/yourusername/
You can also copy a file (or multiple files) from the (remote) server to your own computer. Let's have a look at an example of that:
scp yourusername@yourserver:/home/yourusername/examplefile .

The dot at the end means the current local directory. This is a handy trick that can be used about everywhere in Linux. Besides a single dot, you can also type a double dot ( .. ), which is the parent directory of the current directory.
You probably already guessed that the following command copies a file from a (remote) server to another (remote) server:
scp yourusername@yourserver:/home/yourusername/examplefile yourusername2@yourserver2:/home/yourusername2/
Please note that, to make the above command work, the servers must be able to reach each other, as the data will be transferred directly between them. If the servers somehow can't reach each other (for example, if port 22 is not open on one of the sides) you won't be able to copy anything. In that case, copy the files to your own computer first, then to the other host. Or make the servers able to reach each other (for example by opening the port).
Specifying a port with scp:
The scp command acts a little different when it comes to ports. You'd expect that specifying a port should be done this way:
scp -p yourport yourusername@yourserver:/home/yourusername/examplefile .
However, that will not work. You will get an error message like this one:
cp: cannot stat `yourport': No such file or directory
This is caused by the different architecture of scp. It aims to resemble cp, and cp also features the -p option. However, in cp terms it means 'preserve', and it causes the cp command to preserve things like ownership, permissions and creation dates. The scp command can also preserve things like that, and the -p option enables this feature. The port specification should be done with the -P option. Therefore, the following command will work:
scp -P yourport yourusername@yourserver:/home/yourusername/examplefile .
Also note that the -P option must be in front of the (remote) server. The ssh command will still work if you put -p yourport behind the host syntax, but scp won't. Why? Because scp also supports copying between two servers and therefore needs to know which server the -P option applies to.
Copying files from a remote computer using ssh
scp root@ /home/mohsen/Desktop/

To copy from the local machine, to the remote machine, just reverse things:
scp /home/mohsen/Desktop/ root@

+Auto start script at boot time (Aug. 22, 2014, 11:39 a.m.)

To make a script run when the server starts and stops:
First make the script executable with this command:
sudo chmod 755 <path to the script>
sudo /usr/sbin/update-rc.d -f <script name (located in /etc/init.d)> defaults

+Hardware - Sound card (Aug. 22, 2014, 11:36 a.m.)

Removing and Re-installing Sound card
sudo apt-get --purge remove linux-sound-base alsa-base alsa-utils
sudo apt-get install linux-sound-base alsa-base alsa-utils

+Network - Server config (Aug. 22, 2014, 11:33 a.m.)

I used this command in rc.local to allow eth0 to get an IP:
route add -net netmask gw

Add this to /etc/network/interfaces

Create a file named /etc/resolv.conf and write this command in it:

ifconfig eth0 broadcast

+Backlight (Screen Brightness) (Aug. 22, 2014, 11:32 a.m.)

For solving the backlight brightness problem, go to /etc/default/grub and edit the line GRUB_CMDLINE_LINUX_DEFAULT to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_osi=Linux acpi_backlight=vendor splash"
And then:
Check if graphics card is intel:
ls /sys/class/backlight

You should see something like:
ideapad intel_backlight

Fix backlight:
Create this file: /usr/share/X11/xorg.conf.d/20-intel.conf

Section "Device"
Driver "intel"
Option "Backlight" "intel_backlight"
Identifier "card0"
EndSection

Logout and Login. Done.

+IRC (Aug. 22, 2014, 11:28 a.m.)

1-Join the Freenode network. Open your favorite IRC client and type:

2-Choose a user name or nick. This user name should consist only of the letters from A-Z, the numbers from 0-9 and certain symbols such as "_" and "-". It may have a maximum of 16 characters.

3-Change your user name to the user name you have chosen. Suppose you chose the nickname "awesomenickname". Type the following in the window titled Freenode:
/nick awesomenickname

4-Register your nick or user name. Type the following command and replace "your_password" with a password that will be easy to remember, and replace "your_email_address" with your email address.
/msg nickserv register your_password your_email_address

5-Verify your registration. After you register, you will not be able to identify to NickServ until you have verified your registration. To do this, check your email for an account verification code.

6-Group an alternate nickname with your main one. If you would like to register an alternate nickname, first switch to the alternate nickname that you want while you are identified as the main one, then group your nicks together with this command:
/msg nickserv group

7-Identify with Nickserv. Each time you connect, you should sign in, or "identify" yourself, using the following command:
/msg nickserv identify your_password

You can send private messages anytime after step 4. The advantage of the other steps is to make your registration much more secure. To send a private message, you simply do the following, replacing Nick with the nick or user name of the person you wish to contact privately and message with the message you want to start with:
/msg Nick message

Take care to follow this process in the Freenode window, not directly in a channel. If you type all the commands correctly, nothing should be visible to others, but it's very easy to type something else by mistake, and in so doing, you could expose your password.

Choose a nick between 5 and 8 characters long. This will make it easier to identify and avoid confusion. Choose your nick wisely. Remember that users will identify this name with your person.

User names will automatically expire after 60 days of disuse. This is counted from the last time it was identified with NickServ. If the nickname you want is registered but not in use, you can contact Freenode staff to unassign it for you. If you will not be able to use IRC for 60 days, you can extend the time using the vacation command (/msg nickserv vacation). Vacation will be disabled automatically the next time you identify to NickServ.

To check when a nick was last identified with NickServ, use /msg NickServ info Nick

The Freenode staff have an option enabled to receive private messages from unregistered users so if you wish to request that a nick be freed, you do not have to register another.
To contact a member of the staff, use the command /stats p or /quote stats p if the first doesn't work. Send them a private message using /query nick.
In case there is no available staff member in /stats p, use /who freenode/staff/* or join the channel #freenode using /join #freenode.

Avoid using user names that are brand names or famous people, to avoid conflicts.

If you don't want your IP to be seen to the public, contact FreeNode staff and they can give you a generic "unaffiliated" user cloak, if you are not a member of a project.

If you want to hide your email address, use /msg nickserv set hidemail on.

If you need to change your password, type /ns set password new_password. You will need to be logged in.
# select nick name
/nick yournickname

# better don't show your email address:
/ns set hide email on

# register (only one time needed) - PW is in clear text!!
/msg NickServ register [password] [email]

# identify yourself to the IRC server (always needed) (xxxx == pw)
/msg NickServ IDENTIFY xxxx

# Join a channel
/join #grass
Registering a channel:
1-To check whether a channel has already been registered, use the command:
/msg ChanServ info #Mohsen or ##Mohsen

2-/join #Mohsen

3-/msg ChanServ register #Mohsen

For gaining OP:
/MSG chanserv op #shahbal Mohsen_Hassani

+zip (Aug. 22, 2014, 11:25 a.m.)

To zip just one file (file.txt) to a zipfile (, type the following:
zip file.txt

To zip an entire directory:
zip -r directory

zip -r -e saverestorepassword saverestore
The -e flag will prompt you to specify a password and then verify it. You will see nothing happening in Terminal as you type the password. This will create a password-protected zip file named containing your saverestore directory.
In the above examples, the name of the zip file can be whatever name you choose.


unzip -d music
This will extract the contents of to the music folder. Caveat, the directory must already exist.

Now let's extract the file. In this example I'll extract it to my music folder so I don't overwrite my current data in the saverestore folder. Again, this assumes you've just launched Terminal:
cd /media/internal
unzip -d music

In the above two examples, the -d flag indicates to extract the zip file to the directory specified, music in this case.
For excluding a directory in zip:
zip -r test -x "path/to/exclusion/directory/*"
1-Take note that the exclusion path should be in quotes, with a star at the end.
2-The * (star) at the end of the command excludes ALL the sub-files and sub-directories, so don't forget to use it!
3-The path should not start from '/home/mohsen/...'; it should be relative to the directory where you run the command.

+Commands - ssh (Aug. 22, 2014, 11:22 a.m.)

SSH is some kind of an abbreviation of Secure SHell. It is a protocol that allows secure connections between computers.
To connect to an SSH service listening on another port:
ssh -p yourport yourusername@yourserver

Running a command on the remote server:
Sometimes, especially in scripts, you'll want to connect to the remote server, run a single command and then exit again. The ssh command has a nice feature for this. You can just specify the command after the options, username and hostname. Have a look at this:
ssh yourusername@yourserver updatedb
This will make the server update its searching database. Of course, this is a very simple command without arguments. What if you'd want to tell someone about the latest news you read on the web? You might think that the following will give him/her that message:
ssh yourusername@yourserver wall "Hey, I just found out something great! Have a look at!"
However, bash will give an error if you run this command:
bash: !": event not found
What happened? Bash (the program behind your shell) tried to interpret the command you wanted to give ssh. This fails because there are exclamation marks in the command, which bash will interpret as special characters that should initiate a bash function. But we don't want this, we just want bash to give the command to ssh! Well, there's a very simple way to tell bash not to worry about the contents of the command but just pass it on to ssh already: wrapping it in single quotes. Have a look at this:
ssh yourusername@yourserver 'wall "Hey, I just found out something great! Have a look at!"'
The single quotes prevent bash from trying to interpret the command, so ssh receives it unmodified and can send it to the server as it should. Don't forget that the single quotes should be around the whole command, not anywhere else.
Remove a host's keys from known_hosts:
sudo ssh-keygen -R hostname
Creating ssh key:
ssh-keygen -t rsa
When the server has just been (re)installed, remove its old host key so that the first access is possible:
ssh-keygen -R <ip of server>
SSH Tunnel:
1-Create a user on the server:
adduser <username>

2-Copy the user's ssh_key from his computer to the server:
ssh-copy-id -i ~/.ssh/ <username>@<server_ip>

3-Run this command on user's computer:
ssh -D <an optional port, like 9000> -fN <username>@<server_ip>

4-Change the Connection Settings of Mozilla, SOCKS Host:
localhost 9000

+Error - GPG error: ... NO_PUBKEY (Aug. 22, 2014, 11:21 a.m.)

While "apt-get update" I encountered an error telling me "GPG error: ... NO_PUBKEY DB141E2302FDF932"
So, for solving the problem I used this command:
apt-key adv --keyserver --recv-keys DB141E2302FDF932

+wget (Aug. 22, 2014, 11:18 a.m.)

ERROR: The certificate of `' is not trusted.
ERROR: The certificate of `' hasn't got a known issuer.

wget --no-check-certificate <url_link>


Mirror an entire website
wget -m


Mirror entire website:

wget --mirror --random-wait --convert-links --adjust-extension --page-requisites --no-host-directories -erobots=off --no-cache


Print file to stdout like curl does:

wget -O -


Recursively download only files with the pdf extension upto two levels away:

wget -r -l 2 -A "*.pdf"


Get your external ip address from and echo to STDOUT:

wget -O - | tail


Open tarball without downloading:

wget -qO - "" | tar zxvf -


The option -c or --continue will resume an interrupted download:

wget -c


Download a list of urls from a file:

wget -i urls.txt


Save file into directory:

wget -P path/to/directory


Saves the HTML of a webpage to a particular file:

wget -O bro.html


Download entire website:

Short Version:
wget --user-agent="Mozilla" -mkEpnp


wget --mirror --convert-links --adjust-extension --page-requisites --no-parent

Explanation of the various flags:

--mirror – Makes (among other things) the download recursive.
--convert-links – convert all the links (also to stuff like CSS stylesheets) to relative, so it will be suitable for offline viewing.
--adjust-extension – Adds suitable extensions to filenames (html or css) depending on their content-type.
--page-requisites – Download things like CSS style-sheets and images required to properly display the page offline.
--no-parent – When recursing, do not ascend to the parent directory. It is useful for restricting the download to only a portion of the site.


+VPN (Aug. 22, 2014, 11:15 a.m.)

Configure VPN:
Start by browsing to System » Preferences » Network Connections » VPN.
If you have never setup a VPN connection before there is a good chance that all the buttons, like "Add", are grayed out. Fix this by opening a terminal and running this command:
sudo apt-get install pptp-linux network-manager-pptp
Now go back to the Network Connections window and the VPN tab inside of it; the Add button should now be clickable. Click it, select Point-to-Point Tunneling Protocol (PPTP) in the drop-down and click Create.
Type something like RaptorVPN in for Connection name. For Gateway, enter
Type in the RaptorVPN-provided password and then click Advanced.
In the Authentication section, uncheck all but MSCHAPv2.
In the Security and Compression section, check the box for Use Point-to-Point encryption (MPPE) and select 128-bit (most secure) in the drop-down below it. Then check the box for Allow stateful encryption and click OK and Apply.
If at any point during the VPN setup you see a keyring message like the one below, click Always Allow.
Restart the network manager by running this command in the terminal:
sudo /etc/init.d/network-manager restart
Now you are ready to take your new RaptorVPN connection for a test drive. Click the network icon in the taskbar and click on your new VPN connection.
A few seconds later you should be successfully connected!

+Change default sound card (Aug. 22, 2014, 11:14 a.m.)

nano /etc/modprobe.d/alsa-base.conf
and add:
options audigy (or whatever it is called) index=0
options logitech (or whatever it is called) index=1
and restart alsa
/etc/init.d/alsa-utils restart
asoundconf set-default-card Xmod
In terminal type
less /proc/asound/modules
That will show you which sound cards occupy which slot and what their names are.
My output is
0 snd_au8830
1 snd_intel8x0
so it should look something like that.
Now identify which cards you don't want to use and take their names.
In terminal now type
sudo nano /etc/modprobe.d/alsa-base.conf
Find the place where it says something like
# Prevent abnormal drivers from grabbing index 0
and in the list below add
options snd_whateveryourcardnameswere index=-2
Since you have two cards you want to blacklist, you add two lines with different names.
Now save /etc/modprobe.d/alsa-base.conf and reboot the computer.

+Commands - lsof (Aug. 22, 2014, 11:12 a.m.)

lsof -i:<port>
Example: lsof -i:80
Displays the process which uses port 80.

+VGA Switcheroo (Aug. 22, 2014, 11:11 a.m.)

Once you've ensured that vga_switcheroo is available, you can use these options to switch between GPUs.
echo ON > /sys/kernel/debug/vgaswitcheroo/switch
Turns on the GPU that is disconnected (not currently driving outputs), but does not switch outputs.
echo IGD > /sys/kernel/debug/vgaswitcheroo/switch
Connects integrated graphics with outputs.
echo DIS > /sys/kernel/debug/vgaswitcheroo/switch
Connects discrete graphics with outputs.
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
Turns off the graphics card that is currently disconnected.
There are also a couple of options that are useful from inside an X-Windows session:
echo DIGD > /sys/kernel/debug/vgaswitcheroo/switch
Queues a switch to integrated graphics to occur when the X server is next restarted.
echo DDIS > /sys/kernel/debug/vgaswitcheroo/switch
Queues a switch to discrete graphics to occur when the X server is next restarted.

+Changing the boot count down time (Aug. 22, 2014, 11:07 a.m.)

nano /etc/default/grub

+Commands - ps (Aug. 22, 2014, 11:06 a.m.)

ps -A
Lists all processes

kill <PID of process>
Terminates a process
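A minimal sketch of the ps/kill cycle on a throwaway background process:

```shell
sleep 30 &                    # start a disposable background process
pid=$!
ps -p "$pid" -o pid=,comm=    # visible while it runs
kill "$pid"                   # terminate it by PID
wait "$pid" 2>/dev/null || true   # reap it; wait reports the kill
```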

+Changing the attributes of a file/directory (Aug. 22, 2014, 11:05 a.m.)

Use the chmod command.
The attributes are read/write/execute for owner/group/other, with the values being:
4 (read), 2 (write), 1 (execute), summed per position.

To give everyone execute only access to a file, you'd
chmod 111

or all permissions, it'd be
chmod 777

Root only r/w/x would be
chmod 700

First digit = owner
Second digit = group
Third digit = other

+Commands - ls (Aug. 22, 2014, 11:04 a.m.)

ls -r
Reverse order while sorting

ls -F
Shows executable files with '*' sign and link files with '@'

ls -t
Sort by time
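A quick sketch of the -t flag with two hypothetical files whose modification times differ:

```shell
touch older
sleep 1
touch newer
ls -t older newer   # sorted by time, newest first: newer, then older
```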

+Commands - echo (Aug. 22, 2014, 11:03 a.m.)

echo + message
Displays the message on the screen.

echo + message > + filename
If the filename exists, its content is overwritten with the "message". If the file doesn't exist, it creates the file and writes the "message" in it.

echo + message >> + filename
Adds the "message" to the end of the file.
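The three forms side by side, on a hypothetical file demo.txt:

```shell
echo "first" > demo.txt    # creates (or overwrites) the file
echo "second" >> demo.txt  # appends to the end
cat demo.txt               # prints "first" then "second"
```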

+Commands - head and tail (Aug. 22, 2014, 10:57 a.m.)

head: prints the first part of files.
head -n 4 filename
Prints the first 4 lines of the file.

tail: prints the last part of files.
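Both commands on a hypothetical ten-line file:

```shell
seq 10 > nums.txt      # file containing lines 1..10
head -n 4 nums.txt     # prints lines 1 2 3 4
tail -n 3 nums.txt     # prints lines 8 9 10
```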

+Bash - Adding commands to bash (Aug. 22, 2014, 10:54 a.m.)

1-Using this command, you can see the paths that Linux uses to find the commands:
env | grep PATH

2-Now you should add the address of your program to this PATH, using 'export' command, as follows:
If you use
export PATH=address-of-program
the existing addresses will be removed, which will cause the terminal to not recognize the commands.

So the thing you should do is:
Copy and Paste what there is in "env | grep PATH" and add "The address of specific program" like this command:
export PATH=/usr/local/sbin:/usr/local/bin:...:/home/mohsen/Programs/Debian/MyBashCommands

This directory MyBashCommands should already be created, and only the executable files should be copied into it.
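A self-contained sketch of the whole idea, using hypothetical names demo-bin and hello-demo:

```shell
mkdir -p demo-bin
cat > demo-bin/hello-demo <<'EOF'
#!/bin/sh
echo hello
EOF
chmod +x demo-bin/hello-demo
export PATH="$PATH:$PWD/demo-bin"   # append to PATH; never replace it wholesale
hello-demo                          # prints: hello
```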

+Kernel - Remove (Aug. 22, 2014, 10:53 a.m.)

Delete the files/directories:
/boot/vmlinuz-*kernel version*
/boot/initrd-*kernel version*
/boot/config-*kernel version*
/boot/*kernel version*

/lib/modules/*kernel version*

/var/lib/initramfs-tools/*kernel version*

update-initramfs -u

+Kernel - Update (Aug. 22, 2014, 10:51 a.m.)

First way:

Copy kernel to /usr/src
tar -xvf kernel-source.tar.bz2
cd kernel-source
mkdir ../build
make clean
make mrproper
make O=../build menuconfig
make -j3 O=../build
make O=../build modules_install install
cd /boot/
mkinitramfs -v -o linux_version // if it didn't create initrd.img+linux_version, then use the following command
update-initramfs -u
//update-initramfs -c -k linux_version // to see the list of available versions go to /lib/modules
Second Way:

1-What to install before starting:
kernel-source-2.4.18 (or whatever kernel sources you will be using)
tk8.0 or tk8.1 or tk8.3
bin86 (for building 2.2.x kernels on PCs)

2-Expanding the source tarball
Copy the kernel-source to /usr/src and unzip it using the following command:
tar -jxf kernel-source-2.4.18.tar.bz2

3-Setting up the symlink
ln -s kernel-source-2.4.18 linux

4-Checking Current Minimal Requirements
The part of "Current Minimal Requirements" should be studied and the requirements should be installed.

5-Configuring the kernel:
make xconfig
make menuconfig
This command should display a long list of available kernel elements so that we can select what is to be compiled.

This command makes the system prepare the kernel using the selected elements; it might take hours to finish this step.

7-Check in the same /usr/src address to see if the new Kernel-image-2.6.38_...Custom.deb is created!

8-Making the kernel image:
fakeroot make-kpkg clean
fakeroot make-kpkg --append-to-version=.030320 kernel_image

9-Installing the kernel-image package:
dpkg -i kernel-image-

10-echo "kernel-image- hold" | dpkg --set-selections
After this command, when you use this command "dpkg --get-selections | grep kernel-image", the output should be like this: "kernel-image- hold"

11-Removing the symlink:
cd /usr/src
rm linux

12-(Optional) Removing old kernels:
cd /boot
dpkg -P kernel-image-
dpkg -P pcmcia-modules-

13-Updating Grub
update-initramfs -c -k 2.6.38-1-amd64 // to see the list of available versions go to /lib/modules



+Driver - A site for checking and reporting device drivers (Aug. 22, 2014, 10:49 a.m.)

+Fan (Aug. 22, 2014, 10:48 a.m.)

echo -n 3 > /proc/acpi/fan/FAN/state
The value 3 may need to be 1 or 0.
0 turns the fan on; other values turn it off.

+Dictionary - StarDict (Aug. 22, 2014, 10:42 a.m.)

sdcv is the console version of Stardict.

apt-get install sdcv

Install downloaded dictionaries:
Make the directory where sdcv looks for the dictionary:
sudo mkdir -p /usr/share/stardict/dic/

-l: display list of available dictionaries and exit.
-u: for search use only dictionary with this bookname
-n: for use in scripts
--data-dir path/to/directory: Use this directory as the path to the stardict data directory. This means that sdcv searches for dictionaries in the data-dir/dic directory.

Converting Babylon glossaries to StarDict dictionary:
The output of this command is three files:

Place all these 3 files in /usr/share/stardict/dic/ creating a separate folder for each dictionary.

+Shutting down (Aug. 22, 2014, 10:42 a.m.)

shutdown -r now
shutdown -r 7:00

+Directories (Aug. 22, 2014, 10:28 a.m.)

/bin - Essential user commands
The /bin directory contains essential commands that every user will need. This includes your login shell and basic utilities like ls. The contents of this directory are usually fixed at the time you install Linux. Programs you install later will usually go elsewhere.

/usr/bin - Most user commands
The /usr hierarchy contains the programs and related files meant for users. (The original Unix makers had a thing for abbreviation.) The /usr/bin directory contains the program binaries. If you just installed a software package and don't know where the binary went, this is the first place to look. A typical desktop system will have many programs here.

/usr/local/bin - "Local" commands
When you compile software from source code, the installed files are usually kept separate from those provided as part of your Linux distribution. That is what the /usr/local/ hierarchy is for.

/sbin - Essential System Admin Commands
The /sbin directory contains programs needed by the system administrator, like fsck, which is used to check file systems for errors. Like /bin, /sbin is populated when you install your Linux system, and rarely changes.

/usr/sbin - Non-essential System Administration Programs (binaries)
This is where you will find commands for optional system services and network servers. Desktop tools will not show up here, but if you just installed a new mail server, this is where to look for the binaries.

/usr/local/sbin - "Local" System Administration Commands
When you compile servers or administration utilities from source code, this is where the binaries normally will go.

Libraries are shared bits of code. On Windows these are called DLL files (Dynamic Loading Libraries). On Linux systems they are usually called SO (Shared Object) files. As to location, are you detecting a pattern yet? There are three directories where library files are placed: /lib, /usr/lib, and /usr/local/lib.

Documentation is a minor exception to the pattern of file placement. Pages of the system manual (man pages) follow the same pattern as the programs they document: /man, /usr/man, and /usr/local/man. You should not access these files directly, however, but by using the man command.
Many programs install additional documentation in the form of text files, HTML, or other things that are not man pages. This extra documentation is stored in directories under /usr/share/doc or /usr/local/share/doc. (On older systems you may find this under /usr/doc instead.)
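To see which of these directories a given command actually lives in on your system, command -v prints its resolved path:

```shell
command -v ls   # e.g. /bin/ls or /usr/bin/ls, depending on the distribution
```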

+configure (Aug. 22, 2014, 10:25 a.m.)

When installing a package, the first phase is `./configure`. This is some information about it:

The primary job of the configure script is to detect information about your system and "configure" the source code to work with it.
Usually it will do a fine job at this. The secondary job of the configure script is to allow you, the system administrator, to customize the software a bit.
Running ./configure --help should give you a list of command line arguments you can pass to the configure script. Usually these extra arguments are for enabling or disabling optional features of the software, and it is often safe to ignore them and just type ./configure to take the default configuration.

There is one common argument to configure that you should be aware of. The --prefix argument defines where you want the software installed. In most source packages this will default to /usr/local/ and that is usually what you want. But sometimes you may not have root access to the system, and you would like to install the software into your home directory. You can do this with the last command in the example, ./configure --prefix=/home/vince (where vince is your user name).

+Tarballs (Tar Archive) (Aug. 22, 2014, 10:21 a.m.)

tar -xzvf filename.tar.gz

x : eXtract
z : deal with a gzipped file
j : deal with a bzipped file
v : verbose
f : read from a file (rather than a tape device)


Creating a tar File:
tar -cvf output.tar /dirname

tar -cvf Projects.tar Projects --exclude=Projects/virtualenvs --exclude=".buildozer" --exclude=".git"

tar -cvf output.tar /dirname1 /dirname2 filename1 filename2

tar -cvf output.tar /home/vivek/data /home/vivek/pictures /home/vivek/file.txt

tar -cvf /tmp/output.tar /home/vivek/data /home/vivek/pictures /home/vivek/file.txt


-c : Create a tar ball.
-v : Verbose output (show progress).
-f : Output tar ball archive file name.
-x : Extract all files from archive.tar.
-t : Display the contents (file list) of an archive.


Create a tar Archive File:
tar -cf abcd.tar /home/mohsen/abcd

Untar Single file from tar File:
tar -xf abcd.tar x.png
tar --extract --file=abcd.tar x.png

Untar Multiple files:
tar -xf abcd.tar "x.png" "y.png" "z.png"


Create tar.gz Archive File (compressed gzip archive):
tar -czf abcd.tar.gz /home/mohsen/abcd

Uncompress tar.gz Archive File:
tar -xf abcd.tar.gz
tar -xf abcd.tar.gz -C /home/mohsen/Temp/

List Content tar.gz Archive File:
tar -tvf abcd.tar.gz

Untar Single file from tar.gz File:
tar -zxf abcd.tar.gz x.png
tar --extract --file=abcd.tar.gz x.png

Untar Multiple files:
tar -zxf abcd.tar.gz "x.png" "y.png" "z.png"


Create tar.bz2 Archive File:

The bz2 feature compresses and creates archive files smaller than gzip does. In exchange, bz2 takes more time to compress and decompress files than gzip.

tar -cjf abcd.tar.bz2 /home/mohsen/abcd

Uncompress tar.bz2 Archive File:
tar -xf abcd.tar.bz2

List content tar.bz2 archive file:
tar -tvf abcd.tar.bz2

Untar single file from tar.bz2 File:
tar -jxf abcd.tar.bz2 home/mohsen/x.png
tar --extract --file=abcd.tar.bz2 /home/mohsen/x.png

Untar multiple files:
tar -jxf abcd.tar.bz2 "x.png" "y.png" "z.png"


Extract group of files using wildcard:
tar -xf abcd.tar --wildcards '*.png'
tar -zxf abcd.tar.gz --wildcards '*.png'
tar -jxf abcd.tar.bz2 --wildcards '*.png'


Add files or directories to tar archive file:
Use the option r (append)

tar -rf abcd.tar m.png
tar -rf abcd.tar images

The tar command doesn't have an option to add files or directories to an existing compressed tar.gz or tar.bz2 archive file. If we try, we will get the following error:
tar: This does not look like a tar archive
tar: Skipping to next header


Create a tar archive using xz compression:
tar -cJf abcd.tar.xz /path/to/archive/

tar xf abcd.tar.xz


Compress supporting source and destination directory:
tar -cf /home/mohsen/Temp/abcd.tar -P /home/mohsen/Temp/abcd
tar -cPf /home/mohsen/Temp/abcd.tar /home/mohsen/Temp/abcd


Tar Usage and Options:

c – create an archive file.
x – extract an archive file.
v – show the progress of the archive file.
f – filename of the archive file.
t – view the contents of an archive file.
j – filter the archive through bzip2.
z – filter the archive through gzip.
r – append or update files or directories in an existing archive file.
W – verify an archive file.
--wildcards – specify patterns in the tar command.

-P (--absolute-names) – don't strip leading '/'s from file names


tar -cJf my_folder.tar.xz my_folder


tar zc --exclude node_modules -f tiptong.tar.gz tiptong


Extract to a different directory:

tar -xf file.tar -C /path/to/directory

tar xf file.tar --directory /path/to/directory
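A minimal end-to-end sketch of the create/exclude/extract commands above, using hypothetical names proj/, proj.tar.gz and out/:

```shell
mkdir -p proj/sub
echo data > proj/a.txt
echo junk > proj/sub/skip.txt
tar -czf proj.tar.gz --exclude='proj/sub' proj   # create gzipped archive, skip one subdir
mkdir -p out
tar -xzf proj.tar.gz -C out                      # extract into a different directory
```

After extraction, out/proj/a.txt exists while the excluded out/proj/sub does not.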


+apt-get (Aug. 22, 2014, 10:21 a.m.)

apt-get upgrade
Updating the software

apt-get -s upgrade
To simulate an update installation, i.e. to see which software will be updated.

+Search for text in files (Aug. 9, 2015, 9:45 p.m.)

find . -name "*.txt" | xargs grep -i "text_pattern"
find / -type f -exec grep -l "text-to-find-here" {} \;
grep word_to_find file_name -n --color
The --color option highlights the matched words
grep "<the word or text to be searched>" / -Rn --color -T
/: The location to be searched
R: Search in recursive mode
n: Display the number of the line in which the word or text occurs
color: Display the search result colored
T: Separate the search result with a tab
l: stands for "show the file name, not the result itself"
grep -Rin "text-to-find-here" /
grep --color -Rin "text-to-find-here" / (to make it colorful)
egrep -w -R 'word1|word2' ~/projects/ (for two words)

i stands for ignoring upper/lower case (case-insensitive)
w stands for whole word
Find specific files and search for specific words:

find . -name '*.py' -exec grep -Rin 'resize' {} +
Finds the word `resize` in python files.
find -iname "*.py" | xargs grep -i django
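The find + grep combinations above, demonstrated on a couple of hypothetical files under demo/:

```shell
mkdir -p demo
printf 'import django\n' > demo/app.py
printf 'plain text\n' > demo/notes.txt
find demo -name '*.py' -exec grep -il 'django' {} +   # -l prints matching file names only
grep -Rin 'django' demo                               # file:line:match for each hit
```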

+dpkg (Aug. 22, 2014, 10:19 a.m.)

dpkg --get-selections
To get list of all installed software

dpkg-query -W
To get list of installed software packages

dpkg -l
Description of installed software packages

+Driver - See PCI devices along with their kernel modules (device drivers) (Aug. 22, 2014, 10:05 a.m.)

lspci -k

It first shows you all the PCI devices attached to your system and then tells you what kernel modules (device drivers), are being used by them.

+sources.list (Aug. 22, 2014, 9:58 a.m.)

deb jessie/updates main
deb-src jessie/updates main

deb jessie-updates main
deb-src jessie-updates main

deb jessie main
deb-src jessie main


deb stretch main
deb-src stretch main

deb stretch-updates main
deb-src stretch-updates main

deb stretch/updates main
deb-src stretch/updates main

+PIP (Aug. 22, 2014, 9:14 a.m.)

Install SomePackage and its dependencies from PyPI using requirement specifiers:
pip install SomePackage # latest version
pip install SomePackage==1.0.4 # specific version
pip install 'SomePackage>=1.0.4' # minimum version

Install a list of requirements specified in a file.
pip install -r requirements.txt
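A sketch of what such a file looks like (the package names here are hypothetical), plus a quick way to list just the names it pins:

```shell
# A minimal requirements file: comments and blank lines are ignored by pip
cat > /tmp/requirements.txt <<'EOF'
# web stack
requests==2.25.1

flask>=1.1
EOF

# Strip comments/blanks and the version specifiers to list bare package names
grep -vE '^[[:space:]]*(#|$)' /tmp/requirements.txt | sed 's/[=<>!].*//'
```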

Upgrade an already installed SomePackage to the latest from PyPI.
pip install --upgrade SomePackage

Install a local project in “editable” mode
pip install -e . # project in current directory
pip install -e path/to/project # project in another directory

Install a project from VCS in “editable” mode. See the sections on VCS Support and Editable Installs.
pip install -e git+https://git.repo/some_pkg.git#egg=SomePackage # from git
pip install -e hg+https://hg.repo/some_pkg.git#egg=SomePackage # from mercurial
pip install -e svn+svn://svn.repo/some_pkg/trunk/#egg=SomePackage # from svn
pip install -e git+https://git.repo/some_pkg.git@feature#egg=SomePackage # from 'feature' branch

Install a package with setuptools extras.
pip install SomePackage[PDF]
pip install SomePackage[PDF]==3.0
pip install -e .[PDF]==3.0 # editable project in current directory

Install a particular source archive file.
pip install ./downloads/SomePackage-1.0.4.tar.gz
pip install http://my.package.repo/

Install from alternative package repositories. (Install from a different index, and not PyPI):
pip install --index-url http://my.package.repo/simple/ SomePackage

Search an additional index during install, in addition to PyPI:
pip install --extra-index-url http://my.package.repo/simple SomePackage

Install from a local flat directory containing archives (and don’t scan indexes):
pip install --no-index --find-links=file:///local/dir/ SomePackage
pip install --no-index --find-links=/local/dir/ SomePackage
pip install --no-index --find-links=relative/dir/ SomePackage

Find pre-release and development versions, in addition to stable versions. By default, pip only finds stable versions.
pip install --pre SomePackage


pip uninstall [options] <package> ...
pip uninstall [options] -r <requirements file> ...

pip is able to uninstall most installed packages. Known exceptions are:
Pure distutils packages installed with python setup.py install, which leave behind no metadata to determine what files were installed.
Script wrappers installed by python setup.py develop.

-r, --requirement <file>
Uninstall all the packages listed in the given requirements file. This option can be used multiple times.

-y, --yes
Don't ask for confirmation of uninstall deletions.

Uninstall a package.
pip uninstall simplejson


pip freeze [options]

Output installed packages in requirements format.

-r, --requirement <file>
Use the order in the given requirements file and its comments when generating output.

-f, --find-links <url>
URL for finding packages, which will be added to the output.

-l, --local
If in a virtualenv that has global access, do not output globally-installed packages.

Generate output suitable for a requirements file.
$ pip freeze

Generate a requirements file and then install from it in another environment.
$ env1/bin/pip freeze > requirements.txt
$ env2/bin/pip install -r requirements.txt


pip list [options]

List installed packages, including editable ones.

-o, --outdated
List outdated packages (excluding editables)

-u, --uptodate
List up-to-date packages (excluding editables)

-e, --editable
List editable projects.

-l, --local
If in a virtualenv that has global access, do not list globally-installed packages.

Include pre-release and development versions. By default, pip only finds stable versions.

List installed packages.
$ pip list
Pygments (1.5)
docutils (0.9.1)
Sphinx (1.1.2)
Jinja2 (2.6)

List outdated packages (excluding editables), and the latest version available
$ pip list --outdated
docutils (Current: 0.9.1 Latest: 0.10)
Sphinx (Current: 1.1.2 Latest: 1.1.3)


pip show [options] <package> ...

Show information about one or more installed packages.

-f, --files
Show the full list of installed files for each package.

Show information about a package:
$ pip show sphinx
`the output will be`:
Name: Sphinx
Version: 1.1.3
Location: /my/env/lib/pythonx.x/site-packages
Requires: Pygments, Jinja2, docutils


pip search [options] <query>

Search for PyPI packages whose name or summary contains <query>.

--index <url>
Base URL of Python Package Index (default

Search for “peppercorn”
pip search peppercorn
pepperedform - Helpers for using peppercorn with formprocess.
peppercorn - A library for converting a token stream into [...]


pip zip [options] <package> ...

Zip individual packages.

-u, --unzip
Unzip (rather than zip) a package.

--no-pyc
Do not include .pyc files in zip files (useful on Google App Engine).

-l, --list
List the packages available, and their zip status.

--sort-files
With --list, sort packages according to how many files they contain.

--path <paths>
Restrict operations to the given paths (may include wildcards).

-n, --simulate
Do not actually perform the zip/unzip operation.


This command will download the zipped/tar file in the specified location:
pip download `package_name`

pip download \
--only-binary=:all: \
--platform linux_x86_64 \
--python-version 33 \
--implementation cp \
--abi cp34m \
SomePackage

pip download \
--only-binary=:all: \
--platform macosx-10_10_x86_64 \
--python-version 27 \
--implementation cp \
SomePackage


pip install --allow-all-external pil --allow-unverified pil


ReadTimeoutError: HTTPSConnectionPool(host='', port=443)

pip install --default-timeout=200 <package_name>


pip install pip-review
pip-review --local --interactive


mkdir pip_files && cd pip_files
pip download -r requirements.txt


+Hardware - Modem - WiMAX modem (Aug. 17, 2015, 9:55 a.m.)

For installing the driver, install these packages first:
apt-get install linux-headers-`uname -r` libssl-dev usb-modeswitch zip
The wimaxd binary would not be recognized by the terminal, so I copied it into the /bin directory.
There was an error, "error while loading shared libraries: cannot open shared object file", so I did the following:
To fix the problem, I added the "" path to /etc/ and re-ran ldconfig.

Another incident, which is not related to WiMAX: one day when I was installing and running Apache, there was an error similar to the WiMAX one ("error while loading shared libraries: cannot open shared object file"). I searched for the file using the "locate" command, copied it into "/usr/lib", and ran Apache again; the problem was solved!
WiMAX linux-headers error:
make: *** /lib/modules/3.13.0-37-generic/source: No such file or directory. Stop.

1- rm /lib/modules/3.13.0-37-generic/source
2- ln -s /usr/src/linux-headers-3.13.0-37 /lib/modules/3.13.0-37-generic/source
3- wimaxd -D -c wimaxd.conf
4- (in another console) wimaxc -i
5- (in another console) su
6- dhclient eth1

+Version, Distro, Release (Aug. 4, 2014, 4:38 a.m.)

How do I find out my kernel version?
uname -r
uname -mrs

Find or identify which version of Debian Linux you are running:
cat /etc/debian_version

What is my current Linux distribution?
cat /etc/issue

lsb_release command:
The lsb_release command displays certain LSB (Linux Standard Base) and distribution-specific information.
lsb_release -a

+List hardware information (Aug. 4, 2014, 4:37 a.m.)


+Hard Disk information (Aug. 4, 2014, 4:36 a.m.)

fdisk -l

+Sudoer (Aug. 4, 2014, 4:36 a.m.)

Run visudo, scroll to the bottom of the file, and add:
mohsen ALL=(ALL) ALL
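Each sudoers entry reads user host=(runas) commands; the line above grants mohsen the right to run any command as any user. A more restrictive sketch (the user and command below are hypothetical):

```
# Edit safely with: visudo -f /etc/sudoers.d/deploy
# user   host=(runas)  commands
deploy   ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx
```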

Mac OS
+VMware Tools (Jan. 23, 2017, 1:16 p.m.)

Darwin Image for VMware Tools for Mac OS X:

+Password Reset (Sept. 12, 2016, 12:39 a.m.)

1-Turn off your Mac (choose Apple > Shut Down).
2-Press the power button while holding down Command-R. The Mac will boot into Recovery mode. ...
3-Select Disk Utility and press Continue.
4-Choose Utilities > Terminal.
5-Enter resetpassword (all one word, lowercase letters) and press Return.
6-Select the volume containing the account (normally this will be your Main hard drive).
7-Choose the account to change with Select the User Account.
8-Enter a new password and re-enter it into the password fields.
9-Enter a new password hint related to the password.
10-Click Save.
11-A warning will appear that the password has changed, but not the Keychain Password. Click OK.
12-Click Apple > Shut Down.

Now start up the Mac. You can login using the new password.

+Install Ionic (June 21, 2016, 11:08 p.m.)

brew install npm

sudo npm install -g cordova ionic

npm install -g ios-sim

npm install -g ios-deploy
ionic platform add ios
ionic resources
ionic build ios

+Speed Up Mac by Disabling Features (June 21, 2016, 11:13 p.m.)

Disable Open/Close Window Animations
defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false
Disable Quick Look Animations
defaults write -g QLPanelAnimationDuration -float 0
Disable Window Size Adjustment Animations
defaults write NSGlobalDomain NSWindowResizeTime -float 0.001
Disable Dock Animations

defaults write com.apple.dock launchanim -bool false
Disable the “Get Info” Animation
defaults write com.apple.finder DisableAllAnimations -bool true
Get rid of Dashboard
defaults write com.apple.dashboard mcx-disabled -boolean YES
killall Dock
Speed Up Window Resizing Animation Speed
defaults write -g NSWindowResizeTime -float 0.003
Disable The Eye Candy Transparent Windows & Effects
System Preferences -> Accessibility -> Display
Check the box for “Reduce Transparency”
Disable Unnecessary Widgets & Extensions in Notifications Center
System Preferences -> Extensions -> Today
Uncheck all options you don’t need or care about

+Disable SIP (June 20, 2016, 12:37 a.m.)

csrutil status
csrutil disable

+Recovery HD partition with El Capitan bootable via Clover (June 19, 2016, 7:46 p.m.)

1- diskutil list
You will get the partition list, note that the Recovery Partition is obviously named "Recovery HD"

2- Create a folder in Volumes folder for Recovery HD and mount it there:
sudo mkdir /Volumes/Recovery\ HD
sudo mount -t hfs /dev/disk0s3 /Volumes/Recovery\ HD

3- Remove the file `prelinkedkernel` from the directory ``:
sudo rm -rf /Volumes/Recovery\ HD/

4- Copy your working `prelinkedkernel` there:
sudo cp /System/Library/PrelinkedKernels/prelinkedkernel /Volumes/Recovery\ HD/

5- Reboot

+Mac OS X on Virtualbox (June 12, 2016, 3:29 p.m.)

vboxmanage modifyvm "Mac OS X 10.11" --cpuidset 00000001 000106e5 00100800 0098e3fd bfebfbff

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiSystemProduct" "iMac11,3"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiSystemVersion" "1.0"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiBoardProduct" "Iloveapple"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/smc/0/Config/DeviceKey" "ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/smc/0/Config/GetKeyFromRealSMC" 1

VBoxManage setextradata "Mac OS X 10.11" "VBoxInternal2/EfiBootArgs" " "

+Convert Installation DMG to ISO - Create a Bootable ISO (June 11, 2016, 10:04 p.m.)

You need to run these commands on a Mac OS X:

# Mount the installer image
hdiutil attach /Applications/Install\ OS\ X\ El\
-noverify -nobrowse -mountpoint /Volumes/install_app

# Create the ElCapitan Blank ISO Image of 7316mb with a Single Partition - Apple Partition Map
hdiutil create -o /tmp/ElCapitan.cdr -size 7316m -layout SPUD -fs HFS+J

# Mount the ElCapitan Blank ISO Image
hdiutil attach /tmp/ElCapitan.cdr.dmg -noverify -nobrowse -mountpoint /Volumes/install_build

# Restore the Base System into the ElCapitan Blank ISO Image
asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase

# Remove Package link and replace with actual files
rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages
cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/

# Copy El Capitan installer dependencies
cp -rp /Volumes/install_app/BaseSystem.chunklist /Volumes/OS\ X\ Base\ System/BaseSystem.chunklist
cp -rp /Volumes/install_app/BaseSystem.dmg /Volumes/OS\ X\ Base\ System/BaseSystem.dmg

# Unmount the installer image
hdiutil detach /Volumes/install_app

# Unmount the ElCapitan ISO Image
hdiutil detach /Volumes/OS\ X\ Base\ System/

# Convert the ElCapitan ISO Image to ISO/CD master (Optional)
hdiutil convert /tmp/ElCapitan.cdr.dmg -format UDTO -o /tmp/ElCapitan.iso

# Rename the ElCapitan ISO Image and move it to the desktop
mv /tmp/ElCapitan.iso.cdr ~/Desktop/ElCapitan.iso

+Commands (June 9, 2016, 1:45 p.m.)

Locate command:
To create the database for using `locate` command, run the following command:
sudo launchctl load -w /System/Library/LaunchDaemons/

updatedb ==> sudo /usr/libexec/locate.updatedb

+Installing Xcode (June 6, 2016, 3:31 p.m.)

For downloading Xcode or other development tools, you need to log into using your Apple ID account and then open the following link:

Download Xcode and Command Line Tools!

+Applications (June 5, 2016, 2:04 p.m.)

brew install proxychains-ng

sudo nano /usr/local/Cellar/proxychains-ng/4.11/etc/proxychains.conf
brew install npm
brew install ssh-copy-id
brew install tmux

+Installing Homebrew (June 5, 2016, 1:47 p.m.)

Reference Site:
1- You need to install the Developer Tools first. Check whether you already have them with the `gcc --version` command. If the tools are not installed, a dialog will open asking whether you want to install them; choose Install.

2-The website says you only need to use the following command to install brew. (But it might be blocked for us in Iran, as of the time writing this tutorial):
/usr/bin/ruby -e "$(curl -fsSL"

If it is still blocked, open the following URL in a browser with a proxy enabled, and save the script on your Mac OS:

Install it using this command:

Mail Server
+What is reverse DNS (rDNS)? (Feb. 12, 2020, 3:07 p.m.)

Reverse DNS, or rDNS, does the opposite of the traditional DNS. That is, instead of resolving a domain name to an IP, it resolves an IP to a hostname.

The rDNS resolution is a completely separate mechanism from the regular DNS resolution. For example, if the domain “” points to IP (dummy IP address), it doesn’t necessarily mean that the reverse resolution for the IP is

To store rDNS records, there is a specific type of DNS resource record (RR) called the PTR record, which maps an IP address, written in an inverted notation, back to a hostname.

This rDNS configuration allows you to search for an IP in the DNS, since the domain is added to the inverted IP notation, turning the IP into a domain name.

For example: in order to convert the IP address into a PTR record, we need to invert the IP and add the domain which results in the following record:
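The inverted-notation record can be sketched like this (the original example values are missing here, so a documentation-range IP, 203.0.113.7, is used instead):

```shell
# Build the PTR name for an IPv4 address: reverse the octets, append in-addr.arpa
IP="203.0.113.7"
PTR=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')
echo "$PTR"    # 7.113.0.203.in-addr.arpa

# To actually query the record (requires network access):
#   dig -x "$IP" +short
```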


When is rDNS useful?

If you want to prevent email issues. If you’re hosting your own email server, rDNS becomes pretty useful for your outgoing emails. An rDNS record allows tracing the origin of the email, increasing the credibility of the email server, and becoming a trusted source for many popular email providers such as Gmail, Yahoo, Hotmail, and others. Some incoming email servers won’t even let your email arrive at their email boxes if you don’t have an rDNS record setup. So if you’re using your own mail server, you’ll want to keep it in mind.

When you’re performing a cybercrime investigation. Another popular use of reverse DNS records is to identify potential threats and mass scanners throughout the Internet. By using both security API endpoints, or web-based products like SurfaceBrowser, you or your team can easily identify authors and networks behind mass scanning, malware spreading or other types of malicious activities.


How can I perform a reverse DNS lookup?

There are many methods and rDNS lookup tools in use for doing the opposite of a normal DNS check: resolving a given IP to host.

Some of these web-based utilities are known as reverse DNS tools, and they all do the same thing, query a given IP to resolve a hostname. Let’s look at some terminal-based examples first:

dig -x



+Difference Between Maildir and Mbox Directory Structure (Feb. 12, 2020, 1:03 p.m.)

Maildir and Mbox are email formats that act as a directory for storing messages in email applications. Mbox was the original mail storage system on a cPanel server, but now Maildir is the default option. Mbox places all messages in the same file on the server, whereas, Maildir stores messages in individual files with unique names.


A directory in the Maildir format has three subdirectories:

1) new: Each file in the new subdirectory holds a single, recently delivered message; it is what tells the mail client that the user has new mail. The modification time of a file in new is the delivery date of the message. The message is normally in RFC 822 format, starting with a "Return-path" line and a "Delivered-to" line.

2) cur: The files in the cur directory are like those in new, but they are no longer new mail: they have already been seen by the user's mail-reading program. That is, cur holds only the messages that the user has read.

3) tmp: The tmp directory holds temporary files while a message is being delivered; it is used to ensure reliable delivery of the message.
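The three-directory layout described above can be sketched as follows (the path is hypothetical):

```shell
# Every Maildir contains exactly these three subdirectories
MAILDIR=/tmp/demo-maildir
mkdir -p "$MAILDIR/new" "$MAILDIR/cur" "$MAILDIR/tmp"
ls "$MAILDIR"    # cur  new  tmp
```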

Benefits of Maildir

1) Maildir is more modern than mbox.

2) It is faster and more stable than mbox.

3) The main advantage of this format is that messages can easily be sorted into subdirectories: when a new message arrives, it is filtered accordingly and moved into the respective subdirectory.

4) The files can be distributed over the network without compatibility issues.

5) It is compatible with both Courier and Dovecot mail servers.

6) It is the most secure format, with minimal chance of data corruption.

7) A Maildir directory stores each incoming mail message in its own file.


The Mailbox file format is also known as Mbox. Mbox is an email file type that stores messages in plain text: the message bodies are stored as 7-bit ASCII text, and the other email components (attachments, metadata, etc.) are stored in encoded form. Mbox uses a single-file layout in which all email messages of a folder (usually the inbox) are stored in one file on the account.
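A sketch of the single-file layout (the messages below are invented): every message begins with a "From " separator line, which is also an easy way to count messages.

```shell
# Write a tiny two-message mbox file
cat > /tmp/demo-mbox <<'EOF'
From alice Mon Jan  6 10:00:00 2020
Subject: first

body one

From bob Mon Jan  6 11:00:00 2020
Subject: second

body two
EOF

# Count messages by counting "From " separator lines
grep -c '^From ' /tmp/demo-mbox    # 2
```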

Benefits of Mbox

1) The file format is universally supported.

2) Appending new mail in the mailbox is faster.

3) Searching text inside the mailbox is faster.

It has some file locking problems and problems when used with network file systems.

+PostfixAdmin (Feb. 10, 2020, 10:07 a.m.)

1- Download the latest version of PostfixAdmin:
cd /srv/
wget -O postfixadmin.tgz
tar -zxf postfixadmin.tgz
mv postfixadmin-postfixadmin-3.2 postfixadmin

2- Copy the "PHP Configuration" from my notes in "Nginx" category to nginx sites-enabled.
root /srv/postfixadmin/public;

3- Create a PostgreSQL user "postfix" and a database named "postfix"

4- Create /srv/postfixadmin/config.local.php file for your local configuration.
vim /srv/postfixadmin/config.local.php

Configure PostfixAdmin so it can find the database. Add the following lines to config.local.php:
$CONF['database_type'] = 'pgsql';
$CONF['database_user'] = 'postfix';
$CONF['database_password'] = 'your_password';
$CONF['database_name'] = 'postfix';

$CONF['configured'] = true;

You can see for all available config options and their default value.
You can also edit instead of creating a config.local.php, but this will make updates harder and is therefore not recommended.

5- Create a template directory for smarty cache:
mkdir -p /srv/postfixadmin/templates_c
chown -R www-data /srv/postfixadmin/templates_c

6- Install the following packages:
apt install php7.3-imap dovecot-pgsql postfix-pgsql dovecot-pop3d dovecot-imapd dovecot-lmtpd

7- Check settings, and create Admin user.
Restart nginx and open the following link in your computer browser:

You will be asked to set a setup password. After setting it, you will be given a hash password. Put it in the config file you created at the earlier steps.
$CONF['setup_password'] = '';   // paste the generated hash between the quotes

Then you will be asked to create a superadmin account.

8- Since we are configuring a mail server with virtual users we need one system user which will be the owner of all mailboxes and will be used by the virtual users to access their email messages on the server.
groupadd -g 5000 vmail
useradd -u 5000 -g vmail -s /usr/sbin/nologin -d /var/mail/vmail -m vmail

9- Dovecot setup
vim /etc/dovecot/conf.d/10-mail.conf
mail_location = maildir:/var/mail/vmail/%d/%n/

If you don't have ssl:
vim /etc/dovecot/conf.d/10-ssl.conf
ssl = no

Login for outlook express and mobile applications:
vim /etc/dovecot/conf.d/10-auth.conf
disable_plaintext_auth = yes
auth_mechanisms = plain login
Comment this line so that you don't get errors like "pam_authenticate() failed: Authentication failure". We are using virtual user (from database) no need for PAM which is for operating system user authentications.
#!include auth-system.conf.ext
Uncomment this line:
!include auth-sql.conf.ext

vim /etc/dovecot/dovecot-sql.conf.ext
driver = pgsql
password_query = SELECT username AS user,password FROM mailbox WHERE username = '%u' AND active='1'
user_query = SELECT '/var/mail/vmail/' || maildir AS home, 5000 AS uid, 5000 AS gid, '*:bytes=' || quota AS quota_rule FROM mailbox WHERE username = '%u' AND active = true
connect = host=localhost dbname=postfix user=postfix password=my_password
default_pass_scheme = MD5 # depends on your $CONF['encrypt'] Postfixadmin settings

10- Add the following lines to Postfix configurations file:
vim /etc/postfix/

relay_domains = $mydestination, proxy:pgsql:/etc/postfix/pgsql/
virtual_alias_maps = proxy:pgsql:/etc/postfix/pgsql/
virtual_mailbox_domains = proxy:pgsql:/etc/postfix/pgsql/
virtual_mailbox_maps = proxy:pgsql:/etc/postfix/pgsql/
virtual_mailbox_base = /var/mail/vmail
virtual_mailbox_limit = 512000000
virtual_minimum_uid = 8
virtual_transport = virtual
virtual_uid_maps = static:8
virtual_gid_maps = static:8
local_transport = virtual
local_recipient_maps = $virtual_mailbox_maps

# SASL Auth for SMTP relaying
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_authenticated_header = yes
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
broken_sasl_auth_clients = yes

11- Create a folder and some config files, then add the following lines in each file:
mkdir /etc/postfix/pgsql

vim /etc/postfix/pgsql/
user = postfix
password = whatever
hosts = localhost
dbname = postfix
query = SELECT domain FROM domain WHERE domain='%s'

vim /etc/postfix/pgsql/
user = postfix
password = whatever
hosts = localhost
dbname = postfix
query = SELECT goto FROM alias WHERE address='%s' AND active = true

vim /etc/postfix/pgsql/
user = postfix
password = whatever
hosts = localhost
dbname = postfix
#query = SELECT domain FROM domain WHERE domain='%s'
#optional query to use when relaying for backup MX
query = SELECT domain FROM domain WHERE domain='%s' and backupmx = false and active = true

vim /etc/postfix/pgsql/
user = postfix
password = whatever
hosts = localhost
dbname = postfix
query = SELECT maildir FROM mailbox WHERE username='%s' AND active = true

chmod 777 /etc/postfix/pgsql -R
chown root:postfix /etc/postfix/pgsql -R
postfix set-permissions

12- Enable Roundcube password plugin to enable database-based authentication:
vim /srv/roundcubemail/config/
// Enable plugins
$config['plugins'] = array('managesieve','password');
// Configure managesieve plugin
$rcmail_config['managesieve_port'] = 4190;
// Configure password plugin
$config['password_driver'] = 'sql';
$config['password_db_dsn'] = 'pgsql://postfix:my_password@localhost/postfix';
$config['password_query'] = 'UPDATE mailbox SET password=%c WHERE username=%u';



These postmap queries should return the found string:

Note that we are NOT authenticating against the credentials set for each email account, we are only testing the ability of Postfix to detect those records in the database.

postmap -q pgsql:/etc/postfix/pgsql/
postmap -q pgsql:/etc/postfix/pgsql/

doveadm auth test -x service=imap -x rip=

tail -f /var/log/mail*.log

If you're having trouble, try uncommenting the following lines in the file:
vim /etc/dovecot/conf.d/10-logging.conf
auth_debug = yes
auth_debug_passwords = yes
auth_verbose = yes


+Roundcube - Enable emoticons plugin (Dec. 25, 2019, 8:57 p.m.)

1- Edit the file

2- Add 'emoticons' to line 49:
$config['plugins'] = array('emoticons');

+Virtual domains (Aug. 22, 2014, 9:54 a.m.)

1- Add these lines to /etc/postfix/
virtual_alias_domains =
virtual_alias_maps = hash:/etc/postfix/virtual

2- Create a file "/etc/postfix/virtual" and specify the domains and users to accept mail for. mohsen mohsen nozhanrayan nozhanrayan

3- postmap /etc/postfix/virtual

4- /etc/init.d/postfix restart

+Find Postfix mail server version (Dec. 15, 2018, 2:54 a.m.)

postconf -d mail_version

+Roundcube (Dec. 15, 2019, 2:52 a.m.)

1- You will need these packages for Roundcube installer:
apt install php-mbstring php-gd php-imagick php-pgsql php-intl php-pear php-zip php-common php-cli php-fpm

2- Download and extract the latest "complete" Roundcube version from:
Extract it and give it write/read permission:
chmod 777 roundcubemail -R

3- Copy the "PHP Configuration" from my notes in "Nginx" category to nginx sites-enabled.

4- Create a PostgreSQL user "roundcube", with a password, and a database named "roundcubemail".
You need the initial SQL database structure for PostgreSQL database. This file exists in the root folder of the roundcube you just downloaded, "roundcubemail/SQL/postgres.initial.sql". Use the following command to load the structure into the database:
psql -U roundcube -f /srv/roundcubemail/SQL/postgres.initial.sql roundcubemail

When setting configurations in step 6, if you got error "DB Schema: NOT OK(Database schema differs)" you might need another version of the above structure file. You can download it from the following link:
You need to DOWNLOAD the file as raw, do not download the file directly. Click on the link, then click "raw" and copy the link from browser URL, download the raw file using wget, something like the following link:
psql -U roundcube -f postgres.initial.sql roundcubemail

5- Edit the file "/etc/php/7.3/fpm/php.ini" and set:
date.timezone = 'Asia/Tehran'
upload_max_filesize = 300M
post_max_size = 300M

6- After restarting the required services, such as Nginx and probably php7.0-fpm, browse the address:

7- Add the following line to the file /srv/roundcube/config/
$config['mail_domain'] = '';
$config['smtp_port'] = 25;

8- Enable creation of primary folders upon user login:
vim /srv/roundcubemail/config/
$config['create_default_folders'] = true;


You can edit the settings and configurations you have selected or filled-up in the installer web page using this file:


For debug purpose:
tail -f /srv/roundcube/logs/errors
tail -f /var/log/mail*.log

+Web Mail Installation (Dec. 15, 2019, 2:52 a.m.)

apt install postfix dovecot-core dovecot-imapd


For connecting your cellphone to the webmail:

Add these lines to /etc/postfix/
mydestination = (Put only the main domain name, not the mail subdomain!)
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth

Edit these lines from /etc/dovecot/conf.d/10-auth.conf:
disable_plaintext_auth = no
auth_mechanisms = plain login

If there was any problem when connecting cellphone to your webmail, check the logs for solving the problems:
tail -f /var/log/mail*.log


Edit the file /etc/dovecot/conf.d/10-master.conf:

# Postfix smtp-auth
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
}


For having "Maildir", edit the file /etc/dovecot/conf.d/10-mail.conf:
mail_location = maildir:~/Maildir

And the file /etc/postfix/
home_mailbox = Maildir/

mkdir ~/Maildir
chmod 700 ~/Maildir
chown mohsen:mohsen ~/Maildir


After making the above changes, restart the services:


Debug IMAP:

telnet 143

Now type each line as a command (the username and password here are hypothetical):
a login user@example.com password
a examine inbox
a logout


When receiving mails, I noticed a "delivered to command: procmail -a" message in the logs, and mails would not appear in the inbox. To solve the problem I had to run the following commands:

postconf -e 'home_mailbox = Maildir/'
postconf -e 'mailbox_command ='
/etc/init.d/postfix restart


+TXT Records (Dec. 15, 2019, 2:51 a.m.)

Create an account in, and using the instructions create DMARC DNS records.

You need to create TXT record like this:
Host Name:
Destination: <The values the site gives you> (without the double quotations)


DMARC stands for "Domain-based Message Authentication, Reporting & Conformance". It is an email authentication, policy, and reporting protocol. It builds on the widely deployed SPF and DKIM protocols, adding linkage to the author ("From:") domain name, published policies for recipient handling of authentication failures, and reporting from receivers to senders, to improve and monitor protection of the domain from fraudulent email.


Creating an SPF or Caller ID record:

Create a TXT record:
Host Name:
Destination: v=spf1 mx ip4: -all
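Put together, the two TXT records might look like this in zone-file form (the domain, IP, and report mailbox below are hypothetical placeholders; the real values come from your own DNS setup and the DMARC wizard):

```
example.com.        IN TXT "v=spf1 mx ip4:203.0.113.10 -all"
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```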


+Test your Reverse PTR record (April 8, 2019, 2:51 a.m.)

+Is your domain's SPF record correct? (Dec. 15, 2018, 2:50 a.m.)

+Is your domain's DKIM record correct? (Dec. 15, 2018, 2:50 a.m.)

+Check your server IP is not on any email blacklists (Dec. 15, 2018, 2:48 a.m.)

+Description (Aug. 22, 2014, 9:49 a.m.)

Debian Mail Server Setup with Postfix + Dovecot + SASL

Postfix is an attempt to provide an alternative to the widely-used Sendmail program. Postfix attempts to be fast, easy to administer, and hopefully secure, while at the same time being sendmail compatible enough to not upset your users.

Dovecot is an open source IMAP and POP3 server for Linux/UNIX-like systems, written with security primarily in mind. Dovecot is an excellent choice for both small and large installations. It’s fast, simple to set up, requires no special administration and it uses very little memory.

When sending mail, the Postfix SMTP client can look up the remote SMTP server hostname or destination domain (the address right-hand part) in a SASL password table, and if a username/password is found, it will use that username and password to authenticate to the remote SMTP server. And as of version 2.3, Postfix can be configured to search its SASL password table by the sender email address.

Note: If you install a Postfix/Dovecot mail server, you will ONLY be able to send mail within your network. You can send mail externally only if you set up SASL authentication with TLS; otherwise you get a "Relay Access Denied" error.

SASL + TLS (Simple Authentication and Security Layer with Transport Layer Security) is used mainly to authenticate users before sending email to an external server, thus restricting relay access. If your relay server is kept open, spammers can use your mail server to send spam. It is essential to protect your mail server from misuse.

+Profiling (April 13, 2020, 8:41 p.m.)

Profiling is the process of measuring metrics of your project, such as server response time, CPU usage, memory usage, etc.

+Deserialize and Serialize (April 7, 2020, 11:37 a.m.)

Serialization means to convert an object into a string, and deserialization is its inverse operation (convert string -> object).

Serialization is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later.

The opposite operation, extracting a data structure from a series of bytes, is deserialization.
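A toy sketch of the idea in shell (the field names and values are invented): pack an object's fields into one string, then parse the string back into fields.

```shell
# Serialize: an "object" of three fields -> one delimited string
name="Alice"; age="30"; city="Tehran"
record="$name|$age|$city"

# Deserialize: string -> fields again
IFS='|' read -r n a c <<EOF
$record
EOF
echo "$n is $a and lives in $c"    # Alice is 30 and lives in Tehran
```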

+Firefox - Addons (Feb. 23, 2020, 12:28 p.m.)

YouTube Video Downloader 1-Click Group


+NextCloud Server (Feb. 8, 2020, 5:04 p.m.)

1- Install all the dependencies:
apt install apache2 libapache2-mod-php mariadb-server php-xml php-cli php-cgi php-mysql php-mbstring php-gd php-curl php-zip

2- Restart Apache to make sure that it's using the PHP module:
systemctl restart apache2

3- Nextcloud keeps track of everything in a database; like most web applications, it stores its own information and settings there too.
Run MariaDB's built-in secure installation script to remove the test data and secure the root account:

sudo mysql_secure_installation

Follow the instructions, and set up a new root password when asked. You can accept the defaults for everything.

4- Sign in to MariaDB using the root password that you just established:
mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER 'nextclouduser'@'localhost' IDENTIFIED BY 'yourpassword';
GRANT ALL ON nextcloud.* TO 'nextclouduser'@'localhost';
FLUSH PRIVILEGES;

5- Download Nextcloud from the following link:
unzip nextcloud-*.zip
cp -r nextcloud /var/www/html/nextcloud
chown -R www-data:www-data /var/www/html/nextcloud

6- Open your browser, and navigate to your Nextcloud server:
You'll arrive on the Nextcloud setup page. Enter a username and password for your admin user.
Next, scroll down, and enter the information for the database that you set up, including the username and password of the user you created to manage it.

+AHCI vs. IDE vs. RAID (Feb. 7, 2020, 2:04 p.m.)

IDE, AHCI, and RAID are all operating modes in SATA environments. Each has its relative strengths and weaknesses.

IDE and AHCI are peripheral component interconnect (PCI) devices that move data between system memory and SATA controllers.

AHCI is newer than IDE and enables more advanced storage features. However, both are older technologies that are not in widespread usage in storage arrays, especially with the growth of SSDs.

RAID is hardware or software that provides redundancy in multiple device environments, and accelerates HDDs. Like AHCI and IDE, RAID supports SATA controllers, and many RAID products enable AHCI upon installation to provide advanced storage features for single-disk applications.

In practice, the technologies are viewed as such:
- IDE is largely an obsolete technology, used only in older scenarios.
- AHCI still acts as a bus in some older SATA HDD arrays and hybrid arrays.
- RAID is still widely deployed for HDD and hybrid array data protection and redundancy.


What is AHCI?

Advanced Host Controller Interface (AHCI) is a computer standard defined by Intel (though implemented by other chipset vendors as well). AHCI has been around since 2004, when it replaced the older IDE/Parallel ATA interface in new devices.

AHCI is not identical to SATA but acts as the bus between the host and AHCI or SATA controllers on the motherboard. The protocol improves storage management features on the SATA controller by enabling Native Command Queuing (NCQ) and hot-swapping.


What is IDE?

Integrated Drive Electronics (IDE) is older than AHCI. It specifies a computer interface that connects disk storage with the motherboard bus. In 1986, Western Digital released the IDE spec in partnership with Compaq and Control Data Corp.

At the time, IDE-supported ATA drives were much faster than standard SCSI drives, and the market widely deployed the new IDE platforms. Also called parallel ATA, or PATA, IDE interconnects transfer 16 bits at a time across two device connections per channel.


What is RAID?

RAID, or “redundant array of independent disks,” is another mature technology that is still widely deployed in storage environments.

RAID provides high availability and data protection across multiple drives, which enables HDD and SSD arrays to keep running after the loss of a device. RAID is available for SSD arrays, but since traditional RAID does not accelerate SSD performance, all-flash arrays are likelier to use proprietary RAID that provides redundancy and accelerates performance on SSDs.

The most widely used RAID types, or levels, are 0, 1, 5, 6, and 10. There are also SSD-specific RAID options in the market.

RAID 0: Striping. Splits files and stripes the data across two or more disks, treating the striped disks as a single partition.

RAID 1: Mirroring. Copies the protected disk to a second disk; if the mirrored disk fails, the functioning disk takes over.

RAID 5: Striping with Parity. Distributes striping and parity (redundancy data computed from the data blocks, used to rebuild a lost block) at the block level, across a minimum of 3 disks.

RAID 6: Striping with Double Parity. Like RAID 5, but with a second parity block; it requires a minimum of 4 disks and can survive two simultaneous disk failures.

RAID 10: Striping and Mirroring. Stripes across at least 4 disks for higher performance, and mirrors for redundancy.

SSDs can use traditional RAID levels. However, although RAID can improve performance on HDDs, SSDs' native high speeds do not benefit much from RAID speed enhancements. SSD vendors are concentrating on adding proprietary RAID functions for all-flash arrays.
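As a toy illustration of how the parity-based levels above recover data: RAID 5's parity is the bitwise XOR of the data blocks, so XOR-ing the parity with the surviving blocks reconstructs a lost one. The sketch below is not real RAID code, just the arithmetic:

```python
# Three "disks" of data plus one parity block, as in a 4-disk RAID 5 stripe.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

disk1 = b"AAAA"
disk2 = b"BBBB"
disk3 = b"CCCC"

# The parity block, as written to the parity stripe.
parity = xor_blocks(xor_blocks(disk1, disk2), disk3)

# Simulate losing disk2 and rebuild it from the others plus parity.
rebuilt_disk2 = xor_blocks(xor_blocks(disk1, disk3), parity)
```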


+HTTP Status Codes (Jan. 15, 2020, 4:18 p.m.)

1×× Informational
100 Continue
101 Switching Protocols
102 Processing

2×× Success
200 OK
201 Created
202 Accepted
203 Non-authoritative Information
204 No Content
205 Reset Content
206 Partial Content
207 Multi-Status
208 Already Reported
226 IM Used

3×× Redirection
300 Multiple Choices
301 Moved Permanently
302 Found
303 See Other
304 Not Modified
305 Use Proxy
307 Temporary Redirect
308 Permanent Redirect

4×× Client Error
400 Bad Request
401 Unauthorized
402 Payment Required
403 Forbidden
404 Not Found
405 Method Not Allowed
406 Not Acceptable
407 Proxy Authentication Required
408 Request Timeout
409 Conflict
410 Gone
411 Length Required
412 Precondition Failed
413 Payload Too Large
414 Request-URI Too Long
415 Unsupported Media Type
416 Requested Range Not Satisfiable
417 Expectation Failed
418 I'm a teapot
421 Misdirected Request
422 Unprocessable Entity
423 Locked
424 Failed Dependency
426 Upgrade Required
428 Precondition Required
429 Too Many Requests
431 Request Header Fields Too Large
444 Connection Closed Without Response
451 Unavailable For Legal Reasons
499 Client Closed Request

5×× Server Error
500 Internal Server Error
501 Not Implemented
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
505 HTTP Version Not Supported
506 Variant Also Negotiates
507 Insufficient Storage
508 Loop Detected
510 Not Extended
511 Network Authentication Required
599 Network Connect Timeout Error
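The first digit of a status code identifies its class, which is often all a client needs to branch on. A small Python sketch (the helper name is my own):

```python
# Map a status code to its class via integer division by 100.
def status_class(code):
    classes = {
        1: "Informational",
        2: "Success",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    return classes.get(code // 100, "Unknown")
```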

+Telegram Font Problem (Sept. 10, 2019, 12:13 p.m.)

1- Download a TTF font:

2- Create a directory and copy the font to it:
Make sure the file name uses lowercase letters.
mkdir ~/.fonts/

3- Edit the Telegram font config file:
vim ~/.local/share/TelegramDesktop/tdata/fc-custom-1.conf

Add this block after all the <match> tags (the family names below are placeholders: use the family you want to replace and your own font's name):
<match target="pattern">
<test qual="any" name="family">
<string>sans-serif</string>
</test>
<edit name="family" mode="assign" binding="same">
<string>yourfont</string>
</edit>
</match>

+CAPTCHA (Oct. 14, 2018, 9:39 a.m.)

CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart.

This is a challenge-response test that differentiates between humans and automated bots. reCAPTCHA is one of the CAPTCHA spam-protection services, bought by Google. It is now offered free to webmasters, and Google also uses reCAPTCHA on its own services, such as Google Search.


The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

+List of administrative divisions by country (Sept. 15, 2018, 12:57 p.m.)

+Accuracy of latitude and longitude (July 5, 2018, 8:39 a.m.)

1 decimal place ≈ 10 kilometers
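Since one degree of latitude spans roughly 111.32 km, each extra decimal place divides the resolution by ten. A quick Python sketch (the function name is my own):

```python
# Resolution of a latitude coordinate truncated to d decimal places,
# based on one degree of latitude being about 111.32 km.
def latitude_resolution_km(decimal_places):
    return 111.32 / 10 ** decimal_places
```

So 1 decimal place gives roughly 11 km of resolution, and 5 decimal places gets you down to about a meter.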