+Common naming conventions for icon assets (April 22, 2019, 4:02 a.m.)

Asset Type                        Prefix           Example
Icons                             ic_              ic_star.png
Launcher icons                    ic_launcher      ic_launcher_calendar.png
Menu icons and Action Bar icons   ic_menu          ic_menu_archive.png
Status bar icons                  ic_stat_notify   ic_stat_notify_msg.png
Tab icons                         ic_tab           ic_tab_recent.png
Dialog icons                      ic_dialog        ic_dialog_info.png
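The table above can be turned into a quick mechanical check; a small shell sketch (the helper name is my own, not part of any Android tooling):

```shell
# icon_kind: print the asset type implied by an icon file name, based on
# the prefix table above. More specific prefixes are matched first.
icon_kind() {
  case "$1" in
    ic_launcher*)    echo "launcher icon" ;;
    ic_menu_*)       echo "menu / action bar icon" ;;
    ic_stat_notify*) echo "status bar icon" ;;
    ic_tab_*)        echo "tab icon" ;;
    ic_dialog_*)     echo "dialog icon" ;;
    ic_*)            echo "icon" ;;
    *)               echo "unknown" ;;
  esac
}
```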

+Android Studio - Transparent Background Launcher Icon (April 22, 2019, 2:51 a.m.)

1- File > New > Image Asset.

2- Select Launcher Icons (Adaptive and Legacy) as the Icon Type.

3- Select Image as the Asset Type and choose your picture in the Path field (Foreground Layer tab).

4- Create or download a 512x512 px PNG file with a transparent background (this is the size of ic_launcher-web.png).
PNG link:

5- In the Background Layer tab, select Image as the Asset Type and load the transparent background from step 4.

6- In the Legacy tab, select Yes for all of the Generate options and None for Shape.

7- In the Foreground Layer and Background Layer tabs you can adjust the trim size.

Although the Preview window shows a black background behind the image, after pressing Next and Finish and compiling the application, the background will be transparent on Android 5 and Android 8.

+NDK (April 19, 2019, 6:38 p.m.)

The Native Development Kit (NDK) is a set of tools that allow you to use C and C++ code in your Android app. It provides platform libraries to manage native activities and access hardware components such as sensors and touch input.

The NDK may not be appropriate for most novice Android programmers who need to use only Java code and framework APIs to develop their apps. However, the NDK can be useful for the following cases:

- Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.

- Reuse code between your iOS and Android apps.

- Use libraries like FFMPEG, OpenCV, etc.

+SDK / NDK (April 19, 2019, 6:34 p.m.)

Software Development Kit (SDK)
Native Development Kit (NDK)

Traditionally, almost all software development kits (SDKs) were written in C, with very few in C++. Then Google came along and released a Java-based library for Android, calling it an SDK.

However, demand then grew for a C/C++ based library for development, primarily from C/C++ developers aiming at game development and high-performance apps.

So Google released a C/C++ based library called the Native Development Kit (NDK).

+ADB (Oct. 2, 2015, 5:04 p.m.)

apt install android-tools-adb android-tools-fastboot
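Once installed, `adb devices` lists attached devices, one per line, with a state column (`device`, `unauthorized`, ...). A small sketch for counting ready devices from that output (the function name is mine; it just parses the standard `adb devices` format):

```shell
# count_adb_devices: read `adb devices` output on stdin and count lines
# whose second column is "device" (attached and authorized).
count_adb_devices() {
  awk '$2 == "device" { n++ } END { print n+0 }'
}
```

Usage: adb devices | count_adb_devices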

+Android Development Environment (July 6, 2016, 11:58 a.m.)

Visit the following links to get information about the dependencies you might need for the SDK version you intend to download:


You might find the tools and all the dependencies in the following links:


1- Create a folder preferably name it "android-sdk-linux" in any location.

2- Downloading SDK Tools:
From the following link, scroll to the bottom of the page to the table titled "Command line tools only" and download the "Linux" package.
Extract the downloaded file "" into the folder you created in step 1.

3- Download an API level (for example, the one for Android 4.0.4).
Create a folder named "platforms" in "android-sdk-linux" and extract the downloaded file to it.

4- Download the latest version of `build-tools` (
Create a folder named `build-tools` in `android-sdk-linux` and extract the downloaded archive into it.
You need to rename the extracted folder to `25`.

5- Download the latest version of `platform-tools` (
Extract it into the `android-sdk-linux` folder. The archive already contains a folder named `platform-tools`, so there is no need to create any further folders.

6- Open the file `~/.bashrc` and add the following line to it:
export ANDROID_HOME=/home/mohsen/Programs/Android/Development/android-sdk-linux

7- apt install openjdk-9-jdk
If you get errors like this:
dpkg: warning: trying to overwrite '/usr/lib/jvm/java-9-openjdk-amd64/include/linux/jawt_md.h', which is also in package openjdk-9-jdk-headless

To solve the error:
apt-get -o Dpkg::Options::="--force-overwrite" install openjdk-9-jdk
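The export from step 6 can be paired with a PATH update so the SDK tools are callable from any shell; a sketch (the function name and default path are my own examples — use your folder from step 1):

```shell
# Set ANDROID_HOME and put the SDK's tool directories on PATH.
# The default path below is only an example location.
setup_android_env() {
  export ANDROID_HOME="${1:-$HOME/android-sdk-linux}"
  export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools:$PATH"
}
```

In practice you would put the equivalent two export lines (with your real path) into ~/.bashrc, as in step 6.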


+Codenames, Tags, and Build Numbers (March 7, 2019, 11:01 a.m.)

+AVD with HAXM or KVM (Emulators) (April 10, 2016, 9:25 a.m.)

Official Website:


For a faster emulator, use the HAXM device driver.
Linux Link:

As described in the above link, Linux users need to use KVM.
Taken from the above website:
(Since Google mainly supports Android build on Linux platform (with Ubuntu 64-bit OS as top Linux platform, and OS X as 2nd), and a lot of Android Developers are using AVD on Eclipse or Android Studio hosted by a Linux system, it is very critical that Android developers take advantage of Intel hardware-assisted KVM virtualization for Linux just like HAXM for Windows and OS X.)


KVM Installation:

1- egrep -c '(vmx|svm)' /proc/cpuinfo
If the output is 0 it means that your CPU doesn't support hardware virtualization.

2- apt install cpu-checker
Now you can check if your cpu supports kvm:
# kvm-ok

3- To see if your processor is 64-bit, you can run this command:
egrep -c ' lm ' /proc/cpuinfo
If 0 is printed, it means that your CPU is not 64-bit.
If 1 or higher, it is.
Note: lm stands for Long Mode which equates to a 64-bit CPU.

4- Now see if your running kernel is 64-bit:
uname -m

5- apt install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils ia32-libs-multiarch
If a screen with `Postfix Configuration` is displayed, dismiss it by selecting `No Configuration`.

6- Next, add your user account to the kvm and libvirtd groups:
sudo adduser mohsen kvm
sudo adduser mohsen libvirtd

7- Verify the installation:
You can test if your install has been successful with the following command:
sudo virsh -c qemu:///system list
If successful, you will see output like this:
Id Name State


8- Install Java:
Java has to be installed in order to run Android emulator x86 system images.
sudo apt-get install openjdk-8-jre

9- Download a System Image from the following link:
Create a folder named `system-images` in `android-sdk-linux` and extract the downloaded system image in it. (You might need to create another folder inside, named `default`.)
Run the Android SDK Manager; you will probably see the system image under `Extras`, marked as broken.
If so, to solve the problem, you need to download its API from this link and extract it into the `platforms` folder:

10- Start the AVD from the Android SDK directly from the terminal and create a Virtual Device:
~/Programs/Android/Development/android-sdk-linux/tools/android avd
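The CPU checks from steps 1 and 3 can be bundled into one helper; a sketch (the function name is mine; it reads any cpuinfo-format file, defaulting to /proc/cpuinfo):

```shell
# cpu_check: report hardware virtualization (vmx = Intel VT-x, svm = AMD-V)
# and 64-bit (lm, "Long Mode") support from a cpuinfo-format file.
cpu_check() {
  cpuinfo="${1:-/proc/cpuinfo}"
  if grep -Eq '(vmx|svm)' "$cpuinfo"; then
    echo "virtualization: yes"
  else
    echo "virtualization: no"
  fi
  if grep -q ' lm ' "$cpuinfo"; then
    echo "64-bit: yes"
  else
    echo "64-bit: no"
  fi
}
```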


+Get the length of `object` (Sept. 9, 2016, 4:56 a.m.)


+Parse JSON Object (Sept. 9, 2016, 4:18 a.m.)

data = angular.fromJson(json)
data = JSON.parse(json);

+Service vs Provider vs Factory vs Value (Sept. 7, 2016, 12:32 p.m.)

service, factory, and value are all derived from provider.
All Angular services are singletons: there is only one instance of a given service per injector.

Syntax: module.service( 'serviceName', function );
Result: When declaring serviceName as an injectable argument you will be provided with an instance of the function. In other words new FunctionYouPassedToService().


app.service('myService', function() {
  // service is just a constructor function
  // that will be called with 'new'
  this.sayHello = function(name) {
    return "Hi " + name + "!";
  };
});

More info:
When you’re using Service, AngularJS instantiates it behind the scenes with the ‘new’ keyword. Because of that, you’ll add properties to ‘this’ and the service will return ‘this’. When you pass the service into your controller, those properties on ‘this’ will now be available on that controller through your service.

app.service('myService', function() {
  var _artist = 'Nelly';
  this.getArtist = function() {
    return _artist;
  };
});

app.controller('myServiceCtrl', function($scope, myService) {
  $scope.artist = myService.getArtist();
});

Syntax: module.factory( 'factoryName', function );
Result: When declaring factoryName as an injectable argument you will be provided with the value that is returned by invoking the function reference passed to module.factory.


app.factory('myFactory', function() {
  // factory returns an object
  // you can run some code before
  return {
    sayHello: function(name) {
      return "Hi " + name + "!";
    }
  };
});

More info:
When you’re using a Factory you create an object, add properties to it, then return that same object. When you pass this factory into your controller, those properties on the object will now be available in that controller through your factory.


app.factory('myFactory', function() {
  var _artist = 'Shakira';
  var service = {};

  service.getArtist = function() {
    return _artist;
  };

  return service;
});

app.controller('myFactoryCtrl', function($scope, myFactory) {
  $scope.artist = myFactory.getArtist();
});

Syntax: module.provider( 'providerName', function );
Result: When declaring providerName as an injectable argument you will be provided with (new ProviderFunction()).$get(). The constructor function is instantiated before the $get method is called - ProviderFunction is the function reference passed to module.provider.

Providers have the advantage that they can be configured during the module configuration phase.

More info:
Providers are the only service you can pass into your .config() function. Use a provider when you want to provide module-wide configuration for your service object before making it available.

app.controller('myProviderCtrl', function($scope, myProvider) {
  $scope.artist = myProvider.getArtist();
  $scope.thingFromConfig = myProvider.thingOnConfig;
});

app.provider('myProvider', function() {
  // Only the next two lines are available in app.config()
  this._artist = '';
  this.thingFromConfig = '';
  this.$get = function() {
    var that = this;
    return {
      getArtist: function() {
        return that._artist;
      },
      thingOnConfig: that.thingFromConfig
    };
  };
});

app.config(function(myProviderProvider) {
  myProviderProvider.thingFromConfig = 'This was set in config';
});

+Directives (April 11, 2016, 1:42 p.m.)

The ng-app directive defines an AngularJS application.

The ng-model directive binds the value of HTML controls (input, select, textarea) to application data.

The ng-bind directive binds application data to the HTML view.
<div ng-app="">
  <p>Name: <input type="text" ng-model="name"></p>
  <p ng-bind="name"></p>
</div>

Example explained:
AngularJS starts automatically when the web page has loaded.
The ng-app directive tells AngularJS that the <div> element is the "owner" of an AngularJS application.
The ng-model directive binds the value of the input field to the application variable name.
The ng-bind directive binds the innerHTML of the <p> element to the application variable name.

AngularJS Directives:
As you have already seen, AngularJS directives are HTML attributes with an ng prefix.
The ng-init directive initializes AngularJS application variables.

<div ng-app="" ng-init="firstName='John'">
  <p>The name is <span ng-bind="firstName"></span></p>
</div>

Alternatively with valid HTML:
<div data-ng-app="" data-ng-init="firstName='John'">
  <p>The name is <span data-ng-bind="firstName"></span></p>
</div>

+Scopes - Controllers (April 11, 2016, 11:49 a.m.)

Scope is nothing but an object that glues the view to the controller. It holds the model data that we need to pass to the view. Scope uses Angular's two-way data binding to bind model data to the view.

Imagine $scope as an object that links the controller to the view. It is the controller's responsibility to initialize the data that the view needs to display. This is done by making changes to $scope.

<div ng-controller="ContactController">
  Email: <input type="text" ng-model="newcontact"/>
  <button ng-click="add()">Add</button>
  <ul>
    <li ng-repeat="contact in contacts">{{ contact }}</li>
  </ul>
</div>

<script type="text/javascript">
function ContactController($scope) {
  $scope.contacts = ["", ""];

  $scope.add = function() {
    $scope.contacts.push($scope.newcontact);
    $scope.newcontact = "";
  };
}
</script>

The ng-controller attribute binds a controller to the view. In this case we defined a controller called ContactController on the DIV using the ng-controller attribute. Thus the ContactController has influence over whatever we put inside that DIV.

ContactController is nothing but a plain vanilla JavaScript function. Note the $scope object that we pass as an argument: this object is used to bind the controller to the view. When AngularJS initializes this controller, it automatically creates and injects the $scope object into the function using dependency injection.

Notice how we displayed a list of contacts using an attribute ng-repeat.
<li ng-repeat="contact in contacts">{{ contact }}</li>

ngRepeat is one of the most used AngularJS attributes. It iterates through an array and binds the view to each element, so in our example it creates an <li> tag for each item in the contacts array. ngRepeat takes an expression as its argument; in our case "contact in contacts", where contact is a user-defined variable and contacts is an array on $scope.

In our final demo in this tutorial, we will use ng-repeat to iterate through an array of objects and paint each property in a table.

Initial state of a scope object:
Typically, when you create an application you need to set up an initial state for an Angular scope. In our case, the initial state is the list of contacts.

On $scope object, we defined an array called contacts:
$scope.contacts = ["", ""]

When Angular initializes this function (ContactController), it automatically creates this array and binds it to the $scope object. Then, in our view, we display the array using the ng-repeat attribute.

Thus, $scope provides us with a way to pass/retrieve objects from Controller to View and vice-versa.

It is also possible to define functions on $scope and use them in the view. In our demo, we created a function add() on $scope and used it on the Add button click:
$scope.add = function() { ... }

The function add() is bound to the Add button using the ng-click attribute. ng-click binds the click event of a button, link, or any clickable element to a function defined on $scope. So in this case, whenever the Add button is clicked, the add() method on $scope is called.

In the add() method we add (push) a string into the contacts array. This is the string that the user types in the textbox. Note that we bind the textbox using the ng-model attribute:
<input type="text" ng-model="newcontact" />

The textbox's value is available in $scope.newcontact because we bound it using the ng-model attribute.

+Filters (April 11, 2016, 11:44 a.m.)

AngularJS provides a powerful mechanism to modify data on the go using filters. Filters typically transform the data to a new data type, formatting it in the process. The general syntax for using a filter is:
{{ expression | filter }}

You can use more than one filter on an expression by chaining them, like:
{{ expression | filter1 | filter2 }}

AngularJS provides a few filters by default that we can use in our apps. It is also possible to define your own custom filters. For now we will just look at the filters Angular provides with the framework.

Filters uppercase and lowercase:

As their names suggest, these filters convert the expression into uppercase or lowercase letters.

<h4>Uppercase: {{ sometext | uppercase }}</h4>
<h4>Lowercase: {{ sometext | lowercase }}</h4>

{{ date_expression | date[:format] }}

Formats date to a string based on the requested format.

{{ number_expression | number[:fractionSize] }}

Formats a number as text. If the input is not a number an empty string is returned.

There are more filters like json, limitTo, filter, and orderBy. We will go through them in the next few articles as we use them. For now, you can refer to the filter documentation for more details.

+Attributes (April 11, 2016, 11:42 a.m.)

ng-show / ng-hide:
<h1 ng-show="sometext">Hello {{ sometext }}</h1>

The ng-show attribute conditionally shows an element, depending on the value of a boolean expression. Similar to ng-show, you can also use ng-hide, which does exactly the opposite.

+Installation (April 11, 2016, 11:26 a.m.)

First, install the latest version of NodeJs using my notes.

Install the Angular CLI:
# npm install -g @angular/cli

# npm install -g @angular/cli@latest


For uninstalling:
npm uninstall -g @angular/cli
npm cache clean --force


+Common Options (May 16, 2018, 3:06 p.m.)


Ask for su password (deprecated, use become)



Ask for sudo password (deprecated, use become)



Run operations as this user (default=root)



Outputs a list of matching hosts; does not execute anything else



List all tasks that would be executed


--private-key, --key-file

Use this file to authenticate the connection


--start-at-task <START_AT_TASK>

Start the playbook at the task matching this name



One-step-at-a-time: confirm each task before running



Perform a syntax check on the playbook, but do not execute it


-C, --check

Don’t make any changes; instead, try to predict some of the changes that may occur


-D, --diff

When changing (small) files and templates, show the differences in those files; works great with --check


-K, --ask-become-pass

Ask for privilege escalation password


-S, --su

Run operations with su (deprecated, use become)


-b, --become

Run operations with become (does not imply password prompting)


-e, --extra-vars

Set additional variables as key=value or YAML/JSON, if filename prepend with @


-f <FORKS>, --forks <FORKS>

Specify number of parallel processes to use (default=5)


-i, --inventory, --inventory-file

Specify inventory host path (default=[[u'/etc/ansible/hosts']]) or comma-separated host list. --inventory-file is deprecated


-k, --ask-pass

Ask for connection password



Connect as this user (default=None)


-v, --verbose

Verbose mode (-vvv for more, -vvvv to enable connection debugging)


+Display output to console (May 16, 2018, 4:40 p.m.)

Every Ansible task, when run, can save its results into a variable. To do this, you have to specify which variable to save the results in, using the "register" parameter.

Once you save the value to a variable, you can use it later in any of the subsequent tasks. So, for example, if you want to get the standard output of a specific task, you can write the following:

ansible-playbook ansible/postgres.yml -e delete_old_backups=true

- hosts: localhost
  tasks:
    - name: Delete old database backups
      command: echo '{{ delete_old_backups }}'
      register: out
    - debug:
        var: out.stdout_lines


You can also use -v when running ansible-playbook.


+Pass conditional boolean value (May 16, 2018, 4:53 p.m.)

- name: Delete old database backups
command: echo {{ delete_old_backups }}
when: delete_old_backups|bool

+Basic Commands (Jan. 7, 2017, 11:54 a.m.)

Ping all hosts in a group:
ansible test_servers -m ping


Run a playbook:
ansible-playbook playbook.yml

Dry run, reporting what would change without changing anything:
ansible-playbook playbook.yml --check


Pass extra variables:
ansible-playbook site.yaml -i hostinv -e firstvar=false -e second_var=value2

ansible-playbook release.yml -e "version=1.23.45 other_variable=foo"


+Inventory File (Jan. 7, 2017, 11:04 a.m.)

[postgres_servers] ansible_user=root ansible_user=mohsen




localhost ansible_connection=local ansible_connection=ssh ansible_user=mpdehaan ansible_connection=ssh ansible_user=mdehaan


Host Variables:

host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909


Group Variables:




Groups of Groups, and Group Variables:

It is also possible to make groups of groups using the :children suffix. Just like above, you can apply variables using :vars:
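For example, a sketch of an inventory using both suffixes (group and host names here are illustrative, not from the original note):

```ini
[atlanta]
host1
host2

[raleigh]
host2
host3

; "southeast" contains every host in atlanta and raleigh
[southeast:children]
atlanta
raleigh

; variables applied to every host in southeast
[southeast:vars]
halon_system_timeout=30
escape_pods=2
```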







+Installation (Dec. 13, 2016, 4:33 p.m.)

sudo apt-get install libffi-dev libssl-dev python-pip python-setuptools
pip install ansible
pip install markupsafe

+Installation (Sept. 6, 2017, 11:11 a.m.)

For Debian earlier than Stretch:
apt-get install apache2 apache2.2-common apache2-mpm-prefork apache2-utils libexpat1 libapache2-mod-wsgi-py3 python-pip python-dev build-essential

For Debian Stretch:
apt-get install apache2 apache2-utils libexpat1 libapache2-mod-wsgi-py3 python-pip python-dev build-essential

+Password Protect via .htaccess (Feb. 26, 2017, 6:14 p.m.)

1- Create a file named `.htaccess` in the root of the website, with this content:

AuthName "Deskbit's Support"
AuthUserFile /etc/apache2/.htpasswd
AuthType Basic
Require valid-user

2- Create the password file:
htpasswd -c /etc/apache2/.htpasswd mohsen

3- Add this to the <Directory> block:

<Directory /var/www/support/>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>

4- Restart Apache:
/etc/init.d/apache2 restart
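If the htpasswd tool is not at hand, an equivalent entry can be generated with openssl; a sketch (the function name is mine; user and password are placeholders):

```shell
# make_htpasswd_line: print a "user:hash" line in Apache's apr1 (MD5)
# format, suitable for appending to an .htpasswd file.
make_htpasswd_line() {
  printf '%s:%s\n' "$1" "$(openssl passwd -apr1 "$2")"
}
```

Usage: make_htpasswd_line mohsen 'secret' >> /etc/apache2/.htpasswd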

+Configs for two different ports on same IP (Sept. 26, 2016, 10:07 p.m.)

NameVirtualHost *:80
<VirtualHost *:80>
LogLevel warn
ErrorLog /home/mohsen/logs/eccgroup_error.log
WSGIScriptAlias / /home/mohsen/websites/ecc/ecc/
WSGIDaemonProcess ecc python-path=/home/mohsen/websites/ecc:/home/mohsen/virtualenvs/django-1.10/lib/python3.4/site-packages
WSGIProcessGroup ecc

Alias /static /home/mohsen/websites/ecc/ecc/static
<Directory /home/mohsen/websites/ecc/ecc/static>
Require all granted
</Directory>

<Directory />
Require all granted
</Directory>
</VirtualHost>

Listen 8081
NameVirtualHost *:8081
<VirtualHost *:8081>

ErrorLog /var/log/apache2/freepbx.error.log
CustomLog /var/log/apache2/freepbx.access.log combined
DocumentRoot /var/www/html

<Directory /var/www/>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
+Error Check (March 4, 2015, 12:06 p.m.)

sudo systemctl status apache2.service -l

# tail -f /var/log/apache2/error.log

+VirtualHost For Django Sites (March 4, 2015, 10:34 a.m.)
1-sudo service apache2 restart

2-Create a virtual host:
sudo nano /etc/apache2/sites-available/

3-Create your new virtual host node which should look something like this:
<VirtualHost *:80>
LogLevel warn
ErrorLog /home/mohsen/logs/eccgroup_error.log
WSGIScriptAlias / /home/mohsen/websites/ecc/ecc/
WSGIDaemonProcess ecc python-path=/home/mohsen/websites/ecc:/home/mohsen/virtualenvs/django-1.10/lib/python3.4/site-packages
WSGIProcessGroup ecc

<Directory />
Require all granted
</Directory>

Alias /static /home/mohsen/websites/ecc/ecc/static
<Directory /home/mohsen/websites/ecc/ecc/static>
Require expr %{HTTP_HOST} == ""
Require expr %{HTTP_HOST} == ""
</Directory>
</VirtualHost>

4-Edit the file within the main app of your project:
import os
import sys

# Add the site-packages of the chosen virtualenv to work with

# Add the app's directory to the PYTHONPATH

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ecc.settings")

# Activate your virtualenv

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

5-Enable the virtual host:

If you want to disable a site, you would run a2dissite

+Apache config files (Jan. 5, 2015, 4:51 p.m.)

Contents of file: /etc/apache2/sites-enabled/000-default.conf

<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html

ScriptAlias /cgi-bin/ /var/cgi-bin/
<Directory "/var/cgi-bin">
AllowOverride All
Options None
Order allow,deny
Allow from all
</Directory>

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Create a file named .htaccess in /var/cgi-bin with this content:
AuthType Basic
AuthName "Restricted Access"
AuthUserFile /var/cgi-bin/.htpasswd
Require user mohsen

Then create the password file (the path must match AuthUserFile):
htpasswd -c /var/cgi-bin/.htpasswd mohsen
And enter a desired password to create the password file.

+Creating /etc/init.d/asterisk (Jan. 5, 2015, 2:08 p.m.)

1-cp asterisk-13.1.0/contrib/init.d/rc.debian.asterisk /etc/init.d/asterisk

2-Change the lines to these values:

If you run it right now, you will get the error:
Restarting asterisk (via systemctl): asterisk.service
Failed to restart asterisk.service: Unit asterisk.service failed to load: No such file or directory.

I restarted the server (reboot), and after booting up it ran successfully (/etc/init.d/asterisk start).

+Perl Packages/Libraries for Debian (Jan. 2, 2015, 12:29 p.m.)

Before starting the installation, be aware that you will need to install some packages from Synaptic, and they might pull in another version of `asterisk` and `asterisk-core` plus lots of other libraries, all of which could break the one you just installed! So make sure the packages you need are installed from source, and do not say YES to apt-get without checking the libraries!
1-apt-get install libghc-ami-dev

2-Install this file `dpkg --install libasterisk-ami-perl_0.2.8-1_all.deb`
If you don't have it, refer to the following link for creating this .deb file

3-Copy the codecs binary `` to the path `/usr/lib/asterisk/modules`
Rename it to `` and based on other modules in this directory, set the chmod and chown of the file.
You can find it from this link:

+Running Asterisk as a Service (Dec. 15, 2014, 2:44 p.m.)

The most common way to run Asterisk in a production environment is as a service. Asterisk includes both a make target for installing Asterisk as a service and a script, safe_asterisk, that will manage the service and automatically restart Asterisk in case of errors.

Asterisk can be installed as a service using the make config target:
# make config
/etc/rc0.d/K91asterisk -> ../init.d/asterisk
/etc/rc1.d/K91asterisk -> ../init.d/asterisk
/etc/rc6.d/K91asterisk -> ../init.d/asterisk
/etc/rc2.d/S50asterisk -> ../init.d/asterisk
/etc/rc3.d/S50asterisk -> ../init.d/asterisk
/etc/rc4.d/S50asterisk -> ../init.d/asterisk
/etc/rc5.d/S50asterisk -> ../init.d/asterisk
Asterisk can now be started as a service:
# service asterisk start
* Starting Asterisk PBX: asterisk [ OK ]
And stopped:
# service asterisk stop
* Stopping Asterisk PBX: asterisk [ OK ]
And restarted:
# service asterisk restart
* Stopping Asterisk PBX: asterisk [ OK ]
* Starting Asterisk PBX: asterisk [ OK ]

+Executing as another User (Dec. 15, 2014, 2:42 p.m.)

Do not run as root
Running Asterisk as root or as a user with super user permissions is dangerous and not recommended. There are many ways Asterisk can affect the system on which it operates, and running as root can increase the cost of small configuration mistakes.

Asterisk can be run as another user using the -U option:
# asterisk -U asteriskuser

Often, this option is specified in conjunction with the -G option, which specifies the group to run under:
# asterisk -U asteriskuser -G asteriskuser

When running Asterisk as another user, make sure that user owns the various directories that Asterisk will access:
# sudo chown -R asteriskuser:asteriskuser /usr/lib/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/lib/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/spool/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/log/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/run/asterisk
# sudo chown asteriskuser:asteriskuser /usr/sbin/asterisk

+Commands (Dec. 15, 2014, 12:59 p.m.)

You can get a CLI (Command Line Interface) console to an already-running daemon by typing:
asterisk -r

Another description for option '-r':
In order to connect to a running Asterisk process, you can attach a remote console using the -r option.
To disconnect from a connected remote console, simply hit Ctrl+C.

To shut down Asterisk, issue:
core stop gracefully

There are three common commands related to stopping the Asterisk service:
core stop now - This command stops the Asterisk service immediately, ending any calls in progress.
core stop gracefully - This command prevents new calls from starting up in Asterisk, but allows calls in progress to continue. When all the calls have finished, Asterisk stops.
core stop when convenient - This command waits until Asterisk has no calls in progress, and then it stops the service. It does not prevent new calls from entering the system.

There are three related commands for restarting Asterisk as well.
core restart now - This command restarts the Asterisk service immediately, ending any calls in progress.
core restart gracefully - This command prevents new calls from starting up in Asterisk, but allows calls in progress to continue. When all the calls have finished, Asterisk restarts.
core restart when convenient - This command waits until Asterisk has no calls in progress, and then it restarts the service. It does not prevent new calls from entering the system.

There is also a command if you change your mind:
core abort shutdown - This command aborts a shutdown or restart which was previously initiated with the gracefully or when convenient options.

sip show peers - returns a list of chan_sip loaded peers
voicemail show users - returns a list of app_voicemail loaded users
core set debug 5 - sets the core debug to level 5 verbosity
core show version

asterisk -h : Help. Run '/sbin/asterisk -h' to get a list of the available command line parameters.
asterisk -C <configfile>: Starts Asterisk with a different configuration file than the default /etc/asterisk/asterisk.conf.
-f : Foreground. Starts Asterisk but does not fork as a background daemon.
-c : Enables console mode. Starts Asterisk in the foreground (implies -f), with a console command line interface (CLI) that can be used to issue commands and view the state of the system.
-r : Remote console. Starts a CLI console which connects to an instance of Asterisk already running on this machine as a background daemon.
-R : Remote console. Starts a CLI console which connects to an instance of Asterisk already running on this machine as a background daemon and attempts to reconnect if disconnected.
-t : Record soundfiles in /var/tmp and move them where they belong after they are done.
-T : Display the time in "Mmm dd hh:mm:ss" format for each line of output to the CLI.
-n : Disable console colorization (for use with -c or -r)
-i: Prompt for cryptographic initialization passcodes at startup.
-p : Run as pseudo-realtime thread. Run with a real-time priority. (Whatever that means.)
-q : Quiet mode (suppress output)
-v : Increase verbosity (multiple v's = more verbose)
-V : Display version number and exit.
-d : Enable extra debugging across all modules.
-g : Makes Asterisk dump core in the case of a segmentation violation.
-G <group> : Run as a group other than the caller.
-U <user> : Run as a user other than the caller
-x <cmd> : Execute command <cmd> (only valid with -r)

+Installation (Dec. 14, 2014, 9:36 p.m.)

Before starting the installation, be aware that you need to install some packages from Synaptic, and they might pull in another version of `asterisk` and `asterisk-core` plus lots of other libraries, all of which could break the one you just installed! So make sure the packages you need are installed from source, and do not say YES to apt-get without checking the libraries!
Install these libraries first:
1-apt-get install libapache2-mod-auth-pgsql libanyevent-perl odbc-postgresql unixODBC unixODBC-dev libltdl-dev

2-Download the file asterisk-13-current.tar.gz from this link:
a) Untar it.
You will need this untarred asterisk file in the following steps.

----------- Building and Installing pjproject -----------
1-Using the link download pjproject-2.3.tar.bz2

a) Untar and CD to the pjproject

b) ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'

c) make dep

d) make

e) make install

f) ldconfig

Now, to check whether you have successfully installed pjproject and whether Asterisk detects the libraries, untar and CD into the asterisk directory (I know you have not installed it yet, just move to the folder now :D), and enter the following command:

g) apt-get install libjansson-dev uuid-dev snmpd libperl-dev libncurses5-dev libxml2-dev libsqlite3-dev

*** important ***
Before continuing to the next step, you have to know that, based on the needs of the Shetab company, you need to enable the `res_snmp` module. To enable it you need to install `net-snmp_5.4.3`, and since it is not in Synaptic, you have to install it from source:
1-Download it from:
2-Install it using ./configure, make and make install
*** End of important ***

h) ./configure --without-pwlib (If you don't use this --without switch, you will get the following error, even if you have already installed the ptlib package!)
Cannot find ptlib-config - please install and try again

i) make menuselect

j) Browse to the eleventh category, `Resource Modules`, and make sure the `res_snmp` module at the bottom of the list is checked. Exit the menu with the Escape key and continue with installing Asterisk.

----------- Building and Installing Asterisk -----------
2- Make sure you are still in the asterisk directory.

c) make
I got many errors surrounded by '**************' (rows of asterisks) saying these modules were needed:
res_curl, res_odbc, res_crypto, res_config_curl ... (and many more). I just installed postgresql and the `make` command then continued with no errors!

d) make install

e) make samples

f) make progdocs

Now continue installation process with Perl packages from my tutorials.
After that, refer to `Creating /etc/init.d/asterisk` in my tutorials.

Beautiful Soup
+My Experience with text and string (Dec. 13, 2014, 3:12 a.m.)

I've been working a lot with these two methods/attributes and I noticed that:
If you use:
some_tag.string = 'some_new_title'
you might get an error if some_tag has an inner tag (e.g. a <span> nested inside it).
The exception would be: AttributeError: can't set attribute

So you have to use .text to solve this problem, keeping in mind that it ignores the inner tag:
content = some_tag.text
some_tag.string = 'some_new_title'
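A runnable sketch of the difference described above. The `<h2>`/`<span>` markup is a made-up stand-in; note that in current bs4 versions assigning to `.string` simply replaces the whole contents (destroying inner tags) rather than raising, so treat the exception above as version-dependent:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<h2>Main <span>title</span></h2>', 'html.parser')
tag = soup.h2

# .string is None when the tag has more than one child node
print(tag.string)        # None
# .text concatenates all nested strings, ignoring the inner tag
print(tag.text)          # Main title

# Assigning .string replaces everything inside the tag, <span> included
tag.string = 'some_new_title'
print(tag)               # <h2>some_new_title</h2>
```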

+Usage (Dec. 10, 2014, 2:10 p.m.)

To parse a document, pass it into the BeautifulSoup constructor. You can pass in a string or an open filehandle:
from bs4 import BeautifulSoup
soup = BeautifulSoup(open("index.html"))
soup = BeautifulSoup("<html>data</html>")

First, the document is converted to Unicode, and HTML entities are converted to Unicode characters:
BeautifulSoup("Sacr&eacute; bleu!")
<html><head></head><body>Sacré bleu!</body></html>

Beautiful Soup then parses the document using the best available parser. It will use an HTML parser unless you specifically tell it to use an XML parser. (The examples below navigate the "three sisters" document from the official quick-start, parsed into `soup`.)

soup.title
# <title>The Dormouse's story</title>

soup.title.name
# u'title'

soup.title.string
# u'The Dormouse's story'

soup.title.parent.name
# u'head'

soup.p
# <p class="title"><b>The Dormouse's story</b></p>

soup.p['class']
# u'title'

soup.find_all('a')
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]

soup.find(id="link3")
# <a class="sister" href="" id="link3">Tillie</a>
One common task is extracting all the URLs found within a page’s <a> tags:

for link in soup.find_all('a'):
    print(link.get('href'))

Another common task is extracting all the text from a page:

print(soup.get_text())
# The Dormouse's story
# The Dormouse's story
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and
# Tillie;
# and they lived at the bottom of a well.
# ...
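The two tasks above can be run end to end. The small document below is a stand-in modeled on the "three sisters" example from the official docs (the example.com URLs are placeholders):

```python
from bs4 import BeautifulSoup

html_doc = """
<p class="story">Once upon a time there were three little sisters:
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>.
</p>
"""
soup = BeautifulSoup(html_doc, "html.parser")

# All URLs in the page
urls = [link.get("href") for link in soup.find_all("a")]
print(urls)

# All the text, markup stripped
print(soup.get_text())
```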
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>')
tag = soup.b
type(tag)
# <class 'bs4.element.Tag'>

tag.name
# u'b'

tag.name = "blockquote"
tag
# <blockquote class="boldest">Extremely bold</blockquote>
A tag may have any number of attributes. The tag <b class="boldest"> has an attribute “class” whose value is “boldest”. You can access a tag’s attributes by treating the tag like a dictionary:
tag['class']
# u'boldest'

You can access that dictionary directly as .attrs:
tag.attrs
# {u'class': u'boldest'}

You can add, remove, and modify a tag’s attributes. Again, this is done by treating the tag as a dictionary:
tag['class'] = 'verybold'
tag['id'] = 1
tag
# <blockquote class="verybold" id="1">Extremely bold</blockquote>

del tag['class']
del tag['id']
tag
# <blockquote>Extremely bold</blockquote>

tag['class']
# KeyError: 'class'
print(tag.get('class'))
# None
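A self-contained sketch of the dictionary-style attribute API above. The `<blockquote>` markup is a stand-in; `.get()` is the KeyError-safe accessor:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<blockquote class="boldest">Extremely bold</blockquote>',
                     'html.parser')
tag = soup.blockquote

# Adding / modifying attributes works like a dict
tag['class'] = 'verybold'
tag['id'] = '1'
print(tag)

# Deleting an attribute, then reading safely with .get()
del tag['class']
print(tag.get('class'))   # prints: None
print(tag.get('id'))      # prints: 1
```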
Multi-valued attributes
HTML 4 defines a few attributes that can have multiple values. HTML 5 removes a couple of them, but defines a few more. The most common multi-valued attribute is class (that is, a tag can have more than one CSS class). Others include rel, rev, accept-charset, headers, and accesskey. Beautiful Soup presents the value(s) of a multi-valued attribute as a list:

css_soup = BeautifulSoup('<p class="body strikeout"></p>')
css_soup.p['class']
# ["body", "strikeout"]

css_soup = BeautifulSoup('<p class="body"></p>')
css_soup.p['class']
# ["body"]

If an attribute looks like it has more than one value, but it’s not a multi-valued attribute as defined by any version of the HTML standard, Beautiful Soup will leave the attribute alone:

id_soup = BeautifulSoup('<p id="my id"></p>')
id_soup.p['id']
# 'my id'

When you turn a tag back into a string, multiple attribute values are consolidated:
rel_soup = BeautifulSoup('<p>Back to the <a rel="index">homepage</a></p>')
rel_soup.a['rel']
# ['index']
rel_soup.a['rel'] = ['index', 'contents']
print(rel_soup.p)
# <p>Back to the <a rel="index contents">homepage</a></p>

If you parse a document as XML, there are no multi-valued attributes:

xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml')
xml_soup.p['class']
# u'body strikeout'
You can’t edit a string in place, but you can replace one string with another, using replace_with():

tag.string.replace_with("No longer bold")
tag
# <blockquote>No longer bold</blockquote>
This code gets the first <b> tag beneath the <body> tag:

soup.body.b
# <b>The Dormouse's story</b>
Using a tag name as an attribute will give you only the first tag by that name:

soup.a
# <a class="sister" href="" id="link1">Elsie</a>
.contents and .children
A tag’s children are available in a list called .contents:

head_tag = soup.head
head_tag
# <head><title>The Dormouse's story</title></head>

head_tag.contents
# [<title>The Dormouse's story</title>]

title_tag = head_tag.contents[0]
title_tag
# <title>The Dormouse's story</title>
title_tag.contents
# [u'The Dormouse's story']

The BeautifulSoup object itself has children. In this case, the <html> tag is the child of the BeautifulSoup object:

len(soup.contents)
# 1
soup.contents[0].name
# u'html'

A string does not have .contents, because it can’t contain anything:

text = title_tag.contents[0]
text.contents
# AttributeError: 'NavigableString' object has no attribute 'contents'

Instead of getting them as a list, you can iterate over a tag’s children using the .children generator:

for child in title_tag.children:
    print(child)
# The Dormouse's story

The .contents and .children attributes only consider a tag’s direct children. For instance, the <head> tag has a single direct child–the <title> tag:

head_tag.contents
# [<title>The Dormouse's story</title>]

But the <title> tag itself has a child: the string “The Dormouse’s story”. There’s a sense in which that string is also a child of the <head> tag. The .descendants attribute lets you iterate over all of a tag’s children, recursively: its direct children, the children of its direct children, and so on:

for child in head_tag.descendants:
    print(child)
# <title>The Dormouse's story</title>
# The Dormouse's story

The <head> tag has only one child, but it has two descendants: the <title> tag and the <title> tag’s child. The BeautifulSoup object only has one direct child (the <html> tag), but it has a whole lot of descendants:

len(soup.contents)
# 1
len(list(soup.descendants))
# 25

You can search by keyword arguments. This finds the tag whose id is "link2", and this finds tags whose href matches a regular expression:

soup.find_all(id='link2')
# [<a class="sister" href="" id="link2">Lacie</a>]

soup.find_all(href=re.compile("elsie"))
# [<a class="sister" href="" id="link1">Elsie</a>]
This code finds all tags whose id attribute has a value, regardless of what the value is:

soup.find_all(id=True)
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]
You can filter multiple attributes at once by passing in more than one keyword argument:
soup.find_all(href=re.compile("elsie"), id='link1')
# [<a class="sister" href="" id="link1">three</a>]
Some attributes, like the data-* attributes in HTML 5, have names that can’t be used as the names of keyword arguments:
data_soup = BeautifulSoup('<div data-foo="value">foo!</div>')
data_soup.find_all(data-foo="value")
# SyntaxError: keyword can't be an expression

You can use these attributes in searches by putting them into a dictionary and passing the dictionary into find_all() as the attrs argument:
data_soup.find_all(attrs={"data-foo": "value"})
# [<div data-foo="value">foo!</div>]
Searching by CSS class
It’s very useful to search for a tag that has a certain CSS class, but the name of the CSS attribute, “class”, is a reserved word in Python. Using class as a keyword argument will give you a syntax error. As of Beautiful Soup 4.1.2, you can search by CSS class using the keyword argument class_:
soup.find_all("a", class_="sister")
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]
As with any keyword argument, you can pass class_ a string, a regular expression, a function, or True:

soup.find_all(class_=re.compile("itl"))
# [<p class="title"><b>The Dormouse's story</b></p>]

def has_six_characters(css_class):
    return css_class is not None and len(css_class) == 6

soup.find_all(class_=has_six_characters)
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]
Remember that a single tag can have multiple values for its “class” attribute. When you search for a tag that matches a certain CSS class, you’re matching against any of its CSS classes:

css_soup = BeautifulSoup('<p class="body strikeout"></p>')
css_soup.find_all("p", class_="strikeout")
# [<p class="body strikeout"></p>]

css_soup.find_all("p", class_="body")
# [<p class="body strikeout"></p>]
You can also search for the exact string value of the class attribute:

css_soup.find_all("p", class_="body strikeout")
# [<p class="body strikeout"></p>]

But searching for variants of the string value won’t work:

css_soup.find_all("p", class_="strikeout body")
# []

If you want to search for tags that match two or more CSS classes, you should use a CSS selector:

css_soup.select("p.strikeout.body")
# [<p class="body strikeout"></p>]
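The any-class vs. all-classes distinction above can be demonstrated directly (this is a minimal sketch; `select()` needs the soupsieve package that ships with recent bs4 releases):

```python
from bs4 import BeautifulSoup

css_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser')

# class_ matches if ANY one of the tag's classes matches
print(css_soup.find_all("p", class_="strikeout"))

# A CSS selector can require both classes, in either order
print(css_soup.select("p.strikeout.body"))
print(css_soup.select("p.body.strikeout"))
```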

In older versions of Beautiful Soup, which don’t have the class_ shortcut, you can use the attrs trick mentioned above. Create a dictionary whose value for “class” is the string (or regular expression, or whatever) you want to search for:

soup.find_all("a", attrs={"class": "sister"})
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]
soup.find_all(text="Elsie")
# [u'Elsie']

soup.find_all(text=["Tillie", "Elsie", "Lacie"])
# [u'Elsie', u'Lacie', u'Tillie']

soup.find_all(text=re.compile("Dormouse"))
# [u"The Dormouse's story", u"The Dormouse's story"]

def is_the_only_string_within_a_tag(s):
"""Return True if this string is the only child of its parent tag."""
return (s == s.parent.string)

soup.find_all(text=is_the_only_string_within_a_tag)
# [u"The Dormouse's story", u"The Dormouse's story", u'Elsie', u'Lacie', u'Tillie', u'...']
Although text is for finding strings, you can combine it with arguments that find tags: Beautiful Soup will find all tags whose .string matches your value for text. This code finds the <a> tags whose .string is “Elsie”:

soup.find_all("a", text="Elsie")
# [<a href="" class="sister" id="link1">Elsie</a>]
The limit argument
find_all() returns all the tags and strings that match your filters. This can take a while if the document is large. If you don’t need all the results, you can pass in a number for limit. This works just like the LIMIT keyword in SQL. It tells Beautiful Soup to stop gathering results after it’s found a certain number.

soup.find_all("a", limit=2)
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>]
The recursive argument
If you call mytag.find_all(), Beautiful Soup will examine all the descendants of mytag: its children, its children’s children, and so on. If you only want Beautiful Soup to consider direct children, you can pass in recursive=False. See the difference here:

soup.html.find_all("title")
# [<title>The Dormouse's story</title>]

soup.html.find_all("title", recursive=False)
# []
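The recursive=False behavior can be checked with a tiny document (a stand-in for the "three sisters" example):

```python
from bs4 import BeautifulSoup

doc = "<html><head><title>The Dormouse's story</title></head></html>"
soup = BeautifulSoup(doc, "html.parser")

# <title> is a descendant of <html>, but its direct parent is <head> ...
print(soup.html.find_all("title"))

# ... so a non-recursive search directly under <html> finds nothing
print(soup.html.find_all("title", recursive=False))
```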
Calling a tag is like calling find_all()

Because find_all() is the most popular method in the Beautiful Soup search API, you can use a shortcut for it. If you treat the BeautifulSoup object or a Tag object as though it were a function, then it’s the same as calling find_all() on that object. These two lines of code are equivalent:

soup.find_all("a")
soup("a")

These two lines are also equivalent:

soup.title.find_all(text=True)
soup.title(text=True)
find_parents() and find_parent()
find_next_siblings() and find_next_sibling()
find_previous_siblings() and find_previous_sibling()
find_all_next() and find_next()
find_all_previous() and find_previous()
CSS selectors

Beautiful Soup supports the most commonly-used CSS selectors. Just pass a string into the .select() method of a Tag object or the BeautifulSoup object itself.

You can find tags:

soup.select("title")
# [<title>The Dormouse's story</title>]

soup.select("p:nth-of-type(3)")
# [<p class="story">...</p>]

Find tags beneath other tags:

soup.select("body a")
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]

soup.select("html head title")
# [<title>The Dormouse's story</title>]

Find tags directly beneath other tags:

soup.select("head > title")
# [<title>The Dormouse's story</title>]

soup.select("p > a")
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]

soup.select("p > a:nth-of-type(2)")
# [<a class="sister" href="" id="link2">Lacie</a>]

soup.select("p > #link1")
# [<a class="sister" href="" id="link1">Elsie</a>]

soup.select("body > a")
# []

Find the siblings of tags:

soup.select("#link1 ~ .sister")
# [<a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]

soup.select("#link1 + .sister")
# [<a class="sister" href="" id="link2">Lacie</a>]

Find tags by CSS class:

soup.select(".sister")
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]

soup.select("[class~=sister]")
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]

Find tags by ID:

soup.select("#link1")
# [<a class="sister" href="" id="link1">Elsie</a>]

soup.select("a#link2")
# [<a class="sister" href="" id="link2">Lacie</a>]

Test for the existence of an attribute:

soup.select('a[href]')
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]

Find tags by attribute value:

soup.select('a[href=""]')
# [<a class="sister" href="" id="link1">Elsie</a>]

soup.select('a[href^=""]')
# [<a class="sister" href="" id="link1">Elsie</a>,
# <a class="sister" href="" id="link2">Lacie</a>,
# <a class="sister" href="" id="link3">Tillie</a>]

soup.select('a[href$="tillie"]')
# [<a class="sister" href="" id="link3">Tillie</a>]

soup.select('a[href*=".com/el"]')
# [<a class="sister" href="" id="link1">Elsie</a>]

Match language codes:

multilingual_markup = """
<p lang="en">Hello</p>
<p lang="en-us">Howdy, y'all</p>
<p lang="en-gb">Pip-pip, old fruit</p>
<p lang="fr">Bonjour mes amis</p>
"""
multilingual_soup = BeautifulSoup(multilingual_markup)
multilingual_soup.select('p[lang|=en]')
# [<p lang="en">Hello</p>,
# <p lang="en-us">Howdy, y'all</p>,
# <p lang="en-gb">Pip-pip, old fruit</p>]

This is a convenience for users who know the CSS selector syntax. You can do all this stuff with the Beautiful Soup API. And if CSS selectors are all you need, you might as well use lxml directly: it’s a lot faster, and it supports more CSS selectors. But this lets you combine simple CSS selectors with the Beautiful Soup API.
Modifying .string
If you set a tag’s .string attribute, the tag’s contents are replaced with the string you give:
markup = '<a href="">I linked to <i></i></a>'
soup = BeautifulSoup(markup)

tag = soup.a
tag.string = "New link text."
tag
# <a href="">New link text.</a>

Be careful: if the tag contained other tags, they and all their contents will be destroyed.

You can add to a tag’s contents with Tag.append(). It works just like calling .append() on a Python list:
soup = BeautifulSoup("<a>Foo</a>")
soup.a.append("Bar")

soup
# <html><head></head><body><a>FooBar</a></body></html>
soup.a.contents
# [u'Foo', u'Bar']
BeautifulSoup.new_string() and .new_tag()
If you need to add a string to a document, no problem–you can pass a Python string in to append(), or you can call the factory method BeautifulSoup.new_string():

soup = BeautifulSoup("<b></b>")
tag = soup.b
tag.append("Hello")
new_string = soup.new_string(" there")
tag.append(new_string)
tag
# <b>Hello there.</b>
tag.contents
# [u'Hello', u' there']

If you want to create a comment or some other subclass of NavigableString, pass that class as the second argument to new_string():

from bs4 import Comment
new_comment = soup.new_string("Nice to see you.", Comment)
tag.append(new_comment)
tag
# <b>Hello there<!--Nice to see you.--></b>
tag.contents
# [u'Hello', u' there', u'Nice to see you.']

(This is a new feature in Beautiful Soup 4.2.1.)

What if you need to create a whole new tag? The best solution is to call the factory method BeautifulSoup.new_tag():

soup = BeautifulSoup("<b></b>")
original_tag = soup.b

new_tag = soup.new_tag("a", href="")
original_tag.append(new_tag)
original_tag
# <b><a href=""></a></b>

new_tag.string = "Link text."
original_tag
# <b><a href="">Link text.</a></b>

Only the first argument, the tag name, is required.
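A self-contained sketch of new_tag(): attributes are passed as keyword arguments, and the new element is detached until you append it somewhere (the example.com URL is just a placeholder):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<b></b>", "html.parser")
original_tag = soup.b

# new_tag() creates a detached element
new_tag = soup.new_tag("a", href="http://example.com")
new_tag.string = "Link text."
original_tag.append(new_tag)

print(original_tag)   # <b><a href="http://example.com">Link text.</a></b>
```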
Tag.insert() is just like Tag.append(), except the new element doesn’t necessarily go at the end of its parent’s .contents. It’ll be inserted at whatever numeric position you say. It works just like .insert() on a Python list:

markup = '<a href="">I linked to <i></i></a>'
soup = BeautifulSoup(markup)
tag = soup.a

tag.insert(1, "but did not endorse ")
tag
# <a href="">I linked to but did not endorse <i></i></a>
tag.contents
# [u'I linked to ', u'but did not endorse', <i></i>]
insert_before() and insert_after()
The insert_before() method inserts a tag or string immediately before something else in the parse tree:

soup = BeautifulSoup("<b>stop</b>")
tag = soup.new_tag("i")
tag.string = "Don't"
soup.b.string.insert_before(tag)
soup.b
# <b><i>Don't</i>stop</b>

The insert_after() method moves a tag or string so that it immediately follows something else in the parse tree:
soup.b.i.insert_after(soup.new_string(" ever "))
soup.b
# <b><i>Don't</i> ever stop</b>
soup.b.contents
# [<i>Don't</i>, u' ever ', u'stop']

Tag.clear() removes the contents of a tag:

markup = '<a href="">I linked to <i></i></a>'
soup = BeautifulSoup(markup)
tag = soup.a
tag.clear()
tag
# <a href=""></a>
PageElement.extract() removes a tag or string from the tree. It returns the tag or string that was extracted:

markup = '<a href="">I linked to <i></i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

i_tag = soup.i.extract()

a_tag
# <a href="">I linked to</a>

i_tag
# <i></i>


At this point you effectively have two parse trees: one rooted at the BeautifulSoup object you used to parse the document, and one rooted at the tag that was extracted. You can go on to call extract on a child of the element you extracted:

my_string = i_tag.string.extract()
my_string
# u''

print(my_string.parent)
# None
i_tag
# <i></i>
Tag.decompose() removes a tag from the tree, then completely destroys it and its contents:

markup = '<a href="">I linked to <i></i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

soup.i.decompose()

a_tag
# <a href="">I linked to</a>
PageElement.replace_with() removes a tag or string from the tree, and replaces it with the tag or string of your choice:
markup = '<a href="">I linked to <i></i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

new_tag = soup.new_tag("b")
new_tag.string = ""
a_tag.i.replace_with(new_tag)

a_tag
# <a href="">I linked to <b></b></a>
replace_with() returns the tag or string that was replaced, so that you can examine it or add it back to another part of the tree.
PageElement.wrap() wraps an element in the tag you specify. It returns the new wrapper:

soup = BeautifulSoup("<p>I wish I was bold.</p>")
soup.p.string.wrap(soup.new_tag("b"))
# <b>I wish I was bold.</b>

soup.p.wrap(soup.new_tag("div"))
# <div><p><b>I wish I was bold.</b></p></div>

This method is new in Beautiful Soup 4.0.5.
Tag.unwrap() is the opposite of wrap(). It replaces a tag with whatever’s inside that tag. It’s good for stripping out markup:

markup = '<a href="">I linked to <i></i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

a_tag.i.unwrap()
a_tag
# <a href="">I linked to</a>

Like replace_with(), unwrap() returns the tag that was replaced.
Output formatters
If you give Beautiful Soup a document that contains HTML entities like “&ldquo;”, they’ll be converted to Unicode characters:
soup = BeautifulSoup("&ldquo;Dammit!&rdquo; he said.")
unicode(soup)
# u'<html><head></head><body>\u201cDammit!\u201d he said.</body></html>'

If you then convert the document to a string, the Unicode characters will be encoded as UTF-8. You won’t get the HTML entities back:

str(soup)
# '<html><head></head><body>\xe2\x80\x9cDammit!\xe2\x80\x9d he said.</body></html>'

By default, the only characters that are escaped upon output are bare ampersands and angle brackets. These get turned into “&amp;”, “&lt;”, and “&gt;”, so that Beautiful Soup doesn’t inadvertently generate invalid HTML or XML:

soup = BeautifulSoup("<p>The law firm of Dewey, Cheatem, & Howe</p>")
soup.p
# <p>The law firm of Dewey, Cheatem, &amp; Howe</p>

soup = BeautifulSoup('<a href="">A link</a>')
soup.a
# <a href=";bar=val2">A link</a>

You can change this behavior by providing a value for the formatter argument to prettify(), encode(), or decode(). Beautiful Soup recognizes four possible values for formatter.

The default is formatter="minimal". Strings will only be processed enough to ensure that Beautiful Soup generates valid HTML/XML:

french = "<p>Il a dit &lt;&lt;Sacr&eacute; bleu!&gt;&gt;</p>"
soup = BeautifulSoup(french)
print(soup.prettify())
# <html>
# <body>
# <p>
# Il a dit &lt;&lt;Sacré bleu!&gt;&gt;
# </p>
# </body>
# </html>

If you pass in formatter="html", Beautiful Soup will convert Unicode characters to HTML entities whenever possible:

print(soup.prettify(formatter="html"))
# <html>
# <body>
# <p>
# Il a dit &lt;&lt;Sacr&eacute; bleu!&gt;&gt;
# </p>
# </body>
# </html>

If you pass in formatter=None, Beautiful Soup will not modify strings at all on output. This is the fastest option, but it may lead to Beautiful Soup generating invalid HTML/XML, as in these examples:

print(soup.prettify(formatter=None))
# <html>
# <body>
# <p>
# Il a dit <<Sacré bleu!>>
# </p>
# </body>
# </html>

link_soup = BeautifulSoup('<a href="">A link</a>')
print(link_soup.a.encode(formatter=None))
# <a href="">A link</a>

Finally, if you pass in a function for formatter, Beautiful Soup will call that function once for every string and attribute value in the document. You can do whatever you want in this function. Here’s a formatter that converts strings to uppercase and does absolutely nothing else:

def uppercase(str):
    return str.upper()

print(soup.prettify(formatter=uppercase))
# <html>
# <body>
# <p>
# IL A DIT <<SACRÉ BLEU!>>
# </p>
# </body>
# </html>

print(link_soup.a.prettify(formatter=uppercase))
# <a href="">
# A LINK
# </a>

If you’re writing your own function, you should know about the EntitySubstitution class in the bs4.dammit module. This class implements Beautiful Soup’s standard formatters as class methods: the “html” formatter is EntitySubstitution.substitute_html, and the “minimal” formatter is EntitySubstitution.substitute_xml. You can use these functions to simulate formatter="html" or formatter="minimal", but then do something extra.

Here’s an example that replaces Unicode characters with HTML entities whenever possible, but also converts all strings to uppercase:

from bs4.dammit import EntitySubstitution
def uppercase_and_substitute_html_entities(str):
    return EntitySubstitution.substitute_html(str.upper())

print(soup.prettify(formatter=uppercase_and_substitute_html_entities))
# <html>
# <body>
# <p>
# IL A DIT &lt;&lt;SACR&Eacute; BLEU!&gt;&gt;
# </p>
# </body>
# </html>

One last caveat: if you create a CData object, the text inside that object is always presented exactly as it appears, with no formatting. Beautiful Soup will call the formatter method, just in case you’ve written a custom method that counts all the strings in the document or something, but it will ignore the return value:

from bs4.element import CData
soup = BeautifulSoup("<a></a>")
soup.a.string = CData("one < three")
print(soup.a.prettify(formatter="xml"))
# <a>
# <![CDATA[one < three]]>
# </a>
If you only want the text part of a document or tag, you can use the get_text() method. It returns all the text in a document or beneath a tag, as a single Unicode string:

markup = '<a href="">\nI linked to <i></i>\n</a>'
soup = BeautifulSoup(markup)

soup.get_text()
# u'\nI linked to\n'

You can specify a string to be used to join the bits of text together:

soup.get_text("|")
# u'\nI linked to ||\n'

You can tell Beautiful Soup to strip whitespace from the beginning and end of each bit of text:

soup.get_text("|", strip=True)
# u'I linked to|'

But at that point you might want to use the .stripped_strings generator instead, and process the text yourself:

[text for text in soup.stripped_strings]
# [u'I linked to', u'']
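A runnable version of the extraction methods above (the markup here keeps the inner text that the snippet above elides, with example.com as a placeholder):

```python
from bs4 import BeautifulSoup

markup = '<a href="http://example.com/">\nI linked to <i>example.com</i>\n</a>'
soup = BeautifulSoup(markup, "html.parser")

# get_text() keeps surrounding whitespace
print(repr(soup.get_text()))                  # '\nI linked to example.com\n'

# A separator plus strip=True removes it and joins the pieces
print(repr(soup.get_text("|", strip=True)))   # 'I linked to|example.com'

# .stripped_strings yields the pieces for you to process yourself
print([t for t in soup.stripped_strings])     # ['I linked to', 'example.com']
```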
Any HTML or XML document is written in a specific encoding like ASCII or UTF-8. But when you load that document into Beautiful Soup, you’ll discover it’s been converted to Unicode:

markup = "<h1>Sacr\xc3\xa9 bleu!</h1>"
soup = BeautifulSoup(markup)
soup.h1
# <h1>Sacré bleu!</h1>
soup.h1.string
# u'Sacr\xe9 bleu!'

It’s not magic. (That sure would be nice.) Beautiful Soup uses a sub-library called Unicode, Dammit to detect a document’s encoding and convert it to Unicode. The autodetected encoding is available as the .original_encoding attribute of the BeautifulSoup object:

soup.original_encoding
# 'utf-8'
Unicode, Dammit guesses correctly most of the time, but sometimes it makes mistakes. Sometimes it guesses correctly, but only after a byte-by-byte search of the document that takes a very long time. If you happen to know a document’s encoding ahead of time, you can avoid mistakes and delays by passing it to the BeautifulSoup constructor as from_encoding.

Here’s a document written in ISO-8859-8. The document is so short that Unicode, Dammit can’t get a good lock on it, and misidentifies it as ISO-8859-7:

markup = b"<h1>\xed\xe5\xec\xf9</h1>"
soup = BeautifulSoup(markup)
soup.original_encoding
# 'ISO-8859-7'

We can fix this by passing in the correct from_encoding:

soup = BeautifulSoup(markup, from_encoding="iso-8859-8")
soup.original_encoding
# 'iso8859-8'

In rare cases (usually when a UTF-8 document contains text written in a completely different encoding), the only way to get Unicode may be to replace some characters with the special Unicode character “REPLACEMENT CHARACTER” (U+FFFD, �). If Unicode, Dammit needs to do this, it will set the .contains_replacement_characters attribute to True on the UnicodeDammit or BeautifulSoup object. This lets you know that the Unicode representation is not an exact representation of the original–some data was lost. If a document contains �, but .contains_replacement_characters is False, you’ll know that the � was there originally (as it is in this paragraph) and doesn’t stand in for missing data.
Output encoding
When you write out a document from Beautiful Soup, you get a UTF-8 document, even if the document wasn’t in UTF-8 to begin with. Here’s a document written in the Latin-1 encoding:

markup = b'''
<meta content="text/html; charset=ISO-Latin-1" http-equiv="Content-type" />
<p>Sacr\xe9 bleu!</p>
'''
soup = BeautifulSoup(markup)
print(soup.prettify())
# <html>
# <head>
# <meta content="text/html; charset=utf-8" http-equiv="Content-type" />
# </head>
# <body>
# <p>
# Sacré bleu!
# </p>
# </body>
# </html>

Note that the <meta> tag has been rewritten to reflect the fact that the document is now in UTF-8.

If you don’t want UTF-8, you can pass an encoding into prettify():

print(soup.prettify("latin-1"))
# <html>
# <head>
# <meta content="text/html; charset=latin-1" http-equiv="Content-type" />
# ...

You can also call encode() on the BeautifulSoup object, or any element in the soup, just as if it were a Python string:

soup.p.encode("latin-1")
# '<p>Sacr\xe9 bleu!</p>'

soup.p.encode("utf-8")
# '<p>Sacr\xc3\xa9 bleu!</p>'

Any characters that can’t be represented in your chosen encoding will be converted into numeric XML entity references. Here’s a document that includes the Unicode character SNOWMAN:

markup = u"<b>\N{SNOWMAN}</b>"
snowman_soup = BeautifulSoup(markup)
tag = snowman_soup.b

The SNOWMAN character can be part of a UTF-8 document (it looks like ☃), but there’s no representation for that character in ISO-Latin-1 or ASCII, so it’s converted into “&#9731;” for those encodings:

print tag.encode("utf-8")
# <b>☃</b>

print tag.encode("latin-1")
# <b>&#9731;</b>

print tag.encode("ascii")
# <b>&#9731;</b>
Unicode, Dammit
You can use Unicode, Dammit without using Beautiful Soup. It’s useful whenever you have data in an unknown encoding and you just want it to become Unicode:

from bs4 import UnicodeDammit
dammit = UnicodeDammit("Sacr\xc3\xa9 bleu!")
print(dammit.unicode_markup)
# Sacré bleu!
dammit.original_encoding
# 'utf-8'

Unicode, Dammit’s guesses will get a lot more accurate if you install the chardet or cchardet Python libraries. The more data you give Unicode, Dammit, the more accurately it will guess. If you have your own suspicions as to what the encoding might be, you can pass them in as a list:

dammit = UnicodeDammit("Sacr\xe9 bleu!", ["latin-1", "iso-8859-1"])
print(dammit.unicode_markup)
# Sacré bleu!
dammit.original_encoding
# 'latin-1'

Unicode, Dammit has two special features that Beautiful Soup doesn’t use.
Smart quotes
You can use Unicode, Dammit to convert Microsoft smart quotes to HTML or XML entities:

markup = b"<p>I just \x93love\x94 Microsoft Word\x92s smart quotes</p>"

UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="html").unicode_markup
# u'<p>I just &ldquo;love&rdquo; Microsoft Word&rsquo;s smart quotes</p>'

UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="xml").unicode_markup
# u'<p>I just &#x201C;love&#x201D; Microsoft Word&#x2019;s smart quotes</p>'

You can also convert Microsoft smart quotes to ASCII quotes:

UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="ascii").unicode_markup
# u'<p>I just "love" Microsoft Word\'s smart quotes</p>'

Hopefully you’ll find this feature useful, but Beautiful Soup doesn’t use it. Beautiful Soup prefers the default behavior, which is to convert Microsoft smart quotes to Unicode characters along with everything else:

UnicodeDammit(markup, ["windows-1252"]).unicode_markup
# u'<p>I just \u201clove\u201d Microsoft Word\u2019s smart quotes</p>'
Inconsistent encodings

Sometimes a document is mostly in UTF-8, but contains Windows-1252 characters such as (again) Microsoft smart quotes. This can happen when a website includes data from multiple sources. You can use UnicodeDammit.detwingle() to turn such a document into pure UTF-8. Here’s a simple example:

snowmen = (u"\N{SNOWMAN}" * 3)
quote = (u"\N{LEFT DOUBLE QUOTATION MARK}I like snowmen!\N{RIGHT DOUBLE QUOTATION MARK}")
doc = snowmen.encode("utf8") + quote.encode("windows_1252")

This document is a mess. The snowmen are in UTF-8 and the quotes are in Windows-1252. You can display the snowmen or the quotes, but not both.

Decoding the document as UTF-8 raises a UnicodeDecodeError, and decoding it as Windows-1252 gives you gibberish. Fortunately, UnicodeDammit.detwingle() will convert the string to pure UTF-8, allowing you to decode it to Unicode and display the snowmen and quote marks simultaneously:

new_doc = UnicodeDammit.detwingle(doc)
print(new_doc.decode("utf8"))
# ☃☃☃“I like snowmen!”

UnicodeDammit.detwingle() only knows how to handle Windows-1252 embedded in UTF-8 (or vice versa, I suppose), but this is the most common case.

Note that you must know to call UnicodeDammit.detwingle() on your data before passing it into BeautifulSoup or the UnicodeDammit constructor. Beautiful Soup assumes that a document has a single encoding, whatever it might be. If you pass it a document that contains both UTF-8 and Windows-1252, it’s likely to think the whole document is Windows-1252, and the document will come out looking like ` ☃☃☃“I like snowmen!”`.

UnicodeDammit.detwingle() is new in Beautiful Soup 4.1.0.

+Differences between parsers (Dec. 10, 2014, 1:48 p.m.)

Beautiful Soup presents the same interface to a number of different parsers, but each parser is different. Different parsers will create different parse trees from the same document. The biggest differences are between the HTML parsers and the XML parsers. Here’s a short document, parsed as HTML:

BeautifulSoup("<a><b /></a>")
# <html><head></head><body><a><b></b></a></body></html>

Since an empty <b /> tag is not valid HTML, the parser turns it into a <b></b> tag pair.

Here’s the same document parsed as XML (running this requires that you have lxml installed). Note that the empty <b /> tag is left alone, and that the document is given an XML declaration instead of being put into an <html> tag:

BeautifulSoup("<a><b /></a>", "xml")
# <?xml version="1.0" encoding="utf-8"?>
# <a><b/></a>

There are also differences between HTML parsers. If you give Beautiful Soup a perfectly-formed HTML document, these differences won’t matter. One parser will be faster than another, but they’ll all give you a data structure that looks exactly like the original HTML document.

But if the document is not perfectly-formed, different parsers will give different results. Here’s a short, invalid document parsed using lxml’s HTML parser. Note that the dangling </p> tag is simply ignored:

BeautifulSoup("<a></p>", "lxml")
# <html><body><a></a></body></html>

Here’s the same document parsed using html5lib:

BeautifulSoup("<a></p>", "html5lib")
# <html><head></head><body><a><p></p></a></body></html>

Instead of ignoring the dangling </p> tag, html5lib pairs it with an opening <p> tag. This parser also adds an empty <head> tag to the document.

Here’s the same document parsed with Python’s built-in HTML parser:

BeautifulSoup("<a></p>", "html.parser")
# <a></a>

Like html5lib, this parser ignores the closing </p> tag. Unlike html5lib, this parser makes no attempt to create a well-formed HTML document by adding a <body> tag. Unlike lxml, it doesn’t even bother to add an <html> tag.

Since the document “<a></p>” is invalid, none of these techniques is the “correct” way to handle it. The html5lib parser uses techniques that are part of the HTML5 standard, so it has the best claim on being the “correct” way, but all three techniques are legitimate.

Differences between parsers can affect your script. If you’re planning on distributing your script to other people, or running it on multiple machines, you should specify a parser in the BeautifulSoup constructor. That will reduce the chances that your users parse a document differently from the way you parse it.
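A quick sketch of comparing the parsers side by side (lxml and html5lib are optional installs, so the loop skips any that are missing):

```python
from bs4 import BeautifulSoup, FeatureNotFound

markup = "<a></p>"
for parser in ("html.parser", "lxml", "html5lib"):
    try:
        # Pinning the parser name makes the result reproducible
        # across machines, as recommended above.
        print(parser, "->", BeautifulSoup(markup, parser))
    except FeatureNotFound:
        print(parser, "-> not installed")
```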

+Introduction and Installation (Dec. 10, 2014, 1:41 p.m.)

Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work.

Beautiful Soup 4 works on both Python 2 (2.6+) and Python 3.
You can install it with pip install beautifulsoup4 or easy_install beautifulsoup4. It's also available as the python-beautifulsoup4 package in recent versions of Debian, Ubuntu, and Fedora.

Beautiful Soup 3
Beautiful Soup 3 was the official release line of Beautiful Soup from May 2006 to March 2012. It is considered stable, and only critical bugs will be fixed. Here's the Beautiful Soup 3 documentation.
Beautiful Soup 3 works only under Python 2.x. It is licensed under the same license as Python itself.
Installing a parser

Beautiful Soup supports the HTML parser included in Python’s standard library, but it also supports a number of third-party Python parsers. One is the lxml parser. Depending on your setup, you might install lxml with one of these commands:

$ apt-get install python-lxml

$ easy_install lxml

$ pip install lxml

Another alternative is the pure-Python html5lib parser, which parses HTML the way a web browser does. Depending on your setup, you might install html5lib with one of these commands:

$ apt-get install python-html5lib

$ easy_install html5lib

$ pip install html5lib

+PTR Record (Aug. 19, 2018, 7:59 p.m.)

A Pointer (PTR) record resolves an IP address to a fully qualified domain name (FQDN), the opposite of what an A record does. PTR records are also called Reverse DNS records.

PTR records are mainly used to check if the server name is actually associated with the IP address from where the connection was initiated.

IP addresses of all Intermedia mail servers already have PTR records created.


What is a PTR Record?

PTR records are used for Reverse DNS (Domain Name System) lookups. Using the IP address, you can get the associated domain/hostname. An A record should exist for every PTR record. Setting up reverse DNS for a mail server is good practice.

While in the domain DNS zone the hostname is pointed to an IP address, using the reverse zone allows pointing an IP address to a hostname.
In the Reverse DNS zone, you need to use a PTR Record. The PTR Record resolves the IP address to a domain/hostname.
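As an illustration of the reverse-zone naming, Python's standard ipaddress module can show the name that is actually queried in a PTR lookup (192.0.2.10 is a reserved documentation address, used here purely as an example):

```python
import ipaddress

# A PTR lookup queries the reversed octets under .in-addr.arpa
# (or .ip6.arpa for IPv6).
addr = ipaddress.ip_address("192.0.2.10")
print(addr.reverse_pointer)  # 10.2.0.192.in-addr.arpa
```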


+Errors (Aug. 7, 2015, 3:31 p.m.)

managed-keys-zone ./IN: loading from master file managed-keys.bind

To solve it:
nano /etc/bind/named.conf
and add the line: include "/etc/bind/bind.keys";

Also create an empty file:
touch /etc/bind/managed-keys.bind

When working with the Reverse DNS zone and the zone file, you can use the tool:
to check the validity of the files.

+Configuration (Aug. 21, 2014, 12:48 p.m.)

This file contains a summary of my own experiences:

1-There are some default zones in "/etc/bind/named.conf.external-zones"; no need to change them, nor to exclude them from the file "/etc/bind/named.conf".
2-Add a line at the bottom of the file "/etc/bind/named.conf":
include "/etc/bind/named.conf.external-zones";
3-Create a file named "/etc/bind/named.conf.external-zones" and fill it up with:
// -------------- Begin --------------
zone "" {
    type master;
    file "/etc/bind/zones/";
};

zone "" {
    type master;
    file "/etc/bind/zones/";
};
// -------------- End --------------

// -------------- Begin --------------
zone "" {
    type master;
    file "/etc/bind/zones/";
};

zone "" {
    type master;
    file "/etc/bind/zones/";
};
// -------------- End --------------
4-There is an empty directory at "/etc/bind/zones/". This is the place for holding the data for the above paths. So create a file named "" and fill it with:
$TTL 3h
@ IN SOA (


ns IN A
@ IN A
5-Repeat the earlier step with a different file name and data. That is, create a file named "" in "/etc/bind/zones/" and fill it with:

$TTL 3h
@ IN SOA (
1h )

; main domain name servers
; main domain mail servers
IN MX 10
; A records for name servers above
www IN A
pania IN A
; A record for mail server above
mail IN A
6- OK, Done!
When I was done with these configurations, I tested my work with "dig" but got an error like:

root@mohsenhassani:/home/mohsen# dig
; <<>> DiG 9.7.3 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8929
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

; IN A

;; Query time: 383 msec
;; WHEN: Sat Mar 16 17:00:19 2013
;; MSG SIZE rcvd: 34

In the line ";; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8929"
the word "SERVFAIL" shows that there are errors. There are many, many possible causes for this error, and the id may help you look it up.
Anyway, for this error I had to do this:
sudo nano /etc/resolv.conf
and add: to the first line.
It already had and

Then, running "dig" again, there were no more errors:
root@mohsenhassani:/home/mohsen# dig

; <<>> DiG 9.7.3 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39792
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

; IN A




;; Query time: 0 msec
;; WHEN: Sat Mar 16 17:02:26 2013
;; MSG SIZE rcvd: 83
Oh! And you have to create two sub-domains named "ns1.mohsenhassani.COM" and "ns2.mohsenhassani.COM" so that you can forward the ".ir" domains to these sub-domains.

+Installation (Aug. 7, 2015, 4:22 p.m.)
apt-get install bind9 bind9utils

When installing, configuring, or restarting bind, if you encounter errors, check the log files. The log files are not stored separately; BIND stores its logs in the syslog:
nano /var/log/syslog
1-nano /etc/bind/named.conf.options
We need to modify the forwarders. This is the DNS server to which your own DNS will forward the requests it cannot process.

forwarders {
    # Replace the address below with the address of your provider's DNS server;
};
2-Add this line to the file: /etc/bind/named.conf
include "/etc/bind/named.conf.external-zones";
3-nano /etc/bind/named.conf.external-zones
This is where we will insert our zones. By the way, a zone is a domain name that is referenced in the DNS server.

// -------------- Begin --------------
zone "" {
    type master;
    file "/etc/bind/zones/";
};

zone "" {
    type master;
    file "/etc/bind/zones/";
};
// -------------- End --------------
4-nano /etc/bind/zones/
$TTL 3h
@ IN SOA (
1h )

@ IN A
5-Restart BIND:
sudo /etc/init.d/bind9 restart

In case of failure, check the errors:
nano /var/log/syslog

We can now test the new DNS server...
Modify the file resolv.conf with the following settings:
sudo nano /etc/resolv.conf

enter the following:

Now, test your DNS:

In case of errors, refer to the Errors note in the BIND category.

+Description (Aug. 21, 2014, 12:45 p.m.)

Every system on the Internet must have a unique IP address. (This does not include systems that are behind a NAT firewall because they are not directly on the Internet.) DNS acts as a directory service for all of these systems, allowing you to specify each one by its hostname. A telephone book allows you to look up an individual person by name and get their telephone number, their unique identifier on the telephone system's network. DNS allows you to look up an individual server by name and get its IP address, its unique identifier on the Internet.
There are other hostname-to-IP directory services in use, mainly for LANs. Windows LANs can use WINS. UNIX LANs can use NIS. But because DNS is the directory service for the Internet (and can also be used for LANs) it is the most widely used. UNIX LANs could always use DNS instead of NIS, and starting with Windows 2000 Server, Windows LANs could use DNS instead of, or in addition to, WINS. And on small LANs where there are only a few machines you could just use HOSTS files on each system instead of setting up a server running DNS, NIS, or WINS.

As a service, DNS is critical to the operation of the Internet. When you enter in a Web browser, it's DNS that takes the www host name and translates it to an IP address. Without DNS, you could be connected to the Internet just fine, but you ain't goin' no where. Not unless you keep a record of the IP addresses of all of the resources you access on the Internet and use those instead of host/domain names.

So when you visit a Web site, you are actually doing so using the site's IP address even though you specified a host and domain name in the URL. In the background your computer quickly queried a DNS server to get the IP address that corresponds to the Web site's server and domain names. Now you know why you have to specify one or two DNS server IP addresses in the TCP/IP configuration on your desktop PC (in the resolv.conf file on a Linux system and the TCP/IP properties in the Network Control Panel on Windows systems).

A "cannot connect" error doesn't necessarily indicate there isn't a connection to the destination server. There may very well be. The error may indicate a failure in "resolving" the domain name to an IP address. I use the open source Firefox Web browser on Windows systems because the status bar gives more informational messages like "Resolving host", "Connecting to", and "Transferring data" rather than just the generic "Opening page" with IE. (It also seems to render pages faster than IE.)

In short, always check for correct DNS operation when troubleshooting a problem involving the inability to access an Internet resource. The ability to resolve names is critical, and later in this page we'll show you some tools you can use to investigate and verify this ability.
When you are surfing the Web viewing Web pages or sending an e-mail, your workstation is sending queries to a DNS server to resolve server/domain names. (Back on the Modems page we showed you how to set up your resolv.conf file to do this.) When you have your own Web site that other people visit, you need a DNS server to respond to the queries from their workstations.

When you visit Web sites, the DNS server your workstation queries for name resolution is typically run by your ISP, but you could have one of your own. When you have your own Web site, the DNS servers which respond to visitors' queries are typically run by your Web hosting provider, but you could likewise have one of your own. Actually, if you set up your own DNS server it could be used to respond to both "internal" (from your workstation) and "external" (from your Web site's visitors) queries.

Even if you don't have your own domain name, or even your own LAN, you can still benefit from using a DNS server to allow others to access your Debian system. If you have a single system connected to the Internet via a cable or DSL connection, you can have it act as a Web/e-mail/FTP server using a neat service called "dynamic DNS" which we'll cover later. Dynamic DNS will even work with a modem if you want to play around with it.

DNS Server Functions:
You can set up a DNS server for several different reasons:
Internet Domain Support: If you have a domain name and you're operating Web, e-mail, FTP, or other Internet servers, you'll use a DNS server to respond to resolution queries so others can find and access your server(s). This is a serious undertaking and you'd have to set up a minimum of two of them. On this page we'll refer to these types of DNS servers as authoritative DNS servers for reasons you'll see later. However, there are alternatives to having your own authoritative DNS server if you have (or want to have) your own domain name. You can have someone else host your DNS records for you. Even if someone else is taking care of your domain's DNS records you could still set up one of the following types of DNS servers.

Local Name Resolution: Similar to the above scenario, this type of DNS server would resolve the hostnames of systems on your LAN. Typically in this scenario there is one DNS server and it does both jobs. The first being that it receives queries from workstations and the second being that it serves as the authoritative source for the responses (this will be more clear as we progress). Having this type of DNS server would eliminate the need to have (and manually update) a HOSTS file on each system on your LAN. On this page we'll refer to these as LAN DNS servers.

During the Debian installation you are asked to supply a domain name. This is an internal (private) domain name which is not visible to the outside world so, like the private IP address ranges you use on a LAN, it doesn't have to be registered with anyone. A LAN DNS server would be authoritative for this internal, private domain. For security reasons, the name for this internal domain should not be the same as any public domain name you have registered. Private domain names are not restricted to using one of the established public TLD (Top Level Domain) names such as .com or .net. You could use .corp or .inc or anything else for your TLD. Since a single DNS server can be authoritative for multiple domains, you could use the same DNS server for both your public and private domains. However, the server would need to be accessible from both the Internet and the LAN so you'd need to locate it in a DMZ. Though you want to use different public and private domain names, you can use the same name for the second-level domain. For example, for the public name and for the private name.

Internet Name Resolution: LAN workstations and other desktop PCs need to send Internet domain name resolution queries to a DNS server. The DNS server most often used for this is the ISP's DNS servers. These are often the DNS servers you specify in your TCP/IP configuration. You can have your own DNS server respond to these resolution queries instead of using your ISP's DNS servers. My ISP recently had a problem where they would intermittently lose connectivity to the network segment that their DNS servers were connected to so they couldn't be contacted. It took me about 30 seconds to turn one of my Debian systems into this type of DNS server and I was surfing with no problems. On this page we'll refer to these as simple DNS servers. If a simple DNS server fails, you could just switch back to using your ISP's DNS servers. As a matter of fact, given that you typically specify two DNS servers in the TCP/IP configuration of most desktop PCs, you could have one of your ISP's DNS servers listed as the second (fallback) entry and you'd never miss a beat if your simple DNS server did go down. Turning your Debian system into a simple DNS server is simply a matter of entering a single command.

Don't take from this that you need three different types of DNS servers. If you were to set up a couple authoritative DNS servers they could also provide the functionality of LAN and simple DNS servers. And a LAN DNS server can simultaneously provide the functionality of a simple DNS server. It's a progressive type of thing.

If you were going to set up authoritative DNS servers or a simple DNS server you'd have to have a 24/7 broadband connection to the Internet. Naturally, a LAN DNS server that didn't resolve Internet host/domain names wouldn't need this.

A DNS server is just a Debian system running a DNS application. The most widely used DNS application is BIND (Berkeley Internet Name Domain) and it runs a daemon called named that, among other things, responds to resolution queries. We'll see how to install it after we cover some basics.

DNS Basics:
Finding a single server out of all of the servers on the Internet is like trying to find a single file on a drive with thousands of files. In both cases it helps to have some hierarchy built into the directory to logically group things. The DNS "namespace" is hierarchical, in the same type of upside-down tree structure seen with file systems. Just as you have the root of a partition or drive, the DNS namespace has a root which is signified by a period.

Namespace Root --> Top Level Domains --> Second Level Domains
Namespace Root: .
Top Level Domains: com, net, org
Second Level Domains: com --> aboutdebian, cnn, net --> sbc, org --> samba, debian

When specifying the absolute path to a file in a file system you start at the root and go to the file:

When specifying the absolute path to a server in the DNS namespace you start at the server and go to the root:

Note the period after the 'com'; it's important. It's how you specify the root of the namespace. An absolute path in the DNS namespace is called an FQDN (Fully Qualified Domain Name). FQDNs are prevalent in DNS configuration files, and it's important that you always use that trailing period.

Internet resources are usually specified by a domain name and a server hostname. The www part of a URL is often the hostname of the Web server (or it could be an alias to a server with a different host name). DNS is basically just a database with records for these hostnames. The directory for the entire telephone system is not stored in one huge phone book. Rather, it is broken up into many pieces with each city having, and maintaining, its piece of the entire directory in its phone book. By the same token, pieces of the DNS directory database (the "zones") are stored, and maintained, on many different DNS servers located around the Internet. If you want to find the telephone number for a person in Poughkeepsie, you'd have to look in the Poughkeepsie telephone book. If you want to find the IP address of the www server in the domain, you'd have to query the DNS server that stores the DNS records for that domain.

The entries in the database map a host/domain name to an IP address. Here is a simplistic logical view of the type of information that is stored (we'll get to the A, CNAME, and MX designations in a bit).


This is why a real Internet server needs a static (unchanging) IP address. The IP address of the server's NIC connected to the Internet has to match whatever address is in the DNS database. Dynamic DNS does provide a way around this for home servers however, which we'll see later.

When you want to browse to a Web site, your DNS server (the one you specify in the TCP/IP configuration on your desktop computer) most likely won't have a DNS record for the domain, so it has to contact the DNS server that does. When your DNS server contacts the DNS server that has the DNS records (referred to as "resource records" or "zone records") for the domain, it gets the IP address of the www server and relays that address back to your desktop computer. So which DNS server has the DNS records for a particular domain?

When you register a domain name with someone like Network Solutions, one of the things they ask you for is the server names and addresses of two or three "name servers" (DNS servers). These are the servers where the DNS records for your domain will be stored (and queried by the DNS servers of those browsing to your site). So where do you get the "name servers" information for your domain? Typically, when you host your Web site using a Web hosting service, they not only provide a Web server for your domain's Web site files but also a DNS server to store your domain's DNS records. In other words, you'll want to know who your Web hosting provider is going to be before you register a domain name (so you can enter the provider's DNS server information in the name servers section of the domain name registration application).

You'll see the term "zone" used in DNS references. Most of the time a zone just equates to a domain. The only times this wouldn't be true is if you set up subdomains and set up separate DNS servers to handle just those subdomains. For example, a company would set up the subdomains and and would "delegate" a separate DNS server to each one of them. In the case of these two DNS servers their zone would be just the subdomains. The zone of the DNS server for the parent (which would contain the servers and would only contain records for those few machines in the parent domain.

Note that in the above example "us" and "europe" are subdomains while "www" and "mail" are host names of servers in the parent domain.

Once you've got your Web site up and running on your Web hosting provider's servers and someone surfs to your site, the DNS server they specified in their local TCP/IP configuration will query your hosting provider's DNS servers to get the IP address for your Web site. The DNS servers that host the DNS records for your domain, i.e. the DNS servers you specify in your domain name registration application, are the authoritative DNS servers for your domain. The surfer's DNS server queries one of your site's authoritative DNS servers to get an address and gets an authoritative response. When the surfer's DNS server relays the address information back to the surfer's local PC, it is a "non-authoritative" response because the surfer's DNS server is not an authoritative DNS server for your domain.

Example: If you surf to MIT's Web site the DNS server you have specified in your TCP/IP configuration queries one of MIT's authoritative DNS servers and gets an authoritative response with the IP address for the 'www' server. Your DNS server then sends a non-authoritative response back to your PC. You can easily see this for yourself. At a shell prompt, or a DOS window on a newer Windows system, type in:


First you'll see the name and IP address of your locally-specified DNS server. Then you'll see the non-authoritative response your DNS server sent back containing the name and IP address of the MIT Web server.

If you're on a Linux system you can also see which name server(s) your DNS server contacted to get the IP address. At a shell prompt type in:


and you'll see three authoritative name servers listed with the hostnames STRAWB, W20NS, and BITSY. The 'whois' command simply returns the contents of a site's domain record.

DNS Records and Domain Records

Don't confuse DNS zone records with domain records. Your domain record is created when you fill out a domain name registration application and is maintained by the domain registration service (like Network Solutions) you used to register the domain name. A domain only has one domain record and it contains administrative and technical contact information as well as entries for the authoritative DNS servers (aka "name servers") that are hosting the DNS records for the domain. You have to enter the hostnames and addresses for multiple DNS servers in your domain record for redundancy (fail-over) purposes.

DNS records (aka zone records) for a domain are stored in the domain's zone file on the authoritative DNS servers. Typically, it is stored on the DNS servers of whatever Web hosting service is hosting your domain's Web site. However, if you have your own Web server (rather than using a Web hosting service) the DNS records could be hosted by you using your own authoritative DNS servers (as in MIT's case), or by a third party like EasyDNS.

In short, the name servers you specified in your domain record host the domain's zone file containing the zone records. The name servers, whether they be your Web hosting provider's, those of a third party like EasyDNS, or your own, which host the domain's zone file are authoritative DNS servers for the domain.

Because DNS is so important to the operation of the Internet, when you register a domain name you must specify a minimum of two name servers. If you set up your own authoritative DNS servers for your domain you must set up a minimum of two of them (for redundancy) and these would be the servers you specify in your domain record. While the multiple servers you specify in your domain record are authoritative for your domain, only one DNS server can be the primary DNS server for a domain. Any others are "secondary" servers. The zone file on the primary DNS server is "replicated" (transferred) to all secondary servers. As a result, any changes made to DNS records must be made on the primary DNS server. The zone files on secondary servers are read-only. If you made changes to the records in a zone file on a secondary DNS server they would simply be overwritten at the next replication. As you will see below, the primary server for a domain and the replication frequency are specified in a special type of zone record.

Early on in this page we said that the DNS zone records are stored in a DNS database which we now know is called a zone file. The term "database" is used quite loosely. The zone file is actually just a text file which you can edit with any text editor. A zone file is domain-specific. That is, each domain has its own zone file. Actually, there are two zone files for each domain but we're only concerned with one right now. The DNS servers for a Web hosting provider will have many zone files, two for each domain it's hosting zone records for. A zone "record" is, in most cases, nothing more than a single line in the text zone file.

There are different types of DNS zone records. These numerous record types give you flexibility in setting up the servers in your domain. The most common types of zone records are:

An A (Address) record is a "host record" and it is the most common type. It is simply a static mapping of a hostname to an IP address. A common hostname for a Web server is 'www' so the A record for this server gives the IP address for this server in the domain.

An MX (Mail eXchanger) record is specifically for mail servers. It's a special type of service-specifier record. It identifies a mail server for the domain. That's why you don't have to enter a hostname like 'www' in an e-mail address. If you're running Sendmail (mail server) and Apache (Web server) on the same system (i.e. the same system is acting as both your Web server and e-mail server), both the A record for the system and the MX record would refer to the same server.

To offer some fail-over protection for e-mail, MX records also have a Priority field (numeric). You can enter two or three MX records each pointing to a different mail server, but the server specified in the record with the highest priority (lowest number) will be chosen first. A mail server with a priority of 10 in the MX record will receive e-mail before a server with a priority of 20 in its MX record. Note that we are only talking about receiving mail from other Internet mail servers here. When a mail server is sending mail, it acts like a desktop PC when it comes to DNS. The mail server looks at the domain name in the recipient's e-mail address and then contacts its local DNS server (specified in the resolv.conf file) to get the IP address for the mail server in the recipient's domain. When an authoritative DNS server for the recipient's domain receives the query from the sender's DNS server, it sends back the IP addresses from the MX records it has in that domain's zone file.
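The priority rule can be sketched in a few lines of Python (the hostnames are hypothetical, using the reserved example.com domain):

```python
# (priority, mail host) pairs as they might come back from an MX query.
mx_records = [(20, "backup.example.com."), (10, "mail.example.com.")]

# The lowest priority number wins; tuples sort on the number first.
preferred = min(mx_records)[1]
print(preferred)  # mail.example.com.
```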

A CNAME (Canonical Name) record is an alias record. It's a way to have the same physical server respond to two different hostnames. Let's say you're not only running Sendmail and Apache on your server, but you're also running WU-FTPD so it also acts as an FTP server. You could create a CNAME record with the alias name 'ftp' so people would use and to access different services on the same server.

Another use for a CNAME record was illustrated in the example near the top of the page. Suppose you name your Web server 'debian' instead of 'www'. You could simply create a CNAME record with the alias name 'www' but with the hostname 'debian' and debian's IP address.

NS (Name Server) records specify the authoritative DNS servers for a domain.

There can be multiples of all of the above record types. There is one special record type of which there is only one record in the zone file. That's the SOA (Start Of Authority) record and it's the first record in the zone file. An SOA record is only present in a zone file located on authoritative DNS servers (non-authoritative DNS servers can cache zone records). It specifies such things as:

The primary authoritative DNS server for the zone (domain).
The e-mail address of the zone's (domain's) administrator. In zone files, the '@' has a specific meaning (see below) so the e-mail address is written as

Timing information as to when secondary DNS servers should refresh or expire a zone file and a serial number to indicate the version of the zone file for the sake of comparison.

The SOA record is the one that takes up several lines.
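As a hedged illustration of the shape of such a record, every name and timer below is a made-up example (using the reserved example.com domain), not taken from a real zone:

```
example.com.  IN  SOA  ns1.example.com. hostmaster.example.com. (
        2024010101 ; serial, often based on the date
        21600      ; refresh after 6 hours
        3600       ; retry after 1 hour
        604800     ; expire after 7 days
        3600 )     ; minimum TTL of 1 hour
```

Note the administrator e-mail is written with a dot instead of '@' (hostmaster.example.com. means hostmaster@example.com), since '@' has a special meaning in zone files.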

Several important points to note about the records in a zone file:

Records can specify servers in other domains. This is most commonly used with MX and NS records when backup servers are located in a different domain but receive mail or resolve queries for your domain.

There must be an A record for systems specified in all MX, NS, and CNAME records.

A and CNAME records can specify workstations as well as servers (which you'll see when we set up a LAN DNS server).

Now let's look at a typical zone file. When a Debian system is set up as a DNS server the zone files are stored in the /etc/bind directory. In a zone file, the two parentheses around the timer values act as line-continuation characters, as does the '\' character at the end of the second line. The ';' is the comment character. The 'IN' indicates an INternet-class record.

$TTL 86400
 IN SOA \ (
2004011522 ; Serial no., based on date
21600 ; Refresh after 6 hours
3600 ; Retry after 1 hour
604800 ; Expire after 7 days
3600 ) ; Minimum TTL of 1 hour
;Name servers
debns1 IN A IN A

@ IN NS debns1 IN NS

;Mail servers
debmail1 IN A IN A

@ IN MX 10 debmail1 IN MX 20

;Aliased servers
debhp IN A IN A



+Django Celery with django-celery-results extension (Nov. 11, 2016, 10:37 a.m.)

pip install celery
pip install django_celery_results
pip install django_celery_beat


# project/project/

from __future__ import absolute_import, unicode_literals
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))


# project/project/

from __future__ import absolute_import, unicode_literals

from .celery import app as celery_app

__all__ = ['celery_app']



from __future__ import absolute_import

from celery import shared_task


@shared_task
def begin_ping():
    return 'hi'






python manage.py migrate django_celery_results
python manage.py migrate django_celery_beat


apt install rabbitmq-server
For running it:


Run these two commands in separated activated virtualenvs:
celery -A project beat -l info -S django
celery -A project worker -l info

The command "celery -A project beat -l info -S django" uses the "DatabaseScheduler", which gets the schedules from the Django admin panel.
You can also use "celery -A project beat -l info", which uses the "PersistentScheduler" and gets the schedules from the scripts in the tasks.

To define schedules from the admin panel, open the "Intervals" page and define a suitable interval.
Then open the "Periodic tasks" page and select the defined interval in the "Interval" dropdown list.


+Celery and RabbitMQ with Django (Oct. 14, 2018, 9:54 a.m.)

1- pip install Celery


2- apt-get install rabbitmq-server


3- Enable and start the RabbitMQ service
systemctl enable rabbitmq-server
systemctl start rabbitmq-server


4- Add the broker configuration to settings.py:
CELERY_BROKER_URL = 'amqp://localhost'


5- Create a new file named celery.py next to settings.py:
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

app = Celery('mysite')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


6- Edit the __init__.py file in the project package:

from .celery import app as celery_app

__all__ = ['celery_app']


7- Create a file named tasks.py inside a Django app:

from celery import shared_task


@shared_task
def my_task(x, y):
    return x, y


8- In a view (or any other code), call the task:

from .tasks import my_task

my_task.delay(x, y)

Instead of calling my_task() directly, we call my_task.delay(). This way we instruct Celery to execute the function in the background.


9- Starting The Worker Process:

Open a new terminal tab, and run the following command:
celery -A mysite worker -l info


+Periodic Tasks from tasks.py (Oct. 14, 2018, 10:24 a.m.)

import datetime

from celery.task import periodic_task


@periodic_task(run_every=datetime.timedelta(seconds=30))  # interval value is an example
def myfunc():
    print('periodic_task')

+Periodic Tasks from settings.py (Oct. 14, 2018, 10:53 a.m.)

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': timedelta(seconds=30),
        'args': (16, 16)
    },
}

+Running tasks in shell (Oct. 11, 2018, 10:49 a.m.)

celery -A project_name beat

celery -A cdr worker -l info

+Daemon Scripts (Sept. 29, 2015, 11:39 a.m.)

These scripts are needed when you want to run the worker as a daemon.

The first is used to see the output of running tasks. For example, I had something printed to the console from within a task, and I could see the printed string in this terminal.

The second is for firing up / starting the tasks.

1- Create a file /etc/supervisor/conf.d/celeryd.conf with this content:
[program:celery]
; Set full path to celery program if using virtualenv
command=/home/mohsen/virtualenvs/django-1.7/bin/celery worker -A cdr --loglevel=INFO

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it, send SIGKILL
; to its whole process group instead, taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

2- Create a file /etc/supervisor/conf.d/celerybeat.conf with this content:

[program:celerybeat]
; Set full path to celery program if using virtualenv
command=/home/mohsen/virtualenvs/django-1.7/bin/celery beat -A cdr

; remove the -A myapp argument if you are not using an app instance

; if rabbitmq is supervised, set its priority higher so it starts first
priority=999

+RBD (Oct. 30, 2017, 10:01 a.m.)

rbd is a utility for manipulating rados block device (RBD) images, used by the Linux rbd driver and the rbd storage driver for Qemu/KVM. RBD images are simple block devices that are striped over objects and stored in a RADOS object store. The size of the objects the image is striped over must be a power of two.
rbd -p image ls

rbd -p image info Windows7x8

rbd -p image rm Win7x86WithApps

rbd export --pool=image disk_user01_2 /root/Windows7x86.qcow2

The "2" is the ID of the Template in deskbit admin panel.

+Changing a Monitor’s IP address (Sept. 19, 2017, 4:42 p.m.)
ceph mon getmap -o /tmp/a

monmaptool --print /tmp/a

monmaptool --rm vdiali /tmp/a

monmaptool --add vdiali /tmp/a

monmaptool --print /tmp/a

systemctl stop ceph-mon*

ceph-mon -i vdimohsen --inject-monmap /tmp/a

Change IP in the following files:

+Properly remove an OSD (Aug. 23, 2017, 12:35 p.m.)

Removing an OSD improperly can result in double rebalancing. The best practice is to first change the OSD's crush weight to 0.0.

$ ceph osd crush reweight osd.<ID> 0.0

Then you wait for rebalancing to be completed. Eventually completely remove the OSD:

$ ceph osd out <ID>
$ service ceph stop osd.<ID>
$ ceph osd crush remove osd.<ID>
$ ceph auth del osd.<ID>
$ ceph osd rm <ID>
From the docs:
Remove an OSD

To remove an OSD from the CRUSH map of a running cluster, execute the following:
ceph osd crush remove {name}

For getting the name:
ceph osd tree

+Errors - undersized+degraded+peered (July 4, 2017, 5:25 p.m.)
ceph osd crush rule create-simple same-host default osd

ceph osd pool set rbd crush_ruleset 1

+Commands (July 3, 2017, 3:53 p.m.)

ceph osd tree

ceph osd dump

ceph osd lspools

ceph osd pool ls

ceph osd pool get rbd all

ceph osd pool set rbd size 2

ceph osd crush rule ls
ceph-osd -i 0

ceph-osd -i 0 --mkfs --mkkey
ceph -w

ceph -s

ceph health detail
ceph-disk activate /var/lib/ceph/osd/ceph-0

ceph-disk list

chown ceph:disk /dev/sda1 /dev/sdb1
ceph-mon -f --cluster ceph --id vdi --setuser ceph --setgroup ceph
systemctl -a | grep ceph

systemctl status ceph-osd*

systemctl status ceph-mon*

systemctl enable
rbd -p image ls

rbd export --pool=image disk_win_7 /root/win7.img
cd /var/lib/ceph/osd/
ceph-2 ceph-3 ceph-8

mount | grep -i vda
mount | grep -i vdb
mount | grep -i vdc
mount | grep ceph

fdisk -l

mount /dev/vdc1 ceph-3/

systemctl restart ceph-osd@3
ceph osd tree
systemctl restart ceph-osd@5

mount | grep -i ceph

systemctl restart ceph-osd@5
Job for ceph-osd@5.service failed because the control process exited with error code.
See "systemctl status ceph-osd@5.service" and "journalctl -xe" for details.

systemctl daemon-reload
systemctl restart ceph-osd@5
ceph osd tree
ceph -w


+ceph-ansible (Jan. 7, 2017, 10:58 a.m.)
0- apt-get update # Ensure you do this step before running ceph-ansible!!!

1- apt-get install libffi-dev libssl-dev python-pip python-setuptools sudo python-dev

git clone
2- pip install markupsafe ansible
3-Setup your Ansible inventory file:

4-Now enable the site.yml and group_vars files:

cp site.yml.sample site.yml

You need to copy all files within `group_vars` directory; omit the `.sample` part:
for f in *.sample; do cp "$f" "${f/.sample/}"; done
5-Open the file `group_vars/all.yml` for editing:

nano group_vars/all.yml

Uncomment the variable `ceph_origin` and replace `upstream` with `distro`:
ceph_origin: 'distro'

Uncomment and replace:
monitor_interface: eth0

journal_size: 5120
6-Choosing a scenario:
Open the file `group_vars/osds.yml` and uncomment and set to `true` the following variables:

osd_auto_discovery: true
journal_collocation: true
7- Any needed configs for ceph should be added to the file `group_vars/all.yml`.
Uncomment and change:

osd_pool_default_pg_num: 8
osd_pool_default_size: 1
Path to variables file:

+Adding Monitors (Jan. 4, 2017, 2:13 p.m.)

A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high availability, Ceph Storage Clusters typically run multiple Ceph Monitors so that the failure of a single Ceph Monitor will not bring down the Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority of monitors (i.e., 1, 2:3, 3:4, 3:5, 4:6, etc.) to form a quorum.

Add two Ceph Monitors to your cluster.
ceph-deploy mon add node2
ceph-deploy mon add node3
Once you have added your new Ceph Monitors, Ceph will begin synchronizing the monitors and form a quorum. You can check the quorum status by executing the following:

ceph quorum_status --format json-pretty
When you run Ceph with multiple monitors, you SHOULD install and configure NTP on each monitor host. Ensure that the monitors are NTP peers.

+Adding an OSD (Jan. 4, 2017, 2:08 p.m.)

1- mkdir /var/lib/ceph/osd/ceph-3

2- ceph-disk prepare /var/lib/ceph/osd/ceph-3

3- ceph-disk activate /var/lib/ceph/osd/ceph-3

4- Once you have added your new OSD, Ceph will begin rebalancing the cluster by migrating placement groups to your new OSD. You can observe this process with the ceph CLI:
ceph -w

You should see the placement group states change from active+clean to active with some degraded objects, and finally active+clean when migration completes. (Control-c to exit.)

+Storage Cluster (Jan. 3, 2017, 3:10 p.m.)

To purge the Ceph packages, execute: (Used for when you want to purge data)
ceph-deploy purge node1

If at any point you run into trouble and you want to start over, execute the following to purge the configuration:
ceph-deploy purgedata node1
ceph-deploy forgetkeys
1-Create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster:
mkdir my-cluster
cd my-cluster
2-Create the cluster:
ceph-deploy new node1

Using `ls` command, you should see a Ceph configuration file, a monitor secret keyring, and a log file for the new cluster.
3-Change the default number of replicas in the Ceph configuration file from 3 to 2 so that Ceph can achieve an active + clean state with just two Ceph OSDs. Add the following line under the [global] section:

osd pool default size = 2
osd_max_object_name_len = 256
osd_max_object_namespace_len = 64

These two last options are for EXT4; based on this link:
4-Install Ceph:
ceph-deploy install node1

The ceph-deploy utility will install Ceph on each node.
5-Add the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial

Once you complete the process, your local directory should have the following keyrings:

6-Add OSDs:
For fast setup, this quick start uses a directory rather than an entire disk per Ceph OSD Daemon.

for details on using separate disks/partitions for OSDs and journals.

Login to the Ceph Nodes and create a directory for the Ceph OSD Daemon.
ssh node2
sudo mkdir /var/local/osd0

ssh node3
sudo mkdir /var/local/osd1

Then, from your admin node, use ceph-deploy to prepare the OSDs.
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

Finally, activate the OSDs:
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
7-Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

ceph-deploy admin node1 node2

Login to nodes and ensure that you have the correct permissions for the ceph.client.admin.keyring.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph health

+Ceph Node Setup (Jan. 3, 2017, 2:55 p.m.)

1-Create a user on each Ceph Node.
2-Add sudo privileges for the user on each Ceph Node.
3-Configure your ceph-deploy admin node with password-less SSH access to each Ceph Node.
ssh-keygen and ssh-copy-id
4-Modify the ~/.ssh/config file of your ceph-deploy admin node so that it logs into Ceph Nodes as the user you created.
Host node1
Hostname node1
User root
Host node2
Hostname node2
User root
Host node3
Hostname node3
User root
5-Add to /etc/hosts: node1 node2 node3 node4
6-Change the hostname of each node to the ones from the earlier step (node1, node2, node3, ...):
nano /etc/hostname
reboot each node
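For step 5, each node name must resolve to its IP. The addresses below are placeholders (the note's real IPs were lost):

```
# /etc/hosts (placeholder addresses)
192.0.2.11  node1
192.0.2.12  node2
192.0.2.13  node3
192.0.2.14  node4
```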

+Acronyms (Jan. 1, 2017, 3:40 p.m.)

CRUSH: Controlled Replication Under Scalable Hashing
EBOFS: Extent and B-tree based Object File System
HPC: High-Performance Computing
MDS: MetaData Server
OSD: Object Storage Device
PG: Placement Group
PGP: Placement Group for Placement purpose
POSIX: Portable Operating System Interface for Unix
RADOS: Reliable Autonomic Distributed Object Store
RBD: RADOS Block Devices

+Ceph Deploy (Dec. 28, 2016, 12:51 p.m.)

The admin node must have password-less SSH access to the Ceph nodes. When ceph-deploy logs into a Ceph node as a user, that particular user must have passwordless sudo privileges.

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift. See Clock for details.

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the same NTP time server
For ALL Ceph Nodes perform the following steps:
sudo apt-get install openssh-server
Create a Ceph Deploy User:
The ceph-deploy utility must log into a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

We recommend creating a specific user for ceph-deploy on ALL Ceph nodes in the cluster. Please do NOT use “ceph” as the username. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because hackers typically use them with brute force hacks (e.g., root, admin, {productname}). The following procedure, substituting {username} for the username you define, describes how to create a user with passwordless sudo.

sudo useradd -d /home/{username} -m {username}
sudo passwd {username}


+Installation (Dec. 27, 2016, 3:57 p.m.)
1- wget -q -O- '' | sudo apt-key add -

2- echo deb $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

3- sudo apt-get install ceph ceph-deploy

+Definitions (Dec. 27, 2016, 1:10 p.m.)

Ceph is a storage technology.
A cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.
Clustering vs. Clouding:
Cluster differs from Cloud and Grid in that a cluster is a group of computers connected by a local area network (LAN), whereas cloud is more wide scale and can be geographically distributed. Another way to put it is to say that a cluster is tightly coupled, whereas a cloud is loosely coupled. Also, clusters are made up of machines with similar hardware, whereas clouds are made up of machines with possibly very different hardware configurations.
Ceph Storage Cluster:
A distributed object store that provides storage of unstructured data for applications.
Ceph Object Gateway:
A powerful S3- and Swift-compatible gateway that brings the power of the Ceph Object Store to modern applications.
Ceph Block Device:
A distributed virtual block device that delivers high-performance, cost-effective storage for virtual machines and legacy applications.
Ceph File System:
A distributed, scale-out filesystem with POSIX semantics that provides storage for legacy and modern applications.
RADOS:
A reliable, autonomous, distributed object store comprised of self-healing, self-managing intelligent storage nodes.
librados:
A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP.
RADOS Gateway (RGW):
A bucket-based REST gateway, compatible with S3 and Swift.
RBD:
A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver.
Ceph FS:
A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE.
pg_num = number of placement groups mapped to an OSD
Placement Groups (PGs):

Ceph maps objects to placement groups. Placement groups are shards or fragments of a logical object pool that place objects as a group into OSDs. Placement groups reduce the amount of per-object metadata when Ceph stores the data in OSDs. A larger number of placement groups (e.g., 100 per OSD) leads to better balancing.
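The commonly cited rule of thumb for choosing pg_num is (OSDs × 100) / replica count, rounded up to the next power of two. A small sketch (the function name is mine, not Ceph's):

```python
def suggested_pg_num(num_osds, pool_size, pgs_per_osd=100):
    """Heuristic pg_num: (OSDs * pgs_per_osd) / replicas,
    rounded up to the next power of two."""
    raw = (num_osds * pgs_per_osd) / pool_size
    power = 1
    while power < raw:
        power *= 2
    return power

# e.g. 9 OSDs with 3 replicas: raw value 300, next power of two is 512
print(suggested_pg_num(9, 3))  # 512
```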

+Media Queries (Feb. 9, 2016, 12:05 p.m.)

@media all and (max-width: 480px) { }

@media all and (min-width: 480px) and (max-width: 768px) { }

@media all and (min-width: 768px) and (max-width: 1024px) { }

@media all and (min-width: 1024px) { }


Responsive Grid Media Queries - 1280, 1024, 768, 480
1280-1024 - desktop (default grid)
1024-768 - tablet landscape
768-480 - tablet
480-less - phone landscape & smaller
@media all and (min-width: 1024px) and (max-width: 1280px) { }

@media all and (min-width: 768px) and (max-width: 1024px) { }

@media all and (min-width: 480px) and (max-width: 768px) { }

@media all and (max-width: 480px) { }

Foundation Media Queries

/* Small screens - MOBILE */
@media only screen { } /* Define mobile styles - Mobile First */

@media only screen and (max-width: 40em) { } /* max-width 640px, mobile-only styles, use when QAing mobile issues */

/* Medium screens - TABLET */
@media only screen and (min-width: 40.063em) { } /* min-width 641px, medium screens */

@media only screen and (min-width: 40.063em) and (max-width: 64em) { } /* min-width 641px and max-width 1024px, use when QAing tablet-only issues */

/* Large screens - DESKTOP */
@media only screen and (min-width: 64.063em) { } /* min-width 1025px, large screens */

@media only screen and (min-width: 64.063em) and (max-width: 90em) { } /* min-width 1024px and max-width 1440px, use when QAing large screen-only issues */

/* XLarge screens */
@media only screen and (min-width: 90.063em) { } /* min-width 1441px, xlarge screens */

@media only screen and (min-width: 90.063em) and (max-width: 120em) { } /* min-width 1441px and max-width 1920px, use when QAing xlarge screen-only issues */

/* XXLarge screens */
@media only screen and (min-width: 120.063em) { } /* min-width 1921px, xlarge screens */
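The pixel values in the comments above assume the browser default of 16px per em, so the conversion is just px / 16:

```python
def px_to_em(px, base=16):
    """Convert a pixel breakpoint to em, assuming a 16px root font size."""
    return px / base

print(px_to_em(640))   # 40.0 -> matches the 40em mobile breakpoint
print(px_to_em(1024))  # 64.0 -> matches the 64em tablet/desktop boundary
print(px_to_em(1440))  # 90.0 -> matches the 90em large-screen boundary
```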


/* Portrait */
@media screen and (orientation:portrait) { /* Portrait styles here */ }
/* Landscape */
@media screen and (orientation:landscape) { /* Landscape styles here */ }

/* CSS for iPhone, iPad, and Retina Displays */

/* Non-Retina */
@media screen and (-webkit-max-device-pixel-ratio: 1) { }

/* Retina */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min--moz-device-pixel-ratio: 1.5),
only screen and (min-device-pixel-ratio: 1.5) { }

/* iPhone Portrait */
@media screen and (max-device-width: 480px) and (orientation:portrait) { }

/* iPhone Landscape */
@media screen and (max-device-width: 480px) and (orientation:landscape) { }

/* iPad Portrait */
@media screen and (min-device-width: 481px) and (orientation:portrait) { }

/* iPad Landscape */
@media screen and (min-device-width: 481px) and (orientation:landscape) { }

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no" />


+Media Tag (Sept. 2, 2015, 4:44 p.m.)

@media (max-width: 767px) {
    #inner-coffee-machine > div > img {
        width: 30%;
        height: 18%;
    }

    #inner-coffee-machine > div > div h3 {
        font-size: 2.5vh;
        font-weight: bold;
    }

    #inner-coffee-machine > div > div h5 {
        font-size: 2vh;
    }

    #club-inner {
        display: inline-table;
    }

    #inner-coffee-machine > div > div {
        width: 100%;
    }
}

@media (min-width: 768px) and (max-width: 991px) { }

@media (min-width: 992px) and (max-width: 1199px) { }

@media (min-width: 1200px) { }


+Define new font (Sept. 1, 2015, 11:21 a.m.)

@font-face {
    font-family: nespresso;
    src: url("../fonts/nespresso.otf") format("opentype"),
         url("../fonts/nespresso.ttf") format("truetype");
}

@font-face {
    font-family: 'yekan';
    src: url(../fonts/yekan.eot) format("eot"),
         url(../fonts/yekan.woff) format("woff"),
         url(../fonts/yekan.ttf) format("truetype");
}

+CSS for different IE versions (July 27, 2015, 1:40 p.m.)


/* IE-6 and below only */
* html #div {
    height: 300px;
}

/* IE-7 only */
*+html #div {
    height: 300px;
}

/* IE-8 only */
#div {
    height: 300px\0/;
}

/* IE-7 & IE-8 */
#div {
    height: 300px\9;
}

/* IE-6 only */
#div {
    _height: 300px;
}

/* Hide from IE 6 and LOWER: */
#div {
    height/**/: 300px;
}

html > body #div {
    height: 300px;
}

+Fonts (July 13, 2015, 1:15 p.m.)

+white-space (July 9, 2015, 3:44 a.m.)

white-space: normal;
The text will wrap.
If you want to prevent the text from wrapping, you can apply:
white-space: nowrap;
If we want to force the browser to display line breaks and extra white space characters we can use:
white-space: pre;
If you want white space and breaks, but you need the text to wrap instead of potentially break out of its parent container:
white-space: pre-wrap;
white-space: pre-line;
Will break lines where they break in code, but extra white space is still stripped.

+Style Admin Interface in (April 18, 2018, 7:39 a.m.)

class NoteAdmin(admin.ModelAdmin):
    search_fields = ('title', 'note')
    list_filter = ('category',)

    class Media:
        css = {
            'all': ('admin/css/interface.css',)
        }

The path to "interface.css" is:


And finally, I couldn't make "nginx" recognize this file. To solve the problem I had to comment out the "location /static/admin/" block in the nginx config and run "collectstatic" in my project to gather all the admin static files together.


+Ajax and CSRF (April 22, 2018, 7:08 p.m.)

$.ajax({
    type: 'POST',
    url: $(this).attr('href'),
    data: {
        csrfmiddlewaretoken: '{{ csrf_token }}'
    },
    dataType: 'json',
    success: function (status) {
    },
    error: function () {
    }
});

+Django-2 Sample (April 29, 2018, 2:44 p.m.)

import os
import re

def gettext_noop(s):
return s

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

ROOT_URLCONF = 'mohsenhassani.urls'

DEBUG = True

ADMINS = [('Mohsen Hassani', '')]

ALLOWED_HOSTS.extend(['localhost', ''])

TIME_ZONE = 'Asia/Tehran'

USE_TZ = True


LANGUAGES = [('en', gettext_noop('English')),
('fa', gettext_noop('Persian'))]

USE_I18N = True

LOCALE_PATHS = [os.path.join(BASE_DIR, 'locale')]

USE_L10N = True

SERVER_EMAIL = 'report@mohsenhassani'

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mohsenhassanidb',
        'USER': 'root',
    }
}

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
            ],
        },
    },
]



SECRET_KEY = 'xqb&)90m*_!n3ovc$@%mo8!8!7j5d9o=8nm(iyw%#mzz&o1n6)'

MEDIA_ROOT = os.path.join(BASE_DIR, 'mohsenhassani', 'media/')
MEDIA_URL = '/media/'

STATIC_ROOT = os.path.join(BASE_DIR, 'mohsenhassani', 'static/')
STATIC_URL = '/static/'

FILE_UPLOAD_MAX_MEMORY_SIZE = 52428800 # i.e. 50 MB

WSGI_APPLICATION = 'mohsenhassani.wsgi.application'



AUTH_USER_MODEL = 'accounts.User'

LOGIN_URL = '/accounts/login/'

LOGIN_REDIRECT_URL = '/accounts/profile/'





+Send HTML Email with Attachment (April 30, 2018, 6:40 p.m.)

from django.core.mail import EmailMessage

email = EmailMessage('subject', body, from_email, recipient_list)

email.content_subtype = "html"

if data['attachment']:
    file_ = data['attachment']
    email.attach(file_.name, file_.read(), file_.content_type)



for attachment in request.FILES:
    if data[attachment]:
        file_ = data[attachment]
        email.attach(file_.name, file_.read(), file_.content_type)


+URL - Login Required & is_superuser (May 1, 2018, 11:56 a.m.)

from django.contrib.auth.decorators import login_required
from django.contrib.auth.decorators import user_passes_test

urlpatterns = [
    path('reports/', user_passes_test(lambda u: u.is_superuser)(
        login_required(report.reports)), name='reports'),
]

It seems "user_passes_test" already checks "login_required" somehow, so remove that decorator:

path('reports/', user_passes_test(lambda u: u.is_superuser)(report.reports), name='reports'),

+Database Functions, Aggregation, Annotations (June 16, 2018, 11:55 a.m.)

from django.db.models import F

OrgPayment.objects.update(shares=F('shares') / 70000)
Property.objects.filter(id=pid).update(views=F('views') + 1)


from django.db.models import Count



from django.db.models import Avg



from django.db.models import Avg, Count



Database Functions:


from django.db.models import Sum, Value
from django.db.models.functions import Coalesce

certificates_total_hours = reward_request.chosen_certificates.aggregate(total_hours=Coalesce(Sum('course_hours'), Value(0)))



# Get the display name as "name (goes_by)"

from django.db.models import CharField, Value as V
from django.db.models.functions import Concat

Author.objects.create(name='Margaret Smith', goes_by='Maggie')
author = Author.objects.annotate(
    screen_name=Concat('name', V(' ('), 'goes_by', V(')'),
                       output_field=CharField())).get()



Accepts a single text field or expression and returns the number of characters the value has. If the expression is null, then the length will also be null.

from django.db.models.functions import Length

Author.objects.create(name='Margaret Smith')
author = Author.objects.annotate(
    name_length=Length('name'), goes_by_length=Length('goes_by')).get()
print(author.name_length, author.goes_by_length)



Accepts a single text field or expression and returns the lowercase representation.

Usage example:

>>> from django.db.models.functions import Lower
>>> Author.objects.create(name='Margaret Smith')
>>> author = Author.objects.annotate(name_lower=Lower('name')).get()
>>> print(author.name_lower)
margaret smith



Returns a substring of length (length) from the field or expression starting at position pos. The position is 1-indexed, so the position must be greater than 0. If the length is None, then the rest of the string will be returned.

Usage example:

>>> # Set the alias to the first 5 characters of the name as lowercase
>>> from django.db.models.functions import Substr, Lower
>>> Author.objects.create(name='Margaret Smith')
>>> Author.objects.update(alias=Lower(Substr('name', 1, 5)))
>>> print(Author.objects.get(name='Margaret Smith').alias)
marga



Accepts a single text field or expression and returns the uppercase representation.

>>> from django.db.models.functions import Upper
>>> Author.objects.create(name='Margaret Smith')
>>> author = Author.objects.annotate(name_upper=Upper('name')).get()
>>> print(author.name_upper)
MARGARET SMITH


+Create directories if they don't exist (June 17, 2018, 6:09 p.m.)

import os

from django.conf import settings

avatar_path = '%s/images/avatars' % settings.MEDIA_ROOT
if not os.path.exists(avatar_path):
    os.makedirs(avatar_path)

+Serve media files in debug mode (April 15, 2019, 12:11 p.m.)

from django.conf import settings
from django.conf.urls.static import static

if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

+Save file path to Django ImageField (June 17, 2018, 7:10 p.m.)
avatar = models.ImageField(_('avatar'), upload_to='manager/images/avatars/', null=True, blank=True)

user.avatar = 'images/avatars/mohsen.png'
user.save()

+Forms - Validate Excel File (July 2, 2018, 10:53 a.m.)

from xlrd import open_workbook, XLRDError

from django import forms
from django.utils.translation import ugettext_lazy as _

class UploadExcelForm(forms.Form):
    file = forms.FileField(label=_('file'))

    def clean_file(self):
        try:
            open_workbook(file_contents=self.cleaned_data['file'].read())
        except XLRDError:
            raise forms.ValidationError(_('Please upload a valid excel file.'))
        return self.cleaned_data['file']

+Messages (July 6, 2018, 8:57 p.m.)

from django.contrib import messages

messages.success(request, _('The information was saved successfully.'))
return HttpResponseRedirect(reverse('url', args=(code,)))



{% if messages %}
    <ul class="messages">
        {% for message in messages %}
            <li {% if message.tags %} class="{{ message.tags }}" {% endif %}>{{ message }}</li>
        {% endfor %}
    </ul>
{% endif %}


{% if message.tags == 'success' %}

+QuerySet - Filter based on Text Length (July 16, 2018, 3:04 p.m.)

from django.db.models.functions import Length

invalid_username = Driver.objects.annotate(
    username_length=Length('username')).filter(username_length__lt=3)  # field and threshold are examples
+QuerySet - Duplicate objects based on a specific field (July 16, 2018, 3:16 p.m.)

from django.db.models import Count

duplicate_plate_number_ids = Driver.objects.values('plate_number').annotate(
    Count('plate_number')).filter(
    plate_number__count__gt=1).values_list('plate_number', flat=True)

+Bulk Insert / Bulk Create (Oct. 7, 2018, 11:07 a.m.)

entry_records = []

for i in range(2000):
    entry_records.append(Entry(headline='This is a test'))

Entry.objects.bulk_create(entry_records)

+Force files to open in the browser instead of downloading (Oct. 9, 2018, 8:48 a.m.)

Force browser that the file should be viewed in the browser:

Content-Type: application/pdf
Content-Disposition: inline; filename="filename.pdf"

To have the file downloaded rather than viewed:

Content-Type: application/pdf
Content-Disposition: attachment; filename="filename.pdf"
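A small helper (hypothetical, not part of the note) that builds the header value both ways:

```python
def content_disposition(filename, inline=True):
    """Build a Content-Disposition header value.

    inline=True  -> the browser tries to display the file
    inline=False -> the browser downloads it ("attachment")
    """
    kind = 'inline' if inline else 'attachment'
    return '%s; filename="%s"' % (kind, filename)

print(content_disposition('filename.pdf'))         # inline; filename="filename.pdf"
print(content_disposition('filename.pdf', False))  # attachment; filename="filename.pdf"
```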

+Database creation error when running django tests (April 13, 2019, 2:10 p.m.)

In case of having this error when running django tests:
Got an error creating the test database: permission denied to create database

Log in to psql shell and let your database user to create databases:
alter user my_user createdb;

+Find Model Relations (Oct. 17, 2018, 4:48 p.m.)

for field in [f for f in file._meta.get_fields() if not f.concrete]


model = field.related_model

model = type(instance)

# For deferred instances
model = instance._meta.proxy_for_model


app_label = model._meta.app_label

app_label = instance._meta.app_label


model_name = model.__name__


if field.get_internal_type() == 'ForeignKey':





ct = ContentType.objects.get_for_model(model)




+Pass JSON object data from view to template (April 13, 2019, 11:32 a.m.)


import json

data = json.dumps(the_dictionary)
return render(request, 'abc.html', {'data': data})



<script type="text/javascript">
{{ data|safe }}

+Form - Access Field type in template (Dec. 8, 2018, 12:23 p.m.)

{{ field.field.widget.input_type }}

+QuerySet - Group By (Dec. 14, 2018, 8:52 a.m.)

from django.db.models import Count

requests = Loan.objects.filter(loan__type='n',
                               status__status__in=['1', '2', '3'])
stats = requests.values('personnel__center__title').annotate(Count('id'))

{% for stat in stats %}
<td>{{ forloop.counter }}</td>
<td>{{ stat.personnel__center__title }}</td>
<td>{{ stat.id__count }}</td>
{% endfor %}

+Google reCAPTCHA API (Dec. 17, 2018, 12:55 p.m.)

1- Register your application in the reCAPTCHA admin:

2- After registering your website, you will be handed a Site key and a Secret key. The Site key will be used in the reCAPTCHA widget which is rendered within the page where you want to place it. The Secret key will be stored safely in the server, made available through the module.

3- Add the following tag to the head:
<script src=''></script>

4- Add the following tag to the form:
<div class="g-recaptcha" data-sitekey=""></div>

5- pip install requests

import requests
from django.conf import settings

if request.POST:
    recaptcha_response = request.POST.get('g-recaptcha-response')
    data = {
        'secret': settings.GOOGLE_RECAPTCHA_SECRET_KEY,  # setting name is an assumption
        'response': recaptcha_response
    }
    response = requests.post(
        '', data=data)
    result = response.json()

    if result['success']:


+Split QuerySets (Dec. 17, 2018, 10:26 p.m.)

def chunks(l, n):
    for i in range(0, len(l), n):
        yield l[i:i + n]


Usage Example:

excel_file = get_object_or_404(ExcelFile, id=eid)
job_list = list(chunks(excel_file.tempdata_set.all(), 250))
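The chunks() helper works on any sliceable sequence, not just a queryset converted to a list; a quick plain-list check:

```python
def chunks(l, n):
    # Same helper as in the note: yield successive n-sized slices of l.
    for i in range(0, len(l), n):
        yield l[i:i + n]

print(list(chunks([1, 2, 3, 4, 5], 2)))  # [[1, 2], [3, 4], [5]]
```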


+Get all related Django model objects (Dec. 30, 2018, 12:30 p.m.)

from django.db.models.deletion import Collector
from django.contrib.admin.utils import NestedObjects

user = User.objects.get(id=1)

collector = NestedObjects(using="default")
collector.collect([user])
print(collector.nested())

+Admin - Render checkboxes for m2m (Jan. 13, 2019, 10:06 a.m.)


from django.contrib.auth.admin import UserAdmin
from django.db import models
from django.forms import CheckboxSelectMultiple

class PersonnelAdmin(UserAdmin):
    formfield_overrides = {
        models.ManyToManyField: {'widget': CheckboxSelectMultiple}
    }

+Truncate a long string (Jan. 27, 2019, 1:47 a.m.)

data = data[:75]


import textwrap

textwrap.shorten("Hello world!", width=12)  # 'Hello world!'

textwrap.shorten("Hello world", width=10, placeholder="...")  # 'Hello...'


from django.utils.text import Truncator

value = Truncator(value).chars(75)


+Model Conventions (Feb. 8, 2019, 7:53 a.m.)

+CSRF Token in an external javascript file (March 16, 2019, 2:11 p.m.)

function getCookie(name) {
    var cookieValue = null;
    if (document.cookie && document.cookie != '') {
        var cookies = document.cookie.split(';');
        for (var i = 0; i < cookies.length; i++) {
            var cookie = cookies[i].trim();
            // Does this cookie string begin with the name we want?
            if (cookie.substring(0, name.length + 1) == (name + '=')) {
                cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
                break;
            }
        }
    }
    return cookieValue;
}

// Then call it like the following:
var csrftoken = getCookie('csrftoken');

+Forms - Validation (March 11, 2018, 4:29 p.m.)

class ReportForm1(forms.Form):
    src_server_ip = forms.CharField(required=False)
    dst_server_ip = forms.CharField(required=False)

    def clean(self):
        if self.cleaned_data['src_server_ip'] == '' and self.cleaned_data[
                'dst_server_ip'] == '':
            raise forms.ValidationError(
                'At least a source or destination is required.')

+URL Regex that accepts all characters (Jan. 20, 2018, 1:14 a.m.)


+Forms - Custom ModelChoiceField (Nov. 15, 2017, 3:54 p.m.)

class AppointmentChoiceField(forms.ModelChoiceField):
    def label_from_instance(self, appointment):
        return "%s" % appointment.get_time()


class IntCommaChoiceField(forms.ModelChoiceField):
    def label_from_instance(self, base_amount):
        return "%s" % intcomma(base_amount)


class LoanAmountEditForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields['base_amount'] = IntCommaChoiceField(
            queryset=LoanAmount.objects.all(),  # assumed; ModelChoiceField needs a queryset
            label=_('base amount'))

    class Meta:
        model = LoanAmount
        exclude = []


+JPG Validator (July 17, 2017, 10:23 a.m.)

from PIL import Image

from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _


def jpg_validator(certificate):
    file_type = Image.open(certificate).format  # assumed; the original line was cut off
    if file_type == 'jpg' or file_type == 'JPEG':
        return True
    raise ValidationError(_('The extension of certificate file should be jpg.'))

+Views - order_by sum of fields (June 10, 2017, 1:24 p.m.)

top_traffic_servers = Server.objects.extra(
    select={'sum': 'total_bytes_outgoing + total_bytes_incoming'},
    order_by=['-sum'])


If you need to do some filtering, you can add filter() to the end:

top_traffic_servers = Server.objects.extra(
    select={'sum': 'total_bytes_outgoing + total_bytes_incoming'},
    order_by=['-sum']).filter(active=True)  # any filter condition you need

+Use MySQL or MariaDB with Django (May 18, 2017, 10:11 p.m.)

1- Installation:

For MySQL:
sudo apt-get install python-pip python-dev mysql-server libmysqlclient-dev

For MariaDB:
sudo apt-get install python-pip python-dev mariadb-server libmariadbclient-dev libssl-dev

2- mysql -u root -p

3- CREATE DATABASE myproject CHARACTER SET utf8;

4- CREATE USER myprojectuser@localhost IDENTIFIED BY 'password';

5- GRANT ALL PRIVILEGES ON myproject.* TO myprojectuser@localhost;

6- FLUSH PRIVILEGES;

7- exit

8- In the project environment:
pip install mysqlclient
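After installing mysqlclient, point Django at the new database in The names below match the steps above (myproject, myprojectuser); adjust them for your project:

```python
# (config fragment)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'USER': 'myprojectuser',
        'PASSWORD': 'password',
        'HOST': 'localhost',
        'PORT': '',  # empty string = default port (3306)
    }
}
```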

+X-Frame-Options (Sept. 26, 2016, 9:05 p.m.)

Error in remote calling:
..does not permit cross-origin framing

There is a special header to allow or disallow showing a page inside an iframe: X-Frame-Options. It is used to prevent an attack called clickjacking. You can check Django's documentation about it.

Sites that want their content to be shown in an iframe simply don't set this header.

In your installation of Django this protection is turned on by default. If you want to allow embedding your content inside iframes, you can either disable the clickjacking protection in your settings for the whole site, or use per-view control with the django.views.decorators.clickjacking decorators.


Per-view control is the better option.



from django.views.decorators.clickjacking import xframe_options_exempt

@xframe_options_exempt
def home(request):
    ...
+Django Session Key (Sept. 20, 2016, 8:58 p.m.)

if not request.session.exists(request.session.session_key):
    request.session.create()
session_key = request.session.session_key

+Django REST Framework - Installation and Configuration (Sept. 20, 2016, 12:44 a.m.)

1- pip install djangorestframework django-filter markdown

2- Add 'rest_framework' to your INSTALLED_APPS setting.

3- If you're intending to use the browsable API, you'll probably also want to add REST framework's login and logout views. Add the following to your root

urlpatterns = [
    ...
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')),
]
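Global DRF behaviour is configured through a single REST_FRAMEWORK dict in The values below are illustrative, not from the original note; the django-filter backend path varies between django-filter versions:

```python
# (config fragment)
REST_FRAMEWORK = {
    # Use Django's model permissions; anonymous users get read-only access
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly',
    ],
    # django-filter integration (the package installed in step 1)
    'DEFAULT_FILTER_BACKENDS': [
        'django_filters.rest_framework.DjangoFilterBackend',
    ],
}
```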

+User Timezone (Sept. 5, 2016, 2:12 a.m.)

There are several plugins you can use, but there are reasons I need to avoid using them:

- They mainly require big .dat files which contain the timezones from all over the world.

- They use middleware to check the user's timezone, which is called on every request and can slow down page loads.

- They only work with templates (using template tags and filters).


The simplest way I have achieved is using a snippet which uses an online web service:

import requests
import pytz

from django.utils import timezone

# The URL of a geolocation web service goes here (the original endpoint was omitted)
user_time_zone = requests.get('').json()['time_zone']
timezone.activate(pytz.timezone(user_time_zone))

This snippet can be used in only the views which need to detect the user's timezone; no need for middleware.


If you ever needed to use it in every request, you can use it in a middleware.

Create a file (e.g. ``) and add this middleware to it:

import requests
import pytz

from django.utils import timezone

class UserTimezoneMiddleware(object):
    def process_request(self, request):
        # The URL of a geolocation web service goes here (the original endpoint was omitted)
        freegeoip_response = requests.get('')
        freegeoip_response_json = freegeoip_response.json()
        user_time_zone = freegeoip_response_json['time_zone']
        timezone.activate(pytz.timezone(user_time_zone))
        return None

Add the dotted path of the `UserTimezoneMiddleware` class to the `MIDDLEWARE_CLASSES` setting.

Now you can get the date/time based on the user's timezone:


+Timestamp from datetime field (Sept. 5, 2016, 1:05 a.m.)

You can do it in a template or in a view.

In a template:

{% now "U" %}
{{ value|date:"U" }}


In a view:

from django.utils.dateformat import format
format(mymodel.mydatefield, 'U')

# Or with the standard library:
import time
time.mktime(mymodel.mydatefield.timetuple())
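For a plain-Python check with no Django involved, the standard library can turn a datetime into a Unix timestamp; calendar.timegm interprets the time tuple as UTC, which avoids local-timezone surprises that time.mktime has:

```python
import calendar
from datetime import datetime

dt = datetime(1970, 1, 2, 0, 0, 0)

# Seconds since the epoch, treating dt as UTC
timestamp = calendar.timegm(dt.timetuple())
print(timestamp)  # 86400 (exactly one day after the epoch)
```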

+Manually create a POST/GET QueryDict from a dictionary (Aug. 27, 2016, 3:11 a.m.)

from django.http import QueryDict, MultiValueDict

get_data = {'p_type': request.GET['p_type'], 'facilities': request.GET.getlist('facilities')}
# Or everything at once (on Python 3 use .items() instead of .iteritems()):
get_data = dict(request.GET.items())

qdict = QueryDict('', mutable=True)
qdict.update(MultiValueDict({'facilities': get_data['facilities']}))
request.POST = qdict

+Django Dumpdata Field (Aug. 26, 2016, 3:43 a.m.)

1- pip install django-dumpdata-field

2- Add the app to your INSTALLED_APPS setting.

3- dumpdata_field facemelk.province --fields=id,province_name > /home/mohsen/Projects/facemelk/facemelk/fixtures/provinces_fields.json

+Ajax File Upload (Aug. 22, 2016, 10:20 p.m.)

<form action="{% url 'glasses:upload-face' %}" method="POST" id="upload-face-form" enctype="multipart/form-data"> {% csrf_token %}
<input type="file" id="upload-face" name="face" />
</form>


$('#upload-face').change(function() {
    var form = $('#upload-face-form');
    var form_data = new FormData(form[0]);
    $.ajax({
        type: form.attr('method'),
        url: form.attr('action'),
        data: form_data,
        contentType: false,
        cache: false,
        processData: false,
        dataType: 'json',
        success: function(image) {
            // ...
        },
        error: function(error) {
            // ...
        }
    });
});


def upload_face(request):
    if request.is_ajax():
        image = request.FILES.get('face')
        if image:
            face = open('face.jpg', 'wb')
            for chunk in image.chunks():
                face.write(chunk)
            face.close()
            return JsonResponse({'hi': 'hi'})
    return HttpResponseRedirect(reverse('home'))


+Django Grappelli (May 16, 2016, 4:04 a.m.)

Official Website:


1- Installation:
pip install django-grappelli

Add 'grappelli' to your INSTALLED_APPS, before 'django.contrib.admin':
INSTALLED_APPS = [
    'grappelli',
    'django.contrib.admin',
    ...
]

2- Add URL-patterns:
urlpatterns = [
    url(r'^grappelli/', include('grappelli.urls')),
    url(r'^admin/', include(,
    ...
]

3- Add the request context processor (needed for the Dashboard and the Switch User feature):
'context_processors': [
    ...
    'django.template.context_processors.request',
]

4- Collect the media files:
python collectstatic




Dashboard Setup:


Third Party Applications:

+Views - Receive and parse JSON data from a request using django-cors-headers (May 4, 2016, 3:19 a.m.)

import json

from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def update_note(request):
    request_json_data = bytes.decode(request.body)
    request_data = json.loads(request_json_data)


You need to install a plugin too:

1- pip install django-cors-headers

2- Add 'corsheaders' to your INSTALLED_APPS setting.

3- Add 'corsheaders.middleware.CorsMiddleware' to your middleware setting, as high as possible in the list.
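Which origins are then allowed is controlled by CORS_* settings in; a minimal sketch (setting names from the django-cors-headers documentation of that era; the whitelisted domain is a placeholder):

```python
# (config fragment)

# Development only: allow any origin to make cross-site requests
CORS_ORIGIN_ALLOW_ALL = True

# Or, more safely, whitelist specific origins instead (placeholder domain):
# CORS_ORIGIN_WHITELIST = (
#     'example.com',
# )
```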

+Internationalization (May 2, 2016, 10:56 p.m.)

from django.conf.urls.i18n import i18n_patterns

urlpatterns += i18n_patterns(
    # the URL patterns that should get a language prefix go here
)




And finally, in a file (e.g. a context processor), add some snippet like this:

def change_language(request):
    if '/admin/' not in request.get_full_path():
        if '/fa/' not in request.get_full_path():
            # ... (the original bodies were omitted; return the context you need here)
            return {}
    return {}


{% get_language_info for LANGUAGE_CODE as lang %}
{% get_language_info for "pl" as lang %}

You can then access the information:

Language code: {{ lang.code }}<br />
Name of language: {{ lang.name_local }}<br />
Name in English: {{ }}<br />
Bi-directional: {{ lang.bidi }}
Name in the active language: {{ lang.name_translated }}

There are also simple filters available for convenience:
{{ LANGUAGE_CODE|language_name }} (“German”)
{{ LANGUAGE_CODE|language_name_local }} (“Deutsch”)
{{ LANGUAGE_CODE|language_bidi }} (False)
{{ LANGUAGE_CODE|language_name_translated }} (“německy”, when active language is Czech)

<form action="{% url 'set_language' %}" method="post">{% csrf_token %}
<input name="next" type="hidden" value="{{ redirect_to }}" />
<select name="language">
{% get_current_language as LANGUAGE_CODE %}
{% get_available_languages as LANGUAGES %}
{% get_language_info_list for LANGUAGES as languages %}
{% for language in languages %}
<option value="{{ language.code }}"{% if language.code == LANGUAGE_CODE %} selected="selected"{% endif %}>
{{ language.name_local }} ({{ language.code }})
</option>
{% endfor %}
</select>
<input type="submit" value="Go" />
</form>

from django.utils import translation
user_language = 'fr'
request.session[translation.LANGUAGE_SESSION_KEY] = user_language

from django.http import HttpResponse

def hello_world(request, count):
if request.LANGUAGE_CODE == 'de-at':
return HttpResponse("You prefer to read Austrian German.")
return HttpResponse("You prefer to read another language.")


from django.conf import settings
from django.utils import translation

class ForceLangMiddleware:

    def process_request(self, request):
        # The original looked up 'LANGUAGE_CODE' with itself as the fallback, which is
        # redundant; a custom setting name such as FORCE_LANG was probably intended
        request.LANG = getattr(settings, 'FORCE_LANG', settings.LANGUAGE_CODE)
        request.LANGUAGE_CODE = request.LANG
        translation.activate(request.LANG)


+Admin - Access ModelForm properties (April 23, 2016, 9:09 a.m.)

def __init__(self, *args, **kwargs):
    initial = kwargs.get('initial', {})
    initial['material'] = 'Test'
    kwargs['initial'] = initial
    super(ArtefactForm, self).__init__(*args, **kwargs)


for name, field in self.fields.items():
    print(name)  # Prints field names
    print(field.label)  # Prints field labels

+View - Replace/Populate POST data (April 19, 2016, 11:38 a.m.)

If the request was the result of a Django form submission, then it is reasonable for POST to be immutable, to ensure the integrity of the data between the form submission and the form validation. However, if the request was not sent via a Django form submission, then making POST mutable is acceptable, as there is no form validation.

mutable = request.POST._mutable
request.POST._mutable = True
request.POST['some_data'] = 'test data'
request.POST._mutable = mutable


In an HttpRequest object, the GET and POST attributes are instances of django.http.QueryDict, a dictionary-like class customized to deal with multiple values for the same key. This is necessary because some HTML form elements, notably <select multiple>, pass multiple values for the same key.

The QueryDicts at request.POST and request.GET will be immutable when accessed in a normal request/response cycle. To get a mutable version you need to use .copy().


request.POST = request.POST.copy()
request.POST['some_key'] = 'some_value'



QueryDict implements all the standard dictionary methods because it’s a subclass of dictionary. Exceptions are outlined here:

QueryDict.__init__(query_string=None, mutable=False, encoding=None)[source]

Instantiates a QueryDict object based on query_string.

>>> QueryDict('a=1&a=2&c=3')
<QueryDict: {'a': ['1', '2'], 'c': ['3']}>

If query_string is not passed in, the resulting QueryDict will be empty (it will have no keys or values).

Most QueryDicts you encounter, and in particular those at request.POST and request.GET, will be immutable. If you are instantiating one yourself, you can make it mutable by passing mutable=True to its __init__().

Strings for setting both keys and values will be converted from encoding to unicode. If encoding is not set, it defaults to DEFAULT_CHARSET.


QueryDict.__getitem__(key)

Returns the value for the given key. If the key has more than one value, __getitem__() returns the last value. Raises django.utils.datastructures.MultiValueDictKeyError if the key does not exist. (This is a subclass of Python's standard KeyError, so you can stick to catching KeyError.)

QueryDict.__setitem__(key, value)[source]

Sets the given key to [value] (a Python list whose single element is value). Note that this, as other dictionary functions that have side effects, can only be called on a mutable QueryDict (such as one that was created via copy()).


QueryDict.__contains__(key)

Returns True if the given key is set. This lets you do, e.g., if "foo" in request.GET.

QueryDict.get(key, default=None)

Uses the same logic as __getitem__() above, with a hook for returning a default value if the key doesn’t exist.

QueryDict.setdefault(key, default=None)[source]

Just like the standard dictionary setdefault() method, except it uses __setitem__() internally.


QueryDict.update(other_dict)

Takes either a QueryDict or standard dictionary. Just like the standard dictionary update() method, except it appends to the current dictionary items rather than replacing them. For example:

>>> q = QueryDict('a=1', mutable=True)
>>> q.update({'a': '2'})
>>> q.getlist('a')
['1', '2']
>>> q['a'] # returns the last
'2'


QueryDict.items()

Just like the standard dictionary items() method, except this uses the same last-value logic as __getitem__(). For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.items()
[('a', '3')]


QueryDict.iteritems()

Just like the standard dictionary iteritems() method. Like QueryDict.items() this uses the same last-value logic as QueryDict.__getitem__().


QueryDict.iterlists()

Like QueryDict.iteritems() except it includes all values, as a list, for each member of the dictionary.


QueryDict.values()

Just like the standard dictionary values() method, except this uses the same last-value logic as __getitem__(). For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.values()
['3']


QueryDict.itervalues()

Just like QueryDict.values(), except an iterator.

In addition, QueryDict has the following methods:


QueryDict.copy()

Returns a copy of the object, using copy.deepcopy() from the Python standard library. This copy will be mutable even if the original was not.

QueryDict.getlist(key, default=None)

Returns the data with the requested key, as a Python list. Returns an empty list if the key doesn’t exist and no default value was provided. It’s guaranteed to return a list of some sort unless the default value provided is not a list.

QueryDict.setlist(key, list_)[source]

Sets the given key to list_ (unlike __setitem__()).

QueryDict.appendlist(key, item)[source]

Appends an item to the internal list associated with key.

QueryDict.setlistdefault(key, default_list=None)[source]

Just like setdefault, except it takes a list of values instead of a single value.


QueryDict.lists()

Like items(), except it includes all values, as a list, for each member of the dictionary. For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.lists()
[('a', ['1', '2', '3'])]


QueryDict.pop(key)[source]

Returns a list of values for the given key and removes them from the dictionary. Raises KeyError if the key does not exist. For example:

>>> q = QueryDict('a=1&a=2&a=3', mutable=True)
>>> q.pop('a')
['1', '2', '3']


QueryDict.popitem()[source]

Removes an arbitrary member of the dictionary (since there’s no concept of ordering), and returns a two value tuple containing the key and a list of all values for the key. Raises KeyError when called on an empty dictionary. For example:

>>> q = QueryDict('a=1&a=2&a=3', mutable=True)
>>> q.popitem()
('a', ['1', '2', '3'])


QueryDict.dict()

Returns dict representation of QueryDict. For every (key, list) pair in QueryDict, dict will have (key, item), where item is one element of the list, using same logic as QueryDict.__getitem__():

>>> q = QueryDict('a=1&a=3&a=5')
>>> q.dict()
{'a': '5'}


QueryDict.urlencode(safe=None)[source]

Returns a string of the data in query-string format. Example:

>>> q = QueryDict('a=2&b=3&b=5')
>>> q.urlencode()
'a=2&b=3&b=5'

Optionally, urlencode can be passed characters which do not require encoding. For example:

>>> q = QueryDict(mutable=True)
>>> q['next'] = '/a&b/'
>>> q.urlencode(safe='/')
'next=/a%26b/'
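The multi-value semantics above mirror what the standard library's urllib.parse does with query strings, which is a handy way to reason about QueryDict behaviour without a Django shell:

```python
from urllib.parse import parse_qs, urlencode

# parse_qs keeps every value for a repeated key, like QueryDict.getlist()
params = parse_qs('a=1&a=2&c=3')
print(params)  # {'a': ['1', '2'], 'c': ['3']}

# urlencode with doseq=True re-serializes multi-value keys
print(urlencode(params, doseq=True))  # a=1&a=2&c=3
```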

+Admin - Hide fields dynamically (April 11, 2016, 7:07 p.m.)

def get_fields(self, request, obj=None):
    fields = admin.ModelAdmin.get_fields(self, request)
    if settings.DEBUG:
        return fields
    return ('parent', 'name_en', 'name_fa', 'content_en', 'content_fa', 'ordering',
            'languages', 'header_image', 'project_thumbnail')

+Error ==> Permission denied when trying to access database after restore (migration) (April 10, 2016, 10:47 p.m.)

Enter the commands in postgresql shell:
psql mohsen_notesdb -c "GRANT ALL ON ALL TABLES IN SCHEMA public to mohsen_notes;"
psql mohsen_notesdb -c "GRANT ALL ON ALL SEQUENCES IN SCHEMA public to mohsen_notes;"
psql mohsen_notesdb -c "GRANT ALL ON ALL FUNCTIONS IN SCHEMA public to mohsen_notes;"

+Admin - Resize Image Signal (April 5, 2016, 11:51 a.m.)

Create a file (e.g. ``) with this content:

from PIL import Image

from django.conf import settings

def resize_image(sender, instance, created, **kwargs):
    if instance.position == 't':
        width = settings.TOP_ADS_WIDTH
        height = settings.TOP_ADS_HEIGHT
    else:
        width = settings.BOTTOM_ADS_WIDTH
        height = settings.BOTTOM_ADS_HEIGHT

    img =
    if img.mode != 'RGB':
        img = img.convert('RGB')
    img.resize((width, height), Image.ANTIALIAS).save(instance.image.path, format='JPEG')


After the model definition in your file, import `resize_image` and:
models.signals.post_save.connect(resize_image, sender=TheModel)

+Admin - Hide model in admin dynamically (Feb. 29, 2016, 9:50 a.m.)

class AccessoryCategoryAdmin(admin.ModelAdmin):
    def get_model_perms(self, request):
        perms = admin.ModelAdmin.get_model_perms(self, request)
        if request.user.username == settings.SECOND_ADMIN:
            return {}
        return perms

+Admin - Display readonly fields based on conditions (Feb. 28, 2016, 3:02 p.m.)

class AccessoryAdmin(admin.ModelAdmin):
    list_display = ('name', 'category', 'price', 'quantity', 'ordering', 'display')
    list_filter = ('category', 'display')

    def get_readonly_fields(self, request, obj=None):
        if request.user.username == settings.SECOND_ADMIN:
            readonly_fields = ('category', 'name', 'image', 'price', 'main_image', 'description', 'ordering', 'url_name')
            return readonly_fields
        return self.readonly_fields

+Form - How to add a star after fields (Feb. 27, 2016, 10:47 p.m.)

Add the `required_css_class` property to Form class like this:

class ProfileForm(forms.Form):
    required_css_class = 'required'

    first_name = forms.CharField(label=_('first name'), max_length=30)
    last_name = forms.CharField(label=_('last name'), max_length=30)
    cellphone_number = forms.CharField(label=_('cellphone'), max_length=20)

Then use the property `label_tag` of form fields to set the titles:
{{ form.first_name.errors }} {{ form.first_name.label_tag }}
{{ form.last_name.errors }} {{ form.last_name.label_tag }}
{{ form.cellphone_number.errors }} {{ form.cellphone_number.label_tag }}

Use it in CSS to style it or add an asterisk:
<style type="text/css">
.required:after {
    content: " *";
    color: red;
}
</style>

+Decorators (Jan. 29, 2016, 4:34 p.m.)

Create a python file named `` in the app and write your decorators as follows:

def login_required(view_func):
    def wrap(request, *args, **kwargs):
        if request.user.is_authenticated():
            return view_func(request, *args, **kwargs)
        return render(request, 'issue_tracker/access_denied.html',
                      {'login_required': 'yes'})
    return wrap


from django.utils.functional import wraps

def can_participate_poll(view):
    @wraps(view)
    def inner(request, *args, **kwargs):
        print(kwargs)  # Prints {'qnum': 11, 'qid': 23}
        return view(request, *args, **kwargs)
    return inner

This will print the kwargs which are passed to the view.

@can_participate_poll
def poll_view(request, qid, qnum):
    ...

from django.contrib.auth.decorators import user_passes_test

@user_passes_test(lambda u: u.is_superuser)
def my_view(request):
    ...

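The decorators above all share one generic shape; a minimal plain-Python sketch (the names are illustrative, and functools.wraps preserves the wrapped view's name and docstring exactly as django.utils.functional.wraps does):

```python
import functools

def require_flag(view_func):
    """Call the view only when the first argument is truthy (a stand-in for request checks)."""
    @functools.wraps(view_func)
    def wrap(flag, *args, **kwargs):
        if flag:
            return view_func(flag, *args, **kwargs)
        return 'access denied'
    return wrap

@require_flag
def my_view(flag):
    """A pretend view."""
    return 'ok'

print(my_view(True))     # ok
print(my_view(False))    # access denied
print(my_view.__name__)  # my_view (thanks to functools.wraps)
```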

+Admin - Change Header Title (Jan. 14, 2016, 8:44 p.m.)

In the main file: = _('YouStone Administration')

+Change app name for admin (Jan. 27, 2016, 11:51 p.m.)

1- Create a python file named `` in the app:

from django.apps import AppConfig
from django.utils.translation import ugettext_lazy as _

class CourseConfig(AppConfig):
    name = 'course'
    verbose_name = _('course')

2- Edit the `` file within the app:
default_app_config = 'course.apps.CourseConfig'

+Save File/Image (Dec. 1, 2015, 3:16 p.m.)

import uuid
from PIL import Image as PILImage
import imghdr
import os

from django.conf import settings

from manager.home.models import Image

def save_image(img_file, width=0, height=0):
    # Generate a random image name
    img_name = uuid.uuid4().hex + '.' +'.')[-1]

    # Saving the picture on disk
    img = open(settings.IMG_ROOT + img_name, 'wb')
    for chunk in img_file.chunks():
        img.write(chunk)

    img = open(settings.IMG_ROOT + img_name, 'rb')
    # Is the saved image a valid image file!?
    if not imghdr.what(img) or imghdr.what(img).lower() not in ['jpg', 'jpeg', 'gif', 'png']:
        return {'is_image': False}
    if width or height:
        # Resizing the image
        pil_img =

        if pil_img.mode != 'RGB':
            pil_img = pil_img.convert('RGB')
        pil_img.resize((width, height), PILImage.ANTIALIAS).save(settings.IMG_ROOT + img_name, format='JPEG')

    # Saving the image location on the database
    img = Image.objects.create(name=img_name)
    return {'is_image': True, 'image': img}

def create_unique_file_name(path, file_name):
    while os.path.exists(path + file_name):
        if '.' in file_name:
            file_name = file_name.replace('.', '_.', -1)
        else:
            file_name += '_'

    return file_name

+Custom Middleware Class (Nov. 21, 2015, 10:39 p.m.)

Create a file (e.g. ``) in a module and add your middleware like this:

from django.shortcuts import render

from nespresso.models import Settings

class UnderConstruction:
    def process_request(self, request):
        settings_ = Settings.objects.all()
        if settings_ and settings_[0].under_construction:
            return render(request, 'nespresso/under_construction.html')

After defining a middleware, add its dotted path to the middleware setting (MIDDLEWARE_CLASSES in older Django versions).
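Registering it might look like this in; the dotted path assumes the file above lives in a nespresso/ module, so adjust it to your layout. Note this process_request style targets the old-style middleware API (Django <= 1.9, MIDDLEWARE_CLASSES); Django 1.10+ uses the MIDDLEWARE setting with a callable-based API:

```python
# (config fragment) — Django <= 1.9 style
MIDDLEWARE_CLASSES = [
    'django.middleware.common.CommonMiddleware',
    # ...
    'nespresso.middleware.UnderConstruction',  # hypothetical dotted path
]
```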

+Add Action Form to Action (Oct. 13, 2015, 10:48 a.m.)

from django.contrib.admin.helpers import ActionForm
from django.contrib import messages

class ChangeMembershipTypeForm(ActionForm):
    MEMBERSHIP_TYPE = (
        ('1', _('Gold')),
        ('2', _('Silver')),
        ('3', _('Bronze')),
        ('4', _('Basic'))
    )
    membership_type = forms.ChoiceField(choices=MEMBERSHIP_TYPE, label=_('membership type'), required=False)

class CompanyAdmin(admin.ModelAdmin):
    action_form = ChangeMembershipTypeForm
    actions = ['change_membership_type']

    def change_membership_type(self, request, queryset):
        membership_type = request.POST['membership_type']
        queryset.update(membership_type=membership_type)
        self.message_user(request, _('Successfully updated membership type for selected rows.'), messages.SUCCESS)
    change_membership_type.short_description = _('Change Membership Type')

+Admin - Hide action (Oct. 8, 2015, 10:56 a.m.)

class MyAdmin(admin.ModelAdmin):

    def has_delete_permission(self, request, obj=None):
        return False

    def get_actions(self, request):
        actions = super(MyAdmin, self).get_actions(request)
        if 'delete_selected' in actions:
            del actions['delete_selected']
        return actions


Or based on a condition:

    def get_actions(self, request):
        actions = admin.ModelAdmin.get_actions(self, request)
        if request.user.username == settings.SECOND_ADMIN:
            return []
        return actions

+Model - Disable the Add and / or Delete action for a specific model (March 10, 2016, 11:02 p.m.)

def has_add_permission(self, request):
    # Note: delegate to has_add_permission on the parent, not has_delete_permission
    perms = admin.ModelAdmin.has_add_permission(self, request)
    if request.user.username == settings.SECOND_ADMIN:
        return False  # disable for this user (the original final return was omitted)
    return perms

def has_delete_permission(self, request, obj=None):
    perms = admin.ModelAdmin.has_delete_permission(self, request)
    if request.user.username == settings.SECOND_ADMIN:
        return False
    return perms

+URLS - Redirect (Oct. 6, 2015, 11:27 a.m.)

from django.views.generic import RedirectView

url(r'^$', RedirectView.as_view(url='/online-calls/'), name='home'),

+Send HTML email using send_mail (Sept. 28, 2015, 4:48 p.m.)

from django.template import loader
from django.core.mail import send_mail

html = loader.render_to_string('nespresso/admin_order_notification.html', {'order': order})
send_mail('Nespresso New Order from - %s' % order.customer.user.get_full_name(),
          '',  # plain-text body
          settings.DEFAULT_FROM_EMAIL,  # or any from-address you use
          list(OrderingEmail.objects.all().values_list('email', flat=True)),
          html_message=html)

+Admin - Many to Many Inline (Sept. 28, 2015, 10:23 a.m.)

class OrderInline(admin.TabularInline):
    model = Order.items.through

class OrderItemAdmin(admin.ModelAdmin):
    inlines = [OrderInline]

class OrderAdmin(admin.ModelAdmin):
    list_display = ('customer', 'get_order_url',)
    exclude = ('items',)
    inlines = [OrderInline], OrderAdmin)

+Change list display link in django admin (Sept. 27, 2015, 5:47 p.m.)

In the file:

class Order(models.Model):
    customer = models.ForeignKey(Customer, null=True, on_delete=models.SET_NULL)
    total_price = models.PositiveIntegerField()
    items = models.ManyToManyField(OrderItem)
    date_time = models.DateTimeField(default=now)

    def __str__(self):
        return '%s' % self.customer

    def get_order_url(self):
        # e.g. link text built from the order's fields (the original arguments were omitted)
        return '<a href="%s" target="_blank">%s - %s</a>' % (
            reverse('customer:order', args=(,)), self.customer, self.date_time)
    # In django prior to version 2.0:
    get_order_url.allow_tags = True

    # In django after version 2.0, wrap the returned string with mark_safe instead:
    # from django.utils.safestring import mark_safe  # At the top of your file
    # mark_safe('<a href="#"></a>')


And then in the file:

class OrderAdmin(admin.ModelAdmin):
list_display = ('get_order_url',)

+Admin - Override User Form (Sept. 15, 2015, 2:13 p.m.)

from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.forms import UserChangeForm, UserCreationForm
from django import forms

from .models import Supervisor

class SupervisorChangeForm(UserChangeForm):
    class Meta(UserChangeForm.Meta):
        model = Supervisor

class SupervisorCreationForm(UserCreationForm):
    class Meta(UserCreationForm.Meta):
        model = Supervisor

    def clean_username(self):
        username = self.cleaned_data['username']
        try:
            Supervisor.objects.get(username=username)
        except Supervisor.DoesNotExist:
            return username
        raise forms.ValidationError(self.error_messages['duplicate_username'])

class SupervisorAdmin(UserAdmin):
    form = SupervisorChangeForm
    add_form = SupervisorCreationForm
    fieldsets = (
        (None, {'fields': ('username', 'password')}),
        ('Personal info', {'fields': ('first_name', 'last_name', 'email')}),
        ('Permissions', {'fields': ('is_active',)}),
        (None, {'fields': ('allowed_online_calls',)}),
    )
    exclude = ['user_permission'], SupervisorAdmin)


If you need to override the form fields:

class SupervisorChangeForm(UserChangeForm):

    def __init__(self, *args, **kwargs):
        super(UserChangeForm, self).__init__(*args, **kwargs)
        self.fields['allowed_online_calls'] = forms.ModelMultipleChoiceField(
            queryset=...)  # the queryset/options were omitted in the original note

    class Meta(UserChangeForm.Meta):
        model = Supervisor

+Ajax (Aug. 22, 2015, 3:54 p.m.)

def delete_order(request, p_type, pid):
    if request.is_ajax():
        return JsonResponse({'orders_length': len(request.session['orders']),
                             'total_price': request.session['orders_total_price'],
                             'status': 'deleted'})

    return HttpResponse('rejected', content_type='text/plain')


$('#send-message-form').submit(function(e) {
    e.preventDefault();
    $.ajax({
        type: 'POST',
        url: $(this).attr('action'),
        data: $(this).serialize(),
        dataType: 'json',
        success: function(status) {
            // ...
        },
        error: function() {
            // ...
        }
    });
});

+Models - Ranges of IntegerFields (Aug. 21, 2015, 10:22 p.m.)

BigIntegerField: a 64-bit integer, much like an IntegerField except that it is guaranteed to fit numbers from -9223372036854775808 to 9223372036854775807.

IntegerField: values from -2147483648 to 2147483647 are safe in all databases supported by Django.

PositiveIntegerField: like an IntegerField, but must be either positive or zero (0). Values from 0 to 2147483647 are safe in all databases supported by Django. The value 0 is accepted for backward compatibility reasons.

PositiveSmallIntegerField: like a PositiveIntegerField, but only allows values under a certain (database-dependent) point. Values from 0 to 32767 are safe in all databases supported by Django.

SmallIntegerField: like an IntegerField, but only allows values under a certain (database-dependent) point. Values from -32768 to 32767 are safe in all databases supported by Django.
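Those bounds are simply the signed 16-, 32- and 64-bit integer ranges, which you can verify in plain Python:

```python
# Signed n-bit integers span [-2**(n-1), 2**(n-1) - 1]
small_min, small_max = -2**15, 2**15 - 1  # SmallIntegerField
int_min, int_max = -2**31, 2**31 - 1      # IntegerField
big_min, big_max = -2**63, 2**63 - 1      # BigIntegerField

print(small_min, small_max)  # -32768 32767
print(int_min, int_max)      # -2147483648 2147483647
print(big_min, big_max)      # -9223372036854775808 9223372036854775807
```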

+Admin - Adding Action to Export/Download CSV file (Aug. 24, 2015, 1:04 p.m.)

class VirtualOfficeAdmin(admin.ModelAdmin):
    actions = ['download_csv']
    list_display = ('persian_name', 'english_name', 'office_type', 'active')
    list_filter = ('office_type', 'active')

    def download_csv(self, request, queryset):
        import csv
        import io
        from django.http import HttpResponse
        from django.utils.encoding import smart_str
        from django.utils.translation import ugettext

        f = io.StringIO()  # StringIO.StringIO() on Python 2
        writer = csv.writer(f)
        writer.writerow(
            ["owner", "office type", "persian name", "english name", "cellphone number", "phone number", "address"])
        for s in queryset:
            owner = smart_str(s.owner.get_full_name())
            persian_name = smart_str(s.persian_name)

            # Office Type
            office_type = s.office_type
            if office_type == 're':
                office_type = smart_str(ugettext('Real Estate'))
            elif office_type == 'en':
                office_type = smart_str(ugettext('Engineer'))
            elif office_type == 'ar':
                office_type = smart_str(ugettext('Architect'))

            writer.writerow(
                [owner, office_type, persian_name, s.english_name, '09' + s.owner.username, s.phone_number, s.address])
        response = HttpResponse(f.getvalue(), content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename=stat-info.csv'
        return response

    download_csv.short_description = _("Download CSV file for selected stats.")

from django.contrib.admin.helpers import ActionForm
from django import forms
from django.utils.translation import ugettext_lazy as _
from django.contrib import messages

class ChangeMembershipTypeForm(ActionForm):
    MEMBERSHIP_TYPE = (
        ('1', _('Gold')),
        ('2', _('Silver')),
        ('3', _('Bronze')),
        ('4', _('Basic'))
    )
    membership_type = forms.ChoiceField(choices=MEMBERSHIP_TYPE, label=_('membership type'), required=False)

class CompanyAdmin(admin.ModelAdmin):
    action_form = ChangeMembershipTypeForm
    actions = ['change_membership_type']

    def change_membership_type(self, request, queryset):
        membership_type = request.POST['membership_type']
        queryset.update(membership_type=membership_type)
        self.message_user(request, _('Successfully updated membership type for %d rows') % (queryset.count(),),
                          messages.SUCCESS)
    change_membership_type.short_description = _('Change Membership Type')
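The in-memory CSV pattern used by download_csv can be exercised without Django at all; io.StringIO plays the role of the response body (Python 3 — on Python 2, StringIO.StringIO is the equivalent). Note csv.writer terminates rows with \r\n by default:

```python
import csv
import io

f = io.StringIO()
writer = csv.writer(f)
writer.writerow(["owner", "office type"])
writer.writerow(["Mohsen", "Real Estate"])

# getvalue() gives the text you would hand to HttpResponse
print(repr(f.getvalue()))  # 'owner,office type\r\nMohsen,Real Estate\r\n'
```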

+Custom Template Tags & FIlters (April 6, 2016, 2:30 p.m.)

1- Create a module named `templatetags` in an app.

2- Create a py file with a desired name. (I usually choose the app name for this python file name)

3- Write the methods you need, in the python file.

4- There is no need to introduce these methods or files in ``.


================= Template Filters Examples =================

from django.template import Library

register = Library()

def trim_value(value):
    value = str(value)
    if value.endswith('.0'):
        return value.replace('.0', '')
    return value

def get_decimal(value):
    if value:
        import decimal
        return str(decimal.Decimal('{0:.4f}'.format(value)))
    return '0'

def get_minutes(total_seconds):
    if total_seconds:
        return round(total_seconds / 60, 2)
    return 0

def get_acd(request):
    if request:
        minutes = get_minutes(request.session['total_seconds'])
        if minutes:
            return round(minutes / request.session['total_calls'], 2)
        return 0
    return 0

def round_values(value, digit):
    if digit and digit.isdigit():
        return round(value, int(digit))
    return value

def calculate_currency_rate(value, invoice):
    from decimal import Decimal
    if invoice.rate_currency:
        return round(Decimal(value) * Decimal(invoice.rate), 2)
    return value


================= Template Tags Examples =================

Important Hint:
You can return anything you like from a tag, including a queryset. However, you can't use a tag inside the for tag; you can only use a variable there (or a variable passed through a filter).

from django.template import Library, Node, TemplateSyntaxError, Variable

from youstone.models import Ad

register = Library()

class AdsNode(Node):
    def __init__(self, usage, position, province):
        self.usage, self.position, self.province = Variable(usage), Variable(position), Variable(province)

    def render(self, context):
        usage = self.usage.resolve(context)
        position = self.position.resolve(context)
        province = self.province.resolve(context)
        ads = Ad.objects.filter(active=True, usage=usage)
        if position:
            ads = ads.filter(position=position)

        if province:
            ads = ads.filter(province=province)

        context['ads'] = ads

        return ''

def get_ads(parser, token):
    try:
        tag_name, usage, position, province, _as, var_name = token.split_contents()
    except ValueError:
        raise TemplateSyntaxError(
            'get_ads takes 4 positional arguments but %s were given.' % len(token.split_contents()))

    if _as != 'as':
        raise TemplateSyntaxError('get_ads syntax must be "get_ads <usage> <position> <province> as <var_name>."')

    return AdsNode(usage, position, province)


Then you can use the template tag like this in the template:
{% get_ads usage position province as ads %}
{% for ad in ads %}

{% endfor %}


+Resize Image (Aug. 9, 2015, 10:34 p.m.)

Create a python module named `` and copy & paste this snippet:


from PIL import Image

from django.conf import settings

def resize_image(sender, instance, created, **kwargs):
    width = settings.SLIDER_WIDTH
    height = settings.SLIDER_HEIGHT

    img =
    if img.mode != 'RGB':
        img = img.convert('RGB')
    img.resize((width, height), Image.ANTIALIAS).save(instance.image.path, format='JPEG')

Note that resize() returns a resized copy of an image. It doesn't modify the original.
So do not write code like this, which resizes a copy, discards it, and saves the original unchanged:
img.resize((width, height), Image.ANTIALIAS), format='JPEG')


In the settings:
# Slider Image Size
SLIDER_WIDTH = ...
SLIDER_HEIGHT = ...


from resize_image import resize_image

class Slider(models.Model):
    ...

models.signals.post_save.connect(resize_image, sender=Slider)


+Extending User Model using OneToOne relationship (Aug. 5, 2015, 4:43 p.m.)

from django.db.models.signals import post_save
from django.conf import settings

class Customer(models.Model):
    user = models.OneToOneField(settings.AUTH_USER_MODEL, unique=True, primary_key=True)

def create_customer(sender, instance, created, **kwargs):
    if created:
        Customer.objects.create(user=instance)

post_save.connect(create_customer, sender=settings.AUTH_USER_MODEL)

+Admin - Overriding admin ModelForm (Nov. 30, 2015, 3:49 p.m.)

class MachineCompareForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super(MachineCompareForm, self).__init__(*args, **kwargs)
        self.model_fields = [['field_%s' %, title.feature,] for title in CompareTitle.objects.all()]
        for field in self.model_fields:
            self.base_fields[field[0]] = forms.CharField(max_length=400, label='%s' % field[1], required=False)
            self.fields[field[0]] = forms.CharField(max_length=400, label='%s' % field[1], required=False)
            feature = CompareFeature.objects.filter(, feature=field[2])
            if feature:
                self.base_fields[field[0]].initial = feature[0].value
                self.fields[field[0]].initial = feature[0].value

    def save(self, commit=True):
        instance = super(MachineCompareForm, self).save(commit=False)
        for field in self.model_fields:
            if CompareFeature.objects.filter(machine=self.cleaned_data['machine'], feature=field[2]):
                CompareFeature.objects.filter(machine=self.cleaned_data['machine'], feature=field[2]).update(

        if commit:
        return instance

    class Meta:
        model = MachineCompare
        exclude = []

class MachineCompareAdmin(admin.ModelAdmin):
    form = MachineCompareForm

    def get_form(self, request, obj=None, **kwargs):
        return MachineCompareForm


class SpecialPageAdmin(admin.ModelAdmin):
list_display = ('company', 'url_name', 'active',)
search_fields = ('company__name', 'url_name')
form = SpecialPageForm

def get_form(self, request, obj=None, **kwargs):
return SpecialPageForm

class SpecialPageForm(forms.ModelForm):

def __init__(self, *args, **kwargs):
super(SpecialPageForm, self).__init__(*args, **kwargs)
for i in range(1, 16):
self.fields['image-%s' % i] = forms.ImageField(label='%s %s' % (_('Image'), i))
self.base_fields['image-%s' % i] = forms.ImageField(label='%s %s' % (_('Image'), i))

class Meta:
model = SpecialPage
exclude = []


+Model - Overriding delete method in model (Nov. 28, 2015, 12:29 p.m.)

from django.db.models.signals import pre_delete
from django.dispatch.dispatcher import receiver

@receiver(pre_delete, sender=MyModel)
def _mymodel_delete(sender, instance, **kwargs):
    print('deleting')

+Union of querysets (July 20, 2015, 5:14 p.m.)

import itertools

# Querysets of the same model can be combined with the | operator:
records = query1 | query2
# Querysets of different models can be chained into a single iterable:
result = itertools.chain(qs1, qs2, qs3, qs4)
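A quick illustration of the chaining approach, with plain lists standing in for querysets:

```python
from itertools import chain

qs1 = ['ad1', 'ad2']
qs2 = ['post1']

# chain() is lazy; wrap it in list() if you need len() or indexing.
combined = list(chain(qs1, qs2))
print(combined)  # ['ad1', 'ad2', 'post1']
```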

+Views - Concatenating querysets and converting to JSON (July 17, 2015, 9:05 p.m.)

from itertools import chain

combined = list(chain(collectionA, collectionB))
json = serializers.serialize('json', combined)


final_queryset = (queryset1 | queryset2)

+Template - nbsp template tag (Replace usual spaces in string by non breaking spaces) (July 9, 2015, 2:45 a.m.)

from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.filter
def nbsp(value):
    return mark_safe("&nbsp;".join(value.split(' ')))
{% load nbsp %}

{{ user.full_name|nbsp }}


{{ note.note|nbsp|linebreaksbr }}
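The string transformation itself is independent of Django and can be verified directly (mark_safe is omitted here since it only affects HTML escaping):

```python
def nbsp(value):
    # Replace ordinary spaces with non-breaking spaces.
    return "&nbsp;".join(value.split(' '))

print(nbsp("John Ronald Tolkien"))  # John&nbsp;Ronald&nbsp;Tolkien
```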

+Views - Delete old uploaded file/image before saving the new one (July 8, 2015, 8:24 p.m.)

import os
from django.conf import settings

try:
    os.remove(settings.BASE_DIR +
except (OSError, IOError):
    pass

+Admin - list_display with a callable (Jan. 3, 2016, 10:17 a.m.)

class ExcelFile(models.Model):
file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])
companies = models.ManyToManyField(Company, verbose_name=_('companies'), blank=True)
business = models.ForeignKey(BusinessTitle, verbose_name=_('business'))

def __str__(self):
return '%s' %

def get_file_name(self):
get_file_name.short_description = _('File Name')
class ExcelFileAdmin(admin.ModelAdmin):
list_display = ['get_file_name', 'business']
def change_order(self):
return '<a href="review/">%s</a>' % _('Edit Order')
change_order.short_description = _('Edit Order')
change_order.allow_tags = True

+Admin - Hide fields (July 8, 2015, 1:31 p.m.)

from django.contrib import admin

from .models import ExcelFile

class ExcelFileAdmin(admin.ModelAdmin):
    exclude = ['companies'], ExcelFileAdmin)

+Model - Validators (Jan. 28, 2016, 12:03 a.m.)

import xlrd

from django.core.exceptions import ValidationError

def validate_excel_file(file):
    try:
        xlrd.open_workbook(file_contents=
    except xlrd.XLRDError:
        raise ValidationError(_('%s is not an Excel File') %

class ExcelFile(models.Model):
excel_file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])

+Admin - Allow only one instance of object to be created (July 8, 2015, 12:41 p.m.)

def validate_only_one_instance(obj):
    model = obj.__class__
    if model.objects.count() > 0 and != model.objects.get().id:
        raise ValidationError(_("Can only create 1 %s instance") % model.__name__)

class Settings(models.Model):
banner = models.ImageField(_('banner'), upload_to='images/machines/settings',
help_text=_('The required image size is 960px by 250px.'))

def __str__(self):
return '%s' % _('Settings')

def clean(self):
    validate_only_one_instance(self)
--------------------------- ANOTHER ONE ---------------------------------------------
class ExcelFile(models.Model):
excel_file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])
companies = models.ManyToManyField(Company, verbose_name=_('companies'), blank=True)
business = models.ForeignKey(BusinessTitle, verbose_name=_('business'))

def __str__(self):
return '%s' %

def clean(self):
    model = self.__class__
    validation_error = _("Can only create 1 %s instance") % model.__name__
    business = model.objects.filter(
    # If the user is updating/editing an object
    if business and != business[0].pk:
        raise ValidationError(validation_error)
    # If the user is inserting/creating an object
    if not and business:
        raise ValidationError(validation_error)

+Errors (Aug. 13, 2015, 12:05 a.m.)

_imagingft C module is not installed:
I got this error when django-simple-captcha tries to load the image.

1-apt-get install libfreetype6-dev
2-pip uninstall pillow
3-pip install pillow
4-restart the project

If you still get the same error, you need to check whether the file was even created at all:
1- sudo updatedb
2- locate _imagingft

If the file exists (probably with a name slightly different from the one the error message looks for), you need to rename it:
The path and file name might be something like this:
You need to rename it to:
And restart the project.

If the file is not found with the locate command in the virtualenv you're working on, try reinstalling Pillow (even downloading the most updated version from, and installing it).
Either way, you need an installation that provides that file, even if under a different name.

decoder jpeg not available:
sudo apt-get install libjpeg-dev
pip install -I pillow

sudo ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib
sudo ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib
sudo ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib

Or for Ubuntu 32bit:

sudo ln -s /usr/lib/i386-linux-gnu/ /usr/lib/
sudo ln -s /usr/lib/i386-linux-gnu/ /usr/lib/
sudo ln -s /usr/lib/i386-linux-gnu/ /usr/lib/

pip install -I pillow
django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet:

from django.conf import settings

try:
    from django.contrib.auth import get_user_model
    User = settings.AUTH_USER_MODEL
except ImportError:
    from django.contrib.auth.models import User

+Speeding Up Django Links (June 18, 2015, 12:41 p.m.)

+Django Analytical (June 7, 2015, 4:52 p.m.)

1-easy_install django-analytical


3-In the base.html
{% load analytical %}
<!DOCTYPE ... >
{% analytical_head_top %}


{% analytical_head_bottom %}
{% analytical_body_top %}


{% analytical_body_bottom %}

4-Create an account on this site:
I have already registered: Username is Mohsen_Hassani and the password MohseN4301

5- There are some JavaScript snippets which should be taken from to your template. They look like:

This should be before the </body> </html> tags:
<script src="//" type="text/javascript"></script>
<script type="text/javascript">try{ clicky.init(100851091); }catch(e){}</script>
<noscript><p><img alt="Clicky" width="1" height="1" src="//" /></p></noscript>

+Templates - Do Mathematic (Jan. 14, 2016, 2:14 p.m.)

Using Django’s widthratio template tag for multiplication & division.

I find it a bit odd that Django has a template filter for adding values, but none for multiplication and division. It’s fairly straightforward to add your own math tags or filters, but why bother if you can use the built-in one for what you need?

Take a closer look at the widthratio template tag. Given {% widthratio a b c %} it computes (a/b)*c

So, if you want to do multiplication, all you have to do is pass b=1, and the result will be a*c.

Of course, you can do division by passing c=1. (a=1 would also work, but has possible rounding side effects)

Note: The results are rounded to an integer before returning, so this may have marginal utility for many cases.

So, in summary:

to compute A*B: {% widthratio A 1 B %}
to compute A/B: {% widthratio A B 1 %}

And, since add is a filter and not a tag, you can always do crazy stuff like:

compute A^2: {% widthratio A 1 A %}
compute (A+B)^2: {% widthratio A|add:B 1 A|add:B %}
compute (A+B) * (C+D): {% widthratio A|add:B 1 C|add:D %}
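In plain Python, widthratio behaves roughly like this (a sketch, not Django's exact implementation; rounding of exact ties may differ):

```python
def widthratio(a, b, c):
    # {% widthratio a b c %} computes (a / b) * c, rounded to an integer.
    return round(a / b * c)

print(widthratio(5, 1, 3))  # multiplication via b=1: 5 * 3 = 15
print(widthratio(7, 3, 1))  # division via c=1: 7 / 3 rounded = 2
```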

+URLS - Allow entering dot (.) in url pattern (Dec. 2, 2014, 10:03 p.m.)


+Change the value of QuerySet (Nov. 18, 2014, 2:17 a.m.)

If you try to change the value of a QueryDict (such as request.POST) you will get an error:
"This QueryDict instance is immutable"

So this is how you should change its value (the whole of it, or any item inside):
mutable = request.POST._mutable
request.POST._mutable = True
request.session['search_criteria']['region'] = rid
request.POST = request.session['search_criteria']
request.POST._mutable = mutable
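The save-flag / mutate / restore pattern can be illustrated with a toy immutable mapping (ToyQueryDict is a simplified stand-in, not Django's QueryDict):

```python
class ToyQueryDict(dict):
    _mutable = False

    def __setitem__(self, key, value):
        if not self._mutable:
            raise AttributeError('This QueryDict instance is immutable')
        super().__setitem__(key, value)

post = ToyQueryDict()
mutable = post._mutable   # remember the original flag
post._mutable = True      # temporarily allow changes
post['region'] = 'tehran'
post._mutable = mutable   # restore it
print(post['region'])  # tehran
```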

+Templates - Conditional Extend (Sept. 22, 2014, 11:45 a.m.)

{% extends supervising|yesno:"supervising/tasks.html,desktop/tasks_list.html" %}

{% extends variable %} uses the value of variable. If the variable evaluates to a string, Django will use that string as the name of the parent template. If the variable evaluates to a Template object, Django will use that object as the parent template.

+Adding CSS class in a ModelForm (Sept. 13, 2014, 1:15 a.m.)

self.fields['specie'].widget.attrs['class'] = 'autocomplete'

+Views - JSON object serialization (AJAX) (Jan. 3, 2016, 3:03 p.m.)

from django.core import serializers

foos = Foo.objects.all()
data = serializers.serialize('json', foos)

return HttpResponse(data, mimetype='application/json')
import json

def json_response(something):
return HttpResponse(json.dumps(something), content_type='application/javascript; charset=UTF-8')
from django.core.serializers.json import DjangoJSONEncoder

def categories_view(request):
categories = Category.objects.annotate(notes_count=Count('notes__pk')).values('pk', 'name', 'notes_count')
data = json.dumps(list(categories), cls=DjangoJSONEncoder)
return HttpResponse(data, content_type='application/json')
data = serializers.serialize('xml', SomeModel.objects.all(), fields=('name','size'))
all_objects = list(Restaurant.objects.all()) + list(Place.objects.all())
data = serializers.serialize('xml', all_objects)
For Django 1.7 +

from django.http import JsonResponse

return JsonResponse({'foo':'bar'})
Serializing non-dictionary objects
In order to serialize objects other than dict you must set the safe parameter to False:

return JsonResponse([1, 2, 3], safe=False)
Without passing safe=False, a TypeError will be raised.
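The list itself is perfectly valid JSON; the safe flag guards against a historical JSON-hijacking issue in old browsers, not a serializer limitation. json.dumps shows this directly:

```python
import json

payload = json.dumps([1, 2, 3])
print(payload)  # [1, 2, 3]
```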

indexed_companies = Company.objects.filter(index=True, business_group_id=request.POST['bid'])
indexed_companies = serialize('json', indexed_companies)

companies = Company.objects.filter(business_group_id=request.POST['bid'])
companies = serialize('json', filter_companies(companies, request.POST))
return JsonResponse({'indexed_companies': indexed_companies, 'companies': companies})


$('.search-forms').submit(function(e) {
    $.ajax({
        type: 'POST',
        url: $(this).attr("action"),
        data: $(this).serialize(),
        dataType: 'json',
        success: function(json) {
            var indexed_companies = $.parseJSON(json['indexed_companies']);
            var companies = $.parseJSON(json['companies']);
            $.each(indexed_companies, function(idx, indexed_company) {
                $('<tr>').appendTo('#indexed-members table');
                $('<td>' + (idx + 1) + '</td>').appendTo('#indexed-members table tr:last-child');
                $('<td>' + indexed_company.fields.province + '</td>').appendTo('#indexed-members table tr:last-child');
                $('<td>' + indexed_company.fields.manager + '</td>').appendTo('#indexed-members table tr:last-child');
                $('<td>' + + '</td>').appendTo('#indexed-members table tr:last-child');
            });
        },
        error: function() {
            $('#search-preloader').css('display', 'none');
            console.log('{% trans "Problem with connecting to the server" %}.');
        }
    });
});
If you need to serialize only some fields of an object, you cannot use this:
return JsonResponse({'products': serialize('json', Coffee.objects.all().values('id', 'name'))})

The correct way is:
return JsonResponse({'products': serialize('json', Coffee.objects.all(), fields=('id', 'name'))})

+Models - Overriding save method (Aug. 21, 2014, 1:03 p.m.)

from tastypie.utils.timezone import now
from django.contrib.auth.models import User
from django.db import models
from django.utils.text import slugify

class Entry(models.Model):
user = models.ForeignKey(User)
pub_date = models.DateTimeField(default=now)
title = models.CharField(max_length=200)
slug = models.SlugField()
body = models.TextField()

def __unicode__(self):
return self.title

def save(self, *args, **kwargs):
# For automatic slug generation.
if not self.slug:
self.slug = slugify(self.title)[:50]

return super(Entry, self).save(*args, **kwargs)
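The slug-generation step can be checked in plain Python; the slugify below is a simplified version of django.utils.text.slugify, just to show the truncation to 50 characters:

```python
import re

def slugify(value):
    # Simplified: lowercase, drop punctuation, collapse spaces/hyphens to one hyphen.
    value = re.sub(r'[^\w\s-]', '', value.lower()).strip()
    return re.sub(r'[-\s]+', '-', value)

title = 'Hello World! My First Entry'
slug = slugify(title)[:50]
print(slug)  # hello-world-my-first-entry
```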

+Models - AUTO_NOW and AUTO_NOW_ADD (Aug. 21, 2014, 1:02 p.m.)

class Blog(models.Model):
title = models.CharField(max_length=100)
added = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)

auto_now_add tells Django that when you add a new row, you want the current date & time set once. auto_now tells Django to update the field with the current date & time EVERY time the record is saved.

+Query - Call a field name by dynamic values (Aug. 21, 2014, 12:58 p.m.)

properties = Properties.objects.filter(**{'%s__age_status' % p_type: request.POST['age_status']})
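The trick is just building the lookup keyword at runtime and expanding it with **; the dictionary part works in plain Python:

```python
p_type = 'owner'
age_status = 'adult'

# Equivalent to filter(owner__age_status='adult') once expanded with **.
filters = {'%s__age_status' % p_type: age_status}
print(filters)  # {'owner__age_status': 'adult'}
```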

+Settings - Set a settings for shell (Aug. 21, 2014, 12:56 p.m.)

python shell --settings=nimkatonilne.settings

+Admin - Deleting the file/image on deleting an object (Aug. 21, 2014, 12:54 p.m.)

1- Create a file named `` with the following contents:

import os

from django.conf import settings

def clean_up(sender, instance, *args, **kwargs):
    for field in sender._meta.get_fields():
        field_types = ['FileBrowseField', 'ImageField', 'FileField']
        if field.__class__.__name__ in field_types:
            try:
                os.remove(settings.MEDIA_ROOT + str(getattr(instance,
            except (OSError, IOError):
                pass
2- Open the file:

Import the `clean_up` function from the `clean_up` module and add the following line at the bottom of each model having a FileField or ImageField or FileBrowseField:

models.signals.post_delete.connect(clean_up, sender=Ads)

+URLS - Redirect to a URL in (Aug. 21, 2014, 12:53 p.m.)

from django.views.generic import RedirectView
from django.core.urlresolvers import reverse_lazy

(r'^one/$', RedirectView.as_view(url='/another/')),


url(r'^some-page/$', RedirectView.as_view(url=reverse_lazy('my_named_pattern'))),

+Forms - Overriding and manipulating fields (Nov. 30, 2015, 12:35 p.m.)

class CheckoutForm(forms.ModelForm):

def __init__(self, request, *args, **kwargs):
super(CheckoutForm, self).__init__(*args, **kwargs)
self.request = request

class Meta:
model = Address
exclude = ('fax_number',)
Another example, from an InstituteRegistrationForm:
def __init__(self, request, *args, **kwargs):
super(InstituteRegistrationForm, self).__init__(*args, **kwargs)
self.request = request
if request.user.cellphone:
self.fields['cell_phone_number'].widget.attrs['readonly'] = 'true'
self.fields['email'].widget.attrs['readonly'] = 'true'
self.fields['city'].queryset = City.objects.filter(province__allow_delete=False)
self.fields['city'].initial = '1'
self.fields['first_name'].required = True
self.fields['first_name'].widget.attrs['required'] = True
for field in self.fields.values():
field.widget.attrs['required'] = True
field.required = True
self.fields['national_team'].empty_label = None
self.fields['allowed_online_calls'] = forms.ModelMultipleChoiceField(
Hide a field:
self.fields['state'].widget = forms.HiddenInput()
class UpdateShare(forms.ModelForm):
class Meta:
model = ManualEntries
exclude = ['dt']
widgets = {
'description': forms.Textarea(attrs={'rows': 3}),

+Installation (Feb. 28, 2017, 10:31 a.m.)

To install Docker, you need the 64-bit version of one of these Debian or Raspbian versions:

Stretch (testing)
Jessie 8.0 (LTS) / Raspbian Jessie
Wheezy 7.7 (LTS)
You can install Docker in different ways, depending on your needs:

- Most users set up Docker’s repositories and install from them, for ease of installation and upgrade tasks. This is the recommended approach.

- Some users download the DEB package and install it manually and manage upgrades completely manually.

- Some users cannot use the official Docker repositories and must rely on the version of Docker that comes with their operating system. This version of Docker may be out of date. Those users should consult their operating system documentation and not follow these procedures.
Install using the repository:

Before you install Docker for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install, update, or downgrade Docker from the repository.
Set up the repository:
1- Install packages to allow apt to use a repository over HTTPS:

Jessie or Stretch:
sudo apt-get install -y --no-install-recommends apt-transport-https ca-certificates curl software-properties-common

Wheezy:
sudo apt-get install -y --no-install-recommends apt-transport-https ca-certificates curl python-software-properties

2- Add Docker’s official GPG key:
curl -fsSL | sudo apt-key add -

3- Verify that the key ID is 58118E89F3A912897C070ADBF76221572C52609D.
apt-key fingerprint 58118E89F3A912897C070ADBF76221572C52609D

4- Use the following command to set up the stable repository.
sudo add-apt-repository "deb debian-$(lsb_release -cs) main"
Install Docker:
1- sudo apt-get update

2- sudo apt-get -y install docker-engine

3- Verify that docker is installed correctly by running the hello-world image:
sudo docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.

Docker is installed and running. You need to use sudo to run Docker commands.


+Introduction (Feb. 27, 2017, 12:30 p.m.)

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they're running on and only requires applications be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.
Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.
Docker can be integrated into various infrastructure tools, including Amazon Web Services, Ansible, CFEngine, Chef, Google Cloud Platform, IBM Bluemix, HPE Helion Stackato, Jelastic, Jenkins, Kubernetes, Microsoft Azure, OpenStack Nova, OpenSVC, Oracle Container Cloud Service, Puppet, Salt, Vagrant, and VMware vSphere Integrated Containers.

ELK Stack
+Elasticsearch cat APIs (April 22, 2019, 1:24 a.m.)

To check the cluster health, we will be using the _cat API.

cat APIs

JSON is great… for computers. Even if it’s pretty-printed, trying to find relationships in the data is tedious. Human eyes, especially when looking at a terminal, need compact and aligned text. The cat API aims to meet this need.


curl 'http://localhost:9200/_cat/health?v'



List All Indices:
curl 'http://localhost:9200/_cat/indices?v'


+Filebeat (April 21, 2019, 11:51 p.m.)

Filebeat sends log lines to Logstash.

Filebeat client is a lightweight, resource-friendly tool that collects logs from files on the server and forwards these logs to your Logstash instance for processing.


filebeat modules list


Filebeat Modules:



vim /etc/filebeat/modules.d/system.yml


+Installation (April 19, 2019, 10:25 p.m.)

apt install openjdk-8-jdk



1- wget -qO - | sudo apt-key add -

2- echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

3- apt update

4- apt install elasticsearch

5- vim /etc/elasticsearch/elasticsearch.yml localhost

6- systemctl restart elasticsearch
systemctl enable elasticsearch

7- Check the status of the elasticsearch server:
curl -X GET http://localhost:9200



1- apt install kibana

2- systemctl enable kibana

3- echo "admin:$(openssl passwd -apr1 my_password)" | sudo tee -a /etc/nginx/htpasswd.kibana

4- vim /etc/nginx/sites-enabled/kibana
server {
    listen 80;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

5- systemctl restart nginx



1- apt install logstash

systemctl restart logstash
systemctl enable logstash


+Introduction / Definitions (April 19, 2019, 10:24 p.m.)

First Underlying Layer: Logstash + Beats

Upper Layer: Elasticsearch

Upper Layer: Kibana


"ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana.

Elasticsearch is a search and analytics engine.

Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch.

Kibana lets users visualize data with charts and graphs in Elasticsearch.


Elasticsearch is a distributed, RESTful search and analytics NoSQL engine based on Lucene.

Logstash is a light-weight data processing pipeline for managing events and logs from a wide variety of sources.

Kibana is a web application for visualizing data that works on top of Elasticsearch.


The Elastic Stack is the next evolution of the ELK Stack.


+Les Phrases (Dec. 24, 2017, 11:55 p.m.)

En collaboration avec
In collaboration with
Bon, d'accord
All right, okay. So
Il y a très, très, très longtemps,
A long, long, long, long time ago...
dans une galaxie éloignée
in a galaxy far, far away...
Oh, wait, wait, wait.
Oh, un instant.
Okay, go.
D'accord, vas-y.
That's perfect. That's perfect. Okay.
C'est parfait. Parfait. Bon.

+Submodule (Nov. 29, 2017, 6:17 p.m.)
1- CD to the path you need the module get cloned.

2- git submodule add
In case this error is raised:
"<path> already exists in the index"
git rm --cached <path>
and you should also delete the files from this path:
rm -rf .git/modules/...
To remove a submodule you need to:

Delete the relevant section from the .gitmodules file.
Stage the .gitmodules changes git add .gitmodules
Delete the relevant section from .git/config.
Run git rm --cached path_to_submodule (no trailing slash).
Run rm -rf .git/modules/path_to_submodule
Commit git commit -m "Removed submodule <name>"
Delete the now untracked submodule files
rm -rf path_to_submodule

+Commands (July 29, 2017, 11:26 a.m.)

git pull
(fetch from the remote and merge into the current branch; roughly git fetch followed by git merge)

git fetch
(download remote changes without merging them)

git pull master

+Diff (July 29, 2017, 11:17 a.m.)

If you want to see what you haven't git added yet:
git diff myfile.txt

or if you want to see already-added changes
git diff --cached myfile.txt

+Modify existing / unpushed commits (Jan. 28, 2017, 3:12 p.m.)

git commit --amend -m "New commit message"

+Delete file from repository (Jan. 28, 2017, 3:04 p.m.)

If you deleted a file from the working tree, then commit the deletion:
git add . -A
git commit -m "Deleted some files..."
git push origin master
Remove a file from a Git repository without deleting it from the local filesystem:
git rm --cached <filename>
git rm --cached -r <dir_name>
git commit -m "Removed folder from repository"
git push origin master

+.gitignore Rules (Jan. 28, 2017, 2:56 p.m.)

A blank line matches no files, so it can serve as a separator for readability.

A line starting with # serves as a comment.

An optional prefix ! which negates the pattern; any matching file excluded by a previous pattern will become included again. If a negated pattern matches, this will override lower precedence patterns sources.

If the pattern ends with a slash, it is removed for the purpose of the following description, but it would only find a match with a directory. In other words, foo/ will match a directory foo and paths underneath it, but will not match a regular file or a symbolic link foo (this is consistent with the way how path spec works in general in git).

If the pattern does not contain a slash /, git treats it as a shell glob pattern and checks for a match against the pathname relative to the location of the .gitignore file (relative to the top level of the work tree if not from a .gitignore file).

Otherwise, git treats the pattern as a shell glob suitable for consumption by fnmatch(3) with the FNM_PATHNAME flag: wildcards in the pattern will not match a / in the pathname. For example, Documentation/*.html matches Documentation/git.html but not Documentation/ppc/ppc.html or tools/perf/Documentation/perf.html.

A leading slash matches the beginning of the pathname. For example, /*.c matches cat-file.c but not mozilla-sha1/sha1.c.
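A small example file exercising these rules (the patterns here are illustrative):

```
# ignore all log files...
*.log
# ...except this one (negation with !)
!important.log
# trailing slash: matches the build directory, not a file named "build"
build/
# leading slash: only the TODO at the repository root
/TODO
# pattern containing a slash: wildcards do not cross directory boundaries
Documentation/*.html
```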

+Examples (Aug. 21, 2014, 1:29 p.m.)

mkdir my_project
cd my_project
git init
git remote add origin
git commit -m 'initial commit'
git push origin master
After each change in project:
git add .
git commit -m '<the comment>'
git push origin master
git config http.postBuffer 1048576000
git config --global "Mohsen Hassani"
git config --global ""
git config --global color.ui true
git config --global color.status auto
git config --global color.branch auto
git config --list
git log

git add -A .
git commit -m "File nonsense.txt is now removed"

git commit -m "message with a tpyo here"
git commit --amend -m "More changes - now correct"

git remote
git remote -v

export http_proxy=http://proxy:8080
// Set proxy for git globally
git config --global http.proxy http://proxy:8080
// To check the proxy settings
git config --get http.proxy
// Just in case you need to you can also revoke the proxy settings
git config --global --unset http.proxy
A good online tutorial:

+Markdown Cheatsheet (March 10, 2018, 8:14 p.m.)

+Runner - .gitlab-ci.yml sample (Feb. 14, 2018, 11:38 a.m.)

- mkdocs build
- ssh-keyscan -H >> ~/.ssh/known_hosts
- scp -rC site/*
- ssh "/etc/init.d/nginx restart"

+Send Notifications to Email (April 12, 2017, 3:03 p.m.)
To test the mail server:
1- sudo gitlab-rails console production
2- Look at the ActionMailer delivery_method:
3- Check the mail settings:

If it's configured with smtp:

If it is sendmail:

You may need to check your local mail logs (e.g. /var/log/mail.log) for more details.
4- Send a test message via the console.
Notify.test_email('', 'Hello World', 'This is a test message').deliver_now

In case the email is not sent (after checking your mail), you can see the reason/error in:
tail -f /var/log/mail.log
5- If you needed to change any configs, refer to this file:

vim /var/opt/gitlab/gitlab-rails/etc/gitlab.yml

OR depending on your gitlab version, maybe this one:


And after any change to it:
gitlab-ctl reconfigure
For fixing some problems I had to replace "sendmail" with the default "postfix".
apt install sendmail (will remove postfix and install sendmail)

In /etc/hosts I had to put the required domain names to fix the error " Sender address rejected: Domain not found".

+Deleting a runner (March 8, 2017, 7:38 p.m.)

gitlab-runner unregister --name runner-0

For deleting all runners that can no longer be verified against the server:
gitlab-runner verify --delete

+Install Gitlab Runner (Feb. 25, 2017, 3:09 p.m.)

GitLab Runner is an application which processes builds. It can be deployed separately and works with GitLab CI through an API.
In order to run tests, you need at least one GitLab instance and one GitLab Runner.
In GitLab CI, Runners run your YAML. A Runner is an isolated (virtual) machine that picks up jobs through the coordinator API of GitLab CI. A Runner can be specific to a certain project or serve any project in GitLab CI. A Runner that serves all projects is called a shared Runner.
1- Add GitLab's official repository:
apt-get install curl
curl -L | sudo bash

2- cat > /etc/apt/preferences.d/pin-gitlab-runner.pref <<EOF
Explanation: Prefer GitLab provided packages over the Debian native ones
Package: gitlab-ci-multi-runner
Pin: origin
Pin-Priority: 1001
EOF

3- Install gitlab-ci-multi-runner:
sudo apt-get install gitlab-ci-multi-runner

4- Register the Runner:
sudo gitlab-ci-multi-runner register

+Install GitLab on server (Feb. 25, 2017, 12:16 p.m.)
1- Install and configure the necessary dependencies:
sudo apt-get install curl openssh-server ca-certificates postfix

2- Add the GitLab package server and install the package:
curl -sS | sudo bash
sudo apt-get install gitlab-ce

3- Configure and start GitLab:
sudo gitlab-ctl reconfigure

4- Browse to the hostname and login:
On your first visit, you'll be redirected to a password reset screen to provide the password for the initial administrator account. Enter your desired password and you'll be redirected back to the login screen.
The default account's username is "root". Provide the password you created earlier and login. After login you can change the username if you wish.

+Install GitLab CI (Feb. 25, 2017, 11:46 a.m.)

GitLab CI is a part of GitLab, a web application with an API that stores its state in a database. It manages projects/builds and provides a nice user interface, besides all the features of GitLab.
Starting from version 8.0, GitLab Continuous Integration (CI) is fully integrated into GitLab itself and is enabled by default on all projects.
GitLab offers a continuous integration service. If you add a .gitlab-ci.yml file to the root directory of your repository, and configure your GitLab project to use a Runner, then each merge request or push, triggers your CI pipeline.

+Conditions If (July 27, 2015, 3:02 p.m.)

You might need to change all of the condition syntaxes below to this syntax:
<![if gte IE 9]>

Target IE (any version):
<!--[if IE]>
<link rel="stylesheet" type="text/css" href="all-ie-only.css" />
<![endif]-->

Target everything EXCEPT IE:
<!--[if !IE]><!-->
<link rel="stylesheet" type="text/css" href="not-ie.css" />
<!--<![endif]-->

Target IE 7 ONLY:
<!--[if IE 7]>
<link rel="stylesheet" type="text/css" href="ie7.css">
<![endif]-->

Target IE 6 ONLY:
<!--[if IE 6]>
<link rel="stylesheet" type="text/css" href="ie6.css" />
<![endif]-->

Target IE 5 ONLY:
<!--[if IE 5]>
<link rel="stylesheet" type="text/css" href="ie5.css" />
<![endif]-->

Target IE 5.5 ONLY:
<!--[if IE 5.5000]>
<link rel="stylesheet" type="text/css" href="ie55.css" />
<![endif]-->

Target IE 6 and LOWER:
<!--[if lt IE 7]>
<link rel="stylesheet" type="text/css" href="ie6-and-down.css" />
<![endif]-->

<!--[if lte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6-and-down.css" />
<![endif]-->

Target IE 7 and LOWER:
<!--[if lt IE 8]>
<link rel="stylesheet" type="text/css" href="ie7-and-down.css" />
<![endif]-->

<!--[if lte IE 7]>
<link rel="stylesheet" type="text/css" href="ie7-and-down.css" />
<![endif]-->

Target IE 8 and LOWER:
<!--[if lt IE 9]>
<link rel="stylesheet" type="text/css" href="ie8-and-down.css" />
<![endif]-->

<!--[if lte IE 8]>
<link rel="stylesheet" type="text/css" href="ie8-and-down.css" />
<![endif]-->

Target IE 6 and HIGHER:
<!--[if gt IE 5.5]>
<link rel="stylesheet" type="text/css" href="ie6-and-up.css" />
<![endif]-->

<!--[if gte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6-and-up.css" />
<![endif]-->

Target IE 7 and HIGHER:
<!--[if gt IE 6]>
<link rel="stylesheet" type="text/css" href="ie7-and-up.css" />
<![endif]-->

<!--[if gte IE 7]>
<link rel="stylesheet" type="text/css" href="ie7-and-up.css" />
<![endif]-->

Target IE 8 and HIGHER:
<!--[if gt IE 7]>
<link rel="stylesheet" type="text/css" href="ie8-and-up.css" />
<![endif]-->

<!--[if gte IE 8]>
<link rel="stylesheet" type="text/css" href="ie8-and-up.css" />
<![endif]-->

+Commands (April 16, 2017, 11:46 a.m.)

ionic serve
cordova platform rm android
cordova platform add android@4.0
ionic run android
ionic run android --prod
ionic g page profile

+Animated Modal (Sept. 13, 2016, 6:02 a.m.)
Download and include these CSS files:

Using the ion-modal-view tag, create the modal in a template (using the custom style):
<ion-modal-view style="width: 80%; height: 60%; min-height: 0; max-height: 250px; top: 20%; left: 10%; right: 10%; bottom: 20%;">

This will cause a problem with the backdrop; to fix it, add this CSS to your style file:
@media (min-width: 0px) {
    .modal-backdrop-bg {
        opacity: 0.5 !important;
        background-color: #000;
    }
}

And finally using the link at the top of this note, continue with how to create and use the modal.

+Events (Sept. 10, 2016, 2:34 p.m.)

$scope.$on('$ionicView.loaded', function(){});
$scope.$on('$ionicView.enter', function(){});
$scope.$on('$ionicView.leave', function(){});
$scope.$on('$ionicView.beforeEnter', function(){});
$scope.$on('$ionicView.beforeLeave', function(){});
$scope.$on('$ionicView.afterEnter', function(){});
$scope.$on('$ionicView.afterLeave', function(){});
$scope.$on('$ionicView.unloaded', function(){});

+Requirements for building applications (Sept. 6, 2016, 11:58 a.m.)

Visit the following links for information about the dependencies you might need for the SDK version you intend to download; the tools and all the dependencies can be found there as well:
1- Create a folder preferably name it "android-sdk-linux" in any location.

2- Downloading SDK Tools:
From the following link, scroll to the bottom of the page, the table having the title "Get just the command line tools" and download the "Linux" package.
Extract it to the folder you created in step 1.

3- Download an API level (for example, the one for Android 4.0.4).
Create a folder named "platforms" in "android-sdk-linux" and extract the downloaded file to it.

4- Download the latest version of `build-tools` (
Create a folder named `build-tools` in `android-sdk-linux` and extract it to it.
You need to rename the extracted folder to `25`.

5- Download the latest version of `platform-tools` (
Extract it to the folder `android-sdk-linux`. The archive already contains a folder named `platform-tools`, so there is no need to create any further folders.

6- Open the file `~/.bashrc` and add the following line to it:
export ANDROID_HOME=/home/mohsen/Programs/Android/Development/android-sdk-linux

7- sudo apt-get install openjdk-8-jdk

+Good Tutorial Websites (Aug. 11, 2016, 11:54 p.m.)

Passing data between pages:
Forms and Validation:
Different Native Modal Windows:
Handling Native View Animations:

+Using google maps (Aug. 11, 2016, 10:21 p.m.)
1- (Do not use bower! It downloads many unneeded files into the lib folder. Instead, copy only the 3 needed JS files from other projects or the websites. Once you have those files, put them in the js folder and change the src attributes of step 2 from "lib/.../dist/..." to "js/...").
bower install angular-google-maps
bower install angular-simple-logger

2- Include the scripts:

<script src='lib/angular-simple-logger/dist/angular-simple-logger.js'></script>
<script src='lib/angular-google-maps/dist/angular-google-maps.min.js'></script>
<script src='lib/lodash/dist/lodash.min.js'></script>
<script src='//'></script>

3- angular.module('starter', ['ionic', 'ngCordova', 'nemLogging', 'uiGmapgoogle-maps'])

4- In the template:

<ion-content data-tap-disabled="true">
    <ui-gmap-google-map center='map.center' zoom='map.zoom'>
        <ui-gmap-marker coords="marker.coords" options="marker.options" idkey="">
        </ui-gmap-marker>
    </ui-gmap-google-map>
</ion-content>

5- In the controller:

$scope.map = {
    center: {latitude: $scope.latitude, longitude: $scope.longitude},
    zoom: 14,
    pan: 1
};

$scope.marker = {
    id: 0,
    coords: {
        latitude: $scope.latitude,
        longitude: $scope.longitude
    }
};

$scope.marker.options = {
    draggable: false,
    labelAnchor: "80 120",
    labelClass: "marker-labels"
};
+Pass a search result or a fetched json to another page (July 14, 2016, 2:57 a.m.)

$state.go('', {'property': response});


.state('', {
    url: 'property/',
    params: {'property': null},
    views: {
        'menuContent': {
            templateUrl: 'templates/property.html',
            controller: 'PropertyCtrl'
        }
    }
})

In the controller:
$scope.property = $stateParams.property;

+Use jQuery (July 14, 2016, 1:05 a.m.)

// find('#id')

//find('.classname'), assumes you already have the starting elem to search from

Angular doesn’t depend on jQuery. In fact, the Angular source contains an embedded lightweight alternative: jqLite. Still, when Angular detects the presence of a jQuery version in your page, it uses that full jQuery implementation in lieu of jqLite.

If jQuery is available, angular.element is an alias for the jQuery function. If jQuery is not available, angular.element delegates to Angular's built-in subset of jQuery, called "jQuery lite" or jqLite.

jqLite is a tiny, API-compatible subset of jQuery that allows Angular to manipulate the DOM in a cross-browser compatible way. jqLite implements only the most commonly needed functionality with the goal of having a very small footprint.

To use jQuery, simply ensure it is loaded before the angular.js file. You can also use the ngJq directive to specify that jqLite should be used instead of jQuery, or to use a specific version of jQuery if multiple versions exist on the page.

Note: Keep in mind that this function will not find elements by tag name / CSS selector. For lookups by tag name, try instead angular.element(document).find(...) or $document.find(), or use the standard DOM APIs, e.g. document.querySelectorAll().

Angular's jqLite
jqLite provides only the following jQuery methods:

addClass() - Does not support a function as first argument
attr() - Does not support functions as parameters
bind() - Does not support namespaces, selectors or eventData
children() - Does not support selectors
css() - Only retrieves inline-styles, does not call getComputedStyle(). As a setter, does not convert numbers to strings or append 'px', and also does not have automatic property prefixing.
find() - Limited to lookups by tag name
next() - Does not support selectors
on() - Does not support namespaces, selectors or eventData
off() - Does not support namespaces, selectors or event object as parameter
one() - Does not support namespaces or selectors
parent() - Does not support selectors
removeClass() - Does not support a function as first argument
toggleClass() - Does not support a function as first argument
triggerHandler() - Passes a dummy event object to handlers
unbind() - Does not support namespaces or event object as parameter
$element === angular.element() === jQuery() === $()

+Publishing Apps (May 18, 2016, 2:19 a.m.)

1-Remove unneeded plugins for production mode:
cordova plugin rm cordova-plugin-console

2-Generate a release build for Android
cordova build --release android

3-jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore ~/Studies/my-release-key.keystore platforms/android/build/outputs/apk/android-release-unsigned.apk mohsen_hassani

4-zipalign -v 4 platforms/android/build/outputs/apk/android-release-unsigned.apk platforms/android/build/outputs/apk/MyNotes-1.0.0.apk

+Using Sass (May 10, 2016, 10:02 a.m.)

ionic setup sass

+Icon and Splash Screen Image Generation (May 9, 2016, 2:52 p.m.)

Source page:

The icon image’s minimum dimensions should be 192x192 px.
Copy an image with an extension of `png` or `psd` to the path `resources/icon.png`.

ionic resources --icon
Splash Screen:
The source image’s minimum dimensions should be 2208x2208 px,
The splash screen’s artwork should roughly fit within a center square (1200x1200 px)
Copy an image with an extension of `png` or `psd` to the path `resources/splash.png`.

ionic resources --splash

+Post data (May 4, 2016, 3:47 a.m.)

var url = 'http://localhost:8000/api/note/' + $ + '/update/';
var data = {note: $scope.note.note};
var req = {
    method: 'POST',
    url: url,
    data: data,
    headers: {'Content-Type': 'application/x-www-form-urlencoded'}
};
$http(req).then(function (res) {});

In Django notes you can see the note `Receive and parse JSON data from a request` to get the values.
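As an illustration of what that Content-Type implies for the request body, the built-in URLSearchParams class (available in modern browsers and Node) produces the same application/x-www-form-urlencoded serialization. The field names below are illustrative, not taken from the project:

```javascript
// Sketch: how a data object is serialized for
// 'application/x-www-form-urlencoded' (illustrative field names).
function encodeForm(data) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(data)) {
    params.append(key, value); // values are stringified and percent-encoded
  }
  return params.toString();
}

console.log(encodeForm({note: 'hello world', id: 3}));
// note=hello+world&id=3
```

Note that spaces become `+` in this serialization, which is exactly what Django's form parser expects on the other end.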

+Errors (April 30, 2016, 6:50 p.m.)

Cannot find module 'config-chain':

sudo npm uninstall -g npm-check-updates && sudo npm install -g npm-check-updates
sudo npm uninstall -g cordova ionic
sudo npm install -g cordova ionic

+Plugins (April 18, 2016, 8:14 p.m.)
cordova plugin add
ionic plugin add cordova-plugin-network-information
ionic plugin add cordova-plugin-x-toast
cordova plugin add cordova-plugin-vibration
cordova plugin add
It may already be installed. Delete this plugin if you get a message saying `already installed`.
cordova plugin add cordova-plugin-splashscreen
cordova plugin add cordova-plugin-geolocation

+Mobile Browser Database Introduction (April 13, 2016, 7:44 p.m.)

One of the biggest evolutions happening around HTML5 today is the availability of persistent storage in browsers. Before HTML5, we had no choice other than using cookies to store data on the client side. With HTML5, you have several choices for storing your data, depending on what you want to store.

The most broadly used technology is called WebStorage, an API for persistent key-value data storage which most major browsers have supported for a while. WebStorage has persistent storage called LocalStorage, and temporary storage called SessionStorage which will be cleared after a session.

WebSQL Database is a relational database solution for browsers. Even though it is essentially deprecated and the W3C no longer maintains the specification, meaning browsers may or may not continue to support it, people still use it because it is the only common answer for structured data storage on mobile browsers.

Indexed Database is an object store. Since it's considered to be the final solution for storing structured data on browsers, its support and usage is gradually expanding.

FileSystem API is a file system solution for the web. Developers can store large objects in a sandboxed part of the user's file system and directly link to them via URL. Although Chrome and Opera are the only browsers that currently implement the feature, its standardization is ongoing.

Application Cache is a powerful cache mechanism targeting single-page applications in browsers. Although there were a number of complaints about this spec and an alternative called ServiceWorker has been proposed, there is no real offline webapp solution other than this as of January 2014.
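As a sketch of the WebStorage key-value semantics described above (setItem/getItem/removeItem, string-only values), here is a minimal in-memory stand-in built on a plain Map, since a real localStorage object only exists in browsers:

```javascript
// Minimal in-memory stand-in for the browser's localStorage,
// illustrating the WebStorage key-value semantics described above.
function createStorage() {
  const store = new Map();
  return {
    setItem(key, value) { store.set(String(key), String(value)); }, // values are always strings
    getItem(key) { return store.has(String(key)) ? store.get(String(key)) : null; },
    removeItem(key) { store.delete(String(key)); },
    get length() { return store.size; },
  };
}

const storage = createStorage();
storage.setItem('user', 'mohsen');
storage.setItem('count', 3);          // stored as the string "3"
console.log(storage.getItem('user')); // "mohsen"
console.log(storage.getItem('count') === '3'); // true - WebStorage stores strings only
storage.removeItem('user');
console.log(storage.getItem('user')); // null
```

The string-only behavior is the main gotcha of the real API: numbers and objects must be serialized (e.g. with JSON.stringify) before storing.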

+Starting a Project (April 8, 2016, 8:35 p.m.)

ionic start project_name blank
ionic platform add ios
ionic platform add android

+Installation (April 8, 2016, 1:26 a.m.)

Official Website:

1-Install Node.js (Search in my notes category for NodeJs and using the `Installation` note.)

2- npm install -g cordova ionic

+Bypass popup blocker on (Jan. 20, 2018, 12:53 a.m.)

$('#myButton').click(function () {
    // Open the window synchronously inside the click handler so the
    // popup blocker allows it, then navigate it once the request succeeds.
    var redirectWindow ='', '_blank');
    $.ajax({
        type: 'POST',
        url: '/echo/json/',
        success: function (data) {
            // navigate the pre-opened window here, e.g. redirectWindow.location = ...;
        }
    });
});
+Error: Cannot read property 'msie' of undefined (Oct. 15, 2017, 11:43 a.m.)

Create a file, for example, "ie.js" and copy the content into it. Load it after jquery.js:

jQuery.browser = {};
(function () {
    jQuery.browser.msie = false;
    jQuery.browser.version = 0;
    if (navigator.userAgent.match(/MSIE ([0-9]+)\./)) {
        jQuery.browser.msie = true;
        jQuery.browser.version = RegExp.$1;
    }
})();
or you can include this after loading the jquery.js file:
<script src=""></script>

+Find element by data attribute value (July 31, 2017, 1:18 a.m.)


+Smooth Scrolling (Feb. 21, 2017, 4:09 p.m.)

$(function() {
    $('a[href*="#"]:not([href="#"])').click(function() {
        if (location.pathname.replace(/^\//,'') == this.pathname.replace(/^\//,'') && location.hostname == this.hostname) {
            var target = $(this.hash);
            target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
            if (target.length) {
                $('html, body').animate({
                    scrollTop: target.offset().top
                }, 1000);
                return false;
            }
        }
    });
});

+Check image width and height before upload with Javascript (Oct. 5, 2016, 3:01 a.m.)

var _URL = window.URL || window.webkitURL;
$('#upload-face').change(function() {
    var file, img;
    if (file = this.files[0]) {
        img = new Image();
        img.onload = function () {
            if (this.width < 255 || this.height < 330) {
                alert('{% trans "The file dimension should be at least 255 x 330 pixels." %}');
            }
        };
        img.src = _URL.createObjectURL(file);
    }
});

+Get value of selected radio button (Aug. 1, 2016, 3:46 p.m.)


+Allow only numeric 0-9 in inputbox (April 25, 2016, 9:18 p.m.)

$(".numeric-inputs").keydown(function(event) {
    // Allow only backspace, delete, tab, ctrlKey
    if (event.keyCode == 46 || event.keyCode == 8 || event.keyCode == 9 || event.ctrlKey) {
        // let it happen, don't do anything
    } else {
        // Ensure that it is a number and stop the keypress otherwise
        if ((event.keyCode >= 48 && event.keyCode <= 57) || (event.keyCode >= 96 && event.keyCode <= 105)) {
            // let it happen, don't do anything
        } else {
            event.preventDefault();
        }
    }
});

+Access parent of a DOM using the (event) parameter (April 25, 2016, 1:47 p.m.)

var membership_id = $('id');

+Prevent big files to be uploaded (March 5, 2016, 12:08 a.m.)

$('#id_certificate').bind('change', function() {
    if (this.files[0].size > 1048576) {
        alert("{% trans 'The file size should be less than 1 MB.' %}");
    }
});

+Background FullScreen Slider + Fade Effect (Feb. 5, 2016, 7:21 p.m.)


$(document).ready(function() {
    var images = [];
    var titles = [];
    {% for slider in sliders %}
        images.push('{{ slider.image.url }}');
        titles.push('{{ slider.image.motto_en }}');
    {% endfor %}

    var image_index = 0;
    $('#iind-slider').css('background-image', 'url(' + images[0] + ')');
    setInterval(function() {
        image_index += 1;
        if (image_index == images.length) {
            image_index = 0;
        }
        $('#iind-slider').fadeOut('slow', function() {
            $(this).css('background-image', 'url(' + images[image_index] + ')');
            $(this).fadeIn('slow');
        });
    }, 4000);
});

#iind-slider {
    width: 100%;
    height: 100vh;
    background: no-repeat fixed 0 0;
    background-size: 100% 100%;
}

+Convert Seconds to real Hour, Minutes, Seconds (Feb. 1, 2016, 10:54 p.m.)

function secondsTimeSpanToHMS(s) {
    var h = Math.floor(s / 3600); // Get whole hours
    s -= h * 3600;
    var m = Math.floor(s / 60); // Get remaining minutes
    s -= m * 60;
    return h + ":" + (m < 10 ? '0' + m : m) + ":" + (s < 10 ? '0' + s : s); // zero padding on minutes and seconds
}

setInterval(function() {
    var left_time = secondsTimeSpanToHMS(server_left_time);
    server_left_time -= 1;
}, 1000);

+Error - TypeError: $.browser is undefined (Jan. 15, 2016, 1:53 a.m.)

Find this script file and include it after the main jquery file:

+Multiple versions of jQuery in one page (Jan. 8, 2016, 5:54 p.m.)

1- Load the jquery libraries like the example:

<script type="text/javascript" src="{% static 'iind/js/jquery-1.7.1.min.js' %}"></script>
<script type="text/javascript">
    var jQuery_1_7_1 = $.noConflict(true);
</script>

<script type="text/javascript" src="{% static 'iind/js/jquery-1.11.3.min.js' %}"></script>
<script type="text/javascript">
    var jQuery_1_11_3 = $.noConflict(true);
</script>
2- Then use them as follows:

jQuery_1_11_3(document).ready(function() {
    jQuery_1_11_3('.dropdown').hover(function() {
        jQuery_1_11_3('.dropdown-menu', this).stop(true, true).fadeIn("fast");
        jQuery_1_11_3('b', this).toggleClass("caret caret-up");
    }, function() {
        jQuery_1_11_3('.dropdown-menu', this).stop(true, true).fadeOut("fast");
        jQuery_1_11_3('b', this).toggleClass("caret caret-up");
    });
});
And change the last line of jQuery plugin files from:

}(jQuery, window, document));

to:

}(jQuery_1_11_3, window, document));
And for bootstrap.min.js, I had to change this long line (the last word, jQuery, needed to be changed to jQuery_1_11_3):

if("undefined"==typeof jQuery)throw new Error("Bootstrap's JavaScript requires jQuery");+function(a){var b=a.fn.jquery.split(" ")[0].split(".");if(b[0]<2&&b[1]<9||1==b[0]&&9==b[1]&&b[2]<1)throw new Error("Bootstrap's JavaScript requires jQuery version 1.9.1 or higher")}(jQuery)

if("undefined"==typeof jQuery)throw new Error("Bootstrap's JavaScript requires jQuery");+function(a){var b=a.fn.jquery.split(" ")[0].split(".");if(b[0]<2&&b[1]<9||1==b[0]&&9==b[1]&&b[2]<1)throw new Error("Bootstrap's JavaScript requires jQuery version 1.9.1 or higher")}(jQuery_1_11_3)

+Redirect Page (Dec. 20, 2015, 11:57 a.m.)

// similar behavior as an HTTP redirect
window.location.replace("");

// similar behavior as clicking on a link
window.location.href = "";


+Smooth scrolling when clicking an anchor link (Sept. 10, 2015, midnight)

var $root = $('html, body');
$('a').click(function () {
    $root.animate({
        scrollTop: $($.attr(this, 'href')).offset().top
    }, 1500);
    return false;
});

+Attribute Selector (Aug. 26, 2015, 4:01 p.m.)

$( "input[value='Hot Fuzz']" ).next().text( "Hot Fuzz" );
$("ul").find("[data-slide='" + current + "']");

$("ul[data-slide='" + current +"']");

+Underscore Library (Aug. 26, 2015, 2:01 p.m.)

if (_.contains(intensity_filters, intensity_value)) {
    intensity_filters = _.without(intensity_filters, intensity_value);
}
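For reference, the same contains/without pair can be written with plain Array methods, no Underscore needed; the variable values below are illustrative:

```javascript
// _.contains(list, v)  ->  list.includes(v)
// _.without(list, v)   ->  list.filter(x => x !== v)
let intensity_filters = ['low', 'high'];
const intensity_value = 'high';

if (intensity_filters.includes(intensity_value)) {
  // Remove the value; filter() returns a new array rather than mutating.
  intensity_filters = intensity_filters.filter(v => v !== intensity_value);
}

console.log(intensity_filters); // ['low']
```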

+Get a list of checked/unchecked checkboxes (Aug. 26, 2015, 1:51 p.m.)

var selected = [];
$('#checkboxes input:checked').each(function() {
    selected.push($(this).val());
});

And for getting the unchecked ones:
$('#checkboxes input:not(:checked)').each(function() {});

+Comma Separate Number (Aug. 14, 2015, 11:59 a.m.)

function commaSeparateNumber(val) {
    while (/(\d+)(\d{3})/.test(val.toString())) {
        val = val.toString().replace(/(\d+)(\d{3})/, '$1' + ',' + '$2');
    }
    return val;
}

+Hide a DIV when the user clicks outside of it (Aug. 12, 2015, 2:42 p.m.)

$(document).mouseup(function (e) {
    var container = $("#my-cart-box");
    if (! && container.has( === 0) {
        container.hide();
    }
});

+Reset a form in jquery (Aug. 1, 2015, 1:19 a.m.)


+Event binding on dynamically created elements (Aug. 14, 2015, 12:06 a.m.)

Add a click event for dynamically created tr elements in a table:

$('.found-companies-table').on('click', 'tr', function() {
    // Do some code here
});

The general pattern is to bind the handler to a static ancestor of the dynamic element:
$(staticAncestors).on(eventName, dynamicChild, function() {});

More examples:
$("body").on("mouseover mouseout", "select", function(e) {});
$('body').on('click', '.delete-order', function(e) {});

+Select all (table rows) except first (July 18, 2015, 3:12 a.m.)


+Deleting all rows in a table (July 15, 2015, 3:29 p.m.)

$("#mytable > tbody").html("");
---------------------------------------- OR ----------------------------------------
$("#myTable tbody").children('tr:not(:first)').remove();

+Anthology Plugin Websites (April 6, 2016, 8:13 p.m.)

-------------------------- SELECTIONS --------------------------

******************* SLIDER *******************

******************* MODALS *******************

******************* SCROLL *******************

******************* DRAWERS *******************

******************* GALLERY *******************


******************* TEXT EFFECT *******************

******************* IMAGE EFFECT *******************

************* BACKGROUND ANIMATE **************

*********** TICKER / TYPEWRITER *************

******************* MISC *******************

******************* AUTO SCROLL *******************

******************* FORM INPUTS *******************

******************* CHART *******************

******************* ACCORDION *******************

******************* MENU *******************

+Focus the first input in your form (June 30, 2015, 3:05 p.m.)


+jQuery `data` vs `attr`? (Aug. 21, 2014, 3:03 p.m.)

If you are passing data to a DOM element from the server, you should set the data on the element:

<a id="foo" data-foo="bar" href="#">foo!</a>
The data can then be accessed using .data() in jQuery:

console.log( $('#foo').data('foo') );
//outputs "bar"
However, when you store data on a DOM node in jQuery using .data(), the variables are stored on the node object. This is to accommodate complex objects and references, since storing the data on the element as an attribute will only accommodate string values.

Continuing my example from above:
$('#foo').data('foo', 'baz');

console.log( $('#foo').attr('data-foo') );
//outputs "bar" as the attribute was never changed

console.log( $('#foo').data('foo') );
//outputs "baz" as the value has been updated on the object
Also, the naming convention for data attributes has a bit of a hidden "gotcha":

<a id="bar" data-foo-bar-baz="fizz-buzz" href="#">fizz buzz!</a>
console.log( $('#bar').data('fooBarBaz') );
//outputs "fizz-buzz" as hyphens are automatically camelCase'd
The hyphenated key will still work:

<a id="bar" data-foo-bar-baz="fizz-buzz" href="#">fizz buzz!</a>
console.log( $('#bar').data('foo-bar-baz') );
//still outputs "fizz-buzz"
However the object returned by .data() will not have the hyphenated key set:

$('#bar').data().fooBarBaz; //works
$('#bar').data()['fooBarBaz']; //works
$('#bar').data()['foo-bar-baz']; //does not work
It's for this reason I suggest avoiding the hyphenated key in JavaScript.
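The camel-casing follows the same rule the DOM dataset API uses; here is a plain-JS sketch of the mapping (dataAttrToKey is a hypothetical helper written for illustration, not a jQuery function):

```javascript
// How a data-* attribute name maps to the key jQuery's .data()
// (and the DOM `dataset` API) exposes: strip the "data-" prefix
// and camelCase the remaining hyphenated parts.
function dataAttrToKey(attrName) {
  return attrName
    .replace(/^data-/, '')
    .replace(/-([a-z])/g, (_, c) => c.toUpperCase());
}

console.log(dataAttrToKey('data-foo-bar-baz')); // "fooBarBaz"
console.log(dataAttrToKey('data-foo'));         // "foo"
```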

The .data() method will also perform some basic auto-casting if the value matches a recognized pattern:

<a id="foo" href="#" data-str="bar" data-bool="true" data-num="15" data-json='{"fizz":["buzz"]}'></a>
$('#foo').data('str'); //`"bar"`
$('#foo').data('bool'); //`true`
$('#foo').data('num'); //`15`
$('#foo').data('json'); //`{fizz:['buzz']}`
This auto-casting ability is very convenient for instantiating widgets & plugins:

$('.widget').each(function () {
If you absolutely must have the original value as a string, then you'll need to use .attr():

<a id="foo" href="#" data-color="ABC123"></a>
<a id="bar" href="#" data-color="654321"></a>
$('#foo').data('color').length; //6
$('#bar').data('color').length; //undefined, length isn't a property of numbers

$('#foo').attr('data-color').length; //6
$('#bar').attr('data-color').length; //6

+Leading colon in a jQuery selector (Aug. 21, 2014, 3:01 p.m.)

What's the purpose of a leading colon in a jQuery selector?
The :input selector selects all form controls (input, textarea, select and button elements), whereas the input selector selects all elements with the tag name input.

Since a radio button is a form element and uses the input tag, both selectors can be used to select radio buttons. However, the two approaches differ in how they find the elements, so each has different performance benefits.

+Colon and question mark (Aug. 21, 2014, 3 p.m.)

What is the meaning of the colon (:) and question mark (?) in jquery?
That's an inline if (the conditional, or "ternary", operator).
The expression before the question mark is the condition being tested. If it is true, the expression after the question mark is evaluated; otherwise, the expression after the colon is.
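A minimal illustration (the values are made up):

```javascript
// condition ? value-if-true : value-if-false
const age = 20;
const status = age >= 18 ? 'adult' : 'minor';
console.log(status); // "adult"

// Equivalent to:
// let status;
// if (age >= 18) { status = 'adult'; } else { status = 'minor'; }
```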

+Commands and examples (Aug. 21, 2014, 2:57 p.m.)

$('#toggle_message').attr('value', 'Show')
$(document).ready(function() {});
$(window).load(function() {});
$(window).unload(function() {
alert('You\'re leaving this page');
This alert will be raised when you move to another window by clicking a link, click the back or previous buttons of the browser, or close the tab.
$('*').length
Returns the number of all the elements in the page.
$(':text').focusin(function() {});
$(':text').blur(function() {});
$('#email').attr('value', 'Write your email address').focus(function() {
    // Some code
}).blur(function() {
    // Some code
});
search_name = jQuery.trim($(this).val());
$("#names li:contains('" + search_name + "')").addClass('highlight');
$('input[type="file"]').change(function() {
}).next().attr('disabled', 'disabled');
$('#menu_link').dblclick(function() {});
$('#click_me').toggle(function() {
    // Code here
}, function() {
    // Code here
});
var scroll_pos = $('#some_text').scrollTop();
$('#some_text').select(function() {});
$('a').bind('mouseenter mouseleave', function() {});
bind() is used to bind a series of events at once.
$('.hover').mousemove(function(e) {
    $('#some_div').text('x: ' + e.clientX + ' y: ' + e.clientY);
});
Hover over description:
$('.hover').mousemove(function(e) {
    var hovertext = $(this).attr('hovertext');
    $('#hoverdiv').text(hovertext).show();
    $('#hoverdiv').css('top', e.clientY + 10).css('left', e.clientX + 10);
}).mouseout(function() {
    $('#hoverdiv').hide();
});

Create an empty div with id="hoverdiv" in the HTML, and style it in CSS.
.addClass('class1 class2 class3')
$(':input').focus(function() {});
Traversing using .each():

$('input[type="text"]').each(function(index) {});
This index argument prints 0, 1, 2, ... per the items which are selected by .each statement/function.
These two statements do the same thing:
$('.names li:first').append('Hello');

if($(this).has('li').length == 0) { }

if($(this).has(':contains')) {}
This is useful when you want to toggle a sub-menu using the first/top item.
$(this).hide('slow', 'linear', function() {});

.stop() Will cause the animation of slide effect to stop
.fadeTo(100, 0.4, function() {})
$('.fadeto').not(this).fadeTo(100, 0.4);
$('.fadeto').css('opacity', '0.4');
$('.fadeto').mouseover(function() {
$(this).fadeTo(100, 1);
$('.fadeto').not(this).fadeTo(100, 0.4);
$('html, body').animate({scrollTop: 0}, 10000);
$('#terms').scroll(function() {
var textarea_height = $(this)[0].scrollHeight;
var scroll_height = textarea_height - $(this).innerHeight();

var scroll_top = $(this).scrollTop();
var names = ['Alex', 'Billy', 'Dale'];
if (jQuery.inArray('Alex', names) !== -1) {}
$.each(names, function(index, value) {})
setInterval(function() {
var timestamp =;
}, 1);
(function($) {
    $.fn.your_new_function_name = function() {};
})(jQuery);
$('#drag').draggable({axis: 'x'});
$('#drag').draggable({containment: 'document'});
$('#drag').draggable({containment: 'window'});
$('#drag').draggable({containment: 'parent'});
$('#drag').draggable({containment: [0, 0, 200, 200]});
$('#drag').draggable({cursor: 'pointer'});
$('#drag').draggable({opacity: 0.6});
$('#drag').draggable({grid: [20, 20]});
$('#drag').draggable({revert: true});
$('#drag').draggable({revertDuration: 1000});
$('#drag').draggable({start: function() {}});
$('#drag').draggable({drag: function() {}});
$('#drag').draggable({stop: function() {}});
$('#drop').droppable({hoverClass: 'border'});
$('#drop').droppable({tolerance: 'fit'});
$('#drop').droppable({tolerance: 'intersect'});
$('#drop').droppable({tolerance: 'pointer'});
$('#drop').droppable({tolerance: 'touch'});
$('#drop').droppable({accept: '.name'});
$('#drop').droppable({over: function() {}});
$('#drop').droppable({out: function() {}});
$('#drop').droppable({drop: function() {}});
$('#names').sortable({containment: 'parent'});
$('#names').sortable({tolerance: 'pointer'});
$('#names').sortable({cursor: 'pointer'});
$('#names').sortable({revert: true});
$('#names').sortable({opacity: 0.6});
$('#names').sortable({connectWith: '#places, #names'});
$('#names').sortable({update: function() {}});
This requires a CSS file, `jquery-ui-custom.css`.

$('#box').resizable({containment: 'document'});
$('#box').resizable({animate: true});
$('#box').resizable({ghost: true});

$('#box').resizable({animateDuration: 'slow'});
`slow`, `medium`, `fast`, `normal`, `1000`

$('#box').resizable({animateEasing: 'swing'});
`swing`, `linear`

$('#box').resizable({aspectRatio: true});
`0.4`, `2/5`, `9/10`

$('#box').resizable({autoHide: true});

$('#box').resizable({handles: 'n, e, se'});
n=North, e=East, w=West, s=South, or `all`
If you do not specify `all`, you cannot resize the box from the left or top, as those edges are too close to the browser edge.

$('#box').resizable({grid: [20, 20]});
$('#box').resizable({minHeight: 200});
$('#box').resizable({maxHeight: 100});
$('#box').resizable({minWidth: 200});
$('#box').resizable({maxWidth: 100});
$('#content').accordion({fillSpace: true})
$('#content').accordion({icons: {'header': 'ui-icon-plus', 'headerSelected': 'ui-icon-minus'}})
$('#content').accordion({collapsible: true})
$('#content').accordion({active: 2})
$('#dialog').attr('title', 'Saved').text('Settings were saved.').dialog();
.dialog({buttons: {'OK': function() {}}});
closeOnEscape: true
draggable: false
resizable: false
show: 'fade', 'bounce'
modal: true
position: 'top', 'top, left', 'bottom', 'top, center', [100, 100]

var val = 0;
var interval = setInterval(function() {
    val = val + 1;
    $('#pb').progressbar({value: val});
    $('#percent').text(val + '%');
    if (val == 100) {
        clearInterval(interval);
    }
}, 100);

Used in ins_menu.html (nimkatonline):
$("#header_menus img:not(.hover_menus)").mouseenter(function() {
    $("#" + $(this).attr('data-hover')).show();
});

+Editing KDE Application Launcher Menus (May 11, 2015, 5:31 p.m.)

Use `kmenuedit`

+Delete session (March 20, 2015, 11:36 a.m.)

Delete the files in:
rm ~/.kde/share/config/session/*

And delete the file:

+Create a package for IOS (Nov. 4, 2015, 6:06 a.m.)

sudo apt-get install autoconf automake libtool pkg-config

+PyCharm Completion (March 19, 2015, 9:25 a.m.)
1-Download this jar plugin:

2-On Pycharm’s main menu, click "File" -> Import Settings

3-Select this file and PyCharm will present a dialog with filetypes ticked. Click OK.

4-You are done. Restart PyCharm

+Android API (Feb. 12, 2015, 9:54 p.m.)
I have this class in Java docs:

And in python it is:
TextToSpeech = autoclass('android.speech.tts.TextToSpeech')

Based on these, I thought that for getting another class in Java (android.speech.tts.TextToSpeech.Engine) I had to:
Engine = autoclass('android.speech.tts.TextToSpeech.Engine')

But I got this error at runtime on my cellphone and the app would not open:
java.lang.ClassNotFoundException: android.speech.tts.TextToSpeech.Engine

I even could not access `Engine` using the pythonic way either:

I had to access the class by:
Engine = autoclass('android.speech.tts.TextToSpeech$Engine')
Python Dictionaries = Java HashMap:

HashMap<String, String> phoneBook = new HashMap<String, String>();
phoneBook.put("Mike", "555-1111");
phoneBook.put("Lucy", "555-2222");
phoneBook.put("Jack", "555-3333");

phoneBook = {}
phoneBook = {"Mike":"555-1111", "Lucy":"555-2222", "Jack":"555-3333"}

And to implement it in Kivy:
HashMap = autoclass('java.util.HashMap')
hash_map = HashMap()
hash_map.put(key, value)
To access nested classes, use $ like: autoclass('android.provider.MediaStore$Images$Media').

+Sign apk files (Oct. 4, 2015, 11:42 a.m.)

1-Generate a private key using keytool. For example:
$ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000
This example prompts you for passwords for the keystore and key, and to provide the Distinguished Name fields for your key. It then generates the keystore as a file called my-release-key.keystore. The keystore contains a single key, valid for 10000 days. The alias is a name that you will use later when signing your app.

2-Compile your app in release mode to obtain an unsigned APK:
buildozer android release

3-Sign your app with your private key using jarsigner:
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore my_application.apk alias_name
This example prompts you for passwords for the keystore and key. It then modifies the APK in-place to sign it. Note that you can sign an APK multiple times with different keys.

4-Verify that your APK is signed. For example:
jarsigner -verify -verbose -certs my_application.apk

5-Align the final APK package using zipalign.
zipalign does not exist in the Synaptic Package Manager; it ships with the Android SDK Build Tools. Use locate to find `zipalign` and create a symbolic link in /usr/bin:
ln -s /home/moh3en/Programs/Android/Development/android-sdk-linux/build-tools/android-5.0/zipalign /usr/bin/
zipalign -v 4 your_project_name-unaligned.apk your_project_name.apk

buildozer android release

jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore excludes/my-release-key.keystore bin/NimkatOnline-1.2.4-release-unsigned.apk mohsen_hassani

jarsigner -verify -verbose -certs bin/NimkatOnline-1.2.4-release-unsigned.apk

zipalign -v 4 bin/NimkatOnline-1.2.4-release-unsigned.apk bin/NimkatOnline-1.2.4.apk

+Label (Feb. 12, 2015, 9:52 p.m.)

When creating a label, by default it is placed at the bottom-left corner with part of it hidden; changing its `size` property solves this:
size: self.texture_size

Scrolling a Label:
text: str('A very long text' * 100)
font_size: 50
text_size: self.width, None
size_hint_y: None
height: self.texture_size[1]

+FloatLayout (Feb. 12, 2015, 9:51 p.m.)

Similar to RelativeLayout, except now position is relative to window, and not Layout.
Thus in FloatLayout, pos = 0, 0 refers to lower-left corner.

+RelativeLayout (Feb. 12, 2015, 9:51 p.m.)

Each child widget's size and position has to be given.
size_hint, pos_hint: numbers relative to Layout.
If those two parameters are used, it makes no difference whether RelativeLayout or FloatLayout is used; both yield the same result.

+GridLayout (Feb. 12, 2015, 9:51 p.m.)

Similar to StackLayout 'lr-tb'
Either cols or rows has to be given and the Layout adjusts so the given number is the maximum number of cols or rows.

+Canvas (Feb. 12, 2015, 9:51 p.m.)

Canvas refers to graphical instructions.
The instructions could be non-visual, called context instructions, or visual, called vertex instructions.
An example of a non-visual instruction would be to set a color.
An example of a visual instruction would be drawing a rectangle.

+StackLayout (Feb. 12, 2015, 9:50 p.m.)

1-More flexible than BoxLayout
right to left or left to right
top to bottom or bottom to top
rl-bt, rl-tb, lr-bt, lr-tb (Row-wise)
bt-rl, bt-lr, tb-rl, tb-lr (Column-wise)

+Snippets (Feb. 12, 2015, 9:50 p.m.)

pos_hint: {'x': .1}
size_hint: [.2, 2]
pos_hint: {'center_x': .3}
in kv file:
on_text: my_label.color = [random.random() for i in xrange(3)] + [1]

+on_touch_up vs on_release (Feb. 12, 2015, 9:49 p.m.)

When using on_touch_up event with partial, you have to pass three arguments to the calling method:

button.ids.speaker_button.bind(on_touch_up=partial(self.speak_word, main_word))

def speak_word(word, widget, touch): # on_touch_up passes the widget and the touch event as the two extra args

After touching the button, all the same buttons on the page are also triggered. You have to solve it using something like this:
on_touch_up: vibrate() if self.collide_point(*args[1].pos) else None
But using on_release, two args are passed:
button.ids.speaker_button.bind(on_release=partial(self.speak_word, main_word))

def speak_word(word, button):

After clicking, only the button which has been touched will be triggered. That's good!

+Partial (Feb. 12, 2015, 9:49 p.m.)

In Kivy, you register a button release callback with the “bind()” function:
But the signature of the “on_release” method is “on_release(self)”, which means that the method you provide will receive only one parameter — the button that generated the event. When you release the button, Kivy will invoke your callback method and pass in the button that you released.

So does this mean we can’t pass user-defined parameters to our handlers? Does it mean we need to use globals or a bunch of specialized methods to write our button handlers? No, this is where Python’s functools.partial comes in handy.

To oversimplify, partial allows you to create a function with one set of arguments that calls another function with a different set of arguments. For example, consider the following function that takes two arguments:

def addTwoNumbers(x, y):
    print "x: %d, y: %d" % (x, y)
    return x+y
You can create a partial from this that automatically supplies one or more of the arguments. Let's create one that supplies '1' for 'x':

addOne = partial(addTwoNumbers, 1)
Which you would then invoke as such:

>>> #We pass in '2' for 'y' here. The partial fills in '1' for 'x'
>>> addOne(2)
x: 1, y: 2
Let’s create a function that can set any label to any text:

def changeLabel(label, text, button):
    # Kivy gives us 'button' to let us know which button
    # caused the event, but we don't use it
    label.text = text

In our UI setup, we can then bind two different buttons to this handler, creating partials that supply values for the extra arguments:

startButton = Button(text='Start Car')
stopButton = Button(text='Stop Car')

startButton.bind(on_release=partial(changeLabel, statusLabel, "Starting Car..."))
stopButton.bind(on_release=partial(changeLabel, statusLabel, "Stopping Car..."))
Now, by inspecting the setup code, it’s fairly easy to see what the UI does when various events occur. We can even extend this further to perform an action after setting the label:

def changeLabelAndRun(label, text, command, button):
    label.text = text
    command()
This allows our setup code to specify a UI behavior and trigger an action (assume ‘startCar’ and ‘stopCar’ have been defined as functions elsewhere):

startButton.bind(on_release=partial(changeLabelAndRun, statusLabel, "Starting Car...", startCar))
stopButton.bind(on_release=partial(changeLabelAndRun, statusLabel, "Stopping Car...", stopCar))
Unlike C, there’s no casting, no packing things into structs, and it’s easy to extend for different needs. Snazzy! This might not scale perfectly to complicated UI interactions, but it greatly simplifies straightforward event processing, making it easier to see at a glance what the application is doing.
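To make the mechanics concrete, here is a runnable condensation of the example (plain Python, with hypothetical stub Label/Button classes standing in for Kivy's, since only the partial mechanics matter):

```python
from functools import partial

class Label:
    def __init__(self, text=''):
        self.text = text

class Button:
    """Hypothetical stand-in for kivy.uix.button.Button: stores
    on_release callbacks and fires them with the button instance,
    mimicking Kivy's event signature."""
    def __init__(self, text):
        self.text = text
        self._on_release = []
    def bind(self, on_release):
        self._on_release.append(on_release)
    def release(self):
        for cb in self._on_release:
            cb(self)

def changeLabel(label, text, button):
    # 'button' arrives from the event; only the bound args are used.
    label.text = text

statusLabel = Label()
startButton = Button(text='Start Car')
startButton.bind(on_release=partial(changeLabel, statusLabel, 'Starting Car...'))
startButton.release()    # simulate the user releasing the button
print(statusLabel.text)  # Starting Car...
```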

+BoxLayout vs. GridLayout (Sept. 9, 2015, 12:47 p.m.)

The widgets in a BoxLayout can have different width and height, but in a GridLayout, each row or column should have the same size.

The widgets in BoxLayout are placed from bottom to top, but those in a GridLayout are placed from top to bottom.

In a BoxLayout the widgets cannot be placed next to each other! They are placed one widget per row (if orientation is vertical) or one per column (if orientation is horizontal).

+Background Image for Button (Feb. 12, 2015, 9:48 p.m.)

background_normal: 'home_button.png'
background_down: 'home_button_down.png'

+DropDown (Feb. 12, 2015, 9:48 p.m.)

1- First of all, make sure that the dropdown doesn't get opened while the widget is not on screen. That is, you should only instantiate it; do not pass it to add_widget or anything else that would trigger it.

2-For getting the data which is passed through `a_button.on_release:'the_value')`, you have to use:
on_select: select_controller(args[1])
on the DropDown. Here is the example:
on_select: select_controller(args[1]) # Try printing `args` to see all the items.
text: 'Update Database'

+Spinner vs. DropDown (Sept. 9, 2015, 12:44 p.m.)

Spinner is a widget that provides a quick way to select one value from a set. In the default state, a spinner shows its currently selected value. Touching the spinner displays a dropdown menu with all other available values from which the user can select a new one.

+Commands (Feb. 12, 2015, 9:47 p.m.)

buildozer android debug

+Buildozer (Feb. 12, 2015, 9:47 p.m.)

1-git clone
2- Activate a virtualenv (and check that the default `python` command leads to Python 2.7) because buildozer needs Python 2.7
4-python install
buildozer init
buildozer android debug
buildozer android logcat
adb logcat
AndroidSDK and AndroidNDK are needed for buildozer, if you have already downloaded them, provide the paths like these:
android.ndk_path = /home/moh3en/Programs/Android/Development/android-ndk-r9c
android.sdk_path = /home/moh3en/Programs/Android/Development/android-sdk-linux

if not, buildozer will try to download them, but unfortunately, because of the embargo, they won't get downloaded. So you have to download them through a proxy and untar/unzip them somewhere.
sudo adb uninstall com.nimkatonline.en
sudo adb install bin/NimkatOnline-1.2.0.apk

+Installing python packages (Feb. 12, 2015, 9:46 p.m.)

For installing python packages use this command:
./ -m "kivy requests==2.1.0 SQLAlchemy"

You will need these environment variables:
export ANDROIDSDK="/home/mohsen/Programs/android-sdk-linux"
export ANDROIDNDK="/home/mohsen/Programs/android-ndk-r8c"
export ANDROIDAPI=14

+Python Android Path (Feb. 12, 2015, 9:46 p.m.)

This is the path to the python used for android. Use this path for managing (installing or uninstalling) packages which are going to be installed, packed and used for your app.

+Error ==> Source resource does not exist: python-for-android/dist/default/ (Feb. 12, 2015, 9:43 p.m.)

export ANDROIDAPI=15

+Chat (Feb. 12, 2015, 9:42 p.m.)

<Mohsen_Hassani> Hello guys. I am very new to Kivy. I am using psycopg2 to read data from my remote VPS. I wanted to know if it will work after making apk too?
<brousch> Mohsen_Hassani: Pure Python modules will work fine. I'm not sure if psycopg2 is pure Python
<kovak> Mohsen_Hassani: the first step is to write a recipe for python-for-android to see if you can compile for ARM without any problems
<kovak> I think psycopg2 has C bits
<kovak> if it compiles in arm no problem you are good to go, if not you may need to patch the source
<brousch> However, except in very rare cases, your Android app should not be communicating directly with your database server. There should be a proper API on top of that database
<tito> Mohsen_Hassani: the best shot you have is to put your tgz into a directory, go into the directory, and start python -m SimpleHTTPServer
<tito> then do: URL_python=http://localhost:8000/Python-2.7.2.tar.bz2 URL_hostpython=http://localhost:8000/Python-2.7.2.tar.bz2 ./ -m 'openssl pil kivy'

+Building the application (Feb. 12, 2015, 9:36 p.m.)

cd dist/default
./ --permission INTERNET --orientation sensor --package com.mohsenhassani.notes --name My\ Notes --version 1.0 --dir ~/Projects/kivy_projects/notes/ debug
Install the debug apk to your device:
adb install bin/touchtracer-1.0-debug.apk
/usr/bin/python2.7 --name 'My Notes' --version 1.0 --package com.mohsenhassani.notes --private /home/mohsen/Projects/kivy_projects/notes/.buildozer/android/app --sdk 14 --minsdk 8 --permission INTERNET --icon /home/mohsen/Projects/kivy_projects/notes/./static/icon.png --orientation sensor debug

+Installation (July 17, 2015, 1:26 a.m.)


Installation Steps:
1-apt-get install python-gst0.10-dev python-gst-1.0 freeglut3-dev libsdl-image1.2-dev libsdl-ttf2.0-dev libsdl-mixer1.2-dev libsmpeg-dev libportmidi-dev libswscale-dev libavformat-dev libavcodec-dev libv4l-dev libserf-1-1 libsvn1 subversion openjdk-7-jdk python-pygame
2-Create and activate a virtualenv
3-easy_install requests
4-easy_install -U setuptools
5-pip install cython==0.20
6-pip install pygments
7-pip install --allow-all-external pil --allow-unverified pil

8.1- Before installing pygame (the next step) you need to create a symlink, or you will get the error "fatal error: linux/videodev.h: No such file or directory". So first create the symlink:
sudo ln -s /usr/include/libv4l1-videodev.h /usr/include/linux/videodev.h

8.2-pip install pygame (It won't be found or downloaded! You need to download the tar file from and install it using pip install <the_downloaded_tar_file>.)

9-pip install kivy

+DVB - TV Card Driver (April 17, 2015, 7:49 p.m.)

This will install the driver automatically:

1- mkdir it9135 && cd it9135

2- wget

3- unzip

4- dd if=dvb-usb-it9135.fw ibs=1 skip=64 count=8128 of=dvb-usb-it9135-01.fw

5- dd if=dvb-usb-it9135.fw ibs=1 skip=12866 count=5817 of=dvb-usb-it9135-02.fw

6- rm dvb-usb-it9135.fw

7- sudo install -D *.fw /lib/firmware

8- sudo chmod 644 /lib/firmware/dvb-usb-it9135* && cd .. && rm -rf it9135

9- sudo apt install kaffeine

After the above solution, you should be able to watch channels via Kaffeine (or any other DVB player). Just open Kaffeine, scan the frequencies and you should be fine!


If you had problems with the above solution, check the older method below:

1-sudo apt-get install libproc-processtable-perl git libc6-dev

2-git clone git://

3-cd media_build

4-$ ./build

5-sudo make install

6-apt-get install me-tv kaffeine

7-reboot for loading the driver (I don't know the driver for modprobe yet).


Scan channels using Kaffeine:

1-Open Kaffein

2-From `Television` menu, choose `Configure Television`.

3-From `Device 1` tab, from `Source` option, choose `Autoscan`

4-From `Television` menu choose `Channels`

5-Click on `Start Scan` and after the scan procedure is done, select all channels from the side panel and click on `Add Selected` to add them to your channels.


Scan channels using Me-TV

1-Open Me-TV

2-When the scan dialog opens, choose `Czech Republic` from `Auto Scan`.


+Permanently set $PATH (April 19, 2019, 9:39 p.m.)

vim /root/.profile

export PATH="$PATH:/usr/share/logstash/bin/"

+Test if a port is open (April 7, 2018, 9:07 p.m.)

telnet 80
nc -z 80

+sed - inline string replace (April 7, 2018, 6:29 p.m.)

echo "the old string . . . " | sed -e "s/old/new/g"
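For comparison, the same global substitution in Python (re.sub replaces all occurrences by default, like sed's g flag):

```python
import re

text = "the old string . . . "
print(re.sub("old", "new", text))  # the new string . . .
```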

+Install GRUB manually (March 9, 2018, 12:05 p.m.)

sudo mount /dev/sdax /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /dev/pts /mnt/dev/pts
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt

grub-install /dev/sda
update-grub
update-initramfs -u

+Forwarding X (March 6, 2018, 7:55 p.m.)

1- Edit the file sshd_config:
vim /etc/ssh/sshd_config

X11Forwarding yes
X11UseLocalhost no

2- Restart ssh server:
/etc/init.d/ssh reload

3- Install xauth:
apt install xauth

4- SSH to the server:
ssh -X

+Partitioning Error - Partition table entries are not in disk order (Feb. 13, 2018, 5:37 p.m.)

sudo gdisk /dev/sda
p (the p-command prints the recent partition-table on-screen)
s (the s-command sorts the partition-table entries)
p (use the p-command again to see the result on your screen)
w (write the changed partition-table to the disk)
q (quit gdisk)

+OwnCloud (Feb. 3, 2018, 3:37 p.m.)


1- apt install -y apache2 mariadb-server libapache2-mod-php7.0 php7.0-gd php7.0-json php7.0-mysql php7.0-curl php7.0-intl php7.0-mcrypt php-imagick php7.0-zip php7.0-xml php7.0-mbstring php-apcu php-redis redis-server php7.0-ldap php-smbclient

2- Download tar file from the address:
Extract the file to /srv/

3- Remove the config files in /etc/apache2/sites-available and "sites-enabled".
Create an Apache config file with the content:
vim /etc/apache2/sites-available/owncloud.conf

Redirect permanent /owncloud
<VirtualHost *:443>
    Header add Strict-Transport-Security: "max-age=15768000;includeSubdomains"
    SSLEngine on

    DocumentRoot /srv/owncloud

    <Directory /srv/owncloud>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

    <IfModule mod_dav.c>
        Dav off
    </IfModule>

    SetEnv HOME /srv/owncloud
    SetEnv HTTP_HOME /srv/owncloud
</VirtualHost>

4- Create a symlink:
ln -s /etc/apache2/sites-available/owncloud.conf /etc/apache2/sites-enabled/owncloud.conf

5- Enable some required modules for Apache, then restart it:
a2enmod rewrite
a2enmod headers
systemctl restart apache2

6- chown -R www-data:www-data /srv/owncloud

7- Configure Database:
mysql -u root -p
CREATE DATABASE owncloud;
GRANT ALL PRIVILEGES ON owncloud.* TO 'root'@'localhost' IDENTIFIED BY 'password';

8- Open the server address in browser and complete the installation:

9- vim /etc/php/7.0/cli/conf.d/20-apcu.ini

10- Add these two lines at the top of the file /srv/owncloud/data/.htaccess
deny from all
IndexIgnore *

11- Check the owncloud config file is the same as the following: /srv/owncloud/config/config.php
$CONFIG = array (
  'instanceid' => '...',
  'passwordsalt' => '...',
  'secret' => '...',
  'trusted_domains' =>
  array (
    0 => '',
  ),
  'datadirectory' => '/srv/owncloud/data',
  'overwrite.cli.url' => '',
  'dbtype' => 'mysql',
  'version' => '',
  'dbname' => 'owncloud',
  'dbhost' => 'localhost',
  'dbtableprefix' => 'oc_',
  'dbuser' => 'oc_admin',
  'dbpassword' => '...',
  'logtimezone' => 'UTC',
  'installed' => true,
  'filelocking.enabled' => true,
  'memcache.local' => '\OC\Memcache\APCu',
  'memcache.locking' => '\OC\Memcache\APCu',
);

12- Enabling SSL:
a2enmod ssl
a2ensite default-ssl
service apache2 reload

13- Edit the file /etc/php/7.0/cli/conf.d/20-apcu.ini and make sure it has only the value:
apc.enable_cli=1

Restart apache:
/etc/init.d/apache2 restart

Management Commands:

sudo -u www-data php /var/www/owncloud/occ user:resetpassword admin
See OwnCloud version:
sudo -u www-data php /var/www/owncloud/occ -V
sudo -u www-data php /var/www/owncloud/occ status
User Commands:
user:add adds a user
user:delete deletes the specified user
user:disable disables the specified user
user:enable enables the specified user
user:inactive reports users who are known to owncloud, but have not logged in for a certain number of days
user:lastseen shows when the user was logged in last time
user:list list users
user:list-groups list groups for a user
user:report shows how many users have access
user:resetpassword Resets the password of the named user
user:setting Read and modify user settings
user:sync Sync local users with an external backend service


+pmacct configuration with PostgreSQL (Jan. 30, 2018, 10:19 p.m.)

su postgres
psql -d template1 -f pmacct-create-db.pgsql
psql -d pmacct -f pmacct-create-table_v1.pgsql
vim /etc/pmacct/pmacctd.conf

+Get Hardware Information (Jan. 24, 2018, 4:40 p.m.)


+tcpdump (Jan. 13, 2018, 11:29 a.m.)

sudo tcpdump -i any -n host

+Use cURL on specific interface (Jan. 9, 2018, 1:09 p.m.)

curl -o rootLast.tbz2 --interface eno2

+pmacct (Jan. 1, 2018, 10:49 a.m.)

su postgres
psql -d template1 -f /tmp/pmacct-create-db.pgsql
psql -d pmacct -f /tmp/pmacct-create-table_v1.pgsql
Configuration Directives:
vim /etc/pmacct/nfacctd.conf
! nfacctd configuration
daemonize: true
pidfile: /var/run/
syslog: daemon
! interested in in and outbound traffic
aggregate: src_host,dst_host
! on this network
pcap_filter: net
! on this interface
interface: lo
! storage methods
plugins: pgsql
sql_host: localhost
sql_passwd: myrealsecurepwd
! refresh the db every 10 minutes
sql_refresh_time: 600
! reduce the size of the insert/update clause
sql_optimize_clauses: false
! accumulate values in each row for up to 10 minutes
sql_history: 10m
! create new rows on the minute, hour, day boundaries
sql_history_roundoff: 10m
! in case of emergency, log to this file
!sql_recovery_logfile: /var/lib/pmacct/nfacctd_recovery_log
nfacctd_port: 6653
imt_mem_pools_number: 0
plugin_pipe_size: 4096000
! plugin_buffer_size: 32212254720

+Chroot (Dec. 25, 2017, 11:11 a.m.)

chroot /srv/root /bin/bash

+Create PDF from pictures (Nov. 6, 2017, 3:21 p.m.)

convert *.jpg aa.pdf

+Add a New Disk to an Existing Linux Server (Oct. 25, 2017, 3:44 p.m.)

1- Check if the added disk is shown:
fdisk -l

2- For partitioning:
fdisk /dev/vdb
n (new partition; accept the defaults, and for the last sector enter +49G for a 50G disk)
Now format the new partition with mkfs:
mkfs.ext4 /dev/vdb1

Make an entry in /etc/fstab file for permanent mount at boot time:
/dev/vdb1 /mnt/ftp ext4 defaults 0 0

+DevStack (Oct. 4, 2017, 12:36 a.m.)
apt install sudo git

1- Add Stack User
useradd -s /bin/bash -d /opt/stack -m stack

2- Since this user will be making many changes to your system, it should have sudo privileges:
echo "stack ALL=(ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/stack
su - stack

3- Download DevStack
git clone
cd devstack

4- Create a local.conf with the following content

+Clear Terminal Completely (Sept. 18, 2017, 6:13 p.m.)

clear && printf '\e[3J'

+Add SSH Private Key (Sept. 18, 2017, 5:01 p.m.)

ssh-add .ssh/id_rsa

If you got an error:
Could not open a connection to your authentication agent.

For fixing it run:
eval `ssh-agent -s`
eval $(ssh-agent)

And then repeat the earlier command (ssh-add ....)
Add SSH private key permanently:
Create a file ~/.ssh/config with the content:
IdentityFile ~/.ssh/id_mohsen

+Commands - IP (Sept. 16, 2017, 5:29 p.m.)

Assign an IP Address to Specific Interface:
ip addr add dev eth1
Check an IP Address
ip addr show
Remove an IP Address
ip addr del dev eth1
Enable Network Interface
ip link set eth1 up
Disable Network Interface
ip link set eth1 down
Check Route Table
ip route show
Add Static Route
ip route add via dev eth0
Remove Static Route
ip route del
Add Default Gateway
ip route add default via

+Commands - Find (Sept. 12, 2017, 11:08 a.m.)

Find Files Using Name in Current Directory
find . -name mohsen.txt
Find Files Under Home Directory
find /home -name mohsen.txt
Find Files Using Name and Ignoring Case
find /home -iname mohsen.txt
Find Directories Using Name
find / -type d -name Mohsen
Find PHP Files Using Name
find . -type f -name mohsen.php
Find all PHP Files in Directory
find . -type f -name "*.php"
Find Files With 777 Permissions
find . -type f -perm 0777 -print
Find Files Without 777 Permissions
find / -type f ! -perm 777
Find SGID Files with 644 Permissions
find / -perm 2644
Find Sticky Bit Files with 551 Permissions
find / -perm 1551
Find SUID Files
find / -perm /u=s
Find SGID Files
find / -perm /g=s
Find Read Only Files
find / -perm /u=r
Find Executable Files
find / -perm /a=x
Find Files with 777 Permissions and Chmod to 644
find / -type f -perm 0777 -print -exec chmod 644 {} \;
Find Directories with 777 Permissions and Chmod to 755
find / -type d -perm 777 -print -exec chmod 755 {} \;
Find and remove single File
find . -type f -name "tecmint.txt" -exec rm -f {} \;
Find and remove Multiple File
find . -type f -name "*.txt" -exec rm -f {} \;
# find . -type f -name "*.mp3" -exec rm -f {} \;
Find all Empty Files
find /tmp -type f -empty
Find all Empty Directories
find /tmp -type d -empty
Find all Hidden Files
find /tmp -type f -name ".*"
Find Single File Based on User
find / -user root -name mohsen.txt
Find all Files Based on User
find /home -user mohsen
Find all Files Based on Group
find /home -group developer
Find Particular Files of User
find /home -user mohsen -iname "*.txt"
Find Last 50 Days Modified Files
find / -mtime 50
Find Last 50 Days Accessed Files
find / -atime 50
Find Last 50-100 Days Modified Files
find / -mtime +50 -mtime -100
Find Changed Files in Last 1 Hour
find / -cmin -60
Find Modified Files in Last 1 Hour
find / -mmin -60
Find Accessed Files in Last 1 Hour
find / -amin -60
Find 50MB Files
find / -size 50M
Find Size between 50MB – 100MB
find / -size +50M -size -100M
Find and Delete 100MB Files
find / -size +100M -exec rm -rf {} \;
Find Specific Files and Delete
find / -type f -name "*.mp3" -size +10M -exec rm {} \;
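A few of these recipes translate directly to Python's pathlib, which can be handier inside scripts; a minimal sketch (function names are made up for illustration):

```python
from pathlib import Path

def find_by_name(root, pattern):
    """Like: find ROOT -type f -name "PATTERN" (recursive)."""
    return [p for p in Path(root).rglob(pattern) if p.is_file()]

def find_larger_than(root, size_bytes):
    """Like: find ROOT -type f -size +N (N in bytes here)."""
    return [p for p in Path(root).rglob('*')
            if p.is_file() and p.stat().st_size > size_bytes]
```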

+Commands - Netstat (Sept. 12, 2017, 11 a.m.)

netstat (network statistics)
Listing all the LISTENING Ports of TCP and UDP connections
netstat -a
Listing TCP Ports connections
netstat -at
Listing UDP Ports connections
netstat -au
Listing all LISTENING Connections
netstat -l
Listing all TCP Listening Ports
netstat -lt
Listing all UDP Listening Ports
netstat -lu
Listing all UNIX Listening Ports
netstat -lx
Showing Statistics by Protocol
netstat -s
Showing Statistics by TCP Protocol
netstat -st
Showing Statistics by UDP Protocol
netstat -su
Displaying Service name with PID
netstat -tp
Displaying Promiscuous Mode
netstat -ac 5 | grep tcp
Displaying Kernel IP routing
netstat -r
Showing Network Interface Transactions
netstat -i
Showing Kernel Interface Table
netstat -ie
Displaying IPv4 and IPv6 Information
netstat -g
Print Netstat Information Continuously
netstat -c
Finding unsupported address families
netstat --verbose
Finding Listening Programs
netstat -ap | grep http
Displaying RAW Network Statistics
netstat --statistics --raw

+NSQ (Sept. 12, 2017, 9:45 a.m.)


1-Download and extract:

cp nsq-1.0.0-compat.linux-amd64.go1.8/bin/* /usr/local/bin/
Quick Start:
1- In one shell, start nsqlookupd:
$ nsqlookupd

2- In another shell, start nsqd:
$ nsqd --lookupd-tcp-address=

3- In another shell, start nsqadmin:
$ nsqadmin --lookupd-http-address=

4- Publish an initial message (creates the topic in the cluster, too):
$ curl -d 'hello world 1' ''

5- Finally, in another shell, start nsq_to_file:
$ nsq_to_file --topic=test --output-dir=/tmp --lookupd-http-address=

6- Publish more messages to nsqd:
$ curl -d 'hello world 2' ''
$ curl -d 'hello world 3' ''

7- To verify things worked as expected, in a web browser open to view the nsqadmin UI and see statistics. Also, check the contents of the log files (test.*.log) written to /tmp.

The important lesson here is that nsq_to_file (the client) is not explicitly told where the test topic is produced, it retrieves this information from nsqlookupd and, despite the timing of the connection, no messages are lost.
Clustering NSQ:


nsqd --lookupd-tcp-address=,,

nsqadmin --lookupd-http-address=,,

+Reverse SSH Tunneling (Sept. 10, 2017, 3:08 p.m.)

1- SSH from the destination to the source (with public IP) using the command below:
ssh -R 19999:localhost:22 sourceuser@
* port 19999 can be any unused port.

2- Now you can SSH from source to destination through SSH tunneling:
ssh localhost -p 19999

3- 3rd party servers can also access through Destination (
Destination ( <- |NAT| <- Source ( <- Bob's server

3.1 From Bob's server:
ssh sourceuser@

3.2 After the successful login to Source:
ssh localhost -p 19999

The connection between destination and source must be alive at all time.
Tip: you may run a command (e.g. watch, top) on Destination to keep the connection active.

+Auto Mount Hard Disk using /etc/fstab (Sept. 8, 2017, 8:11 a.m.)

UUID=e6a27fec-b822-4cc1-9f41-ca14655f938c /media/mohsen/4TB-Internal ext4 rw,user,exec 0 0

+Traffic Control - Limit Network Interface (Aug. 28, 2017, 4:58 p.m.)

For slowing an interface down:
tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
tc qdisc add dev eno3 root tbf rate 8096kbit latency 1ms burst 4096

qdisc - queueing discipline
latency - maximum amount of time a packet can sit in the queue waiting for tokens to become available.
burst - size of the bucket, in bytes.
rate - the speed knob.
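tbf implements a token bucket: tokens accrue at `rate` up to `burst` bytes, and a packet goes out only by spending that many tokens. A toy Python simulation of the mechanism (illustrative only, not how tc works internally):

```python
class TokenBucket:
    """Toy token bucket: refill at rate_bps bytes/sec, cap at burst bytes."""
    def __init__(self, rate_bps, burst):
        self.rate = rate_bps
        self.burst = burst
        self.tokens = burst  # bucket starts full

    def tick(self, seconds):
        # Refill tokens for elapsed time, never exceeding the bucket size.
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)

    def try_send(self, nbytes):
        # A packet is sent only if enough tokens are available.
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bps=1000, burst=1540)
print(bucket.try_send(1540))  # True: bucket starts full
print(bucket.try_send(100))   # False: empty, must wait for refill
bucket.tick(0.2)              # 0.2 s * 1000 B/s = 200 tokens
print(bucket.try_send(100))   # True
```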

+Crontab (July 11, 2017, 12:55 a.m.)

The crontab (cron derives from chronos, Greek for time; tab stands for table).


To see what crontabs are currently running on your system:

sudo crontab -l
crontab -u username -l


To edit the list of cronjobs:
sudo crontab -e


To remove or erase all crontab jobs:
crontab -r


Running GUI Applications:
0 1 * * * env DISPLAY=:0.0 transmission-gtk

Replace :0.0 with your actual DISPLAY.
Use "echo $DISPLAY" to find the display.


Cronjobs are written in the following format:

* * * * * /bin/execute/this/

As you can see there are 5 stars. The stars represent different date parts in the following order:

minute (from 0 to 59)
hour (from 0 to 23)
day of month (from 1 to 31)
month (from 1 to 12)
day of week (from 0 to 6) (0=Sunday)


Execute every minute:

* * * * * /bin/execute/this/

This means execute /bin/execute/this/

every minute
of every hour
of every day of the month
of every month
and every day in the week.


Execute every Friday 1 AM

0 1 * * 5 /bin/execute/this/


Execute on workdays 1AM

0 1 * * 1-5 /bin/execute/this/


Execute 10 past after every hour on the 1st of every month

10 * 1 * * /bin/execute/this/


Run every 10 minutes:

0,10,20,30,40,50 * * * * /bin/execute/this/

But crontab allows you to do this as well:

*/10 * * * * /bin/execute/this/


Special words:

For the first (minute) field, you can also put in a keyword instead of a number:

@reboot Run once, at startup
@yearly Run once a year "0 0 1 1 *"
@annually (same as @yearly)
@monthly Run once a month "0 0 1 * *"
@weekly Run once a week "0 0 * * 0"
@daily Run once a day "0 0 * * *"
@midnight (same as @daily)
@hourly Run once an hour "0 * * * *"

Leaving the rest of the fields empty, this would be valid:

@daily /bin/execute/this/


List of the English abbreviated day of the week, which can be used in place of numbers:

0 -> Sun

1 -> Mon
2 -> Tue
3 -> Wed
4 -> Thu
5 -> Fri
6 -> Sat

7 -> Sun

Having two numbers for Sunday (0 and 7) can be useful for writing weekday ranges starting with 0 or ending with 7.

Examples of Number or Abbreviation Use

The next four examples will do all the same and execute a command every Friday, Saturday, and Sunday at 9.15 o'clock:

15 09 * * 5,6,0 command
15 09 * * 5,6,7 command
15 09 * * 5-7 command
15 09 * * Fri,Sat,Sun command


Getting output from a cron job on the terminal:
You can redirect the output of your program to the pts file of an already existing terminal!
To know the pts file just type tty command
And then add it to the end of your cron task:
38 23 * * * /home/mohsen/Programs/ >> /dev/pts/4


Cron jobs get logged to /var/log/syslog.

You can see just cron jobs in that logfile by running:
grep CRON /var/log/syslog


tail -f /var/log/syslog | grep CRON


Mailing the crontab output

By default, cron saves the output in the user's mailbox (root in this case) on the local system. But you can also configure crontab to forward all output to a real email address by starting your crontab with a MAILTO line, e.g.:

MAILTO="you@example.com"

Mailing the crontab output of just one cronjob.
If you'd rather receive only one cronjob's output in your mail, make sure this package is installed:

$ aptitude install mailx

And change the cronjob like this:

*/10 * * * * /bin/execute/this/ 2>&1 | mail -s "Cronjob output"


Trashing the crontab output

Now that's easy:

*/10 * * * * /bin/execute/this/ > /dev/null 2>&1

Just pipe all the output to the null device, also known as the black hole. On Unix-like operating systems, /dev/null is a special file that discards all data written to it.


Many scripts are tested in a Bash environment with the PATH variable set. This way it's possible your scripts work in your shell, but when running from cron (where the PATH variable is different), the script cannot find referenced executables and fails.

It's not the job of the script to set PATH, it's the responsibility of the caller, so it can help to echo $PATH, and put PATH=<the result> at the top of your cron files (right below MAILTO).


Applicable Examples:

0 * * * * DISPLAY=:0 /home/mohsen/Programs/
0 11 * * * /home/mohsen/Programs/

Do not forget to chmod +x both the following files.

#! /bin/bash

/usr/bin/transmission-gtk > /dev/null &
echo $! > /tmp/

#! /bin/bash

if [ -f /tmp/ ]; then
    /bin/kill $(cat /tmp/)
fi


How do I use operators?

An operator allows you to specify multiple values in a field. There are three operators:

The asterisk (*): This operator specifies all possible values for a field. For example, an asterisk in the hour time field would be equivalent to every hour or an asterisk in the month field would be equivalent to every month.

The comma (,) : This operator specifies a list of values, for example: “1,5,10,15,20, 25”.

The dash (-): This operator specifies a range of values, for example, “5-15” days, which is equivalent to typing “5,6,7,8,9,….,13,14,15” using the comma operator.

The separator (/): This operator specifies a step value, for example: “0-23/2” can be used in the hours field to specify command execution every other hour. Steps are also permitted after an asterisk, so if you want to say every two hours, just use */2.
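As a sketch of how these operators compose, a small Python function (hypothetical, for illustration) that expands one cron field into its concrete values:

```python
def expand_cron_field(field, lo, hi):
    """Expand one crontab field ('*', '5-15', '1,5,10', '*/10', '0-23/2')
    into a sorted list of ints within [lo, hi]."""
    values = set()
    for part in field.split(','):
        part, _, step = part.partition('/')
        step = int(step) if step else 1
        if part == '*':
            start, end = lo, hi
        elif '-' in part:
            a, b = part.split('-')
            start, end = int(a), int(b)
        else:
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return sorted(values)

print(expand_cron_field('*/10', 0, 59))    # [0, 10, 20, 30, 40, 50]
print(expand_cron_field('0-23/2', 0, 23))  # the even hours
```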

+fdisk (July 8, 2017, 5:03 p.m.)

Merge Partitions:

1- fdisk /dev/sda

2- p
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 6293503 6291456 3G 83 Linux
/dev/sda2 6295550 10483711 4188162 2G 5 Extended

3- Delete both partitions you are going to merge:
Partition number (1,2, default 2): 2
Partition 2 has been deleted.

Command (m for help): d
Partition number (1-4): 1

4- n
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 2): 1
First sector (63-1953520064, default: 63): (Choose the default value)
Last sector, +sectors... (Choose the default value)

5- t
Partition number (1-4): 1
Hex code (type L to list codes): 83

6- Make sure you've got what you're expecting:
Command (m for help): p

7- Finally, save it:
Command (m for help): w

8- resize2fs /dev/sda1
Reboot the system, then check if the partitions have been merged by:
fdisk -l

+Removing Swap Space (July 8, 2017, 2:52 p.m.)

1- swapoff /dev/sda5

2- Remove its entry from /etc/fstab

3- Remove the partition using parted:
apt-get install parted
parted /dev/sda
Type "print" to view the existing partitions and determine the minor number of the swap partition you wish to delete.
rm 5 (5 is the NUMBER of the partition).
Type "quit" to exit parted.


Now you need to merge the unused partition space with another partition. You can do it using the "fdisk" note.

+GRUB Timeout (July 3, 2017, 12:30 p.m.)


+KDE - Location of User Wallpapers (July 2, 2017, 9:51 a.m.)


+NFS (July 1, 2017, 10:19 a.m.)

NFS is a network-based file system that allows computers to access files across a computer network.
1- Installation:
apt-get install nfs-kernel-server nfs-common
2- Server Configuration:
In order to expose a directory over NFS, open the file /etc/exports and attach the following line at the bottom:

This IP is the client which is going to have access to the shared folder. You can also use an IP range.

service nfs-kernel-server restart
3- Client Configuration:
sudo apt-get install nfs-common

Create a directory named "Audio" and:
mount /mnt/Audio/

By running df -h, you can ensure that your operation was successful.
For MacOS use this command:
sudo mount -o resvport /mnt/Audio/
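The exports line referred to in step 2, and the matching client-side mount, look roughly like this (server IP, client IP, and options are hypothetical examples, not taken from this note):

```
# /etc/exports on the server — allow one client (or a range such as 192.168.1.0/24):
/mnt/Audio  192.168.1.20(rw,sync,no_subtree_check)

# On the client (assuming 192.168.1.10 is the server):
# mount 192.168.1.10:/mnt/Audio /mnt/Audio/
```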

+Trim & Merge MP3 files (June 25, 2017, 2:11 p.m.)

sudo apt-get install sox libsox-fmt-mp3
sox infile outfile trim 0 1:06
sox infile outfile trim 1:52 =2:40
sox first.mp3 second.mp3 third.mp3 result.mp3
Merge two audio files with a pad:
sox short.ogg -p pad 6 0 | sox - -m long.ogg output.ogg

+Fix Wireless Headphone Problem (June 10, 2017, 5:34 p.m.)

+Convert deb to iso (May 14, 2017, 3:37 p.m.)

mkisofs firmware-bnx2_0.43_all.deb > iso

+Change DNS settings (May 9, 2017, 4:27 p.m.)

The DNS servers that the system uses for name resolution are defined in the /etc/resolv.conf file.
That file should contain at least one nameserver line.
Each nameserver line defines a DNS server.
The name servers are queried in the order in which the system finds them in the file.
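A minimal /etc/resolv.conf therefore looks like this (Google's public DNS servers shown as an example):

```
nameserver 8.8.8.8
nameserver 8.8.4.4
```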

+Samba - Active Directory Infrastructure (May 7, 2017, 10:31 a.m.)

1- sudo apt-get install samba krb5-user krb5-config winbind libpam-winbind libnss-winbind

2- While the installation is running, a series of questions will be asked by the installer in order to configure the domain controller.
Second, deskbit.local
Third, deskbit.local

3- Provision Samba AD DC for Your Domain:
systemctl stop samba-ad-dc.service smbd.service nmbd.service winbind.service
systemctl disable samba-ad-dc.service smbd.service nmbd.service winbind.service

4- Rename or remove samba original configuration. This step is absolutely required before provisioning Samba AD because at the provision time Samba will create a new configuration file from scratch and will throw up some errors in case it finds an old smb.conf file.
sudo mv /etc/samba/smb.conf /etc/samba/smb.conf.initial

5- Start the domain provisioning interactively:
samba-tool domain provision --use-rfc2307 --interactive
(Leave everything as default and set a desired password.)
Here is the last result after the process gets finished:
Server Role: active directory domain controller
Hostname: samba
DNS Domain: deskbit.local
DOMAIN SID: S-1-5-21-163349405-2119569559-686966403

6- Rename or remove the main Kerberos configuration file from the /etc directory, and replace it with a symlink to Samba's newly generated Kerberos file located in /var/lib/samba/private:
mv /etc/krb5.conf /etc/krb5.conf.initial
ln -s /var/lib/samba/private/krb5.conf /etc/

7- Start and enable Samba Active Directory Domain Controller daemons:
systemctl start samba-ad-dc.service
systemctl status samba-ad-dc.service (You may get some error logs, like "Cannot contact any KDC for requested realm", which is okay.)
systemctl enable samba-ad-dc.service

8- Use the netstat command to verify the list of all services required by an Active Directory to run properly.
netstat -tulpn | egrep 'smbd|samba'

9- At this moment Samba should be fully operational at your premises. The highest domain level Samba is emulating should be Windows AD DC 2008 R2.
It can be verified with the help of samba-tool utility.
samba-tool domain level show

10- In order for DNS resolution to work locally, you need to open and edit the network interface settings: point DNS resolution at the IP address of your Domain Controller (used for local DNS resolution) by modifying the dns-nameservers statement, and set the dns-search statement to point to your realm.
When finished, reboot your server and take a look at your resolver file to make sure it points back to the right DNS name servers.

11- Test the DNS resolver by issuing queries and pings against some AD DC crucial records, as in the below excerpt. Replace the domain name accordingly.
ping -c3 deskbit.local # Domain Name
ping -c3 samba.deskbit.local # FQDN
ping -c3 samba # Host


+OpenLDAP (May 6, 2017, 6:22 p.m.)


OpenLDAP is an open-source implementation of the Lightweight Directory Access Protocol, created by the OpenLDAP Project. It is released under the OpenLDAP Public License and is available for all major Linux operating systems, AIX, Android, HP-UX, OS X, Solaris, z/OS, and Windows.

It works like a relational database in certain ways and can be used to store any information. It is not limited to storing information; it can also serve as the backend database for “single sign-on”.
1- sudo apt-get -y install slapd ldap-utils
During the installation, the installer will prompt you to set a password for the LDAP administrator. Just enter a password of your choice.
2- Reconfigure OpenLDAP Server:
The installer automatically creates an LDAP directory based on the hostname of your server, which is not what we want, so we are going to reconfigure LDAP. To do that, execute the following command.

sudo dpkg-reconfigure slapd

You will need to answer a series of questions prompted by the reconfiguration tool.
Omit OpenLDAP server configuration? Select "No". (If you select Yes, it will just cancel the configuration.)


Choose the backend format for LDAP: HDB

Choose whether you want the database to be removed when slapd is purged. Select No.

If you have any old data in the LDAP, you could consider moving the database out of the way before creating a database. Select Yes.

You have the option to allow or disable LDAPv2 protocol. Select No.
3- Verify the LDAP:
sudo netstat -antup | grep -i 389
4- Generate base.ldif file for your domain:
vim /root/base.ldif

dn: ou=People,dc=deskbit,dc=local
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=deskbit,dc=local
objectClass: organizationalUnit
ou: Group
5- Build the directory structure:
ldapadd -x -W -D "cn=admin,dc=deskbit,dc=local" -f /root/base.ldif
6- Add LDAP Accounts:
Let’s create an LDIF (LDAP Data Interchange Format) file for a new user “ldapuser”:
vim /root/ldapuser.ldif

dn: uid=ldapuser,ou=People,dc=deskbit,dc=local
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldapuser
uid: ldapuser
uidNumber: 9999
gidNumber: 100
homeDirectory: /home/ldapuser
loginShell: /bin/bash
gecos: Test LdapUser
userPassword: {crypt}x
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
7- Use the ldapadd command to create a new user “ldapuser” in OpenLDAP directory:
ldapadd -x -W -D "cn=admin,dc=deskbit,dc=local" -f /root/ldapuser.ldif




+Date and Time From Command Prompt (May 3, 2017, 1:42 p.m.)

Display Current Date and Time:
$ date


Display The Hardware Clock (RTC):

# hwclock -r

OR show it in Coordinated Universal time (UTC):
# hwclock --show --utc


Set Date Command Example:
date -s "2 OCT 2006 18:00:00"

date --set="2 OCT 2006 18:00:00"


Set Time Examples:

date +%T -s "10:13:13"

To use %p, the locale’s equivalent of either AM or PM, enter:
# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"


How do I set the Hardware Clock to the current System Time?

Use the following syntax:
# hwclock --systohc

# hwclock -w


A note about systemd based Linux system

With a systemd-based system you need to use the timedatectl command to set or view the current date and time. Most modern distros, such as RHEL/CentOS 7.x+, Fedora, Debian, Ubuntu, and Arch Linux, are systemd-based and need the timedatectl utility. Please note that the date commands above should work on modern systems too.


timedatectl: Display the current date and time:

$ timedatectl


Change the current date using the timedatectl command:
# timedatectl set-time YYYY-MM-DD

$ sudo timedatectl set-time YYYY-MM-DD

For example set the current date to 2015-12-01 (1st, Dec, 2015):
# timedatectl set-time '2015-12-01'
# timedatectl


To change both the date and time, use the following syntax:
# timedatectl set-time '2015-11-23 08:10:40'
# date


To set the current time only:

The syntax is:
# timedatectl set-time HH:MM:SS
# timedatectl set-time '10:42:43'
# date


Set the time zone using timedatectl command:

To see the list of all available time zones, enter:
$ timedatectl list-timezones
$ timedatectl list-timezones | more
$ timedatectl list-timezones | grep -i asia
$ timedatectl list-timezones | grep America/New

To set the time zone to ‘Asia/Kolkata’, enter:
# timedatectl set-timezone 'Asia/Kolkata'

Verify it:
# timedatectl


How do I synchronize the system clock with a remote server using NTP?

# timedatectl set-ntp yes

Verify it:
$ timedatectl


For changing the timezone:
dpkg-reconfigure tzdata


+OpManager (May 3, 2017, 10:37 a.m.)

1- apt-get install iputils-ping

2- Download OpManager for linux:
or another earlier version from the archive link:

chmod a+x ManageEngine_OpManager_64bit.bin
./ManageEngine_OpManager_64bit.bin -console
cd /opt/ManageEngine/OpManager/bin

+SNMP (May 1, 2017, 3:51 p.m.)

1- apt-get install snmp snmpd

2- /etc/snmp/snmpd.conf
Edit to:
agentAddress udp:
view systemonly included .1

Add to the bottom:
com2sec readonly public
com2sec readonly public
com2sec readonly localhost public

3- /etc/init.d/snmpd restart
For checking if snmpd is running, and on what ip/port it's listening to, you can use:
netstat -apn | grep snmpd
Test the Configuration with an SNMP Walk:
snmpwalk -v1 -c public localhost
snmpwalk -v1 -c public
For getting information based on OID:
snmpwalk -v1 -c public localhost iso.

The OID Tree:

+SPICE (April 29, 2017, 1:21 p.m.)

What is SPICE?
SPICE (Simple Protocol for Independent Computing Environments) is a communication protocol for virtual environments. It allows users to see the console of virtual machines (VMs) from anywhere via the Internet. It uses a client-server model: Virtualization Station acts as the host, and users connect to VMs via the SPICE client.
remote-viewer spice://srv1:5908
remote-viewer "spice://srv1:5901?password=1362913207771306286"
SPICE Tools:
To compile SPICE agent on Linux, download the agent from the following link:

Install the following packages:
1- apt install libglib2.0-dev libdrm-dev sudo libxxf86vm-dev libxt-dev xutils-dev flex bison xcb libx11-xcb-dev libxcb-glx0 libxcb-glx0-dev xorg-dev libxcb-dri2-0-dev libasound2-dev libdbus-1-dev

2- Extract the already downloaded agent file, and:
sudo make install
SPICE client on Ubuntu:
1- sudo apt install spice-vdagent
2- Create a file /etc/default/spice-vdagentd with the value:

+Extract ISO files (April 26, 2017, 12:28 p.m.)

sudo mount -o loop an_iso_file.iso /home/mohsen/Temp/foo/

+List all IPs in the connected network (April 21, 2017, 1:53 p.m.)

sudo apt-get install arp-scan
sudo arp-scan --interface=eth0 --localnet
sudo apt-get install nmap
nmap -sn

+reprepro (March 4, 2017, 11:46 a.m.)
1-Install GnuPG and generate a GPG key for Signing Packages:
apt-get install gnupg dpkg-sig rng-tools
2-Open /etc/default/rng-tools:
vim /etc/default/rng-tools

and make sure you have the following line in it:

Then start rng-tools:
/etc/init.d/rng-tools start
3-Generate your key:
gpg --gen-key
4-Install and configure reprepro:
apt-get install reprepro

Let's use the directory /var/www/repo as the root directory for our repository. Create the directory /var/www/repo/conf:
mkdir -p /var/www/repo/conf
5-Let's find out about the key we have created in step 3:
gpg --list-keys

Our public key is D753ED90. We have to use this from now on.
6-Create the file /var/www/repo/conf/distributions as follows:
vim /var/www/repo/conf/distributions
7-The address of our apt repository will be, so we use this in the Origin and Label lines. In the SignWith line, we add our public key (D753ED90). Drop out the "2048R/" part:

Origin: reprepro.deskbit.local
Label: reprepro.deskbit.local
Codename: stable
Architectures: amd64
Components: main
Description: Deskbit Proprietary Softwares
SignWith: D753ED90
8-Create the (empty) file /var/www/repo/conf/override.stable:
touch /var/www/repo/conf/override.stable
9-Then create the file /var/www/repo/conf/options with this content:
basedir /var/www/repo
10-To sign our deb packages with our public key, we need the package dpkg-sig:
dpkg-sig -k D753ED90 --sign builder /usr/src/my-packages/*.deb
11-Now we import the deb packages into our apt repository:
cd /var/www/repo
reprepro includedeb stable /usr/src/my-packages/*.deb
12-Configuring nginx:
We need a webserver to serve our apt repository. In this example, I'm using an nginx webserver.

server {
    listen 80;

    access_log /var/log/nginx/packages-access.log;
    error_log /var/log/nginx/packages-error.log;

    location / {
        root /var/www/repo;
        index index.html;
        autoindex on;
    }

    location ~ /(.*)/conf {
        deny all;
    }

    location ~ /(.*)/db {
        deny all;
    }
}
OR for Apache:

<VirtualHost *:80>
    ServerName reprepro.deskbit.local
    DocumentRoot /var/www/repo
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
13-Let's create a GPG key for the repository:
gpg --armor --output /var/www/repo/ --export D753ED90
14-To use the repository, place the following line in your /etc/apt/sources.list:
vim /etc/apt/sources.list

deb stable main
15-If you want this repository to always have precedence over other repositories, you should have this line right at the beginning of your /etc/apt/sources.list and add the following entry to /etc/apt/preferences:

vim /etc/apt/preferences:

Package: *
Pin: origin
Pin-Priority: 1001
16-Before we can use the repository, we must import its key:
wget -O - -q | apt-key add -

apt-get update

+Packages to Install (Feb. 24, 2017, 10:15 a.m.)

pavucontrol proxychains android-tools-adb android-tools-fastboot gimp-plugin-registry gimp gir1.2-keybinder-3.0 quodlibet python3-dev python-dev libjpeg-dev libfreetype6 libfreetype6-dev zlib1g-dev zip python-setuptools vim postgresql-server-dev-all postgresql libpq-dev curl geany python-pip tmux git virtaal gdebi-core gdebi smplayer yakuake vlc gparted

+PulseAudio Volume Control (Jan. 25, 2017, 9:12 a.m.)


+Find Gateway IP (Jan. 8, 2017, 2:49 p.m.)

ip route | grep default

+Faster grep (Jan. 7, 2017, 4:59 p.m.)

1- Install `parallel`
sudo apt-get install parallel

2- Begin search:
find . -type f | parallel -k -j150% -n 1000 -m grep -H -n "keyring doesn't exist" {}

+tcpdump (Oct. 2, 2016, 4:33 p.m.)

tcpdump -nti any port 80

+OpenCV - Facial Keypoint Detection (Sept. 24, 2016, 10:58 a.m.)

As computer vision engineers and researchers we have been trying to understand the human face since the very early days. The most obvious application of facial analysis is Face Recognition. But to be able to identify a person in an image we first need to find where in the image a face is located. Therefore, face detection — locating a face in an image and returning a bounding rectangle / square that contains the face — was a hot research area.

Once you have a bounding box around the face, the obvious research problem is to see if you can find the location of different facial features ( e.g. corners of the eyes, eyebrows, and the mouth, the tip of the nose etc ) accurately. Facial feature detection is also referred to as “facial landmark detection”, “facial keypoint detection” and “face alignment” in the literature, and you can use those keywords in Google for finding additional material on the topic.

+Check outgoing port (Sept. 14, 2016, 10:27 p.m.)

Use one of the tools to check if the outgoing VPS port is blocked:

telnet 80
nc -v 80
wget -qO-

+Write ISO file to DVD in terminal (Sept. 3, 2016, 9:13 p.m.)

Using this command, check where the DVD Writer is mounted: (/dev/sr0)
inxi -d

And using this command, start writing on the DVD:
wodim -eject -tao speed=8 dev=/dev/sr0 -v -data Downloads/linuxmint-18-kde-64bit-beta.iso

+See Linux Version (Aug. 15, 2016, 3:26 p.m.)

cat /etc/os-release

cat /etc/*release

uname -a

lsb_release -a

+Install OpenCV 3.0 with Python 3.4+ (Aug. 3, 2016, 4:31 p.m.)

sudo apt-get install libopenexr-dev
Install the above package in addition to the packages the link says to install; it is not included in the documentation.

First try doing the way the tutorial links in github says:

If you encounter problems, you can try the following notes too.
The following caused errors about ffmpeg libraries not being found, but the link above solved it.
1- sudo apt-get install build-essential cmake git pkg-config libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libgtk2.0-dev libatlas-base-dev gfortran python3.4-dev libgtk-3-dev libgstreamer0.10-dev libgstreamer-plugins-base1.0-dev libv4l-dev libopencv-dev build-essential cmake git libgtk2.0-dev pkg-config python-dev python-numpy libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev libtiff4-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libxine-dev libtbb-dev libqt4-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils unzip libavresample-dev yasm libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libx264-dev libxvidcore-dev libxvidcore4

ln -s /usr/include/libv4l1-videodev.h /usr/include/linux/videodev.h

I think this part is not needed. It was supposed to help fix ffmpeg errors when building OpenCV, but it did not:

cd ~/MyTemp/
tar xvf ffmpeg-0.11.1.tar.bz2
cd ffmpeg-0.11.1
./configure --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab
make -j4
sudo make install

2- Create a virtualenv and activate it

3- pip install numpy

4- Build and install OpenCV 3.0 with Python 3.4+ bindings:
cd ~/MyTemp
git clone
cd opencv
git checkout 3.0.0 (Referring to this website you can see what version you need to write instead of 3.0.0: As of right now it's 3.1.0)

5- We’ll also need to grab the opencv_contrib repo as well:
cd ~/MyTemp
git clone
cd opencv_contrib
git checkout 3.0.0
Again, make sure that you checkout the same version for opencv_contrib that you did for opencv above, otherwise you could run into compilation errors.

6- Time to setup the build:
cd ~/MyTemp/opencv
mkdir build
cd build
-D OPENCV_EXTRA_MODULES_PATH=~/MyTemp/opencv_contrib/modules \

7- make -j8


+PyCharm / IntelliJ IDEA allows only two spaces (July 26, 2016, 12:37 p.m.)

In settings search for `EditorConfig` and disable the plugin.

+Enable/Disable Bluetooth (July 26, 2016, 10:42 a.m.)

sudo rfkill block bluetooth
sudo update-rc.d bluetooth disable
service bluetooth status
sudo rfkill unblock bluetooth
sudo update-rc.d bluetooth enable
service bluetooth status

+Identify Computer Model (July 23, 2016, 10:48 a.m.)

sudo grep "" /sys/class/dmi/id/[bpc]*

+Error: Fixing recursive fault but reboot is needed! (July 17, 2016, 9:49 a.m.)

sudo nano /etc/default/grub



sudo update-grub2

+No partitions found while installing Linux (July 15, 2016, 9:28 p.m.)

1- Boot up linux with Live CD (the installation disk)
2- sudo su
3- sudo apt-get install gdisk
4- sudo gdisk /dev/sda
5- Select (1) for MBR
6- Type x for expert stuff
7- Type z to zap the GPT data
8- Type y to proceed destroying GPT data
9- Type n in order to not lose MBR data

Now restart the installation procedure.

+VMware Workstation (June 21, 2016, 5:37 p.m.)

Using this address, find the bundle file in "/linux/core/":

Extract the file (if it's a tar file) and run the bundle file with root permission:
# bash ./VMware-Workstation-12.5.2-4638234.x86_64.bundle
After installation, you'll need a serial number. Google the version and you'll find it finally ;-)
For this current version (12.5.2) the serial number is:

+Remove invalid characters from filenames (May 29, 2016, 8:18 a.m.)

find . -exec rename 's/[^\x00-\x7F]//g' "{}" \;

+PyCharm Regex (May 23, 2016, 2:07 a.m.)

{8}"name_ru": ".+?",\n
Search for any occurrences starting with a double quote:

+SASL authentication for IRC network using freenode (April 14, 2016, 7:36 p.m.)
port: 6697
Make sure to use "Secure Connection (SSL)".

+PouchDB (April 13, 2016, 9:54 a.m.)

sudo npm -g install pouchdb
sudo npm -g install angular-pouchdb
ionic plugin add cordova-sqlite-storage
There is a Chrome extension called PouchDB Inspector that allows you to view the contents of the database in the Chrome Developer Tools.
You can not use the PouchDB Inspector if you loaded the app with ionic serve --lab, because it uses iframes to display the iOS and Android views. The PouchDB Inspector needs to access PouchDB via window.PouchDB, and it can't do that when the window is inside an <iframe>.
Keep in mind that when you're testing your Ionic app on a desktop browser it will use an IndexedDB or WebSQL adapter, depending on which browser you use. If you'd like to know which adapter is used by PouchDB, you can look it up:
var db = new PouchDB('birthdays');
On a mobile device the adapter will be displayed as websql even if it is using SQLite, so to confirm that it is actually using SQLite you'll have to do this (see answer on StackOverflow):

var db = new PouchDB('birthdays');
This will output an object with a sqlite_plugin set to true or false.
There are 2 ways to insert data, the post method and the put method. The difference is that if you add something with the post method, PouchDB will generate an _id for you, whereas if you use the put method you're generating the _id yourself.
SQLite plugin for Cordova/PhoneGap

On Cordova/PhoneGap, the native SQLite database is often a popular choice, because it allows unlimited storage (compared to IndexedDB/WebSQL storage limits). It also offers more flexibility in backing up and pre-loading databases, because the SQLite files are directly accessible to app developers.

Luckily, there is a SQLite Plugin (also known as SQLite Storage) that accomplishes exactly this. If you include this plugin in your project, then PouchDB will automatically pick it up based on the window.sqlitePlugin object.

However, this only occurs if the adapter is 'websql', not 'idb' (e.g. on Android 4.4+). To force PouchDB to use the WebSQL adapter, you can do:
var db = new PouchDB('myDB', {adapter: 'websql'});

If you are unsure whether PouchDB is using the SQLite Plugin or not, just run:

This will print some database information, including the attribute sqlite_plugin, which will be true if the SQLite Plugin is being used.

+KDE Menu Editor (April 2, 2016, 9:14 a.m.)


+Batch rename files (March 11, 2016, 10:53 a.m.)

for file in *.html; do
    mv "$file" "${file%.html}.txt"
done

for file in *; do
    mv "$file" "$file.mp3"
done
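The ${file%.html} form strips the shortest trailing match of .html before the new extension is appended. A quick sketch of the expansion on its own, without touching any files:

```shell
# ${var%suffix} removes a trailing suffix from the value of var:
file="lecture.html"
new="${file%.html}.txt"   # "lecture.html" -> "lecture.txt"
echo "$new"
```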

+Thinkpad Lenovo Bluetooth Driver (Feb. 15, 2016, 10:12 a.m.)
sudo apt-get install build-essential linux-headers-generic
cd rtl8723au_bt-troy
sudo make install

+Genymotion (April 10, 2016, 7:22 p.m.)

1-apt-get install libdouble-conversion1

2-Download `Ubuntu 14.10 and older, Debian 8` genymotion version from the following link:
The downloaded file name should be `genymotion-2.8.0-linux_x64.bin`.

3-sudo bash ./genymotion-2.8.0-linux_x64.bin

4-For running it, use this command:

5-You should already have the genymotion VirtualBox (ovd) files. If so, you need to change the path of VirtualBox Virtual devices in settings, to the location of your files.
Settings --> Virtualbox (tab) --> Browse

After this step I still could not see the list of virtual devices in genymotion program. I imported the ovd files in virtualbox program, and they got displayed in genymotion too.

+ADB (Nov. 2, 2015, 5:04 p.m.)

sudo apt-get install android-tools-adb android-tools-fastboot

+Gimp Plugin (Nov. 2, 2015, 5:03 p.m.)

sudo apt-get install gimp-plugin-registry

+Diff over SSH (Oct. 12, 2015, 10:40 a.m.)

diff /home/mohsen/Projects/Shetab/nespresso/nespresso/ <(ssh 'cat /home/shetab/websites/nespresso/nespresso/')
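The <(...) part is bash process substitution: the output of the ssh command is presented to diff as if it were a file. A local sketch of the same mechanism, with made-up input streams:

```shell
# Compare two command outputs without temp files; diff prints the
# differing lines and exits non-zero when the streams differ.
diff <(printf 'a\nb\n') <(printf 'a\nc\n') || true
```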

+Handbrake in Mint (Sept. 14, 2015, 5:02 p.m.)

sudo add-apt-repository ppa:stebbins/handbrake-snapshots
sudo apt-get update
sudo apt-get install handbrake

+Trim/Cut video files (Sept. 14, 2015, 2:03 p.m.)

ffmpeg -i video.mp4 -ss 10 -t 10 -c copy cut2.mp4

The first 10 is the start time in seconds:
10 ==> 10 seconds from start
1:10 ==> One minute and 10 seconds
1:10:10 ==> One hour, one minute and ten seconds

The second 10 is the duration.

+Retrieve Video File Information (Sept. 14, 2015, 12:02 p.m.)

mplayer -vo null -ao null -frames 0 -identify test.mp4

+Routing (Aug. 22, 2015, 4:58 p.m.)

ip route add {dst ip} via {gateway ip} dev ethx src {src ip}

+Change Hostname (Aug. 6, 2015, 11:14 p.m.)

nano /etc/hostname
/etc/init.d/ start

nano /etc/hosts
service hostname restart

+Get public IP address and email it (July 25, 2015, 1:17 p.m.)

Getting public IP address in bash:

wget -qO-
Getting it and emailing it (copy this script and paste it in a file with `.sh` extension):
IPADDRESS=$(wget -qO-
# IPADDRESS=$(curl
if [[ "${IPADDRESS}" != $(cat ~/.current_ip) ]]; then
    echo "Your new IP address is ${IPADDRESS}" |
        mail -s "IP address change"
    echo ${IPADDRESS} >| ~/.current_ip
fi

+LibreOffice - Add/Remove RTL and LTR buttons to the formatting toolbar (July 8, 2015, 7:41 p.m.)

You have to enable Complex Text Layout (CTL) support:
Tools → Options → Language Settings → Languages
Enable `Complex Text Layout (CTL)`
Restart libreoffice.

+Installing Irancell 3G-4G Modem Driver (July 8, 2015, 10:53 a.m.)

1-sudo apt-get install g++-multilib libusb-dev libusb-0.1-4:i386

2-Connect the modem and copy the `linuxdrivers.tar.gz` file to your computer, extract it and cd to the directory.

3-CD to directory `drivers` and using the `install_driver` file, install the driver:
sudo ./install_driver

4-Create a shortcut from the file `` to make the connection procedure easier:
ln -s /home/mohsen/Programs/linuxdrivers/drivers/ .

5-To establish a connection use the command:
sudo ~/
And this is the output:

Looking for default devices ...
Found default devices (1)
Accessing device 007 on bus 003 ...

USB description data (for identification)
Manufacturer: Longcheer
Product: LH9207
Serial No.:
Looking for active driver ...
No driver found. Either detached before or never attached
Setting up communication with interface 0 ...
Trying to send the message to endpoint 0x01 ...
OK, message successfully sent
-> Run lsusb to note any changes. Bye.

sleep 3
ifconfig ecm0 up
dhclient ecm0
mohsen drivers #

+Installing KDE and/or Gnome in Debian (June 9, 2015, 9:22 a.m.)

Install KDE in debian

#apt-get install x-window-system-core kde

You'll probably also want to install KDM, for the KDE-style login screen.

#apt-get install kdm

Starting KDE

To start KDE, type


You may need to start the X server if it is not running; to start it run


To start KDE each time (you probably want this) you'll need to edit your startup files. If you use KDM or XDM to log in, edit .xsession, otherwise edit .xinitrc or .Xclients.

Install Gnome in Debian

#apt-get install gnome

This will install additional software (gnome-office, evolution) that you may or may not want.


For a smaller set of apps, you can also do

# aptitude install gnome-desktop-environment

A set of additional productivity apps will be installed by

# aptitude install gnome-fifth-toe

+Quodlibet Multimedia Keys (June 3, 2015, 9:12 p.m.)

apt-get install gir1.2-keybinder-3.0

+Connecting to wifi network through command line (June 3, 2015, 6:13 p.m.)

1-sudo iwlist wlan0 scan
2-sudo iwconfig wlan0 essid "THE SSID"
3-iwconfig wlan0 key s:password
4-sudo dhclient wlan0

+Root Password Recovery (May 27, 2015, 1:24 p.m.)

rw init=/bin/bash

+Locale Settings (Feb. 5, 2016, 1:40 a.m.)

This first solution has worked for me. So before checking the other solutions, try this one first!

nano /etc/environment

Restart server and it should be fixed now!
locale-gen en_US.UTF-8

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
dpkg-reconfigure locales
This is a common problem if you are connecting remotely, so the solution is to not forward your locale. Edit /etc/ssh/ssh_config and comment out SendEnv LANG LC_* line.
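The lines to put in /etc/environment are presumably the same locale settings the exports above use, e.g.:

```
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
```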

+Proxy (May 10, 2015, 3:48 p.m.)

1-sudo apt-get install proxychains
2-ssh -D 1080 -fN root@
3-nano /etc/proxychains.conf
4-At the bottom of the file:
# add proxy here
# defaults set to "tor"
# socks4 9050
socks5 1080

5-sudo proxychains synaptic

6-Keep in mind that you must use the proxy as the same user who created the tunnel: if you ran (ssh -D ...) as root, that port is only available to root.

+Recover/Restore Firefox Master Password (April 19, 2015, 9:58 a.m.)

For resetting copy this url in the address-bar:

+TV Card Driver (April 17, 2015, 7:08 p.m.)
1-sudo apt-get install libproc-processtable-perl git libc6-dev
2-git clone git://
3-cd media_build
4-$ ./build
5-sudo make install
6-apt-get install me-tv kaffeine
7-reboot for loading the driver (I don't know the driver for modprobe yet).
Scan channels using Kaffeine:
1-Open Kaffeine
2-From `Television` menu, choose `Configure Television`.
3-From `Device 1` tab, from `Source` option, choose `Autoscan`
4-From `Television` menu choose `Channels`
5-Click on `Start Scan` and after the scan procedure is done, select all channels from the side panel and click on `Add Selected` to add them to your channels.
Scan channels using Me-TV
1-Open Me-TV
2-When the scan dialog opens, choose `Czech Republic` from `Auto scan`.

+PYTHONHOME and PYTHONPATH (April 4, 2015, 3:29 p.m.)

For most installations, you should not set these variables since they are not needed for Python to run. Python knows where to find its standard library.

The only reason to set PYTHONPATH is to maintain directories of custom Python libraries that you do not want to install in the global default location (i.e., the site-packages directory).

PYTHONHOME actually points to the directory of the standard library by default (e.g. /usr/local/lib/pythonXX).
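A quick way to see the effect of PYTHONPATH (/opt/mylibs is a hypothetical path used only for illustration):

```shell
# PYTHONPATH entries are added to sys.path even if the directory
# does not exist, so this prints True:
PYTHONPATH=/opt/mylibs python3 -c 'import sys; print("/opt/mylibs" in sys.path)'
```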

+Environment Variable (April 3, 2015, 8:46 p.m.)
Commonly Used Shell Variables:
Use `set` command to display current environment
The $PATH defines the search path for commands. It is a colon-separated list of directories in which the shell looks for commands.
You can display the value of a variable using printf or echo command:
$ echo "$HOME"
You can modify each environmental or system variable using the export command. Set the PATH environment variable to include the directory where you installed the bin directory with perl and shell scripts:

export PATH=${PATH}:/home/vivek/bin


export PATH=${PATH}:${HOME}/bin
You can set multiple paths as follows:
export ANT_HOME=/path/to/ant/dir
export PATH=${PATH}:${ANT_HOME}/bin:${JAVA_HOME}/bin
How Do I Make All Settings permanent?
The ~/.bash_profile ($HOME/.bash_profile) or ~/.profile file is executed when you log in using the console or remotely using ssh. Type the following command to edit the ~/.bash_profile file, enter:
$ vi ~/.bash_profile
Append the $PATH settings, enter:
export PATH=${PATH}:${HOME}/bin
Save and close the file.

+subprocess installed post-installation script returned error exit status 1 (March 19, 2015, 12:30 a.m.)


Setting up python-gst0.10-dev (0.10.22-3ubuntu2) ...
dpkg: error processing package python-gst0.10-dev (--configure):
subprocess installed post-installation script returned error exit status 1
E: Sub-process /usr/bin/dpkg returned an error code (1)
sh -x /var/lib/dpkg/info/python-gst0.10-dev.postinst configure 0.10.22-3ubuntu2

+ set -e
+ pyversions --default
+ PYTHON_DEFAULT=pyversions: /usr/bin/python does not match the python default version. It must be reset to point to python2.7
ln -sf /usr/bin/python2.7 /usr/bin/python

+Ubuntu Sources List Generator (March 18, 2015, 3:52 p.m.)

+Delete special files recursively (March 7, 2015, 2:36 p.m.)

find . -name "*.bak" -type f -delete

find . -name "*.bak" -type f
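A safe dry run of the recipe above in a scratch directory (file names are made up): preview with plain `find` first, then add `-delete`.

```shell
# Create a scratch tree with two .bak files and one file to keep.
WORKDIR=$(mktemp -d)
mkdir "$WORKDIR/sub"
touch "$WORKDIR/a.bak" "$WORKDIR/sub/b.bak" "$WORKDIR/keep.txt"

find "$WORKDIR" -name "*.bak" -type f          # preview what would go
find "$WORKDIR" -name "*.bak" -type f -delete  # actually delete

remaining=$(find "$WORKDIR" -type f)           # only keep.txt survives
echo "$remaining"
rm -rf "$WORKDIR"
```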

+How to stop services / programs from starting automatically (March 3, 2015, 11:27 a.m.)

update-rc.d -f apache2 remove

+Truetype Fonts (Arial Font) (Feb. 22, 2015, 1:10 p.m.)
apt-get install ttf-liberation

+Add Resolutions (Feb. 15, 2015, 11:19 a.m.)
NOTE! This has been written to ensure compatibility with arandr!

1. Install arandr
sudo apt-get install arandr

2. Run xrandr
If your chosen resolution already exists (regardless of which monitor it is listed under), skip the next step

3. If your resolution does not exist, create it by doing the following:
In this example the resolution I want is 1600x900.
cvt 1600 900

This will output a modeline like this:
"1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync

Create the new mode:
xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync

4. Add the mode (resolution) to the desired monitor (DP2):
xrandr --addmode DP2 "1600x900_60.00"

5. For switching to the newly created resolution:
xrandr -s 1600x900


xrandr --output DP2 --mode "1920x1080"


6. Run arandr and position your monitors correctly

7. Choose 'layout' then 'save as' to save the script

8. I found the best place to load the script (under Xubuntu) is the settings manager:


Menu -> Settings -> Settings Manager -> Session and Startup -> Application Autostart

+Dump traffic on a network (Feb. 7, 2015, 11:33 a.m.)

tcpdump -nti any port 4301

To connect to it:
telnet 4301

+Show open ports and listening services (Feb. 7, 2015, 10:33 a.m.)

netstat -an | egrep 'Proto|LISTEN'
netstat -lnptu

+Make Bootable USB stick (Jan. 8, 2015, 7:50 p.m.)

sudo dd if=~/Desktop/linuxmint.iso of=/dev/sdx oflag=direct bs=1048576

+Change locale/timezone and set the clock (Sept. 20, 2015, 1:57 p.m.)

1-ln -sf /usr/share/zoneinfo/Asia/Tehran /etc/localtime
2-apt install ntp
3-hwclock -w


Linux Set Date Command Example
# date -s "2 OCT 2006 18:00:00"


# date --set="2 OCT 2006 18:00:00"


# date +%Y%m%d -s "20081128"


# date +%T -s "10:13:13"

10: Hour (hh)
13: Minute (mm)
13: Second (ss)

Use %p locale's equivalent of either AM or PM, enter:
# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"
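The same date strings can be exercised without root and without touching the clock: GNU date's `-d` option parses a date instead of setting it (`-s` is the part that needs root).

```shell
# Parse and reformat a date string with GNU date; -u keeps everything in
# UTC so the output is deterministic. No system state is changed.
stamp=$(date -u -d "2 OCT 2006 18:00:00" +"%Y-%m-%d %H:%M:%S")
echo "$stamp"
```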


yum install ntp
ln -sf /usr/share/zoneinfo/Asia/Tehran /etc/localtime
/etc/init.d/ntpd stop

+error ==> error while loading shared libraries (Dec. 18, 2014, 10:02 p.m.)

Locate the missing library using locate <> and copy it to /usr/lib.

In my case I also needed to copy it to /usr/lib64.

+error ==> make command not found (Dec. 18, 2014, 11:46 a.m.)

apt-get install make build-essential

+wget certificate error (Dec. 18, 2014, 11:38 a.m.)

ERROR: The certificate of `' is not trusted.
ERROR: The certificate of `' hasn't got a known issuer.

If you don't care about checking the validity of the certificate just add the --no-check-certificate option on the wget command-line.

wget --no-check-certificate <url_link>

+Split and Join/Merging Files (Nov. 28, 2014, 11:58 a.m.)

split --bytes=1M NimkatOnline-1.0.0.apk NimkatOnline

Size suffixes:
b ==> bytes
M ==> Megabytes
G ==> Gigabytes

(Use -l to split by number of lines instead of bytes.)

split --bytes=1M images/myimage.jpg new

split -b 22 newfile.txt new
Split the file newfile.txt into three separate files called newaa, newab and newac..., with each file containing 22 bytes of data.

split -l 300 file.txt new
Split the file file.txt into files beginning with the name new, each containing 300 lines of text.
For merging or joining files:
cat new* > newimage.jpg
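A round-trip check of split/cat on throwaway data (file names and the 30000-byte chunk size are arbitrary examples): the checksum of the rejoined file must match the original.

```shell
# Split a ~100 KB random file into 30000-byte parts, rejoin with cat,
# and compare checksums.
WORKDIR=$(mktemp -d)
head -c 100000 /dev/urandom > "$WORKDIR/original.bin"
before=$(cksum < "$WORKDIR/original.bin")

split --bytes=30000 "$WORKDIR/original.bin" "$WORKDIR/part_"
nparts=$(ls "$WORKDIR"/part_* | wc -l)         # 100000/30000 -> 4 parts

cat "$WORKDIR"/part_* > "$WORKDIR/rejoined.bin"
after=$(cksum < "$WORKDIR/rejoined.bin")

echo "parts=$nparts"
rm -rf "$WORKDIR"
```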

+Locate (Nov. 13, 2014, 10:03 p.m.)

Match the exact filename:
locate -b '\filename'

Don’t output all the results, but only the number of matching entries.
locate -c test

+SSH login without password (Nov. 13, 2014, 7:29 p.m.)

1-ssh-keygen -t rsa (No need to set a passphrase)

2-Copy the public key to the server:
ssh-copy-id <username>@<server_ip>

Now you can log in without a password

+APT - The location where apt-get caches/stores .deb files (Oct. 18, 2014, 6:16 a.m.)

/var/cache/apt/archives/

+nano - Replace (Oct. 3, 2014, 11:08 p.m.)

In some versions of nano for `replacing` you can use:
Shift + Tab

And in some other versions:
CTRL + \

+Recover Files (Sept. 14, 2014, 7:24 p.m.)

Using this program you can undelete/recover deleted files:

After selecting the desired hard disk, press the capital `P` key to show all the deleted files.

+Setting Proxy Variable (Aug. 22, 2014, 12:44 p.m.)

export http_proxy="localhost:9000"
export https_proxy="localhost:9000"
export ftp_proxy="localhost:9000"

And for removing environment variables:
unset http_proxy
unset https_proxy
unset ftp_proxy

+Getting folder size (Aug. 22, 2014, 12:38 p.m.)

For getting the folder size along with its sub-folders:
du -sh /path/to/directory
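With GNU du, `-b` reports exact (apparent) sizes, which makes the result checkable; a tiny sketch with two 1 KiB files in a scratch directory:

```shell
# Two 1 KiB files should total exactly 2048 apparent bytes (GNU du -b).
WORKDIR=$(mktemp -d)
head -c 1024 /dev/zero > "$WORKDIR/a"
head -c 1024 /dev/zero > "$WORKDIR/b"

total=$(du -cb "$WORKDIR/a" "$WORKDIR/b" | tail -n1 | cut -f1)
echo "$total"
rm -rf "$WORKDIR"
```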

+Join *.001, *.002, .... files (Aug. 22, 2014, 12:33 p.m.)

cat filename.avi.* > filename.avi

+Virtualbox (Nov. 4, 2015, 11:31 a.m.)

Virtualbox has some dependencies. You'd better follow this solution to install it.

1- Add the following line to your /etc/apt/sources.list:
deb xenial contrib

According to your distribution, replace 'xenial' by 'vivid', 'utopic', 'trusty', 'raring', 'quantal', 'precise', 'lucid', 'jessie', 'wheezy', or 'squeeze'.

For viewing the complete list of dists:

To see your Linux dist:
cat /etc/*release
Based on the line:
choose the dist! (which is xenial)

2- apt-get update (using a proxy tool like proxychains)

3- apt-key adv --keyserver --recv-keys A2F683C52980AECF
The key depends on what you might get after apt-get update.
You need to re-run apt-get update.

Virtualbox 5 Download link: (It's blocked for us in Iran; use a proxy tool to bypass it).


You can download the file directly from: (It's also blocked; use a proxy tool).
Installing virtualbox:
apt-get install virtualbox virtualbox-4.3 virtualbox-dkms
When enabling USB 2.0 in VirtualBox (checking `Enable USB 2.0...` in the settings), I noticed an alert at the bottom of the window: `Invalid settings detected`. Hovering the mouse over it displayed:
"USB 2.0 is currently enabled for this virtual machine. However, this requires the Oracle VM VirtualBox Extension Pack to be installed..."

So, for solving this problem:
1-Check what version of virtual box you're using:
VBoxManage -version
It will display something like 4.3.6_Debianr91406

2-Open this link and follow the version of virtual box you got from `step 1`:

3-Find the package and download it:
Don't forget to find the whole version number... I mean the 91406 (from the `step 1`)

4-Install the package:
sudo vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.3.6-91406.vbox-extpack

5-Now, you need to add your username to the "vboxusers" group in order to gain access to your USB devices in the Virtual Machine:
sudo usermod -a -G vboxusers mohsen

6-Restart your PC/Laptop.

For viewing a list of installed packages:
VBoxManage list extpacks

For uninstalling the package:
sudo vboxmanage extpack uninstall "Oracle VM VirtualBox Extension Pack"

In case of the error `bash: /etc/init.d/vboxdrv: No such file or directory`:
sudo apt-get install build-essential linux-headers-`uname -r`

sudo dpkg-reconfigure virtualbox-dkms
sudo dpkg-reconfigure virtualbox
Increase VDI size:
vboxmanage modifymedium /media/mohsen/Programs/Virtual\ OS/VirtualBox\ VMs/Windows\ 10/ --resize 22000

After resizing, using the Disk Management tool available in Windows, right click on partition C: and extend it.
VBoxManage list vms
VBoxManage startvm "Debian - 8"

+Help, Manual (Aug. 22, 2014, 12:34 p.m.)

Get help:
Some commands don't have help messages or don't use --help to invoke them. On these mysterious commands, use this trick:

First, find out where the executable file is located (this trick will only work with programs, not shell builtins):
which command

The `which` command will tell you the path and file name of the executable program. Next, use the `strings` command to display text that may be embedded within the executable file. For example, if you wanted to look inside the bash program, you would do the following:
which bash
strings /bin/bash

The strings command will display any human readable content buried inside the program. This might include copyright notices, error messages, help text, etc.

Finally, if you have a very inquisitive nature, get the command's source code and read that. Even if you cannot fully understand the programming language in which the command is written, you may be able to gain valuable insight by reading the author's comments in the program's source.

+Dolphin (Aug. 22, 2014, 12:34 p.m.)

When working with `dolphin`, I can't disable the notification sounds. They break the ALSA volume too. The only way to disable the sounds is to delete, move, or rename the sound files. So here is the path to the sounds. Do whatever pleases you :D

+ISO files (Aug. 22, 2014, 12:33 p.m.)

Convert .DAA Files To .ISO

Download and install PowerISO using the following link:
Scroll to the bottom of the page, in `Other downloads` section to get the linux version.

1- wget

2- tar -zxvf poweriso-1.3.tar.gz

3- You can copy the extracted file “poweriso” to /usr/bin so that all users of the computer can use it.
Now if you want to convert for example a .daa file to .iso use this command:
poweriso convert /path/to/source.daa -o /path/to/target.iso -ot iso
There are more useful poweriso commands.
Task: list all files and directories in the root directory of the image /media/file.iso:

poweriso list /media/file.iso /
poweriso list /media/file.iso / -r

For more commands, type:
poweriso -?
Convert DMG to ISO

1- Install the tool
sudo apt-get install dmg2img

2- The following command will convert the .dmg to .img file in ISO format:
dmg2img <file_name>.dmg

3- And finally, rename the extension:
mv <file_name>.img <file_name>.iso
Create ISO file from a directory:
mkisofs -allow-limited-size -o abcd.iso abcd

+Installing Flash Player (Aug. 22, 2014, 12:32 p.m.)

sudo apt-get install adobe-flashplugin

+Nautilus Bookmarks (Aug. 22, 2014, 12:26 p.m.)

Nautilus bookmarks configuration file location:

For seeing which version of nautilus you have:
nautilus --version

+Convert mp3 to ogg (Aug. 22, 2014, 12:32 p.m.)

Convert mp3 to ogg:
1-apt-get install mpg321 vorbis-tools
2-mpg321 input.mp3 -w raw && oggenc raw -o output.ogg

+Convert rpm to deb (Aug. 22, 2014, 12:26 p.m.)

Convert rpm to deb:
1-apt-get install alien
2-alien -d package-name.rpm

+tmux (Aug. 22, 2014, 12:31 p.m.)

Prompt not following normal bash colors:

For fixing the problem, create a file `~/.tmux.conf` if it does not exist, and add the following to it:
set -g default-terminal "screen-256color"

set -g history-limit 100000
Tmux Plugin Manager:
git clone ~/.tmux/plugins/tpm

Put this at the bottom of ~/.tmux.conf:

# List of plugins
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'

# Initialize TMUX plugin manager (keep this line at the very bottom of tmux.conf)
run '~/.tmux/plugins/tpm/tpm'
Installing plugins:
1-Add new plugin to ~/.tmux.conf with set -g @plugin '...'
2-Press prefix + I (capital I, as in Install) to fetch the plugin.
Uninstalling plugins:
1-Remove (or comment out) plugin from the list.
2-Press prefix + alt + u (lowercase u as in uninstall) to remove the plugin.
tmux-continuum plugin:
set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'

Automatic restore:
Last saved environment is automatically restored when tmux is started.
Put this in tmux.conf to enable:
set -g @continuum-restore 'on'
set -g @resurrect-capture-pane-contents 'on'
CPU/RAM/battery stats chart bar:
install the plugin using CPAN:
sudo cpan -i App::rainbarf

If it's the first time you're using CPAN, you might be asked to let some dependencies be installed automatically. Choose `yes`, then `sudo`, to let them install.

After installation, create a config file ~/.rainbarf.conf with this content:
width=20 # widget width
bolt # fancy charging character
remaining # display remaining battery
rgb # 256-colored palette
Whole config file:
set -g default-terminal "screen-256color"
set-option -g status-utf8 on

set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'

set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @plugin 'tmux-plugins/tmux-logging'
set -g @continuum-restore 'on'
set -g @resurrect-capture-pane-contents 'on'

set -g history-limit 500000

set -g status-right '#(rainbarf)'
set -g default-command bash

run '~/.tmux/plugins/tpm/tpm'
Press CTRL+B then SHIFT+I (capital I) to install plugins after editing the .tmux.conf file.
CTRL+B then SHIFT+P to start (and end) logging in the current pane.
CTRL+B then ALT+P to start (and end) screen capture.

Save complete history:
CTRL + B and ALT + SHIFT + P

Clear pane history:
CTRL + B and ALT + C
Swap Window:
swap-window -s 3 -t 1

+PIL (Feb. 15, 2016, 11:04 a.m.)

For a successful and complete installation of PIL, you need to install these packages before installing PIL:

sudo apt-get install libjpeg-dev libfreetype6 libfreetype6-dev zlib1g-dev

If you're going to install it on python3:
apt-get install python3-dev
If it's for python 2:
apt-get install python-dev
The installation should be finished by now. Do the following if you still get errors and the jpeg library is not recognized by linux:

# ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib
# ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib
# ln -s /usr/lib/x86_64-linux-gnu/ /usr/lib

Now proceed and reinstall PIL: pip install -U PIL

In case of this error:
#include <freetype/fterrors.h>
Create a symlink as follows:
ln -s /usr/local/include/freetype2/ /usr/local/include/freetype

+Undeleting (Aug. 22, 2014, 12:30 p.m.)

1-Install extundelete: apt-get install extundelete

2-Either "unmount" or "remount" the partition as read-only:
sudo mount -t vfat -O remount,ro /dev/sdb /mnt

To remount it back to read-write: (This task is not part of this tutorial. It's just for keeping a note.)
sudo mount -t vfat -O remount,rw /dev/sdb /mnt

3-For restoring the files from the whole partition:
extundelete /dev/sdb1 --restore-all
And for restoring important files quickly, you may use the --restore-file, --restore-files, or --restore-directory options.

+Error - ia32-libs : Depends: ia32-libs-i386 but it is not installable (Aug. 22, 2014, 12:29 p.m.)

The ia32-libs-i386 package is only installable from the i386 repository, which becomes available with the following commands:

dpkg --add-architecture i386
apt-get update

+Driver - Samsung Printer (July 20, 2015, 11:23 p.m.)

Installing My Samsung Printer Driver (SCX-4521F):

1-Add the following repository to /etc/apt/sources.list:
deb debian extra

2-Install the GPG key:
sudo apt-get install suldr-keyring
apt-get update

3-Install these packages:
apt-get install samsungmfp-driver-4.00.39 suld-configurator-2-qt4

+Grub rescue (Aug. 22, 2014, 12:02 p.m.)

I haven't tried it yet, so keep in mind to correct any problems:
mount /dev/sdaX /mnt
grub-install --root-directory=/mnt/ /dev/sda


Another day I just used these commands; some would give me errors, but some would work... and to my surprise it worked:
set prefix=(hd0,1)/boot/grub
insmod (hd0,1)/boot/grub/linux.mod
insmod part_msdos
insmod ext2
set root=(hd0,1)
reboot using CTRL+ALT+DELETE

+Commands - iftop (Aug. 22, 2014, 12:23 p.m.)

iftop (interface top): displays bandwidth usage on a network interface.

Install iftop to see which applications are using/eating up Internet bandwidth.

iftop -i eth1

# The logs from xchat help:
in iftop hit `p` to toggle port display
now you know which port on your machine is connecting out to that domain
now use netstat -nlp to list all pids on which ports are connecting out
you should now know which pid is hitting that domain... provided all traffic originates on your local box
also consider using lsof for this sort of mining

+Error - Cannot Open Display (Aug. 22, 2014, 12:04 p.m.)

export XAUTHORITY=/home/<user>/.Xauthority


Try this new method:
aptitude -r install linux-headers-2.6-`uname -r|sed 's,[^-]*-[^-]*-,,'` nvidia-kernel-dkms nvidia-glx && mkdir /etc/X11/xorg.conf.d ; echo -e 'Section "Device"\n\tIdentifier "My GPU"\n\tDriver "nvidia"\nEndSection' > /etc/X11/xorg.conf.d/20-nvidia.conf

This is the old xorg.conf:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 280.13 ( Wed Jul 27 17:15:58 PDT 2011

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0"
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/psaux"
    Option "Emulate3Buttons" "no"
    Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Unknown"
    ModelName "Unknown"
    HorizSync 28.0 - 33.0
    VertRefresh 43.0 - 72.0
    Option "DPMS"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BusID "PCI:1:0:0"
    # Option "MetaModes" "1280x1024"
    Option "MetaModes" "1920x1080"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

+unrar (Aug. 22, 2014, 12:03 p.m.)

How to use the unrar command
First move the rar file to a directory, then extract it there:
$ unrar e file.rar

List archive contents:
$ unrar l file.rar

Extract all multi-part archives in the current directory:
for file in *.part01.rar; do unrar x ${file}; done;

+Swap file (Aug. 22, 2014, 12:02 p.m.)

How to create a swap file:
1-dd if=/dev/zero of=/swapfile1 bs=1024 count=524288

if=/dev/zero : Read from /dev/zero, a special file that provides as many null characters as needed to build the storage file /swapfile1.
of=/swapfile1 : Write the output to /swapfile1.
bs=1024 : Read and write 1024 bytes at a time.
count=524288 : Copy only 524288 input blocks (524288 x 1024 bytes = 512 MB).

2-mkswap /swapfile1

3-chown root:root /swapfile1
chmod 0600 /swapfile1

4-swapon /swapfile1

5-nano /etc/fstab
Append the following line:
/swapfile1 swap swap defaults 0 0

6-To test/see the free space:
free -m
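The dd arithmetic above can be checked at toy scale without root: 16 blocks of 1024 bytes must produce a 16384-byte file (GNU stat assumed for the size check).

```shell
# Same dd recipe, shrunk: bs=1024 count=16 -> 16 KiB file of null bytes.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1024 count=16 2>/dev/null

size=$(stat -c %s "$IMG")   # GNU stat: %s = size in bytes
echo "$size"
rm -f "$IMG"
```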

+Commands - rm (Aug. 22, 2014, noon)

rm -rfv `find . -iname "*.pyc"`

+Define aliases (Aug. 22, 2014, noon)

Defining alias:
1-Open the file ~/.bashrc and write an alias like this:
alias myvps='ssh -p 54321'
2-Enter this command to make the changes take effect:
source .bashrc
3-Keep in mind that every time a change is done to .bashrc file, you have to reload it with:
source .bashrc
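One wrinkle worth noting: aliases are only expanded in interactive shells by default, so a script has to enable `expand_aliases` to use them. A minimal sketch with a made-up alias:

```shell
# Aliases need 'shopt -s expand_aliases' in non-interactive bash.
# 'greet' is a made-up example alias, defined and used on separate lines.
result=$(bash -c '
shopt -s expand_aliases
alias greet="echo hello-from-alias"
greet
')
echo "$result"
```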

+Commands - mount (Aug. 22, 2014, noon)

mount -t ntfs /dev/sda1 /mnt/exhdd

To mount a floppy image:
sudo mount -t msdos -o loop -o umask=000 ./floppy.img /media/floppy

+Error - Errors were encountered while processing (Aug. 22, 2014, 11:59 a.m.)

E: Sub-process /usr/bin/dpkg returned an error code (1)
rm /var/lib/dpkg/info/samsungmfp-*

+ALSA (Aug. 22, 2014, 11:58 a.m.)

Find ALSA version:
cat /proc/asound/version
I verified that my sound card was installed using these commands:
cat /proc/asound/modules
cat /proc/asound/cards

But there was no sound from my laptop. I ran gstreamer-properties as a normal user (not root) to test the audio device, and saw that ALSA was missing from the plugins section. Searching the repositories, I found and installed gstreamer0.10-alsa; after that I could test the sound card, heard sound, and the problem was solved.

And I of course had to use the command:
alsactl init

To avoid running the above command every time the system is turned on, I made snd-hda-intel the default sound card. (The tutorial is in this same file.)

+Commands - scp (Aug. 22, 2014, 11:43 a.m.)

The scp command allows you to copy files over ssh connections. This is pretty useful if you want to transport files between computers, for example to backup something. The scp command uses the ssh command and they are very much alike. However, there are some important differences.
The scp command can be used in three ways:
1-To copy from a (remote) server to your computer.
2-To copy from your computer to a (remote) server.
3-To copy from a (remote) server to another (remote) server.

In the third case, the data is transferred directly between the servers; your own computer will only tell the servers what to do. These options are very useful for a lot of things that require files to be transferred, so let's have a look at the syntax of this command:
scp examplefile yourusername@yourserver:/home/yourusername/
You can also copy a file (or multiple files) from the (remote) server to your own computer. Let's have a look at an example of that:
scp yourusername@yourserver:/home/yourusername/examplefile .

The dot at the end means the current local directory. This is a handy trick that can be used almost anywhere in Linux. Besides a single dot, you can also type a double dot ( .. ), which is the parent directory of the current directory.
You probably already guessed that the following command copies a file from a (remote) server to another (remote) server:
scp yourusername@yourserver:/home/yourusername/examplefile yourusername2@yourserver2:/home/yourusername2/
Please note that, to make the above command work, the servers must be able to reach each other, as the data will be transferred directly between them. If the servers somehow can't reach each other (for example, if port 22 is not open on one of the sides) you won't be able to copy anything. In that case, copy the files to your own computer first, then to the other host. Or make the servers able to reach each other (for example by opening the port).
Specifying a port with scp:
The scp command acts a little different when it comes to ports. You'd expect that specifying a port should be done this way:
scp -p yourport yourusername@yourserver:/home/yourusername/examplefile .
However, that will not work. You will get an error message like this one:
cp: cannot stat `yourport': No such file or directory
This is caused by the different architecture of scp. It aims to resemble cp, and cp also features the -p option. However, in cp terms it means 'preserve', and it causes the cp command to preserve things like ownership, permissions and creation dates. The scp command can also preserve things like that, and the -p option enables this feature. The port specification should be done with the -P option. Therefore, the following command will work:
scp -P yourport yourusername@yourserver:/home/yourusername/examplefile .
Also note that the -P option must be in front of the (remote) server. The ssh command will still work if you put -p yourport behind the host syntax, but scp won't. Why? Because scp also supports copying between two servers and therefore needs to know which server the -P option applies to.
Copying files from a remote computer using ssh
scp root@ /home/mohsen/Desktop/

To copy from the local machine, to the remote machine, just reverse things:
scp /home/mohsen/Desktop/ root@

+Auto start script at boot time (Aug. 22, 2014, 11:39 a.m.)

To make a script run when the server starts and stops:
First make the script executable:
sudo chmod 755 <path to the script>

Then register it to run at boot:
sudo /usr/sbin/update-rc.d -f <path to the script> defaults

+Hardware - Sound card (Aug. 22, 2014, 11:36 a.m.)

Removing and Re-installing Sound card
sudo apt-get --purge remove linux-sound-base alsa-base alsa-utils
sudo apt-get install linux-sound-base alsa-base alsa-utils

+Network - Server config (Aug. 22, 2014, 11:33 a.m.)

I used this command in rc.local to allow the eth0 get IP:
route add -net netmask gw

Add this to /etc/network/interfaces

Create a file named /etc/resolv.conf and write this command in it:

ifconfig eth0 broadcast

+Backlight (Screen Brightness) (Aug. 22, 2014, 11:32 a.m.)

For solving the backlight brightness problem, go to /etc/default/grub and edit the GRUB_CMDLINE_LINUX_DEFAULT line to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_osi=Linux acpi_backlight=vendor splash"
And then:
Check if graphics card is intel:
ls /sys/class/backlight

You should see something like:
ideapad intel_backlight

Fix backlight:
Create this file: /usr/share/X11/xorg.conf.d/20-intel.conf

Section "Device"
    Identifier "card0"
    Driver "intel"
    Option "Backlight" "intel_backlight"
EndSection

Logout and Login. Done.

+IRC (Aug. 22, 2014, 11:28 a.m.)

1-Join the Freenode network. Open your favorite IRC client and type:

2-Choose a user name or nick. This user name should consist only of the letters from A-Z, the numbers from 0-9 and certain symbols such as "_" and "-". It may have a maximum of 16 characters.

3-Change your user name to the user name you have chosen. Suppose you chose the nickname "awesomenickname". Type the following in the window titled Freenode:
/nick awesomenickname

4-Register your nick or user name. Type the following command and replace "your_password" with a password that will be easy to remember, and replace "your_email_address" with your email address.
/msg nickserv register your_password your_email_address

5-Verify your registration. After you register, you will not be able to identify to NickServ until you have verified your registration. To do this, check your email for an account verification code.

6-Group an alternate nickname with your main one. If you would like to register an alternate nickname, first switch to the alternate nickname that you want while you are identified as the main one, then group your nicks together with this command:
/msg nickserv group

7-Identify with Nickserv. Each time you connect, you should sign in, or "identify" yourself, using the following command:
/msg nickserv identify your_password

You can send private messages anytime after step 4. The advantage of the other steps is to make your registration much more secure. To send a private message, you simply do the following, replacing Nick with the nick or user name of the person you wish to contact privately and message with the message you want to start with:
/msg Nick message

Take care to follow this process in the Freenode window, not directly in a channel. If you type all the commands correctly, nothing should be visible to others, but it's very easy to type something else by mistake, and in so doing, you could expose your password.

Choose a nick between 5 and 8 characters long. This will make it easier to identify and avoid confusion. Choose your nick wisely. Remember that users will identify this name with your person.

User names will automatically expire after 60 days of disuse. This is counted from the last time it was identified with NickServ. If the nickname you want is not in use and you want it, you can contact somebody with Freenode staff to unassign it for you. If you will not be able to use IRC for 60 days you can extend the time using the vacation command (/msg nickserv vacation). Vacation will be disabled automatically next time you identify to NickServ.

To check when a nick was last identified with NickServ, use /msg NickServ info Nick

The Freenode staff have an option enabled to receive private messages from unregistered users so if you wish to request that a nick be freed, you do not have to register another.
To contact a member of the staff, use the command /stats p or /quote stats p if the first doesn't work. Send them a private message using /query nick.
In case there is no available staff member in /stats p, use /who freenode/staff/* or join the channel #freenode using /join #freenode.

Avoid using user names that are brand names or famous people, to avoid conflicts.

If you don't want your IP to be seen to the public, contact FreeNode staff and they can give you a generic "unaffiliated" user cloak, if you are not a member of a project.

If you want to hide your email address, use /msg nickserv set hidemail on.

If you need to change your password, type /ns set password new_password. You will need to be logged in.
# select nick name
/nick yournickname

# better don't show your email address:
/ns set hidemail on

# register (only one time needed) - PW is in clear text!!
/msg NickServ register [password] [email]

# identify yourself to the IRC server (always needed) (xxxx == pw)
/msg NickServ IDENTIFY xxxx

# Join a channel
/join #grass
Registering a channel:
1-To check whether a channel has already been registered, use the command:
/msg ChanServ info #Mohsen or ##Mohsen

2-/join #Mohsen

3-/msg ChanServ register #Mohsen

For gaining OP:
/MSG chanserv op #shahbal Mohsen_Hassani

+zip (Aug. 22, 2014, 11:25 a.m.)

To zip just one file (file.txt) into a zip archive, type the following:
zip file.txt

To zip an entire directory:
zip -r directory

zip -r -e saverestorepassword saverestore
The -e flag will prompt you to specify a password and then verify it. You will see nothing happening in the Terminal as you type the password. This creates a password-protected zip file containing your saverestore directory.
In the above examples, the name of the zip file can be whatever name you choose.


unzip -d music
This will extract the contents of to the music folder. Caveat, the directory must already exist.

Now let's extract the file. In this example I'll extract it to my music folder so I don't overwrite my current data in the saverestore folder. Again, this assumes you've just launched Terminal:
cd /media/internal
unzip -d music

In the above two examples, the -d flag indicates to extract the zip file to the directory specified, music in this case.
For excluding a directory in zip:
zip -r test -x "path/to/exclusion/directory/*"
1-Note that the exclusion path should be in quotes, with a star at the end.
2-The * (star) at the end excludes ALL the sub-files and sub-directories, so don't forget to use it!
3-The path should not start from '/home/mohsen/...'; it should be relative to the directory you run the command from.

+Commands - ssh (Aug. 22, 2014, 11:22 a.m.)

SSH is an abbreviation of Secure SHell. It is a protocol that allows secure connections between computers.
To connect to an ssh service running on another port:
ssh -p yourport yourusername@yourserver

Running a command on the remote server:
Sometimes, especially in scripts, you'll want to connect to the remote server, run a single command and then exit again. The ssh command has a nice feature for this. You can just specify the command after the options, username and hostname. Have a look at this:
ssh yourusername@yourserver updatedb
This will make the server update its searching database. Of course, this is a very simple command without arguments. What if you'd want to tell someone about the latest news you read on the web? You might think that the following will give him/her that message:
ssh yourusername@yourserver wall "Hey, I just found out something great! Have a look at!"
However, bash will give an error if you run this command:
bash: !": event not found
What happened? Bash (the program behind your shell) tried to interpret the command you wanted to give ssh. This fails because there are exclamation marks in the command, which bash will interpret as special characters that should initiate a bash function. But we don't want this, we just want bash to give the command to ssh! Well, there's a very simple way to tell bash not to worry about the contents of the command but just pass it on to ssh already: wrapping it in single quotes. Have a look at this:
ssh yourusername@yourserver 'wall "Hey, I just found out something great! Have a look at!"'
The single quotes prevent bash from trying to interpret the command, so ssh receives it unmodified and can send it to the server as it should. Don't forget that the single quotes should be around the whole command, not anywhere else.
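The quoting rule can be checked locally without any server; here a local `sh -c` stands in for the remote shell, receiving the single-quoted string as one untouched argument (the wall message is the example from above):

```shell
# Single quotes pass the string through verbatim; bash does not touch
# the '!' or the embedded double quotes.
cmd='wall "Hey, I just found out something great!"'

# sh -c stands in for the remote side: it receives the whole string as $1.
result=$(sh -c 'printf %s "$1"' sh "$cmd")
echo "$result"
```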
To remove an old host key from known_hosts:
sudo ssh-keygen -R hostname
Creating ssh key:
ssh-keygen -t rsa
When the server has just been reinstalled, clear the stale host key before the first access:
ssh-keygen -R <ip of server>
SSH Tunnel:
1-Create a user on the server:
adduser <username>

2-Copy the user's ssh_key from his computer to the server:
ssh-copy-id -i ~/.ssh/ <username>@<server_ip>

3-Run this command on user's computer:
ssh -D <an optional port, like 9000> -fN <username>@<server_ip>

4-Change the Connection Settings of Mozilla, SOCKS Host:
localhost 9000

+Error - GPG error: ... NO_PUBKEY (Aug. 22, 2014, 11:21 a.m.)

While running "apt-get update" I encountered an error telling me "GPG error: ... NO_PUBKEY DB141E2302FDF932"
So, for solving the problem I used this command:
apt-key adv --keyserver --recv-keys DB141E2302FDF932

+wget (Aug. 22, 2014, 11:18 a.m.)

Wget is a very cool command-line downloader for Linux and UNIX environments. Don’t be fooled by the fact that it is a command line tool. It is very powerful and versatile and can match some of the best graphical downloaders around today. It has features such as resuming of downloads, bandwidth control, it can handle authentication, and much more.

I’ll get you started with the basics of using wget and then I’ll show you how you can automate a complete backup of your website using wget and cron.

Let’s get started by installing wget. Most Linux distributions come with wget pre-installed. If you manage to land yourself a Linux machine without a copy of wget try the following.
On a Debian based system like Ubuntu:
sudo apt-get install wget

The most basic operation a download manager needs to perform is to download a file from a URL. Here’s how you would use wget to download a file:

# wget

Yes, it's that simple. Now let's do something more fun. Let's download an entire website. Here's a taste of the power of wget. If you want to download a website you can specify the depth that wget must fetch files from. Say you want to download the first level links of Yahoo!'s home page. Here's how you would do that:

# wget -r -l1

Here's what each option does. The -r activates the recursive retrieval of files. The -l stands for level, and the number 1 next to it tells wget how many levels deep to go while fetching the files. Try increasing the number of levels to two and see how much longer wget takes.

Now if you want to download all the "jpeg" images from a website, a user familiar with the Linux command line might guess that a command like "wget *.jpeg" would work. Well, unfortunately, it won't. What you need to do is something like this:

# wget -r -l1 --no-parent -A.jpeg
Another very useful option in wget is the resumption of a download. Say you started downloading a large file and you lost your Internet connection before the download could complete. You can use the -c option to continue your download from where you left it.

# wget -c

Now let's move on to setting up a daily backup of a website. The following command will create a mirror of a site on your local disk. For this purpose wget has a specific option, --mirror. Try the following command, replacing the URL with your website's address.
When the command is done running you should have a local mirror of your website. This makes for a pretty handy tool for backups. Let's turn this command into a shell script and schedule it to run at midnight every night. Open your favorite text editor and type the following. Remember to adapt the path of the backup and the website URL to your requirements.


#!/bin/bash

YEAR=`date +"%Y"`
MONTH=`date +"%m"`
DAY=`date +"%d"`

BACKUP_PATH="/home/backup" # replace path with your backup directory
WEBSITE_URL="" # replace url with the address of the website you want to back up

# Create and move to backup directory
mkdir -p "$BACKUP_PATH/$YEAR/$MONTH/$DAY"
cd "$BACKUP_PATH/$YEAR/$MONTH/$DAY"

wget --mirror ${WEBSITE_URL}

Now save this file as something like and grant it executable permissions:

# chmod +x

Open your cron configuration with the crontab command and add the following line at the end:

0 0 * * * /path/to/

You should have a copy of your website in /home/backup/YEAR/MONTH/DAY every day. For more help using cron and crontab
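The dated directory layout the cron job relies on can be sketched on its own (using a temporary directory here purely for illustration, as a stand-in for /home/backup):

```shell
BACKUP_PATH=$(mktemp -d)                               # stand-in for /home/backup
DIR="$BACKUP_PATH/$(date +%Y)/$(date +%m)/$(date +%d)"
mkdir -p "$DIR"      # -p creates YEAR, MONTH and DAY in one go
ls -d "$DIR"         # the directory wget --mirror would run inside
rm -rf "$BACKUP_PATH"
```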

+VPN (Aug. 22, 2014, 11:15 a.m.)

Configure VPN:
Start by browsing to System » Preferences » Network Connections » VPN.
If you have never setup a VPN connection before there is a good chance that all the buttons, like "Add", are grayed out. Fix this by opening a terminal and running this command:
sudo apt-get install pptp-linux network-manager-pptp
Now go back to the Network Connections window and the VPN tab inside of it; the Add button should now be clickable. Click it, select Point-to-Point Tunneling Protocol (PPTP) in the drop-down and click Create.
Type something like RaptorVPN in for Connection name. For Gateway, enter
Type in the RaptorVPN-provided password and then click Advanced.
In the Authentication section, uncheck all but MSCHAPv2.
In the Security and Compression section, check the box for Use Point-to-Point encryption (MPPE) and select 128-bit (most secure) in the drop-down below it. Then check the box for Allow stateful encryption and click OK and Apply.
If at any point during the VPN setup you see a keyring prompt, click Always Allow.
Restart the network manager by running this command in the terminal:
sudo /etc/init.d/network-manager restart
Now you are ready to take your new RaptorVPN connection for a test drive. Click the network icon in the taskbar and click on your new VPN connection.
A few seconds later you should be successfully connected!

+Change default sound card (Aug. 22, 2014, 11:14 a.m.)

nano /etc/modprobe.d/alsa-base.conf
and add:
options audigy (or whatever it is called) index=0
options logitech (or whatever it is called) index=1
and restart alsa
/etc/init.d/alsa-utils restart
asoundconf set-default-card Xmod
In terminal type
less /proc/asound/modules
That will show you which soundcards occupy which slot and what're their names.
My output is
0 snd_au8830
1 snd_intel8x0
so it should look something like that.
Now identify which cards you don't wanna use and take their names.
In terminal now type
sudo nano /etc/modprobe.d/alsa-base.conf
Find the place where it says something like
# Prevent abnormal drivers from grabbing index 0
and in the list below add
options snd_whateveryourcardnameswere index=-2
Since you have two card you want to blacklist you add two lines with different names then.
Now save /etc/modprobe.d/alsa-base.conf and reboot the computer.

+Commands - lsof (Aug. 22, 2014, 11:12 a.m.)

lsof -i:<port>
Example: lsof -i:80
Displays the process which uses port 80.

+VGA Switcheroo (Aug. 22, 2014, 11:11 a.m.)

Once you've ensured that vga_switcheroo is available, you can use these options to switch between GPUs.
echo ON > /sys/kernel/debug/vgaswitcheroo/switch
Turns on the GPU that is disconnected (not currently driving outputs), but does not switch outputs.
echo IGD > /sys/kernel/debug/vgaswitcheroo/switch
Connects integrated graphics with outputs.
echo DIS > /sys/kernel/debug/vgaswitcheroo/switch
Connects discrete graphics with outputs.
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
Turns off the graphics card that is currently disconnected.
There are also a couple of options that are useful from inside an X-Windows session:
echo DIGD > /sys/kernel/debug/vgaswitcheroo/switch
Queues a switch to integrated graphics to occur when the X server is next restarted.
echo DDIS > /sys/kernel/debug/vgaswitcheroo/switch
Queues a switch to discrete graphics to occur when the X server is next restarted.

+Changing the boot count down time (Aug. 22, 2014, 11:07 a.m.)

nano /etc/default/grub

+Commands - ps (Aug. 22, 2014, 11:06 a.m.)

ps
Lists the processes of the current terminal session

ps -A
Displays all processes

kill + PID of process
Terminates a process

+Changing the attributes of a file/directory (Aug. 22, 2014, 11:05 a.m.)

Use the chmod command.
The attributes are read/write/execute, set separately for owner/group/other, with the values being:
read = 4, write = 2, execute = 1 (each of the three digits is the sum of these values).

To give everyone execute only access to a file, you'd
chmod 111

or all permissions, it'd be
chmod 777

Root only r/w/x would be
chmod 700

First digit = owner
Second digit = group
Third digit = other
(each digit is the sum of read=4, write=2, execute=1)
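The digit arithmetic can be checked directly (a sketch; `stat -c '%a'` prints the octal mode on GNU/Linux):

```shell
# 7 = 4+2+1 (rwx), 5 = 4+1 (r-x), 4 = read only (r--): owner/group/other.
f=$(mktemp)
chmod 754 "$f"
stat -c '%a' "$f"    # prints: 754
ls -l "$f"           # permission string shows -rwxr-xr--
rm -f "$f"
```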

+Commands - ls (Aug. 22, 2014, 11:04 a.m.)

ls -r
Reverse order while sorting

ls -F
Shows executable files with '*' sign and link files with '@'

ls -t
Sort by time

+Commands - echo (Aug. 22, 2014, 11:03 a.m.)

echo + message
Displays the message on the screen.

echo + message > + filename
If the file exists, its content is overwritten with the "message". If the file doesn't exist, it is created and the "message" is written into it.

echo + message >> + filename
Adds the "message" to the end of the file.
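A quick way to see the difference between > and >> with a temporary file:

```shell
f=$(mktemp)
echo "first"  >  "$f"    # creates the file (or overwrites it)
echo "second" >> "$f"    # appends: the file now has two lines
echo "third"  >  "$f"    # overwrites again: only "third" remains
cat "$f"                 # prints: third
rm -f "$f"
```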

+Commands - head and tail (Aug. 22, 2014, 10:57 a.m.)

head: prints the first part of files
head -n 4 filename
Prints the first 4 lines of the file.

tail: prints the last part of files
tail -n 4 filename
Prints the last 4 lines of the file.
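For example, on a five-line file:

```shell
f=$(mktemp)
printf '%s\n' one two three four five > "$f"
head -n 2 "$f"    # prints: one, two
tail -n 2 "$f"    # prints: four, five
rm -f "$f"
```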

+Bash - Adding commands to bash (Aug. 22, 2014, 10:54 a.m.)

1-Using this command, you can see the paths that Linux uses to find the commands:
env | grep PATH

2- Now you should add the address of your program to this PATH, using the 'export' command.
If you simply run
export PATH=address-of-program
the existing paths will be replaced, and the terminal will no longer recognize the standard commands.

So what you should do instead is:
copy what "env | grep PATH" prints and append the address of the specific program, like this:
export PATH=/usr/local/sbin:/usr/local/bin:...:/home/mohsen/Programs/Debian/MyBashCommands
(equivalently: export PATH="$PATH:/home/mohsen/Programs/Debian/MyBashCommands")

The directory MyBashCommands should already exist, and only the executable files should be copied into it.
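The whole flow can be tried safely with a throwaway directory (mycmd is a hypothetical script name used only for this sketch):

```shell
dir=$(mktemp -d)                                # stand-in for MyBashCommands
printf '#!/bin/sh\necho hello\n' > "$dir/mycmd" # a tiny executable script
chmod +x "$dir/mycmd"
export PATH="$PATH:$dir"                        # append, don't replace
mycmd                                           # prints: hello
rm -rf "$dir"
```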

+Kernel - Remove (Aug. 22, 2014, 10:53 a.m.)

Delete the files/directories:
/boot/vmlinuz-*kernel version*
/boot/initrd-*kernel version*
/boot/config-*kernel version*
/boot/*kernel version*

/lib/modules/*kernel version*

/var/lib/initramfs-tools/*kernel version*

update-initramfs -u

+Kernel - Update (Aug. 22, 2014, 10:51 a.m.)

First way:

Copy kernel to /usr/src
tar -xvf kernel-source.tar.bz2
cd kernel-source
mkdir ../build
make clean
make mrproper
make O=../build menuconfig
make -j3 O=../build
make O=../build modules_install install
cd /boot/
mkinitramfs -v -o linux_version // if it didn't create initrd.img+linux_version, then use the following command
update-initramfs -u
//update-initramfs -c -k linux_version // to see the list of available versions go to /lib/modules
Second Way:

1-What to install before starting:
kernel-source-2.4.18 (or whatever kernel sources you will be using)
tk8.0 or tk8.1 or tk8.3
bin86 (for building 2.2.x kernels on PCs)

2-Expanding the source tarball
Copy the kernel-source to /usr/src and unzip it using the following command:
tar -jxf kernel-source-2.4.18.tar.bz2

3-Setting up the symlink
ln -s kernel-source-2.4.18 linux

4-Checking Current Minimal Requirements
The part of "Current Minimal Requirements" should be studied and the requirements should be installed.

5-Configuring the kernel:
make xconfig
make menuconfig
This command should display a long list of available kernel elements so that we can select what is to be compiled.

This command makes the system prepare the kernel using the selected elements; this step might take hours to finish.

7-Check in the same /usr/src address to see if the new Kernel-image-2.6.38_...Custom.deb is created!

8-Making the kernel image:
fakeroot make-kpkg clean
fakeroot make-kpkg --append-to-version=.030320 kernel_image

9-Installing the kernel-image package:
dpkg -i kernel-image-

10-echo "kernel-image- hold" | dpkg --set-selections
After this command, when you use this command "dpkg --get-selections | grep kernel-image", the output should be like this: "kernel-image- hold"

11-Removing the symlink:
cd /usr/src
rm linux

12-(Optional) Removing old kernels:
cd /boot
dpkg -P kernel-image-
dpkg -P pcmcia-modules-

13-Updating Grub
update-initramfs -c -k 2.6.38-1-amd64 // to see the list of available versions go to /lib/modules



+Driver - A site for checking and reporting device drivers (Aug. 22, 2014, 10:49 a.m.)

+Fan (Aug. 22, 2014, 10:48 a.m.)

echo -n 3 > /proc/acpi/fan/FAN/state
The value 3 may need to be 1 or 0.
0 turns the fan on; other values turn it off.

+Dictionary - StarDict (Aug. 22, 2014, 10:42 a.m.)

sdcv is the console version of Stardict.

apt-get install sdcv

Install downloaded dictionaries:
Make the directory where sdcv looks for the dictionary:
sudo mkdir -p /usr/share/stardict/dic/

-l: display list of available dictionaries and exit.
-u: for search use only dictionary with this bookname
-n: for use in scripts
--data-dir path/to/directory: Use this directory as the path to the stardict data directory. This means that sdcv searches for dictionaries in the data-dir/dic directory.

Converting Babylon glossaries to StarDict dictionary:
The output of this command is three files:

Place all these 3 files in /usr/share/stardict/dic/ creating a separate folder for each dictionary.

+Shutting down (Aug. 22, 2014, 10:42 a.m.)

shutdown -r now
shutdown -r 7:00

+Directories (Aug. 22, 2014, 10:28 a.m.)

/bin - Essential user commands
The /bin directory contains essential commands that every user will need. This includes your login shell and basic utilities like ls. The contents of this directory are usually fixed at the time you install Linux. Programs you install later will usually go elsewhere.

/usr/bin - Most user commands
The /usr hierarchy contains the programs and related files meant for users. (The original Unix makers had a thing for abbreviation.) The /usr/bin directory contains the program binaries. If you just installed a software package and don't know where the binary went, this is the first place to look. A typical desktop system will have many programs here.

/usr/local/bin - "Local" commands
When you compile software from source code, the installed files are usually kept separate from those provided as part of your Linux distribution. That is what the /usr/local/ hierarchy is for.

/sbin - Essential System Admin Commands
The /sbin directory contains programs needed by the system administrator, like fsck, which is used to check file systems for errors. Like /bin, /sbin is populated when you install your Linux system, and rarely changes.

/usr/sbin - Non-essential System Administration Programs (binaries)
This is where you will find commands for optional system services and network servers. Desktop tools will not show up here, but if you just installed a new mail server, this is where to look for the binaries.

/usr/local/sbin - "Local" System Administration Commands
When you compile servers or administration utilities from source code, this is where the binaries normally will go.

Libraries are shared bits of code. On Windows these are called DLL files (Dynamic Loading Libraries). On Linux systems they are usually called SO (Shared Object) files. As to location, are you detecting a pattern yet? There are three directories where library files are placed: /lib, /usr/lib, and /usr/local/lib.

Documentation is a minor exception to the pattern of file placement. Pages of the system manual (man pages) follow the same pattern as the programs they document: /man, /usr/man, and /usr/local/man. You should not access these files directly, however, but by using the man command.
Many programs install additional documentation in the form of text files, HTML, or other formats that are not man pages. This extra documentation is stored in directories under /usr/share/doc or /usr/local/share/doc. (On older systems you may find this under /usr/doc instead.)

+configure (Aug. 22, 2014, 10:25 a.m.)

When installing a package from source, the first phase is `./configure`. Here is some information about it:

The primary job of the configure script is to detect information about your system and "configure" the source code to work with it.
Usually it will do a fine job at this. The secondary job of the configure script is to allow you, the system administrator, to customize the software a bit.
Running ./configure --help should give you a list of command line arguments you can pass to the configure script. Usually these extra arguments are for enabling or disabling optional features of the software, and it is often safe to ignore them and just type ./configure to take the default configuration.

There is one common argument to configure that you should be aware of. The --prefix argument defines where you want the software installed. In most source packages this will default to /usr/local/ and that is usually what you want. But sometimes you may not have root access to the system, and you would like to install the software into your home directory. You can do this with the last command in the example, ./configure --prefix=/home/vince (where vince is your user name).

+Tarballs (Tar Archive) (Aug. 22, 2014, 10:21 a.m.)

tar -xzvf filename.tar.gz

x : eXtract
z : deal with a gzipped file (j would handle a bzipped one)
v : verbose output
f : read from a file (rather than a tape device)


Creating a tar File:
tar -cvf output.tar /dirname

tar -cvf Projects.tar Projects --exclude=Projects/virtualenvs --exclude=".buildozer" --exclude=".git"

tar -cvf output.tar /dirname1 /dirname2 filename1 filename2

tar -cvf output.tar /home/vivek/data /home/vivek/pictures /home/vivek/file.txt

tar -cvf /tmp/output.tar /home/vivek/data /home/vivek/pictures /home/vivek/file.txt


-c : Create a tar ball.
-v : Verbose output (show progress).
-f : Output tar ball archive file name.
-x : Extract all files from archive.tar.
-t : Display the contents (file list) of an archive.


Create a tar Archive File:
tar -cf abcd.tar /home/mohsen/abcd

Untar tar Archive File:
tar -xf abcd.tar
tar -xf abcd.tar -C /home/mohsen/Temp/

List Content of tar Archive File:
tar -tf abcd.tar
tar -tvf abcd.tar

Untar Single file from tar File:
tar -xf abcd.tar x.png
tar --extract --file=abcd.tar x.png

Untar Multiple files:
tar -xf abcd.tar "x.png" "y.png" "z.png"


Create tar.gz Archive File (compressed gzip archive):
tar -czf abcd.gz /home/mohsen/abcd

Uncompress tar.gz Archive File:
tar -xf abcd.tar.gz
tar -xf abcd.tar.gz -C /home/mohsen/Temp/

List Content tar.gz Archive File:
tar -tvf abcd.tar.gz

Untar Single file from tar.gz File:
tar -zxf abcd.tar.gz x.png
tar --extract --file=abcd.tar.gz x.png

Untar Multiple files:
tar -zxf abcd.tar.gz "x.png" "y.png" "z.png"


Create tar.bz2 Archive File:

bzip2 compression produces smaller archive files than gzip, but it takes more time to compress and decompress files.

tar -cjf abcd.tar.bz2 /home/mohsen/abcd

Uncompress tar.bz2 Archive File:
tar -xf abcd.tar.bz2

List content tar.bz2 archive file:
tar -tvf abcd.tar.bz2

Untar single file from tar.bz2 File:
tar -jxf abcd.tar.bz2 home/mohsen/x.png
tar --extract --file=abcd.tar.bz2 /home/mohsen/x.png

Untar multiple files:
tar -jxf abcd.tar.bz2 "x.png" "y.png" "z.png"


Extract group of files using wildcard:
tar -xf abcd.tar --wildcards '*.png'
tar -zxf abcd.tar.gz --wildcards '*.png'
tar -jxf abcd.tar.bz2 --wildcards '*.png'


Add files or directories to tar archive file:
Use the option r (append)

tar -rf abcd.tar m.png
tar -rf abcd.tar images

The tar command doesn't have an option to add files or directories to an existing compressed tar.gz or tar.bz2 archive file. If we try, we will get the following error:
tar: This does not look like a tar archive
tar: Skipping to next header


Create a tar archive using xz compression:
tar -cJf abcd.tar.xz /path/to/archive/

tar xf abcd.tar.xz


Create an archive keeping the absolute path names (-P):
tar -cf /home/mohsen/Temp/abcd.tar -P /home/mohsen/Temp/abcd
tar -cPf /home/mohsen/Temp/abcd.tar /home/mohsen/Temp/abcd


Tar Usage and Options:

c – create an archive file.
x – extract an archive file.
v – show the progress of the archive file.
f – filename of the archive file.
t – view the contents of the archive file.
j – filter the archive through bzip2.
z – filter the archive through gzip.
r – append or update files or directories in an existing archive file.
W – verify an archive file.
--wildcards – specify patterns to match in the tar command.

-P (--absolute-names) – don't strip leading '/'s from file names


tar -cJf my_folder.tar.xz my_folder
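Putting the create/list/extract steps together in one throwaway run (paths and filenames here are illustrative):

```shell
work=$(mktemp -d)
mkdir "$work/data"
echo "hi" > "$work/data/a.txt"

tar -czf "$work/data.tar.gz" -C "$work" data   # create a gzip-compressed archive
tar -tzf "$work/data.tar.gz"                   # list its contents
mkdir "$work/out"
tar -xzf "$work/data.tar.gz" -C "$work/out"    # extract into another directory
cat "$work/out/data/a.txt"                     # prints: hi
rm -rf "$work"
```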

+apt-get (Aug. 22, 2014, 10:21 a.m.)

apt-get upgrade
Upgrades the installed software

apt-get -s upgrade
Simulates an upgrade, i.e. shows which packages would be updated.

+Search for text in files (Aug. 9, 2015, 9:45 p.m.)

find . -name "*.txt" | xargs grep -i "text_pattern"
find / -type f -exec grep -l "text-to-find-here" {} \;
grep word_to_find file_name -n --color
The --color option highlights the matched words
grep "<the word or text to be searched>" / -Rn --color -T
/: The location to be searched
R: Search in recursive mode
n: Display the number of the line in which the occurrence word or text is located
color: Display the search result colored
T: Separate the search result with a tab
l: stands for "show the file name, not the result itself"
grep -Rin "text-to-find-here" /
grep --color -Rin "text-to-find-here" / (to make it colorful)
egrep -w -R 'word1|word2' ~/projects/ (for two words)

i stands for case-insensitive matching (ignore upper/lower case)
w stands for matching whole words only
Find specific files and search for specific words:

find . -name '*.py' -exec grep -Rin 'resize' {} +
Finds the word `resize` in python files.
find -iname "*.py" | xargs grep -i django
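A self-contained run of the two main patterns above (the filenames are illustrative):

```shell
work=$(mktemp -d)
printf 'import django\n' > "$work/app.py"
printf 'nothing here\n'  > "$work/notes.txt"

grep -Rin "django" "$work"                              # recursive, case-insensitive, with line numbers
find "$work" -name '*.py' -exec grep -l "django" {} +   # only report matching .py files
rm -rf "$work"
```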

+dpkg (Aug. 22, 2014, 10:19 a.m.)

dpkg --get-selections
To get list of all installed software

dpkg-query -W
To get list of installed software packages

dpkg -l
Description of installed software packages

+Driver - See PCI devices along with their kernel modules (device drivers) (Aug. 22, 2014, 10:05 a.m.)

lspci -k

It first shows you all the PCI devices attached to your system and then tells you what kernel modules (device drivers) are being used by them.

+sources.list (Aug. 22, 2014, 9:58 a.m.)

# Craigevil's Giant Debian /etc/apt/sources.list Updated October 24, 2012. Added siduction XFCE 4.10 repo
# This list is for Debian if you are using Ubuntu do not use this list.
# If you notice any repos not working please let me know in irc in #smxi on
# If you maintain a Debian repo and would like it added to the list email me at craigevil at gmail dot com

See for information about Debian GNU/Linux.
Three Debian releases are available on the main site:

Debian 6.0, or Squeeze. Access this release through dists/stable
Debian 6.0: 6.0.6 released. September 29th, 2012.

Testing, or Wheezy. Access this release through dists/testing. The
current tested development snapshot is named wheezy. Packages which
have been tested in unstable and passed automated tests propagate to
this release.

Unstable, or sid. Access this release through dists/unstable. The
current development snapshot is named sid. Untested candidate
packages for future releases.

Older releases of Debian are at

# Some helpful links:
# Presents of God Ministry -
# Debian GNU/Linux Installation Guide :
# Debian HCL; Debian GNU/Linux device driver check & report -
# The Debian Administrator's Handbook -
# Debian Social Contract -
# Debian -- Reasons to Choose Debian -
# Official Debian mirrors
# Also at
# Debian mirrors HTTP redirector -
# Debian oldstable repo
# Helps you get your packages into Debian
# Basics of the Debian package management system -
# Debian package management :
# Debian package management tools
# Newbiedoc :
# Aptitude user's manual -
# Apt - Debian Wiki -
# The APT, Dpkg Quick Reference
# Secure APT -
# Aptitude - Debian Wiki -
# SourcesList - Debian Wiki -
# AptPreferences - Debian Wiki :
# Apt-Pinning for Beginners :
# Search Debian -- Packages -
# Unofficial APT repositories -
# Debian-Database.ORG - Unofficial Debian Repositories Collected -
# UnofficialRepositories - Debian Wiki -
# Debian infographic :
# smxi - unofficial Debian maintenance script
# Exoodles multimedia installer script
# Howto: Set up and Maintain a Mixed Testing/Unstable System :
# Howto get newer package versions for Debian Stable -
# Grokking Debian GNU/Linux - :

## Start Repository List ##


## Debian Unstable ##
# Debian sid FAQ -
# There is no security, volatile or backports repo for unstable.
# Unstable Sid
#deb unstable main contrib non-free
# Unstable Sources
#deb-src unstable main contrib non-free

## Experimental ##
# Debian experimental
#deb experimental main contrib non-free

## Debian Testing ##
# Testing
#deb testing main contrib non-free
#deb-src testing main contrib non-free

# Testing Security
#deb wheezy/updates main contrib non-free
#deb-src wheezy/updates main contrib non-free

#Testing Proposed Updates
#deb testing-proposed-updates main contrib non-free
#deb-src testing-proposed-updates main contrib non-free

## Debian Stable ##
# Stable
#deb squeeze main contrib non-free
# Stable Sources
#deb-src squeeze main contrib non-free

# Security Updates
#deb squeeze/updates main contrib non-free

# Please note: The debian-volatile project has been discontinued with the Debian "Squeeze" release.
# See for details.
# Debian Volatile is now squeeze-updates
# Squeeze-updates
#deb squeeze-updates main contrib non-free

# Debian Stable Backports
# For information visit -
# deb squeeze-backports main contrib non-free
# Squeeze-backports-sloppy
# deb squeeze-backports-sloppy main contrib non-free

# Debian -- The "proposed-updates" mechanism -
# Also see
# deb squeeze-proposed-updates main
############################ End Debian Stable ##############################

## KDE/QT ##
# Debian Qt/KDE semi-official package repository -
# apt-get install pkg-kde-archive-keyring
# deb experimental-snapshots main
# deb-src experimental-snapshots main

## Emdebian ##
# Emdebian -- Cross-development toolchains -
# Also see
# Secure APT:apt-get install emdebian-archive-keyring
# deb squeeze main
# deb testing main
# deb unstable main

## Mono ##
# Mono for Debian -
# Unreleased preview packages for Debian unstable (sid)
# deb ./
# Debian/EXPERIMENTAL Current preview packages of Mono
# deb /

## Mentors ##
# -
# Repo for building packages from sources that aren't in the normal Debian repos
# deb-src unstable

## Mozilla Debian ##
# Debian Mozilla team APT archive -
# Iceweasel, Icedove, Iceape
# Secure APT: apt-get install pkg-mozilla-archive-keyring
# To use this archive, you need to add the following entry in /etc/apt/sources.list
# where release is the Debian release and pkg-ver the app version release,beta, aurora
# deb $release $pkg-$ver



Disclaimer: This is experimental software. Use at your own risk.
As with ANY software you download from a public server, you should be extremely careful with what you install.
craigevil cannot be held liable under any circumstances for damage to hardware or software, lost data, or other direct or indirect damage resulting from the use of this list. Packages may not be compatible with official Debian packages or may even be severely broken and cause damage to your system!
You have been warned.

## Liquorix Kernel by damentz .
# Secure Apt: apt-get install '^liquorix-([^-]+-)?keyring.?'
# Latest "stable" kernel
# deb sid main
# RC/Beta kernels
# deb sid main future

# Kernelsec, Debian and Ubuntu GrSecurity packages -
# Secure APT: Download the repository's gpg key , install it: apt-key add kernel-security.asc
# deb kernel-security/

# Linux-RT for Debian -
# Secure APT: apt-get install pengutronix-archive-keyring
# deb sid main contrib non-free

# Bumblebee Debian repository -
# Secure APT: wget -O - | apt-key add -
# deb sid main contrib
# deb-src sid main

# Google APT Repositories #
# wget -q -O - | apt-key add -
# or (gpg --keyserver --recv A040830F7FAC5991 && gpg --export --armor A040830F7FAC5991 | sudo apt-key add - )
# Google Chrome repo
# deb stable main
# Google Talk browser plugin
# deb stable main
# Google Earth
# deb stable main
# Google's Music Manager
# deb stable main

# Debian Multimedia Packages -
# DMM mirror list
# Note new updated repo line and keyring
# Secure apt: apt-get install deb-multimedia-keyring
# Debian Stable
# deb squeeze main non-free
# deb squeeze-backports main
# Debian Testing
# deb testing main non-free
# Debian Unstable/sid
# deb sid main non-free
# deb-src sid main
# Experimental Staging
# deb experimental main

# Gnash
# For Debian: wget | apt-key add gnashdev.key
# deb lenny main
# deb squeeze main
# deb sid main

# A shell script to automate the retrieval and installation of the Oracle (Sun) Java Runtime Environment
# Secure APT: apt-key adv --keyserver --recv-keys 5CB26B26
# deb debs all

# Spotify -
# only available to Spotify Premium and Spotify Unlimited subscribers
# Secure APT - gpg --keyserver --recv-keys 4E9CFF4E
# gpg --export 4E9CFF4E |sudo apt-key add -
# deb stable non-free

# Goggles Music Manager
# Also in official Debian repos
# Secure apt: apt-get install progchild-keyring
#deb stable main
#deb-src stable main

# Gmusicbrowser for Debian Stable
# It is already in normal Debian repos
#deb ./

# sdlmame, sdlmess (both packages are in normal Debian repos)
#deb lenny non-free
#deb-src lenny non-free

# Minitube
# Minitube is in Debian repos

# RareWares/Debian Multi-Media Repository for Unstable
# Info
#deb ./
# RareWares/Debian Multi-Media Repository for Unstable - Experimental Staging
#deb ./

# My Media System
# deb binary/
# deb-src source/

# Installing tovid/Debian - Tovid Wiki -
# Replace BRANCH with your Debian version: stable, testing, or unstable (the code names work as well, so you can use etch, lenny, or sid if you prefer)
# deb BRANCH contrib
# deb-src BRANCH contrib

# Freevo
# deb unstable main

# Both CinelerraCV 2.1.0 and CinelerraHV 4.2 packages are available from Debian-multimedia.
#x86 Debian packages are built from SVN by Andraz Tori,
#To install the package add this line to your sources list:
#For i386 processors:
deb ./
#For Pentium4 processors:
deb ./
#For Athlon processors:
#deb ./
# Apt-source:
#deb-src ./
#AMD64 Debian packages are built from SVN by Valentina Messeri.
#To install the package add this line to your sources list:
#deb ./
#deb-src ./

# PS3 Media Server • Easy setup guide for Ubuntu / Debian -
# the info below is outdated see the forum post
# ps3 media server -
# Secure APT - wget -q -O - | sudo apt-key add -
# deb unstable main contrib non-free
# deb-src unstable main contrib non-free

# Plex Media Server -
# deb stable main

# Chromium also in the siduction repos , also WINE, GIMP and other various packages
# Chromium-browser is in Debian repos
# towo's repository for aptosid
# secure APT - apt-get install frickelplatz-archive-keyring frickelplatz-keyring frickelplatz-keyrings
# deb sid main contrib non-free

# Iceweasel release/beta/aurora packages for Debian
# Debian Mozilla team APT archive -
# To use this archive, you need to add the following entry in /etc/apt/sources.list
# deb $release $pkg-$ver

# Opera Web Browser Official packages.
# Secure APT - wget -O - | apt-key add -
# Opera Browser - Production release
#deb stable non-free
#deb testing non-free
#deb unstable non-free
# Opera Browser - Beta release
#deb stable non-free
#deb testing non-free
#deb unstable non-free

# JonDonym anonymous proxy servers
# Add the following line to /etc/apt/sources.list. Replace DISTRI by the name of your distribution.
# Secure Apt - apt-key add JonDos_GmbH.asc
# deb DISTRI main
# HowTo install the mix software using DEB packages
# Replace DISTRI by the name of your distribution. At the moment lenny, squeeze, sid, intrepid, jaunty, karmic and lucid are supported.
# deb DISTRI main

# I2P for more info see

# Tor
# Secure APT - gpg --keyserver --recv 886DDD89
# gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | apt-key add -
# put the codename of your distribution (i.e. etch, lenny,squeeze, sid) in place of <DISTRIBUTION>.
# deb <DISTRIBUTION> main
# Experimental
#deb <DISTRIBUTION> main
#deb experimental-<DISTRIBUTION> main

# Peerguardian , moblock, mobloquer
#gpg --keyserver --recv 58712F29
#gpg --export --armor 58712F29 | sudo apt-key add -
#Debian (stable):
#deb squeeze main
#deb-src squeeze main
#Debian (testing/sid) use sid repo for testing and sid
#deb sid main
#deb-src sid main

### DESKTOPS ###
# Debian Desktop -
# sources.list [debian-desktop] -
# The packages provided by this site are unofficial.
# The packages provided on this site may cause trouble when upgrading to official Debian packages.
# KDE 4.6.x (squeeze, amd64, i386, powerpc) with extra packages
# taken from testing. created according to description
# deb debian-desktop main
# deb-src debian-desktop main
# XFce 4.8 (squeeze, i386, amd64)
# deb debian-desktop main
# deb-src debian-desktop main

# Debian Qt/KDE semi-official package repository
# Secure APT- aptitude install pkg-kde-archive-keyring
# deb experimental-snapshots main
# deb-src experimental-snapshots main

# MATE Desktop Environment Repository :
# Download [] -
# Secure APT: apt-get install mate-archive-keyring
# To install MATE, add the following line to your /etc/apt/sources.list file
# deb wheezy main

# Cinnamon Themes, Applets And Extensions LMDE :
# Secure APT : sudo apt-key adv --keyserver --recv-keys 4FA9719D
# deb oneiric main
# deb-src oneiric main

# Debian Trinity Repository KDE 3.5.13
# Documentation < Trinity Desktop Environment Wiki -
# Secure APT - apt-key adv --keyserver --recv-keys 2B8638D0
# For Squeeze (4 lines):
# deb squeeze main
# deb-src squeeze main
# deb squeeze main
# deb-src squeeze main

# XFCE -
# Debian's XFCE Group
# Secure APT: wget -O - | sudo apt-key add -
# deb unstable/
# deb-src unstable/
# deb-src UNRELEASED/

# ROX Desktop
# For info see:
# deb binary/

# i3 - improved tiling WM -
# Also in official Debian repos
# i3: Debian and Ubuntu repositories -
# Secure APT: apt-get --allow-unauthenticated install i3-autobuild-keyring
# deb sid main

# Enlightenment E17
# Enlightenment binary packages -
#deb lenny main extras
#deb squeeze main extras
#deb sid main extras

# Cathbards Debian/Mint Artwork repo
# Secure Apt: apt-get install cathbard-keyring
# deb ./

# Easy Linux for the Elderly
# WARNING: only works with the old sun-java6-jre
# Brochure
# apt-get install eldy
# deb binary/

# Glx-Dock / Cairo-Dock - From the repository -
# Secure APT: wget -q -O- | apt-key add -
# don't forget to replace #CODENAME# by stable/testing/unstable
# deb #CODENAME# cairo-dock ## Cairo-Dock Stable

## Cairo Composite Manager -
# Caution packages Not signed, also in Solusos repo
# apt-get install cairo-compmgr cairo-compmgr-plugins
# deb sid main

## HostingControlPanels - Debian Wiki -

# Dotdeb - The repository for Debian-based LAMP servers Look in first!
# Mirrors:
# deb stable all
# deb-src stable all
# PHP 5.3
# deb stable all
# deb-src stable all

# Oracle Virtualbox non-ose
# Secure Apt: wget -q -O- | apt-key add -
# deb lenny contrib non-free
# deb squeeze contrib non-free

# Debian FAI - Fully Automatic Installation -
# Packages also in official Debian repos
# The packages are signed with the gpg key of Thomas Lange. This is how you can get the gpg key in your apt configuration
# Secure APT - gpg -a --recv-keys AB9B66FD; gpg -a --export AB9B66FD | apt-key add -
# deb squeeze koeln

# Minecraft Overviewer -
# APT Repository -
# Secure APT: Warning appears to be unsigned
# deb ./

# X2Go - everywhere@home :
# Secure APT: apt-key adv --recv-keys --keyserver E1F958385BFE2B6E
# X2Go Repository
# deb squeeze main
# X2Go Repository (sources)
# deb-src squeeze main

# Baruwa APT Repo -
# Baruwa (Swahili for letter or mail) is a web 2.0 MailScanner front-end
# Secure APT: wget -O - | apt-key add -
# deb unstable main

# OpenVAS - Open Vulnerability Assessment System -
# OpenVAS - Install OpenVAS Packages -
# Secure APT: apt-key adv --keyserver hkp:// --recv-keys BED1E87979EAFD54
# deb ./

# PageSpeed Downloads - Make the Web Faster —
# Provides mod_pagespeed for Apache
# deb stable main

# Server monitoring - Server Density -
# Description: The Server Density monitoring agent.
# deb lenny main

# Jenkins CI -
# You need to have a JDK and JRE installed. openjdk-6-jre and openjdk-6-jdk are suggested.
# Debian Repository for Jenkins -
# Secure APT: wget -q -O - | sudo apt-key add -
# deb binary/

# Ajenti -
# Ajenti -
# Secure APT:wget -O- | apt-key add -
# deb main main

# Froxlor - Server Management Panel
# Froxlor -
# Secure apt: gpg --keyserver --recv-keys 4F9E9BBC && apt-key add /root/.gnupg/pubring.gpg
# deb <squeeze|wheezy|sid> main

# Vyatta4People.Org | Vyatta router automation and more! -
# Main Page - Unofficial Vyatta Wiki -
# Repo! | Vyatta4People.Org -
# deb experimental main

# | The Open Source Networking Community -
# Understanding the Vyatta Package Repositories | -
# deb stable main
# deb testing main

# Percona Server 5.5 apt Repository
# Secure APT - gpg --keyserver hkp:// --recv-keys 1C4CBDCDCD2EFD2A
# gpg -a --export CD2EFD2A | sudo apt-key add -
# Add this to /etc/apt/sources.list, replacing VERSION with the name of your distribution:
# deb VERSION main
# deb-src VERSION main

# Cloudkick Agent -
# Secure apt: wget | apt-key add -
# deb lucid main

# CDH3 Installation - Cloudera Support -
# CDH3 Installation -
# deb <RELEASE>-cdh3 contrib
# deb-src <RELEASE>-cdh3 contrib

# Aegir hosting system -
# Automatic install on Debian
# Secure APT: wget -q -O- | sudo apt-key add -
# deb stable main

# Nginx -
# ALSO in official Debian repos
# Install -
# deb squeeze nginx
# deb-src squeeze nginx

# Varnish Community -
# Installation on Debian | Varnish Community -
# Secure APT : wget | apt-key add -
# deb squeeze varnish-3.0

# OpenVPN Community -
# Secure APT: wget -O -|apt-key add -
# deb squeeze main
# OpenVPN Access
# deb lucid main
# OpenVPN Snapshot repo
# deb squeeze main

# Parallels Plesk
# Also see How to install Plesk control panel on Debian -
# deb lenny all

# Hudson CI
# Debian Repository for Hudson
# Also see Hudson Debian packages
# Secure Apt - wget -q -O - | sudo apt-key add -
# deb binary/

# | Download -
# Replace the i386 with amd64 if you are running a 64 bits distro
# apt-get install openpanel
# deb \openpanel main

# QRM keeps your data center up and running -
# Updated Package-Repository-URLs | openQRM -
# deb ./

# The Comprehensive Perl Archive Network -
# -- debified CPAN packages
# note these packages are not signed
#deb unstable main

# DebGem (beta), Ruby packages for Debian -
# Usage & FAQ :: DebGem (beta), Ruby packages for Debian -
# Secure APT: wget -q -O- | sudo apt-key add -
# deb debian-4.0 rubyforge
# Rubygems from github
# deb debian-4.0 github
# Rubygems from
# deb debian-4.0 webget

# Webmin
# See about the removal from Debian
# Secure APT - wget | apt-key add jcameron-key.asc
# deb sarge contrib
# deb sarge contrib

#On x86_64 systems: deb lenny/
#On x86 systems: deb lenny/

# Installing Oracle Database XE on Debian
# Secure APT - wget -O- | apt-key add -
# deb testing main non-free
# deb unstable main non-free

# opennms
# Secure APT - wget -O - | apt-key add -
#deb stable main
#deb-src stable main
#deb unstable main
#deb-src unstable main

# e-box
#deb ebox/
#deb extra/

# OpenVZ
# Secure APT - apt-get install dso-archive-keyring
# etch repository:
#deb etch openvz
#deb-src etch openvz
# lenny repository:
#deb lenny openvz
#deb-src lenny openvz

# Chef - Opscode -
# New APT Repository for Chef 0.9 - Blog - Opscode -
# Secure APT - wget -O- | sudo apt-key add -
# Replace release with the distribution codename for your release, i.e. stable, testing, unstable
# deb <release> main

# Kamailio SIP Server -
# Secure Apt - wget | apt-key add kamailiodebkey.gpg
#Latest Kamailio stable release
# Debian Lenny
#deb lenny main
#deb-src lenny main
# Debian Squeeze
#deb squeeze main
#deb-src squeeze main

# MongoDB -
# Ubuntu and Debian packages - MongoDB -
# Debian Lenny (5.0)
# deb 5.0 10gen

# Ignite Realtime: Openfire Server -
# Either use the .deb at
# Build Your Own Openfire Chat Server on Debian Linux
# No known Debian repo

# MariaDB repository list -
# deb main
# deb-src main

# OurDelta - Enhanced, packaged convenience for MySQL and MariaDB -
# Secure APT - wget -O- | sudo apt-key add -
# MariaDB OurDelta repository for Debian 5.0 "Lenny" binary packages.
# deb lenny mariadb-ourdelta
#deb-src lenny mariadb-ourdelta
# OurDelta repository for Debian 5.0 "Lenny" binary packages.
# deb lenny ourdelta
# deb-src lenny ourdelta
# OurDelta Sail (bleeding edge) repository for Debian 5.0 "Lenny" binary packages.
# deb lenny ourdelta-sail
# deb-src lenny ourdelta-sail

# Open Access and Institutional Repositories with EPrints -
# Installing EPrints 3 via apt (Debian/Ubuntu) - EPrints -
# Secure Apt - Warning doesn't appear to be signed
# Stable Build
# deb stable/
# deb-src source/
# Testing Build
# deb unstable/
# deb-src source/
# Nightly Builds
# deb nightly/
# deb-src source/

# MirrorBrain
# Secure APT - apt-key adv --keyserver hkp:// --recv-keys 9584A164BD6D129A
# /

# Mailspect Email Defense, Mailspect Email Archive and Mailspect Manager
# Mailspect apt Repository for Debian Linux -
# Secure APT - wget | sudo apt-key add -
# deb mpp-testing non-free

# NorduGrid Downloads - Repository Information
# Secure APT - wget -q -O- | sudo apt-key add -
# deb lenny main
# deb-src lenny main

# Citrix XenServer
# Secure APT : wget -q -O- | apt-key add -
# deb lenny main
# deb-src lenny main

# NSLS-II Repository of Debian Packages -
# EPICS - Experimental Physics and Industrial Control System -
# Secure APT: Download the signing key:
# Then as root do apt-key add
# deb squeeze main contrib
# deb-src squeeze main contrib

# Condor Project Homepage -
# Condor Debian Repository -
# Development only has amd64 packages
# deb squeeze contrib
# Stable
# deb squeeze contrib

## Debian based distros #
# Be very careful using repos from other distros.

## CrunchBang Linux 10.xx aka Statler
## Compatible with Debian Squeeze, but use at your own risk.
# deb statler main
# deb-src statler main

# Debian Multimedia Mirror
# deb squeeze main non-free
# deb-src squeeze main non-free

# Debian Mozilla Mirror
# deb squeeze-backports iceweasel-release
# deb-src squeeze-backports iceweasel-release
###################### End Crunchbang ###########################

# KNOPPIX Sources
# deb-src ./
# KNOPPIX Precompiled binaries
# deb ./

# SolusOS
# Secure APT: apt-get install solusos-keyring
# Official SolusOS repo
# deb eveline main upstream import non-free
# Source
# deb-src eveline main upstream import non-free

# LMDE - Linux Mint Debian
# New mint-debian-mirrors package -
# Secure APT - apt-get install linuxmint-keyring
# LMDE Sources.list (s) :
# LMDE incoming repo (how-to) :
# LMDE Incoming repos
# deb debian main upstream import incoming
# deb testing main contrib non-free
# deb testing/updates main contrib non-free
# deb testing main non-free
# LMDE Latest repos
# deb debian main import backport romeo upstream
# deb testing main contrib non-free
# deb testing/updates main contrib non-free
# deb testing main non-free
############ End LMDE repos ##########################

######## siduction #######
# Siduction Repositories - the community based OS -
# Download / Mirrors -
# siduction
# deb unstable main contrib non-free
# deb-src unstable main contrib non-free
# Community
# deb unstable main contrib non-free
# deb-src unstable main contrib non-free
# User
# deb unstable main contrib non-free
# deb-src unstable main contrib non-free
# Fixes
# deb unstable main contrib non-free
# deb-src unstable main contrib non-free
# Experimental
# deb unstable main contrib non-free
# deb-src unstable main contrib non-free
# Experimental Snapshots
# deb experimental main
# deb-src experimental main
# Kde-next
# deb experimental main
# deb-src experimental main
# Razorqt
# deb unstable main
# deb-src unstable main
# XFCE4.10 xfcenext (amd64 / i386 )
# deb unstable main
# deb-src unstable main

############## End siduction repos ######################

######## aptosid ########
# aptosid (the new sidux)
# Technical University Carolo-Wilhelmina at Brunswick, Germany
# deb sid main fix.main
# deb-src sid main fix.main

# xadras aptosid kde4 repo -
# Secure APT - apt-get install xadras-keyring
# deb sid main

# slam's Software Repositories for Debian Sid and aptosid -
# Warning this repository is not signed
# deb sid main contrib non-free

# towo's repository for aptosid/Debian sid
# secure APT - apt-get install frickelplatz-archive-keyring frickelplatz-keyring frickelplatz-keyrings
# deb sid main contrib non-free
########################## End aptosid repositories #######################

### Mepis ###
# Mepis is a GNU/Linux distribution based on Debian Stable
# Sources.list -
# MEPIS improvements, overrides and updates--the MEPIS magic
#deb mepis-11.0 main
# Alternate HTTP URL of mirror for those that can't use FTP
# deb mepis-11.0 main
# MEPIS master pools, please use only if mirror is slow or down
# deb mepis-11.0 main
# Mepis Community Main, Restricted, and Test Repos
# Community Repository -
# deb mepis11cr main non-free
# deb mepis11cr restricted restricted-non-free
# deb mepis11cr test
# deb mepis11cr test-restricted
################ End Mepis #############################

######## KANOTIX ########
# Kano's Scriptpage for KANOTIX -
# Kanotix Excalibur
#deb ./
#deb-src ./

# Kanotix Dragonfire
# deb ./
# deb-src ./

# Kanotix Hellfire
# deb ./
# deb-src ./

# Kanotix Hellfire Extra
# deb ./
# deb-src ./

# KDE4.8 (backported by acritox)
# deb dragonfire kde-backport
# deb-src dragonfire kde-backport

# Acritox
# deb trialshot main
# deb-src trialshot main

# Libre Office
# deb ./

######### End KANOTIX ############

# Lemote
# deb loongson main contrib
# deb testing main contrib
# deb testing main contrib
# deb-src testing main contrib
# deb testing main

## grml More information available at the grml-wiki:
# stable repository
# deb grml-stable main
# deb-src grml-stable main
# testing/development repository:
# deb grml-testing main
# deb-src grml-testing main

# OzOS | The Reality Different E17-
#deb hungrytiger main
#deb tinwoodman main

# Raspbian is a free operating system based on Debian for the Raspberry Pi.
# RaspbianFAQ - Raspbian :
# Raspbian Mirrors
# Secure APT: wget -O - | sudo apt-key add -
# deb wheezy main contrib non-free rpi
# deb-src wheezy main contrib non-free rpi

# Geany Debian/Ubuntu Nightly Builds
# Secure apt: wget -O- "" | apt-key add -
# deb unstable main
# deb stable main

# emacs-snapshot Debian packages -
# Secure APT : wget -q -O - | sudo apt-key add -
# Stable
# deb stable/
# deb-src stable/
# Unstable
# deb unstable/
# deb-src unstable/

# Printer Driver Packages | The Linux Foundation
# deb lsb3.2 main contrib main-nonfree

# The Samsung Unified Linux Driver Repository :
# deb debian extra

# Remastersys -
# Remastersys - Debian -
# Remastersys Squeeze
# deb squeeze/

# MeeGo -
# Index of /MeeGo/sdk/host/repos/debian/5.0 -
# deb

# Artificial Scanning-Charged-Particle-Microscope Image Generator (ARTIMAGEN)
# deb sid main
# deb-src sid main

# RawTherapee Downloads :
# Also in the siduction repos
# ewelot07's repository provides 32/64-bit packages of RawTherapee 3.0 for Debian 6.0.
# deb squeeze main

# Mondo Rescue - GPL disaster recovery solution -
# Secure APT: wget -q -O - | sudo apt-key add -
# deb 6.0 contrib
# deb-src 6.0 contrib

# Download Dropbox - Dropbox -
# Also in official Debian repos
# deb wheezy main

# MirDebian “WTF” Repository Index -
# you will need to install apt-transport-https beforehand
# then install wtf-debian-keyring and re-run apt-get update
# (or use a http mirror first, install all three of ca-bundle,
# wtf-debian-keyring and apt-transport-https simultaneously,
# then switch to the https mirror)
# main (recommended) mirror
# deb lenny wtf
# deb-src lenny wtf
# deb sid wtf
# deb-src sid wtf
# fallback mirror
# deb lenny wtf
# deb-src lenny wtf
# deb sid wtf
# deb-src sid wtf

# SpiderOak Online Backup -
# SpiderOak APT/RPM Repository -
# APT key:
# deb stable non-free

# Quantum GIS :
# Also in official Debian repos
# To add the repository public key to your apt keyring, type:
# gpg --recv-key 1F9ADD375CA44993
# gpg --export --armor 1F9ADD375CA44993 | sudo apt-key add -
# Packages of QGIS 1.7 for Debian Lenny, Squeeze and Unstable for i386 and amd64 are available at:
# deb squeeze main
# deb-src squeeze main
# updated versions of GDAL and GRASS:
# deb squeeze main
# deb-src squeeze main
# Nightly builds of the Master are available from following repository (i386 and amd64):
# deb squeeze main
# deb-src squeeze main

# Duke Nukem 3D
# Secure APT- su && wget -O- | apt-key add -
# deb sid main
# deb-src sid main

# Toribash - Violence Perfected - A physics based fighting game. -
# Toribash - Download for free -
# Secure APT: wget -q -O - | sudo apt-key add -
# deb toribash/

# Bimoid - server and messenger for your network -
# Secure APT: wget -O - | apt-key add -
# deb stable non-free

# Paissad's Repo -
## Secure APT: wget -q -O - | sudo apt-key add -
# deb unstable main contrib non-free personal
# deb-src unstable main contrib non-free personal

# Alex_P's Unofficial repository for Debian/Ubuntu :
# Secure APT: wget -O - | apt-key add -
# deb squeeze main
# deb wheezy main
# deb sid main

# Mark's SynCE Corner -
# SynCE is a project to connect and sync Windows Mobile devices with Linux
# To keep apt happy about keys, run the following:
#gpg --keyserver --recv-key EEA242F0
#gpg -a --export EEA242F0 | sudo apt-key add -
#deb testing main
#deb-src testing main
#deb unstable main
#deb-src unstable main

# C-Pluff, a plug-in framework for C -
#deb stable cpluff
#deb-src stable cpluff
# MinGW cross-compilation on Debian GNU/Linux
#deb stable cpluff 3rdparty
#deb-src stable cpluff 3rdparty

# Debcreate - Debian package builder (Also in slam's repo)
# deb binary/

# Debian configuration packages -
# Secure APT: wget -N | apt-key add debathena-archive.asc
# Substitute the name of your distribution (lenny, squeeze, hardy, lucid, maverick, or natty) for DISTRO.
# deb DISTRO debathena
# deb-src DISTRO debathena

# Bluefish Editor :
# Secure APT -apt-get install wgdd-archive-keyring
# apt-get install bluefish
# deb unstable main contrib non-free
# deb-src unstable main contrib non-free

# Hadret's Debian PPA -
# CoverGloobus, Equinox, Murrine (from GIT), DeadBeef, Cairo (with David Turner's patch), Pino, Nautilus Elementary and few others.
# Package list
# Secure APT wget -O - | apt-key add -
#deb unstable main
#deb-src unstable main
#deb experimental main
#deb-src experimental main

# FawtyToo: Packages :
# deb squeeze main contrib non-free

# PIC microcontroller Lego Mindstorms
# Secure APT wget -q -O- | apt-key add -
# deb testing main non-free
# deb-src testing main

# Terminator -
# Description: Terminator is a cross-platform GPL terminal emulator with advanced features not yet found elsewhere.
# Secure APT - wget -O - | apt-key add -
# deb ./

# Jitsi (SIP Communicator)
# DebianRepository -
# deb unstable/

# Bitlbee
# These are Debian/Ubuntu package repositories. New packages are built nightly (if new changes are available).
# Pick the branch + version + architecture you want and add it to your sources.list, similar to this:
# deb ./
# deb ./

# Follow the instructions at:

# PlayonLinux
# Playonlinux is in the normal Debian repos.
# deb lenny main
# deb squeeze main

# Official site for latest version of Skype.
# Direct file download
# deb stable non-free

# RSSOwl - Powerful RSS / RDF / Atom News Feed Reader -
# Secure APT - wget -q -O- | apt-key add -
# deb lenny main

# DevZero’s repository for Debian Linux.
# List of packages
# Lenny-experimental archive
# deb lenny-experimental main
# Lenny-backports
# deb lenny-backports main
# deb-src lenny-backports main
# WARNING: packages from the ‘custom’ repository have been changed by the maintainer of this
# repository and may or may not do what you expect them to. Use at your own risk!
# deb lenny-custom main
# deb-src lenny-custom main

# Unofficial repository for Debian/Ubuntu -
# Blog:
# Contains Truecrypt and various other packages
# Secure APT: wget -O - | apt-key add -
# deb squeeze main
# deb wheezy main
# deb sid main

# Debian Neuroscience Package Repository -
# Stable/Lenny
#deb lenny main contrib non-free
#deb-src lenny main contrib non-free
# Testing/Squeeze
#deb squeeze main contrib non-free
#deb-src squeeze main contrib non-free
# Unstable/Sid
#deb sid main contrib non-free
#deb-src sid main contrib non-free

# Tryphon Debian Repository -
# Secure APT - wget -q -O - | apt-key add -
#deb stable main contrib
#deb-src stable main contrib
#deb testing main contrib
#deb-src testing main contrib
#deb unstable main contrib
#deb-src unstable main contrib

# aMule
# aMule stable release
# deb testing amule-stable wx
# deb stable amule-stable wx
# aMule SVN
# deb testing amule
# deb stable amule

# The APT repository -
# Secure APT - wget -O- | apt-key add -
# deb squeeze-dev main

# Description: repository for SOGo and SOPE nightly builds
# Architectures: all, amd64, i386
#deb lenny lenny
#deb-src lenny lenny

# Scribus
# Secure APT - gpg --keyserver --recv-keys EEF818CF; gpg --armor --export EEF818CF | apt-key add -
#deb lenny main contrib non-free
#deb-src lenny main contrib non-free

# Squeezebox Slimdevice Stable Repository
#deb stable main

# Description: Waja's Lenny backports
# Architectures: all, amd64, i386
#deb lenny-backports main contrib non-free

# OFSET's non-official Debian repository -
#deb squeeze main
#deb-src squeeze main

# ::: Replica9000's Apt Repository ::: -
# deb unstable main beta
# deb-src unstable main beta

# Kevin Ryde's Debian Repo -
# deb unstable main other

# Logilab Debian Repository -
# Debian Sid
# deb sid/
# Debian Squeeze
# deb squeeze/
# Debian Lenny
# deb lenny/

# Jens' unofficial debian-repository for the Code::Blocks - IDE -
#Secure APT - apt-get install jens-lody-debian-keyring
#deb any main
#deb-src any main

# wxWidgets/wxPython repository at
# InstallingOnUbuntuOrDebian - wxPyWiki :
# Secure APT - wget -q -O- | apt-key add -
#deb squeeze-wx main
#deb-src squeeze-wx main

# NTP-Dev Debian Package Repository -
# Secure APt - gpg --keyserver --recv-key 2260E098
# gpg --armor --export 2260E098 | apt-key add -
# deb lenny main
# deb-src lenny main

#deb unstable/
#deb-src unstable/
# current development Cairo snapshots:
#deb experimental/
#deb-src experimental/

# jEdit - Programmer's Text Editor - download -
# JEdit is in the Debian unstable repo.
# deb /

# Joey Hess's bleeding edge repository (only contains a few new packages)
# deb ./

# Ekiga snapshot
#deb ./

# Esmska -
#Secure APT - wget -q -O - | apt-key add -
#deb /

# Phoronix Test Suite
#deb pts.debian/

# OpenedHand Debian/Ubuntu Packages
# deb unstable/

# Unofficial Maintainers
# Secure APT - apt-get install dmo-archive-keyring
# Unofficial Maintainers (lenny/stable releases)
#deb lenny main contrib non-free restricted
#deb-src lenny main contrib non-free restricted
# Unofficial Maintainers (squeeze/testing releases)
# Note: This repository is not yet populated.
# deb squeeze main contrib non-free restricted
# deb-src squeeze main contrib non-free restricted
# Unofficial Maintainers (sid/unstable releases)
#deb sid main contrib non-free restricted
#deb-src sid main contrib non-free restricted

# Frostwire
#deb version main
#deb-src version main

#Twitim is a Twitter client for GNOME
# deb ./

#deb speedwave main
#deb-src speedwave main
#gpg --keyserver --recv-key 22455895
#gpg --armor --export 22455895 | sudo apt-key add -

# Orange - Data Mining Fruitful & Fun -
# deb lenny main
# deb-src lenny main

# Unofficial Debian packages from
# wget -O - | apt-key add -
#deb lenny main contrib non-free
#deb-src lenny main contrib non-free
# Unstable
#deb sid main contrib non-free
#deb-src sid main contrib non-free

+PIP - Usage examples (Aug. 22, 2014, 9:14 a.m.)

Install SomePackage and its dependencies from PyPI using requirement specifiers:
pip install SomePackage # latest version
pip install SomePackage==1.0.4 # specific version
pip install 'SomePackage>=1.0.4' # minimum version

Install a list of requirements specified in a file.
pip install -r requirements.txt
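
The requirements file itself is just a plain list of the same version specifiers, one per line; a minimal sketch (the package names are illustrative, not real):

```shell
# Write a minimal requirements file with one pinned and one bounded dependency:
cat > requirements.txt <<'EOF'
SomePackage==1.0.4
OtherPackage>=2.0,<3.0
EOF
# then install everything it lists:
#   pip install -r requirements.txt
```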

Upgrade an already installed SomePackage to the latest from PyPI.
pip install --upgrade SomePackage

Install a local project in “editable” mode
pip install -e . # project in current directory
pip install -e path/to/project # project in another directory

Install a project from VCS in “editable” mode. See the sections on VCS Support and Editable Installs.
pip install -e git+https://git.repo/some_pkg.git#egg=SomePackage # from git
pip install -e hg+https://hg.repo/some_pkg.git#egg=SomePackage # from mercurial
pip install -e svn+svn://svn.repo/some_pkg/trunk/#egg=SomePackage # from svn
pip install -e git+https://git.repo/some_pkg.git@feature#egg=SomePackage # from 'feature' branch

Install a package with setuptools extras.
pip install SomePackage[PDF]
pip install SomePackage[PDF]==3.0
pip install -e .[PDF]==3.0 # editable project in current directory

Install a particular source archive file.
pip install ./downloads/SomePackage-1.0.4.tar.gz
pip install http://my.package.repo/

Install from alternative package repositories. (Install from a different index, and not PyPI):
pip install --index-url http://my.package.repo/simple/ SomePackage

Search an additional index during install, in addition to PyPI:
pip install --extra-index-url http://my.package.repo/simple SomePackage

Install from a local flat directory containing archives (and don't scan indexes):
pip install --no-index --find-links=file:///local/dir/ SomePackage
pip install --no-index --find-links=/local/dir/ SomePackage
pip install --no-index --find-links=relative/dir/ SomePackage

Find pre-release and development versions, in addition to stable versions. By default, pip only finds stable versions.
pip install --pre SomePackage
pip uninstall [options] <package> ...
pip uninstall [options] -r <requirements file> ...

pip is able to uninstall most installed packages. Known exceptions are:
Pure distutils packages installed with python setup.py install, which leave behind no metadata to determine what files were installed.
Script wrappers installed by python setup.py develop.

-r, --requirement <file>
Uninstall all the packages listed in the given requirements file. This option can be used multiple times.

-y, --yes
Don't ask for confirmation of uninstall deletions.

Uninstall a package.
pip uninstall simplejson
pip freeze [options]

Output installed packages in requirements format.

-r, --requirement <file>
Use the order in the given requirements file and its comments when generating output.

-f, --find-links <url>
URL for finding packages, which will be added to the output.

-l, --local
If in a virtualenv that has global access, do not output globally-installed packages.

Generate output suitable for a requirements file.
$ pip freeze

Generate a requirements file and then install from it in another environment.
$ env1/bin/pip freeze > requirements.txt
$ env2/bin/pip install -r requirements.txt
pip list [options]

List installed packages, including editable ones.

-o, --outdated
List outdated packages (excluding editables)

-u, --uptodate
List up-to-date packages (excluding editables)

-e, --editable
List editable projects.

-l, --local
If in a virtualenv that has global access, do not list globally-installed packages.

Include pre-release and development versions. By default, pip only finds stable versions.

List installed packages.
$ pip list
Pygments (1.5)
docutils (0.9.1)
Sphinx (1.1.2)
Jinja2 (2.6)

List outdated packages (excluding editables), and the latest version available
$ pip list --outdated
docutils (Current: 0.9.1 Latest: 0.10)
Sphinx (Current: 1.1.2 Latest: 1.1.3)
pip show [options] <package> ...

Show information about one or more installed packages.

-f, --files
Show the full list of installed files for each package.

Show information about a package:
$ pip show sphinx
The output will be:
Name: Sphinx
Version: 1.1.3
Location: /my/env/lib/pythonx.x/site-packages
Requires: Pygments, Jinja2, docutils
pip search [options] <query>

Search for PyPI packages whose name or summary contains <query>.

--index <url>
Base URL of the Python Package Index (defaults to PyPI).

Search for “peppercorn”
pip search peppercorn
pepperedform - Helpers for using peppercorn with formprocess.
peppercorn - A library for converting a token stream into [...]
pip zip [options] <package> ...

Zip individual packages.

Unzip (rather than zip) a package.

Do not include .pyc files in zip files (useful on Google App Engine).

-l, --list
List the packages available, and their zip status.

With --list, sort packages according to how many files they contain.

--path <paths>
Restrict operations to the given paths (may include wildcards).

-n, --simulate
Do not actually perform the zip/unzip operation.
This command will download the zipped/tar file to the specified location:
pip install --download="/path/to/downloaded/files" `package_name`
pip install --allow-all-external pil --allow-unverified pil

+Hardware - Modem - WiMAX modem (Aug. 17, 2015, 9:55 a.m.)

For installing the driver, install these packages first:
apt-get install linux-headers-`uname -r` libssl-dev usb-modeswitch zip
The wimaxd binary was not recognized by the shell, so I copied it into the /bin directory.
There was an error, "error while loading shared libraries: cannot open shared object file", so I did the following:
To fix the problem, I added the "" path to /etc/ and re-ran ldconfig.

Another incident, not related to WiMAX: one day while installing and running Apache, I hit a similar error ("error while loading shared libraries: cannot open shared object file"). I searched for the file with the "locate" command, copied it into /usr/lib, and ran Apache again; that solved it!
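
Both fixes above are instances of the same pattern for "cannot open shared object file" errors; a generic sketch (the directory /usr/local/lib and the name libmissing.so are stand-ins, since the note's actual names were lost):

```shell
# 1. Find where the missing library actually lives:
#      locate libmissing.so        # or: find / -name 'libmissing.so*' 2>/dev/null
# 2. Tell the dynamic linker about that directory (run as root):
#      echo /usr/local/lib >> /etc/ld.so.conf
#      ldconfig                    # rebuild the linker cache
# 3. Verify the library is now resolvable:
#      ldconfig -p | grep libmissing
```

Copying the .so into /usr/lib (as done for Apache) also works, but extending the linker path keeps the file where its package put it.
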
WiMAX linux-headers error:
make: *** /lib/modules/3.13.0-37-generic/source: No such file or directory. Stop.

1- rm /lib/modules/3.13.0-37-generic/source
2- ln -s /usr/src/linux-headers-3.13.0-37 /lib/modules/3.13.0-37-generic/source
3- wimaxd -D -c wimaxd.conf
4- (in another console) wimaxc -i
5- (in another console) su
6- dhclient eth1

+Version, Distro, Release (Aug. 4, 2014, 4:38 a.m.)

Find the running kernel release:
uname -r
Find or identify which version of Debian Linux you are running:
cat /etc/debian_version
What is my current Linux distribution?
cat /etc/issue
How do I find out my kernel version and architecture?
uname -mrs
lsb_release Command:
The lsb_release command displays certain LSB (Linux Standard Base) and distribution-specific information.
lsb_release -a
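
The commands above can be combined into one short report; a sketch, with guards because /etc/debian_version exists only on Debian-derived systems and lsb_release needs the lsb-release package:

```shell
# Print a short system-identification report.
echo "Kernel: $(uname -mrs)"
if [ -f /etc/debian_version ]; then
    echo "Debian release: $(cat /etc/debian_version)"   # Debian(-derived) only
fi
if command -v lsb_release >/dev/null 2>&1; then
    lsb_release -a                                      # LSB / distro details
fi
```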

+List hardware information (Aug. 4, 2014, 4:37 a.m.)


+Hard Disk information (Aug. 4, 2014, 4:36 a.m.)

fdisk -l

+Sudoer (Aug. 4, 2014, 4:36 a.m.)

Scroll to the bottom of the file and enter:
mohsen ALL=(ALL) ALL
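
For reference, the fields of that sudoers entry break down as follows (edit the file with visudo so syntax errors are caught before saving):

```text
mohsen    ALL=(ALL)    ALL
# user    hosts(run-as users)    commands
# => user "mohsen" may run any command, as any user, from any host
```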

Mac OS
+VMware Tools (Jan. 23, 2017, 1:16 p.m.)

Darwin Image for VMware Tools for Mac OS X:

+Password Reset (Sept. 12, 2016, 12:39 a.m.)

1-Turn off your Mac (choose Apple > Shut Down).
2-Press the power button while holding down Command-R. The Mac will boot into Recovery mode.
3-Select Disk Utility and press Continue.
4-Choose Utilities > Terminal.
5-Enter resetpassword (all one word, lowercase letters) and press Return.
6-Select the volume containing the account (normally this will be your Main hard drive).
7-Choose the account to change with Select the User Account.
8-Enter a new password and re-enter it into the password fields.
9-Enter a new password hint related to the password.
10-Click Save.
11-A warning will appear that the password has changed, but not the Keychain Password. Click OK.
12-Click Apple > Shut Down.

Now start up the Mac. You can login using the new password.

+Install Ionic (June 21, 2016, 11:08 p.m.)

brew install npm

sudo npm install -g cordova ionic

npm install -g ios-sim

npm install -g ios-deploy
ionic platform add ios
ionic resources
ionic build ios

+Speed Up Mac by Disabling Features (June 21, 2016, 11:13 p.m.)

Disable Open/Close Window Animations
defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false
Disable Quick Look Animations
defaults write -g QLPanelAnimationDuration -float 0
Disable Window Size Adjustment Animations
defaults write NSGlobalDomain NSWindowResizeTime -float 0.001
Disable Dock Animations

defaults write launchanim -bool false
Disable the “Get Info” Animation
defaults write DisableAllAnimations -bool true
Get rid of Dashboard
defaults write mcx-disabled -boolean YES
killall Dock
Speed Up Window Resizing Animation Speed
defaults write -g NSWindowResizeTime -float 0.003
Disable The Eye Candy Transparent Windows & Effects
System Preferences -> Accessibility -> Display
Check the box for “Reduce Transparency”
Disable Unnecessary Widgets & Extensions in Notifications Center
System Preferences -> Extensions -> Today
Uncheck all options you don’t need or care about

+Disable SIP (June 20, 2016, 12:37 a.m.)

csrutil status
csrutil disable

+Recovery HD partition with El Capitan bootable via Clover (June 19, 2016, 7:46 p.m.)

1- diskutil list
You will get the partition list, note that the Recovery Partition is obviously named "Recovery HD"

2- Create a folder in Volumes folder for Recovery HD and mount it there:
sudo mkdir /Volumes/Recovery\ HD
sudo mount -t hfs /dev/disk0s3 /Volumes/Recovery\ HD

3- Remove the stale `prelinkedkernel` file from the Recovery partition:
sudo rm -f /Volumes/Recovery\ HD/com.apple.recovery.boot/prelinkedkernel

4- Copy your working `prelinkedkernel` there:
sudo cp /System/Library/PrelinkedKernels/prelinkedkernel /Volumes/Recovery\ HD/com.apple.recovery.boot/

5- Reboot

+Mac OS X on Virtualbox (June 12, 2016, 3:29 p.m.)

vboxmanage modifyvm "Mac OS X 10.11" --cpuidset 00000001 000106e5 00100800 0098e3fd bfebfbff

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiSystemProduct" "iMac11,3"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiSystemVersion" "1.0"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiBoardProduct" "Iloveapple"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/smc/0/Config/DeviceKey" "ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/smc/0/Config/GetKeyFromRealSMC" 1

VBoxManage setextradata "Mac OS X 10.11" "VBoxInternal2/EfiBootArgs" " "

+Convert Installation DMG to ISO - Create a Bootable ISO (June 11, 2016, 10:04 p.m.)

You need to run these commands on Mac OS X:

# Mount the installer image
hdiutil attach /Applications/Install\ OS\ X\ El\ Capitan.app/Contents/SharedSupport/InstallESD.dmg -noverify -nobrowse -mountpoint /Volumes/install_app

# Create the ElCapitan Blank ISO Image of 7316mb with a Single Partition - Apple Partition Map
hdiutil create -o /tmp/ElCapitan.cdr -size 7316m -layout SPUD -fs HFS+J

# Mount the ElCapitan Blank ISO Image
hdiutil attach /tmp/ElCapitan.cdr.dmg -noverify -nobrowse -mountpoint /Volumes/install_build

# Restore the Base System into the ElCapitan Blank ISO Image
asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase

# Remove Package link and replace with actual files
rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages
cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/

# Copy El Capitan installer dependencies
cp -rp /Volumes/install_app/BaseSystem.chunklist /Volumes/OS\ X\ Base\ System/BaseSystem.chunklist
cp -rp /Volumes/install_app/BaseSystem.dmg /Volumes/OS\ X\ Base\ System/BaseSystem.dmg

# Unmount the installer image
hdiutil detach /Volumes/install_app

# Unmount the ElCapitan ISO Image
hdiutil detach /Volumes/OS\ X\ Base\ System/

# Convert the ElCapitan ISO Image to ISO/CD master (Optional)
hdiutil convert /tmp/ElCapitan.cdr.dmg -format UDTO -o /tmp/ElCapitan.iso

# Rename the ElCapitan ISO Image and move it to the desktop
mv /tmp/ElCapitan.iso.cdr ~/Desktop/ElCapitan.iso

+Commands (June 9, 2016, 1:45 p.m.)

Locate command:
To create the database for using `locate` command, run the following command:
sudo launchctl load -w /System/Library/LaunchDaemons/

updatedb ==> sudo /usr/libexec/locate.updatedb

+Installing Xcode (June 6, 2016, 3:31 p.m.)

For downloading Xcode or other development tools, log in with your Apple ID and then open the following link:

Download Xcode and Command Line Tools!

+Applications (June 5, 2016, 2:04 p.m.)

brew install proxychains-ng

sudo nano /usr/local/Cellar/proxychains-ng/4.11/etc/proxychains.conf
brew install npm
brew install ssh-copy-id
brew install tmux

+Installing Homebrew (June 5, 2016, 1:47 p.m.)

Reference Site:
1- You need to install the Developer Tools first. Check whether they are already installed with `gcc --version`; if they are not, a dialog will open asking whether you want to install them. Choose Install.

2- The website says you only need the following command to install brew (but it might be blocked for us in Iran, as of the time of writing):
/usr/bin/ruby -e "$(curl -fsSL"

If it is still blocked, open the following URL in a browser with a proxy enabled and save the script on your Mac:

Install it using this command:

+RainLoop WebMail (May 27, 2017, 12:30 p.m.)

This installation is not what I want! I need a detailed one with nginx, but this one uses Apache and has no config file!
Redo it using this link:

1- Install MariaDB:
apt-get install mariadb-server

2- Create the database required for RainLoop:
mysql -uroot -p
create database rainloopdb;
GRANT ALL PRIVILEGES ON rainloopdb.* TO 'rainloopuser'@'localhost' IDENTIFIED BY 'rainlooppassword';
flush privileges;

3- Install PHP and Nginx:
apt-get install nginx php5-fpm php5-mysql php5-mcrypt php5-cli php5-curl php5-sqlite

4- Download and extract RainLoop:
mkdir rainloop
cd rainloop

5- Configure permissions:
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;

6- Set the owner for the application recursively:
chown -R www-data:www-data .


+Firefox - DownThemAll addon - exclude 128 MP3s (May 9, 2017, 3:49 p.m.)

/[^128]...\.mp3$/,1080,Full HD,HQ


/\/[^\/\?128]+\.mp3$/,320,720p,Full HD,HQ

+Firefox - Disable Auto Refresh (May 7, 2017, 5:18 p.m.)


+Faveo Help Desk Ticketing System (Feb. 15, 2017, 1:48 p.m.)

1- sudo apt-get install python-software-properties git curl openssl vim software-properties-common nginx php5-fpm php5-cli php5-mcrypt php5-gd php5-imap php5-mysql
2- sudo apt-key adv --recv-keys --keyserver 0xcbcb082a1bb943db
3- sudo add-apt-repository 'deb jessie main'
4- sudo apt-get update
5- sudo apt-get install mariadb-server
6- mysql_secure_installation (No password is required! You need to hit Enter on password prompt.)
mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE faveo;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON faveo.* TO 'faveouser'@'localhost' IDENTIFIED BY 'faveouser_passwd';
MariaDB [(none)]> \q
7- php5enmod mcrypt
8- curl -sS | php
sudo mv composer.phar /usr/local/bin/composer
9- mkdir -p /var/www/faveo-helpdesk
10- git clone /var/www/faveo-helpdesk/
11- cd /var/www/faveo-helpdesk
12- composer install --no-dev -o
13- vim config/database.php
Edit entries from line 50 to 61:

'mysql' => array(
    'driver' => 'mysql',
    'host' => 'localhost',
    'database' => 'faveo',
    'username' => 'faveouser',
    'password' => 'faveouser_passwd',
    'charset' => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix' => '',
),

You also need to remove the env() wrapper on each of these lines.
php artisan migrate
php artisan db:seed
cp example.env .env
php artisan key:generate

Copy the generated key inside the brackets, and paste it in:
vim config/app.php
'key' => env('APP_KEY', 'jBmA61vpe0NOXWmAQCWX8qMtEUgo2E2CdHJ+RHzGnqg='),
vim /etc/php5/fpm/pool.d/www.conf

user = www-data
group = www-data
listen = /var/run/php5-fpm-root.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
17- Generate SSL certificate:

mkdir -p /etc/nginx/ssl
cd /etc/nginx/ssl
openssl genrsa -des3 -passout pass:x -out faveo.pass.key 2048
openssl rsa -passin pass:x -in faveo.pass.key -out faveo.key
rm faveo.pass.key
openssl req -new -key faveo.key -out faveo.csr
openssl x509 -req -days 365 -in faveo.csr -signkey faveo.key -out faveo.crt
18- Create a new Nginx server block:
vim /etc/nginx/sites-available/faveo

with this content:

server {
    listen 443 default;
    server_name faveo-helpdesk.deskbit.local;
    ssl on;
    ssl_certificate /etc/nginx/ssl/faveo.crt;
    ssl_certificate_key /etc/nginx/ssl/faveo.key;
    ssl_session_timeout 5m;

    ssl_ciphers 'AES128+EECDH:AES128+EDH:!aNULL';
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    root /var/www/faveo-helpdesk/public;
    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    access_log /var/log/nginx/faveo.access.log;
    error_log /var/log/nginx/faveo.error.log;

    sendfile off;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm-root.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
    }

    location ~ /\.ht {
        deny all;
    }
}

server {
    listen 80;
    server_name faveo-helpdesk.deskbit.local;
    add_header Strict-Transport-Security max-age=2592000;
    rewrite ^ https://$server_name$request_uri? permanent;
}
19- ln -s /etc/nginx/sites-available/faveo /etc/nginx/sites-enabled/faveo
chown www-data:www-data /var/www/faveo-helpdesk -R
chmod 777 /var/www/faveo-helpdesk -R
sudo service php5-fpm restart
sudo service nginx restart

+Serial Numbers (June 7, 2016, 10:50 a.m.)

VMware Workstation 12:
PyCharm + IntelliJ IDEA
For any change and update, follow the comments on this website:



+Telegram (March 13, 2016, 11:36 a.m.)

var download_links = $("a[data-content='Download'], span[data-content='Download'], span:contains('Voice message')");
var index = 1;
download_interval = setInterval(function() {
    if (download_links.length > index) {
        download_links[index].click();  // hypothetical: the original click call was lost from this note
        console.log('Another One Was CLICKED...');
        console.log('Mohsen Hassani ==> Downloading File (' + index + ') out of (' + download_links.length + ') ...');
        index++;
    } else {
        clearInterval(download_interval);
    }
}, 30000);
// Mark Completed Downloads For Deletion
var completed_downloads = $('.im_message_file_button.im_message_file_button_dl_audio');
// Mark Videos For Deletion
var completed_downloads = $('span[data-content="Save file"]');
// Mark Images For Deletion
// Mark Files For Deletion
var completed_downloads = $('a[data-content="Save file"]');
// Mark Voice Messages For Deletion
// Mark images with text
// Stickers in Reply

+Firefox - A script on this page may be busy, or it may have stopped responding... (April 15, 2015, 4:08 p.m.)

In the Location bar, type about:config and press Enter.
Click I'll be careful, I promise! to continue to the about:config page.
In the about:config page, search for the preference dom.max_script_run_time, and double-click on it.
In the Enter integer value prompt, type 20.
Press OK.

+Web Proxies (Feb. 9, 2015, 1:17 p.m.)

+Error - Access denied for user 'test'@'localhost' (using password: YES) (April 7, 2018, 9:34 p.m.)

GRANT INSERT, SELECT, DELETE, UPDATE ON database.* TO 'user'@'localhost' IDENTIFIED BY ' ';
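To see what privileges the server actually stored for the account (a quick check; 'user' is the placeholder from the line above):

```sql
-- List the privileges MySQL has recorded for the account,
-- then make sure the grant tables are reloaded.
SHOW GRANTS FOR 'user'@'localhost';
FLUSH PRIVILEGES;
```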

+Galera Cluster with MySQL (Sept. 4, 2017, 11:51 a.m.)

We need at least 3 servers in a network.

1- apt-get install galera-3 galera-arbitrator-3 default-mysql-server rsync
2- Create the following file with the content:
vim /etc/mysql/conf.d/galera.cnf


# Galera Provider Configuration

# Galera Cluster Configuration
wsrep_cluster_address="gcomm://first_ip,second_ip,third_ip" # The first_ip in here is

# Galera Synchronization Configuration

# Galera Node Configuration

DO THE SAME for the other two servers. Change the last two lines based on the server's configs.
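For reference, a complete galera.cnf typically looks like the following sketch; the IP addresses, cluster name, and node name/address are placeholders to adapt on each server, and option names should be checked against your Galera version:

```ini
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="my_cluster"
wsrep_cluster_address="gcomm://first_ip,second_ip,third_ip"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="this_server_ip"
wsrep_node_name="this_server_name"
```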
3- vim /etc/mysql/mariadb.conf.d/50-server.cnf
bind-address =

DO THE SAME for the other two servers.
Shut down mysql on all of the servers:
4- systemctl stop mysql
5- On the first server:
# galera_new_cluster

On the 2nd & 3rd servers:
systemctl start mysql

+Remove root password (Feb. 15, 2017, 6:24 p.m.)

set password for root@localhost=PASSWORD('');

+Queries (Feb. 16, 2015, 11:41 a.m.)

show databases;
SELECT * FROM trunk WHERE status like '%unre%' and date_time BETWEEN DATE_SUB(NOW(), INTERVAL 4 DAY) AND NOW();
SELECT count(*) as errors FROM trunk WHERE status like '%unre%' and date_time BETWEEN DATE_SUB(NOW(), INTERVAL 4 DAY) AND NOW();
select * from cdr order by id desc limit 1;
select * from (select * from cdr order by acctid) as t1 order by acctid desc limit 100\G
show tables from asterisk;
show columns from cdr;

+Remote Connection (Feb. 16, 2015, 11:27 a.m.)

This link provides more than just a remote connection; it covers security too, but I don't need that right now. If security is not important to you for now, use the summary below.
Binding is limited to either zero, one, or all IP addresses on the server. That means you cannot bind to more than one specific IP address at the same time.

nano /etc/mysql/my.cnf
bind-address =
/etc/init.d/mysql restart

And then in mysql console:
mysql -u root -p
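Binding alone is not enough: the MySQL account you connect with must also be allowed to log in from remote hosts. A sketch (database name, user, and password are placeholders):

```sql
-- Allow 'remote_user' to connect from any host; narrow '%' to a
-- specific IP address where possible.
GRANT ALL PRIVILEGES ON mydb.* TO 'remote_user'@'%' IDENTIFIED BY 'a_password';
FLUSH PRIVILEGES;
```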

+Update / Replace value (Feb. 14, 2015, 3:52 p.m.)

It's different from Python's replace() method :O

UPDATE table SET field = REPLACE(field, 'string', 'anothervalue') WHERE field LIKE '%string%';

'string' is the value to search for (matched by the '%string%' pattern).
'anothervalue' is the replacement value.

+Show database / show table columns (Jan. 27, 2015, 3:27 p.m.)

show databases;
use a_database;
show tables;

+Reverse Query Results (Jan. 25, 2015, 11:05 a.m.)

select * from (select * from cdr order by acctid) as t1 order by acctid desc limit 200;

+Create table (Jan. 8, 2015, 11:59 a.m.)

You need to tell MySQL which database to use first:
USE database_name;

And here is a sample table (the Asterisk CDR table):

CREATE TABLE cdr (
calldate datetime NOT NULL default '0000-00-00 00:00:00',
clid varchar(80) NOT NULL default '',
src varchar(80) NOT NULL default '',
dst varchar(80) NOT NULL default '',
dcontext varchar(80) NOT NULL default '',
channel varchar(80) NOT NULL default '',
dstchannel varchar(80) NOT NULL default '',
lastapp varchar(80) NOT NULL default '',
lastdata varchar(80) NOT NULL default '',
duration int(11) NOT NULL default '0',
billsec int(11) NOT NULL default '0',
disposition varchar(45) NOT NULL default '',
amaflags int(11) NOT NULL default '0',
accountcode varchar(20) NOT NULL default '',
uniqueid varchar(32) NOT NULL default '',
userfield varchar(255) NOT NULL default ''
);

+Export / Import (Backup / Restore) (Jan. 8, 2015, 11:27 a.m.)

mysqldump -u [username] -p [database_name] > [dumpfilename.sql]

mysql -u [username] -p [database_name] < [dumpfilename.sql]
Export data to CSV file:

SELECT order_id,product_name,qty
FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
Export data to CSV file (From multiple table + multiple Fields):
select table_1.field_1, table_1.field_2, table_2.field_1, table_3.field_7 from table_1, table_2, table_3 into outfile '/tmp/data.csv' fields terminated by ',' enclosed by "" lines terminated by '\n';
Import CSV file directly into MySQL:

LOAD DATA INFILE '/tmp/cdr.csv'
INTO TABLE cdr
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;

The IGNORE 1 LINES skips the header row of the file (if you created the file manually, it might have column titles such as name, family, id, as in Excel).
Import ".sql" files:
$ mysql -u root db_name < db.sql

+Add a database along with its user (Jan. 8, 2015, 11:15 a.m.)

1- mysql -u root -p
2- create database demodb;
INSERT INTO mysql.user (User,Host,Password) VALUES('demouser','localhost',PASSWORD('demopassword'));

OR you might need the following based on the installed mysql version:

INSERT INTO mysql.user (User,Host,authentication_string, ssl_cipher, x509_issuer,x509_subject) VALUES('dianomi','localhost',PASSWORD('dfg3253'),'','','');
5- GRANT ALL PRIVILEGES ON demodb.* to demouser@localhost;

+Configuring MySQL Server on Debian (Jan. 8, 2015, 11:18 a.m.)

+Installation (Jan. 8, 2015, 11:03 a.m.)

apt-get install mysql-server mysql-client
During the installation, MySQL will ask you to set a root password. If you miss the chance to set it while the program is installing, it is easy to set it later from within the MySQL shell.
Enter the shell with `mysql -u root -p` (blank password), then:
UPDATE mysql.user SET Password = PASSWORD('password') WHERE User = 'root';
FLUSH PRIVILEGES;
You can now access your MySQL server like this:
mysql -u root -p

+Cisco Certification Program Overview (Feb. 19, 2018, 5:57 p.m.)

Data Center
Service Provider
Service Provider Operations
Cisco Certified Entry Networking Technician (CCENT)
Cisco Certified Technician (CCT)
Cisco Certified Network Associate (CCNA)
Cisco Certified Design Associate (CCDA)
Cisco Certified Network Professional (CCNP)
Cisco Certified Design Professional (CCDP)
Cisco Certified Internetwork Expert (CCIE)
Cisco Certified Design Expert (CCDE)
Cisco Certified Architect (CCAr)

+Subnet Mask (Sept. 19, 2017, 5:05 p.m.)

Netmask Addresses Hosts Amount of a Class C
/30 4 2 1/64
/29 8 6 1/32
/28 16 14 1/16
/27 32 30 1/8
/26 64 62 1/4
/25 128 126 1/2
/24 256 254 1
/23 512 510 2
/22 1024 1022 4
/21 2048 2046 8
/20 4096 4094 16
/19 8192 8190 32
/18 16384 16382 64
/17 32768 32766 128
/16 65536 65534 256
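The host counts in the table follow directly from the prefix length: a /n network leaves 32 - n host bits, and two addresses (network and broadcast) are unusable. A quick shell sketch:

```shell
# Usable hosts for a given IPv4 prefix length: 2^(32 - prefix) - 2
# (the network and broadcast addresses are excluded).
usable_hosts() {
    echo $(( (1 << (32 - $1)) - 2 ))
}

usable_hosts 24   # 254
usable_hosts 30   # 2
```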

+Zabbix - Installation (April 26, 2017, 6:15 p.m.)

Zabbix Server:
1- apt-get install apache2 mysql-server php5 php5-cli php5-common php5-mysql

2- Update timezone in php configuration file /etc/php5/apache2/php.ini:
date.timezone = 'Asia/Tehran'

3- apt-get install zabbix-server-mysql zabbix-frontend-php

4- Create Database Schema:
mysql -u root -p
mysql> CREATE DATABASE zabbixdb;
mysql> GRANT ALL on zabbixdb.* to zabbix@localhost IDENTIFIED BY 'deskbit';

5- Restore the Zabbix database schema into the newly created database:
cd /usr/share/zabbix-server-mysql
zcat schema.sql.gz | mysql -u root -p zabbixdb
zcat images.sql.gz | mysql -u root -p zabbixdb
zcat data.sql.gz | mysql -u root -p zabbixdb

6- Edit Zabbix Configuration File:
vim /etc/zabbix/zabbix_server.conf

7- Enable zabbix conf for apache:
cp /usr/share/doc/zabbix-frontend-php/examples/apache.conf /etc/apache2/sites-enabled/

8- Set these values in the PHP config file (/etc/php5/apache2/php.ini):
post_max_size = 16M
max_execution_time = 300
max_input_time = 300

9- Restart Apache and Zabbix:
/etc/init.d/apache2 restart
/etc/init.d/zabbix-server restart

10- Open the following address in a browser:
In the 3rd Step (Configure DB connection):
Database host: localhost
Database port: 0
Database name: zabbixdb
User: zabbix
Password: deskbit

11- In step 6 (Install), it can't create the file "zabbix.conf". To fix the error, you need to:
chmod 777 /etc/zabbix

12- Zabbix Login Screen:
Username: admin
Password: zabbix
Zabbix Agent:
1- sudo apt-get install zabbix-agent

2- Edit Zabbix Agent Configuration:
vim /etc/zabbix/zabbix_agentd.conf

3- Restart Zabbix Agent:
/etc/init.d/zabbix-agent restart

+Fix Django Invalid HTTP_HOST header emails (June 3, 2018, 8:25 p.m.)

Add this block inside the "http" block of the /etc/nginx/nginx.conf file.

server {
    listen 80;
    server_name _;
    return 444;
}

Keep in mind to place the block before the include lines that pull in the other config files.

+Forward port 80 to 8080 (Dec. 15, 2018, 3:23 p.m.)

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;  # assuming the service listens locally on 8080
    }
}
+Set Up HTTP Authentication on a Directory (April 11, 2017, 2:28 p.m.)

1- apt install apache2-utils nginx-extras

2- htpasswd -c /etc/nginx/.htpasswd mohsen
Note that this htpasswd should be accessible by the user-account that is running Nginx.
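If apache2-utils is not available, an equivalent htpasswd entry can be generated with openssl instead (a sketch; "mohsen" and "secret" are placeholder credentials, and the printed line would then be appended to /etc/nginx/.htpasswd as root):

```shell
# Build an htpasswd-style "user:hash" line using openssl's APR1 (MD5)
# scheme, which nginx's auth_basic understands. Credentials are placeholders.
entry="mohsen:$(openssl passwd -apr1 secret)"
echo "$entry"
```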

server {
    listen 80;

    location / {
        fancyindex on;
        fancyindex_exact_size off;
        root /home/mohsen/ftp;
    }

    location /private {
        auth_basic "This is private zone!";
        auth_basic_user_file /etc/nginx/.htpasswd;
        fancyindex on;
        fancyindex_exact_size off;
        alias /home/mohsen/ftp/private;
    }
}

+Create an SSL Certificate (Sept. 16, 2016, 4:46 a.m.)

1- Create a directory to hold all of our SSL information, under the Nginx configuration directory:
sudo mkdir /etc/nginx/ssl


2- Create the SSL key and certificate files (a sample of the questions asked appears a few blocks below):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -sha256 -keyout /etc/nginx/ssl/mohsenhassani_private.key -out /etc/nginx/ssl/mohsenhassani_public.pem

OR (supply all the information at once):

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -sha256 -keyout /etc/nginx/ssl/mohsenhassani_private.key -out /etc/nginx/ssl/mohsenhassani_public.pem -subj "/C=IR/ST=Tehran/L=Tehran/O=NozhanModern/"


3- We will be asked a few questions about our server so the information is embedded correctly in the certificate. The most important line is the one that requests the Common Name (e.g. server FQDN or YOUR name): enter the domain name you want associated with your server, or the public IP address if you do not have a domain name.


4- Configure Nginx to use SSL:
server {
    listen 80;
    listen 443 ssl;

    ssl_certificate /etc/nginx/ssl/mohsenhassani_public.pem;
    ssl_certificate_key /etc/nginx/ssl/mohsenhassani_private.key;
}


A sample of questions asked:

Country Name (2 letter code) [AU]:US

State or Province Name (full name) [Some-State]:New York

Locality Name (eg, city) []:New York City

Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.

Organizational Unit Name (eg, section) []:Ministry of Water Slides

Common Name (e.g. server FQDN or YOUR name) []:server_IP_address

Email Address []



You will be asked a series of questions. Before we go over that, let's take a look at what is happening in the command we are issuing:

openssl: This is the basic command line tool for creating and managing OpenSSL certificates, keys, and other files.

req: This subcommand specifies that we want to use X.509 certificate signing request (CSR) management. "X.509" is a public key infrastructure standard that SSL and TLS adhere to for key and certificate management. We want to create a new X.509 cert, so we are using this subcommand.
-x509: This further modifies the previous subcommand by telling the utility that we want to make a self-signed certificate instead of generating a certificate signing request, as would normally happen.
-nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase. We need Nginx to be able to read the file, without user intervention, when the server starts up. A passphrase would prevent this from happening because we would have to enter it after every restart.
-days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here.
-newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. We did not create the key that is required to sign the certificate in a previous step, so we need to create it along with the certificate. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
-keyout: This line tells OpenSSL where to place the generated private key file that we are creating.
-out: This tells OpenSSL where to place the certificate that we are creating.
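Putting those flags together, here is a non-interactive version you can run end-to-end (a sketch: the /tmp paths and the -subj values are placeholders):

```shell
# Generate a throwaway self-signed certificate without prompts,
# then inspect the result. All names below are placeholders.
mkdir -p /tmp/ssl-demo
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -sha256 \
  -keyout /tmp/ssl-demo/demo.key -out /tmp/ssl-demo/demo.crt \
  -subj "/C=US/ST=New York/L=New York City/O=Demo/CN=demo.example.com"
# Confirm the subject and validity window of the new certificate:
openssl x509 -in /tmp/ssl-demo/demo.crt -noout -subject -dates
```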

+Permanently Redirect URLs (May 21, 2016, 4:33 p.m.)

server {
    listen 80;
    return 301 $scheme://$request_uri;
}


1. Redirect All Requests to a Specific URL

This will redirect all incoming requests on the domain to the URL configured below.

server {
    return 301;
}

2. Redirect All Requests to Another Domain

This will redirect all incoming requests on the domain to another domain, with the corresponding request URI and query strings.

server {
    return 301$request_uri;
}

3. Redirect Requests Keeping the Protocol

This will redirect all incoming requests on the domain to another domain, with the corresponding request URI and query strings. It will also use the same protocol on the redirected URL.

server {
    return 301 $scheme://$request_uri;
}

+Serve HTML file (May 17, 2016, 5:34 a.m.)

server {
    root /home/shetab/websites/youstone_tmp;
    listen 80;
    index index.html index.htm;

    # proxy request to node
    location @proxy {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;

        proxy_redirect off;
    }

    location / {
        try_files $uri $uri/ @proxy;
    }
}


+PHP Configuration (March 13, 2016, 10:52 p.m.)

server {
    listen 80;

    root /var/www/suitecrm;
    index index.php index.html index.htm index.nginx-debian.html;
    access_log /var/log/nginx/suitecrm.access.log;
    error_log /var/log/nginx/suitecrm.error.log;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }
}


In case of errors, try checking (tail -f) the access.log and error.log files.

If there is no output in error.log, check whether the socket file "php7.0-fpm.sock" exists at the path given in the "location ~ \.php$" block.


+Https with Django (March 13, 2016, 11 p.m.)

mkdir /etc/nginx/ssl

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
openssl req -newkey rsa:2048 -sha256 -nodes -keyout /home/mohsen/ssl/PRIVATE.key -x509 -days 365 -out /home/mohsen/ssl/PUBLIC.pem -subj "/C=IT/ST=state/L=location/O=description/"

--------------------------THIS IS THE OUTPUT --------------------------
[sudo] password for mohsen:
Generating a 2048 bit RSA private key
writing new private key to '/etc/nginx/ssl/nginx.key'
/etc/nginx/ssl/nginx.key: No such file or directory
3073349308:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/etc/nginx/ssl/nginx.key','w')
3073349308:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
mohsen@mohsenhassani:~$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
Generating a 2048 bit RSA private key
writing new private key to '/etc/nginx/ssl/nginx.key'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []
--------------------------THIS IS THE OUTPUT --------------------------
The nginx sample config file:

server {
    listen 80;
    listen 443 ssl;

    access_log /home/mohsen/logs/notes_azar.access.log;
    error_log /home/mohsen/logs/notes_azar.error.log;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    add_header Access-Control-Allow-Origin '*';

    location / {
        include uwsgi_params;
        uwsgi_read_timeout 6000s;
        uwsgi_send_timeout 6000s;
    }

    client_max_body_size 20M;

    location /static/admin/ {
        gzip on;
        alias /home/mohsen/virtualenvs/django-1.8/lib/python3.4/site-packages/django/contrib/admin/static/admin/;
    }

    location /media/ {
        gzip on;
        alias /home/mohsen/websites/notes_azar/notes/media/;
    }

    location /static {
        gzip on;
        alias /home/mohsen/websites/notes_azar/notes/static;
    }
}

+Access-Control-Allow-Origin downloading a JSON file (Dec. 23, 2015, 2:06 p.m.)

Add this line to server { } block:
add_header Access-Control-Allow-Origin '*';

server {
    listen 80;

    access_log /home/mohsen/logs/notes_azar.access.log;
    error_log /home/mohsen/logs/notes_azar.error.log;

    add_header Access-Control-Allow-Origin '*';

    location / {
        include uwsgi_params;
        uwsgi_read_timeout 6000s;
        uwsgi_send_timeout 6000s;
    }

    client_max_body_size 20M;

    location /static/admin/ {
        gzip on;
        alias /home/mohsen/virtualenvs/django-1.8/lib/python3.4/site-packages/django/contrib/admin/static/admin/;
    }

    location /media/ {
        gzip on;
        alias /home/mohsen/websites/notes_azar/notes/media/;
    }

    location /static {
        gzip on;
        alias /home/mohsen/websites/notes_azar/notes/static;
    }
}

+Nginx Serve Fonts (Oct. 14, 2015, 3:48 p.m.)

add_header Access-Control-Allow-Origin '*';

location / {
    include uwsgi_params;
    uwsgi_read_timeout 6000s;
    uwsgi_send_timeout 6000s;
}

location ~* \.(ttf|ttc|otf|eot|woff|font.css)$ {
    add_header "Access-Control-Allow-Origin" "*";
}

+Nginx and uWSGI configuration (Aug. 22, 2014, 9:34 a.m.)

1- Install nginx using its help
2- Install uwsgi ==> pip install uwsgi; it needs ==> easy_install pip, and apt-get install python-dev
3- Copy the myuwsgi script into /etc/init.d
4- Make sure you have the command /usr/local/bin/uwsgi or /usr/bin/uwsgi
5- Copy the config file of the website into web_configs

+Configurations (Feb. 4, 2016, 11:19 a.m.)

nano /etc/nginx/nginx.conf

Add the following line:
include /home/mohsen/web_configs/*;

After these lines:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
To start nginx:
For establishing a local Django project, I first have to know the IP address the modem has assigned to this computer, so that I can give it to nginx's "server_name". I thought it would be "localhost" or "127.0.0.1", but it was not! It is the local IP address handed out by the modem.
How do I get that IP? Run "ifconfig": it shows the IP the modem has given the computer, and that is what goes in nginx's "server_name".
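Put together, the http block of nginx.conf ends up looking roughly like this (a sketch; only the include lines come from this note):

```
http {
    # ... other settings ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    include /home/mohsen/web_configs/*;
}
```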

+Installation (Feb. 4, 2016, 11:16 a.m.)

apt install nginx libpcre3-dev

+Installation (April 10, 2016, 8:27 a.m.)

1- sudo apt-get install curl build-essential

2-curl -sL | sudo -E bash -

3- sudo apt-get install -y nodejs
For Mac OS use this command:
brew install node

+Hardware requirements (Jan. 1, 2017, 7:52 p.m.)


Controller

The controller node runs the Identity service, Image service, management portions of Compute, management portion of Networking, various Networking agents, and the dashboard. It also includes supporting services such as an SQL database, message queue, and NTP.

Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services.

The controller node requires a minimum of two network interfaces.

Compute

The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the KVM hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups.

You can deploy more than one compute node. Each node requires a minimum of two network interfaces.
Block Storage

The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances.

For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security.

You can deploy more than one block storage node. Each node requires a minimum of one network interface.
Object Storage

The optional Object Storage node contains the disks that the Object Storage service uses for storing accounts, containers, and objects.

For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security.

This service requires two nodes. Each node requires a minimum of one network interface. You can deploy more than two object storage nodes.

+What is Cloud? (Jan. 1, 2017, 5:50 p.m.)

Let's quickly review just what a computing cloud is. Cloud technologies are built on existing technologies such as virtualization and clustering to virtualize hardware, software, storage, and networking resources into flexible units that are quickly allocated to meet demand. So rather than the old static model of dedicated hardware servers for various tasks, and static network and storage configurations, all of those formerly specialized devices are assimilated into a common resource pool. It's a more efficient use of hardware, and very fast to scale up or down according to demand. You can even configure self-service for users so they can grab whatever they need when they need it.

Private clouds are hosted on your own premises, and there are public clouds like Amazon's EC2 and the Rackspace Cloud. You can combine private and public clouds in many useful ways. For example, keep your sensitive data locked away in your private cloud, and use a public cloud for sharing, testing, and extra non-sensitive storage.

All computing resources are shareable in a cloud, and there are three basic service models:

SaaS, software as a service
PaaS, platform as a service
IaaS, infrastructure as a service

SaaS is centrally-hosted application software accessed by client software, with data typically kept on the server for access from any networked computer. Yes, just like in the olden client-server days, but the modern twist is to stuff everything through a Web browser. Using a Web browser as the client has its downsides, starting with HTTP, which was never designed for complex computing tasks, but by gosh we're making it haul water, chop wood, and dig ditches, and it's doing it cross-platform. SaaS is popular with software vendors because it reduces their support costs, gives them more control, and at long last supports that coveted grail of the monthly subscription model. It's nice for customers as well because they don't have to hassle with installation and maintenance.

PaaS is a nice option for customers who want more control of their datacenter, but not all the headaches of system and network administration. An example of this is managed cloud Web hosting where the host takes care of hardware, operating systems, networking, load balancing, backups, and updates and patches. The customer manages the development and configuration of whatever software they want to use. It's like sitting down to a fully-configured datacenter and getting right to work.

IaaS can be thought of as virtual bare hardware that the customer manages like a physical server, with control of all the software and configuration. You could also call it HaaS, hardware as a service.

+Definitions - Hypervisor (Dec. 25, 2016, 5:06 p.m.)

Software that arbitrates and controls VM access to the actual underlying hardware.
A hypervisor or virtual machine monitor (VMM) is computer software, firmware, or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and OS X instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.

The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisor,[1] with hyper- used as a stronger variant of super-.[a] The term dates to circa 1970;[2] in the earlier CP/CMS (1967) system the term Control Program was used instead.

A hypervisor is a function which abstracts -- isolates -- operating systems and applications from the underlying computer hardware. This abstraction allows the underlying host machine hardware to independently operate one or more virtual machines as guests, allowing multiple guest VMs to effectively share the system's physical compute resources, such as processor cycles, memory space, network bandwidth and so on. A hypervisor is sometimes also called a virtual machine monitor.

Posted by: Margaret Rouse
Contributor(s): Stephen J. Bigelow
Hypervisors provide several benefits to the enterprise data center. First, the ability of a physical host system to run multiple guest VMs can vastly improve the utilization of the underlying hardware. Where physical (nonvirtualized) servers might only host one operating system and application, a hypervisor virtualizes the server, allowing the system to host multiple VM instances -- each running an independent operating system and application -- on the same physical system using far more of the system's available compute resources.

VMs are also very mobile. The abstraction that takes place in a hypervisor also makes the VM independent of the underlying hardware. Traditional software can be tightly coupled to the underlying server hardware, meaning that moving the application to another server requires time-consuming and error-prone reinstallation and reconfiguration of the application. By comparison, a hypervisor makes the underlying hardware details irrelevant to the VMs. This allows any VMs to be moved or migrated between any local or remote virtualized servers -- with sufficient computing resources available -- almost at-will with effectively zero disruption to the VM; a feature often termed live migration.

VMs are also logically isolated from each other -- even though they run on the same physical machine. In effect, a VM has no native knowledge or dependence on any other VMs. An error, crash or malware attack on one VM does not proliferate to other VMs on the same or other machines. This makes hypervisor technology extremely secure.

Finally, VMs are easier to protect than traditional applications. A physical application typically needs to be first quiesced and then backed up using a time-consuming process that results in substantial downtime for the application. A VM is essentially little more than code operating in a server's memory space. Snapshot tools can quickly capture the content of that VM's memory space and save it to disk in moments -- usually without quiescing the application at all. Each snapshot captures a point-in-time image of the VM which can be quickly recalled to restore the VM on demand.
Types of hypervisors

Hypervisors are traditionally implemented as a software layer -- such as VMware vSphere or Microsoft Hyper-V -- but hypervisors can also be implemented as code embedded in a system's firmware. There are two principal types of hypervisor. Type 1 hypervisors are deployed directly atop the system's hardware without any underlying operating systems or other software. These are called "bare metal" hypervisors and are the most common and popular type of hypervisor for the enterprise data center. Examples include vSphere or Hyper-V. Type 2 hypervisors run as a software layer atop a host operating system and are usually called "hosted" hypervisors like VMware Player or Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs.

What are hypervisors used for?

Hypervisors are important to any system administrator or system operator because virtualization adds a crucial layer of management and control over the data center and enterprise environment. Staff members not only need to understand how the respective hypervisor works, but also how to operate supporting functionality such as VM configuration, migration and snapshots.

The role of a hypervisor is also expanding. For example, storage hypervisors are used to virtualize all of the storage resources in the environment to create centralized storage pools that administrators can provision -- without having to concern themselves with where the storage was physically located. Today, storage hypervisors are a key element of software-defined storage. Networks are also being virtualized with hypervisors, allowing networks and network devices to be created, changed, managed and destroyed entirely through software without ever touching physical network devices. As with storage, network virtualization is appearing in broader software-defined network or software-defined data center platforms.

+Installation (Dec. 25, 2016, 10:17 a.m.)

+Errors (Dec. 20, 2015, 11:21 p.m.)

When using the `phonegap` command, I got the error `cannot find bplist-parser`. To solve it I ran:
sudo npm update -g

+Installation (Jan. 13, 2016, 3:27 p.m.)

1-sudo apt-get install nodejs npm git ant lib32z1 lib32ncurses5 lib32bz2-1.0 lib32stdc++6
And then:
sudo npm install -g phonegap cordova jquery-mobile
sudo npm update -g

2- Node.js is installed and named `nodejs`, but PhoneGap expects the executable to be named `node`. To fix this inconsistency, create a symlink named node that points to nodejs as follows:
sudo ln -s /usr/bin/nodejs /usr/bin/node

3-Type `phonegap` on the command line and check whether PhoneGap command is detected.
(You might get error about `cannot find bplist-parser`; refer to errors for solving the error.)

4-Copy `android-sdk`: (you already have it when using `Kivy`):
sudo cp -r ~/Programs/Android/Development/android-sdk-linux/ /usr/local/

5-Edit the file `~/.bashrc` and paste these lines to the end of it:
export PATH=$PATH:/home/mohsen/Programs/Android/Development/android-sdk-linux/
export PATH=$PATH:/home/mohsen/Programs/Android/Development/android-sdk-linux/tools
export PATH=$PATH:/home/mohsen/Programs/Android/Development/android-sdk-linux/platform-tools
export PATH=$PATH:/home/mohsen/Programs/Android/Development/android-sdk-linux/build-tools

6-source ~/.bashrc



+MySQL Driver (Aug. 1, 2017, 6:29 p.m.)

apt-get install php-mysql

+Installation (Aug. 1, 2017, 5:54 p.m.)

Base PHP:
sudo apt-get install php-common php-cli

sudo apt-get install php-fpm

apt-get install libapache2-mod-php

+Removing a Constraint (Feb. 1, 2017, 1:02 p.m.)

To remove a constraint you need to know its name. If you gave it a name then that's easy. Otherwise the system assigned a generated name, which you need to find out. The psql command \d tablename can be helpful here; other interfaces might also provide a way to inspect table details. Then the command is:

ALTER TABLE products DROP CONSTRAINT some_name;

This works the same for all constraint types except not-null constraints. To drop a not null constraint use:

ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL;

(Recall that not-null constraints do not have names.)

+Adding a Constraint (Feb. 1, 2017, 1:01 p.m.)

To add a constraint, the table constraint syntax is used. For example:

ALTER TABLE products ADD CHECK (name <> '');
ALTER TABLE products ADD CONSTRAINT some_name UNIQUE (product_no);
ALTER TABLE products ADD FOREIGN KEY (product_group_id) REFERENCES product_groups;

To add a not-null constraint, which cannot be written as a table constraint, use this syntax:

ALTER TABLE products ALTER COLUMN product_no SET NOT NULL;
The constraint will be checked immediately, so the table data must satisfy the constraint before it can be added.
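These are PostgreSQL statements, but the effect of a CHECK constraint can be sketched with Python's stdlib sqlite3 (the table and values below are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Same idea as ADD CHECK (name <> ''): empty names are rejected on insert
conn.execute("CREATE TABLE products (product_no INTEGER UNIQUE, name TEXT CHECK (name <> ''))")
conn.execute("INSERT INTO products VALUES (1, 'widget')")  # satisfies the check

try:
    conn.execute("INSERT INTO products VALUES (2, '')")  # violates the check
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

Note that sqlite3 cannot add a CHECK constraint via ALTER TABLE, so it is declared at CREATE time here; in PostgreSQL the ALTER TABLE forms above validate existing rows immediately.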

+PostgreSQL history file (Feb. 1, 2017, 1:01 p.m.)

Similar to the Linux ~/.bash_history file, PostgreSQL stores all the SQL commands you have executed in a history file called ~/.psql_history, as shown below.

cat ~/.psql_history

+Turn on timing and check how much time a query takes to execute (Feb. 1, 2017, 1 p.m.)

# \timing — after enabling this, every query you execute will also report how long it took.

# \timing
Timing is on.

# SELECT * from pg_catalog.pg_attribute ;
Time: 9.583 ms

+Change database user password (Feb. 1, 2017, 12:59 p.m.)

For the postgres superuser:
ALTER USER postgres WITH PASSWORD 'tmppassword';

For an ordinary database user:
psql cdrdb
alter user cdr with password 'abcdef';

+Export JSON from PostgreSQL (May 12, 2016, 12:11 a.m.)

select row_to_json(words) from words;
select row_to_json(row(id, text)) from words;
This will name the columns `f1`, `f2`, `f3`, ...

To solve the problem:
select row_to_json(t)
from (
select id, text from words
) t

The other commonly used technique is array_agg and array_to_json. array_agg is an aggregate function like sum or count. It aggregates its argument into a PostgreSQL array. array_to_json takes a PostgreSQL array and flattens it into a single JSON value.

select array_to_json(array_agg(row_to_json(t)))
from (
select id, text from words
) t


+Errors (July 6, 2015, 12:31 p.m.)

psql: could not connect to server: Connection refused
Is the server running on host "" and accepting TCP/IP connections on port 5432?

For solving this error, refer to "Remote Connection".
psycopg2.ProgrammingError: permission denied for relation notes_application


ERROR: role "mohsen_notes" does not exist (While importing a database)

For solving this error you need to access the database shell with `postgres` user:
su postgres
psql -d notesdb -U postgres

And using this command, you will grant all the needed permissions:
GRANT ALL PRIVILEGES ON TABLE notes_application TO notes;

+Remote Connection (Feb. 4, 2016, 11:57 a.m.)

If you get error:
psql: could not connect to server: Connection refused
Is the server running on host "" and accepting TCP/IP connections on port 5432?

You will need to configure PostgreSQL to accept TCP/IP connections:

Add this line to the end of the file pg_hba.conf:
host all all trust


And then:
nano /etc/postgresql/9.1/main/postgresql.conf
(For Postgresql 9.4 or later, you need to cd to `/usr/share/postgresql/9.4` and copy the file `postgresql.conf.sample` to `postgresql.conf`):

Uncomment the following line and put star instead of localhost:
listen_addresses = '*'

/etc/init.d/postgresql restart

+Log into a Postgresql database (June 27, 2015, 1:05 p.m.)
psql -d mydb -U myuser

+Changing a Column's Default Value (May 17, 2015, 1:08 p.m.)

To set a new default for a column, use a command like this:

ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;

Note that this doesn't affect any existing rows in the table, it just changes the default for future INSERT commands.

To remove any default value, use:

ALTER TABLE products ALTER COLUMN price DROP DEFAULT;

This is effectively the same as setting the default to null. As a consequence, it is not an error to drop a default where one hadn't been defined, because the default is implicitly the null value.

+Update Values (Jan. 23, 2015, 7:42 p.m.)

UPDATE table_name SET column1 = value1, column2 = value2, ... WHERE condition;

+Counting the select (Jan. 23, 2015, 7:23 p.m.)

SELECT count(*) FROM sometable;

+Select unique column (Jan. 23, 2015, 7:14 p.m.)

SELECT DISTINCT column_1 FROM table_name
If you specify multiple columns, the DISTINCT clause will evaluate the duplicate based on the combination of values of those columns.
SELECT DISTINCT column_1, column_2 FROM tbl_name;
PostgreSQL also provides the DISTINCT ON (expression) to keep the “first” row of each group of duplicates where the expression is equal. See the following syntax:
SELECT DISTINCT ON (column_1), column_2 FROM tbl_name ORDER BY column_1, column_2;
select DISTINCT ip_src FROM (SELECT ip_src from acct order by stamp_inserted) as mohsen2
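A minimal sketch of DISTINCT on one and two columns, using Python's stdlib sqlite3 (DISTINCT ON is PostgreSQL-specific and is not shown; the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_name (column_1 TEXT, column_2 INTEGER)")
conn.executemany("INSERT INTO tbl_name VALUES (?, ?)",
                 [("a", 1), ("a", 1), ("a", 2), ("b", 1)])

# Single column: duplicates of column_1 collapse
print(conn.execute("SELECT DISTINCT column_1 FROM tbl_name ORDER BY column_1").fetchall())
# -> [('a',), ('b',)]

# Two columns: the combination of values must be unique
print(conn.execute("SELECT DISTINCT column_1, column_2 FROM tbl_name ORDER BY column_1, column_2").fetchall())
# -> [('a', 1), ('a', 2), ('b', 1)]
```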

+Set password for postgres user (Jan. 22, 2015, 1:33 p.m.)

sudo -u postgres psql postgres
\password postgres

+Needed packages for Asterisk/Apache2 (Jan. 22, 2015, 1:10 p.m.)

apt-get install libapache2-mod-auth-pgsql

+psycopg2 installation error (Oct. 30, 2014, 11:41 p.m.)

While installing `psycopg2` in a virtualenv using `pip` I got this error:

Error: pg_config executable not found.

Please add the directory containing pg_config to the PATH

or specify the full executable path with the option:

python build_ext --pg-config /path/to/pg_config build ...

or with the pg_config option in 'setup.cfg'.

For solving this error I had to install:
apt-get install libpq-dev

+Extend/Increase the length of a varchar column (Oct. 25, 2014, 5:27 p.m.)

This is done using the way you change the type of a column:
alter table issue_tracker_sentsms alter column status type varchar(3);

+Display Tables and Columns (Oct. 25, 2014, 5:23 p.m.)

Connect to the database:
\c issue_tracker_db

List the tables inside it:
\dt

Show the columns of a table:
\d issue_tracker_sms

+Default current date time for a field, while altering (Sept. 8, 2014, 10:30 p.m.)

alter table m_tasks_attachment add column "date_time" timestamp with time zone NOT NULL default now();

+Add new column (Sept. 6, 2014, 11:12 p.m.)

alter table m_tasks_message add column "is_new" boolean NOT NULL;

If you have already some data in the table, it will raise an error:
ERROR: column "is_new" contains null values
Which means you have to first create the column without the NOT NULL constraint and then set it to NOT NULL.

But you can easily set the desired default values with:
alter table m_tasks_message add column "is_new" boolean NOT NULL DEFAULT False;

This will set the already created records with the default value `False`.
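The same pattern can be sketched with stdlib sqlite3, which also refuses a NOT NULL column without a default when rows already exist (table and column names as in the note above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m_tasks_message (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO m_tasks_message (id) VALUES (1)")

# Adding NOT NULL without a default fails: the existing row would hold NULL
try:
    conn.execute("ALTER TABLE m_tasks_message ADD COLUMN is_new BOOLEAN NOT NULL")
except sqlite3.OperationalError as exc:
    print("failed:", exc)

# With a default, existing rows get the default value
conn.execute("ALTER TABLE m_tasks_message ADD COLUMN is_new BOOLEAN NOT NULL DEFAULT 0")
print(conn.execute("SELECT is_new FROM m_tasks_message").fetchall())  # -> [(0,)]
```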

+Commands (Aug. 22, 2014, 9:28 a.m.)

Login as "postgres" (SuperUser) to start using database:
# su - postgres
Create a new database:
createdb mydb
Drop database:
dropdb mydb
Access database:
psql mydb
Get help:
mydb=# \h
Dump all database:
pg_dumpall > /var/lib/pgsql/backups/dumpall.sql
Restore database:
psql -f /var/lib/pgsql/backups/dumpall.sql mydb
Show databases:
# psql -l
mydb=# \l;
Show users:
mydb=# SELECT * FROM "pg_user";
Show tables:
mydb=# SELECT * FROM "pg_tables";
Set password:
mydb=# UPDATE pg_shadow SET passwd = 'new_password' where usename = 'username';
Clean all databases (Should be done via a daily cron):
vacuumdb --quiet --all
How to edit PostgreSQL queries in your favorite editor?

# \e

\e will open the editor, where you can edit the queries and save it. By doing so the query will get executed.
To rename a column:
ALTER TABLE products RENAME COLUMN product_no TO product_number;
To rename a table:
ALTER TABLE products RENAME TO items;
Change type:
ALTER TABLE table ALTER COLUMN anycol TYPE anytype;

Renaming a Column:
ALTER TABLE products RENAME COLUMN product_no TO product_number;
Update a field:
update menus set description='Payments: Carriersss' where username='mohsen' and menu='accountingcarrier';
Delete all records from a table:
delete from table_name;
Count unique records:
select count(distinct ip_src) from table_name;

+Import / Export (Backup / Restore) (Aug. 6, 2015, 10:08 a.m.)

Backup:
1- su postgres
2- pg_dump dbname > outfile (to compress the outfile, use step `3` instead of `2`)
3- pg_dump dbname | gzip > filename.gz (if you think your database output file is going to be very big, you can split it, using `4` instead of `2` and `3`)
4- pg_dump dbname | split -b 1m - filename (instead of 1m you can write any size)

If you get a permission denied error, it's because of the folder/directory you are using for backup!
Change the output path or `cd` to the postgres home (which is /var/lib/postgresql),
or create a folder and give postgres permission to write to it by setting the ownership:
mkdir postgres_dumps
chown postgres:postgres postgres_dumps

Restore:
1- su postgres
2- psql dbname < infile (if you have a compressed file, use step `3` instead of `2`)
3- gunzip -c filename.gz | psql dbname (if your backup files are already split, use `4` instead of `2` and `3`)
4- cat filename* | psql dbname

For selective tables (pg_dump is a shell command, not a psql one):
pg_dump -t table_name -t table_name2 -t table_name3 -U db_owner db_name > outfile.sql

Export a table into a CSV file:
Go to the psql console using `psql -U db_user db_name` and then:
COPY table_name TO '/tmp/file_name.csv' DELIMITER ',' CSV HEADER;
COPY (SELECT foo,bar FROM table_name limit 100) TO '/tmp/file_name.csv' DELIMITER ',' CSV HEADER;
COPY (SELECT foo,bar FROM table_name) TO '/tmp/file_name.csv' DELIMITER ',' CSV HEADER;

For importing dumped tables:
copy cdr from '/home/mohsen/MyTemp/as3.dat';

Error while importing:
ERROR: role "mohsen_notes" does not exist
For solving this error, refer to the `Errors` section within this category.

Dump all databases:
pg_dumpall > /var/lib/pgsql/backups/dumpall.sql
Restore all databases:
psql -f /var/lib/pgsql/backups/dumpall.sql mydb

Dump only parts of tables:
copy (select * from acct order by stamp_inserted limit 8000) to '/home/mohsen/Temp/acct.tsv';
copy acct from '/home/mohsen/Temp/acct.tsv';

+Configuration (Feb. 4, 2016, 11:48 a.m.)

1- Edit the file pg_hba.conf, which can be found in one of the following paths depending on your distribution:
/etc/postgresql/<version>/main/pg_hba.conf
/var/lib/pgsql/data/pg_hba.conf

2- Change the settings to this:
local all postgres trust
local all all password
host all all md5

3- Restart postgresql service:
service postgresql restart

+Installation (Feb. 5, 2016, 2:37 a.m.)

apt install python-dev postgresql-server-dev-all postgresql libpq-dev python3-dev


To check if postgresql is installed and run successfully on port 5432, use this command:
nc localhost 5432 < /dev/null
It should not return anything. It should only wait ...


If you got error like the following when creating databases or users:
Is the server running locally and accepting ..... postgresql/.s.PGSQL.5432"

Check if postgresql service is enabled!?
systemctl status postgresql

If not, enable and start it:
systemctl enable postgresql
systemctl start postgresql

+Image to String conversion (Oct. 2, 2016, 11:38 p.m.)

Convert Image to String:

import base64

with open("t.png", "rb") as image_file:
    img_string = base64.b64encode(

Convert String to Image:

with open("imageToSave.png", "wb") as fh:
    fh.write(base64.b64decode(img_string))
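A quick round-trip sanity check of the encode/decode pair (plain bytes stand in for the image file, so no PNG is needed):

```python
import base64

original = b"\x89PNG fake image bytes"
encoded = base64.b64encode(original)   # ASCII-safe bytes suitable for storing as text
decoded = base64.b64decode(encoded)

print(decoded == original)  # -> True
```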

+Access PostgreSQL (Sept. 12, 2016, 9:26 p.m.)

import psycopg2
from psycopg2.extras import DictCursor

connection = psycopg2.connect("dbname='eccdb' user='ecc' host='localhost' password='EcC'")
cur = connection.cursor(cursor_factory=DictCursor)
cur.execute("""SELECT * from ecc_callservice where teacher_id='203'""")
rec = cur.fetchone()

+Python list subtraction (April 26, 2016, 12:47 a.m.)

list1 = ['a', 'b', 'c', 'd']
list2 = ['b', 'c']
list3 = list(set(list1) - set(list2))
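Note that going through set() loses ordering and duplicates; a comprehension preserves both. A small sketch of the two approaches:

```python
list1 = ['a', 'b', 'c', 'd']
list2 = ['b', 'c']

# Set difference: concise, but order and duplicates are not guaranteed
list3 = list(set(list1) - set(list2))
print(sorted(list3))  # -> ['a', 'd']

# Order-preserving alternative
removed = set(list2)  # set gives O(1) membership tests
list4 = [x for x in list1 if x not in removed]
print(list4)  # -> ['a', 'd']
```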

+Add leading zeroes to numbers (Jan. 26, 2016, 8:50 p.m.)

In [3]: str(1).zfill(4)
Out[3]: '0001'

+Group a list of dictionaries (Dec. 20, 2015, 3:38 p.m.)

from itertools import groupby

d = [{'a': 1}, {'a': 2}, {'a': 2}, {'a': 3}, {'a': 3}]
[(name, list(group)) for name, group in groupby(d, lambda p: p['a'])]
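groupby only merges consecutive items, so sort by the same key first when the input isn't already ordered. A minimal sketch:

```python
from itertools import groupby

d = [{'a': 3}, {'a': 1}, {'a': 2}, {'a': 3}, {'a': 2}]

# Without sorting, equal keys that aren't adjacent would land in separate groups
key = lambda p: p['a']
grouped = [(name, list(group)) for name, group in groupby(sorted(d, key=key), key)]
print(grouped)
# -> [(1, [{'a': 1}]), (2, [{'a': 2}, {'a': 2}]), (3, [{'a': 3}, {'a': 3}])]
```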

+Running Shell Commands (Oct. 5, 2015, 3:04 p.m.)

import subprocess

Use this if you need to run a command using `sudo`:
passwd = subprocess.Popen(['echo', 'Mohsen123'], stdout=subprocess.PIPE)

def run_command(command, passwd=None, concat=True):
    if not passwd:
        p = subprocess.Popen(command, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT, shell=True)
    else:
        p = subprocess.Popen(command, stdin=passwd.stdout, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT, shell=True)

    result = [x.decode('utf-8').replace('\n', '') for x in p.stdout.readlines()]

    if concat:
        return ' '.join(result)
    return result
A small sophisticated command example:
cmd = 'sudo -S asterisk -rx "core show channels verbose " | grep "from-sip"'

+Threading (June 18, 2015, 11:16 a.m.)

import threading

some_threads = []
some_threads.append(threading.Thread(target=save_sheet_to_db, args=(session, sheet, carrier)))
for some_thread in some_threads:
    some_thread.start()

for some_thread in some_threads:
    some_thread.join()

+Iterate through two lists in inner list (Dec. 25, 2014, 11:22 a.m.)

d = [[3, 3, 7, 8], ['a', 'b', 'd', 3]]
[y for x in d for y in x]

[3, 3, 7, 8, 'a', 'b', 'd', 3]

+Limiting floats to two decimal points (Nov. 15, 2014, 2:41 p.m.)

f = 1000.1234
round(f, 2)
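round() gives a float back; when you need a fixed two-decimal string (e.g. for display), string formatting is the usual choice. A small sketch:

```python
f = 1000.1234

print(round(f, 2))         # -> 1000.12 (still a float)
print('%.2f' % f)          # -> 1000.12 (a string)
print('{:.2f}'.format(f))  # -> 1000.12 (a string)
```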

+Get all object attributes (Nov. 15, 2014, 1:56 p.m.)

dir(obj)   # every attribute name, including methods and inherited attributes
vars(obj)  # instance attributes only (equivalent to obj.__dict__)
+Requests (Nov. 1, 2014, 12:34 p.m.)

What is Requests?
Requests is an Apache2 Licensed HTTP library, written in Python, for human beings. Requests allows you to send HTTP/1.1 requests.
You can add headers, form data, multipart files, and parameters with simple Python dictionaries, and access the response data in the same way.
Install Requests
There are a few ways to install Requests. Either use pip, easy_install, or get the tarball.
We are using pip to install, simply type in:
pip install requests
Importing the module
To import the Requests module, put this command at the beginning of your script:
import requests
Making a request
# Get a webpage, this creates a Response object called "r"

r = requests.get('')
Passing Parameters In URLs:
params = {'key1': 'value1', 'key2': 'value2'}
r = requests.get('', params=params)

You can see that the URL has been correctly encoded by printing the URL:
print(r.url)
Note that any dictionary key whose value is None will not be added to the URL's query string.

You can also pass a list of items as a value:
params = {'key1': 'value1', 'key2': ['value2', 'value3']}
r = requests.get('', params=params)
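What the params encoding does can be illustrated with the stdlib (requests handles this internally; urlencode with doseq mimics the list-valued case):

```python
from urllib.parse import urlencode

params = {'key1': 'value1', 'key2': ['value2', 'value3']}

# doseq=True expands list values into repeated keys, as requests does
query = urlencode(params, doseq=True)
print(query)  # -> key1=value1&key2=value2&key2=value3
```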
Response Code
We can check the response status code, and do a status code lookup with the dictionary look-up object.
r = requests.get('')

>>> r.status_code ==
True
>>>['temporary_redirect']
307
418
>>>['\o/']
200
Get the content
Get the content of the server's response.
import requests
r = requests.get('')
print r.text

# Requests also comes with a builtin JSON decoder, in case you’re dealing with JSON data
import requests
r = requests.get('')
print r.json()
We can view the server’s response headers using a Python dictionary, and we can access the headers using any capitalization we want.
If a header doesn't exist in the Response, its value defaults to None

>>> r.headers
{
    'status': '200 OK',
    'content-encoding': 'gzip',
    'transfer-encoding': 'chunked',
    'connection': 'close',
    'server': 'nginx/1.0.4',
    'x-runtime': '148ms',
    'etag': '"e1ca502697e5c9317743dc078f67693f"',
    'content-type': 'application/json; charset=utf-8'
}

>>> r.headers['Content-Type']
'application/json; charset=utf-8'

>>> r.headers.get('content-type')
'application/json; charset=utf-8'


# Get the headers of a given URL
resp = requests.head("")
print resp.status_code, resp.text, resp.headers
Requests will automatically decode content from the server. Most Unicode charsets are seamlessly decoded. When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers.
The text encoding guessed by Requests is used when you access r.text.

You can find out what encoding Requests is using, and change it, using the r.encoding property:
If you change the encoding, Requests will use the new value of r.encoding whenever you call r.text.
print r.encoding
>> utf-8

>>> r.encoding = 'ISO-8859-1'
Custom Headers

If you’d like to add HTTP headers to a request, simply pass in a dict to the headers parameter.
import json
url = ''
payload = {'some': 'data'}
headers = {'content-type': 'application/json'}

r =, data=json.dumps(payload), headers=headers)
Redirection and History

Requests will automatically perform location redirection while using the GET and OPTIONS verbs.
GitHub redirects all HTTP requests to HTTPS.
We can use the history method of the Response object to track redirection.

r = requests.get('')

>>> r.url
''
>>> r.status_code
200
>>> r.history
[]
Make an HTTP POST request
With Requests you can of course also do POST requests.
r ="")
You can use other HTTP request types as well (PUT, DELETE, HEAD and OPTIONS):
r = requests.put("")
r = requests.delete("")
r = requests.head("")
r = requests.options("")

# This small script creates a Github repo.
import requests, json
github_url = ""
data = json.dumps({'name':'test', 'description':'some test repo'})
r =, data, auth=('user', '*****'))
print r.json()
Errors and Exceptions
In the event of a network problem (e.g. DNS failure, refused connection, etc), Requests will raise a ConnectionError exception.
In the event of the rare invalid HTTP response, Requests will raise an HTTPError exception.
If a request times out, a Timeout exception is raised.

If a request exceeds the configured number of maximum redirections, a TooManyRedirects exception is raised.
All exceptions that Requests explicitly raises inherit from requests.exceptions.RequestException.

+type (Oct. 10, 2014, 1:52 a.m.)

The first use of type() is the most widely known and used: to determine the type of an object. Here, Python novices commonly interrupt and say, "But I thought Python didn't have types!" On the contrary, everything in Python has a type (even the types!) because everything is an object. Let's look at a few examples:

>>> type(1)
<class 'int'>
>>> type('foo')
<class 'str'>
>>> type(3.0)
<class 'float'>
>>> type(float)
<class 'type'>
The type of type
Everything is as expected, until we check the type of float. <class 'type'>? What is that? Well, odd, but let's continue:

>>> class Foo(object):
... pass
>>> type(Foo)
<class 'type'>
Ah! <class 'type'> again. Apparently the type of all classes themselves is type (regardless of if they're built-in or user-defined). What about the type of type itself?

>>> type(type)
<class 'type'>
Well, it had to end somewhere. type is the type of all types, including itself. In actuality, type is a metaclass, or "a thing that builds classes". Classes, like list(), build instances of that class, as in my_list = list(). In the same way, metaclasses build types, like Foo in:

class Foo(object):
    pass
As mentioned, it turns out that type has a totally separate use, when called with three arguments. type(name, bases, dict) creates a new type, programmatically. If I had the following code:

class Foo(object):
    pass
We could achieve the exact same effect with the following:

Foo = type('Foo', (), {})
Foo is now referencing a class named "Foo", whose base class is object (classes created with type, if specified without a base class, are automatically made new-style classes).

That's all well and good, but what if we want to add member functions to Foo? This is easily achieved by setting attributes of Foo, like so:

def always_false(self):
    return False

Foo.always_false = always_false
We could have done it all in one go with the following:

Foo = type('Foo', (), {'always_false': always_false})
Of course, the bases parameter is a list of base classes of Foo. We've been leaving it empty, but it's perfectly valid to create a new class derived from Foo, again using type to create it:

FooBar = type('FooBar', (Foo,), {})
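Putting the pieces together, a quick check that the dynamically built classes behave like normal ones (names as in the text above):

```python
def always_false(self):
    return False

# Build Foo with a method, then derive FooBar from it (bases must be a tuple)
Foo = type('Foo', (), {'always_false': always_false})
FooBar = type('FooBar', (Foo,), {})

print(Foo().always_false())     # -> False
print(issubclass(FooBar, Foo))  # -> True
print(type(Foo) is type)        # -> True
```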

+Excel Files (Aug. 22, 2014, 1:04 p.m.)

Read Excel files from Python

Use the excellent xlrd package, which works on any platform. That means you can read Excel files from Python in Linux! Example usage:

Open the workbook
import xlrd
wb = xlrd.open_workbook('myworkbook.xls')

Check the sheet names:
wb.sheet_names()
Get the first sheet either by index or by name
sh = wb.sheet_by_index(0)
sh = wb.sheet_by_name(u'Sheet1')

Iterate through rows, returning each as a list that you can index:
for rownum in range(sh.nrows):
print sh.row_values(rownum)

If you just want the first column:
first_column = sh.col_values(0)

Index individual cells:
cell_A1 = sh.cell(0,0).value
cell_C4 = sh.cell(rowx=3,colx=2).value

(Note Python indices start at zero but Excel starts at one)

#sheet = book.sheet_by_index(0)
#print sheet.cell(13, 12).value
#print sheet.row_values(10)

+Get Python version (Aug. 22, 2014, 12:41 p.m.)

Get python version:
import sys
print(sys.version)
print(sys.version_info)

+Installation (Feb. 4, 2016, 10:05 a.m.)

1-Before installation make sure you have these packages already installed :
apt-get install libbz2-dev libsqlite3-dev python-dev python3-dev libedit-dev libreadline-dev libssl-dev make build-essential

For CentOS:
yum install bzip2-devel bzip2-libs python-devel openssl-devel zlib-devel ncurses-devel sqlite-devel readline-devel gdbm-devel db4-devel libpcap-devel xz-devel
Installing Python:
2-mkdir ~/src; cd ~/src

3- From the following link download the version you need. Download the tgz file.

4-tar -zxvf Python-2.7.9.tar.gz

5-mkdir ~/.localpython

6-cd Python-2.7.9

7- ./configure --prefix=/home/mohsen/.localpython --enable-shared --enable-unicode=ucs4
--enable-shared: used because building uwsgi binary plugins otherwise fails with:
can not be used when making a shared object; recompile with -fPIC collect2: error: ld returned 1 exit status

--enable-unicode=ucs4: used because of this error when installing some modules in Mint:
/usr/lib/python2.7/lib-dynload/ undefined symbol: PyUnicodeUCS2_FromUnicode

8- make (If you get a "make: command not found" error, refer to the Debian category and search for the make-not-found note.)

9-make install

+JSON (Aug. 4, 2014, 4:41 a.m.)

Load JSON from a URL with the Python 2 standard library:

import json
import urllib2

data = json.load(urllib2.urlopen('http://someurl/path/to/json'))

Or with the requests package:

import json
import requests

url = ''
params = dict()  # query-string parameters, if any

resp = requests.get(url=url, params=params)
data = json.loads(resp.text)
Save JSON to file:

with open('db.json', 'w') as f:
    json.dump(data, f)  # data is a dictionary-like object.
Read/Load a JSON object from a file:

data = json.load(open('db.json'))
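A minimal round trip of the dump/load calls above (the file path is just an example, written to the system temp directory):

```python
import json
import os
import tempfile

data = {'repository': {'open_issues': 0, 'name': 'example'}}  # sample dict

path = os.path.join(tempfile.gettempdir(), 'db.json')

# Save JSON to file.
with open(path, 'w') as f:
    json.dump(data, f)

# Read/load it back.
with open(path) as f:
    loaded = json.load(f)

print(loaded == data)  # True
```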

+DateTime (Jan. 29, 2016, 1:28 p.m.)

from datetime import datetime, timedelta
from django.utils.timezone import make_aware, get_current_timezone

datetime.fromtimestamp(int(request.POST['date']) / 1000).date()




date_time = Call.objects.order_by('-id').first().date_time

timestamp = int(date_time.strftime('%s'))
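Note that strftime('%s') relies on a platform-specific C extension and ignores time zones; on Python 3, datetime.timestamp() is the portable route. A sketch with an assumed UTC value:

```python
from datetime import datetime, timezone

dt = datetime(2016, 1, 29, 13, 28, tzinfo=timezone.utc)  # assumed sample value

# Portable epoch seconds (float); int() truncates to whole seconds.
timestamp = int(dt.timestamp())

# Round-trip back to an aware datetime.
restored = datetime.fromtimestamp(timestamp, tz=timezone.utc)
print(restored == dt)  # True
```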


----------------------------------------------------------------------------

datetime.now() - timedelta(hours=24)


now = datetime.now()

dt_name = '%s-%s-%s--%s-%s' % (now.year, now.month, now.day, now.hour, now.minute)


now = make_aware(datetime.now(), get_current_timezone())

current_hour = make_aware(datetime(now.year, now.month, now.day, now.hour, 0, 0), get_current_timezone())

int((now - current_hour).seconds / 5)

end_time = current_hour + timedelta(seconds=5)






import time


Difference between two dates:

(appointment_date() - datetime.date.today()).days


Date string to date object:

datetime.datetime.strptime('24052010', "%d%m%Y").date()


Iterate through two dates:

start_date = date.today()
end_date = start_date.replace(year=start_date.year + 1)

for day_num in range((end_date - start_date).days + 1):
    date = start_date + timedelta(days=day_num)


from datetime import datetime
dt = datetime(2017, 1, 1, 12, 30, 59, 0)


datetime.strptime('2014-12-04', '%Y-%m-%d').date()


Get string of Date or DateTime object:
date_time_str = dt.strftime('%Y-%m-%d %H:%M:%S.%f')

Get object from the string format:
datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S.%f')

In case of getting an error like "ValueError: unconverted data remains: +00:00":
datetime.strptime(date_time_str.split('+')[0], '%Y-%m-%d %H:%M:%S.%f')
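On Python 3.7+ you can avoid the split('+') workaround entirely: fromisoformat understands the offset, and strptime's %z accepts the colon form. A sketch with a made-up timestamp string:

```python
from datetime import datetime

s = '2019-04-22 04:02:00.123456+00:00'  # made-up timestamp string

# fromisoformat understands the +00:00 offset directly (Python 3.7+).
aware = datetime.fromisoformat(s)

# The strptime equivalent uses %z, which also accepts the colon form.
also_aware = datetime.strptime(s, '%Y-%m-%d %H:%M:%S.%f%z')

print(aware == also_aware)       # True
print(aware.tzinfo is not None)  # True: the parsed datetime is aware
```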


Subtract / Add to datetime:

from datetime import datetime, timedelta

d = date.today() - timedelta(days=days_to_subtract)

start_dt - timedelta(hours=1)


import datetime

selected_date = datetime.date.today()

if request.POST:
    selected_date = datetime.datetime.strptime(request.POST['date'], '%Y-%m-%d').date()

earlier_date = selected_date - datetime.timedelta(days=1)

start_dt = datetime.datetime(earlier_date.year, earlier_date.month, earlier_date.day, 23, 0, 0)
end_dt = datetime.datetime(selected_date.year, selected_date.month, selected_date.day, 23, 59, 59)


Determine whether datetimes are aware or naive:

from django.utils import timezone

timezone.is_aware(dt)
timezone.is_naive(dt)
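Django's timezone.is_aware/is_naive implement the standard check; without Django, the same test can be sketched as:

```python
from datetime import datetime, timezone

def is_aware(dt):
    # An aware datetime carries a tzinfo whose utcoffset() is not None.
    return dt.tzinfo is not None and dt.tzinfo.utcoffset(dt) is not None

naive = datetime(2016, 1, 29, 13, 28)
aware = datetime(2016, 1, 29, 13, 28, tzinfo=timezone.utc)

print(is_aware(naive))  # False
print(is_aware(aware))  # True
```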



+Accessing index in loops (July 29, 2015, 2:02 p.m.)

names = ['Mohsen', 'Hadi', 'Farhad']
for index, name in enumerate(names):
    print index
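enumerate also accepts a start argument when 1-based numbering is wanted; a small sketch:

```python
names = ['Mohsen', 'Hadi', 'Farhad']

# Default: 0-based index.
pairs = [(index, name) for index, name in enumerate(names)]

# start=1 gives human-friendly numbering without manual arithmetic.
numbered = [(index, name) for index, name in enumerate(names, start=1)]

print(pairs[0])  # (0, 'Mohsen')
print(numbered)  # [(1, 'Mohsen'), (2, 'Hadi'), (3, 'Farhad')]
```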

+Sort dictionary by key or value (Aug. 4, 2014, 4:40 a.m.)

import operator

d = {1:2, 7:8, 31:5, 30:5}
e = sorted(d.iteritems(), key=operator.itemgetter(1))

Pass itemgetter(0) to sort by key.

In Python 3 there is no iteritems(); use items() instead.
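A Python 3 version of the same sort, using items() as the note suggests:

```python
import operator

d = {1: 2, 7: 8, 31: 5, 30: 5}

# Sort by value: index 1 of each (key, value) pair.
by_value = sorted(d.items(), key=operator.itemgetter(1))

# Sort by key: index 0 (a lambda works just as well).
by_key = sorted(d.items(), key=lambda kv: kv[0])

print(by_value)  # [(1, 2), (31, 5), (30, 5), (7, 8)]
print(by_key)    # [(1, 2), (7, 8), (30, 5), (31, 5)]
```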

React Native
+Installation (June 19, 2018, 5:45 p.m.)

1- Install Nodejs (Refer to NodeJs topic in my notes)

2- sudo npm install -g react-native-cli

3- Configure Android development environment using my notes, in Android category.

+Start project and running it (Oct. 11, 2018, 5 p.m.)

1- react-native init my_project

2- cd my_project

3- react-native start (in a shell)

4- react-native run-android (in another shell)

+Run gradlew for debugging (Oct. 11, 2018, 6:29 p.m.)

cd to "android" folder inside your react native project:

./gradlew installDebug --debug

./gradlew installDebug --stacktrace

+Gradle Repositories (Oct. 12, 2018, 12:57 p.m.)

+Gradle Socks5 (Oct. 18, 2018, 8:25 p.m.)

Add the following line to gradle.properties:

org.gradle.jvmargs=-DsocksProxyHost= -DsocksProxyPort=1080

+Google's Maven Repository (Oct. 18, 2018, 8:40 p.m.)

+Watchman - Installation (Oct. 18, 2018, 10:11 p.m.)

1- apt install libtool-bin automake autotools-dev

2- git clone

3- cd watchman

4- git checkout v4.9.0 # the latest stable release

5- ./autogen.sh

6- ./configure

7- make

8- sudo make install

+Description (Aug. 22, 2014, 9:29 a.m.)

Special Characters

Because we want to do more than simply search for literal pieces of text, we need to reserve certain characters for special use. In the regex flavors discussed in this tutorial, there are 12 characters with special meanings:
the backslash \
the caret ^
the dollar sign $
the period or dot .
the vertical bar or pipe symbol |
the question mark ?
the asterisk or star *
the plus sign +
the opening parenthesis (
the closing parenthesis )
the opening square bracket [
the opening curly brace {
These special characters are often called "metacharacters".

If you want to use any of these characters as a literal in a regex, you need to escape them with a backslash. If you want to match 1+1=2, the correct regex is 1\+1=2. Otherwise, the plus sign has a special meaning.

Note that 1+1=2, with the backslash omitted, is a valid regex. So you won't get an error message. But it doesn't match 1+1=2. It would match 111=2 in 123+111=234, due to the special meaning of the plus character.

If you forget to escape a special character where its use is not allowed, such as in +1, then you will get an error message.
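In Python's re module the same rules apply, and re.escape() does the escaping mechanically; a sketch of the 1+1=2 example:

```python
import re

text = '123+111=234 and 1+1=2'

# Unescaped, '+' means "one or more", so the pattern matches '111=2' ...
print(re.search(r'1+1=2', text).group())   # 111=2

# ... while the escaped pattern matches the literal '1+1=2'.
print(re.search(r'1\+1=2', text).group())  # 1+1=2

# re.escape() builds the escaped pattern for you.
print(re.escape('1+1=2'))
```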

Most regular expression flavors treat the brace { as a literal character, unless it is part of a repetition operator like a{1,3}. So you generally do not need to escape it with a backslash, though you can do so if you want. An exception to this rule is Java, which requires all literal braces to be escaped.

All other characters should not be escaped with a backslash. That is because the backslash is also a special character. The backslash in combination with a literal character can create a regex token with a special meaning. E.g. \d is a shorthand that matches a single digit from 0 to 9.

Escaping a single metacharacter with a backslash works in all regular expression flavors. Many flavors also support the \Q…\E escape sequence. All the characters between the \Q and the \E are interpreted as literal characters. E.g. \Q*\d+*\E matches the literal text *\d+*. The \E may be omitted at the end of the regex, so \Q*\d+* is the same as \Q*\d+*\E. This syntax is supported by the JGsoft engine, Perl, PCRE, PHP, Delphi, and Java, both inside and outside character classes. Java 4 and 5 have bugs that cause \Q…\E to misbehave, however, so you shouldn't use this syntax with Java.

+Installation and Configuration (Aug. 22, 2014, 9:30 a.m.)


1-Install MySQL:
apt-get install mysql-server mysql-client
During installation set an optional password.

2-Install Nginx, as in the tutorial file of nginx, in my tutorials.

3-Create a directory named roundcube (or something else) in an optional path:
mkdir /home/mohsen/roundcube/
sudo chown www-data.www-data roundcube -R

4-Install PHP:
We can make PHP5 work in nginx through FastCGI. Fortunately, Debian Squeeze provides a FastCGI-enabled PHP5 package which we install like this (together with some PHP5 modules like php5-mysql which you need if you want to use MySQL from your PHP scripts):

sudo apt-get install php5-cgi php5-mysql php5-curl php5-gd php5-idn php-pear php5-imagick php5-imap php5-mcrypt php5-memcache php5-ming php5-pspell php5-recode php5-snmp php5-sqlite php5-tidy php5-xmlrpc php5-xsl php5-intl php5-pgsql

5-Install lighttpd:
There's no standalone FastCGI daemon package for Debian Squeeze, therefore we use the spawn-fcgi program from lighttpd. We install lighttpd as follows:

sudo apt-get install lighttpd
You will see an error message saying that lighttpd can't start because port 80 is already in use:
Starting web server: lighttpd2011-02-24 01:43:18: (network.c.358) can't bind to port: 80 Address already in use
invoke-rc.d: initscript lighttpd, action "start" failed.

That's how it's supposed to be because nginx is already listening on port 80. Run
update-rc.d -f lighttpd remove
so that lighttpd will not start at boot time.
Note: after running this "update-rc.d ..." command, lighttpd should no longer start automatically at boot. In one case, though, it still started after a VPS reboot and blocked nginx from running, so the websites would not open. If this happens, run the "update-rc.d ..." command again and reboot the VPS.

We've installed lighttpd because we need just one program that comes with the package, /usr/bin/spawn-fcgi, which we can use to start FastCGI processes. Take a look at

spawn-fcgi --help
to learn more about it.

6- To start a PHP FastCGI daemon listening on port 9001 on localhost and running as the user and group www-data, we run the following command:

/usr/bin/spawn-fcgi -a -p 9001 -u www-data -g www-data -f /usr/bin/php5-cgi -P /var/run/

Of course, you don't want to type in that command manually whenever you boot the system, so to have the system execute the command automatically at boot time, open /etc/rc.local...

7-Create an optional folder (e.g. configs) in /roundcube/ and then:
nano configs/virtual_host (this file name is also optional)

server {
    access_log /home/mohsen/roundcube/logs/mohsenhassani_access.log;
    error_log /home/mohsen/roundcube/logs/mohsenhassani_errors.log;

    location / {
        root /home/mohsen/roundcube/;
        index index.html index.php;
    }

    location ~ \.php$ {
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/mohsen/roundcube$fastcgi_script_name;
        include fastcgi_params;
    }
}

8-Reload nginx:
sudo /usr/local/nginx/sbin/nginx -s reload
Or, better (sometimes -s reload does not pick up the changes):
sudo /usr/local/nginx/sbin/nginx -s stop
sudo /usr/local/nginx/sbin/nginx

9-Installing APC:
APC is a free and open PHP opcode cacher for caching and optimizing PHP intermediate code. It's similar to other PHP opcode cachers, such as eAccelerator and XCache. It is strongly recommended to have one of these installed to speed up your PHP page.

APC can be installed as follows:

sudo apt-get install php-apc

10-Configure the default timezone in PHP:
nano /etc/php5/cli/php.ini
date.timezone = "Asia/Tehran"

11-Stop PHP:
First you have to find the PID by which PHP is running:
netstat -tap
Look for the process occupying port <9001> and kill it by its PID:
kill <pid>

12-Afterwards we create a new spawn-fcgi process:
/usr/bin/spawn-fcgi -a -p 9001 -u www-data -g www-data -f /usr/bin/php5-cgi -P /var/run/

13-Installing Roundcube:
Installation consists of downloading the gz file from the address:, and extracting it in the /roundcube/ directory.

tar xvfz roundcubemail-0.7.1.tar.gz
cd roundcubemail-0.7.1/
mv * /home/mohsen/roundcube/
mv .htaccess /home/mohsen/roundcube/

14-Make the document root and the Roundcube files in it writable by the nginx daemon which is running as user www-data and group www-data:
chown www-data.www-data /home/mohsen/roundcube/ -R

15-Reload nginx

16- To create the postgres user, database, and tables, access the postgres shell and run:
createuser --pwprompt roundcube
createdb -O roundcube -E UNICODE roundcubemail
psql roundcubemail
\c - roundcube
\i ./roundcube/SQL/postgres.initial.sql
The last command should create tables. Done.

17- On this page, if the <NEXT> button is active (clickable, not disabled), everything is okay and you can go to the next step by clicking it; if it's not active, you have to fix the errors listed on the page.
Keep in mind that every time you fix an error and want to test whether it worked, you have to kill php5-cgi using the <netstat> command as described in step 11, run it again as in step 6, and then refresh the setup page to see if the <NEXT> button has become active.

18- The configuration for the postgres database is as follows (it's also mentioned in Roundcube's INSTALL file):
$ createuser roundcube
$ createdb -O roundcube -E UNICODE roundcubemail
$ psql roundcubemail

roundcubemail =# ALTER USER roundcube WITH PASSWORD 'the_new_password';
roundcubemail =# \c - roundcube
roundcubemail => \i /home/mohsen/roundcube/SQL/postgres.initial.sql
Now I could receive mail but not send it ("relay access denied" again). To solve the error, set:
$rcmail_config['smtp_user'] = '%u';
$rcmail_config['smtp_pass'] = '%p';
$rcmail_config['smtp_auth_type'] = 'plain';
$rcmail_config['smtp_server'] = '';
$rcmail_config['smtp_port'] = 25;
Roundcube depends on Linux users: if you don't have an account on the VPS, you can't use Roundcube. Without an account you'll get an error like "Login failed.", and even with an account you might get "Connection to the storage server failed.", which means the user has no "Maildir" directory in their /home/. Create one and you'll be okay :)
And of course, the "Maildir" directory must have the same user and group as its /home/ directory:
chown mohsen.mohsen /home/mohsen/Maildir
Don't forget to restart spawn-fcgi.
For creating a new web mail for my customers:
1- Using the adduser command, add a username as their receiving name. In this case, for example, the username is "mohsen".

2-Creating a directory named 'Maildir' in their home:
su <new_username>
cd /home/<username>
mkdir Maildir
chown mohsen.mohsen /home/mohsen/Maildir -R

3-Register their domain name '' in server_name:
nano /home/mohsen/configs/roundcube

4-Register their domain name '' in mydestination:
nano /etc/postfix/

5- That's it! Restart nginx and open the address. Everything should work.

<< Don't forget to create a sub-domain named "mail." in their domain panel and forward it to server. >>
Errors I have encountered so far:
Before reading the errors and the solutions, don't forget to stop the spawn-fcgi, and run it again. And also reload nginx AFTER any changes you do for solving any of the problems.

Error 1- suhosin.session.encrypt: NOT OK (is '1', should be '0'):
For solving this error I needed to edit the file
nano /etc/php5/conf.d/suhosin.ini
and uncomment this line (by deleting the leading semicolon) and edit it as follows:
suhosin.session.encrypt = off
And only kill the spawn-fcgi and run it again. No need to reload nginx.

Error 2- date.timezone NOT OK, or any other errors pertaining to date.timezone:
Edit the file:
nano /etc/php5/cgi/php.ini
Search for "timezone", and change it as follow:
date.timezone = "Asia/Tehran"
Re-run spawn-fcgi.

It's essential to set the variable "auto_create_user" to true in file "roundcube/config/":
$rcmail_config['auto_create_user'] = true;
And after the first user has been created and logged in, set it to false.
So having this variable set to true and restarting the spawn-fcgi I could finally log in.
Error: "The uploaded file exceeds the maximum size of 2.0 MB.":
For solving this error:
nano /etc/php5/cli/php.ini
Search for "upload_max_filesize" and change its value to for example 100M.

NOPE! Not solved yet....

+Exclude files and folders (Aug. 22, 2014, 10:02 a.m.)

--exclude 'sources.txt'
--exclude '*.pyc'
--exclude '/static'
--exclude 'abc*'
--exclude 'sources.txt' --exclude 'abc*'
The best way:
First, create a text file listing all the files and directories you don't want to back up; these are the paths rsync will exclude.

nano rsync-exclude-list.txt (optional name)

Next, run rsync with the --exclude-from option pointing at that file:

$ rsync -avz --exclude-from 'rsync-exclude-list.txt' source/ destination/

+Options (Aug. 22, 2014, 10:01 a.m.)

-a = archive mode: recurse into directories, copy symlinks as symlinks, and preserve permissions, modification times, group, owner, device files, and special files.

-v = verbose. The reason I think verbose is important is so you can see exactly what rsync is backing up. Think about this: What if your hard drive is going bad, and starts deleting files without your knowledge, then you run your rsync script and it pushes those changes to your backups, thereby deleting all instances of a file that you did not want to get rid of?

--delete = This tells rsync to delete any files that are in Directory2 that aren’t in Directory1. If you choose to use this option, I recommend also using the verbose options, for reasons mentioned above.

-l = preserve any symlinks you may have created (already included in -a).

--progress = shows the progress of each file transfer. Useful for knowing whether large files are being backed up.

--stats = Adds a little more output regarding the file transfer status.

-I, --ignore-times
Normally rsync will skip any files that are already the same size and have the same modification timestamp. This option turns off this "quick check" behavior, causing all files to be updated.

-b, --backup
With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options. Note that if you don’t specify --backup-dir, (1) the --omit-dir-times option will be implied, and (2) if --delete is also in effect (without --delete-excluded), rsync will add a "protect" filter-rule for the backup suffix to the end of all your existing excludes (e.g. -f "P *~"). This will prevent previously backed-up files from being deleted. Note that if you are supplying your own filter rules, you may need to manually insert your own exclude/protect rule somewhere higher up in the list so that it has a high enough priority to be effective (e.g., if your rules specify a trailing inclusion/exclusion of ’*’, the auto-added rule would never be reached).

In combination with the --backup option, this tells rsync to store all backups in the specified directory on the receiving side. This can be used for incremental backups. You can additionally specify a backup suffix using the --suffix option (otherwise the files backed up in the specified directory will keep their original filenames). Note that if you specify a relative path, the backup directory will be relative to the destination directory, so you probably want to specify either an absolute path or a path that starts
with "../". If an rsync daemon is the receiver, the backup dir cannot go outside the module’s path hierarchy, so take extra care not to delete it or copy into it.

This option allows you to override the default backup suffix used with the --backup (-b) option. The default suffix is a ~ if no --backup-dir was specified, otherwise it is an empty string.

-u, --update
This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file. (If an existing destination file has a modification time equal to the source file’s, it will be updated if the sizes are different.) Note that this does not affect the copying of symlinks or other special files. Also, a difference of file format between the sender and receiver is always considered to be important enough for an update, no matter what date is on the objects. In other words, if the source has a directory where the destination has a file, the transfer would occur regardless of the timestamps. This option is a transfer rule, not an exclude, so it doesn’t affect the data that goes into the file-lists, and thus it doesn’t affect deletions. It just limits the files that the receiver requests to be transferred.

+Examples (Feb. 15, 2016, 10:41 a.m.)

rsync -varzhne 'ssh -p 1220' --no-g --no-p --delete --force --exclude-from 'horreh/rsync' my_project
rsync -arvb --exclude-from 'my_project/rsync-exclude-list.txt' --delete --backup-dir='my_project/my_project/rsync-deletions' -e ssh my_project

rsync -varzhe 'ssh' --delete --force --exclude-from 'my_project/rsync_dev' my_project
Task: Copy file from a local computer to a remote server

Copy file from /www/backup.tar.gz to a remote server called
$ rsync -v -e ssh /www/backup.tar.gz


sent 19099 bytes received 36 bytes 1093.43 bytes/sec
total size is 19014 speedup is 0.99
Please note that the ~ symbol indicates the user's home directory (/home/jerry).
Task: Copy file from a remote server to a local computer

Copy file /home/jerry/webroot.txt from a remote server to a local computer's /tmp directory:
$ rsync -v -e ssh /tmp
Task: Synchronize a local directory with a remote directory

$ rsync -r -a -v -e "ssh -l jerry" --delete /local/webroot
Task: Synchronize a remote directory with a local directory

$ rsync -r -a -v -e "ssh -l jerry" --delete /local/webroot
Task: Synchronize a local directory with a remote rsync server or vice versa

$ rsync -r -a -v --delete rsync:// /home/cvs
$ rsync -r -a -v --delete /home/cvs rsync://
Task: Mirror a directory between my "old" and "new" web server/ftp

You can mirror a directory between my "old" ( and "new" web server with the command (assuming that SSH keys are set up for passwordless authentication)
$ rsync -zavrR --delete --links --rsh="ssh -l vivek" /home/lighttpd
rsync -av --delete /Directory1/ /Directory2/
rsync -av --delete -e ssh /Directory1/ geek@

The code above will synchronize the contents of Directory1 to Directory2, and leave no differences between the two. If rsync finds that Directory2 has a file that Directory1 does not, it will delete it. If rsync finds a file that has been changed, created, or deleted in Directory1, it will reflect those same changes to Directory2.
If you have SSH listening on some port other than 22, you would need to specify the port number, such as in this example where I use port 12345:

$ rsync -av --delete -e 'ssh -p 12345' /Directory1/ geek@
rsync --ignore-existing
rsync -avP --ignore-existing *.png

+Introduction (Aug. 22, 2014, 10 a.m.)

rsync is a free software computer program for Unix- and Linux-like systems which synchronizes files and directories from one location to another while minimizing data transfer using delta encoding when appropriate. An important feature of rsync not found in most similar programs/protocols is that the mirroring takes place with only one transmission in each direction.

So what is unique about the rsync command?
It can perform differential uploads and downloads (synchronization) of files across the network, transferring only data that has changed. The rsync remote-update protocol allows rsync to transfer just the differences between two sets of files across the network connection.

Always use rsync over ssh
Since rsync does not provide any security while transferring data it is recommended that you use rsync over ssh session. This allows a secure remote connection. Now let us see some examples of rsync command.

Common rsync command options

--delete : delete files that don't exist on sender (system)
-v : Verbose (try -vv for more detailed information)
-e "ssh options" : specify the ssh as remote shell
-a : archive mode
-r : recurse into directories
-z : compress file data

+Running Projects (June 6, 2017, 2:03 p.m.)

CD to the project root where the "Gemfile" file exists, and run these commands:
sudo gem install bundler
bundle install

+Installation (March 5, 2017, 4:21 p.m.)

1- Download the latest stable version.
2- apt install build-essential zlib1g-dev libssl-dev sqlite3 libsqlite3-dev nodejs
3- tar xvf ruby-2.4.0.tar.gz
4- cd ruby-2.4.0
5- ./configure --with-openssl-dir=/usr/lib/ssl
6- make
7- sudo make install

+Introduction & Syntax Description (May 9, 2016, 7:26 p.m.)

Sass (Syntactically Awesome StyleSheets)

Sass is an extension of CSS that adds power and elegance to the basic language. It allows you to use variables, nested rules, mixins, inline imports, and more, all with a fully CSS-compatible syntax. Sass helps keep large stylesheets well-organized, and get small stylesheets up and running quickly, particularly with the help of the Compass style library.
There are two syntaxes available for Sass. The first, known as SCSS (Sassy CSS), is an extension of the syntax of CSS. This means that every valid CSS stylesheet is a valid SCSS file with the same meaning. In addition, SCSS understands most CSS hacks and vendor-specific syntax, such as IE’s old filter syntax. Files using this syntax have the .scss extension.

The second and older syntax, known as the indented syntax (or sometimes just “Sass”), provides a more concise way of writing CSS. It uses indentation rather than brackets to indicate nesting of selectors, and newlines rather than semicolons to separate properties. Some people find this to be easier to read and quicker to write than SCSS. The indented syntax has all the same features, although some of them have slightly different syntax; this is described in the indented syntax reference. Files using this syntax have the .sass extension.

Either syntax can import files written in the other. Files can be automatically converted from one syntax to the other using the sass-convert command line tool:

# Convert Sass to SCSS
$ sass-convert style.sass style.scss

# Convert SCSS to Sass
$ sass-convert style.scss style.sass

+Sentry Supervisor Config File (Oct. 4, 2015, 10:45 a.m.)

command=/home/mohsen/virtualenvs/sentry/bin/sentry --config=/home/mohsen/.sentry/ start

command=/home/mohsen/virtualenvs/sentry/bin/sentry celery worker -B

+Sentry Nginx Config File (Oct. 3, 2015, 2:37 p.m.)

server {
listen 80;
access_log /home/mohsen/logs/sentry_mohsenhassani.access.log;
error_log /home/mohsen/logs/sentry_mohsenhassani.error.log;
location / {
proxy_pass http://localhost:9000;
proxy_redirect off;

proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

+Sentry Config File (Oct. 4, 2015, 10:45 a.m.)

# This file is just Python, with a touch of Django which means
# you can inherit and tweak settings to your hearts content.
from sentry.conf.server import *

import os.path

CONF_ROOT = os.path.dirname(__file__)

DATABASES = {
    'default': {
        # You can swap out the engine for MySQL easily by changing this value
        # to ``django.db.backends.mysql`` or to PostgreSQL with
        # ``sentry.db.postgres``

        # If you change this, you'll also need to install the appropriate python
        # package: psycopg2 (Postgres) or mysql-python
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'sentrydb',
        'USER': 'sentry',
        'PASSWORD': 'SentrY',
        'HOST': '',
        'PORT': '',
    }
}

# You should not change this setting after your database has been created
# unless you have altered all schemas first

# If you're expecting any kind of real traffic on Sentry, we highly recommend
# configuring the CACHES and Redis settings

# General #

# The administrative email for this installation.
# Note: This will be reported back to as the point of contact. See
# the beacon documentation for more information. This **must** be a string.


# Instruct Sentry that this install intends to be run by a single organization
# and thus various UI optimizations should be enabled.

# Redis #

# Generic Redis configuration used as defaults for various things including:
# Buffers, Quotas, TSDB

SENTRY_REDIS_OPTIONS = {
    'hosts': {
        0: {
            'host': '',
            'port': 6379,
        }
    }
}

# Cache #

# If you wish to use memcached, install the dependencies and adjust the config
# as shown:
# pip install python-memcached
# CACHES = {
# 'default': {
# 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
# 'LOCATION': [''],
# }
# }
# SENTRY_CACHE = 'sentry.cache.django.DjangoCache'

SENTRY_CACHE = 'sentry.cache.redis.RedisCache'

# Queue #

# See for more
# information on configuring your queue broker and workers. Sentry relies
# on a Python framework called Celery to manage queues.

BROKER_URL = 'redis://localhost:6379'

# Rate Limits #

# Rate limits apply to notification handlers and are enforced per-project
# automatically.

SENTRY_RATELIMITER = 'sentry.ratelimits.redis.RedisRateLimiter'

# Update Buffers #

# Buffers (combined with queueing) act as an intermediate layer between the
# database and the storage API. They will greatly improve efficiency on large
# numbers of the same events being sent to the API in a short amount of time.
# (read: if you send any kind of real data to Sentry, you should enable buffers)

SENTRY_BUFFER = 'sentry.buffer.redis.RedisBuffer'

# Quotas #

# Quotas allow you to rate limit individual projects or the Sentry install as
# a whole.

SENTRY_QUOTAS = 'sentry.quotas.redis.RedisQuota'

# TSDB #

# The TSDB is used for building charts as well as making things like per-rate
# alerts possible.

SENTRY_TSDB = 'sentry.tsdb.redis.RedisTSDB'

# File storage #

# Any Django storage backend is compatible with Sentry. For more solutions see
# the django-storages package:

SENTRY_FILESTORE_OPTIONS = {'location': '/tmp/sentry-files'}

# Web Server #

# You MUST configure the absolute URI root for Sentry:
SENTRY_URL_PREFIX = '' # No trailing slash!

# If you're using a reverse proxy, you should enable the X-Forwarded-Proto
# header and uncomment the following settings

# 'workers': 3, # the number of gunicorn workers
# 'secure_scheme_headers': {'X-FORWARDED-PROTO': 'https'},

# Mail Server #

# For more information check Django's documentation:

EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'

EMAIL_HOST_PASSWORD = '4301Mohsen4301'
EMAIL_HOST_USER = 'mohsen'

# The email address to send on behalf of
SERVER_EMAIL = 'root@localhost'

# If you're using mailgun for inbound mail, set your API key and configure a
# route to forward to /api/hooks/mailgun/inbound/

+Installation and Configuration (Oct. 3, 2015, 4:11 p.m.)

For running sentry on my VPS, follow the following steps:
1-Create a virtualenv for 'sentry' and activate it (Do not use python3):
virtualenv -p /usr/bin/python2.7 ~/virtualenvs/sentry
source ~/virtualenvs/sentry/bin/activate

2- Before installing sentry, there are some packages you need to install:
apt-get install python-setuptools python-pip python-dev libxslt1-dev libxml2-dev libz-dev libffi-dev libssl-dev

Then install sentry itself:
pip install sentry
(Many packages will be downloaded and installed. If any errors are raised, check them; you might need to install some more packages beyond the list above.)

3- pip install sentry[postgres]

4- sentry init ~/.sentry/
Using this command, a directory named `.sentry` and a Python config file for your sentry will be created in your home directory.
Open the config file, review it, and change the default values based on your needs.
(You can find the contents of my config file in this tutorial, under the title 'Sentry Config File'.)

5-In the settings file, you set a URL name:
Don't forget to create the subdomain using bind on your VPS.

6-Now you have to create the database:
a) su (enter the password)
b) su postgres
c) createuser --pwprompt sentry
d) createdb -O sentry sentrydb

7- You need to install `redis`:
Redis is an open source in-memory data structure store, used as a database, cache, and message broker.
It is a flexible, open-source, key-value data store.
apt-get install redis-server (after installation it will automatically run its server on localhost:6379).

8-sentry --config=~/.sentry/ upgrade
It is used for migrations on the database and creating the initial schema.
It will ask you to enter some information about the account and superuser information:
Enter (for example for email) and a password.
Choose (y) for using the same info as superuser.

9-To start the built-in webserver run:
sentry --config=~/.sentry/ start

You should now be able to test the web service by visiting

10-For deploying Sentry with Nginx, create an Nginx config file in '~/configs/sentry'.
(I have the contents of the file in my notes, in 'Sentry Nginx Config File'.)
After restarting nginx (/etc/init.d/nginx restart) you'll be able to open sentry using:

11-Sentry comes with a built-in queue to process tasks in a more asynchronous fashion. For example, with workers enabled, when an event comes in instead of writing it to the database immediately, it sends a job to the queue so that the request can be returned right away, and the background workers handle actually saving that data.
When I ran the program I intended to log the errors, I would get the error in the sentry web after about 5 minutes. But using this following command I was able to get it in less than 10 seconds.
sentry celery worker -B (This command is for tutorial only. It will be used in supervisor config file.)

If you're going to use `supervisor` along with Sentry, you'll need to create a user for sentry and add it to suoders:
adduser sentry (It does not matter what password you're entering)
adduser sentry sudo

Keep in mind that, when adding the supervisor config file, to get the `sentry-worker` work, you need to stop and start `supervisorctl`:
killall supervisorctl

If the website did not open, for debugging refer to this path:

You should be able to access '/admin/'. Enter the username and password of superuser and create a project, team, and stuff you need.

Shell Scripting
+ANSI escape codes (March 18, 2018, 3:22 a.m.)

Black 0;30 Dark Gray 1;30
Red 0;31 Light Red 1;31
Green 0;32 Light Green 1;32
Brown/Orange 0;33 Yellow 1;33
Blue 0;34 Light Blue 1;34
Purple 0;35 Light Purple 1;35
Cyan 0;36 Light Cyan 1;36
Light Gray 0;37 White 1;37


BROWN_ORANGE='\033[0;33m'
NC='\033[0m'  # No Color (reset)
echo -e "${BROWN_ORANGE}No space!${NC}"

+Understanding find -exec option (curly braces & plus sign) (March 15, 2018, 11:42 p.m.)

The curly braces will be replaced by the results of the find command, and the chmod will be run on each of them. The + makes find attempt to run as few commands as possible (so, chmod 755 file1 file2 file3 as opposed to chmod 755 file1; chmod 755 file2; chmod 755 file3). Without a terminator (; or +), the command just gives an error. This is all explained in man find:

-exec command ;
Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of `;' is encountered. The string `{}' is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find.

-exec command {} +
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending
each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files.
“Obviously” -exec … must be terminated with either a semicolon (;) or a plus sign (+). Semicolon is a special character in the shell (or, at least, every shell I’ve ever used), so, if it is to be used as part of the find command, it must be escaped or quoted (\;, ";", or ';').
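A quick way to see both forms in action, using a throwaway directory (the actual find/chmod command was not shown in the note, so the paths and predicates below are illustrative):

```shell
# Throwaway demo directory with three files
mkdir -p /tmp/exec_demo
touch /tmp/exec_demo/a /tmp/exec_demo/b /tmp/exec_demo/c

# With ';' find runs chmod once per file (three invocations here)
find /tmp/exec_demo -type f -exec chmod 700 {} \;

# With '+' find appends the file names and runs as few chmods as possible
find /tmp/exec_demo -type f -exec chmod 755 {} +
```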

+Total Physical Memory RAM (Aug. 7, 2017, 2:27 p.m.)

awk '/MemTotal/ {print $2}' /proc/meminfo
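The value printed by the command above is in kB (the unit /proc/meminfo always uses for MemTotal); a variant that converts it to GiB:

```shell
# MemTotal is reported in kB; divide by 1024^2 to get GiB
awk '/MemTotal/ {printf "%.1f GiB\n", $2/1048576}' /proc/meminfo
```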

+How to grep in dmesg (Aug. 6, 2017, 5:13 p.m.)

dmesg | grep "the BIOS has corrupted hw-PMU resources"

+Is OS running on a virtual machine (Aug. 6, 2017, 4:54 p.m.)

egrep "hypervisor" /proc/cpuinfo
if [ $? -eq 0 ]; then echo "This OS is running on a virtual machine."; else echo "This OS is NOT running on a virtual machine."; fi

+Check if CPU supports Virtualization (Aug. 5, 2017, 5:11 p.m.)

egrep '(vmx|svm)' /proc/cpuinfo
if [ $? -eq 0 ]; then echo "supported"; else echo "not supported"; fi
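The same exit-status trick from this note and the previous one can be wrapped in small functions, a sketch (grep -qE is the quiet equivalent of egrep):

```shell
# The exit status of grep -q doubles as the boolean result
cpu_has_virt() { grep -qE '(vmx|svm)' /proc/cpuinfo; }
is_vm()        { grep -qE 'hypervisor' /proc/cpuinfo; }

if cpu_has_virt; then echo "supported"; else echo "not supported"; fi
```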

+Close a process by its saved PID in file (Aug. 4, 2017, 1:09 p.m.)

(The pid-file path was truncated in the original note; /tmp/app.pid below is a placeholder.)
if [ -f /tmp/app.pid ]; then
    /bin/kill "$(cat /tmp/app.pid)"
fi

+Get & Save running application PID in a file (Aug. 4, 2017, 1:08 p.m.)

(The pid-file path was truncated in the original note; /tmp/app.pid below is a placeholder.)
/usr/bin/transmission-gtk > /dev/null &
echo $! > /tmp/app.pid
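The two notes above form a start/stop pair; a runnable sketch using sleep as a stand-in for the application and /tmp/demo.pid as a hypothetical pid-file path:

```shell
# Start a background process and record its PID ($! is the last background PID)
sleep 60 > /dev/null &
echo $! > /tmp/demo.pid

# Later: stop it via the saved PID, then remove the pid file
if [ -f /tmp/demo.pid ]; then
    /bin/kill "$(cat /tmp/demo.pid)"
    rm /tmp/demo.pid
fi
```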

+For Loop (July 5, 2017, 1:14 a.m.)

for i in {06..25}; do mkdir "The Simpsons - Season $i"; done

for i in 1 2 3 4 5; do echo "Welcome $i times"; done

for i in {1..5}; do echo "Welcome $i times"; done

echo "Bash version ${BASH_VERSION}..."
for i in {0..10..2}; do echo "Welcome $i times"; done
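The brace expansions used above can be inspected directly with echo (the {start..end..step} form needs bash 4+, which is why the BASH_VERSION check is useful):

```shell
echo {1..5}       # → 1 2 3 4 5
echo {0..10..2}   # → 0 2 4 6 8 10
echo {06..08}     # zero-padded: 06 07 08
```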

+Infinite Loop (July 5, 2017, 1:16 a.m.)

for (( ; ; )); do echo "infinite loop [ hit CTRL+C to stop ]"; done

+Copy files and take some part of the names (July 2, 2017, 12:08 p.m.)

for f in *.sample; do cp "$f" "${f/.sample/}"; done
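The ${f/.sample/} parameter expansion removes the first occurrence of .sample from the value of f; a quick check:

```shell
# Pattern substitution: ${var/pattern/replacement} with an empty replacement
f="nginx.conf.sample"
echo "${f/.sample/}"   # → nginx.conf
```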

+Remove spaces from file names (April 22, 2017, 12:08 p.m.)

rename "s/ //g" *

+Renaming multiple files (Dec. 4, 2014, 2:08 p.m.)

for i in *.mpa; do mv "$i" "${i/.mpa}".mp3; done

+Tutorial (Aug. 22, 2014, 10:03 a.m.)

The shell maintains a list of directories where executable files (programs) are kept, and just searches the directories in that list. If it does not find the program after searching each directory in the list, it will issue the famous command not found error message.

This list of directories is called your path. You can view the list of directories with the following command:

echo $PATH

This will return a colon separated list of directories that will be searched if a specific path name is not given when a command is attempted.
You can add directories to your path with the following command, where directory is the name of the directory you want to add:

export PATH=$PATH:directory

A better way would be to edit your .bash_profile file to include the above command. That way, it would be done automatically every time you log in.

Most modern Linux distributions encourage a practice in which each user has a specific directory for the programs he/she personally uses. This directory is called bin and is a subdirectory of your home directory.
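Putting the two previous points together, a typical .bash_profile fragment that creates the per-user bin directory and adds it to the path:

```shell
# Create ~/bin if missing and append it to PATH
mkdir -p "$HOME/bin"
export PATH="$PATH:$HOME/bin"
```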

Though placing your aliases and shell functions in your .bash_profile will work, it is not considered good form. There is a separate file named .bashrc that is intended to be used for your custom scripts. You may notice a piece of code near the beginning of your .bash_profile that looks something like this:

if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

This script fragment checks to see if there is a .bashrc file in your home directory. If one is found, then the script will read its contents. If this code is in your .bash_profile, you should edit the .bashrc file and put your aliases and shell functions there.
Here Scripts

A here script (also sometimes called a here document) is an additional form of I/O redirection. It provides a way to include content that will be given to the standard input of a command.

command << token
content to be used as command's standard input
token

token can be any string of characters. I use "_EOF_" (EOF is short for "End Of File") because it is traditional, but you can use anything, as long as it does not conflict with a bash reserved word. The token that ends the here script must exactly match the one that starts it, or else the remainder of your script will be interpreted as more standard input to the command.
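A runnable here script; note that the shell expands variables in the content before the command reads it (because the token is left unquoted):

```shell
# The heredoc body is fed to cat's standard input; $name expands first
name="world"
cat << _EOF_
Hello, $name!
_EOF_
```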

cat <<- _EOF_
	This line is indented with a tab in the script.
_EOF_

Changing the "<<" to "<<-" causes bash to ignore the leading tabs (but not spaces) in the here script. The output from the cat command will not contain any of the leading tab characters.
Environment Variables

When you start your shell session, some variables are already ready for your use. They are defined in scripts that run each time a user logs in. To see all the variables that are in your environment, use the printenv command.
Like constants, environment variables are given uppercase names by convention.
echo "My host name is \"$HOSTNAME\"."
My host name is "linuxbox".

+Edit Menus (June 27, 2015, 1:02 p.m.)

The menus are saved in the database, not in the perl or html files.
1-psql -d asterisk -U postgres
2-update menus set description='Payments: Carriersss' where username='mohsen' and menu='accountingcarrier';

+Perl Packages (June 27, 2015, 10:23 a.m.)

Install these packages using CPAN:
2-apt-get install libspreadsheet-xlsx-perl libexcel-writer-xlsx-perl libclass-date-perl libspreadsheet-writeexcel-perl libpg-perl libasterisk-agi-perl
4- Install these packages using CPAN (install <package_name>):



CPAN::DistnameInfo // This is for displaying the results of errors (in case a package is not installable; for example, when installing Archive::Zip, it says to view the errors use this command: reports PHRED/Archive-Zip-1.48.tar.gz, and for that `reports` command to work I needed to install this package).

LWP // This is for displaying the results of errors (in case a package is not installable).
If this error occurs:
Can't locate object method "data" via package "CPAN::Modulelist" (perhaps you forgot to load "CPAN::Modulelist"?)

Do this to solve it:
# mv ~/.cpan ~/.cpan-bak
If you get errors like:
'YAML' not installed, ....

This will solve it:
perl -e 'use CPAN; force install "Bundle::CPAN"'

+Useful Links (Nov. 14, 2015, 4:28 p.m.)

+Query Examples (Oct. 27, 2015, 3:54 p.m.)

from sqlalchemy import or_

session = get_session()
cdr_records = session.query(CDR)
print('Total records: %s' % len(cdr_records.all()))
cdr_records = cdr_records.filter(CDR.calldate.between(data['from_call_date'], data['to_call_date']))
print('Dates: %s' % len(cdr_records.all()))
# The original filter line was garbled and the column being matched was lost;
# CDR.dst below is an assumption (a typical asterisk CDR column):
cdr_records = cdr_records.filter(or_(CDR.dst.like('%%%s%%' % prefixes[0]), CDR.dst.like('%%%s%%' % prefixes[1])))
cdr_records = cdr_records.filter(CDR.disposition.in_(request.POST.getlist('disposition')))