Android
+Animations (June 14, 2019, 2:36 p.m.)

https://android.googlesource.com/platform/frameworks/base/+/HEAD/core/res/res/anim

+Platform codenames, versions, API levels, and NDK releases (May 26, 2019, 11:01 p.m.)

Codename Version API level/NDK release
Pie 9 API level 28
Oreo 8.1.0 API level 27
Oreo 8.0.0 API level 26
Nougat 7.1 API level 25
Nougat 7.0 API level 24
Marshmallow 6.0 API level 23
Lollipop 5.1 API level 22
Lollipop 5.0 API level 21
KitKat 4.4 - 4.4.4 API level 19
Jelly Bean 4.3.x API level 18
Jelly Bean 4.2.x API level 17
Jelly Bean 4.1.x API level 16
Ice Cream Sandwich 4.0.3 - 4.0.4 API level 15, NDK 8
Ice Cream Sandwich 4.0.1 - 4.0.2 API level 14, NDK 7
Honeycomb 3.2.x API level 13
Honeycomb 3.1 API level 12, NDK 6
Honeycomb 3.0 API level 11
Gingerbread 2.3.3 - 2.3.7 API level 10
Gingerbread 2.3 - 2.3.2 API level 9, NDK 5
Froyo 2.2.x API level 8, NDK 4
Eclair 2.1 API level 7, NDK 3
Eclair 2.0.1 API level 6
Eclair 2.0 API level 5
Donut 1.6 API level 4, NDK 2
Cupcake 1.5 API level 3, NDK 1
(no codename) 1.1 API level 2
(no codename) 1.0 API level 1
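On a connected device the API level can be read with `adb shell getprop ro.build.version.sdk`; the table above then gives the codename. A minimal shell sketch of that lookup (the `api_codename` helper is my own and covers only the levels listed above):

```shell
# Hypothetical helper: map an API level (from the table above) to its codename.
api_codename() {
  case "$1" in
    28) echo "Pie" ;;
    26|27) echo "Oreo" ;;
    24|25) echo "Nougat" ;;
    23) echo "Marshmallow" ;;
    21|22) echo "Lollipop" ;;
    19) echo "KitKat" ;;
    16|17|18) echo "Jelly Bean" ;;
    14|15) echo "Ice Cream Sandwich" ;;
    11|12|13) echo "Honeycomb" ;;
    9|10) echo "Gingerbread" ;;
    8) echo "Froyo" ;;
    5|6|7) echo "Eclair" ;;
    4) echo "Donut" ;;
    3) echo "Cupcake" ;;
    1|2) echo "(no codename)" ;;
    *) echo "unknown" ;;   # e.g. API 20 (Android Wear) is not in the table
  esac
}

api_codename 28   # → Pie
api_codename 19   # → KitKat
```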

+Action Bar, Toolbar, App Bar (May 26, 2019, 9:17 p.m.)

Toolbar is a generalization of the Action Bar pattern that gives you much more control and flexibility. Toolbar is a view in your hierarchy just like any other, making it easier to interleave with the rest of your views, animate it, and react to scroll events.

You can also set it as your Activity’s action bar, meaning that your standard options menu actions will be displayed within it.
In other words, the ActionBar has become a special kind of Toolbar.

The app bar, formerly known as the action bar in Android, is a special kind of toolbar that is used for branding, navigation, search, and actions.

--------------------------------------------------------------------

Toolbar provides greater control to customize its appearance than the old ActionBar. The AppCompat support library brings full Toolbar support to devices running older Android OS versions.

Use a Toolbar as a replacement for the ActionBar. In this mode, you can still continue to use ActionBar features such as menus, selections, etc.

Use a standalone Toolbar, placed wherever you want in your application.

--------------------------------------------------------------------

Toolbars are more flexible than the ActionBar. We can easily modify their color, size, and position. We can also add labels, logos, navigation icons, and other views to them. With Material Design, Android updated the AppCompat support libraries so that we can use Toolbars on devices running API level 7 and up.

--------------------------------------------------------------------

+XML - Introduction (April 25, 2019, 1:44 p.m.)

XML describes the views in your activities, and Java tells them how to behave.

+Common naming conventions for icon assets (April 22, 2019, 4:02 a.m.)

Asset Type Prefix Example
Icons ic_ ic_star.png
Launcher icons ic_launcher ic_launcher_calendar.png
Menu icons and Action Bar icons ic_menu ic_menu_archive.png
Status bar icons ic_stat_notify ic_stat_notify_msg.png
Tab icons ic_tab ic_tab_recent.png
Dialog icons ic_dialog ic_dialog_info.png

+Android Studio - Transparent Background Launcher Icon (April 22, 2019, 2:51 a.m.)

1- File > New > Image Asset.

2- Select Launcher Icons (Adaptive and Legacy) in Icon Type.

3- Choose Image in Asset Type and select your picture inside Path field (Foreground Layer tab).

4- Create or download a PNG file with a transparent background, 512x512 px in size (this is the size of ic_launcher-web.png).
PNG link: https://i.stack.imgur.com/Pwbuz.png

5- In Background Layer tab select Image in Asset Type and load the transparent background from step 4.

6- In the Legacy tab, select Yes for all Generate options and None for Shape.

7- In Foreground Layer and Background Layer tabs you can change trim size.

Though you will see a black background behind the image in the Preview window, after pressing Next and Finish and compiling the application, you will see a transparent background on Android 5 and Android 8.

+NDK (April 19, 2019, 6:38 p.m.)

The Native Development Kit (NDK) is a set of tools that allow you to use C and C++ code in your Android app. It provides platform libraries to manage native activities and access hardware components such as sensors and touch input.

The NDK may not be appropriate for most novice Android programmers who need to use only Java code and framework APIs to develop their apps. However, the NDK can be useful for the following cases:

- Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.

- Reuse code between your iOS and Android apps.

- Use libraries like FFMPEG, OpenCV, etc.

+SDK / NDK (April 19, 2019, 6:34 p.m.)

Software Development Kit (SDK)
Native Development Kit (NDK)


Traditionally, almost all software development kits (SDKs) were in C, with very few in C++. Then Google came along, released a Java-based library for Android, and called it an SDK.

However, then came the demand for a C/C++-based library for development, primarily from C/C++ developers aiming at game development and some high-performance apps.

So, Google released a C/C++-based library called the Native Development Kit (NDK).

+ADB (Oct. 2, 2015, 5:04 p.m.)

apt install android-tools-adb android-tools-fastboot
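For context, a few everyday adb subcommands once the tools are installed (the APK path and file names are placeholders; a device or emulator must be connected):

```shell
adb devices                     # list connected devices/emulators
adb install path/to/app.apk     # install an APK (placeholder path)
adb logcat                      # stream device logs
adb shell                       # open a shell on the device
adb push local.txt /sdcard/     # copy a file to the device
```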

+Android Development Environment (July 6, 2016, 11:58 a.m.)

Visit the following links to get information about the dependencies you might need for the SDK version you intend to download:

http://socialcompare.com/en/comparison/android-versions-comparison
http://developer.android.com/guide/topics/manifest/uses-sdk-element.html#ApiLevels
https://cordova.apache.org/docs/en/latest/guide/platforms/android/

----------------------------------------------------------------------

You might find the tools and all the dependencies in the following links:

http://osgard.blogspot.com/2011/11/download-of-android-sdk-components.html
https://dl.zjuqsc.com/android/android-sdk-linux/
http://archive.virtapi.org/packages/a/android-sdk-build-tools/

----------------------------------------------------------------------

1- Create a folder preferably name it "android-sdk-linux" in any location.

2- Downloading SDK Tools:
From the following link, scroll to the bottom of the page to the table titled "Command line tools only", and download the "Linux" package.
https://developer.android.com/studio/index.html
Extract the downloaded file "sdk-tools-linux.zip" to the folder you created in step 1.

3- Download an API level (for example, android-15_r03.zip or android-15.zip which is for Android 4.0.4).
Create a folder named "platforms" in "android-sdk-linux" and extract the downloaded file to it.

4- Download the latest version of `build-tools` (build-tools_r25-linux.zip).
Create a folder named `build-tools` in `android-sdk-linux` and extract the archive into it.
Rename the extracted folder to `25`.

5- Download the latest version of `platform-tools` (platform-tools_r23.0.1-linux.zip).
Extract it into the folder `android-sdk-linux`. The archive already contains a folder named `platform-tools`, so there is no need to create any further folders.

6- Open the file `~/.bashrc` and add the following line to it:
export ANDROID_HOME=/home/mohsen/Programs/Android/Development/android-sdk-linux

7- apt install openjdk-9-jdk
If you get errors like this:
dpkg: warning: trying to overwrite '/usr/lib/jvm/java-9-openjdk-amd64/include/linux/jawt_md.h', which is also in package openjdk-9-jdk-headless

To solve the error:
apt-get -o Dpkg::Options::="--force-overwrite" install openjdk-9-jdk

----------------------------------------------------------------------

+AVD with HAXM or KVM (Emulators) (April 10, 2016, 9:25 a.m.)

Official Website:
https://software.intel.com/en-us/android/articles/intel-hardware-accelerated-execution-manager

--------------------------------------------------------

For a faster emulator, use the HAXM device driver.
Linux Link:
https://software.intel.com/en-us/blogs/2012/03/12/how-to-start-intel-hardware-assisted-virtualization-hypervisor-on-linux-to-speed-up-intel-android-x86-emulator

As described in the above link, Linux users need to use KVM.
Taken from the above website:
(Since Google mainly supports Android build on Linux platform (with Ubuntu 64-bit OS as top Linux platform, and OS X as 2nd), and a lot of Android Developers are using AVD on Eclipse or Android Studio hosted by a Linux system, it is very critical that Android developers take advantage of Intel hardware-assisted KVM virtualization for Linux just like HAXM for Windows and OS X.)

--------------------------------------------------------

KVM Installation:
https://help.ubuntu.com/community/KVM/Installation

1- egrep -c '(vmx|svm)' /proc/cpuinfo
If the output is 0 it means that your CPU doesn't support hardware virtualization.

2- apt install cpu-checker
Now you can check whether your CPU supports KVM:
# kvm-ok

3- To see if your processor is 64-bit, you can run this command:
egrep -c ' lm ' /proc/cpuinfo
If 0 is printed, it means that your CPU is not 64-bit.
If 1 or higher, it is.
Note: lm stands for Long Mode which equates to a 64-bit CPU.
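The same grep logic can be sanity-checked against a fabricated one-line sample of /proc/cpuinfo (the flag list here is made up):

```shell
# Fabricated sample of a /proc/cpuinfo "flags" line (not from a real CPU):
printf 'flags\t\t: fpu vme de pse msr lm constant_tsc\n' > /tmp/cpuinfo_sample

# Same check as above: count lines containing " lm " -- the surrounding
# spaces keep flags such as "calm" or "lme" from matching by accident.
egrep -c ' lm ' /tmp/cpuinfo_sample   # → 1
```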

4- Now see if your running kernel is 64-bit:
uname -m

5- apt install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils ia32-libs-multiarch
If a screen with `Postfix Configuration` is displayed, dismiss it by selecting `No Configuration`.

6- Next, add your user account to the groups kvm and libvirtd:
sudo adduser mohsen kvm
sudo adduser mohsen libvirtd

7- Verify installation:
You can test whether your install has been successful with the following command:
sudo virsh -c qemu:///system list
If successful, you will see output like the following:
Id Name State

----------------------------------------------------

8- Install Java:
Java has to be installed in order to run Android emulator x86 system images.
sudo apt-get install openjdk-8-jre

9- Download a System Image from the following link:
http://mirrors.neusoft.edu.cn/android/repository/sys-img/android/
Create a folder named `system-images` in `android-sdk-linux` and extract the downloaded system image in it. (You might need to create another folder inside, named `default`.)
Run the Android SDK Manager; you will probably see the system image under `Extras`, marked as broken.
If so, to solve the problem, download its API from this link and extract it into the `platforms` folder:
http://downloads.puresoftware.org/files/android/API/

10- Start the AVD from the Android SDK directly from the terminal, and create a Virtual Device:
~/Programs/Android/Development/android-sdk-linux/tools/android avd

--------------------------------------------------------

Angular
+Sort array of objects (Oct. 6, 2019, 8:26 a.m.)

this.menus.sort((obj1, obj2) => {
  return obj1.ordering - obj2.ordering;
});

+Forms (Oct. 2, 2019, 10:52 p.m.)

Angular provides two different approaches for managing the forms:
1- Reactive approach (or Model-driven forms)
2- Template-driven approach

------------------------------------------------------------------------

Both reactive and template-driven forms share common underlying building blocks, which are the following:

1- FormControl: It tracks the value and validation status of the individual form control.
2- FormGroup: It tracks the same values and status for the collection of form controls.
3- FormArray: It tracks the same values and status for the array of the form controls.
4- ControlValueAccessor: It creates the bridge between Angular FormControl instances and native DOM elements.

------------------------------------------------------------------------

Reactive forms:
Reactive forms (or model-driven forms) are more robust, scalable, reusable, and testable. If forms are a key part of your application, or you’re already using reactive patterns for building your web application, use reactive forms.

In Reactive Forms, most of the work is done in the component class.

------------------------------------------------------------------------

Template-driven forms:
Template-driven forms are useful for adding a simple form to an app, such as an email list signup form. They’re easy to add to a web app, but they don’t scale as well as reactive forms.

If you have basic form requirements and logic that can be managed solely in the template, use template-driven forms.

In template-driven forms, most of the work is done in the template.

------------------------------------------------------------------------

FormControl:
It tracks the value and validity status of an individual Angular form control. It corresponds to an HTML form control such as an input.

this.username = new FormControl('agustin', Validators.required);

------------------------------------------------------------------------

FormGroup:
It tracks the value and validity state of a FormBuilder instance group. It aggregates the values of each child FormControl into one object, using the name of each form control as the key.
It calculates its status by reducing the statuses of its children. If one of the controls inside a group is invalid, the entire group becomes invalid.

this.user_data = new FormGroup({
  username: new FormControl('agustin', Validators.required),
  city: new FormControl('Montevideo', Validators.required)
});

------------------------------------------------------------------------

FormArray:
It is a variation of FormGroup. The main difference is that its data gets serialized as an array, as opposed to being serialized as an object in case of FormGroup. This might be especially useful when you don’t know how many controls will be present within the group, like in dynamic forms.

this.user_data = new FormArray([
  new FormControl('agustin', Validators.required),
  new FormControl('Montevideo', Validators.required)
]);

------------------------------------------------------------------------

FormBuilder:
It is a helper class that creates FormGroup, FormControl and FormArray instances for us. It basically reduces the repetition and clutter by handling details of form control creation for you.

this.validations_form = this.formBuilder.group({
  username: new FormControl('', Validators.required),
  email: new FormControl('', Validators.compose([
    Validators.required,
    Validators.pattern('^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+.[a-zA-Z0-9-.]+$')
  ]))
});
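The same email pattern can be exercised outside Angular, e.g. with grep -E (pattern copied as-is; note the dot before the TLD part is unescaped, so it matches any character there):

```shell
pattern='^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+.[a-zA-Z0-9-.]+$'

echo 'agustin@example.com' | grep -Eq "$pattern" && echo valid    # → valid
echo 'not-an-email' | grep -Eq "$pattern" || echo invalid         # → invalid
```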

------------------------------------------------------------------------

+Material Design (Aug. 31, 2019, 9:54 a.m.)

ng add @angular/material

-------------------------------------------------------------------------

https://material.angular.io/guide/getting-started

Colors:
https://material.io/archive/guidelines/style/color.html#color-color-system


Using a pre-built theme:
https://material.angular.io/guide/theming


Material Design Icons:
https://google.github.io/material-design-icons/

+Libraries / Packages (Aug. 31, 2019, 3:42 a.m.)

Bootstrap:
npm install bootstrap jquery popper.js


Material Design:
npm install --save @angular/material @angular/cdk @angular/animations @angular/flex-layout material-design-icons hammerjs


Misc:
npm install rxjs-compat --save
npm install ng2-slim-loading-bar @angular/core --save

+Angular Releases (Aug. 31, 2019, 2:15 a.m.)

https://angular.io/guide/releases#support-policy-and-schedule

+CLI commands (June 28, 2019, 7:39 p.m.)

Display list of available commands:
ng


ng new project_name


ng --version


npm install bootstrap jquery popper.js --save


ng serve -o
ng serve --watch


ng g c product-add --skipTests=true


ng build --prod

+Install / Update Angular CLI (June 28, 2019, 7:33 p.m.)

Angular CLI helps us to create projects, generate application and library code, and perform a variety of ongoing development tasks such as testing, bundling, and deployment.

First, install Node.js using my Node.js notes, then:
sudo npm install -g @angular/cli

Ansible
+Common Options (May 16, 2018, 3:06 p.m.)

--ask-su-pass

Ask for su password (deprecated, use become)

------------------------------------------------------------

--ask-sudo-pass

Ask for sudo password (deprecated, use become)

------------------------------------------------------------

--become-user

Run operations as this user (default=root)

------------------------------------------------------------

--list-hosts

Outputs a list of matching hosts; does not execute anything else

------------------------------------------------------------

--list-tasks

List all tasks that would be executed

------------------------------------------------------------

--private-key, --key-file

Use this file to authenticate the connection

------------------------------------------------------------

--start-at-task <START_AT_TASK>

Start the playbook at the task matching this name

------------------------------------------------------------

--step

One-step-at-a-time: confirm each task before running

------------------------------------------------------------

--syntax-check

Perform a syntax check on the playbook, but do not execute it

------------------------------------------------------------

-C, --check

Don’t make any changes; instead, try to predict some of the changes that may occur

------------------------------------------------------------

-D, --diff

When changing (small) files and templates, show the differences in those files; works great with --check

------------------------------------------------------------

-K, --ask-become-pass

Ask for privilege escalation password

------------------------------------------------------------

-S, --su

Run operations with su (deprecated, use become)

------------------------------------------------------------

-b, --become

Run operations with become (does not imply password prompting)

------------------------------------------------------------

-e, --extra-vars

Set additional variables as key=value or YAML/JSON, if filename prepend with @

------------------------------------------------------------

-f <FORKS>, --forks <FORKS>

Specify number of parallel processes to use (default=5)

------------------------------------------------------------

-i, --inventory, --inventory-file

Specify inventory host path (default=/etc/ansible/hosts) or comma-separated host list. --inventory-file is deprecated

------------------------------------------------------------

-k, --ask-pass

Ask for connection password

------------------------------------------------------------

-u <REMOTE_USER>, --user <REMOTE_USER>

Connect as this user (default=None)

------------------------------------------------------------

-v, --verbose

Verbose mode (-vvv for more, -vvvv to enable connection debugging)

------------------------------------------------------------

+Display output to console (May 16, 2018, 4:40 p.m.)

Every Ansible task, when run, can save its results into a variable. To do this, you have to specify which variable to save the results in, using the "register" keyword.

Once you save the value to a variable, you can use it later in any of the subsequent tasks. For example, if you want to get the standard output of a specific task, you can write the following:

ansible-playbook ansible/postgres.yml -e delete_old_backups=true

---
- hosts: localhost
  tasks:
    - name: Delete old database backups
      command: echo '{{ delete_old_backups }}'
      register: out

    - debug:
        var: out.stdout_lines

-----------------------------------------------------------------

You can also use -v when running ansible-playbook.

-----------------------------------------------------------------

+Pass conditional boolean value (May 16, 2018, 4:53 p.m.)

- name: Delete old database backups
  command: echo {{ delete_old_backups }}
  when: delete_old_backups|bool

+Basic Commands (Jan. 7, 2017, 11:54 a.m.)

ansible test_servers -m ping

-----------------------------------------------------

ansible-playbook playbook.yml

ansible-playbook playbook.yml --check

-----------------------------------------------------

ansible-playbook site.yaml -i hostinv -e firstvar=false -e second_var=value2

ansible-playbook release.yml -e "version=1.23.45 other_variable=foo"

-----------------------------------------------------

+Inventory File (Jan. 7, 2017, 11:04 a.m.)

[postgres_servers]
mohsenhassani.com ansible_user=root
pythonist.ir ansible_user=mohsen
exam.myedu.ir:2020

--------------------------------------------------------

[webservers]
www[01:50].example.com

[databases]
db-[a:f].example.com

[targets]
localhost ansible_connection=local
other1.example.com ansible_connection=ssh ansible_user=mpdehaan
other2.example.com ansible_connection=ssh ansible_user=mdehaan

--------------------------------------------------------

Host Variables:

[atlanta]
host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909

--------------------------------------------------------

Group Variables:

[atlanta]
host1
host2

[atlanta:vars]
ntp_server=ntp.atlanta.example.com
proxy=proxy.atlanta.example.com

--------------------------------------------------------

Groups of Groups, and Group Variables:

It is also possible to make groups of groups using the :children suffix. Just like above, you can apply variables using :vars:

[atlanta]
host1
host2

[raleigh]
host2
host3

[southeast:children]
atlanta
raleigh

[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2

[usa:children]
southeast
northeast
southwest
northwest

--------------------------------------------------------

+Installation (Dec. 13, 2016, 4:33 p.m.)

sudo apt-get install libffi-dev libssl-dev python-pip python-setuptools
pip install ansible
pip install markupsafe

Apache
+Auth Types (Oct. 14, 2019, midnight)

# Backward compatibility with apache 2.2
Order allow,deny
Allow from all

# Forward compatibility with apache 2.4
Require all granted
Satisfy Any

-----------------------------------------------------------

<IfVersion < 2.4>
    Allow from all
</IfVersion>
<IfVersion >= 2.4>
    Require all granted
</IfVersion>

-----------------------------------------------------------

+Installation (Sept. 6, 2017, 11:11 a.m.)

For Debian earlier than Stretch:
apt-get install apache2 apache2.2-common apache2-mpm-prefork apache2-utils libexpat1 libapache2-mod-wsgi-py3 python-pip python-dev build-essential

For Debian Stretch:
apt-get install apache2 apache2-utils libexpat1 libapache2-mod-wsgi-py3 python-pip python-dev build-essential

+Password Protect via .htaccess (Feb. 26, 2017, 6:14 p.m.)

1- Create a file named `.htaccess` in the root of website, with this content:

AuthName "Deskbit's Support"
AuthUserFile /etc/apache2/.htpasswd
AuthType Basic
require valid-user
-----------------------------------------------------
2- htpasswd -c /etc/apache2/.htpasswd mohsen
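Note that -c creates (and truncates) the password file, so it is only for the first user. A sketch of the common invocations (the second username is hypothetical):

```shell
htpasswd -c /etc/apache2/.htpasswd mohsen   # first user: creates the file
htpasswd /etc/apache2/.htpasswd ali         # add another user: no -c, so the file is kept
htpasswd -nb demo secret                    # -n prints the hash to stdout instead of writing a file,
                                            # -b takes the password on the command line
```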
-----------------------------------------------------
3- Add this to <Directory> block:

<Directory /var/www/support/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
-----------------------------------------------------
4- Restart apache
/etc/init.d/apache2 restart
-----------------------------------------------------

+Configs for two different ports on same IP (Sept. 26, 2016, 10:07 p.m.)

NameVirtualHost *:80
<VirtualHost *:80>
    ServerAdmin mohsen@mohsenhassani.com
    #ServerName ecc.mohsenhassani.com
    ServerName 93.118.96.41
    ServerAlias ecc.mohsenhassani.com
    LogLevel warn
    ErrorLog /home/mohsen/logs/eccgroup_error.log
    WSGIScriptAlias / /home/mohsen/websites/ecc/ecc/wsgi.py
    WSGIDaemonProcess ecc python-path=/home/mohsen/websites/ecc:/home/mohsen/virtualenvs/django-1.10/lib/python3.4/site-packages
    WSGIProcessGroup ecc

    Alias /static /home/mohsen/websites/ecc/ecc/static
    <Directory /home/mohsen/websites/ecc/ecc/static>
        Require all granted
    </Directory>

    <Directory />
        Require all granted
    </Directory>
</VirtualHost>

------------------------------------------------------------------
Listen 8081
NameVirtualHost *:8081
<VirtualHost *:8081>
    ServerName 93.118.96.41
    ServerAdmin mohsen@mohsenhassani.com

    ErrorLog /var/log/apache2/freepbx.error.log
    CustomLog /var/log/apache2/freepbx.access.log combined
    DocumentRoot /var/www/html

    <Directory /var/www/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

+Error Check (March 4, 2015, 12:06 p.m.)

sudo systemctl status apache2.service -l

# tail -f /var/log/apache2/error.log

+VirtualHost For Django Sites (March 4, 2015, 10:34 a.m.)

For Centos:
1- yum install mod_wsgi httpd

-----------------------------------------------------------------

For Debian:

2-Create a virtual host:
sudo nano /etc/apache2/sites-available/mydomain.com.conf
OR
sudo nano /etc/httpd/conf.d/mydomain.com.conf

-----------------------------------------------------------------

3- Create your new virtual host, which should look something like this:

<VirtualHost *:80>
    ServerName 192.168.92.241
    DocumentRoot /srv/mpei
    WSGIScriptAlias / /srv/mpei/mpei/wsgi.py

    LogLevel info
    ErrorLog /var/log/mpei_error.log

    WSGIDaemonProcess mpei processes=2 threads=15 python-path=/var/www/.virtualenvs/django-1.7/lib/python3.4/site-packages
    # WSGISocketPrefix /var/run/wsgi
    WSGIProcessGroup mpei

    Alias /media/ /srv/mpei/mpei/media/
    Alias /static/ /srv/mpei/mpei/static/

    <Directory /srv/mpei/mpei/static>
        Allow from all
    </Directory>

    <Directory /srv/mpei/mpei/media>
        Allow from all
    </Directory>

    <Directory /srv/mpei/mpei>
        <Files wsgi.py>
            Order deny,allow
            Allow from all
        </Files>
    </Directory>
</VirtualHost>

-----------------------------------------------------------------

4-Edit the wsgi.py file within the main app of your project:
import os
import sys

# Add the site-packages of the chosen virtualenv to work with
sys.path.append('/home/mohsen/virtualenvs/django-1.10/lib/python3.4/site-packages')

# Add the app's directory to the PYTHONPATH
sys.path.append('/home/mohsen/websites/ecc/')
sys.path.append('/home/mohsen/websites/ecc/ecc/')

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ecc.settings")

# Activate your virtualenv
activate_env = os.path.expanduser("/home/mohsen/virtualenvs/django-1.10/bin/activate_this.py")
exec(open(activate_env).read())

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

-----------------------------------------------------------------

5-Enable the virtual host:
a2ensite site.mysite.com.conf

-----------------------------------------------------------------

6- If you want to disable a site, you would run a2dissite site.mysite.com.conf
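After a2ensite or a2dissite, Apache has to re-read its configuration before the change takes effect; a2ensite itself prints a reminder along these lines:

```shell
a2ensite site.mysite.com.conf
systemctl reload apache2    # or: /etc/init.d/apache2 reload
```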


==========================================

Compiling mod_wsgi

If you're using another version of Python, you'll need to compile mod_wsgi from source to match your virtualenv.

1- Download the latest version from the following website:
https://pypi.org/project/mod-wsgi/#files

2- Untar it, CD to the folder, and:
sudo ./configure --with-python=/usr/local/bin/python3.6
sudo LD_RUN_PATH=/usr/local/lib make
sudo make install

This replaces the version you probably installed earlier via the Linux package manager, and solves any probable import errors.

==========================================

Serving the admin files:

cd /srv/mpei/mpei/static/
ln -s /var/www/.virtualenvs/django-1.7/lib/python3.4/site-packages/django/contrib/admin/static/admin .

==========================================

Asterisk
+Apache config files (Jan. 5, 2015, 4:51 p.m.)

Contents of file: /etc/apache2/sites-enabled/000-default.conf

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    ScriptAlias /cgi-bin/ /var/cgi-bin/
    <Directory "/var/cgi-bin">
        AllowOverride All
        Options None
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
-----------------------------------------------------------------------------
Create a file named .htaccess in /var/cgi-bin with this content:
AuthType Basic
AuthName "Restricted Access"
AuthUserFile /var/cgi-bin/.htpasswd
Require user mohsen
-----------------------------------------------------------------------------
htpasswd -c /var/cgi-bin/.htpasswd mohsen
And enter a desired password to create the password file.
-----------------------------------------------------------------------------

+Creating /etc/init.d/asterisk (Jan. 5, 2015, 2:08 p.m.)

1-cp asterisk-13.1.0/contrib/init.d/rc.debian.asterisk /etc/init.d/asterisk

2-Change the lines to these values:
DAEMON=/usr/sbin/asterisk
ASTVARRUNDIR=/var/run/asterisk
ASTETCDIR=/etc/asterisk


If you run it right now, you will get the error:
Restarting asterisk (via systemctl): asterisk.serviceFailed to restart asterisk.service: Unit asterisk.service failed to load: No such file or directory.
failed!

I restarted the server (reboot), and after booting up it started successfully (/etc/init.d/asterisk start).

+Perl Packages/Libraries for Debian (Jan. 2, 2015, 12:29 p.m.)

Before starting the installation, be careful: you will need to install some packages from Synaptic, and they might require installing another version of `asterisk` and `asterisk-core`, plus lots of other libraries, all of which might break the one you just installed! So make sure the packages you need are installed from source, and do not say YES to apt-get without checking the libraries!
--------------------------------------------------------------------------------
1-apt-get install libghc-ami-dev

2- Install this file: `dpkg --install libasterisk-ami-perl_0.2.8-1_all.deb`
If you don't have it, refer to the following link for creating this .deb file:
http://www.debian-administration.org/article/78/Building_Debian_packages_of_Perl_modules

3- Copy the codec binary `codec_g729-ast130-gcc4-glibc2.2-x86_64-core2.so` to the path `/usr/lib/asterisk/modules`.
Rename it to `codec_g729.so` and, based on the other modules in this directory, set the chmod and chown of the file.
You can find it at this link: http://asterisk.hosting.lv/

+Running Asterisk as a Service (Dec. 15, 2014, 2:44 p.m.)

The most common way to run Asterisk in a production environment is as a service. Asterisk includes both a make target for installing Asterisk as a service and a script, safe_asterisk, that will manage the service and automatically restart Asterisk in case of errors.

Asterisk can be installed as a service using the make config target:
# make config
/etc/rc0.d/K91asterisk -> ../init.d/asterisk
/etc/rc1.d/K91asterisk -> ../init.d/asterisk
/etc/rc6.d/K91asterisk -> ../init.d/asterisk
/etc/rc2.d/S50asterisk -> ../init.d/asterisk
/etc/rc3.d/S50asterisk -> ../init.d/asterisk
/etc/rc4.d/S50asterisk -> ../init.d/asterisk
/etc/rc5.d/S50asterisk -> ../init.d/asterisk
Asterisk can now be started as a service:
# service asterisk start
* Starting Asterisk PBX: asterisk [ OK ]
And stopped:
# service asterisk stop
* Stopping Asterisk PBX: asterisk [ OK ]
And restarted:
# service asterisk restart
* Stopping Asterisk PBX: asterisk [ OK ]
* Starting Asterisk PBX: asterisk [ OK ]

+Executing as another User (Dec. 15, 2014, 2:42 p.m.)

Do not run as root
Running Asterisk as root or as a user with super user permissions is dangerous and not recommended. There are many ways Asterisk can affect the system on which it operates, and running as root can increase the cost of small configuration mistakes.

Asterisk can be run as another user using the -U option:
# asterisk -U asteriskuser

Often, this option is specified in conjunction with the -G option, which specifies the group to run under:
# asterisk -U asteriskuser -G asteriskuser

When running Asterisk as another user, make sure that user owns the various directories that Asterisk will access:
# sudo chown -R asteriskuser:asteriskuser /usr/lib/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/lib/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/spool/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/log/asterisk
# sudo chown -R asteriskuser:asteriskuser /var/run/asterisk
# sudo chown asteriskuser:asteriskuser /usr/sbin/asterisk

+Commands (Dec. 15, 2014, 12:59 p.m.)

You can get a CLI (Command Line Interface) console to an already-running daemon by typing
asterisk -r
Another description for option '-r':
In order to connect to a running Asterisk process, you can attach a remote console using the -r option
------------------------------
To disconnect from a connected remote console, simply hit Ctrl+C.
------------------------------
To shut down Asterisk, issue:
core stop gracefully
------------------------------
There are three common commands related to stopping the Asterisk service. They are:
core stop now - This command stops the Asterisk service immediately, ending any calls in progress.
core stop gracefully - This command prevents new calls from starting up in Asterisk, but allows calls in progress to continue. When all the calls have finished, Asterisk stops.
core stop when convenient - This command waits until Asterisk has no calls in progress, and then it stops the service. It does not prevent new calls from entering the system.

There are three related commands for restarting Asterisk as well.
core restart now - This command restarts the Asterisk service immediately, ending any calls in progress.
core restart gracefully - This command prevents new calls from starting up in Asterisk, but allows calls in progress to continue. When all the calls have finished, Asterisk restarts.
core restart when convenient - This command waits until Asterisk has no calls in progress, and then it restarts the service. It does not prevent new calls from entering the system.

There is also a command if you change your mind.
core abort shutdown - This command aborts a shutdown or restart which was previously initiated with the gracefully or when convenient options.
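------------------------------
The CLI commands above can also be issued non-interactively with `asterisk -rx` (see the -x option below). A minimal Python sketch, assuming the `asterisk` binary is on PATH when actually used; the call is guarded so the sketch is safe to run on a machine without Asterisk:

```python
# Issue an Asterisk CLI command via `asterisk -rx <cmd>`.
import shutil
import subprocess

def asterisk_cli(command):
    """Build (and, if Asterisk is installed, run) an `asterisk -rx` invocation."""
    argv = ["asterisk", "-rx", command]
    if shutil.which("asterisk") is None:
        # Binary not installed; return the command we would have run.
        return argv
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout

print(asterisk_cli("core show version"))
```

The function name `asterisk_cli` is just illustrative; the real work is done by `asterisk -rx`, which connects to the running daemon, executes the command, and prints its output.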
------------------------------
sip show peers - returns a list of chan_sip loaded peers
voicemail show users - returns a list of app_voicemail loaded users
core set debug 5 - sets the core debug to level 5 verbosity.
------------------------------
core show version
------------------------------
asterisk -h : Help. Run '/sbin/asterisk -h' to get a list of the available command line parameters.
asterisk -C <configfile>: Starts Asterisk with a different configuration file than the default /etc/asterisk/asterisk.conf.
-f : Foreground. Starts Asterisk but does not fork as a background daemon.
-c : Enables console mode. Starts Asterisk in the foreground (implies -f), with a console command line interface (CLI) that can be used to issue commands and view the state of the system.
-r : Remote console. Starts a CLI console which connects to an instance of Asterisk already running on this machine as a background daemon.
-R : Remote console. Starts a CLI console which connects to an instance of Asterisk already running on this machine as a background daemon and attempts to reconnect if disconnected.
-t : Record soundfiles in /var/tmp and move them where they belong after they are done.
-T : Display the time in "Mmm dd hh:mm:ss" format for each line of output to the CLI.
-n : Disable console colorization (for use with -c or -r)
-i: Prompt for cryptographic initialization passcodes at startup.
-p : Run with real-time priority (as a pseudo-realtime thread).
-q : Quiet mode (suppress output)
-v : Increase verbosity (multiple v's = more verbose)
-V : Display version number and exit.
-d : Enable extra debugging across all modules.
-g : Makes Asterisk dump core in the case of a segmentation violation.
-G <group> : Run as a group other than the caller.
-U <user> : Run as a user other than the caller
-x <cmd> : Execute command <cmd> (only valid with -r)
------------------------------

+Installation (Dec. 14, 2014, 9:36 p.m.)

Before starting the installation, be aware that some of the packages you need are available in Synaptic, but installing them from there may pull in other versions of `asterisk` and `asterisk-core`, plus many other libraries, and all of that can break the version you just built! So install the packages you need from source where necessary, and do not say YES to apt-get without checking which libraries it wants to install.
--------------------------------------------------------------------------------
Install these libraries first:
1- apt-get install libapache2-mod-auth-pgsql libanyevent-perl odbc-postgresql unixodbc unixodbc-dev libltdl-dev

2-Download the file asterisk-13-current.tar.gz from this link: http://downloads.asterisk.org/pub/telephony/asterisk/
a) Untar it.
You will need this untarred asterisk file in the following steps.

----------- Building and Installing pjproject -----------
1-Using the link http://www.pjsip.org/release/2.3/ download pjproject-2.3.tar.bz2

a) Untar and CD to the pjproject

b) ./configure --prefix=/usr --enable-shared --disable-sound --disable-resample --disable-video --disable-opencore-amr CFLAGS='-O2 -DNDEBUG'

c) make dep

d) make

e) make install

f) ldconfig

Now, to check that pjproject installed successfully and that Asterisk detects its libraries, untar the Asterisk source and cd into its directory (it is not installed yet; you only need to be in the folder), then enter the following command:

g) apt-get install libjansson-dev uuid-dev snmpd libperl-dev libncurses5-dev libxml2-dev libsqlite3-dev

*** important ***
Before continuing to the next step, note that based on the needs of the Shetab company you need to enable the `res_snmp` module. To enable it, install `net-snmp_5.4.3`; since it is not in Synaptic, you have to install it from source:
1-Download it from: https://launchpad.net/debian/+source/net-snmp/5.4.3~dfsg-2.8+deb7u1
2-Install it using ./configure, make and make install
*** End of important ***

h) ./configure --without-pwlib (without this switch you will get the following error, even if the ptlib package is already installed!)
Cannot find ptlib-config - please install and try again

i) make menuselect

j) Browse to the eleventh category, `Resource Modules`, and make sure the `res_snmp` module at the bottom of the list is checked. Press Escape to exit the menu and continue with installing Asterisk.

----------- Building and Installing Asterisk -----------
2- Make sure you are still in the asterisk directory.

c) make
I got many errors framed by rows of asterisks, saying modules such as res_curl, res_odbc, res_crypto, res_config_curl (and many more) were needed. I just installed postgresql and `make` continued with no errors!

d) make install

e) make samples

f) make progdocs

Now continue installation process with Perl packages from my tutorials.
After that, refer to `Creating /etc/init.d/asterisk` in my tutorials.

Beautiful Soup
+My Experience with text and string (Dec. 13, 2014, 3:12 a.m.)

I've been working a lot with these two methods/attributes and I noticed that:
If you use:
some_tag.string.replace_with('some_new_title')
You might get errors if some_tag contains an inner tag, like:
<h5>
<em>Examples</em>
</h5>
The exception would be: AttributeError: can't set attribute

So, you have to use .text to solve this problem, keeping in mind that it ignores the inner tag:
content = some_tag.text
some_tag.clear()
some_tag.string = 'some_new_title'
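A minimal, runnable version of the note above, assuming the `bs4` package is installed (the tag names and the title string are just the ones from the example; the exact AttributeError message varies by bs4 version):

```python
from bs4 import BeautifulSoup

# Whitespace text nodes plus the inner <em> mean <h5> has several children,
# so h5.string is None and h5.string.replace_with(...) would raise AttributeError.
soup = BeautifulSoup("<h5>\n<em>Examples</em>\n</h5>", "html.parser")
h5 = soup.h5
assert h5.string is None

h5.clear()                    # drop the inner <em> tag and whitespace
h5.string = "some_new_title"  # now it is safe to set .string
print(h5)                     # <h5>some_new_title</h5>
```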

+Usage (Dec. 10, 2014, 2:10 p.m.)

To parse a document, pass it into the BeautifulSoup constructor. You can pass in a string or an open filehandle:
from bs4 import BeautifulSoup
soup = BeautifulSoup(open("index.html"))
soup = BeautifulSoup("<html>data</html>")

First, the document is converted to Unicode, and HTML entities are converted to Unicode characters:
BeautifulSoup("Sacr&eacute; bleu!")
<html><head></head><body>Sacré bleu!</body></html>

Beautiful Soup then parses the document using the best available parser. It will use an HTML parser unless you specifically tell it to use an XML parser.
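You can also name the parser explicitly instead of letting Beautiful Soup pick one. A small sketch, assuming `bs4` is installed (here using the stdlib `html.parser`, which needs no extra packages):

```python
from bs4 import BeautifulSoup

# Explicit parser choice; "lxml" or "html5lib" could be used instead
# if those packages are installed.
soup = BeautifulSoup("<html>data</html>", "html.parser")
print(soup.get_text())  # data
```

Note that unlike lxml or html5lib, html.parser does not add missing <head>/<body> tags, so the same markup can produce slightly different trees under different parsers.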
--------------------------------------------
print(soup.prettify())
--------------------------------------------
soup.title
# <title>The Dormouse's story</title>
--------------------------------------------
soup.title.name
# u'title'
--------------------------------------------
soup.title.string
# u'The Dormouse's story'
--------------------------------------------
soup.title.parent.name
# u'head'
--------------------------------------------
soup.p
# <p class="title"><b>The Dormouse's story</b></p>
--------------------------------------------
soup.p['class']
# u'title'
--------------------------------------------
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
--------------------------------------------
soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
--------------------------------------------
One common task is extracting all the URLs found within a page’s <a> tags:

for link in soup.find_all('a'):
    print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie
--------------------------------------------
Another common task is extracting all the text from a page:

print(soup.get_text())
# The Dormouse's story
#
# The Dormouse's story
#
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and
# Tillie;
# and they lived at the bottom of a well.
#
# ...
--------------------------------------------
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>')
tag = soup.b
type(tag)
# <class 'bs4.element.Tag'>

tag.name
# u'b'

tag.name = "blockquote"
tag
# <blockquote class="boldest">Extremely bold</blockquote>
--------------------------------------------
Attributes
A tag may have any number of attributes. The tag <b class="boldest"> has an attribute “class” whose value is “boldest”. You can access a tag’s attributes by treating the tag like a dictionary:
tag['class']
# u'boldest'

You can access that dictionary directly as .attrs:
tag.attrs
# {u'class': u'boldest'}

You can add, remove, and modify a tag’s attributes. Again, this is done by treating the tag as a dictionary:
tag['class'] = 'verybold'
tag['id'] = 1
tag
# <blockquote class="verybold" id="1">Extremely bold</blockquote>

del tag['class']
del tag['id']
tag
# <blockquote>Extremely bold</blockquote>

tag['class']
# KeyError: 'class'
print(tag.get('class'))
# None
--------------------------------------------
Multi-valued attributes
HTML 4 defines a few attributes that can have multiple values. HTML 5 removes a couple of them, but defines a few more. The most common multi-valued attribute is class (that is, a tag can have more than one CSS class). Others include rel, rev, accept-charset, headers, and accesskey. Beautiful Soup presents the value(s) of a multi-valued attribute as a list:

css_soup = BeautifulSoup('<p class="body strikeout"></p>')
css_soup.p['class']
# ["body", "strikeout"]

css_soup = BeautifulSoup('<p class="body"></p>')
css_soup.p['class']
# ["body"]

If an attribute looks like it has more than one value, but it’s not a multi-valued attribute as defined by any version of the HTML standard, Beautiful Soup will leave the attribute alone:

id_soup = BeautifulSoup('<p id="my id"></p>')
id_soup.p['id']
# 'my id'

When you turn a tag back into a string, multiple attribute values are consolidated:
rel_soup = BeautifulSoup('<p>Back to the <a rel="index">homepage</a></p>')
rel_soup.a['rel']
# ['index']
rel_soup.a['rel'] = ['index', 'contents']
print(rel_soup.p)
# <p>Back to the <a rel="index contents">homepage</a></p>

If you parse a document as XML, there are no multi-valued attributes:

xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml')
xml_soup.p['class']
# u'body strikeout'
--------------------------------------------
You can’t edit a string in place, but you can replace one string with another, using replace_with():

tag.string.replace_with("No longer bold")
tag
# <blockquote>No longer bold</blockquote>
--------------------------------------------
This code gets the first <b> tag beneath the <body> tag:

soup.body.b
# <b>The Dormouse's story</b>
--------------------------------------------
Using a tag name as an attribute will give you only the first tag by that name:

soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
--------------------------------------------
.contents and .children
A tag’s children are available in a list called .contents:
head_tag = soup.head
head_tag
# <head><title>The Dormouse's story</title></head>

head_tag.contents
# [<title>The Dormouse's story</title>]

title_tag = head_tag.contents[0]
title_tag
# <title>The Dormouse's story</title>
title_tag.contents
# [u'The Dormouse's story']

The BeautifulSoup object itself has children. In this case, the <html> tag is the child of the BeautifulSoup object.:
len(soup.contents)
# 1
soup.contents[0].name
# u'html'

A string does not have .contents, because it can’t contain anything:
text = title_tag.contents[0]
text.contents
# AttributeError: 'NavigableString' object has no attribute 'contents'

Instead of getting them as a list, you can iterate over a tag’s children using the .children generator:
for child in title_tag.children:
    print(child)
# The Dormouse's story
--------------------------------------------
.descendants
The .contents and .children attributes only consider a tag’s direct children. For instance, the <head> tag has a single direct child–the <title> tag:

head_tag.contents
# [<title>The Dormouse's story</title>]

But the <title> tag itself has a child: the string “The Dormouse’s story”. There’s a sense in which that string is also a child of the <head> tag. The .descendants attribute lets you iterate over all of a tag’s children, recursively: its direct children, the children of its direct children, and so on:
for child in head_tag.descendants:
    print(child)
# <title>The Dormouse's story</title>
# The Dormouse's story

The <head> tag has only one child, but it has two descendants: the <title> tag and the <title> tag’s child. The BeautifulSoup object only has one direct child (the <html> tag), but it has a whole lot of descendants:

len(list(soup.children))
# 1
len(list(soup.descendants))
# 25
--------------------------------------------
soup.find_all(id='link2')
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
--------------------------------------------
soup.find_all(href=re.compile("elsie"))
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
--------------------------------------------
This code finds all tags whose id attribute has a value, regardless of what the value is:
soup.find_all(id=True)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
--------------------------------------------
You can filter multiple attributes at once by passing in more than one keyword argument:
soup.find_all(href=re.compile("elsie"), id='link1')
# [<a class="sister" href="http://example.com/elsie" id="link1">three</a>]
--------------------------------------------
Some attributes, like the data-* attributes in HTML 5, have names that can’t be used as the names of keyword arguments:
data_soup = BeautifulSoup('<div data-foo="value">foo!</div>')
data_soup.find_all(data-foo="value")
# SyntaxError: keyword can't be an expression

You can use these attributes in searches by putting them into a dictionary and passing the dictionary into find_all() as the attrs argument:
data_soup.find_all(attrs={"data-foo": "value"})
# [<div data-foo="value">foo!</div>]
--------------------------------------------
Searching by CSS class
It’s very useful to search for a tag that has a certain CSS class, but the name of the CSS attribute, “class”, is a reserved word in Python. Using class as a keyword argument will give you a syntax error. As of Beautiful Soup 4.1.2, you can search by CSS class using the keyword argument class_:
soup.find_all("a", class_="sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
--------------------------------------------
As with any keyword argument, you can pass class_ a string, a regular expression, a function, or True:
soup.find_all(class_=re.compile("itl"))
# [<p class="title"><b>The Dormouse's story</b></p>]

def has_six_characters(css_class):
    return css_class is not None and len(css_class) == 6
soup.find_all(class_=has_six_characters)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
--------------------------------------------
Remember that a single tag can have multiple values for its “class” attribute. When you search for a tag that matches a certain CSS class, you’re matching against any of its CSS classes:

css_soup = BeautifulSoup('<p class="body strikeout"></p>')
css_soup.find_all("p", class_="strikeout")
# [<p class="body strikeout"></p>]

css_soup.find_all("p", class_="body")
# [<p class="body strikeout"></p>]
--------------------------------------------
You can also search for the exact string value of the class attribute:

css_soup.find_all("p", class_="body strikeout")
# [<p class="body strikeout"></p>]

But searching for variants of the string value won’t work:

css_soup.find_all("p", class_="strikeout body")
# []

If you want to search for tags that match two or more CSS classes, you should use a CSS selector:

css_soup.select("p.strikeout.body")
# [<p class="body strikeout"></p>]

In older versions of Beautiful Soup, which don’t have the class_ shortcut, you can use the attrs trick mentioned above. Create a dictionary whose value for “class” is the string (or regular expression, or whatever) you want to search for:

soup.find_all("a", attrs={"class": "sister"})
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
--------------------------------------------
soup.find_all(text="Elsie")
# [u'Elsie']

soup.find_all(text=["Tillie", "Elsie", "Lacie"])
# [u'Elsie', u'Lacie', u'Tillie']

soup.find_all(text=re.compile("Dormouse"))
# [u"The Dormouse's story", u"The Dormouse's story"]

def is_the_only_string_within_a_tag(s):
    """Return True if this string is the only child of its parent tag."""
    return (s == s.parent.string)

soup.find_all(text=is_the_only_string_within_a_tag)
# [u"The Dormouse's story", u"The Dormouse's story", u'Elsie', u'Lacie', u'Tillie', u'...']
--------------------------------------------
Although text is for finding strings, you can combine it with arguments that find tags: Beautiful Soup will find all tags whose .string matches your value for text. This code finds the <a> tags whose .string is “Elsie”:

soup.find_all("a", text="Elsie")
# [<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>]
--------------------------------------------
The limit argument
find_all() returns all the tags and strings that match your filters. This can take a while if the document is large. If you don’t need all the results, you can pass in a number for limit. This works just like the LIMIT keyword in SQL. It tells Beautiful Soup to stop gathering results after it’s found a certain number.

soup.find_all("a", limit=2)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
--------------------------------------------
The recursive argument
If you call mytag.find_all(), Beautiful Soup will examine all the descendants of mytag: its children, its children’s children, and so on. If you only want Beautiful Soup to consider direct children, you can pass in recursive=False. See the difference here:

soup.html.find_all("title")
# [<title>The Dormouse's story</title>]

soup.html.find_all("title", recursive=False)
# []
--------------------------------------------
Calling a tag is like calling find_all()

Because find_all() is the most popular method in the Beautiful Soup search API, you can use a shortcut for it. If you treat the BeautifulSoup object or a Tag object as though it were a function, then it’s the same as calling find_all() on that object. These two lines of code are equivalent:
soup.find_all("a")
soup("a")

These two lines are also equivalent:
soup.title.find_all(text=True)
soup.title(text=True)
--------------------------------------------
find_parents() and find_parent()
find_next_siblings() and find_next_sibling()
find_previous_siblings() and find_previous_sibling()
find_all_next() and find_next()
find_all_previous() and find_previous()
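
These notes list no examples for the methods above, so here is a quick sketch of two of them, assuming `bs4` is installed (the markup and ids are just illustrative):

```python
from bs4 import BeautifulSoup

doc = '<p><a id="link1">Elsie</a> <a id="link2">Lacie</a></p>'
soup = BeautifulSoup(doc, "html.parser")

first = soup.find(id="link1")
print(first.find_parent("p"))        # the enclosing <p> tag
print(first.find_next_sibling("a"))  # the <a id="link2"> tag
```

Each pair works like find_all()/find(): the plural form returns a list of all matches in the given direction, the singular form returns only the first.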
--------------------------------------------
CSS selectors

Beautiful Soup supports the most commonly-used CSS selectors. Just pass a string into the .select() method of a Tag object or the BeautifulSoup object itself.

You can find tags:
soup.select("title")
# [<title>The Dormouse's story</title>]

soup.select("p:nth-of-type(3)")
# [<p class="story">...</p>]

Find tags beneath other tags:
soup.select("body a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select("html head title")
# [<title>The Dormouse's story</title>]

Find tags directly beneath other tags:

soup.select("head > title")
# [<title>The Dormouse's story</title>]

soup.select("p > a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select("p > a:nth-of-type(2)")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

soup.select("p > #link1")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

soup.select("body > a")
# []

Find the siblings of tags:
soup.select("#link1 ~ .sister")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select("#link1 + .sister")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

Find tags by CSS class:
soup.select(".sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select("[class~=sister]")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

Find tags by ID:
soup.select("#link1")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

soup.select("a#link2")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

Test for the existence of an attribute:
soup.select('a[href]')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

Find tags by attribute value:
soup.select('a[href="http://example.com/elsie"]')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

soup.select('a[href^="http://example.com/"]')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select('a[href$="tillie"]')
# [<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select('a[href*=".com/el"]')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

Match language codes:
multilingual_markup = """
<p lang="en">Hello</p>
<p lang="en-us">Howdy, y'all</p>
<p lang="en-gb">Pip-pip, old fruit</p>
<p lang="fr">Bonjour mes amis</p>
"""
multilingual_soup = BeautifulSoup(multilingual_markup)
multilingual_soup.select('p[lang|=en]')
# [<p lang="en">Hello</p>,
# <p lang="en-us">Howdy, y'all</p>,
# <p lang="en-gb">Pip-pip, old fruit</p>]

This is a convenience for users who know the CSS selector syntax. You can do all this stuff with the Beautiful Soup API. And if CSS selectors are all you need, you might as well use lxml directly: it’s a lot faster, and it supports more CSS selectors. But this lets you combine simple CSS selectors with the Beautiful Soup API.
--------------------------------------------
Modifying .string
If you set a tag’s .string attribute, the tag’s contents are replaced with the string you give:
markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)

tag = soup.a
tag.string = "New link text."
tag
# <a href="http://example.com/">New link text.</a>

Be careful: if the tag contained other tags, they and all their contents will be destroyed.
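A runnable demonstration of that warning, assuming `bs4` is installed:

```python
from bs4 import BeautifulSoup

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup, "html.parser")

# Setting .string replaces ALL of the tag's contents,
# destroying the inner <i> tag and its text.
soup.a.string = "New link text."
print(soup.a)  # <a href="http://example.com/">New link text.</a>
```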
--------------------------------------------
append()

You can add to a tag’s contents with Tag.append(). It works just like calling .append() on a Python list:
soup = BeautifulSoup("<a>Foo</a>")
soup.a.append("Bar")

soup
# <html><head></head><body><a>FooBar</a></body></html>
soup.a.contents
# [u'Foo', u'Bar']
--------------------------------------------
BeautifulSoup.new_string() and .new_tag()
If you need to add a string to a document, no problem–you can pass a Python string in to append(), or you can call the factory method BeautifulSoup.new_string():

soup = BeautifulSoup("<b></b>")
tag = soup.b
tag.append("Hello")
new_string = soup.new_string(" there")
tag.append(new_string)
tag
# <b>Hello there</b>
tag.contents
# [u'Hello', u' there']

If you want to create a comment or some other subclass of NavigableString, pass that class as the second argument to new_string():

from bs4 import Comment
new_comment = soup.new_string("Nice to see you.", Comment)
tag.append(new_comment)
tag
# <b>Hello there<!--Nice to see you.--></b>
tag.contents
# [u'Hello', u' there', u'Nice to see you.']

(This is a new feature in Beautiful Soup 4.2.1.)

What if you need to create a whole new tag? The best solution is to call the factory method BeautifulSoup.new_tag():

soup = BeautifulSoup("<b></b>")
original_tag = soup.b

new_tag = soup.new_tag("a", href="http://www.example.com")
original_tag.append(new_tag)
original_tag
# <b><a href="http://www.example.com"></a></b>

new_tag.string = "Link text."
original_tag
# <b><a href="http://www.example.com">Link text.</a></b>

Only the first argument, the tag name, is required.
--------------------------------------------
insert()
Tag.insert() is just like Tag.append(), except the new element doesn’t necessarily go at the end of its parent’s .contents. It’ll be inserted at whatever numeric position you say. It works just like .insert() on a Python list:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
tag = soup.a

tag.insert(1, "but did not endorse ")
tag
# <a href="http://example.com/">I linked to but did not endorse <i>example.com</i></a>
tag.contents
# [u'I linked to ', u'but did not endorse ', <i>example.com</i>]
--------------------------------------------
insert_before() and insert_after()
The insert_before() method inserts a tag or string immediately before something else in the parse tree:

soup = BeautifulSoup("<b>stop</b>")
tag = soup.new_tag("i")
tag.string = "Don't"
soup.b.string.insert_before(tag)
soup.b
# <b><i>Don't</i>stop</b>

The insert_after() method moves a tag or string so that it immediately follows something else in the parse tree:
soup.b.i.insert_after(soup.new_string(" ever "))
soup.b
# <b><i>Don't</i> ever stop</b>
soup.b.contents
# [<i>Don't</i>, u' ever ', u'stop']
--------------------------------------------
clear()

Tag.clear() removes the contents of a tag:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
tag = soup.a

tag.clear()
tag
# <a href="http://example.com/"></a>
--------------------------------------------
extract()
PageElement.extract() removes a tag or string from the tree. It returns the tag or string that was extracted:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

i_tag = soup.i.extract()

a_tag
# <a href="http://example.com/">I linked to</a>

i_tag
# <i>example.com</i>

print(i_tag.parent)
# None

At this point you effectively have two parse trees: one rooted at the BeautifulSoup object you used to parse the document, and one rooted at the tag that was extracted. You can go on to call extract on a child of the element you extracted:

my_string = i_tag.string.extract()
my_string
# u'example.com'

print(my_string.parent)
# None
i_tag
# <i></i>
--------------------------------------------
decompose()
Tag.decompose() removes a tag from the tree, then completely destroys it and its contents:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

soup.i.decompose()

a_tag
# <a href="http://example.com/">I linked to</a>
--------------------------------------------
replace_with()
PageElement.replace_with() removes a tag or string from the tree, and replaces it with the tag or string of your choice:
markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

new_tag = soup.new_tag("b")
new_tag.string = "example.net"
a_tag.i.replace_with(new_tag)

a_tag
# <a href="http://example.com/">I linked to <b>example.net</b></a>
replace_with() returns the tag or string that was replaced, so that you can examine it or add it back to another part of the tree.
--------------------------------------------
wrap()
PageElement.wrap() wraps an element in the tag you specify. It returns the new wrapper:

soup = BeautifulSoup("<p>I wish I was bold.</p>")
soup.p.string.wrap(soup.new_tag("b"))
# <b>I wish I was bold.</b>

soup.p.wrap(soup.new_tag("div"))
# <div><p><b>I wish I was bold.</b></p></div>

This method is new in Beautiful Soup 4.0.5.
--------------------------------------------
unwrap()
Tag.unwrap() is the opposite of wrap(). It replaces a tag with whatever’s inside that tag. It’s good for stripping out markup:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

a_tag.i.unwrap()
a_tag
# <a href="http://example.com/">I linked to example.com</a>

Like replace_with(), unwrap() returns the tag that was replaced.
--------------------------------------------
Output formatters
If you give Beautiful Soup a document that contains HTML entities like “&ldquo;”, they’ll be converted to Unicode characters:
soup = BeautifulSoup("&ldquo;Dammit!&rdquo; he said.")
unicode(soup)
# u'<html><head></head><body>\u201cDammit!\u201d he said.</body></html>'

If you then convert the document to a string, the Unicode characters will be encoded as UTF-8. You won’t get the HTML entities back:

str(soup)
# '<html><head></head><body>\xe2\x80\x9cDammit!\xe2\x80\x9d he said.</body></html>'

By default, the only characters that are escaped upon output are bare ampersands and angle brackets. These get turned into “&amp;”, “&lt;”, and “&gt;”, so that Beautiful Soup doesn’t inadvertently generate invalid HTML or XML:

soup = BeautifulSoup("<p>The law firm of Dewey, Cheatem, & Howe</p>")
soup.p
# <p>The law firm of Dewey, Cheatem, &amp; Howe</p>

soup = BeautifulSoup('<a href="http://example.com/?foo=val1&bar=val2">A link</a>')
soup.a
# <a href="http://example.com/?foo=val1&amp;bar=val2">A link</a>

You can change this behavior by providing a value for the formatter argument to prettify(), encode(), or decode(). Beautiful Soup recognizes four possible values for formatter.

The default is formatter="minimal". Strings will only be processed enough to ensure that Beautiful Soup generates valid HTML/XML:

french = "<p>Il a dit &lt;&lt;Sacr&eacute; bleu!&gt;&gt;</p>"
soup = BeautifulSoup(french)
print(soup.prettify(formatter="minimal"))
# <html>
# <body>
# <p>
# Il a dit &lt;&lt;Sacré bleu!&gt;&gt;
# </p>
# </body>
# </html>

If you pass in formatter="html", Beautiful Soup will convert Unicode characters to HTML entities whenever possible:

print(soup.prettify(formatter="html"))
# <html>
# <body>
# <p>
# Il a dit &lt;&lt;Sacr&eacute; bleu!&gt;&gt;
# </p>
# </body>
# </html>

If you pass in formatter=None, Beautiful Soup will not modify strings at all on output. This is the fastest option, but it may lead to Beautiful Soup generating invalid HTML/XML, as in these examples:

print(soup.prettify(formatter=None))
# <html>
# <body>
# <p>
# Il a dit <<Sacré bleu!>>
# </p>
# </body>
# </html>

link_soup = BeautifulSoup('<a href="http://example.com/?foo=val1&bar=val2">A link</a>')
print(link_soup.a.encode(formatter=None))
# <a href="http://example.com/?foo=val1&bar=val2">A link</a>

Finally, if you pass in a function for formatter, Beautiful Soup will call that function once for every string and attribute value in the document. You can do whatever you want in this function. Here’s a formatter that converts strings to uppercase and does absolutely nothing else:

def uppercase(str):
    return str.upper()

print(soup.prettify(formatter=uppercase))
# <html>
# <body>
# <p>
# IL A DIT <<SACRÉ BLEU!>>
# </p>
# </body>
# </html>

print(link_soup.a.prettify(formatter=uppercase))
# <a href="HTTP://EXAMPLE.COM/?FOO=VAL1&BAR=VAL2">
# A LINK
# </a>

If you’re writing your own function, you should know about the EntitySubstitution class in the bs4.dammit module. This class implements Beautiful Soup’s standard formatters as class methods: the “html” formatter is EntitySubstitution.substitute_html, and the “minimal” formatter is EntitySubstitution.substitute_xml. You can use these functions to simulate formatter="html" or formatter="minimal", but then do something extra.

Here’s an example that replaces Unicode characters with HTML entities whenever possible, but also converts all strings to uppercase:

from bs4.dammit import EntitySubstitution
def uppercase_and_substitute_html_entities(str):
    return EntitySubstitution.substitute_html(str.upper())

print(soup.prettify(formatter=uppercase_and_substitute_html_entities))
# <html>
# <body>
# <p>
# IL A DIT &lt;&lt;SACR&Eacute; BLEU!&gt;&gt;
# </p>
# </body>
# </html>

One last caveat: if you create a CData object, the text inside that object is always presented exactly as it appears, with no formatting. Beautiful Soup will call the formatter method, just in case you’ve written a custom method that counts all the strings in the document or something, but it will ignore the return value:

from bs4.element import CData
soup = BeautifulSoup("<a></a>")
soup.a.string = CData("one < three")
print(soup.a.prettify(formatter="xml"))
# <a>
# <![CDATA[one < three]]>
# </a>
--------------------------------------------
get_text()
If you only want the text part of a document or tag, you can use the get_text() method. It returns all the text in a document or beneath a tag, as a single Unicode string:

markup = '<a href="http://example.com/">\nI linked to <i>example.com</i>\n</a>'
soup = BeautifulSoup(markup)

soup.get_text()
u'\nI linked to example.com\n'
soup.i.get_text()
u'example.com'

You can specify a string to be used to join the bits of text together:

soup.get_text("|")
u'\nI linked to |example.com|\n'

You can tell Beautiful Soup to strip whitespace from the beginning and end of each bit of text:

soup.get_text("|", strip=True)
u'I linked to|example.com'

But at that point you might want to use the .stripped_strings generator instead, and process the text yourself:

[text for text in soup.stripped_strings]
# [u'I linked to', u'example.com']
--------------------------------------------
Encodings
Any HTML or XML document is written in a specific encoding like ASCII or UTF-8. But when you load that document into Beautiful Soup, you’ll discover it’s been converted to Unicode:

markup = "<h1>Sacr\xc3\xa9 bleu!</h1>"
soup = BeautifulSoup(markup)
soup.h1
# <h1>Sacré bleu!</h1>
soup.h1.string
# u'Sacr\xe9 bleu!'

It’s not magic. (That sure would be nice.) Beautiful Soup uses a sub-library called Unicode, Dammit to detect a document’s encoding and convert it to Unicode. The autodetected encoding is available as the .original_encoding attribute of the BeautifulSoup object:

soup.original_encoding
'utf-8'

Unicode, Dammit guesses correctly most of the time, but sometimes it makes mistakes. Sometimes it guesses correctly, but only after a byte-by-byte search of the document that takes a very long time. If you happen to know a document’s encoding ahead of time, you can avoid mistakes and delays by passing it to the BeautifulSoup constructor as from_encoding.

Here’s a document written in ISO-8859-8. The document is so short that Unicode, Dammit can’t get a good lock on it, and misidentifies it as ISO-8859-7:

markup = b"<h1>\xed\xe5\xec\xf9</h1>"
soup = BeautifulSoup(markup)
soup.h1
<h1>νεμω</h1>
soup.original_encoding
'ISO-8859-7'

We can fix this by passing in the correct from_encoding:

soup = BeautifulSoup(markup, from_encoding="iso-8859-8")
soup.h1
<h1>םולש</h1>
soup.original_encoding
'iso8859-8'

In rare cases (usually when a UTF-8 document contains text written in a completely different encoding), the only way to get Unicode may be to replace some characters with the special Unicode character “REPLACEMENT CHARACTER” (U+FFFD, �). If Unicode, Dammit needs to do this, it will set the .contains_replacement_characters attribute to True on the UnicodeDammit or BeautifulSoup object. This lets you know that the Unicode representation is not an exact representation of the original–some data was lost. If a document contains �, but .contains_replacement_characters is False, you’ll know that the � was there originally (as it is in this paragraph) and doesn’t stand in for missing data.
--------------------------------------------
Output encoding
When you write out a document from Beautiful Soup, you get a UTF-8 document, even if the document wasn’t in UTF-8 to begin with. Here’s a document written in the Latin-1 encoding:

markup = b'''
<html>
<head>
<meta content="text/html; charset=ISO-Latin-1" http-equiv="Content-type" />
</head>
<body>
<p>Sacr\xe9 bleu!</p>
</body>
</html>
'''

soup = BeautifulSoup(markup)
print(soup.prettify())
# <html>
# <head>
# <meta content="text/html; charset=utf-8" http-equiv="Content-type" />
# </head>
# <body>
# <p>
# Sacré bleu!
# </p>
# </body>
# </html>

Note that the <meta> tag has been rewritten to reflect the fact that the document is now in UTF-8.

If you don’t want UTF-8, you can pass an encoding into prettify():

print(soup.prettify("latin-1"))
# <html>
# <head>
# <meta content="text/html; charset=latin-1" http-equiv="Content-type" />
# ...

You can also call encode() on the BeautifulSoup object, or any element in the soup, just as if it were a Python string:

soup.p.encode("latin-1")
# '<p>Sacr\xe9 bleu!</p>'

soup.p.encode("utf-8")
# '<p>Sacr\xc3\xa9 bleu!</p>'

Any characters that can’t be represented in your chosen encoding will be converted into numeric XML entity references. Here’s a document that includes the Unicode character SNOWMAN:

markup = u"<b>\N{SNOWMAN}</b>"
snowman_soup = BeautifulSoup(markup)
tag = snowman_soup.b

The SNOWMAN character can be part of a UTF-8 document (it looks like ☃), but there’s no representation for that character in ISO-Latin-1 or ASCII, so it’s converted into “&#9731;” for those encodings:

print(tag.encode("utf-8"))
# <b>☃</b>

print(tag.encode("latin-1"))
# <b>&#9731;</b>

print(tag.encode("ascii"))
# <b>&#9731;</b>
--------------------------------------------
Unicode, Dammit
You can use Unicode, Dammit without using Beautiful Soup. It’s useful whenever you have data in an unknown encoding and you just want it to become Unicode:

from bs4 import UnicodeDammit
dammit = UnicodeDammit("Sacr\xc3\xa9 bleu!")
print(dammit.unicode_markup)
# Sacré bleu!
dammit.original_encoding
# 'utf-8'

Unicode, Dammit’s guesses will get a lot more accurate if you install the chardet or cchardet Python libraries. The more data you give Unicode, Dammit, the more accurately it will guess. If you have your own suspicions as to what the encoding might be, you can pass them in as a list:

dammit = UnicodeDammit("Sacr\xe9 bleu!", ["latin-1", "iso-8859-1"])
print(dammit.unicode_markup)
# Sacré bleu!
dammit.original_encoding
# 'latin-1'

Unicode, Dammit has two special features that Beautiful Soup doesn’t use.
--------------------------------------------
Smart quotes
You can use Unicode, Dammit to convert Microsoft smart quotes to HTML or XML entities:

markup = b"<p>I just \x93love\x94 Microsoft Word\x92s smart quotes</p>"

UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="html").unicode_markup
# u'<p>I just &ldquo;love&rdquo; Microsoft Word&rsquo;s smart quotes</p>'

UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="xml").unicode_markup
# u'<p>I just &#x201C;love&#x201D; Microsoft Word&#x2019;s smart quotes</p>'

You can also convert Microsoft smart quotes to ASCII quotes:

UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="ascii").unicode_markup
# u'<p>I just "love" Microsoft Word\'s smart quotes</p>'

Hopefully you’ll find this feature useful, but Beautiful Soup doesn’t use it. Beautiful Soup prefers the default behavior, which is to convert Microsoft smart quotes to Unicode characters along with everything else:

UnicodeDammit(markup, ["windows-1252"]).unicode_markup
# u'<p>I just \u201clove\u201d Microsoft Word\u2019s smart quotes</p>'
--------------------------------------------
Inconsistent encodings

Sometimes a document is mostly in UTF-8, but contains Windows-1252 characters such as (again) Microsoft smart quotes. This can happen when a website includes data from multiple sources. You can use UnicodeDammit.detwingle() to turn such a document into pure UTF-8. Here’s a simple example:

snowmen = (u"\N{SNOWMAN}" * 3)
quote = (u"\N{LEFT DOUBLE QUOTATION MARK}I like snowmen!\N{RIGHT DOUBLE QUOTATION MARK}")
doc = snowmen.encode("utf8") + quote.encode("windows_1252")

This document is a mess. The snowmen are in UTF-8 and the quotes are in Windows-1252. You can display the snowmen or the quotes, but not both:

print(doc)
# ☃☃☃�I like snowmen!�

print(doc.decode("windows-1252"))
# â˜ƒâ˜ƒâ˜ƒ“I like snowmen!”

Decoding the document as UTF-8 raises a UnicodeDecodeError, and decoding it as Windows-1252 gives you gibberish. Fortunately, UnicodeDammit.detwingle() will convert the string to pure UTF-8, allowing you to decode it to Unicode and display the snowmen and quote marks simultaneously:

new_doc = UnicodeDammit.detwingle(doc)
print(new_doc.decode("utf8"))
# ☃☃☃“I like snowmen!”

UnicodeDammit.detwingle() only knows how to handle Windows-1252 embedded in UTF-8 (or vice versa, I suppose), but this is the most common case.

Note that you must know to call UnicodeDammit.detwingle() on your data before passing it into BeautifulSoup or the UnicodeDammit constructor. Beautiful Soup assumes that a document has a single encoding, whatever it might be. If you pass it a document that contains both UTF-8 and Windows-1252, it’s likely to think the whole document is Windows-1252, and the document will come out looking like `â˜ƒâ˜ƒâ˜ƒ“I like snowmen!”`.

UnicodeDammit.detwingle() is new in Beautiful Soup 4.1.0.
-------------------------------------------

+Differences between parsers (Dec. 10, 2014, 1:48 p.m.)

Beautiful Soup presents the same interface to a number of different parsers, but each parser is different. Different parsers will create different parse trees from the same document. The biggest differences are between the HTML parsers and the XML parsers. Here’s a short document, parsed as HTML:

BeautifulSoup("<a><b /></a>")
# <html><head></head><body><a><b></b></a></body></html>

Since an empty <b /> tag is not valid HTML, the parser turns it into a <b></b> tag pair.

Here’s the same document parsed as XML (running this requires that you have lxml installed). Note that the empty <b /> tag is left alone, and that the document is given an XML declaration instead of being put into an <html> tag:

BeautifulSoup("<a><b /></a>", "xml")
# <?xml version="1.0" encoding="utf-8"?>
# <a><b/></a>

There are also differences between HTML parsers. If you give Beautiful Soup a perfectly-formed HTML document, these differences won’t matter. One parser will be faster than another, but they’ll all give you a data structure that looks exactly like the original HTML document.

But if the document is not perfectly-formed, different parsers will give different results. Here’s a short, invalid document parsed using lxml’s HTML parser. Note that the dangling </p> tag is simply ignored:

BeautifulSoup("<a></p>", "lxml")
# <html><body><a></a></body></html>

Here’s the same document parsed using html5lib:

BeautifulSoup("<a></p>", "html5lib")
# <html><head></head><body><a><p></p></a></body></html>

Instead of ignoring the dangling </p> tag, html5lib pairs it with an opening <p> tag. This parser also adds an empty <head> tag to the document.

Here’s the same document parsed with Python’s built-in HTML parser:

BeautifulSoup("<a></p>", "html.parser")
# <a></a>

Like html5lib, this parser ignores the closing </p> tag. Unlike html5lib, this parser makes no attempt to create a well-formed HTML document by adding a <body> tag. Unlike lxml, it doesn’t even bother to add an <html> tag.

Since the document “<a></p>” is invalid, none of these techniques is the “correct” way to handle it. The html5lib parser uses techniques that are part of the HTML5 standard, so it has the best claim on being the “correct” way, but all three techniques are legitimate.

Differences between parsers can affect your script. If you’re planning on distributing your script to other people, or running it on multiple machines, you should specify a parser in the BeautifulSoup constructor. That will reduce the chances that your users parse a document differently from the way you parse it.
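A minimal sketch of pinning the parser: it is just the second argument to the BeautifulSoup constructor. Using Python's built-in html.parser here means the result is reproducible on any machine with no third-party install:

```python
from bs4 import BeautifulSoup

# Specify the parser explicitly so every machine builds the same tree.
# html.parser ships with Python's standard library.
soup = BeautifulSoup("<a></p>", "html.parser")
print(soup)  # <a></a>
```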

+Introduction and Installation (Dec. 10, 2014, 1:41 p.m.)

http://www.crummy.com/software/BeautifulSoup/bs4/doc/

Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work.

Beautiful Soup 4 works on both Python 2 (2.6+) and Python 3
You can install it with pip install beautifulsoup4 or easy_install beautifulsoup4. It's also available as the python-beautifulsoup4 package in recent versions of Debian, Ubuntu, and Fedora.

Beautiful Soup 3
Beautiful Soup 3 was the official release line of Beautiful Soup from May 2006 to March 2012. It is considered stable, and only critical bugs will be fixed. Here's the Beautiful Soup 3 documentation.
Beautiful Soup 3 works only under Python 2.x. It is licensed under the same license as Python itself.
------------------------------------------------------------
Installing a parser

Beautiful Soup supports the HTML parser included in Python’s standard library, but it also supports a number of third-party Python parsers. One is the lxml parser. Depending on your setup, you might install lxml with one of these commands:

$ apt-get install python-lxml

$ easy_install lxml

$ pip install lxml

Another alternative is the pure-Python html5lib parser, which parses HTML the way a web browser does. Depending on your setup, you might install html5lib with one of these commands:

$ apt-get install python-html5lib

$ easy_install html5lib

$ pip install html5lib

BIND
+PTR Record (Aug. 19, 2018, 7:59 p.m.)

A Pointer (PTR) record resolves an IP address to a fully-qualified domain name (FQDN) as an opposite to what A record does. PTR records are also called Reverse DNS records.

PTR records are mainly used to check if the server name is actually associated with the IP address from where the connection was initiated.

IP addresses of all Intermedia mail servers already have PTR records created.

--------------------------------------------------------------

What is PTR Record?

PTR records are used for Reverse DNS (Domain Name System) lookups. Using the IP address you can get the associated domain/hostname. An A record should exist for every PTR record. Setting up reverse DNS for a mail server is good practice.

While in the domain DNS zone the hostname is pointed to an IP address, using the reverse zone allows pointing an IP address to a hostname.
In the Reverse DNS zone, you need to use a PTR Record. The PTR Record resolves the IP address to a domain/hostname.
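The naming scheme behind a PTR record can be sketched with Python's standard library: the octets of the IP address are reversed and suffixed with in-addr.arpa to form the name the PTR record lives at (the IP below is only an illustration):

```python
import ipaddress

# A PTR record is stored under the reversed-octet name in the
# in-addr.arpa zone; ipaddress computes that name directly.
ip = ipaddress.ip_address("199.26.84.20")
print(ip.reverse_pointer)  # 20.84.26.199.in-addr.arpa
```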

--------------------------------------------------------------

+Errors (Aug. 7, 2015, 3:31 p.m.)

managed-keys-zone ./IN: loading from master file managed-keys.bind

For solving it:
nano /etc/bind/named.conf
add include "/etc/bind/bind.keys";

And also create an empty file:
touch /etc/bind/managed-keys.bind
**********************************************************
When working with the reverse DNS zone (rev.10.168.192.in-addr.arpa) and the zone file (mohsenhassani.ir.db), you can use the tool:
named-checkzone mohsenhassani.ir rev.10.168.192.in-addr.arpa
named-checkzone mohsenhassani.ir mohsenhassani.ir.db
to check the validity of the files.

+Configuration (Aug. 21, 2014, 12:48 p.m.)

This file contains a summary of my own experiences:

1- There are some default zones in "/etc/bind/named.conf.external-zones"; there is no need to change them, nor to exclude the file from "/etc/bind/named.conf".
---------------------------------------------------------------------------------------------
2-Add a line at the bottom of the file "/etc/bind/named.conf":
include "/etc/bind/named.conf.external-zones";
--------------------------------------------------------------------------------------------
3-Create a file named "/etc/bind/named.conf.external-zones" and fill it up with:
// -------------- Begin mohsenhassani.ir --------------
zone "mohsenhassani.ir" {
type master;
file "/etc/bind/zones/mohsenhassani.ir.db";
};

zone "1.10.168.192.in-addr.arpa" {
type master;
file "/etc/bind/zones/1.10.168.192.in-addr.arpa";
};
// -------------- End mohsenhassani.ir --------------


// -------------- Begin shahbal.ir --------------
zone "shahbal.ir" {
type master;
file "/etc/bind/zones/shahbal.ir.db";
};

zone "2.10.168.192.in-addr.arpa" {
type master;
file "/etc/bind/zones/2.10.168.192.in-addr.arpa";
};
// -------------- End shahbal.ir --------------
--------------------------------------------------------------------------------------------
4- There is an empty directory at "/etc/bind/zones/". This is the place for holding the data for the above paths. So create a file named "mohsenhassani.ir.db" and fill it with:
$TTL 3h
@ IN SOA ns.mohsenhassani.ir. a.b.com. (
2013020828
20m
15m
1w
1h
)

IN NS ns.mohsenhassani.ir.

ns IN A 199.26.84.20
@ IN A 199.26.84.20
---------------------------------------------------------------------------------
5- Repeat the earlier step with a different file name and data: create a file named "1.10.168.192.in-addr.arpa" in "/etc/bind/zones/" and fill it with:

$TTL 3h
@ IN SOA mohsenhassani.ir. mail.mohsenhassani.ir. (
3
15m
15m
1w
1h )

; main domain name servers
IN NS mohsenhassani.ir.
IN NS www.mohsenhassani.ir.
IN NS sites.mohsenhassani.ir.
; main domain mail servers
IN MX 10 mail.mohsenhassani.ir.
; A records for name servers above
IN A 192.69.200.153
www IN A 192.69.200.153
pania IN A 192.69.200.153
; A record for mail server above
mail IN A 192.69.200.153
---------------------------------------------------------------------------------------------
6- OK, done!
When I was done with these configurations, I tested my work with "dig mohsenhassani.ir" but I got an error like:

root@mohsenhassani:/home/mohsen# dig mohsenhassani.ir
; <<>> DiG 9.7.3 <<>> mohsenhassani.ir
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8929
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;mohsenhassani.ir. IN A

;; Query time: 383 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sat Mar 16 17:00:19 2013
;; MSG SIZE rcvd: 34


The line ";; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 8929" contains the word "SERVFAIL", which shows that I have errors. There are many, many reasons which might cause this error, and you may be able to track it down using its id.
Anyway, for this error I had to do this:
sudo nano /etc/resolv.conf
And add this as the first line (the file already had 8.8.8.8 and 4.4.4.4):
nameserver 127.0.0.1

Then, running "dig mohsenhassani.ir" again, there were no more errors:
root@mohsenhassani:/home/mohsen# dig mohsenhassani.ir

; <<>> DiG 9.7.3 <<>> mohsenhassani.ir
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39792
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;mohsenhassani.ir. IN A

;; ANSWER SECTION:
mohsenhassani.ir. 10800 IN A 192.69.200.153

;; AUTHORITY SECTION:
mohsenhassani.ir. 10800 IN NS ns.mohsenhassani.ir.

;; ADDITIONAL SECTION:
ns.mohsenhassani.ir. 10800 IN A 192.69.200.153

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sat Mar 16 17:02:26 2013
;; MSG SIZE rcvd: 83
-----------------------------------------------------------------------------
Oh! And you have to create two sub-domains named "ns1.mohsenhassani.COM" and "ns2.mohsenhassani.COM" so that you can forward the ".ir" domains to these sub-domains.

+Installation (Aug. 7, 2015, 4:22 p.m.)

http://jack-brennan.com/caching-dns-with-bind9-on-debian/
---------------------------------------------------------------------------------------------
apt-get install bind9 bind9utils
---------------------------------------------------------------------------------------------
Configuration:

When installing and configuring or restarting bind, in case of encountering errors, check the log files. The log files are not stored separately. BIND stores the logs in the syslog:
nano /var/log/syslog
***************************************************
1-nano /etc/bind/named.conf.options
We need to modify the forwarder. This is the DNS server to which your own DNS server will forward the requests it cannot process.

forwarders {
# Replace the address below with the address of your provider's DNS server
88.135.34.227;
};
*******************************************
2-Add this line to the file: /etc/bind/named.conf
include "/etc/bind/named.conf.external-zones";
******************************************************
3-nano /etc/bind/named.conf.external-zones
This is where we will insert our zones. By the way, a zone is a domain name that is referenced in the DNS server.

// -------------- Begin mohsenhassani.ir --------------
zone "mohsenhassani.ir" {
type master;
file "/etc/bind/zones/mohsenhassani.ir.db";
};

zone "1.10.168.192.in-addr.arpa" {
type master;
file "/etc/bind/zones/1.10.168.192.in-addr.arpa";
};
// -------------- End mohsenhassani.ir --------------
**********************************************
4-nano /etc/bind/zones/1.10.168.192.in-addr.arpa
$TTL 3h
@ IN SOA mohsenhassani.ir. mail.mohsenhassani.ir. (
3
15m
15m
1w
1h )


@ IN NS mohsenhassani.ir.
@ IN A 192.69.204.35
**********************************************************
5-Restart BIND:
sudo /etc/init.d/bind9 restart

In case of failure, check the errors:
nano /var/log/syslog

We can now test the new DNS server...
*******************************************************
Modify the file resolv.conf with the following settings:
sudo nano /etc/resolv.conf

enter the following:

search example.com
nameserver 4.4.4.4
nameserver 8.8.8.8
***********************************************************
Now, test your DNS:
dig example.com

In case of errors, refer to the Errors note in the BIND category.

+Description (Aug. 21, 2014, 12:45 p.m.)

Every system on the Internet must have a unique IP address. (This does not include systems that are behind a NAT firewall because they are not directly on the Internet.) DNS acts as a directory service for all of these systems, allowing you to specify each one by its hostname. A telephone book allows you to look up an individual person by name and get their telephone number, their unique identifier on the telephone system's network. DNS allows you to look up individual server by name and get its IP address, its unique identifier on the Internet.
There are other hostname-to-IP directory services in use, mainly for LANs. Windows LANs can use WINS. UNIX LANs can use NIS. But because DNS is the directory service for the Internet (and can also be used for LANs) it is the most widely used. UNIX LANs could always use DNS instead of NIS, and starting with Windows 2000 Server, Windows LANs could use DNS instead of, or in addition to, WINS. And on small LANs where there are only a few machines you could just use HOSTS files on each system instead of setting up a server running DNS, NIS, or WINS.

As a service, DNS is critical to the operation of the Internet. When you enter www.some-domain.com in a Web browser, it's DNS that takes the www host name and translates it to an IP address. Without DNS, you could be connected to the Internet just fine, but you ain't goin' no where. Not unless you keep a record of the IP addresses of all of the resources you access on the Internet and use those instead of host/domain names.

So when you visit a Web site, you are actually doing so using the site's IP address even though you specified a host and domain name in the URL. In the background your computer quickly queried a DNS server to get the IP address that corresponds to the Web site's server and domain names. Now you know why you have to specify one or two DNS server IP addresses in the TCP/IP configuration on your desktop PC (in the resolv.conf file on a Linux system and the TCP/IP properties in the Network Control Panel on Windows systems).
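That background name-to-address query can be seen from Python's standard library. A minimal sketch, using "localhost" so the lookup does not depend on an outside DNS server being reachable:

```python
import socket

# The resolver turns a hostname into the IP address the connection
# actually uses; "localhost" resolves without leaving the machine.
print(socket.gethostbyname("localhost"))  # 127.0.0.1
```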

A "cannot connect" error doesn't necessarily indicate there isn't a connection to the destination server. There may very well be. The error may indicate a failure in "resolving" the domain name to an IP address. I use the open-source Firefox Web browser on Windows systems because the status bar gives more informational messages like "Resolving host", "Connecting to", and "Transferring data" rather than just the generic "Opening page" with IE. (It also seems to render pages faster than IE.)

In short, always check for correct DNS operation when troubleshooting a problem involving the inability to access an Internet resource. The ability to resolve names is critical, and later in this page, we'll show you some tools you can use to investigate and verify this ability.
When you are surfing the Web viewing Web pages or sending an e-mail your workstation is sending queries to a DNS server to resolve server/domain names. (Back on the Modems page we showed you how to set up your resolv.conf file to do this.) When you have your own Web site that other people visit you need a DNS server to respond to the queries from their workstations.

When you visit Web sites, the DNS server your workstation queries for name resolution is typically run by your ISP, but you could have one of your own. When you have your own Web site the DNS servers which respond to visitors' queries are typically run by your Web hosting provider, but you could likewise have your own one of these too. Actually, if you set up your own DNS server it could be used to respond to both "internal" (from your workstation) and "external" (from your Web site's visitors) queries.

Even if you don't have your own domain name or even your own LAN, you can still benefit from using a DNS server to allow others to access your Debian system. If you have a single system connected to the Internet via a cable or DSL connection, you can have it act as a Web/e-mail/FTP server using a neat service called "dynamic DNS" which we'll cover later. Dynamic DNS will even work with a modem if you want to play around with it.

DNS Server Functions:
You can set up a DNS server for several different reasons:
Internet Domain Support: If you have a domain name and you're operating Web, e-mail, FTP, or other Internet servers, you'll use a DNS server to respond to resolution queries so others can find and access your server(s). This is a serious undertaking and you'd have to set up a minimum of two of them. On this page, we'll refer to these types of DNS servers as authoritative DNS servers for reasons you'll see later. However, there are alternatives to having your own authoritative DNS server if you have (or want to have) your own domain name. You can have someone else host your DNS records for you. Even if someone else is taking care of your domain's DNS records you could still set up one of the following types of DNS servers.

Local Name Resolution: Similar to the above scenario, this type of DNS server would resolve the hostnames of systems on your LAN. Typically in this scenario, there is one DNS server and it does both jobs. The first being that it receives queries from workstations and the second being that it serves as the authoritative source for the responses (this will be more clear as we progress). Having this type of DNS server would eliminate the need to have (and manually update) a HOSTS file on each system on your LAN. On this page, we'll refer to these as LAN DNS servers.

During the Debian installation, you are asked to supply a domain name. This is an internal (private) domain name that is not visible to the outside world so like the private IP address ranges you use on a LAN, it doesn't have to be registered with anyone. A LAN DNS server would be authoritative for this internal, private domain. For security reasons, the name for this internal domain should not be the same as any public domain name you have registered. Private domain names are not restricted to using one of the established public TLD (Top Level Domain) names such as .com or .net. You could use .corp or .inc or anything else for your TLD. Since a single DNS server can be authoritative for multiple domains, you could use the same DNS server for both your public and private domains. However, the server would need to be accessible from both the Internet and the LAN so you'd need to locate it in a DMZ. Though you want to use different public and private domain names, you can use the same name for the second-level domain. For example, my-domain.com for the public name and my-domain.inc for the private name.


Internet Name Resolution: LAN workstations and other desktop PCs need to send Internet domain name resolution queries to a DNS server. The DNS server most often used for this is the ISP's DNS servers. These are often the DNS servers you specify in your TCP/IP configuration. You can have your own DNS server respond to these resolution queries instead of using your ISP's DNS servers. My ISP recently had a problem where they would intermittently lose connectivity to the network segment that their DNS servers were connected to so they couldn't be contacted. It took me about 30 seconds to turn one of my Debian systems into this type of DNS server and I was surfing with no problems. On this page, we'll refer to these as simple DNS servers. If a simple DNS server fails, you could just switch back to using your ISP's DNS servers. As a matter of fact, given that you typically specify two DNS servers in the TCP/IP configuration of most desktop PCs, you could have one of your ISP's DNS servers listed as the second (fallback) entry and you'd never miss a beat if your simple DNS server did go down. Turning your Debian system into a simple DNS server is simply a matter of entering a single command.

Don't take from this that you need three different types of DNS servers. If you were to set up a couple of authoritative DNS servers they could also provide the functionality of LAN and simple DNS servers. And a LAN DNS server can simultaneously provide the functionality of a simple DNS server. It's a progressive type of thing.

If you were going to set up authoritative DNS servers or a simple DNS server you'd have to have a 24/7 broadband connection to the Internet. Naturally, a LAN DNS server that didn't resolve Internet host/domain names wouldn't need this.

A DNS server is just a Debian system running a DNS application. The most widely used DNS application is BIND (Berkeley Internet Name Domain) and it runs a daemon called named that, among other things, responds to resolution queries. We'll see how to install it after we cover some basics.

DNS Basics:
Finding a single server out of all of the servers on the Internet is like trying to find a single file on the drive with thousands of files. In both cases, it helps to have some hierarchy built into the directory to logically group things. The DNS "namespace" is hierarchical in the same type of upside-down tree structure seen with file systems. Just as you have the root of a partition or drive, the DNS namespace has a root which is signified by a period.

Namespace Root --> Top Level Domains --> Second Level Domains
Namespace Root: .
Top Level Domains: com, net, org
Second Level Domains: com --> aboutdebian, cnn, net --> sbc, org --> samba, debian

When specifying the absolute path to a file in a file system you start at the root and go to the file:
/etc/bind/named.conf

When specifying the absolute path to a server in the DNS namespace you start at the server and go to the root:
www.aboutdebian.com.

Note the period after the 'com'; it's important. It's how you specify the root of the namespace. An absolute path in the DNS namespace is called an FQDN (Fully Qualified Domain Name). FQDNs are prevalent in DNS configuration files, and it's important that you always use that trailing period.
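To make the "absolute path" analogy concrete, here is a small sketch of reading an FQDN from the root label down (the trailing dot is what marks the name as fully qualified):

```python
def fqdn_labels(fqdn):
    """Split an FQDN into labels, ordered from the namespace root down.

    The trailing dot denotes the root; without it the name is relative,
    not fully qualified.
    """
    if not fqdn.endswith('.'):
        raise ValueError('not fully qualified: missing trailing root dot')
    # Drop the empty label after the final dot, then reverse the order:
    # TLD -> second-level domain -> hostname.
    return list(reversed(fqdn.rstrip('.').split('.')))

print(fqdn_labels('www.aboutdebian.com.'))  # ['com', 'aboutdebian', 'www']
```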

Internet resources are usually specified by a domain name and a server hostname. The www part of a URL is often the hostname of the Web server (or it could be an alias to a server with a different hostname). DNS is basically just a database with records for these hostnames. The directory for the entire telephone system is not stored in one huge phone book. Rather, it is broken up into many pieces, with each city maintaining its piece of the entire directory in its phone book. By the same token, pieces of the DNS directory database (the "zones") are stored, and maintained, on many different DNS servers located around the Internet. If you want to find the telephone number for a person in Poughkeepsie, you'd have to look in the Poughkeepsie telephone book. If you want to find the IP address of the www server in the some-domain.com domain, you have to query the DNS server that stores the DNS records for that domain.

The entries in the database map a host/domain name to an IP address. Here is a simple logical view of the type of information that is stored (we'll get to the A, CNAME, and MX designations in a bit).

A www.their-domain.com 172.29.183.103
MX mail.their-domain.com 172.29.183.217
A debian.your-domain.com 10.177.8.3
CNAME www.your-domain.com 10.177.8.3
MX debian.your-domain.com 10.177.8.3
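The lookup this table implies can be sketched as a tiny in-memory database. Note that the table above shows an IP next to the CNAME entry for simplicity; an actual CNAME record points at the canonical hostname, which is then resolved to an IP, as this hypothetical sketch does:

```python
# Hypothetical in-memory "zone database" mirroring the table above.
RECORDS = {
    'www.their-domain.com':   ('A', '172.29.183.103'),
    'debian.your-domain.com': ('A', '10.177.8.3'),
    'www.your-domain.com':    ('CNAME', 'debian.your-domain.com'),
}

def resolve(name, max_chain=8):
    """Follow CNAME aliases until an A record yields an IP address."""
    for _ in range(max_chain):
        rtype, value = RECORDS[name]
        if rtype == 'A':
            return value
        name = value  # CNAME: restart the lookup at the canonical name
    raise RuntimeError('CNAME chain too long')

print(resolve('www.your-domain.com'))  # 10.177.8.3
```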

This is why a real Internet server needs a static (unchanging) IP address. The IP address of the server's NIC connected to the Internet has to match whatever address is in the DNS database. Dynamic DNS does, however, provide a way around this for home servers, as we'll see later.

When you want to browse to www.their-domain.com your DNS server (the one you specify in the TCP/IP configuration on your desktop computer) most likely won't have a DNS record for the their-domain.com domain so it has to contact the DNS server that does. When your DNS server contacts the DNS server that has the DNS records (referred to as "resource records" or "zone records") for their-domain.com your DNS server gets the IP address of the www server and relays that address back to your desktop computer. So which DNS server has the DNS records for a particular domain?

When you register a domain name with someone like Network Solutions, one of the things they ask you for is the server names and addresses of two or three "name servers" (DNS servers). These are the servers where the DNS records for your domain will be stored (and queried by the DNS servers of those browsing to your site). So where do you get the "name servers" information for your domain? Typically, when you host your Web site using a Web hosting service, they not only provide a Web server for your domain's Web site files but also a DNS server to store your domain's DNS records. In other words, you'll want to know who your Web hosting provider is going to be before you register a domain name (so you can enter the provider's DNS server information in the name servers section of the domain name registration application).

You'll see the term "zone" used in DNS references. Most of the time a zone just equates to a domain. The only time this wouldn't be true is if you set up subdomains and separate DNS servers to handle just those subdomains. For example, a company could set up the subdomains us.their-domain.com and europe.their-domain.com and "delegate" a separate DNS server to each of them. In the case of these two DNS servers, their zone would be just the subdomain. The zone of the DNS server for the parent their-domain.com (which would contain the servers www.their-domain.com and mail.their-domain.com) would only contain records for those few machines in the parent domain.

Note that in the above example "us" and "europe" are subdomains while "www" and "mail" are hostnames of servers in the parent domain.

Once you've got your Web site up and running on your Web hosting provider's servers and someone surfs to your site, the DNS server they specified in their local TCP/IP configuration will query your hosting provider's DNS servers to get the IP address for your Web site. The DNS servers that host the DNS records for your domain, i.e. the DNS servers you specify in your domain name registration application, are the authoritative DNS servers for your domain. The surfer's DNS server queries one of your site's authoritative DNS servers to get an address and gets an authoritative response. When the surfer's DNS server relays the address information back to the surfer's local PC, it is a "non-authoritative" response because the surfer's DNS server is not an authoritative DNS server for your domain.

Example: If you surf to MIT's Web site the DNS server you have specified in your TCP/IP configuration queries one of MIT's authoritative DNS servers and gets an authoritative response with the IP address for the 'www' server. Your DNS server then sends a non-authoritative response back to your PC. You can easily see this for yourself. At a shell prompt, or a DOS window on a newer Windows system, type in:

nslookup www.mit.edu

First, you'll see the name and IP address of your locally-specified DNS server. Then you'll see the non-authoritative response your DNS server sent back containing the name and IP address of the MIT Web server.

If you're on a Linux system you can also see which name server(s) your DNS server contacted to get the IP address. At a shell prompt type in:

whois mit.edu

and you'll see three authoritative name servers listed with the hostnames STRAWB, W20NS, and BITSY. The 'whois' command simply returns the contents of a site's domain record.


DNS Records and Domain Records

Don't confuse DNS zone records with domain records. Your domain record is created when you fill out a domain name registration application and is maintained by the domain registration service (like Network Solutions) you used to register the domain name. A domain only has one domain record and it contains administrative and technical contact information as well as entries for the authoritative DNS servers (aka "name servers") that are hosting the DNS records for the domain. You have to enter the hostnames and addresses for multiple DNS servers in your domain record for redundancy (fail-over) purposes.

DNS records (aka zone records) for a domain are stored in the domain's zone file on the authoritative DNS servers. Typically, it is stored on the DNS servers of whatever Web hosting service is hosting your domain's Web site. However, if you have your own Web server (rather than using a Web hosting service) the DNS records could be hosted by you using your own authoritative DNS servers (as in MIT's case), or by a third party like EasyDNS.

In short, the name servers you specified in your domain record host the domain's zone file containing the zone records. The name servers, whether they be your Web hosting provider's, those of a third party like EasyDNS, or your own, which host the domain's zone file are authoritative DNS servers for the domain.

Because DNS is so important to the operation of the Internet, when you register a domain name you must specify a minimum of two name servers. If you set up your own authoritative DNS servers for your domain you must set up a minimum of two of them (for redundancy) and these would be the servers you specify in your domain record. While the multiple servers you specify in your domain record are authoritative for your domain, only one DNS server can be the primary DNS server for a domain. Any others are "secondary" servers. The zone file on the primary DNS server is "replicated" (transferred) to all secondary servers. As a result, any changes made to DNS records must be made on the primary DNS server. The zone files on secondary servers are read-only. If you made changes to the records in a zone file on a secondary DNS server they would simply be overwritten at the next replication. As you will see below, the primary server for a domain and the replication frequency are specified in a special type of zone record.

Early on in this page, we said that the DNS zone records are stored in a DNS database which we now know is called a zone file. The term "database" is used quite loosely. The zone file is actually just a text file that you can edit with any text editor. A zone file is domain-specific. That is, each domain has its own zone file. Actually, there are two zone files for each domain but we're only concerned with one right now. The DNS servers for a Web hosting provider will have many zone files, two for each domain it's hosting zone records for. A zone "record" is, in most cases, nothing more than a single line in the text zone file.

There are different types of DNS zone records. These numerous record types give you flexibility in setting up the servers in your domain. The most common types of zone records are:

An A (Address) record is a "host record" and it is the most common type. It is simply a static mapping of a hostname to an IP address. A common hostname for a Web server is 'www' so the A record for this server gives the IP address for this server in the domain.

An MX (Mail eXchanger) record is specifically for mail servers. It's a special type of service-specifier record. It identifies a mail server for the domain. That's why you don't have to enter a hostname like 'www' in an e-mail address. If you're running Sendmail (mail server) and Apache (Web server) on the same system (i.e. the same system is acting as both your Web server and e-mail server), both the A record for the system and the MX record would refer to the same server.

To offer some fail-over protection for e-mail, MX records also have a Priority field (numeric). You can enter two or three MX records each pointing to a different mail server, but the server specified in the record with the highest priority (lowest number) will be chosen first. A mail server with a priority of 10 in the MX record will receive an e-mail before a server with a priority of 20 in its MX record. Note that we are only talking about receiving mail from other Internet mail servers here. When a mail server is sending mail, it acts as a desktop PC when it comes to DNS. The mail server looks at the domain name in the recipient's e-mail address and the mail server then contacts its local DNS server (specified in the resolv.conf file) to get the IP address for the mail server in the recipient's domain. When an authoritative DNS server for the recipient's domain receives the query from the sender's DNS server it sends back the IP addresses from the MX records it has in that domain's zone file.
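The priority selection described above amounts to sorting the MX records by their priority number, lowest first. A minimal sketch (hostnames are illustrative):

```python
# MX records as (priority, mail server) pairs; lower number = tried first.
mx_records = [
    (20, 'debmail2.my-domain.com'),
    (10, 'debmail1.my-domain.com'),
]

def mail_server_order(records):
    """Return mail servers in the order a sending server should try them."""
    return [host for _, host in sorted(records)]

print(mail_server_order(mx_records))
# ['debmail1.my-domain.com', 'debmail2.my-domain.com']
```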

A CNAME (Canonical Name) record is an alias record. It's a way to have the same physical server respond to two different hostnames. Let's say you're not only running Sendmail and Apache on your server, but you're also running WU-FTPD so it also acts as an FTP server. You could create a CNAME record with the alias name 'ftp' so people could use ftp.your-domain.com and www.your-domain.com to access different services on the same server.

Another use for a CNAME record was illustrated in the example near the top of the page. Suppose you name your Web server 'Debian' instead of 'www'. You could simply create a CNAME record with the alias name 'www' but with the hostname 'Debian' and Debian's IP address.

NS (Name Server) records specify the authoritative DNS servers for a domain.

There can be multiples of all of the above record types. There is one special record type of which there is only one record in the zone file: the SOA (Start Of Authority) record, and it's the first record in the zone file. An SOA record is only present in a zone file located on authoritative DNS servers (non-authoritative DNS servers can cache zone records). It specifies such things as:

The primary authoritative DNS server for the zone (domain).
The e-mail address of the zone's (domain's) administrator. In zone files, the '@' has a specific meaning (see below), so the e-mail address me@my-domain.com is written as me.my-domain.com.

Timing information as to when secondary DNS servers should refresh or expire a zone file and a serial number to indicate the version of the zone file for the sake of comparison.

The SOA record is the one record that takes up several lines.
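As a rough illustration of how the serial number is used: a secondary server compares the serial in its copy of the zone against the primary's, and re-transfers the zone when the primary's serial is newer. Real DNS uses RFC 1982 serial-number arithmetic to handle wraparound; this sketch assumes plain date-based serials (YYYYMMDDnn) that compare correctly as ordinary integers.

```python
def needs_zone_transfer(primary_serial, secondary_serial):
    """A secondary re-fetches the zone when the primary's serial is newer.

    Date-based serials like 2004011522 (YYYYMMDDnn) compare correctly as
    plain integers as long as the convention is followed.
    """
    return primary_serial > secondary_serial

print(needs_zone_transfer(2004011523, 2004011522))  # True
print(needs_zone_transfer(2004011522, 2004011522))  # False
```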

Several important points to note about the records in a zone file:

Records can specify servers in other domains. This is most commonly used with MX and NS records when backup servers are located in a different domain but receive mail or resolve queries for your domain.

There must be an A record for systems specified in all MX, NS, and CNAME records.

A and CNAME records can specify workstations as well as servers (which you'll see when we set up a LAN DNS server).

Now let's look at a typical zone file. When a Debian system is set up as a DNS server the zone files are stored in the /etc/bind directory. In a zone file, the parentheses let a record (here the SOA record) span multiple lines. The ';' is the comment character. The 'IN' indicates an INternet-class record.

$TTL 86400
my-name.com. IN SOA debns1.my-name.com. joe.my-name.com. (
2004011522 ; Serial no., based on date
21600 ; Refresh after 6 hours
3600 ; Retry after 1 hour
604800 ; Expire after 7 days
3600 ; Minimum TTL of 1 hour
)
;Name servers
debns1 IN A 192.168.1.41
debns2.my-name.com. IN A 192.168.1.42

@ IN NS debns1
my-name.com. IN NS debns2.my-name.com.


;Mail servers
debmail1 IN A 192.168.1.51
debmail2.my-name.com. IN A 192.168.1.52

@ IN MX 10 debmail1
my-name.com. IN MX 20 debmail2.my-name.com.


;Aliased servers
debhp IN A 192.168.1.61
debdell.my-name.com. IN A 192.168.1.62

www IN CNAME debhp
ftp.my-name.com. IN CNAME debdell.my-name.com.
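Aside from the multi-line SOA, every record above is a single line of the form 'name IN TYPE value', with MX records carrying an extra priority field. A rough sketch of parsing such one-line records (comments and the SOA record are out of scope here):

```python
def parse_record(line):
    """Parse a simple one-line zone record like those above.

    Handles the common 'name IN TYPE value' shape plus the extra
    priority field carried by MX records.
    """
    fields = line.split()
    name, rtype = fields[0], fields[2]
    if rtype == 'MX':
        return {'name': name, 'type': rtype,
                'priority': int(fields[3]), 'value': fields[4]}
    return {'name': name, 'type': rtype, 'value': fields[3]}

print(parse_record('debns1 IN A 192.168.1.41'))
print(parse_record('@ IN MX 10 debmail1'))
```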


Source: http://www.aboutdebian.com/dns.htm

Celery
+Django Celery with django-celery-results extension (Nov. 11, 2016, 10:37 a.m.)

pip install celery
pip install django_celery_results
pip install django_celery_beat

------------------------------------------------

# project/project/celery.py

from __future__ import absolute_import, unicode_literals
import os

from celery import Celery


os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

------------------------------------------------

# project/project/__init__.py

from __future__ import absolute_import, unicode_literals

from .celery import app as celery_app


__all__ = ['celery_app']

------------------------------------------------

project/project/tasks.py

from __future__ import absolute_import

from celery import shared_task


@shared_task
def begin_ping():
    return 'hi'

------------------------------------------------

# settings.py

INSTALLED_APPS = (
    'celery',
    'django_celery_results',
    'django_celery_beat',
)

CELERY_RESULT_BACKEND = 'django-db'

------------------------------------------------

python manage.py migrate django_celery_results
python manage.py migrate django_celery_beat

------------------------------------------------

apt install rabbitmq-server
For running it:
rabbitmq-server

------------------------------------------------

Run these two commands in separate terminals, each with the virtualenv activated:
celery -A project beat -l info -S django
celery -A project worker -l info

The "celery -A project beat -l info -S django" command uses the "DatabaseScheduler", which reads the schedules from the Django admin panel.
You can instead use "celery -A project beat -l info", which uses the "PersistentScheduler" and reads the schedules defined in code alongside the tasks.

For having the schedules from Admin panel, refer to the link "Intervals" and define a suitable interval.
Then follow the link "Periodic tasks" and select the defined interval in the "Interval" dropdown list.

------------------------------------------------

+Celery and RabbitMQ with Django (Oct. 14, 2018, 9:54 a.m.)

1- pip install Celery

--------------------------------------------------------------

2- apt-get install rabbitmq-server

--------------------------------------------------------------

3- Enable and start the RabbitMQ service
systemctl enable rabbitmq-server
systemctl start rabbitmq-server

--------------------------------------------------------------

4- Add configuration to the settings.py file:
CELERY_BROKER_URL = 'amqp://localhost'

--------------------------------------------------------------

5- Create a new file named celery.py in your app:
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

app = Celery('mysite')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

--------------------------------------------------------------

6- Edit the __init__.py file in the project root:

from .celery import app as celery_app

__all__ = ['celery_app']

--------------------------------------------------------------

7- Create a file named tasks.py inside a Django app:

from celery import shared_task

@shared_task
def my_task(x, y):
    return x, y

--------------------------------------------------------------

8- In views.py

from .tasks import my_task

my_task.delay(x, y)

Instead of calling the "my_task" directly, we call my_task.delay(). This way we are instructing Celery to execute this function in the background.

--------------------------------------------------------------

9- Starting The Worker Process:

Open a new terminal tab, and run the following command:
celery -A mysite worker -l info

--------------------------------------------------------------

+Periodic Tasks from tasks.py (Oct. 14, 2018, 10:24 a.m.)

import datetime
from celery.task import periodic_task


@periodic_task(run_every=datetime.timedelta(minutes=5))
def myfunc():
    print('periodic_task')

+Periodic Tasks from settings.py (Oct. 14, 2018, 10:53 a.m.)

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': timedelta(seconds=30),
        'args': (16, 16),
    },
}
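The timedelta schedule above simply means "run every 30 seconds". This is not how celery beat is implemented internally, just a stdlib-only sketch of the arithmetic such an interval implies:

```python
from datetime import datetime, timedelta

def next_runs(start, interval, count):
    """Fire times a scheduler would derive from a fixed interval."""
    return [start + interval * i for i in range(1, count + 1)]

# Every 30 seconds, as in the 'schedule' entry above:
start = datetime(2018, 10, 14, 10, 53, 0)
for t in next_runs(start, timedelta(seconds=30), 3):
    print(t.isoformat())
```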

+Running tasks in shell (Oct. 11, 2018, 10:49 a.m.)

celery -A project_name beat

celery -A cdr worker -l info

+Daemon Scripts (Sept. 29, 2015, 11:39 a.m.)

These scripts are needed when you want to run the worker as a daemon.

The first is used for seeing the output of running tasks. For example, I had something printed in the console, from within the task, and I could see the output (the printed string) in this terminal.

The second is for firing up / starting the tasks.


1- Create a file /etc/supervisor/conf.d/celeryd.conf with this content:
[program:celery]
; Set full path to celery program if using virtualenv
command=/home/mohsen/virtualenvs/django-1.7/bin/celery worker -A cdr --loglevel=INFO

directory=/home/mohsen/websites/cdr/
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown. Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to sending SIGKILL to the program to terminate it, send SIGKILL to its whole process group instead, taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

--------------------------------------------------------------------------------------------

2- Create a file /etc/supervisor/conf.d/celerybeat.conf with this content:

[program:celerybeat]
; Set full path to celery program if using virtualenv
command=/home/mohsen/virtualenvs/django-1.7/bin/celery beat -A cdr

; remove the -A myapp argument if you are not using an app instance

directory=/home/mohsen/websites/cdr/
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/beat.log
stderr_logfile=/var/log/celery/beat.log
autostart=true
autorestart=true
startsecs=10

; if rabbitmq is supervised, set its priority higher so it starts first
priority=999

Ceph
+RBD (Oct. 30, 2017, 10:01 a.m.)

rbd is a utility for manipulating rados block device (RBD) images, used by the Linux rbd driver and the rbd storage driver for Qemu/KVM. RBD images are simple block devices that are striped over objects and stored in a RADOS object store. The size of the objects the image is striped over must be a power of two.
-------------------------------------------------------------
rbd -p image ls

rbd -p image info Windows7x8

rbd -p image rm Win7x86WithApps

rbd export --pool=image disk_user01_2 /root/Windows7x86.qcow2

The "2" is the ID of the Template in deskbit admin panel.
-------------------------------------------------------------

+Changing a Monitor’s IP address (Sept. 19, 2017, 4:42 p.m.)

http://docs.ceph.com/docs/kraken/rados/operations/add-or-rm-mons/
-----------------------------------------------------------------------
ceph mon getmap -o /tmp/a

monmaptool --print /tmp/a

monmaptool --rm vdiali /tmp/a

monmaptool --add vdiali 10.10.1.121 /tmp/a

monmaptool --print /tmp/a

systemctl stop ceph-mon*

ceph-mon -i vdimohsen --inject-monmap /tmp/a

Change IP in the following files:
/etc/network/interfaces
/etc/default/avalaunch
/etc/ceph/ceph.conf
/etc/hosts

+Properly remove an OSD (Aug. 23, 2017, 12:35 p.m.)

If not done properly, removing an OSD can result in rebalancing twice. The best practice is to change the CRUSH weight to 0.0 as the first step.

$ ceph osd crush reweight osd.<ID> 0.0

Then you wait for rebalancing to be completed. Eventually completely remove the OSD:

$ ceph osd out <ID>
$ service ceph stop osd.<ID>
$ ceph osd crush remove osd.<ID>
$ ceph auth del osd.<ID>
$ ceph osd rm <ID>
----------------------------------------------------------
From the docs:
Remove an OSD

To remove an OSD from the CRUSH map of a running cluster, execute the following:
ceph osd crush remove {name}

For getting the name:
ceph osd tree

+Errors - undersized+degraded+peered (July 4, 2017, 5:25 p.m.)

http://mohankri.weebly.com/my-interest/single-host-multiple-osd
---------------------------------------------------------
ceph osd crush rule create-simple same-host default osd

ceph osd pool set rbd crush_ruleset 1
---------------------------------------------------------

+Commands (July 3, 2017, 3:53 p.m.)

ceph osd tree

ceph osd dump

ceph osd lspools

ceph osd pool ls

ceph osd pool get rbd all

ceph osd pool set rbd size 2

ceph osd crush rule ls
-----------------------------------------------------
ceph-osd -i 0

ceph-osd -i 0 --mkfs --mkkey
-----------------------------------------------------
ceph -w

ceph -s

ceph health detail
-----------------------------------------------------
ceph-disk activate /var/lib/ceph/osd/ceph-0

ceph-disk list

chown ceph:disk /dev/sda1 /dev/sdb1
-----------------------------------------------------
ceph-mon -f --cluster ceph --id vdi --setuser ceph --setgroup ceph
-----------------------------------------------------
systemctl -a | grep ceph

systemctl status ceph-osd*

systemctl status ceph-mon*

systemctl enable ceph-mon.target
-----------------------------------------------------
rbd -p image ls

rbd export --pool=image disk_win_7 /root/win7.img
-----------------------------------------------------
cd /var/lib/ceph/osd/
ceph-2 ceph-3 ceph-8


mount
mount | grep -i vda
mount | grep -i vdb
mount | grep -i vdc
mount | grep ceph

fdisk -l

mount /dev/vdc1 ceph-3/

systemctl restart ceph-osd@3
ceph osd tree
********************************
systemctl restart ceph-osd@5

mount | grep -i ceph


systemctl restart ceph-osd@5
Job for ceph-osd@5.service failed because the control process exited with error code.
See "systemctl status ceph-osd@5.service" and "journalctl -xe" for details.

systemctl daemon-reload
systemctl restart ceph-osd@5
ceph osd tree
ceph -w

-----------------------------------------------------

+ceph-ansible (Jan. 7, 2017, 10:58 a.m.)

https://github.com/ceph/ceph-ansible
---------------------------------------------------
0- apt-get update # Ensure you do this step before running ceph-ansible!!!

1- apt-get install libffi-dev libssl-dev python-pip python-setuptools sudo python-dev

git clone https://github.com/ceph/ceph-ansible/
---------------------------------------------------
2- pip install markupsafe ansible
---------------------------------------------------
3-Setup your Ansible inventory file:
[mons]
mohsen3.deskbit.local

[osds]
mohsen3.deskbit.local
---------------------------------------------------
4-Now enable the site.yml and group_vars files:

cp site.yml.sample site.yml

You need to copy all files within `group_vars` directory; omit the `.sample` part:
for f in *.sample; do cp "$f" "${f/.sample/}"; done
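If you prefer Python, a rough equivalent of the shell loop above (the copy_samples helper name is made up for this sketch):

```python
import shutil
from pathlib import Path

def copy_samples(directory):
    """Copy every *.sample file in `directory` to the same name without
    the .sample suffix (e.g. all.yml.sample -> all.yml), like the shell
    loop above. Returns the sorted list of created file names."""
    copied = []
    for sample in Path(directory).glob('*.sample'):
        target = sample.with_suffix('')  # drops only the final '.sample'
        shutil.copyfile(sample, target)
        copied.append(target.name)
    return sorted(copied)
```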
---------------------------------------------------
5-Open the file `group_vars/all.yml` for editing:

nano group_vars/all.yml

Uncomment the variable `ceph_origin` and replace `upstream` with `distro`:
ceph_origin: 'distro'

Uncomment and replace:
monitor_interface: eth0

Uncomment:
journal_size: 5120
---------------------------------------------------
6-Choosing a scenario:
Open the file `group_vars/osds.yml` and uncomment and set to `true` the following variables:

osd_auto_discovery: true
journal_collocation: true
---------------------------------------------------
7- Any needed configs for ceph should be added to the file `group_vars/all.yml`.
Uncomment and change:

ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: 8
    osd_pool_default_size: 1
---------------------------------------------------
Path to variables file:
/etc/ansible/playbooks/ceph/ceph-ansible/roles/ceph-common/templates/ceph.conf.j2
---------------------------------------------------

+Adding Monitors (Jan. 4, 2017, 2:13 p.m.)

A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high availability, Ceph Storage Clusters typically run multiple Ceph Monitors so that the failure of a single Ceph Monitor will not bring down the Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority of monitors (i.e., 1, 2:3, 3:4, 3:5, 4:6, etc.) to form a quorum.
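The majority requirement above can be sketched as: with n monitors, a quorum needs floor(n/2) + 1 of them, which is why even monitor counts tolerate no more failures than the next-lower odd count:

```python
def quorum_majority(n_monitors):
    """Smallest majority of n monitors (Paxos needs more than half)."""
    return n_monitors // 2 + 1

def tolerated_failures(n_monitors):
    """Monitors that can fail while a quorum can still form."""
    return n_monitors - quorum_majority(n_monitors)

# Matches the ratios in the text: 1 of 1, 2 of 3, 3 of 5, ...
for n in (1, 3, 5):
    print(n, quorum_majority(n), tolerated_failures(n))
```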

Add two Ceph Monitors to your cluster.
-------------------------------------------
ceph-deploy mon add node2
ceph-deploy mon add node3
-------------------------------------------
Once you have added your new Ceph Monitors, Ceph will begin synchronizing the monitors and form a quorum. You can check the quorum status by executing the following:

ceph quorum_status --format json-pretty
-------------------------------------------
When you run Ceph with multiple monitors, you SHOULD install and configure NTP on each monitor host. Ensure that the monitors are NTP peers.
-------------------------------------------

+Adding an OSD (Jan. 4, 2017, 2:08 p.m.)

1- mkdir /var/lib/ceph/osd/ceph-3

2- ceph-disk prepare /var/lib/ceph/osd/ceph-3

3- ceph-disk activate /var/lib/ceph/osd/ceph-3

4- Once you have added your new OSD, Ceph will begin rebalancing the cluster by migrating placement groups to your new OSD. You can observe this process with the ceph CLI:
ceph -w

You should see the placement group states change from active+clean to active with some degraded objects, and finally active+clean when migration completes. (Control-c to exit.)

+Storage Cluster (Jan. 3, 2017, 3:10 p.m.)

To purge the Ceph packages, execute: (Used for when you want to purge data)
ceph-deploy purge node1


If at any point you run into trouble and you want to start over, execute the following to purge the configuration:
ceph-deploy purgedata node1
ceph-deploy forgetkeys
--------------------------------------------
1-Create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster:
mkdir my-cluster
cd my-cluster
--------------------------------------------
2-Create the cluster:
ceph-deploy new node1

Using `ls` command, you should see a Ceph configuration file, a monitor secret keyring, and a log file for the new cluster.
--------------------------------------------
3-Change the default number of replicas in the Ceph configuration file from 3 to 2 so that Ceph can achieve an active + clean state with just two Ceph OSDs. Add the following line under the [global] section:

osd pool default size = 2
osd_max_object_name_len = 256
osd_max_object_namespace_len = 64

These two last options are for EXT4; based on this link:
http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/
--------------------------------------------
4-Install Ceph:
ceph-deploy install node1

The ceph-deploy utility will install Ceph on each node.
--------------------------------------------
5-Add the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial

Once you complete the process, your local directory should have the following keyrings:

{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring
--------------------------------------------
6-Add OSDs:
For fast setup, this quick start uses a directory rather than an entire disk per Ceph OSD Daemon.

See:
http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd
for details on using separate disks/partitions for OSDs and journals.

Login to the Ceph Nodes and create a directory for the Ceph OSD Daemon.
ssh node2
sudo mkdir /var/local/osd0
exit

ssh node3
sudo mkdir /var/local/osd1
exit

Then, from your admin node, use ceph-deploy to prepare the OSDs.
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

Finally, activate the OSDs:
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
--------------------------------------------
7-Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

ceph-deploy admin node1 node2

Log in to the nodes and ensure that you have the correct permissions for the ceph.client.admin.keyring.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph health
-------------------------------------------

+Ceph Node Setup (Jan. 3, 2017, 2:55 p.m.)

1-Create a user on each Ceph Node.
--------------------------------------------
2-Add sudo privileges for the user on each Ceph Node.
--------------------------------------------
3-Configure your ceph-deploy admin node with password-less SSH access to each Ceph Node.
ssh-keygen and ssh-copy-id
--------------------------------------------
4-Modify the ~/.ssh/config file of your ceph-deploy admin node so that it logs into Ceph Nodes as the user you created.
Host node1
    Hostname node1
    User root
Host node2
    Hostname node2
    User root
Host node3
    Hostname node3
    User root
--------------------------------------------
5-Add to /etc/hosts:
10.10.0.84 node1
10.10.0.85 node2
10.10.0.86 node3
10.10.0.87 node4
--------------------------------------------
6-Change the hostname of each node to the ones from the earlier step (node1, node2, node3, ...):
nano /etc/hostname
reboot each node
--------------------------------------------

+Acronyms (Jan. 1, 2017, 3:40 p.m.)

CRUSH: Controlled Replication Under Scalable Hashing
EBOFS: Extent and B-tree based Object File System
HPC: High-Performance Computing
MDS: MetaData Server
OSD: Object Storage Device
PG: Placement Group
PGP: Placement Group for Placement purpose
POSIX: Portable Operating System Interface for Unix
RADOS: Reliable Autonomic Distributed Object Store
RBD: RADOS Block Devices

+Ceph Deploy (Dec. 28, 2016, 12:51 p.m.)

Descriptions:
The admin node must have password-less SSH access to the Ceph nodes. When ceph-deploy logs into a Ceph node as a user, that particular user must have passwordless sudo privileges.

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift. See Clock for details.

Ensure that you enable the NTP service, and that each Ceph Node uses the same NTP time server.
------------------------------------------------------
For ALL Ceph Nodes perform the following steps:
sudo apt-get install openssh-server
------------------------------------------------------
Create a Ceph Deploy User:
The ceph-deploy utility must log into a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

We recommend creating a specific user for ceph-deploy on ALL Ceph nodes in the cluster. Please do NOT use “ceph” as the username. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because hackers typically use them with brute force hacks (e.g., root, admin, {productname}). The following procedure, substituting {username} for the username you define, describes how to create a user with passwordless sudo.

sudo useradd -d /home/{username} -m {username}
sudo passwd {username}
------------------------------------------------------

------------------------------------------------------

+Installation (Dec. 27, 2016, 3:57 p.m.)

http://docs.ceph.com/docs/master/start/quick-start-preflight/
----------------------------------------------------
1- wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

2- echo deb https://download.ceph.com/debian-hammer/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

3- sudo apt-get install ceph ceph-deploy

+Definitions (Dec. 27, 2016, 1:10 p.m.)

Ceph:
Ceph is a storage technology.
-------------------------------------------------
Cluster:
A cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.
-------------------------------------------------
Clustering vs. Clouding:
Cluster differs from Cloud and Grid in that a cluster is a group of computers connected by a local area network (LAN), whereas cloud is more wide scale and can be geographically distributed. Another way to put it is to say that a cluster is tightly coupled, whereas a cloud is loosely coupled. Also, clusters are made up of machines with similar hardware, whereas clouds are made up of machines with possibly very different hardware configurations.
-------------------------------------------------
Ceph Storage Cluster:
A distributed object store that provides storage of unstructured data for applications.
-------------------------------------------------
Ceph Object Gateway:
A powerful S3- and Swift-compatible gateway that brings the power of the Ceph Object Store to modern applications.
-------------------------------------------------
Ceph Block Device:
A distributed virtual block device that delivers high-performance, cost-effective storage for virtual machines and legacy applications.
-------------------------------------------------
Ceph File System:
A distributed, scale-out filesystem with POSIX semantics that provides storage for legacy and modern applications.
-------------------------------------------------
RADOS:
A reliable, autonomous, distributed object store comprised of self-healing, self-managing intelligent storage nodes.
-------------------------------------------------
LIBRADOS:
A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP.
-------------------------------------------------
RADOSGW:
A bucket-based REST gateway, compatible with S3 and Swift.
-------------------------------------------------
RBD:
A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver.
-------------------------------------------------
Ceph FS:
A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE.
-------------------------------------------------
pg_num = number of placement groups mapped to an OSD
-------------------------------------------------
Placement Groups (PGs):

Ceph maps objects to placement groups. Placement groups are shards or fragments of a logical object pool that place objects as a group into OSDs. Placement groups reduce the amount of per-object metadata when Ceph stores the data in OSDs. A larger number of placement groups (e.g., 100 per OSD) leads to better balancing.
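A commonly cited sizing heuristic is total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. A minimal sketch (the function name is made up for illustration):

```python
def suggested_pg_count(num_osds, pool_size=3, pgs_per_osd=100):
    """Heuristic PG count: (OSDs * pgs_per_osd) / replicas,
    rounded up to the next power of two."""
    target = (num_osds * pgs_per_osd) / pool_size
    power = 1
    while power < target:
        power *= 2
    return power

# e.g. 9 OSDs with 2 replicas: target 450, rounded up to 512
print(suggested_pg_count(9, pool_size=2))  # 512
```

Treat the result as a starting point; the Ceph documentation discusses when to deviate from it.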
-------------------------------------------------

CSS
+Remove href values when printing (July 2, 2019, 12:11 a.m.)

@media print {
    a[href]:after {
        visibility: hidden;
    }
}

+Removing page title and date when printing (July 2, 2019, 12:09 a.m.)

@page {
    size: auto;
    margin: 0;
}

+Media Queries (Feb. 9, 2016, 12:05 p.m.)

@media all and (max-width: 480px) {

}


@media all and (min-width: 480px) and (max-width: 768px) {

}


@media all and (min-width: 768px) and (max-width: 1024px) {

}

@media all and (min-width: 1024px) {

}

/*------------------------------------------
Responsive Grid Media Queries - 1280, 1024, 768, 480
1280-1024 - desktop (default grid)
1024-768 - tablet landscape
768-480 - tablet
480-less - phone landscape & smaller
--------------------------------------------*/
@media all and (min-width: 1024px) and (max-width: 1280px) { }

@media all and (min-width: 768px) and (max-width: 1024px) { }

@media all and (min-width: 480px) and (max-width: 768px) { }

@media all and (max-width: 480px) { }

/*------------------------------------------
Foundation Media Queries
http://foundation.zurb.com/docs/media-queries.html
--------------------------------------------*/

/* Small screens - MOBILE */
@media only screen { } /* Define mobile styles - Mobile First */

@media only screen and (max-width: 40em) { } /* max-width 640px, mobile-only styles, use when QAing mobile issues */

/* Medium screens - TABLET */
@media only screen and (min-width: 40.063em) { } /* min-width 641px, medium screens */

@media only screen and (min-width: 40.063em) and (max-width: 64em) { } /* min-width 641px and max-width 1024px, use when QAing tablet-only issues */

/* Large screens - DESKTOP */
@media only screen and (min-width: 64.063em) { } /* min-width 1025px, large screens */

@media only screen and (min-width: 64.063em) and (max-width: 90em) { } /* min-width 1024px and max-width 1440px, use when QAing large screen-only issues */

/* XLarge screens */
@media only screen and (min-width: 90.063em) { } /* min-width 1441px, xlarge screens */

@media only screen and (min-width: 90.063em) and (max-width: 120em) { } /* min-width 1441px and max-width 1920px, use when QAing xlarge screen-only issues */

/* XXLarge screens */
@media only screen and (min-width: 120.063em) { } /* min-width 1921px, xlarge screens */

/*------------------------------------------*/



/* Portrait */
@media screen and (orientation:portrait) { /* Portrait styles here */ }
/* Landscape */
@media screen and (orientation:landscape) { /* Landscape styles here */ }


/* CSS for iPhone, iPad, and Retina Displays */

/* Non-Retina */
@media screen and (-webkit-max-device-pixel-ratio: 1) {
}

/* Retina */
@media only screen and (-webkit-min-device-pixel-ratio: 1.5),
only screen and (-o-min-device-pixel-ratio: 3/2),
only screen and (min--moz-device-pixel-ratio: 1.5),
only screen and (min-device-pixel-ratio: 1.5) {
}

/* iPhone Portrait */
@media screen and (max-device-width: 480px) and (orientation:portrait) {
}

/* iPhone Landscape */
@media screen and (max-device-width: 480px) and (orientation:landscape) {
}

/* iPad Portrait */
@media screen and (min-device-width: 481px) and (orientation:portrait) {
}

/* iPad Landscape */
@media screen and (min-device-width: 481px) and (orientation:landscape) {
}

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no" />


/*------------------------------------------
Live demo samples
- http://andrelion.github.io/mediaquery/livedemo.html
--------------------------------------------*/

+Media Tag (Sept. 2, 2015, 4:44 p.m.)

@media (max-width: 767px) {
    #inner-coffee-machine > div > img {
        width: 30%;
        height: 18%;
    }

    #inner-coffee-machine > div > div h3 {
        font-size: 2.5vh;
        font-weight: bold;
    }

    #inner-coffee-machine > div > div h5 {
        font-size: 2vh;
    }

    #club-inner {
        display: inline-table;
    }

    #inner-coffee-machine > div > div {
        width: 100%;
    }
}

@media (min-width: 768px) and (max-width: 991px) {

}

@media (min-width: 992px) and (max-width: 1199px) {

}

@media (min-width: 1200px) {

}

+Define new font (Sept. 1, 2015, 11:21 a.m.)

@font-face {
    font-family: nespresso;
    src: url("../fonts/nespresso.otf") format("opentype"),
         url("../fonts/nespresso.ttf") format("truetype");
}

@font-face {
    font-family: 'yekan';
    src: url(../fonts/yekan.eot) format("eot"),
         url(../fonts/yekan.woff) format("woff"),
         url(../fonts/yekan.ttf) format("truetype");
}

+CSS for different IE versions (July 27, 2015, 1:40 p.m.)

IE-6 ONLY

* html #div {
    height: 300px;
}
----------------------------------------------------------------------------
IE-7 ONLY

*+html #div {
    height: 300px;
}
----------------------------------------------------------------------------
IE-8 ONLY

#div {
    height: 300px\0/;
}
----------------------------------------------------------------------------
IE-7 & IE-8

#div {
    height: 300px\9;
}
----------------------------------------------------------------------------
NON IE-7 ONLY:

#div {
    _height: 300px;
}
----------------------------------------------------------------------------
Hide from IE 6 and LOWER:

#div {
    height/**/: 300px;
}
----------------------------------------------------------------------------
html > body #div {
    height: 300px;
}

+Fonts (July 13, 2015, 1:15 p.m.)

http://www.caritorsolutions.com/blog/162-how-to-use-font-awesome-icons
http://astronautweb.co/snippet/font-awesome/

+white-space (July 9, 2015, 3:44 a.m.)

white-space: normal;
The text will wrap.
-------------------------------
If you want to prevent the text from wrapping, you can apply:
white-space: nowrap;
-------------------------------
If we want to force the browser to display line breaks and extra white space characters we can use:
white-space: pre;
-------------------------------
If you want white space and breaks, but you need the text to wrap instead of potentially break out of its parent container:
white-space: pre-wrap;
-------------------------------
white-space: pre-line;
Will break lines where they break in code, but extra white space is still stripped.

Django
+Managers (Aug. 15, 2019, 11:06 a.m.)

class MyManager(models.Manager):
    def get_queryset(self):
        return super().get_queryset().filter(last_data__startswith='SIP/Mohsen')


class MyModel(models.Model):
    ...

    objects = models.Manager()
    my_objects = MyManager()

+FloatField vs DecimalField (July 31, 2019, 2:58 a.m.)

Always use DecimalField for money. Even simple operations (addition, subtraction) are not immune to float rounding issues.
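The rounding issue is easy to reproduce in plain Python (a minimal sketch using the standard decimal module):

```python
from decimal import Decimal

# float addition accumulates binary rounding error
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal (what DecimalField stores) keeps exact decimal arithmetic
print(Decimal('0.10') + Decimal('0.20'))                      # 0.30
print(Decimal('0.10') + Decimal('0.20') == Decimal('0.30'))   # True
```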

-------------------------------------------------------------

DecimalField:

- DecimalFields must define a 'decimal_places' and a 'max_digits' attribute.

- You get two free form validations included here from the above required attributes, i.e. If you set max_digits to 4, and you type in a decimal that is 4.00000 (5 digits), you will get this error: Ensure that there are no more than 4 digits in total.

- You also get a similar form validation for decimal places (which in most browsers will also be validated on the front end using the step attribute on the input field). If you set decimal_places = 1 and type in 0.001 as the value, you will get an error that the minimum value has to be 0.1.

- With a Decimal type, rounding is also handled for you due to the required attributes that need to be set.

- In the database (postgresql), the DecimalField is saved as a numeric(max_digits, decimal_places) Type, and Storage is set as "main"

-------------------------------------------------------------

FloatField:

- No smart rounding; floating-point arithmetic can actually introduce rounding issues.

- Does not have the extra form validation that you get from DecimalField

- In the database (postgresql), the FloatField is saved as a "double precision" Type, and Storage is set as "plain"

-------------------------------------------------------------

+Aggregation vs Annotation (July 22, 2019, 12:18 p.m.)

Aggregate calculates values for the entire queryset.
Aggregate generates result (summary) values over an entire QuerySet: it operates over the rowset to produce a single value (for example, the sum of all prices in the rowset).

Book.objects.aggregate(average_price=Avg('price'))
Returns a dictionary containing the average price of all books in the queryset.

-----------------------------------------------------------------------------

Annotate calculates summary values for each item in the queryset.
Annotate generates an independent summary for each object in a QuerySet (it iterates over each object in the QuerySet and applies the operation to it).

Annotation
>>> q = Book.objects.annotate(num_authors=Count('authors'))
>>> q[0].num_authors
2
>>> q[1].num_authors
1
q is the queryset of books, but each book has been annotated with the number of authors.

Annotation
videos = Video.objects.values('id', 'name', 'video').annotate(Count('user_likes', distinct=True))
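In plain-Python terms (an illustrative sketch, not the Django API): aggregate collapses the whole set into one summary value, while annotate attaches a summary to each item:

```python
books = [
    {'title': 'A', 'authors': ['x', 'y']},
    {'title': 'B', 'authors': ['z']},
]

# "aggregate": one value for the whole collection
avg_authors = sum(len(b['authors']) for b in books) / len(books)
print(avg_authors)  # 1.5

# "annotate": one computed value attached to each object
for b in books:
    b['num_authors'] = len(b['authors'])
print([b['num_authors'] for b in books])  # [2, 1]
```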

+m2m (July 11, 2019, 9:07 p.m.)

For a ModelForm, just do:
form.save()


If you had to use commit=False in form.save(), then you have to save the m2m manually:
if form.is_valid():
    project = form.save(commit=False)
    # Do something extra with "project" ....
    project.save()
    form.save_m2m()

---------------------------------------------------------

if form.fields.get('units'):
    new_category.units.set(data['units'])

---------------------------------------------------------

+Style Admin Interface in admin.py (April 18, 2018, 7:39 a.m.)

class NoteAdmin(admin.ModelAdmin):
    search_fields = ('title', 'note')
    list_filter = ('category',)

    class Media:
        css = {
            'all': ('admin/css/interface.css',)
        }

-------------------------------------------------------------

The path to "interface.css" is:
Projects/notes/notes/static/admin/css/interface.css

-------------------------------------------------------------

And finally, I couldn't make "nginx" recognize this file. For solving the problem I had to comment the "location /static/admin/" block in nginx file, and do "collectstatic" in my project to just gather together all admin static files.

-------------------------------------------------------------

+Ajax and CSRF (April 22, 2018, 7:08 p.m.)

$.ajax({
    type: 'POST',
    url: $(this).attr('href'),
    data: {
        csrfmiddlewaretoken: '{{ csrf_token }}',
    },
    dataType: 'json',
    success: function (status) {

    },
    error: function () {

    }
});

+Django-2 Sample settings.py (April 29, 2018, 2:44 p.m.)

import os
import re


def gettext_noop(s):
    return s

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

ROOT_URLCONF = 'mohsenhassani.urls'

DEBUG = True

ADMINS = [('Mohsen Hassani', 'Mohsen@MohsenHassani.com')]

ALLOWED_HOSTS = []
if DEBUG:
    ALLOWED_HOSTS.extend(['localhost', '127.0.0.1'])

TIME_ZONE = 'Asia/Tehran'

USE_TZ = True

LANGUAGE_CODE = 'en-us'

LANGUAGES = [('en', gettext_noop('English')),
             ('fa', gettext_noop('Persian'))]

USE_I18N = True
LOCALE_PATHS = [
    os.path.join(BASE_DIR, 'locale'),
]

USE_L10N = True

SERVER_EMAIL = 'report@mohsenhassani'

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mohsenhassanidb',
        'USER': 'root',
        'PASSWORD': '',
    }
}

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.humanize',
    'mohsenhassani',
]

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

PREPEND_WWW = False

DISALLOWED_USER_AGENTS = [
    re.compile(r'^NaverBot.*'),
    re.compile(r'^EmailSiphon.*'),
    re.compile(r'^SiteSucker.*'),
    re.compile(r'^sohu-search'),
    re.compile(r'^DotBot'),
]

IGNORABLE_404_URLS = [
    re.compile(r'^/favicon.ico$'),
    re.compile(r'^/robots.txt$'),
]

SECRET_KEY = 'xqb&)90m*_!n3ovc$@%mo8!8!7j5d9o=8nm(iyw%#mzz&o1n6)'

MEDIA_ROOT = os.path.join(BASE_DIR, 'mohsenhassani', 'media/')
MEDIA_URL = '/media/'

STATIC_ROOT = os.path.join(BASE_DIR, 'mohsenhassani', 'static/')
STATIC_URL = '/static/'

FILE_UPLOAD_MAX_MEMORY_SIZE = 52428800 # i.e. 50 MB

WSGI_APPLICATION = 'mohsenhassani.wsgi.application'

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

SESSION_EXPIRE_AT_BROWSER_CLOSE = True

AUTH_USER_MODEL = 'accounts.User'

LOGIN_URL = '/accounts/login/'

LOGIN_REDIRECT_URL = '/accounts/profile/'

LOGOUT_REDIRECT_URL = None

PASSWORD_RESET_TIMEOUT_DAYS = 3

AUTH_PASSWORD_VALIDATORS = []

NUMBER_GROUPING = 3

+Send HTML Email with Attachment (April 30, 2018, 6:40 p.m.)

from django.core.mail import EmailMessage

email = EmailMessage('subject',
                     'message',
                     'email_from',
                     ['to_email'])

email.content_subtype = "html"

if data['attachment']:
    file_ = data['attachment']
    email.attach(file_.name, file_.read(), file_.content_type)

email.send()

-----------------------------------------------------------------------------

for attachment in request.FILES:
    if data[attachment]:
        file_ = data[attachment]
        email.attach(file_.name, file_.read(), file_.content_type)

-----------------------------------------------------------------------------

+URL - Login Required & is_superuser (May 1, 2018, 11:56 a.m.)

from django.contrib.auth.decorators import login_required
from django.contrib.auth.decorators import user_passes_test


urlpatterns = [
    path('reports/', user_passes_test(lambda u: u.is_superuser)(
        login_required(report.reports)), name='reports'),
]



Oops... It seems "user_passes_test" already handles the "login_required" check somehow, so remove that decorator:

path('reports/', user_passes_test(lambda u: u.is_superuser)(report.reports), name='reports'),

+Database Functions, Aggregation, Annotations (June 16, 2018, 11:55 a.m.)

from django.db.models import F


OrgPayment.objects.update(shares=F('shares') / 70000)
Property.objects.filter(id=pid).update(views=F('views') + 1)

------------------------------------------------------------

from django.db.models import Count

Book.objects.annotate(num_authors=Count('authors')).order_by('num_authors')

------------------------------------------------------------

from django.db.models import Avg

Author.objects.annotate(average_rating=Avg('book__rating'))

------------------------------------------------------------

from django.db.models import Avg, Count

Book.objects.annotate(num_authors=Count('authors')).aggregate(Avg('num_authors'))

------------------------------------------------------------

Database Functions:

Coalesce:

from django.db.models import Sum, Value
from django.db.models.functions import Coalesce

certificates_total_hours = reward_request.chosen_certificates.aggregate(
    total_hours=Coalesce(Sum('course_hours'), Value(0)))

------------------------------------------------------------

Concat:

# Get the display name as "name (goes_by)"

from django.db.models import CharField, Value as V
from django.db.models.functions import Concat

Author.objects.create(name='Margaret Smith', goes_by='Maggie')
author = Author.objects.annotate(
    screen_name=Concat('name', V(' ('), 'goes_by', V(')'),
                       output_field=CharField())).get()
print(author.screen_name)

------------------------------------------------------------

Length:

Accepts a single text field or expression and returns the number of characters the value has. If the expression is null, then the length will also be null.

from django.db.models.functions import Length


Author.objects.create(name='Margaret Smith')
author = Author.objects.annotate(
    name_length=Length('name'),
    goes_by_length=Length('goes_by')).get()
print(author.name_length, author.goes_by_length)

------------------------------------------------------------

Lower:

Accepts a single text field or expression and returns the lowercase representation.

Usage example:

>>> from django.db.models.functions import Lower
>>> Author.objects.create(name='Margaret Smith')
>>> author = Author.objects.annotate(name_lower=Lower('name')).get()
>>> print(author.name_lower)
margaret smith

------------------------------------------------------------

Substr:

Returns a substring of length length from the field or expression, starting at position pos. The position is 1-indexed, so it must be greater than 0. If length is None, the rest of the string is returned.

Usage example:

>>> # Set the alias to the first 5 characters of the name as lowercase
>>> from django.db.models.functions import Substr, Lower
>>> Author.objects.create(name='Margaret Smith')
>>> Author.objects.update(alias=Lower(Substr('name', 1, 5)))
1
>>> print(Author.objects.get(name='Margaret Smith').alias)
marga

------------------------------------------------------------

Upper:

Accepts a single text field or expression and returns the uppercase representation.


>>> from django.db.models.functions import Upper
>>> Author.objects.create(name='Margaret Smith')
>>> author = Author.objects.annotate(name_upper=Upper('name')).get()
>>> print(author.name_upper)
MARGARET SMITH

------------------------------------------------------------

+Create directories if they don't exist (June 17, 2018, 6:09 p.m.)

import os

from django.conf import settings


avatar_path = '%s/images/avatars' % settings.MEDIA_ROOT
if not os.path.exists(avatar_path):
    os.makedirs(avatar_path)
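On Python 3.2+ the existence check can be dropped entirely by passing exist_ok=True to os.makedirs. A minimal sketch, using a temporary directory to stand in for settings.MEDIA_ROOT:

```python
import os
import tempfile

media_root = tempfile.mkdtemp()  # stands in for settings.MEDIA_ROOT
avatar_path = os.path.join(media_root, 'images', 'avatars')

# creates all intermediate directories; no error if they already exist
os.makedirs(avatar_path, exist_ok=True)
os.makedirs(avatar_path, exist_ok=True)  # second call is a no-op

print(os.path.isdir(avatar_path))  # True
```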

+Serve media files in debug mode (April 15, 2019, 12:11 p.m.)

urls.py:
---------

from django.conf import settings
from django.conf.urls.static import static


if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

+Save file path to Django ImageField (June 17, 2018, 7:10 p.m.)

models.py:
-------------
avatar = models.ImageField(_('avatar'), upload_to='manager/images/avatars/', null=True, blank=True)


views:
--------
request.user.avatar.name = 'images/avatars/mohsen.png'
request.user.save()

+Forms - Validate Excel File (July 2, 2018, 10:53 a.m.)

from xlrd import open_workbook, XLRDError

from django import forms
from django.utils.translation import ugettext_lazy as _


class UploadExcelForm(forms.Form):
    file = forms.FileField(label=_('file'))

    def clean_file(self):
        try:
            open_workbook(file_contents=self.cleaned_data['file'].read())
            self.cleaned_data['file'].file.seek(0)
        except XLRDError:
            raise forms.ValidationError(_('Please upload a valid excel file.'))
        return self.cleaned_data['file']

+Messages (July 6, 2018, 8:57 p.m.)

View:
------------
from django.contrib import messages


messages.success(request, _('The information was saved successfully.'))
return HttpResponseRedirect(reverse('url', args=(code,)))

-----------------------------------------------------------------

Template:
------------

{% if messages %}
    <ul class="messages">
        {% for message in messages %}
            <li {% if message.tags %} class="{{ message.tags }}" {% endif %}>{{ message }}</li>
        {% endfor %}
    </ul>
{% endif %}

-----------------------------------------------------------------

{% if message.tags == 'success' %}

+QuerySet - Filter based on Text Length (July 16, 2018, 3:04 p.m.)

from django.db.models.functions import Length

invalid_username = Driver.objects.annotate(
    text_len=Length('username')).filter(text_len__lt=11)

+QuerySet - Duplicate objects based on a specific field (July 16, 2018, 3:16 p.m.)

duplicate_plate_number_ids = Driver.objects.values(
    'plate_number').annotate(Count('plate_number')).order_by().filter(
    plate_number__count__gt=1).values_list('plate_number', flat=True)

+Bulk Insert / Bulk Create (Oct. 7, 2018, 11:07 a.m.)

entry_records = []

for i in range(2000):
    entry_records.append(Entry(headline='This is a test'))


Entry.objects.bulk_create(entry_records)

+Force files to open in the browser instead of downloading (Oct. 9, 2018, 8:48 a.m.)

Force browser that the file should be viewed in the browser:

Content-Type: application/pdf
Content-Disposition: inline; filename="filename.pdf"


To have the file downloaded rather than viewed:

Content-Type: application/pdf
Content-Disposition: attachment; filename="filename.pdf"
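A small helper for building the header value (the function name is hypothetical; in a Django view you would assign the result to response['Content-Disposition']):

```python
def content_disposition(filename, inline=True):
    """Build a Content-Disposition header value for a response."""
    disposition = 'inline' if inline else 'attachment'
    return '%s; filename="%s"' % (disposition, filename)

# In a Django view (sketch):
#   response = HttpResponse(pdf_bytes, content_type='application/pdf')
#   response['Content-Disposition'] = content_disposition('filename.pdf')

print(content_disposition('filename.pdf'))          # inline; filename="filename.pdf"
print(content_disposition('filename.pdf', False))   # attachment; filename="filename.pdf"
```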

+Database creation error when running django tests (April 13, 2019, 2:10 p.m.)

In case of having this error when running django tests:
Got an error creating the test database: permission denied to create database

Log in to psql shell and let your settings.py database user to create databases:
alter user my_user createdb;

+Find Model Relations (Oct. 17, 2018, 4:48 p.m.)

for field in [f for f in file._meta.get_fields() if not f.concrete]:

----------------------------------------------------------------------

model = field.related_model

model = type(instance)

# For deferred instances
model = instance._meta.proxy_for_model

----------------------------------------------------------------------

app_label = model._meta.app_label

app_label = instance._meta.app_label

----------------------------------------------------------------------

model_name = model.__name__

----------------------------------------------------------------------

if field.get_internal_type() == 'ForeignKey':

----------------------------------------------------------------------

field.remote_field.name

----------------------------------------------------------------------

field.through.objects.filter(file_id=file.id)

----------------------------------------------------------------------

ct = ContentType.objects.get_for_model(model)

----------------------------------------------------------------------

model._meta.local_fields

----------------------------------------------------------------------

+Pass JSON object data from view to template (April 13, 2019, 11:32 a.m.)

View:

import json


data = json.dumps(the_dictionary)
return render(request, 'abc.html', {'data': data})

----------------------------------------------------

Template:

<script type="text/javascript">
{{ data|safe }}
</script>

+Form - Access Field type in template (Dec. 8, 2018, 12:23 p.m.)

{{ field.field.widget.input_type }}

+QuerySet - Group By (Dec. 14, 2018, 8:52 a.m.)

requests = Loan.objects.filter(loan__type='n',
                               status__status__in=['1', '2', '3'])
stats = requests.values(
    'personnel__center__title').annotate(Count('id')).order_by()



{% for stat in stats %}
    <tr>
        <td>{{ forloop.counter }}</td>
        <td>{{ stat.personnel__center__title }}</td>
        <td>{{ stat.id__count }}</td>
    </tr>
{% endfor %}

-----------------------------------------------------------------------

this_week_articles = Article.objects.filter(
    created_at__gte=seven_days_ago,
    deleted=False
).values('creating_user__first_name',
         'creating_user__last_name').annotate(Count('pk')).order_by()


# Result is:
<QuerySet [{'creating_user__last_name': 'Hassani', 'creating_user__first_name': 'Mohsen', 'pk__count': 286}, {'creating_user__last_name': 'BiGheri', 'creating_user__first_name': 'Mehdi', 'pk__count': 31}]>

-----------------------------------------------------------------------

from itertools import groupby

def extract_call_id(call):
    return call.call_id

# Note: itertools.groupby only groups consecutive items,
# so today_calls must already be sorted by call_id.
grouped_call_ids = [list(g) for t, g in groupby(today_calls, key=extract_call_id)]

-----------------------------------------------------------------------

+Google reCAPTCHA API (Dec. 17, 2018, 12:55 p.m.)

1- Register your application in the reCAPTCHA admin:
https://www.google.com/recaptcha/admin#list


2- After registering your website, you will be handed a Site key and a Secret key. The Site key will be used in the reCAPTCHA widget which is rendered within the page where you want to place it. The Secret key will be stored safely in the server, made available through the settings.py module.
GOOGLE_RECAPTCHA_SECRET_KEY = ''


3- Add the following tag to the head:
<script src='https://www.google.com/recaptcha/api.js'></script>


4- Add the following tag to the form:
<div class="g-recaptcha" data-sitekey=""></div>


5- pip install requests


6- Views.py
import requests
from django.conf import settings

if request.POST:
    recaptcha_response = request.POST.get('g-recaptcha-response')
    data = {
        'secret': settings.GOOGLE_RECAPTCHA_SECRET_KEY,
        'response': recaptcha_response
    }
    response = requests.post(
        'https://www.google.com/recaptcha/api/siteverify', data=data)
    result = response.json()

    if result['success']:
        ...  # reCAPTCHA passed; process the form
    else:
        ...  # reCAPTCHA failed; show an error to the user

+Split QuerySets (Dec. 17, 2018, 10:26 p.m.)

def chunks(l, n):
    for i in range(0, len(l), n):
        yield l[i:i + n]


------------------------------------------------------------

Usage Example:

excel_file = get_object_or_404(ExcelFile, id=eid)
job_list = list(chunks(excel_file.tempdata_set.all(), 250))
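The same generator works on any sliceable sequence, so it can be sanity-checked with a plain list:

```python
def chunks(l, n):
    """Yield successive n-sized slices from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

print(list(chunks(list(range(5)), 2)))  # [[0, 1], [2, 3], [4]]
```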

------------------------------------------------------------

+Get all related Django model objects (Dec. 30, 2018, 12:30 p.m.)

from django.db.models.deletion import Collector
from django.contrib.admin.utils import NestedObjects

user = User.objects.get(id=1)

collector = NestedObjects(using="default")
collector.collect([user])
print(collector.data)

+Admin - Render checkboxes for m2m (Jan. 13, 2019, 10:06 a.m.)

admin.py:

---------------------------------------------------------

from django.contrib.auth.admin import UserAdmin
from django.db import models
from django.forms import CheckboxSelectMultiple


class PersonnelAdmin(UserAdmin):
    formfield_overrides = {
        models.ManyToManyField: {'widget': CheckboxSelectMultiple}
    }

+Truncate a long string (Jan. 27, 2019, 1:47 a.m.)

data = data[:75]

----------------------------------------------------------------------

import textwrap

textwrap.shorten("Hello world!", width=12)  # 'Hello world!' (fits within 12 chars)

textwrap.shorten("Hello world", width=10, placeholder="...")  # 'Hello...'

----------------------------------------------------------------------

from django.utils.text import Truncator

value = Truncator(value).chars(75)

----------------------------------------------------------------------

+Model Conventions (Feb. 8, 2019, 7:53 a.m.)

https://steelkiwi.com/blog/best-practices-working-django-models-python/

+CSRF Token in an external javascript file (March 16, 2019, 2:11 p.m.)

function getCookie(name) {
    var cookieValue = null;
    if (document.cookie && document.cookie != '') {
        var cookies = document.cookie.split(';');
        for (var i = 0; i < cookies.length; i++) {
            var cookie = cookies[i].trim();
            // Does this cookie string begin with the name we want?
            if (cookie.substring(0, name.length + 1) == (name + '=')) {
                cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
                break;
            }
        }
    }
    return cookieValue;
}


// Then call it like the following:
getCookie('csrftoken');

+Forms - Validation (March 11, 2018, 4:29 p.m.)

class ReportForm1(forms.Form):
    src_server_ip = forms.CharField(required=False)
    dst_server_ip = forms.CharField(required=False)

    def clean(self):
        if self.cleaned_data['src_server_ip'] == '' and self.cleaned_data[
                'dst_server_ip'] == '':
            self.add_error('src_server_ip',
                           'At least a source or destination is required.')

+URL Regex that accepts all characters (Jan. 20, 2018, 1:14 a.m.)

(.*)
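As a quick sanity check outside Django, the same catch-all group can be exercised with Python's re module (the `page/` prefix here is just an illustrative assumption):

```python
import re

# A URL regex using the catch-all group; 'page/' is a hypothetical prefix
pattern = re.compile(r'^page/(.*)$')

# The group matches any characters, including slashes, spaces, and symbols
match = pattern.match('page/any/path with spaces & symbols!')
print(match.group(1))  # -> any/path with spaces & symbols!
```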

+Forms - Custom ModelChoiceField (Nov. 15, 2017, 3:54 p.m.)

class AppointmentChoiceField(forms.ModelChoiceField):
    def label_from_instance(self, appointment):
        return "%s" % appointment.get_time()

--------------------------------------------------------------------

class IntCommaChoiceField(forms.ModelChoiceField):
    def label_from_instance(self, base_amount):
        return "%s" % intcomma(base_amount)

--------------------------------------------------------------------

class LoanAmountEditForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields['base_amount'] = IntCommaChoiceField(
            queryset=LoanBaseAmount.objects.all(),
            label=_('base amount')
        )

    class Meta:
        model = LoanAmount
        exclude = []

--------------------------------------------------------------------

+JPG Validator (July 17, 2017, 10:23 a.m.)

from PIL import Image

from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _


def jpg_validator(certificate):
    file_type = Image.open(certificate.file).format
    certificate.file.seek(0)
    if file_type == 'JPEG':  # PIL reports jpg files as 'JPEG', never 'jpg'
        return True
    raise ValidationError(_('The extension of certificate file should be jpg.'))

+Views - order_by sum of fields (June 10, 2017, 1:24 p.m.)

top_traffic_servers = Server.objects.extra(
    select={'sum': 'total_bytes_outgoing + total_bytes_incoming'},
    order_by=('-sum',))


---------------------------------------------------

If you need to do some filtering, you can add filter() to the end:

top_traffic_servers = Server.objects.extra(
    select={'sum': 'total_bytes_outgoing + total_bytes_incoming'},
    order_by=('-sum',)).filter(status='1')
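A sketch of the same query without `.extra()`, using `annotate()` with F expressions (assumes the same Server model and field names; `total` is an arbitrary alias):

```python
from django.db.models import F

# Equivalent ordering by the sum of two fields, computed in the database
top_traffic_servers = Server.objects.annotate(
    total=F('total_bytes_outgoing') + F('total_bytes_incoming')
).order_by('-total')
```

This avoids raw SQL fragments and still allows chaining `.filter()` as above.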

+Use MySQL or MariaDB with Django (May 18, 2017, 10:11 p.m.)

1- Installation:
MySQL:
sudo apt-get install python-pip python-dev mysql-server libmysqlclient-dev

MariaDB:
sudo apt-get install python-pip python-dev mariadb-server libmariadbclient-dev libssl-dev


2- mysql -u root -p


3- CREATE DATABASE myproject CHARACTER SET UTF8;


4- CREATE USER myprojectuser@localhost IDENTIFIED BY 'password';


5- GRANT ALL PRIVILEGES ON myproject.* TO myprojectuser@localhost;


6- FLUSH PRIVILEGES;


7- exit

8- In the project environment:
pip install mysqlclient
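The list above stops short of wiring the database into Django. A minimal settings.py sketch, using the database, user, and password created in the earlier steps (adjust HOST/PORT to your setup):

```python
# settings.py -- names must match the CREATE DATABASE / CREATE USER steps above
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'myproject',
        'USER': 'myprojectuser',
        'PASSWORD': 'password',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}
```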

+X-Frame-Options (Sept. 26, 2016, 9:05 p.m.)

Error in remote calling:
..does not permit cross-origin framing

Description:
There is a special header, X-Frame-Options, that allows or disallows showing a page inside an iframe. It is used to prevent an attack called clickjacking. See Django's docs about it: https://docs.djangoproject.com/en/dev/ref/clickjacking/

Sites that want their content to be shown in an iframe simply don't set this header.

In your installation of Django this protection is turned on by default. If you want to allow embedding your content inside iframes, you can either disable the clickjacking protection in your settings for the whole site, or control it per view with the django.views.decorators.clickjacking decorators:

xframe_options_exempt
xframe_options_deny
xframe_options_sameorigin

Per-view control is the better option.

--------------------------------------------------------------

Example:

from django.views.decorators.clickjacking import xframe_options_exempt

@xframe_options_exempt
def home(request):
    ...

+Django Session Key (Sept. 20, 2016, 8:58 p.m.)

if not request.session.exists(request.session.session_key):
    request.session.create()
session_key = request.session.session_key

+Django REST Framework - Installation and Configuration (Sept. 20, 2016, 12:44 a.m.)

1- pip install djangorestframework django-filter markdown


2- Add 'rest_framework' to your INSTALLED_APPS setting.
INSTALLED_APPS = (
    ...
    'rest_framework',
)


3- If you're intending to use the browsable API you'll probably also want to add REST framework's login and logout views. Add the following to your root urls.py file.

urlpatterns = [
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework'))
]
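Optionally, global behavior is configured through a single REST_FRAMEWORK dict in settings.py. A minimal sketch (the permission and filter-backend choices here are illustrative, not required; the filter backend assumes django-filter from step 1):

```python
# settings.py
REST_FRAMEWORK = {
    # Require login for all API endpoints by default
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated',
    ],
    # Enable django-filter integration installed in step 1
    'DEFAULT_FILTER_BACKENDS': [
        'django_filters.rest_framework.DjangoFilterBackend',
    ],
}
```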

+User Timezone (Sept. 5, 2016, 2:12 a.m.)

There are several plugins you can use, but there are reasons I avoid them:

- They mainly require big .dat files which contain the timezones from all over the world.

- They use middleware to check the user's timezone, which runs on every request and can slow down page loads.

- They only work with templates (using template tags and filters).

--------------------------------------------------------

The simplest way I have found is a snippet that uses an online web service:
import requests
import pytz
user_time_zone = requests.get('http://freegeoip.net/json/').json()['time_zone']
timezone.activate(pytz.timezone(user_time_zone))

This snippet can be used in only the views which need to detect user's timezone; no need of middleware.

--------------------------------------------------------

If you ever needed to use it in every request, you can use it in a middleware.

Create a file named `middleware.py` and add this middleware to it:

import requests
import pytz

from django.utils import timezone


class UserTimezoneMiddleware(object):
    def process_request(self, request):
        try:
            freegeoip_response = requests.get('http://freegeoip.net/json/')
            freegeoip_response_json = freegeoip_response.json()
            user_time_zone = freegeoip_response_json['time_zone']
            timezone.activate(pytz.timezone(user_time_zone))
        except Exception:
            pass
        return None


Add the `UserTimezoneMiddleware` class to settings.py `MIDDLEWARE_CLASSES` variable.


Now you can get the date/time based on user's timezone:
timezone.localtime(timezone.now())
timezone.localtime(settings_.under_construction_until)

--------------------------------------------------------

+Timestamp from datetime field (Sept. 5, 2016, 1:05 a.m.)

You can do it in template or in view.

Template:
------------

{% now "U" %}
{{ value|date:"U" }}

------------------------------------------------------------------

View:
-------

from django.utils.dateformat import format
format(mymodel.mydatefield, 'U')
OR
import time
time.mktime(mydate.timetuple())

+Manually create a POST/GET QueryDict from a dictionary (Aug. 27, 2016, 3:11 a.m.)

from django.http import QueryDict, MultiValueDict

get_data = {'p_type': request.GET['p_type'], 'facilities': request.GET.getlist('facilities')}
OR
get_data = dict(request.GET.iteritems())

qdict = QueryDict('', mutable=True)
qdict.update(MultiValueDict({'facilities': get_data['facilities']}))
qdict.update({'p_type': get_data['p_type']})
request.POST = qdict

+Django Dumpdata Field (Aug. 26, 2016, 3:43 a.m.)

https://github.com/bitmazk/django-dumpdata-field

1- pip install django-dumpdata-field

2-
INSTALLED_APPS = (
    ...
    'dumpdata_field',
)

3- python manage.py dumpdata_field facemelk.province --fields=id,province_name > /home/mohsen/Projects/facemelk/facemelk/fixtures/provinces_fields.json

+Ajax File Upload (Aug. 22, 2016, 10:20 p.m.)

<form action="{% url 'glasses:upload-face' %}" method="POST" id="upload-face-form" enctype="multipart/form-data">
    {% csrf_token %}
    <input type="file" id="upload-face" name="face" />
</form>

-------------------------------------------------------

$('#upload-face').change(function() {
    var form = $('#upload-face-form');
    var form_data = new FormData(form[0]);
    $.ajax({
        type: form.attr('method'),
        url: form.attr('action'),
        data: form_data,
        contentType: false,
        cache: false,
        processData: false,
        dataType: 'json',
        success: function(image) {

        },
        error: function(error) {

        }
    });
});

-------------------------------------------------------

def upload_face(request):
    if request.is_ajax():
        image = request.FILES.get('face')
        if image:
            face = open('face.jpg', 'wb')
            for chunk in image.chunks():
                face.write(chunk)
            face.close()
        return JsonResponse({'hi': 'hi'})
    else:
        return HttpResponseRedirect(reverse('home'))

-------------------------------------------------------

+Django Grappelli (May 16, 2016, 4:04 a.m.)

Official Website:

http://grappelliproject.com/

-------------------------------------------------

Documentation

https://django-grappelli.readthedocs.io/en/latest/

-------------------------------------------------

Installation:

pip install django-grappelli

-------------------------------------------------

Setup:

1-
INSTALLED_APPS = (
    'grappelli',
    'django.contrib.admin',
)


2- Add URL-patterns:
urlpatterns = [
    url(r'^grappelli/', include('grappelli.urls')),
    url(r'^admin/', include(admin.site.urls)),
]

3- Add the request context processor (needed for the Dashboard and the Switch User feature):
TEMPLATES = [
    {
        ...
        'OPTIONS': {
            'context_processors': [
                ...
                'django.template.context_processors.request',
                ...
            ],
        },
    },
]

4- Collect the media files:
python manage.py collectstatic

-------------------------------------------------

Customization:

http://django-grappelli.readthedocs.io/en/latest/customization.html

-------------------------------------------------

Dashboard Setup:

http://django-grappelli.readthedocs.io/en/latest/dashboard_setup.html

-------------------------------------------------

Third Party Applications:

http://django-grappelli.readthedocs.io/en/latest/thirdparty.html

+Views - Receive and parse JSON data from a request using django-cors-headers (May 4, 2016, 3:19 a.m.)

import json

from django.views.decorators.csrf import csrf_exempt


@csrf_exempt
def update_note(request):
    request_json_data = bytes.decode(request.body)
    request_data = json.loads(request_json_data)
    print(request_data)

------------------------------------------------------------------

You need to install a plugin too:
https://github.com/ottoyiu/django-cors-headers

1- pip install django-cors-headers


2-
INSTALLED_APPS = (
    ...
    'corsheaders',
    ...
)


3-
MIDDLEWARE = [  # Or MIDDLEWARE_CLASSES on Django < 1.10
    ...
    'corsheaders.middleware.CorsMiddleware',
    'django.middleware.common.CommonMiddleware',
    ...
]


4-
CORS_ORIGIN_WHITELIST = (
    'localhost:8100',
)


+Internationalization (May 2, 2016, 10:56 p.m.)

urls.py:

from django.conf.urls.i18n import i18n_patterns

urlpatterns += i18n_patterns(
    # URL patterns listed here get a language-code prefix, e.g. /fa/...
)

----------------------------------------------------------------------

settings.py:

MIDDLEWARE = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.locale.LocaleMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'django.middleware.security.SecurityMiddleware',
)

----------------------------------------------------------------------

And finally, in a context_processors.py file, add a snippet like this:

from django.utils.translation import activate


def change_language(request):
    if '/admin/' not in request.get_full_path():
        if '/fa/' not in request.get_full_path():
            activate('en')
        else:
            activate('fa')
    return {}

----------------------------------------------------------------------

{% get_language_info for LANGUAGE_CODE as lang %}
{% get_language_info for "pl" as lang %}

You can then access the information:

Language code: {{ lang.code }}<br />
Name of language: {{ lang.name_local }}<br />
Name in English: {{ lang.name }}<br />
Bi-directional: {{ lang.bidi }}
Name in the active language: {{ lang.name_translated }}

There are also simple filters available for convenience:
{{ LANGUAGE_CODE|language_name }} (“German”)
{{ LANGUAGE_CODE|language_name_local }} (“Deutsch”)
{{ LANGUAGE_CODE|language_bidi }} (False)
{{ LANGUAGE_CODE|language_name_translated }} (“německy”, when active language is Czech)

<form action="{% url 'set_language' %}" method="post">{% csrf_token %}
    <input name="next" type="hidden" value="{{ redirect_to }}" />
    <select name="language">
        {% get_current_language as LANGUAGE_CODE %}
        {% get_available_languages as LANGUAGES %}
        {% get_language_info_list for LANGUAGES as languages %}
        {% for language in languages %}
            <option value="{{ language.code }}"{% if language.code == LANGUAGE_CODE %} selected="selected"{% endif %}>
                {{ language.name_local }} ({{ language.code }})
            </option>
        {% endfor %}
    </select>
    <input type="submit" value="Go" />
</form>


from django.utils import translation
user_language = 'fr'
translation.activate(user_language)
request.session[translation.LANGUAGE_SESSION_KEY] = user_language


from django.http import HttpResponse

def hello_world(request, count):
    if request.LANGUAGE_CODE == 'de-at':
        return HttpResponse("You prefer to read Austrian German.")
    else:
        return HttpResponse("You prefer to read another language.")

----------------------------------------------------------------------

from django.conf import settings
from django.utils import translation

class ForceLangMiddleware:

    def process_request(self, request):
        request.LANG = settings.LANGUAGE_CODE
        translation.activate(request.LANG)
        request.LANGUAGE_CODE = request.LANG

----------------------------------------------------------------------

+Admin - Access ModelForm properties (April 23, 2016, 9:09 a.m.)

def __init__(self, *args, **kwargs):
    initial = kwargs.get('initial', {})
    initial['material'] = 'Test'
    kwargs['initial'] = initial
    super(ArtefactForm, self).__init__(*args, **kwargs)

-----------------------------------

for field in self.fields.items():
    print(field[0])  # Prints field names
    print(field[1].label)  # Prints field labels

+View - Replace/Populate POST data (April 19, 2016, 11:38 a.m.)

If the request was the result of a Django form submission, it is reasonable for POST to be immutable, to ensure the integrity of the data between the form submission and the form validation. However, if the request was not sent via a Django form submission, POST is mutable since there is no form validation.

mutable = request.POST._mutable
request.POST._mutable = True
request.POST['some_data'] = 'test data'
request.POST._mutable = mutable

----------------------------------------------------------------

In an HttpRequest object, the GET and POST attributes are instances of django.http.QueryDict, a dictionary-like class customized to deal with multiple values for the same key. This is necessary because some HTML form elements, notably <select multiple>, pass multiple values for the same key.

The QueryDicts at request.POST and request.GET will be immutable when accessed in a normal request/response cycle. To get a mutable version you need to use .copy().

----------------------------------------------------------------

request.POST = request.POST.copy()
request.POST['some_key'] = 'some_value'

----------------------------------------------------------------

Methods

QueryDict implements all the standard dictionary methods because it’s a subclass of dictionary. Exceptions are outlined here:

QueryDict.__init__(query_string=None, mutable=False, encoding=None)[source]

Instantiates a QueryDict object based on query_string.

>>> QueryDict('a=1&a=2&c=3')
<QueryDict: {'a': ['1', '2'], 'c': ['3']}>

If query_string is not passed in, the resulting QueryDict will be empty (it will have no keys or values).

Most QueryDicts you encounter, and in particular those at request.POST and request.GET, will be immutable. If you are instantiating one yourself, you can make it mutable by passing mutable=True to its __init__().

Strings for setting both keys and values will be converted from encoding to unicode. If encoding is not set, it defaults to DEFAULT_CHARSET.

QueryDict.__getitem__(key)

Returns the value for the given key. If the key has more than one value, __getitem__() returns the last value. Raises django.utils.datastructures.MultiValueDictKeyError if the key does not exist. (This is a subclass of Python’s standard KeyError, so you can stick to catching KeyError.)

QueryDict.__setitem__(key, value)[source]

Sets the given key to [value] (a Python list whose single element is value). Note that this, as other dictionary functions that have side effects, can only be called on a mutable QueryDict (such as one that was created via copy()).

QueryDict.__contains__(key)

Returns True if the given key is set. This lets you do, e.g., if "foo" in request.GET.

QueryDict.get(key, default=None)

Uses the same logic as __getitem__() above, with a hook for returning a default value if the key doesn’t exist.

QueryDict.setdefault(key, default=None)[source]

Just like the standard dictionary setdefault() method, except it uses __setitem__() internally.

QueryDict.update(other_dict)

Takes either a QueryDict or standard dictionary. Just like the standard dictionary update() method, except it appends to the current dictionary items rather than replacing them. For example:

>>> q = QueryDict('a=1', mutable=True)
>>> q.update({'a': '2'})
>>> q.getlist('a')
['1', '2']
>>> q['a'] # returns the last
'2'

QueryDict.items()

Just like the standard dictionary items() method, except this uses the same last-value logic as __getitem__(). For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.items()
[('a', '3')]

QueryDict.iteritems()

Just like the standard dictionary iteritems() method. Like QueryDict.items() this uses the same last-value logic as QueryDict.__getitem__().

QueryDict.iterlists()

Like QueryDict.iteritems() except it includes all values, as a list, for each member of the dictionary.

QueryDict.values()

Just like the standard dictionary values() method, except this uses the same last-value logic as __getitem__(). For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.values()
['3']

QueryDict.itervalues()

Just like QueryDict.values(), except an iterator.

In addition, QueryDict has the following methods:

QueryDict.copy()[source]

Returns a copy of the object, using copy.deepcopy() from the Python standard library. This copy will be mutable even if the original was not.

QueryDict.getlist(key, default=None)

Returns the data with the requested key, as a Python list. Returns an empty list if the key doesn’t exist and no default value was provided. It’s guaranteed to return a list of some sort unless the default value provided is not a list.

QueryDict.setlist(key, list_)[source]

Sets the given key to list_ (unlike __setitem__()).

QueryDict.appendlist(key, item)[source]

Appends an item to the internal list associated with key.

QueryDict.setlistdefault(key, default_list=None)[source]

Just like setdefault, except it takes a list of values instead of a single value.

QueryDict.lists()

Like items(), except it includes all values, as a list, for each member of the dictionary. For example:

>>> q = QueryDict('a=1&a=2&a=3')
>>> q.lists()
[('a', ['1', '2', '3'])]

QueryDict.pop(key)[source]

Returns a list of values for the given key and removes them from the dictionary. Raises KeyError if the key does not exist. For example:

>>> q = QueryDict('a=1&a=2&a=3', mutable=True)
>>> q.pop('a')
['1', '2', '3']

QueryDict.popitem()[source]

Removes an arbitrary member of the dictionary (since there’s no concept of ordering), and returns a two value tuple containing the key and a list of all values for the key. Raises KeyError when called on an empty dictionary. For example:

>>> q = QueryDict('a=1&a=2&a=3', mutable=True)
>>> q.popitem()
('a', ['1', '2', '3'])

QueryDict.dict()

Returns dict representation of QueryDict. For every (key, list) pair in QueryDict, dict will have (key, item), where item is one element of the list, using same logic as QueryDict.__getitem__():

>>> q = QueryDict('a=1&a=3&a=5')
>>> q.dict()
{'a': '5'}

QueryDict.urlencode(safe=None)[source]

Returns a string of the data in query-string format. Example:

>>> q = QueryDict('a=2&b=3&b=5')
>>> q.urlencode()
'a=2&b=3&b=5'

Optionally, urlencode can be passed characters which do not require encoding. For example:

>>> q = QueryDict(mutable=True)
>>> q['next'] = '/a&b/'
>>> q.urlencode(safe='/')
'next=/a%26b/'

+Admin - Hide fields dynamically (April 11, 2016, 7:07 p.m.)

def get_fields(self, request, obj=None):
    fields = admin.ModelAdmin.get_fields(self, request)
    if settings.DEBUG:
        return fields
    else:
        return ('parent', 'name_en', 'name_fa', 'content_en', 'content_fa', 'ordering',
                'languages', 'header_image', 'project_thumbnail')

+Error ==> Permission denied when trying to access database after restore (migration) (April 10, 2016, 10:47 p.m.)

Run the following commands from the shell:
psql mohsen_notesdb -c "GRANT ALL ON ALL TABLES IN SCHEMA public to mohsen_notes;"
psql mohsen_notesdb -c "GRANT ALL ON ALL SEQUENCES IN SCHEMA public to mohsen_notes;"
psql mohsen_notesdb -c "GRANT ALL ON ALL FUNCTIONS IN SCHEMA public to mohsen_notes;"

+Admin - Resize Image Signal (April 5, 2016, 11:51 a.m.)

Create a file `resize_image.py` with this content:

from PIL import Image

from django.conf import settings


def resize_image(sender, instance, created, **kwargs):
    if instance.position == 't':
        width = settings.TOP_ADS_WIDTH
        height = settings.TOP_ADS_HEIGHT
    else:
        width = settings.BOTTOM_ADS_WIDTH
        height = settings.BOTTOM_ADS_HEIGHT

    img = Image.open(instance.image.path)
    if img.mode != 'RGB':
        img = img.convert('RGB')
    img.resize((width, height), Image.ANTIALIAS).save(instance.image.path, format='JPEG')

--------------------------------------------------------------------------------------------

After model definition in your models.py file, import `resize_image` and:
models.signals.post_save.connect(resize_image, sender=TheModel)

+Admin - Hide model in admin dynamically (Feb. 29, 2016, 9:50 a.m.)

class AccessoryCategoryAdmin(admin.ModelAdmin):
    def get_model_perms(self, request):
        perms = admin.ModelAdmin.get_model_perms(self, request)
        if request.user.username == settings.SECOND_ADMIN:
            return {}
        return perms

+Admin - Display readonly fields based on conditions (Feb. 28, 2016, 3:02 p.m.)

class AccessoryAdmin(admin.ModelAdmin):
    list_display = ('name', 'category', 'price', 'quantity', 'ordering', 'display')
    list_filter = ('category', 'display')

    def get_readonly_fields(self, request, obj=None):
        if request.user.username == settings.SECOND_ADMIN:
            readonly_fields = ('category', 'name', 'image', 'price', 'main_image',
                               'description', 'ordering', 'url_name')
            return readonly_fields
        else:
            return self.readonly_fields

+Form - How to add a star after fields (Feb. 27, 2016, 10:47 p.m.)

Add the `required_css_class` property to Form class like this:

class ProfileForm(forms.Form):
    required_css_class = 'required'

    first_name = forms.CharField(label=_('first name'), max_length=30)
    last_name = forms.CharField(label=_('last name'), max_length=30)
    cellphone_number = forms.CharField(label=_('cellphone'), max_length=20)


Then use the property `label_tag` of form fields to set the titles:
{{ form.first_name.errors }} {{ form.first_name.label_tag }}
{{ form.last_name.errors }} {{ form.last_name.label_tag }}
{{ form.cellphone_number.errors }} {{ form.cellphone_number.label_tag }}

Use it in CSS to style it or add an asterisk:
<style type="text/css">
    .required:after {
        content: " *";
        color: red;
    }
</style>

+Decorators (Jan. 29, 2016, 4:34 p.m.)

Create a python file named `decorators.py` in the app and write your decorators as follows:

def login_required(view_func):
    def wrap(request, *args, **kwargs):
        if request.user.is_authenticated():
            return view_func(request, *args, **kwargs)
        else:
            return render(request, 'issue_tracker/access_denied.html',
                          {'login_required': 'yes'})
    return wrap
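Stripped of Django, the wrapping pattern above works like this (require_flag, dashboard, and the dict-based "request" are hypothetical stand-ins, not Django APIs):

```python
# Pure-Python sketch of the decorator's wrap-and-check pattern
def require_flag(view_func):
    def wrap(request, *args, **kwargs):
        if request.get('authenticated'):
            return view_func(request, *args, **kwargs)
        return 'access denied'
    return wrap


@require_flag
def dashboard(request):
    return 'dashboard'


print(dashboard({'authenticated': True}))   # -> dashboard
print(dashboard({'authenticated': False}))  # -> access denied
```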

-----------------------------------------------------------

from django.utils.functional import wraps


def can_participate_poll(view):
    @wraps(view)
    def inner(request, *args, **kwargs):
        print(kwargs)  # Prints {'qnum': 11, 'qid': 23}
        return view(request, *args, **kwargs)
    return inner


This will print the args which are passed to the view.

@can_participate_poll
def poll_view(request, qid, qnum):
    pass

-----------------------------------------------------------

from django.contrib.auth.decorators import user_passes_test

@user_passes_test(lambda u: u.is_superuser)
def my_view(request):
    pass

-----------------------------------------------------------

+Admin - Change Header Title (Jan. 14, 2016, 8:44 p.m.)

In the main urls.py file:

admin.site.site_header = _('YouStone Administration')

+Change app name for admin (Jan. 27, 2016, 11:51 p.m.)

1- Create a python file named `apps.py` in the app:

from django.apps import AppConfig
from django.utils.translation import ugettext_lazy as _

class CourseConfig(AppConfig):
    name = 'course'
    verbose_name = _('course')

2- Edit the __init__.py file within the app:
default_app_config = 'course.apps.CourseConfig'

+Save File/Image (Dec. 1, 2015, 3:16 p.m.)

import uuid
from PIL import Image as PILImage
import imghdr
import os

from django.conf import settings

from manager.home.models import Image


def save_image(img_file, width=0, height=0):
    # Generate a random image name
    img_name = uuid.uuid4().hex + '.' + img_file.name.split('.')[-1]

    # Saving the picture on disk
    img = open(settings.IMG_ROOT + img_name, 'wb')
    for chunk in img_file.chunks():
        img.write(chunk)
    img.close()

    img = open(img.name, 'rb')
    # Is the saved image a valid image file!?
    if not imghdr.what(img) or imghdr.what(img).lower() not in ['jpg', 'jpeg', 'gif', 'png']:
        os.remove(img.name)
        return {'is_image': False}
    else:
        if width or height:
            # Resizing the image
            pil_img = PILImage.open(img.name)

            if pil_img.mode != 'RGB':
                pil_img = pil_img.convert('RGB')
            pil_img.resize((width, height), PILImage.ANTIALIAS).save(img.name, format='JPEG')

        # Saving the image location on the database
        img = Image.objects.create(name=img_name)
        return {'is_image': True, 'image': img}


def create_unique_file_name(path, file_name):
    while os.path.exists(path + file_name):
        if '.' in file_name:
            file_name = file_name.replace('.', '_.', -1)
        else:
            file_name += '_'

    return file_name
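A self-contained usage sketch of create_unique_file_name (the function is repeated here so the example runs on its own; the file name is hypothetical):

```python
import os
import tempfile


def create_unique_file_name(path, file_name):
    # Keep mangling the name until no file with that name exists at path
    while os.path.exists(path + file_name):
        if '.' in file_name:
            file_name = file_name.replace('.', '_.', -1)
        else:
            file_name += '_'
    return file_name


with tempfile.TemporaryDirectory() as d:
    # Simulate an already-existing upload
    open(os.path.join(d, 'photo.jpg'), 'w').close()
    print(create_unique_file_name(d + os.sep, 'photo.jpg'))  # -> photo_.jpg
```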

+Custom Middleware Class (Nov. 21, 2015, 10:39 p.m.)

Create a file named `middleware.py` in a module and add your middleware like this:

from django.shortcuts import render

from nespresso.models import Settings


class UnderConstruction:
    def process_request(self, request):
        settings_ = Settings.objects.all()
        if settings_ and settings_[0].under_construction:
            return render(request, 'nespresso/under_construction.html')


After defining a middleware, add it to the settings:
MIDDLEWARE_CLASSES = MIDDLEWARE_CLASSES + (
    'nespresso.middleware.UnderConstruction',
)


--------------------------------------------------------------

Django 2:

from django.shortcuts import HttpResponseRedirect
from django.urls import reverse


class UnderConstructionMiddleWare:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Do the conditions here; while the site is under construction:
        #     return HttpResponseRedirect(reverse('under_construction:home'))
        return self.get_response(request)



In settings.py:
Add the middleware's dotted path to MIDDLEWARE

--------------------------------------------------------------

+Add Action Form to Action (Oct. 13, 2015, 10:48 a.m.)

from django.contrib.admin.helpers import ActionForm
from django.contrib import messages


class ChangeMembershipTypeForm(ActionForm):
    MEMBERSHIP_TYPE = (
        ('1', _('Gold')),
        ('2', _('Silver')),
        ('3', _('Bronze')),
        ('4', _('Basic'))
    )
    membership_type = forms.ChoiceField(choices=MEMBERSHIP_TYPE, label=_('membership type'), required=False)


class CompanyAdmin(admin.ModelAdmin):
    action_form = ChangeMembershipTypeForm

    def change_membership_type(self, request, queryset):
        membership_type = request.POST['membership_type']
        queryset.update(membership_type=membership_type)
        self.message_user(request, _('Successfully updated membership type for selected rows.'), messages.SUCCESS)
    change_membership_type.short_description = _('Change Membership Type')

+Admin - Hide action (Oct. 8, 2015, 10:56 a.m.)

class MyAdmin(admin.ModelAdmin):

    def has_delete_permission(self, request, obj=None):
        return False

    def get_actions(self, request):
        actions = super(MyAdmin, self).get_actions(request)
        if 'delete_selected' in actions:
            del actions['delete_selected']
        return actions

--------------------------------------------------------------------

def get_actions(self, request):
    actions = admin.ModelAdmin.get_actions(self, request)
    if request.user.username == settings.SECOND_ADMIN:
        return []
    else:
        return actions

+Model - Disable the Add and / or Delete action for a specific model (March 10, 2016, 11:02 p.m.)

def has_add_permission(self, request):
    perms = admin.ModelAdmin.has_add_permission(self, request)
    if request.user.username == settings.SECOND_ADMIN:
        return False
    else:
        return perms

def has_delete_permission(self, request, obj=None):
    perms = admin.ModelAdmin.has_delete_permission(self, request)
    if request.user.username == settings.SECOND_ADMIN:
        return False
    else:
        return perms

+URLS - Redirect (Oct. 6, 2015, 11:27 a.m.)

from django.views.generic import RedirectView

url(r'^$', RedirectView.as_view(url='/online-calls/'), name='home'),

+Send HTML email using send_mail (Sept. 28, 2015, 4:48 p.m.)

from django.template import loader
from django.core.mail import send_mail


html = loader.render_to_string('nespresso/admin_order_notification.html', {'order': order})
send_mail('Nespresso New Order from - %s' % order.customer.user.get_full_name(),
          '',
          'mail@buynespresso.ir',
          OrderingEmail.objects.all().values_list('email', flat=True),
          html_message=html)

+Admin - Many to Many Inline (Sept. 28, 2015, 10:23 a.m.)

class OrderInline(admin.TabularInline):
    model = Order.items.through


class OrderItemAdmin(admin.ModelAdmin):
    inlines = [OrderInline]


class OrderAdmin(admin.ModelAdmin):
    list_display = ('customer', 'get_order_url',)
    exclude = ('items',)
    inlines = [OrderInline]


admin.site.register(OrderItem)
admin.site.register(Order, OrderAdmin)

+Change list display link in django admin (Sept. 27, 2015, 5:47 p.m.)

In models.py file:

class Order(models.Model):
    customer = models.ForeignKey(Customer, null=True, on_delete=models.SET_NULL)
    total_price = models.PositiveIntegerField()
    items = models.ManyToManyField(OrderItem)
    date_time = models.DateTimeField(default=now)

    def __str__(self):
        return '%s' % self.customer

    def get_order_url(self):
        return '<a href="%s" target="_blank">%s - %s</a>' % (reverse('customer:order', args=(self.pk,)),
                                                             self.customer.user.get_full_name(),
                                                             self.date_time.strftime('%D--%H:%M'))
    # In Django prior to version 2.0:
    get_order_url.allow_tags = True

# In Django 2.0+, allow_tags is removed; instead wrap the returned HTML:
# from django.utils.safestring import mark_safe  # At the top of your models.py file
# return mark_safe('<a href="#"></a>')

----------------------------------------------------------------------

And then in admin.py file:

class OrderAdmin(admin.ModelAdmin):
    list_display = ('get_order_url',)

+Admin - Override User Form (Sept. 15, 2015, 2:13 p.m.)

from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.forms import UserChangeForm, UserCreationForm
from django import forms

from .models import Supervisor


class SupervisorChangeForm(UserChangeForm):
    class Meta(UserChangeForm.Meta):
        model = Supervisor


class SupervisorCreationForm(UserCreationForm):
    class Meta(UserCreationForm.Meta):
        model = Supervisor

    def clean_username(self):
        username = self.cleaned_data['username']
        try:
            Supervisor.objects.get(username=username)
        except Supervisor.DoesNotExist:
            return username
        raise forms.ValidationError(self.error_messages['duplicate_username'])


class SupervisorAdmin(UserAdmin):
    form = SupervisorChangeForm
    add_form = SupervisorCreationForm
    fieldsets = (
        (None, {'fields': ('username', 'password')}),
        ('Personal info', {'fields': ('first_name', 'last_name', 'email')}),
        ('Permissions', {'fields': ('is_active',)}),
        (None, {'fields': ('allowed_online_calls',)}),
    )
    exclude = ['user_permission']


admin.site.register(Supervisor, SupervisorAdmin)

------------------------------------------------------------------------------------

If you need to override the form fields:

class SupervisorChangeForm(UserChangeForm):

    def __init__(self, *args, **kwargs):
        # Note: pass the subclass to super(), otherwise UserChangeForm's own
        # __init__ is skipped.
        super(SupervisorChangeForm, self).__init__(*args, **kwargs)
        self.fields['allowed_online_calls'] = forms.ModelMultipleChoiceField(
            queryset=Choices.objects.filter(choice='customer'),
            widget=forms.CheckboxSelectMultiple())

    class Meta(UserChangeForm.Meta):
        model = Supervisor

+Ajax (Aug. 22, 2015, 3:54 p.m.)

def delete_order(request, p_type, pid):
    if request.is_ajax():
        return JsonResponse({'orders_length': len(request.session['orders']),
                             'total_price': request.session['orders_total_price'],
                             'status': 'deleted'})

---------------------------------------------------------------------------------------------

return HttpResponse('rejected', content_type='text/plain')


---------------------------------------------------------------------------------------------

$('#send-message-form').submit(function(e) {
    e.preventDefault();
    $.ajax({
        type: 'POST',
        url: $(this).attr('action'),
        data: $(this).serialize(),
        dataType: 'json',
        success: function(status) {

        },
        error: function() {

        }
    });
});

---------------------------------------------------------------------------------------------

+Models - Ranges of IntegerFields (Aug. 21, 2015, 10:22 p.m.)

BigIntegerField:
A 64 bit integer, much like an IntegerField except that it is guaranteed to fit numbers from -9223372036854775808 to 9223372036854775807

-------------------------------------------------------------

IntegerField:
Values from -2147483648 to 2147483647 are safe in all databases supported by Django.

-------------------------------------------------------------

PositiveIntegerField:
Like an IntegerField, but must be either positive or zero (0). Values from 0 to 2147483647 are safe in all databases supported by Django. The value 0 is accepted for backward compatibility reasons.

-------------------------------------------------------------

PositiveSmallIntegerField:
Like a PositiveIntegerField, but only allows values under a certain (database-dependent) point. Values from 0 to 32767 are safe in all databases supported by Django.

-------------------------------------------------------------

SmallIntegerField:
Like an IntegerField, but only allows values under a certain (database-dependent) point. Values from -32768 to 32767 are safe in all databases supported by Django.

-------------------------------------------------------------
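These ranges can be turned into a quick sanity check in plain Python. The dictionary and helper below are illustrative only (not a Django API); they just encode the documented safe ranges and pick the narrowest field whose range covers your values:

```python
# Documented safe ranges for Django's integer fields.
FIELD_RANGES = {
    'SmallIntegerField': (-32768, 32767),
    'PositiveSmallIntegerField': (0, 32767),
    'IntegerField': (-2147483648, 2147483647),
    'PositiveIntegerField': (0, 2147483647),
    'BigIntegerField': (-9223372036854775808, 9223372036854775807),
}


def smallest_field_for(lo, hi):
    """Pick the narrowest field type whose safe range covers [lo, hi].

    Fields are checked roughly in order of storage size; the helper
    name is made up for this sketch.
    """
    order = ['PositiveSmallIntegerField', 'SmallIntegerField',
             'PositiveIntegerField', 'IntegerField', 'BigIntegerField']
    for name in order:
        f_lo, f_hi = FIELD_RANGES[name]
        if f_lo <= lo and hi <= f_hi:
            return name
    raise ValueError('range too wide for a single integer field')


print(smallest_field_for(0, 100))       # PositiveSmallIntegerField
print(smallest_field_for(-1, 10**10))   # BigIntegerField
```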

+Admin - Adding Action to Export/Download CSV file (Aug. 24, 2015, 1:04 p.m.)

class VirtualOfficeAdmin(admin.ModelAdmin):
    actions = ['download_csv']
    list_display = ('persian_name', 'english_name', 'office_type', 'active')
    list_filter = ('office_type', 'active')

    def download_csv(self, request, queryset):
        import csv
        from django.http import HttpResponse
        import StringIO
        from django.utils.encoding import smart_str

        f = StringIO.StringIO()
        writer = csv.writer(f)
        writer.writerow(
            ["owner", "office type", "persian name", "english name", "cellphone number", "phone number", "address"])
        for s in queryset:
            owner = smart_str(s.owner.get_full_name())
            persian_name = smart_str(s.persian_name)

            # Office Type (any other value is kept as-is)
            office_type = s.office_type
            if office_type == 're':
                office_type = smart_str(ugettext('Real Estate'))
            elif office_type == 'en':
                office_type = smart_str(ugettext('Engineer'))
            elif office_type == 'ar':
                office_type = smart_str(ugettext('Architect'))

            writer.writerow(
                [owner, office_type, persian_name, s.english_name, '09' + s.owner.username, s.phone_number, s.address])

        f.seek(0)
        response = HttpResponse(f, content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename=stat-info.csv'
        return response

    download_csv.short_description = _("Download CSV file for selected stats.")
---------------------------------------------------------------------------------------------
from django.contrib.admin.helpers import ActionForm
from django import forms
from django.utils.translation import ugettext_lazy as _
from django.contrib import messages

class ChangeMembershipTypeForm(ActionForm):
    MEMBERSHIP_TYPE = (
        ('1', _('Gold')),
        ('2', _('Silver')),
        ('3', _('Bronze')),
        ('4', _('Basic'))
    )
    membership_type = forms.ChoiceField(choices=MEMBERSHIP_TYPE, label=_('membership type'), required=False)


class CompanyAdmin(admin.ModelAdmin):
    action_form = ChangeMembershipTypeForm
    actions = ['change_membership_type']

    def change_membership_type(self, request, queryset):
        membership_type = request.POST['membership_type']
        queryset.update(membership_type=membership_type)
        self.message_user(request, _('Successfully updated membership type for %d rows') % (queryset.count(),),
                          messages.SUCCESS)
    change_membership_type.short_description = _('Change Membership Type')

+Custom Template Tags & Filters (April 6, 2016, 2:30 p.m.)

1- Create a package (a directory with an `__init__.py`) named `templatetags` inside an app.

2- Create a .py file with a desired name. (I usually choose the app name for this python file name.)

3- Write the methods you need in the python file.

4- There is no need to register these methods or files in `settings.py`.

----------------------------------------------------------------------------------------------------

================= Template Filters Examples =================

from django.template import Library


register = Library()


@register.filter
def trim_value(value):
    # Strip a trailing '.0' (e.g. '12.0' -> '12').
    value = str(value)
    if value.endswith('.0'):
        return value[:-2]
    return value

----------------------------------------------------------------------------------------------------

@register.filter
def get_decimal(value):
    if value:
        import decimal
        return str(decimal.Decimal('{0:.4f}'.format(value)))
    else:
        return '0'

----------------------------------------------------------------------------------------------------

@register.filter
def get_minutes(total_seconds):
    if total_seconds:
        return round(total_seconds / 60, 2)
    else:
        return 0

----------------------------------------------------------------------------------------------------

@register.filter
def get_acd(request):
    if request:
        minutes = get_minutes(request.session['total_seconds'])
        if minutes:
            return round(minutes / request.session['total_calls'], 2)
        else:
            return 0
    else:
        return 0

----------------------------------------------------------------------------------------------------

@register.filter
def round_values(value, digit):
    if digit and digit.isdigit():
        return round(value, int(digit))
    else:
        return value

----------------------------------------------------------------------------------------------------

@register.filter
def calculate_currency_rate(value, invoice):
    from decimal import Decimal
    if invoice.rate_currency:
        return round(Decimal(value) * Decimal(invoice.rate), 2)
    else:
        return value

----------------------------------------------------------------------------------------------------

================= Template Tags Examples =================

Important Hint:
You can return anything you like from a tag, including a queryset. However, you can't use a tag inside the {% for %} tag; you can only use a variable there (or a variable passed through a filter).

from django.template import Library, Node, TemplateSyntaxError, Variable

from youstone.models import Ad


register = Library()


class AdsNode(Node):
    def __init__(self, usage, position, province, var_name):
        self.usage, self.position, self.province = Variable(usage), Variable(position), Variable(province)
        self.var_name = var_name

    def render(self, context):
        usage = self.usage.resolve(context)
        position = self.position.resolve(context)
        province = self.province.resolve(context)
        ads = Ad.objects.filter(active=True, usage=usage)
        if position:
            ads = ads.filter(position=position)

        if province:
            print('PROVINCE', province)

        # Store the queryset under the name given after "as".
        context[self.var_name] = ads

        return ''


@register.tag
def get_ads(parser, token):
    try:
        tag_name, usage, position, province, _as, var_name = token.split_contents()
    except ValueError:
        raise TemplateSyntaxError(
            'get_ads takes 4 positional arguments but %s were given.' % len(token.split_contents()))

    if _as != 'as':
        raise TemplateSyntaxError('get_ads syntax must be "get_ads <usage> <position> <province> as <var_name>."')

    return AdsNode(usage, position, province, var_name)

----------------------------------------------------------------------------------------------------

Then you can use the template tag like this in the template:
{% get_ads usage position province as ads %}
{% for ad in ads %}

{% endfor %}

----------------------------------------------------------------------------------------------------

+Resize Image (Aug. 9, 2015, 10:34 p.m.)

Create a python module named resize_image.py and copy & paste this snippet:

---------------------------------------------------------------------------------------------

from PIL import Image

from django.conf import settings


def resize_image(sender, instance, created, **kwargs):
    width = settings.SLIDER_WIDTH
    height = settings.SLIDER_HEIGHT

    img = Image.open(instance.image.path)
    if img.mode != 'RGB':
        img = img.convert('RGB')
    img.resize((width, height), Image.ANTIALIAS).save(instance.image.path, format='JPEG')

Note that resize() returns a resized copy of an image; it doesn't modify the original.
So do not write code like this:
img.resize((width, height), Image.ANTIALIAS)
img.save(instance.image.path, format='JPEG')

--------------------------------------------------------------------------------------------

In the settings:
# Slider Image Size
SLIDER_WIDTH = 1000
SLIDER_HEIGHT = 600

---------------------------------------------------------------------------------------------

In models.py:
from resize_image import resize_image


class Slider(models.Model):
    pass


models.signals.post_save.connect(resize_image, sender=Slider)

--------------------------------------------------------------------------------------------

+Extending User Model using OneToOne relationship (Aug. 5, 2015, 4:43 p.m.)

from django.db.models.signals import post_save
from django.conf import settings


class Customer(models.Model):
    user = models.OneToOneField(settings.AUTH_USER_MODEL, unique=True, primary_key=True)


def create_customer(sender, instance, created, **kwargs):
    if created:
        Customer.objects.get_or_create(user=instance)


post_save.connect(create_customer, sender=settings.AUTH_USER_MODEL)

+Admin - Overriding admin ModelForm (Nov. 30, 2015, 3:49 p.m.)

class MachineCompareForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super(MachineCompareForm, self).__init__(*args, **kwargs)
        self.model_fields = [['field_%s' % title.pk, title.feature, title.pk] for title in CompareTitle.objects.all()]
        for field in self.model_fields:
            self.base_fields[field[0]] = forms.CharField(max_length=400, label='%s' % field[1], required=False)
            self.fields[field[0]] = forms.CharField(max_length=400, label='%s' % field[1], required=False)
            if self.instance.pk:
                feature = CompareFeature.objects.filter(machine=self.instance.machine.pk, feature=field[2])
                if feature:
                    self.base_fields[field[0]].initial = feature[0].value
                    self.fields[field[0]].initial = feature[0].value

    def save(self, commit=True):
        instance = super(MachineCompareForm, self).save(commit=False)
        for field in self.model_fields:
            if CompareFeature.objects.filter(machine=self.cleaned_data['machine'], feature=field[2]):
                CompareFeature.objects.filter(machine=self.cleaned_data['machine'], feature=field[2]).update(
                    feature_id=field[2],
                    machine=self.cleaned_data['machine'],
                    value=self.cleaned_data[field[0]])
            else:
                CompareFeature.objects.create(feature_id=field[2],
                                              machine=self.cleaned_data['machine'],
                                              value=self.cleaned_data[field[0]])

        if commit:
            instance.save()
        return instance

    class Meta:
        model = MachineCompare
        exclude = []


class MachineCompareAdmin(admin.ModelAdmin):
    form = MachineCompareForm

    def get_form(self, request, obj=None, **kwargs):
        return MachineCompareForm

---------------------------------------------------------------------------------------------

class SpecialPageAdmin(admin.ModelAdmin):
    list_display = ('company', 'url_name', 'active',)
    search_fields = ('company__name', 'url_name')
    form = SpecialPageForm

    def get_form(self, request, obj=None, **kwargs):
        return SpecialPageForm


class SpecialPageForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super(SpecialPageForm, self).__init__(*args, **kwargs)
        for i in range(1, 16):
            self.fields['image-%s' % i] = forms.ImageField(label='%s %s' % (_('Image'), i))
            self.base_fields['image-%s' % i] = forms.ImageField(label='%s %s' % (_('Image'), i))

    class Meta:
        model = SpecialPage
        exclude = []

---------------------------------------------------------------------------------------------

+Model - Overriding delete method in model (Nov. 28, 2015, 12:29 p.m.)

from django.db.models.signals import pre_delete
from django.dispatch.dispatcher import receiver


@receiver(pre_delete, sender=MyModel)
def _mymodel_delete(sender, instance, **kwargs):
    print('deleting')

+Union of querysets (July 20, 2015, 5:14 p.m.)

import itertools

result = itertools.chain(qs1, qs2, qs3, qs4)

-------------------------------------------------------------------

records = query1 | query2

-------------------------------------------------------------------

+Views - Concatenating querysets and converting to JSON (July 17, 2015, 9:05 p.m.)

from itertools import chain


combined = list(chain(collectionA, collectionB))
json = serializers.serialize('json', combined)

---------------------------------------------------------------------------------------

final_queryset = (queryset1 | queryset2)

+Template - nbsp template tag (Replace usual spaces in string by non breaking spaces) (July 9, 2015, 2:45 a.m.)

from django import template
from django.utils.safestring import mark_safe

register = template.Library()


@register.filter()
def nbsp(value):
    return mark_safe("&nbsp;".join(value.split(' ')))
------------------------------------------------------
Usage:
{% load nbsp %}

{{ user.full_name|nbsp }}

OR

{{ note.note|nbsp|linebreaksbr }}

+Views - Delete old uploaded file/image before saving the new one (July 8, 2015, 8:24 p.m.)

import os
from django.conf import settings

try:
    os.remove(settings.BASE_DIR + logo.image.name)
    logo.delete()
except (OSError, IOError):
    pass

+Admin - list_display with a callable (Jan. 3, 2016, 10:17 a.m.)

class ExcelFile(models.Model):
    file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])
    companies = models.ManyToManyField(Company, verbose_name=_('companies'), blank=True)
    business = models.ForeignKey(BusinessTitle, verbose_name=_('business'))

    def __str__(self):
        return '%s' % self.business.title

    def get_file_name(self):
        return self.file.name.split('/')[1]
    get_file_name.short_description = _('File Name')
------------------------------------------------------------------------------------------
class ExcelFileAdmin(admin.ModelAdmin):
    list_display = ['get_file_name', 'business']
------------------------------------------------------------------------------------------
def change_order(self):
    return '<a href="review/">%s</a>' % _('Edit Order')
change_order.short_description = _('Edit Order')
change_order.allow_tags = True

+Admin - Hide fields (July 8, 2015, 1:31 p.m.)

from django.contrib import admin

from .models import ExcelFile


class ExcelFileAdmin(admin.ModelAdmin):
    exclude = ['companies']


admin.site.register(ExcelFile, ExcelFileAdmin)

+Model - Validators (Jan. 28, 2016, 12:03 a.m.)

import xlrd

from django.core.exceptions import ValidationError


def validate_excel_file(file):
    try:
        xlrd.open_workbook(file_contents=file.read())
    except xlrd.XLRDError:
        raise ValidationError(_('%s is not an Excel File') % file.name)


class ExcelFile(models.Model):
    excel_file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])

+Admin - Allow only one instance of object to be created (July 8, 2015, 12:41 p.m.)

def validate_only_one_instance(obj):
    model = obj.__class__
    if model.objects.count() > 0 and obj.id != model.objects.get().id:
        raise ValidationError(_("Can only create 1 %s instance") % model.__name__)


class Settings(models.Model):
    banner = models.ImageField(_('banner'), upload_to='images/machines/settings',
                               help_text=_('The required image size is 960px by 250px.'))

    def __str__(self):
        return '%s' % _('Settings')

    def clean(self):
        validate_only_one_instance(self)
--------------------------- ANOTHER ONE ---------------------------------------------
class ExcelFile(models.Model):
    excel_file = models.FileField(_('excel file'), upload_to='excel-files/', validators=[validate_excel_file])
    companies = models.ManyToManyField(Company, verbose_name=_('companies'), blank=True)
    business = models.ForeignKey(BusinessTitle, verbose_name=_('business'))

    def __str__(self):
        return '%s' % self.business.title

    def clean(self):
        model = self.__class__
        validation_error = _("Can only create 1 %s instance") % self.business.title
        business = model.objects.filter(business=self.business)
        # If the user is updating/editing an object
        if self.pk:
            if business and self.pk != business[0].pk:
                raise ValidationError(validation_error)
        # If the user is inserting/creating an object
        else:
            if business:
                raise ValidationError(validation_error)

+Errors (Aug. 13, 2015, 12:05 a.m.)

_imagingft C module is not installed:
I got this error when django-simple-captcha tried to load the image.

1- apt-get install libfreetype6-dev
2- pip uninstall pillow
3- pip install pillow
4- Restart the project.

If you still get the same error, check whether the file was created at all:
1- sudo updatedb
2- locate _imagingft

If the file exists (probably with a slightly different name from the one the error message looks for), rename it.
The path and file name might be something like this:
/home/mohsen/virtualenvs/django-1.8/lib/python3.3/site-packages/PIL/_imagingft.cpython-33m.so
Rename it with:
mv _imagingft.cpython-33m.so _imagingft.so
And restart the project.

If the locate command does not find the file in the virtualenv you're working on, try reinstalling Pillow (even download the latest version from https://codeload.github.com/python-pillow/Pillow/zip/master and install it).
One way or another, you need an install that produces that file, even under a different name.
---------------------------------------------------------------------------------------------
decoder jpeg not available
sudo apt-get install libjpeg-dev
pip install -I pillow

sudo ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so /usr/lib
sudo ln -s /usr/lib/x86_64-linux-gnu/libfreetype.so /usr/lib
sudo ln -s /usr/lib/x86_64-linux-gnu/libz.so /usr/lib

Or for Ubuntu 32bit:

sudo ln -s /usr/lib/i386-linux-gnu/libjpeg.so /usr/lib/
sudo ln -s /usr/lib/i386-linux-gnu/libfreetype.so.6 /usr/lib/
sudo ln -s /usr/lib/i386-linux-gnu/libz.so /usr/lib/

pip install -I pillow
---------------------------------------------------------------------------------------------
django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet:

from django.conf import settings


try:
    from django.contrib.auth import get_user_model

    User = settings.AUTH_USER_MODEL
except ImportError:
    from django.contrib.auth.models import User

+Speeding Up Django Links (June 18, 2015, 12:41 p.m.)

http://vincent.is/speeding-up-django-postgres/

+Django Analytical (June 7, 2015, 4:52 p.m.)

1-easy_install django-analytical

2-
INSTALLED_APPS = [
...
'analytical',
...
]

3-In the base.html
{% load analytical %}
<!DOCTYPE ... >
<html>
<head>
{% analytical_head_top %}

...

{% analytical_head_bottom %}
</head>
<body>
{% analytical_body_top %}

...

{% analytical_body_bottom %}
</body>
</html>

4-Create an account on this site:
http://clicky.com/66453175
I have already registered: Username is Mohsen_Hassani and the password MohseN4301

5- There are some JavaScript snippets which should be copied from clicky.com into your template. They look like this:

This should be before the </body> </html> tags:
<script src="//static.getclicky.com/js" type="text/javascript"></script>
<script type="text/javascript">try{ clicky.init(100851091); }catch(e){}</script>
<noscript><p><img alt="Clicky" width="1" height="1" src="//in.getclicky.com/100851091ns.gif" /></p></noscript>


+Templates - Do Mathematic (Jan. 14, 2016, 2:14 p.m.)

http://slacy.com/blog/2010/07/using-djangos-widthratio-template-tag-for-multiplication-division/

Using Django’s widthratio template tag for multiplication & division.

I find it a bit odd that Django has a template filter for adding values, but none for multiplication and division. It’s fairly straightforward to add your own math tags or filters, but why bother if you can use the built-in one for what you need?

Take a closer look at the widthratio template tag. Given {% widthratio a b c %} it computes (a/b)*c

So, if you want to do multiplication, all you have to do is pass b=1, and the result will be a*c.

Of course, you can do division by passing c=1. (a=1 would also work, but has possible rounding side effects)

Note: The results are rounded to an integer before returning, so this may have marginal utility for many cases.

So, in summary:

to compute A*B: {% widthratio A 1 B %}
to compute A/B: {% widthratio A B 1 %}

And, since add is a filter and not a tag, you can always do crazy stuff like:

compute A^2: {% widthratio A 1 A %}
compute (A+B)^2: {% widthratio A|add:B 1 A|add:B %}
compute (A+B) * (C+D): {% widthratio A|add:B 1 C|add:D %}
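As a sanity check, the tag's arithmetic can be reproduced in plain Python. The helper below is only a sketch of what {% widthratio a b c %} computes (the ratio rounded to an integer), not Django's actual implementation:

```python
def widthratio(value, max_value, max_width):
    # {% widthratio value max_value max_width %} computes
    # (value / max_value) * max_width, rounded to an integer.
    return int(round((float(value) / float(max_value)) * float(max_width)))


print(widthratio(7, 1, 6))   # multiplication: 7 * 6 = 42
print(widthratio(10, 5, 1))  # division: 10 / 5 = 2
```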

+URLS - Allow entering dot (.) in url pattern (Dec. 2, 2014, 10:03 p.m.)

[-\w.]+
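A quick way to confirm what this character class accepts (word characters, hyphens and dots), using Python's re module; the sample slugs are made up:

```python
import re

# Same character class as in the URL pattern, anchored for a full-string test.
pattern = re.compile(r'^[-\w.]+$')

print(bool(pattern.match('report-v1.2.pdf')))   # True: dots and hyphens allowed
print(bool(pattern.match('has space')))         # False: spaces are not matched
print(bool(pattern.match('path/with/slash')))   # False: slashes are not matched
```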

+Change the value of a QueryDict (Nov. 18, 2014, 2:17 a.m.)

If you try to change a value on a QueryDict (e.g. request.POST) you will get an error:
"This QueryDict instance is immutable"

So this is how you can change it (the whole of it, or any item inside):
mutable = request.POST._mutable
request.POST._mutable = True
request.session['search_criteria']['region'] = rid
request.session.save()
request.POST = request.session['search_criteria']
request.POST._mutable = mutable

+Templates - Conditional Extend (Sept. 22, 2014, 11:45 a.m.)

{% extends supervising|yesno:"supervising/tasks.html,desktop/tasks_list.html" %}

{% extends variable %} uses the value of variable. If the variable evaluates to a string, Django will use that string as the name of the parent template. If the variable evaluates to a Template object, Django will use that object as the parent template.
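The yesno filter behind this trick just maps truthiness to one of the comma-separated strings. A minimal Python sketch of the two-value form (Django's real filter also accepts a third "maybe" value used for None):

```python
def yesno(value, arg):
    # 'mapping_if_true,mapping_if_false' -> pick one by the truthiness of value.
    true_val, false_val = arg.split(',')[:2]
    return true_val if value else false_val


print(yesno(True, 'supervising/tasks.html,desktop/tasks_list.html'))
# supervising/tasks.html
```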

+Adding CSS class in a ModelForm (Sept. 13, 2014, 1:15 a.m.)

self.fields['specie'].widget.attrs['class'] = 'autocomplete'

+Views - JSON object serialization (AJAX) (Jan. 3, 2016, 3:03 p.m.)

from django.core import serializers

foos = Foo.objects.all()
data = serializers.serialize('json', foos)

return HttpResponse(data, content_type='application/json')
----------------------------------------------------------------------------
import json

def json_response(something):
    return HttpResponse(json.dumps(something), content_type='application/javascript; charset=UTF-8')
----------------------------------------------------------------------------
from django.core.serializers.json import DjangoJSONEncoder

def categories_view(request):
    categories = Category.objects.annotate(notes_count=Count('notes__pk')).values('pk', 'name', 'notes_count')
    data = json.dumps(list(categories), cls=DjangoJSONEncoder)
    return HttpResponse(data, content_type='application/json')
----------------------------------------------------------------------------
data = serializers.serialize('xml', SomeModel.objects.all(), fields=('name','size'))
----------------------------------------------------------------------------
all_objects = list(Restaurant.objects.all()) + list(Place.objects.all())
data = serializers.serialize('xml', all_objects)
----------------------------------------------------------------------------
For Django 1.7+:

from django.http import JsonResponse

return JsonResponse({'foo':'bar'})
----------------------------------------------------------------------------
Serializing non-dictionary objects
In order to serialize objects other than dict you must set the safe parameter to False:

return JsonResponse([1, 2, 3], safe=False)
Without passing safe=False, a TypeError will be raised.
----------------------------------------------------------------------------
View:

indexed_companies = Company.objects.filter(index=True, business_group_id=request.POST['bid'])
indexed_companies = serialize('json', indexed_companies)

companies = Company.objects.filter(business_group_id=request.POST['bid'])
companies = serialize('json', filter_companies(companies, request.POST))
return JsonResponse({'indexed_companies': indexed_companies, 'companies': companies})


Jquery:

$('.search-forms').submit(function(e) {
    e.preventDefault();
    $.ajax({
        type: 'POST',
        url: $(this).attr("action"),
        data: $(this).serialize(),
        dataType: 'json',
        success: function(json) {
            var indexed_companies = $.parseJSON(json['indexed_companies']);
            var companies = $.parseJSON(json['companies']);
            $('#found-companies').text(companies.length);
            $('#indexed-companies').text(indexed_companies.length);
            $.each(indexed_companies, function(idx, indexed_company) {
                console.log(indexed_company);
                $('<tr>').appendTo('#indexed-members table');
                $('<td>' + (idx + 1) + '</td>').appendTo('#indexed-members table tr:last-child');
                $('<td>' + indexed_company.fields.province + '</td>').appendTo('#indexed-members table tr:last-child');
                $('<td>' + indexed_company.fields.manager + '</td>').appendTo('#indexed-members table tr:last-child');
                $('<td>' + indexed_company.fields.name + '</td>').appendTo('#indexed-members table tr:last-child');
                $('</tr>').appendTo('#indexed-members table');
            });
        },
        error: function() {
            $('#search-preloader').css('display', 'none');
            console.log('{% trans "Problem with connecting to the server" %}.');
        }
    });
});
----------------------------------------------------------------------------
If you need to serialize only some fields of an object, you cannot use this:
return JsonResponse({'products': serialize('json', Coffee.objects.all().values('id', 'name'))})

The correct way is:
return JsonResponse({'products': serialize('json', Coffee.objects.all(), fields=('id', 'name'))})

+Models - Overriding save method (Aug. 21, 2014, 1:03 p.m.)

from tastypie.utils.timezone import now
from django.contrib.auth.models import User
from django.db import models
from django.utils.text import slugify


class Entry(models.Model):
    user = models.ForeignKey(User)
    pub_date = models.DateTimeField(default=now)
    title = models.CharField(max_length=200)
    slug = models.SlugField()
    body = models.TextField()

    def __unicode__(self):
        return self.title

    def save(self, *args, **kwargs):
        # For automatic slug generation.
        if not self.slug:
            self.slug = slugify(self.title)[:50]

        return super(Entry, self).save(*args, **kwargs)

+Models - AUTO_NOW and AUTO_NOW_ADD (Aug. 21, 2014, 1:02 p.m.)

class Blog(models.Model):
    title = models.CharField(max_length=100)
    added = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)

auto_now_add tells Django to set the field to the current date & time when a new row is added. auto_now tells Django to update the field with the current date & time EVERY time the record is saved.

+Query - Call a field name by dynamic values (Aug. 21, 2014, 12:58 p.m.)

properties = Properties.objects.filter(**{'%s__age_status' % p_type: request.POST['age_status']})
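The `**{...}` part is ordinary Python keyword-argument unpacking, not a Django feature. A stand-in function (hypothetical, it just echoes its kwargs) shows how the dynamic lookup name is built:

```python
def filter_stub(**kwargs):
    # Stand-in for queryset.filter(): returns the keyword arguments it received.
    return kwargs


p_type = 'house'  # would normally come from the request
lookup = {'%s__age_status' % p_type: 'new'}
print(filter_stub(**lookup))  # {'house__age_status': 'new'}
```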

+Settings - Set a settings for shell (Aug. 21, 2014, 12:56 p.m.)

DJANGO SETTINGS MODULE for shell:
python manage.py shell --settings=nimkatonilne.settings

+Admin - Deleting the file/image on deleting an object (Aug. 21, 2014, 12:54 p.m.)

1-Create a file named `clean_up.py` with the following contents:

import os

from django.conf import settings


def clean_up(sender, instance, *args, **kwargs):
    for field in sender._meta.get_fields():
        field_types = ['FileBrowseField', 'ImageField', 'FileField']
        if field.__class__.__name__ in field_types:
            try:
                os.remove(settings.MEDIA_ROOT + str(getattr(instance, field.name)))
            except (OSError, IOError):
                pass
--------------------------------------------------------------------------------------------
2- Open the models.py file:

Import the `clean_up` function from the `clean_up` module and add the following line at the bottom of each model having a FileField or ImageField or FileBrowseField:

models.signals.post_delete.connect(clean_up, sender=Ads)

+URLS - Redirect to a URL in urls.py (Aug. 21, 2014, 12:53 p.m.)

from django.views.generic import RedirectView
from django.core.urlresolvers import reverse_lazy

(r'^one/$', RedirectView.as_view(url='/another/')),

OR

url(r'^some-page/$', RedirectView.as_view(url=reverse_lazy('my_named_pattern'))),

+Forms - Overriding and manipulating fields (Nov. 30, 2015, 12:35 p.m.)

class CheckoutForm(forms.ModelForm):

    def __init__(self, request, *args, **kwargs):
        super(CheckoutForm, self).__init__(*args, **kwargs)
        self.request = request
        print(request.user)

    class Meta:
        model = Address
        exclude = ('fax_number',)

-------------------------------------------------

class ProfileForm(forms.Form):
    required_css_class = 'required'

-------------------------------------------------

def __init__(self, request, *args, **kwargs):
    super(InstituteRegistrationForm, self).__init__(*args, **kwargs)
    self.request = request

-------------------------------------------------

if request.user.cellphone:
    self.fields['cell_phone_number'].widget.attrs['readonly'] = 'true'

-------------------------------------------------

if request.user.email:
    self.fields['email'].widget.attrs['readonly'] = 'true'

-------------------------------------------------

self.fields['city'].queryset = City.objects.filter(province__allow_delete=False)
self.fields['city'].initial = '1'

-------------------------------------------------

self.fields['first_name'].required = True
self.fields['first_name'].widget.attrs['required'] = True

-------------------------------------------------

for field in self.fields.values():
    field.widget.attrs['required'] = True
    field.required = True

-------------------------------------------------

self.fields['national_team'].empty_label = None

-------------------------------------------------

self.fields['allowed_online_calls'] = forms.ModelMultipleChoiceField(
    queryset=Choices.objects.filter(choice='customer'),
    widget=forms.CheckboxSelectMultiple())

-------------------------------------------------

Hide a field:
self.fields['state'].widget = forms.HiddenInput()

-------------------------------------------------

class UpdateShare(forms.ModelForm):
    class Meta:
        model = ManualEntries
        exclude = ['dt']
        widgets = {
            'description': forms.Textarea(attrs={'rows': 3}),
        }

-------------------------------------------------

class QuestionnaireForm(forms.ModelForm):
    class Meta:
        model = Questionnaire
        fields = ['code', 'title', 'grades', 'description', 'enable']
        widgets = {
            'grades': forms.CheckboxSelectMultiple
        }

-------------------------------------------------

self.fields['amount'].help_text = 'AAA'

-------------------------------------------------

Change ModelChoiceField items text:

self.fields['parent'].label_from_instance = lambda obj: obj.other_name

-------------------------------------------------

def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)

-------------------------------------------------

When passing an instance at render time, like:
{'form': ProfileForm(instance=request.user)}

if you need to change values in the __init__ of the ModelForm, use "self.initial":

self.initial['first_name'] = 'aa'

-------------------------------------------------

class CertificateForm(forms.ModelForm):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        date = self['date'].value()
        if date and not isinstance(date, str):
            self.initial['date'] = '-'.join([str(x) for x in list(get_persian_date(date).values())])
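
To make the join above concrete: get_persian_date is a project helper that is not shown in these notes, so the return value below is a hypothetical stand-in. Assuming it returns a dict like {'year': 1398, 'month': 2, 'day': 14}, the expression produces a dash-separated date string:

```python
# Hypothetical return value of the project's get_persian_date() helper.
persian_date = {"year": 1398, "month": 2, "day": 14}

# Same join expression as in the form's __init__ above.
formatted = "-".join(str(x) for x in persian_date.values())
print(formatted)  # 1398-2-14
```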

-------------------------------------------------

Change max_length validator error message:

caller_id.validators[-1].message = _('The text is too long.')

-------------------------------------------------

Docker
+Docker behind SOCKS proxy (Oct. 24, 2018, 2:59 p.m.)

1- mkdir -p /etc/systemd/system/docker.service.d

2- vim /etc/systemd/system/docker.service.d/http-proxy.conf

3-
[Service]
Environment="HTTP_PROXY=socks5://127.0.0.1:1080/"

4- systemctl daemon-reload

5- systemctl restart docker

+Commands (Oct. 24, 2018, 3:22 p.m.)

docker run <image>
Downloads the image, if it is not already present, and runs it as a container.

----------------------------------------------------------------

docker start <name | id>

----------------------------------------------------------------

Get the process ID of the container
docker inspect container | grep Pid

----------------------------------------------------------------

Stop a running container:
docker stop ContainerID

----------------------------------------------------------------

We can see the ports by running:
docker port InstanceID

----------------------------------------------------------------

See the top processes within a container:
docker top ContainerID

----------------------------------------------------------------

docker images

docker images -q
-q tells Docker to return only the image IDs.

----------------------------------------------------------------

docker inspect <image>
The output will show detailed information on the Image.

----------------------------------------------------------------

docker ps (add -a to include stopped containers)
OR
docker container ls

----------------------------------------------------------------

Statistics of a running container:
docker stats ContainerID
The output will show the CPU and Memory utilization of the Container.

----------------------------------------------------------------

Delete a container:
docker rm ContainerID

----------------------------------------------------------------

Pause the processes in a running container:
docker pause ContainerID
The above command will pause the processes in a running container.

----------------------------------------------------------------

docker unpause ContainerID

----------------------------------------------------------------

Kill the processes in a running container
docker kill ContainerID

--------------------------------------------------------------

Attach to a running container:
docker attach ContainerID

Note: attaching may appear to hang or produce no output. To get a shell in the container, use the following command instead:
docker exec -it <container-id> bash

----------------------------------------------------------------

docker pull gitlab/gitlab-ce

----------------------------------------------------------------

Listing All Docker Networks:
docker network ls

----------------------------------------------------------------

Inspecting a Docker network:
If you want to see more details on the network associated with Docker, you can use the Docker network inspect command.
docker network inspect networkname
Example:
docker network inspect bridge

----------------------------------------------------------------

docker logs -f <name>

----------------------------------------------------------------

--detach --name

----------------------------------------------------------------

See all the commands that were run with an image via a container:
docker history ImageID

----------------------------------------------------------------

Removing Docker Images:
docker rmi ImageID

----------------------------------------------------------------

Set the hostname inside the container:
--hostname gitlab.mohsenhassani.com

----------------------------------------------------------------

docker run -it centos /bin/bash
Options must come before the image name. The -it argument runs the container in interactive mode with a tty attached.
/bin/bash runs the bash shell once CentOS is up and running.

----------------------------------------------------------------

docker run -p 8080:8080 -p 50000:50000 jenkins

The -p is used to map the port number of the internal Docker image to our main Ubuntu server so that we can access the container accordingly.

----------------------------------------------------------------

Tell Docker to expose the HTTP and SSH ports from GitLab on ports 30080 and 30022, respectively.

--publish 30080:80

--publish 30022:22

----------------------------------------------------------------

See information on the Docker running on the system:

docker info

Return Value

The output will provide the various details of the Docker installed on the system such as:

Number of containers
Number of images
The storage driver used by Docker
The root directory used by Docker
The execution driver used by Docker

----------------------------------------------------------------

Stop all running containers:
docker stop $(docker ps -a -q)

Delete all stopped containers:
docker rm $(docker ps -a -q)

----------------------------------------------------------------

+Docker Compose (Oct. 24, 2018, 8:31 p.m.)

1- curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose


2- chmod +x /usr/local/bin/docker-compose

+Difference between image and container (Dec. 14, 2018, 1:02 a.m.)

An instance of an image is called a container. When the image is started, you have a running container of this image. You can have many running containers of the same image.


You can see all your images with "docker images" whereas you can see your running containers with "docker ps" (and you can see all containers with docker ps -a).

+Command Examples - docker run (Dec. 14, 2018, 1:36 a.m.)

docker run -v /full/path/to/html/directory:/usr/share/nginx/html:ro -p 8080:80 -d nginx

-v /full/path/to/html/directory:/usr/share/nginx/html:ro
Maps the directory holding our web page to the required location in the image. The ro field instructs Docker to mount it in read-only mode. It’s best to pass Docker the full paths when specifying host directories.

-p 8080:80 maps network service port 80 in the container to 8080 on our host system.

-d detaches the container from our command line session. We don’t want to interact with this container.

----------------------------------------------------------------------

docker run --name foo -d -p 8080:80 mynginx

--name foo gives the container a fixed name, rather than one of the randomly assigned names.

----------------------------------------------------------------------

docker run busybox echo "hello from busybox"

----------------------------------------------------------------------

-P will publish all exposed ports to random ports

We can see the ports by running:
docker port InstanceID

----------------------------------------------------------------------

docker run -d -p 80:80 my_image service nginx start

----------------------------------------------------------------------

docker run -d -p 80:80 my_image nginx -g 'daemon off;'

----------------------------------------------------------------------

Restart policies

--restart=on-failure[:max-retries]
Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.


--restart=always
Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.


--restart=unless-stopped
Always restart the container regardless of the exit status, including on daemon startup, except if the container was put into a stopped state before the Docker daemon was stopped.

----------------------------------------------------------------------

VOLUME (shared filesystems):

-v, --volume=[host-src:]container-dest[:<options>]: Bind mount a volume.
The comma-delimited `options` are [rw|ro], [z|Z], [[r]shared|[r]slave|[r]private], and [nocopy].

The 'host-src' is an absolute path or a name value.
If neither 'rw' nor 'ro' is specified, the volume is mounted in read-write mode.

The `nocopy` mode is used to disable automatically copying the requested volume path in the container to the volume storage location.
For named volumes, `copy` is the default mode. Copy modes are not supported for bind-mounted volumes.

--volumes-from="": Mount all volumes from the given container(s)

----------------------------------------------------------------------

USER

-u="", --user="": Sets the username or UID used and optionally the groupname or GID for the specified command.

----------------------------------------------------------------------

WORKDIR

The default working directory for running binaries within a container is the root directory (/), but the developer can set a different default with the Dockerfile WORKDIR command. The operator can override this with:

-w="": Working directory inside the container

----------------------------------------------------------------------

docker run \
--rm \
--detach \
--env KEY=VALUE \
--ip 10.10.9.75 \
--publish 3000:3000 \
--volume my_volume \
--name my_container \
--tty --interactive \
--volume /my_volume \
--workdir /app \
IMAGE bash

----------------------------------------------------------------------

--rm Automatically remove the container when it exits. The alternative would be to manually stop it and then remove it.

----------------------------------------------------------------------

+Managing Ports (Dec. 14, 2018, 1:24 a.m.)

In Docker, the containers themselves can have applications running on ports. When you run a container, if you want to access the application in the container via a port number, you need to map the port number of the container to the port number of the Docker host.

To understand what ports are exposed by the container, you should use the Docker inspect command to inspect the image:
docker inspect jenkins

The output of the inspect command gives a JSON output. If we observe the output, we can see that there is a section of "ExposedPorts" and see that there are two ports mentioned. One is the data port of 8080 and the other is the control port of 50000.

To run Jenkins and map the ports, you need to change the docker run command and add the -p option, which specifies the port mapping. So, you need to run the following command:

docker run -p 8080:8080 -p 50000:50000 jenkins

The left-hand side of the port number mapping is the Docker host port to map to and the right-hand side is the Docker container port number.
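
As a small pure-Python illustration of that convention (just string handling, not a Docker API), splitting a "-p" value yields the (host_port, container_port) pair:

```python
def parse_port_mapping(value: str) -> tuple:
    """Split a Docker-style 'HOST:CONTAINER' port mapping.

    The left-hand side is the port on the Docker host and the
    right-hand side is the port inside the container.
    """
    host_port, container_port = value.split(":")
    return int(host_port), int(container_port)

print(parse_port_mapping("8080:8080"))  # (8080, 8080)
print(parse_port_mapping("30080:80"))   # (30080, 80)
```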

+Docker Network (Dec. 14, 2018, 2:03 a.m.)

When docker is installed, it creates three networks automatically.
docker network ls

NETWORK ID NAME DRIVER SCOPE
c2c695315b3a bridge bridge local
a875bec5d6fd host host local
ead0e804a67b none null local

--------------------------------------------------------------------

The bridge network is the network in which containers are run by default. So that means when we run a container, it runs in this bridge network. To validate this, let's inspect the network:

docker network inspect bridge

--------------------------------------------------------------------

You can see that our container is listed under the Containers section in the output. What we also see is the IP address this container has been allotted - 172.17.0.2.

--------------------------------------------------------------------

Defining our own networks:

docker network create my-network-net
docker run -d --name es --net my-network-net -p 9200:9200 -p 9300:9300 <image>

--------------------------------------------------------------------

+When to use --hostname in docker? (Dec. 15, 2018, 2:55 a.m.)

The --hostname flag only changes the hostname inside your container. This may be needed if your application expects a specific value for the hostname. It does not change DNS outside of docker, nor does it change the networking isolation, so it will not allow others to connect to the container with that name.

You can use the container name or the container's (short, 12 character) id to connect from container to container with docker's embedded dns as long as you have both containers on the same network and that network is not the default bridge.

+Installation (Feb. 28, 2017, 10:31 a.m.)

Debian:

1- Install packages to allow apt to use a repository over HTTPS:
apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common

2- Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -

3- Use the following command to set up the stable repository:
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

4- apt update

5- apt install docker-ce

------------------------------------------------------------------

Fedora:
Install Community Edition (CE)

1- Install the dnf-plugins-core package which provides the commands to manage your DNF repositories from the command line.
dnf -y install dnf-plugins-core


2- Use the following command to set up the stable repository. (You might need a proxy)
proxychains4 dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo

3- Install the latest version of Docker CE: (You might need a proxy)
dnf install docker-ce

------------------------------------------------------------------

+Introduction (Feb. 27, 2017, 12:30 p.m.)

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
------------------------------------------------------------
In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they're running on and only requires applications be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.
------------------------------------------------------------
Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.
------------------------------------------------------------
Docker can be integrated into various infrastructure tools, including Amazon Web Services, Ansible, CFEngine, Chef, Google Cloud Platform, IBM Bluemix, HPE Helion Stackato, Jelastic, Jenkins, Kubernetes, Microsoft Azure, OpenStack Nova, OpenSVC, Oracle Container Cloud Service, Puppet, Salt, Vagrant, and VMware vSphere Integrated Containers.

ELK Stack
+beats (May 19, 2019, 9:05 p.m.)

This input plugin enables Logstash to receive events from the Elastic Beats framework.

The following example shows how to configure Logstash to listen on port 5044 for incoming Beats connections and to index into Elasticsearch:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

+Difference between Logstash and Beats (May 19, 2019, 9:01 p.m.)

Beats are lightweight data shippers that you install as agents on your servers to send specific types of operational data to Elasticsearch. Beats have a small footprint and use fewer system resources than Logstash.

Logstash has a larger footprint, but provides a broad array of input, filter, and output plugins for collecting, enriching, and transforming data from a variety of sources.

+Elasticsearch cat APIs (April 22, 2019, 1:24 a.m.)

To check the cluster health, we will be using the _cat API.

cat APIs

JSON is great… for computers. Even if it’s pretty-printed, trying to find relationships in the data is tedious. Human eyes, especially when looking at a terminal, need compact and aligned text. The cat API aims to meet this need.

-------------------------------------------------------------

curl '127.0.0.1:9200/_cat/master?v'

_cat/master?help

-------------------------------------------------------------

List All Indices:
curl '127.0.0.1:9200/_cat/indices?v'

-------------------------------------------------------------

+Installation (April 19, 2019, 10:25 p.m.)

apt install openjdk-8-jdk apt-transport-https curl nginx libpcre3-dev

----------------------------------------------------------------------

Elasticsearch
-----------------

1- wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

2- echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

3- apt update

4- apt install elasticsearch

5- Uncomment the following options from the file "/etc/elasticsearch/elasticsearch.yml"
network.host: localhost
http.port: 9200

6-
systemctl restart elasticsearch
systemctl enable elasticsearch

7- Check the status of the Elasticsearch server (it takes some time to start listening):
curl -X GET http://localhost:9200

----------------------------------------------------------------------

Kibana
---------

1- apt install kibana

2- systemctl enable kibana

3-
echo "admin:$(openssl passwd -apr1 my_password)" | sudo tee -a /etc/nginx/htpasswd.kibana

4- vim /etc/nginx/sites-enabled/kibana
server {
    listen 80;
    server_name logs.mhass.ir logs.mohsenhassani.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

5- systemctl restart nginx

----------------------------------------------------------------------

Logstash
-----------

1- apt install logstash


2- Create a logstash filter config file in "/etc/logstash/conf.d/logstash.conf", with this content:
input {
  tcp {
    port => 4300  # optional port number
    codec => json
  }
}

filter { }

output {
  elasticsearch { }
  stdout { }  # or stdout { codec => json } in case you want to see the data in the logs for debugging
}


3- Restart logstash services:
systemctl restart logstash
systemctl enable logstash


----------------------------------------------------------------------

For debugging:
tcpdump -nti any port 4300
tail -f /var/log/syslog
tail -f /var/log/logstash/logstash*.log

----------------------------------------------------------------------

+Introduction / Definitions (April 19, 2019, 10:24 p.m.)

First underlying layer: Logstash + Beats

Upper layer: Elasticsearch

Top layer: Kibana

------------------------------------------------------

"ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana.

Elasticsearch is a search and analytics engine.

Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch.

Kibana lets users visualize data with charts and graphs in Elasticsearch.

------------------------------------------------------

Elasticsearch is a distributed, RESTful search and analytics NoSQL engine based on Lucene.

Logstash is a light-weight data processing pipeline for managing events and logs from a wide variety of sources.

Kibana is a web application for visualizing data that works on top of Elasticsearch.

------------------------------------------------------

The Elastic Stack is the next evolution of the ELK Stack.

------------------------------------------------------

FlexBox
+Make body shrinkable and extensible (Sept. 18, 2019, 1:05 a.m.)

.holy-grail, .holy-grail-body {
    display: flex;
    flex: 1 1 auto;
    flex-direction: column;
}

Git
+.git/info/exclude vs. .gitignore (Aug. 8, 2018, 11:36 a.m.)

.gitignore is applied to every clone of the repo (it comes along as a versioned file), while .git/info/exclude only applies to your local copy of the repository.

-----------------------------------------------------------------------

The advantage of .gitignore is that it can be checked into the repository itself, unlike .git/info/exclude. Another advantage is that you can have multiple .gitignore files, one inside each directory/subdirectory for directory-specific ignore rules, unlike .git/info/exclude.

So, .gitignore is available across all clones of the repository. Therefore, in large teams, everyone ignores the same kinds of files (e.g. *.db, *.log). And you can have more specific ignore rules thanks to multiple .gitignore files.

.git/info/exclude is available for individual clones only, hence what one person ignores in his clone is not available in some other person's clone. For example, if someone uses Eclipse for development it may make sense for that developer to add a .build folder to .git/info/exclude because other devs may not be using Eclipse.

In general, ignore rules that should apply universally belong in .gitignore, while files that you want to ignore only in your local clone should go into .git/info/exclude.

+Change Remote Origin (Oct. 24, 2018, 2:26 p.m.)

git remote rm origin
git remote add origin git@github.com:username/repositoryName.git
git config master.remote origin
git config master.merge refs/heads/master

+Force Push (Oct. 14, 2018, 2:13 p.m.)

git push https://git.... --force

git push --force origin .....

git push https://git.... -f

git push -f origin .....

+Cancel a local git commit (Feb. 25, 2019, 4:04 p.m.)

Unstage all changes that have been added to the staging area:
To undo the most recent git add of files/folders that have not yet been committed:

git reset .

---------------------------------------------------------

Undo most recent commit:
git reset HEAD~1

---------------------------------------------------------

+Delete from reflog (Feb. 25, 2019, 4:04 p.m.)

git reflog delete HEAD@{3}

+Revert all local changes (Feb. 25, 2019, 4:03 p.m.)

Unstaged local changes (before you commit)

Discard all local changes, but save them for possible re-use later:
git stash

Discarding local changes (permanently) to a file:
git checkout -- <file>

Discard all local changes to all files permanently:
git reset --hard

+Comparing two branches (Feb. 25, 2019, 1:20 p.m.)

git diff branch_1 branch_2

+Rename a branch (Feb. 25, 2019, 4:01 p.m.)

1- Rename the local branch name:

If you are on the branch:
git branch -m <newname>

If you are on a different branch:
git branch -m <oldname> <newname>


2- Delete the old name remote branch and push the new name local branch:
git push origin :old-name new-name


3- Reset the upstream branch for the new-name local branch:

Switch to the branch and then:
git push origin -u new-name

+Delete a branch (Feb. 28, 2019, 9:17 a.m.)

Delete a Local GIT branch:

Use either of the following commands:
git branch -d branch_name
git branch -D branch_name


The -d option stands for --delete, which would delete the local branch, only if you have already pushed and merged it with your remote branches.

The -D option stands for --delete --force, which deletes the branch regardless of its push and merge status, so be careful using this one!

------------------------------------------------------

Delete a remote GIT branch:

Use either of the following commands:
git push <remote_name> --delete <branch_name>
git push <remote_name> :<branch_name>

------------------------------------------------------

Delete a remote branch, short form:

git push also accepts -d as an alias for --delete, e.g. git push origin -d branch_name.

------------------------------------------------------

+Fetch vs Pull (March 2, 2019, 10:03 a.m.)

In the simplest terms, git pull does a git fetch followed by a git merge.

---------------------------------------------------

git fetch only downloads new data from a remote repository - but it doesn't integrate any of this new data into your working files. Fetch is great for getting a fresh view of all the things that happened in a remote repository.

---------------------------------------------------

git pull, in contrast, is used with a different goal in mind: to update your current HEAD branch with the latest changes from the remote server. This means that pull not only downloads new data; it also directly integrates it into your current working copy files. This has a couple of consequences:

Since "git pull" tries to merge remote changes with your local ones, a so-called "merge conflict" can occur.

Like for many other actions, it's highly recommended to start a "git pull" only with a clean working copy. This means that you should not have any uncommitted local changes before you pull. Use Git's Stash feature to save your local changes temporarily.

+Merge (March 4, 2019, 12:29 p.m.)

Switch to the production branch and:
git merge other_branch

+Untracking/Re-indexing files based on .gitignore (March 4, 2019, 1:05 a.m.)

git add .

git commit -m "Some Message"

git push origin master

git rm -r --cached .

git add .

git commit -m "Reindexing..."

+Stash (March 4, 2019, 3:57 p.m.)

git stash

git stash pop

+Submodule (Nov. 29, 2017, 6:17 p.m.)

1- cd to the path where you want the module to be cloned.

2- git submodule add https://github.com/ceph/ceph-ansible.git

-----------------------------------------------------------

In case this error is raised:
"'<path>' already exists in the index"
git rm --cached <path>
and you should also delete the files from this path:
rm -rf .git/modules/...

-----------------------------------------------------------

To remove a submodule you need to:

1- Delete the relevant section from the .gitmodules file.
2- Stage the .gitmodules changes: git add .gitmodules
3- Delete the relevant section from .git/config.
4- Run: git rm --cached path_to_submodule (no trailing slash).
5- Run: rm -rf .git/modules/path_to_submodule
6- Commit: git commit -m "Removed submodule <name>"
7- Delete the now-untracked submodule files: rm -rf path_to_submodule

-----------------------------------------------------------

+Commands (July 29, 2017, 11:26 a.m.)

git pull

-------------------------------------------------

git fetch

-------------------------------------------------

git pull origin master

-------------------------------------------------

Create a branch:
git checkout -b branch_name

-------------------------------------------------

Work on an existing branch:
git checkout branch_name

-------------------------------------------------

View the changes you've made:
git status

-------------------------------------------------

View differences:
git diff

-------------------------------------------------

Delete all unstaged changes in the Git repository:
To discard all local changes to tracked files that have not been added to the staging area (untracked files/folders are left alone), type:

git checkout .

-------------------------------------------------

Delete all untracked changes in the Git repository:
git clean -f

-------------------------------------------------

Unstage all changes that have been added to the staging area:
To undo the most recent git add of files/folders that have not yet been committed:

git reset .

-------------------------------------------------

Undo most recent commit:
git reset HEAD~1

-------------------------------------------------

Merge created branch with master branch:
You need to be in the created branch.

git checkout NAME-OF-BRANCH
git merge master

-------------------------------------------------

Merge master branch with created branch:
You need to be in the master branch.
git checkout master
git merge NAME-OF-BRANCH

-------------------------------------------------

+Diff (July 29, 2017, 11:17 a.m.)

If you want to see what you haven't git added yet:
git diff myfile.txt

or if you want to see already-added changes
git diff --cached myfile.txt

+Modify existing / unpushed commits (Jan. 28, 2017, 3:12 p.m.)

git commit --amend -m "New commit message"

+Delete file from repository (Jan. 28, 2017, 3:04 p.m.)

If you deleted a file from the working tree, then commit the deletion:
git add . -A
git commit -m "Deleted some files..."
git push origin master

----------------------------------------------------------------------

Remove a file from a Git repository without deleting it from the local filesystem:
git rm --cached <filename>
git rm --cached -r <dir_name>
git commit -m "Removed folder from repository"
git push origin master

+.gitingore Rules (Jan. 28, 2017, 2:56 p.m.)

A blank line matches no files, so it can serve as a separator for readability.

A line starting with # serves as a comment.

An optional prefix ! negates the pattern; any matching file excluded by a previous pattern will become included again. If a negated pattern matches, it overrides lower-precedence pattern sources.

If the pattern ends with a slash, it is removed for the purpose of the following description, but it will only match a directory. In other words, foo/ will match a directory foo and paths underneath it, but will not match a regular file or a symbolic link foo (this is consistent with how pathspecs work in general in Git).

If the pattern does not contain a slash /, git treats it as a shell glob pattern and checks for a match against the pathname relative to the location of the .gitignore file (relative to the top level of the work tree if not from a .gitignore file).

Otherwise, git treats the pattern as a shell glob suitable for consumption by fnmatch(3) with the FNM_PATHNAME flag: wildcards in the pattern will not match a / in the pathname. For example, Documentation/*.html matches Documentation/git.html but not Documentation/ppc/ppc.html or tools/perf/Documentation/perf.html.

A leading slash matches the beginning of the pathname. For example, /*.c matches cat-file.c but not mozilla-sha1/sha1.c.
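
Python's fnmatch module lets '*' match across '/' boundaries, so it does not reproduce the FNM_PATHNAME behavior described above. A minimal hand-rolled translation (a sketch supporting only '*' and '?', not character classes or '**') illustrates the rule with the Documentation/*.html example:

```python
import re

def git_glob_to_regex(pattern: str):
    """Translate a simple gitignore-style glob to a regex in which
    '*' and '?' do not match '/', mimicking fnmatch(3) with FNM_PATHNAME."""
    parts = []
    for ch in pattern:
        if ch == "*":
            parts.append("[^/]*")   # any run of characters except '/'
        elif ch == "?":
            parts.append("[^/]")    # any single character except '/'
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$")

rx = git_glob_to_regex("Documentation/*.html")
print(bool(rx.match("Documentation/git.html")))              # True
print(bool(rx.match("Documentation/ppc/ppc.html")))          # False
print(bool(rx.match("tools/perf/Documentation/perf.html")))  # False
```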

+Examples (Aug. 21, 2014, 1:29 p.m.)

mkdir my_project
cd my_project
git init
git remote add origin https://MohsenHassani@bitbucket.org/MohsenHassani/my_project.git
git add .
git commit -m 'initial commit'
git push origin master

-----------------------------------------------------------------------------------------

After each change in project:
git add .
git commit -m '<the comment>'
git push origin master

-----------------------------------------------------------------------------------------

git config http.postBuffer 1048576000
git config --global user.name "Mohsen Hassani"
git config --global user.email "mohsen@mohsenhassani.com"
git config --global color.ui true
git config --global color.status auto
git config --global color.branch auto
git config --list
git log

git add -A .
git commit -m "File nonsense.txt is now removed"

git commit -m "message with a tpyo here"
git commit --amend -m "More changes - now correct"

git remote
git remote -v

export http_proxy=http://proxy:8080
// Set proxy for git globally
git config --global http.proxy http://proxy:8080
// To check the proxy settings
git config --get http.proxy
// Just in case you need to you can also revoke the proxy settings
git config --global --unset http.proxy

Gitlab
+Gitlab Flow (Oct. 8, 2018, 3:08 p.m.)

In git, you add files from the working copy to the staging area, then commit them to the local repo, and finally push to a shared remote repository. Once you are used to these three steps, the branching model becomes the challenge.
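The three steps map onto three commands. A quick sketch in a throwaway repository (the push is commented out because it assumes a remote named origin, which this sketch does not create):

```shell
set -e
cd "$(mktemp -d)" && git init -q .
git config user.email "you@example.com" && git config user.name "You"

echo "hello" > notes.txt
git add notes.txt                 # working copy -> staging area
git commit -q -m "Add notes"      # staging area -> local repo
# git push origin master          # local repo   -> shared remote
git log --oneline
```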


Since many organizations new to git have no conventions for how to work with it, it can quickly become a mess. The biggest problem they run into is having many long-running branches, each containing part of the changes. People have a hard time figuring out which branch they should develop on or deploy to production. Frequently the reaction to this problem is to adopt a standardized pattern such as git flow or GitHub flow. We think there is still room for improvement and will detail a set of practices we call GitLab flow.


Git flow and its problems:
Git flow was one of the first proposals to use git branches and it has received a lot of attention. It advocates a master branch and a separate develop branch, as well as supporting branches for features, releases, and hotfixes. Development happens on the develop branch, moves to a release branch, and is finally merged into the master branch.

Git flow is a well-defined standard, but its complexity introduces two problems. The first is that developers must use the develop branch rather than master, since master is reserved for code that is released to production. The convention is to call your default branch master and to mostly branch from and merge to it; since most tools automatically make the master branch the default one and display it by default, it is annoying to have to switch to another one.

The second problem is the complexity introduced by the hotfix and release branches. These branches can be a good idea for some organizations but are overkill for the vast majority. Nowadays most organizations practice continuous delivery, which means that the default branch can be deployed, so hotfix and release branches can be avoided along with all the ceremony they introduce. An example of this ceremony is merging release branches back. Though specialized tools exist to solve this, they require documentation and add complexity. Frequently developers make mistakes, for example changes merged only into master and not into the develop branch. The root cause of these errors is that git flow is too complex for most use cases. And doing releases doesn't automatically mean also doing hotfixes.


GitHub flow as a simpler alternative:
In reaction to git flow, a simpler alternative was detailed: GitHub flow. This flow has only feature branches and a master branch. It is very simple and clean, and many organizations have adopted it with great success. Atlassian recommends a similar strategy, although they rebase feature branches. Merging everything into the master branch and deploying often means you minimize the amount of code in 'inventory', which is in line with lean and continuous delivery best practices. But this flow still leaves many questions unanswered regarding deployments, environments, releases, and integration with issues. With GitLab flow we offer additional guidance for these questions.


Production branch with GitLab flow:
GitHub flow assumes you are able to deploy to production every time you merge a feature branch. This is possible for e.g. SaaS applications, but there are many cases where it is not. One is when you are not in control of the exact release moment, for example an iOS application that needs to pass App Store validation. Another is when you have deployment windows (workdays from 10am to 4pm when the operations team is at full capacity) but also merge code at other times. In these cases you can make a production branch that reflects the deployed code. You deploy a new version by merging master into the production branch. If you need to know what code is in production you can just check out the production branch. The approximate time of deployment is easily visible as the merge commit in the version control system, and this time is pretty accurate if you automatically deploy your production branch. If you need a more exact time you can have your deployment script create a tag on each deployment. This flow prevents the overhead of releasing, tagging and merging that is common to git flow.
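A sketch of this flow in a throwaway local repository (branch and tag names are made up; in practice the production branch would also be pushed to the remote):

```shell
set -e
cd "$(mktemp -d)" && git init -q . && git checkout -q -b master
git config user.email "you@example.com" && git config user.name "You"
git commit -q --allow-empty -m "initial commit"

git branch production                        # production reflects the deployed code
git commit -q --allow-empty -m "new feature" # development continues on master

# Deploy a new version by merging master into the production branch:
git checkout -q production
git merge -q master
git tag -a deploy-2019-06-01 -m "deployed"   # optional: record the exact deploy time
git log --oneline
```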


Environment branches with GitLab flow:
It might be a good idea to have an environment that is automatically updated from the master branch. In this case, the name of the environment might differ from the branch name. Suppose you have a staging environment, a pre-production environment and a production environment; the master branch is deployed on staging. To deploy to pre-production you create a merge request from the master branch to the pre-production branch, and going live happens by merging the pre-production branch into the production branch. This workflow, where commits only flow downstream, ensures that everything has been tested in all environments. If you need to cherry-pick a commit with a hotfix, it is common to develop it on a feature branch and merge it into master with a merge request; do not delete the feature branch. If master is good to go (it should be if you are practicing continuous delivery) you then merge it into the other branches. If this is not possible because more manual testing is required, you can send merge requests from the feature branch to the downstream branches.


Release branches with GitLab flow:
Only if you need to release software to the outside world do you need to work with release branches. In that case, each branch contains a minor version (2-3-stable, 2-4-stable, etc.). The stable branch uses master as a starting point and is created as late as possible; by branching as late as possible you minimize the time you have to apply bug fixes to multiple branches. After a release branch is announced, only serious bug fixes are included in it. If possible these bug fixes are first merged into master and then cherry-picked into the release branch. This way you can't forget to cherry-pick them into master and encounter the same bug in subsequent releases. This is called an 'upstream first' policy, also practiced by Google and Red Hat. Every time a bug fix is included in a release branch the patch version is raised (to comply with Semantic Versioning) by setting a new tag. Some projects also have a stable branch that points to the same commit as the latest released branch. In this flow it is not common to have a production branch (or git flow master branch).
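The 'upstream first' policy can be sketched in a throwaway repository (the file names are made up; branch and tag names follow the conventions above):

```shell
set -e
cd "$(mktemp -d)" && git init -q . && git checkout -q -b master
git config user.email "you@example.com" && git config user.name "You"
echo "version 2.3" > VERSION
git add VERSION && git commit -q -m "2.3 development"

git branch 2-3-stable                    # branch as late as possible

# Fix a serious bug on master first ('upstream first'):
echo "bugfix" > fix.txt
git add fix.txt && git commit -q -m "Fix serious bug"
fix=$(git rev-parse master)

# ...then cherry-pick it into the release branch and raise the patch version:
git checkout -q 2-3-stable
git cherry-pick -x "$fix"
git tag v2.3.1
```

The -x flag records the original commit hash in the cherry-picked commit message, which makes the upstream/downstream relation traceable.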


Merge/pull requests with GitLab flow:
Merge or pull requests are created in a git management application and ask an assigned person to merge two branches. Tools such as GitHub and Bitbucket choose the name pull request since the first manual action would be to pull the feature branch. Tools such as GitLab and others choose the name merge request since that is the final action that is requested of the assignee. In this article we'll refer to them as merge requests.

If you work on a feature branch for more than a few hours, it is good to share the intermediate result with the rest of the team. This can be done by creating a merge request without assigning it to anyone; instead you mention people in the description or a comment (/cc @mark @susan). This means it is not ready to be merged but feedback is welcome. Your team members can comment on the merge request in general or on specific lines with line comments. The merge request serves as a code review tool, so no separate tools such as Gerrit or Review Board should be needed. If the review reveals shortcomings, anyone can commit and push a fix; commonly this is the creator of the merge/pull request. The diff in the merge/pull request automatically updates when new commits are pushed to the branch.

When you feel the branch is ready to be merged, you assign it to the person who knows most about the codebase you are changing, and mention any other people you would like feedback from. There is room for more feedback, and after the assigned person feels comfortable with the result the branch is merged. If the assigned person does not feel comfortable, they can close the merge request without merging.

In GitLab it is common to protect the long-lived branches (e.g. the master branch) so that normal developers can't modify them. So, if you want to merge into a protected branch, you assign the merge request to someone with maintainer permissions.


Issue tracking with GitLab flow:
GitLab flow is a way to make the relation between the code and the issue tracker more transparent.

Any significant change to the code should start with an issue where the goal is described. Having a reason for every code change is important to inform everyone on the team and to help people keep the scope of a feature branch small. In GitLab each change to the codebase starts with an issue in the issue tracking system. If there is no issue yet it should be created first provided there is significant work involved (more than 1 hour). For many organizations this will be natural since the issue will have to be estimated for the sprint. Issue titles should describe the desired state of the system, e.g. "As an administrator I want to remove users without receiving an error" instead of "Admin can't remove users.".

When you are ready to code you start a branch for the issue from the master branch. The name of this branch should start with the issue number, for example '15-require-a-password-to-change-it'.
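Sketched in a throwaway repository, using the example issue branch name from above:

```shell
set -e
cd "$(mktemp -d)" && git init -q . && git checkout -q -b master
git config user.email "you@example.com" && git config user.name "You"
git commit -q --allow-empty -m "initial commit"

# Start the branch for issue 15 from master:
git checkout -q -b 15-require-a-password-to-change-it master
git rev-parse --abbrev-ref HEAD   # prints 15-require-a-password-to-change-it
```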

When you are done or want to discuss the code you open a merge request. This is an online place to discuss the change and review the code. Opening a merge request is a manual action since you do not always want to merge a new branch you push, it could be a long-running environment or release branch. If you open the merge request but do not assign it to anyone it is a 'Work In Progress' merge request. These are used to discuss the proposed implementation but are not ready for inclusion in the master branch yet. Pro tip: Start the title of the merge request with [WIP] or WIP: to prevent it from being merged before it's ready.

When the author thinks the code is ready, the merge request is assigned to a reviewer. The reviewer presses the merge button when they think the code is ready for inclusion in the master branch. The code is then merged and a merge commit is generated that makes this event easily visible later on. Merge requests always create a merge commit, even when the commit could be added without one; this merge strategy is called 'no fast-forward' in git. After the merge the feature branch is deleted since it is no longer needed; in GitLab this deletion is an option when merging.
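The 'no fast-forward' strategy corresponds to git merge --no-ff. Even when master has not moved, a merge commit with two parents is created (a sketch in a throwaway repository; branch names are made up):

```shell
set -e
cd "$(mktemp -d)" && git init -q . && git checkout -q -b master
git config user.email "you@example.com" && git config user.name "You"
git commit -q --allow-empty -m "initial commit"

git checkout -q -b feature
echo "work" > feature.txt
git add feature.txt && git commit -q -m "Implement feature"

git checkout -q master
git merge --no-ff -m "Merge branch 'feature'" feature
git log --oneline --graph          # the merge commit is visible in the graph
git branch -d feature              # delete the branch after merging
```

With a plain fast-forward merge there would be no merge commit, and the history would not show that these commits arrived together via a merge request.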

Suppose that a branch is merged but a problem occurs and the issue is reopened. In this case it is no problem to reuse the same branch name since it was deleted when the branch was merged. At any time there is at most one branch for every issue. It is possible that one feature branch solves more than one issue.

+Uninstall (Oct. 23, 2018, 4:25 p.m.)

1- sudo gitlab-ctl uninstall

2- sudo gitlab-ctl cleanse

3- sudo gitlab-ctl remove-accounts

4- sudo dpkg -P gitlab-ce

5- Delete these directories:
rm -r /opt/gitlab/
rm -r /var/opt/gitlab
rm -r /etc/gitlab
rm -r /var/log/gitlab

+Docker (Dec. 15, 2018, 4:04 p.m.)

docker pull gitlab/gitlab-ce:latest

-----------------------------------------------------------

docker run -d --hostname git.mohsenhassani.com -p 30443:443 -p 3080:80 -p 3022:22 --name gitlab --restart always -v /var/docker_data/gitlab/config:/etc/gitlab -v /var/docker_data/gitlab/logs:/var/log/gitlab -v /var/docker_data/gitlab/data:/var/opt/gitlab gitlab/gitlab-ce:latest

-----------------------------------------------------------

+Markdown Cheatsheet (March 10, 2018, 8:14 p.m.)

https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet

+Runner - .gitlab-ci.yml sample (Feb. 14, 2018, 11:38 a.m.)

update_docs:
  script:
    - mkdocs build
    - ssh-keyscan -H mohsenhassani.com >> ~/.ssh/known_hosts
    - scp -rC site/* root@mohsenhassani.com:/var/www/html/
    - ssh root@mohsenhassani.com "/etc/init.d/nginx restart"

+Send Notifications to Email (April 12, 2017, 3:03 p.m.)

https://docs.gitlab.com/omnibus/settings/smtp.html
https://docs.gitlab.com/ce/administration/troubleshooting/debug.html
-----------------------------------------------------------------
To test the mail server:
1- sudo gitlab-rails console production
-----------------------------------------------------------------
2- Look at the ActionMailer delivery_method:
ActionMailer::Base.delivery_method
-----------------------------------------------------------------
3- Check the mail settings:

If it's configured with smtp:
ActionMailer::Base.smtp_settings

If it is sendmail:
ActionMailer::Base.sendmail_settings

You may need to check your local mail logs (e.g. /var/log/mail.log) for more details.
-----------------------------------------------------------------
4- Send a test message via the console.
Notify.test_email('mohsen@mohsenhassani.com', 'Hello World', 'This is a test message').deliver_now

In case the email is not sent (after checking your mail), you can see the reason/error in:
tail -f /var/log/mail.log
-----------------------------------------------------------------
5- If you needed to change any configs, refer to this file:

vim /var/opt/gitlab/gitlab-rails/etc/gitlab.yml

OR depending on your gitlab version, maybe this one:

/etc/gitlab/gitlab.rb

And after any change to it:
gitlab-ctl reconfigure
-----------------------------------------------------------------
For fixing some problems I had to replace the default "postfix" with "sendmail":
apt install sendmail (this will remove postfix and install sendmail)

In /etc/hosts I had to put the required domain names to fix the error " Sender address rejected: Domain not found".
-----------------------------------------------------------------

+Deleting a runner (March 8, 2017, 7:38 p.m.)

gitlab-runner unregister --name runner-0

To remove all runners that fail verification:
gitlab-runner verify --delete

+Install Gitlab Runner (Feb. 25, 2017, 3:09 p.m.)

GitLab Runner is an application which processes builds. It can be deployed separately and works with GitLab CI through an API.
In order to run tests, you need at least one GitLab instance and one GitLab Runner.
-----------------------------------------------------------
Runners:
In GitLab CI, Runners run your YAML. A Runner is an isolated (virtual) machine that picks up jobs through the coordinator API of GitLab CI. A Runner can be specific to a certain project or serve any project in GitLab CI. A Runner that serves all projects is called a shared Runner.
-----------------------------------------------------------
https://docs.gitlab.com/runner/install/
-----------------------------------------------------------
1- Add GitLab's official repository:
apt-get install curl
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash

2-
cat > /etc/apt/preferences.d/pin-gitlab-runner.pref <<EOF
Explanation: Prefer GitLab provided packages over the Debian native ones
Package: gitlab-ci-multi-runner
Pin: origin packages.gitlab.com
Pin-Priority: 1001
EOF

3- Install gitlab-ci-multi-runner:
sudo apt-get install gitlab-ci-multi-runner

4- Register the Runner:
sudo gitlab-ci-multi-runner register

+Install GitLab on server (Feb. 25, 2017, 12:16 p.m.)

https://about.gitlab.com/installation/
https://about.gitlab.com/downloads/
-----------------------------------------------------------
1- Install and configure the necessary dependencies:
sudo apt-get install curl openssh-server ca-certificates postfix

2- Add the GitLab package server and install the package:
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo apt-get install gitlab-ce

3- Configure and start GitLab:
sudo gitlab-ctl reconfigure

4- Browse to the hostname and login:
On your first visit, you'll be redirected to a password reset screen to provide the password for the initial administrator account. Enter your desired password and you'll be redirected back to the login screen.
The default account's username is "root". Provide the password you created earlier and login. After login you can change the username if you wish.

+Install GitLab CI (Feb. 25, 2017, 11:46 a.m.)

GitLab CI is a part of GitLab, a web application with an API that stores its state in a database. It manages projects/builds and provides a nice user interface, besides all the features of GitLab.

https://github.com/gitlabhq/gitlab-ci/blob/5-2-stable/doc/install/installation.md
----------------------------------------------------------------
Starting from version 8.0, GitLab Continuous Integration (CI) is fully integrated into GitLab itself and is enabled by default on all projects.
----------------------------------------------------------------
GitLab offers a continuous integration service. If you add a .gitlab-ci.yml file to the root directory of your repository, and configure your GitLab project to use a Runner, then each push or merge request triggers your CI pipeline.
----------------------------------------------------------------

HTML
+iframe (June 3, 2018, 12:11 p.m.)

<!DOCTYPE html>
<html>
<head>
    <title>Mohsen Hassani</title>
    <style>
        body, html {
            margin: 0;
            overflow: hidden;
        }

        iframe {
            width: 100%;
            height: 95vh;
            border: 0;
        }
    </style>
</head>
<body>
    <div class="iframe-link">
        <iframe src="http://www.mohsenhassani.com">
            Please switch to another modern browser.
        </iframe>
    </div>
</body>
</html>

+Favicon (Feb. 20, 2019, 11:20 a.m.)

<link rel="shortcut icon" type="image/x-icon" href="favicon.ico" />
<link rel="apple-touch-icon" href="/custom_icon.png" />

+Conditions If (July 27, 2015, 3:02 p.m.)

You might need to change all the condition syntaxes below to this syntax:
<![if gte IE 9]>
<![endif]>
************************************************************
Target ALL VERSIONS of IE

<!--[if IE]>
<link rel="stylesheet" type="text/css" href="all-ie-only.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target everything EXCEPT IE

<!--[if !IE]><!-->
<link rel="stylesheet" type="text/css" href="not-ie.css" />
<!--<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 7 ONLY

<!--[if IE 7]>
<link rel="stylesheet" type="text/css" href="ie7.css">
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 6 ONLY

<!--[if IE 6]>
<link rel="stylesheet" type="text/css" href="ie6.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 5 ONLY

<!--[if IE 5]>
<link rel="stylesheet" type="text/css" href="ie5.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 5.5 ONLY

<!--[if IE 5.5000]>
<link rel="stylesheet" type="text/css" href="ie55.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 6 and LOWER

<!--[if lt IE 7]>
<link rel="stylesheet" type="text/css" href="ie6-and-down.css" />
<![endif]-->

<!--[if lte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6-and-down.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 7 and LOWER

<!--[if lt IE 8]>
<link rel="stylesheet" type="text/css" href="ie7-and-down.css" />
<![endif]-->

<!--[if lte IE 7]>
<link rel="stylesheet" type="text/css" href="ie7-and-down.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 8 and LOWER

<!--[if lt IE 9]>
<link rel="stylesheet" type="text/css" href="ie8-and-down.css" />
<![endif]-->

<!--[if lte IE 8]>
<link rel="stylesheet" type="text/css" href="ie8-and-down.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 6 and HIGHER

<!--[if gt IE 5.5]>
<link rel="stylesheet" type="text/css" href="ie6-and-up.css" />
<![endif]-->

<!--[if gte IE 6]>
<link rel="stylesheet" type="text/css" href="ie6-and-up.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 7 and HIGHER

<!--[if gt IE 6]>
<link rel="stylesheet" type="text/css" href="ie7-and-up.css" />
<![endif]-->

<!--[if gte IE 7]>
<link rel="stylesheet" type="text/css" href="ie7-and-up.css" />
<![endif]-->
-----------------------------------------------------------------------------------------------------
Target IE 8 and HIGHER

<!--[if gt IE 7]>
<link rel="stylesheet" type="text/css" href="ie8-and-up.css" />
<![endif]-->

<!--[if gte IE 8]>
<link rel="stylesheet" type="text/css" href="ie8-and-up.css" />
<![endif]-->

InfluxDB
+Queries (Dec. 12, 2018, 12:08 p.m.)

# influx

> show databases;

> show measurements

+Configuration (Dec. 9, 2018, 3:26 p.m.)

https://docs.influxdata.com/influxdb/v1.7/introduction/installation/#configuring-influxdb-oss

By default, InfluxDB uses the following network ports:

TCP port 8086 is used for client-server communication over InfluxDB’s HTTP API
TCP port 8088 is used for the RPC service for backup and restore

In addition to the ports above, InfluxDB also offers multiple plugins that may require custom ports. All port mappings can be modified through the configuration file, which is located at /etc/influxdb/influxdb.conf for default installations.

---------------------------------------------------------

The system has internal defaults for every configuration file setting. View the default configuration settings with the "influxd config" command.

--------------------------------------------------------

+Installation (Dec. 9, 2018, 3:24 p.m.)

https://docs.influxdata.com/influxdb/v1.7/introduction/installation/

Ubuntu & Debian installations are different. (Refer to the link above.)

1- curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -

2- source /etc/lsb-release

3- echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

4- apt-get update && sudo apt-get install influxdb

5- service influxdb start

---------------------------------------------------------

+Introduction (Dec. 9, 2018, 3:12 p.m.)

https://docs.influxdata.com/influxdb/v1.7/

InfluxDB is an open-source time series database (TSDB) developed by InfluxData. It is written in Go and optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, Internet of Things sensor data, and real-time analytics. It also has support for processing data from Graphite.
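Writes and queries go through the HTTP API (TCP port 8086 by default) using InfluxDB's line protocol. A hedged sketch, assuming a local InfluxDB 1.x server; the database name "mydb" and the measurement are made up, and the API calls are skipped if no server is listening:

```shell
# Line protocol: <measurement>[,<tag_key>=<tag_value>...] <field_key>=<field_value> [timestamp]
point="cpu,host=server01 usage=42.5"
echo "$point"

# Assuming a local InfluxDB on the default port:
if curl -s -o /dev/null 'http://localhost:8086/ping'; then
    curl -XPOST 'http://localhost:8086/query' --data-urlencode 'q=CREATE DATABASE mydb'
    curl -XPOST 'http://localhost:8086/write?db=mydb' --data-binary "$point"
    curl -G 'http://localhost:8086/query?db=mydb' --data-urlencode 'q=SELECT * FROM cpu'
fi
```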

Ionic
+Capacitor - Installation (Oct. 15, 2019, 9:16 p.m.)

npm install --save @capacitor/cli @capacitor/core

npx cap init tiptong ir.tiptong.www

+Capacitor - Description (Oct. 15, 2019, 9:14 p.m.)

Capacitor is an open-source native container (similar to Cordova) built by the Ionic team that you can use to build web/mobile apps that run on iOS, Android, Electron (Desktop), and as Progressive Web Apps with the same code base. It allows you to access the full native SDK on each platform, and easily deploy to App Stores or create a PWA version of your application.

Capacitor can be used with Ionic or any preferred frontend framework and can be extended with plugins. It has a rich set of official plugins and you can also use it with Cordova plugins.

+PWA (Oct. 15, 2019, 8:54 p.m.)

Start an app:
npx create-stencil tiptong-pwa

+CLI Commands (June 28, 2019, 11:24 p.m.)

Generate a new project:
ionic start
ionic start myApp tabs


ionic serve


npm uninstall @ionic-native/splash-screen


ng add @angular/pwa


ionic build --prod


ionic generate module auth
ionic generate module auth --flat
ionic g m auth --flat

+Installation (June 28, 2019, 12:11 a.m.)

1- Install the latest version of Node.js and npm

2- sudo npm install -g ionic

Jquery
+TimeOuts (April 24, 2019, 1:03 p.m.)

window.setInterval(function(){
/// call your function here
}, 5000);

-------------------------------------------------------------

$(function () {
setTimeout(runMyFunction, 10000);
});

-------------------------------------------------------------

+Find element by data attribute value (July 31, 2017, 1:18 p.m.)

$("li[data-step=2]").addClass('active');

+Error: Cannot read property 'msie' of undefined (Oct. 15, 2017, 11:43 a.m.)

Create a file, for example, "ie.js" and copy the content into it. Load it after jquery.js:

jQuery.browser = {};
(function () {
    jQuery.browser.msie = false;
    jQuery.browser.version = 0;
    if (navigator.userAgent.match(/MSIE ([0-9]+)\./)) {
        jQuery.browser.msie = true;
        jQuery.browser.version = RegExp.$1;
    }
})();

-----------------------------------------------------------------

or you can include this after loading the jquery.js file:
<script src="http://code.jquery.com/jquery-migrate-1.2.1.js"></script>

-----------------------------------------------------------------

+Call jquery code AFTER page loading (May 26, 2018, 6:07 p.m.)

$(window).on('load', function() {
$('#contact-us').click();
});

+if checkbox is checked (July 21, 2018, 11:31 a.m.)

$('#receive-sms').click(function() {
if ($(this).is(':checked')) {

}
});

+Disable Arrows on Number Inputs (Oct. 3, 2018, 12:43 p.m.)

CSS:

/* Hide HTML5 Up and Down arrows. */
input[type="number"]::-webkit-outer-spin-button, input[type="number"]::-webkit-inner-spin-button {
-webkit-appearance: none;
margin: 0;
}

input[type="number"] {
-moz-appearance: textfield;
}


---------------------------------------------------------------

jQuery(document).ready(function($) {

    // Disable scroll when focused on a number input.
    $('form').on('focus', 'input[type=number]', function(e) {
        $(this).on('wheel', function(e) {
            e.preventDefault();
        });
    });

    // Restore scroll on number inputs.
    $('form').on('blur', 'input[type=number]', function(e) {
        $(this).off('wheel');
    });

    // Disable up and down keys.
    $('form').on('keydown', 'input[type=number]', function(e) {
        if (e.which == 38 || e.which == 40)
            e.preventDefault();
    });
});

---------------------------------------------------------------

+Combobox (Jan. 22, 2019, 12:43 p.m.)

Get the text value of a selected option:

$( "#myselect option:selected" ).text();

-------------------------------------------------------------

Get the value of a selected option:

$( "#myselect" ).val();

-------------------------------------------------------------

Event:

$('#my_select').change(function() {

})

-------------------------------------------------------------

+Bypass popup blocker on window.open (Jan. 20, 2018, 12:53 a.m.)

$('#myButton').click(function () {
    var redirectWindow = window.open('http://google.com', '_blank');
    $.ajax({
        type: 'POST',
        url: '/echo/json/',
        success: function (data) {
            redirectWindow.location;
        }
    });
});

+Smooth Scrolling (Feb. 21, 2017, 4:09 p.m.)

$(function() {
    $('a[href*="#"]:not([href="#"])').click(function() {
        if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
            var target = $(this.hash);
            target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
            if (target.length) {
                $('html, body').animate({
                    scrollTop: target.offset().top
                }, 1000);
                return false;
            }
        }
    });
});

+Check image width and height before upload with Javascript (Oct. 5, 2016, 3:01 a.m.)

var _URL = window.URL || window.webkitURL;
$('#upload-face').change(function() {
    var file, img;
    if (file = this.files[0]) {
        img = new Image();
        img.onload = function () {
            if (this.width < 255 || this.height < 330) {
                alert('{% trans "The file dimension should be at least 255 x 330 pixels." %}');
            }
        };
        img.src = _URL.createObjectURL(file);
    }
});

+Get value of selected radio button (Aug. 1, 2016, 3:46 p.m.)

$('input[type="radio"][name="machines"]:checked').val();

+Allow only numeric 0-9 in inputbox (April 25, 2016, 9:18 p.m.)

$(".numeric-inputs").keydown(function(event) {
    // Allow only backspace, delete, tab, ctrlKey
    if (event.keyCode == 46 || event.keyCode == 8 || event.keyCode == 9 || event.ctrlKey) {
        // let it happen, don't do anything
    } else {
        // Ensure that it is a number and stop the keypress
        if ((event.keyCode >= 48 && event.keyCode <= 57) || (event.keyCode >= 96 && event.keyCode <= 105)) {
            // let it happen, don't do anything
        } else {
            event.preventDefault();
        }
    }
});

+Access parent of a DOM element using the (event) parameter (April 25, 2016, 1:47 p.m.)

var membership_id = $(e.target).parent().attr('id');

+Prevent big files to be uploaded (March 5, 2016, 12:08 a.m.)

$('#id_certificate').bind('change', function() {
if(this.files[0].size > 1048576) {
alert("{% trans 'The file size should be less than 1 MB.' %}");
$(this).val('');
}
});

+Background FullScreen Slider + Fade Effect (Feb. 5, 2016, 7:21 p.m.)

jQuery:

$(document).ready(function() {
    var images = [];
    var titles = [];
    {% for slider in sliders %}
        images.push('{{ slider.image.url }}');
        titles.push('{{ slider.image.motto_en }}');
    {% endfor %}

    var image_index = 0;
    $('#iind-slider').css('background-image', 'url(' + images[0] + ')');
    setInterval(function() {
        image_index++;
        if (image_index == images.length) {
            image_index = 0;
        }
        $('#iind-slider').fadeOut('slow', function() {
            $(this).css('background-image', 'url(' + images[image_index] + ')');
            $(this).fadeIn('slow');
        });
    }, 4000);
});
-----------------------------------------------------------------
CSS:

#iind-slider {
width: 100%;
height: 100vh;
background: no-repeat fixed 0 0;
background-size: 100% 100%;
}

+Convert Seconds to real Hour, Minutes, Seconds (Feb. 1, 2016, 10:54 p.m.)

function secondsTimeSpanToHMS(s) {
    var h = Math.floor(s / 3600); // Get whole hours
    s -= h * 3600;
    var m = Math.floor(s / 60); // Get remaining minutes
    s -= m * 60;
    return h + ":" + (m < 10 ? '0' + m : m) + ":" + (s < 10 ? '0' + s : s); // zero-pad minutes and seconds
}

setInterval(function() {
    var left_time = secondsTimeSpanToHMS(server_left_time);
    $('#left-time').find('span').html(left_time);
    server_left_time -= 1;
}, 1000);

+Error - TypeError: $.browser is undefined (Jan. 15, 2016, 1:53 a.m.)

Find this script file and include it after the main jquery file:
jquery-migrate-1.0.0.js
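jQuery 1.9 removed `$.browser`; the migrate plugin restores it, but only for code loaded after it. The include order matters (the paths below are placeholders for your own):

```html
<!-- 1) core jQuery, 2) the migrate plugin, 3) any code that uses $.browser -->
<script src="js/jquery.min.js"></script>
<script src="js/jquery-migrate-1.0.0.js"></script>
<script src="js/site.js"></script>
```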

+Multiple versions of jQuery in one page (Jan. 8, 2016, 5:54 p.m.)

1- Load the jquery libraries like the example:

<script type="text/javascript" src="{% static 'iind/js/jquery-1.7.1.min.js' %}"></script>
<script type="text/javascript">
var jQuery_1_7_1 = $.noConflict(true);
</script>
<script type="text/javascript" src="{% static 'iind/js/jquery-1.11.3.min.js' %}"></script>
<script type="text/javascript">
var jQuery_1_11_3 = $.noConflict(true);
</script>
------------------------------------------------------------------------------------
2- Then use them as follows:

jQuery_1_11_3(document).ready(function() {
jQuery_1_11_3(".dropdown").hover(
function() {
jQuery_1_11_3('.dropdown-menu', this).stop( true, true ).fadeIn("fast");
jQuery_1_11_3(this).toggleClass('open');
jQuery_1_11_3('b', this).toggleClass("caret caret-up");
}, function() {
jQuery_1_11_3('.dropdown-menu', this).stop( true, true ).fadeOut("fast");
jQuery_1_11_3(this).toggleClass('open');
jQuery_1_11_3('b', this).toggleClass("caret caret-up");
});
});
------------------------------------------------------------------------------------
And change the last line of jQuery libraries like this:

Change
}(jQuery, window, document));

To:
}(jQuery_1_11_3, window, document));
------------------------------------------------------------------------------------
And for bootstrap.min.js, I had to change this long line: (The last word, jQuery needed to be changed):

if("undefined"==typeof jQuery)throw new Error("Bootstrap's JavaScript requires jQuery");+function(a){var b=a.fn.jquery.split(" ")[0].split(".");if(b[0]<2&&b[1]<9||1==b[0]&&9==b[1]&&b[2]<1)throw new Error("Bootstrap's JavaScript requires jQuery version 1.9.1 or higher")}(jQuery)

To:
if("undefined"==typeof jQuery)throw new Error("Bootstrap's JavaScript requires jQuery");+function(a){var b=a.fn.jquery.split(" ")[0].split(".");if(b[0]<2&&b[1]<9||1==b[0]&&9==b[1]&&b[2]<1)throw new Error("Bootstrap's JavaScript requires jQuery version 1.9.1 or higher")}(jQuery_1_11_3)
------------------------------------------------------------------------------------

+Redirect Page (Dec. 20, 2015, 11:57 a.m.)

// similar behavior as an HTTP redirect
window.location.replace("http://stackoverflow.com");

// similar behavior as clicking on a link
window.location.href = "http://stackoverflow.com";

$(location).attr('href','http://yourPage.com/');

+Smooth scrolling when clicking an anchor link (Sept. 10, 2015, midnight)

var $root = $('html, body');
$('a').click(function () {
$root.animate({
scrollTop: $($.attr(this, 'href')).offset().top
}, 1500);
return false;
});

+Attribute Selector (Aug. 26, 2015, 4:01 p.m.)

$("[id=choose]")
---------------------------------------------------------------------------------------------
$( "input[value='Hot Fuzz']" ).next().text( "Hot Fuzz" );
---------------------------------------------------------------------------------------------
$("ul").find("[data-slide='" + current + "']");

$("ul[data-slide='" + current +"']");
---------------------------------------------------------------------------------------------

+Underscore Library (Aug. 26, 2015, 2:01 p.m.)

if(_.contains(intensity_filters, intensity_value)) {
intensity_filters = _.without(intensity_filters, intensity_value);
}
---------------------------------------------------------------------------------------------

+Get a list of checked/unchecked checkboxes (Aug. 26, 2015, 1:51 p.m.)

var selected = [];
$('#checkboxes input:checked').each(function() {
selected.push($(this).attr('name'));
});
------------------------------------------------------------------------------------------------------
And for getting the unchecked ones:
$('#checkboxes input:not(:checked)').each(function() {});

+Comma Separate Number (Aug. 14, 2015, 11:59 a.m.)

function commaSeparateNumber(val) {
while (/(\d+)(\d{3})/.test(val.toString())) {
val = val.toString().replace(/(\d+)(\d{3})/, '$1' + ',' + '$2');
}
return val;
}
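As an aside, modern engines can do the same grouping without the loop. A sketch (variable names are mine, and it assumes an environment with Intl support, i.e. any current browser or Node.js):

```javascript
var n = 1234567;

// 1) Built-in locale-aware formatting:
var withLocale = n.toLocaleString('en-US'); // "1,234,567"

// 2) A single regex pass using a lookahead, no loop required:
var withRegex = String(n).replace(/\B(?=(\d{3})+(?!\d))/g, ',');

console.log(withLocale, withRegex);
```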

+Hide a DIV when the user clicks outside of it (Aug. 12, 2015, 2:42 p.m.)

$(document).mouseup(function (e) {
var container = $("#my-cart-box");
if (!container.is(e.target) && container.has(e.target).length === 0) {
container.hide();
}
});

+Reset a form in jquery (Aug. 1, 2015, 1:19 a.m.)

$('#the_form')[0].reset()

+Event binding on dynamically created elements (Aug. 14, 2015, 12:06 a.m.)

Add Click event for dynamically created tr in table

$('.found-companies-table').on('click', 'tr', function() {
alert('hi');
});
-----------------------------------------------------------------------------------------------------
$("body").on("mouseover mouseout", "select", function(e){

// Do some code here

});
-----------------------------------------------------------------------------------------------------
$(staticAncestors).on(eventName, dynamicChild, function() {});
-----------------------------------------------------------------------------------------------------
$('body').on('click', '.delete-order', function(e) { });

+Select all (table rows) except first (July 18, 2015, 3:12 a.m.)

$("div.test:not(:first)").hide();
--------------------------------------------------------------------------
$("div.test:not(:eq(0))").hide();
--------------------------------------------------------------------------
$("div.test").not(":eq(0)").hide();
--------------------------------------------------------------------------
$("div.test:gt(0)").hide();
--------------------------------------------------------------------------
// Note: .gt() is not a jQuery core method (it was removed in jQuery 1.2); use the :gt() selector or .slice() instead
--------------------------------------------------------------------------
$("div.test").slice(1).hide();

+Deleting all rows in a table (July 15, 2015, 3:29 p.m.)

$("#mytable > tbody").html("");
---------------------------------------- OR ----------------------------------------
$("#myTable").empty();
---------------------------------------- OR ----------------------------------------
$("#myTable").find("tr:gt(0)").remove();
---------------------------------------- OR ----------------------------------------
$("#myTable").children( 'tr:not(:first)' ).remove();

+Plugins (April 6, 2016, 8:13 p.m.)

http://tutorialzine.com/2013/04/50-amazing-jquery-plugins/
http://www.unheap.com/
https://www.freshdesignweb.com/image-hover-effects/
http://apycom.com/webdev/top-creative-and-beautiful-bootstrap-slider-samples-2016-199.html
http://cssslider.com/jquery-content-slider-31.html
http://joaopereirawd.github.io/animatedModal.js/
http://www.jqueryscript.net/demo/Material-Inspired-Morphing-Button-with-jQuery-velocity-js-Quttons/
http://www.jqueryscript.net/demo/Modal-Like-Sliding-Panel-with-jQuery-CSS3/
http://www.jqueryscript.net/menu/Stylish-Off-canvas-Sidebar-Menu-with-jQuery-CSS3.html
http://plugins.compzets.com/animatescroll/
https://1stwebdesigner.com/jquery-gallery/
https://tympanus.net/codrops/2012/09/03/bookblock-a-content-flip-plugin/
http://www.eyecon.ro/spacegallery/
http://keith-wood.name/imageCube.html
http://www.jqueryscript.net/demo/Flexible-3D-Flipping-Cube-Pluigin-HexaFlip/index3.html
http://tympanus.net/Development/BookBlock/
http://tympanus.net/Development/ImageTransitions/
http://renatorib.github.io/janimate/
http://git.blivesta.com/rippler/
http://www.jqueryscript.net/demo/Simple-jQuery-Plugin-For-Responsive-Sliding-View-SimpleSlideView/
http://tympanus.net/TipsTricks/DirectionAwareHoverEffect/
http://www.jqueryscript.net/demo/jQuery-Plugin-For-Circular-Popup-Html-Elements-Radiate-Elements/
http://www.jqueryscript.net/demo/jQuery-3D-Animation-Plugin-With-HTML5-CSS3-Transforms-jworld/
http://lab.ejci.net/favico.js/
http://www.jqueryscript.net/demo/jQuery-Plugin-To-Auto-Scroll-Down-A-Web-Page-Hungry-Scroller/
http://www.jqueryscript.net/demo/jQuery-Plugin-To-Auto-Scroll-Down-Html-Page-Slow-Auto-Scroll/
https://haltu.github.io/muuri/
https://ilkeryilmaz.github.io/timelinejs/
http://www.thepetedesign.com/demos/tiltedpage_scroll_demo.html
https://github.com/soundar24/roundSlider

+Focus the first input in your form (June 30, 2015, 3:05 p.m.)

$('.forms').find("input[type!='hidden']").first().focus();

+jQuery `data` vs `attr`? (Aug. 21, 2014, 3:03 p.m.)

If you are passing data to a DOM element from the server, you should set the data on the element:

<a id="foo" data-foo="bar" href="#">foo!</a>
The data can then be accessed using .data() in jQuery:

console.log( $('#foo').data('foo') );
//outputs "bar"
However, when you store data on a DOM node in jQuery using data, the values are stored on the node object. This is to accommodate complex objects and references, as storing the data on the element as an attribute will only accommodate string values.

Continuing my example from above:
$('#foo').data('foo', 'baz');

console.log( $('#foo').attr('data-foo') );
//outputs "bar" as the attribute was never changed

console.log( $('#foo').data('foo') );
//outputs "baz" as the value has been updated on the object
Also, the naming convention for data attributes has a bit of a hidden "gotcha":

HTML:
<a id="bar" data-foo-bar-baz="fizz-buzz" href="#">fizz buzz!</a>
JS:
console.log( $('#bar').data('fooBarBaz') );
//outputs "fizz-buzz" as hyphens are automatically camelCase'd
The hyphenated key will still work:

HTML:
<a id="bar" data-foo-bar-baz="fizz-buzz" href="#">fizz buzz!</a>
JS:
console.log( $('#bar').data('foo-bar-baz') );
//still outputs "fizz-buzz"
However the object returned by .data() will not have the hyphenated key set:

$('#bar').data().fooBarBaz; //works
$('#bar').data()['fooBarBaz']; //works
$('#bar').data()['foo-bar-baz']; //does not work
It's for this reason I suggest avoiding the hyphenated key in javascript.

The .data() method will also perform some basic auto-casting if the value matches a recognized pattern:

HTML:
<a id="foo"
href="#"
data-str="bar"
data-bool="true"
data-num="15"
data-json='{"fizz":["buzz"]}'>foo!</a>
JS:
$('#foo').data('str'); //`"bar"`
$('#foo').data('bool'); //`true`
$('#foo').data('num'); //`15`
$('#foo').data('json'); //`{fizz:['buzz']}`
This auto-casting ability is very convenient for instantiating widgets & plugins:

$('.widget').each(function () {
$(this).widget($(this).data());
//-or-
$(this).widget($(this).data('widget'));
});
If you absolutely must have the original value as a string, then you'll need to use .attr():

HTML:
<a id="foo" href="#" data-color="ABC123"></a>
<a id="bar" href="#" data-color="654321"></a>
JS:
$('#foo').data('color').length; //6
$('#bar').data('color').length; //undefined, length isn't a property of numbers

$('#foo').attr('data-color').length; //6
$('#bar').attr('data-color').length; //6

+Leading colon in a jQuery selector (Aug. 21, 2014, 3:01 p.m.)

What's the purpose of a leading colon in a jQuery selector?
The :input selector basically selects all form controls (input, textarea, select and button elements), whereas the input selector selects all elements with the tag name input.

Since a radio button is a form element and uses the input tag, both selectors can select it. However, the two approaches differ in how they find the elements, so each has different performance characteristics.
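For example, against this hypothetical form the two selectors match different sets of elements:

```html
<form id="f">
  <input type="radio" name="m" value="a">   <!-- matched by both `input` and `:input` -->
  <textarea></textarea>                     <!-- matched by `:input` only -->
  <select><option>x</option></select>       <!-- matched by `:input` only -->
  <button>Go</button>                       <!-- matched by `:input` only -->
</form>
```

Here $('#f input').length is 1, while $('#f :input').length is 4.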

+Colon and question mark (Aug. 21, 2014, 3 p.m.)

What is the meaning of the colon (:) and question mark (?) in jquery?
That's an inline if.
If true, do the thing after the question mark, otherwise do the thing after the colon. The thing before the question mark is what you're testing.
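A minimal sketch (the variable names are mine):

```javascript
// condition ? valueIfTrue : valueIfFalse
var count = 3;
var label = count > 0 ? 'items: ' + count : 'empty';

// Equivalent if/else form:
// var label;
// if (count > 0) { label = 'items: ' + count; } else { label = 'empty'; }
console.log(label);
```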

+Commands and examples (Aug. 21, 2014, 2:57 p.m.)

$('#toggle_message').attr('value', 'Show')
-------------------------------------------------------
$('#message').toggle('fast');
-------------------------------------------------------
$(document).ready(function() {});
$(window).load(function() {});
-------------------------------------------------------
$(window).unload(function() {
alert('You\'re leaving this page');
});
This alert is raised when the user moves to another page by clicking a link, uses the browser's back or forward buttons, or closes the tab.
-------------------------------------------------------
$('*').length;
Returns the number of all the elements in the page (`length` is a property, not a method).
-------------------------------------------------------
$('p:first')
$('p:last')
$('input:button')
$('input[type="email"]')
-------------------------------------------------------
$(':text').focusin(function() {});
$(':text').blur(function() {});
-------------------------------------------------------
$('#email').attr('value', 'Write your email address').focus(function() {
    // Some code
}).blur(function() {
    // Some code
});
-------------------------------------------------------
search_name = jQuery.trim($(this).val());
$("#names li:contains('" + search_name + "')").addClass('highlight'); // addClass() takes a class name, not a selector (no leading dot)
-------------------------------------------------------
$('input[type="file"]').change(function() {
$(this).next().removeAttr('disabled');
}).next().attr('disabled', 'disabled');
-------------------------------------------------------
$('#menu_link').dblclick(function() {});
-------------------------------------------------------
$('#click_me').toggle(function() {
    // Code here
}, function() {
    // Code here
});
-------------------------------------------------------
var scroll_pos = $('#some_text').scrollTop();
-------------------------------------------------------
$('#some_text').select(function() {});
-------------------------------------------------------
$('a').bind('mouseenter mouseleave', function() {
$(this).toggleClass('bold');
});
bind() is useful for binding a handler to a series of events at once.
-------------------------------------------------------
$('.hover').mousemove(function(e) {
    $('#some_div').text('x: ' + e.clientX + ' y: ' + e.clientY);
});
-------------------------------------------------------
Hover over description:
$('.hover').mousemove(function(e) {
    var hovertext = $(this).attr('hovertext');
    $('#hoverdiv').text(hovertext).show();
    $('#hoverdiv').css('top', e.clientY+10).css('left', e.clientX+10);
}).mouseout(function() {
    $('#hoverdiv').hide();
});

Create an empty div with id="hoverdiv" in HTML, and style it in CSS.
-------------------------------------------------------
.addClass('class1 class2 class3')
-------------------------------------------------------
$(':input').focus(function() {
    $(this).toggleClass('highlight');
});
-------------------------------------------------------
Traversing using .each():

$('input[type="text"]').each(function(index) {
alert(index);
});
The index argument is 0, 1, 2, ... for each of the elements matched by the selector.
-------------------------------------------------------
These two statements do the same thing:
$('.names li:first').append('Hello');
$('.names').find('li').first().append('Hello');

if($(this).has('li').length == 0) { }

if($(this).has(':contains')) {}
-------------------------------------------------------
$(this).nextAll().toggle();
This is useful when you want to toggle a sub-menu using the first/top item.
-------------------------------------------------------
$(this).hide('slow', 'linear', function() {});
.slideUp()
.slideDown()
.slideToggle()

.stop() will cause the slide animation to stop in place
-------------------------------------------------------
.fadeTo(100, 0.4, function() {})
$('.fadeto').not(this).fadeTo(100, 0.4);
-------------------------------------------------------
$('.fadeto').css('opacity', '0.4');
$('.fadeto').mouseover(function() {
$(this).fadeTo(100, 1);
$('.fadeto').not(this).fadeTo(100, 0.4);
});
-------------------------------------------------------
append()
appendTo()
clone()
-------------------------------------------------------
$('html, body').animate({scrollTop: 0}, 10000);
-------------------------------------------------------
$('#terms').scroll(function() {
    var textarea_height = $(this)[0].scrollHeight; // scrollHeight is a property, not a method
    var scroll_height = textarea_height - $(this).innerHeight();

    var scroll_top = $(this).scrollTop();
});
-------------------------------------------------------
var names = ['Alex', 'Billy', 'Dale'];
if (jQuery.inArray('Alex', names) !== -1) { // inArray returns a numeric index, or -1 if not found
    alert('Found');
}
-------------------------------------------------------
$.each(names, function(index, value) {})
-------------------------------------------------------
setInterval(function() {
var timestamp = jQuery.now();
$("#time").text(timestamp);
}, 1);
-------------------------------------------------------
(function($) {
$.fn.your_new_function_name = function() {}
})(jQuery)
-------------------------------------------------------
Options:
$('#drag').draggable({axis: 'x'});
$('#drag').draggable({containment: 'document'});
$('#drag').draggable({containment: 'window'});
$('#drag').draggable({containment: 'parent'});
$('#drag').draggable({containment: [0, 0, 200, 200]});
$('#drag').draggable({cursor: 'pointer'});
$('#drag').draggable({opacity: 0.6});
$('#drag').draggable({grid: [20, 20]});
$('#drag').draggable({revert: true});
$('#drag').draggable({revertDuration: 1000});
Events:
$('#drag').draggable({start: function() {}});
$('#drag').draggable({drag: function() {}});
$('#drag').draggable({stop: function() {}});
-------------------------------------------------------
$('#drop').droppable({hoverClass: 'border'});
$('#drop').droppable({tolerance: 'fit'});
$('#drop').droppable({tolerance: 'intersect'});
$('#drop').droppable({tolerance: 'pointer'});
$('#drop').droppable({tolerance: 'touch'});
$('#drop').droppable({accept: '.name'});
$('#drop').droppable({over: function() {}});
$('#drop').droppable({out: function() {}});
$('#drop').droppable({drop: function() {}});
-------------------------------------------------------
$('#names').sortable({containment: 'parent'});
$('#names').sortable({tolerance: 'pointer'});
$('#names').sortable({cursor: 'pointer'});
$('#names').sortable({revert: true});
$('#names').sortable({opacity: 0.6});
$('#names').sortable({connectWith: '#places, #names'});
$('#names').sortable({update: function() {}});
-------------------------------------------------------
Resizable:
This requires the css file `jquery-ui-custom.css`

$('#box').resizable({containment: 'document'});
$('#box').resizable({animate: true});
$('#box').resizable({ghost: true});

$('#box').resizable({animateDuration: 'slow'});
`slow`, `medium`, `fast`, `normal`, `1000`

$('#box').resizable({animateEasing: 'swing'});
`swing`, `linear`

$('#box').resizable({aspectRatio: true});
`0.4`, `2/5`, `9/10`

$('#box').resizable({autoHide: true});

$('#box').resizable({handles: 'n, e, se'});
n=North, e=East, w=West, s=South, or `all`
If you do not specify `all`, you cannot resize the box from the left or top when it sits close to the browser edge.

$('#box').resizable({grid: [20, 20]});
$('#box').resizable({minHeight: 100});
$('#box').resizable({maxHeight: 200});
$('#box').resizable({minWidth: 100});
$('#box').resizable({maxWidth: 200});
-------------------------------------------------------
Accordion:
$('#content').accordion({fillSpace: true})
$('#content').accordion({icons: {'header': 'ui-icon-plus', 'headerSelected': 'ui-icon-minus'}})
$('#content').accordion({collapsible: true})
$('#content').accordion({active: 2})
`false`
-------------------------------------------------------
Dialog:
$('#dialog').dialog()
$('#dialog').attr('title', 'Saved').text('Settings were saved.').dialog();
.dialog({buttons: {'OK': function() {
    $(this).dialog('close');
}}});
closeOnEscape: true
draggable: false
resizable: false
show: 'fade', 'bounce'
modal: true
position: 'top', 'top, left', 'bottom', 'top, center', [100, 100]
-------------------------------------------------------
Progressbar:

var val = 0;
var interval = setInterval(function() {
    val = val + 1;
    $('#pb').progressbar({value: val});
    $('#percent').text(val + '%');
    if (val == 100) {
        clearInterval(interval);
    }
}, 100); // the original omitted the interval delay; 100 ms is an example value
----------------------------------------------

$("#header_menus img:not(.hover_menus)").mouseenter(function() {
$(this).hide();
$("#" + $(this).attr('data-hover')).show();
});

KDE
+Editing KDE Application Launcher Menus (May 11, 2015, 5:31 p.m.)

Use `kmenuedit`

+Delete session (March 20, 2015, 11:36 a.m.)

Delete the files in:
rm ~/.kde/share/config/session/*

And delete the file:
~/.kde/share/config/ksmserverrc

Kivy
+Create a package for IOS (Nov. 4, 2015, 6:06 a.m.)

http://kivy.org/docs/guide/packaging-ios.html

sudo apt-get install autoconf automake libtool pkg-config

+PyCharm Completion (March 19, 2015, 9:25 a.m.)

https://github.com/kivy/kivy/wiki/Setting-Up-Kivy-with-various-popular-IDE%27s
---------------------------------------------------------------------------------------------
1-Download this jar plugin:
https://github.com/Zen-CODE/kivybits/blob/master/IDE/PyCharm_kv_completion.jar?raw=true

2-On Pycharm’s main menu, click "File" -> Import Settings

3-Select this file and PyCharm will present a dialog with filetypes ticked. Click OK.

4-You are done. Restart PyCharm

+Android API (Feb. 12, 2015, 9:54 p.m.)

http://developer.android.com/reference/android/speech/tts/TextToSpeech.html
I have this class in Java docs:
android.speech.tts.TextToSpeech

And in python it is:
TextToSpeech = autoclass('android.speech.tts.TextToSpeech')

Based on these, I thought that for getting another class in Java (android.speech.tts.TextToSpeech.Engine) I had to:
Engine = autoclass('android.speech.tts.TextToSpeech.Engine')

But I got this error at runtime on my cellphone and the app would not open:
java.lang.ClassNotFoundException: android.speech.tts.TextToSpeech.Engine

I even could not access `Engine` using the pythonic way either:
TextToSpeech.Engine

I had to access the class by:
Engine = autoclass('android.speech.tts.TextToSpeech$Engine')
--------------------------------------------------------------------------------------------
Python Dictionaries = Java HashMap:

Java:
HashMap<String, String> phoneBook = new HashMap<String, String>();
phoneBook.put("Mike", "555-1111");
phoneBook.put("Lucy", "555-2222");
phoneBook.put("Jack", "555-3333");

Python:
phoneBook = {}
phoneBook = {"Mike":"555-1111", "Lucy":"555-2222", "Jack":"555-3333"}

And for implementing it in Kivy (via pyjnius):
HashMap = autoclass('java.util.HashMap')
hash_map = HashMap()
hash_map.put(key, value)
---------------------------------------------------------------------------------------------
To access nested classes, use $ like: autoclass('android.provider.MediaStore$Images$Media').
---------------------------------------------------------------------------------------------

+Sign apk files (Oct. 4, 2015, 11:42 a.m.)

https://developer.android.com/tools/publishing/app-signing.html#studio

1-Generate a private key using keytool. For example:
$ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000
This example prompts you for passwords for the keystore and key, and to provide the Distinguished Name fields for your key. It then generates the keystore as a file called my-release-key.keystore. The keystore contains a single key, valid for 10000 days. The alias is a name that you will use later when signing your app.

2-Compile your app in release mode to obtain an unsigned APK:
buildozer android release

3-Sign your app with your private key using jarsigner:
jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore my_application.apk alias_name
This example prompts you for passwords for the keystore and key. It then modifies the APK in-place to sign it. Note that you can sign an APK multiple times with different keys.

4-Verify that your APK is signed. For example:
jarsigner -verify -verbose -certs my_application.apk

5-Align the final APK package using zipalign.
zipalign is not available from Synaptic Package Manager; it ships with the Android SDK Build Tools. Use locate to find `zipalign` and create a symbolic link in /usr/bin:
ln -s /home/moh3en/Programs/Android/Development/android-sdk-linux/build-tools/android-5.0/zipalign /usr/bin/
zipalign -v 4 your_project_name-unaligned.apk your_project_name.apk
---------------------------------------------------------------------------------------------
Example:

buildozer android release

jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore excludes/my-release-key.keystore bin/NimkatOnline-1.2.4-release-unsigned.apk mohsen_hassani

jarsigner -verify -verbose -certs bin/NimkatOnline-1.2.4-release-unsigned.apk

zipalign -v 4 bin/NimkatOnline-1.2.4-release-unsigned.apk bin/NimkatOnline-1.2.4.apk

+Label (Feb. 12, 2015, 9:52 p.m.)

When creating a label, by default it is placed at the bottom-left corner with part of it hidden, but changing its `size` property solves this:
size: self.texture_size

Scrolling a Label:
Label:
    text: str('A very long text' * 100)
    font_size: 50
    text_size: self.width, None
    size_hint_y: None
    height: self.texture.size[1]

+FloatLayout (Feb. 12, 2015, 9:51 p.m.)

Similar to RelativeLayout, except now position is relative to window, and not Layout.
Thus in FloatLayout, pos = 0, 0 refers to lower-left corner.

+RelativeLayout (Feb. 12, 2015, 9:51 p.m.)

Each child widget's size and position has to be given.
size_hint, pos_hint: numbers relative to Layout.
If those two parameters are used, it does not make any difference if RelativeLayout or FloatLayout are used, as both will yield the same result.

+GridLayout (Feb. 12, 2015, 9:51 p.m.)

Similar to StackLayout 'lr-tb'
Either cols or rows has to be given and the Layout adjusts so the given number is the maximum number of cols or rows.

+Canvas (Feb. 12, 2015, 9:51 p.m.)

Canvas refers to graphical instructions.
The instructions could be non-visual, called context instructions, or visual, called vertex instructions.
An example of a non-visual instruction would be to set a color.
An example of a visual instruction would be draw a rectangle.
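A kv sketch of both kinds of instruction (the widget, color and rectangle are arbitrary):

```
Widget:
    canvas:
        Color:
            rgba: 1, 0, 0, 1   # context instruction: set the drawing color (red)
        Rectangle:             # vertex instruction: draw a rectangle
            pos: self.pos
            size: self.size
```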

+StackLayout (Feb. 12, 2015, 9:50 p.m.)

1-More flexible than BoxLayout
2-Orientations:
right to left or left to right
top to bottom or bottom to top
rl-bt, rl-tb, lr-bt, lr-tb (Row-wise)
bt-rl, bt-lr, tb-rl, tb-lr (Column-wise)

+Snippets (Feb. 12, 2015, 9:50 p.m.)

pos_hint: {'x': .1}
size_hint: [.2, 2]
pos_hint: {'center_x': .3}
----------------------------------------------------------------------------------------------
textinput.bind(text=label.setter('text'))
----------------------------------------------------------------------------------------------
In the kv file:
TextInput:
    on_text: my_label.color = [random.random() for i in xrange(3)] + [1]
----------------------------------------------------------------------------------------------
center: self.parent.center
----------------------------------------------------------------------------------------------

+on_touch_up vs on_release (Feb. 12, 2015, 9:49 p.m.)

When using the on_touch_up event with partial, the callback has to accept three arguments:

Example:
button.ids.speaker_button.bind(on_touch_up=partial(self.speak_word, main_word))

@staticmethod
def speak_word(word, instance, touch): # bind() passes the widget and the touch (MotionEvent)
    print(word)

After touching the button, all the same buttons on the page are also triggered. You have to solve it using something like this:
on_touch_up: vibrate() if self.collide_point(*args[1].pos) else None
******
But using on_release, two args are passed:
button.ids.speaker_button.bind(on_release=partial(self.speak_word, main_word))

@staticmethod
def speak_word(word, button):
    print(word)

After clicking, the only button which has been touched, will be triggered. That's good!

+Partial (Feb. 12, 2015, 9:49 p.m.)

In Kivy, you register a button release callback with the “bind()” function:
myButton.bind(on_release=my_button_release)
But the signature of the “on_release” method is “on_release(self)”, which means that the method you provide will receive only one parameter — the button that generated the event. When you release the button, Kivy will invoke your callback method and pass in the button that you released.

So does this mean we can’t pass user-defined parameters to our handlers? Does it mean we need to use globals or a bunch of specialized methods to write our button handlers? No, this is where Python’s functools.partial comes in handy.

To oversimplify, partial allows you to create a function with one set of arguments that calls another function with a different set of arguments. For example, consider the following function that takes two arguments:

def addTwoNumbers(x, y):
    print "x: %d, y: %d" % (x, y)
    return x+y

You can create a partial from this that automatically supplies one or more of the arguments. Let's create one that supplies '1' for 'x':

addOne = partial(addTwoNumbers, 1)
Which you would then invoke as such:

>>> #We pass in '2' for 'y' here. The partial fills in '1' for 'x'
...
>>> addOne(2)
x: 1, y: 2
3
*****************
Let’s create a function that can set any label to any text:

def changeLabel(label, text, button):
    # Kivy gives us 'button' to let us know which button
    # caused the event, but we don't use it
    label.text = text

In our UI setup, we can then bind two different buttons to this handler, creating partials that supply values for the extra arguments:

startButton = Button(text='Start Car')
stopButton = Button(text='Stop Car')

startButton.bind(
on_press=partial(
changeLabel,
statusLabel,
"Starting Car..."))

stopButton.bind(
on_press=partial(
changeLabel,
statusLabel,
"Stopping Car..."))
Now, by inspecting the setup code, it’s fairly easy to see what the UI does when various events occur. We can even extend this further to perform an action after setting the label:

def changeLabelAndRun(label, text, command, button):
    label.text = text
    command()


This allows our setup code to specify a UI behavior and trigger an action (assume ‘startCar’ and ‘stopCar’ have been defined as functions elsewhere):

startButton.bind(
on_press=partial(
changeLabelAndRun,
statusLabel, "Starting Car...",
startCar))

stopButton.bind(
on_press=partial(
changeLabelAndRun,
statusLabel, "Stopping Car...",
stopCar))
Unlike C, there’s no casting, no packing things into structs, and it’s easy to extend for different needs. Snazzy! This might not scale perfectly to complicated UI interactions, but it greatly simplifies straightforward event processing, making it easier to see at a glance what the application is doing.

+BoxLayout vs. GridLayout (Sept. 9, 2015, 12:47 p.m.)

The widgets in a BoxLayout can have different width and height, but in a GridLayout, each row or column should have the same size.

The widgets in BoxLayout are placed from bottom to top, but those in a GridLayout are placed from top to bottom.

In a BoxLayout the widgets can not be placed next to each other! I mean, they are placed one widget per row (if orientation is vertical) or column (if orientation is horizontal)
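A minimal kv sketch of the size difference (two separate fragments; the widgets are arbitrary). BoxLayout children can take different fractions of the axis, while GridLayout cells in the same column share their size:

```
BoxLayout:
    orientation: 'horizontal'
    Button:
        size_hint_x: 0.7   # children may have different widths
    Button:
        size_hint_x: 0.3

GridLayout:
    cols: 2                # every cell in a column gets the same width
    Button:
    Button:
    Button:
    Button:
```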

+Background Image for Button (Feb. 12, 2015, 9:48 p.m.)

background_normal: 'home_button.png'
background_down: 'home_button_down.png'

+DropDown (Feb. 12, 2015, 9:48 p.m.)

1-First of all, make sure the dropdown doesn't get opened while the widget is not on screen. That is, only instantiate it; do not add_widget it or otherwise trigger it yet.

2-For getting the data which is passed through `a_button.on_release: root.select('the_value')`, you have to use:
on_select: select_controller(args[1])
on the DropDown. Here is the example:
<MainDropDown@DropDown>:
    on_select: select_controller(args[1]) # Try printing `args` to see the whole items.
    Button:
        text: 'Update Database'
        on_release: root.select('update_db')

+Spinner vs. DropDown (Sept. 9, 2015, 12:44 p.m.)

Spinner is a widget that provides a quick way to select one value from a set. In the default state, a spinner shows its currently selected value. Touching the spinner displays a dropdown menu with all other available values from which the user can select a new one.
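A minimal kv sketch (the values are arbitrary):

```
Spinner:
    text: 'Home'                      # the currently selected value, shown by default
    values: 'Home', 'Work', 'Other'   # choices shown in the dropdown on touch
```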

+Commands (Feb. 12, 2015, 9:47 p.m.)

buildozer android debug

+Buildozer (Feb. 12, 2015, 9:47 p.m.)

-------------------Installation:-------------------
1-git clone https://github.com/kivy/buildozer
2-Activate a virtualenv (and check that the default `python` command points to Python 2.7), because buildozer needs Python 2.7
3-cd_to_downloaded_buildozer
4-python setup.py install
----------------------------------------------------------------------------
buildozer init
buildozer android debug
buildozer android logcat
adb logcat
----------------------------------------------------------------------------
The Android SDK and NDK are needed by buildozer. If you have already downloaded them, provide the paths like this:
android.ndk_path = /home/moh3en/Programs/Android/Development/android-ndk-r9c
android.sdk_path = /home/moh3en/Programs/Android/Development/android-sdk-linux

If not, buildozer will try to download them, but unfortunately, because of the embargo, they won't be downloaded since the source originates from google.com. So you have to download them through a proxy and untar/unzip them somewhere.
----------------------------------------------------------------------------
sudo adb uninstall com.nimkatonline.en
sudo adb install bin/NimkatOnline-1.2.0.apk
--------------------------------------------------------------------------

+Installing python packages (Feb. 12, 2015, 9:46 p.m.)

For installing python packages use this command:
./distribute.sh -m "kivy requests==2.1.0 SQLAlchemy"

You will need these environment variables:
export ANDROIDSDK="/home/mohsen/Programs/android-sdk-linux"
export ANDROIDNDK="/home/mohsen/Programs/android-ndk-r8c"
export ANDROIDNDKVER=r8c
export ANDROIDAPI=14

+Python Android Path (Feb. 12, 2015, 9:46 p.m.)

This is the path to the python used for android. Use this path for managing (installing or uninstalling) packages which are going to be installed, packed and used for your app.
python-for-android/dist/default/private/lib/python2.7/site-packages

+Error ==> Source resource does not exist: python-for-android/dist/default/project.properties (Feb. 12, 2015, 9:43 p.m.)

export ANDROIDAPI=15

+Chat (Feb. 12, 2015, 9:42 p.m.)

<Mohsen_Hassani> Hello guys. I am very new to Kivy. I am using psycopg2 to read data from my remote VPS. I wanted to know if it will work after making apk too?
<brousch> Mohsen_Hassani: Pure Python modules will work fine. I'm not sure if psycopg2 is pure Python
<kovak> Mohsen_Hassani: the first step is to write a recipe for python-for-android to see if you can compile for ARM without any problems
<kovak> I think psycopg2 has C bits
<kovak> if it compiles in arm no problem you are good to go, if not you may need to patch the source
<brousch> However, except in very rare cases, your Android app should not be communicating directly with your database server. There should be a proper API on top of that database
-------------------------------------------------------------------------------
<tito> Mohsen_Hassani: the best shot you have is to put your tgz into a directory, go into the directory, and start python -m SimpleHTTPServer
<tito> then do: URL_python=http://localhost:8000/Python-2.7.2.tar.bz2 URL_hostpython=http://localhost:8000/Python-2.7.2.tar.bz2 ./distribute.sh -m 'openssl pil kivy'

+Building the application (Feb. 12, 2015, 9:36 p.m.)

cd dist/default
./build.py --permission INTERNET --orientation sensor --package com.mohsenhassani.notes --name My\ Notes --version 1.0 --dir ~/Projects/kivy_projects/notes/ debug
----------------------------------------------------------------------------
Install the debug apk to your device:
adb install bin/touchtracer-1.0-debug.apk
----------------------------------------------------------------------------
/usr/bin/python2.7 build.py --name 'My Notes' --version 1.0 --package com.mohsenhassani.notes --private /home/mohsen/Projects/kivy_projects/notes/.buildozer/android/app --s
dk 14 --minsdk 8 --permission INTERNET --icon /home/mohsen/Projects/kivy_projects/notes/./static/icon.png --orientation sensor debug
----------------------------------------------------------------------------

+Installation (July 17, 2015, 1:26 a.m.)

Installation:
http://kivy.org/docs/installation/installation-linux.html#linux-run-app

---------------------------------------------------------------------------------------------
Installation Steps:
1-apt-get install python-gst0.10-dev python-gst-1.0 freeglut3-dev libsdl-image1.2-dev libsdl-ttf2.0-dev libsdl-mixer1.2-dev libsmpeg-dev libportmidi-dev libswscale-dev libavformat-dev libavcodec-dev libv4l-dev libserf-1-1 libsvn1 subversion openjdk-7-jdk python-pygame
2-Create and activate a virtualenv
3-easy_install requests
4-easy_install -U setuptools
5-pip install cython==0.20
6-pip install pygments
7-pip install --allow-all-external pil --allow-unverified pil

8.1-Before installing pygame in the next step, you need to create a symlink, or you will get this error:
fatal error: linux/videodev.h: No such file or directory
sudo ln -s /usr/include/libv4l1-videodev.h /usr/include/linux/videodev.h

8.2-pip install pygame (It won't be found or downloaded! You need to download the tar file from www.pygame.org/download.shtml and install it using pip install <the_downloaded_tar_file>.)

9-pip install kivy

Kotlin
+Objects Declarations and Companion Objects - Singleton (May 22, 2019, 11:58 p.m.)

Singleton:
When we have just ONE INSTANCE of a class in the whole application.

object MySingleton

object MySingleton {
fun someFunction(...) {...}
}

And then use it:
MySingleton.someFunction(...)

-----------------------------------------------------

In Java, we define a SINGLETON by using "static" variables and methods.

In Kotlin we use "object" for declaring a class.
Contrary to a class, an object can’t have any constructor, but init blocks are allowed if some initialization code is needed.


object Customer {
var id: Int = -1 // Behaves like STATIC variable

init {

}

fun registerCustomer() { // Behaves like STATIC method

}
}


We don't need to instantiate the class! We call its members without creating an instance.
Customer.id = 27
Customer.registerCustomer()

-----------------------------------------------------

Companion Objects are the same as "object" but declared within a class.

class MyClass {
companion object {
var count: Int = -1 // Behaves like STATIC variable

fun typeOfCustomers(): String { // Behaves like STATIC method
return "American"
}
}
}


MyClass.count

MyClass.typeOfCustomers()

-----------------------------------------------------

+Data class and Super class "Any" (May 22, 2019, 10:37 p.m.)

The purpose of a data class is to deal with data, not with objects!

---------------------------------------------------------------

var user1 = User("Mohsen", 10)
var user2 = User("Mohsen", 10)

if (user1 == user2 ) {
// returns false (they are not equal): a plain class inherits equals() from the superclass Any, which compares references. Declaring User as a data class makes these two equal.
}

class User(var name: String, var id: Int) {

}

---------------------------------------------------------------

data class User(var name: String, var id: Int) {

}
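A quick runnable sketch of what the data modifier generates (equals/hashCode, toString, copy):

```kotlin
data class User(var name: String, var id: Int)

fun main() {
    val user1 = User("Mohsen", 10)
    val user2 = User("Mohsen", 10)

    println(user1 == user2)      // true: generated equals() compares fields
    println(user1)               // User(name=Mohsen, id=10): generated toString()
    println(user1.copy(id = 11)) // User(name=Mohsen, id=11): generated copy()
}
```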

---------------------------------------------------------------

+lazy initialization (May 22, 2019, 9:09 p.m.)

// If you don't use the following "pi" variable anywhere in your codes, it is a waste of memory.
val pi: Float = 3.14f

You should use lazy initialization (lazy lambda function):
val pi: Float by lazy {
3.14f
}
When you use the "pi" variable, it will get initialized.

------------------------------------------------------------
- "Lazy initialization" was designed to prevent unnecessary initialization of objects.

- Your variables will not be initialized unless you use it in your code.

- It is initialized only once. Next time when you use it, you get the value from cache memory.

- It is thread-safe.
It is initialized in the thread where it is used for the first time.
Other threads can use the same value stored in the cache.

- The variable must be val: the lazy delegate only provides a getter, so var is not allowed.

- The variable can be nullable or non-nullable data types.
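A minimal sketch of the behaviour described above: the lambda runs only on the first access, and only once (initCount is just a counter added to make that visible).

```kotlin
var initCount = 0

val pi: Float by lazy {
    initCount++ // runs only on the first access
    3.14f
}

fun main() {
    println(pi)        // 3.14: triggers initialization
    println(pi)        // 3.14: served from cache, lambda not run again
    println(initCount) // 1
}
```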

+lateinit keyword (May 22, 2019, 9:04 p.m.)

- lateinit used only with mutable data type [ var ]
- lateinit used only with non-nullable data type
- lateinit values must be initialized before you use them

class Country {
lateinit var name: String
}
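A small sketch: inside the class you can use ::name.isInitialized to check before use (accessing an uninitialized lateinit var throws UninitializedPropertyAccessException). The describe() helper is a made-up name.

```kotlin
class Country {
    lateinit var name: String

    // ::name.isInitialized is usable here because the property is lexically accessible.
    fun describe(): String =
        if (::name.isInitialized) "Country: $name" else "name not set yet"
}

fun main() {
    val c = Country()
    println(c.describe()) // name not set yet
    c.name = "Iran"
    println(c.describe()) // Country: Iran
}
```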

+Null Safe (May 22, 2019, 8:46 p.m.)

We have several null-safety operators which help us avoid the NullPointerException:
?. Safe Call Operator

?: Elvis

!! Not-null Assertion

?.let { .. } Safe Call with let

------------------------------------------------------------

val name: String = null // We can't do this.

val name: String? = null // Now it will accept null values

------------------------------------------------------------

1- Safe Call (?. )
- Returns the length if "name" is not null else returns NULL
- Use it if you don't mind getting NULL value

println("The length of name is ${name?.length}") // returns null because it has null value at the top.

------------------------------------------------------------

2- Safe Call with let ( ?.let )
- It executes the block ONLY IF name is NOT NULL

name?.let {
println("The length of name is ${name.length}")
}

------------------------------------------------------------

3- Elvis-operator ( ?: )
- When we have nullable reference "name", we can say "if name is not null", use it, otherwise use some non-null value.

val len = if (name != null )
name.length
else:
-1

OR (the above code can be simplified as follow):

val len = name?.length ?: -1

------------------------------------------------------------

4- Non-null assertion operator ( !! )
// Use it when you are sure the value is NOT Null
// Throws NullPointerException if the value is found to be NULL.

println("The length of name is ${name!!.length}")

------------------------------------------------------------
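The four operators above, consolidated into one runnable sketch (lengthOrNull and lengthOrDefault are made-up helper names):

```kotlin
fun lengthOrNull(name: String?): Int? = name?.length         // 1- safe call
fun lengthOrDefault(name: String?): Int = name?.length ?: -1 // 3- Elvis

fun main() {
    val name: String? = null

    println(lengthOrNull(name))      // null
    name?.let { println(it.length) } // 2- let block skipped entirely when null
    println(lengthOrDefault(name))   // -1

    try {
        println(name!!.length)       // 4- !! throws on null
    } catch (e: NullPointerException) {
        println("NPE, as promised")
    }
}
```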

+Predicates: a condition returning TRUE or FALSE (May 22, 2019, 8:35 p.m.)

"all": Do all elements satisfy the predicate/condition?

"any": Do any element in the list satisfy the predicate?

"count": Total elements that satisfy the predicate

"find", "last": Returns the FIRST/LAST element that satisfy predicate

---------------------------------------------------------------

val myNumbers = listOf( 2, 3, 4, 6, 23, 90)

val check1: Boolean = myNumbers.all { it > 10 } // or all( { it > 10 } ) // Returns false

---------------------------------------------------------------

val check2: Boolean = myNumbers.any( { num -> num > 10 } ) // or { it > 10 } // Returns true

---------------------------------------------------------------

val totalCount: Int = myNumbers.count { it > 10 }

---------------------------------------------------------------

// Returns the first number that matches the predicate
val num: Int? = myNumbers.find { it > 10 }

---------------------------------------------------------------

Store lambda function as a variable:

val myPredicate = { num: Int -> num > 10 }
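The stored predicate can then be passed to any of the functions above:

```kotlin
val myNumbers = listOf(2, 3, 4, 6, 23, 90)
val myPredicate = { num: Int -> num > 10 }

fun main() {
    println(myNumbers.all(myPredicate))   // false
    println(myNumbers.any(myPredicate))   // true
    println(myNumbers.count(myPredicate)) // 2
    println(myNumbers.find(myPredicate))  // 23
}
```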

---------------------------------------------------------------

+Filter and Map using Lambdas (May 22, 2019, 8:21 p.m.)

val myNumbers: List<Int> = listOf(2, 3, 4, 5, 23, 90)

val mySmallNums = myNumbers.filter { it < 10 } // or { num -> num < 10 }

for (num in mySmallNums) {
println(num) // Will print 2, 3, 4, 5
}

--------------------------------------------------------

val mySquareNums = myNumbers.map { it * it } // or { num -> num * num }

will return 4, 9, 16, 25, and so on...

--------------------------------------------------------

val mySmallSquareNums = myNumbers.filter { it < 10 }.map { it * it }

--------------------------------------------------------

var people: List<Person> = listOf<Person>(Person(23, "Mohsen"), Person(30, "Ali"))

var names = people.map { p -> p.name } // or { it.name }

var names = people.filter { person -> person.name.startsWith("M") }.map { it.name }
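The Person snippets above, as a self-contained runnable sketch (Person is assumed to be a simple data class):

```kotlin
data class Person(val age: Int, val name: String)

val people = listOf(Person(23, "Mohsen"), Person(30, "Ali"))

fun main() {
    val names = people.map { it.name }                                  // all names
    val mNames = people.filter { it.name.startsWith("M") }.map { it.name } // chained

    println(names)  // [Mohsen, Ali]
    println(mNames) // [Mohsen]
}
```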

--------------------------------------------------------

+Collections - Set and Hash Set (May 22, 2019, 8:11 p.m.)

// "Set" contains unique elements
// "HashSet" also contains unique elements but sequence is not guaranteed in output


// The "9"s will get unify. It means there will be only ONE 9.
var mySet = setOf<Int>( 2, 9, 7, 1, 9, 14, 0, 9 ) // Immutable, Read Only

for (element in mySet) {
println(element)
}

----------------------------------------------------------

var mySet = mutableSetOf<Int>( 2, 9, 7, 1, 9, 14, 0, 9 ) // Mutable Set, Read and Write
mySet.remove(14)
mySet.add(100)

----------------------------------------------------------

// HashSet, the sequence is not guaranteed in output.
var mySet = hashSetOf<Int>( 2, 9, 7, 1, 9, 14, 0, 9 ) // Mutable Set

----------------------------------------------------------

+Collections - Map and Hash Map (May 22, 2019, 4:41 p.m.)

// Immutable, Fixed Size, Read Only
var myMap = mapOf<Int, String>(2 to "Mohsen", 7 to "Mehdi")
// myMap.put(...) would not compile: a read-only Map has no put() method

for (key in myMap.keys) {
println(myMap[key]) // myMap.get(key)
println("Element at Key: $key = ${myMap.get(key)}") // ${myMap[key]}
}

---------------------------------------------------------

// Mutable, Read and Write both, No Fixed Size
var myMap = HashMap<Int, String>() // You can also use mutableMapOf and hashMapOf
myMap.put(4, "Mohsen")
myMap.put(7, "Mehdi")

myMap.replace(4, "Akbar")
OR
myMap.put(4, "Akbar")

---------------------------------------------------------

+Collections - List and ArrayList (May 22, 2019, 4:16 p.m.)

Immutable Collections: Read Only Operations
- Immutable List: listOf
- Immutable Map: mapOf
- Immutable Set: setOf

Mutable Collections: Read and Write Both
- Mutable List: ArrayList, arrayListOf, mutableListOf
- Mutable Map: HashMap, hashMapOf, mutableMapOf
- Mutable Set: mutableSetOf, hashSetOf

-----------------------------------------------------------

Mutable:

var list = mutableListOf<String>("Mohsen", "Alex", "Hadi", "Mehdi")
list.add("Ali")
list.remove("Alex")
list.add(3, "Akbar")
list[2] = "Asghar"

------------------------

An array with 5 elements, all values are zero.
var myArray = Array<Int>(5) { 0 } // Mutable. Fixed Size.

myArray[0] = 32
myArray[3] = 54

println(myArray[3])


for (element in myArray) {
println(element)
}


for (index in 0..myArray.size - 1) { }

-----------------------------------------------------------

Immutable:

// Fixed Size, Read Only, Immutable
var list = listOf<String>("Mohsen", "Alex", "Hadi", "Mehdi")

-----------------------------------------------------------

ArrayList is an implementation of the MutableList interface in Kotlin:

class ArrayList<E> : MutableList<E>, RandomAccess


MutableList should be chosen whenever possible, but ArrayList is a MutableList. So if you're already using ArrayList, there's really no reason to use MutableList instead, especially since you can't actually directly create an instance of it (MutableList is an interface, not a class).

In fact, if you look at the mutableListOf() Kotlin extension method:

public inline fun <T> mutableListOf(): MutableList<T> = ArrayList()

you can see that it just returns an ArrayList of the elements you supplied.

-----------------------------------------------------------

+WITH and APPLY Lambdas (May 22, 2019, 4:14 p.m.)

fun main() {
var person = Person()

with(person) { // Using "with" you can do the same as "person.name, person.age". It seems to be neater.
name = "Mohsen"
age = 33
}

person.apply { // Using "apply" you can also call the methods.
name = "Mohsen"
age = 33
}.someMethod()
}


class Person {
var name: String = ""
var age: Int = 0

fun someMethod() {
println("Some string")
}
}

+tailrec - Tail recursive functions (May 18, 2019, 3 p.m.)

When a function is marked with the tailrec modifier, the compiler optimises the recursion away, leaving behind a fast and efficient loop-based version instead.
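A classic sketch: with an accumulator the recursive call is in tail position, so tailrec compiles it to a loop and deep inputs cannot overflow the stack.

```kotlin
// The recursive call is the last operation, so tailrec applies.
tailrec fun factorial(n: Long, acc: Long = 1): Long =
    if (n <= 1) acc else factorial(n - 1, acc * n)

fun main() {
    println(factorial(20)) // 2432902008176640000
}
```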

+Infix Functions (May 18, 2019, 2:24 p.m.)

Infix Functions can be a Member Function or Extension Function.
They have a SINGLE parameter.
They are prefixed with the "infix" keyword.


Not all extension functions are infix functions; an infix function can also be a member function.
An infix function can only have ONE parameter.

-----------------------------------------------------------

infix fun Int.greaterValue(number: Int): Int {
if (this > number)
return this
else
return number
}


Then you can use it like this:
val x: Int = 6
val y: Int = 9

val greaterVal = x.greaterValue(y)

OR

val greaterVal = x greaterValue y

+Extension Functions (May 18, 2019, 2:22 p.m.)

Adds new function to the classes:
- Can "add" functions to a class without declaring it.
- The newly added functions behave like "static" functions.
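A minimal sketch: adding a function to String without touching its declaration (shout is of course a made-up name):

```kotlin
// Extension function: `this` inside the body refers to the String receiver.
fun String.shout(): String = this.uppercase() + "!"

fun main() {
    println("hello".shout()) // HELLO!
}
```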

+Functions as Expressions - One line functions (May 18, 2019, 1:24 p.m.)

fun max(a: Int, b: Int): Int = if (a > b) a else b

-------------------------------------------------------------------

fun max(a: Int, b: Int): Int
= if (a > b) {
print("$a is greater")
a
} else {
print("$b is greater")
b
}

+Functions and Methods (May 18, 2019, 1:13 p.m.)

fun findArea(length: Int, breadth: Int): Int {
return length * breadth
}



fun findArea(length: Int, breadth: Int): Unit {
print(length * breadth)
}


Unit is the same as void in Java.

+BREAK statement with LABELED FOR Loop (May 18, 2019, 1:09 p.m.)

myLoop@ for (i in 1..3) {
for (j in 1..3) {
println("$i $j")
if (i == 2 && j == 2)
break@myLoop
}
}

It will BREAK when reaching to "2 2" :
1 1
1 2
1 3
2 1
2 2

+do-while (May 18, 2019, 1:07 p.m.)

var i: Int = 1

do {
println(i)
i++
} while (i <= 10)

+when (May 18, 2019, 1:01 p.m.)

when (x) {
in 1..20 -> println("A message")
!in 5..9 -> println("Another message")
2 -> {

}
4 -> str = "A string value"
else -> {

}
}

+Ranges (May 18, 2019, 12:51 p.m.)

val r1 = 1..5 // 1, 2, 3, 4, 5

val r2 = 5 downTo 1 // 5, 4, 3, 2, 1

val r3 = 5 downTo 1 step 2 // 5, 3, 1

var r4 = 'a'..'z' // "a", "b", "c", .... "z"

var isPresent = 'c' in r4

var countDown = 10.downTo(1) // 10, 9, 8, .... 1

var moveUp = 1.rangeTo(10) // 1, 2, 3, ..... 10

+Class and Function Class (May 18, 2019, 12:38 p.m.)

class Person {
var name: String = ""
}

----------------------------------------------------------

var personObj = Person()
personObj.name = "Mohsen"
print("My name is ${personObje.name}")

----------------------------------------------------------

class Student constructor(name: String) {
init {
println("The student name is $name")
}
}


You can also drop the constructor:

class Student(name: String) {
init {
println("The student name is $name")
}

// Secondary constructor
constructor(name: String, id: Int): this(name) {
// The body of the secondary constructor is called after the init block
}

constructor(my_name: String, id: Int): this(my_name) { // var/val are not allowed on secondary-constructor parameters.
// Declare an id property on the class and assign the parameter to it instead:
this.id = id
}
}

----------------------------------------------------------

By default all classes are "public" and "final" which means you can not inherit from a class.

public final class Student {
public final val name: String = ""
}

You can drop "public final" keywords.

----------------------------------------------------------

For inheritance you need to make a class "open".

open class Human { }

class Student: Human() { }

----------------------------------------------------------

Overriding:

open class Animal {
open fun eat() {
println("Animal Eating")
}
}

class Dog: Animal() {
override fun eat() {
super.eat() // Better: super<Animal>.eat() to disambiguate when the class also implements interfaces declaring eat().
println("Dog is eating")
}
}

----------------------------------------------------------

Visibility Modifiers:

public // This is the default
protected
internal
private


open class Person {
private val a = 1
protected val b = 2
internal val c = 3
val d = 10 // public by default
}


class Indian: Person() {
// a is not visible
// b, c, d are visible
}


----------------------------------------------------------

+Variables and Data Types (May 18, 2019, 12:34 p.m.)

var age = 33 // Int

var grade = 21.5 // Double (untyped floating-point literals are Double, not Float)
var myName: String // Mutable String
myName = "Mohsen"
myName = "MohseNN"

val myFamilyName = "Hassani" // Immutable String

var gender: Char = 'M'

var percentage: Double = 90.78

var marks: Float = 97.4F

var isStudying: Boolean = true

+Static Members for class (May 17, 2019, 12:03 p.m.)

Most programming languages have a concept of static members: fields that are created only once per class and can be accessed without an instance of their containing class.

Kotlin doesn't have static members for classes, which means you can't create static methods or static variables in a Kotlin class.

Fortunately, Kotlin object can handle this. If you declare a companion object inside your class, you'll be able to call its members with the same syntax as calling static methods in Java/C#, using only the class name as a qualifier.


class MyClass {
companion object {
val info = "This is info"
fun getMoreInfo():String { return "This is more fun" }
}
}

MyClass.info // This is info
MyClass.getMoreInfo() // This is more fun


Note that, even though the members of companion objects look like static members in other languages, at runtime those are still instance members of real objects, and can, for example, implement interfaces.
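A sketch of that last point: the companion object is a real instance, so it can implement an interface (Factory is a made-up interface name):

```kotlin
interface Factory<T> { fun create(): T }

class User private constructor(val name: String) {
    companion object : Factory<User> { // a real object implementing an interface
        override fun create(): User = User("default")
    }
}

fun main() {
    val u: User = User.create() // called like a static factory method
    println(u.name) // default
}
```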

+for Loop / Iteration (May 10, 2019, 11:22 a.m.)

for (item in collection) {
// body of loop
}

-------------------------------------------------------------

Iterate Through a Range:

fun main(args: Array<String>) {

for (i in 1..5) {
println(i)
}
}

-------------------------------------------------------------

If the body of the loop contains only one statement (like above example), it's not necessary to use curly braces { }.

fun main(args: Array<String>) {
for (i in 1..5) println(i)
}

-------------------------------------------------------------

for (i in 1..5) print(i)

for (i in 5 downTo 1) print(i)

for (i in 1..5 step 2) print(i)

for (i in 5 downTo 1 step 2) print(i)

-------------------------------------------------------------

Iterating Through an Array:

var language = arrayOf("Ruby", "Koltin", "Python" "Java")
for (item in language)
println(item)

-------------------------------------------------------------

Iterate through an array with an index:

var language = arrayOf("Ruby", "Kotlin", "Python", "Java")

for (item in language.indices) {
// printing array elements having even index only
if (item%2 == 0)
println(language[item])
}

-------------------------------------------------------------

Iterating Through a String:

var text= "Kotlin"
for (letter in text) {
println(letter)
}


-------------------------------------------------------------
-------------------------------------------------------------

+List (May 10, 2019, 10:41 a.m.)

List is immutable by default; the mutable version of List is called MutableList!


val list: List<String> = ArrayList()
In this case you will not get an add() method as list is immutable.

-----------------------------------------------------------------

val list: MutableList<String> = ArrayList()
Now you will see an add() method and you can add elements to list.

-----------------------------------------------------------------

MUTABLE collection:
val list = mutableListOf(1, 2, 3)
list += 4

-----------------------------------------------------------------

IMMUTABLE collection:
var list = listOf(1, 2, 3)
list += 4 // allowed only because list is a var: this creates a NEW list and reassigns the variable

-----------------------------------------------------------------

+Getters and setters (May 9, 2019, 4:08 a.m.)

If you are calling
var side: Int = square.a

it does not mean that you are accessing a directly. It is the same as:
int side = square.getA();

in Java, because Kotlin autogenerates default getters and setters.


In Kotlin, you only need to specify a getter or setter yourself when it does something special. Otherwise, Kotlin autogenerates it for you.
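A sketch of when you do write them yourself: a validating setter and a computed getter (Square and area are assumed names):

```kotlin
class Square {
    var a: Int = 1
        set(value) { // custom setter with validation
            require(value > 0) { "side must be positive" }
            field = value // `field` is the backing field
        }

    val area: Int
        get() = a * a // computed property: custom getter, no backing field
}

fun main() {
    val square = Square()
    square.a = 4         // goes through the custom setter
    println(square.area) // 16
}
```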

+Null Operators ? !! (May 9, 2019, 3:36 a.m.)

What is the meaning of ? in savedInstanceState: Bundle? ?
It means that the savedInstanceState parameter can be of type Bundle or null. Kotlin is a null-safe language.


var a : String // you will get a compilation error, cause a must be initialized and it cannot be null.


That means you have to write:
var a : String = "Init value"



Also, you will get a compilation error if you do:
a = null


To make a nullable, you have to write:
var a : String?


Let’s say that we have nullable nameTextView. The following code will give us NPE if it is null:
nameTextView.setEnabled(true)


Kotlin will not allow us to even do such a thing. It will force us to use ? or !! operator.
If we use ? operator:
nameTextView?.setEnabled(true)

the line will be executed only if nameTextView is not null. In the other case, if we use the !! operator:
nameTextView!!.setEnabled(true)

it will give us an NPE if nameTextView is null. It is just for adventurers.



lateinit modifier allows us to have non-null variables waiting for initialization.

Kotlin - Android
+Components of a RecyclerView (June 22, 2019, 2:56 p.m.)

1- LayoutManagers:

A RecyclerView needs to have a layout manager and an adapter to be instantiated. A layout manager positions item views inside a RecyclerView and determines when to reuse item views that are no longer visible to the user.

RecyclerView provides these built-in layout managers:
- LinearLayoutManager shows items in a vertical or horizontal scrolling list.
- GridLayoutManager shows items in a grid.
- StaggeredGridLayoutManager shows items in a staggered grid.

To create a custom layout manager, extend the RecyclerView.LayoutManager class.

------------------------------------------------------------------

2- RecyclerView.Adapter

RecyclerView includes a new kind of adapter. It’s a similar approach to the ones you already used, but with some peculiarities, such as a required ViewHolder. You will have to override two main methods: one to inflate the view and its view holder, and another one to bind data to the view. The good thing about this is that the first method is called only when we really need to create a new view. No need to check if it’s being recycled.
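A hedged sketch of those two required methods plus the required ViewHolder (the layout id R.layout.item_row, the view id R.id.titleText, and NamesAdapter are hypothetical names):

```kotlin
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

class NamesAdapter(private val names: List<String>) :
    RecyclerView.Adapter<NamesAdapter.ViewHolder>() {

    // The required ViewHolder caches the view lookups.
    class ViewHolder(view: View) : RecyclerView.ViewHolder(view) {
        val title: TextView = view.findViewById(R.id.titleText)
    }

    // Called only when a genuinely new view is needed.
    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ViewHolder {
        val view = LayoutInflater.from(parent.context)
            .inflate(R.layout.item_row, parent, false)
        return ViewHolder(view)
    }

    // Called to (re)bind data to a new or recycled view.
    override fun onBindViewHolder(holder: ViewHolder, position: Int) {
        holder.title.text = names[position]
    }

    override fun getItemCount() = names.size
}
```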

------------------------------------------------------------------

3- ItemAnimator

RecyclerView.ItemAnimator will animate ViewGroup modifications such as add/delete/select that are notified to the adapter. DefaultItemAnimator can be used for basic default animations and works quite well.

------------------------------------------------------------------

+RecyclerView Compared to ListView (June 22, 2019, 2:48 p.m.)

RecyclerView differs from its predecessor ListView primarily:

- Required ViewHolder in Adapters - ListView adapters do not require the use of the ViewHolder pattern to improve performance. In contrast, implementing an adapter for RecyclerView requires the use of the ViewHolder pattern, for which it uses RecyclerView.ViewHolder.

- Customizable Item Layouts - ListView can only layout items in a vertical linear arrangement and this cannot be customized. In contrast, the RecyclerView has a RecyclerView.LayoutManager that allows any item layouts including horizontal lists or staggered grids.

- Easy Item Animations - ListView contains no special provisions through which one can animate the addition or deletion of items. In contrast, the RecyclerView has the RecyclerView.ItemAnimator class for handling item animations.

- Manual Data Source - ListView had adapters for different sources such as ArrayAdapter and CursorAdapter for arrays and database results respectively. In contrast, the RecyclerView.Adapter requires a custom implementation to supply the data to the adapter.

- Manual Item Decoration - ListView has the android:divider property for easy dividers between items in the list. In contrast, RecyclerView requires the use of a RecyclerView.ItemDecoration object to set up much more manual divider decorations.

- Manual Click Detection - ListView has an AdapterView.OnItemClickListener interface for binding to the click events for individual items in the list. In contrast, RecyclerView only has support for RecyclerView.OnItemTouchListener which manages individual touch events but has no built-in click handling.

+Difference between gravity and layout_gravity (June 12, 2019, 3:47 a.m.)

gravity:

- sets the gravity of the contents (i.e. its subviews) of the View it's used on.

- arranges the content inside the view.

--------------------------------------------------------------

layout_gravity:

- sets the gravity of the View or Layout relative to its parent.

- arranges the view's position outside of itself.

--------------------------------------------------------------

HTML/CSS Equivalents:

Android CSS
android:layout_gravity float
android:gravity text-align

+Retrofit (May 25, 2019, 10:53 a.m.)

1- Create an Interface:
that will contain various functions which will map to the endpoint URLs of your web service, such as:
getStudents()
deleteStudent()


2- Create a service that calls the functions present within the interface.
createService( <T> Service) -> studentsService


3- Last step, within your activity, you have to initialize the step-2 service and then call the functions of the interface in step-1.
destinationService.getDestination()
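A sketch of the three steps with Retrofit 2 and the Gson converter (StudentService, Student, loadStudents, and the base URL are hypothetical names):

```kotlin
import retrofit2.Call
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory
import retrofit2.http.DELETE
import retrofit2.http.GET
import retrofit2.http.Path

data class Student(val id: Int, val name: String)

// 1- The interface maps functions to endpoint URLs.
interface StudentService {
    @GET("students/")
    fun getStudents(): Call<List<Student>>

    @DELETE("students/{id}/")
    fun deleteStudent(@Path("id") id: Int): Call<Unit>
}

// 2- Build the service from the interface.
val retrofit: Retrofit = Retrofit.Builder()
    .baseUrl("https://example.com/api/")
    .addConverterFactory(GsonConverterFactory.create())
    .build()

val studentsService: StudentService = retrofit.create(StudentService::class.java)

// 3- In the Activity, call the interface functions; enqueue() runs off the main thread.
fun loadStudents() {
    studentsService.getStudents().enqueue(object : retrofit2.Callback<List<Student>> {
        override fun onResponse(
            call: Call<List<Student>>,
            response: retrofit2.Response<List<Student>>
        ) {
            val students = response.body() // use the result here
        }

        override fun onFailure(call: Call<List<Student>>, t: Throwable) {
            // handle network error
        }
    })
}
```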

+Shared Preferences (May 14, 2019, 12:54 a.m.)

It allows activities and applications to keep preferences, in the form of key-value pairs similar to a Map that will persist even when the user closes the application.

Android stores Shared Preferences settings as XML file in shared_prefs folder under DATA/data/{application package} directory. The DATA folder can be obtained by calling Environment.getDataDirectory().

------------------------------------------------------------

SharedPreferences is application specific, i.e. the data is lost on performing one of the following options:
- on uninstalling the application
- on clearing the application data (through Settings)

------------------------------------------------------------

As the name suggests, the primary purpose is to store user-specified configuration details, such as user specific settings, keeping the user logged into the application.

------------------------------------------------------------

To get access to the preferences, we have three APIs to choose from:
- getPreferences() : used from within your Activity, to access activity-specific preferences

- getSharedPreferences() : used from within your Activity (or other application Context), to access application-level preferences

- getDefaultSharedPreferences() : used on the PreferenceManager, to get the shared preferences that work in concert with Android’s overall preference framework

------------------------------------------------------------

// Storing Data:
sharedPref = getSharedPreferences(getString(R.string.preference_file_key), MODE_PRIVATE)
with(sharedPref.edit()) {
putBoolean("intro_screen_displayed", true)
apply()
}



// Retrieving Data
var sharedPref = getSharedPreferences(getString(R.string.preference_file_key), MODE_PRIVATE)
if (sharedPref.getBoolean("intro_screen_displayed", false))
startActivity(mainActivity)

------------------------------------------------------------

editor.putBoolean("key_name", true); // Storing boolean - true/false
editor.putString("key_name", "string value"); // Storing string
editor.putInt("key_name", 42); // Storing integer
editor.putFloat("key_name", 3.14f); // Storing float
editor.putLong("key_name", 100L); // Storing long

pref.getString("key_name", null); // getting String
pref.getInt("key_name", -1); // getting Integer
pref.getFloat("key_name", 0f); // getting Float (primitive defaults cannot be null)
pref.getLong("key_name", -1L); // getting Long
pref.getBoolean("key_name", false); // getting boolean

------------------------------------------------------------

// Clearing or Deleting Data:
remove("key_name") is used to delete that particular value.

clear() is used to remove all data

------------------------------------------------------------

+Repeat background image (May 11, 2019, 10:11 p.m.)

1- Copy the background image in drawable


2- Create a file in drawable "bg_pattern.xml" with this content:
<bitmap xmlns:android="http://schemas.android.com/apk/res/android"
    android:src="@drawable/bg"
    android:tileMode="repeat" />


3- Add the following attribute to the XML file for the specific view:
android:background="@drawable/bg_pattern"

+Get asset image by its string name (May 11, 2019, 4:17 p.m.)

import android.graphics.BitmapFactory
import android.graphics.Bitmap


val icon: Bitmap? = BitmapFactory.decodeStream(assets.open("intro_screen/img1.jpg"))
imageView.setImageBitmap(icon)  // imageView is the target ImageView in your layout

+dimensions (May 10, 2019, 9:51 p.m.)

xxxhdpi: 1280x1920 px
xxhdpi: 960x1600 px
xhdpi: 640x960 px
hdpi: 480x800 px
mdpi: 320x480 px
ldpi: 240x320 px

+mipmap directories (May 10, 2019, 9:40 p.m.)

Like all other bitmap assets, you need to provide density-specific versions of your app icon. However, some app launchers display your app icon as much as 25% larger than what's called for by the device's density bucket.

For example, if a device's density bucket is xxhdpi and the largest app icon you provide is in drawable-xxhdpi, the launcher app scales up this icon, and that makes it appear less crisp. So you should provide an even higher density launcher icon in the mipmap-xxxhdpi directory. Now the launcher can use the xxxhdpi asset instead.

Because your app icon might be scaled up like this, you should put all your app icons in mipmap directories instead of drawable directories. Unlike the drawable directory, all mipmap directories are retained in the APK even if you build density-specific APKs. This allows launcher apps to pick the best resolution icon to display on the home screen.

+Configuration qualifiers for different pixel densities (May 10, 2019, 9:31 p.m.)

ldpi Resources for low-density (ldpi) screens (~120dpi).
mdpi Resources for medium-density (mdpi) screens (~160dpi). (This is the baseline density.)
hdpi Resources for high-density (hdpi) screens (~240dpi).
xhdpi Resources for extra-high-density (xhdpi) screens (~320dpi).
xxhdpi Resources for extra-extra-high-density (xxhdpi) screens (~480dpi).
xxxhdpi Resources for extra-extra-extra-high-density (xxxhdpi) screens (~640dpi).
nodpi Resources for all densities. These are density-independent resources. The system does not scale resources tagged with this qualifier, regardless of the current screen's density.
tvdpi Resources for screens somewhere between mdpi and hdpi; approximately 213dpi. This is not considered a "primary" density group. It is mostly intended for televisions and most apps shouldn't need it—providing mdpi and hdpi resources is sufficient for most apps and the system will scale them as appropriate. If you find it necessary to provide tvdpi resources, you should size them at a factor of 1.33*mdpi. For example, a 100px x 100px image for mdpi screens should be 133px x 133px for tvdpi.

+ConstraintLayout (March 24, 2019, 2:45 p.m.)

Constraints describe the relationships between views.

-----------------------------------------------------------------

A constraint is a connection or an alignment to the element the constraint is tied to. You define various constraints for every child view relative to other views present. This gives you the ability to construct complex layouts with a flat view hierarchy.

A constraint is simply a relationship between two components within the layout that controls how the view will be positioned.

-----------------------------------------------------------------

The ConstraintLayout system has three parts: constraints, equations, and solver.

Constraints are relationships between your views and are determined when you set up your UI. Once you create these relationships, the system will translate them into a linear system of equations.

The equations go in the solver and it returns the positions, and view sizes to be used in the layout.

-----------------------------------------------------------------

The ConstraintLayout becomes especially useful when building complex layouts. Android has other layouts with their own unique features, some of which can also be used to build complex layouts. However, they have their own bottlenecks, hence the need for a new layout.


These older layouts have rules that tend to be too rigid. As a result, the tendency to nest layouts becomes higher. For instance, the LinearLayout only permits placing views linearly, either horizontally or vertically. The FrameLayout places views in a stacked manner, with the topmost view hiding the rest. The RelativeLayout places views relative to each other.

-----------------------------------------------------------------

When creating constraints, there are a few rules to follow:
Every view must have at least two constraints: one horizontal and one vertical. If a constraint for any axis is not added, your view jumps to the zero point of that axis.

You can create constraints only between a constraint handle and an anchor point that share the same plane. So a vertical plane (the left and right sides) of a view can be constrained only to another vertical plane, and baselines can constrain only to other baselines.

Each constraint handle can be used for just one constraint, but you can create multiple constraints (from different views) to the same anchor point.
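As a sketch, a view satisfying the two-constraint rule — one horizontal (Start) and one vertical (Top) constraint, both to the parent; the id is illustrative:

```xml
<TextView
    android:id="@+id/title"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent" />
```

Removing either of the two app:layout_constraint* lines would make the view jump to the zero point of that axis, as described above.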

-----------------------------------------------------------------

+Custom font (April 26, 2019, 10:46 p.m.)

https://medium.com/@studymongolian/using-a-custom-font-in-your-android-app-cc4344b977a5

+Creating actions in the action bar / toolbar (April 26, 2019, 12:40 a.m.)

https://developer.android.com/training/appbar/actions
--------------------------------------------------------------------

Buttons in the toolbar are typically called actions.

Space in the app bar is limited. If an app declares more actions than can fit in the app bar, the app bar sends the excess actions to an overflow menu.

The app can also specify that an action should always be shown in the overflow menu, instead of being displayed on the app bar.

--------------------------------------------------------------------

Add Action Buttons:

All action buttons and other items available in the action overflow are defined in an XML menu resource.


To add actions to the action bar, create a new XML file in your project's res/menu/ directory as follows:
1- In Android Studio, in project view, select "Project", right click on "res" folder and choose the menu "New" -> "Android Resource File".


2- In the window for "file name" set for example "main_toolbar" and for "Resource type" choose "menu", hit OK button.


3- Add an <item> element for each item you want to include in the action bar, as shown in this code example of a menu XML file:

<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">

    <item
        android:id="@+id/action_favorite"
        android:icon="@drawable/ic_favorite_black_48dp"
        android:title="@string/action_favorite"
        app:showAsAction="ifRoom"/>

    <!-- Settings, should always be in the overflow -->
    <item
        android:id="@+id/action_settings"
        android:title="@string/action_settings"
        app:showAsAction="never"/>

</menu>


4- Add the following code to MainActivity.kt
override fun onCreateOptionsMenu(menu: Menu): Boolean {
menuInflater.inflate(R.menu.main_toolbar, menu)
return true
}

// Place onCreateOptionsMenu() at the class level of your Activity, alongside onCreate().

+Set up the app bar (Toolbar) (April 26, 2019, 12:20 a.m.)

https://developer.android.com/training/appbar/setting-up#kotlin
-------------------------------------------------------------------------------

1- Replace android:theme="@style/AppTheme" with android:theme="@style/Theme.AppCompat.Light.NoActionBar" in AndroidManifest.xml

2- Add a Toolbar to the activity's layout (activity_main.xml)
<android.support.v7.widget.Toolbar
    android:id="@+id/my_toolbar"
    android:layout_width="match_parent"
    android:layout_height="?attr/actionBarSize"
    android:background="?attr/colorPrimary"
    android:elevation="4dp"
    android:theme="@style/ThemeOverlay.AppCompat.ActionBar"
    app:popupTheme="@style/ThemeOverlay.AppCompat.Light"/>

It might display an error about "This view is not constrained vertically...", for fixing the error:
Go to Design View, use the magic wand icon in the toolbar menu above the design preview. This will automatically add some lines in the text field and the red line will be removed.

You can also set the background color to transparent:
android:background="@android:color/transparent"

3- Add the 3rd line to MainActivity.kt
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
setSupportActionBar(findViewById(R.id.my_toolbar))

+Views (April 25, 2019, 1:52 p.m.)

A view is basically any of the widgets that make up a typical utility app.

Examples include images (ImageViews), text (TextView), editable text boxes (EditText), web pages (WebViews), and buttons (err, Button).

+XML - Introduction (April 25, 2019, 1:44 p.m.)

XML describes the views in your activities, and Kotlin tells them how to behave.


Sometimes XML will be used to describe types of data other than views in your apps; acting as a kind of index that your code can refer to. This is how most apps will define their color palettes for instance, meaning that there’s just one file you need to edit if you want to change the look of your entire app.
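For instance, a minimal color-palette resource, typically res/values/colors.xml (the names and values here are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <color name="colorPrimary">#3F51B5</color>
    <color name="colorAccent">#FF4081</color>
</resources>
```

Views then reference these as @color/colorPrimary, so changing the palette means editing only this one file.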

Linux
+httrack (Oct. 19, 2019, 1:09 a.m.)

1- Installation:
apt install httrack


2- Usage:
httrack https://songslover.app/best-of-year/v-a-best-of-2018.html -r2 '-*' '+*mp3' -X0 --update

+Radio Streaming Apps (Feb. 20, 2019, 10:32 a.m.)

Cantata

apt install cantata mpd

Favorite List file location:
.local/share/data/cantata/mpd/playlists/

-----------------------------------------------------------

Odio

apt install snapd
snap install odio

-----------------------------------------------------------

Lollypop

add-apt-repository ppa:gnumdk/lollypop
apt update
apt install lollypop

If not found, maybe it's "lollypop-xenial". Do an apt-cache search lollypop to find the correct name.

-----------------------------------------------------------

Guayadeque

add-apt-repository ppa:anonbeat/guayadeque
apt-get update
apt install guayadeque

-----------------------------------------------------------

+CentOS - yum nogpgcheck (July 7, 2019, 9:39 p.m.)

yum --nogpgcheck localinstall packagename.arch.rpm

+CentOS - EPEL (July 7, 2019, 7 p.m.)

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

+CentOS - Check version (July 7, 2019, 6:56 p.m.)

rpm -q centos-release

+SMB (June 26, 2019, 9:27 p.m.)

apt install smbclient

-------------------------------------------------------------

List all shares:
smbclient -L <IP Address> -U Mohsen

Connect to a Disk or other services:
smbclient //<IP Address>/<Disk or Service Name> -U Mohsen

-------------------------------------------------------------

To copy the file from the local file system to the SMB server:
smb: \> put local_file remote_file

To copy the file from the SMB server to the local file system:
smb: \> get remote_file local_file

-------------------------------------------------------------

+aria2c (April 26, 2018, 10:55 a.m.)

aria2c -d ~/Downloads/ -i ~/Downloads/dl.txt --summary-interval=600 -c -x16 -s16 -j1

For limiting speed add:
--max-overall-download-limit=1400K

+Download dependencies and packages to directory (June 24, 2019, 1:38 p.m.)

1- In server with no Internet:
apt-get --print-uris --yes install <my_package_name> | grep ^\' | cut -d\' -f2 > downloads.list

2- Download the links from another server with Internet connection:
wget --input-file downloads.list

3- Copy the files to the location /var/cache/apt/archives in destination server.

4- Install the package using apt install.

+Change/Rename username/group (June 16, 2019, 5:13 p.m.)

usermod -l new-name old-name

groupmod -n new-group old-group

-------------------------------------------------------------------

If following error occurred:
usermod: user tom is currently used by process 123:

pkill -u old_name 123
pkill -9 -u old_name

-------------------------------------------------------------------

+rsync (May 5, 2018, 11:26 a.m.)

--delete : delete files that don't exist on sender (system)
-v : Verbose (try -vv for more detailed information)
-e "ssh options" : specify the ssh as remote shell
-a : archive mode
-r : recurse into directories
-z : compress file data

---------------------------------------------------------------------------

rsync -civarzhne 'ssh -p 22' --no-g --no-p --delete --force --exclude-from 'fair/rsync' fair root@fair.mohsenhassani.ir:/srv/

---------------------------------------------------------------------------

rsync -arvb --exclude-from 'my_project/rsync-exclude-list.txt' --delete --backup-dir='my_project/my_project/rsync-deletions' -e ssh my_project mohsen@mohsenhassani.com:/srv/

---------------------------------------------------------------------------

rsync -varPe 'ssh' --ignore-existing mohsenhasani.com:~/temp/music/* /home/mohsen/Audio/Music/Unsorted/music/

---------------------------------------------------------------------------

Exclude files and folders:

Files:
--exclude 'sources.txt'
--exclude '*.pyc'

Directories:
--exclude '/static'
--exclude 'abc*'

Together:
--exclude 'sources.txt' --exclude 'abc*'

---------------------------------------------------------------------------

-a = recursive (recurse into directories), links (copy symlinks as symlinks), perms (preserve permissions), times (preserve modification times), group (preserve group), owner (preserve owner), preserve device files, and preserve special files.

-v = verbose. The reason I think verbose is important is so you can see exactly what rsync is backing up. Think about this: What if your hard drive is going bad, and starts deleting files without your knowledge, then you run your rsync script and it pushes those changes to your backups, thereby deleting all instances of a file that you did not want to get rid of?

--delete = This tells rsync to delete any files that are in Directory2 that aren’t in Directory1. If you choose to use this option, I recommend also using the verbose options, for reasons mentioned above.

-l = preserves any symlinks you may have created.

--progress = shows the progress of each file transfer. Can be useful to know if you have large files being backed up.

--stats = Adds a little more output regarding the file transfer status.

-I, --ignore-times
Normally rsync will skip any files that are already the same size and have the same modification timestamp. This option turns off this "quick check" behavior, causing all files to be updated.

-b, --backup
With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options. Note that if you don’t specify --backup-dir, (1) the --omit-dir-times option will be implied, and (2) if --delete is also in effect (without --delete-excluded), rsync will add a "protect" filter-rule for the backup suffix to the end of all your existing excludes (e.g. -f "P *~"). This will prevent previously backed-up files from being deleted. Note that if you are supplying your own filter rules, you may need to manually insert your own exclude/protect rule somewhere higher up in the list so that it has a high enough priority to be effective (e.g., if your rules specify a trailing inclusion/exclusion of ’*’, the auto-added rule would never be reached).

--backup-dir=DIR
In combination with the --backup option, this tells rsync to store all backups in the specified directory on the receiving side. This can be used for incremental backups. You can additionally specify a backup suffix using the --suffix option (otherwise the files backed up in the specified directory will keep their original filenames). Note that if you specify a relative path, the backup directory will be relative to the destination directory, so you probably want to specify either an absolute path or a path that starts with "../". If an rsync daemon is the receiver, the backup dir cannot go outside the module’s path hierarchy, so take extra care not to delete it or copy into it.

--suffix=SUFFIX
This option allows you to override the default backup suffix used with the --backup (-b) option. The default suffix is a ~ if no --backup-dir was specified, otherwise it is an empty string.

-u, --update
This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file. (If an existing destination file has a modification time equal to the source file’s, it will be updated if the sizes are different.) Note that this does not affect the copying of symlinks or other special files. Also, a difference of file format between the sender and receiver is always considered to be important enough for an update, no matter what date is on the objects. In other words, if the source has a directory where the destination has a file, the transfer would occur regardless of the timestamps. This option is a transfer rule, not an exclude, so it doesn’t affect the data that goes into the file-lists, and thus it doesn’t affect deletions. It just limits the files that the receiver requests to be transferred.

---------------------------------------------------------------------------

+Shadowsocks - Proxy tool (May 13, 2018, 9:25 p.m.)

Server Installation:

(Use python 2.7)
1- pip install shadowsocks
(You can create a virtualenv if you want.)


2- Create a file /etc/shadowsocks.json:
{
    "server": "[server ip address]",
    "port_password": {
        "8381": "Mohsen123",
        "8382": "Mohsen321",
        "8383": "MoMo"
    },
    "local_port": 1080,
    "timeout": 600,
    "method": "aes-256-cfb"
}


3- ssserver --manager-address /var/run/shadowsocks-manager.sock -c /etc/shadowsocks.json start
(If you installed shadowsocks in a virtualenv, you need to "activate" it to see the command "ssserver")

If you got error like this:
AttributeError: /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1: undefined symbol: EVP_CIPHER_CTX_cleanup
Refer to the bottom of this note for solution!

If you got these errors:
[Errno 98] Address already in use
can not bind to manager address
Delete the file in:
rm /var/run/shadowsocks-manager.sock


4- Open Firewall Port to Shadowsocks Client for each ports defined at the above json file:
ufw allow proto tcp to 0.0.0.0/0 port 8381 comment "Shadowsocks server listen port"
Do the same for other ports too, 8382, 8383, etc

5- Automatically Start Shadowsocks Service:
put the whole line in step 3 in the file /etc/rc.local

---------------------------------------------------------------------

Client Installation: (Linux)

1- pip install shadowsocks
(You can create a virtualenv if you want.)


2- Create a file /etc/shadowsocks.json with the exact content from step 2 of "Server Installation".

3- sslocal -c /etc/shadowsocks.json
(If you installed shadowsocks in a virtualenv, you need to "activate" it to see the command "sslocal")

---------------------------------------------------------------------

Client Installation: (Android)

Install the Shadowsocks app from the link below:
https://play.google.com/store/apps/details?id=com.github.shadowsocks

---------------------------------------------------------------------

If you got error like this:
AttributeError: /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1: undefined symbol: EVP_CIPHER_CTX_cleanup

Open the file:
vim /usr/local/lib/python2.7/dist-packages/shadowsocks/crypto/openssl.py

Replace "cleanup" with "reset" in line 52:
libcrypto.EVP_CIPHER_CTX_cleanup.argtypes = (c_void_p,)
libcrypto.EVP_CIPHER_CTX_reset.argtypes = (c_void_p,)

And also replace "cleanup" with "reset" in line 111:
libcrypto.EVP_CIPHER_CTX_cleanup
with:
libcrypto.EVP_CIPHER_CTX_reset

---------------------------------------------------------------------

+Check if a disk is an SSD or an HDD (Dec. 18, 2018, 9:21 a.m.)

cat /sys/block/sda/queue/rotational

You should get 1 for hard disks and 0 for an SSD
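That flag can be wrapped in a small helper; a sketch that takes the sysfs base directory as a parameter (the parameter exists only so the function can also be pointed at a test tree):

```shell
#!/bin/sh
# Print SSD/HDD for every block device under the given sysfs base
# (defaults to /sys/block).
classify_disks() {
    base=${1:-/sys/block}
    for flag in "$base"/*/queue/rotational; do
        [ -e "$flag" ] || continue
        dev=${flag%/queue/rotational}   # strip the trailing path
        dev=${dev##*/}                  # keep only the device name
        case $(cat "$flag") in
            0) echo "$dev: SSD" ;;
            1) echo "$dev: HDD" ;;
            *) echo "$dev: unknown" ;;
        esac
    done
}

classify_disks "$@"
```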

--------------------------------------------------------------

lsblk -d -o name,rota

--------------------------------------------------------------

Verify VPS provided is on SSD drive:

dd if=/dev/zero of=/tmp/basezap.img bs=512 count=1000 oflag=dsync

This command should take only a few seconds if it is an SSD. If it took longer, it is a normal hard disk.

--------------------------------------------------------------

time for i in `seq 1 1000`; do
dd bs=4k if=/dev/sda count=1 skip=$(( $RANDOM * 128 )) >/dev/null 2>&1;
done

--------------------------------------------------------------

+ffmpeg (May 10, 2019, 4:30 p.m.)

Cut Movies:
ffmpeg -i 4.VOB -ss 00:14 -t 02:11 -c copy cut2.mp4

---------------------------------------------------------

Resize resolution:
ffmpeg -i input.mp4 -s 640x480 -b:v 1024k -vcodec mpeg4 -acodec copy output.mp4


List of all formats & codes supported by ffmpeg:
ffmpeg -formats

ffmpeg -codecs

---------------------------------------------------------

Converting mp4 to mp3:

ffmpeg -i video.mp4 -vn -acodec libmp3lame -ac 2 -qscale:a 4 -ar 48000 audio.mp3

---------------------------------------------------------

Merge audio & video:

ffmpeg -i video.mp4 -i audio.mp3 -c:v copy -c:a mp3 -strict experimental output.mp4

---------------------------------------------------------

+OpenVPN (Nov. 18, 2018, 9:52 a.m.)

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-16-04

=================== Server Configuration ===================

1- apt install openvpn easy-rsa

2- make-cadir /var/openvpn-ca

3- Build the Certificate Authority:
cd /var/openvpn-ca
mv openssl-1.0.0.cnf openssl.cnf
source vars
./clean-all
./build-ca


4- Create the Server Certificate, Key, and Encryption Files:
./build-key-server server
When asked for "Sign the certificate" reply "y"
./build-dh


5- Generate an HMAC signature to strengthen the server's TLS integrity verification capabilities:
openvpn --genkey --secret keys/ta.key


6- Generate a Client Certificate and Key Pair:
./build-key user1


7- Copy the Files to the OpenVPN Directory:
cd keys
cp ca.crt server.crt server.key ta.key dh2048.pem /etc/openvpn
If the file "dh2048.pem" was not available, you can copy it from:
cp /usr/share/doc/openvpn/examples/sample-keys/dh2048.pem /etc/openvpn
or you might need to locate it.


8- Copy and unzip a sample OpenVPN configuration file into configuration directory:
gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz | tee /etc/openvpn/server.conf


9- Adjust the OpenVPN Configuration:
vim /etc/openvpn/server.conf

* Find the directive "tls-auth ta.key 0", uncomment it (if it's commented) and add "key-direction 0" below it.

* Find "cipher AES-256-CBC", uncomment it and add "auth SHA256" below it.

* Find and uncomment:
user nobody
group nogroup
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"


10- Allow IP Forwarding:
Uncomment the line "net.ipv4.ip_forward=1" in /etc/sysctl.conf.
To read the file and adjust the values for the current session, type:
sysctl -p


11- Adjust the UFW Rules to Masquerade Client Connections:
Find the public network interface using:
ip route | grep default
The part after "dev" is the public network interface. We need it for next step.


12- Add the following lines to the bottom of the file "/etc/ufw/before.rules":

# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to server public network interface
-A POSTROUTING -s 10.8.0.0/8 -o <your_public_network_interface> -j MASQUERADE
COMMIT
# END OPENVPN RULES


13- Open the file "/etc/default/ufw":
Find "DEFAULT_FORWARD_POLICY="DROP"" and change "DROP" to "ACCEPT".


14- Open the OpenVPN Port and Enable the Changes:
ufw allow 1194/udp
ufw allow OpenSSH
ufw disable
ufw enable


15- Start and Enable the OpenVPN Service:
systemctl start openvpn@server
systemctl status openvpn@server

Also check that the OpenVPN tun0 interface is available:
ip addr show tun0


16- Enable the service so that it starts automatically at boot:
systemctl enable openvpn@server


17- Create the Client Config Directory Structure:
mkdir -p /var/client-configs/files
chmod 700 /var/client-configs/files


18- Copy an example client configuration:
cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /var/client-configs/base.conf


19- Open the "/var/client-configs/base.conf" file and enter your server IP to the directive:
remote <your_server_ip> 1194

Uncomment:
user nobody
group nogroup

Comment:
# ca ca.crt
# cert client.crt
# key client.key

Add "auth SHA256" after the line "cipher AES-256-CBC"

Add "key-direction 1" somewhere in the file.

Add a few commented out lines:
# script-security 2
# up /etc/openvpn/update-resolv-conf
# down /etc/openvpn/update-resolv-conf
If your client is running Linux and has an /etc/openvpn/update-resolv-conf file, you should uncomment these lines from the generated OpenVPN client configuration file.


20- Creating a Configuration Generation Script:
vim /var/client-configs/make_config.sh

Paste the following script:
#!/bin/bash

# First argument: Client identifier

KEY_DIR=/var/openvpn-ca/keys
OUTPUT_DIR=/var/client-configs/files
BASE_CONFIG=/var/client-configs/base.conf

cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn


21- Mark the file as executable:
chmod 700 /var/client-configs/make_config.sh


22- Generate Client Configurations:
cd /var/client-configs/
./make_config.sh user1

If everything went well, we should have a "user1.ovpn" file in our "/var/client-configs/files" directory.


23- Transferring Configuration to Client Devices:
Use scp or any other method to download a copy of the created "user1.ovpn" file to your client.


=================== Client Configuration ===================

24- Install the Client Configuration:
apt install openvpn


25- Check to see if your distribution includes a "/etc/openvpn/update-resolv-conf" script:
ls /etc/openvpn
If you see a file "update-resolv-conf":
Edit the OpenVPN client configuration file you transferred and uncomment the three lines we placed in to adjust the DNS settings.


26- If you are using CentOS, change the group from nogroup to nobody to match the distribution's available groups:


27- Now, you can connect to the VPN by just pointing the openvpn command to the client configuration file:
sudo openvpn --config user1.ovpn

+DVB - TV Card Driver (April 17, 2015, 7:49 p.m.)

This will install the driver automatically:

1- mkdir it9135 && cd it9135

2- wget http://www.ite.com.tw/uploads/firmware/v3.6.0.0/dvb-usb-it9135.zip

3- unzip dvb-usb-it9135.zip

4- dd if=dvb-usb-it9135.fw ibs=1 skip=64 count=8128 of=dvb-usb-it9135-01.fw

5- dd if=dvb-usb-it9135.fw ibs=1 skip=12866 count=5817 of=dvb-usb-it9135-02.fw

6- rm dvb-usb-it9135.fw

7- sudo install -D *.fw /lib/firmware

8- sudo chmod 644 /lib/firmware/dvb-usb-it9135* && cd .. && rm -rf it9135

9- sudo apt install kaffeine

After the above solution, you should be able to watch channels via Kaffeine (or any other DVB player). Just grab Kaffeine, scan the frequencies and you should be fine!

-------------------------------------------------------------

If you had problems with the above solution, check the older method below:

http://nucblog.net/2014/11/installing-media-build-drivers-for-additional-tv-tuner-support-in-linux/

1- sudo apt-get install libproc-processtable-perl git libc6-dev

2- git clone git://linuxtv.org/media_build.git

3- cd media_build

4- ./build

5- sudo make install

6- apt-get install me-tv kaffeine

7- Reboot to load the driver (I don't know the module name for modprobe yet).

-----------------------------------------------------

Scan channels using Kaffeine:

1- Open Kaffeine

2- From the `Television` menu, choose `Configure Television`.

3- In the `Device 1` tab, for the `Source` option, choose `Autoscan`.

4- From the `Television` menu, choose `Channels`.

5- Click on `Start Scan`. After the scan procedure is done, select all channels from the side panel and click on `Add Selected` to add them to your channels.

-------------------------------------------------------------

Scan channels using Me-TV:

1- Open Me-TV

2- When the scan dialog opens, choose `Czech Republic` from `Auto Scan`.

-------------------------------------------------------------

+Permanently set $PATH (April 19, 2019, 9:39 p.m.)

vim /root/.profile

export PATH="$PATH:/usr/share/logstash/bin/"

+Test if a port is open (April 7, 2018, 9:07 p.m.)

telnet mohsenhassani.ir 80
nc -z mohsenhassani.ir 80
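With bash alone (no telnet or nc installed), the /dev/tcp pseudo-device can do the same check — a sketch; the host and port in the final call are just examples:

```shell
#!/bin/sh
# Prints "open" if the TCP port accepts a connection, "closed" otherwise.
port_open() {
    # $1 = host, $2 = port
    if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
        return 1
    fi
}

# "|| true" so a closed port doesn't abort scripts run with set -e.
port_open mohsenhassani.ir 80 || true
```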

+sed - inline string replace (April 7, 2018, 6:29 p.m.)

echo "the old string . . . " | sed -e "s/old/new/g"
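To apply the same substitution in place on a file, GNU sed's -i flag can be used — a small sketch with throwaway file names:

```shell
# Replace every "old" with "new" inside config.txt, keeping a backup
# of the original as config.txt.bak (GNU sed syntax).
printf 'the old string\n' > config.txt
sed -i.bak 's/old/new/g' config.txt
cat config.txt        # the new string
```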

+Install GRUB manually (March 9, 2018, 12:05 p.m.)

sudo mount /dev/sdax /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /dev/pts /mnt/dev/pts
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt

update-initramfs -u
update-grub2

+Forwarding X (March 6, 2018, 7:55 p.m.)

1- Edit the file sshd_config:
vim /etc/ssh/sshd_config

X11Forwarding yes
X11UseLocalhost no

2- Restart ssh server:
/etc/init.d/ssh reload

3- Install xauth:
apt install xauth

4- SSH to the server:
ssh -X root@mohsenhassani.com

+Partitioning Error - Partition table entries are not in disk order (Feb. 13, 2018, 5:37 p.m.)

sudo gdisk /dev/sda
p (the p-command prints the recent partition-table on-screen)
s (the s-command sorts the partition-table entries)
p (use the p-command again to see the result on your screen)
w (write the changed partition-table to the disk)
q (quit gdisk)

+OwnCloud (Feb. 3, 2018, 3:37 p.m.)

Installation:

1- apt install -y apache2 mariadb-server libapache2-mod-php7.0 php7.0-gd php7.0-json php7.0-mysql php7.0-curl php7.0-intl php7.0-mcrypt php-imagick php7.0-zip php7.0-xml php7.0-mbstring php-apcu php-redis redis-server php7.0-ldap php-smbclient

2- Download tar file from the address: https://owncloud.org/download/
Extract the file to /srv/

3- Remove the config files in /etc/apache2/sites-available and "sites-enabled".
Create an Apache config file with the content:
vim /etc/apache2/sites-available/owncloud.conf

Redirect permanent /owncloud https://files.deskbit.office
<VirtualHost *:443>
Header add Strict-Transport-Security: "max-age=15768000;includeSubdomains"
SSLEngine on

DocumentRoot /srv/owncloud

<Directory /srv/owncloud>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>

SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

<IfModule mod_dav.c>
Dav off
</IfModule>

SetEnv HOME /srv/owncloud
SetEnv HTTP_HOME /srv/owncloud
</VirtualHost>

4- Create a symlink:
ln -s /etc/apache2/sites-available/owncloud.conf /etc/apache2/sites-enabled/owncloud.conf

5- Enable some required modules for Apache:
systemctl restart apache2
a2enmod rewrite
a2enmod headers

6- chown -R www-data:www-data /srv/owncloud

7- Configure Database:
mysql -u root -p
GRANT ALL PRIVILEGES ON owncloud.* TO 'root'@'localhost' IDENTIFIED BY 'password';
quit

8- Open the server address in browser and complete the installation:
http://<ip>/owncloud

9- vim /etc/php/7.0/cli/conf.d/20-redis.ini
extension=redis.so

10- Add these two lines at the top of the file /srv/owncloud/data/.htaccess
deny from all
IndexIgnore *


11- Check the owncloud config file is the same as the following: /srv/owncloud/config/config.php
<?php
$CONFIG = array (
'instanceid' => '...',
'passwordsalt' => '...',
'secret' => '...',
'trusted_domains' =>
array (
0 => 'your.domain.name',
),
'datadirectory' => '/srv/owncloud/data',
'overwrite.cli.url' => 'http://your.domain.name/owncloud',
'dbtype' => 'mysql',
'version' => '10.0.6.1',
'dbname' => 'owncloud',
'dbhost' => 'localhost',
'dbtableprefix' => 'oc_',
'dbuser' => 'oc_admin',
'dbpassword' => '...',
'logtimezone' => 'UTC',
'installed' => true,
'filelocking.enabled' => true,
'memcache.local' => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\APCu',
);


12- Enabling SSL:
a2enmod ssl
a2ensite default-ssl
service apache2 reload

13- Edit the file /etc/php/7.0/cli/conf.d/20-apcu.ini and make sure it has only the value:
extension=apcu.so


Restart apache:
/etc/init.d/apache2 restart
=============================================

Management Commands:

sudo -u www-data php /srv/owncloud/occ user:resetpassword admin
--------------------------------------------------------------------------------
See OwnCloud version:
sudo -u www-data php /srv/owncloud/occ -V
OR
<yourowncloudurl>/status.php
OR
sudo -u www-data php /srv/owncloud/occ status
--------------------------------------------------------------------------------
User Commands:
user:add adds a user
user:delete deletes the specified user
user:disable disables the specified user
user:enable enables the specified user
user:inactive reports users who are known to owncloud, but have not logged in for a certain number of days
user:lastseen shows when the user was logged in last time
user:list list users
user:list-groups list groups for a user
user:report shows how many users have access
user:resetpassword Resets the password of the named user
user:setting Read and modify user settings
user:sync Sync local users with an external backend service
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

+pmacct configuration with PostgreSQL (Jan. 30, 2018, 10:19 p.m.)

https://wiki.alpinelinux.org/wiki/IP_Accounting
-------------------------------------------------------------------
su postgres
psql -d template1 -f pmacct-create-db.pgsql
psql -d pmacct -f pmacct-create-table_v1.pgsql
vim /etc/pmacct/pmacctd.conf

+Get Hardware Information (Jan. 24, 2018, 4:40 p.m.)

lshw

+tcpdump (Jan. 13, 2018, 11:29 a.m.)

apt install tcpdump

sudo tcpdump -i any -n host 5.219.145.86
sudo tcpdump -nti any port 80

+Use cURL on specific interface (Jan. 9, 2018, 1:09 p.m.)

curl -o rootLast.tbz2 http://ftp.mohsenhassani.com/rootLast.tbz2 --interface eno2

+pmacct (Jan. 1, 2018, 10:49 a.m.)

su
su postgres
The SQL scripts ship in /usr/share/doc/pmacct/sql/ (e.g. pmacct-create-table_v1.pgsql); copy them to /tmp first, then:
psql -d template1 -f /tmp/pmacct-create-db.pgsql
psql -d pmacct -f /tmp/pmacct-create-table_v1.pgsql
----------------------------------------------------------------------------------------------
Configuration Directives:
https://github.com/pmacct/pmacct/blob/master/CONFIG-KEYS
http://wiki.pmacct.net/OfficialConfigKeys
----------------------------------------------------------------------------------------------
vim /etc/pmacct/nfacctd.conf
! nfacctd configuration
!
!
!
daemonize: true
pidfile: /var/run/nfacctd.pid
syslog: daemon
!
! interested in in and outbound traffic
aggregate: src_host,dst_host
! on this network
pcap_filter: net 127.0.0.0/8
! on this interface
interface: lo
!
! storage methods
plugins: pgsql
sql_host: localhost
sql_passwd: myrealsecurepwd
! refresh the db every 10 minutes
sql_refresh_time: 600
! reduce the size of the insert/update clause
sql_optimize_clauses: false
! accumulate values in each row for up to 10 minutes
sql_history: 10m
! create new rows on the minute, hour, day boundaries
sql_history_roundoff: 10m
! in case of emergency, log to this file
!sql_recovery_logfile: /var/lib/pmacct/nfacctd_recovery_log
nfacctd_port: 6653
imt_mem_pools_number: 0
plugin_pipe_size: 4096000
! plugin_buffer_size: 32212254720
----------------------------------------------------------------------------------------------

+Chroot (Dec. 25, 2017, 11:11 a.m.)

chroot /srv/root /bin/bash

+PDF Conversions (Nov. 6, 2017, 3:21 p.m.)

Installation:
apt install graphicsmagick-imagemagick-compat

-------------------------------------------------------------

Convert multiple images to a PDF file:
convert *.jpg aa.pdf

-------------------------------------------------------------

Convert a PDF file to images:

convert 1.pdf 1.jpg

For a single page:
convert 1.pdf[4] 1.jpg

-------------------------------------------------------------

If the following error occurred:
convert: not authorized `1.pdf' @ error/constitute.c/ReadImage/412.
convert: no images defined `1.jpg' @ error/convert.c/ConvertImageCommand/3210.

Solution:
This problem comes from a security update.
Edit the file: /etc/ImageMagick-6/policy.xml
Change rights="none" to rights="read|write" in the PDF line, so it reads:
<policy domain="coder" rights="read|write" pattern="PDF" />

-------------------------------------------------------------

+Add a New Disk to an Existing Linux Server (Oct. 25, 2017, 3:44 p.m.)

1- Check if the added disk is shown:
fdisk -l

2- For partitioning:
fdisk /dev/vdb
n
p
1
2048
+49G (For a 50G disk)
w
------------------------------
Now format the disk with mkfs command.
mkfs.ext4 /dev/vdb1

Make an entry in /etc/fstab file for permanent mount at boot time:
/dev/vdb1 /mnt/ftp ext4 defaults 0 0

+DevStack (Oct. 4, 2017, 12:36 a.m.)

https://docs.openstack.org/devstack/latest/
------------------------------------------------------------------------------------
apt install sudo git

1- Add Stack User
useradd -s /bin/bash -d /opt/stack -m stack


2- Since this user will be making many changes to your system, it should have sudo privileges:
echo "stack ALL=(ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/stack
su - stack


3- Download DevStack
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack

4- Create a local.conf with the following content
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
------------------------------------------------------------------------------------

+Clear Terminal Completely (Sept. 18, 2017, 6:13 p.m.)

clear && printf '\e[3J'

+Add SSH Private Key (Sept. 18, 2017, 5:01 p.m.)

ssh-add .ssh/id_rsa

If you got an error:
Could not open a connection to your authentication agent.

For fixing it run:
eval `ssh-agent -s`
OR
eval $(ssh-agent)

And then repeat the earlier command (ssh-add ....)
------------------------------------------------------------
Add SSH private key permanently:
Create a file ~/.ssh/config with the content:
IdentityFile ~/.ssh/id_mohsen
------------------------------------------------------------

+Commands - IP (Sept. 16, 2017, 5:29 p.m.)

Assign an IP Address to Specific Interface:
ip addr add 192.168.50.5/24 dev eth1
---------------------------------------------------------------------
Check an IP Address
ip addr show
---------------------------------------------------------------------
Remove an IP Address
ip addr del 192.168.50.5/24 dev eth1
---------------------------------------------------------------------
Enable Network Interface
ip link set eth1 up
---------------------------------------------------------------------
Disable Network Interface
ip link set eth1 down
---------------------------------------------------------------------
Check Route Table
ip route show
---------------------------------------------------------------------
Add Static Route
ip route add 10.10.20.0/24 via 192.168.50.100 dev eth0
---------------------------------------------------------------------
Remove Static Route
ip route del 10.10.20.0/24
---------------------------------------------------------------------
Add Default Gateway
ip route add default via 192.168.50.1
---------------------------------------------------------------------
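The show commands above are read-only and safe to try without root. A minimal sketch, assuming a normally configured Linux host where loopback carries 127.0.0.1:

```shell
# One line per address; -o gives single-line output, -4 limits to IPv4
ip -o -4 addr show lo
```

The output should include "127.0.0.1/8" for the lo interface.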

+Commands - Find (Sept. 12, 2017, 11:08 a.m.)

Find Files Using Name in Current Directory
find . -name mohsen.txt

----------------------------------------------------------

Find Files Under Home Directory
find /home -name mohsen.txt

----------------------------------------------------------

Find Files Using Name and Ignoring Case
find /home -iname mohsen.txt

----------------------------------------------------------

Find Directories Using Name
find / -type d -name Mohsen

----------------------------------------------------------

Find PHP Files Using Name
find . -type f -name mohsen.php

----------------------------------------------------------

Find all PHP Files in Directory
find . -type f -name "*.php"

----------------------------------------------------------

Find Files With 777 Permissions
find . -type f -perm 0777 -print

----------------------------------------------------------

Find Files Without 777 Permissions
find / -type f ! -perm 777

----------------------------------------------------------

Find SGID Files with 644 Permissions
find / -perm 2644

----------------------------------------------------------

Find Sticky Bit Files with 551 Permissions
find / -perm 1551

----------------------------------------------------------

Find SUID Files
find / -perm /u=s

----------------------------------------------------------

Find SGID Files
find / -perm /g=s

----------------------------------------------------------

Find Read Only Files
find / -perm /u=r

----------------------------------------------------------

Find Executable Files
find / -perm /a=x

----------------------------------------------------------

Find Files with 777 Permissions and Chmod to 644
find / -type f -perm 0777 -print -exec chmod 644 {} \;

----------------------------------------------------------

Find Directories with 777 Permissions and Chmod to 755
find / -type d -perm 777 -print -exec chmod 755 {} \;
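Before running a recursive chmod against /, it helps to try the same pattern in a scratch directory; a sketch (GNU coreutils assumed):

```shell
# Create a throwaway directory tree with one world-writable subdirectory
d=$(mktemp -d)
mkdir "$d/open"
chmod 777 "$d/open"
# Print and fix every 777 directory under $d
find "$d" -type d -perm 777 -print -exec chmod 755 {} \;
# Verify the new mode
stat -c %a "$d/open"
# prints 755
```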

----------------------------------------------------------

Find and remove single File
find . -type f -name "tecmint.txt" -exec rm -f {} \;

----------------------------------------------------------

Find and remove Multiple File
find . -type f -name "*.txt" -exec rm -f {} \;
OR
find . -type f -name "*.mp3" -exec rm -f {} \;

----------------------------------------------------------

Find all Empty Files
find /tmp -type f -empty

----------------------------------------------------------

Find all Empty Directories
find /tmp -type d -empty
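A quick sanity-check of -empty in a scratch directory:

```shell
d=$(mktemp -d)
touch "$d/blank.txt"          # zero-length file
echo data > "$d/full.txt"     # non-empty file
# Only the zero-length file is listed
find "$d" -type f -empty
```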

----------------------------------------------------------

Find all Hidden Files
find /tmp -type f -name ".*"

----------------------------------------------------------

Find Single File Based on User
find / -user root -name mohsen.txt

----------------------------------------------------------

Find all Files Based on User
find /home -user mohsen

----------------------------------------------------------

Find all Files Based on Group
find /home -group developer

----------------------------------------------------------

Find Particular Files of User
find /home -user mohsen -iname "*.txt"

----------------------------------------------------------

Find Last 50 Days Modified Files
find / -mtime 50

----------------------------------------------------------

Find Last 50 Days Accessed Files
find / -atime 50

----------------------------------------------------------

Find Last 50-100 Days Modified Files
find / -mtime +50 -mtime -100

----------------------------------------------------------

Find Changed Files in Last 1 Hour
find / -cmin -60

----------------------------------------------------------

Find Modified Files in Last 1 Hour
find / -mmin -60

----------------------------------------------------------

Find Accessed Files in Last 1 Hour
find / -amin -60

----------------------------------------------------------

Find 50MB Files
find / -size 50M

----------------------------------------------------------

Find Size between 50MB – 100MB
find / -size +50M -size -100M
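The size range can be tried safely with sparse test files (GNU truncate assumed; -size with the M suffix compares the apparent file size):

```shell
d=$(mktemp -d)
truncate -s 60M "$d/mid.bin"     # 60 MB, inside the range
truncate -s 10M "$d/small.bin"   # 10 MB, outside the range
find "$d" -size +50M -size -100M
```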

----------------------------------------------------------

Find and Delete 100MB Files
find / -size +100M -exec rm -rf {} \;

----------------------------------------------------------

Find Specific Files and Delete
find / -type f -name "*.mp3" -size +10M -exec rm {} \;

----------------------------------------------------------

Find + grep

find . -path ./PC-Projects -prune -o -type f -iname "*.py" -exec grep -iwl 'sqlalchemy' {} \;

----------------------------------------------------------

find /var/mohsen_backups -name "*`date --date='-20 days' +%Y-%m-%d`.tar.gz" -exec rm {} +
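The embedded command substitution above expands to a date-stamped glob; a sketch showing the pattern it produces (GNU date assumed):

```shell
# Same expression the cleanup command uses for 20-day-old tarballs
pattern="*$(date --date='-20 days' +%Y-%m-%d).tar.gz"
echo "$pattern"
# e.g. *2019-05-20.tar.gz (the date part depends on today)
```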

----------------------------------------------------------

Files created/modified before the date "2019-05-07":
find . ! -newermt "2019-05-07"

After the date:
find . -newermt "2019-05-07"

Using datetime:
find . ! -newermt "2019-05-07 12:23:17"

Also:
find . -newermt "june 01, 2019"
find . -not -newermt "june 01, 2019"

find . -type f ! -newermt "June 01, 2019" -exec rm {} +
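The -newermt filters can be verified in a scratch directory by backdating one file (GNU touch -d assumed):

```shell
d=$(mktemp -d)
touch -d "2019-01-01" "$d/old.txt"   # backdated mtime
touch "$d/new.txt"                   # mtime = now
# Only files modified before 2019-05-07 are printed
find "$d" -type f ! -newermt "2019-05-07"
```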

----------------------------------------------------------

+Commands - Netstat (Sept. 12, 2017, 11 a.m.)

netstat (network statistics)
---------------------------------------------------------
Listing all connections and listening ports (TCP and UDP)
netstat -a
---------------------------------------------------------
Listing TCP Ports connections
netstat -at
---------------------------------------------------------
Listing UDP Ports connections
netstat -au
---------------------------------------------------------
Listing all LISTENING Connections
netstat -l
---------------------------------------------------------
Listing all TCP Listening Ports
netstat -lt
---------------------------------------------------------
Listing all UDP Listening Ports
netstat -lu
---------------------------------------------------------
Listing all UNIX Listening Ports
netstat -lx
---------------------------------------------------------
Showing Statistics by Protocol
netstat -s
---------------------------------------------------------
Showing Statistics by TCP Protocol
netstat -st
---------------------------------------------------------
Showing Statistics by UDP Protocol
netstat -su
---------------------------------------------------------
Displaying Service name with PID
netstat -tp
---------------------------------------------------------
Displaying Promiscuous Mode
netstat -ac 5 | grep tcp
---------------------------------------------------------
Displaying Kernel IP routing
netstat -r
---------------------------------------------------------
Showing Network Interface Transactions
netstat -i
---------------------------------------------------------
Showing Kernel Interface Table
netstat -ie
---------------------------------------------------------
Displaying IPv4 and IPv6 Information
netstat -g
---------------------------------------------------------
Print Netstat Information Continuously
netstat -c
---------------------------------------------------------
Verbose output (also reports address families without kernel support)
netstat --verbose
---------------------------------------------------------
Finding Listening Programs
netstat -ap | grep http
---------------------------------------------------------
Displaying RAW Network Statistics
netstat --statistics --raw
---------------------------------------------------------

+NSQ (Sept. 12, 2017, 9:45 a.m.)

Installation:

http://nsq.io/deployment/installing.html

1-Download and extract:
https://s3.amazonaws.com/bitly-downloads/nsq/nsq-1.0.0-compat.linux-amd64.go1.8.tar.gz

2-Copy:
cp nsq-1.0.0-compat.linux-amd64.go1.8/bin/* /usr/local/bin/
-------------------------------------------------------------------
Quick Start:
1- In one shell, start nsqlookupd:
$ nsqlookupd

2- In another shell, start nsqd:
$ nsqd --lookupd-tcp-address=127.0.0.1:4160

3- In another shell, start nsqadmin:
$ nsqadmin --lookupd-http-address=127.0.0.1:4161

4- Publish an initial message (creates the topic in the cluster, too):
$ curl -d 'hello world 1' 'http://127.0.0.1:4151/pub?topic=test'

5- Finally, in another shell, start nsq_to_file:
$ nsq_to_file --topic=test --output-dir=/tmp --lookupd-http-address=127.0.0.1:4161

6- Publish more messages to nsqd:
$ curl -d 'hello world 2' 'http://127.0.0.1:4151/pub?topic=test'
$ curl -d 'hello world 3' 'http://127.0.0.1:4151/pub?topic=test'

7- To verify things worked as expected, in a web browser open http://127.0.0.1:4171/ to view the nsqadmin UI and see statistics. Also, check the contents of the log files (test.*.log) written to /tmp.

The important lesson here is that nsq_to_file (the client) is not explicitly told where the test topic is produced, it retrieves this information from nsqlookupd and, despite the timing of the connection, no messages are lost.
-------------------------------------------------------------------
Clustering NSQ:

nsqlookup

nsqd --lookupd-tcp-address=10.10.0.101:4160,10.10.0.102:4160,10.10.0.103:4160

nsqadmin --lookupd-http-address=10.10.0.101:4161,10.10.0.102:4161,10.10.0.103:4161

+Reverse SSH Tunneling (Sept. 10, 2017, 3:08 p.m.)

1- SSH from the destination to the source (with public IP) using the command below:
ssh -R 19999:localhost:22 sourceuser@138.47.99.99
* port 19999 can be any unused port.

2- Now you can SSH from source to destination through SSH tunneling:
ssh localhost -p 19999

3- 3rd party servers can also access the Destination (192.168.20.55) through the Source (138.47.99.99).
Destination (192.168.20.55) <- |NAT| <- Source (138.47.99.99) <- Bob's server

3.1 From Bob's server:
ssh sourceuser@138.47.99.99

3.2 After the successful login to Source:
ssh localhost -p 19999

The connection between destination and source must be alive at all time.
Tip: you may run a command (e.g. watch, top) on Destination to keep the connection active.

+Auto Mount Hard Disk using /etc/fstab (Sept. 8, 2017, 8:11 a.m.)

UUID=e6a27fec-b822-4cc1-9f41-ca14655f938c /media/mohsen/4TB-Internal ext4 rw,user,exec 0 0

+Traffic Control - Limit Network Interface (Aug. 28, 2017, 4:58 p.m.)

For slowing an interface down:
tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
tc qdisc add dev eno3 root tbf rate 8096kbit latency 1ms burst 4096

Explanation:
qdisc - queueing discipline
rate - the maximum sustained rate (the speed knob)
burst - size of the token bucket, in bytes
latency - maximum time a packet may wait in the queue for tokens before being dropped

+Crontab (July 11, 2017, 12:55 a.m.)

The crontab (cron derives from chronos, Greek for time; tab stands for table).

----------------------------------------------

To see what crontabs are currently running on your system:

sudo crontab -l
crontab -u username -l

----------------------------------------------

To edit the list of cronjobs:
sudo crontab -e

----------------------------------------------

To remove or erase all crontab jobs:
crontab -r

----------------------------------------------

Running GUI Applications:
0 1 * * * env DISPLAY=:0.0 transmission-gtk

Replace :0.0 with your actual DISPLAY.
Use "echo $DISPLAY" to find the display.

----------------------------------------------

Cronjobs are written in the following format:

* * * * * /bin/execute/this/script.sh

As you can see there are 5 stars. The stars represent different date parts in the following order:

minute (from 0 to 59)
hour (from 0 to 23)
day of month (from 1 to 31)
month (from 1 to 12)
day of week (from 0 to 6) (0=Sunday)

----------------------------------------------

Execute every minute:

* * * * * /bin/execute/this/script.sh

This means execute /bin/execute/this/script.sh:

every minute
of every hour
of every day of the month
of every month
and every day in the week.

----------------------------------------------

Execute every Friday 1 AM

0 1 * * 5 /bin/execute/this/script.sh

----------------------------------------------

Execute on workdays 1AM

0 1 * * 1-5 /bin/execute/this/script.sh

----------------------------------------------

Execute 10 minutes past every hour on the 1st of every month

10 * 1 * * /bin/execute/this/script.sh

----------------------------------------------

Run every 10 minutes:

0,10,20,30,40,50 * * * * /bin/execute/this/script.sh

But crontab allows you to do this as well:

*/10 * * * * /bin/execute/this/script.sh
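The step form expands to exactly the comma list written out above, which can be checked with seq:

```shell
# */10 in the minute field = every 10th value of 0..59
seq 0 10 59 | paste -sd, -
# prints 0,10,20,30,40,50
```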

----------------------------------------------

Special words:

For the first (minute) field, you can also put in a keyword instead of a number:

@reboot Run once, at startup
@yearly Run once a year "0 0 1 1 *"
@annually (same as @yearly)
@monthly Run once a month "0 0 1 * *"
@weekly Run once a week "0 0 * * 0"
@daily Run once a day "0 0 * * *"
@midnight (same as @daily)
@hourly Run once an hour "0 * * * *"

Leaving the rest of the fields empty, this would be valid:

@daily /bin/execute/this/script.sh

----------------------------------------------

List of the English abbreviated day of the week, which can be used in place of numbers:

0 -> Sun

1 -> Mon
2 -> Tue
3 -> Wed
4 -> Thu
5 -> Fri
6 -> Sat

7 -> Sun

Having two numbers for Sunday (0 and 7) can be useful for writing weekday ranges starting with 0 or ending with 7.

Examples of Number or Abbreviation Use

The next four examples all do the same thing: execute a command every Friday, Saturday, and Sunday at 09:15:

15 09 * * 5,6,0 command
15 09 * * 5,6,7 command
15 09 * * 5-7 command
15 09 * * Fri,Sat,Sun command

----------------------------------------------

Getting output from a cron job on the terminal:
You can redirect the output of your program to the pts file of an already existing terminal!
To find the pts file of your terminal, run the tty command:
tty
And then add it to the end of your cron task:
38 23 * * * /home/mohsen/Programs/downloader.sh >> /dev/pts/4

----------------------------------------------

Cron jobs get logged to:
/var/log/syslog

You can see just cron jobs in that logfile by running:
grep CRON /var/log/syslog

OR

tail -f /var/log/syslog | grep CRON

----------------------------------------------

Mailing the crontab output

By default, cron saves the output in the user's mailbox (root in this case) on the local system. But you can also configure crontab to forward all output to a real email address by starting your crontab with the following line:

MAILTO="yourname@yourdomain.com"

Mailing the crontab output of just one cronjob.
If you'd rather receive only one cronjob's output in your mail, make sure this package is installed:

$ aptitude install mailx

And change the cronjob like this:

*/10 * * * * /bin/execute/this/script.sh 2>&1 | mail -s "Cronjob output" yourname@yourdomain.com

----------------------------------------------

Trashing the crontab output

Now that's easy:

*/10 * * * * /bin/execute/this/script.sh > /dev/null 2>&1

Just pipe all the output to the null device, also known as the black hole. On Unix-like operating systems, /dev/null is a special file that discards all data written to it.
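A small sketch showing that the redirection silences both streams while leaving the exit status intact:

```shell
# Run a command group that writes to both stdout and stderr,
# discard both, then report the preserved exit status
out=$( { echo stdout; echo stderr >&2; } > /dev/null 2>&1; echo "rc=$?" )
echo "$out"
# prints rc=0
```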

----------------------------------------------

Many scripts are tested in a Bash environment with the PATH variable set. This way it's possible your scripts work in your shell, but when running from cron (where the PATH variable is different), the script cannot find referenced executables and fails.

It's not the job of the script to set PATH, it's the responsibility of the caller, so it can help to echo $PATH, and put PATH=<the result> at the top of your cron files (right below MAILTO).

----------------------------------------------

Applicable Examples:

0 * * * * DISPLAY=:0 /home/mohsen/Programs/transmission-startup.sh
0 11 * * * /home/mohsen/Programs/transmission-shutdown.sh

Do not forget to chmod +x both of the following files.

-----------

transmission-startup.sh:
#! /bin/bash

/usr/bin/transmission-gtk > /dev/null &
echo $! > /tmp/transmission.pid
exit

-----------

transmission-shutdown.sh:
#! /bin/bash

if [ -f /tmp/transmission.pid ]
then
/bin/kill $(cat /tmp/transmission.pid)
fi

----------------------------------------------

How do I use operators?

An operator allows you to specify multiple values in a field. There are three operators:

The asterisk (*): This operator specifies all possible values for a field. For example, an asterisk in the hour time field would be equivalent to every hour or an asterisk in the month field would be equivalent to every month.

The comma (,) : This operator specifies a list of values, for example: “1,5,10,15,20, 25”.

The dash (-): This operator specifies a range of values, for example, “5-15” days, which is equivalent to typing “5,6,7,8,9,….,13,14,15” using the comma operator.

The separator (/): This operator specifies a step value, for example: “0-23/2” can be used in the hours field to specify command execution every other hour. Steps are also permitted after an asterisk, so if you want to say every two hours, just use */2.

+fdisk (July 8, 2017, 5:03 p.m.)

Merge Partitions:

1- fdisk /dev/sda


2- p
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 6293503 6291456 3G 83 Linux
/dev/sda2 6295550 10483711 4188162 2G 5 Extended


3- Delete both partitions you are going to merge:
d
Partition number (1,2, default 2): 2
Partition 2 has been deleted.

Command (m for help): d
Partition number (1-4): 1


4- n
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 2): 1
First sector (63-1953520064, default: 63): (Choose the default value)
Last sector, +sectors... (Choose the default value)


5- t
Partition number (1-4): 1
Hex code (type L to list codes): 83


6- Make sure you've got what you're expecting:
Command (m for help): p


7- Finally, save it:
Command (m for help): w


8- resize2fs /dev/sda1
Reboot the system, then check if the partitions have been merged by:
fdisk -l
-----------------------------------------------------------

+Removing Swap Space (July 8, 2017, 2:52 p.m.)

1- swapoff /dev/sda5

2- Remove its entry from /etc/fstab

3- Remove the partition using parted:
apt-get install parted
parted /dev/sda
Type "print" to view the existing partitions and determine the minor number of the swap partition you wish to delete.
rm 5 (5 is the number of the partition to delete)
Type "quit" to exit parted.

Done!

Now you need to merge the unused partition space with another partition. You can do it using the "fdisk" note.

+GRUB Timeout (July 3, 2017, 12:30 p.m.)

/etc/default/grub

+KDE - Location of User Wallpapers (July 2, 2017, 9:51 a.m.)

~/.kde/share/wallpapers

+NFS (July 1, 2017, 10:19 a.m.)

NFS is a network-based file system that allows computers to access files across a computer network.
------------------------------------------------------------------------
1- Installation:
apt-get install nfs-kernel-server nfs-common
------------------------------------------------------------------------
2- Server Configuration:
In order to expose a directory over NFS, open the file /etc/exports and attach the following line at the bottom:
/home/mohsen/Audio 10.10.0.32(ro,async,no_subtree_check)

This IP is the client which is going to have access to the shared folder. You can also use IP range.

service nfs-kernel-server restart
------------------------------------------------------------------------
3- Client Configuration:
sudo apt-get install nfs-common

Create a directory named "Audio" and:
mount 10.10.0.192:/home/mohsen/Audio /mnt/Audio/

By running df -h, you can ensure that your operation was successful.
------------------------------------------------------------------------
For MacOS use this command:
sudo mount -o resvport 10.10.0.192:/home/mohsen/Audio /mnt/Audio/

+Trim & Merge MP3 files (June 25, 2017, 2:11 p.m.)

sudo apt-get install sox libsox-fmt-mp3
--------------------------------------------------------------------------------
Trim:
sox infile outfile trim 0 1:06
sox infile outfile trim 1:52 =2:40
--------------------------------------------------------------------------------
Merge:
sox first.mp3 second.mp3 third.mp3 result.mp3
--------------------------------------------------------------------------------
Merge two audio files with a pad:
sox short.ogg -p pad 6 0 | sox - -m long.ogg output.ogg

+Fix Wireless Headphone Problem (June 10, 2017, 5:34 p.m.)

https://gist.github.com/pylover/d68be364adac5f946887b85e6ed6e7ae

+Convert deb to iso (May 14, 2017, 3:37 p.m.)

mkisofs firmware-bnx2_0.43_all.deb > iso

+Change DNS settings (May 9, 2017, 4:27 p.m.)

The DNS servers that the system uses for name resolution are defined in the /etc/resolv.conf file.
That file should contain at least one nameserver line.
Each nameserver line defines a DNS server.
The name servers are prioritized in the order the system finds them in the file.
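A minimal example (the addresses are illustrative, not a recommendation):

```
# /etc/resolv.conf
nameserver 192.168.1.1    # tried first
nameserver 8.8.8.8        # fallback
```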

+Samba - Active Directory Infrastructure (May 7, 2017, 10:31 a.m.)

1- sudo apt-get install samba krb5-user krb5-config winbind libpam-winbind libnss-winbind


2- While the installation is running a series of questions will be asked by the installer in order to configure the domain controller.
First, DESKBIT.LOCAL
Second, deskbit.local
Third, deskbit.local


3- Provision Samba AD DC for Your Domain:
systemctl stop samba-ad-dc.service smbd.service nmbd.service winbind.service
systemctl disable samba-ad-dc.service smbd.service nmbd.service winbind.service


4- Rename or remove samba original configuration. This step is absolutely required before provisioning Samba AD because at the provision time Samba will create a new configuration file from scratch and will throw up some errors in case it finds an old smb.conf file.
sudo mv /etc/samba/smb.conf /etc/samba/smb.conf.initial


5- Start the domain provisioning interactively:
samba-tool domain provision --use-rfc2307 --interactive
(Leave everything as default and set a desired password.)
Here is the last result after the process gets finished:
Server Role: active directory domain controller
Hostname: samba
NetBIOS Domain: DESKBIT
DNS Domain: deskbit.local
DOMAIN SID: S-1-5-21-163349405-2119569559-686966403


6- Rename or remove Kerberos main configuration file from /etc directory and replace it using a symlink with Samba newly generated Kerberos file located in /var/lib/samba/private path:
mv /etc/krb5.conf /etc/krb5.conf.initial
ln -s /var/lib/samba/private/krb5.conf /etc/


7- Start and enable Samba Active Directory Domain Controller daemons:
systemctl start samba-ad-dc.service
systemctl status samba-ad-dc.service (You may get some error logs, like "Cannot contact any KDC for requested realm", which is okay.)
systemctl enable samba-ad-dc.service


8- Use netstat command in order to verify the list of all services required by an Active Directory to run properly.
netstat -tulpn | egrep 'smbd|samba'


9- At this moment Samba should be fully operational at your premises. The highest domain level Samba is emulating should be Windows AD DC 2008 R2.
It can be verified with the help of samba-tool utility.
samba-tool domain level show


10- In order for DNS resolution to work locally, you need to open and edit the network interface settings and point DNS resolution to your Domain Controller: set the dns-nameservers statement to the IP address of your Domain Controller (use 127.0.0.1 for local DNS resolution) and the dns-search statement to your realm.
When finished, reboot your server and take a look at your resolver file to make sure it points back to the right DNS name servers.


11- Test the DNS resolver by issuing queries and pings against some AD DC crucial records, as in the below excerpt. Replace the domain name accordingly.
ping -c3 deskbit.local # Domain Name
ping -c3 samba.deskbit.local # FQDN
ping -c3 samba # Host

+OpenLDAP (May 6, 2017, 6:22 p.m.)

http://www.itzgeek.com/how-tos/linux/debian/install-and-configure-openldap-on-ubuntu-16-04-debian-8.html/2

Description:

OpenLDAP is an open-source implementation of the Lightweight Directory Access Protocol, created by the OpenLDAP Project. It is released under the OpenLDAP Public License and is available for all major Linux operating systems, AIX, Android, HP-UX, OS X, Solaris, z/OS, and Windows.

It works like a relational database in certain ways and can be used to store any information. It is not limited to storing information; it can also serve as the backend database for "single sign-on".
-------------------------------------------------------
Installation:
1- sudo apt-get -y install slapd ldap-utils
During the installation, the installer will prompt you to set a password for the LDAP administrator. Enter a password of your choice.
-------------------------------------------------------
2- Reconfigure OpenLDAP Server:
The installer automatically creates an LDAP directory based on the hostname of your server, which is not what we want, so we are going to reconfigure LDAP. To do that, execute the following command.

sudo dpkg-reconfigure slapd

You will need to answer a series of questions prompted by the reconfiguration tool.
Omit OpenLDAP server configuration? Select "No". (If you select yes, it will just cancel the configuration)

.
.
.

Choose the backend format for LDAP: HDB

Choose whether you want the database to be removed when slapd is purged. Select No.

If you have any old data in the LDAP, you could consider moving the database out of the way before creating a database. Select Yes.

You have the option to allow or disable LDAPv2 protocol. Select No.
-------------------------------------------------------
3- Verify the LDAP:
sudo netstat -antup | grep -i 389
-------------------------------------------------------
4- Generate base.ldif file for your domain:
vim /root/base.ldif

dn: ou=People,dc=deskbit,dc=local
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=deskbit,dc=local
objectClass: organizationalUnit
ou: Group
-------------------------------------------------------
5- Build the directory structure:
ldapadd -x -W -D "cn=admin,dc=deskbit,dc=local" -f /root/base.ldif
-------------------------------------------------------
6- Add LDAP Accounts:
Let’s create an LDIF (LDAP Data Interchange Format) file for a new user “ldapuser”:
vim /root/ldapuser.ldif

dn: uid=ldapuser,ou=People,dc=deskbit,dc=local
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldapuser
uid: ldapuser
uidNumber: 9999
gidNumber: 100
homeDirectory: /home/ldapuser
loginShell: /bin/bash
gecos: Test LdapUser
userPassword: {crypt}x
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
-------------------------------------------------------
7- Use the ldapadd command to create a new user “ldapuser” in OpenLDAP directory:
ldapadd -x -W -D "cn=admin,dc=deskbit,dc=local" -f /root/ldapuser.ldif
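After the entry is created, you may want to set a real password and verify the account. A sketch using the same admin DN as above (ldappasswd prompts for the admin bind password and replaces the {crypt}x placeholder):

```shell
BASE_DN="dc=deskbit,dc=local"
USER_DN="uid=ldapuser,ou=People,${BASE_DN}"

# Set a real password for the new account:
ldappasswd -x -W -D "cn=admin,${BASE_DN}" -S "$USER_DN"

# Confirm the entry is searchable:
ldapsearch -x -b "$BASE_DN" "(uid=ldapuser)" cn uidNumber homeDirectory
```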
-------------------------------------------------------


+Date and Time From Command Prompt (May 3, 2017, 1:42 p.m.)

Display Current Date and Time:
$ date

----------------------------------------------------

Display The Hardware Clock (RTC):

# hwclock -r

OR show it in Coordinated Universal time (UTC):
# hwclock --show --utc

----------------------------------------------------

Set Date Command Example:
date -s "2 OCT 2006 18:00:00"

OR
date --set="2 OCT 2006 18:00:00"

----------------------------------------------------

Set Time Examples:

date +%T -s "10:13:13"

Use %p locale’s equivalent of either AM or PM, enter:
# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"

----------------------------------------------------

How do I set the Hardware Clock to the current System Time?

Use the following syntax:
# hwclock --systohc

OR
# hwclock -w

----------------------------------------------------

A note about systemd based Linux system

On a systemd-based system you need to use the timedatectl command to set or view the current date and time. Most modern distributions, such as RHEL/CentOS 7.x+, Fedora, Debian, Ubuntu, and Arch Linux, are systemd-based and therefore use the timedatectl utility. Note that the date and hwclock commands above should still work on modern systems too.

----------------------------------------------------

timedatectl: Display the current date and time:

$ timedatectl

----------------------------------------------------

Change the current date using the timedatectl command:
# timedatectl set-time YYYY-MM-DD

OR
$ sudo timedatectl set-time YYYY-MM-DD

For example set the current date to 2015-12-01 (1st, Dec, 2015):
# timedatectl set-time '2015-12-01'
# timedatectl

----------------------------------------------------

To change both the date and time, use the following syntax:
# timedatectl set-time '2015-11-23 08:10:40'
# date

----------------------------------------------------

To set the current time only:

The syntax is:
# timedatectl set-time HH:MM:SS
# timedatectl set-time '10:42:43'
# date

----------------------------------------------------

Set the time zone using timedatectl command:

To see the list of all available time zones, enter:
$ timedatectl list-timezones
$ timedatectl list-timezones | more
$ timedatectl list-timezones | grep -i asia
$ timedatectl list-timezones | grep America/New

To set the time zone to ‘Asia/Kolkata’, enter:
# timedatectl set-timezone 'Asia/Kolkata'

Verify it:
# timedatectl

----------------------------------------------------

How do I synchronize the system clock with a remote server using NTP?

# timedatectl set-ntp yes

Verify it:
$ timedatectl

----------------------------------------------------

For changing the timezone:
dpkg-reconfigure tzdata

----------------------------------------------------

+OpManager (May 3, 2017, 10:37 a.m.)

1- apt-get install iputils-ping

2- Download OpManager for linux:
https://www.manageengine.com/network-monitoring/download.html
or another earlier version from the archive link:
https://archives.manageengine.com/opmanager/

3-
chmod a+x ManageEngine_OpManager_64bit.bin
./ManageEngine_OpManager_64bit.bin -console
cd /opt/ManageEngine/OpManager/bin
./StartOpManagerServer.sh

+SNMP (May 1, 2017, 3:51 p.m.)

1- apt-get install snmp snmpd

2- /etc/snmp/snmpd.conf
Edit to:
agentAddress udp:0.0.0.0:161
view systemonly included .1

Add to the bottom:
com2sec readonly 10.10.0.198 public
com2sec readonly 10.10.0.199 public
com2sec readonly localhost public

3- /etc/init.d/snmpd restart
-------------------------------------------------------------------------
For checking whether snmpd is running, and which IP/port it is listening on, you can use:
netstat -apn | grep snmpd
-------------------------------------------------------------------------
Test the Configuration with an SNMP Walk:
snmpwalk -v1 -c public localhost
snmpwalk -v1 -c public 10.10.0.192
-------------------------------------------------------------------------
For getting information based on OID:
snmpwalk -v1 -c public localhost iso.3.6.1.2.1.1.1
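For reading a single value instead of walking a whole subtree, snmpget works with the same community string. A sketch using two standard MIB-2 scalars (the trailing .0 selects the scalar instance):

```shell
SYSDESCR_OID="1.3.6.1.2.1.1.1.0"   # sysDescr: system description string
SYSUPTIME_OID="1.3.6.1.2.1.1.3.0"  # sysUpTime: time since the agent started

snmpget -v1 -c public localhost "$SYSDESCR_OID" "$SYSUPTIME_OID"
```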

The OID Tree:
http://www.oidview.com/mibs/712/LANART-AGENT.html
-------------------------------------------------------------------------

+SPICE (April 29, 2017, 1:21 p.m.)

What is SPICE?
SPICE (Simple Protocol for Independent Computing Environments) is a communication protocol for virtual environments. It allows users to see the console of virtual machines (VMs) from anywhere via the Internet. It follows a client-server model: Virtualization Station acts as the host, and users connect to VMs via the SPICE client.
--------------------------------------------------------------------
remote-viewer spice://srv1:5908
remote-viewer "spice://srv1:5901?password=1362913207771306286"
--------------------------------------------------------------------
SPICE Tools:
https://www.spice-space.org/download.html
--------------------------------------------------------------------
To compile SPICE agent on Linux, download the agent from the following link:
https://www.spice-space.org/download/releases/spice-vdagent-0.17.0.tar.bz2

Install the following packages:
1- apt install libglib2.0-dev libdrm-dev sudo libxxf86vm-dev libxt-dev xutils-dev flex bison xcb libx11-xcb-dev libxcb-glx0 libxcb-glx0-dev xorg-dev libxcb-dri2-0-dev libasound2-dev libdbus-1-dev

2- Extract the already downloaded agent file, and:
./configure
make
sudo make install
--------------------------------------------------------------------
SPICE client on Ubuntu:
1- sudo apt install spice-vdagent
2- Create a file /etc/default/spice-vdagentd with the value:
SPICE_VDAGENTD_EXTRA_ARGS=-X
--------------------------------------------------------------------

+Extract ISO files (April 26, 2017, 12:28 p.m.)

sudo mount -o loop an_iso_file.iso /home/mohsen/Temp/foo/
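If mounting is not an option (e.g. no root access), the ISO contents can also be unpacked directly. A sketch assuming 7z (from the p7zip-full package) or bsdtar (from libarchive-tools) is installed:

```shell
ISO="an_iso_file.iso"
OUT="/home/mohsen/Temp/foo"

# Extract into a plain directory; no mount or root needed:
7z x "$ISO" -o"$OUT"
# or equivalently:
bsdtar -xf "$ISO" -C "$OUT"
```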

+List all IPs in the connected network (April 21, 2017, 1:53 p.m.)

sudo apt-get install arp-scan
sudo arp-scan --interface=eth0 --localnet
---------------------------------------------------------------
sudo apt-get install nmap
nmap -sn 192.168.21.0/24

+reprepro (March 4, 2017, 11:46 a.m.)

https://www.howtoforge.com/setting-up-an-apt-repository-with-reprepro-and-nginx-on-debian-wheezy
-------------------------------------------------------------------------
1-Install GnuPG and generate a GPG key for Signing Packages:
apt-get install gnupg dpkg-sig rng-tools
-------------------------------------------------------------------------
2-Open /etc/default/rng-tools:
vim /etc/default/rng-tools

and make sure you have the following line in it:
[...]
HRNGDEVICE=/dev/urandom
[...]

Then start rng-tools:
/etc/init.d/rng-tools start
-------------------------------------------------------------------------
3-Generate your key:
gpg --gen-key
-------------------------------------------------------------------------
4-Install and configure reprepro:
apt-get install reprepro

Let's use the directory /var/www/repo as the root directory for our repository. Create the directory /var/www/repo/conf:
mkdir -p /var/www/repo/conf
-------------------------------------------------------------------------
5-Let's find out about the key we have created in step 3:
gpg --list-keys

Our public key is D753ED90. We have to use this from now on.
-------------------------------------------------------------------------
6-Create the file /var/www/repo/conf/distributions as follows:
vim /var/www/repo/conf/distributions
-------------------------------------------------------------------------
7-The address of our apt repository will be reprepro.deskbit.local, so we use this in the Origin and Label lines. In the SignWith line, we add our key ID (D753ED90); drop the "2048R/" prefix:

Origin: reprepro.deskbit.local
Label: reprepro.deskbit.local
Codename: stable
Architectures: amd64
Components: main
Description: Deskbit Proprietary Softwares
SignWith: D753ED90
-------------------------------------------------------------------------
8-Create the (empty) file /var/www/repo/conf/override.stable:
touch /var/www/repo/conf/override.stable
-------------------------------------------------------------------------
9-Then create the file /var/www/repo/conf/options with this content:
verbose
ask-passphrase
basedir /var/www/repo
-------------------------------------------------------------------------
10-To sign our deb packages with our key, we need the dpkg-sig package:
dpkg-sig -k D753ED90 --sign builder /usr/src/my-packages/*.deb
-------------------------------------------------------------------------
11-Now we import the deb packages into our apt repository:
cd /var/www/repo
reprepro includedeb stable /usr/src/my-packages/*.deb
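When publishing an updated build later, the repository can be inspected and old packages dropped first. A sketch; `my-package` is a placeholder name:

```shell
PKG="my-package"   # placeholder; use your actual package name
cd /var/www/repo

# List everything currently published under the stable codename:
reprepro list stable

# Remove the old version before importing the new .deb:
reprepro remove stable "$PKG"
```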
-------------------------------------------------------------------------
12-Configuring nginx:
We need a webserver to serve our apt repository. In this example, I'm using an nginx webserver.

server {
listen 80;
server_name apt.example.com;

access_log /var/log/nginx/packages-error.log;
error_log /var/log/nginx/packages-error.log;

location / {
root /var/www/repo;
index index.html;
autoindex on;
}

location ~ /(.*)/conf {
deny all;
}

location ~ /(.*)/db {
deny all;
}
}
***************************************************************************
OR for Apache:

<VirtualHost *:80>
ServerName reprepro.deskbit.local
DocumentRoot /var/www/repo
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
-------------------------------------------------------------------------
13-Let's create a GPG key for the repository:
gpg --armor --output /var/www/repo/repo.deskbit.io.gpg.key --export D753ED90
-------------------------------------------------------------------------
14-To use the repository, place the following line in your /etc/apt/sources.list:
vim /etc/apt/sources.list

[...]
deb http://repo.deskbit.io/ stable main
[...]
-------------------------------------------------------------------------
15-If you want this repository to always have precedence over other repositories, you should have this line right at the beginning of your /etc/apt/sources.list and add the following entry to /etc/apt/preferences:

vim /etc/apt/preferences:

Package: *
Pin: origin repo.deskbit.io
Pin-Priority: 1001
-------------------------------------------------------------------------
16-Before we can use the repository, we must import its key:
wget -O - -q http://repo.deskbit.io/repo.deskbit.io.gpg.key | apt-key add -

apt-get update
-------------------------------------------------------------------------

+Packages to Install (Feb. 24, 2017, 10:15 a.m.)

pavucontrol proxychains android-tools-adb android-tools-fastboot gimp-plugin-registry gimp gir1.2-keybinder-3.0 quodlibet python3-dev python-dev libjpeg-dev libfreetype6 libfreetype6-dev zlib1g-dev zip python-setuptools vim postgresql-server-dev-all postgresql libpq-dev curl geany python-pip tmux git virtaal gdebi-core gdebi smplayer yakuake vlc gparted krita transmission-gtk htop

+PulseAudio Volume Control (Jan. 25, 2017, 9:12 a.m.)

pavucontrol

+Find Gateway IP (Jan. 8, 2017, 2:49 p.m.)

ip route | grep default

+Faster grep (Jan. 7, 2017, 4:59 p.m.)

1- Install `parallel`
sudo apt-get install parallel

2- Begin search:
find . -type f | parallel -k -j150% -n 1000 -m grep -H -n "keyring doesn\'t exist" {}

+OpenCV - Facial Keypoint Detection (Sept. 24, 2016, 10:58 a.m.)

As computer vision engineers and researchers we have been trying to understand the human face since the very early days. The most obvious application of facial analysis is Face Recognition. But to be able to identify a person in an image we first need to find where in the image a face is located. Therefore, face detection — locating a face in an image and returning a bounding rectangle / square that contains the face — was a hot research area.

Once you have a bounding box around the face, the obvious research problem is to see if you can find the location of different facial features ( e.g. corners of the eyes, eyebrows, and the mouth, the tip of the nose etc ) accurately. Facial feature detection is also referred to as “facial landmark detection”, “facial keypoint detection” and “face alignment” in the literature, and you can use those keywords in Google for finding additional material on the topic.

+Check outgoing port (Sept. 14, 2016, 10:27 p.m.)

Use one of the tools to check if the outgoing VPS port is blocked:

curl portquiz.net:80
OR
telnet portquiz.net 80
OR
nc -v portquiz.net 80
OR
wget -qO- portquiz.net:80

+Write ISO file to DVD in terminal (Sept. 3, 2016, 9:13 p.m.)

Using this command, check where the DVD Writer is mounted: (/dev/sr0)
inxi -d

And using this command, start writing on the DVD:
wodim -eject -tao speed=8 dev=/dev/sr0 -v -data Downloads/linuxmint-18-kde-64bit-beta.iso

+See Linux Version (Aug. 15, 2016, 3:26 p.m.)

cat /etc/os-release

cat /etc/*release

uname -a

lsb_release -a

+Install OpenCV 3.0 with Python 3.4+ (Aug. 3, 2016, 4:31 p.m.)

sudo apt-get install libopenexr-dev
Install the above package in addition to the packages the link says to; it is not mentioned in the documentation.

First try doing the way the tutorial links in github says:
https://github.com/chudur-budur/opencv#notes-from-chudur-budur


If you encounter problems, you can try the following notes too.
The following caused errors about ffmpeg libraries not being found, but the link above solved it.
------------------------------------------------------------
1- sudo apt-get install build-essential cmake git pkg-config libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libgtk2.0-dev libatlas-base-dev gfortran python3.4-dev libgtk-3-dev libgstreamer0.10-dev libgstreamer-plugins-base1.0-dev libv4l-dev libopencv-dev build-essential cmake git libgtk2.0-dev pkg-config python-dev python-numpy libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev libtiff4-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libxine-dev libtbb-dev libqt4-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils unzip libavresample-dev yasm libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libx264-dev libxvidcore-dev libxvidcore4

ln -s /usr/include/libv4l1-videodev.h /usr/include/linux/videodev.h

*********************
I think this part is not needed. It was supposed to help fix ffmpeg errors when building OpenCV, but it did not:

cd ~/MyTemp/
wget http://ffmpeg.org/releases/ffmpeg-3.1.tar.gz
tar xvf ffmpeg-3.1.tar.gz
cd ffmpeg-3.1
./configure --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab
make -j4
sudo make install
*********************

2- Create a virtualenv and activate it

3- pip install numpy

4- Build and install OpenCV 3.0 with Python 3.4+ bindings:
cd ~/MyTemp
git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 3.0.0 (Refer to http://opencv.org/downloads.html to see which version to use instead of 3.0.0; as of right now it's 3.1.0.)

5- We’ll also need to grab the opencv_contrib repo as well:
cd ~/MyTemp
git clone https://github.com/Itseez/opencv_contrib.git
cd opencv_contrib
git checkout 3.0.0
Again, make sure that you checkout the same version for opencv_contrib that you did for opencv above, otherwise you could run into compilation errors.

6- Time to setup the build:
cd ~/MyTemp/opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/MyTemp/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON ..

7- make -j8

8-

+PyCharm / IntelliJ IDEA allows only two spaces (July 26, 2016, 12:37 p.m.)

In settings search for `EditorConfig` and disable the plugin.

+Enable/Disable Bluetooth (July 26, 2016, 10:42 a.m.)

sudo rfkill block bluetooth
sudo update-rc.d bluetooth disable
service bluetooth status
--------------------------------------------------------------------
sudo rfkill unblock bluetooth
sudo update-rc.d bluetooth enable
service bluetooth status

+Identify Computer Model (July 23, 2016, 10:48 a.m.)

sudo grep "" /sys/class/dmi/id/[bpc]*

+Error: Fixing recursive fault but reboot is needed! (July 17, 2016, 9:49 a.m.)

sudo nano /etc/default/grub

Change:
GRUB_CMDLINE_LINUX_DEFAULT
GRUB_CMDLINE_LINUX

To:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX="acpi=off"

sudo update-grub2

+No partitions found while installing Linux (July 15, 2016, 9:28 p.m.)

1- Boot up linux with Live CD (the installation disk)
2- sudo su
3- sudo apt-get install gdisk
4- sudo gdisk /dev/sda
5- Select (1) for MBR
6- Type x for expert stuff
7- Type z to zap the GPT data
8- Type y to proceed destroying GPT data
9- Type n in order to not lose MBR data

Now restart the installation procedure.

+VMware Workstation (June 21, 2016, 5:37 p.m.)

Using this address, find the bundle file in "/linux/core/":
http://softwareupdate.vmware.com/cds/vmw-desktop/ws/

Extract the file (if it's a tar file) and run the bundle file with root permission:
# bash ./VMware-Workstation-12.5.2-4638234.x86_64.bundle
------------------------------------------------------------------------------
After installation, you'll need a serial number. Google the version and you'll find it finally ;-)
For this current version (12.5.2) the serial number is:
5A02H-AU243-TZJ49-GTC7K-3C61N
------------------------------------------------------------------------------

+Remove invalid characters from filenames (May 29, 2016, 8:18 a.m.)

find . -exec rename 's/[^\x00-\x7F]//g' "{}" \;

+PyCharm Regex (May 23, 2016, 2:07 a.m.)

https://www.jetbrains.com/help/pycharm/2016.1/regular-expression-syntax-reference.html

{8}"name_ru": ".+?",\n
-----------------------------------------
Search for any occurrences starting with a double quote:
.?"

+SASL authentication for IRC network using freenode (April 14, 2016, 7:36 p.m.)

https://userbase.kde.org/Konversation/Configuring_SASL_authentication

chat.freenode.net
port: 6697
Make sure to use "Secure Connection (SSL)"

+PouchDB (April 13, 2016, 9:54 a.m.)

Installation:
sudo npm -g install pouchdb
sudo npm -g install angular-pouchdb
ionic plugin add cordova-sqlite-storage
--------------------------------------------------------------------------------------------
There is a Chrome extension called PouchDB Inspector that allows you to view the contents of the database in the Chrome Developer Tools.
https://chrome.google.com/webstore/detail/pouchdb-inspector/hbhhpaojmpfimakffndmpmpndcmonkfa?hl=en
--------------------------------------------------------------------------------------------
You cannot use the PouchDB Inspector if you loaded the app with ionic serve --lab, because it uses iframes to display the iOS and Android views. The PouchDB Inspector needs to access PouchDB via window.PouchDB, and it can't do that when the window is inside an <iframe>.
---------------------------------------------------------------------------------------------
Keep in mind that when you're testing your Ionic app on a desktop browser it will use an IndexedDB or WebSQL adapter, depending on which browser you use. If you'd like to know which adapter is used by PouchDB, you can look it up:
var db = new PouchDB('birthdays');
console.log(db.adapter);
-----------------------------------------------------------------------------
On a mobile device the adapter will be displayed as websql even if it is using SQLite, so to confirm that it is actually using SQLite you'll have to do this (see answer on StackOverflow):

var db = new PouchDB('birthdays');
db.info().then(console.log.bind(console));
This will output an object with a sqlite_plugin set to true or false.
-----------------------------------------------------------------------------------------
There are 2 ways to insert data, the post method and the put method. The difference is that if you add something with the post method, PouchDB will generate an _id for you, whereas if you use the put method you're generating the _id yourself.
---------------------------------------------------------------------------------------------
SQLite plugin for Cordova/PhoneGap

On Cordova/PhoneGap, the native SQLite database is often a popular choice, because it allows unlimited storage (compared to IndexedDB/WebSQL storage limits). It also offers more flexibility in backing up and pre-loading databases, because the SQLite files are directly accessible to app developers.

Luckily, there is a SQLite Plugin (also known as SQLite Storage) that accomplishes exactly this. If you include this plugin in your project, then PouchDB will automatically pick it up based on the window.sqlitePlugin object.

However, this only occurs if the adapter is 'websql', not 'idb' (e.g. on Android 4.4+). To force PouchDB to use the WebSQL adapter, you can do:
var db = new PouchDB('myDB', {adapter: 'websql'});

If you are unsure whether PouchDB is using the SQLite Plugin or not, just run:
db.info().then(console.log.bind(console));

This will print some database information, including the attribute sqlite_plugin, which will be true if the SQLite Plugin is being used.
---------------------------------------------------------------------------------------------

+KDE Menu Editor (April 2, 2016, 9:14 a.m.)

kdemenuedit

+Batch rename files (March 11, 2016, 10:53 a.m.)

for file in *.html
do
mv "$file" "${file%.html}.txt"
done
----------------------------------------------
for file in *
do mv "$file" "$file.mp3"
done
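The same loop pattern works with any of the shell's ${var...} expansions. A self-contained sketch that strips a prefix instead of changing the extension:

```shell
# Demo in a scratch directory: photo_001.jpg -> 001.jpg, etc.
mkdir -p /tmp/rename-demo
cd /tmp/rename-demo
touch photo_001.jpg photo_002.jpg

for file in photo_*.jpg
do
    mv "$file" "${file#photo_}"   # ${file#prefix} removes the leading "photo_"
done

ls
```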

+Thinkpad Lenovo Bluetooth Driver (Feb. 15, 2016, 10:12 a.m.)

http://askubuntu.com/questions/617260/bluetooth-does-not-detect-devices-on-lenovo-with-12-04/617284
------------------------------------------------------------------------------------
sudo apt-get install build-essential linux-headers-generic
wget https://github.com/lwfinger/rtl8723au_bt/archive/troy.zip
unzip troy.zip
cd rtl8723au_bt-troy
make
sudo make install

+Genymotion (April 10, 2016, 7:22 p.m.)

1-apt-get install libdouble-conversion1

2-Download `Ubuntu 14.10 and older, Debian 8` genymotion version from the following link:
https://www.genymotion.com/download/
The downloaded file name should be `genymotion-2.8.0-linux_x64.bin`.

3-sudo bash ./genymotion-2.8.0-linux_x64.bin

4-For running it, use this command:
/opt/genymobile/genymotion/genymotion

5-You should already have the Genymotion VirtualBox (ova) files. If so, you need to change the path of VirtualBox virtual devices in settings to the location of your files.
Settings --> Virtualbox (tab) --> Browse

Hint:
After this step I still could not see the list of virtual devices in the Genymotion program. I imported the ova files in the VirtualBox program, and they got displayed in Genymotion too.

+ADB (Nov. 2, 2015, 5:04 p.m.)

sudo apt-get install android-tools-adb android-tools-fastboot

+Gimp Plugin (Nov. 2, 2015, 5:03 p.m.)

sudo apt-get install gimp-plugin-registry

+Diff over SSH (Oct. 12, 2015, 10:40 a.m.)

diff /home/mohsen/Projects/Shetab/nespresso/nespresso/urls.py <(ssh shetab@buynespresso.ir 'cat /home/shetab/websites/nespresso/nespresso/urls.py')
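The `<( ... )` part is bash process substitution: diff reads the ssh output as if it were a file. The same trick works with two remote hosts, or purely locally, e.g.:

```shell
# Local demonstration of process substitution with diff:
printf 'a\nb\n' > /tmp/left.txt
printf 'a\nc\n' > /tmp/right.txt

# diff exits non-zero when the inputs differ, which is expected here:
diff <(cat /tmp/left.txt) <(cat /tmp/right.txt) || true

# Comparing the same file on two remote hosts has the same shape:
# diff <(ssh host1 'cat /etc/hosts') <(ssh host2 'cat /etc/hosts')
```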

+Handbrake in Mint (Sept. 14, 2015, 5:02 p.m.)

sudo add-apt-repository ppa:stebbins/handbrake-snapshots
sudo apt-get update
sudo apt-get install handbrake

+Trim/Cut video files (Sept. 14, 2015, 2:03 p.m.)

ffmpeg -i video.mp4 -ss 10 -t 10 -c copy cut2.mp4

The first 10 is the start time in seconds:
10 ==> 10 seconds from start
1:10 ==> One minute and 10 seconds
1:10:10 ==> One hour, one minute and ten seconds


The second 10 is the duration.
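Recent ffmpeg versions also accept -to for an absolute end time, which is often easier than computing a duration. A sketch (with -c copy no re-encoding happens, so the cut snaps to the nearest keyframe and the edges may be slightly off):

```shell
START="1:00"   # where the cut begins
END="2:30"     # absolute end time (not a duration, unlike -t)

ffmpeg -i video.mp4 -ss "$START" -to "$END" -c copy cut.mp4
```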

+Retrieve Video File Information (Sept. 14, 2015, 12:02 p.m.)

mplayer -vo null -ao null -frames 0 -identify test.mp4

+Routing (Aug. 22, 2015, 4:58 p.m.)

ip route add {dst ip} via {gateway ip} dev ethx src {src ip}
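A concrete instance of the template above, with hypothetical addresses (requires root):

```shell
DST="10.20.0.0/24"   # destination network (hypothetical)
GW="192.168.1.1"     # gateway that knows how to reach it

# Send traffic for DST through GW on eth0:
ip route add "$DST" via "$GW" dev eth0

# Inspect or undo the route:
ip route show "$DST"
ip route del "$DST"
```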

+Change Hostname (Aug. 6, 2015, 11:14 p.m.)

nano /etc/hostname
/etc/init.d/hostname.sh start

nano /etc/hosts
service hostname restart
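On systemd-based systems the same change is a single command; hostnamectl rewrites /etc/hostname and updates the running hostname together (remember to adjust /etc/hosts as above so the new name still resolves):

```shell
NEW_NAME="my-server"   # hypothetical hostname

# Show the current hostname info:
hostnamectl

# Set the static hostname (also writes /etc/hostname):
sudo hostnamectl set-hostname "$NEW_NAME"
```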

+Get public IP address and email it (July 25, 2015, 1:17 p.m.)

Getting public IP address in bash:

wget -qO- ifconfig.me/ip
OR
curl ifconfig.me/ip
------------------------------------------------------------
Getting it and emailing it (copy this script and paste it in a file with `.sh` extension):
#!/bin/bash
IPADDRESS=$(wget -qO- ifconfig.me/ip)
# IPADDRESS=$(curl ifconfig.me)
if [[ "${IPADDRESS}" != $(cat ~/.current_ip) ]]
then
echo "Your new IP address is ${IPADDRESS}" |
mail -s "IP address change" mohsen@mohsenhassani.com
echo ${IPADDRESS} >|~/.current_ip
fi

+LibreOffice - Add/Remove RTL and LTR buttons to the formatting toolbar (July 8, 2015, 7:41 p.m.)

You have to enable Complex Text Layout (CTL) support:
Tools → Options → Language Settings → Languages
Enable `Complex Text Layout (CTL)`
Restart libreoffice.

+Installing Irancell 3G-4G Modem Driver (July 8, 2015, 10:53 a.m.)

1-sudo apt-get install g++-multilib libusb-dev libusb-0.1-4:i386

2-Connect the modem and copy the `linuxdrivers.tar.gz` file to your computer, extract it and cd to the directory.

3-CD to directory `drivers` and using the `install_driver` file, install the driver:
sudo ./install_driver

4-Create a shortcut to the file `lcdoshift.sh` to make the connection procedure easier:
ln -s /home/mohsen/Programs/linuxdrivers/drivers/lcdoshift.sh .

5-To establish a connection use the command:
sudo ~/lcdoshift.sh
--------------------------------------------------------------------------
And this is the output:

Looking for default devices ...
Found default devices (1)
Accessing device 007 on bus 003 ...

USB description data (for identification)
-------------------------
Manufacturer: Longcheer
Product: LH9207
Serial No.:
-------------------------
Looking for active driver ...
No driver found. Either detached before or never attached
Setting up communication with interface 0 ...
Trying to send the message to endpoint 0x01 ...
OK, message successfully sent
-> Run lsusb to note any changes. Bye.

sleep 3
ifconfig ecm0 up
dhclient ecm0
mohsen drivers #

+Installing KDE and/or Gnome in Debian (June 9, 2015, 9:22 a.m.)

Install KDE in debian

#apt-get install x-window-system-core kde

You'll probably also want to install KDM, for the KDE-style login screen.

#apt-get install kdm

Starting KDE

To start KDE, type

#startkde

you may need to start X-Server if it is not running, to start it run

#startx

To start KDE each time (you probably want this) you'll need to edit your startup files. If you use KDM or XDM to log in, edit .xsession, otherwise edit .xinitrc or .Xclients.

Install Gnome in Debian

#apt-get install gnome

This will install additional software (gnome-office, evolution) that you may or may not want.

Custom

For a smaller set of apps, you can also do

# aptitude install gnome-desktop-environment

A set of additional productivity apps will be installed by

# aptitude install gnome-fifth-toe

+Quodlibet Multimedia Keys (June 3, 2015, 9:12 p.m.)

apt-get install gir1.2-keybinder-3.0

+Connecting to wifi network through command line (June 3, 2015, 6:13 p.m.)

1-sudo iwlist wlan0 scan
2-sudo iwconfig wlan0 essid "THE SSID"
3-iwconfig wlan0 key s:password
4-sudo dhclient wlan0

+Root Password Recovery (May 27, 2015, 1:24 p.m.)

rw init=/bin/bash

At the GRUB menu, press `e`, append the above to the line starting with `linux`, and boot with Ctrl-x. The system starts a root shell with the root filesystem mounted read-write; run `passwd` to set a new root password, then `sync` and force a reboot.

+Locale Settings (Feb. 5, 2016, 1:40 a.m.)

This first solution has worked for me, so before checking the other solutions, try this one first!

nano /etc/environment
LC_ALL=en_US.UTF-8
LANG=en_US.UTF-8

Restart server and it should be fixed now!
------------------------------------------------------------------------
locale-gen en_US.UTF-8

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
dpkg-reconfigure locales
------------------------------------------------------------------------
This is a common problem if you are connecting remotely, so the solution is to not forward your locale. Edit /etc/ssh/ssh_config and comment out SendEnv LANG LC_* line.

+Proxy (May 10, 2015, 3:48 p.m.)

1-sudo apt-get install proxychains
2-ssh -D 1080 -fN root@192.168.1.3
3-nano /etc/proxychains.conf
4-At the bottom of the file:
[ProxyList]
# add proxy here
# defaults set to "tor"
# socks4 127.0.0.1 9050
socks5 127.0.0.1 1080

5-sudo proxychains synaptic

6-If you did everything as a normal user or as the superuser, keep in mind that in the terminal you should use the proxy as the same user. That is, if you ran (ssh -D ...) as root, that port is only available to root.

+Recover/Restore Firefox Master Password (April 19, 2015, 9:58 a.m.)

For resetting copy this url in the address-bar:
chrome://pippki/content/resetpassword.xul
---------------------------------------------------------------------------------------------

+TV Card Driver (April 17, 2015, 7:08 p.m.)

http://nucblog.net/2014/11/installing-media-build-drivers-for-additional-tv-tuner-support-in-linux/
1-sudo apt-get install libproc-processtable-perl git libc6-dev
2-git clone git://linuxtv.org/media_build.git
3-cd media_build
4-./build
5-sudo make install
6-apt-get install me-tv kaffeine
7-Reboot to load the driver (I don't know the module name for modprobe yet).
--------------------------------------------------------------------------------------------------
Scan channels using Kaffeine:
1-Open Kaffeine
2-From `Television` menu, choose `Configure Television`.
3-From `Device 1` tab, from `Source` option, choose `Autoscan`
4-From `Television` menu choose `Channels`
5-Click on `Start Scan` and after the scan procedure is done, select all channels from the side panel and click on `Add Selected` to add them to your channels.
--------------------------------------------------------------------------------------------------
Scan channels using Me-TV
1-Open Me-TV
2-When the scan dialog opens, choose `Czech Republic` from `Auto scan`.

+PYTHONHOME and PYTHONPATH (April 4, 2015, 3:29 p.m.)

For most installations, you should not set these variables since they are not needed for Python to run. Python knows where to find its standard library.

The only reason to set PYTHONPATH is to maintain directories of custom Python libraries that you do not want to install in the global default location (i.e., the site-packages directory).

PYTHONHOME actually points to the directory of the standard library by default (e.g. /usr/local/lib/pythonXX).
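
A quick way to see PYTHONPATH in action (assumes python3 is installed; `mymodule` and `greet` are made-up names for the demonstration):

```shell
# A directory of "custom libraries" outside site-packages.
libdir="$(mktemp -d)"
printf '%s\n' 'def greet():' '    return "hello"' > "$libdir/mymodule.py"

# With PYTHONPATH pointing at it, Python can import the custom module.
out="$(PYTHONPATH="$libdir" python3 -c 'import mymodule; print(mymodule.greet())')"
echo "$out"
```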

+Environment Variable (April 3, 2015, 8:46 p.m.)

www.cyberciti.biz/faq/set-environment-variable-linux/
---------------------------------------------------------------------------------------------
Commonly Used Shell Variables:
http://bash.cyberciti.biz/guide/Variables#Commonly_Used_Shell_Variables
---------------------------------------------------------------------------------------------
Use `set` command to display current environment
---------------------------------------------------------------------------------------------
The $PATH defines the search path for commands. It is a colon-separated list of directories in which the shell looks for commands.
---------------------------------------------------------------------------------------------
You can display the value of a variable using printf or echo command:
$ echo "$HOME"
---------------------------------------------------------------------------------------------
You can modify each environmental or system variable using the export command. Set the PATH environment variable to include the directory where you installed the bin directory with perl and shell scripts:

export PATH=${PATH}:/home/vivek/bin

OR

export PATH=${PATH}:${HOME}/bin
--------------------------------------------------------------------------------------------
You can set multiple paths as follows:
export ANT_HOME=/path/to/ant/dir
export PATH=${PATH}:${ANT_HOME}/bin:${JAVA_HOME}/bin
---------------------------------------------------------------------------------------------
How Do I Make All Settings permanent?
The ~/.bash_profile ($HOME/.bash_profile) or ~/.profile file is executed when you log in on the console or remotely using ssh. Type the following command to edit the ~/.bash_profile file, enter:
$ vi ~/.bash_profile
Append the $PATH settings, enter:
export PATH=${PATH}:${HOME}/bin
Save and close the file.
---------------------------------------------------------------------------------------------------------

+subprocess installed post-installation script returned error exit status 1 (March 19, 2015, 12:30 a.m.)

Error:

Setting up python-gst0.10-dev (0.10.22-3ubuntu2) ...
dpkg: error processing package python-gst0.10-dev (--configure):
subprocess installed post-installation script returned error exit status 1
E: Sub-process /usr/bin/dpkg returned an error code (1)
---------------------------------------------------------------------------------------------------
Solve:
sh -x /var/lib/dpkg/info/python-gst0.10-dev.postinst configure 0.10.22-3ubuntu2
---------------------------------------------------------------------------------------------------
Returned:

+ set -e
+ pyversions --default
+ PYTHON_DEFAULT=pyversions: /usr/bin/python does not match the python default version. It must be reset to point to python2.7
---------------------------------------------------------------------------------------------------
Solve:
ln -sf /usr/bin/python2.7 /usr/bin/python
---------------------------------------------------------------------------------------------------

+Ubuntu Sources List Generator (March 18, 2015, 3:52 p.m.)

http://repogen.simplylinux.ch/

http://www.ubuntuupdates.org/ppa/mint_main

+Delete special files recursively (March 7, 2015, 2:36 p.m.)

Preview the matches first:
find . -name "*.bak" -type f

Then delete them:
find . -name "*.bak" -type f -delete
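
A self-contained demonstration of the preview-then-delete habit, using temporary files:

```shell
# Build a small tree with two .bak files and one file to keep.
d="$(mktemp -d)"
mkdir -p "$d/sub"
touch "$d/a.bak" "$d/sub/b.bak" "$d/keep.txt"

# Preview what would be removed:
find "$d" -name "*.bak" -type f

# Then delete only those matches:
find "$d" -name "*.bak" -type f -delete
```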

+How to stop services / programs from starting automatically (March 3, 2015, 11:27 a.m.)

update-rc.d -f apache2 remove

+Truetype Fonts (Arial Font) (Feb. 22, 2015, 1:10 p.m.)

http://www.cyberciti.biz/faq/howto-debian-install-use-ms-windows-truetype-fonts-under-xorg/
---------------------------------------------------------------------------------------------
apt-get install ttf-liberation

+Add Resolutions (Feb. 15, 2015, 11:19 a.m.)

1. Install arandr
apt install arandr


2. Run "arandr" from the applications menu.


3. Create a resolution by doing the following:
In this example, the resolution I want is 1920x1080
cvt 1920 1080

This will create a modeline like this:
Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync

Create the new mode:
xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync


4. Add the mode (resolution) to the desired monitor: (Get the list of active outputs from the "output" menu in Arandr application)
xrandr --addmode VGA-1 "1920x1080_60.00"


5- For switching to the newly created resolution:
xrandr -s 1920x1080

OR

xrandr --output VGA-1 --mode "1920x1080"

OR

5. Run arandr and position your monitors correctly

6. Choose 'layout' then 'save as' to save the script

7. I found the best place to load the script (under Xubuntu) is the settings manager:
xfce4-settings-manager

OR

Menu -> Settings -> Settings Manager -> Session and Startup -> Application Autostart

+Dump traffic on a network (Feb. 7, 2015, 11:33 a.m.)

tcpdump -nti any port 4301

To connect to it:
telnet 5.32.34.54 4301

+Show open ports and listening services (Feb. 7, 2015, 10:33 a.m.)

http://www.techrepublic.com/blog/it-security/list-open-ports-and-listening-services/

netstat -an | egrep 'Proto|LISTEN'
netstat -lnptu

+Make Bootable USB stick (Jan. 8, 2015, 7:50 p.m.)

sudo dd if=~/Desktop/linuxmint.iso of=/dev/sdx oflag=direct bs=1048576
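
After writing, it is worth verifying the copy. A sketch using an ordinary file as a stand-in for /dev/sdx (on a real stick you would compare against the device node itself; a real device is larger than the ISO, hence the size limit on cmp):

```shell
# A 1 MiB stand-in "ISO" and a stand-in "device" file.
iso="$(mktemp)"; dev="$(mktemp)"
dd if=/dev/urandom of="$iso" bs=1M count=1 2>/dev/null

# Write the image, flush to disk, then compare the first ISO-sized bytes.
dd if="$iso" of="$dev" bs=1M 2>/dev/null
sync
cmp -n "$(stat -c %s "$iso")" "$iso" "$dev"
```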

+Change locale/timezone and set the clock (Sept. 20, 2015, 1:57 p.m.)

1-ln -sf /usr/share/zoneinfo/Asia/Tehran /etc/localtime
2-apt install ntp
3-ntpd
4-hwclock -w

-------------------------------------------------------------------

Linux Set Date Command Example
# date -s "2 OCT 2006 18:00:00"

OR

# date --set="2 OCT 2006 18:00:00"

OR

# date +%Y%m%d -s "20081128"

OR

# date +%T -s "10:13:13"
Where,

10: Hour (hh)
13: Minute (mm)
13: Second (ss)

Use %p locale's equivalent of either AM or PM, enter:
# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"

-------------------------------------------------------------------

yum install ntp
ln -sf /usr/share/zoneinfo/Asia/Tehran /etc/localtime
/etc/init.d/ntpd stop
ntpdate 0.pool.ntp.org

+error ==> error while loading shared libraries (Dec. 18, 2014, 10:02 p.m.)

Locate the file using locate <the_file.so.0> and copy it to /usr/lib

I also needed to copy it to /usr/lib64.

+error ==> make command not found (Dec. 18, 2014, 11:46 a.m.)

apt-get install make build-essential

+wget certificate error (Dec. 18, 2014, 11:38 a.m.)

ERROR: The certificate of `www.dropbox.com' is not trusted.
ERROR: The certificate of `www.dropbox.com' hasn't got a known issuer.

If you don't care about checking the validity of the certificate just add the --no-check-certificate option on the wget command-line.

wget --no-check-certificate <url_link>

+Split and Join/Merging Files (Nov. 28, 2014, 11:58 a.m.)

split --bytes=1M NimkatOnline-1.0.0.apk NimkatOnline
-l ==> lines
-b ==> bytes
M ==> megabytes
G ==> gigabytes


split --bytes=1M images/myimage.jpg new

split -b 22 newfile.txt new
Split the file newfile.txt into separate files called newaa, newab, newac..., with each file containing 22 bytes of data.

split -l 300 file.txt new
Split the file file.txt into files beginning with the name new, each containing 300 lines of text.
------------------------------------------------------
For merging or joining files:
cat new* > newimage.jpg
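
The split/join round trip can be verified with a byte-for-byte comparison; a self-contained check:

```shell
# Make a ~3 KB test file and split it into 1 KiB pieces.
d="$(mktemp -d)"; cd "$d"
head -c 3000 /dev/urandom > original.bin
split --bytes=1K original.bin part_

# Join the pieces back (the glob expands in the correct aa, ab, ac order)
# and verify the result matches the original exactly.
cat part_* > rejoined.bin
cmp original.bin rejoined.bin && echo OK
```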

+Locate (Nov. 13, 2014, 10:03 p.m.)

Match the exact filename:
locate -b '\filename'

Don’t output all the results, but only the number of matching entries.
locate -c test

+SSH login without password (Nov. 13, 2014, 7:29 p.m.)

1-ssh-keygen -t rsa (No need to set a password)
2-ssh-copy-id mohsen@mohsenhassani.com

Now you can log in without a password
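
Under the hood, ssh-copy-id just appends your public key to the remote ~/.ssh/authorized_keys with the right permissions (sshd refuses keys if the permissions are too open). A sketch of the manual equivalent, demonstrated on a temporary directory standing in for the remote home; the key material is a placeholder:

```shell
remote_home="$(mktemp -d)"                      # stand-in for the remote ~
pubkey="ssh-rsa AAAA...example mohsen@laptop"   # stand-in public key

# .ssh must be 700 and authorized_keys 600, or sshd ignores the key.
mkdir -p "$remote_home/.ssh"
chmod 700 "$remote_home/.ssh"
echo "$pubkey" >> "$remote_home/.ssh/authorized_keys"
chmod 600 "$remote_home/.ssh/authorized_keys"
```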

+APT - The location where apt-get caches/stores .deb files (Oct. 18, 2014, 6:16 a.m.)

/var/cache/apt/archives/

+nano - Replace (Oct. 3, 2014, 11:08 p.m.)

In some versions of nano for `replacing` you can use:
Shift + Tab

And in some other versions:
CTRL + \

+Recover Files (Sept. 14, 2014, 7:24 p.m.)

Using this program you can undelete/recover deleted files:
testdisk

After selecting the desired hard disk, press the `P` key (capital P) to show all the deleted files.

+Setting Proxy Variable (Aug. 22, 2014, 12:44 p.m.)

export http_proxy="localhost:9000"
export https_proxy="localhost:9000"
export ftp_proxy="localhost:9000"

And for removing environment variables:
unset http_proxy
unset https_proxy
unset ftp_proxy

+Getting folder size (Aug. 22, 2014, 12:38 p.m.)

For getting the folder size along with its sub-folders:
du -sh /path/to/directory

+Join *.001, *.002, .... files (Aug. 22, 2014, 12:33 p.m.)

cat filename.avi.* > filename.avi

+Virtualbox (Nov. 4, 2015, 11:31 a.m.)

Virtualbox has some dependencies. You'd better follow this solution to install it.

1- Add the following line to your /etc/apt/sources.list:
deb http://download.virtualbox.org/virtualbox/debian xenial contrib

According to your distribution, replace 'xenial' by 'vivid', 'utopic', 'trusty', 'raring', 'quantal', 'precise', 'lucid', 'jessie', 'wheezy', or 'squeeze'.

For viewing the complete list of dists:
http://download.virtualbox.org/virtualbox/debian/dists/

To see your Linux dist:
cat /etc/*release
Based on the line:
UBUNTU_CODENAME=xenial
choose the dist! (which is xenial)

2- apt-get update (using a proxy tool like proxychains)

3- apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A2F683C52980AECF
The key depends on what you might get after apt-get update.
You need to re-run apt-get update.
-------------------------------------------------------------------------------------------
Virtualbox has some dependencies. You'd better follow the top solution to install it.

Virtualbox 5 Download link: (It's blocked for us in Iran; use a proxy tool to bypass it).
http://www.oracle.com/technetwork/server-storage/virtualbox/downloads/index.html

Or

You can download the file directly from: (It's also blocked; use a proxy tool).
http://download.virtualbox.org/virtualbox/5.1.6/virtualbox-5.1_5.1.6-110634~Ubuntu~xenial_amd64.deb
-------------------------------------------------------------------------------------------
Installing virtualbox:
apt-get install virtualbox virtualbox-4.3 virtualbox-dkms
---------------------------------------------------------------------------------------------
For enabling USB.2 in Virtual Box, when checking the `Enable USB 2.0...` in settings, I noticed an alert at the bottom of the window `Invalid settings detected`. Hovering the mouse over it, it displayed:
"USB 2.0 is currently enabled for this virtual machine. However, this requires the Oracle VM VirtualBox Extension Pack to be installed..."

So, for solving this problem:
1-Check what version of virtual box you're using:
VBoxManage -version
It will display something like 4.3.6_Debianr91406

2-Open this link and follow the version of virtual box you got from `step 1`:
http://download.virtualbox.org/virtualbox/

3-Find the package and download it:
Oracle_VM_VirtualBox_Extension_Pack-4.3.6-91406.vbox-extpack
Don't forget to find the whole version number... I mean the 91406 (from the `step 1`)

4-Install the package:
sudo vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.3.6-91406.vbox-extpack

5-Now, you need to add your username to the "vboxusers" group in order to gain access to your USB devices in the Virtual Machine:
sudo usermod -a -G vboxusers mohsen

6-Restart your PC/Laptop.
Finished.

For viewing the list of installed extension packs:
VBoxManage list extpacks

For uninstalling the package:
sudo vboxmanage extpack uninstall "Oracle VM VirtualBox Extension Pack"
--------------------------------------------------------------------------------------------
bash: /etc/init.d/vboxdrv: No such file or directory
sudo apt-get install build-essential linux-headers-`uname -r`

sudo dpkg-reconfigure virtualbox-dkms
sudo dpkg-reconfigure virtualbox
-------------------------------------------------------------------------------------------
Increase VDI size (the path must point to the .vdi disk image file itself, and --resize takes megabytes):
vboxmanage modifymedium /media/mohsen/Programs/Virtual\ OS/VirtualBox\ VMs/Windows\ 10/ --resize 22000

After resizing, using the Disk Management tool available in Windows, right click on partition C: and extend it.
-------------------------------------------------------------------------------------------
Commands:
VBoxManage list vms
VBoxManage startvm "Debian - 8"
-------------------------------------------------------------------------------------------

+Help, Manual (Aug. 22, 2014, 12:34 p.m.)

Get help:
Some commands don't have help messages or don't use --help to invoke them. On these mysterious commands, use this trick:

First, find out where the executable file is located (this trick will only work with programs, not shell builtins):
which command

The `which` command will tell you the path and file name of the executable program. Next, use the `strings` command to display text that may be embedded within the executable file. For example, if you wanted to look inside the bash program, you would do the following:
which bash
/bin/bash
strings /bin/bash

The strings command will display any human readable content buried inside the program. This might include copyright notices, error messages, help text, etc.

Finally, if you have a very inquisitive nature, get the command's source code and read that. Even if you cannot fully understand the programming language in which the command is written, you may be able to gain valuable insight by reading the author's comments in the program's source.

+Dolphin (Aug. 22, 2014, 12:34 p.m.)

When working with `dolphin`, I can't disable the notification sounds. It breaks the alsa volume too. The only way to disable the sounds is to delete or move (or rename) the sound files. So here is the path to the sounds. Do whatever that pleases you :D
/usr/share/sounds

+ISO files (Aug. 22, 2014, 12:33 p.m.)

Convert .DAA Files To .ISO

Download and install PowerISO using the following link:
http://www.poweriso.com/download.php
Scroll to the bottom of the page, to the `Other downloads` section, to get the Linux version.

1- wget http://www.poweriso.com/poweriso-1.3.tar.gz

2- tar -zxvf poweriso-1.3.tar.gz

3- You can copy the extracted file “poweriso” to /usr/bin to help all users of a computer to use it.
Now if you want to convert for example a .daa file to .iso use this command:
poweriso convert /path/to/source.daa -o /path/to/target.iso -ot iso
*******
There are more useful commands of poweriso:
Task: list all files and directories in the root directory of /media/file.iso

poweriso list /media/file.iso /
poweriso list /media/file.iso / -r
*******
For more commands, type:
poweriso -?
-----------------------------------------------------------
Convert DMG to ISO

1- Install the tool
sudo apt-get install dmg2img

2- The following command will convert the .dmg to .img file in ISO format:
dmg2img <file_name>.dmg

3- And finally, rename the extension:
mv <file_name>.img <file_name>.iso
-----------------------------------------------------------
Create ISO file from a directory:
mkisofs -allow-limited-size -o abcd.iso abcd

+Installing Flash Player (Aug. 22, 2014, 12:32 p.m.)

sudo apt-get install adobe-flashplugin

+Nautilus Bookmarks (Aug. 22, 2014, 12:26 p.m.)

Nautilus bookmarks configuration file location:
~/.config/gtk-3.0/bookmarks

For seeing which version of nautilus you have:
nautilus --version

+Convert mp3 to ogg (Aug. 22, 2014, 12:32 p.m.)

Convert mp3 to ogg:
1-apt-get install mpg321 vorbis-tools
2-mpg321 input.mp3 -w raw && oggenc raw -o output.ogg

+Convert rpm to deb (Aug. 22, 2014, 12:26 p.m.)

Convert rpm to deb:
1-apt-get install alien
2-alien -d package-name.rpm

+tmux (Aug. 22, 2014, 12:31 p.m.)

Prompt not following normal bash colors:

For fixing the problem, create a file `~/.tmux.conf` if it does not exist, and add the following to it:
set -g default-terminal "screen-256color"

set -g history-limit 100000
---------------------------------------------------
Tmux Plugin Manager:
git clone https://github.com/tmux-plugins/tpm ~/.tmux/plugins/tpm

Put this at the bottom of ~/.tmux.conf:

# List of plugins
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'

# Initialize TMUX plugin manager (keep this line at the very bottom of tmux.conf)
run '~/.tmux/plugins/tpm/tpm'
---------------------------------------------------
Installing plugins:
1-Add new plugin to ~/.tmux.conf with set -g @plugin '...'
2-Press prefix + I (capital I, as in Install) to fetch the plugin.
---------------------------------------------------
Uninstalling plugins:
1-Remove (or comment out) plugin from the list.
2-Press prefix + alt + u (lowercase u as in uninstall) to remove the plugin.
---------------------------------------------------
tmux-continuum plugin:
set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'

Automatic restore:
Last saved environment is automatically restored when tmux is started.
Put this in tmux.conf to enable:
set -g @continuum-restore 'on'
set -g @resurrect-capture-pane-contents 'on'
---------------------------------------------------
CPU/RAM/battery stats chart bar:
install the plugin using CPAN:
sudo cpan -i App::rainbarf

If it's the first time you're using CPAN, you might be asked to let some plugins be installed automatically.
Choose (yes) and then (sudo) to let them install.

After installation, create a config file ~/.rainbarf.conf with this content:
width=20 # widget width
bolt # fancy charging character
remaining # display remaining battery
rgb # 256-colored palette
---------------------------------------------------
Whole config file:
set -g default-terminal "screen-256color"
set-option -g status-utf8 on

set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'

set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @plugin 'tmux-plugins/tmux-logging'
set -g @continuum-restore 'on'
set -g @resurrect-capture-pane-contents 'on'

set -g history-limit 500000

set -g status-right '#(rainbarf)'
set -g default-command bash

run '~/.tmux/plugins/tpm/tpm'
---------------------------------------------------
PRESS CTRL+B and CTRL+I to install plugins after editing the .tmux.conf file.
---------------------------------------------------
CTRL + B and SHIFT + P to start (and end) logging in current pane.
CTRL + B and ALT + P to start (and end) to capture screen.

Save complete history:
CTRL + B and ALT + SHIFT + P

Clear pane history:
CTRL + B and ALT + C
---------------------------------------------------
Swap Window:
swap-window -s 3 -t 1
---------------------------------------------------

+PIL (Feb. 15, 2016, 11:04 a.m.)

For a successful and complete installation of PIL, you need to install these packages before installing PIL:

sudo apt-get install libjpeg-dev libfreetype6 libfreetype6-dev zlib1g-dev

If you're going to install it on python3:
apt-get install python3-dev
If it's for python 2:
apt-get install python-dev
------------------------------------------------------------------------
The installation should be finished by now. Do the following if you still get errors and the jpeg library is not recognized by linux:

# ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so /usr/lib
# ln -s /usr/lib/x86_64-linux-gnu/libfreetype.so /usr/lib
# ln -s /usr/lib/x86_64-linux-gnu/libz.so /usr/lib

Now proceed and reinstall PIL: pip install -U PIL

In case of this error:
#include <freetype/fterrors.h>
Create a symlink as follow:
ln -s /usr/local/include/freetype2/ /usr/local/include/freetype

+Undeleting (Aug. 22, 2014, 12:30 p.m.)

1-Install extundelete: apt-get install extundelete

2-Either "unmount" or "remount" the partition as read-only:
sudo mount -t vfat -O remount,ro /dev/sdb /mnt

To remount it back to read-write: (This task is not part of this tutorial. It's just for keeping a note.)
sudo mount -t vfat -O remount,rw /dev/sdb /mnt

3-For restoring the files from the whole partition:
extundelete /dev/sdb1 --restore-all
And for restoring important files quickly, you may use the --restore-file, --restore-files, or --restore-directory options.

+Error - ia32-libs : Depends: ia32-libs-i386 but it is not installable (Aug. 22, 2014, 12:29 p.m.)

The ia32-libs-i386 package is only installable from the i386 repository, which becomes available with the following commands:

dpkg --add-architecture i386
apt-get update

+Driver - Samsung Printer (July 20, 2015, 11:23 p.m.)

https://doc.ubuntu-fr.org/tutoriel/installer_imprimante_samsung

Installing My Samsung Printer Driver (SCX-4521F):

1-Add the following repository to /etc/apt/sources.list:
deb http://www.bchemnet.com/suldr/ debian extra

2-Install the GPG key:
sudo apt-get install suldr-keyring
apt-get update

3-Install these packages:
apt-get install samsungmfp-driver-4.00.39 suld-configurator-2-qt4

+Grub rescue (Aug. 22, 2014, 12:02 p.m.)

I haven't tried this yet, so keep in mind to correct any problems:
mount /dev/sdaX /mnt    (replace sdaX with your root partition)
grub-install --root-directory=/mnt/ /dev/sda

OR

Another day I just used these commands, some would give me errors, but some would work...but in my surprise it worked:
set prefix=(hd0,1)/boot/grub
insmod (hd0,1)/boot/grub/linux.mod
insmod part_msdos
insmod ext2
set root=(hd0,1)
reboot using CTRL+ALT+DELETE

+Commands - iftop (Aug. 22, 2014, 12:23 p.m.)

iftop: like `top`, but for network interface bandwidth.

Install iftop for viewing what applications are using/eating up Internet.

iftop -i eth1

# The logs from xchat help:
in iftop hit `p` to toggle port display
now you know which port on your machine is connecting out to that domain
now use netstat -nlp to list all pids on which ports are connecting out
you should now know which pid is hitting that domain... provided all traffic originates on your local box
also consider using lsof for this sort of mining

+Error - Cannot Open Display (Aug. 22, 2014, 12:04 p.m.)

export XAUTHORITY=/home/<user>/.Xauthority

OR

Try this new method:
aptitude -r install linux-headers-2.6-`uname -r|sed 's,[^-]*-[^-]*-,,'` nvidia-kernel-dkms nvidia-glx && mkdir /etc/X11/xorg.conf.d ; echo -e 'Section "Device"\n\tIdentifier "My GPU"\n\tDriver "nvidia"\nEndSection' > /etc/X11/xorg.conf.d/20-nvidia.conf
/etc/X11/xorg.conf

This is the old xorg.conf:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 280.13 (buildmeister@swio-display-x86-rhel47-03.nvidia.com) Wed Jul 27 17:15:58 PDT 2011

Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0"
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/psaux"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection

Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Unknown"
HorizSync 28.0 - 33.0
VertRefresh 43.0 - 72.0
Option "DPMS"
EndSection

Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BusID "PCI:1:0:0"
# option "MetaModes" "1280x1024"
option "MetaModes" "1920x1080"
EndSection

Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
EndSubSection
EndSection

+unrar (Aug. 22, 2014, 12:03 p.m.)

How to use the unrar command
First move the rar file to a directory, then extract it there:
$ unrar e file.rar

List the contents of an archive:
$ unrar l file.rar
--------
Unrar all files:
for file in *.part01.rar; do unrar x ${file}; done;

+Swap file (Aug. 22, 2014, 12:02 p.m.)

How to create a swap file:
1-dd if=/dev/zero of=/swapfile1 bs=1024 count=524288

Where,
if=/dev/zero : Read from /dev/zero, a special file that provides as many null characters as are read from it; used here to build the storage file /swapfile1.
of=/swapfile1 : Write the storage file to /swapfile1.
bs=1024 : Read and write 1024 bytes at a time.
count=524288 : Copy only 524288 input blocks.

2-mkswap /swapfile1

3-chown root:root /swapfile1
chmod 0600 /swapfile1

4-swapon /swapfile1

5-nano /etc/fstab
Append the following line:
/swapfile1 swap swap defaults 0 0

6-To test/see the free space:
free -m
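
The resulting file size is simply bs × count (1024 × 524288 = 536870912 bytes = 512 MiB), which is easy to sanity-check on a small scale before committing to the full swap file:

```shell
# Demonstrate the bs * count arithmetic with a tiny 1 MiB file.
f="$(mktemp)"
dd if=/dev/zero of="$f" bs=1024 count=1024 2>/dev/null

size="$(stat -c %s "$f")"
echo "$size"   # 1024 * 1024 = 1048576
```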

+Commands - rm (Aug. 22, 2014, noon)

rm -rfv `find . -iname "*.pyc"`

+Define aliases (Aug. 22, 2014, noon)

Defining alias:
1-Open the file ~/.bashrc and write an alias like this:
alias myvps='ssh -p 54321 mohsen@mohsenhassani.com'
2-Enter this command to make the changes take effect:
source ~/.bashrc
3-Keep in mind that every time a change is made to the .bashrc file, you have to reload it with:
source ~/.bashrc
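
Aliases can't take arguments in the middle of a command; for that, define a shell function in ~/.bashrc instead. A small example (`mkcd` is a made-up helper name):

```shell
# Make a directory (including parents) and cd into it in one step.
mkcd() {
    mkdir -p "$1" && cd "$1"
}

base="$(mktemp -d)"
mkcd "$base/projects/demo"
pwd
```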

+Commands - mount (Aug. 22, 2014, noon)

mount -t ntfs /dev/sda1 /mnt/exhdd

To mount a floppy image:
sudo mount -t msdos -o loop -o umask=000 ./floppy.img /media/floppy

+Error - Errors were encountered while processing (Aug. 22, 2014, 11:59 a.m.)

samsungmfp-driver
E: Sub-process /usr/bin/dpkg returned an error code (1)
rm /var/lib/dpkg/info/samsungmfp-*

+ALSA (Aug. 22, 2014, 11:58 a.m.)

Find ALSA version:
cat /proc/asound/version
--------
My sound card was installed. I knew it, using the command:
cat /proc/asound/modules
and
cat /proc/asound/cards

But there was no sound from my laptop. I ran gstreamer-properties as a normal user (not root) to test the laptop's audio device, and saw that ALSA was missing from the plugins section. So I found gstreamer0.10-alsa in the repo and installed it; after that I could test the sound card, heard sound, and the problem was solved.

And I of course had to use the command:
alsactl init

For not doing the above command every time the system is turned on, I made my snd-hda-intel as the default sound card. (The tutorial is in this same file.)

+Commands - scp (Aug. 22, 2014, 11:43 a.m.)

The scp command allows you to copy files over ssh connections. This is pretty useful if you want to transport files between computers, for example to backup something. The scp command uses the ssh command and they are very much alike. However, there are some important differences.
The scp command can be used in three ways:
1-To copy from a (remote) server to your computer.
2-To copy from your computer to a (remote) server.
3-To copy from a (remote) server to another (remote) server.

In the third case, the data is transferred directly between the servers; your own computer will only tell the servers what to do. These options are very useful for a lot of things that require files to be transferred, so let's have a look at the syntax of this command:
scp examplefile yourusername@yourserver:/home/yourusername/
**********
You can also copy a file (or multiple files) from the (remote) server to your own computer. Let's have a look at an example of that:
scp yourusername@yourserver:/home/yourusername/examplefile .

The dot at the end means the current local directory. This is a handy trick that can be used about everywhere in Linux. Besides a single dot, you can also type a double dot ( .. ), which is the parent directory of the current directory.
**********
You probably already guessed that the following command copies a file from a (remote) server to another (remote) server:
scp yourusername@yourserver:/home/yourusername/examplefile yourusername2@yourserver2:/home/yourusername2/
**********
Please note that, to make the above command work, the servers must be able to reach each other, as the data will be transferred directly between them. If the servers somehow can't reach each other (for example, if port 22 is not open on one of the sides) you won't be able to copy anything. In that case, copy the files to your own computer first, then to the other host. Or make the servers able to reach each other (for example by opening the port).
**********
Specifying a port with scp:
The scp command acts a little different when it comes to ports. You'd expect that specifying a port should be done this way:
scp -p yourport yourusername@yourserver:/home/yourusername/examplefile .
However, that will not work. You will get an error message like this one:
cp: cannot stat `yourport': No such file or directory
This is caused by the different architecture of scp. It aims to resemble cp, and cp also features the -p option. However, in cp terms it means 'preserve', and it causes the cp command to preserve things like ownership, permissions and creation dates. The scp command can also preserve things like that, and the -p option enables this feature. The port specification should be done with the -P option. Therefore, the following command will work:
scp -P yourport yourusername@yourserver:/home/yourusername/examplefile .
Also note that the -P option must be in front of the (remote) server. The ssh command will still work if you put -p yourport behind the host syntax, but scp won't. Why? Because scp also supports copying between two servers and therefore needs to know which server the -P option applies to.
--------------
Copying files from a remote computer using ssh
scp root@80.75.14.97:~/Desktop/Programs/Tutorial/Python.zip /home/mohsen/Desktop/

To copy from the local machine, to the remote machine, just reverse things:
scp /home/mohsen/Desktop/Python.zip root@80.75.14.97:~/Desktop/Programs/Tutorial/

+Auto start script at boot time (Aug. 22, 2014, 11:39 a.m.)

To make a script run when the server starts and stops:
First make the script executable with this command:
sudo chmod 755 <path to the script>
Then register it (the script must live in /etc/init.d; update-rc.d takes the script's name, not a path):
sudo update-rc.d <script name> defaults

+Hardware - Sound card (Aug. 22, 2014, 11:36 a.m.)

Removing and Re-installing Sound card
sudo apt-get --purge remove linux-sound-base alsa-base alsa-utils
sudo apt-get install linux-sound-base alsa-base alsa-utils

+Network - Server config (Aug. 22, 2014, 11:33 a.m.)

I used this command in rc.local to allow the eth0 get IP:
route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.1.1

Add this to /etc/network/interfaces
address 192.168.1.2
netmask 255.255.255.0
gateway 192.168.1.1

Create a file named /etc/resolv.conf and write this command in it:
nameserver 4.2.2.4

ifconfig eth0 broadcast 255.255.255.192
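
Putting the pieces above together, a complete static stanza in /etc/network/interfaces would look roughly like this (interface name and addresses are the ones from these notes; the dns-nameservers line needs the resolvconf package, otherwise keep /etc/resolv.conf as described):

```
auto eth0
iface eth0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 4.2.2.4
```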

+Backlight (Screen Brightness) (Aug. 22, 2014, 11:32 a.m.)

For solving the backlight brightness problem, go to /etc/default/grub and edit the GRUB_CMDLINE_LINUX_DEFAULT line to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_osi=Linux acpi_backlight=vendor splash"
And then:
update-grub2
----------------------------------------------------------------------------
Check if graphics card is intel:
ls /sys/class/backlight

You should see something like:
ideapad intel_backlight

Fix backlight:
Create this file: /usr/share/X11/xorg.conf.d/20-intel.conf

Section "Device"
Driver "intel"
Option "Backlight" "intel_backlight"
Identifier "card0"
EndSection

Logout and Login. Done.

+IRC (Aug. 22, 2014, 11:28 a.m.)

1-Join the Freenode network. Open your favorite IRC client and type:
/server irc.freenode.net

2-Choose a user name or nick. This user name should consist only of the letters from A-Z, the numbers from 0-9 and certain symbols such as "_" and "-". It may have a maximum of 16 characters.

3-Change your user name to the user name you have chosen. Suppose you chose the nickname "awesomenickname". Type the following in the window titled Freenode:
/nick awesomenickname

4-Register your nick or user name. Type the following command and replace "your_password" with a password that will be easy to remember, and replace "your_email_address" with your email address.
/msg nickserv register your_password your_email_address

5-Verify your registration. After you register, you will not be able to identify to NickServ until you have verified your registration. To do this, check your email for an account verification code.

6-Group an alternate nickname with your main one. If you would like to register an alternate nickname, first switch to the alternate nickname that you want while you are identified as the main one, then group your nicks together with this command:
/msg nickserv group

7-Identify with Nickserv. Each time you connect, you should sign in, or "identify" yourself, using the following command:
/msg nickserv identify your_password


You can send private messages anytime after step 4. The advantage of the other steps is to make your registration much more secure. To send a private message, you simply do the following, replacing Nick with the nick or user name of the person you wish to contact privately and message with the message you want to start with:
/msg Nick message

Take care to follow this process in the Freenode window, not directly in a channel. If you type all the commands correctly, nothing should be visible to others, but it's very easy to type something else by mistake, and in so doing, you could expose your password.

Choose a nick between 5 and 8 characters long. This will make it easier to identify and avoid confusion. Choose your nick wisely. Remember that users will identify this name with your person.

User names will automatically expire after 60 days of disuse. This is counted from the last time the nick was identified with NickServ. If the nickname you want is registered but no longer in use, you can contact somebody on the Freenode staff to unassign it for you. If you will not be able to use IRC for 60 days, you can extend the time using the vacation command (/msg nickserv vacation). Vacation will be disabled automatically the next time you identify to NickServ.

To check when a nick was last identified with NickServ, use /msg NickServ info Nick

The Freenode staff have an option enabled to receive private messages from unregistered users so if you wish to request that a nick be freed, you do not have to register another.
To contact a member of the staff, use the command /stats p or /quote stats p if the first doesn't work. Send them a private message using /query nick.
In case there is no available staff member in /stats p, use /who freenode/staff/* or join the channel #freenode using /join #freenode.

Avoid using user names that are brand names or famous people, to avoid conflicts.

If you don't want your IP to be seen to the public, contact FreeNode staff and they can give you a generic "unaffiliated" user cloak, if you are not a member of a project.

If you want to hide your email address, use /msg nickserv set hidemail on.

If you need to change your password, type /ns set password new_password. You will need to be logged in.
**********
# select nick name
/nick yournickname

# better don't show your email address:
/ns set hide email on

# register (only one time needed) - PW is in clear text!!
/msg NickServ register [password] [email]

# identify yourself to the IRC server (always needed) (xxxx == pw)
/msg NickServ IDENTIFY xxxx

# Join a channel
/join #grass
--------
Registering a channel:
1-To check whether a channel has already been registered, use the command:
/msg ChanServ info #Mohsen or ##Mohsen

2-/join #Mohsen

3-/msg ChanServ register #Mohsen

For gaining OP:
/MSG chanserv op #shahbal Mohsen_Hassani

+zip (Aug. 22, 2014, 11:25 a.m.)

To zip just one file (file.txt) to a zipfile (zipfile.zip), type the following:
zip zipfile.zip file.txt

To zip an entire directory:
zip -r zipfile.zip directory

zip -r -e saverestorepassword saverestore
The -e flag will prompt you to specify a password and then verify it. You will see nothing happening in Terminal as you type the password. This will create a password-protected zip file named saverestorepassword.zip containing your saverestore directory.
In the above examples, the name of the zip file can be whatever name you choose.

unzip test.zip

unzip test.zip -d music
This will extract the contents of test.zip to the music folder. Caveat, the directory must already exist.

Now let's extract the saverestorebackup.zip file. In this example I'll extract it to my music folder so I don't overwrite my current data in the saverestore folder. Again, this assumes you've just launched Terminal:
cd /media/internal
unzip saverestorebackup.zip -d music

In the above two examples, the -d flag indicates to extract the zip file to the directory specified, music in this case.
--------
For excluding a directory in zip:
zip -r test.zip test -x "path/to/exclusion/directory/*"
1-Take note that the exclusion path should be in quotes, with a star at the end.
2-The * (star) at the end of the command excludes ALL the sub-files and sub-directories, so don't forget to use it!
3-The path should not start with '/home/mohsen/...'; it should be relative to the directory where you run the command.

+Commands - ssh (Aug. 22, 2014, 11:22 a.m.)

SSH is an abbreviation of Secure SHell. It is a protocol that allows secure connections between computers.
To connect to an SSH service listening on a different port:
ssh -p yourport yourusername@yourserver

Running a command on the remote server:
Sometimes, especially in scripts, you'll want to connect to the remote server, run a single command and then exit again. The ssh command has a nice feature for this. You can just specify the command after the options, username and hostname. Have a look at this:
ssh yourusername@yourserver updatedb
This will make the server update its searching database. Of course, this is a very simple command without arguments. What if you'd want to tell someone about the latest news you read on the web? You might think that the following will give him/her that message:
ssh yourusername@yourserver wall "Hey, I just found out something great! Have a look at www.examplenewslink.com!"
However, bash will give an error if you run this command:
bash: !": event not found
What happened? Bash (the program behind your shell) tried to interpret the command you wanted to give ssh. This fails because there are exclamation marks in the command, which bash will interpret as special characters that should initiate a bash function. But we don't want this, we just want bash to give the command to ssh! Well, there's a very simple way to tell bash not to worry about the contents of the command but just pass it on to ssh already: wrapping it in single quotes. Have a look at this:
ssh yourusername@yourserver 'wall "Hey, I just found out something great! Have a look at www.examplenewslink.com!"'
The single quotes prevent bash from trying to interpret the command, so ssh receives it unmodified and can send it to the server as it should. Don't forget that the single quotes should be around the whole command, not anywhere else.
------------------
sudo ssh-keygen -R hostname
------------------
Creating ssh key:
ssh-keygen -t rsa
------------------
When the server is just installed, the first access is possible via:
ssh-keygen -R <ip of server>
------------------
SSH Tunnel:
1-Create a user on the server:
adduser <username>

2-Copy the user's ssh_key from his computer to the server:
ssh-copy-id -i ~/.ssh/id_rsa.pub <username>@<server_ip>

3-Run this command on user's computer:
ssh -D <an optional port, like 9000> -fN <username>@<server_ip>

4-Change the Connection Settings of Mozilla, SOCKS Host:
localhost 9000

+Error - GPG error: ... NO_PUBKEY (Aug. 22, 2014, 11:21 a.m.)

While running "apt-get update", I encountered an error telling me "GPG error: ... NO_PUBKEY DB141E2302FDF932"
So, for solving the problem I used this command:
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DB141E2302FDF932

+wget (Aug. 22, 2014, 11:18 a.m.)

ERROR: The certificate of `www.dropbox.com' is not trusted.
ERROR: The certificate of `www.dropbox.com' hasn't got a known issuer.

wget --no-check-certificate <url_link>

-------------------------------------------------------------

Mirror an entire website
wget -m http://google.com

-------------------------------------------------------------

Mirror entire website:

wget --mirror --random-wait --convert-links --adjust-extension --page-requisites --no-host-directories -erobots=off --no-cache http://domain.com/

-------------------------------------------------------------

Print file to stdout like curl does:

wget -O - http://example.com/text.txt

-------------------------------------------------------------

Recursively download only files with the pdf extension upto two levels away:

wget -r -l 2 -A "*.pdf" http://papers.xtremepapers.com/CIE/Cambridge%20Checkpoint/

-------------------------------------------------------------

Get your external ip address from icanhazip.com and echo to STDOUT:

wget -O - http://icanhazip.com/ | tail

-------------------------------------------------------------

Open tarball without downloading:

wget -qO - "http://www.tarball.com/tarball.gz" | tar zxvf -

-------------------------------------------------------------

The option -c or --continue will resume an interrupted download:

wget -c https://scans.io/data/umich/https/certificates/raw_certificates.csv.gz

-------------------------------------------------------------

Download a list of urls from a file:

wget -i urls.txt

-------------------------------------------------------------

Save file into directory:

wget -P path/to/directory http://bropages.org/bro.html

-------------------------------------------------------------

Saves the HTML of a webpage to a particular file:

wget -O bro.html http://bropages.org/

-------------------------------------------------------------

Download entire website:

Short Version:
wget --user-agent="Mozilla" -mkEpnp http://example.org


Explanation:

wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.org

Explanation of the various flags:

--mirror – Makes (among other things) the download recursive.
--convert-links – convert all the links (also to stuff like CSS stylesheets) to relative, so it will be suitable for offline viewing.
--adjust-extension – Adds suitable extensions to filenames (html or css) depending on their content-type.
--page-requisites – Download things like CSS style-sheets and images required to properly display the page offline.
--no-parent – When recursing, do not ascend to the parent directory. It is useful for restricting the download to only a portion of the site.

-------------------------------------------------------------

+VPN (Aug. 22, 2014, 11:15 a.m.)

Configure VPN:
Start by browsing to System » Preferences » Network Connections » VPN.
If you have never setup a VPN connection before there is a good chance that all the buttons, like "Add", are grayed out. Fix this by opening a terminal and running this command:
sudo apt-get install pptp-linux network-manager-pptp
Now go back to the Network Connections window and the VPN tab inside of it; the Add button should now be clickable. Click it, select Point-to-Point Tunneling Protocol (PPTP) in the drop-down and click Create.
Type something like RaptorVPN in for Connection name. For Gateway, enter 208.43.150.122
Type in the RaptorVPN-provided password and then click Advanced.
In the Authentication section, uncheck all but MSCHAPv2.
In the Security and Compression section, check the box for Use Point-to-Point encryption (MPPE) and select 128-bit (most secure) in the drop-down below it. Then check the box for Allow stateful encryption and click OK and Apply.
If at any point during the VPN setup you see a keyring message like the one below, click Always Allow.
Restart the network manager by running this command in the terminal:
sudo /etc/init.d/network-manager restart
Now you are ready to take your new RaptorVPN connection for a test drive. Click the network icon in the taskbar and click on your new VPN connection.
A few seconds later you should be successfully connected!

+Change default sound card (Aug. 22, 2014, 11:14 a.m.)

nano /etc/modprobe.d/alsa-base.conf
and add:
options audigy (or whatever it is called) index=0
options logitech (or whatever it is called) index=1
and restart alsa
/etc/init.d/alsa-utils restart
*******
asoundconf set-default-card Xmod
*******
In terminal type
less /proc/asound/modules
That will show you which sound cards occupy which slots and what their names are.
My output is
0 snd_au8830
1 snd_intel8x0
so it should look something like that.
Now identify which cards you don't wanna use and take their names.
In terminal now type
sudo nano /etc/modprobe.d/alsa-base.conf
Find the place where it says something like
# Prevent abnormal drivers from grabbing index 0
and in the list below add
options snd_whateveryourcardnameswere index=-2
Since you have two cards you want to blacklist, add two lines with the respective names.
Now save /etc/modprobe.d/alsa-base.conf and reboot the computer.

+Commands - lsof (Aug. 22, 2014, 11:12 a.m.)

lsof -i:<port>
Example: lsof -i:80
Displays the process which uses port 80.

+VGA Switcheroo (Aug. 22, 2014, 11:11 a.m.)

Once you've ensured that vga_switcheroo is available, you can use these options to switch between GPUs.
echo ON > /sys/kernel/debug/vgaswitcheroo/switch
Turns on the GPU that is disconnected (not currently driving outputs), but does not switch outputs.
echo IGD > /sys/kernel/debug/vgaswitcheroo/switch
Connects integrated graphics with outputs.
echo DIS > /sys/kernel/debug/vgaswitcheroo/switch
Connects discrete graphics with outputs.
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
Turns off the graphics card that is currently disconnected.
There are also a couple of options that are useful from inside an X-Windows session:
echo DIGD > /sys/kernel/debug/vgaswitcheroo/switch
Queues a switch to integrated graphics to occur when the X server is next restarted.
echo DDIS > /sys/kernel/debug/vgaswitcheroo/switch
Queues a switch to discrete graphics to occur when the X server is next restarted.

+Changing the boot count down time (Aug. 22, 2014, 11:07 a.m.)

nano /etc/default/grub
GRUB_TIMEOUT=5

+Commands - ps (Aug. 22, 2014, 11:06 a.m.)

ps
Lists the processes running in the current shell

ps -A
Displays all processes on the system

kill <PID>
Terminates the process with the given PID
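For example, with a background sleep standing in for any process:

```shell
# Start a throwaway process, inspect it with ps, then terminate it.
sleep 100 &
pid=$!
ps -p "$pid" -o pid=,comm=      # shows the PID and command name
kill "$pid"                     # sends SIGTERM
wait "$pid" 2>/dev/null || true # reap it; exit status reflects the signal
```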

+Changing the attributes of a file/directory (Aug. 22, 2014, 11:05 a.m.)

Use the chmod command.
The permissions are read/write/execute for owner/group/others, with the values being:
4-2-1, 4-2-1, 4-2-1.

To give everyone execute-only access to a file, you'd use:
chmod 111 <filename>

For all permissions, it'd be:
chmod 777 <filename>

Owner-only r/w/x would be:
chmod 700 <filename>

4 = read
2 = write
1 = execute
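A quick check of how the octal digits map to permissions (the file name is arbitrary):

```shell
# Each digit is the sum of read (4), write (2), execute (1),
# for owner, group, and others respectively.
touch /tmp/demo_chmod
chmod 754 /tmp/demo_chmod       # owner rwx (7), group r-x (5), others r-- (4)
stat -c '%a %A' /tmp/demo_chmod # prints: 754 -rwxr-xr--
```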

+Commands - ls (Aug. 22, 2014, 11:04 a.m.)

ls -r
Reverse order while sorting

ls -F
Shows executable files with '*' sign and link files with '@'

ls -t
Sort by time

+Commands - echo (Aug. 22, 2014, 11:03 a.m.)

echo <message>
Displays the message on the screen.

echo <message> > <filename>
If the file exists, its content is overwritten with the message. If the file doesn't exist, it is created and the message is written to it.

echo <message> >> <filename>
Appends the message to the end of the file.
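A quick demonstration of the difference between > and >>:

```shell
echo "first" > /tmp/demo_echo.txt    # creates (or truncates) the file
echo "second" >> /tmp/demo_echo.txt  # appends a second line
cat /tmp/demo_echo.txt               # prints both lines
echo "third" > /tmp/demo_echo.txt    # truncates again
cat /tmp/demo_echo.txt               # prints only: third
```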

+Commands - head and tail (Aug. 22, 2014, 10:57 a.m.)

head <filename>
Prints the first part of a file (the first 10 lines by default)
head -n 4 <filename>
Prints the first 4 lines of the file

tail <filename>
Prints the last part of a file
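For example, using the -n flag to control how many lines are shown:

```shell
printf '1\n2\n3\n4\n5\n' > /tmp/demo_lines.txt
head -n 2 /tmp/demo_lines.txt   # prints lines 1 and 2
tail -n 2 /tmp/demo_lines.txt   # prints lines 4 and 5
```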

+Bash - Adding commands to bash (Aug. 22, 2014, 10:54 a.m.)

1-Using this command, you can see the paths that Linux uses to find the commands:
env | grep PATH

2-Now you should add the address of your program to this PATH, using the 'export' command, as follows.
If you use
export PATH=address-of-program
the existing paths will be removed, and the terminal will no longer recognize standard commands.

So what you should do is:
Copy and paste the output of "env | grep PATH" and append the address of the specific program, like this:
export PATH=/usr/local/sbin:/usr/local/bin:...:/home/mohsen/Programs/Debian/MyBashCommands

The MyBashCommands directory should already exist, and only the executable files should be copied into it.
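A safer pattern is to append to the existing value instead of retyping it; the directory below is just an example:

```shell
# Create an example directory with one executable script in it.
mkdir -p /tmp/mybin
printf '#!/bin/sh\necho hello\n' > /tmp/mybin/hello
chmod 755 /tmp/mybin/hello

# Append (not replace!) the directory, then the command is found.
export PATH="$PATH:/tmp/mybin"
hello   # prints: hello
```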

+Kernel - Remove (Aug. 22, 2014, 10:53 a.m.)

Delete the files/directories:
/boot/vmlinuz-*kernel version*
/boot/initrd-*kernel version*
/boot/config-*kernel version*
/boot/System.map-*kernel version*

/lib/modules/*kernel version*

/var/lib/initramfs-tools/*kernel version*

update-grub2
update-initramfs -u

+Kernel - Update (Aug. 22, 2014, 10:51 a.m.)

First way:

Copy kernel to /usr/src
tar -xvf kernel-source.tar.bz2
cd kernel-source
mkdir ../build
make clean
make mrproper
make O=../build menuconfig
make -j3 O=../build
make O=../build modules_install install
cd /boot/
mkinitramfs -v -o linux_version // if it didn't create initrd.img+linux_version, then use the following command
update-initramfs -u
//update-initramfs -c -k linux_version // to see the list of available versions go to /lib/modules
update-grub2
**********
Second Way:

1-What to install before starting:
gcc
build-essential
kernel-package
kernel-source-2.4.18 (or whatever kernel sources you will be using)
libc6-dev
tk8.0 or tk8.1 or tk8.3
libncurses5-dev
fakeroot
bin86 (for building 2.2.x kernels on PCs)

2-Expanding the source tarball
Copy the kernel-source to /usr/src and unzip it using the following command:
tar -jxf kernel-source-2.4.18.tar.bz2

3-Setting up the symlink
ln -s kernel-source-2.4.18 linux

4-Checking Current Minimal Requirements
/usr/src/kernel-source-2.4.18/Documentation/Changes
The part of "Current Minimal Requirements" should be studied and the requirements should be installed.

5-Configuring the kernel:
make xconfig
or
make menuconfig
This command should display a long list of available kernel elements so that we can select what is to be compiled.

6-make
This command makes the system build the kernel using the selected elements; this step might take hours to finish.

7-Check in the same /usr/src address to see if the new Kernel-image-2.6.38_...Custom.deb is created!

8-Making the kernel image:
fakeroot make-kpkg clean
fakeroot make-kpkg --append-to-version=.030320 kernel_image

9-Installing the kernel-image package:
dpkg -i kernel-image-2.4.18.030320_10.00.Custom_i386.deb

10-echo "kernel-image-2.4.18.030320 hold" | dpkg --set-selections
After this command, when you use this command "dpkg --get-selections | grep kernel-image", the output should be like this: "kernel-image-2.4.18.030320 hold"

11-Removing the symlink:
cd /usr/src
rm linux

12-(Optional) Removing old kernels:
cd /boot
dpkg -P kernel-image-2.4.18.030309
dpkg -P pcmcia-modules-2.4.18.030309

13-Updating Grub
update-initramfs -c -k 2.6.38-1-amd64 // to see the list of available versions go to /lib/modules
update-grub

14-Restart

Sources:
http://www.digitalhermit.com/linux/Kernel-Build-HOWTO.html
http://newbiedoc.sourceforge.net/system/kernel-pkg.html

+Driver - A site for checking and reporting device drivers (Aug. 22, 2014, 10:49 a.m.)

http://kmuto.jp/debian/hcl/index.cgi

+Fan (Aug. 22, 2014, 10:48 a.m.)

echo -n 3 > /proc/acpi/fan/FAN/state
The value 3 may need to be 1 or 0.
0 turns the fan on; other values turn it off.

+Dictionary - StarDict (Aug. 22, 2014, 10:42 a.m.)

sdcv is the console version of Stardict.

Installation:
apt-get install sdcv

Install downloaded dictionaries:
Make the directory where sdcv looks for the dictionary:
sudo mkdir -p /usr/share/stardict/dic/

Usage:
-l: display list of available dictionaries and exit.
-u: use only the dictionary with this bookname for the search
-n: for use in scripts
--data-dir path/to/directory: Use this directory as the path to the StarDict data directory. This means that sdcv searches for dictionaries in the data-dir/dic directory.

Converting Babylon glossaries to StarDict dictionary:
dictconv INPUT_FILENAME.BGL -o OUTPUT_FILENAME.ifo
The output of this command is three files:
INPUT_FILENAME.ifo, INPUT_FILENAME.idx, INPUT_FILENAME.dict

Place all these 3 files in /usr/share/stardict/dic/ creating a separate folder for each dictionary.

+Shutting down (Aug. 22, 2014, 10:42 a.m.)

shutdown -r now
shutdown -r 7:00

+Directories (Aug. 22, 2014, 10:28 a.m.)

/bin - Essential user commands
The /bin directory contains essential commands that every user will need. This includes your login shell and basic utilities like ls. The contents of this directory are usually fixed at the time you install Linux. Programs you install later will usually go elsewhere.

/usr/bin - Most user commands
The /usr hierarchy contains the programs and related files meant for users. (The original Unix makers had a thing for abbreviation.) The /usr/bin directory contains the program binaries. If you just installed a software package and don't know where the binary went, this is the first place to look. A typical desktop system will have many programs here.

/usr/local/bin - "Local" commands
When you compile software from source code, the installed files are usually kept separate from those provided as part of your Linux distribution. That is what the /usr/local/ hierarchy is for.

/sbin - Essential System Admin Commands
The /sbin directory contains programs needed by the system administrator, like fsck, which is used to check file systems for errors. Like /bin, /sbin is populated when you install your Linux system, and rarely changes.

/usr/sbin - Non-essential System Administration Programs (binaries)
This is where you will find commands for optional system services and network servers. Desktop tools will not show up here, but if you just installed a new mail server, this is where to look for the binaries.

/usr/local/sbin - "Local" System Administration Commands
When you compile servers or administration utilities from source code, this is where the binaries normally will go.

Libraries:
Libraries are shared bits of code. On Windows these are called DLL files (Dynamic Loading Libraries). On Linux systems they are usually called SO (Shared Object) files. As to location, are you detecting a pattern yet? There are three directories where library files are placed: /lib, /usr/lib, and /usr/local/lib.

Documentation:
Documentation is a minor exception to the pattern of file placement. Pages of the system manual (man pages) follow the same pattern as the programs they document: /man, /usr/man, and /usr/local/man. You should not access these files directly, however, but by using the man command.
Many programs install additional documentation in the form of text files, HTML, or other formats that are not man pages. This extra documentation is stored in directories under /usr/share/doc or /usr/local/share/doc. (On older systems you may find this under /usr/doc instead.)


+configure (Aug. 22, 2014, 10:25 a.m.)

When installing a package, the first phase is `./configure`. This is some information about it:

The primary job of the configure script is to detect information about your system and "configure" the source code to work with it.
Usually it will do a fine job at this. The secondary job of the configure script is to allow you, the system administrator, to customize the software a bit.
Running ./configure --help should give you a list of command line arguments you can pass to the configure script. Usually these extra arguments are for enabling or disabling optional features of the software, and it is often safe to ignore them and just type ./configure to take the default configuration.

There is one common argument to configure that you should be aware of. The --prefix argument defines where you want the software installed. In most source packages this will default to /usr/local/ and that is usually what you want. But sometimes you may not have root access to the system, and you would like to install the software into your home directory. You can do this with the last command in the example, ./configure --prefix=/home/vince (where vince is your user name).
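The home-directory install described above can be sketched like this (the build commands are commented out because they require an actual source tree):

```shell
# Inside an unpacked source directory (hypothetical package):
# ./configure --prefix="$HOME/.local"
# make
# make install

# Binaries then land in $HOME/.local/bin, so add it to PATH:
export PATH="$HOME/.local/bin:$PATH"
echo "$PATH" | grep -q "/.local/bin" && echo "on PATH"   # prints: on PATH
```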

+Tarballs (Tar Archive) (Aug. 22, 2014, 10:21 a.m.)

tar -xzvf filename.tar.gz

x : eXtract
z : deal with a gzipped file (use j for a bzipped file instead)
v : verbose output
f : read from a file (rather than a tape device)
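A complete round trip (the paths under /tmp are arbitrary):

```shell
# Create a small directory, archive it with gzip, list, then extract.
mkdir -p /tmp/tardemo/src /tmp/tardemo/dest
echo "hi" > /tmp/tardemo/src/a.txt
tar -czf /tmp/tardemo/out.tar.gz -C /tmp/tardemo src   # create
tar -tzf /tmp/tardemo/out.tar.gz                       # list: src/, src/a.txt
tar -xzf /tmp/tardemo/out.tar.gz -C /tmp/tardemo/dest  # extract
cat /tmp/tardemo/dest/src/a.txt                        # prints: hi
```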

-------------------------------------------------------------

Creating a tar File:
tar -cvf output.tar /dirname

tar -cvf Projects.tar Projects --exclude=Projects/virtualenvs --exclude=".buildozer" --exclude=".git"

tar -cvf output.tar /dirname1 /dirname2 filename1 filename2

tar -cvf output.tar /home/vivek/data /home/vivek/pictures /home/vivek/file.txt

tar -cvf /tmp/output.tar /home/vivek/data /home/vivek/pictures /home/vivek/file.txt

Where,

-c : Create a tar ball.
-v : Verbose output (show progress).
-f : Output tar ball archive file name.
-x : Extract all files from archive.tar.
-t : Display the contents (file list) of an archive.

-------------------------------------------------------------

Create a tar Archive File:
tar -cf abcd.tar /home/mohsen/abcd


Untar Single file from tar File:
tar -xf abcd.tar x.png
OR
tar --extract --file=abcd.tar x.png


Untar Multiple files:
tar -xf abcd.tar "x.png" "y.png" "z.png"

-------------------------------------------------------------

Create tar.gz Archive File (compressed gzip archive):
tar -czf abcd.gz /home/mohsen/abcd


Uncompress tar.gz Archive File:
tar -xf abcd.tar.gz
tar -xf abcd.tar.gz -C /home/mohsen/Temp/


List Content tar.gz Archive File:
tar -tvf abcd.tar.gz


Untar Single file from tar.gz File:
tar -zxf abcd.tar.gz x.png
tar --extract --file=abcd.tar.gz x.png


Untar Multiple files:
tar -zxf abcd.tar.gz "x.png" "y.png" "z.png"

-------------------------------------------------------------

Create tar.bz2 Archive File:

The bz2 feature compresses and creates archive files smaller than gzip does. However, bz2 takes more time to compress and decompress files than gzip.

tar -cjf abcd.tar.bz2 /home/mohsen/abcd


Uncompress tar.bz2 Archive File:
tar -xf abcd.tar.bz2


List content tar.bz2 archive file:
tar -tvf abcd.tar.bz2


Untar single file from tar.bz2 File:
tar -jxf abcd.tar.bz2 home/mohsen/x.png
tar --extract --file=abcd.tar.bz2 /home/mohsen/x.png


Untar multiple files:
tar -jxf abcd.tar.bz2 "x.png" "y.png" "z.png"

-------------------------------------------------------------

Extract group of files using wildcard:
tar -xf abcd.tar --wildcards '*.png'
tar -zxf abcd.tar.gz --wildcards '*.png'
tar -jxf abcd.tar.bz2 --wildcards '*.png'

-------------------------------------------------------------

Add files or directories to tar archive file:
Use the option r (append)

tar -rf abcd.tar m.png
tar -rf abcd.tar images


The tar command doesn't have an option to add files or directories to an existing compressed tar.gz or tar.bz2 archive file. If we try, we will get the following error:
tar: This does not look like a tar archive
tar: Skipping to next header

-------------------------------------------------------------

Create a tar archive using xz compression:
tar -cJf abcd.tar.xz /path/to/archive/

Decompression:
tar xf abcd.tar.xz

-------------------------------------------------------------

Compress, keeping the absolute source and destination paths (-P prevents stripping the leading '/'s):
tar -cf /home/mohsen/Temp/abcd.tar -P /home/mohsen/Temp/abcd
tar -cPf /home/mohsen/Temp/abcd.tar /home/mohsen/Temp/abcd

-------------------------------------------------------------

Tar Usage and Options:

c – create an archive file.
x – extract an archive file.
v – show the progress of the archive file.
f – filename of the archive file.
t – view the contents of the archive file.
j – filter the archive through bzip2.
z – filter the archive through gzip.
r – append or update files or directories in an existing archive file.
W – verify an archive file.
wildcards – specify patterns in the Unix tar command.

-P (--absolute-names) – don't strip leading '/'s from file names

-------------------------------------------------------------

xz:
tar -cJf my_folder.tar.xz my_folder

-------------------------------------------------------------

tar zc --exclude node_modules -f tiptong.tar.gz tiptong

-------------------------------------------------------------

+apt-get (Aug. 22, 2014, 10:21 a.m.)

apt-get upgrade
Updating the software

apt-get -s upgrade
To simulate an update installation, i.e. to see which software will be updated.

+Search for text in files (Aug. 9, 2015, 9:45 p.m.)

find . -name "*.txt" | xargs grep -i "text_pattern"
------------------------------------------------------------------------
find / -type f -exec grep -l "text-to-find-here" {} \;
------------------------------------------------------------------------
grep word_to_find file_name -n --color
The --color flag highlights the matched words
------------------------------------------------------------------------
grep "<the word or text to be searched>" / -Rn --color -T
Description:
/: The location to be searched
R: Search in recursive mode
n: Display the number of the line in which the occurrence word or text is located
color: Display the search result colored
T: Separate the search result with a tab
l: stands for "show the file name, not the result itself"
------------------------------------------------------------------------
grep -Rin "text-to-find-here" /
OR
grep --color -Rin "text-to-find-here" / (to make it colorful)
OR
egrep -w -R 'word1|word2' ~/projects/ (for two words)

i stands for upper/lower case
w stands for whole word
----------------------------------------------------------------
Find specific files and search for specific words:

find . -name '*.py' -exec grep -Rin 'resize' {} +
Finds the word `resize` in python files.
OR
find -iname "*.py" | xargs grep -i django

+dpkg (Aug. 22, 2014, 10:19 a.m.)

dpkg --get-selections
To get a list of all installed software

dpkg-query -W
To get a list of installed software packages

dpkg -l
Descriptions of installed software packages

+Driver - See PCI devices along with their kernel modules (device drivers) (Aug. 22, 2014, 10:05 a.m.)

lspci -k

It first shows you all the PCI devices attached to your system and then tells you what kernel modules (device drivers) are being used by them.

+sources.list (Aug. 22, 2014, 9:58 a.m.)

deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main

deb http://ftp.debian.org/debian/ jessie-updates main
deb-src http://ftp.debian.org/debian/ jessie-updates main


deb http://ftp.debian.org/debian/ jessie main
deb-src http://ftp.debian.org/debian/ jessie main

-----------------------------------------------------------------------

deb http://deb.debian.org/debian stretch main
deb-src http://deb.debian.org/debian stretch main

deb http://deb.debian.org/debian stretch-updates main
deb-src http://deb.debian.org/debian stretch-updates main

deb http://security.debian.org/debian-security/ stretch/updates main
deb-src http://security.debian.org/debian-security/ stretch/updates main

+PIP (Aug. 22, 2014, 9:14 a.m.)

Install SomePackage and it’s dependencies from PyPI using Requirement Specifiers
pip install SomePackage # latest version
pip install SomePackage==1.0.4 # specific version
pip install 'SomePackage>=1.0.4' # minimum version

Install a list of requirements specified in a file.
pip install -r requirements.txt
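The specifiers above compose inside a requirements file as well; a hypothetical example (package names are placeholders):

```
# requirements.txt -- hypothetical pins
SomePackage==1.0.4          # exact version
OtherPackage>=2.0,<3.0      # bounded range
-e git+https://git.repo/some_pkg.git#egg=SomePackage   # editable VCS checkout
```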

Upgrade an already installed SomePackage to the latest from PyPI.
pip install --upgrade SomePackage

Install a local project in “editable” mode
pip install -e . # project in current directory
pip install -e path/to/project # project in another directory

Install a project from VCS in “editable” mode. See the sections on VCS Support and Editable Installs.
pip install -e git+https://git.repo/some_pkg.git#egg=SomePackage # from git
pip install -e hg+https://hg.repo/some_pkg#egg=SomePackage # from mercurial
pip install -e svn+svn://svn.repo/some_pkg/trunk/#egg=SomePackage # from svn
pip install -e git+https://git.repo/some_pkg.git@feature#egg=SomePackage # from 'feature' branch

Install a package with setuptools extras.
pip install SomePackage[PDF]
pip install SomePackage[PDF]==3.0
pip install -e .[PDF]==3.0 # editable project in current directory

Install a particular source archive file.
pip install ./downloads/SomePackage-1.0.4.tar.gz
pip install http://my.package.repo/SomePackage-1.0.4.zip

Install from an alternative package repository (a different index, not PyPI):
pip install --index-url http://my.package.repo/simple/ SomePackage

Search an additional index during install, in addition to PyPI:
pip install --extra-index-url http://my.package.repo/simple SomePackage

Install from a local flat directory containing archives (and don’t scan indexes):
pip install --no-index --find-links=file:///local/dir/ SomePackage
pip install --no-index --find-links=/local/dir/ SomePackage
pip install --no-index --find-links=relative/dir/ SomePackage

Find pre-release and development versions, in addition to stable versions. By default, pip only finds stable versions.
pip install --pre SomePackage

--------------------------------------------------------------------------

pip uninstall [options] <package> ...
pip uninstall [options] -r <requirements file> ...

Description:
pip is able to uninstall most installed packages. Known exceptions are:
Pure distutils packages installed with python setup.py install, which leave behind no metadata to determine what files were installed.
Script wrappers installed by python setup.py develop.

Options:
-r, --requirement <file>
Uninstall all the packages listed in the given requirements file. This option can be used multiple times.

-y, --yes
Don't ask for confirmation of uninstall deletions.

Examples:
Uninstall a package.
pip uninstall simplejson

--------------------------------------------------------------------------

pip freeze [options]

Description:
Output installed packages in requirements format.

Options:
-r, --requirement <file>
Use the order in the given requirements file and its comments when generating output.

-f, --find-links <url>
URL for finding packages, which will be added to the output.

-l, --local
If in a virtualenv that has global access, do not output globally-installed packages.

Examples:
Generate output suitable for a requirements file.
$ pip freeze
Jinja2==2.6
Pygments==1.5
Sphinx==1.1.3
docutils==0.9.1

Generate a requirements file and then install from it in another environment.
$ env1/bin/pip freeze > requirements.txt
$ env2/bin/pip install -r requirements.txt

--------------------------------------------------------------------------

pip list [options]

Description:
List installed packages, including editable ones.

Options:
-o, --outdated
List outdated packages (excluding editables)

-u, --uptodate
List up-to-date packages (excluding editables)

-e, --editable
List editable projects.

-l, --local
If in a virtualenv that has global access, do not list globally-installed packages.

--pre
Include pre-release and development versions. By default, pip only finds stable versions.

Examples:
List installed packages.
$ pip list
Pygments (1.5)
docutils (0.9.1)
Sphinx (1.1.2)
Jinja2 (2.6)

List outdated packages (excluding editables), and the latest version available
$ pip list --outdated
docutils (Current: 0.9.1 Latest: 0.10)
Sphinx (Current: 1.1.2 Latest: 1.1.3)

--------------------------------------------------------------------------

pip show [options] <package> ...

Description:
Show information about one or more installed packages.

Options:
-f, --files
Show the full list of installed files for each package.

Examples:
Show information about a package:
$ pip show sphinx
The output will be:
Name: Sphinx
Version: 1.1.3
Location: /my/env/lib/pythonx.x/site-packages
Requires: Pygments, Jinja2, docutils

--------------------------------------------------------------------------

pip search [options] <query>

Description:
Search for PyPI packages whose name or summary contains <query>.

Options:
--index <url>
Base URL of Python Package Index (default https://pypi.python.org/pypi)

Examples:
Search for “peppercorn”
pip search peppercorn
pepperedform - Helpers for using peppercorn with formprocess.
peppercorn - A library for converting a token stream into [...]

--------------------------------------------------------------------------

pip zip [options] <package> ...

Description:
Zip individual packages.

Options:
--unzip
Unzip (rather than zip) a package.

--no-pyc
Do not include .pyc files in zip files (useful on Google App Engine).

-l, --list
List the packages available, and their zip status.

--sort-files
With --list, sort packages according to how many files they contain.

--path <paths>
Restrict operations to the given paths (may include wildcards).

-n, --simulate
Do not actually perform the zip/unzip operation.

--------------------------------------------------------------------------

This command will download the zipped/tar file in the specified location:
pip download `package_name`


pip download \
--only-binary=:all: \
--platform linux_x86_64 \
--python-version 33 \
--implementation cp \
--abi cp34m \
pip>=8


pip download \
--only-binary=:all: \
--platform macosx-10_10_x86_64 \
--python-version 27 \
--implementation cp \
SomePackage

--------------------------------------------------------------------------

pip install --allow-all-external pil --allow-unverified pil

--------------------------------------------------------------------------

ReadTimeoutError: HTTPSConnectionPool(host='pypi.python.org', port=443)

pip install --default-timeout=200 <package_name>

--------------------------------------------------------------------------

pip install pip-review
pip-review --local --interactive

--------------------------------------------------------------------------

mkdir pip_files && cd pip_files
pip download -r requirements.txt

--------------------------------------------------------------------------

+Hardware - Modem - WiMAX modem (Aug. 17, 2015, 9:55 a.m.)

For installing the driver, install these packages first:
apt-get install linux-headers-`uname -r` libssl-dev usb-modeswitch zip
---------------------------------------------------------------------------------------------
The wimaxd binary would not get recognized by the terminal (it was not on the PATH), so I copied it into the /bin directory.
Then there was the error "error while loading shared libraries: libeap_supplicant.so: cannot open shared object file".
To fix it, I added the directory containing "libeap_supplicant.so" to /etc/ld.so.conf and re-ran ldconfig.

Another incident, not related to WiMAX: one day while installing and running Apache, there was an error similar to the WiMAX one: "error while loading shared libraries: libexpat.so.0: cannot open shared object file". I searched for the file using the "locate" command, copied it into "/usr/lib", ran Apache again, and it was solved!
---------------------------------------------------------------------------------------------
WiMAX linux-headers error:
make: *** /lib/modules/3.13.0-37-generic/source: No such file or directory. Stop.

1-rm /lib/modules/3.13.0-37-generic/source
2-ln -s /usr/src/linux-headers-3.13.0-37 /lib/modules/3.13.0-37-generic/source
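The commands above hard-code one kernel version; a version-agnostic sketch, shown as a dry run with echo since the real commands need root and assume headers live under /usr/src/linux-headers-$(uname -r):

```shell
# Rebuild the broken "source" symlink for the *running* kernel.
# Dry run: the commands are echoed; drop the echo (and use sudo) to apply.
krel=$(uname -r)
echo "rm /lib/modules/$krel/source"
echo "ln -s /usr/src/linux-headers-$krel /lib/modules/$krel/source"
```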
--------------------------------------------------------------------------------------------
Usage:
1-su
2-wimaxd -D -c wimaxd.conf
3- (in another console) wimaxc -i
3.1-search
3.2-connect
4-(in another console) su
4.1-dhclient eth1

+Version, Distro, Release (Aug. 4, 2014, 4:38 a.m.)

uname -r
-------------------
Find or identify which version of Debian Linux you are running:
cat /etc/debian_version
-------------------
What is my current linux distribution
cat /etc/issue
-------------------
How Do I Find Out My Kernel Version?
uname -mrs
-------------------
lsb_release Command:
The lsb_release command displays certain LSB (Linux Standard Base) and distribution-specific information.
lsb_release -a
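These commands can be combined into one snippet; a sketch that prefers /etc/os-release (present on most modern distros) and falls back to uname:

```shell
# Print a one-line distro description, falling back to the kernel name.
if [ -r /etc/os-release ]; then
    # os-release is a shell-sourceable key=value file.
    . /etc/os-release
    echo "${PRETTY_NAME:-$NAME}"
else
    uname -mrs
fi
```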

+List hardware information (Aug. 4, 2014, 4:37 a.m.)

lshw

+Hard Disk information (Aug. 4, 2014, 4:36 a.m.)

fdisk -l

+Sudoer (Aug. 4, 2014, 4:36 a.m.)

visudo
Scroll to the bottom of the page and enter:
mohsen ALL=(ALL) ALL
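A safer alternative to editing the main file is a drop-in under /etc/sudoers.d, syntax-checked before it goes live. A sketch; "mohsen" is the example user from above:

```
# /etc/sudoers.d/mohsen
# Check the syntax first with:  visudo -cf /etc/sudoers.d/mohsen
mohsen ALL=(ALL) ALL
```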

Mac OS
+VMware Tools (Jan. 23, 2017, 1:16 p.m.)

Darwin Image for VMware Tools for Mac OS X:
http://www.insanelymac.com/forum/files/file/31-vmware-tools-for-os-x-darwiniso/

+Password Reset (Sept. 12, 2016, 12:39 a.m.)

1-Turn off your Mac (choose Apple > Shut Down).
2-Press the power button while holding down Command-R. The Mac will boot into Recovery mode. ...
3-Select Disk Utility and press Continue.
4-Choose Utilities > Terminal.
5-Enter resetpassword (all one word, lowercase letters) and press Return.
6-Select the volume containing the account (normally this will be your Main hard drive).
7-Choose the account to change with Select the User Account.
8-Enter a new password and re-enter it into the password fields.
9-Enter a new password hint related to the password.
10-Click Save.
11-A warning will appear that the password has changed, but not the Keychain Password. Click OK.
12-Click Apple > Shut Down.

Now start up the Mac. You can login using the new password.

+Install Ionic (June 21, 2016, 11:08 p.m.)

brew install npm

sudo npm install -g cordova ionic

npm install -g ios-sim

npm install -g ios-deploy
-----------------------------
ionic platform add ios
ionic resources
-----------------------------
ionic build ios

+Speed Up Mac by Disabling Features (June 21, 2016, 11:13 p.m.)

Disable Open/Close Window Animations
defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false
-------------------------------------
Disable Quick Look Animations
defaults write -g QLPanelAnimationDuration -float 0
-------------------------------------
Disable Window Size Adjustment Animations
defaults write NSGlobalDomain NSWindowResizeTime -float 0.001
-------------------------------------
Disable Dock Animations

defaults write com.apple.dock launchanim -bool false
-------------------------------------
Disable the “Get Info” Animation
defaults write com.apple.finder DisableAllAnimations -bool true
-------------------------------------
Get rid of Dashboard
defaults write com.apple.dashboard mcx-disabled -boolean YES
killall Dock
-------------------------------------
Speed Up Window Resizing Animation Speed
defaults write -g NSWindowResizeTime -float 0.003
-------------------------------------
Disable The Eye Candy Transparent Windows & Effects
System Preferences -> Accessibility -> Display
Check the box for “Reduce Transparency”
-------------------------------------
Disable Unnecessary Widgets & Extensions in Notifications Center
System Preferences -> Extensions -> Today
Uncheck all options you don’t need or care about
-------------------------------------

+Disable SIP (June 20, 2016, 12:37 a.m.)

csrutil status
csrutil disable
reboot

+Recovery HD partition with El Capitan bootable via Clover (June 19, 2016, 7:46 p.m.)

1- diskutil list
You will get the partition list. Note that the Recovery Partition is the one named "Recovery HD".

2- Create a folder in Volumes folder for Recovery HD and mount it there:
sudo mkdir /Volumes/Recovery\ HD
sudo mount -t hfs /dev/disk0s3 /Volumes/Recovery\ HD

3- Remove the file `prelinkedkernel` from the directory `com.apple.recovery.boot`:
sudo rm -rf /Volumes/Recovery\ HD/com.apple.recovery.boot/prelinkedkernel

4- Copy your working `prelinkedkernel` there:
sudo cp /System/Library/PrelinkedKernels/prelinkedkernel /Volumes/Recovery\ HD/com.apple.recovery.boot/

5- Reboot

+Mac OS X on Virtualbox (June 12, 2016, 3:29 p.m.)

vboxmanage modifyvm "Mac OS X 10.11" --cpuidset 00000001 000106e5 00100800 0098e3fd bfebfbff

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiSystemProduct" "iMac11,3"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiSystemVersion" "1.0"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/efi/0/Config/DmiBoardProduct" "Iloveapple"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/smc/0/Config/DeviceKey" "ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"

vboxmanage setextradata "Mac OS X 10.11" "VBoxInternal/Devices/smc/0/Config/GetKeyFromRealSMC" 1

VBoxManage setextradata "Mac OS X 10.11" "VBoxInternal2/EfiBootArgs" " "

+Convert Installation DMG to ISO - Create a Bootable ISO (June 11, 2016, 10:04 p.m.)

You need to run these commands on a Mac OS X:

# Mount the installer image
hdiutil attach /Applications/Install\ OS\ X\ El\ Capitan.app/Contents/SharedSupport/InstallESD.dmg \
    -noverify -nobrowse -mountpoint /Volumes/install_app

# Create the ElCapitan Blank ISO Image of 7316mb with a Single Partition - Apple Partition Map
hdiutil create -o /tmp/ElCapitan.cdr -size 7316m -layout SPUD -fs HFS+J

# Mount the ElCapitan Blank ISO Image
hdiutil attach /tmp/ElCapitan.cdr.dmg -noverify -nobrowse -mountpoint /Volumes/install_build

# Restore the Base System into the ElCapitan Blank ISO Image
asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase

# Remove Package link and replace with actual files
rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages
cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/

# Copy El Capitan installer dependencies
cp -rp /Volumes/install_app/BaseSystem.chunklist /Volumes/OS\ X\ Base\ System/BaseSystem.chunklist
cp -rp /Volumes/install_app/BaseSystem.dmg /Volumes/OS\ X\ Base\ System/BaseSystem.dmg

# Unmount the installer image
hdiutil detach /Volumes/install_app

# Unmount the ElCapitan ISO Image
hdiutil detach /Volumes/OS\ X\ Base\ System/

# Convert the ElCapitan ISO Image to ISO/CD master (Optional)
hdiutil convert /tmp/ElCapitan.cdr.dmg -format UDTO -o /tmp/ElCapitan.iso

# Rename the ElCapitan ISO Image and move it to the desktop
mv /tmp/ElCapitan.iso.cdr ~/Desktop/ElCapitan.iso

+Commands (June 9, 2016, 1:45 p.m.)

Locate command:
To create the database for using `locate` command, run the following command:
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.locate.plist

updatedb ==> sudo /usr/libexec/locate.updatedb
----------------------------------------------------------------------

+Installing Xcode (June 6, 2016, 3:31 p.m.)

For downloading Xcode or other development tools, you need to log into apple.com using your Apple ID account and then open the following link:
https://developer.apple.com/downloads/

Download Xcode and Command Line Tools!

+Applications (June 5, 2016, 2:04 p.m.)

brew install proxychains-ng

sudo nano /usr/local/Cellar/proxychains-ng/4.11/etc/proxychains.conf
----------------------------------------------------------------
brew install npm
----------------------------------------------------------------
brew install ssh-copy-id
----------------------------------------------------------------
brew install tmux
----------------------------------------------------------------

+Installing Homebrew (June 5, 2016, 1:47 p.m.)

Reference Site:
http://brew.sh/
------------------------------------------------
1-You need to install the Developer Tools first. Run the `gcc --version` command to check whether the tools are already installed. If they are not, a dialog will open asking whether you want to install them; choose Install.

2-The website says you only need to use the following command to install brew (but it might be blocked for us in Iran, as of the time of writing this tutorial):
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

If it is still blocked, open the following URL in a proxy-enabled browser and save the script on your Mac:
https://raw.githubusercontent.com/Homebrew/install/master/install

Install it using this command:
ruby brew.sh

Mail Server
+Virtual domains (Aug. 22, 2014, 9:54 a.m.)

1-sudo nano /etc/postfix/main.cf

2-Add these lines:
virtual_alias_domains = mohsenhassani.com facemelk.com tamaral.ir nimkatonline.com
virtual_alias_maps = hash:/etc/postfix/virtual

3-Create the file "/etc/postfix/virtual" and specify the domains and users to accept mail for.
info@mohsenhassani.com mohsen
support@mohsenhassani.com mohsen

info@facemelk.com facemelk
facemelk@facemelk.com facemelk
mail@facemelk.com facemelk
support@facemelk.com facemelk

info@nimkatonline.com nimkatonline
mail@nimkatonline.com nimkatonline
support@nimkatonline.com nimkatonline
sales@nimkatonline.com nimkatonline

4-postmap /etc/postfix/virtual

5-/etc/init.d/postfix restart
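Conceptually, the virtual file is a key-to-value table: postmap compiles it into a hash database, and `postmap -q address /etc/postfix/virtual` queries it. A portable sketch that emulates the lookup with awk on a sample map (the addresses are the ones from the note above):

```shell
# Emulate "postmap -q info@mohsenhassani.com /etc/postfix/virtual"
# using awk on a throwaway copy of the map.
tmp=$(mktemp)
printf 'info@mohsenhassani.com mohsen\nsupport@mohsenhassani.com mohsen\n' > "$tmp"

# First column is the address (key), second is the local user (value).
awk -v key='info@mohsenhassani.com' '$1 == key { print $2 }' "$tmp"

rm -f "$tmp"
```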

+Find Postfix mail server version (Dec. 15, 2018, 2:54 a.m.)

postconf -d mail_version

+Roundcube (Dec. 15, 2019, 2:52 a.m.)

1- You will need these packages for Roundcube installer:
apt install php-mbstring php-gd php-imagick php-pgsql php-intl php-pear php-zip php-common php-cli php-fpm


2- Download and extract the latest "complete" Roundcube version from:
https://roundcube.net/download/
Extract it and give it write/read permission:
chmod 777 roundcubemail -R


3- Copy the "PHP Configuration" from my notes in "Nginx" category.


4- Create a postgres user "roundcube", with a proper password, and a database named "roundcubemail".
- Download the Roundcube Webmail initial database structure.
- You need to DOWNLOAD this file; do not copy & paste its content, or it will get corrupted.
https://raw.githubusercontent.com/roundcube/roundcubemail/master/SQL/postgres.initial.sql
psql -U roundcube -f /tmp/postgres.initial.sql roundcubemail


5- Edit the file "/etc/php/7.0/fpm/php.ini" and set:
date.timezone = 'Asia/Tehran'
upload_max_filesize = 300M
post_max_size = 300M


6- After restarting the required services, such as Nginx and probably php7.0-fpm, browse the address:
http://mail.mohsenhassani.com/installer/


7- Add the following line to the file /srv/roundcube/config/config.inc.php:
$config['mail_domain'] = 'mohsenhassani.com';


You can edit the settings and configurations you selected or filled in on the installer web page using this file:
roundcube/config/config.inc.php



For debug purpose:
tail -f /srv/roundcube/logs/errors
tail -f /var/log/mail*.log

+Web Mail Installation (Dec. 15, 2019, 2:52 a.m.)

apt install postfix dovecot-core dovecot-imapd

----------------------------------------------------

For connecting your cellphone to the webmail:

Add these lines to /etc/postfix/main.cf
mydestination = mohsenhassani.com (Do not put mail.mohsenhassani.com. Only the main domain name!)
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth

Edit these lines from /etc/dovecot/conf.d/10-auth.conf:
disable_plaintext_auth = no
auth_mechanisms = plain login


If there is any problem when connecting your cellphone to the webmail, check the logs to track it down:
tail -f /var/log/mail*.log

----------------------------------------------------

Edit the file /etc/dovecot/conf.d/10-master.conf:

# Postfix smtp-auth
unix_listener /var/spool/postfix/private/auth {
  mode = 0666
  user = postfix
  group = postfix
}

----------------------------------------------------

For having "Maildir", edit the file /etc/dovecot/conf.d/10-mail.conf:
mail_location = maildir:~/Maildir

And the file /etc/postfix/main.cf:
home_mailbox = Maildir/

----------------------------------------------------

After making the above changes, restart the services:
dovecot
postfix

----------------------------------------------------

Debug IMAP:

telnet mail.mohsenhassani.com 143

Now type each line as a command:
a login USERNAME PASSWORD
a examine inbox
a logout

----------------------------------------------------

When receiving mails, I noticed a "delivered to command: procmail -a" message in the logs, and mails would not appear in the inbox. To solve the problem I had to use the following commands:

postconf -e 'home_mailbox = Maildir/'
postconf -e 'mailbox_command ='
/etc/init.d/postfix restart

----------------------------------------------------

+TXT Records (Dec. 15, 2019, 2:51 a.m.)

Create an account in https://www.agari.com, and using the instructions create DMARC DNS records.

You need to create TXT record like this:
Host Name: _dmarc.mohsenhassani.com
Destination: <The values the agari.com site gives you> (without the double quotations)


Description:

DMARC stands for “Domain-based Message Authentication, Reporting & Conformance”; it is an email authentication, policy, and reporting protocol. It builds on the widely deployed SPF and DKIM protocols, adding linkage to the author (“From:”) domain name, published policies for recipient handling of authentication failures, and reporting from receivers to senders, to improve and monitor protection of the domain from fraudulent email.

-----------------------------------------------------------

Creating an SPF or Caller ID record:

Create a TXT record:
Host Name: mail.mohsenhassani.com
Destination: v=spf1 mx ip4:185.94.96.67 -all
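Reading that record mechanism by mechanism (the values are the ones from the record above):

```
v=spf1              SPF version 1
mx                  hosts listed in the domain's MX records may send mail
ip4:185.94.96.67    this IPv4 address may send mail
-all                hard-fail everything else (~all would be a soft-fail)
```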

-----------------------------------------------------------

+Test your Reverse PTR record (April 8, 2019, 2:51 a.m.)

http://mxtoolbox.com/ReverseLookup.aspx

+Is your domain's SPF record correct? (Dec. 15, 2018, 2:50 a.m.)

https://www.kitterman.com/spf/validate.html

+Is your domain's DKIM record correct? (Dec. 15, 2018, 2:50 a.m.)

http://www.dkim.org/

+Check your server IP is not on any email blacklists (Dec. 15, 2018, 2:48 a.m.)

whatismyipaddress.com/blacklist-check

+Description (Aug. 22, 2014, 9:49 a.m.)

Debian Mail Server Setup with Postfix + Dovecot + SASL

Postfix is an attempt to provide an alternative to the widely-used Sendmail program. Postfix attempts to be fast, easy to administer, and hopefully secure, while at the same time being sendmail compatible enough to not upset your users.

Dovecot is an open source IMAP and POP3 server for Linux/UNIX-like systems, written with security primarily in mind. Dovecot is an excellent choice for both small and large installations. It’s fast, simple to set up, requires no special administration and it uses very little memory.

When sending mail, the Postfix SMTP client can look up the remote SMTP server hostname or destination domain (the address right-hand part) in a SASL password table, and if a username/password is found, it will use that username and password to authenticate to the remote SMTP server. And as of version 2.3, Postfix can be configured to search its SASL password table by the sender email address.

Note: If you install a Postfix/Dovecot mail server, you will ONLY be able to send mail within your network. You can only send mail externally if you set up SASL authentication with TLS; otherwise you get a “Relay Access Denied” error.

SASL configuration + TLS (Simple Authentication and Security Layer with Transport Layer Security) is used mainly to authenticate users before sending email to an external server, thus restricting relay access. If your relay server is left open, spammers could use your mail server to send spam, so it is essential to protect your mail server from misuse.

Misc
+Telegram Font Problem (Sept. 10, 2019, 12:13 p.m.)

1- Download a TTF font:
https://github.com/rastikerdar/vazir-font/tree/master/dist


2- Create a directory and copy the font into it:
Make sure to rename the font file to all lowercase letters.
mkdir ~/.fonts/


3- Edit the Telegram font config file:
vim ~/.local/share/TelegramDesktop/tdata/fc-custom-1.conf

Replace every "sans-serif", "sans serif", "mono", "Mono" with "vazir"

+CAPTCHA (Oct. 14, 2018, 9:39 a.m.)

CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart.

It is a challenge test to differentiate between humans and automated bots based on the response. reCAPTCHA is one of the CAPTCHA spam-protection services, bought by Google. It is now offered free to webmasters, and Google also uses reCAPTCHA on its own services, like Google Search.

-------------------------------------------------------------

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

+List of administrative divisions by country (Sept. 15, 2018, 12:57 p.m.)

https://en.wikipedia.org/wiki/List_of_administrative_divisions_by_country

+Accuracy of latitude and longitude (July 5, 2018, 8:39 a.m.)

Decimal places   Precision          Roughly
1                10 kilometers      6.2 miles
2                1 kilometer        0.62 miles
3                100 meters         About 328 feet
4                10 meters          About 33 feet
5                1 meter            About 3 feet
6                10 centimeters     About 4 inches
7                1.0 centimeter     About 1/2 an inch
8                1.0 millimeter     The width of paperclip wire.
9                0.1 millimeter     The width of a strand of hair.
10               10 microns         A speck of pollen.
11               1.0 micron         A piece of cigarette smoke.
12               0.1 micron         You're doing virus-level mapping at this point.
13               10 nanometers      Does it matter how big this is?
14               1.0 nanometer      Your fingernail grows about this far in one second.
15               0.1 nanometer      An atom. An atom! What are you mapping?
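The scale of this table follows from one degree of latitude being roughly 111 km, so every extra decimal place divides the precision by ten. A quick check with awk:

```shell
# Approximate precision of N decimal places of latitude.
# One degree of latitude is ~111 km; each decimal place divides that by 10.
awk 'BEGIN {
    m = 111000   # metres per degree of latitude (approximate)
    for (n = 1; n <= 5; n++) {
        m /= 10
        printf "%d decimal place(s) ~ %g m\n", n, m
    }
}'
```

The 3-decimal-place row comes out at about 111 m, matching the "100 meters" row of the table.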

+Exporting an object as svg from inkscape (May 11, 2019, 1:42 a.m.)

A straightforward method is the following:

Select the object(s) to export.
"Resize page to drawing or selection" with Ctrl+Shift+R.
"Invert selection" with !, and Del all other objects.
"Save As" with Ctrl+Shift+S.
Select Optimized SVG as the format if you want to use it on the web.

+Firefox - DownThemAll addon - exclude 128 MP3s (May 9, 2017, 3:49 p.m.)

/[^128]...\.mp3$/,1080,Full HD,HQ

/\/[^\/\?128]+\.mp3$/,mp3

/\/[^\/\?128]+\.mp3$/,320,720p,Full HD,HQ

+Firefox - Disable Auto Refresh (May 7, 2017, 5:18 p.m.)

about:config
accessibility.blockautorefresh

+Serial Numbers (June 7, 2016, 10:50 a.m.)

VMware Workstation 12:
CA5MH-6YF0K-480WQ-8YM5V-XLKV4
-------------------------------------------------------------------
PyCharm + IntelliJ IDEA
For any change or update, follow the comments on this website: http://us.idea.lanyus.com/

2016.1
https://бэкдор.рф/pycharm-activate-key-3-4-5-2016/

2016.2
http://jetbrains.tencent.click/
-------------------------------------------------------------------
iLO:
34T6L-4C9PX-X8D9C-GYD26-8SQWM
-------------------------------------------------------------------

+Telegram (March 13, 2016, 11:36 a.m.)

$('.im_message_webpage_photo').parent().parent().parent().parent().click();
--------------------------------------------------------------------------------------------
var download_links = $("a[data-content='Download'], span[data-content='Download'], span:contains('Voice message')");
download_links[0].click();
var index = 1;
console.log(download_links.length);
download_interval = setInterval(function() {
if (download_links.length > index) {
download_links[index].click();
console.log('Another One Was CLICKED...');
console.log('Mohsen Hassani ==> Downloading File (' + index + ') out of (' + download_links.length + ') ...');
index ++;
} else {
clearInterval(download_interval);
}
}, 30000);
---------------------------------------------------------------------------------------------
// Mark Completed Downloads For Deletion
var completed_downloads = $('.im_message_file_button.im_message_file_button_dl_audio');
console.log(completed_downloads.length);
completed_downloads.parent().click();
---------------------------------------------------------------------------------------------
// Mark Videos For Deletion
var completed_downloads = $('span[data-content="Save file"]');
console.log(completed_downloads.length);
completed_downloads.parent().parent().parent().parent().parent().parent().parent().click();
---------------------------------------------------------------------------------------------
// Mark Images For Deletion
$('.im_message_photo_thumb').parent().parent().parent().parent().parent().parent().parent().click();
--------------------------------------------------------------------------------------------
// Mark Files For Deletion
var completed_downloads = $('a[data-content="Save file"]');
console.log(completed_downloads.length);
completed_downloads.parent().parent().parent().parent().click();
---------------------------------------------------------------------------------------------
// Mark Voice Messages For Deletion
$("a[data-content='Play'].nocopy").parent().find('a:first-child:not([data-content="Download"])').parent().click();
---------------------------------------------------------------------------------------------
// Mark images with text
$('img.im_message_photo_thumb').parent().parent().click();
---------------------------------------------------------------------------------------------
// Stickers in Reply
$("div[my-load-sticker='']").parent().click();
---------------------------------------------------------------------------------------------

+Firefox - A script on this page may be busy, or it may have stopped responding... (April 15, 2015, 4:08 p.m.)

In the Location bar, type about:config and press Enter.
Click I'll be careful, I promise! to continue to the about:config page.
In the about:config page, search for the preference dom.max_script_run_time, and double-click on it.
In the Enter integer value prompt, type 20.
Press OK.

+Web Proxies (Feb. 9, 2015, 1:17 p.m.)

http://buka.link/

MongoDB
MySQL
+Access denied with non-root user (July 14, 2019, 11:30 a.m.)

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '123456';

+Recover root password (April 15, 2018, 5:25 p.m.)

1- /etc/init.d/mysql stop


2- Using the following command, find the processes that use mysql and stop them with `kill -9 <pid>`:
ps aux | grep mysql


3- /usr/sbin/mysqld --skip-grant-tables --skip-networking &


4- mysql -u root


5- FLUSH PRIVILEGES;


6-
Reset/update your password:
SET PASSWORD FOR root@'localhost' = PASSWORD('password');

If you have a mysql root account that can connect from everywhere, you should also do:
UPDATE mysql.user SET Password=PASSWORD('newpwd') WHERE User='root';

To target only the root account that can connect from any host ('%'):
USE mysql
UPDATE user SET Password = PASSWORD('newpwd')
WHERE Host = '%' AND User = 'root';


7- FLUSH PRIVILEGES;


8-/etc/init.d/mysql start
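Steps 5 to 7 can be collected into a single SQL script; a sketch, with the actual mysql invocation left commented out because it needs the server running with --skip-grant-tables (the password is a placeholder):

```shell
# Write the reset statements to a script, then feed it to the
# --skip-grant-tables instance started in step 3.
cat > /tmp/reset_root.sql <<'SQL'
FLUSH PRIVILEGES;
SET PASSWORD FOR root@'localhost' = PASSWORD('new-password-here');
FLUSH PRIVILEGES;
SQL

# mysql -u root < /tmp/reset_root.sql   # run against the running instance

cat /tmp/reset_root.sql
rm -f /tmp/reset_root.sql
```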

+Error - Access denied for user 'test'@'localhost' (using password: YES) (April 7, 2018, 9:34 p.m.)

GRANT INSERT, SELECT, DELETE, UPDATE ON database.* TO 'user'@'localhost' IDENTIFIED BY ' ';

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '123456';

+Galera Cluster with MySQL (Sept. 4, 2017, 11:51 a.m.)

We need at least 3 servers in a network.

1- apt-get install galera-3 galera-arbitrator-3 default-mysql-server rsync
----------------------------------------------------------------
2- Create the following file with the content:
vim /etc/mysql/conf.d/galera.cnf

[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://first_ip,second_ip,third_ip" # The first_ip in here is 10.10.0.101

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="10.10.0.101"
wsrep_node_name="node1"


DO THE SAME for the other two servers. Change the last two lines based on the server's configs.
----------------------------------------------------------------
3- vim /etc/mysql/mariadb.conf.d/50-server.cnf
Comment out this line:
bind-address = 127.0.0.1

DO THE SAME for the other two servers.
----------------------------------------------------------------
Shut down mysql on all of the servers:
4- systemctl stop mysql
----------------------------------------------------------------
5- On the first server:
# galera_new_cluster

On the 2nd & 3rd servers:
systemctl start mysql
----------------------------------------------------------------

+Remove root password (Feb. 15, 2017, 6:24 p.m.)

set password for root@localhost=PASSWORD('');

+Queries (Feb. 16, 2015, 11:41 a.m.)

show databases;

---------------------------------------------------------------------------------------------

SELECT * FROM trunk WHERE status like '%unre%' and date_time BETWEEN DATE_SUB(NOW(), INTERVAL 4 DAY) AND NOW();

----------------------------------------------------------------

SELECT count(*) as errors FROM trunk WHERE status like '%unre%' and date_time BETWEEN DATE_SUB(NOW(), INTERVAL 4 DAY) AND NOW();
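
The same date-window filter can be reproduced portably by computing the boundaries client-side and binding them as parameters. A minimal sketch using Python's built-in sqlite3 (the trunk table and its data are made up here; MySQL's DATE_SUB/NOW are replaced by datetime arithmetic):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trunk (status TEXT, date_time TEXT)")

now = datetime.now()
rows = [
    ("unregistered", (now - timedelta(days=1)).isoformat(" ")),   # inside window
    ("unreachable",  (now - timedelta(days=3)).isoformat(" ")),   # inside window
    ("unreachable",  (now - timedelta(days=10)).isoformat(" ")),  # outside window
    ("ok",           (now - timedelta(days=2)).isoformat(" ")),   # wrong status
]
conn.executemany("INSERT INTO trunk VALUES (?, ?)", rows)

# Equivalent of BETWEEN DATE_SUB(NOW(), INTERVAL 4 DAY) AND NOW(),
# with the boundaries computed client-side:
start = (now - timedelta(days=4)).isoformat(" ")
end = now.isoformat(" ")
(errors,) = conn.execute(
    "SELECT count(*) FROM trunk "
    "WHERE status LIKE '%unre%' AND date_time BETWEEN ? AND ?",
    (start, end),
).fetchone()
print(errors)  # 2: only the recent '%unre%' statuses match
```

Binding the boundaries as parameters also avoids string-quoting mistakes in the date literals.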

----------------------------------------------------------------

select * from cdr order by id desc limit 1;
select * from (select * from cdr order by acctid) as t1 order by acctid desc limit 100\G

----------------------------------------------------------------

show tables from asterisk;

----------------------------------------------------------------

show columns from cdr;

----------------------------------------------------------------

Select unique values:
SELECT DISTINCT mycolumn FROM mytable
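
This can be tried locally with Python's built-in sqlite3, which accepts the same DISTINCT syntax (table and data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (mycolumn TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)",
                 [("a",), ("b",), ("a",), ("c",), ("b",)])

# DISTINCT collapses duplicate values into one row each
unique = sorted(r[0] for r in
                conn.execute("SELECT DISTINCT mycolumn FROM mytable"))
print(unique)  # ['a', 'b', 'c']
```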

----------------------------------------------------------------

List columns with indexes:
SHOW INDEX FROM mytable;

----------------------------------------------------------------

+Remote Connection (Feb. 16, 2015, 11:27 a.m.)

This link provides more than just remote connection setup; it covers security too, which I don't need right now. If security is not a concern for now, use the summary below:
http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html
--------------------------------------------------------------------------------------------
Binding is limited to zero, one, or all IP addresses on the server; you cannot bind to a subset of more than one address at the same time.

nano /etc/mysql/my.cnf
bind-address = 0.0.0.0
/etc/init.d/mysql restart

And then in mysql console:
mysql -u root -p
GRANT ALL PRIVILEGES ON your_database.* TO 'root'@'88.135.38.2' IDENTIFIED BY 'passw0rd' WITH GRANT OPTION;

+Update / Replace value (Feb. 14, 2015, 3:52 p.m.)

It's different from the replace() method in python :O In SQL, REPLACE takes the subject as its first argument: REPLACE(subject, search, replacement).

UPDATE table SET field = REPLACE(field, 'string', 'anothervalue') WHERE field LIKE '%string%';

'string' is the substring to search for (the same value used in the LIKE '%string%' filter).
'anothervalue' is the replacement value.
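
sqlite3 (bundled with Python) implements the same three-argument REPLACE() function, so the statement can be tried locally. A sketch with a made-up table, contrasting it with Python's str.replace():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("hello string world",), ("no match here",)])

# Python: the subject is the string itself, replace(search, replacement)
assert "hello string world".replace("string", "anothervalue") == \
       "hello anothervalue world"

# SQL: REPLACE(subject, search, replacement) -- the subject comes first
conn.execute("UPDATE t SET field = REPLACE(field, 'string', 'anothervalue') "
             "WHERE field LIKE '%string%'")
result = [r[0] for r in conn.execute("SELECT field FROM t ORDER BY field")]
print(result)  # ['hello anothervalue world', 'no match here']
```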

+Show database / show table columns (Jan. 27, 2015, 3:27 p.m.)

show databases;
-----------------------------------------
use a_database;
show tables;
SHOW COLUMNS FROM City;
-----------------------------------------

+Reverse Query Results (Jan. 25, 2015, 11:05 a.m.)

select * from (select * from cdr order by acctid) as t1 order by acctid desc limit 200;
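
A common variant of this nested-query pattern is fetching the last N rows but presenting them in ascending order: the inner query picks the newest rows, the outer query re-sorts them. A sketch with Python's built-in sqlite3 (cdr table made up, 10 rows, last 3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cdr (acctid INTEGER)")
conn.executemany("INSERT INTO cdr VALUES (?)", [(i,) for i in range(1, 11)])

# Inner query: newest 3 rows; outer query: put them back in ascending order
last3 = [r[0] for r in conn.execute(
    "SELECT acctid FROM "
    "(SELECT acctid FROM cdr ORDER BY acctid DESC LIMIT 3) AS t1 "
    "ORDER BY acctid ASC")]
print(last3)  # [8, 9, 10]
```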

+Create table (Jan. 8, 2015, 11:59 a.m.)

You need to tell MySQL which database to use first:
USE database_name;

And here is a sample table:
CREATE TABLE cdr (
calldate datetime NOT NULL default '0000-00-00 00:00:00',
clid varchar(80) NOT NULL default '',
src varchar(80) NOT NULL default '',
dst varchar(80) NOT NULL default '',
dcontext varchar(80) NOT NULL default '',
channel varchar(80) NOT NULL default '',
dstchannel varchar(80) NOT NULL default '',
lastapp varchar(80) NOT NULL default '',
lastdata varchar(80) NOT NULL default '',
duration int(11) NOT NULL default '0',
billsec int(11) NOT NULL default '0',
disposition varchar(45) NOT NULL default '',
amaflags int(11) NOT NULL default '0',
accountcode varchar(20) NOT NULL default '',
uniqueid varchar(32) NOT NULL default '',
userfield varchar(255) NOT NULL default ''
);

+Export / Import (Backup / Restore) (Jan. 8, 2015, 11:27 a.m.)

Export:
mysqldump -u [username] -p [database_name] > [dumpfilename.sql]

Import:
mysql -u [username] -p [database_name] < [dumpfilename.sql]

------------------------------------------------------------------------

Export data to CSV file:

SELECT order_id,product_name,qty
FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';

------------------------------------------------------------------------

Export data to CSV file (From multiple table + multiple Fields):
select table_1.field_1, table_1.field_2, table_2.field_1, table_3.field_7 from table_1, table_2, table_3 into outfile '/tmp/data.csv' fields terminated by ',' enclosed by "" lines terminated by '\n';

------------------------------------------------------------------------

Import CSV file directly into MySQL:

LOAD DATA INFILE '/tmp/cdr.csv'
INTO TABLE cdr
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;

IGNORE 1 ROWS skips the first line of the file; use it when the file has a header row (e.g. a manually created file with column titles like name, family, id, ...).
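
The same comma-separated, double-quoted, header-skipping format can be produced and consumed with Python's csv module. A sketch mirroring the OUTFILE/LOAD DATA settings above (column names taken from the earlier example, data made up):

```python
import csv
import io

# Write: FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
buf = io.StringIO()
writer = csv.writer(buf, delimiter=",", quotechar='"',
                    quoting=csv.QUOTE_ALL, lineterminator="\n")
writer.writerow(["order_id", "product_name", "qty"])  # header row
writer.writerow([1, "coffee", 3])
writer.writerow([2, "tea", 5])

# Read back, skipping the header like IGNORE 1 ROWS
buf.seek(0)
reader = csv.reader(buf, delimiter=",", quotechar='"')
next(reader)  # skip the header row
rows = list(reader)
print(rows)  # [['1', 'coffee', '3'], ['2', 'tea', '5']]
```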

------------------------------------------------------------------------

Import ".sql" files:
$ mysql -u root db_name < db.sql

------------------------------------------------------------------------

+Add a database along with its user (Jan. 8, 2015, 11:15 a.m.)

1- mysql -u root -p


2- create database demodb;


3-
INSERT INTO mysql.user (User,Host,Password) VALUES('demouser','localhost',PASSWORD('demopassword'));

OR you might need the following, depending on the installed MySQL version:

INSERT INTO mysql.user (User,Host,authentication_string, ssl_cipher, x509_issuer,x509_subject) VALUES('dianomi','localhost',PASSWORD('dfg3253'),'','','');


4- FLUSH PRIVILEGES;


5- GRANT ALL PRIVILEGES ON demodb.* to demouser@localhost;


6- FLUSH PRIVILEGES;

+Installation (Jan. 8, 2015, 11:03 a.m.)

1- Configure MySQL PPA
wget http://repo.mysql.com/mysql-apt-config_0.8.9-1_all.deb
dpkg -i mysql-apt-config_0.8.9-1_all.deb


2- Install MySQL
apt update
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <the_GPG_key>
apt install mysql-server python-dev python3-dev default-libmysqlclient-dev


3- Secure MySQL Installation (You might not need this step for PC/Laptop or testing environments)
systemctl restart mysql
mysql_secure_installation


4- Connect MySQL
mysql -u root -p

Network
+Cisco Certification Program Overview (Feb. 19, 2018, 5:57 p.m.)

Routing/Switching
Data Center
Voice
Security
Wireless
Design
Service Provider
Service Provider Operations
Video
----------------------------------------------------------------------------
Cisco Certified Entry Networking Technician (CCENT)
Cisco Certified Technician (CCT)
Cisco Certified Network Associate (CCNA)
Cisco Certified Design Associate (CCDA)
Cisco Certified Network Professional (CCNP)
Cisco Certified Design Professional (CCDP)
Cisco Certified Internetwork Expert (CCIE)
Cisco Certified Design Expert (CCDE)
Cisco Certified Architect (CCAr)

+Subnet Mask (Sept. 19, 2017, 5:05 p.m.)

CIDR Addresses Hosts Netmask Amount of a Class C
/30 4 2 255.255.255.252 1/64
/29 8 6 255.255.255.248 1/32
/28 16 14 255.255.255.240 1/16
/27 32 30 255.255.255.224 1/8
/26 64 62 255.255.255.192 1/4
/25 128 126 255.255.255.128 1/2
/24 256 254 255.255.255.0 1
/23 512 510 255.255.254.0 2
/22 1024 1022 255.255.252.0 4
/21 2048 2046 255.255.248.0 8
/20 4096 4094 255.255.240.0 16
/19 8192 8190 255.255.224.0 32
/18 16384 16382 255.255.192.0 64
/17 32768 32766 255.255.128.0 128
/16 65536 65534 255.255.0.0 256
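
Any row of this table can be recomputed with Python's ipaddress module; a quick sketch:

```python
import ipaddress

def subnet_row(prefix):
    """Return (addresses, usable hosts, dotted netmask) for a /prefix."""
    net = ipaddress.ip_network(f"0.0.0.0/{prefix}")
    addresses = net.num_addresses
    hosts = max(addresses - 2, 0)  # minus network and broadcast addresses
    return addresses, hosts, str(net.netmask)

print(subnet_row(26))  # (64, 62, '255.255.255.192')
print(subnet_row(23))  # (512, 510, '255.255.254.0')
```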

+Zabbix - Installation (April 26, 2017, 6:15 p.m.)

Zabbix Server:
1- apt-get install apache2 mysql-server php5 php5-cli php5-common php5-mysql

2- Update timezone in php configuration file /etc/php5/apache2/php.ini:
date.timezone = 'Asia/Tehran'

3- apt-get install zabbix-server-mysql zabbix-frontend-php

4- Create Database Schema:
mysql -u root -p
mysql> CREATE DATABASE zabbixdb;
mysql> GRANT ALL on zabbixdb.* to zabbix@localhost IDENTIFIED BY 'deskbit';
mysql> FLUSH PRIVILEGES;

5- Restore the Zabbix database schema into the newly created database:
cd /usr/share/zabbix-server-mysql
zcat schema.sql.gz | mysql -u root -p zabbixdb
zcat images.sql.gz | mysql -u root -p zabbixdb
zcat data.sql.gz | mysql -u root -p zabbixdb

6- Edit Zabbix Configuration File:
vim /etc/zabbix/zabbix_server.conf
DBHost=localhost
DBName=zabbixdb
DBUser=zabbix
DBPassword=password

7- Enable zabbix conf for apache:
cp /usr/share/doc/zabbix-frontend-php/examples/apache.conf /etc/apache2/sites-enabled/

8- Set some values in the config file:
/etc/php5/apache2/php.ini
post_max_size = 16M
max_execution_time = 300
max_input_time = 300

9- Restart Apache and Zabbix:
/etc/init.d/apache2 restart
/etc/init.d/zabbix-server restart

10- Open the following address in a browser:
http://zabbix.deskbit.local/zabbix/zabbix
In the 3rd Step (Configure DB connection):
Database host: localhost
Database port: 0
Database name: zabbixdb
User: zabbix
Password: deskbit

11- In step 6 (Install), it cannot create the file "zabbix.conf". To fix the error you need to:
chmod 777 /etc/zabbix


12- Zabbix Login Screen:
Username: admin
Password: zabbix
------------------------------------------------------------
Zabbix Agent:
1- sudo apt-get install zabbix-agent

2- Edit Zabbix Agent Configuration:
vim /etc/zabbix/zabbix_agentd.conf
Server=192.168.1.11
Hostname=Server2

3- Restart Zabbix Agent:
/etc/init.d/zabbix-agent restart
------------------------------------------------------------

Nginx
+Serving Angular application (Oct. 8, 2019, 1:40 a.m.)

server {
listen 80;
server_name tiptong.ir www.tiptong.ir;
index index.html;
root /srv/tiptong;

location / {
try_files $uri$args $uri$args/ /index.html;
}
}

+Change 502 Bad Gateway Error page (Oct. 1, 2019, 12:30 p.m.)

1- Create an empty HTML file:
sudo touch /srv/blank.html


2- Edit the nginx config file:
location / {
uwsgi_pass 127.0.0.1:22220;
include uwsgi_params;
error_page 502 = /blank.html;
}


location = /blank.html {
root /srv/;
}

+Remove favicon.ico error_log (June 25, 2019, 12:04 p.m.)

location = /favicon.ico {
access_log off;
log_not_found off;
}

+Serving robots.txt (Aug. 22, 2014, 9:34 a.m.)

location /robots.txt {
alias /path/to/static/robots.txt;
}

+Fix Django Invalid HTTP_HOST header emails (June 3, 2018, 8:25 p.m.)

Add this block inside the "http" block of the /etc/nginx/nginx.conf file.

server {
listen 80;
server_name _;
return 444;
}


Keep in mind to place this block before the "include" lines that pull in the other config files.

+Forward port 80 to 8080 (Dec. 15, 2018, 3:23 p.m.)

server {
listen 80;
server_name stats.mohsenhassani.com;

location / {
proxy_pass http://127.0.0.1:8080;
}
}

+Set Up HTTP Authentication on a Directory (April 11, 2017, 2:28 p.m.)

1- apt install apache2-utils nginx-extras

2- htpasswd -c /etc/nginx/.htpasswd mohsen
Note that this htpasswd should be accessible by the user-account that is running Nginx.

3-
server {
listen 80;
server_name ftp.mohsenhassani.com;

location / {
fancyindex on;
fancyindex_exact_size off;
root /home/mohsen/ftp;
}

location /private {
auth_basic "This is private zone!";
auth_basic_user_file /etc/nginx/.htpasswd;
fancyindex on;
fancyindex_exact_size off;
alias /home/mohsen/ftp/private;
}
}

+Create an SSL Certificate (Sept. 16, 2016, 4:46 a.m.)

1- Create a directory to hold all of our SSL information. It should be created under the Nginx configuration directory:
sudo mkdir /etc/nginx/ssl

------------------------------------------------------------------

2- Create the SSL key and certificate files (a sample of the questions asked appears a few blocks below):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -sha256 -keyout /etc/nginx/ssl/mohsenhassani_private.key -out /etc/nginx/ssl/mohsenhassani_public.pem

OR (supply all the information at once via -subj):

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -sha256 -keyout /etc/nginx/ssl/mohsenhassani_private.key -out /etc/nginx/ssl/mohsenhassani_public.pem -subj "/C=IR/ST=Tehran/L=Tehran/O=NozhanModern/CN=bot.mohsenhassani.com"

------------------------------------------------------------------

3- We will be asked a few questions about our server in order to embed the information correctly in the certificate. The most important line is the one that requests the Common Name (e.g. server FQDN or YOUR name). You need to enter the domain name that you want to be associated with your server. You can enter the public IP address instead if you do not have a domain name.

------------------------------------------------------------------

4- Configure Nginx to Use SSL:
server {
listen 80;
listen 443 ssl;
server_name bot.mohsenhassani.com;

ssl_certificate /etc/nginx/ssl/mohsenhassani_public.pem;
ssl_certificate_key /etc/nginx/ssl/mohsenhassani_private.key;
}

-------------------------------------------------------------------------------

A sample of questions asked:

Country Name (2 letter code) [AU]:US

State or Province Name (full name) [Some-State]:New York

Locality Name (eg, city) []:New York City

Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.

Organizational Unit Name (eg, section) []:Ministry of Water Slides

Common Name (e.g. server FQDN or YOUR name) []:server_IP_address

Email Address []:admin@your_domain.com

-------------------------------------------------------------------------------

Descriptions:

You will be asked a series of questions. Before we go over that, let's take a look at what is happening in the command we are issuing:

openssl: This is the basic command line tool for creating and managing OpenSSL certificates, keys, and other files.

req: This subcommand specifies that we want to use X.509 certificate signing request (CSR) management. "X.509" is a public key infrastructure standard that SSL and TLS adhere to for key and certificate management. We want to create a new X.509 cert, so we are using this subcommand.
-x509: This further modifies the previous subcommand by telling the utility that we want to make a self-signed certificate instead of generating a certificate signing request, as would normally happen.
-nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase. We need Nginx to be able to read the file, without user intervention, when the server starts up. A passphrase would prevent this from happening because we would have to enter it after every restart.
-days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here.
-newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. We did not create the key that is required to sign the certificate in a previous step, so we need to create it along with the certificate. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
-keyout: This line tells OpenSSL where to place the generated private key file that we are creating.
-out: This tells OpenSSL where to place the certificate that we are creating.

+Permanently Redirect URLs (May 21, 2016, 4:33 p.m.)

server {
listen 80;
server_name buynespresso.ir www.buynespresso.ir;
return 301 $scheme://vinidit.ir$request_uri;
}

-------------------------------------------------------

1. Redirect All Requests to a Specific URL

This will redirect all incoming requests on the domain to the URL http://anotherdomain.com/dir1/index.php, as configured below.

server {
listen 192.168.1.100:80;
server_name mydomain.com;
return 301 http://anotherdomain.com/dir1/index.php;
}

2. Redirect All Requests to Another Domain

This will redirect all incoming requests on the domain to another domain (http://anotherdomain.com/), preserving the request URI and query string.

server {
listen 192.168.1.100:80;
server_name mydomain.com;
return 301 http://anotherdomain.com$request_uri;
}

3. Redirect Requests Keeping the Protocol

This will redirect all incoming requests on the domain to another domain (http://anotherdomain.com/), preserving the request URI and query string. It will also use the same protocol (http/https) on the redirected URL.

server {
listen 192.168.1.100:80;
server_name mydomain.com;
return 301 $scheme://anotherdomain.com$request_uri;
}

+Serve HTML file (May 17, 2016, 5:34 a.m.)

server {
root /home/shetab/websites/youstone_tmp;
listen 80;
server_name youstone.org www.youstone.org;
index index.html index.htm;

# proxy request to node
location @proxy {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;

proxy_pass http://127.0.0.1:3010;
proxy_redirect off;
break;
}

location / {
try_files $uri $uri/ @proxy;
}

}

+PHP Configuration (March 13, 2016, 10:52 p.m.)

server {
listen 80;
server_name 10.10.0.237;

root /var/www/suitecrm;
index index.php index.html index.htm index.nginx-debian.html;
access_log /var/log/nginx/suitecrm.access.log;
error_log /var/log/nginx/suitecrm.error.log;

client_max_body_size 300M;

location / {
try_files $uri $uri/ =404;
}

location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
}
}

-------------------------------------------------------

In case of errors, try checking (tail -f) the access.log and error.log files.

If there is no output in error.log, check whether the socket file "php7.0-fpm.sock" exists at the path mentioned in the "location ~ \.php$" directive.

-------------------------------------------------------

+Https with Django (March 13, 2016, 11 p.m.)

mkdir /etc/nginx/ssl

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
openssl req -newkey rsa:2048 -sha256 -nodes -keyout /home/mohsen/ssl/PRIVATE.key -x509 -days 365 -out /home/mohsen/ssl/PUBLIC.pem -subj "/C=IT/ST=state/L=location/O=description/CN=telegram.mohsenhassani.com"

--------------------------THIS IS THE OUTPUT --------------------------
[sudo] password for mohsen:
Generating a 2048 bit RSA private key
..+++
...................+++
writing new private key to '/etc/nginx/ssl/nginx.key'
/etc/nginx/ssl/nginx.key: No such file or directory
3073349308:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/etc/nginx/ssl/nginx.key','w')
3073349308:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
mohsen@mohsenhassani:~$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
Generating a 2048 bit RSA private key
.......+++
............................................................................................................+++
writing new private key to '/etc/nginx/ssl/nginx.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []:notes.mohsenhassani.com
--------------------------THIS IS THE OUTPUT --------------------------
The nginx sample config file:
server {
listen 80;
listen 443 ssl;
server_name notes.mohsenhassani.com notes.mohsenhassani.ir;

access_log /home/mohsen/logs/notes_mohsen.access.log;
error_log /home/mohsen/logs/notes_mohsen.error.log;

ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;

add_header Access-Control-Allow-Origin '*';
location / {
uwsgi_pass 127.0.0.1:22222;
include uwsgi_params;
uwsgi_read_timeout 6000s;
uwsgi_send_timeout 6000s;
}

client_max_body_size 20M;

location /static/admin/ {
gzip on;
alias /home/mohsen/virtualenvs/django-1.8/lib/python3.4/site-packages/django/contrib/admin/static/admin/;
}

location /media/ {
gzip on;
alias /home/mohsen/websites/notes_mohsen/notes/media/;
}

location /static {
gzip on;
alias /home/mohsen/websites/notes_mohsen/notes/static;
}
}

+Access-Control-Allow-Origin downloading a JSON file (Dec. 23, 2015, 2:06 p.m.)

Add this line to server { } block:
add_header Access-Control-Allow-Origin '*';

Example:
server {
listen 80;
server_name notes.azarshafiei.com notes.azarshafiei.ir;
access_log /home/mohsen/logs/notes_azar.access.log;
error_log /home/mohsen/logs/notes_azar.error.log;
add_header Access-Control-Allow-Origin '*';
location / {
uwsgi_pass 127.0.0.1:22222;
include uwsgi_params;
uwsgi_read_timeout 6000s;
uwsgi_send_timeout 6000s;
}

client_max_body_size 20M;

location /static/admin/ {
gzip on;
alias /home/mohsen/virtualenvs/django-1.8/lib/python3.4/site-packages/django/contrib/admin/static/admin/;
}

location /media/ {
gzip on;
alias /home/mohsen/websites/notes_azar/notes/media/;
}

location /static {
gzip on;
alias /home/mohsen/websites/notes_azar/notes/static;
}
}

+Nginx Serve Fonts (Oct. 14, 2015, 3:48 p.m.)

add_header Access-Control-Allow-Origin '*';
location / {
uwsgi_pass 127.0.0.1:22222;
include uwsgi_params;
uwsgi_read_timeout 6000s;
uwsgi_send_timeout 6000s;

location ~* \.(ttf|ttc|otf|eot|woff|font.css)$ {
add_header "Access-Control-Allow-Origin" "*";
}
}

+Nginx and uWSGI configuration (Aug. 22, 2014, 9:34 a.m.)

1- Install nginx using its help
2- Install uwsgi ==> pip install uwsgi; it needs ==> easy_install pip, and apt-get install python-dev
3- Copy the myuwsgi script into /etc/init.d
4- Make sure you have the command /usr/local/bin/uwsgi or /usr/bin/uwsgi
5- Copy the config file of the website into web_configs

+Configurations (Feb. 4, 2016, 11:19 a.m.)

nano /etc/nginx/nginx.conf

Add the following line:
include /home/mohsen/web_configs/*;

After these lines:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
--------------------------------------------------------------
To start nginx
/usr/local/nginx/sbin/nginx
--------------------------------------------------------------
For serving a local Django project, nginx's "server_name" must be the local IP address the modem/router has assigned to the computer. It is not "localhost" or "127.0.0.1", and not necessarily 192.168.1.2 either!
To find that IP, run "ifconfig"; the address the modem has given the computer is the one to use in nginx's "server_name".

+Installation (Feb. 4, 2016, 11:16 a.m.)

apt install nginx libpcre3-dev

NodeJs
+install Error: rollbackFailedOptional (Sept. 12, 2019, 4:49 p.m.)

npm config rm proxy
npm config rm https-proxy
npm config set registry http://registry.npmjs.org/

+Enable NPM cache (Sept. 28, 2018, 11:01 p.m.)

npm config set cache-min 9999999

+Error - Too many open files in system (July 17, 2018, 2:05 p.m.)

Add "ulimit -n 4096" to ~/.profile

Then reload it:
source ~/.profile

+Uninstalling npm (July 15, 2018, 11:13 a.m.)

npm uninstall npm -g

+npm behind socks5 proxy (June 19, 2018, 7:12 p.m.)

1- apt install polipo


2-
vim /etc/polipo/config
socksParentProxy = "127.0.0.1:1337"
socksProxyType = socks5
proxyAddress = "::0"
proxyPort = 8123


3- sudo service polipo restart


4- ssh -D 1337 -fN root@ca.mohsenhassani.com


5- Set these NPM configurations in the terminal:
npm config set proxy http://127.0.0.1:8123
npm config set https-proxy http://127.0.0.1:8123

-------------------------------------------------------------------

Clear https proxy:
npm config rm proxy
npm config rm https-proxy

-------------------------------------------------------------------

+Installation (April 10, 2016, 8:27 a.m.)

https://nodejs.org/en/download/package-manager/#debian-and-ubuntu-based-linux-distributions

---------------------------------------------------------------------------------------------

1- apt install curl build-essential

2- curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -

3- apt install -y nodejs


Update Node Package Manager (NPM):
sudo npm i npm@latest -g

---------------------------------------------------------------------------------------------

For Mac OS use this command:
brew install node

OpenStack
+Hardware requirements (Jan. 1, 2017, 7:52 p.m.)

Controller

The controller node runs the Identity service, Image service, the management portions of Compute and Networking, various Networking agents, and the dashboard. It also includes supporting services such as an SQL database, message queue, and NTP.

Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services.

The controller node requires a minimum of two network interfaces.
---------------------------------------
Compute

The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the KVM hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups.

You can deploy more than one compute node. Each node requires a minimum of two network interfaces.
---------------------------------------
Block Storage

The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances.

For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security.

You can deploy more than one block storage node. Each node requires a minimum of one network interface.
---------------------------------------
Object Storage

The optional Object Storage node contains the disks that the Object Storage service uses for storing accounts, containers, and objects.

For simplicity, service traffic between compute nodes and this node uses the management network. Production environments should implement a separate storage network to increase performance and security.

This service requires two nodes. Each node requires a minimum of one network interface. You can deploy more than two object storage nodes.

+What is Cloud? (Jan. 1, 2017, 5:50 p.m.)

Let's quickly review just what a computing cloud is. Cloud technologies are built on existing technologies such as virtualization and clustering to virtualize hardware, software, storage, and networking resources into flexible units that are quickly allocated to meet demand. So rather than the old static model of dedicated hardware servers for various tasks, and static network and storage configurations, all of those formerly specialized devices are assimilated into a common resource pool. It's a more efficient use of hardware, and very fast to scale up or down according to demand. You can even configure self-service for users so they can grab whatever they need when they need it.

Private clouds are hosted on your own premises, and there are public clouds like Amazon's EC2 and the Rackspace Cloud. You can combine private and public clouds in many useful ways. For example, keep your sensitive data locked away in your private cloud, and use a public cloud for sharing, testing, and extra non-sensitive storage.

All computing resources are shareable in a cloud, and there are three basic service models:

SaaS, software as a service
PaaS, platform as a service
IaaS, infrastructure as a service

SaaS is centrally-hosted application software accessed by client software, with data typically kept on the server for access from any networked computer. Yes, just like in the olden client-server days, but the modern twist is to stuff everything through a Web browser. Using a Web browser as the client has its down sides, starting with HTTP, which was never designed for complex computing tasks, but by gosh we're making it haul water, chop wood, and dig ditches, and it's doing it cross-platform. SaaS is popular with software vendors because it reduces their support costs, gives them more control, and at long last supports that coveted grail of the monthly subscription model. It's nice for customers as well because they don't have to hassle with installation and maintenance.

PaaS is a nice option for customers who want more control of their datacenter, but not all the headaches of system and network administration. An example of this is managed cloud Web hosting where the host takes care of hardware, operating systems, networking, load balancing, backups, and updates and patches. The customer manages the development and configuration of whatever software they want to use. It's like sitting down to a fully-configured datacenter and getting right to work.

IaaS can be thought of as virtual bare hardware that the customer manages like a physical server, with control of all the software and configuration. You could also call it HaaS, hardware as a service.

+Definitions - Hypervisor (Dec. 25, 2016, 5:06 p.m.)

Software that arbitrates and controls VM access to the actual underlying hardware.
------------------------------
A hypervisor or virtual machine monitor (VMM) is computer software, firmware, or hardware, that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and OS X instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.

The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisor, with hyper- used as a stronger variant of super-. The term dates to circa 1970; in the earlier CP/CMS (1967) system the term Control Program was used instead.

------------------------------------------
A hypervisor is a function which abstracts -- isolates -- operating systems and applications from the underlying computer hardware. This abstraction allows the underlying host machine hardware to independently operate one or more virtual machines as guests, allowing multiple guest VMs to effectively share the system's physical compute resources, such as processor cycles, memory space, network bandwidth and so on. A hypervisor is sometimes also called a virtual machine monitor.


(Source: Margaret Rouse, WhatIs.com; contributor: Stephen J. Bigelow)

Hypervisors provide several benefits to the enterprise data center. First, the ability of a physical host system to run multiple guest VMs can vastly improve the utilization of the underlying hardware. Where physical (nonvirtualized) servers might only host one operating system and application, a hypervisor virtualizes the server, allowing the system to host multiple VM instances -- each running an independent operating system and application -- on the same physical system using far more of the system's available compute resources.

VMs are also very mobile. The abstraction that takes place in a hypervisor also makes the VM independent of the underlying hardware. Traditional software can be tightly coupled to the underlying server hardware, meaning that moving the application to another server requires time-consuming and error-prone reinstallation and reconfiguration of the application. By comparison, a hypervisor makes the underlying hardware details irrelevant to the VMs. This allows any VMs to be moved or migrated between any local or remote virtualized servers -- with sufficient computing resources available -- almost at-will with effectively zero disruption to the VM; a feature often termed live migration.

VMs are also logically isolated from each other -- even though they run on the same physical machine. In effect, a VM has no native knowledge or dependence on any other VMs. An error, crash or malware attack on one VM does not proliferate to other VMs on the same or other machines. This makes hypervisor technology extremely secure.

Finally, VMs are easier to protect than traditional applications. A physical application typically needs to be first quiesced and then backed up using a time-consuming process that results in substantial downtime for the application. A VM is essentially little more than code operating in a server's memory space. Snapshot tools can quickly capture the content of that VM's memory space and save it to disk in moments -- usually without quiescing the application at all. Each snapshot captures a point-in-time image of the VM which can be quickly recalled to restore the VM on demand.
Types of hypervisors

Hypervisors are traditionally implemented as a software layer -- such as VMware vSphere or Microsoft Hyper-V -- but hypervisors can also be implemented as code embedded in a system's firmware. There are two principal types of hypervisor. Type 1 hypervisors are deployed directly atop the system's hardware without any underlying operating systems or other software. These are called "bare metal" hypervisors and are the most common and popular type of hypervisor for the enterprise data center. Examples include vSphere or Hyper-V. Type 2 hypervisors run as a software layer atop a host operating system and are usually called "hosted" hypervisors like VMware Player or Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs.

What are hypervisors used for?

Hypervisors are important to any system administrator or system operator because virtualization adds a crucial layer of management and control over the data center and enterprise environment. Staff members not only need to understand how the respective hypervisor works, but also how to operate supporting functionality such as VM configuration, migration and snapshots.


The role of a hypervisor is also expanding. For example, storage hypervisors are used to virtualize all of the storage resources in the environment to create centralized storage pools that administrators can provision -- without having to concern themselves with where the storage was physically located. Today, storage hypervisors are a key element of software-defined storage. Networks are also being virtualized with hypervisors, allowing networks and network devices to be created, changed, managed and destroyed entirely through software without ever touching physical network devices. As with storage, network virtualization is appearing in broader software-defined network or software-defined data center platforms.

+Installation (Dec. 25, 2016, 10:17 a.m.)

PhoneGap
+Errors (Dec. 20, 2015, 11:21 p.m.)

When using the `phonegap` command, I got the error 'cannot find bplist-parser'. To solve it I ran:
sudo npm update -g
---------------------------------------------------------------------------------------------

+Installation (Jan. 13, 2016, 3:27 p.m.)

http://dasunhegoda.com/installrun-phonegap-ubuntu/797/

1-sudo apt-get install nodejs npm git ant lib32z1 lib32ncurses5 lib32bz2-1.0 lib32stdc++6
And then:
sudo npm install -g phonegap cordova jquery-mobile
sudo npm update -g

2-Node.js is installed and named nodejs, but PhoneGap expects the executable to be named node. To fix this inconsistency, create a symlink named node that points to nodejs as follows:
sudo ln -s /usr/bin/nodejs /usr/bin/node

3-Type `phonegap` on the command line and check whether PhoneGap command is detected.
(You might get error about `cannot find bplist-parser`; refer to errors for solving the error.)

4-Copy `android-sdk`: (you already have it when using `Kivy`):
sudo cp -r ~/Programs/Android/Development/android-sdk-linux/ /usr/local/

5-Edit the file `~/.bashrc` and paste these lines to the end of it:
export PATH=$PATH:/home/mohsen/Programs/Android/Development/android-sdk-linux/
export PATH=$PATH:/home/mohsen/Programs/Android/Development/android-sdk-linux/tools
export PATH=$PATH:/home/mohsen/Programs/Android/Development/android-sdk-linux/platform-tools
export PATH=$PATH:/home/mohsen/Programs/Android/Development/android-sdk-linux/build-tools

6-source ~/.bashrc

7-android

8-

PHP
+MySQL Driver (Aug. 1, 2017, 6:29 p.m.)

apt-get install php-mysql

+Installation (Aug. 1, 2017, 5:54 p.m.)

Base PHP:
sudo apt-get install php-common php-cli


Nginx:
sudo apt-get install php-fpm


Apache:
apt-get install libapache2-mod-php

PostgreSQL
+Increase max connections (June 16, 2019, 11:50 a.m.)

Edit postgresql.conf:

max_connections = 400
shared_buffers = 512MB

+Removing a Constraint (Feb. 1, 2017, 1:02 p.m.)

To remove a constraint you need to know its name. If you gave it a name then that's easy. Otherwise, the system assigned a generated name, which you need to find out. The psql command \d tablename can be helpful here; other interfaces might also provide a way to inspect table details. Then the command is:
ALTER TABLE products DROP CONSTRAINT some_name;

This works the same for all constraint types except not-null constraints. To drop a not null constraint use
ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL;
(Recall that not-null constraints do not have names.)

+Adding a Constraint (Feb. 1, 2017, 1:01 p.m.)

To add a constraint, the table constraint syntax is used. For example:

ALTER TABLE products ADD CHECK (name <> '');
ALTER TABLE products ADD CONSTRAINT some_name UNIQUE (product_no);
ALTER TABLE products ADD FOREIGN KEY (product_group_id) REFERENCES product_groups;

To add a not-null constraint, which cannot be written as a table constraint, use this syntax:
ALTER TABLE products ALTER COLUMN product_no SET NOT NULL;

The constraint will be checked immediately, so the table data must satisfy the constraint before it can be added.

+PostgreSQL history file (Feb. 1, 2017, 1:01 p.m.)

Similar to the Linux ~/.bash_history file, PostgreSQL stores all the SQL commands that were executed in a history file called ~/.psql_history, as shown below.

cat ~/.psql_history

+Turn on timing and check how much time a query takes to execute (Feb. 1, 2017, 1 p.m.)

# \timing — After this, if you execute a query it will show how much time it took for doing it.

# \timing
Timing is on.

# SELECT * from pg_catalog.pg_attribute ;
Time: 9.583 ms

+Change database user password (Feb. 1, 2017, 12:59 p.m.)

Root user:
ALTER USER postgres WITH PASSWORD 'tmppassword';
-------------------------------------------------
psql cdrdb
alter user cdr with password 'abcdef';

+Export JSON from PostgreSQL (May 12, 2016, 12:11 a.m.)

select row_to_json(words) from words;
{"id":6013,"text":"advancement","pronunciation":"advancement",...}
----------------------------------------
select row_to_json(row(id, text)) from words;
{"f1":6013,"f2":"advancement"}
This will name the columns `f1`, `f2`, `f3`, ...

To solve the problem:
select row_to_json(t)
from (
select id, text from words
) t

{"id":6013,"text":"advancement"}
----------------------------------------
The other commonly used technique is array_agg and array_to_json. array_agg is an aggregate function like sum or count. It aggregates its argument into a PostgreSQL array. array_to_json takes a PostgreSQL array and flattens it into a single JSON value.

select array_to_json(array_agg(row_to_json(t)))
from (
select id, text from words
) t

[{"id":6001,"text":"abaissed"},{"id":6002,"text":"abbatial"},{"id":6003,"text":"abelia"},...]
----------------------------------------

+Errors (July 6, 2015, 12:31 p.m.)

psql: could not connect to server: Connection refused
Is the server running on host "192.168.0.6" and accepting TCP/IP connections on port 5432?

For solving this error, refer to "Remote Connection".
-------------------------------------------------------------------------
psycopg2.ProgrammingError: permission denied for relation notes_application

OR

ERROR: role "mohsen_notes" does not exist (While importing a database)

For solving this error you need to access the database shell with `postgres` user:
su
su postgres
psql -d notesdb -U postgres

And using this command, you will grant all the needed permissions:
GRANT ALL PRIVILEGES ON TABLE notes_application TO notes;
-------------------------------------------------------------------------

+Remote Connection (Feb. 4, 2016, 11:57 a.m.)

If you get error:
psql: could not connect to server: Connection refused
Is the server running on host "192.168.0.6" and accepting TCP/IP connections on port 5432?

You will need to configure PostgreSQL to accept TCP/IP connections:

Add this line to the end of the file pg_hba.conf:
host all all 88.135.34.18/32 trust

/etc/postgresql/9.1/main/pg_hba.conf
/usr/share/postgresql/9.4/pg_hba.conf

And then:
nano /etc/postgresql/9.1/main/postgresql.conf
(For Postgresql 9.4 or later, you need to cd to `/usr/share/postgresql/9.4` and copy the file `postgresql.conf.sample` to `postgresql.conf`):

Uncomment the following line and put star instead of localhost:
listen_addresses = '*'

/etc/init.d/postgresql restart

+Log into a Postgresql database (June 27, 2015, 1:05 p.m.)

http://alvinalexander.com/blog/post/postgresql/log-in-postgresql-database
---------------------------------------------------------------------------------------------
psql -d mydb -U myuser

+Changing a Column's Default Value (May 17, 2015, 1:08 p.m.)

To set a new default for a column, use a command like this:

ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;

Note that this doesn't affect any existing rows in the table, it just changes the default for future INSERT commands.

To remove any default value, use

ALTER TABLE products ALTER COLUMN price DROP DEFAULT;

This is effectively the same as setting the default to null. As a consequence, it is not an error to drop a default where one hadn't been defined, because the default is implicitly the null value.

+Update Values (Jan. 23, 2015, 7:42 p.m.)

UPDATE table_name SET column1 = value1, column2 = value2, ... WHERE condition;

+Counting the select (Jan. 23, 2015, 7:23 p.m.)

SELECT count(*) FROM sometable;

+Select unique column (Jan. 23, 2015, 7:14 p.m.)

SELECT DISTINCT column_1 FROM table_name
---------------------------------------------------------------------
If you specify multiple columns, the DISTINCT clause will evaluate the duplicate based on the combination of values of those columns.
SELECT DISTINCT column_1, column_2 FROM tbl_name;
---------------------------------------------------------------------
PostgreSQL also provides the DISTINCT ON (expression) to keep the “first” row of each group of duplicates where the expression is equal. See the following syntax:
SELECT DISTINCT ON (column_1) column_1, column_2 FROM tbl_name ORDER BY column_1, column_2;
---------------------------------------------------------------------
select DISTINCT ip_src FROM (SELECT ip_src from acct order by stamp_inserted) as mohsen2
---------------------------------------------------------------------

+Set password for postgres user (Jan. 22, 2015, 1:33 p.m.)

sudo -u postgres psql postgres
\password postgres

+Needed packages for Asterisk/Apache2 (Jan. 22, 2015, 1:10 p.m.)

apt-get install libapache2-mod-auth-pgsql

+Extend/Increase the length of a varchar column (Oct. 25, 2014, 5:27 p.m.)

This is done the same way you change the type of a column:
alter table issue_tracker_sentsms alter column status type varchar(3);

+Display Tables and Columns (Oct. 25, 2014, 5:23 p.m.)

Using this command you will be connected to the database:
\c issue_tracker_db

Using this command you will see the tables inside it:
\d
or
\d+

Using this command you will see the columns:
\d issue_tracker_sms;

+Default current date time for a field, while altering (Sept. 8, 2014, 10:30 p.m.)

alter table m_tasks_attachment add column "date_time" timestamp with time zone NOT NULL default now();

+Add new column (Sept. 6, 2014, 11:12 p.m.)

alter table m_tasks_message add column "is_new" boolean NOT NULL;

If you have already some data in the table, it will raise an error:
ERROR: column "is_new" contains null values
Which means you have to first create the column without the NOT NULL constraint and then set it to NOT NULL.

But you can easily set the desired default values with:
alter table m_tasks_message add column "is_new" boolean NOT NULL DEFAULT False;

This will set the already created records with the default value `False`.

+Commands (Aug. 22, 2014, 9:28 a.m.)

Login as "postgres" (SuperUser) to start using database:
# su - postgres

--------------------------------------------------------------------------------

Create a new database:
createdb mydb

--------------------------------------------------------------------------------

Drop database:
dropdb mydb

--------------------------------------------------------------------------------

Access database:
psql mydb

--------------------------------------------------------------------------------

Get help:
mydb=# \h

--------------------------------------------------------------------------------

Dump all database:
pg_dumpall > /var/lib/pgsql/backups/dumpall.sql

--------------------------------------------------------------------------------

Restore database:
psql -f /var/lib/pgsql/backups/dumpall.sql mydb

--------------------------------------------------------------------------------

Show databases:

# psql -l
mydb=# \l;
mydb=# \dt (lists all tables in the current database)
mydb=# \dt+ (same, with extra details such as size and description)

--------------------------------------------------------------------------------

Show users:
mydb=# SELECT * FROM "pg_user";

--------------------------------------------------------------------------------

Show tables:
mydb=# SELECT * FROM "pg_tables";

--------------------------------------------------------------------------------

Set password:
mydb=# UPDATE pg_shadow SET passwd = 'new_password' where usename = 'username';

--------------------------------------------------------------------------------

Clean all databases (Should be done via a daily cron):
vacuumdb --quiet --all

--------------------------------------------------------------------------------

How to edit PostgreSQL queries in your favorite editor?

# \e

\e will open the editor, where you can edit the queries and save it. By doing so the query will get executed.

--------------------------------------------------------------------------------

To rename a column:
ALTER TABLE products RENAME COLUMN product_no TO product_number;

--------------------------------------------------------------------------------

To rename a table:
ALTER TABLE products RENAME TO items;

--------------------------------------------------------------------------------

Change type:
ALTER TABLE table ALTER COLUMN anycol TYPE anytype;

Renaming a Column:
ALTER TABLE products RENAME COLUMN product_no TO product_number;

--------------------------------------------------------------------------------

Update a field:
update menus set description='Payments: Carriersss' where username='mohsen' and menu='accountingcarrier';

--------------------------------------------------------------------------------

Delete all records from a table:
delete from table_name;

--------------------------------------------------------------------------------

Count unique records:
select count(distinct ip_src) from table_name;

--------------------------------------------------------------------------------

List columns with indexes:

SELECT * FROM pg_indexes WHERE tablename = 'mytable';

--------------------------------------------------------------------------------

Delete all data in a table and reset auto increment counter:

truncate table my_table RESTART IDENTITY;

--------------------------------------------------------------------------------

Reset auto increment counter:

ALTER SEQUENCE my_table_id_seq RESTART WITH 1;

--------------------------------------------------------------------------------

+Import / Export (Backup / Restore) (Aug. 6, 2015, 10:08 a.m.)

Backup:
1-su
2-su postgres
3-pg_dump dbname > outfile (if you want to compress the outfile, use step `4` instead of `3`)
4-pg_dump dbname | gzip > filename.gz (if you think your database output file is going to be very big, you can split it, using `5` instead of `3` and `4`)
5-pg_dump dbname | split -b 1m - filename (instead of 1m you can use any size)
**********
If you got permission denied error, it's because of the folder/directory you are using for backup!
Change the output path or use `cd` to move the path to postgres home (which is /var/lib/postgresql).

OR

Create a folder and give postgres permission to write to it by changing its ownership:
mkdir postgres_dumps
chown postgres:postgres postgres_dumps
cd postgres_dumps
--------------------------------------------------------------------------------
Restore:
1-su
2-su postgres
3-psql dbname < infile (if you have a compressed file, use step `4` instead of `3`)
4-gunzip -c filename.gz | psql dbname (If your backup files are already split, use `5` instead of `3` and `4`)
5-cat filename* | psql dbname
--------------------------------------------------------------------------------
For selective tables:
From the shell (not the psql console), run:
pg_dump -t table_name -t table_name2 -t table_name3 -U db_owner db_name > outfile.sql
--------------------------------------------------------------------------------
Export Database into CSV file:
Go to the psql console using `psql -U db_user db_name` and then:
COPY table_name TO '/tmp/file_name.csv' DELIMITER ',' CSV HEADER;
COPY (SELECT foo,bar FROM table_name limit 100) TO '/tmp/file_name.csv' DELIMITER ',' CSV HEADER;
COPY (SELECT foo,bar FROM table_name) TO '/tmp/file_name.csv' DELIMITER ',' CSV HEADER;
--------------------------------------------------------------------------------
For importing dumped tables:
copy cdr from '/home/mohsen/MyTemp/as3.dat';
--------------------------------------------------------------------------------
Error while importing:
ERROR: role "mohsen_notes" does not exist

For solving this error, refer to `Errors` section within this category.
--------------------------------------------------------------------------------
Dump all database:
pg_dumpall > /var/lib/pgsql/backups/dumpall.sql
--------------------------------------------------------------------------------
Restore database:
psql -f /var/lib/pgsql/backups/dumpall.sql mydb
--------------------------------------------------------------------------------
Dump only parts of tables:
copy (select * from acct order by stamp_inserted limit 8000) to '/home/mohsen/Temp/acct.tsv';

Restore:
copy acct from '/home/mohsen/Temp/acct.tsv';
--------------------------------------------------------------------------------

+Configuration (Feb. 4, 2016, 11:48 a.m.)

1- Edit the file pg_hba.conf which can be found in either of the following paths:
/var/lib/pgsql/data/
/usr/share/postgresql/9.x/
/etc/postgresql/9.x/main/


2- Change the settings to this:
local all postgres trust
local all all password
host all all 127.0.0.1/32 md5

3- Restart postgresql service:
service postgresql restart

+Installation (Feb. 5, 2016, 2:37 a.m.)

apt install python-dev postgresql-server-dev-all postgresql libpq-dev python3-dev

-----------------------------------------------------------

To check if postgresql is installed and run successfully on port 5432, use this command:
nc localhost 5432 < /dev/null
It should not return anything. It should only wait ...

-----------------------------------------------------------

If you got error like the following when creating databases or users:
Is the server running locally and accepting ..... postgresql/.s.PGSQL.5432"

Check if postgresql service is enabled!?
systemctl status postgresql

If not, start it (and enable it at boot):
systemctl start postgresql
systemctl enable postgresql

Python
+Selenium (Oct. 9, 2019, 1:05 a.m.)

mozilla/geckodriver drivers:

https://github.com/mozilla/geckodriver/releases


Copy geckodriver in /usr/local/bin

----------------------------------------------------------------

Chrome:

https://sites.google.com/a/chromium.org/chromedriver/downloads

----------------------------------------------------------------

List of Chrome preferences:

http://www.assertselenium.com/java/list-of-chrome-driver-command-line-arguments/

----------------------------------------------------------------

List of Firefox preferences:

http://kb.mozillazine.org/About:config_entries

----------------------------------------------------------------

Efficient Web Crawling:

https://medium.com/dreamcatcher-its-blog/5-simple-tips-for-improving-automated-web-testing-or-efficient-web-crawling-using-selenium-python-43038d7b7916

----------------------------------------------------------------

+Generate random Hex colors (Oct. 9, 2019, 1:05 a.m.)

import random

r = lambda: random.randint(0, 255)
print('#%02X%02X%02X' % (r(), r(), r()))

-----------------------------------------------------

import random
color = "%06x" % random.randint(0, 0xFFFFFF)

-----------------------------------------------------

+requests over SOCKS proxy (Oct. 9, 2019, 1:04 a.m.)

proxies = {
    'http': 'socks5h://127.0.0.1:1090',
    'https': 'socks5h://127.0.0.1:1090'
}

request = requests.get('http://mohsenhassani.com', proxies=proxies)

---------------------------------------------------------------

Using socks5h will make sure that DNS resolution happens over the proxy instead of on the client-side.

---------------------------------------------------------------

+Print without newline (Oct. 9, 2019, 1:03 a.m.)

for l in reversed('mohsen'):
    print(l, sep=' ', end='', flush=True)

+PEP (Oct. 9, 2019, 1:03 a.m.)

PEP stands for Python Enhancement Proposal.

A PEP is a design document providing information to the Python community, or describing a new feature for Python or its processes or environment.

--------------------------------------------------------

There are three kinds of PEP:

1- A Standards Track PEP describes a new feature or implementation for Python. It may also describe an interoperability standard that will be supported outside the standard library for current Python versions before a subsequent PEP adds standard library support in a future version.


2- An Informational PEP describes a Python design issue, or provides general guidelines or information to the Python community, but does not propose a new feature. Informational PEPs do not necessarily represent a Python community consensus or recommendation, so users and implementers are free to ignore Informational PEPs or follow their advice.


3- A Process PEP describes a process surrounding Python, or proposes a change to (or an event in) a process. Process PEPs are like Standards Track PEPs but apply to areas other than the Python language itself. They may propose an implementation, but not to Python's codebase; they often require community consensus; unlike Informational PEPs, they are more than recommendations, and users are typically not free to ignore them. Examples include procedures, guidelines, changes to the decision-making process, and changes to the tools or environment used in Python development. Any meta-PEP is also considered a Process PEP.

--------------------------------------------------------

+Remove file & directories (Oct. 9, 2019, 1:03 a.m.)

os.remove() will remove a file.

os.rmdir() will remove an empty directory.

shutil.rmtree() will delete a directory and all its contents.
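A minimal runnable sketch tying the three together (the directory and file names here are made up; tempfile provides a safe scratch location):

```python
import os
import shutil
import tempfile

# Build a throwaway tree: base/sub/note.txt
base = tempfile.mkdtemp()
sub = os.path.join(base, 'sub')
os.mkdir(sub)
file_path = os.path.join(sub, 'note.txt')
with open(file_path, 'w') as f:
    f.write('hello')

os.remove(file_path)   # removes the file only
os.rmdir(sub)          # succeeds because 'sub' is now empty
shutil.rmtree(base)    # would also have deleted a non-empty tree
print(os.path.exists(base))  # False
```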

+Get the file name from a path (Oct. 9, 2019, 1:02 a.m.)

avatar_name = os.path.basename(request.user.avatar.url)

+Converting Eastern Arabic numbers to Western (Oct. 9, 2019, 1 a.m.)

table = {
    1776: 48,  # 0
    1777: 49,  # 1
    1778: 50,  # 2
    1779: 51,  # 3
    1780: 52,  # 4
    1781: 53,  # 5
    1782: 54,  # 6
    1783: 55,  # 7
    1784: 56,  # 8
    1785: 57,  # 9
}

print('۱'.translate(table))
print('۸'.translate(table))
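The same lookup table can be built with str.maketrans rather than hard-coding the code points (a sketch; the source string below is the Extended Arabic-Indic digits ۰ through ۹ used above):

```python
# Each character in the first string maps to the character
# at the same position in the second string.
table = str.maketrans('۰۱۲۳۴۵۶۷۸۹', '0123456789')

print('۱۳۹۸'.translate(table))  # 1398
```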

+Image to String conversion (Oct. 9, 2019, 12:52 a.m.)

Convert Image to String:

import base64

with open("t.png", "rb") as imageFile:
    str = base64.b64encode(imageFile.read())

----------------------------------------------------------------

Convert String to Image (Python 2 only; str.decode('base64') no longer exists in Python 3 — see the Python 3 version below):

fh = open("imageToSave.png", "wb")
fh.write(str.decode('base64'))
fh.close()

----------------------------------------------------------------

For python3:
image_base64 = request.POST['image-data'].split('base64,', 1)
fh = open("/home/mohsen/imageToSave.png", "wb")
fh.write(base64.b64decode(image_base64[1]))
fh.close()

----------------------------------------------------------------

+Read/Load a JSON object from a file: (Oct. 9, 2019, 12:50 a.m.)

Save JSON to file:

with open('db.json', 'w') as f:
    json.dump(data, f)  # data is a dictionary-like object.

----------------------------------------------------------------

data = json.load(open('db.json'))

----------------------------------------------------------------

+Truncate a long string (Oct. 9, 2019, 12:47 a.m.)

data = data[:75]

----------------------------------------------------------------------

import textwrap

textwrap.shorten("Hello world!", width=12)

textwrap.shorten("Hello world", width=10, placeholder="...")

----------------------------------------------------------------------

+Print to file (Oct. 8, 2019, 11:14 p.m.)

with open("cities.txt", 'w') as city_file:
    for city in cities:
        print(city, file=city_file)

+Binary data (Oct. 8, 2019, 11:14 p.m.)

Binary:
A number system like decimal. Whereas decimal is based on ten and uses the digits zero to nine, binary is based on two and can therefore only use the digits zero and one.

---------------------------------------------------

with open('binary', 'bw') as bin_file:
    for i in range(17):
        bin_file.write(bytes([i]))

The last two lines can also be summarized as follows:
with open('binary', 'bw') as bin_file:
    bin_file.write(bytes(range(17)))


with open('binary', 'br') as binfile:
    for b in binfile:
        print(b)

---------------------------------------------------

x = 0x20
print(x) ==> 32

y = 0x0a
print(y) ==> 10

print(0b00101010) ==> 42 ==> prints binary

---------------------------------------------------

for i in range(17):
    print("{0:>2} in binary is {0:>08b}".format(i))

for i in range(17):
    print("{0:>2} in hex is {0:>02x}".format(i))

---------------------------------------------------

+Shelve (Oct. 8, 2019, 11:13 p.m.)

The shelve module provides a shelf object: you can think of it as a dictionary, but it is actually stored in a file rather than in memory.

Like a dictionary, a shelf holds key: value pairs and the values can be anything. The keys, however, must be strings, unlike a dictionary, where keys can be any immutable object, such as a tuple.

All the methods we use with dictionaries can also be used on shelf objects, so it can be really useful to think of them as persistent dictionaries.

It's very easy to convert code using a dictionary to use a shelf instead.


import shelve

with shelve.open('file_name') as my_shelve:
    my_shelve['a'] = 1
    my_shelve['b'] = 2
    my_shelve['c'] = 3
    my_shelve.get('a')

    del my_shelve['a']

    for key in my_shelve:
        print(key)


You can use it without "with" too!

my_shelve = shelve.open('abc')
my_shelve['a'] = 1
.
.
my_shelve.close()

+Pickle (Oct. 8, 2019, 11:13 p.m.)

Pickling is a mechanism for serializing objects.
Serialization: the process that allows objects to be saved to a file so that they can later be restored from that file.

import pickle

with open('abcd.pickle', 'wb') as pickle_file:
    pickle.dump(a_tuple_or_any_data, pickle_file)

with open('abcd.pickle', 'rb') as pickle_file:
    data = pickle.load(pickle_file)
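A complete round trip under the same pattern, using a temporary scratch file so it runs anywhere (the file name and sample tuple are arbitrary):

```python
import os
import pickle
import tempfile

sample = ('AA', 100, 32.20)

# Pickle to a scratch file, then load it back.
path = os.path.join(tempfile.mkdtemp(), 'abcd.pickle')

with open(path, 'wb') as pickle_file:
    pickle.dump(sample, pickle_file)

with open(path, 'rb') as pickle_file:
    data = pickle.load(pickle_file)

print(data == sample)  # True
```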

+re (Oct. 8, 2019, 11:12 p.m.)

re.match('(http|https):', url)

url.startswith(('http:', 'https:'))

--------------------------------------------------------

Verify string only contains letters, numbers, and underscores:

re.match("^[A-Za-z0-9_]*$", username)
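Wrapped in a small helper to make the intent explicit (the function name here is my own):

```python
import re

def is_valid_username(username):
    # Only letters, digits, and underscores are allowed.
    return re.match(r'^[A-Za-z0-9_]*$', username) is not None

print(is_valid_username('mohsen_93'))  # True
print(is_valid_username('mohsen-93'))  # False
```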

--------------------------------------------------------

Find extensions using regex:
regex = re.compile(r'^.*\.(\w{3})$')
if regex.match('some_text'):
    print(True)

--------------------------------------------------------

+Sorting data (Oct. 8, 2019, 11 p.m.)

stocks = [
    # (name, shares, price)
    ('AA', 100, 32.20),
    ('IBM', 50, 91.10),
    ('CAT', 150, 83.44),
    ('GE', 200, 51.23)
]

# Sorts according to the first tuple field (the name)
print(sorted(stocks))
>>> [('AA', 100, 32.2), ('CAT', 150, 83.44), ('GE', 200, 51.23), ('IBM', 50, 91.1)]


# Sort by shares
print(sorted(stocks, key=lambda s: s[1]))
>>> [('IBM', 50, 91.1), ('AA', 100, 32.2), ('CAT', 150, 83.44), ('GE', 200, 51.23)]


# Sort by price
print(sorted(stocks, key=lambda s: s[2]))
>>> [('AA', 100, 32.2), ('GE', 200, 51.23), ('CAT', 150, 83.44), ('IBM', 50, 91.1)]


# Find the lowest price
print(min(stocks, key=lambda s: s[2]))
>>> ('AA', 100, 32.2)


# Find the maximum number of shares
print(max(stocks, key=lambda s: s[1]))
>>> ('GE', 200, 51.23)


# Find 3 lowest prices
import heapq
print(heapq.nsmallest(3, stocks, key=lambda s: s[2]))
>>> [('AA', 100, 32.2), ('GE', 200, 51.23), ('CAT', 150, 83.44)]

------------------------------------------------------------

import operator

d = {1:2, 7:8, 31:5, 30:5}
e = sorted(d.iteritems(), key=operator.itemgetter(1))

Pass itemgetter(0) to sort by key.

In Python3 there is no iteritems(), use items() instead!
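The same sort in Python 3, runnable as-is:

```python
import operator

d = {1: 2, 7: 8, 31: 5, 30: 5}

# Sort the (key, value) pairs by value (index 1 of each pair).
by_value = sorted(d.items(), key=operator.itemgetter(1))
print(by_value)  # [(1, 2), (31, 5), (30, 5), (7, 8)]

# Pass itemgetter(0) to sort by key instead.
by_key = sorted(d.items(), key=operator.itemgetter(0))
print(by_key)  # [(1, 2), (7, 8), (30, 5), (31, 5)]
```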

------------------------------------------------------------

+Manipulating network addresses (Oct. 8, 2019, 10:50 p.m.)

import ipaddress


net = ipaddress.IPv4Network('129.168.2.0/29')

net
>>> IPv4Network('129.168.2.0/29')

net.netmask
>>> IPv4Address('255.255.255.248')


for n in net:
    print(n)

>>>
129.168.2.0
129.168.2.1
129.168.2.2
129.168.2.3
129.168.2.4
129.168.2.5
129.168.2.6
129.168.2.7


a = ipaddress.IPv4Address('192.168.2.14')

a in net
>>> False


str(a)
>>> '192.168.2.14'


int(a)
>>> 3232236046

+Formatting text for Terminal (Oct. 8, 2019, 10:45 p.m.)

import textwrap

text = 'some long text ...'

print(textwrap.fill(text, 40))

+Get the Terminal width (Oct. 8, 2019, 10:41 p.m.)

import os

size = os.get_terminal_size()
print(size.columns)
print(size.lines)

+Performance Measurment (Oct. 8, 2019, 10:30 p.m.)

import time

start = time.perf_counter()
print('do some stuff...')
end = time.perf_counter()
print('Took {} seconds!'.format(end - start))
>>> Took 14.458690233001107 seconds!

----------------------------------------------------------

process_time is used to measure elapsed CPU time.

start = time.process_time()
end = time.process_time()
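A sketch contrasting the two clocks: sleeping consumes wall-clock time but almost no CPU time, so perf_counter advances while process_time barely moves.

```python
import time

start_wall = time.perf_counter()
start_cpu = time.process_time()

time.sleep(0.5)  # idle: wall-clock time passes, CPU time does not

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu

print(f'wall: {wall:.3f}s, cpu: {cpu:.3f}s')
# wall is ~0.5s; cpu stays near zero
```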

----------------------------------------------------------

There is also time.monotonic() which provides a monotonic timer where the reported values are guaranteed never to go backward, even if adjustments have been made to the system clock while the program is running.
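Minimal usage, same pattern as the other timers:

```python
import time

# time.monotonic() never goes backward, even if the system
# clock is adjusted while the program runs.
start = time.monotonic()
# ... do some work ...
elapsed = time.monotonic() - start
print(f'elapsed: {elapsed:.6f}s')
```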

----------------------------------------------------------

+Format (Oct. 8, 2019, 10:17 p.m.)

x = 1234567890
print(format(x, ','))
>>> 1,234,567,890

---------------------------------------------------

from datetime import datetime

d = datetime(2019, 5, 21)
format(d, '%a, %b %d %m, %Y')
>>> 'Tue, May 21 05, 2019'


'The time is {:%Y-%m-%d}'.format(d)
'The time is 2019-05-21'

---------------------------------------------------

'this is {0} test. {1:>2} {2}'.format('a', 23, 'c')

'Hello {}, How {}, you?'.format('mohsen', 'are')

for i in range(17):
    print("{0:>2} in binary is {0:>08b}".format(i))

for i in range(17):
    print("{0:>2} in hex is {0:>02x}".format(i))
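The same tables can be written with f-strings (Python 3.6+); the format spec after the colon is identical:

```python
# Binary and hex tables with f-strings; the spec after ':'
# works exactly as in str.format().
for i in range(4):
    print(f"{i:>2} in binary is {i:>08b}")

for i in range(4):
    print(f"{i:>2} in hex is {i:>02x}")
```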

---------------------------------------------------

+Sets (Oct. 8, 2019, 6:19 p.m.)

>>> x = set(['foo', 'bar', 'baz', 'foo', 'qux'])
>>> x
{'qux', 'foo', 'bar', 'baz'}

>>> x = set(('foo', 'bar', 'baz', 'foo', 'qux'))
>>> x
{'qux', 'foo', 'bar', 'baz'}


To create an empty set you must use set(), as {} creates an empty dictionary.

They are unordered, which means that they can't be indexed.

They cannot contain duplicate elements.

Due to the way they're stored, it's faster to check whether an item is part of a set, rather than part of a list.

Instead of using append to add to a set, use add.

The method remove removes a specific element from a set; pop removes an arbitrary element.
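A rough sketch of the membership-speed claim: checking `in` on a list is an O(n) scan, while a set does an O(1) hash lookup. Timings vary by machine, but with a worst-case element the gap is large:

```python
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Worst case for the list: the element is at the very end.
t_list = timeit.timeit(lambda: 99_999 in data_list, number=100)
t_set = timeit.timeit(lambda: 99_999 in data_set, number=100)

print(f'list: {t_list:.4f}s  set: {t_set:.4f}s')
```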

Sets can be combined using mathematical operations.
The union operator | combines two sets to form a new one containing items in either.
The intersection operator & gets items only in both.
The difference operator - gets items in the first set but not in the second.
The symmetric difference operator ^ gets items in either set, but not both.


When to use a dictionary:
- When you need a logical association between a key: value pair.
- When you need a fast lookup for your data, based on a custom key.
- When your data is being constantly modified. Remember, dictionaries are mutable.

When to use the other types:
- Use lists if you have a collection of data that does not need random access. Try to choose lists when you need a simple, iterable collection that is modified frequently.
- Use a set if you need uniqueness for the elements.
- Use tuples when your data cannot change.


x1 = {'foo', 'bar', 'baz'}
x2 = {'baz', 'qux', 'quux'}
>>> x1.union(x2)
{'baz', 'quux', 'qux', 'bar', 'foo'}

>>> x1 | x2
{'baz', 'quux', 'qux', 'bar', 'foo'}



>>> x1.intersection(x2)
{'baz'}

>>> x1 & x2
{'baz'}



>>> x1.difference(x2)
{'foo', 'bar'}

>>> x1 - x2
{'foo', 'bar'}



x1.symmetric_difference(x2) and x1 ^ x2 return the set of all elements in either x1 or x2, but not both:
>>> x1.symmetric_difference(x2)
{'foo', 'qux', 'quux', 'bar'}

>>> x1 ^ x2
{'foo', 'qux', 'quux', 'bar'}



x1.isdisjoint(x2) returns True if x1 and x2 have no elements in common:
>>> x1.isdisjoint(x2)
False



>>> x1.issubset({'foo', 'bar', 'baz', 'qux', 'quux'})
True

A set is considered to be a subset of itself:

>>> x = {1, 2, 3, 4, 5}
>>> x.issubset(x)
True
>>> x <= x
True




x1 < x2 returns True if x1 is a proper subset of x2:
>>> x1 = {'foo', 'bar'}
>>> x2 = {'foo', 'bar', 'baz'}
>>> x1 < x2
True

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'bar', 'baz'}
>>> x1 < x2
False



While a set is considered a subset of itself, it is not a proper subset of itself:
>>> x = {1, 2, 3, 4, 5}
>>> x <= x
True
>>> x < x
False



x1.issuperset(x2) and x1 >= x2 return True if x1 is a superset of x2:
>>> x1 = {'foo', 'bar', 'baz'}
>>> x1.issuperset({'foo', 'bar'})
True

>>> x2 = {'baz', 'qux', 'quux'}
>>> x1 >= x2
False


You have already seen that a set is considered a subset of itself. A set is also considered a superset of itself:
>>> x = {1, 2, 3, 4, 5}
>>> x.issuperset(x)
True
>>> x >= x
True



x1 > x2 returns True if x1 is a proper superset of x2:
>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'bar'}
>>> x1 > x2
True

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'bar', 'baz'}
>>> x1 > x2
False

A set is not a proper superset of itself:

>>> x = {1, 2, 3, 4, 5}
>>> x > x
False


>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'baz', 'qux'}

>>> x1 |= x2
>>> x1
{'qux', 'foo', 'bar', 'baz'}

>>> x1.update(['corge', 'garply'])
>>> x1
{'qux', 'corge', 'garply', 'foo', 'bar', 'baz'}




>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'baz', 'qux'}

>>> x1 &= x2
>>> x1
{'foo', 'baz'}

>>> x1.intersection_update(['baz', 'qux'])
>>> x1
{'baz'}



>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'baz', 'qux'}

>>> x1 -= x2
>>> x1
{'bar'}

>>> x1.difference_update(['foo', 'bar', 'qux'])
>>> x1
set()




>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'baz', 'qux'}
>>>
>>> x1 ^= x2
>>> x1
{'bar', 'qux'}
>>>
>>> x1.symmetric_difference_update(['qux', 'corge'])
>>> x1
{'bar', 'corge'}


